Why AI-generated images are getting better by looking worse

AI image generators have become significantly more convincing by adopting an unexpected strategy: making their output look worse. Instead of creating glossy, overly perfect images, the latest models now replicate the imperfections typical of smartphone cameras.

Allison Johnson writes for The Verge that early AI-generated images were easy to spot thanks to telltale flaws like extra fingers and rubbery limbs. Modern systems have largely eliminated those giveaways, only to run into a new problem: their output looked too polished and artificial to pass for a photograph.

Google’s Imagen 3 model, released in its Gemini app, represents a shift toward realism. The system mimics characteristics common in phone photography, including aggressive sharpening, boosted shadows, and the flat lighting typical of computational photography. “Google might have sidestepped around the uncanny valley,” says Ben Sandofsky, cofounder of camera app Halide.

Other companies are following suit. Adobe’s Firefly offers a “Visual Intensity” control to reduce artificial gloss, while Meta includes a “Stylization” slider for adjusting realism. Video generators like OpenAI’s Sora 2 have even mimicked grainy security camera footage.

The trend raises concerns about distinguishing real from fake imagery. The C2PA’s Content Credentials standard offers a potential solution through cryptographic signatures. Google’s Pixel 10 phones now label all images with their origin, whether AI-generated or camera-captured. However, widespread adoption remains limited. Until most devices and platforms implement such standards, identifying authentic imagery becomes increasingly difficult as AI systems master the art of imperfection.
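Content Credentials work by embedding a cryptographically signed provenance manifest in the image file itself; in JPEGs, the C2PA specification carries that manifest in JUMBF boxes inside APP11 marker segments. As a rough illustration only (this is not a verifier — real validation means parsing the manifest and checking its signature chain with a proper C2PA library), a scan for APP11 segments might look like this:

```python
def has_app11_segment(jpeg_bytes: bytes) -> bool:
    """Scan JPEG marker segments for APP11 (0xFFEB), the segment type
    C2PA uses to embed its JUMBF-boxed manifest.

    Presence of APP11 is only a hint that a manifest may exist;
    it says nothing about whether the signature is valid.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":          # must start with SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:              # not a marker: malformed stream
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                     # SOS: entropy-coded data follows
            break
        if marker == 0xEB:                     # APP11 found
            return True
        # segment length field counts itself plus the payload
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        i += 2 + length
    return False
```

Even a check like this only tells you a claim of provenance is present; stripping the segment removes the claim entirely, which is why the scheme depends on broad adoption by capture devices and platforms.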
