AI detection tools for images and videos can identify some artificially generated content, but none are reliable enough to serve as definitive proof. Stuart A. Thompson reports for The New York Times that a series of more than 1,000 tests on over a dozen detection tools revealed significant gaps in accuracy.
The tools scan for hidden watermarks, composition errors, and other digital clues to determine whether content is real or generated by AI. Most performed well when identifying straightforward fakes, such as images created with simple text prompts. However, they struggled with more complex visuals and with images that blend real content with AI-generated elements.
Video detection remains particularly weak. Only a few tools can analyze video at all, and their results were inconsistent. Audio detection proved stronger, with tools from companies like Sensity and Resemble.ai correctly identifying fake voices even in heavily altered clips.
Both false negatives and false positives pose risks. Most detectors failed to catch subtle AI edits to real photographs, such as added smoke or altered backgrounds. Conversely, some tools mistakenly flagged real images as fake, a problem with serious consequences during breaking news events.
Mike Perkins, a professor at British University Vietnam, describes the situation as an “arms race.” As AI generators improve, detectors struggle to keep pace.