- Adding grain to AI-generated images makes them harder to identify as fake, the New York Times reports.
- The likelihood of detection drops from 99% to 3.3% when grain is added to an image.
- The finding comes as users in the US and abroad begin to use AI images to influence election campaigns.
From falsified campaign ads to stolen artwork, AI-generated images have been responsible for a wave of disinformation online in recent months.
Now, the New York Times reports that AI detection software — one of the frontline defenses against the spread of AI-generated disinformation — can be easily fooled by simply adding grain to AI-generated images.
The Times’ analysis shows that when an editor adds grain — that is, texture — to an AI-generated photo, the likelihood of software identifying the image as AI-generated drops from 99% to just 3.3%. Even Hive, which had one of the best success rates in the Times’ report, could no longer correctly identify an AI-generated photo after editors pixelated it.
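The edit the Times describes — adding grain — is technically trivial. As an illustration only (the article does not specify how the editors generated the noise), a minimal sketch in pure Python that overlays Gaussian noise on pixel values might look like this:

```python
import random

def add_grain(pixels, strength=25.0, seed=0):
    """Add Gaussian noise ("grain") to a flat list of 0-255 pixel values.

    `strength` is the standard deviation of the noise; the value 25.0
    is an illustrative choice, not a figure from the Times' analysis.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    # Perturb each pixel, then clamp back into the valid 0-255 range
    return [min(255, max(0, round(p + rng.gauss(0, strength)))) for p in pixels]

# Example: grain a tiny solid-gray "image", represented as a flat pixel list
flat = [128] * 16
grained = add_grain(flat)
```

The same idea applies per-channel to a real RGB image; a subtle perturbation like this can leave the picture looking unchanged to a human while shifting the statistical texture that detection models rely on.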
As a result, experts warned that detection software should not be the only line of defense for companies trying to combat misinformation and prevent the distribution of these images.
“Every time somebody builds a better generator, people build better discriminators, and then people use the better discriminator to build a better generator,” Cynthia Rudin, a computer science and engineering professor at Duke University, told the Times.
The Times’ analysis comes at a time when users are increasingly deploying AI-generated misinformation online to influence political campaigns, Insider reported. Ron DeSantis’ presidential campaign, for instance, distributed fake images of Donald Trump and Anthony Fauci earlier this month.