The rapid adoption of artificial intelligence has major implications for privacy. A new study by two researchers shows that it is becoming increasingly difficult for people to distinguish between a face created by AI and a real one. Surprisingly, the study also found that participants rated the fake faces as more trustworthy than the real ones. The researchers are now calling for stronger safeguards to prevent "deepfakes" from infiltrating our lives, warning that AI-synthesized text, audio, images and video have already been used for fraud, propaganda and "revenge porn".
The researchers asked participants to distinguish faces generated by the state-of-the-art StyleGAN2 from real ones, and also asked them to rate how trustworthy each face appeared. The results were surprising: the synthetically generated faces were so photorealistic that participants could barely tell them apart from real faces, and participants rated the synthetic faces as more trustworthy.
In the first experiment, participants' accuracy was only 48%. In a second experiment, accuracy improved only slightly, to 59%, despite the training participants received after the first round. The researchers then ran a third experiment, with the same set of images, to measure trustworthiness: they found that the average trustworthiness rating for the fake faces was 7.7% higher than the average rating for the real faces.
AI-synthesized text, audio, images and video are being "weaponized" for non-consensual intimate imagery, financial fraud and disinformation campaigns, the researchers said in the study, published in Proceedings of the National Academy of Sciences (PNAS). "Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable – and more trustworthy – than real faces," they added.
The study authors – Sophie Nightingale of Lancaster University and Hany Farid of the University of California, Berkeley – also warned of a scenario in which people are unable to identify AI-generated images at all. "Perhaps most pernicious is the consequence that, in a digital world in which any image or video can be faked, the authenticity of any inconvenient or unwelcome recording can be called into question."
The researchers also proposed some safeguards against deepfakes, including the incorporation of robust watermarks into image- and video-synthesis networks.