Deepfakes are being used for a variety of nefarious purposes, ranging from disinformation campaigns to inserting people into porn, and the doctored images are becoming more difficult to detect. A new AI tool identifies them in a surprisingly simple way: by looking at the light reflected in the eyes.
According to a press release, computer scientists at the University at Buffalo developed the deepfake-spotting algorithm, which analyzes the reflections in a portrait's eyes to determine its authenticity, an intriguing new approach to verifying real images and videos.
The system detects forgeries by analyzing the corneas, whose mirror-like surface produces reflective patterns when illuminated. The AI maps out a face, analyzes the light reflected in each eyeball, and produces a score that serves as a similarity metric: the lower the score, the more likely the face is a deepfake.
In a photograph of a real face, the reflections in the two eyes will be similar because both eyes are seeing the same thing. Deepfake images generated by GANs, on the other hand, often fail to capture this resemblance; instead, they exhibit inconsistencies such as different geometric shapes or mismatched reflection locations.
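To make that concrete, here is a minimal Python sketch of the highlight-comparison idea. It is not the researchers' implementation: it substitutes OpenCV's stock Haar eye detector for the paper's landmark-based cornea extraction, and the brightness cutoff (220) and crop size (64x64) are illustrative assumptions, not values from the study.

```python
# Minimal sketch of the corneal-highlight comparison idea, not the
# authors' implementation. Assumes OpenCV (pip install opencv-python)
# and a frontal, portrait-style photo.
import cv2
import numpy as np

def eye_reflection_score(image_path):
    """Return a similarity score for the specular highlights in the two
    eyes (1.0 = identical patterns, 0.0 = no overlap), or None if two
    eyes cannot be found. Lower scores suggest a possible deepfake."""
    img = cv2.imread(image_path)
    if img is None:
        raise ValueError(f"could not read {image_path}")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Locate eyes with OpenCV's stock Haar cascade (a stand-in for the
    # paper's face-landmark extraction).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None  # the method needs both eyes visible

    # Keep the two largest detections, ordered left to right.
    eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    eyes = sorted(eyes, key=lambda e: e[0])

    masks = []
    for x, y, w, h in eyes:
        crop = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
        # Specular highlights are the brightest pixels in the eye;
        # 220 is an illustrative cutoff, not a value from the paper.
        _, mask = cv2.threshold(crop, 220, 1, cv2.THRESH_BINARY)
        masks.append(mask.astype(bool))

    # Intersection-over-union of the two highlight maps: a real face
    # should score high, a GAN face with mismatched reflections low.
    union = np.logical_or(masks[0], masks[1]).sum()
    if union == 0:
        return 0.0  # no highlights found in either eye
    return float(np.logical_and(masks[0], masks[1]).sum()) / float(union)
```

A real portrait should score near 1.0, while a GAN-generated face with mismatched or misshapen highlights would score noticeably lower; in practice, the score would be thresholded to flag suspect images.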
Windows To The Soul
The algorithm detects the false imagery for a surprisingly simple reason: deepfake AIs are terrible at creating accurate eye reflections. "The cornea is almost like a perfect semisphere and is very reflective," Dr. Siwei Lyu, SUNY Empire Innovation Professor in the Department of Computer Science and Engineering and lead author of the study, said in the press release. "As a result, anything that comes into the eye with light emitted from those sources will leave an image on the cornea."
In experiments described in a paper accepted at the IEEE International Conference on Acoustics, Speech, and Signal Processing, to be held in June in Toronto, Canada, the tool proved 94 percent effective on portrait-like photos.
"Because they're seeing the same thing, the two eyes should have very similar reflective patterns," Lyu adds. "It's something we don't usually notice when we look at someone's face." Deepfake AI, on the other hand, is surprisingly bad at creating consistent reflections in both eyes.
Deepfake Dangers
The algorithm's development is part of Lyu's broader push to highlight the growing need for tools that detect deepfakes. His warnings and expertise on the subject landed him in front of Congress in 2019, where he testified on the dangers of deepfakes and how to combat them.
“Unfortunately, a large portion of these types of fake videos were created for pornographic purposes, which [caused] a lot of… psychological damage to the victims,” he stated in a press release. “There is also the potential political impact, with the fake video showing politicians saying or doing something they are not supposed to say or do. That’s not good.”
Not to mention, deepfakes can create uncanny, fake videos of mega-famous Scientologists.
The tool's most obvious limitation is that it depends on reflected light being visible in both eyes: the method fails if one eye is not visible in the image, and a forger could erase the telltale inconsistencies with manual post-processing. It has also only been shown to work on portrait images; if the person in the picture isn't looking at the camera, the system will almost certainly produce false positives.
The researchers intend to investigate these issues to improve their method's efficacy. In its current form it will not catch the most sophisticated deepfakes, but it will detect many of the cruder ones.