
Recognizing deepfakes: Difficult, but not hopeless

False, true or half-true? Who knows for sure? (Source: pixabay/Elf-Moondance)
False claims, misleading reports, missing context: nothing new. Accompanied by AI-generated photos, videos and audio, however, such material takes on unexpected credibility. But at least there are a few insights that can help with debunking.

Many current studies on the problem of "deepfakes" are sobering. These fakes, created with artificial intelligence, can put advertising messages into the mouths of celebrities or place politicians in photos of events that never happened.

When people are shown this material, the results are consistently similar, as the journal Science has compiled: they might just as well flip a coin to decide what is real and what is not.

One reason for the poor results is said to be that the deepfake phenomenon is relatively new, and hardly anyone has to decide between fake and genuine content in everyday life.

Unfortunately, there is also little awareness of how simple the tools are with which, for example, a person's lips can be synchronized to a new statement.

No simple recipe

Nevertheless, there are some intriguing studies that offer a little hope. One found that the visual cortex reacts quite differently to AI-generated faces than to real photographs. Unfortunately, other processes in the brain override this signal, so test subjects are still no better at recognizing forgeries.

However, there are a few telltale signs, and it is possible to learn to recognize false images. Hands sometimes look completely wrong, with six or only four fingers. Sometimes legs appear that belong to no one. Quite often, shadows fall inconsistently: to the left on the shirt collar, but to the right from the sunglasses.

Of course, the AI is getting better, and such obvious weak points are becoming increasingly rare. But then another phenomenon comes into play. Trained on ever more material, faces become more uniform, more regular and more perfect. There are no blemishes or imperfections, and body shapes are idealized rather than natural. The result looks less beautiful and more uncanny.

Incidentally, the same applies to fake voices. The AI tends not to generate typical slips of the tongue, a brief stutter or poor recording quality.

The more complex the fake, the easier it is to detect. If, for example, only an audio recording has to be checked for authenticity, most people fail. If a video is added, ideally with subtitles, significantly more test subjects recognize the fake content.

And anyone who knows that images and videos can be manipulated and generated in this way, and that the technology has a few weaknesses, can spot deepfakes more reliably. Beyond that, what has always been true still applies: it never hurts to be critical and ask questions.

Mario Petzold, 2024-02-05 (Update: 2024-02-05)