- What is deepfake?
- What are the investigative tools for anti-deepfake?
- Anti-deepfake approach #1: searching for the original work
- Anti-deepfake approach #2: Active forensics
What is Deepfake?
Deepfake is a contraction of “deep learning” and “fake”. It refers to fabricated multimedia content (images, video, sound): content that is not a recording of real physical scenes or events. This used to be called special effects or computer-generated imagery. But everything old is new again: the name was changed to give the practice a trendier appeal, while invoking the much-dreaded “Artificial Intelligence”.
Nothing new under the sun, then?
Actually, yes: two critical points.
1) The tools are democratized and are easily found on the Internet
Extremely sophisticated and precise algorithms, developed in research laboratories, are now available as open source. Moving David Beckham’s lips to synchronize them with a script, making Barack Obama say anything, or animating still images to revive Marilyn Monroe is now within reach of almost anyone.
Moreover, if that remains too complicated, one can easily hire deepfakers on the Internet who will do the job for about $20 per video.
2) The process is above all non-collaborative
Hollywood special effects have the immense advantage of being produced in a collaborative environment. Sensors are placed on the bodies of actors, who have been scanned to build the most faithful 3D model possible. In short, everything is done to ease the post-production task.
Obviously, Vladimir Putin never consented to the making of the deepfakes where he appears.
Experts can still discern imperfections in these deepfakes, but their realism is improving at high speed. Seeing is no longer believing: you can no longer trust what you see on the Internet. Journalists are petrified by the idea of unknowingly relaying deepfakes, which would ruin their credibility. Some predict an apocalypse in the media world.
What are the investigative tools for anti-deepfake?
Media forensics is the investigation of digital content: is it real or fake? Like Sherlock Holmes, sometimes all you need is a magnifying glass: why does Barack Obama never blink in this video? Why are the edges of his eyes and mouth blurred when the rest of his face is sharp? Even when the editing and special effects cannot be seen with the naked eye, digital imaging tools can identify invisible marks: specific ways of generating deepfakes leave characteristic patterns in the pixel values, or, conversely, fail to leave the traces that a digital camera sensor normally would.
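To make the idea of pixel-level traces concrete, here is a minimal sketch in Python/NumPy. Everything in it is an illustrative assumption (synthetic patches, a simple box-blur residual), not a real forensic tool: it merely shows how a high-pass residual can reveal that a synthesized region is statistically “too smooth” compared with a noisy camera capture.

```python
import numpy as np

def noise_residual(img, k=3):
    """High-pass residual: the image minus a k-by-k local mean.

    Camera sensors leave characteristic noise; many generators do not
    reproduce it, so residual statistics can differ between real and
    synthesized regions. This is only a toy illustration of that idea.
    """
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    for dy in range(k):                       # accumulate the k*k neighborhood
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k
    return img - blurred

rng = np.random.default_rng(0)
real = rng.normal(0.5, 0.05, (64, 64))        # noisy "camera" patch (assumed)
fake = np.full((64, 64), 0.5)                 # unnaturally smooth patch (assumed)
print(noise_residual(real).std() > noise_residual(fake).std())  # True
```

The comparison of residual energies is the crudest possible detector; real tools model sensor noise (e.g., PRNU fingerprints) far more carefully.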
Anti-anti-deepfake, how far will the response go?
It is an endless game of cat and mouse, cop and robber: if I know that your forensic tool detects the presence or absence of certain traces in my deepfakes, I will run them through my anti-anti-deepfake tool to remove or insert those traces artificially. And so on: expect long series of scientific articles in which one side blocks the advances of the other, which blocked the previous results, and so on. It will not be fascinating (the scientist’s opinion here).
Who of the cat or the mouse will win in the end?
That is the question! We do not know yet, and perhaps we never will.
FOCUS ON TWO ANTI-DEEPFAKE APPROACHES
Let us illustrate two particular anti-deepfake approaches that we like:
1) The search for the original work.
Some deepfakes, and especially photomontages, reuse and divert original content. The manipulations become apparent once the original content and the manipulated content are placed side by side. Hence the idea of finding the original content with a search engine such as Google Images.
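The matching step can be sketched with a toy perceptual hash, a hypothetical stand-in for what a reverse image search does internally (real systems use far more robust descriptors and huge indexes):

```python
import numpy as np

def average_hash(img, size=8):
    """Toy perceptual hash: downsample to size x size blocks, threshold at the mean.

    Near-duplicates (a likely original vs. a retouched copy) get similar
    hashes; the Hamming distance measures how close two hashes are.
    """
    h, w = img.shape
    small = img[:h - h % size, :w - w % size]
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return small > small.mean()               # 64-bit boolean fingerprint

def hamming(a, b):
    """Number of differing hash bits."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
original = rng.random((64, 64))               # assumed stand-in for the original
tampered = original.copy()
tampered[20:30, 20:30] = 1.0                  # a local splice
unrelated = rng.random((64, 64))

print(hamming(average_hash(original), average_hash(tampered)))   # small distance
print(hamming(average_hash(original), average_hash(unrelated))) # much larger
```

A search engine would compare the query hash against millions of indexed images and return the nearest ones as candidate originals.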
Yes, but how can I certify that the image I have found really is the original?
This is where invisible watermarking acts as a seal attesting to the originality of the content. Who creates it, and when? Trusted and accredited news agencies (Associated Press, Reuters, The New York Times, etc.) do, after thoroughly vetting the content. Others offer repositories of certified authentic images. Add the idea of a decentralized registry and another buzzword pops up: blockchain.
2) Active forensics
Reconstructing the creation chain of an image, from the passage of light through the camera lens to the final .jpg file, is complicated.
Especially since each manufacturer has its own recipe, and each camera model or processing software has its own settings.
Finally, artificial intelligence and its deep neural networks have recently replaced these processing chains with somewhat opaque processes.
It is challenging to know whether the presence or absence of certain traces is normal, especially when the EXIF metadata has been deleted. Unless you deliberately create those traces in “official” processing chains. This is the key idea of a scientific article to be presented next week at the IEEE International Conference on Computer Vision (ICCV 2019) by Pawel Korus and Nasir Memon of the Tandon School of Engineering, New York University. The article is currently buzzing on the web.
The authors propose to train two deep neural networks jointly: the first converts a raw sensor image (RAW format) into a good-quality photograph in such a way that traces of subsequent manipulation (JPEG compression, rescaling, etc.) remain detectable by the second network.
The first network facilitates the forensic task of the second. It is a good academic idea, which assumes that camera manufacturers will all implement this algorithm, and that deepfakers will not be able to train a neural network that destroys this built-in forensic evidence. The game of cat and mouse, cop and robber, continues…
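A crude hand-written analogue of this “active” idea, with fixed code standing in for the paper’s two trained networks (the embedded pattern and its exaggerated strength are assumptions made for the demo): the development step plants a key-dependent trace, and a detector checks whether post-processing has destroyed it.

```python
import numpy as np

def develop(raw, key=7, strength=10.0):
    """Toy 'active' imaging pipeline: while converting the raw capture
    into a viewable image, embed a key-dependent high-frequency pattern.
    (Strength is exaggerated here so the effect is easy to measure;
    the real method learns such traces end to end with a network.)"""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=raw.shape)
    return np.clip(raw + strength * pattern, 0, 255)

def trace_score(img, key=7):
    """Correlate the image with the expected pattern. Manipulations such
    as rescaling average the pattern away and lower the score."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=img.shape)
    return float(np.mean((img - img.mean()) * pattern))

raw = np.random.default_rng(0).uniform(60, 200, (64, 64))   # fake RAW capture
img = develop(raw)

# Simulate a manipulation: 2x downscale then upscale (block averaging).
resized = img.reshape(32, 2, 32, 2).mean(axis=(1, 3)).repeat(2, 0).repeat(2, 1)

print(trace_score(img), trace_score(resized))  # pristine image scores higher
```

The asymmetry is the point: the pipeline is designed so that *any* later processing degrades a verifiable trace, instead of hoping that manipulations leave accidental ones.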
“Neural Imaging Pipelines – the Scourge or Hope of Forensics?”, Pawel Korus and Nasir Memon, Tandon School of Engineering, New York University. Presented at ICCV 2019.
Photo: monsitj / iStock