Facebook Creates New Deepfake Detection System
While we might be getting accustomed to seeing deepfakes used in relatively harmless situations, from a cheerleader’s mom allegedly trying to give her daughter an edge over her peers to a viral Tom Cruise impersonator, they can also be used to create far more malicious content, such as plastering someone’s face onto a pornographic scene or sabotaging a politician’s career.
Deepfakes are videos or photos in which someone’s face has been digitally altered using AI, and they can look all too real. They can be so pernicious that the state of California, for instance, banned their use in politics and pornography last year.
We can easily be tricked into believing what we see, and the technology behind deepfakes only keeps improving.
To help keep such altered footage from circulating, researchers at Facebook and Michigan State University announced on Wednesday, June 16, that they have created a novel method of detecting deepfakes and identifying which generative model was used to create them.
The team hopes that its method will provide researchers and practitioners with “tools to better investigate incidents of coordinated disinformation using deepfakes as well as open new directions for future research.”
Deepfake detection systems already exist, but because they are usually trained to detect specific generative models, as soon as a deepfake produced by a model the system was not trained on appears, the system can’t figure out where it came from.
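To see why this closed-set setup breaks down, consider a minimal PyTorch sketch of an attribution classifier whose output is restricted to the generative models it was trained on. The model names, architecture, and sizes below are illustrative assumptions, not any production system.

```python
# Minimal sketch: closed-set attribution. Everything here (model list,
# architecture, sizes) is an illustrative assumption.
import torch
import torch.nn as nn

KNOWN_MODELS = ["stylegan2", "progan", "glow"]  # hypothetical training classes

class ClosedSetAttributor(nn.Module):
    """Attributes an image to one of the K generative models seen in training."""
    def __init__(self, num_models: int = len(KNOWN_MODELS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_models)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# A deepfake from a model that is NOT in KNOWN_MODELS:
img = torch.randn(1, 3, 256, 256)
probs = torch.softmax(ClosedSetAttributor()(img), dim=1)
# The softmax still distributes all probability mass over the known models,
# so an unseen generator is inevitably misattributed: there is no
# "none of the above" output.
print(dict(zip(KNOWN_MODELS, probs.squeeze().tolist())))
```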
How the team’s system works
So the team decided to go one step further and extend image attribution beyond the limited set of models seen during training.
It mostly boils down to reverse engineering.
“Our reverse engineering method relies on uncovering the unique patterns behind the AI model used to generate a single deepfake image,” said Facebook AI’s Tal Hassner.
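The team trains a dedicated network to estimate that fingerprint. As a rough intuition only, the sketch below approximates a fingerprint as the high-frequency residual left behind after smoothing an image; treating a simple blur residual as the fingerprint is an assumption made for illustration, not the paper’s actual estimator.

```python
# Hedged sketch: approximate a per-image "fingerprint" as a high-frequency
# residual. The box blur standing in for a learned estimator is an assumption.
import torch
import torch.nn.functional as F

def estimate_fingerprint(image: torch.Tensor, kernel_size: int = 5) -> torch.Tensor:
    """Return image minus a blurred copy of itself as a crude fingerprint.

    image: (B, C, H, W) tensor with values in [0, 1].
    """
    pad = kernel_size // 2
    channels = image.shape[1]
    # Depthwise box blur as a stand-in for a learned denoiser.
    kernel = torch.ones(channels, 1, kernel_size, kernel_size) / kernel_size**2
    padded = F.pad(image, (pad, pad, pad, pad), mode="reflect")
    smoothed = F.conv2d(padded, kernel, groups=channels)
    # Generator-specific traces are assumed to live in this residual.
    return image - smoothed

fingerprint = estimate_fingerprint(torch.rand(1, 3, 256, 256))
print(fingerprint.shape)  # torch.Size([1, 3, 256, 256])
```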

“With model parsing, we can estimate properties of the generative models used to create each deepfake, and even associate multiple deepfakes to the model that possibly produced them. This provides information about each deepfake, even ones where no prior information existed,” said the team.
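In code, model parsing might look roughly like the sketch below: a small network that takes an estimated fingerprint and predicts properties of the generator that produced it, regressing continuous hyperparameters and classifying discrete design choices. The specific properties, sizes, and layers are illustrative assumptions.

```python
# Hedged sketch of model parsing: predict generator properties from a
# fingerprint. The chosen hyperparameters and architecture are assumptions.
import torch
import torch.nn as nn

class ModelParser(nn.Module):
    def __init__(self, num_continuous: int = 4, num_loss_types: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Continuous properties (e.g., layer count, parameter count) -> regression.
        self.hyperparams = nn.Linear(64, num_continuous)
        # Discrete design choices (e.g., training loss type) -> classification.
        self.loss_type = nn.Linear(64, num_loss_types)

    def forward(self, fingerprint: torch.Tensor):
        z = self.encoder(fingerprint)
        return self.hyperparams(z), self.loss_type(z)

hp_estimates, loss_logits = ModelParser()(torch.randn(1, 3, 256, 256))
```

If two deepfakes parse to similar estimates, they can plausibly be linked to the same source model, which is what allows associating multiple deepfakes with a generator even when that generator was never seen during training.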
The team trained its system on a dataset of 100,000 synthetic images generated by 100 different publicly available generative models. Its results substantially outperformed those of previous detection systems.
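As a rough illustration of how such an experiment might be organized, the sketch below lays out a hypothetical dataset of 100 generators with 1,000 images each and holds some generators out entirely, so the system can be evaluated on models it never saw during training. All paths and counts are assumptions for illustration.

```python
# Hedged sketch: a hypothetical 100-model x 1,000-image dataset with a
# held-out split of unseen generators. Paths and counts are assumptions.
import random

NUM_MODELS = 100
IMAGES_PER_MODEL = 1_000  # 100 models x 1,000 images = 100,000 images total

model_ids = list(range(NUM_MODELS))
random.shuffle(model_ids)
held_out = set(model_ids[:10])  # generators reserved for evaluation only
train_models = [m for m in model_ids if m not in held_out]

train_set = [(f"images/model_{m:03d}/{i:04d}.png", m)
             for m in train_models for i in range(IMAGES_PER_MODEL)]
eval_set = [(f"images/model_{m:03d}/{i:04d}.png", m)
            for m in held_out for i in range(IMAGES_PER_MODEL)]

print(len(train_set), len(eval_set))  # 90000 10000
```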
This type of deepfake detection model could come in especially handy for government agencies, police, and social media sites that are trying to keep such fake content from circulating on their platforms. No date was shared for when we can expect the model to go live, but it’s good to know researchers are working on such methods.