Detecting deepfakes

Scientists use machine learning to expose deceptive, sometimes dangerous, videos.

By Octavio Ramos | December 1, 2020

Deepfakes use artificial intelligence to create convincing images, audio, and video hoaxes. Los Alamos National Laboratory

Since the 1950s, filmmakers have used computer-generated imagery (CGI) to produce breathtaking special effects for blockbuster films. Over time, CGI has become more sophisticated and easier to produce, creating fantastic creatures like the dragon in The Hobbit trilogy and crafting realistic models of actual human beings.

Today, what used to take months of intense labor, multiple computing systems, and millions of dollars to produce can now be done on a home computer in a matter of hours. Thanks to advances in artificial intelligence technology, anyone can create startling videos by using sophisticated but surprisingly accessible and cheap software programs. These programs have led to a phenomenon known as a “deepfake.”

“A deepfake is a manipulated video recording, either doctored footage or completely fabricated performances,” explains Juston Moore of the Advanced Research in Cyber Systems group at Los Alamos National Laboratory.

The most common type of deepfake is a video portrait. “A source actor is filmed speaking, and then special software transfers the source actor’s facial mannerisms—including head position and rotation, eye gaze, and lip movement—onto footage of the target (say, Barack Obama or Donald Trump),” Moore explains. New audio is provided by an actor capable of mimicking voices. The result is a video of the target saying something they never actually said.
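
The transfer can be pictured as parameter swapping. The sketch below illustrates that idea in Python, assuming both videos have already been fit with a parametric face model; the function names, parameter names, and array shapes are illustrative, not taken from any specific deepfake tool, and a real system would pass the combined parameters to a neural renderer.

```python
# Minimal sketch of the parameter-transfer idea behind video-portrait
# deepfakes. All names and shapes here are hypothetical stand-ins for a
# face model's outputs, not any published tool's API.
import numpy as np

def transfer_mannerisms(target_identity, driving_frames):
    """Combine the target's identity parameters with per-frame mannerism
    parameters (head pose, gaze, expression) taken from the driving
    footage, yielding parameters for a synthesized video of the target."""
    synthesized = []
    for frame in driving_frames:
        synthesized.append({
            "identity":   target_identity,      # who appears on screen
            "head_pose":  frame["head_pose"],   # head position and rotation
            "gaze":       frame["gaze"],        # eye direction
            "expression": frame["expression"],  # lip movement, brows, etc.
        })
    return synthesized  # a renderer would turn these into video frames

# Illustrative driving footage: two frames of mannerism parameters.
driving = [
    {"head_pose": np.zeros(6), "gaze": np.zeros(2), "expression": np.zeros(64)},
    {"head_pose": 0.1 * np.ones(6), "gaze": np.zeros(2), "expression": 0.2 * np.ones(64)},
]
fake_params = transfer_mannerisms(np.random.randn(80), driving)
print(len(fake_params), "frames of parameters ready for rendering")
```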

“The sophistication of this technology continues to evolve quickly,” Moore says. “It’s getting to the point that we will no longer be able to trust our own eyes.”

Deepfake technology has been used to create amusing videos, such as one in which Gene Kelly’s head is replaced by that of Nicolas Cage for a “Singin’ in the Rain” dance sequence. But deepfakes can also be insidious, posing a threat to national security.

Imagine a convincing deepfake of a world leader declaring war or a well-liked actress making a terrorist threat. To demonstrate quickly that such videos are frauds, a team of Los Alamos researchers is exploring several machine-learning methods that identify and thus counter deepfakes.

Garrett Kenyon, a member of Moore’s team, is working on an approach inspired by models of the brain’s visual cortex. In other words, Kenyon’s models recognize images much as the brain does.

“Our detection technology consists of cortically inspired algorithms,” Kenyon explains. “Think of these cortical representations as pieces of a jigsaw puzzle. Our algorithms are so powerful that they can reconstruct the same jigsaw puzzle—the video portrait—in an infinite number of ways.”

The team discovered that the jigsaw pieces used to reconstruct real video portraits differ from those used to reconstruct deepfakes. These disparities enable the software under development to distinguish a real video portrait from a deepfake.
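
To make the reconstruction idea concrete, the sketch below uses scikit-learn’s generic dictionary learning as a stand-in for the team’s cortically inspired algorithms, which are not published here: a dictionary of “jigsaw pieces” is learned from real footage only, and candidate image patches are judged by how well those pieces reconstruct them. The data, sizes, and the real-versus-fake gap are purely illustrative.

```python
# Minimal sketch of reconstruction-based detection with sparse coding.
# The patches below are random stand-ins for real and fake video frames.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Illustrative data: 8x8 grayscale patches flattened to length-64 vectors.
real_patches = rng.normal(size=(500, 64))        # stand-in for real frames
fake_patches = 1.5 * rng.normal(size=(100, 64))  # stand-in for fakes

# Learn the "jigsaw pieces" (dictionary atoms) from real footage only.
dico = MiniBatchDictionaryLearning(
    n_components=128,             # overcomplete: more atoms than pixels
    transform_algorithm="omp",    # sparse coding at transform time
    transform_n_nonzero_coefs=5,  # use only a few pieces per patch
    random_state=0,
)
dico.fit(real_patches)

def reconstruction_error(patches):
    """Sparse-code patches with the learned atoms, rebuild them, and
    return each patch's residual norm: how well the pieces fit."""
    codes = dico.transform(patches)
    rebuilt = codes @ dico.components_
    return np.linalg.norm(patches - rebuilt, axis=1)

# Patches drawn from the same footage the dictionary was trained on
# should fit the learned pieces better (lower residual) than fakes.
print("real:", reconstruction_error(real_patches).mean())
print("fake:", reconstruction_error(fake_patches).mean())
```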

Kenyon notes that better, more realistic deepfakes are under constant development. One new target is full-body manipulation: videos that show, for example, couch potatoes playing professional sports, performing advanced martial arts, or executing 100 chin-ups with ease. Although full-body manipulation is still in its infancy, the age of the “digital puppeteer” is here.

“We are up against a rapidly moving target,” Kenyon says. “Thus, we are constantly working on speeding up and improving our algorithms. Advanced deepfakes may fool the brain, but we’re working to ensure that they don’t fool our algorithms.”