Will 2023 be the year of dynamite disinfo deepfakes, cooked up by rogue states?
And if so, what are we gonna do about it?
As the technology improves, foreign adversaries are expected to use AI algorithms to create increasingly realistic deepfakes and sow disinformation as part of military and intelligence operations.
Deepfakes are a class of content, generated by machine learning models, in which someone's face is realistically pasted onto another person's body. They can take the form of images or videos, and are designed to make people believe someone has said or done something they haven't. The technology is often used to create fake pornographic videos of female celebrities.
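For the technically curious, the classic face-swap approach can be sketched in a few dozen lines: a shared encoder is trained alongside one decoder per identity, and the swap happens by decoding one person's encoded expression with the other person's decoder. The PyTorch sketch below is a bare-bones illustration of that scheme; the architecture, layer sizes, and tensors are all hypothetical and bear no relation to any system named in this story.

```python
# Illustrative sketch of the shared-encoder / per-identity-decoder
# autoencoder behind classic face-swap deepfakes. Everything here
# (sizes, data) is hypothetical, for demonstration only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Compresses a 64x64 RGB face crop into a shared latent code.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # One decoder is trained per identity; it learns to paint that
    # person's face back from the shared latent code.
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training (sketched): reconstruct each person's faces through the
# shared encoder and that person's own decoder.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real face crops
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The swap: encode person A's expression, decode with B's decoder,
# yielding B's face performing A's expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```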
As the technology advances, however, synthetic media has also been used to spread disinformation and fuel political conflicts. A video of Ukrainian President Volodymyr Zelensky urging soldiers to lay down their weapons and surrender, for example, surfaced shortly after Russia invaded the country last year.
In a video posted on Facebook, Zelensky denied saying any such thing, and social media companies removed the fake footage in an attempt to stop the false information from spreading.
But efforts by enemy states to create deepfakes will only increase, according to AI and foreign policy researchers from Northwestern University and the Brookings Institution in America.
A team of computer scientists from Northwestern University previously developed the Terrorism Reduction with Artificial Intelligence Deepfakes (TREAD) algorithm, and used it to generate a demonstration counterfeit video featuring the dead ISIS terrorist Mohammed al Adnani.
"The ease with which deepfakes can be developed for specific individuals and targets, as well as their rapid movement — most recently through a form of AI known as stable diffusion — point toward a world in which all states and nonstate actors will have the capacity to deploy deepfakes in their security and intelligence operations," the report's authors said. "Security officials and policymakers will need to prepare accordingly."
Stable diffusion models currently power text-to-image generators, which produce fake images from a user's written description. They are now being adapted to forge videos too, and are producing increasingly realistic and convincing-looking content. Foreign adversaries will no doubt use this technology to mount disinformation campaigns, spreading fake news to sow confusion, circulate propaganda, and undermine trust online, according to the report.
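To give a sense of how low the barrier to entry has become, here is a minimal sketch using Hugging Face's open source diffusers library to generate an image from a text prompt. The checkpoint, prompt, seed, and filename are illustrative choices on our part, not anything cited by the researchers.

```python
# Minimal text-to-image sketch with Hugging Face's diffusers library.
# The checkpoint below is one publicly available Stable Diffusion model;
# the prompt, seed, and output path are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # runs on CPU too, just far more slowly

image = pipe(
    "a photorealistic portrait of a person who does not exist",
    num_inference_steps=30,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
image.save("synthetic_face.png")
```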
- Intel says it can sort the living human beings from the deepfakes in real time
- China reveals draft laws that heavily restrict deepfakes
- Facebook, academics think they've cracked spotting deepfakes by spotting how they're generated
- US Senate approves deepfake bill to defend against manipulated media
The researchers urged governments around the world to implement policies regulating the use of deepfakes. "In the long run, we need a global agreement on the use of deepfakes by defense and intelligence agencies," V.S. Subrahmanian, co-author of the report and a professor of computer science at Northwestern University, told The Register.
"Getting such an agreement will be hard, especially from veto-wielding nation states. Even if such an agreement is reached, some countries will likely break it. Such an agreement therefore needs to include a sanctions mechanism to deter and punish violators."
Developing technologies capable of detecting deepfakes won't be enough to tackle disinformation. "The result will be a cat-and-mouse game similar to that seen with malware: When cybersecurity firms discover a new kind of malware and develop signatures to detect it, malware developers make 'tweaks' to evade the detector," the report said.
"The detect-evade-detect-evade cycle plays out over time…Eventually, we may reach an endpoint where detection becomes infeasible or too computationally intensive to carry out quickly and at scale." ®