In June of 2023, the FBI released a public service announcement warning that generative AI is being used for explicit content creation, sextortion, and harassment.
Deepfakes — manipulated media designed to show someone saying or doing something that is fabricated — are on the rise, and technological advances make them harder to detect. Add the misplaced ingenuity of fraudsters, revenge seekers, or misguided creators who have not weighed the impact of their deepfakes, and you have the recipe for numerous disasters.
We've seen deepfake parodies of Tom Cruise and the two-and-a-half-minute video where impressions by comedian Jim Meskimen were used to deepfake 20 celebrities. Deepfakes even appeared on Season 17 of "America's Got Talent."
In June of 2022, Tom Graham and Chris Umé, creators of the generative AI content-creation company Metaphysic, partnered with singer Daniel Emmet to deepfake a singing Simon Cowell. Metaphysic returned to the AGT season finale with hyper-realistic AI projections of Cowell, Howie Mandel, and Terry Crews in a deepfake operatic performance.
Software that can do all that and more can be found with a simple internet search and used by novices to create deepfakes. The subjects aren't only actors but also politicians and private citizens, with content created, in most cases, without their knowledge or consent.
In June of 2023, quantitative developer Tom Antony of Kerala, India, released a YouTube video highlighting the hazards of deepfake technology. In it, a penitent Antony cautions viewers about deepfakes, addresses responsibility and ethics, and pledges to stop making deepfakes without a person's permission.
The video came after Antony received comments of both acclaim and apprehension on his YouTube and Instagram accounts over his deepfakes featuring Malayalam actors as characters from "The Godfather."
Deepfakes have circulated on the internet for many years as political satire. However, politicians and awareness organizations are now using deepfakes to bring attention to their potential perils.
In 2019, research organization Future Advocacy partnered with U.K. artist Bill Posters to create a deepfake campaign video showing Boris Johnson and opponent Jeremy Corbyn endorsing each other for prime minister of Great Britain. Their goal? To demonstrate the potential of deepfakes to spread misinformation.
In the U.S., congressional candidate Phil Ehr deepfaked Republican Matt Gaetz to bring awareness to Russian disinformation campaigns intended to influence the 2020 elections.
In March 2022, deepfakes impacting the security of nation-states made headlines. Nearly a month after Russia invaded Ukraine, a deepfake surfaced of Ukrainian President Volodymyr Zelenskyy urging Ukrainian soldiers to surrender to the Russian invasion. The manipulated video quickly spread online but was removed by social media companies for violating misinformation and manipulated media policies.
In early June 2023, a deepfake of Russian President Vladimir Putin declaring martial law reportedly aired on Russian radio and television; in Venezuela, AI-generated videos are reportedly disseminating political propaganda. The implications of deepfakes for national security are clear, but what happens when they transcend politics or parody?
A 2019 report from technology research firm DeepTrace claimed that 99% of pornographic deepfakes featured female celebrities. In a 2019 interview with the BBC, actress Bella Thorne discussed her fight against illicit deepfakes, expressing worries over violations against noncelebrities.
In January 2023, a Twitch streamer made news after reportedly being caught viewing a website known for making AI-generated pornography of fellow female gamers.
In its June 2023 PSA, the FBI noted that perpetrators manipulate content sourced from social media, the internet, or images requested from victims — including minors — into sexually themed deepfakes. While acknowledging the "significant challenges" in removing deepfakes from the web, the FBI added that sextortion and harassment of victims are typical.
The White House formed the Task Force to Address Online Harassment and Abuse in late 2022. Several states, including California, Illinois, New York, and Virginia, have passed or introduced legislation outlawing deepfakes in pornography and politics.
New York's bill would make sharing such deepfakes a crime, and an anti-deepfake bill is currently pending before the governor of Illinois. Activists recommend expanding laws covering libel, defamation, identity fraud, or impersonating a government official to include deepfakes.
In the U.K., an amendment to the Online Safety Bill criminalizes creating, viewing, or sending deepfake pornography, with offenses potentially punishable by up to two years in prison. The new law makes charges and convictions easier by removing the requirement to prove a perpetrator's intent to cause distress.
Offenders could also be placed on the sex offender register. Additionally, the U.K. government is crafting legislation to make threatening to share intimate images illegal.
In a world where seeing is no longer believing, knowledge is the precursor to forming an offensive, but having legal ground to stand on provides the ultimate defense. I encourage lawmakers in the U.S. and other nations to work expeditiously to establish laws providing a clear route to justice and vindication for victims of illicit deepfakes.
This technology will only get more sophisticated. Time is of the essence.
V. Venesulia Carr is a former United States Marine, CEO of Vicar Group, LLC, and host of "Down to Business with V.," a television show focused on cyberawareness and cybersafety. She is a speaker, consultant, and news commentator providing insight on technology, cybersecurity, fraud mitigation, national security, and military affairs.