
AI Deepfake Scams: Hollywood Pushes Congress to Fight Back Against Digital Fraud

Artificial intelligence has revolutionized many industries, but it has also introduced serious risks, particularly with AI deepfake scams. From manipulating celebrity voices and images to deceiving unsuspecting individuals, these scams have reached alarming levels. Hollywood celebrities, including Steve Harvey, Scarlett Johansson, and Taylor Swift, are calling for stricter regulations to combat this growing digital menace.

Steve Harvey, best known for hosting Family Feud and his radio show, has found himself at the center of this controversy. While AI-generated memes of Harvey in humorous scenarios—such as portraying him as a rockstar—may seem harmless, more malicious actors have used his voice and likeness for fraudulent schemes. Scammers have created AI-generated videos mimicking Harvey’s voice, claiming that people can receive government-provided funds, leading many to fall victim to fraud.

AI Deepfake Scams Are Increasing at an Alarming Rate

The problem isn’t just limited to Steve Harvey. Celebrities like Joe Rogan, Taylor Swift, and Brad Pitt have all had their images and voices cloned using AI-generated deepfake technology. In one shocking case, a woman in France lost $850,000 after scammers convinced her that she was helping Brad Pitt with a financial transaction.

Scarlett Johansson has also spoken out about AI-generated content imitating her likeness without consent. In a viral video, an AI-generated version of Johansson was seen responding to Kanye West’s antisemitic remarks, despite the fact that she had no involvement in the matter. “It is terrifying that the US government is paralyzed when it comes to passing legislation that protects all of its citizens against the imminent dangers of AI,” Johansson stated in a February interview.

The impact of AI-generated scams extends beyond celebrities. Everyday individuals are being tricked by fake videos, audio messages, and deepfake images that appear highly convincing. By leveraging synthetic media, scammers are successfully deceiving people into transferring money, disclosing sensitive information, or authorizing fraudulent transactions.

Hollywood Calls for Stricter AI Deepfake Laws

As AI deepfake scams grow, Hollywood is now demanding action. Steve Harvey is actively pushing for legislative measures to penalize those responsible for fraudulent AI-generated content and to hold platforms accountable for hosting such content. Speaking at Tyler Perry Studios, Harvey emphasized, “My concern now is the people that it affects. I don’t want fans of mine or people who aren’t fans to be hurt by something.”
To address these concerns, lawmakers in Congress are considering multiple bills, including an updated version of the No Fakes Act. This bipartisan bill, backed by Senators Chris Coons, Amy Klobuchar, Marsha Blackburn, and Thom Tillis, seeks to hold creators and online platforms liable for unauthorized AI-generated content. The bill proposes fines of $5,000 per violation, which could accumulate to millions of dollars if viral deepfake videos or fraudulent AI-generated media are hosted on platforms.

Additionally, the Take It Down Act, another legislative proposal, aims to criminalize AI-generated deepfake pornography, which has become a growing issue, particularly for women targeted by non-consensual deepfake content. The bill has even gained support from First Lady Melania Trump.

Challenges in Regulating AI Deepfake Content

While Hollywood and lawmakers push for stronger laws, opposition exists. Several advocacy groups, including Public Knowledge, the Center for Democracy and Technology, and the Electronic Frontier Foundation, argue that the No Fakes Act introduces excessive regulations. These organizations fear that overly strict laws could harm freedom of speech, limit digital creativity, and lead to an influx of lawsuits.
In an open letter to Congress, these groups acknowledged the risks of AI-generated fraud but warned that the proposed legislation might set dangerous legal precedents. They emphasized the need for balanced policies that protect both individuals and digital content creators while preventing misuse of deepfake technology.

How AI Detection Technology is Fighting Back Against Deepfake Scams

As Congress debates new laws, technology firms are stepping up to combat AI deepfake scams. Companies like Vermillio AI have developed advanced tools to detect and remove fraudulent AI-generated content. Vermillio’s TraceID technology scans the internet to track manipulated media and automate take-down requests.

According to Vermillio CEO Dan Neely, deepfake content has exploded in recent years. “Back in 2018, there were roughly 19,000 pieces of deepfake content. Today, there are roughly a million created every minute,” Neely told Flash News.
Vermillio uses AI-powered “fingerprinting” techniques to differentiate between authentic and manipulated media. By analyzing millions of data points, the technology can locate instances where images or voices have been altered using generative AI models, the engines behind many popular content-creation tools.
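To make the fingerprinting idea concrete, here is a minimal, purely illustrative sketch using a classic “average hash”: an image is reduced to a compact bit string, and two pieces of media are compared by counting differing bits. The function names and toy 4x4 pixel grids below are hypothetical; Vermillio’s actual TraceID technology is far more sophisticated than this.

```python
def average_hash(pixels):
    """Fingerprint a 2D grid of grayscale values (0-255): each bit
    records whether a pixel is brighter than the grid's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(fp_a, fp_b):
    """Count differing bits; a small distance suggests the two pieces
    of media are near-duplicates (e.g., a lightly edited copy)."""
    return sum(a != b for a, b in zip(fp_a, fp_b))

# Toy 4x4 "images": an original and a slightly altered copy.
original = [[ 10,  20, 200, 210],
            [ 15,  25, 205, 215],
            [200, 210,  10,  20],
            [205, 215,  15,  25]]
altered  = [[ 12,  20, 200, 210],   # one pixel nudged slightly
            [ 15,  25, 205, 215],
            [200, 210,  10, 250],   # one pixel pushed past the mean
            [205, 215,  15,  25]]

fp_orig = average_hash(original)
fp_alt = average_hash(altered)
print(hamming_distance(fp_orig, fp_alt))  # small distance: likely a derivative
```

Production systems build on the same principle at scale: compute robust fingerprints of known authentic media, crawl the web for content whose fingerprints land suspiciously close, and flag those matches for review or takedown.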
However, while celebrities and high-profile figures can afford services like Vermillio, everyday users have limited resources to protect themselves against AI-powered fraud. This highlights the urgency for new laws and better cybersecurity measures to safeguard both public figures and ordinary people from AI-driven scams.

Why AI Deepfake Scams Pose a Serious Threat

The rise of AI-generated fraud presents numerous challenges:
  • Financial Scams: People are tricked into believing fake investment schemes or government grants using AI-generated voices of celebrities.

  • Identity Theft: Criminals can clone voices and images to bypass biometric security measures.

  • Misinformation & Fake News: AI-generated videos can manipulate public opinion and spread political disinformation.

  • Non-Consensual Content: Women and minors are increasingly targeted with deepfake pornography, leading to reputational damage and psychological distress.

The Need for Immediate Action Against AI Deepfake Scams

Steve Harvey and other celebrities are making it clear: action is needed now. “The sooner we do something, the better off we’ll all be,” Harvey said. “Because, I mean, why wait? How many people do we have to watch get hurt by this before somebody does something?”

As AI deepfake scams continue to evolve, it is crucial for governments, technology companies, and the public to work together to establish stronger regulations, improve AI fraud detection, and educate individuals about the dangers of synthetic media. Without immediate action, these scams will only become more sophisticated, putting millions of people at risk.

With Congress considering new laws, the battle against AI deepfake fraud is just beginning. Whether it’s through stronger legislation, better AI detection tools, or public awareness campaigns, the fight against AI deepfake scams is far from over.
