If AI Voice Cloning Can’t Be Stopped, That Would Serve as Red Flag for Policymakers

Do you have what it takes to build defenses that can easily and reliably spot voice cloning generated using artificial intelligence tools? If so, the U.S. Federal Trade Commission wants to hear from you.
The agency last November announced a Voice Cloning Challenge designed “to encourage the development of multidisciplinary approaches – from products to policies to procedures – aimed at protecting consumers from AI-enabled voice cloning harms, such as fraud and the broader misuse of biometric data and creative content.”
The challenge promises to award $25,000 to the top entry, provided it meets three key requirements, plus $4,000 for second place and $2,000 to each of up to three honorable mentions.
The FTC said it hopes the Voice Cloning Challenge will “foster breakthrough ideas on preventing, monitoring and evaluating malicious voice cloning,” as AI tools grow ever better at generating convincing-sounding fakes.
The challenge is open for new entries until Jan. 12. Entrants must submit a one-page abstract and a detailed explanation of up to 10 pages, and can optionally also send a video showing how their submission would work.
Terms and conditions apply. Only individuals or small groups – those with fewer than 10 people – can win the cash prizes, although one large organization could win a recognition award that comes with no remuneration.
The FTC said all entries will be judged on the following three criteria, each tied to answering a specified question:
- Practicality: “How well might the idea work in practice and be administrable and feasible to execute?”
- Balance: “If implemented by upstream actors, how does the idea place liability and responsibility on companies and minimize burden on consumers?”
- Resilience: “How is the idea resilient to rapid technological change and evolving business practices?”
The FTC doesn’t just want consumer-level defenses that individuals can easily implement. Ideally, it wants defenses that work “upstream” – combating threats such as fraudsters attempting to extort victims and the illicit use of actors’ own voices – before such attacks can even reach consumers. Ensuring those defenses maintain users’ privacy is another goal.
While this might sound like a tall order, the challenge is also designed to test whether effective defenses against AI voice cloning might even exist.
“If viable ideas do not emerge, this will send a critical and early warning to policymakers that they should consider stricter limits on the use of this technology, given the challenge in preventing harmful development of applications in the marketplace,” the FTC said.
Fraudsters Challenge Security Defenses
The agency’s challenge highlights how fraudsters keep turning the latest tools and technology to their advantage.
One tactic increasingly adopted by criminals is virtual kidnapping, or “cyber kidnapping,” in which they pretend to have abducted an individual, as seen in a recent case involving a Chinese teenager in Utah. In some cases, experts say, criminals also use real-sounding audio of the supposedly abducted individual as proof, and sometimes hijack that person’s SIM card so they can’t be reached by family or co-workers, who are pressured to pay immediately – or else (see: Top Cyber Extortion Defenses for Battling Virtual Kidnappers).
Another growing concern is AI tools that don’t just generate convincing-sounding audio but also convincing-looking “deepfake” video to match. Last week, Singaporean Prime Minister Lee Hsien Loong warned that scammers have been using deepfake videos featuring his likeness to hawk cryptocurrency scams, and cautioned Singaporeans against claims of crypto giveaways or guaranteed crypto “returns on investments.”
Criminals are already using deepfake videos to try to bypass financial services firms’ know-your-customer identity verification checks, and the problem is only going to get worse, experts warn.
“As AI makes it easier and cheaper to impersonate someone’s likeness and identity markers – often found in a breach – it will become simpler for attackers to take over accounts and steal money, data, impact brands,” and more, said Rachel Tobac, CEO of SocialProof Security, in a post to X, formerly known as Twitter.