The US agency that regulates communications has made robocalls using AI-generated voices illegal.
The Federal Communications Commission (FCC) announced the decision on Thursday, stating that it will take effect immediately.
The FCC said the ruling gives states the authority to pursue the bad actors behind these calls.
The ruling comes amid a rise in robocalls impersonating celebrities and political candidates.
"Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters," said FCC chairperson Jessica Rosenworcel in a statement Thursday.
"We're putting the fraudsters behind these robocalls on notice."
The move follows an incident last month in which New Hampshire residents received robocalls imitating US President Joe Biden ahead of the state's presidential primary.
The calls urged voters not to take part in the primary. An estimated 5,000 to 25,000 of them were placed.
According to New Hampshire's attorney general, the calls were traced to two Texas-based companies, and a criminal investigation is under way.
The FCC said these calls can mislead consumers with misinformation by impersonating public figures and, in some cases, close family members.
The agency also said that while state attorneys general could already target the companies and individuals behind these calls for crimes such as fraud, the new ruling makes the use of AI-generated voices in the calls itself unlawful.
By doing so, it "expands the legal avenues through which state law enforcement agencies can hold these perpetrators accountable under the law."
In mid-January, the FCC received a letter signed by attorneys general from 26 states requesting that the commission act to limit the use of AI in marketing phone calls.
"Technology is advancing and expanding at an alarming rate, and we must ensure that these new developments are not used to exploit, deceive, or manipulate consumers," said Pennsylvania Attorney General Michelle Henry, who spearheaded the campaign.
The letter followed the FCC's November 2023 Notice of Inquiry, which sought public comment from across the country on the use of AI technology in consumer communications.
Deepfakes, which use artificial intelligence to create or manipulate video or audio of a person's face, body, or voice, have emerged as a major concern worldwide, particularly in countries such as the United States, the United Kingdom, and India, where major elections are under way or due soon.
Senior British politicians have been subjected to audio deepfakes, as have politicians from Slovakia and Argentina.
The UK's National Cyber Security Centre has warned about the risks AI-generated fakes pose to the country's upcoming elections.
This article was originally published on the BBC.