In a unanimous decision yesterday, the Federal Communications Commission issued a declaratory ruling announcing that using voices generated by artificial intelligence in robocalls is illegal under the Telephone Consumer Protection Act. While using AI to perpetrate a fraud or scam was already illegal, the declaratory ruling makes it illegal to use AI-generated voices in those calls, too.
The Why: The ruling was a swift response to a call made to individuals in New Hampshire last month that purported to be from President Joe Biden but was actually faked. More of these types of calls are expected as the 2024 election cycle heats up, which prompted the FCC's quick action on the topic.
The What: The ruling, which takes effect immediately, is also aimed at helping state Attorneys General in their efforts to thwart illegal robocalls by giving them additional "legal avenues," the FCC said.
- Unless companies have obtained prior express written permission from the called party, are calling for an emergency purpose, or are exempt, the use of artificial or AI-generated voices in robocalls is now illegal.
The Who: A bipartisan coalition of 26 state Attorneys General wrote to the FCC endorsing the ruling. The Attorney General of New Hampshire has announced an investigation into who may have made the robocalls impersonating the president.
The Last Word: “Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” said FCC Chair Jessica Rosenworcel. “State Attorneys General will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.”