Tech Giants Ink Accord To Combat AI Election Deepfakes

The accord's signatories say they will target content that deceptively fakes or alters the appearance, voice, or actions of key figures in elections.

Most of the world's major tech companies, including Amazon, Google, and Microsoft, have committed to tackling what they describe as the deceptive use of artificial intelligence (AI) in elections. 

The twenty signatories have agreed to combat material designed to deceive voters, saying they will use technology to detect and counter such content. 

However, one industry expert says that the voluntary agreement will "do little to prevent harmful content from being posted."

On Friday, the Munich Security Conference unveiled the Tech Accord to Combat Deceptive Use of AI in the 2024 Elections. 

The problem has come into sharp focus because up to four billion people are expected to vote this year in countries including the United States, the United Kingdom, and India.

The deal includes pledges to develop technology to "mitigate risks" posed by deceptive AI-generated election content, and to be transparent with the public about the actions companies have taken. 

Other efforts include sharing best practices and educating the public on how to identify falsified information. 

Signatories include social media giants X (previously Twitter), Snap, Adobe, and Meta, which owns Facebook, Instagram, and WhatsApp. 

However, the agreement has certain flaws, according to computer scientist Dr. Deepak Padmanabhan of Queen's University Belfast, who co-wrote a paper on elections and AI. 

He told the BBC that it was encouraging to see corporations realize the vast spectrum of issues brought about by AI.

But he added that they needed to take more "proactive action" rather than waiting for content to be posted before attempting to remove it. 

That could mean "more realistic AI content, which may be more harmful, may stay on the platform for longer" than blatant fakes, which are easier to detect and remove, he said. 

Dr. Padmanabhan also stated that the accord's efficacy was weakened by its lack of clarity in identifying harmful content. 

He used the example of Imran Khan, an incarcerated Pakistani politician who used AI to deliver speeches while in prison. 

"Should this be taken down, too?" he questioned.

The signatories say they will target content that "deceptively fakes or alters the appearance, voice, or actions" of key figures in elections. 

The accord also covers audio, images, or video that give voters inaccurate information about when, where, and how to vote. 

"We have a responsibility to help ensure that these tools are not weaponized in elections," said Microsoft president Brad Smith. 

Lisa Monaco, the US deputy attorney general, told the BBC on Wednesday that artificial intelligence (AI) might "supercharge" disinformation during elections. 

Google and Meta have already set out policies for AI-generated images and video in political advertising, requiring advertisers to disclose the use of deepfakes or AI-manipulated material.

This article was originally published on the BBC.