Meta will require political advertisers to disclose any use of AI or digital manipulation in Facebook and Instagram advertisements.
The company already has policies covering the use of deepfakes, but says the new rule goes a step further.
From January, ads relating to politics, elections, and social issues will have to disclose any digitally altered image or video.
A combination of human and AI fact-checkers will enforce the policy, which will apply worldwide.
In a statement, Meta said this would include digitally altering what someone said in a video, changing images or footage of real events, and depicting realistic-looking people who do not exist.
Users will be notified when an ad is flagged as digitally altered. Meta told the BBC the information would appear in the ad itself, but did not elaborate on how it would be presented.
Advertisers do not have to disclose minor edits, such as cropping or color correction, "unless such changes are consequential or material to the claim, assertion, or issue raised in the ad."
Meta already has rules for all users, not just advertisers, on the use of deepfakes in videos.
Deepfakes are removed if they "would likely mislead an average person into believing a video subject said words that they did not say."
Under the new rules, such ads must declare any digital alteration, whether made by a human or by AI, before they can run on Facebook or Instagram.
Threads, Meta's other social network, follows the same rules as Instagram.