Meta plans to develop technology that can detect and label images created by other companies' artificial intelligence (AI) tools.
It will be available across Facebook, Instagram, and Threads.
Meta already labels AI images created by its own tools. It says it expects the new technology, which is still in development, to build "momentum" across the sector to combat AI fraud.
However, an AI specialist told the BBC that such technologies are "easily evadable."
In a blog post by senior executive Sir Nick Clegg, Meta says it plans to expand its labeling of AI fakes "in the coming months."
In an interview with Reuters, he acknowledged that the technology was "not yet fully mature" but that the business hoped to "create a sense of momentum and incentive for the rest of the industry to follow."
However, Prof. Soheil Feizi, head of the Reliable AI Lab at the University of Maryland, said such a system could be easy to circumvent.
"They may be able to train their detector to be able to flag some images specifically generated by some specific models," he was quoted as saying by the BBC.
"But those detectors can be readily circumvented with some lightweight processing on top of the images, and they may also produce a significant number of false positives.
"So I don't think that it's possible for a broad range of applications."
Meta has acknowledged that its tool will not work for audio or video, even though these are the media at the center of much of the concern about AI fakes.
Instead, the company says it is asking users to label their own audio and video posts, and it "may apply penalties if they fail to do so."
Sir Nick Clegg also acknowledged that it is not currently possible to reliably detect text generated by tools such as ChatGPT.
"That ship has sailed," he told Reuters.
This article was originally published on the BBC.