Meta Platforms’ top policy executive announced on Tuesday that the company will begin detecting and labelling images generated by other companies’ artificial intelligence services in the coming months, using a set of invisible markers embedded in the files.
Meta will apply the labels to any content carrying the markers that is posted to its Facebook, Instagram, and Threads services, in an effort to signal to users that the images, which often resemble real photos, are actually digital creations, the company’s president of global affairs, Nick Clegg, said in a blog post.
The company already labels any content created with its own AI tools.
Once the new system is up and running, Meta will do the same for images created using services run by OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and Google, Clegg said.
The announcement provides an early look at an emerging system of standards that technology companies are developing to mitigate the potential risks associated with generative AI technologies, which can produce fake but realistic-looking content in response to simple prompts.
“Even though the technology is not yet fully mature, particularly when it comes to audio and video, the hope is that we can create a sense of momentum and incentive for the rest of the industry to follow,” Clegg said in a statement.
In the meantime, he added, Meta would begin requiring people to label their own altered audio and video content, with penalties for those who fail to do so, though he did not specify what those penalties would be.
He added that there is currently no viable mechanism for labelling written text generated by AI tools such as ChatGPT.