According to the Biden administration, AI companies including OpenAI, Alphabet, and Meta Platforms have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to help make the technology safer.
AI-generated Content
The companies, which include Anthropic, Inflection, Amazon.com, and OpenAI partner Microsoft, have pledged to thoroughly test systems before releasing them, as well as share information on how to reduce risks and invest in cybersecurity.
The commitments are seen as a victory for the Biden administration’s push to regulate the technology, which has seen a surge in investment and consumer popularity.
“We welcome the President’s leadership in bringing the tech industry together to hammer out concrete steps that will help make AI safer, more secure, and more beneficial for the public,” Microsoft said in a blog post on Friday.
Since generative AI, which uses data to create new content such as ChatGPT’s human-sounding prose, gained popularity this year, lawmakers around the world have begun to consider how to mitigate the emerging technology’s risks to national security and the economy.
In terms of artificial intelligence regulation, the United States lags behind the European Union. In June, EU lawmakers agreed on a set of draft rules that would require systems such as ChatGPT to disclose AI-generated content, assist in distinguishing so-called deep-fake images from real ones, and provide safeguards against illegal content.
In June, U.S. Senate Majority Leader Chuck Schumer called for “comprehensive legislation” to advance the technology and ensure safeguards around it.
A bill under consideration in Congress would require political advertisements to disclose whether AI was used to create imagery or other content.
President Joe Biden, who is hosting executives from the seven companies at the White House on Friday, is also working on an executive order and bipartisan legislation on artificial intelligence technology.
As part of the effort, the seven companies agreed to develop a system to “watermark” all forms of content, from text, images, and audio to AI-generated videos, so that users can tell when the technology has been used.
This watermark, embedded technically in the content itself, is expected to make it easier for users to spot deep-fake images or audio that may, for example, show violence that did not occur, make a scam more convincing, or distort a photo of a politician to cast the person in an unflattering light.
It is unclear how the watermark will remain detectable once the content is shared. The companies also promised to prioritize user privacy as AI advances and to ensure that the technology is free of bias and is not used to discriminate against vulnerable groups.
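The companies have not described how such a watermark would actually be embedded. Purely as an illustration, and not the scheme any of these firms have committed to, the sketch below hides a short tag in the least significant bits of an image’s red channel using Python with the hypothetical helpers embed_watermark and read_watermark (Pillow and NumPy assumed).

```python
# Illustrative only: a toy least-significant-bit (LSB) watermark, not the
# companies' actual (unspecified) provenance scheme.
import numpy as np
from PIL import Image

WATERMARK = "AI-GENERATED"  # hypothetical tag to embed

def embed_watermark(image_path: str, out_path: str, tag: str = WATERMARK) -> None:
    """Hide an ASCII tag in the least significant bits of the red channel."""
    img = np.array(Image.open(image_path).convert("RGB"))
    bits = [int(b) for ch in tag.encode() for b in format(ch, "08b")]
    flat = img[:, :, 0].flatten()
    if len(bits) > flat.size:
        raise ValueError("image too small for this tag")
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # overwrite LSBs
    img[:, :, 0] = flat.reshape(img.shape[:2])
    Image.fromarray(img).save(out_path, format="PNG")  # lossless format preserves the bits

def read_watermark(image_path: str, length: int = len(WATERMARK)) -> str:
    """Recover the tag by reading the same LSB positions back."""
    flat = np.array(Image.open(image_path).convert("RGB"))[:, :, 0].flatten()
    bits = flat[: length * 8] & 1
    chars = [int("".join(map(str, bits[i : i + 8])), 2) for i in range(0, len(bits), 8)]
    return bytes(chars).decode()
```

A fragile scheme like this would not survive re-compression or cropping, which is one reason production provenance efforts tend to rely on cryptographic signing or statistically robust watermarks instead.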
Other commitments include developing AI solutions for scientific problems such as medical research and climate change mitigation.
To read our blog on “OpenAI nears 1 billion new users milestone achievement,” click here