7 A.I. Companies Agree to Safeguards After Pressure From the White House

Seven leading A.I. companies in the United States have agreed to voluntary safeguards on the technology’s development, the White House announced on Friday, pledging to strive for safety, security and trust even as they compete over the potential of artificial intelligence.

The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — will formally announce their commitment to the new standards at a meeting with President Biden at the White House on Friday afternoon.

The announcement comes as the companies are racing to outdo each other with versions of A.I. that offer powerful new tools to create text, photos, music and video without human input. But the technological leaps have prompted fears that the tools will facilitate the spread of disinformation and dire warnings of a “risk of extinction” as self-aware computers evolve.

On Wednesday, Meta, the parent company of Facebook, announced its own A.I. tool called Llama 2 and said it would release the underlying code to the public. Nick Clegg, the president of global affairs at Meta, said in a statement that his company supports the safeguards developed by the White House.

“We are pleased to make these voluntary commitments alongside others in the sector,” Mr. Clegg said. “They are an important first step in ensuring responsible guardrails are established for A.I. and they create a model for other governments to follow.”

The voluntary safeguards announced on Friday are only an early step as Washington and governments across the world put in place legal and regulatory frameworks for the development of artificial intelligence. White House officials said the administration was working on an executive order that would go further than Friday’s announcement and supported the development of bipartisan legislation.

“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” the administration said in a statement announcing the agreements. The statement said the companies must “uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety.”

Under the agreement, the companies committed to:

  • Conducting security testing of their A.I. products, in part by independent experts, and sharing information about their products with governments and others who are attempting to manage the risks of the technology.

  • Ensuring that consumers can spot A.I.-generated material by implementing watermarks or other means of identifying generated content.

  • Publicly reporting the capabilities and limitations of their systems on a regular basis, including security risks and evidence of bias.

  • Deploying advanced artificial intelligence tools to tackle society’s biggest challenges, like curing cancer and combating climate change.

  • Conducting research on the risks of bias, discrimination and invasion of privacy from the spread of A.I. tools.

“The track record of A.I. shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out A.I. that mitigates them,” the Biden administration statement said on Friday ahead of the meeting.

The agreement is unlikely to slow the efforts to pass legislation and impose regulation on the emerging technology. Lawmakers in Washington are racing to catch up to the fast-moving advances in artificial intelligence. And other governments are doing the same.

The European Union moved swiftly last month to advance what would be among the most far-reaching efforts to regulate the technology. The proposed legislation by the European Parliament would put strict limits on some uses of A.I., including facial recognition, and would require companies to disclose more data about their products.