The Biden administration on Thursday announced new policies aimed at ensuring the safe and responsible use of artificial intelligence at federal agencies.

“I believe that all leaders from government, civil society and the private sector have a moral, ethical, and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its full benefit,” Vice President Kamala Harris told reporters on a call Wednesday previewing the announcement.

What You Need To Know

  • The Biden administration on Thursday announced new policies aimed at ensuring the safe and responsible use of artificial intelligence at federal agencies

  • The policies create standards for risk management, transparency and oversight

  • The new rules stem from President Joe Biden’s October executive order that attempted to establish safeguards for the largely unregulated, rapidly evolving AI industry while also encouraging further innovation and use of the technologies

  • Vice President Kamala Harris said the U.S. wants other nations to follow its lead “and put the public interest first when it comes to government use of AI.”

The policies create standards for risk management, transparency and oversight. The new rules stem from President Joe Biden’s October executive order that attempted to establish safeguards for the largely unregulated, rapidly evolving AI industry while also encouraging further innovation and use of the technologies.

Starting Dec. 1, agencies will be required to verify that the AI tools they use do not endanger the rights and safety of Americans, Harris said.

The vice president offered an example: If the Department of Veterans Affairs wants to use AI in its hospitals to help doctors diagnose patients, the agency must first demonstrate that the tool does not produce racially biased diagnoses.

Agencies must also be transparent about the AI tools they have deployed. They will be required to publish a list of artificial intelligence systems they use, an assessment of the risks those systems might pose and a plan for managing those risks. 

In addition, the Biden administration is ordering each agency to designate a chief AI officer “with the experience, expertise and authority to oversee” the AI technologies it uses, Harris said. The policy also requires agencies to establish AI governance boards by May 27.

“I’ll say that these three new requirements have been shaped in consultation with leaders from across the public and private sectors, from computer scientists, to civil rights leaders, to legal scholars and business leaders,” the vice president said.

Harris said the U.S. wants other nations to follow its lead “and put the public interest first when it comes to government use of AI.”

“President Biden and I intend that these domestic policies will serve as a model for global action,” she said.

Office of Management and Budget Director Shalanda Young said the rules are needed because “every day, federal agencies make decisions that have profound impacts on the lives of Americans. When these agencies use AI, they have a distinct responsibility to manage its risks, and the public deserves confidence that the federal government will use the technology responsibly.”

Artificial intelligence is being heralded by some as revolutionary technology that could reshape many aspects of life — everything from searching the internet to curing diseases. But some argue it could also make people expendable at their jobs, be used to promote disinformation or spiral out of the control of humans if systems are allowed to write and execute their own code. 

Young announced that the Biden administration plans to hire 100 AI professionals through the National AI Talent Surge, an initiative also called for in the president’s executive order.

The new employees will “implement this guidance and advance the safe and trustworthy use of AI,” Young said.

And as OMB looks to procure new AI technologies for the government later this year, it is issuing a request for information to gather input from the public that will better inform its decisions, Young said.