Last October, President Joe Biden signed an executive order to ensure the safe and responsible development and use of artificial intelligence (AI). In January, three months after the signing, the Biden-Harris administration announced that major AI developers are now required to report their safety test results to the government.

The landmark executive order was issued in response to the rapid development of AI systems. The order implements new guidelines for safety and security in AI development and use to minimize the risks AI poses to civil rights and privacy.

What is the executive order?

The executive order is focused on “safe, secure and trustworthy artificial intelligence.” However, it also includes provisions promoting AI innovation and competition in the United States and advancing U.S. leadership abroad.

Biden’s executive order sets standards for privacy, equity and civil rights, consumer protection, workers’ rights and the responsible innovation of AI.

The Biden-Harris administration established the following actions in the executive order as essential for safety and security when dealing with the emerging technology.

Privacy and consumer protection

The order requires developers of the most powerful AI systems to share critical safety information with the Department of Commerce. Federal agencies were also tasked with developing tests, standards and tools to ensure AI systems are secure before they are made available to the general public.

It further establishes standards for biological synthesis screening to protect against the possibility of AI being used to engineer dangerous biological materials.

The Department of Commerce was also tasked with developing guidance for watermarking to label AI-generated content, protecting the public from deception and fraud perpetrated through AI systems such as deepfake technology.

The order also emphasized strengthening cybersecurity, advancing privacy-preserving research and assessing how agencies evaluate and use commercially available data.

AI, equity and civil rights

Building on previous efforts such as its Blueprint for an AI Bill of Rights, the Biden-Harris administration directed actions to prevent irresponsible uses of AI that could lead to abuses in the justice system, healthcare and housing.

Concerns about the ways AI can perpetuate bias and discrimination grew when leaders from the Equal Employment Opportunity Commission warned last year that AI could be used as a method of “digital redlining.”

The executive order provides guidance to landlords, federal benefits programs and contractors to ensure AI algorithms or systems do not undermine equity and civil rights.

It implements training and coordination between the Department of Justice and federal civil rights agencies to address the prosecution of civil rights violations related to AI.

The order further seeks to establish fairness in the criminal justice system by developing best practices for AI use in surveillance, risk assessments, predictive policing, pretrial release and sentencing.

Addressing workers’ rights

The order included clauses to “mitigate the harms and maximize the benefits of AI for workers.” This involves addressing job displacement, worker undercompensation and increased workplace surveillance caused by the introduction of AI to the job market.

A 2022 Pew Research Center study found 19% of American workers were in jobs most exposed to AI, where important activities may be either replaced or assisted by AI.

The executive order called for a report on AI’s potential impact on the labor market and the identification of ways for the federal government to support those facing “labor disruptions,” including those caused by AI.

Advancing responsible innovation

The executive order includes provisions to promote the continued advancement of AI research, innovation and competition in the U.S. and to extend these standards of safe and trustworthy AI abroad.

The administration also directed the federal government to acquire AI products and services, deploy AI responsibly and hire more AI professionals.

Why is it important?

This executive order is the first to establish regulation on the use and development of AI systems in the United States. President Biden said the order serves as a vital step in his work to “promote and demand responsible innovation.”

President Biden invoked the Defense Production Act (DPA) for the signing and implementation of the AI executive order. This requires private companies to notify the federal government when developing large-scale AI systems and to demonstrate that those systems will not harm national security or the U.S. public.

“I believe history will show that this was the moment when we had the opportunity to lay the groundwork for the future of AI,” Vice President Kamala Harris said at the U.K. Summit on AI Safety. “And the urgency of this moment must then compel us to create a collective vision of what this future must be.”

Biden and Harris say this executive order builds on their earlier work on the Blueprint for an AI Bill of Rights and on their consultations with various countries on AI governance frameworks.

The administration said it will continue working with partners and allies to create strong international frameworks governing the safe use and development of AI systems.

What has been done?

In putting out this order, the federal government sought to mitigate risks while continuing to promote the innovation and advancement of AI.

The executive order's goals included requiring developers of the most powerful AI systems to share critical information with the Department of Commerce in accordance with the DPA, a goal now being fulfilled through the new requirement that AI companies disclose their safety test results.

The Department of Commerce has worked to create a proposal that would require U.S. cloud companies to disclose whether they provide AI training infrastructure and servers to foreign developers. The proposed regulation would further require them to verify foreign users’ identities in an effort to prevent cyber threats to national security.

Federal agencies have increased hiring of data scientists and AI professionals through the AI Talent Surge, and nine agencies have submitted risk assessments to the Department of Homeland Security regarding AI’s use in critical infrastructure sectors.

A key concern surrounding the execution of the Biden-Harris administration’s initiative to prevent potential AI threats has been the absence of legislation enacted in support of these efforts.

The Biden-Harris administration said it will continue to pursue bipartisan legislation to help America set the standard for responsible AI innovation.

On Feb. 8, 2024, U.S. Secretary of Commerce Gina Raimondo announced the formation of the U.S. AI Safety Institute Consortium (AISIC), housed under the U.S. AI Safety Institute.

The AISIC includes more than 200 member companies and organizations that will contribute to the advancement of the priority actions outlined in Biden’s executive order.

“By working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly,” Raimondo said.

Thumbnail graphic created by Kim Jao / North by Northwestern