AI Pulse Check: Will the Biden Executive Order on AI Survive the Trump-Vance Administration?
President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO), announced on October 30, 2023, marked a historic milestone in the regulation of artificial intelligence (AI) systems and technologies in the United States. Now, as the country prepares for President-elect Trump to take office in January 2025, we review the key US federal and state AI legislative developments since the EO's enactment and consider how the new Administration may approach AI legislation and policy going forward. We also consider what organizations can do to prepare.
What the EO Said
The EO was very broadly framed and aimed to "advance and govern the development and use of AI in accordance with eight guiding principles and priorities":
- Ensuring the Safety and Security of AI Technology
- Promoting Innovation and Competition
- Supporting Workers
- Advancing Equity and Civil Rights
- Protecting Consumers, Patients, Passengers and Students
- Protecting Privacy
- Advancing Federal Government Use of AI
- Strengthening American Leadership Abroad
For each area, the EO tasked various government agencies with developing more specific guidelines and parameters, and it contemplated the establishment of working groups, interagency councils, and a research coordination network. Entities expressly called out for regulation included, among others, critical infrastructure providers (e.g., certain energy companies), infrastructure-as-a-service providers, financial institutions, and synthetic nucleic acid sequence providers. The EO set implementation timelines for these actions ranging from 90 to 365 days.
The Vulnerability of the EO
A key vulnerability of the EO is that, as an executive order, it lacks the durability of legislation and can therefore be overturned by the Trump-Vance Administration. Even before the election, there were concerns that the US AI Safety Institute (AISI), created in furtherance of the EO to study AI system risks, might be dismantled if the EO were repealed. In a recent letter, more than 60 companies, nonprofits, and universities, including OpenAI and Anthropic, which collaborate with the AISI on AI research and testing, asked Congress to enact legislation codifying the AISI before the end of 2024. A related concern is that, without the AISI and analogous initiatives prompted by the EO, the United States may lag other countries in AI innovation.
Developments Under the One-Year-Old EO
The EO directed federal agencies to take a wide range of actions to promote and regulate AI development and use within the federal government, including over one hundred (100) actions scheduled within the first year. Despite the ambitious first-year plan, the government appears to have kept pace: a progress report issued by the Biden Administration at the end of October 2024 touts that all scheduled actions were completed on time.
Key developments undertaken by federal agencies included:
- Establishment of the AISI. Following the issuance of the EO, the National Institute of Standards and Technology (NIST) established the AISI. The AISI is broadly tasked with advancing AI safety and addressing risks posed by AI systems; its initial focus has been on priorities assigned to NIST. Guiding the AISI's activities is the first-ever National Security Memorandum (NSM) on AI, which designated the AISI to spearhead the federal government's efforts on AI model testing and on rapid and responsible AI adoption (notably by the Department of Defense and the Intelligence Community). The AISI also formed a consortium of AI stakeholders to assist in this effort.
- Safety and Security Testing for AI Development. The federal government has promoted more rigorous safety and security testing of AI systems, including through Department of Commerce reporting requirements for developers and pre-deployment testing of new models under signed agreements.
- Guidance and Resources. Several efforts have focused on developing guidance and tools to manage AI risks, including:
- Frameworks published by NIST for managing risks related to generative AI and dual-use foundation models (e.g., NIST-AI-600-1 on Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile);
- Guidance to prevent and address discrimination that arises from AI deployment (e.g., Risk Management Profile for Artificial Intelligence and Human Rights);
- Support for addressing privacy risks that AI systems create or exacerbate, such as use of privacy-enhancing technologies or incorporation of AI use into existing privacy impact assessment processes (e.g., Protecting Privacy When Federal Agencies Use Commercially Available Information);
- Guidance on intellectual property rights related to AI (e.g., Guidance on Use of Artificial Intelligence-Based Tools in Practice Before the United States Patent and Trademark Office); and
- Principles to help employers use AI in the workplace in a manner that empowers workers and avoids violation of employment laws (e.g., Artificial Intelligence And Worker Well-being: Principles And Best Practices For Developers And Employers).
- Sector-Specific Guidance. In addition to broader and more general guidance on managing AI risks, federal agencies have also developed sector-specific resources, including:
- A Responsible AI toolkit released by the Department of Defense (DoD) for AI deployment in DoD projects;
- Department of Energy tools to test models’ risk to nuclear security;
- Recommendations and safety and security guidelines for deployment of AI systems in critical infrastructure (issued by the Department of Homeland Security);
- Guidance and resources for deployment of AI in education, issued by the Department of Education to support schools, school leaders, and education technology developers;
- The Framework to Advance AI Governance and Risk Management in National Security, as directed by the NSM on AI;
- Principles for use of AI in drug development processes and medical devices (developed by the Department of Health and Human Services (HHS));
- Resources to track and support mitigation of harms arising from use of AI in healthcare settings (issued by HHS);
- Guidance on AI deployment in the housing sector, and specifically, complying with legal restrictions on discrimination; and
- Guidance on deployment of AI systems in public benefits programs, including guidance issued by the Department of Agriculture and HHS.
- Development and Deployment. Federal agencies have also devoted resources to supporting deployment of AI systems, including by:
- Piloting AI use to support cybersecurity for vital government software systems;
- Supporting innovative AI use and development through grant awards;
- Training researchers and supporting AI curricula and programs at all educational levels;
- Funding AI-related scientific research;
- Investing resources into AI deployment in key sectors, such as clean energy, health, manufacturing, and national security; and
- Publishing reports on opportunities for AI-supported growth in key sectors, such as clean energy.
- AI Support Infrastructure. In addition to directly supporting AI deployment and development, federal agencies have focused on the infrastructure needed for further expansion, including physical infrastructure, such as datacenters (through the Task Force on AI Datacenter Infrastructure), and the AI talent pipeline (through immigration policies and support for AI education and career pathways).
- Prevention of Misuse of AI. The federal government also initiated measures to directly combat misuse of AI systems and technologies, including by:
- Developing a Framework for Nucleic Acid Synthesis Screening, aimed at helping to prevent the misuse of AI for engineering dangerous biological materials;
- Identifying measures to label and detect AI-generated content to prevent the spread of misinformation;
- Marshaling resources and coordinating development of technology to prevent AI tools from being used to generate sexual abuse material; and
- Issuing proposed rules to require cloud providers to report use of cloud resources by foreign actors for potentially malicious activities (see Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities).
- Government AI Resources. The EO also sought to strengthen the government's ability to harness AI's power, including through government-wide policies on AI governance; policies on responsible acquisition of AI technologies; the hiring of AI practitioners to bolster AI expertise within federal agencies; and mechanisms to identify opportunities and share information regarding effective AI deployment across the federal government.
- International Leadership and Collaboration. The US government has also taken a number of steps to advance US leadership on AI development abroad and to support international collaboration, including by developing and/or supporting multilateral instruments on AI security and development (notably, the Council of Europe's Framework Convention on AI and Human Rights, Democracy, and the Rule of Law) and by convening and participating in various multi-state summits, information exchanges, and partnerships.
The Status of Federal AI Legislation
Although the Biden report details significant progress by the federal government in terms of executive actions to support AI development and safety, federal legislation thus far has been limited. Over one hundred (100) AI-related bills have been introduced in Congress in the past year, and prominent leaders in Congress have laid out a broad, bipartisan policy roadmap (the Senate AI Working Group's "Driving U.S. Innovation in Artificial Intelligence") identifying areas where lawmakers agree legislation is needed. Yet few of these bills have made significant progress, and no notable legislation has gained enough support to pass into law. The result is that, despite the activity touted by the Biden Administration in the year following the EO, AI regulation in the United States remains in a state of flux.
Key US State AI Developments During the Biden Administration
State-level AI legislative developments over the past year have been voluminous and varied. Tracking AI-focused bills and laws across states, and identifying common parameters and themes, can be challenging for several reasons. The volume of activity fluctuates within and across states: earlier this year, for example, there were over three hundred (300) pending AI bills and enacted AI laws; today, there are fewer than two hundred (200). While some states have multiple pending bills, many of those bills address essentially the same subject (e.g., multiple employment-related bills in New Jersey). Certain bills and laws do not focus on AI specifically or primarily but cover aspects that are key to AI development and use (e.g., privacy). In addition, some cities, like New York City, spearhead their own AI-focused initiatives that may feed into state agendas. These parallel developments require constant attention to track accurately. The summary below focuses on the state of AI-specific legislation today.
Enacted AI-Focused Legislation
State AI-focused laws that are in effect can be grouped into key categories, restricting:
- AI-generated content about candidates in campaigns or elections (e.g., AK HB 129, AZ SB 1359, CA AB 2355 and HI SB 2396);
- AI-generated images of children (e.g., AL HB 168, CA AB 1831, CA SB 1381, ID HB 465, WA HB 1999 and SD SB 79);
- AI-generated sexually explicit images (e.g., CA SB 926, ID HB 575, IN HB 1047, NY SB S1042A and LA SB6);
- AI use in consumer-facing communications (e.g., CA A 2905, CA AB 3030 and MN SF 4097);
- Unfair discrimination (e.g., CO SB24-205, CO SB21-169 and IL HB3773); and
- AI use in employment (e.g., IL 820 ILCS 42, IL HB3773 and NYC Local Law 144 of 2021).
In addition, states have enacted AI laws targeting, among other areas, content-sharing platforms (e.g., CA SB 981 and NJ AR 141), developers of AI systems (e.g., CA AB 2013 and CA SB 942), uses of AI in healthcare (e.g., CA SB 1120), uses of AI in insurance (e.g., CO SB21-169), artists' rights (e.g., IL HB 4875), and definitions of "AI" or, relatedly, "person" and "personhood" (e.g., MI HB 5143, ND HB 1361 and UT H 249).
States with the largest number of enacted AI-specific laws include California, Colorado, Idaho, Illinois, New York, and Utah.
The state law that has received the most attention this year is the Colorado AI Act, which takes effect on February 1, 2026, and focuses on preventing algorithmic discrimination against consumers. It is considered the first comprehensive AI legislation in the United States. For more information on the Colorado AI Act, please see our summary here.
Pending AI-Focused Legislation
Pending state AI-focused legislation seeks to cover a range of areas, at times overlapping with the areas covered by AI legislation enacted in other states. Examples of these areas include:
- Watermarks or explicit disclosure regarding AI-generated content (e.g., CA AB 3050, CA AB 3211, CA SB 970, IL HB4611, IL HB5321, OH SB 217 and PA HB 1598);
- Misuse of an individual's voice or likeness (e.g., OH HB 367, OK HB3073, PA SB 1045 and NJ A4480);
- Deepfakes and AI-generated images (e.g., CA AB 1856 and CA AB 1872);
- Social media platforms (e.g., CA AB 1027, IL HB 3943 and NJ A4479);
- Insurance (e.g., CA SB 1229, IL HB 4611 and PA HB 1663);
- Healthcare (e.g., IL HB 5321 and IL SB 2795);
- Employment (e.g., NJ S2964 and NY AB 7859); and
- Real Estate (e.g., NY AB A7906A and NY SB S7735).
States with the largest number of pending AI-specific bills include California, Illinois, New Jersey, New York, Pennsylvania, and Rhode Island.
Potential AI Regulation and Policy in the Trump-Vance Administration
While President-elect Trump leveraged AI technology during his campaign (e.g., posting AI-generated photos), he did not focus extensively on AI policy. Trump did reference the energy demands of AI in his convention speech: "We have to produce massive amounts of energy [...] AI needs tremendous [energy] – literally, twice the electricity that's available now in our country, can you imagine?" On AI policy, Trump is likely to build on the approach of his previous administration, which his then-Chief Technology Officer described as seeking to "limit regulatory overreach" and "promote a light-touch approach."
Trump's First Administration
AI was not a key policy issue during Trump's first Administration. However, it was Trump who signed the first Executive Order on AI in February 2019: Executive Order 13859 on Maintaining American Leadership in Artificial Intelligence, which focused on promoting federal investment in AI research and development and on reducing barriers to the use of AI technologies. At the end of his Presidency, in December 2020, Trump signed Executive Order 13960 on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, which addressed federal agencies' use of AI and promoted "public trust and confidence while protecting privacy, civil rights, civil liberties, and American values". Compared to the EO, Trump's Executive Orders were limited in scope.
A Repeal of the EO?
Trump has committed to repealing the EO, and the 2024 GOP Platform states: "Joe Biden’s dangerous Executive Order […] hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing." Trump allies have reportedly drafted a replacement Executive Order with a plan to "make America First in AI" and roll back "unnecessary and burdensome regulations". Any replacement order is likely to reduce government regulation of AI, which Republicans see as stifling the free market's ability to innovate.
Links with Tech
Businessman and investor Elon Musk featured heavily in Trump's campaign, and Trump has stated that he would enlist Musk to head a "government efficiency commission". Musk co-founded OpenAI before departing to set up his own AI startup, xAI, which announced earlier this year that it had raised US$6 billion.
While calling on California to pass bill SB 1047 on AI safety, Musk recently stated that "for over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public". If Musk has a role in Trump's second Administration, he may well influence its approach on AI.
VP Vance
Vice President-elect Vance may look to take on an AI role similar to Kamala Harris's in the outgoing Administration. At a Senate hearing on AI and privacy in July 2024, he stated: "very often CEOs, especially of larger technology companies that I think already have advantageous positions in AI will come and talk about the terrible safety dangers of this new technology and how Congress needs to jump up and regulate as quickly as possible. And I can't help but worry that if we do something under duress from the current incumbents, it's going to be to the advantage of those incumbents and not to the advantage of the American consumer." Vance appears to be concerned about the impact of AI regulation on smaller tech companies, and some have interpreted this statement as an endorsement of open-source AI.
Congress and the Courts
To date, Congress has not passed overarching legislation to regulate AI, and most developments have occurred at the agency and state levels, as described above. House Speaker Mike Johnson has adopted an anti-regulatory position on AI, and this approach will likely continue in a Republican-controlled Congress. Senator Ted Cruz (R-TX), currently the ranking member of the Senate Commerce Committee and a vocal critic of the EO and of tough AI regulation, would likely chair the Committee.
Furthermore, the courts will continue to help shape the future of AI, including by grappling with how intellectual property laws apply in this new era.
What Organizations Can Do
As AI developments and use cases continue to proliferate and the new Administration steps into power, organizations can continue to follow a practical, risk-based approach appropriate for them, while actively monitoring ongoing federal, state, local, and international AI developments. Because many of these approaches are based on generally accepted AI risk management practices, they are likely to survive whatever may come next. While organizations may have differing needs and resources, the following list of "Dos" and "Don’ts" can be helpful in this exercise.
Note: Items in the “Don’t” category may be acceptable for your organization under certain circumstances. Please consult with your counsel for specific legal advice.
Clifford Chance and Artificial Intelligence
Clifford Chance is following AI developments closely and will be conducting further seminars and publishing additional articles on new AI laws and regulations. If you are interested in receiving information from Clifford Chance on these topics, please reach out to your usual Clifford Chance contact or complete this preferences form.
Tools and Resources
For more information about US state-level AI activity, please see here.
For more information about US state-level privacy developments, please see here.
To catch up on our publications regarding the EO or other AI developments generally, please visit our AI Hub.
Please contact us for additional AI resources such as:
- Global AI Legislation to Watch Tracker
- US AI Legislation to Watch Tracker
- US State Privacy Laws Comparison Chart.