EU and US AI Regulatory Push Overlaps Across Global Business
As jurisdictions race to understand and build appropriate regulatory and governance structures around artificial intelligence, the EU is frequently regarded as the frontrunner. However, the US is forging its own federated approach and isn’t necessarily behind. For organizations, is alignment with one or the other the best path to success? Ultimately, the choice won’t be either-or.
Historically, each jurisdiction has followed a distinct approach to regulating tech. The EU is seen as prescriptive, comprehensive, and wide-reaching, relying on a strong tradition of protecting fundamental rights and values through legislation.
The US follows a federated approach to tech regulation, comprising contemporaneous federal, state, and local laws, along with sector-specific standards and best practices, and variations based on perceived market needs.
The consequence of these two different, yet not incompatible, approaches is that regulatory authorities and organizations globally have no option but to engage with both.
The EU and the US have each regulated AI in a manner consistent with their historical regulation of technology. The differences between these approaches are evident in application, structure, focus, and enforcement.
The EU AI Act sets binding and comprehensive rules that will apply directly to businesses throughout the AI value chain. The most stringent requirements fall on providers of AI systems, particularly with respect to prohibited or high-risk AI and general-purpose AI models, but obligations will also apply to deployers, distributors, importers, and others.
President Joe Biden’s executive order lacks the force of law but contains broad and detailed mandates for various federal agencies and select persons that may produce regulation of those same constituents.
The new EU regulation follows a risk-based approach and aims to address the most pressing AI risks while encouraging innovation through four key building blocks. Its long legislative process will end soon, with requirements taking effect in phases at 6, 12, 24, and 36 months, respectively.
The Biden administration issued its executive order without much warning. It focuses on US leadership in AI and instructs federal agencies to develop best practices, guidelines, and rules across eight general areas, with initial implementation timelines ranging from 90 to 365 days.
While the order’s enforcement relies on collaboration among, and enforcement by, individual agencies, EU enforcement comprises strong measures, supervision at the EU and member state levels, and sanctions, including fines of up to 7% of total worldwide annual turnover.
Shared Focus
The EU AI Act introduces specific requirements for providers of general-purpose AI models, or GPAI, with further requirements for providers of such models that pose systemic risks due to their particularly high-impact capabilities.
The US executive order requires developers of potential dual-use foundation models, and acquirers, developers, or possessors of a potential large-scale computing cluster, to provide the federal government with specific reporting and information, with emphasis on results concerning biological weapons acquisition and use by non-state actors, the discovery of software vulnerabilities, and the use of software to influence events.
The EU and US instruments have analogous definitions for the models they seek to regulate.
Extra-Territorial Reach
The EU AI Act’s rules will apply to non-domestic entities, such as a US provider that places an AI system or GPAI model on the EU market, or that puts into service an AI system in the EU. To prevent circumvention of its rules, the EU regulation will also apply to a US provider or deployer of an AI system where the output of that system is (intended to be) used in the EU.
The executive order’s impact on non-US companies will depend on each specific agency and regulation. Consider, for example, the requirements for foreign resellers of US infrastructure-as-a-service providers, which will be required to verify the identity of any person who obtains an account from such a reseller and to submit reports when a foreign person transacts with a provider to train a large AI model that could enable malicious cyber-enabled activity.
The executive order and the EU AI Act will regulate AI across sectors. While no specific areas fall outside the scope of the US order, the EU excludes areas such as AI used exclusively for military, defense, or national security purposes. Other sector-specific provisions aim to avoid overlaps with existing sectoral legislation, which will be reviewed and/or updated to reflect the EU’s new requirements.
Broader Regulatory Landscape
The EU AI Act’s impact should be considered within a broader regulatory landscape at the member state and EU levels. Recent relevant EU initiatives include the revision of the 1985 Product Liability Directive and the proposed AI Liability Directive, along with guidance on AI enforcement under existing EU texts such as the General Data Protection Regulation and the Digital Services Act.
The US executive order should be viewed alongside various federal agency developments and dozens of new AI state bills, ongoing application to AI of existing laws, and new AI-focused government investigations and civil lawsuits. There is broader, overarching recognition that successful regulation will result from deep cross-functional collaboration among agencies, the private sector, academia, and other stakeholders.
EU and US developments mirror AI developments around the world: targeted rules and regulations in parts of the Asia-Pacific region, notably mainland China, and an overlay of guidance on existing laws in Hong Kong and Singapore.
The governments of Japan and Australia are considering AI-specific legislation and conducting public consultations, but for now rely on adjusting existing law or supplementing it with high-level ethical principles and regulatory guidance. With this in mind, APAC governments are closely monitoring the space and maintaining an agile approach.
Some suggest the US pioneered AI development while the EU started the conversation with a systematic approach to regulation and enforcement. But both have played a part in evolving development, use, and deployment of AI. The debate over which region was first in the AI game isn’t so relevant anymore.
Organizations should pay close attention to both parts of the world, keeping in mind that these regulations are evolving.
Reproduced with permission. Published March 2024. Copyright 2024 Bloomberg Industry Group 800-372-1033.