Highlights
- The European Union’s Artificial Intelligence Act establishes the first comprehensive horizontal legal framework for regulating AI systems across the EU
- The EU AI Act will enter into force on Aug. 1, 2024, with the majority of its provisions becoming enforceable on Aug. 2, 2026
- The Act has broad extraterritorial implications, extending its reach to providers who place AI systems on the market or put them into service within the EU, regardless of their location, potentially including U.S. businesses
The European Union’s Artificial Intelligence Act, Regulation (EU) 2024/1689, was published on July 12, 2024, in the Official Journal of the European Union. This marks the establishment of the first comprehensive horizontal legal framework for regulating AI systems across the EU. The EU AI Act will enter into force on Aug. 1, 2024, with the majority of its provisions becoming enforceable on Aug. 2, 2026.
While the compliance timeline may appear generous, developing an AI compliance program is an intricate and time-intensive process. Businesses should begin their compliance efforts promptly to ensure they are adequately prepared to meet the regulatory requirements.
This landmark legislation, in negotiation and development since 2021, has undergone extensive revisions aimed at creating a harmonized legal environment for the creation, marketing, deployment, and use of AI systems throughout the EU. The Act has broad extraterritorial implications, extending its reach to providers who place AI systems on the market or put them into service within the EU, regardless of their location. A number of U.S. businesses will therefore fall within its scope, depending on their exact role in developing or using AI. The Act also applies to providers and deployers established outside the EU if the output of their AI systems is used within the EU.
The Act covers deployers, importers, and affected individuals within the EU, though it lacks clarity regarding distributors. Certain exemptions are specified within the Act. It does not apply to AI systems developed and used solely for scientific research and development. Research, testing, and development activities are exempt from the Act’s provisions until the AI is placed on the market or put into service, although real-world testing is not covered by this exemption. AI systems released under free and open-source licenses are also exempt unless they are classified as high-risk, prohibited, or generative AI.
The EU AI Act adopts a risk-based approach, assigning different regulatory requirements based on the level of risk associated with AI systems.
- Unacceptable Risk: AI practices that pose a clear threat to fundamental rights are prohibited. This includes AI systems that manipulate behavior or exploit vulnerabilities (e.g., based on age or disability) to distort actions. Prohibited AI also includes certain biometric systems, like emotion recognition in the workplace or real-time categorization of individuals.
- High Risk: AI systems classified as high risk must adhere to stringent requirements. These include implementing risk-mitigation strategies, using high-quality data sets, maintaining activity logs, providing detailed documentation, ensuring human oversight, and achieving high standards of robustness, accuracy, and cybersecurity. High-risk AI examples include critical infrastructures (e.g., energy and transport), medical devices, and systems determining access to education or employment.
- Limited Risk: AI systems with limited risk, such as chatbots, must be designed to inform users that they are interacting with AI. Deployers of AI systems that generate or manipulate deepfakes must disclose the artificial nature of the content.
- Minimal Risk: AI systems with minimal risk, such as AI-enabled video games or spam filters, face no restrictions. Companies may opt to follow voluntary codes of conduct.
Medical Uses
AI intended for medical purposes is already regulated as a medical device in Europe and the United Kingdom. It must undergo a thorough assessment before being marketed, in accordance with the EU Medical Device Regulation (MDR) and the EU In Vitro Diagnostic Medical Devices Regulation (IVDR). Under the Act, any AI system that is itself a Class IIa or higher medical device, or that serves as a safety component of one, is classified as high risk.
High-risk AI systems will need to adhere to a comprehensive set of additional requirements, many of which align with the stringent conformity assessment standards currently mandated by the MDR and IVDR. The Act permits medical device notified bodies to conduct AI conformity assessments, provided their AI competence has been evaluated under the MDR and IVDR. This implies a unified declaration of conformity, although the exact implementation details remain unclear.
Recent Updates
The European Commission has established a new EU-level regulator, the European AI Office, which will operate within the Directorate-General for Communications Networks, Content and Technology.
The AI Office will be responsible for overseeing and enforcing compliance with the AI Act’s requirements for general purpose AI (GPAI) models and systems across all 27 EU member states. Its duties will include monitoring emerging systemic risks associated with GPAI development and deployment, conducting evaluations of capabilities and models, and investigating potential cases of infringement and non-compliance. To assist GPAI model providers in achieving compliance, the AI Office will develop voluntary codes of practice, adherence to which will offer a presumption of conformity.
The AI Office will spearhead international cooperation on AI matters, strengthen connections between the European Commission and the scientific community – including the forthcoming scientific panel of independent experts – and support joint enforcement actions among member states. It will serve as the secretariat for the AI Board, which coordinates efforts among national regulators. It will also facilitate the establishment of regulatory sandboxes to allow companies to test AI systems in controlled environments and provide information and resources to small and medium-sized enterprises to aid their compliance efforts.
Timeline of Developments
With publication in the Official Journal, the dates to comply with the regulations are now confirmed. Here is what to expect:
- Aug. 1, 2024 – The AI Act will enter into force
- Feb. 2, 2025 (6 months later) – Chapter I (general provisions) and Chapter II (prohibitions on unacceptable-risk AI) will apply
- Aug. 2, 2025 (12 months later) – Chapter III Section 4 (notifying authorities and notified bodies), Chapter V (general purpose AI models), Chapter VII (governance), Chapter XII (penalties) and Article 78 (confidentiality) will apply, with the exception of Article 101 (fines for GPAI providers)
- Aug. 2, 2026 (24 months later) – The remainder of the AI Act will apply, except for Article 6(1)
- Aug. 2, 2027 (36 months later) – Article 6(1) and the corresponding obligations in the Regulation will apply
Next Steps
Once the Act enters into force on Aug. 1, these milestones will follow according to Article 113. Under Article 56, the Codes of Practice must be finalized within nine months of the Act’s entry into force. The European Commission will then have an additional three months, for a total of 12 months, to approve or reject these Codes via an implementing act, based on the advice of the AI Office and the AI Board. The AI Act also directs the AI Office to facilitate the “frequent review and adaptation of the Codes of Practice.” Given that the standardization process will exceed the timelines set by the AI Act, these Codes of Practice for general purpose AI model providers will be instrumental in ensuring the effective implementation of the regulation.
For more information, please contact the Barnes & Thornburg attorney with whom you work or Kaitlyn Stone at 973-775-6103 or kaitlyn.stone@btlaw.com or Michael Zogby at 973-775-6110 or michael.zogby@btlaw.com. Aury Quezada, summer law clerk, assisted with this alert.
© 2024 Barnes & Thornburg LLP. All Rights Reserved. This page, and all information on it, is proprietary and the property of Barnes & Thornburg LLP. It may not be reproduced, in any form, without the express written consent of Barnes & Thornburg LLP.
This Barnes & Thornburg LLP publication should not be construed as legal advice or legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult your own lawyer on any specific legal questions you may have concerning your situation.