
Global AI Regulation: Europe Takes Momentous Action to Manage AI

Highlights

The European Parliament granted final approval to the EU Artificial Intelligence Act by an overwhelming majority

EU Member States are expected to grant their final approval to the EU AI Act by May, with provisions of the law likely to become effective this year

AI developers, distributors, importers, manufacturers, and providers face penalties of up to €35 million, or 7% of global revenue, for violations of the Act

The European Parliament granted final approval to the EU Artificial Intelligence Act on March 13, 2024, by a vote of 523 in favor, 46 against, and 49 abstentions. The Act faces a final hurdle, approval by EU Member States, before its provisions gradually take effect. The goal of the Act is to “ensure a high level of protection of health, safety, fundamental rights, democracy and rule of law and the environment from harmful effects of [AI].”

The Act marks the first comprehensive, government-enacted regulation of AI technology and is likely to spur other countries to follow suit, just as the EU’s enactment of the General Data Protection Regulation (GDPR) acted as a catalyst for analogous regulations elsewhere.

Scope and Possible Penalties

The Act covers developers, distributors, importers, manufacturers, and providers within the EU. It also applies extraterritorially to providers and “deployers” of AI whose output is intended for use in the EU.

The EU AI Act provides detailed rules for AI systems depending on the level of risk. For example:

  • The AI Act bans the use of “Unacceptable” AI. Examples of “Unacceptable” AI include the use of real-time facial recognition in public areas and social scoring systems.
  • “High-risk” AI is AI that could create safety risks or negatively affect fundamental rights. While “high-risk” models are permitted, these systems will be required to pass an assessment before being introduced to the public. Examples include AI dealing with the management and operation of “critical infrastructure,” AI systems used to “influence the outcome of an election,” and AI used to evaluate the credit scores of natural persons.
  • For AI systems of limited or minimal risk, providers will need to comply with certain transparency requirements, such as disclosing that content was generated by AI so that users can make informed decisions when engaging with it. The Act also requires safeguards to prevent AI systems from generating illegal content.

Companies involved in developing AI will also have to publish detailed summaries of the content used to train their AI systems.

Violations of the Act could result in significant monetary penalties of up to €35 million, or 7% of the offending actor’s global revenue. Additionally, the Act mandates that EU Member States “shall lay down the rules on penalties and other enforcement measures, which may also include warnings and non-monetary measures, which…shall be effective, proportionate, and dissuasive.” The Act does not provide further guidance on what form these non-monetary penalties may take.

Next Steps for the Act

The Act has gone through several iterations since the European Parliament began work on it in 2018. Since the first public draft in early 2021, the Parliament has added the risk categorizations and worked to strike a balance between innovation and protection. The enactment comes against a backdrop of global calls for regulation of AI and fears that technological advances are outpacing our current understanding of their potential risks.

Having passed the Act, the European Parliament will now send it to the Member States for their final approval, which many see as a formality. Given the Act’s history and its overwhelming passage in Parliament, the Member States will likely approve the Act as early as May. Once approved by the Member States, the Official Journal of the European Union will publish the Act, and it “shall enter into force on the twentieth day” thereafter.

The Act contains a phased rollout, setting benchmarks along the way. For instance, the “prohibitions” take effect within six months of enactment. Within 24 months of enactment, each Member State must establish a regulatory body to oversee enforcement of the Act, identifying violations and imposing penalties.

While passage of the Act by the European Parliament marks a major step in the regulation of AI technologies, it leaves several questions unanswered, including what form non-monetary penalties will take, how rigorously Member States will monitor for and identify violations, and whether other countries will pass similar regulations.

For more information, please contact the Barnes & Thornburg attorney with whom you work or Michael Zogby at 973-775-6110 or michael.zogby@btlaw.com, Kaitlyn Stone at 973-775-6103 or kaitlyn.stone@btlaw.com or William Carlucci at 973-775-6107 or william.carlucci@btlaw.com.

© 2024 Barnes & Thornburg LLP. All Rights Reserved. This page, and all information on it, is proprietary and the property of Barnes & Thornburg LLP. It may not be reproduced, in any form, without the express written consent of Barnes & Thornburg LLP.

This Barnes & Thornburg LLP publication should not be construed as legal advice or legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult your own lawyer on any specific legal questions you may have concerning your situation.
