
The E.U. Makes History with the World’s First All-Encompassing AI Legislation


In a pioneering move, the European Union’s legislative body has overwhelmingly endorsed a comprehensive regulatory framework governing artificial intelligence technology across its 27 member nations. After a five-year journey since its initial proposal, the Artificial Intelligence Act is poised to take effect later this year, establishing itself as a global benchmark for governments grappling with the challenge of regulating the rapidly evolving AI landscape.

Balancing Innovation and Oversight

The newly approved legislation, hailed as taking a human-centric approach, aims to ensure that humans remain in control of AI while harnessing the technology for economic growth, societal progress, and unlocking human potential. Dragos Tudorache, a Romanian lawmaker who co-led the negotiations on the draft law, emphasized this vision prior to the vote.

Major tech companies, while recognizing the need for regulation, have lobbied to shape the rules in their favor. OpenAI’s CEO, Sam Altman, initially caused a stir by suggesting the ChatGPT maker could withdraw from Europe if unable to comply with the AI Act, though he later backtracked on those remarks.

Regulating Generative AI

The EU’s AI Act takes a risk-based approach, subjecting AI applications to varying levels of scrutiny based on their perceived risk. Low-risk systems, such as content recommendation engines or spam filters, will face only voluntary requirements and codes of conduct. High-risk applications in critical sectors like healthcare or infrastructure will face stringent requirements for data quality and user transparency.

Certain AI uses are outright banned due to unacceptable risks, including social scoring systems that govern human behavior, some types of predictive policing, and emotion recognition systems in schools and workplaces. Additionally, police use of AI-powered remote biometric identification systems in public spaces is prohibited, except for serious crimes like kidnapping or terrorism.

In response to the rapid advancements in generative AI models like ChatGPT, the law introduces provisions for regulating these powerful systems. Developers will be required to provide detailed information on the training data used, comply with EU copyright law, and label AI-generated content as artificially generated or manipulated.

Overseeing Systemic Risks

The most advanced models that could pose systemic risks, such as OpenAI’s GPT-4 and Google’s Gemini, face additional scrutiny. Companies providing these systems must assess and mitigate those risks, report serious incidents, implement cybersecurity measures, and disclose their energy consumption.

Global Efforts Toward AI Governance

While the EU has taken the lead, other nations and global organizations are also working on their own AI governance frameworks. The United States, China, Brazil, Japan, and the United Nations are among those developing AI guidelines and regulations.

The Path Ahead

The AI Act is expected to officially become law by May or June, with provisions taking effect in stages over the next few years. Each EU member state will establish an AI watchdog to handle citizen complaints, while Brussels will create an AI Office to enforce and supervise the law for general-purpose AI systems. Violations could result in fines of up to 35 million euros or 7% of a company’s global revenue.

Italian lawmaker Brando Benifei, co-leader of the Parliament’s work on the law, indicated that this is not the EU’s final word on AI regulation, with further legislation possible after the summer elections, including rules on AI in the workplace.
