European parliamentarians have overwhelmingly approved the Artificial Intelligence Act, which will outlaw certain technology use cases and require AI solution providers to be transparent about their systems. Now the work of enforcing it begins.
Overview of the AI Act
The AI Act has been hotly contested since it was proposed in 2021—some of its provisions, including a complete ban on biometric mass public surveillance systems, had to be watered down at the last minute. Other regulations will take years to enforce. The legal text of the document is still awaiting final approval, and it is expected to come into force around May or June. We'll keep you updated as the process unfolds.
Prohibited and High-Risk AI Systems
Prohibited systems include social scoring platforms, emotion recognition systems in workplaces and educational institutions, and systems designed to manipulate citizens' behavior or exploit their vulnerabilities. The law will not affect American tech giants until 2025, so OpenAI, Microsoft, Google, and Meta will continue to compete for dominance, taking advantage of legal uncertainty in the United States, adds NIX Solutions.
Impacts and Controversies
French President Emmanuel Macron criticized the AI Act, saying it creates a harsh regulatory environment that discourages innovation. Some European AI companies may find it harder to raise funds, giving their American competitors an advantage. Even so, the AI Act may set an example for policymakers in other regions, including the United States, who have yet to propose comprehensive regulation of the industry.
Despite the controversies, the AI Act represents an important step toward regulating AI in the EU. Its relatively transparent and widely debated rulemaking process has given the AI industry an idea of what to expect. While it may not lead to dramatic changes overnight, the document clearly signals where the region's authorities stand on AI.
We’ll keep you updated as the AI Act continues to shape the future of AI regulation in Europe and beyond.