
Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development, Standards, Regulations & Compliance

European Union Will Enact Comprehensive Regulations on AI

Europe Reaches Deal on AI Act, Marking a Regulatory First
The European Parliament in session in Brussels in 2020. (Image: Shutterstock)

European lawmakers and officials announced a compromise late Friday over a regulation on artificial intelligence in the works since 2021, making the trading bloc the first in the world to comprehensively regulate the nascent technology.


Representatives from the European Parliament, the European Commission and member nations have been in intermittent negotiations since June, including a marathon 22-hour session that began Wednesday and later talks that stretched to nearly midnight in Brussels on Friday. The compromise still requires final approval from Parliament and from the European Council, a body of direct national representatives, a last stage in European lawmaking usually considered a formality (see: EU AI Act Talks Drag on Past Expected End Date).

“Europe has positioned itself as a pioneer, understanding the importance of its role as global standard setter,” said Thierry Breton, the European commissioner for internal market, who had a key role in negotiations.

Noncompliance with the rules can lead to fines of up to 7% of global revenue, depending on the violation and the size of the company.

What the final regulation ultimately requires of AI companies will be felt globally, a phenomenon known as the Brussels effect, since the European Union often succeeds in approving cutting-edge regulations before other jurisdictions do. The United States is nowhere near approving a comprehensive AI regulation, leaving the Biden administration to rely on executive orders, voluntary commitments and existing authorities to combat issues such as bias, deepfakes, privacy and security.

European officials had no difficulty agreeing that the regulation should ban certain AI applications, such as social scoring, or that it should take a tiered, risk-based approach that subjects high-risk systems, such as those that could influence election outcomes, to greater requirements for transparency and disclosure. The regulation also prohibits mass scraping of images from the internet to feed facial recognition algorithms.

Full details of the deal are not known, but a Parliament statement says the compromise bill will allow real-time biometric recognition in public with prior judicial authorization in cases involving searches for victims of crimes such as abduction or trafficking, the prevention of a terrorist threat or the identification of a suspect in a serious crime. Whether to allow real-time recognition was a point of difference between Parliament and the European Council, with parliamentarians favoring stricter prohibitions. The compromise limits retrospective facial recognition searches of video footage to serious crimes.

Parliament says it succeeded in imposing guardrails for foundation models, machine learning models trained on vast amounts of data, a category that includes OpenAI's GPT series. Developers of high-impact foundation models will have to conduct evaluations, assess and mitigate systemic risks, conduct adversarial testing, report serious incidents to the commission, ensure cybersecurity and report on their energy efficiency, lawmakers said.

The deal has not been welcomed everywhere, with European Digital Rights criticizing what it says are loopholes in the compromise. AI developers can evade regulation, the group said, by reporting their systems as falling below the EU's high-risk threshold. It also took issue with the scope of the ban on emotion recognition systems, saying that prohibiting them only in workplaces and educational settings "illogically omits the most harmful uses of all: those in policing and border and migration contexts."

Many of the rules detailed in the regulation will not take effect for a year or more, a fact that has already led some European privacy regulators to voice concerns that dangerous algorithms may gain a foothold in European society before they can be stopped (see: EU Artificial Intelligence Act Not a Panacea for AI Risk).

Some European countries have already begun setting up national AI authorities to enforce the regulation. If Europe's last major technological law, the General Data Protection Regulation, is any guide, full enforcement of the AI Act could take years of challenges and court rulings before regulators apply the full extent of the law, a process that critics of GDPR enforcement say is still playing out over matters such as commercial data transfers to the United States and targeted advertising by online platforms.
