

European Parliament Overwhelmingly Approves AI Act

Europe Closes in on Rules for Artificial Intelligence
European Parliament President Roberta Metsola, center, in a June 14, 2023, press conference after lawmakers voted for the AI Act (Image: Fred Marvaux/European Union)

European lawmakers on Wednesday voted overwhelmingly in favor of restrictions on the artificial intelligence industry, approving a regulatory package that obliges generative AI model makers to mitigate societal risks and bans a slew of applications, including biometric recognition in public places.


The European Parliament’s approval of the AI Act – with 499 voting in favor, 28 against and 93 abstaining – puts the proposal on track for a final round of negotiations known as the trilogue, involving the Parliament, the European Commission and the Council of the European Union, a body made up of direct representatives of member governments.

“Any time technology advances, it must go hand in hand with our fundamental rights. This regulation represents a new age of scrutiny. This is about Europe taking lead and we will do it our way,” said European Parliament President Roberta Metsola during a post-vote press conference.

The vote comes as Silicon Valley shows no sign of pausing in its race to incorporate AI into products. Analysis published Thursday by consultancy McKinsey finds that generative AI has the potential to automate tasks consuming roughly two-thirds of employees’ time today in professions including marketing and sales, software engineering, and research and development.

Cybersecurity practitioners have worried about generative AI’s ability to make phishing attacks more convincing and to boost low-level criminals’ coding abilities. Others have touted AI’s ability to identify such attempts as cybercrime, in effect fighting AI with AI.

The tech industry has shown mixed receptiveness to closer government oversight. Although far ahead of other global power centers in enacting AI limits, Brussels isn’t alone in contemplating regulation. The Biden administration is examining the creation of an “AI accountability ecosystem,” while Beijing is readying a censorship regime for generative AI.

The European proposal classifies AI systems based on their risks, and the European Parliament is expanding the list of banned applications to include biometric identification systems in publicly accessible spaces; bulk scraping images to create facial recognition databases; and systems that use physical traits, such as gender and race, or inferred attributes, such as religious affiliation, to categorize individuals.

AI systems classified as high-risk, such as those used in critical infrastructure, law enforcement or the workplace, would come under elevated requirements for registration, risk assessment and mitigation as well as human oversight and process documentation. Lawmakers added social media recommendation algorithms to the high-risk list.

Although the EU’s effort to regulate AI began in 2021, well before the unveiling of ChatGPT, the chatbot’s arrival spurred lawmakers into writing new provisions for developers of foundation models. Before placing a product on the market, makers would have to demonstrate that they have mitigated risks to “health, safety, fundamental rights, the environment and democracy and the rule of law.” They would also have to publicly disclose detailed summaries of the copyrighted data used to train the model.

Trilogue talks are set to begin Wednesday evening with the aim of reaching an agreement before January.


