Government Currently Focused on Assessing AI Risks, Fostering Innovation

UK in No Rush to Legislate AI, Technology Secretary Says
Secretary of State for Science, Innovation and Technology Michelle Donelan speaking Wednesday before the U.K. Parliament (Image: U.K. Parliament)

The U.K. government is in no rush to legislate artificial intelligence, Secretary of State for Science, Innovation and Technology Michelle Donelan said on Wednesday, warning that a hard regulatory approach to AI could risk stifling innovation in this emerging sector of the economy.

Donelan testified on Wednesday before Parliament’s Science, Innovation and Technology Committee.

In an August letter, committee lawmakers raised concerns about the government not prioritizing AI regulation, stating that the U.K.’s slow response in regulating the technology could erode Britain’s position “as a center of AI research” (see: UK Lawmakers Call for Swift Adoption of AI Policy).

Committee members’ concerns increased following the European Union’s AI Act, which they warned could be “difficult to deviate” from, citing the global heft of Europe’s General Data Protection Regulation as an example.

At the hearing on Wednesday, Stephen Metcalfe, member of Parliament for South Basildon and East Thurrock, asked whether the EU's regulation gave the trading bloc any advantage over the U.K.

Donelan, who heads the Department for Science, Innovation and Technology, or DSIT, said the agency is in no rush to legislate the technology. The government's goal, she said, is to assess the risks posed by the technology before regulating it.

“There are downsides to legislation because it takes too long, as the technology develops at a faster pace,” Donelan said. “We are not saying that we will never regulate AI; rather, the point is: We don’t want to rush and get it wrong and stymie innovation.”

Unlike the EU’s AI Safety Institute, which will likely be functional within the next two years, the U.K.’s Safety Institute, announced on the sidelines of the U.K. AI Safety Summit, has already begun work and is in a position to evaluate models now, Donelan said.

She also touted voluntary commitments from top AI companies to audit their algorithms before releasing them to the market - measures that will help the U.K. oversee the technology in the absence of a statute, she said.

An AI policy paper published in March directed the country’s data, competition, healthcare, media and financial regulators to monitor AI within their respective jurisdictions, resulting in independent scrutiny by those agencies.

Last week, the U.K. Competition and Markets Authority announced a probe into Microsoft’s stake in ChatGPT maker OpenAI, and the Information Commissioner’s Office imposed a fine of 7.5 million pounds on Clearview AI for privacy violations.

Some experts have previously warned that this approach risks fragmentation and duplication of regulation across departments (see: UK’s AI Leadership Goal ‘Unrealistic,’ Experts Warn).

When committee chair Greg Clark asked how the government intends to tackle potential policy fragmentation, Donelan said her department is working to empanel a central regulatory body to coordinate AI oversight.

“One of its key functions will be horizon-scanning to help the regulators identify some of the gaps in their policy implementation and support their operations,” she said.

In November, the U.K. National Cyber Security Centre warned that malicious actors are likely to take advantage of developments in artificial intelligence to disrupt the U.K.’s general election slated to be held in 2025 (see: UK NCSC Highlights Risks to Critical Infrastructure).

Mark Clarkson, member of Parliament for Heywood & Middleton, questioned the measures undertaken by DSIT to tackle AI-generated deepfakes targeting election security. Donelan said the government is working with allied nations and tech companies, including social media platforms, to develop guidelines for watermarking AI content and identifying AI-generated materials.

“Do I expect that by the next general election, we will have robust mechanisms in place that will be able to tackle these topics? Absolutely, yes,” Donelan said.
