China and Europe lead efforts to regulate AI

A robot plays the piano at the Apsara Conference, a conference on cloud computing and artificial intelligence, in China on October 19, 2021. As China revamps its technology regulations, the European Union is developing its own regulatory framework to curb AI but has yet to cross the finish line.

Str | AFP | Getty Images

As China and Europe try to rein in artificial intelligence, a new front is opening up around who will set the standards for the burgeoning technology.

In March, China rolled out regulations governing how online recommendations are generated by algorithms, suggesting what to buy, watch or read.

It is the latest salvo in China’s tightening grip on the tech sector and sets an important marker in how AI is regulated.

“For some people, it was a surprise that last year China started drafting AI regulations. It is one of the first major economies to put it on the regulatory agenda,” Xiaomeng Lu, geo-technology practice director at Eurasia Group, told CNBC.

As China revamps its rulebook for the technology, the European Union is crafting its own regulatory framework to curb AI, but it has yet to cross the finish line.

With two of the largest economies in the world showcasing AI regulations, the field of AI development and business globally could be on the verge of a significant shift.

A global playbook from China?

At the heart of China’s latest policy are online recommendation systems. Companies must tell users if an algorithm is used to show them certain information, and people can choose not to be targeted.

Lu said this is a significant change because it gives people more control over the digital services they use.

These rules come amid a changing environment in China for its biggest internet companies. Several of China’s tech giants, including Tencent, Alibaba and ByteDance, have found themselves in hot water with authorities, particularly over antitrust issues.


“I think these trends have changed the attitude of the government on this considerably, as they are starting to look at other questionable market practices and algorithms promoting services and products,” Lu said.

China’s measures are remarkable for the speed at which they were implemented, compared with the timeframes other jurisdictions typically need to pass regulations.

China’s approach could provide a playbook that influences other laws internationally, said Matt Sheehan, Asia program fellow at the Carnegie Endowment for International Peace.

“I see China’s AI regulations and the fact that they’re going first as basically running large-scale experiments that the rest of the world can watch and potentially learn something from,” he said.

The European approach

The European Union is also developing its own rules.

The AI Act is the next major piece of tech legislation on the bloc’s agenda in a busy few years.

In recent weeks, the EU closed negotiations on the Digital Markets Act and the Digital Services Act, two major regulations that will rein in Big Tech.

The AI Act now seeks to impose a sweeping framework based on the level of risk an AI system poses, which will have far-reaching effects on the products a company can bring to market. It defines four categories of AI risk: minimal, limited, high and unacceptable.

France, which holds the rotating presidency of the Council of the EU, has put forward new powers for national authorities to audit AI products before they reach the market.

Defining these risks and categories has at times proved difficult, with members of the European Parliament calling for a ban on facial recognition in public places to limit its use by law enforcement. The European Commission, however, wants to ensure it can still be used in investigations, while privacy campaigners fear it will increase surveillance and erode privacy.

Sheehan said that while China’s political system and motives are “completely anathema” to European lawmakers, the technical goals of the two sides have many similarities, and the West should pay attention to how China approaches them.

“We don’t want to emulate any of the ideological or speech controls that are being deployed in China, but some of these issues from a more technical standpoint are similar across different jurisdictions. And I think the rest of the world should be watching what comes out of China from a technical point of view.”

China’s efforts are more prescriptive, he said, and include rules for recommendation algorithms that could curb tech companies’ influence on public opinion. The AI Act, by contrast, is a broad effort that seeks to bring all AI under one regulatory roof.

Lu said the European approach will be “more onerous” for companies because it will require pre-market assessment.

“It’s a very restrictive system compared to the Chinese version, where they basically test products and services in the market, not before those products or services are introduced to consumers.”

“Two different worlds”

Seth Siegel, global head of AI at Infosys Consulting, said that because of these differences, a schism could form in the way AI develops on the global stage.

“If I try to design mathematical models, machine learning and AI, I will take fundamentally different approaches in China compared to the EU,” he said.

At some point, China and Europe will dominate how AI is controlled, creating “fundamentally different” pillars on which the technology will thrive, he added.

“I think what we’re going to see is that techniques, approaches and styles are going to start to diverge,” Siegel said.

Sheehan disagrees that there will be a splintering of the global AI landscape due to these different approaches.

“Companies are getting a lot better at adapting their products to different markets,” he said.

The biggest risk, he added, is that researchers will be sequestered in different jurisdictions.

AI research and development crosses borders, and all researchers have a lot to learn from each other, Sheehan said.

“If the two ecosystems cut the ties between technologists, if we prohibit communication and dialogue from a technical point of view, then I would say that poses a much greater threat of having two different universes of AI that could end up being quite dangerous in how they interact with each other.”
