Rules on artificial intelligence: the European Union is working on it!

Stephen Hawking predicted in 2014 that artificial intelligence might one day destroy humanity… While that may still sound like science fiction, fear of the consequences of artificial intelligence is currently on the rise. This, of course, has everything to do with the rapid development of intelligent systems – of which ChatGPT in particular has been in the spotlight recently.

The end of humanity may still be a bit far-fetched, but the Dutch childcare benefits scandal (“Toeslagenaffaire”), for example, already showed that AI systems can intentionally or unintentionally lead to discriminatory outcomes. Based on, among other things, nationality, family composition and salary, an algorithm of the Dutch Tax Administration decided who was subjected to manual checks. The Dutch Data Protection Authority announced in January that it would start additional monitoring of “life-threatening” algorithms.

And recently, 1,100 prominent tech figures even called for a temporary brake on the development of artificial intelligence in an open letter. We should all take at least six months to think about how to plan and control the development of artificial intelligence with care. According to the letter, AI labs are currently caught up in a race to develop increasingly powerful digital minds that even their creators can no longer understand, predict or reliably control. Advanced artificial intelligence could radically alter the history of life on Earth, the letter’s signatories argue. Elon Musk, one of the founders of OpenAI and a co-signatory of the letter, announced – somewhat surprisingly – just last week that he was working on a new algorithm: TruthGPT, an alternative to Microsoft and Google’s algorithms.

Regulating artificial intelligence

Legislators have not been idle in recent years either, although their work has received considerably less attention. The European Union has been working on a new regulation to govern artificial intelligence since 2017, and it will be some time yet before it actually enters into force.

In the draft proposal, the European Commission considers that artificial intelligence can help achieve beneficial social and environmental outcomes and provide important competitive advantages for businesses and the European economy. At the same time, the Commission signals that artificial intelligence may also entail new risks or negative consequences for individuals and society. Given the speed of technological change and the potential challenges, the new regulation should deliver a balanced approach to artificial intelligence.

Main purposes

On the one hand, the regulation aims to ensure that AI applications are safe and comply with European Union values; on the other, it regulates AI applications economically. In summary, the new regulation should:

- ensure that AI systems placed on the European market are safe and respect existing law on fundamental rights and Union values;
- ensure legal certainty in order to facilitate investment and innovation in AI;
- enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
- facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

Prohibited and risky AI systems

The draft regulation puts these words into action and immediately includes a long list of prohibited AI applications in Article 5. For example, AI applications that distort the behaviour of individuals beyond their awareness are prohibited by definition, as are applications that exploit the vulnerabilities of specific groups – for example, people with disabilities – in a way likely to cause physical or mental harm.

Article 5 also prohibits government agencies from using AI applications to assess or classify the trustworthiness of individuals based on their social behaviour or personality traits, if the resulting score leads to unfavourable treatment of those individuals that is disproportionate to their social behaviour. Such a system may sound like something that could only exist in TV series like Black Mirror, but a social credit system is currently in active use in China. A lower social status in that system could, for example, make it harder to obtain a mortgage. And in 2019, it was even reported that millions of Chinese citizens with a lower social score were prevented from buying plane and train tickets. As far as we are concerned, a very welcome ban in the draft regulation.

The regulation also includes a definition of high-risk AI systems. Systems qualify as high-risk when, for example, they pose risks to physical or mental health, or risks to fundamental rights. Such systems must meet various conditions, including technical ones. For example, they must incorporate a “risk management system” that identifies and analyses certain risks on an ongoing basis.

To be continued

In summary, developments in the field of artificial intelligence are moving fast, and concerns about its potential negative social consequences are running high. At the same time, serious legislation is in the pipeline within the European Union to regulate artificial intelligence and safeguard fundamental rights. We are following these developments with great interest and will no doubt blog about them more often in the coming period.
