Every new technology eventually faces regulation, and now it is artificial intelligence's turn. The most recent news dates to April 21, 2021, the historic date on which the European Commission presented its proposal for a regulation on AI.
The proposal is only at the beginning of its legislative journey: it still has to be approved by the European Parliament. In the meantime, though, we can already get an idea of how European institutions intend to regulate artificial intelligence.
Let's look at the main points of the proposed regulation.
1) The aim of the European Regulation on AI
The first question many people ask is: why regulate artificial intelligence at all? First of all, it should be noted that this is not a directive but a regulation, to which member states will then have to adapt.
The primary aim of a European regulation on AI is to protect users from violations of fundamental rights and other related harms, including breaches of privacy.
So, will this hold back companies working on innovation and AI? Many are asking, but we are optimistic: the Union's intent is to provide a common set of rules that offers an ethical reference point and prevents artificial intelligence from getting out of hand.
Moreover, from a company's point of view, end users will place more trust in AI itself, which has positive implications for the business.
2) To which companies does the new regulation apply?
The new regulation applies to all companies that produce, import, or distribute AI technologies in Europe. This means that a company importing AI built by others for use on European soil will have to comply with the rules in force here.
The impact of the regulation will not be the same for all companies.
After all, using a chatbot on your e-commerce site is not the same thing as using biometric facial recognition data to build a sort of "public report card" on citizens, or deploying so-called gait recognition.
For this reason, every company will have to understand which risk band its AI technology falls into (a sketch of such a mapping follows the four bands below).
3) The new risk bands for artificial intelligence and privacy
For companies, there are four main risk bands.
1 - Low risk
The low-risk band covers all companies that use artificial intelligence for fairly mechanical tasks with little impact on user rights: email spam filters, for example.
2 - Limited risk
Chatbots for e-commerce or online customer service fall into the limited-risk band. In the future it could become mandatory to disclose that users are talking to a machine rather than a human, and to label a deepfake video as manipulated content.
3 - High risk
The high-risk band may include all systems that use artificial intelligence to process large amounts of data, including sensitive and/or personal data, and draw conclusions from them that directly affect people's lives.
Think of the transport sector, or of access to work and education: the automatic screening of CVs for a job opening, for example.
In this risk range, artificial intelligence comes into contact with fundamental human rights.
4 - Unacceptable risk
Some applications of AI directly manipulate user behavior; take real-time biometric recognition applied to social control, for example. Under the proposal, practices in this band are considered unacceptable and would be banned outright.
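To make the idea concrete, here is a minimal sketch, in Python, of how a company might start mapping its AI use cases to the four bands described above. The band names and example use cases come from this article; the mapping itself, the RiskBand enum, and the classify helper are illustrative assumptions, not legal guidance.

```python
# Illustrative sketch only: mapping example AI use cases to the four
# risk bands in the proposed EU regulation. The bands are from the
# proposal; the use-case mapping below is an assumption for illustration.

from enum import Enum


class RiskBand(Enum):
    LOW = "low"                    # e.g. email spam filters
    LIMITED = "limited"            # e.g. chatbots (transparency duties)
    HIGH = "high"                  # e.g. CV screening, transport, education
    UNACCEPTABLE = "unacceptable"  # e.g. biometric social control


# Hypothetical mapping of the use cases mentioned in the article.
USE_CASE_RISK = {
    "email_spam_filter": RiskBand.LOW,
    "ecommerce_chatbot": RiskBand.LIMITED,
    "deepfake_video": RiskBand.LIMITED,
    "automated_cv_screening": RiskBand.HIGH,
    "realtime_biometric_social_control": RiskBand.UNACCEPTABLE,
}


def classify(use_case: str) -> RiskBand:
    """Return the assumed risk band for a known use case."""
    try:
        return USE_CASE_RISK[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case: {use_case!r}; assess it individually.")


if __name__ == "__main__":
    print(classify("automated_cv_screening"))  # RiskBand.HIGH
```

In practice the classification will depend on the final text of the regulation, so a table like this would be a starting point for an internal audit, not a substitute for legal review.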
In conclusion, with this regulation another milestone has been reached, after the white paper on artificial intelligence and the GDPR on privacy. Approval by the European Parliament will probably take a few years.
To understand the practical implications of the regulation for digital companies and startups, we will have to wait for the Italian State to adapt its national framework at the end of the European process.
In the meantime, we can get an idea of the practical applications of artificial intelligence for business.
To be continued!