It seems to run counter to the very idea of artificial intelligence, but it is true: even a machine can have prejudices. Machine learning is a delicate process that depends on the quality of the data we train it with, so it is not uncommon for a model to reproduce preconceptions that are common in our society.
For this reason, those who work in artificial intelligence are constantly devising new systems to control bias, especially when it can penalize one social category over another. A clear case is the use of artificial intelligence to select candidates for a job position.
Artificial Intelligence and CVs
Today we can delegate many repetitive and boring tasks to a machine; for example, screening hundreds of resumes.
Why should a recruiter manually browse and analyze every single entry in a candidate’s professional experience, when all they need to do is run the AI and wait for its response?
Candidate analysis can be based on finding the right skills listed in the CV, but also on sentiment analysis of the free-text sections. This is still a pioneering field in Italy, while many foreign companies have already grown accustomed to these mechanisms for personnel selection.
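To make the idea concrete, here is a minimal sketch of such screening in Python, assuming a plain-text CV, a hand-made skill list and a toy sentiment lexicon; real tools use far richer language models, and every name here is hypothetical:

```python
# Minimal sketch of CV screening: keyword-based skill matching plus a
# naive lexicon-based sentiment score for the free-text section.
# The skill list and the lexicon are invented, for illustration only.

REQUIRED_SKILLS = {"python", "sql", "machine learning"}
POSITIVE_WORDS = {"achieved", "led", "improved", "delivered"}
NEGATIVE_WORDS = {"failed", "struggled"}

def match_skills(cv_text: str) -> float:
    """Fraction of required skills found verbatim in the CV text."""
    text = cv_text.lower()
    found = {skill for skill in REQUIRED_SKILLS if skill in text}
    return len(found) / len(REQUIRED_SKILLS)

def sentiment(free_text: str) -> float:
    """Crude lexicon sentiment: (positives - negatives) per word."""
    words = free_text.lower().split()
    score = sum(w in POSITIVE_WORDS for w in words) \
        - sum(w in NEGATIVE_WORDS for w in words)
    return score / max(len(words), 1)

cv = "Led a Python and SQL project; improved machine learning pipelines."
print(match_skills(cv), sentiment(cv))  # 1.0 and a small positive score
```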
The LinkedIn Case
The best-known platform among human resources departments is undoubtedly LinkedIn, which connects recruiters with candidates for a specific job position. On one side, certain job openings are suggested only to ideal candidates; on the other, the best of these candidates are "suggested" to human resources. A sort of recommendation made by an artificial intelligence.
Artificial Intelligence and Discrimination
The problem arose when LinkedIn noticed that this selection produced apparent biases.
The artificial intelligence rewarded those who tended to respond more promptly to job ads, or who described their past experiences in greater detail, highlighting the skills they had acquired. Candidates who applied for positions more ambitious than their current role were also given preference.
By “given preference” we mean that these candidates were displayed in the recruiter’s LinkedIn dashboard under the heading “recommended profiles”.
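To see how purely behavioural signals can end up favouring one group, here is a hypothetical scoring sketch; the features echo those described above, but the weights and formula are invented for illustration, since LinkedIn has not published its model:

```python
# Hypothetical ranking score built from the behavioural signals described
# above: prompt replies, detailed experience descriptions, ambitious
# applications. None of these features mentions gender or ethnicity, yet
# each can still correlate with a social group and skew the shortlist.

def recommendation_score(reply_hours: float,
                         description_words: int,
                         ambition: float) -> float:
    promptness = 1.0 / (1.0 + reply_hours)        # faster replies score higher
    detail = min(description_words / 300.0, 1.0)  # capped detail score
    return 0.4 * promptness + 0.4 * detail + 0.2 * ambition

# A candidate who answers within an hour, writes rich descriptions and aims
# one level above their role outranks an equally skilled one who replies
# after two days with a terse CV.
print(recommendation_score(1, 350, 0.8))   # ~0.76
print(recommendation_score(48, 80, 0.2))   # ~0.16
```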
So what was the problem?
In essence, that these candidates all belonged to a specific social group.
Fixing AI with Statistical Parity
In the field of algorithms applied to candidate selection, the problem of statistical parity comes into play: a theory at the crossroads of philosophy and computer science which requires that the demographics of a sample also be reflected at its top.
Here, the candidates suitable for a profession are the sample, while the “recommended profiles” that LinkedIn shows the recruiter are the top.
In practice, this means that if a certain social group makes up 10% of the candidates suitable for a profession, and not even one of the 10 candidates suggested to the recruiter belongs to this group, then the artificial intelligence applies a corrective and inserts one (10%).
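Here is a minimal sketch of such a corrective, under the assumptions of the example above (a 10-slot shortlist and a group that makes up 10% of the qualified pool); it is a generic post-processing re-rank written for illustration, not LinkedIn's actual code:

```python
import math

# Post-processing corrective for statistical parity: if a group's share of
# the top-k shortlist falls below its share of the qualified pool, promote
# that group's highest-ranked members into the shortlist.
# Candidates are (id, group) pairs, already sorted by model score.

def parity_rerank(ranked, k, group, pool_share):
    quota = math.floor(pool_share * k)   # e.g. 10% of 10 slots -> 1
    shortlist = list(ranked[:k])
    in_list = [c for c in shortlist if c[1] == group]
    missing = quota - len(in_list)
    if missing <= 0:
        return shortlist                 # parity already satisfied
    # Best-ranked members of the group sitting just outside the shortlist.
    promotions = [c for c in ranked[k:] if c[1] == group][:missing]
    # Drop the lowest-ranked non-group members to make room.
    for promoted in promotions:
        for i in range(k - 1, -1, -1):
            if shortlist[i][1] != group:
                shortlist[i] = promoted
                break
    return shortlist

# Toy pool: 30 ranked candidates, only the 16th belongs to group "B".
ranked = [(f"c{i}", "B" if i == 15 else "A") for i in range(30)]
print(parity_rerank(ranked, k=10, group="B", pool_share=0.10))
# c15 replaces the last of the 10 suggested profiles.
```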
At least that’s what LinkedIn is committed to doing from now on.
We can say that the biases of AI programming are corrected by further AI.
Artificial Intelligence Programming and Meritocracy
The idea behind this fix might be that machine learning is still too raw to be meritocratic.
Sometimes the results of machine learning escape our control, and when social contexts are involved the risk is that the machine develops what statistics and laboratory science call distortions (bias). LinkedIn is already a rather virtuous case, because the company's top management declares that the algorithm is blind to gender, ethnicity, and photos. In other human resources selection mechanisms, by contrast, this blindness is not necessarily declared.
For this reason, the great challenge for intelligent algorithms is to truly reward merit, avoiding prejudices that may derive from human habit, from our history, and from our personal beliefs.
It is important to pursue this goal, because today it is very difficult to give up the advantages that artificial intelligence brings to businesses.
In the human resources sector, AI helps not only with CV screening but also with gamification practices used to encourage teamwork, to train new hires, or to gather basic information on candidates for certain job positions.
In short, we have nothing to fear from a robotic ally.
It is up to the human intellect, however, to program it correctly.