From online games that select the best candidate to software that analyzes cover letters, A.I. is advancing in the recruitment and selection of new personnel. What are the opportunities and risks of this development? And how can we limit the risks as much as possible? Associate professor Pascal Wiggers and researcher Hans de Zwart of the Amsterdam University of Applied Sciences investigated this on behalf of the City of Amsterdam. Their research report ‘If the machine chooses’ will be published this week.
A.I. is increasingly used in the recruitment and selection of new personnel. Algorithms determine who sees a vacancy, or which candidate comes out best in an online assessment. But does it lead to fairer chances when the machine chooses instead of a recruiter? In some cases A.I. can help with this, but it also entails significant risks, say associate professor of Applied A.I. Pascal Wiggers and researcher Hans de Zwart (AUAS). In their report ‘If the machine chooses: diversity and inclusion’, they describe the opportunities and risks of this development.
The researchers interviewed more than 20 suppliers of recruitment and selection software, recruitment specialists, and organizations that use this type of software. They also conducted a literature review.
What opportunities does A.I. offer?
In the report, the researchers first describe the following opportunities of artificial intelligence for fairer recruitment and selection:
A.I. is already being used from the first phase: the vacancy. For example, there is software that screens vacancy texts for inclusive language and for ‘bias’, i.e. unconscious preferences. It is known, for example, that mainly men respond to vacancies listing certain required skills, such as ‘stress resistant’ or ‘commercial’. The software suggests synonyms that women identify with more readily.
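To illustrate the idea, here is a minimal sketch of how such a vacancy-text screener could work. The flagged terms and suggested alternatives below are purely illustrative assumptions, not taken from any actual product mentioned in the report:

```python
# Hypothetical word list: masculine-coded terms mapped to more neutral
# alternatives. Real tools use validated, research-based lexicons.
GENDER_CODED = {
    "stress resistant": "able to stay calm under pressure",
    "commercial": "customer-oriented",
    "competitive": "results-driven",
}

def screen_vacancy(text: str) -> list[tuple[str, str]]:
    """Return (flagged term, suggested alternative) pairs found in the text."""
    lower = text.lower()
    return [(term, alt) for term, alt in GENDER_CODED.items() if term in lower]

flags = screen_vacancy("We seek a commercial, stress resistant account manager.")
# flags now contains the two coded terms with their suggested replacements
```

A production tool would also handle word boundaries, inflections, and context, but the core mechanism is this kind of lexicon lookup with rewrite suggestions.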
For the selection phase, there are A.I. tools that go through resumes and remove personal details, such as name and gender. There is also software that analyzes which candidates have certain skills.
Tapping into a larger pool
Certain software also makes sourcing more targeted: it can search across a larger number of databases to find candidates or groups that were not previously visible. Thanks to A.I., organizations can thus draw on a larger and more diverse pool of candidates.
Bias (sometimes) more controllable
The outcomes of A.I. systems are in some cases more visible and measurable than the results of a selection process carried out by recruiters; human bias, by contrast, is harder to measure.
When an outcome produced by A.I. gives cause for concern – as in the well-known Amazon experiment – and an algorithm turns out to contain bias, an organization can adjust the algorithm.
Finally, A.I. offers efficiency benefits. Large organizations in particular can use it to automate parts of the recruitment and selection process: a manager or HR employee no longer has to read hundreds of letters personally.
What are the risks?
The researchers also mention the risks of the increasing use of A.I. in recruitment and selection:
Status quo strengthened
The first real risk of A.I. is that it further reinforces existing bias. For example, software is already in use that automatically creates a profile of the ideal candidate when a vacancy arises. The software compiles this profile from the qualities of an organization's best-performing employees: the current workforce.
Suppose the majority of the current workforce is male and under 40; these characteristics are then taken into account, and there is a good chance that an A.I. system will link them to success.
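The mechanism can be made concrete with a toy example. The data below is hypothetical and exaggerated for clarity; it is not from the report. A naive model that scores candidates by their group's historical success rate will simply reproduce the composition of the past workforce:

```python
# Toy historical data: (gender, "successful hire") pairs from a workforce
# that is mostly male. Hypothetical numbers, for illustration only.
history = (
    [("m", True)] * 8 + [("m", False)] * 2 +
    [("f", True)] * 1 + [("f", False)] * 1
)

def success_rate(gender: str) -> float:
    """Fraction of past hires in this group marked as successful."""
    group = [ok for g, ok in history if g == gender]
    return sum(group) / len(group)

# Because most past hires were men, the 'model' now treats being male
# as a proxy for success, even though gender says nothing about ability.
print(success_rate("m"))  # 0.8
print(success_rate("f"))  # 0.5
```

Real systems are far more sophisticated, but the underlying failure mode is the same: patterns in historical data are mistaken for predictors of future performance.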
Large amounts of reliable data needed
This immediately exposes the next problem: only with large amounts of reliable data can an A.I. system reach reliable conclusions. Companies must gather this data themselves, which entails considerable costs and labor hours.
Specific persons can be disadvantaged
Because A.I. systems are based on statistics, they do not deal well with individuals who deviate from the norm in one way or another. Even measures that make outcomes fairer for certain groups cannot protect specific individuals. It is therefore difficult for artificial intelligence to properly assess a CV that deviates strongly from the norm.
Bias becomes systematic
Carefully selecting data and setting requirements for outcomes help to counteract bias in an A.I. system. However, this does not guarantee that a system will not discriminate: the technology is so complex that forms of bias can still escape attention. The risk is that such bias becomes structural as the technology scales up.
Conclusion: certainly not flawless
The conclusion, according to the researchers: artificial intelligence is certainly not flawless. No tool is 100 percent reliable, and by itself