Human or machine: the computer says “no”

Organizations are increasingly using algorithms and automated decision-making to help them make decisions about people, but how far is this a step in the right direction?

Many employers are now incorporating algorithms and automated decision-making into hiring and other personnel processes. The London School of Economics and Political Science recently reported that more than 60% of businesses have adopted new digital technologies and management practices as a result of COVID-19. Our own Dentons AI survey found that 60% of companies are using or piloting AI today.

While these AI tools offer benefits to an organization, such as speed and cost savings, employers should be aware of the legal implications of over-reliance on AI without a proper understanding of the legal risks and without appropriate controls in place.

Consideration of data protection law

The UK GDPR (the GDPR as it applies in the UK post-Brexit) provides that data must be processed “lawfully, fairly and transparently”. The complexity of explaining AI decision-making processes presents a challenge here. However, what matters most is clarity about the logic applied (if not the technical details), and technologies are also being developed to help provide that clarity.

Additionally, companies must ensure that the processing of employee data does not have an undue detrimental effect on the individual. This would be “unfair” treatment. It would also likely mean that they could not rely on “legitimate interests” as a legal basis for processing (and another basis for processing would be required).

When organizations use algorithms to process special category data (e.g. health, race and religion), greater protection is required. Explicit consent is most likely needed, unless the data is used in a way that is necessary to comply with employment law requirements. Those circumstances are likely to be limited in practice, and it is difficult to rely on consent in an employment context (see below). The use of special category data to support AI should therefore be considered very carefully.

In addition, there are rules on “automated decision-making”. The UK GDPR specifically prohibits “solely automated decision-making that has a legal or similarly significant effect”, unless:

  • you have the explicit consent of the person;
  • the decision is necessary to enter into or perform a contract; or
  • it is authorized by domestic law.

The first question here is whether the outcome of the AI decision has a “legal or similarly significant effect”. Not all AI decisions will meet this threshold. However, many in an employment context will – candidate screening tools, role suitability assessment tools and so on.

The grounds permitting this activity set a high bar for employers to meet. Consent may appear to be the most relevant in an employment context, but there is a risk that the power imbalance between a job applicant and a potential employer means consent is not considered freely given (and, as such, is invalid). Where consent is relied on as the basis for processing, organizations should also bear in mind that individuals have the right to refuse or withdraw consent at any time, without detriment (in practice, this means they could have the right to move to a process that does not involve automation).

What is “necessary” to enter into a contract can be difficult to establish. Guidance from the Information Commissioner’s Office states that the processing must be a targeted and proportionate step that is integral to the provision of the contracted service or to taking the requested action. This exemption will not apply if another decision-making process involving human input is available.

Thus, targeted use of this technology in these important decision-making processes – supporting, rather than replacing, a human decision – will be necessary to avoid running into GDPR hurdles.

Before introducing algorithms and automated decision-making as part of any process, organizations should prepare a data protection impact assessment (DPIA) to identify, analyze and minimize data protection risks and ensure compliance with the UK GDPR.

Consideration of the Equality Act 2010

Algorithms are human-created and as such are inherently susceptible to certain biases. A significant concern could arise if the algorithm inadvertently leads to discrimination in violation of the Equality Act.

For example, an automated recruitment system could discriminate if it:

  • favors one gender over another (including scoring language more commonly used by male applicants more highly than language more commonly used by female applicants);
  • places disproportionate weight on length of service in previous roles over experience/skills, which could create a risk of age discrimination; or
  • does not recognize foreign qualifications in the same way as those from the UK (which could expose an employer to claims of racial discrimination).
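
To make the risk of bias concrete, below is a minimal sketch in Python of one common statistical screen, the “four-fifths rule”, which flags any group whose selection rate falls below 80% of the highest group’s rate. The function names, group labels, sample data and threshold are all illustrative assumptions, not drawn from the Equality Act or any regulator’s guidance; this is a rough screening heuristic, not a legal test of discrimination.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute the selection rate per group from (group, selected) pairs."""
    totals, hits = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        if selected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate -- a statistical screen, not a legal test."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate < threshold * best for g, rate in rates.items()}

# Hypothetical shortlisting outcomes: (group, whether shortlisted)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(four_fifths_flags(sample))  # {'A': False, 'B': True} -> group B flagged
```

In practice, any such check would need to run across each protected characteristic, on meaningful sample sizes, and trigger human review of the tool rather than replace it.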

Any automated decision-making process that does not safeguard against disability discrimination and provide for reasonable adjustments could also put the employer at risk. There are examples of people whose disability affects their ability to answer multiple-choice tests satisfactorily, despite being able to answer the same questions well in free text. An automated process that does not incorporate flexibility (including appropriate triggers for human review) could lead to equality issues.

A robust AI tool may recommend recruitment candidates who surprise an organization. We know that diverse teams work well, but that is not always reflected in hiring decisions: diversity and a range of personality types can challenge existing (often unconscious) preferences for team cohesion. This could lead recruiters to wonder whether the AI tool got it wrong and needs to be changed or cancelled, or whether it has instead exposed a potential bias in the human decision-making process that had gone unchecked until now.

Considerations for employers

Bias and discrimination can unfortunately be found in AI tools, often unintentionally, stemming from the humans who program them or from inherent biases in the datasets used to “train” the technology.

Despite this, AI can also be the solution (or at least a useful part of it) in arriving at fairer decisions. As the technology continues to develop, algorithms can be programmed to detect, and hopefully reduce, discrimination and bias in decision-making. And, perhaps, we should be willing to accept some startling results from AI that actually correct for unidentified biases in the human decision-making process (robots 1, humans 0).