

AI and HR - How can employers reduce the risks associated with using artificial intelligence to help manage their workforce?

The term "Artificial Intelligence" may seem intrinsically linked to the world of the future, conjuring images of evil robots intent on destroying humanity, or humanoid beings eerily similar to you or me. In fact, AI is here today, playing a huge part in the world around us. AI is any electronic device or system which can solve the kind of complex problems we would usually associate with human intelligence, using machine-learning algorithms (an algorithm is essentially a set of instructions for a computer to follow). This includes anything from Siri and Alexa to a self-driving Tesla, and from Netflix's recommendations and personalised thumbnails to Google's sense of direction.

Like most new technologies, AI can make our lives easier and allow us to achieve more. It's therefore no wonder that organisations are beginning to utilise AI in all aspects of employment and people management. People Management highlighted the growing use of AI in recruitment, where AI can help by automating candidate screening, scheduling interviews and maintaining candidate databases. It can perform administrative tasks, including managing holiday entitlement, absences and performance data, and can even support talent management by predicting when an employee might leave.

It's easy to see why employers in the food and drink sector are keen to utilise AI. However, early adopters of any new technology can face risks and challenges, particularly if they do not understand all of the implications of the systems they are using. So, what risks should employers be alive to?

Last year, an Uber Eats driver spoke out after having been dismissed by the company's algorithm. Uber Eats was using facial recognition software (a type of AI) in an attempt to stop anyone other than the registered driver using an account. The software failed to recognise one of its drivers and he was dismissed as a result. Aside from the issues this raises about the gig economy (which we will have to leave for another day), it highlights some of the potential pitfalls of using AI.

Many businesses could benefit from appropriate use of AI such as facial recognition software. For example, food and drink manufacturers could use it to ensure that only qualified staff have access to certain systems in their factories, and to monitor precisely who is doing what task in their warehouses. However, it won't be plain sailing.

Facial recognition software is notoriously worse at recognising black people than white people, and worse at recognising women than men. The dismissed Uber Eats driver said the software used by his employer was "racially discriminatory and should be abolished until perfected". Use of such software, without careful controls, could not only lead to discrimination claims but could also cause enormous reputational damage. Employers should tread carefully and ensure that they put proper controls in place to double-check any decisions made by AI.

AI software runs on algorithms which are trained on large data sets. The algorithm uses the data to "learn" and improve. This means that, if you feed AI a biased data set, you get a biased outcome. Amazon learnt this lesson the hard way in 2014 when it rolled out a new experimental hiring tool, designed to sift through applicants and narrow down the list of potential new recruits. The AI was trained on existing recruitment data, looking at the patterns contained in CVs submitted by successful and unsuccessful candidates over the previous 10 years. The AI was therefore taught to recruit exactly the sort of people that Amazon had recruited previously, which might not have been a bad thing were it not for the unfortunate fact that Amazon had mainly hired men over those 10 years. The algorithm learnt that men are more desirable candidates than women, penalising any applicant who included words such as "women's" in their CV.
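
For readers who want to see the mechanism rather than take it on trust, below is a deliberately tiny Python sketch of how this happens: a classifier trained on biased historical hiring labels ends up assigning a negative weight to the token "women". The CVs, labels and library choice are invented for illustration; this is emphatically not Amazon's actual system.

    # Hypothetical sketch of bias propagation: the CVs and hired/rejected
    # labels below are invented for illustration only.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    cvs = [
        "men's rugby captain, python developer",
        "software engineer, men's chess club",
        "women's chess club captain, python developer",
        "software engineer, women's coding society",
    ]
    hired = [1, 1, 0, 0]  # biased labels inherited from past human decisions

    vectoriser = CountVectorizer()
    X = vectoriser.fit_transform(cvs)
    model = LogisticRegression().fit(X, hired)

    # The token "women" only ever appears in rejected CVs, so the model
    # gives it a negative weight - the bias in the data is now in the model.
    weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
    print(sorted(weights.items(), key=lambda kv: kv[1])[:3])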

One may think that Amazon could have dealt with the problem by removing gendered language from the CVs before the AI got involved. However, to date, this has not proved possible. Research has shown that it's essentially impossible to completely hide gender during the recruitment process: seemingly neutral details, such as word choice, hobbies and previous roles, correlate with gender and act as proxies for it, so AI can identify when an applicant is male or female even where a human being or, indeed, other AI has supposedly de-gendered the CV.

Amazon scrapped the tech in 2018.  Clearly, the use of recruitment AI which exacerbates existing biases is detrimental to diversity and could open employers up to claims for discrimination.

The Register recently highlighted a different discrimination risk associated with AI use in recruitment – disability discrimination.  AI which screens candidates based on gaps in their working history, or which analyses a candidate’s facial expressions or tone during a video interview, may disadvantage disabled applicants.  A manufacturer, using AI to monitor employee productivity in its factory or warehouse, or a distributor using AI to harvest information about driver times, routes and speeds, could likewise fall foul of equality legislation.

Aside from the discrimination risks, employers are also very often under an obligation to provide explanations to employees for certain decisions, as part of the implied term of trust and confidence between employer and employee. How can an employer who has blindly relied on the results of an algorithm it does not understand ever hope to provide such an explanation?

It is therefore vital that employers conduct their due diligence and have some understanding of the tools they are using. Of course, AI systems are so complex that most of us will have no hope of understanding precisely how they work; even those who wrote the code controlling the AI won't be able to tell you why it has given the answer it has. Yet those relying on AI in the workplace should still have some basic knowledge about the tools they are using. What is the purpose of the tool? What characteristics is it assessing? What checks and balances are in place? To what extent is the tool being relied upon to make decisions?

Alongside the discrimination risks, there are potential data protection pitfalls.  UK GDPR limits the circumstances in which employers can make solely automated decisions and requires transparency where such decisions are made.  A "solely automated decision" is a decision where there has been no human influence on the outcome.  For example, if an employer's clocking-in system automatically sends a warning to an employee about punctuality, that would be a decision taken solely by automated means.  However, if the system instead sends a flag to an HR manager, who takes the decision to issue a warning, that decision is not solely automated.  
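
For those who think in code, the hypothetical sketch below puts the two designs side by side; the names, threshold and record structure are invented for illustration and do not come from any real system.

    # Hypothetical sketch of the two clocking-in designs described above.
    # The threshold and record structure are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class ClockIn:
        employee_id: str
        minutes_late: int

    LATE_THRESHOLD = 15  # assumed policy threshold

    def solely_automated(record: ClockIn) -> str | None:
        # The system itself issues the warning with no human input, so this
        # would be a "solely automated decision" for UK GDPR purposes.
        if record.minutes_late > LATE_THRESHOLD:
            return f"Automatic warning sent to {record.employee_id}"
        return None

    def human_in_the_loop(record: ClockIn) -> str | None:
        # The system only raises a flag; an HR manager reviews it and decides
        # whether to warn, so the final decision is not solely automated.
        if record.minutes_late > LATE_THRESHOLD:
            return f"Flag raised for HR review: {record.employee_id}"
        return None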

UK GDPR prohibits solely automated decisions which have a "legal effect" or a "similarly significant effect" on the data subject, unless an exception applies. Hiring and firing would certainly fall under this umbrella, meaning employers are restricted from relying on fully automated decisions in this regard. One exception applies where automated decision-making is "necessary" to enter into a contract, but "necessary" is poorly defined, and it is difficult to imagine an employment decision for which some level of human intervention would not be possible.

Even where decisions are not fully automated, use of AI could potentially lead to other data protection issues. Have a think about the incredibly broad definition of personal data - "any information relating to an identified or identifiable natural person". Imagine that my employer is training new AI software on its workforce data. Say that I am the only person named Briony who has ever worked for my employer, but one day another Briony applies, and the AI software makes certain assumptions about that other Briony based on what it learnt about me. Are those assumptions my personal data? If the AI says "don't hire this new Briony, she is probably a wrong 'un", what does that tell you about me, and does that amount to my personal data?

This might be an overly simplistic example - I doubt that anyone would code AI software to make recommendations based on a single data point - but there really are potential issues here and the answers are not always clear cut.  Employers will need to think carefully about how and why they use employee personal data, ensuring they have a legal basis for each processing operation and are fair and transparent about their actions. 

Ultimately, this is an exciting area and not one to fear.  AI can help to make our lives easier and allow us to make better, more informed decisions.  However, it can only do that if we understand how it works and its limitations and, vitally, consider when it is necessary for an actual human being to step in. 

The above is a general overview and we recommend that independent legal advice is sought for your specific concerns. If you require further information in relation to the points raised in this article, you should contact Briony Richards, who is a Solicitor and member of the Employment and Immigration Team at Charles Russell Speechlys LLP, specialising in employment law. Briony can be contacted at briony.richards@crsblaw.com.
