June 9, 2023
You can’t seem to go onto social media lately without seeing a post about AI and its use in businesses. It's everywhere! But as UK legislation continues to play catch-up with technological developments, there is still (?!) no specific regulation that addresses the growing use and applications of artificial intelligence (AI).
Well, rapid advances in this technology (particularly emerging tech) are forcing employers and professionals alike to turn to existing legal frameworks instead. In their attempts to grasp how AI interacts with the law – including employment law – businesses are more exposed to risk than ever whilst we are still in the early iterations of the tech.
Before we get into it, let’s take a look at what AI is all about.
“Artificial intelligence” or “AI” is a bit of a catch-all term for machines that can replicate human behaviours. The technology has been tested and improved over many years, with the ultimate goal of creating a seamless simulation of programmable human intelligence (I, Robot anyone?!).
AI tools and technology are set to supercharge employers’ ability to screen CVs at a faster pace, analyse performance data to predict how many – and which – employees are likely to leave within 12 months of starting a new role, and more! Pretty nifty, right?
Generative AI can make the interactions between employers and employees customisable and more personable at a larger scale, guiding HR teams through using natural language ask-and-respond mechanisms, digesting and summarising a high volume of content such as employee reviews and feedback, and even acting as a first response for employee engagement and productivity support.
In this blog, we discuss why AI is relevant to employers and HR professionals, with a deep dive into its applicability and influence on recruitment, reviews and dismissals (including redundancies), and discrimination claims, as well as some commentary on how the UK Government is approaching the situation.
Let’s get stuck in!
To begin at the beginning – you’re thinking of hiring some new staff and wondering whether your good friend artificial intelligence can help. Although the whole point of AI is mimicking human behaviours and fulfilling what are regarded as mundane tasks, there are risks that come with its use, especially when it comes to essentially judging candidates on their suitability for a job.
Unconscious bias (while still a controversial concept) is of increasing concern to companies in general, but AI might not be the way to resolve the issues presented. At this stage, and to effectively tackle this ongoing problem, businesses should review their environmental, social and governance (“ESG”) and diversity plans, taking a firm-wide, neutral approach to excavating and removing bias. Employers could consider introducing training (both during onboarding and throughout service) and organising discussion groups on how to be more aware of bias and inequality, rather than solely relying on shiny new automated processes.
For larger companies, this is becoming more relevant with increased spotlight on gender pay gap reporting and diversity targets, as well as escalating pressure from the public calling for employers to demonstrate a proper commitment to equality. In 2022, we saw the UK’s Information Commissioner stepping up investigations into AI, whilst other bodies, such as The Alan Turing Institute, have noted that, “The use of data-driven AI models in recruitment processes raises a host of thorny ethical issues, which demand forethought and diligent assessment on the part of both system designers and procurers.”
This shows that the onus is most definitely on employers, who have a responsibility and duty of care over their workforce, to embed equality into their organisation’s practices – established at the beginning of the hiring process, maintained throughout service, and baked into processes for dismissal. Speaking of the wellbeing of staff…
In accordance with sections 2 and 3 of the Health and Safety at Work etc Act 1974, every employer has a responsibility to ensure, so far as is reasonably practicable, the health, safety and welfare of all workers whilst they are at work. The workplace can be a hazardous place, depending on the industry; however, keeping a continuous eye on an entire workforce is challenging for most organisations (especially large ones!). With the support of AI, and in particular computer vision technology (CVT), health and safety managers do not have to rely solely on CCTV monitoring and physical walkarounds to identify and prevent safety issues. CVT can identify threats to safety and physical health in real time, without the hassle. Even high street giant Marks & Spencer has reported success from its trial of CVT!
Alice Conners, HSE Specialist at M&S, said “We can’t be everywhere at once… AI is like having an extra set of eyes, and it helps us in keeping our colleagues safe.”
High levels of surveillance in this manner may also lead workers to perceive an invasion of privacy. This can have negative effects such as stress, anxiety and depression – for instance, if AI is misused to calculate how long staff spend in restrooms or how many breaks they take, leading to negative performance reviews. It is therefore critical to communicate and consult with workers early in the implementation process to help alleviate these concerns and find a way that works for everyone, without detriment to physical or mental health.
This is not just AI, this is M&S AI…
Once you’ve recruited a team, the last thing you’ll want to do is hurt anyone’s feelings or cause unintended difficulties for your workforce. Retaining talent can be even more challenging than onboarding employees in the first place, so it’s important to understand how the law protects individuals – and crucially, how AI will need to be adapted to take these factors into consideration.
We all know that the Equality Act 2010 contains the law that protects employees from discrimination on the grounds of protected characteristics. But did you know that this is another area of employment law that is already displaying tension with AI use?
There are several cases in circulation illustrating employers getting into trouble for using various software to make decisions that impact an employee’s day-to-day responsibilities. Uber has repeatedly fallen under the employment spotlight, and this time it was for use of facial recognition software, which led to claims that the tech was not accurately analysing skin colour. Some Uber drivers were completely banned from the platform when the software couldn’t recognise a non-white face, which not only damaged their reputation but also led to huge losses in income (and a whole lot of extremely annoyed cabbies!)
These sorts of system errors can easily open employers up to discrimination claims. But it’s worth noting that there is a distinct difference in the law between direct and indirect discrimination.
Let’s bust some jargon:
If an AI system treats employees differently, and it is found that this is because one of the protected characteristics applies to them, this could lead to a direct discrimination claim.
Equally, as an AI system fundamentally operates on an algorithm or set of rules, this could be classified as a “provision, criterion or practice” (PCP) and result in an indirect discrimination claim.
Indirect discrimination can be excused if the employer can justify an otherwise discriminatory rule or arrangement on objective grounds, by showing that: (a) it has a legitimate aim it is pursuing; and (b) the rule or arrangement is a proportionate means of achieving that aim.
Monitoring feedback, responding appropriately to any complaints or issues with the tech, and generally communicating well with employees to ensure that their livelihoods and ability to do their jobs aren’t being negatively impacted by AI is a starting point. However, it’s also difficult to predict how such claims would progress given the lack of firm legislation in the area. Businesses should try to engage early with anticipated regulation, such as the UK’s recently published AI Whitepaper (see below for more on this), to plan for changes that may need to be made to business models, product offerings, technical procedures and approach to ESG and compliance. From an employment perspective, every single one of these elements could have an impact on employees and how they perform at work, so it’s crucial that HR professionals take an outcomes-based view to prepare for what new legislation may introduce. If you’ve got any questions, then we’re on hand to help!
Saying goodbye to your staff can be uncomfortable at the best of times, and there are lots of rules around how to ensure that your final decision is fair. But the simplest of these rules is that such a decision should be explained or justified by a person. That is, a real live human being. And this is where AI becomes relevant in the way it is starting to be rolled out into industry.
If that decision to dismiss someone is largely or completely driven by an AI algorithm, the data it uses or produces to come to that conclusion may be wholly inaccessible, or really tricky to explain to someone not qualified to understand the complex technology. This is what’s called a ‘black box issue’, and employers can’t hide behind this to disclaim responsibility for the matter.
While an employer could use information obtained from AI when considering whether to dismiss, any decision based solely on AI output without being properly interpreted and explained to the employee will be considered unfair. This can lead to time-consuming and very costly settlement negotiations and/or claims in the courts.
As mentioned above, the UK Government has recently published an initial AI Whitepaper – a “pro-innovation approach to AI regulation” – to try to address the topic as a whole. This states that, after the implementation of what will (hopefully) be the UK’s new AI regulatory framework, “the Equality and Human Rights Commission (EHRC) and the Information Commissioner’s Office (ICO) will be supported and encouraged to work with the Employment Agency Standards Inspectorate (EASI) and other regulators and organisations in the employment sector to issue joint guidance.”
This guidance could address the use of AI systems in recruitment or employment more generally, providing clarification on, for instance:
This is all purely hypothetical at this stage, as we are still waiting for proposals to become proper law. But, in the meantime, we have a few top tips for how employers can take preventative action and a pro-active approach towards mitigating the risks we’ve highlighted here.
It’s super important that organisations have back-up testing and measures in place when using AI for any reason. Check the data being inputted into the algorithm and analyse the results and output to ensure there’s no funny business each time you use it. It might sound like a headache and contrary to increasing efficiency, but the risk of claims and damage to reputation is much greater.
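What might such a check on outputs look like in practice? As a purely illustrative sketch, one widely cited heuristic is the US EEOC “four-fifths rule” (used here only as an example benchmark, not UK legal guidance): flag a selection process if any group’s selection rate falls below 80% of the best-performing group’s rate. The function names and figures below are invented for illustration.

```python
# Illustrative audit of an AI screening tool's outcomes, using the
# "four-fifths rule" heuristic as an example benchmark. All names and
# numbers here are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (number selected, total applicants)."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag any group whose selection rate is below `threshold`
    (four-fifths) of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

# Hypothetical results from an AI CV-screening round:
results = {"group_a": (30, 100), "group_b": (18, 100)}
print(adverse_impact_flags(results))
# group_b's 18% rate is only 60% of group_a's 30% rate, so it is flagged
```

A flag like this wouldn’t prove discrimination on its own, but it is the kind of simple, repeatable output check that could prompt a human review before the tool’s results are acted on.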
When it comes to implementation, there are lots of questions you should be asking across all areas of the business, including:
If you’re not sure whether you could be facing a potential claim or want to know more about how to prevent issues in recruitment or redundancy processes, come and speak to us! Our new Flamingo HR Subscription might be just the ticket to fulfil your employment needs.