Using AI ethically in HR: is it possible?  

9 min read  |   23 May, 2023   By Helen Astill, HR Solutions and Cherington HR

[Image: a woman and a robot working at desks opposite each other]

A lot has been written about artificial intelligence (AI) in recent months, driven by the widespread use of software such as ChatGPT and Google’s chatbot, Bard. But how safe is AI for businesses to use, and what ethical issues should SMEs consider?

You might not realise the extent to which AI is now embedded in everyday life. The chatbots we interact with when we try to get a question answered online or over the phone are just one example, and many of us also have Amazon Echo (Alexa) or Google Home devices. But how far can the technology go? And what checks should HR maintain when using AI?

 

Science fiction - now science fact?  

The science fiction of yesterday is rapidly becoming the science reality of today, with expert systems now able, in many cases, to spot early signs of illness and disease and to support security and counter-terrorism interventions. We’ve even reached the stage where chess computers can beat Grandmasters.

It’s no longer simply a matter of software following rules written by human programmers: programs now learn from the observations they make of our responses. The danger is that if our responses are flawed, the programs learn those flaws too.

The AI systems themselves warn that their responses may not always make sense, so taking answers at face value isn’t enough. To get the best out of these tools, be precise, ask the right questions, and check the answers any technology gives you.

 

Using AI in recruitment  

Questions around the use of AI in managing people have traditionally focused on software for analysing job applications and making recruitment decisions.

This is covered by the Data Protection Act, and the Information Commissioner's Office (ICO) has produced guidance on how to deal with AI-assisted decisions. It defines these as “decisions that can be based on predictions, recommendations, or classifications”, and they may be made by solely automated processes or by processes that also involve human intervention.

But it’s important not to lose sight of who is held accountable for such decisions and for HR to ensure that there’s human oversight of the process. 

 

What does the technology say?  

I asked Alexa about the ethical implications of using AI in recruitment. The response was limited and, worryingly, focused only on the benefits. It replied, “AI can augment the recruiting process by automating time-consuming recruiting and hiring tasks to more accurately identify the right candidates for a position, ensuring diverse and fair selections and reducing bias in the workplace—a common goal businesses are looking to achieve.”

I’d agree that it can save time, and it will be tempting, particularly for SMEs, to use time-saving systems wherever possible to minimise costs and ensure consistency.

However, I’m not convinced that it necessarily reduces bias, because the selection criteria programmed into the expert system may reflect existing biases in the workplace.  

For example, if an artificial intelligence system is told that, historically, the best performers have been male, white and middle-aged, it may build those characteristics into its selection criteria. Looking for people with similar characteristics (rather than seeking out diversity) can reduce an organisation’s capacity for creativity, and the system may miss the best candidates in doing so.
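To make that mechanism concrete, here is a deliberately simplified Python sketch (all names, attributes and data are invented for illustration) of how “find people like our past top performers” quietly becomes a demographic filter:

```python
# Hypothetical sketch: how biased historical data can leak into an AI screen.
# Everything here is invented for illustration, not a real screening system.

# Historical "top performer" records the system learns from: a skewed sample.
past_top_performers = [
    {"gender": "male", "age_band": "40-55", "background": "white"},
    {"gender": "male", "age_band": "40-55", "background": "white"},
    {"gender": "male", "age_band": "40-55", "background": "white"},
]

def similarity_score(candidate: dict) -> float:
    """Naive screen: score a candidate by how closely they resemble
    past top performers, one point per matching attribute."""
    score = 0
    for performer in past_top_performers:
        score += sum(candidate.get(k) == v for k, v in performer.items())
    return score / len(past_top_performers)

# A strong but demographically different candidate scores poorly,
# not because of ability but because of who succeeded in the past.
candidate_a = {"gender": "male", "age_band": "40-55", "background": "white"}
candidate_b = {"gender": "female", "age_band": "25-39", "background": "black"}

print(similarity_score(candidate_a))  # 3.0 -- shortlisted
print(similarity_score(candidate_b))  # 0.0 -- screened out
```

A real screening model would be far more sophisticated, but the underlying failure mode is the same: the system rewards resemblance to a skewed history rather than suitability for the job.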

Incidentally, I asked the same question of ChatGPT and that produced a far more detailed and balanced response focusing on five main points relating to: bias; lack of transparency; privacy; autonomy; and fairness.  

 

Using AI ethically in HR: is it possible?  

I’ve already highlighted the issue of bias and the potential for inadvertently perpetuating existing biases. But the concern about transparency also needs to be taken seriously. Applicants may be mistrustful of an opaque process that they feel has unfairly excluded them. If you’re unable to tell applicants the basis on which they were (or were not) shortlisted, they may be unwilling to proceed, and as an employer you may miss out on good candidates as a result.

Applicants may also be concerned about how their data is handled and processed. Employers need to be able to tell candidates precisely where their personal data is held and how its security is ensured. Giving each candidate a copy of your privacy notice will help address that concern.

If there is no human intervention in the process at all, the lack of autonomy in screening and selecting candidates may disadvantage those with disabilities. If a candidate cannot participate in the selection process because no reasonable adjustments are made, that would be unfair and unlawful.

The other factor to consider is that algorithms may predict performance from specific factors, such as education history or previous work experience. This would disadvantage those who have taken a career break because of childcare responsibilities, or who have built their expertise through experience rather than formal qualifications.

 

All their own work?  

One aspect employers need to consider is that applicants may become lazy and use AI to generate their applications, so it’s important to have a way of checking that CVs and covering letters are indeed a candidate’s own work. Sometimes it will be easy to spot: several applicants all producing identical or very similar documents is a giveaway. But it does mean that employers may need to consider introducing a policy on the use of AI, as well as using plagiarism checkers to ensure that applications aren’t the work of a computer program.
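A first pass at spotting near-identical applications doesn’t even need AI. Here is a minimal sketch using Python’s standard-library difflib (the applicant names, letter text and 0.9 threshold are all invented for illustration):

```python
# Hypothetical sketch: flagging near-identical covering letters.
# difflib and itertools are in Python's standard library.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]: 1.0 means the two texts are identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_duplicates(letters: dict, threshold: float = 0.9) -> list:
    """Return pairs of applicants whose letters are suspiciously alike."""
    flagged = []
    for (name_a, text_a), (name_b, text_b) in combinations(letters.items(), 2):
        if similarity(text_a, text_b) >= threshold:
            flagged.append((name_a, name_b))
    return flagged

letters = {
    "applicant_1": "I am a motivated team player with strong skills...",
    "applicant_2": "I am a motivated team player with strong skills...",
    "applicant_3": "Ten years running a village shop taught me...",
}
print(flag_duplicates(letters))  # [('applicant_1', 'applicant_2')]
```

This only catches applications that copy one another; judging whether a single, well-written letter was machine-generated is a much harder problem, which is where dedicated checking tools come in.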

 

Moving forward with AI  

Careful design and monitoring of the algorithms can ensure that aspects like plagiarism are addressed, but this is time-consuming and requires a lot of testing, so it may be costly. That may be within the reach of larger businesses that can commission the programming required, but for smaller businesses it’s less likely to be an option. SMEs need to be alert to the issues and ensure that there’s human oversight of any process that includes AI. For example, no one would want good candidates to be deselected simply because they didn’t meet the recruitment criteria exactly.

However, there are lots of opportunities for businesses to use AI in making administrative processes slicker and freeing up staff (especially HR teams) to undertake more valuable work. For example, an AI bot might be able to answer routine questions from employees, such as, “how much annual leave do I have left?” rather than needing a human to calculate that and respond.  
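As a sketch of how small that kind of bot can be, here is a hypothetical Python example (the employee records, IDs and matching rule are invented; a real bot would sit on your HR system and handle many more question types):

```python
# Hypothetical sketch of an HR bot answering a routine leave query.
# The record store and keyword matching are invented for illustration.

LEAVE_RECORDS = {  # employee id -> (annual entitlement, days taken)
    "emp001": (25, 11),
    "emp002": (28, 28),
}

def answer(employee_id: str, question: str) -> str:
    """Answer a routine question; escalate anything unrecognised."""
    if "annual leave" in question.lower():
        entitlement, taken = LEAVE_RECORDS[employee_id]
        return f"You have {entitlement - taken} days of annual leave left."
    return "I'll pass that question to the HR team."

print(answer("emp001", "How much annual leave do I have left?"))
# -> "You have 14 days of annual leave left."
```

The escalation path in the last line matters as much as the happy path: anything the bot doesn’t recognise should go to a human rather than receive a guessed answer.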

 

Seeking HR support  

If you’re unsure which AI or expert-system options are right for your SME, make sure you seek HR advice before you begin.

When managed correctly and supervised by humans, AI has the potential to change the way we work for the better, once all the ethical implications have been considered.

Helen Astill (MEd, MA(HRM) Chartered FCIPD) is the Director of Cherington HR, and HR Services Director at HR Solutions.  


Author: Helen Astill, HR Solutions and Cherington HR

Helen was named HR Consulting MD of the Year in Acquisition International’s 2021 Influential Businesswoman’s Awards and has since been a finalist in the Herefordshire and Worcestershire’s Awards for both Professional Excellence and Excellence in Customer Service.
