Is AI the Frenemy of HR?

Tami Lamp
Ilene Siscovick

We often fear the worst when we encounter something new. As children, it’s the first day of school and the worry that we won’t make any friends.  As we get older, we transfer these fears to what’s happening with our children… especially when health is involved (speaking as parents who have spent many unnecessary hours waiting in the pediatrician’s office).  

In terms of technology, many predictions over the years have been made about computers taking away jobs, spying on people, or making choices that are dangerous to humans.  

These predictions sometimes come true, but often without the sting of negative consequences.  

In the late 1970s, Jeremy J. Stone warned about the potential job displacement caused by Artificial Intelligence. He predicted that automation and intelligent machines could eliminate significant numbers of jobs, particularly in manufacturing and clerical work.  

We are still waiting for the doomsday scenarios. With AI, some jobs have been eliminated and certain repetitive tasks have been taken over by computers or automation, but new categories of jobs and skills have emerged at higher levels of pay.

Why the history lesson?  Because AI is again in the headlines, and many HR professionals are grappling with its implications for HR, both from an organizational and a job-role perspective.    

So, we ask the question again: What will AI do for HR professionals? How will AI enhance the ability of People Managers to improve Talent decision outcomes?  We will explore four HR scenarios to find the answer.  

AI and ChatGPT – why now?

The term AI is much older than most people realize, going back to the Dartmouth Conference in 1956, when a group of academics sought to explore ways that computers could exhibit intelligent behavior.  What started out in academia made its way into commerce.  

In the 2000s there was a resurgence of Machine Learning, when the processing of big data became inexpensive alongside breakthroughs in computing power.  Machine Learning is a category of AI that involves using algorithms to learn patterns from large amounts of data and then make predictions on new, unseen data.  

Perhaps the most recent wave (and for many the most exciting) in AI is called Generative AI, of which the well-known ChatGPT and Midjourney are both examples.  Generative AI generates new and unique content based on historical data.  For instance, it can write a poem on a new topic in the writing style of a famous author or create an image in the style of a famous painter.  (More on this below.)

As to why now, we don’t control the timing of innovation.  But what we can and must do is figure out how specific innovation is to be integrated into our work (and personal) lives.  

Candidate Sourcing still has many flaws

Using AI for candidate sourcing is perhaps one of the simplest uses of classic Machine Learning.  An algorithm goes through a database and then learns which traits or aspects of work history will make someone most likely to succeed.  Based on these criteria, the algorithm is trained to analyze the data from candidates that are applying for open positions and select the best candidates.  
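The idea can be illustrated with a deliberately simplified sketch in Python. The traits, data, and scoring rule below are all illustrative assumptions, not a real recruiting model: a weight is learned for each trait from historical outcomes, and new candidates are then ranked by their weighted traits.

```python
# Toy sketch of training a candidate-ranking model from historical hires.
# Trait names and data are hypothetical; real systems use far richer features.

def train_weights(history):
    """Weight each trait by how much more often it appears among
    successful hires than among unsuccessful ones."""
    weights = {}
    hired = [c for c in history if c["succeeded"]]
    not_hired = [c for c in history if not c["succeeded"]]
    traits = {t for c in history for t in c["traits"]}
    for t in traits:
        p_hit = sum(t in c["traits"] for c in hired) / max(len(hired), 1)
        p_miss = sum(t in c["traits"] for c in not_hired) / max(len(not_hired), 1)
        weights[t] = p_hit - p_miss
    return weights

def score(candidate, weights):
    return sum(weights.get(t, 0.0) for t in candidate["traits"])

history = [
    {"traits": {"python", "team_lead"}, "succeeded": True},
    {"traits": {"python"}, "succeeded": True},
    {"traits": {"cobol"}, "succeeded": False},
]
w = train_weights(history)
ranked = sorted(
    [{"traits": {"python"}}, {"traits": {"cobol"}}],
    key=lambda c: score(c, w),
    reverse=True,
)
```

Note how directly the weights mirror history: whatever pattern the past data contains, fair or not, is exactly what the model learns, which is the root of the bias problem described next.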

Simple to do, but also easy to mess up.  In 2015 Amazon was forced to shutter its CV-vetting algorithm after discovering it was biased against women.  Even when it was programmed to ignore gender, the algorithm found ways to demote the candidacy of women by penalizing CVs containing terms such as “women’s chess club captain” or graduates of two all-women’s colleges.  

The reason for this bias is technical – the algorithm trained on historical data where most software engineers were men.  Even though the specific problem with the algorithm was fixable from a programming perspective, the damage was done.

Furthermore, there are less obvious ways in which AI-based candidate sourcing can perpetuate bias.  For instance, if historically successful candidates have attended Ivy League universities, then an algorithm may prefer candidates from these institutions.  But perhaps there are worthwhile candidates who lacked the financial means to pay for a private education and who would be just as suitable, if not more so.  

HR end users who are not trained in data science are reliant on the expertise of technical professionals and lack the background to steer algorithms away from the wrong data sets. Excuses don’t matter because it’s the outcomes that we care about, and decision-making that reinforces gender (or other) bias is the opposite of a positive outcome.  People Managers rely on these processes to source candidates with the best combination of skills and capabilities.

NLP for Sentiment Analysis is a safe bet

Natural Language Processing (NLP) is the ability of a computer program to understand spoken and written human language.  One of the best applications of NLP is when it is used to analyze long-text survey answers or answers from online anonymous facilitated conversations.  

The limitations of employee surveys are well-known: survey fatigue, biased responses (driven by many factors, including concerns about anonymity), and the lack of qualitative insights.  One way to improve the quality of the insights is to ask open-ended questions about specific topics or initiatives and give people the opportunity to provide their feedback anonymously.  NLP can be used to classify answers and draw conclusions by reading each comment, something that is not humanly possible in large companies with hundreds or thousands of employees.
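The classification step can be sketched with a toy example. Production NLP tools use trained language models rather than keyword lists; the word lists and comments below are purely illustrative assumptions.

```python
# Deliberately simplified sketch of sentiment classification of
# open-ended survey comments. The keyword lists are illustrative only.

POSITIVE = {"great", "helpful", "love", "clear"}
NEGATIVE = {"confusing", "overworked", "frustrated", "slow"}

def classify(comment):
    """Label one comment by counting positive vs. negative keywords."""
    words = set(comment.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

def summarize(comments):
    """Aggregate per-comment labels into counts HR can act on."""
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for c in comments:
        counts[classify(c)] += 1
    return counts

comments = [
    "the new benefits portal is great and very clear",
    "I feel overworked and frustrated this quarter",
    "no strong opinion",
]
print(summarize(comments))
```

The value for HR is in the aggregation step: thousands of free-text comments collapse into a handful of themes and counts that can be tracked over time.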

Employees – especially those on the front line – often know the answers to questions that the C-suite cannot answer; the view from the trenches is very different from the view from the boardroom. With AI-based tools, HR can bridge that gap with qualitative insights that can significantly impact the business and give People Managers the opportunity to solve problems and pivot.

 

Generative AI for employee onboarding – a (mostly) safe bet

Employee onboarding has been identified as one of the most suitable applications for Generative AI.  Algorithms can learn almost everything a new employee needs to know, ranging from information about the company’s product and solutions to internal processes and procedures.  

A new employee can ask a ChatGPT-like tool a specific onboarding question (“please explain to me how to use this benefit”) and get an answer that is based on what the algorithm understands about the specific employee (“you will be eligible in 6 months”). This creates a positive self-service experience.

The reason the use of ChatGPT-style tools is considered a relatively safe bet here is that the information exposed to the algorithm can be limited to vetted content.  There was a recent case of a lawyer who used ChatGPT for legal research and filed a brief citing fake cases.  The problem with using the public ChatGPT is that it’s difficult to control the information that is accessed and used to generate the answer.  But if HR limits the Generative AI’s source material to content that has been manually vetted, then this key concern can be mitigated.
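Limiting an assistant to vetted content can be sketched as a simple retrieval step. The documents and the keyword-overlap retrieval below are illustrative assumptions; a real system would pass the retrieved text to a language model as its only allowed source, rather than returning it directly.

```python
import re

# Sketch of constraining a generative onboarding tool to vetted content:
# find the approved document most relevant to the question, and answer
# only from it. Document texts here are hypothetical examples.

VETTED_DOCS = {
    "dental_benefit": "Dental benefit eligibility begins after 6 months of employment.",
    "vpn_setup": "Install the VPN client and sign in with your company email.",
}

def tokens(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question):
    """Pick the vetted document sharing the most words with the question."""
    q = tokens(question)
    return max(VETTED_DOCS.values(), key=lambda doc: len(q & tokens(doc)))

def answer(question):
    # Answer only from the vetted document, never from open-ended knowledge.
    return retrieve(question)

print(answer("When am I eligible for the dental benefit?"))
```

Because every possible answer traces back to a manually approved document, the fabricated-citation failure mode described above is largely designed out.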

As with all new technologies, the best practice advice is to always proceed with care, but Generative AI seems to have a promising future for employee onboarding.  

Using AI for Predictive Behavioral Analytics

A Machine Learning algorithm extracts hundreds of variables from both the HRIS and IT systems including job role, tenure, age, site, email, and calendar usage patterns. The algorithms are trained to look for normal behavior patterns and then to find signals of abnormal patterns.  

For instance, if there is a spike in hours worked, Behavioral People Analytics can be used to detect a burnout risk.  
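The spike-detection idea can be illustrated with a minimal sketch: compare this week’s hours against the person’s own historical baseline using a z-score. The data and the threshold are illustrative assumptions, not a real product’s logic.

```python
from statistics import mean, stdev

# Toy sketch of behavioral anomaly detection: flag a burnout risk when
# this week's hours deviate sharply from the employee's own baseline.

def burnout_risk(weekly_hours, this_week, threshold=2.0):
    """Return True when this week's hours are an outlier vs. the baseline."""
    mu = mean(weekly_hours)
    sigma = stdev(weekly_hours)
    if sigma == 0:
        return False  # no variation in history, nothing to compare against
    z = (this_week - mu) / sigma
    return z > threshold

baseline = [40, 42, 41, 39, 40, 41]
print(burnout_risk(baseline, 60))  # a large spike in hours
print(burnout_risk(baseline, 41))  # a normal week
```

Note that the baseline is per person: 60 hours is only a signal because this employee’s history hovers around 40, which is why the same rule can surface very different patterns for different people.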

[Image: Behavioral People Analytics dashboard.  Source: Watercooler.ai]

 

The type of analysis described here can be applied to multiple risk factors.  For instance, the data can show that one person is at risk of resignation based on the infrequency of communication with their direct manager whereas another person is at risk of resignation because their manager is communicating excessively.  

The potential of Behavioral People Analytics is significant because it can be used to detect talent risks such as attrition or disengagement.  

But of course, human behavior is complex.  Behavioral People Analytics is a powerful tool, but it needs to be administered and controlled by humans who can provide context to behavior.  The person who is working too many hours may not be at risk of burnout because they are supporting a critical project that is important to their personal and professional development.  The person who seems isolated may be laser-focused on an important deadline and has temporarily limited their communications with team members. Automatically intervening because an algorithm suggests so can be disruptive or even harmful to the individual’s performance or well-being.    

Behavioral People Analytics is a powerful tool, but it requires context and human insights if it is to be used effectively. People managers can evaluate these risks and benefits for better productivity and outcomes.

Summary and Conclusion: the devil is in the details

The relationship between AI and HR is still in its infancy because traditionally HR has been a late adopter of AI-based solutions.  But as AI becomes more widespread and the HR use cases more prevalent, we expect to see a convergence between HR-Tech and AI.  

As to the original question: What will AI do for HR professionals?  AI can be a great enabler of change, but it should come with guardrails.   Used in the right way, AI can help replace manual or repetitive processes or find insights that can improve the lives of employees or change the business.

AI can enhance the relationship between the HR professional and the People Manager for better business outcomes.

 

   

 

 
