Virtually every Employee Relations (ER) team I speak with says it is ready to embrace AI. Most express an interest in using AI to enhance analytics: tapping aggregated data to uncover trends, run predictive models and speed up case data analysis to ensure compliance.
Most teams quickly see the value in using AI to boost productivity, relieve caseloads and enable a stronger focus on the right actions. More specifically, they envision AI automating time-consuming tasks such as transcribing interviews and drafting case summaries and investigative reports. It makes sense, too: if AI can take on the tedious administrative tasks, ER professionals are free to do the more fulfilling work that brings real purpose and job satisfaction. More productivity with less burnout? Yes, please.
So why aren’t most teams already all in on AI? Risk.
Most organizations that plan to include AI in their ER tool sets are still wrestling with legal, ethics and compliance concerns. That’s fair, given the sensitive nature of most ER information. Many also worry about protecting IP and sensitive customer data. All valid concerns.
As ER works through these concerns, develops an approach and clarifies how AI will enhance value, two things are already clear. First, we must prioritize the human element of what we do. Most AI still sounds flat and devoid of emotion or compassion – an essential part of how ER teams handle employee matters. Mishandled, AI can introduce unnecessary bias and damage trust. Second, we can’t forget we are working with people. People who have real doubts and even fears about what AI means for their own job security.
To that end, I recommend several guardrails your team should consider as you define and articulate your approach to AI:
Eliminate inefficiency, not humanity.
At the very least, inhumane AI can be frustrating. Think of the last time you got stuck in a chatbot loop that went nowhere. If you just ‘wanted to talk to a human’, you understand the point. At its worst, inhumane AI can do real damage. Just ask the plaintiffs in the class action against Humana over AI-powered algorithms that allegedly denied their family members proper care.
It’s our responsibility to protect the humanity of what we do. Above all else, we must make sure our organizations understand and agree that AI cannot and will not replace the human elements of our role. Empathy, intuition and interaction will always remain irreplaceable in employee relations. As investigators we excel in curiosity, asking the right questions, understanding context, following our intuition and drawing thoughtful conclusions. These are skills we shouldn’t aim to replace.
Instead, let’s anchor our approach around efficiency by automating administrative and tactical aspects of the job. Let’s lean into increasing efficiencies and speeding up processes, but balance that with the understanding that quicker isn’t always better in investigations.
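To make the efficiency play concrete, here is a minimal sketch of what automating transcription and first-draft summaries might look like. It assumes the OpenAI Python SDK and its hosted models purely for illustration – your vendor, models and prompts will differ, and every draft still gets reviewed by a human investigator.

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY in the environment. Illustration only: your
# tooling, models and prompts will differ.
from openai import OpenAI

client = OpenAI()

def transcribe_interview(audio_path: str) -> str:
    """Turn an interview recording into raw text."""
    with open(audio_path, "rb") as audio_file:
        result = client.audio.transcriptions.create(
            model="whisper-1",  # hosted speech-to-text model
            file=audio_file,
        )
    return result.text

def draft_case_summary(transcript: str) -> str:
    """Produce a first draft only -- a human investigator reviews and edits it."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize this interview transcript factually. "
                    "List who said what and when. Draw no conclusions."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_case_summary(transcribe_interview("interview.m4a"))
    print(draft)  # the investigator, not the model, decides what it means
```

Note the division of labor baked into the design: the model drafts, the human decides.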
Mitigate bias.
The recent release of Google’s generative AI tool Gemini is an almost comical example of how bias can distort results. Google intended to mitigate racial stereotypes and reduce bias, but the tuning overcorrected and gave users clearly distorted output. Think female popes.
Less laughable is the bias that adversely impacts people’s ability to land a job or earn a promotion. Biased AI within talent management platforms can make poor decisions and scale discrimination at a rate no human could match. The consequences of unchecked bias are already here: in 2023 the EEOC settled its first AI discrimination lawsuit, with class actions still pending, and the agency has released guidance to help employers understand where their responsibilities lie.
As ER professionals, it’s our responsibility to define where AI does and does not play a role so that we don’t introduce bias. We must understand that while AI can help shape the work, it cannot complete it. Critical reflection, attention to the human elements and accuracy of information are too valuable to effective resolution – and to positive outcomes for employees and the organization – to hand over to a machine. For example, AI should stop short of drawing conclusions from investigation findings and focus instead on presenting factual summaries. This prevents the misinterpretations that can arise when AI latches onto patterns a human investigator would recognize as out of context or more nuanced than they appear.

Our discussions with legal and employment counsel continually return to these concerns, particularly how bias can emerge during investigations and the importance of not letting AI exacerbate it. For example, we avoid incorporating personal demographic information such as pronouns, ethnicity, race or gender into AI-generated summaries. We are carefully crafting our instruction set and prompts to yield the most accurate results from AI inputs.
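As one illustration of what such a guardrail can look like in practice, here is a naive sketch of a redaction pass that strips common demographic markers before text reaches a model, paired with summary instructions that forbid conclusions. The term list and regex approach are simplified assumptions for illustration – real redaction needs entity recognition, a counsel-reviewed term list and human spot checks.

```python
import re

# Illustrative only: strip common demographic markers before text is sent
# to an AI summarizer. A production system would need named-entity
# recognition, a counsel-reviewed term list and human spot checks.
DEMOGRAPHIC_TERMS = [
    r"\bhe\b", r"\bshe\b", r"\bhim\b", r"\bhis\b", r"\bher\b", r"\bhers\b",
    r"\bman\b", r"\bwoman\b", r"\bmale\b", r"\bfemale\b",
]

def redact_demographics(text: str) -> str:
    """Replace demographic markers with a neutral token."""
    for pattern in DEMOGRAPHIC_TERMS:
        text = re.sub(pattern, "[REDACTED]", text, flags=re.IGNORECASE)
    return text

# Prompt instructions that keep the model on factual summaries only.
SUMMARY_INSTRUCTIONS = (
    "Summarize the facts of this case chronologically. "
    "Quote statements verbatim where relevant. "
    "Do NOT infer motive, assess credibility or draw conclusions. "
    "Refer to individuals only by case role, e.g. 'the reporter' "
    "or 'the respondent'."
)

notes = "She told her manager about the incident on March 3."
print(redact_demographics(notes))
# -> "[REDACTED] told [REDACTED] manager about the incident on March 3."
```

Only the redacted text, together with instructions like these, would ever reach the model.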
Choose your technology partners wisely.
The EEOC has made it clear: if an employer “administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor.” Translation: choose your software partners wisely.
The tools we use will only be as good as the datasets they are built upon. While we don’t have to be the algorithm architects, we do have a responsibility to ask the right questions. As we vet partners and look for purpose-built tools, we should seek to understand the who, what and where behind automated content and best-practice advice. If it’s coming from a widespread crawl of the internet, run.
I for one am optimistic about the potential for AI to let us do MORE of the human work. More time to be empathetic. More time to listen and catch a nuance. More time to follow up and keep our people feeling safe, respected, and cared for. As we uncover ways to swap tedious tasks for more fulfilling work, I believe we will feel more purpose and finally get the chance to add even more value to the organization.