An Ethical, Human-Centred Approach to AI in Human Resources, with Matissa Hollister

Could Artificial Intelligence tools decide who gets hired or fired, who gets a raise, or who’s ready to be a mentor? Some already do, with varying degrees of success. Hundreds of AI-based tools already exist for Human Resources tasks, including hiring, training, and employee engagement, but it’s often difficult to discern their value, let alone how to use them effectively and ethically, which is arguably essential in HR.

On the Delve podcast, Desautels Professor Matissa Hollister discusses how organizations can navigate the responsibilities and overcome the challenges they face when implementing AI in Human Resources. In collaboration with the World Economic Forum and over 50 other researchers, industry experts, and HR professionals, Hollister recently authored the toolkit and accompanying white paper Human-Centred Artificial Intelligence for Human Resources: A Toolkit for Human Resources Professionals. The toolkit is an up-to-date guide to key topics and steps in the responsible use of AI-based Human Resources tools, and it includes checklists focused on strategic planning and the adoption of specific tools.

Human bias is AI bias

Alleviating fears around AI, especially whether these tools will make decisions that humans wouldn’t, comes down to education about how the tools fundamentally function. AI learns by identifying patterns in historical, real-world data. It does this far faster and more efficiently than humans can, but the patterns it finds apply only to the specific context in which that data were created, such as a particular workplace with its own goals and timelines. When that context changes, as it did during the COVID-19 pandemic, AI tools tend not to work well.
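To make that concrete, here is a minimal sketch in Python using synthetic data and the scikit-learn library; the feature, labels, and scenario are hypothetical, invented only to illustrate how a pattern learned from historical data can stop holding once the context that produced the data changes.

```python
# Minimal sketch (synthetic data, hypothetical scenario): a model that
# faithfully learns a historical pattern degrades when the context shifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Pre-change" world: one feature (say, in-office hours logged) strongly
# predicts the label (say, a positive performance review).
X_hist = rng.normal(size=(1000, 1))
y_hist = (X_hist[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X_hist, y_hist)

# "Post-change" world (say, a shift to remote work): the same feature no
# longer carries the same meaning, so the learned pattern breaks down.
X_new = rng.normal(size=(1000, 1))
y_new = (-X_new[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

print("accuracy in the historical context:", model.score(X_hist, y_hist))
print("accuracy after the context changed:", model.score(X_new, y_new))
```

Run as written, the first score is high and the second collapses: the model is unchanged, but the world it learned from no longer exists.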

“AI is a cutting-edge tool that encodes the status quo,” says Hollister. Since AI learns from existing real-world data, all the world’s aspects—including bias, unfairness, and inequality—end up being reflected in the data and can be encoded in the system. “The system is not capable of discerning what aspects of the real world we like and we think should be kept, versus the aspects of the real world that we’re not so happy about and we would like to get rid of.”
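The same point about encoding the status quo can be shown in a minimal sketch, again with synthetic data and hypothetical features: when the historical human decisions in the training data were biased, a model trained on them learns that bias as if it were signal.

```python
# Minimal sketch (synthetic data, hypothetical features): a model trained
# on past human decisions reproduces the bias baked into those decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

skill = rng.normal(size=n)            # genuinely job-relevant signal
group = rng.integers(0, 2, size=n)    # protected attribute (0 or 1)

# Historical hiring decisions favoured group 1 regardless of skill:
# the bias lives in the labels, not in the learning algorithm.
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The trained model faithfully learns the discriminatory pattern.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"predicted hire rate for group {g}: {rate:.2f}")
```

The gap between the two predicted hire rates is the status quo, encoded: nothing in the code tells the model which patterns reflect merit and which reflect unfairness.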

Most of today’s AI tools for HR aim to make people’s work and work experiences more efficient, or to ensure that certain information is processed completely and results are more accurate. AI can automate the most mundane and tedious parts of a job, freeing a worker to spend more time on the parts that require empathy, judgment, and creativity. Even so, organizations still need to be vigilant about how they use AI tools.

“There are a lot of places where people are trying to just replicate the human process that already exists,” says Hollister. “That’s where you’re at the most risk of automating the person’s job: if it’s the worst part of the job, maybe it’s okay, but if it’s the main part of their job, that’s more problematic. On top of that, that’s also where you’re most likely to encode the bias, because you’re just trying to replicate what the human was doing, you’re not offering any improvement.”

Responsibility and ethics in technology

Hollister’s management research primarily focuses on changes in organizations, jobs, employment, and careers, with a broader lens on technology. Yet she dove into AI and machine learning when she saw how closely it resembled the statistical methods she typically used, including people analytics, which looks at patterns, such as who gets promoted in an organization, to help identify problems and their causes.

“I immediately started seeing how the use of AI technology, such as creating algorithms to predict things, could have both beneficial and detrimental consequences in the workplace,” she explains. She’d also seen the same news stories worrying people the world over, such as an algorithm for determining prison sentences in the U.S., facial recognition AI tools for policing, and managers using AI to decide who gets a job.

Hollister was already compiling a list of AI tools for HR when she saw the opportunity with the World Economic Forum. She knew that decisions on AI, especially AI tools that affect HR, couldn’t be left to Information Technology specialists or executives alone, but should be in the hands of multiple stakeholders who understand how the whole organization, including its employees, works and how it reacts to change.

“The person who’s using the AI tool should understand exactly what this tool is doing,” says Hollister, but implementing AI effectively goes beyond each user. The toolkit’s checklists outline what to consider before adopting an AI tool, as well as where AI can go wrong.

“This toolkit emphasizes questions you should ask: How did [the AI tool creator] define high capability? What does it actually mean to be high capability [as a worker]? Where did the data come from and what does it capture?” she explains. While the math inside an AI tool is vital to how it functions, implementing the tool well depends on decisions about its exact task and whether adopting it is a good idea for the organization at all. Vendors often count on HR professionals not understanding AI and accepting a company’s claims about its tool before questioning how it could help or hurt them.

“The hope is that it will make organizations think more about why they’re using AI in the first place,” says Hollister. “It may very well be a waste of money if they just buy the tool and don’t think through the implementation and monitoring part.”

Putting people first in HR and AI

Hollister explains that part of her work is keeping the human in Human Resources throughout the AI implementation process, and part is explaining what that really means for alleviating bias and making fair decisions.

“People often state as a general AI principle, whether it’s in HR or anything else, that we want the human to make the final decision, but part of why people are using these systems is because they think the human hasn’t made the best decision,” says Hollister. “Part of what we lay out in the toolkit is that it’s important to still allow the human to make the final decision, but also provide them with guidelines about how they should be using the system and how they need to also document their decision.”

On that note, Hollister offers a slogan to contemplate: “Artificial Intelligence is not very intelligent unless used intelligently.” She explains that human innovation is what makes AI so powerful: while the AI tool looks for patterns, it’s the human who must identify where to look and why.

As Artificial Intelligence tools continue to proliferate and offer solutions to problems both old and new, more critical inquiry is needed around not only which tools work best in which contexts, but also what their ethical implications and real-world human consequences could be. For more insights, listen to the Delve podcast with Matissa Hollister and read Human-Centred Artificial Intelligence for Human Resources: A Toolkit for Human Resources Professionals and more in the World Economic Forum’s Human-Centred Artificial Intelligence series.

This episode of the Delve podcast is produced by Delve and Robyn Fadden. Original music by Saku Mantere.

Delve is the official thought leadership platform of McGill University’s Desautels Faculty of Management. Subscribe to the Delve podcast on all major podcast platforms, including Apple Podcasts and Spotify, and follow Delve on LinkedIn, Facebook, Twitter, Instagram, and YouTube.

Matissa Hollister
Assistant Professor, Organizational Behaviour