Delve podcast: An Ethical, Human-Centred Approach to AI in Human Resources, with Matissa Hollister

Delve podcast, June 2, 2022, hosted by Robyn Fadden: An Ethical, Human-Centred Approach to AI in Human Resources, with Matissa Hollister
Robyn Fadden – host: Could Artificial Intelligence tools decide who gets hired or fired, who gets a raise, or who’s ready to be a mentor? Some already are, with varying degrees of success, alongside all kinds of technical and ethical questions. Today, hundreds of AI-based tools exist for Human Resources tasks, including hiring, training, employee engagement and other human-centred areas, but it’s often difficult to discern their usefulness, let alone how to use them effectively and ethically.
Robyn Fadden – host: Welcome to the Delve podcast. I’m your host for this episode, Robyn Fadden. On this episode, Desautels Professor Matissa Hollister discusses how organizations can navigate and overcome the challenges they’re facing around implementing AI in human resources. Professor Hollister recently authored an AI Toolkit for Human Resources Professionals, a collaboration with the World Economic Forum and several other researchers and industry experts. The toolkit and its white paper aim to make the use of AI tools for HR much clearer to HR professionals, including those who don’t have a technical background. It’s an up-to-date guide on key topics and steps in the responsible use of AI-based human resources tools, and includes checklists focused on strategic planning and the adoption of specific AI tools.
Robyn Fadden – host: Welcome to the Delve podcast, Professor Hollister. Thank you for being here. Questions about how artificial intelligence will affect work have been asked since the dawn of machine learning, which of course has changed over the years – deep learning AI tools today seem much less fantastical or mysterious than they did in the 1960s or even the 80s. Today, AI tools are used across industries, sectors and fields.
Robyn Fadden – host: Both the benefits and the challenges of AI and its ethical consequences apply to human resources, of course, where naturally we think about people and the direct effects of AI tools on people – we think about bias, we think about ethics, but we also think about increasing efficiency, easing workload and allowing people more time for things like strategic planning or creative work. The toolkit you created is part of a World Economic Forum AI series that also includes an AI toolkit for the C-suite. I’m curious why you, your collaborators, and the World Economic Forum prioritized a toolkit for Human Resources at this time.
Matissa Hollister: A lot of it was actually driven by me. I applied to become a fellow from McGill. My research has always been on changes in jobs, employment and careers. I had talked broadly about technology, but not specifically about AI. That was in 2017. And I was stunned when I learned what AI – machine learning, in its current form – really was: a more elaborate version of the statistical methods that I had been using for a long time and teaching in my classes. Because I was a researcher on inequality and careers in the workplace, I immediately started seeing how this technology – creating algorithms to predict things – could have both beneficial and detrimental consequences in the workplace. So I was already investigating some of that on my own, beginning to compile a list of all the AI-for-HR tools that were out there, when this opportunity to do the fellowship with the Forum came up.
Matissa Hollister: When I arrived, they were very nice and open and said, what would you like to work on? And I said, well, how about this question of using AI to inform HR decisions? They were supportive of it, partially because even at that point they were recognizing the stakes. A few years earlier – 2015 or 2016 at the earliest – people had really started getting worried about the potential ethical consequences of AI, and particularly its potential to encode bias. Often when articles were written, the first example that came up was an algorithm used in the United States to inform prison sentencing, which had a lot of issues. Then people would talk about facial recognition and policing, of course, as the other biggies. And then people would ask, what if AI is determining who gets a job? It was often used as an example of a worrisome use case of AI. The Forum had a draft of a paper that tried to look at a little of this, but it had stalled. I think they recognized this as an opportunity, because I had the expertise to take it on. And it turned out to be very much a good decision.
Matissa Hollister: A little over a year ago, the European Union announced that they were going to create regulation for AI, and as they figure out how to regulate it, their focus is on high-risk AI. They listed the high-risk AI uses, and one that was very explicitly named was the use of AI in employment and HR. The Forum is continuing to do work on AI and HR – their focus is shifting a little more towards policy now – because they recognize that it’s one of a handful of really high-risk use cases that are causing a lot of concern.
Robyn Fadden – host: What does AI use in human resources look like today?
Matissa Hollister: Probably the most widespread use is in hiring and recruiting, because that’s one of the biggest challenges in HR; it takes the most resources. It’s trying to process sometimes huge volumes of data, lots of incoming applications. It’s an area where organizations are really looking to save time and money. So it’s a place where a lot of startups and larger organizations have jumped in to try and use AI to ease that pain point, and they’ve taken a million different approaches, even just within AI hiring tools. We have a quote in the toolkit – I debated keeping it, because it was for the World Economic Forum and I wasn’t sure everyone would understand it – that every AI tool is a snowflake. We in Canada understand what we mean by that, right? Every one is unique. So even if they’re all screening resumes, they’re going to end up looking at slightly different things. Recruiting and hiring is by far the biggest area, but it’s spreading to all different kinds of HR work: some tools predict turnover, some work on coaching, training, automatic career pathing, and training or career recommendations.
Matissa Hollister: Just in the last year, there’s been more and more focus on skills – skill mapping and reskilling. Those would be the biggest areas. There hasn’t been as much on promotions, but tools cover pretty much all the rest of what you might call the employee lifecycle: hiring, attrition, training, mentoring. There are even a few out there trying to recommend who should work together in teams.
Robyn Fadden – host: You worked with an active group of over 50 people from many different backgrounds, including HR professional associations, to design the AI for Human Resources toolkit for use by HR professionals. What does it provide for them and how could it help guide their decision making?
Matissa Hollister: It’s really aimed at providing a baseline of education to HR professionals about how AI works and the key considerations they should be thinking about before using a tool. We developed this idea of a checklist, which is essentially a due-diligence list: before you buy a tool, you should be able to answer the following questions, or ask the vendor the following questions and get a decent answer. It steps people through the process and makes sure they’ve thought it through if they’re going to either build or procure an AI-based HR tool. That’s the real focus. And then the idea was that this would also indirectly affect the marketplace of the tools themselves. One of the things that really struck me was what you see when you go to the websites of some of these AI-based HR tools. They make these grand claims, and they often talk in very abstract terms: we identify high-potential candidates and automatically filter them for you.
Matissa Hollister: This toolkit emphasizes questions you should ask, like: how did you define high potential? They actually have to measure that in order to create a tool. So what does it actually mean to be high potential? Where did the training data come from? What were the inputs you considered? The interior math – all the fancy computer science – sits in the middle part, and part of what I argue is that the middle part is actually the least important part of how the tool is going to operate. What really matters are the decisions about what task the tool should do in the first place – is that a good idea or a bad idea? Where did the data come from? Most of the time there’s an outcome, like high performing: how did you measure it? Is it a good measure? What does it capture, and what doesn’t it capture? And then what are the inputs, the predictors – the factors we think we’re going to base our prediction on of whether the person is going to score high or low? This is where all these tools vary. Understanding that will give you a really good idea of how the tool works. The math that links the inputs to the outcome is the complicated part, but it’s actually not the most important part to understand.
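To make that structure concrete, here is a minimal sketch in Python, using synthetic data and hypothetical column names rather than any vendor’s actual system. It shows the three design decisions Hollister describes – the training data, the outcome label, and the input features – and how the “middle part”, the algorithm itself, is almost interchangeable:

```python
# A minimal sketch (synthetic data, hypothetical column names) of the
# three design decisions behind an AI-for-HR tool. Not any real system.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# 1. Training data: a stand-in for historical employee records.
df = pd.DataFrame({
    "years_experience": rng.integers(0, 20, n),
    "skills_test_score": rng.normal(70, 10, n),
    "manager_rating": rng.integers(1, 6, n),  # 1-5 scale
})

# 2. Outcome: "high performing" defined here as a manager rating of 4+.
#    This measurement choice shapes everything the tool will do.
y = (df["manager_rating"] >= 4).astype(int)

# 3. Inputs: the factors the prediction is allowed to use.
X = df[["years_experience", "skills_test_score"]]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The "middle part": the algorithm itself is largely interchangeable.
for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "test accuracy:", round(model.score(X_te, y_te), 2))
```

Swapping the model in the loop changes little; changing how “high performing” is measured, or which inputs are allowed, changes everything the tool does.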
Robyn Fadden – host: At the time the toolkit was written, there were about 250 of these tools, but that number has grown as vendors see how many more organizations are getting on board with AI in every department and looking for the right tools.
Matissa Hollister: Part of the idea is that these vendors are counting on the fact that HR professionals don’t understand AI, so they can make these grand claims and sell their product without people really questioning how it’s actually going to work and whether it’s going to help them or hurt them. The goal was that by informing HR professionals, we would improve their use of it. But in the process, there were a number of startups involved in the project, partially because they saw themselves as the good actors. And they were having a very hard time differentiating themselves from actors they thought were being a lot less careful. By setting a standard – having companies ask the hard questions – they could rise above the others, and that hopefully will create a market for better-performing systems.
Robyn Fadden – host: So potential clients are asking these questions, finding that certain companies are offering what look to be the right solutions, but the clients want more objective information about these tools.
Matissa Hollister: In my people analytics class, we talk a lot about machine learning; it’s a big chunk of my class. For my exam, I find existing tools, and students have to write an essay exploring a tool – whether they think it’s a good one, and how they think it works. Some of the worst examples I found when I first taught the class four or five years ago don’t exist anymore when I go back now. And there are a few that did things that sounded cool but also a little weird and sketchy – would employees really like that? That can undermine trust. You go back and they’ve changed their business model and no longer do that, because they realized that even though it was technically possible, it was ethically problematic. So a number of companies have shifted what they do over time, I think as people become more savvy and say, well, that sounds cool, but it doesn’t sound appropriate.
Robyn Fadden – host: One of the things the toolkit’s white paper discusses is human-centred AI. Why is human-centred AI such an important aspect of these tools and their implementation?
Matissa Hollister: Many people said to me, we want to keep the human in human resources, obviously. But actually, the human-centred part of the toolkit was less about that, although that was part of the focus. When you say you want to keep the human in human resources, people are not that clear about what they mean by that. We have a long history of humans making biased decisions in HR. People often state as a general AI principle, whether in HR or anything else, that we want the human to make the final decision. But part of the point is that people are using these systems because they think the human hasn’t been making the best decisions. So part of what we tried to lay out in the toolkit was that it’s still important to allow the human to make the final decision, but also to provide them with guidelines about how they should be using the system, and how they may also need to document their decision.
Matissa Hollister: Another part of the human-centred piece was about having organizations think about the purpose of using the technology and what they’re trying to promote. Some people contrast the automation versus augmentation views of AI: are we developing AI to replace humans, or to augment human capabilities? One of the things the toolkit pretty explicitly discusses is that both the people who are going to use the tool – the HR professionals themselves, who may feel threatened by an AI-in-HR tool because it seems poised to replace them – and the people who are going to be impacted by the tool – the employees and job applicants – should be involved in the process of designing or selecting a tool. That is partially just an ethical thing, but I think it’s also really important business-wise. What we see is a lot of companies thinking they need to invest in AI because it’s the hot thing, buying these tools, and then discovering that nobody’s using them: either it was never a good idea in the first place and wasn’t actually addressing a need in the organization, or people are afraid of it, don’t understand how to use it, or are actively sabotaging it because they think it’s going to replace them. Involving the people who understand the process and will be impacted by it ends up producing a more successful implementation and actually achieving the productivity gain you’re hoping for.
Robyn Fadden – host: I’m glad you mentioned augmentation versus automation in terms of AI’s different uses. Because while the fear about being replaced is real, many AI tools can make people’s work and their work experiences better, such as ensuring that results are more accurate. These tools can also make an organization’s systems function more efficiently and with fewer errors, if implemented correctly and if it’s the right tool. The toolkit helps identify what is missing in an organization’s AI and data strategy when it comes to HR. We know that AI is a flexible technology that primarily learns from real-world data and identifies patterns within historical data – and it does this much faster and more efficiently than humans can. But it’s only applicable to the specific contexts in which that data was created – AI tools don’t work so well when those contexts and their conditions change.
Matissa Hollister: I have a slogan I created which I like: AI is a cutting-edge tool that encodes the status quo. The real strength of machine learning systems is that they learn from the real world. That’s typically contrasted with what people would call the rules-based attempts at artificial intelligence of the past. But because they’re learning from the real world, the system is not capable of discerning which aspects of the real world we like and think should be kept, versus which aspects we’re not so happy about and would like to get rid of. So the view that somehow, because a computer is doing it, the decision is more objective, or more fair, or free of bias, is completely untrue. Because the system is learning from the existing real world, all of its aspects – including bias, unfairness and inequality – end up reflected in the data and can end up encoded in the system.
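Here is a toy illustration of that encoding, using synthetic data rather than any real case study. Even when the demographic group is deliberately excluded from the inputs, a model trained on biased historical decisions can reproduce the disparity through a correlated proxy variable:

```python
# A toy illustration (synthetic data, not a real case study) of how a
# model "encodes the status quo": past hiring was biased against group 1,
# and the model reproduces the pattern even though the group label is
# never used as an input, because a proxy feature carries the signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)            # two demographic groups, 0 and 1
skill = rng.normal(0, 1, n)              # true, job-relevant ability
proxy = group + rng.normal(0, 0.5, n)    # an innocuous-looking feature
                                         # correlated with group

# Historical decisions: driven by skill, but biased against group 1.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# Train only on "neutral" features; group is deliberately excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
          f"model hire rate {pred[group == g].mean():.2f}")
```

Running this shows the model’s recommendations mirror the historical disparity: removing the sensitive attribute from the inputs does not remove the bias from the data.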
Robyn Fadden – host: Regarding AI and context, did you find that Human Resources has its own distinct challenges for implementing AI?
Matissa Hollister: We did a survey of the experts in our project community for the white paper we wrote, and there were a number of people who were excited about AI. That said, AI can only do certain tasks. Some of it is about implementation – AI can’t switch contexts. The other thing is that it needs to be a task for which you have enough data for it to find patterns. It’s most capable of doing tasks that are repeated a lot. Interestingly, about 20 years ago, before machine learning really existed, there was a paper written about which jobs are at risk of automation. The idea there was that it was routine tasks – tasks you could write a computer program to do. Machine learning has really upended that: you don’t need to specify all the rules, because it can find the patterns in the data, but you still need to have enough data. So the tasks it can learn are repetitive tasks.
Matissa Hollister: One of the things the people in the project community pointed out was that this is actually an opportunity. One place where you can be effective in using machine learning is to automate the most mundane and tedious parts of your job, where you’re just doing the same thing over and over again and thinking, boy, would I love to not have to do this. If machine learning can speed up that process, part of their argument was that it can actually make human resources more human, by taking out the repetitive parts and allowing the humans to spend their time on the parts that require empathy or human judgment. So that’s one place where you can augment in a collaborative way: identify the most mundane parts of the job. The other place you can augment is by identifying new capabilities you wish you had, if only you could process all that information. Take employee satisfaction surveys, where people write in free text about what they think. If you have a large company, you can’t sit there and read through all of that. But if machine learning – natural language processing – can filter through it and pull out the most common topics, it’ll allow you to do something you never could do before.
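As a rough sketch of that survey use case – hypothetical comments and a generic topic-modelling approach, not the method of any particular tool – free-text responses can be reduced to their most common themes in a few lines:

```python
# A minimal sketch (hypothetical survey comments, generic topic modelling)
# of pulling common topics out of free-text employee feedback with NLP.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

comments = [
    "Too many meetings, no time for deep work",
    "Great mentorship from my manager this year",
    "Meetings run long and agendas are unclear",
    "I would like more training on the new tools",
    "My manager gives helpful feedback and coaching",
    "Need better onboarding and training materials",
]

# Convert comments to word counts, dropping common English stop words.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(comments)

# Fit a small topic model; the number of topics is a judgment call.
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

# Print the top words per topic, a rough summary a human can scan.
words = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-4:][::-1]]
    print(f"topic {i}: {', '.join(top)}")
```

At the scale of thousands of responses, this kind of summary gives HR a starting point that no one could produce by reading every comment, while leaving interpretation to a human.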
Matissa Hollister: I think those are the two places where there’s the most opportunity. There are a lot of places where people are trying to just replicate a human process that already exists. That’s where you’re most at risk of automating a person’s job, because they’re doing exactly that task – if it’s the worst part of their job, maybe that’s okay, but if it’s the main part, that’s more problematic. On top of that, it’s also where you’re most likely to encode bias, because you’re just trying to replicate what the human was doing; you’re not offering any improvement. That led to my new slogan, which I came up with a couple of weeks ago: artificial intelligence is not very intelligent, unless used intelligently. I think that’s too many intelligences in one sentence. But I published a piece with the World Economic Forum – it may also have been republished by Delve – that talked about this: human innovation is what makes AI so powerful. The AI tool is actually not that smart. It’s just looking for patterns, but it’s the human who says, oh, you know what would be great to look for patterns in? This – here’s a task we haven’t been able to do in the past. And you can do some really cool things that way.
Robyn Fadden – host: I’ve heard it said about some AI tools that they may give you answers you hadn’t even known you were looking for. Since AI is based on real-world data, as you’ve said, it’s difficult if not impossible for AI not to replicate human bias. An example you’ve mentioned elsewhere is that Amazon created a hiring tool that ended up favouring men over women – what the tool really showed was that men had been more successful in that particular organization than women because of the way the organization functioned. It’s actually showing you how your organization is structured and how it’s actually biased.
Matissa Hollister: That’s in some ways where I draw the line between people analytics and machine learning. What they should have done was people analytics, which is about understanding: you could do the same thing, where you look at the patterns of who gets promoted and who gets ahead in your organization, identify problems, and try to track down the causes and address them. That’s people analytics. Machine learning is more like: let’s just take the data from the past and try to automate it. And it can go out of control, I think.
Robyn Fadden – host: Are there aspects of HR work that AI tools can’t be used for? Is this because of technological capacity or a lack of data, especially when it comes to smaller organizations? Or a question of ethics? Or both?
Matissa Hollister: One place is the data availability issue. There are interesting tradeoffs in the HR setting, because a lot of the popularity of AI really came out of the availability of massive data – social networks, millions and millions of observations. Very few companies have a million employees – I know of a couple, but not very many. And even within such a company, are we talking about a tool where the patterns we identify apply across all jobs? Or do we need to do something that’s specific to one job? Then we can be down to very, very small numbers of people. There are tradeoffs there: some AI vendors I know only focus on large companies; others pool data across companies. Then you really have to think about whether the data from this other company is relevant – how similar is the context, how are they deciding what the relevant training data is? There can be advantages to pooling, because one of the other dangers of AI is that it only recommends the best of the combinations you’ve tried in the past – you have this historical data. If you’ve only ever hired from prestigious universities, you don’t even have information on whether that matters, because you’ve never hired from a non-prestigious university. But if another organization has, their greater diversity of experiences might actually help you; if they have more data on a more diverse workforce, pooling can actually be helpful. So it’s a bit of a tradeoff.
Matissa Hollister: So it’s not impossible to use AI in a small organization by pooling data, but there’s an assumption that the context and the patterns are still applicable. It really depends on the kind of job, the kind of organization, and the task we’re talking about – whether we think it’s applicable to bring the data together. Frankly, one of the things I tell my students in my classes is that there’s this illusion that AI is really accurate. That really depends upon the context and the task.
Matissa Hollister: Interestingly, the big successes of AI and machine learning have been around language and image recognition – very tough, ambiguous tasks. But those are still about seeing something right in front of you and saying, is this a cat or a dog? What does this speech mean? It’s concrete, right in front of you. Many other tasks, especially in HR, involve human processes and trying to predict into the future – is this person, five years from now, going to be a good employee? AI is just not going to be that accurate at that; nor is a human. Part of the challenge of hiring is that we don’t really know who’s going to be the best worker. The idea that machine learning is somehow going to be magically better and super accurate is a little bit unlikely. So I tell my students: don’t expect any model, whether it’s people analytics or machine learning, to be highly accurate in an HR setting. All of these systems have to be taken with a grain of salt. The more ambiguous things – things that are highly affected by random chance you can’t measure, things that are really difficult to measure, or that are influenced by factors that can’t be put into the algorithm in the first place – will be difficult for machine learning.
Robyn Fadden – host: The tools can be most useful in HR in a more holistic, human-centred context.
Matissa Hollister: That’s part of the reason I came to the conclusion that the chapter on implementation – and transparency – that we put in the toolkit was really important: the person who’s using the tool should understand exactly what the tool is doing. Selling it as “we identify high-potential candidates” is very misleading. Compare that with: we identify candidates who are predicted to score high on your performance measure – and you know what your performance measure measures – based on the following inputs. It’s measuring skills, or it’s measuring soft skills; you should know what factors it’s considering, and maybe even see a list of the factors we know it doesn’t consider, so that the human knows. If I know it doesn’t consider whether or not the person is going to cause conflict, then that’s something I should be looking for in my interview, because it’s not part of the algorithm.
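One way to imagine that kind of transparency – a hypothetical sketch, not a format prescribed by the toolkit – is a simple fact sheet that travels with the tool, stating what it predicts, how the outcome was measured, and which factors it does and does not consider:

```python
# A hypothetical sketch of a transparency "fact sheet" for an HR tool.
# All field names and values are illustrative, not a published standard.
from dataclasses import dataclass, field

@dataclass
class ToolFactSheet:
    predicts: str
    outcome_measured_as: str
    inputs_considered: list[str] = field(default_factory=list)
    inputs_not_considered: list[str] = field(default_factory=list)

sheet = ToolFactSheet(
    predicts="score on your annual performance rating",
    outcome_measured_as="manager rating, 1-5 scale, last two review cycles",
    inputs_considered=["skills test score", "years of experience"],
    inputs_not_considered=["interpersonal conflict", "team fit"],
)

# An interviewer reading this knows what the algorithm leaves out,
# and therefore what to probe for in the interview itself.
print(sheet)
```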
Robyn Fadden – host: Where could this toolkit lead managers in terms of the ethics of using AI at work, especially when it comes to decision making?
Matissa Hollister: I think this is where the emphasis in the toolkit really was: the idea of a checklist, educating people on where AI can go wrong and what should be considered about a tool before adopting it, and then having a series of questions that lead the organization through the decision. Starting with: why are you using this tool? How do you think it’s going to help? What are the risks, what is the risk level of the tool, and where do the risks come from? Then understanding how the tool works, and thinking through how you’re going to implement and monitor it. The hope is that it will make organizations think more about why they’re using AI in the first place, and, if they do use a system, help them develop a plan and recognize that it may very well be a waste of money if they just buy the tool and don’t think through the implementation and monitoring.
Robyn Fadden – host: As Artificial Intelligence tools continue to proliferate and offer solutions to problems both old and new, more critical inquiry is needed around not only what tools work best in what contexts, but what their ethical implications and real-world human fallout could be. Thanks to our guest today on the Delve podcast, McGill Desautels Faculty of Management Professor Matissa Hollister for giving us an overview of these issues in the human resources realm. You can find Matissa Hollister’s AI Toolkit for Human Resources Professionals, as well as more information in the World Economic Forum’s Human-Centred Artificial Intelligence series, at weforum.org.
Robyn Fadden – host: Thank you for listening to the Delve podcast. You can follow DelveMcGill on Facebook, LinkedIn, Twitter and Instagram. And subscribe to the DelveMcGill podcast on your favourite podcasting app.