Disruption in the Workplace: Artificial Intelligence in the 21st Century


With Yann LeCun, Director of AI Research at Facebook and Silver Professor and Founding Director of the Center for Data Science at NYU, and Professor Matissa Hollister
Yann L.: People don’t keep the same job their entire career. Right?
Matissa H.: Yeah.
Yann L.: They change careers pretty often. And so, people need to learn the basic skills that will allow them to learn. So, it’s learning to learn, it’s not learning things. It’s learning to learn, really. That becomes more important than anything else.
Host: Welcome to season one, episode one of Delve, a podcast from McGill University’s Desautels Faculty of Management where we’ll hear from management researchers and practitioners as they explore the latest ecological, social and economic challenges that we face as a society. I’m your host, Moe Akif, and today we’ll tackle a question that seems to be on everyone’s mind: is A.I. coming for your job? The introduction of new technologies has transformed job markets for centuries, and today, history repeats itself as artificial intelligence, machine learning and autonomous technologies are changing jobs and shifting the ground beneath jobs that were once stable. Who better to help us understand the repercussions of the fourth industrial revolution than Yann LeCun, Director of A.I. Research at Facebook, and Silver Professor and Founding Director of the Center for Data Science at New York University. In discussion with Matissa Hollister, an assistant professor of organizational behavior at McGill University who specializes in the changing nature of work and the labor market, they’ll help us get a handle on the evolution of A.I. technology, how it’s applied in the context of jobs, and how workers can prepare for the future.
Matissa H.: So, I want to start by thanking you for coming and thanking all of you in the audience for attending this very interesting event. So, as already mentioned, I’ve been studying the changing nature of careers and the employer-employee relationship over the last four decades, and a lot of my work has focused on documenting the shift from long-term jobs, where there was an expectation that you would work mostly with one employer and look for internal promotions, to shorter-term work and an expectation of careers that would span multiple employers. This may seem like an obvious trend to you, but when you look at government labor force statistics, both in the U.S. and Canada, it has actually proven quite difficult to find evidence of this trend in the past, and many other researchers have previously concluded that there’s not much happening here.
Matissa H.: And so, this move towards short-term work is relevant for today’s topic in two ways. One is that technological change is likely a cause of the shift towards short-term work. More rapid technological change means that skills become obsolete faster, and so there’s less of an incentive for employers, and potentially even employees, to develop and maintain a long-term employment relationship. The second part, though, is that I don’t think technological change is the sole reason for the shift to short-term work. Other factors include the rise of global competition, the increasing power of shareholders, who tend to focus more on short-term profits, and the declining power of unions, and this has led to an increasing view of workers as a cost that needs to be minimized. And so, my second connection to this talk is that I worry a little bit about the context in which A.I. is being developed and implemented, and how this might impact these trends. Could you describe, hopefully in as little technical language as possible, what is deep learning and how is it different from previous evolutions of A.I.?
Yann L.: So, machine learning, of course, was there from the beginning of A.I. People have been working on A.I. since the 50s, when the phrase was coined, and very early on, people realized learning was probably going to be an important component of artificial intelligence. But there have been several waves of interest in machine learning techniques. The first wave was in the late 50s, early 60s, and it kind of died out for a number of years. It then reappeared in the 80s and died out again. And now it’s reappearing under the name of deep learning. So, what deep learning is, the reason it’s called deep, is by contrast with previous machine learning techniques that we could call shallow, though that would be unfair. Essentially, the traditional machine learning techniques do relatively simple computation. So, instead of programming your machine directly by writing a sequence of instructions, which is traditional programming, you write a relatively short program which has lots of parameters that are adjustable.
Yann L.: And then, you train the machine to find a setting of the parameters that will get the machine to do what you want. A typical example is that you want to train a machine to recognize speech or recognize objects in images. You collect lots of images, of, say, cars and airplanes, and you show an image of a car to the machine and you wait for it to produce an answer. And if the answer is different from “car,” then you tell it, “You got the wrong answer; here is the correct answer,” and it adjusts its internal parameters so that next time you show the same image, the answer will be closer to the one you want. So, in the past, the part of the machine that was able to train this way was relatively simple, and much of the work had to be done through engineering, by constructing a way for the machine to represent the images.
Yann L.: For example, in such a way that the learning algorithm could actually do something with it. And that required a lot of manual intervention, a lot of skill and sort of engineering. And then, what deep learning is, is basically a way to automate this part. So, instead of having a piece of the system that is handcrafted and a piece that’s trained, the entire system is trained, and it’s called deep because you can conceptually see the system as being composed of multiple layers of processing.
Yann L.: So, the image is fed in at one end, it gets processed by these multiple layers, and at the end it produces an output, and all those layers are trained from end to end simultaneously. That’s what you call deep learning. What this technique has brought to the table, over the last five years or so, even though the basic techniques are very old, 25 or 30 years old, is that because of the increase in the power of computers and because of the availability of large data sets on which to train those systems, we’ve seen an incredible improvement in the performance of image recognition systems, video analysis systems, speech recognition systems, text understanding systems, and language translation systems.
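To make the training process described above concrete, here is a minimal sketch of that kind of end-to-end, supervised training loop in PyTorch. It is purely illustrative: the layer sizes, the ten-category output, and the random stand-in data are assumptions for the sketch, not anything from the talk.

```python
import torch
import torch.nn as nn

# A "deep" model: several layers of processing, all trained together, end to end.
model = nn.Sequential(
    nn.Flatten(),             # the raw image pixels go straight in
    nn.Linear(28 * 28, 128),  # layer 1 (sizes are arbitrary for this sketch)
    nn.ReLU(),
    nn.Linear(128, 64),       # layer 2
    nn.ReLU(),
    nn.Linear(64, 10),        # output: one score per category ("car", "airplane", ...)
)

loss_fn = nn.CrossEntropyLoss()                          # how far is the answer from the one you want?
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # adjusts the tunable parameters

def training_step(images, labels):
    """Show labeled examples and nudge the adjustable parameters toward the right answers."""
    predictions = model(images)          # the machine produces an answer
    loss = loss_fn(predictions, labels)  # compare it with the correct label
    optimizer.zero_grad()
    loss.backward()                      # compute how each parameter should change
    optimizer.step()                     # adjust the internal parameters slightly
    return loss.item()

# Usage, with random tensors standing in for thousands of human-labeled images:
images = torch.randn(32, 1, 28, 28)    # a batch of 32 fake 28x28 "images"
labels = torch.randint(0, 10, (32,))   # their correct categories
print(training_step(images, labels))
```

Repeating `training_step` over thousands of labeled examples is, at this level of description, what “training the machine” amounts to.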
Yann L.: So, all of those systems, now deployed by all the big companies, use deep learning. When you talk to your phone and the phone can recognize your query, or your search query if you are on Google, or whatever, it’s a deep learning system that understands your speech. When one of your friends posts something on Facebook in a foreign language you don’t understand, and it’s translated automatically into a language you do understand, that’s also done by a deep learning system. All the work on self-driving cars that you hear about, and there are a lot of companies that are very excited about the possibility of having autonomous cars, will use deep learning, and we’re going to see a lot more applications of this in the near future.
Matissa H.: And so, one of the differences from other approaches to artificial intelligence, in terms of machine learning as a more general principle, is that you’re not telling the machine, “this is what a car looks like.” You’re telling the machine, “here’s a bunch of data,” and it figures out the pattern that defines what a car is. That’s what’s meant by saying it’s learning.
Yann L.: That’s right.
Matissa H.: And then the deep learning is just allowing that learning to be much more complex than before.
Yann L.: Yeah. Basically to feed the machine directly with the raw image.
Matissa H.: And one thing you wanted to comment on was that you think these machines are not necessarily as advanced right now as people think.
Yann L.: Right. So, it’s very easy to get a little confused when the machine does a particular feat at a level that is above human performance. You train those systems to recognize images and they can recognize, you know, obscure species of plants from the shape of the leaves, or they can recognize breeds of dogs or species of birds, right? And most people can’t do this; some people who train themselves to do it can. So, it’s the scenario where you show an image or a text to a machine, and then you tell it what the correct answer is. It’s a little bit like showing a picture book to a small child: here’s an elephant, you say, “it’s an elephant.”
Yann L.: Then, after a few examples of that, the child kind of recognizes the concept, right? So it’s like this, except we need thousands of examples of each of the categories, in most cases. Most of the learning that humans and animals do is not of this type. We learn most of what we know about the world by just observing, or by interacting with objects. And that kind of learning, we don’t quite know how to do yet. We have some ideas, and all of us are working very actively on trying to find ways to make this work, but it doesn’t quite work. And until we find ways to do this, we’re not going to have truly intelligent machines. It’s a necessary condition for making significant progress towards general intelligence, but it’s not a sufficient one either, so we don’t know where the next obstacle will be after that. It might take quite a long time before we have truly intelligent machines.
Matissa H.: So, you’ve already mentioned a few applications of artificial intelligence. At the moment, artificial intelligence, mostly in the form of deep learning, is being used to learn very specific and narrow tasks, as we just discussed. So what are the common characteristics of the tasks that deep learning is currently good at learning? What do you need in a task in order to be able to train an A.I. to do it, and potentially do it better?
Yann L.: So, the things for which those techniques work well are, you know, anything that a human can learn to do, perhaps over a long time, but can then perform in less than half a second or so. So, things that don’t require a lot of thinking and reasoning and mulling over: perceptual tasks of this type. If you look at a scene, neuroscientists tell us that you can pretty much tell which objects are in your visual field in about a tenth of a second or so. So, any task that animals and humans can do really quickly like this, those techniques are pretty good for. What that translates into is tasks for which you can collect thousands or millions of examples, and those examples have been labeled by humans, so you know what output should correspond to particular inputs.
Matissa H.: And how about the other side: what might characterize the kinds of tasks that are very unlikely to be learned by artificial intelligence anytime in the near future?
Yann L.: Okay, so you have to put time horizons on it, and that’s very difficult because, as I said before, we might make some progress towards general intelligence, and in my opinion that will take, you know, a few decades. So, we have some time before that happens. With the techniques that we currently know about, and the extensions that will occur in the next few years, the types of task I think can be automated are the ones for which we have lots of data, and for which there is a fairly direct mapping from input to output. So, there is sort of an easy decision to make that doesn’t require a lot of thinking, but that maybe requires taking into account a lot of different variables. Then those systems can actually do a better job than humans, and they would be more consistent about the decisions they make.
Yann L.: They won’t get tired. So, for driving a car, for example, if we can build systems of this type that can drive cars, accidents due to inattention, for example, would be reduced by a lot. So, that would be an opportunity to save lives with A.I. Similarly, a very promising set of applications of image recognition in particular is medical image analysis. And so, I think, in the near future, there are going to be automated systems that can essentially process a lot of medical images and sort of eliminate the simple cases, and then send the trickier, more difficult or suspicious ones to the radiologists and the doctors, who then will be able to concentrate on the difficult cases.
Yann L.: Ultimately, I think what those systems are going to be used for is assisting creation. When you have a system that can turn a rough line drawing into a painting in a particular style, and we already have technology like this, it just allows a lot more people to be creators. I think this is going to have the effect of amplifying human creativity. And I think human creativity and human-to-human communication are what is going to become valuable.
Matissa H.: And so, I wanted to discuss one example, because I’ve been looking into it myself, which is using what’s called artificial intelligence, as you said some companies are even doing now, to evaluate resumes and to recommend the best job candidate. What do you think about that application?
Yann L.: Yeah, so there are a number of applications that a number of companies, large and small, have wanted to use machine learning techniques for. They’re not necessarily deep learning techniques, by the way; a lot of them use very simple machine learning techniques that were around 20 years ago.
Matissa H.: Okay.
Yann L.: And the problem with this is how to make sure that the decisions are unbiased, or at least less biased than the human decisions that would otherwise be made. And also, those systems generally are decision aids. So, they don’t actually make the decision; they produce inputs that are then interpreted by humans to make the decisions. And so, these are situations where you want the system to actually produce explanations. Any decision about people’s lives, like, do I offer you a job? Do I give you a mortgage? If I’m a judge, do I let you go out on bail? Those are things that affect people’s lives, and those are situations for which you want explanations out of the system. And so, there’s been talk about the fact that neural nets and deep learning are sort of difficult objects from which to generate explanations. I don’t think that’s the case. They’re not any more difficult than simpler techniques; it just seems more difficult because they produce more accurate answers, basically.
Matissa H.: But one of the dangers of deep learning, depending upon the application, is that it’s trained on real-world data. And therefore, one has to be very cautious about the data that it’s being trained on, right?
Yann L.: Yeah.
Matissa H.: So, it’s not going to be better than a human; it’s going to reflect whatever human decisions it’s trained on. So, if you use past hiring data to train your algorithm for hiring, then that data will reflect any biases of the humans that created that data in the first place. And so, eventually, as I think we discussed, it might be possible to try and fix some of that, but at least at the moment, at its most basic level, it’s learning the real world at its best and its worst at the same time.
Yann L.: Well, so there’s sort of two remarks on this. So the first one is, you can actually get machines that are better than any individual person who has generated the data because the data generally is produced by an ensemble of people and there’s wisdom to the crowd to some extent, right? So, individual variations are kind of smoothed out when you have a large dataset from multiple people. So, that’s one point.
Yann L.: The second point is, there are techniques that people are working on. I wouldn’t say they are yet recipes that everybody can apply, but there is quite a lot of work on trying to de-bias data, in a way that if there are certain variables that you don’t want your system to use, not only does the system not use them, it also doesn’t use other variables that are correlated with them. There are techniques that try to remove the information about those variables from the systems. And so, in the end, you might get a system that is less biased than any human actually doing the same task. So, I’m very hopeful that there are going to be methods, and sort of good techniques, to build systems of this type that actually are considerably less biased than the corresponding human decisions.
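As one concrete illustration of that idea, the sketch below strips a sensitive variable’s linear footprint out of the other features by keeping only the residuals of a regression. This is just one simple de-biasing approach, shown with made-up NumPy data; it is not presented as the specific technique LeCun has in mind.

```python
import numpy as np

def decorrelate(features: np.ndarray, sensitive: np.ndarray) -> np.ndarray:
    """Return features with their linear dependence on the sensitive variable removed."""
    # Regress each feature on the sensitive variable (plus an intercept)...
    X = np.column_stack([np.ones_like(sensitive), sensitive])
    coefs, *_ = np.linalg.lstsq(X, features, rcond=None)
    # ...and keep only the residual, i.e. the part the sensitive variable cannot predict.
    return features - X @ coefs

# Usage with synthetic data: one feature strongly correlated with the sensitive attribute.
rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=1000).astype(float)
features = np.column_stack([
    sensitive * 2.0 + rng.normal(size=1000),  # correlated with the sensitive variable
    rng.normal(size=1000),                    # unrelated noise feature
])
cleaned = decorrelate(features, sensitive)
print(np.corrcoef(cleaned[:, 0], sensitive)[0, 1])  # now close to zero
```

A real system would also have to handle nonlinear dependence and be validated carefully, but the residual trick conveys the core idea of removing the information that correlated variables carry.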
Matissa H.: So, as I mentioned earlier, I am also interested in the context in which A.I. is being implemented, and there are two aspects to that. One aspect is that it does seem like a lot of A.I. research is being funded and conducted inside private companies, including your employer, Facebook. Should this be a concern? Does that influence the kinds of applications that A.I. is being put to use for?
Yann L.: So, first of all, I should say that it is true that some companies have invested massively in A.I. research, but still the majority of good ideas come from academia. Many of them from here, from Montreal, from [inaudible 00:18:20]’s lab, sitting right here, from Mila. And an interesting phenomenon has occurred in the last few years, which is that most companies that are involved in A.I. research actually practice open research. It’s certainly the case at Facebook, where all the work that is done at Facebook A.I. Research is published, and most of the code is distributed in open source. And the reason we’re doing this is because we don’t think A.I. is a solved problem, of course, and it’s going to take the entire research community to make significant progress. And that has had kind of a ripple effect on other companies.
Yann L.: So, Google became more open than they were in the past. Apple even started publishing papers, which they had never done before. So, it’s producing a bit of a cultural change in the attitude towards research. After the upstream research has been done and you publish papers, and you invent a new technique, you compare it on public data sets and you show that it works well, then it goes into product development, and that part is generally not published and it’s trained on internal data and everything. But the basic ideas are all published.
Matissa H.: So, looking more specifically at the context in which A.I. is often implemented within a company, there’s a paper that I often have my students read, written by one of my former advisors, that looked at the implementation of a technology that, it turns out, you were actually part of developing, which was check-scanning software in banks. In that paper they had an interesting contrast where they looked at two different departments where the technology was introduced. In one department, what had been a job where one position did four different tasks, the computer replaced one of those tasks, and the result was that they actually broke the rest of the tasks up into individual jobs. And that ended up with some pretty unappealing jobs. One of the jobs was literally taking out the staples and putting the checks in the right order to give to the computer.
Matissa H.: Another job was just typing in the numbers that the computer couldn’t read. Interestingly, in contrast, in the other department they consulted with the workers ahead of time. They already had more specialized jobs, looking at different types of exceptions in check processing, and in consultation with the workers, they actually did the opposite: they combined several tasks together, they created more complex, more interesting jobs, and they used the computer to take away the most frustrating and annoying parts of the job. And so, the takeaway from that, and it actually emerges quite a few times in social science research on technology, is that the technology itself is not the determinant of how it impacts work; how it’s implemented, how it’s developed beforehand, and how it’s put into place within the company make a big difference as well.
Yann L.: Yeah, so I think I know what happened in that situation. The situation you were referring to, with check reading: in the early 90s I was involved at AT&T Bell Labs in developing a check-reading system. And this was deployed widely by a company called NCR, which at the time was a subsidiary of AT&T. By the late 90s we were not connected with that project anymore because the company had split up. But by the late 90s, the system that we developed was reading on the order of 20% of all the checks in the U.S. Essentially, it was a large machine into which you put a stack of checks, and it would read the checks extremely quickly, several thousand per minute, and it would accept about half of the checks.
Yann L.: So, half of the checks would be automatically read and never seen by humans within the bank, and then the other half would be sent to the people you were talking about. So, you know, I felt a little bad, because that’s half of those people being out of a job. But, in fact, no, they weren’t out of a job. What happened is that this entire process lowered the cost of processing for the banks, and the employees ended up doing tasks that were actually probably less frustrating than sitting at a screen reading checks all day.
Matissa H.: How often is that the practice, that an A.I. researcher says, “I have an idea for a task. I’m going to go and actually meet the people doing that task, sit down with them, and talk to them about what they like and don’t like about their job, and how I can make it better”? Is that a common practice, or is it usually that an A.I. scientist says, “I know how I can do a task,” and they go ahead and do it, and they don’t really think about the worker?
Yann L.: Okay. So, I think it’s pretty rare for a sort of an A.I. scientist in academia to do this yet. But it’s not rare for people who actually want to deploy A.I. technologies in the real world. So, you know, there’s kind of a whole chain of research and development, right, where the basic research might have been done 20 years ago. And then, you know, it only became practical in the last few years, which is what happened with deep learning. And, there’s still a lot of, kind of, theoretical research and basic research on this. But, then there is a whole lot of people who want to apply this technology to various things and they have to talk to the users of that technology to figure out how to best, kind of, build it so that it actually serves a purpose.
Matissa H.: So, as we discussed, I’m very interested in careers, and I have to make this somewhat personal. I know you have three sons, and so I’d be interested to know what kind of career advice you’ve given them, since you probably have a better sense of what’s coming in the future than other people do. How have you advised your sons in terms of being successful?
Yann L.: One of them studied law…
Matissa H.: Yeah.
Yann L.: …is a lawyer. The second one is a mathematician, and the third one is actually majoring in economics and also studying computer science and data science. And again, this was a bit before all the events that occurred recently around A.I. I think what I would recommend to people is to learn things that have a long shelf life, and sort of specialized, well, not specialized, but things that make you unique.
Yann L.: So, if you have a particular combination of skills that doesn’t exist very often, and if you learn things that have a long shelf life like, for example, mathematics and physics, that’s not going to change very much. And you would think that because computers are good at computation, scientists would be useless, but that’s not the case, at least not for a very long time. So, we might talk again in 30 or 40 years and things might be different if we figure out how to build more intelligent machines. But I think, ultimately, A.I. systems will be at our service. It will be an amplification of intelligence, not a replacement, the same way the complex part of our neocortex is actually subservient to our reptilian brain.
Matissa H.: That’s interesting. And, how important do you think it is that everybody becomes sort of technologically literate? Is that going to be an important skill in the future?
Yann L.: Well, it’s already the case, right? Everyone in this room, everyone in the world, is considerably more technologically sophisticated than the average person 30 years ago or 50 years ago, let alone a hundred years ago. So that goes with the times.
Matissa H.: But, do we need to know, does everyone need to learn how to code?
Yann L.: So, in the sense that learning to code is a way of reducing a complex problem to a simple set of instructions, it is a very basic skill that people need to have. In classical education, European education in the mid-20th century, you had to learn Latin. Why? It’s not clear. Or you had to learn math even if, ultimately, you weren’t going to use it; math has basically replaced Latin for that purpose. It’s basically to sort of build your mind, right, to know how to think. And coding is one of those things that makes you think about how you reduce a complex problem into simple operations and things like this. So, it’s not that everybody has to be a programmer or a computer scientist, but the basic skill that is required for coding, I think, is a very good skill to have. Yes.
Matissa H.: So, thinking about how we should prepare for the future, let’s start with educational institutions, since we’re both from them. How do you see educational institutions as potentially needing to adapt and change with technology and A.I. in mind in the future?
Yann L.: So, the first thing perhaps is that technological progress is accelerating…
Matissa H.: Yeah.
Yann L.: … And educational institutions are known for their conservatism and their slow pace of change.
Matissa H.: I’m shocked by that statement. Yes.
Yann L.: I’m sure everyone here is shocked as well. So, I think it’s going to be increasingly difficult for academia to keep up with technological transitions. And you see it with certain schools: with the recent success of A.I. and deep learning, for example, some of the more conservative schools actually completely missed the boat. Institutions will have to find ways to combat excessive conservatism in that respect, but also, I think, concentrate not necessarily on teaching students what is useful right now, but on what could be useful for their entire career. People don’t keep the same job their entire career, right? They change careers pretty often. And so, people need to learn the basic skills that will allow them to learn. So, it’s learning to learn, it’s not learning things, it’s learning to learn, really. That becomes more important than anything else.
Matissa H.: Yeah. And then finally, what do you think government, or society more broadly, should be thinking about now to ensure the more positive rather than the more dystopian future?
Yann L.: So, again, I have to say, I’m not an economist and I’m certainly not a politician, but I think it’s more of a political question than a technical question. Clearly, the progress of technology seems to be causing an increase in wealth and income inequality, and governments will have to find ways to correct for that. And it’s already happening; it’s not due to A.I., and there is nothing special about A.I. in that respect. People think there is some qualitative difference about A.I. that will make it qualitatively different from other technological progress. I don’t actually believe so.
Yann L.: I think it’s sort of the same phenomenon that we observe with all technology. Going back to the first industrial revolution, most of the population in North America and Europe was working in the fields, and 60 years later or a hundred years later, it’s 2%. Those transformations occur. It’s not that the number of jobs decreases; new jobs are created. There are a lot of people today working in jobs that didn’t exist 10 years ago. So, I’m not worried about the fact that jobs will be taken away by robots. That’s not the issue. Someone said, “I think we’re not going to run out of jobs until we run out of problems,” which I think is an interesting remark.
Host: That was Yann LeCun and Matissa Hollister talking about A.I. technology and the future of jobs for the Delve podcast. This episode is adapted from an integrated management symposium hosted by the Marcel Desautels Institute for Integrated Management at McGill. If you enjoyed this podcast and want more insights, you can subscribe on your podcast app of choice or visit us at mcgill.ca/delve.