Dean Gillian Lester on AI and the Future of Work
How much is artificial intelligence—and the algorithms it generates—changing the experience and nature of work? What does that mean for employees’ privacy and the future of the legal profession?
Gillian Lester, Dean and Lucy G. Moses Professor of Law, is an employment law scholar whose interests include income inequality, public finance policy, workplace law, and the design of social insurance laws and regulations. Here, she shares her insights and ideas about the ways artificial intelligence (AI) is revolutionizing the world of work.
Will the exponential growth of AI transform workplaces and the lives of employees in ways similar to past technological advances?
Absolutely. It’s a technological paradigm shift, like the Industrial Revolution. The deep interplay between technology and the nature of work is a reality that goes back to the printing press and, before that, the invention of tools—the march of human progress. Work is always in a dynamic relationship with social and technological change, including automation. Some jobs that used to take multiple people can now be done by a single machine without human assistance.
Will there be a wholesale loss of jobs?
That question is more complicated. Some jobs will disappear in the short run, for sure—some estimates are that one-quarter of existing jobs in the U.S. and Europe could be replaced entirely by AI, with others being affected to varying degrees. In the longer run, AI will certainly create jobs as well, as innovation begets new forms of production, challenges, and opportunities. The balance between these forces remains to be seen.
What about privacy and AI? Employers are using AI to monitor workers. Amazon surveils warehouse employees with cameras and the scanners they use to track inventory. Many corporations have access to their employees’ emails. Aren’t those examples of an invasion of privacy?
Surveillance in the workplace is not a new phenomenon. There have always been bosses watching workers to make sure they’re moving the widgets along the assembly line. And there’s nothing per se unlawful about a manager monitoring a worker to measure their productivity and see that the task at hand is getting done. That said, there are more and less humane ways to monitor workers, and it will be important, as AI makes surveillance easier, to attend closely to basic protections of dignity and civil liberties of workers.
What are the legal protections for public and private sector workers in terms of privacy?
For workers in the public sector, the Constitution doesn’t have a section devoted purely to privacy—although to the extent workplace privacy enjoys any constitutional protection at all, much of it grows out of Fourth Amendment jurisprudence on search and seizure.
For those who don’t work in government, some combination of tort law and state statutes offers a patchwork of privacy protections. So, for example, there might be legislation preventing an employer from monitoring worker productivity by using a hidden camera that also records any conversations the employees might have. As detailed and extensive as legislative and common law privacy protections might appear to be, the devil is in the details. It is often held that so long as a worker has consented to monitoring, surveillance, or a search, the employer is within its rights to engage in the practice without legal liability. But that consent can be sweeping. A worker might sign something when they fill out their employment application stating that they consent to whatever monitoring happens in the future—and they might not even remember signing the form. But are they really going to sue for unconscionability of the contract? And if you work at Amazon, and that’s the big local employer, and you need your job, you may not have much choice about consenting to that monitoring.
What types of workers are most vulnerable to surveillance?
Workers with less nuanced or complex tasks are more likely to be monitored, and that monitoring will tend to track a class hierarchy in the workplace. Workers in the lowest-wage jobs are most likely to be subject to automated monitoring techniques, often carried out through invisible electronic means, so the least advantaged workers may face the greatest burden of this type of monitoring. Gig workers—the Uber drivers, the TaskRabbit workers—may be using their own phones, but the platform they work with is monitoring them; it’s wired into the system, and that’s part of the deal. But gig workers are also independent contractors, not defined as employees for purposes of laws that might exist to protect employees and require, for example, their consent. So, gig workers beware: You may be subject to a great deal of surveillance enabled by the technology platform you need to do your job.
Switching to the legal profession, many law students and young lawyers are worried that AI and ChatGPT will take away their jobs. Is that a reasonable fear?
AI makes certain tasks that junior lawyers have primarily done in the past much easier. ChatGPT can write a reasonably competent memorandum that cites cases and synthesizes precedents in a particular area, and it can produce a first-draft narrative account of an area of law. It can also be useful for taking complicated language and breaking it down into simpler terms. But there are real hazards associated with overdependence on ChatGPT. It can invent cases that don’t actually exist, and it can make assertions that are overly confident. You really need to check all citations and make sure everything is correct as well as balanced. When you’re a lawyer, if you get things wrong, the consequences are very serious.
What are some unique lawyering skills that AI can’t replace?
When you’re training to be a lawyer, a big piece of what you’re learning is to exercise nuanced judgment in hard situations where the answers aren’t obvious. And bots are not so good at looking at all circumstances; what they know is what they see in the database. But what they don’t know is the preferences of the client, the risks of different strategies in light of the idiosyncrasies of the parties, and so on. A bot can’t “read the room” in a negotiation or a courtroom. These are real limitations that bring into sharp relief the distinctiveness of human judgment, as well as the interpersonal, relational side of lawyering.
There’s so much fear about AI. What do you see as the upside?
This new technology will be a game changer in the advancement of human knowledge, creating untold new possibilities, new innovations, and new frontiers of human activity.