Deep Dive Episode 258 – AI & Antidiscrimination: AI Entering the Arena of Labor & Employment Law [Keynote Address]

Artificial Intelligence (AI), once the stuff of science fiction, is now more than ever a part of everyday life, regularly affecting the lives of individuals the world over, sometimes in ways they may not even know. AI is increasingly used in both the public and private sectors for facial recognition, dataset analysis, risk and performance predictions, and much more, though how companies use it and how much influence it actually has can be unclear.

Experts have warned that the expanded use of AI, especially in areas related to labor and employment, could pose serious issues if left unexamined. Some contend that AI tools can help make hiring processes more efficient and perhaps remove human biases from the equation. Others note that while this may be an admirable goal, many AI tools have been shown to produce discriminatory outcomes. The opaque nature of how some of these AI tools operate further complicates matters, as how an AI came to a particular decision and what data it referenced may not be clear to a human reviewer, thus making discriminatory practices harder to identify.

All of these issues, especially given the increasing use of AI tools in the hiring processes of many companies, raise several questions concerning AI’s entrance into the labor and employment space. What benefits and challenges does using AI in hiring present? How can AI be used to combat discrimination? What happens when AI itself is discriminatory, and how can that be identified and addressed? What statutes and regulations apply to AI, and do the existing legal and regulatory frameworks concerning anti-discrimination in labor and employment suffice to address the novel nature of AI?

Listen to the panel discussion here: Deep Dive Episode 259 – AI & Antidiscrimination: AI Entering the Arena of Labor & Employment Law [Panel Discussion]

Transcript

Although this transcript is largely accurate, in some cases it could be incomplete or inaccurate due to inaudible passages or transcription errors.

[Music and Narration]

 

Introduction:  Welcome to the Regulatory Transparency Project’s Fourth Branch podcast series. All expressions of opinion are those of the speaker.

 

On April 4, 2023, The Federalist Society’s Regulatory Transparency Project hosted a live luncheon and panel discussion titled “AI and Anti-Discrimination: AI Entering the Arena of Labor and Employment Law.” The event featured opening remarks from EEOC Commissioner Keith Sonderling. The following is the audio from his address. 

 

Steve Schaefer:  Good afternoon. Welcome to today’s programming, “AI and Anti-Discrimination: AI Entering the Arena of Labor and Employment Law.” We are happy to have you with us today. My name is Steve Schaefer, and I’m the Director of The Federalist Society’s Regulatory Transparency Project. In order to get right to the programming, I will keep biographies brief. Note, as always, The Federalist Society takes no position on particular legal or public policy issues.

 

I am pleased to introduce the Honorable Keith E. Sonderling, a Commissioner on the U.S. Equal Employment Opportunity Commission. On September 22, 2020, the U.S. Senate confirmed Mr. Sonderling, on a bipartisan basis, to serve as a commissioner on the EEOC. As Commissioner, Mr. Sonderling has published numerous articles on the benefits and potential harms of using artificial intelligence-based technology in the workplace and speaks globally on these emerging issues. 

 

Prior to his confirmation to the EEOC, Commissioner Sonderling served as Deputy and Acting Administrator of the U.S. Department of Labor’s Wage and Hour Division, which administers and enforces federal labor laws, including the Fair Labor Standards Act, the Family and Medical Leave Act, and the labor provisions of the Immigration and Nationality Act.

 

We are pleased to have Commissioner Sonderling give the keynote today. Please join me in welcoming Commissioner Sonderling.

 

Keith Sonderling:  Thank you very much. All right. Well, thanks for having me. I’m really looking forward to this discussion today, and being a part of this panel, I think we’re really going to dive into some of the trickier issues moving forward on how you actually regulate this and what businesses should be doing now and what the government’s approach should be, as well. But from my perspective, I wanted to lay out the groundwork of how AI is being used in HR. I’m going to go over some examples, give you a brief primer on the law, and then we’ll get into the panel.

 

So, right now, as you all know, AI is all over the news with ChatGPT and everything else, and the reason is because AI is just very fast, it’s efficient, it’s making business decisions for companies in a way that’s making them a lot of money. And that’s a really good thing. Right? So when businesses use AI in certain contexts, and they see it’s working, what do they do? They want more. And then what happens? More investing goes into AI, and you get it into all different arenas in the corporate sector.

 

So, whether it’s making deliveries faster, widgets faster, you name it, AI is out there. So you have these smart computer engineers in Silicon Valley saying, “Okay. Well, now we’ve, for the most part, figured out how to use AI in business. What else can we automate? What else can we use AI in? Oh, there’s HR over there. Well, that seems like a very simple function. Right?” Most people’s experiences with HR don’t involve the complexities that we deal with at the EEOC, so, “Oh, it’s mainly getting forms, and we can automate some of that.” And that’s good. Some of that can be automated.

 

But then, diving into, “Well, what is the biggest problem facing HR?” And that’s the H in HR, which is the human. Why? Because of the bias, the bias that has long plagued human decision-making in our space, one of the reasons the EEOC exists. So now, how do we eliminate the human from that process? How do we automate that process? And as you’ll find out, as we’ll discuss, and as the EEOC has been talking about, that’s not a very easy thing to do, especially because here, when you’re dealing with AI in HR, you’re dealing with some of the most fundamental civil rights we have in the workplace. And that’s the ability to enter the workforce and thrive in the workforce without being discriminated against.

 

And when I start talking about AI in HR, a lot of people’s minds go to this dystopian future of robot armies replacing humans. Right? And that has caused, in a sense, a lot of people to say, “Well, we don’t need to worry about this yet. We’re not having robots in our workforce.” And there are a lot of stats out there making the news, like the World Economic Forum’s prediction that in the next four years 85 million jobs will be displaced by some sort of AI automation.

 

The Brookings Institution did a study, which I like to cite because it looked at the Department of Labor’s database. And in the Department of Labor’s database, they have around 759 job categories. Of those 759 job categories, 740 have some near-term risk of automation, and we’re seeing in the news that companies are buying robots. Orders are up 40 percent, and it’s going to be cheap to do that. There are some studies out there showing that replacing 2-3 workers every single year is only going to cost companies around $10,000, so most of the conversations about AI in the workplace relate to that. Now that I’ve said it, we’re going to completely move on because that’s not what we’re talking about today.

 

What I want to talk about today is how AI interacts with workplace anti-discrimination laws. And the issue is that HR departments now, like most businesses, have more information about their individual workforce than ever before. In HR, data is everything. It makes the difference between a good hire and a bad hire, a bad promotion and a good promotion, and what I’m trying to raise awareness of in this space is the difference between lawful and unlawful decision-making. And in the HR tech market, the spending is surging. It was 17 billion dollars in 2021, and that’s expected to go up to 30 billion dollars. So it’s out there. It’s happening. It didn’t happen overnight.

 

Whether you’re aware of it or not, AI is being used in the entire employee lifecycle, from the very beginning—to write job descriptions. AI screens resumes. It chats with applicants. It conducts job interviews. It predicts if an employee will accept a job they were offered and then how much you have to pay them to get there. There’s even programs out there that will tell you, “Okay. Once this employee has now started, this is where they should sit. This is who they should interact with.” There’s software out there that identifies employees’ current skills and potential skills. There’s software out there that tracks employee productivity. There’s sentiment software that every day will tell you if your employees are happy or not by looking at their Microsoft Teams or in their Outlook. And there’s also software out there that if you fall short of your expectations, you get a message saying, “You’re fired.”

 

These aren’t my forward-looking predictions; each of these softwares exists. In all seriousness, there are thousands of programs out there for you to buy, with a lot of vendors happy to sell them to you, and the reason we’re seeing these infiltrate so quickly is that a lot of them promise to advance diversity, equity, and inclusion by mitigating workplace bias. So for a lot of the larger companies that can afford this software, whether they’re buying it on the outside or building it internally, when a vendor says, “We have some programs to help you with D, E, and I,” the answer is, “How many millions of dollars can we give you to do that?”

 

So for these companies, deciding whether or not to integrate AI into their HR systems is no longer the question. The question now is how you implement it, for what purpose you’re going to use it, and, more importantly, how it complies with long-standing employment law. Whether this is a good thing or a bad thing, whether HR should be completely replaced, is a discussion for another day. That doesn’t matter to us at the EEOC.

 

From my perspective, if AI is carefully designed and properly implemented, it really can help companies with their push to advance diversity, inclusion, and accessibility in the workforce by mitigating the unlawful risk of discrimination. As most of you in this room are attorneys or close to this area, you know that workplace decisions have been vulnerable to bias; again, that is what my agency addresses and why we collect around 500 million dollars for victims of discrimination every single year. But there are some really basic studies out there showing bias at the earliest stages of the hiring process: a male and a female submit the same resume, and the male is more likely to get selected. There are studies out there showing that when African Americans and Asian Americans whiten their resumes, and what I mean by that is they just remove any characteristics that would show their status, racial references, ethnicity, they receive more callbacks than when those references are included.

 

So a lot of times, companies just don’t become aware of this discriminatory conduct until it is too late, and this is where AI really can come in very early and eliminate bias at the earliest stages of the hiring process. An example I like to use because it’s very simple: think about an applicant’s name. What does an applicant’s name tell you about their ability to perform the job? Absolutely nothing. But what it can tell you is a lot of potential protected characteristics that the EEOC says you’re not allowed to make a decision on, such as the applicant’s sex, national origin, religion, or race.

 

Likewise, there are a lot of programs out there where your first interview with a company is done completely on an app. When you walk into an interview, whether it’s on Zoom or in person, the person who’s interviewing sees a lot about you. Right? They see you and, again, all those factors you cannot make an employment decision on. So this can help with intentional discrimination at the earliest phases of the hiring process, because if you’re looking only at the words the candidate says, how they’re responding to your questions, then you can’t see that the applicant is a certain race or national origin; you can’t see that they’re pregnant, disabled, or religiously observant. Otherwise, with two equally qualified candidates, how do you know a hiring manager didn’t just pass over the one who is disabled and go with somebody else? You don’t, because they’ll just say, “We went with the more qualified candidate.” But in reality, they may have been making that unlawful decision. So at least, using AI at the earliest stages, that doesn’t happen, and that person can get further than they would have before.
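
To make that masking idea concrete, here is a minimal sketch, with hypothetical field names and data, of stripping the signals a reviewer should not see before anyone (human or model) rates the application:

```python
# A hypothetical masking step: drop fields that carry protected-class signals
# before a reviewer sees the application. All field names and data here are
# made up for illustration.
def mask_application(app: dict) -> dict:
    hidden = {"name", "photo", "date_of_birth", "address"}
    return {k: v for k, v in app.items() if k not in hidden}

application = {
    "name": "Maria Garcia",
    "photo": "maria.jpg",
    "date_of_birth": "1968-05-02",
    "address": "123 Main St",
    "answers": "Led a five-person team; shipped two releases on time.",
    "skills": ["sql", "project management"],
}
print(mask_application(application))
# Only the job-related content survives: the reviewer rates the words,
# not the name, the face, or the age.
```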

 

So now I’ll flip everything I just said, and at the same time, if AI is poorly designed and carelessly implemented, it can discriminate on a scale and magnitude far greater than any individual HR professional. Because like any bad HR decision, poor uses of artificial intelligence can damage the trust in the organization that uses it, create a toxic culture, and then hurt the profitability of the company, which is why the AI programs were bought. 

 

So AI is only as good as its purposeful and insightful application by the companies that are actually using it. And the big part of AI, which is not unique to the HR space but is just easy to understand in this space, is that AI is only as good as the training data on which the algorithms rely. And what does that mean in our space? It means: who is the applicant pool? Who are the employees? So for example, an algorithm that relies solely on the characteristics of a company’s current workforce to model the attributes of the ideal candidate will just replicate the status quo. So if a workforce is made up primarily of employees of one race, gender, or national origin, you pick the protected characteristic, the algorithm is going to automatically screen out everyone who doesn’t match that profile.
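
A minimal sketch of that status-quo dynamic, using entirely hypothetical resumes: a screener that learns token frequencies from the current workforce ends up rewarding lookalike candidates rather than job-related skill.

```python
# A minimal sketch (hypothetical data) of a screener trained only on the
# current workforce. It "learns" success by token frequency, so it ends up
# rewarding whoever resembles the incumbents.
from collections import Counter

current_employees = [
    "jared lacrosse finance excel",
    "jared golf finance powerpoint",
    "mike lacrosse accounting excel",
]

token_weights = Counter()
for resume in current_employees:
    token_weights.update(resume.split())

def score(resume: str) -> int:
    # Overlap with the incumbents' tokens, nothing more.
    return sum(token_weights[tok] for tok in resume.split())

print(score("jared lacrosse finance"))  # 6: matches the incumbent profile
print(score("maria soccer finance"))    # 2: equally qualified, screened down
# The tokens driving the gap (a name, a sport) say nothing about ability to
# do the job; the model simply replicated the status quo.
```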

 

And the most infamous example of this point, and it’s widely used in the AI space, and who knows if it’s even true anymore: the legend goes that one of these companies went to an AI vendor to diversify their workforce, and they said, “Here are my best people in this position. Go use your fancy machine learning, your fancy computers, and find what makes them so great, without bias. Let the computers make that decision.” And the resume screening company, using AI, ran its machine learning, looked at the patterns, and came back and said, “The most likely indicators of success at your company are being named Jared and having played high school lacrosse.” Now, a lot of you may have heard this one already because it’s widely used, but it just shows you why that happened: that’s the pool they were looking at.

 

So at the EEOC, our mission is to prevent and remedy employment discrimination and advance equal opportunity for all in the workplace. Many of the laws that we’re going to discuss predate AI by over half a century, but it’s really important, from my position, to make sure that anyone who is using AI or being subjected to AI knows that these laws apply equally to AI systems, just as they have since they were enacted in the 1960s.

 

So I need to give a brief, brief law school lesson here, for those of you who don’t practice labor and employment law, and then I promise I’ll give you some examples. And Aram can grade me because he is the dean at GW. But, essentially, to keep it simple, there are two theories of discrimination under our laws: intentional disparate treatment and unintentional disparate impact. One requires intent, and one is based upon a neutral policy that winds up discriminating. And what’s really important in our space is that liability, for the most part, is going to be the same either way.

 

So here is a really simple textbook example of disparate impact: a company just wants their workers to come to work on time, so the only qualification to work at this employer is living in the zip code next to the office. We don’t care about anything else; you just have to live in the zip code. Well, if the zip code is predominantly made up of a certain race, sex, or national origin, that rule is going to have a discriminatory outcome.
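
For readers who want to see how such an outcome gets flagged, here is a short sketch of the four-fifths rule of thumb from the Uniform Guidelines on Employee Selection Procedures, with hypothetical numbers: a group whose selection rate falls below 80 percent of the highest group's rate shows evidence of adverse impact, even when the rule itself looks neutral.

```python
# Four-fifths rule sketch with hypothetical numbers. "group_a" and "group_b"
# stand in for two protected-class groups affected by the zip-code rule.
applicants = {"group_a": 100, "group_b": 100}
hired = {"group_a": 60, "group_b": 30}

rates = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
# group_b's impact ratio is 0.50, well under 0.8, so the facially neutral
# zip-code rule would draw scrutiny regardless of anyone's intent.
```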

 

Intentional discrimination, disparate treatment, is very simple: I’m going through resumes. This resume is a woman’s. I don’t want to hire any women. Right in the trash. Very easy. So how does that apply to AI? We don’t have a lot of AI cases that we can report upon and discuss, and during our panel we can talk about some of the reasons that might be. But for disparate impact, the unintentional kind, there was an alleged, and now widely told, story from Amazon: they created a program to rate resumes for a certain position, and they used as training data everyone who had applied and been accepted to the job in the past ten years.

 

So they ran the machine learning, and the computer was asked to rate the applicants from 1-5. If you were a woman, you were automatically rated lower than the men. And they looked at that and said, “Well, this is a function of the training data set,” because there weren’t many women in the data set, so the model automatically downgraded you based upon that characteristic. And this wasn’t proof of misogynistic intent on behalf of the AI. It was a function of the data fed to it in the first place. So most people in the AI community, whether you’re talking about AI in employment, housing, insurance, you name it, use that example. Basically: garbage in, garbage out. Pretty simple. It’s used widely to show how AI can discriminate.
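
A toy illustration of that garbage-in, garbage-out mechanism, with entirely hypothetical numbers: a rater that scores candidates by how often similar people were hired historically will downgrade an underrepresented group without anyone telling it to.

```python
# Hypothetical historical data: (resume signal, hired). The pool is skewed,
# with only one candidate carrying the "women's" signal, and she wasn't hired.
history = [
    ("men's chess club", True), ("men's chess club", True),
    ("men's chess club", True), ("men's chess club", False),
    ("women's chess club", False),
]

def hire_rate(signal: str) -> float:
    # Rate a new candidate by how often people with the same signal were hired.
    rows = [hired for sig, hired in history if sig == signal]
    return sum(rows) / len(rows)

print("men's chess club:  ", hire_rate("men's chess club"))    # 0.75
print("women's chess club:", hire_rate("women's chess club"))  # 0.0
# No one told the model to prefer men; the skewed training pool did.
```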

 

But what’s not talked about as often, and what I’m trying to raise more awareness of, is AI being used to intentionally discriminate. Now, you may say, “Well, how could AI intentionally discriminate when there’s no human involved? It’s just making these decisions based upon those patterns.” Well, it can. And this is something we’ll come back to later when we talk about best practices around who has access to the system, because there’s nothing preventing somebody from going in, injecting their bias into the AI system, and scaling discrimination far quicker than throwing one resume in the trash.

 

So an example of this is a case not filed by the EEOC; it was one that the ACLU brought against Facebook and other companies, relating to their online advertising practices. [inaudible 16:41] under state age discrimination law, where they said employers could use Facebook’s targeting tool, the same one used for product advertisements (where advertisers can very specifically limit who sees the product, which is how Facebook makes a lot of money), for employment advertisements.

 

And there, it was alleged that the job ads were limited to certain age groups. And the difference between advertising a product and advertising a job is that job opportunities are federally protected, and you can’t discriminate. So there, if you weren’t in that specific age group, even if you were qualified for the job, you didn’t see the advertisement, based upon your protected characteristics, and that was done through a few clicks and having the algorithm sort it that way.

 

So the case settled pretty quickly, and again, we weren’t involved in that, and they said they were going to take down the portal. But it’s a good example to show, with just a few clicks, how discrimination can be scaled here, intentionally. And even worse for the victims of discrimination, those on the other side, it goes even further than some of the pre-civil-rights job advertisements that we saw before.

 

So a lot of you have seen the historic black-and-white photos of job advertisements. If you haven’t, just Google pre-civil-rights job advertisements. I don’t want to say they’re funny; at this point it may be funny looking at them, but they were serious job advertisements. And the famous one was “No Irish Need Apply.” You’ve seen that in the black-and-white photos of storefronts. Well, at least there, you knew you were being discriminated against because of your religion. You knew that job opportunity was not available, and if there had been civil rights laws at the time, you would have been able to exercise them.

 

Here, it goes even further because it withholds the very existence of the opportunity based upon your membership in the protected class. So all those individuals who didn’t see that potential job advertisement, who were qualified for the job, and who would have applied if they weren’t screened out because of their membership in a protected class, could potentially have been involved in that case. So you see how quickly these cases can get very large.

 

So going back to the example that I gave you before, and with each of these you have to weigh significant benefits against some significant risks as well, I said that a lot of companies are using your voice, the way you respond to questions rather than the way you look, during these interviews. So on an app, they use machine learning and natural language processing, all the fancy buzzwords, to look at how you answer questions. And again, that could be a really good thing, because it lets employers take that skills-based approach they’re all looking to take: “Here’s how you responded to the question, and we’re going to have our machine learning rate you based upon the words you’re saying.”

 

Well, the issue there is if the transcription can’t pick you up, let’s say because you have a heavy foreign accent or a disability. If I have a thick foreign accent and I give responses, and it only picks up 50 percent of what I’m saying, I’m rated below 50 percent, while somebody who speaks fluent English gets rated at 99 percent, even though I was giving better responses and I’m more qualified for the job. I will be rated lower based upon my national origin. Or if I have a disability, if I slur or stutter, and the computer can’t pick me up, I’m also being judged on that, and that’s disability discrimination. So you can see with each of these how careful employers have to be, both on the purchasing side and the implementing side, because each of them can actually help remove bias but can also increase bias as well.
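
A sketch of how that transcription gap becomes a scoring gap, assuming a deliberately simplistic rater; all numbers here are hypothetical.

```python
# Toy rater: the "interview score" is really just the share of the answer the
# speech recognizer managed to capture, not the quality of the answer.
def interview_score(words_spoken: int, words_transcribed: int) -> float:
    return words_transcribed / words_spoken

candidates = {
    "fluent English speaker": interview_score(200, 198),            # ~99%
    "heavy accent or speech disability": interview_score(200, 100), # 50%
}
for name, score in candidates.items():
    print(f"{name}: {score:.0%}")
# The second candidate may have given better answers, but the pipeline never
# heard them, producing a disparity tracking national origin or disability.
```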

 

And I will conclude with one more example before we get to the panel. Now, with people returning to work, a lot of employers have shifted some of the HR managerial functions to algorithms. So your only point of contact may be a chatbot, and that chatbot is telling you what to do that day for work. And that chatbot is telling you how many deliveries you need to make or how many widgets you need to make, which is fine. It can help make things go quicker and more efficiently. But at the same time, if it’s automatically rating you without some kind of human intervention, or, in the AI world, a “human in the loop,” then it may not know that you’re struggling that day and not hitting your goals because of a disability, because of a need for a religious accommodation, or because of a pregnancy, all things which human managers walking the floor would see, and it may automatically lower your performance review.
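
One way to picture the “human in the loop” safeguard is a gate that routes a low score to a person instead of acting on it automatically. The sketch below is a hypothetical illustration, not any vendor’s actual product.

```python
# Hypothetical human-in-the-loop gate: a shortfall never auto-downgrades the
# performance record; it gets routed to a manager who can ask why (disability,
# religious accommodation, pregnancy) before anything is recorded.
from dataclasses import dataclass

@dataclass
class DailyMetrics:
    employee_id: str
    deliveries: int
    target: int

def evaluate(metrics: DailyMetrics) -> str:
    if metrics.deliveries >= metrics.target:
        return "on track"
    # Do NOT lower the performance review automatically; flag for review.
    return f"below target: route {metrics.employee_id} to a human manager"

print(evaluate(DailyMetrics("E-123", deliveries=12, target=20)))
```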

 

And under the Americans with Disabilities Act, the manager or the HR department would then engage in the interactive process to make sure the employee can have an accommodation that lets them perform their job at the same level. And there’s no algorithm sophisticated enough yet to know that this person is not performing well that day because of a disability or because of a need for accommodation, something that human touch, that human element, would be able to find.

 

So this really tees up the important questions for employers, for companies, and for designers and developers: what programs are you going to build, and, more importantly, how are they going to comply with our long-standing laws? Because, as I like to say, our laws may be old, but they’re not outdated. They apply equally to decisions made in the 1960s with pen and paper as they do now to these products that are being developed, being used, or haven’t been developed yet. And that’s really important not to lose sight of, because from the employer’s perspective, whether or not you intend to discriminate, if there is discrimination, that’s what we’re going to look at, whether it’s done by a human, by a computer, or by a combination of both. And employers can’t lose sight of that, especially given some of the ways that these programs are being sold.

 

So I really look forward to our panel now. I hope this gives you enough information on how AI is being used in HR and some of the high-level legal issues, and now we’ll dive into some of the fun stuff. So thank you, again, for having me.    

 

[Music]

 

Conclusion:  On behalf of The Federalist Society’s Regulatory Transparency Project, thanks for tuning in to the Fourth Branch podcast. To catch every new episode when it’s released, you can subscribe on Apple Podcasts, Google Play, and Spreaker. For the latest from RTP, please visit our website at www.regproject.org.

 

[Music]

 

This has been a FedSoc audio production.

Keith Sonderling

Commissioner

Equal Employment Opportunity Commission



The Federalist Society and Regulatory Transparency Project take no position on particular legal or public policy matters. All expressions of opinion are those of the speaker(s). To join the debate, please email us at [email protected].
