Deep Dive Episode 259 – AI & Antidiscrimination: AI Entering the Arena of Labor & Employment Law [Panel Discussion]

Artificial Intelligence (AI), once the stuff of science fiction, is now more than ever a part of everyday life, regularly affecting the lives of individuals the world over, sometimes in ways they may not even know. AI is increasingly used both in the public and private sectors for facial recognition, dataset analysis, risk and performance predictions, and much more, though how companies use it and the actual input it has can be unclear.

Experts have warned that the expanded use of AI, especially in areas related to labor and employment, could pose serious issues if left unexamined. Some contend that the use of AI tools can help make hiring processes more efficient and perhaps remove human biases from the equation. Others note that while this may be an admirable goal, many AI tools have been shown to produce discriminatory outcomes. The opaque nature of how some of these AI tools operate further complicates matters, as how an AI came to a particular decision and the data it referenced may not be clear to the human reviewer, making discriminatory practices harder to identify.

All of these issues, especially given the increasing use of AI tools in the hiring processes of many companies, raise several questions concerning AI’s entrance into the Labor and Employment space. What benefits and challenges does using AI in hiring present? How can AI be used to combat discrimination? What happens when AI itself is discriminatory, and how can that be identified and addressed? What statutes and regulations apply to AI, and do the existing legal and regulatory frameworks concerning anti-discrimination in labor and employment suffice to address the novel nature of AI?

Listen to EEOC Commissioner Sonderling’s keynote address here: Deep Dive Episode 258 – AI & Antidiscrimination: AI Entering the Arena of Labor & Employment Law [Keynote Address]

Transcript

Although this transcript is largely accurate, in some cases it could be incomplete or inaccurate due to inaudible passages or transcription errors.

[Music and Narration]

 

Introduction:  Welcome to the Regulatory Transparency Project’s Fourth Branch podcast series. All expressions of opinion are those of the speaker.

 

On April 4, 2023, The Federalist Society’s Regulatory Transparency Project hosted a live luncheon and panel discussion titled AI & Antidiscrimination: AI Entering the Arena of Labor & Employment Law. The following is the audio from the event.

 

Steven Schaefer:  Now I would like to introduce our panel. I will keep my introductions short to get right to our discussion. 

 

I would like to note before we begin that we take intellectual diversity very seriously, and we did have a confirmed left-of-center speaker who more recently declined to participate.

 

After our discussion, there will be the opportunity for audience Q&A at the microphones that you see at either side of the room. 

 

We welcome Philip Miscimarra as our moderator today. Mr. Miscimarra is a partner at Morgan Lewis, where he leads the firm’s NLRB special appeals practice and is co-leader of Morgan Lewis Workforce Change, which handles employment, labor, benefits, and related issues arising from mergers, acquisitions, startups, workforce reductions, and other types of business restructuring.

 

Mr. Miscimarra served as Chairman of the NLRB under President Trump. Prior to becoming Chairman, he was appointed to the NLRB by President Barack Obama on April 9, 2013, and was approved unanimously by the Senate Committee on Health, Education, Labor, and Pensions on May 22, 2013. He was confirmed by a voice vote in the U.S. Senate on July 30, 2013, and served from August 7, 2013, to December 16, 2017.

 

We welcome Aram A. Gavoor. Mr. Gavoor is the Associate Dean for Academic Affairs at the George Washington University Law School and a nationally recognized scholar in the fields of administrative law, federal courts, and national security law. Mr. Gavoor was an advocate in the Civil Division of the U.S. Department of Justice. And in private practice, Associate Dean Gavoor has litigated federal court appellate and trial cases involving high-profile challenges to statutes, regulations, and policies. He received The Attorney General’s Award for Distinguished Service in 2019.

 

We welcome Mr. David Fortney. Mr. Fortney is a co-founder of Fortney & Scott, LLC, a Washington, D.C.-based firm counseling and advising clients on the full spectrum of workplace related matters, including employment discrimination and labor matters, compliance programs, government contracting, international dispute resolution, and developing strategies for avoiding or responding to workplace-related crises. Mr. Fortney previously served as the chief legal officer of the U.S. Department of Labor in Washington, D.C., during the term of President George H.W. Bush.

 

Please join me in welcoming our panel.

 

Hon. Philip Miscimarra:  Good afternoon, everybody, and welcome to our panel discussion on the impact of artificial intelligence on employment and human resources issues. Today we’ll be discussing everything from the ethical considerations of using AI in hiring and performance evaluations to the challenges of upskilling and reskilling a workforce in the era of automation. So sit back, relax, and enjoy the discussion, and remember if a robot asks you to hand over your job, just say no. 

 

Here’s the question: did I write that introduction, or was that introduction written by ChatGPT? I’ll tell you, it was a 32-word request to ChatGPT, edited only for the sake of brevity, which I’m not normally known for. Here was the request: “Write an introduction I can use as a moderator of a panel discussion on the impact of artificial intelligence on employment and human resources issues, and include some humor in the introduction.” And so, as to the joke—which, by the way, some of you laughed at—”If a robot asks you to hand over your job, just say no,” I would give that a B+. However, it’s humbling to think that an algorithm, in about 10 seconds, based on a 32-word request, could have written the same type of introduction that I would have written, which in my case involved four and a half years of serving as Chairman or Board Member on the National Labor Relations Board, 40 years as a labor and employment lawyer, participation in a symposium at MIT on artificial intelligence, and eight years of college and graduate school. I keep wondering, “What could I have spent all of that time doing if I had only known that ChatGPT could have performed the way it did with respect to my 32-word request?” 

 

So in fact, we do have a great panel, and I’m just going to move onto the panel, and then we’ll ask some questions. And we’re going to start with Aram Gavoor. Aram.

 

Prof. Aram Gavoor:  Thank you. Pardon me for standing. The professor and appellate advocate in me thinks better on my feet. Thanks so much, Philip. Thank you, Steven. And thank you to The Society for putting together this really thought-provoking discussion today. 

 

My perspective will be that of an administrative law/federal courts generalist — and I’ve specialized in artificial intelligence issues as well, tech in the law. So to be very clear, I am not an employment law expert, but I think there’s a lot of really critical and important information to be gained by considering the milieu, the background against which these discussions about AI and HR are happening. 

 

At the federal government level, we don’t have a body of statutes enacted by Congress that governs this subject matter. Some would argue that we’re very far behind Europe. We don’t have a GDPR, and we have a number of other constraints that make it pretty clear that Congress is sitting in torpor. 

 

So then we’re looking at the other branches of government, predominantly the executive branch. So the executive branch has been active and certainly has been pretty thoughtful and consistent, I would say, with regard to an incremental and more specific body of AI recommendations and utilization of a whole-of-government approach with regard to AI. Of course, controlling for political party of the Oval Office occupant. 

 

So it began in the Obama administration in 2016 with two concurrent documents: Preparing for the Future of Artificial Intelligence and the National Artificial Intelligence Research and Development Strategic Plan, very, very general, just issue spotting that these are important issues. Then, in the Trump administration, there was a slew of executive orders—and I am keeping it concise—Executive Order 13859 in 2019, Maintaining American Leadership in Artificial Intelligence, and also Executive Order 13960 in December of 2020, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government. And it was the Trump administration that decided to charge the National Institute of Standards and Technology with laying out a body of guidance on how AI should responsibly be used. 

 

In the Biden administration, there are two key documents that I think are worthwhile. The first is the Blueprint for an AI Bill of Rights. Note, a blueprint for a bill of rights, not an actual Bill of Rights. So you’re seeing the development of technology and use cases and societal change in this subject matter far outpacing the government’s ability to catch up. And then, I think critically, there is the executive order from February of just this year, 14091, which builds off of Executive Order 13985, styled Further—I’m missing the word here, but relating to racial equity and support for underserved communities through the federal government—which has three key AI provisions. One is preparatory language in Section 1, and there are two other provisions as well. 

 

So looking, let’s say, at the October ’22 Blueprint for an AI Bill of Rights, there’s a section with regard to AI discrimination, but it’s really short. It’s like six pages long, and then it references the need to develop AI standards and shunts off to an organization called the Data & Trust Alliance, which has its own, also very short, Algorithmic Bias Safeguards for Workforce, and ultimately we are left with a lot of questions and few answers. 

 

Now, building off of Commissioner Sonderling’s statements, let’s look at our EEO laws. Keep in mind that they are premised on voluntary compliance. That’s a very big, important point because, as we’re looking at administrative law’s operation here, you have an absence of congressional legislation. Bicameralism and presentment is really not taking us anywhere so far and probably won’t be taking us anywhere in the coming months, at least for the foreseeable future. So then you have the administrative state led by the President, but there are constraints there too, predominantly the Administrative Procedure Act of 1946. 

 

So as we know from last summer and the Supreme Court’s issuance of West Virginia v. EPA, there is the application of what many scholars and commentators describe as a weak version of the major questions doctrine, which essentially concludes that if the statute doesn’t authorize a significant, major interpretive change from around the time of its enactment, and a long time has passed since then, the agency doesn’t have the authority to interpret the statute in a different way. So that’s a guardrail that is really important, I think, that could limit the ability of the executive branch and its agencies to interpret long-existing statutes in a way that expressly regulates AI. However, the government has plenty of flexibility to apply existing statutes where AI has been used for existing circumstances that were the predicate consideration of Congress. 

 

Now, there are two different ways in which agencies can regulate, as I’m sure you know. There is rulemaking, governed by Section 553(b) and (c) of the APA, which requires notice and comment. The benefit is public participation, but one of the costs is that it takes a lot of time to do; you’re looking at a quarter of a year up to half a year. And then, there’s, of course, going to be a lot of follow-up litigation because it’s very difficult to do anything in American society today without someone being upset about it and then suing about it. 

 

And then, you have the other tool that we’re seeing government leaning on quite heavily, which is the power of enforcement, in the context of the EEOC, reacting to charges that come before it. Through the power of enforcement, agencies are able to, on a case-by-case basis, determine the lawfulness of conduct that has already occurred and potentially determine its retroactive legitimacy. So this presents certain advantages to agencies because they’re able to move more efficiently, more nimbly. But it also creates some big deficits as well. You don’t have public participation with regard to agency enforcement practice. It can take conduct that was lawful, or not expressly unlawful, at the time of commission and render it retroactively unlawful. And because the parties that are the subject or the target of enforcement behavior tend to be less likely to defend their rights fully through the agency and all the way up through the courts, the judiciary is reacting somewhat less to the important questions of the day. And consequently, when regulated parties settle or acquiesce to agency interpretations or enforcement actions following charges, a body of common law is developed, but at the agency level. The agency sometimes styles itself as prescient. So that’s a different level of a dystopian future in one respect. 

 

In another respect—trying to take the arguments of what a left-leaning scholar would describe, and to keep it middle of the road—the left would argue, “Hey. Society is being affected right now. Congress is not acting efficiently or necessarily in the way that it should, and the executive branch has enough discretion to cover the gaps with existing statutes, and therefore it’s important for the government to step in for the betterment and the welfare of the people, certainly the disadvantaged minority classes of persons that are protected under the Civil Rights Act.” 

 

So with that, I’ll take a pause. Allow the other discussants to speak, and I’m sure there’ll be a lively discussion that follows. 

 

Hon. Philip Miscimarra:  Thank you very much. The next speaker will be David Fortney.

 

David Fortney:  Thanks, Phil. And I’ll just remain seated if that’s okay. I’m not sure what the protocol is, but we’ll go that way. All right. 

 

So I want to try to bring into perspective some of the views we hear from our clients. We represent employers. We represent employers across the spectrum: small, medium, a lot of large, multistate, multinational employers. It is clear that we are in the midst of this radical change. We all changed when we started using cell phones or word processors. The difference is the pace at which this is occurring; it is unprecedented in how it’s occurring, so we find all of these lagging factors. And I think Aram’s done a very nice job of laying out the regulators, the statutes. So what is the framework we’re working under? 

 

On the other hand, obviously, the new technology is wonderfully appealing. I mean, Phil’s story lays out every employer’s thinking: “Wow. In today’s uncertain economic times, my resources are strapped. I don’t need Phil Miscimarra. I can have ChatGPT. It’s great, and I don’t need a roomful of Phils,” etc. And so, there’s a lot of motivation. I mean, there’s been recent press that in some of the industries that had scaled up on DEI and put together cadres of humans to monitor that, those were some of the first people that they let go in the current environment within the last six months; not that they’re not paying attention, but they are relying more on machine monitoring, assessment, and recalibration. And I would like to thank all of you for actually showing up in person. This is a very quaint gathering that in six months may not even be held anymore. 

 

But with that, where do we go? Because the law is lagging, and the challenges are great, and the marketplace, the appetite for this, is almost insatiable. I’ve heard some of the government regulators, Commissioner Sonderling, but others say, “Well, maybe people shouldn’t use it — shouldn’t use AI.” That’s unrealistic. I mean, if you use LinkedIn, you’re using AI. If you’re using — I mean, AI permeates most of what we do. You get an email, and you get those three or four automatic responses now. Vicki Lipnic sends me an email, and I say, “Oh. That’s great.” Well, that’s just me clicking one key that has an automated response. So those are all soft forms, but they’re examples of AI and how it has already arrived — we have it. 

 

One of the things we have focused on in this very uncertain time, when the legal landscape, as has been pointed out, is complicated, is that we do have some federal laws that still govern us. Yes, they are from a very different period. We have this set of guidances called the Uniform Guidelines, promulgated under Title VII. They are regulations for — excuse me, guidance for the EEOC, binding regulations for federal contractors, enforced by the Labor Department and well recognized by the courts. Under the Uniform Guidelines, you can take employment selection tools—so this was obviously written 45 years ago, before anyone knew what AI stood for—but you can still take employment selection tools and go through a process, a complicated one, called validation, meaning that, basically, the tool serves a legitimate business interest, it advances that interest, and it does not unnecessarily discriminate. It’s an efficient way of achieving the business interest. So think about hiring people, promoting people, how you pay people, all of that. So the question is, can we take these existing tools — and those tools are very important to employers because they provide a legal safe harbor, meaning that even if a judge or a jury would find what you did was unlawful, if you can say, “Look. I had this validated, and I followed a validated process,” that immunizes you from legal liability. So that’s a very powerful concept. 

 

Now, there are a lot of reasons, a lot of concerns about, well, can you really apply those to AI? That was a question that a group of us had spent a lot of time on as employers were looking at that. And so, under the Institute for Workplace Equality, we did ask Vicki Lipnic—and the reason I highlighted her a moment ago—formerly a commissioner of the EEOC and acting chair of the EEOC, to head up a task force. We’ve got about 40 people, multidisciplinary: plaintiffs’ lawyers, employer lawyers, I/O psychologists, statisticians, data people, AI developers, AI vendors. I mean, we tried to put everyone together in a room. This took 18 months, okay, so this was not a simple process, but to slog through and ask the ultimate question—not as the law might someday be, but with the legal tools we have today—can we do things that are positive and be confident in the tools we’re using? And the answer is not a perfect yes, but largely yes: most AI tools can be validated, which is a very positive thing. And I want to just put that out there. It doesn’t mean that these other legal developments shouldn’t be pursued. They need to be. But those are the tools that are in the tool chest today. 

 

Complicating the employer landscape is this rapidly developing — it was mentioned before that we don’t have federal preemption, so under basic federalism concepts, that means all the states are free to go at it. And boy, are they free at it. Whether it’s New York City, with its automated employment decision tool requirement that you have to both assess what the tool is and publish the results — this is kind of like the U.K. pay studies that are in place; you have to publish those — to a variety of states requiring notices to be put out. In Illinois, if you’re using video, as Commissioner Sonderling mentioned, you have to advise the candidate of that and allow them to consent to that. We have many states that are looking to come online and bring more of these tools on, which may or may not be good, but I will tell you, from our clients’ perspective, when you run a multistate or multinational effort, the concept is going backwards by decades to say, “Well, I’m only going to do the New York method or the Illinois method or this one.” And we all faced a little bit of that with California’s — I’ll call it what we thought was a perhaps over-the-edge wage-hour requirement on who is a contractor or an employee. But there, at least, you could more easily define which segment of your workforce would be subject to the California requirement. AI tools don’t work like that. They effectively impact everyone in your organization. You don’t have a separate HRIS, human resource information system, just for your Illinois workers or your New York City employees.

 

So this, I think, is actually a challenge that we have conceptually, even though, typically, we’d say, “Look. Not having a heavy-handed federal regulatory state is a good idea.” We have some clients that are beginning to think maybe we need at least consistency. My answer, in part, is “Careful what you ask for, because you may get a heavily regulated state.” But that’s at least one of the issues that’s actively being debated.

 

And a final thing—and I’ve got lots more, but I’ll save it, Phil, for when we have a discussion—it isn’t just the employers that are concerned about the speed of using these tools and the efficiencies that can come. About—what—a week ago, there was the open letter by many of the tech designers. I think we’re up to about 3,000 individuals who have signed this open letter, including Elon Musk and Steve Wozniak, an Apple co-founder. So these aren’t just sort of fringe people. These are people at the heart of it, whose business it is to design and push this forward. Basically, what they’re saying is, “You know what? If we go beyond ChatGPT-4″—which Phil described a moment ago—”if we keep going beyond that”—and we are; I mean, it’s rolling at a very quick rate—”we, the designers of these products, are very concerned about what’s going to occur. We think there should be a six-month hiatus in the deployment of additional tools.” It’s an interesting concept. Further, they are actually asking that federal statutes be enacted to regulate this—to put guardrails on, regulatory requirements imposed—because they’ve said that, effectively, if we just leave this sector to its own designs, we’re just going to veer off and do things that people, perhaps, don’t want. 

 

I’ll just quote to you quickly from their open letter because I think that it’s powerful: “AI systems are now becoming human-competitive at general tasks.” This is my point. That’s why I’m replacing Phil, because I’ve got someone that can do that. Soon we’ll just have a little screen up here as our moderator. 

 

Hon. Philip Miscimarra:  I’d like the videotape to be destroyed as soon as this panel ends. 

 

David Fortney:  But they go on and say, “Should we let machines flood our information channels with propaganda and untruth?” That’s one of the other problems: there’s no quality-control mechanism. “Should we automate away all jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control over our civilization?” These are their questions, the developers of these services. So that’s powerful. And it goes on. This is an open letter, and I would encourage you to look at it, but it raises powerful, thought-provoking questions that all of us, as members of society, are grappling with: what do we do? The automation is continuing apace, and that’s kind of their point. The marketplace for using and developing efficiencies is absolutely there, and typically for the best of reasons, but the unintended consequences, which is what I think they’re highlighting, are real too. 

 

So with that, I do look forward to our discussion, and thank you very much. 

 

Hon. Keith Sonderling:  And Phil, I knew it was ChatGPT when you said upskilling and reskilling because that’s way too fluffy of a term for an NLRB lawyer. 

 

Hon. Philip Miscimarra:  Well, it’s funny. I’ll come right back at you, Commissioner Sonderling, with the first question. 

 

In the world of the NLRB, some of the leading cases that still deal with technological change involve, in one case, prefabricated doors; that was a Supreme Court case decided more than 50 years ago. Another leading case deals with the use of containers in the shipping industry, and that was decided 40 years ago. And so, given the lag associated with even relatively simple types of technological change, how capable do you think the EEOC and other administrative agencies are to really even attempt to keep up with the type of technological advancements we’re seeing now?

 

Hon. Keith Sonderling:  I don’t think we can, and I don’t think you could ever expect everyone to be able to keep up technologically with the private sector. I don’t think that’s the balance of how it should work. I think what we should be doing is focusing on what we know best, and forget whether it’s fancy technology, forget whether it’s an HR manager who’s racist or biased, or whatever you want to call the person who’s making those discriminatory decisions. What we know is results. And what our investigators are going to be looking at when they go out on federal investigations, or what our lawyers are trying to prove, is bias in the results. And how we got there is a different story — how, now, we prove that, whether it’s made through artificial intelligence or whether it’s made through a human. 

 

I mean, what are we faced with now? When we have to prove bias and we have to prove discrimination, we have to resort to subpoenaing witnesses and deposing people. There are so many people who like to focus on the black box of the algorithm: if we only had the inside of the algorithms, we’d be able to show that big tech wants to discriminate; we’d be able to show bias in whatever context you’re talking about. But what are we left with now? How do we get inside somebody’s brain? All we have is depositions now, and generally, everyone’s very honest during a deposition, so of course we find out if they made a discriminatory hiring decision through deposing them. 

 

Hon. Philip Miscimarra:  And you state that right after they give their name. Yeah. 

 

Hon. Keith Sonderling:  Exactly. Of course, I fired that person because of their age or race, or religion. 

 

So, in a sense, I’m flipping this, and the same is true with the cases related to the EEOC. This whole disparate impact theory came out of the 1960s, from a Supreme Court case on tests that required a high school degree and looked for those characteristics. So it’s no different than that with this technology. 

 

But going back: when our investigators show up, we’re going to look at results. We are not going to understand it. You could show me the algorithm. Give us the proprietary AI. What am I going to be able to do with it? It could be changing 100 times. Our federal investigators are never going to have that understanding. But looking at how you got there, in a sense, AI can make that more transparent. So take my example of what we’re left with now, trying to squeeze an employment decision out of somebody. In a way, if there is a track record, through transparency, explainability, all of these buzzwords, of how that computer made that employment decision, whether it’s “Here are the skills we looked at. Here is where we recruited. Here is the data set. Here are the applicant pools. Here’s who had access to the algorithm,” that’s a lot more transparent than what we’re left with now. 

 

So in a way, I’m trying to flip the script and say that using AI can actually help the EEOC in its mission. It could help us prove discrimination, whether intentional or not. Or, from an employer’s perspective, too, if you have invested in these systems, and you have proper policies, procedures, and handbooks in place, and you’re training those who have access to the algorithm, you should feel confident in establishing a good faith defense or showing what characteristics actually went into the decision, with a track record now that’s documented digitally, which we didn’t have before. So it’s really, in a sense, about ignoring all the distractions about the technology and whether the law applies, and saying, “Here is the result. Let’s start there,” because that’s what we’ve been doing since the 1960s.

 

Hon. Philip Miscimarra:  Yeah. And I’d be interested, all panelists: David made reference, as one concept, to a moratorium on the development of new AI tools for a six-month period. From the perspective of regulating artificial intelligence and related tools, is it feasible, as one element in the approach, either to stop it for a period of time or to slow it down? And let’s start, Keith, with your thoughts, and then David and Aram.  

 

Hon. Keith Sonderling:  Okay. Yeah. So for my part, if companies want to develop programs that are deemed aggressive, deemed to take down humanity, it’s still a free country. And if employers want to use those for the wrong purposes, then there’s going to be liability for them. So unless Congress makes it illegal for companies to develop products related to this generative AI, from our perspective, we just have to deal with the consequences of it. 

 

So I don’t think we should be in the business of telling the private sector, stifling innovation, telling them whether they should create new products or shouldn’t. We just have to make sure that whoever’s developing them and ultimately using them, who face that liability, understands the consequences under the various laws it impacts, which is very difficult in that sense because it literally impacts every single law on the books. And I think that’s more of the challenge than saying, “No. You can’t do it.”

 

Hon. Philip Miscimarra:  All right, David.

 

David Fortney:  I think a pause is unrealistic. I agree with that. And I hate to sound cynical, but I’m wondering whether, for the signatories to this letter, it’s just to assuage their consciences over what they’ve developed. But maybe that’s being too hard-nosed. 

 

But I do think — I mean, if the purpose is to allow the law to catch up, look, we’re in Washington. We know six months is the blink of an eye. Nothing’s going to happen in six months. Period. The law’s been trying to catch up on this—as the Commissioner has explained—for years. So six months is not going to do it. 

 

Whether there should be a broad-based consensus on whether the laws need to be changed and how to go about it, that strikes me as a more constructive discussion and dialogue to bring in all the stakeholders. There’s not going to be a consensus. There’s not even a consensus to Commissioner Sonderling’s point that the current law is insufficient. And I tend to personally be aligned more in that direction. The current law may not cover every single incidence, but it’s pretty good. But whether there’s room for improvement, that’s always a discussion that I think we should welcome. 

 

I don’t think it’s realistic to think that you’re going to impose a hiatus. I don’t think it would be honored, and I just think, in the current economic times, with all the demands employers are facing, that will not occur, so we should do something a little more realistic, I think, instead of a hiatus. 

 

Hon. Philip Miscimarra:  And Aram, is it feasible, as one element of a regulatory approach, either to stop AI development or to slow it down?

 

Prof. Aram Gavoor:  I don’t think it is. It’s very difficult to do. So first, you’d have to have Congress moving with such alacrity. Let’s assume that’s the case, because Congress can move quickly: when there was COVID, it did so quite intentionally and significantly. You get a federal eviction moratorium that Congress enacted in a severe enough emergency, and within our memory. So Congress can move very quickly if it wants to. 

 

Then the other problem is that this creates a slew of other constitutional issues, like the Contract Clause. What if you already have these processes in development, which we do, and then these companies just go belly up because they just can’t do anything with them? Then at what point is it potentially like a taking? And there’s a whole slew of very complex but important issues that one would need to look at. I mean, it’s economically impossible, and my assumption is that all of these, like ChatGPT, have already been created. It’s probably already been deployed in several sectors. My assumption is that the military probably already has very advanced versions of this stuff already in deployment. 

 

And then, also, what’s the solution? Is it a government takeover, or do we establish ministries of truth? And alongside that, does Congress have the ability to regulate the rest of the world? It doesn’t. So U.S. competitors are still developing this technology and would not subject themselves to this moratorium. So this would just stifle American innovation. 

 

Now, am I fearful? Yeah, I am. I’ve played around on ChatGPT-4. I’ve utilized DAN Mode, which jailbreaks it, and it allows it to answer some questions that otherwise it wouldn’t answer before. At least, I did that with ChatGPT-3. You can just look up DAN Mode, D-A-N Mode, and it’s just cut, paste, and then it answers some more questions that otherwise it wouldn’t be constrained to answer. But I think the solution really needs to be a collaborative one with industry, government, and also Congress to lay out some guardrails at a sufficient level of generality to guide the technology for the best benefit of our society.

 

Hon. Philip Miscimarra:  And here’s another question. The European Community had been—I think it’s fair to say—in the lead when it came to data privacy, relative to the U.S. What’s happening now in the European Community and other countries in comparison to where we are in the U.S. when it comes to regulating AI? Any panelist.

 

Hon. Keith Sonderling:  Well, they’re trying to get ahead of this like they did with GDPR. They’re taking a much different approach, which is much more complex, where they’re saying, “We’re going to take a risk-based approach to regulating artificial intelligence. And we’re going to categorize product use. So we’re going to tell you, if it is a certain use, depending on the industry and type of product, what requirements you’re going to have, from low risk up to high risk or unacceptable.” And they’ve said the use in employment is in the highest risk category possible, which subjects it to robust disclosures, auditing, and other requirements as well. 

 

What’s different in what the EU’s proposing, which is different than here in the United States, is that they’re also trying to put vendor liability in there. And they’re saying you’re a vendor whether you design it, you deploy it and put it on the market, or you are creating it internally for yourself and deploying it; you have that vendor liability as well, which is significantly different. And that actually puts some teeth into it for the vendors who are out there designing and developing these programs. 

 

So they’re saying that “We’re going to tell you the use of this product if it’s high risk or low risk. We’re making that decision from the government. And we’re going to create a new enforcement agency to then go ahead and enforce it at the different member state levels.” 

 

So that’s what’s going on in Europe, which is a much more—I don’t want to say aggressive—approach of having a governing body there, with the government saying, “Well, here’s how we determine the risk of you using this product.”

 

Hon. Philip Miscimarra:  And let me ask Aram, who wrote an interesting article that dealt with mitigating artificial intelligence bias in administrative agencies. And all of us have some familiarity with administrative agencies. Could you just talk a little bit about your observations regarding bias in administrative agencies when it comes to this area and what efforts would be constructive in terms of trying to mitigate that bias?

 

Prof. Aram Gavoor:  Thanks, Phil. So my concern is government agencies developing, using, and procuring their own AIs for the purposes of their work. This is already happening, with the U.S. Department of Justice utilizing algorithmic tools for purposes of enforcement. EPA is doing it as well, as I understand it. And this can carry with it a number of interesting features. 

 

One is, on the investigations and the enforcement side, you’re talking about a domain of executive branch behavior that is actually unregulated by the Administrative Procedure Act. The APA has very few constraints on administrative enforcement and the investigative tools that lead up to an enforcement action. Once an action takes place, then you’re mainstreamed into the APA’s guardrails. But before that, that’s an issue.

 

Also, with regard to grant administration, if it’s just a big black box, a black box concern, well, under the APA, if it is reviewable, it’s a final agency action under Sections 704 and 706 of the statute. It’s reviewable for arbitrariness and capriciousness or non-conformity with the law. And an administrative record needs to be generated so that the court can have a snapshot view of what the agency had before it at the time of its decision-making. If that information is not traceable, if it’s not somewhat transparent, then that presents a significant problem for traditional APA oversight of agency behavior. 

 

When it comes to grants, for example, if you have an AI tool helping with that, and it is not properly trained, it might essentially become its own policy-making entity, just producing outcomes for a particular subset based on how it’s trained and not based on the correct statutory or regulatory factors that have been promulgated through notice-and-comment rulemaking or a guidance document.

 

So I think the domain of AI in government is a really important one. And this is one area—maybe I’m being facetious here—where I’m almost happy that government’s a little bit behind, because it allows the best technology use cases to be developed in the private sector and then the sound ones to be adopted within the government. 

 

Hon. Philip Miscimarra:  And that’s interesting because I would say 95 percent of what most of us see, when it comes to artificial intelligence and regulatory issues, involves the concern that the use of artificial intelligence represents or constitutes some violation in the private sector, and the government’s trying to keep up from a prosecutorial perspective. When it comes to the potential use of artificial intelligence to identify or to prosecute violations—I’ll ask this of both Keith and David—what would that mean in terms of burdens of proof when it comes to proving a violation? And what would that mean in terms of due process, in terms of understanding what the basis was for a particular prosecution or allegation of a violation? Commissioner Sonderling, why don’t you start? Then, David, your comments.

 

Hon. Keith Sonderling:  Yeah. I mean, I think it raises really interesting questions when you’re putting additional burdens or trying to change longstanding burdens of proof—which I believe only Congress can do—related to the enforcement of these laws in the civil rights context. Is there now a need for a higher standard when a computer is making the decision versus when a human is making it? And do we need to reevaluate that equation? 

 

I bluntly think not. And obviously, it’s my job right now to enforce the laws that have been on the books, but I don’t think that altering any of these standards is necessary at all because, again, to simplify it from the position I’m in now, we are just looking at the outcomes. And there’s a well-established system of getting there, whether it’s under the various theories of discrimination or ways to test for disparate impact. Those exist, and I do not believe they should be altered. 

 

I do think we should be bringing more light to them and showing how they work with each different use case of AI. And I think that that’s the burden on the EEOC and the federal government across the board, whether it’s in housing, whether it’s in finance, any use of this technology, or in the criminal justice system. Potentially, I know you could have a whole other panel on the use in the criminal justice system in sentencing and conviction and all that, but from our perspective, it is just so important to go through every single use and show how the law applies.

 

And to David’s point earlier about localities trying to get involved in this as well, I think that’s also dangerous for employers. Although they should be commended for trying to take on a very complex issue, if they’re not also ensuring that employers in those jurisdictions have to comply with all federal laws, then they’re going to have a problem. Because in the New York case, when you’re asking for a bias audit, and you’re putting new standards on employers in New York City to do a pre-deployment audit on certain characteristics, well, if you’re doing it, shouldn’t you be doing it on the other characteristics the EEOC’s going to look at either way? So that’s kind of the challenge with other people getting into this space and also not having a new standard.

 

So the other thing with New York is that, okay, you’re going to require this testing. You’re going to require audits. You’re going to require that public disclosure. Well, what does that mean? Okay, so the EEOC’s been doing it one way since 1978. Now you’re putting in this new requirement, so here’s the million-dollar question: how are you going to do it, New York? And then it’s been delayed. And then, ultimately, they come out with some guidance that says, “Well, look to the EEOC.” And especially with their disclosure requirement, it’s like, “What does that mean?” You’re going to require employers to disclose, which they haven’t had to do before. If they find bias, does that mean “Oh, we found some discrepancies, and we fixed it”? Or “Our program horribly discriminates against”—you name it—”Hispanic women,” whatever it is? So when you’re starting to layer on different requirements, whether it’s through your original questions on the burdens or what you have to do, without that uniformity, or without going back to the basics of what the EEOC requires, you’re just causing, basically, significant confusion and more harm than good. 
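For context on what such a bias audit actually computes, here is a minimal, illustrative sketch in Python of a selection-rate impact-ratio calculation, the basic metric that the New York City automated employment decision tool rules contemplate. The group labels and counts are hypothetical, and a real audit must follow the city's rules and the EEOC's guidance, which differ in detail.

# Illustrative sketch only: a simplified selection-rate "impact ratio" audit.
# Group labels and counts are hypothetical; real audits follow the NYC rules
# and EEOC guidance, which differ in detail.

def impact_ratios(applicants, selected):
    """applicants and selected map each category to a count of people."""
    rates = {g: selected[g] / applicants[g] for g in applicants if applicants[g]}
    highest = max(rates.values())
    # Impact ratio: each category's selection rate relative to the highest rate.
    return {g: rate / highest for g, rate in rates.items()}

applicants = {"Group A": 400, "Group B": 300}   # hypothetical applicant counts
selected = {"Group A": 120, "Group B": 54}      # hypothetical selections

for group, ratio in impact_ratios(applicants, selected).items():
    print(f"{group}: impact ratio {ratio:.2f}")   # Group A 1.00, Group B 0.60

The open questions the panel raises remain: which categories to audit, what ratio triggers disclosure, and how any of this maps onto what the EEOC will actually look at.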

 

Hon. Philip Miscimarra:  And David, same question, and I’ll expand it to include private litigants. To the extent that you have violations that are alleged to be proven using artificial intelligence tools in a courtroom or by an administrative agency like the EEOC, what does that do in terms of burdens of proof? What does that mean in terms of due process considerations?

 

David Fortney:  Thank you. It’s a thoughtful question. Now we’re getting, as they say, down to the nub of it a little bit, because this is where it comes into play.

 

So we have this multi-tiered very complicated legal landscape. The states, I think, largely are tracking basically what the European model is: transparency, disclosures, audits, and consent by the participants. 

 

Hon. Philip Miscimarra:  Which, by the way, there’s nothing preventing employers from doing that by themselves.

 

David Fortney:  And employers doing some of that on their own, I think, is a best practice, but it’s not required. In terms of enforcement, Phil, that makes it easier for states because those are very checklist-type items: Did you publish that you’re using this tool? Did you give people a chance to consent, opt in, or opt out, etc.? And respectfully, I think most state agencies can kind of handle that. I mean, it may be hard for our clients to comply with that or to understand how it interfaces with the federal requirements, but the states shrug and say, “Not our problem.” In our federalist system, we live with that at the moment. So we have to live with that component, and that, as I’ve indicated, is changing very rapidly. 

 

Then we have the federal system. Most of the laws we’re dealing with are focused on employers. And most employers are not the ones building these tools. They are customers. They buy these tools from AI vendors who develop them with all sorts of things built in. And as Keith mentioned, they have a black box component. 

 

I mean your typical company they’re like, “I need more efficiency in screening because I get 5,000 applicants a day, and I laid off three-quarters of my HR department, so I just need someone to — some process to get through that efficiently, and you know who I want. No, they don’t all need to be lacrosse players, but I want people that can get this job done.” That is largely what many companies are looking at today. So they go and there’s a variety of tools out there that say, “We’ll do a good job in scraping through applicants or going out and finding people and tapping them on the shoulder and saying, ‘Don’t you want to apply through various job boards?'” 

 

And so, when the burden of proof comes — when a company is faced with, let’s pretend hypothetically, the EEOC, and they say, “Well, we’re very concerned because your hiring practices indicate X. How did you come to pick those people?” And depending on the particular AI tool that’s being used, the answer is, “Well, we went forward with bona fide requirements for a job, and we said, ‘These are the people we want. These are the vacancies. Help us rank them. And yes, we did interview people, and we told people it was a Zoom call, and they all wanted to come. No one wanted to fly into headquarters in Seattle to talk to us, so we interviewed them all on Zoom. And here’s who we picked based on that.’ Very simple.” That could be problematic if the problem came through the “black box.” And the burden is on the employer, because you used the vendor’s product. 

 

There’s one other development, though, where I think both plaintiffs and maybe the EEOC are starting to pivot a little bit. The employment discrimination laws primarily focus on employers. They also focus on employment agencies. And I like how Keith approaches this, which is, let’s go back to before we had machines doing this. You’d go to an agency and say, “Yeah. I need a laborer. I need someone strong, and maybe from this neighborhood would be great.” So you have — and the agency only gives you resumes of —

 

Hon. Keith Sonderling:  And same with the unions. 

 

David Fortney:  — unions, etc. They’d fill those discriminatory requirements. All right. And clearly, the law in 1964 was designed to say, “We don’t want that to continue. We recognize that practice.” Terrific. That’s on the books. 

 

Now, today, some of the tools—at least, as alleged—the EEOC settled a case within the last week against DHI, in which the allegation was that the AI tool was being used to discriminate. It was a job board. And anyone who wanted to could post on the job board, but they alleged that DHI, the job board tool, would allow discriminatory postings to go forward. So the settlement was that they had to affirmatively use AI to ensure that none of the postings were discriminatory. It’s kind of a page out of the Facebook settlement involving discriminatory housing ads: “We’re going to make sure that the ads aren’t discriminatory.”

 

Another case: Workday is a very popular human resource information system that many employers use for a whole variety of tasks. Workday has been sued in private litigation. The EEOC took a pass on this one—they decided not to pursue the charge—in which the plaintiff says, literally, “I applied to 80 or 100 companies that I think used Workday, and I didn’t get a job. And I’m over 40, African American, have a disability, a whole litany of things. And therefore, I think Workday is what caused this to occur. And I’m not suing the 80 to 100 employers. I’m going to directly sue Workday.” Stay tuned. It was just filed. We don’t know the outcome yet of that, but it shows, as to the burden of proof and innovative ways, it’s kind of like how water finds its way in. These claims are going to come forward. The law may or may not be outdated, but the attack point is shifting to the vendors. The vendors are in the best position, typically. Employers are primarily on the hook, and it can be hard to defend, but the employment agency theory is one that’s developing rapidly and is worth following. 

 

Hon. Philip Miscimarra:  Just two questions before we open this up to the audience for questions and answers. And by the way, I’m actually — you think that I’m speaking. I’m just moving my mouth. My cell phone is set to the Google Bing search engine, which is actually generating the audio right now. 

 

David Fortney:  What is your ATM PIN? 

 

Hon. Philip Miscimarra:  Black box question number one—and I’ll direct this to Aram and David—everyone regards the kind of essence of AI as being that there’s a black box, that the algorithm does a lot of stuff and doesn’t necessarily use the same type of logic that any one of us uses. How many products are out there in the marketplace right now that you think basically don’t work, meaning they don’t perform the function that they are advertised to perform, and not necessarily as the result of any intentional misrepresentation, but because we’re kind of in the Wild West right now in terms of publishers of various tools and AI products? Putting legality aside, what do you think in terms of just how many products are out there that may not actually be performing the tasks that they are purchased or licensed to perform?

 

Prof. Aram Gavoor:  That’s a really good question, Phil, and a technical one that I don’t know if I can really conclusively answer. But I will say that you have proprietary software. You have a lack of understanding within society. There’s no subject-matter regulation by government, and it really comes down to just private contracting between parties. There’s no real way of knowing. I think it comes down to the successful market incumbents being able to prove that their products work. And I bet that, at least in the context of AI and HR, most firms aren’t just going to jump into replacing their entire processes with an AI product without engaging in some due diligence, hopefully. But there probably is a subset of firms that don’t do that, and that’s a big problem. 

 

Hon. Philip Miscimarra:  Yeah. David, what do you think?

 

David Fortney:  I think there’s a lot of what I’ll call “unintended consequences.” Many companies, and at least researchers and software engineers and so forth, they say, “Look. What do you want?” “I want pay equity. I want to be fair in hiring.” “Okay. So that means that every time you hire someone, I’ll check and I’ll calibrate my hiring decision for you.” The AI tool does this. “You’re kind of underutilized in terms of women in these engineering roles, so I’ll just, through the tool, tilt a little bit, and I will effectively suppress some of the qualified male applicants and really hone in on females. And I’ll just get those numbers right back up to where you want them.”

 

That is largely what we have found because people are like, “Wow. I started using these tools. I’m great. I’ve got pay equity. My hiring numbers are spot on at 42 percent female. Why, I hired 42 percent. It’s perfect. I’m absolutely perfect.”

 

And those are flags. The only way, in my experience, you can get that is if there is, in fact, either enhancement or suppression, which likely is unlawful, but it occurs, albeit for the best of reasons. And it’s very hard to find out how. And in my experience, I would like to think that the clients are being more careful. 
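As a rough illustration of why results that land exactly on a target can be a flag, here is a short, hypothetical Python sketch: under a neutral selection process, the chance of hitting a target percentage exactly in any one cycle is modest, and hitting it repeatedly across cycles and metrics becomes very unlikely without some tilting. All of the numbers are invented for illustration only.

# Illustrative sketch only: how likely is a neutral process to land exactly on a target?
# All numbers are hypothetical.
from math import comb

def prob_exact(n_hires, target_rate, true_rate):
    """Probability that a neutral binomial process yields exactly the target count."""
    k = round(n_hires * target_rate)
    return comb(n_hires, k) * (true_rate ** k) * ((1 - true_rate) ** (n_hires - k))

p = prob_exact(200, 0.42, 0.42)             # 200 hires, 42% target, pool is 42% female
print(f"one cycle: {p:.3f}")                # about 0.057
print(f"five cycles in a row: {p**5:.2e}")  # vanishingly small without score adjustment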

 

Honestly, the corporate lawyers tend to be the last ones to be checked with. The compliance unit, the compensation unit, HR, they’re like, “Hey. We’ve got this new software we’re bringing online in Q2.” And the lawyers are like, “What? What is this?” “Oh yeah. We signed the contract. We’re good. In fact, it’s going to be great. You’re going to love this because you’re always saying how bad our hiring numbers are. We fixed it. We went to that retreat with you.” This is literally from a conversation in the last two weeks with [inaudible 00:54:29]. “We went to the retreat. You were all over us. We had to get it fixed. We went out. We spent a quarter million dollars. We have the problem solved.” And we’re now trying to unpack and figure that out, and it may not be quite as good as advertised. But I don’t think it’s so bad, but it was —

 

Hon. Keith Sonderling:  And now our corporate logo is on all the vendors’ sites —

 

David Fortney:  Exactly. 

 

Hon. Keith Sonderling:  — so the plaintiff’s lawyers and class action lawyers, the government can see who’s using what. 

 

David Fortney:  So that’s — and I agree. We can’t quantify. But those vignettes, I hope, bring to life some of the concerns. 

 

Hon. Philip Miscimarra:  And last question for Keith. This is another black box question. The question’s about validation. And we know that selection tools that use artificial intelligence have an aspect to them that is enigmatic, because we don’t fully understand why it is that they produce outcomes that are more successful than human beings historically have been. But when it comes to validating a process that has at its essence something that’s very difficult to scrutinize with logic, when publishers or distributors of these tools say they’ve been validated: number one, is it possible, when a technology didn’t even exist two or three years ago, for it to be validated in the conventional way that would pass muster with the EEOC? And in general, when you’re talking about selection tools using artificial intelligence, is it the same type of validation that employers have used for the past 30 or 40 years? Or does it really have to be some different type of validation because it’s a different type of selection process?

 

Hon. Keith Sonderling:  Well, kind of tagging off what you said here, there are so many products out there, and maybe this is outside of our wheelhouse, more for the FTC, that say, “We’re EEOC certified, and we’re OFCCP certified because we’ve done this testing in the aggregate. And we used generic BLS data to show, for this generic engineering position at a vague company, for advertising purposes, that we are in compliance with the Four-Fifths rule.” And they really quote the rule as if it’s completely binding. And the sense is that is also just not true, because when the EEOC is going to look at the validation related to one of these tools, they’re only looking at how it applied in your workforce, in that location, on that job description. And that requires going through every single line of that job description to see if all of those requirements are necessary, if they can be validated, and what that actually looks like. 

 

So when you have the sales process for this saying, “Well, there are these longstanding tests that the EEOC and OFCCP use. We’ve paid for these either academic or private studies, these I/O psychologists or counsel, to look at this, taking a generic data set, and we’re okay.” Well, look. The law allows zero percent discrimination. And although there are methods of proving disparate impact that do allow some wiggle room as a defense, at the end of the day, when you’re actually showing up in court, and you’re showing up at the EEOC, you don’t know what method of validation you’re going to get, because the EEOC doesn’t always use the Four-Fifths rule, and neither do plaintiffs’ lawyers, and neither do judges. So it really becomes a battle of the experts over whose test shows better results for what they’re looking for. 
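For readers who have not seen these tests side by side, here is a minimal, hypothetical Python sketch comparing the Four-Fifths rule of thumb with a two-proportion z-test on the same hypothetical applicant flow; with small numbers the two screens can point in different directions, which is part of why the battle of the experts the Commissioner describes arises. This is an illustration under invented numbers, not how the EEOC or any court actually evaluates a case.

# Illustrative sketch only: two common adverse-impact screens applied to the same
# hypothetical numbers for one job at one location.
from math import sqrt

def four_fifths_ratio(sel_a, app_a, sel_b, app_b):
    """Lower selection rate divided by higher; below 0.8 is the usual rule-of-thumb flag."""
    rate_a, rate_b = sel_a / app_a, sel_b / app_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def two_proportion_z(sel_a, app_a, sel_b, app_b):
    """Z statistic for the difference in selection rates; about 2 is a commonly cited threshold."""
    rate_a, rate_b = sel_a / app_a, sel_b / app_b
    pooled = (sel_a + sel_b) / (app_a + app_b)
    se = sqrt(pooled * (1 - pooled) * (1 / app_a + 1 / app_b))
    return (rate_a - rate_b) / se

# Hypothetical: 10 applicants per group, 5 vs. 3 selected.
print(f"four-fifths ratio: {four_fifths_ratio(5, 10, 3, 10):.2f}")   # 0.60, flagged by the rule of thumb
print(f"z statistic: {two_proportion_z(5, 10, 3, 10):.2f}")          # about 0.91, not statistically significant

The point, consistent with the discussion above, is that the same numbers can look like a violation or like noise depending on which method an expert chooses.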

 

But at the end of the day, very similar to what we’ve been talking about, all those aggregate studies, all those testing methods, which I think apply equally to when employment assessments were done on Scantron with a pencil, at the end of the day it only matters how it’s applied in that one workforce, in that one job position, and that is wholly lost in the advertising equation and the implementation equation. 
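
[Editorial note: As an illustration of the Four-Fifths rule the panelists keep returning to, the sketch below computes selection rates by group and compares each to the highest group’s rate; a ratio below 80 percent is the conventional flag for potential adverse impact. This is a minimal, hypothetical example in Python. The group labels and counts are invented, and, as Commissioner Sonderling emphasizes, passing this ratio on aggregate data is not a substitute for job-specific validation.]

```python
# Editorial illustration only: a minimal sketch of the "Four-Fifths" (80 percent)
# rule of thumb from the Uniform Guidelines on Employee Selection Procedures.
# All group labels and counts are hypothetical; a ratio at or above 0.80 is not
# a safe harbor, since agencies and courts may also weigh statistical
# significance and job-specific validation evidence.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants


def impact_ratios(groups: dict) -> dict:
    """Return each group's selection rate divided by the highest group's rate.

    `groups` maps a group label to a (selected, applicants) tuple. A ratio
    below 0.80 is the conventional flag for potential adverse impact.
    """
    rates = {label: selection_rate(s, a) for label, (s, a) in groups.items()}
    highest = max(rates.values())
    return {label: rate / highest for label, rate in rates.items()}


if __name__ == "__main__":
    # Hypothetical applicant flow for one job at one location.
    applicant_flow = {
        "group_a": (48, 100),  # 48 percent selection rate
        "group_b": (30, 100),  # 30 percent selection rate
    }
    for label, ratio in impact_ratios(applicant_flow).items():
        flag = "below the 80 percent threshold" if ratio < 0.80 else "at or above the 80 percent threshold"
        print(f"{label}: impact ratio {ratio:.2f} ({flag})")
```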

 

Hon. Philip Miscimarra:  Yes. And I think we’ll open this up to other questions. 

 

By the way, I also want to acknowledge that former EEOC Commissioner Vicki Lipnic is also in the audience, and I intend to ask her all of these questions privately afterwards because I’m interested in what Vicki has to say as well. 

 

Yes.

 

Adam Thierer:  Hi. Excellent panel. Thank you. I’ve learned a lot here today. My name’s Adam Thierer. I’m a senior research fellow with the R Street Institute here in Washington, and I’m also the head of the Regulatory Transparency Project’s Emerging Technology working group. 

 

Last November we came out with a paper, a study, on the coming onslaught of algorithmic fairness regulations, where we surveyed the kinds of things that David was discussing in terms of state and local efforts that are multiplying like gangbusters, not just on the employment front, but on every conceivable front. And some are narrow. Some are broad-based. So we actually had a survey out there, and then my co-author Neil Chilson and I are going to come up with a follow-up study for FedSoc highlighting these points. 

 

And I want to build on that point that David made and that Phil just asked the panel about to tease out a little more detail about where we might be heading and get your opinions on where we’re going to see policy going over the next two to five years, which is that when you look at all the state and local activity on this front—again, not just on the employment side, but on all algorithmic fairness issues—all roads lead back to some kind of call for algorithmic transparency, algorithmic explainability. Let’s get under the hood, behind the black box, whatever your preferred metaphor. 

 

And the preferred regulatory vehicle to do this is some algorithmic impact assessment or AI audit. And there are important differences between those two tools, but what relates them is the idea that some body—it could be a government body, it could be a private body, a certification agent, it could be the courts—somehow certifies or puts a stamp of approval on the idea that you actually did reveal something about the algorithm to the public. This could take the form of a regulatory or legislative overlay: the privacy bill that’s been pending now for a while in Congress has an entire provision about algorithmic impact assessments being mandated. The Algorithmic Accountability Act bill that’s been proposed would create a new bureau of technology at the FTC to oversee the process of algorithmic impact assessments. The state and local bills in New York, Illinois, and others have some process like this. 

 

So I guess the question is—and this is something the Commissioner teases out in his excellent recent law review article on this issue—whether there is an alternative framework that we can be all right with, one that would suggest that, yes, there’s something to the idea that assurances, audits, and impact assessments can help validate or build trust in algorithmic systems, but that it’s done in a more decentralized, flexible, agile fashion, maybe through a voluntary self-certification process or, better yet, a professional body or association—and there are many out there doing this already, creating frameworks for impact assessments and audits—that would then get some light blessing from regulators, whether it be the EEOC or somebody else, and maybe even some sort of safe harbor from further liability. And this is something I know, Commissioner, you have —

 

Hon. Keith Sonderling:  Yeah.

 

Adam Thierer:  — a brief section on this in your law review article. I thought maybe you and the panel could talk about whether this is where we’re heading because, of course, the EU has a much more heavy-handed —

 

Hon. Keith Sonderling:  Right. 

 

Adam Thierer:  — prior conformity assessment process. California and other states want to go very aggressive. What I’m asking is: tell us how we get halfway there and satisfy the concerns of those who call for algorithmic fairness regulations without a full-blown, heavy-handed, top-down, precautionary-principle kind of mandate.

 

Hon. Keith Sonderling:  And everyone should read your work on this topic. I cite to it frequently, and I think you’re doing great work as well in the points you make.

 

This is a really important concept that I like to discuss: there’s nothing preventing employers or vendors from doing this now, in advance of any of these threatened regulations and in advance of any threatened lawsuits or EEOC investigations. 

 

In the employment context, employers have been doing internal audits for decades on the wage and hour side and on the independent contractor versus employee side, and they fix it if they see issues in advance. And a big part of our mission at the EEOC is not just to remedy employment discrimination but to prevent it as well. And there are longstanding tools for employers to do that. Depending on the context, even in the EEOC context, with the Me Too movement, we couldn’t bring every single case on sexual harassment, but because of the publicity that it had gotten—largely because of former Acting Chair Lipnic’s raising awareness of this—the EEOC put out guidance and best practices, and what it caused is HR departments taking this seriously. It caused the CEOs and boards to say, “It’s okay. We’re going to invest in this, and we’re going to be very public. And we’re going to have open-door policies. We’re going to have new policies and procedures. We’re going to audit, and we’re going to do everything we can to keep this problem from happening or occurring more.” 

 

So in the HR sense, in the labor and employment sense, this is happening constantly within corporations right now, whether it’s being done by internal counsel, outside counsel, or some combination with accountants. And that’s what I’m trying to argue here when it comes to AI because, as you’ve heard at length on this panel, for the time being, a lot of the standards, burdens of proof, whatever you want to call them, are the same. You can start doing that before you ever let one of these tools make a decision on someone’s livelihood. Okay. You can do that internally, whether it’s in combination with the vendor before you buy it or after you buy it with their assistance, pre-deployment. 

 

And that just puts you in a much better position than a company that buys the advertisement, that says, “Oh wow. You hired some study, and you showed that this product doesn’t discriminate, and it’s validated in your sense. Well, now I can just implement it like I do other software.” Which is how it’s being sold, because you’re saying, “Okay. Company, spend hundreds of thousands of dollars, millions of dollars on this product, and, like other products, you’re supposed to just let it go, and it’s going to be more efficient and economical.” But here, when you’re dealing with civil rights and AI, you have to make sure that you’re doing those compliance things in advance, or you can really, in a sense, violate the law pretty quickly and easily, as in some of the examples you heard here.

 

And my point is, if you do that now, and the EEOC comes or the class action lawyers come, who’s going to be in a better position? Company A says, “Well, we just bought this. Look at the glossy brochure they gave us. They said it was validated. They said they used the EEOC’s method. And it works great, and it promised me diversity, equity, and inclusion.” And I don’t say that in a joking fashion, because that’s what these programs promise. As David said, that’s why the business functions, not legal review, are buying them and jumping to them: because it’s promising pay equity. It’s promising all these things, which are top of the line for boards of directors right now. “So we just let it go, and it horribly discriminated.”

 

Versus company B, which says, “We took their advertising, but we know it only applies to our workforce. We did pre-deployment testing. We saw there were some issues. We tried to correct them, whether it’s the skills, whether it’s where we advertise for the job, whether it’s the qualifications. Whatever the metrics are you’re looking at, you can deal with them internally for that specific job because it’s that specific. We found some discrimination. We fixed it.” Or, “We’re taking the risk that under the Four-Fifths rule it’s at 96 percent, and with that four percent leeway, at least we’ve done that testing in advance, and we have it in our file.”

 

Who’s going to be in a better position when we show up, the company that bought the sales pitch or the one that did all this internal work? And that’s no different from when you’re facing a wage-and-hour audit, a pay equity audit, an H-1B classification audit, you name it in our space, or OFCCP, which David can talk to. It’s no different, but for some reason, I don’t see that happening yet. I’m encouraging that to happen. And that’s something corporations can do right now, not just on the testing side, but also on the employee handbook side and the fair use side. And I alluded to that before. 

 

Who are you allowing to have access to these algorithms? How are they being trained? And if you have a corporate policy internally, very much like you have a sexual harassment policy, versus “Oh, anyone can come in and play with it,” again, who’s in a better position? And all of this you can do right now, and there’s nothing preventing companies from doing it yesterday. And that’s what I’m encouraging in the meantime. 

 

But instead, everyone is distracted by this: “Oh, is there a new agency? What’s New York going to do? What’s the EU going to do? Oh, the algorithm’s proprietary, so no government can get it.” It’s all nonsense. 

 

David Fortney:  Can I just tag onto that? The institute report that I mentioned by the 40 experts has a nice list of best practices agreed to by that very diverse group, and I think that would build right on what Keith just outlined. Thank you. 

 

Steven Schaefer:  Well, please join me in thanking our guests today. And thank you to all of you who are watching at home. 

 

[Music]

 

Conclusion:  On behalf of The Federalist Society’s Regulatory Transparency Project, thanks for tuning in to the Fourth Branch podcast. To catch every new episode when it’s released, you can subscribe on Apple Podcasts, Google Play, and Spreaker. For the latest from RTP, please visit our website at www.regproject.org.

 

[Music]

 

This has been a FedSoc audio production.

David Fortney

Co-Founder, Fortney & Scott LLC

and former Chief Legal Officer, U.S. Department of Labor


Aram Gavoor

Associate Dean for Academic Affairs; Professorial Lecturer in Law

The George Washington University


Keith Sonderling

Commissioner

Equal Employment Opportunity Commission


Philip Miscimarra

Partner

Morgan Lewis



Federalist Society’s Administrative Law & Regulation Practice Group

The Federalist Society and Regulatory Transparency Project take no position on particular legal or public policy matters. All expressions of opinion are those of the speaker(s). To join the debate, please email us at [email protected].
