Tech Roundup Episode 18 – The Future of AI Regulation: Examining Risks and Rewards
In this Tech Roundup episode, we delve into the discussions raised by the U.S. Senate Judiciary Committee’s recent hearing on AI regulation.
Neil Chilson and Adam Thierer offer an in-depth analysis of the various approaches to AI that are being considered by regulators and in public policy circles – from voluntary efforts by the industry to promote transparency and accountability to advocating for licensing and testing requirements for AI deployment to the prospect of a global regulatory body for AI. They explore whether AI presents fundamentally new questions for policymakers, or whether regulators already have the tools they need. They discuss the risk of over-regulating this new innovative space and the potential consequences of doing so.
Don’t miss out on this extensive conversation on one of the most important issues in tech policy today.
Visit our website – www.RegProject.org – to learn more, view all of our content, and connect with us on social media.
Transcript
Although this transcript is largely accurate, in some cases it could be incomplete or inaccurate due to inaudible passages or transcription errors.
Welcome to the Regulatory Transparency Project’s Fourth Branch Podcast series. All expressions of opinion are those of the speaker.
Colton Graub: Hello, and welcome to the Regulatory Transparency Project’s Tech Roundup podcast. My name is Colton Graub. I’m the Deputy Director of RTP. Today, we are excited to host a Tech Roundup Discussion on the recent Senate hearing on AI and how regulators and policymakers might approach the technology going forward. We are pleased to have Neil Chilson and Adam Thierer here to discuss this important topic with us.
Neil is a senior research fellow at the Center for Growth and Opportunity. Prior to that, he was a senior research fellow for Technology and Innovation at Stand Together and the Chief Technologist at the Federal Trade Commission. Adam Thierer is a senior fellow at the R Street Institute. Prior to that, he was a senior fellow at the Mercatus Center at George Mason University. In the interest of time, I’ve kept my introductions of our guests brief, but if you’d like to learn more about either of them, you could find their full bios at rtp.fedsoc.org. With that, I’ll hand it over to Adam to guide the discussion. Adam, the mic is yours.
Adam Thierer: Thanks, Colton, I appreciate it. And, Neil, welcome. Good to see you again. So last year, Neil and I released a paper for The Federalist Society predicting the coming onslaught of algorithmic fairness regulations. And since then, interest in artificial intelligence and its regulation has exploded at all levels of government. So let me just begin by giving a very brief overview to our listeners of just how much activity’s out there right now, and then Neil will be able to supplement this list.
But starting with the Biden administration: last October, the White House released a 73-page Blueprint for an AI Bill of Rights, floating a variety of potential new AI governance ideas. Meanwhile, the U.S. Department of Commerce recently launched a major proceeding on “AI accountability policy” that asked for public comments about how to design various types of new AI regulations, or so-called AI impact assessments or audits. The Federal Trade Commission, where Neil used to work, is an agency that’s now stepped up its interest in AI in a major way with a series of blog posts and policy statements, including some with other agencies, such as the EEOC, DOJ, and CFPB.
Meanwhile, at the state and local level, we have seen an avalanche of new laws proposed having to do with everything from addressing “algorithmic bias” to concerns about automated hiring techniques or systems, among many other things. Just this week, California was considering legislation on this front about automated hiring and algorithms. New York City has already passed rules on this front, and many other states and localities are interested in this. And let’s not forget, at the international level, the European Union is aggressively pushing a new AI regulatory regime under the so-called AI Act that would include some sort of “prior conformity assessments” and a whole host of new European-style regulations and fines.
But we’re going to talk mostly today about what’s happening at the federal level in Congress. And there, we’ve seen a lot of different bills already floated. We’re waiting for Senate Majority Leader Chuck Schumer to introduce a rumored bill on “responsible AI” that’s likely to include some sort of AI transparency or explainability mandates. Last year, we saw the so-called Algorithmic Accountability Act floated, which would have created an entirely new division at the Federal Trade Commission, called the Bureau of Technology, to oversee some sort of AI regulation. And even the baseline privacy bill that was floated and almost passed last year, which would have created a federal baseline privacy framework for the United States, included language requiring algorithmic impact assessments.
But recently, the U.S. Senate Judiciary Committee held a rather amazing hearing, “Oversight of AI: Rules for Artificial Intelligence,” on May 16th that was really striking in terms of the number of new regulatory ideas that were floated for artificial intelligence, based on concerns about everything from bias and discrimination to job loss to safety issues to intellectual property concerns and a whole bunch of other worries about so-called existential risks surrounding AI. At that hearing, there were proposals floated, including by some of the witnesses, for a formal licensing regime for more powerful AI systems or a new agency, maybe in the form of an “FDA for algorithms” with a pre-market approval regime of some sort.
And there were also calls for a complementary new global regulatory body to also regulate here, or harmonization with other nations, such as the EU and others, which are already aggressively trying to regulate. There were a whole bunch of other ideas floated, some of which I know Neil will want to go into, including everything from nutritional labels for AI to some sort of mandated disclosure or audits or impact assessments, which we’ll talk about in a moment.
But what I think was most remarkable about the hearing—and this is where I’m going to turn to Neil—is just how chummy OpenAI’s CEO Sam Altman was with members of the Committee and how many members openly gushed about how much they appreciated the willingness of Altman and other witnesses to preemptively call for AI regulation. In fact, Senator Dick Durbin at one point said that it was a “historic hearing” in terms of the way that tech firms were coming to him and begging for regulation. He noted during the hearing that AI firms were telling him, “Stop me before I innovate again,” which gave him great joy. And he said the only thing that really mattered now is, “How are we going to achieve this?”
So, Neil, let me bring you in now. Let me ask you to identify some of the dangers or threats in what we’re seeing percolating right now in Congress and in the federal government more generally, and specifically to focus in on this idea that AI needs a new regulator or a formal licensing regime, because this is something you’ve written extensively about, including in a 2020 paper, “Does Big Tech Need Its Own Regulator?”. You went through these proposals before, but now we’re seeing them applied in particular to AI and algorithmic systems. So let’s start there, and let me ask you for your general thoughts about what’s happening and what’s wrong with that idea.
Neil Chilson: Well, I think the hearing that you mentioned, where Sam Altman testified along with two other witnesses, was indicative of Congress’s continued interest in technology but not a deep attempt to grapple with that technology. So my big takeaway from that hearing was that you could have replaced the use of “AI” at almost any point with just “technology” or “the internet,” and almost all those sentences would have made sense, which is why the concerns that you mentioned were so broad. They’re just concerns that Congress has learned to talk about in the context of the internet. So the Senate right now is just bringing the set of concerns it has built up in talking about primarily social media and applying them to AI, which is very different in a lot of ways from social media.
So I didn’t find many of the proposals surprising at all. As you said, they sound very much like the types of proposals that Congress has been pitching for internet companies since social media became a politically — a political football to bat back and forth. And so, some of the work that I had done previously was around whether or not we needed a digital regulatory agency. At the time I wrote the paper in 2020, the focus was — the political driver was social media concerns. But the scope of these proposals was often much broader than social media companies and covered everything — everything connected to the internet, more or less. And so, that’s why these calls were for digital regulatory agencies.
And there were some serious people proposing these. There’s the CMA in the U.K.; there’s a book by Harold Feld that called for this. There was a proposal that came out of Harvard that was driven by Tom Wheeler, the former FCC chair, and some others. And the Stigler Center in Chicago did a long research project with dozens of people involved and issued a report promoting the idea of having a digital regulator. Most of these approached it from a competition viewpoint, but the concerns that they raised are the exact same concerns that people are talking about in the context of AI.
And the problem with a single regulatory agency, the challenge for this type of approach, is the same one that I talk about in that paper: namely, why do we create regulatory agencies in the first place? Why doesn’t Congress just solve the problem? And the answer has traditionally been the idea of division of labor, right? We’re going to assign experts to solve these problems. But in order for that to work, you need to have a scope that is narrow enough that you can have established expertise in it. And there are two different models in the U.S. for expert agencies.
On one hand, they can be experts in a specific industry or technology, so this is like the FCC or the FDA or the FAA. On the other hand, they can have expertise in procedure or in a particular type of harm. And this is more the FTC model: a focus on consumer harm or harm to competition across industries. These are two different models of expertise. But the problem is that these regulatory agencies, especially when we call for a new digital regulator or an AI regulator, don’t really have either of these in mind. They tend to say, “Hey, we’re not going to focus on any specific set of harms. Because if we were going to do that, we might assign it to agencies that are already expert in those types of harms. So we’re just going to cover everything in a specific industry.”
But then the problem is that AI, and even more broadly information technology, isn’t really one industry. It’s dozens of industries that have totally different characteristics. And I think this is part of what led to the — somewhat of the love fest that you mentioned at the hearing. You heard everything again. You heard everything from concerns about content moderation online and bias to existential risk to unemployment. And that’s because they were all talking about AI, and they just used this generic term. Well, the risks and benefits from, say, automated driving are radically different from the risks and benefits from, say, health diagnostic materials or content creation generative AIs, which tend to be the ones that most consumers are using these days.
And so, without a level of specificity, it was very easy for people to say, “Hey, I’m worried about AI.” And then everybody could say, “Oh, yeah, I’m worried about AI, too,” but they probably all meant different things when they were talking about it. And I think the CEO of OpenAI, Sam Altman, you saw that where he was agreeing, I think, often to the need for a licensing regime of some kind for AI. But he wasn’t very clear on what the — what AIs that would apply to. And I know, subsequently, OpenAI has issued a blog post to try to clarify that.
And so, I just think that the problem here is even bigger than it would be if you were creating a specific, say, social media regulator. The problem here is that we don’t really know which problems we’re trying to solve, which risks we’re trying to mitigate. And just kicking that football down to a regulator isn’t going to add expertise. It’s just going to add a level of unaccountability to the American people, frankly.
Adam Thierer: Yeah. Those are good points. And just to build on that a bit, one of the things that I found most shocking about this hearing and a lot of the discussion around algorithmic regulation today is how people seem to imagine that AI policy is happening in some sort of a vacuum, like we don’t have any existing regulatory capacity in the administrative state or something. I mean, just by the numbers, the U.S. federal government has 2.1 million civilian workers, 15 cabinet agencies, 50 independent regulatory commissions, and over 430 total federal departments. And people are telling me there’s nobody paying any attention to artificial intelligence.
That seems astonishing to me. And, in fact, it is ridiculous, because we know that a lot of agencies have been active for many years on this, not always in the name of AI or algorithms, but sometimes under the name of big data, and there have been proceedings on commercial surveillance and the internet of things. And agencies—like you mentioned the Federal Trade Commission, a general purpose regulator—but there are other general purpose regulators. The Consumer Product Safety Commission first came to me to talk about these issues in 2016. And there are other sector-specific regulators that have been active even longer—the National Highway Traffic Safety Administration, the Federal Aviation Administration. The Food and Drug Administration has done a lot of work on AI, including some regulation.
So the first question — and this is something you teed up in your paper on this, Neil — is, like, “What are we doing beyond what’s already being done? And can we first address maybe what went wrong with those systems before we add just another layer of bureaucracy?” So that’s question one. Why another layer? Question two is exactly—you already alluded to this—exactly what are we regulating or specifically licensing here? Because AI is a general purpose technology—also a dual use technology—much like earlier technologies such as computing or consumer electronics, for which we did not have a federal consumer electronics agency or a federal computers commission, thankfully. We dealt with those in a different way, in a more decentralized, polycentric kind of fashion, with a variety of different agencies and consumer protection laws and safety laws and, of course, lawsuits, which was something that was also floated at the hearing.
And so, it seems to me like there’s a lot going on out there, but people don’t tend to acknowledge it. They instead jump to the idea like, “We need some new thing, a new silver bullet solution to this.” And it just strikes me that there’s a naïveté about the way the world of policy works from a lot of the people proposing AI policy and regulation today.
Neil Chilson: Yeah, it could be a naïveté. I think there’s also the risk—and this is the big risk of a new special agency, right—is capture by the people who are regulated by it. And one way to effectuate that would be to agree with the government that, “Hey, I need to be regulated,” and then help shape the rules. Now, I don’t want to — I can’t see into Sam Altman’s mind. I don’t know that that’s particularly his purpose, is regulatory capture, but it does seem like a likely effect of a licensing regime that would seek — that would have companies seek permission ahead of time before they go out and build new products.
On top of that, there are just some practical enforceability problems, not just in the U.S. but obviously worldwide. It’s going to be hard to put these kinds of restrictions on Chinese companies, who are going to be building this sort of thing, who are going to be exploring these technologies as well. And so, I think there are a lot of downsides to an AI-specific regulator, especially given what we already have—and you pointed to the FTC’s blog posts on this—none of them were revolutionary, right? Like, they were basically applying the same principles that the FTC has applied about consumer protection and competition to this new technology, which is something that the FTC has been quite agile at in the past. Even if you don’t agree with all of their decisions, they’re following their mission, and their mission to protect consumers and promote competition is not a technology-specific one.
And as you pointed out, the FTC has worked on issues like this. If you look at their big data report, the concerns discussed there sound very similar to the concerns that everybody is raising about AI—that it’s going to be biased, it’s going to have real effect on people’s privacy, that it may not be accurate, that there may be misinformation or disinformation contained within its responses. All of these concerns are things that I think we’ve dealt with forever. And we’ve tried to work out ways to do it—and maybe we can do it better—but I don’t know that a new agency is the right approach.
Adam Thierer: Yeah. Yeah. Obviously, I agree with all that, and I’d add a couple of other points beyond the concern about capture, which a lot of people did raise, including at the hearing, where the issue was mentioned by Senator Blumenthal and several others: the danger of such an agency being captured and hurting smaller developers or open-source algorithmic platforms and applications. So at least there was some acknowledgment there in Congress, even though at the same time there were people offering Sam Altman the job of heading up this new hypothetical AI regulator, which is something I’ve never seen happen. In 31 years of covering tech policy, I’ve never seen the CEO of an industry leader offered a job as a regulator on the spot.
But let me get back to the licensing part really quickly before we transition, because I think this is important. I mean, there was really no discussion of the potential costs of delay associated with the licensing process. And if you’re going to have an FDA for algorithms, for God’s sakes, we’ve got a lot of history, over 100 years of it, about the costs of delay associated with Food and Drug Administration regulation. That’s not to say that the FDA doesn’t do some important things—they obviously do—and we need safety regulations. But the reality is that algorithmic technologies evolve extraordinarily rapidly.
And I couldn’t help but note the irony that, less than three days after Sam Altman testified in favor of some sort of federal licensing regime, here he was releasing a new app in the Apple App Store, an OpenAI iOS app, that basically put the system right there on your phone. And I snarkily tweeted out, “Congratulations, Sam, for evading the very regulatory process that you proposed when testifying just three days prior,” because under such a regime that app would not have seen the light of day for many, many months, maybe longer.
We have experience with these things, with these sort of preemptive, prophylactic types of regulatory regimes, which I like to call “Mother, may I” permission slip regimes. And, of course, all we need to do is look across the pond at Europe and know what that regime looks like in practice through things like GDPR and other data regulations. And I just — I’m astonished that these things are going undiscussed for the most part. And I think that that’s probably the next part of this fight, is like what it looks like in practice to have a licensing regime and what are those tradeoffs.
Neil Chilson: So one thing that — I said that a lot of these debates are old. The one thing that is somewhat novel in the AI space is the idea of existential risk, that AI might destroy all of humanity. And so, I think that is — I think a lot of the discussion is sort of tinged with that fear. But the hearing did not really discuss that at all. And I think that’s in part because nobody really knows what to do about that. The industry doesn’t really know. Certainly, regulators aren’t quite sure what to do about that. And I think that’s because it’s, at this point, quite speculative. It’s mostly the result of a lot of musings on internet blog posts and forums about what AI could do in the future. More science fiction than fact right now, but it’s sort of caught fire, and it’s influencing the conversation.
Adam Thierer: Yeah. One thing that was mentioned at the hearing, however: Gary Marcus, who testified and who is an AI researcher, has been in favor of some sort of regulation for a while now. And he’s been floating these ideas and mentioned at the hearing some sort of international regulatory authority, perhaps modeled on the International Atomic Energy Agency, or some sort of UN body to oversee high-powered systems. The devil’s in the details, and he did not provide them. But I couldn’t help but wonder why members of Congress, especially some of the more conservative members of the Senate committee at the hearing, were not asking, “Well, are you talking about the UN being a global government regulator for all computational systems? And, first of all, is China going to go along with that? And, oh, by the way, didn’t the UN just hand the presidency of the Security Council to Russia even as it invades Ukraine?”
And last year, in the most astonishing thing I think I’ve seen the UN ever do—which is saying something—they gave the presidency of the Conference on Disarmament to North Korea. I mean, that’s the biggest nuclear pariah state on the planet, and there it was, running nuclear disarmament efforts for the United Nations. So it raises the question: even if you believe there’s something to the idea of existential risk, there’s this wishful thinking about some sort of global super-regulator that will be benevolent and wise and able to get everybody under one umbrella, as if we’re all just a global village regulating all of these things, when the realpolitik of AI policy, like all tech policy, is far muddier and messier. And in many cases, we just can’t trust that international apparatus because, in my opinion, it runs up against the national security interests of the United States. And I was astonished that no members really seriously discussed that issue that day when people were floating these ideas.
Neil Chilson: Yeah. And I think that is why — why I think in this discussion, we’ll just continue to see people essentially fitting their prior concerns about change and technology and just calling them AI concerns. And that’s where the — that’s where the hearing very much went. I think we can expect to see that.
Adam Thierer: I think that’s right. I think there were some people at the hearing who alluded to the fact—Senator Chris Coons, for example—that this is basically just an extension of the social media policy wars. All the fights we’ve been having about social media algorithms are now coming to the world of AI, such that the social media holy wars are going to become the AI holy wars, if you will. And I think that’s exactly right. And on that point, let me transition to point two, which is, I think, the more practical near-term regulatory threat to AI and algorithmic technologies: the idea of mandating some sort of algorithmic transparency or explainability, probably in the form of mandatory algorithmic impact assessments or audits.
Let me just briefly tee up what this idea is and then ask you to comment on it, Neil, because we’re both filing right now in a proceeding at the Department of Commerce on this issue. Comments are due in early June. Basically, pretty straightforward, the idea is we’ve got impact assessments and audits in other arenas for other things, whether it’s trade issues or environmental policy or labor practices. And you can have an impact assessment before the fact, before innovation happens.
Or you can have an audit after the fact, and the audit can be done internally, or it could be done by a second or third party. It just depends. It could even be done by a government agency, as would be required by some of these proposals. There’s a lot tied up, though, in what it is we are auditing, and that’s what’s so tricky here, right? When you audit financial books, either the numbers add up or they don’t. When you have standards and audits for, say, the octane in gasoline, which is something that’s regulated, either it is 87, 89, or 93 at the pump, or it’s not.
With algorithms, the things being audited, reviewed, or made more transparent or explainable are so inherently, fundamentally subjective that we’re opening up a pretty big can of worms here, I think—and I’ll turn to you for comment on this—but it strikes me that we’re going down a path we’ve already gone down before in other sectors, with the Federal Communications Commission and many other agencies that have tried to sort of regulate transparency or information practices in other ways. Because it’s almost impossible in theory to be against the idea of transparency and explainability, but what the heck does it mean? And what are the tradeoffs associated with requiring that an algorithm be more explainable, from a trade secrets perspective or a scammer’s perspective or whatever else? So let me turn to you for comment on that general idea of audits and impact assessments for algorithms.
Neil Chilson: Yeah. So audits and impact assessments are tools, I think, that companies can and have used to make sure that their products meet the market demands of consumers. And I don’t have a huge problem with companies doing such things. The questions come in, when does government need to intervene to make such things happen? And you’re pointing to a fundamental problem with some of these efforts, which is that if we are going to audit against a standard, somebody’s going to have to set the standard. And if we’re going to audit against, say, a bias or a misinformation standard, you’re running up pretty quickly into government having to say what the standard is.
And that’s why I think these transparency mandates are politically attractive, because then government officials can kind of plausibly say, “Hey, we’re not actually setting a standard here. We’re just saying you have to tell people what you do.” And the challenge there is both a technical challenge—as you pointed out, many of these systems become less accurate when you make them more rigid and explainable—and also when you say that you have to report transparency on one dimension or another dimension, some of the messiness of the algorithms will move into some of the other dimensions.
And so, I’m somewhat skeptical of these transparency mandates achieving the goals that people want. I think they will definitely impose compliance costs, and they might have — they might force companies to think about issues that they might not have thought about before. But does that actually in the end benefit users or society in the way that governments want? I’m not sure. I think a lot of it is basically a way to try to bully companies into doing substantive things without running into the First Amendment by making them do substantive things.
Adam Thierer: Yeah. I think that’s right. Although clearly, by regulating algorithms, you’re regulating code. And if code is speech, algorithms are speech at some level, so you could run right back into speech-related or First Amendment concerns. But let’s stick with the compliance cost side of this, because a lot of people treat transparency mandates as sort of a free lunch, like there’s no cost here. But if you look at the academic literature surrounding the idea of AI audits or algorithmic impact assessments, all roads lead back to something like NEPA, the National Environmental Policy Act of 1969. And a lot of the people who advocate for mandated AI audits cite NEPA favorably.
And yet, NEPA is right now under attack by a lot of people—left, right, and in between—for the unbelievable costs that it imposes. And people at your own organization—Eli Dourado, Will Rinehart, and many others at CGO—have done excellent work on this, showing that while NEPA’s mandated environmental impact statements were initially quite short—sometimes just 10 pages long—the average length of those statements has grown over the years to exceed 600 pages, with appendices that run to thousands of pages in total. And they take an average of 4.5 years to complete. And some have taken 15 to 17 years.
What that means is that many important public projects or goals never get accomplished because of just the endless compliance rigmarole. And now, they want to bring that model to the world of artificial intelligence, which moves at the speed of light. But under this sort of regime, if we had either federal licensing or some sort of mandated transparency/explainability in the form of impact assessments or audits being required, you could imagine this being slowed down endlessly. And then all the veto points that would be introduced into the process of algorithmic design would be endless. So, I mean, I just don’t see how that works. I mean, can you imagine any scenario whereby we can mandate that sort of thing but not have those problems?
Neil Chilson: I mean, that’s a softball question. I can’t imagine it. I think there are people — I think there are people who think that slowing down is the point and that that’s what we need to do. We do need to slow this down. I think what they’re missing is that it’s not just about a slowed — a change in acceleration and how fast these technologies come about. But it’s a distortion in what the technologies end up looking like. When you have veto points, where there are political decision-making considerations, the companies will try to keep their regulator happy. They will try to shape the regulator.
And the way they’ll do that is by conceding on any various number of political hot points. And that will shift depending on who’s in charge. But this is an inherently political process. And it brings things into the equation that are not aimed at delivering the best product or best service to people but meeting political demands. And I think that — I would be worried about what that process — how that process could be abused if it was in the hands of somebody who had different political considerations than I did when I was adopting it.
Adam Thierer: Yeah. I agree with that. I think all those points are excellent. And I just have to believe that the only way that an algorithmic impact assessment/auditing regime could work and not completely decimate AI innovation would be a very, very soft mandate that basically said, like, “Look, you all should do impact assessments. You should do internal work to vet these things,” which so many AI innovators already do. And this is not like something that’s completely new. I mean, there’s all sorts of internal processes to make sure that these systems are safe and secure. And, of course, you’re not going to have a really good business model built on misery and failure. So there’s just the natural competitive market effects.
But then, there are also all the other things we already mentioned—unfair and deceptive practices and consumer protection standards and civil rights law and other things—that will check these algorithms for the sort of things people want to have reviewed anyway. So I don’t know if we’ll get to that point, but I think the compromised position ends up being some sort of softest of soft mandates that industry says, “Yeah, we agree we’ll do these things,” but they’re more first-party oriented impact assessments or audits. Or maybe third party, but it’s maybe an industry certification body of some sort.
And I don’t know how that’ll work. I’ve tried to spell that out in some of my own work about other models for that, an Underwriters Laboratories kind of model or something else. And there are a lot of different professional associations, from the IEEE to the ISO to the ACM, which are technical bodies. They’ve already got standards for how to do these things that could be applied in a more self-regulatory, self-certified kind of way. So we’re in a holding pattern on that right now, but it seems like that’s something that’s coming.
Neil Chilson: Yeah. You mentioned the NTIA comments. And related to what you were just saying, the thing that’s interesting is this is an inquiry into how to develop a productive AI accountability ecosystem. So their catch word across this entire inquiry is “accountability.” But they never say accountable to whom, and that’s kind of an important question to ask. The other thing that’s really interesting in the — I think the implication there, by the way, is that accountable to the government, maybe just to the Biden administration. But they never quite say that.
The thing that’s missing the most in that inquiry, and what you just alluded to, is that the biggest source of accountability in our market system is the customer. And if you’re not satisfying the customer, that’s a great source of accountability. And the NTIA inquiry completely ignores the market dynamics that drive companies to create good, safe, reliable products because that’s what customers want. And there’s very little justification in the inquiry—maybe there will be in the record—for why there would need to be other accountability mechanisms.
And yet, all of the accountability mechanisms that they talk about are things that could — almost all of them are things that could be used in the market process. As you pointed out, lots of companies do software audits to make sure their software is safe and that it does what is expected and that it produces results that consumers want. Yeah. And so, the question is, why would there need to be intervention to drive more accountability? The RFC doesn’t really — the request for comment doesn’t really suggest why or ask much about that but —
Adam Thierer: Just jumps into the specifics of details, right? Yeah. Let me move to one final point before we have to wrap up here, which is that when we raise these points of concern about these proposed regulatory concepts for AI, the obvious immediate pushback is, “Well, don’t you believe in anything? Don’t you want anything to be done?” And so, we get this all the time. But, of course, we’ve already identified many things that are being done or that could be done. But maybe we should tease this out a little bit more, and I want to hear from you on this.
But one thing I’m pointing out in my comments to the NTIA for this algorithmic auditing proceeding — request for comment rather, is that our government can do a lot of good in terms of sort of education, literacy, risk communication, just talking about AI benefits and risks and how to address them in sort of less restrictive ways. That’s sort of one bucket that I always try to set forth that it’s something government can do—education, literacy, awareness building. A second is one we’ve already alluded to, which is like tap existing authority, power, regulatory capacity and figure out where you can fill gaps with that before you jump to new regulations.
And then third is the whole world of — the amorphous world of soft law or multistakeholderism, which is something that has been playing out in the world of emerging tech for many, many years. With federal lawmaking being so hard—it’s so hard to get anything through, whether you want it or not—that leaves a vacuum that gets filled by soft law, or sort of informal steps by government and industry working together, often through multistakeholder processes where they come together and talk about problems. They hammer out best practices, codes of conduct, certifications, things like this.
Now, this is a tricky issue, especially for a lot of conservatives, because of course it raises the question about the administrative state gone wild. At the same time, a lot of good things happen in the world of soft law. In the world of driverless cars, we’ve been waiting eight years for federal legislation on robot cars, and we haven’t gotten any yet. But we’ve got a lot of rough rules of the road in terms of informal best practices and guidances and things like this. The FDA, an agency I criticize a lot in my own work, has been doing some interesting things to actually set up best practices for software as a medical device, which is just AI in the world of medicine. And a lot of it’s pretty good. It actually opens up the door, keeps the door open at least, to a lot of AI innovation in that space. It’s not perfect, but it’s something.
So let me get your thoughts on that just general last bucket. What is to be done that is more constructive, targeted approaches to some of the concerns that are being raised by members of Congress and other critics of AI?
Neil Chilson: Well, I think the key thing to do is to identify what harms specifically we’re worried about. And this is doable. I mean, these senators are thinking about specific scenarios and the types of harms that they are worried about, but they’re jumping very quickly past, “Well, do we have regulatory tools to deal with, say, fraud or cybersecurity, or do we need to create more incentives in those spaces? And where would be the right places for those?” Rather than thinking about this globally as AI and then creating a bunch of new regulatory regimes around it, I think they really ought to drill down on what the harms are that they’re worried about, identify what might be a good approach to that, see if we already have regulatory tools there. In many cases, we might; in some cases, we might need to strengthen those.
But I think that’s a much more productive path forward than taking this holistic view and saying, “Hey, we’re just going to treat this as if we’ve never dealt with these types of harms before because it has a new technology that’s associated with it.” I think that would be a much more productive way to do it. And it’s one that, like I said, the FTC has historically been a pretty good template for that. And that type of case-by-case approach especially has been effective in identifying specific harms, going after them, and by — in that way, shaping some of those norms and soft law that companies internalize and industries internalize. I think that’s a good model. I think it’s one probably that we should pursue in the AI space while looking for gaps that need to be filled.
Adam Thierer: Yeah, I agree with that. It’s more targeted, risk-based analysis and approach to algorithmic governance. And we’re not saying that there aren’t concerns or issues that will arise with AI. There clearly will be, and some of them are really, really difficult to deal with. But some of them are actually right in front of us, and we have the capacity to deal with them today. And I already mentioned some of the agencies that are active on this front.
And I don’t have any beef with the Federal Trade Commission saying things like, “Keep your AI claims in check.” I mean, if a company is out there saying, “We do great, amazing things with AI and our systems,” and they have no machine learning going on or data science behind the [inaudible 40:20], that’s just good old-fashioned fraud. We have tools to police that. This isn’t anything new under the sun. So I just wish people could refocus on channeling existing authority and power to do good things. And if, by contrast, that authority is not doing good things or is blocking progress, we ought to deal with it. We can figure that out as well. There might be a serious risk or harm associated with government being clumsy or overregulating.
I think of the Federal Aviation Administration and how they’ve held back progress on autonomous flying systems, drones, for many, many years because of their highly precautionary-principle approach to these things. So I have to believe there’s a better way to do this than what is currently being floated with these grandiose national and international regulatory schemes and licensing bodies for AI, which, I think we should conclude by pointing out, would potentially decimate the competitive advantage that the United States currently enjoys in the digital marketplace and have major strategic — global strategic geopolitical ramifications as China and other nations look to accelerate their own advantages in this space. And they do have some. China is really out in front on some of these algorithmic technologies, so we have to be aware of that.
So, Neil, before I wrap up today and turn it back to Colton, I just want to offer you the chance to, if you want to say any concluding remarks, feel free, but also tell people where they can find you on Twitter and CGO.
Neil Chilson: Great, thanks. It’s been a great conversation. One final thing I’ll point out, kind of on the side, about AI being too general of a term: I wrote a somewhat snarky, satirical post that said, “Hey, we don’t need an agency to regulate AI. We need an agency to regulate intelligence, period. Because God knows people have misused that.” And you can find that on my Substack, which is outofcontrol.substack.com. You can find me there, or you can find me on Twitter @Neil_Chilson. And the Center for Growth and Opportunity is a great group of people who are working hard to break the barriers that hold people back from reaching their full potential. I’m happy to have recently joined to contribute to that work, in part because it lets me partner with people like you, Adam, and also appear on things like the Regulatory Transparency Project’s podcast. So it’s great to be here.
Adam Thierer: Yeah. Great. Thank you, Neil. And the only problem with your satirical piece proposing like a federal human intelligence regulator is that somebody in Congress will probably take you up on that idea. So be careful what you joke about these days, right? But, no, it’s been a great conversation. Thank you for joining me in this conversation. I can be found myself on Twitter at my name, AdamThierer, and you can find me at the R Street Institute, where we have an entire landing page now on algorithmic issues and AI policy. So just go to rstreet.org and you’ll find it all there. So with that, I’m going to turn it back to Colton. And thanks for having us today, Colton.
Colton Graub: Thank you so much for joining us today, Neil and Adam. We really appreciate your time and your insight on this incredibly important topic. We’d love to have you back in the future as this issue continues to develop, as it certainly will. To our audience, if you enjoy this discussion, please visit rtp.fedsoc.org to take a look at the rest of our content and follow us on social media to stay up to date with new content as it’s released. Thank you so much.
On behalf of The Federalist Society’s Regulatory Transparency Project, thanks for tuning in to the Fourth Branch Podcast. To catch every new episode when it’s released, you can subscribe on Apple Podcasts, Google Play, and Spreaker. For the latest from RTP, please visit our website at regproject.org. That’s regproject.org.
The Federalist Society and Regulatory Transparency Project take no position on particular legal or public policy matters. All expressions of opinion are those of the speaker(s). To join the debate, please email us at [email protected].