Explainer Episode 44 – The Implications of AI Innovation and Regulation

In this podcast, technology and data privacy experts discuss the evolving landscape of artificial intelligence and machine learning and what these technologies mean for existing and future policy and for technology innovation. In the absence of a clear regulatory framework, differing definitions and taxonomies have been adopted to regulate AI. What will future AI trends look like, and what should policymakers prioritize moving forward?

Related RTP content: The Coming Onslaught of “Algorithmic Fairness” Regulations

Transcript

Although this transcript is largely accurate, in some cases it could be incomplete or inaccurate due to inaudible passages or transcription errors.

[Music and Narration]

 

Introduction:  Welcome to the Regulatory Transparency Project’s Fourth Branch podcast series. All expressions of opinion are those of the speaker.

 

Chayila Kleist:  Well, hello, and welcome to the Regulatory Transparency Project’s podcast. My name is Chayila Kleist and I’m Assistant Director at the Regulatory Transparency Project. Today, we’re delighted to host a discussion on current attempts at AI regulation and the potential consequences. As always, please note that all expressions of opinion are those of the experts on today’s program as The Federalist Society takes no position on any particular legal or public policy issues. 

 

To address our topic today, we have as our moderator for this podcast Jennifer Huddleston, a policy counsel at NetChoice, where she analyzes technology-related legislative issues at both the state and federal level. Her portfolio and research interests include issues related to data privacy, antitrust, online content moderation including Section 230, transportation innovation, and the regulatory state. Additionally, Ms. Huddleston is a member of RTP’s cyber and privacy working group. I’ll leave it to her to introduce the rest of our panel. 

 

In the interest of time, I have kept my introduction of our speaker brief and other introductions will be brief as well, but if you would like to know more about any of our guests today, please feel free to visit regproject.org and read their impressive and full bios. With that however, I’ll hand it over to our moderator. Ms. Huddleston, the mic is yours.

 

Jennifer Huddleston:  Thank you. And thank you for that introduction. I’d like to elaborate a little bit. As part of the cyber security and data privacy group at the Regulatory Transparency Project, one of the topics we’re increasingly seeing come up, both in that context as well as in the broader technology policy context, is this question of artificial intelligence, machine learning, and what they may mean for existing policy, new policy, and innovation in the future. 

 

I’m happy to be joined today by two, I would say, leading experts on this exciting emerging field. Hodan Omaar, who serves as senior policy analyst at the Center for Data Innovation focusing on AI policy. She previously worked as a senior consultant on technology and risk management in London and as a crypto economist in Berlin. I’m also joined today by Adam Thierer, who serves as a senior fellow with the R Street Institute. His work has been fundamental in helping to make the world safe for innovators and entrepreneurs by pushing a policy vision that is rooted in the idea of permissionless innovation. Prior to joining R Street, Adam was at the Mercatus Center, and he has also served as president of the Progress and Freedom Foundation and at various other think tanks around the D.C. area. So thank you both for joining me today.

 

Adam Thierer:  Thank you.

 

Hodan Omaar:  Yeah. Thank you.

 

Jennifer Huddleston:  So to begin, I think a lot of us have heard the terms “artificial intelligence” or AI, whether in the news or in science fiction novels or TV shows we may have seen. We also hear the term “machine learning” thrown about a lot, and it seems like these terms are just thrown out there without really explaining what they mean. So to get started, could you tell us: what is artificial intelligence, and how is it the same as or different from machine learning?

 

Adam Thierer:  Sure. Well, I’m happy to start. I think it’s really important to understand that, as the U.S. Government Accountability Office has noted in a report on artificial intelligence, “There is no single universally accepted definition of AI but rather differing definitions and taxonomies.” And so this leads to a lot of debate about what we mean by these terms, including breaking it down into its components, “artificial” versus “intelligence.” It’s hard enough sometimes to even define what intelligence means in this world. 

 

But basically, the way I define it is this: artificial intelligence just involves the exhibition of intelligence via machine, and machine learning refers to the processes by which a computer can train and improve an algorithm or computer model without human involvement. What’s an algorithm? An algorithm is just a recipe. It’s like a recipe for a dish, but in this case, it’s the code recipe that makes up a program. And so these are the basic definitions I operate under when I think about AI and machine learning. 
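[Editor’s note: To make that distinction concrete, here is a minimal sketch in Python. It is an illustration added for this transcript, not something discussed in the episode; the function names and toy data are invented. A plain algorithm is a fixed, hand-written recipe, while the machine-learning version derives its recipe (here, a weight per word) from labeled examples.]

```python
# Plain algorithm: a fixed, hand-written rule for flagging spam.
def is_spam_rule(subject: str) -> bool:
    return "free money" in subject.lower()

# Machine learning: the "recipe" (one weight per word) is learned
# from labeled examples rather than written by hand.
def train_spam_weights(examples: list[tuple[str, bool]]) -> dict[str, float]:
    weights: dict[str, float] = {}
    for subject, label in examples:
        for word in subject.lower().split():
            # Nudge each word's weight toward the observed label.
            weights[word] = weights.get(word, 0.0) + (1.0 if label else -1.0)
    return weights

def is_spam_learned(subject: str, weights: dict[str, float]) -> bool:
    score = sum(weights.get(w, 0.0) for w in subject.lower().split())
    return score > 0

examples = [("win free money now", True), ("meeting notes attached", False)]
weights = train_spam_weights(examples)
print(is_spam_rule("Free money inside!"))             # fixed recipe
print(is_spam_learned("free money inside", weights))  # learned recipe
```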

 

Jennifer Huddleston:  And Hodan, is there any distinction you would draw between AI and machine learning, or are those terms we should be using together or interchangeably? How do you think about those two terms in the context of each other?

 

Hodan Omaar:  Yeah. I think that’s a great question and I know Adam has done some great work on this. I think in general there are some important distinctions, but it really depends on who you’re asking. So if you’re a researcher, you might have a very particular distinction that you might be working with when you talk about machine learning versus AI. And in general, I think if you’re a journalist or a policy maker, oftentimes we are using these terms interchangeably. Algorithms, artificial intelligence, but also computer programs, automated decision-making systems, among a variety of other terms. 

 

And I think depending on which ones you use, it will really affect people’s perceptions of the properties of these systems and the way that they evaluate them. I think that the problems that can arise from terminological differences and misalignments also have different consequences for those different actors. In the media context, if journalists write about intelligent systems versus algorithms, that alters what people imagine when they read those articles. And if policy makers legislate automated decision-making systems versus machine learning versus artificial intelligence, that can mean very different things on a technical level. So I think the key message here is that the language that we use and what we actually mean is very important to consider. Certainly, for the purposes of this conversation, I will be using these terms relatively interchangeably — algorithms, and AI, and machine learning as well.

 

Jennifer Huddleston:  Great. And you kind of alluded to this, Hodan, so I’ll go back to you. What are some of the ways we’ve already seen AI and machine learning used and are these things that consumers are already encountering a lot or are they things that are just starting to come about?

 

Hodan Omaar:  I think that’s a great question. There are lots of examples I could talk about, whether it’s in health care — AI being used to predict health outcomes, forecast diseases, analyze medical images. There are also lots of examples in the professional services, in legal and accounting, advertising. AI is helping in those areas to record and sort and process data. But the question of how common it is is actually, at least from where I sit, notoriously difficult to answer. There are few sources where you can look at how AI is being adopted across the country and across the world. 

 

I think in the United States, one of the key sources of information is the National Science Foundation’s Annual Business Survey. I will admit that they have data that they’ve released relatively recently, but I’ve only looked at the release that came out back in March of this year. And it actually shows that overall adoption of AI across both manufacturing and nonmanufacturing industries is relatively low. Fewer than 7% of companies in both of those industries report using AI in significant ways. I think partly that’s because there are other technologies that organizations and firms have to have in order to really use AI in the ways that they want to. That can be cloud computing, or, especially in the manufacturing industries, you might need internet of things devices first. And so, talking about how common the use of AI is — according to this NSF data, it’s actually quite low.

 

Jennifer Huddleston:  Well, and of course, we’ve seen a growing number of fun uses at least pop up on social media platforms recently. I think we’re sitting here in early December of 2022, and the big thing this week has been ChatGPT and all the different things people are using that resource for — everything from writing rap lyrics about the GDPR to the many other creative uses we’ve seen emerge. We certainly have seen a lot of fun things happen, but Adam, I’m curious. What are the examples of AI used today that you’re most excited about and that you think people may be experiencing quite a bit?

 

Adam Thierer:  Yeah. Absolutely. So let’s boil it down to a few things that people use in their lives almost every day. Every time we open up our phone mapping apps and get directions somewhere, there’s algorithms in the background powering that. And those maps are constantly improved through data and machine learning. Every time we shop online and get very tailored highly targeted types of personalized recommendations, there are algorithms and machine learning powering those things. Every time we use spam and virus filters on our email systems, or we utilize smart devices in our home such as Alexas or Siri on our phones, these are all algorithmically powered machine learning driven technologies and applications. 

And of course, that just really scratches the surface of what’s going on, because there’s a whole heck of a lot of uses of AI and machine learning behind the scenes which we don’t ever really see, in fields from medicine to manufacturing to transportation and agriculture. Really, what we’re witnessing, Jennifer, is a computational revolution that’s building upon the digital and data revolution that we’ve already experienced over the past 25 years. So in many ways, there’s a lot happening, but the best is yet to come.

 

Hodan Omaar:  Could I just add onto that really quickly? I think that some of the funnest ways that I experience AI that I don’t even know that I’m experiencing is, if I’m going to the airport and the road is really smoothly paved, there’s a chance that those potholes were identified by AI. And so there are ways that we go around the world not even knowing that AI is making our lives better. Perhaps not the funnest example I can think of but getting to the airport smoothly and efficiently is a real blessing.

 

Jennifer Huddleston:  Absolutely. And Adam, you mentioned that in some ways this is a real computational revolution, and it seems like with every computational revolution the regulators want to get into this as well. And with this being a regulatory transparency podcast, I want to shift a little bit. Now that we’ve talked about some of the incredible benefits of AI and machine learning, what has the regulatory landscape looked like? I mean, we’ve seen calls for regulation at a local level with things like the D.C. City Council, we’ve seen international or multinational bodies such as the EU looking into AI, and then we’ve also seen, at a federal level, the Biden Administration release an AI Bill of Rights. If you could just briefly give our listeners a lay of the land: is AI currently, to use the phrase from your book, Adam, “born in captivity,” or is this the future of permissionless innovation? And if you had your crystal ball, which direction do you think our regulators are heading in on this topic?

 

Adam Thierer:  Yeah. That’s a great question. And it turns out to be the basis of a paper that the Federalist Society has just released that Neil Chilson and I wrote entitled “The Coming Onslaught of Algorithmic Fairness Regulations.” So Neil and I head up the FedSoc RTP emerging technology working group and our experts on that working group have been identifying over the past couple of years the rise of a whole host of federal, state, local, and international proposed regulations for algorithmic systems. They are divided into broad-based regulations and then narrow regulations. Basically the same way AI is divided into broad AI versus narrow AI. 

 

At the broadest level, we’re seeing a lot of regulatory proposals around the idea of algorithmic fairness, algorithmic transparency, algorithmic justice. There’s, for example, a federal bill, the Algorithmic Accountability Act, and then there are many state bills that take some sort of similar approach. Basically, these laws would forcibly require the creators of algorithms, broadly defined, to essentially reveal things about their code or how they are building their algorithms, or to do annual audits, or to be audited by some third party, and then maybe submit them to a regulator. In the case of the Algorithmic Accountability Act, it would be a new Bureau of Technology at the Federal Trade Commission that would have some sort of oversight role here. 

 

At the state level, some of these things are driven by narrower concerns such as privacy, safety, security, or concerns about some sort of bias with regard to content or viewpoints or other matters. So there’s a whole host of different types of rules proliferating here in the US, but we ain’t seen nothing compared to what’s happening in Europe right now with the EU AI Act and other types of legislation pending there and in other countries. So again, there’s a reason Neil and I, in our paper for the Federalist Society, refer to it as an avalanche of coming algorithmic regulations.

 

Jennifer Huddleston:  So Hodan, from your research, do you have anything additional to add about these types of regulation that we’ve seen start to emerge and their potential consequences?

 

Hodan Omaar:  Yeah. I second everything that Adam said. But I think one of the regulations that is really relevant and coming up pretty soon is the local regulation that we’re seeing in New York City around AI and hiring. Effective January 1st, a new law regulating the use of automated hiring tools comes into effect in New York City, which will make it illegal for employers to use AI tools unless they’ve been audited for bias in the year prior to use and the results of that bias audit are made publicly available. And I think in general, hiring is an area where we’re seeing a lot more local level regulations pop up, and even within the EU’s AI Act, hiring is called out as a high-risk area, so it’s of import there too. 

 

I think some of the consequences that we see or that we are likely to see is that these rules increase the barrier to entry for the companies that are making these AI systems and they also affect which systems companies choose to use. So if I’m an employer and I’m liable for using a system that’s biased, I’m going to only use or try to use the most accurate system, the best one that’s on the market. And from a policymaker’s perspective, they might be saying, “Great. Yes. That’s exactly what we want. We want to ensure that employers choose the most accurate system and that we can incentivize them or guide them to using those systems.” 

 

But interestingly, there might actually be a counterintuitive consequence that comes from this, which is that regulators might actually be creating more bias because of something called algorithmic monoculture, which is when multiple decision makers or firms all deploy the same systems. And in the hiring context, it’s quite easy to think about because this scenario is close to the real-world context. More than 700 companies, including over 30% of Fortune 100 companies, all rely on a single vendor’s tools for resume screening. And what recent research suggests is that even if the algorithmic screening tool, the AI tool that they’re using, is more accurate than everything else on the market, accuracy might actually become worse when everybody is using the same systems. 

 

And there’s a lot of math behind why this happens, and it’s based on the probabilistic properties of rankings, the way that these systems rank things. But the high-level takeaway is that, in the hiring context, independence is more important than accuracy. That’s to say, if the three of us, me, Jennifer, and Adam, were all firms choosing a system, then in terms of accuracy and what would be best, it would be better if we used three different systems than if we all used the same system. 

 

And so what impact will this NYC law regulating AI systems for hiring have on algorithmic monoculture? Will it make it worse? Will it make us all want to be using the same systems? Is that good? Is that bad? In the hiring context, research suggests that that’s bad. In the education context, maybe that’s good. The thing is, we really don’t know; across different contexts, education, healthcare, we don’t know how this idea of algorithmic monoculture will impact outcomes. And so there needs to be more research on these things. We’re not really sure what the impact of certain types of regulations will be, and so there’s a real need to consider everything and take things relatively slowly.
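[Editor’s note: Hodan’s monoculture point can be made concrete with a toy Monte Carlo sketch in Python. This is an illustration invented for this transcript, not the model from the research she mentions, and the parameters are made up. Each firm hires the best remaining candidate according to its ranking model; when every firm shares one model, that model’s errors are repeated across firms, while independent models make uncorrelated errors. Whether independence actually wins depends on the noise levels, which echoes her point that the effect is context-dependent.]

```python
import random

def avg_hire_quality(shared: bool, n_candidates: int = 50, n_firms: int = 3,
                     sigma_shared: float = 0.8, sigma_indep: float = 1.0,
                     trials: int = 2000) -> float:
    """Average true quality of hires when firms share one (more accurate)
    ranking model versus each using an independent (noisier) one."""
    total = 0.0
    for _ in range(trials):
        quality = [random.gauss(0, 1) for _ in range(n_candidates)]
        if shared:
            # Monoculture: the same noisy score is reused by every firm.
            noise = [random.gauss(0, sigma_shared) for _ in range(n_candidates)]
        hired: set[int] = set()
        for _firm in range(n_firms):
            if not shared:
                # Independent models: fresh, uncorrelated errors per firm.
                noise = [random.gauss(0, sigma_indep) for _ in range(n_candidates)]
            best = max((i for i in range(n_candidates) if i not in hired),
                       key=lambda i: quality[i] + noise[i])
            hired.add(best)
        total += sum(quality[i] for i in hired)
    return total / (trials * n_firms)

random.seed(1)
print("shared, lower-noise model  :", round(avg_hire_quality(shared=True), 3))
print("independent, noisier models:", round(avg_hire_quality(shared=False), 3))
```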

 

Jennifer Huddleston:  That’s a great way to transition to something that we’re seeing in a lot of these conversations, whether in the bills that are proposed or just in some of the discussions emerging around artificial intelligence and machine learning. Some policy makers and individuals have expressed concerns particularly about the potential for bias and discrimination in the use of AI systems and have proposed sometimes very dramatic new regulations in response. You mentioned some proposals that would ban, or effectively ban, the use of these tools. I was wondering if you two could explore a little more: is bias and discrimination in AI any more likely than it is amongst humans, and what are the appropriate tools for regulators or individuals or companies to address such concerns?

 

Adam Thierer:  Well, I’ll take that on first. There are two things to be said here. There certainly could be some concern about how algorithms could be biased or discriminatory in various ways, because unfortunately, humans are biased and discriminatory in various ways, and when humans code new algorithms, they might bake in certain biases that are bad. And the good news is that algorithms aren’t static. They constantly evolve. And when we find mistakes like these, we will weed them out and correct them over time. And hopefully, algorithms also help us identify human biases and eradicate those as well. 

 

But there’s another point here that’s important when we talk about, well, we’ve got to regulate to address all of these theoretical hypothetical worst case scenarios involving bias or discrimination. Well, the good news is that we already have a whole host of nondiscrimination and civil rights-oriented laws and statutes on the books at the federal, state, and local level, and we have various other regulatory remedies, from consumer protection laws to unfair and deceptive practices authority; we have a variety of other very targeted regulatory tools, recall authority that many agencies have, and so on and so forth. And let’s not forget the courts. There’s always somebody ready and eager to sue at the first sign of trouble. And our common law system isn’t perfect, but it’s also part of the remedy here when things go wrong. 

 

But beyond that, one final point. Yes, algorithmic innovators should be taking steps to preemptively address these concerns by baking in certain best practices by design. We’ve had a lot of discussion in the United States in recent years around the idea of privacy by design, security by design, safety by design. And these are good efforts. These are decentralized governance approaches to various types of technologies, including emerging technologies like AI. We definitely need more professional associations and trade associations to do the kind of things they’ve been doing to establish good ethical baselines and best practices for the development, use, and sale of algorithms. But that’s a totally different thing than asking for a new overarching top-down regulatory regime where we try to take amorphous, abstract, and aspirational goals like “algorithmic fairness” or algorithmic transparency and convert them into preemptive regulatory edicts that will somehow be widely understood and easily accepted by everyone who’s developing in real time. That’s just not going to work. That’s a European style, precautionary principle-based model for regulation that will end up quashing the spirit of permissionless innovation that makes America’s technology sector the envy of the world.

 

Hodan Omaar:  If I could add on —

 

Jennifer Huddleston:  Absolutely.

 

Hodan Omaar:  I would agree. I’d say there is the potential for the use of AI systems to introduce bias or be used in ways that exacerbate discrimination. And I agree; I think it’s important that companies and organizations and policy makers think about the ways that algorithms, whether intentionally or unintentionally, exacerbate existing biases and inequalities or introduce new ones, because algorithms do pose new challenges. They can be very complex, they can be widely implemented, and therefore affect people at new scales. But what I would add is that I think one of the issues with many of the efforts to address AI bias and discrimination is that they can overemphasize the technical aspects of these problems. 

 

What I see happening is that sometimes policy makers and journalists are taking very complex, messy, and longstanding social problems and then reframing them as algorithmic ones. The problem with that is that the political energy and the oxygen can get focused on exclusively solving the algorithmic element of a problem in a way that doesn’t actually fully address the root cause of those problems. So Adam brought up the courts. And a good example, I think, is cash bail. For a long time, advocates have drawn attention to the fact that cash bail can be discriminatory. There’s plenty of literature on how cash bail can hurt the poor and disproportionately impact people of color, fueling this pervasive problem of structural racism in the criminal justice system. That’s a decades-long problem, but recently AI systems have been introduced into these problems because some courts and jurisdictions are using AI to decide whether people should be allowed out on cash bail, and the conversation has almost exclusively been about whether or not courts should be allowed to use AI systems to make these decisions. But the underlying problem, the use of cash bail, is not one that AI created, nor is it one that just getting rid of AI is going to solve. 

 

The American philosopher Charles West Churchman called this sort of thing “taming the growl” in 1967. You take a tough problem, or a “wicked problem” as he calls them, and you carve off a piece of the problem and find a rational and feasible solution to that piece, and in that way you tame the growl of the wicked problem even if the problem still exists. And so the important thing with tools and efforts to address AI bias is that we have to recognize that technology is only one component of a broader sociopolitical problem. That’s not to say that journalists and policy makers calling out AI is always bad, because in some ways, the use of AI in these longstanding problems can help us see them anew. It’s just that when we consider how to address those problems, it’s important to not just focus on how they can be addressed algorithmically. We really need to address the root problems.

 

Jennifer Huddleston:  So Adam, you kind of mentioned the potential for these highly regulatory regimes to emerge with some of these proposals. What do you think would be the consequences for the future of innovation both with AI and some of its applications should these heavily regulatory regimes emerge whether it’s at a city level or at an international level?

 

Adam Thierer:  Yeah. I think we have to divide the potential for a chilling effect here into two different things. One would be social or speech related effects. The other would be economic and innovation-oriented effects. In both cases, there’s cause for serious concern. If we’re going to enlist a whole new set of bureaucracies into the business of becoming code cops for algorithmic systems and computational processes, it means that we’re always going to have to basically seek out permission slips — innovators will always have to seek out permission slips from some bureaucrat to basically do basic things in real time. I mean, algorithms move at the speed of light. We need real time ongoing development of these things that’s free of the hindrances that have destroyed technological innovation in Europe and so many other countries. 

 

And so from a speech related perspective, this could have a profound chilling effect in terms of how it might curtail the vibrancy of online speech and communications. In economic terms, it might just mean a whole lot less innovation flowing out of algorithmic sectors and services. We don’t want to do something like we’ve seen in other sectors of our economy. Going back to something you alluded to, Jennifer, when I talk about “born free technologies” versus “born in captivity technologies,” what I mean by that is that there are certain technologies that are blessed to be essentially born into an environment that is largely driven by permissionless innovation, that does not have overarching top-down regulatory regimes by default. And that’s certainly been true of the internet and smartphones and social media and 3D printing, robotics, and a whole bunch of other fields. And you see a ton of innovation flow from those spaces for that reason. 

 

By contrast, born-into-captivity sectors are older sectors like transportation and finance and healthcare. It’s much harder to innovate in those sectors. And the amazing thing about AI and machine learning is that it’s invading all of these spaces, not just the born free sectors, but also the ones that are born into captivity, whether it be with driverless cars or drones or advanced medical devices and healthcare algorithms. But the question is, how will those agencies respond? And I’m a little bit skeptical on some of these fronts. It’s going very slowly when it comes to things like drones, which are basically just flying algorithmic systems. And when it comes to a lot of healthcare applications, there’s a real danger the FDA or other regulators will derail a lot of that innovation. 

 

So policy and regulation have consequences, and we need to understand that unless we continue to adopt and embrace more of a permissionless innovation vision for these technologies, we’re just not going to get out of these sectors and technologies the sort of innovative potential that we should hope for as a nation, which, finally, I’ll just point out, is particularly important in a geopolitical context where China is racing ahead on these technologies and trying to be a global leader where they weren’t in the past, as is the EU. I’m not as worried about the Europeans because they’re going to shoot themselves in the foot with all their new regulations and statutes and AI acts and robot liability directives and the Digital Services Act and GDPR. There are so many layers of regulation and red tape in Europe that everybody there looks to basically move to America when they’re ready to make their play. And we need to be here with open arms and welcome them, but welcome them with the right innovation culture so that we get more algorithmic innovation than the rest of the world.

 

Jennifer Huddleston:  Hodan, what do you think is the most important thing that policy makers or consumers consider when they’re hearing about these calls for AI regulation, or in the case of policymakers, when they may be tempted to overregulate AI?

 

Hodan Omaar:  I think one of the things that I have really been considering myself recently is that there is just so much cutting-edge research into the impact of different policy decisions, even if those researchers don’t necessarily frame the consequences — as I mentioned earlier, this idea of algorithmic monoculture is not one that I’ve seen discussed much, or at all really, in the policy landscape. It’s really something that’s in the research arena. So I think what policy makers really need to consider as they move forward with different policy efforts is to engage with researchers and people who are thinking about these things, to understand what the consequences might be of different actions before just going ahead and doing them, because it’s very hard to change regulation once it’s already in place.

 

Jennifer Huddleston:  Because AI and machine learning are such multi-use tools, they’ve certainly become a part of the debate not only in debates over these proposed AI regulations but also in other areas of tech policy whether it’s the use of artificial intelligence to do online content moderation at scale for large social media platforms, whether it’s the ways that things like generative AI could change search as we know it and questions of what that content creation may mean, or whether it’s privacy concerns that we’ve heard some express about the datasets of AI or about the use of AI in certain decision making. 

 

I’m curious, how do you guys view AI in terms of the overall tech landscape? What, if any, other areas of policy proposals should those who are really excited about AI and its potential for innovation be looking at as potential pitfalls along the way, where, say, data privacy regulation might have an impact on AI’s ability to continue, or something that regulates AI may limit its ability to be useful in, say, transforming search or content moderation?

 

Adam Thierer:  Yeah. Jennifer, that’s a great question. And I recently wrote a piece called “AI Eats the World.” And it was building on the famous old Marc Andreessen piece about software eating the world that was written over a decade ago, pointing out that, with the passage of time, every single segment of our economy and society came to be touched in some way by software and computing and the ramifications of the digital revolution. 

 

Well, now by extension, every single segment of our society and economy is going to be touched by algorithmic technologies and computational services. And so as it does, basically all policy will involve AI and computational considerations at some level. And what that means is that it’s going to open up a whole bunch of new potential fault lines for public policy where a lot of mischief can be done by people claiming that algorithms are unfair or biased or whatever else. 

 

And we’ve already seen this play out exactly as you just discussed in the context of things like online safety and privacy and security debates. And a lot of the bills that are proliferating at the state level right now that we’ve been tracking are basically bills, some of which are even favored by conservatives, that basically say, we need to just open it up to figure out whether or not this is transparent and nondiscriminatory from the perspective of content balance. Is it fair to conservatives or fair to whatever? I mean, there is just a huge amount of mischief that could be done when you open up a Pandora’s Box of regulation like that. 

 

But I think that’s where all roads lead. I think the battle for the next five to ten years in this country is going to be all about the idea of algorithmic transparency and what that means in practice, and the regulatory mechanism that people will seek to impose will be very much like one that’s been imposed in the world of environmental law for a long, long time. In fact, a lot of lefty legal scholars advocate that we take the National Environmental Policy Act model and basically apply it to algorithms, such that every time you as an innovator deploy a new algorithm for whatever purpose, you have to run it through not only a bureaucracy but some sort of a community board; there have to be rounds and rounds of reviews, so-called algorithmic design evaluations or what the Europeans call prior conformity assessments. You’re talking about layers of regulation. 

 

And why does that matter? Well, look at what it did in the environmental context, which even the left recognizes today. It held up progress on hugely important public goods and public activities that had real consequence and meaning for society. It held back real progress on all sorts of key environmental initiatives and public works projects. Well, the same thing could happen with algorithmic regulation. And the kind of thing that we tried to highlight in this new Federalist Society study that Neil and I wrote is that you have to really unpack what you mean by the term algorithmic fairness or algorithmic justice or algorithmic transparency and then get serious about where there’s actually serious harm that demands immediate preemptive types of regulation, or where it can be better handled through decentralized processes, ex post law, best practices, multi-stakeholder arrangements. That’s the better way to go at it, as opposed to blocking progress by design and making all algorithmic innovation guilty until proven innocent. The standard should continue to be innocent until proven guilty, or permissionless innovation.

 

Jennifer Huddleston:  So Hodan, as we wrap up, we’re sitting here in December, and we have to think about what’s going to happen next. If you could look ahead to the next year or so, what would you say are the trends when it comes to AI either in terms of AI innovations or in terms of AI regulation that our listeners who are interested in this topic should be looking for?

 

Hodan Omaar:  So I think you already touched on it quite a bit, and you’ve both done a lot more work around AI and content moderation than I have, so I’d also just love to hear your thoughts. But I certainly think that we are going to start to hear much more around the role of recommendation algorithms in content moderation. I’ve been seeing this year’s reports and discourse around content moderation and AI, and it seems that the main thing I’m hearing is that recommendation systems are programmed to focus on user engagement, either by promoting outrage or something else, and the reason for that is that it’s better for these social media companies. It’s better for their financial incentives. 

 

But actually, just even today, I was reading a paper calling for companies’ recommendation systems to be better aligned not with financial incentives but with user value, user utility, and broader social goals. And that is something that is just so difficult to do in real life. What does that mean? I read another paper that talked about how, even if you had a completely altruistic social media company that only wanted to maximize user utility, it would be very difficult for them to do the “right thing,” because humans are very complex and intrinsically inconsistent. 

 

So this paper that I read, it was called “The Challenge of Understanding What Users Want” by Jon Kleinberg. It gave this great analogy about going to a party and eating potato chips. You go to a party, there’s a bowl of potato chips, you eat them all, and the gracious host, based on all the chips you ate, might assume that you want a refill of those chips. But maybe I don’t want that. Maybe I want you to save me from myself and not refill the potato chips. But you can’t tell what I want based on my behavior. My behavior and my preferences might be two different things, and it’s very difficult for the host to know that. 

 

And similarly, social media can be like potato chips. Even if a social media company wants to do the exact right thing and is, as I said, altruistic, it’s hard for them to figure out what users actually want based on what they consume; they can basically be misled about people’s preferences. And so I think for platforms to succeed, we’ll be thinking a lot more about recommendation systems. And I encourage people to read that paper, and I think at least what I’ll be working on next is thinking more about recommendation systems and AI.
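[Editor’s note: As a toy illustration of that potato-chip gap, here is a small Python sketch added for this transcript; it is not from the Kleinberg paper, and the weights and distributions are made up. Suppose each post has a long-term value to the user and a separate impulsive “pull” that drives clicks. A platform only observes clicks, so ranking purely by click rate can systematically surface low-value, high-pull posts even when no one intends that outcome.]

```python
import random

random.seed(0)
posts = []
for _ in range(1000):
    value = random.gauss(0, 1)             # what the user actually wants
    pull = random.gauss(0, 1)              # what the user impulsively clicks
    click_rate = 0.3 * value + 0.7 * pull  # observed behavior mixes both
    posts.append((click_rate, value))

# Engagement ranking sees only clicks; a value ranking is the unobservable ideal.
by_clicks = sorted(posts, reverse=True)[:20]
by_value = sorted(posts, key=lambda p: p[1], reverse=True)[:20]

def avg_value(feed):
    return sum(value for _, value in feed) / len(feed)

print("avg value, click-ranked feed:", round(avg_value(by_clicks), 2))
print("avg value, value-ranked feed:", round(avg_value(by_value), 2))
```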

 

Jennifer Huddleston:  Adam, do you have anything to add particularly on that question of recommendation systems and AI?

 

Adam Thierer:  Well, obviously, this is an ongoing iterative process, and no one would claim that recommendation systems or any of the algorithmic systems we’re discussing here today are perfect. There are always going to be flaws and problems. We have to learn in real time and muddle through. We have to find ways to ensure that there’s continuous innovation in algorithmic technologies and not stop everything based upon hypothetical worst-case scenarios. 

 

The ethical concerns surrounding algorithms absolutely need to be addressed, and we need to shine a spotlight on these things and take appropriate governance steps to make sure these systems are beneficial for society. But let’s never forget there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across multiple dimensions. And that’s the problem with so much academic dialogue about these issues today. A lot of the academics just assume that these technologies come about magically, and then they jump ahead to think about all the ways that things could go wrong and how we have to preemptively address them by design. 

 

Well, guess what? Those technologies will not come about if you suffocate them out of the gates with a whole host of new laws and regulations based upon hypothetical worst-case scenarios. We have to learn in real time how to roll with the punches and figure out how to respond to the problems that develop, whether with social media algorithms or various types of safety and security related technologies and algorithms. Mistakes will be made, but the corrective needs to be more market-driven and then, when things really get problematic, driven by ex post legal and regulatory remedies that are already on the books. Again, we have so much law in this country, so many regulations. Somebody needs to make a very compelling case for why we need to add yet another bureaucracy, yet another big overarching law or a patchwork of laws, to the huge volume of rules that we already have, which probably can handle these things quite nicely.

 

Jennifer Huddleston:  Well, thank you both for joining me today. I think this has been a great starter course for those who are interested in trying to figure out what’s going on in this area particularly as we head into what will certainly be a busy year next year for those that are interested in artificial intelligence and machine learning. Before we go, can you tell our listeners where they can find out a little bit more about each of you and your work?

 

Adam Thierer:  Sure. I’ll go first. You can always find me on Twitter @AdamThierer or on Medium, where I write regularly about this, and my work is on SSRN. My academic work, my longer work, and my newer work on this will be both on the RStreet.org website, and then a lot of the work that I’m doing with the Federalist Society is available on the Regulatory Transparency Project website. So you can find it all there.

 

Hodan Omaar:  And for me, you can also find me on Twitter @HodanOmaar and you can find my work at ITIF.org or at Datainnovation.org. 

 

Jennifer Huddleston:  Great. And it was mentioned earlier, I’m Jennifer Huddleston, and you can find me on Twitter @JRHuddles, and then you can also find my most recent work on NetChoice’s website or like Adam on the Regulatory Transparency Project’s website where you are hopefully finding this podcast. And with that, I’ll turn it back over to Chayila.

 

Chayila Kleist:  Well, thank you all so much for being with us today and for sharing your expertise and insight. For our listeners, thank you for tuning in. If you’d like to find more content like this, feel free to check out regproject.org. Again, thank you all so much and have a great day.

 

[Music]

 

Conclusion:  On behalf of The Federalist Society’s Regulatory Transparency Project, thanks for tuning in to the Fourth Branch podcast. To catch every new episode when it’s released, you can subscribe on Apple Podcasts, Google Play, and Spreaker. For the latest from RTP, please visit our website at www.regproject.org.

 

[Music]

 

This has been a FedSoc audio production.

Hodan Omaar

Senior Policy Analyst

Center for Data Innovation, ITIF


Adam Thierer

Senior Fellow, Technology & Innovation

R Street Institute


Jennifer Huddleston

Technology Policy Research Fellow

Cato Institute


Cyber & Privacy
Emerging Technology

The Federalist Society and Regulatory Transparency Project take no position on particular legal or public policy matters. All expressions of opinion are those of the speaker(s). To join the debate, please email us at [email protected].
