Section 230, Common Law, and Free Speech
Social media has become a prominent way for lawmakers, public agencies, experts, and governments to communicate with the public. Meanwhile, a once-obscure provision in federal communications law — Section 230 of the Communications Decency Act — has become a political football because it provides liability protections to internet-based companies like Facebook and Twitter. Our guests, Kristian Stout, Brent Skorup, and moderator Adam Thierer, are legal experts who have written about the history of media law and Section 230. They joined us for a moderated discussion featuring audience Q&A, as Stout and Skorup debated how lawmakers and courts should approach future Section 230 issues, political speech, and free speech online.
Transcript
Although this transcript is largely accurate, in some cases it could be incomplete or inaccurate due to inaudible passages or transcription errors.
[Music and Narration]
Jack Derwin: Hello, and welcome to this Regulatory Transparency Project virtual event. My name is Jack Derwin, and I’m Assistant Director of RTP at The Federalist Society. As always, please note that all expressions of opinion are those of the guest speakers joining us today. To learn more about our speakers and their work, you can visit RegProject.org to view their full bios. After opening remarks and discussion between our panelists, we’ll go to audience Q&A if time allows. So please enter any questions into the Q&A box at the bottom of your screen.
Today, we are pleased to host a conversation titled “Section 230, Common Law and Free Speech.” To discuss this topic, we have a great panel, featuring Brent Skorup, Kristian Stout, and Adam Thierer. Adam, who will be our moderator today, is a Senior Research Fellow at the Mercatus Center at George Mason University. He specializes in innovation, entrepreneurialism, internet, and free speech issues, with a particular focus on the public policy concerns surrounding emerging tech. With that, Adam, I’ll pass it over to you.
Adam Thierer: Well, thanks so much, Jack. I appreciate it. And on today’s webinar, we’re going to be discussing the ongoing debate over digital content moderation policies and Section 230, which, as almost everyone knows now, is the provision of the 1996 Telecom Act that basically says that digital media platforms and other online services are not legally responsible for the content hosted by their end-users.
While Section 230 and the case law surrounding it were once a fairly mundane matter, it has now become a highly charged political issue, with many policymakers and pundits on both the left and the right lining up to take a swing at the law in some fashion. Indeed, proposals to reform Section 230 have been multiplying rapidly in Congress over the last couple of sessions, as well as at the state level, where many bills have been advanced to regulate social media platforms. A few of those state laws are already being enjoined in the courts.
While I have actually lost count of all of the activity going on in Congress on this front, I know there have been at least a dozen proposals over the last session or two. And what is remarkable about these proposals is how many of them actually contradict each other, with some parties favoring Section 230 reform that would encourage more content moderation of some sort, while other parties want reform that would encourage less moderation of some sort.
The only thing seemingly unifying these reform proposals is a desire for some sort of expanded regulatory and/or legal oversight of digital media platforms and, potentially, a whole heck of a lot of expanded liability to go along with it.
So recently the International Center for Law and Economics released a major study setting forth another reform proposal related to Section 230. That study is entitled Who Moderates the Moderators? The Law and Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet. And it’s my pleasure to welcome one of the three coauthors of that study, Kristian Stout, to today’s webinar. Kristian serves as Director of Innovation Policy for ICLE. Welcome, Kristian. Also joining us for today’s –.
Kristian Stout: Hey, Adam. Thank you.
Adam Thierer: Yeah. Also joining us for today’s discussion is my colleague Brent Skorup. Brent is also a Senior Research Fellow here at the Mercatus Center with me. Brent has written extensively on these issues and, most notably, published an article in the Oklahoma Law Review in 2020 on Section 230 and publisher liability in American law, which provided a really wonderful historical look at these issues over the last couple of decades. Welcome, Brent.
Brent Skorup: Thanks for having me.
Adam Thierer: Let me begin by first asking Kristian to briefly outline some of the key takeaways from the big new ICLE paper, and then to tell our listeners how your proposal matches up with other key reform proposals currently being considered at either the federal or the state level. And then I’ll ask Brent for a little bit of follow-up. And I, too, will have some follow-up questions. Kristian, take it away.
Kristian Stout: Great. Thanks, Adam. Yeah, happy to speak with everyone. Thanks to FedSoc for putting this on. I think it’s an important topic. As Adam notes, there have been a large number of attempts to reform Section 230 over the last several years, more than I think I can count as well.
And what inspired us to start working on this paper was to look at where those efforts were trying to do something good and where they went off the rails. As Adam notes, I think there’s a lot of contradiction behind what those reform efforts are trying to do, in part because there hasn’t been a very good theory about how you want to approach Section 230-style intermediary liability online.
So our paper starts from the perspective of asking what we have learned over the 26 years since CDA 230 became law. If there are places where sub-optimal amounts of harmful, illegal, or tortious content could potentially be discouraged without over-deterring free expression, we want to find those places. So that’s what our paper is trying to do.
One thing I want to stress about our paper, which everyone should go and read because it’s quite a great paper, is that it’s only a working paper. We’re trying to start a conversation about how you actually reform Section 230 in a way that optimizes the legal regime so that you get less harmful content but still preserve free expression.
So when we get to our proposal, which I’ll go through in a second, we’re not saying we’ve 100 percent arrived at the answer that everyone needs to adopt. What we’re trying to do is walk through the law and economics analysis of how you think about the tradeoffs involved in content moderation. We carry that through the paper, and then we have a section at the end where we say, “Well, this is how we think this works out.”
But we require the input of a lot of people who have been thinking hard about this. And there’s a lot of people, I think, who have very good reasons for why they think Section 230 works great, and they don’t want to reform it. They’re afraid to lose those benefits. Our position is that there’s probably more room in the Section 230 regime, such that we can actually deter some of that illegal content.
So broadly, we think that Section 230 is a good thing, has been a great thing for helping develop internet content. I think—and I think my coauthors would agree with this—that, given what everybody knew in 1996, looking at how the case law was starting to develop, it was not completely unreasonable to try to create Section 230 the way it worked. So we are not looking, in our proposals, at any kind of a total repeal of Section 230.
What we’re looking at is finding an incremental way to examine the margins where bad activity occurs, and ways that courts can actually start to move the line of what qualifies for Section 230’s liability protection as new harms emerge, so that those harms can inform and update the way Section 230 works. And we want this to be progressive. We’d like this to be a common-law type of approach. And we think it will ultimately yield better-fitting incentives.
So, generally, what we’re saying is that Section 230’s intermediary-liability protections for illegal or tortious content from third parties should be conditioned on platforms taking reasonable steps to curb that conduct, subject to procedural constraints that make sure a tide of unmeritorious litigation doesn’t swamp all the benefits of Section 230.
So what we do in our paper is walk through, like I said, what the components of that law and economics cost-benefit analysis would look like. Generally speaking, that means looking at how you determine what the costs of litigation are, both meritorious and unmeritorious. Obviously, the costs of meritorious litigation are not something that should count against a reform, because those are reasonable litigation costs.
You want to look at what the benefits of more speech are over the common-law baseline. And you want to look at what the benefits of controlling more harmful behavior are relative to the current regime. One of the lacunae in this area, and we haven’t solved this, and it’s something I don’t know that we, as a community, can solve until we actually start experimenting with how to change Section 230, is how to understand fully what the cost of unmeritorious litigation would be if you allowed more litigation at the margins.
So to a certain extent, it’s hard to answer that question. And that’s why we think what you need is this progressive, very incremental set of changes that take place over time. That way, there’s something like an escape valve: if you start to see litigation costs swamping the benefits of expression, you can actually change course as things go forward.
So, generally speaking, at a high level, what this means is we think a platform should owe a reasonable duty of care in the administration of its services for non-communications-related torts. For communications-related torts, we think you should largely keep the regime as it is, but perhaps add in a layer that’s something a little bit closer to distributor liability, though not in the way it traditionally worked at common law.
We think that, for instance, you could use something more like a court order, a sort of no-fault injunction approach: if a court declares that something is defamatory, a platform would have an obligation to take it down at that point.
And the way this would work in the litigation process is that there would be a certified set of best practices that a platform has agreed binds it. The platform would be able to present this at the pleading stage, so that if someone brought a case alleging it breached its duty of care, it could present its certified compliance with those practices. And, essentially, it would get a safe harbor at that point.
The plaintiff, at that point in the litigation, would then face a heightened pleading standard, something like the one for fraud, where they would have to show that, no, in fact, the platform did not comply with the best practices it certified. And they would have to adduce more than a mere complaint; there would have to be some sort of evidence they can bring forward in order to get past the Section 230 safe harbor.
Those best practices, we envision emerging from a multistakeholder process, where companies and people from relevant interest groups can get together, look at the kinds of harms everyone’s concerned with, and figure out what the best practices are for platforms to deal with them, relative to the size and scale of the kind of platform at stake. The whole thing would be convened by a competent federal agency.
We don’t see the federal agency as having any kind of input into the actual standards. Its job would be more to facilitate the process. And critically, its job would also be to collect the results of these best practices from the litigation process in the district and state courts that see these cases, and to prepare reports to give back to the multistakeholder community, so that you can start to see an updating process for the best practices.
And then, finally, we think this whole process should sunset. At some point, industry and the multistakeholder groups would continue to develop these best practices on their own, the federal agency would no longer be responsible for facilitating the process, and courts would have a new reasonableness standard to administer at that point.
Adam Thierer: Before I turn to Brent, let me ask two quick follow-ups, Kristian, for you to elaborate on. First, who, exactly, establishes that multistakeholder process? And is there a model? I’ve done a lot of work on multistakeholderism in other contexts. NTIA is one model. The FTC has got another. Maybe there’s some other body you’re thinking about. That’s question one.
Question two is maybe you can say just a bit more about your duty-of-care requirement and how that compares with others that have been set forth. I know, I think Brookings had a paper, or someone did — Danielle Citron, —
Kristian Stout: Right.
Adam Thierer: — Ben Wittes. So maybe you can answer those two questions before I turn to Brent.
Kristian Stout: Yeah. I think — all right. So the first question is where the multistakeholder process would originate. That’s a very difficult question, and I don’t know that I have an easy answer for it. We looked at different agencies that have held convenings, and different groups like ICANN; NTIA and the FTC have done some of this. I don’t know that we have a firm answer on that right now. If people like our proposal and actually want to work with us on it, that’s the kind of thing I would look for a lot of feedback on.
Generally speaking, the model we were leaning toward was something almost like the way NIST is involved in standards development. That was one of the more appealing ones. But I don’t have a ton to say on which agency should convene it. The point of the federal agency is to make sure that there is some sort of authoritative convening of the process. Right? So locating that is something to settle once this proposal goes further down the road.
On the reasonableness standard, Citron and Wittes have good work on this. I don’t want to misrepresent their work, so anybody can correct me on this, please. The sense I got from the way they wrote it is that they would be more interested in a full, general, common-law reasonable duty of care from the get-go, with no limitation on it.
There is some overlap, I think, between their idea and ours, in that I think they would probably agree that what you’re really looking at is a duty of care in the design and administration of your services. It’s not, per se, that you want to hold platforms liable for the third-party speech of users. But if you’re designing services in a way that lets users more easily conduct harassment or revenge porn or a lot of the different harms that we see happening online, the platforms have a role to play in removing that material.
I think we’re more hesitant to just throw this completely into the courts upfront, because I am very sensitive to the concern that the litigation costs would be really quite large immediately, at least while that system sorts itself out. Over the course of 20 years or so, maybe that would resolve down to something more reasonable. But in the near term, it would not be a great way to go about it.
On the Brookings paper, if I remember their proposal correctly, they advocated a very similar, but almost upside-down, version of what we’re saying. I believe they wanted a federal agency to convene something like what the UK models contemplate: actually create an agency that develops these standards and then promulgates them to industry. I think there are legal vulnerabilities in that approach. And I also don’t like centralized management of the way that would work.
Adam Thierer: Gotcha. Okay. So, Brent, let me turn to you and ask you to comment, both on what Kristian had to say about the study, the study itself, and then also bring in some of your own work or compare some of what you’re hearing here to what’s going on currently in Congress and the states.
Brent Skorup: Yeah. If I may, I’ll talk a little bit about my own paper, and there is some overlap with Kristian’s. So Jennifer Huddleston, my former colleague, and I wrote this paper a couple years ago. And it was published in the Oklahoma Law Review in 2020, right when all of the Section 230 controversies hit, which was very fortuitous timing for us. In the paper, we document the long history of publisher liability in American law and, frankly, the erosion of publisher liability.
With publications, with newspapers and books and so forth, in the 19th century you had a regime of strict liability for [inaudible 00:17:22]. Typically, if we’re talking about publications, and even what Section 230 was originally about, it was about defamation online or defamation in print. And courts, over a century, have essentially gotten rid of publisher liability and approached what Kristian mentioned, this idea of conduit liability or common-carrier liability, which is essentially no liability at all, with some narrow exceptions.
And Section 230 is, of course, a very broad liability protection. And it resembles, we say, conduit liability, and actually resembles where courts were going with publisher liability. Starting in the ’70s — well, I mentioned that in the 19th century, you had strict liability for published defamation. In 1933, you had the Florida Supreme Court’s Layne case, which created what’s called the wire service defense: essentially, if you’re taking information from a wire service and republishing it, you’re not liable for that republication.
And this evolved over time. You had broadcasters get covered by this wire service defense. You had cable companies. You had Newsweek. I mean, it started to expand into print and even into what we would call curated or edited material. There were some cases in the ’90s involving broadcasters — perhaps the clearest was a federal case in the early ’90s.
Some apple growers brought a defamation case against, I believe, 60 Minutes and the local CBS station for alleged defamation of apple growers using pesticides. And the court in that case, followed by other district courts in these TV cases, said, “No. These are conduits. We can’t expect TV stations to go through all their programming, their tens of thousands of hours of programming, and make on-the-spot decisions about what’s defamatory and what’s libelous, or be subject to multimillion-dollar lawsuits.”
And your ears should perk up. I mean, this is exactly the justification for Section 230: we can’t have online companies staffing up with warehouses full of lawyers making on-the-spot decisions about what is illegal or defamatory.
And so we trace this history. And, frankly, this is always wonderful when it happens: this is not the paper we intended to write. It was only by looking at the case law that we uncovered this expanding wire service defense and conduit liability that resembles, in a lot of ways, Section 230. And our conclusion is that Section 230 resembles what was happening in the common law; it resembles the protections that, say, TV broadcasters have today, and even some print media.
And it also came at a fortuitous time, when online companies were starting to become commercial and develop. You see this in history: liability protection for infant industries. And Section 230 is liability protection for an infant industry. And it’s an open question. The internet is not an infant industry anymore. Is it time to reevaluate? But I think, at the time, it was a remarkable law. It has done a lot of good for free speech and technology in the US.
We conclude our paper by saying, as Kristian said, there are these harms online, which are becoming more and more common. The internet, again, is not an infant industry anymore. Facebook, Google, Amazon, and so forth are not Prodigy. They’re not a bulletin board operator in the 1990s. So what can be done? Where can public policy do some good in diminishing some of the harms? Jennifer and I point to three conditions you would probably want before carving out a type of speech from Section 230’s liability protection.
These would be types of speech where there is, essentially, unanimity that it’s low-value speech, and where the speech can be easily identified by software or by non-experts. The reality is these companies have hired a lot of non-experts all across the world to determine what’s permissible and what’s not, so any type of speech that requires a difficult judgment fails that condition. Defamation is a good example: you can’t decide what is defamation with a non-expert or with software. So, again: unanimity that it’s low value, and that it can be identified by non-experts.
And the third condition is that devoted efforts to remove the speech would not produce a lot of collateral censorship. And, frankly, there’s not a lot of speech or content that would satisfy those conditions. One we point out, and it comes up frequently, is revenge porn. It’s outlawed, or there’s some liability for it, in almost every state in the US at this point. It’s low-value speech. It can be identified by non-experts. And dedicated removal of revenge porn is not going to produce a lot of collateral censorship of high-value speech.
Another might be illegal rentals online. Let’s say Airbnb is illegal in a jurisdiction. It might be a dumb law, but that is a type of content that would satisfy the conditions: it’s low-value speech, you don’t have a lot of collateral censorship, and it’s easily identified by non-experts. And we hope lawmakers will keep those principles in mind.
Our paper — I think both sides probably see something they like and don’t like about it. I think if you’re a critic of Section 230, you might say, “Well, we don’t need Section 230 at all. Courts would come to a common law solution in time.” And that’s true. But that “in time” — that could be decades. It could be centuries. Common law is a slow process.
But that’s the paper in a nutshell. I guess, for Kristian — I really enjoyed the paper, and I’m really glad to see others working alongside us somewhere in the middle. There are some who believe Section 230 is this kind of word of God, this perfect law given to man — that’s a slight exaggeration — and there are others who believe it causes all the harms online — again, that’s a slight exaggeration. But there’s a lot of us in the middle, who see it’s not perfect, but it does do a lot of good.
You know, I’m curious about your perception of other countries. Other countries don’t have Section 230, of course. Would you say they have fewer online harms? Do they have less speech? We have a natural experiment, if you will: countries that don’t have Section 230. Do you have a perception of that? What would your paper say about other countries without Section 230?
Kristian Stout: Yeah. That’s a great question. And I think it’s worth probably doing some sort of comparative study to look at that. Off the top of my head, the difficulty is going to be the legal traditions and the social environments and the government policies. Nobody else has a First Amendment; we’re the only ones that have one. So off the bat, the baseline for what can and can’t be censored is going to be very different. For instance, you can’t sell Nazi memorabilia in Germany or France. The environment that’s operating in is a completely different kind of environment than what we’re facing.
So I don’t actually have an answer off the top of my head. I think it’s a good thing to think about and probably worth doing a follow-up comparative examination to see if there are some lessons we can learn there. Most likely, you would need to limit your examination to common law countries because I think — I agree that the common law likely would have arrived at something similar to Section 230. And it might have taken decades or a century.
But what we really want to learn from is a common law approach. I want to stress that we don’t want to advert to, say, a civil law country that has strong tendencies towards centralizing its public policy, even though, arguably, we do that a lot in the United States anyway. We want to look at how you can have evolutionary norms for developing these things.
Adam Thierer: So let me follow up with both of you. And I’ll — I’m actually going to take off my moderator’s cap here a bit and kind of push back a little bit because Brent mentioned there’s some folks out there who think that 230 is the greatest law. Well, I might be one of them. I wrote a piece called “The Greatest Internet Law,” talking about how Section 230 was one of the key pillars of the sort of permissionless innovation environment that differentiated the United States from Europe and other countries and allowed our companies and our innovators to really flourish in this country.
But I want to — I want to ask a question far more broadly. Kristian, your piece is about the law and economics of liability in Section 230. And that is a topic that’s always been of terrific interest to me. I come from the perspective of someone who believes, as the great Aaron Wildavsky once argued, that economics as applied to liability law needs a major revision because our tort system in the United States is fundamentally broken.
And when I search The Federalist Society website for tort reform, I find a couple of hundred results talking about the need for reform on various grounds. Our tort system, of course, number one, lacks any sort of loser-pays rule to deter frivolous lawsuits. And number two, since the New Deal era, it’s been very much focused on redistributing incomes by searching out deep-pocketed parties who will pay even when they are not primarily the cause of the problem.
And so my own defense of Section 230 has admittedly been a little bit of a reductionist or consequentialist approach, saying, look, part of the goal here is to counter a badly broken tort system and deter frivolous activity to allow innovation and speech to flourish. And it’s not uncommon for Congress to sometimes tweak liability standards.
I mean, we had the National Childhood Vaccine Injury Act of 1986 to deal with the fact that tort lawyers had really discouraged the development of childhood vaccines in this country. That tweak by Congress turned the situation around by granting some liability exemptions to childhood vaccine makers, with some new evidentiary standards for risk analysis to be introduced into the courts.
Number two, we had the Protection of Lawful Commerce in Arms Act to deal with firearm lawsuits in, I believe, 2005. So there are examples like this. Right? And so, turning back to your paper, you say—let me see—on page 37, “A few expensive lawsuits, in exchange for optimal deterrence could easily be a net social benefit.” And my question to you would be: why should we believe that there would only be a few lawsuits in our overly litigious system, which encourages so many frivolous claims to be filed? Because once you have a duty-of-care sort of standard on the books, I just think that people will start filing suits left and right. And then, number two, what counts as optimal deterrence? You guys never define it in the paper. I’ll turn it back to you.
Kristian Stout: Fair questions. Yeah, so I’m completely sympathetic to everything you said about the brokenness of our tort system, and I am very sensitive to the way of defending Section 230 on those consequentialist grounds. It definitely has a sympathetic note for me.
That said, in principle, if we think that there are harms occurring online that could be deterred, that there are real people whose lives are suffering, and there’s some way to tweak the way the liability shield currently works, in principle, I think that we have an obligation to consider how we can alter the liability regime to try to preserve as much of Section 230 as we can, but figure out ways to actually help people who are being harmed under the current regime.
On the specific question of how we figure out the socially optimal balance between harm and expression — it’s a great question. I don’t know—and this kind of goes to what I was saying in my initial remarks—I don’t know that we can actually create a formula right now, where we do something like a Hand formula and work it out. You know, the Hand formula is, obviously, not exact. But there’s no formula you’re going to be able to come up with right now to figure out what is socially optimal. That’s almost an emergent fact that has to occur from allowing courts to examine some of these things.
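[For reference: the “Hand formula” is Judge Learned Hand’s negligence test from United States v. Carroll Towing Co. (2d Cir. 1947), under which a party is negligent if the burden of taking precautions (B) is less than the probability of harm (P) multiplied by the gravity of the resulting loss (L), that is, if B < P × L. Applying it to content moderation, weighing a platform’s moderation burden against the expected harm of the content, is an illustrative analogy here, not a formula proposed in the paper.]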
So putting all of this together, this is why we got to the point of saying that what you need are marginal, very incremental changes to the regime, that take place slowly over a long period of time, that are guided by courts, and that have more procedural protections around these kinds of suits to make sure this doesn’t become just a honeypot for plaintiffs’ attorneys to go in and try every theory they possibly could.
There probably will be more filings under what we’re envisioning. But if we’re correct in how we’re thinking about the procedural reforms, and if other people’s input helps us refine the idea, you’ll still get a lot of automatic dismissals at the pleading stage for plaintiffs who cannot overcome that initial heightened pleading standard over the certified standards.
Adam Thierer: I see. Brent, did you want to follow up at all on that? Unmute yourself. There you go.
Brent Skorup: Yeah. Just a thought. As I was reading Kristian’s and his coauthors’ paper — I enjoyed the paper; there’s a lot of great work in there. For me, I can’t separate—and courts haven’t separated, if you look at the history of conduit liability and republication—the practical issues surrounding republication, moderation, and censorship from the free speech implications. And so, applying a law and economics framework — I mean, I love law and economics. My career has been in law and economics. But these issues of media policy are so tied up with free speech that I’m not convinced we can apply an economics framework to issues of free speech.
If we could separate the practical issues of content moderation from the speech issues, that would be one thing. I wish we could; I just don’t see a way around it. And I just want to add—and this might be funny coming from someone who’s written a lot about Section 230, and it’s an aside from the paper—this debate in Congress has become a political hot-button issue for the left and the right, particularly for conservatives in Congress. And I regard it as a tragedy, really, that we have hearing after hearing dragging tech companies in to defend themselves for whatever the controversy of the day was.
I really don’t see much wiggle room when it comes to Section 230. I think there can be some tweaks around the edges, but this has just consumed our political class for so long. And it really resembles the media fights of, say, the 1960s and ’70s. The Fairness Doctrine, as you know, is well-publicized. But you read communications scholars at the time talking about a day in the life of the FCC, and I saw one estimate that two-thirds of FCC time was spent on broadcast issues—that is, media issues.
And I worry we’re headed that way with Section 230, because it is a media issue. It is a lawmaker-access-to-constituents issue. And that’s why it’s become so all-consuming. But we have our political class and the leaders of our tech companies consumed by it when they could be doing so much better. Two pet issues of mine in media policy that I think are much more fruitful areas for lawmakers and others working in this space: one is the public forum doctrine online.
Whether we like it or not, courts are holding, in case after case, that lawmakers’ social media accounts are public forums. The question becomes: what restrictions, if any, are there on private companies unilaterally removing public forums from the internet? I’ll point to one case in Vancouver, Washington State, last year. YouTube removed a public hearing where some people were discussing — I think Covid was part of the controversy. We don’t know, because YouTube pulled the video of this public school district’s public hearing for violating its medical misinformation policy.
What are the free speech and First Amendment issues of private actors unilaterally removing public forums online? I’m not sure, but I think that’s a much more fruitful avenue for free speech advocates, and, obviously, it ties into conservative advocacy. And actually, this was an element of Donald Trump’s lawsuit against Twitter; there’s a brief argument about this in it. I’ll be interested to see how the court in California approaches it, because this is a live issue.
The other is state action online. Again, there’s some relevant litigation here. Alex Berenson, a former New York Times journalist, has litigation against Twitter. Much of the lawsuit, I don’t think, comes to much. But there is this argument about state action and the fact that the Biden administration, last summer, said their staff was working with tech companies to remove medical misinformation. Alex Berenson believes he was part of that purge of social media accounts, and there could be a state action claim. But it’s not just his case.
There were reports in Politico that anti-vaccine protests were shut down at the request of state officials. And there are others like this. These are state action arguments, and, I think, a much more fruitful area that is not Section 230-related. Again, as someone who is in right-of-center circles, I find it really confounding that conservative advocates and libertarians are wrapped around the axle on this Section 230 issue when there are speech harms online. But, anyway, Adam, back to you.
Adam Thierer: Yeah. Kristian, I’ll welcome some follow-up comments on that. But let me just say to our audience listening: feel free to pose any questions you might have for Kristian or Brent in the chat box, and I’ll do my best to get to them. We’ve got a little bit more time here. But, Kristian, I’ll invite you to comment on what Brent had to say. I will say one thing in turning it back to you, which is that it seems to me Brent has teed up a couple more issues here to add to the ones that you’ve already raised and that many, many other people have raised, in terms of reasons to reform 230.
This gets to what some scholars are referring to as the Swiss cheese problem of 230 reform. It seems like, at some point, if everybody got their way, then this law would become, essentially, meaningless.
Kristian Stout: Right.
Adam Thierer: And so I guess one skeptical question back to you is: if we open this thing up at all, to do anything, it seems like once the floodgates are open, a lot of stuff gets in. And maybe this is just a practical political question.
Kristian Stout: Right.
Adam Thierer: You’re talking about what I would regard as more like a surgical strike approach to dealing with the most heinous crimes or harms of a non-speech, non-communication-related variety, correct?
Kristian Stout: Yes. That’s correct.
Adam Thierer: And that would be your preferred approach.
Kristian Stout: Right.
Adam Thierer: Is that doable?
Kristian Stout: Well, so — okay. Lots of material from both of you. I’ll try to tie it all together in a big bow. Generally, I agree with a lot of what you’re saying, but I’ll distinguish some of how I’m thinking about it, and I think how my coauthors think about these problems. I agree it’s difficult to do an economic analysis when speech issues are involved. But that’s not to say it’s impossible. You mentioned the Fairness Doctrine, and the Supreme Court essentially rooted its view of the Fairness Doctrine in a cost-benefit analysis about the administration of scarce spectrum. So that’s in the background of a lot of communications policy. It could become overwhelming, though, and that’s exactly why I think lawmakers need to avoid creating—even if we assume it would pass First Amendment scrutiny—some sort of speech agency that’s responsible for administering these speech codes. I just don’t know that it’s possible.
And then to Adam’s point on the Swiss cheese issue, and whether our surgical strike is too surgical or not broad enough — maybe that needs to change. One of the other things that motivates the way we’re looking at Section 230 reform is that we’ve been taking an issue-by-issue approach: we had FOSTA-SESTA, and now everybody keeps looking for other ways to do these carve-outs in Section 230. I think, ultimately, that doesn’t do what everybody wants it to do. It creates far too narrow exceptions in some ways, but a broad hammer in other situations.
I think what you need is publicly agreed-upon standards that everybody is aware are operating out there, and that these companies are binding themselves to in court proceedings, and then to allow the courts, in adjudication, to figure out where this line needs to sit, given the unique features of how the internet works relative to these other media.
So I think Brent’s right that what the common law would have arrived at is something like conduit liability. I think that there is probably more to it than that, though, because the internet is different than a broadcast network. And it is different than traditional publishers. Social norms are weaker online. Anonymity or pseudonymity is sort of how everything runs on the internet. That makes self-help much more difficult.
The scale of platforms makes the ability to amplify harmful content much greater. So I think courts would take these factors into consideration. Now, the cost-benefit analysis they do in weighing the value of speech versus the harms it causes — I don’t know that it’s going to be something, again, that has mathematical precision. But it’s going to be something that can be iterated over court decisions. So I am optimistic that you can still see a law-and-economics-informed adjudication process emerge there.
To Brent’s other point, I actually agree. I think the Section 230 conversation is used in the same way people talk about antitrust as a way to chasten big tech. A lot of the arguments you see emerging from the populists who want to go after big tech using antitrust really make no sense given the history of how competition law has worked, in the same way that trying to use the threat of reforming Section 230 to force conservative speech to stay online, or to take down hate speech, doesn’t actually make sense in a First Amendment context.
So a lot of the attention being spent in both of those areas is a lot of heat, a lot of fundraising, but it’s not actually trying to do something that, on net, will yield better social benefits. Looking at where the state action doctrine starts to actually have teeth — as Brent noted, all the cases on the state action issue so far have basically declined to find federal involvement to the extent that it becomes state action.
But there can be a line, for sure, where the government does start to involve itself with these companies in a way where it does look like there’s some direction there, and you can see that it’s state action. The same thing with the public forum issue. When you don’t have local news the way you used to, when you don’t have as much newspaper consumption, and you have more people getting their news, especially local news, from Reddit or Twitter or YouTube, at what point does the public forum doctrine start to say that you can’t actually just take these things down because of your preferred policies in a particular content area?
I think those are real. That is where a lot of this action is. Unfortunately, it’s harder to message politically around those things.
Adam Thierer: We’ve got a number of questions coming in. I’m doing my best to sort through them. Let me try to combine a few of them for both of you. One comes from my old friend Solveig Singleton. And she has a couple of questions, one of which is, “How does this issue relate to ongoing disputes about copyright enforcement?” And just to build on her question, one could ask, “Are we moving towards something more akin to a traditional notice and takedown regime like we’ve had since copyright was originally carved out from 230?”
And then another question here is from Larry Joseph. He says, “Section 230(c)(2)(A) covers material that the provider considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable. It is primarily the last one, ‘otherwise objectionable,’ that tends to get applied selectively on political grounds. What’s the justification for the vague ‘otherwise objectionable’ criterion?”
So, two different questions there. I’ll let you guys decide if you want to say anything on either of those questions from our audience.
Kristian Stout: You want to go first, Brent?
Brent Skorup: Yeah. Yeah. On the copyright question — copyright is an exception to Section 230. Right? This has been carved out by Congress. So there can be liability for copyright, although it’s not really a due care standard; it’s notice and takedown. If you’re an internet intermediary of some kind, once you’ve received notice of a copyright violation, there’s a duty to remove it. And by the way, I don’t think I mentioned this, but when I was talking about Jennifer’s and my proposal for how you could modify Section 230 slightly in the future, it would be a notice and takedown standard, not a liability or due care standard.
So copyright is not a part of Section 230, really. And I think if there are any conservatives who believe removing Section 230 would be some boon to conservative speech online, I’ll point you to one example of why that might not be the case. Again, we have this natural experiment of no Section 230 protection for copyright issues. And in 2020, in the runup to the election, there was a Twitter user, Carpe Donktum, who created a satirical video of CNN. It’s all been removed by now. You might be able to find it.
But it was a satirical video of CNN. It got retweeted by President Trump. Carpe Donktum calls himself, I think, a pro-Trump memesmith. He was very active in conservative politics. Twitter kicked him off, in part because of this alleged copyright violation for repurposing the CNN clip. And this was bogus to anyone who saw it at the time. And we know it was bogus because there was later litigation in New York state court: the owners of the copyright in the video sued Carpe Donktum, and the judge threw it out on anti-SLAPP grounds, saying, “This is clearly satire.”
So this idea I think some conservatives hold, that without Section 230 you would see all this conservative — all this dissident speech flourish — it’s just not the case. I think Carpe Donktum is a great example of an area of law exempt from Section 230—copyright—being abused by political actors and getting people removed from social media. I hope, if nothing else, people come away with that. This idea that removing Section 230 would be a boon to conservative speech — look at Europe. They don’t have Section 230. Is conservative speech in a better or worse position there? Look at copyright and the abuse of copyright for political purposes. So I’ll leave it at that.
Adam Thierer: Well, it’s a good point. And one thing we need to differentiate is that there’s Section 230, and then there’s the First Amendment, too, right? I mean, —
Kristian Stout: Right.
Adam Thierer: — there’s a lot of other factors that are in play here. But, anyway, Kristian, a quick comment from you, and then I have a —
Kristian Stout: Yeah. Yeah, quickly.
Adam Thierer: — couple more questions I want to get to from the audience.
Kristian Stout: Sure. On the notice and takedown point — I think Section 512, which is the copyright law relevant to that question, certainly needs some help. There have been ways it has not worked as expected, I think, over the last 20 years. I would not immediately replicate a notice and takedown system for Section 230. I just don’t know that that is the right way to go about it. Maybe for something like what I mentioned earlier, where you have a no-fault injunction declaring that certain content is definitely defamatory or otherwise tortious, there’s an obligation to take it down on notice.
On the conservative speech point, to what Brent’s saying, and to Brent’s earlier comments on the state action and public forum issues — I think it’s 100 percent correct that if you repealed Section 230, you would see less conservative speech online. It’s not an unexpected observation that the major media institutions were all of a sort of center-left or left-wing bent before the internet exploded. Fox News came out in the ’90s; sure, that’s a counterpoint. But overwhelmingly, the consensus in newspapers and traditional media tended to have a liberal bent. The way the internet has run has given room for people who do not subscribe to a liberal orthodoxy to actually collect and discuss things.
Now, there are times when I am sympathetic—where some people get taken down and you think, clearly that person was not someone you want to take down. But you need to be able to step back and look at what the costs and benefits are of removing the law that makes it possible for companies not to face automatic liability whenever someone says something mildly offensive. And like it or not, there are plenty of people on the right who say things that are mildly offensive or worse, and people would be happy to sue platforms to get that speech taken down, absent Section 230.
Adam Thierer: I’ve got a couple more questions here. Brent, go ahead quickly.
Brent Skorup: Just quickly on Section 512. I’ve posed this question before. There is an interesting provision in Section 512. And I’m not a copyright lawyer, but I think it would be fruitful for copyright lawyers to look into it. There’s a safe harbor for copyright infringement for internet infrastructure companies—and I’m not talking about social media companies—but only if “transmission [or] storage is carried out through an automatic technical process without selection of the material.”
It seems like some companies are moving away from that automatic process. There’s no litigation that I know of, but it’s an interesting provision of law that I think will probably have some relevance in the future.
Adam Thierer: Let me try to throw out just a couple more questions. Trudy has an interesting question here. It sounds like something you might see on a law school exam. “Is there currently liability for publisher distribution of an outdated negative article using search engine optimization methods of an individual private citizen’s name for the sole purpose of ranking higher on Google search results?” That’s –.
Kristian Stout: That does sound like a law school exam.
Adam Thierer: Yeah. There’s a lot tied up into that one. I think the answer is no.
Kristian Stout: Yeah. I would think.
[crosstalk 00:53:04]
Adam Thierer: There would be no liability there.
Kristian Stout: Yeah. Squinting at it, I would guess that was a “no” on that.
Adam Thierer: Yeah. Right. Thank you for that question, though, Trudy. That was a fun one. There’s an anonymous question here, which you guys may have seen, regarding state actors: “How would you distinguish the standard political ‘working the refs’ that regularly occurs in many industries from crossing the line to state action? What consequences would restrictions applied to online actors have for the real world and its private actors that may not want to host a political figure’s event in their hotel ballroom, for example?”
Brent, you want to go first on that? Any thoughts?
Brent Skorup: Yeah. I’ve given a little bit of thought to this. The Supreme Court standard is whether there is a sufficiently close nexus between the state and the private action. That’s obviously pretty vague. Some thoughts: I don’t think the kind of routine bullying of companies that you see in, say, hearings, or even from the White House, would qualify. What I mentioned earlier was a little different.
This was a White House spokeswoman saying, “Our staff is working with tech companies to remove anti-vaccine material.” And they also identified 12 people—12 individuals. That seems like state action. I mean, that seems clearly — and there was another example last fall. Fourteen state attorneys general had questions for Facebook. And amongst their questions was — let me see if I can get the line — “YouTube recently committed to banning several anti-vaccine activists, including some names. Will Facebook make a similar commitment?”
That’s state action. I mean, you have fourteen state attorneys general naming people and viewpoints and saying, “Will you commit to removing this?” So we don’t always have a clean test — we know it when we see it. It’s not the routine bullying we see in hearings, and it’s not mere questions. But particularly when you have law enforcement agencies, the executive branch or attorneys general, doing it, it’s something different than jawboning from Congress.
Adam Thierer: Yeah. Yeah. I think that’s a fair point. Guys, we’ve only got a couple minutes left here. I want to ask you just a final closing thought question, which is — I think a lot of our listeners would be interested in knowing what you guys think is going to happen this year. Maybe just sort of gaze into that crystal ball just a little bit.
State versus federal level — we’ve seen at least two states, Texas and Florida, right? Their efforts have been enjoined. A lot more is percolating out there. Are we going to see anything stick in Congress, or anything going through at the state level pass muster in the courts, in the next two to three years? Brent, why don’t we start with you, and then we’ll give Kristian the final word.
Brent Skorup: I don’t think so. And this gets back to — it can be done, but it’s hard to improve on the default rules set by Section 230. And courts just have a very high standard when it comes to any federal or state laws that touch on speech. The laws I’ve seen are pretty hastily drafted. I mean, they’re political documents, and it’s just not very promising.
But, yeah, I imagine there will be action. I mentioned what I’m worried about: in the past, when the FCC was very involved in broadcast media and regulated it pretty heavily, this dominated regulators’ and lawmakers’ time to an extent that’s really hard to understand today. And we’re seeing some of that now. I think it’s a shame. I think there are more fruitful avenues if you care about free speech and dissident speech of the right or left. But I haven’t seen much progress to date.
Adam Thierer: Kristian, final word to you.
Kristian Stout: Sure. I generally agree with a lot of what Brent just said. I think there is more of a risk, however, insofar as we are increasingly in a populist moment. And I don’t think it takes much for people of left-wing and right-wing populist sentiments to figure out the Venn diagram of what they think they can get to go after big tech. The problem with a lot of the bills that I see and the efforts being pushed is that, largely, they are political documents. They are meant for fundraising purposes and for signaling to your base.
So to the extent that they do those sorts of things, I don’t think you’re going to see these move. But where the two sides can find bad ideas to agree on together, I wouldn’t foreclose the idea that they could pass a law. That’s why I think it’s important, if you really are interested in figuring out how to move things forward, to look at how to fix the actual quantifiable harms in the current regime, look at things like Brent’s talking about with the state action and public forum issues, and really look at what, demonstrably, you can do that isn’t solely for the purposes of fundraising. So…
Adam Thierer: Makes sense. Well, that’s it for our discussion. I’m going to kick it back to Jack. I’ll just say that you can find Kristian’s new study at laweconcenter.org. And you can find Brent’s work and my work on these issues at mercatus.org. And you can find all of us on Twitter, under our own names there. You can search for us for as long as we’re there and not de-platformed. So we’re there for a bit longer. So, Jack, back to you. And thanks for hosting us.
Jack Derwin: Thank you, Adam. Thanks for moderating today. And a big thank you to Brent and Kristian for joining us as well. And thank you to our audience for tuning in to today’s virtual event. You can check out our website at regproject.org, or throw us a follow on any of the major social media platforms at FedSocRTP to stay up-to-date. And with that, we are adjourned.
[Music]
The Federalist Society and Regulatory Transparency Project take no position on particular legal or public policy matters. All expressions of opinion are those of the speaker(s). To join the debate, please email us at [email protected].