Courthouse Steps Oral Argument: Murthy v. Missouri

March 18, 2024 at 4:00 PM ET

Murthy v. Missouri, originally filed as Missouri v. Biden, concerns whether federal government officials violated the First Amendment by “coercing” or “significantly encouraging” social media companies to remove or demote particular content from their platforms.

Multiple individuals, advocacy groups, academics, and several states sued various federal officials and agencies for allegedly censoring conservative-leaning speech about the 2020 election, COVID policies, and election integrity. The plaintiffs argued that the officials and agencies used “jawboning” tactics to force social media companies to suppress content in a manner that violated the plaintiffs’ freedom of speech. The U.S. District Court for the Western District of Louisiana issued a preliminary injunction in the case, which the Fifth Circuit vacated in part while nonetheless holding that there had been some violations of the plaintiffs’ First Amendment rights. The U.S. Supreme Court then granted an emergency stay, and oral argument was set for March 18, 2024.

Transcript

Although this transcript is largely accurate, in some cases it could be incomplete or inaccurate due to inaudible passages or transcription errors.

[Music]

 

Chayila Kleist:  Hello and welcome to this Regulatory Transparency Project webinar call. My name is Chayila Kleist, and I’m an assistant director with the Regulatory Transparency Project here at The Federalist Society. Today, March 18, 2024, we’re delighted to host a panel discussion Courthouse Steps Post Oral Argument on Murthy v. Missouri, which was argued earlier today before the Supreme Court. Joining us today is a stellar panel of subject matter experts who bring a range of views to this discussion. As always, please note that all expressions of opinion are those of the experts on today’s call as The Federalist Society takes no position on particular legal and public policy issues. 

 

Now, in the interest of time we’ll keep our introductions of our guests today brief, but if you’d like to know more about any of our speakers, you can access their impressive full bios at regproject.org. I will introduce our moderator, and then I’ll leave it to him to introduce our other guests. Today we are pleased to have with us Stewart Baker, who is a partner at the law firm of Steptoe & Johnson in Washington, D.C. From 2005 to 2009 he was the First Assistant Secretary for Policy at the Department of Homeland Security. 

 

His law practice covers cybersecurity, data protection, homeland security, and travel and foreign investment regulation. He’s also been awarded a patent. Mr. Baker’s been General Counsel of the National Security Agency and General Counsel of the commission that investigated WMD intelligence failures prior to the Iraq war. He’s the author of Skating on Stilts, a book on terrorism, cybersecurity, and other technology issues, and he also hosts the weekly “Cyberlaw Podcast.”

 

And I will leave it there. One last note and then I’ll get off your screens. Throughout the panel, if you have any questions, please submit them via the question and answer feature likely found at the bottom of your Zoom screen so they’ll be accessible when we get to that portion of today’s webinar. With that, thank you all for joining us today. Mr. Baker, the floor is yours. 

 

Stewart A. Baker:  Thank you, Chayila. It’s a pleasure to be here, and it’s great to have the audience here to talk about this potentially very important case. And we’ll be exploring how important it will or will not be, and we’ve got an excellent and really well-qualified couple of people to talk about it. Adam Candeub teaches law at Michigan State. He practiced law in Washington. He’s been in government where he’s been an advisor at the FCC, a deputy assistant secretary at NTIA in the Commerce Department, and Acting Assistant Secretary in that position as well and then finally was Deputy Associate Attorney General. So Adam, it’s a pleasure to have you here. 

 

Prof. Adam Candeub:  A real pleasure to be here. 

 

Stewart A. Baker:  And Matthew Seligman who practices law at Stris & Maher and is also recently named as a fellow at the Constitutional Law Center at Stanford. He’s been heavily involved in election disputes and has testified to Congress on the dangers of election misinformation on social media on multiple occasions. And Matthew, it’s great to have you. 

 

Dr. Matthew Seligman:  Thanks for having me. 

 

Stewart A. Baker:  So let’s jump right in. This is a case that got a lot of coverage but not very detailed coverage, and the shape and the size of the case has been changing as it has moved up from the district court to the Court of Appeals to the Supreme Court. So I’m going to ask Adam to give us a summary of what the case is actually about and maybe why the Court took the case. 

 

Prof. Adam Candeub:  Sure. So the case really began with some rather eminent epidemiologists and scientists, people like Stanford’s Jay Bhattacharya and Martin Kulldorff, formerly of Harvard, who were making very well-founded, well-researched statements about the government COVID response, pointing out things such as, well, the evidence about vaccines affecting transmission is not so good, or, it looks like there’s in fact a fair amount of evidence for the lab-based origin story, probably correct. And they found — not only eminent scientists but also individuals who had suffered from adverse vaccine reactions. They would be in support groups, and they wanted to talk about it and exchange their experiences. And all of these people were deplatformed from the major internet social media platforms. 

 

And it turned out after a lawsuit by the NCLA, the New Civil Liberties Alliance, and involvement by the various state AGs that this didn’t just happen. In fact, certain White House officials, sometimes agencies such as CISA working with nonprofits and academic institutions like Stanford, were key in identifying those individuals who should be deplatformed and in feeding information to White House and government officials, who then contacted the platforms and, whether you characterize it as bully pulpit or coercive exchanges, strongly encouraged these platforms to deplatform people like Jay Bhattacharya and Martin Kulldorff. 

 

It’s worth pointing out that they were deplatformed for saying true things, things that turned out to be correct. It turns out that the vaccines weren’t so great against transmission of COVID; they didn’t work at all against transmission. It turns out most people really believe that COVID came from a Chinese government laboratory. But these things were in fact stifled. 

 

So a lawsuit was brought in the Western District of Louisiana, and after an exhaustive discovery process in which all of these exchanges between platforms and the government were disclosed, the district court issued a rather extensive injunction, saying, look, there has to be sort of a cordon sanitaire between government and the platforms. Otherwise, we have a violation of the First Amendment. The government is just too powerful. As Ronald Reagan said, the most frightening words that you could possibly hear are I’m from the government; I’m here to help — and these exchanges are inherently coercive. And certainly the facts revealed them as such. 

 

It was appealed to the Fifth Circuit. They upheld the findings of the district court but issued a somewhat more restrictive, less expansive injunction. It was a 2-1 decision, and that was appealed to the Supreme Court. And that’s what we had arguments on today. So is that what you wanted, Stew? 

 

Stewart A. Baker:  Yeah. That’s very helpful. I’ll come back to you and ask what the legal principle is that’s at stake here because that was a very fact-intensive discussion, and I suspect —

 

Prof. Adam Candeub:  Oh, I’m sorry. Excuse me. You’re right. The legal issue here is of course whether or not the interactions between the platforms and government actors rendered the platforms state actors. If they’re state actors, then of course the First Amendment affects and limits and prohibits what they can do. And that was really the debate, whether or not the extensive relationships, the extensive interactions between the government and the platforms, made the platforms into state actors, in which case they would be limited by the First Amendment. And of course you could sue for a deprivation of your constitutional rights, even though the government wasn’t doing it but rather its proxies, the platforms, were. 

 

Stewart A. Baker:  Okay. So Matthew, you have an opportunity to argue both the facts and the law on this one because I think, even in the Supreme Court, they’re still arguing the facts. 

 

Dr. Matthew Seligman:  Yes. I’ll take the opportunity to argue both. So much of what this case is about and what we saw at the oral argument this morning is a dispute about what actually is alleged to have happened. So at various points, there have been disputes both among the litigants in this case and of course in a robust political conversation, including hearings in Congress where I testified, about what exactly the interactions between government officials and social media platforms actually amounted to. 

 

And so something that we’ll discuss is were there threats of adverse government action or not — things like antitrust actions or Section 230 reform. And were they connected to any moderation decisions? Because as we heard, the real legal issue in this case is whether these content moderation decisions, which were made in the first instance by Facebook, by Twitter, by the social media platforms — whether those decisions are ultimately attributable to the government in a way that then triggers First Amendment scrutiny. And so that’s really what’s at stake here, and there’s both a legal issue and a factual issue here. 

 

So the factual issue is what exactly happened, but there’s also a legal question of — and we can say this in a broader way and a more specific way grounded in doctrine. The broader question is we’re living in a new world of social media, and so how do the principles and doctrines that we’ve inherited from a pre-internet era about third party censorship where the leading case is about a bookstore and taking pornography off of its shelves — how those principles might apply in the social media world that we live in now and then more specifically whether the requirement here is coercion or something broader and what counts as coercion. 

 

So something that’s not in dispute is that if the government goes to a private company and says you have to stop publishing this third party speech or we will criminally prosecute you, that is subject to First Amendment scrutiny. That didn’t happen in this case, and so the question is what beyond threats of prosecution or other explicit or implicit threats of adverse government action — what can be included in that. And there will be a debate, I’m sure, over the next few minutes and then beyond about whether the conduct in this case amounted to using the bully pulpit, saying both publicly and privately — government officials saying both publicly and privately that hosting this information, this disinformation is harmful and you shouldn’t do it and how much of it goes beyond that into being quite emphatic and impolite and at what point it crosses the line into the government is ultimately making the decision about what speech is hosted and what’s not. 

 

So that’s really the legal issue. And as we can see, especially in the new social media world, it’s deeply embedded in a complicated set of facts that are still in dispute, which is a bit uncommon in a Supreme Court case. Usually the facts are settled one way or another, but there’s been this continuing dispute about the factual record in this case and what actually happened. 

 

Stewart A. Baker:  Yeah. And actually, we got a question, and I think we should ask you because I think it’s part of the introduction to the case, which is: is it really necessary to find that the social media platforms are state actors here? Or is it possible to bring an action or an injunction against the government officials? And I think the answer, if I understand it, is it’s possible to bring it against the government officials, but the doctrinal route to finding that the Constitution was violated probably does run through state action. 

 

Prof. Adam Candeub:  That’s right. The parties sued were the government officials, the state actors. It was Murthy, Surgeon General Murthy, as well as other state actors, and they would only be liable if they were working through private actors who were essentially their joint participants in this action.

 

Stewart A. Baker:  So let me ask Adam first. What is the most coercive agreed conduct that can be pointed to? Do we have an agreement among the parties about some things that most people would say, well, that sure looks like coercion to me? 

 

Prof. Adam Candeub:  Yeah. There were threats about Section 230, taking action on reforms under Section 230 and other antitrust reforms. President Biden sort of said they’re killing people, which seems to suggest that they were engaged in criminal activity that might of course result in criminal prosecution. And there were a variety of sort of regulatory and legislative threats that were always lurking in the background, or at least that was alleged by the parties. 

 

And I think that’s part of the issue. It’s sort of like when Henry II said will no one rid me of this turbulent priest. Was he asking his barons to go down to Canterbury and assassinate Thomas Becket? It’s a contextual question. 

 

And I think a lot of the justices were struggling with that because, as a contextual question, it’s a question of fact that really should’ve been resolved at the first level, the district court. And I think a lot of the justices who seemed a little bit uncomfortable with the Fifth Circuit decision were trying to figure out which facts do we have to ignore, or which can we review on a less deferential standard, in order to get the result we want. So yeah, I think that the case does turn, as Dr. Seligman pointed out, on how do we interpret these background threats. And is it required for either standing or for a finding of state action to find a very specific threat and a very specific action, or can we allow people to interpret government action more broadly? 

 

Stewart A. Baker:  So Matthew, do you agree that the references to 230 and to antitrust and to literally killing people are sufficient to be coercive, or do you want to see an actual linkage between the things they’re asking the platform to do and the bad consequences that will come if they don’t in order to say that, yeah, that’s what it takes to be coercive? 

 

Dr. Matthew Seligman:  Well, I think this is a perfect example of the factual disputes that remain in this case, and not just concrete facts but also how you characterize them, whether you characterize those as threats or not. In my view, a threat is connected to if you don’t do X, I will do Y, and it’s certainly true that there’s been a robust political dialogue about Section 230 reform and antitrust actions against big tech. And that has been a bipartisan conversation. Both of those would require legislation, and there has been bipartisan legislation introduced on both of those issues. 

 

And so if we’re trying to say that these were threats, well, what’s the linkage? I’m not aware of anything in the record that ever involves any governmental official saying to someone at a social media platform, if you do not take down this content or adopt a policy with respect to content, etc., we will take this adverse government action. Now, it doesn’t have to be as explicit as that. I’m not saying it does. But that linkage was never made because these adverse government actions were things that both parties have been talking about for years, both before and after COVID, which is where this case first started. 

 

So the fact that there might be legislation out there that would be adverse to the interests of social media platforms, that alone doesn’t constitute a threat, especially in a context where this wasn’t the administration that was the primary mover on that. It was Congress. So it’s hard for me to see how those constitute threats because, again, the question here is whether these content moderation decisions, which were made in the first instance by the social media platforms — whether that decision is attributable to the government because the government compelled in some sense the social media platform to take that decision. And absent some kind of linkage with an adverse consequence, it’s not clear to me how you can possibly make that case. 

 

Instead what we have is heated rhetoric, and the example about President Biden saying you’re literally killing people — so President Biden, who is sometimes given to heated rhetoric as all politicians are, was emphatic about that. Now, does that — so Adam drew a comparison to something that led to the murder of the Archbishop of Canterbury. I’m not sure that we can draw a connection between a surely emphatic piece of rhetoric, yes — COVID disinformation is killing people — and then saying that that was a threat of what exactly? Of — [Crosstalk 00:18:45] of murder? 

 

Stewart A. Baker:  So let me push on that because you’ve probably represented people in Washington, and I find that I think it would be completely good Washington advice to the platforms in these circumstances to say one, the President and everybody in the White House and most of the administration is just mad as hell at you. They’re using the F bomb when they send you emails. This is not normal behavior. They really are upset with you. 

 

Two, the reason they’re upset with you is your failure to aggressively take down a post that they don’t like. You do not want to be in a situation where they are trying to decide how much effort to put into 230 or into antitrust investigations because they can cause a world of pain for you. You need to find a way out of this, and the way out is to take down some of these posts. Isn’t that what any Washington lawyer would’ve said to them? 

 

Dr. Matthew Seligman:  Well, according to at least two justices today, no. The two justices who have served in the White House Counsel’s Office, Justice Kagan and Justice Kavanaugh, not in the same White House but in the Counsel’s Offices of differing administrations, of different political parties, both said that actually this stuff happens all the time. 

 

Stewart A. Baker:  Yes, it does. I completely agree. That’s not necessarily the same thing as whether it was coercive. 

 

Dr. Matthew Seligman: [CROSSTALK 00:20:20] you might say that, well, okay, so that just means that the First Amendment is violated every day, and I kind of anticipate that response. But the idea here is that it’s not actually that uncommon. The bully pulpit includes the government’s right to speak about other speech, and so that’s part of what the bully pulpit is. And so for example, the preceding administration, President Trump criticized the media a lot. He called it the fake news all the time. 

 

Stewart A. Baker:  Enemy of the people, I think. 

 

Dr. Matthew Seligman:  Do we want to say that every time he did that, if a media organization adjusted its content after he said fake news, MSDNC, etc. — if they adjusted their coverage in response to the bully pulpit — maybe they were convinced, maybe they had been advised. Who knows? But my point here is that the bully pulpit includes the right to criticize media organizations. They’re not immune from that because, again, the test here is whether those content moderation decisions are ultimately attributable to the government such that they are subject to First Amendment scrutiny. Coercion is a clear case of when that can happen, and the question is whether it extends beyond that to heated rhetoric. Is heated rhetoric enough to make Facebook’s decisions actually the government’s? 

 

Stewart A. Baker:  Okay. So let me ask Adam if the government just puts a lot of bully in the bully pulpit, is that not enough, or do you think that’s the line that they’ve crossed? 

 

Prof. Adam Candeub:  Well, I think that this was a distinction that could’ve been made more strongly at argument that this is not in any way prohibiting the bully pulpit. President Biden and all of his officials can say whatever they want about COVID. This is about restricting individuals’ right to communicate. The analogy is Biden going to the telephone company and saying, oh, too bad, Dr. Seligman, you don’t get a telephone connection any longer. You can’t talk to your friends because we don’t like you. And that’s the analogy. 

 

This is not a policy debate. I believe completely the government should be able to talk about whatever it wishes. But should it be able to force third parties to deprive individuals of their right to communicate? 

 

And I think the point about Kavanaugh is actually very funny. There was a tweet from Mark Joseph Stern, if I could read it, and it says “Brett Kavanaugh’s drawing on his experience as White House Staff Secretary to explain to his colleagues that government officials call up journalists to scold them all the time and it’s obviously not a First Amendment violation — real world experience matters on the bench.” So yeah, one thing I would’ve wanted to ask the justices, and Justice Kavanaugh in particular, is: well, is that so necessary? 

 

What is so horrible in saying to government officials that there’s a real line that has to be drawn when you’re pressuring journalists and when you talk to them, so you should be really, really careful? That doesn’t mean that you can’t use the enormous and capacious ability of the federal government to get its views across on your own Twitter accounts and in your own incredible ability to propagandize and get your message out. But when dealing with journalists you have to be careful. When dealing with social media companies, you have to be careful. 

 

And so this seems to be something a little bit swampish of saying oh, it’s so important that government officials get to call and badger and bully journalists, that this right is part of the Constitution. Of course, it isn’t. The First Amendment is a limit on government’s ability to restrict speech. It’s not a protection of government speech. 

 

Stewart A. Baker:  So let me ask Matthew to go back to something you said because that is a difference in this case that the speech that was being suppressed here had nothing to do with Facebook or Twitter and their views. They were being told that there was somebody else’s speech that shouldn’t occur. And so they had, first, less of a dog in the fight. They were more easily intimidated. And the people who were losing their rights had no idea how much lobbying was going on in the background. Do either of those facts make a difference in the way you analyze the First Amendment interest here? 

 

Dr. Matthew Seligman:  Well, the First Amendment interest potentially but the ultimate legal question is, again, about whether those decisions were ultimately attributable to the government. And so something Adam just said here I think agrees with that. He used the word “force,” whether the social media platforms were forced to take down certain content. And I think that there’s a moving of the goal posts, factually and legally, here that’s been pervasive in this case. Nobody is saying that it wouldn’t potentially violate the First Amendment if the federal government forced private companies to take down content. 

 

The question is whether that happened and, if it didn’t, whether something short of forcing suffices. And so we hear words like “badgering.” We hear words like “hectoring” and so on. But to illustrate the expansiveness of the decision that’s under review here — one of the agencies that’s been enjoined is the Cybersecurity and Infrastructure Security Agency. It’s an agency that handles threats to, among other things, elections, and CISA’s correspondence with the platforms included the following sentence: that it “neither has nor seeks the ability to remove or edit what information is made available on social media platforms. CISA makes no recommendations about how the information that it is sharing should be handled or used by social media platforms.” 

 

And it’s difficult for me to see how that constitutes forcing the social media platforms to take down content like doxing of election officials, and that’s just one example of the pervasive disconnect between the extremely, I would say, serious characterization of what this case is about, the characterization that it’s about the federal government behind the scenes forcing these massive companies to pervasively censor private citizens’ speech. 

 

Stewart A. Baker:  So Matthew, isn’t —

 

Dr. Matthew Seligman:  And then when you actually look, it doesn’t look like that at all. 

 

Stewart A. Baker:  So the classic example of an extortive threat in modern American life is nice little restaurant you got here; it would be a shame if something happened to it. There’s no overt threat, and that would not be different if you said and by the way, I’m not making a threat. It would only make it worse. So to some extent, you can’t just rely on the words that people offer. You have to have some feel for what the actual context is. 

 

Dr. Matthew Seligman:  Yes, you do. For example, nothing bad ever happened to any of the social media companies in the over 50 percent of the time that they declined to take any action on the —

 

Stewart A. Baker:  Okay. Fair enough. Let me ask Adam. I thought the most troubling set of questions about where you draw this line came from the people who said, well, what if The New York Times is getting ready to publish a piece about American espionage that will seriously compromise national security? Are you really saying that the government can’t call up The New York Times and say that’s inconsistent with national security; you will be a traitor if you publish it, and we will say so all over the country? 

 

Prof. Adam Candeub:  No, of course they can say that. I still retreat to the distinction of, well, this is not really what was going on. It wasn’t editorial content. This was individuals’ communications. This was the provision of a necessary service, communications. 

 

But even if that were the case, what does the First Amendment say? The First Amendment says Congress shall make no law abridging the freedom of speech. As with the other protections of the First Amendment, there’s a clear prohibition. So the idea of the government limiting speech even a little bit is, I think, from a First Amendment purist’s perspective somewhat abhorrent. 

 

I would say that at some level — if you have to make me go to the First Amendment purist perspective, I’d say look, in extreme situations, of course, utilitarian considerations should override, and there has to be some exception. And no doubt a court would find that. But if you’re going to play that game, Stewart, then you have to figure out what really happened here. 

 

Here, true information was censored, true information which could have prevented schools from being closed down, students from losing years and years of education, and businesses from being ruined, and which could have spared people who should not have gotten the vaccine, younger people who had no reason to get the vaccine but were forced and mandated to do so and who are now dealing with life-changing events like myocarditis. So you can say, well, yes, sometimes it’s okay for the government to talk, but here, the government shut down people who opposed it and who opposed it rightfully. And we would’ve all been better off if they had been allowed to speak. 

 

And just getting back to Dr. Seligman’s point, the idea of force — if I hold a gun to somebody and say your money or your life, you still make a choice. Some people, Ebenezer Scrooge for one, would say take my life; don’t take my money. That’s a choice. It’s a hard choice, and therefore we call it coercion. But there’s not a qualitative difference between what the government does when it says, or the mafia says, nice business you have here, and putting a gun to one’s head. In the end, it’s just a hard choice, and there are certain hard choices that we think are inappropriate for government to make. 

 

So this dichotomy that you’re trying to draw, Dr. Seligman, between force, which is bad, and mere coercion or suggestions or questions, which are okay: no. Every time the government speaks, there is an element of force, and the question is at what level it is acceptable. I think, as the district court found, and it was closest to the facts, this was unacceptable. From the ethereal perspective of the Supreme Court, from the Washington perspective at the center of the bureaucracy, perhaps it was acceptable. 

 

Stewart A. Baker:  So I think that this boils down to a factual question on which we’ll all have views because it’s a Rorschach test, and the Court clearly has views. And they are more comfortable with what the government did than the district court or the Court of Appeals were, I think. So where does that leave us? Let’s assume that the Court is not going to find plain error in some of these findings. What law do we end up with? Do we end up just with an injunction that says don’t actually coerce social media platforms in the future? Adam, why don’t you start that? 

 

Prof. Adam Candeub:  Yeah. So after listening to the argument, I think that what’s likely to happen is — as I’m hoping — well, certainly only three justices seemed to signal support for the Fifth Circuit opinion. I think it was, who you’d expect, Thomas, Alito, and Gorsuch. Of the others, for a variety of reasons, the middle three, Roberts, Barrett, and Kavanaugh, seemed to be on the government speech side, and Sotomayor, Kagan, and Ketanji Brown Jackson seemed to be on the government’s side as well, the protect-the-platforms side. 

 

But one issue that we didn’t really talk about, which was very, very important to the justices, was standing and issues of redressability. So I think it’s very possible that, rather than get into this factual debate as you pointed out, Stew, we could just get a 6-3 decision on standing in which they overturn the Fifth Circuit and dismiss the case. 

 

Stewart A. Baker:  That certainly sounded like they could get five or six votes for that proposition and that a lot of the justices would be relieved to be able to avoid making any law here. Matthew, how did you read the argument? 

 

Dr. Matthew Seligman:  I agree with Adam. I think that’s certainly a possibility and something that, again, implicates factual questions about this case. And so the justices were interested in standing. Why were they interested in standing? Did any of the plaintiffs suffer any injury as a result of the alleged misconduct? And what the justices were concerned about is that the district court, the Fifth Circuit, and the respondents, the plaintiffs in this case, had misrepresented the factual situation to try to suggest that there was a content moderation decision attributable to government action, by sort of glossing over two-year gaps in timeframes and things like that. So I do think that it’s possible that the Supreme Court will ultimately find no standing here. 

 

I also think that that would be a shame in some respects because I do think that there are important questions here about the line. There are some clear cases where we can say that if Facebook is being threatened with criminal prosecution unless it takes down certain content, well, yeah, sure, that content moderation decision is subject to First Amendment scrutiny. And there are going to be clear cases where content moderation decisions are not. And so the line there I think is important. I think that the platforms, the government, and ultimately the American people would all be better off if we know what the rules of the road are. We know that there’s going to be some governmental communication with social media platforms about their content moderation that is going to be lawful, and it would be great to know with as much clarity as possible what that is, so we know what the constitutional requirements are. 

 

Stewart A. Baker:  Yeah. Frankly, if you’re willing to talk in code just a little with the tests that we’ve been talking about here, it’s very easy to make sure that all of your threats are veiled and that you send out a disclaimer at the end of your emails saying nothing that appears to be a threat in this email should be taken as such. 

 

Dr. Matthew Seligman:  Well, look, I think the Supreme Court would clearly say — and look, the law already says that this is explicit or implicit. And so a simple “your money or your life” with an asterisk that says nothing in this email is a threat is obviously not going to pass muster. And I don’t think that anybody watching this should think that I or anybody else thinks that it’s that easy to evade constitutional requirements. 

 

The thing that can make some of these cases, maybe not this case, but at least some of these cases difficult is when does emphatic bully pulpit speech cross the line into something else. When White House officials are emailing the trust and safety team at Twitter saying you guys are really, really messing up, this is a pandemic, how many expletives in a row do they have to use for it then to cross the line into something that raises constitutional questions? So the easy ways of evading it, the law’s already going to be able to handle that. The harder questions about okay, so how much — even if there are explicit or even implicit threats attached, how much pressure then converts those private content moderation decisions into government action? That’s where we don’t have a lot of clarity, and that’s where I think the real hard questions that we would benefit from answers lie.

 

Stewart A. Baker:  So let’s assume that the Court ends up saying we don’t think that you’ve demonstrated that you have standing because you haven’t connected the threats and the suppression to your particular posts. You haven’t shown that you could get relief that would actually prevent that in the future. It strikes me — or at least my first reaction to that is, well, that just means it’ll go back and there’ll be another lawsuit in which discovery is used to try to find a better connection. Or if in this case there hasn’t been a threat connection, every time the government criticizes what’s being posted, there are going to be people who believe they’ve been suppressed as a result of government threats. And they’re going to want to bring lawsuits so that they can get discovery. 

 

Are we in for a decade of constant lawsuits and discovery, or is there something that’s going to prevent that? Let me start with Matthew because I’m betting that he would think that that’s not a good outcome. Adam may think that’s perfect. 

 

Dr. Matthew Seligman:  Well, whether I think it’s a good outcome or not, I think it’s likely that we’re going to continue to see lawsuits about this. And there’s also the possibility of legislative action. Something that we should also bear in mind here is that the First Amendment is a baseline, and it gets very complicated here. But legislation can potentially solve some of these problems. At the very least, you could have government internal regulations about best practices and I think a robust debate about what we think appropriate communications between government officials and social media platforms is in a non-constitutional context. I think that’s an important conversation to have. 

 

So in the meantime, I think it is likely that we’re going to have continued litigation about this. Some of those cases are going to be based on ultimately pretty thin factual allegations, and some of them might not be. There are cases out there, often happening at the local level, where you have a local police chief who’s making threats saying you’ve got to take down this content or we’re going to investigate you, something like that. So there very well could be meritorious cases out there. And if the Supreme Court doesn’t give clear guidance about where the lines are, then I think there’s going to be a proliferation of litigation trying to figure out what those rules should be. 

 

Stewart A. Baker:  Adam, do you think that’s where we’re going to end up? People will keep suing and keep discovering in the hopes that they’re going to find a nugget of gold in one of these disputes? 

 

Prof. Adam Candeub:  Unlikely. And to answer that question I think we have to look at the companion case, Vullo, which was argued just afterwards. And there really was a very clear threat from a New York insurance regulator telling insurance companies not to do business with the NRA essentially. And the Court seemed much more open to finding this to be state action and to be a First Amendment violation. 

 

But from my perspective, the difference between Vullo and Murthy is that if Murthy is kicked, whether on a doctrinal matter or perhaps just on standing, what you’ll have is simply telling government bureaucrats and White House officials that you can bully and control the social media platforms as much as you want. You just need to be really, really discreet about it. 

 

So this will be arguably the worst of all worlds where these things will go sub rosa. I will say it’ll be good for the Supreme Court, however, because they’ll be able to say that they responded to the obvious threat and then of course they made the other, more insidious threat to our democracy and free speech go away because it’s invisible. I hate to sound cynical, but as you can tell, I was a little disappointed with the argument today. 

 

Stewart A. Baker:  So Matthew, let me ask. We heard Adam say that a lot of the efforts to suppress speech during the pandemic were really suppressing true speech, that now that we look back on it, we’re all a little ashamed of ourselves. And one of the reasons to be ashamed is how aggressively we pursued people who were saying things that we now suspect are true. Does that give you any concerns about how this case is likely to come out? 

 

Dr. Matthew Seligman:  So I think one of the important — so there are really two questions in what you’re asking here. One question is was there some true speech, or to bring it a little bit more broadly, things like the lab leak theory, it’s something that over the years it’s gone from something that was considered to be sort of off the wall to something that is a legitimate debate right now. I don’t think it’s proven, but I think perspective — certainly mainstream perspectives on that are different than they were before. So okay. 

 

Also, it is entirely up to Facebook if they want to host speech about the lab leak theory, and that’s true today. If they think that, look, it’s just not something that is consistent with our perspective on things, they get to today completely ban speech about the lab leak theory. That is true today, and that was true two years ago. 

 

So the important thing to remember about this constitutional question is not whether the speech is true or not. That might be relevant after the question in today’s case is answered — whether, to use the words that Adam just used, the government “controlled” the social media platforms into making those decisions. And again, Facebook has its own free speech rights. Twitter has its own free speech rights. And so the only way that the courts, which are indeed part of the government, get to control what speech Facebook or Twitter host on their platforms is if Facebook’s actions there are actually ultimately controlled, again, to use Adam’s words, by the government. 

 

And so I thought Adam is absolutely correct that the companion case today — or not companion but the second case today involving the NRA, where the New York insurance regulatory agency went to insurers and said hey, you better not insure the NRA because there’s reputational risk and oh, we might come after you in some way. That’s a very clear case where a private decision is being influenced and indeed probably controlled by the government, and so that’s something where the Constitution is at stake. Whether or not you think the NRA is supporting important constitutional rights or whether they’re supporting mass murder, it doesn’t matter. 

 

The point is that decision was attributable to the government. And contrast that to what happened with social media platforms where there was no adverse action that was ever threatened. And that’s what makes those cases different. And so it really just comes down to yeah, Facebook can ban free speech, true speech if it wants to because it’s a private company. 

 

Stewart A. Baker:  So I’ll ask you guys both this, and then we’ll take some questions. Would we be better off with legislation that required that any effort on the part of the federal government to influence takedown decisions at social media platforms be published? Like it would be a FOIA. You just say I want to see any — this would be a kind of guaranteed FOIA — anything that your agency ever sent to these social media platforms about the content they were hosting. Would that discourage inappropriate veiled threats, or am I kidding myself? 

 

Prof. Adam Candeub:  Well, you worked in D.C. for a long time. At some level at the FCC there’s certain things you can say to commissioners at the right time in the Open Meetings Act, but somehow the powers —

 

Stewart A. Baker:  They all get said. 

 

Prof. Adam Candeub:  Yeah. Exactly. I would be fine with those sorts of mandatory disclosures. I think that if you’ve got a law that required all communications between any government officials and the platforms be transparent and open, I think that would be fine. I think inevitably what would come through Congress would have some loophole, and that would be bad. But yeah, I would support such a law. I hope it would be effective. 

 

Stewart A. Baker:  Matthew? 

 

Dr. Matthew Seligman:  Yeah. I largely agree. I think there are important exceptions that would have to be made when there’s communications about doxing people’s personal information or threats, etc., so there’d have to be some limitations. And Adam, I can imagine, would be concerned that those exceptions would then swallow the rule. 

 

I do think an important distinction can be made between individual content moderation decisions and questions about policy. So when the government is advocating that social media platforms enforce their policies in certain ways or advocating that they change their content moderation policies, I think there’s a clear case that transparency can be helpful there. And so I would be in favor of that as long as there are, again, these case by case exceptions where there’s sensitive information that should be shielded from public disclosure. But by and large, I think transparency is better here. 

 

Stewart A. Baker:  Yeah. And we haven’t even discussed the fact that there are 40 other governments that are already doing this in very aggressive ways with the social media platforms. They just don’t happen to be ours, and we probably should know more about what they’re saying about our speech as well. Okay. Let me ask some of the questions. Rod Sullivan asked the question that I’ve wanted to know the answer to, which is how did this go from being a case against Biden to a case appealed by Murthy? Do either of you know the answer to that? 

 

Prof. Adam Candeub:  No, sorry. 

 

Dr. Matthew Seligman:  Yeah. Biden was dropped from the case at some point, and it’s not entirely clear to me how that decision was made. So he’s not a defendant. I think that was made for legal strategic reasons, but I don’t know the full story. 

 

Stewart A. Baker:  Or maybe fear that it would look embarrassing if he lost and then his name would be on a case that hurt. Okay. I think we’ve talked, Forest (sp), about the takeaways from the justices’ questions. Let me ask this one. One of the attendees said why haven’t there been lawsuits against the social media companies for viewpoint discrimination? Couldn’t the U.S. Attorney General or State Attorneys General sue them for doing content discrimination? 

 

Dr. Matthew Seligman:  Well, the — Adam. 

 

Prof. Adam Candeub:  Go ahead. 

 

Stewart A. Baker:  I’ll give this one to Matthew. I think the answer is going to be probably one that both of you share. 

 

Dr. Matthew Seligman:  Well, the First Amendment doesn’t apply to private corporations, and so it’s a restriction on government limitations on speech. Now, this is a little bit of a contested issue and one that I think raises very difficult policy and constitutional questions — and we saw this. There was another case, the NetChoice case, at the Supreme Court just last week. There are these questions about well, okay, does social media now play such an essential role in our national dialogue that we should really treat them as hosting a sort of public square, so maybe they are subject, if not to constitutional scrutiny, then at least to government regulation in a way that the government couldn’t regulate other private forums for speech — like the government couldn’t pass a law that says what we can say in this session right now. 

 

But maybe social media’s different because it’s taken on such a pervasive role in society. So I think there are emerging questions about whether social media is different in certain ways and therefore subject to either greater government regulation than it would be before. But the doctrinal answer is that the reason why Facebook can’t be sued for viewpoint discrimination is because it’s their speech and they’re not the government. 

 

Stewart A. Baker:  So Sarah Reese (sp) asked a question that I think is a good one, which is: is there any similarity between what was done here to the social media companies and a much more widely practiced regulatory tactic in which banks are discouraged from lending and providing services to a variety of business types that for one reason or another are viewed as unappetizing, dangerous, counter social, what have you? And that has happened under both Republican and Democratic administrations. It’s not viewpoint discrimination. It’s not a First Amendment issue, but it feels like the same kind of use of government power to get intermediaries to punish the people we’re really mad at. Let me ask Adam is there anything in these cases that would suggest that there’s room for relief against that kind of conduct? 

 

Prof. Adam Candeub:  Well, for instance, in Vullo, which was the case argued right after, it was precisely what—I forgot what the questioner’s name was—Sarah was talking about. The NRA was deemed bad by the New York regulatory officials, and they pressured insurance companies and banks to stop doing business with it. And it’s really, I would say, the exact same situation that we have with the social media companies. The point is — and I think it’s hard for The Federalist Society audience to quite get their hands around it, but we live in a regulatory state where so much of what is considered private enterprise is really just doing government’s bidding. And the line has blurred so much, particularly in highly regulated areas like banking, like healthcare, and like communications, that the line is really quite diffuse. 

 

And I think that if the Court doesn’t want to deal with it in this issue, it might come up in other issues. I think Dr. Seligman was talking about the Texas social media law, which was an attempt to say platforms cannot discriminate on the basis of viewpoint. And it wasn’t necessarily because they’re the public square; it’s simply because they’re like the telephone company. 

 

When my mom sends me pictures, it’s not Zuckerberg sending pictures. It’s not their speech. They’re just the carriers of speech, and they don’t really have any expressive role to play. So I’m hoping laws like that will discourage that, but I think Sarah’s point gets to a bigger problem, which is the blurring of state and private action in our current economy. 

 

Stewart A. Baker:  All right. We’ve got some really interesting questions here and an observation. Somebody reminds us that the Chicago Tribune, which was a deeply Republican paper in the 40s, published a story saying that we had broken the Japanese naval code during World War II. And the administration was getting ready to prosecute them. It stopped simply because they decided that the Japanese government hadn’t actually been reading the Chicago Tribune. That was an anonymous attendee who offered that view. I’m not sure that that tells us anything other than that they could have, and I think they could’ve gotten a lot of — gone a long way with that lawsuit but didn’t get a chance to do it. 

 

Here’s a question. Is this similar to Chevron deference? Is this deference to the agency? And I think that’s kind of a joke, but there’s something real in it. The justices, several of whom have been deep in the deep state, were they saying hey, I did this, so it can’t be bad? And it ought to continue; I can think of many reasons why it was a good idea? Is that what’s turning the tide on this for the otherwise conservative justices, that they remember fondly their Executive Branch service? Adam? 

 

Prof. Adam Candeub:  Yeah. I think the tweet I read suggested that. I’m sure Justice Kavanaugh spent a lot of time when he was at the White House calling up reporters and saying bad, bad, bad, don’t say that. And of course that was fine. And I think that that speaks to certain prejudices within our government, not just the Judiciary, that’s very sort of Washington centric. I will say that Justice Kavanaugh — and I’m not picking on him for any particular reason, but something stuck in my ear on this. 

 

In the NetChoice argument he said what’s abhorrent to our First Amendment is government sticking its finger on one side of the scale when it comes to free speech. You can’t prefer one viewpoint. So the Texas law, which required viewpoint neutrality by the platforms, was unconstitutional, at least apparently by what Kavanaugh was saying. But here it’s quite okay for executive actors to put their finger on the scale and say, oh, you can’t do that. So I think that does represent a sort of bias of perspective. 

 

Stewart A. Baker:  All right. Matthew, I’m going to give you the last word on this. Do you want to offer your — I think you’re of the view that you’ve probably got the Supreme Court on your side on this, at least in this case. Any final thoughts on the case? 

 

Dr. Matthew Seligman:  Well, I think that the Supreme Court is likely not to uphold the Fifth Circuit’s decision. As we talked about before, there are a couple of different ways that can happen. I also think it’s not going to be the last word on a lot of these issues because I think that there are continuing questions about exactly what government’s role should be in the regulation of online debates. And Adam just pointed out one way of reading Justice Kavanaugh’s statements today versus his statements a couple weeks ago is well, he’s talking out of both sides of his mouth. 

 

It’s also true that there are two plaintiff states in the Missouri v. Biden case, Missouri and Louisiana. And they have Republican state attorneys general. And then there are two Republican states that had passed social media regulation laws. So on the one hand there’s advocacy for more regulation by the government of social media, and on the other hand saying that even when there isn’t a law, with no obvious consequences, that was too much government involvement in social media. 

 

So I think obviously this is an issue where there’s still a lot of dynamic change in how people are settling their views on how they want government to be involved in social media. I think it’s a hard question, and I don’t think that this case is ultimately going to answer all those questions. 

 

Stewart A. Baker:  Yeah. And I’m not sure that the Supreme Court’s existing First Amendment doctrine is helping them figure out a good solution to these problems. They’ve brought a lens to the debate that doesn’t take account of many of the facts that we’re all aware of. But maybe it was too much to hope that people whose average age is 71 would embrace the information economy and guide us to the promised land. Okay. 

 

Adam, Matthew, thank you so much for participating. It was really illuminating. I think everybody has a better feel for how the case is going to come out and what the issues are going to be not just in this one but in the next four, all of which I assume both Adam and Matthew are going to be litigating. So we expect to see you back here to talk about the next case. 

 

Prof. Adam Candeub:  Thank you so much. It was a lot of fun. 

 

Dr. Matthew Seligman:  Thanks for having me. 

 

Stewart A. Baker:  All right. 

 

Chayila Kleist:  And if I could second those thanks on behalf of the Regulatory Transparency Project, really appreciate you carving out this section of your evenings to have this discussion. Thank you also to our audience for joining and participating. We welcome listener feedback at [email protected]. And if you’re interested for more from us at RTP, you can continue to follow us at regproject.org or find us on any of the major social media platforms. With that, thank you all for joining us today. We are adjourned. 

Adam Candeub

Professor of Law & Director of the Intellectual Property, Information & Communications Law Program

Michigan State University College of Law


Dr. Matthew Seligman

Partner, Stris & Maher LLP

Fellow, Constitutional Law Center, Stanford Law School


Stewart A. Baker

Partner

Steptoe & Johnson LLP


Cyber & Privacy

Federalist Society’s Free Speech & Election Law Practice Group

The Federalist Society and Regulatory Transparency Project take no position on particular legal or public policy matters. All expressions of opinion are those of the speaker(s). To join the debate, please email us at [email protected].
