[Webinar] Liability in the Digital Ecosystem: A Conversation on Biden’s New National Cybersecurity Strategy

June 19, 2023 at 2:00 PM ET

Earlier this year, President Biden released a new national cybersecurity strategy. As part of that strategy, the Administration says that it will seek to “Shape Market Forces to Drive Security and Resilience – We will place responsibility on those within our digital ecosystem that are best positioned to reduce risk and shift the consequences of poor cybersecurity away from the most vulnerable in order to make our digital ecosystem more trustworthy, including by: . . . Shifting liability for software products and services to promote secure development practices.”

The concept of software liability has been the subject of much debate since it was first suggested more than a decade ago. With the new national strategy, that debate becomes much more salient. In this webinar, cybersecurity experts will debate both sides of the question.

Featuring:

–Prof. Jamil N. Jaffer, Founder and Executive Director of the National Security Institute, Antonin Scalia Law School, George Mason University
–Prof. Paul Rosenzweig, Professorial Lecturer in Law, The George Washington University
–[Moderator] Robert Strayer, Executive Vice President of Policy, Information Technology Industry Council

Transcript

Although this transcript is largely accurate, in some cases it could be incomplete or inaccurate due to inaudible passages or transcription errors.

[Music]

 

Chayila Kleist:  Hello and welcome to this Regulatory Transparency Project webinar call. I hope you’re all having a wonderful holiday weekend. My name is Chayila Kleist, and I’m Assistant Director of the Regulatory Transparency Project here at The Federalist Society. Today, June 19, 2023, we are excited to host a panel discussion entitled “Liability in the Digital Ecosystem: A Conversation on President Biden’s New National Cybersecurity Strategy.” Joining us today is a stellar panel of subject matter experts who bring a range of views to this discussion. As always, please note that all expressions of opinion are those of the experts on today’s program as The Federalist Society takes no position on particular legal or public policy issues. 

 

Now, in the interest of time, we’ll keep our introductions brief. If you’d like to know more about any of our speakers today, please feel free to check out their impressive full bios at regproject.org. Today we are pleased to have with us as our moderator Robert Strayer, who’s the executive vice president of policy at the Information Technology Industry Council, ITI, where he leads ITI’s efforts to shape technology policy around the globe. Prior to joining ITI, Mr. Strayer served as the Deputy Assistant Secretary of State for Cyber and International Communications and Information Policy at the U.S. State Department. Before joining the State Department, Mr. Strayer was the general counsel for the U.S. Senate Foreign Relations Committee. He also practiced telecommunications law at WilmerHale. 

 

And I’ll leave it to him to introduce our panel. One last note, throughout the webinar if you have any questions, please submit them via the question and answer feature which can likely be found at the bottom of your Zoom screens so that our speakers will have access to them when we get to that portion of today’s webinar. With that, thank you all for being with us today. Mr. Strayer, the virtual floor is yours. 

 

Robert Strayer:  Thanks so much, Chayila. And thank all of you for joining us online today. We look forward to entertaining your questions after we have a moderated debate that we will conduct for roughly the first half hour. I’m privileged to be moderating this discussion between Jamil Jaffer, who is the executive director of the National Security Institute at the Scalia Law School at George Mason University, and Paul Rosenzweig, who is principal at Red Branch Consulting and a professorial lecturer in law at The George Washington University. 

 

The Biden Administration released its long-awaited cybersecurity strategy on March 2 of this year. It had five main pillars: first, defending critical infrastructure; second, disrupting and dismantling threat actors; third, shaping market forces to drive security and resilience—that’s going to be the topic that we spend the most time on today; fourth, investing in a resilient future; and fifth, forging international partnerships to pursue shared goals. 

 

As I mentioned, the third pillar was shaping market forces. That section of the national cybersecurity strategy first highlighted that there was a market failure. It stated that there was a failure to impose adequate costs on entities that introduced vulnerable products and services into our digital ecosystem. It then went on to talk about the need to shape and shift liability to those actors that are allowing those vulnerable products and services into the ecosystem. It specifically said that “Responsibility must be placed on the stakeholders most capable of taking action to prevent bad outcomes, not on the end users that often bear consequences of insecure products, nor on open source developers of components that are integral and are integrated into commercial products.” 

 

It said that the administration will work with Congress to develop legislation to establish liability for software products and services. It said that they need to establish a standard of care in the area of cybersecurity software development and specifically noted that there are existing standards out there, including the National Institute of Standards and Technology’s secure software development framework. It also noted that even the most secure software development practices will not prevent all vulnerabilities. So to kick off, I’d like to turn to Paul to give us a little bit more of the landscape of where we stand today on liability for insecure software development. 

 

Prof. Paul Rosenzweig:  Well, thanks, Rob, for that, and thanks to The Federalist Society for having Jamil and me here today. The answer to your question is really that as we sit here today there is in practice no liability at all for software products and services. When you purchase a Microsoft operating system or install an application or anything like that, you almost certainly clicked through a contract (or, if you opened it in a box, you opened up the shrink wrap and thereby agreed to the contract that was behind the shrink wrap) in which the manufacturer of that software disclaimed all liability for the product and its installation and its operation. So you bought it basically as is. 

 

Now, in the context of an operating system on my laptop, the damage to me is really rather modest, and the lack of liability can sort of be understood. But we’ve had, of course, over the last several years any number of really significant apparent gaps in software development. One thinks of the HAFNIUM Microsoft Exchange breach or the Log4j intrusion, in both of which there were significant downstream consequential damages to third-party users of that software, all of which — all those lawsuits — were borne by the users themselves and not by the people who wrote the software, which some might have said was flawed software or not. 

 

So in many ways we are today almost precisely where we were with automobile manufacturing at the start of the last century. The doctrine there was caveat emptor. If you bought a car and it had a flaw in it and the flaw led to an injury to you, well, the loss lay where it fell. And that doctrine essentially proved unsustainable over the long run and led to the development of a whole host of doctrines of product liability that are now applied to cars. And we are probably on that same arc with respect to software liability, and for sure that arc is the one that the Biden Administration has suggested it wants to put America on with respect to software services and products. 

 

Robert Strayer:  Great. Jamil, can you maybe respond? First off, do you agree with that analogy to the automotive industry maturing over the years into a set of product liability arrangements, both regulatory and in the court system? And then how do you see this field developing, and the harm to third parties? 

 

Prof. Jamil N. Jaffer:  Look, Rob, I think there are a few differences, obviously. The obvious difference between cars and software is that as a general matter — and again, there are exceptions to this rule — software isn’t killing anybody; right? Nobody’s dying because of a faulty brake system in your software or leaked personal data or the like. 

 

And there are exceptions to that. We’ve seen some challenges with ransomware aimed at hospitals, where we have seen potential patient deaths as a result. And it could be that larger catastrophic concerns are on the horizon with power systems and banking systems and the like. But we’ve had computer software for over three decades now, and to date there hasn’t been the kind of call or passion around software liability, for the very reason that it doesn’t produce the same kind of outcomes. 

 

Now, that being said, Paul raises some really important points that we have to grapple with. And we have seen some amount of liability being pressed upon computer software manufacturers and, at some level, those who operate computer systems. We’ve seen the FTC bring proceedings under its unfair and deceptive trade practices authorities and try to impose some amount of fines and the like, including when people don’t comply with the consent decrees that the FTC enters into with them. So we’ve seen some attempts to impose liability. 

 

But I guess the question is what do we hope to achieve? As far as I can tell, software manufacturers are working aggressively at every turn to engage in secure software development lifecycles and to close vulnerabilities as soon as they detect them. I get updates on my phone or my computer multiple times a week at times to close vulnerabilities they learn of. And so what I don’t see is some massive gap between software manufacturers creating capabilities — at least not the large ones — and then trying to close vulnerabilities as quickly as possible. 

 

So then the question becomes if you do impose liability, what’s the likely outcome? Well, we don’t have to look far. We can look overseas to places like Europe and elsewhere where they’ve imposed liability in circumstances where liability wasn’t previously present. And we see what that does: it crushes innovation. It dramatically curtails innovative activity. It keeps new and young startup players from getting in business and improving their goods, and it entrenches existing manufacturers who can afford to absorb the liability and/or build it into their processes. 

 

So the U.S. software industry has been so vibrant and flourishing in part because we’ve kept the government largely out of the way. The government hasn’t gotten involved in a regulatory manner. It hasn’t gotten involved by creating liability rules that are unnecessary. And we’ve seen tremendous flourishing. It’s why the U.S. software industry is the envy of the world. And so we can adopt the European approach if it makes sense to do so, but we have to recognize there are likely innovation implications that will come along with that. And that’s what I worry about. 

 

Robert Strayer:  Thanks, Jamil. Paul, could you respond to two points that Jamil raised there? One is what would the likely impact be on the innovators out there? What would be the cost and potential chilling effect on the development of new, smaller — especially smaller — software companies? And second, is there inadequate motivation on the part of software companies to secure their products? Do they need this additional stick, if you will, that forces them to develop more secure software so they aren’t passing that risk on to customers? Isn’t that something that’s inherent in their customer and client relationships, that they want to make sure they have the most secure software? 

 

Prof. Paul Rosenzweig:  Well, let me answer the second of those first. The mantra in Silicon Valley is move fast and break things. And in the end, we — you and I, the American public — suffer the consequences of the broken system. And that makes sense in the current no-liability structure. Cybersecurity, or insecurity, is a negative externality. My insecurity is not just limited to me. I’m part of a connected network, and it affects everybody else along the system. If I want to protect myself, I spend what is useful to me, but I don’t spend what would be useful for everybody else downstream. 

 

A perfectly good example is the Colonial Pipeline hack of a couple of years ago where Colonial Pipeline manifestly underinvested in implementation and keeping its systems up to date. So when we say software and services, we’re not just talking about the manufacturer. We’re also talking about implementation methodology. And it’s absolutely clear that Colonial Pipeline underinvested. Why? Because they weren’t going to pay for the four days that you and I and everybody else on the East Coast didn’t have gasoline even though there were significant consequences and disadvantages to that. 

 

It really is the case that the economic case for liability of some form is to find what the Ronald Coases of the world would call the least cost avoider, the person who can spend the least amount of money to fix the problem. And it clearly isn’t the consumer. It isn’t you and me, and it also isn’t necessarily the coder, though in some cases it is. It’s often the third-party implementer as well. 
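
[Illustration: Paul’s externality point can be put in simple economic terms. This is a minimal sketch, not anything stated on the call: let c(s) be a firm’s cost of reaching security level s, L_p(s) its own expected breach loss, and L_d(s) the expected loss borne downstream by third parties.

    % Private optimum: the firm weighs only its own costs
    \min_s \; c(s) + L_p(s)

    % Social optimum: downstream losses count too
    \min_s \; c(s) + L_p(s) + L_d(s)

Because more security also reduces L_d, the socially optimal security level exceeds the firm’s private optimum; liability is one mechanism for making the firm internalize L_d. All symbols are illustrative assumptions, not figures from the webinar.]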

 

As to whether or not it will stifle innovation: to be sure, if we actually adopted a silly bugaboo European regulatory system—and I apologize if there are any Europeans on the call; I don’t mean to call you out—that was top-down hierarchical in nature and slow in response, it would stifle innovation. But pretty much every other business in America has lived with reasonable man liability for willful neglect and managed to continue to innovate pretty well. On car liability, I agree with Jamil that it generally isn’t a death sentence, so we aren’t talking about death.

But car manufacturing has brought us Tesla, and the prospect that Tesla’s self-driving car might make it liable for some injuries, which is something that looks to be right on the horizon, didn’t stop Tesla from building a new self-driving car, or new electric vehicles for that matter. So a moderate, reasonably sensible, reasonable man requirement of some sort, one that has been a traditional part of the common law for 500 years and has managed to permit innovation through the steam revolution, the industrial revolution, and the early tech revolution, probably wouldn’t kill the American tech industry either.

 

Robert Strayer:  Paul, before we turn to Jamil, can you just explain briefly why the common law, through the court system, has not been a tool that parties injured by cybersecurity vulnerabilities have been able to use to this point? We don’t have a Section 230-style federal law prohibiting liability for software developers. Is it too hard to prove? 

 

Prof. Paul Rosenzweig:  It’s almost exclusively contractual in nature, which is to say that when you download and install a piece of software, or you implement it, the provider disclaims responsibility. There’s a famous case out of the First Circuit involving a bank there, where the bank adopted essentially two-factor authentication but then didn’t implement it well. When a company’s representatives logged in and asked to transfer the entire corpus of the corporate bank account to Romania, even though it set off an alarm, the manager said no, send it; they’ve got the right login and password. And all the money went to Romania. 

 

You would think that would create liability, but the district court decision was that it didn’t. On appeal, the First Circuit reversed for a little bit more fact finding, and we wound up with what was probably a sealed settlement. And now, as I understand it, they’re still litigating the insurance coverage, which is a second arm of the case. But basically, when you install a piece of software or when you contract with Colonial Pipeline for the delivery of goods, they disclaim any liability for software failures — for failures that result from incorrect or inadequate software implementation or services. 

 

Robert Strayer:  Right. So the bottom line really is that contractual arrangements would be overridden by a statutory framework in some cases. 

 

Prof. Paul Rosenzweig:  Presumably that’s what the Biden Administration is planning. I’m reasonably sure that Jamil and I agree that this isn’t something that can be done by executive action, though query whether or not it could be done by judicial action. The old story behind auto liability is that it didn’t come initially from the legislative branches. It was the product of expanded tort liability. Henningsen v. Bloomfield Motors is the famous case that held that such contracts were essentially contracts of adhesion. I remember those words from my law school. 

 

Prof. Jamil N. Jaffer:  Law school, yeah. 

 

Prof. Paul Rosenzweig:  Contracts of adhesion and thus would not be enforced, and the auto manufacturers became liable. And interestingly, at that point they ran for a regulatory response rather than face pervasive tort liability in 51 different jurisdictions. So my guess honestly would be that there’s a very small appetite in Congress for this but that failing to do that is likely to result in New Jersey or California eventually changing the rules.

 

Robert Strayer:  Right. Jamil, turning back to you, two things I would like you to address. One, Paul raised earlier this idea of the externalization of costs, especially by critical infrastructure owners and operators: if they have a failure, that cost is borne by the users and businesses that rely on their products and services. And second, responding to Paul’s last point there about the potential proliferation of different state standards, is that another reason why we might want to consider some kind of federal framework? 

 

Prof. Jamil N. Jaffer:  Yeah. Well, look, I think this point about critical infrastructure owners/operators is an interesting one because what it really demonstrates is that this really isn’t about software liability. It’s a camel’s nose under the tent for all sorts of massive regulation of major industries across the entire economy. The point is not really about holding software manufacturers liable. It’s about holding Colonial Pipeline liable. It’s about holding the banks liable. It’s about holding the hotels liable. It’s about holding McDonald’s liable when its electronic system makes the coffee too hot. 

 

In the world that Paul’s describing, this regulatory system, it would be great if it could be designed in a way that was business friendly and made sense and didn’t adopt European approaches. But I don’t see a lot of opportunities for that. You look at the ideas that are being bandied about, and there are some good ideas. If you’re going to have a liability scheme, I do think the Biden Administration’s approach of having a regulatory safe harbor, where if you meet certain standards you’re exempt from liability, is the right one if you’re going to go down this road. 

 

But again, I guess I keep coming back to the question of what’s the goal? Do we see a mass number of software manufacturers, particularly the major ones, who are not regularly engaged in security updates, who are not implementing a secure development lifecycle for their code? Where is the problem that we’re trying to fix? 

 

This whole idea, by the way, Rob, that there’s some sort of externality here that’s not being accounted for — nothing in the current marketplace suggests that. To the contrary, we see companies responding to consumer desire and business desire for more secure software all the time. There’s an entire industry around cybersecurity, one that I’ve worked in and invested in, that is literally built around this construct. And so I’m not sure the idea that we need a liability regime to ensure that software’s more secure would change that much about what companies do in terms of software. It would just impose more liability on them and make them less likely to invest in modernized software. 

 

By the way, as for the theory that it all worked out fine in the car industry: Tesla is a great example of innovation, yes. But until Tesla, American cars had largely been the same since the early 1900s. There hasn’t been a tremendous amount of innovation, so I’m not sure I would look at American car manufacturers as massive innovators or at liability as having helped them innovate. Yes, there have been electronic fuel injection and the like. But fundamentally, a car functioned largely the same way until this EV revolution. 

 

So I’m not really buying this idea. If you compare what’s happened in software innovation since the early ’80s until today with car innovation since the early 1900s until today, I would argue there’s been drastically more innovation per year in the software industry. And that’s partly because the government has stayed out of the way, stayed out of Silicon Valley, stayed out of Silicon Alley, stayed out of the Dulles Tech Corridor, and kept to itself. And that’s exactly why we see the innovation we have, not because the government has gotten involved with regulations and more liability and the like. 

 

Prof. Paul Rosenzweig:  So Jamil and I obviously live in different worlds because in my world I’m looking at the top ten vulnerabilities on CISA’s vulnerability list. I just pulled them up right now. 2018, 2017, 2018, 2019, 2019, 2020, 2020, 2019, 2019, 2020, 2020. So the top ten vulnerabilities, all of them are more than three years old and unpatched. And I —

 

Prof. Jamil N. Jaffer:  That might say something about CISA more than it does about the software industry. 

 

Prof. Paul Rosenzweig:  Oh, no. This is their record of the top ten routinely exploited cybersecurity vulnerabilities and exploits as of right now. Buffer overflows have been a known vulnerability for 18 years, and nobody fixes them because it’s too expensive, because they don’t bear any responsibility when the buffer overflow is used. Now —

 

Prof. Jamil N. Jaffer:  I don’t think that’s right. 

 

Prof. Paul Rosenzweig:  — I agree completely that software manufacturers are doing a better job now than they did ten years ago. All of them are routinely engaged in much better efforts to build software secure by design, and that’s a good thing. And we should encourage it. But the real answer here is that not everybody’s “secure by design” means the same thing. A lot of people are selling secure by design, with air quotes around it, that isn’t quite as secure. 

 

I think we’re in agreement that a safe harbor would be a good thing. I’m not sure that we’re in agreement that, under the liability structure as it is today, the software manufacturers and implementers around the country are operating at the optimal level of security. In fact, quite to the contrary, given the age of the vulnerabilities and the fact that things are going in the wrong direction: ransomware is up, not down. Software vulnerability exploitation is up, not down. We don’t have good metrics, which is a real loss. But we know pretty much that it’s not getting more secure out there. Why? 
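
[Illustration: for readers unfamiliar with the buffer overflows Paul mentions above, here is a minimal C sketch of the vulnerability class and its bounds-checked fix. The function and buffer names are hypothetical and are not drawn from any product discussed.

    #include <stdio.h>
    #include <string.h>

    /* Vulnerable: strcpy copies however many bytes the caller supplies,
       so input longer than 15 characters writes past the end of buf. */
    void greet_unsafe(const char *name) {
        char buf[16];
        strcpy(buf, name);              /* no bounds check */
        printf("Hello, %s\n", buf);
    }

    /* Fixed: snprintf writes at most sizeof buf bytes including the
       terminator, so oversized input is truncated rather than allowed
       to corrupt the stack. */
    void greet_safe(const char *name) {
        char buf[16];
        snprintf(buf, sizeof buf, "%s", name);
        printf("Hello, %s\n", buf);
    }

    int main(void) {
        greet_safe("a deliberately long, attacker-controlled string");
        return 0;
    }
]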

 

Prof. Jamil N. Jaffer:  You know, Rob, I actually think it’s a little bit of a different story, Paul, than the one you’re telling. It’s not that software manufacturers aren’t updating their software, aren’t making the fixes. It’s that people aren’t implementing them. The reason why all these are the top ten vulnerabilities is not that they haven’t been fixed in the software. It’s that the vast majority of owners and operators aren’t implementing software updates on a regular basis because they’re concerned about critical systems and their ability to function at a given time. That’s a separate and distinct question. And if we think somehow software liability is going to fix it, okay, maybe. I don’t think so, but it might. 

 

But maybe the point is that we’ll put the liability on the owners and operators; right? Okay. Well, then maybe they just won’t implement the latest software. They’ll just take those systems offline. There are significant downstream consequences of a liability regime that we haven’t necessarily thought through. 

 

So look, I think there’s an opportunity here where Paul and I do agree: if you’re going to go down this road, which the administration appears committed to — not because they have the support in Congress to do it, but they appear committed to it — I think the safe harbor does make a lot of sense. I think that means creating a set of industry best practices. There’s already, as Rob pointed out at the beginning, a framework for secure development, secure by design, that NIST has put out, and CISA’s got some guidelines as well. I think that if you comply with those in your operational processes — and I think you’re right, Paul, there’s a lot to be done on making coders better, implementing more secure by design, teaching coders how to better embed security from the beginning in their code. I think there’s a lot of work to be done, and there’s no doubt that there are companies and organizations built around that capability. 

 

I think the real core question, though, is what’s the best way to achieve that. If you can show me a market failure, if you can show me a tragedy of the commons here, okay. But just the fact that we don’t have the level of cybersecurity you would prefer, or that the White House would prefer, is not obvious evidence of a market failure. That may be the market functioning. That may be a lack of information. That may be a lot of things. It’s not the sort of deadweight loss that we typically look at when we talk about economic market failures. 

 

It’s easy to call it a market failure. It’s another thing to demonstrate that it actually is one and that regulations or liability rules are warranted here. But to the extent that we’re going to do it, I think industry best practices, borrowing from things that we know work, actually doing that, and then giving people a safe harbor if they apply them, is the only way to go if you’re going to go down this road. 

 

Robert Strayer:  Paul, go ahead. 

 

Prof. Paul Rosenzweig:  No, no. We’ll let you moderate, Rob. You go ahead. 

 

Robert Strayer:  I was going to pick up on that point about a safe harbor. Now, I know Jamil has good experience with safe harbors and liability protection from when he was chief counsel of the House Intel Committee. You were part of the team that got liability protection enacted in Congress for those that shared cybersecurity vulnerabilities with the U.S. government. 

 

I want to dig into this idea of a safe harbor a little bit further because one of the principles that many of us have talked about for more than a decade, whenever this idea of liability has been proposed, is that we don’t want cybersecurity to be a check-the-box exercise. That is, the adversaries are always picking up on the weaknesses in the system, and if you stay too focused on meeting a set of benchmark requirements and not focused on what’s coming down the road and evolving your defenses, you will not be ready for the most innovative adversaries out there. You certainly will pick off some of the easy-to-fix things, like making sure you’re patching a known vulnerability that, as Paul pointed out, has been around for years. 

 

So as I mentioned earlier and you reiterated, Jamil, there is now a NIST secure software development framework that’s had industry input. It’s now going to be something that the Office of Management and Budget requires anyone doing business with the U.S. government to meet. Is this type of framework sufficient as a safe harbor? Does a safe harbor need to have some other flexible criteria, or is the certainty of meeting something like this what we really want? I just want to get your reactions to the tailoring of a safe harbor. Go ahead, Jamil. 

 

Prof. Jamil N. Jaffer:  You know, Rob, I think that the safe harbor, if it’s the framework, makes a lot of sense. There’s no problem using that. You’ve got to give strong liability protection if you’re going to have that safe harbor and have it be effective. By the way, I was the senior counsel — I don’t want Sarah Jeffer (sp) or Kay Wilburger (sp) to come after me, or Krista Nessa (sp), who was the chief counsel at the House at that time. But one of the things that we tried to do there — we got it through the House, and it was enacted into law later on in 2015. When they did get it enacted into law, they did have to make some additional compromises. 

 

And the compromises they made on liability protection, I think, made it less likely that people would take advantage of that opportunity. They narrowed it to just the act of sharing itself and some of the downstream consequences. And they set a different bar. Paul has talked about a couple of different standards you might use for liability: the reasonable man on one hand, willful bad acts on the other — I forget the exact term Paul used. There are obviously major differences between the two; I prefer the latter, not the former. So it comes down to picking the right liability rule, if you’re going to have one, but then creating the safe harbor and making sure it’s bulletproof if you comply with the framework. 

 

And I do think the framework is a good one. If you comply with it, you get bulletproof liability protection. If that were the deal — if we were in the process of negotiating, and the deal was we’re going to get liability of some sort and you’ve got to figure out what the alternative’s going to be — that’s not a bad alternative. I still hold out hope that we don’t need these kinds of liability rules. But Paul may be right — and I don’t mean to be Pollyanna-ish about it. Clearly our nation’s cybersecurity is not where we want it to be, and there are a lot of ways to get to that cybersecurity goal. Some of it might be regulatory. Some of it might be liability. 

 

A lot of it probably frankly is the government doing a better job working with the private sector more collaboratively and frankly incentivizing the private sector to do the right thing. I’ve always found it’s better to line your incentives up than it is to — come in with the carrot, not the stick. But if you’re going to come in with the stick, you better have a big carrot inside that stick you can offer and say, hey, look, if you do this, you won’t get hit with the switch. 

 

Robert Strayer:  Great. Paul. 

 

Prof. Paul Rosenzweig:  Well, I actually think that the safe harbor is, A, a critical component of any liability system, and, B, it’s also the hardest thing to implement. I’m actually going to argue against myself somewhat and say that I think that the framework is a difficult and inadequate safe harbor in part because the NIST frameworks don’t mutate with sufficient rapidity. They are fixed in time, and it takes the government three years to update them. And three years in our system is a lifetime. 

 

They are a good baseline, and at a minimum that’s what you should do. But the truth of the matter is that if you’re complying with the NIST cybersecurity framework — if you’re implementing the NIST cybersecurity framework; they’ve changed frameworks today — you’re implementing stuff that is 18 to 36 months behind the best thinking on what cybersecurity means in the market today. And I suspect that the same would be true in secure software development, though I do tend to think that that may vary more slowly. 

 

It won’t vary on the same scale. Environmental standards vary on a year- or decade-long kind of time scale. Cybersecurity standards vary on a week- or month-long time scale. I suspect that secure software development standards will vary on a month to half-year time scale. So that may be good. But my biggest concern with the liability system going forward is that the definition of a safe harbor won’t keep up with what is actually best practice, what a reasonable man — a reasonable person — would actually do. And to that extent it’s kind of costly. 

 

Prof. Jamil N. Jaffer:  You know, Rob, this is actually a great reason to not get the government involved. What Paul has just laid out is exactly why the government being overly regulatory or overly liability friendly in this space is exactly the wrong answer because the government itself cannot possibly keep up with the speed of evolution and the speed of the threat in the cyber domain. 

 

Prof. Paul Rosenzweig:  That’s why liability —

 

Prof. Jamil N. Jaffer:  The only thing capable —

 

Prof. Paul Rosenzweig:  — is the answer, not regulation; right?

 

Prof. Jamil N. Jaffer:  Oh, yeah. Some old federal judge is figuring it out? That’s crazy. Or juries, god forbid. That’s like the worst case scenario; right? 

 

Prof. Paul Rosenzweig:  Well, that’s why they got (inaudible 00:33:45). 

 

Prof. Jamil N. Jaffer:  I am very skeptical of the ability of an older federal judge to figure this out, just as I am of a member of Congress or the slow regulatory system. Look, I think that we’ve had a pretty good run of it in terms of innovation and security. And yeah, it’s not ideal. Is there more we can do? Absolutely. Are these the right answers? I tend to think probably not, but I think at the end of the day, Rob, it’s probably coming our way. 

 

I think it’s unlikely that the current scenario is going to stay the way someone like I might hope, and then the question becomes what is the next best scenario. What does good look like if you’re going to accept some amount of government regulation, some amount of liability? And I think that’s where the trade space with Paul is now. 

 

Robert Strayer:  Thanks. I want to invite our online audience to submit any questions they have to the Q&A feature here in Zoom, and we’ll ask some of those. But Paul, I want to come back to you about two constituencies, I think, that should be thought through in any liability regime and get your thoughts and then Jamil’s. One would be small and medium enterprises. Going back to the scenario Jamil was raising, those startup innovators are the ones whose ability to go forward would be most impacted by a potential large lawsuit, and they may not be able to get insurance against this. Larger corporations can manage the costs. It’s not that liability shouldn’t necessarily be imposed on them, but the small and medium players are the more vulnerable. 

 

And secondly, there’s a topic that many of our listeners are probably not familiar with, which is the open source software development process, which is actually critical to software development globally. There are so many groups that do software innovation in an open source format. Would liability potentially chill their activities, the elements that they then add to larger software that’s picked up by larger companies? So for those two groups, should there be liability limitations, or how should we think about them in a special way? Paul, you first. 

 

Prof. Paul Rosenzweig:  Those are really good questions, and I think that you need to incorporate both parts of that into any ultimate answer. As to small businesses and innovators, one of the reasons to maybe even prefer a regulatory approach as opposed to a common law liability approach would be that you could have an easy carve-out for people below a certain size or a certain revenue stream. In a pure liability regime of the sort that seems to be in contemplation, the answer would have to be that the reasonableness of efforts would depend upon the capability to make those efforts, and that small businesses would be held to essentially small business standards, in the same way that today small business data security programs for data breach are less robust than those for Marriott or — and we seem to negotiate that pretty well. Obviously, there are people who are caught at the margins and don’t like being put on the wrong side of the line. But we’re capable, I think, in the long run of making that distinction. 

 

The open source one is particularly difficult, and the only answer that I can think of, really, is that most open source code is ultimately incorporated into some other piece. It doesn’t run independently. Except for something like a Linux operating system, there aren’t a lot of people who use open source as the entire architecture of their enterprise. 

 

And so there we’re probably talking about things like software bills of materials, identifying which code pieces you’re using, so that the reasonable step would be identifying that you’re incorporating an open source piece into a larger piece, plus probably some level of due diligence to make sure that the piece of open code you’re using is not rampant with vulnerabilities and wasn’t developed secretly by a whole bunch of Russian trolls who took over an open code forum and tried to build the vulnerability in. Those are difficult implementation questions, to be sure. I’m just not sure they’re reasons to forgo the entire enterprise altogether. 
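
[Illustration: a toy version of the due-diligence step Paul describes, checking incorporated open source components against known-vulnerable releases. The component names and the vulnerable list below are hypothetical; a real pipeline would consume an SPDX or CycloneDX software bill of materials and a live vulnerability feed rather than hardcoded arrays.

    #include <stdio.h>
    #include <string.h>

    /* One entry from a software bill of materials (SBOM). */
    struct component { const char *name; const char *version; };

    /* Hypothetical known-vulnerable releases (a stand-in for a CVE feed). */
    static const struct component vulnerable[] = {
        { "log4j-core", "2.14.1" },
        { "openssl",    "1.0.1f" },
    };

    /* Components a hypothetical product incorporates. */
    static const struct component sbom[] = {
        { "log4j-core", "2.14.1" },
        { "zlib",       "1.3.1"  },
    };

    int main(void) {
        for (size_t i = 0; i < sizeof sbom / sizeof sbom[0]; i++)
            for (size_t j = 0; j < sizeof vulnerable / sizeof vulnerable[0]; j++)
                if (strcmp(sbom[i].name, vulnerable[j].name) == 0 &&
                    strcmp(sbom[i].version, vulnerable[j].version) == 0)
                    printf("FLAG: %s %s matches a known-vulnerable release\n",
                           sbom[i].name, sbom[i].version);
        return 0;
    }
]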

 

Robert Strayer:  All right. So subpoint. Jamil, any thoughts on either of the categories? 

 

Prof. Jamil N. Jaffer:  Yeah, I think it’s interesting. If you’re a small tech company — you’ve got three or four coders and you’re building a new piece of software — and you’re supposed to figure out whether the open source library that you’re relying upon from GitHub has been overrun by Russian nation-state actors who have co-opted it and are putting in vulnerabilities, how are you supposed to figure that out? I mean, that’s crazy. 

 

Prof. Paul Rosenzweig:  That’s — 

 

Prof. Jamil N. Jaffer:  You just said that. 

 

Prof. Paul Rosenzweig:  That’s a bit of a red herring. Let’s change the topic completely to an area like AI. If you’re one of the four guys who are building a new AI instantiation today and you use GPT-4 and you don’t account for the fact that it’s a known hallucinator, you’re not being responsible. You have to account for it. 

 

Prof. Jamil N. Jaffer:  Okay. Yeah, yeah, yeah. What we should do is —

 

Prof. Paul Rosenzweig:  At least in some ways. 

 

Prof. Jamil N. Jaffer:  — what we should do is — so AI I think is the interesting one; right? There are literally not four — there are millions of people today, hundreds of thousands of people today across this country and across the globe, innovating today on GPT-4 specifically and putting products out there like wildfire, rapidly creating innovation, specifically because — and everyone knows that GPT-4’s got its hallucinations. But they’re not liable. If you made them liable, they would all be calling their lawyers, and none of the innovation you see today on top of GPT-4 would be (crosstalk 00:39:54). That’s a fact.

 

Prof. Paul Rosenzweig:  The answer would be easy. It would be transparency: we’ve included GPT-4, and it sometimes hallucinates. 

 

Prof. Jamil N. Jaffer:  As long as the rule is —

 

Prof. Paul Rosenzweig:  (Crosstalk 00:40:04).

 

Prof. Jamil N. Jaffer:  — so as long as your liability rule is that you can get out of liability by telling people what you’ve done, great. I support that. I’m 100 percent with you, Paul. But if your view is —

 

Prof. Paul Rosenzweig:  Well, I think in this context that would probably be the right answer. 

 

Prof. Jamil N. Jaffer:  Okay. Fair enough. Then (inaudible 00:40:18).

 

Prof. Paul Rosenzweig:  Honestly, the innovators today who are not taking any account of that and are propagating a system that they know doesn’t work, yeah, I’m not that sympathetic to them. 

 

Prof. Jamil N. Jaffer:  I think everybody — I think all you’re saying is let people know you’re using GPT-4 and that it hallucinates. Great. You’re not asking them to account for it or do anything about it; they don’t have to solve any problems, and there will be no liability. They just tell people they’re using it and that it hallucinates. Great. You’re not going to get any argument from me. But to be fair —

 

Prof. Paul Rosenzweig:  And then if I use — well, I don’t want to chase that too far. 

 

Prof. Jamil N. Jaffer:  To be fair, that’s just a standard disclaimer that will go in the shrink wrap like everything else, and it’ll be treated like everything else. But if you want that, fine. I’ll give you that. I’ll let you have it. 

 

Prof. Paul Rosenzweig:  Well, I think it’d have to be more prominent, but we’ll leave that aside. The fact is they’re going to get regulated out the wazoo in Europe already, and they’re going to be regulated in the United States, precisely because they are acting so irresponsibly. 

 

Prof. Jamil N. Jaffer:  And this is my point. We are going to destroy innovation. We’re going to limit all this massive innovation that we’ve seen taking place in the last few months, just tremendous, unique opportunities. And we’re going to come in and cram down and say, you know what you really need to do? All you innovators, all you garage builders who are building on top of some — creating massive opportunity for people raising —

 

Prof. Paul Rosenzweig:  Do you seriously think that on a reasonable man standard it’s reasonable to use ChatGPT today without accounting for its hallucinatory stuff? That’s the question. Don’t change it. 

 

Prof. Jamil N. Jaffer:  All I’m saying —

 

Prof. Paul Rosenzweig:  Answer that. Is it reasonable for the innovators today —

 

Prof. Jamil N. Jaffer:  Yeah. I think —

 

Prof. Paul Rosenzweig:  — to use ChatGPT — 

 

Prof. Jamil N. Jaffer:  — it is. I think it is. I think all the tremendous innovation that’s being built on ChatGPT today, GPT-4 with its hallucinations, is amazing and is going to revolutionize the way the world works, yes. And I think it will be disastrous if we apply the Paul Rosenzweig rule, cram it down upon all those people, and make them all go check with their lawyers to see, hey, do I need to do something more than — if it’s just a rule, tell them they should change. (Crosstalk 00:42:25).

 

Prof. Paul Rosenzweig:  Most AI innovators I know, Jamil, who are reasonable say they would never use ChatGPT for all the tea in China precisely because of its hallucinations. 

 

Prof. Jamil N. Jaffer:  And yet, and yet, and yet so many major companies today —

 

Prof. Paul Rosenzweig:  So many are moving fast and breaking things, and they’re going to leave behind a system that is destroyed. 

 

Prof. Jamil N. Jaffer:  I mean, listen —

 

Robert Strayer:  If I could, I just want to make sure —

 

Prof. Paul Rosenzweig:  Mike Pender (sp) says call on him. 

 

Robert Strayer:  I want to steer us back to cybersecurity. I know we all love to talk about AI, and we could do many webinars on it. I did want to ask a question that came in from the chat. It asked whether ISO/IEC 27001, which is a general cybersecurity management standard, is something that would be appropriate as a standard for a safe harbor. I don’t know if either of you has an opinion on that. In addition, there are many international standards bodies that could work on standards aside from the NIST secure software development framework. Any opinion on that? 

 

Prof. Jamil N. Jaffer:  Well, listen, I think 27001 is a reasonable place to go. It just depends on which rule you want to pick and how many parts of it you need to comply with. But there are plenty of options. The ISO/IEC 27001 that Michael Pender lays out is a reasonable one to pick if you’re going to pick from them. 

 

Robert Strayer:  Paul, anything on that one?

 

Prof. Paul Rosenzweig:  No. 

 

Robert Strayer:  I would just say, if I may, on the AI point, that I think it also really depends on what use case you’re going to be using GPT-4 in. There are reasonable actions depending on what’s at stake at the end of the day. Since we’ve talked about regulation a couple of times now, I want to talk about regulation again. The first pillar of the national cyber strategy is focused very much on securing critical infrastructure through developing necessary cybersecurity requirements in critical sectors. It notes that there are some existing statutory authorities, that there may be gaps identified in the future, and that those gaps may need to be filled. And it says, at bottom, that if we’re going to have regulation of a number of sectors, it should be based on existing frameworks like the NIST cybersecurity framework and voluntary consensus standards like 27001 and others in that series. 

 

Of course, when you have regulation there are potential monetary penalties at the end of the day. We know that in the context of the cyber incident reporting framework passed just over a year ago, which CISA is now implementing, there are penalties of up to $100,000 for failure to report incidents. So as we look at critical infrastructure sectors, is it appropriate to look at more rigorous cybersecurity requirements being placed on them, either in conjunction with or as an alternative to judicial liability? Let’s start with you, Paul. 

 

Prof. Paul Rosenzweig:  Well, this is where I start moving over towards Jamil’s side of this. The regulatory strictures of the federal government are subject to a lot of problems. They’re slow moving, for starters. They’re subject to regulatory capture by big actors in a way that the liability system generally is not. They’re subject to huge information asymmetries, where the government is often behind the regulatory curve. And they’re backed by governmental enforcement mechanisms that are often difficult and punitive, administrative or even criminal, god help us. 

 

So one of the reasons that I tend to favor the liability structures of the third pillar is that I tend to think of them as a far superior alternative to the regulatory structures offered in the first pillar, and because, fundamentally, unlike Jamil, I’m not convinced that the market is providing us with a good solution or the best solution. So this is my middle way, if you will. But I do think that it is inevitable that increasingly adverse results, like Colonial Pipeline, like NotPetya, like HAFNIUM, are going to drive a call for regulation. We were yammering about AI a few minutes ago, and maybe we should do another webinar. But clearly the adverse impacts there are going to bring us regulation far too early in the developmental cycle, precisely because too much bad is already happening in the public sphere. So my sense is that the regulatory structures that are called for, even as to critical infrastructure, are likely to be less beneficial than the administration hopes. 

 

Robert Strayer:  Thanks, Paul. Jamil?

 

Prof. Jamil N. Jaffer:  Yeah. Look, I think Paul is right to say that we’re moving in an increasingly regulatory direction, both on general software and cybersecurity liability and in the AI domain. I will say, though, that I don’t think the market has functioned completely perfectly here, or that we’re getting the right outcome. To the contrary, I think there are informational gaps that cause the market not to function as well as it might, gaps that can be addressed by the government sharing more information directly with the private sector at a highly classified level and really getting more information about the threat out there. 

 

And then beyond that, I do think that when you identify a result coming out of the market that you don’t like — not necessarily a market failure, but a result we don’t prefer — the solution should be to incentivize better behavior by the private sector rather than try to punish bad behavior: incentivize good behavior, and align the government’s interests and the public’s interest with those of industry. You’re likely to get a much better outcome much more rapidly. And there’s a lot we could do in that space that we haven’t tried yet. So simply running to the regulatory stick, which, as Paul correctly lays out, is the wrong answer, or even the liability stick, is still the wrong answer. Let’s try all the incentives we can first, and then, where there are specific, unique failures, let’s talk about liability, which I agree with Paul is a better approach than regulation and more adaptable, although I don’t think it’s particularly adaptable. And at the end of the day, I think that’s the right place to land when you think about these things across the economy and how to square what Paul is saying, and rightly saying, about this trend with the need for innovation. 

 

When it comes to AI, by the way, Paul lays out that there’s likely to be regulation soon. I agree with that. But I don’t think it’s because there are a lot of bad things happening. I think it’s because there’s a lot of fear, fear of bad things. I don’t see evidence of massive bad AI things happening that would cause regulation. I see a lot of concern, a lot of fear, uncertainty, and doubt. And I think that’s going to result in a regulatory backlash sooner rather than later. And I do think Paul is right that it’ll come much too soon in the development cycle of these products, which is unfortunate, because I see tremendous innovation and opportunity here and worry that the government’s going to come in and throw the baby out with the regulatory bathwater. 

 

Robert Strayer:  Great. Thanks. So I mentioned the cyber incident reporting law that was enacted last spring, just over a year ago. That is regulation. When the rulemaking is complete, it’s going to require critical infrastructure owners and operators to report vulnerabilities, or at least intrusions into their systems, to the federal government. Is that the type of more bespoke, nuanced, specific regulation that might make more sense, more of a rifle shot, rather than a broader one with a broader set of criteria that a particular critical infrastructure owner would have to meet? Or do you think that was also not really a necessary requirement?

 

Prof. Jamil N. Jaffer:  To me it’s still unnecessary because we haven’t tried to incentivize people to share information, at least at a high enough level. Yes, we did pass that law that you referenced earlier back in 2015. But it didn’t have real liability protection. It didn’t have real regulatory protection. It had sort of faux versions of that. It didn’t have any tax incentives. 

 

All those things are things that you could provide that would get people to give the government the information it wants. If you gave them strong liability protection and strong regulatory protection when they shared information, and maybe even gave them a tax incentive, people would likely share a lot more. What’s more likely to happen in this scenario, the critical infrastructure scenario, is that everybody’s likely to run to their lawyer as soon as they have an incident and say, okay, what do I need to share with the government? And the lawyer’s going to say share it as late as possible and as little as possible. 

 

That is not what’s going to achieve the goal of mass information sharing. You want to line up incentives, not create a situation where you go to the lawyer and ask what’s the thing I should do, because the lawyer’s always going to tell you to do the least at the latest possible time. And that’s the thing that we’re teeing up for ourselves with this new law. I know everyone thinks it’s the greatest thing since sliced bread. I think the actual reality of it will be fairly minimal, unfortunately. 

 

Robert Strayer:  Great. Thanks. Paul, any thoughts on that? 

 

Prof. Paul Rosenzweig:  I actually think that, without a law that mandates a report, the incentive would be for the lawyer to say tell them nothing, never. And that’s their incentive, unless they get paid for their information. The history has been that breach notifications — intrusions — are kept quiet because nobody wants to be adversely impacted, either reputationally or economically. And so —

 

Prof. Jamil N. Jaffer:  But Paul, if you gave them liability protection if they report it — if you report within a certain amount of time, you get liability protection and regulatory protection for that incident — wouldn’t everybody say tell immediately? Tell them instantaneously. Tell them the minute it happens?

 

Prof. Paul Rosenzweig:  Maybe. I don’t know. I’m skeptical of the idea that people would riot. But maybe we should do that for taxpayers, too. 

 

Prof. Jamil N. Jaffer:  Yeah. Hey, maybe. It might work better than hiring 800 IRS agents. 

 

Robert Strayer:  We’ve been talking about alternatives to regulation. At least, that’s how we started out. One other important tool that’s been talked about for years, and that I don’t think has really delivered what people thought it might, is insurance, cyber insurance. That is potentially a tool where you would price insurance based on one’s ability to secure one’s networks and adopt good cyber hygiene. The insurers would help set the standards for industry seeking that insurance. It would be a way for the government not to be in the middle of setting cybersecurity requirements, and it would be a market-based solution. I just want to ask each of you what your view is of the cyber insurance market and whether that is also a practical alternative. 

 

Prof. Paul Rosenzweig:  I think the insurance market is the end result of a liability system. One of the things that we haven’t talked about yet, which is a real problem that needs to be solved as part of the effort to develop a liability system, is the lack of good metrics. The insurance companies can’t rate risk right now because they can’t really assess risk improvement or risk mitigation very well at all. We all sort of know that two-factor authentication is better than not. But how much better, and how much should you spend on it? 

 

Clearly we have an intuitive sense that here the cost is much less than the benefit, and so we go ahead and do it. But lots of measures people don’t implement because they don’t have a sense of how to actually measure security. I think that the insurance industry is struggling right now to create legitimate cyber risk policies because they don’t have really good ways of assessing the comparative security or insecurity of two companies, Marriott and Hyatt, say. They can’t go in and look at them and say Hyatt’s great and Marriott is terrible, or vice versa. I should be clear, by the way, that I know nothing about either of those; I use them as exemplars only because people would know their names. And that’s what we need. 
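
[Illustration: the expected-loss arithmetic insurers lean on, with hypothetical numbers chosen only to show why the two-factor authentication call is easy while finer comparisons are not. Annualized loss expectancy (ALE) is the single-incident loss (SLE) times the expected annual frequency (ARO):

    \text{ALE} = \text{SLE} \times \text{ARO}

    % Hypothetical: 2FA cuts account-takeover frequency from 0.30/yr to 0.03/yr
    \Delta\text{ALE} = \$500{,}000 \times (0.30 - 0.03) = \$135{,}000

If two-factor authentication costs far less than $135,000 a year, the decision is obvious even with rough inputs. But ranking Marriott against Hyatt would require estimating frequency differences of a few percentage points, which, as Paul notes, no one can currently measure.]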

 

Robert Strayer:  Jamil?

 

Prof. Jamil N. Jaffer:  Yeah. No. Look, I mean, I think Paul lays out some important points there. I think at the end of the day the question is how you compare these things. And I think we’ve got to figure out a better path to addressing these concerns. I think that right now we’re lost in this sort of back and forth debate on do we/don’t we. And I’m not sure that we’re going to find another path forward. 

 

So Rob, look, you were involved in the middle of these things internationally for many years in the Trump Administration. I think part of the challenge is that internationally we’re not on the same page. In this country we might find a middle path forward, but a lot of what we’re seeing in the international realm is that a lot of these things are coming from the outside, GDPR and the like. On AI and AI regulation, we’re seeing the Europeans set the standard, likely through this AI Act. And I think that’s a piece that we haven’t talked about today that at some point we probably need to talk about as well. But yeah. 

 

Robert Strayer:  That’s a great point, Jamil. I was going to raise that at some point. The Europeans on cybersecurity have for some time adopted a relatively compatible approach with the United States. But in the last year they proposed something called the Cyber Resilience Act, which sets a number of very onerous requirements on industry and would seek to regulate the intermediary producers of products, not just the final products, which is what we’ve talked a lot about. It would effectively go after someone who’s producing the open source software, for example, that becomes an intermediary component. And there are no international standards for the security of those intermediaries in the 27001 series; all those standards are built for final products and final services, not for the intermediaries. 

 

There are also some really troublesome vulnerability disclosure requirements that require disclosure of an exploitable vulnerability that’s never even been exploited, which might well send out vulnerabilities that have not been patched, under a very tight time requirement, to the European Commission, which really creates a honeypot for those that might seek to have access to those not-yet-exploited vulnerabilities. So there are a number of issues that need to be worked through on this. And it seems the Commission is in a rush to finish this process before the end of the year. So I could see there being a real divergence in our international approaches there. 

 

I just want to give you each a last word on the overall framework. Do you think that Congress, in the way it works, and that is not very quickly, could even do a liability regime in the next couple of years? Or do we still see this as some time off, with a lot more work to be done? Paul first, I guess. 

 

Prof. Paul Rosenzweig:  It’s not going to happen in this Congress. I’m skeptical that a liability regime will actually come to fruition in Congress anytime soon. I would expect, instead, judicial adoption of a liability regime, and possibly, as states are beginning to adopt their own privacy regimes, I’m going to guess that over the next five to seven years we will see some states imposing cybersecurity liability regimes. And then there’ll be arguments about preemption, federal preemption, dormant commerce clause, and all the things that Federalist Society lawyers love to listen to. 

 

Robert Strayer:  Great. Thanks. Jamil, how’s your crystal ball look on activities in this area?

 

Prof. Jamil N. Jaffer:  Yeah, no. Unfortunately, I think Paul is likely right. I think that we’re likely to see additional activity overseas as well as domestically at the state and local level. And that’s likely to drive the need, as we’ve seen in privacy and the like, for action at the federal level, so that you don’t have this diversity of standards and rules for people to comply with. That’s the point you were making earlier and the question you asked earlier, Rob, about what happens when you see this proliferation across multiple states. And so I think Paul is right. That’s likely to happen. 

 

And so maybe it makes sense for the federal government to get ahead of that, but I also share Paul’s skepticism that the Congress is likely to do anything here in the near future. I think actually, in a lot of ways, the administration was putting this out there as a trial balloon to set the stage for a long-term plan in this space, and maybe even, frankly, to encourage the courts, in their common law adjudication of some of these cases, to think about how products liability standards might apply in the software context. And if in fact courts start doing that, then the right approach would be to try and develop a safe harbor. That makes a lot of sense. 

 

Robert Strayer:  Well, Paul and Jamil, thank you very much for this really robust discussion. Your expertise here is phenomenal. Chayila, back to you. 

 

Chayila Kleist:  Absolutely. On behalf of both myself and The Federalist Society, I want to say thank you to our experts and our moderator for sharing your time and expertise today. It really was a lively discussion, and I really appreciate it. Also, thank you to our audience for tuning in and participating. We welcome listener feedback at [email protected]. And if you’re interested for more from us at RTP, you can continue to follow us at regproject.org or find us on all major social media platforms. With that, thank you all for joining us today. Until next time, we are adjourned.

Jamil N. Jaffer

Founder & Executive Director, National Security Institute

Director, National Security Law & Policy Program and Assistant Professor of Law, Antonin Scalia Law School


Paul Rosenzweig

Professorial Lecturer in Law

The George Washington University


Robert Strayer

Executive Vice President of Policy

Information Technology Industry Council


Cyber & Privacy

The Federalist Society and Regulatory Transparency Project take no position on particular legal or public policy matters. All expressions of opinion are those of the speaker(s). To join the debate, please email us at [email protected].
