The Coming Onslaught of “Algorithmic Fairness” Regulations

The authors of this paper examine the growth of “algorithmic fairness” regulations at the federal, state, and international levels, and discuss the ramifications for administrative state regulation and market innovation. 

Contributors

Neil Chilson
Adam Thierer

 

The Federalist Society and Regulatory Transparency Project take no position on particular legal or public policy matters. This paper was the work of multiple authors, and no assumption should be made that any or all of the views expressed are held by any individual author except where stated. The views expressed are those of the authors in their personal capacities and not in their official or professional capacities.
To cite this paper: Neil Chilson and Adam Thierer, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations,” released by the Regulatory Transparency Project of the Federalist Society, November 2, 2022 (https://regproject.org/wp-content/uploads/The-Coming-Onslaught-of-Algorithmic-Fairness-Regulations.pdf).

Introduction

A potential avalanche of “algorithmic fairness” regulations is looming. If triggered, it would sweep through our economy, delivering one of the most significant expansions of economic and social regulation – and of the power of the administrative state – in recent history. 

Federal and state policymakers from both parties are currently considering a variety of new mandates for artificial intelligence (AI), machine learning, and automated systems. These mandates are being pushed to address everything from hate speech, social media content moderation, and child safety to discrimination, privacy, and various other amorphous goals. 

These measures would empower government bureaucrats at many different agencies to regulate the minutiae of the computer programs that will power the next great industrial revolution. If algorithmic fairness mandates snowball, the accumulating regulations threaten to derail the computational capabilities of the U.S. economy and weaken America’s ability to compete globally in AI, which could let China and other nations race ahead. Taken together, the rise of algorithmic fairness regulations would deal a severe blow to the permissionless innovation vision that fueled the Digital Revolution over the past quarter century.1

The Precautionary Principle Comes for AI

For many years now, various left-leaning academics and regulatory advocacy organizations have encouraged policymakers to “pass legislation on artificial intelligence early and often,” by imposing preemptive, precautionary controls on emerging algorithmic technologies.2 Their calls for algorithmic regulation are motivated by a variety of concerns—privacy, safety, security, discrimination, etc. But they are often grouped together under the banner of “algorithmic fairness” or “algorithmic justice.” 

To address these amorphous issues, advocates have proposed formal certification regimes for AI.3 Such a permission slip approach would give bureaucrats the ability to examine algorithmic systems and determine whether they would be allowed onto the market, or even taken off the market if they’ve already been released. 

In addition to expanding the power of existing administrative state agencies, advocates for AI regulation have also proposed a variety of new agencies such as a Federal Robotics Commission,4 an AI Control Council,5 a National Algorithmic Technology Safety Administration,6 an “FDA for Algorithms,”7 and even a new global regulatory body called the International Artificial Intelligence Organization.8

Such measures would impose a Precautionary Principle model of regulation on almost all computational systems. While the focus on AI-based systems would seem to constrain such interventions to the highest-powered computational systems, this constraint is illusory. An “algorithm” is simply the name for any recipe of instructions given to a computer to execute a task. Many of these regulatory efforts make little distinction between different types of algorithms. Computational tools across the economy would be affected.
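To illustrate how sweeping that definition is, consider a minimal, hypothetical example in Python: this three-line routine is an “algorithm” in exactly the same definitional sense as a billion-parameter AI model.

```python
def average(values):
    # A recipe of instructions the computer executes step by step:
    # add up the inputs, then divide by how many there are.
    return sum(values) / len(values)

# A spreadsheet formula, a sort routine, and a machine-learning model
# all fit the same broad legal definition of "algorithm."
print(average([2, 4, 6]))  # prints 4.0
```

Regulatory language that covers “algorithms” without further distinction reaches code this mundane just as readily as it reaches advanced AI systems.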

The regulations would treat algorithmic innovations as guilty until proven innocent by imposing “unlawfulness by default” as the standard for many AI systems.9 This would mean that AI developers would need to “affirmatively demonstrate that their technology is not harmful and self-certify or seek regulatory approval before they deploy it.”10 In practice, this would demand that developers focus far more time on paperwork compliance and lobbying instead of inventing the next great application.11

Regulatory Measures Accumulate

Unfortunately, these academic efforts and Precautionary Principle-inspired proposals are now bearing fruit in a variety of legislative bodies. Several bills have been floated in recent sessions of Congress to address these calls for top-down regulatory control. A recent Stanford University report noted “a sharp increase in the total number of proposed bills that relate to AI from 2015 to 2021,” and that the 117th Congress “is on track to record the greatest number of AI-related mentions since 2001,” with 295 mentions by the end of 2021, halfway through the session, compared to 506 in the entire previous (116th) session. While the number of bills passed remains low thus far (with only 2 percent ultimately becoming law), it is clear the appetite for aggressive regulation is growing.

One broad-based legislative proposal, the Algorithmic Accountability Act, would require that any large company that “deploys any augmented critical decision process” undertake algorithmic impact assessments and file them with the Federal Trade Commission.12 Covered entities would be required “to eliminate or mitigate, in a timely manner, any impact made by an augmented critical decision process that demonstrates a likely material negative impact that has legal or similarly significant effects on a consumer’s life.” The bill would also create a new Bureau of Technology inside the FTC to enforce the new mandates.

Other targeted measures such as the “Algorithmic Justice and Online Platform Transparency Act” and the “Protecting Americans from Dangerous Algorithms Act” would introduce far-reaching regulations requiring AI innovators to reveal more about how their algorithms work, or even holding them liable if their algorithms are thought to be amplifying hateful or extremist content. Other proposed bills, like the “Platform Accountability and Consumer Transparency Act” and the “Online Consumer Protection Act,” would demand greater algorithmic transparency regarding social media content moderation policies and procedures. Finally, measures like the “Kids Online Safety Act” would require audits of algorithmic recommendation systems thought to target or harm children.13

Algorithmic regulation is also creeping into proposed privacy regulations, such as the “American Data Privacy and Protection Act of 2022.” The measure would require large data handlers to divulge information about their algorithms and undergo “algorithmic design evaluations” based on amorphous fairness concerns.14

Federal agencies are also working to regulate algorithms under several different guises. The Federal Trade Commission, for example, recently launched a rulemaking proceeding nominally focused on commercial privacy that includes many questions about algorithmic bias and justice. The Commission is seeking comment on the prevalence of algorithmic discrimination based on protected categories, how the FTC should evaluate algorithmic discrimination, and how it should address such discrimination – including whether it should adopt new trade regulation rules that ban “any system that produces discrimination.”15

The Biden Administration has also proposed “a Bill of Rights for an AI-Powered World” to “guard against the powerful technologies we have created.”16 The administration officials who floated the idea also suggested there may be a need for “new laws and regulations to fill gaps,” and that “States might choose to adopt similar practices.” More recently, in September 2022, the White House released a set of six principles to guide tech policy for digital platforms, two of which focused on algorithmic regulation.17

Another Patchwork of Technocratic Regulations

Several state governments have already floated different types of algorithmic regulation. Some state measures simply require additional study and oversight of AI. For example, legislation has been introduced in Colorado, Maryland, Missouri, and Rhode Island to form new AI commissions or task forces to study various policy concerns surrounding AI systems and then report back to the governor or legislature. A 2022 Arizona bill proposed a new AI study committee that would “determine how to legislate algorithms that require artificial intelligence to ensure that AI systems support human agency and fundamental rights” and “[f]ulfill ethical principles that ensure no unintended human harm occurs,” while also providing “transparency and traceability of data logs and decision-making.”18 Although the legislature passed the measure this summer, Gov. Doug Ducey vetoed it.

Other states have considered targeted bills focused on specific AI applications, such as facial recognition, automated vehicles, or drones. Still other jurisdictions have floated legislation to comprehensively regulate AI systems preemptively. For example, in 2021, the Attorney General of the District of Columbia proposed legislation that “would hold businesses accountable for preventing biases in their automated decision-making algorithms and require them to report and correct any bias that is detected.”19 Washington State has floated an aggressive algorithmic measure (SB 5116) that would ban government use of many algorithmic systems and processes. 

Finally, still other states like Florida, Texas, and California have advanced various bills to regulate social media algorithms in the name of countering “censorship” or “protecting children.” These measures raise serious constitutional issues under the First Amendment and have already invited court challenges.20 The California bill requires Data Protection Impact Assessments “before any new online services, products, or features are offered to the public” if they are “likely to be accessed by children.” Again, this is a “guilty until proven innocent” algorithmic regulatory regime.21 State algorithmic regulatory activities like these are likely to proliferate in coming years and threaten to create a patchwork of convoluted compliance regimes. This will impose considerable costs on innovators at a time when many other data-oriented state and federal laws are already multiplying rapidly. 

In addition to conflicting with each other, such AI regulations are likely to be internally incoherent. Amorphous, abstract, and aspirational goals like “algorithmic fairness” or “algorithmic transparency” cannot be converted into preemptive regulatory edicts that will be widely accepted, well understood by developers, and effective in developing new, improved products. Practical problems also abound. For example, one cannot judge the “fairness” of a machine learning algorithm absent the data inputs, because the entire design of the algorithm is to learn from the data. A government certification of the initial algorithm, absent data, would be meaningless as far as regulating the performance of the algorithm in operation. Furthermore, modern software development processes focus on rapid iteration, with new software versions being released weekly or even daily. Inserting a certification process into that loop would slow software improvements to the speed of government. 
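The data-dependence point can be made concrete with a small sketch (a toy learning rule and hypothetical credit-score numbers, chosen purely for illustration): identical code, “certified” in isolation, yields different decision rules – and different outcomes for the same applicant – depending on the data it learns from.

```python
def learn_threshold(approved_scores, denied_scores):
    """A toy learning rule: approve future applicants whose score
    exceeds the midpoint between the average previously-approved
    score and the average previously-denied score."""
    avg_yes = sum(approved_scores) / len(approved_scores)
    avg_no = sum(denied_scores) / len(denied_scores)
    return (avg_yes + avg_no) / 2

# The very same code, "trained" on two hypothetical datasets:
threshold_a = learn_threshold([700, 720, 740], [600, 620, 640])  # 670.0
threshold_b = learn_threshold([760, 780, 800], [700, 720, 740])  # 750.0

# An applicant scoring 700 is approved under the first learned rule
# and denied under the second, so certifying the code alone says
# nothing about how the deployed system will treat anyone.
applicant = 700
print(applicant > threshold_a, applicant > threshold_b)  # True False
```

The behavior regulators care about emerges only from the combination of code and data, which is why a pre-deployment certificate on the bare algorithm cannot deliver what its advocates promise.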

Conservative Calls for Algorithmic Regulation

While the center of gravity for the AI regulation push is to the left of center, some calls for algorithmic regulation in the name of fairness come from conservatives.22 Much of this is motivated by Republican perceptions of bias in social media content moderation. Indeed, in 2020 Pew found that “90% of Republicans say it is likely that social media sites censor political viewpoints.”23 Politicians have responded accordingly. 

Some conservatives have pushed hard for measures mandating “algorithmic transparency” for social media content moderation. For example, Senator Josh Hawley’s (R-MO) “Ending Support for Internet Censorship Act” would mandate that certain large tech companies undergo external audits proving that their algorithms and content-moderation techniques are politically unbiased.24 Similarly, Senator John Thune (R-SD) has introduced the Political BIAS (Bias In Algorithm Sorting) Emails Act to prohibit large online platforms from censoring emails via algorithms.25

At the state level, the Florida and Texas bills mentioned above have been pushed by Republican lawmakers to counter what they believe to be unfair discrimination against conservative speakers and viewpoints. The transparency-related requirements in these bills would require digital platforms to reveal details about their algorithmic business practices. While some of those concerns are understandable, if these laws pass constitutional muster they will create precedents that liberals will use to undermine decades of conservative efforts in the courts to defend property rights, religious freedom, and corporate speech. The laws would also generate waves of frivolous lawsuits, empowering trial lawyers and regulatory agencies in the process. 

Avoiding the European Regulatory Model

Most of the concerns animating proposals to regulate algorithmic systems are better addressed through more decentralized governance approaches.26 For example, algorithmic auditing can play a useful role if pursued in a flexible, bottom-up fashion.27 Audits and impact assessments are already used in many other fields to address safety practices, financial accountability, labor practices, supply chain practices, and so on. This is not always done pursuant to regulatory mandates; often, it is simply good business sense. 

For AI systems, audits and impact assessments can be done by various professional certification organizations, private consultancies or law firms, or other self-regulatory bodies. When things do go wrong, many existing remedies are available, including a wide variety of common law solutions (torts, class actions, contract law, etc.), the recall authority possessed by many regulatory agencies, and various consumer protection policies and other existing laws. If someone’s algorithm is thought to be biased in some fashion, we already have many layers of anti-discrimination law that can address it. 

The problem with using top-down, highly precautionary regulatory mandates to address such amorphous values and abstract concerns about algorithmic processes is that doing so could greatly undermine the nation’s ability to realize the full potential of these new technologies. What America must avoid is the sort of heavy-handed approach that the European Union (EU) has adopted for the digital economy and which it is now poised to expand with the new Artificial Intelligence Act.

The EU’s proposed AI Act, which is advancing rapidly, creates a new European Artificial Intelligence Board that will enforce a complex system of algorithmic “conformity assessments” and impose steep fines for violations. Compliance costs will likely devastate small and medium-sized enterprises and limit AI innovation across the continent, just as earlier EU regulations undermined the continent’s Internet companies over the past quarter century.28 The European Commission itself estimates that the mandate to set up the quality management systems required by the law will alone cost roughly €193,000-€330,000 upfront plus €71,400 in yearly maintenance costs.29

The European DIGITAL SME Alliance, Europe’s largest network of small and medium-sized enterprises (SMEs), has stated that “regulation that requires SMEs to make these significant investments, will likely push SMEs out of the market,” because smaller developers have limited financial and human resources.30 “This is exactly the opposite of the intention to support a thriving and innovative AI ecosystem in Europe,” the group argues.31 The AI Act is the sort of Precautionary Principle-based model that the U.S. has thus far avoided for the digital economy, but which new algorithmic fairness mandates could usher in for America. 

Conclusion

Every industry today relies on and benefits from computers and the algorithms that power them, thanks to America’s free and innovative culture and its light-touch regulatory environment for software.32 But the growing ubiquity of algorithms has given a new opening to those who, under the guise of concern over artificial intelligence, seek a more precautionary, centrally governed society. Regulating algorithms would insert government into ever more of our work, play, and personal lives. And it would forfeit America’s lead in technological innovation, to the detriment of all. 

Footnotes

  1.  Adam Thierer, “The Future of Innovation: Is This the End of Permissionless Innovation?,” Discourse, January 21, 2021, https://www.discoursemagazine.com/culture-and-society/2021/01/06/the-future-of-innovation-is-this-the-end-of-permissionless-innovation.
  2.  John Frank Weaver, “We Need to Pass Legislation on Artificial Intelligence Early and Often,” Slate, September 12, 2014, https://slate.com/technology/2014/09/we-need-to-pass-artificial-intelligence-laws-early-and-often.html.
  3.  World Economic Forum, “This Underestimated Tool Could Make AI Innovation Safer,” August 5, 2022, https://www.weforum.org/agenda/2022/08/responsible-ai-certification-programmes.
  4.  Ryan Calo, “The Case for a Federal Robotics Commission,” Brookings Institution, Washington, DC, September 2014, https://www.brookings.edu/research/the-case-for-a-federal-robotics-commission.
  5.  Anton Korinek, “Why We Need a New Agency to Regulate Advanced Artificial Intelligence: Lessons on AI Control from the Facebook Files,” Brookings, December 8, 2021, https://www.brookings.edu/research/why-we-need-a-new-agency-to-regulate-advanced-artificial-intelligence-lessons-on-ai-control-from-the-facebook-files.
  6.  Andrew Tutt, “An FDA for Algorithms,” Administrative Law Review, Vol. 69, No. 1 (2017), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2747994.
  7.  Ibid.
  8.  Olivia J. Erdélyi and Judy Goldsmith, “Regulating Artificial Intelligence: Proposal for a Global Solution,” AIES ’18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (December 2018): 95-101, https://dl.acm.org/doi/10.1145/3278721.3278731.
  9.  Gianclaudio Malgieri & Frank A. Pasquale, “From Transparency to Justification: Toward Ex Ante Accountability for AI,” Brooklyn Law School, Legal Studies Paper No. 712 (June 2022), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4099657.
  10.  Margot E. Kaminski, “Regulating the Risks of AI,” Boston University Law Review, Vol. 103 (forthcoming, 2023), at 83, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4195066.
  11.  Adam Thierer, “The Proper Governance Default for AI,” Medium, May 26, 2022.
  12.  H.R.6580 – Algorithmic Accountability Act of 2022, 117th Congress (2021-2022).
  13.  Jeffrey Westling, “Kids Online Safety Act Could Do More Harm Than Good,” American Action Forum Insight, September 22, 2022, https://www.americanactionforum.org/insight/kids-online-safety-act-could-do-more-harm-than-good/#ixzz7fcpzSq00.
  14.  Alden Abbott and Satya Marar, “Unintended Consequences: The High Costs of Data Privacy Laws,” The National Interest, July 19, 2022, https://nationalinterest.org/blog/techland-when-great-power-competition-meets-digital-world/unintended-consequences-high-costs.
  15.  Federal Trade Commission, Trade Regulation Rule on Commercial Surveillance and Data Security, 87 Fed. Reg. 51273, 51284 (Aug. 22, 2022), https://www.federalregister.gov/documents/2022/08/22/2022-17752/trade-regulation-rule-on-commercial-surveillance-and-data-security.
  16.  Eric Lander and Alondra Nelson, “Americans Need a Bill of Rights for an AI-Powered World,” Wired, October 9, 2021, https://www.wired.com/story/opinion-bill-of-rights-artificial-intelligence. 
  17.  White House, “Readout of White House Listening Session on Tech Platform Accountability,” September 8, 2022, https://www.whitehouse.gov/briefing-room/statements-releases/2022/09/08/readout-of-white-house-listening-session-on-tech-platform-accountability/.
  18.  Arizona House Bill 2685, (2022), https://legiscan.com/AZ/text/HB2685/id/2529877.
  19.  Office of the Attorney General for the District of Columbia, “AG Racine Introduces Legislation to Stop Discrimination In Automated Decision-Making Tools That Impact Individuals’ Daily Lives,” December 9, 2021, https://oag.dc.gov/release/ag-racine-introduces-legislation-stop.
  20.  Holly Barker, “First Amendment Hurdle Looms for California’s Social Media Law,” Bloomberg Law, September 16, 2022, https://news.bloomberglaw.com/litigation/first-amendment-hurdle-looms-for-californias-social-media-law.
  21.  Eric Goldman, “Five Ways That the California Age-Appropriate Design Code (AADC/AB 2273) Is Radical Policy,” Technology & Marketing Law Blog, September 15, 2022, https://blog.ericgoldman.org/archives/2022/09/five-ways-that-the-california-age-appropriate-design-code-aadc-ab-2273-is-radical-policy.htm.
  22.  Adam Thierer and Neil Chilson, “FCC’s O’Rielly on First Amendment & Fairness Doctrine Dangers,” FedSoc Blog, August 6, 2020, https://fedsoc.org/commentary/fedsoc-blog/fcc-s-o-rielly-on-first-amendment-fairness-doctrine-dangers.
  23.  Pew Research Center, “90% of Republicans Say It Is Likely That Social Media Sites Censor Political Viewpoints,” August 17, 2020, https://www.pewresearch.org/internet/2020/08/19/most-americans-think-social-media-sites-censor-political-viewpoints/pi_2020-08-19_social-media-politics_00-1/.
  24.  Senator Hawley Introduces Legislation to Amend Section 230 (June 19, 2019), https://www.hawley.senate.gov/senator-hawley-introduces-legislation-amend-section-230-immunity-big-tech-companies.
  25.  Thune: Big Tech Must be Held Accountable to American Consumers (June 15, 2022), https://www.thune.senate.gov/public/index.cfm/press-releases?ID=426873E3-2F5C-4188-873B-4DDE5BDBB477.
  26.  Adam Thierer, Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium, American Enterprise Institute (April 2022), https://platforms.aei.org/can-the-knowledge-gap-between-regulators-and-innovators-be-narrowed.
  27.  Adam Thierer, “Algorithmic Auditing and AI Impact Assessments: The Need for Balance,” Medium, July 13, 2022, https://medium.com/@AdamThierer/algorithmic-auditing-and-ai-impact-assessments-the-need-for-balance-4de134e48d23.
  28.  Adam Thierer, “Why the Future of AI Will Not Be Invented in Europe,” Technology Liberation Front, August 1, 2022, https://techliberation.com/2022/08/01/why-the-future-of-ai-will-not-be-invented-in-europe.
  29.  European Commission, “Study Supporting the Impact Assessment of the AI Regulation,” Final Report (April 21, 2021): 12, https://digital-strategy.ec.europa.eu/en/library/study-supporting-impact-assessment-ai-regulation.
  30.  European Digital SME Alliance, “Digital SME Reply to the AI Act Consultation,” August 6, 2021, at 2, https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665574_en.
  31.  Ibid.
  32.  Adam Thierer, “AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead,” Medium, September 10, 2022, https://medium.com/@AdamThierer/ai-eats-the-world-preparing-for-the-computational-revolution-and-the-policy-debates-ahead-d608cdaba87.