Spring Bridge on AI: Promises and Risks
April 15, 2025 | Volume 55, Issue 1

This issue of The Bridge features fresh perspectives on artificial intelligence’s promises and risks from thought leaders across industry and academia.

Disrupting the Disruption Narrative: Policy Innovation in AI Governance
Monday, April 14, 2025
Author: Alondra Nelson

Governance should not be understood as an impediment to AI innovation but as an essential component of it.

“Disrupt!” has been a mantra of technology-driven commerce for more than three decades. Clayton M. Christensen, the late Harvard Business School professor, pioneered the analysis of this idea in practice—which he termed “disruptive innovation”—developing influential research that highlighted strategies for identifying novel approaches to capture incumbents’ business and emerging markets (1997).

Disruption of the existing market economy manifests in many ways. It can be technological, as in the late 20th century, when vacuum tubes were overtaken by transistors, revolutionizing semiconductor development and consumer electronics production by enabling products that were inexpensive, efficient, and portable (Riordan and Hoddeson 1998). Disruption also occurs through the reshaping of markets: the introduction of products that may be more accessible and affordable—often marketed as “free” despite hidden costs—expands the consumer base (Christensen 1997; Shapiro and Varian 1998; Terranova 2000).

One prevalent depiction of the disruption economy is a foot race, with the “hare” of technological innovation speeding past the slow-moving “tortoise” of policy and governance. This characterization has gained particular traction in discussions about artificial intelligence (AI), as its expanding use across society has elevated technology governance to one of the most pressing challenges of our time. The accompanying narrative that agile, meaningful oversight is impossible due to the speed of innovation has become especially entrenched in discussions of AI systems, which are said to evolve so rapidly and to transform society so fundamentally that policy frameworks cannot possibly keep pace.

This prevailing perspective on the relationship between disruption and innovation is both incomplete and inaccurate. A critical missing element is the recognition that some tech industry actors purposely seek to evade or resist regulatory frameworks as a deliberate business strategy (Edelman and Geradin 2016), integrating into their core mission not only technological and consumer-facing transformations but also regulatory arbitrage (Cohen 2019; Zuboff 2019). Moreover, this approach includes tactics for skirting, bending, circumventing, or resisting existing legal frameworks (Hussain et al. 2020; Rahman and Thelen 2019); it serves as its own engine of disruption, not merely a secondary effect or unintended consequence of it.

A Cautionary Tale for AI Governance: Ridesharing Companies and Regulatory Arbitrage

The rise of ridesharing fundamentally transformed mobility patterns, traffic congestion, labor relations, and public transportation ecosystems (Calo and Rosenblat 2017; Shaheen et al. 2016), causing a range of harms: the erosion of workers’ rights and quality of life (Dubal 2017; Malin and Chandler 2017), increased traffic congestion and associated pollution risks (Erhardt et al. 2019), threats to consumers’ personal safety and discrimination against riders (Ge et al. 2020; Hoskins 2022), and the surveillance of users (Rosenblat 2018).

Ridesharing companies have systematically shaped public discourse around their negative impacts, employing strategic regulatory avoidance and undermining existing legal frameworks (Rahman and Thelen 2019). They divert attention from their role in creating harms to workers, transportation systems, and urban infrastructure, presenting themselves as innovators while actively subverting regulatory oversight designed to protect public interests. These efforts to set the terms of debate on the public harms to which they contribute exemplify a strategy of deliberate regulatory evasion and defiance. The ridesharing example thus illustrates key considerations for AI policymaking, as AI companies similarly seek to set the conditions of their own governance by disregarding or undermining rules, laws, and policies, with concomitant harms.

When the ridesharing company Uber emerged in the United States, it strategically entered markets where the regulation of alternative transportation services remained undefined and unlegislated (Christensen et al. 2015; Rahman and Thelen 2019). A decade ago, Uber and Lyft drivers in Utah faced tickets and substantial fines on the companies’ behalf until the companies met statewide requirements for background checks, liability insurance, and other public safety benchmarks (Price 2015). In New York City, ridesharing startups confronted an established taxi medallion system, which gave policymakers legal frameworks with which to temporarily resist Uber’s entry into one of the world’s largest transportation markets (Dubal 2017). Ridesharing companies intentionally operated outside existing taxi and transportation regulations, arguing that their technology-enabled services constituted an entirely new category requiring different legal treatment (Davis 2015; Thelen 2018).

Understanding regulatory arbitrage as a calculated strategy (Pollman and Barry 2017) brings perceived gaps between innovation and regulation into clearer focus: they are deliberately engineered outcomes serving specific business interests, not an inevitable consequence of the pace of technology (Christensen et al. 2015; Pasquale 2015; Zuboff 2019). When we view disruption as a strategy to circumvent laws, rather than as an inevitable outgrowth of technological development, we better understand how this approach proactively undermines regulatory guardrails (Cohen 2019). Fully understanding this dynamic also opens new possibilities for AI governance: policy innovation can itself become a form of disruptive innovation.

Building on this understanding, we can see how casting governance as a drag on innovation severely limits the spectrum of possibilities for effective AI policy. It hampers the development and implementation of the crucial organizational, corporate, and governmental guardrails needed to mitigate risks and prevent harm; ensure the safe design, production, and deployment of new technologies; and harness their potential. This restrictive framing fundamentally limits how policymakers and the public conceptualize and pursue viable approaches to AI governance.
The commercial deployment of generative AI has precipitated numerous ongoing legal challenges concerning training data provenance (Grynbaum and Mac 2023), intellectual property rights (Brittain 2025), and competition law violations (Ciaccia 2024), collectively representing an emerging wave of governance disruption. But this new wave of disruption also presents an opportunity to challenge the presumed inevitability of regulatory lag and to leverage policy innovation to achieve more beneficial outcomes for AI use.

AI Policy Innovation: A Multi-Faceted Approach

Addressing these challenges requires a renewed commitment to AI policy innovation. Policy enables desired future states across institutions; policy innovation creates the conditions for achieving those states, including the strategic development and implementation of novel approaches to principles, rules, and guidelines that can address governance challenges.

The perception that AI governance inherently lags behind technological development overlooks an immediate solution: the application of existing laws, rules, regulations, and standards. A significant barrier to this approach, despite its being the most agile response to emerging technology, has been the persistent industry framing of AI—like many transformative technologies before it—as so fundamentally novel that existing governance frameworks cannot possibly address it (Selbst and Barocas 2018). This narrative has led to AI being characterized as essentially ungovernable.

The path to effective AI governance begins with demystifying artificial intelligence itself (Crawford 2021). While AI systems demonstrate remarkable and expanding capabilities, they remain fundamentally human-created tools with specific limitations and constraints. This foundational understanding helps to counter narratives that can paralyze effective policymaking. By recognizing AI as a product of human choices and decisions, we maintain a clearer perspective on our agency in shaping its development and deployment (Winner 2021). This demystification enables more pragmatic and effective governance approaches.

To strengthen AI governance despite industry resistance, policymakers can pursue three complementary approaches. First, they can leverage existing regulatory frameworks and legal mechanisms, adapting and applying current law to AI challenges. Second, they can develop the new policy instruments and governance structures that may be required to address unique aspects of AI systems. And third, they can embrace an iterative approach to policy development that allows for rapid learning, adjustment, and evolution as technologies and their impacts continue to emerge. This multi-faceted approach to policy innovation enables more responsive and effective governance while avoiding the false choice between public protection and technological development.

Existing Laws

Guardrails are essential enablers, not obstacles. In response to criticisms that regulation is too slow, governments and organizations can resist industry rhetoric that new technologies like AI transcend all prior conceptions of laws, norms, and rules. Using this approach, safeguards can be swiftly deployed through the innovative application of existing governance frameworks—including established norms, rules, laws, and standards.
While many of these laws were not designed with AI in mind, their intended outcomes—including safety, inclusion, accessibility, and equitable use—remain vital goals even as technology evolves. In some instances, leveraging existing laws and policies represents the most expedient path to responsive technology governance.

For example, President Biden’s 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence emphasized that “the use of new technologies, such as AI, does not excuse organizations from their legal obligations, and hard-won consumer protections are more important than ever in moments of technological change.” The order stated that the “Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI” (EOP 2023). While executive orders may be subject to changes by subsequent administrations—this one was revoked by the Trump administration in early 2025—the example demonstrates the crucial principle that new technologies do not necessarily require reconstructing the social compact.

In practice, this governance approach has been manifested through concrete actions. The US Equal Employment Opportunity Commission issued guidance applying the Americans with Disabilities Act to the use of software, algorithms, and AI in hiring practices (EEOC 2022). Similarly, the Federal Trade Commission launched “Operation AI Comply” to pursue cases against companies that used “AI tools to trick, mislead, or defraud people” (FTC 2024).

Furthermore, state-level legal and regulatory frameworks provide valuable models for AI governance. Illinois’ Biometric Information Privacy Act, for instance, offers a template for protecting individual privacy rights in the AI era, while existing anti-discrimination laws provide mechanisms for addressing algorithmic bias (Citron and Pasquale 2014). In addition, federal agencies are actively developing guidance and rules to adapt conventional laws concerning intellectual property, copyright, and fair use to the AI context (e.g., USPTO 2024).

Effective AI governance must be grounded in fundamental democratic values and human rights. Just as the US Bill of Rights established essential protections for American democracy, AI governance frameworks must articulate and protect core societal values (Blueprint for an AI Bill of Rights, OSTP 2022; Lander and Nelson 2021). This requires balancing innovation with the public good, ensuring algorithmic systems respect human dignity and rights, and maintaining democratic oversight of increasingly powerful technologies.

In the courts, writers, artists, musicians, and media companies have filed copyright suits against AI companies (Brittain 2023), seeking to apply existing law to the deployment of these tools and systems in creative industries. While the outcomes of these cases remain to be seen—and the plaintiffs could lose—these legal proceedings represent enforcement through existing law and also model innovative uses of law and policy for the AI era.

New Laws

The creation of innovative policy tools and governance frameworks is essential for addressing AI’s distinctive challenges. Although conventional governance approaches remain useful, effective AI regulation may require developing diverse mechanisms across many institutional settings.
We are already seeing attempts to expand policy in cases like Mobley v. Workday, in which the plaintiff argues that Workday, an AI software vendor, should be classified as an “employer” under employment discrimination law (Wiessner 2024). Concurrently, labor organizations have emerged as significant actors in AI governance, as demonstrated by the strategic actions of the Screen Actors Guild-American Federation of Television and Radio Artists, which successfully negotiated substantial concessions regarding AI implementation, including specific restrictions on the creation and use of both “digital replicas” of human performers and AI-generated “synthetic performers” (Franzen 2023).

State-level legislative initiatives further exemplify this multifaceted approach to AI governance. In California, although Governor Gavin Newsom vetoed the widely debated Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), he signed 18 other AI-related bills into law in 2024, including measures to address election deepfakes and enhance protections for actors, demonstrating the breadth of possible regulatory responses (Ables and De Vynck 2024). Similarly, the 2024 Colorado Artificial Intelligence Act establishes comprehensive obligations for AI developers and deployers, requiring them to protect consumers from foreseeable risks and harms, including algorithmic discrimination in crucial domains such as employment, education, housing, insurance, lending, and healthcare services (Colorado Consumer Protections for Artificial Intelligence Act 2024).

These governance efforts illustrate how institutions are not merely responding reactively but are proactively and innovatively establishing meaningful parameters for AI development and deployment, particularly where it intersects with social and economic welfare.

Iteration

The dynamic nature of AI technology presents unprecedented governance challenges that distinguish it from 20th-century technological innovations. Unlike relatively static technologies such as automobiles and semiconductors, some AI systems demonstrate the capacity for continuous evolution. This fundamental characteristic necessitates a reconceptualization of regulatory approaches, as traditional frameworks predicated on stable definitions and clear boundaries may prove insufficient for governing such dynamic systems.

The US Department of Commerce’s National Institute of Standards and Technology (NIST) offers a model for an innovative approach to AI governance. Building upon its constitutional mandate to “fix the standard of weights and measures,” NIST has expanded beyond its traditional role of establishing fundamental measurement standards to address the complexities of AI systems. Its AI Risk Management Framework 1.0 (2023) represents a significant departure from conventional standards development, introducing software development practices such as versioning into government standard-setting processes. While this framework—created collaboratively with industry, academia, and civil society—remains voluntary, effective AI governance requires a combination of laws, norms, and standards, and such adaptable mechanisms are essential components of a comprehensive approach. This novel approach to AI policy acknowledges that contemporary technological systems extend far beyond basic measurement, encompassing complex decision-making capabilities that demand more sophisticated governance frameworks.
The framework’s sociotechnical orientation recognizes that effective standards must address not only technical specifications but also human and societal factors—specifically, how AI systems interact with and affect individuals and communities in real-world contexts. The integration of versioning practices, while commonplace in software development, represents a significant innovation in governmental standard-setting, demonstrating how governance frameworks can be both robust and adaptable. The development of iterative guidelines, rules, and norms has become essential for effective governance of emerging technologies, particularly advanced AI systems.

Policy Innovation as Positive Disruption

The prevailing discourse around disruption has typically cast governance as an impediment to innovation. However, effective governance and technology development are not opposing forces but complementary elements in creating safe, sustainable, trustworthy, and beneficial AI systems. Policy innovation—whether through the application of existing frameworks, the development of new governance mechanisms, or the adoption of iterative approaches—represents its own form of positive disruption. This disruption manifests not as a circumvention of necessary guardrails but as a creative force that can catalyze technological ingenuity while protecting and enhancing societal wellbeing.

By recognizing governance as an essential component of technological development and deployment rather than an obstacle to it, we open new possibilities for addressing AI’s challenges and opportunities. The examples of policy innovation described here—from NIST’s versioned frameworks to state-level legislative initiatives—demonstrate that governance can be both robust and adaptable, creating a foundation for AI development that is both innovative and responsible. As we continue to navigate the complexities of AI governance, the understanding that true innovation involves constructive disruption that embraces socially responsible technology design and use will be essential.

Acknowledgments

Thank you to Chiraag Bains and Hannah Bloch-Wehba for their thoughtful, incisive feedback, which helped to improve this essay.

References

Ables K, De Vynck G. 2024. California passes AI laws to curb election deepfakes, protect actors. The Washington Post, Sept 18.
Brittain B. 2025. Anthropic reaches deal in AI guardrails lawsuit over music lyrics. Reuters, Jan 3.
Calo R, Rosenblat A. 2017. The taking economy: Uber, information, and power. Columbia Law Review 117(6):1623–90.
Christensen CM. 1997. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Harvard Business School Press.
Christensen CM, Raynor M, McDonald R. 2015. What is disruptive innovation? Harvard Business Review 93(12):44–53.
Ciaccia C. 2024. Google’s partnership with Anthropic formally probed by UK. Seeking Alpha, Oct 24.
Citron DK, Pasquale F. 2014. The scored society: Due process for automated predictions. Washington Law Review 89(1):1–33.
Cohen JE. 2019. Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press.
Colorado Consumer Protections for Artificial Intelligence Act, SB24-205, 2024 Regular Session. Online at https://leg.colorado.gov/bills/sb24-205.
Crawford K. 2021. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.
Dubal VB. 2017. The drive to precarity: A political history of work, regulation, and labor advocacy in San Francisco’s taxi and Uber economies. Berkeley Journal of Employment and Labor Law 38(1):73–135.
Edelman B, Geradin D. 2016. Efficiencies and regulatory shortcuts: How should we regulate companies like Airbnb and Uber? Stanford Technology Law Review 19:293–328.
EEOC (US Equal Employment Opportunity Commission). 2022. The Americans with Disabilities Act and the use of software, algorithms, and artificial intelligence to assess job applicants and employees. Technical Assistance Document No. EEOC-NVTA-2022-2.
EOP (Executive Office of the President). 2023. Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Executive Order 14110, 88 Fed. Reg. 75191, Oct 30.
Erhardt GD, Roy S, Cooper D, Sana B, Chen M, Castiglione J. 2019. Do transportation network companies decrease or increase congestion? Science Advances 5(5):eaau2670.
Franzen C. 2023. Hollywood actors’ strike ends with deal to ‘protect members from the threat of AI.’ VentureBeat, Nov 8.
FTC (Federal Trade Commission). 2024. FTC announces crackdown on deceptive AI claims and schemes, Sept 25.
Ge Y, Knittel CR, MacKenzie D, Zoepf S. 2020. Racial and gender discrimination in transportation network companies. Journal of Public Economics 190:104205.
Grynbaum MM, Mac R. 2023. The Times sues OpenAI and Microsoft over A.I. use of copyrighted work. The New York Times, Dec 27.
Hoskins P. 2022. Uber sued in US over sexual assault claims. BBC News, July 14.
Hussain S, Bhuiyan J, Menezes R. 2020. How Uber and Lyft persuaded California to vote their way. Los Angeles Times, Nov 13.
Lander E, Nelson A. 2021. Americans need a Bill of Rights for an AI-powered world. WIRED, Oct 8. Online at https://www.wired.com/story/opinion-bill-of-rights-artificial-intelligence/.
Malin BJ, Chandler C. 2017. Free to work anxiously: Splintering precarity among drivers for Uber and Lyft. Communication, Culture and Critique 10(2):382–400.
NIST (National Institute of Standards and Technology). 2023. Artificial Intelligence Risk Management Framework (AI RMF 1.0). US Department of Commerce. Online at www.nist.gov/itl/ai-risk-management-framework.
OSTP (Office of Science and Technology Policy). 2022. Blueprint for an AI Bill of Rights: Making automated systems work for the American people. The White House.
Pasquale F. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Pollman E, Barry JM. 2017. Regulatory entrepreneurship. Southern California Law Review 90:383–448.
Price M. 2015. Bill before Utah governor to regulate ride-hailing companies. Associated Press, March 26.
Rahman KS, Thelen K. 2019. The rise of the platform business model and the transformation of twenty-first-century capitalism. Politics & Society 47(2):177–204.
Riordan M, Hoddeson L. 1998. Crystal Fire: The Invention of the Transistor and the Birth of the Information Age. W. W. Norton & Company.
Selbst AD, Barocas S. 2018. The intuitive appeal of explainable machines. Fordham Law Review 87(3):1085–139.
Shaheen S, Cohen A, Zohdy I. 2016. Shared mobility: Current practices and guiding principles (Report No. FHWA-HOP-16-022). US Department of Transportation, Federal Highway Administration.
Shapiro C, Varian HR. 1998. Information Rules: A Strategic Guide to the Network Economy. Harvard Business Press.
Terranova T. 2000. Free labor: Producing culture for the digital economy. Social Text 18(2):33–58.
Thelen K. 2018. Regulating Uber: The politics of the platform economy in Europe and the United States. Perspectives on Politics 16(4):938–53.
USPTO (US Patent and Trademark Office). 2024. USPTO issues guidance concerning the use of AI tools by parties and practitioners, April 10. Online at www.uspto.gov/about-us/news-updates/uspto-issues-guidance-concerning-use-ai-tools-parties-and-practitioners.
Wiessner D. 2024. Workday must face novel bias lawsuit over AI screening software. Reuters, July 16.
Winner L. 2021. The democratic shaping of technology: Its rise, fall and possible rebirth. Engaging Science, Technology, and Society 7(1). Online at https://doi.org/10.17351/ests2021.825.
Zuboff S. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.

About the Author: Alondra Nelson (NAM) is the Harold F. Linder Professor at the Institute for Advanced Study.