Spring Bridge on AI: Promises and Risks
April 15, 2025 | Volume 55, Issue 1

This issue of The Bridge features fresh perspectives on artificial intelligence's promises and risks from thought leaders across industry and academia.

Guest Editors' Note: Realizing the Transformative Potential of AI
Wednesday, April 16, 2025
Authors: William Isaac and Marian Croak

The rapid rise of artificial intelligence (AI) presents a defining moment in human history. The past year bore witness to both exhilarating progress and growing anxieties surrounding AI's expanding role in society. The advent of generative AI, capable of crafting human-quality text and visuals, has ignited widespread innovation. Yet this progress has also spurred concerns about value alignment, safety, misuse, and misinformation. Across the globe, regulatory and geopolitical concerns are emerging as the technology becomes more capable and competition intensifies. As AI becomes increasingly intertwined with our lives, the need to ensure that AI development and deployment are guided by human values and societal well-being has become urgent and paramount.

This issue of The Bridge delves into this complex landscape, offering a rich tapestry of perspectives on AI's promises and challenges. Articles in this issue explore critical themes such as AI evaluation science, the imperative of transparency and user trust, the emergence of spatial intelligence, AI's potential for tackling societal challenges, policy innovation in AI governance, and the alignment of AI systems with human attitudes toward risk.

Several cross-cutting themes emerge from these contributions, underscoring the interconnected nature of AI's many facets. One prominent theme is the crucial importance of responsible AI development and deployment, encompassing safety, reliability, and alignment with human values.
Another key theme is the need for transparency and user trust: as AI systems become more sophisticated and integrated into our lives, users must be able to comprehend their workings and limitations. Additionally, the articles in this issue emphasize the transformative potential of AI across diverse domains, from healthcare and education to robotics and governance, while recognizing the need for careful consideration of ethical and societal implications.

In this issue:

Laura Weidinger, Deb Raji, Hanna Wallach, Margaret Mitchell, Angelina Wang, Olawale Salaudeen, Rishi Bommasani, Sanmi Koyejo, and William Isaac illuminate the urgent need for a more robust and comprehensive approach to AI evaluation in "Toward an Evaluation Science for Generative AI Systems." They propose an evaluation science for AI, drawing lessons from fields such as medicine and civil engineering, where evaluation has played a critical role in ensuring safety and reliability.

Fernanda Viégas and Martin Wattenberg explore the concept of AI dashboards as a way to provide real-time information about the internal states of AI systems in "Dashboards for AI: Models of the User, System, and World." They argue that such dashboards can promote transparency and user trust, enabling more effective human-AI interaction.

Fei-Fei Li discusses exciting advances in computer vision and spatial intelligence in "The Next Frontier in AI: Understanding the 3-D World." Li highlights how AI is being developed to understand and interact with the 3-D world, opening up new possibilities in fields such as robotics and healthcare.
Yossi Matias, Avinatan Hassidim, and Philip Nelson provide compelling examples of AI innovations that are helping to preserve the climate, improve health outcomes, and create a more accessible world for everyone in "AI's Capabilities Make It a Powerful Tool for Driving Societal Impact." The authors emphasize the importance of responsible AI development and deployment to ensure that these benefits are realized for all.

Alondra Nelson challenges the prevailing notion that AI innovation outpaces policy development in "Disrupting the Disruption Narrative: Policy Innovation in AI Governance." Nelson advocates proactive and innovative policymaking to ensure that AI technologies are developed and used responsibly.

Elisabeth Paté-Cornell examines how the risk attitudes of AI systems align with those of human decision-makers in "Alignment of AI Systems' Risk Attitudes, and Four Real-Life Examples." Paté-Cornell discusses the importance of ensuring that AI systems' risk preferences are consistent with those of humans, particularly in critical domains such as healthcare and national security.

As AI continues its advance, we must confront critical questions about its future. How can we ensure that AI benefits all of humanity? What are the ethical and societal implications of increasingly sophisticated AI systems? How can we foster transparency and user trust in AI? And how can we govern AI in a way that promotes innovation while safeguarding against potential risks? These are just a few of the questions that demand our attention as we navigate the transformative landscape of AI.

The articles in this issue offer valuable insights on these and other crucial questions. We hope they will stimulate further discussion and debate, ultimately contributing to a more informed and responsible approach to AI development and deployment.

We express our sincere gratitude to all the authors for their insightful contributions to this special issue.
We also extend our appreciation to the entire Bridge editorial team for their tireless efforts in bringing this issue to fruition. We trust that you will find these articles both informative and thought-provoking.

About the Authors: William Isaac is a principal scientist and head of responsible research at Google DeepMind. Marian Croak (NAE) is vice president of Society-Centered AI and Foundational ML at Google.