Spring Bridge on AI: Promises and Risks
April 15, 2025 | Volume 55, Issue 1

This issue of The Bridge features fresh perspectives on artificial intelligence's promises and risks from thought leaders across industry and academia.

Editor in Chief's Note
Wednesday, April 16, 2025
Author: Ronald M. Latanision

First, I want to thank William Isaac and Marian Croak for serving as guest editors of this issue on artificial intelligence (AI). They have assembled an issue that touches all the bases in terms of the pressing matters regarding AI, ranging from technical capabilities, performance standards, and risks to governance, regulatory guardrails, and, ultimately, social impact.

There is a lot of good that has come from AI, in health care and the development of new materials, for example. And there are surely more positive developments to come. But I am equally certain that this technology, like others before it, can be and is being used abusively.

The changes that began in November 2022 with the release of generative AI are truly remarkable. GenAI is not just another new technology; it has the potential to revolutionize the way we work and live. Its development is, in a word, earth-shattering. One could say that, to the average thoughtful person, the introduction of the telephone or the Ford Model T must have been just as momentous. What is different in the case of GenAI is that it does not just add a new dimension to our lives; it presents technology as a force beyond nature. GenAI apparently thinks and feels, though it is not yet clear on what scale and in what detail relative to human thinking. Granted, we don't really understand the particulars of how humans think either. The projected proliferation of AI attendant on the recent DeepSeek announcements, if fully realized, would make AI even more daunting and essentially unencumbered by cost constraints.
I worry that this technology may be heading so far out in front of humans that people may begin to broadly distrust science and technology on an unprecedented level. That erosion of trust would be to our collective misfortune. Technology and technologists have crucial roles to play in advancing medicine, meeting energy demand, addressing climate change, improving K-12 education, and much more.

Before the recent advances in AI, I hoped that we had learned useful lessons from the history of the internet and the web that would lead to a responsible and accountable integration of AI into our social fabric. But I see no evidence that we have learned much of anything from that history. My sense is that GenAI has the potential to be supremely useful and also supremely abusive (personally, socially, and culturally). We must all be concerned with reducing risks and ensuring that GenAI is used in constructive and societally beneficial ways. As with any technology, the future of GenAI will be determined by how people choose to use it: for good purposes or bad. I am confident that it will be used for both. That is why it is so essential that we introduce GenAI in ways that maximize its potential benefits and anticipate and reduce its potential harms, for individuals and for society.

Technologists design engineering systems based on verifiable facts; to do otherwise would lead to the failure of such systems. The same applies in our contemporary culture in many ways. Who would, for example, trust a surgeon to operate without valid, fact-based diagnostics? Technologists must look to facts for validation in designing engineering systems that work. Could AI be trained to solve its own problems? Could it, for example, be required to train on validated data? We should strive to ensure that generative AI systems are built on reliable facts and evidence-based research.
We, as scientists, engineers, and technologists, must ensure that our work is grounded in quality data and truth. AI developers should be held to these same standards. At the same time, our goal must be the responsible development and introduction of generative AI. We should work toward a society of bots and humans that coexist on terms that preserve rather than destroy the best that humans have to offer.

Finally, I want to acknowledge two gentlemen with whom I share many conversations about technology in general and AI in particular: Ron Smith of Innovation Toronto and Marv Goldschmitt of Bedford, Massachusetts. Ron and Marv have both had distinguished careers at the leading edge of technology, and they have added a freshness to my thinking that I treasure. We don't always agree, but we are never personally disagreeable.

For this issue, we planned to include an interview, conducted in December 2024, with an engineer who worked to address public policy matters. Given the rapid shifts that have accompanied the new presidential administration, the interviewee requested that we not publish the interview, and we honored that request.

As always, I welcome your comments. Feel free to reach out to me at rlatanision@alum.mit.edu.

About the Author: Ronald M. Latanision (NAE) is a senior fellow at Exponent, the Neil Armstrong Distinguished Visiting Professor at Purdue University, and editor in chief of The Bridge.