Invisible Bridges: When AI Dies

Monday, September 29, 2025

Author: Guru Madhavan

A brush with death sharpens our focus. When the Supreme Court forced TikTok to choose between sale and silence, 170 million Americans glimpsed the loss of a dopamine-driven feed powered by an artificial intelligence (AI) moving billions in commerce. That fragility reminds us that a world built on intelligent systems can vanish suddenly, leaving little trace.

The tech industry celebrates each AI birth—sharper reasoning, smoother conversation, slicker images—but it rarely acknowledges AI mortality. Yet this culture of disposable intelligence carries both market and moral costs. Microsoft’s Tay lived and died in a feverish 24-hour cycle. Google’s Duplex withered after early promise. IBM’s Watson for Oncology faded without a eulogy, forcing hospitals to adapt. Tesla’s shift from radar to vision unsettled its systems, demanding reconfiguration. These departures seldom appear in investor presentations; they are obituaries written in invisible ink.

When AI dies, it follows three patterns:

First: abrupt termination. Core systems shut down and leave vacuums. When platforms vanish, digital communities scatter and the algorithms that powered commerce or secured transactions disappear. Adobe’s discontinuation of Flash, although not AI, left agencies scrambling to replace a utility once thought permanent. When DeepMind retired AlphaGo, it ended access to a system that had reshaped a 2,500-year-old game. Microsoft folded Bing Chat into Copilot, altering how users accessed its tools. Google’s Gemini restructuring stirred unease. Each shift forced rapid adjustments on consumers, often amid uncertainty about continuity.

Second: gradual obsolescence. Systems that once seemed indispensable lose relevance as conditions shift. Legacy COBOL programs still anchor core banking workloads, carried forward by necessity. These aging sentinels guard billions in transactions yet grow increasingly blind to sophisticated threats. IBM’s punch cards once powered entire industries before fading away. The telegraph, once as vital as AI today, disappeared as a mainstream service, leaving rusted wires as its epitaph. Without planning for graceful exits, AI risks the same fate.

Third: residual influence. The mark of defunct technologies lingers long after they disappear. Despite detailed documentation, much of NASA’s Apollo know-how lived in engineers’ minds. When they retired, gaps opened and vital knowledge was lost: why certain materials were used, how systems were backed up. The rotary telephone is long silent, yet its echoes remain in the dial tone and in the way we still “hang up” a call. AI systems, too, leave embedded assumptions. Even updated facial recognition models inherit biases from predecessors, quietly embedding vulnerabilities long after their code has gone cold.

Services like Gmail reflect all three patterns. Abrupt termination: a cyberattack could erase emails and authentication histories (indeed, in 2011 a software bug wiped some accounts before Google restored them from tape, and in 2020 global outages underscored operational dependence). Gradual obsolescence: outdated software may misclassify malicious emails and files as legitimate. Residual influence: algorithm-optimized text prediction erodes individual voices and creates patterns that can be exploited. We have stored our thoughts on platforms we neither own nor replace, leaving us as digital hostages vulnerable to risks we barely understand.

The designer Joe Macleod coined the term “endineering” to describe how products and services should be designed for closure. The same principle belongs in AI. Systems that die should not leave voids but relay knowledge forward, like fallen trees whose decay fosters seedlings and passes on ecological knowledge. Applied here, endineering means designing for continuity and treating intelligence as enduring knowledge rather than disposable technology. When IBM retired Watson for Oncology, its underlying methods and data informed other medical research efforts—showing how even endings can contribute to new beginnings. Dead systems should nourish successors rather than vanish, ensuring that decline becomes preparation for growth.

Forward-thinking organizations have recognized this problem and adopt critical practices to address it: they document institutional knowledge beyond technical specifications, track how teams adapt to technology, and conduct premortems to anticipate failures. This discipline confers a competitive edge during transitions.

We have built AI into a dazzling but deficient genius, a machine of vast memory with no legacy. TikTok’s fragility demonstrates how regulation, political considerations, or other shocks can unravel years of algorithmic craft in a moment. As our dependence on AI deepens, the challenge is no longer how we give it life, but how we prepare for its afterlife.

Future-proofing requires more than continuity. It calls for blueprints that transfer knowledge, threat models that account for transitions, and designs that compost old systems into nutrients for what comes next. Good engineering has always understood that knowledge is meant to endure and to guide practice. If we abandon that precept, we risk deleting brilliance: all processing, no posterity.

About the Author: Guru Madhavan is the Norman R. Augustine Senior Scholar and senior director of programs at the NAE.