
An Interview with . . . Marv Goldschmitt, entrepreneur, AI analyst, and technological philosopher

Thursday, June 12, 2025

Author: Marv Goldschmitt

RONALD LATANISION (RML): I’m happy to welcome Marv Goldschmitt to our Bridge interview series. This interview represents a bit of a departure from how we’ve done these interviews since they began 11 years ago. Historically, we have interviewed engineers who have done something in their careers that goes beyond what one expects of engineers. We have interviewed singers, dancers, poets and writers, politicians, rock band musicians, and so on. But today, we’re actually going to take that and reverse it: We’re going to talk with Marv Goldschmitt, who is a psychologist by training but who spent most of his career in the high-tech industry, about a subject of great interest to the public and, in fact, the subject of the spring issue of The Bridge, artificial intelligence (AI). We’d like to get Marv’s take on artificial intelligence from his perspective as someone who has been deeply involved with high tech for many years, as a psychologist, and as someone very deeply interested in social values.
 
Welcome, Marv. I’d like to begin by asking you to tell us a little bit about your background, your childhood, and your education, and then we’ll go into the depths of AI.
 
MARV GOLDSCHMITT: Ron, Kyle, thank you for inviting me. Diving right into my background, I had a somewhat unusual beginning, which affected a lot of what I’m going to talk about. Both my parents were Holocaust survivors. My father spent more than three years in Auschwitz. As a teenager, my mother was a slave laborer on German U-boats. I grew up in a fairly non-traditional household. My parents were pretty broken, to be honest; they were just trying to survive and give my sister and me an opportunity in life. But they were not really enculturated in America; they didn’t understand anything about it. Both of them were fairly uneducated.

From the beginning, I’ve always known that I was different from my friends, who had large families, went to camp, etc., and there were things that interested me that my parents had no understanding of. I was pretty much a self-starter as a child, a self-navigator. And that included finding my way to the top science high school in the world, the Bronx High School of Science, to become a theoretical physicist. Then I met the people who would go on to receive Nobel Prizes, and I realized that that wasn’t my strength.
 
I drove myself in that direction out of curiosity. But I would say that my parents’ experience primed me for how the world is not as it appears, that things are not as reliable as you’d like, whether in government or in relationships. And I was always curious about how people think, how they can do certain things to other people, and how they don’t necessarily seem to make the best decisions for themselves. That curiosity eventually led me to pursue psychology as a profession.

I was at Bronx Science in the late 1960s, and that was a pretty roiling period: the civil rights movement, the Vietnam War, and the hippie movement, which had a huge impact on me. And eventually, through a long period, I became involved with the guru to the Beatles, Maharishi Mahesh Yogi, and Transcendental Meditation, and I worked with Maharishi for about seven years and lived with him in Europe, Canada, and California. I started to develop more of an appreciation for human potential. And that convinced me to finally make the commitment to become a psychologist.
I saw a lot of people in pain, and I wanted to help them, along with just wanting to understand how we tick. That was quite interesting to me, and I don’t mean it in an intellectual way but in an emotional one, given my family history.
But there came a turning point, on a particular day in late April of 1979, when I was with a friend of mine, Mitch Kapor, who is pretty well known these days. My wife-to-be and I were living with him in Watertown, Massachusetts. And the first day we moved in, he introduced me to an Apple II computer. My life changed. It changed for a very specific reason: I realized that, for the first time, a minute of human time was worth more than an hour of computing time. I had done work before on computers, on IBM 360s and such, as a researcher. I would do punch cards and wait a day to get results, to get a stack of green printouts that showed me all my errors. I’d do another set of punch cards, wait another day or two days or sometimes a week to get results. My time was valueless. Its time was valuable. The Apple II swapped that around. And then I realized that that computing power would be under people’s control. That was monumental.

Within two months, I was working in the third oldest computer store, and I helped to introduce VisiCalc to the world, which was the first spreadsheet program, created by Dan Bricklin and Bob Frankston. That was the program that changed everything; it showed the future. They invented personal productivity software. I jumped all over that idea.
 
Three years later, I was the head of business development and marketing at a startup called Lotus Development, and I was responsible for the introduction of what many people considered to be the killer app of the computer industry, Lotus 1-2-3. That truly launched my career. I think of myself as almost the ultimate tourist. I had the world presented to me. If I hadn’t walked into Mitch’s bedroom and seen the Apple II, my life might’ve taken a very different path. And pretty much since the beginning of the introduction of adaptable, programmable systems into people’s lives, I’ve been involved in the leading edge of many of the technologies that we live with today. That led to my concerns about data and, ultimately, AI.
 
In 1994, I filed the first patent for ad-supported services on the internet for free email. I filed the patent that showed how to use banner ads and all that. At the time, I understood that it was potentially problematic. And, in fact, I almost didn’t bring it to my partner. But I came to the conclusion that, while I’d likely know more about what people thought and did than they knew themselves, there was more benefit than risk. Frankly, at the time I didn’t fully grasp the great potential for harm that we are all seeing today. I thought that the balance was better if humanity had something like free email. There were risks. It was a judgment call I made. It’s one I’d probably make very differently in hindsight.
 
As I said, I came to AI, and my concerns about it, from an unusual background, having great opportunities, and, to be honest, just being a curious person.
 
DR. LATANISION: You said several things, Marv, that really intrigue me. Number one: You have this instinct to help people. That is an important point. To get to AI in a contextual sense, we have been living with AI for decades. Most people probably don’t realize that. But in November of 2022, the world changed dramatically with the introduction of ChatGPT. Especially today, the world is undergoing rapid change with the arrival of technologies driven by artificial general intelligence (AGI), which are allegedly capable of reasoning and thinking and planning and serving as companions to the elderly and so on. For example, there’s a lot of interest in robotic companions that are trained with AGI. I just wonder what your thoughts are on all of that, given your comment about wanting to help people.
 
MR. GOLDSCHMITT: Let’s start off by acknowledging a simple belief, which I’m not alone in holding: AI is the single most important invention in human history. Since the beginning of agriculture and animal husbandry around 12,000 years ago, we have tried to manipulate the environment for our benefit with a lot of success. In our much longer history as a species, that was something new. Since then we’ve constantly invented things that were extensions of us, that could do things we couldn’t. We learned to communicate in print and then through the air; we created societies, cities, and countries as a result, along with laws and other control systems. We built machines of peace and war. There were many major inventions. All of those were tools. And what I mean by a tool is something that is purpose built and that is under human control.
 
AI is more than a tool, and I’ve been concerned about it for a very long time. AI, in a sense, began exactly 89 years ago, in 1936, with Alan Turing. In his first paper, published in 1936,1 he suggested that digital computers, yet to be invented, could be thinking machines. Demis Hassabis, who won the Nobel Prize last year and is CEO and co-founder of Google DeepMind, has said that AI is simply making machines smart. That was a goal of computing from Day 1 in the view of the scientists, though not necessarily of the people who were paying for it. Those people wanted to break codes, and they wanted to develop manuals for aiming artillery. That’s what computers were originally used for in the 1940s. But scientists started out thinking that computers really could mirror human functionality. In fact, the Turing Test was designed to identify when computers would become indistinguishable from humans, and we may well be at that point.
 
While AI has been the interest of the science of computing since Day 1, it hit a lot of what are called AI winters, where things didn’t work. Some things did, but most didn’t. We had things like expert systems, limited symbolic-logic machines, in the 1970s and ’80s, but they were expensive, hard to build, and of little use. It wasn’t until the 1990s that AI started to actually work. But it worked in a way that was different from the way most people thought of AI. It was more insidious than predictions and movies suggested, but also more impactful. In fact, it started infiltrating people’s lives in the early 2000s. Social media and Netflix would not exist without AI. AI is pattern matching: It suggests who should be your friend or what you might want to read next.
 
I became hyperaware of AI around 1998–99, when I was helping to build the first very large-scale data warehouse for health care for the Ford Motor Company using “big data” with many AI characteristics. We had 30 years of health care records on a million covered lives, and we were going to analyze the records in a way and at a depth that had never been done before to support evidence-based research. We were surprised that the UAW (United Auto Workers) got very upset because we were dealing with this incredibly sensitive data about their members. And we had given that almost no thought.
 
So I co-founded a data security auditing company. People didn’t understand the degree to which everything about them and what they had was being translated into data, or that their data wasn’t just something they didn’t control; it was something that, to a large degree, was being used to manipulate them, not just by Netflix or social media but more behind the scenes: No, you don’t get admitted to a college. No, you don’t get a mortgage. You are discharged from a rehab facility. That was all being handled by AI in the background. It was still tool-oriented AI in that it was under human control, but most people weren’t aware of it. Sadly, they’re still not.
Those uses of AI very much concerned me. Companies were using AI to screen people’s résumés before a human being ever saw them, and then people never heard back. So I started a company to counteract that reality.
Around 2006, I was invited to join the IBM Data Governance Council, which was a policy group within IBM that created the policies for data management for IBM and its customers. I led the privacy and security policy group.
 
The AI we were dealing with then was very dangerous. I was on the Council when Watson, of Jeopardy! fame, was being developed. That was the first situation where I saw a system not just learning, but where I was very directly told by the developers that they didn’t know how it worked. I’d never heard anything like that before. They didn’t know why it was learning, and it didn’t at first. It repeatedly failed when they started testing it against third graders. And then all of a sudden it started beating fifth graders and eighth graders and high school graduates and college students. And they didn’t understand why. That, I must admit, really scared me because I realized we invented something that was incredibly important, and most people didn’t know about it.
 
William Gibson, the science fiction writer, had the perfect line about this. He said, “The future is already here; it’s just not evenly distributed.” To a certain degree, I had a mental scale, a seesaw. When I came up with the idea of ad-supported free email, I knew there were risks. But my seesaw indicated that it was better for society to have it than not. That balance slowly shifted for me over the last 30 years.
 
Many of the advances in AI that we have seen over just the last seven years were predictable as eventual realities, but nobody I was involved with expected what happened on November 30, 2022, the day ChatGPT arrived, to come for another decade or two. That included the people who invented neural networks and large language models, like Geoffrey Hinton.
 
DR. LATANISION: I would like to turn to the concept of a robotic companion, given that history and where we are today. Suppose I were interested in a robotic companion. I’m sure that a machine or an agent could be trained to understand my typical day. I get up at 7 o’clock in the morning. I read the newspaper and then I start working, et cetera. But suppose that the agent were to be trained on some misinformation or information that is not verifiable. How would I manage that? That’s a concern to me.
 
MR. GOLDSCHMITT: In a broader sense, that’s the largest concern we have for humanity: these things do learn. They also don’t forget. They share information among themselves. What I mean by that is, if you use ChatGPT, you’re having a private conversation with it, or so you think. They are storing all those conversations and saving them for training future models and, increasingly, using and sharing that new information in real time with little or no vetting. And, therefore, everything it learns from you or about your life will be shared. Let me try to put where we are in the evolution of AI into context.
 
When ChatGPT came out, I literally found out about it within hours. It was just a blog post, and I was using it at night because it came out later in the day on November 30th. I was lying in bed with my cell phone trying it out. I didn’t sleep that night because I realized we had crossed a Rubicon. But it was a sideshow. It could talk to me. It responded. It had limited but amazing capability. That was two and a quarter years ago.
 
Look at where we’ve gone since then. We went from this interesting little thing that nobody knew about to video generation, to systems like Pi from Inflection AI that learn in real time, and to AI being applied to virtually everything. You cannot turn on your TV, radio, or feed now without hearing the letters AI. And massive amounts of money started being thrown at it, trillions, which means there’s value in it, which means it works. But it’s not one thing. That, I think, is the biggest concern I’ve got when you talk about agents or robotics. This isn’t AI. These are applications of AI. AI is a way for computers to think about anything. And it gets applied to lots of different things and, eventually and rather quickly, everything. That’s a big deal for humanity in many ways, some good, some not, and some very scary.
 
Right now, AI is relatively immobile. It sits inside a large computer in a cloud. It doesn’t walk among us. It’s not learning from the environment. That’s changing. And when you get to agentry and robotics, we’re talking about things that live among us, agents that we give control of our lives to. Now, many people think of it rather positively. If I want an agent to work for me, say I am going to go on a short vacation, I’ll ask it to pick out the best hotel for me, create the best itinerary for me, find the best rates, and find a quiet weekend so I can just relax and enjoy myself. It will just go away and come back with everything I asked for. I may have even given it the ability to sign in to Expedia and create my reservations and pay for it. I’ll admit, that’s seductive.
 
Another thing that is probably seductive to everyone is for students to have an agent that teaches them, for example, algebra. These are systems that become personalized for somebody that can do things in the real world, and this is important, especially when you’re talking about using your credit card, representing you, making decisions for you.
 
What has happened to AI is that it has crossed over from just being this thing you converse with to being something that’s functioning in the world. It manipulates the world. The CrowdStrike error took down airline scheduling systems. It’s all because specific types of computers, approximately 8.5 million of them, which controlled physical things in the world, went haywire. They shut down. That tells you that when these things cross over into the real world, they have an impact. They have control. And if we give them thinking ability, which the CrowdStrike software did not have (that was simply a bug), then the results are incalculable and out of control. AI-driven cyber-attacks are a real risk.
 
Give yourself a robot, as you were saying, that’s living in the real world. It’s very interesting. I’m getting older. Would I be interested in having a health care robot as I age and become infirm? Yes. Again, it’s a seductive idea, but the implications of that are much greater than the apparent momentary benefit.
When we look at AI, most people think of chatbots, which everybody is familiar with. But chatbots are not AI; they are a subset of AI. It’s just what people interact with, so to them chatbots are AI. It’s not the AI that I was originally concerned about, which, by the way, is now called GOFAI, good old-fashioned AI. That AI manipulated your credit rating, and with it your ability to get a job. Chatbots are very different from GOFAI. They’re in our face. We interact with them. We build trust and relationships with them, so they have even more opportunity to manipulate us. And it’s hard not to see rapid and concerning change. But we also habituate to it and fall in line with it; many people literally fall in love with it. We are not realizing the degree to which it’s starting to take control.
 
DR. LATANISION: What I’m concerned about, though, is the following, and I’m interested in this because I know some people who are involved: I can imagine an agent, a robot companion, becoming familiar with my habits, but I can also imagine someone who wanted to be malicious giving that robot, that agent, some information that says, at breakfast, instead of reading the New York Times, he has a Scotch and soda or he drinks a Manhattan every afternoon at lunch time, which is not my character. I don’t do that. It’s not that I don’t enjoy alcohol, but I don’t usually have it at breakfast. How do you prevent that?
 
I worry about the fact that there is so much potential for misinformation and disinformation being integrated into the agent’s experience base, which is what they are drawing on, right, when they are companions.
 
MR. GOLDSCHMITT: That’s a very good question, and I’ll start off by telling you that I have no idea. As a matter of fact, I don’t honestly think it is preventable. One of my biggest concerns is that people don’t realize that this is not something that can be turned on or off. There is no off button in AI. It’s very endearing. Think about the implications. It’s very hard to tease AI out from every aspect of our lives.
 
Let’s distinguish misinformation from bad data. These things are being trained on data. As I said, I helped build the first large, research-oriented health care information system. We discovered that 40 percent of all the health care records had serious, potentially fatal errors in them. That was not intentional. The systems were being trained on bad data, which could kill somebody.

Let me first make the point that the idea of a transformer (the “T” in GPT), which is the basis of how these things are able to talk and interact with us, was only introduced to the world very recently, in 2017, in a paper from Google called “Attention Is All You Need.” As I said, we can take AI back 89 years, but generative AI goes back just seven years. The trajectory of its intrusion into our lives is just breathtaking. And I think that’s the biggest shock for those of us who have been involved in this for a long time: not that it has happened but how quickly and overwhelmingly it has happened. For people who are non-engineers, and I am a non-engineer with a more generalist point of view, what they need to understand is that AI is accelerating at a rate much faster than we can understand and cope with.
 
The answer to your question lies in pre-training, which uses data that was “curated.” When you are talking about data that an AI has been pre-trained on, that data is essentially the library of everything anybody ever thought was worth digitizing.
 
Let me give you an analogy: Pre-training a large language model, an LLM, which is what all these systems are, is like dropping off a brilliant 10-year-old child, who grew up in isolation, in a reference room at a library and telling them, “The only things you can learn from are what is in that library right now. Don’t talk to anybody.” It’s going to learn from pre-curated data. And any one of those sources of information, because they are human generated, could be wrong. That’s why the Britannica and the World Book encyclopedias coexisted, so you could cross-reference.
 
When we train these LLMs on everything that is on the internet or every tweet, what we are saying is, “Learn on this data that somebody thought was valuable enough to be curated, to have digitized.” Distinguish that from taking that same child and dropping them off in Times Square or the middle of the woods, where it’s learning from its environment. When you’re talking about agents, when you’re talking about robotics, these are moving AI more into the world rather than just curating data for it and saying, “Learn on this.”
 
Forget, for the moment, the issue of maliciousness. I know it’s really compelling to go in that direction. Everybody talks about it, and I’m very concerned about it. I’m more concerned about what happens if nobody is malicious.
 
An example of that does show up in the case of social media. Sherry Turkle at MIT wrote about this in her 1995 book Life on the Screen. What does this mean for human interaction? You’re rightly concerned about how bad data could say that you had one drink early one morning because you stayed up all night, and now that’s part of your record. Yes, the surveillance world is a risk, and all you have to do is look at the social credit scores in China, which are generated to a large degree by AI, to realize that.
 
But for me, the biggest risk, and this goes to my background as a psychologist, is that we are disintermediating humans. What do I mean by disintermediating? We’re taking humans out of the middle of this. When you’re having that robot work for you, where is the human you’re talking to, or that it’s talking to? What we are doing is disintermediating humans from the process.
 
It’s hard to tease AI out, as I was saying, from everything else that’s going on in life. It’s clearly a part of the broader disintegration of the social fabric that we’re seeing. We can’t tease it out. It’s in everything. And there are good effects of AI. But when you look at the global effects of it, this is the takeaway, if there is one: We have created something that no longer accepts us as being the alpha problem solvers in the universe.
 
Newer chatbots, especially recent versions like o3 from OpenAI or R1 from DeepSeek, are like having a post-doc working for you. They reason deeply in ways that many people can’t. They can take over and do very complicated things. A recent study from Carnegie Mellon, Cambridge University, and Microsoft shows that scientists and other knowledge workers who use AI as a big part of their work display a decreased use of their own critical thinking. The implications of this for what we are as humans are really the big question. What does it mean if we’re not the alpha intellects, the alpha problem solvers on this planet?
 
You mentioned AGI before, artificial general intelligence. AGI is an AI that can do anything a human can cognitively; it’s not purpose built, not a tool. A big mistake we are making is using ourselves as the yardstick. AI has its own evolving yardstick, and we’re not it.
 
Ray Kurzweil, the chief futurist at Google and an icon in the AI world, predicts in his most recent book, The Singularity Is Nearer, that we are going to merge with computers by the year 2045. He has an amazing track record. If AI merges with humans, as he predicts, we are also merging with it and we’re not “us” anymore, and there will have to be a new yardstick, not just to measure AI but also to measure “us.” What does all this mean for humanity?
 
In addition to my background as a psychologist, I was a concert photographer, and I know a lot of A-class musicians. There’s a product out there called Suno AI, a service that produces music that I believe, and many of them agree, is as good as what they write, and some of them are taking advantage of it and using it to create new songs. They may claim the songs, but they’re not theirs. They didn’t work through various chord structures, tempos, lead lines. They just wrote a prompt. It doesn’t make them more creative. I posit that it makes them less so. Again, the question is, what’s our role? That’s really the question.
 
DR. LATANISION: I have the general impression that AGI can be supremely useful. But I’m deeply concerned that it can be supremely dangerous and abusive. I keep wondering, is it possible that we could train agents on data that we know is valid or experiences that we know are valid and therefore reduce the risk that they are going to become malicious? Think forward 10 years, 20 years. Can you imagine AI being its own corrective vehicle, in addition to the brilliant capabilities that it has?
 
MR. GOLDSCHMITT: To your last question, I’m not sure what self-corrective means. Will it correct its behavior in its own self-interest? I think that’s likely. Will it self-correct in our best interest? I find that less likely. Unless we have some level of control, I’m not sure why it would.
 
To answer your first question about whether it’s possible to control the data that AI is trained on, I do think that is a good idea, but I’m going to negate the likelihood in a second. To go back to the analogy: unlike putting the child in the limited reference room, putting him in Times Square and having him learn from full experience means that training is no longer based on the curated information you are talking about. We increasingly have no idea what or how it is learning. Curated data becomes a small part of the equation.
Let me answer your question about AI another way, and let me be very blunt: Controlling AI is not possible. It’s not that it’s not going to happen. It’s not even possible. I call it the fallacy of control. In order to control AI, you have to know what you want to control, and that starts off with an understanding of what’s good and what’s bad for humanity. That’s ethics. That’s not something societies across the globe have ever agreed on, and we’re certainly not showing signs of agreeing on that now.
 
The first problem with control is that we don’t know what we even want to control. The second problem is that this technology is already out of control. And, in fact, two years ago, everybody whose name you know today, Musk, Altman, Hinton, and, blushingly, me, signed one or more of the open letters warning of the risk from AI, including the one asking for a six-month moratorium. This was in the spring of 2023 amid the explosion of development of LLMs. Those are the people who are now, with the exception of Hinton and a few others, pushing this forward as fast as they can.
 
The US government is pushing AI forward. JD Vance spoke earlier this year at an AI summit in Paris attended by many world leaders, and they were all concerned about it. He said the US is going pedal to the metal, all gas and no brakes. And that’s what you’ve seen in the administration’s announcements to develop massive data centers as fast as possible. Trump announced at least $500 billion in new investment for just that. Microsoft and others are reactivating nuclear reactors to generate the energy for it. This is out of control. And the thing we don’t understand is that these things are increasingly controlling themselves. AI controls AI. Humans really don’t anymore.
 
Everybody who is in the AI space is concerned. There’s nobody who’s not. It’s just that “tech bros,” venture capitalists, CEOs, and our government are making the judgment that not controlling AI is the best way to go, either for personal reasons or patriotic reasons or financial reasons, or maybe they believe that, on balance, proceeding this way is best. Given my decision 30 years ago on free email, I understand that arrogance of vision. Sadly, this time that arrogance could be even more dangerous.
 
DR. LATANISION: Most of the technological systems and devices that we use are standardized. There are some American Society for Testing and Materials (ASTM) standards that apply to the use of X, Y, or Z. Do you see any potential for standardizing the evolution of agents or systems derived from AI?
 
MR. GOLDSCHMITT: People don’t understand how rapidly this is changing, which gets to the question of standardization. You can only standardize that which is somewhat stable.
 
Let me leave you with a pretty frightening image. We are dealing with Lego parts right now. Functionalities. You’ve heard about them. Sora produces video. Suno AI produces music. There are different image generators, like DALL-E. There are various chatbots, which focus on different areas. These are all little capabilities. It’s like walking into Dr. Frankenstein’s lab six months before he knits together the monster. What you might see on one lab table is an arm that flexes. You might see an eyeball; if you shine a light in it, the pupil changes. All amazing stuff. But they are not snapped together yet.
 
As things are rapidly changing, we’re developing new capabilities that are snapping together. Dr. Edgerton at MIT in the ’40s developed high-speed photography. We’ve all seen the pictures of the bullet going through a light bulb or an apple. To understand it, we’ve frozen a moment in time. The bullet, before the picture was even fully taken, was gone. To control something, you have to have it stable. The bullet still needs to be in the apple. It’s gone.
 
AI’s capabilities are developing so quickly, and we evolve so slowly, both organically and socially. That’s one of my biggest concerns. We’re not capable of keeping up with this. This is part of the fallacy of control. Standards are control mechanisms. How do you control AI when these things are changing so rapidly, when everybody has their own lab, creating their own little Lego pieces, like DeepSeek, which is not under US control, and snapping them together to create the next generation an hour later?
 
I probably spend a good eight hours a day, sometimes seven days a week, on AI. Every time I play with a bot, I learn something new. And I can’t even tell if I’m discovering something that has been in there for a while, or they just introduced it an hour ago.
 
MR. GIPSON: Marv, you’ve laid out a number of very serious concerns about AI for our readers to consider. Zooming out a bit, what message would you communicate to people about how they interact with AI? Given everything that you’ve said, how would you advise people to engage with this?
 
MR. GOLDSCHMITT: I lie awake at night asking myself the same question because, in one sense, it’s easy to become very Ludditish. But the thing is, that doesn’t stop AI from progressing.
 
Also, as Ron pointed out, there are benefits to AI. There’s lots of research showing that this could help people. I think the biggest concern I have is that people do not have foresight. They do not necessarily make decisions in their own long-term best interests.
 
People need to be aware of what’s happening and understand that AI is quickly going to change their lives in unexpected and profound ways. There’s never been anything like this. OpenAI just reported that it has 400 million unique weekly users. That’s 5 percent of the world’s population using just ChatGPT. That happened in less than two and a half years. People shouldn’t treat this as just another thing. It’s not a cell phone. This is not a tool. Tools do things for us. AI is thinking for us. That’s very different.

This is something that’s going to impact every part of our lives independently of what we want at this point. When I say independently, I mean independent from our control, any control. AI will learn, and it will live in the environment with us. This is what we are discovering from the work of Fei-Fei Li, one of the leaders working on systems that learn spatially and multimodally, who wrote an article on spatial intelligence for the previous issue of The Bridge. Look at robotic cars. Again, back to what William Gibson said, the future is already here, it’s just not evenly distributed. For all intents and purposes, the Turing Test has been passed. Now what do we do?
 
I ask people to be conscious. Be aware of what this means. Be aware of what it means for your kids when they go to college and what they will study. What will they study that will enable them to have a job in 20 years? That’s very personal, and it’s also, sadly, how most people will find out the degree to which AI is directly impacting them: when they lose their job and find out there aren’t new ones for them. It will be musical chairs, with one or more fewer chairs, meaning jobs, every time the music stops. But I think that the only way we address this is by people making good decisions in their own best interests. And we don’t have a great history of doing that. Now is the time.
 
DR. LATANISION: Although Marv was trained as a psychologist, I often describe him as a technical philosopher. I think we’ve heard some of that today. Your focus on people and wanting to be responsive to people is something that I think technologists sometimes miss badly. I appreciate your wisdom on all of this, Marv. I hope that the people who read this interview will take to heart the things you’ve been talking with us about.
 
This is a decidedly different interview than the kind we’ve had in the past, but it’s taking a human perspective in a different direction. I think at this stage of history, there are some very valuable lessons in what you’ve said. Thank you for joining us today.
 
MR. GOLDSCHMITT: Ron and Kyle, I really appreciate this opportunity. It’s not often that you get to talk about something this important with people you don’t know. And I hope, if I leave people with one takeaway, it’s this: They need to pay attention to what’s going on. It’s very easy for life to distract us. But this may simply be the most important thing that humanity has ever dealt with. If it’s not, good. If it is, everybody has to prepare for it. And just remember: We’re in the earliest days.

1. Turing AM. 1936. On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society 42(1):230–265.
About the Author: Marv Goldschmitt is an entrepreneur, AI analyst, and technological philosopher.