Alex Habet shares a replay of Adam Blue's keynote at CONNECT 23 about how AI will affect banking and life in general—and how it won't.
Hi, and welcome to the Purposeful Banker, the leading commercial banking podcast brought to you by Q2 PrecisionLender, where we discuss the big topics on the minds of today's best bankers. I'm Alex Habet.
“So now comes the second Machine Age. Computers and other digital advances are doing for mental power—the ability to use our brains to understand and shape our environments—what the steam engine and its descendants did for muscle power.” The book goes on to say, “Not only are the new technologies exponential, digital, and combinatorial, but most of the gains are still ahead of us. In the next 24 months, the planet will add more computer power than it did in all of previous history. Over the next 24 years, the increase will likely be over a thousand-fold.”
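The "more computer power than it did in all of previous history" claim follows directly from the arithmetic of doubling: each new doubling period adds more capacity than every previous period combined, since 2^n is greater than 2^0 + 2^1 + … + 2^(n−1) = 2^n − 1. A minimal sketch of that identity:

```python
# Each doubling period adds more than the sum of all previous periods:
# 2**n > 2**0 + 2**1 + ... + 2**(n-1) == 2**n - 1
total_so_far = 0
for period in range(12):
    added = 2 ** period          # capacity added in this period
    assert added > total_so_far  # outweighs all prior periods combined
    total_so_far += added
assert total_so_far == 2 ** 12 - 1
```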
These are a couple of quotes from one of my favorite books in recent memory. It's called “The Second Machine Age” by Erik Brynjolfsson and Andrew McAfee. It's not even a new book. It came out back in 2014, but I believe it was uniquely written to prime the pump for where we find ourselves now, with sudden capabilities and force multipliers available to us. Without a doubt, setting aside the economy and the crisis in banking, there is no bigger story than the profound impact generative AI is already having, and what's still to come. This is not the first time we've had this kind of hype, right? That book was written nearly a decade ago, but it's never been more important to pay attention. So we want to bring you the best, most influential voices available to us so that you can develop a view that is true to your own beliefs.
Today, we're thrilled to share a recent talk by Adam Blue. You might recall, Adam was on the show last fall to give us a sneak peek of what he was going to present at an upcoming conference. That talk focused on realizing the power of slow and how it can have a profound impact on the path to innovation. You can catch more on this topic online, too: Forbes recently published an article by Adam focused specifically on it. I highly recommend you check that out.
But even more recently, Adam presented at CONNECT 2023, a major annual event for Q2 hosted in Austin, Texas. It just happened in May. During this event, Adam addressed the sudden explosion of interest in AI and offered his unique Adam Blue-esque perspective on it all. I won't even try to preempt the takeaways. I just invite you to sit back, relax, and enjoy his talk, “AI: Everything Everywhere All at Once.”
Hey, everybody, and welcome to the lowest point of energy in the conference. It is not a mistake that I asked that you be given sugar in its most concentrated, most lethal form. So enjoy that. This presentation is brought to you from 100% organic sources, which I think is worth noting because we're going to talk about AI today.
So I process a lot of the world and my experience through the lens of films. I love films. It's important to me. And so I guess you get to go on this journey with me. So we got bangers coming up. Just hang with me for this next 25 or 30 minutes. Enjoy your snacks. At the very least, you can have a pleasant sugar coma.
So I really like horror movies, and I really like them because I think they reflect the anxieties and concerns of the collective unconscious of the culture at the time, right? So if you think back to the ‘50s … I was not actually alive in the ‘50s, contrary to what some of you in Gen Z might assume. But if you think back to the ‘50s—I used to watch movies from the ‘50s with my parents—there was a lot of atomic anxiety in the ‘50s. When you watch a movie about an enormous monster that arises from the ocean and destroys everything in its path as a result of a nuclear explosion, I think it's pretty easy to draw a direct line between anxiety about the Cold War and that particular film, right? It's no accident that many of these films come from Japan, a country whose psyche was very much imprinted by the traumas of World War II and the use of nuclear weapons. And thinking about the way that impacts art and culture, these movies are extraordinarily cathartic. They give us a way to process some of those things that happened.
One of the first science fiction novels ever written is Mary Shelley's “Frankenstein,” from back in the 19th century, and it's a classic story. We get a new version of Frankenstein every two to three years. Sometimes it's a Frankenstein movie, sometimes something else. Also, sorry, I have to stop here because the monster is not named Frankenstein. I'm that guy, right? The scientist is named Frankenstein, so it's Frankenstein's Monster, but that's, like, super awkward to say, so he'll just be Frankenstein for the duration of this. But let's understand that that's totally wrong, and if you say that you're stupid. So what is Frankenstein/Frankenstein's Monster actually about? It's about transgression. It's about the use of technology and, in the original version, some amount of the black arts to create a homunculus, a living person from, well, to be fair, a sewed-together set of undead corpses.
And that's metaphorical, not literal, of course, in the book and the movie, but it's an examination of the way in which technology frequently drives change in society that makes people very, very upset and very, very afraid. And films are dramatic and amazing, and they're rarely meant to be taken literally, except for maybe our next example. But they, again, are expressive about people's reaction to technology and the amount of fear and anxiety that comes with technology, along with excitement and possibility. And the two things are valid in equal measure because there's a little bit of both in every new technology.
This next movie I think is self-explanatory. This movie has two things I'm a huge fan of. Beautiful, beautiful outdoor spaces, and a really heartwarming, family-oriented ending. There's also a bear, and the bear does a lot of cocaine in the movie. Just, like, a lot. It's … I'm not sure what kind of fear this reflects, but if you have 90 minutes and the kids aren't around, it's at least entertaining. But I'm not sure how much we're going to take away from this one.
You should probably recognize this dog. I didn't have a real dog. The other guy got a real dog, man. Like, how am I going to compete with that? He had a real dog on the stage. But I have an interesting question for you about this dog that is a dog and a common meme on the internet, which is a fascinating way of transmitting culture, right? Memes are so compact in terms of the information they convey. Why does this dog have eyebrows? Look at his … they're better than my eyebrows. This dog has some expressive eyebrows. I saw this meme and I thought, that's gotta be Photoshopped, and, no, that’s real dog eyebrows.
And so I Googled it. The theory is that dogs have eyebrows because when we lived outside and fought each other for meat and poked things with sharp sticks, and the dogs would come and they would eat the meat and the leftover scraps in exchange for being part of the social organization, the pack for early hominids, the dogs with eyebrows were more like people. They were more expressive. They had faces. And so we hung on to the ones with eyebrows, and the ones without eyebrows, we were like, get out of here. You don’t even have eyebrows, man. So we selectively chose for dogs to have expressive eyebrows. We created this phenomenon because we could relate to this thing. This will become important later on. I know there's a lot of disconnected things here. It's like, what is this guy talking about? But I swear to God, I'm getting there.
So also, the second thing that's interesting about this picture is I made this exact face when the CEO of OpenAI said to Congress, “I think we should regulate AI.” And I was like, are you …? OK, it’s like … I just gave a group full of fourth-graders a bunch of chainsaws and a pallet full of Red Bull. You might want to check on them at some point in the next 20 minutes. That's the equivalent statement to, I think, “We should regulate AI.”
So as human beings, we are centered on things that we can interact with and respond to emotionally and empathically. And we look for ourselves, right? So many technologies end up being more like a mirror than a window because they tell you about yourself, and you project yourself into them in a pretty substantial way.
So here's a great example. Remember when you were a kid and you had a Magic 8 Ball, and you would shake the 8 Ball? It had a 20-sided die in it, like a Dungeons and Dragons die, and you would get one of the answers. You would ask it a question, and then you would shake the 8 Ball, and then you would, like, go to great lengths to map what you wanted to be true about the question you asked onto whatever the 8 Ball said. Like, you're not really asking the 8 Ball. You're like, “Hey, 8 Ball, build up my confidence,” right? Like, you know, “Can I make a basket this week?” And I didn't play sports, so I'm sure I said something nerdy instead. But the point is, we project significance onto the Magic 8 Ball, as children or potentially even as adults, even though it's a completely random event.
And one thing I can tell you behaviorally is that people do not understand randomness. None of us do. You see people walk up to the roulette table, and when it comes up black three times, they're going to bet red. And we know from mathematical study and countless observations that red is no more probable after three blacks in a row than black is. This is why casinos are giant buildings with great air conditioning in the middle of the desert: we are fundamentally bad at comprehending actual randomness and stochastic behavior, right?
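The roulette point is easy to verify with a quick simulation. This is a minimal sketch (the function name, trial count, and seed are illustrative): on a fair red/black wheel, the frequency of red after three consecutive blacks stays right around one half.

```python
import random

def red_after_three_blacks(trials=200_000, seed=42):
    """Estimate P(red | previous three spins were black) on a fair
    red/black wheel (green zeros ignored for simplicity)."""
    rng = random.Random(seed)
    spins = [rng.choice("RB") for _ in range(trials)]
    hits = total = 0
    for i in range(3, trials):
        if spins[i - 3:i] == ["B", "B", "B"]:
            total += 1                 # saw three blacks in a row
            hits += spins[i] == "R"    # did red follow?
    return hits / total
```

With 200,000 spins the estimate lands close to 0.5, which is the whole point: the wheel has no memory.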
So AI, in a sense—watch, I’m going to get it all together, now. You'll see, I promise—AI is a set of technology that presents itself in the way that a dog with eyebrows presents itself—that we project our Magic 8 Ball expectations onto. And sometimes we might get a little too excited about what it may represent. We have seen people that are much smarter than I am who work way harder than I do, do amazing computer science stuff. And they appear to be convinced that some of these statistical models and large language models are actually potentially sentient. And I don't agree, I don't buy it. There's a lot of debate back and forth on both sides, but it's natural that people would look into the heart of the machine and see themselves.
It makes total sense to me. Other technologies, like the ability to simulate voices or to construct visual art or even video from samples of existing material, are pretty compelling. Like, I don't need to see any more “Wes Anderson movie, but this other thing” videos, right? The Star Wars one was cute, but I don't need Wes Anderson and Mad Max. I don't need Wes Anderson and The Blind Side. I don't need Wes Anderson and The Golden Child. I don't need any of those things. But these technologies are compelling because they put into people's hands the power to do something they could not previously do, at a level of scale that's really extraordinary. And the technology is fascinating and amazing, but I think it's also dangerous, right? Not dangerous in the take-over-the-world, we-all-end-up-as-batteries, apocalypse sense. That's probably not going to be the way this plays out.
Negative impacts from technology often play out in ways that are radically different from what we expected. And as a result, we are not always prepared for the impacts they have, because the big, loud horror-movie implications are the fun ones to debate. I mean, there are some pretty serious people talking about non-zero probabilities of AI taking over the world and destroying humanity. Like, seriously. And I guess once you get a billion dollars, you just lose a little bit of your mind. I don't know any other way to explain it. I do not have that problem, by the way.
So here's a series of films about the intersection of technology and humanity that are centered around, essentially, the creation of a new form of life. “Metropolis” is a beautiful German expressionist film from the ‘20s. It also has a lot of interesting elements around class struggle. It's a very important movie for cinema, and it's an important movie for science fiction, I think.
Does anyone not know what “Jurassic Park” is? Do you need me to synopsize “Jurassic Park” for you? We brought dinosaurs back. It was a real bad idea. This is another film in which, ultimately, the message is, “Hey, be careful what you create because if it gets away from you, that can be challenging.” And sometimes our hubris … and man, if there's ever been a time in the last 20 years to think about hubris a little bit, I think it's right now, I mean, just point at any direction and there are examples of people getting exuberant about their own capabilities. “Jurassic Park,” to me, is kind of one of the best hubris movies. In addition, “Jurassic Park” is really interesting because it represents one of the first films in which CGI animation is done in a way that's really seamless.
And there's a guy who worked for ILM named Phil Tippett, who had done all the stop motion up to that point. And Phil Tippett was going to do “Jurassic Park” in stop motion. And then these crazy guys at ILM came up with computer animation. He was like, well, I guess my job is over and my life should probably end because I have nothing left to do since we're not going to do stop motion anymore. And the fascinating thing about “Jurassic Park” is even though they had the computer animation, even though they had the algorithms, even though they had early primitive versions of AI in the animation, they still needed animators to make the dinosaurs act like dinosaurs. Because that performance, the art of it, the understanding, that conceptualization comes from human beings. It's very, very difficult to introduce in a mechanical way. So it's an interesting movie for that reason, as well.
And the last movie has Will Smith. Insert your own joke. I can't, I don't have the effort to do it. But it's called “I, Robot,” and it's yet another movie about robots, artificial intelligence, and what could happen.
But all three of these films are interesting in that the ultimate villain in each of them in the arc is a human being. The human beings always end up being the bad guys. They make the bad choices, they disregard the needs of the technology or disregard the needs of the people using the technology. And so all of these stories point back to us as being responsible for what happens with the technology. And this is why Sam Altman saying, “I think we ought to regulate AI,” it's like, why did you build it, man? It's like, if you're that concerned, maybe take a beat, right? Go on vacation for a couple weeks and come back.
That notion of moving faster than we can manage is really important. And this gets us to a concept in technology—in AI specifically—that I think is really crucial. There’s a writer that I'm a big fan of called … his name is Cory Doctorow. He's been blogging forever. He really thinks about the intersection of humanity and technology. And he says when you encounter a technology or a platform, you should ask these two questions. Who are you doing it for? And who are you doing it to? And here's an example.
I like Amazon. I like Amazon a lot because I'm really picky about a whole lot of stuff. Ask anybody who knows me. I make this amazing potato leek soup. There is a strainer that comes from Norway from a company called Rosle that I can only find on Amazon. It's like a $65 strainer. And the mesh in the strainer is so fine I think it could part a human being from their soul under the right conditions. And I need that strainer, only that strainer, because when I use that strainer and I strain the potato leek soup, the leftover granular fat in the soup is so minimal and so fine that it sparkles like glitter when you heat it to surface temperature. And that's super important to me. So I’ve got to pay some knucklehead to pick and pull a $65 soup strainer. And then it's got to come across the world to me on an island that I chose to move to because I do not understand how public travel works. And so by the time I get the strainer, I have participated in a series of activities for Amazon that may not be as beneficial as we first thought, right? So Amazon launches it.
Who here has never bought anything from Amazon? You can punch me in the face later. Everyone else, you're as big a hypocrite as I am. Amazon brings you things that you can find easily in two-day shipping, no questions asked. But there is an invisible cost to Amazon. Like the warehouses are not amazing places for people to work. I think the Amazon organization is working on it. People are organizing in a way to try and get working conditions that work. Like, I don’t know about you, but I'd pay a little more to find out that people had slightly better working conditions. Beth feels so guilty about Amazon, she told me she takes emotional support dogs that she trains to an Amazon warehouse. That's how stressful it is there. I have no excuse. I just buy Amazon. I feel bad. I say a couple of Our Fathers and I move on.
But if you think about the platform when it came up, you could find stuff on Amazon you couldn't find anywhere else. It would get delivered right to your house, wherever you were. The people that dropped it off seemed to be fairly happy. And now, whatever we are, 15 years on, I go to Amazon, I need a spatula. I'm not going to go through the spatula thing with you, but I need a very specific spatula. And the top 10 of the first 20 results are clearly all paid. These are not the best spatulas; these are the spatulas for which the seller is willing to pay Amazon the most for placement, right? So from an economic perspective, the great matching I got from Amazon before, in which the seller and the buyer were matched based on the value of the exchange to me and the seller, that matching now is based on which seller will pay Amazon the most to promote their ad.
Anybody just skip the top 10 on Amazon? Anybody skip the top 10 on Google? Like, I have clicked on the link for the aggregator of the hotel service instead of the hotel so many times, and it makes me so angry, like if I am swearing furiously it’s because I clicked on some stupid wrapper around the Fairmont at the Vancouver airport instead of the right one. This is an example of the platform, right? The Amazon platform—which is amazing technology, it's extraordinary—being used to extract economic rents from everybody in the system. And I'm not trying to be political here, right? Your political views are yours and mine are mine, but everybody loses when there is someone with that level of control, and Amazon is using algorithms and AI systems in order to enact that. And their scale makes it possible to drive that result in which the technology has value, right?
It's better than the alternative because I can't go find the thing I want. Maybe in part because between Amazon and Walmart, there aren't a lot of places left to buy stuff. I don't know if you noticed, but in your communities there are probably fewer small stores because you can't operate effectively at scale. But all that scale we got, and again, I'm a giant hypocrite because I've got, you know, investments in an index fund and I'm sure Amazon is in the index fund because they're in the, you know, the top whatever on tech. And I'm expecting Amazon to produce dividends and make money on the one side, but I'm also expecting Amazon to take good care of their workers and deliver things to me in a carbon-neutral way within two days because I just found out I absolutely have to have a particular thing. So I am absolutely the problem here. I am the problem because sometimes I'm irresponsible in my choices and use of technology and the way they impact other people. So the point of this is not, don't buy stuff on Amazon. That's not what I'm saying at all. The point is there's a lot of invisibility in the cost of some of these systems and that invisibility is pretty substantial. And those externalities—that's a fancy economics word we use—externalities for those things are non-trivial.
So if we talk about other kinds of systems, ChatGPT is getting a lot of attention. We're going to have a radically less whimsical discussion about ChatGPT at a panel tomorrow. I appreciate your forbearance. But ChatGPT is a fascinating model in which we've taken the collective text generated by human beings throughout what I'll call internet history, and much of pre-internet history, and asked a massive mathematical model to learn the relationships between words. I'm sorry, I'm killing you. It's all good. ChatGPT literally takes those relationships between words, and then when we put input into the algorithm, what we get out is, you know, words, right? It makes sense; it's understandable text. Here's the problem. If I had an engineer and the engineer told me, man, I built that core interface and it works great, but I have no real idea what's going on underneath, I would say we're probably not going to ship that. Like, if the engineers don't fully understand what's going on … And the nature of a multilevel nested neural network is that sometimes you can't really see inside the black box. It may produce fascinating outputs, it may have value, and it's not to say that we shouldn't use it, but we have to use it cautiously. We have to use it responsibly. We have to use it thoughtfully, right? Because these powerful tools can get away from you in the same way that they create value.
And so ChatGPT is a fascinating piece of technology, even though OpenAI folks have said that's as big as LLMs ever need to get. A lot of people are working on smaller models; they're working on more compact models that will run in other places. I think there's a lot of value in those models. My concern, personally, and I think other people share this concern, is that where the technology sits in the ecosystem and its relationship to people makes a huge difference. If I'm interacting with you and I have ChatGPT over here to help me, or a ChatGPT-powered thing, that's really interesting and really valuable. When you put it between two people who could exchange value, I think that's really challenging. That's a set of use cases that I think we're not as comfortable with.
And the same thing goes for the relative risks posed by the ability to, say, impersonate someone else's voice. That one's pretty challenging, actually, because we are used to authenticating each other by voice, just human to human, all the time. I mean, it's really important that when you hear someone's voice, that is the person you think it is, even more so than seeing them, I think, because of our relationship to audio and the sound of someone's voice. It's part of your personality, right? So having someone take that from you, that's pretty challenging. The other thing that's invisible here, I think, that we have to think about is the text that went into training ChatGPT. Like, some of that is mine, or at least Jack McBee ghost-wrote it for me, whatever, but it had my name on it. Some of that is blog articles, some of that is other information that went out onto the internet. And I feel like I shared that with the world understanding that people would read and consume it and it would have value for them. I did not put it out there to be part of a high-scale enterprise, where someone takes the fruits of my labors and turns it into, you know, essentially enterprise value for someone else. And I haven't written very much that's on the internet, so I'm not terribly concerned.
But what if like a lot of your life was producing that? What if a lot of your life was producing art in a really important style that you've spent years developing? What if a lot of your life was making music that people found valuable? The ability to create new works without the permission of the author in that way is something that the technology provides, right? And I think that's OK, but using that for profit is problematic. And there is very likely a coming reckoning between the way in which these systems are trained because they are massively voracious with data and the outputs and what that value is. And so it's, again, something to be thoughtful about.
And this question comes down to a fundamental question that we always have whenever there's a technology advance, which is as people get more productive, who gets the surplus, right? We argue about it all the time. I would say much of the conflict throughout human history as people have become more efficient through various ages and industries and what have you, is an argument about how do we divide up the surplus from our communal labor and who deserves what. We are already very bad at answering that question. Like, anyone here happy with the amount of taxes they pay? No, by definition, no one is happy with the amount of taxes they pay for a variety of reasons. But that goes to the fundamental problem we have as human beings and collectively making those decisions.
The other concern that I think is very real and that we have to think about is the construction of a credible artificial person on the internet, which is how many of us interact when we're not here in person with each other. Being able to counterfeit a human being in those interactions can potentially undermine the nature of, I don't know, democracy, which is something that is kind of important to me. I've enjoyed it in the U.S. for a long time. I think we should maybe keep it. And so those are real concerns that we have to think about. And it's not saying don't use it. Go crazy with ChatGPT. Find ways to automate things you need to do. Use it as an inspiration if it is a tool that works for you. But also think about the implications around some of these questions that are difficult. And then, you know, people say, like the folks at Anthropic, which spun out of OpenAI, we're going to teach our LLM how to be a good citizen. And I'm just thinking, have you raised a child? Like, how much time do you have?
And it brings into question this fear. And I think a lot of horror movies and science fiction movies are about this fear. What happens when we run out of problems that can be solved by just being smarter? I mean, we're not going to solve every problem with smart. Some problems we're going to solve with hard work, some with cooperation, some with empathy, some with humility, some with just, you know, not literally but figuratively duking it out. Not every problem is going to be solved by a smart engineer in a room with a keyboard. And that comes from a guy who spent a lot of his life trying to solve problems in a room as an engineer with a keyboard. But I'm telling you, it's necessary to solve some of these problems, but is absolutely not sufficient.
This is a human face. Does anyone not see a human face here? If you don't see a human face, I need you to see a psychiatrist or the police because you're probably going to murder someone and you're a psychopath because this is unquestionably a human face. This human face has an expression. You have your own opinion of how this human face is feeling, right? Your brain is trained to see faces in things that aren't there because the human face is where you get most of your information about another person. Human face, timbre of voice, tone, body language, right? I mean, I could recite Kanye West lyrics and you would still get something from my face. And I'm not going to do that but it would be … anyway, the point is, this is a dirty-ass sink, but it's got a human face in it. It's there. And that is dangerous for us because things with human-like faces are dangerous.
I like Reddit because I can turn my brain off when I flip through Reddit. And I always love the interaction of a regular person and cute animal that goes terribly wrong. And it's always like the animal has giant eyes and a relatively small mouth. And then there's biting and blood, and we're just trained to view certain things as non-threatening, as interesting to interact with. And many of the AI components that are now very impressive to people—and I agree, they're super impressive—they're impressive because they so closely simulate a human interaction, right? And that, I think, in and of itself raises the level of danger of not treating the technology seriously as we think about it.
So this is a movie, to come all the way back to the beginning. This is “Planes, Trains and Automobiles.” I'm going to spoil the shit out of this movie, but it's over 30 years old, so you’ve got to give me a pass. Alright, so here's something you don't know about this movie. This is a horror movie about travel. I don't like traveling. I'm not even a germophobe and I don't like traveling. So in this movie, Steve Martin's character, who we’ll refer to as the AI … Steve Martin has a very clean life. He's an advertising executive. He shows up, and his biggest problem is he’s not going to make his first-class flight home to Chicago from New York so that he can spend time with his family. Seems like a pretty OK guy in the first few scenes.
John Candy plays a shower curtain ring salesman. This is not a real job; this is a movie job. Anyway, he's the nicest person that ever lived, next to maybe Paul Walker. So first Paul Walker, then John Candy, nicest human beings that ever lived. And he's in this movie. He is a hot mess, right? All throughout the movie he tortures Steve Martin with his humanity, right? He's big, he doesn't fit into things, he smokes, he's super, super loud. He's always talking to people he doesn't know. But he is alive in a way that Steve Martin is not.
And so when you first watch this movie and you're watching this for the beginning part, you think like, man, Steve Martin's just trying to get home and John Candy is legitimately terrible, and the movie goes on for a while. And then they have this weird point in the middle of the movie where Steve Martin is just like, there's a thing with pillows and an off-color joke. And then Steve Martin is mad and John Candy is like, “Hey man, I like me. I like the person I am. This is just me.” Like, you can deal with me or not deal with me. And then from that point on, the whole movie flips over, right? And you realize Steve Martin, his character is the villain in the movie. And John Candy is just trying to get him home.
Now, granted, he sets a car on fire and they wreck the car and they're in the back of a pickup truck and, like, movie stuff happens, right? But the point is, human beings are messy. Their feelings are messy. The way they move through the world is messy. But he is way more effective at getting them back to Chicago on time for Thanksgiving than Steve Martin is. Steve Martin is way out of his depth. He's outside of the space that he is comfortable in, right? So in this deliriously extended metaphor I've constructed, Steve Martin is like an AI in a sense because he has this very clean life, right? He has limited exposure, there's value to him in what he's doing. But he completely disregards anything that is happening for John Candy's character. And they move through.
And then you get to the peak point of the movie, and Steve Martin comes to this realization that John Candy is trying to connect with other people because he's desperately lonely, because he has lost his wife and he lives out of a trunk. Again, this is a movie, you can't really do that. But again, as a metaphor, it kind of works. And so the movie flips over at the end and it becomes this lovely Hallmark ending. But it is without question, absolutely an existential horror film about the fact that hell very frequently is just other people, right?
I'm disappointed because I've been saying for 17 years, sometimes I like machines better than people. And a machine finally arrived that's pretty close to a person and I'm like, that scares the shit out of me. I was totally wrong. I will take people, I want people back. I don't want to talk to a robot at the Wendy's drive-up. If I want a Double Baconator, I want somebody to know that I want a Double Baconator, right? Maybe I need an intervention. I don't know. So AI, critically important. It's super, super valuable. It has a place, but is it going to change the world? Yeah, everything changes the world now, right? But if you live in Sub-Saharan Africa and your day is filled with, like, trying to get water and then cook on an open fire, I'm not sure how much AI is really going to do for you, right?
If you interact with other people, if you do a job that requires physically touching things like plumbers, who are the new kings of the universe … Hire a plumber sometime, right? Like, they're going to put engineers out of business as the ridiculously overpaid ones. That kind of work is still important. It still matters. It's still valuable.
And so when we think about AI and we think about what it can do, the assistive use cases are much more interesting than the replacement use cases. But there is a coming change in which the use of AI will create greater productivity for a lot of people in knowledge work who are used to kind of owning their own productivity, right? For a long time as a knowledge worker, you were like, I'm glad I don't do that job that might get displaced, right? Man, those guys are knuckleheads and now AI comes along and you're like, well, alright.
But the division of that surplus that we create with that productivity, what we do with it, will make a huge difference. So if we use AI in education and it allows teachers who are pretty important, arguably, to be more effective, where will that surplus time go? Will it go into more care for kids that are developing or will it be eaten up by something else, right? If we produce engineers that are more effective, can we find a way for engineers to spend more time with product people and with customers? Is there a way to use that surplus in a way that's meaningful and not just capture it at the end of the day?
So this brings me to my last story, which is my own hell on earth. So I was traveling through Atlanta Hartsfield-Jackson Airport. Is that anybody's favorite airport? No, it's no one's favorite airport.
This airport is so big and convoluted, it has its own special train underneath. And not an above-ground train where you can see out the windows, but a gross subway where outside the window is just a bunker that says you are in purgatory and these doors might open on hell for you here in Hartsfield-Jackson Atlanta. So I'm on the train and, you know, you got your bag and you got your backpack stacked up on your bag because you’ve got to take all your stuff with you, right? Can't check the bag because God knows what'll happen. And then you’ve got to hold onto that pole. And the pole is like a cylindrical vertical Petri dish made out of stainless steel. And I'm like, in my head—I'm a big counter—and I'm just like, it's three in the afternoon. It's probably like 1,200 hands on this pole, 1% of people are sick. That's 12. Oh God, the math's just not great.
So I'm holding onto the pole with one hand. I got the bag in the other hand, there's the guy next to me. Never seen this guy before in my life. Never seen him again. He's standing there. And that train in Hartsfield-Jackson, as it leaves the B terminal where the Delta gates are, it's like, it's a jerk. It's like a Tesla Plaid start on the train. You know what I'm talking about, right? And it's like, you got this thing, but I'm … I got the pole, like germs and all, I'm on the pole.
So the guy next to me takes this moment, he takes his hand off the pole, takes his hand off the bag, and he's going to like bend over and like get, I don't know, his passport or whatever out of the bag, and the train goes, right? And so he's gone from zero kinetic energy to like 246 pounds of kinetic energy in a matter of eight milliseconds. And I'm not gifted with a lot of physical grace, which is why I'm a bald, overweight engineer. And so in the one moment of my life when I needed it, I let go of my carry-on with one hand, I rotated over and I grabbed the pole and I put my hand out and I grabbed the guy who's falling down, and he reached for me at exactly the same moment, just took my hand. And he didn't … he was going to faceplant on the train under the airport, and it's so dirty. It's so dirty. And like, all of a sudden I'm like Gale Sayers at the end of that movie I don't remember. I'm like reaching for the football, and I get the guy and, you know, he stands up and I'm like, you know, we do that like hyper masculine toxic nod thing. Like yeah dude. Yeah.
And then I rode away and I got on my metal death tube and I went home and I'll never see that guy again. But I will remember that moment for the rest of my life. It's like the most human moment of my entire life. So now we get a color version of the picture because it's the happy story. So until we can find a way to put empathy in the box, until we can train it to be empathic, it's just going to be a tool.