Amid a world focused on shortcuts in the name of efficiency, is there room for beautiful, elegant innovation? Alex Habet and Adam Blue share a captivating discussion about humanity, artistic creation, and business.
Hi and welcome to The Purposeful Banker, the leading commercial banking podcast brought to you by Q2 PrecisionLender, where we discuss the big topics on the minds of today's best bankers. I'm your host, Alex Habet. So I'm delighted to welcome back Adam Blue to the show. Some of you might recall Adam was here last year as part of a preview to his talk at BankOnPurpose. Well, he's back at it again this year, giving us another glimpse inside his head and a sneak peek of his talk this year.
We discussed the iconic moments from the past year, many of the challenges and the new opportunities that are being brought by some of these really exciting advancements in tech. We also talked about leadership effectiveness, and a lot of what was captured in our conversation drives to the heart of incrementalism that ultimately ignites progress, especially when you take a moment to pause and reflect. All of this with a healthy dose of movie and music references, as you would expect. So I hope you enjoy this conversation with Adam Blue.
Adam, thanks again for coming on the show. It's been like a year since you've been on here and there's been a lot of news. You know, there have been failures, bank failures, you know, major strategic pivots, changes to risk tolerance, exciting advancements in technology. I'm just curious, you know, from the Adam Blue perspective, what were some of the most iconic shifts or moments that stick out to you when you kind of sit back and reflect over the last 12 months?
Yeah, I think, so on one hand we've got, you know, we have the launch of OpenAI's ChatGPT, and people get their first encounter with a piece of artificial intelligence that they can grasp the enormity of, in a sense. And it's, you know, amazing and impressive and also, in my opinion, sometimes extraordinarily stupid and abstruse and awkward. I'm not sure we're entirely sure what to do with it yet, but to say that it's not, you know, a watershed moment, certainly, in history for the last 10 years I think would be overlooking a pretty interesting development.
So there's that whole arc, right, and people using ChatGPT and then now, you know, some of the hype cycle is cooling and people are getting a little more serious. So that's a big one.
And then the other was, you know, in the beginning of the year when people started seriously asking questions like how many banks are going to fail, which is a very different kind of question than people normally ask, because I don't think anybody wakes up on any given day—maybe regulators aside—asking which banks will fail today, which banks might come close to failing? But there was a time period there where there was a lot of discussion about how there could be a large number of financial institutions failing, and how will this go down and what will it look like?
And so those are kind of the two ends of the spectrum. That existential terror of, like, what's happening? What's going on?
And then kind of this interesting development and technology that it kind of brought the movies and the science fiction books like right into people's screens, which was, I think, really interesting. But it's interesting, right, because those are two very extreme things.
Would you say the pendulum is now swinging today a little bit more in one of those directions or it's kind of still a battle of the two extremes?
Yeah, so if I were just to be ornery about it, I'd say I don't know. I think they're very similar to me in one sense, in that times like these produce outcomes and observations that are very different than the ones we saw previously.
And I studied economics for a long time and then abandoned it for a wide variety of good reasons. Like, I was going to be a really bad academic economist, which was a good reason to stop doing it. But there's this notion of structural changes, and there's a lot of econometrics and statistical theory where you go test for, “Was that a structural change or was that just part of a cyclical change,” right? And the end of the credit policies and the end of a, I don't know, certainly 15-year if not roughly 20-year expansion in the tech markets, the stock market, easy access to credit, relatively low interest rates, all those things combined together, right?
That era kind of peaked during COVID and then it kind of tips over to the other era. So in these times of structural change, I think it's not unusual to see a lot of things where you say, “I've literally never seen anything like that before,” and that might be a little ... We've probably seen stuff just like this before. We just didn't recognize it as such. Or it happened during a time where we weren't, you know, 50-year-old dudes working in a software company that's, you know, pretty dependent on a lot of these kinds of factors.
So, ultimately, I think the two events end up being kind of similar in their surprisingness, right? You know, people talk about, like, “I was shocked but not surprised.” And these are two things where like people were genuinely surprised.
It wasn't just this sort of outrage or reaction, it was a real understanding that these were not things we expect, reasonably, to happen. So in terms of like impact and potential for value and everything else, I think they're pretty wide apart on the spectrum. So I agree with you in that sense, but I think we'll see more of these kind of events in the next 12 to 24 months. Prepare to be surprised.
I don't know if that's comforting or not, but it shouldn't be.
So I guess, out of curiosity then, as you know because you talk to our customers all the time, non-customers as well, you know, where are you finding yourself connecting the dots most? You know, so I get you look at these awesome technologies, and I use a lot of the stuff in my own personal life too now, but a lot of it when I first saw it … like I remember the first time I tried ChatGPT, I was … I didn't know what to really ask and I didn't know how to apply it to my life yet. Now it's very easy to see how it can be applied to life.
But as it relates to bringing these kinds of advancements into an institution, for example, which I suspect you're also contemplating for Q2 as well, how does this impact Q2 employees? What are the … you know, when asked the question, “Well, what am I supposed to do with this?”
Where do you find the most common dots you're trying to connect, you know, internal and external to our company?
Yeah, I think there's a couple principles I try and think through and talk through with people. The first is if you don't understand what your problem is, right, then picking up a tool to try and solve that problem is unlikely to be super helpful. By all means, people should be experimenting with LLMs and generative AI, you know? But grabbing the tool before you understand the problem, it's just not how it works.
So I think part of the experimentation phase is to understand what is this thing, what is it useful for, what is it not useful for, what could it be useful for? And then part of the other thing is just really deeply understanding your actual business problem or fundamental problem in people's lives. And so there's a little bit of this focus on, “Well, LLMs generate great text and they can generate human-readable text from a lot of data.” Maybe we could give people financial advice, right? And that's super cool. I love that idea. It's super interesting. We are working on some things along those lines.
But again, if we come back to half the households in the United States probably don't have $400 in an emergency fund, much less, you know, investable kind of liquid capital on hand, I don't know how much financial advice they need as much as they need immediate transparency and observability into what the parameters of their financial life are. And maybe we'll get that from an LLM, maybe we won't. But sometimes we get fixated on the problems that are easy to fix with the shiny new tool we got, right?
I've done this all the time. Like I needed a brad nailer for putting trim up to do a project in the house, and then I used the brad nailer and it was like $250 and I used it like for seven minutes and finished the thing and then I'm like, “What else can I brad nail?” And I don't have anything else that needs brad nailed. So the responsible thing is to put the tool, you know, put it on the workbench, walk away from it, come back to it when you need it. But there's just an extraordinary temptation to say, like, what else can I sous vide? What else can I cook in a cast iron? How else can I use this one sample that I keep hearing about? Like there's just this temptation, I think, sometimes.
And I know I'm guilty of this. You've fallen in love with the tool and not the problem. This is what a professor told me right before I left academic economics. He said, “I think you like the tools more than you like the problems and the science.” And I thought about that for a couple days and then I, you know, I left and went into IT because he was right. I like the tools better.
Yeah, well, we can control the tools better than the problem and the science.
That's kind of an interesting and disturbing insight into my own personality you've just given me. But thank you.
So you know, you've recently published a couple of articles—for those of you in the audience who haven't checked them out, you can find them on Forbes. You know, I'll kind of talk about them a little bit chronologically because the first one relates to what we talked about here a year ago. So in May you published about the power of slow in the tech hiring process, which you know, again, we talked a lot about it here last time. It's kind of cool to see the culmination of that in the article. But you know, just in terms of key aspects of that, you know, the focus on the value of taking a long-term perspective and investing in projects that might not necessarily yield results right away, and specifically pointing to what you call the tyranny of now, which by the way is a super poetic way to put it.
You know, you're fairly critical of the tendency to focus on short-term productivity and efficiency, arguing that can ultimately limit growth and diversity. So I'm curious, just as you've been out there socializing this message, what's the reaction you've been getting by and large?
Yeah, it's interesting because I think, in the main, like … Look, saying that having a long-term view and building value over time is really rewarding and powerful—no one says, “I think that's really dumb, man. I don't know why you wrote that.” That's not the kind of feedback you get. The kind of feedback that I often get is that many people that are trying to build something on a long-term arc are under a staggering amount of pressure, right, to do something or get something done or deliver something now. I mean, if you read anything that's humorous about software engineering—anything—it falls into one of two classes: something so abstract that you probably need a CS degree to understand the joke, or jokes about business people and project managers wanting things done before they can reasonably be done by a human being in a reasonable amount of time. Like, there's only two kinds of jokes. And the second is far more common than the first. So the feedback I get a lot of the time is, “Hey man, this is all great, but what about if I've got a deadline to do this? I got to do that, we got to deliver revenue on a quarterly basis. What about holding people accountable? How do you know that you're really going to produce actual outputs if you're focused on the long term?” And I think those are all great concerns, right? They're all super valuable.
You know, one of the marks of someone who's thoughtful and sophisticated in their approach to things is you can hold a couple different ideas in your head at the same time that may not necessarily be entirely aligned and you can kind of live with both of those things at the same time. And so, you know, in response to some of the questions or thoughts I get about that work and that series of thoughts, the other thing I would say is it's a lot easier to get small things done than it is to get big things done. And that notion of building value over time and thinking about moving slowly and making decisions that are slow and the idea of doing small things and sometimes, not always but sometimes, thinking small, I think, actually fit really well together.
And in the current climate, which is not lending itself to, say, pulling down big chunks of venture capital investing and, you know, starting massive new companies or hiring hundreds of engineers and pursuing entirely new markets, I think that the notion of small and what small can mean ends up being really beautiful. And in the same way, if you really want to build something meaningful over time, right, you have to commit to yourself that on any given day, the thing that you accomplish will very likely be a small piece of it. I mean, there will be days that you get something done that's amazing. You'll have milestones, you'll have achievements. But on any given day, your actual progress toward the goal could be fairly limited, and you have to be able to live with that. But you also have to turn whatever those things you can accomplish into really concrete, measurable units of value that can fulfill your short-term pressures, essentially.
And this is a thing, I think, that engineers that build products and do software engineering can do really well, where they take where they want to go and where they need to go in the short term and figure out a way to lace those together, right? To weave them in a way that they can deliver a series of incrementally valuable whatever it is, features, components, products, insights, but they're always pointing in a particular direction.
I think, you know, Apple probably typifies this as much as anything else. If you go back now and you look at the first iPhone and what the first iPhone could do, it's pretty limited, right? And then you look at the rise of the iCloud ecosystem and you look at the move toward not owning physical media for music and movies, right? Like, I can't count how many digital platforms I own Led Zeppelin II on. Like, I could listen to that on nine different platforms. It's the same bits, man. But as I move from ecosystem to ecosystem, right?
You just have to repurchase it over and over again, right?
I gotta pick it up again and it's priced in such a way that it's right at my convenience threshold. You know what I mean? Like am I going to pay $7.99 for Heartbreaker or am I going to pay $20 a month for a subscription which includes it? Am I going to do that? And the answer is, yeah, it ends up making sense. Kind of perversely, but it does make sense.
So Apple delivers the device. The device does a great many things that other devices would do. It doesn't do them as well as some other competing devices, but that’s OK, right? Because it gets better every iteration, right? The hardware is better. The camera is better, accelerometer is better. The Face ID detection is better. Touch ID was better than passwords. All of that is better. And then iCloud, iTunes, Apple TV … I mean, I don't know if you realize how dependent you are on these incremental advances, right?
Like yesterday … and our IT department is great. I get excellent support universally. A policy got pushed to my desktop that took away my iCloud thing for desktop, right? And I use that to synchronize things I'm working on between my phone, my tablet, my travel laptop, and then my at-home development machine. And so I have become … I take that so deeply for granted, right? It’s a small thing. It really is a small thing. But I know if I leave a file on my desktop it'll be wherever I need it to be. So when I'm working from home I can look at a document from wherever I am. I can go sit outside. I can use my laptop. I’m not worried about the thing. So like eight minutes into the policy taking away iCloud desktop sync, like I’m panicked and I’m … And what’s going through my mind is what’s going to happen the next time I travel? Am I going to have to copy? Am I going to have to do, like, a full pull from it so I have all the code that I would ever possibly need? Am I going to pull all these documents? Am I going to move this stuff around? And so I jumped on the Teams channel for support and they got it fixed very quickly and I just … but like the level of visceral reaction and panic I had to losing this feature, right?
And so if somebody said to me, “Hey, we launched a new kind of phone and a new this and a new something else,” I'd be like, “Yeah, but am I going to get synced out of it?” Like I have to have that. And Apple built that up over, arguably, 10 or 15 years building those capabilities. And so they did not launch the iPhone with a fully formed iCloud solution, right? They built those things incrementally, and they listened to what people wanted and how they wanted them to work, and their upsides to that feature, their downsides to that feature.
But I used to have an Android phone and then about two years ago that cost of like splitting the ecosystems just got too high, even though there are a lot of things I preferred about the Android phone. And I switched to an iPhone just for that minor frictional inconvenience to go away. But it's so fundamental, you know? It's 100 times a day kind of inconvenience, and it's a real behavior driver for me. And so I think that's a great example of having a long-term vision, right? And then being able to deliver consistently against that vision, small minor improvements time after time after time after time.
And you can kind of, if you look backwards now on some of those great product companies, you can see them kind of announce the vision, but you don't really get what they're saying until they start to deliver the pieces of it. And then you can see it's all laid out in a pretty elegant way, which is, I think, very difficult to do. But when done well, it builds extraordinary value.
Now, yeah, sometimes, you know, what I found with companies like Apple is that they might not even necessarily paint the vision. They kind of put out the little feature and you're like, OK, like right here I got my phone now docked, you know, in landscape mode and it's got this nice little … with the new update it's giving me a nice clock and the weather app. But then I start to notice that apps can now start to feed their own content through this standby mode, which is effectively, you know, this dark horse way that I think they're introducing the notion of potentially a HomePod with a screen on it, right? And this is the interface that they're testing with, you know, so you don't even realize that this is happening. But then you look back, it's like Apple Pay, right? It started with just paying. No one predicted the card would come from it. No one predicted the savings accounts would be tied to it. But it’s just, you know, all of a sudden it’s all here. And it goes to the heart of what you’re talking about as well.
And I don't think Apple has to know that in iOS 17 they're going to ship this horizontal landscape widget feature. They don't have to know that in 2007. They just have to know that there's an interesting problem out there that's not solved yet. And you know the way we talk about it is we talk about building capabilities in the software within reason and then implementing features on top of those capabilities and not trying to anticipate too far forward what features people are going to want. And so we've built things at the structural level of the software that we ended up using in radically different ways than we ever thought we'd use as the market evolves. And I think that's, again, I think it's a way to approach the problem that's really thoughtful and you can give yourself a lot of optionality by taking, you know, shorter steps for sure. But it's tough to be disciplined with that as well, right?
I mean it’s the best thing in the world. Sometimes I wake up and I’m like … I wish I was that guy in the movie where the aliens are coming and you got to do one thing. You're Jeff Goldblum in Independence Day and you have to write the virus on a MacBook (oddly enough) that's going to infect these invasion ships from an ancient civilization from somewhere else. But don't worry about that part. And then all Will Smith has to do is he just has to fly the plane, right? Life is not like that. Life is not like that. I'm trying to write the virus that will infect the aliens. And then, like, they didn't mow the grass very well. So we got to call the landscaper and get them back out. Or I got to go to the grocery store because we got to eat today and we don't want to eat out because it's not as healthy.
And so this notion we have, this kind of like monastic kind of mythical hero … I mean, Elon Musk, about whom I would admit I have extraordinarily mixed feelings, he fits into this kind of almost post-masculine kind of thing, right? He has whatever he calls it, stress mode or beast mode. He doesn't eat or sleep and he does things and he solves problems, and I'm glad that works for him because it doesn't work for anybody else. It’s not actually how it works. You know, work comes out of people collaborating. It comes out of doing some work and then stopping and then coming back and doing some work. It comes out of a lot of different things. And maybe Elon Musk can work that way and it does work for him. But I think that's not a pattern that's available to the rest of humanity because at the end of the day, the garbage truck comes to your house once a week or every two weeks, and they want your trash and your recycling and your compost to be sorted. And so you have to stop what you're doing to do those kinds of things.
And so the business of being a human being and the business of being extraordinary in your field and holding up what you want to do and meeting your teammates ends up being a lot of small bursts, fairly intensive activity marked with periods of the just mindless ennui of being alive and being a human being and having relationships with other people. And it’s really … I’m kidding. It’s really not. Those are, in some ways, I mean aren’t those the really fulfilling parts? Aren't those … like sitting down to dinner with someone and not looking at your phone like, that's pretty powerful, especially after a day where you've done a bunch of work.
So you know this notion of, like, small steps, incremental progress, work at a sustainable pace. I think those things are really interesting, and I think they're more interesting now because of the removal of easy access to capital to drive innovation. And I don't think capital drives innovation in any meaningful way. I think understanding a problem, scarcity, sheer terror, those things drive innovation pretty well, right?
And beast mode.
Not for me. I tried it. It’s just not my thing.
Well, you know, he's not alone in the group of people who could be kind of classified that way that drove, you know, humanity forward. You know, Einstein in his own way was unique. Da Vinci, you know, an outcast in his own society. And then you have, of course, Steve Jobs, who, it's well known, could sometimes be very difficult to work with, you know, like with Musk. I find it fascinating. I always find it fascinating when you find these individuals that are running multiple companies, and there's a few of them out there, you know, to be able to multitask … although I listened to the interview with Walter Isaacson on the Lex Fridman podcast, where he talks about how people like Musk are actually not multitaskers. They’re serial taskers, right? They can go deep for one hour and completely tune out everything else and then, like that, shift. But it doesn’t come without drawbacks, too, I guess.
To be perfectly transparent, I've had periods in my work at Q2 and other places where I did not exert even the minimum amount of effort sometimes to produce kindness in the space around me. And I didn't get any more done during those time periods than I got done as I became a more mature, more fully functioning human being. And if I regret anything, anything ever, it's that it would have cost me so little just to stop and calm down and be kind. It's just, it's pretty inexpensive to do, right? And so again, I think we in the technology industry, in particular, I think we have an unnecessary and unfortunate conflation of being a genius and being an asshole. I just think we do. And in very simple terms, I think it's difficult and problematic for a lot of us. And I have worked with a lot of different people who fall into one of those groups and not the other, and some people who fall into both of those groups.
And ultimately, I don't think they have to be linked in any meaningful way, right? People do things under stress. They get strung out. They don’t take care of themselves. They exhibit behaviors. Other people maybe try and help them or take care of them, maybe they don't, but there seems to be this—and I think it's this kind of TV and movies theatricality, right—that there's some intersection of unpleasant interpersonal behavior and, in particular, technology genius. I mean, I'm not even going to name names of characters throughout the history of entertainment who exhibit those characteristics in tandem, right. Because there's too many, there's too many to list, you know, and it turns out a lot of people that work very hard and are super brilliant, they just don't show up that way because it's not necessary.
And so if it works for someone, to the extent that it works for you, I guess it's a way to go through the world. But I think more and more people are saying that it's just not worth it in the long term, right? Those little bits and pieces of unkindness, whatever they are, whatever form they take, they add up and then they show up in the way that people want to work with you and the way that people react to you and what they want to do.
And so again, it’s another … it’s a small thing, right? It’s just a small thing to think about. “Hey, there’s other people here and we’re working on this together.” And maybe you are the smartest person, maybe you are going to drive the solution, but it just costs very little to show up like a person, you know, in those moments. And more and more as I think about leadership and helping people be effective, which at some level is kind of one of the highest callings of a leader, right? You can't do it all yourself, even if you're extraordinarily talented, you need other people to show up and work hard and you need them to work together and to do that instead of some other thing potentially that they'd rather be doing.
But there's an aspect—and I mean this in the purest possible way—of like performance in leadership. The way you show up. I remember one day, the CEO of Q2, Matt Flake, and I were looking out the window of the third floor of the building we were in, and it had not been an extraordinarily rewarding day for anybody in the company. And our VP of customer support was walking to the parking lot, and he looked like he was carrying a hippopotamus on each shoulder. I mean he was bent. He was bent like a caricature of a 110-year-old man. And Matt said, “I hope nobody on his team can see him right now because I know exactly what kind of day he's had every day when I watch him walk out the door to his car.” And it really struck me because the job's not over just because you're walking to your car. I mean, it should be. Like, that would be fair. But it's, you know, if his team sees him walk out the door that way, how does that make them feel? What does that mean for them? How does he show up when he comes in the next morning? And so I think he got some coaching around that because I saw him walk out a couple days later and he looked like he was, you know, in a military parade.
But that's part of a performance, and it's not that it's not genuine or that it's insincere or that it's phony. It's just you have to show up for the people around you, in your body language and in your tone, and when you need them to be relaxed, you should look relaxed. When you need them to be engaged, you kind of get engaged, and that part of performing that role is really important. And to some extent I believe that's what separates really extraordinary leaders who grow businesses and grow people and grow value over time from leaders that are merely good. And there’s nothing wrong with being a good leader. I’ve made a lot of my career just being a merely good leader, for sure. But what separates them is their ability to show up and execute that performance in a consistent way while also doing the other 47 parts of the job, right? That realization for me was pretty interesting because, again, it's made up of all these small, small things.
Yeah. So, you know, it sounds like, and again I have no insight as to what you'll be talking about in a couple weeks at the BankOnPurpose conference in Austin, but based on what I read on our website, the description is there, and I'll just read it for the benefit of the audience.
Why Small Is Kind of a Big Deal. And on the website it says that, you know, you explore the power of recentering your institution's definition of success around small wins, the wins we’re often conditioned to overlook in a loud, confusing and sometimes scary world. It doesn't just feel good to celebrate the small stuff. It's crucial to keeping your organization's integrity and culture intact because, after all, there's no big win without the small ones that add up to it. So it seems like this is all a beautiful setup to next month. Is that fair to say?
Yeah, yeah, that's super fair to say. I think Bella Pietsch wrote that copy. Man, that's good copy, right? I'm going to give you a little secret. I work with Bella Pietsch and Tia Nieland and I'll bring, like, half an idea. You know, like if you've seen Get Back, where Paul McCartney sits down and he's going to show them how to play Get Back, and he sits down with the bass and, you know, they kind of watch and then they kind of figure it out together. And I'm Paul McCartney here, but I'm not Paul McCartney. That’s how I work with Tia and Bella, and what will happen is that copy that she wrote will influence, in turn, how I think about the topic and how I write it, and then some of the music in the words that she wrote there, and that collaboration is really valuable, really powerful for me.
So I often end up with a presentation, a deck that's built, you know, by someone from marketing, and then copy that's built by somebody else. And then I will work through that material. But a lot of that doesn't just come from me, right? They’ll contribute examples. They’ll contribute thoughts. They’ll contribute copy … Tia does a brilliant job of pulling images that are visually engaging, and they're on the point of the topic but they're also a little off, which creates some space and some room, or just enough cognitive dissonance, I think, to keep people engaged. And so like that copy is a great example. That could have been written a lot of different ways. There's a whole bunch of ways you could say those things. But the magic is what she found in those words and that sequence together.
And I hold no shame about this bias, but this is what makes me a little sad about an LLM, because I know definitively that no LLM in the world puts those words together in that order. It just doesn’t happen. It’s just not gonna happen. And if it did, it wouldn’t happen for any reason that was very interesting. It would happen because statistically they were the most likely words to occur in that order based on the learning of the model across the set of available parameters, which, man, it's great for documentation or telling somebody how to code a socket call, but I need the music in the language. And again it's not a big thing, but it does matter. It matters in a meaningful way. And so thank you for reminding me of that by reading me that beautiful piece of copy that they worked on and they wrote because that description will drive my frame of mind when I give that talk, and it will make me better without question. And it's not … you know, does anybody do anything alone? Is there anything you can actually do alone? I don’t know.
Maybe introspection in the beginning, until you start working with a therapist. Maybe.
OK, maybe. But if you introspect, right, are you not dependent on the conversations you've had with other human beings and the things that they said to you and what they taught you and what you saw from them?
It's a bad example because I think actually …
No, I think it's a great example. I think it's an amazing example because even in the most private times of your entire existence, you are effectively … at least you have the companionship of the ghosts of your past, if nothing else. I mean, to the extent that you can remember people, you are not actually alone, even if you can't remember their names or faces or anything else. So yes, you do it alone, but in some way it's still a fundamentally collaborative activity, right? It's hard to escape.
There is inner dialogue, which sometimes you just got to stop and think, wow, the notion of inner dialogue is just fascinating.
Did you know some people don’t have it?
I actually heard it’s like a staggering percentage of people actually don't have it.
Yeah, I don't remember the exact number, but I remember it recently being in the news and going around, and a lot of people don't. At least they don't perceive that they have an internal dialogue in the way that they would relate it to a researcher, right? Like I'm pretty sure I have one. It sounds like you do. My wife does. Most of the people I've asked do. But yeah, it's just interesting, this notion that what happens in your own head is very different from what happens inside other people's heads. There's a term that's gained some popularity called sonder, which describes the realization that every single person has their own perspective on reality and the universe. And when you walk into, like, a crowded place, there are all these other people having their own reality, and they're in their own world, and they have their own problems, and they have everything else. And I think depending on what kind of day you're having, that can be really, really fantastic, and it can make whatever it is you're doing feel very small, where your problems feel very small. Or it can be totally alienating and really, really terrifying, I guess, depending on what kind of day you're having. But just that notion.
And then do you ever wonder that we're all just simulations built just for you? Do you ponder that?
Yeah, and I find that notion so radically uninteresting that I'm not able to ponder it for very long. You know, it's one of those things where I would prefer for that not to be true, because it makes the totality of existence so utterly meaningless. And then I have a lot of evidence, from trying to get people to do things or understand things I want them to understand, or them trying to get me to do stuff, that would suggest there is no possibility that we are all part of some common simulation that's centered on me. But yeah, that's another interesting one. That's almost like a waking hallucination.
Well, then there's the other extreme. Like, you know, some scientists say that there are infinite copies of our universe, and there's an almost exact copy of our conversation happening right now, but one word might be different, in some other parallel reality. And so now you're contorting in a totally different direction.
That notion is also fascinating. I find that one actually very plausible, probably because of my deep, deep nerdiness, because that notion has been present in, you know, kind of science fiction and fantastical literature and games and stuff for a long time. So yeah, I'm totally on board for that one. That one makes total sense.
So you know, before I let you go, I do want to talk about the second article that I mentioned a little earlier, which you recently published on Forbes. And this is kind of circling back to the topic of … I really love this article because, you know, the title is Beware of Artificial Intelligence with Puppy Dog Eyes. And you kind of tackle this notion across three themes, right? The humanity, the risks, and what you point out, rightfully so, on the surplus, right? And by the way, you also managed to fit in a reference to Wall-E and Frankenstein at the same time. I don't know if that's ever been done, but the Wall-E reference was so on point. You know, for those of you out there who haven't seen it, I highly recommend it. I have little kids, so I actually was lucky enough to see it. Now I've seen it several times. But, you know, the reason I think it's such a fantastic way to portray what's going on here is that it shows, almost in a cartoonish way, the lengths to which consumerism can take us unchecked, right? And like it or not, you know, we might be on that path right now. But then on the flip side, right, with humanity's creations, there's a plausible risk that we might lose control over these things that we're building. And then you kind of tie it up with this bow around surplus, and if you'll allow me, I'll just read once again from the quote directly. “This leads us to a fundamental question that arises whenever there's a technological advance. When people get more productive, who owns the surplus? Is there a way to use the surplus created by AI in a way that's meaningful? Or will we end up as floating blobs void of humanity, staring at screens all day?” Which is what's so well depicted in Wall-E, right? So how much time do you sit around and spend, you know, worrying about that trajectory of where we're going?
Yeah, as a fraction of my day, it's not a huge amount of time. But I think there are a couple of interesting concerns here that intersect in a meaningful way. Because, again, my mind is bent by a long study of economics, and a lot of the systems we have are basically designed to figure out, OK, we know that it takes machine capital, financial capital, human capital, and labor to produce the things we need to live and be comfortable, right? And so you make fundamental decisions collectively about, well, how do we distribute those things? How do we decide who's rewarded more, and who's rewarded less? And I would not claim to be smart or wise enough to say a lot on this topic aside from, you know, general observations. Which is: look at what we pay different people for different jobs, right?
And then you think about it, just ask yourself the question: pick any two jobs, right? Maybe make one of them elementary school teacher, right, elementary school teacher and another job. And then think about what an elementary school teacher is paid versus what the other job pays. And then just ask yourself, if somebody had asked me to write down the salaries of all the jobs, are those the numbers I would have picked for those two jobs? It's just kind of an interesting exercise. And the interesting thing about capitalism that a lot of people bring up, I think somebody even brought it up during the Republican candidate debates recently, is that it is certainly an imperfect system, but it seems to be the best system we've got that's available to us to allow people to progress and arrive at a better position through the exertion of labor and being dedicated and being skilled and capable.
And so, in the same way, we have these modern systems for allocating the surplus from manufacturing, from collective work in organizations, whatever. And these things are all pretty well set at this point, and they change very little over time, realistically. Like, you look at the union negotiations between the people who write and act in television shows and movies and the people who run the studios. That's essentially an argument over the surplus, right? Like, who stands to gain from the surplus that's created if they use AI in that area?
And I think there are two ways this is important. One is, if you don't figure out how to allocate effectively and equitably back to people in reward for their labor and the value they create, you end up with systems that produce very unhappy people. And then you get things like crime, and the inability to afford to buy a home for people in a wide variety of jobs right now. And those systems don't lead to outcomes that we would agree are generally beneficial. And we can argue a whole lot about the very specific thing, who should get what and why they should get it. But it's kind of hard to find somebody who would disagree if you said that somebody who works for a day should be able to afford a place to live and something to eat and some money to put away. But that's a hard problem to solve.
And then on the flip side, the introduction of AI feels to me, and this is an intuition I have, like it will tip that balance potentially more and more towards the people who pay for the electricity that the AI uses to produce new outputs, and away from the people who did the intellectual work of producing the items that the AI is trained on. And this is a personal view of mine, right? That feels inequitable. You know what I'm saying? It just doesn't feel like it's fair. And so we're gonna have to work through what all that is.
But the other thing that intersects with that in a meaningful way is that some of the things that, quote/unquote, only people could do, right? Drawing, painting, making music, writing text that communicates usefully and effectively, can now, in some ways and within boundaries, be done by machines at scale. And this is different from, like, Gutenberg invents the printing press, and people write books, and then they can print books and distribute them, right? That is a technology that facilitates low-friction, low-cost distribution of your thoughts. Whereas the LLM can be used to enhance the things that you think and write, but it also potentially can be used simply to churn out more and more things.
Like, you can tell on the Internet the difference between a piece of art and a piece of content. I mean, you just can. And I'm not saying content is universally bad, but I would like to see a world with more art and less content in it. And these kinds of tools that we're building, they generate things that are content, and they have this very snake-eating-its-own-tail kind of characteristic to me, which is: the model produces output, we train on that output, we get more output out, right? And without new inputs, interesting new insights, people's experience, mistakes, chaos, experimentation, a little bit of, you know, lawlessness, not like literally, but you know what I'm saying, then you're just going to get a narrowing, because the machine will start to eat the content that it's produced, and it will be trained on that. And then we'll end up with this kind of cyclical reduction down to something that's just really not very interesting.
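That narrowing can be sketched in a few lines. The toy below is a stand-in, not an actual LLM: a Gaussian is fitted to data, the next "generation" is trained only on samples generated from that fit, and so on. With no fresh input, estimation noise compounds and the spread of the data collapses, which is the cyclical-reduction mechanism in miniature.

```python
import random
import statistics

# Toy model collapse: fit a simple model to data, generate synthetic
# data from the fit, refit on the synthetic data, and repeat. All
# numbers here are invented for illustration.
random.seed(42)

def one_generation(samples):
    """Fit mean/std to samples, then generate the next generation's
    'training data' from that fit alone (no new real data)."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in range(len(samples))]

data = [random.gauss(0.0, 1.0) for _ in range(20)]  # real, diverse input
history = [statistics.stdev(data)]
for _ in range(2000):  # generations of model trained on model output
    data = one_generation(data)
    history.append(statistics.stdev(data))

print(f"initial spread: {history[0]:.3f}, final spread: {history[-1]:.3g}")
```

Run it and the spread shrinks dramatically over the generations; in the toy, as arguably in the real case, the only cure is to keep mixing genuinely new data back in.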
Yeah, that’s like a circular reference in Excel. It just deteriorates your whole workbook when that happens. That’s an interesting thing. Yeah, I’m surprised I actually haven't spent that much time personally thinking about that.
But with the amount of data and information, you might actually know this: didn't we produce more data in, like, the last 12 months than we did in all of history before that? There's some crazy statistic. And I suspect that's going to go way higher now, because now we have these incredible tools which allow us to churn out a lot more content a lot more efficiently. What's that going to do to the circular reference?
But if that content does not contain any new information, you can have more data that contains zero new information, right? Like if you measure the temperature of something every day, that's some amount of data and a rough amount of information. If you measure the temperature of something every minute, you've got more data, but you don't get 1,440 times as much information about that thing unless the temperature changes very rapidly. If you measure it every angstrom second, you don't get much more information, you get a lot more data, right?
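That temperature example can be made concrete. The numbers below are made up: a reading that changes only once a day, logged once per day versus once per minute. Using compressed size as a rough proxy for information content, the per-minute log is about 1,440 times the data, but the compressor squeezes out nearly all of the extra bulk, because there is almost no extra information in it.

```python
import zlib

# Made-up illustration of "more data is not more information": the same
# daily temperature signal, sampled once per day vs. once per minute.
daily = [20 + (d % 7) for d in range(30)]             # one reading per day
per_minute = [t for t in daily for _ in range(1440)]  # same signal, oversampled

daily_bytes = ",".join(map(str, daily)).encode()
minute_bytes = ",".join(map(str, per_minute)).encode()

raw_ratio = len(minute_bytes) / len(daily_bytes)
compressed_ratio = (len(zlib.compress(minute_bytes, 9))
                    / len(zlib.compress(daily_bytes, 9)))

print(f"raw data: {raw_ratio:.0f}x larger")           # ~1,440x more data
print(f"compressed: {compressed_ratio:.0f}x larger")  # far less extra information
```

The gap between the two ratios is the dilution: extra data, same underlying information.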
And so it matters to what extent human beings keep producing fundamentally new data, or data that contains a lot of information, to further train LLMs on … LLMs are very hungry in terms of their consumption of data. And the rate at which they consume data is arguably higher than the rate at which people produce output. And for an LLM that you want to be good at writing text, you have to train it on well-written text, right? Temperature observations or spatial data from, you know, a robot arm in California are of no use to you. You need well-written text that people produced, because that's where the meaning comes from. And so there's this kind of dilutive aspect, right, where you can make more data from the same information, but it doesn't have any more meaning in it. Fundamentally, that makes sense. And this is the way I think people feel about derivative art and music and films. Like, how many Fast and Furious movies are there?
I don’t even know.
And, look, I'm not one of those people. Like, let people enjoy things. If you like all 10 Fast and Furious movies, good for you. I feel like after three of them, you probably get the point. I feel the same way about Star Wars. I'm a huge fan of Star Wars, and we don't need any more. Like, maybe let's just stop. I'll go back and watch the ones I like and ignore the ones I don't. I don't really need more Star Wars, because each incremental thing teaches me so much less about myself or the Star Wars universe or life or whatever. And you get this kind of inward-looking, navel-gazing minutiae.
Or, you know, I think somebody produced, like, a Drake song, you know, a hip-hop banger, with AI, and it's like, OK. And so if you can do that a million times, you're not creating any new information. There's nothing new in there that wasn't there before. And so you really haven't made anything. It's a copy, in a sense, that's particularly soulless. Like, at least if you watch a kid copy his favorite comic book artist, you get something interesting out of it, right? I mean, I can't draw like John Buscema in the 1960s and '70s when he was doing a lot of Marvel stuff. I can try, and maybe I make mistakes that are interesting or maybe they're not interesting. But in that process of trying, there's something that's actually meaningful. Whereas having an LLM or Midjourney just copy that style endlessly and produce reproductions in a style, in a sense there's just very little there. It's so costless that there are no mistakes in it that are interesting. There's no intersection of multiple styles.
And so I just think it's an interesting technology. It's absolutely fascinating and the way that it threatens humanity is that it distracts us from making interesting new things, right? And I mean, I'll just say one word: Instagram. Like just the collapse of what people think attractive faces for men and women are as a result of Instagram. I mean, that's an actual phenomenon. That's like the real thing. And then there's social consequences to that. There's consequences for young people. I'm glad my kids for the most part grew up before social media was really a thing. I don't know how kids grow up these days in the face of that. It seems like it would be very challenging.
So you know there's … I don't remember who said it. There's a quote where somebody said, “I studied trade and commerce so that my sons might study art and history.” And if we don't use AI in a way that we can all spend more time doing more human stuff, I'm not really sure what the point is necessarily. I think that was kind of what I was trying to convey in that bit of writing, was just there's something lost in it. It’s a valuable tool. It’s awesome. It’s very impressive to look at. I think we should use it in all ways that create value. But there's also a potential for extraordinary loss in that. I like when people write things, even if they’re not very good.
Especially when the capable people all of a sudden feel like … what's the point now, right? Like a good, you know, computer engineer can use these tools to write good code, or potentially really good code. They might say, “Well, what's the point of spending time learning the craft when anyone can just pick up and add a browser extension and kind of match my speed,” right? And so when the people who can organically produce that all of a sudden get discouraged, that's where it atrophies.
Take tools like Midjourney. So I'm not artistic at all. I can't draw, I don't know how to use Photoshop that well. I find magic in Midjourney because, for the first time in my life, I can go from my brain to an image, and I can alter it exactly how I'm imagining it just by switching the right words in the prompt, right? But that's because I can't draw. That's why that feels magical to me. Someone who can draw can already do all that stuff. But I really hope they don't get discouraged just because now you can prompt your way to something that's similar, right?
I think it's interesting, because here's my suspicion. And I guess a neurologist with a couple of million dollars' worth of equipment can prove me wrong or not. The parts of someone's brain that they use when they draw something and the parts of your brain that you use when you describe prompts to Midjourney and get a picture out, I suspect they're quite different. I really do. And I think it's cool that you like Midjourney. That's awesome, right? I'm way too old not to just let people like things. But to me … like, Jack White once said he liked the plastic Airline guitar because he liked to fight with it, and it produced the kind of music that sounds like him fighting with the guitar. And there are a lot of days where that's exactly the kind of music I need, right? And so, like, there have been drum samples and guitar samples and digital audio workstations forever. And what I would say is, you can have access to those sounds, right, and access to the perfect timing of a programmed drummer in Prologix or in Pro Tools, those kinds of things, but that's not where the music is. They're awesome and they're amazing and they're super fun. And to me, with the AI, there's even less of you in it than there is in that, you know what I mean?
And there's some … like, draw a good circle sometime. Just sit there until you draw a pretty good circle. It's hard to draw a circle. I mean, legitimately, it's difficult to draw an actual circle with your hand, an instrument, and a piece of paper. But you will feel something in the drawing of that circle, and it doesn't invalidate using Midjourney to make a picture of, you know, an army of Emperor Penguins in the High Elvish livery of the, you know, final scene of Lord of the Rings attacking SpongeBob SquarePants, wrought 100 meters tall. Like, that's awesome too, right? But just draw a circle, right? And then draw two circles, and now you have the beginning of a bicycle. Or, you know, draw a pair of binoculars, and now you're kind of one-fourth of the way to Wall-E. Like, if you think you can't draw … I'm telling you, you can draw. And so don't let the easiness of some of these things overcome your desire to build those pathways in your brain.
I mean, I like autocomplete in coding. I like all the tools in coding. I like all that stuff. I've played a little bit with the Copilot thing, and I just don't like it. I don't like it. I like to actually make the keys go up and down with my fingers. And here's the secret about coding, right? It's the requirements and the troubleshooting and the debugging and the performance tuning and the scaling that get you. Writing code is difficult, but it can be learned. And it's great that we have assistive tools for writing code, and people can use LLMs to write code. I think that's awesome. But that's a small part of the job, right? And it's necessarily a small part of the job, because a lot of it is mapping between what's the problem and what's the solution. And articulating and understanding great problems is of much greater value than the ability to construct solutions, in my opinion, because it's just a scarcer kind of skill. And so I think it's all amazing. But just promise me you will take a piece of paper and a pencil and you will just draw an object in your room. I'm serious. Just do it. It'll take you three minutes. And then you'll be an artist.
I will. I will. Before this episode publishes, if you watch the show on YouTube, I will put an image of whatever I draw. I have a cup right here. I’ll try to draw this cup, and I promise I will edit it in to be my end of the bargain.
Well, Adam, thank you again for coming back to the show. You know, thank you also for just inspiring all of us here within the company every day, but also to our clients and to the public. You know, thank you for sharing with the broader world. You’re a legend. You're a phenom and we're really fortunate to get to work with you every day.
Thanks, Alex. I appreciate it. This was a lot of fun. I appreciate your energy in these podcasts. You’re a great partner to come and talk about weird shit and I always appreciate it.
You’re welcome to come back here and talk about weird shit anytime.
Awesome. Thanks Alex.
Well, we hope you enjoyed that conversation with Adam Blue and all the different twists and turns that came along with it. If you want to catch more episodes of the show, please subscribe wherever you like to listen to podcasts, including Apple, Spotify, Stitcher, and iHeartRadio. You can also catch the show on YouTube. Don't forget to subscribe and hit that like button. It really helps get the message out there. And if you have a minute to spare, let us know what you think in the comments, and also head over to q2.com to learn more about the company behind the content. Until next time, this is Alex Habet and you've been listening to The Purposeful Banker.