In this episode of The Purposeful Banker, Alex Habet replays an intriguing CONNECT 23 panel discussion featuring Q2's Corey Gross, Adam Blue, and Carl Ryden, along with Steve Gogolak of First Citizens Bank, about the practical applications of AI in banking and business in general, both in the short and long term.
Hi, welcome to The Purposeful Banker, the leading commercial banking podcast brought to you by Q2 PrecisionLender, where we discuss the big topics on the minds of today's best bankers. I'm Alex Habet.
In the last episode, you got to hear from Adam Blue, who delivered an excellent talk around the reemergence of AI that we're all experiencing today. It's really an incredible time to be alive. I sometimes can't help but think what we'd all be doing right now if we weren't facing some old-school banking industry issues, which admittedly have been distracting from a lot of the bigger picture stuff happening in technology these days. Institutions are busy putting out fires while simultaneously trying to pivot to strategically harness all these new productivity gains. It's not easy. But along with the challenge comes a lot of opportunity, and with that opportunity comes a lot of risk, which will undoubtedly impact who is leading versus who is following the transformation in the industry.
Earlier this year, we aired another incredible talk by Carl Ryden from a few years ago, which was years ahead of its time. We followed up that episode with a panel discussion from that same day, aiming to take the insights and translate them into meaningful actions institutions can take as they position themselves for the future. That discussion, too, was ahead of its time.
Today we want to follow through on that same path. On the heels of Adam's talk at CONNECT in May, we wanted to share the panel discussion that followed his talk. What's interesting with this one is, given all the incredible advances that we've all witnessed, thinking through transformation and adoption is becoming more tactical with approachable short-term opportunities. Much more of the aspirational stuff we used to think about back then is now achievable sooner. This panel discussion prompts you to think about the possibilities through wisdom, ideas and great stories.
Corey Gross leads Q2's AI Center of Excellence. In this replay, he sits down with Adam Blue, Carl Ryden, and Steve Gogolak from First Citizens Bank. This is definitely a power panel. You can almost see the gravitational waves from this magnitude of minds coming together. I will just get out of the way and invite you all to sit back, relax, and enjoy the panel discussion on the meteoric rise of ChatGPT and the future of AI in finance.
Hope everyone is refreshed this Thursday morning, feeling good. Excited to welcome you to our panel, which is a discussion about the meteoric rise of ChatGPT, specifically LLMs, the future of AI, and how it’s not just going to transform financial services, but the impact on industry and society in general.
I'm Corey Gross. I was the founder and CEO of Sensibill, which was acquired by Q2 back in October of 2022. At Sensibill we used AI—I guess you would call it narrow AI now—to read receipts and invoices as a means of helping small businesses manage their expenses, get a sense of their overall organizational spend, and use the data that we collected to help financial institutions serve them better. At Q2 these days, I'm heading up our AI Center of Excellence, which you heard Kirk talk a little bit about on Tuesday, where we're going to be focused not just on helping the organization become more efficient and smarter and enabling our team, but also on how this transformative technology can drive value for our customers and partners through our products and services.
I would like to first introduce our esteemed panel of experts. First, to my right, Mr. Carl Ryden. Following Carl, I've got Steve Gogolak. Then Adam Blue, the CTO of Q2, at the end here. Excited to be up here with this crew, folks who have been dabbling in AI and are thinking about all things generative AI and the impact on this industry.
Before I shut up for a bit and let these guys do most of the talking, and knowing that my notes are probably going to go out the window once this thing really heats up, I want to touch really briefly on what LLMs are. Because there might be a lot of folks here who hear about the rise of AI and they're like, "Well, what the hell is the rise of AI?" We've been talking about AI for 20 years. You may be using AI for a number of applications. We at Q2 have AI in multiple products and services that we offer. What's different about this version, this brand of AI, from what you've been used to?
LLMs, or large language models, are being popularized through ChatGPT most prominently, which is really a different way of approaching problems than you would've seen from narrow AI. Sensibill used what would've been regarded as narrow AI, which is a very domain-specific use case. In our case, it was computer vision. It was processing data from receipts and invoices and being able to understand things like what is a merchant name? What is a product description? What is a category of this merchant? Then use that for a very specific tailored use case. Whereas large language models specialize in being general. That's understanding how to translate different forms of language, being able to, in a very human-readable way, take an input, a question that you ask it and give you back a generalized response. Very, very broad applications of large language models versus something that you would get from narrow AI, which is very specialized to a domain use case.
What I want to talk about first ... and before we get into that, I also want to note that there are going to be poll questions here today. If you add this session through your Q2 conference app, you can participate in the poll questions, the first of which I'm about to put up on the screen before I turn it over to these gentlemen. That first poll question is: How do you perceive the potential impact of generative AI on your organization? High potential for transformation? Moderate potential for improvement? Or limited impact expected? As you think about that first question, to open this discussion up: Carl, benefits and use cases of generative AI. As an engineer, a tool builder, someone who gets excited about new technologies, what gets you most excited about the potential of LLMs?
The basis for an LLM is a transformer model, but what gets me ... I get excited about a lot of things, but using it ... I'm a developer. I've written software for most of my life. I think I'm pretty good. I wrote most of the first version of PrecisionLender. I use GitHub Copilot to write code for things I do outside Q2, doing robot stuff and those things. I also use GPT-4 to help me as a muse as I'm writing things. I can tell you it makes me 10 times more productive. There's one thing that's inherent to the model and one thing that sits beside it. The one that's inherent is that these language models, what they're really good at is translating things. Think about them as translators as opposed to knowledge models. One of the problems people have is they ask it a question and expect it to give them an answer.
What it's been trained on is a set of language, so it knows it can go from English to German or German to French or French to Python, Python to C-sharp, really, really well, particularly computer languages because those are bounded. But it's not a knowledge model. It only has knowledge in so much as it needs knowledge to understand language and to translate language. What happens is when you pair the language model with a knowledge model, you can do really amazing things and customize that interaction to each human being.
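Carl's pairing of a language model with a separate knowledge model is, in essence, retrieval-augmented generation: look up the relevant facts first, then ask the model only to translate them for the reader. Here is a minimal, illustrative sketch in Python. The knowledge base, the word-overlap scoring, and the prompt template are toy stand-ins; a real system would use embedding-based retrieval and send the final prompt to an LLM API.

```python
# Toy knowledge model: a few facts keyed by topic. In practice this would be
# a document store queried with embeddings, not keyword overlap.
KNOWLEDGE_BASE = {
    "drug_interactions": "Drug A taken with Drug B may cause swelling; contact a doctor.",
    "dosage": "Drug A: one tablet daily with food.",
}

def retrieve(question: str) -> str:
    """Pick the knowledge entry sharing the most words with the question."""
    q_words = set(question.lower().split())
    def overlap(entry):
        return len(q_words & set(entry[1].lower().split()))
    return max(KNOWLEDGE_BASE.items(), key=overlap)[1]

def build_prompt(question: str, reader_context: str) -> str:
    """Combine retrieved facts with reader context; the LLM only translates."""
    knowledge = retrieve(question)
    return (
        f"Using only these facts: {knowledge}\n"
        f"Answer for this reader: {reader_context}\n"
        f"Question: {question}"
    )

prompt = build_prompt("What happens if I take Drug A with Drug B?",
                      "a patient with no medical background")
print(prompt)
```

The design point is the separation itself: the retrieval step supplies the facts, so the model is never asked to be the source of knowledge, only the translator.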
One of the examples I give folks is you go to the doctor. You're sick. Your eyes are watery, and you can barely see. They give you a prescription. You go to the pharmacist. They fill it. They staple to it a page of text in six-point font that's got all these chemicals. If you're taking this chemical, which you don't understand, along with this chemical, which you don't understand, you might have this ailment, which you can't understand. You couldn't even read it anyway. If you did, you wouldn't remember it because you're sick. That's not written for humans. It's written for lawyers. It's written for chemists.
If you translate that into what that human needs using a language model and say, "Hey look, I know you're sick. You're taking this drug on Tuesdays and this on Thursdays. You've just been prescribed this one. If you start to get swelling in your armpit areas, that might mean something serious. You should call your doctor or text me back at this number and I'll get your doctor. Oh, by the way, I know you're sick and you'll probably not remember what I'm telling you now, but I'm going to check. These typically occur about 24 hours, so I'm going to text you in 24 hours to make sure you're okay. You just tell me if you're experiencing these things and I'm going to get your doctor."
We've used a machine there to create an incredibly human experience at scale, because we can translate the knowledge that exists in those drug interactions into something that fits that person at that moment in that situation. The absolute best. A lot of the themes at Q2 have been dynamic personalization. There's a great article in the MIT Tech Review about AI's use in education. One of the sentences it had in there: In about three years, getting text that was not generated specifically for you is going to seem crazy. The industrial-age idea is that we write one text that's for everybody. What you're going to end up doing is writing text and concepts that are then used by an AI to translate this for her, translate this for him, translate this.
What's good is a lot of the problems they have, like hallucinations, go away because you've separated the language model and the translation problem from the knowledge model, and that's where you're seeing this really take off. The second thing I use GPT for is when I ask it a question about how do I write this code. It gives me a specific example with specific instructions. How I would've done that before is I would've gone over to Google and Googled it. When I go into Google, I'm going into a system that is designed by its purpose to distract me. It would take me to a blog article where there are things zooming in. It will take me to things that want me to register for some newsletter. It takes me over to someplace else: "Hey, if you liked that, you'll really like this." Their whole business model is to distract me from what I'm trying to do.
The actual thing I find so powerful about GPT writing code is it answers my question and keeps me focused on the task. The thing that would've taken me 10 hours filled with distractions, if I got to the end, now I can do in 10 minutes. It really is quite amazing. I think it's going to transform a lot of the world around us and it's going to come really fast.
Steve, hearing this, how do you think about the specific application, not just in the way that LLMs have the potential to create dynamic personalization for your customers, but even how it can drive more efficiency for the financial institution? How do you think about the use of LLMs at the bank?
To build on what Carl was saying, there are so many corollaries between health and finance. I mean, the problem that you described is exactly the problem that many of us are trying to solve for. I have a client. They are in this specific situation. Here are the things that we need to do for them. That is a difficult thing to do at scale. We do it really well on an individual, one-to-one basis. But the question is, does that person sitting across the table in that human interaction have access to understand how to make a match like that? Here's a situation; this is the right way to solve the problem. I think the assistive nature of that, we call it a training use case if you will. The initial part of that would be how do we understand the situation and then turn it into: Here is what you should do; here is knowledge that I can give to you. That is, you're able to accelerate the pace at which you deliver good advice, the right advice, faster.
Initially that's going to be assistive, human-to-human. I'm skeptical that in the near term it's going to be the customer directly interacting with the AI to do that, just because I think you need a check in place to make sure that it truly matches. But I think from a financial services standpoint, it's going to accelerate that aspect of things. It's going to start popping up in all these different places. It'll just become a natural way in which advisors and bankers start to use that assistive nature to communicate. One of the sessions that I was in yesterday had a very, very simple use case around this.
The woman was talking about how they went in and did secret shopping. Then asked a question about how do I do this in the mobile app to someone in the branch and the person in the branch didn't know the answer. Well, in the future, that's not going to be the case. They'll say, "Well, I'm not sure. Let me go find out." Instead of sifting through the intranet and trying to find documentation or find the right person to call, they're just going to ask an assistant and the assistant is going to tell them right away. They're going to be providing much, much better customer service through use cases like that.
Anyone here experienced with ChatGPT, raise your hand. Even if you're using it just to ask it questions about recipes and things like that. So a lot of folks in the room. Demystify this a little bit for us, Adam. It's not great at everything. It's a generalized model. What use cases today do you feel it's most effective at, whether it's for your own personal life or even in the application of financial services? What are the things you think of as the low-hanging fruit?
Yeah, I really like Carl's kind of taxonomy of the transformational capabilities of the neural network transformer models in large language models versus the underlying knowledge model that supplies the content. I saw something interesting by an AI research scientist who says, "As human beings, we conflate performance and competence." He said you can train an algorithm to recognize that there's a dog catching a Frisbee. Then as human beings we could probably ask that same system: Can you eat a Frisbee? Do dogs normally catch Frisbees? Do people play Frisbee in the rain? It won't know any of those things, because it's trained to do this particular task. I've been thinking about tasks in which the transformation piece is really pivotal.
Let me give you an example. We have been building file importers into software for a long, long time. All of you have probably used them. There's a general-purpose one in Excel that I've fought with for many years. It would be really great to determine if an LLM would be good at saying, "Here is a textual description of the set of data that I need organized as a CSV file or a JSON file." It could perform that mapping. Then you would take the JSON file and you would drive it into a piece of code that would evaluate whether it was complete or not and then show you where the gaps were. Because when we receive information from people, it's traditionally narrative and then we have to perform that transformational mapping by rekeying data or something. I guarantee you in your financial institution there are use cases where there is rekeying of data because it's been captured from the end user in a narrative format or a form format.
It would be great to say, "Just send me an email with all the details about your business." Then maybe we can drive that through an LLM along with an adjunct knowledge model to say, "Here's a set of JSON documents that will instantiate this business in the digital banking application." That I think is an interesting one. To the extent that with LangChain and some of the other tool sets we're starting to see, and Carl showed me this specifically, it's pretty fascinating in a discussion we had a couple of weeks ago, the language models can be driven to iterate to improve their outputs. Being able to take the output and then inject another set of texts that says, "You misconstrued what this person meant when they said they live in California." Then that can be included in subsequent events.
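The importer idea described above splits into two halves: an LLM maps the narrative email to structured JSON (not shown here), and ordinary code checks the result for gaps before it drives anything downstream. A rough sketch of that validation half, with hypothetical field names:

```python
import json

# The required schema for instantiating a business record. These field names
# are illustrative assumptions, not an actual Q2 schema.
REQUIRED_FIELDS = ["business_name", "address", "tax_id", "contact_email"]

def find_gaps(llm_output: str) -> list[str]:
    """Return the required fields the LLM-produced JSON is missing or left empty."""
    record = json.loads(llm_output)
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

# Pretend this JSON came back from the language model after reading the email:
llm_output = '{"business_name": "Acme Bakery", "address": "12 Main St", "tax_id": ""}'
print(find_gaps(llm_output))  # fields to go back and ask the customer about
```

The plain code, not the model, is the arbiter of completeness, which matches the point that the transformer performs the mapping while deterministic logic evaluates whether the result can be trusted.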
This decomposition into, this is the source of knowledge and this is the transformational model, I think is one of the pivotal understandings for figuring out how that works. Then you think about Steve's use case, which I call assistive, where the LLM is in my right-hand purview and my customer is in my left hand and I'm mapping what comes across. I think that's a great way to start. Then I do think that there are use cases potentially where you may have a direct LLM interaction with the end user. But you have to be thoughtful about how do you think through guaranteeing the inputs and the outputs are reasonable and they make sense. It's probably by driving really accurate knowledge on the knowledge model side.
In the same way that today we can't show good balances if we don't get good balances, we can't give people great transform text about their financial options unless the knowledge that feeds into that is good. I think sometimes we conflate the competence of the transformer model when really what it's good at is performing this transformative task. We need to keep working on making the knowledge model part of that equation stronger and more robust, and that will create the broadest set of use cases where we can make use of the generative pieces of AI.
I just want to share with you. Do any of you guys have kids who participate in FIRST Robotics? Anybody in here? Maybe not. Ah, an engineer back there. Anyway, there's one division, which is the FIRST Tech Challenge. There are 20,000 teams across the world. But the problem is there's a lot of churn. Kids will get excited about robotics. The parents who care about them really want to help them with this, and they sign up and create a team. The problem is they don't know how to build robots or write software that runs robots. They struggle with it. Then they go to the competition and they get disappointed in the outcome. The team will churn. This is the problem, churn. The source of that is that there's no access to high-quality mentors.
I helped two high school kids who did build a solution to this. They used ChatGPT to create a mentor for every team in the world. What happens is you can pull this into Discord or into Slack, and if you ask it, "How do I program a PID controller," which, if you don't know, is a feedback controller for a robot, it actually says, "OK." It is trained. It has a knowledge base of all of the documentation: Game Manual Zero, Game Manual One, Game Manual Two. The rules of the game. How you build a robot. All the blog posts and articles written about how to do this. All the technical specifications of all the things that go into the robot. That's in the knowledge model. It pulls the right things from the knowledge and puts it in the subject matter context, but it has two other things.
One is it adds user context to that question. Who's this kid? It's a freshman. It's their first year doing robotics. They haven't taken calculus. Then it adds team context. They're on team 8752. It's a first-year team. Or they're on team 33000, which is a five-time world champion team. So you ask, "How do we program a PID controller?" It takes that question, gets subject matter context, then it adds user context and it adds team context. It says, "Translate that knowledge for that person on this team and tell them an answer." Then what it does is it does exactly that. Translate. It leverages the knowledge into a personalized response to that person. It answers that question differently for a freshman on a rookie team than it does for a senior on a five-time world champion team.
I hope you see. These two kids built this thing in a matter of weeks, and any team could pull it into their Discord server. The parent who just cares about their kid experiencing this, who doesn't know how to do robotics, now has access to the best mentor in the world, or someone really damn close, or far better than they had before, at scale. They built this in two weeks. Now, one other thing we added just for safety: We added a policy context. We allowed folks to put in documents around what policies we wanted this thing to follow. So if a kid asks about suicide, it doesn't just answer. It pulls in the relevant policies, and if the policies trigger, it does what the policy says. So you ensure safety.
You don't just take the question and feed it to ChatGPT. You take the question, you pull in the relevant knowledge, and you take this question for this person with this context based on these facts and these policies, combine that together, and know that I've told it to always defer to the policies first and give me a safe and effective answer. These problems that people get concerned about in the news media, that people who don't know pooh about anything get all fired up about, are very, very solvable and are being solved with a small amount of creativity about what you would want it to do. It's immensely powerful. They've turned this into a thing that gives a virtual teaching assistant to STEM teachers, because it's hard to find STEM teachers, that they're going to deploy to every school. These are two high school kids. It is one of the most transformative technologies I've seen in my history, and I've seen a lot.
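The layering Carl describes, policies checked first, then subject, user, and team context assembled into one prompt, can be sketched roughly like this. The trigger words, the policy response, and the prompt format are illustrative assumptions, not the students' actual implementation:

```python
# Hypothetical policy documents: each has trigger terms and a mandated response.
POLICIES = {
    "self_harm": {
        "triggers": {"suicide", "self-harm"},
        "response": "Please talk to a trusted adult; here is a helpline number.",
    },
}

def answer(question: str, user_ctx: str, team_ctx: str) -> str:
    words = set(question.lower().split())
    # Always defer to the policies first: if one triggers, do what it says.
    for policy in POLICIES.values():
        if words & policy["triggers"]:
            return policy["response"]
    # Otherwise build the layered prompt an LLM would receive.
    return (
        f"Subject context: retrieved docs for '{question}'\n"
        f"User context: {user_ctx}\n"
        f"Team context: {team_ctx}\n"
        f"Translate the knowledge for this person and answer: {question}"
    )

print(answer("How do we program a PID controller?",
             "freshman, first year of robotics",
             "rookie team"))
```

The safety check runs before any model is consulted, which is why the same question gets a personalized answer for a rookie freshman but a policy-governed one when a trigger fires.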
You mentioned some of what is being talked about, a lot of concerns being raised, potential limitations to the model, the potential for biases, all the rage in financial services … Are we going to see AI adjudicate loans based on discriminatory reasoning? Steve, how does a financial institution, when they hear this rhetoric and they hear this talk, respond to solutions that take a human potentially out of the loop when making a decision about somebody's potential financial future?
Well, after you initially freak out and say, "Oh my God, how is this going to happen?" I think, Carl, what you have described in many ways is the future job structure of a lot of these things. Trying to understand: What do I want to use this for? What do I want to point it at? Then how do I want to guide the way that we're going to train the model to accomplish the outcome we want to design it for? Then the other piece is the displacement standpoint. One thing that's pretty apparent: There still has to be knowledge feeding the knowledge model, and that knowledge isn't going to just continue to make itself up over time. If the knowledge starts to dry up because no one's producing any knowledge anymore, well, the model itself over time is going to lose its ability to be effective.
There is still all of that and that's still going to be a very human-driven aspect of things. I think in this, there are absolutely new positions that are going to be created saying like, "Okay, how do we aim these things?" But in reality there's an analog construct to what we're doing now. We have organizational design. We have communication mechanisms. We have training programs. We have all of these sorts of things that will just shift in the way that we employ the people to start to work at how do we train these models to do these things.
We don't necessarily have to employ a staff to then go through manual distribution of all of the information. Or we can get much more targeted with the way that information gets distributed, because it can be more of a pull-based model where it's like, "Here's the challenge that I have right now, give me the information I need in the moment." That's much more effective than sitting in a training course where you don't retain as much of the information. I think there's a huge amount of efficiency to be gained, and there's still an orchestration of all of that that has to be done.
Can I add on?
Your job will not be taken by AI. Your job will be taken by a human who uses AI. OK? Because the human who can use it to be 10 times more effective is going to be 10 times more effective. Now that's going to allow them to create value for customers faster. This is the thing. Adam mentioned it. I gave a talk a few years ago, maybe we can find a way to put it in the app or something. It was called Build Iron Man Suits, Not Terminators. You want to think about augmented intelligence, not artificial intelligence or some replacement. We as humans are tool builders. We have been for our entire history on this planet, and we have built a tool that allows us to do things we couldn't do before.
Those things have implications. We build tools that change our world. That world changes us. Then we build new tools that change our world a different way. That's been our history. But on this idea of Build Iron Man Suits, Not Terminators, I think Satya Nadella has a really good way of thinking about it: You want to build copilots, not autopilots. We tend to think of how do we automate this human away. What you want to do is find a human who's really good at something and ask, how do I 10X them? How do I build that person an Iron Man suit? There are people who wrote the Game Manuals, their blogs and things, to help the kids in robotics, but you've got to go search it. You've got to go find it. What we've done is we've now taken that and made them 10 times more powerful. Thinking about it as a force multiplier I think is really important.
Very quickly before we move on, we have our second poll question. Before we get to that, the first: How do you perceive the potential impact of generative AI on your organization? 43% think there is high potential for transformation. We're getting there: a pretty meaningful impact not just on the customer experience, which is the context for our second question, but also on creating more effective, more efficient employees so that they can focus on the more creative aspects of their job, not the menial tasks that take up a lot of time. That second question: In what ways do you think generative AI can enhance the customer experience? Personalized recommendations and offers? Improved fraud detection and security? Automated customer support and chatbots? Streamlined and efficient processes? Which is what we've been touching on here lately.
On the topic of process, we talk about governance, policy, acceptable use of AI at organizations, how AI can be built into products and services responsibly. Steve, we know that this isn't the first time a transformative technology has come to an industry like financial services. When it was mobile, when it was cloud, when it was, heck, internet banking, there's always going to be a concern about data leakage, privacy, security, compliance. What are the kinds of guardrails that you think need to be in place in order to see adoption of generative AI solutions at a financial institution, and even to have employees comfortably and confidently use generative AI to make their work better and become more effective?
I guess there are probably two tracks to this as I think about it. One is getting something out into your folks' hands as quickly as possible, because if you don't do it, they're going to go seek it somewhere else. No matter what sort of policy you have, there's that aspect of finding the right tools to make yourself more productive. People are going to give that a try. There are already banks that have come out with policies saying, "You may not do this on company-owned equipment. You can't upload other data." I think having guardrails out quickly to say, "These are the things that we don't want you to do," but then getting something out there quickly, is important because everyone is going to be curious and want to experiment. As soon as one institution starts to do things, or people hear different use cases about what someone else is doing, they're going to say, "I want some of that too."
This'll happen very fast. I think some of the initial places this is going to pop up are natural. There's big investment from Microsoft in this. Big investment from Salesforce in this. Some of the tools that we're already using, those are going to be the places where these things start to show up. It's going to be a matter of figuring out how we want to turn them on or configure them in such a way that it makes sense, as opposed to saying, "Well, we're just going to leave that over there and not accept that technology," whether it's in the Office 365 suite or it's Einstein GPT in Salesforce or things like that. I think in general it's being quick. That's the first track.
The longer-term track is going to be more around how do you set up a group within the institution to actually figure out what is this going to look like and where do we want to get value out of this? That's going to be the much longer form. What of our data do we want to provide access to? What use cases do we want to support? Those things are going to take time to gain consensus on. I think it's important to start those conversations now because if you're not doing it now, then it'll just take too long and someone else will beat you to it. Again, I think, it will be a big competitive advantage. It's just going to be one of those things that it'll be here before you expect it.
I'm from the south. You can probably hear it from my accent. You can say anything about anybody as long as you say, "Bless their heart." Community and regional banks, bless their hearts, have often been scared of things and try to delay and delay and put it off until the world shows them they can't not do it. Then they do it. You can see this with ... When I first started selling software to banks in 2002, 2004, somewhere in there, we actually sold software to a bank where the whole bank was air-gapped from the internet. They had one computer in the closet that was connected to the internet. Nothing else in the bank was connected. When upgrading her software you'd have to ship her a CD. If it was important, she had to unlock the closet, go in the room, download it, put it on a CD. Or, God forbid, put it on a thumb drive she'd found in the parking lot to move it over into her network.
If you think about the advent of the internet, there was a lot of fear around that. That fear feels a lot like the fear now. It gave people reason to be complacent and excuses not to realize this is going to change the expectation of how customers want to interact with their bank. The folks who got it early had a lead. The folks who got it late survived. The folks who didn't get it died. That timeframe was on the scale of this. You can do it again with mobile. You can do it again with cloud. You can do it again now with AI. I will tell you, in each case I believe the timeframe between "we can't possibly do this" and "we absolutely have to do this" has gotten shorter with each iteration, and the consequences of not doing it have gotten steeper with each iteration.
You need to find ... So what should you do? I think you need to think about how we lean into this and really 10X our customer experience. We've got some good ideas in the center we're working through, to really hyper-personalize it and be the bank you always wanted to be. Then, what do you need to do that? You need some technology, but you also need a set of policies about how to do it effectively and safely, and you've got to discover those. That's somewhat on you, but it's also on a partner. At PrecisionLender, we started with a cloud-based solution for commercial banking in 2009 and turned on our first customer in 2010, which was the early days of cloud, but we knew it was going to be the future. For 80% of our customers, we were the first cloud-based solution they used of any sort for real commercial banking: pricing, relationships, all of your relationships in the cloud. That's the thing.
Our full product was not just the technology. Banks would tell me, "We've never done cloud." I'd say, "Well, do you think one day you will?" "Yes." "Well, then somebody's got to be first." Then picking who's first is really important: somebody who can give you not only the technology, but a roadmap for how to wield it and a set of policies. The complete product, not just technology. We actually wrote the policies for banks and said, "Here's the way you buy it." We wrote the checklists, the due diligence, those things, and we gave them everything they needed to wield this new technology for good. I think we were the tip of the spear in a lot of places for banks accessing the cloud. You need partners to do that for you in all these things. Q2 has a history of doing it at every stage for banks, and I believe we'll be able to do it here as well.
Adam, you touched on this in your presentation yesterday: you hear a lot of calls in the industry for moratoriums on the use of AI, as in, "Let's stop, step back, and relax while the world establishes guardrails." You mentioned that you didn't think that was a very doable idea since the genie's out of the bottle. Can you touch a little more on what that rhetoric is, what is being talked about, and what your feelings are about using this responsibly versus holding things back and trying to put the toothpaste back in the tube?
Yeah, I think it's a thorny question. The moratorium thing to me is just goofy. People are not going to stop working on things to the extent that they have enthusiasm for them and they produce competitive advantage. To Carl's point, waiting to discover what use cases might be useful for this technology, or waiting to build competence in understanding and using it, is not very interesting. That doesn't mean deploying something to production for your customers. It means encouraging the rest of the organization to think about what the capabilities might be. I think the point about partners is right on. Partners can be a way to consolidate some of that expertise and capability. But the other thing I would say is that there are starting to be studies about the impact of using AI as an assistive technology to increase people's productivity.
To the extent that that evidence is there, that folks are more productive when they have an AI assistant or an augmenting technology, that's really compelling. That alone means a moratorium on developing the technology is just not terribly interesting. What is really interesting, and something to think about from the perspective of governance and policy, which are going to be really important, especially in financial services, is how you want to manage the impact of introducing these technologies on your employee base and then, therefore, on your customer base. There's a lot of evidence suggesting that AI technology allows you to essentially transfer some of the value, knowledge, and skills from your best employees, the ones who perform most effectively at given tasks or workflows, to employees who are less effective.
That, I think, is a coaching model that's really well established and really valuable. As you pursue it, you also have to think about what it means for your best employees, who may get the least measurable productivity lift from AI because they already perform at a high level, when the employees they had previously outperformed start to come up to their level. You have to be thoughtful about that, because your systems for reward, motivation, and compensation probably are not aligned with the kinds of changes that introducing assistive AI might cause. This is a narrow example, but you can say, "We're not going to think about it. We're going to have a moratorium. We're going to wait until somebody solves these other problems." I think you need the exact opposite.
To the extent that there is fear, I think it's natural, and I think it's good. Because fear is a neurological, biological signal that there is a scenario that could cause you distress or harm, or that you're unprepared for. If you can competently and calmly accept that fear contains information that's useful for organizational prioritization and planning, leaning into your fear can help you understand what things you should do and what directions you should go. Just to be very clear, I said a lot of things yesterday. Most of them were absolutely serious. Some of them were at least mildly in jest. But don't anybody take away, "Hey, let's all dig holes and put our heads in the sand around generative AI." That's not the point at all.
The point is, let's think about building that skillset. Let's think about the impact on the employee base, and how that drives impact on the customer base, and really be thoughtful about it. Because the capability to have more of your employees execute at a level closer to that of your best employees is very powerful. But you also have to think through what that means for the way you manage and motivate them, and what it means for your customers. If there's anything I've learned in my career in technology, it is that we are pretty bad at anticipating and planning for the unintended impacts and side effects of the choices we make. Being open to observing them and then reacting quickly, I think, is going to be really important.
The thing you touched on here is going to have wide applicability. We have an entire economic system built on people exchanging their time for money. Now we have a system that can do almost human-level work, and it doesn't care about time. It cares about electricity coming in the door. How those economic trade-offs get made is different. But we've been through this before, I would say. You used to trade your sweat and your toil for money, paid for piecework: how many bushels of pears can you pick, and whatever. We moved from that to a different way, where we trade our time, using machines and other things to do the toil, so we're not paid for the sweat of our backs. We're going to have to change how we do this again.
This is one of the things I want to give folks, a little rubric to think about. There's one we always used at PrecisionLender. For those of you who have PrecisionLender, we have Andi, which we've had for a long time, a really great ambient learning loop. We've used language models forever. That's the thing: Q2 is well positioned for this because we've been doing it for a while. We've been waiting for a great language model to make it even better. When people said, "I want a report or I want a dashboard," I'd always say, "Well, if I give you that perfect report, what would you do differently?" They'd say, "Well, I'll make sure it's on so-and-so's desk every Monday morning. I'll make sure they read it, and if they read it, they'll look out for these things. If they see these things, they'll tell Joe to do this."
How about we just build a skill for Andi so that all of that happens, and I'm not dependent on the report landing on your desk and all the other steps in that chain? So we build skills for Andi to do that. The other thing I'll say, and this is another way to think about efficiency: what do you pay people to do versus what do they spend their day doing? Go back to my kids who built the virtual teaching assistant. Instead of giving all the students the same 20 problems, grading the same 20 problems, doing the same lesson plan no matter what the results, and then giving the next set of 20 problems to all the kids, the AI actually assigns the problems based on what each student did.
You'll get eight problems. You'll get 10. You might get four, based on what you need. Then the AI grades those problems, and, knowing the lesson plan for the class tomorrow, tells the teacher, "Hey, Joe's struggling with this. Henry's struggling with this. Make sure you emphasize these things. You might want to spend some special time here." Now, that rubric I gave you: what do you pay them to do versus what do they spend their time doing? They spend their time grading papers and doing clerical work and all the stuff associated with that. What do we pay them for? We pay them to care. This is your thing about empathy and caring. We pay the human, the teacher, to care about that student's development. To care that they're getting the right stuff.
What happens is the AI doesn't actually send the problems to the student. The AI suggests them to the teacher, and the teacher goes, "Yeah, that looks right," and sees why. They accept those suggestions, and that makes it so that for every student we teach, we're better for the next student we teach. We build this compounding thing. Adam said yesterday in his talk that people don't understand randomness. He's absolutely right. People don't understand randomness.
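The human-in-the-loop tutoring loop Carl describes, where the AI proposes a per-student problem set, the teacher approves it, and each graded result feeds the next suggestion, can be sketched roughly like this. This is a hypothetical illustration, not the actual system his kids built or any Q2/Andi API; all names (`Student`, `suggest_assignment`, and so on) are invented for the example.

```python
# Hypothetical sketch of an adaptive, teacher-approved assignment loop.
# The AI suggests problems weighted toward weak topics; the teacher
# reviews before anything reaches the student; results compound back in.
from dataclasses import dataclass, field

@dataclass
class Student:
    name: str
    # Mastery score per topic, 0.0 (struggling) to 1.0 (mastered).
    mastery: dict = field(default_factory=dict)

def suggest_assignment(student: Student, topics: list[str],
                       max_problems: int = 10) -> dict[str, int]:
    """Propose more problems on weaker topics, fewer on stronger ones."""
    weights = {t: 1.0 - student.mastery.get(t, 0.0) for t in topics}
    total = sum(weights.values()) or 1.0
    return {t: max(1, round(max_problems * w / total))
            for t, w in weights.items()}

def record_results(student: Student, graded: dict[str, float]) -> None:
    """Fold graded scores back into mastery (simple moving blend)."""
    for topic, score in graded.items():
        prev = student.mastery.get(topic, 0.0)
        student.mastery[topic] = 0.5 * prev + 0.5 * score

def flag_for_teacher(student: Student, threshold: float = 0.6) -> list[str]:
    """Topics the teacher should emphasize in tomorrow's lesson."""
    return [t for t, m in sorted(student.mastery.items()) if m < threshold]

joe = Student("Joe", {"fractions": 0.9, "decimals": 0.3})
proposal = suggest_assignment(joe, ["fractions", "decimals"])
# The teacher reviews `proposal` and accepts or edits it before it goes out.
record_results(joe, {"fractions": 1.0, "decimals": 0.5})
print(flag_for_teacher(joe))  # decimals still sits below the threshold
```

The point of the structure is the compounding: the approval step keeps the teacher in control, while every graded result sharpens the next suggestion for the next student.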
The other thing that confuses the hell out of human beings is exponential growth and compounding. Sadly, you've picked an industry that requires knowledge of both. This is one of those places where you don't just do the thing, or do the thing slightly better. You separate what you're paying this person to do from what they actually spend their time doing, and then you ask how you can use AI so that with every customer we interact with, we get better for the next customer, and we get exponentially better. That's what the people who are wielding this really, really well are doing. I think that's a worthwhile rubric.
We got five minutes, I don't know. I have a feeling there might be a question or two-
Yeah, let's take some.
You guys might just want to continue to listen to these guys.
Here's my last poll question. Then we'll send off with our view of the impact of gen AI on society, so we can keep a couple of minutes available for questions. Last question, question number three for the audience: how prepared is your organization for adopting generative AI technology? Which may not have made the cut. In any case ... Oh, there we go. Actively exploring and experimenting? Considering implementation in the near future? Or no immediate plans? That's interesting. Actively exploring …
Yeah, that's pretty ... To send off, so we can save a couple of minutes for questions: five years from now, what does the world look like? We touched on it a little bit, Carl, with education, how we can finally enable teachers to focus on student care and take away the menial tasks. That was the theme of how generative AI solutions can support employee growth in organizations: take their time away from the things that bog them down, and focus them on the things they're paid to do. But five years from now, open question for the folks, what does the world look like?
In economics, when you have complements, you need to do this thing and you need to have this thing. You need to have microprocessors and operating systems; they're complements. You need both things to have a full solution. When the price of a complement goes down, the value of the other complement goes way up. When I said, what do we pay teachers to do? We pay them to care. What do they spend their time doing? Administrative stuff and whatever. So the value of a teacher who really cares is going to go way up. I'm an optimist. Pessimists are often right; optimists are often rich. That's a quote, I didn't make it up. I think what we're going to find is that when you amplify the human, the human complement becomes much more valuable.
Actually, we're going to need people who are much more human, with much more empathy, who care much more about the customers. I think what frustrates me, bless your heart, about community banks is that this is what you're really stinking good at. AI allows you to 10X that. Don't try to replace your worst employees with AI. Try to amplify your best employees, the ones who really care, using AI to deliver that experience at scale. That is the most valuable thing. One of the hopeful things I say is that right now we live in a world where people interview to see if they're worthy of a job. If we get this right, we'll have a world where we interview the job to see if it's worthy of a human.
It's going to flip around and be the opposite. You say, "Well, take your stuff out of this system and enter it into that system. Take it and put it here. Read it from this paper. Put it there." Is that a job worthy of a human? Given what it means to be a human, a caring person who can connect with others, I wouldn't want them doing that all day. I'd want them to do something else: have them build the content that we deliver at scale. That's my point.
Steve, Adam, final words.
Five years is hard to predict no matter what. But I would tend to agree. Getting much, much closer to the thing we often talk about: I want to spend less time on the things that don't add value, and more time with my customers or my clients. I think that will be very, very true. I mean, Carl, I think you're spot on. And it's not just generative AI. There are a number of things converging here that will get to a point where tremendously more value can be delivered, because we have the time to deliver it and we have better access to understand what value is necessary at a given moment.
I think the nature of work as a productive human activity will change. I'm not going to make a prediction on how it'll change, but I think it will. I would say very firmly that the relationship between people and their work is going to look very different in five years than it does today. I think AI technologies broadly will accelerate the changes that the natural experiment of the pandemic brought into being: the rise of remote work, the change in the way people attach to organizations and structures and missions and purposes. It will just look very different than it does today. As an optimist, I think there will be some great things about it, and I think there will probably be some things that are extractive and problematic that we'll have to work on solving as well.
We hope you enjoyed that awesome discussion. Before we wrap up today, I have a special announcement. It's really been a blast working on the show for the past year, and I look forward to working on many more episodes in the future. But for the next episode, I'll be turning hosting duties back over to one of the originals who helped propel this show to where it is today. We are excited to announce that after a year's hiatus from the firm, Jim Young has rejoined Q2 to much fanfare around here. If you're a longtime listener of the show, it'll feel like you're coming back home. Be sure to tune in and check him out; you will love who his first guest will be. I'll leave it to Jim to introduce who that will be, so stay tuned. That's it for this week's episode of The Purposeful Banker.
If you want to catch more episodes of the show, please subscribe wherever you like to listen to podcasts, including Apple, Spotify, Stitcher, and iHeartRadio. We're also pleased to announce that we are now also on YouTube. You can check out our channel by searching for The Purposeful Banker in the search box or visiting youtube.com/@thepurposefulbanker. That's @ symbol The Purposeful Banker. You can watch all the great episodes from the past year and don't forget to subscribe to the channel to get all the great content delivered straight to your feed. If you have a minute to spare, let us know what you think in the comments. You can head over to q2.com to learn more about the company behind this content. Until next time, this is Alex Habet and you've been listening to The Purposeful Banker.