Earn It – Chapter 12: How Will You Earn It in the Future? [Podcast]

January 16, 2017 Iris Maslow

 

In this podcast, we review the high points of Chapter 12 of “Earn It,” which is titled “How Will You Earn It in the Future?”

   

 

Podcast Transcript

Jim Young: Hi and welcome to The Purposeful Banker, the podcast brought to you by PrecisionLender, where we discuss the big topics on the minds of today’s best bankers. I’m Jim Young. I’m sharing co-hosting duties today with Dallas Wells. Thanks again for joining us. Dallas and I are back one last time to talk about the book we’ve been co-writing along with PrecisionLender CEO, Carl Ryden. It’s called “Earn It: Building Your Bank’s Brand One Relationship at a Time.”

Since January of 2016, we’ve been releasing the book a chapter at a time, a month at a time. It’s all available at theearnitbook.com. In the spring, the actual physical version of the book, which I still prefer because I’m kind of old school that way, will be coming out and available as well.

We’re getting ahead of ourselves. There’s still the matter of that final chapter, Chapter 12, “How will you earn it in the future?” That’s what Dallas and I are going to discuss today.

First, a little bit behind-the-scenes for the listeners. When Dallas and I first put together the outline for the book, it actually ended with Chapter 11. It seemed like a logical place as we’d gone through all the elements of pricing and relationship building. It made sense to end with a focus on the process of continued improvement.

In the meantime, things have changed a bit at PrecisionLender and quite frankly, in the banking industry. Dallas, can you talk a little bit about how our company’s current focus pretty much necessitated us writing Chapter 12?

Dallas Wells: Yeah, sure. I think actually the way we saw it, it wasn’t just a change at PrecisionLender, it was really a change everywhere. Basically, what happened is machine learning and artificial intelligence really went mainstream. It was the hot topic of 2016 for technology everywhere and even started to creep into some of the sleepier, usually slower-to-move industries like banking.

Machine learning, and essentially the question of “What do we do with these big piles of data?”, is something we’ve dealt with at PrecisionLender from the very beginning in pricing loans and measuring profitability. We get existing customer relationship data from all of our clients, so we’ve got a massive pile of data that we’ve always done lots of fun stuff with. We’ve always had machine learning and artificial intelligence on our radar.

The issue was, are our clients really comfortable with us adding lots of machine learning and artificial intelligence stuff to the way we price loans, the way we interact with our users? We’ve been tiptoeing into it, slowly, but all of a sudden, 2016 was clearly the time when it was mainstream enough that it felt right. It felt acceptable.

It went from this background project for us to something that we started talking about more with some of our, especially our larger clients and prospects. It was like a light bulb went off on both sides of the table. We knew there was some value here. We frankly didn’t realize how much.

The more of those conversations we got into, and the more we dug into it, the more we realized this shouldn’t be just something we do on the side. This should be our top priority as we move forward. We think it’s critical enough for the entire industry that it was worth wrapping up the book this way.

We really feel like not just pricing, but a ton of things in the banking industry are headed this way. We felt like it was not just nice to have, but a must have, to say, “This is what’s next.”

Jim Young: Still, you mentioned a receptive audience out there for this. If you’re from, I’m going to say, “our generation” here, since we’re both solidly in Generation X, you might start conjuring up some images. Actually, that’s what happened. A couple of the jokes when I started telling people what we were doing were, “Oh! Is this going to be Skynet and the Terminator?”

Dallas Wells: Yeah.

Jim Young: There is this sort of HAL, not-opening-the-pod-bay-doors thing, this image of artificial intelligence as something space-agey and, honestly, a little bit ominous. How do we get beyond that and let people know this isn’t the enemy? That this is, as we said, the next thing?

Dallas Wells: I think a lot of those pop culture images have actually been to our detriment with this, because when we first started talking about it and tiptoeing into it, that “rise of the machines” reaction was pretty common. That’s why we’ve always been a little reluctant to push too hard on this.

I think what people started to figure out is that, Number One, just the sheer velocity of data creation. Now that everything is online and everything has analytics and everything’s generating data and using data sets, people are starting to figure out that there’s clearly value there. There’s clearly a hidden meaning, little treasures waiting to be found there.

We’re all ready to dig in and find those things. Really, AI and machine learning, especially in their current state, that’s what they are. Right? Just a more efficient way for us to go about analyzing and digging through data.

The short version is that we’re not talking about building Skynet. What we’re talking about building is a way to program software so that we don’t have to tell it through the code, “Here are the 2,000 things that you need to look for.”

Instead, what we do is teach it to navigate its own path through the data. We say, “Here’s the data set. Here’s a rough starting point. Now, software, you go find those 2,000 things that are most important.” What that means is that we remove some of the human bias, the things that we expect to be important, which a lot of times are not, and instead we find the things that really are important. The math and the data prove those out.
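To make that idea a little more concrete, here is a minimal sketch in Python of what “letting the software find the important things” can look like. It is only an illustration, not PrecisionLender’s actual system; the synthetic data stands in for historical deals, and the feature names are generic placeholders.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Stand-in for historical deals: 20 numeric attributes per deal plus a known outcome
    # (1 = went bad, 0 = paid as agreed). Real loan data would replace this synthetic set.
    X, y = make_classification(n_samples=5000, n_features=20, n_informative=5, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, y)

    # The model, not the programmer, decides which attributes carry the signal.
    ranked = sorted(enumerate(model.feature_importances_), key=lambda p: p[1], reverse=True)
    for idx, importance in ranked[:5]:
        print(f"feature_{idx}: importance {importance:.3f}")

The point is the last few lines: nobody told the model which of the 20 inputs mattered. It ranked them itself from the labeled outcomes.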

As we explain to people what it really means, which is the different way of getting to that answer, a more efficient way of getting to that answer, it’s not really that scary. Right? It makes sense. People say, “Yeah, we’re in for that, right? That’s not a robot coming to attack me or steal my job.” This is just simply efficiency, really, when you get right down to it.

Jim Young: One thing I think we’ll touch on a little bit is Carl’s version of AI. Always one with a turn of phrase, he calls it “augmented or amplified intelligence,” rather than “artificial intelligence,” which I think is a lot more accurate.

Not space-agey and not ominous, but then people are still at, “But it’s Google and it’s Amazon. It’s self-driving cars and the way Amazon seems to know what I want to order before I even order it.” In this chapter, we make the case that banking is actually an industry that’s really poised to see the potential of AI, that’s ripe for its impact.

Dallas Wells: What we used in the chapter to show why was actually a story from my old banking days, about one of my former co-workers, Ed. Every banker who hears this is going to have their own “Ed” come to mind. My version was working at a bank that had really just pristine credit quality, and had always had pristine credit quality through the ups and downs of several cycles, and where that credit quality came from was Ed.

Ed was the guardian of the loan portfolio. Everything had to come through Ed. Ed had this knack for sniffing out a bad deal. It actually got to be pretty entertaining. We would come to loan committee every week and wait to see who was just going to get lambasted by Ed for bringing in the deal that they thought was good. They were clearly mistaken.

He had a real gift for it, right? It was almost like a sixth sense. The numbers, as they were laid out in the analysis and in the write-ups, would look good. They would make sense. Everything was within policy. A good, strong customer that we’d already been doing business with brings in this new project that seems to make sense to everybody.

Then Ed, after looking at it for two minutes, digs in, asks three questions, and finds this, “Wow! That might just be a critical flaw. That might be something that derails this entire project.” It was fun to watch from afar. It was no fun when it was your own deal that you were bringing in.

I asked Ed one time after one of these meetings where he had really just torn into somebody. I said, “Ed, how do you know these things? How do you find something when we’ve had lenders and analysts digging through this deal for weeks, and they bring it in here and in 90 seconds you’ve found something that they’ve all missed?”

To paraphrase his answer, it was basically like, “Look, I’ve been down this road before. I’ve seen hundreds of these deals because I’ve already had to clean up two banks along the way. I’m sure as heck not doing that again.” He had this gut feeling, but it was all shaped by his previous experience. Right? He’d seen deals like this. This was the outcome.

That is basically the human form of what we’re building with AI, or what is potentially there for AI. We have these data sets that say, “Here were the circumstances at the beginning.” Here’s the critical part, what Carl calls “good, known label data”: we know what the outcome was. Here’s what the loan looked like at the beginning, and we know, did it pay as agreed? Did it default? Did something happen in between where it went haywire?

We now have thousands-and-thousands of instances of these. Whereas Ed, during his career, might have seen one particular type of deal maybe a dozen times, and in that dozen times, three of them went bad. Ed’s gut feel now is, “When I see another one that matches up that pattern, my automatic answer is ‘No.'”

Well, computers don’t have to look at just a dozen of them. Right? They can look at 100,000 deals like that. You remove a whole lot of things. You remove what we call the “lizard brain” and all of the things associated with that, right? All the faults that we have as humans in trying to recognize patterns, patterns that computers are really, really good at finding.
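As a rough, illustrative sketch of that “good, known label data” idea, the snippet below trains a simple model on a handful of made-up historical deals with known outcomes and then scores a new deal against that pattern. The field names and numbers are invented for the example and do not come from any real bank’s portfolio.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Each row is a historical deal: how it looked at the start, plus the known outcome.
    history = pd.DataFrame({
        "loan_to_value":     [0.65, 0.80, 0.95, 0.70, 0.90, 0.60],
        "debt_service_cov":  [1.60, 1.25, 1.05, 1.40, 1.10, 1.80],
        "years_in_business": [12,    8,    2,   15,    3,   20],
        "went_bad":          [0,     0,    1,    0,    1,    0],   # the known label
    })

    model = LogisticRegression()
    model.fit(history.drop(columns="went_bad"), history["went_bad"])

    # Score a new deal that "looks fine on paper." The output is a probability, not a verdict.
    new_deal = pd.DataFrame({"loan_to_value": [0.92], "debt_service_cov": [1.08], "years_in_business": [3]})
    print("estimated probability of trouble:", model.predict_proba(new_deal)[0][1])

Where Ed might have seen a dozen deals that looked like this, the same code scales to hundreds of thousands of rows without forgetting any of them.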

Jim Young: You don’t have the recency bias. You don’t have the confirmation bias. All those things.

Dallas Wells: Exactly. Yeah. You steer clear of those things. You also get big enough sample sizes. Even the most experienced, very best bankers have seen what, 1,000 deals in their life, maybe 2,000? I don’t know, that’s a wild guess. Even if you’re somebody who is looking at deals day after day for decades, you can never replicate the size of the data sets and the number of iterations that a software program can see, and it can find links between things that we would never find.

That’s the basic story of how Ed protected credit at that little bank, and Ed was pretty good at it. As we can see by the industry’s performance, every bank has an Ed, but some of the Eds aren’t as good, right? They mismatch their patterns and end up in bad shape.

There’s a version of Ed for credit. There’s a version of Ed for pricing. There’s a version of Ed who handles the investment portfolio and makes securities decisions. Right? Everybody’s forming judgments, making executive-level decisions, based on pattern recognition: “I’ve been doing this for a while. I’ve seen this before. I know what the outcomes were, so here’s what I’m going to do this time.”

That’s the basic concept across the bank: “Let’s plug in the data,” and not replace Ed with it, but inform Ed. “You don’t have to rely on your gut instinct. Here’s the real pattern. Now, you can still make your judgment call, but it’s a much better-informed judgment.”

Jim Young: Let’s get to that actually, because that is … Now, we’re getting into the augmented amplified intelligence. If I’m a banker listening up to this point, until you said those last couple of sentences, I might be thinking, “Great, the computer is Ed.”

Dallas Wells: Time to look for a job.

Jim Young: Exactly. Talk a little bit more about how the human fits into this and how this is, again, as we call it, “augmented or amplified intelligence” and not purely artificial intelligence.

Dallas Wells: Carl actually has a really good way of explaining this, which is basically to look at the set of things that happen as we make a decision in banking. At the top of the [inaudible 00:13:19], you’ve got some basic gathering of information and some simple analysis.

Think of putting together your Excel spreadsheet or your spreading of financial statements.

From that information, there’s then going to be, basically, some prediction. Right? “Based on what we know so far, here’s what we think will happen going forward.” Below that is some human judgment. “Based on what we think is going to happen, do we think this is an acceptable risk versus return?” Then, we’re going to say yes or no to whatever deal is in front of us.

What’s happened is that there’s basically a line in there between what a computer can handle and what a human can handle. That line used to be right up where that analysis stuff happened, right? Computers are really good at calculating numbers. Put a number “1” and a number “2” in the spreadsheet, and you can say, “This cell plus this cell,” and now you have three. We can replicate that a whole bunch of times, and it’s way more accurate than we can be.

That’s where the line was. Humans had to take that and extrapolate that, make some predictions, and that was where the human value started. Then, we had to use those to make a judgment. The line is shifting down. Instead of just that top line analysis, now it’s shifting down to also include that predictive stuff. We can build these machine learning models to say, “Based on this analysis that I saw, here’s what I think will happen based on all the times that I’ve seen it before.”

The predictive stuff gets much better, just because, again, that’s something that computers can be better at than humans can. Now the line has moved down, but you still have a human who has to say, “Based on that analysis, based on what we think is going to happen, we still have to say yes or no. Is this still acceptable within our risk framework?”

I have many more data points and much more guided information in front of me now, but a human still has to make the decision. It’s been augmented by that intelligence.

As Carl puts it, “Once the things above the line, once those become commoditized and automated in a way by the machines, the value of the decision gets much higher, because there’s less value in just making a prediction and being good at making a prediction, because the machines can all do that really, really well. Now, the value all falls there with the human making the decision.”

It’s amplified, it’s augmented. Ed doesn’t lose his job. Ed just has more information in front of him to try to make his decision.
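A very schematic sketch of that “line moving down” idea, in Python: the first two steps, analysis and prediction, are handled by software, while the final accept-or-decline judgment is deliberately left to a person. The structure and numbers are illustrative only, not any vendor’s actual workflow, and a trained model would normally sit where the placeholder prediction function is.

    from dataclasses import dataclass

    @dataclass
    class Deal:
        amount: float
        rate: float
        debt_service_cov: float

    def analyze(deal: Deal) -> dict:
        # Step 1: the mechanical analysis a spreadsheet already handles.
        return {"annual_interest": deal.amount * deal.rate, "coverage": deal.debt_service_cov}

    def predict(analysis: dict) -> float:
        # Step 2: the part the line has now moved past. A trained model would sit here;
        # this placeholder simply maps weaker coverage to a higher estimated risk.
        return max(0.0, min(1.0, 1.5 - analysis["coverage"]))

    def recommend(deal: Deal) -> dict:
        # Step 3 stays human: the software surfaces the estimate, a banker says yes or no.
        risk = predict(analyze(deal))
        return {"estimated_risk": round(risk, 2), "decision": "left to the banker"}

    print(recommend(Deal(amount=1_000_000, rate=0.05, debt_service_cov=1.1)))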

Jim Young: Yeah. Let’s bring this now full circle, all the way back to our intro, where we talked about good old George Bailey from It’s a Wonderful Life and this prototype of what we believe banking is and can be: a community builder.

Now, we’re at Chapter 12. We’re talking about AI and massive data sets and all that sort of thing. Once we get to this point, how close or how far off are we still from the George Bailey model of a banker?

Dallas Wells: I think that touches on the augmentation or amplification we were talking about before, where again, not just the decision-making, but also the human relationships become a lot more important. Those relationships can be improved by that data set as well.

The simple example here is, if you go to buy something from Amazon, right? You log into Amazon and you start searching for a TV, right? You’ll look at two or three and start to look at them. The whole time, what’s happening, is there’s machine learning going on behind-the-scenes. There’s an AI component to this that people don’t really necessarily see or care about.

What Amazon is doing is they’re saying, “Okay, we know who this customer is. We know what they’ve looked at. We know what they’ve bought in the past. We know what things they’ve reviewed in the past. We know which things we sent them and they sent back. We have all this context about who this person is, how they’ve behaved before, and what they’re looking at today.”

We compare that with all the context we have about people with that same profile. Right? People who took a similar path and looked at these similar things: what did they look at, what did they buy, and were they happy with it or not? Again, we have that good, known outcome, so we can start to pair things up and say, “I bet the TV that would best fit what Dallas is looking for is this one.”

Once I start buying that, then it says, “Customers who bought this also bought these things.” There are all the cables that I need. It starts to feel like a very personalized experience because it is, and all of that is driven by technology. Really, the bank can function much the same way as we get better and better at this.
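For a sense of how simple the core of that “customers who bought this also bought” pattern can be, here is a toy Python version built on plain co-occurrence counts. The purchase history is made up for the example; real recommendation systems at Amazon’s scale are far more sophisticated, but the intuition is the same.

    from collections import Counter
    from itertools import combinations

    # Made-up purchase history: each set is one customer's basket.
    purchases = [
        {"tv", "hdmi_cable", "wall_mount"},
        {"tv", "hdmi_cable", "soundbar"},
        {"tv", "wall_mount"},
        {"laptop", "laptop_bag"},
    ]

    # Count how often each pair of items shows up in the same basket.
    pair_counts = Counter()
    for basket in purchases:
        for a, b in combinations(sorted(basket), 2):
            pair_counts[(a, b)] += 1

    def also_bought(item, top_n=3):
        related = Counter()
        for (a, b), n in pair_counts.items():
            if item == a:
                related[b] += n
            elif item == b:
                related[a] += n
        return related.most_common(top_n)

    print(also_bought("tv"))  # e.g. [('hdmi_cable', 2), ('wall_mount', 2), ('soundbar', 1)]

Swap the baskets for relationships, products, and loan structures, and the same pattern-matching idea carries over to the CRM and pricing examples described next.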

Salesforce is on kind of a similar path as us. They made a big announcement this year: their big thing now is artificial intelligence, which they’re calling “Einstein.” They’re adding machine learning into their CRM product. If you think about this from the perspective of a banker, if you have those two things working together, you now have a CRM that looks at a customer the same way Amazon looks at a customer.

I know, “What have I done before with this borrower? What path have they taken to this point? What opportunities have we looked at with them before? What was the outcome of those? What do other customers on the same path look like?” That’s just the sales part of it in the CRM. We can do the same thing in pricing.

“We structured the deal like this. We’ve seen a lot of deals like that before; here were the outcomes. Here’s what worked. Here’s what didn’t.” From the borrower’s perspective, they get a very personalized, very customized interaction with their banker. Their banker comes across as not just an expert in banking, but an expert in their business and in their relationship.

That’s George Bailey to a “T” of knowing exactly what somebody’s going through, what they need, and providing that value. That’s where we’re headed with this. That’s why we think it’s so important.

Jim Young: Absolutely. The human touch is always still there. It’s always the final, most important touch when it comes to relationship building at banks.

That’ll wrap it up for this episode. Thanks again for listening. If you’ve been with us since our podcast introduced this book way back a year ago, thanks for sticking with us as we plodded our way to the finish line. A reminder: you can go to theearnitbook.com to read each section of the book.

Details on this particular episode will be in the show notes that you can always find at explore.precisionlender.com. If you like what you’ve been hearing, make sure to subscribe to the feed in iTunes, SoundCloud or Stitcher. We love to get ratings and feedback on any of those platforms. Thanks for tuning in. Until next time, this has been Jim Young and Dallas Wells. You’ve been listening to The Purposeful Banker.

 
