Shelly Liposky discusses how AI can help predict and prevent operational loss with Dr. Roger Noon at the 2021 XLoD Global Conference.
Roger Noon: Welcome everyone to this case study that will focus on how one bank is using AI in a novel way to mitigate the risk of operational loss. With me today is Shelly Liposky, who is Global Head of Business, Risk and Solutions at BMO Capital Markets. It's great to have you with us, Shelly.
Shelly Liposky: Thank you for having me Roger.
Roger Noon: So, can we get straight into this, Shelly? Could we start by looking at the genesis of this project? I understand you've named the tool OLI. What is it, and what's the background that influenced you to invest in it?
Shelly Liposky: So, we developed OLI, our Operational Loss Intelligence tool, a couple of years ago to predict and help prevent operational losses. We implemented it in capital markets at the beginning of 2020, and we're now working with our payments, fraud and operations groups to predict and help prevent losses in those areas. It's a very exciting time.
Roger Noon: Yeah, great. And those are some fairly big claims, predicting and preventing operational loss. So where, and how, can OLI be used to do that?
Shelly Liposky: Right. So, for the most part, humans still initiate transactions, and they execute at least some steps in transactions across our industry. Those manual steps are what's susceptible to the human error that leads to a loss.
So, think about trading, operations, payments, fraud, insurance, customer service centres: anywhere there's high volume and people are executing those manual steps. To prevent operational loss, we can either change human behaviour, stopping a lot of the errors from happening, or we can change the process.
So, here's where OLI comes in. Instead of telling colleagues every day, "be careful, be careful, be careful," OLI predicts which days have a higher risk of operational loss, usually caused by human error, and we provide a very specific behavioural prompt to the actors in that process. For example, many of our errors, and the losses that follow, happen because of input errors, missed execution, or miscommunication.
So again, instead of telling people to be careful, we say, "Hey, today you have an X percent chance of an event happening: slow your input, clear your queue, confirm your communications." And that is the prompt to change behaviour on that day.
Roger Noon: Yeah. So, there are some interesting things going on there. You're using the data you've got to create behavioural cues that change the way people approach their jobs. How do you stop fatigue setting in, where it becomes just another message to ignore, the usual human tendency to settle into habits of normality?
Shelly Liposky: Well, we're using OLI to marry internal and external data and generate a signal every day, but we only warn traders or bankers or folks in operations or payments to change their behaviour on the days when the signal is above a threshold. So you're not going to get fatigue, because you're only giving them that prompt once a week, or every two or three weeks, or however often OLI reaches the threshold you've set. The fatigue before was different: I already know I'm trying not to make mistakes, so telling me to be careful wasn't really helpful.
I'm already trying to be careful. Only tell me when there's a higher risk of something going wrong. That's why we don't really see the fatigue. It takes time to get there, though. Training the models and really understanding what the right threshold is takes some time.
But once you get there, you really do only worry about the days where there's higher risk.
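[Editor's note: the threshold-gated prompting described here can be sketched in a few lines. This is a hypothetical illustration only; the risk scores, the 15% threshold, and the prompt wording are assumptions for the sake of the example, not details of OLI itself.]

```python
# Hypothetical sketch of threshold-gated behavioural prompts.
# Scores, threshold, and prompt text are illustrative assumptions.

DAILY_PROMPT = "Slow your input, clear your queue, confirm your communications."

def prompt_for_day(risk_score, threshold=0.15):
    """Return a behavioural prompt only when the predicted loss risk crosses
    the threshold; on all other days, stay silent to avoid alert fatigue."""
    if risk_score >= threshold:
        return (f"Today you have a {risk_score:.0%} chance of an event: "
                + DAILY_PROMPT)
    return None  # below-threshold days get no warning at all

# Example: daily scores a model might produce over a week (made-up values).
week_scores = [0.03, 0.05, 0.21, 0.04, 0.02]
alerts = [p for p in (prompt_for_day(s) for s in week_scores) if p]
# Only the single above-threshold day generates a prompt.
```

The key design point is the silence on ordinary days: the prompt stays rare, so when it does arrive it carries weight instead of becoming background noise.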
Roger Noon: Yeah. So, this is the real preventative element. And I can see that very clearly behaviourally: I know I make a lot of mistakes on a Friday afternoon just before I go on holiday, say, so I'm prompted to be aware of that.
What about the process element? Because it sounds as though you now have an ability to identify weak processes, or points in a process where more mistakes are being made, and then focus your resources and prioritize your risk mitigation around that.
Shelly Liposky: Right. So, there's an extraordinary amount of metadata in processes.
And if you think about an inventory of processes, every process has a name and a description. You've got alignment to an operating group, a line of business, a desk, a jurisdiction, sometimes a currency; you have an asset class, you've got sectors.
You have how many people touch that process, the systems used in the process, how many approvals there are. There's so much [inaudible] in the processes. So just as we take trading or payments or fraud data, marry it with external data, and determine which days carry higher risk, we can do the exact same thing here.
We can take that metadata, marry it with other internal and external data, put it through machine learning models, and identify which processes have a higher risk of breaking, which processes have a higher risk of the errors that lead to pretty sizable losses. So why is that valuable? Well, instead of saying you have to straight-through process everything, which can cost you hundreds of millions of dollars, you can say, actually, our real risk is coming from this handful of processes.
So, let's fix the processes that have the highest risk. You could spend a million dollars, or $500,000, on those and prevent, you know, $3 million or $5 million in losses. So, where's the return greater?
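[Editor's note: ranking processes by predicted break risk, so that remediation spend targets the riskiest handful, can be sketched as follows. The process names, metadata fields, and hand-tuned weights are illustrative assumptions; the approach described here would put this metadata through trained machine-learning models rather than a fixed formula.]

```python
# Hypothetical sketch: score processes from their metadata and rank them,
# so fixes go to the few highest-risk processes instead of automating
# everything. All names, fields, and weights are made up for illustration.

processes = [
    {"name": "wire payments",     "people": 12, "systems": 5, "approvals": 1},
    {"name": "trade capture",     "people": 4,  "systems": 2, "approvals": 3},
    {"name": "client onboarding", "people": 9,  "systems": 7, "approvals": 2},
]

def break_risk(p):
    """Toy stand-in for a trained model: more hand-offs and more systems
    raise risk; more independent approvals lower it."""
    return 0.05 * p["people"] + 0.08 * p["systems"] - 0.10 * p["approvals"]

# Rank so the remediation budget targets the riskiest handful first.
ranked = sorted(processes, key=break_risk, reverse=True)
top_fix = [p["name"] for p in ranked[:2]]
```

The return-on-investment argument falls out of the ranking: fixing only `top_fix` costs a fraction of straight-through processing everything, while addressing the processes contributing most of the expected loss.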
Roger Noon: Yes, yeah. So the use case is pretty compelling, the way you've laid it out for your particular business. It would be nice to broaden the conversation around your experience of using very new technology, and its applicability in a more general sense across the organization.
So, what are the ways you could use this approach in other businesses and functions?
Shelly Liposky: Right. So, in payments, for example, it's the same thing we're looking at in capital markets: the reason for the errors is usually an input error, miscommunication, or missed execution. Somebody inputs the wrong thing, they forget to do it, or they hear something differently than was intended. And while we usually get the money back, the amount of time spent sending the information, apologizing to the client, and letting the regulator know we just sent a billion dollars out to the wrong place...
That's a lot of time. So, in payments we see being able not only to stop errors from happening, but also to fix processes, the same as we saw in capital markets. The fraud space is interesting because we see a lot of business email compromise, and a lot of fraud that changes the information in the payment order.
So, a client goes into a payment platform every month and sends X amount of dollars to a client or a vendor. They'll typically look at the currency, and they'll typically look at the name of the client, but they typically aren't going to look at the 9, 12, 7, or 15-digit account number.
And so when a fraudster has access to that static data, they can just change the account number. What we can do is say, "Hey, you, the client, have a higher risk of fraud today," and prompt them behaviourally to check the account: not just the client name and the currency and the jurisdiction, but to verify the account number.
So this is an example of changing a client's behaviour: going outside our own walls and prompting a client to protect themselves.
Roger Noon: Yeah. Listening to you, this is pretty transformative, and it's quite inspiring to hear that what we've talked about for so long in the risk space is actually out there and being used.
So, certainly if I were your sponsor, Shelly, I think you've sold me already. But in the real world, how comfortable are people with embracing these ideas, and embracing AI as a technology that can really help us be much more proactive in the way we manage risks?
Shelly Liposky: Well, it's a good question. I find it's very helpful to remind people of the Internet of Things. Just look at how you and I are having a conversation today. We've got our iPhones, our Apple Watches, all kinds of Android devices, Alexa in your home, AI in your car. There's AI all around us in the Internet of Things.
So what's different is the application of it to our industry. And there's already acknowledgement that fintechs are a very important part of this industry. So, it's becoming a bit easier, I think, to say, well, why wouldn't we use AI in risk? If our business and our products and our competitors are using this kind of technology, why would we still use our little human brains to manage risk in that space?
So, it's becoming easier. I think it takes a little bit of time to explain how valuable it is, but I do think the concept of embracing AI to manage risk is becoming a lot more normal because of the way the industry is going.
Roger Noon: Yeah, great. And certainly, in terms of being able to apply this to a number of different use cases: if you've got the problem and a solution that can be aimed at the problem, it's an awful lot easier to sell than having the solution and trying to find a place to put it, as maybe some of our vendors are sometimes a little bit guilty of. So, what's next for you, Shelly? What are the next steps in your group, and where do you want to take this idea further?
Shelly Liposky: Well, again, I think about it as human plus machine. So top of mind for me are third-party suppliers, conduct, algorithmic risk, and process. Think about how the industry has outsourced over the last 10, 15, 20 years: we've taken processes that used to be done inside a company and just shifted them to somebody else.
There's inherent risk there. I talked to a client the other day who said, you know, we moved our process to a vendor and now they're making the same mistakes we did. So these technologies can be applied there as well. Then there's digitizing voice. We've done a lot of e-comm surveillance in the past, but the voice data kind of just sat there, and people listened to the phone to see if there was something wrong, or to adjudicate hits.
Now we can integrate that into machine learning, combine it with other data sources, and see what that tells us. That's an interesting space. On algo risk, I think of an algorithm as doing the work that a whole trading floor, or a whole operations or payments group, might have done at some time in the past. And so I treat the algos like people. What is an algo coded to do that a person is not allowed to do? And we make sure we have separation between developers, owners, and users, so that a developer can't write code and put it into production without somebody independent looking at it. I think there's a lot of work in that space. And then of course, we talked about metadata and processes.
Roger Noon: Yeah, yep. You're clearly on the leading edge here, and certainly have one foot in the future. Just to pull you back to your early days and what you've learned through all the work you've done:
for people listening, what advice would you have for other institutions and people looking to do something similar? And perhaps also, what have you learned that people could get ahead of as a result of your experience?
Shelly Liposky: Yeah, I think there are four main things. I've been thinking about this a bit. The first is to focus on where you're going to get the biggest return, so that you can build momentum.
Once there's a way to prevent loss, you're actually saving money and time, and that money and time can be reinvested to self-fund the additional activity you want to make more targeted investments in, including automation. The second thing is to be methodical.
It's easy to get distracted by the technologies that are out there. But if you're methodical and you really ask, all right, which of these thousand technologies is actually going to solve our problem and deliver the return we're looking for, you're just going to have a higher return.
The third thing is to solve problems with the people closest to the problem. The further away the solution is from the actual problem, the more work you're going to have to do to implement it. So, bringing in those people is just going to accelerate the change. And the last thing is to fail fast, and to test and learn: if something doesn't work, stop and try something else.
And we've talked about that being a bit of a cultural change in the industry. So, I think you have to say directly to people, it's okay to fail. Let's just fail fast.
Roger Noon: Great, thank you. And just looking back over those thoughts: focus on the return to self-fund, be methodical, solve problems with the people close to the problem itself, and fail fast.
It's fantastic advice. I think that last one is possibly, culturally, one of the hardest to get to. We talk a lot about the will to innovate, and that idea of allowing people to fail and to learn from it is something maybe we can all improve on. But it's great that you've got those principles front and centre, Shelly, so thanks for sharing your experience and your learning so far.
And I think you've done a great job stirring the curiosity of all those risk and control experts listening who are looking to get ahead of operational losses in particular, but also, more generally, to manage risk more proactively using this combination of AI, data science, and the innovation principles you've talked about.
It seems really important to inject confidence and practical experience into this NFR community, because I think we're on the threshold of a major change in risk management, as you're pointing out here, and this is a perfect example of what the future holds. So, we'll catch up with you again a little further down the line and see how much further you can go with this.
And thanks ever so much for your time, Shelly.
Shelly Liposky: Thanks for having me, Roger.