In an interview with RMA Communications Manager Stephen Krasowski, Shelly Liposky, Head of Business Risk and Solutions at BMO Capital Markets, shares how she began her journey in banking, top challenges to managing technology risk, and her part in developing BMO’s artificial intelligence-powered platform (OLI).
Stephen Krasowski: This podcast is brought to you by RMA, The Risk Management Association. RMA's sole purpose is to advance the use of sound risk management principles in the financial services industry. Learn more at RMAHQ.org. Hi, I'm Stephen Krasowski, Communications Manager at RMA. The Voices in Risk Management Podcast series celebrates the diversity of experience, background, and skill that is driving the best risk management practices of today and which will be a must in meeting the challenges and complexities of the future. Today I'm joined by Shelly Liposky, Managing Director and Head of Business Risk and Solutions at BMO Capital Markets, to discuss how she began her journey in banking, top challenges to managing technology risk, and her part in developing BMO's artificial intelligence-powered platform. Shelly, thank you for joining us.
Shelly Liposky: Thank you for having me.
Stephen Krasowski: So, Shelly, how did you get started in your career as a banker in the financial services industry and more importantly, in business risk and capital markets?
Shelly Liposky: OK, so I started in the industry doing mergers and acquisitions, then moved into chief operating officer roles, and now I lead, as you said, our Global Business Risk and Solutions team here at BMO. Our mandate is a first line mandate, including trade floor supervision, business unit compliance, anti-money laundering, operational risk and resilience, algo and automation risk, and then of course ESG risk, which is one of the newer areas. I'm not sure how I could have planned my career path to date, but I have learned a lot from the diversity of work and collaboration with colleagues across lines of business, like capital markets where I am today, wealth and retail, and then across jurisdictions and lines of defense. I think having that broad scope of experience is actually really valuable. So I'm not sure I could have planned that, but I do think I'm wired to get excited about building and fixing and leading teams that help our business to grow in the right way.
Stephen Krasowski: Great. Have you ever had any mentors in your career journey, or been a mentor for someone else?
Shelly Liposky: Yes and yes. I've had quite a few mentors, and I feel blessed to have had them. Looking back, some of the learnings I take away from them were really critical at the time. There were simple things. Early in my career, an opportunity presented itself, and it would have been a stretch for me. I talked to a mentor named Bill Donovan, and even though I was early in my career, he listed out the things to illustrate it: you've overcome this, you've done this, you've done that, so what can't you do? And for my entire career, when I'm faced with something that I'm not sure of, I keep hearing his voice: what can't you do?
I mean, it's very inspiring. So that gave me a lot of confidence. My parents too, of course. My mom always used to say, don't do a half-assed job, which was irritating growing up as a child, but it sticks with you. My dad used to say, make sure you get an education, because nobody can take it away from you. Someone can take your house, your money, or your car, but you can never lose your education. And my mom also used to say, it's not what you say, but how you say it. So those things stick with you on a personal level. And then, through the financial crisis and some of the more recent challenges our industry has gone through, I hear things like, uncertainty is the hardest part of change.
Keep going, and don't worry about having the answers. Just bring people together to figure it out. So I certainly had a lot of mentors, and again, I feel like I've been blessed, and those are some of my takeaways. When I mentor others, that's what I want to pay forward: the ability to take a chance and believe in yourself, to not worry about knowing the answers, and to rely on a team of people. You're not going it alone. So mentors have been very important to me personally, and I enjoy being a mentor to others.
Stephen Krasowski: What advice would you give to someone thinking about a career in banking?
Shelly Liposky: So I would say four things. First, it's a tremendous opportunity. There are tremendous opportunities in financial services. It's fast-paced, it's innovative, it's creative, it's analytical, it's rewarding, it's challenging, hard work. And there is money to be made. But you have to be a good athlete, and I think the likelihood of being in one role for 30 years is very low.
And so the industry is just evolving too fast. And I think those who take advantage of being a good athlete and diversifying a skill set or a knowledge base will really have extraordinary opportunities in our industry. The other thing on the opportunity side is spreadsheets don't lead. People do, right? So whatever skills you have that could be very technical or analytical, at some point you're going to have to develop the leadership skills if you want to advance in your career.
And so I think earlier in one's career in this industry, there's a lot of reinforcement of very analytical or technical skills. Then at the mid-career level, when you start leading people, what got you there isn't going to get you to the next level. So it would be good to be ahead of that and understand that learning those leadership skills is very important.
The second thing is letting go of old thinking. Our industry has quite a few stereotypes; a lot of movies have been made about people in financial services, and I think that's really changed. The level of sophistication, innovation, and breadth of skills have all increased, and that has required us to diversify, again, the types of people in the industry and the talent that we attract.
So I think letting go of old thinking and stereotypes about this industry is very important. The other piece of letting go of old thinking: the idea that we're making money fast, year over year, is itself old thinking. Our industry is volatile. You have to stick with it, and there are going to be up years and down years.
You know, just accepting the uncertainty in our business, and sometimes the asymmetry, goes along with letting go of old thinking. The third thing is checking your ego at the door, earning your stripes, being humble, and thinking long term. And the last thing is building relationships. Being a good colleague and a good human being is advice for any career or any industry, but it's equally important in ours.
Stephen Krasowski: So, Shelly, what are some top challenges to managing risk with new technologies such as artificial intelligence?
Shelly Liposky: You know, this is such a loaded question and I think it's important to start with the business context. So we are a financial services industry with margin pressure, competitive pressure. We're operating in a dynamic market, economic and geopolitical environment. And there's extreme competition for high performing talent. In that context, we talked about vulnerabilities like the sophistication of our business outpacing the sophistication of our risk management.
Aging infrastructure, manual processes, and having to be resilient in the face of cyber threats, which, given the level of sophistication out there, have become more prevalent. So in order to manage risk in this context, and I like to say, in order to drive fast and take chances to increase profitability, we have to have the right airbags and guardrails, the ability to see around the bend, and the ability to stop on a dime. What does that look like? Having a laser focus on business priorities and the risks we're taking, understanding our processes and ensuring we've got effective controls in place, and then being able to predict when bad things will happen so that we can prevent them.
Stephen Krasowski: So what about the technology? What about the specific risks related to the technology?
Shelly Liposky: Well, in that context, with those vulnerabilities, you think about things like the ideation stage of an artificial intelligence tool or an automation tool. If the use case is unethical, then we have a problem, so we have to have controls in place for that. On the data side, if we have insufficient data, if we don't have good security around the data, or if we're not compliant with data privacy regulations, which vary by jurisdiction, we're going to have a problem.
If, in the model development stage, we don't have a full set of data, or there's bias baked into the model, or the model isn't performing properly, we have a problem. And then you say, well, even if you get the idea right, the data right, and the development of the model right, what if the implementation has unintended outputs?
What if there's some type of misalignment between the implementation and the environment it's going to be used in? Then you're going to have a problem. And then operationally, we have to think about technologies as being paired with people: human plus machine. So we could get the machine part right, the data right, and the development of the model right...
and then the human can mess it up, right? They can misuse the output from a model, or simply make a human error that impacts the implementation or operationalization of that model. So all of those risks, whether at the ideation stage or in the data, the model development, the implementation, or the operationalization, have to be managed, and the themes around managing them generally fall into buckets.
Those themes could be having the right processes and controls in place at each of those stages. Think about ideation: how do we make sure that somebody can't have an unethical use case? Well, we can put a process in place to review ideas and beat them up with a diverse group of people. If it's not a diverse group of people, we're not going to have the right level of rigor around the review. So you're thinking about process, you think about guidelines, you think about quality control at each stage, from the idea to the data to the model development and implementation, and then the ability to operationalize, with continuous monitoring of the model performance.
So that's part of ongoing monitoring, but it has to be done by the right people. A machine can monitor, but then you have to have a human who can use judgment, escalate, and feed back to either the idea, the data, the development, the implementation, or the operationalization.
So think about processes, controls, guidelines, feedback loops, and user experience feeding back as well. It's a very simple question, but a very meaty answer, because it's highly complicated and there are a lot of moving parts.
Stephen Krasowski: Shelly, that was great. Now, BMO developed an AI-based platform to enhance its control posture in capital markets. Can you talk a little bit about some of its capabilities and the ways it can predict and prevent losses due to fraud?
Shelly Liposky: Yes, this is very exciting. We created OLI, our Operational Loss Intelligence tool, to help predict when losses are likely so we can prevent them. Think about how managing operational risk has been done in the past. A person, process, or system fails. Somebody writes it down or types it onto a piece of paper.
Somebody then enters it into a system. Somebody then generates a report, and three months later it's presented at a committee meeting, and you talk about what happened three months ago. Very, very backward looking. What OLI enables us to do is marry internal data with external data, let that data flow through proprietary machine learning models to generate a signal, and that signal is a percent likelihood that an error will happen on a given day.
If the signal is below a threshold, there's no action. If the signal is above the threshold, we take action. If we think about why we lose money, in many areas it's because of input errors, mis-execution, or miscommunication. So when that signal is above a certain threshold, we prompt actors in a process to slow their input, clear their queue, and confirm their communications, because that's the behavioral counter to input errors, mis-execution, and miscommunication.
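The signal-and-threshold loop described above can be sketched in a few lines. This is only an illustration of the shape of the logic, not BMO's actual OLI implementation: the feature names, weights, and threshold value are all hypothetical, and a toy weighted score stands in for the proprietary machine learning models.

```python
# Hypothetical threshold for intervening; the real value would be calibrated.
ACTION_THRESHOLD = 0.70

def daily_error_signal(internal: dict, external: dict) -> float:
    """Stand-in for the proprietary ML models: returns a 0.0-1.0
    likelihood that an operational error will happen today.
    The features and weights here are illustrative only."""
    score = (
        0.5 * internal.get("recent_error_rate", 0.0)
        + 0.3 * external.get("market_volatility", 0.0)
        + 0.2 * internal.get("volume_vs_capacity", 0.0)
    )
    return min(score, 1.0)

def respond_to_signal(signal: float) -> list[str]:
    """Below the threshold: no action. Above it: prompt the behavioral
    counters to input errors, mis-execution, and miscommunication."""
    if signal < ACTION_THRESHOLD:
        return []
    return ["slow your input", "clear your queue", "confirm your communications"]
```

On a calm day the function returns no prompts; on a day when the signal crosses the threshold, the three behavioral prompts are pushed to the actors in the process.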
So it's very exciting to have forward-looking data to help us predict when losses are likely and prevent them. What are the applications of that? Well, you talked about fraud. We have process models that we can use, and I'll give a little color on each of these: trading, operations, payments, call centers, really anywhere you have a lot of high-volume transactions and people in the mix.
So it's a manual or semi-automated process where at some point somebody is doing something, and you could have a human error, a system failure, or just a process break. On the fraud side, as you asked, think about a person at a client of a financial institution, let's say, logging in to a payment portal. They're a coffee provider, and every month they spend $10 million on coffee beans from a producer in Colombia.
So every month somebody from the company logs in to the payment portal with a wire that needs to be sent, and it says $10 million, x currency, x beneficiary, and an account number. But they do it so often that it becomes rote memory: somebody from the company entering $10 million, x currency, x company, account number.
Then they get into the habit and it's just $10 million, x currency, beneficiary; $10 million, x currency, beneficiary, never checking the account number, because we don't even do that personally. When was the last time you checked a 17-digit account number? So what we want to do is understand when a fraudster ends up hacking in, or using a business compromise, to change the account number and reroute the funds to their own account.
It's too late after those funds have been sent; it's very hard to chase them. So we want to get ahead of that. We would use OLI, again ingesting internal and external data through our fraud model, to generate a signal when a client has a higher likelihood of a fraud happening. If they log in to the payment portal on that day, we shout at them through the portal and say, hey, you've got a higher likelihood of fraud today.
You must confirm your account number: not just the beneficiary, the amount, and the currency, but the account number. Right? So we're getting between the client and the payment before it's initiated and changing their behavior. And we do that anywhere. We do it in trading, we do it in operations; we can do it anywhere we can get between the person and the action that causes a fraud or a loss.
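The portal intervention described above follows the same pattern: when the fraud model's signal for a client is elevated on the day they log in, the portal forces account-number confirmation before the wire is initiated. A minimal sketch, with a hypothetical function name and threshold (not BMO's actual portal logic):

```python
# Hypothetical cutoff above which the portal escalates; illustrative only.
FRAUD_THRESHOLD = 0.60

def required_payment_confirmations(fraud_signal: float) -> list[str]:
    """Fields the payment portal requires the client to confirm
    before a wire is initiated, given today's fraud-model signal."""
    # Routine confirmations every payment gets.
    checks = ["beneficiary", "amount", "currency"]
    if fraud_signal >= FRAUD_THRESHOLD:
        # Elevated signal today: get between the client and the payment
        # by forcing the account number to be confirmed as well.
        checks.append("account_number")
    return checks
```

The design point is that the intervention is behavioral: rather than blocking the payment, the portal changes what the client must actively verify on a high-risk day.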
On the process side, think about the metadata we can pull out of processes. If people, hardware, and software are what we call our infrastructure, then process is the thread that holds all of that together. If that thread is weakened anywhere within the process, the infrastructure is going to fail, whether it's the person, the software, or the hardware. So if we can pull metadata out of processes, we can understand where those soft spots are, and we can understand the factors, economic conditions, volume, market conditions, people conditions, whatever they are, that come together to make that process ripe to fail.
If we can find the places it could fail, maybe at a control point, maybe at a volume capacity limit, we can then target specific investment decisions at correcting one thing, instead of trying to straight-through process everything, which is unaffordable for most institutions. So you have intelligence, operational loss intelligence, about a process that will help you make better decisions about where to invest.
It will also help you make better decisions about where you might need to shore up a control, retrain a person, or restaff an area as a business is growing. So we can use OLI to ingest internal and external data, generate a signal that tells us when we are likely to have an operational loss, and change behavior to have a higher likelihood of preventing that loss.
Stephen Krasowski: Shelly, what would you tell other institutions looking to roll out similar systems?
Shelly Liposky: I would say four things. Focus is big, so focus on where you're going to get the biggest return to build momentum. Once a business begins to reduce operational events and losses, it can reduce the broader, what I call human error tax, which is the time and money we spend dealing with those losses.
And that can create capacity to self-fund, as opposed to incremental spend. So if you're focusing where you're going to get the biggest return, that's going to build momentum and enable further automation and opportunities. The second thing is to be methodical. It's easy to get distracted when you realize the power of AI and machine learning, so having a methodical approach to diligencing ideas and making sure the cost-benefit is there will keep you on track.
The third is solving problems with the people closest to the problem. Sometimes we think we can't, or that we don't know enough to solve our own problems, so we bring external parties in. External parties can add value, but then you're losing the institutional knowledge that could be beneficial down the line. So solving problems with the people closest to the problem has a pretty big return. And then failing fast: test and learn.
If something doesn't work, get rid of it and try something else. So: focus, be methodical, solve problems with the people closest to the problem, and fail fast.
Stephen Krasowski: Great. So, Shelly, to wrap up, do you and your team have anything planned for the future regarding AI and machine learning?
Shelly Liposky: Yes, the list is long, and I'm taking my own advice to use a very tight process to make sure we're focused and methodical. Again, I think of this as human plus machine. So, top of mind: we've already talked about fraud and process. I think third parties are very important. All organizations, but certainly those in our industry, are accountable for our end-to-end processes, regardless of who executes them. As the industry has outsourced more, we find ourselves giving chunks of a process to a third party to execute on our behalf. And I've heard, both internally and from clients, you know, we outsourced this activity and now we see our outsourcing partners making the same mistakes that our internal folks used to make.
So the cost to that end-to-end process is still the same: you're losing money, you're having errors, there are throughput problems, and there's potentially client impact. So understanding process errors and fraud, the same things we talked about, with respect to third-party providers, I think is a pretty big opportunity. And then conduct is the other opportunity we're looking at.
We've started ingesting our digitized voice. The industry has used voice surveillance on certain registered individuals, and it's a very rich data set. It used to be analog, so you really couldn't do much with it other than pick up the recording and listen to it. Now we're ingesting our digitized voice into OLI so that we can understand the relationship between the metadata in the voice and potential conduct issues, or even operational failures.
It just gives us a better understanding of a whole new set of data. The voice data we have is very rich in metadata that we can also use to look for conduct issues, or even opportunities before something becomes a conduct issue. I believe that people come to work wanting to do good. They don't want to make mistakes, but there's a context they operate in that can push them to the edge.
So we want to understand conduct issues as they arise, but if we can get ahead of them, it's even better. If we can understand what happens before a person makes a bad decision, then we can get ahead of it and maybe provide some training, coaching, or advice. So third-party suppliers and conduct are two other models we're working on.
Stephen Krasowski: Shelly, thank you again for joining us today and being a part of our Voices in Risk podcast series.
Shelly Liposky: Thank you for having me.