LumenVox Luminaries is a podcast that broadcasts thought leadership pieces on the subject of voice technology. This episode features Matt Whipple, Senior Vice President of Global Voice Biometrics Sales with his perspective on fraud and its effect on contact centers as it relates to COVID-19.
You can connect with Matt on LinkedIn here and Twitter here.
Read the Transcript
Hi, I’m Matt Whipple, the Senior Vice President of Sales for the Voice Biometrics Suite of products within LumenVox. I’ve been working in voice biometrics for approximately 15 years.
So today we’re talking about fraud, and specifically fraud as it pertains to current events. COVID is changing the world rapidly, and those changes are increasing fraud dramatically, particularly for contact centers. So today we’ll be discussing how COVID-19 is changing the face of fraud in the contact center.
Q: How is COVID-19 affecting fraud in the contact center?
We’re seeing a couple of things. One is a dramatic increase in unemployment, and whenever we see increases in unemployment, we see increases in theft. People are either desperate or they become opportunistic, so let’s look at that a little further. When people are unemployed, they don’t have income, but they still have mouths to feed, so they’re willing to take advantage of other people, and particularly of companies, when they can. So we get people who are not normally fraudsters starting to perform fraudulent activities. And the least risky way to steal from a financial institution, for example, is over the phone. Walking into a bank with a mask on and a gun in your hand is a surefire way to get caught; calling a call center is a very difficult way to get caught. So we’re seeing fraud rising and fraud-related phone calls rising. Moreover, fraudsters are opportunistic. They’re preying on people who are scared about the uncertainty in the market, and as a result, fraudsters are using social engineering on individuals for the sake of taking over those individuals’ accounts. They call a financial institution; they pretend to be you using stolen credentials–a Social Security number, a mother’s maiden name, and so on–and they are increasing their attacks on financial institutions.
Q: How can businesses mitigate the risk?
Specifically around fraud in the contact center, there are a bunch of tools. One of the things that contact centers have been doing for a long time is looking for risky transactions. If I never place a high-dollar-value wire transfer out of my account and all of a sudden somebody is trying to wire a whole bunch of money out of my account, that could be a sign of fraud. But it could also be a sign of the times. Maybe I’m wiring money to friends and family who need it, so it doesn’t necessarily mean there is fraud; it means that banks have to be especially cautious today on these sorts of transactions, and any tools that uncover these anomalies are beneficial. Another layer of security–we always think of security in terms of layers–that is being deployed is voice biometrics. When a real customer is calling in on their own account, we compare their voice to their voiceprint on file, and we know who we’re speaking to. We know that this is the real customer. We also have the capability, from the sound of the voice, to compare a caller’s voice to the voiceprints of known fraudsters. If this is somebody who has stolen from a particular financial institution, for example, we can identify that voice as the voice of a known fraudster; we can flag and prevent those transactions; we can secure our customer’s account while ensuring that the company isn’t losing money to those fraudsters. This is how voice biometrics plays a dual role: one is authenticating real users, and two is stopping the fraudsters from stealing.
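The transaction-anomaly idea described above can be sketched roughly as follows. This is a hypothetical illustration only: the statistical rule, threshold, and function names are assumptions for the sake of the example, not LumenVox’s actual implementation.

```python
# Hypothetical sketch: flag transactions that deviate sharply from an
# account's history, so a fraud analyst can review them.
from statistics import mean, stdev

def flag_anomaly(history, new_amount, sigma=3.0):
    """Return True if new_amount is an outlier versus past amounts."""
    if len(history) < 2:
        return True  # too little history to judge: treat as high risk
    mu, sd = mean(history), stdev(history)
    if sd == 0:
        return new_amount != mu
    return abs(new_amount - mu) > sigma * sd

# A $9,500 wire on an account that normally moves ~$100 stands out.
print(flag_anomaly([90, 110, 105, 95], 9500))  # True
print(flag_anomaly([90, 110, 105, 95], 100))   # False
```

In practice a bank would combine many such signals; the point is only that an anomalous transaction becomes one input to a layered decision, not proof of fraud on its own.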
Q: Can you go into greater detail about LumenVox Fraud Scanner?
When we’re looking for fraud within the contact center, specifically using voice, there are a couple of techniques that we use. One of those techniques is scanning high-risk calls shortly after the fact. Here’s the idea: a fraudster calling in on my account–using my stolen Social Security number, my mother’s maiden name, whatever the case is–may be performing a benign transaction, like just getting my account balance. That might be a low-risk transaction, and we might not chase it because there’s not a lot of damage you can do. Now, that is the fraudster probing my account, which will come back into play a little bit later. But there are certain high-risk transactions–such as a fraudster trying to change my mailing address or my email address, trying to order a new credit card, or reporting a lost or stolen card; the areas where the fraudster might have the opportunity to intercept my snail mail or my email, or to get a new card, which they can then use either online or in retail–that we want to scan more than others. So what we do is take the call recording just after the call. The fraudster hangs up with the call center agent; within an hour or a day, depending on business rules (we have flexibility), we scan that call and compare that caller’s voice to the voiceprints of known fraudsters, that is, people who have stolen from us before.
If a fraudster is successful in an account takeover, we listen to that recording, we take that fraudster’s voice, and we add it to the watchlist. Now we compare the high-risk calls from today against that watchlist of known fraudsters. If any of today’s voices match the voiceprints of fraudsters who have attacked us in the past, those accounts and transactions are flagged, and the fraud analyst says, “OK, we’ve got a voice that matches somebody on the watchlist, on an account that doesn’t show other signs of fraud.” Say it’s an address change or a new card request, for example: the fraud analyst will call that customer the next morning and say, “Hey, did you change your address? Did you order a new card?” If the customer says, “Yeah, I did,” good–then you did the right thing. If, however, the customer says, “No, I didn’t do that,” the fraud analyst says, “OK, we just caught something on your account; your account is perfectly safe; there’s nothing to worry about here.” We’ve created a positive customer touchpoint, we’ve saved this customer from going through identity theft, and we’ve saved the bank or financial institution money. It’s a pretty easy process once it’s set up: these batch jobs run almost automatically; the fraud analysts look at the results and react fairly quickly; we help keep customers safe; we help keep fraud out of the organization.
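The batch “scan after the call” workflow described above can be sketched like this. Everything here is invented for illustration–the transaction labels, the similarity function, and the 0.85 threshold are assumptions; a real biometric engine would return a tuned similarity score for each voiceprint pair.

```python
# Hypothetical sketch: compare each high-risk call against a watchlist
# of known-fraudster voiceprints and flag the matches for an analyst.

HIGH_RISK = {"address_change", "email_change", "new_card", "lost_stolen_card"}

def scan_calls(calls, watchlist, compare, threshold=0.85):
    """Return ids of high-risk calls whose voice matches the watchlist.

    calls     -- list of dicts: {"id", "transaction", "voiceprint"}
    watchlist -- list of known-fraudster voiceprints
    compare   -- function(a, b) -> similarity score in [0, 1]
    """
    flagged = []
    for call in calls:
        if call["transaction"] not in HIGH_RISK:
            continue  # low-risk probes (e.g. balance checks) are skipped
        if any(compare(call["voiceprint"], w) >= threshold for w in watchlist):
            flagged.append(call["id"])
    return flagged

# Toy comparator: voiceprints are plain numbers; closer means more similar.
sim = lambda a, b: 1.0 - min(abs(a - b), 1.0)

calls = [
    {"id": "c1", "transaction": "balance_inquiry", "voiceprint": 0.40},
    {"id": "c2", "transaction": "address_change",  "voiceprint": 0.41},
]
print(scan_calls(calls, watchlist=[0.42], compare=sim))  # ['c2']
```

The flagged list is what the fraud analyst would review the next morning before calling the customer back.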
Q: Can you go into greater detail about LumenVox Passive Voice Biometric Authentication?
A passive authentication deployment listens to the audio in real time. You’ve all heard, “Your call may be monitored or recorded for quality and training purposes.” And it’s true, your call is being recorded, but there are a couple of things that can be going on behind the scenes. For some of the large banks in the US, and for a few of the smaller banks, what’s actually happening is that as a customer is having a conversation with a call center agent, their voice is being used to create a voiceprint. Once that voiceprint has been built, and once the agent has received consent with something like, “We’re using voice security now. Is it OK if I tie your voice to the security of your account?”–consumers overwhelmingly say yes–then the next time that caller calls in, we compare the caller’s voice to the voiceprint on file. Instead of the agent asking for your Social Security number, your PIN, or your mother’s maiden name, the caller’s voice is compared to the voice profile, and the agent gets a green light on their desktop saying no more security questions are necessary. We lower handle times. We save operational costs. And we increase customer satisfaction as well as agent satisfaction. So this is a very positive technology in terms of customer enhancement while driving operational costs down.
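The passive enroll-then-verify flow can be sketched as below. The class, method names, and threshold are hypothetical illustrations of the control flow only–not LumenVox APIs–and the “voiceprint” here is a plain number standing in for a real biometric model.

```python
# Hypothetical sketch: enroll a voiceprint from the first consented call,
# then score later calls and tell the agent whether to skip questions.

class PassiveAuth:
    def __init__(self, compare, threshold=0.9):
        self.prints = {}          # account id -> stored voiceprint
        self.compare = compare    # similarity function in [0, 1]
        self.threshold = threshold

    def handle_call(self, account, audio, consented):
        if account not in self.prints:
            if consented:
                self.prints[account] = audio  # enroll in the background
            return "ask security questions"   # no voiceprint on file yet
        score = self.compare(self.prints[account], audio)
        return "green light" if score >= self.threshold else "step up security"

sim = lambda a, b: 1.0 - min(abs(a - b), 1.0)
auth = PassiveAuth(sim)
print(auth.handle_call("acct1", 0.50, consented=True))  # ask security questions
print(auth.handle_call("acct1", 0.51, consented=True))  # green light
```

The first call still goes through normal security questions while the voiceprint is built; only subsequent calls get the shortened path, which is where the handle-time savings come from.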
Q: What sets LumenVox apart in the market today?
LumenVox has a long and rich history in both speech recognition and voice biometrics. In speech recognition, we’re deployed all over the world in dozens of languages. In the voice biometrics world, where we have historically played is doing password resets, employee-facing applications, and active biometrics in the IVR. What’s changing–as the market evolves and as we as a company evolve–is that we recognize that fraud is growing, and contact center fraud is growing faster than all other fraud. Humans are the weakest link, and fraudsters know that. They are exploiting the fact that it’s human contact center agents–whose job is to be helpful, not to be security experts–on the other end of the line. It’s pretty easy to socially engineer them, so it’s a space that many of us within LumenVox have been playing in for a very long time. We’ve got tremendous depth in terms of fraud detection capabilities. We’re bringing it to market in a slightly different way, which is meant to be repeatable, fast, and nimble. As we’re catching fraudsters and the market changes quickly, we’re going to differentiate by adapting more quickly than some of the bigger, more established vendors who are already in this space.
LumenVox Luminaries is a podcast that broadcasts thought leadership pieces on the subject of voice technology. This episode features Jeff Hopper, Vice President of Business Development with his perspective on LumenVox’ next generation of conversational IVR.
I want to tell you about some work that we’re doing in our engineering team right now that will begin to become available in 2020. We’ve taken a step back and looked at the existing state of the speech recognition market for the IVR space: the product that we used to have, which we deprecated, what our competitors do, and so on. And we’ve concluded that there’s a better way to go about this than the way the industry has historically done it.
When you look at our competition, their traditional tier-four speech recognition was speech recognition with natural language understanding. It was, first and foremost, 10-year-old technology and a proprietary black box. The only people who could develop an application for a customer with it were that speech vendor’s professional services team. With my 20 years of personal experience in the space, I can count on the fingers of one hand the people outside of that vendor who can actually build a tier-four application successfully for you.
So our first driver for this new idea was: let’s take advantage of some things that have changed in the state of the art technically, and let’s build a new platform that is more open, more accessible, and easier to use–not that proprietary black box, if you will, for speech recognition. If you understand any of the history of natural language IVRs, essentially the idea is that instead of asking specific questions, like “What city do you want to fly to?”–where you say “Memphis” or “Nashville,” or whatever the choice is, and the recognizer can only make a determination from a defined list of choices–you should be able to say things like, “I’d like to book a flight next Tuesday from Seattle to Memphis in the afternoon.” The recognizer should be able to parse out both the intent–“I want to book a flight”–and all of the necessary values in that statement: the departure city is Seattle, the arrival city is Memphis, and the travel date is next Tuesday, all from that conversational statement that the caller makes. The traditional mechanisms have been to build these proprietary applications that use two parts under the hood, though most people don’t realize there are two parts. The first is the speech recognizer that takes what I said and converts it into raw text. The second part is something called an NLU or, traditionally in the speech space, an SLM–a statistical language model–that will take those words, parse them apart, and try to infer the meaning based on machine learning.
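The shape of that second stage–raw text in, intent and entities out–can be illustrated with a toy example. Real systems use statistical or neural models; this regex sketch (with an invented `understand` function and made-up city/day lists) only shows the kind of structured output an IVR application would consume.

```python
# Toy NLU stage: take the recognizer's raw text and extract the intent
# plus the entities (departure city, arrival city, travel date).
import re

CITIES = {"seattle", "memphis", "nashville"}
DAYS = {"monday", "tuesday", "wednesday", "thursday", "friday"}

def understand(text):
    text = text.lower()
    result = {"intent": None, "entities": {}}
    if "book" in text and "flight" in text:
        result["intent"] = "book_flight"
    m = re.search(r"from (\w+)", text)
    if m and m.group(1) in CITIES:
        result["entities"]["departure_city"] = m.group(1)
    # "to" appears in many places ("like to book"); keep only known cities.
    for cand in re.findall(r"to (\w+)", text):
        if cand in CITIES:
            result["entities"]["arrival_city"] = cand
    m = re.search(r"next (\w+)", text)
    if m and m.group(1) in DAYS:
        result["entities"]["travel_date"] = "next " + m.group(1)
    return result

print(understand("I'd like to book a flight next Tuesday from Seattle to Memphis"))
```

An IVR application would act on that dictionary–confirming the booking, or prompting for any entity the caller left out.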
It is not very different conceptually from modern machine learning and artificial intelligence, except that it’s built on a much older set of tools and a much more limited set of machine learning capabilities. So when you build an application like that today with our competition’s ASR offering, it is a sealed box. It’s difficult to make changes to over time, and these applications tend to be extremely expensive to deliver from a professional services perspective.
So what we’re proposing–and not just proposing, but building the infrastructure for–is a new generation of conversational IVR. And we’re going to do it in a couple of ways. We’ve already done what I call part A of the three parts: we have built an entirely new speech recognition engine based on the latest in machine learning, specifically deep neural networks, so that the core recognizer in this stack is absolutely state of the art, has excellent recognition capabilities, and is easy to stand up, install, and configure to run in your application stack. More importantly, it’s designed to do transcription, not directed dialogue with grammars like that old style of IVR application. It’s intended to take raw speech from a caller and transform it into text. The second part, part B, of our application stack is going to be a new artificial intelligence platform that uses machine learning. It’s built on commercially available AI components that already exist today and are also state of the art–components from companies like Google, or from firms that Google has purchased and put out into the open source world. We’re going to build the machine learning AI piece that does the intent determination from the text and extracts those values or entities, like departure city, arrival city, or whatever the particular conversation might be. From that text, we can pass the result back to an application in your IVR to do work. That second part is in engineering now, in the process of productization, and it will give you an excellent starting point to accomplish what is typically a difficult process with tier-four applications today. And the tool set is one that is widely commercially adopted; lots of people already understand how to use it. We’re essentially just going to provide the plumbing to connect it into the rest of your IVR stack and our speech recognizer in a simple and easy way.
On top of that, the third part of this process will be the addition of something that we’re calling an AI gateway. If you look at the slide in front of you right now, you can see the AI platform over on the right-hand side and LumenVox listed down below it as one possible AI platform, but up above you see a number of other names that you’ll recognize: Amazon Lex, Microsoft LUIS, Google’s Dialogflow, IBM Watson, and others. Those are all widely used, commercially available AI engines today that use machine learning to help you parse out the answers you’re looking for from the text. What we’re going to do is provide a configurable gateway that operates from the LumenVox media server, so that in your IVR applications you can take advantage of existing AI that you’ve already built with those commercial tools–things like FAQ chatbots on your website today, or other mobile applications you’ve built that use text and machine learning or AI to respond to that text. You’ll be able to take those models and add them to your existing IVR stack, so you’re not starting from scratch with the learning process for the AI mechanisms. You can continue to reuse something you’ve already built and enhance it. That’s almost always less expensive than starting from scratch to build a new AI platform and a new AI model for your particular business situation.
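The gateway idea–one configurable front door routing transcribed text to whichever NLU backend an application uses–can be sketched as follows. The class, the routing table, and the backend stand-ins are all hypothetical; they illustrate the architecture, not an actual LumenVox interface.

```python
# Hypothetical sketch of a configurable AI gateway: each IVR application
# is routed to a registered NLU backend behind one common interface.

class AIGateway:
    def __init__(self):
        self.backends = {}   # backend name -> callable(text) -> result dict
        self.routes = {}     # application id -> backend name

    def register(self, name, handler):
        self.backends[name] = handler

    def route(self, app_id, backend_name):
        self.routes[app_id] = backend_name

    def interpret(self, app_id, text):
        # Dispatch the transcribed text to whichever engine this app uses.
        backend = self.backends[self.routes[app_id]]
        return backend(text)

gw = AIGateway()
# Stand-in for a real engine (Lex, LUIS, Dialogflow, Watson, ...).
gw.register("faq_bot", lambda t: {"intent": "faq", "text": t})
gw.route("banking_ivr", "faq_bot")
print(gw.interpret("banking_ivr", "what are your hours?"))
```

Because the IVR only ever talks to the gateway, swapping or reusing an AI model built for another channel is a configuration change rather than an application rewrite–which is the reuse argument made above.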
We have some customers who are already using this approach in an experimental stack–and by experimental I mean some early proof-of-concept applications. Rather than going out of the LumenVox media server, they’re making the AI request out of their voice application platform today, which requires a little bit more work on their part. But we know that the new generation of recognizer we have in place, when combined with that kind of external AI approach, is actually working well. And then in 2020 we will add that third part, the AI gateway, to the LumenVox media server to make all of the integration work simpler, quicker, and easier for you.
Have questions about our next generation of conversational IVR? Contact us today!
Avaya Podcast Network spoke with our very own Jeff Hopper, Vice President of Business Development, for the 8 & Out podcast, a series featuring Avaya Select Product Partners. LumenVox has been a proud Avaya Supported Select Products Provider since 2012 and offers LumenVox Call Progress Analysis, LumenVox Speech Recognizer and the LumenVox Speech-to-Text Server on the Avaya DevConnect Marketplace. In this interview, Jeff explains new, exciting innovations within LumenVox’ technology stack as well as his perspective on the industry itself—where it’s headed, and how LumenVox can continue to set itself apart with flexible, cost-effective solutions.
Read the Transcript Below:
Hey, this is Bill Petty with APN, the Avaya Podcast Network. I’m sitting here live on the Avaya ENGAGE 2020 floor, talking with Jeff Hopper of LumenVox. Jeff, thanks for joining us.
Thank you very much Bill. I’m really delighted to be here, despite the raspy voice from three days on the trade show floor.
I think we all have a little experience with that. So, tell me a little bit about what LumenVox is doing and what you are pitching here to our customers and channels?
Sure. LumenVox is a provider of speech technologies including speech recognition, text-to-speech, call progress analysis and voice biometrics for authenticating the callers in your customer self-service or contact center environment.
And how pervasive is speech-to-text these days?
You know, it used to not be so much, but now it’s everywhere. We all have things like Amazon Echoes, or Google Homes, or other personal assistant devices, so it’s become an expected component of a contact center these days for self-service and for assisting the agents.
Right, and I know LumenVox has a long-standing partner relationship with Avaya, but tell us a little bit about the progression of what’s happening these days.
Absolutely. It’s one of the most exciting parts of where we are now in our journey with Avaya and with the Avaya customer and partner channel. I’ve been at LumenVox 8 years–this is my 7th IAUG in that capacity–and when I started, we were a DevConnect member. We had developed some business overseas more than in the North American market, and then we progressed into the SPP program about 5 years ago. We had some phenomenal growth in business and awareness in the Avaya customer base. We’ve taken on several dozen new large customers in the ecosystem, and just this past year, we’ve signed a further advancement of that agreement. We now have official Avaya part codes, so our product can be ordered through Avaya as a reseller–making the process much simpler for everybody and hopefully helping to accelerate the adoption of our speech recognition products.
So I know that as we move a company through that type of relationship model, you start with DevConnect, you go into the SPP (which is a big deal–they’re very selective about who they choose), and then you move into a resale model, kind of expanding the approachability and the availability of the solutions. Tell me a little bit about what’s really driving that relationship from the LumenVox and Avaya side.
I would be delighted. It’s a personal point of pride–I’ve been involved with it for the last 8 years, so I really take some joy in this. I think the best exemplar I have is our Net Promoter Score. We have sustained an average Net Promoter Score of 89, and in a business-to-business model, you know you’re working your keister off to accomplish that for your customers. Our customers and our partners consistently come back and say “great job,” “easy to work with,” “the product is easy to install, configure and use.” We have just tried to reduce the friction of using speech recognition in the environment–make it easy for everybody to use that technology effectively in their contact center and in their self-service.
Oh, that’s fantastic! You know, at Avaya, one of our key slogans at this time is “Experiences that Matter,” so apparently LumenVox is making the experience of installing, implementing, configuring and using the solution very positive. Share with me a little bit about the experience of the end-users: how does this relate to what’s going on, and how is your solution kind of a game-changer?
Speech recognition was always something kind of like Harry Potter magic. You had to go to Hogwarts and learn some secret wizard handshakes and incantations. We’ve just tried to simplify that install-configure-and-use part, and then we’ve tried to do work to enable the channel partners that actually build applications to raise their skill level and their expertise in user interface design–all the things that allow them to work with the end customers to get a really great customer experience out of the applications.
How is the relationship with Avaya progressing as far as a technology development perspective as you look for opportunities to build hooks and implement within our shared structure? How are things working from that perspective?
It’s been a marvelous year. I was in New York back in November to meet with a team of Avaya executives at the Briefing Center, and we presented some new technology that we’re bringing to the table in 2020. We’re adding a new approach, if you will, to conversational speech recognition. Where those have been very closed, proprietary systems in the past, we’re building a mechanism that allows you to use commercially available AI tools to bring AI to conversational speech–it’s not proprietary. You’ll be able to use any of the major AI resources–whether it’s TensorFlow at a data center level, or Google, or Watson, or any of those things–with the speech recognition in your self-service applications. You can reuse AI that you’ve already created for other channels and more easily incorporate it into the product stack.
How’s this improving the customer experience?
The better-trained AI models are [improving the customer experience], and we all know the people who have been working on those kinds of things. When you’ve got a better tool set–not a proprietary one, but one that lots of people are contributing to–it just makes it easier to get the AI right, to give an appropriate response to the caller and make their experience less friction-bound, if you will.
As you look towards 2020 and start building these new measurement tools, what are you going to be delivering for the customer, and how is that going to improve what they’re seeing in the use of the LumenVox solution?
So, from the perspective of the customer who implements this, they’ll be able to take advantage of other initiatives they’ve already done–chatbots, for example–and voice-enable them in their customer pathways through the contact center. They won’t have to rebuild the entire thing with a new learning model; they can simply voice-enable it. I always say, give your chatbot a voice.
I love the idea of a voice enabled chat bot because I have big thumbs.
Yes, me too.
And I have a really hard time trying to type on that little keyboard in the amount of time that somebody, or the bot, is actually waiting for me to respond. I should be able to talk to it and say, “This is what I’m looking for,” and not have to type it.
Let the computer change the speech into text. One of the things we can add here is that because we have such a complete stack of products now, with voice biometric authentication we can help you secure those application pathways as well as service them with speech recognition and AI technology.
So, tell me something unique about what LumenVox is doing. I know we’re here on a tradeshow floor and we’re amongst all these partners; give me an idea of what LumenVox is doing that’s kind of new and unique, especially if it has something to do with what Avaya is doing in the market space.
Sure. I think the thing that we’re focused on the most–and I mentioned it a little earlier–is making this stuff easier to use and incorporate into the solutions that reach the end customer, the caller, so that there are lower project costs and quicker time to market. Basically, accelerating and simplifying is a big enhancement, because we all know this has traditionally been kind of a black-box area. Between technology advancements in the software, a real strident focus on quality and on the management user interface to make those things simpler–to get rid of any friction we can–and the growth in computational capacity with cloud computing and the general reduction in the cost of computing, we’ve arrived at a place in time where speech should be ubiquitous and should be an expected component of the customer service path.
Well, it is becoming more and more pervasive, there’s no doubt.
That’s something that we’re all seeing, and I think it’s something we’re all becoming a little more comfortable with.
Especially if you’re in my generation.
Well, yeah, it was tough; you’d question the little device sitting on your desk that’s listening to every word you say to see if you’re talking to it. But I think we’re becoming a little more comfortable with that. You know, our cell phones are listening to every word. I can guarantee it from the ads that pop up on social media: I talked about wanting to buy my wife a weighted blanket, and all of a sudden I’ve got an ad for a weighted blanket showing up on my phone within an hour. I don’t put my tin foil hat on, though.
I agree with you completely. You know, we get into that question in our space. With voice biometrics, we’ve built a set of products that are very secure. They meet GDPR compliance requirements in Europe for protecting people’s privacy, which is essentially the highest standard around the globe today. We are very mindful of those elements in our product development process, making all of these things secure whether they’re on your premises, in a private cloud, or even in a public cloud.
So Jeff, I know you guys are here on the floor, and for those that are out there listening, please make sure you go to the Avaya DevConnect marketplace at devconnectmarketplace.com, look up LumenVox, and find out about their solutions. Jeff, thank you so much for sitting down talking with us today.
I appreciate it very much Bill, it’s been a pleasure.
LumenVox Luminaries is a podcast that broadcasts thought leadership pieces on the subject of voice technology. Today LumenVox Luminaries is proud to present Bettina Stearn with her thoughts on what’s trending in Voice Biometrics today. Bettina has been a part of the industry since 2002, acting as a pre-sales engineer, and now as Managing Director of LumenVox’ European team. The LumenVox team is excited to be attending Call and Contact Centre Expo in London, March 18-19, stand 2061, where we’ll be talking to attendees directly about our solutions and presenting “Using Voice Biometrics in the Contact Center: A Primer,” March 18 at 11:45 am in Theatre 10. Make sure to say hi to them there, or connect with Bettina via LinkedIn here: https://www.linkedin.com/in/bettina-stearn-84b6a816/
Read the Transcript
What’s trending in voice biometrics?
The most trending technology in voice biometrics currently is definitely multifactor authentication. Multifactor authentication consists of three different factors: the first factor is something you have, like your phone. The second is something you know, like your PIN. The third is something you are–this can be biometrics, or in our case, voice biometrics.
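The three-factor model just described can be sketched minimally as follows. The function, its parameters, and the two-of-three policy are assumptions for illustration; a real deployment would verify a device token, a PIN hash, and a voice-biometric score from the engine.

```python
# Minimal sketch of multifactor authentication: something you have,
# something you know, and something you are (a voice-biometric score).

def authenticate(has_device, pin_ok, voice_score,
                 voice_threshold=0.9, required_factors=2):
    """Pass if at least `required_factors` of the three factors succeed."""
    factors = [
        has_device,                      # something you have (the phone)
        pin_ok,                          # something you know (the PIN)
        voice_score >= voice_threshold,  # something you are (the voice)
    ]
    return sum(factors) >= required_factors

print(authenticate(True, False, 0.95))  # True: device + voice match
print(authenticate(False, False, 0.50)) # False: no factor passes
```

The appeal of voice as the third factor, as noted below, is that the caller already holds the only hardware it needs: a microphone.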
What’s special about voice biometrics?
The special thing about voice biometrics is that you don’t need anything extra other than your phone. You always have your microphone with you, and you can record your voice. With a lot of other biometrics this is much more difficult; for example, you don’t always have an iris scanner or a fingerprint scanner with you. So you can use your voice much more easily.
How are businesses adopting this type of authentication?
We can see now that businesses are adopting this. One or two years ago it was still very slow, but now we can see it’s really accelerating because of new regulations. Governments are forcing financial institutions, for example, to add multifactor authentication.
How does LumenVox help with adopting this method of authentication?
We make it easy to do this because we integrate it into the enterprise’s existing infrastructure. We help and consult with the enterprise to determine where to add multifactor authentication, and the integration is seamless: we provide a lot of APIs and interfaces to make it as smooth as possible.
What sets LumenVox apart from others in the Voice Biometrics field?
I think the special thing is that all of our technology around voice biometrics is our own proprietary technology. This is rare in the industry; the big competitors are using third-party system integrators, but LumenVox really has its own technology stack. This makes it easy for us at LumenVox to go deep into the code and to tailor it exactly to the customer. We can change everything. We own the code, so this makes it easy for us.
I think the other interesting thing at the moment–what we can see in the market, and what we’re developing and releasing currently–is the passive voice authentication that helps with fraud detection. We have a nice new product addressing this, Fraud Scanner. We can create imposter lists and help enterprises look for imposters and address fraud.
What’s the definition of Passive, exactly?
Passive means that the natural dialogue between the agent and the user can be recorded, and an enrollment is created offline. So the user is not actively doing an enrollment, and may not even know it is happening–of course, we can inform them beforehand–but the enrollment can be done in the background. When the user calls a second time, the call center agent can retrieve that person’s enrollment, knows the person is already enrolled, and can then start checking that person’s authentication.
Anything else on your mind about Voice Biometrics?
I think voice biometrics has been on the market for twenty years. But now we can see it’s taking off like a rocket, because the demand is growing, so I think the time is now for voice biometrics.