LumenVox is excited to announce the release of LumenVox Version 17.0.200. In this release, we have:
- Added support for a new short-utterance transcription (Natural Language) functionality to process audio with a maximum length of approximately 30 seconds.
- Added a new Out of Service configuration option for the ASR (Automated Speech Recognizer) service, allowing system administrators to place a server into maintenance mode from the Dashboard; currently pending requests are allowed to complete, while new requests are rejected (and may be handled by other ASR servers in the cluster).
- Added a new feature to the ASR load-balancing mechanism to actively route ASR requests based on the language specified.
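The interplay of the two cluster features above can be illustrated with a small sketch. Note that the names here (`AsrServer`, `route_request`) are hypothetical and do not reflect the actual LumenVox API; this only models the behavior described: a draining server finishes pending work but takes no new requests, and new requests are routed by language.

```python
# Hypothetical sketch of language-aware ASR routing with maintenance-mode
# draining; class and function names are illustrative, not the LumenVox API.

class AsrServer:
    def __init__(self, name, languages):
        self.name = name
        self.languages = set(languages)   # languages this server can decode
        self.in_maintenance = False       # the "Out of Service" flag
        self.pending = 0                  # requests still being processed

    def accepts(self, language):
        # A draining server completes pending work but takes nothing new.
        return not self.in_maintenance and language in self.languages

def route_request(servers, language):
    # Route to the eligible server with the fewest pending requests.
    eligible = [s for s in servers if s.accepts(language)]
    if not eligible:
        raise RuntimeError(f"no ASR server available for {language!r}")
    chosen = min(eligible, key=lambda s: s.pending)
    chosen.pending += 1
    return chosen

servers = [AsrServer("asr-1", ["en-US", "es-MX"]),
           AsrServer("asr-2", ["en-US", "pt-BR"])]
servers[0].in_maintenance = True              # asr-1 enters maintenance mode
print(route_request(servers, "en-US").name)   # new en-US work goes to asr-2
```

In this model, taking `asr-1` out of service does not interrupt its in-flight requests; it simply removes the server from the eligible pool for new traffic.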
The LumenVox Short-Utterance Transcription functionality utilizes a built-in, general Statistical Language Model that has been tuned for everyday use to provide a text representation of supplied audio. It is useful in situations where you do not want to be constrained by a specific grammar, or challenged by implementing a more complex and costly custom Statistical Language Model.
Supporting LumenVox’s commitment to making speech applications more secure and easier to administer, additional enhancements were made to our diagnostic tools and Dashboard, including more robust grammar handling within the LumenVox Speech Tuner.
For a comprehensive list of improvements and features released with LumenVox Version 17.0.200, please click here.
If you’d like to watch a previously recorded webinar about the release, including participant Q&A, please click here.
In a recent post, The ROI of Speech, we discussed ways in which speech recognition technology has changed how companies interact with their customers. Perhaps the most significant benefit realized through the implementation of a speech-enabled service solution is the improved intelligence of the interaction. Customers are no longer bound to pushing keys to force-fit their call reason into the company’s pre-determined options.
Since speech-enabled solutions provide a highly conversational interaction with customers, organizations are empowered to expand the level of intelligence their self-service solutions offer. Benefits from implementing such a solution come from two perspectives: 1) reduced costs and accelerated ROI, and 2) enhanced customer experience.
For the purposes of this post, we’ll focus on how the customer experience is enhanced by implementing a robust speech self-service solution. We’ll specifically address the questions posed in the ROI of Speech post:
- Can I engage customers in a manner that allows me to dynamically generate personalized treatment that results in higher rates of self-service or cross-sell/up-sell opportunities?
- When customers don’t want to play in the IVR, can I gather enough information to avoid costly misroutes?
- Can I take what I know about the customer and provide proactive information that might resolve their need before they move into the transactional path or transfer to an agent?
Understanding how each of these factors ties into the overall speech self-service strategy will help position the organization for success and yield an intelligent experience that customers will engage in time and time again.
Given the dynamic nature of conversational speech, companies can leverage speech technology to build very robust interactions with consumers. Let’s assume a customer calls to inquire about their checking account. Based on this customer’s profile, we know that they are a high-net-worth customer and would be eligible for numerous up-sell offers. Using conversational dialogue, we can begin to ask the consumer targeted questions in conjunction with what we know about their relationship with the bank. The depth of conversation would be controlled by the consumer, and all information collected would ultimately be used to improve the intelligence of the customer record. Over time, the organization would have a targeted view of this consumer built from a combination of profile and behavioral information.
As organizations begin to consider whether a speech-enabled solution is right for them, they should take inventory of their current personalization strategies and lay out numerous use cases that could be supported through a more robust speech solution.
The most successful companies across all industry verticals recognize that holding consumers hostage within the IVR system is the most egregious error they can make. The internet is filled with horror stories of consumers trapped in automation purgatory. In fact, being trapped in the IVR is one of the most common reasons consumers hate to use automation. To combat this, companies have tried, often unsuccessfully, to build “second chance” menus to capture caller intent and get them to the right location. Of course, consumers who despise automation rarely play at this level.
Fortunately, speech recognition technology provides a viable solution for both the consumer and the organization.
For companies, the conversational approach of the speech solution provides a sense of forward progress to the consumer. This approach promotes engagement and therefore reduces the rate of costly internal transfers, as well as improves the perception of the company.
Consumers benefit from easy transfers without the need for sitting through verbose second chance menus or cycles of repeated commands.
While proactive information can be successfully pushed in a DTMF solution, the use of speech recognition technology can expand the interaction, thereby delivering a much more targeted push. The depth of engagement will be far deeper with speech technology: consumers can provide complex responses, and the company can offer multiple data points in a single question. The dynamic interaction reduces the cognitive load on a consumer, as the flow of information will be more fluid and natural. This approach is highly successful in keeping calls in the self-service channel and avoiding the costlier agent channel across many industry verticals, particularly when high rates of repeat callers are common, for example, credit card and bank account balance inquiries.
The power of speech technology continues to change the face of the self-service world and customer experience as a whole for improved intelligence of the interaction. Understanding the use cases for the technology requires a solid understanding of current capabilities and consumer behavior and expectations. While the questions presented above represent a significant portion of developing a business case for a speech technology solution, numerous other factors must be addressed to build a comprehensive roadmap.
In our next installment, we will discuss how speech can open opportunities for new functionality and scope of coverage across the entire self-service solution.
As our relationships with our partners mature, we find ourselves influenced to create and develop for their needs. We do not build something in hopes that our partners will buy it. We listen carefully to those around us and develop to their needs after the proper qualification is secured. Recently there has been a flurry of activity in Central and South America, especially in Brazil. One of our larger partners, who has fully integrated our software into their platform, requested that we develop a Brazilian Portuguese acoustic model. Given our willingness to please and our understanding of the market potential of such a venture, we recently agreed and added this to our ever-growing list of languages. We used a novel approach to developing this model, one we have been researching for quite some time, and we think it should satisfy most speech recognition applications.
The Brazilian economy is in a state that presents a favorable opportunity to increase automation and maximize efficiency. To answer this urgent call, LumenVox has expanded its ASR offering by creating a Brazilian Portuguese language model, bringing the number of ASR languages to 8 and the number of TTS languages to 23. Today LumenVox covers all of the Americas, from Cape Horn to Cape Columbia and everywhere in between!
We will be doing a lot more with the Asian TTS languages in the future, once we figure out how to deal with some of the double-byte issues in our Media Server. We just entered QA with our new version, so we should be able to share some details with you on this in just a few weeks.