(JUNE 2007) — Did you know that support for several languages is included with each copy of the LumenVox Speech Engine?
Taking advantage of our various acoustic models — the audio data that powers the Speech Engine — can help improve your speech applications. Not only can you switch between English and Spanish, for instance, but even within a language we have support for multiple dialects.
Just this month we released our U.K. English model, perfect for recognizing speakers from the British Isles, or even speakers from other parts of the world who speak English in a similar way.
We currently have acoustic models for American English, Australian English, U.K. English, Mexican Spanish, South American Spanish, and Canadian French.
Which one you should use for your application will usually be obvious. A call router in America is best served by American English, and one in Australia by Australian English.
But there are cases where it's not so clear. What if you were developing an application for South African speakers, who speak an English dialect for which there is currently no acoustic model?
In that case you may want to try U.K. English first, as South African English is heavily influenced by U.K. English, and then the other models. The same sort of logic applies to other English or Spanish dialects.
Each grammar file contains a language specification in its header. Languages are specified by the code for the language, followed by the country code for the specific dialect. So American English is en-US (en is for English; US specifies the country).
Every language also has a digits-only model that's specified by appending -di to the end of the language specifier, e.g. en-US-di for American English digits. The digits-only models can only recognize digits, but generally offer higher accuracy for those applications.
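For example, a grammar targeting U.K. English might declare its language in its header like this. This is a hedged sketch in SRGS XML form (the rule name and content are illustrative, not from a real application):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- xml:lang carries the specifier: language code, then country code -->
<grammar xmlns="http://www.w3.org/2001/06/grammar"
         version="1.0" xml:lang="en-GB" root="yesno">
  <rule id="yesno">
    <one-of>
      <item>yes</item>
      <item>no</item>
    </one-of>
  </rule>
</grammar>
```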
Note that you cannot use two different acoustic models at the same time. If you are supporting multiple languages in your application, be sure that the grammars activated for a given recognition all use the same language.
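The naming convention and the same-language rule can be sketched in a few lines of Python. This is purely illustrative; the function names are ours, not part of the LumenVox API:

```python
# Sketch of the specifier convention described above
# (helper names are illustrative, not a LumenVox API).

def parse_specifier(spec):
    """Split a specifier like 'en-US' or 'en-US-di' into its parts."""
    parts = spec.split("-")
    language, country = parts[0], parts[1]
    digits_only = len(parts) > 2 and parts[2] == "di"
    return language, country, digits_only

def share_one_model(specifiers):
    """True if every active grammar targets the same acoustic model."""
    return len(set(specifiers)) == 1

print(parse_specifier("en-US-di"))            # ('en', 'US', True)
print(share_one_model(["en-US", "es-MX"]))    # False: can't mix models
```

A check like `share_one_model` run before each recognition is one simple way to catch a mixed-language grammar set early.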
It's a question we hear all the time: what's the best way to convert an existing DTMF telephony application to one that harnesses the power of speech recognition?
We cover the best practices of moving from DTMF to speech in our latest white paper, which will help you get away from those nested menus. Your callers will thank you.
Give yourself a leg up if you're thinking of embarking on such a conversion yourself by reading the white paper. More
The most recent releases of the LumenVox Speech Engine include our different acoustic models, so you can get started working with various languages immediately.
If you would like information on downloading the latest release of the LumenVox Speech Engine, please contact us. It is a free download for users with current software maintenance packages.
In order to recognize sounds from different languages, we "train" the LumenVox Speech Engine on large sets of transcribed audio from each language. The result of this process is an acoustic model, a large file that contains information about the way words in a language sound.
Recent installations of the Speech Engine include all of our acoustic models (the U.K. English model is currently a separate download and will be included in the July release).
There is a subdirectory called Lang in the Speech Engine installation directory. Inside it is a directory called Dict that contains the acoustic models that will be loaded when the Speech Engine starts.
By default, the Dict directory only contains the American English model. If you want to use other languages, you will need to copy the appropriate models from Lang/OtherLanguages/ into the Dict directory.
Note that each acoustic model uses a significant amount of memory, so you should not load models you will not be using.
© 2017 LumenVox, LLC. All rights reserved.