
Adding Audio

Reference Number: AA-00606

Because the LumenVox Speech Engine is hardware independent, the client application has great flexibility when collecting audio data. Once the audio is acquired, your client application must ensure the data is in a supported audio format, either by capturing it in that format initially or by converting it afterward.

The audio sent to the Speech Engine must be header-less, otherwise known as "raw" audio. For example, a standard Windows .wav file has a header that must be removed.
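As a sketch of what stripping that header can look like, the helper below locates the "data" chunk of a PCM .wav buffer and returns its payload. It is illustrative, not part of the LumenVox API; it assumes a little-endian RIFF/WAVE layout and does not validate the "fmt " chunk against the format the Engine expects.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <stdexcept>
#include <vector>

// Return the raw sample bytes of a PCM .wav buffer by walking its
// RIFF sub-chunks until the "data" chunk is found.
std::vector<char> StripWavHeader(const std::vector<char>& wav)
{
    auto u32 = [&](size_t off) -> uint32_t {
        uint32_t v;
        std::memcpy(&v, wav.data() + off, 4);
        return v;
    };

    if (wav.size() < 12 ||
        std::memcmp(wav.data(), "RIFF", 4) != 0 ||
        std::memcmp(wav.data() + 8, "WAVE", 4) != 0)
        throw std::runtime_error("not a RIFF/WAVE file");

    size_t pos = 12;  // first sub-chunk starts after the RIFF header
    while (pos + 8 <= wav.size()) {
        uint32_t size = u32(pos + 4);
        if (std::memcmp(wav.data() + pos, "data", 4) == 0) {
            size_t start = pos + 8;
            size_t len = std::min<size_t>(size, wav.size() - start);
            return std::vector<char>(wav.begin() + start,
                                     wav.begin() + start + len);
        }
        pos += 8 + size + (size & 1);  // chunks are word-aligned
    }
    throw std::runtime_error("no data chunk found");
}
```

Production code should also confirm the "fmt " chunk matches one of the sound formats the Engine supports before loading the payload.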

Audio data is stored in a voice channel. Each speech port has 64 voice channels, so 64 different audio samples can be stored in a speech port at once, although most applications need only two: one for the main answer, and one holding the result of a confirmation yes/no question.

Audio may be loaded all at once, as a batch decode, or it may be streamed in. Batch decodes are generally used for applications where you have audio files saved to disk, while streaming decodes are used when your application is collecting audio and sending it directly to the Engine (most telephony applications will use streaming).

Batched Audio

To get your audio into the port, collect it into a buffer and call LoadVoiceChannel, supplying the handle of the port, the number of the voice channel you wish to use, a pointer to the audio data, the length of the audio, and the correct sound format.

C Code

void LoadAudio(HPORT hport, void* audio, int audiolength)
{
    LV_SRE_LoadVoiceChannel(hport, 1, audio, audiolength, PCM_16KHZ);
}

C++ Code

void LoadAudio(LVSpeechPort &myport, void* audio, int audiolength)
{
    myport.LoadVoiceChannel(1, audio, audiolength, PCM_16KHZ);
}
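In a batch scenario, the buffer is typically filled from a raw audio file on disk before LoadVoiceChannel is called. A minimal, illustrative helper for that step (not part of the LumenVox API) might look like:

```cpp
#include <fstream>
#include <iterator>
#include <stdexcept>
#include <string>
#include <vector>

// Read an entire raw (header-less) audio file into memory. The
// resulting buffer and its size can then be handed to LoadVoiceChannel.
std::vector<char> ReadRawAudio(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    if (!in)
        throw std::runtime_error("cannot open " + path);
    return std::vector<char>(std::istreambuf_iterator<char>(in),
                             std::istreambuf_iterator<char>());
}
```

Remember that the file must already be header-less and in a supported sound format; this helper does no conversion.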


Streaming Audio

In order to stream audio into the Speech Engine, several parameters must be set, because the Engine must perform voice activity detection (VAD), correctly identifying the beginning and end of speech. See Recommended Engine Settings.

If you are having problems with barge-in, or with the Engine chopping off words at the end of utterances, it is likely due to how the streaming parameters are set. Please review the Recommended Engine Settings, Sensitivity Settings, and SetStreamParameter pages.

The code below will set up streaming and set the stream parameters to the most commonly used settings.

C Code

LV_SRE_StreamSetParameter(hport, STREAM_PARM_DETECT_BARGE_IN, 1);
LV_SRE_StreamSetParameter(hport, STREAM_PARM_DETECT_END_OF_SPEECH, 1);
LV_SRE_StreamSetParameter(hport, STREAM_PARM_AUTO_DECODE, 1);
LV_SRE_StreamSetParameter(hport, STREAM_PARM_VOICE_CHANNEL, 1);

C++ Code

// The port gets opened and initialized.
LVSpeechPort port;
port.CreateClient();
// ...

// Let the port detect beginning and end of speech,
// and handle the speech decoding automatically.
port.StreamSetParameter(STREAM_PARM_DETECT_BARGE_IN, 1);
port.StreamSetParameter(STREAM_PARM_DETECT_END_OF_SPEECH, 1);
port.StreamSetParameter(STREAM_PARM_AUTO_DECODE, 1);

// Pick a voice channel to record audio and send responses to.
port.StreamSetParameter(STREAM_PARM_VOICE_CHANNEL, 1);

// If you wish to use your activated SRGS grammars, the grammar set
// must be LV_ACTIVE_GRAMMAR_SET.

The rest of this example will be in C++. Suppose we have an interface that intermittently provides audio to us. For simplicity, assume it always sends audio in u-Law 8KHz:


typedef bool (*AudioStreamCallback)(char* audio_chunk,
                                    int audio_length,
                                    void* user_data);

class AudioStreamer
{
public:
    // Non-blocking function. Sends audio through the callback function
    // at regular intervals on a separate thread. It will stop sending
    // audio if the callback returns "false".
    void StartStream(AudioStreamCallback cb, void* user_data);

    // The audio thread will stop sending audio through the callback if
    // StopStream is called. When StopStream returns, the audio thread
    // is no longer sending.
    void StopStream();

    // constructors, destructors, hardware hooks, etc.
    // ...
};
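For experimentation without audio hardware, a minimal stand-in implementation of this interface could look like the following. It is purely illustrative (it repeats the declarations above so it is self-contained, and emits dummy chunks rather than real captured audio); a real implementation would pull audio from a device or telephony stack.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

typedef bool (*AudioStreamCallback)(char* audio_chunk,
                                    int audio_length,
                                    void* user_data);

// Illustrative stand-in: emits dummy 20 ms chunks (160 bytes of u-Law
// 8 kHz audio) on a worker thread until the callback returns false or
// StopStream is called.
class AudioStreamer
{
public:
    void StartStream(AudioStreamCallback cb, void* user_data)
    {
        stop_ = false;
        worker_ = std::thread([this, cb, user_data]() {
            char chunk[160] = {};  // 160 samples = 20 ms at 8 kHz
            while (!stop_) {
                if (!cb(chunk, sizeof(chunk), user_data))
                    break;
                std::this_thread::sleep_for(std::chrono::milliseconds(20));
            }
        });
    }

    void StopStream()
    {
        stop_ = true;
        if (worker_.joinable())
            worker_.join();
    }

    ~AudioStreamer() { StopStream(); }

private:
    std::atomic<bool> stop_{false};
    std::thread worker_;
};
```

A stub like this makes it possible to exercise the callback plumbing shown in the rest of the example before wiring in real audio capture.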

The speech port also has a callback mechanism for notifying the application of its processing state.


typedef void (*StreamStateChangeFn)(int new_state,
                                    unsigned int total_bytes,
                                    unsigned int recorded_bytes,
                                    void* user_data);

We can connect our speech port and the audio streamer together by way of their callbacks.


struct SimpleRecognizer
{
    LVSpeechPort port;
    AudioStreamer audio;
};

bool AudioCB(char* audio_chunk, int audio_length, void* user_data)
{
    SimpleRecognizer* self = (SimpleRecognizer*)user_data;
    self->port.StreamSendData(audio_chunk, audio_length);
    return true;
}

static void PortCB(int new_state, unsigned int total_bytes,
                   unsigned int recorded_bytes,
                   void* user_data)
{
    SimpleRecognizer* self = (SimpleRecognizer*)user_data;
    switch (new_state)
    {
    case STREAM_STATUS_READY:
        self->audio.StartStream(AudioCB, self);
        break;
    case STREAM_STATUS_END_SPEECH:
        self->audio.StopStream();
        // Retrieve answers: we will define this later
        break;
    case STREAM_STATUS_BARGE_IN:
        // Stop playing prompt
        break;
    }
}

All that remains is to plug the PortCB function into the port.


SimpleRecognizer reco;

// Initialize the speech port and the audio streamer.
// ...

// Register the state-change callback and set the sound format.
reco.port.StreamSetStateChangeCallBack(PortCB, &reco);
reco.port.StreamSetParameter(STREAM_PARM_SOUND_FORMAT, ULAW_8KHZ);

// StreamStart will put the port into the STREAM_STATUS_READY state, which
// will trigger the audio streamer to start sending audio to the port.
reco.port.StreamStart();