Speech recognition with Microsoft Cognitive Speech API and non-microphone real-time audio stream


Problem

My project consists of a desktop application that records audio in real time, and for which I intend to receive real-time recognition feedback from an API. With a microphone, a real-time implementation using Microsoft's new Speech-to-Text API is trivial; my scenario differs only in that my data is written to a MemoryStream object.

API Support

This article explains how to use the API's Recognizer (link) with custom audio streams, which invariably requires implementing the abstract class PullAudioInputStreamCallback (link) in order to create the required AudioConfig object using the CreatePullStream method (link). In other words, to achieve what I require, a callback interface must be implemented.

Implementation attempt

Since my data is written to a MemoryStream (and the library I use will only record to files or Stream objects), in the code below I simply delegate reads to the underlying stream (in a sloppy way, perhaps?), bridging the divergence in method signatures.

using System.IO;
using Microsoft.CognitiveServices.Speech.Audio;

class AudioInputCallback : PullAudioInputStreamCallback
{
    private readonly MemoryStream memoryStream;

    public AudioInputCallback(MemoryStream stream)
    {
        this.memoryStream = stream;
    }

    public override int Read(byte[] dataBuffer, uint size)
    {
        // Honor the requested size rather than the buffer's full length
        return this.Read(dataBuffer, 0, (int)size);
    }

    private int Read(byte[] buffer, int offset, int count)
    {
        return memoryStream.Read(buffer, offset, count);
    }

    public override void Close()
    {
        memoryStream.Close();
        base.Close();
    }

}

The Recognizer implementation is as follows:

private SpeechRecognizer CreateMicrosoftSpeechRecognizer(MemoryStream memoryStream)
{
    var recognizerConfig = SpeechConfig.FromSubscription(SubscriptionKey, @"westus");
    // The API expects a BCP-47 tag such as "en-US", so the full culture name
    // is used rather than TwoLetterISOLanguageName ("en")
    recognizerConfig.SpeechRecognitionLanguage =
        _programInfo.CurrentSourceCulture.Name;

    // Constants are used as constructor params
    var format = AudioStreamFormat.GetWaveFormatPCM(
        samplesPerSecond: SampleRate, bitsPerSample: BitsPerSample, channels: Channels);

    // Implementation of PullAudioInputStreamCallback
    var callback = new AudioInputCallback(memoryStream);
    AudioConfig audioConfig = AudioConfig.FromStreamInput(callback, format);

    // Actual recognizer is created with the required objects
    SpeechRecognizer recognizer = new SpeechRecognizer(recognizerConfig, audioConfig);

    // Event subscriptions. Most handlers are implemented for debugging purposes only.
    // A log window outputs the feedback from the event handlers.
    recognizer.Recognized += MsRecognizer_Recognized;
    recognizer.Recognizing += MsRecognizer_Recognizing;
    recognizer.Canceled += MsRecognizer_Canceled;
    recognizer.SpeechStartDetected += MsRecognizer_SpeechStartDetected;
    recognizer.SpeechEndDetected += MsRecognizer_SpeechEndDetected;
    recognizer.SessionStopped += MsRecognizer_SessionStopped;
    recognizer.SessionStarted += MsRecognizer_SessionStarted;

    return recognizer;
}

How the data is made available to the recognizer (using CSCore):

MemoryStream memoryStream = new MemoryStream(_finalSource.WaveFormat.BytesPerSecond / 2);
byte[] buffer = new byte[_finalSource.WaveFormat.BytesPerSecond / 2];

_soundInSource.DataAvailable += (s, e) =>
{
    int read;
    _programInfo.IsDataAvailable = true;

    // Writes to MemoryStream as event fires
    while ((read = _finalSource.Read(buffer, 0, buffer.Length)) > 0)
        memoryStream.Write(buffer, 0, read);
};

// Creates MS recognizer from MemoryStream
_msRecognizer = CreateMicrosoftSpeechRecognizer(memoryStream);

// Starts the loopback capture instance
_soundIn.Start();

await Task.Delay(1000);

// Starts recognition
await _msRecognizer.StartContinuousRecognitionAsync();

Outcome

When the application is run, I don't get any exceptions, nor any response from the API other than SessionStarted and SessionStopped, as depicted below in the log window of my application.

(Screenshot: the application's log window shows only SessionStarted and SessionStopped events.)

I could use suggestions for different approaches to my implementation, as I suspect there is a timing problem between the recorder's DataAvailable event and the actual sending of data to the API, which is making it discard the session prematurely. With no detailed feedback on why my requests are unsuccessful, I can only guess at the reason.


There are 2 answers

Answer by Zhou

The Read() callback of PullAudioInputStream should block if no data is immediately available, and Read() returns 0 only when the stream reaches its end. The SDK will then close the stream after Read() returns 0 (find an API reference doc here).

However, the Read() behavior of the C# MemoryStream is different: it returns 0 whenever no data is currently available in the buffer. This is why you only see SessionStarted and SessionStopped events, but no recognition events.
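For illustration (a minimal example of my own, not SDK code), a MemoryStream with no unread data returns 0 immediately, which the SDK then interprets as end-of-stream:

using System.IO;

var ms = new MemoryStream();
var buf = new byte[1024];

// Returns 0 right away because nothing has been written yet --
// the Speech SDK treats a 0 return from the callback as "stream ended".
int read = ms.Read(buf, 0, buf.Length);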

To fix this, you need some kind of synchronization between PullAudioInputStream::Read() and MemoryStream::Write(), so that PullAudioInputStream::Read() waits until MemoryStream::Write() has written some data into the buffer.
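For example, a small blocking buffer built on Monitor can provide that synchronization (a minimal sketch of my own, not part of the SDK): Write() appends and wakes readers, while Read() blocks until unread data is available.

using System.IO;
using System.Threading;

class BlockingAudioBuffer
{
    private readonly object _gate = new object();
    private readonly MemoryStream _buffer = new MemoryStream();
    private long _readPos;

    // Producer side: called from the DataAvailable handler.
    public void Write(byte[] data, int offset, int count)
    {
        lock (_gate)
        {
            _buffer.Seek(0, SeekOrigin.End);
            _buffer.Write(data, offset, count);
            Monitor.PulseAll(_gate); // wake any blocked Read()
        }
    }

    // Consumer side: called from PullAudioInputStreamCallback.Read().
    public int Read(byte[] dest, int count)
    {
        lock (_gate)
        {
            // Block until Write() has produced data we have not consumed.
            while (_buffer.Length == _readPos)
                Monitor.Wait(_gate);

            _buffer.Seek(_readPos, SeekOrigin.Begin);
            int read = _buffer.Read(dest, 0, count);
            _readPos += read;
            return read;
        }
    }
}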

Alternatively, I would recommend using PushAudioInputStream, which allows you to write your data directly into the stream. For your case, in the _soundInSource.DataAvailable event, instead of writing data into the MemoryStream, you can write it directly into a PushAudioInputStream. You can find samples for PushAudioInputStream here.
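Applied to your code, the push-stream variant would look roughly like this (a sketch reusing the question's _soundInSource, _finalSource, buffer, and format constants; adjust to your setup):

using Microsoft.CognitiveServices.Speech.Audio;

var format = AudioStreamFormat.GetWaveFormatPCM(
    samplesPerSecond: SampleRate, bitsPerSample: BitsPerSample, channels: Channels);

// Push stream: your code writes audio in; the SDK pulls it out internally.
PushAudioInputStream pushStream = AudioInputStream.CreatePushStream(format);
AudioConfig audioConfig = AudioConfig.FromStreamInput(pushStream);

_soundInSource.DataAvailable += (s, e) =>
{
    int read;
    while ((read = _finalSource.Read(buffer, 0, buffer.Length)) > 0)
        pushStream.Write(buffer, read); // only the first 'read' bytes are valid
};

// When capture stops, close the stream so the SDK sees end-of-audio:
// pushStream.Close();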

We will update the documentation to provide best practices on how to use the Pull and Push AudioInputStream. Sorry for the inconvenience.

Thank you!

Answer by Mallock

Hi, I was able to solve this problem using NAudio.

Here is the code that starts the audio stream recording:

computerAudioWriter = new WaveFileWriter(new MemoryStream(), computerAudioCapture.WaveFormat);
computerAudioStream = new NAudioStream();

// Start recording
computerAudioCapture.StartRecording();
computerAudioCapture.DataAvailable += (sender, e) =>
{
    if (e.BytesRecorded > 0)
    {
        // Write the captured bytes into the blocking stream below
        computerAudioStream.Write(e.Buffer, 0, e.BytesRecorded);
    }
};

This is the class used as the PullAudioInputStreamCallback implementation:

using System;
using System.IO;
using System.Threading;
using Microsoft.CognitiveServices.Speech.Audio;

public class NAudioStream : PullAudioInputStreamCallback
{
    private readonly MemoryStream memoryStream;
    private readonly ManualResetEvent newData;

    // Tracks how far into the stream the SDK has read so far
    private int bytesCounter = 0;

    public NAudioStream()
    {
        this.memoryStream = new MemoryStream();
        this.newData = new ManualResetEvent(false);
    }

    public void Write(byte[] buffer, int offset, int count)
    {
        // Append data to the memory stream and signal the waiting Read()
        memoryStream.Write(buffer, offset, count);
        newData.Set();
    }

    public override void Close()
    {
        // Release a blocked Read() so it can return 0 (end of stream)
        newData.Set();
        base.Close();
    }

    public override int Read(byte[] dataBuffer, uint size)
    {
        if (memoryStream == null)
        {
            return 0;
        }

        newData.WaitOne(); // Block until there are bytes to read

        // Snapshot the written data. Note: ToArray() copies the whole
        // buffer on every call, so this is simple rather than efficient.
        byte[] wavBuffer = memoryStream.ToArray();

        int bytesToRead = Math.Min((int)size, wavBuffer.Length - bytesCounter);
        if (bytesToRead > 0)
        {
            Array.Copy(wavBuffer, bytesCounter, dataBuffer, 0, bytesToRead);
            bytesCounter += bytesToRead;
        }

        // Only clear the signal once everything written so far has been
        // consumed, so a Write() racing with this Read() is not lost
        if (bytesCounter >= wavBuffer.Length)
        {
            newData.Reset();
        }

        return bytesToRead;
    }
}
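For completeness, wiring this class into the recognizer follows the same pattern as in the question (a sketch; speechConfig and the PCM constants are assumptions that should match your capture format):

var format = AudioStreamFormat.GetWaveFormatPCM(
    samplesPerSecond: 16000, bitsPerSample: 16, channels: 1);

// computerAudioStream is the NAudioStream instance created above
var audioConfig = AudioConfig.FromStreamInput(computerAudioStream, format);
var recognizer = new SpeechRecognizer(speechConfig, audioConfig);

await recognizer.StartContinuousRecognitionAsync();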