WaveOutWrite direct from Webcam audio capture callback


I'm capturing audio data from a webcam using VFW, and inside the body of the audio capture callback I direct the sampled data straight to the default wave mapper using waveOutWrite.

The signal from the webcam is 1 channel / 8 bits / 11025 samples/sec. The format is supported by the default audio device; I verified that with waveOutOpen and the WAVE_FORMAT_QUERY flag.
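
For reference, a minimal sketch of that format check. The WAVEFORMATEX declaration and helper name below are my own, not the exact wrapper used in the code further down; the constants are the standard winmm values:

    [StructLayout(LayoutKind.Sequential)]
    public struct WAVEFORMATEX {
        public ushort wFormatTag;
        public ushort nChannels;
        public uint   nSamplesPerSec;
        public uint   nAvgBytesPerSec;
        public ushort nBlockAlign;
        public ushort wBitsPerSample;
        public ushort cbSize;
    }

    [DllImport("winmm.dll")]
    static extern uint waveOutOpen(IntPtr phwo, uint uDeviceID, ref WAVEFORMATEX pwfx,
                                   IntPtr dwCallback, IntPtr dwInstance, uint fdwOpen);

    const uint   WAVE_MAPPER       = 0xFFFFFFFF;
    const uint   WAVE_FORMAT_QUERY = 0x0001;
    const ushort WAVE_FORMAT_PCM   = 1;

    static bool IsFormatSupported() {
        var fmt = new WAVEFORMATEX {
            wFormatTag      = WAVE_FORMAT_PCM,
            nChannels       = 1,
            nSamplesPerSec  = 11025,
            wBitsPerSample  = 8,
            nBlockAlign     = 1,        // nChannels * wBitsPerSample / 8
            nAvgBytesPerSec = 11025,    // nSamplesPerSec * nBlockAlign
            cbSize          = 0
        };
        // With WAVE_FORMAT_QUERY no device is actually opened; the call only
        // reports whether the mapper can play this format (0 == MMSYSERR_NOERROR).
        return waveOutOpen(IntPtr.Zero, WAVE_MAPPER, ref fmt,
                           IntPtr.Zero, IntPtr.Zero, WAVE_FORMAT_QUERY) == 0;
    }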

waveOutWrite returns MMSYSERR_NOERROR, but what I hear is far from what I expect. The room is quiet, so the output should sound like the faint white noise of an empty room.

Please listen to the sound: YouTube recording.

Playback starts, packet by packet (each about 16K, with a valid WAVEHDR structure), then gradually slows down and finally exits with an unrecoverable system error.

What could this be a symptom of?

Below is the audio data receiver code for VFW. The lpWHdr it receives looks fine, and its internal flag is even set to 2 = WHDR_PREPARED.. it seems like VFW and wave audio were made for each other :)

public static void capAudioStreamCallback(UIntPtr hWnd, ref WAVE.WAVEHDR lpWHdr) {
    // Log a timestamp and the number of bytes delivered by VFW.
    Say(String.Format(DateTime.Now.ToString("mm:ss:fff ") + "Received {0} bytes of audio data", lpWHdr.dwBytesRecorded));
    Application.DoEvents();

    // Prepare the header for waveOut only if it is not already marked as prepared.
    if ((WA.WAVEHDR_FLAGS)lpWHdr.dwFlags != WA.WAVEHDR_FLAGS.WHDR_PREPARED)
        CheckWAError("waveOutPrepareHeader", WA.waveOutPrepareHeader(phwo, lpWHdr, (uint)Marshal.SizeOf(lpWHdr)));

    // Queue the buffer for playback and immediately unprepare it.
    CheckWAError("waveOutWrite", WA.waveOutWrite(phwo, lpWHdr, (uint)Marshal.SizeOf(lpWHdr)));
    CheckWAError("waveOutUnprepareHeader", WA.waveOutUnprepareHeader(phwo, lpWHdr, (uint)Marshal.SizeOf(lpWHdr)));
}

static void CheckWAError(string Func, WA.MMSYSERR err) {
    if (err == WA.MMSYSERR.MMSYSERR_BASE_NOERROR) { Say(Func + " WA Ok"); return; }

    // Translate the MMSYSERR code into readable text via waveOutGetErrorText.
    IntPtr str = Marshal.AllocHGlobal(200);
    WA.waveOutGetErrorText(err, str, 200);
    string s = Marshal.PtrToStringAnsi(str);
    Marshal.FreeHGlobal(str);
    Say(Func + " err: " + s);
}

I don't think the buffer is being overrun: the DateTime millisecond stamps show a callback roughly every 1400 ms, and at 11025 samples/sec (8-bit mono, so 11025 bytes/sec) a buffer of about 16500 bytes holds roughly 1.5 seconds of audio, so the sizes and timing look consistent.

UPD: I copied the unmanaged buffer into a managed array and looked through its values. It looks like saw teeth, or an overdriven sine: 0 4 0 3 0 32 109 213 255 251 255 243 241 97 0 7 0 2 1 1 0 5 0, and then up and down again with roughly the same values and the same period (not exactly, but close, +/-).

I can also record the signal from that camera with the built-in Windows recorder, and the signal level jumps up and down with my voice, so the webcam's microphone is fine too. I suspect something is wrong with the VFW audio input feeder. It accepted the WAVEFORMATEX and sends back a WAVEHDR, and both look OK, but the buffer is populated with data from some other source, not the webcam, even though VFW says it must be the webcam, since video is captured from the same source and that works. I only added one extra message: SendMessage(camHwnd, WM_CAP_SET_CALLBACK_WAVESTREAM, 0, audioCallback); I'm fairly sure that if I use waveIn instead of VFW it will work fine; I'll check that later. But why doesn't VFW work the way it's supposed to?
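
For completeness, a rough sketch of how that wave-stream callback registration can look. The SendMessage overload, the delegate bookkeeping, and the helper name are my own assumptions, not the exact wrapper used above; the message ID is the standard vfw.h value (WM_CAP_START + 7):

    const uint WM_CAP_START                   = 0x0400;   // == WM_USER
    const uint WM_CAP_SET_CALLBACK_WAVESTREAM = WM_CAP_START + 7;

    // Delegate matching the callback signature used above.
    delegate void CapWaveCallback(UIntPtr hWnd, ref WAVE.WAVEHDR lpWHdr);

    [DllImport("user32.dll")]
    static extern IntPtr SendMessage(IntPtr hWnd, uint Msg, IntPtr wParam, IntPtr lParam);

    // Keep a reference so the delegate is not garbage collected while VFW holds the pointer.
    static CapWaveCallback _waveCallback;

    static void RegisterWaveCallback(IntPtr camHwnd) {
        _waveCallback = capAudioStreamCallback;
        IntPtr fnPtr = Marshal.GetFunctionPointerForDelegate(_waveCallback);
        SendMessage(camHwnd, WM_CAP_SET_CALLBACK_WAVESTREAM, IntPtr.Zero, fnPtr);
    }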

1 Answer

AudioBubble (accepted answer):

The problem turned out to be very simple: a USB hardware failure. I just needed to unplug the USB camera and plug it back in.

But anyway, I'd like to share what I learned along the way.

1) Use an asynchronous mechanism for getting audio packets and sending them to the playback end. Until the first buffer has finished playing, avoid submitting a new buffer for playback; this technique is called double (or even triple) buffering. With VFW you can set it up quite comfortably via the WM_CAP_GET_SEQUENCE_SETUP message and the CAPTUREPARMS structure: the wNumAudioRequested field controls how many different buffers are cyclically used to deliver audio data to your audio callback. It's set to 10 by default, which is more than enough. A sketch of reading and adjusting this setting follows below.
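
A minimal sketch of that setup, assuming a raw SendMessage wrapper. The struct layout mirrors the Win32 CAPTUREPARMS declaration; the helper name ConfigureAudioBuffers is hypothetical:

    const uint WM_CAP_START              = 0x0400;
    const uint WM_CAP_GET_SEQUENCE_SETUP = WM_CAP_START + 64;
    const uint WM_CAP_SET_SEQUENCE_SETUP = WM_CAP_START + 65;

    // C# mirror of the Win32 CAPTUREPARMS structure (all fields are 4 bytes wide).
    [StructLayout(LayoutKind.Sequential)]
    struct CAPTUREPARMS {
        public uint dwRequestMicroSecPerFrame;
        public int  fMakeUserHitOKToCapture;
        public uint wPercentDropForError;
        public int  fYield;
        public uint dwIndexSize;
        public uint wChunkGranularity;
        public int  fUsingDOSMemory;
        public uint wNumVideoRequested;
        public int  fCaptureAudio;
        public uint wNumAudioRequested;     // number of audio buffers used cyclically
        public uint vKeyAbort;
        public int  fAbortLeftMouse;
        public int  fAbortRightMouse;
        public int  fLimitEnabled;
        public uint wTimeLimit;
        public int  fMCIControl;
        public int  fStepMCIDevice;
        public uint dwMCIStartTime;
        public uint dwMCIStopTime;
        public int  fStepCaptureAt2x;
        public uint wStepCaptureAverageFrames;
        public uint dwAudioBufferSize;
        public int  fDisableWriteCache;
        public uint AVStreamMaster;
    }

    [DllImport("user32.dll")]
    static extern IntPtr SendMessage(IntPtr hWnd, uint Msg, IntPtr wParam, ref CAPTUREPARMS lParam);

    static void ConfigureAudioBuffers(IntPtr camHwnd, uint numAudioBuffers) {
        var cp = new CAPTUREPARMS();
        IntPtr size = (IntPtr)Marshal.SizeOf(typeof(CAPTUREPARMS));

        // Read the current capture parameters, change only the audio buffer count, write them back.
        SendMessage(camHwnd, WM_CAP_GET_SEQUENCE_SETUP, size, ref cp);
        cp.wNumAudioRequested = numAudioBuffers;
        SendMessage(camHwnd, WM_CAP_SET_SEQUENCE_SETUP, size, ref cp);
    }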

2) The best way to check whether you're getting a valid audio signal: inside your WAVESTREAM callback, marshal the bytes from the received audio buffer into a managed byte array. Then, still inside the callback, print 50-100 sample values with Console.Write(array[i] + " ") and see whether the values change as your voice goes up and down. Keep in mind that the zero level sits at the middle of the sample range given by WAVEFORMATEX->wBitsPerSample; in my case (8 bits/sample) silence looks like 125 126 127 128 129, i.e. values hovering around the 128 midpoint with only zero-noise. Once you're sure you have correct audio data, you can move on toward your goal. A sketch of that check follows below.
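
A minimal sketch of such a check, assuming the WAVEHDR wrapper exposes lpData (IntPtr) and dwBytesRecorded as in the question's code (those field names are an assumption about that wrapper):

    // Dump the first samples of a captured buffer to the console; for 8-bit PCM,
    // silence should hover around the 128 midpoint.
    static void DumpSamples(ref WAVE.WAVEHDR lpWHdr, int count) {
        int n = Math.Min(count, (int)lpWHdr.dwBytesRecorded);
        byte[] samples = new byte[n];
        Marshal.Copy(lpWHdr.lpData, samples, 0, n);

        for (int i = 0; i < n; i++)
            Console.Write(samples[i] + " ");
        Console.WriteLine();
    }

Called from the wave-stream callback, e.g. DumpSamples(ref lpWHdr, 100);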

3) Remember: when you're in record-from-mic mode, it's better to keep the local wave output device closed. Your goal is to collect audio data to record or to send over the network, so don't try to grab the data and waveOut it locally at the same time. Sometimes the latency of your speaker path is a bit higher than the rate at which mic data is sampled, and you'll end up in a mess with buffers, as happened to me. I now follow the principle: "recording" is when you collect, save, or send audio data; playback should happen after recording, or at the same time but on the endpoint PC.

4) To be continued with code