How to set up sequential independent recordings with the Web Audio API


I have no problems recording the microphone, connecting an analyser for a nice VU meter, resampling the large amount of data to something we can handle (8 kHz, mono) using 'xaudio.js' from the speex.js lib, and wrapping it in an appropriate WAV envelope.
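For reference, the capture side looks roughly like this (a minimal sketch with illustrative variable names; the xaudio.js resampling and WAV wrapping are omitted):

```javascript
// Minimal capture sketch (illustrative names; resampling/WAV wrapping omitted)
const audioContext = new AudioContext();
let sourceNode, analyserNode, processorNode;
let recordedChunks = [];

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  sourceNode = audioContext.createMediaStreamSource(stream);

  // Analyser feeding the VU meter
  analyserNode = audioContext.createAnalyser();
  sourceNode.connect(analyserNode);

  // ScriptProcessor delivering raw PCM via onaudioprocess
  processorNode = audioContext.createScriptProcessor(4096, 1, 1);
  processorNode.onaudioprocess = function (event) {
    const input = event.inputBuffer.getChannelData(0);
    recordedChunks.push(new Float32Array(input)); // copy, the buffer is reused
  };

  sourceNode.connect(processorNode);
  processorNode.connect(audioContext.destination); // keeps the node processing
});
```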

Stopping the recorder is a different story, because the recorded data severely lags behind the onaudioprocess callbacks. Even that is not a real problem, since I can calculate the missing samples and wait for them to arrive before I actually store the data.
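In case it helps, here is a rough sketch of that wait-for-the-tail idea (illustrative only; samplesExpected would be computed from the elapsed recording time and the sample rate when stop is requested, and finalizeRecording is a hypothetical helper):

```javascript
// Illustrative: keep collecting after stop is requested until the buffers
// delivered by onaudioprocess have caught up with the expected sample count.
let stopRequested = false;
let samplesExpected = 0;  // set on stop: elapsed seconds * audioContext.sampleRate
let samplesCollected = 0;

processorNode.onaudioprocess = function (event) {
  const input = event.inputBuffer.getChannelData(0);
  recordedChunks.push(new Float32Array(input));
  samplesCollected += input.length;

  if (stopRequested && samplesCollected >= samplesExpected) {
    finalizeRecording(); // hypothetical helper: trim, resample, wrap as WAV
  }
};
```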

But what now? How do I stop the audio processing from calling onaudioprocess? Disconnecting all the nodes doesn't make a difference. How can I re-initialize all buffers to create a clean, fresh jump-in point for the next recording? Should I destroy the AudioContext, and if so, how? Or is it enough to 'null' the createMediaStreamSource?

What needs to be done to truly set everything up for sequential independent recordings?

Any hint is appreciated.


1 Answer

Accepted answer (cwilso):

I'm not sure of all your code structure; personally, I'd try to hang on to the AudioContext and the input stream (from the getUserMedia callback), even if I removed the MediaStreamSourceNode.

To get rid of the ScriptProcessor, though, set the script processor node's .onaudioprocess to null. That will stop the callback from firing; then, if you disconnect the node and release all references to it, it should be garbage-collected as usual.

[edit] Oh, and the only way to delete an AudioContext is to get rid of any processing that's happening (disconnect all nodes, remove any onaudioprocess), remove any references to it, and wait for it to be garbage-collected.
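Putting that advice together, a teardown between takes might look like this (a sketch building on the illustrative setup above; the AudioContext and the MediaStream are kept alive for the next recording):

```javascript
// Stop the current take but keep audioContext and the input stream for the next one
function stopRecording() {
  if (processorNode) {
    processorNode.onaudioprocess = null; // no more callbacks
    processorNode.disconnect();
    processorNode = null;                // release the reference
  }
  if (sourceNode) {
    sourceNode.disconnect();             // drop the MediaStreamSource
    sourceNode = null;
  }
  // Start the next take from fresh buffers (after the previous data has been saved)
  recordedChunks = [];
}
```

The next take then only needs to recreate the MediaStreamSource and ScriptProcessor nodes from the retained stream and context.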