I'm writing an Electron app to deliver separate audio streams to 10 audio channels, using a Focusrite Scarlett 18i20 USB sound card. Windows 10 splits the outputs into the following stereo outputs:
- Focusrite 1 + 2
- Focusrite 3 + 4
- Focusrite 5 + 6
- Focusrite 7 + 8
- Focusrite 9 + 10
Because of this, I need the app to send audio to a specific output device and also split the stereo channels. Example: to deliver audio to the 3rd physical output, I need to send it to "Focusrite 3 + 4" on the left channel. Unfortunately, I can't seem to do both at the same time.
I start with an audio object:
let audio = new Audio("https://test.com/test.mp3");
I do the following to get the sinkIds for the outputs:
let devices = await navigator.mediaDevices.enumerateDevices();
devices = devices.filter(device => device.kind === 'audiooutput');
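For illustration, picking the sinkId of one stereo pair by its label could be wrapped in a small helper. This is a sketch; `findSinkId` is a hypothetical name, and the exact label strings ("Focusrite 3 + 4", etc.) are assumptions based on the device names above and may differ per driver and OS:

```javascript
// Hypothetical helper: return the deviceId (usable as a sinkId) of the
// first audio output whose label contains the given fragment.
// NOTE: labels are only populated once the page has media permissions.
function findSinkId(devices, labelFragment) {
  const match = devices.find(
    device =>
      device.kind === 'audiooutput' &&
      device.label.includes(labelFragment)
  );
  return match ? match.deviceId : null;
}
```

It could then be used as `findSinkId(devices, 'Focusrite 3 + 4')` on the filtered list above.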
The following works for making the audio output to a specific sinkId:
audio.setSinkId(sinkId).then(() => {
  audio.play();
});
The following also works on its own, playing only the left channel:
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
panner.pan.value = -1;
source.connect(panner);
panner.connect(audioContext.destination);
So far everything is fine. But when I try to combine these, the sinkId is ignored, and the audio is being sent to the default audio output. I have tried several approaches, including this one:
audio.setSinkId(sinkId).then(() => {
  let audioContext = new AudioContext();
  let source = audioContext.createMediaElementSource(audio);
  let panner = audioContext.createStereoPanner();
  panner.pan.value = -1;
  source.connect(panner);
  panner.connect(audioContext.destination);
});
I have also tried an approach using `audioContext.createChannelMerger` instead of the `StereoPanner`. That also works perfectly on its own, but not combined with `setSinkId`. I get the same behavior on Windows 10 and macOS.
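For reference, a sketch of that ChannelMerger variant (wrapped in a hypothetical `connectLeftOnly` helper; the source is only connected to the merger's left input):

```javascript
// Sketch: route a source into only the left channel of the destination
// by connecting it to input 0 of a two-input ChannelMergerNode.
function connectLeftOnly(audioContext, source) {
  const merger = audioContext.createChannelMerger(2);
  source.connect(merger, 0, 0); // source output 0 -> merger input 0 (left)
  merger.connect(audioContext.destination);
  return merger;
}
```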
Any ideas?
To route the output of an `AudioContext` to a specific output device, you need to use a `MediaStreamAudioDestinationNode` in combination with another audio element. The `sinkId` of your existing audio element doesn't have any effect anymore once that element gets routed into the `AudioContext`.

The desired signal flow would look roughly like this:

audio element → MediaElementAudioSourceNode → StereoPannerNode → MediaStreamAudioDestinationNode → new audio element (with the sinkId)

Your `AudioContext` code would then need to be changed to route everything to a `MediaStreamAudioDestinationNode` instead of the default destination. The last part is to route the `stream` of the `MediaStreamAudioDestinationNode` to a newly created audio element, which can then be used to set the `sinkId`.

Please note that using an audio element just for setting the `sinkId` is a bit of a hack. There are plans to make setting the `sinkId` a feature of the `AudioContext` itself: https://github.com/WebAudio/web-audio-api-v2/issues/10
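Putting the pieces together, a minimal sketch could look like the following. It reuses the `StereoPanner` setup from the question; `playOnDevice` is a hypothetical helper name, and the call must happen in response to a user gesture so the `AudioContext` is allowed to start:

```javascript
// Sketch: pan hard left, route the graph into a
// MediaStreamAudioDestinationNode, and play that node's stream through
// a second audio element which carries the sinkId.
async function playOnDevice(url, sinkId) {
  const audio = new Audio(url);
  const audioContext = new AudioContext();

  const source = audioContext.createMediaElementSource(audio);
  const panner = audioContext.createStereoPanner();
  panner.pan.value = -1; // left channel only

  // Route to a MediaStreamAudioDestinationNode instead of
  // audioContext.destination, so the default output is bypassed.
  const destination = audioContext.createMediaStreamDestination();
  source.connect(panner);
  panner.connect(destination);

  // The second audio element is only there to carry the sinkId.
  const output = new Audio();
  output.srcObject = destination.stream;
  await output.setSinkId(sinkId);

  await audio.play();
  await output.play();
}
```

Calling `playOnDevice('https://test.com/test.mp3', sinkId)` with one of the enumerated Focusrite sinkIds should then play the left channel on that pair.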