I am working with a multi-input soundcard and want to do live mixing of multiple inputs. All the inputs are stereo, so I first need to split them into single channels, mix a selection of those channels, and provide the result as a mono stream.
The goal is a mix like this: Channel1[left] + Channel3[right] + Channel4[right] -> mono stream.
I have already implemented a process chain like this:
1) WaveIn -> create a BufferedWaveProvider for each channel -> in wavein.DataAvailable, add only the samples belonging to the current channel to that channel's provider (buffwavprovider[channel].AddSamples(...)). This gives me a nice list of BufferedWaveProviders. The audio splitting part works correctly.
2) Select several BufferedWaveProviders and feed them into a MixingWaveProvider32, then create a WaveStream from it (using WaveMixerStream32 / IWaveProvider).
3) A MultiChannelToMonoStream takes that WaveStream and generates the mono mixdown. This also works (see the condensed sketch after this list).
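Condensed, the wiring of those three steps looks roughly like this (a sketch only; AudioSplitter and MultiChannelToMonoStream are the classes used in the code further below, waveIn and the name are placeholders):
var splitter = new AudioSplitter(waveIn, "card1");            // 1) one mono BufferedWaveProvider per channel
var mix = new MixingWaveProvider32(splitter.WaveProviders);   // 2) sum the mono providers (here: all of them)
var stream = new WaveProviderToWaveStream(mix);               //    wrap the mix as a WaveStream
var mono = new MultiChannelToMonoStream(stream);              // 3) final mono mixdown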
But the resulting audio is choppy, as if there is some trouble with the buffering.
Is this the right way to handle the problem, or is there a better solution?
edit - code added:
using System;
using System.Collections.Generic;
using NAudio.Wave;

public class AudioSplitter
{
    public List<NamedBufferedWaveProvider> WaveProviders { private set; get; }
    public string Name { private set; get; }

    private WaveIn _wavIn;
    private int bytes_per_sample = 4;

    /// <summary>
    /// Splits up one WaveIn into one BufferedWaveProvider for each channel
    /// </summary>
    /// <param name="wavein"></param>
    /// <param name="name"></param>
    public AudioSplitter(WaveIn wavein, string name)
    {
        if (wavein.WaveFormat.Encoding != WaveFormatEncoding.IeeeFloat)
            throw new Exception("Format must be IEEE float");

        WaveProviders = new List<NamedBufferedWaveProvider>(wavein.WaveFormat.Channels);
        Name = name;
        _wavIn = wavein;

        // one mono provider per input channel, same sample rate as the source
        var outFormat = NAudio.Wave.WaveFormat.CreateIeeeFloatWaveFormat(wavein.WaveFormat.SampleRate, 1);
        for (int i = 0; i < wavein.WaveFormat.Channels; i++)
        {
            WaveProviders.Add(new NamedBufferedWaveProvider(outFormat) { DiscardOnBufferOverflow = true, Name = Name + "_" + i });
        }
        bytes_per_sample = _wavIn.WaveFormat.BitsPerSample / 8;

        // wire the handler before starting to record so no samples are lost
        wavein.DataAvailable += Wavein_DataAvailable;
        _wavIn.StartRecording();
    }

    /// <summary>
    /// Distributes the interleaved samples to the per-channel BufferedWaveProviders
    /// </summary>
    /// <param name="sender"></param>
    /// <param name="e"></param>
    private void Wavein_DataAvailable(object sender, WaveInEventArgs e)
    {
        int channel = 0;
        byte[] buffer = e.Buffer;
        // samples are interleaved: ch0, ch1, ..., chN, ch0, ch1, ...
        // note the <= so the last sample of each buffer is not dropped
        for (int i = 0; i <= e.BytesRecorded - bytes_per_sample; i += bytes_per_sample)
        {
            byte[] channel_buffer = new byte[bytes_per_sample];
            for (int j = 0; j < bytes_per_sample; j++)
            {
                channel_buffer[j] = buffer[i + j];
            }
            WaveProviders[channel].AddSamples(channel_buffer, 0, channel_buffer.Length);

            channel++;
            if (channel >= _wavIn.WaveFormat.Channels)
                channel = 0;
        }
    }
}
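NamedBufferedWaveProvider is not shown in the post; a minimal sketch, assuming it is simply a BufferedWaveProvider carrying a name:
public class NamedBufferedWaveProvider : BufferedWaveProvider
{
    public string Name { get; set; }

    public NamedBufferedWaveProvider(WaveFormat waveFormat) : base(waveFormat) { }
}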
Using the AudioSplitter on each input gives a list of BufferedWaveProviders (mono, 32-bit IEEE float).
var mix = new MixingWaveProvider32(_waveProviders);
var wps = new WaveProviderToWaveStream(mix);
MultiChannelToMonoStream mms = new MultiChannelToMonoStream(wps);

new Thread(() =>
{
    byte[] buffer = new byte[4096];
    int read;
    // write only the bytes actually read; Read may return less than buffer.Length
    while (isrunning && (read = mms.Read(buffer, 0, buffer.Length)) > 0)
    {
        using (FileStream fs = new FileStream("C:\\temp\\audio\\mono_32.wav", FileMode.Append, FileAccess.Write))
        {
            fs.Write(buffer, 0, read);
        }
    }
}).Start();
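Appending raw sample bytes produces a headerless file. If the target is supposed to be a playable .wav, NAudio's WaveFileWriter could be used for the same loop (a sketch, assuming MultiChannelToMonoStream exposes its WaveFormat like any other WaveStream):
using (var writer = new WaveFileWriter("C:\\temp\\audio\\mono_32.wav", mms.WaveFormat))
{
    byte[] buffer = new byte[4096];
    int read;
    while (isrunning && (read = mms.Read(buffer, 0, buffer.Length)) > 0)
    {
        writer.Write(buffer, 0, read);   // writes the data and keeps the WAV header up to date
    }
}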
There is still some room for optimization, but this basically gets the job done: we provide one WaveProvider (WaveProviders[j]) per channel we want in the mix.
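For the example from the beginning (Channel1[left] + Channel3[right] + Channel4[right]) the selection could look like this; splitters is a hypothetical list holding one AudioSplitter per stereo input, with index 0 = left and 1 = right:
var _waveProviders = new List<IWaveProvider>
{
    splitters[0].WaveProviders[0], // Channel1, left
    splitters[2].WaveProviders[1], // Channel3, right
    splitters[3].WaveProviders[1]  // Channel4, right
};
var mix = new MixingWaveProvider32(_waveProviders);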