Save audio recorded with .NET Core C# on a Raspberry Pi to a WAV file


I am finding it very difficult to find a way to store the audio captured using OpenTK.NetStandard into a proper .WAV file in .NET Core C#.

What I am looking for is a solution that will work when running on a Raspberry Pi, so NAudio or any Windows-specific method won't solve my problem.

I found a couple of other SO answers which show how to capture audio using OpenTK, but nothing about how to store it in a WAV file.

This is an extract of the code that should read data from the microphone, taken from another SO question; as far as I can see, the AudioCapture class is the one to use:

  const byte SampleToByte = 2;
  short[] _buffer = new short[512];
  int _sampling_rate = 16000;
  double _buffer_length_ms = 5000;
  var _recorders = AudioCapture.AvailableDevices;
  int buffer_length_samples = (int)(_buffer_length_ms * _sampling_rate * 0.001 / BlittableValueType.StrideOf(_buffer));

  using (var audioCapture = new AudioCapture(_recorders.First(), _sampling_rate, ALFormat.Mono16, buffer_length_samples))
  {
      audioCapture.Start();
      int available_samples = audioCapture.AvailableSamples;

      _buffer = new short[MathHelper.NextPowerOfTwo((int)(available_samples * SampleToByte / (double)BlittableValueType.StrideOf(_buffer) + 0.5))];

      if (available_samples > 0)
      {
          audioCapture.ReadSamples(_buffer, available_samples);

          // src is an OpenAL source handle created elsewhere (e.g. AL.GenSource())
          int buf = AL.GenBuffer();
          AL.BufferData(buf, ALFormat.Mono16, _buffer, (int)(available_samples * BlittableValueType.StrideOf(_buffer)), audioCapture.SampleFrequency);
          AL.SourceQueueBuffer(src, buf);

          // TODO: I assume this is where the save to WAV file logic should be placed...
      }

  }

Any help would be appreciated!

1 Answer

Andy (best answer):

Here is a .NET Core console program that writes a WAV file from Mono 16-bit data. There are links in the source code that you should read through to understand the values being written.

This will record 10 seconds of data and save it to a file in WAV format:

using OpenTK.Audio;
using OpenTK.Audio.OpenAL;
using System;
using System.IO;
using System.Threading;

class Program
{
    static void Main(string[] args)
    {
        var recorders = AudioCapture.AvailableDevices;
        for (int i = 0; i < recorders.Count; i++)
        {
            Console.WriteLine(recorders[i]);
        }
        Console.WriteLine("-----");

        const int samplingRate = 44100;     // Samples per second

        const ALFormat alFormat = ALFormat.Mono16;
        const ushort bitsPerSample = 16;    // Mono16 has 16 bits per sample
        const ushort numChannels = 1;       // Mono16 has 1 channel

        using (var f = File.OpenWrite("out.wav")) // relative path so it also runs on the Raspberry Pi
        using (var sw = new BinaryWriter(f))
        {
            // Read This: http://soundfile.sapp.org/doc/WaveFormat/

            sw.Write(new char[] { 'R', 'I', 'F', 'F' });
            sw.Write(0); // will fill in later
            sw.Write(new char[] { 'W', 'A', 'V', 'E' });
            // "fmt " chunk (Google: WAVEFORMATEX structure)
            sw.Write(new char[] { 'f', 'm', 't', ' ' });
            sw.Write(16); // chunkSize (in bytes)
            sw.Write((ushort)1); // wFormatTag (PCM = 1)
            sw.Write(numChannels); // wChannels
            sw.Write(samplingRate); // dwSamplesPerSec
            sw.Write(samplingRate * numChannels * (bitsPerSample / 8)); // dwAvgBytesPerSec
            sw.Write((ushort)(numChannels * (bitsPerSample / 8))); // wBlockAlign
            sw.Write(bitsPerSample); // wBitsPerSample
            // "data" chunk
            sw.Write(new char[] { 'd', 'a', 't', 'a' });
            sw.Write(0); // will fill in later

            // 10 seconds of data. overblown, but it gets the job done
            const int bufferLength = samplingRate * 10;
            int samplesWrote = 0;

            Console.WriteLine($"Recording from: {recorders[0]}");

            using (var audioCapture = new AudioCapture(
                recorders[0], samplingRate, alFormat, bufferLength))
            {
                var buffer = new short[bufferLength];

                audioCapture.Start();
                for (int i = 0; i < 10; ++i)
                {
                    Thread.Sleep(1000); // give it some time to collect samples

                    var samplesAvailable = audioCapture.AvailableSamples;
                    audioCapture.ReadSamples(buffer, samplesAvailable);
                    for (var x = 0; x < samplesAvailable; ++x)
                    {
                        sw.Write(buffer[x]);
                    }

                    samplesWrote += samplesAvailable;

                    Console.WriteLine($"Wrote {samplesAvailable}/{samplesWrote} samples...");
                }
                audioCapture.Stop();
            }

            sw.Seek(4, SeekOrigin.Begin); // seek to overall size
            sw.Write(36 + samplesWrote * (bitsPerSample / 8) * numChannels);
            sw.Seek(40, SeekOrigin.Begin); // seek to data size position
            sw.Write(samplesWrote * (bitsPerSample / 8) * numChannels);
        }
    }
}
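
A possible refinement (not part of the original answer): the header writing and size patching can be pulled out into a small helper so the bookkeeping lives in one place. This is a minimal sketch assuming the same canonical 44-byte PCM header used above; the WavFile, WriteWavHeader, and PatchWavSizes names are illustrative, not from any library.

using System.IO;

static class WavFile
{
    // Writes a 44-byte canonical PCM WAV header with placeholder size fields.
    public static void WriteWavHeader(BinaryWriter sw, int samplingRate,
                                      ushort numChannels, ushort bitsPerSample)
    {
        sw.Write(new char[] { 'R', 'I', 'F', 'F' });
        sw.Write(0);                                                // RIFF chunk size, patched later
        sw.Write(new char[] { 'W', 'A', 'V', 'E' });
        sw.Write(new char[] { 'f', 'm', 't', ' ' });
        sw.Write(16);                                               // fmt chunk size in bytes
        sw.Write((ushort)1);                                        // wFormatTag (PCM = 1)
        sw.Write(numChannels);                                      // wChannels
        sw.Write(samplingRate);                                     // dwSamplesPerSec
        sw.Write(samplingRate * numChannels * (bitsPerSample / 8)); // dwAvgBytesPerSec
        sw.Write((ushort)(numChannels * (bitsPerSample / 8)));      // wBlockAlign
        sw.Write(bitsPerSample);                                    // wBitsPerSample
        sw.Write(new char[] { 'd', 'a', 't', 'a' });
        sw.Write(0);                                                // data chunk size, patched later
    }

    // Patches the two placeholder size fields once the final sample count is known.
    public static void PatchWavSizes(BinaryWriter sw, int samplesWritten,
                                     ushort numChannels, ushort bitsPerSample)
    {
        int dataBytes = samplesWritten * numChannels * (bitsPerSample / 8);
        sw.Seek(4, SeekOrigin.Begin);
        sw.Write(36 + dataBytes); // RIFF chunk size = 36 + data bytes
        sw.Seek(40, SeekOrigin.Begin);
        sw.Write(dataBytes);      // data chunk size
    }
}

With a helper like this, the recording loop only needs to write the raw samples and call PatchWavSizes with the final count before closing the stream.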