How to create AudioBuffer/Audio from NSData


I am a beginner with streaming applications. I created NSData from an AudioBuffer and I am sending that NSData to the client (receiver), but I don't know how to convert the NSData back into an AudioBuffer.

I am using the following code to convert an AudioBuffer to NSData (this part is working):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // Describe the captured audio: 16-bit signed-integer mono PCM at 8 kHz.
    // (The original set kAudioFormatiLBC together with linear-PCM flags,
    // which is contradictory; linear PCM matches the raw bytes copied below.)
    // Note that this ASBD documents the stream for the receiver; it is not
    // used by the copy loop itself.
    AudioStreamBasicDescription audioFormat;
    memset(&audioFormat, 0, sizeof(audioFormat));
    audioFormat.mSampleRate       = 8000.0;
    audioFormat.mFormatID         = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    audioFormat.mFramesPerPacket  = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel   = 16;
    audioFormat.mBytesPerFrame    = audioFormat.mChannelsPerFrame * sizeof(SInt16);
    audioFormat.mBytesPerPacket   = audioFormat.mBytesPerFrame;
    audioFormat.mReserved         = 0;

    AudioBufferList audioBufferList;
    NSMutableData *data = [[NSMutableData alloc] init];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    // Append the raw sample bytes of every buffer to the NSData payload.
    for (int y = 0; y < audioBufferList.mNumberBuffers; y++)
    {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        [data appendBytes:audioBuffer.mData length:audioBuffer.mDataByteSize];
    }

    CFRelease(blockBuffer); // release the block buffer retained by the call above
}

If this is not the proper way, please point me in the right direction. Thanks.


There are 4 answers

Answer by Sabir Ali

This is how I did it, in case anyone else is caught by the same issue. You don't need to pull the data out of the AudioBufferList; you can use the list as it is. To re-create the AudioBufferList from the NSData I also need the number of samples, so I append that just before the actual data.

Here's how to get data out of CMSampleBufferRef:

AudioBufferList audioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);

// Prefix the serialized AudioBufferList struct with the sample count (4 bytes).
// Keep blockBuffer retained for as long as this data is in use, since the
// struct's mData field points into it.
NSUInteger size = sizeof(audioBufferList);
char buffer[size + 4];
((int *)buffer)[0] = (int)numSamples;
memcpy(buffer + 4, &audioBufferList, size);
// This is the audio data.
NSData *bufferData = [NSData dataWithBytes:buffer length:size + 4];

This is how you create the CMSampleBufferRef from that data:

const char *buffer = (const char *)[bufferData bytes];

CMSampleBufferRef sampleBuffer = NULL;
OSStatus status = -1;

/* Format Description */
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate       = 44100.0;
audioFormat.mFormatID         = kAudioFormatLinearPCM;
// 0xC is kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked
audioFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mBytesPerPacket   = 2;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mBytesPerFrame    = 2;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel   = 16;
audioFormat.mReserved         = 0;

CMFormatDescriptionRef format = NULL;
status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, nil, 0, nil, nil, &format);

if (status != noErr)
{
    NSLog(@"Error in CMAudioFormatDescriptionCreate");
    return;
}

/* Create sample Buffer */
CMSampleTimingInfo timing   = {.duration= CMTimeMake(1, 44100), .presentationTimeStamp= kCMTimeZero, .decodeTimeStamp= kCMTimeInvalid};
CMItemCount framesCount     = ((int*)buffer)[0];

status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, NO, NULL, NULL, format, framesCount, 1, &timing, 0, NULL, &sampleBuffer);

if( status != noErr)
{
    NSLog(@"Error in CMSampleBufferCreate");
    return;
}

/* Copy BufferList to Sample Buffer */
AudioBufferList receivedAudioBufferList;
memcpy(&receivedAudioBufferList, buffer + 4, sizeof(receivedAudioBufferList));

status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, &receivedAudioBufferList);
if (status != noErr) {
    NSLog(@"Error in CMSampleBufferSetDataBufferFromAudioBufferList");
    return;
}
//Use your sampleBuffer.
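
One caveat: sizeof(audioBufferList) copies the struct itself, and its mBuffers[0].mData field is a pointer into the retained block buffer, so this payload is only meaningful inside the same process. For sending to another device, a minimal sketch (my own variation, not part of the original flow; assumes CoreMedia is imported) that copies the actual PCM bytes plus the sample count instead:

// Hypothetical helper: serialize the sample count followed by the raw PCM
// bytes, so the payload is self-contained and safe to send across processes.
static NSData *PacketFromSampleBuffer(CMSampleBufferRef sampleBuffer)
{
    AudioBufferList audioBufferList;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    int numSamples = (int)CMSampleBufferGetNumSamples(sampleBuffer);
    NSMutableData *packet = [NSMutableData dataWithBytes:&numSamples length:sizeof(numSamples)];
    for (UInt32 i = 0; i < audioBufferList.mNumberBuffers; i++) {
        [packet appendBytes:audioBufferList.mBuffers[i].mData
                     length:audioBufferList.mBuffers[i].mDataByteSize];
    }
    CFRelease(blockBuffer); // safe here: the bytes were copied into the packet
    return packet;
}

On the receiving end, the bytes after the 4-byte count can be handed to something like the getBufferListFromData: method in dynebuddha's answer below.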

Let me know if you have any questions.

Answer by Ankush

This is the code I used to convert audio data (an audio file) into a floating-point representation saved in an array. First I read the audio data into an AudioBufferList, then extract the float value of each sample. Check the code below; I hope it helps:

- (void)PrintFloatDataFromAudioFile {

    NSString *name = @"Filename"; // your file name
    NSString *source = [[NSBundle mainBundle] pathForResource:name ofType:@"m4a"]; // specify your file format

    const char *cString = [source cStringUsingEncoding:NSASCIIStringEncoding];

    CFStringRef str = CFStringCreateWithCString(NULL, cString, kCFStringEncodingMacRoman);
    CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, str, kCFURLPOSIXPathStyle, false);

    ExtAudioFileRef fileRef;
    ExtAudioFileOpenURL(inputFileURL, &fileRef);

    // Client format: mono 32-bit float PCM.
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 44100; // give your sampling rate
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat;
    audioFormat.mBitsPerChannel = sizeof(Float32) * 8;
    audioFormat.mChannelsPerFrame = 1; // mono
    audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame * sizeof(Float32); // == sizeof(Float32)
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mBytesPerPacket = audioFormat.mFramesPerPacket * audioFormat.mBytesPerFrame; // = sizeof(Float32)

    // Apply the client format to the Extended Audio File, so reads are
    // converted to float PCM regardless of the file's native format.
    ExtAudioFileSetProperty(fileRef,
                            kExtAudioFileProperty_ClientDataFormat,
                            sizeof(AudioStreamBasicDescription), // = audioFormat
                            &audioFormat);

    int numSamples = 1024; // how many samples to read in at a time
    UInt32 sizePerPacket = audioFormat.mBytesPerPacket; // = sizeof(Float32) = 4 bytes
    UInt32 packetsPerBuffer = numSamples;
    UInt32 outputBufferSize = packetsPerBuffer * sizePerPacket;

    // Reserve space for one read's worth of samples. (The original allocated
    // sizeof(UInt8 *) * outputBufferSize, over-allocating by the pointer size.)
    UInt8 *outputBuffer = (UInt8 *)malloc(sizeof(UInt8) * outputBufferSize);

    AudioBufferList convertedData;
    convertedData.mNumberBuffers = 1; // set this to 1 for mono
    convertedData.mBuffers[0].mNumberChannels = audioFormat.mChannelsPerFrame; // also = 1
    convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
    convertedData.mBuffers[0].mData = outputBuffer;

    UInt32 frameCount = numSamples;
    float *samplesAsCArray;
    int j = 0;
    // Specify your data limit; mine was 882000. It should be equal to or more
    // than the total sample count. (Made static here: ~7 MB would overflow
    // the stack as a local variable.)
    static double floatDataArray[882000];

    while (frameCount > 0) {
        ExtAudioFileRead(fileRef,
                         &frameCount,
                         &convertedData);
        if (frameCount > 0) {
            AudioBuffer audioBuffer = convertedData.mBuffers[0];
            samplesAsCArray = (float *)audioBuffer.mData; // cast mData to float

            // Use frameCount rather than a fixed 1024: the last read may be short.
            for (int i = 0; i < frameCount; i++) {
                floatDataArray[j] = (double)samplesAsCArray[i]; // put the sample into the array
                printf("\n%f", floatDataArray[j]); // samples range from -1 to +1
                j++;
            }
        }
    }

    // Clean up.
    free(outputBuffer);
    ExtAudioFileDispose(fileRef);
    CFRelease(inputFileURL);
    CFRelease(str);
}
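
If the goal is to stream the extracted samples (tying back to the question), a minimal sketch (my addition, not part of the answer; assumes the floatDataArray and count j produced above) that packs them into NSData as Float32:

// Hypothetical helper: pack the first `count` samples into NSData,
// narrowing double -> Float32 to halve the payload size.
static NSData *DataFromFloatSamples(const double *samples, int count)
{
    NSMutableData *data = [NSMutableData dataWithCapacity:count * sizeof(Float32)];
    for (int i = 0; i < count; i++) {
        Float32 sample = (Float32)samples[i];
        [data appendBytes:&sample length:sizeof(sample)];
    }
    return data;
}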

Answer by dynebuddha

I used the following code snippet to convert NSData (in my case an 800-byte packet, but arguably it could be any size) to an AudioBufferList:

- (AudioBufferList *)getBufferListFromData:(NSData *)data
{
    if (data.length > 0)
    {
        NSUInteger len = [data length];
        // I guess you can use Byte *, void *, or Float32 *; I am not sure it
        // makes any difference.
        Byte *byteData = (Byte *)malloc(len);
        memcpy(byteData, [data bytes], len);
        if (byteData)
        {
            AudioBufferList *theDataBuffer = (AudioBufferList *)malloc(sizeof(AudioBufferList) * 1);
            theDataBuffer->mNumberBuffers = 1;
            theDataBuffer->mBuffers[0].mDataByteSize = (UInt32)len;
            theDataBuffer->mBuffers[0].mNumberChannels = 1;
            theDataBuffer->mBuffers[0].mData = byteData;
            // Read the data into an AudioBufferList.
            // The caller is responsible for freeing both the list and mData.
            return theDataBuffer;
        }
    }
    return NULL;
}
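
Since both the list and its byte buffer are malloc'd, a hypothetical companion (my addition, not part of the original answer) to release them might look like:

// Hypothetical cleanup for buffers returned by getBufferListFromData:.
// Frees the sample bytes first, then the list itself.
void freeBufferList(AudioBufferList *bufferList)
{
    if (bufferList == NULL) return;
    for (UInt32 i = 0; i < bufferList->mNumberBuffers; i++) {
        free(bufferList->mBuffers[i].mData);
    }
    free(bufferList);
}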

Answer by Gami Nilesh

You can create NSData from the CMSampleBufferRef using the following code. Note, though, that AVAudioPlayer expects data in a supported container format (e.g. WAV or CAF); raw headerless PCM bytes will not decode on their own, so the data needs a header first (see the sketch after the code).

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

    AudioBufferList audioBufferList;
    NSMutableData *data = [NSMutableData data];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    // Append the raw sample bytes of every buffer.
    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        [data appendBytes:audioBuffer.mData length:audioBuffer.mDataByteSize];
    }

    CFRelease(blockBuffer);
    // (The original also called CFRelease(ref) on an undefined variable;
    // blockBuffer is the only CF object that needs releasing here.)

    NSError *error = nil;
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithData:data error:&error];
    [player play];
}
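
As noted above, AVAudioPlayer cannot decode raw headerless PCM. A minimal sketch (my addition; assumes 16-bit signed mono little-endian PCM and a sample rate you supply) that wraps the bytes in a 44-byte WAV header first:

// Hypothetical helper: prepend a minimal WAV header to raw 16-bit mono
// little-endian PCM so AVAudioPlayer can decode it. All header fields are
// little-endian, which matches iOS byte order, so direct appends work.
static NSData *WAVDataFromPCM(NSData *pcmData, UInt32 sampleRate)
{
    const UInt16 channels = 1;
    const UInt16 bitsPerSample = 16;
    const UInt32 dataSize = (UInt32)pcmData.length;
    const UInt32 byteRate = sampleRate * channels * bitsPerSample / 8;
    const UInt16 blockAlign = channels * bitsPerSample / 8;
    const UInt32 fmtSize = 16;
    const UInt16 formatPCM = 1;           // 1 = uncompressed linear PCM
    const UInt32 riffSize = 36 + dataSize;

    NSMutableData *wav = [NSMutableData dataWithCapacity:44 + dataSize];
    [wav appendBytes:"RIFF" length:4];
    [wav appendBytes:&riffSize length:4];
    [wav appendBytes:"WAVE" length:4];
    [wav appendBytes:"fmt " length:4];
    [wav appendBytes:&fmtSize length:4];
    [wav appendBytes:&formatPCM length:2];
    [wav appendBytes:&channels length:2];
    [wav appendBytes:&sampleRate length:4];
    [wav appendBytes:&byteRate length:4];
    [wav appendBytes:&blockAlign length:2];
    [wav appendBytes:&bitsPerSample length:2];
    [wav appendBytes:"data" length:4];
    [wav appendBytes:&dataSize length:4];
    [wav appendData:pcmData];
    return wav;
}

With that, [[AVAudioPlayer alloc] initWithData:WAVDataFromPCM(data, 8000) error:&error] should decode, assuming the sample rate passed in matches the capture format (8 kHz in the question's setup).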