I need to extract the PCM audio samples from a .wav file (or any other format) in iOS. I would also like to get the same data from a live recording using the microphone.
Can this be done using AVFoundation, or do I need to use the lower-level CoreAudio APIs? An example in Swift would be much appreciated. I'm just looking for a basic array of Floats corresponding to individual audio samples to use for signal processing.
AVFoundation includes a class called `AVAssetReader` that can be used to obtain the audio data from a sound file:

https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVAssetReader_Class/index.html#//apple_ref/occ/instm/AVAssetReader/addOutput:

However, the most straightforward way is probably to use Extended Audio File Services and the `ExtAudioFileRead` function:

https://developer.apple.com/library/prerelease/ios/documentation/MusicAudio/Reference/ExtendedAudioFileServicesReference/index.html#//apple_ref/c/func/ExtAudioFileRead

Extended Audio File Services is a C API, so you'll have to deal with calling it from Swift.
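To illustrate what that C interop looks like, here is a minimal, untested sketch of reading a whole file into a `[Float]` with `ExtAudioFileRead`. The function name `readAudioSamples` is made up for the example; it asks the API for an interleaved 32-bit float client format at the file's own sample rate and channel count, so for a stereo file the returned array contains interleaved L/R samples:

```swift
import AudioToolbox
import Foundation

/// Reads an audio file into an interleaved [Float] of PCM samples.
/// Minimal sketch: error handling is reduced to returning nil.
func readAudioSamples(from url: URL) -> [Float]? {
    var fileRef: ExtAudioFileRef?
    guard ExtAudioFileOpenURL(url as CFURL, &fileRef) == noErr,
          let file = fileRef else { return nil }
    defer { ExtAudioFileDispose(file) }

    // Query the file's native format so we can keep its rate/channel count.
    var fileFormat = AudioStreamBasicDescription()
    var propSize = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
    guard ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileDataFormat,
                                  &propSize, &fileFormat) == noErr else { return nil }

    // Client format: interleaved 32-bit float PCM. ExtAudioFile converts
    // from the file's format (e.g. 16-bit ints in a .wav) for us.
    let channels = fileFormat.mChannelsPerFrame
    let bytesPerFrame = 4 * channels
    var clientFormat = AudioStreamBasicDescription(
        mSampleRate: fileFormat.mSampleRate,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked,
        mBytesPerPacket: bytesPerFrame,
        mFramesPerPacket: 1,
        mBytesPerFrame: bytesPerFrame,
        mChannelsPerFrame: channels,
        mBitsPerChannel: 32,
        mReserved: 0)
    guard ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                                  UInt32(MemoryLayout<AudioStreamBasicDescription>.size),
                                  &clientFormat) == noErr else { return nil }

    // Pull samples in chunks until the read returns zero frames.
    var samples = [Float]()
    let framesPerRead: UInt32 = 4096
    var buffer = [Float](repeating: 0, count: Int(framesPerRead * channels))
    while true {
        var frameCount = framesPerRead
        let status = buffer.withUnsafeMutableBytes { raw -> OSStatus in
            var bufferList = AudioBufferList(
                mNumberBuffers: 1,
                mBuffers: AudioBuffer(mNumberChannels: channels,
                                      mDataByteSize: UInt32(raw.count),
                                      mData: raw.baseAddress))
            return ExtAudioFileRead(file, &frameCount, &bufferList)
        }
        guard status == noErr, frameCount > 0 else { break }
        samples.append(contentsOf: buffer.prefix(Int(frameCount * channels)))
    }
    return samples
}
```

For the live-microphone half of the question, `AVAudioEngine` (iOS 8+) is worth a look: installing a tap on its `inputNode` hands you `AVAudioPCMBuffer` objects whose `floatChannelData` already exposes the float samples, with no C API involved.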