(iPhone) Live FFT from iPod


Okay guys, I've read a lot about FFT, but it seems to be a bit more complicated than building a table view.

I am looking for a way to analyze the currently playing audio (from the iPod library) in three frequency ranges (low, mid, high). I think an FFT would do the job, but I'm not sure whether filtering the playing audio (low-pass, band-pass and high-pass) and analyzing the peaks would work just as well. So if anyone knows the best way to do this (by best I mean fastest in terms of CPU), please help me. There will be no front end, so I won't draw the FFT in a window (I guess drawing would eat a lot of CPU).

Beyond that, I have no idea how to analyze the audio. All the FFT sample code I found uses the mic, and I do not want to use the mic. I saw an approach that takes the audio file and exports it to an uncompressed file, but I need live analysis. I've had a look at aurioTouch2, but I don't get how I could change the input from the mic to the iPod library. I think the part I'm looking for is here:

    // Initialize our remote i/o unit
    inputProc.inputProc = PerformThru;
    inputProc.inputProcRefCon = self;

    CFURLRef url = NULL;
    try {
        url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, CFStringRef([[NSBundle mainBundle] pathForResource:@"button_press" ofType:@"caf"]), kCFURLPOSIXPathStyle, false);
        XThrowIfError(AudioServicesCreateSystemSoundID(url, &buttonPressSound), "couldn't create button tap alert sound");
        CFRelease(url);

        // Initialize and configure the audio session
        XThrowIfError(AudioSessionInitialize(NULL, NULL, rioInterruptionListener, self), "couldn't initialize audio session");

        UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
        XThrowIfError(AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(audioCategory), &audioCategory), "couldn't set audio category");
        XThrowIfError(AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, propListener, self), "couldn't set property listener");

        Float32 preferredBufferSize = .005;
        XThrowIfError(AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(preferredBufferSize), &preferredBufferSize), "couldn't set i/o buffer duration");

        UInt32 size = sizeof(hwSampleRate);
        XThrowIfError(AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &hwSampleRate), "couldn't get hw sample rate");

        XThrowIfError(AudioSessionSetActive(true), "couldn't set audio session active\n");

        XThrowIfError(SetupRemoteIO(rioUnit, inputProc, thruFormat), "couldn't setup remote i/o unit");
        unitHasBeenCreated = true;

        drawFormat.SetAUCanonical(2, false);
        drawFormat.mSampleRate = 44100;
        (...)

But I'm quite new to Audio Units, so I can't figure out where the input is actually set up. Also, the code above uses the C-based Audio Session API (AudioSessionInitialize and friends); a little birdie told me this will be deprecated, so what is the alternative?
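For what it's worth, here is a minimal sketch (my own, not part of aurioTouch2, with a hypothetical helper name) of roughly the same session configuration done through AVAudioSession, which is the intended replacement for those C Audio Session calls:

    #import <AVFoundation/AVFoundation.h>

    // Hypothetical helper: configure the session for playback-only analysis.
    static void ConfigureAudioSession(void)
    {
        NSError *error = nil;
        AVAudioSession *session = [AVAudioSession sharedInstance];
        // Playback only; no microphone input is needed for analyzing library audio.
        [session setCategory:AVAudioSessionCategoryPlayback error:&error];
        // Roughly the same 5 ms preferred I/O buffer duration requested above.
        [session setPreferredIOBufferDuration:0.005 error:&error];
        [session setActive:YES error:&error];
    }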

So, basically:

  1. How can I get the currently playing audio in order to analyze it? Can I just use an MPMusicPlayerController and get the samples? Or do I have to build an entire Audio Unit chain that plays the library itself? (See the AVAssetReader sketch after this list.)

  2. What is the fastest way (CPU) to analyze lows, mids and highs? Filtering? FFT? Something else?

  3. Will I get in trouble with the copy protection of purchased music? I tried to convert the playing file to PCM samples, and sometimes I get this error:

    VTM_AViPodReader[7666:307] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVAssetReader initWithAsset:error:] invalid parameter not satisfying: asset != ((void *)0)'

  4. What is the "new" way to do an FFT if the whole AVAudioSession stuff won't work in the future?

1 Answer

Answered by hotpaw2 (accepted):

On iOS you can't get the currently playing audio (the security sandbox prevents this), unless your app is the one playing the audio using certain select APIs (Audio Queue, RemoteIO, etc.).

3 bandpass filters (made with IIR biquads) will be faster than an FFT. But even a full FFT will use a very small percentage of CPU time.
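As an illustration of the filter approach (my own sketch in plain C, not code from this answer), one band-pass biquad designed from the RBJ Audio EQ Cookbook could look like this; three instances centered on low, mid and high frequencies, each accumulating the squared output, give the three band levels:

    #include <math.h>

    typedef struct {
        float b0, b1, b2, a1, a2;   // normalized coefficients
        float x1, x2, y1, y2;       // filter state (previous inputs/outputs)
    } Biquad;

    // Band-pass design from the RBJ Audio EQ Cookbook (constant 0 dB peak gain).
    static Biquad BiquadBandpass(float centerHz, float q, float sampleRate)
    {
        float w0    = 2.0f * M_PI * centerHz / sampleRate;
        float alpha = sinf(w0) / (2.0f * q);
        float a0    = 1.0f + alpha;
        Biquad f = { alpha / a0, 0.0f, -alpha / a0,
                     -2.0f * cosf(w0) / a0, (1.0f - alpha) / a0,
                     0, 0, 0, 0 };
        return f;
    }

    // Filters one buffer and returns the band's mean-square energy over it.
    static float BiquadProcess(Biquad *f, const float *in, float *out, int n)
    {
        float energy = 0.0f;
        for (int i = 0; i < n; i++) {
            float x = in[i];
            float y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
                    - f->a1 * f->y1 - f->a2 * f->y2;
            f->x2 = f->x1; f->x1 = x;
            f->y2 = f->y1; f->y1 = y;
            out[i] = y;
            energy += y * y;
        }
        return energy / n;
    }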

An app can't convert or play protected music from the iTunes library in a form where samples can be captured.

The FFT is in the Accelerate framework, not in the audio session.
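For completeness, here is a sketch of a typical 1024-point real FFT with vDSP from the Accelerate framework (the frame size and the function name are my own choices, not something prescribed by the framework):

    #include <Accelerate/Accelerate.h>

    // Computes squared magnitudes of a 1024-sample buffer with vDSP (Accelerate).
    static void ComputeSpectrum(const float *input, float *magSquared /* 512 bins */)
    {
        const vDSP_Length log2n = 10;              // 2^10 = 1024 samples per frame
        const vDSP_Length n = 1 << log2n;

        static FFTSetup setup = NULL;              // reuse the setup between calls
        if (setup == NULL) setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

        float realp[512], imagp[512];
        DSPSplitComplex split = { realp, imagp };

        // Pack the real input into split-complex form and run the in-place real FFT.
        vDSP_ctoz((const DSPComplex *)input, 2, &split, 1, n / 2);
        vDSP_fft_zrip(setup, &split, 1, log2n, FFT_FORWARD);

        // Squared magnitude per bin (bin 0 packs DC and Nyquist together).
        vDSP_zvmags(&split, 1, magSquared, 1, n / 2);
        // Low/mid/high levels are then just sums over the corresponding bin ranges.
    }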