From my understanding, Audio Queue Services is a higher-level API that is built on top of Audio Units. OpenAL also uses Audio Units. So Audio Units is the lowest-level audio API on iOS.
I need to record a sound, apply a custom filter to the audio stream, and play it back. Audio Queue Services seem well suited for recording, and they can also be used for playback. But I'm still uncertain whether they let you apply your own algorithms to the audio stream, the way the AURemoteIO audio unit would.
From my personal - and sometimes painful - experience, I'd say use AudioQueue for streaming-type applications. For anything else, use AudioUnit. The latter may be lower level, but I didn't see much difference in complexity.
To be honest, AudioUnit seemed a lot more straightforward to work with.
Theoretically, with AudioUnit you should be able to connect other audio units as plug-ins to apply effects. However, until iOS 5, AURemoteIO was the only AudioUnit available; apparently iOS 5 adds more, but I haven't had a chance to check yet.
If you're doing it manually by running an algorithm against the buffers yourself, you should be able to find quite a lot of open-source DSP code. There are also commercial options; one really good library is the Dirac DSP library for pitch shifting and time stretching.
Here's a great tutorial on using AURemoteIO in the answer to this other question:
Stopping and Quickly Replaying an AudioQueue