I'm very new to OpenSL ES. I'm currently experimenting with the recording and playback features of OpenSL ES for Android. Right now I have a recording function that stores data in a buffer queue, and I can then play the buffer queue back. Could anyone explain how I can correctly manipulate the data in the buffer queue so that the playback sounds different from the recording?
My current configuration:
sampleFormat.pcmFormat_ = static_cast<uint16_t>(engine.bitsPerSample_);
//the buffer
uint8_t *buf_;
Is there any type of conversion or decoding I need to do to the data in the buffer before manipulating it?
I would really appreciate some help.
Your question is broad. What I can do is explain how OpenSL ES is supposed to be used and how you could manipulate the audio data you obtain from recording.
1) Once you have set up your OpenSL ES engine, recorder and player properly (there are many examples out there), you have given OpenSL ES a buffer to fill with PCM data from the mic, a buffer to read from for the playback sink, and two callback functions that are called upon completion. When the process of filling the record buffer has finished (after some time determined by your settings, such as sample rate and buffer size), the record callback is called from a thread created by OpenSL ES. Depending on the device and configuration this may be a high-priority thread, usually called a fast track, so inside the callback you are not on your own thread but on OpenSL ES' thread, and you must be careful not to do blocking operations there. If you want to play the audio back as quickly as possible, do your signal processing inside the callback; if response time is not critical, you can use the callback as a signal for your own thread to start processing the audio data in the buffer as you wish. In both cases, to play the audio back you must enqueue the data (processed or unprocessed) on the playback buffer queue (the player also calls its callback when it finishes with a buffer).
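For concreteness, here is a minimal sketch of that callback flow. It is not your code: the names recorderBufferQueue_, playerBufferQueue_, recBuf_, playBuf_ and BUF_SIZE_BYTES are assumptions, but the callback signature and Enqueue calls are the standard Android simple-buffer-queue API.

#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>
#include <cstdint>
#include <cstring>

static SLAndroidSimpleBufferQueueItf recorderBufferQueue_;  // obtained via GetInterface()
static SLAndroidSimpleBufferQueueItf playerBufferQueue_;    // obtained via GetInterface()

constexpr size_t BUF_SIZE_BYTES = 4096;        // assumed buffer size
static uint8_t recBuf_[BUF_SIZE_BYTES];        // filled by the recorder
static uint8_t playBuf_[BUF_SIZE_BYTES];       // handed to the player

// Called on OpenSL ES' own thread when recBuf_ has been filled with PCM data.
// Keep the work here short and non-blocking.
void RecordCallback(SLAndroidSimpleBufferQueueItf bq, void * /*context*/) {
    // Copy the freshly recorded data so the recorder can reuse its buffer.
    std::memcpy(playBuf_, recBuf_, BUF_SIZE_BYTES);

    // ... process playBuf_ here if low latency matters,
    //     or signal a worker thread instead ...

    // Hand the (processed or unprocessed) data to the player,
    // then give the recorder its buffer back for the next chunk.
    (*playerBufferQueue_)->Enqueue(playerBufferQueue_, playBuf_, BUF_SIZE_BYTES);
    (*recorderBufferQueue_)->Enqueue(recorderBufferQueue_, recBuf_, BUF_SIZE_BYTES);
}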
2) Now, if you want to process the audio, you need to apply filters. There are many kinds of audio signal filters; for real-time playback you should look for dynamic filters (some filters need a lot of data before they can start processing, which is bad for real time, while others are optimized to work on small chunks and adapt their output dynamically). You would then chain filters in a certain order to obtain the sound you want. The audio world is huge, and you need to read quite a lot to start understanding audio processing. Audio performance is another matter and depends directly on the device you have (hardware and software).
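As a small example of a filter that works on small chunks, here is a sketch of a one-pole low-pass applied to 16-bit PCM. Since your buffer is a uint8_t * and your format appears to be 16-bit PCM, you would first view it as int16_t samples (e.g. reinterpret_cast<int16_t *>(buf_)). The coefficient kAlpha is just an assumed example value.

#include <cstdint>
#include <cstddef>

static float lpState_ = 0.0f;      // filter state carried across chunks
constexpr float kAlpha = 0.15f;    // 0..1, lower = darker sound (assumed value)

// Process one chunk in place; call it again on the next chunk to keep continuity.
void LowPassChunk(int16_t *samples, size_t count) {
    for (size_t i = 0; i < count; ++i) {
        // y[n] = y[n-1] + alpha * (x[n] - y[n-1])
        lpState_ += kAlpha * (static_cast<float>(samples[i]) - lpState_);
        samples[i] = static_cast<int16_t>(lpState_);
    }
}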
3) How you manipulate the data in the buffer depends on your processor, for instance its endianness: some processors work little-endian and some big-endian, so you need to interpret the bytes in the buffer with the byte order your recorder was configured for. There is no compression, so the PCM data is ready for processing. (If you would like to create a WAV from it, you only need to add a WAVE header and put the PCM data in the header's data chunk; if you want another format such as MP3, you also need to run your data through the compression algorithm for that format and add the result to the proper header.)
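Here is a sketch of that WAV recipe, assuming a little-endian device (which Android targets are) and example parameters of 44.1 kHz, mono, 16-bit that you would replace with your recorder settings.

#include <cstdint>
#include <cstdio>

// Write a 44-byte RIFF/WAVE header followed by the raw PCM as the data chunk.
// fwrite of the integer fields relies on a little-endian host, which matches WAV.
void WriteWav(const char *path, const uint8_t *pcm, uint32_t pcmBytes,
              uint32_t sampleRate = 44100, uint16_t channels = 1, uint16_t bits = 16) {
    uint32_t byteRate   = sampleRate * channels * bits / 8;
    uint16_t blockAlign = static_cast<uint16_t>(channels * bits / 8);
    uint32_t dataSize   = pcmBytes;
    uint32_t riffSize   = 36 + dataSize;
    uint32_t fmtSize    = 16;
    uint16_t pcmTag     = 1;  // 1 = uncompressed PCM

    FILE *f = std::fopen(path, "wb");
    if (!f) return;
    std::fwrite("RIFF", 1, 4, f);     std::fwrite(&riffSize, 4, 1, f);
    std::fwrite("WAVEfmt ", 1, 8, f);
    std::fwrite(&fmtSize, 4, 1, f);   std::fwrite(&pcmTag, 2, 1, f);
    std::fwrite(&channels, 2, 1, f);  std::fwrite(&sampleRate, 4, 1, f);
    std::fwrite(&byteRate, 4, 1, f);  std::fwrite(&blockAlign, 2, 1, f);
    std::fwrite(&bits, 2, 1, f);
    std::fwrite("data", 1, 4, f);     std::fwrite(&dataSize, 4, 1, f);
    std::fwrite(pcm, 1, pcmBytes, f); // the recorded PCM is the data chunk
    std::fclose(f);
}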
Also, to play data back through an OpenSL ES buffer queue you need uncompressed audio data, so you can't enqueue an MP3 directly; you need to decode it into PCM data first.
This is the basic functioning of OpenSL ES; I hope that answers your question. If something is unclear, let me know.
PS: Android says audio manipulation is easier now with the new AAudio library, which promises to accomplish the same tasks as OpenSL ES with a third of its complexity (there may be some latency issues that some people have encountered, but I bet they are being fixed as you read this).