Question: Is there any way to use AudioTrack and setLoopPoints() to configure a loop with accuracy based on samples/frames per millisecond?
Edit: I understand that perfect accuracy can't be expected given the processing power most Android devices possess. However, I'd like to get an average loop time close to the tempo's "real" interval in milliseconds, because an "animation" is based on that same interval and should stay in sync with the tempo (the animation is a SurfaceView that redraws a line's coordinates over the duration of the tempo's interval).
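For reference, the animation timing is driven roughly like the sketch below (simplified, with placeholder names such as tempo, startTimeMs, and surfaceWidth; the real SurfaceView drawing code isn't shown): the line's position is interpolated from the elapsed time within the current beat interval.
// Simplified sketch of the animation timing (placeholder names, not the real code)
long intervalMs = 60000L / tempo;                              // one beat at the current tempo
long elapsedMs = SystemClock.uptimeMillis() - startTimeMs;     // time since the loop started
float phase = (elapsedMs % intervalMs) / (float) intervalMs;   // 0.0 .. 1.0 within the beat
float lineX = phase * surfaceWidth;                            // redraw the line at this x coordinate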
Details: I'm trying to use AudioTrack with setLoopPoints() to create an accurate metronome. To do this, I'm using two wav files (Tick and Tock) to fill a byte[] array to feed to AudioTrack. Consider an example in 4/4 time where I would fill a byte[] once with Tick beginning at [0] and three times with Tock (using System.arraycopy()) at [length/4], [length/2], and [3*length/4], and assume that the wav data will not overlap.
Rough example of what my code does:
// read each wav file's header into its own ByteBuffer (with LITTLE_ENDIAN order)
// ... then get the size of the audio data (offset 40 holds the data chunk size in a canonical 44-byte WAV header)
tickDataSize = tickHeaderBuffer.getInt(40);
tockDataSize = tockHeaderBuffer.getInt(40);
// allocate space for one loop (one 4/4 measure) at the current tempo into a byte[] array (to be given to AudioTrack)
// e.g. 22050 Hz * 2 bytes (1 per channel) * 2 bytes (1 per 8-bit PCM sample) = 22050*4 bytes/second
// At 60 BPM one beat lasts 1 second, so that's 88200 bytes per beat;
// at 110 BPM one beat lasts 0.54545 seconds, i.e. 48109.0909091 bytes per beat (and the 4-beat loop is 4x that)
int tempo = 110;
int bytesPerSecond = sampleRate * 2 * 2;
int bytesPerInterval = (int)((((float)bytesPerSecond * 60.0F)/(float)tempo) * 4); // 4 beats per loop
byte[] wavData = new byte[bytesPerInterval];
// ... then fill wavData[] as mentioned above with 1*Tick and 3*Tock
// Then feed to an instance of AudioTrack and set loop points
audioTrack.write(wavData, 0, bytesPerInterval);
int numSamples = bytesPerInterval/4; // setLoopPoints() takes frames; each frame here is 4 bytes (2 channels * 2 bytes)
audioTrack.setLoopPoints(0, numSamples, -1);
audioTrack.play();
Hopefully you've begun to see the problem. With certain tempos, I get only static playing in the loop (but only during the 1st and 3rd Tock, i.e. the 2nd and 4th sounds in the loop).
The static stops if I:
- Don't fill the byte[] with any wav data but keep the bytesPerInterval and numSamples the same (silent loop of correct duration).
- Round bytesPerInterval down to a multiple of 4, i.e. bytesPerInterval -= bytesPerInterval % 4 (thus losing tempo accuracy; sketched below)
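To be concrete about that second workaround, here is roughly what the rounding looks like (the variable names are mine; bytesPerBeat is the per-beat byte count rather than the whole measure):
// Workaround 2, sketched: force each beat's byte offset to a whole number of
// 4-byte frames. This removes the static but drifts from the exact tempo
// (e.g. 110 BPM really wants 48109.0909 bytes per beat).
int frameSizeInBytes = 4;                                                   // 2 channels * 2 bytes per sample
int bytesPerBeat = 48109;                                                   // 110 BPM at 88200 bytes/second
int alignedBytesPerBeat = bytesPerBeat - (bytesPerBeat % frameSizeInBytes); // 48108
// The Tock copies would then start at 1x, 2x, and 3x alignedBytesPerBeat.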
Examples of working (no static) and not working (static) tempos and their required number of frames (consider that one second = 88200 bytes, i.e. 22050 four-byte frames):
tempo   wavData.length   numFrames   result
110     192436           48109       static
120     176400           44100       no static
130     162828           40707       static
140     151200           37800       no static
150     141120           35280       no static
160     132300           33075       static
170     124516           31129       static
180     117600           29400       no static
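The pattern in those numbers (which lines up with what I found in Edit 2 below) is that the static tempos are exactly the ones where wavData.length / 4 is odd, so the 1st and 3rd Tock copies start at odd byte offsets while the 2nd stays even. A throwaway check like this (the helper name is mine) matches the table:
// Quick check of the pattern in the table above (helper name is my own):
// the "static" tempos are exactly the ones where wavData.length / 4 is odd,
// so the 1st and 3rd Tock copies start at an odd byte offset (the 2nd stays even).
static boolean tockOffsetsAreEven(int wavDataLength) {
    int bytesPerBeat = wavDataLength / 4;
    return bytesPerBeat % 2 == 0;
}
// tockOffsetsAreEven(192436) -> false  (110 BPM, static)
// tockOffsetsAreEven(176400) -> true   (120 BPM, no static)
// tockOffsetsAreEven(132300) -> false  (160 BPM, static)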
If the answer to the question is "no, you can't use setLoopPoints() to configure a loop accurate to any millisecond", then I'd like to know of any other options. Would OpenSL ES in NDK, SoundPool, or MediaPlayer be more appropriate for generating a precise loop?
Edit 2: I've narrowed down where the static comes from:
// Assume a tempo of 160 BPM which requires byte[132300]
wavStream1 = this.context.getResources().openRawResource(R.raw.tick);
wavStream2 = this.context.getResources().openRawResource(R.raw.tock);
ByteBuffer headerBuffer1 = ByteBuffer.allocate(44);
ByteBuffer headerBuffer2 = ByteBuffer.allocate(44);
headerBuffer1.order(ByteOrder.LITTLE_ENDIAN);
headerBuffer2.order(ByteOrder.LITTLE_ENDIAN);
wavStream1.read(headerBuffer1.array(), 0, 44);
wavStream2.read(headerBuffer2.array(), 0, 44);
int tickDataSize = headerBuffer1.getInt(40);
int tockDataSize = headerBuffer2.getInt(40);
byte[] wavData = new byte[bytesPerInterval * 4];
byte[] tickWavData = new byte[bytesPerInterval];
byte[] tockWavData = new byte[bytesPerInterval];
wavStream1.read(tickWavData, 0, tickDataSize);
wavStream2.read(tockWavData, 0, tockDataSize);
System.arraycopy(tickWavData, 0, wavData, 0, bytesPerInterval);
System.arraycopy(tockWavData, 0, wavData, 33075, bytesPerInterval);  // 1 * bytesPerInterval
System.arraycopy(tockWavData, 0, wavData, 66150, bytesPerInterval);  // 2 * bytesPerInterval
System.arraycopy(tockWavData, 0, wavData, 99225, bytesPerInterval);  // 3 * bytesPerInterval
// The Tock copies starting at 33075 and 99225 (or at any odd offset) come out as
// static when wavData is played
// stream type 3 = STREAM_MUSIC, channel config 12 = CHANNEL_OUT_STEREO,
// audio format 2 = ENCODING_PCM_16BIT, mode 0 = MODE_STATIC
AudioTrack audioTrack = new AudioTrack(3, 22050, 12, 2, wavData.length, 0);
audioTrack.write(wavData, 0, wavData.length);
// setLoopPoints() expects frames; here 33075 happens to equal both bytesPerInterval
// and the total frame count (132300 bytes / 4 bytes per frame)
audioTrack.setLoopPoints(0, bytesPerInterval, -1);
audioTrack.play();
Most importantly, I'd like to understand why audio data beginning at an odd index of wavData generates static instead of the expected sound and if there is any remedy for this.
Answer: After reading your edits, I think the reason odd indices cause a problem is that you are creating the AudioTrack with ENCODING_PCM_16BIT (the "2" you pass in the constructor). That means every sample is 16 bits, i.e. 2 bytes; when a copy starts at an odd byte offset, the low and high bytes of each 16-bit sample get split across neighbouring samples on playback, which comes out as static. Try using "3" (ENCODING_PCM_8BIT) instead if the samples are actually 8 bits.
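If the files really are 16-bit and you want to keep ENCODING_PCM_16BIT, a possible remedy is sketched below. It is only a sketch under the assumption of 16-bit stereo (4-byte frames), reusing tickWavData, tockWavData, tickDataSize, tockDataSize, bytesPerSecond, and tempo from your snippets; the other names are placeholders. The idea is to round each beat's start offset to a whole frame and spread the rounding error across the four beats, so the loop stays within one frame of the exact tempo.
// Sketch (not tested): keep every beat's start offset on a 4-byte frame boundary
// (2 channels * 2 bytes per 16-bit sample) while the total loop length stays as
// close as possible to the exact tempo.
int frameSize = 4;                                         // bytes per frame for 16-bit stereo
double exactBytesPerBeat = (bytesPerSecond * 60.0) / tempo;

int totalFrames = (int) Math.round((exactBytesPerBeat * 4) / frameSize);  // whole 4-beat measure
byte[] wavData = new byte[totalFrames * frameSize];

// Frame-aligned start offsets for the 4 beats; each offset is within half a frame
// of its exact position, so beat spacing is off by at most one frame.
int[] beatOffsets = new int[4];
for (int i = 0; i < 4; i++) {
    beatOffsets[i] = (int) Math.round((exactBytesPerBeat * i) / frameSize) * frameSize;
}

System.arraycopy(tickWavData, 0, wavData, beatOffsets[0], tickDataSize);
for (int i = 1; i < 4; i++) {
    System.arraycopy(tockWavData, 0, wavData, beatOffsets[i], tockDataSize);
}

AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 22050,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        wavData.length, AudioTrack.MODE_STATIC);
audioTrack.write(wavData, 0, wavData.length);
audioTrack.setLoopPoints(0, totalFrames, -1);              // setLoopPoints() takes frames
audioTrack.play();
The worst-case timing error this introduces is one frame per beat, about 0.045 ms at 22050 Hz, which should be far below anything audible or visible in the synced animation.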