I'm having an issue trying to do echo cancellation on Android using WebRTC. I'm following the project posted here for the most part, except that I'm streaming directly from a remote device.
/* Prepare AEC */
MobileAEC aecm = new MobileAEC(null);
aecm.setAecmMode(MobileAEC.AggressiveMode.MILD)
    .prepare();

/* Get minimum buffer size */
int minBufSize = AudioRecord.getMinBufferSize(HBConstants.SAMPLE_RATE,
        AudioFormat.CHANNEL_CONFIGURATION_STEREO,
        AudioFormat.ENCODING_PCM_16BIT);
int audioLength = minBufSize / 2;
byte[] buf = new byte[minBufSize];
short[] audioBuffer = new short[audioLength];
short[] aecOut = new short[audioLength];

/* Prepare AudioTrack */
AudioTrack speaker = new AudioTrack(AudioManager.STREAM_MUSIC,
        HBConstants.SAMPLE_RATE,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT, audioLength,
        AudioTrack.MODE_STREAM);
speaker.play();

isRunning = true;

/* Loop and read the incoming network buffer. playerQueue is a LinkedBlockingQueue,
   filled elsewhere with incoming network data. */
while (isRunning) {
    try {
        buf = playerQueue.take();

        /* Convert to a short buffer and send to the AECM */
        ByteBuffer.wrap(buf).order(ByteOrder.nativeOrder())
                .asShortBuffer().get(audioBuffer);
        aecm.farendBuffer(audioBuffer, audioLength);
        aecm.echoCancellation(audioBuffer, null, aecOut,
                (short) audioLength, (short) 10);

        /* Send output to the speaker */
        speaker.write(aecOut, 0, audioLength);
    } catch (Exception ie) {
        // ignored
    }

    try {
        Thread.sleep(5);
    } catch (InterruptedException e) {
        // ignored
    }
}
When I do this, I get this exception:
12-23 17:31:11.290: W/System.err(8717): java.lang.Exception: setFarendBuffer() failed due to invalid arguments.
12-23 17:31:11.290: W/System.err(8717): at com.android.webrtc.audio.MobileAEC.farendBuffer(MobileAEC.java:204)
12-23 17:31:11.290: W/System.err(8717): at com.example.twodottwo.PlayerThread.run(PlayerThread.java:62)
12-23 17:31:11.290: W/System.err(8717): at java.lang.Thread.run(Thread.java:841)
Now I've dug a bit into the code and found that the sampler will only accept 80 or 160 samples at a time. To compensate, I tried to read only 160 samples at a time, but that is less than the minimum buffer size of the AudioRecord object and produces an error.
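Just to make the size mismatch concrete, this is roughly the comparison I'm looking at (the actual minimum is device- and rate-dependent, so treat the numbers as a guess):

int minBufSize = AudioRecord.getMinBufferSize(HBConstants.SAMPLE_RATE,
        AudioFormat.CHANNEL_CONFIGURATION_STEREO,
        AudioFormat.ENCODING_PCM_16BIT);
// One AECM frame is 160 samples * 2 bytes = 320 bytes, which is far below the
// minimum AudioRecord buffer, so asking for only 160 samples' worth fails.
Log.d("AEC", "minBufSize=" + minBufSize + " bytes, AECM frame=320 bytes");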
So to get around this, I also tried the following code and set the queue up to deliver a maximum of 320 bytes at a time (since a short takes 2 bytes):
ShortBuffer sb = ShortBuffer.allocate(audioLength);
int samples = audioLength / 160;
int i = 0;
while (i < samples) {
    buf = playerQueue.take();
    ByteBuffer.wrap(buf).order(ByteOrder.nativeOrder())
            .asShortBuffer().get(audioBuffer);
    aecm.farendBuffer(audioBuffer, 160);
    aecm.echoCancellation(audioBuffer, null, aecOut, (short) 160, (short) 10);
    sb.put(aecOut);
    i++;
}
speaker.write(sb.array(), 0, audioLength);
This should buffer each 160-element chunk and pass it to the WebRTC library for echo cancellation, but it just produces random noise. I've also tried reversing the order of the resulting array, which still produces random noise.
Is there a way to split up the sound sample so that it still sounds like the original while keeping WebRTC happy? Or is there a way to get WebRTC to accept more samples at once? Either would work for me, but at the moment I'm a bit stuck.
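For reference, this is roughly the kind of splitting I have in mind: walk the full buffer in 160-sample slices and run each slice through the AECM (just a sketch of the idea, assuming each slice only needs to be a contiguous 160-sample piece of the original signal):

// Sketch: process a full audioLength buffer in 160-sample AECM frames.
short[] frameIn = new short[160];
short[] frameOut = new short[160];
for (int offset = 0; offset + 160 <= audioLength; offset += 160) {
    System.arraycopy(audioBuffer, offset, frameIn, 0, 160);
    aecm.farendBuffer(frameIn, 160);
    aecm.echoCancellation(frameIn, null, frameOut, (short) 160, (short) 10);
    System.arraycopy(frameOut, 0, aecOut, offset, 160);
}
speaker.write(aecOut, 0, audioLength);

The idea is just to keep each AECM call at 160 samples while still writing a full buffer's worth of audio to the AudioTrack in one go, but I don't know whether that is what the library expects.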