Referring to this post, I want to write a method that converts Android YUV_420_888 images to NV21. A more general implementation is needed, even though images from the camera2 API are usually NV21 in disguise. Here is what I have:
import android.media.Image;
import java.nio.ByteBuffer;

class NV21Image {
    public byte[] y;  // width * height luma bytes
    public byte[] uv; // width * height / 2 interleaved VU bytes
}
public static void cvtYUV420ToNV21(Image image, NV21Image nv21) {
    int width = image.getWidth();
    int height = image.getHeight();
    int ySize = width * height;
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer(); // Y
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer(); // U
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer(); // V
    int yRowStride = image.getPlanes()[0].getRowStride();
    int vRowStride = image.getPlanes()[2].getRowStride();
    int pixelStride = image.getPlanes()[2].getPixelStride();
    assert(image.getPlanes()[0].getPixelStride() == 1);
    assert(image.getPlanes()[2].getRowStride() == image.getPlanes()[1].getRowStride());
    assert(image.getPlanes()[2].getPixelStride() == image.getPlanes()[1].getPixelStride());
    // Copy the Y plane row by row, skipping any padding at the end of each row.
    int pos = 0;
    int yBufferPos = -yRowStride; // not an actual position
    for (; pos < ySize; pos += width) {
        yBufferPos += yRowStride;
        yBuffer.position(yBufferPos);
        yBuffer.get(nv21.y, pos, width);
    }
    // Interleave the chroma planes as VU (NV21 order: V first, then U).
    pos = 0;
    for (int row = 0; row < height / 2; row++) {
        // width/2 chroma samples per output row; vRowStride may include padding
        for (int col = 0; col < width / 2; col++) {
            int vuPos = col * pixelStride + row * vRowStride;
            nv21.uv[pos++] = vBuffer.get(vuPos);
            nv21.uv[pos++] = uBuffer.get(vuPos);
        }
    }
}
The code above works as expected but is very time-consuming for my live camera preview app (about 12 ms per 720p frame on a Snapdragon 865 CPU), so I tried to accelerate it with a JNI implementation, hoping to profit from direct byte access and native performance:
#include <jni.h>

extern "C"
JNIEXPORT void JNICALL
Java_com_example_Utils_nFillYUVArray(JNIEnv *env, jclass clazz, jbyteArray yArr, jbyteArray uvArr,
                                     jobject yBuf, jobject uBuf, jobject vBuf,
                                     jint yRowStride, jint vRowStride, jint vPixelStride,
                                     jint w, jint h) {
    auto ySrcPtr = (jbyte const *) env->GetDirectBufferAddress(yBuf);
    auto uSrcPtr = (jbyte const *) env->GetDirectBufferAddress(uBuf);
    auto vSrcPtr = (jbyte const *) env->GetDirectBufferAddress(vBuf);
    // Copy the Y plane one row at a time, dropping row padding.
    for (int row = 0; row < h; row++) {
        env->SetByteArrayRegion(yArr, row * w, w, ySrcPtr + row * yRowStride);
    }
    // Interleave V and U one byte at a time: two JNI calls per chroma pixel.
    int pos = 0;
    for (int row = 0; row < h / 2; row++) {
        for (int col = 0; col < w / 2; col++) {
            int vuPos = col * vPixelStride + row * vRowStride;
            env->SetByteArrayRegion(uvArr, pos++, 1, vSrcPtr + vuPos);
            env->SetByteArrayRegion(uvArr, pos++, 1, uSrcPtr + vuPos);
        }
    }
}
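As an aside: when the image really is NV21 in disguise, as noted at the top, the native side can detect this cheaply and skip the loops entirely. A minimal sketch in the same file, under my own assumptions (direct buffers, no row padding; the helper name is mine):

// Hypothetical helper (name is an assumption): returns true and fills uvArr
// when the chroma planes already overlap in NV21 (VU-interleaved) layout.
static bool tryFastPathNV21(JNIEnv *env, jbyteArray uvArr,
                            jbyte const *uSrcPtr, jbyte const *vSrcPtr,
                            jint vRowStride, jint vPixelStride, jint w, jint h) {
    if (vPixelStride != 2 || vRowStride != w || uSrcPtr != vSrcPtr + 1)
        return false; // not NV21 in disguise; fall back to the general path
    jint uvSize = w * h / 2;
    // The V buffer already holds uvSize - 1 bytes of interleaved VU data;
    // only the final U byte (at uSrcPtr[uvSize - 2]) is missing.
    env->SetByteArrayRegion(uvArr, 0, uvSize - 1, vSrcPtr);
    env->SetByteArrayRegion(uvArr, uvSize - 1, 1, uSrcPtr + uvSize - 2);
    return true;
}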
However, the general JNI path above turned out much worse than I expected (about 107 ms per frame), and the most time-consuming part is the interleaved memory copy for the UV buffer, which issues two single-byte JNI calls per chroma pixel.
So my question is: is there any way to accelerate this, and how should it be done?
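An obvious first step, sketched below under my own assumptions (helper name is mine, untested), is to pin the destination array once with GetPrimitiveArrayCritical and interleave with plain stores instead of two JNI calls per pixel:

// Sketch: pin uvArr once and interleave with ordinary stores. No other JNI
// calls may be made between Get/ReleasePrimitiveArrayCritical.
static void fillUVCritical(JNIEnv *env, jbyteArray uvArr,
                           jbyte const *uSrcPtr, jbyte const *vSrcPtr,
                           jint vRowStride, jint vPixelStride, jint w, jint h) {
    auto *uvDst = (jbyte *) env->GetPrimitiveArrayCritical(uvArr, nullptr);
    int pos = 0;
    for (int row = 0; row < h / 2; row++) {
        jbyte const *vRow = vSrcPtr + row * vRowStride;
        jbyte const *uRow = uSrcPtr + row * vRowStride; // same stride per the asserts
        for (int col = 0; col < w / 2; col++) {
            uvDst[pos++] = vRow[col * vPixelStride]; // V first (NV21)
            uvDst[pos++] = uRow[col * vPixelStride]; // then U
        }
    }
    env->ReleasePrimitiveArrayCritical(uvArr, uvDst, 0);
}

This removes the JNI call overhead, but still interleaves one byte at a time.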
Update
I accelerated it successfully (check my answer) when the pixelStrides of the U and V planes are both 1 or 2, which I believe covers most cases.

As @snachmsm said, libyuv might help. I found an available API, I420ToNV21, but it cannot take a pixelStride parameter, and YUV_420_888 does not guarantee that there are no gaps between adjacent pixels in the U and V planes. I accelerated it successfully with ARM NEON intrinsics when pixelStride is 2 (reduced to 2.7 ms per frame); a sketch of the idea follows. The case of pixelStride == 1 is not tested sufficiently, but I believe it will work as expected.
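The core of the pixelStride == 2 path looks roughly like this (a simplified sketch, not the exact code from my answer; it assumes each chroma row has at least 2 * halfWidth readable bytes, so the final gap byte of the very last row may need separate handling):

#include <arm_neon.h>

// Sketch of the NEON interleave for vPixelStride == 2: vld2q_u8 splits each
// 32-byte chunk into 16 sample bytes and 16 gap bytes; vst2q_u8 writes the
// V and U samples back out interleaved as VU (NV21 chroma order).
static void interleaveVURow(uint8_t *dst, const uint8_t *vRow,
                            const uint8_t *uRow, int halfWidth) {
    int col = 0;
    for (; col + 16 <= halfWidth; col += 16) {
        uint8x16x2_t v2 = vld2q_u8(vRow + col * 2); // val[0] = 16 V samples
        uint8x16x2_t u2 = vld2q_u8(uRow + col * 2); // val[0] = 16 U samples
        uint8x16x2_t vu = { v2.val[0], u2.val[0] };
        vst2q_u8(dst + col * 2, vu); // stores V0 U0 V1 U1 ...
    }
    for (; col < halfWidth; col++) { // scalar tail for widths not divisible by 32
        dst[col * 2] = vRow[col * 2];
        dst[col * 2 + 1] = uRow[col * 2];
    }
}

Calling this once per chroma row, with the destination pinned as in the earlier sketch, removes both the per-byte JNI overhead and the scalar interleave; vld2q_u8 and vst2q_u8 each move 32 bytes per instruction.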