Android Camera dropping frames because of work in the PreviewCallback


For an Android project I have to analyze camera frames in real time. I currently use 'android.hardware.Camera.PreviewCallback' to receive the camera frames. The problem is that as soon as I analyze the frames, my frame rate drops from the 30 fps I need to 15 fps. I already tried handling the analysis in a separate thread; the frames then stop dropping, but the analysis is no longer real-time.

Does anyone have a solution for this problem?
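(For context: the separate-thread attempt above falls behind because every frame gets queued. A "latest frame wins" hand-off lets the worker always analyze the newest frame and silently drop stale ones, which keeps the analysis current. A minimal sketch in plain Java; the class and method names are illustrative, not from any Android API:)

```java
import java.util.concurrent.atomic.AtomicReference;

// "Latest frame wins": the camera callback only swaps in the newest frame,
// and a single worker analyzes whichever frame is current when it is free.
// Stale frames are overwritten instead of queued, so the preview never backs up.
public class FrameWorker {
    private final AtomicReference<byte[]> latest = new AtomicReference<>();
    private final Thread worker;
    private volatile boolean running = true;
    private volatile int analyzed = 0; // frames actually analyzed

    public FrameWorker() {
        worker = new Thread(() -> {
            while (running) {
                byte[] frame = latest.getAndSet(null); // take and clear
                if (frame == null) {
                    Thread.yield();                    // nothing new yet
                    continue;
                }
                analyze(frame);
            }
        });
        worker.start();
    }

    // Called from the camera callback; never blocks.
    public void onFrame(byte[] frame) {
        latest.set(frame); // overwrite any unprocessed frame
    }

    private void analyze(byte[] frame) {
        analyzed++; // placeholder for the real per-frame analysis
    }

    public void stop() {
        running = false;
        try { worker.join(); } catch (InterruptedException ignored) {}
    }

    public int analyzedCount() { return analyzed; }
}
```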

There are 4 answers

daemmie

Possible options:

  • lower the resolution
  • optimize your algorithm (or use another one)
  • analyze your frames in C
  • if possible, use shaders or maybe RenderScript
  • more than 2 threads might also help (depends on the hardware)

Keep in mind that lots of slow devices are out there. The frame rate also depends on the lighting conditions. So if you want to publish your app, make sure it is also able to handle a lower frame rate.
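Lowering the resolution, for instance, comes down to picking the smallest supported preview size that still gives enough detail. A rough sketch of that selection logic in plain Java (on a real device the size list would come from Camera.Parameters.getSupportedPreviewSizes(); everything here is illustrative):

```java
// Pick the smallest preview size whose pixel area still reaches a minimum,
// so each frame carries less data without losing required detail.
public class PreviewSizePicker {
    // Each entry is {width, height}.
    static int[] pick(int[][] supported, int minArea) {
        int[] best = null;
        for (int[] s : supported) {
            int area = s[0] * s[1];
            if (area >= minArea && (best == null || area < best[0] * best[1])) {
                best = s;
            }
        }
        // Fall back to the largest available size if nothing reaches minArea.
        if (best == null) {
            for (int[] s : supported) {
                if (best == null || s[0] * s[1] > best[0] * best[1]) best = s;
            }
        }
        return best;
    }
}
```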

Miao Wang

There might be multiple ways to solve your problem, as mentioned by @oberflansch.

If you have to do YUV to RGB conversion, which is not cheap, along with other analysis, RenderScript might be your way out. It has a fast YUV-to-RGB conversion intrinsic and can potentially give you a huge performance boost for your analysis.

Here is an example to use RenderScript with Camera: https://github.com/googlesamples/android-HdrViewfinder

You can also try to do your analysis in YUV space, just as the above example does, which saves you the cost of the YUV-to-RGB conversion entirely.

But if you need to do it in RGB space, ScriptIntrinsicYuvToRGB is simple to use:

    // Set up the Allocations and the ScriptIntrinsicYuvToRGB.
    // Make sure you reuse them across frames to avoid setup overhead.
    Type.Builder yuvTypeBuilder = new Type.Builder(rs, Element.YUV(rs));
    yuvTypeBuilder.setX(dimX).setY(dimY).setYuvFormat(ImageFormat.YUV_420_888);
    // USAGE_IO_INPUT is used with the Camera API to get the image buffer
    // from the camera stream without any copy. For detailed usage please
    // refer to the example.
    mInputAllocation = Allocation.createTyped(rs, yuvTypeBuilder.create(),
            Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);

    Type.Builder rgbTypeBuilder = new Type.Builder(rs, Element.RGBA_8888(rs));
    rgbTypeBuilder.setX(dimX).setY(dimY);
    // USAGE_IO_OUTPUT can be used with the Surface API to display the
    // image on a surface without any copy.
    // You can remove it if you don't need to display the image.
    mOutputAllocation = Allocation.createTyped(rs, rgbTypeBuilder.create(),
            Allocation.USAGE_IO_OUTPUT | Allocation.USAGE_SCRIPT);

    ScriptIntrinsicYuvToRGB yuvToRgb =
            ScriptIntrinsicYuvToRGB.create(rs, Element.RGBA_8888(rs));

    // ...

    // Each time a new frame is available, do the processing.
    // Please refer to the example for details.
    mInputAllocation.ioReceive();
    // YUV to RGB conversion
    yuvToRgb.setInput(mInputAllocation);
    yuvToRgb.forEach(mOutputAllocation);

    // Do the analysis on the RGB data in mOutputAllocation.
    // ...
Settembrini

In addition to what has been said above, if your image analysis can work with gray-value information alone, you don't need any conversion to RGB at all. If you have a resolution of n times m pixels, you can just take the first (n*m) bytes of the YUV data; these form the luma (gray) plane. This is true both for the old Camera API and for Camera2.
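As a sketch, extracting that gray plane from an NV21 preview frame is just a prefix copy (plain Java; the frame data below is synthetic):

```java
import java.util.Arrays;

// For an n x m NV21 frame the first n*m bytes are the luma (gray) plane;
// the interleaved chroma bytes follow and can be ignored for gray-value analysis.
public class Luma {
    static byte[] extractGray(byte[] yuv, int width, int height) {
        return Arrays.copyOfRange(yuv, 0, width * height);
    }
}
```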

Epler

Because I had to analyze the RGB and HSV values, I couldn't use the solution @Settembrini mentioned. To solve my problem I created a C function with the Android NDK; it handles the analysis and returns the result. Even on slower devices I reach the 30 fps I need. In most cases, though, the solution of @Miao Wang will probably be the way to go.

For Android studio users:

  • Install the NDK via SDK Tools (File > Preferences > Appearance & Behavior > System Settings > Android SDK, SDK Tools tab).
  • Create a sub-directory called "jni" and place all your native sources there.
  • Create an "Android.mk" file to describe your native sources to the NDK build system.
  • Build your native code by running the "ndk-build" script (in the NDK install directory) from your project's directory. The build tools copy the stripped shared libraries your application needs to the proper location in the application's project directory.
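For reference, a minimal "Android.mk" for this setup might look as follows (the module name matches the "rgbhsvc" library loaded below; the source file name is an assumption):

```makefile
# Android.mk -- describes the native module to the ndk-build system.
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
# Must match the name passed to System.loadLibrary("rgbhsvc").
LOCAL_MODULE    := rgbhsvc
LOCAL_SRC_FILES := rgbhsv.c
# Only needed if the C code uses Android logging.
LOCAL_LDLIBS    := -llog
include $(BUILD_SHARED_LIBRARY)
```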

Integrate the native method in your Activity:

// Load the native C module/lib.
static {
    try {
        System.loadLibrary("rgbhsvc");
    } catch (UnsatisfiedLinkError ule) {
        Log.e(DEBUG_TAG, "WARNING: Could not load native library: " + ule.getMessage());
    }
}

// Native method that calculates the RGB and HSV values in C.
private native void YUVtoRBGHSV(double[] rgb_hsv, byte[] yuv, int width, int height);

Create the C part to process the data:

JNIEXPORT void JNICALL
Java_nl_example_project_classes_Camera_YUVtoRBGHSV(JNIEnv *env, jobject obj,
        jdoubleArray rgb_hsv, jbyteArray yuv420sp, jint width, jint height)
{
    // Pin the YUV input array so the C code can read it.
    jbyte *yuv = (*env)->GetByteArrayElements(env, yuv420sp, NULL);
    jsize sz = (*env)->GetArrayLength(env, rgb_hsv);

    // ... process the YUV data into a jdouble buffer rgbData of length sz ...

    // Copy the results back into the Java output array.
    (*env)->SetDoubleArrayRegion(env, rgb_hsv, 0, sz, (jdouble *) &rgbData[0]);

    // Release the input array; JNI_ABORT skips copying changes back.
    (*env)->ReleaseByteArrayElements(env, yuv420sp, yuv, JNI_ABORT);
}

A good introduction to the Android NDK: https://www.sitepoint.com/using-c-and-c-code-in-an-android-app-with-the-ndk/

Thanks for all the answers!