When we started working on a project in which the camera and the OpenCV library played major roles, we wondered which camera API we should use. OpenCV's camera has a Java interface and Android support: JavaCameraView or JavaCamera2View handles the interaction between the camera and OpenCV. We decided to use the new CameraX API, which is built on top of Camera2.
I will not write about the OpenCV implementation or how we performed image analysis; I will just point out what we found important during development.
In this blog, I found useful information on how to integrate OpenCV into your Android project:
If you want something even simpler, import this library as a dependency in your build.gradle - check here.
When you work with images, you work with Bitmaps; when you work with Bitmaps, you deal with memory overhead, so things like aspect ratio, resolution, and the camera setup overall matter. You need to produce the desired quality without running into memory problems. And then you need to glue it all together: OpenCV, its project-specific modules, and the camera (preview, capture, and frame analysis).
CameraX as a part of the Jetpack support library:
- Makes development for us much easier — this is important :-),
- Has backward compatibility to Android 5.0 (API level 21),
- Uses a simpler, use case-based approach that is lifecycle-aware,
- Enables us to add effects such as Portrait, HDR, Night, and Beauty through CameraX Extensions,
- Overall reduces the amount of code.
As stated in the official documentation, it “enables developers to leverage the same camera experiences and features that preinstalled camera apps provide, with as little as two lines of code.” Awesome.
The core CameraX libraries are in beta, and they are stable enough to use in production. Lots of issues have been fixed thanks to community feedback: e.g., a wrong preview aspect ratio on the LG G3, broken tap-to-focus on the Samsung Galaxy S7, etc.
You are probably familiar with the use cases CameraX provides (Preview, ImageCapture, ImageAnalysis) and how they are configured. Just check the Google documentation; there is a sample app, and it's worth going through it, because this API is really great.
Let’s skip implementing the use cases and instead look at some of the API's features.
Additionally, when working with bitmaps:
- Use some image caching library (Glide, Picasso, Coil, Fresco …)
- Don’t load bitmaps on the UI thread,
- Scale the image to the size of your ImageView or preview before displaying it, especially if you save it to internal or external storage. There is no need to load a 1024x768 pixel image into memory if it will be displayed in a 128x96 pixel thumbnail in the ImageView (see the sketch below).
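As an illustration of that last point, here is a minimal sketch of subsampled decoding with BitmapFactory; the helper name and the path parameter are our own, not part of any library:

import android.graphics.Bitmap
import android.graphics.BitmapFactory

// Decode just the bounds first, pick a power-of-two sample size that keeps
// the result at or above the requested size, then decode for real.
fun decodeScaledBitmap(path: String, reqWidth: Int, reqHeight: Int): Bitmap? {
    val bounds = BitmapFactory.Options().apply { inJustDecodeBounds = true }
    BitmapFactory.decodeFile(path, bounds) // reads dimensions only, no pixels
    var sampleSize = 1
    while (bounds.outWidth / (sampleSize * 2) >= reqWidth &&
        bounds.outHeight / (sampleSize * 2) >= reqHeight
    ) {
        sampleSize *= 2
    }
    val options = BitmapFactory.Options().apply { inSampleSize = sampleSize }
    return BitmapFactory.decodeFile(path, options)
}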
A) ASPECT RATIO
The aspect ratio is the relationship between the width and the height of a display. In the case of CameraX, we have two options (4:3, or 1.33, and 16:9, or 1.78; 18:9 and 19:9 also exist, but they are not yet supported). In most cases, a 16:9 capture uses fewer of the sensor's megapixels, so images captured with a 4:3 aspect ratio are much sharper and clearer than 16:9 ones, but 16:9 is the standard because the majority of content (videos, gaming, etc.) is available in that format.
The 16:9 aspect ratio became the standard for smartphones and other devices around 2010.
The default aspect ratio for the image capture and image analysis use cases is 4:3.
// Request 16:9 output; CameraX will pick the closest supported size.
const val ASPECT_RATIO = AspectRatio.RATIO_16_9

val imageCapture = ImageCapture.Builder()
    .setTargetAspectRatio(ASPECT_RATIO)
    .build()
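If you combine use cases, it can help to keep them on the same ratio, so that what the user previews matches what gets captured; a small sketch:

// Keep Preview on the same aspect ratio as ImageCapture.
val preview = Preview.Builder()
    .setTargetAspectRatio(ASPECT_RATIO)
    .build()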
B) RESOLUTION
Resolution is the number of pixels on a display or in a camera sensor. A higher resolution means more pixels, and more pixels means a more detailed image.
At the time of writing this blog, Full HD (1920x1080) is the optimal resolution. By the way, that is a 16:9 aspect ratio.
CameraX can automatically determine the best resolution for you if you don’t specify one, or if the specified resolution is not supported. You can also set the resolution yourself; either way, the closest match is chosen between what you request and what the device supports.
// Request 720p frames for analysis; CameraX falls back to the closest supported size.
val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(1280, 720))
    .build()
Here are some tips from Google documentation:
- Preview use case — Max resolution: Preview size, which refers to the best size match to the device’s screen resolution, or to 1080p (1920x1080), whichever is smaller.
- Image analysis — Max resolution: This is limited by CameraX to 1080p. The target resolution is set to 640x480 by default, so if you want a resolution larger than 640x480, use setTargetResolution() or setTargetAspectRatio() to get the closest match among the supported resolutions. The 1080p limit for ImageAnalysis balances performance and quality, so users get reasonable quality and a smooth output stream. As for why 640x480: it is the resolution guaranteed to be available across all devices.
- You can't set both target aspect ratio and target resolution on the same use case. Doing so will throw an IllegalArgumentException when building the config object.
- Image capture — Max resolution: The camera device’s maximum output resolution for the JPEG format, from StreamConfigurationMap.getOutputSizes().
- Always check the returned image sizes on the use case output in your code and adjust accordingly (see the sketch after this list).
- If the primary need of the app is to specify a resolution in order to make image processing more efficient, use setTargetResolution(Size resolution).
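As a minimal sketch of that size check, assuming imageAnalysis was built as above and executor is a background Executor of ours:

// Log the size the device actually delivered; it may differ from the
// requested 1280x720 if that exact size is not supported.
imageAnalysis.setAnalyzer(executor) { imageProxy ->
    Log.d("Analyzer", "Received frame: ${imageProxy.width}x${imageProxy.height}")
    imageProxy.close() // always close the frame, or the stream stalls
}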
C) FLASH MODE
Although using the flash depends on your use case, it is worth mentioning that it tends to spoil the color temperature and brightness; you often end up with a black background.
In our app, the flash needs to be continuously turned on (torch mode), so we used the following steps:
// Binding returns a Camera handle that exposes CameraInfo and CameraControl.
val camera = cameraProvider.bindToLifecycle(
    lifecycleOwner, cameraSelector, preview, imageCapture, imageAnalyzer
)
1. Check if the device actually has a flash option.
camera?.cameraInfo?.hasFlashUnit() == true
2. Enable flash
camera?.cameraControl?.enableTorch(flashEnabled)
Additionally, if you need to enable or disable the flash during an image capture:
imageCapture.flashMode = FLASH_MODE_ON
imageCapture.flashMode = FLASH_MODE_OFF
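If the UI needs to reflect the torch state, you can also observe it; a small sketch, where flashButton is a hypothetical view of ours:

// Keep the (hypothetical) flash button in sync with the actual torch state.
camera?.cameraInfo?.torchState?.observe(lifecycleOwner) { state ->
    flashButton.isSelected = (state == TorchState.ON)
}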
D) ROTATION
Most camera applications lock the display into landscape mode because that is the natural orientation of the camera sensor. If you take a photo in portrait, it will be rotated 90 degrees, so you need information about the orientation.
We have two use cases that should receive images with the correct rotation:
- The ImageAnalysis use case’s Analyzer should receive frames with the correct rotation,
- And the ImageCapture use case should take pictures with the correct rotation,
- ImageAnalysis receives the rotation info from the camera in the form of an ImageProxy, and ImageCapture in the form of an ImageProxy, File, OutputStream, or MediaStore Uri. ImageProxy is just a CameraX abstraction around the Android media Image class.
E.g., the ImageProxy delivered to ImageAnalysis contains rotation information, accessible via:
val rotation = imageProxy.imageInfo.rotationDegrees
This value tells you how many degrees the image needs to be rotated clockwise to match the ImageAnalysis target rotation which, in a typical Android app, matches the screen’s orientation. The same applies to the rotation of a captured image, regardless of its format.
Since we limited our app to portrait mode, our target rotation is effectively the screen orientation: regardless of the physical orientation of the device, it stays in portrait. Even so, we saw different rotation values on some devices.
<!-- The Activity keeps a portrait orientation even as the device rotates. -->
<activity
    android:name=".LockedOrientationActivity"
    android:screenOrientation="portrait" />
Check the documentation for more info.
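Even for a locked-orientation app like ours, the documentation recommends updating each use case’s target rotation as the physical device rotates. A minimal sketch, assuming imageCapture and imageAnalysis are already built and a context is available:

// Map the raw sensor orientation to the nearest Surface rotation and
// forward it to the use cases, as the CameraX documentation suggests.
val orientationEventListener = object : OrientationEventListener(context) {
    override fun onOrientationChanged(orientation: Int) {
        if (orientation == OrientationEventListener.ORIENTATION_UNKNOWN) return
        val rotation = when (orientation) {
            in 45..134 -> Surface.ROTATION_270
            in 135..224 -> Surface.ROTATION_180
            in 225..314 -> Surface.ROTATION_90
            else -> Surface.ROTATION_0
        }
        imageCapture.targetRotation = rotation
        imageAnalysis.targetRotation = rotation
    }
}
orientationEventListener.enable()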
Here are the steps we used to get the bitmap from the ImageProxy, rotate it, and convert it to the Mat object we needed:
1. Convert the ImageProxy (YUV, the Android standard format) to an RGB Bitmap
// `converter` is an instance of the YuvToRgbConverter linked below;
// `bitmapBuffer` is a Bitmap preallocated with the frame's dimensions.
imageProxy.use {
    converter.yuvToRgb(it.image!!, bitmapBuffer)
}
Here is how you can convert YUV to RGB:
https://github.com/android/camera-samples/blob/main/CameraUtils/lib/src/main/java/com/example/android/camera/utils/YuvToRgbConverter.kt
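The bitmapBuffer used in step 1 can be allocated once, from the dimensions of the first frame, and reused for every subsequent frame; a hedged sketch of that pattern (the property name is ours):

// A reusable RGB buffer, allocated lazily inside the analyzer
// from the first frame's dimensions and reused afterwards.
private lateinit var bitmapBuffer: Bitmap

// Inside the analyzer, before calling yuvToRgb():
if (!::bitmapBuffer.isInitialized) {
    bitmapBuffer = Bitmap.createBitmap(
        imageProxy.width, imageProxy.height, Bitmap.Config.ARGB_8888
    )
}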
2. Rotate bitmap
// Returns the input bitmap unchanged when no rotation is needed.
fun rotateBitmap(img: Bitmap, degree: Int): Bitmap {
    if (degree == 0) return img
    val matrix = Matrix()
    matrix.postRotate(degree.toFloat())
    return Bitmap.createBitmap(img, 0, 0, img.width, img.height, matrix, true)
}
3. Convert bitmap to Mat
// Runs off the main thread; bitmapToMat is our helper around OpenCV (sketched below).
suspend fun convertFrom(bitmap: Bitmap, rotation: Int): Mat? {
    return withContext(Dispatchers.IO) {
        var rotatedBitmap: Bitmap? = null
        try {
            rotatedBitmap = rotateBitmap(bitmap, rotation)
            bitmapToMat(rotatedBitmap)
        } catch (e: Throwable) {
            Timber.d(e)
            null
        } finally {
            // rotateBitmap() returns the input unchanged when rotation == 0,
            // so only recycle the rotated copy, never the caller's bitmap.
            if (rotatedBitmap !== bitmap) rotatedBitmap?.recycle()
        }
    }
}
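For completeness, here is a minimal sketch of what the bitmapToMat helper above might look like, assuming OpenCV's Java bindings are available (the helper name is ours):

import android.graphics.Bitmap
import org.opencv.android.Utils
import org.opencv.core.Mat

// Wrap OpenCV's Utils.bitmapToMat: allocate a Mat and fill it from the bitmap.
fun bitmapToMat(bitmap: Bitmap): Mat {
    val mat = Mat()
    Utils.bitmapToMat(bitmap, mat)
    return mat
}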
In the end, we pass this Mat object to the OpenCV module we used in our project.
So, to summarize:
- CameraX is in beta and lots of issues have been solved; use it in production, it will make your development much easier,
- If you want to combine the camera with libraries such as ML Kit, TensorFlow Lite, OpenCV, or some other library, go with it; it is very powerful,
- Watch for memory management when dealing with bitmaps,
- Get familiar with CameraX’s capabilities and requirements regarding aspect ratio, resolution, and rotation,
- Don’t forget to try the CameraX Extensions feature; this API provides cool effects with just a couple of lines of code.
And don’t forget to clean the lens!