Make an Image Editing Application in 10 Minutes
In this article, we will create a sample application that produces a depth-of-field effect, no DSLR required, using HMS ML Kit.
Introduction
We will use HMS ML Kit to complete the following tasks:
Extract the person from an image using Image Segmentation,
Add a blur effect to the rest of the image,
Combine the two into a wonderful new result.
We will use both the camera and the gallery as input.
Demo
To try the demo, download the application from AppGallery and start editing.
As we are using cloud services here, Internet connectivity is required.
Steps to Build the Application
1. Add the Maven repository URL under both the buildscript and allprojects sections of the project-level build.gradle file.
maven { url 'https://developer.huawei.com/repo/' }
2. Add the AGConnect classpath in the same file.
classpath 'com.huawei.agconnect:agcp:1.2.1.301'
3. Add the below dependencies in the app-level Gradle file.
ext {
mlKitVersion = '1.0.4.301'
lifecycleExtension = '2.2.0'
}
implementation "androidx.lifecycle:lifecycle-extensions:$lifecycleExtension"
// Import the base SDK.
implementation "com.huawei.hms:ml-computer-vision-segmentation:$mlKitVersion"
// Import the multiclass segmentation model package.
implementation "com.huawei.hms:ml-computer-vision-image-segmentation-multiclass-model:$mlKitVersion"
// Import the human body segmentation model package.
implementation "com.huawei.hms:ml-computer-vision-image-segmentation-body-model:$mlKitVersion"
4. Add the below permissions in the manifest file.
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.INTERNET" />
5. Add the below meta-data in the manifest file so that the model is updated automatically when a new version is available on the server.
<meta-data
android:name="com.huawei.hms.ml.DEPENDENCY"
android:value="imgseg" />
Open your main activity and add two buttons: one opens the gallery and the other opens the camera. Below is the code for both of them; the request-code declarations and the click wiring follow after step 7.
6. Upload by Camera
private fun uploadByCamera() {
imageSegmentationViewModel.background.value?.recycle()
seekBar.progress = 0
val takePicture = Intent(MediaStore.ACTION_IMAGE_CAPTURE)
startActivityForResult(takePicture, loadImageCameraCode)
}
7. Upload by Gallery
private fun uploadByGallery() {
imageSegmentationViewModel.background.value?.recycle()
seekBar.progress = 0
val photoPickerIntent = Intent(Intent.ACTION_PICK, android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI)
startActivityForResult(
Intent.createChooser(photoPickerIntent, "Choosing picture from gallery"),
loadImageGalleryCode
)
}
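The request codes loadImageGalleryCode and loadImageCameraCode used above are never declared in the snippets, and the two buttons still need click listeners. Here is a minimal sketch; the button ids are assumptions (accessed the same way as imageView and seekBar in the other snippets), and 1222 preserves the camera request code from the original snippet, while the gallery value is arbitrary.
// Request codes used with startActivityForResult()/onActivityResult().
private val loadImageCameraCode = 1222
private val loadImageGalleryCode = 1223

// In onCreate(), wire the two buttons to the upload methods.
buttonCamera.setOnClickListener { uploadByCamera() }
buttonGallery.setOnClickListener { uploadByGallery() }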
After the picture is selected, control returns to onActivityResult(), where you will get the resulting bitmap. This bitmap then has to be passed on to the imageSegmentation() method.
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (requestCode == loadImageGalleryCode && resultCode == Activity.RESULT_OK && data != null) {
seekBar.visibility = View.VISIBLE
val pickedImage: Uri? = data.data
val filePath = arrayOf(MediaStore.Images.Media.DATA)
val cursor: Cursor? = contentResolver.query(pickedImage!!, filePath, null, null, null)
cursor!!.moveToFirst()
val imagePath: String = cursor.getString(cursor.getColumnIndex(filePath[0]))
val options = BitmapFactory.Options()
options.inPreferredConfig = Bitmap.Config.ARGB_8888
val bitmap: Bitmap = BitmapFactory.decodeFile(imagePath, options)
imageView.setImageBitmap(bitmap)
imageSegmentationViewModel.background.value = bitmap
imageSegmentationViewModel.originalImage.value = bitmap
imageSegmentationViewModel.imageSegmentation()
cursor.close()
} else if (requestCode == loadImageCameraCode && resultCode == Activity.RESULT_OK && data != null) {
seekBar.visibility = View.VISIBLE
val bitmap: Bitmap? = data.extras!!["data"] as Bitmap?
imageSegmentationViewModel.background.value = bitmap
imageSegmentationViewModel.originalImage.value = bitmap
imageSegmentationViewModel.imageSegmentation()
}
}
8. Add the below imageSegmentation() method in your activity or ViewModel, as per your application design.
Create a copy of the previously fetched bitmap and pass it to this method.
The method takes that copy and extracts the human body from it. If the analysis succeeds, the callback arrives inside addOnSuccessListener(). Finally, it saves the extracted image, which contains only the human body.
fun imageSegmentation(){
val setting = MLImageSegmentationSetting.Factory()
.setExact(false)
.setAnalyzerType(MLImageSegmentationSetting.BODY_SEG)
.setScene(MLImageSegmentationScene.ALL)
.create()
analyzer = MLAnalyzerFactory.getInstance()
.getImageSegmentationAnalyzer(setting)
val frame = MLFrame.fromBitmap(originalImage.value)
val task = analyzer.asyncAnalyseFrame(frame)
task.addOnSuccessListener {
// Keep only the foreground, i.e. the extracted human body.
extractedImage.value = it.foreground
analyzer.stop()
}.addOnFailureListener {
Log.d("Image Segmentation: ", "Error occurred: " + it.message)
analyzer.stop()
}
}
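The snippets store their state in imageSegmentationViewModel through three LiveData fields (originalImage, background, and extractedImage), but the ViewModel class itself is never shown. Below is a minimal sketch of what it might look like; the class name and structure are assumptions derived from the fields used in this article.
import android.graphics.Bitmap
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel
import com.huawei.hms.mlsdk.imgseg.MLImageSegmentationAnalyzer

class ImageSegmentationViewModel : ViewModel() {

    // The unmodified picture picked from the camera or the gallery.
    val originalImage = MutableLiveData<Bitmap>()

    // The bitmap currently used as background (original, blurred, or combined).
    val background = MutableLiveData<Bitmap>()

    // The human body extracted by ML Kit image segmentation.
    val extractedImage = MutableLiveData<Bitmap>()

    // Analyzer used by the imageSegmentation() method shown in step 8.
    lateinit var analyzer: MLImageSegmentationAnalyzer

    // The imageSegmentation() method from step 8 goes here.
}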
Create a SeekBar with the below properties and call the blurRenderScript() method whenever the SeekBar progress changes.
The SeekBar ranges from 0 to 100, but blurRenderScript() cannot accept a radius above 25; hence we scale the progress down by dividing it by 4.
seekBar.progressDrawable.setColorFilter(Color.WHITE, PorterDuff.Mode.SRC_IN)
seekBar.thumb.setColorFilter(Color.WHITE, PorterDuff.Mode.SRC_IN)
seekBar.setOnSeekBarChangeListener(object : SeekBar.OnSeekBarChangeListener{
override fun onProgressChanged(seekBar: SeekBar?, progress: Int, fromUser: Boolean) {
blurRenderScript(imageSegmentationViewModel.originalImage.value!!, progress / 4)
}
override fun onStartTrackingTouch(seekBar: SeekBar?) {
// Inside the listener object, "this" is not a Context, so use the activity's applicationContext.
Toast.makeText(applicationContext, "Start Editing", Toast.LENGTH_SHORT).show()
}
override fun onStopTrackingTouch(seekBar: SeekBar?) {
Toast.makeText(applicationContext, "Stop Editing", Toast.LENGTH_SHORT).show()
}
})
9. The two methods below take a bitmap and apply a blur effect on top of it.
private fun blurRenderScript(smallBitmap: Bitmap, radius: Int): Bitmap? {
var smallBitmap = smallBitmap
try {
smallBitmap = RGB565toARGB888(smallBitmap)
} catch (e: Exception) {
e.printStackTrace()
}
val bitmap = Bitmap.createBitmap(
smallBitmap.width, smallBitmap.height,
Bitmap.Config.ARGB_8888
)
val renderScript = RenderScript.create(this)
val blurInput = Allocation.createFromBitmap(renderScript, smallBitmap)
val blurOutput = Allocation.createFromBitmap(renderScript, bitmap)
val blur = ScriptIntrinsicBlur.create(
renderScript,
Element.U8_4(renderScript)
)
blur.setInput(blurInput)
if (radius <= 0 || radius > 25) {
blur.setRadius(1.toFloat()) // radius must be 0 < r <= 25
} else {
blur.setRadius(radius.toFloat()) // radius must be 0 < r <= 25
}
blur.forEach(blurOutput)
blurOutput.copyTo(bitmap)
renderScript.destroy()
combineBitmaps(bitmap, imageSegmentationViewModel.extractedImage.value!!)
return bitmap
}
@Throws(java.lang.Exception::class)
private fun RGB565toARGB888(img: Bitmap): Bitmap {
val numPixels = img.width * img.height
val pixels = IntArray(numPixels)
//Get JPEG pixels. Each int is the color values for one pixel.
img.getPixels(pixels, 0, img.width, 0, 0, img.width, img.height)
//Create a Bitmap of the appropriate format.
val result =
Bitmap.createBitmap(img.width, img.height, Bitmap.Config.ARGB_8888)
//Set RGB pixels.
result.setPixels(pixels, 0, result.width, 0, 0, result.width, result.height)
return result
}
10. Once your background image is blurred, you can combine the previously extracted body image on top of it. This way, you get the background blur effect in your picture without any masking.
private fun combineBitmaps(bmp1: Bitmap, bmp2: Bitmap) {
// Draw the blurred background first, then the extracted body on top of it.
val bmOverlay = Bitmap.createBitmap(bmp1.width, bmp1.height, bmp1.config)
val canvas = Canvas(bmOverlay)
canvas.drawBitmap(bmp1, Matrix(), null)
canvas.drawBitmap(bmp2, 0f, 0f, null)
imageSegmentationViewModel.background.value = bmOverlay
}
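The combined bitmap is only posted to the background LiveData, so the activity still needs to observe it to refresh the preview. A minimal sketch, assuming the activity holds the imageView shown earlier:
// Refresh the preview whenever a new background (blurred or combined) bitmap is posted.
// Observer comes from androidx.lifecycle.Observer.
imageSegmentationViewModel.background.observe(this, Observer { bitmap ->
    imageView.setImageBitmap(bitmap)
})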
11. Last but not least, below is how we can request runtime permissions.
val permission = arrayOf(
Manifest.permission.INTERNET,
Manifest.permission.READ_EXTERNAL_STORAGE,
Manifest.permission.CAMERA
)
ActivityCompat.requestPermissions(this, permission, 1)
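You may also want to check first whether the permissions are already granted and only request them when something is missing; a small sketch using ContextCompat.checkSelfPermission (not part of the original snippets):
// Request the permissions only if at least one of them is still missing.
val allGranted = permission.all {
    ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
}
if (!allGranted) {
    ActivityCompat.requestPermissions(this, permission, 1)
}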
12. In the callback below, we find out whether the permission was granted or denied.
override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<out String>, grantResults: IntArray) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults)
when (requestCode) {
1 -> {
// Index 1 corresponds to READ_EXTERNAL_STORAGE in the permission array above.
if (grantResults.size > 1 && grantResults[1] == PackageManager.PERMISSION_GRANTED) {
boolean = true
} else {
boolean = false
Toast.makeText(this, "Permission denied to read your External storage", Toast.LENGTH_SHORT).show()
}
return
}
}
}
Result
Conclusion
Finally, your app is ready to test. I hope you enjoyed creating this application.