Android + OpenCV: Part 3 — Face Detection (90 Degree only)

Homan Huang
6 min read · Apr 18, 2020


Face detection has become a common tool in 2020, with uses in both security and entertainment. In this part, I will walk through the detection process. Security applications also need a second function, face *recognition*, which is a machine-learning process not covered in this chapter. I will only talk about face detection, which powers the entertainment filters on social media that let people have fun with each other. Here is the menu of this part:

1. Backup — VCS
2. HaarCascades For Face Detection
3. Raw Folder
4. Temp Library Folder
5. UI: Show Rotation
6. Face Detection
7. First Test
8. Downsize Detection Image
9. Second Test

Notice: this part only works in landscape mode at a rotation of 90 degrees.

Now, let’s continue the project we built in Part 2.

🏁1. Backup — VCS (Version Control System)

After writing and testing some code, we should back it up before moving on. That’s what a VCS is for.

I use GitHub, which lets you back up your work and share it with a team.

If you don’t have a hosting account yet, register one: Git-based hosting such as GitHub, Mercurial-based hosting such as Mercurial Hosting, or Subversion-based hosting such as Google Cloud or Amazon Cloud.

Next, click “Commit” to finish the backup.
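If you prefer the terminal over Android Studio’s Commit button, the same backup looks roughly like this (repository name and commit message are my own examples, not from the article):

```shell
# Hypothetical command-line equivalent of Android Studio's VCS > Commit.
# Run inside a scratch directory; in a real project you would run git init
# at the project root instead.
cd "$(mktemp -d)" && git init -q backup-demo && cd backup-demo
git config user.email "you@example.com" && git config user.name "Your Name"
echo "camera preview code" > notes.txt
git add .
git commit -qm "Part 2 complete: camera preview working"
git log --oneline   # the commit is now recorded locally
```

To put the backup on GitHub you would additionally add a remote (`git remote add origin …`) and push; Android Studio’s “Commit and Push” does both steps at once.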

🐵 2. HaarCascades For Face Detection

The OpenCV library ships with ready-made detection models called Haar cascades, located in “OpenCV SDK Folder”/sdk/etc/. There you will find files that detect eyes, frontal faces, full bodies, and more. I need a frontal-face detector, such as haarcascade_frontalface_alt2.xml, which I will paste into the “raw” folder.

😮 3. Raw Folder

If you cannot find a raw folder in your project, create one:

res -> (right-click) New -> Android Resource Directory -> Resource type: raw

Then copy haarcascade_frontalface_alt2.xml into the “raw” folder.

// face library 
private val faceLibInputStream = resources.openRawResource(R.raw.haarcascade_frontalface_alt2)

If you want to keep the file in a separate folder, the “assets” folder is an option. You can open the model like this:

val faceLibInputStream = assets.open("haarcascade_frontalface_alt2.xml")

I prefer the “raw” folder because the generated resource ID lets the system find the file for you.

😅4. Temp Library Folder

OpenCV’s CascadeClassifier cannot read the model directly from the “raw” resources, so we need to create a temporary directory and copy the model file into it.

// face library
var faceDetector: CascadeClassifier? = null
lateinit var faceDir: File
var imageRatio = 0.0 // scale-down ratio

...

companion object {

    ...

    // Face model
    private const val FACE_DIR = "facelib"
    private const val FACE_MODEL = "haarcascade_frontalface_alt2.xml"
    private const val byteSize = 4096 // buffer size
}

Next, let’s create a new function to do this task.

private fun loadFaceLib() {
    try {
        val modelInputStream =
            resources.openRawResource(
                R.raw.haarcascade_frontalface_alt2)

        // create a temp directory
        faceDir = getDir(FACE_DIR, Context.MODE_PRIVATE)

        // create a model file
        val faceModel = File(faceDir, FACE_MODEL)

        if (!faceModel.exists()) { // copy model
            // copy model to new face library
            val modelOutputStream = FileOutputStream(faceModel)

            val buffer = ByteArray(byteSize)
            var byteRead = modelInputStream.read(buffer)
            while (byteRead != -1) {
                modelOutputStream.write(buffer, 0, byteRead)
                byteRead = modelInputStream.read(buffer)
            }

            modelInputStream.close()
            modelOutputStream.close()
        }

        faceDetector = CascadeClassifier(faceModel.absolutePath)
    } catch (e: IOException) {
        lge("Error loading cascade face model...$e")
    }
}

Now, you can call it in BaseLoaderCallback.

cvBaseLoaderCallback = object : BaseLoaderCallback(this) {
    override fun onManagerConnected(status: Int) {
        when (status) {
            SUCCESS -> {
                lgi(OPENCV_SUCCESSFUL)

                loadFaceLib()

                if (faceDetector == null || faceDetector!!.empty()) {
                    faceDetector = null
                } else {
                    // the model is in memory now, so the temp copy can go;
                    // delete() fails on a non-empty directory, hence deleteRecursively()
                    faceDir.deleteRecursively()
                }

                viewFinder.enableView()
            }

            else -> super.onManagerConnected(status)
        }
    }
}

Also, I delete the directory in onDestroy().

override fun onDestroy() {
    super.onDestroy()
    viewFinder?.let { viewFinder.disableView() }
    // deleteRecursively() also removes the model file inside the directory
    if (faceDir.exists()) faceDir.deleteRecursively()
}

📱 5. UI: Show Rotation

Let’s show the rotation on the screen. I will use this value to handle other orientations later.

In activity_main.xml:

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout ...>

    <org.opencv.android.JavaCamera2View
        ... />

    <TextView
        android:id="@+id/rotation_tv"
        android:layout_width="50dp"
        android:layout_height="wrap_content"
        android:layout_marginEnd="16dp"
        android:layout_marginBottom="16dp"
        android:background="@android:color/background_light"
        android:gravity="center"
        android:text="@string/n_0_degree"
        android:textColor="@color/colorAccent"
        android:textSize="24sp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>

I add some hard-coded strings to strings.xml:

<resources>
    <string name="app_name">OpenCV Camera Demo</string>
    <string name="n_90_degree">90</string>
    <string name="n_180_degree">180</string>
    <string name="n_270_degree">270</string>
    <string name="n_0_degree">0</string>
</resources>

Now I update the text on screen whenever the orientation changes, using an OrientationEventListener. You will see it is quite useful.

override fun onCreate(savedInstanceState: Bundle?) {
    ...

    val mOrientationEventListener = object : OrientationEventListener(this) {
        override fun onOrientationChanged(orientation: Int) {
            // Monitor orientation values to determine the target rotation value
            when (orientation) {
                in 45..134 -> rotation_tv.text = getString(R.string.n_270_degree)
                in 135..224 -> rotation_tv.text = getString(R.string.n_180_degree)
                in 225..314 -> rotation_tv.text = getString(R.string.n_90_degree)
                else -> rotation_tv.text = getString(R.string.n_0_degree)
            }
        }
    }

    if (mOrientationEventListener.canDetectOrientation()) {
        mOrientationEventListener.enable()
    } else {
        mOrientationEventListener.disable()
    }
}

😐 6. Face Detection

Showing the result is simple: OpenCV draws a rectangle around each face it finds.

// image storage
lateinit var imageMat: Mat
lateinit var grayMat: Mat

These are my image variables: imageMat stores the frame shown on screen, and grayMat stores the grayscale copy used for face detection.

Let’s add some codes in the functions of CvCameraViewListener2.

override fun onCameraViewStarted(width: Int, height: Int) {
    // Mat takes (rows, cols, type); the gray image is single-channel
    imageMat = Mat(height, width, CvType.CV_8UC4)
    grayMat = Mat(height, width, CvType.CV_8UC1)
}

override fun onCameraViewStopped() {
    imageMat.release()
    grayMat.release()
}

override fun onCameraFrame(inputFrame: CameraBridgeViewBase.CvCameraViewFrame?): Mat {
    imageMat = inputFrame!!.rgba()
    grayMat = inputFrame.gray()
    imageRatio = 1.0

    // detect face rectangles
    drawFaceRectangle()

    return imageMat
}

That’s it!

Homan: My work is done. Let's run the app.
Sb: What!? What is the content of drawFaceRectangle()?

Homan: Sure, please take out your pen and ruler to draw rectangles on each face.
Sb: Uh...?*#$%

I am kidding. Here is the code:

fun drawFaceRectangle() {
    val faceRects = MatOfRect()
    faceDetector!!.detectMultiScale(
        grayMat,
        faceRects)

    for (rect in faceRects.toArray()) {
        var x = 0.0
        var y = 0.0
        var w = 0.0 // right edge: x + width
        var h = 0.0 // bottom edge: y + height

        if (imageRatio.equals(1.0)) {
            x = rect.x.toDouble()
            y = rect.y.toDouble()
            w = x + rect.width
            h = y + rect.height
        } else { // map the downscaled coordinates back to full size
            x = rect.x.toDouble() / imageRatio
            y = rect.y.toDouble() / imageRatio
            w = x + (rect.width / imageRatio)
            h = y + (rect.height / imageRatio)
        }

        Imgproc.rectangle(
            imageMat,
            Point(x, y),
            Point(w, h),
            Scalar(255.0, 0.0, 0.0)
        )
    }
}

😄 7. First Test

Let’s run the app and point the camera at a group photo found on Google.

The app only works when the rotation reads 90 degrees. The frame rate is pretty slow at 5.04 fps.
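The article doesn’t show how the 5.04 fps figure is measured. Here is one way to do it, a small counter class (my own sketch, not code from the article) whose tick() you would call once per onCameraFrame(); the timestamp parameters exist only to make the class testable:

```kotlin
// Minimal frames-per-second meter: count frames and recompute the rate
// roughly once per second.
class FpsMeter(startMs: Long = System.currentTimeMillis()) {
    private var frames = 0
    private var lastMs = startMs

    var fps = 0.0
        private set

    fun tick(nowMs: Long = System.currentTimeMillis()) {
        frames++
        val elapsed = nowMs - lastMs
        if (elapsed >= 1000) {            // update about once per second
            fps = frames * 1000.0 / elapsed
            frames = 0
            lastMs = nowMs
        }
    }
}
```

OpenCV’s CameraBridgeViewBase also provides an enableFpsMeter() overlay, which is another way to read the frame rate off the preview.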

👇 8. Downsize Detection Image

It’s too slow. Let’s scale grayMat down.

//grayMat = inputFrame.gray()
grayMat = get480Image(inputFrame.gray())

//imageRatio = 1.0

I scale the image down so that its longer side becomes 480 px.

fun ratioTo480(src: Size): Double {
    val w = src.width
    val h = src.height
    val maxSide = 480.0 // cap the longer side at 480 px
    var ratio = 0.0

    if (w > h) { // landscape: width is the longer side
        if (w < maxSide) return 1.0
        ratio = maxSide / w
    } else {
        if (h < maxSide) return 1.0
        ratio = maxSide / h
    }

    return ratio
}

fun get480Image(src: Mat): Mat {
    val imageSize = Size(src.width().toDouble(), src.height().toDouble())
    imageRatio = ratioTo480(imageSize)

    if (imageRatio.equals(1.0)) return src

    val dstSize = Size(imageSize.width * imageRatio, imageSize.height * imageRatio)
    val dst = Mat()
    Imgproc.resize(src, dst, dstSize)
    return dst
}
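To sanity-check the scaling math, here is the same ratio logic as a plain function with no OpenCV types, together with the ratios it produces for a few common frame sizes (my own worked example, not from the article):

```kotlin
// Same rule as ratioTo480 above, written against plain doubles:
// scale so that the longer side of the frame is capped at 480 px.
fun ratioTo480(w: Double, h: Double): Double {
    val maxSide = 480.0
    return when {
        w > h -> if (w < maxSide) 1.0 else maxSide / w
        else  -> if (h < maxSide) 1.0 else maxSide / h
    }
}
```

For a 1280×720 preview this gives 480/1280 = 0.375, so the detection image shrinks to 480×270; a 320×240 frame is already small enough and keeps ratio 1.0.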

🙈 9. Second Test

Let’s run the app.

It’s much better after scaling down; however, the app can no longer detect the smaller faces, so face detection has a size limit. The speed is almost twice as fast as in the first test.
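One way to trade speed against that size limit is to tune the detector itself. The longer detectMultiScale overload comes from OpenCV’s Java API; the parameter values below are illustrative examples, not settings from the article, and the fragment reuses the faceDetector and grayMat fields defined earlier:

```kotlin
// Illustrative tuning of the cascade call in drawFaceRectangle():
val faceRects = MatOfRect()
faceDetector!!.detectMultiScale(
    grayMat,
    faceRects,
    1.1,               // scaleFactor: pyramid step; closer to 1.0 = slower, finds more faces
    3,                 // minNeighbors: higher = fewer false positives
    0,                 // flags: ignored by newer cascades
    Size(24.0, 24.0),  // minSize: smallest face searched for — this is the "size limit"
    Size()             // maxSize: empty means no upper bound
)
```

Remember that minSize applies to the downscaled grayMat, so a 24×24 minimum there corresponds to a larger face in the full-resolution frame.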
