Showing posts with label Facial Recognition.

Saturday, November 12, 2016

Getting the LeNet model working with Face Recognition

In my last post, I talked about how the LeNet Convolutional Neural Network model is capable of handling much more complex data than the MNIST dataset it was designed around. We saw how it reached ~99% accuracy when learning to identify 10 faces from raw pixel intensities.

So, let’s see the code I used to get it working.

First of all, I needed a training dataset. For that, I created a set of face images of 10 subjects with around 500 images each.

A few of the images from the training dataset (yep, that's my face)

I use a file naming convention of <subject_label>-<subject_name>-<unique_number>.jpg (e.g. 0-Thimira-1475137898.65.jpg) for the training images, which makes it easy to read the images and extract their metadata in one go. (I will do a separate post on how to easily create training datasets of face images like this.)
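
To make the naming convention concrete, here is a minimal sketch of how such file names can be parsed. The load_metadata helper and the dataset_path argument are placeholders of mine, not code from this post:

 import os

 def load_metadata(dataset_path):
     """Parse <subject_label>-<subject_name>-<unique_number>.jpg file names."""
     metadata = []
     for filename in os.listdir(dataset_path):
         if not filename.lower().endswith(".jpg"):
             continue
         # e.g. "0-Thimira-1475137898.65.jpg" -> label "0", name "Thimira"
         label, name, _ = filename.split("-", 2)
         metadata.append((int(label), name, os.path.join(dataset_path, filename)))
     return metadata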

We'll mainly be using Keras to build the model, and scikit-learn for some utility functions. We'll need to import the following packages:
 # note: train_test_split lives in sklearn.model_selection in newer scikit-learn versions
 from sklearn.cross_validation import train_test_split
 from keras.optimizers import SGD
 from keras.utils import np_utils
 import numpy as np
 import argparse
 import cv2
 import os
 import sys
 from PIL import Image
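
For reference, a LeNet-style model can be sketched in Keras roughly as follows. This is my own sketch using the Keras 2 layer API; the build_lenet name, the filter counts, and the default 32x32 input size are assumptions, not the exact code from this post:

 from keras.models import Sequential
 from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense

 def build_lenet(width=32, height=32, depth=1, classes=10):
     # two CONV => RELU => POOL stages, then a fully connected head
     model = Sequential()
     model.add(Conv2D(20, (5, 5), padding="same", input_shape=(height, width, depth)))
     model.add(Activation("relu"))
     model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
     model.add(Conv2D(50, (5, 5), padding="same"))
     model.add(Activation("relu"))
     model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
     model.add(Flatten())
     model.add(Dense(500))
     model.add(Activation("relu"))
     model.add(Dense(classes))       # classes=10 for the 10 face subjects
     model.add(Activation("softmax"))
     return model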

Monday, November 7, 2016

Can the LeNet model handle Face Recognition?

I recently followed a blog post at PyImageSearch by Adrian Rosebrock on using the LeNet Convolutional Neural Network model on the MNIST dataset (i.e. handwritten digit recognition), using Keras with the Theano backend. I was able to try it out easily thanks to the very detailed and well thought out guide.

The LeNet model itself is quite simple, just five layers, yet it performs impressively well on the MNIST dataset: we can easily get around 98% accuracy with just 20 training iterations (epochs).
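
As a rough illustration of what that training run looks like, here is a sketch of my own (not the tutorial's exact code); it reuses the build_lenet sketch from the post above and assumes a batch size of 128:

 from keras.datasets import mnist
 from keras.optimizers import SGD
 from keras.utils import np_utils

 # load MNIST and reshape to (samples, 28, 28, 1) for the CNN
 (trainX, trainY), (testX, testY) = mnist.load_data()
 trainX = trainX.reshape((trainX.shape[0], 28, 28, 1)).astype("float32") / 255.0
 testX = testX.reshape((testX.shape[0], 28, 28, 1)).astype("float32") / 255.0
 trainY = np_utils.to_categorical(trainY, 10)
 testY = np_utils.to_categorical(testY, 10)

 model = build_lenet(width=28, height=28, depth=1, classes=10)
 model.compile(loss="categorical_crossentropy", optimizer=SGD(lr=0.01), metrics=["accuracy"])
 model.fit(trainX, trainY, batch_size=128, epochs=20, verbose=1)
 (loss, accuracy) = model.evaluate(testX, testY, verbose=1)
 print("accuracy: {:.2f}%".format(accuracy * 100))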

The training time for the model is also quite low. I tested on my MSI GE60 2PF Apache Pro laptop with CUDA enabled, and the training time was just 2 minutes 20 seconds on average. On CPU only (with CUDA disabled) it took around 30 minutes.

LeNet giving 98% accuracy on MNIST data
As you can see, we got 98.11% accuracy, and the model has correctly classified a digit that has been cut off.

It even classifies a quite deformed '2' correctly.
LeNet correctly classifying a deformed digit

Saturday, October 29, 2016

Getting Dlib Face Landmark Detection working with OpenCV

Dlib has excellent Face Detection and Face Landmark Detection algorithms built in. Its face detection is based on a Histogram of Oriented Gradients (HOG) feature combined with a linear classifier, applied in a sliding-window detection scheme (ref. http://dlib.net/), and it provides pre-trained models for face landmark detection. It also provides handy utility functions like dlib.get_frontal_face_detector() to make our lives easier.

Dlib Face Landmark Detection in action
Note: Image used for testing is in the Public Domain - https://en.wikipedia.org/wiki/File:Arnold_Schwarzenegger_edit%28ws%29.jpg

To check out Dlib with its native functions, you can try the Dlib example from the official site: http://dlib.net/face_landmark_detection.py.html. It works well, but we can do better.

Although Dlib offers all that simplicity in implementing face landmark detection, it's still no match for the flexibility of OpenCV. (Simply put, Dlib is a library for Machine Learning, while OpenCV is for Computer Vision and Image Processing.)

So, can we use Dlib face landmark detection functionality in an OpenCV context? Yes, here's how.
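
The gist is to let OpenCV handle the image I/O and drawing, and hand the pixel array to Dlib for detection. Here is a minimal sketch of my own (not necessarily the exact code from the post); it assumes the pre-trained shape_predictor_68_face_landmarks.dat model from dlib.net is in the working directory, and face.jpg is a placeholder file name:

 import cv2
 import dlib

 detector = dlib.get_frontal_face_detector()
 predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

 image = cv2.imread("face.jpg")                  # OpenCV loads images as BGR numpy arrays
 gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # Dlib works fine on the grayscale array

 # detect faces, then the 68 landmark points for each face
 for rect in detector(gray, 1):
     shape = predictor(gray, rect)
     for i in range(shape.num_parts):
         point = shape.part(i)
         cv2.circle(image, (point.x, point.y), 2, (0, 255, 0), -1)

 cv2.imshow("Landmarks", image)
 cv2.waitKey(0)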