Advanced Real-Time Facial Recognition Employee Attendance System
I am Lagdhir Vaghela. During my internship at BISAG-N, I built a face recognition Employee Attendance System using Python and machine learning, in a group with Krunal Thakkar and Vedang Thakkar. In this project, my work covered the GUI and the core Python code. To create the GUI I used the Tkinter library, which gives Python users a simple way to build GUI elements from the widgets in the Tk toolkit. Tk widgets can be used to construct buttons, menus, data fields, and more.
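As a rough illustration of how Tkinter widgets fit together, here is a minimal sketch of an attendance-system window. The widget labels and layout are illustrative assumptions, not the project's actual GUI code:

```python
import tkinter as tk

def build_gui():
    # Hypothetical window layout; the real project's widgets may differ.
    root = tk.Tk()
    root.title("Face Recognition Attendance System")

    # A labeled entry field for the employee's ID.
    tk.Label(root, text="Enter Employee ID:").grid(row=0, column=0)
    id_field = tk.Entry(root)
    id_field.grid(row=0, column=1)

    # Buttons for the two main workflows: capturing and training images.
    tk.Button(root, text="Take Images").grid(row=1, column=0)
    tk.Button(root, text="Train Images").grid(row=1, column=1)
    return root

# build_gui().mainloop()  # uncomment to open the window
```

The construction is wrapped in a function so the window only opens when explicitly requested.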
What is a Facial Recognition Attendance System?
Facial recognition is an easy and secure way of taking attendance. It is a biometric identification method that uses a face-scanning mechanism. The device captures the facial impression of employees and processes the information into a secure database. The scanned images are stored and mapped into a face coordinate structure. The software can be installed on any device, and employees' faces can be scanned accurately. Once registered, the device recognizes the matched face for all future check-ins.
In this project, an OpenCV-based face recognition approach is proposed. The model integrates a camera that captures an input image, an algorithm for detecting a face in that image, encoding and identifying the face, marking the attendance in a spreadsheet, and converting it into a PDF file. The training database is created by training the system with the faces of the authorized students. Features are extracted using the LBPH algorithm.
IMAGE PROCESSING
The facial recognition process can be split into two major stages: processing that occurs before detection, involving face detection and alignment, and recognition itself, performed via feature extraction and matching.
- FACE DETECTION: The primary function of this step is to determine whether any human faces appear in a given image and, if so, where they are located. The expected outputs are patches containing each face in the input image, which makes the face recognition system more robust and easier to design.
- FEATURE EXTRACTION: Following the face detection step, human face patches are extracted from the image. Each face patch is then converted into a vector with fixed coordinates or a set of landmark points.
- FACE RECOGNITION: The last step, after the faces are represented, is to identify them. For automatic recognition we need to build a face database: several images are taken of each person, and their features are extracted and stored in the database. When an input image is fed in, face detection and feature extraction are performed on it, and its features are compared against each face class stored in the database.
Before starting this project with Python, we have to install OpenCV, Pillow, Pandas, and NumPy (shutil and csv ship with Python's standard library, so they need no installation). These are the Python packages necessary for this project. To install them, simply run these pip commands in your terminal:
- pip install opencv-python
- pip install pillow
- pip install pandas
- pip install numpy
ALGORITHM
There are various algorithms used for facial recognition. Some of them are as follows:
- Eigen Faces
- Fisher faces
- Local Binary Patterns Histograms (LBPH)
LOCAL BINARY PATTERNS HISTOGRAMS
This method needs grayscale pictures for the training part. Unlike the other algorithms above, it is not a holistic approach: it describes the face through local texture patterns rather than treating the image as a whole.
A. PARAMETERS:
LBPH uses the following parameters:
i. Radius: The radius around the central pixel for the circular local binary pattern; generally set to 1.
ii. Neighbours: The number of sample points surrounding the central pixel, generally 8. The computational cost increases with the number of sample points.
iii. Grid X: The number of cells along the horizontal direction. As the number of cells increases the grid becomes finer, which increases the dimensionality of the feature vector.
iv. Grid Y: The number of cells along the vertical direction. As the number of cells increases the grid becomes finer, which increases the dimensionality of the feature vector.
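To see how Grid X and Grid Y drive the dimensionality of the feature vector, here is a small sketch. Since each cell contributes one 256-bin histogram (for 8 neighbours), the final descriptor length is simply the product of the grid dimensions and the bin count:

```python
def lbph_feature_length(grid_x, grid_y, bins=256):
    # Each grid cell contributes one `bins`-bin histogram; the final
    # descriptor concatenates all of them, so a finer grid means a
    # longer feature vector.
    return grid_x * grid_y * bins

print(lbph_feature_length(8, 8))  # 16384
print(lbph_feature_length(4, 4))  # 4096
```

With OpenCV's contrib module these parameters map directly onto `cv2.face.LBPHFaceRecognizer_create(radius=1, neighbors=8, grid_x=8, grid_y=8)`.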
B. ALGORITHM TRAINING:
For training, a dataset of facial images of the people to be recognized is required, along with a unique ID for each person, so that the approach can use this information to recognize an input image and produce an output. All images of the same person must use the same ID.
C. COMPUTATION OF THE ALGORITHM:
In the first step, an intermediate image is created that highlights the facial characteristics of the original image. Based on the parameters provided, a sliding-window procedure is used to achieve this. The facial image is converted to grayscale, and a 3x3-pixel window is taken, which can be expressed as a 3x3 matrix containing the intensity of each pixel (0–255). The central value of this matrix is taken as the threshold, and it defines the new values obtained from the 8 neighbours: a neighbour equal to or greater than the threshold becomes 1, otherwise 0. The matrix then contains only binary values, which are concatenated position by position into a single binary number. This binary value is converted into a decimal value, which becomes the new central value of the matrix, i.e. a pixel of the intermediate image. When the process is complete, we get a new image that better represents the characteristics of the original image.
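The thresholding and binary-to-decimal conversion for one 3x3 window can be sketched in plain Python. The clockwise reading order of the neighbours is one common convention (any fixed order works, as long as it is consistent):

```python
def lbp_code(window):
    """Compute the LBP code of a 3x3 window (a list of 3 lists of ints 0-255).

    Neighbours >= the centre become 1, the rest 0; the 8 bits are read
    clockwise starting at the top-left and converted to a decimal value.
    """
    center = window[1][1]
    # Clockwise: top-left, top, top-right, right, bottom-right,
    # bottom, bottom-left, left.
    neighbours = [window[0][0], window[0][1], window[0][2],
                  window[1][2], window[2][2], window[2][1],
                  window[2][0], window[1][0]]
    bits = ['1' if n >= center else '0' for n in neighbours]
    return int(''.join(bits), 2)

window = [[90, 10, 20],
          [50, 60, 95],
          [70, 30, 80]]
print(lbp_code(window))  # 154  (binary 10011010)
```

Sliding this window over every pixel of the grayscale image yields the intermediate LBP image described above.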
D. EXTRACTION OF HISTOGRAM:
Using the Grid X and Grid Y parameters, the image obtained in the previous step is split into multiple grids. The histogram can then be extracted as follows:
1. Since the image is in grayscale, each histogram will contain only 256 positions (0–255), representing the occurrences of each pixel intensity.
2. Each cell's histogram is then created and concatenated into one new, bigger histogram. Supposing an 8x8 grid, there will be 8 x 8 x 256 = 16,384 positions in total in the final histogram. Ultimately this histogram represents the features of the actual image.
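The grid split and histogram concatenation can be sketched with NumPy. This is an illustrative implementation of the step described above, assuming the LBP image from the previous stage is already available:

```python
import numpy as np

def lbph_histogram(lbp_image, grid_x=8, grid_y=8):
    """Split an LBP image into grid_x * grid_y cells and concatenate
    each cell's 256-bin histogram into one feature vector."""
    h, w = lbp_image.shape
    cell_h, cell_w = h // grid_y, w // grid_x
    hists = []
    for gy in range(grid_y):
        for gx in range(grid_x):
            cell = lbp_image[gy * cell_h:(gy + 1) * cell_h,
                             gx * cell_w:(gx + 1) * cell_w]
            # One 256-bin histogram of pixel intensities per cell.
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(hist)
    return np.concatenate(hists)

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
features = lbph_histogram(image)
print(features.shape)  # (16384,)
```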
E. THE FACE RECOGNITION:
At this point the algorithm is already trained. To find the image that matches the input image, the two histograms are compared, and the image corresponding to the closest histogram is returned. Different approaches can be used to calculate the distance between two histograms.
Here we use the Euclidean distance, based on the formula D = sqrt( Σᵢ (hist1ᵢ − hist2ᵢ)² ). Hence the result of this method is the ID of the image with the closest histogram. The method also returns the calculated distance as a 'confidence' measure. The 'confidence' can then be compared against a threshold to automatically decide whether the image is correctly recognized: if the confidence is lower than the given threshold, it implies that the image has been well recognized by the algorithm.
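The distance computation and the threshold test can be sketched as follows. The threshold value of 70 is an illustrative assumption; a suitable value depends on the dataset and must be tuned empirically:

```python
import numpy as np

def euclidean_distance(h1, h2):
    # D = sqrt( sum_i (h1_i - h2_i)^2 )
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return float(np.sqrt(np.sum((h1 - h2) ** 2)))

def is_recognized(confidence, threshold=70.0):
    # Lower distance means a closer match; a confidence below the
    # threshold counts as a successful recognition.
    return confidence < threshold

print(euclidean_distance([0, 3], [4, 0]))  # 5.0
```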
DATABASE CREATION:
The first step in the attendance system is the creation of a database of the faces to be used. Different individuals are considered, and a camera detects and records each person's frontal face. The number of frames captured can be adjusted for the desired accuracy level. These images are then stored in the database along with the Registration ID.
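One simple way to keep each stored image tied to its Registration ID is to encode the ID in the filename. The `Name.ID.Sample.jpg` convention below is a hypothetical example of such a scheme, not necessarily the project's actual one:

```python
import os

def sample_filename(name, reg_id, sample_no):
    """Build a filename that encodes the Registration ID,
    e.g. 'Alice.101.3.jpg' for sample 3 of person 101."""
    return f"{name}.{reg_id}.{sample_no}.jpg"

def id_from_filename(path):
    # Recover the Registration ID from a stored image's filename.
    return int(os.path.basename(path).split('.')[1])

fname = sample_filename("Alice", 101, 3)
print(fname)                    # Alice.101.3.jpg
print(id_from_filename(fname))  # 101
```

During training, the IDs parsed this way become the labels fed to the recognizer.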
TRAINING OF FACES:
The images are saved in grayscale after being recorded by the camera. The LBPH recognizer is used to train on these faces, since the resolutions of the training images and of the faces to be recognized can be completely different. A pixel of the image is taken as the centre and its neighbours are thresholded against it: if a neighbour's intensity is greater than or equal to the centre's, it is denoted as 1, and 0 if not. This results in binary patterns generally known as LBP codes.
FACE DETECTION:
The data of the trained faces are stored in .py format. The faces are detected using the Haar cascade frontal face module.
FACE RECOGNITION:
The stored data of the trained faces is compared against the detected faces, which are matched to student IDs and recognized. Faces are recorded in real time to guarantee the accuracy of the system. The system's precision depends heavily on the camera's condition.
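Once a face is recognized, its attendance can be appended to a spreadsheet, as described earlier. A minimal sketch using the standard-library `csv` module follows; the file name `Attendance.csv` and the column layout are assumptions for illustration:

```python
import csv
from datetime import datetime

def mark_attendance(reg_id, name, path="Attendance.csv"):
    """Append one recognized person as a row of (ID, Name, Date, Time)."""
    now = datetime.now()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([reg_id, name,
                         now.strftime("%Y-%m-%d"),
                         now.strftime("%H:%M:%S")])

mark_attendance(101, "Alice")
```

From here, the sheet can be loaded with pandas and exported to PDF as the project's pipeline describes.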
SYSTEM REQUIREMENTS:
- High processing power is required (8 GB RAM and a 1 GB Nvidia GT740M graphics card were used).
- Noisy images can reduce accuracy, so the quality of the input images matters.
TECHNOLOGY:
OpenCV, LBPH Algorithm, Tkinter GUI Library, DNN Face detection model
TOOLS USED:
PyCharm, HP HD camera
SCREENSHOTS OF THE PROJECT:
FUTURE SCOPE
The future scope of the proposed work includes capturing multiple detailed images of each student and using cloud technology to store these images.
CONCLUSION
This project provides a valuable attendance service for both teachers and students. It reduces manual-process errors by providing an automated and reliable attendance system based on face recognition technology.