CHIN HOWARD - Universiti Tunku Abdul Rahman


FACE RECOGNITION BASED AUTOMATED STUDENT ATTENDANCE SYSTEM

CHIN HOWARD

UNIVERSITI TUNKU ABDUL RAHMAN

FACE RECOGNITION BASED AUTOMATED STUDENT ATTENDANCE SYSTEM

CHIN HOWARD

A project report submitted in partial fulfilment of the requirements for the award of Bachelor of Engineering (Hons) Electronic Engineering

Faculty of Engineering and Green Technology
Universiti Tunku Abdul Rahman

April 2018

DECLARATION

I hereby declare that this project report is based on my original work except for citations and quotations which have been duly acknowledged. I also declare that it has not been previously and concurrently submitted for any other degree or award at UTAR or other institutions.

Signature :
Name      : CHIN HOWARD
ID No.    : 13AGB03261
Date      :

APPROVAL FOR SUBMISSION

I certify that this project report entitled "FACE RECOGNITION BASED AUTOMATED STUDENT ATTENDANCE SYSTEM" was prepared by CHIN HOWARD and has met the required standard for submission in partial fulfilment of the requirements for the award of Bachelor of Engineering (Hons) Electronic Engineering at Universiti Tunku Abdul Rahman.

Approved by,

Signature  :
Supervisor : Dr. Humaira Nisar
Date       :

The copyright of this report belongs to the author under the terms of the Copyright Act 1987 as qualified by the Intellectual Property Policy of Universiti Tunku Abdul Rahman. Due acknowledgement shall always be made of the use of any material contained in, or derived from, this report.

© 2018, CHIN HOWARD. All rights reserved.

ACKNOWLEDGEMENTS

I would like to thank everyone who contributed to the successful completion of this project. First, I would like to express my utmost gratitude to my research supervisor, Dr. Humaira Nisar, who in spite of being extraordinarily busy with her duties, took time to give invaluable advice and guidance throughout the development of the research.

In addition, I would also like to express my deepest appreciation to my loving parents and family members for their constant support and encouragement.

Last but not least, I am grateful for the unselfish cooperation and assistance that my friends have given me to complete this task.

FACE RECOGNITION BASED AUTOMATED STUDENT ATTENDANCE SYSTEM

ABSTRACT

The face is the representation of one's identity. Hence, we have proposed an automated student attendance system based on face recognition. Face recognition systems are very useful in real-life applications, especially in security control systems: airport protection systems use face recognition to identify suspects, and the FBI (Federal Bureau of Investigation) uses face recognition for criminal investigations. In our proposed approach, firstly, video framing is performed by activating the camera through a user-friendly interface. The face ROI is detected and segmented from the video frame using the Viola-Jones algorithm. In the pre-processing stage, scaling of the images is performed if necessary in order to prevent loss of information. Median filtering is applied to remove noise, followed by conversion of the colour images to grayscale images. After that, contrast limited adaptive histogram equalization (CLAHE) is applied to enhance the contrast of the images. In the face recognition stage, enhanced local binary pattern (LBP) and principal component analysis (PCA) are applied respectively in order to extract the features from the facial images. In our proposed approach, the enhanced local binary pattern outperforms the original LBP by reducing the illumination effect and increasing the recognition rate. Next, the features extracted from the test images are compared with the features extracted from the training images. The facial images are then classified and recognized based on the best result obtained from the combination of the algorithms, enhanced LBP and PCA. Finally, the attendance of the recognized student is marked and saved in an Excel file. A student who is not registered is also able to register on the spot, and a notification is given if a student signs in more than once. The average recognition accuracy is 100 % for good quality images, 94.12 % for low quality images and 95.76 % for the Yale face database when two images per person are trained.

TABLE OF CONTENTS

DECLARATION
APPROVAL FOR SUBMISSION
ACKNOWLEDGEMENTS
ABSTRACT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF SYMBOLS / ABBREVIATIONS
LIST OF APPENDICES

1 INTRODUCTION
  1.1 Background
  1.2 Problem Statement
  1.3 Aims and Objectives
  1.4 Thesis Organization

2 LITERATURE REVIEW
  2.1 Student Attendance System
  2.2 Face Detection
    2.2.1 Viola-Jones Algorithm
  2.3 Pre-Processing
  2.4 Feature Extraction
    2.4.1 Types of Feature Extraction
  2.5 Feature Classification and Face Recognition

  2.6 Evaluation

3 METHODOLOGY
  3.1 Methodology Flow
  3.2 Input Images
    3.2.1 Limitations of the Images
  3.3 Face Detection
    3.3.1 Pre-Processing
      3.3.1.1 Scaling of Image
      3.3.1.2 Median Filtering
      3.3.1.3 Conversion to Grayscale Image
      3.3.1.4 Contrast Limited Adaptive Histogram Equalization
  3.4 Feature Extraction
    3.4.1 Working Principle of Original LBP
    3.4.2 Working Principle of Proposed LBP
    3.4.3 Working Principle of PCA
    3.4.4 Feature Classification
  3.5 Face Recognition

4 RESULT AND DISCUSSION
  4.3 Comparison of LBP and PCA
  4.4 Comparison with Previous Researches
  4.5 Comparison with Luxand Face Recognition Application
  4.6 Weakness of the Algorithm
  4.7 Problems Faced and Solutions Taken

5 CONCLUSION AND RECOMMENDATION
  5.1 Conclusion
  5.2 Recommendation

REFERENCES

APPENDICES

LIST OF TABLES

2.1  Advantages & Disadvantages of Different Biometric Systems (Arun Katara et al., 2017)
2.2  Factors Causing Face Detection Difficulties (S. Aanjanadevi et al., 2017)
2.3  Advantages & Disadvantages of Face Detection Methods (Varsha Gupta and Dipesh Sharma, 2014)
2.4  Summary of Contrast Improvement of CLAHE and Histogram Equalization
2.5  Summary of Feature Extraction Accuracy from Handbook of Research on Emerging Perspectives in Intelligent Pattern Recognition (NK Kamila, 2015)
4.1  Subjective ...
4.3  Extracted Features for Different Radius of LBP in Yale Face Database and Own Database
4.4  Accuracy of LBP with Yale Face Database
4.5  Accuracy of LBP with Different Illumination Effects (Own Database)
4.6  Overall Performance of Enhanced LBP

4.7  Overall Performance of PCA
4.8  Overall Performance of Proposed Approach
4.9  Performance of Proposed Algorithm in Different Intensity Range
4.10 Summary of Comparison with Previous Researches
4.11 Comparison of Proposed Algorithm and Luxand Face Recognition

LIST OF FIGURES

1.1  Block Diagram of the General Framework
2.1  Haar Feature (Docs.opencv.org, 2018)
2.2  Integral of Image (Srushti Girhe et al., 2015)
2.3  False Face Detection (Kihwan Kim, 2011)
2.4  Images Show Checkerboard Effect Significantly Increasing from Left to Right (Gonzalez, R. C., & Woods, 2008)
2.5  Facial Images Converted to Grayscale, Histogram Equalization Applied and Images Resized to 100x100 (Shireesha Chintalapati and M.V. Raghunadh, 2013)
2.6  PCA Dimension Reduction (Liton Chandra Paul and Abdulla Al Sumam, 2012)
2.7  Class Separation in LDA (Suman Bhattacharyya and Kumar Rahul, 2013)
2.8  LBP Operator (Md. Abdur Rahim et al., 2013)
2.9  Artificial Neural Network (ANN) (Manisha M. Kasar et al., 2016)
2.10 DeepFace Architecture (Taigman et al., 2014)
3.1  Flow of the Proposed Approach (Training Part)
3.2  Flow of the Proposed Approach (Recognition Part)
3.3  Sample Images in Yale Face Database (Cvc.cs.yale.edu)

3.4  Sample of High Quality Images
3.5  Sample of Low Quality Images
3.6  Median Filtering Done on Three Channels
3.7  Median Filtering Done on a Single Channel
3.8  Conversion of Image to Grayscale Image
3.9  Contrast Improvement
3.10 Example of LBP Conversion
3.11 LBP with Different Radius Sizes
3.12 Proposed LBP Operator with Radius 2 and Its Encoding Pattern
3.13 Histogram of Image Blocks
4.1  User's Interface (MATLAB GUI)
4.2  Real Time Face Recognition (Automated)
4.3  Image Browsing and Face Recognition
4.4  False Recognition is Suppressed
4.5  Attendance in Excel File
4.6  Accuracy of Different Radius for Different Lighting Conditions
4.7  Different Illumination Effects
4.8  Accuracy of Radius 1, 2, 3, 4, 5 with Condition I, II, III, IV
4.9  Enhanced LBP Accuracy When One Image Is Trained and Two Images Are Trained
4.10 PCA Accuracy When One Image Is Trained and Two Images Are Trained
4.11 Overall Accuracy of High Quality Images and Low Quality Images

4.12 Images with Different Intensity
4.13 Performance of Proposed Algorithm in Different Intensity Range
4.14 Images of Students With or Without Wearing Glasses
4.15 Training Image VS Testing Image

LIST OF SYMBOLS / ABBREVIATIONS

χ²    Chi-square statistic
𝑑     distance
𝑥     input feature points
𝑦     trained feature points
𝑚𝑥    mean of x
𝑆𝑥    covariance matrix of x
𝑋𝑐    x coordinate of center pixel
𝑌𝑐    y coordinate of center pixel
𝑋𝑝    x coordinate of neighbour pixel
𝑌𝑝    y coordinate of neighbour pixel
𝑅     radius
𝜃     angle
𝑃     total sampling points
𝑁     total number of images
𝑀     length and height of images
𝛤𝑖    column vector
𝜑     mean face
Φ𝑖    mean face subtracted vector
𝐴     matrix with mean face removed
𝐴𝑇    transpose of 𝐴
𝐶     covariance matrix
𝑢𝑖    eigenvector of 𝐴𝐴𝑇
𝑣𝑖    eigenvector of 𝐴𝑇𝐴
λ     eigenvalue

𝑈     eigenface
𝑈𝑇    transpose of eigenface
Ω     projected image
Ω𝑖    projected image vector

LIST OF APPENDICES

A  Create Database by Enhanced LBP
B  Enhanced LBP Encoding Process with Different Radius Sizes
C  Enhanced LBP Separate into Blocks and Its Histogram
D  Enhanced LBP with Distance Classifier Chi-Square Statistic
E  Create Database by PCA
F  PCA Test Image Feature Extraction
G  PCA with Distance Classifier Euclidean Distance
H  MATLAB GUI

CHAPTER 1

INTRODUCTION

The main objective of this project is to develop a face recognition based automated student attendance system. In order to achieve better performance, the test images and training images of this proposed approach are limited to frontal and upright facial images that contain a single face only. The test images and training images have to be captured with the same device to avoid quality differences. In addition, the students have to register in the database to be recognized. The enrolment can be done on the spot through the user-friendly interface.

1.1 Background

Face recognition is crucial in daily life in order to identify family, friends or anyone we are familiar with. We might not realize that several steps actually take place in order to identify human faces. Human intelligence allows us to receive information and interpret it in the recognition process. We receive visual information through the image projected into our eyes, specifically onto the retina, in the form of light. Light is a form of electromagnetic wave that is radiated from a source onto an object and projected to human vision. Robinson-Riegler, G., & Robinson-Riegler, B. (2008) mentioned that after visual processing by the human visual system, we classify the shape, size, contour and texture of the object in order to analyse the information. The analysed information is then compared to other representations of objects or faces that exist in our memory for recognition. In fact, it is a hard challenge

to build an automated system with the same capability as a human to recognize faces, and such a system needs a large memory to distinguish different faces. For example, in universities there are many students of different races and genders, and it is impossible to remember every individual face without making mistakes. In order to overcome human limitations, computers with almost limitless memory, high processing speed and power are used in face recognition systems.

The human face is a unique representation of individual identity. Thus, face recognition is defined as a biometric method in which identification of an individual is performed by comparing a real-time captured image with the stored images of that person in the database (Margaret Rouse, 2012).

Nowadays, face recognition systems are prevalent due to their simplicity and excellent performance. For instance, airport protection systems and the FBI use face recognition for criminal investigations by tracking suspects, missing children and drug activities (Robert Silk, 2017). Apart from that, Facebook, a popular social networking website, implements face recognition to allow users to tag their friends in photos for entertainment purposes (Sidney Fussell, 2018). Furthermore, Intel allows users to use face recognition to get access to their online accounts (Reichert, C., 2017). Apple allows users to unlock their mobile phone, the iPhone X, by using face recognition (deAgonia, M., 2017).

Work on face recognition began in 1960. Woody Bledsoe, Helen Chan Wolf and Charles Bisson introduced a system which required the administrator to locate the eyes, ears, nose and mouth in images. The distances and ratios between the located features and common reference points were then calculated and compared. The studies were further enhanced by Goldstein, Harmon, and Lesk in 1970 by using other features such as hair colour and lip thickness to automate the recognition. In 1988, Kirby and Sirovich first suggested principal component analysis (PCA) to solve the face recognition problem. Many studies on face recognition have been conducted continuously until today (Ashley DuVal, 2012).

1.2 Problem Statement

Traditional student attendance marking techniques often face many problems. The face recognition student attendance system emphasizes simplicity by eliminating classical attendance marking techniques such as calling student names or checking identification cards. These methods not only disturb the teaching process but also cause distraction for students during exam sessions. Apart from calling names, an attendance sheet may be passed around the classroom during lecture sessions. Classes with a large number of students might find it difficult to have the attendance sheet passed around the class. Thus, a face recognition student attendance system is proposed to replace the manual signing of attendance, which is burdensome and distracts students from the lesson. Furthermore, the face recognition based automated student attendance system is able to overcome the problem of fraudulent sign-ins, and lecturers do not have to count the number of students several times to ensure their presence.

The paper by Zhao, W. et al. (2003) lists the difficulties of facial identification. One of these difficulties is the discrimination between known and unknown images. In addition, Pooja G.R. et al. (2010) found that the training process for a face recognition student attendance system is slow and time-consuming. Priyanka Wagh et al. (2015) mentioned that different lighting and head poses are often the problems that degrade the performance of a face recognition based student attendance system.

Hence, there is a need to develop a real-time student attendance system, which means the identification process must be done within defined time constraints to prevent omission. The extracted features from facial images which represent the identity of the students have to be consistent under changes in background, illumination, pose and expression. High accuracy and fast computation time will be the evaluation criteria of the performance.

1.3 Aims and Objectives

The objective of this project is to develop a face recognition based automated student attendance system. Expected achievements in order to fulfil the objectives are:

- To detect the face segment from the video frame.
- To extract the useful features from the face detected.
- To classify the features in order to recognize the face detected.
- To record the attendance of the identified student.

Figure 1.1 Block Diagram of the General Framework (Image Acquisition through to Attendance)

1.4 Thesis Organization

Chapter 2 includes a brief review of the approaches and studies that have been done previously by other researchers, whereas Chapter 3 describes the proposed methods and approaches used to obtain the desired output. The results of the proposed approach are presented and discussed in Chapter 4. The conclusion, as well as some recommendations, are included in Chapter 5.

CHAPTER 2

LITERATURE REVIEW

2.1 Student Attendance System

Arun Katara et al. (2017) mentioned the disadvantages of RFID (Radio Frequency Identification) card systems, fingerprint systems and iris recognition systems. An RFID card system is implemented due to its simplicity. However, users tend to help their friends check in as long as they have their friend's ID card. The fingerprint system is indeed effective but not efficient, because verification takes time, so users have to line up and verify one by one. For face recognition, on the other hand, the human face is always exposed and contains less information compared to the iris. An iris recognition system, which captures more detail, might invade the privacy of the user. Voice recognition is available, but it is less accurate compared to other methods. Hence, a face recognition system is suggested for the student attendance system.

Table 2.1 Advantages & Disadvantages of Different Biometric Systems (Arun Katara et al., 2017)

System type               Advantages   Disadvantages
RFID card system          Simple       Fraudulent usage
Fingerprint system        Accurate     Time-consuming
Voice recognition system  -            Less accurate compared to others
Iris recognition system   Accurate     Privacy invasion

2.2 Face Detection

The difference between face detection and face recognition is often misunderstood. Face detection determines only the face segment or face region in an image, whereas face recognition identifies the owner of the facial image. S. Aanjanadevi et al. (2017) and Wei-Lun Chao (2007) presented a few factors which cause face detection and face recognition to encounter difficulties. These factors are background, illumination, pose, expression, occlusion, rotation, scaling and translation. The definition of each factor is tabulated in Table 2.2.

Table 2.2 Factors Causing Face Detection Difficulties (S. Aanjanadevi et al., 2017)

Background: Variation of the background and environment around people in the image, which affects the efficiency of face recognition.
Illumination: Variation caused by different lighting environments, which degrades facial feature detection.
Pose: Different angles of the acquired facial image, which distort the recognition process, especially for Eigenface and Fisherface recognition methods.
Expression: Different facial expressions are used to express feelings and emotions. Expression variation causes spatial relations and facial-feature shapes to change.
Occlusion: Part of the human face is unobserved. This diminishes the performance of face recognition algorithms due to the information deficiency.
Rotation, scaling and translation: Transformations of images which might distort the original information about the images.

There are a few face detection methods that previous researchers have worked on. However, most of them used frontal upright facial images containing only one face, with the face region fully exposed without obstacles and free from spectacles.

Akshara Jadhav et al. (2017) and P. Arun Mozhi Devan et al. (2017) suggested the Viola-Jones algorithm for face detection in student attendance systems. They concluded that out of methods such as face-geometry-based methods, feature-invariant methods and machine-learning-based methods, the Viola-Jones algorithm is not

only fast and robust, but also gives a high detection rate and performs better in different lighting conditions. Rahul V. Patil and S. B. Bangar (2017) also agreed that the Viola-Jones algorithm gives better performance in different lighting conditions. In addition, in the paper by Mrunmayee Shirodkar et al. (2015), they mentioned that the Viola-Jones algorithm is able to eliminate the issues of illumination as well as scaling and rotation. Naveed Khan Balcoh (2012) proposed that the Viola-Jones algorithm is the most efficient among algorithms such as AdaBoost, FloatBoost, neural networks, the S-AdaBoost algorithm, support vector machines (SVM) and the Bayes classifier.

Varsha Gupta and Dipesh Sharma (2014) studied Local Binary Pattern (LBP), the AdaBoost algorithm, local successive mean quantization transform (SMQT) features, the sparse network of winnows (SNOW) classifier method and neural-network-based face detection methods in addition to the Viola-Jones algorithm. They concluded that the Viola-Jones algorithm has the highest speed and highest accuracy among all the methods. Although other methods, for instance Local Binary Pattern and SMQT features, have simple computation and are able to deal with the illumination problem, their overall performance is weaker than the Viola-Jones algorithm for face detection. The advantages and disadvantages of the methods are tabulated in Table 2.3.

Table 2.3 Advantages & Disadvantages of Face Detection Methods (Varsha Gupta and Dipesh Sharma, 2014)

Viola-Jones algorithm
  Advantages: 1. High detection speed. 2. High accuracy.
  Disadvantages: 1. Long training time. 2. Limited head pose. 3. Not able to detect dark faces.

Local Binary Pattern
  Advantages: 1. Simple computation. 2. High tolerance against monotonic illumination changes.
  Disadvantages: 1. Only used for binary and grey images. 2. Overall performance is inaccurate compared to the Viola-Jones algorithm.

AdaBoost algorithm (part of the Viola-Jones algorithm)
  Advantages: Need not have any prior knowledge about face structure.
  Disadvantages: The result highly depends on the training data and is affected by weak classifiers.

SMQT Features and SNOW Classifier Method
  Advantages: 1. Capable of dealing with the lighting problem in object detection. 2. Efficient in computation.
  Disadvantages: Regions containing grey values very similar to the face will be misidentified as faces.

Neural-Network
  Advantages: High accuracy, but only if a large set of images is trained.
  Disadvantages: 1. Detection process is slow and computation is complex. 2. Overall performance is weaker than the Viola-Jones algorithm.

2.2.1 Viola-Jones Algorithm

The Viola-Jones algorithm, introduced by P. Viola and M. J. Jones (2001), is the most popular algorithm to localize the face segment in static images or video frames. Basically, the Viola-Jones algorithm consists of four parts: the first part is the Haar feature, the second part is where the integral image is created, followed by the implementation of AdaBoost in the third part, and lastly the cascading process.

Figure 2.1 Haar Feature (Docs.opencv.org, 2018)

The Viola-Jones algorithm analyses a given image using Haar features consisting of multiple rectangles (Mekha Joseph et al., 2016). Figure 2.1 shows several types of Haar features. The features act as window functions mapped onto the image. A single value representing each feature can be computed by subtracting the sum of pixels under the white rectangle(s) from the sum under the black rectangle(s) (Mekha Joseph et al., 2016). The illustration is shown in Figure 2.2.

Figure 2.2 Integral of Image (Srushti Girhe et al., 2015)

The value of the integral image at a specific location is the sum of the pixels to the left of and above that location. To illustrate, the value of the integral image at location 1 is the sum of the pixels in rectangle A. The values at the other locations are cumulative: the value at location 2 is the sum of A and B, (A + B); at location 3 it is (A + C); and at location 4 it is the sum of all the regions, (A + B + C + D) (Srushti Girhe et al., 2015). Therefore, the sum within region D can be computed with only additions and subtractions of the diagonal corners, 4 + 1 − (2 + 3), which eliminates rectangles A, B and C.

Burak Ozen (2017) and Chris McCormick (2013) mentioned that AdaBoost, also known as 'Adaptive Boosting', is a famous boosting technique in which multiple "weak classifiers" are combined into a single "strong classifier". The training set is selected for each new classifier according to the results of the previous classifier, and weights are assigned to each classifier according to its significance.

However, false detections may occur, and they had to be removed manually based on human vision. Figure 2.3 shows an example of false face detection (circled in blue).
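The four-corner lookup described above can be sketched in a few lines of pure Python. The thesis itself was implemented in MATLAB, so this is only an illustrative sketch; the function names `integral_image` and `rect_sum` are hypothetical.

```python
def integral_image(img):
    # ii[r][c] = sum of img[0..r][0..c]; built with one running row sum
    # plus the already-computed row above, so each pixel is visited once.
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r][c] = row_sum + (ii[r - 1][c] if r > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    # Sum over img[top..bottom][left..right] via the "4 + 1 - (2 + 3)"
    # identity: only four lookups regardless of rectangle size.
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total
```

A Haar feature value is then just `rect_sum` over the white rectangle(s) minus `rect_sum` over the black rectangle(s), which is why the detector is fast enough for real-time video framing.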

Figure 2.3 False Face Detection (Kihwan Kim, 2011)

2.3 Pre-Processing

Subhi Singh et al. (2015) suggested cropping the detected face and converting the colour image to grayscale for pre-processing. They also proposed applying an affine transform to align the facial image based on the coordinates of the middle of the eyes, and scaling of the image. Arun Katara et al. (2017), Akshara Jadhav et al. (2017), and Shireesha Chintalapati and M.V. Raghunadh (2013) all proposed histogram equalization applied to the facial image, and scaling of images, for pre-processing.

Pre-processing enhances the performance of the system and plays an essential role in improving the accuracy of face recognition. Scaling is one of the important pre-processing steps to manipulate the size of the image. Scaling down an image increases the processing speed by reducing the system computations, since the number of pixels is reduced. The size and pixels of the image carry spatial information. Gonzalez, R. C. and Woods (2008) mention that spatial information is a measure of the smallest discernible detail in an image. Hence, spatial information has to be manipulated carefully to avoid distortion of images and to prevent the checkerboard effect. The size should be the same for all images for normalization and standardization purposes. Subhi Singh et al. (2015) proposed PCA (Principal Component Analysis) to extract features from facial images; since equal length and width of the image are preferred, images were scaled to 120 × 120 pixels.

Figure 2.4 Images Show Checkerboard Effect Significantly Increasing from Left to Right (Gonzalez, R. C., & Woods, 2008)

Besides scaling, colour images are usually converted to grayscale for pre-processing. Grayscale images are believed to be less sensitive to illumination conditions and take less computational time. A grayscale image is an 8-bit image in which each pixel ranges from 0 to 255, whereas a colour image is a 24-bit image in which each pixel can have 16,777,216 values. Hence, a colour image requires more storage space and more computational power compared to a grayscale image (Kanan and Cottrell, 2012). If colour is not necessary for the computation, it is considered noise. In addition, pre-processing is important for enhancing the contrast of images. In the paper of Pratiksha M. Patel (2016), he mentioned that histogram equalization is one of the pre-processing methods for improving the contrast of an image. It provides a uniform distribution of intensities over the intensity level axis, which is able to reduce uneven illumination effects at the same time.
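The 24-bit to 8-bit reduction described above is commonly done with a weighted sum of the three channels rather than a plain average, because the eye is more sensitive to green. A minimal sketch, assuming the ITU-R BT.601 luma weights (the same weighting MATLAB's rgb2gray uses; the function name is illustrative):

```python
def rgb_to_gray(r, g, b):
    # Weighted luma sum: three 8-bit channels collapse to a single
    # 8-bit intensity in [0, 255].
    return round(0.299 * r + 0.587 * g + 0.114 * b)
```

For example, pure white (255, 255, 255) maps to 255 and pure red (255, 0, 0) maps to 76, reflecting red's smaller contribution to perceived brightness.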

Figure 2.5 Facial Images Were Converted to Grayscale, Histogram Equalization Was Applied and Images Were Resized to 100x100 (Shireesha Chintalapati and M.V. Raghunadh, 2013)

There are a few methods to improve the contrast of images other than histogram equalization. Neethu M. Sasi and V. K. Jayasree (2013) studied histogram equalization and contrast limited adaptive histogram equalization (CLAHE) in order to enhance myocardial perfusion images. Aliaa A. A. Youssif (2006) studied contrast enhancement together with illumination equalization methods to segment retinal vasculature. In addition, in the paper by A., I. and E.Z., F. (2016), image contrast enhancement techniques and their performance were studied. Unlike histogram equalization, which operates on the data of the entire image, CLAHE operates on small regions of the image. Hence, contrast limited adaptive histogram equalization is believed to outperform conventional histogram equalization. A summary of the literature review for contrast improvement is tabulated in Table 2.4.
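To make the global-versus-local distinction concrete, global histogram equalization can be sketched as a single CDF remap over the whole image; CLAHE differs in that it applies the same remap per tile, clips the histogram first, and blends tile boundaries with bilinear interpolation. A sketch of the global version under those assumptions (pure Python, illustrative names):

```python
def equalize_hist(pixels, levels=256):
    # pixels: flat list of grayscale intensities in [0, levels - 1].
    # Remap each intensity through the normalized cumulative histogram
    # so the output intensities spread over the full range.
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:          # constant image: nothing to equalize
        return list(pixels)
    scale = (levels - 1) / (n - cdf_min)
    return [round((cdf[p] - cdf_min) * scale) for p in pixels]
```

Because the remap is driven by the global histogram, a dark corner and a bright window are stretched by the same curve, which is exactly the over-enhancement weakness Table 2.4 attributes to HE and that CLAHE's per-tile processing avoids.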

Table 2.4 Summary of Contrast Improvement

Histogram Equalization (HE)
  Description: Enhancement is performed by transforming the intensity values based on the global statistics of the image, resulting in uniformly distributed intensities.
  Advantages: 1. Less sensitive to noise.
  Disadvantages: 1. Depends on the global statistics of the image. 2. Causes over-enhancement for some parts and less enhancement for others.

Contrast Limited Adaptive Histogram Equalization (CLAHE)
  Description: Unlike HE, which works on the entire image, it works on small regions (tiles). The contrast of each region is enhanced, and bilinear interpolation is then used to merge neighbouring tiles.
  Advantages: 1. Prevents over-enhancement.
  Disadvantages: 1. More sensitive to noise.

2.4 Feature Extraction

A feature is a set of data that represents the information in an image. Extraction of facial features is most essential for face recognition. However, selection of features can be an arduous task. A feature extraction algorithm has to be consistent and stable over a variety of changes in order to give high accuracy.
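As a preview of the operator used later in this report, the basic LBP computation thresholds the 8 neighbours of a pixel against its centre and packs the results into one byte; histograms of these codes, compared with the chi-square statistic listed in the symbols table, then drive classification. A minimal sketch in pure Python with illustrative names (the enhanced LBP of Chapter 3 additionally varies the sampling radius):

```python
def lbp_code(block):
    # block: 3x3 neighbourhood; walk the 8 neighbours clockwise from the
    # top-left corner and emit bit 1 when neighbour >= centre.
    centre = block[1][1]
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for r, c in ring:
        code = (code << 1) | (1 if block[r][c] >= centre else 0)
    return code

def chi_square(h1, h2):
    # Chi-square distance between two LBP histograms; smaller = closer.
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)
```

In a full pipeline, `lbp_code` is evaluated at every pixel, the image is split into blocks, one histogram of codes is built per block, and the concatenated histograms of a test face are matched against the training database with `chi_square`.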
