Paradigms of facial recognition systems
No previous work has been reported on similarity recognition in images of family faces. It is nevertheless worth reviewing research on face recognition, because many of the issues encountered in our problem also arise in related problems. Facial recognition systems have been built according to two distinct paradigms. In the first paradigm, researchers extract facial features such as the eyes and nose and then apply clustering/classification algorithms for recognition. The second paradigm treats the full facial image as an input vector and bases the analysis and recognition on algebraic transformations of the input space. The current research adopts these two paradigms for recognizing family similarities.

Face recognition algorithms generally have three phases: a feature extraction phase (reducing the size of the test images), a learning phase (clustering/classification) and a recognition phase. The main difference among the methods proposed by researchers over the last three decades lies in the feature extraction step, where most of the effort has been concentrated; algorithms from the principal component analysis (PCA) family are the most popular choice for reducing the dimensionality of the problem space.

Turk and Pentland [] were the first to use PCA for facial recognition. The feature vectors for PCA are vectorized face images. PCA rotates the feature vectors from a large, highly correlated space into a small subspace whose basis vectors correspond to the directions of maximum variance in the original image space (a sketch is given at the end of this section). This subspace is called the eigenface space, in which unnecessary information such as lighting variation or noise is truncated and the [...]

[...] the first to use the wavelet transform with Haar filters to extract 16 images from the original image. The mean and standard deviation of each image form the feature vector. At the recognition stage, the Bhattacharyya distance is used to measure the distance between the feature vector of the input image and the feature vectors of the obtained subspace.

Kinage and Bhirud [Kin09] extend this study and use the two-dimensional wavelet transform combined with 2DPCA. First, a wavelet transform is applied to the image to obtain a reduced-size representation that is insensitive to lighting; then 2DPCA is used to extract the feature space. At the recognition stage, the Euclidean distance between the input image and the training samples determines the class to which the input image belongs (sketched below). Experiments on the AT&T face database show that the success rate of the proposed method is 94.4%.
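The following is a minimal sketch of the eigenface idea described above: vectorized face images are centered and the directions of maximum variance are obtained, here via SVD rather than an explicit covariance matrix. It assumes a hypothetical array `faces` of shape (n_samples, height*width) and is not the exact procedure of Turk and Pentland.

```python
# Minimal eigenface sketch (assumed setup): NumPy only; `faces` is a
# hypothetical (n_samples, height*width) array of vectorized training faces.
import numpy as np

def eigenfaces(faces, n_components=50):
    """Return the mean face and the top principal directions (eigenfaces)."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # SVD of the centered data gives the directions of maximum variance
    # without forming the full covariance matrix explicitly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:n_components]      # each row is one eigenface

def project(image_vector, mean_face, components):
    """Project a vectorized face onto the eigenface subspace."""
    return components @ (image_vector - mean_face)
```

A new face is then compared to the training set in this low-dimensional subspace instead of in the original pixel space.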
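The wavelet-based features mentioned before the gap can be illustrated as follows: a two-level Haar wavelet packet decomposition yields 16 sub-images, the mean and standard deviation of each sub-image form the feature vector, and a Gaussian closed form of the Bhattacharyya distance compares two such vectors band by band. PyWavelets (`pywt`) is an assumed dependency, and the exact decomposition and distance formulation used in the cited work may differ.

```python
# Hedged sketch: 2-level Haar wavelet packet -> 16 sub-images -> (mean, std)
# features, compared with a per-band Gaussian Bhattacharyya distance.
import numpy as np
import pywt

def wavelet_features(image):
    """Mean and std of the 16 level-2 Haar wavelet-packet sub-images."""
    wp = pywt.WaveletPacket2D(data=image, wavelet='haar', maxlevel=2)
    stats = []
    for node in wp.get_level(2):             # 16 sub-bands at level 2
        coeffs = node.data
        stats.append((coeffs.mean(), coeffs.std()))
    return np.array(stats)                   # shape (16, 2)

def bhattacharyya(f1, f2, eps=1e-12):
    """Sum of per-band Bhattacharyya distances under a Gaussian assumption."""
    mu1, s1 = f1[:, 0], f1[:, 1] + eps
    mu2, s2 = f2[:, 0], f2[:, 1] + eps
    term_var = 0.25 * np.log(0.25 * (s1**2 / s2**2 + s2**2 / s1**2 + 2))
    term_mean = 0.25 * (mu1 - mu2) ** 2 / (s1**2 + s2**2)
    return float(np.sum(term_var + term_mean))
```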
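Finally, a hedged sketch of the 2DPCA step with nearest-neighbour matching by Euclidean distance, in the spirit of the Kinage and Bhirud pipeline: the wavelet preprocessing is assumed to have already produced the reduced images, and all variable names are illustrative rather than taken from the original paper.

```python
# Hedged 2DPCA sketch: project each image matrix with the top eigenvectors of
# the image covariance matrix, then classify by nearest Frobenius distance.
import numpy as np

def fit_2dpca(images, n_components=10):
    """images: array of shape (n_samples, h, w).
    Returns the mean image and a (w, n_components) projection matrix."""
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    # Image covariance matrix: average of A^T A over the centered images.
    g = np.einsum('nij,nik->jk', centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(g)
    order = np.argsort(eigvals)[::-1][:n_components]
    return mean_img, eigvecs[:, order]

def features_2dpca(image, mean_img, proj):
    """Feature matrix of shape (h, n_components)."""
    return (image - mean_img) @ proj

def classify(test_feat, train_feats, labels):
    """Nearest neighbour by Euclidean (Frobenius) distance between features."""
    dists = [np.linalg.norm(test_feat - f) for f in train_feats]
    return labels[int(np.argmin(dists))]
```

The class of the nearest training sample in this feature space is taken as the identity of the input image.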