Recognizing human faces is one of the problems that makes computer vision difficult. Early experiments in artificial vision focused on small, simplified problems: the world in which objects were observed had to be carefully constructed and controlled. Systems could identify boxes in the shape of regular polygons, or simple objects such as a pair of scissors. The background of most images was controlled so that there was good contrast between the objects under examination and the rest of the scene. The human face offers no such simplifications. Face recognition is difficult precisely because the face is a complex, natural object: it has no easily (automatically) identifiable edges or features. It is therefore hard to build a mathematical model of the face that can serve as prior knowledge when analyzing an image.

Face recognition has widespread applications. The most obvious is human-computer interaction: computers could be made easier to use if a person could simply sit down at a terminal and be automatically identified, with their personal preferences loaded. The same idea can improve other technologies such as speech recognition: once the computer recognizes who is speaking, it can load that speaker's profile automatically. Security could also benefit from face recognition technology. Recognizing a person's face is one way to establish identity, and as a security measure it is easy and quick: unlike retinal scans, the process does not inconvenience the subject. However, it does not guarantee identification, since face recognition is susceptible to changes in human appearance (e.g., shaving, hairstyle, and aging). Face recognition could also be useful in areas such as search engine technology: face detection systems make it possible to search for people in images. For well-known people, this could be done by simply giving the person's name or a photograph. The technology is also well suited to criminal mugshot databases, where automated recognition is feasible because pose and lighting are uniform. In general, it has the potential to go beyond the traditional textual cues used for indexing images.

The History of Face Recognition Development

Face recognition is one of the most valuable applications of image analysis. Building an automated system that recognizes faces is difficult: although we recognize familiar faces easily, our skill degrades considerably when dealing with unknown faces. Computers have the potential to overcome this limitation, since they offer nearly limitless computational speed and memory. A simple Google search for the phrase "face recognition" returns 9,422 matches; 1,332 of these are articles from the year 2009 alone.

Face recognition could benefit many industries; video surveillance, human-machine interaction, photo cameras, and virtual reality are just a few examples. As a multidisciplinary problem, it attracts interest from many fields: it is not only a computer vision problem, but also involves pattern recognition, neural networks, image processing, and psychology. Psychologists first studied the topic in the 1950s [21], in work related to facial expression, emotion interpretation, and the perception of gestures. Engineers began to take an interest in face recognition in the 1960s.

Woodrow W. Bledsoe is one of the pioneers of this field. In 1960, Bledsoe, together with other researchers, established Panoramic Research, Inc. in Palo Alto, California. Most of the company's work involved AI-related contracts from the U.S. Department of Defense and various intelligence agencies [4]. In 1964 and 1965, Bledsoe and his colleagues worked on using computers to recognize human faces. Because the funding came from an unnamed intelligence agency, little of that work was published. He later continued his research at the Stanford Research Institute. Bledsoe designed and implemented a semi-automatic system: a human operator selected a set of facial coordinates, and the computer used that information to recognize faces. He described the main difficulties that, even 50 years later, still confront face recognition: variations in lighting, head rotation, facial expression, and aging. Researchers subsequently continued along this line, trying to measure subjective features such as ear size or the distance between the eyes. At Bell Laboratories, A. Jay Goldstein, Leon D. Harmon, and Ann B. Lesk used this approach, describing faces with a vector of 21 subjective features such as nose size and eyebrow weight. In 1973, Fischler and Elschlager attempted to measure similar features automatically; their algorithm used local template matching and a global measure of fit to locate and measure facial features.

The 1970s saw a variety of approaches. Some tried to define a face as a set of geometric parameters and then perform pattern recognition based on those parameters. Kanade, however, was the first to develop a fully automated face recognition system: he designed and implemented a face recognition program that ran on a specially designed computer system. The algorithm extracted sixteen facial parameters automatically. Kanade's results showed that automatic extraction performed only slightly worse than manual extraction, achieving correct identification rates of 45-75%, and that better identification rates were obtained when irrelevant features were ignored. Later efforts focused on improving the measurement of subjective features: Mark Nixon, for example, provided a geometric measurement for eye spacing [5], and the template matching strategy was improved with "deformable templates". New approaches were also introduced in this period, with some researchers using artificial neural networks to recognize faces [1].

Sirovich and Kirby [10] were the first to apply eigenfaces to image processing, a technique that would become dominant in the following years. Their method relied on Principal Component Analysis (PCA): the goal was to represent an image in a lower-dimensional space without losing much information, and then to reconstruct it [6]. Their work became the basis of many face recognition algorithms. Matthew Turk and Alex Pentland of MIT presented a work that used eigenfaces for recognition [11]; their algorithm was able to locate, track, and classify a subject's head. Since 1990, face recognition has received a great deal of attention, and different algorithms have emerged from different approaches; among the most relevant are PCA, ICA, LDA, and their derivatives. This article will discuss these algorithms and approaches.
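To make the eigenfaces idea concrete, here is a minimal PCA sketch in Python. The array shapes, function names, and random stand-in data are illustrative assumptions, not the method of any cited paper:

```python
import numpy as np

def compute_eigenfaces(faces, k):
    """faces: (n_images, n_pixels) array of flattened grayscale faces."""
    mean = faces.mean(axis=0)
    centered = faces - mean                       # center the data
    # SVD of the centered data yields the principal components directly
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]                           # each row of vt is an eigenface

def project(image, mean, eigenfaces):
    """Represent an image by its k PCA coefficients."""
    return eigenfaces @ (image - mean)

def reconstruct(coeffs, mean, eigenfaces):
    """Rebuild a low-dimensional approximation of the image."""
    return mean + coeffs @ eigenfaces

# Toy usage with random data standing in for real face images:
rng = np.random.default_rng(0)
faces = rng.random((20, 64 * 64))                 # 20 fake 64x64 "faces"
mean, eig = compute_eigenfaces(faces, k=8)
coeffs = project(faces[0], mean, eig)
approx = reconstruct(coeffs, mean, eig)
```

Recognition then reduces to comparing short coefficient vectors rather than raw pixels, which is what makes the approach attractive.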

Points of View in Recognition Algorithm Design

Early face recognition algorithms worked with the most prominent facial features, in an intuitive attempt to replicate the human ability to recognize faces. Efforts were made [2] to assess the importance of certain intuitive features (mouth, eyes, cheeks) and of geometric measurements (between-eye distance [8], width-to-length ratio). This question remains relevant, since determining which facial features are essential for good recognition can help improve both accuracy and performance.
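As an illustration, the hypothetical snippet below turns a few assumed landmark coordinates into exactly these kinds of measurements; the landmark names and the normalization are illustrative choices, not taken from the cited works:

```python
import math

def geometric_features(landmarks, face_width, face_height):
    """landmarks: dict of (x, y) points; returns a small geometric feature vector."""
    (lx, ly), (rx, ry) = landmarks["left_eye"], landmarks["right_eye"]
    eye_distance = math.hypot(rx - lx, ry - ly)   # between-eye distance
    ratio = face_width / face_height              # width-to-length ratio
    # Normalizing by face width makes the distance scale-invariant
    return [eye_distance / face_width, ratio]

features = geometric_features(
    {"left_eye": (110, 140), "right_eye": (190, 142)},
    face_width=260, face_height=330,
)
```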

The introduction of abstract mathematical tools such as eigenfaces opened a new way to recognize faces: the similarity between faces could be computed without reference to human-relevant features, allowing a higher level of abstraction. This is not to say that human-relevant features became useless; skin color, for instance, is an important cue for detecting faces [9, 3], and a normalization step is commonly performed before feature extraction [12]. Abstractions are essential, however, because they allow the problem to be approached with purely mathematical or computational methods.
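As a concrete example of the skin-color cue, here is a rough thresholding sketch for 8-bit RGB images; the threshold values are common rule-of-thumb numbers, not taken from [9, 3]:

```python
import numpy as np

def skin_mask(rgb):
    """Return a boolean mask of pixels whose color is plausibly skin."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = rgb.max(axis=-1).astype(int) - rgb.min(axis=-1).astype(int)
    return (
        (r > 95) & (g > 40) & (b > 20)   # bright enough
        & (r > g) & (r > b)              # red-dominant, typical of skin
        & (spread > 15)                  # not a gray pixel
        & (abs(r - g) > 15)
    )
```

A detector would then look for large connected regions in the mask rather than trusting individual pixels.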

Structure of Face Recognition Systems

Face recognition involves many sub-problems, which are classified in different ways in the bibliography. This section covers some of those classifications, and a general, unified classification is offered at the end.

A Generic Face Recognition System

A face recognition system takes an image or a video stream as input, and outputs an identification or verification of the subject(s) that appear in it. Some approaches [15] define a face recognition system as a three-step process; see Figure 1.1. From this point of view, the face detection and feature extraction phases could run simultaneously.

Figure 1.1: A generic face recognition system.

Face detection is the process of extracting faces from scenes: the system positively identifies a certain image region as a face. This procedure has many uses of its own, including face tracking and pose estimation. The next step, feature extraction, involves obtaining the relevant facial features from the data. These features may be certain face regions, variations, angles, or measures, which can be human-relevant (e.g., eye spacing) or not. This phase also has other applications, such as facial feature tracking and emotion recognition. Finally, the system recognizes the face. In an identification task, the system would report an identity from a database. This phase involves a comparison method, a classification algorithm, and an accuracy measure, and it uses methods common to many other fields, such as data mining and sound engineering. These phases can be merged or reordered, so many different engineering approaches to the face recognition problem exist: face detection and recognition could be performed jointly, or expression analysis could precede normalization [10].
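Here is a minimal sketch of that three-step pipeline, assuming NumPy-style images and stand-in callables for each phase; every name is illustrative, not a real library's API:

```python
import numpy as np

def identify(image, detect, extract, predict):
    """Run detection, feature extraction, and classification in sequence."""
    results = []
    for (top, bottom, left, right) in detect(image):        # 1: face detection
        features = extract(image[top:bottom, left:right])   # 2: feature extraction
        results.append(predict(features))                   # 3: face recognition
    return results

# Toy wiring with stand-in callables:
subjects = identify(
    image=np.zeros((100, 100)),
    detect=lambda img: [(10, 42, 10, 42)],       # one fixed fake detection
    extract=lambda crop: crop.mean(axis=0),      # trivial stand-in features
    predict=lambda feat: "subject_1",            # constant stand-in classifier
)
```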

Face Detection Problem Structure

Face detection includes several sub-problems. Some systems detect and locate faces at the same time; others first run a detection routine and, if it is positive, then try to locate the face. Figure 1.2 illustrates the steps involved in face detection. First, the dimensionality of the data is reduced in order to achieve an acceptable response time. Some preprocessing may also be done to adapt the input image to the algorithm's requirements. Then, some algorithms analyze the image as is, while others first extract relevant facial regions. The next step usually involves extracting facial features or measurements, which are then weighted, evaluated, or compared to decide whether a face is present and where it is. Finally, some algorithms can learn from new data, expanding their detection capabilities. Face detection can therefore be seen as a two-class problem: deciding whether or not there is a face in the picture. It is a simplified version of face recognition, which must classify a given face among as many classes as there are candidates. Consequently, many face detection techniques can also perform recognition tasks, and face recognition algorithms are often based on techniques first used for face detection.
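The sketch below ties these common steps together: crude dimensionality reduction, a sliding-window scan, and a two-class face/non-face decision. The scorer is a toy placeholder, not a real detector:

```python
import numpy as np

def looks_like_face(patch):
    """Placeholder scorer; a real system would use a trained classifier."""
    return float(patch.std() > 30)               # toy heuristic, not a real test

def detect_faces(gray, window=32, stride=16, threshold=0.5):
    """gray: 2-D array; returns (row, col) corners of windows scored as faces."""
    small = gray[::2, ::2]                       # crude dimensionality reduction
    hits = []
    for r in range(0, small.shape[0] - window, stride):
        for c in range(0, small.shape[1] - window, stride):
            patch = small[r:r + window, c:c + window]
            if looks_like_face(patch) > threshold:   # two-class decision
                hits.append((2 * r, 2 * c))          # map back to full scale
    return hits
```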

Feature Extraction Methods

There are many feature extraction algorithms, and they will be covered in more detail later in this article. Many of them are also used for purposes other than face recognition; researchers have modified and adapted them to suit their needs. Karl Pearson, for example, proposed PCA in 1901 [8]; it was used for pattern recognition 64 years later [11], and finally applied to face representation and recognition in the early 1990s. A list of feature extraction algorithms used in face detection can be found in Table 1.2.
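Table 1.2 itself is not reproduced here. As one concrete example of a feature extraction technique often applied to faces, here is a minimal local binary pattern (LBP) sketch; its use here is an illustrative assumption, not a claim about the table's contents:

```python
import numpy as np

def lbp_image(gray):
    """Compute a basic 3x3 local binary pattern code for each interior pixel."""
    g = gray.astype(int)
    center = g[1:-1, 1:-1]
    code = np.zeros_like(center)
    # Eight neighbors, each contributing one bit of the pattern
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dr, dc) in enumerate(offsets):
        neighbor = g[1 + dr:g.shape[0] - 1 + dr, 1 + dc:g.shape[1] - 1 + dc]
        code |= (neighbor >= center).astype(int) << bit
    return code

def lbp_histogram(gray, bins=256):
    """Histogram of LBP codes: a simple texture feature vector for a face crop."""
    hist, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)             # normalized feature vector
```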

Feature Selection Methods

The goal of a feature selection algorithm is to find the subset of features that yields the smallest classification error. Because this error is the criterion, feature selection depends on the type of classifier used. In principle, the problem can be solved by examining every possible subset and choosing the one that optimizes the criterion function, but exhaustive search is computationally expensive and time-consuming. Algorithms such as branch and bound make the search tractable. Table 1.3 lists the selection methods suggested in [4].
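A tiny sketch of the exhaustive search described above; `error` stands for the classifier-dependent criterion, which in practice would come from cross-validation, and the exponential cost of enumerating subsets is exactly why branch-and-bound methods are preferred:

```python
from itertools import combinations

def best_subset(n_features, k, error):
    """Return the k-feature subset with the smallest classification error."""
    return min(combinations(range(n_features), k), key=error)

# Toy usage: pretend lower-indexed features are more informative,
# so the "error" of a subset is simply the sum of its indices.
best = best_subset(6, k=2, error=lambda subset: sum(subset))
# best == (0, 1)
```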

Face Classification

Once the features have been extracted and selected, the next step is to classify the image. Appearance-based face recognition algorithms use a wide variety of classification methods, and sometimes several classifiers are combined to obtain better results. Most model-based algorithms, on the other hand, match the samples against a model or template; a learning method can then be applied to improve the algorithm. Classifiers thus carry much of the weight of the face recognition task, and they are widely used in areas such as finance, data mining, and signal decoding, so there is an extensive literature on the topic. From the point of view of recognition, classification usually involves some form of learning: supervised, semi-supervised, or unsupervised. Unsupervised learning is the most challenging setting, because there is no set of tagged examples. Fortunately, face recognition applications usually include a set of tagged subjects, so face recognition systems typically use supervised learning methods. However, data sets may be small, and acquiring newly tagged samples can sometimes prove impossible; in those cases, semi-supervised learning is a good option.
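As a minimal supervised example, here is a 1-nearest-neighbor classifier over feature vectors, a common baseline in face recognition experiments; the data, labels, and Euclidean metric are illustrative assumptions:

```python
import numpy as np

def nearest_neighbor(train_features, train_labels, query):
    """Return the label of the enrolled vector closest to `query`."""
    distances = np.linalg.norm(train_features - query, axis=1)
    return train_labels[int(np.argmin(distances))]

# Toy usage: three enrolled subjects, one probe vector.
enrolled = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])
labels = ["alice", "bob", "carol"]
print(nearest_neighbor(enrolled, labels, np.array([0.75, 0.25])))  # "bob"
```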

Face Recognition: The Problem

This article has covered the face recognition field, reviewing the methods, tools, algorithms, and approaches used since the 1960s. Some algorithms perform better than others, but face recognition still has many limiting constraints. The previous sections explained some problems specific to face detection; other problems are not unique to face recognition at all. This section discusses these and related issues, starting with illumination. Many algorithms use color information to recognize faces, although some images are only available in grayscale. Color images do allow extra features to be extracted, but the color perceived from a surface depends not only on the surface itself but also on the light falling on it: color is the result of how our eyes interpret the energy and wavelength of the incoming light. Images taken in uncontrolled environments can therefore show significant illumination variation. The intensity of a pixel can change with the lighting conditions, and so can the variations and relationships between pixels on which most feature extraction methods depend. Light sources themselves can change: intensity may increase or decrease, and new light sources can be added. Harsh lighting can cast shadows over, or wash out, entire regions of the face, making feature extraction difficult. The core problem is that two images of the same subject taken under different illumination may differ from each other more than images of two different subjects taken under the same illumination.
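One standard, simple mitigation is to normalize illumination before feature extraction, for example with histogram equalization of the grayscale face crop. The text does not prescribe any particular method; this is a generic sketch:

```python
import numpy as np

def equalize(gray):
    """Spread the intensities of an 8-bit grayscale image over 0-255."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1)  # scale to [0, 1]
    lookup = (cdf * 255).astype(np.uint8)    # intensity remapping table
    return lookup[gray]                      # apply the table per pixel
```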

In conclusion, illumination is one of the major challenges for automated face recognition systems, and there is a large body of literature on the topic. It has been shown that humans can recognize familiar faces under widely varying lighting conditions; automated systems have yet to match that ability.
