Dr Rajiv Desai

An Educational Blog

FACIAL RECOGNITION (TECHNOLOGY)

Facial Recognition (Technology):

_____

_____

Prologue:

Faces are special. Days after birth, an infant can distinguish his or her mother’s face from those of other women. Babies are more reliably engaged by a sketch of a face than they are by other images. Though human faces are quite similar in their basic composition, most of us can differentiate effortlessly among them. A face is a codex of social information: it can often tell us, at a glance, someone’s age, gender, racial background and mood. The human brain is often less reliable than digital algorithms, but it remains superior at facial recognition. At the airport, when a scanner compares your face with your passport photo, the lighting and the angle are ideal. By contrast, an average human can recognize a family member from behind; no computer will ever be able to do that. Though we may take for granted our brain’s ability to recognize the faces of friends, family, and acquaintances, it is actually an extraordinary gift. Designing an algorithm that can effectively scan through a series of digitized photographs or still video images of faces and detect all occurrences of a previously encountered face is a monumental task. This challenge and many others are the focus of a broad area of computer science research known as facial recognition. The discipline of facial recognition spans the subjects of graphics and artificial intelligence, and it has been the subject of decades of research and significant government and corporate investment.

At its most basic, facial recognition technology, sometimes referred to as facial biometrics, involves using a 2D or 3D camera to capture an image of a human face. Algorithms measure numerous points on the face, down to the sub-millimeter. The pattern is then compared to others in a database for a match, whether against the billion people on Facebook or the single face that unlocks your smartphone. The ability of computers to recognize people’s faces is improving rapidly, along with the ubiquity of cameras and the power of computing hosted in the internet cloud to figure out identities in real time. All biometrics, whether a finger, face, iris, voice or any other, are a matter of matching a pattern. For fingerprints, it is swirls and whorls; for faces, it is landmark points like the eyes, nose, and mouth. Fingerprints cannot lie, but liars can make fingerprints. Unfortunately, this paraphrase has been proven right on many occasions now, and not only for fingerprints but also for facial recognition.

______

______

Glossary of terms, acronyms and abbreviations:

Biometric characteristic: A biological and/or behavioral characteristic of an individual that can be detected and from which distinguishing biometric features can be repeatedly extracted for the purpose of automated recognition of individuals

Biometric feature: A biometric characteristic that has been processed so as to extract numbers or labels which can be used for comparison

Biometric feature extraction: The process through which biometric characteristics are converted into biometric templates

Biometric identification: Search against a gallery to find and return a sufficiently similar biometric template

Biometric probe: Biometric characteristics obtained at the site of verification or identification (e.g., an image of an individual’s face) that are passed through an algorithm which converts the characteristics into biometric features for comparison with biometric templates

Biometric template: Set of stored biometric features comparable directly to biometric features of a probe biometric sample

Biometric verification: The process by which an identification claim is confirmed through biometric comparison

Database: same as Gallery

Enrolment: The process through which a biometric characteristic is captured, processed, and entered into the gallery as a biometric template

Equal error rate (EER): The rate at which the false accept rate is exactly equal to the false reject rate

Facial features: The essential distinctive characteristics of a face, which algorithms attempt to express or translate into mathematical terms so as to make recognition possible.

Facial landmarks: important locations in the face-geometry such as position of eyes, nose, mouth, etc.

False accept rate (FAR): A statistic used to measure biometric performance when performing the verification task. The percentage of times a face recognition algorithm, technology, or system falsely accepts an incorrect claim to existence or non-existence of a candidate in the database over all comparisons between a probe and gallery image

False negative:  An incorrect non-match between a probe and a candidate in the gallery returned by a face recognition algorithm, technology, or system

False positive: An incorrect match between a biometric probe and biometric template returned by a face recognition algorithm, technology, or system

False reject rate (FRR): A statistic used to measure biometric performance when performing the verification task. The percentage of times a face recognition algorithm, technology, or system incorrectly rejects a true claim to existence or non-existence of a match in the gallery, based on the comparison of a biometric probe and biometric template

Gallery: A database in which stored biometric templates reside

Impostor: A person who submits a biometric sample in either an intentional or inadvertent attempt to claim the identity of another person to a biometric system

Match: A match is where the similarity score (of the probe compared to a biometric template in the reference database) is within a predetermined threshold

Matching score: Numerical value (or set of values) resulting from the comparison of a biometric probe and biometric template

Recognition rate: A generic metric used to describe the results of the repeated performance of a biometric system, indicating the probability that a probe and a candidate in the gallery will be matched

Similarity score:  A value returned by a biometric algorithm that indicates the degree of similarity or correlation between a biometric template (probe) and a previously stored template in the reference database

Three-dimensional (3D) algorithm: A recognition algorithm that makes use of images from multiple perspectives, whether feature-based or holistic

Threshold: Numerical value (or set of values) at which a decision boundary exists

True accept rate (TAR) = 1-false reject rate

True reject rate (TRR) = 1-false accept rate

Python (programming language): Python is a general-purpose programming language created by Guido van Rossum that became very popular very quickly, mainly because of its simplicity and code readability. It enables the programmer to express ideas in fewer lines of code without reducing readability.

OpenCV:  OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision. OpenCV supports a wide variety of programming languages such as C++, Python, Java, etc., and is available on different platforms including Windows, Linux, OS X, Android, and iOS.

OpenCV-Python: OpenCV-Python is the Python API for OpenCV, combining the best qualities of the OpenCV C++ API and the Python language. OpenCV-Python is a library of Python bindings designed to solve computer vision problems.

Face recognition vendor test (FRVT): FRVT is an independently administered technology evaluation of mature face recognition systems. FRVT provides performance measures for assessing the capability of face recognition systems to meet requirements for large-scale, real-world applications.

_

Abbreviations:

AI = artificial intelligence

ACLU = American Civil Liberties Union

API = Application Programming Interface

EER = Equal Error Rate

FR = facial recognition = face recognition

FAR = False Accept Rate

FRGC = Face Recognition Grand Challenge

FRR = False Reject Rate

FRS = Face/Facial Recognition System

FRT = Face/Facial Recognition Technology

FRVT = Face Recognition Vendor Test

ICA = Independent component analysis

ISO/IEC = International Organization for Standardization/International Electrotechnical Commission

JPEG = Joint Photographic Experts Group

LFA = Local Feature Analysis

NIST = National Institute of Standards and Technology

PCA = Principal Component Analysis

PIN = personal identification number

ROC = Receiver Operating Characteristic

SVM = Support Vector Machines

TAR = True Accept Rate

TRR = True Reject Rate

IR = infrared

______

______

Biometrics:

_

The term “biometrics” is derived from the Greek words “bio” (life) and “metrics” (to measure). Automated biometric systems have only become available over the last few decades, due to significant advances in the field of computer processing. Many of these new automated techniques, however, are based on ideas that were originally conceived hundreds, even thousands of years ago. One of the oldest and most basic examples of a characteristic that is used for recognition by humans is the face. Since the beginning of civilization, humans have used faces to identify known (familiar) and unknown (unfamiliar) individuals. This simple task became increasingly challenging as populations grew and as more convenient methods of travel introduced many new individuals into once-small communities. Other characteristics have also been used throughout the history of civilization as a more formal means of recognition.

_

A biometric is a unique, measurable characteristic of a human being that can be used to automatically recognize an individual or verify an individual’s identity. Biometric authentication is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Biometric identifiers are often categorized as physiological versus behavioral characteristics. Physiological characteristics are related to the shape of the body. Examples include, but are not limited to, fingerprint, palm veins, face, DNA, palm print, hand geometry, iris, retina and odour/scent. Behavioral characteristics are related to the pattern of behavior of a person, including but not limited to typing rhythm, gait, and voice. Some researchers have coined the term behaviometrics to describe the latter class of biometrics.

_

Biometrics is used to identify and authenticate a person using a set of recognizable and verifiable data unique and specific to that person.

Identification answers the question: “Who are you?”

Authentication answers the question: “Are you really who you say you are?”

_

More traditional means of access control include token-based identification systems, such as a driver’s license or passport, and knowledge-based identification systems, such as a password or personal identification number. According to Pew Research, the average individual has to remember between 25 and 150 passwords or PIN codes, with 39% saying they use the same or similar passwords in order to avoid confusion. While this, of course, is understandable given the number of online accounts a person may have, it creates the perfect environment for identity theft. Although password/PIN systems and token systems are still the most common person verification and identification methods, trouble with forgery, theft and lapses in users’ memory poses a very real threat to high-security environments, which are now turning to biometric technologies to alleviate it. Since biometric identifiers are unique to individuals, they are more reliable in verifying identity than token- and knowledge-based methods; however, the collection of biometric identifiers raises privacy concerns about the ultimate use of this information.

_

Biometrics makes use of unique physiological and behavioral patterns of the human body. These patterns are formed randomly owing to different biological and environmental factors. The randomness and complexity of their details make these patterns good enough to be considered unique. These biological or behavioral characteristics can be as obvious as facial structure or voice, which can be recognized and differentiated even by human senses, or as unapparent as a DNA sequence or vascular structure, which require special equipment and processes to identify an individual. Despite the sizable differences among biometric traits, they serve a common purpose: making personal identification possible. Biometrics makes use of statistical, mathematical, imaging and computing techniques to uniquely map these patterns for an individual. The patterns are first captured by imaging or scanning and then taken through specialized algorithms to generate a biometric template, which is unique to the individual.

_

Biometrics can measure both physiological and behavioral characteristics:

Physiological biometrics (based on measurements and data derived directly from the human body) include:

Finger-scan

Facial Recognition

Iris-scan

Retina-scan

Hand-scan

Behavioral biometrics (based on measurements and data derived from an action) include:

Voice-scan

Signature-scan

Gait-scan

There are also secondary biometrics, such as recognising individual patterns of device and keyboard use, but these have so far been limited in their deployment.

_

Some of these features are extracted with camera devices and some are captured through specialized scanners. The extracted features can be used in different applications in different ways, including recognition systems, authentication systems, age verification systems, disease identification systems, etc. Today most online and offline applications incorporate biometric features to improve authenticity, and biometric applications are used across industries, institutions and government establishments. In many business outfits, it is crucial for continuity of business operations that biometric systems keep functioning tirelessly. In institutions like hospitals and blood transfusion units, where precise identification of patients and blood/organ donors can be crucial, biometrics eliminates the possibility of human error and expedites overall healthcare operations by streamlining patient identification.

_

Biometric functionality:

Many different aspects of human physiology, chemistry or behavior can be used for biometric authentication. The selection of a particular biometric for use in a specific application involves a weighting of several factors. Jain et al. (1999) identified seven such factors to be used when assessing the suitability of any trait for use in biometric authentication.

  • Universality means that every person using a system should possess the trait.
  • Uniqueness means the trait should be sufficiently different for individuals in the relevant population such that they can be distinguished from one another.
  • Permanence relates to the manner in which a trait varies over time. More specifically, a trait with ‘good’ permanence will be reasonably invariant over time with respect to the specific matching algorithm.
  • Measurability (collectability) relates to the ease of acquisition or measurement of the trait. In addition, acquired data should be in a form that permits subsequent processing and extraction of the relevant feature sets.
  • Performance relates to the accuracy, speed, and robustness of technology used.
  • Acceptability relates to how well individuals in the relevant population accept the technology such that they are willing to have their biometric trait captured and assessed.
  • Circumvention relates to the ease with which a trait might be imitated using an artifact or substitute.

Proper biometric use is very application dependent. Certain biometrics will be better than others based on the required levels of convenience and security. No single biometric will meet all the requirements of every possible application.

_

Choosing a biometric modality:

The choice of a biometric modality depends on the identification or authentication application it will be used with. For example, for a low-security door, fingerprint-based access does the job; for logical access to a high-security network server, however, the user might have to authenticate with fingerprints as well as his or her voice print. A biometric authentication application can be implemented using one (unimodal) or more than one (multimodal) biometric modality. In many online and mobile services, for example app-based mobile banking or financial services applications, a comparatively newer approach called continuous authentication is used. This approach follows the logic that a user should be continuously monitored to make sure that the device or application is being used by the genuine user throughout the session. User activity can be tracked by usage-pattern monitoring and hardware/sensor data to make sure that the device is in the right hands. Once a user passes the authentication/verification barrier, there is otherwise no way to make sure that it is the same user throughout the session. Continuous authentication solves this problem by leveraging behavioral biometrics: a unique user profile is created from usage patterns and device data, the user’s authentication state is tracked throughout the session against that profile, and access can be denied in the middle of a session if any irregularities are detected.

Some important factors that need to be considered before selecting a particular biometric modality are:

Accuracy:

It is one of the most important factors to assess when selecting a modality. Accuracy itself depends on several other factors such as the false accept rate (FAR), false reject rate (FRR), error rate, identification rate, etc.

Anti-spoofing capabilities:

The widespread use of biometric recognition systems in various sensitive applications stresses the importance of stronger protection against intruder attacks. Therefore, a lot of importance is given to direct attacks where unauthorized individuals can gain access to the system by interacting with the system input device. These unauthorized attempts to access the system are known as spoofing attacks and therefore the chosen modality should have strong anti-spoofing capabilities.

Cost-effectiveness:

This is an important factor to consider when deciding the effectiveness and suitability of a particular modality. Some modalities may be more cost-effective than others due to the underlying technology or hardware characteristics. It is important to realize that the initial investment in a biometric system can often be recouped in a short time, leading to a faster return on investment (ROI).

User acceptability:

The deployment of a particular identification system also depends on how well it is accepted by the users. In some cultures, certain modalities carry a stigma, which can negatively impact the success of the implemented modality. Therefore, it is important to understand which modalities are well accepted and which may cause user-acceptance issues.

Hygiene:

Another important factor to consider before making a deployment decision is whether the system has contact dependent hardware. Many organizations prefer to use contactless modalities due to hygiene reasons and also for infection control.

So organizations should consider all the above factors before selecting a particular modality for their applications. The selected modality should also meet the operational requirements for their deployment.

_____

The following are used as performance metrics for biometric systems:

  • False match rate (FMR, also called FAR = False Accept Rate): the probability that the system incorrectly matches the input pattern to a non-matching template in the database. It measures the percent of invalid inputs that are incorrectly accepted. On a similarity scale, if the person is in reality an impostor but the matching score is higher than the threshold, then he is treated as genuine. This increases the FAR, which thus also depends upon the threshold value.
  • False non-match rate (FNMR, also called FRR = False Reject Rate): the probability that the system fails to detect a match between the input pattern and a matching template in the database. It measures the percent of valid inputs that are incorrectly rejected.
  • Receiver operating characteristic or relative operating characteristic (ROC): The ROC plot is a visual characterization of the trade-off between the FAR and the FRR. In general, the matching algorithm performs a decision based on a threshold that determines how close to a template the input needs to be for it to be considered a match. If the threshold is reduced, there will be fewer false non-matches but more false accepts. Conversely, a higher threshold will reduce the FAR but increase the FRR. A common variation is the Detection error trade-off (DET), which is obtained using normal deviation scales on both axes. This more linear graph illuminates the differences for higher performances (rarer errors).
  • Equal error rate or crossover error rate (EER or CER): the rate at which both acceptance and rejection errors are equal. The value of the EER can be easily obtained from the ROC curve. The EER is a quick way to compare the accuracy of devices with different ROC curves. In general, the device with the lowest EER is the most accurate.
  • Failure to enrol rate (FTE or FER): the rate at which attempts to create a template from an input are unsuccessful. This is most commonly caused by low-quality inputs.
  • Failure to capture rate (FTC): Within automatic systems, the probability that the system fails to detect a biometric input when presented correctly.
  • Template capacity: the maximum number of sets of data that can be stored in the system.
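
To make these metrics concrete, here is a minimal sketch in Python (using NumPy) that computes FAR and FRR at a sweep of thresholds and locates an approximate EER. The genuine and impostor score arrays are hypothetical placeholders, not the output of any real matcher.

```python
import numpy as np

# Hypothetical similarity scores (higher = more alike), e.g. from a matcher.
genuine_scores = np.array([0.91, 0.84, 0.78, 0.88, 0.95, 0.70])   # same-person pairs
impostor_scores = np.array([0.35, 0.52, 0.41, 0.60, 0.28, 0.47])  # different-person pairs

def far_frr(threshold):
    """FAR: impostors accepted; FRR: genuine users rejected, at a given threshold."""
    far = np.mean(impostor_scores >= threshold)  # false accepts / impostor attempts
    frr = np.mean(genuine_scores < threshold)    # false rejects / genuine attempts
    return far, frr

# Sweeping the threshold traces the ROC trade-off; the EER is the point
# where FAR and FRR are (approximately) equal.
thresholds = np.linspace(0.0, 1.0, 1001)
rates = np.array([far_frr(t) for t in thresholds])
eer_index = np.argmin(np.abs(rates[:, 0] - rates[:, 1]))
print(f"EER ~ {rates[eer_index].mean():.3f} at threshold {thresholds[eer_index]:.3f}")
```

Raising the threshold in this sketch lowers FAR and raises FRR, which is exactly the trade-off the ROC curve visualizes.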

_____

Why is biometrics important in a global and mobile world?

Although there has always been a need to identify individuals, the requirements of identification have changed in radical ways as populations have expanded and grown increasingly mobile. This is particularly true for the relationships between institutions and individuals, which are crucial to the well-being of societies, and necessarily and increasingly conducted impersonally—that is, without persistent direct and personal interaction. Importantly, these impersonal interactions include relationships between government and citizens for purposes of fair allocation of entitlements, mediated transactions with e-government, and security and law enforcement. Increasingly, these developments also encompass relationships between actors and clients or consumers based on financial transactions, commercial transactions, provision of services, and sales conducted among strangers, often mediated through the telephone, Internet, and the World Wide Web. Biometric technologies have emerged as promising tools to meet these challenges of identification, based not only on the faith that “the body doesn’t lie,” but also on dramatic progress in a range of relevant technologies. These developments, according to some, herald the possibility of automated systems of identification that are accurate, reliable, and efficient.

Many identification systems comprise three elements: attributed identifiers (such as name, Social Security number, bank account number, and driver’s license number), biographical identifiers (such as address, profession, and education), and biometric identifiers (such as photographs and fingerprint). Traditionally, the management of identity was satisfactorily and principally achieved by connecting attributed identifiers with biographical identifiers that were anchored in existing and ongoing local social relations.  As populations have grown, communities have become more transient, and individuals have become more mobile, the governance of people (as populations) required a system of identity management that was considered more robust and flexible. The acceleration of globalization imposes even greater pressure on such systems as individuals move not only among towns and cities but across countries. This progressive dis-embedding from local contexts requires systems and practices of identification that are not based on geographically specific institutions and social networks in order to manage economic and social opportunities as well as risks.

In this context, according to its proponents, the promise of contemporary biometric identification technology is to strengthen the links between attributed and biographical identity and create a stable, accurate, and reliable identity triad. Although it is relatively easy for individuals to falsify attributed and biographical identifiers, biometric identifiers—an individual’s fingerprints, handprints, irises, face—are conceivably more secure because it is assumed that “the body never lies” or differently stated, that it is very difficult or impossible to falsify biometric characteristics. Having subscribed to this principle, many important challenges of a practical nature nonetheless remain: deciding on which bodily features to use, how to convert these features into usable representations, and, beyond these, how to store, retrieve, process, and govern the distribution of these representations.

Prior to recent advances in the information sciences and technologies, the practical challenges of biometric identification had been difficult to meet. For example, passport photographs are amenable to tampering and hence not reliable; fingerprints, though more reliable than photographs, were not amenable, as they are today, to automated processing and efficient dissemination. Security as well as other concerns have turned attention and resources toward the development of automatic biometric systems. An automated biometric system is essentially a pattern recognition system that operates by acquiring biometric data (a face image) from an individual, extracting certain features (defined as mathematical artifacts) from the acquired data, and comparing this feature set against the biometric template (or representation) of features already acquired in a database. Scientific and engineering developments—such as increased processing power, improved input devices, and algorithms for compressing data—have, by overcoming major technical obstacles, facilitated the proliferation of biometric recognition systems for both verification and identification and an accompanying optimism over their utility. The variety of biometrics upon which these systems anchor identity has burgeoned, including the familiar fingerprint as well as palm print, hand geometry, iris geometry, voice, gait, and the face.

The question of which biometric technology is “best” only makes sense in relation to a rich set of background assumptions. While it may be true that one system is better than another in certain performance criteria such as accuracy or difficulty of circumvention, a decision to choose or use one system over another must take into consideration the constraints, requirements, and purposes of the use-context, which may include not only technical, but also social, moral and political factors. It is unlikely that a single biometric technology will be universally applicable, or ideal, for all application scenarios. Iris scanning, for example, is very accurate but requires expensive equipment and usually the active participation of subjects willing to submit to a degree of discomfort, physical proximity, and intrusiveness—especially when first enrolled—in exchange for later convenience. In contrast, fingerprinting, which also requires the active participation of subjects, might be preferred because it is relatively inexpensive and has a substantial historical legacy.

Facial recognition has begun to move to the forefront because of its purported advantages along numerous key dimensions. Unlike iris scanning, which has only been operationally demonstrated for relatively short distances, it holds the promise of identification at a distance of many meters, requiring neither the knowledge nor the cooperation of the subject. These features have made it a favorite for a range of security and law enforcement functions, as the targets of interest in these areas are likely to be highly uncooperative, actively seeking to subvert successful identification, and few—if any—other biometric systems offer similar functionality, with the future potential exception of gait recognition. Because facial recognition promises what we might call “the grand prize” of identification, namely, the reliable capacity to pick out or identify the “face in the crowd,” it holds the potential of spotting a known assassin among a crowd of well-wishers or a known terrorist surveying areas of vulnerability such as airports or public utilities.  At the same time, rapid advancements in contributing areas of science and engineering suggest that facial recognition is capable of meeting the needs of identification for these critical social challenges, and being realistically achievable within the relatively near future.

_____

_____

Facial recognition by humans:

Face recognition is performed routinely and effortlessly by humans. Face perception is an individual’s understanding and interpretation of the face, particularly the human face, especially in relation to the associated information processing in the brain. From birth, faces are important in the individual’s social interaction. An infant innately responds to face shapes at birth and can discriminate his or her mother’s face from a stranger’s at the tender age of 45 hours. Recognizing and identifying people is a vital survival skill, as is reading faces for evidence of ill-health or deception. Face perception is very complex, as the recognition of facial expressions involves extensive and diverse areas in the brain. Damage to certain parts of the brain can cause specific impairments in understanding faces, a condition known as prosopagnosia. Having improved significantly in the last several years, technologies that can mimic or improve on human abilities to recognize and read faces are now maturing for use in medical and security applications.

_

The physical world around us is three-dimensional (3D), yet each of our eyes sees the world in 2D. People perceive depth and see the real world in three dimensions thanks to binocular vision: we have two eyes that are about three inches apart. The separation of our eyes means that each eye sees the world from a slightly different perspective. Our powerful brains take these two slightly different images of the world and do all the necessary calculations to create a sense of depth and allow us to gauge distance.

_

Neuroanatomy of facial processing:

There are several parts of the brain that play a role in face perception. Rossion, Hanseeuw, and Dricot used BOLD fMRI mapping to identify activation in the brain when subjects viewed both cars and faces. (The majority of fMRI studies use blood oxygen level dependent (BOLD) contrast to determine which areas of the brain are activated by various cognitive functions.) They found that the occipital face area, located in the occipital lobe, the fusiform face area, the superior temporal sulcus, the amygdala, and the anterior/inferior cortex of the temporal lobe all played roles in contrasting the faces with the cars, with the initial face perception beginning in the occipital face area. This entire region links to form a network that acts to distinguish faces. The processing of faces in the brain is known as a “sum of parts” perception; the individual parts of the face must be processed first in order to put all of the pieces together. In early processing, the occipital face area contributes to face perception by recognizing the eyes, nose, and mouth as individual pieces.

Furthermore, Arcurio, Gold, and James used BOLD fMRI mapping to determine the patterns of activation in the brain when parts of the face were presented in combination and when they were presented singly. The occipital face area is activated by the visual perception of single features of the face, for example the nose and mouth, and prefers the combination of two eyes over other combinations. This research supports the view that the occipital face area recognizes the parts of the face at the early stages of recognition. The fusiform face area, on the other hand, shows no preference for single features, because it is responsible for “holistic/configural” information: it puts all of the processed pieces of the face together in later processing. This theory is supported by the work of Gold et al., who found that regardless of the orientation of a face, subjects were impacted by the configuration of the individual facial features. Subjects were also impacted by the coding of the relationships between those features. This shows that processing is done by a summation of the parts in the later stages of recognition.

_

The temporal lobe of the brain is predominantly responsible for our ability to recognize faces. Some neurons in the temporal lobe respond to particular features of faces. Some people who suffer damage to the temporal lobe lose their ability to recognize and identify familiar faces. This disorder is called prosopagnosia. When the appearance of a face is changed, neurons in the temporal lobe generate less activity.  Exactly how people recognize faces is not completely understood. For some reason, it is difficult to recognize some faces when they are upside-down.

_

Natural selection in facial recognition:

Do we need to wait for a detailed processing of all the features of a face to decide on its mood, potential threat or friendliness? We don’t, because if we did, we wouldn’t survive as a species. It turns out that cognitive processes are activated by “face-like” objects, which alert the observer to both the emotional state and identity of the subject – even before the conscious mind begins to process—or even receive—the information. Even a “stick figure face,” despite its simplicity, conveys mood information. This robust and subtle capability is hypothesized to be the result of eons of natural selection favoring people most able to quickly identify the mental state of humans they encounter, for example, threatening or hostile people. This allows the individual an opportunity to flee or attack pre-emptively. In other words, processing this information subcortically (and therefore subconsciously)—before it is passed on to the rest of the brain for detailed processing—accelerates judgment and decision making when quickness is paramount. It is no wonder then, that one of the areas the fusiform face area and the occipital visual area connect to is the amygdala, the area that is responsible for emotions of fear and rage, or the “fight or flight” response.

___

How we detect a face: A survey of psychological evidence, a 2003 study:

Scientists strive to build systems that can detect and recognize faces. One such system already exists: the human brain. How this system operates, however, is far from being fully understood. In this article, the authors review the psychological evidence regarding the process of face detection in particular. Evidence is collected from a variety of face-processing tasks, including stimulus detection, face categorization, visual search, first saccade analysis, and face detection itself. Together, the evidence points towards a multistage-processing model of face detection. These stages involve preattentive processing, template fitting, and template evaluation, similar to automatic face-detection systems.

___

How many faces do people remember and recognise? A 2018 study:

Researchers from the University of York tested how many individual faces people could recall from among those they knew personally as well as from popular media. And while 5,000 faces is the average number that people seem to know, Dr. Rob Jenkins, one of the researchers, quickly points out, “Our study focused on the number of faces people actually know—we haven’t yet found a limit on how many faces the brain can handle.”   For the study, participants spent an hour listing as many faces from their personal lives as possible, including people they went to school with, colleagues, and family. They then did the same for famous faces, such as actors, politicians, and other public figures. The participants apparently found it easy to come up with lots of faces at first, but it became harder to think of new ones by the end of the hour. That change of pace allowed the researchers to estimate when they would run out of faces completely. The participants were also shown thousands of photographs of famous people and asked which ones they recognized. The researchers asked participants to recognize two different photos of each person to minimize error. The results showed that the participants knew between 1,000 and 10,000 faces, hence the average of 5,000 faces. The mean age of the study’s participants was 24 and, according to the researchers, age provides an intriguing avenue for further research. “It would be interesting to see whether there is a peak age for the number of faces we know,” says Jenkins. “Perhaps we accumulate faces throughout our lifetimes, or perhaps we start to forget some after we reach a certain age.” Clearly, there is more to learn about how we learn each other’s faces. This study is published in the Royal Society Proceedings B.

___

Super recogniser:

“Super recognisers” is a term coined in 2009 by Harvard and University College London researchers for people with significantly better-than-average face recognition ability.  It is predominantly used among the British intelligence community. It is the extreme opposite of prosopagnosia. It is estimated that 1–2% of the population are super recognisers, who can remember 80% of the faces they have seen; normal people remember only about 20%. Super recognisers can match faces better than computer recognition systems in some circumstances. The science behind this is poorly understood but may be related to the fusiform face area of the brain.  In May 2015, the London Metropolitan Police officially formed a team made up of people with a “superpower” for recognising people and put them to work identifying individuals whose faces are captured on CCTV.  Scotland Yard has a squad of over 200 super recognisers. In August 2018, it was reported that the Metropolitan Police had used two super recognisers to identify the suspects of the attack on Sergei and Yulia Skripal, after trawling through up to 5,000 hours of CCTV footage from Salisbury and numerous airports across the country.

______

______

Introduction to facial recognition (technology):

Anyone who has seen the TV show “Las Vegas” has seen facial recognition software in action. But what looks so easy on TV doesn’t always translate as well in the real world. In 2001, the Tampa Police Department installed cameras equipped with facial recognition technology in their Ybor City nightlife district in an attempt to cut down on crime in the area. The system failed to do the job, and it was scrapped in 2003 due to ineffectiveness. People in the area were seen wearing masks and making obscene gestures, preventing the cameras from getting a clear enough shot to identify anyone. Boston’s Logan Airport also ran two separate tests of facial recognition systems at its security checkpoints using volunteers. Over a three-month period, the results were disappointing. According to the Electronic Privacy Information Center, the system only had a 61.4 percent accuracy rate, leading airport officials to pursue other security options. Humans have always had the innate ability to recognize and distinguish between faces, yet computers have only recently shown the same ability. In the mid-1960s, scientists began work on using the computer to recognize human faces. Since then, facial recognition software has come a long way.

_

The information age is quickly revolutionizing the way transactions are completed. Everyday actions are increasingly being handled electronically, instead of with pencil and paper or face to face. This growth in electronic transactions has resulted in a greater demand for fast and accurate user identification and authentication. Access codes for buildings, bank accounts and computer systems often use PINs for identification and security clearances. Using the proper PIN gains access, but the user of the PIN is not verified. When credit and ATM cards are lost or stolen, an unauthorized user can often come up with the correct personal codes. Despite warnings, many people continue to choose easily guessed PINs and passwords: birthdays, phone numbers and social security numbers. Recent cases of identity theft have heightened the need for methods to prove that someone is truly who he/she claims to be. Face recognition technology may solve this problem, since a face is undeniably connected to its owner except in the case of identical twins. It is non-transferable. The system can compare scans to records stored in a central or local database or even on a smart card.

_

Humans are great at identifying people they know well. However, our ability to remember and distinguish individuals diminishes when the number of people grows to more than a few dozen. Since it is not always efficient to rely on people for this task, we have come up with alternative forms of authentication. In particular, there are three forms of authentication that rely on: (i) something that the user has, (ii) something that the user knows, and (iii) who the user is.

Examples of physical tokens in everyday life include home and car keys, credit cards, passports, and driver’s licenses. Equivalent examples within an event would include printed tickets, badges, wristbands, and mobile phones. When it comes to examples of privileged information, everyone is familiar with usernames, passwords, and security questions. In the context of an event, there are many check-in applications that rely on name/email searches and scanning QR codes. The use of physical tokens and privileged information has become an integral part of the event lifecycle. They are used to check in people, restrict access, personalize the experience, measure attendance, extract analytics, and perform lead retrieval.

Facial recognition belongs in the third form of authentication, along with other biometric approaches. It is software which can identify a person from a database of faces without requiring a physical token or the user to provide any privileged information. Technological advancements have increased accuracy and reduced cost drastically. Therefore, we see increased adoption across industries (e.g., airports, social media, and cell phones).

_

Automatic face recognition has been traditionally associated with the fields of computer vision and pattern recognition. Face recognition is considered a natural, non-intimidating, and widely accepted biometric identification method. As such, it has the potential of becoming the leading biometric technology. Unfortunately, it is also one of the most difficult pattern recognition problems. So far, all existing solutions provide only partial, and usually unsatisfactory, answers to the market needs. In the context of face recognition, it is common to distinguish between the problem of authentication and that of recognition.

In the first case, the enrolled individual (probe) claims the identity of a person whose template is stored in the database (gallery). We refer to the data used for a specific recognition task as a template. The face recognition algorithm needs to compare a given face with a given template and verify their equivalence. Such a setup (one-to-one matching) can occur when biometric technology is used to secure financial transactions, for example, in an automatic teller machine (ATM). In this case, the user is usually assumed to be collaborative.

The second case is more difficult. Recognition implies that the probe subject should be compared with all the templates stored in the gallery database. The face recognition algorithm should then match a given face with one of the individuals in the database. Finding a terrorist in a crowd (one-to-many matching) is one such application. Needless to say, no collaboration can be assumed in this case. At the current technological level, one-to-many face recognition with non-collaborative users is practically unsolvable. That is, if one intentionally wishes not to be recognized, he can always deceive any face recognition technology.

Even collaborative users in a natural environment present high variability of their faces due to natural factors beyond our control. The greatest difficulty of face recognition, compared to other biometrics, stems from the immense variability of the human face. The facial appearance depends heavily on environmental factors, for example, the lighting conditions, background scene and head pose. It also depends on facial hair, the use of cosmetics, jewellery and piercing. Last but not least, plastic surgery or long-term processes like aging and weight gain can have a significant influence on facial appearance. Yet, much of the facial appearance variability is inherent to the face itself. Even if we hypothetically assume that external factors do not exist, for example, that the facial image is always acquired under the same illumination, pose, and with the same haircut and make-up, the variability in a facial image due to facial expressions may still be even greater than a change in the person’s identity.

_

Face recognition and face search have been gaining prominence over the years because of the growing need for face search for various purposes. Facial recognition search software not only helps in recognising the faces in a single photo, but it also helps in recognising people in group pictures, matching two different faces, finding faces similar to a particular face, and providing other face attributes according to the eyes, nose, and other parts; it therefore plays a crucial role in identifying and recognising faces.

Face recognition search is used by thousands of software and hardware companies, as well as individuals, to filter people of a specific kind and sometimes even to find your own images. With the help of the image recognition search engines that are available, you can use different ways to find a face or similar faces.

_

Techopedia defines facial recognition as “a biometric software application capable of uniquely identifying or verifying a person by comparing and analyzing patterns based on the person’s facial contours.”   Facial recognition applications continue to expand into different aspects of our lives. For example, facial recognition technology can now be used instead of a password to unlock a user’s iPhone. Biometrics, including facial recognition, can be used to validate a user when making online purchases. This method is much more secure and convenient for the user than remembering user IDs and passwords. Facebook has developed facial recognition to identify and tag people in photos posted on the website. Facebook will even reach out to the person and ask “is this you?” If the person responds in the positive, the website has validated that instance of facial recognition for that person. Some facial recognition programs work without obtaining consent from the person: software using artificial intelligence compares the person’s face from a distance and matches the face to a database.

__

Many basic uses of facial recognition technology are relatively benign and receive little criticism. For example, the technology can be used like a high-tech key, allowing access to virtual or actual spaces. Instead of presenting a password, magnetic card or other such identifier, the face of the person seeking access is screened to ensure it matches an authorized identity. This eliminates the problem of stolen passwords or access cards. In heightened security situations, facial recognition could be used in conjunction with other forms of identification. The next step in facial recognition is to connect the systems to digital surveillance cameras, which can then be used to monitor spaces for the presence of individuals whose digital images are stored in databases. Images of those present in the spaces under watch can also be recorded and subsequently paired with identities. Surveillance power grows as various systems, public and private, are networked together to share information.  Facial recognition may create economic savings. Policing efficiency could be improved if tracking of suspected terrorists and criminals were automated, for example, and welfare fraud would be curtailed if individuals were prevented from assuming false identities. The potential benefits of facial recognition systems also extend well beyond the realm of crime, terrorism and finances. The software could, for example, help ensure that known child molesters are denied access to schoolyards.

Facial recognition also has the ability to reach quickly into the past for information, dramatically extending the effective temporal scope of surveillance data analysis. Once an image is included in the database, stored surveillance data can be searched for occurrences of that image with a few keystrokes. Searching videotape for evidence, by contrast, is extremely time-consuming. The process of determining whether a suspected terrorist visited Berlin in 2002, for example, could require watching thousands of hours of videotape from potentially hundreds of cameras. If those cameras operated digital facial recognition systems, and the suspect’s face were available in a linked database, the same search could conceivably be executed in a fraction of the time.

Lately, Mastercard’s mobile app has started using fingerprint or facial recognition to verify and authenticate online payments. Mobile devices with high-quality cameras have made facial recognition a viable option for verifying identity and authenticating people. Most phones being launched in the market come with inbuilt face recognition technology that lets users unlock their phones just by scanning their face. Along with its popularity across the digital world, enterprises are also taking this logical development seriously as a way to confirm an order or to make a payment.

Facial recognition technology requires further development, however, before reaching maximal surveillance utility.  The American Civil Liberties Union explains: “Facial recognition software is easily tripped up by changes in hairstyle or facial hair, by aging, weight gain or loss, and by simple disguises.” It adds that the U.S. Department of Defense “found very high error rates even under ideal conditions, where the subject is staring directly into the camera under bright lights.”  The Department of Defense study demonstrated significant rates of false positive test responses, in which observed faces were incorrectly matched with faces in the database. Many false negatives were also revealed, meaning the system failed to recognize faces in the database. The A.C.L.U. argues that the implementation of facial recognition systems is undesirable, because “these systems would miss a high proportion of suspects included in the photo database, and flag huge numbers of innocent people – thereby lessening vigilance, wasting precious manpower resources, and creating a false sense of security.”

__

Facial recognition is an emerging technology that has the potential to augment the way people live and appreciably shape the digital world in the next five years. It is a biometric technology that scans people’s faces or photographs and recognizes an individual. Face recognition uses the spatial geometry of distinguishing features of the face. It is a form of computer vision that uses the face to identify or to authenticate a person.  Facial recognition functions by examining the physical features of an individual’s face to distinguish uniqueness from others. In order to verify someone’s identity, the process can be broken down into three distinct steps: detection, unique faceprint creation, and finally, verification. In the first step, the technology captures an image of the individual’s face and analyzes it to identify the user’s face. Facial recognition is a category of biometric software that maps an individual’s facial features mathematically and stores the data as a faceprint. It discerns facial features like the space between the eyes, the depth of the eye sockets, the width of the nose, cheekbones, and jawline. The software reads approximately 80 nodal points and distills them into a numerical code, called a faceprint, which it records in its database. Once the faceprint is recorded, the software compares a person’s face against the captured data. Facial recognition software maps details and ratios of facial geometry using algorithms, the most popular of which computes what is called the “eigenface”, characterized by “eigenvalues”. The software uses deep learning algorithms to compare a live capture or digital image to the stored faceprint in order to verify an individual’s identity. More than half of the American population’s faceprints are recorded in a facial recognition database.

_

Computerized facial recognition is based on capturing an image of a face, extracting features, comparing it to images in a database, and identifying matches. As the computer cannot see the same way a human eye can, it needs to convert images into numbers representing the various features of a face. The sets of numbers representing one face are compared with numbers representing another face. The quality of the computer recognition system depends on the quality of the image and the mathematical algorithms used to convert a picture into numbers. Important factors for image quality are light, background, and position of the head. Pictures can be taken of still or moving subjects. Still subjects are photographed, for example by the police (mug shots) or by specially placed security cameras (access control). However, the most challenging application is the ability to use images captured by surveillance cameras (shopping malls, train stations, ATMs) or closed-circuit television (CCTV). In many cases the subject in those images is moving fast, and the light and the position of the head are not optimal.
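
As a toy illustration of this “picture into numbers” idea, the following Python/OpenCV sketch flattens two face images into numeric vectors and compares them by Euclidean distance. The file names are hypothetical, and raw pixel comparison is far cruder than what production systems use; it only demonstrates the principle.

```python
import cv2
import numpy as np

def face_to_vector(path, size=(100, 100)):
    """Convert a face image into a fixed-length numeric vector (raw pixels)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # read as grayscale intensities
    img = cv2.resize(img, size)                   # normalize resolution
    img = cv2.equalizeHist(img)                   # crude compensation for lighting
    return img.astype(np.float32).flatten() / 255.0

# Hypothetical image files; any two aligned face crops would do.
a = face_to_vector("face_a.jpg")
b = face_to_vector("face_b.jpg")
distance = np.linalg.norm(a - b)  # smaller distance = more similar pixel patterns
print(f"Euclidean distance between the two face vectors: {distance:.2f}")
```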

_

The techniques used for facial recognition can be feature-based (geometrical) or image-based (photometric). The geometric method relies on the shape and position of the facial features. It analyzes each of the facial features, also known as nodal points, independently; it then generates a full picture of the face. The most commonly used nodal points are: distance between the eyes, width of the nose, cheekbones, jaw line, chin, and depth of the eye sockets. Although there are about 80 nodal points on the face, most software measures only around a quarter of them. The points the software picks to measure have to be able to uniquely differentiate between people. In contrast, the image-based or photometric methods create a template of the features and use that template to identify faces.
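
A rough Python sketch of the geometric (feature-based) idea, with entirely hypothetical landmark coordinates: pairwise distances between nodal points, normalized by the inter-eye distance so the signature does not depend on image scale.

```python
import itertools
import numpy as np

# Hypothetical 2D landmark coordinates (pixels) for one face; in practice these
# would come from a landmark detector (eyes, nose tip, mouth corners, chin, ...).
landmarks = {
    "left_eye":  (112, 140), "right_eye": (188, 142),
    "nose_tip":  (150, 190), "mouth_l":   (122, 235),
    "mouth_r":   (178, 236), "chin":      (150, 290),
}

def geometric_signature(points):
    """Pairwise distances normalized by inter-eye distance (scale invariant)."""
    eye_dist = np.linalg.norm(np.subtract(points["left_eye"], points["right_eye"]))
    names = sorted(points)
    return np.array([
        np.linalg.norm(np.subtract(points[p], points[q])) / eye_dist
        for p, q in itertools.combinations(names, 2)
    ])

sig = geometric_signature(landmarks)
print(f"{len(sig)} normalized distances, e.g. first three: {sig[:3].round(3)}")
```

Two faces can then be compared by the distance between their signatures, which is the essence of the nodal-point approach described above.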

Algorithms used by the software tools are proprietary and secret. The most common method is eigenfaces, which is based on principal component analysis (PCA) to extract face features. The analysis can be very accurate, as many features can be extracted and all of the image data is analyzed together; no information is discarded. Another common method of creating templates is using neural networks. Despite continuous improvements, none of the current algorithms is 100% correct. The best verification rates are about 90% correct. At the same time, the majority of systems claim 1% false accept rates. The most common reasons for failure are the sensitivity of some methods to lighting, facial expressions, hairstyles, hair color, and facial hair.
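
The eigenface approach can be sketched in a few lines of NumPy. The “training faces” below are random placeholders standing in for aligned face crops; the point is only to show PCA producing eigenfaces and faces being compared by their projection coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder training set: 20 "faces", each a flattened 100x100 grayscale image.
# In a real system these would be aligned face crops from a gallery.
faces = rng.random((20, 100 * 100)).astype(np.float32)

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# SVD of the centered data: rows of vt are the principal axes (the "eigenfaces").
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                      # keep the 10 most significant components

def project(face_vector):
    """Represent a face by its coordinates in eigenface space (its template)."""
    return eigenfaces @ (face_vector - mean_face)

# Matching then reduces to comparing low-dimensional coefficient vectors.
probe, gallery_face = faces[0], faces[1]
score = np.linalg.norm(project(probe) - project(gallery_face))
print(f"Distance in eigenface space: {score:.2f}")
```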

Despite the differences in the mathematical methods used, face recognition analysis follows the same set of steps. The first step is image acquisition; once the image is captured, a head is identified. In some cases, before the feature extraction, it might be necessary to normalize the image. This is accomplished by scaling and rotating the image so that the size of the face and its positioning are optimal for the next step. After the image is presented to the computer, it begins feature extraction using one of the algorithms. Feature extraction includes localization of the face, detection of the facial features, and the actual extraction. Eyes, nose, and mouth are the first features identified by most of the techniques; other features are identified later. The extracted features are then used to generate a numerical map of each face analyzed. The generated templates are then compared to images stored in the database. The database used may consist of mug shots, composites of suspects, or video surveillance images. This process creates a list of hits with scores, very similar to search results on the Internet. It is often up to the user to determine whether the similarity produced is adequate to warrant declaration of a match. Even if the user does not have to make a decision, he or she most likely determines the settings used later by the computer to declare a match.
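
The front end of this pipeline (acquisition, head detection, normalization) can be illustrated with the Haar-cascade detector that ships with OpenCV. The input file name is hypothetical, and a real system would add eye-based alignment before feature extraction.

```python
import cv2

# OpenCV ships Haar cascades with the library; this one detects frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("snapshot.jpg")                 # hypothetical input frame
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)     # detection works on grayscale

# Steps 1-2: acquire the image and locate heads/faces in it.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(60, 60))

# Step 3: normalize each detected face (crop + rescale) before feature extraction.
for i, (x, y, w, h) in enumerate(faces):
    crop = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
    cv2.imwrite(f"face_{i}.png", crop)             # normalized face chip

print(f"Detected and normalized {len(faces)} face(s)")
```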

Depending on the software used, it is possible to compare one-to-one or one-to-many. The first is confirmation of someone’s identity; the second is identification of a person. Another application of facial recognition takes advantage of live, video-based surveillance. This can be used to identify people in retrospect, after their images were captured on a recording, or to identify a particular person during surveillance, while they are moving around. It can be useful for catching criminals in the act, spotting cheaters in casinos, or identifying terrorists.

Most of the earliest and current methods of face recognition are 2-dimensional (2-D): they use a flat image of a face. However, 3-D methods are also being developed, and some are already available commercially. The main difference in 3-D analysis is the use of the shape of the face, which adds information to the final template. The first step in a 3-D analysis is generation of a virtual mesh reflecting a person’s facial shape. This can be done by scanning a person’s face with near-infrared light and repeating the process several times. The nodal points are located on the mesh, generating thousands of reference points rather than the 20–30 used by 2-D methods. This makes 3-D methods more accurate, but also more invasive and more expensive.

A face recognition system may use 2D or 3D images as templates in its database. 2D images are common; 3D is less widely used. Each model has its own advantages and disadvantages. 2D recognition works well when the illumination is moderate, but it is affected by pose changes, and facial expressions and age-related changes in the face may reduce recognition rates. On the other hand, a biometric face recognition system using a 3D model database is getting cheaper and faster than before, but 3D model databases are still very few. A system using 2D model databases takes into consideration only two dimensions of the face, yet the face is a 3D object; this raises expectations of better performance from a 3D model database, though no experiment to date has been able to prove this popular belief. 3D data capture is not completely independent of light variations either: different sources of light may create different models of the same face. Besides, 3D systems are still more expensive than 2D face recognition systems. In short, 2D represents a face by intensity variation, while 3D represents a face by shape variation: a 3D face recognition system discriminates between faces on the basis of the shape of the features of a given face. Because 3D images use a more reliable basis for recognition, they are considered more accurate; however, much improvement is still needed in the field of 3D biometric face recognition.

If the image is 3D and the database contains 3D images, then matching will take place without any changes being made to the image. However, there is a challenge with databases that still hold 2D images: 3D capture provides a live, moving, variable subject that must be compared to a flat, stable image. New technology is addressing this challenge. When a 3D image is taken, different points (usually three) are identified. For example, the outside of the eye, the inside of the eye, and the tip of the nose will be pulled out and measured. Once those measurements are in place, an algorithm (a step-by-step procedure) is applied to the image to convert it to a 2D image. After conversion, the software compares the image with the 2D images in the database to find a potential match.
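
A toy numpy sketch of the geometric idea behind such a 3D-to-2D conversion, using a simple pinhole-camera projection. The three landmark coordinates and the focal length are invented for illustration; real systems use calibrated cameras and more sophisticated models.

```python
# Toy sketch of reducing 3-D landmark coordinates to a 2-D representation
# with a pinhole-camera projection. The three landmarks below (outer eye
# corner, inner eye corner, nose tip) and the focal length are made up.
import numpy as np

landmarks_3d = np.array([   # (x, y, z) in millimetres, z = distance from camera
    [-45.0,   0.0, 600.0],  # outside of the eye
    [-15.0,   0.0, 595.0],  # inside of the eye
    [  0.0, -40.0, 570.0],  # tip of the nose
])

f = 500.0  # assumed focal length in pixel units
projected_2d = f * landmarks_3d[:, :2] / landmarks_3d[:, 2:3]  # x' = f*x/z, y' = f*y/z
print(projected_2d)  # 2-D points that can be compared against a flat gallery image
```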

An extension of facial recognition and 3-D methods is using computer graphics to reconstruct faces from skulls. This allows identification of people from skulls when all other methods of identification fail. In the past, facial reconstruction was done manually by a forensic artist: clay was applied to the skull, following its contours, until a face was generated. Currently the reconstruction can be computerized by creating a head template from landmarks on the skull and overlaying it with computer-generated muscles. Once the face is generated, it is photographed and can be compared to various databases for identification in the same way as a live person’s image.

An important difference from other biometric solutions is that faces can be captured from some distance away, for example with surveillance cameras. Therefore face recognition can be applied without the subject knowing that he is being observed. This makes face recognition suitable for finding missing children or tracking down fugitive criminals using surveillance cameras.

_

Independent of the solution vendor, face recognition is accomplished as follows:

  1. A digital camera acquires an image of the face.
  2. Software locates the face in the image; this step is also called face detection. Face detection is one of the more difficult steps in face recognition, especially when using surveillance cameras to scan an entire crowd of people.
  3. When a face has been selected in the image, the software analyzes its spatial geometry. The techniques used to extract identifying features of a face are vendor dependent. In general, the software generates a template: a reduced set of data which uniquely identifies an individual based on the features of his face.
  4. The generated template is then compared with a set of known templates in a database (identification) or with one specific template (authentication).
  5. The software generates a score which indicates how well two templates match. How high the score must be before two templates are considered matching depends on the application: an authentication application requires a low FAR (false accept rate), so the score must be high before templates are declared matching. In a surveillance application, however, you would not want to miss any fugitive criminals, which requires a low FRR (false reject rate); you would therefore set a lower matching score and let security agents sort out the false positives (see the sketch below).
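
To illustrate that FAR/FRR trade-off, here is a toy Python sketch with invented similarity scores; the thresholds 0.5, 0.7 and 0.9 stand in for surveillance-style and authentication-style operating points.

```python
# How the operating threshold trades FAR against FRR (illustrative numbers).
# A higher threshold admits fewer impostors (low FAR) but rejects more
# genuine users (higher FRR); a lower threshold does the opposite.
genuine_scores  = [0.91, 0.88, 0.67, 0.95, 0.84]   # same-person comparisons (made up)
impostor_scores = [0.35, 0.62, 0.48, 0.71, 0.28]   # different-person comparisons (made up)

def rates(threshold):
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s <  threshold for s in genuine_scores)  / len(genuine_scores)
    return far, frr

for t in (0.5, 0.7, 0.9):   # surveillance might pick 0.5, authentication 0.9
    far, frr = rates(t)
    print(f"threshold={t}: FAR={far:.0%}, FRR={frr:.0%}")
```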

_

In order to develop a useful and applicable face recognition system, several factors need to be taken into account.

  1. The overall speed of the system from detection to recognition should be acceptable.
  2. The accuracy should be high.
  3. The system should be easy to update and enlarge, that is, it should be easy to increase the number of subjects that can be recognized.

_

Difficulties that often arise with face recognition are:

  • Variable image lighting and background make it more difficult for software to locate the face in the image.
  • Parts of the face may be covered; long hair, for example, makes it more difficult for the software to locate the face in the image and to recognize it.
  • The subject does not look directly into the camera; when the face is not held at the same angle, the software might not recognize the face.
  • Using different types of cameras (with different lighting, resolution, etc.) makes it more difficult for the software to recognize the face.
  • The face of a subject changes with ageing.
  • It is difficult to make face recognition secure enough for authentication purposes.

As you can see, there are some important constraints on using face recognition, and different vendors are working to resolve these issues. 3D face recognition solves some of them: using 3D images, the actual 3-dimensional form of the face is evaluated, which is not affected by lighting and does not change with ageing. Different viewing angles can also be compared more easily when using 3D images. Of course, the hardware for 3D face recognition is more expensive.

_

Face recognition or detection is a widely used technology which is undergoing constant development to improve its results. It is used in different environments such as forensic science, medicine, and surveillance or security systems, and it is also widely developed as a mobile application. There are many different kinds of face detection devices and many different algorithms operating them. Many researchers and scholars have been trying to implement the ideal face detection algorithm. Many algorithms have been used toward this goal, but not all constraints have been taken into consideration in developing the software. Some of the known algorithms are: Principal Component Analysis using eigenfaces, Linear Discriminant Analysis, Elastic Bunch Graph Matching using the Fisherface algorithm, Content Based Image Retrieval (Jyoti Jain), Hidden Markov models, and Dynamic Link Matching. The constraints considered in developing the software to yield accurate results are: position of the face, low lighting, sufficient data in the database, and facial expressions. To produce the ideal algorithm that yields 90% accurate results, 3 out of 4 constraints should be overcome.

__

High-quality cameras in mobile devices have made facial recognition a viable option for authentication as well as identification. Apple’s iPhone X, for example, includes Face ID technology that lets users unlock their phones with a faceprint mapped by the phone’s camera. The phone’s software, which is designed with 3-D modelling to resist being spoofed by photos or masks, captures and compares over 30,000 variables. Face ID can be used to authenticate purchases with Apple Pay and in the iTunes Store, App Store and iBooks Store. Apple encrypts faceprint data and stores it on the device itself rather than in the cloud, and authentication takes place directly on the device. Developers can use Amazon Rekognition, an image analysis service that’s part of the Amazon AI suite, to add facial recognition and analysis features to an application. Google provides a similar capability with its Google Cloud Vision API. The technology, which uses machine learning to detect, match and identify faces, is being used in a wide variety of ways, including entertainment and marketing. The Kinect motion gaming system, for example, uses facial recognition to differentiate among players. Smart advertisements in airports are now able to identify the gender, ethnicity and approximate age of passers-by and target the advertisement to the person’s demographic. Facebook uses facial recognition software to tag individuals in photographs. Each time an individual is tagged in a photograph, the software stores mapping information about that person’s facial characteristics. Once enough data has been collected, the software can use that information to identify a specific individual’s face when it appears in a new photograph. To protect people’s privacy, a feature called Photo Review notifies the Facebook member who has been identified.
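
As a concrete illustration of the developer-facing services mentioned above, here is a hedged sketch of a face comparison call to Amazon Rekognition through the boto3 SDK. It assumes configured AWS credentials; the file names and the 90% similarity threshold are placeholders, not recommendations.

```python
# Hedged sketch of face comparison with Amazon Rekognition via boto3.
# Requires AWS credentials; "id_photo.jpg" and "selfie.jpg" are placeholders.
import boto3

client = boto3.client("rekognition")

with open("id_photo.jpg", "rb") as src, open("selfie.jpg", "rb") as tgt:
    response = client.compare_faces(
        SourceImage={"Bytes": src.read()},
        TargetImage={"Bytes": tgt.read()},
        SimilarityThreshold=90,          # only return matches scoring >= 90%
    )

for match in response["FaceMatches"]:
    print(f"similarity: {match['Similarity']:.1f}%")
```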

__

The use of facial recognition is important in law enforcement, as the facial verification performed by a forensic scientist can help to convict criminals. For example, in 2003, a group of men was convicted in the United Kingdom for credit card fraud based on facial verification: their images were captured on a surveillance tape near an ATM, and their identities were later confirmed by a forensic specialist using facial recognition tools. Despite recent advances in the area, facial recognition in a surveillance system is often technically difficult. The main reason is difficulty in finding the face at all, which arises from people moving, wearing hats or sunglasses, and not facing the camera. Even if the face is found, identification might be difficult because the lighting (too bright or too dark) makes features hard to recognize. The resolution of the image and the camera angle are also important variables. Normalization performed by the computer might not be effective if the incoming image is of poor quality. One way to improve image quality is to use fixed cameras, especially in places like airports, government buildings, or sporting venues. In such cases all the people coming through are captured by the camera in a similar pose, making it easier for the computer to generate a template and compare it to a database. While most people do not object to the use of this technology to identify criminals, there are fears that images of people can be taken at any time, anywhere, without their permission. The ability to identify people with 100% certainty using face recognition is clearly still some time away; even so, facial recognition is an increasingly important identity verification method.

__

According to the World Face Recognition Biometrics Market, the face recognition market earned revenues of $186 million in 2005 and was expected to grow at a compound annual growth rate (CAGR) of 27.5 percent to reach $1021.1 million in 2012. According to a Transparency Market Research (TMR) report, “the global facial recognition market has gained popularity from diverse emerging tech trends, whether it’s switching from 2D facial recognition technology to 3D and facial analytics. Because of the higher accuracy in terms of identifying facial features, the market for 3D facial recognition technology segment is expected to record faster growth as compared to 2D facial recognition technology during the forecast period. In addition, a growth of the market for facial analytics, an emerging technology used for examining facial images of people without disturbing their privacy, is further expected to record steady growth as compared to that for 2D facial recognition technology.” In China, employees use their faces to gain entry to their office buildings. Multiple industries are increasingly adopting this emerging technology, including healthcare, retail, hospitality and manufacturing.

_____

The evaluation of a Facial Recognition system can be broken down into the following categories

  1. Universality

Unlike some of the other physical based Biometric modalities (such as Fingerprint Recognition and Hand Geometry Recognition), every individual has a face. So at least theoretically, everybody should be able to enrol into a Facial Recognition system.

  2. Uniqueness

The face by itself is not distinctly unique at all. For example, members of the same family, as well as identical twins, share the same types of facial features. At the level of the DNA code, it is the overall facial structure we inherit that produces these close resemblances.

  3. Permanence

The structure of the face can change greatly over the lifetime of an individual. As described earlier, the biggest factors affecting it are weight loss and weight gain, the aging process, and voluntary changes made to the face. As a result, it is quite likely that an individual will have to be re-enrolled repeatedly in the Facial Recognition system to compensate for these variations.

  4. Collectability

It can be quite difficult to extract the unique features of the face. This is primarily because any changes in the external environment can have a huge impact. For instance, the differences in the lighting, lighting angles, and the distance from which the raw images are captured can have a significant effect on the quality of the Enrolment and Verification Templates.

  5. Acceptability

This is the category where Facial Recognition suffers the most. As described, it can be used covertly, which greatly decreases its public acceptance.

  6. Resistance to circumvention

Unlike the other Biometric modalities, Facial Recognition systems can be very easily spoofed when 2-D models of the face are being used.

_____

Enhancing biometric precision:

The effectiveness of facial recognition technology depends on several key factors:

  • Image quality: Is the system attempting to distinguish between cooperative or non-cooperative subjects? Cooperative subjects are those who have voluntarily allowed their facial image to be captured. Non-cooperative subjects are those typically captured via surveillance cameras or by a witness using a smartphone.
  • Algorithms for identification: The second key performance factor is the power of the algorithms that are used to determine similarities between facial features. The algorithms analyze the relative position, size, and/or shape of the eyes, nose, cheekbones and jaw. These features are then used to search for other images with matching features.
  • Reliable databases: Lastly, facial recognition accuracy depends on the size and quality of the databases used; to recognize a face, you have to be able to compare it to something! The challenge is to establish matching points between the new image and the source image, in other words, photos of known individuals. Therefore, the larger the database of targeted images, the more likely a match can be found.

__

Is Facial Recognition Technology Expensive?

Some technologies are expensive, especially those that require specialized hardware, customizations, and on-site support. However, software-based applications tend to be more affordable, and luckily, face recognition falls into this category. Unless the event planner has excessive requirements, the associated investment is just a few cents per attendee expected to register. Face recognition is a sound investment: in most cases, the associated cost savings alone are enough to make it pay for itself. For example, it can increase check-in speed by 2-10 times. As a result, you can check in the same number of attendees using fewer check-in stations, fewer support staff, and a smaller registration area, all of which improve the bottom line for your event. As with everything, there is always a trade-off between quality and cost, so one should choose the vendor carefully in order to reap the benefits.

_

Are there High Requirements for Facial Recognition?

Uploading one good picture during online registration will suffice for most applications. Any device with a decent camera, such as a laptop, tablet, or cell phone, is enough to recognize a person during check-in. The video is streamed to the cloud and processed there, so the device computation requirements are minimal. Depending on your flexibility and expectations, the required internet bandwidth can be less than 0.5 Mbps (upload) for each device powered by face recognition. For full-blown implementations with stringent requirements, a little more effort might be needed, and on those occasions on-site support could be a good idea. However, for the vast majority of cases, and in comparison with other high-technology alternatives, face recognition is probably the most user-friendly and easy-to-use option: you simply point and recognize. Remote support suffices for most events, and even that will rarely be needed.

______

______

Face Detection versus Face Recognition:

The terms face detection and face recognition are sometimes used interchangeably. But there are actually some key differences. To help clear things up, let’s take a look at the term face detection and how it differs from the term face recognition.

What is Face Detection?

The definition of face detection refers to computer technology that is able to identify the presence of people’s faces within digital images. In order to work, face detection applications use machine learning and formulas known as algorithms to detect human faces within larger images. These larger images might contain numerous objects that aren’t faces, such as landscapes, buildings, and parts of human bodies other than the face (e.g. legs, shoulders and arms).

Face detection is a broader term than face recognition. Face detection just means that a system is able to identify that there is a human face present in an image or video. Face detection has several applications, only one of which is facial recognition. Face detection can also be used to autofocus cameras, to count how many people have entered a particular area, and even for marketing purposes; for example, advertisements can be displayed the moment a face is detected.
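
A minimal face detection sketch using OpenCV’s bundled Haar cascade shows the distinction: the code reports how many faces are present and where, but says nothing about who they are. The input file name is a placeholder.

```python
# Face detection (not recognition) with OpenCV's bundled Haar cascade.
# "group_photo.jpg" is a placeholder file name.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("group_photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"{len(faces)} face(s) present")        # detection answers "how many / where"
for (x, y, w, h) in faces:                    # not "who" -- that is recognition
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", image)
```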

What is Face Recognition?

Face recognition can confirm identity, and it is therefore used to control access to sensitive areas. One of the most important applications of face detection is facial recognition. Face recognition describes a biometric technology that goes far beyond recognizing when a human face is present: it actually attempts to establish whose face it is. Here, computer software attempts a positive identification of a face in a photo or video image against a pre-existing database of faces.

Facial detection is the capacity of software to identify that there are faces, but not whose they are. Facial recognition is the capacity to identify and validate who a face belongs to. Face detection and face recognition work together, but they are not the same action. Recognition begins with detection; however, detection does not determine whose faces are in the picture, only whether there are actually faces there. Detection is essentially the first step of recognition, distinguishing human faces from other objects in the image; facial recognition is then the process of identifying the person. Facial detection is the input to facial recognition. The basic idea behind facial recognition is to store the facial features of a person once detection is done. These features are called feature vectors and are stored in a database. When the person to be identified is given as input, the facial features (feature vectors) are extracted and compared with the ones in the database. The feature vector that gives the best match identifies the person (a toy illustration follows).
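
Here is that idea reduced to a toy nearest-neighbour search: enrolled feature vectors live in a database, and the probe vector is matched to the closest one. The 4-dimensional vectors and the names are invented purely for brevity; real systems use embeddings of 128 dimensions or more.

```python
# Toy illustration of recognition as nearest-neighbour search over stored
# feature vectors. The enrolled vectors and names below are invented.
import numpy as np

database = {                       # enrolled feature vectors
    "alice": np.array([0.11, 0.52, 0.33, 0.90]),
    "bob":   np.array([0.75, 0.20, 0.68, 0.14]),
}

probe = np.array([0.12, 0.50, 0.35, 0.88])   # vector extracted from the input face

best = min(database, key=lambda name: np.linalg.norm(database[name] - probe))
print("best match:", best)   # -> "alice" (smallest Euclidean distance)
```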

A standalone system may not have the processing power required to handle both face detection and recognition at the same time. Facial detection can be found in about every smartphone now: when you take a picture and the camera outlines the faces and focuses on them, it is detecting faces. This is used to take better selfies and to create facial effects, as in the Facebook photo app that allows you to add digital flowers in your hair or turn your face into something silly. Facial recognition is more commonly used to authenticate people as a password would; for example, Microsoft Windows now offers this option at login, letting you set up facial recognition through the webcam instead of using a password.

_

How Face Detection Works:

While the process is somewhat complex, face detection algorithms often begin by searching for human eyes. Eyes constitute what is known as a valley region and are one of the easiest features to detect. Once eyes are detected, the algorithm might then attempt to detect facial regions including eyebrows, the mouth, nose, nostrils and the iris. Once the algorithm surmises that it has detected a facial region, it can then apply additional tests to validate whether it has, in fact, detected a face.

Rectangular areas that match a face are extracted by sequentially searching candidate areas starting from the edge of the image. The Generalized Learning Vector Quantization (GLVQ) algorithm, which is based on the Minimum Classification Error criterion, is used to classify whether areas are face areas or not, enabling fast and accurate face detection.

_

Facial recognition is a type of biometric software that is able to identify or verify a person from a digital image by mapping out their features mathematically and saving the information as a faceprint. This technology uses deep learning algorithms to compare these images to ensure that it is the correct individual’s identity, making it very similar to other identifying technologies such as fingerprint matching, retina scanning and voice recognition. To start with, at the very baseline and initial application, the traditional use of face recognition resorted to algorithms to identify facial features (analyzing the position, size, and shape of the eyes, nose, cheekbones etc.). We can divide the basic, or so-called traditional, approach into two distinct ones:

– Geometric, focused on distinguishing features;

– Photometric, resorting to a statistical approach in which the image is broken down into values and those values are compared with the values of existing templates to eliminate discrepancies

Traditional face recognition algorithms work by extracting features, or landmarks, from the image of the face. For example, to extract facial features, an algorithm may analyse the shape and size of the eyes, the size of the nose, and its position relative to the eyes. It may also analyze the cheekbones and jaw. These extracted features are then used to search for other images with matching features. Traditional algorithms have proved to be fairly inaccurate as well as inefficient: they have not given good results, and they do not scale well because many people share similar facial features.

Over the years, the industry has moved towards deep learning. Convolutional Neural Networks have lately been employed to improve the accuracy of face recognition algorithms. These algorithms take an image as input and extract a highly complex set of features from it. These include features like width of face, height of face, width of nose, lips, eyes, ratios of widths, skin color tone, texture, and so on. Basically, a Convolutional Neural Network extracts a large number of features from an image, and these features are then matched with the ones stored in the database. Convolutional Neural Networks have proved to be far better than traditional algorithms. However, the biggest challenge that remains is scaling: these algorithms require heavy resources and computation to produce tangible results, so scalability is still a big issue.
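
The sketch below shows the shape of such a deep-learning approach: a small convolutional network maps a preprocessed face crop to a unit-length embedding vector, and two faces are compared by cosine similarity. The architecture, sizes and random inputs are invented for illustration; production networks are far deeper and trained on millions of labelled faces.

```python
# Minimal sketch of a convolutional network that maps a face image to an
# embedding vector, as modern deep-learning systems do. Architecture and
# sizes are invented; models like FaceNet or DeepFace are far larger.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceEmbedder(nn.Module):
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 24 * 24, embedding_dim)  # sized for 96x96 input

    def forward(self, x):
        x = self.conv(x).flatten(1)
        return F.normalize(self.fc(x), dim=1)   # unit-length embedding

model = FaceEmbedder().eval()
face_a = torch.randn(1, 3, 96, 96)   # random stand-ins for preprocessed face crops
face_b = torch.randn(1, 3, 96, 96)
with torch.no_grad():
    similarity = (model(face_a) * model(face_b)).sum().item()  # cosine similarity
print(f"cosine similarity: {similarity:.3f}")  # thresholded to decide a match
```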

_

Face recognition systems can operate in two basic modes:

  • Verification or authentication of a facial image: the system compares the input facial image with the facial image of the user who is requesting authentication. It is basically a 1:1 comparison.
  • Identification or recognition: the system compares the input facial image with all the facial images in a dataset, with the aim of finding the user that matches that face. It is basically a 1:N comparison.

All identification or authentication technologies operate using the same four stages: capture, extraction, comparison, and match/non-match [vide infra]. A minimal sketch of the two modes follows.
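
A toy sketch of both modes, with invented 128-dimensional templates and an arbitrary 0.5 distance threshold:

```python
# The two operating modes side by side: verification is one 1:1 comparison,
# identification is a 1:N sweep over the whole gallery. Data is invented.
import numpy as np

gallery = {"user_%d" % i: np.random.rand(128) for i in range(1000)}  # N templates
probe = gallery["user_42"] + np.random.normal(0, 0.01, 128)          # noisy capture

def verify(probe, claimed_id, threshold=0.5):      # 1:1 -- "am I who I claim to be?"
    return np.linalg.norm(gallery[claimed_id] - probe) < threshold

def identify(probe):                               # 1:N -- "who am I?"
    return min(gallery, key=lambda k: np.linalg.norm(gallery[k] - probe))

print(verify(probe, "user_42"))   # True
print(identify(probe))            # "user_42"
```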

_

Face Recognition Database:

A face recognition database is a record of identified images of human faces taken by a face recognition device. When a face recognition device scans a human face, it matches the identified image against the enrolled images stored in the database. Sometimes the database grows larger and needs additional space to store further identified images. In most law enforcement agencies, the face recognition database is kept in a secured location to prevent unauthorized access, so that it can only be reached by security authorities. A face recognition database can store facial measurements and information for one year, or even longer. Face recognition databases exist both as freely available collections and as company-owned assets. Facebook supposedly has one of the largest face databases, adding a face every time a person gets tagged on Facebook. When benchmarking an algorithm, it is advisable to use a standard test data set so that researchers can directly compare results.

_

Technical Requirements for systems using facial recognitions:

  1. Computers and network. A more powerful computer than the ones used for regular video surveillance systems, and an IP network that enables integration with an IP-based access control system. In addition, video management software (VMS) is needed, especially in larger installations.
  2. High-resolution network camera. Another critical component is high-resolution cameras, with a minimum of 1080p, to make operations more reliable and efficient.

_

Facial Recognition Software:

Facial recognition software is an application that can be used to automatically identify or verify individuals from video frames or digital images. Some facial recognition software uses algorithms that analyze specific facial features, such as the relative position, size and shape of a person’s nose, eyes, jaw and cheekbones. Unlike fingerprinting and voice recognition, facial recognition software can yield nearly instant results because subject consent and active participation are not required. Facial recognition software is primarily used as a protective security measure and for verifying personnel activities, such as attendance, computer access or traffic in secure work environments. Facial recognition software is also known as a facial recognition system or face recognition software.

Some of the potential uses of facial recognition software include:

  • To prevent voter fraud during elections
  • At ATMs instead of a PIN
  • As a computer login

Successful deployments include:

  • The German Federal Police employs a facial recognition system on a voluntary basis, which allows members to pass through the completely automated border security system in the Frankfurt International Airport.
  • The German Federal Criminal Police Office provides facial recognition on mugshot pictures for every German police agency.
  • The Australian Customs Service department uses a computerized border processing system known as SmartGate, which includes facial recognition software to compare the passport holder’s face with the image in the passport to certify that the correct owner is carrying the passport.
  • The U.S. State Department uses a large face recognition system with more than 75 million photographs, which is regularly used to process visas.
  • Almost all casinos employ a face recognition system to identify card counters or other suspect individuals on their blacklists.

_

Face Recognition Software Features:

Apart from identification other typical features are:

  • Emotion Detection
  • Age Detection
  • Gender Detection
  • Attention Measurement
  • Sentiment Detection
  • Ethnicity Detection

_

In a nutshell,

Face Detection – Face vs. Non-face

Face Recognition – One person vs. the specific (verification), or one person vs. all the others (identification).

_____

_____

Why we choose face recognition over other biometric?

There are a number of reasons to choose face recognition.

  1. It requires no physical interaction on behalf of the user.
  2. It is accurate and allows for high enrolment and verification rates.
  3. It does not require an expert to interpret the comparison result.
  4. It can use your existing hardware infrastructure; existing cameras and image capture devices will work with no problems.
  5. It is the only biometric that allows you to perform passive identification in a one-to-many environment, e.g. identifying a terrorist in a busy airport terminal.
  6. One key advantage of a facial recognition system is that it does not require the cooperation of the test subject to work. Properly designed systems installed in airports, multiplexes, and other public places can identify individuals in a crowd without passers-by even being aware of the system.

Iris recognition devices are often perceived as too invasive, and voice recognition devices can fail, especially when the user has a sore throat. PIN- or password-based authentication procedures are too easy to crack. Face recognition, by contrast, is non-intrusive, since it is based on images recorded by a distant camera, and it can be effective even if the user is not aware of the existence of the face recognition system. However, compared with other biometric techniques, face recognition may not be the most reliable and efficient. Quality measures are very important in facial recognition systems, as large variations are possible in face images. Factors such as illumination, expression, pose and noise during face capture can affect the performance of facial recognition systems. Among all biometric systems, facial recognition has the highest false acceptance and rejection rates; thus questions have been raised about the effectiveness of face recognition software in railway and airport security.

_

Fingerprint recognition vs. facial recognition:

Biometric modalities are extensively used in personal identification and authentication applications. Physiological modalities are comparatively more stable than behavioral ones and stay unaffected by factors like mood, psychology and fatigue. Fingerprint recognition is one of the most popular modalities, commonly used for applications like physical and logical access control, employee identification, attendance and customer identification. Friction ridges on the fingertips are commonly called fingerprints, and they are one of the most popular physiological characteristics used for personal identification. Having its roots in forensic applications, fingerprint recognition has gained considerable market penetration and popularity in recent years due to extensive use in consumer electronics like mobile phones and in national ID programs. Unlike some other biometric methods of identification, fingerprint recognition does not require the user to stay steady or hold a specific posture, as iris or retina recognition does. The user just needs to touch the scanning surface of the recognition equipment, and it is done.

Facial structure is also a physiological modality that can be used for personal identification and authentication. Human facial structure is an individual characteristic, and facial recognition biometrics makes use of this fact to identify and authenticate individuals. Human brains have a natural ability to remember and distinguish different faces: we identify and authenticate people just by recognizing their faces on a daily basis, recognizing family, friends, colleagues, neighbors and pets primarily by their facial structure. A facial recognition system can identify people by processing their digital images if their facial recognition identity has been pre-established. Such a system can be useful for identifying people in crowds, for example in airport terminals and railway stations. Facial recognition systems can capture multiple images per second, compare them with what is stored in the database, and produce results.

The following table compares fingerprint recognition and facial recognition side by side:

Fingerprint recognition | Facial recognition
----------------------- | ------------------
Intrusive; the subject is required to touch the equipment in order to present a biometric sample. | Non-intrusive; the subject is not required to contact the equipment to present the biometric sample.
User consent is required. | User consent may not be required.
Extensively used in identification and authentication. | Extensively used in surveillance and public applications.
High distinctiveness and unique characteristics; fingerprints do not repeat, not even in twins. | Low distinctiveness; facial characteristics may repeat in people, e.g. in twins.
Highly accurate. | Low accuracy.
Subjects cannot be identified from a distance. | Subjects can be identified from a distance.
Small template size. | Large template size.
High permanence and stability; little affected by age. | Medium permanence and stability; may be affected by age.
High security and high confidence level. | Low security and low confidence level.
Medium collectability; low exposure due to the minute detail involved. | High collectability; the face is highly exposed and its details are larger.
Requires a specific set of hardware and software. | Can be completely software based and can make use of existing digital images.
Medium universality; fingerprints may not be available in some individuals. | High universality; facial features are found in all human beings.
Low potential for circumvention; not easily spoofable. | High potential for circumvention.
Medium level of acceptability. | High level of acceptability.

The fingerprint is an “excellent way” to open a device, but it is not a security feature. There are some kinds of fingerprint patterns that are common among many people, and without a 100% complete digital fingerprint image, there is probably another that is similar enough to the original. Another obstacle is that fingerprint readers can perform poorly if the fingers are dirty, greasy or wet, when the weather is too cold, or when a person’s fingerprints have been worn down by years of manual labor or by an accident. In addition, fingerprints can be captured and reproduced from different sources, such as a high-resolution photograph. A hacker could turn them into a latex replica and use it on a typical reader to gain access. A famous case occurred in 2014, when the German Defence Minister, Ursula von der Leyen, had her fingerprints hacked from a high-resolution photograph.

_____

Facial recognition can be used as an alternative to other electronic access control systems that use other types of entry devices, but with the added advantages listed below.

The Advantages of using Facial Recognition in Access Control:

Non-Contact:

Unlike other biometric characteristics such as handprints or fingerprints, facial recognition is non-contact, and therefore, more hygienic and easy to use.

Simplifies access control:

Users simply need to present themselves, and if their image is recognized by the system, access is instantly granted. No PINs to enter or smart cards to present.

Unique Credential:

Because your face is your access credential, it cannot be duplicated, lost, or stolen.

Flexible:

Like in other electronic access control systems where parameters can be set to control doors based on presented credentials, facial recognition capable systems can be programmed to limit access to certain time periods and/or for specific persons.

Audit Trails:

Unlike other electronic access systems, where audit trails may be just time-stamped records of the comings and goings of people, facial recognition systems store images of all transactions. This means that images of people who gained, or in some cases failed to gain, access are on file and recoverable, should the need arise.

Interface-able:

It can be used to work with an existing access control system.

_______

Reasons to consider FRT as part of your authentication strategy:

  1. Greater Accuracy: 3D mapping, deep learning and other advances make FRT more reliable and harder to trick.
  2. Better Security: Research shows a 1-in-50,000 chance of a phone with Touch ID being unlocked with the wrong fingerprint. With 3D facial modelling, the probability drops to roughly 1-in-1,000,000.
  3. Convenient and Frictionless: FRT is easy. It can be used passively, without a user’s knowledge, or actively, such as having a person “smile for the camera.”
  4. Smarter Integration: Face recognition tools are generally easy to integrate with existing security infrastructures, saving time and cost on software redevelopment.
  5. Automation: Automated and accurate 24/7 security eliminates the need for security guards to visually monitor entry points, perform security checks and view security cameras.

Of course, no technology is entirely without risk. Facial recognition is highly data-intense, which can make processing and storage an obstacle. Despite enormous advances, recognizing faces from multiple camera angles or with obstructions (such as hats) is still not perfect. Plus, there have been controversies related to privacy issues, particularly in retail settings. This is why face recognition should be combined with other multifactor methods to strengthen user access, never as a single factor by itself.

_______

_______

History of facial recognition technology:

Pioneers of automated face recognition include Woody Bledsoe, Helen Chan Wolf, and Charles Bisson.  One of the pioneers of facial recognition, Woodrow Bledsoe, devised a technique called “man-machine facial recognition” in the 1960s. Bledsoe’s technique, limited by contemporary computing and imaging technologies, involved classifying photographs of faces digitized by hand using a “RAND Tablet”. The RAND tablet was an electronic human input device consisting of a stylus that could be positioned at horizontal and vertical coordinates on a grid containing 1,000,000 distinct points on a 10-inch-by-10-inch tablet. The stylus’ grid position was communicated via electromagnetic pulses. In Bledsoe’s method, an operator would utilize a RAND tablet to acquire the coordinate locations of various facial features. Among the facial features recorded by the system were the coordinate locations of the photographed individual’s hairline, eyes, and nose. Records associating the name of the photographed individual with the numerical data recorded by the RAND tablet were then inserted into a database. Given a photograph of an unknown face, the system would use a method based on distances between facial features to retrieve the image in the database most closely associated with the provided photograph.

Bledsoe himself noted that there were many factors inhibiting a computer’s ability to accurately recognize a single individual in two different photographs. Photographs of the same person might capture a person’s face at vastly different angles, at different ages, with varying facial expressions, and under different lighting conditions. Such minor changes in pose and environment could easily confound a computer algorithm that uses a distance-based approach to classify two faces as identical.

After Bledsoe left PRI in 1966, this work was continued at the Stanford Research Institute, primarily by Peter Hart. In experiments performed on a database of over 2000 photographs, the computer consistently outperformed humans when presented with the same recognition tasks (Bledsoe 1968). Peter Hart (1996) enthusiastically recalled the project with the exclamation, “It really worked!”

By about 1997, the system developed by Christoph von der Malsburg and graduate students of the University of Bochum in Germany and the University of Southern California in the United States outperformed most systems, with those of the Massachusetts Institute of Technology and the University of Maryland rated next. The Bochum system was developed through funding by the United States Army Research Laboratory. The software was sold as ZN-Face and used by customers such as Deutsche Bank and operators of airports and other busy locations. The software was “robust enough to make identifications from less-than-perfect face views. It can also often see through such impediments to identification as mustaches, beards, changed hair styles and glasses—even sunglasses”.

In 2006, the performance of the latest face recognition algorithms was evaluated in the Face Recognition Grand Challenge (FRGC). High-resolution face images, 3-D face scans, and iris images were used in the tests. The results indicated that the new algorithms were 10 times more accurate than the face recognition algorithms of 2002 and 100 times more accurate than those of 1995. Some of the algorithms were able to outperform human participants in recognizing faces and could uniquely identify identical twins. U.S. Government-sponsored evaluations and challenge problems have helped spur over two orders of magnitude of improvement in face-recognition system performance. Since 1993, the error rate of automatic face-recognition systems has decreased by a factor of 272. The reduction applies to systems that match people with face images captured in studio or mugshot environments. In Moore’s law terms, the error rate decreased by one-half every two years.

_______

Face Recognition Grand Challenge (FRGC):

Not since the mid-1990s has there been such a renewed interest in developing new methods for automatic face recognition. This renewed interest has been fuelled by advances in computer vision techniques, computer design, sensor design, and interest in fielding face recognition systems. These techniques hold the promise of reducing the error rate in face recognition systems by an order of magnitude over the Face Recognition Vendor Test (FRVT) 2002 results. The Face Recognition Grand Challenge (FRGC) is being conducted to fulfil the promise of these new techniques. The primary goal of the FRGC is to promote and advance face recognition technology designed to support existing face recognition efforts in the U.S. Government. FRGC will develop new face recognition techniques and develop prototype systems while increasing performance by an order of magnitude. The FRGC is open to face recognition researchers and developers in companies, academia, and research institutions.

There are three main contenders for improving face recognition algorithms: high resolution images, three-dimensional (3D) face recognition, and new preprocessing techniques. The FRGC is simultaneously pursuing and will assess the merit of all three techniques. Current face recognition systems are designed to work on relatively small still facial images. The traditional method for measuring the size of a face is the number of pixels between the centers of the eyes. In current images there are 40 to 60 pixels between the centers of the eyes (10,000 to 20,000 pixels on the face). In the FRGC, high resolution images consist of facial images with 250 pixels between the centers of the eyes on average. The FRGC will facilitate the development of new algorithms that take advantage of the additional information inherent in high resolution images.
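
In pixel terms, the size metric the FRGC text uses is simply the Euclidean distance between the two eye centres; a two-line computation makes this concrete (the eye coordinates below are hypothetical):

```python
# Computing the "size" of a face as pixels between eye centres, the metric
# used by the FRGC. The eye coordinates here are hypothetical.
import math

left_eye, right_eye = (312, 420), (562, 424)   # (x, y) pixel coordinates
interocular = math.dist(left_eye, right_eye)
print(f"{interocular:.0f} pixels between eye centres")  # FRGC high-res target: ~250
```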

Three-dimensional (3D) face recognition algorithms identify faces from the 3D shape of a person’s face. In current face recognition systems, changes in lighting (illumination) and pose of the face reduce performance. Because the shape of faces is not affected by changes in lighting or pose, 3D face recognition has the potential to improve performance under these conditions.

Recently there have been advances in computer graphics and computer vision on modelling lighting and pose changes in facial imagery. These advances have led to the development of new computer algorithms that can automatically correct for lighting and pose changes in facial imagery. These new algorithms work by preprocessing a facial image to correct for lighting and pose prior to being processed through a face recognition system. The preprocessing portion of the FRGC will measure the impact of new preprocessing algorithms on recognition performance.

The FRGC is designed to fairly and comprehensively conduct experimentation on all three lines of technology development. It is fair and comprehensive because the challenge problems are supported by the Biometric Experimentation Environment (BEE). The BEE is able to precisely specify the challenge problems, and this specification allows for an apples-to-apples comparison of results. The BEE is based on XML, an open World Wide Web protocol for specifying information. The BEE describes challenge problems and documents FRGC results in a common format, which ensures that results from different researchers and developers can be compared. The FRGC will improve the capabilities of automatic face recognition systems through experimentation with clearly stated goals and challenge problems. Researchers and developers can develop new algorithms and systems that meet the FRGC goals, and this development is facilitated by the FRGC challenge problems. The FRGC is structured around two challenge problems, version 1 (ver1) and version 2 (ver2). Ver1 is designed to introduce participants to the FRGC challenge problem format and its supporting infrastructure. Ver2 is designed to challenge researchers to meet the FRGC performance goal.

The FRGC is jointly sponsored by several government agencies interested in improving the capabilities of face recognition technology:

  1. Federal Bureau of Investigation
  2. Intelligence Technology Innovation Center
  3. National Institute of Justice
  4. National Institute of Standards and Technology
  5. Technical Support Working Group
  6. U.S. Department of Homeland Security, Science & Technology

The National Institute of Standards and Technology (NIST) is directing and managing FRGC.

_______

_______

Top facial recognition technologies:

All the software web giants now regularly publish their theoretical discoveries in the fields of artificial intelligence, image recognition and face analysis in an attempt to further our understanding as rapidly as possible.

Academia:

The GaussianFace algorithm, developed in 2014 by researchers at The Chinese University of Hong Kong, achieved facial identification scores of 98.52%, compared with the 97.53% achieved by humans. An excellent score, despite weaknesses regarding the memory capacity required and calculation times.

Facebook and Google:

Again in 2014, Facebook announced the launch of its DeepFace program which can determine whether two photographed faces belong to the same person, with an accuracy rate of 97.25%. When taking the same test, humans answer correctly in 97.53% of cases, or just 0.28% better than the Facebook program.

In June 2015, Google went one better with FaceNet, a new recognition system with unrivalled scores: 99.63% accuracy on the reference test Labeled Faces in the Wild (LFW), and 95% on the YouTube Faces DB. Using an artificial neural network and a new algorithm, the company from Mountain View managed to link a face to its owner with almost perfect results. This technology is incorporated into Google Photos and used to sort pictures and automatically tag them based on the people recognized. Proving its importance in the biometrics landscape, it was quickly followed by the online release of an unofficial open-source version known as OpenFace.

Microsoft, IBM and Megvii:

A study by MIT researchers in February 2018 found that Microsoft, IBM and China-based Megvii (Face++) tools had high error rates when identifying darker-skinned women compared to lighter-skinned men. At the end of June 2018, Microsoft announced in a blog post that it had made solid improvements to its biased facial recognition technology.

Amazon:

By May 2018, Amazon was already actively promoting its cloud-based face recognition service, Rekognition, to law enforcement agencies. The solution can recognize as many as 100 people in a single image and can perform face matches against databases containing tens of millions of faces. In July 2018, Amazon’s facial recognition technology falsely identified 28 members of the US Congress as people arrested for crimes.

Key biometric matching technology providers:

At the end of May 2018, the US Homeland Security Science and Technology Directorate published the results of sponsored tests done in March at the Maryland Test Facility (MdTF). These real-life tests measured the performance of 12 facial recognition systems in a corridor measuring 2 m by 2.5 m. Gemalto’s solution, using its facial recognition software (LFIS), achieved excellent results with a face acquisition rate of 99.44% in less than 5 seconds (against an average of 68%), a vendor true identification rate of 98% in less than 5 seconds (average: 66%), and an error rate of 1% (average: 32%).

Ooma’s Face and Audio Recognition:

Netatmo is not the only home security system that incorporates facial recognition. Ooma’s home security system includes a smart video camera with AI for both facial and audio recognition when a person comes into frame. It also features geofencing capabilities and sensors, so you can automatically arm and disarm it within a customizable radius, and it automatically calls 911 if the sensors detect smoke.

Face Recognition by Netgear:

Joining the ranks of video home security, Netgear’s Arlo offers HD video surveillance with two-way audio, instant alerts when sound or motion is detected, and free cloud recording. The best part is that you can check it on your phone, Apple TV, or laptop whenever you want. The system can sync up to 15 cameras so you can keep a careful eye on every room of the house.

Honeywell Partners with Alexa:

Honeywell is also now launching its new indoor and outdoor facial recognition security system and, partnering with Amazon’s Alexa, it comes with all the perks of having a smart home system. The system includes motion sensors, video recording and live streaming that you can check from a simple app on your smartphone.

Face Recognition by Nest:

And for those of you who are ready for some serious security muscle, skip the guard and check out Nest Cam IQ Outdoor. Made to withstand harsh weather and tampering, the Nest Cam IQ watches over your property, 24/7. With the ability to detect a person from 50 feet away, you can get ahead of any unwanted guests. And when used in conjunction with Nest Aware, the camera can also recognize familiar faces and send alerts to your phone through their app. Nest Cam IQ has some pretty impressive features to help keep your family safe and to not miss any special moments.

Facial emotion detection and recognition:

Emotion recognition (from real-time or static images) is the process of mapping facial expressions to identify emotions such as disgust, joy, anger, surprise, fear or sadness on a human face with image processing software. Its popularity comes from the vast areas of potential applications. It is different from facial recognition, whose goal is to identify a person, not an emotion. Facial expression may be represented by geometric or appearance features, parameters extracted from transformed images such as eigenfaces, dynamic models, and 3D models. Providers include Kairos (face and emotion recognition for brand marketing), Noldus, Affectiva, Sightcorp and Nviso, among others.

_____

_____

Facial Recognition Search Engines and Social Media:

_

Best Facial Recognition Search Engines to Search Faces Online:

Google Face Search:

It’s been a while since Google introduced its Reverse Image Search option. However, did you know that it can also be used as a Google face recognition tool, letting you search for any number of faces that are similar to a particular face? Google without doubt has the largest database, and the same goes for its images. So this is the first place where you have a chance of finding similar faces. The technology embedded is not exactly face recognition, but the search algorithms involved are similar enough that you will usually end up with pleasing results. Just upload the picture of the person that you want to find or identify, and Google will return similar faces based on many factors.

How to make a Google Face Recognition Search?

  1. The first step involves going to the Google Search Engine. Click on the ‘Google Images’ to get the Image Search option.
  2. Click on the ‘Camera’ Icon in the search bar and then use the ‘Upload Image’ option. Upload the image that you are searching for.
  3. Once uploaded, you will get results of similar images. To narrow down the search to faces, append “&imgtype=face” to the results page URL and press “Enter.”
  4. By doing this, you will only be shown images with clear focus faces.
  5. You can also use advanced search options found in settings and set the image type to “Faces” instead.

_

Some of the other facial recognition search engines include:

PicWiser

Betaface

TwinsOrNot

Pictriev

Viewdle

Face Detection

NeoFace Watch

Yandex Reverse Image Search

Baidu Reverse Image Search

Image Raider

ImageBrief

Karma Decay Image Search on Reddit

TinEye Reverse Image Search

Use any of the above-mentioned face recognition search tools to find similar faces that resemble the face you have, based on your priorities and their functionalities.

______

Social media platforms have adopted facial recognition capabilities to diversify their functionalities in order to attract a wider user base amidst stiff competition from different applications.

Facial recognition on Facebook:

Facebook’s tag suggestions program scans photographs uploaded by users, identifies people who appear in photographs and enables them to be tagged. To identify faces, the tool first separates faces from other objects in the photograph. It then standardises faces based on certain attributes, such as size. Facebook gives each face a signature in the form of a string of numbers. This signature is then matched against “face templates” to locate matches from a database of images. A face template distinguishes the facial signature of a particular user from other images. Face templates are created from photographs uploaded by users, such as profile images. When Facebook finds a match between a photograph and the template, it suggests tagging. Facebook only stores templates and not facial signatures.

DeepFace is a deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images. It employs a nine-layer neural net with over 120 million connection weights, and was trained on four million images uploaded by Facebook users. DeepFace processes images of faces in two steps. First it corrects the angle of a face so that the person in the picture faces forward, using a 3-D model of an “average” forward-looking face. Then the deep learning comes in, as a simulated neural network works out a numerical description of the reoriented face. If DeepFace comes up with similar enough descriptions from two different images, it decides they must show the same face. The system is said to be 97% accurate, compared to 85% for the FBI’s Next Generation Identification system. Facial recognition systems have also been used for emotion recognition: in 2016 Facebook acquired the emotion detection startup FacioMetrics. Facebook recently launched a facial recognition feature called Photo Review, which alerts users every time a photo with their face is posted. This makes it easier to tag yourself in new photos, keep an eye on unflattering photos without tagging yourself, or remind a friend that they promised not to post one.
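The decision step just described, comparing two numerical face descriptions and declaring a match if they are close enough, can be illustrated in a few lines of Python. This is a minimal sketch, not Facebook’s implementation: the 128-dimensional embeddings are random stand-ins and the 0.6 threshold is an assumption chosen for illustration.

import numpy as np

# Stand-in face embeddings; a real system would obtain these from
# a deep network such as the one described above.
emb_a = np.random.rand(128)
emb_b = np.random.rand(128)

def same_face(a, b, threshold=0.6):
    # Declare a match when the descriptions are close enough.
    # The threshold is an assumption; real systems tune it to trade
    # off false accepts against false rejects.
    return float(np.linalg.norm(a - b)) < threshold

print(same_face(emb_a, emb_b))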

_

 

DeepFace uses a 3-D model to rotate faces, virtually, so that they face the camera.

_

In Russia, VK.com has launched its own version of Photo Review called FindFace. For those unfamiliar with VK.com, it is essentially the Russian version of Facebook. FindFace goes a step further than Photo Review and does exactly what its name implies: it finds faces. If you want to find someone on VK.com but only have a photo of them, you can simply log in and search with a jpeg or png photo, as long as it is under 5 MB. The FindFace app, which uses facial recognition to identify strangers on social media, has taken Russia by storm. Users put a picture of anyone’s face into the app, and it compares the image to millions of profile pictures on VK, the so-called ‘Facebook of Russia’, which has around 280 million users. Although FindFace doesn’t always match the image to the correct VK profile, its creators claim it works 70 per cent of the time. To make things easier for the searcher, it provides the profiles of 10 people who look similar, as well as the most likely match. FindFace’s creators are working with Moscow police to integrate their software into the city’s CCTV camera network, so authorities will be able to detect wanted suspects as they walk down the street.

_

SnapChat’s animated lenses, which use facial recognition technology, revolutionized and redefined the selfie by allowing users to add filters that change the way they look. The selection of filters changes every day; examples include one that makes users look like an old and wrinkled version of themselves, one that airbrushes their skin, and one that places a virtual flower crown on top of their head. The dog filter is the most popular and has helped propel SnapChat’s continued success, with celebrities such as Gigi Hadid, Kim Kardashian and the like regularly posting videos of themselves wearing it.

______

Ways to Find a Person via their Photo:

  1. The hard way: Locate the number Facebook uses to identify the photo and try to search Facebook for it.
  2. The easiest way: Use Google Images to find all the places where that photo is used online. Google Images will also find photos that are like the one you uploaded.
  3. Use Tineye to do a reverse image search. (Upload or paste the photo’s URL.) Tineye will only return results for the exact same image.

_____

_____

Facial recognition on mobile phones:

_

The iPhone X shines a grid of 30,000 infrared dots on a face and makes a crude 3D model. This method works from a metre away.

_

Face ID:

Apple introduced Face ID on the flagship iPhone X as a biometric authentication successor to Touch ID, a fingerprint-based system. Face ID has a facial recognition sensor that consists of two parts: a “Romeo” module that projects more than 30,000 infrared dots onto the user’s face, and a “Juliet” module that reads the pattern. The pattern is sent to a local “Secure Enclave” in the device’s central processing unit (CPU) to confirm a match with the phone owner’s face. This generates a 3D facial map stored in a local, secured area of the device’s processor, inaccessible by Apple itself. The system learns from changes in a user’s face over time, and can therefore successfully recognize the owner while wearing glasses, hats, scarves, makeup, many types of sunglasses, or with changes in beard. The system will not work with eyes closed, in an effort to prevent unauthorized access. It also works in the dark, using a “Flood Illuminator”: a dedicated infrared flash that throws invisible infrared light onto the user’s face so the 30,000 facial points can be read properly.

Apple Face ID is the new replacement to Touch ID. Face ID is enabled by the TrueDepth camera and is simple to set up. It projects and analyzes more than 30,000 invisible dots to create a precise depth map of your face. Face ID works with iPhone X and unlocks only when you’re looking at it. It’s designed to resist spoofing by photos or masks. Your facial map is encrypted and protected by the Secure Enclave. And authentication happens instantly on the device, not in the cloud.

TrueDepth camera system consists of multiple innovative technologies:

  • Infrared camera – reads the dot pattern, captures an infrared image, then sends the data to the Secure Enclave in the A11 Bionic chip to confirm a match;
  • Dot projector – More than 30,000 invisible dots are projected onto your face to build your unique facial map;
  • Flood illuminator – Invisible infrared light helps identify your face even when it’s dark.

It is estimated that the probability of a random person unlocking Face ID is 1 in 1,000,000, compared with 1 in 50,000 for a single Touch ID fingerprint. The TrueDepth camera uses 30,000 infrared dots harmlessly projected onto your face for depth mapping. The infrared camera reads back these features, and characteristics beneath the skin’s surface, to create a secure biometric image of your face. The front-side Flood Illuminator helps create normalized lighting in any condition, light or dark. The infrared front-facing camera also reads micro-movements to confirm that a live subject, and not an image, is activating or unlocking the iPhone. For additional security, Face ID is attention aware, meaning it unlocks your iPhone X only when you look toward the device with your eyes open.

Face ID is among the most secure and reliable authentication technologies on the market at the moment. It is tolerant of appearance changes such as facial hair or cosmetic makeup, scarves, hats, glasses, contact lenses and many sunglasses, and it works even in total darkness. Moreover, Face ID meets international safety standards, so it can authorize purchases from the iTunes Store, App Store and iBooks Store, and payments with Apple Pay.

Fooling Face ID:

First of all, Face ID can’t be fooled by a photo because it takes a 3D facial scan to unlock a device. Face ID is also “attention aware,” a feature Apple implemented for extra security.  Face ID will only unlock your device when you look in the direction of the iPhone X with your eyes open, meaning Face ID only works when there’s a live person in front of it. Attention aware is optional, though, and can be turned off if you choose. Most people will want to leave attention awareness on, but for users unable to focus their attention on the iPhone, turning it off will allow the iPhone X to unlock with just a facial scan.  Face ID is also sensitive enough to tell the difference between you and someone who is wearing a mask of your face. Apple trained Face ID with hyperrealistic masks created by Hollywood studios, ensuring a mask of a person wouldn’t be able to fool the Face ID system.

Touch ID locks a device after five failed attempts, but with Face ID, Apple is only allowing two failed attempts. After two incorrect scans, the iPhone X will lock and require your passcode to unlock again. You can also discreetly disable Face ID by pressing the side button and volume buttons at the same time. This will lock it and require a passcode to access your device.

Face ID Privacy:

On iPhones with Touch ID, your fingerprint data is stored in a Secure Enclave on the device, and the same is true of Face ID. Your facial map is encrypted and kept in the Secure Enclave, with authentication happening entirely on your device. No Face ID data is uploaded to iCloud or sent to Apple.

Multiple Faces in Face ID?

When using Touch ID, multiple fingerprints can be added to a device so more than one person can unlock it. That is not possible with Face ID. Face ID makes a map of a single face and that’s the only face that can unlock the iPhone X. To add a new face, the existing face must be removed.

Face ID at an Angle:

You don’t need to hold the iPhone X right in front of your face for it to make a Face ID scan. On stage at the keynote event, it was shown being held at a comfortable viewing angle, and held flat while making an Apple Pay payment at a payment terminal.

_____

As has often occurred in the history of Apple smartphones, Apple has decided to take a technology that already existed, redesign it and refine it and then adopt it on its new devices. And as we have already seen, all the Android manufacturers have decided to follow the trend by adopting the same functions, in order to profit from all the buzz generated by Apple.

Some of the best face recognition apps for Android and iOS:

FaceLock

True Key

FindFace

FaceVault

Face Detection

_____

Face recognition on Windows 10:

Windows 10’s facial recognition system, branded Windows Hello, doesn’t work with any old webcam. Instead, it requires dedicated infrared cameras that can detect not only the shape and position of facial features, but also their depth. It’s this level of sophistication that allows Windows 10 both to tell identical twins apart and to prevent people from logging into a PC by holding a photograph of the account holder in front of the camera. Alas, that means facial recognition will be unavailable to the vast majority of Windows 10 users until infrared cameras become a common feature in laptops and tablets, which is by no means inevitable. Only a select few models from companies such as Dell, Lenovo and Asus currently support the RealSense cameras, made by Intel.

_____

_____

Facial Recognition Startups:

Today, facial recognition is used in dozens of ways, including a startup called Faception that says it can determine whether or not someone is a terrorist based on how he or she looks. Security and surveillance are certainly the top uses of facial recognition. Not surprisingly, the two most-funded facial recognition startups are from China.

  1. SenseTime:

Hong Kong-based SenseTime, with a $4.5 billion valuation, is one of the most valuable artificial intelligence startups in the world. Big Chinese tech companies like Alibaba are obvious investors, along with U.S. chipmaker Qualcomm, among others. SenseTime’s technology is an integral part of the Chinese government’s surveillance programs of its 1.3 billion citizens, including the use of smart glasses by police for facial recognition of jaywalkers, who could lose points on their social credit score. That hasn’t stopped institutions like MIT from partnering with SenseTime on “moonshot” research into machine learning and other AI technologies. Nor has it prevented SenseTime, founded only four years ago, from opening a “smart health” lab in New Jersey to apply computer vision to medical problems.

  2. Megvii (Face++):

Founded back in 2011, Beijing-based Megvii hasn’t been on the same stratospheric tear as SenseTime, but the facial recognition startup has raised $607 million, good enough to earn it a spot in the Unicorn Club and on the list of the world’s most valuable AI startups. In addition to counting Alibaba as an investor, Megvii (also known as Face++), has gotten funding from the world’s biggest fintech startup, Ant Financial, a spin off from Alibaba. In other words, there’s no reason to think that Megvii won’t have access to more capital in the future if needed. Megvii offers a number of different facial recognition solutions, from the usual face detection and search capability, to the more subjective Miss Universe-type applications, like Beauty Score. Megvii also offers various body and image recognition applications, as part of the company’s full spa treatment.

  3. Ever AI:

Founded in 2013, San Francisco-based Ever AI has raised $29 million, including a $16 million Series B a year ago. Investors include prestigious VC firms like Khosla Ventures. Ever AI has developed, over the course of four years and 12 billion images, a facial recognition platform it claims rivals those of major tech companies like Amazon, Apple, Facebook and Google, while being the first of its kind made available to outside enterprises. Ever AI says its facial recognition technology can be used in a variety of applications, from retail personalization to payment authentication for fraud prevention. The startup recently added a number of new features, including “liveness detection”, the ability to determine whether the face detected is live, as opposed to a photo. Prominent customers include SoftBank and its humanoid robot Pepper.

  4. Faceter: Facial Recognition Using Blockchain:

Founded in 2014, Silicon Valley’s Faceter raised $28.6 million through an Initial Coin Offering (ICO). That’s not the only unusual thing about this startup, as it is developing a decentralized system of real-time video surveillance through both facial and body recognition. Faceter claims its new twist will process and analyze video data online using some sort of combination of crypto mining and fog computing. The company cites statistics that one billion video cameras will be installed in the next three years. That’s a pretty powerful business case, even if their tech sounds a bit too buzzy.

  5. AnyVision:

Founded in 2015, AnyVision is an Israeli startup. The company offers solutions for surveillance (ominously named Better Tomorrow) and mobile. AnyVision also markets its Better Tomorrow platform for commercial purposes.

  6. Reconova:

Reconova is another facial recognition startup out of China. In addition to developing facial recognition technologies with better than 95 percent accuracy, Reconova also produces hardware, including what the company claims is China’s first HD network camera with a built-in chip specially designed for facial recognition. Other products include a Face-ID verification terminal that captures faces and compares them against profile pictures read from ID documents or QR codes.

  7. Cynny: Facial Recognition for Video Creation:

Founded in 2013, Cynny out of Italy has raised about $14 million for a facial recognition platform for mobile called MorphCast. Cynny claims that it can tailor videos (and marketing) based solely on the user’s facial features via the mobile device’s camera. The AI can recognize gender with 95 percent accuracy, and reputedly does even better with emotions, scoring 97 percent accuracy. It can even determine age within seven years, which is better than most sideshow carnies.

  8. FaceFirst: Facial Recognition for Catching Shoplifters:

Founded way, way back in 2007, FaceFirst out of the Los Angeles area has only raised $9.5 million, suggesting it might actually be turning some sort of profit. The startup focuses on surveillance, particularly in retail. FaceFirst says its retail security facial recognition platform reduces shoplifting by 34 percent, and in-store violence by 91 percent on average. In addition, FaceFirst recently rolled out a fraud detection solution that catches people who try to return products which they did not purchase at a store.

_______

_______

Facial recognition growth, market and surveys:

_

When it comes to preferred modalities, facial recognition dominated as the biometric thought most likely to be on the increase over the next few years, at 38%, followed by multimodal (22%) and iris (11%), all surpassing fingerprint at 9%.

-Biometrics Institute, 2017

_

While the United States currently offers the largest market for face recognition opportunities, the Asia-Pacific region is seeing the fastest growth in the sector, and China leads the field. Facial recognition is the hot new tech topic in China, from banks to airports to police. Authorities are now expanding the facial recognition sunglasses program as police begin to use the glasses on the outskirts of Beijing. China is also setting up and perfecting a countrywide video surveillance network: 176 million surveillance cameras were in use at the end of 2017, and 626 million are expected by 2020. In India, the Aadhaar project is the largest biometric database in the world; it already provides a unique digital identity number to 1.2 billion residents. UIDAI, the authority in charge, announced that facial authentication will be available as an add-on service in fusion mode, along with one more authentication factor such as fingerprint, iris or OTP. In Brazil, the Superior Electoral Court (Tribunal Superior Eleitoral) is involved in a nationwide biometric data collection project; the aim is to create a biometric database and unique ID card by 2020, recording the information of 140 million citizens. In Africa, Gabon, Cameroon and Burkina Faso have chosen Gemalto to meet the challenges of biometric identity. Russia’s Central Bank has been deploying a country-wide program since 2017 designed to collect faces, voices, iris scans and fingerprints.

_

Face recognition markets:

The facial recognition market is expected to grow to $7.7 billion in 2022 from $4 billion in 2017. That’s because facial recognition has all kinds of commercial applications. It can be used for everything from surveillance to marketing. A study in June 2016 estimated that by 2022, the global face recognition market would generate $9.6 billion of revenue, supported by a compound annual growth rate (CAGR) of 21.3% over the period 2016-2022. This increases to 22.9% growth if we take government administrations alone, the biggest drivers of this growth.

Technological advancements in facial recognition biometrics, more mobile devices equipped with cameras and the rising popularity of media cloud services are among the drivers that will push the image recognition market to $86 billion in annual revenue by 2025, according to a new report from Allied Market Research. The report, which focuses on various aspects of the Image Recognition Market from 2018 to 2025, says the market generated $17.91 billion in 2017, and will grow at a 21.8 percent CAGR.

_____

Brookings survey finds 50 percent of people are unfavorable to facial recognition software in retail stores to prevent theft:

The Brookings survey was an online U.S. national poll undertaken with 2,000 adult internet users in September 2018. It was overseen by Darrell M. West, vice president of Governance Studies and director of the Center for Technology Innovation at the Brookings Institution and the author of The Future of Work:  Robots, AI, and Automation. Responses were weighted using gender, age, and region to match the demographics of the national internet population as estimated by the U.S. Census Bureau’s Current Population Survey. Fifty percent are unfavorable to the use of facial recognition software in retail stores to prevent theft, according to a survey undertaken by researchers at the Brookings Institution. Forty-four percent are unfavorable to using this software in airports to establish identity, 44 percent are unfavorable to it in stadiums as a way to protect people, and 38 percent are unfavorable to its use in schools to protect students.

_____

A January 2018 survey of over 1,000 Americans conducted by facial recognition software company FaceFirst found that a majority of people (64 percent) think facial recognition should be used to help identify terrorists and as a crime deterrent. Another 73 percent said they would feel less safe if cameras were removed from airports, and 89 percent think a terrorist or mass-shooter attack could occur in the next year at a concert, sporting event or airport.

With regards to facial recognition technology and the general public’s view of it being employed in a public safety setting, the survey had a positive slant.

  • 54 percent of Americans plan to use face recognition to protect their personal data or already own a device that uses face recognition
  • Nearly two thirds (64 percent) of Americans think security personnel guarding airports, concerts, sporting events and other public areas should be allowed to use face recognition to help recognize terrorists and prevent crime
  • 77 percent of Americans think that security personnel guarding airports and tourist attractions are not likely to remember the names and faces of potential terrorists on a watch list without face recognition

The survey’s results demonstrate that as people adopt face recognition to secure their own data and devices, they grow increasingly comfortable with similar technology being used to secure public spaces. As often happens, consumer technology innovation drives adoption in the commercial markets.

______

______

Technique and technology of facial recognition:

_

Computer vision:

Computer vision is an interdisciplinary field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.  Computer vision is concerned with the automatic extraction, analysis and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner.  As a technological discipline, computer vision seeks to apply its theories and models for the construction of computer vision systems.  Artificial intelligence and computer vision share other topics such as pattern recognition and learning techniques. Consequently, computer vision is sometimes seen as a part of the artificial intelligence field or the computer science field in general. Artificial Intelligence (AI) attempts to create a machine that simulates human intelligence to identify and use the right pieces of knowledge at the time of decision-making and solving problems. It deals with computational models that can think and behave like the way humans do. Computer Vision is a super exciting part of Artificial Intelligence where we attempt to get intelligence out of visual data. Intelligence can be scene/object detection, face detection, face recognition, and facial analysis.

____

Techniques for face recognition:

Essentially, the process of face recognition is performed in two steps: the first involves feature extraction and selection, and the second is the classification of objects. Later developments introduced varying technologies to the procedure. Some of the most notable include the following techniques:

  1. Traditional (2D facial recognition):

Some face recognition algorithms identify facial features by extracting landmarks, or features, from an image of the subject’s face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features. Other algorithms normalize a gallery of face images and then compress the face data, only saving the data in the image that is useful for face recognition. A probe image is then compared with the face data. One of the earliest successful systems was based on template matching techniques applied to a set of salient facial features, providing a sort of compressed face representation.

Recognition algorithms can be divided into two main approaches: geometric, which looks at distinguishing features, and photometric, which is a statistical approach that distils an image into values and compares the values with templates to eliminate variances. Some classify these algorithms into two broad categories: holistic and feature-based models. The former attempts to recognize the face in its entirety, while the latter subdivides the face into components (features) and analyzes each one as well as its spatial location with respect to the other features.

Popular recognition algorithms include principal component analysis using eigenfaces, linear discriminant analysis (Fisherfaces), elastic bunch graph matching, the hidden Markov model, multilinear subspace learning using tensor representation, and neuronal-motivated dynamic link matching.

  2. 3-Dimensional recognition [vide infra]:

Three-dimensional face recognition uses 3D sensors to capture information about the shape of a face. This information is then used to identify distinctive features on the surface of a face, such as the contour of the eye sockets, nose, and chin. One advantage of 3D face recognition is that it is not affected by changes in lighting like other techniques. It can also identify a face from a range of viewing angles, including a profile view. Three-dimensional data points from a face vastly improve the precision of face recognition. 3D research is enhanced by the development of sophisticated sensors that do a better job of capturing 3D face imagery. The sensors work by projecting structured light onto the face. Up to a dozen or more of these image sensors can be placed on the same CMOS chip, with each sensor capturing a different part of the spectrum. Even a perfect 3D matching technique could be sensitive to expressions; to address this, a group at the Technion applied tools from metric geometry to treat expressions as isometries. A newer method captures a 3D picture using three tracking cameras pointed at different angles: one camera pointing at the front of the subject, a second to the side, and a third at an angle. All these cameras work together to track a subject’s face in real time and to detect and recognize it.

  3. Skin (surface) texture analysis:

Facial recognition alone may not always be enough to verify or identify an image. Skin texture analysis turns the unique lines, patterns, and spots apparent in a person’s skin into a mathematical space. Surface texture analysis works much the same way facial recognition does. A picture is taken of a patch of skin, called a skinprint. That patch is then broken up into smaller blocks. Using algorithms to turn the patch into a mathematical, measurable space, the system distinguishes lines, pores and the actual skin texture. It can identify differences between identical twins, which is not yet possible using facial recognition software alone. Tests have shown that with the addition of skin texture analysis, performance in recognizing faces can increase by 20 to 25 percent. The surface texture analysis (STA) algorithm operates on the top percentage of results as determined by the local feature analysis. STA creates a skinprint and performs either a 1:1 or 1:N match depending on whether you are looking for verification or identification.
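The closing distinction between a 1:1 and a 1:N match is easy to see in code. The sketch below is a minimal illustration under stated assumptions: the templates are random stand-in vectors, and the distance threshold is invented for the example.

import numpy as np

def verify(probe, template, threshold=0.5):
    # 1:1 match: does the probe belong to one claimed identity?
    return float(np.linalg.norm(probe - template)) < threshold

def identify(probe, gallery, threshold=0.5):
    # 1:N match: search the whole gallery for the closest identity.
    name, dist = min(
        ((n, float(np.linalg.norm(probe - t))) for n, t in gallery.items()),
        key=lambda pair: pair[1])
    return name if dist < threshold else None

gallery = {"alice": np.random.rand(64), "bob": np.random.rand(64)}
probe = np.random.rand(64)
print(verify(probe, gallery["alice"]))  # verification
print(identify(probe, gallery))         # identification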

Skin texture analysis has an advantage over other systems. It is relatively insensitive to changes in expression, including blinking, frowning or smiling and has the ability to compensate for mustache or beard growth and the appearance of eyeglasses. The system is also uniform with respect to race and gender. However, it is not a perfect system. There are some factors that could get in the way of recognition, including:

  • Significant glare on eyeglasses or wearing sunglasses
  • Long hair obscuring the central part of the face
  • Poor lighting that would cause the face to be over- or under-exposed
  • Lack of resolution (image was taken too far away)

_

Facial recognition combining different techniques:

As every method has its advantages and disadvantages, technology companies have amalgamated the traditional, 3D recognition and skin texture analysis techniques to create recognition systems with higher rates of success. Such combined systems are relatively insensitive to changes in expression, including blinking, frowning or smiling; can compensate for mustache or beard growth and the appearance of eyeglasses; and perform uniformly with respect to race and gender.

_

  4. Thermal cameras:

A different way to capture input data for face recognition is to use thermal cameras; with this procedure, the cameras detect only the shape of the head and ignore accessories such as glasses, hats, or makeup. Unlike conventional cameras, thermal cameras can capture facial imagery even in low-light and night-time conditions without using a flash and exposing the position of the camera. However, a problem with using thermal pictures for face recognition is that databases of thermal face images are limited.

_

Figure above shows close-up of the infrared illuminator. The light is invisible to the human eye, but creates a day-like environment for the surveillance cameras.

_

The thermal patterns of faces are derived primarily from the pattern of superficial blood vessels under the skin. The skin directly above a blood vessel is on average 0.1 degree centigrade warmer than the adjacent skin. The vein tissue structure of the face is unique to each person (even identical twins); the IR image is therefore also distinctive. The advantage of IR is that face detection is relatively easy. It is less sensitive to variation in illumination (and even works in total darkness) and it is useful for detecting disguises. However, it is sensitive to changes in the ambient environment, the images it produces are low resolution, and the necessary sensors and cameras are expensive. It is possible that there are very specific applications for which IR would be appropriate. It is also possible that IR can be used with other image technologies to produce visual and thermal fusion.

_

Diego Socolinsky and Andrea Selinger (2004) researched the use of thermal face recognition in real-life, operational scenarios, and at the same time built a new database of thermal face images. The research used low-sensitivity, low-resolution ferroelectric sensors capable of acquiring long-wave thermal infrared (LWIR). The results showed that a fusion of LWIR and regular visual cameras gave the best results for outdoor probes. Indoor results showed that visual imagery had 97.05% accuracy, LWIR 93.93%, and the fusion 98.40%; on outdoor probes, however, visual had 67.06%, LWIR 83.03%, and the fusion 89.02%. The study used 240 subjects over a period of 10 weeks to create the new database, with data collected on sunny, rainy, and cloudy days. In 2018, researchers from the U.S. Army Research Laboratory (ARL) developed a technique that allows facial imagery obtained with a thermal camera to be matched against databases captured with a conventional camera. [vide infra]

______

______

How 2D Facial Recognition Works:

Facial recognition technologies can perform a number of functions, including (1) detecting a face in an image; (2) estimating personal characteristics, such as an individual’s age, race, or gender; (3) verifying identity by accepting or denying the identity claimed by a person; and (4) identifying an individual by matching an image of an unknown person to a gallery of known people. According to experts, most modern facial recognition systems generally follow the steps shown in the figure below:

_

Technologies vary, but here are the basic steps:

Step 1. A picture of your face is captured from a photo or video. Your face might appear alone or in a crowd. Your image may show you looking straight ahead or nearly in profile.

Step 2. Facial recognition software reads the geometry of your face. Key factors include the distance between your eyes and the distance from forehead to chin. The software identifies facial landmarks — one system identifies 68 of them — that are key to distinguishing your face. The result: your facial signature.

Step 3. Your facial signature — a mathematical formula — is compared to a database of known faces. And consider this: at least 117 million Americans have images of their faces in one or more police databases. According to a May 2018 report, the FBI has had access to 412 million facial images for searches.

Step 4. A determination is made. Your faceprint may match that of an image in a facial recognition system database.

__

Face recognition systems use computer algorithms to pick out specific, distinctive details about a person’s face. These details, such as distance between the eyes or shape of the chin, are then converted into a mathematical representation and compared to data on other faces collected in a face recognition database. The data about a particular face is often called a face template and is distinct from a photograph because it’s designed to only include certain details that can be used to distinguish one face from another.  Some face recognition systems, instead of positively identifying an unknown person, are designed to calculate a probability match score between the unknown person and specific face templates stored in the database. These systems will offer up several potential matches, ranked in order of likelihood of correct identification, instead of just returning a single result.  Face recognition systems vary in their ability to identify people under challenging conditions such as poor lighting, low quality image resolution, and suboptimal angle of view.

_

All identification or authentication technologies operate using the following four stages:

  1. Capture: a physical sample is captured by the system during enrolment and also during the identification or verification process.
  2. Extraction: unique data is extracted from the sample and a template is created.
  3. Comparison: the template is then compared with a new sample.
  4. Match/non-match: the system decides whether the features extracted from the new sample are a match or a non-match. (A minimal code sketch of these four stages follows this list.)
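The four stages map naturally onto a small program skeleton. The sketch below is schematic under stated assumptions: the helper functions are hypothetical stand-ins (a real system would wire in a camera driver and a trained feature extractor), and the 0.5 threshold is invented for the example.

import numpy as np

def capture_sample():
    # Stage 1, capture: stands in for grabbing a frame from a camera.
    return np.random.rand(64)

def extract_template(sample):
    # Stage 2, extraction: stands in for a trained feature extractor;
    # here we just normalise the raw sample into a "template".
    return sample / np.linalg.norm(sample)

enrolled = extract_template(capture_sample())   # enrolment
fresh = extract_template(capture_sample())      # new sample

# Stage 3, comparison: distance between stored and fresh templates.
distance = np.linalg.norm(enrolled - fresh)

# Stage 4, match/non-match decision.
print("match" if distance < 0.5 else "non-match")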

_

Facial recognition software is based on the ability to first recognize faces, which is a technological feat in itself. If you look in the mirror, you can see that your face has certain distinguishable landmarks. These are the peaks and valleys that make up the different facial features. There are about 80 nodal points on a human face.

Here are a few of the nodal points that are measured by the software:

  • Distance between the eyes
  • Width of the nose
  • Depth of the eye socket
  • Cheekbones
  • Jaw line and
  • Chin

These nodal points are measured to create a numerical code, a string of numbers that represents the face in the database. This code is called a faceprint. Only 14 to 22 nodal points are needed for FaceIt software to complete the recognition process.

Various nodal points (called landmarks) exist on every face: the top of the chin, the outside edge of each eye, the inner edge of each eyebrow, etc., as seen in the figure below. A machine learning algorithm is able to find these nodal points on any face:

_

Facial recognition systems generate what is called a faceprint — a unique code applicable to one individual — by measuring the distance between points like the width of a person’s nose.

These so-called “nodal points” — there are more than 80 points that a facial recognition system checks — are combined mathematically to build the faceprint, which can then be used to search through an identity database.
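To make the idea concrete, here is a small sketch of how a few nodal-point measurements could be combined into a faceprint vector. The landmark coordinates and the choice of measurements are hypothetical; real systems use many more points and proprietary measurements.

import math

# Hypothetical landmark coordinates in pixels (x, y).
landmarks = {
    "left_eye": (120, 95), "right_eye": (185, 95),
    "nose_left": (140, 140), "nose_right": (165, 140),
    "chin": (152, 230),
}

def dist(a, b):
    return math.dist(landmarks[a], landmarks[b])

# A toy faceprint: a few nodal-point measurements in a fixed order.
faceprint = [
    dist("left_eye", "right_eye"),    # distance between the eyes
    dist("nose_left", "nose_right"),  # width of the nose
    dist("left_eye", "chin"),         # eye-to-chin distance
]
print(faceprint)  # [65.0, 25.0, ~138.7]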

_

Feature vector:

In order to understand how Face Recognition works, let us first get an idea of the concept of a feature vector.

Every Machine Learning algorithm takes a dataset as input and learns from this data. The algorithm goes through the data and identifies patterns in it. For instance, suppose we wish to identify whose face is present in a given image; there are multiple things we can look at as patterns:

  • Height/width of the face (though raw height and width may not be reliable, since the image could be rescaled to a smaller face; what remains unchanged even after rescaling are the ratios, for example the ratio of the height of the face to its width).
  • Color of the face.
  • Width of other parts of the face like lips, nose, etc.

Clearly, there is a pattern here: different faces have different dimensions, while similar faces have similar dimensions. The challenging part is to convert a particular face into numbers, since Machine Learning algorithms only understand numbers. This numerical representation of a “face” (an element of the training set) is termed a feature vector. A feature vector comprises various numbers in a specific order.

As a simple example, we can map a “face” into a feature vector which can comprise various features like:

  • Height of face (cm)
  • Width of face (cm)
  • Average color of face (R, G, B)
  • Width of lips (cm)
  • Height of nose (cm)

_

Essentially, given an image, we can map out various features and convert it into a feature vector like:

Height of face (cm): 23.1
Width of face (cm): 15.8
Average color of face (R, G, B): (255, 224, 189)
Width of lips (cm): 5.2
Height of nose (cm): 4.4

So, our image is now a vector that could be represented as (23.1, 15.8, 255, 224, 189, 5.2, 4.4). Of course there could be countless other features that could be derived from the image (for instance, hair color, facial hair, spectacles, etc.). However, for the example, let us consider just these 5 simple features.

Now, once we have encoded each image into a feature vector, the problem becomes much simpler. Clearly, when we have two faces (images) that represent the same person, the feature vectors derived will be quite similar. Put another way, the “distance” between the two feature vectors will be quite small.

Machine Learning can help us here with 2 things:

  1. Deriving the feature vector: it is difficult to manually list all the useful features because there are just so many. A Machine Learning algorithm can automatically derive many such features. For instance, a complex feature could be the ratio of the height of the nose to the width of the forehead; it would be quite difficult for a human to list all such “second order” features.
  2. Matching algorithms: once the feature vectors have been obtained, a Machine Learning algorithm needs to match a new image with the set of feature vectors present in the corpus (a minimal sketch follows this list).
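As a minimal sketch of the matching step, the snippet below scores a new face against a small corpus of feature vectors and returns candidates ranked by distance, best first, much like the probability-scoring systems described earlier. The corpus, the five-number vectors and the plain Euclidean distance are illustrative assumptions.

import numpy as np

# Hypothetical corpus: one feature vector per enrolled person.
corpus = {name: np.random.rand(5) for name in ("ann", "ben", "cho")}
new_face = np.random.rand(5)

def rank_matches(probe, gallery, top_k=2):
    # Score every enrolled vector by its distance to the probe and
    # return the top_k closest candidates, best first.
    scored = sorted(
        (float(np.linalg.norm(probe - vec)), name)
        for name, vec in gallery.items())
    return scored[:top_k]

for distance, name in rank_matches(new_face, corpus):
    print(f"{name}: distance {distance:.3f}")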

Now we have a basic understanding of how Face Recognition works. Facial Recognition can be implemented either as a fully automated system or as a semi-automated system. With the former, no human intervention is required; with the latter, some degree of it is mandatory. The semi-automated approach, in which a human reviews the candidate matches, is generally the preferred method when deploying a Facial Recognition device.

_

Facial recognition in a nutshell:

Face detection occurs first. The algorithms typically cycle through various boxes, looking for faces of a certain dimension. Inside those boxes, the system detects facial landmarks and assigns a score, providing a confidence level regarding whether the image is a face. Once confirmed as a face, the technology creates a template, generally based on factors such as the relative distance between the eyes, the spot just under the nose and above the lip, and ear to ear. The mathematical representation developed is then compared to other detected faces. The similarity in ratios between distances on various points of the face, typically focused around anchors such as the nose, the eyes, the ears and the mouth, yields a score on a logarithmic scale. Close matches range from 3 to 5, and definite non-matches are less than 1. When the same image serves as both probe and target, a score of 40+ is possible.

_____

The Camera environment is important:

When the user faces the camera, standing about two feet from it, the system will locate the user’s face and perform matches against the claimed identity or the facial database. It is possible that the user may need to move and reattempt the verification based on his facial position. The system usually comes to a decision in less than 5 seconds. No matter what technique is used, facial recognition works better when there is a good set of facial images to work with.  It is very important to have consistent and controlled lighting, camera resolution, face position, and limited motion.

Lighting:

The lighting on the face must be bright enough so that the camera sensor doesn’t introduce noise. There also needs to be enough light to provide enough contrast for the recognition algorithm. Many of these systems require at least 300 to 500 lux of illumination. This is about the light we see in a normal office working environment. The lighting should be consistent so that shadows don’t introduce spurious or false representation of the face.

Resolution:

IP Camera resolution depends on the type of recognition system used and the total field of view. In general, the wider the field of view the more resolution you will need. Many systems require a certain minimum number of pixels across certain facial features.  For example, you may require 60 pixels between two eyes (interpupillary distance). Some other systems require at least 80 to 120 pixels across the face.  Once we know the resolution requirements of the facial recognition system, we can calculate the resolution of the camera.

Here’s an example.  If we use a system that uses the distance between two eyes, we first need to know what this distance is expected to be.  A database and study were done for the 1988 Anthropometric Survey of US Army Personnel. In this study, the mean dimension for men was about 65 mm, and women averaged about 62 mm.  The largest distance measured was 74 mm, while the shortest distance was about 55 mm.  To calculate the IP camera resolution required for this type of facial recognition, it is best to use the shortest dimension of 55 mm.

  • We need 60 pixels per 55 mm or 60/55 = 1.09 pixels/mm.
  • Next, we need to decide what field of view (FOV) we would like. Suppose we decide that the horizontal field of view is 1524 mm (which is about 5 ft wide).
  • To achieve 1.09 pixels/mm across the 1524 mm FOV, we need 1.09 x 1524 which equals 1663 horizontal pixels.
  • Next, we look for a camera that has at least this number of horizontal pixels. More is better.
  • A 2 Megapixel (1920 x 1080) camera exceeds this requirement, while a 1-megapixel camera (1280 x 1024) would not work.

The 2 Megapixel IP camera with 1920 pixels across the horizontal is best for this application.  If we want to view a larger area, then we would need to increase the camera resolution.  For example, a 10 ft wide area, would require twice the resolution.
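The arithmetic above is easy to wrap in a small calculator so it can be re-run for other fields of view. The 60-pixel eye-distance requirement and the 55 mm worst-case interpupillary distance are taken from the worked example; everything else follows from them.

import math

PIXELS_BETWEEN_EYES = 60   # system requirement, in pixels
MIN_EYE_DISTANCE_MM = 55   # shortest interpupillary distance, in mm

def required_horizontal_pixels(fov_mm):
    # Pixels per mm needed, scaled up to the whole field of view.
    return math.ceil(PIXELS_BETWEEN_EYES / MIN_EYE_DISTANCE_MM * fov_mm)

print(required_horizontal_pixels(1524))   # 5 ft FOV  -> 1663 pixels
print(required_horizontal_pixels(3048))   # 10 ft FOV -> 3326 pixels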

Face Position:

One facial recognition software product has been used in churches to determine who has attended. Churchix software works quite well in church, where the lighting level is right, everyone is facing forward, and they even have the same expression on their face (most of the time). In many applications, the challenge is to ensure that everyone you are scanning is looking in the right direction. It is best to install a face recognition system in a doorway or gate area where it is likely that everyone is facing the right direction and approaching the camera a few people at a time. There are some systems that can be used in less constrained environments, like crowds. Recognizing a face in a crowd is harder than the biometric face recognition used for door access control. The latest 3D facial recognition systems are getting much better at finding people in a crowd, and they are most effective at choke points where a few people at a time pass in front of the camera.

Limited Motion:

Camera systems need to be able to capture the face, and if there is too much motion, it could produce a bad video image. To improve performance when people are moving quickly, facial systems require cameras that can support high frame rates. Usually, 30 fps is adequate, but if more motion is expected, a camera with 60 fps may be required. Facial recognition algorithms need some time to process data, so if there are too many people flowing through the system, the recognition process may not be fast enough. Higher performance computer servers are required as the number of detections increase.

_

The entire facial recognition process by video camera goes through several stages.

The system receives the video stream and analyzes all of the frames in real time. When a face is detected, the algorithm captures several shots of it and crops the faces out of them. In real life, the person can move, turn and even lower the head, so at the next stage the system aligns the faces in each image so that the algorithm can analyze a frontal view for higher accuracy. After that, the system builds a unique vector consisting of a description of the facial features (for instance, distance between the eyes, length of the forehead, nose, skin tone, etc.). The actual process of vector creation is much more complicated, but it is similar to the way we notice features in faces. The vector is not a simple table of parameters; rather, it is a complex “idea” of what defines this particular face and how it differs from others, converted into a data vector through a certain mathematical process. A vector is created for each frame containing the face, so we end up with a collection of vectors. The system analyzes them, clustering and extracting key vectors which are sent to storage. This cluster of vectors is then compared to others, and the comparator sends the results through an API to any other system.
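For experimentation, the open-source face_recognition Python library (built on dlib) exposes this detect-encode-compare pipeline directly. The sketch below assumes two image files, known.jpg and probe.jpg, exist on disk; it is an illustration with a different toolchain, not the specific system described above.

import face_recognition

# Detection + encoding: load each image, find faces, and compute a
# 128-number description vector for the first face found.
known = face_recognition.face_encodings(
    face_recognition.load_image_file("known.jpg"))
probe = face_recognition.face_encodings(
    face_recognition.load_image_file("probe.jpg"))

if known and probe:
    # Comparison: distance between vectors, then a yes/no decision
    # (compare_faces applies its default tolerance of 0.6).
    distance = face_recognition.face_distance([known[0]], probe[0])[0]
    match = face_recognition.compare_faces([known[0]], probe[0])[0]
    print(f"distance {distance:.3f}, match: {match}")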

______

Components of face recognition system (FRS):

_

Figure above depicts the typical way that a FRS can be used for identification purposes. The first step in the facial recognition process is the capturing of a face image, also known as the probe image. This would normally be done using a still or video camera. In principle, the capturing of the face image can be done with or without the knowledge (or cooperation) of the subject. This is indeed one of the most attractive features of FRT. As such, it could, in principle, be incorporated into existing good quality “passive” CCTV systems. However locating a face in a stream of video data is not a trivial matter. The effectiveness of the whole system is highly dependent on the quality and characteristics of the captured face image. The process begins with face detection and extraction from the larger image, which generally contains a background and often more complex patterns and even other faces. The system will, to the extent possible, “normalize” (or standardize) the probe image so that it is in the same format (size, rotation, etc.) as the images in the database. The normalized face image is then passed to the recognition software. This normally involves a number of steps such as extracting the features to create a biometric “template” or mathematical representation to be compared to those in the reference database (often referred to as the gallery). In an identification application, if there is a “match,” an alarm solicits an operator’s attention to verify the match and initiate the appropriate actions. The match may either be true, calling for whatever action is deemed appropriate for the context, or it may be false (a “false positive”), meaning the recognition algorithm made a mistake.

_

Computers use their peripheral cameras, video feeds from other cameras, still images, and advanced algorithms to detect, identify, and match human faces to a database. These algorithms help to locate a human face within a scene. Some scenes make detection easier, some make it harder. Some of these technologies require more computer resources, so there is a trade-off of performance versus cost.  Facial recognition systems used to be standalone systems, but today the analytic software is available as part of IP camera recording systems.

______

______

Face Recognition Methods:

In the beginning of the 1970s, face recognition was treated as a 2D pattern recognition problem. Distances between important points were used to recognize known faces, e.g. measuring the distance between the eyes or other important points, or measuring different angles of facial components. But face recognition systems need to be fully automatic. Face recognition is such a challenging yet interesting problem that it has attracted researchers from different backgrounds: psychology, pattern recognition, neural networks, computer vision, and computer graphics.

The following methods are used for face recognition:

  1. Holistic matching methods: in the holistic approach, the complete face region is taken as input data into the face recognition system.
  2. Feature-based (structural) methods: in these methods, local features such as the eyes, nose and mouth are first extracted, and their locations and local statistics (geometric and/or appearance) are fed into a structural classifier.
  3. Hybrid methods: hybrid face recognition systems use a combination of both holistic and feature extraction methods. Generally, 3D images are used in hybrid methods: the image of a person’s face is captured in 3D, allowing the system to note the curves of the eye sockets, for example, or the shapes of the chin or forehead.

Face recognition: Holistic or feature based approach?

In the task of face recognition, you may choose between two approaches to tackle the problem. The first is a holistic approach using eigenfaces (PCA) over 2D images and/or depth maps of 3D models of individuals. The second is a feature-based approach such as the scale-invariant feature transform (SIFT), applied over 2D face images and/or 3D models or depth maps of faces. Each approach has its own advantages and disadvantages depending on the application.

_

Various ways to classify facial recognition methods:

  1. Geometric Based / Template Based:

Face recognition algorithms can be classified as geometry-based or template-based. The template-based methods can be constructed using statistical tools like SVM (support vector machines), PCA (principal component analysis), LDA (linear discriminant analysis), kernel methods or trace transforms. The geometric feature-based methods analyse local facial features and their geometric relationships; this is also known as the feature-based method.

  2. Piecemeal / Holistic:

This classification separates methods that work on individual facial elements, relating each feature to the whole face, from those that treat the face as a single undivided whole. Many researchers followed the feature-based route, trying to deduce the most relevant characteristics: some methods used the eyes, some a combination of features, and so on. Some hidden Markov model methods also fall into this category, and feature processing is very popular in face recognition.

  3. Appearance-Based / Model-Based:

The appearance-based method represents a face in terms of several images, where an image is considered a high-dimensional vector. This technique is usually used to derive a feature space from the image distribution, and the sample image is compared to the training set. The model-based approach, on the other hand, tries to model the face: the new sample is fitted to the model, and the parameters of the model are used to recognise the image. Appearance-based methods can be classified as linear or nonlinear; e.g. PCA, LDA and ICA are used in the linear approach, whereas kernel PCA is used in the nonlinear approach. Model-based methods can be classified as 2D or 3D; elastic bunch graph matching is one example.

  4. Template / Statistical / Neural Networks Based:

Template matching:

In template matching the patterns are represented by samples, models, pixels, textures, etc. The recognition function is usually a correlation or distance measure.

Statistical approach:

In the statistical approach, the goal is to choose and apply the right statistical tool for extraction and analysis: images are distilled into numerical values, which are compared with templates to eliminate variances. Many statistical tools have been used for face recognition.

Neural networks:

Neural networks continue to be used for pattern recognition and classification. Kohonen was the first to show that a neural network could be used to recognise aligned and normalised faces. There are methods which perform feature extraction using neural networks, and many methods combine neural networks with tools like PCA or LDA to make a hybrid classifier for face recognition. Examples include feed-forward neural networks with additional bias, self-organizing maps with PCA, and convolutional neural networks with a multi-layer perceptron; these can increase the efficiency of the models.
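As a toy illustration of such a hybrid, PCA features feeding a small feed-forward network, the scikit-learn sketch below uses random stand-in data; with real, labelled face images the same two-stage pipeline would form a simple face classifier.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# Stand-in data: 100 "face images" of 400 pixels, 5 identities.
rng = np.random.default_rng(0)
X = rng.random((100, 400))
y = rng.integers(0, 5, size=100)

# Hybrid classifier: PCA for feature extraction, then a small
# feed-forward neural network for classification.
features = PCA(n_components=20).fit_transform(X)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
clf.fit(features, y)
print(clf.predict(features[:3]))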

____

____

Facial Recognition Algorithm:

A face recognition algorithm is built to identify various facial features by extracting identifying landmarks, known as features, from a person’s face. The algorithm detects and analyzes features based on their position relative to other features; examples include the size, shape and placement of the nose, jaw and cheekbones. A sophisticated facial recognition algorithm can identify an individual using thousands of features on their face, comparing those features against a database of photographs. Face recognition algorithms can also be used to protect privacy and secure data.

_

Face recognition is an image processing technique which aims to identify persons based on their faces. The whole process of face recognition is a rather difficult task and requires a lot of computation. Face recognition can be divided into two basic steps: in the first step, the algorithm must detect a face, and in the second step, it extracts the important features used to identify the face.

The algorithms for face recognition can be divided into 12 categories:

  • Geometric feature based methods
  • Template based methods
  • Correlation based methods
  • Matching pursuit based methods
  • Singular value decomposition based methods
  • The dynamic link matching methods
  • Illumination invariant processing methods
  • Support vector machine approach
  • Karhunen-Loève expansion based methods
  • Feature based methods
  • Neural networks based algorithms
  • Model based methods

The details of these categories are beyond the scope of this article.

_

Classical face recognition algorithms:

There has been rapid development of reliable face recognition algorithms in the last decade. Traditional face recognition algorithms can be categorised into two groups: holistic feature approaches and local feature approaches. The holistic group can be further divided into linear and nonlinear projection methods. Many applications have shown good results with linear projection appearance-based methods such as principal component analysis (PCA), independent component analysis (ICA), linear discriminant analysis (LDA), 2DPCA and the linear regression classifier (LRC). However, due to large variations in illumination conditions, facial expression and other factors, these methods may fail to adequately represent the faces. The main reason is that the face patterns lie on a complex nonlinear and non-convex manifold in the high-dimensional space.

In order to deal with such cases, nonlinear extensions have been proposed, like kernel PCA (KPCA), kernel LDA (KLDA) and locally linear embedding (LLE). Most nonlinear methods use kernel techniques, where the general idea is to map the input face images into a higher-dimensional space in which the manifold of the faces is linear and simplified, so that traditional linear methods can be applied. Although PCA, LDA and LRC are all considered linear subspace learning algorithms, it is notable that PCA and LDA focus on the global structure of the Euclidean space, whereas the LRC approach focuses on the local structure of the manifold. These methods project a face onto a linear subspace spanned by the eigenface images. The distance from face space is the orthogonal distance to the plane, whereas the distance in face space is the distance along the plane from the mean image. Both distances can be turned into Mahalanobis distances and given probabilistic interpretations. Following these ideas, KPCA, kernel ICA and generalised linear discriminant analysis have been developed. Despite the strong theoretical foundation of kernel-based methods, their practical application to face recognition problems does not produce a significant improvement over linear methods.

_

_

Here are some of the most popular recognition algorithms:

Principal component analysis using eigenfaces

The hidden Markov model

Neuronal motivated dynamic link matching

Linear discriminant analysis

Multilinear subspace learning using tensor representation

Elastic bunch graph matching

______

Principal Components Analysis (PCA):

The PCA technique converts each two-dimensional image into a one-dimensional vector. This vector is then decomposed into orthogonal (uncorrelated) principal components (known as eigenfaces)—in other words, the technique selects the features of the image (or face) which vary the most from the rest of the image. In the process of decomposition, a large amount of data is discarded as not containing significant information, since 90% of the total variance in the face is contained in 5-10% of the components. This means that the data needed to identify an individual is a fraction of the data presented in the image. Each face image is represented as a weighted sum (feature vector) of the principal components (or eigenfaces), which is stored in a one-dimensional array. Each component (eigenface) represents only a certain feature of the face, which may or may not be present in the original image. A probe image is compared against a gallery image by measuring the distance between their respective feature vectors. For PCA to work well, the probe image must be similar to the gallery image in terms of size (or scale), pose, and illumination; in particular, PCA is quite sensitive to scale variation.
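
As a rough illustration of the PCA pipeline just described, the sketch below uses scikit-learn to decompose flattened face images into eigenfaces and match a probe by feature-vector distance. The data here is a random stand-in for a real, aligned face gallery, and the component count is an illustrative assumption.

```python
# Minimal eigenface sketch with scikit-learn (gallery data is a synthetic stand-in).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((40, 112 * 92))           # 40 flattened 112x92 "face" images

pca = PCA(n_components=20, whiten=True)  # keep the 20 strongest components
weights = pca.fit_transform(X)           # each face -> 20-D feature vector
eigenfaces = pca.components_             # rows are the eigenfaces

# Match a probe to the gallery by nearest feature vector (Euclidean distance).
probe = pca.transform(X[:1])
distances = np.linalg.norm(weights - probe, axis=1)
print("closest gallery index:", distances.argmin())
```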

___

EigenFaces Face Recognizer:

Eigenfaces is the name given to a set of eigenvectors when they are used in the computer vision problem of human face recognition.  The approach of using eigenfaces for recognition was developed by Sirovich and Kirby (1987) and used by Matthew Turk and Alex Pentland in face classification. The eigenvectors are derived from the covariance matrix of the probability distribution over the high-dimensional vector space of face images. The eigenfaces themselves form a basis set of all images used to construct the covariance matrix. This produces dimension reduction by allowing the smaller set of basis images to represent the original training images. Classification can be achieved by comparing how faces are represented by the basis set.

A set of eigenfaces can be generated by performing a mathematical process called principal component analysis (PCA) on a large set of images depicting different human faces. Informally, eigenfaces can be considered a set of “standardized face ingredients”, derived from statistical analysis of many pictures of faces. Any human face can be considered to be a combination of these standard faces. For example, one’s face might be composed of the average face plus 10% from eigenface 1, 55% from eigenface 2, and even -3% from eigenface 3. Remarkably, it does not take many eigenfaces combined together to achieve a fair approximation of most faces. Also, because a person’s face is no longer stored as a digital photograph but as just a list of values (one value for each eigenface in the database used), much less space is needed for each person’s face. The eigenfaces that are created will appear as light and dark areas that are arranged in a specific pattern. This pattern is how different features of a face are singled out to be evaluated and scored. There will be patterns that evaluate symmetry, the presence and style of facial hair, the position of the hairline, and the size of the nose or mouth. Other eigenfaces have patterns that are less simple to identify, and the image of the eigenface may look very little like a face.

The technique used in creating eigenfaces and using them for recognition is also used outside of face recognition: in handwriting recognition, lip reading, voice recognition, sign language/hand gesture interpretation and medical imaging analysis. For this reason, some do not use the term eigenface, but prefer the term ‘eigenimage’.

FisherFaces Recognizer:

The Fisherfaces algorithm, instead of extracting features that represent all the faces of all the persons, extracts the features that discriminate one person from the others. In this way the features of one person do not dominate over the others, and you retain the features that distinguish one person from everyone else.

Local Binary Patterns Histograms:

We know that Eigenfaces and Fisherfaces are both affected by light, and in real life we cannot guarantee perfect lighting conditions. The LBPH face recognizer is an improvement designed to overcome this drawback. The idea is not to look at the image as a whole but at its local features: the LBPH algorithm tries to find the local structure of an image by comparing each pixel with its neighbouring pixels.
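
A minimal sketch of the LBPH recognizer, using OpenCV's contrib module (opencv-contrib-python), is shown below; the training images are synthetic stand-ins for real aligned grayscale faces, and the radius/grid parameters simply spell out the defaults.

```python
# LBPH face recognizer sketch with OpenCV's contrib module
# (requires opencv-contrib-python; faces here are synthetic stand-ins).
import numpy as np
import cv2

faces = [np.random.randint(0, 256, (100, 100), dtype=np.uint8) for _ in range(4)]
labels = np.array([0, 0, 1, 1], dtype=np.int32)   # two subjects, two images each

recognizer = cv2.face.LBPHFaceRecognizer_create(radius=1, neighbors=8,
                                                grid_x=8, grid_y=8)
recognizer.train(faces, labels)

# predict() returns the best-matching label and a distance-like confidence
# (lower means a closer match for LBPH).
label, confidence = recognizer.predict(faces[0])
print(label, confidence)
```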

___

Linear Discriminant Analysis (LDA):

LDA is a statistical approach based on the same statistical principles as PCA. LDA classifies faces of unknown individuals based on a set of training images of known individuals. The technique finds the underlying vectors in the facial feature space that maximize the variance between individuals (or classes) and minimize the variance within a number of samples of the same person (i.e., within a class). If this can be achieved, the algorithm can discriminate between individuals and yet still recognize individuals under somewhat varying conditions (minor variations in expression, rotation, illumination, etc.).
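
For illustration, the sketch below uses scikit-learn's LinearDiscriminantAnalysis to project synthetic "face" vectors into a discriminant subspace; with real data, the rows would be flattened, aligned face images.

```python
# LDA sketch with scikit-learn: project faces so that between-class variance
# is maximized and within-class variance minimized (data is synthetic).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.random((30, 2576))               # 30 flattened face images
y = np.repeat(np.arange(3), 10)          # 3 subjects, 10 images each

lda = LinearDiscriminantAnalysis(n_components=2)  # at most n_classes - 1
Z = lda.fit_transform(X, y)              # discriminant ("fisherface") features
print(Z.shape, lda.predict(X[:1]))       # classify a probe image
```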

___

Elastic Bunch Graph Matching (EBGM):

EBGM relies on the concept that real face images have many nonlinear characteristics that are not addressed by the linear analysis methods such as PCA and LDA—such as variations in illumination, pose, and expression. The EBGM method places small blocks of numbers (called “Gabor filters”) over small areas of the image, multiplying and adding the blocks with the pixel values to produce numbers (referred to as “jets”) at various locations on the image. These locations can then be adjusted to accommodate minor variations. The success of Gabor filters is in the fact that they remove most of the variability in images due to variation in lighting and contrast. At the same time they are robust against small shifts and deformations. The Gabor filter representation increases the dimensions of the feature space (especially in places around key landmarks on the face such as the eyes, nose, and mouth) such that salient features can effectively be discriminated. This new technique has greatly enhanced facial recognition performance under variations of pose, angle, and expression. New techniques for illumination normalization also enhance significantly the discriminating ability of the Gabor filters.
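
Although this is not the full EBGM pipeline, the sketch below shows the Gabor-filter step it builds on: convolving a face image with a small bank of oriented Gabor kernels and reading off the responses ("jet" components) at a landmark. All parameter values are illustrative.

```python
# Gabor filter bank sketch with OpenCV; responses at several orientations
# approximate the "jets" described above (parameters are illustrative).
import numpy as np
import cv2

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stand-in face

jets = []
for theta in np.arange(0, np.pi, np.pi / 8):       # 8 orientations
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5, psi=0)
    response = cv2.filter2D(img, cv2.CV_32F, kernel)
    jets.append(response[64, 64])                  # response at one landmark
print(np.round(jets, 2))
```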

_____

Neural Networks, Deep Learning:

By now, you should understand the basic workflow of a facial recognition system and some of its challenges. This is where it gets tricky. Some of the steps and algorithms described above may already be obsolete or completely useless compared to the innovations that drive the more advanced facial recognition technologies. Machine learning is ruling the facial recognition game. But there’s a subset of machine learning algorithms which takes the cake when it comes to recognizing faces: they fall under deep learning and are called neural networks.

Their architectures are varied and complicated, and, often, we can’t even decipher the results of their calculations or even the values that they output. In one example, when an image is run through a neural network, it reliably outputs a set of measurements that serve as a unique identifier of that face. These sets of measurements are called embeddings. This network can generate nearly identical embeddings for images of the same person, which can later be compared to identify the person. And this is all that matters.
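
A minimal sketch of this embedding idea follows; compute_embedding() is a hypothetical stand-in for a trained network, and the match threshold is an assumed value that a real system would tune on validation pairs.

```python
# Sketch of embedding-based verification: two faces match when the distance
# between their embeddings falls below a tuned threshold.
import numpy as np

def compute_embedding(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a network mapping a face to a 128-D vector."""
    rng = np.random.default_rng(int(image.sum()) % 2**32)
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)          # unit-normalize, as FaceNet does

def same_person(img_a, img_b, threshold=1.1):
    d = np.linalg.norm(compute_embedding(img_a) - compute_embedding(img_b))
    return d < threshold                  # threshold chosen on validation pairs

demo = np.random.default_rng(1).random((160, 160))
print(same_person(demo, demo))            # identical input -> True
```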

Convolutional neural networks are a subtype in this category, and they prove to be pretty effective. They also require minimal preprocessing of data. And this is one of the reasons why they’re widely adopted for image-recognition problems.

_

Artificial neural networks are a popular method of facial recognition. They are used for feature extraction and decision-making. One of the most widely used options is a network built on a multi-layer perceptron, which allows classification of the input image in compliance with the pretrained network. Neural networks are trained on a set of learning examples. During training, the neural network automatically extracts key features, determines their importance and builds relationships between them. It is assumed that the trained neural network will be able to apply the experience gained in the training process to unknown images, thanks to its ability to generalize. Although training is a relatively time- and energy-consuming process, trained neural networks show rather good results for facial recognition and decrease the error rate. The main problem is adding a new benchmark face to the database, which requires a complete retraining of the network across the entire database set. Convolutional neural networks (CNNs) show the best results in analyzing visual imagery, due to their ability to take into account the two-dimensional topology of the image, in contrast to the multi-layer perceptron. That is why a convolutional neural network is less affected by scale changes, shifts, rotations, changes of angle and other distortions. An advanced CNN outperforms the majority of widely adopted algorithms, achieving almost 100% accuracy (99.8% in one of the experiments). The mounting evidence shows that neural networks, on average, are superior to their “hand-made” counterparts. That’s why major tech companies have been using neural networks for this task for a while.

In general, the combination of principal component analysis and neural networking works as follows. Faces are extracted from the images and described by a set of eigenfaces using PCA. Neural networks are then used to recognize the face through learning the right classification of the descriptors.
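
A hedged sketch of that combination, with scikit-learn's PCA feeding a small multi-layer perceptron, is given below; the data and layer sizes are illustrative stand-ins.

```python
# Sketch of the PCA + neural network combination: eigenface features feed a
# multi-layer perceptron classifier (scikit-learn; data is synthetic).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((60, 2576))                    # flattened face images
y = np.repeat(np.arange(6), 10)               # 6 subjects, 10 images each

model = make_pipeline(
    PCA(n_components=30, whiten=True),        # describe faces by eigenfaces
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:3]))                   # classify a few probes
```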

______

Various Efficient Face Recognition Algorithms and Techniques:

Fisherfaces:

Fisherfaces implements a face recognition framework for Python with preprocessing, feature extraction, a classifier and cross-validation. Basically, it lets you measure, save and load models for face recognition in videos (such as webcam feeds). You can also optionally validate your model to see the performance you can expect; it can perform a k-fold cross-validation to estimate the precision of the model. However, the script does not work perfectly on unpreprocessed input pictures. To obtain more robust recognition, your input must be aligned in exactly the same manner as specified in the training set.

_

Real Time Face Recognition:

A real-time face recognition algorithm based on TensorFlow, OpenCV, MTCNN and FaceNet. Face reading depends on OpenCV2, face embedding is based on FaceNet, detection is done with the help of MTCNN, and recognition with a classifier. The main idea was inspired by OpenFace; however, the author has preferred Python for writing the code.

_

Android Face Recognition with Deep Learning:

This is an Android library packed with numerous face recognition techniques. Its code has been derived from TensorFlow, FaceNet, LIBSVM and Caffe. You can either train and classify by passing pictures to the library or if features are already obtained from the picture, the feature vector can be passed together with a special flag set to “true”.

_

DeepID Test:

Using the WebFace dataset (of Li’s team) and the DeepID network (of Tang Xiaoou’s team), the model parameters were trained with Caffe, and face verification accuracy was evaluated by LFW classification. The principle involves extracting 6,000 pairs of images, of which 50% are pairs of the same person and the remaining 50% are pairs of different persons, from the Labeled Faces in the Wild dataset. The next step is to feed the two images of each pair into the trained model and obtain two 160-dimensional feature vectors. Finally, compute the 6,000 cosine or Euclidean distances, and maximise the verification accuracy by selecting a suitable threshold.
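
The final thresholding step can be sketched as below: given distances for labelled "same" and "different" pairs, scan candidate thresholds and keep the one with the highest verification accuracy. The distance distributions here are synthetic placeholders.

```python
# Threshold selection sketch for pair verification (distances are synthetic).
import numpy as np

rng = np.random.default_rng(0)
same_d = rng.normal(0.35, 0.10, 3000)     # distances for same-person pairs
diff_d = rng.normal(0.70, 0.12, 3000)     # distances for different-person pairs

dists = np.concatenate([same_d, diff_d])
labels = np.concatenate([np.ones(3000), np.zeros(3000)])  # 1 = same person

# Accept a pair as "same" when distance < t; pick t maximizing accuracy.
best = max(np.linspace(0, 1, 201),
           key=lambda t: np.mean((dists < t) == labels))
print("best threshold:", round(best, 3),
      "accuracy:", np.mean((dists < best) == labels))
```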

_

Node FaceNet:

This is a TensorFlow-backed FaceNet implementation for Node.js, for solving face verification, recognition and clustering problems. The script directly learns a mapping from pictures to a compact Euclidean space where distances correspond to a measure of facial similarity. It optimizes face recognition performance using only 128 bytes per face, and reaches an accuracy of 99.63% on the LFW (Labeled Faces in the Wild) dataset.

_

SphereFace:

This is an implementation of SphereFace – deep hypersphere embedding for face recognition. The repository contains the entire pipeline (including all preprocessing) for deep face recognition with SphereFace. The recognition pipeline contains 3 crucial steps: face detection, alignment and recognition. The technique proposes the A-Softmax (angular softmax) loss, which allows CNNs (convolutional neural networks) to learn angularly discriminative features. Geometrically, the angular softmax loss can be seen as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that face images also lie on a manifold. It achieves state-of-the-art verification performance in the MegaFace challenge under the small training set protocol.

_

Deep Face Recognition with Caffe Implementation:

The algorithm is developed for deep face recognition, related to the discriminative feature learning approach for deep face recognition. The model is based on a new supervision signal known as center loss. The center loss learns a center for the deep features of each class and simultaneously penalizes the distances between the deep features and their corresponding class centers. The proposed center loss function is easy to optimize in convolutional neural networks. Under the joint supervision of softmax loss and center loss, convolutional neural networks can be trained to obtain deep features with two key learning goals, intra-class compactness and inter-class dispersion, which are crucial for facial recognition.
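
The center-loss term itself is simple enough to sketch in NumPy, as below; the features, labels and centers are synthetic stand-ins for what a CNN and its training loop would supply.

```python
# NumPy sketch of the center loss idea: penalize the squared distance between
# each deep feature and its class center (all data is synthetic).
import numpy as np

def center_loss(features, labels, centers):
    """L_c = 1/2 * mean ||x_i - c_{y_i}||^2 over the mini-batch."""
    diffs = features - centers[labels]
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

rng = np.random.default_rng(0)
features = rng.normal(size=(8, 64))       # deep features from the CNN
labels = rng.integers(0, 4, size=8)       # class identity per sample
centers = rng.normal(size=(4, 64))        # learned per-class centers

print(center_loss(features, labels, centers))
# In training, this term is added to the softmax loss, and the centers are
# nudged toward the mean feature of their class after each batch.
```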

_

FaceRecognition in ARKit:

This script can detect faces using the Vision API and run the extracted face through a CoreML model to identify persons. To run this script, you will require Xcode 9, an iPhone 6s (or newer) and a machine-learning model. The authors trained the model in AWS using Nvidia DIGITS, took hundreds of images of each person, and extracted the faces. There is a separate “unknown” category with different faces. A pre-trained, fine-tuned model has been used for face recognition.

_

Facial Recognition API for Python and Command Line:

The model uses Dlib’s state-of-the-art face identification, developed with deep learning, and has 99.38% accuracy on the Labeled Faces in the Wild benchmark. A simple face recognition command line tool allows you to perform face recognition on a folder of images. Moreover, this library can be used together with other Python libraries to perform real-time face recognition.
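
Typical usage of this library (the pip-installable face_recognition package) looks like the following; the image file names are placeholders.

```python
# Verification with the face_recognition library (file names are placeholders).
import face_recognition

known = face_recognition.load_image_file("known_person.jpg")
unknown = face_recognition.load_image_file("unknown_person.jpg")

known_enc = face_recognition.face_encodings(known)[0]      # 128-D encoding
unknown_enc = face_recognition.face_encodings(unknown)[0]

# compare_faces thresholds the Euclidean distance (default tolerance 0.6).
match = face_recognition.compare_faces([known_enc], unknown_enc)[0]
print("same person:", match)
```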

_

Face recognition using Tensorflow:

This is a face identifier implementation using TensorFlow, as described in the FaceNet paper, and it also uses the discriminative feature learning method for deep face recognition. The source code is inspired by the OpenFace implementation. Two datasets, CASIA-WebFace and MS-Celeb-1M, have been used for training, yielding LFW accuracies of 0.987 and 0.992 respectively.

_

Joint Face Detection and Alignment:

Detecting and aligning faces in unconstrained environments is quite difficult due to varying illumination, poses and occlusions, but deep learning techniques can achieve great results. This project uses a deep cascaded multi-task framework that exploits the inherent correlation between detection and alignment to increase performance. It uses a cascaded architecture with three stages of deep convolutional networks to predict face and landmark locations. The authors also introduce an online hard sample mining strategy that further improves performance in practice. The technique achieves superior accuracy over state-of-the-art methodologies on the challenging WIDER FACE benchmark for face detection, and on the AFLW benchmark for face alignment.
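
One pip-installable implementation of this cascaded detector is the mtcnn package; a minimal usage sketch follows, with a placeholder image path.

```python
# Joint face detection and landmark localization with the `mtcnn` package,
# one implementation of the cascaded framework described above.
import cv2
from mtcnn import MTCNN

# The detector expects an RGB image; the file name is a placeholder.
img = cv2.cvtColor(cv2.imread("group_photo.jpg"), cv2.COLOR_BGR2RGB)
detector = MTCNN()

for face in detector.detect_faces(img):
    x, y, w, h = face["box"]                 # face bounding box
    keypoints = face["keypoints"]            # eyes, nose, mouth corners
    print(face["confidence"], (x, y, w, h), keypoints["left_eye"])
```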

_

OpenBR:

This is a communal biometric framework that supports development of open (as well as closed) algorithms and reproducible evaluations. You can use this to improve existing algorithms, explore new modalities, measure recognition performance and deploy automated biometric systems. The framework includes the algorithms for training, face recognition, gender estimation and age estimation. It’s supported on all major platforms, including Mac OSX, Windows and Linux.

_

OpenFace:

OpenFace is a Torch and Python implementation of face identification with deep neural networks, and is based on FaceNet. Torch enables the network to execute on a CPU or with CUDA.

______

______

2D to 3D facial recognition:

The number of 2D face recognition algorithms is immense, and they encompass a huge variety of approaches, so it would be impossible to make an exhaustive enumeration of all publications related to 2D face recognition. As shown in the Face Recognition Vendor Test 2002 (Phillips et al., 2002), the vast majority of face recognition methods based on 2D image processing, using intensity or color images, reached a recognition rate higher than 90% under controlled lighting conditions and with cooperative subjects. Unfortunately, in the case of pose, illumination and expression variations, performance drops, because 2D face recognition methods still encounter difficulties with these factors.

The problem of face recognition can be cast as a standard pattern-classification or machine-learning problem. Imagine we are given a set of images labelled with the person’s identity (the gallery set) and a set of unlabelled images from a group of people that includes those individuals (the probe set), and we are trying to identify each person in the probe set. This problem can be attacked in three steps. In the first step, the face is located in the image, a process known as face detection, which can be as challenging as face recognition itself (Viola and Jones, 2004, and Yang et al., 2000). In the second step, a collection of descriptive measurements, known as a feature vector, is extracted from each image. In the third step, a classifier is trained to assign to each feature vector a label with a person’s identity. (Note that these classifiers are simply mathematical functions that return an index corresponding to a subject’s identity.)
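
A toy sketch of these three steps is given below, using OpenCV's bundled Haar cascade for detection, raw resized pixels as the feature vector, and a nearest-neighbour classifier; each piece is a stand-in for whatever detector, features and classifier a real system would use, and the gallery/probe variables are placeholders.

```python
# Toy three-step pipeline: detect the face, extract a feature vector,
# classify it (detector/features/classifier are illustrative stand-ins).
import numpy as np
import cv2
from sklearn.neighbors import KNeighborsClassifier

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_feature(gray_img):
    """Steps 1 + 2: detect, crop, resize, and flatten the face region
    (assumes at least one face is found)."""
    x, y, w, h = detector.detectMultiScale(gray_img, 1.1, 5)[0]
    face = cv2.resize(gray_img[y:y + h, x:x + w], (64, 64))
    return face.flatten().astype(np.float32)

# Step 3: train a classifier on labelled gallery features, then label probes.
# gallery_imgs / gallery_ids / probe_img are placeholders for real data:
# clf = KNeighborsClassifier(n_neighbors=1)
# clf.fit([extract_feature(g) for g in gallery_imgs], gallery_ids)
# print(clf.predict([extract_feature(probe_img)]))
```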

In the last few years, numerous feature-extraction and pattern-classification methods have been proposed for face recognition (Chellappa et al., 1995; Fromherz, 1998; Pentland, 2000; Samal and Iyengar, 1992; Zhao et al., 2003). Geometric, feature-based methods, which have been used for decades, use properties and relations (e.g., distances and angles) between facial features, such as eyes, mouth, nose, and chin, to achieve recognition (Brunelli and Poggio, 1993; Goldstein et al., 1971; Harmon et al., 1978, 1981; Kanade, 1973, 1977; Kaufman and Breeding, 1976; Li and Lu, 1999; Samal and Iyengar, 1992; Wiskott et al., 1997). Despite their economical representation and insensitivity to small variations in illumination and point of view, feature-based methods are quite sensitive to the feature-extraction and measurement process, the reliability of which has been called into question (Cox et al., 1996). In addition, some have argued that face recognition based on inferring identity by the geometric relations among local image features is not always effective (Brunelli and Poggio, 1993).

In the last decade, appearance-based methods have been introduced that use low-dimensional representations of images of objects or faces (e.g., Hallinan, 1994; Hallinan et al., 1999; Moghaddam and Pentland, 1997; Murase and Nayar, 1995; Pentland et al., 1994; Poggio and Sung, 1994; Sirovich and Kirby, 1987; Turk and Pentland, 1991). Appearance-based methods differ from feature-based techniques in that low-dimensional representations are faithful to the original image in a least-squares sense. Techniques such as SLAM (Murase and Nayar, 1995) and Eigenfaces (Turk and Pentland, 1991) have demonstrated that appearance-based methods are both accurate and easy to use. The feature vector used for classification in these systems is a linear projection of the face image in a lower-dimensional linear subspace. In extreme cases, the feature vector is chosen as the entire image, with each element in the feature vector taken from a pixel in the image.

Despite their success, many appearance-based methods have a serious drawback. Recognition of a face under particular lighting conditions, in a particular pose, and with a particular expression is reliable only if the face has been previously seen under similar circumstances. In fact, variations in appearance between images of the same person confound appearance-based methods.  If the gallery set contains a very large number of images of each subject in many different poses, lighting conditions, and with many different facial expressions, even the simplest appearance-based classifier might perform well. However, there are usually only a few gallery images per person from which the classifier must learn to discriminate between individuals.

In an effort to overcome this shortcoming, there has been a recent surge in work on 3-D face recognition. The idea of these systems is to build face-recognition systems that use a handful of images acquired at enrolment time to estimate models of the 3-D shape of each face. The 3-D models can then be used to render images of each face synthetically in novel poses and lighting conditions—effectively expanding the gallery set for each face. Alternatively, 3-D models can be used in an iterative fitting process in which the model for each face is rotated, aligned, and synthetically illuminated to match the probe image. Conversely, the models can be used to warp a probe image of a face back to a canonical frontal point of view and lighting condition. In both of these cases, the identity chosen corresponds to the model with the best fit.

The 3-D models of the face shape can be estimated by a variety of methods. In the simplest methods, the face shape is assumed to be a generic average of a large collection of sample face shapes acquired from laser range scans. Georghiades and colleagues (1999, 2000) estimated the face shape from changes in the shading in multiple enrolment images of the same face under varying lighting conditions. Kukula (2004) estimated the shape using binocular stereopsis on two enrolment images taken from slightly different points of view. Ohlhorst (2005) based the estimate on deformations in the grid pattern of infrared light projected onto the face. In another study, Blanz and Vetter (2003) inferred the 3-D face shape from the shading in a single image using a parametric model of face shape. Often, a “bootstrap” set of prior training data of face shape and reflectance taken from individuals who are not in the gallery or probe sets is used to improve the shape and reflectance estimation process.

Although recent advances in 3-D face recognition have gone a long way toward addressing the complications caused by changes in pose and lighting, a great deal remains to be done. Natural outdoor lighting makes face recognition difficult, not simply because of the strong shadows cast by a light source such as the sun, but also because subjects tend to distort their faces when illuminated by a strong light. Furthermore, very little work has been done to address complications arising from voluntary changes in facial expression, the use of eyewear, and the more subtle effects of aging. The hope, of course, is that many of these effects can be modelled in much the same way as face shape and reflectance and that recognition will continue to improve in the coming decade.

_____

_____

3D facial recognition:

For the human eye, 2D information plays a more important role in the recognition of a face than 3D information, though such information can be misleading. Three-dimensional face recognition is an emerging modality that tries to use the 3D geometry of the face for accurate identification of the subject. While traditional two-dimensional face recognition methods suffer from sensitivity to external factors such as illumination and head pose, and are also sensitive to the use of cosmetics, 3D methods appear to be more robust to these factors. Yet the problem of facial expressions remains a major issue in 3D face recognition, since the geometry of the face changes significantly with facial expression.

Three-dimensional face recognition (3D face recognition) is a modality of facial recognition methods in which the three-dimensional geometry of the human face is used. It has been shown that 3D face recognition methods can achieve significantly higher accuracy than their 2D counterparts, rivalling fingerprint recognition.  3D face recognition has the potential to achieve better accuracy than its 2D counterpart by measuring geometry of rigid features on the face. This avoids such pitfalls of 2D face recognition algorithms as change in lighting, different facial expressions, make-up and head orientation. Another approach is to use the 3D model to improve accuracy of traditional image based recognition by transforming the head into a known view. Additionally, most 3D scanners acquire both a 3D mesh and the corresponding texture. This allows combining the output of pure 3D matchers with the more traditional 2D face recognition algorithms, thus yielding better performance (as shown in FRVT 2006).

The main technological limitation of 3D face recognition methods is the acquisition of the 3D image, which usually requires a range camera. Alternatively, multiple images from different angles from a common camera (e.g. a webcam) may be used to create the 3D model with significant post-processing. This is also a reason why 3D face recognition methods emerged significantly later (in the late 1980s) than 2D methods. Recently, commercial solutions have implemented depth perception by projecting a grid onto the face and integrating video capture of it into a high-resolution 3D model. This allows for good recognition accuracy with low-cost off-the-shelf components. 3D face recognition is still an active research field, though several vendors offer commercial solutions. Today, 99% of the camera infrastructure scattered around the world consists of 2D cameras capable of running advanced facial recognition software, and it will likely be years before a physical overhaul to 3D cameras takes place.

Using depth and an axis of measurement that is not affected by lighting, 3D facial recognition can even be used in darkness and has the ability to recognize a subject at different view angles with the potential to recognize up to 90 degrees (a face in profile). Using the 3D software, the system goes through a series of steps to verify the identity of an individual.

__

3D facial recognition system:

_

Three-dimensional face recognition aims at bolstering the accuracy of the face modality, thereby creating a reliable and non-intrusive biometric. There exist a wide range of 3D acquisition technologies, with different cost and operation characteristics. The most cost-effective solution is to use several calibrated 2D cameras to acquire images simultaneously and to reconstruct a 3D surface. This method is called stereo acquisition, even though the number of cameras can be more than two. An advantage of these types of systems is that acquisition is fast and the distance to the cameras can be adjusted via calibration settings, but such systems require good and constant illumination conditions.

The reconstruction process for stereo acquisition can be made easier by projecting a structured light pattern on the facial surface during acquisition. The structured light methods can work with a single camera, but require a projection apparatus. This usually entails a larger cost when compared to stereo systems, but a higher scan accuracy. The potential drawbacks of structured light systems are their sensitivity to external lighting conditions and the requirement of a specific acquisition distance for which the system is calibrated. Another problem associated with structured light is that the projected light interferes with the color image and needs to be turned off while the color image is captured. Some sensors avoid this problem by using near-infrared light.

Yet a third category of scanners relies on active sensing: A laser beam reflected from the surface indicates the distance, producing a range image. These types of laser sensors, used in combination with a high-resolution color camera, give high accuracies, but sensing takes time.

The typical acquisition distance for 3D scanners varies between 50 and 150 cm, and laser scanners are usually able to work at longer distances (up to 250 cm) when compared to stereo and structured light systems. Structured light and laser scanners require the subject to be motionless for a short duration (0.8–2.5 s in the currently available systems), and the effect of motion artifacts can be much more detrimental for 3D than for 2D. The Cyberware scanner takes a longer scan time (8–34 s), but it acquires a 360-degree scan within this time.

_

Steps involved in 3D Facial Recognition:

_

_

Representation of Face in 2D and 3D Models:

In a 2D biometric face recognition system, representation is based on intensity variation; in 3D models, representation is based on shape variation. A 2D automated face recognition system discriminates between faces depending upon the color or intensity of features on a particular face, whereas a 3D system discriminates on the basis of the different shapes of the features of a face. The 3D system uses a more reliable basis for face recognition and so is considered more accurate.

Although 3D systems are more reliable and offer more accuracy, there is still wide scope for maturation and improvement in this kind of biometric face recognition system. At the same time, there are constant improvements in 2D systems, which affect the popularity of 3D. More and more companies these days are working on refining 2D systems, because they remain the more feasible form of the technology for managing vital records.

Experiments show that the combination of 2D and 3D technology yields better results: the use of multiple biometrics is more accurate than the best possible sensor for a single biometric. An accuracy rate of 98.8% was recorded in a recent study of a biometric face recognition system using both 2D and 3D images. The combination of the two technologies outperforms the use of a single technology in a face recognition system.
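
Such 2D+3D combination is often done at the score level; a minimal weighted-sum sketch is shown below, with illustrative weights and scores.

```python
# Minimal sketch of score-level fusion of 2D and 3D matchers: a weighted sum
# of normalized similarity scores (weights and scores are illustrative).
import numpy as np

def fuse(score_2d, score_3d, w_2d=0.4, w_3d=0.6):
    """Both scores assumed normalized to [0, 1]; higher means more similar."""
    return w_2d * score_2d + w_3d * score_3d

# Probe vs. three gallery subjects: fused scores decide the best match.
s2d = np.array([0.81, 0.55, 0.60])   # texture/intensity (2D) matcher
s3d = np.array([0.77, 0.70, 0.40])   # shape (3D) matcher
print("best match:", fuse(s2d, s3d).argmax())
```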

_____

Twin recognition in 3D face recognition:

The 3D system is often unable to differentiate between identical twins. And there have been cases where the facial scan could not tell family members apart because of their close resemblance. That’s because the markers for 3D facial recognition are not as distinct as irises or fingerprints. Identical twins have very similar faces, but their irises and fingerprints are generally dissimilar.

Recently researchers at the Technion, Israel Institute of Technology in Haifa, presented a new twist to face recognition technology— a 3D system based on “bending invariant canonical representation.” Michael and Alexander Bronstein are electrical engineering graduate students and twin brothers. While working on a project headed by their professor, Ron Kimmel, and with the help of lab engineer Eyal Gordon, the brothers decided to try to create a face recognition system that could distinguish identical twins, a difficult problem for most face recognition systems. “The fact that my brother and I are twins was inspiration for this invention,” says Alexander. The team developed a system that treats the face as a deformable object, as opposed to a rigid surface, and uses a range camera and a computer. The 3D system maps rather than photographs the face, capturing facial geometry as a canonical form, which it can then compare to other canonical forms contained in a database. The system can compare surfaces with a high fidelity level, independent of surface deformations resulting from facial expressions. The process of capturing canonical forms occurs in three stages as seen in the figure below:

At the first stage, the system acquires the face’s range image and texture. At the second stage, it converts the range image to a triangulated surface and preprocesses it by removing certain parts such as hair, which can complicate the recognition process. The mesh can be subsampled to decrease the amount of data. The choice of the number of subsamples is a trade-off between accuracy and computational complexity. At the third stage, the system computes a canonical form of the facial surface. This representation is practically insensitive to head orientations and facial expressions, significantly simplifying the recognition procedure. The system performs the recognition itself on the canonical surfaces.

Alex says that their 3D method gives more information about the face, is less vulnerable to makeup and illumination conditions, and fares better than other face recognition systems. Michael points out that other systems are more sensitive to facial expressions, while their 3D system can handle facial deformations. They tested a classical 2D face recognition algorithm (eigenfaces), a 3D face recognition algorithm, and a recently proposed combination of 2D and 3D recognition, and their system fared best. “On a database of 157 subjects, we obtained zero error, even when we were comparing two twins,” Michael says. “Obviously, a larger database is required for more accurate statistics, yet by extrapolation we can predict results significantly outperforming other algorithms.”

_____

A Study of Face Recognition of Identical Twins by Humans:

Humans are very good at identifying people from their images, and so human face recognition performance is often considered as a guideline for assessing face recognition algorithms. Researchers performed a human experiment to determine if humans viewing a pair of facial images can successfully distinguish between images of the same person and of identical twin siblings. Given two seconds to view the image pairs, the average accuracy was found to be 78.82%. Researchers observe that increasing the viewing time significantly improves the matching accuracy. This can be attributed to the fact that facial images of identical twins are very similar, with subtle differences which can be better perceived given sufficient time. They also observed that performance was lower for the uncontrolled images as compared to the controlled images, implying that the presence of external factors like illumination tends to make the already challenging problem of recognizing images of identical twin siblings even harder. For the correct responses the most important feature chosen was moles/scars/freckles, while for the incorrect responses none of the selected features was significantly more important than the others. Researchers observed that humans perform much better than commercial face recognition algorithms.

_____

_____

Common challenges to facial recognition are depicted in the figure below:

_

Image Quality:

The primary requirement of a face recognition system is a good-quality face image of the subject, and a good-quality image is one that is collected under the expected conditions. Image quality matters for extracting image features: without accurate computation of facial features, the robustness of the approach is lost, so even the best recognition algorithm deteriorates as the quality of the image declines. A poor-quality image can hide some of the distinctive facial features and cannot provide effective results. The quality of an image depends on factors such as resolution, image size and color, and it also determines how well problems such as contrast differences, illumination variation and distortion can be handled. The lower the quality of the image, the more difficult recognition becomes; the effectiveness of the recognition system depends on image quality. In some facial recognition systems, image quality improvement is added as a pre-processing stage, with adjustments that transform the raw image into a normalized image. Another issue associated with image quality is noise: a number of additive and multiplicative noise sources can disrupt the image content and degrade the recognition rate. Noise can be introduced by the device, the environment or other technological faults.
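
A simple quality gate along these lines can be sketched with OpenCV, as below; the resolution and sharpness thresholds are illustrative assumptions, and the variance-of-Laplacian measure is a common, if crude, blur indicator.

```python
# Image-quality gate sketch: reject probes that are too small or too blurry
# before attempting recognition (thresholds are illustrative assumptions).
import cv2

def good_enough(path, min_side=200, min_sharpness=100.0):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None or min(gray.shape) < min_side:
        return False                      # missing or low-resolution image
    # Variance of the Laplacian is a common blur measure: low means blurry.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness >= min_sharpness
```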

_

Pose Variation:

Usually, the training data used by face recognition systems are frontal-view face images of individuals. Frontal-view images contain more specific information about a face than profile or other pose-angle images. The problem appears when the system has to recognize a rotated face using this frontal-view training data. Comparing faces under varying pose is thus another fundamental challenge for a face recognition system. In practical applications people are not always frontal to the camera, so the pose problem is a major difficulty: even for distinct persons seen under the same pose, it is hard for the computer to perform identification when the poses of the query and gallery images differ. Pose variation still presents a challenge for face recognition. Frontal images generalize better to novel poses than non-frontal images do, and recognition rates for frontal images are greater than for non-frontal images. One remedy is to enrol multiple views of an individual in the face database.

There are three types of head rotation:

  1. out-of-plane rotation (look to the left – to the right)
  2. in-plane rotation (tilted toward shoulders)
  3. up-and-down nodding rotation (up-down)

The following are the most relevant approaches to the pose problem:

-Multi-image approach: these methods require multiple images for training. The idea behind this approach is to make templates of all possible pose variations. When an input image is classified, it is aligned to the images corresponding to a single pose, and this process is repeated for each stored pose until the correct one is detected.

-Single-model based approach: this approach uses several data of a subject on training, but only one image at recognition.

-Geometric approach: there are approaches that try to build a sub-layer of pose-invariant information of face. The input images are transformed depending on geometric measures on those models. The most common method is to build a graph which can link features to nodes and define the transformation needed to mimic face rotation.

-Adopting a coarse-to-fine view-partition strategy, the detector-pyramid architecture consists of several levels from the coarse top level to the fine bottom level. Rowley et al. propose to use two neural network classifiers for detection of frontal faces subject to in-plane rotation.

_

Occlusion:

In the face recognition context, occlusion means that some parts of the face cannot be observed; for example, a face photographed by a surveillance camera could be partially hidden behind a column. The recognition process can rely heavily on the availability of a full input face, so the absence of some parts of the face may lead to misclassification. There are also objects that can occlude facial features: sunglasses, hats, beards, certain haircuts, etc.

Figure above (a) shows the occlusion effect. Obstructions on the face can be due to the presence of spectacles, a moustache or beard, facial surgery, aging, pimples, etc.

Figure above (b) shows the expression effect. Expressions such as smiling, frowning, crying or laughing may cause inconsistent results.

_

Expression:

Comparing faces with different facial expressions is another problem for face recognition applications. Faces undergo large deformations under facial expressions. Humans can easily handle this variation, but algorithms can have problems with expression databases: if the expressions in the database image and the input image differ, recognition becomes difficult. Face recognition under extreme facial expression remains an unsolved problem, and temporal information can provide significant additional help. The performance of a face recognition system decreases significantly when there is a dramatic expression on the face. It is therefore important to automatically find the best face of a subject, using the neutral face both during enrolment and when authenticating, so that the system can recover the neutral face of the subject from expressions such as happiness, sadness, anger, horror or surprise. Expression-independent facial recognition requires a robust, probabilistic model.

_

Illumination:

Comparing two faces under different illumination is one of the fundamental problems for a face recognition system. Face images of the same person can be taken under different illumination conditions, in which the position and the strength of the light source vary, as in the figure below.

Figure above shows the same individual imaged with the same camera and seen with nearly the same facial expression and pose may appear dramatically different with changes in the lighting conditions. The two leftmost images were taken indoors and the two rightmost were taken outdoors. All four images were taken with a Canon EOS 1D digital camera. Before each acquisition the subject was asked to make a neutral facial expression and to look directly into the lens.

It has been observed that the variations between the images of the same face due to illumination and viewing direction are almost always larger than image variations due to change in face identity. As seen in the figure above, the same person, with the same facial expression, can appear strikingly different when light source direction and viewpoint vary. These variations are made even greater by additional factors such as facial expression, perspiration, hair styles, cosmetics, and even changes due to aging.

The illumination problem can be tackled employing different approaches; a minimal normalization sketch follows the list:

  1. Heuristic approach: an approach based on the symmetry of human faces was given by Sirovich. This algorithm shows nearly perfect accuracy in recognizing frontal face images under different lighting conditions.
  2. Statistical approach: each image is represented in terms of its features and so is viewed as a point (vector) in a d-dimensional space. The goal is then to choose and apply the right statistical tool for extraction and analysis of the underlying manifold.
  3. Light-modelling approach: some recognition methods try to model a lighting template in order to build illumination-invariant algorithms.
  4. Model-based approach: the most recent model-based approaches try to build a 3D model. The idea is to make intrinsic shape and texture fully independent of extrinsic parameters like light variations. The 3D heads can be used to build a model, which is then used to fit the input images. Light directions and cast shadows can be estimated automatically.
  5. Multi-spectral imaging: multi-spectral images (MSI) capture image data at specific wavelengths. The wavelengths can be separated by filters or other instruments sensitive to particular wavelengths. MSI enables the separation of spectral information of illumination from other spectral information.
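
The promised sketch: a minimal illumination normalization in the spirit of these pre-processing approaches, using OpenCV's global histogram equalization and CLAHE; the image path is a placeholder.

```python
# Illumination normalization sketch with OpenCV (image path is a placeholder).
import cv2

gray = cv2.imread("probe_face.jpg", cv2.IMREAD_GRAYSCALE)
if gray is not None:
    equalized = cv2.equalizeHist(gray)        # global histogram equalization
    # CLAHE: contrast-limited adaptive equalization, tile-based, which tends
    # to cope better with one-sided lighting than the global version.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    normalized = clahe.apply(gray)
```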

_

Aging:

Face recognition across aging is most challenging in that it has to address all the other variations as well: pose, expression and illumination changes are bound to occur between two images of a person taken years apart. In addition, the textural properties of the skin can differ, along with makeup, eyeglasses, weight loss or gain, hair loss, etc. The facial changes that occur due to aging are influenced by numerous environmental factors such as solar radiation, smoking, drug usage and stress level, and different biological and environmental factors can either delay or expedite the process of aging. Aging results in changes in both the hard and soft facial tissue of an individual; loss of tissue elasticity, loss of facial volume and alteration in skin texture are some of the other changes. Drifts in facial landmarks appear to reasonably characterize the shape variations associated with aging.

One of the core challenges for facial recognition is the change in a person’s face with age: the database image that yields a correct recognition today may not provide the same recognition rate after 5 or 10 years. Because of this, the authentication system must account for the aging factor, which requires updating the dataset regularly. There also exist robust recognition systems that perform age estimation and provide recognition across age variation. The aging problem is critical and probabilistic in nature, so systems that are not age-adaptive cannot provide accurate results.

_

Transformations:

The same face can be presented to the system at different scales, depending on the focal distance between the face and the camera: as this distance decreases, the face image gets bigger. Head orientation may also change due to translations and rotations.

______

Technical Challenges:

The performance of many state-of-the-art face recognition methods deteriorates with changes in lighting, pose and other factors. The key technical challenges are:

  1. Large Variability in Facial Appearance: Whereas shape and reflectance are intrinsic properties of a face object, the appearance (i.e. texture) is subject to several other factors, including the facial pose, illumination, facial expression, occlusion, accessories (e.g. glasses), color and brightness.

  2. Highly Complex Nonlinear Manifolds: The entire face manifold (distribution) is highly nonconvex and so is the face manifold of any individual under various changes. Linear methods such as PCA, independent component analysis (ICA) and linear discriminant analysis (LDA) project the data linearly from a high-dimensional space (e.g. the image space) to a low-dimensional subspace. As such, they are unable to preserve the nonconvex variations of face manifolds necessary to differentiate among individuals.
  3. In a linear subspace, Euclidean distance and Mahalanobis distance do not perform well for classifying between face and nonface manifolds and between manifolds of individuals. This limits the power of the linear methods to achieve highly accurate face detection and recognition.
  4. High Dimensionality and Small Sample Size: Another challenge is the ability to generalize. A canonical face image of 112 × 92 resides in a 10,304-dimensional feature space. Nevertheless, the number of examples per person (typically fewer than 10) available for learning the manifold is usually much smaller than the dimensionality of the image space; a system trained on so few examples may not generalize well to unseen instances of the face.

______

Likely requirements of a successful face match:

To get a successful match of the probe image to a database image, the probe image should have at least a few of the following characteristics:

  • Is one in which the face takes up 70% – 80% of the photograph.
  • Is in sharp focus and clear.
  • Shows skin tones naturally.
  • Has appropriate brightness and contrast.
  • Is color neutral.
  • Shows eyes open and clearly visible.
  • Has a plain light-colored background.
  • Shows subject without head cover.
  • Where eye glasses do not obscure the eyes and are not tinted.
  • Has a minimum of 90 pixels between the eye centers.
  • Is no more than a few years old.

Face recognition is a very challenging area of research: even today, with so many different techniques and algorithms for different requirements, face recognition systems still fail.

______

New algorithms have increased accuracy dramatically, but overcoming a host of variables in unconstrained settings is still the biggest challenge in facial recognition. Face detection in completely unconstrained settings remains highly challenging because of the many variables that must be overcome. In in-house tests, state-of-the-art face detectors can achieve about a 50-70 percent detection rate, with about 0.5-3 percent of the detected faces being false positives. Consequently, there is still a lot of work that can be done to improve performance, especially regarding the learning algorithm and features. Video, of course, is a highly unconstrained setting.

Perfecting face recognition technology is dependent on being able to analyze multiple variables, including lighting, image resolution, uncontrolled illumination environments, scale, orientation (in-plane rotation), pose (out-of-plane rotation), people’s expressions and gestures, aging, and occlusion (partial hiding of features by clothing, shadows, obstructions, etc.). This is highly challenging for computer scientists. Solutions are largely mathematical, with new procedural and machine-learning algorithms being developed to improve accuracy. Most of the emphasis in face recognition over the past few years has been matching still frontal faces under controlled illumination. The release of the Multiple Biometric Evaluation 2010 report showed excellent performance on mug shots and mobile studio environments. The top performer had a match rate of 93 percent when searching a database of 1.6 million faces. Researchers are also looking at ways to apply the latest advances in facial-recognition technology to uncontrolled environments, where success rates are 50 percent or lower. Improving success rates in video and film is especially important for law enforcement (for example, reviewing security tapes for suspects).

3D facial recognition technology is one of the best ways to neutralize environmental conditions that complicate human recognition and stump traditional face-recognition algorithms. Traditional methods rely on photos for facial data: 2D information. Using three-dimensional data points from a face vastly improves the precision of facial recognition. The European 3D Face Project has shown that combining 2D data (texture, color) with 3D data (shape) yields more accurate results than using 2D or 3D information alone. For example, researchers at Interval Research Corporation in Palo Alto have created a visual person tracking system by integrating depth estimation, color segmentation, and intensity pattern classification modules. 3D research is enhanced by the development of sophisticated sensors that do a better job of capturing 3D face imagery. The sensors work by projecting structured light onto the face. Up to a dozen or more of these image sensors can be placed on the same CMOS chip; each sensor captures a different part of the spectrum.

_____

The biggest technology challenges that remain in facial identification are overcoming low-resolution facial images, occlusion, orientation (being able to recognize profiles as well as frontal faces), and age (mainly the very young and the very old). These challenges are being approached from many directions. MIT, for example, is researching multidimensional morphable models, view-based human face detection, cortex-like mechanisms, and object detection by components. Other scientists are working on new algorithms, statistical pattern recognition, illumination cone models, geometrical face models, and slow feature analysis. Yet others are going against conventional wisdom by focusing on sparse partial representation — the idea that the choice of features is less important than the number of features used. Scientists at the University of Illinois and University of California, Berkeley have developed a sophisticated face-recognition system based on sparse representation that is remarkably accurate in real-life situations. Existing technologies try to find optimal facial features to use as key identifiers — for example, the width of the nose. Rather than seeking individual features, this system’s sparse representation algorithm randomly selects pixels from all over the face, increasing the accuracy of recognition even in cases of occlusion, varying expressions, or poor image quality.

“New mathematical models have allowed researchers to identify faces so occluded that it was previously thought impossible,” says University of Illinois lead researcher Dr. Yi Ma. Ma’s algorithm increases accuracy by ignoring all but the most compelling match from one subject. When applied to the Yale B database of images, this system showed 98.3 percent accuracy using mouth-region images; for the AR database it scored 97.5 percent accuracy for face images with sunglasses and 93.5 percent for a scarf disguise.

____

Although progress in face recognition has been encouraging, the task has also turned out to be a difficult endeavor, especially for unconstrained tasks where viewpoint, illumination, expression, occlusion, accessories, and so on vary considerably. The challenges come from the high nonconvexity of face manifolds, in the image space, under variations in lighting, pose and so on; unfortunately, theories of pattern recognition have offered no good methods for solving such difficult problems, especially when the size of the training data is small. However, there are two directions to look at towards possible solutions. One is to construct a “good” feature space in which the face manifolds become less complex, i.e., less nonlinear and nonconvex than in other spaces. This includes two levels of processing: (1) normalize face images geometrically and photometrically, such as using morphing and histogram equalization; and (2) extract features in the normalized images which are stable with respect to the said variations, such as those based on Gabor wavelets. The second strategy is to construct classification engines able to solve less nonlinear, although still nonlinear, problems in the feature space, and to generalize better. A successful algorithm usually combines both strategies. Still another direction is system design, including sensor hardware, to make the subsequent pattern recognition problems less challenging.

______

Facial Recognition and wearing glasses:

People wearing glasses provide a challenge to both facial detection and facial recognition software. Glasses, especially reflective sunglasses, can hinder an algorithm from finding the points of reference it needs when determining whether there is a face in a photo. If there has been no facial detection, there will clearly be no facial recognition. Prescription-type clear-lensed glasses will generally not be a problem for facial detection and recognition software. The key eye details are still visible. Even if they do cause some form of issue the software will continue to examine the other parts of the face, simply removing the occluded portions from its analysis. It is not uncommon for faces to still be detected and recognised with 30% occlusion.

Suppose somebody has already been enrolled in the system, based on a photo where they are not wearing glasses. A future attempt at facial recognition, using a photo of them with glasses, could easily confuse the algorithm as it attempts to compare the two pictures. Of course, if there are already photos of a person wearing glasses in the system’s database, the chances of facial recognition are increased greatly. Although advanced algorithms have a high detection rate with just one photo in the database, they do perform better with more, and if there is a mix of photos with the subject both with and without glasses, this improves the chances of recognition even further.

Every facial detection and/or recognition product uses a different algorithm. Many of these algorithms do recognise when people are wearing glasses. The best algorithms in the world still have difficulties, however, when people are wearing dark or shiny sunglasses that effectively hide the pupils of the eyes. This problem is particularly compounded when the glasses obscure the distance between the eyes, which is a key input to many algorithms. Most facial recognition algorithms start by determining eye features first, comparing these against their pre-existing databases of images, before they move on to other facial features.

Use of Glasses to hide from Recognition:

There are deliberate attempts by some people to avoid being recognised by this type of software. Celebrities, in particular, have tried to go incognito and avoid being recognised. Some people have real concerns about their privacy, and really do not want to be seen and have photos of themselves published online. These photos often include metadata about time and location. Some simply wear dark and shiny sunglasses, and these often are effective enough at stopping recognition. Many facial detection algorithms will still pick these up, but that is less of an issue to a celebrity wanting to avoid recognition.

Then there are technological inventions, such as glowing glasses, aimed at foiling facial recognition (and indeed detection). These glasses shine infrared light (invisible to the human eye) at the wearer’s eyes and nose, which totally confuses the facial detection software into thinking that there is simply a big bright flash in that spot, not a human being. [vide infra]

______

The next challenge for facial recognition is identifying people whose faces are covered:

Facial recognition is becoming more and more common, but ask anyone how to avoid it and they’ll say: easy, just wear a mask. In the future, though, that might not be enough. Facial recognition technology is under development that’s capable of identifying someone even if their face is covered up — and it could mean that staying anonymous in public will be harder than ever before.

The topic was raised recently after research published on the preprint server arXiv describing just such a system was shared in a popular AI newsletter. Using deep learning and a dataset of pictures of people wearing various disguises, researchers were able to train a neural network that could potentially identify masked faces with some reliability. Academic and sociologist Zeynep Tufekci shared the work on Twitter, noting that such technology could become a tool of oppression, with authoritarian states using it to identify anonymous protestors and stifle dissent.  The paper itself needs to be taken with a pinch of salt, though. Its results were far less accurate than industry-level standards (when someone was wearing a cap, sunglasses, and a scarf, for example, the system could only identify them 55 percent of the time); it used a small dataset; and experts in the field have criticized its methodology.

But although the paper has its flaws, the challenge of recognizing people when their faces are covered is one that plenty of teams are working on, and they are making quick progress. Facebook, for example, has trained neural networks that can recognize people based on characteristics like hair, body shape, and posture. Facial recognition systems that work on portions of the face have also been developed (although, again, these are not ready for commercial use). And there are other, more exotic methods to identify people: AI-powered gait analysis, for example, can recognize individuals with a high degree of accuracy, and even works with low-resolution footage of the sort you might get from a CCTV camera.

One system for identifying masked individuals developed at the University of Basel in Switzerland recreates a 3D model of the target’s face based on what it can see. Bernhard Egger, one of the scientists behind the work, said that he expected “lots of development” in this area in the near future, but thought that there would always be ways to fool the machine. “Maybe machines will outperform humans on very specific tasks with partial occlusions,” said Egger. “But, I believe, it will still be possible to not be recognized if you want to avoid this.”

There are ways to trick these systems. Wearing a rigid mask that covers the whole face, for example, would give current facial recognition systems nothing to go on. And other researchers have developed patterned glasses that are specially designed to trick and confuse AI facial recognition systems. Getting clear pictures is also difficult. Egger points out that we’re used to facial recognition performing quickly and accurately, but that’s in situations where the subject is compliant, such as scanning their face with a phone or at a border checkpoint.

______

______

Applications of facial recognition:

___

In recent years, innovative applications of biometric technology have been taking the world by storm. While the majority of early applications adopted fingerprint recognition, face recognition is now quickly taking over that role. Face recognition offers some unique advantages: it is non-invasive, can be captured from a distance, requires no touch, and is more universal than fingerprints and many other biometric identifiers. Mobile biometrics in particular has seen a rise in face recognition since the launch of the iPhone X.

The majority of facial recognition use-cases appear to fall into three major categories:

  • Security: Companies are training deep learning algorithms to detect fraud, reduce the need for traditional passwords, and improve the ability to distinguish between a human face and a photograph.
  • Healthcare: Machine learning is being combined with computer vision to more accurately track patient medication consumption and support pain management procedures.
  • Marketing: Fraught with ethical considerations, marketing is a burgeoning domain of facial recognition innovation, and it’s one we can expect to see more of as facial recognition becomes ubiquitous.

Face recognition is also useful in human-computer interaction, virtual reality, database retrieval, multimedia and computer entertainment; in information security, e.g. operating systems, medical records and online banking; in biometrics, e.g. personal identification for passports and driver licenses, and automated identity verification at border controls; in law enforcement, e.g. video surveillance and investigation; and in personal security, e.g. driver monitoring systems and home video surveillance. The ability to identify people in a scalable, non-intrusive manner has substantial benefits and numerous use cases. In various industries facial recognition has been used for access control, surveillance, time attendance, personalized marketing, enhancing the user experience, and extracting analytics. The possibilities within the events industry are also exciting.

______

Various Areas of application:

-Forensic science:

The face of a deceased person or a criminal is checked against the database to establish identity. Some software can identify faces similar to a given face even if the input is distorted.

-Identification systems:

This is an identification task, where any new applicant being enrolled must be compared against the entire database of previously enrolled claimants, to ensure that they are not claiming under more than one identity.

-Surveillance:

The application domain where most interest in face recognition is being shown is probably surveillance. Video is the medium of choice for surveillance because of the richness and type of information it contains, and naturally, for applications that require identification, face recognition is the best biometric for video data.

-Pervasive Computing:

Another domain where face recognition is expected to become very important, although it is not yet commercially feasible, is pervasive or ubiquitous computing.

_____

As consumer and enterprise face recognition solutions proliferate, there are a variety of ways that face recognition is currently being used. Here are some of the most prevalent face recognition applications.

-Biometric Surveillance:

Facial recognition is often used with static surveillance cameras. Cameras are typically optimized for angle and lighting conditions in order to capture the best possible image of an individual’s face. After an image is captured, the person’s face is then matched against a database of images and it is determined if that individual potentially matches someone that should be watched. One example is if a person walks into a retail outlet and his face matches that of a known organized retail criminal. Loss prevention professionals could proactively monitor that person. Whereas normal surveillance cameras are only reactive (offering information after crimes occur), facial recognition empowers the loss prevention teams to deter crime. Facial recognition is currently being used for surveillance purposes at retail stores, banks, casinos, sports arenas and more.

-Mobile Face Recognition:

Mobile face recognition software is often used by patrol officers to identify suspects in the field. For example, if they pull over someone who is speeding, and that person doesn’t have his driver’s license, the officer could snap a photo of the individual and potentially verify his identity and see whether he has any outstanding warrants. This can help officers save a lot of time and keep communities safer. Face recognition can also be used to send alerts to mobile devices which tell security personnel where to go, who to monitor and what to do. An example is if a dangerous criminal enters a department store, security might see an alert saying not to engage the person and instead to call the police immediately.

-Geofencing:

Facial recognition technology is sometimes used for geofencing, which uses biometric data to determine who should or shouldn’t be in a particular area. One example of this application is if a bank were to use face recognition to determine which employees have access to sensitive areas.

-Device/App Security:

Phones have already used biometric data in the form of fingerprints to enable access to various applications. Moving forward, face recognition will play a greater role in security, as our phones and other devices begin using face recognition to enable access to various apps.

_____

Use of facial recognition by people and organizations in different places:

  • U.S. government at airports. Facial recognition systems can monitor people coming and going in airports. The Department of Homeland Security has used the technology to identify people who have overstayed their visas or may be under criminal investigation. Customs officials at Washington Dulles International Airport made their first arrest using facial recognition in August 2018, catching an impostor trying to enter the country.
  • Mobile phone makers in products. Apple first used facial recognition to unlock its iPhone X, and continues with the iPhone XS. Face ID authenticates — it makes sure you’re you when you access your phone. Apple says the chance of a random face unlocking your phone is about one in 1 million.
  • Colleges in the classroom. Facial recognition software can, in essence, take roll. If you decide to cut class, your professor could know. Don’t even think of sending your brainy roommate to take your test.
  • Social media companies on websites. Facebook uses an algorithm to spot faces when you upload a photo to its platform. The social media company asks if you want to tag people in your photos. If you say yes, it creates a link to their profiles. Facebook can recognize faces with 98 percent accuracy.
  • Businesses at entrances and restricted areas. Some companies have traded in security badges for facial recognition systems. Beyond security, it could be one way to get some face time with the boss.
  • Religious groups at places of worship. Churches have used facial recognition to scan their congregations to see who’s present. It’s a good way to track regulars and not-so-regulars, as well as to help tailor donation requests.
  • Retailers in stores. Retailers can combine surveillance cameras and facial recognition to scan the faces of shoppers. One goal: identifying suspicious characters and potential shoplifters.
  • Airlines at departure gates. You might be accustomed to having an agent scan your boarding pass at the gate to board your flight. At least one airline scans your face.
  • Marketers and advertisers in campaigns. Marketers often consider things like gender, age, and ethnicity when targeting groups for a product or idea. Facial recognition can be used to define those audiences even at something like a concert.

_______

Overview of Uses of Facial Recognition Technology:

  1. Retail Stores:

Many retail stores are increasingly adopting face recognition to identify repeat customers and offer them special services. They are also using it to derive data and analyse the performance of their stores. Demographic details like gender and age can be used to reveal the kind of customers that frequent the store. Businesses can then optimise their products to drive more sales. The way things are headed, every store in the future will greet you by name!

Face recognition is currently being used to instantly identify when known shoplifters, organized retail criminals or people with a history of fraud enter retail establishments. Photographs of individuals can be matched against large databases of criminals so that loss prevention and retail security professionals can be instantly notified when a shopper who presents a threat enters a store. Face recognition systems are already radically reducing retail crime. Face recognition reportedly reduces external shrink by 34% and, more importantly, reduces violent incidents in retail stores by up to 91%.

  2. Shopping:

Companies like MasterCard are researching ways to enable payment verification through the face. The advantages are numerous, but the primary motivation is preventing fraud and identity theft. Among other advantages, one won’t have to remember passwords or put credit card information on the web. This would greatly reduce the losses banks incur due to international credit card theft and hacking.

  3. Phone security:

Mobile devices have become a hub of personal as well as financial data for users. Present-day phones are used for taking photos, doing business and performing financial transactions. The security requirements of these devices are as important as those of your bank account, if not more so. We have seen PINs, passwords and patterns for phone security, and we have also seen them fail. Any shoulder surfer can steal them; a pattern lock can even be stolen from the smudge impression left on the device screen while swiping the pattern.

In recent years, phone security and even the security of transactions performed on mobile devices have increasingly moved towards facial biometrics. Initially, simpler approaches to face recognition were implemented that leveraged the front-facing camera of the device to capture facial details. Unfortunately, this was not secure enough, as it could be fooled with photographs. Adding requirements like blinking or smiling during the face scan made these systems more secure, but they too could be fooled with a video clip. Now smartphone manufacturers are looking at an entirely different approach: a 3D map of the facial structure. This method is claimed to be highly secure, even more secure than fingerprint recognition. The 3D facial scan was first introduced by Apple with the iPhone X; Apple was so confident in its new facial recognition solution that it completely ditched the fingerprint sensor on the iPhone X. 3D facial recognition has not only elevated the level of phone security, it has also improved the security of transactions users perform on their smartphones using facial recognition. This technology is a powerful way to protect personal data and ensure that, if a phone is stolen, sensitive data remains inaccessible to the perpetrator.

  4. Online purchases:

Alibaba, a prominent Chinese ecommerce company, plans to use the Alipay platform to let users make purchases over the Internet. Alipay launched a Smile to Pay facial recognition system at a KFC outlet in Hangzhou. The system recognises a face within two seconds, and then verifies the scan by sending a mobile alert.

  5. Targeted advertising:

Face recognition has the ability to make advertising more targeted by making educated guesses at people’s age and gender. Back in 2013, Tesco announced the rollout of targeted ads based on the gender and age of customers at petrol stations. Using screens kitted out with OptimEyes software, the grocery giant aimed to offer more relevant advertising to the benefit of company and consumer. Today, other retail companies are looking to install similar software that identifies customers as they enter shops, changing display boards to suit their personal preferences.

  6. Marketing feedback:

As well as discovering the best ways to connect with potential and existing customers, facial recognition technology is also being used to judge levels of engagement. This has already been put to the test on smart ad boards, trialled in 2015 in London. Walmart is also rumoured to be developing its own FaceTech system to gain insights into customer satisfaction. Eventually, this could become standard procedure for all major retailers.

  7. Find Missing Persons:

Face recognition can be used to find missing children and victims of human trafficking. As long as missing individuals are added to a database, law enforcement can become alerted as soon as they are recognized by face recognition—be it an airport, retail store or other public space. In fact, 3000 missing children were discovered in just four days using face recognition in India!

  8. Helping the blind:

Listerine, the brand behind the popular mouthwash, might not seem like a regular candidate for technological development. However, the company has created a mobile app that enables the blind or visually impaired to know when someone is smiling at them. The app recognizes when people are smiling and alerts the blind person with a vibration. This can help them better understand social situations. This new way to experience the world could help blind people to forge more meaningful connections with others, easing the isolation that can come with a sensory impairment.

  9. Help Law Enforcement:

Mobile face recognition apps, like the one offered by FaceFirst, are already helping police officers instantly identify individuals in the field from a safe distance. This gives them contextual data that tells them who they are dealing with and whether they need to proceed with caution. As an example, if a police officer pulls over a wanted murderer at a routine traffic stop, the officer would instantly know that the suspect may be armed and dangerous, and could call for reinforcements.

Facial recognition can be a crime-fighting tool that law enforcement agencies can use to recognize people based on their eyes and face. MORIS (Mobile Offender Recognition and Information System) is a handheld biometric device that can be attached to a smartphone.  States like New York are now using facial recognition to catch identity thieves and other criminals committing fraud. These tricksters typically try to get driver’s licenses that don’t belong to them and facial recognition can be a powerful tool to unmask imposters.

  10. Aid Forensic Investigations:

Facial recognition can aid forensic investigations by automatically recognizing individuals in security footage or other videos. Face recognition software can also be used to identify dead or unconscious individuals at crime scenes.

  11. Social media:

Social media has become the platform where people connect and share their lives via text, photos and videos with friends and family. The convenience of connecting with others from your couch has already turned social media platforms into tech giants like Facebook. Facebook, the largest social media platform, houses millions of user videos and photographs captured from different distances, angles and lighting conditions. This presents an opportunity for social media platforms to leverage facial recognition (which most of them are already doing) to automatically identify individuals using the service. Facebook uses face recognition technology to automatically recognize when Facebook members appear in photos. This makes it easier for people to find photos they are in, and it can suggest when particular people should be tagged in photos. Face recognition on social media platforms can find photos you are tagged or untagged in. It can also alert you if any of your photos is uploaded by a user in your circles, helping users better manage their privacy preferences. Google Photos, a photo sharing platform by Google, takes sharing and managing an ever-increasing number of photos to the next level: with the help of face recognition and AI, it can identify people, places and even pets in the photos you upload.

  12. Diagnose Diseases:

Face recognition can be used to diagnose diseases that cause detectable changes in appearance. As an example, the National Human Genome Research Institute uses face recognition to detect a rare disease called DiGeorge syndrome, in which a portion of the 22nd chromosome is missing. Face recognition has helped diagnose the disease correctly in 96.6% of cases. As algorithms get even more sophisticated, face recognition will become an invaluable diagnostic tool for all sorts of conditions.

  13. Recognize VIPs at Sporting Events:

Face recognition can be used to provide fans with a better experience. It can instantly recognize when season ticketholders attend sporting events. Event venues can offer them swag, let them skip lines and provide other VIP perks that result in greater season ticketholder retention.

  14. Protect Schools from Threats:

Face recognition surveillance systems can instantly identify when expelled students, dangerous parents, drug dealers or other individuals that pose a threat to school safety enter school grounds. By alerting school security guards in real time, face recognition can reduce the risk of violent acts.

  15. Track School Attendance:

In addition to making schools safer, face recognition has the potential to track students’ attendance. Traditional attendance sheets allow students to sign in for classmates who are ditching class, but China is already using face recognition to ensure students aren’t skipping. Tablets are being used to scan students’ faces and match them against a database to validate their identities. Many schools have started installing this technology to automatically track pupils’ attendance. This gives teachers more time for teaching and maximises productivity. It also prevents students from faking attendance or other mischief, and makes it possible to keep a full eye on the campus and find and track bullies.

  16. Casinos:

Casinos are actively using facial recognition systems to recognize their customers as they arrive. Recognition allows them to identify repeat customers and estimate the frequency of their visits. They can then use this information to help gambling addicts, reward loyal customers, identify fraudsters, and even enforce bans. Face recognition can spot the moment that a cheater or advantage gambler enters a casino. In addition, it can recognize members of voluntary exclusion lists, who can cost casinos hefty fines if they’re caught gambling.

  17. Stop Toilet Paper Thieves:

In China, toilet paper theft in public restrooms is a big problem. Luckily face recognition has come to the rescue. China has installed machines in public restrooms that scan people’s faces before releasing toilet paper. It won’t release more paper to the same person until after 9 minutes have gone by.

  18. Facilitate Secure Transactions:

In China, a financial services company called Ant Financial enables customers to pay for meals by scanning their faces. Customers place orders through a digital menu, and then use a face scan as the payment option. After providing their telephone number they can then purchase their meal.

  19. Securing data:

Through biometric authentication, sensitive digital data could be secured from malicious influences. Data security has become a pressing issue, and not just for the boardroom.

  20. Validate Identity at ATMs:

It seems likely that face scans will eventually replace ATM cards completely. But in the meantime, face recognition can be used to make sure that individuals using ATM cards are who they say they are. Face recognition is currently being used at ATMs in Macau to protect people’s identities.

NAB trials AI-powered facial recognition ATMs:

National Australia Bank has begun a trial of new ATMs, which will use artificial intelligence-powered facial recognition software to enable customers to withdraw cash without a card or a phone. The move is part of a push across the bank to use the latest cloud-based systems; it will use Microsoft’s Azure platform, along with its cognitive services AI software, as the bank evaluates whether customers like the idea of cardless banking. The proof-of-concept ATMs will only be introduced to the wider public if the bank is satisfied it can address any privacy and security concerns about the use of customer biometrics. The ATMs will recognise a customer’s face and then require them to enter their PIN to complete transactions. As well as being potentially quicker and more convenient for customers using an ATM, NAB chief technology and operations officer Patrick Wright said it could potentially help guard against the fraudulent use of stolen cards and card-skimming.

  21. Make Air Travel More Convenient:

Airlines have already started using face recognition to help people check bags, check into flights and board planes faster. It seems like we are quickly moving toward a future in which air travel is not only safer than ever before, but also more convenient than any period in history.

  22. Track Attendance at Church:

Churches have started using facial recognition to see which members of their congregation are showing up to church. This can help them identify who to ask for donations and which members to reach out to in order to get them to attend more often.

  23. Find Lost Pets:

Finding Rover is an app that tries to help owners reunite with lost pets. The app uses face recognition (albeit it’s the face of an animal in this case) to match photos that pet owners upload to a database of photos of pets in shelters. The app can then instantly alert owners if their pets are found.

  24. Recognize Drivers:

More car companies are experimenting with ways to use face recognition. One use of face recognition for automobiles is using a face to replace a key as a means of starting a car. Face recognition can also be used to change radio stations and seat preferences based on who is driving. Face recognition can even make drivers safer by recognizing and alerting drivers if they are drifting off or not focusing on the road.

  25. Unlocking your car:

If, one day, we can access our personal electronic devices with our physical appearance, then this could also be the case for other connected possessions. According to Gartner, the number of IoT connected devices will reach 20 billion by 2020. One of the most obvious examples is cars. Jaguar has already been working on walking gait recognition software, and cars can already recognise and respond to surrounding environments. Eventually, this will probably include recognising their owners.

  26. Control Access to Sensitive Areas:

Face recognition can work as a means of access control to ensure that only authorized individuals get into facilities like labs, boardrooms, bank vaults, training centers for athletes and other sensitive locations. Many offices and government buildings are now using face recognition based authentication for seamless access control. This prevents, among other things, tailgating, where people who forgot their IDs slip in behind their colleagues to pass the access point. It also increases security by immediately reporting potential breach attempts, and makes it possible to comprehensively track an individual inside a campus or building and identify malicious visitors.

  27. Dating sites:

With the explosion of dating sites and apps and users showing keen interest in such services, businesses have come up with all sorts of crafty matching mechanisms to help users find their soul-mates. Among such sites are the likes of findyourfacemate.com. The makers of this site believe (or rather claim) that people are most attracted to those who look like them. They use neural nets to map faces to a mathematical space; people whose faces are nearby in this space are then suggested by the app as potential mates. Similarly, Doggelganger can match you up with a dog that looks like you!

  28. Government Agencies:

This one comes as no surprise. In countries like the US and China, government institutions like the FBI have been using face recognition for a long time to identify criminals. In China, some reports suggest that the government can identify criminals even before they commit a crime! Governments have been known to snoop on crowds at sporting events and other places.

  29. Hotels and Restaurants:

Hotels and restaurants are beginning to use this technology to identify customers even before they enter the door. This allows them to offer specialised services to each customer, taking into account their past preferences. Restaurants can also cater to their guests by preparing their favourite meal as soon as they arrive.

  30. Bars picking out underage drinkers:

Teenagers using fake IDs to get a drink have been a problem for quite some time. With age recognition technology it is possible to estimate age from a person’s face. This will prevent teenagers from drinking with fake IDs and hopefully reduce black market activity.

  31. Mental Health diagnosis:

We know that facial recognition tech can be used to detect diseases, but what about the healthcare issues that are not so easy to spot? Through closely tracking a patient’s expressions, medical staff could judge the extent of distress and come closer to making an accurate diagnosis.

  32. Social robot interactions:

In order to function as quality companions and helpers, social robots need to be able to understand the nuances of human emotion. Alongside natural language processing and contextual data, face recognition technology will be vital in enabling a meaningful exchange between humans and bots. In order to respond to a distressed human, social robots cannot simply rely on what that person tells them.

  33. Reading concentration levels:

If you have ever been in a class or lecture you would rather not be in, it is very easy to lose concentration. At ESG Management School, facial biometrics software called Nestor tracks the engagement of students during lessons.

  34. Age estimation by facial recognition:

Another application is the estimation of human age from face images. As an important cue in human communication, facial images contain lots of useful information including gender, expression, age, etc. Unfortunately, compared with other cognition problems, age estimation from facial images is still very challenging. This is mainly because the aging process is influenced not only by a person’s genes but also by many external factors: physical condition, lifestyle and so on may accelerate or slow the aging process. Besides, since the aging process is slow and of long duration, collecting sufficient data for training is fairly demanding work.
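
As a hedged illustration, the snippet below sketches age (and gender) estimation with the open-source DeepFace library; the image path is hypothetical, and the exact return format varies between library versions, so treat this as an outline rather than production code.

    from deepface import DeepFace

    # Analyze a single face image; recent DeepFace versions return a
    # list with one dictionary per detected face.
    results = DeepFace.analyze(img_path="face.jpg",
                               actions=["age", "gender"])
    for face in results:
        print("Estimated age:", face["age"])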

_____

_____

Now I will discuss some facial recognition applications in detail:

Access Control:

In addition to payment verification, facial biometrics can be used in a variety of ways for device access control. For mobile devices, this means replacing the passcode and pattern for accessing the physical device. This seemingly small-scale implementation will open the way in the future for cars, houses, and other secure physical locations. The luxury car manufacturer Jaguar is working on a potentially parallel technology, walking gait ID. Other organizations are well aware of the advantages of such technology, especially those working with sensitive data facilities. The use of facial recognition for computer access is an exciting application as well: computer terminals secured with facial recognition allow users to leave their terminal without the need to lock or otherwise secure it.

In many access control applications, such as office access or computer logon, the group of people that need to be recognized is relatively small. The face pictures are also captured under favourable conditions, such as frontal poses and indoor illumination, so a face recognition system in this setting can achieve high accuracy without much co-operation from the user. Access control is an important measure in layering physical and logical security. It makes sure that no one other than an authorized individual can access a controlled area. This secure area may be a physical facility, e.g. a server room, or a digital one, e.g. a server in that room that requires a facial scan to unlock.

Facial recognition offers several benefits over other biometric methods of access control:

  • Cameras can detect the face and unlock the door even before the person touches it, making controlled access frictionless for authorized users.
  • It can also unlock computers as soon as users get in front of their PCs, laptops or servers. There is no need to enter a password or scan fingerprints.
  • It is the least invasive recognition method of all.
  • Cameras are everywhere: on phones, in offices, in public places. People are already comfortable with cameras.

It is also more hygienic than contact-based biometric methods like fingerprint or palm print recognition, which can spread disease in a facility where many people seek access by touching the same scanners, e.g. in factories and large offices.
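
For a flavour of how such a face-based access check can be wired up, here is a minimal sketch using the open-source face_recognition library; the file names are hypothetical, and 0.6 is simply the library’s customary decision threshold, which a real deployment would tune.

    import face_recognition

    # Enrollment: compute a 128-dimensional encoding for the authorized user.
    authorized = face_recognition.face_encodings(
        face_recognition.load_image_file("authorized.jpg"))[0]

    # Verification: encode each face seen at the door and compare.
    visitor = face_recognition.load_image_file("visitor.jpg")
    for encoding in face_recognition.face_encodings(visitor):
        distance = face_recognition.face_distance([authorized], encoding)[0]
        print("Unlock door" if distance < 0.6 else "Access denied")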

_____

Payments:

The payment industry has been evolving rapidly over the past decades, and it is no surprise why: businesses and consumers both want a simpler and more secure payment processing system. With the rise of online shopping and mobile payments, the need to pay by cash or card has declined massively, and this appeals to consumers for the near future. With the implementation of FRT, consumers will not even need their cards or mobile devices to pay. MasterCard launched a revolutionary new concept through its selfie-pay app, MasterCard Identity Check, in 2016. This app simply takes a photo using the phone camera to confirm payments by consumers, simple as that! Facial recognition is already in fairly wide use in stores and at ATMs, but venturing into the world of online shopping is the next logical step. The booming Chinese ecommerce firm Alibaba plans to implement this technology for purchases using its affiliate payment software Alipay. Payments with face recognition are gaining popularity: many banks and financial service institutions enable customers to pay with facial recognition (commonly referred to as selfie-pay) on banking and finance apps. This ability is also expected to arrive in POS terminal software as biometric payments become commonplace; some POS terminals have already started accepting mobile payments authenticated with fingerprints or face recognition.

_

Fast food chain CaliBurger is boosting customer engagement with facial recognition technology that identifies loyalty program members without a loyalty card swipe at the store. This saves both parties time and improves the customer experience.

John Miller, CEO of Cali Group said, “face-based loyalty significantly reduces the friction associated with loyalty program registration and use; further, it enables a restaurant chain like CaliBurger to provide a customized, one-on-one interactive experience at the ordering kiosk. Our goal for 2018 is to replace credit card swipes with face-based payments. Facial recognition is part of our broader strategy to enable the restaurant and retail industries to provide the same kinds of benefits and conveniences in the built world that customers experience with retailers like Amazon in the digital world.”

______

Surveillance:

Surveillance cameras play a major role in public, private and mass surveillance applications. These cameras record video of the surveilled area so that any incident can be investigated later. This approach to surveillance has only been good for investigating incidents, not for stopping them. Fortunately, face recognition technology can address this shortcoming when used with surveillance camera systems. A face recognition system matches digital photographs of already identified subjects. Used alongside a surveillance system, it can take face images out of recorded or live video footage and match them against the database of already identified subjects. Intelligence agencies can put certain individuals under surveillance, and the face recognition system can pick their faces out of the captured video stream. If a match is found, the system can raise an alert with the location of the camera where the face was captured. This gives security officials a chance to identify individuals looking to carry out an incident beforehand and to stop any disruptive activity. The same approach is being leveraged in mass surveillance applications around the world to identify subjects out of large crowds. While security and surveillance may not go hand in hand as ideally as required, the issues of privacy and personal independence remain the most difficult barriers. Moreover, in practice facial recognition performs far less satisfactorily in surveillance settings, because environmental conditions such as lighting, exposure and camera angle pose a massive challenge.
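
A watchlist match against a live camera feed can be sketched in a few lines, again assuming the face_recognition and OpenCV libraries; the enrolled photo, the camera index and the alert action are all placeholders.

    import cv2
    import face_recognition

    # Hypothetical watchlist with a single enrolled subject.
    watchlist = [face_recognition.face_encodings(
        face_recognition.load_image_file("subject.jpg"))[0]]

    cap = cv2.VideoCapture(0)                  # surveillance camera feed
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # library expects RGB
        for encoding in face_recognition.face_encodings(rgb):
            if True in face_recognition.compare_faces(watchlist, encoding):
                print("Alert: watchlist match at this camera")
    cap.release()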

________

Security, policing and law enforcement:

Face recognition is poised to become one of the most pervasive surveillance technologies, and law enforcement’s use of it is increasing rapidly. Today, law enforcement officers can use mobile devices to capture face recognition-ready photographs of people they stop on the street; surveillance cameras boast real-time face scanning and identification capabilities; and federal, state, and local law enforcement agencies have access to hundreds of millions of images of faces of law-abiding Americans. On the horizon, law enforcement would like to use face recognition with body-worn cameras, to identify people in the dark, to match a person to a police sketch, or even to construct an image of a person’s face from a small sample of their DNA.

__

Police and security forces are the keenest to integrate facial recognition technology to verify citizen identities by analyzing their faces. In 2017 the software helped Chinese police to spot 25 criminals at the Qingdao Beer Festival. The technology can recognize one face from millions in just one second, an additional security level designed for people’s safety, and it has a high accuracy rate of around 99% under supervised conditions. Likewise, other countries like the UK, US, Russia and Germany are using facial recognition technology to control security risks. Having more than half of the population’s faceprints sounds unusual, but it is actually helping law enforcement track down criminals across the country and keep citizens safe. The U.S. Department of State operates one of the largest face recognition systems in the world, with a database of 117 million American adults, with photos typically drawn from driver’s license photos. Although it is still far from complete, it is being put to use in certain cities to give clues as to who was in a photo; the FBI uses the photos as an investigative tool, not for positive identification. As of 2016, facial recognition was being used to identify people in photos taken by police in San Diego and Los Angeles (not on real-time video, and only against booking photos) and use was planned in West Virginia and Dallas. In recent years Maryland has used face recognition by comparing people’s faces to their driver’s license photos. The system drew controversy when it was used in Baltimore to arrest unruly protesters after the death of Freddie Gray in police custody. Many other states are using or developing a similar system, although some states have laws prohibiting its use. The FBI has also instituted its Next Generation Identification program, which includes face recognition as well as more traditional biometrics like fingerprints and iris scans, and which can pull from both criminal and civil databases. In May 2017, a man was arrested using an automatic facial recognition (AFR) system mounted on a van operated by the South Wales Police; this appears to be the first time AFR has led to an arrest. As of late 2017, China has deployed facial recognition technology in Xinjiang. Reporters visiting the region found surveillance cameras installed every hundred meters or so in several cities, as well as facial recognition checkpoints at areas like gas stations, shopping centers, and mosque entrances.

_

The Australian Border Force and New Zealand Customs Services have set up an automated border processing system called SmartGate that uses face recognition, which compares the face of the traveller with the data in the e-passport microchip. Major Canadian airports will be using a new facial recognition program as part of the Primary Inspection Kiosk program that will compare people’s faces to their passports. The Tocumen International Airport in Panama operates an airport-wide surveillance system using hundreds of live face recognition cameras to identify wanted individuals passing through the airport.  Police forces in the United Kingdom have been trialling live facial recognition technology at public events since 2015. However, a recent report and investigation by Big Brother Watch found that these systems were up to 98% inaccurate.

_

Airport security:

Airports are always under the threat of terrorist and criminal activity. International airports are particularly sensitive in this regard, as they can be the first or the last place visited by criminals or terrorists before they enter or exit the country. This presents a security challenge as well as an opportunity to catch criminals and terrorists as they try to cross the international border. Face recognition can address the security challenges at airports and help law enforcement agencies identify known subjects crossing the border or engaging in criminal or terrorist activity.

Efforts to bolster airport security with face biometrics are already underway in many parts of the world. In the United States, the Biometric Exit program at several airports requires international travellers to go through a facial scan. Several other countries are also adopting smart gates at airports to take travellers through a facial scan before they can board their flight. That is not all: surveillance cameras at the airport can be equipped with face recognition to catch subjects under surveillance who may be trying to engage in a disruptive activity. Being an international entry and exit point of a country, an airport is also where law enforcement agencies can use facial recognition to identify subjects that they do not wish to enter or exit the country.

_

Facial recognition is no longer only an application for high-risk locations, such as airports, nuclear power plants and government buildings. A growing number of businesses realize that the ability to identify and recognize specific individuals can help to improve customer service and to serve as a proactive way of protecting their assets. Meanwhile, a broad range of high-resolution network and embedded cameras in combination with the development of commercial AI systems allows for a full replacement of traditional security measures, serving many use cases:

Digital security:

  • Online & Mobile Identity Verification
  • Compare Photo ID to Selfie
  • Fraud detection/anti-spoofing with facial liveness features
  • Frictionless authentication with face recognition AI

Physical security:

  • Threat and intrusion detection
  • Perimeter and asset monitoring
  • Known individual detection

Looking beyond face recognition, to human analytics:

People are the fundamental interface of all businesses. If a camera can know who a person is and how they feel, incredible insights can be unlocked. Sometimes just recognizing faces and people is not enough. The next phase of this technology is to provide demographic and emotion analysis on faces found in images and video, in turn offering a more complete view of how people interact with the world around them.

_______

Find missing children:

India is using facial recognition to reunite missing children with their families. Police in New Delhi recently trialled the technology and identified almost 3,000 missing children in just four days. The software scanned the pictures of 45,000 children living in children’s homes and orphanages and was able to match 2,930 of them with photographs held on the lost child database run by the government. The TrackChild portal is a national tracking system for India’s lost and vulnerable children; it holds details of the children as well as their photographs. Figures from TrackChild suggest 237,040 children went missing in India between 2012 and 2014.

______

Healthcare:

You walk into the hospital and the person sitting at the front desk calls you by your name and quickly assigns you to the medical help that suits your ailment. Surprised? Yup, it’s an FRT camera that identifies you and pulls up your data, which helps you check in quicker and directs you to your physician’s room. Your physician examines you and writes prescriptions, which are automatically sent to the drugstore, and by simply verifying your biometric identity you get the medicines home without carrying a single piece of paper. Facial recognition technology also allows healthcare providers to secure their patients’ data. According to Market Research Future, the global healthcare biometrics market is expected to grow to USD 5.6 billion at a CAGR of 22.3% by 2022.

_

Thanks to deep learning and face analysis, it is already possible to:

  • track a patient’s use of medication more accurately
  • detect genetic diseases such as DiGeorge syndrome with a success rate of 96.6%
  • support pain management procedures.

The use of FRT could be enhanced to not only identify a person but also to identify potential illnesses based on the patient’s features. This would drastically improve the rate of diagnosing patients in healthcare centers as well as reduce waiting times.

  1. AiCure – Medication adherence

One of the major challenges in the healthcare industry is medication non-adherence or non-compliance by patients. A patient may be prescribed a dosage, but if it is not taken correctly at the right time, the doctor cannot be blamed for the outcome. In the US alone, more than 50 percent of patients with prescription medication do not comply with their prescriptions. This results in more hospital admissions and an estimated $100 billion in annual costs which could have been prevented. In 2010, an AI company integrated facial recognition technology with computer vision to develop AiCure, an app driven by algorithmic software and accessible on mobile devices, aimed at improving medication adherence. In a pilot study conducted in 2017 with 75 schizophrenia patients, the app reported an 89.7% drug adherence rate compared to the traditional rate of 71.9%. The study emphasized the massive potential for facial recognition to impact the healthcare industry, patient success rates and even the economy.

  2. ePAT – Pain management

According to reports, approximately 100 million Americans suffer from chronic pain, about one-third of the total population. The medical costs related to pain and pain relief are up to $630 billion per year. On a global scale, this translates to over 1.5 billion individuals with chronic pain. ePAT is an app designed to detect facial hints of pain using facial recognition technology. It also allows users to enter “non-facial pain cues” such as “vocalizations, movements and behaviors” to determine a pain severity score, which can then be treated with appropriate care.

  3. In addition to various congenital malformations, individuals with DiGeorge syndrome have facial anomalies that lead them to assume typical and characteristic expressions. However, these facial changes vary from ethnicity to ethnicity, making detection particularly complex. To solve this problem, the NHGRI team studied photographs of hundreds of patients suffering from this rare disease and then developed software that, through facial recognition, is able to correctly diagnose the patient’s condition in 96.6% of cases. As stated by Christoffer Nellaker, of the MRC Functional Genomics Unit at Oxford: “A doctor should in future, anywhere in the world, be able to take a smartphone picture of a patient and run the computer analysis to quickly find out which genetic disorder the person might have”.

_

Scientists put Facial Recognition Algorithms to work in Diagnosing Malaria:

The method is based on computer vision algorithms similar to those used in facial recognition systems combined with visualization of only the diagnostically most relevant areas. Tablet computers can be utilized in viewing the images. In this new method, a thin layer of blood smeared on a microscope slide is first digitized. The algorithm analyzes more than 50,000 red blood cells per sample and ranks them according to the probability of infection. Then the program creates a panel containing images of more than a hundred most likely infected cells and presents that panel to the user. The final diagnosis is done by a health-care professional based on the visualized images.  By utilizing a set of existing, already diagnosed samples, the researchers were able to show that the accuracy of this method was comparable to the quality criteria defined by the World Health Organization. In the test setting, more than 90% of the infected samples were accurately diagnosed based on the panel.
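
The ranking-and-panel step lends itself to a very small sketch: given per-cell infection probabilities from some classifier (not the study’s actual model), sort the segmented cell images by score and keep the most suspicious ones for human review.

    import numpy as np

    def build_review_panel(cell_images, infection_scores, panel_size=100):
        # Sort cells by P(infected), most suspicious first, and keep
        # the top panel_size for a clinician to inspect.
        order = np.argsort(infection_scores)[::-1]
        return [cell_images[i] for i in order[:panel_size]]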

_

Mobile Facial Recognition System for Patient Identification in Medical Emergencies for Developing Economies:

Medical emergencies are part of the common daily lives of people in developing and underdeveloped economies. Frequently, some of these medical emergencies end tragically, for many reasons, among which is the delivery of medical treatment when the patient is uncommunicative or unresponsive. The ability of the attending medical personnel to access a patient’s medical history is critical for the quality of the treatment rendered. Unfortunately, today many lives are lost in low income economies during medical emergencies due to the lack or inaccessibility of patients’ medical information. One major contributing factor to this paucity of records is the absence of reliable and cost-efficient healthcare delivery systems that support patient identification and verification. Given the current ubiquity of mobile devices with built-in digital cameras, researchers have explored the feasibility and practicability of using the mobile platform and facial recognition technology to deploy a cost-efficient system for reliable patient identification and verification.

_____

Shanghai Hongqiao airport introduces automated facial-recognition check-in:

Shanghai’s Hongqiao airport has introduced automatic check-in using facial-recognition technology, part of an ambitious rollout of facial-recognition systems that has raised privacy concerns as China pushes to become a global leader in the field. The city’s international airport unveiled self-service kiosks for flight and baggage check-in, security clearance and boarding powered by facial-recognition technology, according to the Civil Aviation Administration of China.

_____

Facial recognition tech to be used on Olympians and staff at Tokyo in 2020:

Automated facial recognition systems from Japanese biz NEC will be used on staffers and athletes at the Tokyo 2020 Olympics. It will require athletes, staff, volunteers and the press to submit their photographs before the games start. These will then be linked up to IC chips in their passes and combined with scanners on entry to allow them access to more than 40 facilities. Tsuyoshi Iwashita, head of security for the games, said the aim was to reduce pressure on entry points and shorten queuing time for this group of people.

_______

3D facial recognition applications:

Border Control:

Since 3D sensing technology is relatively costly, its primary application is the high-security, high-accuracy authentication setting, for instance the control point of an airport. In this scenario, the individual briefly stops in front of the scanner for acquisition. The full face scan can contain between 5,000 and 100,000 3D points, depending on the scanner technology. This data is processed to produce a biometric template of the desired size for the given application. Template security considerations and the storage of biometric templates are important issues. Biometric databases tend to grow as they are used; the FBI fingerprint database contains about 55 million templates. In verification applications, the storage problem is not so vital, since templates are stored on cards such as e-passports. In verification, the biometric is used to verify that the scanned person is the person who supplied the biometric in the e-passport, but extra measures are necessary to ensure that the e-passport has not been tampered with. With powerful hardware, it is possible to add a screening application to this setting, where the acquired image is compared to a small set of individuals. However, for a recognition setting where the individual is searched among a large set of templates, biometric templates should be compact. Another challenge for civil ID applications that assume enrolment of the whole population is the deployment of biometric acquisition facilities, which can be very costly if the sensors are expensive. This cost is even greater if multiple biometrics are to be collected and used in conjunction.
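
The 1:1 verification step at such a control point reduces to comparing two fixed-length feature vectors, one read from the e-passport chip and one computed from the live scan. The sketch below uses cosine similarity with an illustrative threshold; a deployed system would calibrate the threshold against target false accept and false reject rates.

    import numpy as np

    def verify(passport_template, live_embedding, threshold=0.5):
        # Cosine similarity between the stored template and the live scan.
        a = np.asarray(passport_template, dtype=float)
        b = np.asarray(live_embedding, dtype=float)
        similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
        return similarity >= threshold         # accept or reject traveller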

Access Control:

Another application scenario is the control of a building, or an office, with a manageable size of registered (and authorized) users. Depending on the technology, a few thousand users can be managed, and many commercial systems are scalable in terms of users with appropriate increase in hardware cost. In this scenario, the 3D face technology can be combined with RFID to have the template stored on a card together with the unique RFID tag. Here, the biometric is used to authenticate the card holder given his/her unique tag.

Criminal ID:

In this scenario, face scans are acquired from registered criminals by a government-sanctioned entity, and suspects are searched in a database or in videos coming from surveillance cameras. This scenario would benefit most from advances in 2D-3D conversion methods. If 2D images can be reliably used to generate 3D models, the gallery can be enhanced with 3D models created from 2D images of criminals, and acquired 2D images from potential criminals can be used to initiate search in the gallery.

Identification at a Distance:

A more challenging scenario is identification at a distance, where the subject is sensed in an arbitrary situation. In this scenario, people can be far away from the camera and unaware of the sensors, and the challenge stems from these uncooperative subjects. Assuming that the template of the subject was acquired with a neutral expression, it is straightforward for a person who wants to avoid detection to change parts of his or her facial surface with a smiling or open-mouthed expression. Similarly, growing a moustache or a beard, or wearing glasses, can make identifying the person difficult. A potential solution to this problem is to use only the rigid parts of the face, most notably the nose area, for recognition. However, restricting the input data to such a small area means that a lot of useful information is lost, and the overall accuracy decreases. Furthermore, certain facial expressions affect the nose and consequently cause a drop in recognition accuracy. This scenario also includes consumer identification, where a commercial entity identifies a customer for personalized services. Since convenience is of utmost importance in this case, face biometrics are preferable to most alternatives.

Access to Consumer Applications:

Finally, a host of potential applications are related to appliances and technological tools that can be proofed against theft with the help of biometrics. For this type of scenario, overall system cost and user convenience are more important than recognition accuracy. Therefore, stereo camera based systems are better suited for these applications. Computers or even cell phones with stereo cameras can be protected with this technology. Automatic identification has additional benefits that can increase the usefulness of such systems. For instance, a driver authentication system using 3D facial characteristics may provide customization for multiple users in addition to ensuring security. Once the face acquisition and analysis tools are in place, such a system can also be employed for opportunistic purposes, for instance to determine drowsiness of the driver by facial expression analysis.

______

Surveillance of animals with facial recognition technology:

A Norwegian company is using facial recognition to capture and store the faces of millions of Atlantic salmon to help fight disease. That fish-face database will potentially let farmers monitor salmon numbers and detect abnormalities in health, like parasitic sea lice, which could be a boon for farmers around the world. Salmon are just the latest entry in a growing abundance of animal faces loaded into databases. For some animals, the biometric data gathered from them is being used to aid in conservation efforts. For others, the resulting AI could help ward off poachers. While partly creepy and partly very cute, monitoring of these animals can both help protect their populations and ensure safe, traceable livestock for developing communities. There are many animals currently being surveilled with facial recognition software.  Dog and cat facial recognition has been around for several years now and is typically used as a tool to help distressed owners find their lost (or perhaps escaped) friends. One of these systems, called PiP, sends out an alert with the missing pet’s face to vet clinics and animal shelters within a 15-mile radius of the user.

Cargill partners with local farmers and helps collect the cow headshots. With that information, Cargill says it can monitor each cow’s “food and water intake, heat detection and behavior patterns.” Once the data is analyzed, farmers can more accurately determine each cow’s health and even predict variations in milk production. Chinese e-commerce behemoth JD.com is using facial recognition to monitor large groups of pigs and quickly detect metrics like age, weight, and diet. Researchers at the University of Cambridge are using facial recognition to see how sheep feel; specifically, the researchers are interested in whether or not they feel pain. Conservationists and wildlife researchers are using facial recognition to keep tabs on a database of over 1,000 lions. U.K. researchers are using online resources like Flickr and Instagram to help build and strengthen a database that will eventually help track global tiger populations in real time. Wildlife experts are tracking elephants to protect them from poachers. However, facial recognition technology is not easy to use on animals. According to Yingzi Holding founder Chen Yaosheng, pigs prove significantly more difficult to surveil than humans for one pesky reason: “Humans would stay still in front of the camera, but pigs don’t.”

_______

_______

Limitations, failure and bias in facial recognition technology:

_____

Limitations of facial recognition technology:

While face recognition programs can use a variety of measurements and types of scans to detect and identify faces, there are limitations.

  • Poor resolution images and poor lighting can reduce the accuracy of face-scanning results.
  • Different angles and facial expressions, even a simple smile, can pose challenges for face matching systems.
  • Facial recognition loses accuracy when the person is wearing items like glasses, hats, scarves, or hair styles that cover part of the face. Makeup and facial hair can also pose issues for face detection programs.
  • Facial scans don’t necessarily connect with a profile, meaning that a scan of a person’s face may not be useful if there are no photos of them in an accessible database. Without a match, the identity of the person behind the face scan can remain a mystery.

Concerns over privacy or security can also limit how facial recognition systems may be used. For example, scanning or collecting facial recognition data without a person’s knowledge and consent violates Illinois’ Biometric Information Privacy Act of 2008. Also, while the lack of a facial recognition match can render a scan useless, a strong match can be a security risk: facial recognition data that positively matches online photos or social media accounts could allow identity thieves to gather enough information to steal a person’s identity.

_______

Facial recognition weaknesses and ineffectiveness:

  1. Weaknesses:

In 2008, Ralph Gross, a researcher at the Carnegie Mellon Robotics Institute, described one obstacle related to the viewing angle of the face: “Face recognition has been getting pretty good at full frontal faces and 20 degrees off, but as soon as you go towards profile, there’ve been problems.” Besides pose variations, low-resolution face images are also very hard to recognize; this is one of the main obstacles for face recognition in surveillance systems. Face recognition is less effective if facial expressions vary; a big smile can render the system less effective. For instance, in 2009 Canada began allowing only neutral facial expressions in passport photos. There is also inconsistency in the datasets used by researchers: studies may use anywhere from several subjects to scores of subjects, and from a few hundred images to thousands of images. It is important for researchers to make the datasets they used available to each other, or at least to adopt a standard dataset. Data privacy is a major concern when companies store biometric data: stored face or biometric data can be accessed by third parties if it is not stored properly or if the system is hacked. Writing in Techworld, Parris (2017) adds: “Hackers will already be looking to replicate people’s faces to trick facial recognition systems, but the technology has proved harder to hack than fingerprint or voice recognition technology in the past.”

  2. Ineffectiveness:

Critics of the technology complain that the London Borough of Newham scheme had, as of 2004, never recognized a single criminal, despite several criminals in the system’s database living in the Borough and the system having been running for several years. “Not once, as far as the police know, has Newham’s automatic face recognition system spotted a live target.” This information seems to conflict with claims that the system was credited with a 34% reduction in crime (which is why it was also rolled out to Birmingham). However, it can be explained by the notion that when the public is regularly told that they are under constant video surveillance with advanced face recognition technology, this fear alone can reduce the crime rate, whether or not the face recognition system technically works. This has been the basis for several other face recognition based security systems, where the technology itself does not work particularly well but the user’s perception of the technology does.

An experiment in 2002 by the local police department in Tampa, Florida, had similarly disappointing results. A system at Boston’s Logan Airport was shut down in 2003 after failing to make any matches during a two-year test period.  In 2018, a report by the civil liberties and rights campaigning organisation Big Brother Watch revealed that two UK police forces, South Wales Police and the Metropolitan Police, were using live facial recognition at public events and in public spaces, but with an accuracy rate as low as 2%. Their report also warned of significant potential human rights violations. It received widespread press coverage in the UK.

Systems are often advertised as having accuracy near 100%; this is misleading as the studies often use much smaller sample sizes than would be necessary for large scale applications. Because facial recognition is not completely accurate, it creates a list of potential matches. A human operator must then look through these potential matches and studies show the operators pick the correct match out of the list only about half the time. This causes the issue of targeting the wrong suspect.

  3. Problems with twins and minors:

Facial recognition has its own obstacles. The technology is unable to distinguish between identical twins. In those cases (and for children under the age of thirteen, whose facial features may not yet have fully developed), Apple recommends that users employ a password as an extra security measure.

  4. 3-D masks:

Another threat is the use of 3-D masks. Experts at the CyLab Biometric Center at Carnegie Mellon University point out that while FaceTec software can prevent forgery attempts through videos or user photographs, it would probably not work with a 3-D mask. Apple claims to have trained Face ID to recognize that specific type of falsification. Experts also warn that software-based face recognition systems would require “additional hardware protection to match the security level of Face ID technology.”

_______

The Boston Marathon bombings revealed the limitations of facial-recognition technology to the general public. Many private citizens, accustomed to seeing computers on television and in the movies match photographs to motor vehicle and other databases in mere seconds, were surprised that America’s premier law enforcement agencies did not have the same level of technological sophistication available to them when Boston’s, and perhaps the country’s, security had been threatened. Since 9/11, the federal government has spent a great deal of money on facial-recognition technology, with grants in the millions of dollars going to state and local governments for database creation. Even though government databases contained pictures of both of the Boston suspects, technology could not match surveillance footage to database images.

Several factors limit the effectiveness of facial-recognition technology:

  1. Image quality:

Image quality affects how well facial-recognition algorithms work. The image quality of scanning video is quite low compared with that of a digital camera. Even high-definition video is, at best, 1080p (progressive scan); usually, it is 720p. These values are equivalent to about 2MP and 0.9MP, respectively, while an inexpensive digital camera attains 15MP. The difference is quite noticeable.

  2. Image size:

When a face-detection algorithm finds a face in an image or in a still from a video capture, the relative size of that face compared with the enrolled image size affects how well the face will be recognized. An already small image size, coupled with a target distant from the camera, means that the detected face may be only 100 to 200 pixels on a side. Further, having to scan an image for varying face sizes is a processor-intensive activity. Most algorithms allow specification of a face-size range to help eliminate false positives on detection and speed up image processing; a detection sketch illustrating this size filtering appears after this list.

  3. Face angle:

The relative angle of the target’s face influences the recognition score profoundly. When a face is enrolled in the recognition software, usually multiple angles are used (profile, frontal and 45-degree are common). Anything less than a frontal view affects the algorithm’s capability to generate a template for the face. The more direct the image (both enrolled and probe image) and the higher its resolution, the higher the score of any resulting matches.

  4. Processing and storage:

Even though high-definition video is quite low in resolution when compared with digital camera images, it still occupies significant amounts of disk space. Processing every frame of video is an enormous undertaking, so usually only a fraction (10 percent to 25 percent) is actually run through a recognition system. To minimize total processing time, agencies can use clusters of computers. However, adding computers involves considerable data transfer over a network, which can be bound by input-output restrictions, further limiting processing speed.

  5. Changing faces:

Facial recognition requires only the simple, natural act of facing a device. The problem lies in its susceptibility to facial changes, such as aging and facial expression, and to changes in the environment in which the photograph is taken, such as lighting and camera angle. These factors reduce recognition performance and remain a major drawback of the method.
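As mentioned under item 2 above, face detectors can be told to search only a plausible range of face sizes. Here is a minimal sketch using OpenCV’s bundled Haar cascade frontal-face detector; the image path and the 100 to 200 pixel bounds are illustrative assumptions, not values prescribed by any particular system.

```python
# Minimal sketch: restrict detection to a plausible range of face sizes
# to cut false positives and speed up scanning (assumed thresholds).
import cv2

# OpenCV ships with pretrained Haar cascades; this one detects frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("frame.jpg")                 # e.g., a still from CCTV video
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # the detector expects grayscale

# minSize/maxSize bound the sliding-window search: candidate windows outside
# 100-200 pixels per side are never evaluated, which saves processing time.
faces = cascade.detectMultiScale(
    gray, scaleFactor=1.1, minNeighbors=5,
    minSize=(100, 100), maxSize=(200, 200))

print(f"{len(faces)} candidate face(s) found in the 100-200 px range")
```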

Ironically, humans are vastly superior to technology when it comes to facial recognition. But humans can only look for a few individuals at a time when watching a source video. A computer can compare many individuals against a database of thousands.

As technology improves, higher-definition cameras will become available. Computer networks will be able to move more data, and processors will work faster. Facial-recognition algorithms will be better able to pick out faces from an image and recognize them in a database of enrolled individuals. The simple mechanisms that defeat today’s algorithms, such as obscuring parts of the face with sunglasses and masks or changing one’s hairstyle, will be easily overcome.

An immediate way to overcome many of these limitations is to change how images are captured. Using checkpoints, for example, requires subjects to line up and funnel through a single point. Cameras can then focus on each person closely, yielding far more useful frontal, higher-resolution probe images. However, wide-scale implementation increases the number of cameras required.

Evolving biometrics applications are promising. They include not only facial recognition but also gestures, expressions, gait and vascular patterns, as well as iris, retina, palm print, ear print, voice recognition and scent signatures. A combination of modalities is superior because it improves a system’s capacity to produce results with a higher degree of confidence. Associated efforts focus on improving capabilities to collect information from a distance where the target is passive and often unknowing.

Clearly, privacy concerns surround this technology and its use. Finding a balance between national security and individuals’ privacy rights will be the subject of increasing discussion, especially as technology progresses.

_____

Face++ isn’t all-powerful yet:

Despite notions that Chinese police’s facial recognition capabilities can track down anyone, anywhere, that’s simply not what the technology is capable of. Megvii’s Face++ platform, which numerous police departments in China have used to help them arrest 4,000 people since 2016, has serious technological limitations. For example, even if China had facial scans of every one of its citizens uploaded to its system, it would be impossible to identify everyone passing in front of a Face++-linked camera. While the Face++ algorithm is more than 97% accurate, it can only search a limited number of faces at a time. In order to work, police would have to upload the faces they want to track to a local server at the train station or command center where they intend to look. Face++ would then use its algorithm to match those faces to the ones it encounters in the real world.  It wouldn’t be feasible to have the system search for more than 1,000 faces at a time – the data and processing power required for an operation larger than that would require a supercomputer. Plus, they can’t run the system 24/7 today. It’s the kind of thing police will have to activate proactively when a situation is underway. While it is possible that the system could be connected to a supercomputer over the cloud to amplify computing power, it would be too dangerous from a security perspective. The system has to stay offline and local.

_____

_____

Accuracy of facial recognition technology:

In modern forensic and security practice, automatic face recognition software is often used to augment important identification processes. An increasingly common application of face recognition technology is known as one-to-many identification—whereby pattern matching algorithms are used to compare a single probe image to large databases of facial images. This function can be used to protect against identity fraud when issuing national identity documents such as passports, immigration visas and driving licences—by improving detection of duplicate applications by the same individual. Further, the recent proliferation of image-based evidence from CCTV and mobile devices means that facial images are often an important source of evidence in criminal investigations. In forensic applications, face recognition software therefore enables police officers to use this image evidence to search large databases of known offenders. Similar technology is also used to enhance user experience in popular social media platforms.

_

Although the accuracy of face recognition software has improved markedly over the past two decades, it is important to note that Automatic Face Recognition systems do not yet live up to their name—they are not entirely automatic. Identification accuracy can be quite poor in cases where image capture conditions are not optimal and where images of a face are captured several years apart. To manage this uncertainty, in many applications algorithms present human users with a ‘candidate list’ displaying the highest matching images returned from a database and ranked in order of similarity to the probe image. It then falls to the human operator to review this candidate list and check for the presence or absence of matching identities. Similar combinations of computer and human processing are also used in fingerprint identification systems.
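To make the candidate-list mechanism concrete, here is a minimal sketch of a one-to-many search, under the assumption that faces have already been converted to fixed-length embedding vectors and that cosine similarity is the comparison score; the identity names, dimensions, and list length are all illustrative.

```python
# Minimal sketch: one-to-many identification returning a ranked
# candidate list for a human operator to review (illustrative only).
import numpy as np

def candidate_list(probe: np.ndarray, gallery: np.ndarray,
                   names: list, top_k: int = 10):
    """Rank gallery templates by cosine similarity to the probe."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    scores = g @ p                           # cosine similarity per identity
    order = np.argsort(scores)[::-1][:top_k] # highest similarity first
    return [(names[i], float(scores[i])) for i in order]

# Usage with random stand-in templates (real systems use learned embeddings):
rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 128))   # 1,000 enrolled 128-D templates
names = [f"id_{i}" for i in range(1000)]
probe = rng.normal(size=128)
for name, score in candidate_list(probe, gallery, names):
    print(name, round(score, 3))
```

It then falls to the human operator, as described above, to decide whether any of the returned candidates actually matches the probe.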

_

Standardised benchmarking tests of algorithm accuracy are administered by the US National Institute of Standards and Technology (NIST). NIST has measured the performance of companies’ facial recognition algorithms since 1993 and has found that the technology has improved over time. Most recently, NIST’s Face Recognition Vendor Test (FRVT) in 2014 found that the error rates continued to decline and algorithms had improved at identifying individuals from images of poor quality or captured under low light.

What about the FRVT report’s claim that some face-recognition algorithms equal or exceed humans’ recognition capabilities? Humans are very good at recognizing the faces of familiar people, but they are not so good at recognizing unfamiliar people. A 2001 study found that human subjects perform poorly at matching different images of unfamiliar faces and that an automatic PCA-based system exceeded human performance on the same images. Since many proposed face-recognition systems would complement or replace humans, the FRVT’s comparative tests of the face-recognition capabilities of humans and software – the first such testing – were important for measuring the potential effectiveness of applications. At low false accept rates (a false accept rate is the likelihood that a biometric security system will incorrectly accept an access attempt by an unauthorized individual), six out of seven automatic face-recognition algorithms were comparable to or better than human recognition. These were algorithms from Neven Vision, Viisage, Cognitec, Identix, Samsung Advanced Institute for Technology, and Tsinghua University. In addition, research supported by the Technical Support Working Group, a federal interagency group, showed that in certain controlled tests, facial recognition algorithms surpassed human accuracy in determining whether pairs of face images, taken under different illumination conditions, were pictures of the same person or of different people. The top performing vendor in the NIST test achieved an FRR of 1.1%. Thus, face recognition systems can provide better accuracy than live guards performing a manual comparison of passport photos with passport holders.

The most recent Face Recognition Vendor Test (FRVT) reports the accuracy of leading commercial “off-the-shelf” algorithms in one-to-many identification. In a particularly challenging test, recent police “mugshots” were used to probe an historic database of 1.6 million mugshot images containing one or more target images of each identity. Leading algorithms performed with a high level of accuracy, returning matching images of the target identity in the top ten ranked matches for between 80% and 97% of targets, depending on the algorithm. Although performance did vary depending on the size of the database being searched, the effect was typically modest, suggesting that accurate searches are also feasible in nation-sized databases, such as those held by passport-issuing agencies. Importantly, however, this method estimates only the accuracy of the machine component of the total system, ignoring the impact of decisions made by the human operators – who monitor image galleries containing the highest ranking matches. This is an important limitation because psychological research has shown that people make large numbers of errors when matching photos of unfamiliar faces.

_

In an early demonstration of this problem, Bruce and colleagues constructed police ‘line-up’ arrays containing ten high-resolution images of young adult male subjects. Participants had to decide if a target face was present in the photo gallery and, if so, to select the matching identity. Despite target and gallery images being taken only minutes apart, in full frontal pose and under similar lighting conditions, mean error rates on this task were 30%. Subsequent studies have replicated this basic finding across a variety of stimulus sets and task formats. Error-prone performance is also observed in people who perform face matching as part of their daily work. For example, in 2014 White and colleagues tested the matching accuracy of passport officers in their workplace. Despite extensive experience in face matching, these staff were no more accurate than a group of university students on tasks that modelled decisions encountered in their daily duties. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. The authors conclude that human performance curtails the accuracy of face recognition systems – potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.

_______

Recognition and verification rate:

The first measure of accuracy is the Recognition Rate, which is probably the simplest measure. It relies on a list of gallery images (usually one per identity) and a list of probe images of the same identities. For each probe image, the similarity to all gallery images is computed, and it is determined whether the gallery image with the highest similarity (or the lowest distance value) is of the same identity as the probe image. Finally, the Recognition Rate is the number of correctly identified probe images divided by the total number of probe images.
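A minimal sketch of this Recognition Rate computation, assuming the probe-to-gallery similarity scores are already available as a matrix (the toy scores below are illustrative):

```python
# Minimal sketch: rank-1 Recognition Rate from a probe-by-gallery
# similarity matrix (toy numbers, for illustration only).
import numpy as np

def recognition_rate(sim: np.ndarray, probe_ids, gallery_ids) -> float:
    """Fraction of probes whose most-similar gallery entry has the same identity."""
    best = np.argmax(sim, axis=1)   # top-scoring gallery index per probe
    hits = sum(probe_ids[i] == gallery_ids[b] for i, b in enumerate(best))
    return hits / len(probe_ids)

# Rows = probes, columns = gallery images (one per identity).
sim = np.array([[0.9, 0.2, 0.1],
                [0.3, 0.8, 0.4],
                [0.2, 0.6, 0.5]])
# Probe "c" scores highest against gallery "b", so 2 of 3 probes are correct.
print(f"{recognition_rate(sim, ['a', 'b', 'c'], ['a', 'b', 'c']):.3f}")  # 0.667
```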

The second measure is the Verification Rate. It relies on a list of image pairs, where pairs with the same identity and pairs with different identities are compared. Given the lists of similarities of both types, the Receiver Operating Characteristic can be computed, and from it the Verification Rate.
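And a hedged sketch of the Verification Rate: from lists of genuine (same-identity) and impostor (different-identity) pair scores, one can read off the rate of correctly accepted genuine pairs at a threshold chosen to cap the impostor accept rate. All numbers below are made up for illustration.

```python
# Minimal sketch: one ROC operating point from genuine/impostor pair scores.
import numpy as np

genuine  = np.array([0.91, 0.85, 0.78, 0.50, 0.95])  # same-identity pair scores
impostor = np.array([0.40, 0.55, 0.62, 0.30, 0.48])  # different-identity pair scores

def verification_rate(genuine, impostor, far_target=0.2):
    """Accept rate on genuine pairs at the threshold where the
    impostor (false) accept rate is at most far_target."""
    thr = np.quantile(impostor, 1 - far_target)  # cut off top 20% of impostor scores
    return float(np.mean(genuine >= thr)), float(thr)

vr, thr = verification_rate(genuine, impostor)
print(f"verification rate {vr:.2f} at threshold {thr:.2f}")  # 0.80 at ~0.56
```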

______

Error rate in FRT:

When it comes to errors, there are two key concepts to understand:

A “false negative” is when the face recognition system fails to match a person’s face to an image that is, in fact, contained in a database. In other words, the system will erroneously return zero results in response to a query.

A “false positive” is when the face recognition system does match a person’s face to an image in a database, but that match is actually incorrect. This is when a police officer submits an image of “John,” but the system erroneously tells the officer that the photo is of “James.”

Facial recognition systems can generate two types of errors—false positives (generating an incorrect match) or false negatives (not generating a match when one exists).

_

There are several important elements of this scenario that should be clearly understood. First, the system can only recognize persons whose images have been enrolled in the gallery. If a terrorist is known by name and reputation, but no picture of the terrorist is available, then the face recognition system is useless. Another point is that the system must be able to acquire face images of reasonable quality to use as probes. If lighting conditions are poor, viewing angle is extreme, or people take measures to alter their appearance from that in the gallery image, then the performance of the face recognition system will generally suffer. A third point is that the system has a sensitivity threshold that must be set appropriately. Setting the threshold too low will result in too many false positives – an innocent person is subjected to some amount of scrutiny based on resemblance to a person on the watch list. Setting the threshold too high will result in too many false negatives – a terrorist is not recognized due to differences in appearance between the gallery and probe images of the terrorist.

A recognition system’s decision thus has four possible outcomes: a watch-list person correctly matched (true positive), a watch-list person missed (false negative), an innocent person wrongly flagged (false positive), and an innocent person correctly passed (true negative).

While the system threshold can be adjusted to be either more or less sensitive, it cannot be set to simultaneously give both fewer false positives and fewer false negatives. For any given state of the technology, the system operator inevitably faces the choice of trading occurrences of one type of error against occurrences of the other. When researching a face recognition system, it is important to look closely at the “false positive” rate and the “false negative” rate, since there is almost always a trade-off. For example, if you are using face recognition to unlock your phone, it is better for the system to fail to identify you a few times (false negatives) than for it to misidentify other people as you and let those people unlock your phone (false positives). If the result of a misidentification is that an innocent person goes to jail (like a misidentification in a mugshot database), then the system should be designed to have as few false positives as possible. If you do not want to miss a terrorist at an airport, set the threshold low so that the system produces very few false negatives, albeit at the cost of a high false positive rate. Achieving both fewer false positives and fewer false negatives requires new and better technology. Thus, if face recognition appears to be a potential solution in a given application, but the available trade-off between true positives and false positives is not acceptable, then the answer is to develop improved technology.

_____

False rejection rate (FRR): The probability that a system will fail to identify an enrolee. It is also called the type 1 error rate.

FRR = NFR / NEIA

where FRR = false rejection rate, NFR = number of false rejections, NEIA = number of enrolee identification attempts

False acceptance rate (FAR): The probability that a system will incorrectly identify an individual or will fail to reject an imposter. It is also called the type 2 error rate.

FAR = NFA / NIIA

where FAR = false acceptance rate, NFA = number of false acceptances, NIIA = number of imposter identification attempts

______

Every four years, the National Institute of Standards and Technology (NIST) organizes the Face Recognition Vendor Test (FRVT). FRVT serves as an independent government evaluation of commercially available face recognition technologies. These evaluations are designed to provide the U.S. Government and law enforcement agencies with information to assist them in determining where and how facial recognition technology can best be deployed. It is always interesting to see the evolution of face recognition technology by comparing the accuracy of the world’s best face recognition algorithms four years ago to today’s results. In FRVT 2013, NIST reported the NEC algorithm as the best face recognition algorithm. For a dataset of more than one million faces, the NEC algorithm showed an FRR of 0.068. In April 2017, NIST published testing results for nine new algorithms. While it is difficult to compare results directly because NIST slightly changed its testing methodology in 2017, it is still possible to get a sense of how accuracy has improved in recent years. The best algorithms are now capable of achieving an FRR of just 0.025. Decreasing the FRR from 0.068 to 0.025 means that on every thousand queries the new algorithms find about 43 more faces (0.068 − 0.025 = 0.043) that older algorithms could not find. Such quality improvement makes face recognition technology more useful across different application areas.

_____

When setting the threshold there is always a trade-off to be considered. For example, if the threshold for a similarity score is set too high in the verification task, then a legitimate identity claim may be rejected (i.e., it might increase the false reject rate (FRR)). If the threshold for a similarity score is set too low, a false claim may be accepted (i.e., the false accept rate (FAR) increases). Thus, within a given system, these two error measures are one another’s counterparts. The FAR can only be decreased at the cost of a higher FRR, and FRR can only be decreased at the cost of a higher FAR.

Both FRR and FAR are important, but for most applications one of them is considered most important.

Two examples illustrate this:

1. When biometrics are used for logical or physical access control, the objective of the application is to disallow access to unauthorized individuals under all circumstances. It is clear that a very low FAR is needed for such an application, even if it comes at the price of a higher FRR.

2. When surveillance cameras are used to screen a crowd of people for missing children, the objective of the application is to identify any missing children who come up on the screen. When the identification of those children is automated using face recognition software, the software has to be set up with a low FRR. A higher number of matches will then be false positives, but these can be reviewed quickly by surveillance personnel.

_

The Receiver Operating Characteristic (ROC) graph represents the probability of correctly accepting a legitimate identity claim against the probability of incorrectly accepting an illegitimate identity claim for a given threshold. The ROC curve is a plot of the true acceptance rate, i.e., sensitivity (TAR = 1 − FRR), against the false acceptance rate (1 − specificity), computed as the fraction of intruder and impostor cases incorrectly classified as positive. The higher the sensitivity of an FRS, the fewer the false negatives; the higher the specificity, the fewer the false positives. Because the ROC allows for false positives from impostors, it is the metric used in open-set testing, both for verification and identification. Performance accuracy in the open-set case is a two-dimensional measurement of both the true accept rate and the false accept rate at a particular threshold. The perfect system would give 100% verification at a 0% FAR. Such a system does not exist and probably never will, except under very constrained conditions in controlled environments, which would be of little, if any, practical use. An alternative approach is to use the Detection Error Trade-off (DET) curve, which typically plots the matching error rates against each other (false reject rate vs. false accept rate). Some authors also use the equal error rate (EER) to describe the recognition performance of an FRS: the equal error rate is the rate at which the FAR is exactly equal to the FRR.

In a plot of False Acceptance Rate (FAR) and False Rejection Rate (FRR) versus the similarity threshold, the Equal Error Rate (EER) appears as the crossing point between the FAR and FRR curves; in the example plotted in the original figure, the EER is ≈ 27.4% at a threshold of ≈ 34.0.
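A minimal sketch of how such a curve, and the EER crossing point, can be computed by sweeping the decision threshold over genuine and impostor score distributions; the distributions below are synthetic stand-ins on a 0-1 similarity scale, purely for illustration.

```python
# Minimal sketch: locate the EER by sweeping the similarity threshold.
import numpy as np

rng = np.random.default_rng(1)
genuine  = rng.normal(0.7, 0.1, 10_000)   # synthetic same-identity scores
impostor = rng.normal(0.4, 0.1, 10_000)   # synthetic different-identity scores

thresholds = np.linspace(0.0, 1.0, 1001)
frr = np.array([np.mean(genuine  <  t) for t in thresholds])  # rejected genuines
far = np.array([np.mean(impostor >= t) for t in thresholds])  # accepted impostors

i = np.argmin(np.abs(far - frr))          # crossing point of the two curves
print(f"EER ~ {far[i]:.3f} at threshold ~ {thresholds[i]:.2f}")
# Raising the threshold lowers FAR but raises FRR, and vice versa.
```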

_______

_______

Failure in facial recognition:

In 2014, Steve Talley, a Denver-based financial advisor, was booked on charges of two bank robberies. The evidence against him was grainy CCTV footage and computer facial recognition software that matched his broad shoulders, skin colour, sex, age, hair, eyes, and square jaw to those of the actual criminal.

In 2015, Google’s newly-launched Photos service, which uses machine learning to automatically tag photos, made a huge miscalculation when it automatically tagged two African-Americans as “gorillas.” The user, a US-based computer programmer, reported the problem via Twitter when he found that Google Photos had created an album labelled “gorillas” that exclusively featured photos of him and his African-American friend. At the time, developers at Google immediately apologised for the gaffe and then worked to fix the app’s database. A few years earlier, Flickr’s auto-recognition tool for sorting photos had gone awry and identified a picture of a black man with the tags “ape” and “animal”. Though the racist implications were obvious, it had also identified a white woman with the same tags.

In 2016, researchers were able to fool a commercial facial recognition system into thinking that they were somebody else just by wearing a pair of patterned glasses. A special (think, funky-looking) sticker overlay with a hallucinogenic print was stuck onto the frames of the specs. The twists and curves of the pattern looked random to humans, but to a computer designed to pick out noses, mouths, eyes, and ears, they resembled the contours of someone’s face — any face the researchers chose, in fact. The facial recognition system was so confused that it even recognised one of the researchers as the Pope!

_

Imperfect technology in law enforcement:

All over the world, law enforcement agencies have begun using facial recognition software to aid in identifying criminals. For example, the Chinese police force was able to identify twenty-five wanted suspects using facial recognition equipment at the Qingdao International Beer Festival, one of whom had been on the run for 10 years. The equipment works by recording a 15-second video clip and taking multiple snapshots of each subject. That data is compared and analyzed against images from the police department’s database, and within 20 minutes the subject can be identified with 98.1% accuracy. On the other hand, in the UK, the police’s use of facial recognition technology has been found to be up to 98% inaccurate. Facial recognition technology has also been shown to work less accurately on people of colour. One study by Joy Buolamwini (MIT Media Lab) and Timnit Gebru (Microsoft Research) found that the error rate for gender recognition for women of color within three commercial facial recognition systems ranged from 23.8% to 36%, whereas for lighter-skinned men it was between 0.0% and 1.6%. Overall accuracy rates for identifying men (91.9%) were higher than for women (79.4%), and none of the systems accommodated a non-binary understanding of gender.

____

Failure of facial recognition in U.K police force (2017-18):

Facial recognition software used by the Metropolitan Police generated false positives in more than 98% of alerts, a freedom of information request showed. The UK’s biometrics regulator said it was “not yet fit for use” after the system produced only two positive matches from 104 alerts. The system used by London’s Metropolitan Police produced as many as 51 false matches for every hit, requiring police to sort through the false positives manually. Welsh police facial recognition software has a 91% fail rate, showing the dangers of early AI; data released by the UK police force confirmed claims from watchdog groups that the software is inaccurate.

Metropolitan Police:

  • The Metropolitan Police has the worst record, with less than 2% accuracy of its automated facial recognition ‘matches’ and over 98% of matches wrongly identifying innocent members of the public.
  • The force has correctly identified only 2 people using the technology – neither of whom was a wanted criminal. One of the people matched was incorrectly on the watch list; the other was on a mental health-related watch list. Meanwhile, 102 innocent members of the public were incorrectly identified by automated facial recognition.
  • The force has made no arrests using automated facial recognition.

South Wales Police:

  • South Wales Police’s record is hardly better, with only 9% accuracy of its matches whilst 91% of matches wrongly captured innocent people.
  • 0.005% of ‘matches’ led to arrests, numbering 15 in total.
  • However, at least twice as many innocent people have been significantly affected, with police staging interventions with 31 innocent members of the public incorrectly identified by the system who were then asked to prove their identity and thus their innocence.
  • The force has stored biometric photos of all 2,451 innocent people wrongly identified by the system for 12 months in a policy that is likely to be unlawful.
  • Despite this, South Wales Police has used automated facial recognition at 18 public places in the past 11 months – including at a peaceful demonstration outside an arms fair.

In a report submitted to the House of Lords by the watchdog group Big Brother Watch, Silkie Carlo, the group’s director, wrote that there is “no law, no oversight, and no policy regulating the police’s use of automated facial recognition.” The UK government, she said, had not even set a target fail rate, allowing the system to continue flagging thousands of people erroneously at wildly high rates. Carlo’s report also noted that facial recognition algorithms are known to be inaccurate, citing statistics from the US Government Accountability Office showing that “facial recognition algorithms used by the FBI are inaccurate almost 15% of the time and are more likely to misidentify female and black people.”

______

Amazon’s facial recognition matched 28 members of Congress to criminal mugshots:

The American Civil Liberties Union tested Amazon’s facial recognition system – and the results were not good. To test the system’s accuracy, the ACLU scanned the faces of all 535 members of Congress against 25,000 public mugshots, using Amazon’s open Rekognition API. None of the members of Congress were in the mugshot lineup, but Amazon’s system generated 28 false matches, a finding that the ACLU says raises serious concerns about Rekognition’s use by police. “An identification – whether accurate or not – could cost people their freedom or even their lives,” the group said in an accompanying statement. “Congress must take these threats seriously, hit the brakes, and enact a moratorium on law enforcement use of face recognition.”

An Amazon spokesperson attributed the results to poor calibration. The ACLU’s tests were performed using Rekognition’s default confidence threshold of 80 percent – but Amazon says it recommends at least a 95 percent threshold for law enforcement applications, where a false ID might have more significant consequences. “While 80% confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases,” the representative said, “it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty.” Still, Rekognition does not enforce that recommendation during the setup process, and there’s nothing to prevent law enforcement agencies from using the default setting.

Amazon’s Rekognition came to prominence when an ACLU report showed the system being used by a number of law enforcement agencies, including a real-time recognition pilot by Orlando police. Sold as part of Amazon’s Web Services cloud offering, the software was extremely inexpensive, often costing less than $12 a month for an entire department.

The test also showed indications of racial bias, a long-standing problem for many facial recognition systems. Eleven of the 28 false matches misidentified people of color (roughly 39 percent), including civil-rights leader Rep. John Lewis and five other members of the Congressional Black Caucus. Only twenty percent of current members of Congress are people of color, which indicates that false matches affected members of color at a significantly higher rate. That finding echoes disparities found by NIST’s Face Recognition Vendor Test, which has shown consistently higher error rates for facial recognition tests on women and African-Americans.
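The skew in the ACLU result can be sanity-checked with an exact binomial tail: if the 28 false matches had tracked Congress’s 20 percent share of members of color, 11 or more false matches on members of color would be a very unlikely outcome. A quick sketch (the 20 percent baseline is the figure quoted above):

```python
# Back-of-the-envelope check on the racial skew in the ACLU test.
from math import comb

n, k, p = 28, 11, 0.20   # 28 false matches, 11 on members of color, 20% baseline

# P(X >= 11) for X ~ Binomial(28, 0.2): the probability of a skew at least
# this large if false matches simply tracked the 20% share of Congress.
tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(f"11/28 = {k/n:.0%}; P(X >= 11 | p = 0.2) = {tail:.4f}")  # about 1%
```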

Running faces against a database with no matches might seem like a recipe for failure, but it’s similar to the conditions that existing facial recognition systems face every day.

_______

Why facial recognition failed:

The images were poor and the databases were limited, explains an expert. The Boston police commissioner says that facial recognition software did not help identify the Boston bombing suspects, despite the fact that their images were included in public records databases such as the DMV.

There are four potential hurdles that all types of facial recognition software face when we try to apply them in real time on a mass scale.

The first is image quality. The images of the bombing suspects posted on the Internet were usually taken from far away, which means that when you zoom in on the faces, they are usually out of focus. So you don’t have a very good image to start from; the images available were simply not of good quality.

The second hurdle is the availability of fine facial data on the identified faces already in your existing databases. There are a number of different databases that can be used. Some are public databases, such as the DMV’s, and some are private-sector databases, such as online social media databases. It is increasingly the case that the latter are better than the former for the purpose of facial recognition. The reason is that the more images you have of someone’s face, the more accurate the mathematical model of that face you can produce. If you’re the DMV and you only have one very good frontal shot of a person, that may not be as good as having 10 photos from slightly different angles of that person’s face. Together, those 10 photos allow you to create a more accurate model of that person. Social media databases are therefore better suited for facial recognition, which raises the issue of whether, and under what conditions, the Facebooks of the world would share those images with investigators.
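One common way this “more images, better model” intuition is realized in practice is to average per-photo face embeddings into a single enrolled template. The following sketch assumes faces are represented as embedding vectors and uses random stand-ins to show that averaging several noisy observations yields a template closer to the true face.

```python
# Minimal sketch: averaging several per-photo embeddings into one template
# (random stand-ins; real systems use learned face embeddings).
import numpy as np

def build_template(embeddings: np.ndarray) -> np.ndarray:
    """Average several per-photo face embeddings into one unit-length template."""
    mean = embeddings.mean(axis=0)
    return mean / np.linalg.norm(mean)

rng = np.random.default_rng(2)
true_face = rng.normal(size=128)
unit_true = true_face / np.linalg.norm(true_face)

# Ten noisy observations of the same face (e.g., different angles/lighting):
photos = true_face + rng.normal(scale=0.5, size=(10, 128))

single = photos[0] / np.linalg.norm(photos[0])   # template from one photo
multi = build_template(photos)                   # template from ten photos

# Averaging shrinks the noise, so the 10-photo template scores higher:
print("1-photo similarity :", round(float(single @ unit_true), 3))
print("10-photo similarity:", round(float(multi @ unit_true), 3))
```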

The third issue is computational power. You can find results in real time, with a delay of just a few seconds when you are working with databases of only a few hundred thousand images. When you are working with a database of say 300 million images, one picture for every U.S. adult, more or less, even using the kind of commercially available cloud computing clusters, it’s just not enough to give you results in real time.

The last problem is that, even if you have enough computational power and can do not just hundreds of thousands of face matches in a few seconds but millions or hundreds of millions, you still have the problem of false positives. When you start working with databases that contain millions of faces, you realize that many people look similar to each other. We human beings are very good at distinguishing between people who look alike, but computers are not that good right now.

______

Large number of people stumps accuracy statistics:

Let’s say there is a stadium full of innocent people and one violent criminal, and we want to identify the violent criminal. There are two possible kinds of errors we could make.

False Negative:

This is the kind of error everyone is worried about. It is when the system looks at a photo of the violent criminal and then looks right at the violent criminal, and fails to identify them. It’s an incorrect, or false, non-match; it should have matched. That’s really bad.

False Positive:

This is when someone gets stopped because they ‘might’ be the criminal we are looking for, and we stop them to take a closer look, but it turns out they are someone else. This is really annoying, but not quite as bad.

In one case, we let a violent criminal go because of the error. In the second case, we annoy innocent people in our search. Both are bad, but for law enforcement applications, missing the violent criminal is considered worse. And these errors are inversely related: as the odds of one kind of error are decreased, the chances of the other kind increase.

The other important factor is the total number of people being screened at the event.

Let’s look at biometric matching with large groups. Say the system has a near 0% chance of a false negative and a 1% chance of a false positive.

So if 100 people go to the game, we would be (nearly) 100% confident that we would identify a violent criminal, should one attend the game, but we would probably stop one person who ended up being a false match. This person would be interviewed as a suspect and let go. That is annoying for that one person, but maybe it’s acceptable if it lets us identify and stop a violent criminal.

But if 10,000 people go to the game, we are still nearly 100% confident that we would find the violent criminal, if one is at the game, but now we have interviewed 100 false matches and let them go. This is starting to be a problem; we are annoying 100 innocent people and having to spend a lot of time interviewing potential suspects. You can see how this turns into a problem at a game that 100,000 people attend – roughly the reported attendance of a big game. With a 99% accurate system, you are going to annoy almost 1,000 people.

It’s easy to see that even if you deploy a very accurate system, and if you are doing mass surveillance on large numbers of people, you are going to have a lot of false positives and you are going to be stopping and bothering a lot of innocent people.
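The arithmetic behind this scaling fits in a few lines; the crowd sizes and the 1% false-positive rate are the illustrative figures used above:

```python
# Minimal sketch: expected false positives scale linearly with crowd size.

def expected_false_positives(crowd: int, fpr: float = 0.01) -> float:
    """Expected innocent people flagged = crowd size x false-positive rate."""
    return crowd * fpr

for crowd in (100, 10_000, 100_000):
    print(f"{crowd:>7,} attendees -> ~{expected_false_positives(crowd):,.0f} false matches")
```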

Is it worth it?

If you had to bother 1,000 innocent people to find a terrorist in a crowd at a football game, some people would think it was worth it. But what if you are annoying 1,000 innocent people to identify one person who didn’t pay a parking ticket – is it still worth it?

On the other hand, face recognition gets worse as the number of people in the database increases. This is because so many people in the world look alike. As the likelihood of similar faces increases, matching accuracy decreases.

______

MegaFace Challenge:

In the last few years, several groups have announced that their facial recognition systems have achieved near-perfect accuracy rates, performing better than humans at picking the same face out of the crowd.  But those tests were performed on a dataset with only 13,000 images — fewer people than attend an average professional U.S. soccer game. What happens to their performance as those crowds grow to the size of a major U.S. city?

University of Washington researchers answered that question with the MegaFace Challenge, the world’s first competition aimed at evaluating and improving the performance of face recognition algorithms at the million-person scale. All of the algorithms suffered in accuracy when confronted with more distractors. “We need to test facial recognition on a planetary scale to enable practical applications — testing on a larger scale lets you discover the flaws and successes of recognition algorithms,” said Ira Kemelmacher-Shlizerman, a UW assistant professor of computer science and the project’s principal investigator. “We can’t just test it on a very small scale and say it works perfectly.”

The UW team first developed a dataset with one million Flickr images from around the world that are publicly available under a Creative Commons license, representing 690,572 unique individuals. Then they challenged facial recognition teams to download the database and see how their algorithms performed when they had to distinguish between a million possible matches. Google’s FaceNet showed the strongest performance on one test, dropping from near-perfect accuracy when confronted with a smaller number of images to 75 percent on the million person test. A team from Russia’s N-TechLab came out on top on another test set, dropping to 73 percent.  By contrast, the accuracy rates of other algorithms that had performed well at a small scale dropped by much larger percentages to as low as 33 percent accuracy when confronted with the harder task.

The MegaFace challenge highlights problems in facial recognition that have yet to be fully solved – such as identifying the same person at different ages and recognizing someone in different poses.

_____

Does Facial Recognition Technology Work?

There are at least two senses in which one can discuss whether face recognition technology “works” – a system-technical sense and an application-behavioral sense. The system-technical sense is concerned with the ROC curve and how many true and false positives the system produces in some test environment over some time period. The application-behavioral sense is concerned with how behavior is changed by the fact that the system is put into application. For example, say that a face recognition system is installed in an airport and it is known to have only a 50% chance of correctly recognizing someone on the watch list when they appear in the airport. One might be tempted to think of this technical performance as a failure. However, a 50% chance of being identified might be sufficient to cause a terrorist to avoid the airport. If so, then one might be tempted to think of the system as a success in the application sense.

One often-repeated claim related to the effectiveness of video surveillance systems with face recognition capability is that the introduction of such a system in London caused a major decrease in the crime rate. News articles reported a 20% to 40% drop in crime, with the variation in the reported figures apparently arising from reporting statistics for different categories of crime, and/or adjusting for the increase in crime rate elsewhere in London over the same time period. Those who argue against the use of face recognition technology raise various objections to these reports. One objection is that there is some inherent variability in reported crime statistics, so perhaps the reported decrease was only a random occurrence. Another objection is that the face recognition system was apparently not directly responsible for any arrests of criminals during this time. A related objection is that if the crime rate was reduced, perhaps the crimes were simply displaced to neighbouring areas that were not using face recognition technology. It seems plausible that this could be at least partially true. However, citizens in the area where the technology is deployed may still feel that it “works” for them, and this raises the question of what the effect would be if the technology were more widely deployed.

One practical evaluation of face recognition technology was carried out at the Palm Beach International Airport. A face recognition system evaluated there captured about 10,000 face images per day, of about 5,000 persons, during four weeks of testing. Of 958 images captured of volunteer subjects in the gallery, 455 were successfully matched, for a recognition rate of approximately forty-seven percent. The report obtained from the Palm Beach County Department of Airports states that “the false alarm rate was approximately 0.4% of total face captures, or about 2-3 false alarms per hour”.

A news article related to ACLU publicity about the test stated this same information a little differently – “more than 1000 false alarms over the four weeks of testing”. Problems were noted due to subjects wearing eyeglasses, lighting glare, and getting images that were not direct frontal views of the face. An ACLU representative was quoted as saying that the results of the Palm Beach airport evaluation showed that “face recognition is a disaster”. However, some observers may not agree with this assessment. A fifty percent chance of detection on a single face image may be sufficient to ward off some terrorists or criminals.

______

Using facial recognition for profiling:

Faception’s central claim is that its facial analysis technology can be used to determine behavioural characteristics. The firm proposes to use the software for spotting various categories of criminal. For example, Faception claims to be able to categorise terrorist suspects – but experts are sceptical as to whether this is just racial profiling. Beyond the detail of Faception’s approach, there are many problems with using facial recognition for profiling. Detected differences between photographs may have nothing directly to do with criminality. For example, somebody who is arrested on a drugs charge may be scruffy. That could be because being high all the time means they’re less likely to take care of their health. However, poor health does not make you a criminal – though it may be associated with criminality. Therefore, to have facial recognition systems flag people as likely criminals based only on a haggard appearance would rightly trouble most people.

______

Gap between technical and socio-political analysis of FRT:

There is a divide between a purely technical and a purely socio-political analysis of FRT. On the one side, there is a huge technical literature on algorithm development, grand challenges, vendor tests, etc., that talks in detail about the technical capabilities and features of FRT but does not really connect well with the challenges of real world installations, actual user requirements, or the background considerations that are relevant to situations in which these systems are embedded (social expectations, conventions, goals, etc.). On the other side, there is what one might describe as the “soft” social science literature of policy makers, media scholars, ethicists, privacy advocates, etc., which talks quite generally about biometrics and FRT, outlining the potential socio-political dangers of the technology. This literature often fails to get into relevant technical details and often takes for granted that the goals of biometrics and FRT are both achievable and largely Orwellian. Bridging these two literatures—indeed, points of view—is very important as FRT increasingly moves from the research laboratory into the world of socio-political concerns and practices.

______

______

Racial and gender bias in facial recognition technology:

As artificial intelligence becomes prevalent in society, computers are increasingly making autonomous decisions that affect us all. Not surprisingly, it turns out that computer software can be just as biased in decision-making as its human programmers. MIT researcher Joy Buolamwini says that artificial intelligence can “reinforce bias and exclusion, even when it’s used in the most well-intended ways”. She cited her own research on facial analysis technology from IBM, Microsoft and Face++, noting that “on the simple task of guessing the gender of a face, all companies’ technology performed better on male faces than on female faces and especially struggled on the faces of dark-skinned African women”. Recently Microsoft responded to these concerns, announcing that it had reduced the error rates of its facial recognition technology by up to 20 times “for men and women with darker skin” and by nine times for all women. It was a critical fix to make, because the use of facial analysis technology is rapidly increasing globally.

_

Antony Haynes, associate dean for strategic initiatives and information systems at Albany Law School, pointed out that all artificial intelligence systems have the potential for bias.  “One assumption we make as human beings is that putting something in software makes it somehow objective or neutral or unbiased,” he said. “That couldn’t be further from the truth because a human being has to write the software, provide the training data, and tell the system when it succeeds or fails.”  The trouble, Haynes said, is the data sets being used to train these systems, including free image libraries, do not represent the overall population.  “Think about the companies that create software,” he said. “Many are based in Silicon Valley, [and] most are controlled or owned by young white men and young Asian men. So the data sets look like that, mostly men. Because the photographs used to train their systems are mostly of white men, the software is going to do better at recognizing men than women, and better at recognizing white people than black or Asian people.”

_

Is it racialized code?

Experts such as Joy Buolamwini, a researcher at the MIT Media Lab, think that facial recognition software has problems recognizing black faces because its algorithms are usually written by white engineers who dominate the technology sector. These engineers build on pre-existing code libraries, typically written by other white engineers.  As the coder constructs the algorithms, they focus on facial features that may be more visible in one race, but not another. These considerations can stem from previous research on facial recognition techniques and practices, which may have its own biases, or the engineer’s own experiences and understanding. The code that results is geared to focus on white faces, and mostly tested on white subjects.  And even though the software is built to get smarter and more accurate with machine learning techniques, the training data sets it uses are often composed of white faces. The code “learns” by looking at more white people – which doesn’t help it improve with a diverse array of races.

Technology spaces aren’t exclusively white, however. Asians and south Asians tend to be well represented. But this may not widen the pool of diversity enough to fix the problem. Research in the field certainly suggests that the status quo simply isn’t working for all people of color – especially for groups that remain underrepresented in technology. According to a 2011 study by the National Institute of Standards and Technology, facial recognition software is actually more accurate on Asian faces when it’s created by firms in Asian countries, suggesting that who makes the software strongly affects how it works.

In a TEDx lecture, Buolamwini, who is black, recalled several moments throughout her career when facial recognition software didn’t notice her. “The demo worked on everybody until it got to me, and you can probably guess it. It couldn’t detect my face,” she said.

_

Even if facial recognition software is used correctly, however, the technology has significant underlying flaws. The firms creating the software are not held to specific requirements for racial bias, and in many cases, they don’t even test for them.  CyberExtruder, a facial recognition technology company that markets itself to law enforcement, said that they had not performed testing or research on bias in their software. CyberExtruder did note that certain skin colors are simply harder for the software to handle given current limitations of the technology. “Just as individuals with very dark skin are hard to identify with high significance via facial recognition, individuals with very pale skin are the same,” said Blake Senftner, a senior software engineer at CyberExtruder.

__

Commercial AI systems for facial recognition fail women and darker-skinned people, a study finds:

A Massachusetts Institute of Technology study has found that commercial facial-recognition software can come with in-built racial and gender biases, failing to recognise the gender of the darkest-skinned women in approximately half of cases.  Facial recognition technology is improving by leaps and bounds. Some commercial software can now tell the gender of a person in a photograph. When the person in the photo is a white man, the software is right 99 percent of the time. But the darker the skin, the more errors arise — up to nearly 35 percent for images of darker skinned women, according to a new study that breaks fresh ground by measuring how the technology works on people of different races and gender. These disparate results, calculated by Joy Buolamwini, a researcher at the M.I.T. Media Lab, show how some of the biases in the real world can seep into artificial intelligence, the computer systems that inform facial recognition. In modern artificial intelligence, data rules. A.I. software is only as smart as the data used to train it. If there are many more white men than black women in the system, it will be worse at identifying the black women. One widely used facial-recognition data set was estimated to be more than 75 percent male and more than 80 percent white, according to another research study.

The new study also raises broader questions of fairness and accountability in artificial intelligence at a time when investment in and adoption of the technology is racing ahead. Researchers at the Georgetown Law School estimated that 117 million American adults are in face recognition networks used by law enforcement — and that African Americans were most likely to be singled out, because they were disproportionately represented in mug-shot databases.

Ms. Buolamwini, a young African-American computer scientist, experienced the bias of facial recognition first-hand. When she was an undergraduate at the Georgia Institute of Technology, programs would work well on her white friends, she said, but not recognize her face at all. She figured it was a flaw that would surely be fixed before long. But a few years later, after joining the M.I.T. Media Lab, she ran into the missing-face problem again. Only when she put on a white mask did the software detect a face. By then, face recognition software was increasingly moving out of the lab and into the mainstream. So she turned her attention to fighting the bias built into digital technology. Now 28 and a doctoral student, after studying as a Rhodes scholar and a Fulbright fellow, she is an advocate in the new field of “algorithmic accountability,” which seeks to make automated decisions more transparent, explainable and fair. Her short TED Talk on coded bias has been viewed more than 940,000 times, and she founded the Algorithmic Justice League, a project to raise awareness of the issue.

In her newly published paper, Ms. Buolamwini studied the performance of three leading face recognition systems — by Microsoft, IBM and Megvii of China — by classifying how well they could guess the gender of people with different skin tones. These companies were selected because they offered gender classification features in their facial analysis software — and their code was publicly available for testing. She found them all wanting.

To test the commercial systems, Ms. Buolamwini built a data set of 1,270 faces, using faces of lawmakers from countries with a high percentage of women in office. The sources included three African nations with predominantly dark-skinned populations, and three Nordic countries with mainly light-skinned residents. The African and Nordic faces were scored according to a six-point labeling system used by dermatologists to classify skin types. The medical classifications were determined to be more objective and precise than race.

Then, each company’s software was tested on the curated data, crafted for gender balance and a range of skin tones. The results varied somewhat. Microsoft’s error rate for darker-skinned women was 21 percent, while IBM’s and Megvii’s rates were nearly 35 percent. They all had error rates below 1 percent for light-skinned males. Ms. Buolamwini shared the research results with each of the companies. IBM said in a statement to her that the company had steadily improved its facial analysis software and was “deeply committed” to “unbiased” and “transparent” services. The company said it would roll out an improved service with a nearly 10-fold increase in accuracy on darker-skinned women. Microsoft said that it had “already taken steps to improve the accuracy of our facial recognition technology” and that it was investing in research “to recognize, understand and remove bias.”
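The mechanics of an audit like this are easy to illustrate. Below is a minimal Python sketch, using invented predictions rather than the study’s actual data, that computes gender-classification error rates separately for each skin-tone/gender subgroup; disparities like the ones reported above only become visible when accuracy is disaggregated this way.

```python
# Minimal sketch of a disaggregated accuracy audit. All records below are
# invented for illustration; a real audit would use labeled benchmark faces.
from collections import defaultdict

# Each record: (true_gender, predicted_gender, subgroup).
results = [
    ("F", "M", "darker_female"),
    ("F", "F", "darker_female"),
    ("F", "M", "darker_female"),
    ("M", "M", "darker_male"),
    ("F", "F", "lighter_female"),
    ("M", "M", "lighter_male"),
]

errors, totals = defaultdict(int), defaultdict(int)
for true_label, predicted, group in results:
    totals[group] += 1
    if true_label != predicted:
        errors[group] += 1

# A single aggregate accuracy figure hides what the per-group rates reveal.
for group in sorted(totals):
    print(f"{group}: {errors[group] / totals[group]:.0%} error "
          f"({errors[group]}/{totals[group]})")
```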

_____

An Other-Race Effect for Face Recognition Algorithms:

Psychological research indicates that humans recognize faces of their own race more accurately than faces of other races. This “other-race effect” also occurs for algorithms tested in a recent international competition for state-of-the-art face recognition algorithms. The authors report results for a Western algorithm made by fusing eight algorithms from Western countries and an East Asian algorithm made by fusing five algorithms from East Asian countries. At the low false accept rates required for most security applications, the Western algorithm recognized Caucasian faces more accurately than East Asian faces, and the East Asian algorithm recognized East Asian faces more accurately than Caucasian faces. Next, using a test that spanned all false alarm rates, the authors compared the algorithms with humans of Caucasian and East Asian descent matching face identity in an identical stimulus set. In this case, both algorithms performed better on the Caucasian faces—the “majority” race in the database. The Caucasian face advantage, however, was far larger for the Western algorithm than for the East Asian algorithm. Humans showed the standard other-race effect for these faces, but showed more stable performance than the algorithms over changes in the race of the test faces. State-of-the-art face recognition algorithms, like humans, struggle with “other-race face” recognition.
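The “fused” algorithms in such competitions combine similarity scores from several individual matchers, and performance is then read off at a fixed false accept rate. The sketch below illustrates that pipeline under stated assumptions: the score distributions are invented, and the z-score-average fusion rule is a common baseline, not necessarily the rule used in the competition.

```python
# Sketch of score-level fusion of several face matchers, evaluated at a low
# false accept rate (FAR). Score distributions are invented; the z-score
# average is a common baseline fusion rule, assumed here for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Columns: similarity scores from 3 hypothetical matchers per comparison.
genuine = rng.normal(0.7, 0.1, size=(1000, 3))   # same-person pairs
impostor = rng.normal(0.4, 0.1, size=(1000, 3))  # different-person pairs

mean, std = impostor.mean(axis=0), impostor.std(axis=0)

def fuse(scores):
    """Normalize each matcher's scores, then average them."""
    return ((scores - mean) / std).mean(axis=1)

fused_genuine, fused_impostor = fuse(genuine), fuse(impostor)

# Threshold chosen so only 0.1% of impostor comparisons are accepted,
# the kind of low-FAR operating point security applications require.
threshold = np.quantile(fused_impostor, 1 - 0.001)
print(f"verification rate at FAR=0.1%: "
      f"{(fused_genuine >= threshold).mean():.1%}")
```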

_____

_____

Risks and dangers of facial recognition:

_

While facial recognition software isn’t new, the ease of use and wider access is, creating new opportunities — and potential issues. When used ethically and accurately, for instance, this technology can increase public safety. However, there are also serious risks, including the potential for data misuse, improved techniques for scammers and a loss of privacy. Technology has power for good and evil. This tool could assist disaster relief, refugee reunification programs, remembering names and finding that mysterious doppelganger. But what are the risks? When is aggregated open source information dangerous to the general public?

  1. Predatory Advertising

In addition to alerting stores to potential shoplifters, there is the possibility of manipulating customers. In theory, stores could monitor the emotional state of their customers and send them tailored ads to persuade them to part with more money.

  2. Abuse by Governments and Law Enforcement

Facial recognition offers clear benefits to law enforcement. But imagine getting pulled over for speeding, and the officer politely saying: “Please give me your license and registration, and pose for our facial recognition database.”

  3. Crackdown on Protesters

Dictators could use this software to suppress human rights. Facial recognition apps make it easier for governments to scan crowds and melt away anonymity, matching protestors to online profiles.

  4. Enhanced Stalking

Last year, an app called FindFace was launched in Russia. You take a picture of someone using your smartphone camera, and the app, using facial recognition technology, searches for that person on the Russian social network Vkontakte. It currently boasts 70% accuracy at matching photos to social media profiles. The app is a stalker’s dream. Simply by taking a surreptitious photo while you’re out and about on the streets or sitting on the train, a stranger can find your social media profiles, your friends and your whole life. In a world where facial recognition technology becomes widespread, there is no such thing as anonymity in public anymore. Your face is the key to your life. And everything that you have ever shared, or that anyone else has shared about you, is now written all over it.

  5. AI will need Checks and Balances in Place

As the world becomes more connected and AI is becoming part of our daily life, we need to make sure that there are checks and balances in place to make sure that this technology is used for good.

  6. Fear and Potential Mistakes may outweigh the Safety Benefits

As in the movie “Minority Report,” this technology can create a safer society, but it can also breed fear that it won’t work correctly and that the wrong people will get caught up in it. And because people tend to assume that technology doesn’t make mistakes the way humans do, it will be all the harder to contest the outcome when a system does err.

  7. Crime is likely to Worsen, not Improve

While it’s great to catch criminals, tech that can identify people from crowds is a privacy invasion in every other context. When you can track people in public spaces, unfortunately, crime opportunities increase. Break-ins when a person is away, pretending to know a person, stalking, and identity theft all get easier. The stronger our tech, the more damage an individual with access can cause.

  8. All Anonymity will disappear

Facial recognition leveraged for law enforcement and counter-terrorism is something most people would support. But the same technology can be used for unapproved surveillance and marketing. The technology can identify you in public and track your location and activities. If you’ve just visited a protest rally, is that something your employer should have access to? Where does privacy begin and end?

  9. Harassing innocents

Although the FBI purports its system can find the true candidate in the top 50 profiles 85% of the time, that’s only the case when the true candidate exists in the gallery. If the candidate is not in the gallery, it is quite possible the system will still produce one or more potential matches, creating false positive results. These people—who aren’t the candidate—could then become suspects for crimes they didn’t commit. An inaccurate system like this shifts the traditional burden of proof away from the government and forces people to try to prove their innocence. (The sketch following this list simulates this open-set search problem.)

  10. Data Theft

In this day and age of technology, who has the key to the chest containing your data is the real question. At the same time when facial recognition technology can help reduce the crime rate and act as a counter-terrorism tool, what we fail to think of is the large pool of data being collected in each scan. If you are being scanned, then the information is being stored somewhere. This information in the wrong hands can prove to be disastrous.

  11. Concerns about Ethics and Discrimination

ACLU pointed to the inherent bias in facial recognition technology. There is evidence that the use of facial recognition technology disproportionately affects people of color. “Black and brown people already are over-policed and face disparate treatment in every stage of the criminal justice system,” the ACLU said in its statement. “Facial recognition likely will be disproportionately trained on these communities, further exacerbating bias under the cover of technology.”

As the ability to read faces increases, we also see a lot of challenges in terms of its applications, and a thin line between what is ethical and what is not. Researchers at Stanford University have demonstrated that, when shown pictures of one gay man and one straight man, an algorithm could attribute their sexuality correctly 81% of the time. Use of face recognition in recruitment could allow employers to filter job applications and act on their prejudices to deny a person a job.

  12. Unwanted tagging on Facebook

In 2014, Facebook stated that in a standardized two-option facial recognition test, its online system scored 97.25% accuracy, compared to the human benchmark of 97.5%.  Facebook’s approach is based on encouraging users to tag others in photographs. This is potentially wonderful in Facebook’s world – where everyone wants to share everything all the time. Of course, there are many reasons why someone might not wish to be flagged up in a Facebook photograph. Mistresses, drug dealers, ex-girlfriends, and bad-boy drinking partners all make for awkward photo-buddies. Accordingly, this feature has been the subject of recent lawsuits – and is seemingly not used on EU users.
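Returning to item 9 above: the “harassing innocents” risk follows directly from how ranked gallery search behaves in the open-set setting. A system asked for the top candidates will return them whether or not the person is actually enrolled, so when the true candidate is absent, every returned profile is a false positive. A toy simulation with random, invented faceprints makes this concrete.

```python
# Toy open-set identification: a ranked gallery search returns its top-k
# entries even when the probe is not enrolled, so each "candidate" here is
# a false positive. All faceprints are random, invented vectors.
import numpy as np

rng = np.random.default_rng(1)

gallery = rng.normal(size=(10_000, 128))            # enrolled faceprints
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

probe = rng.normal(size=128)                        # person NOT enrolled
probe /= np.linalg.norm(probe)

similarity = gallery @ probe                        # cosine similarities
top_50 = np.argsort(similarity)[::-1][:50]          # "top 50 profiles"

print("best score:", round(float(similarity[top_50[0]]), 3))
print("candidates returned:", len(top_50), "(all of them innocent people)")
```

Thresholding on similarity score, rather than always returning a fixed-size candidate list, reduces this failure mode but does not eliminate it.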

_______

_______

Privacy concerns of facial recognition:

Privacy is one of the most prominent concerns raised by critics of facial recognition. This is not surprising because, at root, facial recognition disrupts the flow of information by connecting facial images with identity, in turn connecting this with whatever other information is held in a system’s database. Although this need not in itself be morally problematic, it is important to ascertain, for any given installation, whether these new connections constitute morally unacceptable disruptions of entrenched flows (often regarded as violations of privacy) or whether they can be justified by the needs of the surrounding context.

_

This technology has major implications for people’s privacy rights, and its use can worsen existing biases and discrimination in policing practices. The use of FRT impinges on privacy rights by creating an algorithm of unique personal characteristics. This in turn reduces people’s characteristics to data and enables their monitoring and surveillance. These data and images will also be stored for a certain period of time, opening up the possibility of hacking or defrauding.  In addition, FRT can expose people to potential discrimination in two ways. First, state agencies may misuse the technologies in relation to certain demographic groups, whether intentionally or otherwise. And secondly, research indicates that ethnic minorities, people of colour and women are misidentified by FRT at higher rates than the rest of the population. This inaccuracy may lead to members of certain groups being subjected to heavy-handed policing or security measures and their data being retained inappropriately.

_

The unique features of your face can allow you to unlock your new iPhone, access your bank account or even “smile to pay” for some goods and services.  The same technology, using algorithms generated by a facial scan, can allow law enforcement to find a wanted person in a crowd or match the image of someone in police custody to a database of known offenders. Facial recognition came into play recently when a suspect arrested for a shooting at a newsroom in Annapolis, Maryland, refused to cooperate with police and could not immediately be identified using fingerprints. Facial recognition is playing an increasing role in law enforcement, border security and other purposes in the US and around the world. While most observers acknowledge the merits of some uses of this biometric identification, the technology evokes fears of a “Big Brother” surveillance state. Heightening those concerns are studies showing facial recognition may not always be accurate, especially for people of color. A 2016 Georgetown University study found that one in two American adults, or 117 million people, are in facial recognition databases with few rules on how these systems may be accessed. A growing fear for civil liberties activists is that law enforcement will deploy facial recognition in “real time” through drones, body cameras and dash cams.

_

There’s nothing inherently right or wrong with facial recognition technology but its use raises potential privacy issues.  It’s a tool that can be used for great good. But if we don’t stop and carefully consider the way we use this technology, it could also be abused in ways that could threaten basic aspects of our privacy and civil liberties. Unlike fingerprints, face prints create privacy concerns. Unlike other biometric identifiers such as iris scans and fingerprints, facial recognition is designed to operate at a distance, without the knowledge or consent of the person being identified. Individuals cannot reasonably prevent themselves from being identified by cameras that could be anywhere – on a lamp post, attached to an unmanned aerial vehicle or, now, integrated into the eyewear of a stranger.  Once someone has your face print, they can get your name, they can find your social networking account, and they can find and track you in the street, in the stores that you visit, the government buildings you enter and the photos your friends post online. Using facial recognition technology beyond checking attendance or to maintain security could be a slippery slope into privacy issues if its use by employers or their vendors veers into sourcing potential job candidates.

_

Back in 2011, researchers from Carnegie Mellon University showed that facial recognition could increase privacy risks. In the first test that they ran, they managed to identify people on a dating website where members didn’t use their real names. In the second experiment, they discovered the identities of students walking on campus, by linking images of their faces to those of their Facebook profiles. Photographs of students’ faces also eventually led researchers to guess their personal interests and, in some cases, their Social Security numbers (a form of identification in the United States).

_

Privacy advocates agree that efforts to improve the travel experience probably will be welcomed by anyone who’s ever trudged through an airport with their baggage, but they say requiring people to submit to facial scanning goes too far. The government, they say, needs to do a better job of explaining why the scans are needed, how it intends to use the information and how long the information will be kept, among other things. Adam Schwartz, senior staff attorney with the Electronic Frontier Foundation, said a system that uses biometrics — particularly facial scans — presents unique challenges to a person’s privacy and security because those characteristics can’t be changed once they are acquired.  “You can’t change your face the way you can change a license plate,” he said.

_

Wearable devices like Google Glass are seen as an invasion of privacy by some and there is a growing suspicion of what Google, Facebook, Twitter and others are collecting about their users. A recent study from First Insight found that more than 75 per cent of US consumers would not shop at a store that uses facial recognition technology for marketing purposes, and a similar study in the UK by RichRelevance found that most shoppers found in-store facial recognition to be “creepy”. Security experts say that this highlights the need for greater transparency over what data is collected, and how it is stored and secured. Additionally, initiatives like the Privacy Visor glasses suggest that end-users could rebel against internet giants and their use of biometrics if this is not properly explained. Others are also starting to look at it from a legal point of view. Jennifer Lynch, an attorney for privacy rights group Electronic Frontier Foundation, recently told Bloomberg: “Face recognition data can be collected without a person’s knowledge. It’s very rare for a fingerprint to be collected without your knowledge.” Whatever side of the fence you sit on, one thing is clear – the technology is here to stay. While there are many benefits, there are clear issues that need resolving. The debate over facial recognition software is set to continue.

_

Facial recognition software when leveraged for law enforcement is an idea supported by the masses, but it can also be used for illegal and forbidden surveillance. The technology can further be used to track your location and even pinpoint you in public using street cameras. This puts your privacy in danger and it can be used for all the wrong reasons. In the hands of the wrong person, this technology can act as a blackmailing tool as scammers will have access to your location at all times. A new smartphone app, FindFace, allows people to take a person’s photo and use facial recognition to find their social media accounts. Ostensibly a convenient way to connect with friends and co-workers, the app invites misuse. People can use it to expose identities and harass others. These new capabilities are also raising concern about other malicious uses of publicly available images.

_

From a privacy standpoint, the collection of details related to someone’s face is classified as “biometric” information, which requires specific forms of notice and consent in line with the laws of certain states. Additionally, the use of an individual’s picture in advertising (after being identified using the technology) may give rise to claims under publicity rights laws, which cross the line between privacy rights and property right violations (i.e. torts). Biometric information is, generally, any information that is received through the processing of data points pulled from physical and physiological characteristics. This information offers the ability to uniquely identify someone and includes things such as faces, fingerprints, and DNA. Though the FTC has explicitly addressed this type of information, many states do not have statutes specifically discussing how private entities can handle such a sensitive category of information, particularly outside of the employment context. Illinois, Texas, and Washington currently stand out in this area with specific statutes on the books to address biometrics. While other states lag behind, general principles of data privacy law and compliance with stated privacy policies will still be enforced there.

Social media web sites such as Facebook have very large numbers of photographs of people, annotated with names. This represents a database which may be abused by governments for face recognition purposes.  Facebook’s DeepFace has become the subject of several class action lawsuits under the Biometric Information Privacy Act, with claims alleging that Facebook is collecting and storing face recognition data of its users without obtaining informed consent, in direct violation of the Biometric Information Privacy Act. The most recent case was dismissed in January 2016 because the court lacked jurisdiction.  Therefore, it is still unclear if the Biometric Information Privacy Act will be effective in protecting biometric data privacy rights. In December 2017, Facebook rolled out a new feature that notifies a user when someone uploads a photo that includes what Facebook thinks is their face, even if they are not tagged. Facebook has attempted to frame the new functionality in a positive light, amidst prior backlashes.  Facebook’s head of privacy, Rob Sherman, addressed this new feature as one that gives people more control over their photos online. “We’ve thought about this as a really empowering feature,” he says. “There may be photos that exist that you don’t know about.”

The Electronic Privacy Information Center and several other consumer groups plan to file a complaint with the Federal Trade Commission asking for an investigation into Facebook’s use of facial recognition technology. Facebook for years has used the technology to help users in tagging photos, but it has failed to gain proper consent for linking biometric markers with individual users, the technology watchdog groups say. Facebook says that when someone has their setting turned off, it doesn’t use the technology to identify them in photos. “Our face recognition technology helps people manage their identity on Facebook and makes our features work better for people who are visually impaired,” said Rob Sherman, Facebook Deputy Chief Privacy Officer, in a statement. It also uses facial identification to allow users to tag people more easily and to let them know if they’ve appeared in other people’s photos or videos. As Facebook’s facial recognition technology advanced, its identification of persons in photos — who might not even know a photo was taken of them — represents privacy problems and a violation of the company’s agreement to get users’ consent, the groups say. “The scanning of facial images without express, affirmative consent is unlawful and must be enjoined,” the groups say in the complaint.

_

Reasons to be concerned about your privacy:

Privacy matters. Privacy refers to any rights you have to control your personal information and how it’s used — and that can include your faceprint.

So, what are the issues? Here are some:

  • Security. Your facial data can be collected and stored, often without your permission. It’s possible hackers could access and steal that data.
  • Prevalence. Facial recognition technology is becoming more widespread. That means your facial signature could end up in a lot of places. You probably won’t know who has access to it.
  • Ownership. You own your face — the one atop your neck — but your digital images are different. You may have given up your right to ownership when you signed up on a social media network. Or maybe someone tracks down images of you online and sells that data.
  • Safety. Facial recognition could lead to online harassment and stalking. For example, someone takes your picture on a subway or some other public place and uses facial recognition software to find out exactly who you are.
  • Mistaken identity. Law enforcement uses facial recognition to try to identify someone who is involved in criminal activity. Facial recognition systems are never 100 percent accurate and can return false positives. What if the police mistake you for a suspect?
  • Basic freedoms. Government agencies and others could have the ability to track you. What you do and where you go might no longer be private. It could become impossible to remain anonymous.

How you can help protect yourself against facial recognition:

You might start with your social networks.

  • Facebook allows you to opt out of its facial recognition system.
  • Google+ won’t enable facial recognition until you opt in. The system also allows you to turn face recognition on and off.

It’s smart in general to be careful about what you share on social networks. Posting too much personal information, including photos, could lead to identity theft. For instance, you might share your dog’s name or your high school mascot. Those details might give an identity thief a clue to the answers to your security questions for your bank or credit card accounts.

_

Civil rights organizations and privacy campaigners such as the Electronic Frontier Foundation, Big Brother Watch and the ACLU express concern that privacy is being compromised by the use of surveillance technologies. Some fear that it could lead to a “total surveillance society,” with the government and other authorities having the ability to know the whereabouts and activities of all citizens around the clock. This knowledge has been, is being, and could continue to be deployed to prevent the lawful exercise of rights of citizens to criticize those in office, specific government policies or corporate practices. Many centralized power structures with such surveillance capabilities have abused their privileged access to maintain control of the political and economic apparatus, and to curtail populist reforms.

_

Face recognition can be used not just to identify an individual, but also to unearth other personal data associated with an individual – such as other photos featuring the individual, blog posts, social networking profiles, Internet behavior, travel patterns, etc. – all through facial features alone. Concerns have been raised over who would have access to the knowledge of one’s whereabouts and people with them at any given time. Moreover, individuals have limited ability to avoid or thwart face recognition tracking unless they hide their faces. This fundamentally changes the dynamic of day-to-day privacy by enabling any marketer, government agency, or random stranger to secretly collect the identities and associated personal information of any individual captured by the face recognition system.  Consumers may not understand or be aware of what their data is being used for, which denies them the ability to consent to how their personal information gets shared.

_

Does Face Recognition Surveillance in Public Spaces invade Privacy?

The most fundamental argument against government use of face recognition technology in public spaces is that it is a violation of the constitutional right to privacy. This core objection was advanced by the ACLU in the context of the use of face recognition at the Super Bowl: “… this activity raises concerns about the Fourth Amendment right of all citizens to be free of unreasonable searches and seizures.” However, essentially all legal commentators agree that use of face recognition systems in public spaces cannot be considered a “search” for constitutional purposes.

Woodard’s analysis of the issue seems careful and representative:

“Under current law, however, the type of facial recognition used at the Super Bowl would almost certainly be constitutional. The American Supreme Court has explained that government action constitutes a search when it invades a person’s reasonable expectation of privacy. But the court has also found that a person does not have a reasonable expectation of privacy with regard to physical characteristics that are constantly exposed to the public, such as one’s facial features, voice, and handwriting. So although the Fourth Amendment requires that a search conducted by government actors be ‘reasonable,’ which generally means that there must be some degree of suspicion that the person to be searched is engaged in wrongdoing, the scan of spectators’ facial characteristics at the Super Bowl did not constitute a search.”

However, this interpretation of the right to privacy was formulated before it was technically conceivable that a system could automatically match the face of every person entering a public space against a gallery of images of people wanted by authorities. Thus, some observers may argue that the scale of operations made possible by computerized face recognition technology should result in a change to the Supreme Court’s traditional interpretation of the right to privacy. Another concern is simply that citizens should be notified when they enter a public space where video surveillance is being used. The idea is apparently that people could then make an informed choice of whether or not to subject themselves to surveillance. Of course, if all airports install face recognition systems then there may be little practical “choice” left for some travellers. However, given the level of screening already in place for passengers boarding an airplane, posing for a picture for a face recognition system would seem to be a rather minimal added inconvenience.

_

There are important openings for dissent in the nascent facial recognition society. The U.S. Supreme Court may have denied a right of privacy over facial features, but there is sociological evidence suggesting people observe a customary right to facial privacy. Journalist Malcolm Gladwell (2002) says we tend to focus on audible communication and ignore much of the visual information given in the face, because to do otherwise would “challenge the ordinary boundaries of human relationships.” Gladwell refers to an essay written by psychologist Paul Ekman in which Ekman discusses Erving Goffman’s sociological work:

Goffman said that part of what it means to be civilized is not to “steal” information that is not freely given to us. When someone picks his nose or cleans his ears, out of unthinking habit, we look away …. for Goffman the spoken word is “the acknowledged information, the information for which the person who states it is willing to take responsibility” … (2002)

Gladwell writes that it is disrespectful and an invasion of privacy to probe people’s faces for information their words leave out. Awareness of the information also entails an obligation, Gladwell says, to react to a message that was never intended to be transmitted. “To see what is intended to be hidden, or, at least, what is usually missed,” Gladwell explains, “opens up a world of uncomfortable possibilities” (2002). Ideas such as these, that examine the forms of interaction a facial recognition society would create, can be exploited in mounting a defence against the observation onslaught. They may be of little consequence now, when people have yet to experience the full brunt of facial surveillance, but as its drawbacks become increasingly apparent, the arguments will become more salient.

________

________

Freedom and anonymity compromised by facial recognition:

Civil rights groups have warned a vast, powerful system allowing the near real-time matching of citizens’ facial images risks a “profound chilling effect” on protest and dissent. The technology – known in shorthand as “the capability” – collects and pools facial imagery from various state and federal government sources, including driver’s licences, passports and visas. The biometric information can then rapidly – almost in real time – be compared with other sources, such as CCTV footage, to match identities. The system is designed to give intelligence and security agencies a powerful tool to deter identity crime, and quickly identify terror and crime suspects.

But it has prompted serious concern among academics, human rights groups and privacy experts. The system sweeps up and processes citizens’ sensitive biometric information regardless of whether they have committed or are suspected of an offence. Critics have warned of a “very substantial erosion of privacy” and the system’s potential use for mass general surveillance. There are also fears about the level of access given to private corporations and the legislation’s loose wording, which could allow it to be used for purposes other than related to terrorism or serious crime.

It’s hard to believe that it won’t lead to pressure, in the not too distant future, for this capability to be used in many contexts, and for many reasons. This brings with it a real threat to anonymity. But the more concerning dimension is the attendant chilling effect on freedoms of political discussion, the right to protest and the right to dissent. I think these potential implications should be of concern to us all.

_

With no legislation, guidance, policy or oversight, facial recognition technology should have no place on our streets. It has chilling implications for our freedom. Every single person who walks by these cameras will have their face – their most identifiable feature – scanned and stored on a police database. There is no escaping it – especially when you don’t know it’s happening. And if you are one of the unlucky ones who is falsely identified as a match, you might be forced to prove your identity to the police – or be arrested for a crime you didn’t commit. It’s not hard to imagine the chilling effect its unrestricted use will have. Constant surveillance leads to people self-censoring lawful behaviour. Stealthily, these measures curb our right to protest, speak freely and dissent. They shape our behaviours in ways that corrode the heart of our democratic freedoms. And even more perniciously, this technology is most dangerous for the people who need it the most. Technology that misidentifies women and people from ethnic minority communities disenfranchises people who already face inequality. If the history of the civil rights movement teaches us anything, it’s that protest can bring about social change. The people most likely to be wronged by the facial recognition technology being rolled out in our public spaces are the people who need public protest the most. The government’s defence is that the technology is “evolving”. But that doesn’t wash when it is having a real and unjust impact on people in the here and now.

_____

Can Face ID lead to mass surveillance?

Face ID creates fear of government surveillance: mass scans to identify individuals based on face profiles. Law enforcement is rapidly increasing use of facial recognition; one in two American adults are already enrolled in a law enforcement facial recognition network, and at least one in four police departments have the capacity to run face recognition searches.  While Facebook has a powerful facial recognition system, it doesn’t maintain the operating systems that control the cameras on phones, tablets, and laptops that stare at us every day. Apple’s new system changes that. For the first time, a company will have a unified single facial recognition system built into the world’s most popular devices—the hardware necessary to scan and identify faces throughout the world.

Apple doesn’t currently have access to the faceprint data that it stores on iPhones. But if the government attempted to force Apple to change its operating system at the government’s behest—a tactic the FBI tried once already in the case of the locked phone of San Bernardino killer Syed Rizwan Farook—it could gain that access. And that could theoretically make Apple an irresistible target for a new type of mass surveillance order. The government could issue an order to Apple with a set of targets and instructions to scan iPhones, iPads, and Macs to search for specific targets based on Face ID, and then provide the government with those targets’ location based on the GPS data of devices that receive a match. Apple has a good record of fighting for user privacy, but there’s only so much the company could do if its objections to an order were turned down by the courts. By generating millions of face prints while simultaneously controlling the cameras that can scan and identify them, Apple might soon face a government order to turn its new unlocking system into the app for mass surveillance.

______

Is it Surveillance Economy?

Nearly all technologies that come with privacy risks are developed for legitimate and even beneficial purposes. Facial recognition is no exception, but it deserves attention and debate. Simple facial detection could surround you in a bubble of billboards and electronic store displays shown only to people of your race, sex, and age.

More importantly, facial recognition has the potential to erode the anonymity of the crowd, the specific type of privacy you experience when you stride through a public space, near home or on vacation, and refreshingly, no one knows your name. Marketers already can see every article we read online; do we need to let them record every shop window we gaze through?

According to privacy advocates, this is the time to consider policy changes, while facial recognition is still ramping up. One step advocates have advanced would be to require an opt-in before people are entered into a facial recognition database, with reasonable exceptions for safety and security applications. That idea has already been implemented by some leading technology companies. For instance, users of Microsoft’s Xbox gaming system can access their profiles using facial recognition, but only if they choose to turn on that feature. Second, regulations could require companies to encrypt faceprints or institute other strong data protections—after all, a compromised PIN can be replaced, but there’s no ready solution if someone steals your biometric files. Special rules could prevent children under the age of 13 from being targeted by facial recognition systems in stores. And consumers should have the right to know who has a copy of their faceprint, how it is being used, and who it is being shared with.
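Of the protections listed above, encrypting faceprints at rest is the most mechanical. A minimal sketch using the widely used Python cryptography package follows; the 128-dimension template is a stand-in for a real faceprint, and key management, the genuinely hard part, is glossed over.

```python
# Minimal sketch of encrypting a faceprint at rest with the `cryptography`
# package. The template is a stand-in; real deployments also need secure
# key storage, rotation and access controls, which are omitted here.
import numpy as np
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # must itself be stored securely
cipher = Fernet(key)

faceprint = np.random.rand(128).astype(np.float32)   # stand-in template

ciphertext = cipher.encrypt(faceprint.tobytes())     # what goes to disk/DB
restored = np.frombuffer(cipher.decrypt(ciphertext), dtype=np.float32)

assert np.array_equal(faceprint, restored)           # round-trips losslessly
```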

_____

_____

Security threats by facial recognition:

Acceptance of facial recognition and other biometric identification systems has generally been driven by security concerns and the belief that these technologies offer solutions. Yet, less salient are the security threats posed by these very systems, particularly threats of harm posed by lax practices dealing with system databases. Recent incidents in the UK and US suggest that institutions still do not deserve full public trust in how they safeguard personal information. In the case of biometric data, this fear is magnified many times over since it is generally assumed to be a non-falsifiable anchor of identity. If the biometric template of my faceprint is used to gain access to a location, it will be difficult for me to argue that it was not me, given general, if problematic, faith in the claim that “the body never lies.” Once my faceprint has been digitally encoded, however, it can potentially be used to act “as if” it were me and, thus, the security of biometric data is a pressing matter, usefully considered on a par with DNA data and evidence. A similar level of caution and security needs to be established.

_

Discussion around the use of facial recognition in surveillance usually centers on privacy issues. What should also be discussed is the high probability of security breaches and the volume of personal information that can leak. While biometric data is one of the most reliable tools for authentication, it is also a major risk. If someone’s credit card data is exposed in a high-profile breach like Equifax’s, they have the option to freeze their credit and can take steps to change the personal information that was leaked.

What if you lose your face?

A 2016 report from the Center on Privacy & Technology at Georgetown Law revealed that, “One in two American adults is in a law enforcement face recognition network. [These networks] include over 117 million American adults.” In the UK, the independent Biometrics Commissioner has attacked the Government’s practice of keeping mugshots of un-convicted citizens – about 19 million of them. The Commissioner outlines exactly how intrusive this national database is becoming as facial recognition is applied to it. He is also damning about the lack of safeguards surrounding its use.

What can you do when biometric information is leaked or stolen?

Around the world, biometric information is captured, kept and analyzed in quantities that boggle the mind. As facial recognition software is still in its infancy in some ways, laws on how this type of biometric data is used are still non-existent or up for debate. And regular citizens whose information is compromised have almost no legal avenues to pursue. The most plausible answer to the above question is “nothing”.

_

People need to be educated on how to manage their online identities and their privacy settings so that they avoid sharing private information unintentionally or unnecessarily. Social media platforms and facial recognition services need to support this and provide safeguards that prevent their misuse. And regulation needs to be updated to ensure that technology doesn’t open up loopholes that can be exploited by nefarious or ignorant actors. Our lives today are dramatically different from what they were only a decade ago. In a decade’s time, they will be more different still. The technology that develops and how we use it will shape our world immeasurably.

______

______

Regulation of facial recognition:

Machine learning researchers note that some inaccuracies in facial recognition systems are inevitable no matter how refined the technology becomes. And privacy advocates argue that this reality underscores the need for system audits and legislation that manages facial recognition deployment, crucial measures for protecting individual privacy and the ability to remain anonymous. So far, lawmakers worldwide have been slow to codify parameters for the technology. Even the United Kingdom, which has experimented with facial recognition tools in law enforcement since the 1990s, lacks a regulatory framework for it. In the US, some representatives have expressed a desire to introduce legislation addressing government’s use of facial recognition. In the meantime, though, the technology has proliferated unchecked.

_

Facial recognition is largely unregulated. Companies aren’t barred from using the technology to track individuals the moment we set foot outside. No laws prevent marketers from using faceprints to target consumers with ads. And no regulations require faceprint data to be encrypted to prevent hackers from selling it to stalkers or other criminals.  You may enjoy Facebook’s photo-tagging suggestions, but would you be comfortable if every mall worker was jacked into a system that used security-cam footage to access your family’s shopping habits, favorite ice cream flavors, and most admired superheroes? Like it or not, that could be the future of retail, according to Kelly Gates, associate professor in communication and science studies at the University of California, San Diego and author of “Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance.”  “Regardless of whether you want to be recognized, you can be sure that you have no right of refusal in public, nor in the myriad private spaces that you enter on a daily basis that are owned by someone other than yourself,” Gates says. “You give consent by entering the establishment.”

_

Is Face Recognition Legal?

There are several regulations that restrict the use of biometric data. Face recognition, also known as “facial geometry recognition,” has slipped by most of the agencies that would be inclined to regulate it. Whether this is because laws move very slowly or because of the tech industry lobbyists is debatable. However, a few states, including Illinois, have chosen to regulate facial recognition technology (and other biometric data). In October of 2008, Illinois enacted the Biometric Information Privacy Act. The act was aimed at protecting an individual’s biometric identifiers, including facial geometry, from unauthorized collection, use, and sale. The Act applies to both private individuals and entities – basically everyone but federal, state and local government – and sets forth requirements for those who possess face recognition data.

In the U.S. the states of Texas, Illinois, and Washington have adopted biometrics regulations. Moreover, the European Union has passed the General Data Protection Regulation (GDPR), which has specific requirements in terms of sensitive data. Even though these regulations might make certain use cases hard to serve, they have a place because in principle they are protecting the privacy of end users. They are making every party involved aware that it is not right and acceptable to misuse people’s personal data. And in a way, they are bringing the facial recognition technology on the same (or similar) level with other tracking technologies such as RFID and Beacons.

____

Microsoft urges regulation of facial recognition technology:

Microsoft’s chief legal officer called for regulation of facial recognition technology due to the risk to privacy and human rights. Brad Smith made a case for a government initiative to lay out rules for proper use of facial recognition technology, with input from a bipartisan and expert commission.

Facial recognition technology raises significant human rights and privacy concerns, Smith said in a blog post. “Imagine a government tracking everywhere you walked over the past month without your permission or knowledge,” he said. “Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech.” It could become possible for businesses to track visitors or customers, using what they see for decisions regarding credit scores, lending decisions, or employment opportunities without telling people. He said scenarios portrayed in fictional films such as “Minority Report”, “Enemy of the State”, and even the George Orwell dystopian classic “1984” are “on the verge of becoming possible”. “These issues heighten responsibility for tech companies that create these products,” Smith said. “In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses.”

It may seem unusual for a company to ask for government regulation of its products, but there are many markets where thoughtful regulation contributes to a healthier dynamic for consumers and producers alike. The auto industry spent decades in the 20th century resisting calls for regulation, but today there is broad appreciation of the essential role that regulations have played in ensuring ubiquitous seat belts and air bags and greater fuel efficiency. The same is true for air safety, foods and pharmaceutical products. There will always be debates about the details, and the details matter greatly. But a world with vigorous regulation of products that are useful but potentially troubling is better than a world devoid of legal standards.

That’s why Microsoft called for national privacy legislation for the United States in 2005 and why it has supported the General Data Protection Regulation in the European Union. Consumers will have more confidence in the way companies use their sensitive personal information if there are clear rules of the road for everyone to follow. While the new issues relating to facial recognition go beyond privacy, the company believes the analogy is apt.

It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse. Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime. Governments may monitor the exercise of political and other public activities in ways that conflict with longstanding expectations in democratic societies, chilling citizens’ willingness to turn out for political events and undermining our core freedoms of assembly and expression. Similarly, companies may use facial recognition to make decisions without human intervention that affect our eligibility for credit, jobs or purchases. All these scenarios raise important questions of privacy, free speech, freedom of association and even life and liberty.

_____

_____

Pros and cons of facial recognition:

_

Pros of Facial Recognition

  1. Increased Security: One of the biggest pros of facial recognition technology is that it enhances safety and security. From government agencies to personal use, there is an increasing demand for advanced security and surveillance systems. Organizations can easily identify and track anyone who comes onto the premises, and they can easily flag visitors who aren’t welcome. It can be very helpful when it comes to finding potential terrorists. Plus, there is no key, badge, or password that can be stolen or lost.
  2. Speed: The new cameras and the data points they analyze through the software make facial recognition processing happen within seconds, not minutes. So, whether someone is logging into work physically at a door or on his/her computer, there will be virtually no wait time.
  3. No Contact: Facial recognition is preferred over fingerprint scanning because of its non-contact process. People don’t have to worry about the potential drawbacks related to fingerprint identification technology, such as germs or smudges.
  4. Accuracy: With today’s technology, face ID technology is becoming more and more reliable. The success rate is currently at a high due to the developments of 3D facial recognition technologies, skin texture analysis, and infrared cameras. The combination of these technologies makes it very hard to trick the system. With such accuracy, you can have confidence that the premise is more secure and safe for you and your peers.
  5. Almost Fraud-Proof: No identification system is perfect, but it is almost impossible for an individual to log in as someone else, either to get through a doorway or to get onto a computer. This prevents employees from gaining access to information they should not have, and it prevents them from falsifying their hours on the job. Logging in with passwords, on the other hand, is loose and insecure.
  6. Cost-efficiency: Because facial recognition technology is automated, it also reduces the need for security guards to personally verify a match. You will receive instant notification as soon as a suspicious character comes into your facilities. That helps save a lot of money you would have spent on security staff.
  7. Safety: Automated security processes also mean that fewer security personnel would be put in potentially dangerous situations.
  8. Convenience: With our personal information becoming more and more digitised, facial recognition can help us on an individual level too. Using biometrics instead of a password to secure a smartphone or laptop can keep our device safe from prying eyes. It’s also easier than memorising multiple passwords and PIN numbers.
  9. Connection: There are some exciting possibilities for facial recognition on a social level too. Imagine meeting a room full of strangers, and having an easy, almost immediate way to find out more about them, seeing what you have in common with each one.
  10. Full Automation: You can use facial identifying tech to automate your entire identification process. You don’t have to hire security guards who often use manual recognition to identify your workers. An automated facial recognition platform keeps the identification process running without interruption. Moreover, you won’t need someone to monitor your surveillance cameras. Automation will ultimately translate into more convenience and reduced costs of operation. (A minimal matching loop of this kind is sketched after this list.)
  11. Easy Integration Process: Most of the time, integratable facial recognition tools work pretty flawlessly with the existing security software that companies have installed. And they’re also easy to program for interaction with a company’s computer system. You won’t need to spend additional money and time on redeveloping your own software to make it suitable for FRT integration. Everything will be already adaptable.
  12. Forget the Time Fraud: One of the big benefits that facial recognition technology companies offer is time and attendance tracking, which eliminates time fraud among workers. No more buddy favours from security staff, since everyone now has to pass a face-scanning device to check in for work. Paid hours begin at that moment and run until the same check-out procedure, and the process is fast because employees don’t have to prove their identities or clock in with plastic cards.
  13. Wide application fields: In addition to all current applications of fingerprint recognition, such as time and attendance and access control systems, face recognition can also be applied to all kinds of video surveillance alarm systems, digital cameras, and robotics.

______

Cons of Facial Recognition:

  1. Need for high-quality equipment: You will need advanced software and high-quality digital cameras to operate accurately. The facial recognition system usually captures a face from a video and compares it with the enrolled photo, so picture quality can affect the entire facial identification process; it is hard to identify an intruder from small, low-resolution images. There is no point in installing a facial recognition system unless an organization is willing to bear the cost of top-quality cameras that can capture extremely high-quality images for placement in databases. Likewise, the software infrastructure must be able to recognize faces instantly, or there will be waiting time and frustration. Mediocre quality of any type may result in people of similar appearance being mistaken for one another.
  2. High Implementation Costs: Facial recognition requires top-quality cameras and advanced software to ensure accuracy and speed. However, Allied Market Research predicts that technological advancements are likely to reduce the prices of facial recognition systems in the future.
  3. Processing and Data Storage: Storage is a fundamental requirement in the digital world; an organization has to store a lot of data for future use. The video and high-quality images required for facial recognition take up a significant amount of storage. To remain responsive, facial recognition systems typically process only about 10 to 25% of video frames, since processing every frame would waste a great deal of time. Organizations therefore use clusters of computers to process everything quickly and minimize processing time.
  4. Changes in Appearance and Camera Angle: Such systems may be fooled by hats, beards, sunglasses and face masks. Any major change in appearance, including facial hair and weight changes, can throw off the technology; in these instances, a new picture is required. Camera angle also causes issues, because multiple angles are needed to identify a face. The surveillance angle puts the identification process under pressure: a face must be enrolled through the FRT software from numerous angles, and nothing short of a frontal view will generate a clear face template. Multiple angles and higher-resolution photos increase the accuracy of the results. Sunglasses and facial hair can cause further trouble, and an intruder can fool the system with a mask or by shaving off a beard.
  5. Exposure: Insiders speculate about potential situations like using facial recognition to determine who around you belongs to a certain cultural or religious group, or who has a criminal record. This application might make the user feel safer or better informed, but it would be invasive to those on the receiving end.
  6. Safety: In an era where online bullying has become prevalent, this type of technology could also leave more people vulnerable to stalking and harassment in the real world.
  7. Legislation: There are concerns that biometrics are progressing too rapidly for regulators, legislators, and the judicial system to set up standardised rules and precedents around their use. For example, in the USA, the Fifth Amendment protects people from giving up information that could incriminate them. This would include information like a password or PIN. However, a thumbprint or your face shape isn’t a piece of information you know; it’s something you are. So, is facial recognition covered by the Fifth Amendment? Some people view mass-scale facial recognition cameras as the ultimate “big brother.”

______

Facial recognition technology benefit versus drawback chart:

No. | Benefits of FRT | Drawbacks of FRT
1 | Security levels will be significantly improved | Difficulties with data processing and storing
2 | The integration process is easy and flawless | Troubles with image size and quality
3 | High accuracy allows avoiding false identification | Strong influence of the camera angle
4 | The facial recognition system is fully automated | May lead to stalking, harassment and identity theft
5 | Time fraud will be excluded | Lack of proper legislation and regulation

_____

_____

Anti-facial recognition systems:

  1. There are ways to trick these systems. Wearing a rigid mask that covers the whole face, for example, would give current facial recognition systems nothing to go on.
  2. Two universities have developed anti-facial recognition glasses to make wearers undetectable. The glasses — the work of researchers at Carnegie Mellon University and the University of North Carolina at Chapel Hill — could be one way to help protect yourself. These patterned glasses are specially designed to trick and confuse AI facial recognition systems.

Researchers from Carnegie Mellon University have shown that specially designed spectacle frames can fool even state-of-the-art facial recognition software. Not only can the glasses make the wearer essentially disappear to such automated systems, they can even trick them into thinking you’re someone else. By tweaking the patterns printed on the glasses, scientists were able to assume one another’s identities or make the software think they were looking at celebrities. The glasses work because they exploit the way machines understand faces. Facial recognition software is often powered by deep learning: systems that crunch through large amounts of data to sift out recurring patterns. In terms of recognizing faces, this could mean measuring the distance between an individual’s pupils, for example, or looking at the slant of their eyebrows or nostrils. But compared to human comprehension, this analysis takes place at an abstract level. Computer systems don’t understand faces in the way we do; they’re simply looking for patterns of pixels. If you know what patterns are being looked for, you can easily trick machine vision systems into seeing animals, people, and objects in what are just abstract patterns. This is exactly what the researchers from Carnegie Mellon did. First they worked out the patterns associated with specific faces, and then they printed them onto a set of wide-rimmed glasses (so as to occupy more of the frame; about 6.5 percent of the available pixels in the end). In 100 percent of their tests, the researchers were able to use the glasses to effectively blind facial recognition systems to their identities. Also, a 41-year-old white male researcher was able to pass himself off as actress Milla Jovovich to facial recognition systems with 87.87 percent accuracy.
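To make the mechanism concrete, here is a minimal sketch of the underlying idea: a gradient-guided perturbation confined to an eyeglass-shaped region of the image. The classifier, mask and face below are toy stand-ins, not the Carnegie Mellon system.

```python
import numpy as np

# Toy stand-in for a face classifier: logistic regression on flattened pixels.
# Weights are random here; a real attack would use the target system's gradients.
rng = np.random.default_rng(0)
H, W = 32, 32
w = rng.normal(size=H * W) / 30.0

def target_score(img):
    """Probability the toy classifier assigns to the impersonation target."""
    return 1.0 / (1.0 + np.exp(-(img.ravel() @ w)))

# "Eyeglass frame" mask: only pixels in a band across the eyes may change,
# mimicking patterns printed on wide-rimmed frames.
mask = np.zeros((H, W))
mask[10:15, 4:28] = 1.0

face = rng.uniform(0.0, 1.0, size=(H, W))      # stand-in face image

# Gradient-sign step confined to the mask: nudge only the "frame" pixels in
# the direction that raises the target identity's score.
grad = w.reshape(H, W)                          # d(logit)/d(pixel) for this toy model
adv_face = np.clip(face + 0.3 * np.sign(grad) * mask, 0.0, 1.0)

print(f"target score before: {target_score(face):.3f}, after: {target_score(adv_face):.3f}")
```

Because only the masked “frame” pixels change, the rest of the face is untouched, which is exactly why a printed pair of glasses suffices in the physical world.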

  1. In January 2013, Japanese researchers from the National Institute of Informatics created ‘privacy visor’ glasses that use near-infrared light to make the face underneath unrecognizable to face recognition software. The latest version uses a titanium frame, light-reflective material and a mask which uses angles and patterns to disrupt facial recognition technology by both absorbing and bouncing back light.
  2. In December 2016, a form of anti-CCTV and facial recognition sunglasses called ‘reflectacles’ was invented by Scott Urban, a custom-spectacle craftsman based in Chicago. They reflect infrared and, optionally, visible light, which makes the user’s face a white blur to cameras.
  3. Another method of protection from facial recognition systems is specific haircuts and make-up patterns that prevent the algorithms used from detecting a face.
  4. HyperFace Camouflage:

Human facial recognition is hard to fool, but machines are a little less sophisticated. As a result, a new kind of camouflage has been designed that can conceal our faces from ever-watchful artificial intelligence cameras. It works by mimicking features of the face that cause “hits” in facial recognition systems. HyperFace uses multiple patterns printed on clothing and fabrics that resemble features of human faces to confuse facial recognition systems. This patterned material makes computers think they’re looking at a whole horde of faces – creating a cloud of confusion around the user. You become just one face, lost in your own private crowd.

HyperFace is a new kind of camouflage that aims to reduce the confidence score of facial detection and recognition by providing false faces that distract computer vision algorithms. HyperFace works by providing maximally activated false faces based on ideal algorithmic representations of a human face. These maximal activations are targeted for specific algorithms. Instead of seeking computer vision anonymity through minimising the confidence score of a true face, HyperFace offers a higher confidence score for a nearby false face by exploiting a common algorithmic preference for the highest confidence facial region. In other words, if a computer vision algorithm is expecting a face, give it what it wants.
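The algorithmic preference HyperFace exploits can be illustrated in a few lines. The detection data below is hypothetical; the point is simply that a pipeline which keeps only the highest-confidence face region will lock onto whichever “face” scores highest, whether real or printed.

```python
# Hypothetical detector output: the wearer's real face plus two printed decoys.
# Many pipelines keep only the highest-confidence region, so a decoy that
# scores higher than the true face is the one that gets tracked and matched.
detections = [
    {"box": (120, 80, 60, 60), "confidence": 0.91, "source": "true face"},
    {"box": (40, 200, 55, 55), "confidence": 0.97, "source": "decoy on scarf"},
    {"box": (210, 190, 50, 50), "confidence": 0.93, "source": "decoy on shirt"},
]

primary = max(detections, key=lambda d: d["confidence"])
print(f"pipeline locks onto: {primary['source']} at {primary['box']}")
```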

  1. In Russia, Grigory Bakunov has invented a solution to escape the cameras permanently watching our movements and confuse face detection devices. He has developed an algorithm which creates special makeup to fool the software. However, he has chosen not to bring his product to market after realizing how easily it could be used by criminals.
  2. In Germany, Berlin artist Adam Harvey has come up with a similar device known as CV Dazzle, and is now working on clothing featuring patterns to prevent detection.
  3. In late 2017, a Vietnamese company successfully used a mask to hack the Face ID face recognition function of Apple’s iPhone X. However, the hack is too complicated to implement for large-scale exploitation. Around the same time, researchers from a German company revealed a hack that allowed them to bypass the facial authentication of Windows 10 Hello by printing a facial image in infrared.
  4. Forbes announced in an article dated 31 May 2018 that researchers from the University of Toronto have developed an algorithm to disrupt facial recognition software (aka privacy filter). In short, a user could apply a filter that modifies specific pixels in an image before putting it on the web. These changes are imperceptible to the human eye, but are very confusing for facial recognition algorithms.
  5. Camera Finders:

Cameras can sometimes be detected and avoided if you see them before they see you, or if you know where they are ahead of time. Then they can be neutralized with something as simple as a disguise, a tilt of the head or placement of an opaque object between you and the camera you wish to avoid. Camera finders would more correctly be called reflection or lens finders, because they use light reflected off camera lenses to find hidden cameras. These devices typically project light while the operator sweeps an area through a lens or filter matched to the colour of that light; light bounced back by camera lenses shows up as a telltale highlight in the operator’s view.

______

You can also trick facial recognition in other ways too, using special hairstyles and makeup as seen in the figure below.

  1. Makeup: Avoid enhancers that amplify facial features. Instead, use makeup that contrasts with your skin tone in unusual tones and directions. Examples: light colors on dark skin, dark colors on light skin.
  2. Nose Bridge: Partially cover the nose bridge area. This is where the nose, eyes and forehead intersect to form a key facial feature.
  3. Eyes: Partially obscure one of the ocular regions, such as position and darkness of eyes.
  4. Masks: Don’t wear masks, as they can be illegal in certain cities. What you want to do instead is change the contrast, tonal gradients, and spatial relationship of dark and light areas on your face using hair, makeup, and/or special fashion accessories.
  5. Head: Certain research shows that obscuring the elliptical shape of your head can improve your ability to evade facial detection.
  6. Asymmetry: Facial recognition software expects to find symmetry between the left and right sides of your face. Break this up by developing an asymmetrical look.

_____

Spoofing by 3D mask:

Spoofing is the act of masquerading as a valid user by falsifying data to gain illegitimate access. The vulnerability of recognition systems to spoofing attacks (presentation attacks) is still an open security issue in the biometrics domain, and among all biometric traits, the face is exposed to the most serious threat, since it is particularly easy to access and reproduce. Nowadays, having won the challenge of reliability, face recognition systems face a new problem: they have become vulnerable to identity-theft attacks. In order to deceive recognition systems, hackers use several methods, such as face images or videos of people enrolled in the system database. The popularity of face recognition has raised concerns about face spoof attacks (also known as biometric sensor presentation attacks), where a photo or video of an authorized person’s face could be used to gain access to facilities or services, and a number of face spoof detection techniques have been proposed. Luckily, this type of attack is thwarted by suitably adapted systems. Unfortunately, another type of attack, using 3D face masks, has appeared. This type of attack is very efficient, and a high percentage of attackers who use 3D masks can mislead even a good facial recognition system.
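As for the photo and video replay attacks mentioned above, one classic detection cue can be sketched as follows. The threshold and file name are illustrative assumptions, and real anti-spoofing systems combine many such cues rather than relying on a single one.

```python
import cv2

def sharpness_score(gray_face):
    """Variance of the Laplacian: recaptured photos and screen replays tend to
    lose high-frequency texture, so an unusually low score hints at a spoof.
    This is one illustrative cue; real anti-spoofing systems combine many."""
    return cv2.Laplacian(gray_face, cv2.CV_64F).var()

# Hypothetical usage (the file name and threshold are illustrative):
# gray = cv2.cvtColor(cv2.imread("probe.jpg"), cv2.COLOR_BGR2GRAY)
# if sharpness_score(gray) < 100.0:
#     print("possible presentation attack - escalate to a stronger check")
```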

Authentication through face recognition is in fact prone to spoofing with a 3D mask. It would be easy to recreate through 3D rendering the faces of well-known politicians or actors, who are often exposed in photos and videos, as a great amount of data is available on them. 3D printers, which can be used to build face structures, are also no longer hard to find; they are commonly available on online shopping sites like Amazon. Bkav, a security company based in Vietnam, succeeded in spoofing Face ID with a face mask it created. The mask’s face structure was created using a plastic mask and silicone, and an additional application of paper and makeup was enough to bypass authentication.

______

3D models based on Facebook images can fool Facial recognition systems. A 2016 study:

A group of researchers from the University of North Carolina demonstrated that a number of existing facial recognition systems can be fooled by 3D facial models made from photographs published on Facebook. The models are displayed with virtual reality applications running on mobile devices. The team conducted an experiment involving 20 volunteer subjects, whose photos the researchers obtained from online sources. It is quite easy today to find images of a subject online, and criminals who intend to steal our digital identity act in the same way. The team used virtual reality technology to give motion and depth to the images in order to bypass facial recognition systems. The researchers then created 3D models of the volunteers’ faces, tweaked the eyes and added facial animations to simulate the behavior of a person looking at the camera. In some cases, when they could not find online photos that showed a subject’s whole face, they recreated the missing parts.

The experts tested their virtual reality face renders on five facial recognition systems used to authenticate users, KeyLemon, Mobius, TrueKey, BioID, and 1D. All these systems implement an authentication feature that can be used for a wide range of applications, such as locking smartphones. The 3D models made by the researchers were able to fool four out of five facial recognition systems they tested 55 percent to 85 percent of the time. Their attack, which successfully spoofed four of the five systems they tried, is a reminder of the downside to authenticating your identity with biometrics. By and large your bodily features remain constant, so if your biometric data is compromised or publicly available, it’s at risk of being recorded and exploited. Faces plastered across the web on social media are especially vulnerable—look no further than the wealth of facial biometric data literally called Facebook. Other groups have done similar research into defeating facial recognition systems.

_________

Technology to impersonate and bypass facial recognition system:

Scientists have invented a baseball cap that can trick facial recognition tech into thinking you’re someone else entirely. The hi-tech headwear uses laser dots to fool software like Apple’s Face ID, which works by scanning your face to identify who you are. Scientists at China’s Fudan University laced the inside of the cap with tiny LED lights, which project infrared dots onto your face. These dots aren’t visible to the naked eye, but they’ll be picked up by facial recognition systems. Apple’s iPhone face-scanning works by using an infrared blaster to project dots all over your face. By tracking these dots, it can work out the structure of your face — and identify you. But a laser-projecting baseball cap can mess with the system by projecting dots onto your face in “strategic spots,” altering how the Face ID tech sees you. The researchers even said that the face-scanning lights could be “hidden in an umbrella and possibly even hair or a wig.” They said that people could use this tech to “dodge surveillance cameras” and even “impersonate” victims to get around face-recognition systems. The attack is totally unobservable by nearby people, because not only is the infrared light invisible, but the device used to launch the attack is also small enough to go unnoticed. Worryingly, researchers can use this tech to trick the software into thinking you’re someone else entirely. The baseball cap tricked the face-recognition system FaceNet into thinking targets were public figures — like musician Moby and Korean politician Lee Hoi-chang — with a 70 percent success rate. This means you could wear this baseball cap and trick a facial recognition system into unlocking a gadget. And criminals could potentially make police with face-scanning sunglasses think you’re someone else.

_______

Can Mossad beat facial recognition?

Mossad director Yossi Cohen made a rare public speech and admitted that facial recognition technology is challenging the Mossad as it trickles out and around the world even to “non-hi-tech” countries. Spying is getting harder because the same technologies that catch terrorists can sometimes uncover foreign intelligence operations. Police identified two Novichok spies who poisoned ex-spook Sergei Skripal using facial recognition technology. The pair are believed to be part of Putin’s feared GRU military intelligence service. Investigators combing through hours of CCTV footage discovered the “fresh identities” by cross-checking them with passenger lists of the commercial flight used to flee Britain. Facial recognition technology was used to identify the suspects, who were said to be travelling under fake names. There are two main ways to beat facial recognition: one is through cutting-edge technology to defend against these programs and the other is to alter one’s appearance.

Gil Perry, a former IDF intelligence official, is the founder and CEO of D-ID, a company that has developed anti-facial recognition technology. The technology was not engineered for the Mossad, but rather for business people who want to share their appearance in marketing material without it being scooped up later by identity thieves and used against them in other unintended contexts. Perry said that his technology is already being used by Cloudinary, a cloud technology service with 350,000 customers. Perry said that D-ID’s technology “protects organizations, enterprises, governments and databases of photos and videos” so that photos being shared or stored for marketing purposes look the same to the human eye, but not to artificial intelligence and facial-recognition algorithms. It uses advanced image processing, deep-machine learning, morphological transformation and generative adversarial network attacks to slightly alter the photo in a way that throws off facial recognition programs, without being noticed by the human eye. It creates a kind of “noise” that “confuses” such programs. The technology is designed to be adaptable in a way that facial recognition programs will repeatedly be fooled and cannot make a small adjustment to catch up, Perry said. The bottom line is facial recognition programs “cannot recognize the subject” in a photo, but humans for whom the photos are intended in a marketing campaign cannot tell that the photos have been slightly altered. D-ID’s technology can be used to block “identity theft, fraud, reuse of biometric data” and “to comply with the GDPR” – the EU’s new privacy protection law. While Perry’s technology is used to prevent the misuse of marketing materials, it is not a far stretch to think of a large number of ways that similar technologies could be used by the Mossad to beat facial recognition in a variety of contexts.

Other tools, which are much less sophisticated and impractical for businesses – but possibly highly useful in other contexts for Mossad agents – are eyeglasses which use infrared light or flashes to fool facial recognition but are undetectable to the human eye. A similar adaptation of infrared technology can be hidden under a baseball hat and possibly under a wig, an umbrella or hair. One application of the technology projects dots of light onto the wearer’s face in a way that not only obscures their identity, but could even facilitate impersonating someone else. Hacking a particular facial recognition guard station and combinations of pixelation of the face are other hi-tech tactics the Mossad might use. Then there are the old-school tactics. Some privacy activists are able to beat facial recognition with low-tech makeup or paint placed in pinpoint areas of contrast on a human face – where the nose is located or where the chin becomes the neck – a tactic that has been used by the Mossad and other intelligence agencies for a while. In the past, former Mossad officials have mentioned using hats along with fake moustaches, fake beards and other low-tech, non-permanent facial adjustments to avoid being identified, and these can still work against facial recognition technologies in certain contexts.

______

______

Face Recognition Research:

__

  1. Double Trouble: Differentiating Identical Twins by Face Recognition, a 2014 study:

Facial recognition algorithms should be able to operate even when similar-looking individuals are encountered, or even in the extreme case of identical twins. An experimental data set comprising 17,486 images from 126 pairs of identical twins (252 subjects) collected on the same day, and 6,864 images from 120 pairs of identical twins (240 subjects) taken a year later, was used to measure the performance of seven different face recognition algorithms. Performance is reported for variations in illumination, expression, gender, and age for both the same-day and cross-year image sets. Regardless of the conditions of image acquisition, distinguishing identical twins is significantly harder than distinguishing subjects who are not identical twins, for all algorithms.

____

  1. Infrared based Multi/ Hyperspectral Imaging System:

Face Recognition (FR) is growing as a major research area because of the broad choice of applications in the commercial and law enforcement fields. Traditional FR methods based on the Visible Spectrum (VS) face challenges like variable illumination, pose variation, expression changes, and facial disguises, and these limitations decrease performance in identification and verification. To overcome them, the Infrared Spectrum (IRS) may be used in human FR, which leads and encourages researchers to pursue continuous work in this area. An IR-based three-dimensional cubic dataset, i.e. a Multi/Hyperspectral Imaging System, can minimize several of the limitations that arise in existing, classical FR systems, because the skin spectra derived from the cubic dataset depict features unique to an individual. Multi/Hyperspectral Imaging Systems provide valuable discriminants of individual appearance that cannot be obtained by other imaging systems, which is why this may be the future of human FR.

____

  1. Facial recognition is the future of diagnostics:

A facial recognition computer model can accurately predict your BMI, body fat, and blood pressure, new research shows. Dr. Ian Stephen, of Macquarie University in Sydney, Australia, and his colleagues used facial shape analysis to correctly detect markers of physiological health in more than 270 individuals of different ethnicities. “We have developed a computer model,” explains Dr. Stephen, “that can determine information about a person’s health simply by analyzing their face, supporting the idea that the face contains valid, perceptible cues to physiological health.” The findings have now been published in the journal Frontiers in Psychology, and they make the idea of a computer-enhanced super-doctor whose brain has been optimized for flawless diagnosing appear more scientific than fictional.

Dr. Stephen explains how the study was carried out: “First, we used photos of 272 Asian, African, and Caucasian faces to train the computer to recognize people’s body fat, BMI, […] and blood pressure from the shape of their faces.  We then asked the computer to predict these three health variables in other faces, and found that it could do so,” says Dr. Stephen. Next, the researchers wanted to see whether or not humans would detect health cues in the same way. So, Dr. Stephen and his colleagues designed an app that enabled human participants to change the appearance of the faces so that they would look as healthy as possible. The parameters of the app could be changed according to the computer model. “We found that the participants altered the faces to look lower in fat, have a lower BMI and, to a lesser extent, a lower blood pressure, in order to make them look healthier,” says Dr. Stephen. “This suggests that some of the features that determine how healthy a face looks to humans are the same features that the computer model was using to predict body fat, BMI, and blood pressure.” In other words, our brains work in much the same way as the computer model, and they can predict health from a facial shape with surprising accuracy.
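The study’s first step, learning a regression from face-shape parameters to a health variable, can be sketched as follows. All data here is synthetic and the 20 shape parameters are an assumption; the published model worked from landmark-based shape coordinates of 272 photographed faces.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for the study's first step: regress a health variable
# (here BMI) on face-shape parameters. The 20 parameters and all values are
# assumptions, not the study's data.
rng = np.random.default_rng(3)
shape_params = rng.normal(size=(272, 20))
bmi = shape_params @ rng.normal(size=20) + 25.0 + rng.normal(scale=1.0, size=272)

model = LinearRegression().fit(shape_params, bmi)
print("predicted BMI for a new face:", round(model.predict(shape_params[:1])[0], 1))
```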

Dr. Stephen goes on to speculate about the evolutionary significance of the findings. He says, “The results suggest that our brains have evolved mechanisms for extracting health information from people’s faces, allowing us to identify healthy people to mate with or to form cooperative relationships with.” “This fills an important missing link in current evolutionary theories of attractiveness,” he adds. “The findings,” Dr. Stephen concludes, “provide strong support for the hypothesis that the face contains valid, perceptible cues to physiological health, and while the models are at an early stage, we hope that they could be used to help diagnose health problems in the future.”

_____

  1. Face Recognition on Drones:

Drones, also known as unmanned aerial vehicles (UAVs), are aircraft that can fly autonomously. They can easily reach locations that are too difficult or dangerous for human beings to reach, and collect images from a bird’s-eye view through aerial photography. Enabling drones to identify people on the ground is important for a variety of applications, such as surveillance, people search, and remote monitoring. Since faces are part of people’s inherent identities, how well face recognition technologies can be used by drones becomes essential for the future development of the above applications. Face recognition capability is undoubtedly key for drones to identify specific individuals within a crowd. For example, to adopt drones in the search for missing elderly people or children in the neighbourhood, the drones first need to know who the targets are, and only then can the search be launched. Thus, face recognition on drones would be a vital technical component in such applications; consequently, how well face recognition performs on drones is a research topic worth investigating. Of course, drone-based facial recognition can also be used to identify terrorists in remote areas. Current face recognition technologies are capable of recognizing faces on drones within some limits of distance and angle, especially when drones take pictures from high altitude and the face image is captured from a long distance and at a large angle of depression. Augmenting face models with 3D information may help to boost recognition performance in the case of large angles of depression.

Through empirical studies on Face++ and ReKognition, it was found that present face recognition technologies are able to perform adequately on drones. However, some obstacles need to be overcome before such techniques can unleash their full potential:

-1. The small-sized facial images taken by drones from long distances do cause trouble to both face detection and recognition.

-2. The pose variances introduced by large angles of depression dramatically weaken the capability of both face detection and recognition.

-3. A recognition model augmented with 3D modelling techniques might increase the performance of face recognition in the case with large angles of depression. However, this augmentation may also decrease the distinguishability of faces in common cases, and thus requires further investigation.

In the future, since the size of facial images greatly influences the performance of face recognition, how the parameters of aerial cameras (e.g., resolution and compression rate) impact the performance of face recognition on drones should be further studied. Besides, cameras with a large FOV (field of view) not only capture wide scenes into pictures, but also generate distortions at the margins of the pictures; compensating for the negative influences caused by such distortions is also worthy of investigation. Last but not least, although current face recognition techniques are workable on drones, applying online services such as Face++ and ReKognition directly on drones may be practically infeasible: constraints on network bandwidth, batteries, and the computational power of the embedded system carried by the drone limit how face recognition can be applied in this scenario. Developing a drone-based face recognition system that balances accuracy, computation, network transmission, and power consumption will be part of future research.
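How quickly face size shrinks with distance can be estimated with simple pinhole-camera geometry; all camera numbers in this sketch are illustrative assumptions.

```python
def face_height_px(face_m, distance_m, focal_mm, sensor_mm, image_px):
    """Pinhole-camera estimate of how many pixels a face spans at a given
    distance. All camera numbers used below are illustrative assumptions."""
    focal_px = focal_mm * image_px / sensor_mm
    return focal_px * face_m / distance_m

# A face is roughly 0.25 m tall; assume a 4.5 mm lens on a 6.3 mm sensor
# producing frames 3000 pixels tall.
for d in (5, 20, 50, 100):
    print(f"{d:4d} m -> face spans {face_height_px(0.25, d, 4.5, 6.3, 3000):6.1f} px")
```

At 5 m the face spans roughly a hundred pixels, but by 100 m it is only a few pixels tall, which is why long-distance, high-altitude captures trouble both detection and recognition.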

_____

  1. Army builds Face Recognition Technology that works in Low-Light Conditions:

An artificial intelligence and machine learning method formulated by army researchers generates a visible face image from a thermal image of a person’s face captured in night-time or low-light conditions. This development could result in improved real-time biometrics and post-mission forensic analysis for secret night-time operations.

_

Figure above shows a conceptual illustration for thermal-to-visible synthesis for interoperability with existing visible-based facial recognition systems.

_

Thermal cameras like Forward Looking Infrared (FLIR) sensors are dynamically deployed on ground and aerial vehicles, in watchtowers and at checkpoints for surveillance purposes. Of late, thermal cameras are becoming available for use as body-worn cameras. The ability to perform automatic face recognition at night-time using such thermal cameras is advantageous for informing a soldier that a particular person is someone of interest (meaning a person who may be on a watch list). The motivations for this technology — created by Drs. Benjamin S. Riggan, Nathaniel J. Short and Shuowen “Sean” Hu, from the U.S. Army Research Laboratory — are to improve both automatic and human-matching capabilities. “This technology enables matching between thermal face images and existing biometric face databases/watch lists that only contain visible face imagery,” said Riggan, a research scientist. “The technology provides a way for humans to visually compare visible and thermal facial imagery through thermal-to-visible face synthesis.” He said under low-light and night-time conditions, there is scarce light for a conventional camera to capture facial imagery for recognition without active illumination such as a spotlight or flash, which would reveal the position of such surveillance cameras; however, thermal cameras that capture the heat signature naturally radiating from living skin tissue are perfect for such conditions.

When using thermal cameras to capture facial imagery, the main challenge is that the captured thermal image must be matched against a watch list or gallery that only contains conventional visible imagery from known persons of interest. Therefore, the problem becomes what is referred to as cross-spectrum, or heterogeneous, face recognition. In this case, facial probe imagery acquired in one modality is matched against a gallery database acquired using a different imaging modality. This method leverages modern domain adaptation methods based on deep neural networks. The primary approach is made up of two main parts: a non-linear regression model that maps a particular thermal image into a corresponding visible latent representation, and an optimization problem that projects the latent representation back into the image space.
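A heavily simplified stand-in for this cross-spectrum mapping is sketched below. Where the ARL researchers used a non-linear deep regression plus an image-space optimization, this toy fits a linear least-squares map between synthetic embedding vectors, just to show the structure of the problem.

```python
import numpy as np

# Simplified stand-in for cross-spectrum synthesis: learn a mapping from
# thermal-face embeddings to visible-face embeddings by least squares.
# (ARL used a non-linear deep regression plus an image-space optimization;
# all vectors here are synthetic.)
rng = np.random.default_rng(1)
n, dim = 500, 64
thermal = rng.normal(size=(n, dim))                    # paired training set
true_map = rng.normal(size=(dim, dim))
visible = thermal @ true_map + 0.05 * rng.normal(size=(n, dim))

W, *_ = np.linalg.lstsq(thermal, visible, rcond=None)  # fit the mapping

probe = rng.normal(size=(1, dim))                      # night-time thermal probe
synth = probe @ W              # synthesized "visible" code to match the gallery
print("agreement with ground truth:",
      round(float(np.corrcoef((probe @ true_map).ravel(), synth.ravel())[0, 1]), 3))
```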

A technical paper titled “Thermal to Visible Synthesis of Face Images using Multiple Regions” showcased the research at the IEEE Winter Conference on Applications of Computer Vision (WACV) in Lake Tahoe, Nevada, in March 2018. The conference brought together scientists and scholars from industry, academia, and government. At the conference, the army researchers showed that integrating global information, such as characteristics from across the whole face, with local information, such as features from discriminative fiducial regions (for instance, eyes, mouth, and nose), improved the discriminability of the synthesized imagery. They demonstrated how the thermal-to-visible mapped representations from both local and global regions in the thermal face signature could be used in combination to synthesize a refined visible face image. The optimization problem for synthesizing an image attempts to preserve equally the shape of the whole face and the appearance of the local fiducial points. Using the synthesized thermal-to-visible imagery and current visible gallery imagery, they performed face verification experiments using a common open-source deep neural network architecture for face recognition. The architecture used is openly designed for visible-based face recognition. The most striking result is that their approach achieved better verification performance than a generative adversarial network-based technique, which had previously shown photo-realistic properties. The technique developed by ARL preserves identity information to improve discriminability, for instance, better recognition accuracy for both automatic face recognition algorithms and human adjudication.

As part of the paper presentation, the Army Research Laboratory (ARL) researchers exhibited a near real-time demonstration of this technology. The proof-of-concept demonstration incorporated the use of a FLIR Boson 320 thermal camera and a laptop running the algorithm in near real-time. This demonstration showed the audience that a captured thermal image of a person can be used to create a synthesized visible image in situ. This work received the best paper award in the faces/biometrics session of the conference, out of over 70 papers presented. Going forward, Riggan said he and his colleagues will continue this research under the sponsorship of the Defense Forensics and Biometrics Agency to develop a powerful night-time face recognition capability for soldiers.

_____

  1. Human plus machine – face recognition at its best:

Face recognition accuracy of forensic examiners, super-recognizers, and face recognition algorithms, a 2018 study:

The first study to compare the performances of trained forensic facial examiners, people known as super-recognisers who have a natural talent for face identification, and facial-recognition computer algorithms, has revealed that a combination of human and computer decision-making is most accurate.

The study, by a team of scientists from the National Institute of Standards and Technology in the US and three universities including UNSW Sydney, is published in the Proceedings of the National Academy of Sciences.

“Experts in face identification often play a crucial role in criminal cases,” says study team member, UNSW psychologist Dr David White. “Deciding whether two images are of the same person, or two different people, can have profound consequences. “When facial comparison evidence is presented in court, it can determine the outcome of a criminal trial. Errors on these decisions can potentially set a guilty person free, or wrongly convict an innocent person,” he says.

The international study involved a total of 184 participants from five continents – a large number for an experiment of this type.  Eighty-seven were trained professional facial examiners, while 13 were super-recognisers – people with exceptional natural ability, but no training. The remaining 84 were control participants with no special training or natural ability, including 53 fingerprint examiners and 31 undergraduate students. Participants received pairs of face images and rated the likelihood of each pair being the same person on a seven-point scale. The research team intentionally selected extremely challenging pairs, using images taken with limited control of illumination, expression and appearance. They then tested four of the latest computerised facial recognition algorithms, all developed between 2015 and 2017, using the same image pairs.

“As a group, trained forensic examiners outperformed the other groups,” says Dr White. “Another important insight from the study was that the most advanced facial-recognition algorithms are now as accurate as the very best humans. “However, the results with people showed large variation in accuracy of individuals in all the groups tested. This ranged from near random guessing, with an accuracy of about 50%, to a perfect score of 100%. “This variability is a problem, because it is common practice for just one examiner to present face identification decisions in court,” says Dr White. The study found that combining several examiners’ opinions produced higher accuracy than one examiner working alone, and led to less variability in accuracy compared to individual responses. “But the surprising best solution to the problem of individual variability is to combine the responses from one examiner with the responses from the best algorithm. A combination of human and computer decision-making leads to the most accurate results,” Dr White says.

The results of the study point to tangible ways to maximize face identification accuracy by exploiting the strengths of humans and machines working collaboratively.  To optimize the accuracy of face identification, the best approach is to combine human and machine expertise. Fusing the most accurate machine with individual forensic facial examiners produced decisions that were more accurate than those arrived at by any pair of human and/or machine judges. This human–machine combination yielded higher accuracy than the fusion of two individual forensic facial examiners. Computational theory indicates that fusing systems works best when their decision strategies differ. Therefore, the superiority of human–machine fusion over human–human fusion suggests that humans and machines have different strengths and weaknesses that can be exploited/mitigated by cross-fusion.
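One simple way to realize such human-machine fusion is to bring both sets of judgments onto a common scale and average them. The ratings below are hypothetical, and the study’s exact fusion rule may differ from this sketch.

```python
import numpy as np

def zscore(x):
    return (x - x.mean()) / x.std()

def fuse(examiner_ratings, algorithm_scores):
    # Average after z-normalization so the 7-point human scale and the
    # algorithm's similarity scores are comparable.
    return (zscore(examiner_ratings) + zscore(algorithm_scores)) / 2.0

# Hypothetical judgments over six image pairs (higher = more likely same person):
examiner = np.array([6, 2, 5, 1, 7, 3], dtype=float)       # 7-point scale
algorithm = np.array([0.81, 0.35, 0.62, 0.22, 0.93, 0.55])
print(fuse(examiner, algorithm))
```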

_____

  1. Brain-machine interface for facial recognition:

Cognitive biometrics is a novel approach to user authentication and/or identification that utilises the response(s) of nervous tissue. Cognitive biometrics relies on the response of the subject when they are presented with a particular stimulus such as a familiar photograph, a song or a puzzle. This novel approach to user authentication and/or identification is based on technologies and methods that measure signals generated directly or indirectly by human thought processes. The biological signals representative of the mental and emotional states of the user can be recorded using a variety of methods, such as the electroencephalogram (EEG), the electrocardiogram (ECG), the electrodermal response (EDR), eye trackers (pupilometry), and the electromyogram (EMG), among others. The validation of the user is then based on the matching of their response to the stimulus with a pre-recorded ECG, EEG or other metric. The stimuli are designed to elicit responses that are sensitive to the individual’s genetic predispositions, modulated by subjective experiences. Provided the proper stimuli are presented, the stimulus-response paradigm provides a powerful methodology for evaluating the authenticity of the subject requesting authentication.

Recent breakthroughs using noninvasive functional transcranial Doppler spectroscopy as demonstrated by Njemanze, to locate specific responses to facial stimuli have led to improved systems for facial recognition. The new system uses input responses called cortical long-term potentiation (CLTP) derived from Fourier analysis of mean blood flow velocity to trigger target face search from a computerized face database system. Such a system provides for brain-machine interface for facial recognition, and the method has been referred to as cognitive biometrics.

_____

  1. Combining Multiple CCTV Images could help catch suspects, a 2018 study:

Combining multiple poor quality CCTV images into a single, computer-enhanced composite could improve the accuracy of facial recognition systems used to identify criminal suspects, new research suggests.

Psychologists from the universities of Lincoln and York, both in the UK, and the University of New South Wales in Australia created a series of pictures using a ‘face averaging’ technique – a method which digitally combines multiple images into a single enhanced image, removing variants such as head angles or lighting so that only features that indicate the identity of the person remain. They compared how effectively humans and computer facial recognition systems could identify people from high quality images, pixelated images, and face averages. The results showed that both people and computer systems were better at identifying a face when viewing an average image that combined multiple pixelated images, compared to the original poor-quality images. Computer systems benefited from averaging together multiple images that were already high in quality, and in some cases reached 100 per cent accurate face recognition.
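Once the crops are aligned, face averaging itself is computationally trivial: a pixel-wise mean. A minimal sketch, assuming the face crops are already registered to a common size and eye position:

```python
import numpy as np

def face_average(aligned_faces):
    """Pixel-wise mean of multiple aligned face crops. Identity-relevant
    structure survives while pose, lighting and pixelation noise tend to
    cancel out. Assumes crops are already registered to a common size and
    eye position."""
    stack = np.stack([f.astype(np.float64) for f in aligned_faces])
    return stack.mean(axis=0).astype(np.uint8)

# Hypothetical usage: combine several poor CCTV crops of the same suspect.
# avg = face_average([crop1, crop2, crop3])
```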

The results have implications for law enforcement and security agencies, where low quality, pixelated images are often the only pictures of suspects available to use in investigations. The image averaging method offers a standardised way of using images captured from multiple CCTV cameras to create a digital snapshot which can be better recognised by both people and computer software systems. Dr Kay Ritchie, from the University of Lincoln’s School of Psychology, led the study. She said: “We know that not all CCTV systems have the luxury of high quality cameras, meaning that face identifications are often being made from poor quality images. We have shown that there is a relatively quick and easy way to improve pixelated images of someone’s face. We also know anecdotally that there are lots of different techniques that people can use as investigative tools to improve low-quality images, such as manipulating brightness. Our standardised face averaging method could help in suspect identification from low-quality CCTV footage where images from multiple different cameras are available, for example, from tracking a suspect along a particular route.”

In the study, participants were asked to compare a high quality image with either a low quality pixelated image or one created using the image averaging method, and determine whether they depicted the same person or two different people. Results showed that accuracy was significantly higher when viewing an average combining pixelated images, rather than a single pixelated image. The same test images were run through two separate computer recognition programmes, one a smart phone application, and the other a commercial facial recognition system widely used in forensic settings. Both computerised systems showed higher levels of accuracy in identifying a person from average images.

______

  1. Can human facial recognition guide computer facial recognition?

Research in face recognition has attracted scientists from a very wide range of disciplines. Broadly, research projects divide into those concerned with investigating the mechanisms underlying human face recognition, and those that aim to automate the process for applied reasons. Automatic face-recognition systems need not be constrained to mimic human processes, though some of the most popular techniques currently available do claim to capture some aspects of human face processing.  There is no necessary link between techniques developed by engineers to automate face recognition, and natural mechanisms used by the human visual system to achieve the same end.

Psychological research over the past 20 years or so has shown that human vision processes upright faces in a ‘holistic’ or configural way, rather than as a set of independent facial features. However, neuroscientists are now rethinking how the brain recognizes faces. Brain cells in monkeys are tuned to react to specific combinations of features, rather than to a whole face. People can pick a familiar face out of a crowd without thinking too much about it. But how the brain actually does this has eluded researchers for years. Now, a study shows that rhesus macaque monkeys rely on the coordination of a group of hundreds of neurons that pay attention to certain sets of physical features to recognize a face.

The Code for Facial Identity in the Primate Brain, a 2017 study:

Each time you scroll through Facebook, you’re exposed to dozens of faces—some familiar, some not. Yet with barely a glance, your brain assesses the features on those faces and fits them to the corresponding individual, often before you even have time to read who’s tagged or who posted the album. Research shows that many people recognize faces even if they forget other key details about a person, like their name or their job. That makes sense: As highly social animals, humans need to be able to quickly and easily identify each other by sight. But how exactly does this remarkable process work in the brain?

That was the question vexing Le Chang, a neuroscientist at the California Institute of Technology, in 2014. In prior research, his lab director had already identified neurons in the brains of primates that processed and recognized faces. These six areas in the brain’s temporal lobe, called “face patches,” contain specific neurons that appear to be much more active when a person or monkey is looking at a face than other objects. “But I realized there was a big question missing,” Chang says. That is: how the patches recognize faces. “People still didn’t know the exact code of faces for these neurons.”

In search of the method the brain uses to analyze and recognize faces, Chang decided to break down the face mathematically. He created nearly 2,000 artificial human faces and broke down their component parts by categories encompassing 50 characteristics that make faces different, from skin color to amount of space between the eyes. Then he implanted electrodes into two rhesus monkeys to record how the neurons in their brain’s face patches fired when they were shown the artificial faces. By then showing the monkeys thousands of faces, Chang was able to map which neurons fired in relation to which features were on each face, he reports in a study published in the journal Cell.

The results show that each neuron associated with facial recognition, called a face cell, pays attention to specific ranked combinations of facial features. It turned out that each neuron in the face patches responded in certain proportions to only one feature or “dimension” of what makes faces different. This means that, as far as your neurons are concerned, a face is a sum of separate parts, as opposed to a single structure. Chang notes he was able to create faces that appeared extremely different but produced the same patterns of neural firing because they shared key features.

Understanding this pattern of neural firing allowed Chang to create an algorithm with which he could reverse-engineer the face a monkey was seeing from the firing patterns of just 205 neurons, without knowing in advance which face the monkey was viewing. Like a police sketch artist working with a witness to combine facial features, he was able to take the features suggested by the activity of each individual neuron and combine them into a complete face. In nearly 70 percent of cases, human judges drawn from the crowdsourcing website Amazon Mechanical Turk matched the original face and the recreated face as being the same. “People always say a picture is worth a thousand words,” co-author neuroscientist Doris Tsao said. “But I like to say that a picture of a face is worth about 200 neurons.”

Tsao and Chang recorded responses from a total of 205 neurons between the two monkeys. Each neuron responded to a specific combination of some of the facial parameters.  “They have developed a model that goes from a picture on a computer screen to the responses of neurons way the heck down in the visual cortex,” says Greg Horwitz, a visual neurophysiologist at the University of Washington in Seattle. “This takes a huge step forward,” he says, because the model maps out how each cell responds to all possible combinations of facial features, instead of just one. Tsao and Chang wondered whether, within the specific combination of characteristics that a face cell recognized, each neuron was better tuned to particular features than to others. They tested this idea by trying to recreate the faces the monkeys were shown, on the basis of each neuron’s response to its cast of characteristics. Based on the strength of those signals, the neuroscientists could recreate the real faces almost perfectly. When the monkeys saw faces that varied according to features that a neuron didn’t care about, the individual face cell’s response remained unchanged. In other words, “the neuron is not a face detector, it’s a face analyser”, says Leopold. The brain “is able to realize that there are key dimensions that allow one to say that this is Person A and this is Person B.” Human brains probably use this code to recognize or imagine specific faces, says Tsao.
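The reported linear face code lends itself to a compact sketch: if each cell’s firing rate is approximately a linear projection of a 50-dimensional face-feature vector, the features can be decoded from the population response by least squares. All numbers below are synthetic, not the study’s recordings.

```python
import numpy as np

# Synthetic sketch of a linear face code: each cell's firing rate is roughly
# a linear projection of a 50-dimensional face-feature vector, so the
# features can be decoded from the population response by least squares.
rng = np.random.default_rng(2)
n_faces, n_dims, n_cells = 2000, 50, 205
features = rng.normal(size=(n_faces, n_dims))      # 50 shape/appearance dims
tuning = rng.normal(size=(n_dims, n_cells))        # each cell's preferred axis
rates = features @ tuning + 0.1 * rng.normal(size=(n_faces, n_cells))

decoder, *_ = np.linalg.lstsq(rates, features, rcond=None)

new_face = rng.normal(size=(1, n_dims))            # an unseen face
response = new_face @ tuning                       # its population response
err = float(np.abs(response @ decoder - new_face).max())
print(f"max decoding error across the 50 features: {err:.3f}")
```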

Bevil Conway, a neuroscientist at the National Eye Institute, said the new study impressed him. “It provides a principled account for how face recognition comes about, using data from real neurons,” says Conway, who was not involved in the study. He added that such work can help us develop better facial recognition technologies, which are currently notoriously flawed. Sometimes the result is laughable, but at other times the algorithms these programs rely on have been found to have serious racial biases.

In the future, Chang sees his work as potentially being used in police investigations to profile potential criminals from witnesses who saw them. Ed Connor, a neuroscientist at Johns Hopkins University, envisions software that could be developed to adjust features based on these 50 characteristics. Such a program, he says, could allow witnesses and police to fine-tune faces based on the characteristics humans use to distinguish them, like a system of 50 dials that witnesses could turn to morph faces into the ones they remember most. “Instead of people describing what others look like,” Chang speculates, “we could actually directly decode their thoughts.”

“The authors deserve kudos for helping to drive this important area forward,” says Jim DiCarlo, a biomedical engineer at MIT who researches object recognition in primates. However, DiCarlo, who was not involved in the study, thinks that the researchers don’t adequately prove that just 200 neurons are needed to discriminate between faces. In his research, he notes, he’s found that it takes roughly 50,000 neurons to distinguish objects in a more realistic way, but still less realistic than faces in the real world. Based on that work, DiCarlo estimates that recognizing faces would require somewhere between 2,000 and 20,000 neurons even to distinguish them at a rough quality. “If the authors believe that faces are encoded by nearly three orders of magnitude less neurons, that would be remarkable,” he says. “Overall, this work is a nice addition to the existing literature with some great analyses,” DiCarlo concludes, “but our field is still not yet at a complete, model-based understanding of the neural code for faces.”

Connor, who also wasn’t involved in the new research, hopes this study will inspire new research among neuroscientists. Too often, he says, this branch of science has dismissed the more complex workings of the brain as akin to the “black boxes” of computer deep neural networks: so messy as to be impossible to understand how they work. “It’s hard to imagine anybody ever doing a better job of understanding how face identity is encoded in the brain,” says Connor of the new study. “It will encourage people to look for sometimes specific and complex neural codes.” He’s already discussed with Tsao the possibility of researching how the brain interprets facial expressions. “Neuroscience never gets more interesting than when it is showing us what are the physical events in the brain that give rise to specific experiences,” Connor says. “To me, this is the Holy Grail.”

_________

_________

Moral of the story:

_

  1. The human face is undoubtedly the most common characteristic used by humans to recognize other people. From birth, face recognition is performed routinely and effortlessly by humans. An infant innately responds to face shapes at birth and can discriminate his or her mother’s face from a stranger’s at the tender age of 45 hours. Recognizing and identifying people is a vital survival skill, as is reading faces for evidence of ill-health or deception. On average, about 5000 faces are remembered and recognised by humans. Humans are very good at recognizing faces of familiar people. However, they aren’t so good at recognizing unfamiliar people.

_

  1. A biometric is a unique, measurable, physiological or behavioural characteristic of a human being that can be used to automatically recognize an individual or verify an individual’s identity. Proper biometric use is very application dependent. Certain biometrics will be better than others based on the required levels of convenience and security. No single biometric will meet all the requirements of every possible application.

_

  1. Although Password/PIN systems and Token systems are still the most common person verification and identification methods, trouble with forgery, theft and lapses in users’ memory poses a very real threat to high-security environments, which are now turning to biometric technologies. Since biometric identifiers are unique to individuals, they are more reliable in verifying identity than token and knowledge-based methods. Biometric technologies are based not only on the faith that ‘the body doesn’t lie’ [i.e. it is very difficult or impossible to falsify biometric characteristics], but also on automated systems of identification that are accurate, reliable, and efficient.

_

  4. Computer vision is concerned with the automatic extraction, analysis and understanding of useful information from a single image or a sequence of images. It involves the development of a theoretical and algorithmic basis to achieve automatic visual understanding. Computer vision is an exciting part of artificial intelligence in which we attempt to get intelligence out of visual data. That intelligence can take the form of scene/object detection, face detection, face recognition, or facial analysis.

_

  5. Face detection is a computer technology that identifies the presence of human faces within digital images, since images may contain numerous objects that are not faces, such as landscapes, buildings and other parts of humans (e.g. legs, shoulders and arms). Face recognition describes a biometric technology that actually attempts to establish whose face it is. Face detection is essentially the first step of face recognition, distinguishing human faces from other objects in the image, while facial recognition is the process of identifying or verifying the person. Face detection is an indivisible part of face recognition: if there is no facial detection, there can be no facial recognition. Face detection has several applications, only one of which is facial recognition; a minimal detection sketch follows.
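
As a rough illustration, here is a minimal face-detection sketch using OpenCV’s bundled Haar cascade. The image path is a placeholder, and note that detection only locates faces; it says nothing about whose faces they are:

```python
# Minimal face *detection* sketch (not recognition) with OpenCV's
# bundled Haar cascade. "photo.jpg" is a placeholder path.
import cv2

img = cv2.imread("photo.jpg")                 # load image from disk
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # the cascade works on grayscale
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                    # one bounding box per detected face
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"{len(faces)} face(s) detected")
```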

_

  6. Facial recognition (also known as face recognition) is a biometric software application capable of uniquely identifying (recognising) or verifying (authenticating) a person by comparing and analysing patterns based on the person’s facial contours, using the spatial geometry of distinguishing features of the face. It is a form of computer vision that uses the face to identify or to authenticate a person. A face recognition system automatically identifies or verifies a person from a video frame or digital image by mapping out facial features mathematically and matching them against a facial database.

_

  7. Facial recognition is based on capturing an image of a face, extracting features, comparing them to images in a database, and identifying matches. As a computer cannot see the way a human eye can, it needs to convert images into numbers representing the various features of a face. The set of numbers representing one face is then compared with the numbers representing another face. The quality of the computer recognition system depends on the quality of the image and on the mathematical algorithms used to convert a picture into numbers.

_

  8. Face recognition systems use computer algorithms to pick out specific, distinctive details about a person’s face. These details, such as the distance between the eyes, the depth of the eye sockets, the width of the nose, the cheekbones and the jawline, are then converted into a mathematical representation and stored as a face template or faceprint. This faceprint is compared with the data of other faces collected in a face recognition database. The numerical representation of a face is termed a feature vector, which comprises various numbers in a specific order. Currently, most face recognition techniques use a feature extractor that simply outputs the feature vector without any post-processing; here, the feature vector is the faceprint. When two images represent the same person, the feature vectors derived from them will be quite similar; put another way, the “distance” between the two feature vectors will be quite small (see the sketch below). Machine learning algorithms derive the feature vector from a digital image and also match it against a pre-existing feature vector database. Deep learning with convolutional neural network (CNN) algorithms shows the best results in analysing visual imagery, due to its ability to take into account the two-dimensional topology of the image. The most advanced facial recognition algorithms are now claimed to be as accurate as the very best humans.
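
To make the “distance between feature vectors” idea concrete, here is a minimal sketch; the 128-dimensional random vectors merely stand in for the embeddings a real CNN feature extractor would produce:

```python
# Sketch: comparing two faceprints (feature vectors) by distance.
# Random vectors stand in for real CNN embeddings here.
import numpy as np

def euclidean_distance(a, b):
    return np.linalg.norm(a - b)

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

face_a = np.random.rand(128)  # hypothetical 128-D faceprint of person A
face_b = np.random.rand(128)  # hypothetical faceprint, same or different person

# A small distance (or high cosine similarity) suggests the same person.
print("distance:  ", euclidean_distance(face_a, face_b))
print("similarity:", cosine_similarity(face_a, face_b))
```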

_

  9. The more images you have of someone’s face, the more accurate the mathematical model of that face you can produce. If you only have one very good frontal shot of a person, that may not be as good as having 10 photos of that person’s face from slightly different angles. Also, combining multiple poor-quality CCTV images into a single, computer-enhanced composite could improve the accuracy of facial recognition systems used to identify criminal suspects. One common way to exploit multiple images is to fuse their feature vectors into a single template, as sketched below.
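
A minimal sketch of such template fusion, under the same stand-in-embedding assumption as before, simply averages the feature vectors of several shots:

```python
# Sketch: fusing several images of one person into a single template
# by averaging their (hypothetical) embeddings.
import numpy as np

embeddings = [np.random.rand(128) for _ in range(10)]  # 10 shots, varied angles
template = np.mean(embeddings, axis=0)                 # averaged faceprint
template /= np.linalg.norm(template)                   # renormalize to unit length
```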

_

  10. In the context of facial recognition, it is common to distinguish between the problem of authentication (verification) and that of recognition (identification). In authentication, one-to-one matching occurs: the face recognition algorithm compares a given face with a given template and verifies their equivalence; for example, at an automatic teller machine (ATM). In recognition, one-to-many matching occurs: the algorithm compares a given face with all the templates stored in the database; for example, finding a terrorist in a crowd. Once an image is included in the database, stored surveillance data can be searched for occurrences of that image with a few keystrokes; searching videotape for evidence, by contrast, is extremely time-consuming. The two modes are contrasted in the sketch after the questions below.

Identification (recognition) answers the question: Who are you?

Authentication (verification) answers the question: Are you really who you say you are?
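
A minimal sketch contrasting the two modes, again using random stand-in vectors and an arbitrary illustrative threshold:

```python
# Sketch: 1:1 verification vs. 1:N identification over a gallery.
import numpy as np

def distance(a, b):
    return np.linalg.norm(a - b)

def verify(probe, claimed_template, threshold=0.6):
    """Are you really who you say you are? (one-to-one)"""
    return distance(probe, claimed_template) < threshold

def identify(probe, gallery, threshold=0.6):
    """Who are you? (one-to-many) Returns the best match or None."""
    dists = {name: distance(probe, t) for name, t in gallery.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] < threshold else None

gallery = {"alice": np.random.rand(128), "bob": np.random.rand(128)}
probe = np.random.rand(128)          # stand-in for a freshly captured faceprint
print(verify(probe, gallery["alice"]))
print(identify(probe, gallery))
```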

_

  11. Because facial recognition promises what we might call “the grand prize” of identification, namely the reliable capacity to pick out or identify the “face in the crowd,” it holds the potential of spotting a known assassin among a crowd of well-wishers or a known terrorist surveying areas of vulnerability such as airports or public utilities. Remember, an unknown criminal or terrorist cannot be identified with face recognition, because their faceprint is not stored in any database. Face recognition can only recognize persons whose images have been enrolled in the database: a scan of a person’s face is useless if there are no photos (faceprints) of them in the database. Without a match, the identity of the person behind the face scan remains a mystery.

_

  12. Traditional facial recognition algorithms can be divided into two main approaches: geometric, which relies on the shape and position of the facial features; and photometric, a statistical approach that distils an image into values and compares those values with templates to eliminate variances. Some classify these algorithms into two broad categories, holistic and feature-based models: the former attempts to recognize the face in its entirety, while the latter subdivides the face into components (features) and analyzes each one as well as its spatial location with respect to other features. State-of-the-art feature extraction algorithms include 2D holistic, 2D local, 3D holistic, and 3D local feature extraction.

_

  13. The greatest difficulty of face recognition, compared to other biometrics, stems from the immense variability of the human face. Facial appearance depends heavily on environmental factors, for example the lighting conditions, background scene and head pose. It also depends on facial hair, the use of cosmetics, glasses, jewellery and piercings. Last but not least, plastic surgery or long-term processes like aging and weight gain can have a significant influence on facial appearance. Even if we hypothetically assume that all these factors do not exist, for example that the facial image is always acquired under the same illumination and pose, and with the same haircut and make-up, still, the variability in a facial image due to facial expressions may be even greater than a change in the person’s identity. Overcoming this host of variables in unconstrained settings is the biggest challenge in facial recognition.

_

  14. The database image that provides a true recognition today may not provide the same recognition after 5 or 10 years, owing to changes in a person’s face as age advances. Updating database images regularly can therefore improve the recognition accuracy rate.

_

  15. Most current methods of face recognition are 2-dimensional (2D): they use a flat image of a face. But the face is a 3D object! 3D methods are also being developed and some are already available commercially. The main difference in 3D analysis is the use of the shape of the face, thus adding information to the final template. 2D face recognition represents a face by intensity variation, while 3D face recognition represents a face by shape variation. A 2D face scan reads only 20 to 80 nodal points on the face, but a 3D face scan reads 5,000 to 100,000 points. 3D facial recognition avoids pitfalls of 2D algorithms such as changes in lighting, differing facial expressions, make-up and head orientation. Today, 99% of the camera infrastructure scattered around the world consists of 2D cameras capable of running advanced facial recognition software, and it will take years before a physical overhaul to 3D cameras takes place. 3D face recognition is still an active research field, though several vendors offer commercial solutions.

_

  16. Traditional 2D face recognition methods are based on visible-spectrum light and cannot work at night or in the dark. Thermal cameras using infrared light can capture facial imagery even in low-light and night-time conditions without using a flash. Infrared light is invisible to the human eye but creates a day-like environment for the thermal cameras. Apple’s 3D face recognition system Face ID on its smartphones uses infrared to scan your face, so it works in low lighting conditions and in the dark.

_

  17. Facial recognition technology is useful in security, surveillance, access control, law enforcement, forensic science, identification, verification, tagging on social media, search engines, time attendance, payments, healthcare, schooling, retail, advertising and marketing. Face recognition is also useful in human computer interaction, pervasive computing, virtual reality, database recovery, multimedia and computer entertainment.

_

  18. About 3000 missing children were found in just four days using face recognition in India!

_

  19. Surveillance of animals with facial recognition technology helps distressed owners find their lost pets, aids animal conservation efforts, helps fight diseases in animals, and helps ward off poachers.

_

  20. The facial recognition market generated $17.91 billion in 2017. Technological advancements in facial recognition biometrics, more mobile devices equipped with cameras and the rising popularity of media cloud services are projected to push the facial recognition market to $86 billion in annual revenue by 2025.

_

  21. A facial recognition system (FRS) can generate two types of errors: false positives (generating an incorrect match) and false negatives (failing to generate a match when one exists).

_

  22. FAR (False Accept Rate) measures the percentage of invalid inputs that are incorrectly accepted, i.e. false positives. FRR (False Reject Rate) measures the percentage of valid inputs that are incorrectly rejected, i.e. false negatives. The matching algorithm makes a decision based on a threshold that determines how close to a template the input needs to be for it to be considered a match. If the threshold is reduced, there will be fewer false negatives (lower FRR) but more false positives (higher FAR); conversely, a higher threshold will reduce the FAR but increase the FRR. Setting the threshold too low results in too many false positives: an innocent person is subjected to scrutiny based on resemblance to a person on the watch list. Setting the threshold too high results in too many false negatives: a terrorist is not recognized due to differences in appearance between the gallery and probe images of the terrorist. The sketch below illustrates the trade-off.
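
A small sketch of this trade-off using hypothetical similarity scores (higher means a closer match); moving the threshold trades FAR against FRR:

```python
# Sketch: FAR/FRR trade-off as the decision threshold moves.
# All scores below are hypothetical similarity values.
import numpy as np

genuine  = np.array([0.81, 0.92, 0.74, 0.88, 0.67])  # same-person comparisons
impostor = np.array([0.35, 0.52, 0.61, 0.28, 0.44])  # different-person comparisons

for threshold in (0.3, 0.5, 0.7):
    far = np.mean(impostor >= threshold)  # impostors wrongly accepted
    frr = np.mean(genuine < threshold)    # genuine users wrongly rejected
    print(f"threshold={threshold}: FAR={far:.0%}, FRR={frr:.0%}")
```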

_

  23. An accurate FRS should have both a low FAR and a low FRR. Both matter, but for most applications one of them is considered more important than the other.

Two examples illustrate this:

-1. When biometrics are used for logical or physical access control, the objective of the application is to disallow access to unauthorized individuals under all circumstances. It is clear that a very low FAR is needed for such an application, even if it comes at the price of a higher FRR.

-2. When surveillance cameras are used to screen a crowd of people for missing children, the objective of the application is to identify any missing children that come up on the screen. When the identification of those children is automated using face recognition software, the software has to be set up with a low FRR. As a result, a higher number of matches will be false positives, but these can be reviewed quickly by surveillance personnel. Likewise, if you do not want to miss a terrorist at an airport, set a low threshold so that there will be very few false negatives (low FRR), albeit at the cost of a high false positive rate (high FAR): some innocents will have to undergo scrutiny to catch a terrorist.

_

  24. The perfect facial recognition system would give 100% verification (authentication) and 100% identification (recognition), with no false negatives and no false positives. Such a system does not exist. Facial recognition algorithms used by the FBI are inaccurate almost 15% of the time and are more likely to misidentify female and black people.

_

  25. Compared to other biometric techniques, facial recognition may not be the most reliable and efficient. Among all biometric systems, facial recognition has the highest false acceptance and rejection rates; thus questions have been raised about the effectiveness of face recognition software in surveillance and security. However, facial recognition is the only biometric that allows identification at a distance of many meters, requiring neither the knowledge nor the cooperation of the subject. These features have made it a favourite for a range of security and law enforcement functions, as the targets of interest (terrorists/criminals) are likely to be highly uncooperative, actively seeking to subvert successful identification.

_

  26. Even if you deploy an accurate system (few false positives/negatives), mass surveillance of very large numbers of people means the system will miss a high proportion of suspects included in the photo database while flagging huge numbers of innocent people, thereby lessening vigilance, wasting precious manpower and creating a false sense of security (the arithmetic sketch below makes this concrete). Also, face recognition gets worse as the number of people in the database increases, because so many people in the world look alike: as the likelihood of similar faces increases, matching accuracy decreases. Humans are very good at distinguishing people who look similar, but computers are not that good right now.
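
A back-of-the-envelope sketch of this base-rate problem; all numbers are purely illustrative:

```python
# Base-rate sketch: even an accurate system floods operators with
# false alarms when suspects are rare. All numbers are illustrative.
crowd       = 1_000_000  # faces scanned per day
suspects    = 100        # actual watch-list members in that crowd
hit_rate    = 0.90       # chance a suspect is correctly flagged
false_alarm = 0.001      # chance an innocent person is flagged (0.1%)

true_hits  = suspects * hit_rate               # ~90 real alerts
false_hits = (crowd - suspects) * false_alarm  # ~1000 false alerts
precision  = true_hits / (true_hits + false_hits)
print(f"Only {precision:.0%} of alerts point at real suspects")
```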

_

  27. Facial recognition technology is designed to give intelligence and security agencies a powerful tool to deter crime and quickly identify terror and crime suspects. However, when two UK police forces, South Wales Police and the Metropolitan Police, used ‘live’ facial recognition at public events and in public spaces, they got accuracy rates as low as 2 to 9%. Systems are often advertised as having accuracy near 100%; this is misleading, as research studies often use much smaller sample sizes than would be necessary for large-scale applications. The Boston Marathon bombings revealed the limitations of facial recognition technology to the world: even though government databases contained pictures of both Boston suspects, the technology could not match surveillance footage to database images.

_

  28. Automatic face recognition systems do not yet live up to their name: they are not entirely automatic. Identification accuracy can be quite poor where image-capture conditions are not optimal or where images of a face are captured several years apart. To manage this uncertainty, in many applications algorithms present human users with a ‘candidate list’ displaying the highest-matching images returned from a database, ranked in order of similarity to the probe image (see the sketch below). It then falls to the human operator to review this candidate list and check for the presence or absence of matching identities. This human step curtails the accuracy of face recognition systems and potentially reduces benchmark estimates by 50% in operational settings, i.e. human operators pick the correct match out of the list only about half the time. Psychological research has shown that humans make large numbers of errors when matching photos of unfamiliar faces. Mere practice does not attenuate these limits, but the superior performance of trained examiners can improve the operational accuracy of face recognition systems. The best approach to maximum face recognition accuracy is to combine human and machine expertise, i.e. to fuse the most accurate facial recognition algorithm with an expert forensic facial examiner. This is because humans and machines have different strengths and weaknesses that can be exploited or mitigated by cross-fusion.
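
A minimal sketch of how such a ranked candidate list might be produced, using random stand-in templates:

```python
# Sketch: returning a ranked candidate list for a human examiner to review.
import numpy as np

def candidate_list(probe, gallery, k=5):
    """Return the k gallery identities closest to the probe, best first."""
    names = list(gallery)
    dists = np.array([np.linalg.norm(probe - gallery[n]) for n in names])
    order = np.argsort(dists)[:k]  # smallest distance = best match
    return [(names[i], float(dists[i])) for i in order]

gallery = {f"id_{i}": np.random.rand(128) for i in range(1000)}
probe = np.random.rand(128)        # stand-in for the probe faceprint
for name, d in candidate_list(probe, gallery):
    print(name, round(d, 3))       # the examiner checks these by eye
```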

_

  29. Experiments show that combining 2D and 3D technology yields better results: the combination of two technologies outperforms any single technology in a face recognition system. As every method has its advantages and disadvantages, technology companies have amalgamated traditional 2D facial recognition, 3D recognition and skin texture analysis to create recognition systems with higher rates of success. The combined technique is relatively insensitive to changes in expression, including blinking, frowning or smiling; can compensate for moustache or beard growth and the appearance of eyeglasses; and is uniform with respect to race and gender.

_

  30. When the public is regularly told that they are under constant video surveillance with advanced face recognition technology, this fear alone can reduce the crime rate, whether or not the face recognition system technically works. For example, if a face recognition system installed in an airport has only a 50% chance of correctly recognizing someone on the watch list when they appear at the airport, one might be tempted to call this technical performance a failure. However, a 50% chance of being identified might be sufficient to cause a terrorist to avoid the airport.

_

  31. At the current technological level, one-to-many face recognition with non-collaborative users is practically unsolvable: if one intentionally wishes not to be recognized, he or she can always deceive facial recognition technology. There are two main ways to avoid recognition: using counter-technology to defeat these systems, or altering one’s appearance. On the other hand, a facial recognition system can be fooled by impersonating somebody else using various spoofing techniques. Deliberately avoiding self-recognition generates a false negative; deliberately impersonating someone else generates a false positive.

_

  32. Current 2D/3D facial recognition technology is unable to distinguish between identical twins. Humans perform much better than commercial face recognition algorithms at distinguishing identical twins.

_

  33. Spying is getting harder because facial recognition technologies that catch terrorists can sometimes uncover foreign intelligence operations.

_

  34. When the person in the photo is a white man, facial recognition software is right 99 percent of the time. But the darker the skin, the more errors arise, up to nearly 35 percent for images of darker-skinned women. On the simple task of guessing the gender of a face, leading companies’ technology performed better on male faces than on female faces and especially struggled on the faces of dark-skinned African women. Western facial recognition software has problems recognizing black faces because its algorithms are usually written by white engineers, who dominate the technology sector, with code geared to focus on white faces and mostly tested on white subjects. On the other hand, facial recognition software is actually more accurate on Asian faces when it is created by firms in Asian countries. All this suggests that who makes the software strongly affects how it works.

_

  35. Facial recognition can become a tool to harass innocents. Although the FBI purports that its system can find the true candidate in the top 50 profiles 85% of the time, that is only the case when the true candidate exists in the database. If the candidate is not in the database, it is quite possible the system will still produce one or more potential matches, creating false positive results. These people, who are not the candidate, could then become suspects for crimes they did not commit. An inaccurate system like this shifts the traditional burden of proof away from the government and forces people to try to prove their innocence.

_

  36. Facial recognition is designed to operate at a distance, without the knowledge or consent of the person being identified. Individuals cannot reasonably prevent themselves from being identified by cameras that could be anywhere. Scanning or collecting facial recognition data without a person’s knowledge and consent violates the individual’s privacy. Acceptance of facial recognition and other biometric identification systems has generally been driven by security concerns and the belief that these technologies offer solutions; the privacy rights of society therefore need to be balanced against its security concerns. The U.S. Supreme Court has denied a right of privacy over facial features in public places. On the other hand, the facial scan itself becomes a security threat, because these characteristics cannot be changed once they are acquired: you cannot change your face the way you can change a password. Once someone has your faceprint, they can get your name, find your social networking account, and find and track you in the street, in the stores you visit and the government buildings you enter, along with your internet behaviour, your travel patterns, your social security number and the photos your friends post online. Concerns have been raised because a criminal or a stalker could thereby know one’s whereabouts, and the people with them, at any given time.

_

  37. If the biometric template of my face is used to gain access to a location, it will be difficult for me to argue that it was not me. Once my face has been digitally encoded, it can potentially be used to act ‘as if’ it were me; thus the security of biometric data is a pressing matter. A compromised PIN can be replaced, but there is no ready solution if someone steals your faceprint. If someone loses a credit card, they have the option to freeze it. What if you lose your face? Stores of face biometric data can be accessed by third parties if not stored properly, or if hacked.

_

  38. Facial recognition is largely unregulated. It seems important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse. Left unchecked, the technology evokes fears of a “Big Brother” surveillance state in which the government knows the whereabouts and activities of all citizens around the clock. It has chilling implications for our freedom, anonymity and privacy. Constant surveillance leads people to self-censor lawful behaviour. Surreptitiously, these measures curb our right to protest, right to dissent, freedom of speech and freedom of association. They shape our behaviour in ways that corrode the heart of our democratic freedoms and our liberal democracy.

_

  39. Humans are very good at identifying people from their images, and human face recognition performance is therefore often taken as a guideline for assessing face recognition algorithms. There is no necessary link between the techniques engineers develop to automate face recognition and the natural mechanisms the human visual system uses to achieve the same end, although some of the most popular techniques currently available do claim to capture aspects of human face processing. If we knew exactly how face recognition happens at the level of neurons in the brain, we could develop facial recognition technology that emulates it, overcoming significant flaws in current systems. Although automatic face recognition systems need not be constrained to mimic human processes, you always learn from someone who is performing better than you.

_____

Dr. Rajiv Desai. MD.

December 3, 2018

______

Postscript:

I am wondering how facial recognition has become so successful in China, from smartphones to schools to banks to airports to police, despite the significant limitations and inaccuracies of the technology. Well, the veiled intention could be a surveillance state with the ability to spy on all citizens around the clock to curb protest, dissent and free speech. For China’s growing surveillance state, therefore, any technological shortcomings are incidental.

____

 
