December 20, 2018
Facial recognition is the process of taking images or data of a face and associating that data with a specific individual. The technology has been portrayed for years in film and television. We’ve all seen shows where police take grainy surveillance footage and run it through some sort of database to identify exactly who the perpetrator is. Throughout the 1990s and 2000s, despite a growing number of commercial applications, popular media was where facial recognition stayed for the general consumer. As the technology progresses and evolves, new uses for facial recognition are being developed. These tools are now present in phones, video game consoles, and surveillance systems. As with most innovation, legislation lags significantly behind implementation. Yet facial recognition tools are poised to have devastating effects on an individual’s privacy and security throughout the world.
The origins of facial recognition date back to the 1960s. The early attempts were limited by processing power, and some were only partially automated. Engineers from Bell Labs relied on features such as ear protrusion, eyebrow weight, and nose length as the basis for recognizing faces with pattern classification techniques. In 1973, the Japanese researcher Takeo Kanade built the first fully automated facial recognition system. His system relied on the same types of techniques as the Bell Labs research: photographs were first digitized, then analyzed for specific facial features. Kanade achieved up to 75% accuracy on a specific collection of over 800 portraits. These early methods were incredibly fragile; according to the article on unconstrained facial recognition, the introduction of a pair of glasses was enough to drop Kanade’s system from a 75% identification rate to less than 3%. Current facial recognition technology far surpasses these early attempts. Technology companies like Facebook and Google are applying machine learning algorithms to their massive databases of personal photos and information in order to build the next generation of tools.
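The feature-based approach described above can be sketched in a few lines: each known face is reduced to a vector of hand-measured features, and an unknown face is matched to whichever stored vector is closest. This is only a toy illustration of the general pattern-classification idea, not the actual Bell Labs system; the names and feature values below are invented for the example.

```python
import math

# Hypothetical gallery: each known face is a vector of hand-measured
# features (ear protrusion, eyebrow weight, nose length), arbitrary units.
KNOWN_FACES = {
    "alice": (4.2, 1.8, 5.1),
    "bob":   (6.0, 2.5, 4.3),
    "carol": (3.1, 1.2, 5.9),
}

def identify(features, known=KNOWN_FACES):
    """Return the name whose stored vector is nearest in Euclidean distance."""
    return min(known, key=lambda name: math.dist(features, known[name]))

# A measurement close to Bob's stored vector matches Bob.
print(identify((5.8, 2.4, 4.5)))  # -> bob
```

The fragility the article mentions falls out of this design: glasses or a new hairstyle change the measured features, shifting the vector toward the wrong neighbor.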
Facebook relies on deep learning techniques to learn its own method of identifying individual faces, given enough input to learn from. Identifying a face is normally broken down into a multistep process. The first subproblem is detecting the outline of a face in a photograph; then the face must be aligned to a standard portrait; finally, the face must be converted from pixels to some other representation and identified. Facebook’s major improvements to this process were in the alignment and represent/identify steps. To increase the accuracy of facial alignment, the researchers developed a way to extract a 3D model of the major facial features from the photo. They would then rotate the 3D model and convert it back to a 2D representation of the face so that the distortions caused...
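The detect → align → represent → identify pipeline can be sketched end to end with toy stand-ins for each stage. Every function below is a deliberately simplified placeholder, not Facebook's implementation: real systems use a face detector for `detect`, geometric (or 3D-model-based) normalization for `align`, and a deep network for `represent`.

```python
import math

def detect(image):
    """Find the face's bounding box (top, left, bottom, right).
    Toy stand-in: assume the face fills the whole image."""
    return (0, 0, len(image), len(image[0]))

def align(image, box):
    """Crop to the box and scale pixel values to [0, 1], a stand-in
    for normalizing the face to a standard portrait."""
    top, left, bottom, right = box
    crop = [row[left:right] for row in image[top:bottom]]
    peak = max(max(row) for row in crop) or 1
    return [[v / peak for v in row] for row in crop]

def represent(aligned):
    """Flatten pixels into a feature vector (a deep net would go here)."""
    return [v for row in aligned for v in row]

def identify(vector, gallery):
    """Nearest-neighbor match against a gallery of known vectors."""
    return min(gallery, key=lambda name: math.dist(vector, gallery[name]))

# Usage: enroll two tiny 2x2 "faces", then recognize a noisy copy of one.
gallery_images = {
    "alice": [[10, 50], [200, 90]],
    "bob":   [[240, 30], [15, 180]],
}
gallery = {name: represent(align(img, detect(img)))
           for name, img in gallery_images.items()}
probe = [[12, 48], [198, 92]]  # slightly perturbed "alice"
print(identify(represent(align(probe, detect(probe))), gallery))  # -> alice
```

The point of the sketch is the division of labor: detection and alignment reduce variation before matching, which is exactly where Facebook's 3D alignment step fits.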