The facial recognition technology market is expanding rapidly as organizations put the technology to a variety of uses, including authenticating and identifying individuals to grant them access to online accounts, authorizing payments, tracking and monitoring employee attendance, targeting ads to shoppers and much more.
In fact, the global facial recognition market is predicted to reach $12.67 billion by 2028, up from $5.01 billion in 2021, according to The Insight Partners. The increase is also driven by growing demand from governments and law enforcement agencies, which are using the technology to aid in criminal investigations, surveillance and other security efforts.
But as with any technology, there are potential drawbacks to using facial recognition, including privacy and security concerns.
Privacy issues surrounding facial recognition technology
The main privacy implication of facial recognition technology is its use to identify individuals without their consent. That includes applications such as real-time public surveillance, or the merging of databases that are not lawfully constructed, said Joey Pritikin, chief product officer at Paravision, a computer vision company specializing in facial recognition technology.
Tracy Hulver, senior director of digital identity products at Aware Inc., agreed that it is important for organizations to let users know what biometric data they are collecting and then get their consent.
“You have to make it clear to the user what you’re doing and why you’re doing it,” he said. “And ask if they agree.”
Stephen Ritter, CTO at Mitek Systems Inc., a provider of mobile capture and digital identity verification products, agreed that consumer notice and consent are critical.
“Whether we’re delivering an app or a user experience directly to a consumer, or whether we’re delivering technology to a bank, or a marketplace, or a company that delivers an app to the end user, we need an appropriate notification, meaning the consumer is made very aware of the data we are going to collect and can agree to that,” said Ritter.
SEE: Mobile Device Security Policy (TechRepublic Premium)
In surveillance applications, the number one concern for citizens is privacy, said Matt Lewis, commercial research director at security consultancy NCC Group.
Facial recognition technology in surveillance has improved a lot in recent years, meaning it’s quite easy to track a person as they move around a city, he said. One of the privacy concerns about the power of such technology is who can access that information and for what purpose.
Ajay Mohan, principal, AI & analytics at Capgemini Americas, agreed with that assessment.
“The big problem is that companies are already collecting a huge amount of personal and financial information about us [for profit-driven applications] that basically just follows you, even if you don’t actively approve or authorize it,” Mohan said.
In addition, artificial intelligence (AI) continues to push the performance of facial recognition systems. From an attacker’s perspective, there is also emerging research that uses AI to create facial “master keys”: AI-generated faces that match many different faces, produced using so-called generative adversarial network (GAN) techniques, according to Lewis.
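To make the “master key” idea concrete, the danger is a single probe face whose embedding clears the acceptance threshold for many enrolled identities at once. The following is a minimal sketch, not drawn from Lewis or any vendor, assuming 128-dimensional embeddings compared by cosine similarity and a purely random probe:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Toy gallery: 1,000 enrolled identities, each stored as a 128-d embedding (random here).
gallery = rng.normal(size=(1000, 128))

# A single probe embedding. A random probe should match almost no one;
# the GAN-based research Lewis describes searches for a face whose embedding
# clears the threshold for as many enrolled identities as possible.
probe = rng.normal(size=128)

THRESHOLD = 0.3  # acceptance threshold; real systems tune this per deployment
matches = sum(cosine_similarity(probe, g) >= THRESHOLD for g in gallery)
print(f"Probe accepted as {matches} of {len(gallery)} enrolled identities")
```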
“AI also enables additional feature detection on faces beyond recognition alone, that is, being able to determine a face’s mood (happy or sad) and also a good approximation of a person’s age and gender based purely on their facial images,” said Lewis. “These developments certainly exacerbate privacy concerns in this space.”
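As an illustration of attribute detection beyond recognition, open-source toolkits such as deepface expose age, gender and emotion estimation from a single photo. A minimal sketch follows; the image path is a placeholder, and the exact return format varies by library version:

```python
# pip install deepface
from deepface import DeepFace

# Estimate age, gender and emotion from one photo ("face.jpg" is a placeholder path).
results = DeepFace.analyze(img_path="face.jpg", actions=["age", "gender", "emotion"])

# Recent deepface versions return a list with one dict per detected face.
for face in results:
    print(face.get("age"), face.get("dominant_gender"), face.get("dominant_emotion"))
```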
In general, facial recognition captures a great deal of information, depending on the amount and sources of data, and that is what the future needs to focus on, said Doug Barbin, managing principal at Schellman, a global cybersecurity assessment firm.
“If I do a Google image search on myself, will it return images tagged with my name, or are images previously identified as me recognizable without any text or context? That creates privacy concerns,” he said. “A huge application of machine learning is being able to identify health problems through scans. But what about the cost of revealing someone’s condition?”
Security vulnerabilities with facial recognition technology
Biometrics, including facial recognition, are not private, which also leads to security concerns, Lewis said.
“This is a feature rather than a vulnerability, but it essentially means that biometric data can be copied, and that poses security challenges,” he said. “With facial recognition, it may be possible to ‘spoof’ a system (impersonating a victim) by using photos or 3D masks created from images of the victim.”
Another property of all biometrics is that the matching process is statistical: A user never presents their face in exactly the same way in front of a camera, and the user’s characteristics may differ depending on the time of day, use of cosmetics and so on, Lewis said.
Therefore, a facial recognition system needs to determine how likely it is that a face presented is that of an authorized person, he said.
“This means that some people can resemble others closely enough that they can authenticate as those other people because of similarities in their features,” Lewis said. “This is called the false acceptance rate in biometrics.”
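In other words, a verification system compares a similarity score against a threshold, and the false acceptance rate is the share of impostor comparisons that still clear that threshold. A minimal sketch with simulated score distributions (the distributions, sample sizes and threshold are illustrative assumptions, not vendor figures):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated similarity scores: genuine pairs tend to score high, impostor pairs low,
# but the two distributions overlap -- matching is statistical, never exact.
genuine_scores = rng.normal(loc=0.80, scale=0.08, size=10_000)
impostor_scores = rng.normal(loc=0.35, scale=0.10, size=10_000)

threshold = 0.60
far = np.mean(impostor_scores >= threshold)  # false acceptance rate
frr = np.mean(genuine_scores < threshold)    # false rejection rate

print(f"FAR at threshold {threshold}: {far:.4%}")
print(f"FRR at threshold {threshold}: {frr:.4%}")
# Raising the threshold lowers FAR but raises FRR; operators tune this trade-off.
```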
Because facial recognition involves the storage of facial images or templates (mathematical representations of a facial image used for matching), its security implications are similar to those of any personally identifiable information, requiring accredited encryption approaches, policies and process safeguards, he said.
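For example, a stored template, being a fixed-length numeric vector, can be encrypted at rest like any other piece of PII. A minimal sketch using symmetric encryption from Python’s cryptography package (key management, arguably the harder problem, is out of scope here):

```python
# pip install cryptography numpy
import numpy as np
from cryptography.fernet import Fernet

# A face template is typically a fixed-length float vector (128-d here as an example).
template = np.random.rand(128).astype(np.float32)

key = Fernet.generate_key()  # in production, keep and rotate this in a KMS/HSM
fernet = Fernet(key)

encrypted = fernet.encrypt(template.tobytes())  # ciphertext safe to persist

# Later, decrypt only inside the matching service.
restored = np.frombuffer(fernet.decrypt(encrypted), dtype=np.float32)
assert np.array_equal(template, restored)
```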
SEE: Password Breach: Why Pop Culture and Passwords Don’t Mix (Free PDF) (TechRepublic)
“In addition, facial recognition can be susceptible to what we call ‘presentation attacks,’ or the use of physical or digital spoofs, such as masks or deepfakes, respectively,” Pritikin said. “So good technology to detect these attacks is critical in many use cases.”
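One common pattern is to run presentation attack detection alongside the matcher and require both checks to pass before authenticating. A minimal sketch of that gating logic follows, with pad_score and match_score as hypothetical stand-ins rather than any vendor’s API:

```python
import random

MATCH_THRESHOLD = 0.60  # similarity required to accept the identity claim
PAD_THRESHOLD = 0.90    # confidence required that the face is live, not a spoof

def pad_score(probe_image) -> float:
    """Stand-in for a presentation attack detection model (returns liveness confidence)."""
    return random.random()

def match_score(probe_image, enrolled_template) -> float:
    """Stand-in for a face matcher (returns probe-to-template similarity)."""
    return random.random()

def authenticate(probe_image, enrolled_template) -> bool:
    """Accept only if the face passes liveness checks AND matches the enrolled template."""
    if pad_score(probe_image) < PAD_THRESHOLD:
        return False  # likely a printed photo, mask or deepfake replay
    return match_score(probe_image, enrolled_template) >= MATCH_THRESHOLD

print(authenticate("probe.jpg", "enrolled-template"))
```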
People’s faces are key to their identity, said John Ryan, partner at the law firm Hinshaw & Culbertson LLP. That puts people who use facial recognition technology at risk of identity theft: Unlike a password, people can’t simply change their face. As a result, companies that use facial recognition technology are targets for hackers.
As such, companies typically implement storage and destruction policies to protect this data, Ryan said. In addition, facial recognition technology usually uses algorithms that cannot be reverse engineered.
“These barriers have been helpful so far,” he said. “But state and federal governments are concerned. Some states, such as Illinois, have already passed laws to regulate the use of facial recognition technology. Legislation is also pending at the federal level.”
Pritikin said his company uses advanced technologies, such as Presentation Attack Detection, that protect against the use of forged data.
“We are also currently developing advanced technologies to detect deepfakes or other digital facial manipulations,” he said. “In a world where we rely on faces to confirm identity, whether in person or during a video call, understanding what’s real and what’s fake is a critical aspect of security and privacy, even if no facial recognition technology is used.”