Facial Recognition Technology and Privacy Concerns

Author: Hafiz Sheikh Adnan Ahmed, CGEIT, CDPSE, CISO
Date Published: 21 December 2022

From unlocking an iPhone to auto-tagging Facebook photos to employers tracking productivity and police forces surveilling protests, facial recognition technology (FRT) is becoming increasingly embedded into everyday life. FRT matches captured images with other facial images held, for example, in databases or on government watchlists. It is an extremely intrusive form of surveillance and can seriously undermine individual freedoms and, ultimately, society.
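In practice, most modern FRT systems perform this matching by converting each face image into a numeric "embedding" and comparing embeddings by similarity. The Python sketch below illustrates the idea with toy data; the 128-dimensional embeddings, the cosine-similarity metric and the 0.6 threshold are illustrative assumptions, not the parameters of any specific product.

```python
# Illustrative sketch of embedding-based face matching: a model maps a face
# image to a vector, and two faces "match" when their vectors are close
# enough. The random vectors below stand in for a real model's output.
import numpy as np

THRESHOLD = 0.6  # hypothetical similarity cutoff, chosen for illustration

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """Compare a captured face embedding against an enrolled one."""
    return cosine_similarity(probe, enrolled) >= THRESHOLD

rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                           # enrolled identity
same_person = enrolled + rng.normal(scale=0.1, size=128)  # small variation
different_person = rng.normal(size=128)                   # unrelated face

print(is_match(same_person, enrolled))       # similar embeddings match
print(is_match(different_person, enrolled))  # unrelated ones usually do not
```

The key point for privacy is that the enrolled embedding, not the photograph, is what gets stored and searched, and a database of such embeddings can be queried against any newly captured image.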

In recent years, the development and use of FRT has grown exponentially. The market for FRT is rapidly increasing as organizations employ the technology for a variety of reasons, including verifying and/or identifying individuals to grant them access to online accounts, authorizing payments, tracking and monitoring employee attendance, targeting specific advertisements to shoppers and much more. In fact, the global facial recognition market size is forecasted to reach US$12.67 billion by 2028, up from US$5.01 billion in 2021. This increase is also driven by growing demand from governments and law enforcement agencies that use the technology to assist in criminal investigations, conduct surveillance or engage in other security-related activities.

But, as with any technology, there are potential disadvantages to using FRT, including the following privacy and security issues:

  • Lack of consent—A basic principle of data privacy law is that organizations must tell users what biometric data they are collecting and obtain their consent to do so. The most significant privacy implication of FRT is the use of the technology to identify individuals without their consent, for example through real-time public surveillance or the aggregation of databases that were not lawfully assembled.
  • Unencrypted faces—Faces are becoming easier to capture from remote distances and cheaper to collect and store. While stored facial data can be encrypted, the face itself cannot be hidden or revoked. Data breaches involving facial recognition data therefore increase the potential for identity theft, stalking and harassment because, unlike passwords and credit card information, faces cannot easily be changed.
  • Lack of transparency—Using FRT to identify individuals without their knowledge or consent raises serious privacy concerns, especially because biometrics are unique to each individual and, unlike other biometrics (e.g., fingerprints), facial scans can be captured easily, remotely and secretly.
  • Technical vulnerabilities—It may be possible to spoof an FRT system (i.e., masquerade as a victim) through presentation attacks: physical spoofs, such as printed photographs or three-dimensional (3D) masks created from imagery of the victim, or digital spoofs, such as deepfakes.
  • Inaccuracy—Inaccuracy is another common critique of FRT. A facial scan that misidentifies someone can have long-term consequences. Moreover, accuracy varies by demographic, with false positive rates highest among women and people of color, which can lead to wrongful arrests in the criminal justice context.
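The inaccuracy concern is ultimately a threshold trade-off: every FRT deployment picks a similarity cutoff, and that choice trades false positives (the wrong person matched) against false negatives (the right person missed). The sketch below simulates this with made-up score distributions; the means, spreads and thresholds are assumptions for illustration, not measurements of any real system.

```python
# Simulate similarity scores for genuine pairs (same person) and impostor
# pairs (different people), then measure error rates at several thresholds.
import random

random.seed(42)
genuine = [random.gauss(0.80, 0.08) for _ in range(10_000)]   # same person
impostor = [random.gauss(0.30, 0.12) for _ in range(10_000)]  # different people

def error_rates(threshold: float) -> tuple:
    """Return (false positive rate, false negative rate) at a threshold."""
    fpr = sum(s >= threshold for s in impostor) / len(impostor)  # false matches
    fnr = sum(s < threshold for s in genuine) / len(genuine)     # missed matches
    return fpr, fnr

for t in (0.5, 0.6, 0.7):
    fpr, fnr = error_rates(t)
    print(f"threshold={t:.1f}  false positives={fpr:.3%}  false negatives={fnr:.3%}")
```

Lowering the threshold catches more true matches but flags more innocent people; raising it does the reverse. When the impostor score distribution differs across demographic groups, a single global threshold produces the unequal false positive rates the critique describes.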

Examining the Regulatory Landscape

A variety of laws around the world have been passed to control and regulate the use of FRT. Recent laws and proposals have targeted government entities rather than the private sector. Some efforts focus primarily on law enforcement, while others regulate the public sector more broadly. In Pittsburgh, Pennsylvania, USA, and the US State of Virginia, prior legislative approval is now required to deploy FRT. Before conducting a facial recognition search, the US States of Massachusetts and Utah require law enforcement to submit a written request to the state agency maintaining the database. Similarly, the Surveillance Devices Act (2016) was enacted in South Australia. The legislation prohibits the knowing installation, maintenance or use of an optical surveillance device on premises to visually record or observe a private activity without the express or implied consent of all the key parties. The act also prohibits the knowing use, communication or publication of information or material derived from the use of an optical surveillance device.

Privacy Principles for FRT

Organizations are working continuously to streamline the use of FRT in their commercial applications to ensure safety, security, access, authentication, storage identification and management, and accessibility to personally identifiable information (PII). Privacy principles for FRT have been designed to drive responsible data use by the enterprises and online platforms developing and using FRT in commercial settings. These principles include:

  • Consent—Enterprises should obtain express, affirmative consent when enrolling an individual in a program that uses FRT for verification or identification purposes and/or identifying an individual to third parties that would not otherwise have known that individual’s identity.
  • Use—Enterprises should commit to collecting, using and sharing facial recognition data in ways that are compatible with reasonable consumer expectations for the context in which the data was collected.
  • Transparency—Enterprises should provide consumers with meaningful notice about how the facial recognition software templates are created and how such data will be used, stored, shared, maintained, and destroyed.
  • Data security—Enterprises should maintain a comprehensive data security program that is reasonably designed to protect the security, privacy, confidentiality, and integrity of personal information against risk (e.g., unauthorized access or use, unintended or inappropriate disclosure) using administrative, technological, and physical safeguards appropriate to the sensitivity of the information.
  • Privacy by design—Enterprises should seek to implement technological controls that support or enforce compliance with these principles in addition to policy, legal and administrative measures.
  • Integrity and access—Enterprises should implement reasonable measures to maintain the accuracy of facial recognition data, offer individuals reasonable access to review or request correction of inaccurate identity labeling, and provide the ability to request deletion of facial recognition data.
  • Accountability—Enterprises should take reasonable steps to ensure that their use of FRT and facial recognition data, including use in partnership with third-party service providers or business partners, adheres to these principles.
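The data security principle above implies, at minimum, that facial templates should be encrypted at rest so that a database leak does not expose raw biometric data. The sketch below illustrates the idea using a deliberately simplified one-time-pad XOR from the Python standard library; a real deployment would use a vetted cipher (e.g., AES-GCM via an established cryptography library) and a proper key-management system, and the template bytes here are a stand-in for a serialized face embedding.

```python
# Simplified illustration of encrypting a stored facial template at rest.
# One-time-pad XOR is used only to keep the example standard-library-only.
import secrets

def encrypt(template: bytes, key: bytes) -> bytes:
    """XOR the template with a same-length random key (one-time pad)."""
    return bytes(t ^ k for t, k in zip(template, key))

decrypt = encrypt  # XOR is its own inverse

template = b"placeholder-face-embedding-bytes"  # stand-in for a real embedding
key = secrets.token_bytes(len(template))        # held separately from the database
stored = encrypt(template, key)                 # what the database actually holds

assert stored != template                 # a leaked record reveals no raw biometric
assert decrypt(stored, key) == template   # only the key holder can recover it
```

The design point is separation: the encrypted record and the key live in different systems, so compromising the database alone does not compromise the biometric.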

Conclusion

The expansion of FRT has become a prominent global issue. While FRT has many potential benefits, it also brings significant privacy concerns. The regulatory road forward in this booming area will likely focus on ensuring that adequate safeguards are in place to prevent abuse of FRT and protect privacy, but only time will tell.

In a world where faces are relied upon to confirm identity, whether in person or on a video call, understanding what is real and what is fake is a critical aspect of security and privacy—even if FRT is not used.

Hafiz Sheikh Adnan Ahmed

Is a futurist and technology/information security leader with more than 17 years of experience in the areas of information and communications technology (ICT) governance, cybersecurity, resilience, data privacy and protection, risk management, enterprise excellence and innovation, and digital and strategic transformation. He is a strategic thinker, writer, certified trainer, global mentor and advisor with proven leadership and organizational skills in empowering high-performing technology teams. He is a certified data protection officer and earned Chief Information Security Officer (CISO) of the Year awards in 2021 and 2022, granted by GCC Security Symposium Middle East and Cyber Sentinels Middle East, respectively. Ahmed is a public speaker and conducts regular training, workshops, and webinars on the latest trends and technologies in the fields of digital transformation, information and cybersecurity, and data privacy. He volunteers at the global level of ISACA® in different working groups and forums. He can be contacted through email at hafiz.ahmed@azaanbiservices.com and LinkedIn at https://ae.linkedin.com/in/adnanahmed16.