Challenges of AI and Data Privacy—And How to Solve Them

Author: Hafiz Sheikh Adnan Ahmed, CGEIT, CDPSE, CISO
Date Published: 6 October 2021

Artificial intelligence (AI) has developed rapidly in recent years. Today, AI and its applications are part of everyday life, from social media newsfeeds and traffic management in cities to autonomous cars and connected consumer devices such as smart assistants, spam filters, voice recognition systems and search engines.

AI and Data Privacy Challenges
AI has the potential to revolutionize society; however, there is a real risk that the use of new tools by states or enterprises could have a negative impact on human rights. The following are some of the major data privacy risk areas and problems related to AI:

  • Reidentification and deanonymization—AI applications can be used to identify and track individuals across different devices in their homes, at work and in public spaces. For example, facial recognition, a means by which individuals can be tracked and identified, has the potential to transform expectations of anonymity in public spaces. The first sketch after this list shows how such reidentification can work in practice.
  • Discrimination, unfairness, inaccuracies and bias—AI-driven identification, profiling and automated decision making can lead to discriminatory or biased outcomes. People can be misclassified, misidentified or judged negatively, and such errors or biases may disproportionately affect certain demographics. The second sketch after this list shows one simple way to measure such disparities.
  • Opacity and secrecy of profiling—Some applications of AI can be opaque to individuals, regulators or even the designers of the system themselves, making it difficult to challenge or scrutinize outcomes. While there are technical solutions that help improve the interpretability and auditability of some systems, a key challenge remains wherever this is not possible and the outcomes can significantly affect people’s lives.
  • Data exploitation—People are often unable to fully understand what kinds and quantities of data their devices, networks and platforms generate, process or share. As consumers continue to introduce smart and connected devices into their homes, workplaces, public spaces and even bodies, the need to enforce limits on data exploitation has become increasingly pressing.
  • Prediction—AI can utilize sophisticated machine-learning algorithms to infer or predict sensitive information from non-sensitive data. For instance, someone’s keyboard typing patterns can be analyzed to deduce their emotional state, such as nervousness, confidence, sadness or anxiety. Even more alarming, a person’s political views, ethnic identity, sexual orientation and even overall health status can be determined from activity logs, location data and similar metrics. The final sketch after this list illustrates this kind of inference.
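
To make the reidentification risk concrete, the following is a minimal sketch that links an invented “anonymized” health dataset to an invented public dataset by joining on shared quasi-identifiers (postal code, birth date and sex). All names, columns and values are hypothetical; the point is that removing direct identifiers alone does not prevent reidentification.

```python
# Minimal sketch of reidentification via quasi-identifier linkage.
# All datasets, columns and values are hypothetical illustrations.
import pandas as pd

# "Anonymized" dataset: direct identifiers (names) removed,
# but quasi-identifiers retained.
health = pd.DataFrame({
    "zip": ["10001", "10001", "94105"],
    "birth_date": ["1980-04-12", "1975-09-30", "1980-04-12"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# Public dataset that still carries names alongside the same quasi-identifiers.
public = pd.DataFrame({
    "name": ["Alice Example", "Beth Example"],
    "zip": ["10001", "94105"],
    "birth_date": ["1980-04-12", "1980-04-12"],
    "sex": ["F", "F"],
})

# A simple join on the quasi-identifiers reattaches names to diagnoses.
reidentified = health.merge(public, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Defenses such as k-anonymity and differential privacy aim to make exactly this kind of join uninformative.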
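
Discriminatory outcomes can also be measured. The following minimal sketch computes a demographic parity gap (the difference in approval rates between groups) over invented decision records; the 0.1 review threshold is an illustrative choice, not a regulatory standard.

```python
# Minimal sketch: measuring outcome disparity in an automated decision system.
# The decision records and group labels are invented for illustration.
from collections import defaultdict

decisions = [  # (group, approved)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())
print(rates)                                 # approval rate per group
print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50 here

# An illustrative review threshold; real thresholds depend on context and law.
if gap > 0.1:
    print("Disparity exceeds threshold; flag the system for review.")
```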
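
Finally, to illustrate the prediction risk, the sketch below trains a toy classifier that infers an emotional state from innocuous typing-pattern features. The features, labels and model are entirely synthetic assumptions; real inference attacks are far more sophisticated, but the principle is the same: sensitive attributes can be derived from data that looks harmless.

```python
# Minimal sketch: inferring a sensitive attribute from innocuous signals.
# Features, labels and model are synthetic; no real behavioral data is used.
from sklearn.linear_model import LogisticRegression

# Hypothetical typing-pattern features:
# [keystrokes per second, backspace rate, pause variance]
X = [[6.1, 0.02, 0.10], [5.8, 0.03, 0.12], [3.2, 0.15, 0.55],
     [3.0, 0.18, 0.60], [6.4, 0.01, 0.09], [2.8, 0.20, 0.70]]
y = [0, 0, 1, 1, 0, 1]  # 0 = calm, 1 = anxious (invented labels)

model = LogisticRegression().fit(X, y)
# A new typing sample close to the "anxious" pattern is classified as such.
print(model.predict([[3.1, 0.17, 0.58]]))  # expected: [1]
```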

Solutions and Recommendations
A data protection principle that underpins all AI development and applications is accountability. This principle is central to all data privacy laws and regulations, and places greater responsibility on the data controller to ensure that all processing is compliant. Data processors are also bound by the accountability principle.

The following are 2 requirements that are especially relevant for organizations using AI:

  • Privacy by design—The data controller should build privacy protection into systems and ensure that data protection is safeguarded in the system’s standard settings. The tenets of privacy by design require that data protection be given due consideration at all stages of system development, in routines and in daily use. Standard settings should be as protective of privacy as possible, and data protection features should be embedded at the design stage. The first sketch after this list illustrates privacy-protective default settings.
  • Data Protection Impact Assessment—Anyone processing personal data has a duty to assess the risk involved. If an enterprise believes that a planned process is likely to pose a high risk to natural persons’ rights and freedoms, it has a duty to conduct a Data Protection Impact Assessment (DPIA). A DPIA is also required when personal aspects are evaluated systematically and extensively for automated decision-making, or when special categories of personal data (i.e., sensitive personal data) are processed on a large scale. The systematic and large-scale monitoring of public areas likewise requires documentation showing that a DPIA has been conducted. The second sketch after this list encodes these triggers as a simple screening check.
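
As a concrete illustration of privacy by design, the sketch below defines a hypothetical configuration whose standard settings are the most privacy-protective state: data collection is opt-in, retention is minimal and identifiers are pseudonymized by default. The schema and field names are assumptions for illustration, not a prescribed standard.

```python
# Minimal sketch: privacy-protective defaults baked in at the design stage.
# The configuration schema and field names are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class PrivacyConfig:
    # The most protective option is the standard setting; users opt in, not out.
    collect_telemetry: bool = False        # off unless explicitly enabled
    share_with_third_parties: bool = False
    retention_days: int = 30               # minimal retention by default
    pseudonymize_user_ids: bool = True     # store pseudonyms, not raw identifiers
    allow_profiling: bool = False          # no profiling without consent

config = PrivacyConfig()  # the default object is the most protective state
print(config)
```

The design intent is that any relaxation of protection becomes an explicit, reviewable change rather than a silent default.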
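
The DPIA triggers above (automated decision-making, large-scale use of special categories of data and systematic monitoring of public areas) can be encoded as a simple screening aid. The following is a minimal sketch; the trigger list loosely paraphrases GDPR Article 35 and is an illustration, not a complete legal test.

```python
# Minimal sketch: screening whether a planned processing activity likely
# requires a DPIA. The trigger list is illustrative, not a complete legal test.

def dpia_required(automated_decisions: bool,
                  special_categories_at_scale: bool,
                  monitors_public_areas: bool) -> bool:
    triggers = [
        (automated_decisions, "automated decision-making with significant effects"),
        (special_categories_at_scale, "large-scale use of special categories of data"),
        (monitors_public_areas, "systematic large-scale monitoring of public areas"),
    ]
    hits = [reason for flagged, reason in triggers if flagged]
    for reason in hits:
        print(f"DPIA trigger: {reason}")
    return bool(hits)

# Example: an AI system that profiles applicants automatically.
print(dpia_required(automated_decisions=True,
                    special_categories_at_scale=False,
                    monitors_public_areas=False))  # True: a DPIA is needed
```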

The following are some recommendations for purchasing and using AI-based systems:

  • Carry out a risk assessment and, if required, complete a DPIA before purchasing a system.
  • Ensure that the systems satisfy the requirements for privacy by design.
  • Conduct regular tests of the system to ensure that it complies with regulatory requirements (a sketch of such an automated check follows this list).
  • Ensure that the system protects the rights of the users, customers and other stakeholders.
  • Consider establishing industry norms, ethical guidelines or a data protection panel consisting of external experts in the fields of technology, society and data protection. Such experts can provide advice on the legal, ethical, social and technological challenges and opportunities linked to the use of AI.
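
For the recommendation to test regularly, the following is a minimal sketch of an automated check that a deployed system’s settings remain privacy-protective. The settings and thresholds are hypothetical; in practice such checks would run as a scheduled job or as part of a release pipeline.

```python
# Minimal sketch: a recurring automated check that a deployed system's
# settings remain privacy-protective. Settings and thresholds are hypothetical.

def check_privacy_settings(settings: dict) -> list[str]:
    """Return a list of findings; an empty list means all checks passed."""
    findings = []
    if settings.get("collect_telemetry", False):
        findings.append("telemetry enabled by default")
    if settings.get("retention_days", 0) > 90:
        findings.append("retention exceeds the 90-day policy")
    if not settings.get("pseudonymize_user_ids", False):
        findings.append("raw user identifiers stored")
    return findings

# Example run, e.g., from a scheduled job after each release.
current = {"collect_telemetry": False, "retention_days": 120,
           "pseudonymize_user_ids": True}
for finding in check_privacy_settings(current):
    print("FAIL:", finding)
```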

AI systems can be a tremendous benefit to both individuals and society, but organizations using AI must address the risk to data subjects’ privacy rights and freedoms.

Hafiz Sheikh Adnan Ahmed, CGEIT, COBIT 5 Assessor, CDPSE, GDPR-CDPO, Lead Cloud Security Manager
Is a governance, risk and compliance (GRC), information security and IT strategy professional with more than 16 years of industry experience. He serves as a board member of the ISACA® United Arab Emirates (UAE) Chapter, chairs an IAPP KnowledgeNet Chapter and volunteers at the global level of ISACA in different working groups and forums. He is a Microsoft Certified Trainer, a PECB Certified Trainer and an ISACA-APMG Accredited Trainer. He can be reached via email at adnan.gcu@gmail.com and LinkedIn (https://ae.linkedin.com/in/adnanahmed16).