Unpacking the World’s First AI Act

Author: Hafiz Sheikh Adnan Ahmed, CGEIT, CDPSE, CISO
Date Published: 31 May 2023

The European Union proposed the Artificial Intelligence Act (AI Act), the world's first comprehensive AI law, in 2021. After months of negotiations, and two years after the draft rules were first proposed, EU lawmakers reached an agreement and passed a draft of the AI Act. The AI Act, consisting of 85 articles, establishes harmonized rules on AI.

The proposed regulatory framework focuses on four key objectives:

  1. Ensuring that AI systems on the EU market are safe and respect existing law on fundamental rights and EU values
  2. Ensuring legal certainty to facilitate investment and innovation in AI
  3. Enhancing governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems
  4. Facilitating the development of a single market for lawful, safe and trustworthy AI applications and preventing market fragmentation

One of the salient features of the AI Act is that the law takes a risk-based approach toward AI systems and applications. AI applications are assigned to three risk categories:

1. Unacceptable risk—This category encapsulates practices that have significant potential to manipulate people through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups (e.g., children, people living with disabilities). The AI Act prohibits AI-based social scoring for general purposes done by public authorities. The use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is also prohibited, unless certain limited exceptions apply.

2. High risk—This category describes applications that create a high risk to the health and safety or fundamental rights of natural persons. This classification depends not only on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used.

3. Low or minimal risk—This risk category applies to applications that are not explicitly banned or categorized as high risk.

The AI Act also establishes legal requirements for high-risk AI systems in relation to data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy, and security. It places a clear set of horizontal obligations on providers of high-risk AI systems. Proportionate obligations are also placed on users and other participants across the AI value chain (e.g., importers, distributors, authorized representatives).

The AI Act sets the framework for notified bodies to be involved as independent third parties in conformity assessment procedures. It also explains in detail the conformity assessment procedures to be followed for each type of high-risk AI system. The conformity assessment approach aims to minimize the burden for economic operators and for notified bodies, whose capacity must be progressively ramped up over time.

What Comes Next

AI applications influence what information is displayed online by predicting what content is engaging to the user. Such applications can capture and analyze data from faces to enforce laws or personalize advertisements. AI may even be used to diagnose and treat serious illnesses. In other words, AI could impact many parts of an individual’s life.

The market for AI is expected to experience strong growth in the next decade. Its current value of nearly US$100 billion is expected to grow twentyfold by 2030, to nearly US$2 trillion. The AI market spans a vast number of industries: supply chains, marketing, product development, research, analysis and more will, in some way, adopt AI within their business structures. Chatbots, image-generating AI and mobile applications are among the major trends driving AI forward in the coming years.

Much as the EU General Data Protection Regulation (GDPR) did in 2018, the AI Act could become a global standard, determining to what extent AI has a positive rather than negative effect on life for individuals everywhere.

The AI Act is already making an impact internationally. In September 2021, the Brazilian Chamber of Deputies passed a bill that creates a legal framework for AI.

Hafiz Sheikh Adnan Ahmed, CGEIT, CDPSE, GDPR-Certified Data Protection Officer, ISO MS Lead Auditor, ISO MS Lead Implementer

Is an analytical thinker, writer, certified trainer, global mentor, and advisor in the areas of information and communications technology (ICT) governance, cybersecurity, business continuity and organizational resilience, data privacy and protection, risk management, enterprise excellence and innovation, and digital and strategic transformation. He is a certified data protection officer and was awarded Chief Information Security Officer (CISO) of the Year awards in 2021 and 2022, granted by GCC Security Symposium Middle East and Cyber Sentinels Middle East, respectively. He was also named a 2022 Certified Trainer of the Year by the Professional Evaluation and Certification Board (PECB). He is a public speaker and conducts regular training, workshops, and webinars on the latest trends and technologies in the fields of digital transformation, cybersecurity, and data privacy. He volunteers at the global level of ISACA® in different working groups and forums. He can be contacted through email at hafiz.ahmed@azaanbiservices.com.