A Toolkit to Facilitate AI Governance

Author: Wojciech Misiura, CISA, CIA, Group Internal Auditor
Date Published: 4 June 2024
Read Time: 3 minutes

In response to the increasing popularity of AI systems, in both development and use, there is global pressure to regulate AI and ensure proper governance. The European Union has recently adopted the EU Artificial Intelligence (AI) Act, and Singapore is moving forward with a Global AI Governance Law and Model AI Governance Framework (“Model Framework”); these efforts sit alongside private-sector frameworks such as MITRE ATLAS.

These regulations will affect a large number of companies, including processes outsourced to third parties. They will affect companies differently, as requirements are defined based on the risk associated with each AI system. Companies therefore need to identify and risk-classify their AI systems as a starting point for determining the extent to which the regulations affect their processes.

Introducing: ISACA Artificial Intelligence (AI) Audit Toolkit

ISACA’s Artificial Intelligence Audit Toolkit is a control library designed to facilitate the assessment of the governance of, and controls over, AI systems in an enterprise. The AI Audit Toolkit is aimed at assisting auditors in verifying that AI systems adhere to the highest standards of governance and ethical responsibility. The Toolkit can serve as a framework for audit professionals when planning and executing audits, and it can also benefit the other lines of defense, for example when creating processes around AI systems or performing maturity assessments of existing processes.

  • The first line of defense can use the suggested evidence/deliverables/artifacts to enhance current processes or develop new ones.
  • The second line of defense can use the AI Audit Toolkit to better understand how to design policies/procedures/guidelines that meet the desired objectives.
  • The third line of defense can use the suggested assessment method to understand how to evaluate and test evidence, deliverables, artifacts, and the operating effectiveness of each control.

In addition, each AI control is designed to ensure that the explainable AI objectives from the ICO’s explainability dimensions are met: Rationale, Responsibility, Data, Fairness, Safety & Performance and Impact. Controls are synthesized from well-established security frameworks and standards, including NIST SP 800-53 and ISO 27001, and further informed by deconstruction and analysis of selected AI-specific guidance, including the European Union Artificial Intelligence (EU AI) Act, Singapore’s Model AI Governance Framework (“Model Framework”), MITRE ATLAS, and the OWASP Machine Learning Security Top Ten.

Strengthening Three Lines of Defense with AI Framework

Ensuring compliance with AI regulations requires companies to make changes to their current environment. The first step is to create an AI systems inventory: identify the AI systems currently in use (including those managed by third parties whose output the company relies on), design a risk-classification process and integrate it with existing processes.
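As an illustration only, the inventory-and-classification step above could be sketched as a simple data structure. The tier names follow the EU AI Act's four risk levels; all field names and helper functions here are hypothetical assumptions for the sketch, not part of the Toolkit:

```python
from dataclasses import dataclass
from typing import List

# EU AI Act risk tiers, ordered from lowest to highest
RISK_TIERS = ["minimal", "limited", "high", "unacceptable"]

@dataclass
class AISystem:
    name: str          # e.g., a customer-facing chatbot
    owner: str         # accountable business owner
    third_party: bool  # True if operated by an external provider
    risk_tier: str     # one of RISK_TIERS

def build_inventory(entries: List[dict]) -> List[AISystem]:
    """Validate raw records and return a typed inventory."""
    inventory = []
    for e in entries:
        if e["risk_tier"] not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {e['risk_tier']}")
        inventory.append(AISystem(**e))
    return inventory

def systems_at_or_above(inventory: List[AISystem], tier: str) -> List[AISystem]:
    """Filter the inventory to systems at or above a given risk tier."""
    threshold = RISK_TIERS.index(tier)
    return [s for s in inventory if RISK_TIERS.index(s.risk_tier) >= threshold]
```

A filter such as `systems_at_or_above(inventory, "high")` would then surface the systems that attract the heaviest regulatory requirements, which is the kind of risk-based scoping the first step calls for.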

The first line of defense should create a process for identifying, managing and treating risk arising from AI systems. The second line of defense should ensure that policies, procedures and controls are developed and self-assessed for adequacy by the first line of defense. The third line of defense is responsible for understanding risk arising from AI systems and including it in a risk-based internal audit plan that tests the completeness and operating effectiveness of related controls.

Closing the AI Talent Gap

Adoption of any change depends on various factors, the most important of which is talent. Recent AI research from ISACA shows that 40% of organizations offer no AI training at all, and a majority of respondents identify a gap in their own AI skill set.

While waiting for the new regulations to take effect, companies should rethink and redesign their AI talent management to ensure that employees working with AI systems have the necessary knowledge. This can be achieved by leveraging ISACA’s AI-focused resources, available at www.isaca.org/ai. Regulations may change in the future, but one thing is certain: AI is here to stay.

Additional resources