Building an AI Risk Management Program: A Security & Audit Team Perspective

Author: David Kliemann, Cloud Risk & Controls Leader, IBM Cloud
Date Published: 25 June 2024
Read Time: 2 minutes

Generative AI is all the rage as organizations rush to figure out how they can “do AI.” Studies show that AI could add nearly US$16 trillion in economic value by 2030. But to take full advantage, organizations need to implement trustworthy AI at the enterprise level.

Enterprise-grade AI, including generative AI, requires a highly scalable, compute- and data-intensive distributed infrastructure. Because AI workloads will likely form the backbone of mission-critical operations and ultimately house and manage an organization's most trusted data, the underlying infrastructure must be trustworthy and resilient by design.

Security, risk and audit leaders need to understand that AI introduces a whole new set of risks, on top of the security and privacy risks we already manage, and that we have a part to play in ensuring they are mitigated. It is fundamental for organizations to develop a comprehensive AI risk management program that augments their existing cyber, risk and privacy programs.

In August, during the Building an AI Risk Management Program session at GRC Conference 2024 in Austin, Texas, I will review some of those key AI risks and draw on lessons from numerous cases across regulated industries, where organizations are working to solve security, risk and compliance challenges while still moving at the “speed of business.”

In addition, while numerous publications provide guidance on understanding AI-related risks (NIST AI RMF, MITRE ATLAS, IBM AI Adversarial Robustness 360), it is key for organizations to also have an overall governance structure and a comprehensive framework of controls designed to actually mitigate those risks as part of their AI risk management program.

In this session, I will highlight one such framework, co-developed with the IBM Financial Services Cloud Council (a group of more than 90 financial institutions) and adapted specifically with generative AI in mind. A framework like this can help organizations align with evolving industry standards and best practices across the entire AI technology stack, agnostic of the specific solutions they use, as they take the next step in their journey toward trustworthy AI.

Join me at the GRC Conference 2024 as we discuss how security and audit teams can help their organizations move down their AI path in a secure and trustworthy manner.
