Establishing Artificial Intelligence Governance: Tips for CIOs

Author: Kirsten Lloyd
Date Published: 16 August 2021

Artificial intelligence (AI) is seemingly everywhere, with events such as the COVID-19 pandemic spurring increased investment in AI as organizations accelerate plans to power an increasingly digitally connected workforce. However, upon closer examination, the reality is that many organizations’ investments in AI have yet to pay off. Myriad factors contribute to this, including siloed or messy data, overburdened data science and engineering teams, and difficulty integrating AI capabilities into enterprise applications. One significant factor that is often overlooked or swept aside is AI governance.

Governance may not be top of mind when thinking about factors that contribute to increased innovation, but in the case of AI, establishing good governance could be the key to unlocking real value. Rather than thinking of governance as a hindrance, it is helpful to think about it as a set of guardrails and, ultimately, a force multiplier for data science and engineering teams. For the teams on the ground, governance provides preapproved processes to follow, allowing them to move faster and create innovative new solutions. For chief information officers (CIOs) and other stakeholders, governance ensures compliance in the form of auditability and transparency, ultimately yielding higher-quality, better-performing systems, which, in turn, generate more value.

Establishing Governance and Enabling Accountability

AI governance is essential, but with so many stakeholders involved, it can be challenging to ensure everyone is on the same page. In the absence of governance, many teams take matters into their own hands and adopt a “move fast and break things” approach to building AI-enabled solutions. This often leads to shadow AI: the proliferation of AI-enabled tools or applications instituted by groups or individuals without any IT or security oversight or governance. Rather than simply worrying about data scientists pulling questionable code libraries from the Internet, organizations should build a centralized system for model management where everything is documented, tracked and secured. All details related to model background, such as architecture, training framework, training data, expected performance, potential bias and intended application, should be tracked, with the ability to easily add or update versions over time.
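
As a rough illustration, the sketch below shows what one entry in such a registry might capture. The ModelCard structure, field names and sample values are hypothetical assumptions for illustration, not a reference to any specific product or standard:

# Minimal sketch of a model registry entry (illustrative only; the
# ModelCard structure and its field names are hypothetical, not a
# specific product's API).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelCard:
    name: str
    version: str
    architecture: str           # e.g., "gradient-boosted trees"
    training_framework: str     # e.g., "scikit-learn 0.24"
    training_data: str          # dataset name or lineage reference
    expected_performance: dict  # metric name -> expected value
    known_bias_notes: str       # documented bias risks and limitations
    intended_use: str           # the application the model was built for
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# A central registry keyed by (name, version) makes it easy to add or
# update versions over time while keeping the full history auditable.
registry: dict = {}

def register(card: ModelCard) -> None:
    registry[(card.name, card.version)] = card

register(ModelCard(
    name="churn-predictor",
    version="1.2.0",
    architecture="gradient-boosted trees",
    training_framework="scikit-learn 0.24",
    training_data="crm_accounts_2021q2 (snapshot 2021-06-30)",
    expected_performance={"auc": 0.87},
    known_bias_notes="Underrepresents accounts opened before 2015.",
    intended_use="Prioritizing retention outreach; not for pricing.",
))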

Once models are deployed into production, logging is key. This includes everything from model runs, job completion details and infrastructure usage to user actions. By establishing role-based access controls, organizations can also limit who has access to sensitive information. This centralized approach gives all stakeholders a single location to review for compliance purposes, track and monitor performance, and identify efficiencies and optimizations that generate increased value. It also helps teams assign responsibility to the right stakeholders, keeping everyone accountable for maintaining a high level of performance.
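
A minimal sketch of what that might look like appears below, assuming structured (JSON) log entries and a simple role-to-permission mapping. The roles, permissions and log fields are illustrative assumptions, not a prescribed standard:

# Illustrative sketch of production logging plus role-based access
# control (RBAC); the roles and log fields here are assumptions.
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("model-audit")

def log_model_run(model: str, version: str, user: str,
                  status: str, latency_ms: float) -> None:
    # Structured (JSON) entries keep model runs, job completions and
    # user actions queryable from one place for audit purposes.
    log.info(json.dumps({
        "event": "model_run",
        "model": model,
        "version": version,
        "user": user,
        "status": status,
        "latency_ms": latency_ms,
    }))

# Role-based access: each role may perform only the actions listed.
ROLE_PERMISSIONS = {
    "auditor": {"view_logs", "view_training_data_lineage"},
    "data_scientist": {"view_logs", "update_model"},
    "end_user": {"run_inference"},
}

def authorize(role: str, action: str) -> bool:
    return action in ROLE_PERMISSIONS.get(role, set())

log_model_run("churn-predictor", "1.2.0", "jsmith",
              status="success", latency_ms=42.0)
assert authorize("auditor", "view_logs")
assert not authorize("end_user", "view_training_data_lineage")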

Being forward-leaning about traceability and compliance also enables transparency, a persistent challenge when it comes to AI. Particularly when considering stakeholder roles and responsibilities for quality control and performance monitoring, the concept of explainability can open Pandora’s box. Different stakeholders (e.g., data scientists, engineers, CIOs, auditors, end users) have varying levels of technical acumen, so explainability will look different to each of them. Depending on the scenario, providing too much information might be overwhelming. For example, a Global Positioning System (GPS) application might reroute a driver based on upcoming traffic. For the driver, it might be useful to see that the new route avoids a slowdown, but they do not need every detail of the metadata that led the algorithm to make that recommendation. Through clear auditability and governance practices, organizations can make informed decisions about the right level of transparency and explainability for specific use cases. Ultimately, this allows for the establishment of appropriate performance monitoring and risk mitigation procedures.
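
To make the idea concrete, the hypothetical sketch below returns a different depth of explanation for each stakeholder role in the GPS rerouting scenario. The roles and messages are illustrative assumptions:

# Hypothetical sketch of tailoring explanation depth to the audience;
# the roles and messages are illustrative, not a real system's output.
EXPLANATIONS = {
    # End users get a plain-language reason.
    "driver": "Rerouted to avoid a 25-minute slowdown ahead.",
    # Data scientists get the features behind the decision.
    "data_scientist": ("Reroute triggered: congestion_score=0.91 on "
                       "current segment; alternate route ETA -25 min."),
    # Auditors get a pointer to the retained decision record.
    "auditor": ("Decision logged with inputs and model version "
                "retained for review."),
}

def explain(role: str) -> str:
    # Default to the least detailed explanation for unknown roles.
    return EXPLANATIONS.get(role, EXPLANATIONS["driver"])

print(explain("driver"))
print(explain("auditor"))

The design point is that one decision record can back several explanations; governance determines which level each stakeholder sees.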

Once the right compliance, quality control and performance monitoring protocols are in place, it becomes possible to measure the value generated from AI projects. Although measuring the impact of AI projects was once an elusive concept due to shadow AI, establishing AI governance ensures that all stakeholders involved can track the value generated from AI programs.

Conclusion

With the right foundation in place, organizations can establish the appropriate level of auditability and compliance, determine the degrees of transparency and explainability suited to each use case or application, and develop procedures for monitoring the performance of AI systems. Ultimately, this allows teams to focus on creating applications of this technology that generate more value for their organizations. Establishing AI governance gives teams a safe environment in which to innovate and enables CIOs to plan and monitor the performance of AI projects, all while ensuring that AI-enabled systems are built with fairness in mind.

Kirsten Lloyd

Is the founder of Modzy and head of Go-to-Market. In that capacity, she leads Modzy’s research and analysis for product positioning and fit in the market while driving sales enablement, client services awareness and supporting marketing strategies to drive growth and retention. She has authored and contributed to numerous reports on the intersection of ethics and AI, including Assessing the Ethical Risks of Artificial Intelligence, A National Machine Intelligence Strategy for the United States, and Bias Amplification in AI Systems. Her research on ethics and AI has been accepted by conferences such as the Association for the Advancement of Artificial Intelligence (AAAI) Fall 2018 Symposium, the Carnegie Mellon University (Pittsburgh, Pennsylvania, USA) Artificial Intelligence for Data Discovery and Reuse (AIDR) 2019, and the ISACA® 2018 GRC Conference. Prior to her current role, Lloyd worked at Booz Allen Hamilton on the corporate strategy team, identifying new business opportunities for the organization by examining the intersection of emerging economic, technological and environmental trends. Her recommendations helped advance the organization’s existing client capabilities and informed the development of new capabilities.