Algorithms and the Enterprise Governance of AI

Authors: Guy Pearce, CGEIT, CDPSE, and Maureen Kotopski
Date Published: 30 June 2021

Alan Turing (1912-1954), a founding father of artificial intelligence (AI) and the creator of the Turing Test, which assesses whether a computer can be said to think,1 defined algorithms as sets of precise instructions for solving problems.2 Today, the term describes many methods of computation.3 AI can be defined as groups of algorithms that modify themselves and create new algorithms in response to new inputs as part of a mechanism described as “intelligence.”4 A complicating factor is that intelligence has been defined in at least 70 different ways.5

Another definition proposes that AI concerns intelligent agents that perceive their environments and take action to maximize their chances of success.6 So, AI algorithms are different from regular algorithms; whenever the environment presents challenges that their designers never contemplated, AI algorithms must be adapted, perhaps even beyond their original design specifications, by modifying them, adding to them or deleting from them. Merely changing coefficients in a static model does not constitute AI.
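To make the distinction concrete, consider a minimal, hypothetical sketch (all names are illustrative and drawn from none of the sources cited here). The static classifier only ever executes its original instructions, while the self-extending one adds rules its original specification never contained in response to inputs it could not handle:

```python
# A minimal sketch, assuming a toy text-classification task. The static
# function is a Turing-style algorithm: fixed, precise instructions that
# never change. The adaptive class modifies itself, creating new rules in
# response to new inputs, in the spirit of the AI definition above.

def static_classifier(text: str) -> str:
    """Fixed instructions; behavior never changes at runtime."""
    return "spam" if "win a prize" in text else "not spam"

class SelfExtendingClassifier:
    """Extends its own rule set when the environment presents novelty."""
    def __init__(self):
        self.rules = {"win a prize": "spam"}  # original design specification

    def classify(self, text: str) -> str:
        for phrase, label in self.rules.items():
            if phrase in text:
                return label
        return "unknown"  # a challenge the designer never contemplated

    def learn(self, phrase: str, label: str) -> None:
        # Add a rule the original specification did not contain, changing
        # the system's future behavior beyond its initial design.
        self.rules[phrase] = label
```

Exactly where the line falls between routine parameter updates and genuine self-modification remains contested, as the 70-odd definitions of intelligence suggest; the sketch merely illustrates the direction of the distinction.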

It is useful to explore considerations of the enterprise governance of AI algorithms. The primacy of shareholder interests in some economic models is at significant odds with the urgent public interest issues raised by tools such as AI.7 Organizations also face reputational and sustainability risk scenarios when AI algorithms do not perform as expected or, worse, introduce harm. The two drivers of this risk are algorithmic bias and lack of transparency,8 with examples of bias including:9

  • Google’s voice recognition system, designed and tested on men’s voices, having difficulty recognizing women’s voices.
  • Amazon abandoning its AI recruitment program after it was found to have eliminated women from consideration.

In terms of governance and accountability, algorithms make up only one of at least seven AI items that require oversight, with some others being data, goals, outcomes, compliance, influence and usage (figure 1).10

As shown in figure 2, while the conformance aspect of enterprise governance—corporate governance—addresses compliance (e.g., through the audit committee of the board), entity governance, the performance aspect of enterprise governance, is the responsibility of the full board and should cover the other value-creating items highlighted in figure 1. It is helpful to note that the term “entity governance” is used as a generalization of the more commonly used term “business governance” to accommodate not-for-profit or public sector organizations or agencies.

Figure 2 illustrates how, from an enterprise governance perspective, developments in AI can go terribly wrong. If the only board-level control for AI is corporate governance, then in the absence of relevant regulations in a particular jurisdiction, virtually anything goes. From an entity governance perspective, however, the board is positioned to ask management important ethical and societal questions about the nature of the technology—but only if the board has the knowledge to ask these questions. Unfortunately, the latter cannot be assumed, given the poor digital literacy of many boards of directors (BoDs).11

The Diversity and Reach of AI Challenges

A digitally literate board will be familiar with the following elements of AI and AI algorithms as part of its fiduciary responsibilities for enterprise governance:

  • Low-quality AI–The more compromised the data used to train and test an AI system, the more compromised the outcomes of its algorithms will be. For example, notoriously bad data resulted in an AI system making Medicaid cuts to 4,000 disabled people in the US state of Idaho, causing widespread financial hardship.12
  • The black box–AI algorithms are so complex that, in some cases, it is not possible to know why a particular outcome emerged, raising a legal red flag for cases ranging from medical diagnoses to self-driving cars. For example, a failure of Tesla’s AI autopilot system resulted in a fatality, and an autonomous Uber car killed a pedestrian during testing.13
  • Public interest concerns–There is potential for AI to be developed according to its makers’ own private ends, which may not be in the public’s best interest. For example, Mercedes’ AI algorithms were programmed to save the driver and passengers in the event of an accident rather than those outside the car, prompting the question: What if the alternative were driving onto the sidewalk and killing 20 pedestrians? Similarly, Facebook’s AI algorithms have “deliberately” been used to shape public perception of current affairs.14
  • Deferring to AI–There is a general sense that AI is superior to human intelligence. For example, pilots accustomed to aircraft autopilot systems may have been compromised by Boeing’s 737 Max algorithms, whose erroneous decisions unexpectedly left pilots having to fly their aircraft in difficult circumstances.15
  • Low-cost AI–A driver of AI is cost savings relative to the human equivalent, despite AI sometimes proving significantly less accurate than humans. For example, facial recognition algorithms used by the Metropolitan Police in the United Kingdom have resulted in false positive criminal identification more than 90 percent of the time16 (a short worked example of how this arises follows this list).
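The greater-than-90-percent figure is less surprising than it sounds once base rates are considered. A minimal sketch, with assumed numbers rather than the Metropolitan Police’s actual operating figures, shows how a seemingly accurate matcher still produces mostly false alerts when genuine matches are rare:

```python
# Base-rate arithmetic with illustrative numbers (not the Met's own):
# when genuine matches are rare, even a sensitive, low-error matcher
# generates alerts that are overwhelmingly false positives.
scanned = 10_000       # faces scanned at an event
genuine = 10           # watchlist members actually present (0.1%)
sensitivity = 0.90     # share of genuine matches correctly flagged
fp_rate = 0.01         # share of innocent faces incorrectly flagged

true_alerts = genuine * sensitivity                        # 9.0
false_alerts = (scanned - genuine) * fp_rate               # 99.9
share_false = false_alerts / (true_alerts + false_alerts)  # ~0.92

print(f"{share_false:.0%} of alerts are false positives")  # ~92%
```

Boards need not run the numbers themselves, but recognizing this base-rate effect helps them probe whether a vendor’s headline accuracy translates into accurate decisions in deployment.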

There are also unintended effects of AI on society to consider, such as errors that may not yet be visible in emerging use cases, the loss of critical thinking and understanding, the risk of nefarious algorithm manipulation, and the erosion of the human touch and of human judgement in critical decision-making.17 This list does not even include the privacy and audit challenges inherent to AI. All of these are enterprise governance concerns.

Ethical issues with AI development are a significant concern, among them the concentration of resources in a small number of powerful organizations such as Uber, Amazon, Facebook, Microsoft, Google, Apple, IBM and Tesla, where governance is “…unusually autocratic and lacking in accountability.”18 That control rests with a small group of insiders with strong interrelationships and similar interests introduces bias and prompts questions about whether the needs of a few could compromise the public interest of the many.19

While regulation could reduce the public risk posed by AI, governments contend that this may stifle innovation and reduce competitive advantage. It is also difficult to create regulations before a new variant of AI is deployed, and it is complex to introduce specific AI regulations after the fact, given the potential introduction of a liability gap.20 This is a challenge for the role of corporate governance in AI oversight (figure 2), which in some organizations may be the only level of AI oversight performed.

BOARD DIRECTORS’ AWARENESS OF HOW THEIR ORGANIZATION IS CREATING, USING OR SELLING AI IS CRITICAL.

The Call to Action for the Board

Board directors’ awareness of how their organization is creating, using or selling AI is critical. This can be achieved through compliance, strategic planning, or traditional legal and business risk oversight. A board has the fiduciary responsibility to proactively ensure the ethical deployment of AI.

As with any new area of learning, start at square one. An AI exploration item should be added to board and management meeting agendas, and time should be dedicated to ensuring that the right people are in the room to explore questions such as:

  • What is AI? The definition and scope can differ from organization to organization; stakeholders should never start from an assumption.
  • Who is leading and governing AI?
  • How will AI change/impact how the organization does business?
  • Does the management team have a shared perspective?
  • What is the AI strategy?
  • What is the benefit of a strong AI strategy in 3-5 years?
  • Does the organization have the processes and people to lead this strategy?
  • How will AI make a difference?
  • What could be the strategic advantage?

These questions will drive insightful conversations and provide the board with a better and shared understanding of the AI landscape and the organization’s approach to AI.

WHILE THE BENEFITS OF AI ARE BECOMING CLEARER, WHAT IS NOT SO APPARENT IS A GENERAL UNDERSTANDING OF THE TECHNOLOGY’S RISK PROFILE.

This conversation will naturally flow toward the topics of risk and governance. Although AI adoption is increasing, overseeing and mitigating its risk remains an unresolved and urgent task. While 41 percent of respondents to a global AI survey said that their organizations “comprehensively identify and prioritize” the risks associated with AI deployment, fewer actually mitigate them.21

Risk oversight is increasingly complicated and, as such, board directors need to know what questions to ask. It is worth repeating that starting with the basics will help create strong foundational knowledge:

  • Who is overseeing the use of AI and related risk? Is it the same team leading the initiative?
  • To what degree is the organization employing human monitoring?
  • Who has developed the audit policies for the AI models?
  • Is the organization vulnerable to attack? What risk plans are in place to address this threat?

Note that at a national level in Canada, 50 percent of respondents to a survey of enterprise directors on key political, social and economic issues impacting organizations and the country identified AI and automation as top issues for the country (and, thus, for policy), but only 28 percent considered them a critical challenge for their own organizations.22 For those taking up the challenge, AI’s potential to deliver significant benefits in the private and public sectors introduces new and complex risk. Increasing the board’s fluency with, and visibility into, AI is good governance. A board, its committees and individual directors can approach this as a matter of strict compliance, strategic planning, or traditional legal and business risk oversight.

Understanding the privacy and ethical aspects of AI is also challenging for BoDs. If there were an issue, who would be held accountable for the unintended outcomes of AI, and who would be responsible for making things right? Are the AI systems and algorithms open to inspection, and are the resulting decisions fully explainable?

BoDs can ensure that AI is on their internal agenda and that recruitment for new directors explicitly accounts for skill set gaps. Governance practices must evolve with respect to AI, including identifying the education required to keep directors up to date with the rapidly changing AI landscape.

Finally, board oversight must require corporate policies that delineate what the various AI systems will be used for, and must confirm that management (not just IT) is sufficiently focused and properly resourced to manage AI compliance and risk.

The Call to Action for Management

While the benefits of AI are becoming clearer, what is not so apparent is a general understanding of the technology’s risk profile. A savvy board director might already understand why AI should be on the agendas of the risk, ethics and audit committees, not just the IT committee. That agenda should include items such as fairness, transparency, accuracy and security, as well as governance and accountability practices such as leadership engagement, organizational structures and training.23 Possible management responses to these oversight imperatives could include:

  • Ensuring that data quality programs are in place and that the data used represent what should be rather than what is, to limit bias24 (incomplete data, no matter how clean, will be nonrepresentative and will therefore introduce bias)
  • Building diagnostic components into AI algorithms before deployment25
  • Convening broad and diverse stakeholder groups to evaluate the efficacy of proposed AI algorithms
  • Establishing regular checks and balances for AI decision-making vs. human decisions
  • Establishing human validation for critical AI-driven decision-making (a minimal sketch of such controls follows this list). With most AI-driven cost decreases to date having been in supply chain, manufacturing and service operations,26 it will be important to identify areas where false-positive decision-making carries a high risk of significant negative impact.
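As a hypothetical illustration of the diagnostic and human-validation items above, the following code checks whether false-positive rates diverge across groups in labeled test data and routes high-impact or low-confidence decisions to a human reviewer. The class, the function names, the thresholds and the five-percentage-point disparity cap are all assumptions made for illustration, not prescribed controls:

```python
# A minimal sketch, assuming scored binary decisions with a group label
# and known ground truth for a labeled test set. Thresholds are assumed.
from dataclasses import dataclass

@dataclass
class Decision:
    score: float   # model confidence in the positive outcome (0.0-1.0)
    group: str     # demographic group label used for the diagnostic
    actual: bool   # ground-truth outcome, known for the test set

def false_positive_rates(decisions, threshold=0.5):
    """Diagnostic component: false-positive rate per group on test data."""
    fp, negatives = {}, {}
    for d in decisions:
        if not d.actual:  # only actual negatives can yield false positives
            negatives[d.group] = negatives.get(d.group, 0) + 1
            if d.score >= threshold:
                fp[d.group] = fp.get(d.group, 0) + 1
    return {g: fp.get(g, 0) / n for g, n in negatives.items()}

def disparity_ok(rates, max_gap=0.05):
    """Pre-deployment check: fail the release if group rates diverge."""
    return max(rates.values()) - min(rates.values()) <= max_gap

def route(score, high_impact, confidence_floor=0.9):
    """Human-validation gate: automate only confident, low-impact calls."""
    if high_impact or score < confidence_floor:
        return "human_review"
    return "automated"
```

In practice, such checks would sit within a broader model risk framework, but even this toy version makes the controls concrete: a diagnostic that can block a deployment, and a gate that preserves human judgement where the stakes are high.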

A key part of the management agenda is ensuring that the chief executive officer (CEO) is appropriately informed of management’s activities, findings and concerns with respect to AI deployments.

Conclusion

Senior sector leaders are beginning to reprimand large, algorithm-based enterprises that mislead “…users on data exploitation, [and] on choices that are no choices at all,” some of which are attempting to find out how much they can get away with rather than considering the societal consequences of their activities.27 While there seems to be some reluctance to regulate AI in certain jurisdictions, “[h]igh-profile AI failures will reduce consumer trust and only serve to increase future regulatory burdens. These are best avoided through proactive measures today.”28

As part of this proactivity, calls to action were made herein both for the board and for management, the latter headed by the CEO, to whom the board delegates authority and who is appointed to create value (entity governance in figure 2) within the parameters set by the board. Part of these board parameters are the checks and balances needed to ensure that AI is deployed responsibly within the organization or within the products it creates and sells.

MANY ARE IN FAVOR OF GOOD AI AND, BY EXTENSION, THE STRONG ENTERPRISE GOVERNANCE (RATHER THAN ONLY THE CORPORATE GOVERNANCE) OF AI ALGORITHMS.

So, what is the cost of a lack of enterprise governance of AI algorithms? Granted, AI is only 65 years old and still very much in its infancy, but disruptive scandals such as those of the roles of Cambridge Analytica and Facebook in influencing the outcome of the 2016 US presidential election offer insight into the power technology has to tear at the very social fabric that binds society—just as much as it has the potential to mend and strengthen it.29

Many are in favor of good AI and, by extension, the strong enterprise governance (rather than only the corporate governance) of AI algorithms. The hope is that those in the position to shape it all, the BoDs of organizations engaged in implementing AI, share this aspiration, and that they skill up significantly in this important area before potentially damaging organizational or societal risk scenarios are realized.

Endnotes

1 Copeland, B. J.; “Alan Turing,” Britannica, https://www.britannica.com/biography/Alan-Turing
2 Petzold, C.; The Annotated Turing, Wiley, USA, 2008
3 McFadden, C.; “The Origin of Algorithms We Use Every Single Day,” Interesting Engineering, 5 September 2020, https://interestingengineering.com/origin-algorithms-use-every-day
4 Ismail, K.; “AI vs Algorithms: What’s the Difference,” CMS Wire, 6 October 2018, https://www.cmswire.com/information-management/ai-vs-algorithms-whats-the-difference/
5 Larsson, S.; F. Heintz; “Transparency in Artificial Intelligence,” Internet Policy Review, vol. 9, iss. 2, 5 May 2020, https://policyreview.info/concepts/transparency-artificial-intelligence
6 Kallem, S. R.; “Artificial Intelligence Algorithms,” IOSR Journal of Computer Engineering, vol. 6, iss. 3, September/October 2012, https://www.researchgate.net/profile/Sreekanth_Reddy_Kallem/publication/314564271_Artificial_Intelligence_Algorithms/links/5f4863fa92851c6cfdee155c/Artificial-Intelligence-Algorithms.pdf
7 Dignam, A.; “Artificial intelligence, Tech Corporate Governance and the Public Interest Regulatory Response,” Cambridge Journal of Regions, Economy and Society, vol. 13, iss. 1, March 2020, https://academic.oup.com/cjres/article/13/1/37/5813462?login=true
8 Butcher, J.; I. Beridze; “What Is the State of Artificial Intelligence Governance Globally?” The RUSI Journal, vol. 164, 2019, p. 5–6, https://www.tandfonline.com/doi/pdf/10.1080/03071847.2019.1694260?needAccess=true
9 Op cit Dignam
10 Op cit Larsson and Heintz
11 Pearce, G.; “Digital Transformation: Boards Are Not Ready for It,” ISACA® Journal, vol. 5, 2018, https://www.isaca.org/archives
12 Op cit Dignam
13 Ibid.
14 Ibid.
15 Ibid.
16 Ibid.
17 Brahm, C.; “Tackling AI’s Unintended Consequences,” Bain & Company, 3 April 2018, https://www.bain.com/insights/tackling-ais-unintended-consequences/
18 Op cit Dignam
19 Ibid.
20 Op cit Butcher and Beridze
21 Cam, A.; M. Chui; B. Hall; “Global AI Survey: AI Proves Its Worth, but Few Scale Impact,” McKinsey & Company, 22 November 2019, https://www.mckinsey.com/featured-insights/artificial-intelligence/global-ai-survey-ai-proves-its-worth-but-few-scale-impact
22 Institute of Corporate Directors, Directorlens Survey Spring 2019, Canada, 2019, https://www.icd.ca/ICD/media/documents/2019-Spring.pdf
23 Hosanagar, K.; “Why Audits Are the Way Forward for AI Governance,” Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania, USA, 4 November 2019, https://knowledge.wharton.upenn.edu/article/audits-way-forward-ai-governance/
24 Shin, T.; “Real-Life Examples of Discriminating Artificial Intelligence,” Towards Data Science, 4 June 2020, https://towardsdatascience.com/real-life-examples-of-discriminating-artificial-intelligence-cae395a90070
25 Op cit Dignam
26 Statista, “Cost Decreases From Adopting Artificial Intelligence (AI) in Organizations Worldwide as of Fiscal Year 2019, by Function,” November 2020, https://www.statista.com/statistics/1083516/worldwide-ai-cost-decrease/
27 Bariso, J.; “Tim Cook May Have Just Ended Facebook,” Inc., 30 January 2021, https://www.inc.com/justin-bariso/tim-cook-may-have-just-ended-facebook.html
28 Op cit Hosanagar
29 Sabbagh, D.; “Trump 2016 Campaign ‘Targeted 3.5M Black Americans to Deter Them From Voting’,” The Guardian, 28 September 2020, https://www.theguardian.com/us-news/2020/sep/28/trump-2016-campaign-targeted-35m-black-americans-to-deter-them-from-voting

Guy Pearce, CGEIT, CDPSE

Has served on governance boards in banking, in financial services and at a not-for-profit, and has served as chief executive officer (CEO) of a multinational financial services organization. His interest in artificial intelligence (AI) arose with natural language processing (NLP) in Prolog in the late 1980s and was revitalized by emerging AI technologies. Consulting in digital transformation, data and governance, Pearce readily shares his experience as an author and speaker and received the 2019 ISACA® Michael Cangemi Best Author award for contributions to IT governance. He serves as chief digital officer and chief data officer at Convergence.Tech, a Canada-based digital transformation organization operating globally.

Maureen Kotopski

Is an experienced national board director who has served as the chair of governance and policy and on human resources committees. With board certifications from the Rotman Institute of Corporate Directors (ICD) (Toronto, Ontario, Canada), she is a proven leader in maturing board governance and policy. Kotopski currently works at a technology consulting firm that focuses on big data and financial crimes technology. She has previously led complex technology initiatives spanning system implementations, compliance, strategy and organizational design. Kotopski’s experience in enterprise technology and leveraging data for business benefit has supported her career and board contributions.