Acknowledging Humanity in the Governance of Emerging Technology and Digital Transformation

Author: Guy Pearce, CGEIT, CDPSE
Date Published: 1 July 2019

Artificial intelligence (AI) and robotics have long captured the human imagination. A quick search for AI and robotics in pop culture finds a thread that stretches back to Mary Shelley’s Frankenstein, first published 201 years ago, and extends to cult classics in television and film such as 2001: A Space Odyssey, Star Trek, WarGames, The Hitchhiker’s Guide to the Galaxy, Tron, The Matrix, Ex Machina and more.

Depending on how one defines AI, it has certainly had its ups and downs since 1956.1 Approximately 30 years ago, AI activity was at a peak, with a strong interest in languages such as Prolog. Interest in AI waned soon thereafter, only to come back stronger in the 21st century. Prolog is still regarded as one of the best languages for AI programming.2

AI is, however, but one class of emerging technology touted as instrumental for digital transformation. Many emerging technologies are not IT-related (e.g., lab-grown meat), and only a handful are expected to enable organizations to outperform their peers. In this respect, an analysis of more than 250 emerging technologies identified the eight most likely to change the way organizations do business: AI, augmented reality, blockchain, drones, Internet of Things (IoT), robotics, virtual reality and 3D printing (figure 1).3

Amid this excitement, the question for IT governance practitioners is whether they are well enough prepared to appropriately govern emerging technologies and their contributions to digital transformation. The short answer? Perhaps not.

The Risk Domain of IT Governance

The risk factors involved in deploying any information technology are numerous. Concerns include navigating the complexity of integrating the new technology with legacy systems, omnipresent data challenges, questions about how the security of the new system fits the existing information security paradigm, and operational gaps that require new supporting IT and business processes to be developed and integrated.

There is also vendor risk, especially if the vendor has a desirable technology but the technology, the vendor or both are relatively unproven. And one should not forget business case risk, i.e., the uncertainty about whether the technology will eventually realize the business benefits proposed for it—a risk amplified by emerging technology.

The accelerating pace of privacy concerns and privacy regulation globally raises more compliance risk and even more questions, while a further issue is emerging rapidly: the ethical use of technology and/or its data, especially in AI applications.

These risk factors and issues are compounded for emerging digital technologies because their unknowns are greater than those of mature technologies. The requirements for the appropriate oversight of digital transformation and emerging technologies intensify the already demanding requirements for good risk management knowledge, experience and instincts in today’s IT governance professionals.

Culture’s Role in Effective IT Governance Has Been Established

As complex as the previously mentioned risk factors are, they are still much easier to manage than the human factors. The human factors are so significant that the Financial Stability Board (FSB), the global financial services regulatory body, has been explicit about the role of culture (the set of human behaviors and norms that an organization finds acceptable) in establishing effective risk management, being the first regulator to do so. In particular, the FSB speaks of setting the appropriate tone at the top; in other words, the organization’s leadership should set the behavioral example for the organization to follow.

There is increasing recognition that culture is integral to everything an organization does. Some have referred to the tone at the top as “the first ingredient in a world-class ethics and compliance program.”4 It is little wonder that the concept is the subject of international attention, even becoming part of corporate governance codes such as South Africa’s King IV.5 Without a determined tone at the top, ethics and culture initiatives lose their impact. What is the impact of ethics and integrity programs in organizations where “more and more CEOs [are] leaving their role amid accusations of ethics breaches and lack of integrity?”6

On a related note, corporate culture has been found to be the most significant critical success factor (CSF) for effective enterprise IT governance.7 Furthermore, International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) standard ISO/IEC 38500—a key IT governance framework—specifically warns not to underestimate the human element.8 Understanding the organization’s personality traits—its culture—is key to establishing effective IT governance.9

As critical as culture has been found to be, it is but one important component of the human element of both effective risk management and effective IT governance. The other important component is ethics, a field of study that assesses the morality of the organization’s behaviors.

Where Culture and Ethics Collide

The difference between culture and ethics can be demonstrated by the recent Volkswagen emissions scandal, in which the global automaker deemed it acceptable to design software to cheat US diesel emission standards.10 Ethics is concerned with the morality of solving the diesel emissions problem by cheating. That Volkswagen was consciously unethical is the root of the scandal.

For an emerging information technology example, Amazon had, since 2014, used a software tool that analyzed job applicants’ résumés to help identify the best candidates for a role. So far, so good, as pretty much all large-scale recruiters scan résumés for keywords, even if their tools are not as sophisticated as an AI engine. A problem emerged when Amazon’s AI tool was found to discriminate against women.11

The ethical issue for Amazon is not the practice of algorithmically identifying potentially the best candidates for a role, but rather that the data used to train machine learning models contains biases that humans may not even be aware of, a matter of being unconsciously unethical. In fact, a recent headline notes that AI is plagued by human bias.12
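
To make the bias issue concrete, the following is a minimal sketch in Python of the kind of adverse-impact check an oversight function could ask for before a screening model goes live. The data, group labels and the four-fifths (80 percent) threshold are illustrative assumptions and are not details of Amazon’s tool.

```python
# A minimal sketch of an adverse-impact check for an automated resume screener.
# The outcomes, group labels and the four-fifths threshold are illustrative
# assumptions, not details of any real recruiting system.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, where selected is True/False."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag whether each group's selection rate is at least 80% of the highest rate."""
    best = max(rates.values())
    return {group: rate / best >= threshold for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes: (self-reported gender, shortlisted or not)
    outcomes = (
        [("women", True)] * 18 + [("women", False)] * 82
        + [("men", True)] * 35 + [("men", False)] * 65
    )
    rates = selection_rates(outcomes)
    print(rates)                     # {'women': 0.18, 'men': 0.35}
    print(four_fifths_check(rates))  # women fail the four-fifths test in this sample
```

The value of such a check lies less in its technical sophistication than in the fact that someone with oversight accountability asked the question before deployment.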

The ethics question is, therefore, what needs to be done to protect people against unconscious or conscious biases as tomorrow’s thinking machines are created, machines whose very creation is itself regarded in some circles as an ethical dilemma. The difference between Volkswagen and Amazon in these examples is that Volkswagen knew about and supported its misguided endeavors (setting an unethical tone at the top), while Amazon seems to have suffered the problem plaguing many AI initiatives, that of unconscious bias.13

Volkswagen is not alone in being consciously unethical. Facebook, for example, has been implicated in the spread of fake and incendiary news designed to be divisive, with its chief executive officer (CEO) offering only that he would make it harder for such manipulation to occur in the future. Other examples of consciously unethical behavior include very large organizations producing “unambiguously harmful products” (Monsanto), opening fake bank accounts (Wells Fargo) or inflating medical costs (Cigna).14 The scale of these matters demonstrates the extent to which the culture of each organization condoned such activities.

There are growing concerns about the ethical implications of developments in digital transformation and emerging technologies such as AI, not only from the public, but also from technology organizations and lawmakers.15

Ethics: IT Governance’s Next Frontier?

Ethics can emerge as an issue whenever technology—not just information technology or new technology—is introduced. The conversation extends beyond the technologies to the data they produce, which raises all manner of security and privacy concerns. The concerns involve not only hardware (e.g., IoT visual sensors such as cameras), but also software (e.g., analytics practices). It has been argued that the production of, access to and control of information will be at the heart of ethical issues in IT.16 This is true not only in the private sector, but in the public sector as well. In the case of the latter, the challenge is managing community expectations and perceptions of fairness; in other words, ethics by design.17

Ethics as a subject is growing in stature in IT because emerging technologies have given people more power to act, resulting in people having to make choices that they did not have to make before. While people were previously implicitly constrained by their lack of knowledge or capability, they now have to be voluntarily constrained by their ethics.18

The most significant of the many reasons cited for the failure of the US$1,500 Google Glass earlier this decade was privacy concerns,19, 20 since anyone could become the subject of a wearer’s unwelcome video or photos.21 “Creepy” was an adjective used to describe the product.22

Of the many articles on the failure of Google Glass, one of the most telling is titled “Google Glass Wasn’t a Failure. It Raised Crucial Concerns,”23 in which the author argued that Glass’s failure was not so much a failure of the product as a victory for human beings, who are increasingly discovering that there are limits to where technology should play a role in their lives. Privacy was the concern raised immediately—not marketing, not value propositions, not product design—but privacy.24 The other reasons for failure discussed in marketing schools today came only much later.

In Toronto, Canada, a current tech controversy concerns what Sidewalk Labs—an Alphabet subsidiary, like Google, YouTube and Google X—is really up to in its desire to create a hyperefficient, sensor-rich (and, therefore, data-driven) community on the Toronto waterfront,25 a development in which all vehicles are expected to be autonomous and shared, while robots perform menial chores.26 Controversy ignited when globally renowned former Ontario privacy commissioner Ann Cavoukian resigned from her role as a consultant to Sidewalk Labs “to make a strong statement”27 on discovering that Sidewalk Labs would not guarantee that everyone on the project would de-identify data at source, as she had originally been told, only that de-identification would be encouraged, a state of affairs she described as “not good enough.”28
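
To illustrate what “de-identifying data at source” can mean in practice, the following is a minimal sketch in Python of a sensor event being pseudonymized and coarsened on the collection device, before any record is transmitted. The field names, salting approach and precision choices are illustrative assumptions, not Sidewalk Labs’ actual design.

```python
# A minimal sketch of de-identification at source: direct identifiers are
# pseudonymized and precise location is coarsened on the device, before any
# record leaves it. Field names and methods are illustrative assumptions.
import hashlib
import os

# A per-deployment secret salt. In practice it would be generated and managed
# securely on the collection device and never transmitted.
SALT = os.urandom(16)

def pseudonymize(identifier):
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:16]

def coarsen(lat, lon, places=2):
    """Reduce location precision to roughly the neighborhood level."""
    return round(lat, places), round(lon, places)

def deidentify_at_source(event):
    """Transform a raw sensor event into the only form allowed to leave the device."""
    return {
        "device": pseudonymize(event["device_id"]),
        "location": coarsen(event["lat"], event["lon"]),
        "timestamp_hour": event["timestamp"][:13],  # keep the hour, drop minutes and seconds
        "reading": event["reading"],
    }

if __name__ == "__main__":
    raw = {
        "device_id": "CAM-0042",
        "lat": 43.650712,
        "lon": -79.358041,
        "timestamp": "2019-05-14T08:27:33Z",
        "reading": "pedestrian_count=12",
    }
    print(deidentify_at_source(raw))
```

The governance point is that the guarantee Cavoukian sought, de-identification applied by everyone at the point of collection, is a design and oversight commitment rather than merely a technical one.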

While it is incidental that both of these examples involve Google or its sister companies, together they illustrate that an increasing number of important questions are being asked about the appropriate use of emerging technology in society, exactly the kind of ethics questions that those accountable for the oversight of their organizations should be asking.

That new technologies—such as some applications of AI—may be more efficient does not necessarily make them morally better.29, 30 This idea, potentially 500 years in the making, recognizes that while AI has the potential to increase productivity, reduce poverty and boost affluence, it can also make society worse off if the wrong choices are made.31 The World Economic Forum raises nine categories of ethical and risk consequences related to the efficiencies introduced by AI:32

  • Unemployment brought about by increased automation (efficiency)
  • Inequality through a new form of the concentration of power in AI organizations
  • Humanity, specifically how machines affect human behavior and interaction
  • Artificial stupidity and guarding against unintended consequences (risk)
  • Racist robots and other biases
  • Security and protecting against technologies being used for bad
  • The unforeseen consequences of AI-driven automation
  • Singularity, when humans are no longer the most intelligent beings on Earth
  • Robot rights and the growing conversation about the legal status of intelligent robots

Privacy should also be a candidate for this list. For now, the top ethical principles proposed for digital transformation across a variety of technologies are:33

  • Explicitly designing for privacy, security and integrity—Regulations such as the EU General Data Protection Regulation (GDPR) have this covered through clauses on data security and data privacy by design, with a paragraph on accuracy representing integrity.
  • Promoting trust—There is quite a long way to go based on the examples used in this section.
  • Acknowledging and addressing the kinds of biases raised earlier—The issue at stake is individual or community confirmation bias, the subconscious frames of reference that influence how people gather, interpret and act on data about the world, as illustrated by the Amazon example.

Ultimately, those who interact with a digitally transformed organizational ecosystem will expect much more in terms of transparency and fairness from those organizations.34

There is a call for the stronger oversight (governance) of AI to protect against the unethical uses of technology.35 Are IT governance practitioners prepared for this level of oversight? Again, perhaps not, because there do not seem to be clear guidelines on what actually constitutes the ethical governance of digital transformation and emerging technologies such as AI, at least not in the same way that the CGEIT Review Manual is definitive about IT governance frameworks, strategic alignment, business cases, risk and resourcing. In particular, is there an IT governance construct that guides people with respect to assessing the impact of emerging technology such as AI on, for example, the organization’s honesty, transparency, accountability, responsibility, independence, fairness and social responsibility?36

A Gap in GEIT Coverage?

Emerging technology and digital transformation exert new pressures on IT governance in the pursuit of (ethically) increasing organizational competitiveness and sustainability. The ISACA CGEIT Review Manual speaks of the five governance of enterprise IT (GEIT) domains shown in figure 2.

The topics of ethics and culture contained within GEIT’s resourcing domain are enterprise constructs, as are GEIT’s other domains. Should they not, therefore, constitute a domain in their own right? Culture and ethics impact the entire organization, just as strategic alignment, business cases, and risk and resourcing do.

“Good governance is part of ethics.”37 In other words, ethics does not apply to just a portion of good governance, as the current positioning of ethics and culture within the GEIT resourcing domain suggests, but to all of governance. Moreover, compliance ethics and culture were found empirically to influence the level of IT governance (all of IT governance),38 not the other way around.

Conclusion

Because ethical dilemmas exist wherever humans, information and information systems interact,39 “[t]he future of the computing profession depends on both technical and ethical excellence.”40

In addition to the ethics concerns covered previously, issues such as privacy, bias, lack of transparency and biotechnology were raised only briefly, and there are a host of other IT ethics issues in areas such as cybersecurity, copyright infringement and plagiarism, and even the digital divide.

Ethics and culture are significant constructs in the context of today’s information technology. Ethics and culture, just like strategic alignment, business cases, and risk and resourcing, are all enterprise-level GEIT constructs that should, perhaps, be governed with the same level of diligence. It seems that the risk of seeing IT ethics and culture only through the tiny lens of the resourcing domain is too great.

Endnotes

1 Crevier, D.; AI: The Tumultuous Search for Artificial Intelligence, Basic Books, USA, 1993, p. 47-49
2 RankRed, “Eight Best Artificial Intelligence Programming Language in 2019,” 3 January 2019, https://www.rankred.com/best-artificial-intelligence-programming-language/
3 PricewaterhouseCoopers, “The Essential Eight: Your Guide to the Emerging Technologies Revolutionizing Business Now,” https://www.pwc.com/gx/en/issues/technology/essential-eight-technologies.html
4 Mohlenkamp, M.; “The First Ingredient in a World-Class Ethics and Compliance Program,” Deloitte Perspectives, https://www2.deloitte.com/us/en/pages/risk/articles/tone-at-the-top-the-first-ingredient-in-a-world-class-ethics-and-compliance-program.html
5 Ibid.
6 Atkins, B.; “Business Ethics and Integrity: It Starts With the Tone at the Top,” 7 February 2019, Forbes, https://www.forbes.com/sites/betsyatkins/2019/02/07/business-ethics-and-integrity-it-starts-with-the-tone-at-the-top/#3839d0a457c6
7 Pearce, G.; “The Sheer Gravity of Underestimating Culture as an IT Governance Risk,” ISACA Journal, vol. 3, 2019, https://www.isaca.org/archives
8 Robinson, N.; “Organizational Profiling: A Path to Effective IT Governance,” Cutter IT Journal, vol. 22, no. 12, January 2010, https://nicholasrobinson.files.wordpress.com/2010/01/itj0912_nr.pdf
9 Ibid.
10 Armstrong, R.; “The Volkswagen Scandal Shows That Corporate Culture Matters,” Financial Times, 13 January 2017, https://www.ft.com/content/263c811c-d8e4-11e6-944b-e7eb37a6aa8e
11 Dastin, J.; “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, 9 October 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
12 Bradley, S.; “All the Creepy, Crazy and Amazing Things That Happened in AI in 2017,” Wired, 20 December 2017, https://www.wired.co.uk/article/what-happened-in-ai-in-2017
13 Bloomberg, J.; “Bias Is AI’s Achilles Heel. Here’s How to Fix It,” Forbes, 13 August 2018, https://www.forbes.com/sites/jasonbloomberg/2018/08/13/bias-is-ais-achilles-heel-heres-how-to-fix-it/#5b7f67256e68
14 Stebbins, S.; E. Comen; M. B. Sauter; C. Stockdale; “Bad Reputation: America’s Top 20 Most-Hated Companies,” USA Today, 1 February 2018, https://www.usatoday.com/story/money/business/2018/02/01/bad-reputation-americas-top-20-most-hated-companies/1058718001/
15 Simonite, T.; “The AI Text Generator That’s Too Dangerous to Make Public,” Wired, 14 February 2019, https://www.wired.com/story/ai-text-generator-too-dangerous-to-make-public/
16 Stanford Encyclopedia of Philosophy, “Information Technology and Moral Values,” Stanford University, California, USA, 12 June 2012, https://plato.stanford.edu/entries/it-moral-values/
17 Menachemson, D.; “The New Digital Ethics: How to Survive and Prosper in Times of Unprecedented Transformation,” The Mandarin, 14 January 2019, https://www.themandarin.com.au/101779-the-new-digital-ethics/
18 Green, B. P.; “What Is Technology Ethics?” Markkula Centre for Applied Ethics, Santa Clara University, California, USA, https://www.scu.edu/ethics/focus-areas/technology-ethics/
19 Arthur, C.; “Google Glass: Is It a Threat to Our Privacy?” The Guardian, 6 March 2013, https://www.theguardian.com/technology/2013/mar/06/google-glass-threat-to-our-privacy
20 Kelly, H.; “Google Glass Users Fight Privacy Fears,” CNN Business, 12 December 2013, https://www.cnn.com/2013/12/10/tech/mobile/negative-google-glass-reactions/index.html
21 Doyle, B.; “Five Reasons Why Google Glass Was a Miserable Failure,” Business 2 Community, 28 February 2016, https://www.business2community.com/tech-gadgets/5-reasons-google-glass-miserable-failure-01462398
22 Munarriz, R.; “Three Reasons Google Glass Failed,” AOL, 19 January 2015
23 Eveleth, R.; “Google Glass Wasn’t a Failure. It Raised Crucial Concerns,” Wired, 12 December 2018, https://www.wired.com/story/google-glass-reasonable-expectation-of-privacy/
24 Makarechi, K.; “Google Executive Explains Why Google Glass Didn’t Take Off,” Vanity Fair, March 2015, https://www.vanityfair.com/news/2015/03/google-glass-failures
25 Baron, J.; “Tech Ethics Issues We Should All Be Thinking About In 2019,” Forbes, 27 December 2018, https://www.forbes.com/sites/jessicabaron/2018/12/27/tech-ethics-issues-we-should-all-be-thinking-about-in-2019/#52cd52574b21
26 MIT Technology Review, “10 Breakthrough Technologies 2018,” https://www.technologyreview.com/lists/technologies/2018/
27 O’Shea, S.; “Ann Cavoukian, Former Ontario Privacy Commissioner, Resigns From Sidewalk Labs,” Global News, 21 October 2018, https://globalnews.ca/news/4579265/ann-cavoukian-resigns-sidewalk-labs/
28 CBC News, “‘Not Good Enough’: Toronto Privacy Expert Resigns From Sidewalk Labs Over Data Concerns,” CBC, 21 October 2018, https://www.cbc.ca/news/canada/toronto/ann-cavoukian-sidewalk-data-privacy-1.4872223
29 Op cit Green
30 Mullane, M.; “Tackling the Ethical Challenges of AI,” IEC e-tech, 17 September 2018, https://medium.com/e-tech/the-ethics-of-artificial-intelligence-7a0cf0d9a2cf
31 Ibid.
32 Bossman, J.; “Top Nine Ethical Issues in Artificial Intelligence,” World Economic Forum, 21 October 2016, https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/
33 Yardley, D.; “Are You Making Ethical Decisions During the Digital Transformation Process?” Kogan Page, 29 March 2018, https://www.koganpage.com/article/essential-ethics-for-digital-transformation
34 Ibid.
35 SAS, “Three Essential Steps for AI Ethics,” https://www.sas.com/en_us/insights/articles/analytics/artificial-intelligence-ethics.html
36 Price, N. J.; “Achieving Strong Corporate Governance Through Technology,” Diligent Insights, 24 April 2018, https://insights.diligent.com/corporate-governance/achieving-strong-corporate-governance-through-technology/
37 Herrod, C.; A. Parks; “Strong IT Governance: Ethical Arguments and GRC Convergence Strategies,” SCCE’s 7th Annual Compliance and Ethics Institute, September 2008, https://community.corporatecompliance.org/HigherLogic/System/DownloadDocumentFile.ashx?DocumentFileKey=738ccc29-e5f0-4cf4-9ebf-065cc8da7b1c
38 Ibid.
39 Conway, P.; “Ethics, Information Technology and Today’s Undergraduate Classroom,” Proceedings of the Third Annual iConference, University of Michigan, USA, 2008, https://deepblue.lib.umich.edu/bitstream/handle/2027.42/85225/C01%20Conway%20Ethics%20IT%20Undergraduate%202008.pdf;sequence=1
40 Association for Computing Machinery (ACM), “ACM Code of Ethics and Professional Conduct,” https://ethics.acm.org/code-of-ethics/

Guy Pearce, CGEIT
Has served on various enterprise boards and as chief executive officer of a multinational retail credit operation. This experience provides him with rich insights into the real-world expectations of governance, risk, IT and data. Capitalizing on two decades of corporate digital transformation experience, he instructs a digital transformation course at the University of Toronto (Ontario, Canada) School of Continuing Studies, targeting boards and the C-suite, based on a gap he identified while researching a recent article published in the ISACA Journal. He serves as an independent consultant in enterprise digital transformation and is the recipient of the 2019 ISACA Michael Cangemi Best Book/Author Award.