Generative AI: The Future Is Now, and It’s Risky

Author: Jon Brandt, Director, Professional Practices and Innovation, ISACA
Date Published: 4 May 2023

Generative artificial intelligence (AI) is making headlines and will continue to do so for the foreseeable future. We live in a society perpetually in awe of gadgets and the promise of work/life hacks or the proverbial easy button. From my vantage point, far too many users are starstruck by AI capabilities and are putting enterprises at substantial risk. As the adage goes, “If something sounds too good to be true, it probably is.”1 As with so many technological advances, generative AI is not without issues. Influential leaders have already requested a pause on development; at least in the United States, that request appears to have fallen on deaf ears.2 In a perfect world, a pause would happen with all developers agreeing to the hiatus, but we live in an imperfect world fueled by the pursuit of competitive edges. Enterprises that have not already addressed this growing threat are behind.

While many are hyperfocused on ChatGPT, it is important to note that it is but one of many generative AI tools currently on the market, and there will be more. Given that risk assessments already occur at less-than-desired intervals for the modern threat landscape,3 enterprises of all sizes would be wise to ban generative AI, if only temporarily, pending a thorough risk assessment and subsequent issuance of formal guidance. Administrative controls can assist enterprises with operational effectiveness, efficiency and adherence to regulations and management policies.4 Of course, administrative controls (e.g., policies, security education and awareness training) by themselves will not protect intellectual property or otherwise sensitive data. Effective use of technical controls is also a key piece of the puzzle, and it necessitates an understanding of what data are collected, classified, processed and stored.
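Where a temporary ban is the chosen control, enforcement can start small. What follows is a minimal sketch in Python of an egress-filter check against a deny list of generative AI domains. The domain entries and the is_request_allowed helper are hypothetical illustrations, assuming a web proxy or gateway that can invoke such a check; this is not a vetted product configuration.

from urllib.parse import urlparse

# Hypothetical deny list of generative AI hosts; an enterprise would
# maintain and update its own list pending formal guidance.
GENERATIVE_AI_DENYLIST = {
    "chat.openai.com",
    "bard.google.com",
    "claude.ai",
}

def is_request_allowed(url: str) -> bool:
    """Return False if the URL's host matches, or is a subdomain of,
    any entry on the deny list."""
    host = (urlparse(url).hostname or "").lower()
    return not any(
        host == blocked or host.endswith("." + blocked)
        for blocked in GENERATIVE_AI_DENYLIST
    )

# Example: is_request_allowed("https://chat.openai.com/chat") returns False,
# while is_request_allowed("https://example.com/docs") returns True.

Matching subdomains as well as exact hosts keeps the list short, but any deny list is a stopgap; the durable control remains the risk assessment and the formal guidance that follows it.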

Contending With AI Uncertainties

Graphical user interface (GUI) improvements aimed at increasing the usability of any IT-related solution can speed adoption, even when users outside of digital trust professions lack an understanding of the inner workings of those tools. Generative AI differs, and is riskier, in that many practitioners themselves lack sufficient knowledge of it to make informed decisions. If nothing else, model cards5 such as the GPT-4 System Card6 serve to promote transparency into trained models. Risk is magnified when one considers black box models that seemingly take on minds of their own and algorithms whose behavior cannot be explained.

If I sound disheartened, I am. Healthy skepticism aside, I despise the fear, uncertainty and doubt (FUD) that fuels knee-jerk decision-making, but in the case of generative AI, FUD is warranted. Engineers have undoubtedly worked hard for years to reach product releases such as GPT-4, but the AI arms race has created considerable angst among those same creators. AI pioneer Geoffrey Hinton’s departure from Google is telling and seems to align with warnings from others that AI lacks sufficient guardrails for safe use.7

The Human Factor

Humans are fallible. Of interest, prior ISACA® research reveals that respondents ranked honesty and empathy as the 2 least important soft skills,8 which is concerning given the protective-oriented nature of those charged with the design, operation, security and assurance of IT-related systems. Also consider that 43% of respondents believe cybercrime is underreported even when organizations are required to report it. This is concerning when IT-related fields are among the most credentialed professions, requiring adherence to codes of conduct that should dissuade such a practice. Coupled with a slew of documented ethical issues, one should rightfully be pessimistic about the fairness and transparency of AI, especially when AI algorithms have already negatively impacted lives.9

Privacy and fairness remain core to discussions involving AI, even as some seek to leverage the power of AI to change the way countries conduct military operations.10 Bias arguably can never be fully mitigated, which is concerning given that AI models are only as good as their training inputs and are likely to garner attention from bad actors. When having discussions about generative AI in the enterprise, there are 6 messages to consider that may aid dialogue about AI opportunities and risk:

  1. Increased technological capabilities inherently carry risk. While many GPT risk areas are documented, there will undoubtedly be more given the recency of GPT-4. Misuse of technology—intentional or otherwise—is inevitable. Preemptive planning, governance, risk management and continued research are imperative.
  2. Language models can amplify biases and perpetuate stereotypes. Much emphasis remains on computational factors (e.g., represented data and fairness), yet that emphasis fails to address human and systemic biases and societal factors. In many cases, the information users have given generative AI tools will be used to shape future outputs, so inputs are already biased.
  3. Legislation has long failed to keep pace with technological advances. The explosion of generative AI has created a host of issues surrounding intellectual property and underscores the need for both impactful privacy legislation (notably in the United States) and oversight.
  4. Automated systems carry risk, not only in processing, but also when they are poorly designed, implemented and operated, or when they lack appropriate oversight. Clear, concise notices to users, backed by generally accessible, plain-language documentation of how the overall system functions and the role automation plays, are critical, as are human alternatives. Further, enterprises have a responsibility to provide clear guidelines for the use of technology within the workplace.
  5. The imbalance between supply and demand for tech talent has historically fostered a multitude of vendor solutions purported to solve any enterprise problem. At this point, GPT-4’s usefulness in cybersecurity is limited. GPT-4 is expected to increase the believability of phishing emails, which will strain social engineering mitigations and require enhancements to cybersecurity education and awareness training.
  6. FUD around AI replacing human jobs is not new, but the current emphasis appears to be on augmentation of human capital, which may not always be the case. Importantly, AI is only as good as the data it trains on, so humans remain critical for contextualization, creativity and communication. AI has the potential to displace some roles, which increases the criticality of policymaking both globally and within individual countries. Within IT-related fields, it is more likely that the explosion of technologies such as GPT-4 will result in the reimagination of work and the redistribution of certain work functions rather than the displacement of workers.

Generative AI and Digital Trust

Fundamental to all of these AI insights is digital trust. Recent advances in AI have made digital trust even more difficult to achieve. Digital trustworthiness is an evolution of digital transformation and a modern-day imperative, and AI technology is not immune to bugs and breaches. Digital trust must be earned and sustained; it is neither optional nor freely given. A lack of transparency in how technology is developed, operated and protected can only lead to serious issues ranging from operational problems to irreparable brand damage. Today, consumers are largely forced into compromising their privacy in exchange for all-or-nothing access to services. Sadly, we have become heavily reliant on legislation to curb business practices that take advantage of inattentive or unknowing persons.

Conclusion

Choosing to authorize or block generative AI is a consequential decision that should not be taken lightly. In all cases, the technology requires careful consideration and risk assessment before it can be considered digitally trustworthy.

A final important note: Enterprise leaders should assume that employees have already freely uploaded intellectual property or otherwise sensitive information to sites that feed generative AI training models. This is problematic for multiple reasons. First, intellectual property protections in the context of AI use are not black and white, and, at least in the United States, the courts are now involved.11 Second, divergent laws complicate matters and may ultimately influence where AI enterprises operate.12 Third, copyright infringement is not cheap, assuming the intellectual property was protected correctly in the first place.

Generative AI is transforming far more, in far wider ways, than anyone could have imagined, and I assert that no part of business is exempt. All signs point to the need for more frequent risk assessments and broader involvement in enterprise risk management activities.

Endnotes

1 Oxford Reference, “If Something Sounds Too Good to Be True, It Probably Is”
2 Holland, M.; “Effort to Pause AI Development Lands With Thud in Washington,” TechTarget, 31 March 2023
3 ISACA® conducted a Risk Pulse Poll survey earlier this year and found that respondents overwhelmingly conduct risk assessments annually, even though they believe assessments should occur more frequently.
4 ISACA, Glossary
5 Ruiz, A.; “AI Model Cards 101: An Introduction to the Key Concepts and Terminology,” 28 February 2023
6 OpenAI, GPT-4 System Card, 23 March 2023
7 Reuters, “Google AI Pioneer Says He Quit to Speak Freely About Technology's 'Dangers,'” 2 May 2023
8 ISACA, State of Cybersecurity 2022, USA, 2022
9 Heikkila, M.; “Dutch Scandal Serves as a Warning for Europe Over Risks of Using Algorithms,” Politico, 29 March 2022
10 Engadget, “Palantir Shows Off an AI That Can Go to War,” 26 April 2023
11 Vincent, J.; “Getty Images Sues AI Art Generator Stable Diffusion in the US for Copyright Infringement,” The Verge, 6 February 2023
12 Maisel, J.; “Will Divergent Copyright Laws Between the US and UK Influence Where You Do Business as an Artificial Intelligence Company?,” JD Supra, 8 September 2022

Jonathan Brandt, CISM, CDPSE, CCISO, CISSP, PMP

Is director of professional practices and innovation on ISACA’s Content Development and Services team. In this role, he leads audit; emerging technology; governance, risk and compliance (GRC); IT; information security and privacy thought leadership initiatives relevant to ISACA® constituents. He provides other ISACA teams with subject matter expertise on infosec, influences innovative workforce readiness solutions and leads development of performance assessments. Brandt is a highly accomplished US Navy veteran with 30 years of experience spanning multidisciplinary security, cyberoperations and technical workforce development.