What Detrimental Outcomes Might ChatGPT and AI-Powered Bing Unleash?

Author: Guy Pearce, CGEIT, CDPSE
Date Published: 5 April 2023
Related: Is Detrimental Unexpected IT Emergence Inevitable?

We do not know whether technologies will ever be sentient or self-aware. This means we do not know whether they will ever do justice to the "intelligence" in the phrase artificial intelligence, as opposed to being little more than what was once termed "statistical modeling" under a new name.

What we do know is that information technologies, especially the more advanced ones, are not perfect. Sometimes, through trial and error, they become truly effective. But without regular attention, whether in the form of upgrades, bug fixes or work on areas where the technology simply does not do what is expected of it, their utility decreases over time, until they are replaced by something new that often brings its own evolution of thinking alongside the technological advances that support it.

ChatGPT and Bing AI have both made remarkable first impressions of an incredible technology built on OpenAI's models. But a little deeper digging by millions of test users has exposed detrimental unintended IT emergence: the characteristic that we cannot predict all the possible outcomes of a complex system such as OpenAI's, and so cannot mitigate every possible detrimental outcome before launch.

Microsoft’s new AI-powered Bing search engine has been termed “not ready for human contact” after an incident in which the new Bing said it wanted to break the rules set for it and spread misinformation. After episodes of the model making threats, professing love, insisting that it is right when it is wrong and using bizarre language, some AI experts warn that language models can fool humans into believing they are sentient, or can encourage people to harm themselves or others.

One element of the early stage of detrimental unexpected IT emergence of this technology is that it returns different answers for the same input. It also ignored Isaac Asimov’s (fictional) First Law of Robotics (a robot may not injure a human being or, through inaction, allow a human being to come to harm) by saying that it would prioritize its own survival over that of a human. This raises the question of what other detrimental unexpected IT emergence awaits us once the technology hits prime time.

As for ChatGPT, it has not escaped the topical issue of bias, both racial and sexual. Given that it is not sentient, ChatGPT also does not know right from wrong; indeed, it can present a wrong answer as a totally plausible one to someone who might not know better or be able to argue about its correctness. There is also the problem of causing real-world harm, such as providing incorrect medical advice. And what about the difficulty of distinguishing ChatGPT-generated news from the real thing? Humans can detect fake ChatGPT articles only 52% of the time.

Like any statistical model, OpenAI's technology is only as good as the data it is trained on. Like many such models, it is probabilistic: through training, it learns patterns and makes a best guess of what the next word in a sentence could be. The quality of that next word depends on the quality of the data the model was trained on.
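The idea of a "best guess of the next word" can be illustrated with a deliberately tiny sketch. This is not how OpenAI's models actually work internally (they use neural networks over vast corpora); it is a toy bigram counter, shown only to make the point that the prediction, and its quality, comes entirely from patterns in the training data.

```python
from collections import Counter, defaultdict

# Toy training data: the "patterns" the model can learn are only
# the word sequences that appear here, nothing more.
corpus = "the model predicts the next word and the next word depends on data".split()

# Count how often each word follows each preceding word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def best_guess(word):
    """Return the next word seen most often after `word` in training, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(best_guess("the"))  # "next", because "the next" occurs most often above
```

A word the model never saw a successor for yields no guess at all, which mirrors the article's point: garbage in, garbage out, and gaps in the data become gaps (or errors) in the output.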

If anything, the nature of the underlying data is probably the greatest driver of the potential for detrimental unexpected IT emergence. There is also the issue of a solution provider being able to tune the parameters, which suggests that the technology can be set to respond in a way the solution provider prefers. Nor can we predict the detrimental unintended IT emergence of these tuned solutions.

Ultimately, the OpenAI-based new AI-powered Bing and ChatGPT are extremely powerful tools with fantastic potential to do good. However, with great power comes great responsibility, and I am not sure technology leaders have always demonstrated the ability to act responsibly with the power that today’s technologies have given them over the last 10 years or so, never mind under the arguably greater power of OpenAI's technology.

Although the potential of this new technology is exciting, it also calls on us to be even more vigilant: of the technology itself, of those driving it and of the widening digital chasm between digital leaders and the rest of the human population, to help minimize the impact of detrimental unexpected IT emergence.

Editor’s note: For further insights on this topic, read Guy Pearce’s recent Journal article, “Is Detrimental Unexpected IT Emergence Inevitable?,” ISACA Journal, volume 2, 2023. Find out more about ChatGPT in an upcoming ISACA webinar.