The Bleeding Edge: Intelligence—A Not-So-Mediocre Commodity

Author: Dustin Brewer, CISM, CSX-P, CDPSE, CEH
Date Published: 31 December 2020

“Brains are the only things worth having in this world, no matter whether one is a crow or a man.”1

Not that long ago, humankind used candles to produce light and horses to travel or to do mechanical work. Now, the candle is an obsolete measure of light output; lumens are the modern way to express light output and are used prolifically in the lighting industry (a one-candela source emits approximately 12.57 lumens in total). Meanwhile, horsepower is still a measurement of work output prevalent in the automotive and mechanical engineering industries. For inquiring minds, one horsepower is equivalent to approximately 745 watts. We have phrases, labels and measurements that survive the crests and troughs of time via the scientific community. Emerging technologies seem to have that effect on us: We use a literal term to describe a functionality, it becomes customary to continue referring to it as such, and then, after a time, we retire it in favor of a new term once we make the next discovery pertaining to that technology. We use the term “artificial intelligence (AI)” now, but how long will that language remain customary for a technology that, as of now, may be neither particularly “intelligent” nor entirely “artificial” in its makeup?

Yes, we are tackling the great and powerful concept of AI in this edition of The Bleeding Edge. A daunting task for sure, but as the technology becomes more prevalent and more enterprises adopt it, we will need to prepare ourselves as members of the IT community. Because this task is so large and difficult, I have brought along a coauthor for this column who truly understands the concept of AI technology: an AI writing algorithm. I gave the AI writing algorithm (AI Guest Writer) the topic of “Comparing AI to Candles and Horses.” Let’s check in and see what the AI Guest Writer came up with:

The good news is that humans have existed successfully for thousands of years and have been able to use them for thousands of years. The last category is the case with C++, where artificial intelligence-powered machines will develop an active aversion to humans and decide to destroy them or to assert themselves and somehow forget them. It will certainly not be enough to prove that 1,000 lines of a program cannot be broken by external hackers, but it proves that it has existed successfully for thousands of years.2

Wow. Really insightful. Thanks, AI Guest Writer! Let’s talk a little about the technology itself.

The Brain and the Courage to Use It

Recent leaps and bounds in AI technology have made it more useful for our IT business functions as well as our everyday lives. We have smart devices at home that are run by voice recognition services powered by AI. AI and machine learning (ML) algorithms are used in cybersecurity to measure heuristics and report possible breaches and incidents. Even the scientific community is using AI for data analysis. An AI algorithm at BlueDot actually flagged the COVID-19 outbreak weeks before any government confirmed the virus’s existence.3
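As a rough illustration of the heuristic monitoring mentioned above, the sketch below flags unusual activity by measuring how far an event count strays from its historical baseline. The data, threshold and function name are hypothetical, not drawn from any real security product; production ML tools are far more sophisticated than this statistical baseline.

```python
# Illustrative sketch (hypothetical data and threshold): the kind of
# statistical baseline that ML-driven security monitoring builds on.
from statistics import mean, stdev

def flag_anomalies(daily_logins, threshold=2.0):
    """Return indices of days whose login count deviates more than
    `threshold` standard deviations from the historical mean."""
    mu = mean(daily_logins)
    sigma = stdev(daily_logins)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, count in enumerate(daily_logins)
            if abs(count - mu) / sigma > threshold]

history = [102, 98, 105, 99, 101, 97, 100, 480]  # day 7 is a spike
print(flag_anomalies(history))  # only the spike is reported: [7]
```

Real systems learn richer baselines (per user, per time of day, per asset), but the principle is the same: model normal, then report deviations for a human to investigate.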

Of course, one of the biggest advances in AI technology is its ability to learn, make decisions based on data and act on those decisions. Robotic process automation (RPA) and self-driving vehicles are just two examples of this technology in application today. This capability is starting to set AI apart from other emerging technologies and makes it a horse of a different color altogether.

Anything else to add about AI usages, AI Guest Writer?

The Forbin Project, where the chaos is of planetary proportions, but this time set in the distant future, far away from Earth and far away from humans. You need the knowledge of human faces and you use it to create a baking show for people.

Fantastic!

AI does not come without issues or drawbacks. We cannot simply hand the keys over to AI and let it take us wherever we need to go yet. AI needs to be trained, tested and retrained, usually with large data sets. From where does the data come? Who are the models for this new technology? It depends on the functionality of the AI technology, but the data can come from us in the form of scientific trials, Internet tracking and social media. And herein lies one of the biggest issues with AI. Us.

If It Only Had a Heart

Among the myriad ethical and social dilemmas we face with the creation and implementation of AI technology, one of the most prevalent issues is undesirable results. Google recently apologized for its AI image-labeling algorithm having produced racist results by labeling a dark-skinned hand holding an infrared thermometer as a gun, while a version of the same picture with the hand painted a lighter color was labeled as a monocular.4 Amazon’s AI recruiting tool for its technical jobs was producing biased results based on the gender of job applicants.5 In Amazon’s case, the fault was in the data being fed to the algorithm: It was simply learning from the data set it was given and attempting to mimic it. So where did Amazon get the data it used? Whose behavior was being mimicked?
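The point that a model simply mimics its training data can be made concrete with a small sketch. The hiring records below are invented and deliberately skewed; nothing here reproduces Amazon’s actual system, only the general failure mode in which historical skew becomes a learned rule.

```python
# Hypothetical sketch: a trivial "model" that learns hire rates from
# historical (group, hired) records and reproduces whatever skew the
# records contain. All data is invented for illustration.
from collections import Counter

def train(history):
    """Learn the observed hire rate for each group."""
    totals, hires = Counter(), Counter()
    for group, hired in history:
        totals[group] += 1
        hires[group] += hired
    return {group: hires[group] / totals[group] for group in totals}

# Equally sized groups, unequal historical outcomes.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 40 + [("B", 0)] * 60)

model = train(history)
print(model)  # {'A': 0.8, 'B': 0.4}: the skew is learned, not invented
```

The model commits no error in the usual sense; it faithfully reflects its inputs. That is exactly why the provenance of training data matters so much.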

And this is the hurdle with all new technologies, but especially AI. Society has become a collective parent to a new form of decision-making entity. It is in the very early stages of development (Skynet and the Terminator are not going to be knocking on your door anytime soon). During these formative years, we (society) are the village that it takes to raise our collective AI child. Congratulations! It’s a…neural net.

When we create and implement new technologies, the byproducts of that process put our culture and society under a microscope. In some cases of technological innovation, our intent is measured by what sells, what is trendy or what seems important to us. With AI, we are facing a whole new problem in which our behavioral patterns are not only observed, but magnified and reflected back. Sometimes we are unaware of these behaviors and biases, and they can be difficult to acknowledge. Advancements in AI will only continue to show humanity its flaws and vices over the next five to 10 years, and it is up to us how we will handle the feedback provided.

One possibility is to build a “filter” into AI algorithms to ignore our more abhorrent systemic issues within our institutions. Another possibility would be to change our behavior so that AI has a better role model (or data set) to guide it into its next evolution. That process, for us, will take a little intelligence, some courage and a whole lot of heart. With that said, I would like to give our AI Guest Writer a chance to make a comment on this issue:

Biases in AI are not an easy problem to solve, and developers need to respond constructively. Machine algorithms and artificial intelligence cannot do it alone, just as humans must acknowledge race and prejudice before they can defeat it.6

Wise words we should all take to heart.
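The “filter” possibility raised above has concrete counterparts in the fairness literature. One simple version is reweighing, in which training examples are weighted so that group membership and outcome become statistically independent before a model learns from them. The sketch below uses hypothetical data and follows the general reweighing idea (as described by Kamiran and Calders), not any specific library’s implementation.

```python
# Hypothetical sketch of reweighing: weight each (group, label) pair by
# expected/observed frequency so group and outcome become independent.
from collections import Counter

def reweigh(samples):
    """Return a weight for each (group, label) combination."""
    n = len(samples)
    groups, labels, joint = Counter(), Counter(), Counter()
    for group, label in samples:
        groups[group] += 1
        labels[label] += 1
        joint[(group, label)] += 1
    # weight = P(group) * P(label) / P(group, label)
    return {(g, y): (groups[g] * labels[y]) / (n * joint[(g, y)])
            for (g, y) in joint}

samples = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 40 + [("B", 0)] * 60)

weights = reweigh(samples)
# Under-represented favorable outcomes ("B", 1) are weighted up (1.5);
# over-represented ones ("A", 1) are weighted down (0.75).
```

Techniques like this treat the symptom in the data rather than the cause in our behavior, which is precisely the trade-off between the two possibilities described above.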

Endnotes

1 Baum, L. F.; The Wonderful Wizard of Oz, George M. Hill Company, USA, 1900
2 Brooks, R.; “The Seven Deadly Sins of Predicting the Future of AI,” Rodney Brooks: Robots, AI, and Other Stuff, 7 September 2017, https://rodneybrooks.com/the-seven-deadly-sins-of-predicting-the-future-of-ai/
3 BlueDot, https://bluedot.global/
4 Kayser-Bril, N.; “Google Apologizes After Its Vision AI Produced Racist Results,” Algorithm Watch, 7 April 2020, https://algorithmwatch.org/en/story/google-vision-racism/
5 Dastin, J.; “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, 10 October 2018, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
6 Scheirer, W.; “How to Make AI Less Racist,” Bulletin of the Atomic Scientists, 9 August 2020, https://thebulletin.org/2020/08/how-to-make-ai-less-racist/

Dustin Brewer, CISM, CSX-P, CDPSE, CEH

Is ISACA’s principal futurist, a role in which he explores and produces content for the ISACA® community on the utilization, benefits and possible threats to current infrastructure posed by emerging technologies. He has 17 years of experience in the IT field, beginning with networks, programming and hardware specialization. He excelled in cybersecurity while serving in the US military and, later, as an independent contractor and lead developer for defense contract agencies, he specialized in computer networking security, penetration testing, and training for various US Department of Defense (DoD) and commercial entities. Brewer can be reached at futures@isaca.org.