Implementing Artificial Intelligence: Capabilities and Risk

Author: Ivy Munoko, Ph.D., CISA, ACCA
Date Published: 1 July 2022

During the last decade, there has been exponential growth in artificial intelligence (AI) adoption. Total global enterprise AI investments grew from US$12.75 billion to US$67.85 billion between 2015 and 2020.1 There was a significant increase of approximately US$20 billion in AI investments between 2019 and 2020, suggesting that the pandemic did not slow the adoption of AI and, instead, may have accelerated it.

There are several definitions and classifications of AI. One widely used definition is “the capability of a machine to imitate intelligent human behavior.”2 To imitate such behavior, AI systems typically learn patterns from human-generated data and apply algorithms to perform the related functions. Figure 1 depicts the general operation of AI systems, their roles and purposes, and features of ethical concern.


Source: Munoko, I.; H. Brown-Liburd; M. Vasarhelyi; “The Ethical Implications of Using Artificial Intelligence in Auditing,” Journal of Business Ethics, vol. 167, iss. 2, 8 January 2020, https://link.springer.com/article/10.1007/s10551-019-04407-1. Reprinted with permission.

However, not all AI systems are equal. AI systems can be differentiated based on their type of intelligence (e.g., mechanical intelligence, analytical intelligence, intuitive intelligence, empathetic intelligence),3 their role in executing a task (e.g., assistive AI, augmented AI, autonomous AI)4 or their embedded technologies (e.g., natural language processing [NLP], machine learning [ML], machine vision, speech recognition). An AI implementation may involve several intelligence, role or technology categories (figure 2).

Given AI’s exponential growth, understanding the differences among these categories enables those implementing AI systems to make decisions informed by AI’s inherent strengths and limitations and to address the unique technical and ethical implications of each category.

Considering Intelligence Levels

When determining whether a process is a good candidate for AI initiatives, it is essential to break it into its composite tasks and assess the level of intelligence required to perform each task. Understanding the needed level of intelligence empowers an implementer to determine whether the current AI capabilities match the required level of intelligence for the task. The most basic type of intelligence that a task can require is mechanical intelligence.

Mechanical intelligence enables the automation of routine and repetitive tasks. For example, the ability to copy information captured within invoices into an enterprise resource planning (ERP) system requires mechanical intelligence. Many AI systems exhibit some form of mechanical intelligence. For example, AI systems that include robotic process automation (RPA)—a technology that automates routine, repetitive and high-volume tasks—rely on mechanical intelligence because they follow a predefined set of steps to complete tasks.
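
To make the idea concrete, the following minimal Python sketch mimics this kind of mechanical step: it copies known fields from a parsed invoice into an ERP-style record by following a fixed mapping. The field names and the downstream posting step are hypothetical illustrations, not a specific product’s interface.

# Minimal sketch of mechanical intelligence: follow a predefined set of steps
# to copy fields from a parsed invoice into an ERP-style record.
# All field names here are hypothetical examples.
def map_invoice_to_erp(invoice: dict) -> dict:
    return {
        "vendor_id": invoice["supplier_number"],
        "invoice_number": invoice["invoice_no"],
        "amount": invoice["total_due"],
        "due_date": invoice["payment_due"],
    }

sample_invoice = {
    "supplier_number": "V-1042",
    "invoice_no": "INV-2022-0087",
    "total_due": "1250.00",
    "payment_due": "2022-07-31",
}

erp_record = map_invoice_to_erp(sample_invoice)
print(erp_record)  # payload a downstream step would post to the ERP system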

The next level of intelligence, analytical intelligence, requires machines to process data to perform assigned functions. For example, an AI system that detects fraudulent credit card transactions exhibits analytical intelligence. It examines past transactions to identify patterns useful in detecting future anomalous transactions. Some AI systems exhibit analytical intelligence that surpasses the analytical abilities of humans.5
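
A minimal sketch of this idea in Python is shown below, assuming a simple two-feature transaction history and an off-the-shelf anomaly detector (scikit-learn’s IsolationForest); the features, data and model choice are illustrative assumptions rather than a prescribed approach.

# Minimal sketch of analytical intelligence: learn what "normal" transactions
# look like from past data and flag new ones that deviate.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Hypothetical history: [amount in dollars, hour of day]
history = np.column_stack([
    rng.gamma(shape=2.0, scale=40.0, size=5000),   # typical purchase amounts
    rng.normal(loc=14, scale=4, size=5000) % 24,   # mostly daytime activity
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

new_transactions = np.array([
    [35.0, 13.0],    # ordinary afternoon purchase
    [4800.0, 3.0],   # large amount at 3 a.m.
])
print(model.predict(new_transactions))  # 1 = looks normal, -1 = flagged as anomalous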

A higher level of intelligence is intuitive intelligence, which is required to perform tasks that depend on an underlying understanding of concepts. An AI system would require some intuitive intelligence to perform higher-level tasks usually performed by experts; for example, judging whether an unusual transaction makes sense in the context of a client’s business model calls for an understanding of that context rather than pattern matching alone.

The highest level of intelligence is emotional or empathetic intelligence. For example, an AI system providing customer service should ideally have some emotional intelligence to sense and react to customers’ emotions (e.g., by detecting changes in their voices or facial expressions).
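
Purely as an illustration, the Python snippet below stands in for that capability with a crude keyword check that escalates a frustrated customer to a human agent; real systems would rely on trained speech, text or facial-expression models rather than a keyword list.

# Illustrative stand-in for empathetic intelligence: detect a frustration
# signal in a customer message and escalate to a human agent.
# The cue list and responses are hypothetical.
import re

FRUSTRATION_CUES = {"angry", "frustrated", "unacceptable", "terrible", "cancel"}

def respond(message: str) -> str:
    words = set(re.findall(r"[a-z']+", message.lower()))
    if words & FRUSTRATION_CUES:
        return "I am sorry for the trouble. Connecting you with a human agent now."
    return "Happy to help. Could you share a few more details?"

print(respond("This is unacceptable, I have been waiting for hours"))
print(respond("Can you help me update my billing address?"))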

Failing to analyze the level of intelligence required for a task and not understanding the capabilities of an AI system may result in implementing AI systems that are inappropriate for the processes to which they are applied, leading to a lack of adoption, use, trust or reliance by end users. It may also result in unintended ethical consequences or implications. Consider a task such as interviewing job candidates. There has been recent debate as to whether the use of AI is appropriate for such a task because it requires some emotional intelligence. Would AI be able to exhibit the necessary level of judgment?6

Considering the Role

The second way to categorize AI systems is by the role they play. The assistive role is the most basic: the AI system performs tasks that provide the user with outputs needed to carry out core processes. For example, a user responsible for preparing and filing a tax return can query a chatbot about applicable tax codes. In this case, the chatbot is assistive AI.

An augmented AI system plays a more advanced role, performing tasks that require some judgment within a process in which the end user performs a related set of judgment tasks. For example, in a fraud risk assessment, the AI system may analyze a batch of transactions and flag those of high risk; an auditor can then perform appropriate testing of the flagged transactions, as sketched below. Lastly, an autonomous AI operates on its own without any human intervention. One example of autonomous AI is the use of drones to perform asset inspections.
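
The following minimal Python sketch illustrates that human-in-the-loop division of labor for the augmented case; the risk scores, threshold and field names are hypothetical, and the model that produces the scores is assumed to exist upstream.

# Minimal sketch of the augmented role: the AI flags high-risk transactions,
# and a human auditor decides what testing to perform on them.
def build_review_queue(transactions: list, risk_threshold: float = 0.8) -> list:
    return [t for t in transactions if t["risk_score"] >= risk_threshold]

scored_transactions = [
    {"id": "T-001", "amount": 120.00, "risk_score": 0.12},
    {"id": "T-002", "amount": 98000.00, "risk_score": 0.91},
    {"id": "T-003", "amount": 450.00, "risk_score": 0.35},
]

for item in build_review_queue(scored_transactions):
    print(f"Route {item['id']} to the auditor for substantive testing")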

It is essential for those implementing AI systems to determine the appropriate role level and responsibilities to assign to an AI system and establish how accountability will be enforced, especially in autonomous AI scenarios. The higher the role level given, the greater the risk if there are insufficient preventive and detective controls to guard against the consequences of system errors and failures. As the roles of AI progress from assistive to autonomous, the chance of possible ethical issues increases. To effectively evaluate the ethical implications of AI systems, implementers need to consider the ethical principles inherent in these systems. An effective way to examine the ethical principles is to use an ethical checklist, such as the Wright checklist.7 Figure 3 contains some questions from the Wright checklist that implementers of AI should consider for the various AI roles.


Source: Adapted from Munoko, I.; H. Brown-Liburd; M. Vasarhelyi; “The Ethical Implications of Using Artificial Intelligence in Auditing,” Journal of Business Ethics, vol. 167, iss. 2, 8 January 2020, https://link.springer.com/article/10.1007/s10551-019-04407-1. Reprinted with permission.

Considering the Technologies

Each AI technology faces inherent challenges related to its type. For example, ML, which relies on analyzing data sets to detect patterns and develop algorithms, can be susceptible to inferring causation from correlations that have no causal basis. A common illustration is the correlation between ice cream sales and drownings in a given geographic area. The two variables rise and fall together only because both increase in warm weather; neither causes the other, yet without reasoning about such confounders, an ML algorithm can make false assumptions about the relationship.
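
The confounding can be made concrete with a short Python simulation, assuming temperature drives both series; the numbers are invented purely to show that a naive learner would observe a strong correlation where no causal link exists.

# Illustrative simulation: two variables driven by a shared confounder
# (temperature) correlate strongly even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.uniform(15, 35, size=365)  # hypothetical daily temperatures (Celsius)

ice_cream_sales = 50 + 10 * temperature + rng.normal(0, 20, size=365)
drownings = 0.2 * temperature + rng.normal(0, 1, size=365)

correlation = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"Correlation: {correlation:.2f}")  # strong, yet there is no causal relationship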

There are also inherent challenges with NLP, which attempts to analyze and understand human speech. NLP can be susceptible to misunderstanding synonyms and slang terms, or it may fail to read between the lines when a speaker introduces irony or sarcasm into a conversation. Machine vision systems, such as those used in self-driving cars and optical character readers, present another challenge: they require massive amounts of training data to function accurately. Obtaining adequate, representative image data is costly, and there is a risk of inadvertently using captured images of humans without their consent.

Combining Process Tasks and AI Capabilities

A close fit is needed between task requirements, the AI’s current capabilities and the roles assigned to the AI. For appropriate alignment, there needs to be strong collaboration between the academic researchers driving AI innovation and the professionals seeking to improve processes through assistive, augmented or autonomous AI, as shown in figure 4.

Such collaboration ensures that professionals keep up with AI’s inherent strengths and challenges as the technology evolves. In addition, such partnerships provide researchers with the needed practical settings to develop and evaluate new pragmatic AI techniques.

Conclusion

It is essential for implementers of AI systems to consider the role assigned to the AI, the specific inherent strengths and weaknesses of the technology (e.g., NLP, ML, machine vision), the level of intelligence required for the task and, ultimately, the level of intelligence exhibited by the AI. Although AI can provide immense benefits, its implementation also poses challenges. By deliberately considering the nature of the task in which the AI will be deployed, implementers can ensure that their systems meet technical, operational and ethical objectives and that the AI is effective and acceptable to end users and society.

Endnotes

1 Statista, “Global Total Corporate Artificial Intelligence (AI) Investment From 2015 to 2020,” 17 March 2022, https://www.statista.com/statistics/941137/ai-investment-and-funding-worldwide/
2 Merriam-Webster Dictionary, “Artificial Intelligence,” https://www.merriam-webster.com/dictionary/artificial%20intelligence
3 Huang, M.-H.; R. T. Rust; “Artificial Intelligence in Service,” Journal of Service Research, vol. 21, iss. 2, 5 February 2018, https://journals.sagepub.com/doi/full/10.1177/1094670517752459
4 Munoko, I.; H. L. Brown-Liburd; M. Vasarhelyi; “The Ethical Implications of Using Artificial Intelligence in Auditing,” Journal of Business Ethics, vol. 167, iss. 2, 8 January 2020, https://link.springer.com/article/10.1007/s10551-019-04407-1
5 Christ, M. H.; S. A. Emett; S. L. Summers; D. A. Wood; “Prepare for Takeoff: Improving Asset Measurement and Audit Quality With Drone-Enabled Inventory Audit Procedures,” Review of Accounting Studies, vol. 26, iss. 4, 9 January 2021, p. 1323‒1343
6 Loten, A.; “Employers, Investors Take Notice of AI Tools to Speed Job Recruitment,” The Wall Street Journal, 7 January 2022, https://www.wsj.com/articles/employers-investors-take-notice-of-ai-tools-to-speed-job-recruitment-11641599629
7 Wright, D.; “A Framework for the Ethical Impact Assessment of Information Technology,” Ethics and Information Technology, vol. 13, 8 July 2010, https://link.springer.com/article/10.1007/s10676-010-9242-6

IVY MUNOKO | PH.D., CISA, ACCA

Is an assistant professor at the University of Florida, Warrington College of Business (Gainesville, Florida, USA). Her research focuses on using artificial intelligence for auditing and forensics, including the ethical implications, and she has more than seven years of combined experience in IT, finance and auditing. Munoko has had several roles, including IT project manager, systems auditor and automation specialist, directly related to operational and financial systems development, risk and control assessments, and process automation and improvement management. She has presented her research to various US and international regulators.