Can AI Be Used for Risk Assessments?

Author: Adeline Chan, CISM
Date Published: 28 April 2023

Risk is a measure of the extent to which an organization is threatened by a potential circumstance or event. It is a function of the impact if the event occurs and the likelihood of occurrence.1 Assessing risk requires careful analysis of threat and vulnerability information to determine the extent to which events could adversely impact an organization and the likelihood that such events would occur. Qualitative considerations include audit findings, stress testing and even changes in the risk environment driven by change initiatives. Quantitative analysis includes reviewing key control or risk indicators and incidents such as internal and external events.

Risk assessments are important for effective risk management because they provide decision makers with a thorough understanding of potential risk. Traditionally, the quality of a risk assessment depends heavily on the reliability of the data used and the skills and expertise of the individual conducting the assessment. Artificial intelligence (AI) can help produce more accurate risk assessments because one of its core competencies is data aggregation and interpretation.

How AI Can Help Assess Risk

AI technologies are particularly useful in risk assessment due to their ability to quickly detect, analyze and respond to threats. AI-powered tools such as user and entity behavior analytics (UEBA) can detect, analyze and respond to anomalies that may indicate an unknown compromise. This reduces the number of false positives generated by traditional vulnerability detection tools.
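
To make the UEBA idea concrete, the following is a minimal sketch of baseline-and-anomaly detection over login events, assuming those events have already been reduced to numeric features. The feature choices, simulated data and contamination rate are illustrative assumptions, not taken from any particular UEBA product.

```python
# A sketch of UEBA-style anomaly detection: learn a baseline of normal
# login behavior, then flag events that deviate from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated baseline: logins mostly in business hours, modest transfers,
# few failed attempts. Columns: hour of day, MB transferred, failures.
normal = np.column_stack([
    rng.normal(13, 2, 500),
    rng.normal(50, 10, 500),
    rng.poisson(0.2, 500).astype(float),
])
# A few suspicious events: 3 a.m. logins, large transfers, many failures.
suspect = np.array([[3.0, 900.0, 8.0], [2.0, 750.0, 6.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
events = np.vstack([normal[:5], suspect])
for event, label in zip(events, model.predict(events)):
    if label == -1:  # -1 marks an outlier relative to the learned baseline
        print("anomalous login:", event)
```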

When vulnerabilities are prioritized and contextualized, risk scoring is more accurate.2 For example, a legacy asset may represent a real risk yet be overlooked by a conventional rating scheme. In contrast to traditional risk rating systems, AI can measure exposures and countermeasures independently and then weigh them against each other, deriving a risk score with greater accuracy. This aggregation of information is not possible without AI.
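
As a minimal sketch of that idea, the example below scores exposures and countermeasures independently and then combines them into a residual risk score. The asset, weights and scoring formula are illustrative assumptions, not a standard methodology.

```python
# Score exposures and countermeasures separately, then combine them.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    exposures: dict[str, float]        # severity of each exposure, 0-10
    countermeasures: dict[str, float]  # effectiveness of each control, 0-1

def risk_score(asset: Asset) -> float:
    """Derive residual risk from independent exposure/control scores."""
    residual = sum(asset.exposures.values())
    # Each countermeasure reduces residual risk multiplicatively.
    for effectiveness in asset.countermeasures.values():
        residual *= (1 - effectiveness)
    return round(residual, 2)

legacy_server = Asset(
    name="legacy-app-01",
    exposures={"unpatched OS": 8.0, "internet facing": 6.0},
    countermeasures={"network segmentation": 0.5, "WAF": 0.3},
)
print(legacy_server.name, "residual risk:", risk_score(legacy_server))
```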

Qualitative Considerations and Predictive Analysis

AI can evaluate unstructured data, identifying patterns related to past incidents and turning them into risk predictors. Using these patterns and trends, forward-looking plausible scenarios can be constructed to predict events and project risk. In addition, AI provides a more transparent link between business processes and risk. By increasing data transparency, the adequacy of risk controls can be assessed to ensure corrective actions are taken to mitigate risk.3
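
A minimal sketch of turning unstructured incident text into a risk predictor follows: a text model trained on past incident descriptions labeled by whether they escalated. The tiny inline dataset is fabricated purely for illustration.

```python
# Train a simple text classifier on past incidents, then score new events.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_incidents = [
    "failed login burst from single external IP",
    "scheduled patch window completed without errors",
    "privilege escalation attempt on database host",
    "routine certificate renewal on web tier",
]
escalated = [1, 0, 1, 0]  # 1 = incident later escalated

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_incidents, escalated)

new_event = ["repeated failed logins followed by privilege change"]
print("escalation probability:", model.predict_proba(new_event)[0][1])
```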

Auditors use AI to analyze complete populations of data and transactions rather than samples. This leads to a more complete audit and helps auditors identify anomalies that can be flagged for additional scrutiny. It also ensures that smaller transactions, which previously would have been overlooked because of materiality constraints, receive an appropriate level of scrutiny.4
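
The sketch below illustrates full-population testing: every transaction is scored against its vendor's own history, so small items are not excluded by materiality thresholds. The column names and z-score cutoff are illustrative assumptions.

```python
# Score every record in the population, not a sample.
import pandas as pd

transactions = pd.DataFrame({
    "id": range(1, 7),
    "vendor": ["A", "A", "B", "B", "B", "C"],
    "amount": [120.0, 118.0, 5000.0, 4950.0, 21000.0, 45.0],
})

# Compare each amount with its vendor's mean and spread.
stats = transactions.groupby("vendor")["amount"].transform
z = (transactions["amount"] - stats("mean")) / stats("std")
transactions["flagged"] = z.abs() > 1.0  # cutoff chosen for illustration

print(transactions[transactions["flagged"]])
```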

Microsoft's latest security development incorporates large language models (LLMs). An LLM is a type of AI algorithm that uses deep learning techniques and large data sets to understand, summarize and predict new content.5

With the upcoming Microsoft Security Copilot, analysts will be able to quickly respond to threats, process signals and assess risk exposure in minutes. This is done using OpenAI's GPT-4.6 Analysts can ask Security Copilot questions in natural language and receive actionable responses. By identifying ongoing attacks, assessing their scale and providing instructions for remediation based on proven tactics from real-world security incidents, Security Copilot can help prevent future attacks. It can also be used for threat hunting. For example, a query such as “Have any suspicious logins happened in the last 10 days?” returns an answer instantly. If a security incident has occurred, Security Copilot can summarize the event, incident or threat in minutes, prepare a ready-to-share, customizable report and even generate a PowerPoint slide on the incident.7
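
Security Copilot's interface is not public, so the following is only a generic sketch of the underlying pattern it describes: hand an LLM some log context plus a natural-language question and get back an analyst-readable answer. The model name, prompt and log format are assumptions for illustration, not Microsoft's implementation.

```python
# A generic LLM-assisted threat-hunting query, not the Security Copilot API.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

recent_logs = """\
2023-04-20 03:12 login failure admin from 203.0.113.7
2023-04-20 03:13 login failure admin from 203.0.113.7
2023-04-20 03:14 login success admin from 203.0.113.7
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a security analyst. Answer only from the logs provided."},
        {"role": "user",
         "content": f"Logs:\n{recent_logs}\n"
                    "Have any suspicious logins happened in the last 10 days?"},
    ],
)
print(response.choices[0].message.content)
```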

This new AI capability can also reduce the time spent drafting risk treatment plans after the risk assessment phase. By leveraging AI, organizations can gain much more accurate results and proactively identify potential future threats and vulnerabilities. This enables organizations to put measures in place to prevent potential security threats from occurring and more effectively remediate existing risk gaps.

Quantitative Analysis and Improved Evidence Processing

Internet of Things (IoT) devices are now used to auto-verify evidence and verify controls proactively. IoT devices with capabilities to sense, detect and recognize events and individuals have been incorporated into many aspects of enterprise assurance functions. For example, biometric IDs control entry into data centers, face recognition software monitors personnel movement within the data center and log analyzers parse server logs to determine if privileges have been violated.
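
As a minimal sketch of the log-analyzer example above, the code below parses server logs and flags privileged commands run by accounts outside an approved list. The log format, command set and allowlist are assumptions for illustration.

```python
# Parse server logs and flag privilege violations.
import re

PRIVILEGED_CMDS = {"useradd", "chmod", "systemctl"}
APPROVED_ADMINS = {"ops-svc", "root"}

LOG_LINE = re.compile(r"^(?P<ts>\S+ \S+) user=(?P<user>\S+) cmd=(?P<cmd>\S+)")

sample_log = [
    "2023-04-20 09:01 user=ops-svc cmd=systemctl",
    "2023-04-20 09:02 user=jdoe cmd=useradd",
    "2023-04-20 09:03 user=jdoe cmd=ls",
]

for line in sample_log:
    m = LOG_LINE.match(line)
    if m and m["cmd"] in PRIVILEGED_CMDS and m["user"] not in APPROVED_ADMINS:
        print(f"privilege violation: {m['user']} ran {m['cmd']} at {m['ts']}")
```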

For an auditor conducting a system and organization controls (SOC) 2 audit, a system with machine learning capabilities can aggregate data from different monitoring systems. This data can be used as evidence of task performance; anomalies detected in the data can serve as evidence as well. Although AI-based systems can analyze unstructured data and surface patterns of information, auditors are still required to input the right evaluation parameters. Auditors understand this data best; therefore, they need the skill sets to manage it effectively. They also need to use data visualization techniques to present the findings to stakeholders. The role of the auditor changes from a reviewer to an interpreter of AI system results.
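
A minimal sketch of that division of labor follows: monitoring feeds are aggregated into SOC 2-style evidence, while the evaluation parameters (expected cadence, tolerance) are supplied by the auditor rather than learned. Control names and thresholds are illustrative.

```python
# Aggregate monitoring events per control and test them against
# auditor-defined evaluation parameters.
import pandas as pd

# Auditor-defined parameters: expected event rate and allowed deviation.
PARAMS = {
    "backup-job":    {"expected_per_day": 1.0,   "tolerance": 0.0},
    "access-review": {"expected_per_day": 0.033, "tolerance": 0.01},  # ~monthly
}

events = pd.DataFrame({
    "control": ["backup-job"] * 28 + ["access-review"],
    "day": list(range(1, 29)) + [15],
})

observed = events.groupby("control")["day"].count() / 30  # events per day
for control, rate in observed.items():
    p = PARAMS[control]
    ok = abs(rate - p["expected_per_day"]) <= p["tolerance"]
    print(f"{control}: {rate:.3f}/day -> {'evidence OK' if ok else 'anomaly'}")
```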

Citibank's use of AI to improve trade compliance tests how AI, with its advanced analytics and natural language processing capabilities, can better process networks of related parties, unstructured data and customer activity over time. Trade compliance is a key focus for global regulators, and, along with ensuring regulatory compliance, Citibank is using an AI initiative to streamline the time-consuming, highly manual processes associated with reviewing approximately 9 million global trade transactions annually. Trade transaction monitoring is more effective and efficient with SAS's sophisticated analytics platform. As a result, performance is improved and risk insights are enhanced, reducing operational costs, improving monitoring response times and strengthening risk posture.8

Adopting AI as a Dynamic Risk Assessment Tool

As risk assessments are conducted, a common consideration is whether controls are adequate and relevant. However, it has always been a challenge for the risk assessor to determine whether a needed control has been overlooked before there is an audit finding. With AI, it is possible to incorporate automated measurements in AI-based systems to improve the accuracy of predicting expected outcomes and to instantaneously verify that the actual values match the predictions. This approach creates an innovative form of control verification that is proactive in nature.
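
The sketch below illustrates that predict-then-verify loop: a model predicts the expected value of a control measurement, and each actual reading is checked against the prediction as it arrives. The linear model, metric and tolerance are illustrative stand-ins for a real predictor.

```python
# Predict the expected control measurement, then verify actual readings.
import numpy as np
from sklearn.linear_model import LinearRegression

# History: day number -> count of accounts with admin rights.
days = np.arange(1, 31).reshape(-1, 1)
admin_counts = 12 + 0.05 * days.ravel() + np.random.default_rng(1).normal(0, 0.3, 30)

model = LinearRegression().fit(days, admin_counts)

def verify(day: int, actual: float, tolerance: float = 1.0) -> bool:
    """Flag the control when the actual value drifts from the prediction."""
    expected = model.predict([[day]])[0]
    ok = abs(actual - expected) <= tolerance
    print(f"day {day}: expected {expected:.1f}, actual {actual} -> "
          f"{'within tolerance' if ok else 'control anomaly'}")
    return ok

verify(31, 13.6)   # close to trend
verify(32, 19.0)   # sudden jump in admin accounts
```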

Risk managers and auditors will no longer need to limit themselves to the evidence provided. Algorithms such as deep learning can extract meaningful and contextual information from a stream of distinct sources such as contracts, conference calls and emails. This information can serve as supporting evidence. When updated data arrives, the AI system can immediately analyze it and turn it into actionable information. With deep learning algorithms, the continuous control monitoring system can reconfigure itself based on the feedback from the previous set of results. This approach can help ensure that controls are designed, configured and implemented optimally with minimal human intervention.
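
A real continuous monitoring system might reconfigure itself with deep learning; the minimal sketch below uses a simple adaptive threshold purely to show the feedback mechanism the paragraph above describes. The update rule and readings are illustrative assumptions.

```python
# A monitoring loop that reconfigures its own threshold from feedback
# on its previous results.
def monitor(readings: list[float], threshold: float = 10.0) -> None:
    for value in readings:
        alert = value > threshold
        print(f"value={value:.1f} threshold={threshold:.1f} alert={alert}")
        # Feedback step: nudge the threshold toward recent behavior so the
        # monitor tightens after quiet periods and relaxes after noisy ones.
        threshold = 0.9 * threshold + 0.1 * (value * 1.5)

monitor([8.0, 8.5, 9.0, 14.0, 8.2])
```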

However, to implement AI technologies, organizations must consider the risk they want to assess and manage, the data they want to collect, and the associated challenges, such as data protection.

The first step in incorporating AI into a risk assessment strategy is to identify regulatory, financial and reputational risk. It is also crucial to identify what data should be collected and how it should be processed based on the current risk framework and organizational values. The AI model for processing data sets can be defined based on previous risk assessments. The type of data to use and its sources are critical considerations. Data sourcing is crucial to the implementation of an ecosystem, even at the operational level, as it influences the quality of the results. As with other risk management tools, AI must be continually evaluated and adjusted.9

Conclusion

Over time, AI will transform the world. By automating processes and using machine learning algorithms, many financial institutions and other organizations can facilitate decision making and provide services tailored to their users. Because AI analyzes large amounts of information, it significantly improves the ability to identify risk-relevant information; therefore, risk assessment and management will become more dynamic with AI's continued adoption.

Endnotes

1 US National Institute of Standards and Technology (NIST), “Risk,” NIST Glossary
2 Kaminski, E.; “Is AI-Based Vulnerability Management Really That Efficient?” AITHORITY, 27 August 2021
3 Boultwood, B.; “How Artificial Intelligence Will Change Qualitative Risk Assessment,” Global Association of Risk Professionals, 18 December 2020
4 Association of International Certified Professional Accountants (AICPA), “Artificial Intelligence Is a Game Changer for Auditors,” 12 July 2002
5 Kerner, S. M.; “Large Language Model (LLM),” TechTarget, April 2023
6 Gatlan, S.; “Microsoft Brings GPT-4-Powered Security Copilot to Incident Response,” BleepingComputer, 28 March 2023
7 Viswanathan, P.; “Microsoft Announces Security Copilot, An AI-Powered Security Analysis Tool for Enterprises,” BigTechWire, 28 March 2023
8 Citigroup, Inc., “Citi Global Trade Uses AI to Digitize Compliance in Next Generational Project,” 29 April 2019
9 Reciprocity, “Using Artificial Intelligence in Risk Management,” 9 September 2021

Adeline Chan, CISM

Is the head of information cybersecurity and strategy at Standard Chartered Bank (SCB). She is responsible for assessing and mitigating operations, technology and cyberrisk and leading teams in raising awareness of the bank's risk culture. In her risk management role, she has implemented various risk frameworks for the cloud, Standard Chartered ventures, operations, technology and cybersecurity. She is focused on business value creation and engages with senior stakeholders on aligning cyber investments with business objectives. She coaches subject matter experts on achieving organizational redesign and cost efficiencies while managing project and change risk. She has worked extensively across global and corporate banking and other industries, including insurance and energy. Chan volunteers for the ISACA® SheLeadsTech Singapore Chapter as a mentor to women in the technology sector and candidates looking to pursue careers in the governance, risk and compliance sector. She shares her professional insights through writing (https://medium.com/@adelineml.chan). Her work has been published in SCB’s cyber newsletter and the ISACA® Journal.