Quantifying the Qualitative Technology Risk Assessment

Authors: Julie Ebersbach and Michael Powers, Ph.D., CRISC
Date Published: 1 September 2022

Technology risk assessments help enterprises identify, analyze and evaluate weaknesses in their IT processes and security frameworks. Risk assessments allow enterprises to understand their evolving risk postures, determine whether their current risk aligns with their risk appetites and implement necessary controls to remediate identified gaps. As defined by the International Organization for Standardization (ISO), risk assessment is the overall process of risk identification, risk analysis and risk evaluation.1 In conducting a risk assessment, enterprises rely on data to form qualitative opinions about various elements of the assessment. Many enterprises use qualitative assessments because of their time and cost benefits, but this approach is insufficient on its own because it is subjective and lacks specific, fact-based data. Some risk assessments may be entirely quantitative, but most enterprises find an exclusively mathematical approach flawed and subject to manipulation and bias. Therefore, a blended approach is recommended: qualitative, subject matter expert (SME)-based conclusions supported by specific quantifiable measures. The quantitative elements reinforce the qualitative determinations in the risk assessment.

Qualitative vs. Quantitative Risk Assessments

There are two types of risk assessments: quantitative and qualitative.2, 3 Qualitative risk assessments include identifying and analyzing risk factors using an expert evaluation based on an enterprise’s risk management standard or framework with predefined risk ratings (i.e., high, medium, low). These types of risk assessments usually include determining the probability and impact of specific threats. Standard components of a qualitative risk assessment include inherent risk, control environment and residual risk rated on a three-point (high, medium, low) or five-point (high, medium-high, medium, medium-low, low) scale.

Quantitative risk assessments involve numeric ratings, with particular emphasis on the value of technology assets or the costs associated with service or asset disruptions.

As noted, qualitative risk assessment is considered quicker but more subjective, while quantitative analysis is objective but requires more data, is more complex and is prone to data accuracy issues.4 Despite the lack of overall standardization of qualitative technology risk assessment templates, several proprietary, open-source and enterprise-specific assessment types have been developed from industry standards or frameworks (figure 1).

The two primary frameworks or standards that inform qualitative and quantitative technology risk assessments are NIST SP 800-30 Guide for Conducting Risk Assessments5 and ISO/IEC 27005:2018 Information technology—Security techniques—Information security risk management.6 The NIST publication notes that risk assessment is a fundamental need for organizational risk management and observes that assessment approaches include quantitative, semiquantitative and qualitative approaches.7 It provides guidance and a high-level framework for conducting risk assessments. Similarly, ISO/IEC 27005:2018 is an international standard that defines risk management practices with specific focus on information security management. It, too, offers a high-level risk management approach and framework focused on risk identification, analysis, evaluation, treatment and monitoring.8 General qualitative approaches include failure mode and effects analysis/failure mode, effects and criticality analysis (FMEA/FMECA), the CCTA Risk Analysis and Management Method (CRAMM) and the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE). FMEA/FMECA is strictly focused on system and equipment types of risk and is thus fairly narrow in scope (technical assets vs. organizational).9 CRAMM is a software-based utility focused on assessing information value, identifying associated threats and vulnerabilities, and identifying risk mitigation techniques.10 OCTAVE, similar to FMEA/FMECA, focuses on information assets, threats to those assets and vulnerabilities of those assets.

The primary quantitative technology risk assessment methods, Factor Analysis of Information Risk (FAIR) and annualized loss expectancy (ALE), use purely quantitative techniques to assess risk. FAIR is an analytic model focused on the factors that drive the frequency and magnitude of loss,11 while ALE is a generic mathematical equation in which annual loss equals the probability of an event multiplied by the value of the loss.12
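
As an illustration, the ALE relationship described above can be expressed as a simple calculation. The sketch below is a minimal, hypothetical example; the event rate and loss value are invented for illustration and are not drawn from any specific enterprise.

```python
# Minimal sketch of the ALE equation as described above:
# annual loss = probability (expected events per year) x value of loss per event.
def annualized_loss_expectancy(events_per_year: float, loss_per_event: float) -> float:
    return events_per_year * loss_per_event

# Hypothetical example: 0.5 expected events per year, US$500,000 loss per event.
print(annualized_loss_expectancy(0.5, 500_000))  # 250000.0
```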

Many enterprises avoid purely quantitative assessments because quantifying IT risk is challenging due to the abundance of data, rapidly changing threats, potential overreliance on numeric estimation and the pace of technological change.13 In contrast, purely qualitative assessments can involve less critical thinking in a complex environment, with minimal data to back up any conclusions.14 Severity ratings (e.g., high, medium, low) give stakeholders the perception that there is some rigor behind the ratings, even when their definitions are based on minimal data. Ratings of high, medium and low are defined not by actual metrics but by estimates. By using existing metrics in the environment and an operating history, enterprises can define these ratings in a more quantitative manner and better define their ideal target operating thresholds. It is also important to compare the severity ratings to one another. Understanding how risk mitigation efforts impact residual risk on a moving scale and how much total risk exists is critical to better management.15 Without quantitative assessments—specifically, how much an investment will decrease risk in the environment—risk is not truly being measured.
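
As a sketch of how operating history can make severity ratings more quantitative, the example below derives high/medium/low thresholds from a hypothetical series of monthly incident counts. The counts and the percentile cut-offs are assumptions for illustration, not a prescribed standard.

```python
# Illustrative sketch: deriving rating thresholds from operating history.
# The incident counts and percentile cut-offs below are hypothetical assumptions.
import statistics

monthly_high_priority_incidents = [3, 5, 2, 8, 4, 6, 3, 7, 5, 4, 9, 6]

median = statistics.median(monthly_high_priority_incidents)
p90 = statistics.quantiles(monthly_high_priority_incidents, n=10)[8]  # ~90th percentile

def rate_month(count: int) -> str:
    """Map an observed monthly count to a rating using history-based thresholds."""
    if count > p90:
        return "high"
    if count > median:
        return "medium"
    return "low"

print(rate_month(10))  # "high" for this hypothetical history
```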

Risk Assessment Components

Although the components of a qualitative risk assessment vary by institution, typical elements include inherent risk (or risk factors prioritized by probability and impact),16, 17 anticipated top and developing risk factors, control environment and residual risk, which takes the form of an action statement intended to address the enterprise’s posture moving forward:

  • Inherent risk—The chance that an event could produce an unexpected outcome in the absence of any mitigation efforts. This is determined by evaluating the assets or critical business processes at risk relative to the vulnerabilities and threats to those assets.
  • Top and developing risk—Identified risk factors that are of the greatest concern or newest to the environment.
  • Control environment—Mitigation processes implemented within the environment that allow the business to function efficiently while mitigating risk to the desired level. The absence of controls should also be considered.
  • Residual risk—The amount of risk remaining after controls and mitigation efforts are put in place. This is dependent on the controls’ designs and effectiveness relative to the specific assets and inherent risk factors identified.

Risk Quantification Elements

Each component of the qualitative risk assessment has potential quantitative or pseudo-quantitative drivers that influence the assessment.

Inherent Risk
Inherent risk is the chance that an event could produce an unexpected outcome. Inherent risk is typically measured by first determining the assets, size and complexity of the technology environment. Considerations include the number of servers and databases across the environment, the applications hosted on those assets and the true functions of those applications. For example, an application with significant business impact, potential for customer loss or impactful legal requirements would be higher risk than one that supports a small number of users with narrow business scope. Assessing the potential threats and vulnerabilities of the assets results in a more quantifiable measurement of inherent risk in the environment.

Inherent risk is typically measured using a three-point or five-point scale or a similar scale using numeric scores, with definitions determined by the enterprise’s risk management framework. Although these definitions are often subjectively evaluated by risk professionals, quantifying the elements of inherent risk using probability and impact statements provides more robust support for the qualitative result. Quantification examples include references to historical occurrences (e.g., measuring how often X occurs annually, even in the presence of controls; therefore, the probability of occurrence is high, medium or low), number of customer-impactful incidents and cost data associated with operational losses.
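
The sketch below shows one possible way, not a mandated method, to combine probability and impact statements into a five-point inherent risk rating. The 1-5 scales and the band boundaries are hypothetical assumptions.

```python
# Hypothetical sketch: combining probability and impact into an inherent risk rating.
# The 1-5 scales and band boundaries are illustrative assumptions.
RATINGS = ["low", "medium-low", "medium", "medium-high", "high"]

def inherent_risk(probability: int, impact: int) -> str:
    """probability and impact are scored 1 (lowest) to 5 (highest)."""
    score = probability * impact            # 1..25
    band = min((score - 1) // 5, 4)         # map 1..25 into five bands
    return RATINGS[band]

# Example: frequent historical occurrences (4) with customer-impacting losses (5).
print(inherent_risk(4, 5))  # "medium-high" under these assumed bands
```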

Top and Developing Risk
Enterprises should identify the top risk factors in the environment and those that are new and emerging to determine which threats and vulnerabilities are most likely to have an impact. Of all the risk elements, top and developing risk is the most challenging to quantify because, by nature, it is the most purely qualitative.

However, certain quantifiable data elements and techniques are available. Specifically for application vulnerability risk, the Open Web Application Security Project (OWASP) Foundation’s threat modeling method can assist in identifying threats to an environment and assessing what vulnerabilities exist.18 This activity involves identifying not only the number of risk factors, but also the type and extent of risk. Some risk factors have a greater impact than others, and assessing these threats is critical to understanding the true risk to the environment. Although the OWASP method measures and rates top risk factors for applications, similar principles can be adapted to assess an entire environment. Similar to inherent risk, top and developing risk should be assessed using probability and impact statements, with quantifiable data to support the qualitative data.
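
In the spirit of the OWASP risk rating approach, the simplified sketch below averages likelihood and impact factor scores on a 0-9 scale and maps them to severity bands. It is not the full OWASP methodology; the factor names and scores are hypothetical assumptions.

```python
# Simplified sketch inspired by OWASP-style risk rating (not the full methodology).
# Factor names and scores (0-9) are hypothetical assumptions.
def band(score: float) -> str:
    """Map a 0-9 average to a severity band using common 3/6 cut-offs."""
    if score < 3:
        return "low"
    if score < 6:
        return "medium"
    return "high"

likelihood_factors = {"skill_required": 5, "ease_of_exploit": 7, "awareness": 6}
impact_factors = {"loss_of_confidentiality": 7, "financial_damage": 5, "reputation_damage": 6}

likelihood = sum(likelihood_factors.values()) / len(likelihood_factors)
impact = sum(impact_factors.values()) / len(impact_factors)

print(band(likelihood), band(impact))  # "high high" for these illustrative scores
```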

Control Environment
The control environment is typically defined by the control effectiveness score, which is calculated by dividing the number of effective controls by the total number of controls. This is where quantification alone can fall short: the score is not the full picture, and 100 percent effectiveness may not always be the desired goal. The first consideration is understanding the control scope and coverage across the environment. Coverage is often measured by the extent to which an enterprise’s existing control portfolio maps to an authoritative framework source such as COBIT®19 or NIST SP 800-53 Security and Privacy Controls for Information Systems and Organizations.20 If the scope and coverage are extensive and thorough, with few gaps, the control environment can be considered strong; in this case, the control effectiveness score becomes significant. However, if control scope and coverage are still maturing and many gaps remain, a high control effectiveness score is less meaningful because it does not yet cover the total environment. Thus, a 65 percent control effectiveness rating with full coverage (as measured by a given framework) may be more informative and meaningful than a 100 percent control effectiveness score with only fractional coverage. It is critical that control scope and coverage and control effectiveness be evaluated together; considered alone, neither provides a detailed enough view of true mitigation efforts.
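
The sketch below illustrates this point by computing both the effectiveness score and a framework-coverage score and reporting them side by side. All of the counts, including the framework size, are hypothetical.

```python
# Illustrative sketch: control effectiveness is only meaningful alongside coverage.
# All counts below are hypothetical.
effective_controls = 65
total_controls = 100
framework_objectives_covered = 100   # e.g., objectives mapped to a framework such as COBIT
framework_objectives_total = 100

effectiveness = effective_controls / total_controls                    # 0.65
coverage = framework_objectives_covered / framework_objectives_total   # 1.00

print(f"Effectiveness: {effectiveness:.0%}, Coverage: {coverage:.0%}")
# 65% effectiveness with 100% coverage may say more about true mitigation
# than 100% effectiveness measured over only a fraction of the framework.
```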

Another facet of the control environment is the number of formal technology issues and their associated severity. The number of documented issues alone does not provide the full picture; severity is an equally important component. For example, an environment with a high number of low-severity issues should be evaluated differently from an environment with a moderate number of high-severity issues.
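
A simple weighting of open issues by severity, sketched below with hypothetical weights and issue counts, shows why a raw issue count alone can be misleading.

```python
# Hypothetical sketch: weighting open issues by severity instead of just counting them.
SEVERITY_WEIGHTS = {"low": 1, "medium": 3, "high": 9}  # assumed weights

def weighted_issue_score(issue_counts: dict) -> int:
    return sum(SEVERITY_WEIGHTS[sev] * count for sev, count in issue_counts.items())

many_low = {"low": 20, "medium": 2, "high": 0}   # 20*1 + 2*3 = 26
few_high = {"low": 2, "medium": 3, "high": 4}    # 2*1 + 3*3 + 4*9 = 47

print(weighted_issue_score(many_low), weighted_issue_score(few_high))  # 26 47
```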

In addition, metrics and key performance indicators (KPIs) can be used to obtain a holistic view of how the environment is performing over time. Applying quantifiable measures to vulnerabilities in the environment or assessing the number of high-priority incidents over a certain period helps determine how the environment is trending across periods and how well mitigation efforts are working. Metrics can be helpful in identifying weaknesses in the control environment, even when a control is effective. It is important to note that thresholds and indicators should be carefully defined and frequently reviewed to determine the appropriate measurements for the best visibility.

Residual Risk
When inherent risk, top and developing risk, and the control environment are qualitatively assessed using measurable data and concrete supporting evidence, residual risk can be defined with more objective facts, in contrast to the subjective nature of a purely qualitative assessment. Objective facts and quantification elements related to residual risk include the overall control effectiveness percentage (assuming full risk-based coverage), the number of regulatory or compliance issues, the number of high-risk technology issues, and operational metrics and the health of those metrics associated with technology management. When quantitative factors are present, qualitative judgments by risk professionals can be more fairly challenged by the enterprise’s other lines of defense (second line, internal audit) or by external auditors or regulators.

If an assessment includes both quantitative and qualitative components, they can be aligned to reach a conclusion about the overall residual risk of the environment. Assessing residual risk on a periodic basis provides knowledge about how the environment is changing and evolving over time from both a quantitative and a qualitative perspective. It can also identify weaknesses in the environment, such as areas where more controls may be needed or existing controls may be lacking effectiveness.
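
As a sketch of how quantitative elements can be aligned with the qualitative conclusion, the example below combines an inherent risk rating with a control effectiveness figure and a coverage flag to suggest a residual risk band. The mapping and reduction logic are illustrative assumptions, not a standard formula.

```python
# Illustrative sketch: aligning quantitative inputs with a residual risk conclusion.
# The numeric scale and the reduction logic are assumptions for illustration.
INHERENT_SCORES = {"low": 1, "medium": 2, "high": 3}

def residual_risk(inherent: str, control_effectiveness: float, full_coverage: bool) -> str:
    """Reduce the inherent rating only when controls are both effective and full-coverage."""
    score = INHERENT_SCORES[inherent]
    if full_coverage and control_effectiveness >= 0.8:
        score -= 1                     # strong, broadly scoped controls lower residual risk
    score = max(score, 1)
    return {1: "low", 2: "medium", 3: "high"}[score]

print(residual_risk("high", 0.85, True))    # "medium"
print(residual_risk("high", 0.95, False))   # "high" - effectiveness without coverage
```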

Conclusion

One major challenge of qualitative risk assessments for senior management, boards of directors and other nonrisk professionals is the ambiguity of assessments that are primarily rooted in expert judgment and highly subject to opinion. To counter this challenge, risk practitioners should maintain a well-established, mature framework with concretely defined constructs for the methodology and quantifiable data elements. Quantification bolsters the credibility and accuracy of the qualitative risk assessment.

Enterprises have a variety of options when performing technology risk assessments, ranging from internally developed methodologies to the adoption of industry methodologies (proprietary and open source). In general, the predominance of qualitative assessments and methodologies suggests that the complexity, cost and limitations of purely quantitative assessments constrain their more widespread adoption. In reality, a blended approach in which qualitative assessments are supported by quantitative techniques and tools may offer the best of both worlds.

Authors’ Note

The authors acknowledge the assistance of Robert Boutell, IT risk director, in providing review and feedback.

Endnotes

1 International Organization for Standardization (ISO), ISO 31000:2018 Risk management—Guidelines, Switzerland, 2018, https://www.iso.org/iso-31000-risk-management.html
2 Gibson, D.; Managing Risk in Information Systems, 2nd Ed., Jones and Bartlett Learning, USA, 2015
3 Rot, A.; “IT Risk Assessment: Quantitative and Qualitative Approach,” Proceedings of the World Congress on Engineering and Computer Science 2008 (WCECS 2008), 22–24 October 2008, http://www.iaeng.org/publication/WCECS2008/WCECS2008_pp1073-1078.pdf
4 Evrin, V.; “Risk Assessment and Analysis Methods: Qualitative and Quantitative,” ISACA® Journal, vol. 2, 2021, https://www.isaca.org/archives
5 National Institute of Standards and Technology (NIST), Special Publication (SP) 800-30 Rev. 1, Guide for Conducting Risk Assessments, USA, September 2012, https://csrc.nist.gov/publications/detail/sp/800-30/rev-1/final
6 International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), ISO/IEC 27005:2018 Information technology—Security techniques—Information security risk management, Switzerland, 2018, https://www.iso.org/standard/75281.html
7 US National Institute of Standards and Technology (NIST), NIST Special Publication (SP) 800-53 Security and Privacy Controls for Information Systems and Organizations, Revision 5, USA, 2020, https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf
8 Op cit International Organization for Standardization/International Electrotechnical Commission
9 Op cit Rot
10 Yazar, Z.; A Qualitative Risk Analysis and Management Tool—CRAMM, SANS Institute, USA, 2002, https://www.sans.org/white-papers/83/
11 Jones, J.; “An Adoption Guide for FAIR,” RiskLens, 2019, https://www.risklens.com/ebooks/an-adoption-guide-for-fair
12 Op cit Rot
13 Wangen, G.; C. Hallstensen; E. Snekkenes; “A Framework for Estimating Information Security Risk Assessment Method Completeness,” International Journal of Information Security, vol. 17, iss. 6, 2017, https://doi.org/10.1007/s10207-017-0382-0
14 Op cit Yazar
15 Ibid.
16 Op cit Rot
17 Op cit Jones
18 Drake, V.; “Threat Modeling,” Open Web Application Security Project (OWASP), https://owasp.org/www-community/Threat_Modeling
19 ISACA®, COBIT®, USA, 2018, https://www.isaca.org/resources/cobit
20 Op cit US National Institute of Standards and Technology

JULIE EBERSBACH

Is an IT risk manager at a midwestern US regional banking institution. She can be reached at jvandenbulke@gmail.com.

MICHAEL POWERS | PH.D., CRISC

Is an IT risk director at a midwestern US regional banking institution and an adjunct professor of quantitative statistics, project management and cybersecurity at three universities. He can be reached at mpowersphd@gmail.com.