Emotional Cyberrisk Management Decisions

Author: Jack Freund, Ph.D., CISA, CISM, CRISC, CGEIT, CDPSE, Chief Risk Officer, Kovrr
Date Published: 3 January 2024

There are certain aspects of managing cyberrisk in which emotion can play an important role. However, far too often, emotion is treated as the default tool in risk management, rather than the specialty tool it is meant to be. Cognitive biases and heuristics have an outsized influence on cyberrisk management. They are hard-wired into our brains and, if managed poorly, can make it difficult to ascertain reality. One of the more common sources of bias is confirmation bias, which is the tendency to search for, interpret, and recall information in a way that confirms one’s preexisting beliefs or hypotheses.

Confirmation bias may lead someone to disregard evidence that contradicts their existing beliefs. Overconfidence bias is also common; it leads individuals to overestimate their abilities, such as the effectiveness of their cybersecurity controls. The anchoring effect occurs when someone relies too heavily on the first piece of information received (i.e., the anchor) when making decisions, such as an initial risk assessment that may now be outdated. The flip side of anchoring is the availability (or recency) bias, which occurs when someone gives more weight to recent events or experiences than to those that happened in the past.

The sum of these biases makes it easy to misinterpret data and can result in a variety of negative outcomes for an organization’s risk posture. Each of these biases becomes its own heuristic, a simplified decision-making model for risk. For example, I may decide that something is high risk because I have heard or read a lot about it, reasoning that if everyone is discussing something, it must be something I should care about. Decision-making models that operate this way, risk models included, leverage biases and can oversimplify the reality of a situation. Heuristic models are quick and work well in situations where immediate impacts can be devastating. However, most risk decision making concerns events that have yet to materialize, which means there is generally plenty of time to invest in more deliberation. This distinction is what Nobel laureate Daniel Kahneman refers to as System 1 (fast) and System 2 (slow) thinking.

Taking a more analytical approach to cyberrisk is an effective way to account for these decision-making deficiencies. Qualitative assessment methods double down on bias and heuristic error by adding ambiguous risk labels (e.g., high/medium/low). Cyberrisk quantification (CRQ), by contrast, embeds testable and verifiable numerical values into the corpus of decision-making artifacts. Because these decisions concern events that may happen in the future, the analysis should express a natural range of possible outcomes. These values relate to three major risk variables: the probability of events happening, the strength of the control environment in preventing or recovering from those events, and the severity or magnitude of impact.
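To make these three variables concrete, here is a minimal Monte Carlo sketch in Python of the kind of simulation that underlies many CRQ analyses. The scenario and every distribution parameter (attack frequency, control strength, loss severity) are illustrative assumptions for a single hypothetical risk scenario, not calibrated estimates or any particular vendor’s method.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000  # number of simulated years

# Illustrative, assumed inputs for one risk scenario:
# - event frequency: Poisson-distributed attempts, mean of 2 per year
# - control strength: each attempt is defeated with 70% probability
# - impact magnitude: lognormal loss per successful event
attempts = rng.poisson(lam=2.0, size=N)
control_strength = 0.70

annual_losses = np.zeros(N)
for year, n_attempts in enumerate(attempts):
    successes = rng.binomial(n_attempts, 1.0 - control_strength)
    if successes:
        # Lognormal severity with a median of roughly $100,000 per event
        losses = rng.lognormal(mean=np.log(100_000), sigma=1.0, size=successes)
        annual_losses[year] = losses.sum()

print(f"Mean annual loss:        ${annual_losses.mean():,.0f}")
print(f"95th percentile of loss: ${np.percentile(annual_losses, 95):,.0f}")
```

The point of the sketch is that each input is a testable claim: frequency, control strength, and severity can each be challenged, calibrated, and updated as evidence arrives, rather than collapsed into a single high/medium/low label.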

However, despite this well-established, if not widely implemented, ability to quantify cyberrisk, there is still a role for the emotional, subjective aspects of decision making. Imagine that someone prepares an exceedance probability curve (sometimes known as a loss exceedance curve), a standard tool in CRQ analyses that forecasts the likelihood of experiencing various amounts of loss in the coming year. They have overcome the biases of unaided human judgment and avoided the quick yet inaccurate risk assessment heuristics in wide use (e.g., heat maps). The individual presents this curve to the board of directors (BoD). What actions should the board take, and how is it expected to decide what to do?
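Continuing the illustrative sketch above, a loss exceedance curve can be read directly off the simulated annual losses: for each candidate loss amount, it is simply the fraction of simulated years in which losses met or exceeded that amount. The `annual_losses` array is assumed to come from the earlier sketch.

```python
import numpy as np

def exceedance_probability(losses: np.ndarray, threshold: float) -> float:
    """Fraction of simulated years with annual loss >= threshold."""
    return float((losses >= threshold).mean())

# Evaluate the curve at a few loss levels (annual_losses from the sketch above)
for threshold in (100_000, 500_000, 1_000_000, 5_000_000):
    p = exceedance_probability(annual_losses, threshold)
    print(f"P(annual loss >= ${threshold:>9,}) = {p:6.1%}")
```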

The use cases for CRQ tend to fall into a handful of discrete yet complex scenarios. First are decisions about risk transfer. This includes determining how much insurance coverage is needed and what the corresponding deductible should be. A corollary to this discussion is how much money to set aside in case these events are realized (i.e., a capital allocation strategy).

Another use case is deciding how much money to invest in cybersecurity controls. This involves both a big-picture assessment of overall cybersecurity spending and a granular assessment of individual project initiatives. It requires evaluating the budget necessary to support the tools, staffing, and third parties needed to operate a cybersecurity team. Decisions about how much to spend and where savings could be found are top of mind in these discussions.

These two use cases will spawn a series of conversations about what to tell the BoD, what to disclose to the investor community (materiality), and how to set and govern risk appetite. With this in mind, the BoD’s responses will be based on its feelings about how much risk and how much cybersecurity spending is too much. Indeed, this is how we all make decisions about finances: How much insurance we need is based on factors such as the replacement cost of the asset we are trying to protect and how much loss we feel is acceptable to absorb.

In this case, relying on emotions to set thresholds for items such as appetite, materiality, and insurance is reasonable. By controlling for bias and the limitations of heuristics, the human mind is freed to make the decisions for which it is well equipped: determining a comfortable level of risk exposure. We can pick a point on the loss curve and test it for comfort, then adjust based on strategy and other data about organizational performance. This way, we can set more appropriate limits on loss exposure and risk taking than merely saying, “We do not want to accept any high risk.”
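Concretely, and still assuming the simulated `annual_losses` from the earlier sketch, the board could state a tolerance such as “no more than a 5% chance per year of exceeding some loss level” and read that level off the curve, then test whether the resulting number feels acceptable:

```python
import numpy as np

# Assumed board tolerance: losses this large in at most 1 year in 20
appetite_probability = 0.05
loss_at_appetite = np.percentile(annual_losses, 100 * (1 - appetite_probability))
print(f"Annual loss exceeded with {appetite_probability:.0%} probability: "
      f"${loss_at_appetite:,.0f}")
```

The judgment about whether that dollar figure is comfortable remains an emotional, strategic call; the quantification merely ensures the call is made against a concrete number rather than an ambiguous label.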

Moving beyond emotional or intuitive decision making in critical aspects of cyberrisk management is vital, yet intuition cannot be abandoned altogether. Focusing on CRQ underscores the potential for more grounded, objective, and effective approaches to setting materiality disclosure thresholds and risk appetites. It is not intended to replace human decision making but rather to inform it, so that organizations can better manage their risk exposure and execute on their goals and objectives.

Jack Freund, Ph.D., CISA, CISM, CRISC, CGEIT, CDPSE, NACD.DC, is a cyberrisk quantification expert, coauthor of Measuring and Managing Information Risk, 2016 inductee into the Cybersecurity Canon, ISSA Distinguished Fellow, FAIR Institute Fellow, IAPP Fellow of Information Privacy, (ISC)2 2020 Global Achievement Awardee, and recipient of the ISACA® 2018 John W. Lainhart IV Common Body of Knowledge Award.