Proximate Cyberrisk Management Starts With Culture

Author: Jack Freund, Ph.D., CISA, CISM, CRISC, CGEIT, CDPSE, Chief Risk Officer, Kovrr
Date Published: 26 January 2022

Everybody wants to get to the root cause of a problem, but few actively try to manage the risk around its edges. Root cause analysis (RCA) is a common technique in the sciences that helps practitioners understand the initiating cause of a problem. Identifying this first link in a causal chain helps one understand how to resolve a fault, error or other undesirable outcome of a system.

In cybersecurity, time is often spent trying to understand the root cause of a cyberincident. In doing so, one can get sidetracked by focusing on variables that are easy to manage rather than the factors that will actually resolve the problem. For example, a significant focal point in post-incident analysis is which controls failed and why. This is an important part of such analyses; it is critical to know, for example, that an outdated version of some middleware product was exploited in an attack. However, as the old aphorism goes, correlation does not necessarily mean causation.

True RCA makes a distinction between proximate causes and ultimate, or root, causes. Proximate causes are those closest to the event under analysis. These are often controls. It can be true that the proximate cause of a successful hack was a failure to set correct permissions for cloud storage. This opening may have allowed a threat agent to access the files within and cause the incident. While closest to the incident (proximate), the lack of appropriate permissions may not be the root cause. Solving the immediate problem could be as straightforward as adjusting the permissions set for the cloud storage. However, a deeper analysis is often necessary to understand the ultimate cause of this control state. It is here that a catalog of controls is no longer useful in diagnosing the failure. Often, the ultimate causes are uncovered by analyzing cultural factors and human failures.
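The distinction can be made concrete with a small sketch. The snippet below uses a hypothetical, provider-agnostic data model for a storage bucket's access control list (the `acl` dictionary, the `find_public_grants` helper and the grantee names are all illustrative assumptions, not any vendor's actual API). It flags the proximate cause, an overly broad grant, which a one-line permissions change would fix; why that grant was created in the first place is the cultural question the control catalog cannot answer.

```python
# Hypothetical sketch: detect the proximate cause of a cloud storage exposure.
# The ACL data model and grantee names below are illustrative assumptions,
# not tied to any specific cloud provider's API.

def find_public_grants(acl):
    """Return grantees that expose the bucket beyond the owning team."""
    risky = {"AllUsers", "AuthenticatedUsers"}  # assumed public principals
    return [g["grantee"] for g in acl.get("grants", []) if g["grantee"] in risky]

acl = {
    "owner": "finance-team",
    "grants": [
        {"grantee": "finance-team", "permission": "FULL_CONTROL"},
        {"grantee": "AllUsers", "permission": "READ"},  # the proximate cause
    ],
}

print(find_public_grants(acl))  # -> ['AllUsers']
```

Removing the flagged grant resolves only the proximate cause; root cause analysis then asks how organizational pressures or gaps in training led someone to set it.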

It is important for any analysis of a security incident to include an evaluation of the cybersecurity culture in which it occurred. Does the organizational culture in which this technology is employed have attributes that make it more or less likely that incorrect permissions will be set for cloud storage? Such characteristics can be indicative of a weak security culture and are important elements of understanding the ultimate cause of a security incident. Are expectations communicated clearly? Does the organization encourage or unofficially require employees to cut corners to meet demands? Does it care for the well-being of its employees in such a way that they are ready and able to protect the enterprise's assets and further its organizational goals and missions? Are employees constantly worried about losing their jobs or otherwise distracted? Are they given the opportunity to keep their skills up to date?

Ultimately, the answers to all of these questions contribute to various organizational failures and successes, but it is important to understand the ways in which they can manifest themselves in the cybersecurity realm. Will employees need more training, better tools, checklists, standard operating procedures or more help from others? Understanding the basic elements of human behavior in the context of organizational culture is critical to understanding the ways in which these behaviors can contribute to cybersecurity failures.

Jack Freund, Ph.D., CISA, CRISC, CISM, CGEIT, CDPSE, NACD.DC

Is vice president and head of cyberrisk methodology for BitSight, coauthor of Measuring and Managing Information Risk, 2016 inductee into the Cybersecurity Canon, ISSA Distinguished Fellow, FAIR Institute Fellow, IAPP Fellow of Information Privacy, (ISC)2 2020 Global Achievement Awardee and ISACA’s 2018 John W. Lainhart IV Common Body of Knowledge Award recipient.