Enterprise Risk Monitoring Methodology, Part 2

Author: Luigi Sbriz, CISM, CRISC, CDPSE, ISO/IEC 27001:2022 LA, ITIL V4, NIST CSF, UNI 11697:2017 DPO
Date Published: 9 April 2019

An enterprise risk monitoring process requires the involvement of all the internal processes in the organization, but from different perspectives, because the data processed have different degrees of granularity. This is generally considered a complex and time-consuming effort, roughly proportional to the size of the organization. The most obvious goal is to contain the costs of managing the risk-monitoring process while seeking high-quality outcomes, reducing the time, effort and complexity of the operation.

This methodological approach keeps the process simple and doable even in very large and complex organizations.

Enterprise Risk Assessment

There is a need to evaluate a broader set of risk areas for the top level of the organization, with a focus on process control rather than operational control, even at the expense of losing a bit of granularity in the risk analysis.

A different methodology is necessary to integrate the operational risk with the strategic risk and with any general control framework based on the evaluation of business outcomes.

In this situation, it is necessary to change perspective and start from the operating rules of the business instead of from the operating actions. An action is the implementation of a rule. The risk of not being able to meet expectations is a function of three indicators: the maturity level of the implemented solution, the expected loss if expectations are missed and the likelihood of the worst-case scenario.

The level of implementation or enforcement of a rule is the basis for its risk evaluation. The gap with respect to full implementation is the metric used to evaluate the risk. The rules coming from all the control frameworks adopted in the organization make up a wide set of variables that, with a suitable algorithm, allows practitioners to build a flexible method to evaluate all the risk factors they want.

The Concept

The concept behind this methodology is that the operating rules in an organization are issued to achieve the business objectives and to highlight any issue standing in the way of those objectives. So, indirectly, evaluating the rules’ enforcement provides a basis for calculating exposure to risk. In other words, it is similar to an evaluation of vulnerability.

The idea of building an appropriate set of rules is to split the organization into its basic constituent rules—a few hundred rules taken from policies, procedures, contractual constraints, regulations, laws, standards and so on. For each rule selected, it is necessary to assess its current level of implementation, and the assessment is provided by an accountable party or “expert” for that rule. Reassembling the individually evaluated rules according to the controls (each made of one or more rules) of a chosen framework makes it possible to extract a complete risk estimation under that framework.

Practically, this is possible by creating a relationship between the rules and the controls: for each control, all the rules having an impact on it are identified. By aggregating the evaluations of the rules selected for a control, it is possible to assess the level of implementation of that control. Appropriate weights per rule are used in the calculation and consolidation of the risk across all the levels of the framework.
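As a minimal sketch of this rule-to-control aggregation (the control identifiers, rule codes, weights and 0-6 maturity scores are invented for illustration; the article’s actual scales are defined in its figures):

```python
# Each control maps to the rules that impact it, with a weight per rule.
control_rules = {
    "A.5.1": [("RULE-017", 2.0), ("RULE-042", 1.0)],
    "A.8.3": [("RULE-042", 1.0), ("RULE-101", 3.0)],
}

# Maturity score (0 = not implemented ... 6 = compliant) per rule,
# as assessed by the accountable "expert" for that rule.
rule_maturity = {"RULE-017": 5, "RULE-042": 3, "RULE-101": 6}

def control_implementation(control: str) -> float:
    """Weighted average of the maturity of the rules mapped to a control."""
    pairs = control_rules[control]
    total_weight = sum(w for _, w in pairs)
    return sum(rule_maturity[r] * w for r, w in pairs) / total_weight

for c in control_rules:
    print(c, round(control_implementation(c), 2))
```

Note that a single rule (RULE-042 above) can feed several controls, which is what removes the data redundancy across frameworks: the rule is assessed once and consolidated many times.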

Therefore, the first step of the methodology is to identify the operating rules of the organization. A practical approach could be to read the operating procedures of the internal processes or departments and, for each of them, identify the actions necessary to perform their operations. Any action, or group of actions, is a rule. Should it be necessary, it is possible to split an action into sub-actions, but only if there is a realistic need to manage them differently (i.e., separately in different contexts). Otherwise, the split is not necessary (to avoid inflating the number of selected rules).

Rule Enforcement Evaluation

Evaluation consists of an estimation of the level of maturity, the expected economic loss and the likelihood of the worst outcome. The metrics adopted are qualitative because, for each rule, it is necessary to collect the expert’s judgment on that rule. An analytic consolidation is provided automatically by the system using a numerical conversion for each qualitative item.

The basic rule is called the “control statement” and must be uniquely coded. At the end of its descriptive text (narrative), there is also a label representing its severity, shown as a colored flag after the text of the control (figure 1). The relevance of the severity flag lies not only in the impact of its weight in the consolidation, but also in the level of mandatory application of the rule for the organization. When severity is high, missing compliance requires a mandatory remediation plan, to be completed as soon as possible.

It could happen that two or more risk assessors entitled to answer on the same rule/control have different evaluations. In that case, only the worst will be recorded.

Maturity Level

The first step is to define a metric to measure the level of implementation of a rule. The method requires a qualitative assessment, but with a hidden numeric twin to allow easy analytic calculation and consolidation of the results. So, any qualitative item can be shown as a number and vice versa. What is proposed is a seven-level ranking (figure 2). Each level is assigned a weight to reflect the different degrees of severity between the proposed levels from the risk perspective.
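A sketch of the qualitative/numeric twin idea follows. The seven labels and their numbers are assumptions for illustration only; the actual ranking is defined in figure 2:

```python
# Hypothetical seven-level maturity scale with its hidden numeric twin.
MATURITY_SCALE = {
    "Not applicable": None,        # excluded from consolidation
    "Not implemented": 0,
    "Initial": 1,
    "Partial": 2,
    "Largely implemented": 3,
    "Further attention": 4,
    "Compliant": 5,
}
NUM_TO_LABEL = {v: k for k, v in MATURITY_SCALE.items() if v is not None}

def to_number(label: str):
    """Qualitative item -> hidden number for analytic consolidation."""
    return MATURITY_SCALE[label]

def to_label(n: int) -> str:
    """Number -> qualitative item for presentation to the end user."""
    return NUM_TO_LABEL[n]

# When several assessors answer the same rule, only the worst is kept:
answers = ["Compliant", "Partial", "Further attention"]
worst = to_label(min(to_number(a) for a in answers))
print(worst)
```

The `min` at the end mirrors the rule stated earlier: with multiple evaluations of the same rule/control, only the worst one is recorded.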

Evaluation should not measure performance (how the rule has delivered results in the past or present), but the prospect of addressing a risk in the future. In other words, if the result achieved can be improved, even when it is formally compliant, the highest allowed answer is “Further attention” because, from a risk perspective, improvements could still be made to reduce the risk exposure. For example, a rule such as “Payment of the software licenses is mandatory” may exist. If the licenses of any software acquired are paid, but no regular check of the already installed software is performed, full compliance has not been reached because the current control can be improved (with the monitoring of the installed software).

Loss Expectancy

After evaluating the maturity level, it is necessary to define the level of expected economic loss if compliance is not achieved. The expected loss, too, is declared using a qualitative set of elements. Any loss is weighted as a percentage of the business unit’s intrinsic value (I.V.), as shown in figure 3.

The I.V. is the overall economic value assigned to the business unit (the entity under assessment), establishing what it is worth, in the event of its unavailability, toward achieving the objectives of the organization. Depending on the kind of business, it could be the yearly turnover, the relocation cost, the total value of the main contracts or another value considered representative of the specific business unit. This value can be reviewed during the budgeting process as a target for the business. From a risk analysis perspective (based as it is on a future scenario), a forward-looking target value is a better overall reference value for the business.
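The conversion from a qualitative loss level to money can be sketched as follows; the percentage bands are hypothetical, since the actual ones appear in figure 3:

```python
# Hypothetical qualitative loss levels expressed as a fraction of the
# business unit's intrinsic value (I.V.).
LOSS_BANDS = {
    "Negligible": 0.001,  # 0.1% of I.V.
    "Low": 0.01,
    "Medium": 0.05,
    "High": 0.20,
    "Critical": 0.50,
}

def expected_loss(intrinsic_value: float, level: str) -> float:
    """Monetary loss implied by a qualitative level for a given I.V."""
    return intrinsic_value * LOSS_BANDS[level]

# A unit whose I.V. is a yearly turnover of 10 million:
print(expected_loss(10_000_000, "Medium"))  # 500000.0
```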

Worst-Case Likelihood

The evaluation of the expected loss makes sense only when the probability of its occurrence is clear. As with the other measures, the evaluation of the worst-case scenario is made with a qualitative indicator of likelihood matching a hidden analytic probability (figure 4).

The reason the maturity, loss and likelihood measures are qualitative rather than quantitative is to make data entry easier for the end user. The events being managed lie in the future, so pinning down a precise number is harder than choosing from a small set of qualitative items. How many elements are adopted (e.g., a ternary scale or a five-value scale) is not very significant, because no precise estimate is expected: it is an estimate made by a person for a potential future event, and it is uncertain enough not to require great precision.

Remediation and Explanation

The assessment of the level of application of the rule requires two further areas of information. They are optional when the unit is compliant, and mandatory if noncompliance is determined. The first area is a remediation plan, reduced to the essential: it is summarized by only two pieces of information, the organizational position accountable for solving the problem and the period in which to complete it. Neither the name of a person nor a specific day is allowed, because it is necessary to know the organizational role leading the solution, and a period is easier to estimate than a specific date.

The second information area is an explanation, which is always recommended when it is necessary to understand the severity of the issue. Typically, the explanation is addressed to a person (risk analyst, auditor) who does not know the process under evaluation; for this reason, the explanation must be clear and essential.

Risk Calculation

Several methods can be adopted for the calculation of the risk, but the recommendation is to use the easiest. The method used here avoids complex calculations by enumerating all the possible combinations (Cartesian product)1 of the three input parameters (maturity, loss expectancy, likelihood) and assigning a risk level to each. The risk level is the output (figure 5).

The risk function is implemented by a small table with three input columns, dedicated to maturity, loss expectancy and likelihood, while the last column provides the output (risk level). A simple SQL SELECT on this table performs the transformation of a maturity assessment into a risk level; this means great flexibility and minimal complexity. The management of the Cartesian product is simple because the logical relationship of maturity/loss/likelihood to risk level is kept in natural language, and a simple script truncates and repopulates the risk matrix table whenever anyone wants to update a risk level.
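A minimal sketch of this lookup-table approach, using an in-memory SQLite database; the table layout, value labels and sample rows are assumptions for illustration (in practice the script would populate the full Cartesian product):

```python
import sqlite3

# The risk matrix: three input columns plus the output (risk level).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE risk_matrix (
    maturity TEXT, loss TEXT, likelihood TEXT, risk TEXT)""")

# Only a few of the possible combinations are loaded here.
con.executemany(
    "INSERT INTO risk_matrix VALUES (?, ?, ?, ?)",
    [
        ("Not implemented", "High", "Likely", "Critical"),
        ("Partial", "Medium", "Possible", "Medium"),
        ("Compliant", "Low", "Rare", "Low"),
    ],
)

def risk_level(maturity, loss, likelihood):
    """SQL SELECT that turns the three assessments into a risk level."""
    row = con.execute(
        "SELECT risk FROM risk_matrix "
        "WHERE maturity=? AND loss=? AND likelihood=?",
        (maturity, loss, likelihood),
    ).fetchone()
    return row[0] if row else None

print(risk_level("Partial", "Medium", "Possible"))  # Medium
```

Updating a risk level then amounts to truncating and repopulating the table, with no change to the calculation logic itself.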

At any moment, the level of risk can be viewed either as qualitative data or as numerical data with minimal effort, by use of the risk master file, which stores the qualitative/quantitative relationship, the display styles, the labels and the weights for the consolidation. In the same way, the three input parameters are stored in master tables holding all the relationships needed to calculate or present the data, switching between analytic and textual form and vice versa. This enables a single on-the-fly inquiry to extract the risk outcomes in terms of value (qualitative/quantitative) and presentation features (e.g., fonts, icons, colors).

Self-Assessment Certification

Basically, the enterprise risk assessment process described here is intended as a self-assessment process. It is organized to gather information at the entity level, involving all the local departments or processes and relying on the experience of the managers who complete the checklist. Of course, it is necessary to ensure harmonization and control of the answers provided. Recalling the Responsible, Accountable, Supported, Consulted, Informed (RASCI) matrix, there are two additional roles that verify the self-assessment statements: the certifiers and the auditors.

The certifier is identified within the same working process as the rule considered, but holds a role/position at an upper business level, such as the headquarters level, the department-head level or similar. The type of check envisaged is an offsite interview (e.g., by email, telephone call, video conferencing) with the local risk assessor, possibly supported by documented information sent by the interviewed person. Any answer provided by the risk assessor can be changed by the certifier. In this case, a suitable flag indicates the change, and the answer becomes unchangeable by the risk assessor for a certain period.

The auditor is independent of the entity or the working process considered and acts on the complete set of rules directly onsite. Three different kinds of checks can be performed: interviews of the people involved, analysis of documented information and observation. Like the certifier, the auditor can change any evaluation, which blocks data entry for a certain period (a parameter defined in the system) for both the local risk assessor and the risk certifier. See figure 6 for a matrix of the different levels of assessment and certification of the statements.

The assessment provided is kept valid for a fixed period and then automatically placed in the expired status. A good choice for this period is the time interval between one reevaluation and the next. The verification status also distinguishes whether a statement has been certified through an interview (trust in the answer) or audited, i.e., whether an independent test (without data prepared by the interviewed person) has taken place.

Integration Between RTP and ERA

Now it is time to consider again the risk treatment plan (RTP) tool. The work carried out for the issuance of the RTP will not be lost if the information is shared. The framework of both is basically the same, but with different levels of detail; so, it is possible to correlate the risk analysis of one with the other. A logical sequence consists of executing the RTP first so that, consequently, all the controls involved in the enterprise risk assessment (ERA) are fed automatically. Data entry in ERA will then be blocked for all the impacted controls.

The RTP/ERA conversion relationships involve the actions list and the risk list; both are one way (figure 7). In the actions list, there is the relationship between the domain of the progress of the actions (PROGRESS) in RTP and the domain of the rating of the maturity level (MATURITY) in ERA. Delayed actions in RTP are evaluated simply as in progress, without distinction, in ERA.

The actions list also illustrates the relationship between the impact of missing the implementation objectives (IMPACT) in RTP and the expected loss (LOSS) in ERA (figure 8). The granularity of the impact on performance in RTP is lower than that of the monetary impact in ERA.

In the risk list, the probabilities in RTP are coded in the same way as in ERA. In contrast, the rating for risk assessment (RISK) in RTP relates to the rating of the maturity level (MATURITY) in ERA (figure 9). The concept of acceptable risk in RTP is equivalent to compliance in ERA, but the impact of a confidentiality, integrity and availability (CIA) objective above the baseline will need further attention.

Even though the concepts are the same, the calculation functions of the risk in RTP and ERA must remain distinct, because the algorithms use logic and weights differently.

Control Frameworks

Two different methods to collect information for a risk evaluation have been described herein: one based on a bottom-up methodology (aggregating assessments of operating activities on the level of achievement of the assigned tasks) and one on a top-down methodology (dividing the processes into rules for their subsequent assessment). They are based only on the enterprise organizational structure and its mission and objectives (a tailored approach considering only the relevant phenomena), and they are integrated at the lowest level.

If a new external control framework is adopted by the organization, it is not necessary to organize a new data collection for each entity; it is enough to build on the lowest level of ERA. Generally, the great detail and availability of rules in ERA should allow the creation of a relationship with the new framework without changing the ERA checklist. In some cases, however, a rule in ERA could be split, or a new one could be added, to better adhere to the lowest level of the newly adopted framework. For example, if the organization wants to adopt a sustainability standard, the first step is to establish relationships between each new standard control and the internal control statements of ERA (figure 10).

At any time, it is possible to review this relationship, adding new ERA controls, removing them or changing their weight in the consolidation through the data entry shown. Also, considering that the calculation is done on the fly, any change is immediately reflected in the reporting (figure 11).
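The on-the-fly behavior can be sketched as follows; the framework control identifier, ERA statement codes and weights are invented for illustration:

```python
# Existing ERA maturity scores (0-6), already collected once.
era_scores = {"CS-010": 4, "CS-023": 6, "CS-051": 2}

# New-framework control -> [(ERA control statement, weight)].
mapping = {"SUST-1.2": [("CS-010", 1.0), ("CS-051", 2.0)]}

def framework_score(control: str) -> float:
    """Weighted consolidation of ERA statements under the new framework."""
    pairs = mapping[control]
    return sum(era_scores[s] * w for s, w in pairs) / sum(w for _, w in pairs)

print(round(framework_score("SUST-1.2"), 2))  # (4*1 + 2*2)/3 = 2.67

# Reviewing the relationship: change a weight and the next inquiry
# immediately reflects it, with no new data collection.
mapping["SUST-1.2"] = [("CS-010", 2.0), ("CS-051", 1.0)]
print(round(framework_score("SUST-1.2"), 2))  # (4*2 + 2*1)/3 = 3.33
```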

After this setup operation, it is possible to analyze the outcomes in the reporting area using tabular reports or charts to compare the controls with one another over time at any level of the organization. For example, a tabular report could be used to obtain the Statement of Applicability (in the International Organization for Standardization [ISO] sense) for any base or consolidating entity, with the evaluation of the new controls and their basic components (derived from the ERA checklist).

An immediate evaluation of the trend of the new controls can also be achieved with a chart (figure 12). Charts are more effective for analyzing qualitative phenomena.

In the same way, it is possible to continue the analysis by changing the shape of the report, swapping the axes, selecting a different granularity of the data, comparing different levels of consolidation (by process, by geographical area, by legal entity, by period) and so on for each framework (figure 13). The different perspectives help identify the weaknesses or the areas that need further study.

The analytic approach is always possible using the I.V. of the entity to transpose the qualitative outcome into a figure. That is, textual labels can be substituted with numbers, but no advantage is gained because no information is added. The analysis regards the future (uncertain and undecided), so a qualitative approach is faster and easier without losing accuracy.

Risk Monitoring Used for the Internal Audit Plan

Periodically, the internal audit process needs to issue a new plan to audit the entities. The prioritization of the entities to be visited is driven by the assessments of the ERA controls. By recombining this set differently with respect to the maturity level algorithm and adding some other information (e.g., entity context, incidents that occurred, performance of the local business), it is possible to obtain indicators to use as metrics for the calculation of the auditability index (A). This index provides a criterion for weighing entities and ordering the audit sequence. The auditability index is calculated by the formula:

A = ∑i (wi × ai)

Each ai is a component of the auditability and wi its weight. Evaluating the auditability index (figure 14) for each entity and sorting from highest to lowest puts the eligible candidates for audit at the top of the list. The decision of how many and which indicators to use depends on the type of business and the risk analyses. A good heuristic is to choose an indicator for each critical factor identified in the achievement of the business objectives. The following example shows 12 critical factors evaluated using, partially or totally, specific groups of ERA controls.
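The weighted-sum ranking can be sketched as follows; the indicator names, weights and entity scores are hypothetical examples of the critical factors mentioned above:

```python
def auditability(indicators: dict, weights: dict) -> float:
    """Auditability index: weighted sum of the components for one entity."""
    return sum(weights[name] * value for name, value in indicators.items())

weights = {"untrusted_era": 3.0, "incidents": 2.0, "data_insecurity": 1.5}

entities = {
    "Plant A": {"untrusted_era": 0.7, "incidents": 0.2, "data_insecurity": 0.5},
    "Plant B": {"untrusted_era": 0.3, "incidents": 0.8, "data_insecurity": 0.1},
}

# Rank entities from highest to lowest index: the top ones are the
# eligible candidates for the next audit plan.
ranked = sorted(entities, key=lambda e: auditability(entities[e], weights),
                reverse=True)
print(ranked)  # ['Plant A', 'Plant B']
```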

Two indicators illustrate the impact of the relationship with ERA. The first indicator (called untrusted ERA) is fully based on the ERA outcomes, but with a different perspective on the maturity level. An indicator of confidence in the risk assessments has to treat in the same way the case in which everything is declared perfect (but without evidence) and the case in which everything is declared wrong (but without a remediation plan). These answers are more worrying than declared issues with a remediation plan (a negative situation, but supposedly under control). In other words, there is more confidence in situations where problems are faced than in situations without any declaration of problems, or with unaddressed problems.

This concept is represented by higher scores for situations of distrust and lower scores for the others. The answers are also weighted according to whether they are certified or audited, compared to those that are only self-assessed. So, the outcome is an indicator of distrust, obtained by applying a formula with a different logic than the maturity level. This is easily possible by changing only the weights used for each rating in the calculation algorithm and nothing else.
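A sketch of the untrusted-ERA idea: the same ratings, scored with distrust weights instead of maturity weights. The labels and weights are assumptions; the principle is only that “perfect without evidence” and “wrong without remediation” score worst:

```python
# Hypothetical distrust weights per rating situation.
DISTRUST_WEIGHT = {
    "Compliant (self-assessed only)": 3,    # perfect, but no evidence
    "Not implemented (no remediation)": 3,  # wrong, and unaddressed
    "Issue with remediation plan": 1,       # negative, but under control
    "Compliant (audited)": 0,               # independently verified
}

def distrust_score(answers: list) -> int:
    """Indicator of distrust: higher means less confidence in the answers."""
    return sum(DISTRUST_WEIGHT[a] for a in answers)

print(distrust_score(["Compliant (self-assessed only)",
                      "Issue with remediation plan"]))  # 4
```

Only the weight table differs from the maturity calculation; the consolidation algorithm itself is unchanged, as the text states.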

As a second example of an indicator impacted by the ERA process, consider the measure of the level of insecurity of the data (figure 15). It is similar to the indicator of the information security level, but is obtained with a different logic.

Both are obtained by mixing information from ERA to build the evaluation of a few ISO 27001 controls, plus information gathered by the mapping of the entity (e.g., operative information from RTP). The outcome may seem strange, considering that both are quite high, but two different types of logic are at work. Information security is a formal evaluation of the compliance of the controls applied to data protection. Data insecurity is the lack of confidence in the consistency of the answers used in the information security key performance indicator (KPI) evaluation. In the example, an inconsistency is a declaration that the business impact analysis (BIA) is compliant when the maximum tolerable downtime (MTD) parameter is missing. That is a clear indication of distrust in the answers; there can be no trust in an analysis when the main parameter driving the analysis itself is missing.

Conclusions

Two distinct but integrated approaches are used to provide overall monitoring of enterprise risk. The first advantage of this methodology is the reduction of the effort required to keep the management system updated. There is no data redundancy and no repeated request for the same data (for different frameworks), and only the right people are involved (i.e., those who deal with the topic operationally in their working position).

Other significant benefits include the ease of managing the structure without a post process to realign the database and the absence of massive training of risk assessors (self-explanatory forms tailored to daily work). The people involved have to evaluate only their own job, and this contributes quality to the answers gathered. Of course, many people are involved, but each only for a very small activity consistent with their own work.

The automatic integration of the RTP and ERA data collection ensures a solid mechanism to obtain the necessary consistency in the outcomes. The automatic data transfer, without human effort, means no typos or contradictory assessments on the same topic.

The basic mechanism feeds the system with the self-assessment, but the steps that follow—of certification offsite and auditing onsite—result in a high level of harmonization and quality in the data. An additional advantage, and certainly not the least significant, is the flexibility to introduce a new framework with a minimal impact on end users.

Endnotes

1 Wolfram Alpha, “Cartesian Product,” https://www.wolframalpha.com/input/?i=cartesian+product

Luigi Sbriz, CISM, CRISC, ISO/IEC 27001:2013 LA, ITIL v3, UNI 11697:2017 DPO
Has been the risk monitoring manager at Magneti Marelli for more than four years. Previously, he was responsible for information and communications technology operations and resources in the APAC region (China, Japan, Malaysia) and, before that, was the worldwide information security officer for more than seven years. For internal risk monitoring, he developed the described methodology, merging an operative risk analysis with a consequent risk assessment driven by the maturity level of the processes. Also, he designed the cybermonitoring tool. Sbriz was also a consultant for business intelligence systems for several years. He can be contacted directly at https://it.linkedin.com/in/luigisbriz or his current contact information can be found at http://sbriz.tel.