When Victims and Defenders Behave Like Cybercriminals

Author: C. Warren Axelrod, Ph.D., CISM, CISSP
Date Published: 1 January 2020

In this Orwellian era, when opponents are enemies and enemies are co-conspirators, when news is falsified and trust is endangered, it has become well-nigh impossible to apply clear definitions to attackers, victims and defenders in cyberspace. Social networks have expanded rapidly without the forethought needed to protect individuals’ privacy or to distinguish truth from falsehood. As a result, victims believe attackers’ false onslaughts to be real and cooperate with them, either intentionally or unwittingly, blurring the traditional distinction between the good guys/white hats (victims, defenders) and the bad guys/black hats (cybercriminals, hackers).

The rapid emergence of this new world order, with hostile nation-states and other bad actors manipulating politics and threatening critical infrastructures, has rendered prior models of good vs. evil obsolete. This situation calls for a new and better understanding of perpetrators’ motives, the consequences of their actions and the means of counteracting their influence. It also requires knowing the motivations of victims and defenders and how those impulses affect the protective measures that they take. It is imperative to incorporate into cybercrime models nefarious and malicious activities that previously may have been considered inconsequential but have now become mainstream.

Cybersecurity, audit and risk professionals must be more aware of the factors that induce individuals, groups, organizations, government agencies and nations to perpetrate cybercrimes or defend against them. When those responsible for cybersecurity understand why attackers act as they do, they are better able to construct and deploy effective measures that will deter attackers and protect assets from nefarious activities.

Understanding Victims and Defenders

There may be a predisposition to assume that potential victims and their defenders are all trustworthy, and because of that assumed trust, operational cybersecurity professionals tend to give victims and defenders greater access to more sensitive information than they would normally entrust to privileged insiders and outsiders. Trusted insiders are regularly given access to highly sensitive information without the requisite oversight. Herein lies the problem so well illustrated by the Edward Snowden case. Snowden, a contractor with the US National Security Agency (NSA), persuaded a colleague to lend him credentials for accessing highly sensitive classified information, which Snowden then leaked to the world.

The question that must be faced is how to grant individuals, groups and organizations the privileged access to sensitive data that they need while addressing the reality that some of them will abuse the trust associated with such access, deliberately or accidentally.

The notion that victims and defenders could be in league with attackers is occasionally mentioned, but seldom enters into consideration of how to protect systems and data from nefarious attacks. The closest considerations to this subject are discussions of insider threats, but evil insiders are often criminals posing as victims rather than real victims.

One approach is to understand how motives, motivation, intent, risk and consequences affect the decision-making of individuals and groups who balance these factors against the value and benefits of committing crimes. Interactions among attackers, victims, defenders and other players (i.e., observers and influencers) have evolved from relatively straightforward conflict models (red team vs. blue team) to much more complex hybrid cooperation/conflict models. As a result, the methods of the past no longer fully address the issues. There have been several attempts to find new approaches and alternative methods for resolving the many problems surrounding cybersecurity risk management,1, 2, 3 but, for the most part, these attempts have been only partially successful as the juggernaut of cybercrime plows ahead. It is essential to develop more realistic models that will, in turn, lead to better cybersecurity risk management decisions.

Attackers, Defenders and Victims

The determination of who is for an organization and who is against it is highly subjective. Victims who benefit from attacks have different perspectives on the short- and long-term consequences of those attacks than do those who see themselves as genuine victims. This was abundantly clear in the alleged Russian interference in the 2016 US elections. Those who believe that they benefitted from such intrusions appear reluctant to act to prevent the same or similar activities because they may have the opportunity to profit from such attacks again, even though they themselves could later be subjected to similarly detrimental attacks.

Much published research views cybersecurity risk from victims’ and defenders’ perspectives.4 Relatively few take the attackers’ point of view,5 although there is a contingent that strongly believes that defenders should think like hackers. The key to both approaches is to understand the behavior of all parties better.

Victims and defenders need not be, and frequently are not, the same persons or groups, although they might overlap somewhat in organizations and nations. Basically, victims are those who are the target of successful attacks. Individuals may have some responsibility for defending themselves against cyberattacks, but if those individuals are employed by an organization, they will be defended by dedicated internal groups or by outside third parties. Stricter criteria are usually applied to midsize to large organizations, which are expected to understand cybersecurity risk and have appropriate staff dedicated to mitigating it.

However, when cooperation or collusion between attackers and victims is in play, the relationships among the players must first be understood; then law enforcement and the legal system must distinguish between those who were victimized and those who participated in criminal activities, and prosecute perpetrators as appropriate. Attackers and defenders are less likely to work toward a common goal, although it can happen when certain objectives align. For example, an enterprise might engage hackers or former black hats to fight against attacks, since the latter likely better understand the thinking of other attackers and how to protect against them. However, such collaboration with former black hats is not generally advisable, since one can never be quite sure whether such persons might slide back into their former habits.

The following focuses mostly on cases where victims, and possibly defenders, choose to cooperate with attackers, either for personal gain or to prevent adverse consequences for themselves or their organizations. Sometimes, cooperation might be considered beneficial for an organization, as in the case of paying the ransom in a ransomware attack, in which attackers gain access to victims’ systems, encrypt victims’ data and offer to unlock the data only if a ransom is paid. One may decide not to succumb to such threats, as is usually advised, but when the value of the encrypted data greatly exceeds the sought-after ransom, there is great temptation to give in to the blackmailers. Such cooperation is not seen as criminal; in fact, quite the opposite.

The preceding is an example of cooperation for negative reasons but, of course, there are situations in which victims voluntarily agree to work with attackers for personal gain. In such cases, these persons can no longer be thought of as victims; they are attackers.

Trust and Verification

How can trust be dealt with in cyberspace? The common mantra of information security professionals is “trust, but verify.” However, a more appropriate approach might be “do not trust, and verify.” That is to say, the default position should be to not trust a source until it can be demonstrated that trust is warranted. One of the major issues that must be confronted in cyberspace is the popular tendency to put one’s faith in technologists, computer technology and the environments that they create. Whether through ignorance, inertia or disinterest, users of the Internet tend to trust the World Wide Web more than they ought to, judging from the huge amount of sensitive information that people are willing to disclose to the world on the web.

Verification of authoritative sources is no easy matter. Attackers can be extremely creative in their efforts to disguise sources and anonymize information. It has become common practice to spoof, hide or fake identities on the Internet, with someone masquerading as a real or imaginary person. The recipient of the information is often unable or unwilling to scrutinize incoming information to ensure that the professed source is genuine. Even when the source appears to be genuine, attackers can engage in a man-in-the-middle (MitM) exploit wherein they intercept transmissions and insert false information while preserving the apparent source. While this form of attack is used mainly for identity theft and fraud, there is no reason why a similar approach could not be used to plant untrue news items.

Erosion of Trustworthiness

There has recently been an uptick in focus on how to improve the trustworthiness of systems, networks and data as the incidence of falsification has mushroomed.6 As usual, technologists are seeking technology solutions to the problem. But it is important to include human factors in the mix,7 since technology can only achieve so much. Indeed, there are those who question whether technologists’ biases are at all acceptable. For example, the director of the US Defense Advanced Research Projects Agency (DARPA), Arati Prabhakar, is quoted as saying, “I don’t want to live in a world where technologists make up the answers (about what should be allowed in brain research).”8 Bona fide sources can certainly be biased. However, one can usually determine the form of that bias from knowledge of the particular source. When the source is deeply embedded within an algorithm, it is virtually impossible to tease out the biases or prejudices of the algorithm’s creator unless and until someone identifies the preferences that were written into it. This adds another layer of uncertainty to attempts to verify sources and their positions and makes the determination of trustworthiness that much more difficult.

Recent US congressional hearings, with testimony from Facebook, Google and other organizations, highlighted the limitations of algorithms and artificial intelligence (AI) in separating truth from falsehood and in detecting bias and prejudice. To truly address this loss of impartiality, the influence of political and economic factors, and how they play on the psychology of the humans who use these universal systems, must be investigated.

Perhaps the approach that would be most effective, yet most difficult to achieve, is to change the behavior of the players: attackers, victims and defenders. The answer may lie in a combination of deterrence aimed at attackers, avoidance by victims and preventive (rather than protective) methods used by defenders. The extent to which these methods will be effective depends on a full understanding of attackers’ motives and risk; on awareness, education and incentives for victims; and on substantive support for research into, and implementation of, preventive methods and technologies, in addition to setting up institutions to invoke and maintain policies and standards.

Motives and Motivations

The cyberenvironment, which consists of technology and human actors, can be made more resistant to compromise and less sensitive to efforts to diminish trustworthiness. Including human factors will enhance trust models.9

It is important to distinguish between the word “motive,” which usually has a somewhat negative connotation and is generally used in the context of attackers, and “motivation,” which is viewed more positively and applies more appropriately to victims and defenders. There are many reasons why attackers choose to assault people and systems, some of which are shown in figure 1. The motives range from financial fraud to political dominance and, for the most part, include illegal and unethical goals.10 Motivations are more general. Here, players seek to accentuate the positive aspects of their lives through financial gain and/or to reduce the negative aspects by lowering risk or attempting to eliminate irritants.

The underlying assumption is that perpetrators are evil and victims and defenders are good, but that is not always the case, especially when victims collaborate with attackers for personal gain, rather than being forced to cooperate by means of physical threats and the like. For example, there are few, if any, who would risk the well-being of family members to prevent attackers’ stealing funds from their employer.

Victims may be motivated to protect themselves to avoid the negative consequences of successful attacks. But in situations where potential victims think that their chances of being attacked are low or that losses from an attack will be minimal, they may choose not to bother with protecting themselves. Of course, such an optimistic view may prove to be wrong, in which case victims would have been far better off having implemented protective measures. Since anticipation of being attacked is often highly subjective, it is not always reasonable to assume that potential victims will evaluate the risk in an objective manner, which brings up the role of behavioral sciences.

Defenders, namely, those in the business of protecting victims, also have complex motivations. Not all their motivations are necessarily altruistic since, after all, they are in business to make money, although some offer open-source solutions at no charge. This raises the question of whether the ranks of defenders contain some who actually create threats that would not otherwise have existed, or draw attention to threats that have not been discovered by the general population, so that they might enhance their reputations and increase their profits. Whether or not such activities could be categorized as criminal is not always clear-cut. A case in point is that of Kaspersky Laboratories, a Russian-based global purveyor of cybersecurity software. The US government ruled that US agencies could no longer use Kaspersky products and services due to a claim that the organization has links to the Russian government and, therefore, might be revealing classified information that it gathers as part of its regular research into malware to improve its products, or so it says.11

There is also a question of whether Big Tech can be trusted. After all, as stated by photo-forensics expert Hany Farid, “The entire business model of these trillion-dollar companies is attention engineering. It’s poison.”12 This raises the question of whether entities such as Facebook and Google are friends or foes, since their use of personal information for revenue generation might be construed as an attack on their users’ privacy. While their public relations arms present these organizations as intent on preserving and protecting users’ privacy, their actions speak louder than words and often suggest the exploitation of users’ personal information to enhance their bottom lines. Perhaps such organizations are not enemies in the traditional sense, but they are clearly not defenders of users’ personal data or protectors of users from addiction to their services.

It is no longer obvious (if it ever was) who attackers, victims and defenders actually are and what makes them act the way they do. It is safe to say that one cannot generalize about motives and motivations, since the roles of the various players are not always apparent at any point in time. Furthermore, reasons for behavior might change drastically over time, as do technologies and methods of attack. Ransomware was not prevalent until quite recently, but now it is rampant. One might argue that it always existed at some low level, but that the advent of cryptocurrencies, which are anonymous and can be readily transmitted over the Internet, has greatly facilitated this method of attack.

Risk and Consequences for Attackers, Victims and Defenders

“Risk” is defined here as some form of expected loss (or gain), and “risk factors” are the many categories that combine to produce it. This distinction is frequently not made, to the detriment of expositions on risk, and it creates confusion about how risk is calculated. Risk has negative connotations, with losses being the norm; however, the intent of those taking a risk is often to accrue gains and avoid losses.

Attackers are taking the risk that they will be caught and punished. Victims are at risk of their data and identities being stolen, and their credentials being used for fraud and other crimes. Defenders are risking failure that could sully their reputations; losing current and potential customers; and possibly being sued for incompetence, misconduct or negligence.

It is when perceived and actual risk diverge that problems arise. There is the attacker who underestimates the chances of getting caught and punished and exaggerates the expected gain. Victims often think that the chances of their falling prey to attacks are low, so they do not invest sufficiently in protective measures. Defenders tend to overstate their capabilities, to their detriment, when attackers succeed despite investments in defensive measures.
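To make this divergence concrete, the following minimal Python sketch expresses risk as expected loss (probability multiplied by consequence). The probabilities and dollar amounts are purely hypothetical assumptions chosen to illustrate the arithmetic; they are not drawn from the article or from any study.

# Illustrative sketch: risk as expected loss (probability x consequence).
# All numbers are hypothetical assumptions chosen only to show the arithmetic.

def expected_loss(probability: float, consequence: float) -> float:
    """Return the expected loss for a single threat scenario."""
    return probability * consequence

actual_probability = 0.30      # assumed true annual likelihood of a successful attack
perceived_probability = 0.05   # the victim's more optimistic estimate
consequence = 1_000_000        # assumed loss if the attack succeeds (USD)
cost_of_protection = 100_000   # assumed annual cost of protective measures

actual_risk = expected_loss(actual_probability, consequence)        # 300,000
perceived_risk = expected_loss(perceived_probability, consequence)  # 50,000

# Judged against the perceived risk, protection looks like a bad deal;
# judged against the actual risk, it is clearly worthwhile.
print(f"Perceived risk {perceived_risk:,.0f} vs. protection cost {cost_of_protection:,.0f}")
print(f"Actual risk    {actual_risk:,.0f} vs. protection cost {cost_of_protection:,.0f}")

Under these assumed numbers, the same protective spending is rejected by a victim who trusts the perceived probability and accepted by one who uses the actual probability, which is precisely the divergence described above.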

Cooperation, Consensus and Conflict

In today’s world, there seems to be a rapid growth in “frenemies,” a combination of friends and enemies. On the international front, some countries have cooperative (or mutually nonhostile) relationships with countries that are clearly not considered friendly based on traditional standards. This suggests that one might have a spectrum of relationships from fully cooperative to openly hostile between countries, groups and individuals depending on the matters at hand.

Figure 2 shows the scope of such interactions ranging from collaboration to conflict, with compromise and competition as intermediate conditions. When persons or groups collaborate, they fully concur on their decisions, whereas when they compromise, they arrive at a consensus that is not wholly in one camp or the other, but both sides are willing to give up some terms of their agreement to gain others. When moving further along to competition, the two sides see themselves in a contest where there will be winners and losers, but the losers do not necessarily have to suffer any consequences. Adversaries get involved in conflicts where winners usually take all, and losers suffer humiliation and worse.

Typically, players might progress from collaboration through to conflict depending on their willingness to seek solutions to their differences. However, as shown in figure 3, there are opportunities along the way to roll back what might seem to be an inexorable slide into conflict, where players might seek to deescalate confrontation back to a more peaceful state. Sometimes agreement is not reached until after a confrontation has actually taken place, as when treaties are signed by the victors after a war.

This model applies not only to nation-states or factions within a state seeking domination; it can also be used in regular business negotiations. The success of one side or another depends on factors such as the relative size (both organizational and physical) of the negotiators, the competitiveness of the marketplace, the relative importance of any in-place agreements or understandings, and the willingness to compromise. These factors are often discussed in the context of assuring security in the outsourcing of IT services,13 but they are applicable to many other situations.

It is at this point that the boundary between technology and psychology is crossed: suppositions about how players will react under various circumstances can go only so far before it becomes necessary to consider human behavior.

Value Propositions

Given the players (attackers, victims, defenders) and the relationships among them (friendly, hostile, cooperative), a behavioral model founded on game theory can be built if there is a basis for comparing the values accruing to each type of player in terms of gains or losses. Figure 4 shows some assumed values as they might change over time. The graphs are neither precise nor accurate, since they are subjective impressions of how such values relate to one another and change over time. Nevertheless, the mere expression of relative values, as indicated in figure 4, is extremely helpful for imagining the motives, motivations and responses of the various players, especially if separate graphs are developed for each type of relationship.

For example, if the players are all cooperative to some extent and arrive at an optimum, the costs of attacking, of defending and of being a victim are all likely to be lower, so the aggregate net value for all players will be higher than if the relationships are hostile. In the latter case, hostilities produce higher costs for all players because the lower losses and lower gains of an amicable scenario are subverted by the additional costs, both financial and psychological, that conflict generates. Values will likely vary substantially between cooperative and hostile engagements.

While figure 4 illustrates variations in value to players when the relationships among players change, the value curves are essentially static, which is not an accurate depiction of reality. In real-world situations, actual and implied negotiations take place dynamically.14 For example, if victims are willing to increase the budgets of defenders and possibly raise the costs for attackers, who must now break through stronger defenses, it is likely that the net values for all players will fall due to increased direct and indirect costs. Does this suggest that everyone would be better off if protections were reduced, with correspondingly lower direct costs of defense? No, not if the consequent costs of successful attacks exceed the savings from weaker preventive measures, assuming attackers derive similar benefits in both situations.

One should expect to arrive, at least in theory, at some optimal point where total net value for all players combined is maximized. However, as information technologies and cybersecurity tools develop, this hypothetical equilibrium point will be quickly dislodged as attackers see opportunities from newly discovered weaknesses and defenders implement new technologies.
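As a rough numerical illustration of this argument, the following sketch totals the net value of attackers, victims and defenders under one cooperative and one hostile scenario. The payoff numbers are invented assumptions standing in for the subjective curves of figure 4, not values taken from the article.

# Illustrative sketch of aggregate net value under two relationship scenarios.
# All payoffs are hypothetical and stand in for the subjective curves of figure 4.

scenarios = {
    "cooperative": {
        "attacker": 10,   # modest gains obtained with little effort
        "victim": -5,     # small losses
        "defender": 20,   # steady revenue, low remediation cost
    },
    "hostile": {
        "attacker": 40,   # larger gains, but greater effort and exposure
        "victim": -60,    # heavy direct losses plus indirect (reputational) costs
        "defender": 10,   # revenue offset by costly incident response
    },
}

for name, payoffs in scenarios.items():
    total = sum(payoffs.values())
    print(f"{name:>11}: per-player {payoffs} -> aggregate net value {total}")

# Under these assumed numbers, the cooperative scenario maximizes combined value (25 vs. -10),
# even though an individual player (here, the attacker) may prefer hostility.

The point of the sketch is only that an aggregate optimum and the preferences of individual players need not coincide, which is why the hypothetical equilibrium described above is so easily dislodged.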

Behavioral Models

Behavioral economics questions the use in traditional economic theory of the rational person and examines how individuals and groups might respond in various real-life situations.15 There are those who question whether the results of controlled small-scale microeconomic experiments can be extended to large-scale macroeconomic problems.16

Behavioral economics can be used to explain the motives and motivations of attackers, victims and defenders, as suggested previously. The use of behavioral models in the context of cybersecurity risk can lead to a greater understanding of the forces at play when these roles interact. Furthermore, it should be enlightening to run experiments that portray the interactions among the various players, which goes into the domain of game theory.

Game Theory
Game theory is used to help players determine their optimal strategies when engaged in a contest or conflict with other parties. In the application of game theory to cybersecurity risk management, players are assumed to be rational, and the relationship between players is usually taken to be adversarial, if not actively hostile.
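A minimal two-player sketch can make this concrete. The strategies and payoff numbers below are hypothetical assumptions, not a model prescribed by the article: each side picks a strategy (attack or refrain; strong or weak defense), and a rational player chooses the best response to whatever the other side is doing.

# Minimal attacker-vs-defender game with assumed (hypothetical) payoffs.
# Each entry is (attacker_value, defender_value) for a pair of strategies.

payoffs = {
    ("attack",  "strong_defense"): (-10, -30),   # attack fails; defender still pays for controls
    ("attack",  "weak_defense"):   ( 50, -100),  # attack succeeds against a cheap defense
    ("refrain", "strong_defense"): (  0, -20),   # no attack; defender bears control costs
    ("refrain", "weak_defense"):   (  0,  -5),   # no attack; defender spends little
}

attacker_moves = ["attack", "refrain"]
defender_moves = ["strong_defense", "weak_defense"]

def best_response(player: int, opponent_move: str) -> str:
    """Return the rational (payoff-maximizing) move against a fixed opponent move."""
    if player == 0:  # attacker chooses, given the defender's move
        return max(attacker_moves, key=lambda m: payoffs[(m, opponent_move)][0])
    return max(defender_moves, key=lambda m: payoffs[(opponent_move, m)][1])

for d in defender_moves:
    print(f"If the defender plays {d}, the attacker's best response is {best_response(0, d)}")
for a in attacker_moves:
    print(f"If the attacker plays {a}, the defender's best response is {best_response(1, a)}")

Under these assumed payoffs there is no stable pure-strategy outcome: a strong defense deters the attack, a weak defense invites it, and the defender's preferred level of spending flips accordingly. Exposing that kind of dynamic is exactly what game-theoretic analysis is meant to do.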

Behavioral Economics and Game Theory Combined
Behavioral economics and game theory are part and parcel of the same social science approach to economic decision-making: behavior relates to how subjects react to specific circumstances based on their expectations, and games relate to how opposing teams respond to one another so that one team gains at the expense of others.

It should be noted that the view on the rationality of players differs between behavioral economics, where participants are considered to be irrational, and traditional game theory, where players are usually assumed to be rational. The key is to incorporate the irrationality of behavioral economics into the strictly rational players in game theory.17
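One hedged way to picture that combination is to transform the raw payoffs of a game with the prospect theory value function of Kahneman and Tversky (see endnote 15), so that losses loom larger than equivalent gains before best responses are computed. The parameter values below are commonly cited empirical estimates, and using them this way is an illustrative assumption rather than a method taken from the article.

# Sketch: applying the Kahneman-Tversky prospect theory value function to raw payoffs
# before a game-theoretic analysis. The parameters (alpha = beta = 0.88, lambda = 2.25)
# are commonly cited estimates; this particular integration is an illustrative assumption.

def prospect_value(x: float, alpha: float = 0.88, beta: float = 0.88, lam: float = 2.25) -> float:
    """Subjective value of an outcome x: gains are discounted (concave), losses are amplified."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

raw_payoffs = [50, 0, -10, -100]  # hypothetical outcomes, e.g., from the game sketch above
for x in raw_payoffs:
    print(f"raw payoff {x:>6} -> perceived value {prospect_value(x):>8.1f}")

# Because losses are weighted roughly twice as heavily as equivalent gains, best responses
# computed on perceived values can differ from those computed on raw values, which is the
# behavioral correction to strictly rational game theory that the text describes.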

Traditional cybersecurity risk management assumes that one team’s gain is another team’s loss; that is to say, it is a zero-sum game. But the balance over time of losses and gains among participants is usually heavily skewed. By not recognizing the asymmetry, cybersecurity professionals often apply methods and tools that are inadequate for deterring, avoiding or protecting against attacks, since they underestimate the true benefits claimed by attackers. In general, the value realized from attacking so greatly exceeds attackers’ costs that attacks continue to be profitable and grow in scope and effectiveness. Defenders reap lower but still substantial benefits in comparison to attackers. Victims sometimes incur horrendous losses, although they are indemnified in many cases, such as those involving financial institutions. Contests among players almost always favor attackers. Thus, the suggestion to improve security by making attacks costlier for perpetrators may not work, because the costs incurred by defenders and victims could greatly exceed the benefits.

To change this dynamic, researchers need to understand how each category of player will respond to specific conditions and scenarios and create incentives and disincentives accordingly. Taking into account motives, motivations, attackers’ expectations of the likelihood of getting caught and the consequences when they are caught should eventually lead to globally optimal protective mechanisms, or so it is hoped.

Recommendations

The discussion herein provides a basis for understanding what each player or group of players is up against, and that understanding should lead to better decisions, for defenders in particular. All parties stand to benefit from developing strategies for maximizing their own net value, which will, to some degree, come at the expense of other player categories. It is recommended that the various models described previously be used to understand the impact that decisions will have on others within and across specific player categories. Anticipating how players might react should yield substantially better overall decisions than today’s limited us-vs.-them approaches. Furthermore, the effects of control mechanisms can be determined.

Future Research

This discussion only scratches the surface of what is surely a huge opportunity for future investigations. There are other potential players who might interact with the groups presented here. There are other types of relationships besides friends and enemies, especially when considering spies, double agents and the like. And there are many types of interactions other than those of conflict, consensus and cooperation depicted here, some combining the characteristics of two or more interaction types. Researchers might wish to take on the challenge of expanding the frontiers of this research and providing tools that will facilitate the implementation of newfound approaches.

Takeaways

There are some precautions that practitioners should use as standard practice without waiting for behavioral models to be developed:

  • Substantive background checks should be conducted before hiring employees or contractors. More in-depth checks should be done for individuals who will have privileged access to sensitive data and critical systems. Checks of persons with privileged access should be repeated regularly, possibly annually, and whenever there are substantial changes in a person’s role and responsibilities or in the systems and data available to them. Certain changes in a person’s status, habits and practices should also trigger a new background review. If someone is a reformed ex-hacker, then more intense scrutiny of their background and proposed role is necessary, and their activities must be carefully monitored.
  • Identity and access management (IAM) systems must be kept up to date as persons change responsibilities or leave the organization, especially involuntarily. Allowing persons to retain former as well as current access, or delaying or omitting updates to authorizations, leaves an organization vulnerable for as long as the lapse persists (a minimal illustrative review of stale entitlements appears in the sketch after this list).
  • Vendor organizations, especially suppliers of security products and services, business partners, merger and acquisition prospects, managed service providers, cloud service providers, and the like, should be subjected to due diligence reviews based on standard and targeted questionnaires. For critical relationships, personal interviews and site visits should be conducted initially and then repeated at regular intervals, with the frequency determined by the criticality of these enterprises to the mission of the organization or by any material change in the relationship and/or in the financial health of any of the parties. If enterprises are suspected of having questionable ulterior intentions, particularly if they are based in hostile nations, they should be avoided or, if it is decided to do business with them, their activities should be limited and closely monitored. Examples are foreign-based enterprises or organizations whose employees are based in countries with questionable allegiances.
  • Consider the motives and motivations behind various players’ decisions and actions and respond accordingly, accounting for risk factors that need to be mitigated.
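As a concrete illustration of the identity and access management point above, the following minimal sketch reviews access entitlements against current employment status and role. The field names, records and logic are hypothetical assumptions for illustration only; they do not refer to any particular IAM product, and a real review would draw on the organization’s actual IAM and HR systems of record.

# Illustrative sketch: flagging stale or orphaned access entitlements.
# All field names and records are hypothetical.

from datetime import date

employees = {
    "asmith": {"status": "active",     "role": "dba"},
    "bjones": {"status": "terminated", "role": "developer"},
}

entitlements = [
    {"user": "asmith", "resource": "prod_db",     "granted_for_role": "dba"},
    {"user": "asmith", "resource": "payroll_app", "granted_for_role": "hr_analyst"},
    {"user": "bjones", "resource": "source_repo", "granted_for_role": "developer"},
]

def review(entitlements, employees):
    """Yield entitlements that should be revoked or re-certified."""
    for e in entitlements:
        person = employees.get(e["user"])
        if person is None or person["status"] != "active":
            yield ("revoke: user is no longer active", e)
        elif person["role"] != e["granted_for_role"]:
            yield ("re-certify: access was granted for a former role", e)

for finding, entitlement in review(entitlements, employees):
    print(f"{date.today()} {finding}: {entitlement}")

Running such a review on a schedule, and immediately upon any change in status, is one simple way to shorten the window of vulnerability described above.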

Conclusion

It is frequently asserted that current approaches to cybersecurity are inadequate. Therefore, new approaches need to be developed, such as using behavioral economics and game theory. This approach has been generally ignored, but it holds the promise of a better understanding of cybersecurity risk management and better decision-making.

There is no denying that introducing behavioral and psychological considerations is difficult, especially for those with a more technological bent. Nevertheless, it is well worth the effort to try to understand what motivates and concerns attackers in particular, so that appropriate defenses can be implemented. If such considerations are not included in decision processes, then it will be harder to counter attacks, which will surely grow in frequency, intensity, magnitude of financial losses and destructive capability.

Endnotes

1 Shrobe, H.; D. L. Shrier; A. Pentland; New Solutions for Cybersecurity, The MIT Press, USA, 2017
2 Saydjari, O. S.; Engineering Trustworthy Systems: Get Cybersecurity Design Right the First Time, McGraw Hill Education, USA, 2018
3 Axelrod, C. W.; J. Bayuk; D. Schutzer; Enterprise Information Security and Privacy, Artech House, USA, 2009
4 Arief, B.; M. A. Bin Adzmi; “Understanding Cybercrime From Its Stakeholders’ Perspectives: Part 2—Defenders and Victims,” IEEE Security & Privacy, vol. 13, no. 2, March/April 2015, p. 84–88
5 Arief, B.; M. A. Bin Adzmi; T. Gross; “Understanding Cybercrime From Its Stakeholders’ Perspectives: Part 1—Attackers,” IEEE Security & Privacy, vol. 13, no. 1, January/February 2015, p. 71–76
6 Op cit Saydjari
7 Bellovin, S. M.; P. G. Neumann; “The Big Picture: A Systems-Oriented View of Trustworthiness,” Communications of the ACM, vol. 61, no. 11, November 2018, p. 24–26
8 Katchadourian, R.; “Degrees of Freedom: A Scientist’s Work Linking Minds and Machines Helps a Paralyzed Woman Escape Her Body,” The New Yorker, 26 November 2018, p. 56–71
9 Op cit Bellovin
10 Altshuler, Y.; N. Aharony; Y. Elovici; A. Pentland; M. Cebrian; “Stealing Reality: When Criminals Become Data Scientists,” in H. Shrobe; D. L. Shrier; A. Pentland; New Solutions for Cybersecurity, The MIT Press, USA, 2017, p. 267–290
11 Boyd, A.; “U.S. Finalizes Rule Banning Kaspersky Products From Government Contracts,” Nextgov, 9 September 2019, https://www.nextgov.com/cybersecurity/2019/09/us-finalizes-rule-banning-kaspersky-products-government-contracts/159742/
12 Rothman, J.; “Afterimage: Now That Everything Can Be Faked, How Will We Know What’s Real?” The New Yorker, 12 November 2018, p. 34–44
13 Axelrod, C. W.; Outsourcing Information Security, Artech House, USA, 2004
14 Axelrod, C. W.; “The Dynamics of Privacy Risk,” ISACA Journal, vol. 1, 2007
15 Kahneman, D.; A. Tversky; “Prospect Theory: An Analysis of Decision Under Risk,” Econometrica, vol. 47, no. 2, March 1979, p. 263–292
16 McKenzie, R. B.; Predictably Rational? In Search of Defenses for Rational Behavior in Economics, Springer-Verlag, Germany, 2010
17 Thaler, R. H.; Misbehaving: The Making of Behavioral Economics, W. W. Norton & Company, USA, 2015

C. Warren Axelrod, Ph.D., CISM, CISSP
Is the research director for financial services with the US Cyber Consequences Unit. Previously, he was the business information security officer and chief privacy officer for U.S. Trust. He was a cofounder and board member of the Financial Services Information Sharing and Analysis Center (FS-ISAC) and represented the banking sector’s cybersecurity interests in Washington DC during the Y2K date rollover. He testified before the US Congress on cybersecurity in 2001. Axelrod received ISACA’s Michael P. Cangemi Best Book/Article Award in 2009 for his ISACA Journal article “Accounting for Value and Uncertainty in Security Metrics.” He was honored in 2007 with the Information Security Executive Luminary Leadership Award and received a Computerworld Premier 100 award in 2003. Warren’s books include Engineering Safe and Secure Software Systems and Outsourcing Information Security, and he was the coordinating editor of Enterprise Information Security and Privacy. He has published more than 140 professional articles and chapters in books and has delivered more than 150 professional presentations. His current research activities include the behavioral aspects of cybersecurity risk management and the security and safety of cyberphysical systems, particularly as they relate to autonomous road vehicles.