Operational Security: A Business Imperative

Author: Jon Brandt, Director, Professional Practices and Innovation, ISACA
Date Published: 1 November 2021

The US military developed the operational security (OPSEC) methodology during the Vietnam War.1 Military service afforded me unique opportunities and insights gained through assignments that spanned the intelligence community and multiple areas of warfare. Many facets of military service involve both the public and private sectors.

So, what exactly is operational security? Definitions vary, but the majority of them include the terms "adversary," "enemy" or "hostile forces." A 2019 CSO Magazine article defined OPSEC as a "process by which organizations assess and protect public data about themselves that could, if properly analyzed and grouped with other data by a clever adversary, reveal a bigger picture that ought to stay hidden."2 While this definition is certainly softer than some governmental variants, I feel it is still too narrow.

Of note, the first US military OPSEC definition was born out of Operation PURPLE DRAGON,3 authorized by the US Joint Chiefs of Staff. According to historical documents, OPSEC was "the ability to keep knowledge of our strengths and weaknesses away from hostile forces."4 For me, this is a better starting point for defining OPSEC. Ironically, the US government often violates this principle, as committee hearings and reports thrust the US Congress, the media and several high-ranking public officials into the limelight while openly discussing US weaknesses, breaches of security protocols and other sensitive details, undermining programs meant to protect the nation.

In my opinion, a better working definition for operational security is "a security mindset anchored in risk management whereby organizations continuously assess and protect data about themselves which could be exploited by bad actors." My proposed definition accounts for security being "a state of mind that requires iterative work to continually evaluate surroundings and make decisions that minimize harm."5 In this manner, security is grounded in risk management. Further, explicitly specifying the interval as continuous rather than periodic, regular or even routine underscores the importance of consciously considering the security implications of every business activity. Lastly, I intentionally avoid the terms "adversary," "enemy" and "hostile forces," which are less common in the private sector. Instead, I believe "bad actor" to be a more appropriate term because it is common in the lexicon of information security professionals and it accounts for insider threats.

Extending OPSEC to the private sector is not a novel idea, but the concept has yet to achieve widespread adoption. I assert that many enterprises remain so laser-focused on protecting products, data or services that they overlook business operations and the inherent risk that accompanies them. Undoubtedly, disjointed physical security and information security programs result in a vulnerability ripe for exploitation. For example, I recently noted a disturbing number of LinkedIn posts in which new hires shared photos of their workplace identification badges. These employees were rightfully excited about landing new jobs and were most likely oblivious to the risk of their seemingly innocent action. But where is the security education?

As cybersecurity technologies advance and security preparedness increases, bad actors are apt to go back to the basics and work on soft targets, usually people. It is mind-boggling that multifactor authentication (MFA) has not been universally adopted across the globe. While nothing is ironclad, MFA dramatically increases the difficulty for bad actors to do bad things. Considering that IT has long used standard email nomenclature (e.g., first initial last name at organization dot com), bad actors already possess half of the standard authentication equation, which makes MFA all the more important. Interestingly, the global chief information security officer (CISO) of Teleperformance recently discussed exactly that,6 noting that bad actors are returning to tactics of the past (e.g., using malicious telephone calls to prey on unsuspecting employees).
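
To make that concrete, consider the minimal Python sketch below. It is a hypothetical illustration (the names, domain and naming convention are invented, not drawn from any real organization or tool) of how easily a bad actor can derive candidate usernames from publicly posted employee names once the email convention is known, leaving a single password as the only barrier wherever MFA is absent.

    # Hypothetical illustration: deriving likely corporate email addresses
    # (i.e., usernames) from publicly available employee names.
    # The names and domain below are invented for demonstration only.
    public_names = ["Alice Nguyen", "Robert Smith", "Priya Patel"]  # e.g., gleaned from social media
    domain = "example.com"  # assumed corporate domain

    def candidate_email(full_name: str, domain: str) -> str:
        """Apply the common first-initial + last-name convention."""
        parts = full_name.lower().split()
        return f"{parts[0][0]}{parts[-1]}@{domain}"

    for name in public_names:
        print(candidate_email(name, domain))
    # Prints anguyen@..., rsmith@..., ppatel@... With the username half of the
    # login guessed this easily, only a password protects each account, which
    # is precisely why MFA matters.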

As swaths of the public and private sectors return to physical office spaces, the likelihood of physical security breaches increases. Access control systems may fall under the purview of a physical security officer, but in the absence of that role, the responsibility may default to IT. The latter is problematic if physical security lacks appropriate oversight. Two physical security concerns with potential information security ramifications are piggybacking and tailgating. Both generally refer to 1 or more persons following another person who has been authorized to access a restricted or secure area. The difference between the 2 boils down to consent: piggybacking implies that an authorized person has knowingly allowed someone in using their authorization, whereas tailgating refers to 1 or more persons following the authorized person without their permission.7 While some argue that such concerns do not matter unless the unauthorized access occurs in a sensitive processing space (e.g., server room, vault), the reality is that both tactics invalidate a layer of defense (not to mention eliminate a source of correlational data in the case of an investigation).
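
That loss of correlational data is worth illustrating. The short Python sketch below is hypothetical (the badge and login records, field names and time window are invented, not drawn from any particular access control or monitoring product); it cross-references badge-in events with workstation logins and flags any login that lacks a matching badge-in, which is exactly the kind of corroborating signal a tailgater removes from the record.

    # Hypothetical illustration: why tailgating erodes correlational data.
    # All records below are invented sample data.
    from datetime import datetime, timedelta

    badge_events = {  # user -> time of most recent badge-in at the facility door
        "alice": datetime(2021, 11, 1, 8, 2),
        "bob": datetime(2021, 11, 1, 8, 5),
    }

    login_events = [  # (user, workstation login time)
        ("alice", datetime(2021, 11, 1, 8, 10)),
        ("carol", datetime(2021, 11, 1, 8, 12)),  # no badge-in on record
    ]

    MAX_GAP = timedelta(hours=12)  # assumed window linking a badge-in to a login

    for user, login_time in login_events:
        badged_in = badge_events.get(user)
        if badged_in is None or not (timedelta(0) <= login_time - badged_in <= MAX_GAP):
            # Either the person followed someone through the door or the badge
            # system missed them; either way, an investigator has lost a
            # corroborating data point.
            print(f"Login by {user} at {login_time} has no matching badge-in")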

Additionally, do enterprise data security policies apply to physical documents and media? This speaks to the importance of the nuances between information security and cybersecurity. As the return to physical workspaces continues, perhaps we will see the return of dumpster diving.

Supply chain vulnerabilities receive a great deal of attention nowadays, often in the context of components, software or services, but acquisitions and deliveries are additional threats to consider. Richard "Dick" Marcinko is a famed US Navy SEAL who, in his autobiography Rogue Warrior,8 wrote about early red teaming. Unconventional thinking is required when assessing risk. It has been many years since I read his book, but comparing his memoir to modern red teaming (which is often constrained by conservative rules of engagement), it is no wonder bad actors often succeed.

My first information security management position was onboard a US Navy aircraft carrier, and it was there that I saw firsthand how employees can put an organization in harm’s way. For the military, OPSEC failures can result in mission failure or worse; military units have long had rules governing the disclosure of unit movement and methods to incrementally curtail nonessential communication under various emergency scenarios. But as Internet access and email became a normal part of military life, they introduced challenges in terms of who was allowed to do what, and when. Can you imagine senior management communicating a policy stating that, under certain circumstances, only director-level employees and above, and select other key persons, were allowed to use their web browsers? For sailors, port visits offer an opportunity to unwind, and some book hotel rooms. Herein lies the problem: employee web traffic would then telegraph unit movement. In other words, technical controls (namely web filtering) were not aligned with organizational rules governing when, and to whom, port visits could be made public. Web logs showing traffic from those geographic locations to the unit website allowed the movement to be correlated externally. While this is an extreme example of locational data, it highlights how employee actions can send unwanted signatures. Each enterprise must evaluate its own risk.
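
The Python sketch below is a simplified, hypothetical reconstruction of that kind of correlation (the log records, locations and threshold are invented, and it assumes each request to the public website already carries a resolved source location); counting requests per location per day is enough to make an unannounced concentration of traffic from a single port city stand out.

    # Hypothetical illustration: aggregate web traffic telegraphing location.
    # Each record is (date, resolved source location) for hits on a public
    # website; the data and alert threshold are invented.
    from collections import Counter

    log_records = [
        ("2021-10-01", "Norfolk, VA"),
        ("2021-10-15", "Naples, Italy"),
        ("2021-10-15", "Naples, Italy"),
        ("2021-10-15", "Naples, Italy"),
        ("2021-10-16", "Naples, Italy"),
    ]

    THRESHOLD = 3  # assumed: this many hits from one location in a day is a signature

    hits = Counter(log_records)
    for (day, location), count in hits.items():
        if count >= THRESHOLD:
            print(f"{count} hits from {location} on {day}: possible presence disclosed")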

Speaking of web data, are you putting out too much as an individual or as an enterprise? I recall a recent LinkedIn thread that debated whether non-work-related topics should be discussed on the platform. I have noticed a marked increase in personal pictures, vacations and the like since first joining the platform and, while there may not be a universally correct answer, I advocate against using LinkedIn for anything but professional purposes. First, it is marketed as a professional network. Second, personal details provide nuggets of information useful for social engineering, especially given the resurgence of malicious phone calls mentioned earlier.

Conclusion

Cyber may be a business enabler, but it has also become a powerful weapon, as proven by the past year's increased attacks on critical infrastructure.9 Collectively, we must make it more expensive for bad actors to accomplish their goals. That is difficult when digital landscapes can change with every piece of data written. Nothing about security is easy. If it were, the combined IT and cybersecurity community would not be strapped for talent. Cybersecurity is a technical risk, but the lack of commonality in enterprise roles and responsibilities requires collaboration throughout an organization, especially one devoid of formal enterprise risk management. With this approach, it matters not who is responsible for cybersecurity, data, information and physical security, but rather that they are communicating with one another to best minimize blind spots.

Endnotes

1 Rosencrance, L.; "OPSEC (Operations Security)," TechTarget
2 Fruhlinger, J.; "What Is OPSEC? How Operations Security Protects Critical Information," CSO Magazine, 8 May 2019
3 US National Security Agency, PURPLE DRAGON: The Origin and Development of the United States OPSEC Program, USA, 1993
4 Ibid.
5 ISACA®, Cybersecurity Fundamentals Study Guide, 3rd Edition, USA, 2021
6 DrZeroTrust with Jeff Schilling, "Chat With a CISO of 'the Largest Company Nobody Has Heard of,'" USA, 28 September 2021
7 Newton Security Innovation, FAQ
8 Marcinko, R.; Rogue Warrior, Pocket Books, USA, 1993
9 It is noteworthy that there is disagreement on what is considered critical infrastructure. In the United States, critical infrastructure is outlined in Presidential Policy Directive 21 (PPD-21), and the definition is quite broad.

Jonathan Brandt, CISM, CDPSE, CCISO, CISSP, CPI, CySA+, PMP

Is a senior information security practice manager in ISACA’s Knowledge and Research department. In this role, he contributes thought leadership by generating ideas and deliverables relevant to ISACA’s constituents. He serves as a subject matter expert on information security projects and leads author management teams whenever external resources are necessary. Brandt is a highly accomplished US Navy veteran with more than 25 years of experience spanning multidisciplinary security, cyberoperations and technical workforce development. Prior to joining ISACA®, Brandt was a project manager for classified critical infrastructure projects across the globe.