Security Inventors Need Not Apply: Reducing Design Risk

Author: David Cross, CISSP, GCIH, GPEN, GWAPT, ISTQB
Date Published: 9 July 2021

Part of what makes a job in cybersecurity appealing is that you get to invent, to build the “killer application (app)” or to make a new breakthrough. That is the appeal for many brilliant, creative types, but the reality is that the cybersecurity realm does not allow much leeway for creativity. Developers are encouraged to think outside the box, while cybersecurity practitioners are encouraged to do the opposite. The cybersecurity and audit industries dislike innovation when it comes to managing and avoiding risk. That is a very good thing.

The seasoned cybersecurity architect approaches their job with knowledge of many standard cybersecurity technologies and relatively few new ideas. When it comes to an app holding sensitive data, they will absolutely avoid reinventing the wheel. A forward-thinking architect making decisions based on data may decide to try artificial intelligence (AI) or find a new way to correlate information, but one will never find a good cybersecurity architect inventing a new technology for production use such as a crypto algorithm, or coding a replacement for Transport Layer Security (TLS) or Security Assertion Markup Language (SAML). Instead, they will implement a tired, boring standard. Why would a talented engineer pass up the opportunity to create the ultimate new solution? Because it is too dangerous.

Like any intelligent person, you probably think that this reserved approach leads to a lack of progress and innovation. And you would be right—amazing technology is left by the wayside all the time, crushed and destroyed like your dreams of becoming an Olympic sprinter. Take this red pill and follow me down the rabbit hole of The Matrix1 into a world where it makes sense to let go of dreams and follow security standards for these reasons:

  1. To protect yourself from being sued by the US federal government or other parties—Federal agencies and other entities can and will challenge advertising claims from time to time. An organization must be certain that its products do what it claims they do, or it will be exposed to significant risk. By sticking to technologies referenced in official documents, and the specific definitions stated within such documents (e.g., the meaning of the word “encryption” according to the US National Institute of Standards and Technology [NIST]), one can avoid punitive measures and stand on solid legal ground.
  2. Because they are used by other organizations—This violates everything your parents warned you about when they said, “Just because your friends do it does not mean that it is a good idea.” Strangely, this rule flips on its head and makes complete sense in the world of cybersecurity. If you are using the same security design or method as, for example, Google or Microsoft, you will be able to defend your work somewhat based on precedent.
  3. Because they have been tested or at least accepted—The password-hashing algorithm bcrypt, which has been a standard for many years, went through a recent code review in which a vulnerability was discovered that significantly impaired its effectiveness. Even with the extensive validation and testing that had been performed, an issue arose with the widely accepted algorithm. Even so, common use substitutes for testing to a degree, in that other people are using it, too. If you used bcrypt and something was off about it, then at least it was not your algorithm that was to blame—it was the standard (see the sketch after this list). Trying to prove to a judge that your own algorithm is thoroughly tested and equivalent to, or better than, the standard would be a difficult challenge, even if the standard was not perfect.
  4. Because NIST has a document for everything—Standard-setting bodies such as NIST have so many detailed documents describing the “how” and “why” of things that sometimes even their documents have documents. It is their job to set standards, recommend technologies and give guidance. The US government strictly adheres to a suite of standards spread across many guidelines and will not stray from those recommendations. It is safe to follow that example and benefit from the wisdom of these standard-setting bodies and their documents.
  5. Because your name is not prefixed by “Doctor”—If you do have a doctorate, then maybe you can home-brew some new piece of technology or cybersecurity method, but you can bet that it will not be accepted into the dogma of cybersecurity until a few more people with your title give it their blessing. Even then, if something goes wrong and data are stolen as a result, it is your good name and title on the line.
  6. Because solutions from the web such as open source have licenses that can place all the risk back on you—Code from the web, including sites such as Stack Exchange, may have been tested on nonstandard data and can have bugs or flaws that can put you at risk, even if the author claims it adheres to all standards. The licenses that come with open source tend to solidify that risk by including statements such as, “Without warranty of any kind.” If you use open source, choose reputable libraries, especially ones considered to be a standard.
  7. Because other people are smarter—Well, that is probably not true if you are reading this, but it is the assumption of the US government and standard-setting groups. Because of reasons 1, 2, 4, and possibly 5, it is probably going to end badly for your organization if you stray from the standards. A genius may be able to build the best crypto algorithm in the history of the world but, until that algorithm wins itself a spot at the top of the crypto comparison charts à la NIST, it is not going to be legally accepted as encryption.
  8. …Unless you work for a tech giant, in which case, none of these reasons apply to you—with the notable exception of crypto—Because billions of US dollars and giant chunks of the Internet depend on what your tech giant builds, it is generally accepted that whatever new protocol you put out will become a standard. While it is true that large enterprises generally will not invest significant resources without doing their due diligence, we tend to give them the benefit of the doubt. Google’s SPDY protocol is an example: a big-company design that made an impact (it went on to shape HTTP/2) yet still had room for improvement. But even the tech giants shy away from designing new encryption for use in production systems.
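
To make reason 3 concrete, here is a minimal sketch of the “tired, boring standard” approach: storing passwords with a widely reviewed bcrypt library rather than a home-brewed hash. It assumes Python and the third-party bcrypt package; the function names and sample strings are illustrative and are not drawn from this article.

    import bcrypt  # widely used, widely reviewed implementation of the bcrypt algorithm

    def hash_password(password: str) -> bytes:
        # Generate a per-password random salt and hash with the library defaults.
        return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

    def verify_password(password: str, stored_hash: bytes) -> bool:
        # Let the library do the comparison instead of writing your own check.
        return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

    stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", stored)
    assert not verify_password("wrong guess", stored)

If a flaw is later found in bcrypt itself, you are in the same position as everyone else who followed the standard, which is exactly the defensible ground described above.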

Finding comfort in standards feels natural to a cybersecurity architect, but not always to developers. The quicker you can get yourself and your team on board with not making your own crypto, hash algorithms, signing and authentication methodologies, and other goodies, the sooner you can reduce your risk. Did you miss out on a world-changing invention? Possibly. Was it worth missing out to keep your risk profile low? Absolutely.
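
As a further sketch of what leaving signing and authentication to standard primitives can look like, the snippet below signs and verifies a token with HMAC-SHA-256 from Python’s standard library rather than an invented scheme. The key handling and token format here are assumptions for illustration, not recommendations taken from this article.

    import hashlib
    import hmac
    import secrets

    # In practice the key would come from a secrets manager, not be generated in code.
    SECRET_KEY = secrets.token_bytes(32)

    def sign(message: bytes) -> str:
        # Standard HMAC-SHA-256 tag for the message.
        return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

    def verify(message: bytes, tag: str) -> bool:
        # Constant-time comparison, as the standard library recommends for authenticators.
        return hmac.compare_digest(sign(message), tag)

    token = b"user=42;role=reader"
    tag = sign(token)
    assert verify(token, tag)
    assert not verify(b"user=42;role=admin", tag)

Nothing novel was designed here, and that is the point.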

Embracing cybersecurity standards and letting go of your dream of saving the world with superhero-like new technology is the mature thing to do. Stick with traditional cybersecurity technologies and standards that have been rigorously tested, are ubiquitous enough to have patches made for them, and are technically and legally rock-solid. In the future, when you are tempted to go along with a smart person's invention, embrace your inner skeptic, consider the reasoning herein and be confident in your role as Destroyer of (Inventors’) Dreams, all for the good of your data.

Endnotes

1 Lana Wachowski and Lilly Wachowski, dir., The Matrix, Burbank, California, USA, Warner Brothers, 1999

David Cross, CISSP, GCIH, GPEN, GWAPT, ISTQB

Is a hacker and principal security architect for Henry Schein One. He crossed over into the security sector from development, where he created applications including neural network-based cost prediction systems, document recognition and emergency medical management systems. He enjoys speaking at security conferences and serving the community. He is on the boards of UtahSec and the Cybersecurity Collaboration Forum, and has also served as president of InfraGard. When he is not securing systems, shoring up policy or automating security controls, he is writing voice-automated artificial intelligence (AI)-based hacking software, contributing to hacking tools, speculating about the future or evangelizing for AI use in security.