The Bleeding Edge: Why the Bleeding Edge Is So Bloody

Author: Dustin Brewer, Senior Director, Emerging Technology and Innovation, ISACA
Date Published: 28 August 2020

Those of us who have been in this industry for a while have seen amazing accomplishments and growth within the technology sector. We have also seen security breaches happen, usually paired with quick knee-jerk reactions and a “slap-a-Band-Aid-on-it” mentality as a response. We read about a data breach caused by unpatched software, review our own patching policies, amend them if needed, move on to the next issue, and then repeat the process. We constantly teeter between secure computing and just plain old computing, depending on whether the security part is convenient or within budget.

The Spine

To me, one of the most fascinating vulnerabilities out there is the buffer overflow. The attack takes advantage of software that fails to check the length of its input, overwriting adjacent memory and ultimately making the processor run code that was never intended to be there in the first place. There is an elegance to this epic ballet of hardware and software variables that have to be in perfect alignment for these amazingly intricate exploits to work. I was curious about the first time this exploit was documented and found a white paper titled Computer Security Technology Planning Study, Volume II,1 published in 1972.
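To make the mechanics concrete, here is a minimal sketch in C of the classic vulnerable pattern: a fixed-size stack buffer filled by an unchecked copy. The program is hypothetical and deliberately simplified, but the missing bounds check is the same flaw the 1972 paper warned about.

```c
#include <stdio.h>
#include <string.h>

/*
 * Classic stack buffer overflow pattern (hypothetical example).
 * strcpy() performs no bounds checking, so any input longer than
 * 16 bytes overwrites adjacent stack memory -- potentially the
 * saved return address, which is exactly what exploit shellcode
 * targets.
 */
static void greet(const char *name)
{
    char buffer[16];          /* fixed-size stack buffer */
    strcpy(buffer, name);     /* UNSAFE: no length check */
    printf("Hello, %s\n", buffer);
}

int main(int argc, char *argv[])
{
    if (argc > 1)
        greet(argv[1]);       /* the attacker controls the input length */
    return 0;
}
```

Modern defenses such as stack canaries, address space layout randomization and non-executable stacks raise the bar, but the root cause is removed only by checking lengths, for example with a bounded copy such as snprintf.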

The 1972 paper not only theorized about the possibility of buffer overflow attacks, but also addressed access controls, data integrity and physical device security. Many of these original ideas are still used in security today, and it is a fascinating read, mostly because the ideas still ring true 48 years later. For example, the most recently published buffer overflow exploit was identified in January of this year (as of this writing).2

The Blade

Since 1972, we have advanced by leaps and bounds in processing speed, communications, storage capacity and many other aspects of computing, but it was a bit surprising to learn that we have known about the possibility of this exploit for almost 50 years and still have the same problem. Buffer overflows are, of course, just one example; however, we find that the same types of attacks (e.g., social engineering, unauthorized hardware, programming flaws) are used over and over again to gain unauthorized access. Trends in attack vectors change to match what is working at the time.

According to ISACA’s State of Cybersecurity 2019, Part 2: Current Trends in Attacks, Awareness and Governance, the top three attack types are phishing, malware and social engineering.3 The first known mention of a phishing attack can be traced back to 1996.4 It can be argued that malware has been around since software was introduced, but the first rumblings of the concept date to the 1970s.5 Pinpointing the first social engineering attack is nearly impossible: for as long as humans have been able to communicate, there have been those who clandestinely tried to get more information from someone than that individual wanted to give. The same attacks that have worked for decades are still working. What are we missing?

The Tip

One of the best predictors of future human behavior is past behavior, and technology trends seem to have no immunity to this adage. Current trends show more data breaches and cyberattacks every year, along with a growing technology skills gap. As consumers, we want our technology not only to work flawlessly but also to innovate with every release, all while expecting it to remain secure. However, we are building all of this on a foundation that is neither secure nor necessarily stable.

In order to secure the future, we need to better secure the present, which means understanding how the technology of our past works. Do you have to know how to program in x86 assembly off the top of your head? Absolutely not. But if you understand the basics and know what the payloads and shellcode for certain attacks look like, it may help with incident response and future mitigation. Do you have to memorize all of the flag combinations of an IPv4 packet? Again, no. But knowing how to use a protocol analyzer and read packet traffic to find patterns and possible anomalies may give you a deeper understanding of not only what an attacker was doing but also their tactics, techniques and procedures. This all depends on your current job role and responsibilities, but if you are in cybersecurity, policies, frameworks and standards are only half the battle. Attackers have these technical skills and know systems’ vulnerabilities, or at least how to test for them. As a bonus, a good foundational knowledge of information technology helps when learning about emerging tech. As cybersecurity practitioners, we have to be jacks-of-all-trades and masters of some.
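As an illustration of the kind of packet-level literacy described above, here is a minimal sketch in C that decodes the version, header length, flags and fragment offset from a raw IPv4 header. The 20 bytes are a fabricated sample header for illustration only (the checksum is left at zero); in practice, the bytes would come from a protocol analyzer or a raw socket capture.

```c
#include <stdio.h>
#include <stdint.h>

/* Decode basic fields from a raw IPv4 header.  The sample bytes
 * below are fabricated for illustration; a real capture would be
 * supplied by a protocol analyzer or raw socket. */
int main(void)
{
    const uint8_t pkt[20] = {
        0x45, 0x00, 0x00, 0x3c,  /* version/IHL, DSCP, total length   */
        0x1c, 0x46, 0x40, 0x00,  /* identification, flags/frag offset */
        0x40, 0x06, 0x00, 0x00,  /* TTL, protocol (6 = TCP), checksum */
        0xc0, 0xa8, 0x00, 0x01,  /* source      192.168.0.1           */
        0xc0, 0xa8, 0x00, 0x02   /* destination 192.168.0.2           */
    };

    unsigned version  = pkt[0] >> 4;
    unsigned hdr_len  = (pkt[0] & 0x0Fu) * 4;  /* IHL is in 32-bit words   */
    unsigned flags    = pkt[6] >> 5;           /* reserved, DF and MF bits */
    unsigned frag_off = (((unsigned)(pkt[6] & 0x1Fu) << 8) | pkt[7]) * 8;

    printf("IPv%u, header %u bytes, TTL %u, protocol %u\n",
           version, hdr_len, pkt[8], pkt[9]);
    printf("flags: DF=%u MF=%u, fragment offset %u bytes\n",
           (flags >> 1) & 1u, flags & 1u, frag_off);
    printf("%u.%u.%u.%u -> %u.%u.%u.%u\n",
           pkt[12], pkt[13], pkt[14], pkt[15],
           pkt[16], pkt[17], pkt[18], pkt[19]);
    return 0;
}
```

For this sample header, the program reports an IPv4 packet with a 20-byte header, the don’t-fragment (DF) bit set and no fragmentation, which is exactly the sort of baseline reading that makes anomalies, such as odd flag combinations or overlapping fragments, stand out in real traffic.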


The Bleeding Edge

Insanity is popularly defined as doing the same thing over and over again and expecting different results. While not a clinical definition, the expression may be onto something when it comes to evolving our practices, both as IT professionals and as human beings. If something is not working, it is time to try something else, without forgetting our principles or where we are heading.

The bleeding edge of technology is still deeply intertwined with the technology of the past. The longer we use the blade of technology to solve problems, the deeper into the artery we cut, and, at this point, it is not just the edge that is bloody but the whole knife. As the incision has deepened, we have only applied patches and sutures and made minor course adjustments to compensate for the mistakes of prior innovation. To compound the problem, we have forgotten (or never properly learned) the technologies we are currently building on, leaving the proverbial needle of a vulnerability in the haystack for someone to find, study, exploit, and use to grab any and all data they can.

With the exception of quantum computing, the majority of computing is still done using the same processor architecture we were using in the 1970s. And while many practitioners are excited about the prospect of a totally new computing capability and architecture with quantum, can we be worthy of such a gift in our future without first mastering the technology of the past and meeting the demands of the present?

Endnotes

1 Anderson, J. P.; Computer Security Technology Planning Study, Volume II, USA, October 1972, http://seclab.cs.ucdavis.edu/projects/history/papers/ande72.pdf
2 Exploit Database, Torrent 3GP Converter 1.51 - Stack Overflow (SEH), https://www.exploit-db.com/exploits/47965
3 ISACA®, State of Cybersecurity 2019, Part 2: Current Trends in Attacks, Awareness and Governance, USA, 2020, https://store.isaca.org/s/store#/store/browse/detail/a2S4w000004KoFkEAK
4 Phishing.org, History of Phishing, https://www.phishing.org/history-of-phishing
5 Love, J.; “A Brief History of Malware—Its Evolution and Impact,” Lastline, 5 April 2018, https://www.lastline.com/blog/history-of-malware-its-evolution-and-impact/

Dustin Brewer, CISM, CSX-P, CDPSE, CEH

Is ISACA’s principal futurist, a role in which he explores and produces content for the ISACA® community on the utilization and benefits of emerging technologies and the possible threats they pose to current infrastructure. He has 17 years of experience in the IT field, beginning with networks, programming and hardware specialization. He excelled in cybersecurity while serving in the US military. Later, as an independent contractor and lead developer for defense contract agencies, he specialized in computer networking security, penetration testing, and training for various US Department of Defense (DoD) and commercial entities. Brewer can be reached at futures@isaca.org.