How Social Engineering Bypasses Technical Controls

Author: Allen Ari Dziwa, CISA, CCSP, CEH, CISSP
Date Published: 1 September 2022

It is disconcerting to security professionals that robust enterprise networks can be rendered impotent by simple social engineering maneuvers that allow threat actors to bypass technical controls. Even experienced security professionals have fallen victim to social engineering manipulation because they failed to distinguish deception from normal human interactions based on trust, kindness and expected social norms.

Adversaries can easily exploit human trust to violate confidentiality. For example, consider a person who calls a previous physician’s office to request medical records and speaks with a receptionist, who asks for the person’s name and date of birth as a means of validating their identity. The receptionist forwards the request to the physician for approval and confirms that a printout of the medical records will be available for collection the following day. Because of the COVID-19 protocols in place at the time, the person wears a mask and, on arrival at the clinic, calls the receptionist to announce their arrival. One of the nurses emerges from the clinic and, without confirming the person’s identity or asking them to remove the mask, hands over the medical records. The problem with this scenario is that a friend, an ex-spouse or a complete stranger could have pieced together enough information from various sources to impersonate the person who requested the records. Had the clinic handed the records to a threat actor, its failure to authenticate the recipient’s identity would have violated the US Health Insurance Portability and Accountability Act (HIPAA).

Because the United States lacks strong omnibus privacy laws, a great deal of personal information about US residents is available online, which makes it easier for threat actors to piece together disparate data during reconnaissance. Various websites publish information such as phone numbers, residential addresses, workplaces and other details without the consent of the people involved. When that widespread availability of information is compounded by some people’s compulsion to overshare on social media, social engineers can have a field day. These circumstances allow criminals and nation-state sponsored threat actors to target influential people in business and politics from thousands of miles away. Security professionals in both the public and private sectors are well known to nation-state threat actors because their information is readily available. This is a serious vulnerability.

Social engineering can be defined as the act of using manipulation and deception to obtain access to confidential information.1 It has also been described as one of the most inventive methods of gaining unauthorized access to information systems and obtaining sensitive information.2 The key themes are deception and inventiveness (creativity). Successful social engineering schemes are well orchestrated, with the aim of establishing inviolable trust between the attacker and the target without generating any suspicion during execution. Given the absence of a comprehensive federal privacy law in the United States and the pervasive, unwarranted disclosure of personal information on multiple websites, security professionals need to understand that their real battle is with social engineering, because defeating technical controls is only a small part of a successful exploitation. Without serious privacy law reform and enhanced security awareness in organizations, threat actors, both nation-state sponsored and criminal, will continue to exploit the current loopholes to bypass the expensive technical controls in place.

How Technology Facilitates Deception

Many people are aware of the phishing emails and vishing attempts that have become prevalent in the last decade. However, even for the most careful individuals, there is still a substantial risk of falling for a newer social engineering technique: Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) phishing.3 A CAPTCHA is intended to prevent bots from making automated remote entries so that only a person with the correct username and password can access an account.4 However, even complex CAPTCHAs can be subverted through social engineering.

A CAPTCHA phishing attack involves deploying a CAPTCHA phishing interface on a webpage (CAPTCHA carrier) and selecting high-traffic websites on which to publish the phishing messages.5 When unsuspecting users try to access a website they normally log on to, they see a CAPTCHA. When they attempt to select an image to validate that they are human, the process fails. They are eventually presented with a fake sign-in screen prompting them to enter their real credentials, which they unwittingly surrender to a third party. Such a social engineering trap might go unnoticed, even by security-conscious users.
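The defense here is less about the CAPTCHA itself than about verifying where credentials are being sent. As a minimal illustrative sketch (not drawn from the cited research), the following Python snippet shows the kind of exact-match allowlist check a browser extension or internal tooling could apply before a credential form is submitted; the domain names and the is_trusted_login_page helper are hypothetical.

# Minimal sketch: an exact-match allowlist of sign-in hostnames consulted
# before credentials are entered. The hosts below are hypothetical examples.
from urllib.parse import urlparse

TRUSTED_LOGIN_HOSTS = {
    "login.example-bank.com",   # hypothetical legitimate sign-in host
    "sso.example-corp.com",     # hypothetical corporate single sign-on host
}

def is_trusted_login_page(url: str) -> bool:
    """Return True only if the page's hostname exactly matches an allowlisted host."""
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_LOGIN_HOSTS

print(is_trusted_login_page("https://login.example-bank.com/account"))         # True
print(is_trusted_login_page("https://login.example-bank.com.verify-id.net/"))  # False

Exact hostname matching is the point: a naive substring check would accept the look-alike domain in the second example, whereas the allowlist rejects it.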

Another social engineering technique that has gained popularity is phishing that spoofs email senders and delivers realistic-looking messages on work-related topics, such as changes in benefits or gift card rewards for a job well done. Employees may click the embedded links because the emails look completely legitimate, but the result is that unauthorized outsiders gain access to the enterprise network.
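Many of these spoofed messages fail sender-authentication checks even when they read convincingly. As a minimal sketch (assuming the receiving mail server stamps an Authentication-Results header per RFC 8601), a triage script could flag messages whose SPF, DKIM or DMARC results indicate failure; the sample message and the looks_spoofed helper below are fabricated for illustration.

# Minimal sketch: flag a message as likely spoofed when the receiving server's
# Authentication-Results header records an SPF, DKIM or DMARC failure.
from email import message_from_string

RAW_MESSAGE = """\
Authentication-Results: mx.example-corp.com;
 spf=fail smtp.mailfrom=benefits-update.example.net;
 dkim=none; dmarc=fail header.from=example-corp.com
From: "HR Benefits" <hr@example-corp.com>
Subject: Action required: updated benefits enrollment

Click the link below to confirm your enrollment...
"""

def looks_spoofed(raw: str) -> bool:
    """Return True when SPF, DKIM or DMARC results indicate failure."""
    msg = message_from_string(raw)
    results = (msg.get("Authentication-Results") or "").lower()
    return any(token in results for token in ("spf=fail", "dkim=fail", "dmarc=fail"))

print(looks_spoofed(RAW_MESSAGE))  # True: the display name says HR, but authentication failed

This heuristic is deliberately crude; production mail gateways evaluate DMARC alignment and sender reputation far more rigorously, but even a simple check of this kind catches the class of message described above.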

Social engineering has gotten easier with the expansion of the Internet of Things (IoT) because many people use multiple devices that are interconnected. Although some enterprises are making an effort to use containers to separate their enterprise applications (apps) from employees’ personal apps, human errors can facilitate intrusions into enterprise networks. With the proliferation of devices that track individuals’ movements, it has become easier for adversaries to know and exploit their targets’ habits.

After the COVID-19 pandemic began, many employers allowed their employees to work from home, and many employees used their home networks for work-related business. There are technical measures to counter the resulting security weaknesses, such as using virtual private networks (VPNs) and containers to separate personal and work-related apps. However, the desire of employees to catch up on work using Internet connections in coffee shops, airport lobbies and airplane cabins benefits social engineering schemes. The more people expose themselves to new technologies and apps, the greater the chances that social engineering experts can take advantage of their ignorance to execute successful attacks.

The COVID-19 pandemic has transferred a broad range of daily in-office procedures to online platforms, substantially increasing the percentage of the population working online. The rise in Internet users has not been accompanied by sufficient cybersecurity education and training in how to recognize and respond to the different types of attacks that might occur on a daily basis.6

Humans Are the Weakest Link

Many people view the field of cybersecurity as involving abstract and esoteric configurations of settings in hardware and software—recondite information that is the responsibility of employees well-versed in computer science. Typical users care more about the convenience of using technology than about its security. Threat actors are aware of this flawed thinking among computer users and aim to exploit it. Such thinking can translate into lax behavior such as logging on to an enterprise website while on a plane, forgetting that a stranger—possibly even a nation-state sponsored threat actor—might take the opportunity to look over the user’s shoulder. What if the spy’s target is a federal employee logging into an app? The employee is simply trying to catch up with work, but prying eyes can take note of the applications being accessed and use that information to craft an email that might induce the employee to click on an embedded malicious link.

Social engineering can be described as the art of penetrating cybersecurity defenses by exploiting human psychology in persons who are unprepared to guard against such threats or attacks.7 One human weakness that leads to successful social engineering is that some people are not aware that their behavior or thought processes might be flawed; therefore, they take few, if any, precautions. Threat actors can more easily exploit those who do not have heightened awareness. This mentality can also lead to risky employee behavior, such as loudly discussing confidential client information on a cellphone in an airport lobby. Individuals who resist the idea that their behavior could create a new vulnerability can be quite stubborn, and they are not likely to be corrected by a single annual security awareness training session.

How Nation-State Sponsored Actors Win

Cybersecurity is a national security concern. Nation-state sponsored threat actors have learned that the easiest way to overcome the defense-in-depth (DiD) technical configurations surrounding sensitive data is through social engineering. With disparate laws and loopholes regarding data management from one jurisdiction to another, and sometimes in defiance of the laws that do exist, some organizations continue to buy and sell individuals’ personal data online. Those data are available for social engineers to use in cyberattacks against their geopolitical adversaries.

Some people believe national security is about protecting critical infrastructures and that the privacy of citizens is not a pressing concern. People with that mindset are sometimes the very employees who manage important infrastructures on behalf of the government. If those people’s information is made available online, nation-state threat actors can target them in attempts to reach the computer systems and networks they use. As the era of cyberwarfare looms, and with laws regulating cyberwarfare still evolving, it is imperative to reinforce the message that allowing the information of critical infrastructure employees to be exposed online is a disservice to cybersecurity.

Mitigating Social Engineering Threats

All users of digital systems need to be aware that their behaviors can be flawed and that attackers can exploit those flaws and weaknesses. Maintaining such awareness is not easy, as people tend to be confident in the decisions they make. Employers should sponsor training events and security awareness campaigns, at a minimum, but it falls on individuals to apply what they have learned from organized training and to maintain constant vigilance. Employees can be vigilant only if they are aware of the dangers that can arise from their own flawed behavior in response to threats in the environment.

Organizations should train their recruiters to be careful about what they reveal in job requisitions posted online. Announcing that an organization is seeking a software engineer who specializes in a specific software platform (which may rely on outdated, vulnerable technology) can hand valuable information to a threat actor. The hired software engineer should likewise be trained to avoid posting unnecessary details about their role on LinkedIn or anywhere else online, including configurations and other superfluous details that could give threat actors insight into the organization’s internal architecture.

Some organizations post profiles of employees online, including work emails and work locations. In addition, some organizations post personal information about their key security employees online, including their residential addresses, personal emails and phone numbers. Employees should be aware of whether their information is exposed to threat actors and make efforts to remove it.

In addition to training, regulations can help mitigate social engineering threats. Policymakers and legislators must be educated on what the absence of an omnibus privacy law means for nation-state sponsored actors from adversary countries. The abundance of information available to threat actors due to a lack of federal regulations puts the United States at a disadvantage when it comes to potential cyberwarfare. A legislative equivalent to the EU General Data Protection Regulation (GDPR) would be one way to prevent unnecessary exposure of information pertaining to security professionals and officials.

Conclusion

Due to the COVID-19 pandemic, many organizations now have employees who work from home full-time, while some have adopted hybrid approaches. In either case, employees should be vigilant about the dangers of social engineering. It is important to understand that behavior can be flawed in ways that may not be apparent. Each user should be trained in careful cybersecurity self-assessment techniques to guard against unwittingly divulging credentials to attackers.

The world is entering a time of heightened risk of cyberwar. Social engineering could be an effective weapon for adversaries to gather the information they need to launch devastating cyberattacks. Each day, millions of users post personal information on Facebook, Twitter and many other online social networks. The type of information they are sharing may seem insignificant, but it could be the key to igniting a cyberwar.8

Strong privacy laws, coupled with a shift in collective online behavior, could help rectify the problem of ubiquitous personal data floating around the Internet. Perhaps it is not too late.

Author’s Note

The views in this article do not represent those of the author’s current or previous employers.

Endnotes

1 Washo, A.; “An Interdisciplinary View of Social Engineering: A Call to Action for Research,” Computers in Human Behavior Reports, vol. 4, August–December 2021, https://www.sciencedirect.com/science/article/pii/S2451958821000749
2 Alsulami, M.; F. Alharbi; H. Almutairi; “Measuring Awareness of Social Engineering in the Educational Sector in the Kingdom of Saudi Arabia,” Information, vol. 12, iss. 5, 2021, https://www.mdpi.com/2078-2489/12/5/208/htm
3 Kang, L.; J. Xiang; “CAPTCHA Phishing: A Practical Attack on Human Interaction Proofing,” Inscrypt’09: Proceedings of the 5th International Conference on Information Security and Cryptology, December 2009, https://dl.acm.org/doi/abs/10.5555/1950111.1950150
4 Jacob, B.; “Tech Term Tuesday: CAPTCHA,” ISACA Engage Forum, 15 March 2022, https://engage.isaca.org/onlineforums
5 Dhanalakshmi, G.; S. Devadarshini; R. Swarnaa; R. Srinidhi; “CAPTCHA Security Engine,” International Research Journal of Engineering and Technology (IRJET), vol. 8, iss. 4, April 2021, https://www.irjet.net/archives/V8/i4/PIT/ICIETET-43.pdf
6 Venkatesha, S.; K. Reddy; B. Chandavarkar; “Social Engineering Attacks During the COVID-19 Pandemic,” SN Computer Science, 11 March 2021, https://pubmed.ncbi.nlm.nih.gov/33585823/
7 Alsufyani, A.; L. Alhathally; B. Al-Amri; S. Alzahrani; “Social Engineering, New Era of Stealth and Fraud Common Attack Techniques and How to Prevent Against,” International Journal of Scientific and Technology Research, vol. 9, iss. 10, October 2020, https://www.ijstr.org/final-print/oct2020/Social-Engineering-New-Era-Of-Stealth-And-Fraud-Common-Attack-Techniques-And-How-To-Prevent-Against.pdf
8 Abass, I. A. M.; “Social Engineering Threat and Defense: A Literature Survey,” Journal of Information Security, vol. 9, iss. 4, 2018, https://www.scirp.org/journal/paperinformation.aspx?paperid=87360

ALLEN ARI DZIWA | CISA, CCSP, CEH, CISSP

Is a cybersecurity student at Brown University (Providence, Rhode Island, USA). He serves as a cybersecurity risk specialist with the Federal Reserve Bank of Cleveland. He has worked in technology and cybersecurity consulting for 15 years. He serves on the board of directors of the Information Systems Security Association (ISSA) North Texas (USA), on the EC-Council Ethical Hacking Advisory Board and as a subject matter expert for (ISC)2. He is a certified ethical hacker and certified threat intelligence analyst.