Zero Trust

Author: Bruce R. Wilkins, CISA, CRISC, CISM, CGEIT, CISSP
Date Published: 12 October 2020

So, it has begun. Zero trust is an old cybersecurity term that is being reintroduced with new meaning. If you remember zero trust, it is probably in the context of trusted development. Zero trust meant that all software created, and all hardware tools used to enforce a security safeguard, had to be developed and certified using trusted development techniques. This included compilers, operating systems, databases and more. Pretty unrealistic, but a great idea for those with a risk-avoidance mentality. You might have seen the similarities between the zero trust model and the cloud concept of the “black core.”

The crux of zero trust today is that, as cybersecurity professionals, we need to protect data, resources, workflows and the associated user communities. Data are the attack surface, the desired objective from a hacker’s or adversary’s perspective. In an environment where there is no ‘inside’ or ‘outside,’ this is not an easy task. In addition, people now approach data from all locations using devices provided by organizations or following bring your own device (BYOD) policies. Here are some quick insights into implementing zero trust in today’s threat environment.

It is 2020. Do you know where your data are located? Nothing is more fundamental to a zero trust model than data security. It is the very root of the concept. When moving to a zero trust model, an organization should know its data and where the data are located, both within the IT infrastructure and within the physical environment. In a datacentric protection strategy, you should consider the following 7 questions (a minimal inventory sketch follows the list):

  1. Do you have a formal data architecture that defines your data communities, the associated users and the users’ data paths?
  2. Does your infrastructure properly separate your data communities?
  3. Does your organization view servers, virtual machines (VMs) and cloud services as capable of separating data communities?
  4. Is your separation of data communities granular enough for defining separation of duties?
  5. Are your protection strategies implemented properly in the data infrastructure?
  6. How have your cybersecurity and IT budget realities put your data communities at risk?
  7. Do you have a data recovery strategy for each data community?
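To make questions 1 and 7 concrete, a data inventory can start as one record per data community capturing where the data live, who may touch them and how they are recovered. The following is a minimal sketch; the field names and values are illustrative assumptions, not a full data architecture:

```python
# A minimal sketch of a data-community inventory record. All field
# names and values are illustrative assumptions; a real data
# architecture would be far richer.

from dataclasses import dataclass

@dataclass
class DataCommunity:
    name: str
    classification: str          # e.g. "confidential"
    it_locations: list           # systems or services holding the data
    physical_locations: list     # e.g. data center, cloud region
    authorized_positions: list   # positions with a need to know
    recovery_strategy: str       # per-community recovery (question 7)

payroll = DataCommunity(
    name="payroll",
    classification="confidential",
    it_locations=["hr-db-01", "cloud-backup-eu"],
    physical_locations=["HQ data center", "eu-west-1"],
    authorized_positions=["payroll_clerk", "payroll_auditor"],
    recovery_strategy="nightly encrypted backup, 24-hour recovery objective",
)
print(payroll.name, payroll.it_locations)
```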

Complemented by the separation of data is the segregation of duties (SoD). SoD is enforced in a zero trust model using policy-, attribute- or rule-based access control (PBAC, ABAC, RBAC). In these access control models, employment positions are granted access and privilege based on rules that represent how the user filling the position should access data. Every position in the organization is defined against its necessary data, thus defining the need to know. Having implemented this model in one of the largest infrastructures in the US federal government, I can say it tends to be more theoretical than practical when it comes to reacting to a dynamic data environment. In fact, one of the biggest shortcomings in the organizations that are assessed is the lack of a data architecture. Now, combine that with every position in the organization being defined against each data community, which are multiple subsets of that data architecture, and the problem grows.
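As a minimal sketch of how such a policy model can be expressed, consider the default-deny rule table below. The positions, data communities and rules are illustrative assumptions, not a production policy decision point:

```python
# A minimal ABAC-style sketch: each rule maps a position's attributes
# to the data communities and actions it may exercise. Everything not
# explicitly granted is denied. Names are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    position: str         # the employment position making the request
    data_community: str   # the data community being accessed
    action: str           # e.g. "read", "write"

# Every position is defined against its necessary data; this is how
# need to know, and separation of duties, is expressed.
POLICY = {
    ("payroll_clerk", "payroll", "read"): True,
    ("payroll_clerk", "payroll", "write"): True,
    ("payroll_auditor", "payroll", "read"): True,
    # The auditor deliberately has no write rule: separation of duties.
}

def is_permitted(req: Request) -> bool:
    # Default deny: anything not explicitly granted is refused.
    return POLICY.get((req.position, req.data_community, req.action), False)

print(is_permitted(Request("payroll_auditor", "payroll", "write")))  # False
```

The following are some further considerations: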

  • Resource protection—Just because there is no inside or outside does not mean we should not protect at this level. Resource protection is still fundamental to accomplishing a well-balanced protection strategy. Layers of protection are always more secure than 1 approach, no matter how innovative. Ensuring that assets are available, whether in the cloud or in-house, is still a critical part of the protection strategy.
  • Network—Security is actively being pushed into the network before a connection arrives at your front door. Tools examine paths and routes to determine where users are coming from and how they were routed, which goes beyond simply checking the endpoint’s IP address. The same capability is used to protect resources by intelligently pushing denial-of-service (DoS) attacks back into the network (see the route-screening sketch after this list).
  • Workflow—Workflow safeguarding is a field ripe for artificial intelligence (AI) applications. Monitoring what each user type does, where they do it from and when they do it allows a deep learning algorithm to learn common processing practices. These algorithms can see patterns and know when those patterns are not being followed. So, when users of a user group do not operate like the others, there may be grounds for concern (see the baseline sketch after this list).
  • Endpoint management—Many organizations perform endpoint management strictly on authentication: if you have a token or a password, then your endpoint is secure. Today, there are much more sophisticated endpoint managers. You can secure the endpoint by IP address, media access control (MAC) address, physical location, and even the network route taken to reach the infrastructure. In addition, a device, whether issued by the organization or BYOD, can be registered with the endpoint manager to further secure the work location. When users log on to a system from a new location or device, the system generates a warning so the organization can determine whether the user account and endpoint are still trusted (see the endpoint-check sketch after this list).
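For the network consideration above, a hedged sketch of route screening follows. The hop identifiers and trusted-network list are illustrative assumptions; real tools would draw on traceroute or BGP path data:

```python
# A sketch of route-aware screening: beyond the endpoint's IP address,
# the observed network path is compared against networks the
# organization trusts. The AS numbers shown are from the reserved
# documentation range and are illustrative assumptions.

TRUSTED_NETWORKS = {"AS64500", "AS64501"}

def path_is_trusted(observed_path):
    # Reject the connection if any hop traverses an untrusted network.
    return all(hop in TRUSTED_NETWORKS for hop in observed_path)

print(path_is_trusted(["AS64500", "AS64999"]))  # False: unknown transit network
```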
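For the workflow consideration, the following is a deliberately simple statistical stand-in for the deep learning approach described above: it baselines each user group’s typical hours and locations and flags rarely seen combinations. The threshold and field names are illustrative assumptions:

```python
# Baseline each user group's (hour, location) pattern and flag
# combinations the group has rarely or never exhibited. This is a toy
# frequency model, not a deep learning system.

from collections import defaultdict

class WorkflowBaseline:
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # group -> (hour, location) -> count
        self.totals = defaultdict(int)

    def observe(self, group, hour, location):
        self.counts[group][(hour, location)] += 1
        self.totals[group] += 1

    def is_anomalous(self, group, hour, location, min_freq=0.05):
        total = self.totals[group]
        if total == 0:
            return True  # no baseline yet: treat as suspect
        return self.counts[group][(hour, location)] / total < min_freq

baseline = WorkflowBaseline()
for _ in range(100):
    baseline.observe("accounts_payable", 10, "HQ")  # normal daytime pattern
print(baseline.is_anomalous("accounts_payable", 3, "unknown_vpn"))  # True
```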
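For endpoint management, the sketch below checks a connecting device against registered endpoints and warns on anything unrecognized. The registry structure and warning path are illustrative assumptions:

```python
# Endpoint trust beyond authentication alone: the device's MAC, source
# network and physical location are compared against what the endpoint
# manager has registered for the user. All values are illustrative.

from dataclasses import dataclass

@dataclass
class KnownEndpoint:
    mac: str
    ip_prefix: str   # e.g. "10.20." for the corporate network
    location: str

REGISTERED = {
    "alice": [KnownEndpoint("aa:bb:cc:dd:ee:ff", "10.20.", "HQ")],
}

def check_endpoint(user, mac, ip, location):
    for ep in REGISTERED.get(user, []):
        if ep.mac == mac and ip.startswith(ep.ip_prefix) and ep.location == location:
            return True
    # New device or location: warn the organization rather than silently allow.
    print(f"WARNING: {user} connecting from unrecognized endpoint {mac} at {location}")
    return False

check_endpoint("alice", "11:22:33:44:55:66", "203.0.113.9", "airport_wifi")
```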

There is much misinformation about zero trust models on the Internet. I have read articles stating that virtual private networks (VPNs) and Secure Sockets Layer (SSL) are unnecessary, and other similarly outrageous statements. What they should be saying is that protecting to the server or enclave level is not enough. Zero trust brings new technology and safeguards to the infrastructure while retooling existing ones. In the end, safeguards are not discrete functions; they are orchestrated into a correlated defense strategy that is datacentric, considers workflows and pushes threats back into the network. Just remember, devices in a policy access model, such as a Policy Enforcement Point (PEP), are similar to fully implemented application firewalls, down to a user type and its associated data. The need remains for a layered approach to security, especially for legacy technology that cannot be fully integrated into your security strategy.

Ultimately, if your data transit a given piece of technology, then you should care about that technology and secure it, whether the data are in transmission, in storage or in process. You must protect data regardless of whether a legacy device can support your zero trust model. I discuss keeping data encrypted while in process, without decrypting them, in my 2019 article, “Homomorphic Encryption: Privacy Throughout the Data Life Cycle.”
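As a tiny taste of that idea, textbook RSA happens to be multiplicatively homomorphic: two ciphertexts can be multiplied so that the product decrypts to the product of the plaintexts, without either plaintext ever being exposed. The toy parameters below are for illustration only; production systems use vetted schemes such as Paillier or CKKS through established libraries:

```python
# Textbook RSA is multiplicatively homomorphic:
# Enc(a) * Enc(b) mod n decrypts to (a * b) mod n.
# Toy primes for illustration only; never use key sizes like this.

p, q = 61, 53
n = p * q                          # 3233
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 12
c = (enc(a) * enc(b)) % n          # compute on ciphertexts only
assert dec(c) == (a * b) % n
print(dec(c))                      # 84, obtained without decrypting a or b
```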

Bruce R. Wilkins, CISA, CRISC, CISM, CGEIT, CISSP, is the chief executive officer of TWM Associates Inc. In this capacity, Wilkins provides his customers with secure engineering solutions for innovative technology and cost-reducing approaches to existing security programs.