deobfuscate: Zero Trust
Where Security is Heading Right Now
Zero Trust Architecture is a huge movement in cybersecurity, and it's no wonder why. If you pay attention to the federal space, there are executive orders demanding it of federal agencies, and meanwhile CISA is recommending the commercial space head that way too. So what is Zero Trust? Let's break it down.
First and foremost, Zero Trust is a concept, design, or architecture, not a product, line item, or SKU. This is my favorite quote in the space:
An interior designer doesn’t sell you feng shui, they sell you products and designs that achieve it.
Much like feng shui, you can't just buy a product and call it settled; there is no 'congrats, you're Zero Trust Certified.' Unfortunately, it doesn't work like that. Zero Trust is accomplished when we apply no implicit trust to any one resource and don't inherit trust from anything. We must toss the practice of 'trust but verify' and adopt 'never trust, always verify' instead.
We’ve Been Doing This Slow March For Years
The concept of moving toward Zero Trust isn't new. In fact, we've been doing this slow march for years: the term 'zero trust' has been around since 1994, and the Zero Trust Model itself was founded by John Kindervag in 2010 during his time at Forrester Research. Zero Trust is the basis for how all of our security should be operating. With the sprawl of our technologies, from on-prem to [Everything]-As-A-Service'ing the heck out of our I.T. shop, you can imagine the need for this kind of approach to security. Just looking at the graph above, though, might immediately make some people hesitant to approach Zero Trust, but I promise it isn't as difficult as it appears. Let's dive into the fundamentals you're going to need to know for the SOC.
To do this, I’ve included something a little more constructive, and it’s the ‘Foundation of Zero Trust’ outline that the Cybersecurity and Infrastructure Security Agency (CISA) developed as part of their Zero Trust Maturity Model. As part of it, we see five pillars:
Identity - Users and their unique attributes.
Device - Any asset that can connect to a network.
Network & Environment - Our method of communications.
Application Workload - Applications and their workloads that execute on and off-prem.
Data - Information stored, transported, and processed by the other pillars.
And below the pillars are the foundational items that apply to all those things:
Visibility and Analytics - Being able to identify assets, users, and data, and how the three interact with one another, as well as applying analytics to identify behaviors between the three that should be monitored.
Automation and Orchestration - Being able to automate our processes and actions through vetted capabilities and solutions to reduce the burden and lag of human intervention, as well as orchestrating processes to maintain consistency, which prevents misconfigurations and human error.
Governance - Oversight, auditing, and assurance that the systems we have in place are effective, compliant, maintain zero-to-low error and false positive/false negative rates, and meet or exceed expectations for mean time to detect (MTTD), mean time to resolution (MTTReso), and mean time to respond (MTTResp). A small sketch of how those metrics can be computed follows this list.
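To make those metrics less abstract, here is a minimal sketch of how they can be computed, assuming we can export incident timestamps from a ticketing system; the timestamps, field names, and the choice of which gaps define each metric are purely illustrative.

```python
# Minimal sketch of the governance metrics named above, computed from exported
# incident timestamps. All records and field names are illustrative examples.
from datetime import datetime
from statistics import mean

incidents = [
    {"occurred": datetime(2024, 1, 5, 9, 0),   "detected": datetime(2024, 1, 5, 9, 42),
     "responded": datetime(2024, 1, 5, 10, 5), "resolved": datetime(2024, 1, 5, 16, 30)},
    {"occurred": datetime(2024, 1, 9, 22, 15), "detected": datetime(2024, 1, 9, 22, 20),
     "responded": datetime(2024, 1, 9, 23, 0), "resolved": datetime(2024, 1, 10, 3, 45)},
]

def mean_hours(start_key: str, end_key: str) -> float:
    """Average gap between two timestamps across all incidents, in hours."""
    return mean((i[end_key] - i[start_key]).total_seconds() / 3600 for i in incidents)

print(f"MTTD:    {mean_hours('occurred', 'detected'):.1f} h")   # occurrence -> detection
print(f"MTTResp: {mean_hours('detected', 'responded'):.1f} h")  # detection -> first response
print(f"MTTReso: {mean_hours('occurred', 'resolved'):.1f} h")   # occurrence -> resolution (one possible definition)
```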
A Unified Approach To Securing Us, On and Off Prem
With Zero Trust in full swing, our organizations are implementing top-to-bottom Authentication, Authorization, and Accounting (AAA) solutions like Single Sign-On (SSO); think Okta, Ping Identity, Azure Active Directory, and OneLogin. These solutions are our first barrier to entry into our networks and provide us access to all of our on- and off-prem applications and services. SSO has become imperative to the Zero Trust approach, as it also incorporates our next most important item: Multifactor Authentication (MFA).
Often simply called 2FA, since usually only two layers are implemented, MFA lets us never trust and always verify our logons by enforcing a username and password (usually) paired with a second or even third form of verifying that a user is who they say they are. We do this by breaking authentication down into three factors:
Something they know - As simple as a PIN, password, or passphrase. This can also be a 'select your secure image' sort of thing.
Something they have - This is the most common second layer, and most 2FA operates this way because smartphones are so prevalent. It covers the code you get from an authenticator app, a physical token generator, or a text message or email with a code; though those last two are considered poor methods of MFA, to the point that some security professionals no longer count them as MFA at all, and I am loosely one of them. (A small sketch of how those app-generated codes are derived follows this list.)
Something they are - This pertains to us as humans and is primarily rooted in biometrics. Fingerprints, retina scans, and Face ID are all methods of this form.
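Since the 'something they have' codes come up so often, here is a minimal sketch of the math an authenticator app performs (time-based one-time passwords per RFC 6238), using only the standard library; the base32 secret is a made-up example, not a real credential.

```python
# Minimal TOTP sketch (RFC 6238) using only the standard library.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval              # 30-second time step
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # Example secret only; the same math an authenticator app performs on its side.
    print(totp("JBSWY3DPEHPK3PXP"))
```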
There is also the method of somewhere you are. Strictly speaking it is tied to authorization rather than authentication, but because it bleeds into SSO it is worth mentioning. Somewhere you are is tied to your geographic location. Whether GPS data can be gathered directly or we must rely on geolocation tied to an IP address, this adds to the complexity, especially when you consider letting employees into networks remotely. If you try to sign in while on vacation in a foreign country, you might suddenly not be authorized to log in to your organization's applications or even VPN into the network.
Once we get through these layers of signing on (and many organizations require AAA'ing into each application even with SSO, so goodbye persistent logons), there are many more layers to Zero Trust beyond this. What we do on those apps, and which parts of the apps we are even allowed to use, are all important layers of Zero Trust. We usually administer these by employing Role-Based Access Control (RBAC) on those applications. This lets us get granular between users, power users, system admins, and system owners, letting us be as secure as possible when administering the principle of least privilege.
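To make RBAC concrete, here is a minimal sketch of a default-deny role-to-permission mapping; the role and permission names are made up for illustration, not taken from any particular product.

```python
# Minimal RBAC sketch: roles map to explicit permissions, and anything not
# granted is denied (least privilege). Role and permission names are examples.
ROLE_PERMISSIONS = {
    "user":         {"ticket:read", "ticket:create"},
    "power_user":   {"ticket:read", "ticket:create", "report:run"},
    "system_admin": {"ticket:read", "ticket:create", "report:run", "config:write"},
    "system_owner": {"ticket:read", "report:run", "config:write", "user:assign_role"},
}

def is_authorized(role: str, permission: str) -> bool:
    """Default-deny check: unknown roles and ungranted permissions both fail."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("user", "config:write"))          # False - denied by default
print(is_authorized("system_admin", "config:write"))  # True  - explicitly granted
```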
There's a ton more that goes into implementing Zero Trust, but this is usually where it starts: managing our users and how they interact with our systems and data. For the sake of focusing on how this relates to the SOC, I'll move on from Zero Trust in general to how it affects us. If you want to know more about Zero Trust, here is the Wikipedia article, which includes further references and discussion on how it works.
How Do We Manage All These Things?
Or rather, what does all of this tie into? Well, 'policy' is a big word that gets used everywhere in Zero Trust. Policy management and policy enforcement are the two biggest, and while we may tend to think of a policy as something our C-suite writes and we monitor for, that's only half right. Policies are also something we develop on our systems, which includes the usage of SSO, RBAC, MFA, etc. We set policies and enforce them on our systems to prevent misuse, misconfigurations, and breaches. Whenever a policy is violated, our systems, which should be 'zero trust compatible' (a loose term our industry has created to signal that it is easy to implement Zero Trust with that product), should alert us to these events so we can respond to them.
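As a rough illustration of what a sign-in policy check might look like, here is a minimal sketch that combines the MFA and geolocation ideas from earlier; the policy values and request fields are assumptions, not any particular product's schema.

```python
# Minimal sketch of a sign-in policy check. A violation here is exactly the
# kind of event a 'zero trust compatible' system should surface to the SOC.
# Policy values and request fields are illustrative assumptions.
POLICY = {
    "require_mfa": True,
    "allowed_countries": {"US", "CA"},
    "allowed_device_states": {"managed", "compliant"},
}

def evaluate_signin(request: dict) -> tuple[bool, list[str]]:
    """Return (allowed, violations); any violation means deny and alert."""
    violations = []
    if POLICY["require_mfa"] and not request.get("mfa_passed"):
        violations.append("mfa_not_satisfied")
    if request.get("country") not in POLICY["allowed_countries"]:
        violations.append("geolocation_outside_policy")
    if request.get("device_state") not in POLICY["allowed_device_states"]:
        violations.append("unmanaged_device")
    return (not violations, violations)

allowed, why = evaluate_signin(
    {"user": "jdoe", "mfa_passed": True, "country": "FR", "device_state": "managed"}
)
print(allowed, why)  # False ['geolocation_outside_policy'] -> deny and alert the SOC
```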
This is where we, the SOC, come into play. Zero Trust doesn't involve us much during construction, though we should still be involved; however, when the switch is flipped and Zero Trust is active in our environments, we are the second line of defense. When policies are violated, misconfigurations are detected, and data exfiltration is found, our job kicks into gear. Continuous monitoring is arguably the thing that holds the entirety of Zero Trust together, so the SOC plays the biggest role. If there isn't someone around to tackle these issues as they populate, well then it all falls apart, doesn't it?
We Are the Watchers
First and foremost, Visibility and Analytics are a priority for the SOC to establish. As we build out the Zero Trust initiative in our organization, we should be aggregating system lists, network maps, and contact information for our teams. We should be establishing ties to the Identity and Access Management (IAM) team and working to ensure we track normal users, critical (VIP) users, and privileged users. We should identify our critical information systems and classify our data to ensure proper handling. Getting these things ahead of time makes tracking them down a non-issue when bad things happen. Remember, security breaches are a matter of 'when', not 'if'. Hammering this point home, some Zero Trust experts say that Zero Trust simply assumes the breach has already happened, and segmentation and privileged access both play a part in giving us some level of automatic containment because of the policies in place.
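As a concrete illustration of those lists, here is a minimal sketch of a system and user inventory being used to enrich a raw alert; every record and field name is an illustrative placeholder, not a prescribed schema.

```python
# Minimal sketch of the kind of inventory worth having before an incident:
# who owns a system, how critical it is, how its data is classified, and
# which users are VIP or privileged. All records are illustrative.
SYSTEMS = {
    "hr-db-01": {"owner": "HR IT",    "criticality": "high", "data_class": "confidential"},
    "wiki-01":  {"owner": "Platform", "criticality": "low",  "data_class": "internal"},
}
USERS = {
    "jdoe":      {"category": "normal"},
    "ceo":       {"category": "vip"},
    "adm_asmith": {"category": "privileged"},
}

def enrich(alert: dict) -> dict:
    """Attach system ownership, criticality, and user category to a raw alert."""
    alert["system_info"] = SYSTEMS.get(alert.get("host"), {"owner": "unknown", "criticality": "unknown"})
    alert["user_info"] = USERS.get(alert.get("user"), {"category": "unknown"})
    return alert

print(enrich({"host": "hr-db-01", "user": "adm_asmith", "rule": "mass_file_read"}))
```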
“Wait, hold on, we are doing (or are being told to do) this already?” - Yes, of course, because, harkening back to my original statement, we've been doing this slow march for a while. And if you haven't taken these critical steps already, then Zero Trust isn't going to protect your organization. Where there are gaps, threats will find them and wiggle their way through (gap analysis, anyone?).
When we have our lists of systems and owners, our data classified, our networks mapped, and our users identified, the rest of this comes easy, I promise.
We feed these data points to our detection technologies and begin with the highest-priority systems, data, and users first, working our way down. Let the analytics do their job and it's smooth sailing. The best thing about Zero Trust, in my opinion, is that when it's built out correctly, the alerts we get are usually enriched and robust, making them fantastic to work with; they won't leave you lost.
Having a Plan, And Letting The Machines Execute It
Automation and Orchestration is the next step, though honestly, I think it should be Orchestration and Automation. The reason is that the plan comes first, and then we automate it, but that's just me.
The plans for responding to our violations, misconfigurations, and breaches should be outlined by the SOC, working in conjunction with the teams that manage the Zero Trust solutions: network engineers, IAM engineers, system admins, etc. We should construct our response plans so that when our analytics find something, we aren't sitting there asking:
Where did this come from?
Why did this happen?
Who is involved?
What data is exposed/lost?
Who do we contact?
What do we do?
If these questions come up in response to an alert, we haven't defined our standard operating procedures (SOPs) well enough to guide us and allow us to function effectively. It could also be because the tool isn't feeding the right (notice I didn't say all) data we need to triage these alerts into our analytics tool; a quick sanity check like the sketch below can flag that gap.
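Such a sanity check might look like the following sketch, where an alert missing key fields is flagged before it ever reaches an analyst; the field names are assumptions that would map to whatever your analytics tool actually emits.

```python
# Check that an alert carries the data needed to answer the triage questions
# above. Field names are illustrative; map them to your tooling's real schema.
REQUIRED_FIELDS = {
    "source_system":       "Where did this come from?",
    "detection_rule":      "Why did this happen?",
    "user":                "Who is involved?",
    "data_classification": "What data is exposed/lost?",
    "system_owner":        "Who do we contact?",
    "recommended_sop":     "What do we do?",
}

def triage_gaps(alert: dict) -> list[str]:
    """Return the triage questions this alert cannot answer as-is."""
    return [question for field, question in REQUIRED_FIELDS.items() if not alert.get(field)]

print(triage_gaps({"source_system": "okta", "detection_rule": "impossible_travel", "user": "jdoe"}))
# -> the three unanswered questions, i.e. a gap in our alert pipeline
```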
Once these alerts do come in and we have a plan outlined, let's start talking about implementing automation. This is mainly centered on a Security Orchestration, Automation, and Response (SOAR) tool, but it could be as simple as building Python scripts for those lean SOCs. SOAR should take the well-established plans we constructed, integrate them with the tools in our stack, and then automate as much of the response as we are comfortable with as an organization, leaving the important human-intervention steps in those plans where needed so that analysts and engineers can respond and coordinate.
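For those lean SOCs, a 'playbook as a Python script' might look something like this minimal sketch, with the IAM, ticketing, and notification calls stubbed out since real integrations depend on your stack; the alert fields and rule name are illustrative.

```python
# Minimal sketch of a lean-SOC playbook script: automate enrichment and
# containment for one alert type, but stop for a human before anything
# irreversible. The functions below are stubs standing in for real
# IAM, ticketing, and paging integrations.

def disable_account(username: str) -> None:
    print(f"[stub] would call the IAM API to disable {username}")

def open_ticket(summary: str, details: dict) -> str:
    print(f"[stub] would open a ticket: {summary}")
    return "TICKET-0001"

def notify_analyst(ticket_id: str, message: str) -> None:
    print(f"[stub] would page the on-call analyst about {ticket_id}: {message}")

def run_playbook(alert: dict) -> None:
    """Impossible-travel style playbook: contain fast, then hand off to a human."""
    ticket = open_ticket(f"Policy violation for {alert['user']}", alert)
    if alert.get("severity") == "high" and alert.get("rule") == "impossible_travel":
        disable_account(alert["user"])  # automated containment step we pre-approved
    notify_analyst(ticket, "Review containment and continue the SOP manually.")

run_playbook({"user": "jdoe", "rule": "impossible_travel", "severity": "high"})
```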
Who Watches the Watchers?
The last thing we want is a bureaucrat coming into the SOC and telling us we aren't doing our jobs right, but to be fair, that is their job. When things don't work, break, or aren't operating effectively, Governance usually finds out about it, and even if they don't, a good audit will alert them to it. Having sat through a few audits myself, I can tell you they aren't fun, and it is hard to B.S. your way to a good score.
So when we get feedback on our performance, we should work to integrate it as effectively as possible, because their mission is making sure we secure the organization; they are Those Who Watch the Watchers.
It's also important to be our own governance. Conducting After-Action Reviews (AARs), post-mortems, or whatever you like to call them, these lessons-learned exercises are vital to the continuous evolution of the SOC: adapting to new threats and bringing new processes, people, and technology into the organization to improve our capabilities. They also give us receipts to show Governance that we aren't just sitting on our hands waiting for the red light to flash on our dashboards indicating something bad has happened.