1       Introduction

Changes over recent decades in business and government IT environments have rendered the traditional security model obsolete. Adversaries' sophistication and technical aptitude constantly increase as they take advantage of every attack surface, including those emerging from the shift away from on-premises "fortresses". A set of principles called zero trust has been in development over the past decade to address this situation. This paper discusses these principles and their implications for strengthening security countermeasures against intensifying threats.

 

1.1       Massive Increase in Risk

Numerous recent reports[1] have catalogued the sharp rise in system breaches and ransomware attacks. The number of attacks has been increasing for years. Awareness, and often preparedness, have also increased in many organizations. U.S. regulators at the FTC are considering scrutinizing businesses more closely regarding cybersecurity[2]. In spite of this, security teams do not seem able to reduce the risk of computer system compromise.

 

On the positive side, security risk assessment methods and frameworks are freely available. Several threat modeling methodologies are available for identifying security risks. Automated tools exist to identify patterns of behavior that indicate imminent or active attacks. Computer security is a relatively mature field with a common vocabulary and concept of risk. The federal government is developing programs[3] to increase the size and improve the training of the cybersecurity workforce. Federal agencies are increasing staffing[4]. In spite of these positive conditions, security teams are having a harder time protecting digital assets.

 

 

1.2       Traditional Approach Is Inadequate

Funding and staffing of security teams may be lagging, since corporate budgets may not yet reflect an economy that began to rebound in 2021. The shift from on-premises infrastructure to cloud-based or hybrid infrastructure has also changed the nature of the attack surface. But the primary factors behind the increase in vulnerabilities seem to be a misdirected focus and a misguided placement of trust.

 

1.2.1      Misdirected Focus

As explained by Roger Grimes[5], an expert with 34 years of experience in host and network security in both the public and private sectors, there are hundreds of best practices for configuring computer components and carrying out security procedures. Audit plans, security frameworks, and risk assessment methods cover every best practice from 'A' to 'Z'. Except for the largest organizations, most security teams are neither staffed adequately nor trained sufficiently in all of the deployed technologies to implement every best practice. Nevertheless, they are tasked to do so. Certain remediations take more time than others, and some fixes are 'quick wins'. But often there is no sense within an organization of its key threats. As a result, time may be spent in those organizations improving the door locks, so to speak, while leaving the windows wide open.

 

1.2.2      Misguided Placement of Trust

Security controls often continue to be deployed in a castle-and-moat approach: coarse segmentation of enterprise users and assets into a single enclave, or a very few enclaves. The rest of the world, entities and devices alike, is considered to be outside this enclave, and there are only a few gateways in. This approach no longer models the actual state of affairs, where users access enterprise resources on devices and via infrastructure not owned by the enterprise, and the resources themselves might not reside on enterprise-owned infrastructure.

 

1.3       Better Approaches Are Available

Instead of devoting scarce corporate resources to a full laundry list of security best practices, organizations should prioritize their response to security risk based on the key threats they face. Current threat intelligence can provide this data. For example, Grimes indicates that organizations are primarily succumbing to social engineering, inadequate patching, and bad password management (including hard-coded and reusable passwords). Particular verticals or organizations may face additional threats and hazards, but the immediate focus of mitigation should be narrower than the full list of security controls.

 

Once an organization identifies its key threats and takes note of the current trends identified by security experts, two questions arise: Can the risks due to those threats be eliminated, or at least mitigated? And if one or more risks cannot be eliminated, can the current IT environment prevent widespread impact should there be a breach?

 

If the answer to the second question is 'no' or 'not sure', the path to improvement can start by redefining the meaning of 'trust'. Instead of assuming the existence of a trusted internal network whose perimeter must be defended from external entities, assume that no device, agent, or actor is trusted until it is thoroughly verified not to be compromised. This includes even the organization's full-time staff and the systems deployed or leased from the biggest-name vendors. Everything is suspect and untrusted until verified to be clean. And even then, trust extends only for the length of the session needed to access a resource, not for all time.

 

The improvement described above is called zero trust, a buzzword that represents the latest thinking in network and application security. It does not refer to the elimination of trust, but rather to constraints on when trust is accorded, constraints suited to third-party-hosted and virtual IT infrastructure and a multitude of user endpoints. A better name might have been "just-in-time verified trust"[6], but "zero trust" captures the imagination and keeps the focus on continually looking for ways to reduce the unverified granting of trust.

 

Identifying and focusing on the biggest security threats and moving to a zero-trust architecture require fundamental operational changes. Implementing these changes will require detailed plans staged over time, as described in NIST SP 800-207 (see p. 36, section 7, and Figure 12). But this effort holds promise as a way to reduce an organization's security risk.

 

2       Current Efforts in Zero-trust Practice

The Identity Defined Security Alliance (IDSA) offers technology-centric advice[7] for implementing a zero-trust architecture. IDSA references Forrester's work (2018) on the Zero Trust Extended Ecosystem. It was Forrester, in 2010, that coined the term 'zero trust', although some of the principles were articulated elsewhere earlier in the 2000s.

 

The Department of Defense (DoD)[8] references National Institute of Standards and Technology (NIST) SP 800-207[9] as the 'emerging' technology standard (to become a mandatory standard for the DoD within three years). 

 

The Cloud Security Alliance (CSA) has developed a reference architecture for its 'software-defined perimeter', or SDP[10]. NIST SP 800-207 refers to the SDP among several use cases. Like the IDSA guidance, the SDP provides a technical framework for implementing a zero-trust architecture.

 

The Cybersecurity and Infrastructure Security Agency (CISA) is developing a maturity model for zero trust[11], to help federal agencies migrate to a zero-trust environment. 

 

 

3       Tenets and Pillars of Zero-trust

3.1       Tenets

The DoD enumerates the tenets of zero trust as follows; this list is a condensation of the seven tenets and the six assumptions about network connectivity in NIST SP 800-207:

 

·       Assume a hostile environment - all users, devices, networks, environments are untrusted

·       Presume breach - adversaries are present within the environment

·       Never trust, always verify - deny access by default to all devices, users, workloads, data flows; allow access only after authentication passes involving multiple attributes and authorization is enforced involving least privilege and dynamic policies

·       Scrutinize explicitly - "All resources are consistently accessed in a secure manner using multiple attributes (dynamic and static) to derive confidence levels for contextual access to resources.  Access to resources is conditional and access can dynamically change based on action and confidence levels resulting from those actions." (p. 19)

·       Apply unified analytics - analyze behavior / traffic patterns and log all transactions

 

User and agent identity must be confirmed at each request to access a resource, and the configuration of each device or asset must be confirmed to conform to standards before each attempt to connect to a resource. In general, an organization's resources are 'dark': hidden and undiscoverable by default. Furthermore, if and when verification passes and a device or user is allowed to access a resource, only the lowest, most granular privileges should be authorized, matching precisely the task that needs to be accomplished. In a mature zero-trust architecture, each resource or collection of tightly related resources becomes a micro trust zone. A user with access to one trust zone has no access to other trust zones without re-authentication, re-verification, and re-authorization.
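
To make these tenets concrete, here is a minimal sketch, in Python, of a default-deny, per-request trust decision. The attribute names, weights, threshold, and session length are illustrative assumptions, not prescriptions from NIST or the DoD:

    from dataclasses import dataclass
    import time

    @dataclass
    class AccessRequest:
        user_id: str
        mfa_passed: bool
        device_posture_ok: bool   # e.g., patch level and endpoint protection verified
        geo_risk: float           # 0.0 (expected location) .. 1.0 (anomalous)
        zone: str                 # the micro trust zone being requested
        action: str

    # Hypothetical least-privilege policy: the actions each user may take in each zone.
    PRIVILEGES = {
        ("alice", "hr-portal"): {"read", "update"},
        ("alice", "payroll-db"): {"read"},
    }

    CONFIDENCE_THRESHOLD = 0.7
    SESSION_TTL_SECONDS = 900     # trust lasts one session, not for all time

    def decide(req: AccessRequest):
        """Default-deny: return a short-lived, zone-scoped grant, or None."""
        # Derive a confidence level from multiple static and dynamic attributes.
        confidence = (
            (0.4 if req.mfa_passed else 0.0)
            + (0.3 if req.device_posture_ok else 0.0)
            + 0.3 * (1.0 - req.geo_risk)
        )
        if confidence < CONFIDENCE_THRESHOLD:
            return None           # never trust by default
        # Enforce least privilege: only the requested action, only in this zone.
        if req.action not in PRIVILEGES.get((req.user_id, req.zone), set()):
            return None
        return {"zone": req.zone, "action": req.action,
                "expires_at": time.time() + SESSION_TTL_SECONDS}

    req = AccessRequest("alice", True, True, 0.1, "payroll-db", "read")
    print(decide(req))  # a zone-scoped grant; a 'write' request would return None

A request for a different trust zone, or for the same zone after the grant expires, must pass the entire evaluation again: the re-authentication, re-verification, and re-authorization that the micro trust zone model requires.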

 

As expected, there are many technical considerations in a zero-trust migration project[12]. However, an important non-technical component is the so-called 'human firewall'[13]. Human firewalls address the vulnerabilities inherent in an organization's staff. Staff members are exploited through social engineering tactics, and their compromise is responsible for a vast number of breaches[14].

 

Human firewalls are a mitigation for social engineering rather than a part of a zero-trust "architecture". But the essential operation of a human firewall requires suspicion and verification, the very essence of zero trust. All communications are suspect: especially email containing attachments or links[15], and now, with deep-fake voices, even phone calls from managers[16].

 

 

3.2       Pillars

The pillars of zero trust, according to the DoD, comprise five areas in which to deploy zero-trust controls and two areas of capability (means of achieving results). These pillars and capabilities are a rephrasing of the elements and processes in the Forrester Zero Trust Extended Ecosystem. CISA adds a third capability ('Governance'), covering policy, audit, and enforcement of controls, whether automated or not, that is similar to the DoD's second capability. CISA indicates that this model is one of several ways to support the transition to zero trust.

 

·       Apply controls to:

o    User / Identity - human and non-human entities

o    Device - all and any types of hardware (not only servers) that can connect to a network

o    Network / Environment - the communications medium

o    Application / Workload - systems, programs, and services, on-premises or not

o    Data - all and any datasets, structured or not, and the systems storing them

·       Exercise capabilities in:

o    Visibility and Analytics - metrics and indicators

o    Automation and Orchestration - controls

 

4       Benefits of Zero-trust Operations

4.1       Visibility

With a complete inventory of the identities and devices whose access it authorizes, an organization not only has the opportunity to continuously verify potentially trusted entities, but can also be more certain whether any observed traffic is rogue.
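
As a minimal sketch of this benefit (in Python, with a hypothetical device inventory), traffic triage becomes a set-membership test: any flow whose source is absent from the authorized inventory is immediately a candidate for rogue traffic:

    # Hypothetical inventory of devices whose access the organization authorizes.
    AUTHORIZED_DEVICES = {"laptop-0042", "build-agent-07", "iot-sensor-19"}

    def triage(observed_flows):
        """Split observed traffic into known and rogue based on the inventory."""
        known, rogue = [], []
        for flow in observed_flows:
            bucket = known if flow["device_id"] in AUTHORIZED_DEVICES else rogue
            bucket.append(flow)
        return known, rogue

    flows = [
        {"device_id": "laptop-0042", "dest": "hr-portal"},
        {"device_id": "unknown-9x", "dest": "payroll-db"},  # not inventoried: rogue
    ]
    known, rogue = triage(flows)
    print(f"{len(rogue)} rogue flow(s) to investigate")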

 

4.2       Reduced Attack Chain - Access, Privilege Escalation, Lateral Movement

By authorizing granular, least-privilege access, one reduces the impact of any threat actor that evades defenses, shutting down some of the stages of the attack chain[17] (Initial Access, Privilege Escalation, Lateral Movement), leaving little opportunity to deploy command and control (C2) and other persistent or destructive mechanisms.
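
To make the lateral-movement point concrete, here is a toy Python illustration (with hypothetical zone names) of why a grant scoped to one micro trust zone cannot be replayed against another:

    # A grant issued by a zero-trust policy decision point is scoped to one zone.
    grant = {"zone": "hr-portal", "action": "read"}

    def enforce(grant, target_zone):
        """Policy enforcement at a zone boundary: deny anything scoped elsewhere."""
        return grant.get("zone") == target_zone

    print(enforce(grant, "hr-portal"))   # True: the resource that was authorized
    print(enforce(grant, "payroll-db"))  # False: lateral movement is blocked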

 

With regard specifically to the ransomware attack chain, B. Moldenhauer, Senior Director in the Office of the CISO at Zscaler, says[18] that the technical elements of a zero-trust architecture will address some of its challenges, two of which are the compromise of trusted applications and attacks that bypass typical signature scans.

 

Moldenhauer states that "trust is the attack surface": it is the analysis of a wide set of attributes about users and devices before permitting any request, and the subsequent constriction of access, that can shrink that attack surface. Additionally, deploying machine learning to inspect user behavior, traffic patterns, and other presently unknown indicators of compromise will address the new tactics, techniques, and procedures (TTPs) used by threat actors.
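
As a sketch of the machine-learning idea (not any vendor's implementation), one might train an unsupervised anomaly detector on per-session traffic features; the features, synthetic data, and contamination rate below are illustrative assumptions:

    import numpy as np
    from sklearn.ensemble import IsolationForest  # unsupervised anomaly detection

    # Hypothetical per-session features: bytes out, requests/min, distinct hosts.
    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=[5e4, 20.0, 3.0], scale=[1e4, 5.0, 1.0], size=(500, 3))

    model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

    # A session pushing far more data to far more hosts than the baseline.
    suspect = np.array([[5e6, 200.0, 40.0]])
    print(model.predict(suspect))  # [-1] means the session is flagged as anomalous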

5       Anomalix's Position

5.1       Zero Trust Compatibility

Anomalix's idGenius platform can be built out to match an organization's zero-trust journey. Wherever an organization stands on the path to a zero-trust architecture, idGenius can provide the means to meet its identity and access management requirements.

 

5.2       Automation of Third-party Identity Lifecycle

Having a definitive inventory of identities is the first pillar of zero-trust operations. And one of the biggest gaps in any organization is third-party identities, whose lifecycle management is often dismissed within Human Resource Information Systems (HRIS). idGenius is purpose-built to automate management of the third-party identity lifecycle, from engagement to disengagement and reengagement, including analysis of network behavior and visibility into access privileges.
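
The lifecycle just described can be pictured as a small state machine. The sketch below is a generic Python illustration of the engagement, disengagement, and reengagement states, not a depiction of idGenius's internals:

    from enum import Enum, auto

    class ThirdPartyState(Enum):
        PROSPECTIVE = auto()    # vetted before first engagement
        ENGAGED = auto()
        DISENGAGED = auto()

    # Allowed transitions; anything else is an illegal lifecycle event.
    TRANSITIONS = {
        (ThirdPartyState.PROSPECTIVE, "engage"): ThirdPartyState.ENGAGED,
        (ThirdPartyState.ENGAGED, "disengage"): ThirdPartyState.DISENGAGED,
        (ThirdPartyState.DISENGAGED, "reengage"): ThirdPartyState.ENGAGED,
    }

    def transition(state, event):
        nxt = TRANSITIONS.get((state, event))
        if nxt is None:
            raise ValueError(f"illegal transition: {state.name} on '{event}'")
        # In practice, 'disengage' would also trigger immediate access revocation,
        # and 'reengage' would trigger re-vetting before any access is restored.
        return nxt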

 

5.3       Automation of Non-human Identity Lifecycle

Having a definitive inventory of devices is another pillar of zero-trust operations. Many 'devices' are software-based bots or IoT hardware. These identities need the same diligent oversight as human third-party identities. idGenius's identity lifecycle management extends to non-human identities and service accounts, helping to complete your organization's compliance with new zero-trust directives.

 

5.4       Going Beyond

Automated analytics are essential for producing go/no-go decisions at runtime for each access request to a microsegment of the network. A set of attributes, such as user identity, device state, time, and location, is inspected as a unit to determine trust. But some attributes cannot be fully analyzed at the point of request. Anomalix's idGenius inspects logs from all layers of the tech stack to gain insight over a period of time. Insights from user and entity behavior analytics (UEBA) feed back as another attribute to the runtime analytics, improving ongoing trust decisions.
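
A Python sketch of that feedback loop (the scores, weights, and scoring function are illustrative assumptions): a behavior score computed offline from logs becomes one more attribute in the runtime decision:

    # Hypothetical UEBA scores, recomputed periodically from logs across the stack:
    # 0.0 means behavior matches the historical baseline, 1.0 means highly anomalous.
    UEBA_SCORES = {"alice": 0.1, "mallory": 0.9}

    def runtime_confidence(user_id, mfa_passed, device_ok):
        """Combine point-in-time attributes with the historical behavior score."""
        base = (0.5 if mfa_passed else 0.0) + (0.3 if device_ok else 0.0) + 0.2
        penalty = 0.4 * UEBA_SCORES.get(user_id, 0.5)  # unknown users assumed risky
        return max(0.0, base - penalty)

    # Identical point-in-time attributes, different histories, diverging decisions.
    print(runtime_confidence("alice", True, True))    # 0.96: access likely granted
    print(runtime_confidence("mallory", True, True))  # 0.64: stepped-up verification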

 

Anomalix's idGenius can complement zero-trust security controls with other aspects of trust that combine to provide a well-rounded risk mitigation strategy for an organization, namely criminal history search, social proofing, and project proofing. 

 

The search for criminal history is particularly important for prospective non-employees, both for cybersecurity and for the general security of the workplace. HRIS may not be involved in vetting non-employees to the extent that it vets prospective employees. This gap should be filled before any non-employee entities are allowed to access an organization's resources. It is not enough to trust the third-party vendor without knowing more about each of its staff members assigned to a project.

 

With social proofing, an organization can reduce the risk to its reputation from employees' or third-party staff's online activity in public forums by monitoring that activity and alerting on instances that do not align with the organization's values.

 

With project proofing, the risk of a third party's underperformance on projects and activities can be mitigated by searching for and reporting issues raised in earlier evaluations.

 

Managers can use any of these proofing scores to respond to imminent risks or existing issues as the need arises. 



[1] See this report by Cybercrime Magazine (6/3/2021), for example.

[2] See this Wall Street Journal report, for instance (9/29/2021).

[3] Training materials here; public- and private-sector commitments to increasing the nation's cybersecurity posture here, here, and here; outreach to minorities here; DHS internal initiative here.

[4] See https://therecord.media/cisa-aims-to-fill-all-50-statewide-cyber-coordinator-posts-by-years-end/, for example.

[5] Roger Grimes Teaches Data-Driven Defense (9/15/2021) and the Prevention section of "Nuclear Ransomware 3.0: It Is About To Get Much Worse" (10/27/2021).

[6] See "Security Network Auditing: Can Zero-Trust Be Achieved?" by Carl Garrett, https://www.sans.org/white-papers/39825, (September 202).

[7] https://www.idsalliance.org/wp-content/uploads/2019/07/IDSA_Zero-Trust_Whitepaper.pdf (July 2019)

[8] Zero Trust Reference Architecture: https://dodcio.defense.gov/Portals/0/Documents/Library/(U)ZT_RA_v1.1(U)_Mar21.pdf, p. 49 (February 2021).

[9] Zero Trust Architecture https://csrc.nist.gov/publications/detail/sp/800-207/final (August 2020)

[10] https://cloudsecurityalliance.org/artifacts/software-defined-perimeter-and-zero-trust/ (May 2020)

[11] https://cisa.gov/publication/zero-trust-maturity-model (draft June 2021)

[12] See "How to Create a Comprehensive Zero Trust Strategy" by David Shackleford, https://www.sans.org/white-papers/39790 (September 2020).

[13] An early reference to 'human firewall' can be found at https://scholarspace.manoa.hawaii.edu/bitstream/10125/41681/1/paper0532.pdf in a paper entitled "Combating Phishing Attacks: A Knowledge Management Approach" (January 2017), which emphasized that collaboration among staff on identifying social engineering attacks can be more productive than individual staff members working in isolation. That paper also cites the SANS white paper "Human Being Firewall", https://www.sans.org/white-papers/32998/ (January 2009); however, it stands for a concept quite different from later uses: for SANS, it is a single person whose job is to replicate the essential functionality of a firewall, translated to the inspection of human-based / social engineering attacks. The SANS article references https://www.networkworld.com/article/2342484/the-human-firewall.html (May 2003), which aligns more closely with recent uses of the term, and which references an organization called the Human Firewall Council that existed around 2000 (see https://www.researchgate.net/publication/268287455_Rebuilding_the_Human_Firewall) and whose domain humanfirewall.org is now (9/2021) unclaimed, but apparently was absorbed by the Information Systems Security Association in 2004 (see https://www.cnet.com/tech/services-and-software/human-firewall-gets-new-owner/). However, as of September 2021, either the ISSA eventually shut down the associated site or never revived it in the first place. Looking at ISSA.org at this time, there is no mention of the Human Firewall Council, the organization that may have first popularized, if not coined, the term 'human firewall'.

[14] https://blog.knowbe4.com/bec-fraud-and-ransomware-attacks-are-all-on-the-rise-and-costing-more-than-ever references a cyber insurance report from Coalition that indicates close to 50% of the attack vectors in 2020 and 2021 are from phishing. 

[15] Users should confirm out-of-band with the ostensible requesting party before sending money (in the event of business email compromise); before responding to requests for credentials tied to too-good-to-be-true offers, threats of losing money, or urgent opportunities (in the event of phishing); or before divulging helpful identifying information in a seemingly harmless pastime (in the case of Facebook quizzes).

[16] https://blog.knowbe4.com/deepfake-technology-is-cloning-a-voice-from-the-c-suite  

[17] See https://attack.mitre.org/# for an explanation of a generic attack chain.

[18] See the webinar "Ransomware Prevention with a Zero Trust Architecture" at https://brighttalk.com/webcast/10415/511370 (October 2021). Two other current challenges are exfiltration of data and the obscuring of attacks that are encrypted by virtue of ubiquitous SSL. The former is addressed by DLP solutions, and the latter is addressed by "inline content inspection" of SSL traffic,  both of which are elements independent of a zero-trust architecture.
