I write about Tech, Cyber and Marketing. Not necessarily in that order.
In my latest article about “The Rise Of Zero Trust Architecture”, I wrote about the broad and rapid adoption of this relatively new concept in the world of cybersecurity. However, there are still several other security architectures which are in use today:
Traditional network perimeter security is made up of many different parts, all of which work together to provide a security solution for the network.
Traditionally, network security begins with authenticating the user, generally with a username and password. This method is also referred to as one-factor authentication. With two-factor authentication, a second item must be verified, such as a mobile phone, a USB drive, or some other kind of token. At the more advanced end of the spectrum, there is also three-factor authentication, which involves a biometric trait of the user, such as a retinal or fingerprint scan.
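To make the second factor concrete, here is a minimal sketch of verifying a time-based one-time password (TOTP, the scheme behind most authenticator apps, specified in RFC 6238); the secret and the clock-drift window below are illustrative assumptions:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA-1)."""
    counter = timestamp // step                      # 30-second time window
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(secret: bytes, submitted: str, window: int = 1) -> bool:
    """Accept codes from the current step plus/minus `window` steps for clock drift."""
    now = int(time.time())
    return any(
        hmac.compare_digest(totp(secret, now + i * 30), submitted)
        for i in range(-window, window + 1)
    )
```

The username and password remain the first factor; only after that check succeeds would the server prompt for the six-digit code and call `verify_second_factor`.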
Once the user has been verified, a firewall then enforces the access policy by placing restrictions on what the user can reach within the network. This is a very effective way of keeping unauthorized users out of the network. Additionally, communications between two hosts on the network can be encrypted to provide a further layer of security.
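As a sketch of the idea (the role and zone names here are hypothetical, not any particular firewall's syntax), post-login restrictions can be modeled as an ordered rule list with a default-deny fallback:

```python
# Hypothetical rules: (user role, destination zone, allowed?). First match wins.
FIREWALL_RULES = [
    ("finance",  "finance_db", True),
    ("employee", "intranet",   True),
    ("employee", "finance_db", False),
]

def firewall_allows(role: str, zone: str) -> bool:
    """Walk the rule list in order; anything unmatched is denied by default."""
    for rule_role, rule_zone, allowed in FIREWALL_RULES:
        if rule_role == role and rule_zone == zone:
            return allowed
    return False  # default-deny: unknown traffic never passes
```

The important design choice is the final `return False`: a perimeter firewall should refuse anything it has not been explicitly told to allow.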
Some businesses may also deploy honeypots. A honeypot is essentially a network resource that acts as a decoy within the network itself. These can be used either as a surveillance tool or as a form of early-warning system because honeypots are not used for any legitimate business purpose. This means if a honeypot is accessed, normally something is wrong. Attacks on honeypots can be analyzed by security teams to keep up to date with new attacks. These findings can then be applied to further increase the level of security in the real network by highlighting previously unknown vulnerabilities.
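A toy illustration of the early-warning idea (the port number is an arbitrary choice, and a real honeypot would emulate a service rather than just drop connections): listen on a port no legitimate service uses, and record every connection attempt.

```python
import datetime
import socket

def run_honeypot(host: str = "0.0.0.0", port: int = 20123, max_events: int = 10) -> list:
    """Record connection attempts to a decoy port.

    No legitimate business purpose touches this port, so every event
    recorded here is worth a security team's attention.
    """
    events = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while len(events) < max_events:
            conn, addr = srv.accept()
            events.append((datetime.datetime.now(datetime.timezone.utc).isoformat(),
                           addr[0], addr[1]))
            conn.close()  # drop the connection; the log entry is what we wanted
    return events
```

In practice the events list would feed an alerting pipeline rather than be returned to the caller.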
Similar to honeypots, honeynets are decoy networks set up with deliberate security flaws. They are designed to entice attackers so that their methods can be analyzed and the security of the real network increased. More and more businesses are also adding network segmentation to their systems to bolster their security.
Despite its positives, traditional perimeter security is not entirely reliable at preventing Trojans and computer worms from spreading through your network. Traditionally, either anti-virus software or an intrusion prevention system (IPS) is used to combat this; these can detect and block the spread of such attacks.
Furthermore, while this system is excellent at preventing external threats, it is not effective at preventing internal ones. With the number of people working remotely from their own devices increasing, there is also a growing risk that contaminated devices may connect to a company’s network, potentially putting it at risk. This has led to the increased popularity of Zero-Trust architectures.
A VPN (Virtual Private Network) creates a private network across a network that is initially public. This allows the VPN user to send and receive data as if they were directly connected to the main private network itself, meaning any application running through the VPN can use the functionality, security, and management features of the private network it is connected to.
VPN technology was initially developed to allow remote users and branch offices to access applications and other resources hosted on the main office’s network. To gain access to the VPN, users must authenticate themselves using a password or a security certificate.
When a business makes use of VPN technology, it can help to ensure that remote workers and other offices can establish a secure connection to the head office’s network without the risk of an attacker infiltrating the network through the remote user.
In the context of computer networking, the practice of network segmentation is where you split a computer network down into a group of subnetworks. Each of these subnetworks is then called a network segment.
One of the security advantages of segmenting your network is that broadcast traffic is kept within the segment that produced it. As a result, the structure of the network is only visible from the inside.
Another advantage of segmenting your network is that if one segment is compromised, there is a reduced attack surface for an attacker to move through. Furthermore, certain types of cyber-attack only work on local networks, so it pays to segment the different areas of your systems according to their usage. For example, creating separate networks for your databases, web servers, and user devices helps keep your network more secure.
Network segmentation can also be used to ensure that people only have access to the resources that they need. This would be carried out by tactically distributing your resources to various networks and assigning specific individuals to each network segment.
Proper use of network segmentation to improve security involves splitting your network into those different subnetworks and giving each one a required level of authorization for access. Then, you should make sure a protocol is in place to restrict what can move between subnetworks.
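The splitting itself is straightforward with Python's standard `ipaddress` module; the address block and the database/web/user layout below are purely illustrative:

```python
import ipaddress

def segment_network(cidr: str, new_prefix: int) -> list:
    """Split one address block into equally sized subnetworks (segments)."""
    return list(ipaddress.ip_network(cidr).subnets(new_prefix=new_prefix))

def same_segment(ip_a: str, ip_b: str, segments) -> bool:
    """Hosts should talk freely only when both sit in the same segment."""
    return any(
        ipaddress.ip_address(ip_a) in net and ipaddress.ip_address(ip_b) in net
        for net in segments
    )

# Hypothetical layout: carve a /16 into /24 segments, one per usage type.
segments = segment_network("10.0.0.0/16", 24)
database_net, web_net, user_net = segments[0], segments[1], segments[2]
```

A check like `same_segment` would live in whatever enforces the inter-segment protocol (a router ACL or an internal firewall), not on the hosts themselves.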
Role-based access controls (RBAC) restrict access to certain systems based on the level of authorization a user has. The vast majority of companies with 500+ employees make use of role-based access controls, and small and medium-sized enterprises are increasingly adopting the technology too. To make the most effective use of RBAC, a company divides its user profiles into categories, generally based on job role, level of seniority, and the resources each individual needs given the first two factors.
For example, if an organization using RBAC had a junior member of the finance team log in to its network, that employee would have access to the lower-level financial data their job role requires, and no more. They would not be able to view, for instance, files or data belonging to the legal team, or files meant only for higher-ranking finance staff, such as the Finance Director.
When a business utilizes RBAC, it can be an incredibly effective way to ensure that sensitive data is seen only by individuals with permission to view it. It can also help prevent deliberate internal leaks of information.
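At its core, the finance example above reduces to a role-to-permission lookup; the role names and permission strings here are invented for illustration:

```python
# Hypothetical role-to-permission map mirroring the finance example above.
ROLE_PERMISSIONS = {
    "finance_junior":   {"finance:read_basic"},
    "finance_director": {"finance:read_basic", "finance:read_sensitive"},
    "legal":            {"legal:read"},
}

def can_access(role: str, permission: str) -> bool:
    """Grant access only if the user's role carries the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

Note that an unknown role falls through to an empty permission set, so it is denied everything by default.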
What is a Software Defined Perimeter?
One of the ways that you can create a zero-trust architecture within your organization is to create or use an SDP. We’ll be looking at what an SDP is so you can do just that.
Since cloud storage has become commonplace, the risk of cyber-attacks on cloud systems has grown, because cloud servers cannot be protected by traditional perimeter security measures. This led to the creation of the Software Defined Perimeter (SDP) working group in 2013, a research group whose key focus is to create a security system that can help prevent attacks on cloud systems.
Any findings from their research will be made free to use for the public and will not be subject to any usage fees or any other restrictions.
From its inception, the working group set out to build a security solution that is cost-efficient yet flexible and effective. During this work, the team identified three essential design requirements.
Firstly, they decided that their security architecture would need to confirm the identity of the user, the device they are using, and the permissions they hold to access certain resources. Next, they decided that cryptographic verification (the same fundamental technology behind what we know today as blockchain) would be the best way to ensure their security protocol is enforced. Finally, it was decided that the tools required to meet the first two requirements should be proven security tools already in the public domain.
SDP decided that their security architecture should be based on a control channel. This control channel would make use of the standard components the team thought best suited to the task: SAML, PKI, and mutual TLS.
The working group eventually published a paper based on this idea to gauge whether or not there was demand for such a system. This is where they named it Software Defined Perimeter.
There was a lot of interest in the work that SDP was doing, and this led to the release of version 1.0 of their specification in April 2014.
Their first design was made up of an Initiating Host, which gives the Controller information on which device is being used and by whom. This information is transmitted over a mutual TLS connection. The Controller then connects to an issuing certificate authority (CA) to confirm the identity of the device, and to an identity provider to verify the user. Once this information has been confirmed, the Controller provisions one or more mutual TLS connections linking the Initiating Host to the required Accepting Hosts. This design is effective against a wide range of network attacks, including man-in-the-middle, DDoS, and advanced persistent threats.
The original commercial SDP products were implemented as an overlay network for business applications, such as remote access to high-value data or protecting cloud systems from attack. The Initiating Host took the form of the client, and the Accepting Host became the Gateway.
The SDP Client itself is responsible for a wide variety of functions. Among them, it verifies the device being used and the user’s identity, and it routes whitelisted applications through to the protected applications they are authorized to reach.
The SDP Client has a real-time configuration to make sure that the mutual TLS VPN connection is only linked to items that the individual user is authorized to use. This means that the SDP Client serves the function of placing restrictions on access to certain data points based on the user’s level of authority. This is carried out after the user’s ID and device have both been verified.
The SDP Gateway serves as the point where the Mutual TLS connection to the SDP Client is terminated. In a topological sense, the Gateway will be implemented to be as close to the protected application as is practical.
Once the identity of the device requesting access has been confirmed and its level of permission established, the SDP Gateway receives the device’s IP address and certificate.
The SDP Controller serves as a trusted middleman between the backend security components, such as the identity provider and the certificate authority, and the SDP Client itself. Once the SDP Client has been verified and the user’s level of authority examined, the SDP Controller configures the SDP Client and the SDP Gateway so that they can establish a real-time connection over mutual TLS.
When all three of these components are correctly implemented, the SDP architecture provides a set of unique security properties for your system, listed below.
(1) Hiding Information
There is no DNS information, nor are there any visible ports within protected application infrastructure. Because of this, assets which are protected by SDP are called “dark” assets because they cannot be discovered, even if you scan for them.
(2) Pre-Authentication
The identity of the device attempting access is always verified before a connection is granted. The device is confirmed using an MFA token embedded in the TCP handshake or the mutual TLS architecture.
(3) Pre-Authorization
Users in an SDP system are given access only to the servers their role requires. The identity system communicates the user’s authorizations to the SDP Controller via a SAML assertion.
(4) Application Layer Access
Even when a user is granted access to an application, this is only at the application level, not the network level. The SDP also whitelists specific applications on the host device, which helps keep system communications at an app-to-app level.
The architecture of SDP is built from various standards-based components, including mutual TLS, SAML, and security certificates. Because it is made of standards-based parts, the SDP architecture can easily be integrated with other types of security systems, including data encryption systems and remote attestation systems.
By combining pre-authentication with pre-authorization, businesses can create networks that are invisible to unidentified hosts while granting known users only the permissions required by their organizational role.
A key aspect of SDP is that pre-authentication and pre-authorization must occur before a TCP connection is granted between the user and the protected application. Additionally, authorized users receive permissions only for specific applications, ensuring that compromised devices cannot move laterally across the network.
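A conceptual sketch of that pre-authentication step (an HMAC-based illustration in the spirit of single packet authorization, not the SDP specification itself; the device ID and the 30-second window are assumptions): the gateway verifies a token before any TCP session exists, so failed attempts never reveal the service.

```python
import hashlib
import hmac

def make_auth_token(secret: bytes, device_id: str, now: int, step: int = 30) -> str:
    """Client side: an HMAC over the device identity and the current time window."""
    msg = f"{device_id}:{now // step}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def gateway_accepts(secret: bytes, device_id: str, token: str, now: int) -> bool:
    """Gateway side: verify the token BEFORE establishing any TCP session.

    Packets failing this check are silently dropped, keeping the asset "dark".
    """
    expected = make_auth_token(secret, device_id, now)
    return hmac.compare_digest(expected, token)
```

Binding the token to a time window means a captured token cannot be replayed later, and the constant-time comparison avoids leaking information through timing.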