
Strategies for Combating Cloud Security Risks

by Stephen Lange, June 23rd, 2021


You see it in the news; you read about it online: Another company using the cloud had a security incident that resulted in the loss of revenue/data.

Why does this keep happening? What can you do to avoid being one of the victims?

Solving the cloud security problem isn’t easy. Still, there are some proven operational and security practices that you can adopt to significantly reduce the occurrence of security breaches. 

In this article, my focus will be on Amazon Web Services (AWS), but many of these suggestions can be applied to any cloud provider. As you will come to learn, there are many ways to approach security in the cloud; the strategies highlighted below are some of the paths you might consider adopting.

How Does Your Organization View Security?

Before we get into the cloud security recommendations, an important first step is to review how your organization views security. If security takes a back seat in your organization, the cloud might not be the best place for you to operate. 

With convenience comes opportunity. In the cloud, that convenience can quickly lead to unexpected outcomes and exposures if security isn’t a primary driver in the decisions that you make.

Setting the tone for security is best done early before any planning or implementation occurs in the cloud. In organizations that have successfully leveraged the cloud, early C-Suite directives that set the security expectations can go a long way towards ensuring an organization's overall success in the cloud.

Use Code Repositories and Infrastructure as Code

Code repositories are critical tools in software development. They make tracking changes over time trivial and provide easy-to-implement change review and approval processes. This kind of change management is well suited to cloud environments.

Perhaps one of the best aspects of cloud environments is that they enable an organization to embrace the concept of infrastructure as code using cloud-native (e.g., AWS CloudFormation) or third-party (e.g., HashiCorp Terraform) solutions.
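
To make this concrete, here is a minimal sketch of infrastructure as code using an AWS CloudFormation template in JSON (the resource name is illustrative; a real template would carry your own naming, tagging, and key configuration):

{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Sketch: an encrypted, versioned S3 bucket defined as code and reviewed like any other change",
    "Resources": {
        "AppDataBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "VersioningConfiguration": { "Status": "Enabled" },
                "BucketEncryption": {
                    "ServerSideEncryptionConfiguration": [
                        { "ServerSideEncryptionByDefault": { "SSEAlgorithm": "aws:kms" } }
                    ]
                }
            }
        }
    }
}

Because the template lives in the code repository, the bucket’s configuration, and any later change to it, goes through the same review and approval process as application code.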

From a security perspective, tracking changes in a code repository can be significantly easier than doing so in cloud-native logs. This is not to say that cloud-native logging has no importance, merely that reviewing historical changes and approvals in a code repository can be significantly easier.

In addition, there are numerous pre- and post-merge tools available that can check for policy-based violations. For example, if the organization has determined that 0.0.0.0/0 is prohibited in ingress security group rules, there are tools that can block such a change from being merged.
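
As an illustration (not tied to any particular scanning tool), such a check might reject a pull request containing an ingress rule like the following, where 0.0.0.0/0 opens SSH to the entire internet; the resource and security group names are hypothetical:

"SshFromAnywhere": {
    "Type": "AWS::EC2::SecurityGroupIngress",
    "Properties": {
        "GroupId": { "Ref": "AppSecurityGroup" },
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "CidrIp": "0.0.0.0/0"
    }
}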

Once you have adopted an infrastructure-as-code mindset, a scalable implementation framework, often referred to as an orchestration pipeline, is needed.

The Importance of Orchestration

If infrastructure as code is the language that defines what is created in the cloud, then the orchestration pipeline is the assembly line by which those instructions are implemented. 

Red Hat¹ defines orchestration as:

Orchestration is the automated configuration, management, and coordination of computer systems, applications, and services. Orchestration helps IT to more easily manage complex tasks and workflows.

By using an orchestration pipeline to implement the infrastructure as code being developed and stored in your code repository, several security efficiencies can be obtained:

1. Orchestration Pipelines Can Have Tailored Roles: Having team-specific orchestration pipelines can be beneficial from both an operational and security perspective. Operationally, it allows for more team independence with less potential impact on other teams.

From a security perspective, it allows for a more granular Identity and Access Management (IAM) role to be assigned to each group’s orchestration pipeline service account. For example, an application team pipeline role may not have permission to make security-related changes in the cloud environment (e.g., IAM, security groups, KMS), while a network or security pipeline does.
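
As a rough sketch (the exact services and actions would need to be tailored to your environment), an application team’s pipeline role could carry an explicit deny on the security-sensitive actions that only the network or security pipelines are allowed to perform:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenySecurityChangesFromAppPipeline",
            "Effect": "Deny",
            "Action": [
                "iam:*",
                "kms:*",
                "ec2:CreateSecurityGroup",
                "ec2:DeleteSecurityGroup",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:AuthorizeSecurityGroupEgress"
            ],
            "Resource": "*"
        }
    ]
}

Because an explicit deny always wins in IAM evaluation, this guardrail holds even if a later change accidentally adds broader allow statements to the same role.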

2. Separation of Duties Is Maintained: Sometimes referred to as dual approval, separation of duties ensures that the party requesting a change and the party approving it are different.

To application team developers, this separation simply changes where they submit their pull requests for restricted resource changes. Once a request is submitted, the restricted repository’s reviewers can review the change for merge approval and implementation.

3. Monitoring for Rogue Changes Is Easier: By restricting what and who is authorized to make changes, security monitoring solutions can easily identify and alert on non-standard changes for investigation.

If, for example, the orchestration service accounts are the only entities authorized to make changes, alerting on any change not made by one of these roles is easy to implement.

This isn’t to say that the organization won’t have break-glass emergency change procedures restricted to a subset of users and roles. It does mean that when those procedures are used, security will know about it, just as it will know about any other change that does not come from the standard orchestration roles.
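
One way this could be wired up is sketched below, assuming CloudTrail management events are delivered to Amazon EventBridge and that the orchestration roles share a common naming prefix (the account ID and prefix are placeholders). The event pattern matches any API call whose calling identity is anything but an orchestration role, which can then be routed to your alerting target:

{
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "userIdentity": {
            "arn": [
                { "anything-but": { "prefix": "arn:aws:sts::111122223333:assumed-role/orchestration-" } }
            ]
        }
    }
}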

4. Orchestration Keeps the Cloud Ephemeral: Implementing fragile solutions that are not resilient to change doesn’t leverage the power of the cloud. With orchestration and infrastructure as code, solutions can be torn down and redeployed easily, the same way every time.

By refreshing cloud-based compute resources frequently, an organization can ensure that the latest machine image operating system and security updates are deployed across their cloud infrastructure.

Operationally, it can also ensure that divergent implementations of code do not occur between environments. And from a cost perspective, depending on your business need, it can act as a cost-saving measure (e.g., if your organization does not require certain resources to run 24x7, it can be a way to reduce operational overhead by only deploying and running resources when required).
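
One common way to pick up fresh machine images on every redeployment is to resolve the AMI at deploy time rather than hard-coding it. A minimal CloudFormation sketch, assuming the public Amazon Linux 2 SSM parameter (verify the parameter path and instance type for your environment):

{
    "Parameters": {
        "LatestAmiId": {
            "Type": "AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>",
            "Default": "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
        }
    },
    "Resources": {
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": { "Ref": "LatestAmiId" },
                "InstanceType": "t3.micro"
            }
        }
    }
}

When the pipeline tears this stack down and redeploys it, the instance comes back on the current image rather than an aging snapshot.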

Practice Least Privilege

The concept of “least privilege” is a time-tested security mantra for good reason. In the cloud, there are many ways to effectively reduce the potential blast radius. Scoping actions, scoping resources, and adding conditions are three critical parts of that strategy.

Scoping actions and scoping resources can be summed up as follows: don’t use wildcards (*) for actions or resources unless you have to. Wildcards may be convenient, but they almost always expose more functionality and data than is really needed.

Take the time to review the capabilities that each service offers and only provide access to the actions that are required for the intended role. It is also recommended that custom roles and policies be used whenever possible for two primary reasons:

  • Updates to cloud provider-managed roles and policies are made automatically, without notification.
  • Cloud provider-managed roles and policies are often less restrictive than they need to be.

Only grant access to specific resources that are needed for the role to be effective. This seems like such a simple and logical thing to do, but often, people cut corners and grant way too much access. 
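
As a small illustration of scoping both actions and resources (the table name, Region, and account ID are placeholders), a read-only role for a single DynamoDB table might look like the following rather than granting dynamodb:* on all resources:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyOrdersTable",
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:BatchGetItem",
                "dynamodb:Query"
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/orders"
        }
    ]
}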

Try to always think about what your exposure will be if/when you are compromised. The more you can contain the break, the less impact the compromise will theoretically have.

Setting Up Conditions for Success

Cloud providers like AWS support the use of conditions in the IAM policies that you define. For example, using condition statements, IAM policies can be scoped to certain IP addresses with the “aws:SourceIp” condition key or to specific VPCs with the “aws:SourceVpc” condition key:

"Condition": {     
    "IpAddressIfExists": {"aws:SourceIp" : ["xxx"] },         
    "StringEqualsIfExists" : {"aws:SourceVpc" : ["yyy"]}  
}

These are just a few of the conditions available for use. Conditions can play a very important role in preventing a security incident from being successful. Consider a situation where an AWS service account access and secret key were to be exfiltrated. 

In a situation where the IAM policy associated with these credentials did not have conditions attached, the credentials could very well be used by the attacker from an outside location. However, if the IAM policy had contained additional condition checks, the chances of this occurring could be greatly reduced.

Reducing Scope at the Account Level with SCP Policies

Up until now, we have focused on how to restrict and reduce access at the user and service account level through IAM policy action and resource scoping. Through the use of AWS Organizations Service Control Policies (SCPs), AWS customers can implement controls that all IAM principals (users and roles) in an account must adhere to.

Some examples of what can be done with SCP policies include:

  • Account may only operate in certain AWS regions
  • Account may only deploy certain EC2 instance types
  • Account requires MFA to be enabled before taking action

SCP policies require careful consideration and design but if implemented correctly can provide a valuable additional layer of defense for organizations operating in Amazon.
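
As one hedged example (the approved regions, the global services that must be exempted, and any exempted roles all need to be adapted to your organization), a region-restriction SCP generally takes the form of a deny on all actions when the requested region falls outside an approved list:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": [
                "iam:*",
                "organizations:*",
                "route53:*",
                "support:*"
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "us-west-2"]
                }
            }
        }
    ]
}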

Use Custom KMS Keys Whenever Possible

Out of the box, most Amazon services are configured to use account-specific default encryption keys generated when an account is created. 

When you use default encryption, permission to decrypt is granted automatically to any principal that has been granted access to the resource through an IAM policy.

For an additional layer of protection, custom KMS keys can be created in the AWS KMS service. When a custom KMS key is used with a resource (for example, an S3 bucket), the user or service account must be granted permission to use the KMS key in addition to the permissions granted for the underlying service.

Let’s consider a simple but potentially realistic scenario. Within an organization, the operator role has an IAM policy that grants access to an S3 bucket called “operator”. The role has get/put access to this bucket and this bucket alone.

{
    "Effect": "Allow",
    "Action": [
        "s3:PutObject",
        "s3:GetObject"
    ],
    "Resource": [
        "arn:aws:s3:::operator*"
    ]
}

During maintenance, a mistake is made that accidentally grants the role access to all S3 buckets:

{
    "Effect": "Allow",
    "Action": [
        "s3:PutObject",
        "s3:GetObject"
    ],
    "Resource": [
        "arn:aws:s3:::*"
    ]
}

If the organization is using SSE-S3 (default) encryption, the role in question would now very likely have access to other buckets containing sensitive information that it is not approved to access.

If the organization were instead using bucket-specific KMS keys, and the change to the role did not also add permission to use the KMS key assigned to every other bucket, the wildcard resource path would not give the operator role access to unapproved buckets: the missing permissions on those buckets’ KMS keys would deny the access.
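
In other words, for the operator role to reach a bucket protected by its own customer-managed key, its policy (or the key’s policy) would also need a statement along these lines; the key ARN shown is a placeholder:

{
    "Sid": "AllowUseOfOperatorBucketKey",
    "Effect": "Allow",
    "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey"
    ],
    "Resource": "arn:aws:kms:us-east-1:111122223333:key/1111aaaa-22bb-33cc-44dd-5555eeee6666"
}

Without an equivalent statement for every other bucket’s key, the accidental S3 wildcard above does not translate into readable data.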

Remember, an effective defense-in-depth strategy involves asking yourself a series of “what if” questions about your solution and implementing a layered design to reduce the potential for failure.

Don’t Rely on Implicit Deny; Use a Default Deny

I wanted to take a moment to talk a little bit more about the Amazon S3 storage service. It has historically been the single biggest source of data exposure in AWS due to user error and misconfiguration.

As AWS will point out, access to S3 buckets is implicitly denied, meaning that unless the access is granted through either an IAM policy or bucket policy, there is no access granted to the bucket.

Granting access to the bucket can be done in either an IAM policy or in the bucket policy. When both exist, according to AWS:

Identity-based policies and resource-based policies grant permissions to the identities or resources to which they are attached. When an IAM entity (user or role) requests access to a resource within the same account, AWS evaluates all the permissions granted by the identity-based and resource-based policies. The resulting permissions are the total permissions of the two types. If an action is allowed by an identity-based policy, a resource-based policy, or both, then AWS allows the action. An explicit deny in either of these policies overrides the allow.

AWS Policy Evaluation Image ©Amazon.com

The last sentence of this explanation is important to note, especially when considering your strategy for S3 buckets that you consider to be highly sensitive.

Given that AWS evaluates the union of these permissions, access can end up being granted in multiple places (an IAM policy and the bucket policy). While convenient, this can lead to permission creep over time.

One way to combat this is to use a default deny within the resource (in this case, bucket) policy: deny all principals with a wildcard, carve out the approved principals as exceptions, and grant explicit access only to them.

With a default deny, permissions granted in an IAM policy must also be mirrored in the bucket policy’s explicit exceptions and grants. While this adds another layer of complexity to S3 bucket management, the security improvement it provides can justify the additional effort.
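
A sketch of that pattern, using a hypothetical sensitive bucket and an approved operator role, relies on an explicit deny with a condition that exempts only the approved principals (remember that an explicit deny cannot be overridden by an allow elsewhere):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DefaultDeny",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::sensitive-bucket",
                "arn:aws:s3:::sensitive-bucket/*"
            ],
            "Condition": {
                "StringNotLike": {
                    "aws:PrincipalArn": "arn:aws:iam::111122223333:role/operator"
                }
            }
        }
    ]
}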

Use of default deny will not work for every use case and needs to be carefully evaluated to determine if its use is appropriate for your individual situation.

Account Settings for Block Public Access

AWS recently added an account-level feature that can be applied to all S3 buckets to block public access under different scenarios; enabling it is highly recommended. According to AWS:

Public access is granted to buckets and objects through access control lists (ACLs), bucket policies, access point policies or all. In order to ensure that public access to all your S3 buckets and objects is blocked, turn on Block all public access. These settings apply account-wide for all current and future buckets and access points. AWS recommends that you turn on Block all public access, but before applying any of these settings, ensure that your applications will work correctly without public access. If you require some level of public access to your buckets or objects, you can customize the individual settings below to suit your specific storage use cases.
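
The account-level setting is typically turned on through the S3 console or the S3 Control API. The same four flags can also be pinned on individual buckets in your infrastructure as code; a bucket-level CloudFormation sketch (resource name illustrative) looks like this:

"ExampleBucket": {
    "Type": "AWS::S3::Bucket",
    "Properties": {
        "PublicAccessBlockConfiguration": {
            "BlockPublicAcls": true,
            "BlockPublicPolicy": true,
            "IgnorePublicAcls": true,
            "RestrictPublicBuckets": true
        }
    }
}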

Conclusion

I hope that I have provided you with some considerations to keep in mind as you begin to map out your cloud security strategy. As I mentioned at the beginning of this article, there are multiple ways of addressing security in the cloud.

How you ultimately manage, implement, and audit changes to your cloud implementation will impact your overall success in the cloud. I have provided you with a few strategies to consider. 

References

1. Red Hat, “What is orchestration?” https://www.redhat.com/en/topics/automation/what-is-orchestration