
Securing Your EC2 to S3 Connection

by Guille Ojeda, September 14th, 2023

Too Long; Didn't Read

Step-by-step instructions to create a VPC Endpoint, including a security group and an Endpoint Policy, and a restrictive S3 Bucket Policy that only allows access from EC2.

You've deployed your app on an EC2 instance, and there's a file in an S3 bucket that you need to access from the app. You created a public S3 bucket and uploaded the file, and it works! But then you read somewhere that keeping your private files in a public S3 bucket is a bad idea, so you set out to fix it.


Set Up a Restrictive Bucket Policy and Add a VPC Endpoint With an Endpoint Policy

Here's the initial setup, and you can deploy it here:

Deploy initial setup


This is what it looks like before the solution:


What it looks like before the solution




This is what it looks like with the solution:

What it looks like with the solution



Step-by-Step Instructions to Secure the Connection From EC2 to S3


Step 0: Test that the connection is working

  • Open the CloudFormation console
  • Select the initial state stack
  • Click the Outputs tab
  • Copy the value for EC2InstancePublicIp
  • Paste it in the browser, append :3000, and hit Enter/Return
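
If you prefer the command line, a quick check like this works too (a sketch; the placeholder stands in for your EC2InstancePublicIp value):


# Replace REPLACE_EC2_PUBLIC_IP with the EC2InstancePublicIp output from CloudFormation
curl http://REPLACE_EC2_PUBLIC_IP:3000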


Step 1: Create a VPC Endpoint

  • Go to the VPC console
  • In the panel on the left, click Endpoints
  • Click Create Endpoint
  • Enter a name
  • In the Services section, enter S3 in the search box, and select the one that says 'com.amazonaws.your_region.s3' (replace 'your_region' with the region where you deployed the initial setup, which is where the S3 bucket is). Then select the one that says Interface in the Type column.
  • For VPC, select SimpleAWSVPC from the dropdown list
  • Under Subnets, select us-east-1a and us-east-1b, and for each click the dropdown and select the only available subnet
  • Under Security groups, select the one called VPCEndpointSecurityGroup
  • Under Policy, pick Full Access for now (we'll change that in Step 2).
  • Open Additional settings
  • Check Enable DNS name
  • Uncheck Enable private DNS only for inbound endpoint
  • Click Create endpoint
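
If you prefer the AWS CLI, the step above looks roughly like this (a sketch; every ID is a placeholder you'd replace with values from your own account, and the "private DNS only for inbound endpoint" checkbox maps to a separate DNS option not shown here):


# Create an Interface endpoint for S3 in SimpleAWSVPC
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.s3 \
    --vpc-id vpc-REPLACE_VPC_ID \
    --subnet-ids subnet-REPLACE_A subnet-REPLACE_B \
    --security-group-ids sg-REPLACE_ENDPOINT_SG \
    --private-dns-enabled \
    --region us-east-1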


Step 2: Configure the VPC Endpoint Policy

  • In the Amazon VPC console, go to Endpoints
  • Select the Endpoint you just created
  • Click the Policy tab
  • Click Edit Policy
  • Modify the following JSON by replacing the placeholder values REPLACE_BUCKET_NAME and REPLACE_VPC_ID with the name of your S3 bucket and the ID of SimpleAWSVPC. Then paste it into the Edit Policy page, and click Save.


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccessToSpecificBucket",
            "Principal": "*",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::REPLACE_BUCKET_NAME",
                "arn:aws:s3:::REPLACE_BUCKET_NAME/*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:sourceVpc": "REPLACE_VPC_ID"
                }
            }
        }
    ]
}
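
The same policy can also be applied from the CLI (a sketch; save the JSON above as endpoint-policy.json and replace the endpoint ID placeholder):


aws ec2 modify-vpc-endpoint \
    --vpc-endpoint-id vpce-REPLACE_VPC_ENDPOINT_ID \
    --policy-document file://endpoint-policy.json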


Step 3: Set up a more restrictive bucket policy

  • Open the S3 console
  • Click on the bucket that you created with the initial setup
  • Click on the Permissions tab
  • Scroll down to Bucket Policy and click Edit
  • Paste the following policy, replacing the placeholders REPLACE_BUCKET_NAME and REPLACE_VPC_ENDPOINT_ID with their values (REPLACE_VPC_ENDPOINT_ID is not the same as REPLACE_VPC_ID from the previous step). Then click Save changes


{
    "Version": "2012-10-17",
    "Id": "Policy1415115909153",
    "Statement": [
        {
            "Sid": "Access-only-from-SimpleAWSVPC",
            "Effect": "Deny",
            "Principal": "*",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::REPLACE_BUCKET_NAME",
                "arn:aws:s3:::REPLACE_BUCKET_NAME/*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "aws:SourceVpce": "REPLACE_VPC_ENDPOINT_ID"
                }
            }
        },
        {
            "Sid": "Access-from-everywhere",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::REPLACE_BUCKET_NAME",
                "arn:aws:s3:::REPLACE_BUCKET_NAME/*"
            ]
        }
    ]
}
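
Again, if you'd rather use the CLI, something like this applies the same bucket policy (a sketch; save the JSON above as bucket-policy.json):


aws s3api put-bucket-policy \
    --bucket REPLACE_BUCKET_NAME \
    --policy file://bucket-policy.json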


Step 4: Test that the connection is still working

Go back to the browser tab where you pasted the public IP address of the instance and refresh the page. The app should work just like it did before.
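
You can also verify the restriction from the command line (a sketch; it assumes the thankyou.txt object the app writes, and that the instance profile from the initial setup allows S3 access):


# From the EC2 instance (inside SimpleAWSVPC), this should still succeed
aws s3api get-object --bucket REPLACE_BUCKET_NAME --key thankyou.txt thankyou.txt --region us-east-1

# From your own computer (outside the VPC), the same command should now fail with Access Denied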

Step 5: Empty the S3 bucket

Before deleting the CloudFormation stack, you'll need to empty the S3 bucket! The Node.js app puts a file in there.
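
A minimal way to do that from the CLI (the bucket name is a placeholder):


# Delete every object in the bucket so CloudFormation can delete the bucket itself
aws s3 rm s3://REPLACE_BUCKET_NAME --recursive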


How does this solution make the connection from EC2 to S3 more secure?

VPC Endpoints

First of all, you'll notice that a VPC Endpoint is for one specific service, S3 in this case. If you wanted to connect to other services, you'd need to create a separate VPC Endpoint for each one.


The second thing you'll notice is that there are 2 types of endpoints: Interface and Gateway. Gateway endpoints are only for S3 and DynamoDB, while Interface endpoints are for nearly everything. Gateway endpoints are simpler, so use them when you can (except if you're writing a newsletter and want to show a few things about Interface endpoints).


Interface endpoints work by creating an Elastic Network Interface in every subnet where you deploy them, and automatically routing the traffic that's addressed to the service's public endpoint to those ENIs. That way, you don't need to make any changes to the code. This only works if you check the Enable DNS name option.
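
You can see those ENIs for yourself once the endpoint exists; something like this should list them (the endpoint ID is a placeholder):


# List the network interfaces created by the Interface endpoint
aws ec2 describe-vpc-endpoints \
    --vpc-endpoint-ids vpce-REPLACE_VPC_ENDPOINT_ID \
    --query 'VpcEndpoints[0].NetworkInterfaceIds'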


VPC Endpoint Policies

The existing policy is a Full Access policy, which is the default policy when a VPC endpoint is created. It allows all actions on the S3 service from anyone.

Instead of that, we're setting up a more restrictive policy, which only allows access to our specific bucket, and denies access to all other buckets.

VPC Endpoint policies are IAM resource policies, and as such, anything that's not explicitly allowed is implicitly denied.

Restrictive S3 bucket policies

Bucket policies are another type of IAM resource policy. Obviously, this bucket policy will only apply to our S3 bucket. It's important to add it because, while we've restricted what the VPC Endpoint can be used for, the S3 bucket can still be accessed from outside the VPC (e.g. from the public internet). This bucket policy is the one that's going to prevent that, restricting access to only the VPC Endpoint.

Discussing Connection Security to S3

In this case, I kept internet access for the VPC and for the EC2 instance itself, just to make it easier to trigger the code with an HTTP request. This solution is a good idea in these cases because traffic to S3 doesn't go over the public internet, but admittedly, the public internet is a viable alternative.


Where this solution matters more is when you don't have access to the internet. Sure, adding it is rather simple, but you're either exposing yourself unnecessarily by giving your instances a public IP address they don't need, or you're paying for a NAT Gateway. In those cases, VPC Endpoints are a much simpler, safer, and cheaper solution.


Conceptually, you can think of this as giving the S3 service a private IP address inside your VPC. In reality, what you're doing is creating a private IP address in your VPC that leads to the S3 service, so that conception is pretty accurate! Behind the scenes (and you can see this easily), the VPC service creates an Elastic Network Interface (ENI) in every subnet where you deploy the VPC Endpoint. Those ENIs will forward the traffic to the S3 service endpoints that are private to the AWS network.


Also, behind the scenes, there's a Route 53 Private Hosted Zone that resolves the S3 address to the private IPs of those ENIs instead of to the public IPs of the public endpoints. That's why you don't need to change the code: your code depends on the address of the S3 service, and the private hosted zone takes care of resolving it to a different address. You can't see this private hosted zone; it's managed by AWS and hidden from users.
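
One quick way to see this in action (a sketch; the hostname assumes the us-east-1 region used elsewhere in this article):


# Run from the EC2 instance: the S3 hostname should resolve to private IPs from your subnets
nslookup s3.us-east-1.amazonaws.com

# Run from your own computer: the same hostname resolves to public S3 IP addresses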


Best Practices for S3 Security

Operational Excellence

  • Monitor and Alert Endpoint Health: Monitor the health of your VPC endpoints using CloudWatch metrics. Any unusual activity or degradation in performance should trigger alerts. This could also help you detect a security incident!
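
As a sketch of what that could look like (this assumes the AWS/PrivateLinkEndpoints CloudWatch namespace and PacketsDropped metric that interface endpoints publish, plus a hypothetical SNS topic; double-check the metric and dimension names in your account before relying on it):


# Alarm if the Interface endpoint starts dropping packets (endpoint ID and SNS topic ARN are placeholders)
aws cloudwatch put-metric-alarm \
    --alarm-name simple-aws-s3-endpoint-drops \
    --namespace AWS/PrivateLinkEndpoints \
    --metric-name PacketsDropped \
    --dimensions Name="VPC Endpoint Id",Value=vpce-REPLACE_VPC_ENDPOINT_ID \
    --statistic Sum \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 0 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:REPLACE_ACCOUNT_ID:REPLACE_TOPIC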

Security

  • Least Privilege Access to Bucket: This is basically what we did in Step 3: We disabled public access and implemented a policy that only allows reads from the VPC. Try reading from that S3 bucket from your own computer (replacing the bucket name with yours): aws s3api get-object --bucket 12ewqaewr2qqq --key thankyou.txt thankyou.txt --region us-east-1
  • Regularly Audit IAM Policies: Regularly review and tighten your IAM policies. Not only for the VPC Endpoint and S3 bucket but also for the EC2 instance!
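
A quick way to audit both policies from the CLI (a sketch; the endpoint ID and bucket name are placeholders):


# Review the Endpoint Policy currently attached to the VPC Endpoint
aws ec2 describe-vpc-endpoints \
    --vpc-endpoint-ids vpce-REPLACE_VPC_ENDPOINT_ID \
    --query 'VpcEndpoints[0].PolicyDocument'

# Review the bucket policy
aws s3api get-bucket-policy --bucket REPLACE_BUCKET_NAME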

Reliability

  • Use Multiple Subnets in Different AZs: Each subnet gets one ENI, so if you distribute your subnets in several AZs, your VPC Endpoint is highly available within the region (i.e., it can continue functioning if an Availability Zone fails).

Performance Efficiency

  • Choose the Right VPC Endpoint Type: Choose the right type of VPC Endpoint based on your workload. For S3, a Gateway Endpoint works best. I'll leave it to you to figure out how to create it (=.

Cost Optimization

  • Delete Unused VPC Endpoints: Regularly delete any unused VPC endpoints to avoid paying for stuff you don't use.
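
Cleaning one up from the CLI looks like this (the endpoint ID is a placeholder):


aws ec2 delete-vpc-endpoints --vpc-endpoint-ids vpce-REPLACE_VPC_ENDPOINT_ID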


Also published here.