AWS Security Blog

How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker

Docker enables you to package, ship, and run applications as containers. This approach provides a comprehensive abstraction layer that allows developers to “containerize” or “package” any application and have it run on any infrastructure. Docker containers are analogous to shipping containers in that they provide a standard and consistent way of shipping almost anything.

One of the challenges when deploying production applications using Docker containers is deciding how to handle run-time configuration and secrets. Secrets are anything to which you want to tightly control access, such as API keys, passwords, and certificates. Injecting secrets into containers via environment variables in the Docker run command or the Amazon EC2 Container Service (ECS) task definition is the most common method of secret injection. However, this method may not provide the desired level of security because environment variables can be shared with any linked container, read by any process running on the same Amazon EC2 instance, preserved in intermediate layers of an image, and viewed via the Docker inspect command or ECS API calls. You could also bake secrets into the container image, but someone could still access the secrets via the Docker build cache.
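To illustrate the problem, the following is a minimal, hypothetical sketch of a task definition fragment that injects a secret as a plaintext environment variable; anyone who can call the ECS DescribeTaskDefinition API can read this value. The family, container name, image, and variable shown here are placeholders, not part of this walkthrough.

{
  "family": "example-app",
  "containerDefinitions": [
    {
      "name": "example-container",
      "image": "example-image",
      "memory": 256,
      "environment": [
        { "name": "DB_PASSWORD", "value": "this-plaintext-value-is-visible-to-anyone-with-ECS-read-access" }
      ]
    }
  ]
}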

In this blog post, I will show you how to store secrets on Amazon S3 and use AWS Identity and Access Management (IAM) roles to grant access to those stored secrets, using an example WordPress application deployed as a Docker image on ECS. Using IAM roles means that developers and operations staff do not have the credentials to access secrets; only the application and the staff responsible for managing the secrets can access them. The deployment model for ECS ensures that tasks run on EC2 instances dedicated to the same AWS account and are not shared between customers, which provides sufficient isolation between different container environments.

IAM roles for EC2

In order to store secrets safely on S3, you need to set up either an S3 bucket policy or an IAM policy to ensure that only the required principals have access to those secrets. You could create IAM users and distribute the AWS access and secret keys to the EC2 instance; however, it is a challenge to distribute the keys securely to the instance, especially in a cloud environment where instances are regularly spun up and spun down by Auto Scaling groups. This is where IAM roles for EC2 come into play: they allow you to make secure AWS API calls from an instance without having to worry about distributing keys to the instance.

Instead of creating and distributing the AWS credentials to the instance, do the following:

  1. Create an IAM role.
  2. Define which accounts or AWS services can assume the role.
  3. Define which API actions and resources your application can use after assuming the role.
  4. Specify the role that is used by your instances when launched.
  5. Have the application retrieve a set of temporary, regularly rotated credentials from the instance metadata and use them.
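As a concrete sketch of steps 1 through 3 and step 5, the following hypothetical AWS CLI commands (bash syntax) create a role that EC2 instances can assume and grant it read access to a single secrets object. The role name, policy name, and bucket name are placeholders; in this walkthrough, the CloudFormation template in Step 1 creates the required role for you.

# Steps 1 and 2: create a role that the EC2 service is allowed to assume.
aws iam create-role --role-name SecretsAccessRole \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# Step 3: allow the role to read only the secrets object it needs.
aws iam put-role-policy --role-name SecretsAccessRole --policy-name ReadSecrets \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::SECRETS_BUCKET_NAME/db_credentials.txt"
    }]
  }'

# Step 5: on the instance, the AWS CLI and SDKs pick up the temporary,
# regularly rotated credentials automatically; you can also view them via
# the instance metadata service.
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/SecretsAccessRole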

Amazon VPC S3 endpoints

In order to secure access to secrets, it is a good practice to implement a layered defense approach that combines multiple mitigating security controls to protect sensitive data. Though you can define S3 access in IAM role policies, you can implement an additional layer of security in the form of an Amazon Virtual Private Cloud (VPC) S3 endpoint to ensure that only resources running in a specific Amazon VPC can reach the S3 bucket contents. Amazon VPC S3 endpoints enable you to create a private connection between your Amazon VPC and S3 without requiring access over the Internet, through a network address translation (NAT) device, a VPN connection, or AWS Direct Connect.

The walkthrough: How to create a WordPress application on ECS that uses S3 to store database credentials

The rest of this blog post will show you how to set up and deploy an example WordPress application on ECS, and use Amazon Relational Database Service (RDS) as the database and S3 to store the database credentials. The following diagram shows this solution.

The standard way to pass the database credentials to the ECS task is via an environment variable in the ECS task definition. Because many operators could have access to the database credentials, I will show how to store the credentials in an S3 secrets bucket instead. This S3 bucket is configured to allow read access only from instances and tasks launched in a particular VPC, and its bucket policy enforces encryption of the secrets at rest and in flight.

Diagram showing this blog post's solution

Walkthrough prerequisites and assumptions

For this walkthrough, you will need to run the commands on a computer with Docker installed (minimum version 1.9.1) and with the latest version of the AWS CLI installed. You will use the US East (N. Virginia) Region (us-east-1) to run the sample application. If you are using a Windows computer, ensure that you run all the CLI commands in a Windows PowerShell session.

How to create the WordPress application

In this section, I will explain the steps needed to set up the example WordPress application using S3 to store the RDS MySQL database credentials. I will launch an AWS CloudFormation template to create the base AWS resources and then show the steps to create the S3 bucket to store credentials and set the appropriate S3 bucket policy to ensure the secrets are encrypted at rest and in flight, and that the secrets can be accessed only from a specific Amazon VPC. Finally, I will build the Docker container image and publish it to Amazon EC2 Container Registry (ECR).

Step 1: Create the AWS resources using the provided CloudFormation template

First, create the base resources needed for the example WordPress application:

  1. When you open the provided CloudFormation template, it opens the CloudFormation console in the US East (N. Virginia) Region. Click Next.
  2. On the Specify Details page, type values for DbPassword and KeyName, and then click Next. Take note of the value you provide for DbPassword because you will need it in subsequent steps.
  3. Accept the default values on the Options page, and then click Next.
  4. On the Review page, select the I acknowledge that this template might cause AWS CloudFormation to create IAM resources check box, and then click Create. The creation process can take approximately 10 minutes.

This CloudFormation template creates:

  • An Amazon VPC with two public subnets.
  • An ECS cluster to launch the WordPress ECS service.
  • An ECS instance where the WordPress ECS service will run.
  • An ECR repository for the WordPress Docker image.
  • An ECS task definition that references the example WordPress application image in ECR.
  • An RDS MySQL instance for the WordPress database.
  • An S3 bucket with versioning enabled to store the secrets.
  • A CloudWatch Logs group to store the Docker log output of the WordPress container.

Step 2: Add a policy to the S3 secrets bucket

The bucket that will store the secrets was created by the CloudFormation stack in Step 1. To obtain the S3 bucket name, run the following AWS CLI command on your local computer.

aws cloudformation describe-stacks --stack-name ManagingSecretsS3Blog --region us-east-1 --query 'Stacks[].Outputs[?OutputKey==`SecretsStoreBucket`].OutputValue' --output text

Add a bucket policy to the newly created bucket to ensure that all secrets are uploaded to the bucket using server-side encryption and that all of the S3 commands are encrypted in flight using HTTPS. Create a new file on your local computer called policy.json with the following policy statement. Be sure to replace SECRETS_BUCKET_NAME with the name of the bucket created earlier.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::SECRETS_BUCKET_NAME/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    },
    {
      "Sid": " DenyUnEncryptedInflightOperations",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::SECRETS_BUCKET_NAME/*",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": false
        }
      }
    }
  ]
}

Now add this new JSON file with the policy statement to the S3 bucket by running the following AWS CLI command on your local computer. This command extracts the S3 bucket name from the value of the CloudFormation stack output parameter named SecretsStoreBucket and passes it into the S3 PutBucketPolicy API call.

aws s3api put-bucket-policy --bucket $(aws cloudformation describe-stacks --stack-name ManagingSecretsS3Blog --region us-east-1 --query 'Stacks[].Outputs[?OutputKey==`SecretsStoreBucket`].OutputValue' --output text) --policy file://policy.json

Step 3: Upload the database credentials file to S3

Now that you have created the S3 bucket, you can upload the database credentials to the bucket. Create a database credentials file on your local computer called db_credentials.txt with the content: WORDPRESS_DB_PASSWORD=DB_PASSWORD. Be sure to replace the value of DB_PASSWORD with the value you passed into the CloudFormation template in Step 1.
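For example, on a Linux or macOS computer you could create the file as follows (the password value is a placeholder that you replace with your own):

cat > db_credentials.txt <<'EOF'
WORDPRESS_DB_PASSWORD=DB_PASSWORD
EOF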

Upload this database credentials file to S3 with the following command. This command extracts the S3 bucket name from the value of the CloudFormation stack output parameter named SecretsStoreBucket and passes it into the S3 copy command and enables the server-side encryption on upload option.

aws s3 cp db_credentials.txt s3://$(aws cloudformation describe-stacks --stack-name ManagingSecretsS3Blog --region us-east-1 --query 'Stacks[].Outputs[?OutputKey==`SecretsStoreBucket`].OutputValue' --output text)/db_credentials.txt --sse

Notice how I have specified the server-side encryption option --sse when uploading the file to S3. If you try uploading without this option, you will get an error because the S3 bucket policy enforces that S3 uploads use server-side encryption.
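Optionally, you can confirm that the object was stored with server-side encryption by checking its metadata; in the following sketch (bash syntax), the response should include "ServerSideEncryption": "AES256".

aws s3api head-object --key db_credentials.txt --region us-east-1 \
  --bucket $(aws cloudformation describe-stacks --stack-name ManagingSecretsS3Blog --region us-east-1 --query 'Stacks[].Outputs[?OutputKey==`SecretsStoreBucket`].OutputValue' --output text)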

Step 4: Create the S3 VPC endpoint to restrict access to the S3 bucket from the Amazon VPC

Now that you have uploaded the credentials file to the S3 bucket, you can lock down access to the S3 bucket so that all PUT, GET, and DELETE operations can only happen from the Amazon VPC. Accomplish this access restriction by creating an S3 VPC endpoint and adding a new condition to the S3 bucket policy that enforces operations to come from this endpoint.

The command to create the S3 VPC endpoint follows. This command extracts the VPC and route table identifiers from the CloudFormation stack output parameters named VPC and RouteTable, and passes them into the EC2 CreateVpcEndpoint API call.

aws ec2 create-vpc-endpoint --vpc-id $(aws cloudformation describe-stacks --stack-name ManagingSecretsS3Blog --region us-east-1 --query 'Stacks[].Outputs[?OutputKey==`VPC`].OutputValue' --output text) --route-table-ids $(aws cloudformation describe-stacks --stack-name ManagingSecretsS3Blog --region us-east-1 --query 'Stacks[].Outputs[?OutputKey==`RouteTable`].OutputValue' --output text) --service-name com.amazonaws.us-east-1.s3 --region us-east-1

You should see output from the command that is similar to the following.

{
    "VpcEndpoint": {
        "PolicyDocument": "{\"Version\":\"2008-10-17\",\"Statement\":[{\"Sid\":\"\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"*\",\"Resource\":\"*\"}]}",
        "VpcId": "vpc-1a2b3c4d",
        "State": "available",
        "ServiceName": "com.amazonaws.us-east-1.s3",
        "RouteTableIds": [
            "rtb-11aa22bb"
        ],
        "VpcEndpointId": "vpce-3ecf2a57",
        "CreationTimestamp": "2016-05-15T09:40:50Z"
    }
}

Take note of the value of the output parameter, VpcEndpointId. You will need this value when updating the S3 bucket policy.

Now that you have created the VPC endpoint, you need to update the S3 bucket policy to ensure that S3 PUT, GET, and DELETE commands can occur only from within the VPC. Open the file named policy.json that you created earlier and add the following statement. Be sure to replace SECRETS_BUCKET_NAME with the name of the S3 bucket created by CloudFormation, and replace VPC_ENDPOINT with the ID of the VPC endpoint (the VpcEndpointId value) you created earlier in this step.

    {
        "Sid": "Access-to-specific-VPCE-only",
        "Effect": "Deny",
        "Principal": "*",
        "Action": [ "s3:GetObject", "s3:PutObject", "s3:DeleteObject" ],
        "Resource": "arn:aws:s3::: SECRETS_BUCKET_NAME/*",
        "Condition": {
          "StringNotEquals": {
            "aws:sourceVpce": "VPC_ENDPOINT"
          }
        }
    }

Now, you will push the new policy to the S3 bucket by rerunning the same command as earlier.

aws s3api put-bucket-policy --bucket $(aws cloudformation describe-stacks --stack-name ManagingSecretsS3Blog --region us-east-1 --query 'Stacks[].Outputs[?OutputKey==`SecretsStoreBucket`].OutputValue' --output text) --policy file://policy.json
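If you want to confirm that the bucket policy now contains all three statements, you can retrieve it with the following sketch (bash syntax).

aws s3api get-bucket-policy --region us-east-1 --output text \
  --bucket $(aws cloudformation describe-stacks --stack-name ManagingSecretsS3Blog --region us-east-1 --query 'Stacks[].Outputs[?OutputKey==`SecretsStoreBucket`].OutputValue' --output text)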

Now that you have locked down the S3 secrets bucket so that the secrets can be read only from instances running in the Amazon VPC, you can build and deploy the example WordPress application.

Step 5: Build the Docker image and publish it to ECR

The example application you will launch is based on the official WordPress Docker image. In the official WordPress Docker image, the database credentials are passed via environment variables, which you would need to include in the ECS task definition parameters. This is not a safe way to handle these credentials because any operations person who can query the ECS APIs can read these values.

Instead, what you will do is create a wrapper startup script that will read the database credential file stored in S3 and load the credentials into the container’s environment variables. This is safer because neither querying the ECS APIs nor running Docker inspect commands will allow the credentials to be read. You will publish the new WordPress Docker image to ECR, which is a fully managed Docker container registry that makes it easy for you to store, manage, and deploy Docker container images.

Now, you must change the official WordPress Docker image to include a new entry-point script called secrets-entrypoint.sh. This script obtains the credentials from S3 before calling the standard WordPress entry-point script. Note that the credentials are not saved to disk; they are loaded only into environment variables in memory.

To create the new entry-point script:

  1. Create a new file named secrets-entrypoint.sh with the following contents.
#!/bin/bash

# Check that the environment variable has been set correctly
if [ -z "$SECRETS_BUCKET_NAME" ]; then
  echo >&2 'error: missing SECRETS_BUCKET_NAME environment variable'
  exit 1
fi

# Load the S3 secrets file contents into the environment variables
eval $(aws s3 cp s3://${SECRETS_BUCKET_NAME}/db_credentials.txt - | sed 's/^/export /')

# Call the WordPress entry-point script
/entrypoint.sh "$@"
  2. Make the secrets-entrypoint.sh file executable by running the following command on your local computer: chmod +x secrets-entrypoint.sh
  3. Create a file named Dockerfile with the following contents.
FROM wordpress

# Install the AWS CLI
RUN apt-get update && 
    apt-get -y install python curl unzip && cd /tmp && 
    curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" 
    -o "awscli-bundle.zip" && 
    unzip awscli-bundle.zip && 
    ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws && 
    rm awscli-bundle.zip && rm -rf awscli-bundle

# Install the new entry-point script
COPY secrets-entrypoint.sh /secrets-entrypoint.sh

# Overwrite the entry-point script
ENTRYPOINT ["/secrets-entrypoint.sh"]
CMD ["apache2-foreground"]
  4. Build the Docker image by running the following command on your local computer. Remember to replace AWS_ACCOUNT_ID with your own AWS account ID.
$ docker build -t AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/secure-wordpress .
  5. Get the ECR credentials by running the following command on your local computer.
$ aws ecr get-login --region us-east-1 | sh
  6. Push the Docker image to ECR by running the following command on your local computer. Remember to replace AWS_ACCOUNT_ID with your own AWS account ID.
$ docker push AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/secure-wordpress

Now that you have prepared the Docker image for the example WordPress application, you are ready to launch the WordPress application as an ECS service.

Step 6: Launch the ECS WordPress service

Now, you will launch the ECS WordPress service based on the Docker image that you pushed to ECR in the previous step. The service will launch in the ECS cluster that you created with the CloudFormation template in Step 1.

  1. Run the following AWS CLI command, which will launch the WordPress application as an ECS service. It will extract the ECS cluster name and ECS task definition from the CloudFormation stack output parameters.
aws ecs create-service --cluster $(aws cloudformation describe-stacks --stack-name ManagingSecretsS3Blog --region us-east-1 --query 'Stacks[].Outputs[?OutputKey==`EcsCluster`].OutputValue' --output text) --service-name wordpress --task-definition $(aws cloudformation describe-stacks --stack-name ManagingSecretsS3Blog --region us-east-1 --query 'Stacks[].Outputs[?OutputKey==`WordPressTaskDefinition`].OutputValue' --output text) --desired-count 1 --region us-east-1
  2. Click the value of the CloudFormation output parameter, WordPressURL. This will open your browser to the address of the ECS instance and display the WordPress application, which is integrated with the RDS MySQL database instance. If the page does not load right away, see the status check after this list.
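The task may take a short while to start. The following sketch (bash syntax) uses the ECS DescribeServices API to confirm that the wordpress service has reached its desired count; the output query expression is illustrative.

aws ecs describe-services \
  --cluster $(aws cloudformation describe-stacks --stack-name ManagingSecretsS3Blog --region us-east-1 --query 'Stacks[].Outputs[?OutputKey==`EcsCluster`].OutputValue' --output text) \
  --services wordpress \
  --region us-east-1 \
  --query 'services[].{status:status,runningCount:runningCount,desiredCount:desiredCount}'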

You now have a working WordPress application using a locked-down S3 bucket to store encrypted RDS MySQL Database credentials, rather than having them exposed in the ECS task definition environment variables.

Another approach

You could also control the encryption of secrets stored on S3 by using server-side encryption with AWS Key Management Service (KMS) managed keys (SSE-KMS). With SSE-KMS, you can leverage the KMS-managed encryption service to easily encrypt your data. By using KMS, you also have an audit log of all the Encrypt and Decrypt operations performed on the secrets stored in the S3 bucket. For more information, see Protecting Data Using Server-Side Encryption with AWS KMS–Managed Keys (SSE-KMS).
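As an illustrative sketch, an SSE-KMS upload of the credentials file could look like the following (the bucket name and KMS key ID are placeholders); note that the bucket policy's s3:x-amz-server-side-encryption condition would then need to require "aws:kms" instead of "AES256".

aws s3 cp db_credentials.txt s3://SECRETS_BUCKET_NAME/db_credentials.txt \
  --sse aws:kms --sse-kms-key-id YOUR_KMS_KEY_ID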

Conclusion

In this post, I have explained how you can use S3 to store sensitive secrets, such as database credentials, API keys, and certificates, for your ECS-based application. I have also shown how to restrict access to those secrets by using IAM roles for EC2 so that only the ECS tasks and services that need them can read them, and how to enforce encryption in flight and at rest via S3 bucket policies. I added an extra security control to the secrets bucket by creating an S3 VPC endpoint so that only resources running in a specific Amazon VPC can access the S3 bucket. This is advantageous because the secrets can no longer be obtained by querying the ECS task definition environment variables, running Docker inspect commands, or examining Docker image layers and caches.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about this blog post, please start a new thread on the EC2 forum.

– Matthew

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.