Configure EC2 the Cloud-Native Way Using AWS Parameter Store

The Twelve-Factor App, a commonly used set of guidelines for building cloud-native applications, requires an application to exhibit “strict separation of config from code”. A key takeaway from this guideline is that you should not hard-code credentials for data sources or other services as constants in your code. Instead, credentials should be stored separately from your code, keeping your application more generic and easier to deploy when credentials change or differ across environments.

The other day, I installed WordPress on a CentOS AMI and wrote a Terraform config to deploy it alongside an RDS instance. One of my primary design goals was to avoid baking the RDS credentials into the AMI. The AMI needed to be deployable regardless of the credentials used to connect to the database.

In this article, I’ll talk about how I achieved this using AWS Systems Manager Parameter Store.

Storing Config Using AWS Systems Manager Parameter Store

Parameter Store is a feature of AWS Systems Manager for storing application config. It’s straightforward to store both secrets and non-secrets in Parameter Store using Terraform. Here’s a snippet of my Terraform config where I stored my database credentials:

resource "aws_ssm_parameter" "db_name" {
    name = "db_name"
    value = var.db_name
    type = "String"
}

resource "aws_ssm_parameter" "db_username" {
    name = "db_username"
    value = var.db_username
    type = "SecureString"
    key_id = aws_kms_key.this.id
}

resource "aws_ssm_parameter" "db_password" {
    name = "db_password"
    value = var.db_password
    type = "SecureString"
    key_id = aws_kms_key.this.id
}

The aws_ssm_parameter resource lets you easily save your config to Parameter Store. As you can see, I used Terraform to save three parameters: db_name, db_username, and db_password. I gave db_name the String type, indicating that it should be stored in plaintext. I gave db_username and db_password the SecureString type, indicating that these are secrets and should be encrypted using a KMS key. There’s an additional type that I didn’t use here called StringList, which is exactly what it sounds like: a comma-separated list of Strings.
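In case it’s useful, here’s a rough sketch of what a StringList parameter could look like. The db_replica_hosts name and values below are purely illustrative and not part of my config:

resource "aws_ssm_parameter" "db_replica_hosts" {
    name = "db_replica_hosts"
    value = "replica-1.example.com,replica-2.example.com"
    type = "StringList"
}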

Notice that I set the key_id attribute for db_username and db_password. This attribute specifies which KMS key I want to use for encryption and decryption. I decided to create my own KMS key with Terraform, with the config to create this key shown below:

data "aws_caller_identity" "this" {}

resource "aws_kms_key" "this" {
    description = "For encrypting and decrypting wordpress-related parameters"
    deletion_window_in_days = 7

    policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow access for Account Holder",
      "Effect": "Allow",
      "Principal": {
        "AWS": "${data.aws_caller_identity.this.arn}"
      },
      "Action": [
        "kms:Create*",
        "kms:Describe*",
        "kms:Enable*",
        "kms:List*",
        "kms:Put*",
        "kms:Update*",
        "kms:Revoke*",
        "kms:Disable*",
        "kms:Get*",
        "kms:Delete*",
        "kms:TagResource",
        "kms:UntagResource",
        "kms:ScheduleKeyDeletion",
        "kms:CancelKeyDeletion",
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}

I used the aws_kms_key resource to create my key, along with a policy document that gives me full access to it. In the next section, let’s see how you can also permit your EC2 instances to access this key, as well as your parameters.

Granting Access to EC2

At this point, we have database credentials saved to Parameter Store and a KMS key for encrypting and decrypting the username and password. However, we haven’t permitted the EC2 instances to access these yet. Luckily, granting this access to EC2 is very simple.

Here’s the Terraform I wrote to give EC2 permission to fetch the RDS credentials as well as use the KMS key for decryption:

resource "aws_iam_role" "get_parameters" {
    name = "get_parameters"
    description = "Role to permit ec2 to get parameters from Parameter Store"
    assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "get_parameters" {
    name = "get_parameters"
    role = aws_iam_role.get_parameters.name

    policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:GetParameters"
            ],
            "Resource": [
                "${var.db_name_arn}",
                "${var.db_username_arn}",
                "${var.db_password_arn}"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt"
            ],
            "Resource": [
                "${var.parameters_key_arn}"
            ]
        }
    ]
}
EOF
}

resource "aws_iam_instance_profile" "get_parameters" {
    name = "get_parameters"
    role = aws_iam_role.get_parameters.name
}

Let’s break this down. First, I used the aws_iam_role resource to create an IAM role for EC2. The assume role policy attached to it permits the EC2 service to assume this role.

Next, I used the aws_iam_role_policy resource to limit the scope of what assigned EC2 instances can do. I assigned the ssm:GetParameters action to give EC2 permission to get only my database credentials. I also assigned the kms:Decrypt action to grant EC2 permission to decrypt using the KMS key I created.

Finally, I used the aws_iam_instance_profile resource to create an instance profile. An instance profile attaches an IAM role to EC2 instances that use it. We’ll assign this instance profile to the designated EC2 instances next.

Fetching Parameters From EC2

The last thing to do is to fetch the parameters from EC2. This part depends on your AMI, but I’ll walk through my use case.

My AMI contains a WordPress installation, and every WordPress instance includes a file called wp-config.php used for configuring data source credentials. To keep my AMI generic, I used PHP’s getenv() function to fetch credentials from environment variables. Here’s a snippet of my wp-config.php.

/** The name of the database for WordPress */
define( 'DB_NAME', getenv('DB_NAME') );

/** MySQL database username */
define( 'DB_USER', getenv('DB_USER') );

/** MySQL database password */
define( 'DB_PASSWORD', getenv('DB_PASSWORD') );

/** MySQL hostname */
define( 'DB_HOST', getenv('DB_HOST') );

I built my AMI to serve WordPress from Apache. You can set environment variables visible to Apache by defining them in a file called .htaccess. To do this, I leveraged EC2 user data to run a set of commands on startup. Here’s the relevant portion of my Terraform config:

resource "aws_instance" "wordpress" {
  count = length(var.availability_zones)

  <removed for simplicity>

  iam_instance_profile = var.instance_profile_name

  user_data = <<EOF
#!/bin/bash
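# Rebuild .htaccess at boot, pulling the database config from Parameter Store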
rm -f /var/www/html/.htaccess
echo "SetEnv DB_NAME $(aws ssm get-parameters --names db_name --query 'Parameters[0].Value' --output text)" >> /var/www/html/.htaccess
echo "SetEnv DB_USER $(aws ssm get-parameters --names db_username --with-decryption --query 'Parameters[0].Value' --output text)" >> /var/www/html/.htaccess
echo "SetEnv DB_PASSWORD $(aws ssm get-parameters --names db_password --with-decryption --query 'Parameters[0].Value' --output text)" >> /var/www/html/.htaccess
echo "SetEnv DB_HOST ${var.db_host}" >> /var/www/html/.htaccess
EOF

}

My user data runs a set of commands that populate the .htaccess file with Parameter Store config. To get the db_name parameter I needed, I ran:

aws ssm get-parameters --names db_name --query 'Parameters[0].Value' --output text

To get secrets such as the database username and password, I added the --with-decryption flag. This flag decrypts the value using the parameter’s corresponding KMS key, so only users with access to that key can retrieve the plaintext. That access is taken care of by the instance profile we created earlier.

Notice that I also needed to use the --query 'Parameters[0].Value' flag because the aws ssm get-parameters command returns a JSON response that looks like this:

{
    "Parameters": [
        {
            "Name": "db_name",
            "Type": "String",
            "Value": "wordpress",
            "Version": 1,
            "LastModifiedDate": "2020-10-13T23:05:07.065000-05:00",
            "ARN": "arn:aws:ssm:us-east-2:767235477326:parameter/db_name",
            "DataType": "text"
        }
    ],
    "InvalidParameters": []
}

Since I only wanted the value, I used the --query flag to parse it out.
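As an aside, if you only need one parameter at a time, the singular aws ssm get-parameter command can do the same job with a slightly simpler query path. I didn’t use it here, but it would look something like this:

aws ssm get-parameter --name db_name --query 'Parameter.Value' --output text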

By using “aws ssm get-parameters” in my user data script, I was able to set the Apache environment variables and configure my data source in a cloud-native fashion!
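For reference, once the user data has run, the generated .htaccess should end up looking something like this (the values below are illustrative placeholders, not my real credentials):

SetEnv DB_NAME wordpress
SetEnv DB_USER wp_user
SetEnv DB_PASSWORD example-password
SetEnv DB_HOST <your-rds-endpoint>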

Thanks For Reading

Hopefully, this article helps you configure your EC2 instances the cloud-native way using the AWS Systems Manager Parameter Store. Your use case is likely different from mine, but this should give you an idea of how to get started.

Another way you can achieve the same result is with AWS Secrets Manager. This service is similar to Parameter Store, but it also allows you to generate and rotate secrets dynamically. However, note that Parameter Store is free for standard usage, whereas Secrets Manager costs $0.40 per secret per month.
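If Secrets Manager fits your use case better, a minimal Terraform sketch might look like the following. The db_credentials name and the JSON structure are assumptions for illustration, not something I used in this project:

resource "aws_secretsmanager_secret" "db_credentials" {
    name = "db_credentials"
}

resource "aws_secretsmanager_secret_version" "db_credentials" {
    # Store the username and password together as a single JSON blob
    secret_id = aws_secretsmanager_secret.db_credentials.id
    secret_string = jsonencode({
        username = var.db_username
        password = var.db_password
    })
}

On the instance side, you could then fetch the secret with aws secretsmanager get-secret-value --secret-id db_credentials --query SecretString --output text and parse out the individual fields.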

That’s all for this post. Let me know if you have any questions. Thanks!

Austin Dewey

Austin Dewey is a DevOps engineer focused on delivering a streamlined developer experience on cloud and container technologies. Austin started his career with Red Hat’s consulting organization, where he helped drive success at many different Fortune 500 companies by automating deployments on Red Hat’s Kubernetes-based PaaS, OpenShift Container Platform. Currently, Austin works at fintech startup Prime Trust, building automation to scale financial infrastructure and support developers on Kubernetes and AWS. Austin is the author of "Learn Helm", a book focused on packaging and delivering applications to Kubernetes, and he enjoys writing about open source technologies at his blog in his free time, austindewey.com.
