Written by Damian Tykałowski
Published June 11, 2018

AWS ECS — quickly create an environment for your dockerized apps

Learn these few tricks to help you quickly create an environment for your dockerized apps.

Go to GitHub, clone the repository, change the variables, and run it.

https://github.com/d47zm3/devops/tree/master/aws/ecs-cluster

You will need the AWS CLI, jq, and ECS-CLI (links in the script).
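To show why jq is on the list: the script uses it to pull resource IDs out of the JSON that the AWS CLI returns. A self-contained illustration with a canned response (the VPC ID here is made up):

```shell
# Simulate the JSON an `aws ec2 create-vpc` call returns and extract the VpcId with jq.
RESPONSE='{"Vpc":{"VpcId":"vpc-0abc123","CidrBlock":"10.0.0.0/16","State":"pending"}}'
VPC_ID=$(echo "$RESPONSE" | jq -r '.Vpc.VpcId')
echo "$VPC_ID"   # prints vpc-0abc123
```

The same pattern (pipe the CLI response into `jq -r` and save the result in a variable) repeats throughout the script for subnets, gateways, and so on.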

The first part (VPC creation) comes from here; I modified it a bit to add another subnet for HA and more open ports: https://medium.com/@brad.simonin/create-an-aws-vpc-and-subnet-using-the-aws-cli-and-bash-a92af4d2e54b

Since the whole setup consists of a few files, I won't paste them all here; they are available in my GitHub repository. Here I will discuss the more important parts and how it all runs together. Let's start with the VPC. I took this part from Brad with his permission; I just added another subnet for HA and a rule for the load balancer, as well as some other small tweaks.

It's time to explain everything, step by step. First we need a VPC to hold the EC2 instances for the ECS cluster; underneath, a lot of resources have to be created so that everything connects properly. The VPC will have two subnets for HA, located in two separate availability zones. Tags will be added everywhere, so it's easy to identify ECS resources, along with all the required routes, an Internet gateway, firewall rules, all that jazz. Besides the obviously required AWS CLI, we will use two other tools: jq, to parse responses from the AWS CLI (to save resource IDs), and the ECS-CLI client, to manage the ECS cluster. There are a couple of variables to set here... Apart from the first few, like the name, region, or SSH key pair (which you will have to create yourself if you want to use one), there is no need to change the network settings. I added comments for the variables that might be confusing; hopefully it's self-explanatory.
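The steps above can be sketched roughly as follows; this is a simplified illustration, not the script from the repository, and the CIDR blocks, availability zones, and tag values are my own placeholder choices (requires configured AWS credentials to actually run):

```shell
# Create the VPC and capture its ID with jq for later steps.
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 | jq -r '.Vpc.VpcId')

# Tag it so the ECS resources are easy to identify later.
aws ec2 create-tags --resources "$VPC_ID" --tags Key=Name,Value=ecs-demo

# Two subnets in two different availability zones, for HA.
SUBNET_A=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.1.0/24 \
  --availability-zone eu-west-1a | jq -r '.Subnet.SubnetId')
SUBNET_B=$(aws ec2 create-subnet --vpc-id "$VPC_ID" --cidr-block 10.0.2.0/24 \
  --availability-zone eu-west-1b | jq -r '.Subnet.SubnetId')

# Internet gateway plus a default route so the instances can reach the outside world.
IGW_ID=$(aws ec2 create-internet-gateway | jq -r '.InternetGateway.InternetGatewayId')
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"
```

The real script also creates the route table, security group rules (including the load balancer rule), and the remaining tags.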

After the first part you will see a message like this.

You can check your AWS console to see the newly created VPC, ready with all its dependencies.

After the VPC is created, it's time for some IAM magic. We will need an IAM role with certain permissions so that your ECS instances can pull images from your private ECR registry (a private Docker registry hosted on AWS). To achieve this, we use a few external files with permission definitions; not much to talk about here.
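The idea looks roughly like this; the role, profile, and file names are illustrative, but the trust policy and the AWS-managed policy ARN are the standard ones for ECS container instances (requires AWS credentials):

```shell
# Trust policy letting EC2 instances assume the role (file name is my own choice).
cat > ecs-instance-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ec2.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF

aws iam create-role --role-name ecsInstanceRole \
  --assume-role-policy-document file://ecs-instance-trust.json

# AWS-managed policy: ECS agent registration plus ECR image pulls.
aws iam attach-role-policy --role-name ecsInstanceRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role

# Instance profile so the role can be attached to the EC2 instances.
aws iam create-instance-profile --instance-profile-name ecsInstanceRole
aws iam add-role-to-instance-profile --instance-profile-name ecsInstanceRole \
  --role-name ecsInstanceRole
```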

Now it's time to get started with the ECS part itself. While you could start up an ECS cluster without all these options, and it would create the VPC, EC2 instances, etc. for you, you could quickly hit the VPC limit when managing multiple clusters, and making a mess that way is easy too. It's better to have one (or more, your choice) dedicated VPC for your ECS cluster instances, so you have control over them (and can customize them along the way). Again, the added comments explain the steps one by one.
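In sketch form, bringing the cluster up into the existing VPC looks like this; cluster name, key pair, and instance type are placeholder values, and the flags are from ECS-CLI v1, so check against your version:

```shell
# Point ecs-cli at the cluster and region.
ecs-cli configure --cluster my-ecs-cluster --region eu-west-1

# Bring up two instances inside the VPC/subnets created earlier,
# instead of letting ecs-cli create a throwaway VPC of its own.
ecs-cli up --keypair my-keypair --capability-iam --size 2 \
  --instance-type t2.micro \
  --vpc "$VPC_ID" --subnets "$SUBNET_A,$SUBNET_B" \
  --security-group "$SG_ID" --force
```

Passing `--vpc`, `--subnets`, and `--security-group` explicitly is what keeps the cluster inside the dedicated VPC discussed above.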

After executing the next part, you will see your cluster created; since the machines are in different AZs, it's highly available.

Now we've got EC2 machines with Docker configured that we can control from the ECS dashboard, and we can set up services and tasks on them. But what is a service and what is a task? A task starts with a task definition, which is described in docker-compose.yml. It specifies the image to use, mounts, port forwards, logging configuration, and so on. A task is simply a running instance of a task definition: a container started with all the settings specified. A service, on the other hand, makes sure the proper number of tasks is running (if one stops, it restarts it), ties in the load balancer so it points to your running tasks, enforces the placement strategy so containers are spread out, etc. Find more info here: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html

But before we define our tasks and services, let's create a load balancer. Note that once you create a service tied to a load balancer, you cannot change that load balancer; you will need to re-create the service. Amazon has two types of load balancer available: the Classic Load Balancer and the Application Load Balancer. What matters for us is that with the Classic Load Balancer we won't be able to set up dynamic port mapping, and we cannot do zero-downtime deployments. So we will use the Application Load Balancer, which supports these features.
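Creating the ALB boils down to three calls; names here are illustrative, and the subnet/security-group variables are assumed from the VPC step (requires AWS credentials):

```shell
# Application Load Balancer spanning both subnets, i.e. both AZs.
ALB_ARN=$(aws elbv2 create-load-balancer --name ecs-demo-alb \
  --subnets "$SUBNET_A" "$SUBNET_B" --security-groups "$SG_ID" \
  | jq -r '.LoadBalancers[0].LoadBalancerArn')

# Target group the service will register its tasks (and their random host ports) into.
TG_ARN=$(aws elbv2 create-target-group --name ecs-demo-tg \
  --protocol HTTP --port 80 --vpc-id "$VPC_ID" \
  | jq -r '.TargetGroups[0].TargetGroupArn')

# Listener forwarding incoming HTTP traffic to the target group.
aws elbv2 create-listener --load-balancer-arn "$ALB_ARN" \
  --protocol HTTP --port 80 \
  --default-actions "Type=forward,TargetGroupArn=$TG_ARN"
```

The target group ARN is the piece we hand to the service later, so it knows where to register tasks.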

Crucial for dynamic port mapping is this "strange" port 0 mapping in docker-compose.yml:
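A minimal fragment illustrating the idea; the image name, log group, and region are placeholders, not the values from the repository:

```yaml
version: '2'
services:
  web:
    image: <account-id>.dkr.ecr.eu-west-1.amazonaws.com/web:latest
    ports:
      - "0:80"   # host port 0 = pick a random free port, forward to container port 80
    logging:
      driver: awslogs
      options:
        awslogs-group: ecs-demo
        awslogs-region: eu-west-1
        awslogs-stream-prefix: web
```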

This means it will map a random port on the host to port 80 in the container. ECS (or rather the service) will take care of registering this random port correctly in the target group. Other than that, we specify a log group (it will be created for us) and a stream prefix, which lets us quickly recognize which container a log stream comes from. Now we are ready to define our task and start the service. Below I present the output from the script and the final result. From the timestamps you can see we managed to finish in 5 minutes with the whole environment ready!
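Defining the task and starting the service comes down to two ECS-CLI commands; the project name, container name, and service role are placeholder values, and the flags are from ECS-CLI v1:

```shell
# Register the task definition from docker-compose.yml and start the service,
# attaching it to the ALB target group created earlier.
ecs-cli compose --file docker-compose.yml --project-name web service up \
  --target-group-arn "$TG_ARN" \
  --container-name web --container-port 80 \
  --role ecsServiceRole

# Scale to two tasks so they spread across both instances/AZs.
ecs-cli compose --file docker-compose.yml --project-name web service scale 2
```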


We can see our service running with two tasks (the script scales it to two instances). The default placement strategy for tasks is spread, meaning tasks will be spread across the available ECS instances. As you can see, basic metrics, CPU and memory utilization, are available at hand.

To check logs from this cluster and its containers, choose any of the tasks, expand the container view (in my case, web) and click on "View logs in CloudWatch".

Note that each container has its own log stream.

What do you do if you want to roll out a new image without downtime? It's easy: specify the new version in the docker-compose.yml file and run the deploy script below, remembering to specify the same values. Also add the timeout flag, since even the smallest deployment takes a little longer than 5 minutes and it times out...
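A sketch of such a deploy, assuming the same placeholder names as before; `--timeout` takes minutes in ECS-CLI v1, so 10 gives the rolling deployment room to finish:

```shell
# Re-run service up with the SAME parameters as the initial deploy;
# ECS replaces tasks one by one against the ALB, so there is no downtime.
ecs-cli compose --file docker-compose.yml --project-name web service up \
  --target-group-arn "$TG_ARN" \
  --container-name web --container-port 80 \
  --timeout 10
```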


And here's the output from the deployment.


That's it: you have a ready infrastructure for your containerized application, with logging and monitoring, which are a must for success. Also, if you're just playing with it, don't forget to clean up the resources, or it will eat your money quickly!
