Written by Kacper Dąbrowski
Published February 2, 2016

How to destroy your infrastructure

In this article I explain whether destroying your infrastructure makes sense and, if it does, how to do it. Along the way I share some technical details about the stack we use in our project, together with some general information about its configuration.

I want to tell the story of the infrastructure I work with and describe the approach we chose for maintaining it on a daily basis. There are many ways to handle this task, ranging from manual configuration to hiring many people who work on it in a more or less automated way.

You can also invest a bit more time and build a fully automated solution, one that works like an independent machine performing all these tasks predictably, without demanding many resources from the people involved. By resources I mean time, context switching, deployment effort, hiring gurus who know your infrastructure, and so on.

No matter which way you choose, your goal is always the same – a stable, consistent, fast, and secure environment that matches your expectations.

Continuous Deployment instead of Continuous Hotpatching!

For years, many IT environments were just generic images with patches and operations code deployed on top of them in a more or less automated way. They all had one thing in common: people didn’t want to destroy them, because they were afraid of losing some important part.

However, much has changed over time. Many tools that help organize this work in an automated way have appeared on the market, and the goals we set for infrastructure environments have changed as well.

Now we need to be prepared for even more demands. In times of modern Continuous Integration and Deployment processes, with much shorter time-to-market, you need a reliable way of making sure your infrastructure can keep up. It would be great if we had common interfaces for managing this, easy for everyone to understand.

Use the same, well-known patterns

When talking about infrastructure, we should remember that it is an integral part of software development. The logic and knowledge about environment requirements come from the application itself, which makes infrastructure as much a part of the application as a module that sends e-mail, writes files to storage, or connects to a database.

All of these are lines of code describing simpler or more complex behaviour, and the relations between them are kept in one place. So why don’t we start thinking about our infrastructure as if it were our code? Wouldn’t you like to see your servers “being compiled” for you as a stage of your application pipeline? Wouldn’t it be awesome to move your servers from one place to another and destroy the old ones? What about keeping your infrastructure in a Version Control System, with branches for different use cases, and simply removing the ones you no longer need?

With this approach, we can start thinking about pipelines that generate artifacts: sets of configuration that you can deploy to create an environment for your services.

How do we create infrastructure?

To achieve our goals and meet our needs, we use a simple yet efficient solution. This process lets us create infrastructure dynamically, spawn or destroy instances, change their behavior, and easily check their status. We managed to achieve that with just these four tools:

  • Ansible: automating configuration of operating systems and services
  • Packer: generating operating system images
  • GoCD: triggering pipelines that build our applications, environments and configure them
  • Terraform: describing resources such as servers, load-balancers, security, networking etc.
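To show how the first two tools fit together, here is a minimal Packer template sketch in the 2016-era JSON syntax: Packer boots a builder instance, runs an Ansible playbook on it, and saves the result as a reusable machine image. The region, AMI ID, image name, and playbook path below are purely illustrative assumptions, not our actual configuration:

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "eu-west-1",
    "source_ami": "ami-00000000",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "webapp-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "ansible-local",
    "playbook_file": "playbooks/webapp.yml"
  }]
}
```

Running `packer build` on a template like this produces a versioned image artifact that the later stages of the pipeline can deploy.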

The whole process begins in a repository of Ansible roles, where we keep the code describing how a server with a given role should look. This is a set of playbooks that, when run, provides the desired resources for your application. It’s worth mentioning that each commit to this repository spawns a GoCD pipeline that automatically creates instances, which lets us test our changes right away. The last step is a Terraform repository where we keep definitions of all the resources our applications need to run.
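As an illustration, a definition in such a Terraform repository might look like the following minimal sketch of an instance fronted by a load balancer. All names, the availability zone, and the AMI ID are hypothetical, chosen only to show the shape of the code:

```hcl
# Hypothetical sketch; resource names, zone and AMI ID are illustrative.
resource "aws_instance" "webapp" {
  ami               = "ami-00000000"   # an image baked by Packer + Ansible
  instance_type     = "t2.micro"
  availability_zone = "eu-west-1a"
}

resource "aws_elb" "webapp" {
  name               = "webapp-elb"
  availability_zones = ["eu-west-1a"]
  instances          = ["${aws_instance.webapp.id}"]

  listener {
    instance_port     = 8080
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}
```

The point is that the server, the load balancer, and the relation between them all live in one reviewable, versioned file rather than in someone’s head.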

Pain of deploying changes

Finally, it’s time to discuss the deployment of changes in your environments.

When I think back over my last few years in Operations, about the whole process of applying changes and preparing new platforms for services, the first thing that comes to mind is a lack of consistency. It has repeatedly caused unexpected, and often unwanted, behavior in a platform we want to rely on.

The important thing here is to be aware that no matter how good your code is, nothing will work if the environment doesn’t work properly. That is why you want a reliable way of deploying these extremely important pieces.

Maintaining servers requires a dialog between the operations department and its customers, who are mainly developers. In most cases these discussions are about smaller or bigger changes to your infrastructure, and about mechanisms that will guarantee the expected result of a change, which consumes a lot of time.

Now that we know which areas cause the biggest problems (and costs), it’s time to think about a solution that can resolve or at least reduce their effects.

Destroy, create, enjoy!

The solution I would like to propose is to forget about making changes to your platform and instead create its components from scratch every time, as if they were separate artifacts in software development.

This approach lets you replace components of the infrastructure gradually, releasing new versions only after they pass all the automated tests you write and run against them.
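In Terraform terms, this idea can be expressed with the `create_before_destroy` lifecycle flag: changing the image produces a brand-new instance before the old one is torn down, so every change rolls out as a fresh artifact. The variable name and instance type below are illustrative, not taken from our setup:

```hcl
resource "aws_instance" "webapp" {
  # Every release points at a freshly baked image instead of
  # patching a running server (the variable name is illustrative).
  ami           = "${var.image_id}"
  instance_type = "t2.micro"

  lifecycle {
    # Build the replacement first, then destroy the old instance.
    create_before_destroy = true
  }
}
```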

I really encourage you to give it a try and believe me – there’s no way back! 🙂
