This is the first of two posts on the impact of OS Containers on the Continuous Configuration Management market.

Part I — Two Reasons OS Containers are Fun, and Also the Future

If you are in any way involved in the software industry, you probably know what an Operating System Container is, or have at least heard of Docker, the current market leader. If you haven’t, I urge you to take a look.

One reason for this recommendation is that job openings involving containers are booming as containers become the preferred method for deploying applications in a number of use cases. A recent RightScale study with more than 1,000 respondents indicates that Docker adoption increased from 15% in 2015 to 40% in 2017, making Docker more popular than, for example, any of the four leading Software Configuration Management tools (Chef, Puppet, Ansible and SaltStack).

There is a learning curve for using containers – especially for production workloads – and there are situations in which they may not be appropriate yet. Having said that, at Daitan we see containers improving the daily lives of both our engineers and our clients most of the time.

Why? There are many reasons, but I’ve picked two from our own experience – let me know if they resonate with you, too.

Reason One: Addressing the Need for Flexibility and Efficiency, Even at Scale

Discussing with our Tech Leads at Daitan the results we have been getting with Docker for more than three years now reminds me of my astonishment when I first encountered containers.

That was not Docker, though. Ten years ago (yes, of course cool stuff existed at that time! 😉 ) I was introduced to Solaris Zones. I don’t want to sound nostalgic here but it was the first (and probably the only) time I had fun setting up servers myself. Why?

Having fun with containers

There were many reasons for that, but flexibility and efficiency were definitely the top two at that point (forgive me if that doesn’t sound like a lot of fun to you :) ).

Virtual Machines existed at that time, but the well-known burden of running two layers of “thick” Operating Systems on the hardware of ten years ago was always a big issue when costs were considered. It still is, sometimes.

So being able to create several lightweight virtual servers on top of a single bare-metal Sun machine, sharing OS libraries for maximum efficiency (a zone used to boot in seconds) and even configuring resource allocation (CPU, memory) as you wished was fantastic, especially ten years ago. That was really smart.

If you’re one of the nostalgic readers, I bet you remember the thrill of typing this and getting back a prompt in a matter of seconds:

zoneadm -z myzone boot

zlogin -C myzone
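For readers who never touched Solaris, here is a rough modern analogue with Docker. The container and image names below (`myzone`, `alpine`) are just illustrative choices, but the boot really is just as quick:

```shell
# Boot a lightweight container -- roughly the modern analogue of
# "zoneadm -z myzone boot" followed by "zlogin myzone".
docker run --name myzone -d alpine sleep 300
# Run a command inside the running container, much like logging into the zone.
docker exec myzone sh -c 'echo booted'
# Tear the "zone" down again.
docker rm -f myzone
```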

The dormant age

Unfortunately, probably due to the decline of Solaris as the preferred enterprise-grade server operating system, Solaris Zones never reached the wider audience that, in my humble opinion, it always deserved. Other implementations existed, such as BSD jails, and were equally starved of users.

Luckily, however, some clever people at Google had realized the potential of containers by then – and started contributing technologies that would allow Linux to leverage such a fantastic concept. To make things even better, the libraries allowed for Linux distribution independence – you could create a container on Ubuntu and run it on Debian, and so on.

And then Google started to use it, improve it, and scale it.

The scale proof

Then, three years ago, Google shocked the world by announcing that it was starting 2 billion containers a week. Can you imagine that? It was proof that containers could run at scale.

Please note that I’m not discussing specific implementations – some people will prefer Docker for reason A, rkt for reason B or even LXC for reason C. My point here is that the concept of containers has proven flexible for small workloads with resource constraints, such as mine, and proven at Google-scale workloads.

I’m sure that eventually the container tool ecosystem will be mature enough, at any workload scale, to win over those (few) who still don’t buy the container trend.

Reason Two: Addressing the Problem with Infrastructure Changes

In the old days, I remember Operations teams at clients blaming internal changes to infrastructure and applications for around 70 to 80 percent of all IT incidents.

Although this number was probably an exaggeration, it was clear to me after a couple of years as a consultant that changes to the infrastructure really annoyed the Operations teams. Indeed, installing new applications and patching them involved a large number of error-prone manual steps.

How many times did you log into a production server ten years ago (assuming you were already an IT professional at that time), only to find three versions of Java that someone had installed at some point?

The rate of change-related problems was affecting the business, and in order not to lose their sanity, IT and Telco Operations introduced a number of (should I say painful?) procedures to prevent risky and unplanned changes from moving forward. As a result, a wall was built that separated developers from operations in many ways.

Immutable infrastructure sounds good

Docker and other modern container engines such as rkt are wonderful to implement one of the most celebrated software engineering practices in many years: immutable infrastructure.

Containers are not the only option for immutable infrastructure, for sure, but they implement it gracefully.

The idea is that instead of patching servers, one should remove the server running the old application version from the infrastructure and deploy a new server with the new version – ideally with no downtime.
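As a minimal sketch of that replace-don’t-patch cycle, using throwaway Alpine images purely as stand-ins for two application versions:

```shell
# "Version 1" in production: a container booted from a pinned image.
docker run -d --name web alpine:3.18 sleep 300
# Upgrade time: retire the old server instead of patching it...
docker rm -f web
# ...and deploy a brand-new one from the new image.
docker run -d --name web alpine:3.19 sleep 300
docker rm -f web   # clean up after the demo
```

In a real deployment, a load balancer or orchestrator would shift traffic to the new container before the old one is removed – that is where the “no downtime” part comes from.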

Immutable infrastructure works great with containers because they can be provisioned and unprovisioned quickly, which also allows for on-demand scalability.

You also do not need to install anything to build a new environment – just move the same lightweight images around and you’re done: dev, test and prod with exactly the same components installed in exactly the same versions.
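One way to see why the environments stay identical: every component is pinned inside the image at build time. A hedged sketch of such an image follows – the base image, file name and versions are illustrative, not from any real project:

```dockerfile
# Everything the application needs is baked in at build time, so dev,
# test and prod run exactly the same components in exactly the same versions.
FROM eclipse-temurin:17-jre
COPY app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```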

That helps a lot. At Daitan, for example, we managed to reduce rework and production bugs on one project by 70% – and Docker with immutable infrastructure played a key role in that result.

Summary of Part I

Containers are here to stay, for many reasons in my opinion. I’ve picked two big pains from my own past experience just to show that containers gracefully solve real problems for people trying to install, configure, clone, maintain and scale applications.

Container benefits include being lighter than virtual machines, quick to boot and infrastructure-independent, among others. This makes them perfect for immutable infrastructure, which in turn considerably reduces the risk of change-related incidents.

What’s next?

My next post will dive deep into the implications of containers for the world of Continuous Configuration Management. Stay tuned!

What about you?

Have you tried containers? If not, I bet you can get started in a couple of hours. If you have, Daitan would love to hear the good, and not so good, stories from your journey to improve your operations with containers.

Acknowledgements

I’d like to thank some of the engineers at Daitan for sharing insights and enthusiasm on the topics of this post: Alex Zanetti, Isac Souza, Juliano Coimbra, Guilherme Feliciano, Thiago Marinello, and Thiago Cangussu.

 
