What Are Containers and Containerization in DevOps?

Nowadays, much of the software we build uses a microservices architecture, and the easiest way to build microservices is with containers. But technology and architecture are only half of the equation.

Processes, company culture, and methodologies also play a big role in software development. On that side, the most popular approach is to follow DevOps practices. In fact, containers and DevOps complement each other. In this post, you’ll learn how one relates to the other and what containerization and DevOps are all about.

What Is Container(ization)?

Containerization is the process of packaging your application together with its dependencies into one package (a container). Such a package can then be run pretty much anywhere, whether it’s an on-premises server, a virtual machine in the cloud, or a developer’s laptop. By abstracting the infrastructure, containerization makes your application truly portable and flexible.

Before we dive into explaining containers in the context of DevOps, let’s first talk about containers themselves.

Problems of Traditional Applications

Traditionally, to install and run any software on any server, you need to meet several requirements. The software needs to support your operating system, and you probably need to have a few libraries and tools installed to run the software. But that’s not all.

All these requirements probably need to be a specific version and available at a specific path. Sometimes you may also need the proper permissions for certain directories and files. Overall, there are quite a few boxes to tick before you can successfully run the software.

These requirements create certain problems. First, what if you already have some of the tools or libraries installed, but in an unsupported version? You’d have to upgrade or downgrade them, hoping it won’t break any existing software. The problems don’t end once requirements are met and the application is running correctly, though.

What if you want to run the application in another cloud environment? You’ll have to start the process all over again. Containers are meant to solve these problems.

Containers to the Rescue

The idea of a container is to package these libraries, tools, and anything else your application requires into one “executable package”—a container. You can then run the container without worrying about dependencies, because everything is already included. Containers isolate software from its environment and ensure it works the same way regardless of any differences between those environments.

Containers vs. Virtual Machines

This all might sound like the concept of a virtual machine deployed from a prebuilt image, but containers work quite differently. The main difference is that a container virtualizes the operating system, whereas a virtual machine is an abstraction of physical hardware. This is what makes containers more efficient and portable than VMs.

So, how do containers achieve isolation and portability? On a Linux system, the answer is cgroups and namespace isolation. You can run containers on Windows, too; the Windows kernel uses different mechanisms to achieve the same results. Instead of cgroups, it has job objects, and instead of namespaces, it has server silo objects. These kernel features allow you to build containers that encapsulate everything your application may need without creating conflicts with the host operating system or other containers.

Because they achieve much the same functionality as VMs (isolation, the ability to package everything you need into one executable piece), some people call containers “lightweight VMs.” The “lightweight” part comes from the fact that a VM includes a full copy of an operating system plus the application and its binaries and libraries, which takes up space and can be slow to boot, whereas containers share the operating system and infrastructure with other containers, each running as an isolated process in user space. Since containers are fundamentally different from virtual machines in how they’re constructed, though, it’s best to avoid calling them “lightweight VMs.”

What Makes Containers Possible?

Let’s go back to the working principle of a container. As mentioned earlier, both the Linux and Windows kernels have features—cgroups and namespace isolation on Linux, and their equivalents on Windows—that allow containers to be created. But what are they, and how do they work?

Linux: cgroups and Namespaces

On Linux, it’s straightforward. Cgroups are a kernel feature for limiting hardware resources per process. They give us the ability to limit CPU, RAM, disk I/O, or network usage for one or more specific processes.

Namespaces, on the other hand, give you the ability to isolate the “scope” of a process. There are a few different namespaces, and you can apply one or more of them to a process at the same time. Let’s take the process ID namespace, for example. Normally, any process on a Linux machine can see all the other processes running on the same machine. If you apply process ID namespace isolation to a process, however, it will no longer see the other processes running on the machine. It’ll see only the processes running within its namespace.

Windows: Virtual File System, Job Objects, and Server Silos

There are no cgroups and namespaces in the Windows kernel, but there are equivalent mechanisms for constraining processes. The virtual file system effectively abstracts the real devices and “translates” every file system call a process makes.

Job objects are the equivalent of cgroups—they allow limiting resources per process (or a group of processes).

Server silos are the Windows equivalent of Linux kernel namespaces. Normally, pretty much anything a process can do (access files, read and change registry entries, create links, connect to other processes) happens within a so-called root scope. But you can create a server silo in which all of this happens in a separate, limited scope.

cgroups

A cgroup (short for “control group”) is a kernel feature that allows limiting and isolating a process’s access to hardware resources (like CPU, RAM, disk I/O, or networking). In other words, a cgroup lets you say, “the process with this ID can use only 10% of a CPU and 256 MB of RAM.”
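
To make this concrete, here’s a minimal sketch of what that looks like at the kernel interface, assuming a Linux host with cgroups v2 mounted at /sys/fs/cgroup and root privileges; the “demo” group name and the Go code are purely illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Create a new cgroup by making a directory in the unified hierarchy.
	cg := "/sys/fs/cgroup/demo"
	if err := os.Mkdir(cg, 0o755); err != nil && !os.IsExist(err) {
		panic(err)
	}

	// Cap memory at 256 MB (the value is in bytes).
	must(os.WriteFile(filepath.Join(cg, "memory.max"), []byte("268435456"), 0o644))

	// Allow 10,000µs of CPU time per 100,000µs period, i.e., ~10% of one CPU.
	must(os.WriteFile(filepath.Join(cg, "cpu.max"), []byte("10000 100000"), 0o644))

	// Move this process into the group; any children it starts inherit the limits.
	must(os.WriteFile(filepath.Join(cg, "cgroup.procs"), []byte(fmt.Sprint(os.Getpid())), 0o644))
}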

Namespace Isolation

Namespaces, on the other hand, let you isolate one or more processes from the others so that a process sees only what’s inside its namespace. There are different namespaces for different purposes—for example, the network, mount, process ID, and user namespaces. Each has its own purpose (as you may guess from the names), and you can apply all or only some of them to a process.

Let’s look at the mount namespace as an example. If you apply mount namespace isolation to a process, that process will have a list of mount points independent of other processes and of the host operating system. So, whatever file system you mount within the namespace will be visible to that process but not to others.
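
If you want to see namespaces in action, here’s a minimal Go sketch, assuming a Linux host and root privileges. It starts a shell in new process ID and mount namespaces, which is roughly what a container runtime does before launching your application:

package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch a shell in new PID and mount namespaces. Inside, the shell
	// believes it's PID 1 and gets its own view of mount points.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		// CLONE_NEWPID: a fresh process ID namespace.
		// CLONE_NEWNS: a fresh mount namespace.
		Cloneflags: syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	// Note: for tools like `ps` to reflect the new PID namespace, /proc
	// would also need to be remounted inside it; a container runtime
	// handles that for you.
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

Inside that shell, echo $$ prints 1: the process genuinely believes it’s the first process on the machine.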

Container Demystified

If you apply all types of namespace isolation to a process and restrict its hardware resource usage with cgroups, you’ll end up with a container. At this point, it’s important to understand there’s no such thing as a container from the Linux kernel’s perspective. To the kernel, it’s just an ordinary process that has been isolated using cgroups and namespaces.

But don’t get me wrong—namespace isolation and cgroups let you create proper isolation, so whatever you do inside a container will only affect the container. For example, you can have an nginx server running on your host and listening on port 80, and a container on the same server with nginx inside it also listening on port 80, and they won’t conflict with each other. Likewise, the content of /etc (and any other directory, for that matter) on your host system will be different and independent from the content of /etc inside the container.

Container Runtime

Creating these namespace isolation, cgroup, job object, or server silo rules manually is a tedious and complicated task. However, a tool called a “container runtime” can do it for you under the hood. You’ve probably heard about Docker, but it’s not the only container runtime on the market. There are also containerd, Podman, and CRI-O. They all work slightly differently, but the general idea remains the same. These tools do the complicated job of managing cgroups and namespace isolation for you, so you can simply say, “I want a container with this application, these libraries, and these extra files and mount points.”
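
To make the idea concrete, here’s a hedged sketch of that request expressed as a single Docker invocation, driven from Go; it assumes Docker is installed, and the image name and host path are placeholders:

package main

import (
	"os"
	"os/exec"
)

func main() {
	// "I want a container with this application, these libraries, and
	// these extra files and mount points" becomes one runtime call.
	cmd := exec.Command("docker", "run",
		"--rm",          // remove the container when it exits
		"--memory=256m", // cgroup memory limit, managed for you
		"--cpus=0.5",    // cgroup CPU limit, managed for you
		"-v", "/srv/app/config:/etc/app:ro", // an extra read-only mount
		"my-app:1.0", // the app and its libraries, packaged as an image
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}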

Containerization

Now that you have some understanding of what it takes to create a container, let’s talk about containerization. Simply put, it’s the process of implementing containers. As mentioned before, you need a container runtime like Docker for that. In the simplest scenario, that’s all you need. But in reality, there are a few layers of containerization.

Infrastructure

Starting with the infrastructure: you need some servers, either on-premises or in the cloud, to run containers on.

Operating System

When it comes to the operating system, the only real requirement is that it supports containers. Most modern operating systems do, but some exotic ones don’t, so you need to double-check beforehand. If you’ll run only containerized applications on your server, you can go a step further and use one of the few container-optimized operating systems. These are operating systems dedicated to running containers; they include only the libraries needed for that job. As a result, they’re much smaller than a traditional operating system and more secure.

Orchestrator

On top of the operating system, you’ll need the previously mentioned container runtime (unless you opt for a container-optimized operating system, which has a container runtime built in). We could end here, but you’ll probably need a container orchestrator to top off the stack. Managing a few containers is tolerable without one, but in real life, you’ll probably have more than a few. And the more you have, the harder it becomes to control them all. A container orchestrator does this job for you. The most popular these days is Kubernetes.

Containerization vs. DevOps

When you talk about containers, you’ll often hear about DevOps too, and it’s worth understanding why. Containers are a technology, while DevOps is a set of practices, culture, and principles. The reason you often see them together is that containers as a technology make implementing DevOps easier. We’ll explain why in a second, but it’s important to understand that they can exist separately. You can have containers without DevOps, and vice versa.

The thing to remember is that one without the other would be more difficult and less efficient. Containers are a natural fit for DevOps. There are a few reasons for this, but the main point is that DevOps aims for faster software delivery through closer cooperation between development and operations teams, more freedom for developers, and a “fail fast” approach.

Let’s Talk About the Benefits

Containers help on all these fronts. Thanks to containers, different environments (e.g., development, test, production) can be identical, since you no longer rely on operations teams to make sure different servers run the same software versions. What’s more, the application lives in the same “environment” (the container) even on a developer’s laptop. You simply deploy the same containers to different environments. This removes the common problem of “it works on my machine but not on the test server.”

Continuous Deployments

Continuous deployment becomes easier with containers too. That’s because containers are generally small (or at least they should be), so it takes just seconds to deploy a new version of a container. Also, if you’re using containers, you’ve probably architected your application as microservices. This means you can update different parts of your application independently.

Flexibility

Another benefit of containers is that different parts of your application can be written in different languages. Developers aren’t limited to one programming language but can use the languages they’re most comfortable with. This contributes to DevOps because it gives you more freedom in arranging teams.

Fail Fast

When it comes to the “fail fast” approach, containers limit the scope of application code developers need to understand. To fix a bug, a developer (in most cases) only needs to understand how one container works, not the whole application (unless, of course, the issue spans many containers). So, it’s usually far easier to narrow down the potential issues and find the root cause.

And once it’s fixed, you can quickly deploy a new version of that one container, and you’re done. There’s no need for multiple teams to align to find the issue, or for an end-to-end test of the entire application when only a single piece of it was changed. No need for approvals and alignment across multiple departments to redeploy the whole application, either.

DevSecOps

But containers’ contribution to DevOps doesn’t stop here. In fact, containerization can help you upscale your DevOps practices to the increasingly popular DevSecOps. Since different parts of your application are packaged into small pieces, it’s easy to implement network security policies to, for example, keep traffic from flowing where it shouldn’t.

Testing also becomes easier because you only need to focus on a small part of an application when writing tests, and that directly decreases the chances of deploying buggy code.

Another feature of containers worth mentioning is the ability to do runtime security scanning. Traditionally, when your application runs directly on the operating system of your host server, there are dozens or hundreds of processes running next to it, and it’s hard to determine whether one of them contains malware. However, if your application runs inside containers and you have a good understanding of what should be running inside each of them, you can simply block other binaries from running in the container.

The Downside

With all the benefits of containers comes a downside: the cost of complexity. Networking becomes much more complicated, since containers need to talk to each other, usually over REST APIs. Instead of having only front-end-to-back-end and back-end-to-database connectivity, you’ll have dozens of connections, creating a complicated networking mesh.

The same applies to logging. You’ll no longer have one place to read logs from; each container will create its own logs. You’ll have to aggregate them, and it might become more difficult to get a general overview of the whole application. There are, however, tools like SolarWinds® Papertrail to help you with that. Papertrail can aggregate the logs from your containers and create a centralized, easy-to-understand overview of the application state. Managing container logs with Papertrail allows you to enjoy the benefits of containerization while still maintaining the ability to quickly identify and troubleshoot issues. Sign up for a free trial here.
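
Since Papertrail accepts logs over plain remote syslog, a minimal sketch of wiring a containerized Go service into it could look like the following; the host, port, and tag are placeholders you’d replace with the log destination from your own Papertrail settings:

package main

import (
	"log"
	"log/syslog"
)

func main() {
	// Replace the host and port with the log destination shown in your
	// Papertrail account; "my-container-app" is a made-up tag.
	w, err := syslog.Dial("udp", "logsN.papertrailapp.com:XXXXX",
		syslog.LOG_INFO|syslog.LOG_USER, "my-container-app")
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// Anything written through this logger lands in the aggregated,
	// searchable event stream next to logs from your other containers.
	logger := log.New(w, "", 0)
	logger.Println("order-service started, listening on :8080")
}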

Summary

As you can see, containers and DevOps are often put in the same box for good reason. They complement each other well. Containers make implementing DevOps easier, and DevOps helps extract the most value from containers. No one will stop you from packaging a badly designed monolithic application into a container, but in such a scenario, you won’t benefit much from containerization. Implement DevOps at the same time, and you’ll start breaking the monolith down into microservices, which will then uncover the benefits of containers.

And if you don’t want to end up losing visibility into your application and drowning in a sea of log files spanning multiple containers, install Papertrail and let it simplify the complexity for you.

This post was written by Dawid Ziolkowski. Dawid has 10 years of experience as a network/system engineer, DevOps engineer, and cloud-native engineer. He’s worked for an IT outsourcing company, a research institute, a telco, a hosting company, and a consultancy, so he’s gathered knowledge from many different perspectives. Nowadays, he’s helping companies move to the cloud and redesign their infrastructure for a more cloud-native approach.
