Docker is the most widely used container platform today. But what is Docker, and what do you need to know about it? How can Docker help you manage your IT infrastructure? Can it help you lower costs and improve uptime?
In this post, we’ll cover what Docker is, what it’s good for, and what it’s not so good for. Then, we’ll show you how to get started using it quickly.
What Is Docker?
To explain Docker well, we first need to discuss containers. Containers bundle application code with operating system (OS) libraries into a runnable package. This means the application can run on any system that supports the container format, even if it runs on an entirely different OS.
Let’s consider an example. A PostgreSQL container could contain the Linux version of the database server combined with the runtime libraries for Debian Linux. This container runs on any computer (referred to as the “host”) that supports the container—including macOS, Windows, or other Linux distributions, such as CentOS.
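As a sketch of what this looks like in practice, here's how you might start that PostgreSQL container with Docker (the container name and password below are illustrative placeholders):

```shell
# Pull and run the official PostgreSQL image in the background.
# POSTGRES_PASSWORD is required by the image; the value is a placeholder.
docker run -d \
  --name my-postgres \
  -e POSTGRES_PASSWORD=example-password \
  -p 5432:5432 \
  postgres:16
```

The same command works on macOS, Windows, or any Linux host with Docker installed.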
Why Use Containers?
For developers, containers simplify the delivery of applications. Instead of supporting multiple operating systems and versions, you provide a container that works almost anywhere.
For infrastructure operators, containers simplify running applications. You don’t have to manage multiple operating systems or versions. Your systems only need to support the container runtime. You can commingle applications that require different operating systems on the same host.
Containers allow you to run various applications on the same host, even if they require different operating systems or conflicting library versions. As a result, you can use your infrastructure more efficiently.
Docker: A Container Platform
Docker is an open-source containerization system. Although you can use tools other than Docker to package your applications into containers, Docker makes working with containerized applications simple. It supports Linux, macOS, Windows, and the major cloud platforms.
Docker has tools for creating, distributing, starting, and stopping containers. The containers can share network resources with the host, run with their virtual network visible only to other containers, or do both. Docker also has robust tools for managing how containers communicate and scale together.
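For instance, Docker's networking commands let you create a virtual network visible only to the containers attached to it (the network, container, and image names here are illustrative):

```shell
# Create a user-defined bridge network; containers on it can reach
# each other by name but are isolated from other networks.
docker network create app-net

# Attach two containers to the network.
docker run -d --name web --network app-net nginx
docker run -d --name cache --network app-net redis

# Inside "web", the hostname "cache" now resolves to the redis container.
```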
Containers are not virtual machines
While containers are a way for applications to share system resources, they’re not virtual machines (VMs). A VM is a computer within a computer. It runs a discrete copy of an OS, and an application running in a VM “sees” virtualized disks, network adapters, and video cards. While it may still be sharing a host with other virtual machines, it’s more isolated than an application in a container.
Typically, it's best practice for a single container to run a single application. If you need to run more than one app, run more than one container; all containers on a host share that single host OS. If your applications must run together inside one OS instance, you may need a virtual machine instead.
Why Use Docker?
We’ve already covered how containers make it easier to package, distribute, and run applications. But those aren’t the only advantages containers provide.
By isolating applications and their dependencies within a container, you also separate them from one another. Docker containers can only access the resources you specify when they are run. They are isolated from their host systems’ file systems and networks by default. This isolation provides an additional layer of protection against application and OS library vulnerabilities. If an attacker compromises a container, they’ve only gained access to the resources the container can see.
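As an illustration, you can spell out exactly which resources a container may use when you start it (the paths, limits, and image below are examples, not requirements):

```shell
# Run a container with an explicit memory cap, a CPU limit,
# a single read-only host directory, and a read-only root filesystem.
docker run -d \
  --memory 256m \
  --cpus 0.5 \
  -v /srv/app/config:/etc/app:ro \
  --read-only \
  nginx
```

Anything not granted here, such as other host directories or extra memory, stays out of the container's reach.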
Docker containers use fewer resources than a VM. So, in theory, a host system can run more containerized applications than it can run applications in virtual machines.
Containers also start much faster than virtual machines. A virtual machine must create the virtualized system, boot the OS, and run your application. Containers only run their contained application. This makes containers useful for applications that utilize microservices, as they need to be started and stopped quickly to adjust to system demand.
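As a rough illustration (exact timings vary by machine), you can see this startup speed for yourself by timing a minimal container:

```shell
# Start a minimal container, run a single command, and remove it.
# Once the alpine image is cached locally, this typically completes
# in well under a second.
time docker run --rm alpine echo "hello"
```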
Docker is also useful for supporting applications with special requirements, such as older operating systems or libraries that conflict with other applications. Instead of maintaining a dedicated host for one application, you can run it in a container.
When would you not use Docker?
While Docker containers provide process isolation and security benefits, they share the host operating system kernel, which means they’re not as isolated as virtual machines. If you need complete OS-level isolation for maximum security—such as when running untrusted code or applications that require strict separation from other workloads—virtual machines may be a better choice.
Additionally, Docker may not be ideal for applications that require direct hardware access, specific kernel modules, or legacy software that depends heavily on particular OS configurations. In these cases, running applications directly on the host or in dedicated virtual machines might be more appropriate than containerization.
Getting Started With Docker
You can get started with Docker on your desktop system in just a few minutes. Simply download Docker for your operating system. Docker also has a helpful Get Started guide with everything you need to know to begin.
After you’ve downloaded Docker to your system, the Getting Started guide demonstrates how easy it is to create a new web server using Docker.
A container image is a packaged application with all its dependencies included. When you run an image, Docker creates a container—the live, executing version of that application on your system. To download the docker/getting-started image from Docker Hub and start it in a container, run the following command:
docker run -d -p 80:80 docker/getting-started
When you navigate to http://127.0.0.1 in your browser, you’ll see the main page of the new web server that this container created.
Let’s try another Docker command before we move on to some troubleshooting.
docker run --rm -it ubuntu
root@9f0f108a9215:/# ls
bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run
sbin srv sys tmp usr var
root@9f0f108a9215:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 18:22 pts/0 00:00:00 bash
root 10 1 0 18:22 pts/0 00:00:00 ps -ef
In the example above, we ran an image named ubuntu. It’s an Ubuntu runtime you can use as a starting point for a Linux-based container. The -it flags tell Docker to keep the container’s standard input open (-i) and attach a terminal to it (-t); as soon as the container starts, we’re dropped into a shell running inside the container.
The ps command shows that the only process running in the container is the shell we’re in, which spawned the ps command when we typed it.
Common Docker Problems
Service port conflicts
The first command in the Getting Started guide maps TCP service port 80 in the container to the same port on the host system. The first number after -p is the host port; the second is the container port.
docker run -d -p 80:80 docker/getting-started
But what if there was already a web server running on the host system? This is what you would see:
docker run -d -p 80:80 docker/getting-started
13b8f062c4324bdfb657109d08235452798079db597bebb7d1b57f390f5f231f
docker: Error response from daemon: driver failed programming external connectivity on endpoint compassionate_liskov (e434df0a2fc24aad1516474f9d3461fe7ad66f6e47977f34e11f5cc4a3d068aa): Bind for 0.0.0.0:80 failed: port is already allocated.
The container will fail to start since the port is already in use. Failing to map a port or mapping it incorrectly is one of the most common problems with Docker containers.
We can fix this problem by using a different port on our host. For example:
docker run -d -p 8080:80 docker/getting-started
This maps port 80 in the container to port 8080 on the host.
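To track down a conflict like this, you can check whether a container is already bound to the port, and confirm your mapping after the fix (output format may vary by Docker version):

```shell
# List running containers and their port mappings;
# look for an entry like 0.0.0.0:80->80/tcp.
docker ps --format "table {{.Names}}\t{{.Ports}}"

# If the conflicting listener isn't a container, ask the OS which
# process owns port 80 (Linux; root is needed to see process names).
sudo ss -ltnp 'sport = :80'
```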
Docker Logging
Docker containers log to the local Docker daemon. You can view these logs using the docker logs <container name> command. SolarWinds Papertrail provides a comprehensive tutorial on working with Docker logs, as well as a Docker logging guide.
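For example, here are a few common ways to inspect a container's local logs (the container name "web" is illustrative):

```shell
# Show the full log for a container named "web".
docker logs web

# Follow the log live, starting from the last 50 lines.
docker logs --follow --tail 50 web

# Include timestamps and limit output to the last ten minutes.
docker logs --timestamps --since 10m web
```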
Because local logs are difficult to work with—especially if you run multiple containers—you’re better off routing all your Docker logs to a central location.
SolarWinds Papertrail and Docker
When you aggregate your Docker logs, you combine the efficiency and scalability of containers with the advantages of centralized observability. You can search the logs, tail them from the central console, provide access to your team, and generate charts and analytics from log data.
SolarWinds Papertrail offers several methods for aggregating your container logs, including remote syslog and an OTel-based integration. Both options enable you to easily add log aggregation to your container infrastructure and search multiple log streams from a single search bar.
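As one example of the remote syslog approach, Docker's built-in syslog log driver can forward a container's logs to a remote endpoint; the hostname and port below are placeholders for your own Papertrail log destination:

```shell
# Send this container's logs to a remote syslog endpoint over UDP.
# Replace logsN.papertrailapp.com:XXXXX with your log destination.
docker run -d \
  --log-driver syslog \
  --log-opt syslog-address=udp://logsN.papertrailapp.com:XXXXX \
  --log-opt tag="{{.Name}}" \
  docker/getting-started
```

The tag option labels each log line with the container name, which makes it easier to tell streams apart in a central console.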
Sign up for a free SolarWinds Papertrail trial today and see how easy Docker log management can be.
Docker and You
In this post, we covered what Docker is and how it can help you use your infrastructure more efficiently. We also discussed situations where containers might not be a good fit. We then took a brief look at how to get started with Docker by examining a pair of examples. Finally, we discussed how SolarWinds Papertrail can help manage your container infrastructure.
Containers have enjoyed widespread adoption over the past few years, and now you know why. Get started today with Docker and SolarWinds Papertrail!
This post was written by Eric Goebelbecker. Eric has worked in the financial markets in New York City for 25 years, developing infrastructure for market data and financial information exchange (FIX) protocol networks. He loves to talk about what makes teams effective (or not so effective).
Need something more advanced? Check out SolarWinds Docker Monitoring