Every container produces logs with valuable information. A log is simply data the container writes to STDOUT or STDERR. However, if you run a container in detached mode, you can’t see the logs in your console: detached mode runs the container in the background, disconnected from your terminal, so you won’t see any logging or other output from your Docker container.
To display these valuable logs, you can use a range of Docker log commands. And in this article, you’ll learn how to display Docker logs in your console, tips and tricks related to displaying logs, and how to configure Docker to send logs to SolarWinds® Papertrail™.
Let’s learn first about the importance of Docker logs.
Docker logs are important for developers and DevOps engineers. Logs are most useful when debugging or analyzing a problem because they offer insights into what went wrong. Thus, developers and engineers can solve issues faster.
In addition, you can apply trend analysis to Docker logs to detect anomalies. For example, Loggly offers this feature for any type of log, so you can detect anomalies faster and shift from reactive to proactive monitoring.
Imagine we’re running a container and want to access the logs for this container. How can we accomplish this task?
First, we can use the following command to list our containers (the -a flag also includes stopped containers):
docker ps -a
This command prints a list of containers. You’ll see their IDs, the images they’re based on, their start commands, how long the containers have been running, and other details.
For us, the most important parameter is the container ID, which we will use in the next step.
CONTAINER ID   IMAGE                 COMMAND                  CREATED       STATUS
fcae22fa4144   xyz/my-image:latest   "/opt/bin/entrypoint-"   2 hours ago   Up 2 minutes
155206af2f03   abc/my-image:latest   "/entrypoint.sh mysql"   2 hours ago   Up 2 minutes
Now that we’re sure our container is running, let’s use the container ID to see all its logs.
Enter the following command in your command-line interface (CLI), replacing <container ID> with your container’s ID:
docker logs <container ID>
Although this will show us the logs, it won’t allow us to view continuous log output. In Docker jargon, we refer to creating a continuous stream of log output as tailing logs. To tail the logs for our container, we can use the follow option.
docker logs --follow <container ID>
Next, let’s explore more interesting tricks related to displaying Docker logs.
Here are three useful logging tricks you can access through your CLI.
In some cases, you don’t want to see all logs for a container. Perhaps something happened, and you want to quickly verify the latest 100 logs for your container. In this case, you can use the tail option to specify the number of logs you want to see:
docker logs --tail 100 <container ID>
Docker also provides the option to stream logs only up to a specific point in time, so you don’t have to create a never-ending stream of logs. For example, if you only want to know whether the container started successfully, you don’t need everything written since then. Here, you can use the until option together with the follow option. The until option accepts either a relative duration (measured back from now) or an absolute timestamp, and Docker shows only the logs written before that point.
docker logs --follow --until=3s <container ID>
You can use different notations to designate the cutoff time. For example, to see all logs written up until 30 minutes ago, use the following command:
docker logs --follow --until=30m <container ID>
The opposite action is also possible with Docker CLI options. Let’s say you want to see the logs from a specific point in time until now. The since option helps with this task.
docker logs --since 2019-03-02 <container ID>
The accepted format here is YYYY-MM-DDTHH:MM:SS. This means you can specify a very accurate timestamp from which you want to display logs, or a less specific timestamp, as shown in the example above.
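If you want a full, to-the-second timestamp in this format, you can generate one with the standard date command. This is a minimal sketch assuming a Unix-like shell; the container ID is a placeholder you’d replace with your own:

```shell
# Build a YYYY-MM-DDTHH:MM:SS timestamp (UTC), the format --since accepts.
SINCE=$(date -u +%Y-%m-%dT%H:%M:%S)
echo "$SINCE"

# Then pass it to docker logs, for example:
# docker logs --since "$SINCE" <container ID>
```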
In this section, we’ll give you a simple example of how you can configure Docker to send logs to Papertrail.
First, you can run a logspout container, which allows you to configure an address to send logs to. The example below starts a logspout container configured to send logs to Papertrail.
docker run --restart=always -d \
  -v=/var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog+tls://logsN.papertrailapp.com:XXXXX
Alternatively, you can configure the syslog driver to tell the container where to send logs. Make sure to change the syslog-address property to your custom Papertrail address.
docker run --log-driver=syslog --log-opt syslog-address=udp://logsN.papertrailapp.com:XXXXX image-name
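If you want this behavior for every container on the host rather than passing flags to each docker run, you can set the syslog driver as the default in the Docker daemon configuration. This is a sketch, typically placed at /etc/docker/daemon.json; replace the address with your own Papertrail endpoint and restart the Docker daemon afterward:

```
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://logsN.papertrailapp.com:XXXXX"
  }
}
```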
You can find more information in the Papertrail tutorial.
You can easily create and destroy containers. However, every time a container is destroyed and re-created, you lose all the data stored inside it. Therefore, never store application-specific data in your container.
For the same reason, you should take good care of your logs. You can persist logs to a volume, but it’s even better to store them long-term, independent of the container’s life cycle. For example, you can write logs to the host’s hard drive, or you can send them to a log management platform. Both options let you keep your logs long-term and use them for future analysis.
A logging container helps you scale your logging. The idea is that you pipe your logging output from multiple containers to a logging container. Next, your logging container takes care of saving logs to persistent storage or a log management platform.
Also, you can spin up multiple logging containers to scale your logging service when you decide to host more containers. It’s a flexible and easy solution for handling log output.
To be able to aggregate logs from your containers, you need to make sure the applications running in those containers log data to STDOUT or STDERR, both standard channels for logging output messages or error messages. Docker is configured to automatically pick up data from both outputs. If you log data to a file inside your container, you risk losing all this data when the container crashes or restarts. Therefore, if you don’t want to lose important logging data, it’s important to log to STDOUT or STDERR.
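As a minimal illustration, an application entrypoint that writes to the standard streams instead of a file might look like this. It’s a sketch: Docker’s logging driver captures both streams, so both lines show up in docker logs.

```shell
#!/bin/sh
# Write normal output to STDOUT and errors to STDERR; Docker captures both,
# so `docker logs` shows these lines. Nothing is written to a file inside
# the container, so nothing is lost on a crash or restart.
echo "service started"                  # goes to STDOUT
echo "failed to reach database" >&2     # goes to STDERR
```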
Docker supports the JSON logging format, and logging data in this format is recommended. Docker’s default logging driver stores logs as JSON files, so Docker is optimized to handle JSON data.
For this reason, many Node.js logging libraries such as Bunyan or Winston prefer to log data using the JSON format.
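For illustration, a structured log event written as a single JSON line on STDOUT might look like this. The field names here are illustrative, not a requirement of any particular library:

```shell
# Emit one structured log event as a single JSON line on STDOUT,
# similar in spirit to what Bunyan or Winston produce.
echo '{"level":"info","msg":"server listening","port":8080}'
```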
As you’ve seen, Docker provides multiple CLI options to display logs in different formats. For example, you can display logs for a specified point in time. In addition, the follow option is one of the most used Docker options because it allows developers to live tail logs for specific containers.
Lastly, you can configure Docker to transport logs to a log management platform, such as Papertrail. This also helps to visualize your logs. Papertrail allows you to easily monitor logs and provides developers with the ability to create alarms to warn them if an anomaly is detected.
Want to give it a try? You can sign up for a free trial of Papertrail now and see how it can work for you.
This post was written by Michiel Mulders. Michiel is a passionate blockchain developer who loves writing technical content. Besides that, he loves learning about marketing, UX psychology, and entrepreneurship. When he’s not writing, he’s probably enjoying a Belgian beer!