Syslog is a standard for collecting, routing, and storing log messages. It emerged from the Sendmail project in the 1980s, was documented as RFC 3164 in 2001, and was formally standardized as RFC 5424 in 2009. It’s supported on many platforms, including Unix/Linux, BSD, macOS, and network devices like printers and routers. Its ability to forward messages to a remote collector is a big reason the standard has lasted several decades.
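To make the remote-forwarding idea concrete, here’s a minimal Python sketch that sends application logs to a syslog daemon over UDP port 514, the traditional syslog transport. The `localhost` address and the `myapp` tag are placeholders; point them at whatever collector and application name you actually use.

```python
import logging
import logging.handlers

# Send log records to a syslog daemon over UDP port 514.
# "localhost" is a placeholder for your real syslog collector.
handler = logging.handlers.SysLogHandler(address=("localhost", 514))
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# This record is serialized and sent as a UDP datagram to the daemon.
logger.info("application started")
```

Because UDP is connectionless, the send succeeds even if no daemon is listening, so unreceived messages are silently lost; for delivery guarantees, RFC 5424 deployments typically use TCP or TLS transports instead.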
Container logging adds complexities we don’t experience with VMs or dedicated hardware. When a container dies, its logs and data die with it. This may not be a problem for small applications with little logging. But for more complex applications, or applications running in production, you need to start thinking about log persistence and management.
Let’s paint a picture. You’re a developer working on a system broken down into multiple services. Each service does one thing well, and they all communicate with one another. However, it’s 3 a.m., and you get a call from your boss telling you something’s wrong. People can’t complete the tasks they’re supposed to complete, and there are errors on the webpages they’re using. You’re the first responder and need to figure out what’s going on.
A good application usually has some sort of logging to provide clues when something goes wrong. When it doesn’t, I have to spend time making sure I can get some basic information logged when an issue occurs. In those cases, I used to write files in /var/log for every application or scheduled task, but I eventually reached a point where there were too many files to review whenever I had to troubleshoot. To simplify troubleshooting, I changed my approach to an easier and more convenient system. This post describes what I came up with.
As the tasks you automate with PowerShell grow more complex, their logs grow larger too. Take a look at this huge log that was output from executing commands in PowerShell:
Local Kubernetes clusters are great for both developers and system engineers. When developing applications, programmers can use local clusters to make sure their application can be easily and correctly deployed without the need for configuring real infrastructure. System engineers can use them for testing, creating proofs of concept (POCs), or learning and trying new tools.
Regardless of what language you code in or what type of apps you’re working on, you’re going to end up reading log files. They’re your window into what’s happening inside your code or the server you’re talking to. Linux log management is one of the skills that set an experienced developer apart from the rest.
Logs are a ubiquitous component of IT. They come in all shapes and sizes from a huge variety of sources and possible destinations. But at the end of the day, all types of logging serve a fundamental role in a technological infrastructure: they allow a system to record information about its behavior to a persistent medium. People can then look at this information and reconstruct what happened so they can detect and fix whatever issues they might find.
Server logs are an indispensable part of web development. There’s no such thing as a perfect website—even one owned by a big tech company is likely to have errors in production. Using web server logs, you can pinpoint where a problem is coming from and resolve it promptly. The server creates these logs automatically: files containing errors, requests made to the server, and other information worth looking at.
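For example, here’s a small Python sketch that pulls the interesting fields out of one access-log entry. It assumes the common Apache/Nginx log format, and the sample line is made up:

```python
import re

# One line in the Apache/Nginx common log format; a real file holds
# one such line per request. This sample line is fabricated.
line = '203.0.113.7 - - [10/Oct/2023:13:55:36 +0000] "GET /checkout HTTP/1.1" 500 1024'

# Extract the fields that matter when hunting an error: client IP,
# timestamp, request line, status code, and response size.
pattern = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\S+)'
)

match = pattern.match(line)
if match and match.group("status").startswith("5"):
    # Server-side errors (5xx) are the first place to look.
    print(match.group("time"), match.group("request"))
```

Scanning a whole log this way (one `pattern.match` per line) quickly surfaces which requests are failing and when.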
Creating alerts to spot problems before your users do is simple. However, too many alerts lead to alert fatigue: when everything pages you, you end up ignoring the critical problems.