Today, DevOps teams support both innovation and routine operations in modern organizations. These teams rely on an evolving set of tools and platforms to enhance their continuous integration and continuous delivery (CI/CD) practices. Docker is one such platform, helping teams deliver applications reliably across multi-cloud setups. Because Docker is built on industry-standard, open-source technologies, strong community support has led organizations to adopt the platform with open arms. Docker facilitates frequent releases with higher reliability by packaging all application dependencies within a container. This lets developers be confident their application will behave as expected throughout its journey from testing to production environments.
Logging in Docker
Though Docker has streamlined application delivery to a great extent, it has also introduced a new set of challenges. The ephemeral nature of Docker containers makes logging difficult, and traditional tools and practices for infrastructure monitoring aren't well suited to Docker and cloud-native environments.
Traditionally, developers relied on Docker's remote API to access container logs, using the "docker logs" command and a few supported log shippers. Later, logging drivers were introduced to improve Docker logging. A logging driver receives a container's log output and forwards it to a destination: the default json-file driver writes logs to the local disk, while built-in drivers such as syslog and fluentd can forward them elsewhere. You can also use logging driver plugins offered by several commercial and open-source tools, such as Loggly, Fluentd, Splunk, and more.
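As a sketch, a logging driver can be selected per container with the --log-driver flag, or set as the daemon-wide default in /etc/docker/daemon.json (the Fluentd address below is a hypothetical local instance):

```shell
# Run a single container with the fluentd logging driver
# (localhost:24224 is Fluentd's default forward port; adjust for your setup).
docker run -d --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  nginx

# Or set the default driver for all new containers in /etc/docker/daemon.json:
# {
#   "log-driver": "syslog",
#   "log-opts": { "syslog-address": "udp://localhost:514" }
# }
# Then restart the daemon, e.g.: sudo systemctl restart docker
```

Note the daemon.json default applies only to containers created after the daemon restart; existing containers keep the driver they were started with.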
Challenges With Docker Logging
While logging drivers have simplified things, they can still create issues. The "docker logs" command, which is used to inspect logs, works only with drivers that store logs locally, such as json-file and journald. If you use an alternative logging driver, the command fails with an error, and you can monitor the logs only at the destination. Similar issues can also arise with tools relying on the Docker API for log access.
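To illustrate (on a host where the syslog driver is configured), starting a container with a non-local driver makes "docker logs" unusable for that container; the container name "web" is a placeholder:

```shell
# Start a container whose logs go to syslog instead of local JSON files
docker run -d --name web --log-driver=syslog nginx

# docker logs now fails, because there is no local log file to read;
# the daemon reports that the configured driver does not support reading
docker logs web
```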
Further, teams often prefer TCP or TLS connections for secure, reliable delivery of logs using the Docker syslog driver. However, network failures or latency while establishing the TCP connection can cause both container deployment and logging to fail. Recovering these logs can be a challenge, as they aren't buffered locally before successful delivery to the remote destination.
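For example, the syslog driver accepts a tcp:// or tcp+tls:// address; if the connection can't be established when the container starts, the run itself fails rather than buffering logs locally (the hostname and certificate path below are placeholders):

```shell
# Ship logs to a remote syslog endpoint over TLS.
# If logs.example.com is unreachable at startup, docker run fails.
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=tcp+tls://logs.example.com:6514 \
  --log-opt syslog-tls-ca-cert=/etc/ssl/certs/ca.pem \
  nginx
```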
Resolving Docker Logging Challenges
The json-file or journald logging driver offers a simpler approach to Docker logging. Journald simplifies container log monitoring with easier filtering than json-file: you can use various metadata fields to search for specific log messages. However, modern microservices-based applications often run many containers, which can produce a large volume of logs. Inspecting these containers one by one with the "docker logs" command isn't practical, so you need to store your container logs in a remote location instead of on the local disk.
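As an illustration of journald's field-based filtering (the CONTAINER_NAME field is part of the journald driver's metadata; "web" is a placeholder container name):

```shell
# Run a container with the journald driver
docker run -d --name web --log-driver=journald nginx

# Filter the journal down to that one container's messages
journalctl CONTAINER_NAME=web

# Or follow them in real time
journalctl -f CONTAINER_NAME=web
```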
Monitoring Docker With Cloud Logging Tools
Centralized logging offers a smart way to manage logs reliably, preventing accidental deletion even when your containers shut down. A centralized location for logs also facilitates easier analysis and event correlation. You can explore cloud-based log management tools to simplify the initial configuration of a centralized system. Here are some of the reasons why a commercial cloud logging tool can be a better option than a self-managed open-source logging solution:
Hosted or cloud-based logging tools are built to resolve the operational challenges teams encounter while building a logging solution from scratch. This means you can start monitoring your Docker logs without spending time setting up servers and configuring several open-source tools.
Whenever you face technical issues or need assistance with upgrades or account management, you can rely on the vendor to provide dedicated support as per your service levels.
Most cloud-based commercial logging solutions offer flexible pricing with clear visibility into your spending. This is unlike open-source tools, where organizations often fail to account for various hidden costs. Moreover, with self-managed solutions, the time and resources involved in maintaining servers tend to build up and become unmanageable over time.
How to Troubleshoot Issues With Docker Logs
Make Use of Tagging
Tags are identifiers that help you slice and dice your data and extract specific information from logs. In live environments, when you have to troubleshoot application issues, tags help you skim through numerous container logs quickly. By default, Docker tags log messages with the first 12 characters of the container ID; you can customize this identifier with the tag log option, using template markup such as the container name or image name.
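A sketch of customizing the tag (the template fields come from Docker's log tag templating; the syslog address is a placeholder):

```shell
# By default the tag is the first 12 characters of the container ID.
# Override it with a template combining the image name and container name,
# so log lines are easy to attribute at the destination.
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=udp://localhost:514 \
  --log-opt tag="{{.ImageName}}/{{.Name}}" \
  nginx
```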
Ensure Real-time Monitoring
You can use the --follow option of the "docker logs" command to monitor container logs in real time. This real-time monitoring feature is also called "live tail," and most log monitoring solutions offer it as well. Real-time monitoring is essential for keeping track of containers in the production environment.
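For instance (with "web" as a placeholder container name):

```shell
# Stream new log lines as they arrive (Ctrl+C to stop)
docker logs --follow web

# Combine with --tail to start from the last 50 lines,
# and --timestamps to prefix each line with its time
docker logs --follow --tail 50 --timestamps web
```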
However, as discussed earlier, if your containers send logs to a remote destination, network errors might disrupt log collection. To avoid blocking container startup, you can consider UDP connections: unlike TCP, UDP doesn't wait for a connection to be established, although it offers no delivery guarantees. Alternatively, you can set up a syslog server on the host. Using a dedicated syslog container to ship logs to the remote server is another approach that helps mitigate network interference issues.
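One way to sketch the dedicated syslog container approach (the image name and ports here are illustrative; any syslog server that listens on UDP 514 would work, and it would forward logs onward to your remote destination):

```shell
# Run a syslog server in its own container on the host
# (balabit/syslog-ng is one commonly used image).
docker run -d --name syslog -p 514:514/udp balabit/syslog-ng

# Point application containers at it over UDP, so a slow or failed
# remote connection doesn't block container startup the way TCP can.
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=udp://127.0.0.1:514 \
  nginx
```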
Set up Alerts
You can configure your Docker log viewers to raise alerts for specific events or whenever a predefined threshold is breached. These alerts will enable your team to resolve critical issues on priority. Most logging-as-a-service providers offer easy integration with tools like Slack, PagerDuty, HipChat, and more.
Where to Get Started?
We discussed commercial logging tools and how, by shielding you from operational worries, they can help you extract the most out of your logs and free up time for continuous improvement of your containerized applications. Advanced, feature-rich products like Splunk, Sumo Logic, and LogDNA offer a wide range of capabilities, including machine learning (ML)-based analytics, preconfigured visual dashboards, automated compliance reporting, and more. These enterprise offerings are better suited for setting up a large-scale network or security operations center. However, keep in mind these solutions have a steeper learning curve, and it will take some time before your organization starts extracting true value from them.
On the other hand, tools like SolarWinds Papertrail offer a focused set of capabilities, including centralized log management, search, real-time monitoring, and alerts. You may find Papertrail highly useful for Docker logging, as it adds agility to your operations with a small footprint. It supports a wide range of log formats and can help you filter and monitor different infrastructure and application logs simultaneously with its event viewer. Further, you can live tail container logs to stay on top of your environment. The tool also offers a powerful command-line interface for developers. Moreover, Papertrail offers a free trial to help you get accustomed to its features and interface. Once you've evaluated the product, you can customize a plan to fit your organization's needs.