A Comprehensive Guide to Checking Docker Logs

Hi there! Docker logs provide crucial insights into container activity – but accessing and leveraging logs effectively takes some knowledge. In this guide, I'll walk you through Docker logging, so you can debug containers and understand what's happening under the hood.

Why Docker Logs Matter

First, what exactly are container logs?

Docker logs capture everything the container's main process writes to stdout and stderr. This includes:

  • Command output (echo, print statements, etc.)
  • Application log messages
  • Errors and stack traces
  • Warnings and fatal errors
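
To see this in action, here's a quick sanity check using a throwaway alpine container (the name log-demo is arbitrary):

# Write one line to stdout and one to stderr
docker run --name log-demo alpine sh -c 'echo "hello from stdout"; echo "oops" >&2'

# Both streams show up in the container's logs
docker logs log-demo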

Capturing this output serves several purposes:

  • Debugging – inspect logs to troubleshoot crashes, performance issues, and unexpected behavior.
  • Auditing – logs provide a history of what operations were run and their output.
  • Monitoring – observe activity inside containers.
  • Security – detect suspicious events like unauthorized access attempts.

Without logs, you are flying blind – unable to view container events or debug issues.

Now that you know why logs matter, let's cover how to access them…

Viewing Basic Docker Logs

The docker logs command outputs logs for a running or stopped container:

docker logs [OPTIONS] CONTAINER_NAME|CONTAINER_ID

Let's try this for a MySQL container named my-db:

docker logs my-db

This prints MySQL start logs to stdout:

2021-10-12T22:33:00.048349Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.21'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server - GPL.

Note: This works with the default json-file logging driver, which writes logs to local JSON-formatted files.
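
Not sure which driver a container uses? A quick docker inspect reveals it:

docker inspect --format '{{.HostConfig.LogConfig.Type}}' my-db

This prints the driver name, e.g. json-file.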

Now let's cover key log command options…

Docker Log Command Options

The docker logs command supports handy options to filter and format log output:

docker logs [OPTIONS] CONTAINER

Let's explore log options to tail, filter, and customize logs:

Tail Logs

By default, docker logs prints the entire contents of the log.

Use the -n or --tail option to show the last N lines:

docker logs --tail=10 my-db

This prints the last 10 lines only – great for observing recent events.

Filter Logs by Time

Options like --since and --until filter logs by timestamp:

# Logs since specific time
docker logs --since="2021-10-12T22:00" my-db  

# Logs before specific time
docker logs --until="2021-10-12T22:30" my-db

Times can be RFC3339 date-times, Unix timestamps, or Go duration strings like 10m (10 minutes ago).
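
The two options also combine to select a specific window, and durations are handy for recent activity:

# Only the last 10 minutes of logs
docker logs --since=10m my-db

# Only a specific half-hour window
docker logs --since="2021-10-12T22:00" --until="2021-10-12T22:30" my-db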

Follow/Stream Logs Continuously

Add -f or --follow to continuously stream new logs rather than a one-time snapshot:

docker logs -f my-db

Think of it like tail -f – useful for watching logs in real time.
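
In practice, pairing --follow with --tail starts the stream with recent context instead of replaying the whole log:

# Print the last 20 lines, then keep streaming new output
docker logs -f --tail=20 my-db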

Formatting: Timestamps, Details, etc.

Formatting options include:

  • -t/--timestamps – Prefix each log line with a timestamp
  • --details – Show extra details provided to logs
  • -n N/--tail N – Show only the last N lines

More output customization is covered in the Docker CLI reference for docker logs.
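
For example, timestamps pair nicely with tailing when correlating events across containers:

# Show the last 5 lines, each prefixed with an RFC3339Nano timestamp
docker logs -t --tail=5 my-db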

Centralized Logging with ELK Stack

The previous sections covered accessing local Docker logs. In production, though, most teams use centralized logging to:

  • Aggregate logs across hosts into a unified view
  • Enable richer analysis, dashboards, and alerting
  • Support cloud deployments that span many machines

Popular stacks include ELK (Elasticsearch + Logstash + Kibana) and Google Stackdriver.

Let's see an ELK deployment…

ELK Logging Architecture Overview

A typical container logging architecture with ELK:

[Figure: ELK logging architecture]

It involves:

  1. Containers with app code write stdout/stderr, captured by the json-file driver
  2. The JSON logs are forwarded to a Logstash agent
  3. Logstash parses and transforms the data into the structured format Elasticsearch expects
  4. Logstash loads the structured logs into an Elasticsearch cluster
  5. Kibana reads the data in Elasticsearch to build searchable dashboards

Additional components like Redis and Kafka provide queueing and buffering capabilities.
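
As a sketch of the forwarding step, one option is Docker's built-in gelf logging driver, which ships container output straight to a Logstash GELF input. The address below is a hypothetical endpoint, and my-app is a placeholder image:

# Send this container's logs to a (hypothetical) Logstash GELF input
docker run -d \
  --log-driver=gelf \
  --log-opt gelf-address=udp://logstash.example.com:12201 \
  my-app

Note that with a remote driver like gelf, docker logs may no longer show output locally on older Docker versions, since logs bypass the local json-file store.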

ELK Stack Components

So what is the role of each component?

  • Elasticsearch – a scalable search and analytics engine that stores, searches, and aggregates logs
  • Logstash – transports and processes logs from sources like Docker containers
  • Kibana – the visualization layer for exploring logs and building dashboards

Together these make up the ELK stack – a popular centralized logging pipeline.
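
To experiment locally, a rough single-node sandbox can be stood up with plain docker run commands. This is a minimal sketch, not a production setup, and the 7.17.9 image tags are just examples:

# A shared network so Kibana can reach Elasticsearch by name
docker network create elk

# Single-node Elasticsearch
docker run -d --name elasticsearch --net elk -p 9200:9200 \
  -e discovery.type=single-node \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.9

# Kibana pointed at that Elasticsearch instance
docker run -d --name kibana --net elk -p 5601:5601 \
  -e ELASTICSEARCH_HOSTS=http://elasticsearch:9200 \
  docker.elastic.co/kibana/kibana:7.17.9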

Benefits

Why implement the ELK stack or similar central logging?

Log Aggregation across hosts – most teams run Docker clusters across multiple physical or virtual machines. Central logging aggregates these dispersed logs into a unified view.

[Figure: Centralized logging architecture]

Enhanced Analysis – search and report across all your logs using Elasticsearch queries.
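
As a hedged example of what that enables – assuming logs land in an index pattern like logs-* with level and @timestamp fields, which will vary by pipeline – you can count recent errors straight from the Elasticsearch API:

# Count ERROR-level entries from the last hour (index and field names are assumptions)
curl -s 'http://localhost:9200/logs-*/_count' \
  -H 'Content-Type: application/json' \
  -d '{"query": {"bool": {"must": [
        {"match": {"level": "ERROR"}},
        {"range": {"@timestamp": {"gte": "now-1h"}}}
      ]}}}'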

Retention Policies – persist logs long-term instead of relying on small, rotated local JSON files. Regulatory compliance often requires log history.

Alerting – trigger alerts on specific log events, like signs of an intrusion or an outage.

There are more benefits, but these are the primary drivers.

Wrapping Up

So in this guide, we went all the way from basic docker logs to production-grade centralized logging!

We covered:

  • Accessing Docker container stdout/stderr
  • Log command options – filtering and tailing
  • Streaming logs continuously
  • Production logging with the ELK stack

Effective logging uncovers what's happening inside your containers – crucial for debugging and visibility.

For any questions on implementing container logging pipelines, please reach out!