Docker: Storage & Security

Learn to persist data with volumes, configure networks, and implement security best practices for production containers.

Docker in Practice

This page covers the practical aspects of working with Docker: persisting data, configuring networks, and securing your containers. These topics become essential as you move beyond simple experiments into real-world deployments.

Docker is a platform for developing, shipping, and running applications through containerization: it packages an application and its dependencies into a lightweight, portable container that runs consistently across environments.

Docker Workflow

Workflow diagram: BUILD (docker build turns a Dockerfile into an image) → SHIP (docker push and docker pull move the image through a Docker registry) → RUN (containers are started from the image).

Installing Docker

Before you can use Docker, you need to install it on your system. The installation process varies by operating system, but the result is the same: a working Docker daemon that can build and run containers.

Choose your platform: most users will want Docker Desktop (Windows/Mac) or Docker Engine (Linux). The steps below cover the Linux installation.

Linux Installation

Ubuntu/Debian Quick Install
# Install Docker using the convenience script
curl -fsSL https://get.docker.com | sudo sh

# Add your user to the docker group (logout required)
sudo usermod -aG docker $USER

# Verify installation after logging back in
docker --version

The convenience script handles repository setup automatically. For production systems, see the official installation guide for manual setup.

Post-Installation Setup
# Enable Docker to start on boot
sudo systemctl enable docker

# Verify everything works
docker run hello-world

Once Docker is installed, you’ll interact with it primarily through the command-line interface. Let’s explore the essential commands that will become part of your daily workflow.

Common Docker CLI Commands

Images

  • `docker images` - List all local images
  • `docker pull <image>:<tag>` - Download image from registry
  • `docker rmi <image>:<tag>` - Remove an image

Containers

  • `docker ps` - List running containers
  • `docker ps -a` - List all containers
  • `docker run -it --rm --name <name> <image>:<tag>` - Run interactive container
  • `docker stop <container>` - Stop a running container
  • `docker rm <container>` - Remove a container

Container Operations

  • `docker logs <container>` - View container logs
  • `docker exec -it <container> <command>` - Execute command in container

Building & Publishing

  • `docker build -t <image>:<tag> .` - Build image from Dockerfile
  • `docker push <image>:<tag>` - Push image to registry

Common Workflow

  • `docker build -t myapp:1.0 .` - Build your application image
  • `docker run -d -p 8080:80 myapp:1.0` - Run container in detached mode
  • `docker logs -f <container_id>` - Monitor application logs
  • `docker push myapp:1.0` - Share image via registry

Docker Compose

  • Start a multi-container application: docker-compose up -d
  • Stop a multi-container application: docker-compose down
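The bullets above assume a compose file already exists. As a sketch (the service names, images, and ports here are assumptions, not from this guide), the following writes a minimal two-service docker-compose.yml for a web server backed by a database:

```shell
# Sketch: a minimal two-service compose file (hypothetical names and ports).
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:1.25
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
EOF

# Start the stack with: docker-compose up -d   (stop with: docker-compose down)
grep -c 'image:' docker-compose.yml   # 2: one image per service
```

Named volumes declared in the file (db-data here) are created automatically on the first up and survive a plain down.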

Docker Storage: Volumes, Bind Mounts, and tmpfs

By default, data inside a container disappears when the container stops. This is actually a feature, not a bug: it keeps containers lightweight and reproducible. However, most real applications need to persist data somewhere.

Consider the following scenarios and which storage type fits each:

| Scenario | Best Storage Type | Why |
|----------|-------------------|-----|
| Database files | Volume | Docker manages it, easy backups, best performance |
| Source code during development | Bind mount | See changes instantly without rebuilding |
| Configuration files | Bind mount | Edit on host, container reads immediately |
| Sensitive data (secrets, tokens) | tmpfs | Never written to disk, cleared when container stops |
| Build cache | Volume | Persists between builds, improves speed |

Understanding Docker Storage

Docker provides three ways to persist data beyond the container lifecycle. The right choice depends on your use case.

| Storage Type | Use Case | Performance | Portability | Management |
|--------------|----------|-------------|-------------|------------|
| Volumes | Production data, databases, shared data between containers | Best (native to Docker) | High (managed by Docker) | Easy (Docker commands) |
| Bind Mounts | Development, config files, source code | Good (direct filesystem) | Low (host-dependent) | Manual (filesystem) |
| tmpfs | Temporary data, secrets, caches | Excellent (memory) | None (memory only) | Automatic (cleared on stop) |

Docker Volumes

When to use: Production databases, application state, any data that must survive container restarts.

Essential Volume Commands

# Create and use a named volume
docker volume create app-data
docker run -d -e POSTGRES_PASSWORD=example -v app-data:/var/lib/postgresql/data postgres:15

# List and clean up volumes
docker volume ls
docker volume prune  # Remove unused volumes

Backup and Restore

# Backup: mount volume read-only, tar to host
docker run --rm -v app-data:/source:ro -v $(pwd):/backup \
  alpine tar czf /backup/backup.tar.gz -C /source .

# Restore: extract tar into volume
docker run --rm -v app-data:/target -v $(pwd):/backup:ro \
  alpine tar xzf /backup/backup.tar.gz -C /target

Bind Mounts

When to use: Development workflows where you want to edit files on your host and see changes immediately in the container.

Development Workflow

# Mount the project for live development (host edits appear instantly)
docker run -d -v $(pwd):/app -w /app -p 3000:3000 node:18 npm run dev

# Mount config file read-only (container cannot modify)
docker run -d -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf:ro nginx

The :ro suffix makes the mount read-only, preventing the container from modifying your host files.

tmpfs Mounts

When to use: Sensitive data like secrets or tokens that should never be written to disk, or temporary caches that can be discarded.

Secure Temporary Storage

# Store secrets in memory only (never touches disk)
docker run -d --tmpfs /run/secrets:size=10m,mode=0700 my-app

# Fast temporary cache
docker run -d --tmpfs /app/cache:size=100m my-app

tmpfs mounts exist only in memory. When the container stops, the data is gone. This is ideal for sensitive information.

Sharing Data Between Containers

When to use: When multiple containers need to read or write the same data, such as a web server and a log processor.

Volume Sharing Pattern

# Both containers access the same volume
docker volume create shared-data
docker run -d -v shared-data:/data --name writer my-app
docker run -d -v shared-data:/data:ro --name reader log-processor

The writer container can modify data; the reader has read-only access. Both see the same files.

Docker Networking In-Depth

Networking determines how containers communicate with each other, with the host, and with external services. Getting this right is essential for both functionality and security.

Docker Network Architecture

Docker provides several network drivers for different scenarios. The default (bridge) works for most cases, but understanding the alternatives helps you make better architectural decisions.

Network Driver Overview

Bridge (default)

Default network driver for standalone containers. Provides network isolation; user-defined bridge networks additionally give containers automatic DNS resolution by name.

docker network create --driver bridge my-bridge

Host

Removes network isolation between container and host. Container uses host's network directly.

docker run --network host nginx

Overlay

Creates distributed networks among multiple Docker hosts. Used in Swarm mode for multi-host communication.

docker network create --driver overlay --attachable my-overlay

Macvlan

Assigns MAC address to containers, making them appear as physical devices on the network.

docker network create -d macvlan --subnet=192.168.1.0/24 -o parent=eth0 my-macvlan

None

Disables all networking for the container. Used for maximum isolation.

docker run --network none alpine

Bridge Networking Deep Dive

Key concept: Always create custom bridge networks for your applications. Unlike the default bridge, custom networks provide automatic DNS resolution between containers.

Creating Custom Networks

# Create a network and run containers on it
docker network create my-app-network
docker run -d --name web --network my-app-network nginx
docker run -d --name db --network my-app-network -e POSTGRES_PASSWORD=example postgres

# Containers resolve each other by name
docker exec web getent hosts db  # prints the IP Docker's DNS assigns to db

Network Isolation Pattern

# Isolate frontend from database
docker network create frontend
docker network create backend
docker run -d --name webapp --network frontend nginx
docker run -d --name api --network backend my-api

# Connect API to both networks (acts as bridge)
docker network connect frontend api

The webapp can reach the api because both now share the frontend network, but nothing on the frontend can reach services that live only on the backend network. The api, attached to both, acts as the bridge between tiers. This is a common security pattern.

Overlay Networking for Swarm

When to use: Docker Swarm deployments where services need to communicate across multiple hosts.

# Create encrypted overlay network
docker network create --driver overlay --opt encrypted my-overlay

# Services on this network can find each other across hosts
docker service create --name api --network my-overlay my-api

Macvlan Networking

When to use: When containers need to appear as physical devices on your network (legacy system integration, specific IP requirements).

# Container gets a real IP on your network
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 my-macvlan

docker run -d --network my-macvlan --ip 192.168.1.100 nginx

Advanced Networking Patterns

For complex deployments, consider these patterns:

  • Service mesh: Use a proxy (Envoy, Traefik) to handle routing, load balancing, and observability
  • Network segmentation: Create separate networks for frontend, backend, and database tiers
  • Firewall rules: Use iptables DOCKER-USER chain to restrict container traffic

These patterns are typically managed through orchestration tools like Kubernetes or Docker Swarm rather than manual configuration.
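The firewall bullet deserves one concrete illustration. The script below is a sketch only (the eth0 interface and 10.0.0.0/8 trusted subnet are assumptions): it writes a rule for the DOCKER-USER chain, which Docker consults before its own forwarding rules, so packets dropped there never reach a container.

```shell
# Sketch: limit access to published container ports via the DOCKER-USER chain.
# Assumptions: external interface eth0, trusted subnet 10.0.0.0/8.
cat > restrict-containers.sh <<'EOF'
#!/bin/sh
# Drop container-bound traffic arriving on eth0 from outside the trusted subnet.
iptables -I DOCKER-USER -i eth0 ! -s 10.0.0.0/8 -j DROP
EOF
chmod +x restrict-containers.sh

# Review the rule, then run the script as root on the Docker host.
cat restrict-containers.sh
```

Rules inserted into DOCKER-USER persist only until reboot unless you save them with your distribution's iptables persistence mechanism.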

Docker Security Best Practices

Container security is not about a single setting. It is about applying multiple layers of protection, from how you build images to how you run containers in production.

Consider the following security layers:

| Layer | What It Protects | Key Actions |
|-------|------------------|-------------|
| Image | What goes into containers | Use minimal base images, scan for vulnerabilities |
| Build | The build process | Use BuildKit secrets, multi-stage builds |
| Runtime | Running containers | Drop capabilities, run as non-root, limit resources |
| Network | Container communication | Use custom networks, encrypt overlay traffic |
| Host | The Docker host | Keep Docker updated, use user namespaces |

Container Security Fundamentals

The following practices significantly reduce your attack surface. Start with the basics and add more controls as your security requirements grow.

Security Principles

Least Privilege

Run containers with minimal permissions required for operation

Defense in Depth

Multiple security layers from host to application

Immutability

Containers should be stateless and read-only where possible

Vulnerability Scanning

Regular scanning of images for known vulnerabilities

Running Containers Securely

The most impactful change: Run containers as non-root users. This single practice blunts many container escape vulnerabilities, because a process that breaks out of the container does so without root privileges on the host.

Non-Root User in Dockerfile

FROM alpine:3.18
RUN apk add --no-cache python3
RUN adduser -D appuser
COPY --chown=appuser . /app
USER appuser
WORKDIR /app
CMD ["python3", "app.py"]

Runtime Hardening

# Read-only filesystem with necessary tmpfs
docker run -d --read-only --tmpfs /tmp my-app

# Drop all capabilities, add only what is needed
docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE nginx

# Limit resources to prevent DoS
docker run -d --memory 512m --cpus 0.5 --pids-limit 100 my-app

Each flag adds a layer of protection. --read-only prevents filesystem modifications. --cap-drop ALL removes Linux capabilities. Resource limits prevent runaway processes.

Secrets Management

Never put secrets in: Dockerfiles, environment variables in compose files committed to git, or image layers. These are all visible to anyone with access to the image or source code.

Safe Options for Secrets

| Method | Use Case | How It Works |
|--------|----------|--------------|
| Docker Secrets (Swarm) | Production services | Secrets stored encrypted, mounted as files at /run/secrets/ |
| BuildKit secrets | Build-time credentials | Secret available only during build, not in final image |
| External secrets manager | Enterprise deployments | Vault, AWS Secrets Manager inject at runtime |
| Environment file | Development only | .env file loaded at runtime (never commit to git) |
# BuildKit: secret available only during build
DOCKER_BUILDKIT=1 docker build --secret id=token,src=./token.txt .

# Development: use .env file (add to .gitignore!)
docker run --env-file .env my-app
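The --secret build flag only has an effect if the Dockerfile mounts the secret. A sketch of the Dockerfile side (written to an example file here; the id token matches the build command, and the echo stands in for whatever build step actually needs the credential):

```shell
# Sketch: the Dockerfile half of a BuildKit secret. The secret is mounted at
# /run/secrets/<id> for that single RUN step and is never stored in a layer.
cat > Dockerfile.secret-example <<'EOF'
FROM alpine:3.18
RUN --mount=type=secret,id=token \
    TOKEN="$(cat /run/secrets/token)" && \
    echo "token used during build only"
EOF

grep -c 'type=secret' Dockerfile.secret-example   # 1
```

Build it by adding -f Dockerfile.secret-example to the docker build command; inspecting the final image's layers shows no trace of the token.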

Image Security Scanning

Scan images for known vulnerabilities before deploying them. Integrate scanning into your CI/CD pipeline to catch issues early.

# Docker Scout (built into Docker Desktop)
docker scout cves my-app:latest

# Trivy (open source, widely used)
trivy image my-app:latest

# Enable image signing to verify provenance
export DOCKER_CONTENT_TRUST=1
docker pull my-registry/my-app:latest  # Fails if not signed

Security Compliance Checklist

Image Security

  • ✓ Use minimal base images (alpine, distroless)
  • ✓ Scan images for vulnerabilities regularly
  • ✓ Don't store secrets in images
  • ✓ Use specific version tags, not 'latest'
  • ✓ Sign images with Docker Content Trust
  • ✓ Remove unnecessary packages and files

Runtime Security

  • ✓ Run containers as non-root user
  • ✓ Use read-only root filesystems
  • ✓ Drop unnecessary capabilities
  • ✓ Limit resources (memory, CPU, PIDs)
  • ✓ Use security profiles (AppArmor, SELinux, Seccomp)
  • ✓ Isolate containers with user namespaces
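The last item, user namespaces, is a daemon-wide setting rather than a docker run flag. A sketch of the configuration (written to an example file here; on a real host it belongs in /etc/docker/daemon.json, followed by a daemon restart):

```shell
# Sketch: remap container root to an unprivileged host user via user namespaces.
# "default" makes Docker create and use a dockremap user/group on the host.
cat > daemon.json.example <<'EOF'
{
  "userns-remap": "default"
}
EOF

# On a real host: sudo cp daemon.json.example /etc/docker/daemon.json
#                 sudo systemctl restart docker
cat daemon.json.example
```

Note that enabling remapping changes where Docker stores images and volumes, so existing containers are not visible until it is switched back.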

Network Security

  • ✓ Use custom bridge networks, not default
  • ✓ Encrypt overlay network traffic
  • ✓ Implement network segmentation
  • ✓ Use TLS for container communication
  • ✓ Restrict container-to-container communication

Troubleshooting Common Docker Issues

When something goes wrong, start with the simplest checks and work your way to more detailed investigation.

Debugging Containers

Container Will Not Start

# First, check the logs
docker logs container-name

# Get the exit code (non-zero means error)
docker inspect container-name --format='{{.State.ExitCode}}'

# Start an interactive shell to investigate
docker run -it --entrypoint /bin/sh my-image

Connectivity Issues

# Use netshoot to debug networking
docker run --rm --network container:my-app nicolaka/netshoot

# Inside: test DNS and connectivity
nslookup service-name
curl -v http://service-name:port

Performance Problems

# Real-time stats for all containers
docker stats

# Check disk usage
docker system df

Common Error Solutions

| Error | Quick Fix |
|-------|-----------|
| "Cannot connect to Docker daemon" | `sudo systemctl start docker` or add user to docker group |
| "No space left on device" | `docker system prune -a --volumes` |
| "Port already in use" | `sudo lsof -i :8080` to find the process, then kill it or use a different port |
| "Permission denied" | Run with sudo, or add user to docker group and log out/in |

Cleaning Up Disk Space

# See what is using space
docker system df

# Remove everything unused (images, containers, volumes)
docker system prune -a --volumes

Health Checks

Health checks let Docker know if your application is actually working, not just running.

# Add to Dockerfile
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1

# Check health status
docker ps  # Shows health in STATUS column
docker inspect --format='{{.State.Health.Status}}' container-name

After mastering the basic Docker commands, the next crucial skill is creating your own Docker images. This is where Dockerfiles come in: they are the blueprint for building custom container images.