Database Design

Relational modeling, indexing, and distributed database architecture

Every application needs to store data. Whether you’re building a social network, e-commerce platform, or analytics system, you’ll face fundamental questions: How should data be organized? How can multiple users access it simultaneously? What happens if the system crashes? Database design provides systematic answers to these challenges.

Quick Start: Database Design in 5 Minutes

If you’re new to databases, here’s what you need to know to get started:

What’s a Database? A database is a structured way to store and retrieve data. Think of it as a smart filing cabinet that can:

  • Find any piece of data instantly
  • Handle thousands of people accessing it at once
  • Never lose data, even if the power goes out
  • Enforce rules (like “account balance can’t be negative”)

Key Concepts in 30 Seconds Each:

Tables - Like spreadsheets, but smarter

CREATE TABLE users (
    id INT PRIMARY KEY,      -- Unique identifier
    email VARCHAR(255),      -- Text up to 255 chars
    created_at TIMESTAMP     -- When they signed up
);

SQL - The language databases understand

-- Create
INSERT INTO users (email) VALUES ('alice@example.com');

-- Read
SELECT * FROM users WHERE email LIKE '%@example.com';

-- Update
UPDATE users SET email = 'newemail@example.com' WHERE id = 1;

-- Delete
DELETE FROM users WHERE id = 1;

Relationships - How tables connect

-- Users have many orders
CREATE TABLE orders (
    id INT PRIMARY KEY,
    user_id INT REFERENCES users(id),  -- Links to users table
    total DECIMAL(10,2)
);

Indexes - Make queries fast

CREATE INDEX idx_users_email ON users(email);
-- Now finding users by email is 1000x faster!

Transactions - All or nothing operations

BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;  -- Both succeed or both fail

That’s it! You now understand the basics. The rest of this guide goes much deeper into each concept.

Why Databases Matter

Imagine building an online store. You start by storing product information in files:

# products.json
[
    {"id": 1, "name": "Laptop", "price": 999, "stock": 50},
    {"id": 2, "name": "Mouse", "price": 29, "stock": 200}
]

This works initially, but problems emerge quickly:

  • What if two customers buy the same product simultaneously?
  • How do you ensure stock never goes negative?
  • What if the server crashes during a purchase?
  • How do you find all products under $50 efficiently?

Databases solve these problems through carefully designed systems that have evolved over decades. Let’s explore how they work, starting with practical needs and building up to the theory that makes modern databases possible.

From Files to Databases

The Relational Model: Organizing Your Data

The breakthrough came when Edgar Codd realized that data could be organized as tables (relations) with well-defined rules. This wasn’t just tidiness—it enabled powerful operations and guarantees.

Consider our e-commerce example. Instead of one complex file, we organize data into focused tables:

-- Products table: Each row is a product
CREATE TABLE products (
    product_id INT PRIMARY KEY,  -- Unique identifier
    name VARCHAR(100),
    price DECIMAL(10,2),
    stock INT CHECK (stock >= 0)  -- Never negative!
);

-- Orders table: Each row is an order
CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    customer_id INT,
    order_date TIMESTAMP,
    FOREIGN KEY (customer_id) REFERENCES customers(customer_id)
);

This structure brings immediate benefits:

  • No redundancy: Product info stored once, referenced many times
  • Data integrity: Can’t reference non-existent customers
  • Efficient queries: Indexes make lookups fast
  • Concurrent access: Multiple users can read/write safely

ACID: Making Databases Reliable

The real magic happens when databases guarantee ACID properties. These aren’t abstract concepts—they solve real problems:

Atomicity - “All or Nothing” When a customer places an order, multiple things must happen:

  1. Decrease product stock
  2. Create order record
  3. Charge payment
  4. Send confirmation email

If step 3 fails, you don’t want steps 1-2 to remain. Atomicity ensures either everything succeeds or nothing changes.

Consistency - “Rules Always Apply” Your business rules (stock >= 0, valid email format) are enforced always, even during system failures.

Isolation - “No Interference” Two customers buying the last item see consistent results—one succeeds, one sees “out of stock”. No weird partial states.

Durability - “Confirmed Means Saved” Once you tell a customer “order confirmed”, that order survives power outages, crashes, and restarts.
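
For example, the order flow above maps onto a single transaction. A minimal sketch (table and column names are illustrative): wrap the steps in BEGIN/COMMIT and roll back if any step fails.

BEGIN;
    UPDATE products SET stock = stock - 1 WHERE product_id = 42;
    INSERT INTO orders (order_id, customer_id) VALUES (1001, 7);
    -- If charging the payment fails at this point, issue ROLLBACK instead of
    -- COMMIT and neither change is kept.
COMMIT;  -- Both changes become permanent (and durable) together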

SQL: The Universal Database Language

SQL emerged as the standard way to interact with relational databases. It’s declarative—you say what you want, not how to get it:

-- Find all orders from high-value customers last month
SELECT c.name, COUNT(o.order_id) as order_count, SUM(o.total) as revenue
FROM customers c
JOIN orders o ON c.customer_id = o.customer_id
WHERE o.order_date >= DATE_SUB(CURRENT_DATE, INTERVAL 30 DAY)
GROUP BY c.customer_id
HAVING SUM(o.total) > 1000
ORDER BY revenue DESC;

The database figures out the most efficient way to execute this—which tables to read first, which indexes to use, how to join the data. This separation of “what” from “how” is powerful.

Database Normalization: Avoiding Data Disasters

As your application grows, poor data organization leads to nightmares. Normalization is the process of organizing data to minimize redundancy and dependency issues.

Why Normalization Matters

Consider this poorly designed table:

Orders Table:
OrderID | CustomerName | CustomerEmail | Product1 | Price1 | Product2 | Price2
1001    | John Smith   | john@email   | Laptop   | 999    | Mouse    | 29
1002    | John Smith   | john@email   | Keyboard | 79     | NULL     | NULL

Problems:

  1. Update anomalies: Change John’s email? Update every row!
  2. Insert anomalies: Can’t add products without orders
  3. Delete anomalies: Delete last order? Lose customer info!
  4. Wasted space: Those NULLs add up

The Normalization Process

Normalization systematically eliminates these problems:

Step 1: Eliminate Repeating Groups (1NF)

-- Bad: Multiple products in one row
-- Good: Separate rows for each product
CREATE TABLE order_items (
    order_id INT,
    product_name VARCHAR(100),
    price DECIMAL(10,2),
    quantity INT
);

Step 2: Remove Partial Dependencies (2NF)

-- Product price depends only on product, not order
-- Split into separate tables
CREATE TABLE products (
    product_id INT PRIMARY KEY,
    product_name VARCHAR(100),
    price DECIMAL(10,2)
);

CREATE TABLE order_items (
    order_id INT,
    product_id INT,
    quantity INT,
    FOREIGN KEY (product_id) REFERENCES products(product_id)
);

Step 3: Remove Transitive Dependencies (3NF)

-- Customer info depends on customer, not order
CREATE TABLE customers (
    customer_id INT PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100)
);

CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    customer_id INT,
    order_date TIMESTAMP,
    FOREIGN KEY (customer_id) REFERENCES customers(customer_id)
);

Now updates are simple, storage efficient, and data integrity is maintained!

When to Break the Rules

Sometimes denormalization improves performance. Amazon might store customer names in order records to avoid joins in their massive order history displays. The key is knowing why you’re breaking the rules and managing the trade-offs.
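
A minimal sketch of what that can look like (the extra column is illustrative): copy the customer's name into each order row at write time, and accept that it may drift from the customers table.

-- Denormalized copy of the customer name, maintained by application code
ALTER TABLE orders ADD COLUMN customer_name VARCHAR(100);

-- Order-history pages can now render without a join
SELECT order_id, customer_name, order_date FROM orders WHERE customer_id = 123;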

Modeling Relationships: Connecting Your Data

One-to-One

Each record in Table A relates to one record in Table B.

CREATE TABLE users (
    user_id SERIAL PRIMARY KEY,
    username VARCHAR(50)
);

CREATE TABLE user_profiles (
    user_id INT PRIMARY KEY REFERENCES users(user_id),
    bio TEXT,
    avatar_url VARCHAR(255)
);

One-to-Many

One record in Table A relates to many in Table B.

CREATE TABLE authors (
    author_id SERIAL PRIMARY KEY,
    name VARCHAR(100)
);

CREATE TABLE books (
    book_id SERIAL PRIMARY KEY,
    title VARCHAR(200),
    author_id INT REFERENCES authors(author_id)
);

Many-to-Many

Multiple records in Table A relate to multiple in Table B.

CREATE TABLE students (
    student_id SERIAL PRIMARY KEY,
    name VARCHAR(100)
);

CREATE TABLE courses (
    course_id SERIAL PRIMARY KEY,
    title VARCHAR(100)
);

CREATE TABLE enrollments (
    student_id INT REFERENCES students(student_id),
    course_id INT REFERENCES courses(course_id),
    enrollment_date DATE,
    PRIMARY KEY (student_id, course_id)
);

Indexing: Making Queries Lightning Fast

Imagine finding a word in a dictionary versus a novel. The dictionary has an index (alphabetical order), while the novel requires reading every page. Database indexes work similarly.

When Indexes Transform Performance

Without an index, finding a customer among millions requires checking every row:

-- Slow: Full table scan
SELECT * FROM customers WHERE email = 'john@example.com';
-- Time: 5 seconds for 10 million rows

With an index:

CREATE INDEX idx_customers_email ON customers(email);
-- Same query now takes 0.005 seconds!

Types of Indexes and When to Use Them

B-Tree Index: Your Swiss Army Knife

  • Use for: Most queries, especially ranges
  • Example: Finding orders between dates, products under $100
  • How it works: Like a phone book - hierarchical, sorted

Hash Index: The Speed Demon

  • Use for: Exact matches only
  • Example: Looking up users by ID
  • How it works: Like a hash table - direct lookup
  • Note: PostgreSQL supports true hash indexes (crash-safe since version 10); in MySQL, the MEMORY engine offers hash indexes, while InnoDB uses B-trees plus an internal adaptive hash

Full-Text Index: The Search Engine

  • Use for: Text search, “contains” queries
  • Example: Finding products with “wireless” in description
  • How it works: Breaks text into searchable tokens

Bitmap Index: The Space Saver

  • Use for: Columns with few unique values
  • Example: Status fields (active/inactive), categories
  • How it works: One bit per row per unique value
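
As a rough guide to how the index types above are declared (PostgreSQL-flavored syntax; table and column names are illustrative):

-- B-tree is the default
CREATE INDEX idx_orders_date ON orders(order_date);

-- Hash index for exact-match lookups
CREATE INDEX idx_sessions_token ON sessions USING hash (token);

-- Full-text index over an expression
CREATE INDEX idx_products_description ON products
    USING gin (to_tsvector('english', description));

-- Bitmap indexes are created explicitly in Oracle (CREATE BITMAP INDEX);
-- PostgreSQL instead builds bitmaps on the fly during scans.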

Smart Indexing Strategies

Composite Indexes: Order Matters!

-- This index helps both queries:
CREATE INDEX idx_orders_customer_date ON orders(customer_id, order_date);
-- Fast: WHERE customer_id = 123
-- Fast: WHERE customer_id = 123 AND order_date > '2024-01-01'
-- Slow: WHERE order_date > '2024-01-01'  -- Can't use index efficiently!

Covering Indexes: Include Everything

-- Index includes all needed columns - no table lookup needed!
CREATE INDEX idx_orders_covering 
ON orders(customer_id, order_date) 
INCLUDE (total, status);

Partial Indexes: Index Only What You Need

-- Only index active users - smaller, faster
CREATE INDEX idx_active_users ON users(email) WHERE active = true;

Modern Index Types:

  • BRIN (Block Range Index): For time-series data
  • GIN/GiST: For JSON, arrays, full-text search
  • Vector Indexes: For AI/ML embeddings (pgvector, Pinecone)
  • Learned Indexes: AI-predicted data locations (research phase)
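
A brief sketch of two of these in PostgreSQL (names are illustrative):

-- BRIN: tiny index for naturally ordered data such as timestamps
CREATE INDEX idx_events_time_brin ON events USING brin (created_at);

-- GIN over a JSONB column for containment queries
CREATE INDEX idx_users_metadata ON users USING gin (metadata);
-- e.g. SELECT * FROM users WHERE metadata @> '{"plan": "pro"}';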

The Cost of Indexes

Indexes aren’t free:

  • Storage: Each index is a data structure that needs disk space
  • Write performance: Every INSERT/UPDATE must update indexes
  • Maintenance: Indexes can become fragmented

Rule of thumb: Index based on read patterns, but don’t index everything!

AI-Assisted Index Recommendations: Modern databases now use machine learning to suggest indexes:

  • PostgreSQL: pg_stat_statements + ML advisors
  • MySQL: Performance Schema with AI insights
  • Cloud Services: AWS Performance Insights, Azure Intelligent Performance

How Databases Execute Your Queries

When you write a SQL query, the database performs remarkable optimizations behind the scenes. Understanding this helps you write better queries.

The Journey of a Query

Let’s follow this query through the database:

SELECT c.name, SUM(o.total) as lifetime_value
FROM customers c
JOIN orders o ON c.customer_id = o.customer_id
WHERE c.country = 'USA'
GROUP BY c.customer_id
HAVING SUM(o.total) > 1000;

Step 1: Parse and Validate

  • Check syntax: Are the SQL keywords correct?
  • Verify objects: Do these tables and columns exist?
  • Check permissions: Can this user access this data?

Step 2: Optimize The optimizer considers multiple execution strategies:

Plan A: Scan all customers, then find their orders

  • Cost: 1 million customers × average 10 orders each = expensive!

Plan B: Use country index, then join

  • Cost: 50,000 US customers × 10 orders = much better!

Plan C: Start with high-value orders, then find customers

  • Cost: Depends on how many customers exceed $1,000 in orders…

The optimizer estimates costs using statistics about your data.

Step 3: Execute The chosen plan becomes physical operations:

  1. Index seek on customers.country
  2. Hash join with orders
  3. Aggregate by customer
  4. Filter by total > 1000

Understanding Query Plans

Databases show you their execution strategy:

EXPLAIN ANALYZE
SELECT c.name, COUNT(*) 
FROM customers c
JOIN orders o ON c.customer_id = o.customer_id
GROUP BY c.name;

-- Output:
HashAggregate (cost=1234.56 rows=1000)
  -> Hash Join (cost=234.56 rows=10000)
        Hash Cond: (o.customer_id = c.customer_id)
        -> Seq Scan on orders o (cost=0.00 rows=10000)
        -> Hash (cost=123.45 rows=1000)
              -> Seq Scan on customers c (cost=0.00 rows=1000)

Reading plans bottom-up:

  1. Scan customers table (1000 rows)
  2. Build hash table
  3. Scan orders table (10000 rows)
  4. For each order, probe hash table (fast!)
  5. Aggregate results

The Magic of Relational Algebra

Behind every optimization is relational algebra—a mathematical framework that makes query optimization possible. Just as arithmetic has commutative (a+b = b+a) and associative ((a+b)+c = a+(b+c)) properties, relational operations have rules:

Pushing Selections Down

-- Original: Join everything, then filter
SELECT * FROM orders o JOIN customers c ON o.customer_id = c.customer_id
WHERE c.country = 'USA';

-- Optimized: Filter first, then join (much less data!)
SELECT * FROM orders o 
JOIN (SELECT * FROM customers WHERE country = 'USA') c 
ON o.customer_id = c.customer_id;

The optimizer applies these transformations automatically!

Join Reordering

-- Three-way join: 6 possible orders!
-- A ⋈ B ⋈ C could be:
-- (A ⋈ B) ⋈ C
-- A ⋈ (B ⋈ C)  
-- (A ⋈ C) ⋈ B
-- etc.

The optimizer estimates costs for each order and picks the best one.

Code Reference: For implementations of query optimization algorithms, see query_processing.py

Query Optimization in Practice

Common Performance Killers and Solutions:

  1. The N+1 Query Problem

# Bad: 1 query + N queries
customers = db.query("SELECT * FROM customers")
for customer in customers:
    orders = db.query(f"SELECT * FROM orders WHERE customer_id = {customer.id}")
# If you have 1000 customers, this runs 1001 queries!

# Good: 1 query with JOIN
result = db.query("""
    SELECT c.*, o.*
    FROM customers c
    LEFT JOIN orders o ON c.customer_id = o.customer_id
""")

  2. Missing Indexes on Foreign Keys

-- Orders reference customers, but no index on customer_id!
-- Every join does a full table scan
CREATE INDEX idx_orders_customer_id ON orders(customer_id);
-- Now joins are fast

  3. Wrong Data Types

-- Bad: Storing numbers as strings
CREATE TABLE products (
    price VARCHAR(10)  -- "99.99" stored as string!
);
-- WHERE price > 100 requires converting every row!

-- Good: Use numeric types
CREATE TABLE products (
    price DECIMAL(10,2)  -- Numeric comparisons are fast
);

  4. SELECT * Abuse

-- Bad: Fetching all columns when you need two
SELECT * FROM users;  -- Transfers unnecessary data

-- Good: Request only what you need
SELECT id, email FROM users;  -- Less network traffic, less memory

Scaling Beyond One Machine: Distributed Databases

Eventually, your database outgrows a single server. Maybe you have too much data, too many users, or need geographical distribution. This is where distributed databases come in—and where things get interesting.

The CAP Theorem: Pick Two

In 2000, Eric Brewer observed that distributed systems face a fundamental trade-off. You can have at most two of:

Consistency: Everyone sees the same data

  • Example: Bank account balance is identical at all branches
  • Cost: Might need to wait for all nodes to agree

Availability: System always responds

  • Example: Shopping cart always works, even during Black Friday
  • Cost: Might show slightly outdated data

Partition Tolerance: Survives network failures

  • Example: East Coast datacenter loses connection to West Coast
  • Cost: Must choose between C and A when split happens

Real-World Trade-offs

Banking System (CP - Consistency + Partition Tolerance)

# ATM withdrawal must check all replicas
def withdraw(account_id, amount):
    # Check balance across all nodes (might fail if network is down)
    if check_all_nodes_balance(account_id) >= amount:
        deduct_all_nodes(account_id, amount)
        return "Success"
    return "Insufficient funds"

Better to say “ATM temporarily unavailable” than allow overdrafts!

Social Media Feed (AP - Availability + Partition Tolerance)

# Always show something, even if not latest
def get_feed(user_id):
    try:
        return get_latest_feed(user_id)
    except NetworkPartition:
        return get_cached_feed(user_id)  # Might be 5 minutes old

Better to show slightly old posts than no posts!

Configuration Service (CA - Consistency + Availability)

# Only works within single datacenter
def update_config(key, value):
    # All nodes in datacenter see same config
    broadcast_to_local_nodes(key, value)
    return "Updated"

Assumes datacenter network is reliable (risky assumption!).

The Challenge of Time in Distributed Systems

In a single database, there’s one clock. In distributed systems, every node has its own clock, and they drift. This creates surprising problems:

The Problem:

Node A (Time: 10:00:00): User updates email to "new@email.com"
Node B (Time: 09:59:58): User updates email to "old@email.com"

Which update happened first? Node B's clock is 2 seconds behind!

Solution 1: Vector Clocks - Tracking Causality

# Each node tracks its version and others it knows about
Node A: {A: 1, B: 0}  # "I'm at version 1, last saw B at 0"
Node B: {A: 0, B: 1}  # "I'm at version 1, last saw A at 0"

# After A sends update to B:
Node A: {A: 2, B: 0}  # Incremented own counter
Node B: {A: 2, B: 2}  # Merged A's knowledge, incremented own

# Now B knows A's update happened before its next action
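
A minimal executable sketch of the same idea (not tied to any particular database):

class VectorClock:
    def __init__(self, node_id, nodes):
        self.node_id = node_id
        self.clock = {n: 0 for n in nodes}  # One counter per node

    def tick(self):
        # Local event: bump our own counter
        self.clock[self.node_id] += 1
        return dict(self.clock)

    def merge(self, other_clock):
        # On receiving a message: element-wise max, then count the receive event
        for node, count in other_clock.items():
            self.clock[node] = max(self.clock.get(node, 0), count)
        self.tick()

    def happened_before(self, other_clock):
        # True if every counter is <= the other's and at least one is strictly less
        return (all(self.clock[n] <= other_clock.get(n, 0) for n in self.clock)
                and self.clock != other_clock)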

Solution 2: Hybrid Logical Clocks - Best of Both Worlds

class HybridClock:
    def __init__(self):
        self.physical_time = get_system_time()
        self.logical_counter = 0
    
    def tick(self):
        new_time = get_system_time()
        if new_time > self.physical_time:
            self.physical_time = new_time
            self.logical_counter = 0
        else:
            self.logical_counter += 1
        return (self.physical_time, self.logical_counter)

This gives us timestamps that respect both wall clock time and causality!

Consensus: Getting Distributed Nodes to Agree

The heart of distributed databases is consensus—how do multiple nodes agree on data values? This is harder than it sounds when nodes can crash and networks can fail.

Raft: Consensus Made Understandable

Raft breaks the problem into manageable pieces:

The Leader Election Analogy Imagine a group project where you need a coordinator:

  1. Everyone starts as a follower - waiting for a leader
  2. If no leader speaks up - someone volunteers (becomes candidate)
  3. Candidates request votes - “I’ll be leader, okay?”
  4. Majority wins - becomes leader, others go back to following
  5. Leader sends heartbeats - “Still here, still in charge!”

How It Handles Failures:

# Simplified Raft leader election
class RaftNode:
    def __init__(self):
        self.state = "follower"
        self.term = 0
        self.voted_for = None
        
    def election_timeout(self):
        # No heartbeat from leader? Start election!
        self.state = "candidate"
        self.term += 1
        self.voted_for = self.id
        
        votes = 1  # Vote for self
        for node in other_nodes:
            if node.request_vote(self.term, self.id):
                votes += 1
                
        if votes > len(all_nodes) / 2:
            self.state = "leader"
            self.send_heartbeats()  # Tell everyone I'm leader

Why This Works:

  • Only one leader per term (majority vote)
  • Split votes resolved by random timeouts
  • Old leaders step down when they see higher terms
  • All changes go through leader (simplifies consistency)

Paxos: The Original (Complex) Solution

Paxos solves the same problem but is notoriously hard to understand. Leslie Lamport even wrote a paper explaining it through an analogy of ancient Greek legislators! The key insight: use two phases (prepare/accept) to ensure safety even with failures.
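
A heavily simplified, single-value sketch of those two phases (networking, retries, and several safety details omitted):

class Acceptor:
    def __init__(self):
        self.promised = 0      # Highest proposal number promised
        self.accepted = None   # (number, value) of the last accepted proposal

    def prepare(self, n):
        # Phase 1: promise to ignore proposals numbered below n
        if n > self.promised:
            self.promised = n
            return True, self.accepted
        return False, None

    def accept(self, n, value):
        # Phase 2: accept unless we promised a higher number
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, value)
            return True
        return False

def propose(acceptors, n, value):
    # Phase 1: gather promises from a majority
    promises = [a.prepare(n) for a in acceptors]
    granted = [acc for ok, acc in promises if ok]
    if len(granted) <= len(acceptors) // 2:
        return None  # No majority; retry with a higher proposal number
    # If any acceptor already accepted a value, we must keep that value
    prior = [acc for acc in granted if acc is not None]
    if prior:
        value = max(prior)[1]
    # Phase 2: ask acceptors to accept; succeed only with a majority
    votes = sum(a.accept(n, value) for a in acceptors)
    return value if votes > len(acceptors) // 2 else None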

Distributed Transactions: All or Nothing Across Machines

Remember ACID’s atomicity? It gets tricky when data spans multiple machines. How do you ensure all machines commit or all abort?

Two-Phase Commit (2PC): The Wedding Protocol

Think of 2PC like a wedding ceremony:

Phase 1 - “Do you take this transaction?”

# Coordinator (the officiant)
def prepare_transaction(tx_id, participants):
    responses = []
    for participant in participants:
        response = participant.prepare(tx_id)  # "Do you commit?"
        responses.append(response)
    
    if all(r == "YES" for r in responses):
        decision = "COMMIT"
    else:
        decision = "ABORT"
    
    log_decision(tx_id, decision)  # Write to disk before telling anyone
    return decision

Phase 2 - “I now pronounce you committed”

def commit_transaction(tx_id, participants, decision):
    for participant in participants:
        participant.commit(tx_id, decision)  # "You may now commit"
        # Participant applies changes or rolls back

The Problem: What if the coordinator crashes?

  • Participants are stuck waiting (“standing at the altar”)
  • Can’t commit (might need to abort)
  • Can’t abort (others might have committed)
  • This is called “blocking”

Saga Pattern: Breaking Up Long Transactions

For operations that take minutes or hours (like booking a trip), use sagas:

class TripBookingSaga:
    def execute(self):
        flight_id = hotel_id = car_id = None  # So compensation knows what was booked
        try:
            flight_id = book_flight()        # Step 1
            hotel_id = book_hotel()          # Step 2
            car_id = book_rental_car()       # Step 3
            send_confirmation()              # Step 4
        except Exception as e:
            # Compensate in reverse order
            if car_id: cancel_rental_car(car_id)
            if hotel_id: cancel_hotel(hotel_id)
            if flight_id: cancel_flight(flight_id)
            raise e

Each step is a complete transaction. If something fails, run compensating actions in reverse order. Not perfect (a user might briefly see a booking that is later cancelled), but practical for long operations.

Code Reference: For working implementations of these algorithms, see distributed_systems.py

NoSQL: When Relational Isn’t the Right Fit

Not all data fits neatly into tables. NoSQL databases emerged to handle specific use cases where relational databases struggle.

Document Stores: Natural for Nested Data

When to Use: Variable schemas, nested data, rapid development

MongoDB Example - Product Catalog:

// Products have wildly different attributes
db.products.insertOne({
    name: "Gaming Laptop",
    price: 1299,
    specs: {
        cpu: "Intel i7",
        ram: "16GB",
        gpu: "RTX 3060",
        display: {
            size: "15.6 inches",
            resolution: "1920x1080",
            refresh_rate: "144Hz"
        }
    },
    reviews: [
        {user: "gamer123", rating: 5, text: "Runs everything!"},
        {user: "techie99", rating: 4, text: "Great but runs hot"}
    ]
});

// Query nested fields naturally
db.products.find({
    "specs.ram": "16GB",
    "specs.display.refresh_rate": "144Hz"
});

In SQL, this would require multiple tables and joins!

Key-Value Stores: Speed Above All

When to Use: Caching, sessions, real-time features

Redis Example - Gaming Leaderboard:

# Update player score (atomic operation)
ZINCRBY game:leaderboard 100 "player:alice"

# Get top 10 players instantly
ZREVRANGE game:leaderboard 0 9 WITHSCORES

# Cache expensive database query
SET cache:top_products '[{"id":1,"name":"Laptop"}...]' EX 300

Millions of operations per second, sub-millisecond latency!

Column-Family Stores: Big Data Time Series

When to Use: Time-series data, write-heavy workloads, analytics

Cassandra Example - IoT Sensor Data:

-- Optimized for time-series queries
CREATE TABLE sensor_data (
    sensor_id UUID,
    timestamp TIMESTAMP,
    temperature DOUBLE,
    humidity DOUBLE,
    PRIMARY KEY (sensor_id, timestamp)
) WITH CLUSTERING ORDER BY (timestamp DESC);

-- Fast writes from millions of sensors
INSERT INTO sensor_data (sensor_id, timestamp, temperature, humidity)
VALUES (123e4567-e89b-12d3-a456-426614174000, now(), 22.5, 45.2);

-- Efficient time-range queries
SELECT * FROM sensor_data 
WHERE sensor_id = 123e4567-e89b-12d3-a456-426614174000
AND timestamp > '2024-01-01' AND timestamp < '2024-01-02';

Graph Databases: It’s All About Relationships

When to Use: Social networks, recommendations, fraud detection

Neo4j Example - Friend Recommendations:

// Find friends of friends who aren't already friends
MATCH (me:Person {name: 'Alice'})-[:FRIENDS_WITH]->(friend)
      -[:FRIENDS_WITH]->(foaf:Person)
WHERE NOT (me)-[:FRIENDS_WITH]-(foaf) AND me <> foaf
RETURN foaf.name, COUNT(*) as mutual_friends
ORDER BY mutual_friends DESC
LIMIT 10;

Try writing this in SQL - it’s a recursive nightmare!

Data Modeling Patterns

Star Schema

Central fact table with dimension tables.

-- Fact table
CREATE TABLE sales_facts (
    sale_id SERIAL PRIMARY KEY,
    product_id INT,
    customer_id INT,
    date_id INT,
    amount DECIMAL(10, 2),
    quantity INT
);

-- Dimension tables
CREATE TABLE dim_products (
    product_id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    category VARCHAR(50),
    brand VARCHAR(50)
);

CREATE TABLE dim_customers (
    customer_id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    city VARCHAR(50),
    country VARCHAR(50)
);

Snowflake Schema

Normalized star schema with sub-dimensions.
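
A small sketch of the difference (names illustrative): the product dimension's category is pulled out into its own sub-dimension table.

CREATE TABLE dim_categories (
    category_id SERIAL PRIMARY KEY,
    category_name VARCHAR(50),
    department VARCHAR(50)
);

-- Snowflaked product dimension: the category/brand detail from the
-- star-schema version above becomes a reference to a sub-dimension
CREATE TABLE dim_products (
    product_id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    category_id INT REFERENCES dim_categories(category_id),
    brand VARCHAR(50)
);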

Entity-Attribute-Value (EAV)

Flexible schema for variable attributes.

CREATE TABLE entities (
    entity_id SERIAL PRIMARY KEY,
    entity_type VARCHAR(50)
);

CREATE TABLE attributes (
    attribute_id SERIAL PRIMARY KEY,
    attribute_name VARCHAR(50),
    data_type VARCHAR(20)
);

CREATE TABLE values (
    entity_id INT REFERENCES entities(entity_id),
    attribute_id INT REFERENCES attributes(attribute_id),
    value TEXT,
    PRIMARY KEY (entity_id, attribute_id)
);

Transactions and Concurrency: Managing Simultaneous Access

When multiple users access a database simultaneously, chaos can ensue. Transactions and concurrency control bring order to this chaos.

The Concurrency Problem

Consider this scenario in an online store:

# Two customers buy the last item simultaneously
# Customer A's thread:
stock = db.query("SELECT stock FROM products WHERE id = 123")  # Returns 1
if stock > 0:
    # Context switch to Customer B here!
    db.execute("UPDATE products SET stock = stock - 1 WHERE id = 123")
    db.execute("INSERT INTO orders ...")

# Customer B's thread (running at same time):
stock = db.query("SELECT stock FROM products WHERE id = 123")  # Also returns 1!
if stock > 0:
    db.execute("UPDATE products SET stock = stock - 1 WHERE id = 123")
    db.execute("INSERT INTO orders ...")

# Result: stock = -1, both customers think they got the item!

How Databases Solve This

Databases use two main strategies: pessimistic (locking) and optimistic (versioning).

Strategy 1: Locking (Pessimistic)

Two-Phase Locking (2PL): Grab locks, do work, release locks

BEGIN TRANSACTION;
-- Lock the row for update
SELECT stock FROM products WHERE id = 123 FOR UPDATE;
-- Now only this transaction can modify this row
UPDATE products SET stock = stock - 1 WHERE id = 123;
INSERT INTO orders ...;
COMMIT;  -- Releases all locks

Lock Types:

  • Shared (S): Multiple readers OK (SELECT)
  • Exclusive (X): Single writer, no readers (UPDATE/DELETE)

The Deadlock Problem:

Transaction 1: Lock A, waiting for B
Transaction 2: Lock B, waiting for A
-- Both stuck forever!

Databases detect deadlocks and kill one transaction to break the cycle.

Strategy 2: Multi-Version Concurrency Control (MVCC)

Instead of locking, keep multiple versions of data. Each transaction sees a consistent snapshot:

# Simplified MVCC concept
class MVCCDatabase:
    def __init__(self):
        self.data = {}  # {key: [(value, timestamp, deleted), ...]}
        self.timestamp = 0
    
    def begin_transaction(self):
        self.timestamp += 1
        return Transaction(self.timestamp)
    
    def read(self, tx, key):
        # Find latest version visible to this transaction
        versions = self.data.get(key, [])
        for value, ts, deleted in reversed(versions):
            if ts <= tx.start_time:
                return None if deleted else value
        return None
    
    def write(self, tx, key, value):
        # Create new version, don't overwrite
        if key not in self.data:
            self.data[key] = []
        self.data[key].append((value, tx.start_time, False))

How PostgreSQL Uses MVCC:

-- Transaction 1 (started at time 100)
BEGIN;
SELECT balance FROM accounts WHERE id = 1;
-- Sees balance = 1000 (version from time 50)

-- Transaction 2 (started at time 101)
BEGIN;
UPDATE accounts SET balance = 900 WHERE id = 1;
COMMIT;
-- Creates new version at time 101

-- Transaction 1 still sees old version!
SELECT balance FROM accounts WHERE id = 1;
-- Still sees balance = 1000 (snapshot from time 100)

Benefits:

  • Readers never block writers
  • Writers never block readers
  • Great for read-heavy workloads
  • Natural time-travel queries (“show me data as of yesterday”)

Understanding Serializability

The gold standard for correctness is serializability: the result should be as if transactions ran one at a time, even though they actually ran concurrently.

Testing for Conflicts:

# Two transactions operating on same data
T1: READ(A), WRITE(B)
T2: WRITE(A), READ(B)

# Conflicts:
# T1.READ(A) conflicts with T2.WRITE(A)  (Read-Write)
# T2.READ(B) conflicts with T1.WRITE(B)  (Read-Write)

# Build a graph: T1 -> T2 (T1 must come before T2)
#                T2 -> T1 (T2 must come before T1)
# Cycle! Not serializable.

Why This Matters: Non-serializable schedules can produce results impossible with serial execution:

-- Account transfer race condition
-- T1: Transfer $100 from A to B
-- T2: Transfer $100 from B to A

-- Serial execution: No net change
-- Bad concurrent execution: Money appears/disappears!

Isolation Levels: Choosing Your Guarantees

Databases offer different isolation levels—trade-offs between correctness and performance:

Read Uncommitted: “I live dangerously”

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
-- Can see uncommitted changes (dirty reads)
-- Use case: Rough analytics where exactness doesn't matter

Read Committed: “Show me committed data” (PostgreSQL default)

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- Each query sees committed data at query start
-- Problem: Same query can return different results
SELECT COUNT(*) FROM orders;  -- Returns 100
-- Another transaction commits new order
SELECT COUNT(*) FROM orders;  -- Returns 101

Repeatable Read: “My view stays consistent”

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- All queries see same snapshot
-- Problem: Phantom reads (new rows matching WHERE)

Serializable: “Perfect isolation”

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- As if transactions ran one at a time
-- Might fail with "serialization error" - retry needed

Real-World Example: Seat Booking

-- With READ COMMITTED: Two people might book same seat
-- With SERIALIZABLE: One succeeds, one gets error to retry

BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT seat_id FROM seats 
WHERE flight_id = 123 AND status = 'available' 
LIMIT 1 FOR UPDATE;

UPDATE seats SET status = 'booked', customer_id = 456
WHERE seat_id = 789;
COMMIT;

Code Reference: For implementations of these concepts, see concurrency_control.py

Practical Locking Patterns

Pattern 1: Preventing Lost Updates

-- Problem: Two users editing same document
-- Solution: Optimistic locking with version

-- User A loads document
SELECT content, version FROM documents WHERE id = 123;
-- Returns: content="Hello", version=5

-- User B loads same document  
SELECT content, version FROM documents WHERE id = 123;
-- Returns: content="Hello", version=5

-- User A saves changes
UPDATE documents 
SET content = 'Hello World', version = version + 1
WHERE id = 123 AND version = 5;
-- Success! 1 row updated

-- User B tries to save
UPDATE documents
SET content = 'Hello Everyone', version = version + 1  
WHERE id = 123 AND version = 5;
-- Failure! 0 rows updated (version is now 6)
-- Application shows: "Document was modified by another user"

Pattern 2: Queue Processing

-- Multiple workers processing job queue
-- Need to ensure each job processed once

WITH next_job AS (
    SELECT job_id FROM job_queue
    WHERE status = 'pending'
    ORDER BY priority DESC, created_at ASC
    LIMIT 1
    FOR UPDATE SKIP LOCKED  -- Key: Skip rows locked by others
)
UPDATE job_queue 
SET status = 'processing', worker_id = 'worker-1'
WHERE job_id = (SELECT job_id FROM next_job)
RETURNING *;

Database Security

Access Control

-- Create user
CREATE USER 'app_user'@'localhost' IDENTIFIED BY 'secure_password';

-- Grant permissions
GRANT SELECT, INSERT, UPDATE ON mydb.* TO 'app_user'@'localhost';

-- Revoke permissions
REVOKE DELETE ON mydb.* FROM 'app_user'@'localhost';

-- Role-based access
CREATE ROLE read_only;
GRANT SELECT ON mydb.* TO read_only;
GRANT read_only TO 'analyst'@'localhost';

Data Encryption

At Rest:

  • Transparent Data Encryption (TDE)
  • File system encryption
  • Column-level encryption

In Transit:

  • SSL/TLS connections
  • VPN tunnels
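
As a rough sketch of column-level encryption in PostgreSQL, assuming the pgcrypto extension (the table and the in-SQL key are illustrative; real deployments keep keys outside the database):

CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE customer_secrets (
    customer_id INT PRIMARY KEY,
    ssn_encrypted BYTEA  -- Ciphertext, not plain text
);

INSERT INTO customer_secrets (customer_id, ssn_encrypted)
VALUES (1, pgp_sym_encrypt('123-45-6789', 'app-secret-key'));

SELECT customer_id, pgp_sym_decrypt(ssn_encrypted, 'app-secret-key') AS ssn
FROM customer_secrets WHERE customer_id = 1;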

SQL Injection Prevention

# Bad - vulnerable to injection
query = f"SELECT * FROM users WHERE username = '{username}'"

# Good - parameterized query
cursor.execute(
    "SELECT * FROM users WHERE username = %s",
    (username,)
)

# Good - using ORM
user = User.query.filter_by(username=username).first()

Advanced Database Internals

Query Optimizer Deep Dive

The query optimizer is the brain of the database. Understanding how it works helps you write better queries.

Cost Model Components:

# Simplified cost calculation
def estimate_cost(plan):
    # I/O cost: Reading pages from disk
    seq_page_cost = 1.0
    random_page_cost = 4.0  # Random I/O is slower
    
    # CPU cost: Processing rows
    cpu_tuple_cost = 0.01
    cpu_operator_cost = 0.0025
    
    # Network cost (for distributed databases)
    network_tuple_cost = 0.1
    
    total_cost = (
        plan.seq_pages * seq_page_cost +
        plan.random_pages * random_page_cost +
        plan.rows * cpu_tuple_cost +
        plan.operators * cpu_operator_cost +
        plan.network_rows * network_tuple_cost
    )
    return total_cost

Join Algorithm Selection:

-- Nested Loop Join: Good for small tables or indexed lookups
-- Cost: O(n * m)
SELECT * FROM small_table s JOIN large_table l ON s.id = l.foreign_id;

-- Hash Join: Good for medium tables without indexes  
-- Cost: O(n + m)
SELECT * FROM medium1 m1 JOIN medium2 m2 ON m1.id = m2.id;

-- Merge Join: Good for pre-sorted data
-- Cost: O(n log n + m log m) if sorting needed
SELECT * FROM sorted1 s1 JOIN sorted2 s2 ON s1.id = s2.id;

Statistics and Selectivity:

-- Database tracks statistics for better estimates
SELECT 
    attname as column,
    n_distinct,
    most_common_vals,
    most_common_freqs,
    histogram_bounds
FROM pg_stats
WHERE tablename = 'orders';

-- Selectivity affects plan choice
-- High selectivity (few rows): Index scan
-- Low selectivity (many rows): Sequential scan

Memory Management Internals

Buffer Pool Architecture:

class BufferPoolManager:
    def __init__(self, pool_size_mb):
        self.frames = [None] * (pool_size_mb * 128)  # 8KB pages
        self.page_table = {}  # page_id -> frame_id
        self.free_list = list(range(len(self.frames)))
        self.clock_hand = 0  # For clock replacement
        
    def fetch_page(self, page_id):
        # Check if in memory
        if page_id in self.page_table:
            frame_id = self.page_table[page_id]
            self.frames[frame_id].pin_count += 1
            return self.frames[frame_id]
        
        # Need to load from disk
        frame_id = self._get_free_frame()
        if frame_id is None:
            frame_id = self._evict_page()  # Clock algorithm
            
        # Load page from disk
        page = self._read_from_disk(page_id)
        self.frames[frame_id] = page
        self.page_table[page_id] = frame_id
        return page

Work Memory Areas:

-- Different operations use different memory areas
-- Sort operations
SET work_mem = '256MB';  -- Per operation!
EXPLAIN (ANALYZE, BUFFERS) 
SELECT * FROM large_table ORDER BY created_at;

-- Hash joins
-- Hash table must fit in work_mem or spills to disk
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM t1 JOIN t2 ON t1.id = t2.id;

-- Maintenance operations use different pool
SET maintenance_work_mem = '1GB';
CREATE INDEX idx_large ON large_table(column);

Lock Management Internals

Lock Compatibility Matrix:

# PostgreSQL lock modes
LOCK_COMPATIBILITY = {
    #            AS  RS  RE  SUE  S  SRE   E  AE
    'AS':      [ 1,  1,  1,  1,  1,  1,  1,  0], # AccessShare
    'RS':      [ 1,  1,  1,  1,  1,  1,  0,  0], # RowShare
    'RE':      [ 1,  1,  1,  1,  0,  0,  0,  0], # RowExclusive
    'SUE':     [ 1,  1,  1,  0,  0,  0,  0,  0], # ShareUpdateExcl
    'S':       [ 1,  1,  0,  0,  1,  0,  0,  0], # Share
    'SRE':     [ 1,  1,  0,  0,  0,  0,  0,  0], # ShareRowExcl
    'E':       [ 1,  0,  0,  0,  0,  0,  0,  0], # Exclusive
    'AE':      [ 0,  0,  0,  0,  0,  0,  0,  0], # AccessExclusive
}

Deadlock Detection Algorithm:

class DeadlockDetector:
    def __init__(self):
        self.wait_graph = {}  # txn -> [waiting_for_txns]
        
    def add_wait(self, waiter, holder):
        if waiter not in self.wait_graph:
            self.wait_graph[waiter] = []
        self.wait_graph[waiter].append(holder)
        
        # Check for cycle
        if self._has_cycle(waiter, holder, {holder}):
            return self._choose_victim()
    
    def _has_cycle(self, start, current, visited):
        if current == start:
            return True
        if current in self.wait_graph:
            for next_txn in self.wait_graph[current]:
                if next_txn not in visited:
                    visited.add(next_txn)
                    if self._has_cycle(start, next_txn, visited):
                        return True
        return False

Backup and Recovery

Backup Strategies

Full Backup:

# PostgreSQL
pg_dump dbname > backup.sql

# MySQL
mysqldump --all-databases > backup.sql

Incremental Backup:

  • Only changes since last backup
  • Requires less storage
  • Faster backup, slower restore

Point-in-Time Recovery:

  • Transaction logs
  • Restore to specific moment
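
A sketch of how PostgreSQL point-in-time recovery is typically wired up: archive WAL segments continuously, then restore a base backup and replay up to a target moment (paths and the timestamp are illustrative).

# postgresql.conf on the primary: keep every WAL segment
archive_mode = on
archive_command = 'cp %p /backup/wal_archive/%f'

# On the restored copy: replay archived WAL up to a point in time
restore_command = 'cp /backup/wal_archive/%f %p'
recovery_target_time = '2024-06-01 14:30:00'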

High Availability

Replication:

  • Master-slave
  • Master-master
  • Multi-master

Clustering:

  • Active-passive
  • Active-active
  • Shared storage
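
For example, PostgreSQL streaming replication in its simplest primary/standby form comes down to a few settings (a sketch for version 12+; host and credentials are illustrative):

# postgresql.conf on the standby
primary_conninfo = 'host=primary.example.com user=replicator password=secret'
hot_standby = on

# An empty standby.signal file in the data directory tells PostgreSQL to start
# as a read-only replica that streams WAL from the primary.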

Common Database Patterns and Anti-Patterns

Design Patterns That Work

1. Audit Trail Pattern

-- Track all changes to sensitive data
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255),
    -- ... other fields
);

CREATE TABLE users_audit (
    audit_id SERIAL PRIMARY KEY,
    user_id INT,
    changed_by INT REFERENCES users(id),
    changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    operation VARCHAR(10), -- INSERT, UPDATE, DELETE
    old_values JSONB,
    new_values JSONB
);

-- Trigger to automatically log changes
CREATE TRIGGER audit_users
AFTER INSERT OR UPDATE OR DELETE ON users
FOR EACH ROW EXECUTE FUNCTION log_user_changes();

2. Hierarchical Data Pattern

-- Method 1: Adjacency List (simple, good for shallow trees)
CREATE TABLE categories (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    parent_id INT REFERENCES categories(id)
);

-- Method 2: Materialized Path (fast for reading)
CREATE TABLE categories (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    path TEXT  -- '1/3/7' means: root(1) -> parent(3) -> this(7)
);

-- Method 3: Nested Sets (complex but powerful)
CREATE TABLE categories (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    lft INT,
    rgt INT
);
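
With the adjacency list (Method 1), walking an entire subtree is usually done with a recursive CTE; a sketch:

-- All descendants of category 1, with their depth
WITH RECURSIVE subtree AS (
    SELECT id, name, parent_id, 0 AS depth
    FROM categories
    WHERE id = 1
    UNION ALL
    SELECT c.id, c.name, c.parent_id, s.depth + 1
    FROM categories c
    JOIN subtree s ON c.parent_id = s.id
)
SELECT * FROM subtree;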

3. Polymorphic Association Pattern

-- When multiple tables need to reference a common feature
CREATE TABLE comments (
    id SERIAL PRIMARY KEY,
    commentable_type VARCHAR(50),  -- 'Post', 'Photo', 'Video'
    commentable_id INT,
    content TEXT,
    INDEX idx_commentable (commentable_type, commentable_id)
);

Anti-Patterns to Avoid

1. Entity-Attribute-Value (EAV) Gone Wrong

-- Anti-pattern: Storing everything as key-value pairs
CREATE TABLE user_attributes (
    user_id INT,
    attribute_name VARCHAR(50),
    attribute_value TEXT  -- Everything stored as text!
);

-- Problems:
-- - No type safety
-- - Terrible query performance
-- - No referential integrity
-- - Complex queries for simple tasks

-- Better: Use JSONB for truly dynamic attributes
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255),
    metadata JSONB  -- Flexible but still queryable
);

2. Intelligent Keys Anti-Pattern

-- Anti-pattern: Encoding meaning in primary keys
CREATE TABLE orders (
    order_id VARCHAR(20) PRIMARY KEY  -- 'US-2024-WEB-00123'
);

-- Problems:
-- - What if business rules change?
-- - Parsing keys for queries is slow
-- - Running out of namespace

-- Better: Use surrogate keys
CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    country VARCHAR(2),
    year INT,
    source VARCHAR(10),
    order_number INT,
    UNIQUE(country, year, source, order_number)
);

3. Multicolumn Attributes Anti-Pattern

-- Anti-pattern: Columns like tag1, tag2, tag3...
CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    title VARCHAR(200),
    tag1 VARCHAR(50),
    tag2 VARCHAR(50),
    tag3 VARCHAR(50)
);

-- Better: Proper many-to-many relationship
CREATE TABLE tags (
    id SERIAL PRIMARY KEY,
    name VARCHAR(50) UNIQUE
);

CREATE TABLE post_tags (
    post_id INT REFERENCES posts(id),
    tag_id INT REFERENCES tags(id),
    PRIMARY KEY (post_id, tag_id)
);

How Databases Store Your Data

Understanding database internals helps you design better schemas and debug performance issues.

The Storage Hierarchy

Databases carefully manage the journey from SQL to disk:

SQL Query
    ↓
Buffer Pool (RAM) - "Hot" data cached here
    ↓
Storage Engine - Manages pages and files
    ↓  
File System - Database files
    ↓
Disk - Actual persistent storage

Pages: The Building Blocks

Databases don’t read individual rows from disk—they read pages (typically 8KB or 16KB):

+------------------Page 1 (8KB)-------------------+
| Header (checksum, LSN, free space pointer)      |
|--------------------------------------------------|
| Row 1: {id: 1, name: "Alice", email: "..."}    |
| Row 2: {id: 2, name: "Bob", email: "..."}      |
| Row 3: {id: 3, name: "Charlie", email: "..."}  |
| ... (more rows) ...                             |
| Free Space                                       |
+--------------------------------------------------+

Why Pages Matter:

  • Disk I/O is slow; reading 8KB isn’t much slower than reading 100 bytes
  • Related data stored together (spatial locality)
  • Enables efficient caching in memory

Buffer Pool: Your Database’s Cache

The buffer pool is why databases can serve queries from memory:

class BufferPool:
    def __init__(self, size_mb):
        self.pages = {}  # page_id -> page_data
        self.lru = OrderedDict()  # for eviction
        self.max_pages = size_mb * 1024 // 8  # 8KB pages
    
    def get_page(self, page_id):
        if page_id in self.pages:
            # Cache hit! Move to end (most recently used)
            self.lru.move_to_end(page_id)
            return self.pages[page_id]
        else:
            # Cache miss - read from disk
            page = read_page_from_disk(page_id)
            self.add_to_cache(page_id, page)
            return page

Tuning Buffer Pool:

-- PostgreSQL: Check cache hit ratio
SELECT 
    sum(blks_hit)/(sum(blks_hit)+sum(blks_read)) as cache_hit_ratio
FROM pg_stat_database;
-- Want > 0.95 (95% from cache)

-- Increase if too low
ALTER SYSTEM SET shared_buffers = '4GB';

B+ Trees: The Workhorse Index Structure

B+ trees power most database indexes. Think of them as a multi-level phone book:

                       [M]
                      /   \
              [D,G,J]       [P,S,V]
             /  |  |  \     /  |  |  \
        [A-C][D-F][G-I][J-L][M-O][P-R][S-U][V-Z]
          ↓    ↓    ↓    ↓    ↓    ↓    ↓    ↓
       (actual data rows in leaf nodes)

Why B+ Trees Work Well:

  1. Shallow: Even with millions of rows, only 3-4 levels deep
  2. Cache-friendly: Each node fits in a page
  3. Range queries: Leaf nodes linked for scanning
  4. Predictable: Always balanced, consistent performance

Following a Search:

# Finding "John" in a B+ tree index on names
1. Root: "John" < "M", go left
2. Level 2: "John" >= "J", go to rightmost child
3. Leaf: Scan "J-L" page, find "John" -> row location
4. Fetch actual row from heap file

Insert Example:

def insert(tree, key, value):
    leaf = find_leaf(tree.root, key)
    
    if leaf.has_space():
        leaf.insert(key, value)
    else:
        # Split leaf into two
        new_leaf = leaf.split()
        middle_key = new_leaf.keys[0]
        
        # Propagate split up the tree
        insert_into_parent(leaf.parent, middle_key, new_leaf)

LSM Trees: Built for Big Data Writes

While B+ trees update in place, LSM trees use a different strategy perfect for write-heavy workloads:

The Big Idea: Buffer writes in memory, flush to disk in batches

Writes go to:
    MemTable (in RAM)
         ↓ (when full)
    SSTable Level 0 (on disk)
         ↓ (compaction)
    SSTable Level 1 (larger, sorted)
         ↓ (compaction)
    SSTable Level 2 (even larger)

Write Path Example:

class LSMTree:
    def write(self, key, value):
        # 1. Log for crash recovery
        self.wal.append(f"SET {key} = {value}")
        
        # 2. Add to in-memory table
        self.memtable[key] = value
        
        # 3. Flush when full
        if self.memtable.size() > THRESHOLD:
            self.flush_to_disk()
    
    def flush_to_disk(self):
        # Sort and write to new SSTable file
        sorted_data = sorted(self.memtable.items())
        sstable = create_sstable(sorted_data)
        self.sstables[0].append(sstable)
        self.memtable.clear()

Why Cassandra/RocksDB Use LSM:

  • Sequential writes are 100x faster than random writes
  • Great for time-series data (always appending)
  • Compaction happens in background

The Trade-off:

  • Writes: Super fast (just append)
  • Reads: Slower (might check multiple files)
  • Solution: Bloom filters to skip files that definitely don’t have the key
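
A toy Bloom filter sketch showing why this helps: a few hash-derived bits per key can answer "definitely not in this file" without reading the SSTable at all.

import hashlib

class BloomFilter:
    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # An int used as a bit array

    def _positions(self, key):
        # Derive several bit positions from one digest
        digest = hashlib.sha256(key.encode()).digest()
        for i in range(self.num_hashes):
            chunk = int.from_bytes(digest[i * 4:(i + 1) * 4], "big")
            yield chunk % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        # False means definitely absent; True means "maybe", so check the SSTable
        return all(self.bits & (1 << pos) for pos in self._positions(key))

# Usage: skip SSTables whose filter rules the key out
bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))   # True
print(bf.might_contain("user:999"))  # Almost certainly False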

Write-Ahead Logging: Surviving Crashes

How do databases maintain ACID’s durability when power fails mid-transaction? Write-Ahead Logging (WAL).

The Rule: Log changes before applying them

class WriteAheadLog:
    def __init__(self):
        self.log_file = open("database.wal", "ab")  # Append, binary
        self.lsn = 0  # Log Sequence Number
    
    def log_update(self, tx_id, table, row_id, old_value, new_value):
        entry = {
            "lsn": self.lsn,
            "tx_id": tx_id,
            "type": "UPDATE",
            "table": table,
            "row_id": row_id,
            "old_value": old_value,  # For undo
            "new_value": new_value   # For redo
        }
        self.log_file.write(serialize(entry))
        self.log_file.flush()  # Push to the OS; a real WAL also fsyncs to force it to disk
        self.lsn += 1
        
        # Only now safe to update actual data
        return self.lsn

Recovery After Crash:

def recover():
    # Phase 1: Analysis - What was happening?
    committed_txns = set()
    active_txns = set()
    
    for entry in read_log_from_checkpoint():
        if entry.type == "BEGIN":
            active_txns.add(entry.tx_id)
        elif entry.type == "COMMIT":
            active_txns.remove(entry.tx_id)
            committed_txns.add(entry.tx_id)
    
    # Phase 2: Redo - Replay committed transactions
    for entry in read_log_from_checkpoint():
        if entry.tx_id in committed_txns:
            apply_change(entry)
    
    # Phase 3: Undo - Rollback incomplete transactions
    for entry in reversed(read_log_from_checkpoint()):
        if entry.tx_id in active_txns:
            undo_change(entry)

Why This Works:

  • Log is sequential (fast writes)
  • Log records are small
  • Can reconstruct any state from log
  • Checkpoints limit recovery time

Code Reference: For working implementations, see storage_engines.py

Troubleshooting Common Database Issues

Debugging Slow Queries

Step 1: Identify the Culprit

-- PostgreSQL: Enable slow query logging
ALTER SYSTEM SET log_min_duration_statement = 1000; -- Log queries > 1 second
SELECT pg_reload_conf();

-- MySQL: Check slow query log
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;

-- Find currently running queries
SELECT pid, now() - query_start as duration, query 
FROM pg_stat_activity 
WHERE state = 'active' 
ORDER BY duration DESC;

Step 2: Analyze the Query Plan

-- Look for these red flags in EXPLAIN output:
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;

-- Bad signs:
-- - "Seq Scan" on large tables (missing index?)
-- - "Nested Loop" with high row counts (consider hash join)
-- - High "Buffers: shared read" (data coming from disk, not cache)
-- - "Sort" with "Disk: ..." (increase work_mem)

Step 3: Common Fixes

-- Missing index on JOIN columns
CREATE INDEX idx_orders_customer_id ON orders(customer_id);

-- Statistics out of date
ANALYZE orders; -- PostgreSQL
ANALYZE TABLE orders; -- MySQL

-- Query needs rewriting
-- Bad: Correlated subquery (re-evaluated for every row)
SELECT * FROM orders o 
WHERE total > (SELECT AVG(total) FROM orders WHERE customer_id = o.customer_id);

-- Good: Compute each customer's average once, then join
WITH customer_avgs AS (
    SELECT customer_id, AVG(total) as avg_total
    FROM orders
    GROUP BY customer_id
)
SELECT o.* FROM orders o 
JOIN customer_avgs ca ON o.customer_id = ca.customer_id 
WHERE o.total > ca.avg_total;

Connection Pool Issues

Symptoms: “Too many connections”, intermittent timeouts

Diagnosis:

-- Check current connections
SELECT count(*) FROM pg_stat_activity;

-- See what connections are doing
SELECT state, count(*) 
FROM pg_stat_activity 
GROUP BY state;

-- Find idle connections
SELECT pid, usename, application_name, state_change 
FROM pg_stat_activity 
WHERE state = 'idle' 
AND state_change < NOW() - INTERVAL '10 minutes';

Solutions:

# Configure connection pooling properly
pool = psycopg2.pool.ThreadedConnectionPool(
    minconn=5,    # Keep some connections ready
    maxconn=20,   # Limit maximum connections
    host="localhost",
    database="mydb"
)

# Use context managers to ensure cleanup
from contextlib import contextmanager

@contextmanager
def get_db_connection():
    conn = pool.getconn()
    try:
        yield conn
        conn.commit()
    except:
        conn.rollback()
        raise
    finally:
        pool.putconn(conn)

Disk Space Issues

Prevention:

-- Monitor table sizes
SELECT 
    schemaname,
    tablename,
    pg_size_pretty(pg_total_relation_size(schemaname||'.'||tablename)) as size
FROM pg_tables
ORDER BY pg_total_relation_size(schemaname||'.'||tablename) DESC
LIMIT 20;

-- Set up automatic cleanup
-- PostgreSQL: Configure autovacuum
ALTER TABLE large_table SET (autovacuum_vacuum_scale_factor = 0.01);

-- Archive old data
CREATE TABLE orders_archive (LIKE orders INCLUDING ALL);
INSERT INTO orders_archive SELECT * FROM orders WHERE created_at < '2023-01-01';
DELETE FROM orders WHERE created_at < '2023-01-01';

Performance Tuning: Making It Fast

Performance tuning is part science, part art. Here’s a practical approach:

Step 1: Measure First

Find Slow Queries:

-- PostgreSQL: Find slowest queries
SELECT 
    mean_exec_time,
    calls,
    total_exec_time,
    query
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

Check Cache Performance:

-- Cache hit ratio (want > 95%)
SELECT 
    sum(heap_blks_hit) / 
    (sum(heap_blks_hit) + sum(heap_blks_read)) as cache_hit_ratio
FROM pg_statio_user_tables;

Step 2: Tune Configuration

Memory Settings (PostgreSQL example):

# Buffer pool - start with 25% of RAM
shared_buffers = 4GB

# Total memory for queries  
work_mem = 50MB  # Per operation!

# Maintenance operations
maintenance_work_mem = 1GB

# Effective cache - tell planner about OS cache
effective_cache_size = 12GB  # ~75% of RAM

Step 3: Optimize Schema

Partitioning for Large Tables:

-- Partition orders by month
CREATE TABLE orders (
    order_id BIGINT,
    order_date DATE,
    customer_id INT,
    total DECIMAL(10,2)
) PARTITION BY RANGE (order_date);

-- Create monthly partitions
CREATE TABLE orders_2024_01 PARTITION OF orders
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- Queries on date ranges now scan only relevant partitions!

Materialized Views for Complex Queries:

-- Expensive dashboard query
CREATE MATERIALIZED VIEW customer_stats AS
SELECT 
    c.customer_id,
    c.name,
    COUNT(DISTINCT o.order_id) as order_count,
    SUM(o.total) as lifetime_value,
    MAX(o.order_date) as last_order_date
FROM customers c
LEFT JOIN orders o ON c.customer_id = o.customer_id
GROUP BY c.customer_id, c.name;

-- Refresh periodically (e.g., from a scheduled job)
REFRESH MATERIALIZED VIEW customer_stats;

CREATE INDEX idx_customer_stats_value ON customer_stats(lifetime_value);

-- Now dashboard query is instant!

Step 4: Monitor and Iterate

Key Metrics to Watch:

  • Response time: 95th percentile latency
  • Throughput: Queries per second
  • Resource usage: CPU, memory, disk I/O
  • Lock waits: Blocked queries
  • Connection pool: Active vs idle connections

The Future of Databases

Database technology continues to evolve rapidly. Here are the cutting-edge developments:

NewSQL: Best of Both Worlds

NewSQL databases provide SQL and ACID guarantees at massive scale:

Google Spanner: The Pioneer

-- Looks like regular SQL
CREATE TABLE users (
    user_id INT64 NOT NULL,
    email STRING(255),
    created_at TIMESTAMP
) PRIMARY KEY (user_id);

-- But runs across continents!
-- Synchronous replication globally
-- External consistency via TrueTime

Spanner uses atomic clocks and GPS to synchronize time globally, enabling consistent transactions across the planet!

CockroachDB: Spanner for Mortals

-- Familiar PostgreSQL syntax
CREATE TABLE orders (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    customer_id UUID NOT NULL,
    total DECIMAL(10,2),
    country STRING,
    region STRING AS (CASE 
        WHEN country IN ('US', 'CA') THEN 'NA'
        WHEN country IN ('GB', 'FR', 'DE') THEN 'EU'
        ELSE 'OTHER' 
    END) STORED  -- Computed column for partitioning
);

-- Automatically distributed, survives datacenter failures

Machine Learning in Databases

Databases are beginning to use ML to optimize themselves:

Learned Indexes: Replacing B+ Trees with ML

# Traditional B+ tree: Follow pointers
def btree_lookup(key):
    node = root
    while not node.is_leaf:
        node = node.find_child(key)
    return node.find_position(key)

# Learned index: Predict position directly!
def learned_lookup(key):
    # Neural network learns the cumulative distribution
    predicted_pos = model.predict(key) * num_records
    
    # Handle prediction error
    min_pos = max(0, predicted_pos - error_bound)
    max_pos = min(num_records, predicted_pos + error_bound)
    
    # Binary search in small range
    return binary_search(data[min_pos:max_pos], key)

Results: 70% less memory, 2x faster lookups for some workloads!

Self-Tuning Databases:

-- Database observes your queries and auto-creates indexes
-- Monday: Many queries filtering by customer_id
-- Tuesday: Database automatically creates index
CREATE INDEX idx_orders_customer_id ON orders(customer_id);
-- No DBA needed!

Quantum Databases: The Far Future

Quantum computing could revolutionize database searches:

Grover’s Algorithm: Quantum Search

# Classical search: Check each item
def classical_search(database, target):
    for item in database:  # O(n)
        if item == target:
            return item

# Quantum search: Superposition magic
def quantum_search(database, target):
    # Put all items in superposition
    # Amplify probability of target
    # Measure to get result
    # Only O(√n) iterations!
    pass

For a billion-row table:

  • Classical: 1 billion checks worst case
  • Quantum: about 31,623 (≈ √n) checks worst case (see the quick check below)
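
The square-root figure is easy to sanity-check:

import math

n = 1_000_000_000
print(math.isqrt(n))  # 31622, i.e. roughly 31,623 iterations vs. up to n classical checks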

Current Reality:

  • Quantum computers are noisy and limited
  • Only work for specific problems
  • Still years from practical database use
  • But research is accelerating!

AI-Powered Query Optimization

Traditional optimizers use statistics and rules. Modern systems learn from experience:

# Query plan as a graph
query_graph = {
    "nodes": [
        {"id": 1, "op": "TableScan", "table": "orders"},
        {"id": 2, "op": "TableScan", "table": "customers"},
        {"id": 3, "op": "HashJoin", "condition": "orders.customer_id = customers.id"},
        {"id": 4, "op": "Filter", "predicate": "total > 100"}
    ],
    "edges": [
        {"from": 1, "to": 3},
        {"from": 2, "to": 3},
        {"from": 3, "to": 4}
    ]
}

# GNN learns optimal plans
optimal_plan = query_optimizer_gnn.predict(query_graph)

Real Benefits Today:

  • Better cardinality estimates for complex joins
  • Learns correlations statistics miss
  • Adapts to workload changes
  • Already used in production systems at Microsoft and Google

Blockchain Databases: Trust Through Technology

Blockchains bring immutability and trust to databases:

Use Case: Supply Chain Tracking

-- Traditional database: Can be altered
UPDATE shipments SET status = 'delivered' WHERE id = 123;
-- Who changed it? When? Can we trust this?

-- Blockchain database: Immutable audit trail
INSERT INTO blockchain_shipments (
    shipment_id,
    status,
    location,
    timestamp,
    previous_hash,
    signature
) VALUES (
    123,
    'delivered',
    'Customer warehouse',
    NOW(),
    SHA256(previous_record),
    SIGN(data, private_key)
);
-- Cryptographically proven, tamper-evident
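
The SHA256(...) and SIGN(...) calls above are shorthand. The core mechanism is hash chaining, sketched below with an in-memory list standing in for the table and signatures omitted:

import hashlib
import json
import time

chain = []

def append_record(payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"payload": payload, "timestamp": time.time(), "previous_hash": prev_hash}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)

def verify_chain():
    # Any edit to an earlier record changes its hash and breaks every later link
    for i, record in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: v for k, v in record.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["previous_hash"] != expected_prev or record["hash"] != recomputed:
            return False
    return True

append_record({"shipment_id": 123, "status": "delivered"})
assert verify_chain()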

When It Makes Sense:

  • Multiple organizations need shared truth
  • Audit trail requirements
  • Regulatory compliance
  • High-value transactions

When It Doesn’t:

  • Need to update/delete data
  • High transaction volume
  • Single organization control
  • Performance critical

Hardware-Accelerated Databases

Modern hardware enables new database architectures:

GPU Databases: Massive Parallelism

-- Running on GPU: 100x faster for analytics
SELECT 
    product_category,
    SUM(quantity * price) as revenue,
    COUNT(DISTINCT customer_id) as unique_customers
FROM sales_fact
WHERE sale_date >= '2024-01-01'
GROUP BY product_category;

-- GPU executes thousands of threads in parallel

Persistent Memory: Best of RAM and SSD

# Traditional: RAM is fast but volatile
ram_buffer = {}  # Lost on power failure

# Persistent Memory: Fast AND durable
pmem_buffer = PersistentDict("/mnt/pmem/buffer")
pmem_buffer["key"] = "value"  # Survives power loss!
# Nearly RAM speed, SSD persistence

Smart SSDs: Compute at Storage

# Traditional: Move data to CPU
data = ssd.read("SELECT * FROM huge_table")
filtered = cpu.filter(data, condition)

# Smart SSD: Filter at storage layer  
filtered = smart_ssd.read("SELECT * FROM huge_table WHERE condition")
# Only relevant data travels to CPU

Code Reference: For implementations of these modern approaches, see modern_databases.py

Real-World Case Studies

Case Study 1: Instagram’s Cassandra Migration

The Challenge:

  • PostgreSQL couldn’t handle Instagram’s massive growth
  • Billions of photos, likes, and follows
  • Need for geographic distribution

The Solution:

-- Cassandra schema for user feeds
CREATE TABLE user_feed (
    user_id BIGINT,
    post_timestamp TIMESTAMP,
    post_id BIGINT,
    author_id BIGINT,
    post_data TEXT,
    PRIMARY KEY (user_id, post_timestamp, post_id)
) WITH CLUSTERING ORDER BY (post_timestamp DESC);

-- Optimized for: "Show me user X's feed, newest first"

Lessons Learned:

  • NoSQL isn’t always better—Instagram kept PostgreSQL for user data
  • Design schema around query patterns
  • Denormalization is OK when you need scale

Case Study 2: Uber’s Schemaless

The Challenge:

  • Hundreds of microservices with different data needs
  • Rapid development requiring schema flexibility
  • Need for strong consistency in some cases

The Solution:

// Schemaless: MySQL backend with JSON-like interface
{
  "row_key": "rider:123:profile",
  "cells": [
    {
      "column": "name",
      "value": "Alice Smith",
      "version": 1234567890
    },
    {
      "column": "rating",
      "value": 4.8,
      "version": 1234567891
    }
  ]
}
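
In code, the model boils down to versioned cells addressed by (row key, column). A toy in-memory sketch of that interface (the function names are invented; the real system shards these rows across many MySQL instances):

import time

cells = {}  # (row_key, column) -> list of (version, value)

def put_cell(row_key, column, value):
    version = int(time.time() * 1000)  # millisecond timestamp as the version
    cells.setdefault((row_key, column), []).append((version, value))

def get_cell(row_key, column):
    versions = cells.get((row_key, column), [])
    return max(versions, key=lambda v: v[0])[1] if versions else None  # latest version wins

put_cell("rider:123:profile", "rating", 4.8)
print(get_cell("rider:123:profile", "rating"))  # 4.8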

Benefits:

  • Schema changes without migrations
  • Per-cell versioning for consistency
  • MySQL’s reliability with NoSQL flexibility

Case Study 3: Discord’s Message Storage

The Challenge:

  • Billions of messages across millions of channels
  • Messages must be queryable by channel and time
  • Old messages accessed rarely but must be available

The Solution:

-- Cassandra for recent messages (hot data)
CREATE TABLE messages (
    channel_id BIGINT,
    bucket INT,  -- Time bucket (e.g., day)
    message_id BIGINT,
    author_id BIGINT,
    content TEXT,
    PRIMARY KEY ((channel_id, bucket), message_id)
);

-- ScyllaDB for even better performance
-- Google Cloud Storage for old messages (cold data)

Architecture (a simplified router sketch follows this list):

  1. Write to Cassandra immediately
  2. After 30 days, migrate to object storage
  3. Query router checks both systems
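
A minimal router sketch, assuming hypothetical hot_store and cold_store objects that each expose query(channel_id, start, end) (the real backends are Cassandra/ScyllaDB and object storage):

from datetime import datetime, timedelta, timezone

HOT_WINDOW = timedelta(days=30)

class MessageRouter:
    def __init__(self, hot_store, cold_store):
        self.hot = hot_store      # recent messages (Cassandra/ScyllaDB)
        self.cold = cold_store    # archived messages (object storage)

    def fetch(self, channel_id, start, end):
        cutoff = datetime.now(timezone.utc) - HOT_WINDOW
        messages = []
        if end > cutoff:    # recent part of the range lives in the hot store
            messages += self.hot.query(channel_id, max(start, cutoff), end)
        if start < cutoff:  # older part lives in cold storage
            messages += self.cold.query(channel_id, start, min(end, cutoff))
        return sorted(messages, key=lambda m: m["timestamp"])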

Database Selection Guide

Decision Tree for Database Selection

Start Here
    |
    v
Is your data relational?
    |                    \
   Yes                   No -> Document Store (MongoDB)
    |                          or Key-Value (Redis)
    v
Need ACID guarantees?
    |              \
   Yes              No -> Consider NoSQL
    |
    v
Scale needs?
    |                          \
   Single Server                Multi-Region
    |                                |
    v                                v
PostgreSQL/MySQL              CockroachDB/Spanner


Special Cases:
- Time-series data -> InfluxDB, TimescaleDB
- Graph relationships -> Neo4j, Amazon Neptune  
- Full-text search -> Elasticsearch
- Analytics -> ClickHouse, Apache Druid
- Embedded -> SQLite, RocksDB

Database Comparison Matrix

Database   | Type        | Best For                            | Avoid When                | Scale Limit
PostgreSQL | Relational  | General purpose, complex queries    | Petabyte scale            | ~10TB comfortable
MySQL      | Relational  | Web apps, simple queries            | Complex analytics         | ~5TB comfortable
MongoDB    | Document    | Flexible schema, rapid development  | Strong consistency needs  | ~100TB
Cassandra  | Wide Column | Time-series, write-heavy            | Complex queries           | Petabytes
Redis      | Key-Value   | Caching, real-time                  | Primary data store        | RAM size
Neo4j      | Graph       | Relationship queries                | Tabular data              | ~10B nodes
ClickHouse | Column      | Analytics, aggregations             | OLTP workloads            | Petabytes
SQLite     | Embedded    | Mobile, desktop apps                | Concurrent writes         | ~100GB

References

Essential Literature

Foundational Texts:

  • Kleppmann, M. (2017). Designing Data-Intensive Applications - Best modern overview
  • Karwin, B. (2010). SQL Antipatterns - Learn from common mistakes

Going Deeper:

  • Ramakrishnan & Gehrke (2003). Database Management Systems - Solid textbook
  • Petrov, A. (2019). Database Internals - How databases actually work

Research Frontiers:

  • Recent SIGMOD, VLDB, and ICDE conference proceedings
  • The Morning Paper - Database paper summaries

Hands-On Projects

  1. Build a Mini Database: Implement B+ tree, buffer pool, and simple queries
  2. Benchmark Different Databases: Compare PostgreSQL, MySQL, MongoDB for your use case
  3. Distributed System: Build a simple distributed key-value store with Raft
  4. Query Optimizer: Write a cost-based optimizer for simple queries

Best Practices from the Trenches

Design Principles That Scale

1. Design for 10x Growth

-- Bad: Works today, fails at scale
CREATE TABLE users (
    id INT PRIMARY KEY,  -- Runs out at 2 billion!
    email VARCHAR(50)    -- Some emails are longer!
);

-- Good: Room to grow
CREATE TABLE users (
    id BIGINT PRIMARY KEY,
    email VARCHAR(255),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    INDEX idx_email (email),
    INDEX idx_created (created_at)
);

2. Make Schemas Self-Documenting

-- Bad: Cryptic names
CREATE TABLE usr_prch_hist (u_id INT, p_id INT, ts INT);

-- Good: Clear intent
CREATE TABLE user_purchase_history (
    user_id BIGINT NOT NULL,
    product_id BIGINT NOT NULL,
    purchased_at TIMESTAMP NOT NULL,
    quantity INT NOT NULL DEFAULT 1,
    unit_price DECIMAL(10,2) NOT NULL,
    FOREIGN KEY (user_id) REFERENCES users(id),
    FOREIGN KEY (product_id) REFERENCES products(id),
    INDEX idx_user_purchases (user_id, purchased_at DESC)
);

3. Plan for Maintenance

-- Add metadata columns to important tables
CREATE TABLE orders (
    id BIGINT PRIMARY KEY,
    -- Business columns
    customer_id BIGINT NOT NULL,
    total DECIMAL(10,2) NOT NULL,
    
    -- Maintenance columns
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    version INT DEFAULT 1,  -- For optimistic locking
    is_deleted BOOLEAN DEFAULT FALSE  -- Soft deletes
);
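
The version column enables optimistic locking: include the version you read in the UPDATE's WHERE clause and treat zero affected rows as a conflict. A runnable sqlite3 sketch of the pattern (table and function names are illustrative):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, version INTEGER DEFAULT 1)")
conn.execute("INSERT INTO orders VALUES (1, 100.0, 1)")

def update_total(order_id, new_total, expected_version):
    cursor = conn.execute(
        "UPDATE orders SET total = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_total, order_id, expected_version),
    )
    if cursor.rowcount == 0:
        raise RuntimeError("Conflict: row was changed since it was read")

update_total(1, 120.0, expected_version=1)  # succeeds; a stale retry with version=1 would now fail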

Common Pitfalls to Avoid

  1. N+1 Queries: Load related data in one query, not one per row (see the sketch after this list)
  2. Missing Indexes on Foreign Keys: Every FK should have an index
  3. Storing Calculated Values: Use generated columns or views instead
  4. Ignoring Time Zones: Store UTC, convert for display
  5. Not Planning for Deletes: Soft deletes often better than hard deletes
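
Here is the N+1 pitfall and its fix in a self-contained sqlite3 example (the schema is made up for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO orders VALUES (1, 1, 50.0), (2, 1, 75.0), (3, 2, 20.0);
""")

# Bad: one query per user (1 + N round trips)
for user_id, name in conn.execute("SELECT id, name FROM users"):
    totals = conn.execute(
        "SELECT total FROM orders WHERE user_id = ?", (user_id,)
    ).fetchall()

# Good: one query with a JOIN (or a single IN (...) query)
rows = conn.execute("""
    SELECT u.name, o.total
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
""").fetchall()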

Build Your Own Database

The best way to understand databases is to build one. Here’s a practical progression:

Project 1: Key-Value Store (Weekend Project)

# Start simple - in-memory key-value store with an append-only log
import json

class SimpleKVStore:
    def __init__(self, log_path="kvstore.log"):
        self.data = {}
        self.log_path = log_path  # Append-only log gives durability

    def set(self, key, value):
        # Write the operation to the log before applying it
        with open(self.log_path, "a") as log:
            log.write(f"SET {key} {value}\n")
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

    def snapshot(self):
        # Persist the full state so the log can be truncated
        with open("snapshot.db", "w") as f:
            json.dump(self.data, f)

Project 2: B+ Tree Index (1-2 Weeks)

# Add indexing for range queries
class BPlusTree:
    def __init__(self, order=4):
        self.root = LeafNode()
        self.order = order
    
    def insert(self, key, value):
        # Find leaf, split if needed
        # Update parents
        pass
    
    def range_query(self, start, end):
        # Find start leaf
        # Scan linked leaves until end
        pass

Project 3: Simple SQL Engine (1 Month)

# Parse and execute basic SQL
class MiniSQL:
    def execute(self, query):
        ast = parse_sql(query)
        
        if ast.type == "SELECT":
            table = self.scan_table(ast.table)
            filtered = self.apply_where(table, ast.where)
            return self.project(filtered, ast.columns)
        
        elif ast.type == "CREATE TABLE":
            self.create_table(ast.table_name, ast.columns)

Project 4: Add Transactions (2 Months)

  • Implement write-ahead logging (a minimal sketch follows this list)
  • Add simple 2PL for isolation
  • Build recovery manager
  • Handle concurrent access
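
A minimal write-ahead-logging sketch to get started, assuming simple "SET key value" records in a single log file (recovery just replays the log in order):

import os

LOG_PATH = "wal.log"

def apply_set(store, key, value):
    # Log the change and force it to disk before touching in-memory state
    with open(LOG_PATH, "a") as log:
        log.write(f"SET {key} {value}\n")
        log.flush()
        os.fsync(log.fileno())
    store[key] = value

def recover():
    store = {}
    if os.path.exists(LOG_PATH):
        with open(LOG_PATH) as log:
            for line in log:  # redo every logged operation
                _, key, value = line.rstrip("\n").split(" ", 2)
                store[key] = value
    return store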

Each project builds on the last, gradually introducing complexity!

See Also

  • AWS - Cloud database services and DynamoDB internals
  • Docker - Containerizing databases
  • Cybersecurity - Database security and encryption
  • AI - Machine learning with databases and learned indexes
  • Networking - Distributed database protocols
  • Quantum Computing - Future of quantum database algorithms

Summary

Databases are the foundation of modern applications. From simple files to distributed systems spanning the globe, they solve the fundamental challenge of storing and retrieving data reliably at scale.

Whether you’re building a small app or a global platform, understanding how databases work—from B+ trees to distributed consensus—helps you make better design decisions and debug issues when they arise.

The field continues to evolve rapidly, with machine learning, new hardware, and distributed systems pushing the boundaries of what’s possible. But the core principles—organizing data efficiently, managing concurrent access, and ensuring reliability—remain timeless.

Directions worth watching:

AI-Native Databases:

  • Vector databases for AI/ML (Pinecone, Weaviate, Qdrant)
  • Natural language to SQL (Text2SQL with LLMs)
  • Automatic index and query optimization with ML

New Architectures:

  • Disaggregated storage and compute (Snowflake, Databricks)
  • Serverless databases (Neon, PlanetScale, Fauna)
  • Multi-model databases (document + graph + relational)

Developer Experience:

  • Database branching and preview environments
  • Type-safe query builders (Prisma, Drizzle)
  • Edge databases for low latency (Cloudflare D1, Fly.io)

Start with the basics, experiment with different databases, and gradually work your way up to advanced topics. The journey from SELECT * FROM users to building distributed systems is challenging but incredibly rewarding.

Glossary of Database Terms

ACID: Atomicity, Consistency, Isolation, Durability - properties that guarantee reliable transactions

B-Tree/B+ Tree: Balanced tree data structure used in most database indexes

CAP Theorem: When a network partition occurs, a distributed system must give up either consistency or availability; informally, you can have at most 2 of Consistency, Availability, Partition tolerance

Cardinality: Number of unique values in a column (affects index efficiency)

Deadlock: When two transactions wait for each other indefinitely

Foreign Key: Column that references primary key in another table

Index: Data structure that speeds up queries

MVCC: Multi-Version Concurrency Control - keeps multiple versions of each row so readers and writers can proceed concurrently without blocking each other

Normalization: Process of organizing data to reduce redundancy

OLTP/OLAP: Online Transaction Processing vs Online Analytical Processing

Primary Key: Unique identifier for each row

Query Planner: Component that decides how to execute queries efficiently

Replication: Copying data to multiple servers for availability

Sharding: Splitting data across multiple servers horizontally

Transaction: Group of operations that succeed or fail together

WAL: Write-Ahead Logging - ensures durability by logging before applying changes