Web Servers, Reverse Proxies & Modern Deployment

Understanding Web Servers

A web server is software that listens for HTTP/HTTPS requests and responds with the appropriate content. When someone types your domain into their browser, the web server is what actually handles that request.

  • Nginx: High-performance, widely used, requires manual configuration
  • Apache: One of the oldest and most established, very flexible
  • Caddy: Modern, automatic HTTPS, simple configuration
  • Lighttpd: Lightweight and fast

Web servers can serve static files directly (like HTML, CSS, images) or act as a gateway to application servers running your backend code.
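For example, serving a directory of static files takes only a few lines in Caddy (the directory path here is just an example):

```
example.com {
    root * /var/www/mysite
    file_server
}
```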

What is a Reverse Proxy?

A reverse proxy sits between the internet and your backend services. Instead of clients connecting directly to your application, they connect to the reverse proxy, which then forwards requests to the appropriate backend service.

Internet → Reverse Proxy → Your Application(s)

This might seem like an unnecessary extra step, but it's actually crucial for modern web infrastructure.

Why Do We Need Reverse Proxies?

1. SSL/TLS Termination

Your application doesn't need to handle HTTPS encryption. The reverse proxy handles all the certificate management and encryption/decryption, then communicates with your app over plain HTTP on the internal network.

Browser (HTTPS) → Reverse Proxy (handles SSL) → App (HTTP)

This means:

  • You configure certificates in one place, not in every application
  • Your app code stays simpler (no SSL logic needed)
  • You can easily update certificates without touching your app

2. Multiple Applications on One Server

A server has a single public IP address, and only one process can listen on port 443 at a time. A reverse proxy solves this by routing requests based on the domain name:

blog.example.com → Reverse Proxy → Blog (port 3000)
api.example.com  → Reverse Proxy → API (port 8000)
app.example.com  → Reverse Proxy → Web App (port 5173)

All three domains point to the same server, but the reverse proxy directs traffic to the correct application based on the Host header.
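Under the hood, the routing key is just a header: every HTTP/1.1 request carries the domain in its Host header, so a request to the blog arrives looking roughly like this:

```
GET /latest-post HTTP/1.1
Host: blog.example.com
```

The proxy reads that header and forwards the request to the matching backend.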

3. Load Balancing

A reverse proxy can distribute incoming requests across multiple instances of your application:

                    → App Instance 1
Reverse Proxy       → App Instance 2
                    → App Instance 3

This improves reliability and performance by spreading the load.
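In Caddy, load balancing is just a matter of listing multiple upstreams (the app1/app2/app3 hostnames below are hypothetical container names; round_robin is one of Caddy's built-in policies):

```
example.com {
    reverse_proxy app1:3000 app2:3000 app3:3000 {
        lb_policy round_robin
    }
}
```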

4. Security & Protection

The reverse proxy acts as a shield:

  • Hides your actual application servers from direct internet access
  • Can add rate limiting to prevent abuse
  • Provides a single point for implementing security rules
  • Can filter malicious requests before they reach your app
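As one concrete example, Nginx ships with request rate limiting. A sketch (the zone name and limits are arbitrary, and the limit_req_zone line must sit in the http context):

```
# In the http context: track clients by IP, allow ~10 requests/second each
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        # Permit short bursts of 20 requests; excess requests get a 503
        limit_req zone=perip burst=20 nodelay;
        proxy_pass http://localhost:3000;
    }
}
```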

5. Caching & Compression

Reverse proxies can cache static assets and compress responses, reducing load on your application and improving speed for users.
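In Caddy, response compression is a single directive; Caddy negotiates the best encoding the client supports:

```
example.com {
    encode zstd gzip
    reverse_proxy localhost:3000
}
```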

Reverse Proxy Solutions

Nginx

Nginx is the most popular reverse proxy, known for its performance and stability.

Pros:

  • Extremely fast and efficient
  • Battle-tested in production
  • Huge community and extensive documentation
  • Fine-grained control over every aspect

Cons:

  • Configuration syntax is complex and unintuitive
  • Manual SSL certificate management (usually via Certbot)
  • Requires reloads when changing config
  • Steep learning curve

Example Nginx config:

server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;
    
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Setting up HTTPS with Nginx requires:

  1. Installing Certbot
  2. Running certificate generation
  3. Configuring auto-renewal
  4. Manually updating Nginx config with certificate paths
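On a Debian/Ubuntu server, those steps look roughly like this (package names differ on other distros):

```
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com          # obtains a certificate and updates the Nginx config
sudo systemctl list-timers | grep certbot    # the renewal timer is usually installed automatically
```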

Caddy (Recommended!)

Caddy is a modern web server that prioritizes simplicity and automatic HTTPS. It's my strong recommendation for most use cases.

Pros:

  • Automatic HTTPS: Automatically obtains and renews Let's Encrypt certificates
  • Dead simple configuration (usually just 2-3 lines)
  • Secure by default
  • Built-in support for modern protocols (HTTP/2, HTTP/3)
  • Zero-downtime config reloads
  • Written in Go (single binary, easy to deploy)

Cons:

  • Smaller community than Nginx
  • Fewer third-party modules
  • Less battle-tested in extremely high-traffic scenarios (though still very capable)

The same reverse proxy in Caddy:

example.com {
    reverse_proxy localhost:3000
}

That's it — three short lines. Caddy automatically:

  • Redirects HTTP to HTTPS
  • Obtains SSL certificates from Let's Encrypt
  • Renews certificates before expiration
  • Configures secure SSL settings
  • Sets proper proxy headers
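If you have the caddy binary installed locally, you can sanity-check a Caddyfile before deploying it:

```
caddy validate --config ./Caddyfile    # parse the config and report errors
caddy fmt --overwrite ./Caddyfile      # normalize indentation and spacing
```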

Why Docker Matters

Docker solves the "it works on my machine" problem by packaging your application with all its dependencies into a container.

Key Benefits

1. Consistency Across Environments

Your app runs identically on your laptop, staging server, and production server because the container includes everything it needs.

2. Isolation

Each container is isolated from others. Your Node.js app can use Node 18 while another app uses Node 20, without conflicts.

3. Easy Deployment

Deploy by pulling an image and running a container. No manual setup of dependencies, no configuration drift.

4. Resource Efficiency

Containers share the host OS kernel, making them much lighter than virtual machines. You can run many containers on one server.

5. Version Control for Infrastructure

Your Dockerfile and docker-compose.yml are version-controlled alongside your code, documenting exactly how your app should be deployed.

Docker Networking

Containers on the same Docker network can communicate using container names as hostnames. This is crucial for reverse proxying.

Caddy Container ←→ Docker Network ←→ App Container
(exposed: 80,443)                    (internal: 3000)

Your app container doesn't need to expose ports to the host. Only Caddy exposes ports 80 and 443 to the internet.
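The same idea with plain docker commands (my-portfolio-image is a placeholder for your own image):

```
docker network create web

# The app joins the network but publishes no ports to the host
docker run -d --name portfolio --network web my-portfolio-image

# Only Caddy publishes 80/443; inside the network it reaches the app at portfolio:80
docker run -d --name caddy --network web -p 80:80 -p 443:443 \
  -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" caddy:2-alpine
```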

A Typical Setup: Docker + Caddy + Portfolio

Let's walk through a complete, production-ready setup for hosting a portfolio website with automatic HTTPS.

Project Structure

my-portfolio/
├── docker-compose.yml
├── Caddyfile
├── portfolio/
│   ├── Dockerfile
│   ├── package.json
│   └── ... (your portfolio files)

Portfolio Dockerfile

# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM nginx:alpine
COPY --from=builder /app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

This creates a two-stage build:

  1. Builds your React/Vite app
  2. Copies the built files into a lightweight Nginx container
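One small addition worth making: a .dockerignore next to the Dockerfile keeps local artifacts out of the build context, which speeds up builds and prevents stale files from leaking into the image:

```
node_modules
dist
.git
```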

Caddyfile

yourportfolio.com {
    reverse_proxy portfolio:80
    tls your-email@example.com
}

That's the entire Caddyfile! Let's break it down:

  • yourportfolio.com - Your domain name
  • reverse_proxy portfolio:80 - Forward requests to the container named "portfolio" on port 80
  • tls your-email@example.com - Use this email for Let's Encrypt notifications (certificate expiry alerts)

Caddy automatically:

  • Obtains a certificate for yourportfolio.com
  • Redirects HTTP to HTTPS
  • Renews the certificate automatically, well before its 90-day expiry
  • Configures secure TLS settings

docker-compose.yml

version: '3.8'

services:
  caddy:
    image: caddy:2-alpine
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    networks:
      - web
    depends_on:
      - portfolio

  portfolio:
    build: ./portfolio
    container_name: portfolio
    restart: unless-stopped
    networks:
      - web

networks:
  web:
    driver: bridge

volumes:
  caddy_data:
  caddy_config:

Key points:

  1. Caddy service:

    • Uses the official Caddy image
    • Exposes ports 80 and 443 to the host (and internet)
    • Mounts your Caddyfile
    • Uses named volumes for certificate storage (persists across restarts)
  2. Portfolio service:

    • Builds from your Dockerfile
    • Does NOT expose any ports to the host
    • Only accessible via the Docker network
  3. Network:

    • Both services are on the "web" network
    • Caddy can reach the portfolio container using portfolio:80
    • The portfolio is isolated from direct internet access
  4. Volumes:

    • caddy_data: Stores SSL certificates
    • caddy_config: Stores Caddy configuration state
    • These must persist across container restarts or you'll hit Let's Encrypt rate limits

Deployment Steps

  1. Point your domain to your server:

    A Record: yourportfolio.com → Your.Server.IP.Address
    
  2. On your server, clone your repo and navigate to it:

    git clone https://github.com/yourusername/portfolio.git
    cd portfolio
    
  3. Start everything:

    docker-compose up -d
    
  4. Check logs to verify it's working:

    docker-compose logs -f caddy
    

You should see Caddy obtaining a certificate. Once it's issued (usually within a minute), your site is live with HTTPS.

Managing Your Deployment

View logs:

docker-compose logs -f [service-name]

Restart services:

docker-compose restart

Update your site:

git pull
docker-compose up -d --build

Stop everything:

docker-compose down

Stop and remove volumes (careful!):

docker-compose down -v

Multiple Sites on One Server

The beauty of this setup is you can easily host multiple sites. Just add more services:

Caddyfile:

portfolio.example.com {
    reverse_proxy portfolio:80
    tls your-email@example.com
}

blog.example.com {
    reverse_proxy blog:3000
    tls your-email@example.com
}

api.example.com {
    reverse_proxy api:8000
    tls your-email@example.com
}

docker-compose.yml:

services:
  caddy:
    image: caddy:2-alpine
    # ... same config as before

  portfolio:
    build: ./portfolio
    # ... portfolio config

  blog:
    build: ./blog
    networks:
      - web

  api:
    build: ./api
    networks:
      - web

networks:
  web:

Each site gets its own subdomain, SSL certificate, and container, all managed automatically by Caddy.

Why This Setup is Excellent

  1. Simple Configuration: The Caddyfile is incredibly readable and maintainable
  2. Automatic HTTPS: Zero manual certificate management
  3. Isolation: Each application is isolated in its own container
  4. Security: Applications aren't directly exposed to the internet
  5. Easy Updates: git pull && docker-compose up -d --build
  6. Reproducible: Works identically on any server
  7. Scalable: Easy to add more services
  8. Cost-Effective: Run multiple sites on one $5/month VPS

Common Pitfall: DNS and Certificates

Important: Caddy can only obtain certificates if:

  1. Your domain's DNS A record points to your server's IP
  2. Ports 80 and 443 are accessible from the internet
  3. No firewall is blocking these ports
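You can confirm the DNS side from any machine with dig (or nslookup):

```
dig +short yourportfolio.com A    # should print your server's IP address
```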

If certificate generation fails, check:

# Check if ports are listening (ss is the modern replacement for netstat)
sudo ss -tlnp | grep -E ':(80|443)'

# Check firewall
sudo ufw status

# Allow ports if needed
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
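If you're debugging repeated certificate failures, you can point Caddy at Let's Encrypt's staging CA via a global option, so failed attempts don't count against production rate limits (remove it once everything works — staging certificates aren't trusted by browsers):

```
{
    acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}

yourportfolio.com {
    reverse_proxy portfolio:80
}
```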

Final Recommendation

For most developers and small-to-medium projects, use Caddy. The automatic HTTPS alone saves hours of configuration and maintenance. Nginx is powerful but overkill unless you need its specific advanced features.

The Docker + Caddy combination gives you a production-ready, secure, maintainable deployment setup that scales from personal projects to serious applications. It's the setup I recommend for anyone starting out or looking to simplify their infrastructure.