Varnish Cache is a high-performance HTTP accelerator (reverse proxy) that caches backend responses in memory and serves them without invoking your application repeatedly. In production, it can reduce response times and backend load. The configuration is done via VCL (Varnish Configuration Language), which gets compiled to C code under the hood.
However, Varnish does not natively support SSL/TLS termination. That means if you want HTTPS in front of Varnish, you’ll need an SSL termination layer (e.g. Nginx or Hitch) that forwards to Varnish over plain HTTP.
Here is the typical layering:
Client → (HTTPS) → Nginx (or other SSL reverse proxy) → Varnish → Backend web server (Apache/Nginx, your app)
Alternatively (for HTTP-only sites):
Client → Varnish → Backend.
Before proceeding, ensure:
- You have a DigitalOcean droplet with Ubuntu (20.04 or 22.04 recommended).
- You have SSH access (a non-root user with sudo).
- Your domain’s DNS is pointed to the droplet (for HTTPS).
- Your backend web server (Apache, Nginx, etc.) is installed and serving your site (for test purposes).
Step 1: Install Varnish (latest stable version)
Because the default Ubuntu repos may not carry the bleeding edge, it’s better to use the official Varnish package repository. For example, for Varnish 7:
sudo apt update
sudo apt install -y debian-archive-keyring curl gnupg apt-transport-https
curl -fsSL https://packagecloud.io/varnishcache/varnish70/gpgkey | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/varnish.gpg
. /etc/os-release
sudo tee /etc/apt/sources.list.d/varnishcache_varnish70.list > /dev/null <<EOF
deb https://packagecloud.io/varnishcache/varnish70/$ID/ $VERSION_CODENAME main
deb-src https://packagecloud.io/varnishcache/varnish70/$ID/ $VERSION_CODENAME main
EOF
sudo apt update
sudo apt install -y varnish
If your Ubuntu version is newer and the official repository hasn’t published packages for it yet, a common workaround is to use the focal (Ubuntu 20.04) repository line instead.
After installation, verify with:
varnishd -V
This should show Varnish version, platform, etc.
Step 2: Adjust Varnish runtime parameters (systemd override)
By default, the systemd service for Varnish may bind to a nonstandard port (e.g. 6081) and allocate only a small cache. You’ll want to override these defaults.
Check existing service config:
sudo systemctl cat varnish.service
Then create an override file:
sudo systemctl edit varnish.service
In the editor, insert something like:
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd \
-a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-s malloc,256m
Here:
- -a :80 → listen on port 80
- -T localhost:6082 → management interface
- -s malloc,256m → allocate up to 256 MB of RAM for the cache
Save and exit. Then:
sudo systemctl daemon-reload
sudo systemctl restart varnish
sudo systemctl enable varnish
The override ensures your custom parameters survive upgrades. (This pattern is recommended by Varnish documentation.)
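To confirm the override took effect, you can check which ports varnishd is bound to and which ExecStart line systemd is actually using (ss is part of iproute2, installed by default on Ubuntu):

```shell
# Confirm varnishd listens on :80 (client) and :6082 (management)
sudo ss -tlnp | grep varnishd

# Show the ExecStart line the service is actually running with
systemctl show varnish --property=ExecStart
```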
Step 3: Reconfigure your backend server
Since Varnish will now occupy port 80 (HTTP), you must change your backend server (Apache or Nginx) to listen on a different port, e.g. 8080.
If your backend is Nginx:
Edit /etc/nginx/sites-available/your-site.conf (or /etc/nginx/nginx.conf) and change:
listen 80;
to
listen 8080;
Then restart Nginx:
sudo systemctl restart nginx
If your backend is Apache:
Find lines like:
Listen 80
change to:
Listen 8080
Also update your VirtualHost definitions to *:8080. Then restart Apache:
sudo systemctl restart apache2
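For example, with Ubuntu’s default Apache layout the listening port is set in /etc/apache2/ports.conf, and each site’s VirtualHost must match it (the file paths and ServerName below assume the default packaging and this guide’s example domain):

```apache
# /etc/apache2/ports.conf
Listen 8080

# /etc/apache2/sites-available/000-default.conf
<VirtualHost *:8080>
    ServerName dropletdrift.com
    DocumentRoot /var/www/html
</VirtualHost>
```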
Step 4: Configure default.vcl (VCL definition)
Edit /etc/varnish/default.vcl to point Varnish at your backend:
vcl 4.1;

# ACL for hosts allowed to issue PURGE requests
acl local {
    "localhost";
    "127.0.0.1";
}

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Normalize Host header, strip cookies, etc., basic logic
    if (req.method == "PURGE") {
        if (client.ip !~ local) {
            return (synth(403, "Forbidden"));
        }
        return (purge);
    }
}
sub vcl_backend_response {
# Cache only certain content types, ignore others
if (beresp.http.Content-Type ~ "text|application") {
set beresp.ttl = 1h;
} else {
set beresp.ttl = 5m;
}
}
sub vcl_deliver {
if (obj.hits > 0) {
set resp.http.X-Cache = "HIT";
} else {
set resp.http.X-Cache = "MISS";
}
}
You can expand these subroutines (e.g. vcl_recv, vcl_backend_response) based on your application logic, cookies, custom headers, etc.
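As one illustration, a common vcl_recv extension strips cookies from requests for static assets so Varnish can cache them (a sketch; the extension list is an assumption you should adapt to your application):

```vcl
sub vcl_recv {
    # Cookies prevent caching by default; drop them for static files
    if (req.url ~ "\.(css|js|png|jpg|jpeg|gif|svg|ico|woff2?)(\?.*)?$") {
        unset req.http.Cookie;
    }
}
```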
Once that file is modified, load it:
sudo systemctl reload varnish
You can also manage VCL via varnishadm, which allows live reloading without downtime.
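A typical varnishadm hot-reload sequence looks like this (the label reload01 is arbitrary; any unused name works):

```shell
sudo varnishadm vcl.load reload01 /etc/varnish/default.vcl   # compile the new VCL
sudo varnishadm vcl.use reload01                             # switch traffic to it
sudo varnishadm vcl.list                                     # confirm which VCL is active
```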
Step 5: Test caching behavior
Use curl -I to verify headers:
curl -I http://your-server-ip/
Look for headers like X-Cache: HIT or MISS, Via, or X-Varnish. These tell you whether Varnish is handling the request.
Another useful tool is varnishstat, which gives runtime statistics on cache hits, misses, and more:
sudo varnishstat
Adjust your VCL logic based on what you see.
Step 6 (Optional but recommended): SSL termination
Because Varnish lacks native SSL support, you need to insert a TLS-terminating layer. A common pattern is:
Client → Nginx (HTTPS) → Varnish (HTTP) → Backend
- Install Nginx (if not already).
- Configure Nginx to listen on ports 443 (HTTPS).
- Use Let’s Encrypt (Certbot) to obtain a certificate.
- In your Nginx server block, proxy pass to Varnish:
server {
listen 443 ssl;
server_name dropletdrift.com;
ssl_certificate /etc/letsencrypt/live/dropletdrift.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/dropletdrift.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
# Let the backend know the original request was HTTPS
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Alternatively, some prefer to have Nginx listen on both 80 and 443 (redirecting HTTP to HTTPS) and run Varnish on an internal port such as 6081, with Nginx proxying to it.
Once configured, restart Nginx, and test:
curl -Ik https://dropletdrift.com
Expect to see X-Varnish or Via headers in the response.
Step 7: Tuning and operational notes
At this point you have your Varnish setup up and running. Here are some additional steps you can take:
- Memory size: Adjust -s malloc,XXXm as your droplet permits. More memory means a bigger cache, but also more system resources used.
- Grace mode / stale serving: Use grace periods to serve stale content in case backends are slow.
- Cache purging/invalidation: Use ban or purge logic in VCL to selectively invalidate content when your application publishes updates.
- Health probes: In VCL, configure probe blocks to mark backends as unhealthy.
- Monitoring: Use tools like varnishstat, varnishlog, Prometheus exporters, or Grafana dashboards.
- Max object size / limits: Tweak storage backends (e.g. malloc, file) based on your use case.
- Multiple backends / load balancing: Use Varnish directors or backend arrays in VCL for high availability.
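Several of these ideas can be sketched directly in VCL. The probe thresholds and grace window below are illustrative values, not tuned recommendations:

```vcl
vcl 4.1;

# Health probe: poll the backend root every 5 seconds;
# the backend counts as healthy while 3 of the last 5 checks succeed
probe healthcheck {
    .url = "/";
    .interval = 5s;
    .timeout = 2s;
    .window = 5;
    .threshold = 3;
}

backend default {
    .host = "127.0.0.1";
    .port = "8080";
    .probe = healthcheck;
}

sub vcl_backend_response {
    # Grace mode: keep serving a stale object for up to 6 hours
    # if the backend becomes slow or unavailable
    set beresp.grace = 6h;
}
```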
Summary
You now have Varnish installed and running on a DigitalOcean droplet, listening on HTTP port 80, sitting in front of your backend (now relocated to port 8080), with a sample VCL that handles caching logic. If you added Nginx for SSL termination, you also have HTTPS traffic proxying securely through Varnish.
From here, the next natural steps are:
- Refine your VCL (custom headers, cookie handling, selective caching).
- Implement cache purging (for dynamic content).
- Add monitoring and alerting for cache health, hit ratios.
- Explore advanced features like ESI (Edge Side Includes), ESI stitching, or VMODs (Varnish modules).
- Consider scalability: multiple Varnish instances behind a load balancer or in a cluster.