WordPress is flexible enough to power blogs, stores, and high-traffic content sites—but that flexibility puts the burden of architecture on you when traffic grows. DigitalOcean gives you the primitives—Droplets, Load Balancers, Managed Databases, Spaces + CDN—to assemble a resilient stack. In this guide you’ll stand up a tuned single node, then evolve it to a highly available, horizontally scalable cluster with shared object storage, caching, and health-checked load balancing. You’ll need a DigitalOcean account and API token to follow along.
You will deploy Nginx + PHP-FPM + Redis object cache on Ubuntu 22.04, fronted by a DigitalOcean Load Balancer, with a Managed MySQL cluster, media offloaded to Spaces + CDN, and proactive monitoring. This design removes single points of failure, separates concerns (web vs. DB vs. storage), and lets you scale each tier independently. Load testing and alerts close the loop.
1) Provision the first web node (tuned single Droplet)
Start with one performant node and squeeze it hard before you add more. Pick a region near your audience to cut RTT, e.g., FRA1 for central Europe.
Create the Droplet and firewall (CLI)
# Prereqs: install & auth
# macOS: brew install doctl
doctl auth init
# Create a tag for web nodes
doctl tags create wp-web
# Create Droplet (8GB RAM, 4 vCPU is a sensible floor for busy sites)
doctl compute droplet create wp-web-1 \
--region fra1 --image ubuntu-22-04-x64 \
--size s-4vcpu-8gb --tag-names wp-web \
--ssh-keys "$YOUR_KEY_FINGERPRINT" \
--user-data-file cloudinit-web.yaml \
--wait
# Restrictive firewall: allow 80/443 in, 22 from your IP only.
# doctl rule syntax uses address:/tag:/load_balancer_uid: keys. Start with
# 80/443 open, then tighten them to load_balancer_uid:$LB_ID once the LB exists.
doctl compute firewall create \
--name wp-web-fw \
--inbound-rules "protocol:tcp,ports:80,address:0.0.0.0/0" \
--inbound-rules "protocol:tcp,ports:443,address:0.0.0.0/0" \
--inbound-rules "protocol:tcp,ports:22,address:$YOUR_IP/32" \
--outbound-rules "protocol:tcp,ports:all,address:0.0.0.0/0" \
--outbound-rules "protocol:udp,ports:all,address:0.0.0.0/0" \
--tag-names wp-web
The cloudinit-web.yaml file can bootstrap packages and config consistently across future nodes (shown shortly).
OS, Nginx, PHP-FPM, and OPcache tuning
# As root on the Droplet. Ubuntu 22.04 ships PHP 8.1 in its default repos,
# so add the widely used ondrej/php PPA to get the 8.2 packages used below.
apt update
apt install -y software-properties-common
add-apt-repository -y ppa:ondrej/php
apt update
apt install -y nginx php8.2-fpm php8.2-cli php8.2-mysql php8.2-xml php8.2-curl \
php8.2-gd php8.2-mbstring php8.2-zip php8.2-intl redis-server
# PHP OPcache
cat >/etc/php/8.2/mods-available/opcache.ini <<'EOF'
opcache.enable=1
opcache.enable_cli=1
opcache.memory_consumption=256
opcache.max_accelerated_files=100000
opcache.validate_timestamps=0  ; code changes require a php-fpm reload on each deploy
opcache.save_comments=1
EOF
phpenmod opcache
Size PHP-FPM for concurrency without swapping. A quick rule of thumb: pm.max_children ≈ (RAM_for_PHP / avg_process_MB). With 8 GB RAM, reserving ~3 GB for OS/Nginx/Redis, ~2 GB for page cache, and ~3 GB for PHP, and each PHP process at ≈ 60–80 MB, set pm.max_children to ~40–50.
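The rule of thumb is quick to compute directly. The numbers below are the assumptions from the text (3 GB headroom for PHP, ~70 MB per worker as the midpoint of 60–80 MB), not measurements from your site:

```shell
# Estimate pm.max_children from the RAM you can afford to give PHP-FPM.
ram_for_php_mb=3072   # ~3 GB reserved for PHP workers (assumption from the text)
avg_proc_mb=70        # midpoint of the 60-80 MB per-process estimate
max_children=$(( ram_for_php_mb / avg_proc_mb ))
echo "pm.max_children ≈ ${max_children}"
```

Before trusting the estimate, measure the real per-worker RSS on a warm server (for example by averaging `ps -C php-fpm8.2 -o rss=` output) and plug that in instead.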
# Edit /etc/php/8.2/fpm/pool.d/www.conf in place (sed keeps this dependency-free;
# adjust the values to your own measurements)
sed -i 's/^pm = .*/pm = dynamic/' /etc/php/8.2/fpm/pool.d/www.conf
sed -i 's/^pm\.max_children = .*/pm.max_children = 50/' /etc/php/8.2/fpm/pool.d/www.conf
sed -i 's/^pm\.start_servers = .*/pm.start_servers = 6/' /etc/php/8.2/fpm/pool.d/www.conf
sed -i 's/^pm\.min_spare_servers = .*/pm.min_spare_servers = 6/' /etc/php/8.2/fpm/pool.d/www.conf
sed -i 's/^pm\.max_spare_servers = .*/pm.max_spare_servers = 12/' /etc/php/8.2/fpm/pool.d/www.conf
systemctl restart php8.2-fpm
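PHP-FPM enforces an ordering between the dynamic-pm values (pm.min_spare_servers ≤ pm.start_servers ≤ pm.max_spare_servers, all ≤ pm.max_children) and refuses to start a pool that violates it. A quick sketch of that pre-restart sanity check, using the values set above:

```shell
# Values from the pool config above; verify the dynamic-pm invariants hold.
max_children=50; start_servers=6; min_spare=6; max_spare=12
if [ "$min_spare" -le "$start_servers" ] \
   && [ "$start_servers" -le "$max_spare" ] \
   && [ "$max_spare" -le "$max_children" ]; then
  echo "pm settings consistent"
else
  echo "pm settings inconsistent" >&2
fi
```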
Fast full-page cache for guests (Nginx FastCGI cache)
Cache HTML for anonymous users at Nginx, bypass for logged-in/admin or when cookies indicate personalization.
# /etc/nginx/conf.d/fastcgi-cache.conf
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WP:200m inactive=60m max_size=10g;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
map $http_cookie $skip_cache {
    default 0;
    ~*wordpress_logged_in|comment_author|woocommerce_items_in_cart 1;
}
server {
    listen 80;
    server_name _;
    root /var/www/html;
    index index.php;
    location ~* \.(png|jpg|jpeg|gif|ico|css|js|svg|woff2?)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
        try_files $uri =404;
    }
    location / {
        try_files $uri /index.php?$args;
    }
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
        fastcgi_cache WP;
        fastcgi_cache_valid 200 301 302 30m;
        add_header X-Cache $upstream_cache_status;
    }
}
Reload:
nginx -t && systemctl reload nginx
# Enable and secure Redis locally (we will move to a shared instance when multi-node)
systemctl enable --now redis-server
# WordPress: install "Redis Object Cache" and add to wp-config.php:
/* wp-config.php */
define('WP_CACHE_KEY_SALT', 'yoursite:');
define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_MAXTTL', 3600);
2) Move state out: Managed MySQL and Spaces + CDN
Horizontal scale demands stateless web nodes. Shift database and media out of the Droplet.
Managed MySQL (high availability, point-in-time recovery)
Create a Managed MySQL cluster, add the web tag as a trusted source, and use the provided connection string. Managed Databases handle automatic failover, backups, and vertical scaling; storage-only scaling is available in select regions.
# Create MySQL cluster (adjust size/region as needed)
doctl databases create wp-mysql --engine mysql \
--region fra1 --num-nodes 2 --size db-s-2vcpu-4gb
# Allow Droplets with tag "wp-web" to connect
DB_ID=$(doctl databases list --format ID,Name --no-header | awk '/wp-mysql/ {print $1}')
doctl databases firewalls append $DB_ID --rule tag:wp-web
Update wp-config.php with the Managed DB endpoint, user, and password:
/* wp-config.php */
define('DB_NAME', 'wordpress');
define('DB_USER', 'doadmin');
define('DB_PASSWORD', 'YOUR_DB_PASSWORD');
define('DB_HOST', 'db-postfix-do-user-12345-0.b.db.ondigitalocean.com:25060');
define('MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL); // Managed DBs require SSL
(From the DO control panel/CLI you’ll copy the exact host/port and CA settings. Managed DBs expose read-only endpoints you can direct analytics/reporting to without impacting writes.)
Offload media to Spaces and turn on CDN
Create a Space, enable CDN, and configure a plugin (e.g., WP Offload Media Lite) to rewrite media URLs to the CDN hostname or your CNAME. This removes I/O and egress from your web nodes and lowers TTFB for assets.
# Create a Space and enable CDN in the DO UI or via API.
# In WordPress: install WP Offload Media Lite, set provider = DigitalOcean Spaces,
# enter Space name, region, Access Key/Secret, and choose "DigitalOcean Spaces CDN" as Delivery Provider.
# (Optionally set a custom CNAME like cdn.example.com and point it to the Space's CDN endpoint.)
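Conceptually, the plugin's delivery rewrite is just a host swap on attachment URLs at render time; a toy illustration (www.example.com and cdn.example.com are placeholders for your site and CDN hostnames):

```shell
# Rewrite a media URL from the site host to the CDN host, as the offload plugin
# effectively does when it renders a page.
orig="https://www.example.com/wp-content/uploads/2025/01/hero.jpg"
cdn_host="cdn.example.com"
rewritten=$(printf '%s\n' "$orig" | sed "s#//www\.example\.com#//${cdn_host}#")
echo "$rewritten"
```

The browser then fetches the asset from the CDN edge instead of your web nodes.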
3) Put a DigitalOcean Load Balancer in front
With DB and media externalized, you can add more web nodes behind a managed Load Balancer with health checks and (optional) sticky sessions for plugins that require them. DigitalOcean Load Balancers support health-checked backends, TLS termination, HTTP/2, and scaling node count for throughput.
Create and wire the Load Balancer (CLI + health checks)
# Create LB that forwards 80/443 to Droplets tagged "wp-web".
doctl compute load-balancer create \
--name wp-lb --region fra1 \
--algorithm round_robin \
--forwarding-rules entry_protocol:http,entry_port:80,target_protocol:http,target_port:80 \
--forwarding-rules entry_protocol:https,entry_port:443,target_protocol:http,target_port:80,certificate_id:$CERT_ID \
--health-check protocol:http,port:80,path:/lb-health,check_interval_seconds:10,response_timeout_seconds:5,healthy_threshold:3,unhealthy_threshold:3 \
--tag-name wp-web
Add a lightweight health endpoint in Nginx:
# inside the same server block
location = /lb-health {
    access_log off;
    default_type text/plain;
    return 200 "OK\n";
}
Sticky sessions. Only enable when necessary (e.g., a checkout plugin that doesn’t externalize state). If required, set a cookie name and TTL; otherwise keep stateless for better balancing.
Scaling the LB. For surges, increase LB node count (regional LBs can be scaled; global LBs cannot).
4) Go horizontal: bake an immutable web image and add nodes
Use the same cloud-init to stamp new web nodes. Keep code in a repo and deploy via CI or rsync to /var/www/html (or better, ship a build artifact). When a node joins, it should be ready without manual steps.
Example cloudinit-web.yaml (idempotent bootstrap)
#cloud-config
package_update: true
# Ubuntu 22.04 ships PHP 8.1; pull the 8.2 packages from the ondrej/php PPA,
# registered here so it is in place before package installation runs.
apt:
  sources:
    php:
      source: "ppa:ondrej/php"
packages:
  - nginx
  - php8.2-fpm
  - php8.2-cli
  - php8.2-mysql
  - php8.2-xml
  - php8.2-curl
  - php8.2-gd
  - php8.2-mbstring
  - php8.2-zip
  - php8.2-intl
  - redis-server
write_files:
  - path: /etc/nginx/conf.d/fastcgi-cache.conf
    content: |
      fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WP:200m inactive=60m max_size=10g;
runcmd:
  - systemctl enable --now nginx php8.2-fpm redis-server
  - sed -i 's/^pm = .*/pm = dynamic/' /etc/php/8.2/fpm/pool.d/www.conf
  - systemctl restart php8.2-fpm
  - nginx -t && systemctl reload nginx
Add another node:
doctl compute droplet create wp-web-2 \
--region fra1 --image ubuntu-22-04-x64 \
--size s-4vcpu-8gb --tag-names wp-web \
--ssh-keys "$YOUR_KEY_FINGERPRINT" \
--user-data-file cloudinit-web.yaml \
--wait
The LB will discover it via the tag and include it once health checks pass.
5) Share the object cache: move Redis off-host
For multi-node parity, move Redis off-host so all nodes share object cache and (if needed) PHP session storage.
# Create a small Droplet or a Managed Redis (if available in your region).
# Point WordPress to the shared Redis host:
/* wp-config.php (shared cache) */
define('WP_REDIS_HOST', '10.10.10.5'); // private VPC address of Redis
define('WP_CACHE_KEY_SALT', 'yoursite:');
Keep FastCGI full-page caching on each node (it's disk-local): cached pages are trivial to regenerate, and serving them straight from Nginx keeps PHP out of the hot path for anonymous requests. Logged-in traffic bypasses it.
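If you ever need to inspect or selectively purge that per-node cache, it helps to know how Nginx lays it out on disk: the file name is the MD5 of fastcgi_cache_key, and levels=1:2 means the last hash character and the two before it become directory levels. A sketch, assuming the common $scheme$request_method$host$request_uri key format:

```shell
# Compute where Nginx would store the cached copy of a page.
key="httpGETwww.example.com/"                      # $scheme$request_method$host$request_uri
hash=$(printf '%s' "$key" | md5sum | cut -d' ' -f1)
last=$(printf '%s' "$hash" | cut -c32)             # levels=1 -> last hex char
mid=$(printf '%s' "$hash" | cut -c30-31)           # levels=2 -> the two chars before it
cache_file="/var/cache/nginx/${last}/${mid}/${hash}"
echo "$cache_file"
```

Deleting that file (or wiping /var/cache/nginx entirely) invalidates the entry; the next request repopulates it.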
6) Terraform the critical pieces (LB + health checks)
Codify your infrastructure so you can reproduce it and review changes.
# versions.tf
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.40"
    }
  }
}
# lb.tf
resource "digitalocean_loadbalancer" "wp" {
  name      = "wp-lb"
  region    = "fra1"
  algorithm = "round_robin"

  forwarding_rule {
    entry_protocol  = "http"
    entry_port      = 80
    target_protocol = "http"
    target_port     = 80
  }

  healthcheck {
    protocol                 = "http"
    port                     = 80
    path                     = "/lb-health"
    check_interval_seconds   = 10
    response_timeout_seconds = 5
    healthy_threshold        = 3
    unhealthy_threshold      = 3
  }

  droplet_tag = "wp-web"
}
(Health check and forwarding-rule blocks map directly to DO LB features.)
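While you're codifying the LB, the tag-scoped firewall from step 1 is worth capturing too. A sketch using the digitalocean_firewall resource; the rule values mirror the earlier doctl command and are assumptions to adapt (in particular the admin IP):

```hcl
# firewall.tf
resource "digitalocean_firewall" "wp_web" {
  name = "wp-web-fw"
  tags = ["wp-web"]

  inbound_rule {
    protocol                  = "tcp"
    port_range                = "80"
    source_load_balancer_uids = [digitalocean_loadbalancer.wp.id]
  }

  inbound_rule {
    protocol                  = "tcp"
    port_range                = "443"
    source_load_balancer_uids = [digitalocean_loadbalancer.wp.id]
  }

  inbound_rule {
    protocol         = "tcp"
    port_range       = "22"
    source_addresses = ["203.0.113.10/32"] # your admin IP
  }

  outbound_rule {
    protocol              = "tcp"
    port_range            = "all"
    destination_addresses = ["0.0.0.0/0", "::/0"]
  }
}
```

Because the 80/443 rules reference the LB resource, Terraform creates the LB first and the web nodes only ever accept traffic that arrived through it.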
7) CDN and browser caching that actually stick
After WP Offload Media rewrites asset URLs to your CDN hostname, set a long Cache-Control with immutable on fingerprinted static assets and let the CDN do the heavy lifting. The Nginx snippet earlier adds immutable and a 30-day TTL; with a CDN in front, the CDN's cache TTLs should be equal or longer. The DO Spaces CDN integration streamlines this switch in the plugin UI.
8) Database performance guardrails
Even with Managed MySQL, schema and query hygiene matter. Create reasonable defaults for WordPress:
# /etc/mysql/mysql.conf.d/tuning.cnf -- InnoDB tuning, self-managed MySQL only.
# (innodb_buffer_pool_instances is not a dynamic variable, so set these in the
# config file and restart rather than via SET GLOBAL.)
[mysqld]
innodb_buffer_pool_size = 5G            # ~60-70% of DB RAM if self-hosted
innodb_buffer_pool_instances = 4
innodb_flush_log_at_trx_commit = 1      # 1 = durable default; 2 trades crash durability for write throughput
On Managed DBs, scale CPU/RAM or storage independently in supported regions and use read-only endpoints for heavy reads (reports, exports). Plan maintenance windows and let the platform orchestrate failovers.
9) Monitoring, alerts, and capacity signals
Wire metrics now so you can scale before users feel pain. DigitalOcean Load Balancers automatically remove unhealthy Droplets and re-add when healthy; you still need to alert on saturation (P95 latency, CPU steal, memory pressure, 5xx rate).
Suggested alert thresholds.
- Web CPU sustained > 70% for 10 min → consider another node.
- PHP-FPM "max_children reached" events → increase pool or add nodes.
- LB 5xx > 1% or health check failures > 0 for 1 min → investigate.
- DB CPU > 60% and P95 query time rising → scale DB or add a read replica (managed).
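The PHP-FPM signal above is easy to script, because FPM logs a WARNING each time the pool saturates. A minimal counter; the production log path would typically be /var/log/php8.2-fpm.log (an assumption), and sample lines stand in for the real log here so the pipeline is visible:

```shell
# Count pool-saturation warnings; point the grep at the real FPM log in production.
sample_log=$(printf '%s\n' \
  '[01-Jan-2025 00:00:01] WARNING: [pool www] server reached pm.max_children setting (50)' \
  '[01-Jan-2025 00:00:09] NOTICE: ready to handle connections')
hits=$(printf '%s\n' "$sample_log" | grep -c 'reached pm.max_children')
echo "saturation events: $hits"
```

Run it from cron and page when the count is nonzero over a window.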
10) Load testing before and after each change
Use k6 to simulate real traffic mixes—cacheable guests vs. logged-in users.
// save as smoke.js
import http from 'k6/http';
import { check, sleep } from 'k6';
export let options = {
stages: [
{ duration: '2m', target: 100 }, // ramp
{ duration: '5m', target: 100 }, // hold
{ duration: '2m', target: 0 }, // cool down
],
};
export default function () {
// 80% anonymous (cache hits), 20% logged-in (bypass)
const isAnon = Math.random() < 0.8;
const headers = isAnon ? {} : { 'Cookie': 'wordpress_logged_in=1' };
let res = http.get('https://www.example.com/', { headers });
check(res, { 'status is 200': (r) => r.status === 200 });
sleep(1);
}
Run:
k6 run smoke.js
Track X-Cache headers from Nginx to confirm hit rates rising as the FastCGI cache warms.
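To quantify the hit rate rather than eyeball headers, log $upstream_cache_status (adding it to your Nginx log_format is an assumption here) and aggregate the values; the sample stream below stands in for that log field:

```shell
# Compute the cache hit rate from a stream of $upstream_cache_status values.
rate=$(printf 'HIT\nHIT\nMISS\nHIT\nBYPASS\n' \
  | awk '{n[$1]++; t++} END {printf "%.0f", 100 * n["HIT"] / t}')
echo "HIT rate: ${rate}%"
```

In production, replace the printf with something like an awk extraction of the status field from the access log.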
11) Zero-downtime deploys and graceful degradation
- Blue/green: keep wp-web-A and wp-web-B tags. Deploy to B, test via LB target pool preview, then flip tags so the LB drains A and ramps B.
- Cache-only mode: if PHP-FPM misbehaves, the Nginx page cache will keep serving most anonymous traffic while you recycle PHP.
- Sticky sessions: if a plugin insists on session affinity, enable it at the LB with a short TTL and plan to externalize session storage later. DigitalOcean LBs support sticky cookies with configurable TTL.
12) Autoscaling options and expectations
DigitalOcean supports scaling LB node count directly; for Droplets you can build autoscaling with the API + metrics (via a short controller, Terraform Cloud run tasks, or your CI) to add/remove tagged web nodes on thresholds. DO’s own guidance emphasizes manual/programmable scaling for Droplets; either way, define min/max pools and cooldowns to avoid flapping. (Create/drop nodes, the LB will include/exclude by tag after health checks.)
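The controller described above boils down to a threshold check with a cooldown. A toy decision loop with the metrics query stubbed out; in reality get_avg_cpu would query the DO monitoring API or your metrics store, and the thresholds are illustrative:

```shell
# Scale-out decision sketch; everything here is an assumption to adapt.
get_avg_cpu() { echo "${FAKE_CPU:-55}"; }   # stub; replace with a real metrics query

THRESHOLD=70   # % average CPU across the wp-web pool
MAX_NODES=6    # hard ceiling so a bug cannot scale you into a huge bill

cpu=$(get_avg_cpu)
if [ "$cpu" -gt "$THRESHOLD" ]; then
  action="scale-out"   # here you would run: doctl compute droplet create ... --tag-names wp-web
else
  action="hold"
fi
echo "avg CPU ${cpu}% -> ${action}"
```

Pair this with a timestamp file recording the last scaling action, and skip runs inside the cooldown window to avoid flapping.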
13) Hardening and backups
# Snapshots (application-consistent: stop PHP briefly or use LB drain).
# doctl snapshots one Droplet per call, so loop over the tagged nodes.
for id in $(doctl compute droplet list --tag-name wp-web --no-header --format ID); do
doctl compute droplet-action snapshot "$id" --snapshot-name "wp-web-$(date +%F)"
done
# Database backups: enable automated backups + PITR in Managed DB settings.
# Configure UFW as a deny-by-default fallback; rely on the DO firewall for edge filtering.
# Note: UFW runs on the Droplet itself, so 80/443 must be allowed here too or the
# LB's health checks and traffic will be blocked.
ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
14) What “good” looks like in production
- p95 TTFB for cached pages < 150 ms at the edge; for uncached < 600 ms.
- Cache hit rate (anonymous) > 85%.
- Error budget: 99.9% monthly uptime; LBs auto-remove failing nodes; DB in managed HA.
- Change safety: Terraform plans PR-reviewed; blue/green swaps; load tests before traffic.
DigitalOcean’s own engineering has demonstrated high-scale LB performance (1M+ concurrent connections) which gives ample headroom if you architect the app tier correctly.
Appendix A: Minimal wp-config.php constants for this stack
/* Database (Managed MySQL over SSL) */
define('DB_NAME', 'wordpress');
define('DB_USER', 'doadmin');
define('DB_PASSWORD', 'REDACTED');
define('DB_HOST', 'db-xxxx-do-user-yyy-0.b.db.ondigitalocean.com:25060');
define('MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL);
/* Redis object cache (shared or local) */
define('WP_CACHE_KEY_SALT', 'yoursite:');
define('WP_REDIS_HOST', '10.10.10.5');
define('WP_REDIS_MAXTTL', 3600);
/* Force HTTPS behind LB */
if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
$_SERVER['HTTPS'] = 'on';
}
/* Real cron off; use OS cron */
define('DISABLE_WP_CRON', true);
/* Memory & edits */
define('WP_MEMORY_LIMIT', '256M');
define('DISALLOW_FILE_EDIT', true);
Add a system cron on one node (or a separate worker) to call wp-cron.php:
# crontab -e
*/5 * * * * flock -n /tmp/wp-cron.lock curl -fsS 'https://www.example.com/wp-cron.php?doing_wp_cron=1' >/dev/null 2>&1
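If WP-CLI is installed on the node (an assumption; the install path below is illustrative), you can skip the HTTP round-trip and run due events directly, which also works when the site sits behind auth:

```shell
# crontab on ONE node only; runs any cron events that are due.
*/5 * * * * cd /var/www/html && wp cron event run --due-now --quiet
```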
Appendix B: Nginx TLS, HTTP/2, Brotli
# /etc/nginx/nginx.conf (snippets)
http {
gzip on;
gzip_comp_level 5;
gzip_types text/plain text/css application/javascript application/json image/svg+xml;
# If you add Brotli module:
# brotli on; brotli_comp_level 5; brotli_types text/plain text/css application/javascript application/json image/svg+xml;
server_tokens off;
}
Terminate TLS on the LB for simplicity; optionally re-encrypt to the backend.
Closing the loop
You now have a pragmatic path: tune a single node, externalize state (DB + media), introduce a health-checked Load Balancer, and scale web nodes horizontally with shared object cache. Back that with monitoring, automated backups, and reproducible IaC. When traffic surges, scale the LB’s capacity and add more tagged web nodes; when code changes, deploy blue/green and measure with k6 before promoting. As your footprint grows, Managed MySQL’s HA and scaling keep your write path reliable while the CDN carries the static weight.