Shipping small web services often starts with copying files to a remote server and running a few commands by hand. That works until a late-night deploy breaks because a step was missed. A repeatable deployment script removes that cognitive load and captures the exact sequence: build, upload, switch traffic, verify, and roll back if needed. The example below targets an Ubuntu Droplet on DigitalOcean and a simple app served by systemd behind Nginx. You will need SSH access to a Linux host and an account with your cloud provider; if you do not have one, create a DigitalOcean account and a Droplet before you begin. The same pattern applies to other hosts.
What you will build
You will create a pair of shell scripts that perform an atomic release and an instant rollback. The release process creates a timestamped directory, uploads your build, runs migrations, flips a current symlink, restarts the service, runs a health check, and cleans old releases. The rollback process moves the current symlink back to the previous known-good release. Both scripts are idempotent and safe to run from your laptop or a CI runner.
Server layout and assumptions
Create a dedicated Unix user, a base directory, and a systemd service that runs your app from the current symlink. This keeps deployments simple and reversible.
- The app user is deploy, with passwordless sudo only for systemctl on your service.
- The app lives under /var/www/myapp with subfolders releases/ and shared/.
- Runtime files that must persist across releases (environment file, storage, sockets) live under shared/.
- Nginx proxies to a local port, for example 127.0.0.1:3000, or serves static files from current/public.
On the server, prepare the directories once:
sudo adduser --disabled-password --gecos "" deploy
sudo mkdir -p /var/www/myapp/releases /var/www/myapp/shared
sudo chown -R deploy:deploy /var/www/myapp
Put environment variables in /var/www/myapp/shared/.env and ensure only deploy can read them:
sudo -u deploy bash -c 'touch /var/www/myapp/shared/.env && chmod 600 /var/www/myapp/shared/.env'
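For orientation, after a couple of deploys the tree will look something like this (the release timestamps are examples; current appears after your first deploy):
/var/www/myapp/
├── current -> /var/www/myapp/releases/release_20240312083000
├── releases/
│   ├── release_20240311120000/
│   └── release_20240312083000/
└── shared/
    ├── .env
    └── uploads/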
Create a systemd unit at /etc/systemd/system/myapp.service that runs your app from current. Substitute your start command.
[Unit]
Description=MyApp
After=network.target
[Service]
Type=simple
User=deploy
WorkingDirectory=/var/www/myapp/current
EnvironmentFile=/var/www/myapp/shared/.env
ExecStart=/usr/bin/env bash -lc 'NODE_ENV=production node server.js'
Restart=always
RestartSec=2
RuntimeDirectory=myapp
RuntimeDirectoryMode=0755
[Install]
WantedBy=multi-user.target
Reload and enable the service:
sudo systemctl daemon-reload
sudo systemctl enable myapp
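Do not start the service yet: its WorkingDirectory points at current, which will not exist until the first deploy, so a start now would fail. The deploy script below restarts the unit for you. Once a release is live, you can inspect it with:
sudo systemctl status myapp
journalctl -u myapp -f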
The deployment script
This script runs on your workstation or CI runner. It builds the app locally, rsyncs the build and release metadata to the server, links shared assets, runs any migrations, flips the current symlink, restarts the service, verifies health, and prunes older releases.
Save as deploy.sh in your project root and make it executable.
#!/usr/bin/env bash
set -Eeuo pipefail
# -------- Configuration --------
APP_NAME="myapp"
REMOTE_USER="deploy"
REMOTE_HOST="your.droplet.ip" # or a DNS name
APP_PATH="/var/www/${APP_NAME}"
RELEASES_PATH="${APP_PATH}/releases"
SHARED_PATH="${APP_PATH}/shared"
KEEP_RELEASES=5
HEALTHCHECK_URL="http://127.0.0.1:3000/health" # server-local health endpoint via Nginx or app
SERVICE_NAME="${APP_NAME}.service"
# Remote snippets below run in the deploy user's shell over ssh (bash on Ubuntu)
SSH="ssh -o StrictHostKeyChecking=accept-new ${REMOTE_USER}@${REMOTE_HOST}"
RSYNC="rsync -az --delete --exclude=.git"
# -------- Helpers --------
timestamp() { date -u +"%Y%m%d%H%M%S"; }
info() { printf "\n[info] %s\n" "$*"; }
warn() { printf "\n[warn] %s\n" "$*" >&2; }
fail() { printf "\n[fail] %s\n" "$*" >&2; exit 1; }
# -------- Build (local) --------
info "Building application locally"
# Replace with your build. Example for Node.js:
npm ci
npm run build
RELEASE_ID="$(timestamp)"
RELEASE_DIR="release_${RELEASE_ID}"
info "Assembling release directory ${RELEASE_DIR}"
rm -rf "${RELEASE_DIR}"
mkdir -p "${RELEASE_DIR}"
# Copy only what the runtime needs. Avoid shipping dev files.
cp -R package*.json dist "${RELEASE_DIR}/"
# Include start script if not in package.json
# cp server.js "${RELEASE_DIR}/"
# -------- Ship (rsync) --------
info "Syncing release to server"
${SSH} "mkdir -p ${RELEASES_PATH}/${RELEASE_DIR}"
${RSYNC} "${RELEASE_DIR}/" "${REMOTE_USER}@${REMOTE_HOST}:${RELEASES_PATH}/${RELEASE_DIR}/"
# -------- Activate (remote) --------
info "Linking shared resources and installing production deps"
${SSH} "
set -Eeuo pipefail
cd ${RELEASES_PATH}/${RELEASE_DIR}
# Link shared environment and any persistent dirs
ln -sfn ${SHARED_PATH}/.env .env
# Example: link a persistent uploads dir
mkdir -p ${SHARED_PATH}/uploads
ln -sfn ${SHARED_PATH}/uploads uploads
# Install production dependencies if needed
if [[ -f package.json ]]; then
npm ci --omit=dev
fi
"
# -------- Pre-migrate hook (optional) --------
# Add database migrations here if your app uses them.
info "Running migrations (if any)"
${SSH} "
set -Eeuo pipefail
cd ${RELEASES_PATH}/${RELEASE_DIR}
if [[ -f node_modules/.bin/knex ]]; then
./node_modules/.bin/knex migrate:latest
elif [[ -f node_modules/.bin/prisma ]]; then
./node_modules/.bin/prisma migrate deploy
else
echo 'No migration tool detected; skipping'
fi
"
# -------- Symlink switch (atomic) --------
info "Switching current symlink to new release"
${SSH} "
set -Eeuo pipefail
# Build the link under a temporary name, then rename it over current.
# rename() is atomic, so readers never see a missing or half-made link.
ln -sfn ${RELEASES_PATH}/${RELEASE_DIR} ${APP_PATH}/current.tmp
mv -Tf ${APP_PATH}/current.tmp ${APP_PATH}/current
"
# -------- Restart service --------
info "Restarting service ${SERVICE_NAME}"
${SSH} "sudo systemctl restart ${SERVICE_NAME}"
# -------- Health check --------
info "Verifying health"
${SSH} "
set -Eeuo pipefail
for i in {1..20}; do
code=\$(curl -s -o /dev/null -w \"%{http_code}\" ${HEALTHCHECK_URL} || true)
if [[ \"\$code\" == \"200\" ]]; then
echo 'Healthy'
exit 0
fi
sleep 1
done
echo 'Unhealthy after timeout'; exit 1
" || {
warn "Health check failed. Attempting rollback."
./rollback.sh || true  # no argument: revert to the newest release that is not current
fail "Deployment rolled back due to failed health check"
}
# -------- Cleanup --------
info "Pruning old releases, keeping ${KEEP_RELEASES}"
${SSH} "
set -Eeuo pipefail
cd ${RELEASES_PATH}
ls -1dt release_* | tail -n +$((KEEP_RELEASES+1)) | xargs -r rm -rf
"
info "Deployment ${RELEASE_ID} complete"
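Make the script executable and run it from the project root:
chmod +x deploy.sh
./deploy.sh
The first run creates the initial release, flips current into place, and brings the service up.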
Why this works
The current symlink avoids serving half-updated code. The switch builds the new link under a temporary name and renames it over current; rename is atomic on Linux, so Nginx and systemd see either the old release or the new one, never an in-between state. (A plain ln -sfn briefly deletes and recreates the link, which is why the script renames instead.) The health check validates that the app starts and responds before you delete older releases. The migration step runs before traffic moves, which prevents inconsistent schemas under load.
The rollback script
When a deploy fails health checks or you detect a bug, you can revert instantly by pointing current at the previous release. The script below finds the last known-good release and flips the symlink. If you pass a release ID, it rolls back specifically to that version.
#!/usr/bin/env bash
set -Eeuo pipefail
APP_NAME="myapp"
REMOTE_USER="deploy"
REMOTE_HOST="your.droplet.ip"
APP_PATH="/var/www/${APP_NAME}"
RELEASES_PATH="${APP_PATH}/releases"
SERVICE_NAME="${APP_NAME}.service"
SSH="ssh -o StrictHostKeyChecking=accept-new ${REMOTE_USER}@${REMOTE_HOST}"
TARGET_ID="${1:-}"
pick_previous_release() {
${SSH} "
set -Eeuo pipefail
cd ${RELEASES_PATH}
mapfile -t releases < <(ls -1dt release_* 2>/dev/null)
if (( \${#releases[@]} < 2 )); then
echo 'none'
exit 0
fi
# Current target is the one that 'current' points to
current_target=\$(readlink -f ${APP_PATH}/current || true)
# Pick the newest release that is not current
for r in \"\${releases[@]}\"; do
if [[ \"${RELEASES_PATH}/\$r\" != \"\$current_target\" ]]; then
echo \"\$r\"
exit 0
fi
done
echo 'none'
"
}
if [[ -z "${TARGET_ID}" ]]; then
CANDIDATE="$(pick_previous_release)"
[[ "${CANDIDATE}" == "none" ]] && { echo "No previous release found"; exit 1; }
TARGET="${CANDIDATE}"
else
TARGET="release_${TARGET_ID}"
fi
echo "[info] Rolling back to ${TARGET}"
${SSH} "
set -Eeuo pipefail
test -d ${RELEASES_PATH}/${TARGET} || { echo 'Target release not found'; exit 1; }
ln -sfn ${RELEASES_PATH}/${TARGET} ${APP_PATH}/current.tmp
mv -Tf ${APP_PATH}/current.tmp ${APP_PATH}/current
sudo systemctl restart ${SERVICE_NAME}
"
echo "[info] Rollback complete"
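Run it without arguments to revert to the most recent release that is not current, or pass a release ID (the timestamp portion of the directory name; the value below is an example):
./rollback.sh
./rollback.sh 20240312083000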
Secure SSH and sudo
Use SSH keys and restrict privileges. Allow the deploy user to restart only your service with sudo by adding a drop-in file:
echo "deploy ALL=NOPASSWD: /usr/bin/systemctl restart myapp.service, /usr/bin/systemctl status myapp.service" | sudo tee /etc/sudoers.d/myapp
sudo visudo -cf /etc/sudoers.d/myapp
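You can confirm the rule with:
sudo -l -U deploy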
Harden SSH by disabling password logins in /etc/ssh/sshd_config, then reload:
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload ssh
Nginx as a stable edge
Terminate HTTP at Nginx and proxy to your app. Point static assets to current/public so they track the active release.
server {
    listen 80;
    server_name your.domain;

    location /health {
        proxy_pass http://127.0.0.1:3000/health;
        proxy_connect_timeout 2s;
        proxy_read_timeout 2s;
    }

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
    }

    location /assets/ {
        alias /var/www/myapp/current/public/;
        access_log off;
        expires 1h;
    }
}
Reload Nginx after changes:
sudo nginx -t && sudo systemctl reload nginx
Treat secrets as files under shared/ and never ship them inside a release. The EnvironmentFile in systemd injects the variables at start. If your app writes to disk (for example, uploads), symlink a shared/uploads directory into each release to keep user data stable across deploys.
Database migrations and safety
Migrations alter state outside your code, so treat them carefully. The deploy script runs migrations before switching traffic, which ensures that new code sees the expected schema. If migrations are not backward compatible, introduce them in two phases: first deploy additive changes that old code tolerates, then deploy the code that relies on them. For example, add a new column as nullable in one release, and only enforce constraints or drop the old column after the code that writes the new one has shipped.
Logging and observability
Keep logs outside the release directory so they survive release pruning. For systemd, read application logs with journalctl -u myapp -f. If you need file-based logs, write to shared/log/ and rotate with logrotate. Add a /health endpoint that checks dependencies (database, cache) to make your health check meaningful.
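If you take the file-based route, a minimal logrotate drop-in at /etc/logrotate.d/myapp could look like this (the path matches the layout above; retention values are arbitrary):
/var/www/myapp/shared/log/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
copytruncate avoids having to signal the app to reopen its log file, at the cost of possibly losing a few lines at rotation time.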
Integrate with CI
You can run the same deploy.sh from GitHub Actions or another CI once tests pass. Generate an SSH key in CI, add the public key to the deploy user’s ~/.ssh/authorized_keys, and store the private key as a secret. CI then calls:
./deploy.sh
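A minimal GitHub Actions job might look like the following sketch; DEPLOY_SSH_KEY is an assumed secret name, and you would add your runtime setup (for example actions/setup-node) before the deploy step:
deploy:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Install SSH key
      run: |
        mkdir -p ~/.ssh
        printf '%s\n' "${{ secrets.DEPLOY_SSH_KEY }}" > ~/.ssh/id_ed25519
        chmod 600 ~/.ssh/id_ed25519
    - name: Deploy
      run: ./deploy.sh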
Keep the script stateless and parameterized by environment variables so staging and production differ only in REMOTE_HOST, HEALTHCHECK_URL, and SERVICE_NAME.
Prune old releases to save disk space. Rotate logs. Apply OS security updates on a schedule. Periodically test rollback.sh to ensure reversions still work. These habits keep your deployment path reliable when you most need it.
Once the basic flow is stable, add blue-green or canary behavior by running two systemd services on different ports and switching an Nginx upstream, as sketched below. You can also template the scripts for multiple services by externalizing configuration into a small .env.deploy file.
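A sketch of the switch point, assuming a second copy of the service listens on port 3001 (the upstream name and second port are illustrative):
upstream myapp_backend {
    server 127.0.0.1:3000;     # blue, active
    # server 127.0.0.1:3001;   # green, idle: swap the comments and reload to cut over
}
Point proxy_pass at http://myapp_backend and the cutover becomes a one-line edit plus sudo systemctl reload nginx.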
Start small, get the one-click path working, and then refine. The goal is the same every time: a predictable release you can run without thinking, on a DigitalOcean Droplet or any other Linux host.