How to deploy a Rust application on DigitalOcean

Alright—let’s do this the right way, once, and make it stick. Below is an evergreen, no-nonsense guide to deploying a Rust web app on DigitalOcean. I’ll give you two battle-tested deployment paths you can rely on for years:

  • App Platform (PaaS—fast, managed, minimal ops)
  • Droplet + Nginx + systemd (classic VM—full control, predictable, cheap at small scale)

I’ll also show where Kubernetes fits, how to wire managed Postgres/Redis, and how to keep TLS and health checks boring and automated.

The short version (so you know where we’re heading)

  • If you just want it live with auto-deploys from Git and minimal ops, use App Platform with a Dockerfile. It’ll build your image and run it; connect a Managed Database if needed.
  • If you prefer the traditional route and exact control, use a Droplet (Ubuntu 24.04), compile a release build, run under systemd, and put Nginx in front with Let’s Encrypt. Solid, cheap, transparent.
  • For big teams or multi-service setups, DOKS (Kubernetes) is there; you’ll build and push an image, deploy via manifests/helm, and stick a DO Load Balancer in front with a health check.

Prerequisites (common to all paths)

  • A Rust service that binds to 0.0.0.0:8080 (or your chosen port). Axum, Actix, Rocket—doesn’t matter.
  • A Dockerfile (strongly recommended even if you deploy to a VM; it standardizes builds).
  • A domain ready to point at DigitalOcean (for TLS and clean URLs).
  • A place for secrets (App Platform env vars, or a .env + EnvironmentFile for systemd).

Option A — App Platform (PaaS, Git-to-deploy)

Why: You want deploys tied to Git, built automatically, and you don’t want to babysit servers. App Platform will build your container from source or pull a registry image; it exposes env vars, health checks, scaling, and managed TLS.

1) Add a multi-stage Dockerfile

Put this in your repo root. It builds a small, static-ish binary (glibc by default; you can go MUSL later if you need it).

# ---- builder ----
FROM rust:1.81 AS builder
WORKDIR /app

# Cache deps
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo "fn main() {}" > src/main.rs
RUN cargo build --release
# Remove only this crate's artifacts so the real source gets rebuilt;
# keeping the rest of target/release/deps preserves the dependency cache
RUN rm -f target/release/your-app target/release/deps/your_app*

# Build
COPY . .
RUN cargo build --release

# ---- runtime ----
FROM debian:bookworm-slim
RUN useradd -m -u 10001 appuser
WORKDIR /app
COPY --from=builder /app/target/release/your-app /usr/local/bin/your-app

# minimal runtime deps; adjust if you use OpenSSL, etc.
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates && rm -rf /var/lib/apt/lists/*

USER appuser
ENV RUST_LOG=info
EXPOSE 8080
CMD ["/usr/local/bin/your-app"]

If you need truly static linking, target MUSL and sort out linker toolchains (watch for recent +crt-static/linker nuances). Good later—but don’t make day one harder than it needs to be.
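
If you do take the MUSL route later, the builder stage might look roughly like this. This is a sketch, not a drop-in: the package and target names are the common Debian/rustup ones, and any crate that links system OpenSSL will need rustls or a vendored OpenSSL before this works.

```dockerfile
# ---- hypothetical static-MUSL builder ----
FROM rust:1.81 AS builder
# musl-gcc is needed to link against musl libc
RUN rustup target add x86_64-unknown-linux-musl \
 && apt-get update && apt-get install -y --no-install-recommends musl-tools \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY . .
RUN cargo build --release --target x86_64-unknown-linux-musl

# A fully static binary can run in an empty base image.
# Note: scratch has no CA certificates; copy them in if you make TLS calls.
FROM scratch
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/your-app /your-app
USER 10001
ENTRYPOINT ["/your-app"]
```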

2) Create the app from the DO control panel (or doctl)

  • Control Panel → Create → App Platform → pick your Git repo → select the Dockerfile → set the run command if needed.

Or with doctl (after you’ve generated a token and logged in):

doctl auth init
doctl apps create --spec app.yaml

The app.yaml is App Platform’s spec (components, env vars, routes). You can check the CLI reference when you want to automate CI/CD later.
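
For reference, a rough sketch of such a spec. The repo, region, instance size, and env values below are all placeholders; check the current app spec reference for exact field names before relying on this.

```yaml
name: your-app
region: nyc
services:
  - name: web
    github:
      repo: you/your-app        # placeholder
      branch: main
      deploy_on_push: true
    dockerfile_path: Dockerfile
    http_port: 8080
    instance_count: 1
    instance_size_slug: basic-xxs
    health_check:
      http_path: /healthz
    envs:
      - key: RUST_LOG
        value: info
```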

3) Configure env vars & health checks

  • Add PORT=8080 (if your framework reads it) and your secrets (DB URLs, API keys).
  • Set a health check path like /healthz returning 200. It affects auto-rollouts and LB health; treat it as a contract with the platform.
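
To make that contract concrete, here is a dependency-free sketch of what a health endpoint boils down to, using only the standard library. In practice your framework's router does this for you; this just shows what the platform's probe expects: connect, get a fast 200, done.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// The whole contract: answer quickly with a 200 and a tiny body.
fn handle_probe(mut stream: TcpStream) {
    let mut buf = [0u8; 1024];
    let _ = stream.read(&mut buf); // consume (and ignore) the request
    let _ = stream.write_all(
        b"HTTP/1.1 200 OK\r\ncontent-length: 2\r\nconnection: close\r\n\r\nOK",
    );
}

fn main() {
    // Bind an ephemeral local port for the demo; a real service uses $PORT.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();

    // Serve one probe in the background, then poke it the way a LB would.
    thread::spawn(move || {
        if let Ok((stream, _)) = listener.accept() {
            handle_probe(stream);
        }
    });

    let mut probe = TcpStream::connect(addr).unwrap();
    probe
        .write_all(b"GET /healthz HTTP/1.1\r\nhost: localhost\r\n\r\n")
        .unwrap();
    let mut resp = String::new();
    probe.read_to_string(&mut resp).unwrap();
    println!("{}", resp.lines().next().unwrap()); // HTTP/1.1 200 OK
}
```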

4) Databases (Managed Postgres/MySQL/Redis)

Attach a Managed Database and read the connection string from App Platform’s “Connection details.” DO gives you the exact DSN and CA. Use SSL by default.

5) SSL/TLS and domains

App Platform can provision Let’s Encrypt for your domain automatically. Point your DNS (A/AAAA or CNAME) as directed in the UI—done.

Redeploys are automatic on Git push if you enabled that. Manually, you can force them with:

doctl apps create-deployment <APP_ID> --force-rebuild

Note: You don’t “SSH into” App Platform instances—that’s by design. Logs/exec are handled in the UI/CLI.

Option B — Droplet (Ubuntu) + systemd + Nginx (the classic, dependable way)

Why: You want control, you’re cost-sensitive, or you just prefer knowing exactly what’s running. This is the old reliable stack: your Rust binary runs as a service, Nginx fronts it with TLS, and DO Cloud Firewalls restrict the blast radius at the network level.

1) Create the Droplet

Choose Ubuntu 24.04 LTS. Size it with RAM/CPU that matches your workload. Assign an SSH key. Consider enabling Monitoring when you create it.

2) Install runtime deps and create a user

sudo apt update
sudo apt install -y nginx certbot python3-certbot-nginx
sudo useradd -m -u 10001 appuser || true

3) Build your app (two approaches)

A) Build on the Droplet (simple):

sudo apt install -y build-essential pkg-config libssl-dev
curl https://sh.rustup.rs -sSf | sh -s -- -y
source $HOME/.cargo/env
git clone https://github.com/you/your-app.git
cd your-app
cargo build --release
sudo cp target/release/your-app /usr/local/bin/your-app
sudo chown root:root /usr/local/bin/your-app

B) Build in Docker and copy the binary (repeatable): build locally with the Dockerfile above, docker cp the artifact out of the builder stage, and scp it to the server. This keeps the server clean.

4) Create a systemd service

# /etc/systemd/system/your-app.service
[Unit]
Description=Your Rust app
After=network.target

[Service]
User=appuser
Group=appuser
EnvironmentFile=-/etc/your-app.env
ExecStart=/usr/local/bin/your-app
Restart=always
RestartSec=2
LimitNOFILE=65535
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target

Then reload systemd and start the service:

sudo systemctl daemon-reload
sudo systemctl enable --now your-app
sudo systemctl status your-app

5) Nginx reverse proxy

# /etc/nginx/sites-available/yourapp.conf
server {
    listen 80;
    server_name yourapp.com www.yourapp.com;

    location / {
        proxy_pass         http://127.0.0.1:8080;
        proxy_set_header   Host $host;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
        proxy_read_timeout 60s;
    }
}

Enable the site and reload Nginx:

sudo ln -s /etc/nginx/sites-available/yourapp.conf /etc/nginx/sites-enabled/yourapp.conf
sudo nginx -t && sudo systemctl reload nginx

This is the standard “Rust app behind Nginx” pattern; we’re forwarding headers and setting sensible timeouts.

6) HTTPS with Let’s Encrypt (auto-renewing)

sudo certbot --nginx -d yourapp.com -d www.yourapp.com --agree-tos -m you@example.com --redirect

certbot edits the Nginx site to serve HTTPS and installs a timer for renewal. That’s your long-term TLS solved.

7) Lock it down with DO Cloud Firewalls

  • Only allow inbound 22 (SSH), 80/443 (web), and your private ports as needed. Deny everything else by default. Attach the firewall to the Droplet. This is network-layer insurance you get “for free.”

8) Health checks and load balancer (optional)

If/when you add a DO Load Balancer in front, expose /healthz and set reasonable thresholds—unhealthy backends drop out automatically until they pass again.

Option C — Kubernetes on DigitalOcean (DOKS), when you outgrow one box

You’ll containerize your app (same Dockerfile), push to a registry, deploy to a DOKS cluster, and front it with a DO Load Balancer via an Ingress Controller. It’s standard K8s, just with DO operating the control plane for you.

High-level flow:

  1. Build and push image → DOCR or Docker Hub.
  2. Deployment (pods) + Service (ClusterIP) + Ingress (Nginx ingress) → DO Load Balancer gets created automatically.
  3. Health checks on the LB keep traffic on healthy pods.

This is the right move for multiple services, canary/blue-green rollouts, and horizontal scaling without re-architecting later.
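
As a sketch, the core manifests might look like this. The image path, names, and replica count are placeholders; the Ingress (and hence the Load Balancer) would be layered on top of the Service.

```yaml
# Hypothetical manifests — adjust names, image, and replicas to your setup
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
        - name: your-app
          image: registry.digitalocean.com/your-registry/your-app:latest
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: your-app
spec:
  type: ClusterIP
  selector:
    app: your-app
  ports:
    - port: 80
      targetPort: 8080
```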

Attaching a Managed Database (Postgres example)

  • Create the DB cluster in DO, add your Droplet/App as an authorized source (or VPC-only).
  • Grab the connection string from the UI and set it as DATABASE_URL. Most Rust ORMs (SQLx, Diesel) read from it directly. Use SSL mode as provided by DO.

If you’re on App Platform, the “Connection Details” UI gives you a drop-in DSN; on DOKS, mount it as a Kubernetes Secret; on a Droplet, put it in /etc/your-app.env.
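
Wherever the DSN comes from, the app-side pattern is the same. A minimal stdlib-only sketch; the local fallback value is a placeholder for development, not a DO-provided string:

```rust
use std::env;

// Prefer the DSN injected by the platform; fall back for local dev.
// "postgres://localhost/dev" is a placeholder, not a DO value.
fn database_url() -> String {
    env::var("DATABASE_URL")
        .unwrap_or_else(|_| "postgres://localhost/dev".to_string())
}

fn main() {
    // In a real app this string feeds SQLx/Diesel's connection setup.
    println!("connecting to {}", database_url());
}
```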

Operational polish that pays off

Zero-downtime restarts on Droplets.
Use two systemd units (blue/green) on different ports and swap Nginx upstream with a symlinked conf + nginx -t && systemctl reload nginx. Simple, brutally reliable.
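
A sketch of the Nginx side of that swap; the file path and upstream name are hypothetical:

```nginx
# /etc/nginx/conf.d/your-app-upstream.conf — hypothetical blue/green pair
upstream your_app_backend {
    server 127.0.0.1:8080;   # "blue" unit, currently live
    # server 127.0.0.1:8081; # "green" unit; swap the comments, then run
    #                        # sudo nginx -t && sudo systemctl reload nginx
}
```

In the site config, point proxy_pass at http://your_app_backend instead of a fixed port, and the reload moves traffic without dropping in-flight requests.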

Observability.
App Platform has logs in UI/CLI. On Droplets, journalctl -u your-app -f for app logs; Nginx in /var/log/nginx. DO Monitoring/Graphs tell you if the box is choking.

Secrets.
Don’t bake secrets into images. App Platform env vars / K8s Secrets / /etc/your-app.env on Droplets (root-only, chmod 600).
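
For the Droplet case, the EnvironmentFile referenced by the systemd unit is just KEY=value lines. Every value below is a placeholder:

```ini
# /etc/your-app.env — root-owned, chmod 600; all values are placeholders
DATABASE_URL=postgres://user:password@db-host:25060/app?sslmode=require
RUST_LOG=info
PORT=8080
```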

Static assets.
If you serve lots of static files, consider Spaces with its built-in CDN. Keep the Rust binary focused on dynamic work. It’s S3-compatible; any S3 client works.

Droplet metadata.
Need instance metadata or user-data? There’s a simple metadata API if you ever bootstrap instances automatically.

Troubleshooting quick hits

  • App won’t start on App Platform? Inspect build logs; confirm it’s listening on the PORT the platform expects (often injected). Health check should return 200 fast.
  • TLS on Droplet is flaky? Re-run certbot --nginx …, check Nginx includes, and ensure DNS is actually pointing at your Droplet.
  • Load balancer shows backends unhealthy? Usually a health check path mismatch, firewall blocking the check port, or timeouts too strict.

A minimal, production-lean Axum server (for reference)

Keep it boring and predictable. You can drop this into any of the three deployment paths.

// axum 0.7 style: axum::serve replaced the old axum::Server API
use axum::{routing::get, Router};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/healthz", get(|| async { "OK" }))
        .route("/", get(|| async { "hello, world" }));

    // Respect an injected PORT (App Platform does this); default to 8080
    let port = std::env::var("PORT")
        .ok()
        .and_then(|p| p.parse::<u16>().ok())
        .unwrap_or(8080);

    let listener = TcpListener::bind(("0.0.0.0", port)).await.unwrap();
    println!("listening on {}", listener.local_addr().unwrap());
    axum::serve(listener, app).await.unwrap();
}

When to choose what (pragmatic rubric)

  • App Platform — solo/team, you want speed, auto TLS, auto-deploys, and fewer knobs. Most small/medium apps fit here well.
  • Droplet — you want total control, stable costs, root access, and you’re comfortable with Linux basics. It’s the most “traditional” path and still excellent.
  • DOKS — you already speak Docker + K8s or you’re orchestrating multiple services with shared runtime policies. It scales cleanly.

Final word

Fancy is fragile. Start simple, prove your traffic patterns, then add complexity when there’s a clear payoff. If you follow the App Platform or the Droplet recipe above, you’ll have a Rust service that deploys cleanly, serves over HTTPS, and stays easy to operate a year from now—because we leaned on standards, not heroics.

If you want, tell me in the comments which framework you’re using (Axum/Actix/Rocket) and whether you prefer PaaS or VM, and I’ll tailor a drop-in Dockerfile, App Platform spec, or the exact Nginx/systemd files for your project.

Alex is the resident editor and oversees all of the guides published. His past work and experience include Colorlib, Stack Diary, Hostvix, and working with a number of editorial publications. He has been wrangling code and publishing his findings about it since the early 2000s.
