If you want the fastest deploy with zero server babysitting, use App Platform. It builds your repo, runs your app with a Python buildpack, handles env vars, HTTPS, autoscaling, workers, and cron. You trade a bit of flexibility and pay a managed premium—fair.
If you want root-level control, repeatable Linux ops, and the lowest baseline cost, use a Droplet with Nginx in front of Gunicorn (Flask/Django) or Uvicorn (FastAPI via gunicorn -k uvicorn.workers.UvicornWorker). This is the “how it’s always been done” stack, and it still holds up.
Kubernetes (DOKS) is worth it when you’re already containerized and need rolling updates, HPA, and multi-service orchestration. Not before.
Managed PostgreSQL (or MySQL/Redis) is the right call nine times out of ten—let DigitalOcean own backups, upgrades, and failover while your app just connects with a URL and SSL.
Below, I’ll show both App Platform and Droplet paths. Pick one and stick to it for the first deploy. You can always move up to containers/Kubernetes later.
Path A — App Platform (fastest path to production)
What you need
- A GitHub/GitLab repo with your Python app, a requirements.txt (or pyproject.toml), and a Procfile-style start command (App Platform can infer gunicorn app:app, or you can set the run command in the UI). A minimal Procfile sketch follows this list.
- Optional: a worker component for background jobs, and a cron worker if you need scheduling.
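For reference, here is a minimal Procfile sketch. It assumes a Flask app object named app in app.py and an optional Celery worker; the module paths are placeholders for your own code.
web: gunicorn app:app --workers 2 --threads 4
worker: celery -A yourapp worker --loglevel=info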
Core steps (high level)
- Create App → Connect Repo → Detect Python. App Platform uses the Python buildpack and lets you pick the runtime. Stick to a current, supported release (3.11/3.12 at the time of writing).
- Configure components (an app spec sketch tying these together follows this list).
  - Web Service: set the run command (e.g., gunicorn app:app --workers 2 --threads 4 --preload) and HTTP port (App Platform sets this for you).
  - Worker(s): define a separate component (same repo or path) for Celery/RQ/etc.
  - Static site (optional): for docs or marketing frontends under the same app.
- Environment variables & secrets. Add
DATABASE_URL
, API keys, etc., in the Settings → Environment section. - Domain & HTTPS. Attach your domain; App Platform provisions TLS automatically.
- CI/CD. Turn on auto-deploy on commit or wire GitHub Actions using DigitalOcean’s action.
- Background jobs / cron. Add a Worker component for queue consumers; add a Cron Worker if you need scheduled tasks.
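If you'd rather declare all of this in code than click through the UI, App Platform also accepts an app spec. Here is a minimal sketch; the repo, names, and size slugs are placeholders, and field names can drift between releases, so validate it with doctl apps spec validate before relying on it.
# app.yaml — minimal App Platform spec sketch (placeholders throughout)
name: yourapp
services:
  - name: web
    environment_slug: python
    github:
      repo: yourorg/yourapp
      branch: main
      deploy_on_push: true
    run_command: gunicorn app:app --workers 2 --threads 4 --preload
    http_port: 8080
    instance_count: 1
    instance_size_slug: basic-xxs
    envs:
      - key: DATABASE_URL
        scope: RUN_TIME
        type: SECRET
workers:
  - name: queue
    environment_slug: python
    github:
      repo: yourorg/yourapp
      branch: main
    run_command: celery -A yourapp worker --loglevel=info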
That’s honestly it. For a “hello world” you can start with DigitalOcean’s sample Python app to see the knobs.
When you need a database, create a Managed Database and paste the connection string into your env vars. Remember to allow your App Platform app as a trusted source on the DB firewall.
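A quick way to confirm that wiring is a minimal connectivity check. This sketch assumes SQLAlchemy is installed and that DATABASE_URL already carries ?sslmode=require, which DigitalOcean's connection strings do; db_check.py is a hypothetical file name.
# db_check.py — minimal connectivity check against DATABASE_URL
import os
from sqlalchemy import create_engine, text

engine = create_engine(os.environ["DATABASE_URL"], pool_pre_ping=True)

with engine.connect() as conn:
    # if this prints a Postgres version string, networking, SSL, and credentials all work
    print(conn.execute(text("select version()")).scalar())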
Path B — Droplet + Nginx + Gunicorn/Uvicorn (classic and solid)
This is the perennial Linux playbook. You’ll do a one-time server hardening, install your stack, set up a systemd service, put Nginx in front, then get HTTPS via Let’s Encrypt.
1) Provision and harden the server (Ubuntu LTS)
Spin up an Ubuntu LTS Droplet. Immediately create a non-root sudo user, enable a basic firewall (UFW), and keep SSH open. These steps have barely changed in years for good reason.
# log in as root the first time
adduser deploy
usermod -aG sudo deploy
# optional: give 'deploy' the same SSH key(s) as root (run this while still logged in as root)
rsync --archive --chown=deploy:deploy ~/.ssh /home/deploy
# firewall: allow SSH and HTTP(S)
ufw allow OpenSSH
ufw allow http
ufw allow https
ufw --force enable
ufw status verbose
Log in as your new user and keep root for emergencies only.
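Optionally, once you've confirmed key-based login works for deploy, you can disable password and root logins. A sketch, with the caveat that you should verify the key login in a second terminal before restarting SSH:
# run as 'deploy'; keep a second session open in case something goes wrong
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart ssh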
2) Install system packages, Python, virtualenv, and Nginx
We’ll keep the OS minimal and the app isolated in a venv.
sudo apt update
sudo apt install -y python3-pip python3-venv python3-dev build-essential nginx
3) Check out your app and set up the venv
# app lives in /var/www/yourapp
sudo mkdir -p /var/www/yourapp
sudo chown -R deploy:deploy /var/www/yourapp
cd /var/www/yourapp
# get your code
git clone https://github.com/yourorg/yourapp.git .
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip wheel
pip install -r requirements.txt
If it’s Flask, make sure you expose app in, say, wsgi.py. If it’s FastAPI, expose app in main.py and plan to run gunicorn -k uvicorn.workers.UvicornWorker yourpkg.main:app. Gunicorn behind a proxy is still the standard guidance.
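For concreteness, here is a sketch of what those entry points might look like; create_app and the health route are placeholders for your own code.
# wsgi.py (Flask) — Gunicorn imports this module as wsgi:app
from yourapp import create_app  # hypothetical application factory

app = create_app()

# main.py (FastAPI) — run with: gunicorn -k uvicorn.workers.UvicornWorker yourpkg.main:app
# from fastapi import FastAPI
#
# app = FastAPI()
#
# @app.get("/healthz")
# def healthz():
#     return {"ok": True}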
4) Test the app locally on the server
# Flask example
export FLASK_ENV=production  # ignored by Flask 2.3+, where production behavior is already the default
gunicorn --bind 127.0.0.1:8000 wsgi:app
# FastAPI example
# gunicorn -k uvicorn.workers.UvicornWorker --bind 127.0.0.1:8000 yourpkg.main:app
If that renders locally (curl 127.0.0.1:8000), you’re good.
5) Add a systemd service for Gunicorn/Uvicorn
This ensures your app boots on restart and restarts if it crashes. Gunicorn’s docs recommend this approach.
# /etc/systemd/system/yourapp.service
[Unit]
Description=Gunicorn for yourapp
After=network.target
[Service]
User=deploy
Group=www-data
WorkingDirectory=/var/www/yourapp
Environment="PATH=/var/www/yourapp/.venv/bin"
ExecStart=/var/www/yourapp/.venv/bin/gunicorn --workers 2 --threads 4 --bind 127.0.0.1:8000 wsgi:app
Restart=always
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable --now yourapp
sudo systemctl status yourapp --no-pager
6) Put Nginx in front as a reverse proxy
Nginx terminates TLS, serves static files, and buffers slow clients—the exact reasons Gunicorn recommends it.
# /etc/nginx/sites-available/yourapp
server {
server_name yourdomain.com;
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 60s;
}
# optional: serve static files if your app writes them here
location /static/ {
alias /var/www/yourapp/static/;
access_log off;
expires 1h;
}
listen 80;
}
sudo ln -s /etc/nginx/sites-available/yourapp /etc/nginx/sites-enabled/yourapp
sudo nginx -t && sudo systemctl reload nginx
7) Add HTTPS with Let’s Encrypt (Certbot)
This is straightforward on Ubuntu—install via snap, run the Nginx installer, and you’ll get auto-renew.
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
# generate and auto-configure Nginx for HTTPS + redirect
sudo certbot --nginx -d yourdomain.com
# verify renewal
sudo systemctl list-timers | grep certbot
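# optional: simulate a full renewal without touching the live certificates
sudo certbot renew --dry-run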
If you later add HSTS, do it after you’ve verified HTTPS everywhere (including www, subdomains, and assets). I’ve seen people lock themselves out by rushing this.
Database, secrets, and scaling (applies to both paths)
- Managed PostgreSQL: create a cluster, grab DATABASE_URL, and add your app as a trusted source. Use server-side SSL. For App Platform, paste the URL into env vars. On a Droplet, set it in your systemd unit or an .env file read by your app (a sketch follows this list).
- Workers & scheduled jobs: On App Platform, make a Worker component; for cron, use a Cron Worker (DigitalOcean publishes a reference repo). On a Droplet, use systemd timers or cron; your call.
- Assets and large files: If you need object storage, use Spaces (S3-compatible). (You can wire it with boto3 or your framework’s storage plugin.)
- Containers later? If/when you containerize, push your image to DigitalOcean Container Registry (DOCR) and deploy either on App Platform (Dockerfile service) or Kubernetes (DOKS). The DOCR + DOKS workflow is first-class.
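Here is the env-file sketch referenced above for the Droplet path; the path and values are placeholders, and the file should be root-owned with tight permissions (chmod 600).
# /etc/yourapp/env
DATABASE_URL=postgresql://USER:PASSWORD@HOST:PORT/DBNAME?sslmode=require
SECRET_KEY=change-me
# then point the [Service] section of /etc/systemd/system/yourapp.service at it:
#   EnvironmentFile=/etc/yourapp/env
# and apply: sudo systemctl daemon-reload && sudo systemctl restart yourapp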
Nginx HTTPS server block (final state)
# /etc/nginx/sites-available/yourapp (after certbot updates it)
server {
server_name yourdomain.com;
location / {
proxy_pass http://127.0.0.1:8000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 60s;
}
listen 443 ssl http2;
ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
}
server {
if ($host = yourdomain.com) {
return 301 https://$host$request_uri;
}
listen 80;
server_name yourdomain.com;
return 404;
}
Certbot can inject most of this for you. If you later enable HSTS, add the header only after you’re certain all subdomains are HTTPS-clean.
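When you do get there, it’s one line inside the listen 443 server block; start with a short max-age and raise it toward a year only once every host is HTTPS-clean:
add_header Strict-Transport-Security "max-age=300; includeSubDomains" always;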
Health checks, logs, and updates
- Systemd & logs:
journalctl -u yourapp -f
to tail app logs. - Nginx:
/var/log/nginx/access.log
anderror.log
. - Updates: Keep Ubuntu patched (
unattended-upgrades
), update Python libs regularly, and schedule DB snapshots if you self-host. - Metrics: App Platform exposes metrics in the UI; on Droplets, you can enable monitoring agents or ship logs to a provider. (For K8s, use your standard stack—Prometheus/Grafana, etc.)
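The unattended-upgrades setup mentioned above is two commands on Ubuntu (the package is often preinstalled, in which case the install is a no-op):
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades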
Common foot-guns (so you don’t repeat the classics)
- Forgetting the proxy: Don’t expose Gunicorn directly to the internet. Put Nginx (or the platform’s load balancer) in front. Gunicorn docs tell you why.
- DB firewall: Managed DBs block traffic unless your app source is trusted—remember to add App Platform or your Droplet IP.
- TLS before HSTS: Don’t enable HSTS until every subdomain and asset path is HTTPS. (Ask me how I know.)
- One service, one responsibility: Use App Platform Workers (or systemd services/cron) for background jobs; don’t jam everything into the web process.
- CI/CD drift: If you’re on App Platform, wire GitHub Actions once and forget about manual redeploys (a minimal workflow sketch follows).
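As a sketch of that one-time wiring, assuming you’ve created a DigitalOcean API token and stored it plus your app’s ID as repository secrets (the secret names below are placeholders):
# .github/workflows/deploy.yml
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: digitalocean/action-doctl@v2
        with:
          token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
      # trigger a fresh App Platform deployment for the app ID stored in secrets
      - run: doctl apps create-deployment ${{ secrets.APP_ID }}
This only makes sense if you’ve turned off auto-deploy on push; otherwise the built-in trigger already covers it.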
If you just want to see something live fast, fork the sample app and deploy it from the control panel. Then retrofit your real app into the same structure. It’s a five-minute victory lap that confirms DNS/HTTPS/env vars are working.
If you started on App Platform and outgrow it, move to Docker + DOCR + DOKS. If you started on a Droplet, containerize your app and shift to App Platform (Dockerfile service) or DOKS. Either way, your code and infra hygiene will carry over.
If you’ve been following along, you’ve already seen why I split this into two tracks earlier—fully managed for speed, and the classic stack for control. Both remain valid; pick the one that matches your team’s appetite for ops.