Every admin eventually learns The Rule: backups you don’t have are the ones you’ll need. PostgreSQL has multiple backup strategies, from low-friction logical dumps to full-on point-in-time recovery (PITR). In the 2000s, pg_dump became the go-to for per-database logical exports because it’s simple, portable, and version-friendly.
Later, parallelism and stronger authentication (like SCRAM-SHA-256) improved both speed and security. Today, with PostgreSQL 18 in the wild, the fundamentals haven’t changed, but the best-practice details have—especially around formats, parallel jobs, auth, and cross-version restores. This guide walks you through a clean, modern workflow on Linux, correcting a few common misconceptions along the way.
## What pg_dump is (and isn’t)
pg_dump creates a consistent, point-in-time snapshot of one database. It’s perfect for migrating a database between servers or keeping restorable “checkpoints,” but it cannot rewind you to an arbitrary minute—only to the instant the dump was taken. For recovery to an exact moment, you need continuous archiving + WAL files (PITR). We’ll briefly point to that later.
## Connection prep on Linux (PATH, auth, and reloading rules)
Make sure the client tools are on your PATH (most distributions install them under /usr/bin). If you manage multiple installations, call the tools by full path (e.g., /usr/lib/postgresql/16/bin/pg_dump) to avoid version mismatches. When moving to a newer PostgreSQL version, run the newer pg_dump against the older server; the reverse (loading a newer dump into an older server) isn’t guaranteed to work.
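To catch a mismatch early, you can compare major versions before dumping. A minimal sketch; the `major_of` helper is ours, and it only parses the version strings the PostgreSQL tools actually print:

```shell
# Extract the major version from a PostgreSQL version string, e.g.
# "pg_dump (PostgreSQL) 16.4" -> "16" (helper name is hypothetical)
major_of() {
  printf '%s\n' "$1" | sed -n 's/.*PostgreSQL[^0-9]*\([0-9][0-9]*\).*/\1/p'
}

# Example use (requires the tools and a reachable server):
#   client=$(major_of "$(pg_dump --version)")
#   server=$(major_of "$(psql -h localhost -U postgres -tAc 'select version();')")
#   [ "$client" -ge "$server" ] || echo "warning: pg_dump is older than the server" >&2
```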
For authentication policy, PostgreSQL uses pg_hba.conf. Besides CIDR addresses like 192.168.1.0/24, you can use the `samehost` and `samenet` shorthands. Prefer the modern `scram-sha-256` auth method when possible; it’s more secure than `md5` and supported by all current clients. After changing auth rules, a reload is enough (e.g., `systemctl reload postgresql` or `SELECT pg_reload_conf();`); a full restart isn’t required for pg_hba.conf changes.
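To make that concrete, a pg_hba.conf entry pair along these lines (the subnet is an assumption; adapt it to your network) would require SCRAM for local and LAN connections:

```
# TYPE  DATABASE  USER  ADDRESS         METHOD
host    all       all   samehost        scram-sha-256
host    all       all   192.168.1.0/24  scram-sha-256
```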
If you script backups, configure a password file once and skip interactive prompts. On Linux, create `~/.pgpass`:

```shell
# hostname:port:database:username:password
localhost:5432:*:postgres:MyPa$$Word_1
```

Permissions must be 0600 (`chmod 0600 ~/.pgpass`), or PostgreSQL will ignore the file.
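To avoid even a brief window where the file exists with loose permissions, set a restrictive umask before writing it. A sketch, reusing the placeholder credentials from above:

```shell
# Create ~/.pgpass with owner-only permissions from the start
umask 077                            # new files: owner read/write only
cat > ~/.pgpass <<'EOF'
# hostname:port:database:username:password
localhost:5432:*:postgres:MyPa$$Word_1
EOF
chmod 0600 ~/.pgpass                 # belt and braces; libpq requires 0600
```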
## Choosing the right dump format (plain, custom, directory, tar)
At a glance:
- plain — a text SQL script. Universal and editable; restore with `psql`.
- custom (`-Fc`) — compressed, flexible archive; restore with `pg_restore` (can parallelize restore).
- directory (`-Fd`) — a folder per dump with per-table files; supports parallel dump and parallel restore.
- tar (`-Ft`) — tar archive; restore with `pg_restore` (no parallelism).
Important parallelism facts (often confused):
- `pg_dump -j` only works with the directory format (`-Fd`).
- `pg_restore -j` works with custom and directory archives; tar and plain do not support parallel restore.
## Creating a backup (examples you can paste)
List databases to confirm the exact name you’ll dump:
```shell
psql -h localhost -U postgres -l
```

Plain SQL script (easy to review or hand-edit):
```shell
pg_dump -h localhost -U postgres -f /mnt/backup/test.sql test
```

Using shell redirection (`>`) is also valid on Unix, but `-f` sidesteps platform pitfalls: on Windows, console redirection can mangle the dump’s encoding. If you do redirect, do it on Unix rather than Windows.
Custom archive (great default for production moves; compressed):
```shell
pg_dump -h localhost -U postgres -Fc -f /mnt/backup/test.dump test
```

Directory archive with parallel dump (fastest on large DBs):
```shell
pg_dump -h localhost -U postgres -Fd -f /mnt/backup/test_dir -j 4 test
```

Start with a small `-j` (like 2–4) and observe CPU and disk throughput before increasing. Parallel dump/restore can be bottlenecked by a single huge table or by storage write speed.
Heads-up: reusing a non-empty target directory for `-Fd` will fail; clean or recreate it first.
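A small guard in your backup script makes that failure mode explicit instead of surprising. A sketch; the helper name and paths are our assumptions:

```shell
# Succeed only if the path is absent or an empty directory (hypothetical helper)
ensure_empty_dir() {
  [ ! -e "$1" ] || { [ -d "$1" ] && [ -z "$(ls -A "$1")" ]; }
}

# Example use (requires a running server):
#   ensure_empty_dir /mnt/backup/test_dir \
#     && pg_dump -h localhost -U postgres -Fd -f /mnt/backup/test_dir -j 4 test \
#     || echo "target exists and is not empty; clean or recreate it" >&2
```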
## Restoring safely (and quickly)
You restore into a new, empty database. If a database of the same name exists, drop it first (`-i` adds an interactive “are you sure?” prompt):
```shell
dropdb -h localhost -U postgres -i test
createdb -h localhost -U postgres -T template0 test
```

Restoring a plain SQL dump uses `psql`. The correct, reliable form is `-f`:
```shell
psql -h localhost -U postgres -d test -f /mnt/backup/test.sql
```

This is slightly preferable to shell input redirection (`<`) because error messages then include the script’s line numbers. Add `-v ON_ERROR_STOP=1` if you want `psql` to halt at the first error instead of continuing.
Restoring a custom archive with parallel jobs:
```shell
pg_restore -h localhost -U postgres -d test -j 4 /mnt/backup/test.dump
```

Restoring a directory archive with parallel jobs:
```shell
pg_restore -h localhost -U postgres -d test -j 4 /mnt/backup/test_dir
```

You can let pg_restore drop and recreate the database defined in the dump using `-C -c`, connecting to a different existing DB (often `postgres`). Use this carefully, especially on shared servers:
```shell
pg_restore -h localhost -U postgres -d postgres -C -c /mnt/backup/test.dump
```

Parallel restore boosts throughput by interleaving data loads (COPY) and CREATE INDEX steps where dependencies allow; if your schema is dominated by one giant table or index, gains may be limited.
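pg_restore also supports selective restores: list the archive’s table of contents with `-l`, trim it, and replay only what’s left with `-L`. A sketch, where the archive path, the table name, and the `filter_toc` helper are our assumptions:

```shell
# Drop the data entry for one table from a pg_restore TOC listing;
# everything else (schema, other tables) is kept (helper is hypothetical)
filter_toc() {
  grep -v "TABLE DATA public $1"
}

# Workflow (requires the archive and a running server):
#   pg_restore -l /mnt/backup/test.dump > toc.list
#   filter_toc big_log_table < toc.list > toc.trimmed
#   pg_restore -h localhost -U postgres -d test -L toc.trimmed /mnt/backup/test.dump
```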
For point-in-time recovery (PITR), combine periodic base backups with continuous WAL archiving. That’s a different, cluster-level strategy (think: all databases in the instance), and it’s what you want when “rewind me to 13:04:17” is a requirement.
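For orientation only, WAL archiving is enabled with postgresql.conf settings along these lines (the archive path is an assumption; pair this with regular `pg_basebackup` runs):

```
wal_level = replica                              # the default in modern releases
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'    # %p = WAL file path, %f = file name
```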
## Hardening and quality-of-life tips you’ll actually use
- Prefer `scram-sha-256` in pg_hba.conf if all clients are new enough; older clients must upgrade their `libpq`.
- Use `.pgpass` (mode 0600) for non-interactive jobs; never commit it to source control.
- For cross-version moves, run the destination version of `pg_dump` where possible, and always test restores.
- Monitor disk throughput during `-j` runs; parallelism helps only if I/O keeps up.
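Putting those tips together, a nightly cron job might look like this sketch (paths, database name, retention window, and the helper name are all assumptions; authentication relies on `~/.pgpass`):

```shell
#!/bin/sh
set -eu

# Build one timestamped target directory per run (hypothetical helper),
# e.g. /mnt/backup/test-20250101T0200
backup_dir_for() {
  printf '%s/%s-%s' "$1" "$2" "$(date +%Y%m%dT%H%M)"
}

# dest=$(backup_dir_for /mnt/backup test)
# pg_dump -h localhost -U postgres -Fd -f "$dest" -j 2 test
# find /mnt/backup -maxdepth 1 -name 'test-*' -mtime +14 -exec rm -rf {} +  # 14-day retention
```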
## Quick reference (copy/paste)
Dump (plain SQL):

```shell
pg_dump -h localhost -U postgres -f /mnt/backup/test.sql test
```

Dump (custom archive):

```shell
pg_dump -h localhost -U postgres -Fc -f /mnt/backup/test.dump test
```

Dump (directory, parallel):

```shell
pg_dump -h localhost -U postgres -Fd -f /mnt/backup/test_dir -j 4 test
```

Restore (plain with psql):

```shell
psql -h localhost -U postgres -d test -f /mnt/backup/test.sql
```

Restore (custom, parallel):

```shell
pg_restore -h localhost -U postgres -d test -j 4 /mnt/backup/test.dump
```

Restore (directory, parallel):

```shell
pg_restore -h localhost -U postgres -d test -j 4 /mnt/backup/test_dir
```

pg_dump remains a fantastic, low-friction way to back up and move single databases, especially when paired with pg_restore for selective and parallel restores. When you need time-travel, step up to PITR. Meanwhile, keep your auth modern (`scram-sha-256`), your scripts non-interactive (`.pgpass`), and your formats chosen intentionally (custom or directory, most of the time). If you do that routinely, future-you will sleep better.
## References
- PostgreSQL Documentation — Backup & Restore overview.
- PostgreSQL Documentation — pg_dump (formats and options).
- PostgreSQL Documentation — pg_restore (archive types, `-j`).
- PostgreSQL Documentation — psql (`-f` vs. shell redirection, error line numbers).
- PostgreSQL Documentation — The Password File `.pgpass` (format, `chmod 0600`).
- PostgreSQL Documentation — pg_hba.conf (`samehost`, `samenet`).
- PostgreSQL Documentation — Password Authentication (`scram-sha-256`).
- PostgreSQL Documentation — Continuous Archiving & PITR (base backup + WAL).