PostgreSQL: Near Zero-Downtime Migration to a New Database with Docker

When teams say “zero downtime migration” for PostgreSQL, they usually mean:

  • No user-facing outage for reads
  • A very short write pause during final cutover (often seconds)

This guide shows a complete, repeatable migration using native PostgreSQL logical replication and Docker.

Logical replication copies row changes, but sequence objects are separate state. If users.id uses users_id_seq, target rows might be copied correctly while users_id_seq remains behind. After cutover, new inserts can produce duplicate key errors unless you reset sequence values.
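For example, with the users table used throughout this guide, you can see the drift on the target after the initial copy (illustrative queries, run inside psql on new-db):

```sql
-- Row data is replicated, so this reflects the source:
SELECT max(id) FROM users;

-- The sequence is NOT replicated; on a freshly created target schema
-- it will still sit near its initial value:
SELECT last_value, is_called FROM users_id_seq;
```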

What “Zero Downtime” Means in Practice

With built-in one-way logical replication, a true no-write-pause cutover is hard. The practical pattern is near zero downtime:

  1. Continuously replicate from source to target
  2. Validate data
  3. Briefly freeze writes
  4. Switch app connection to target
  5. Unfreeze writes

If you need true zero write pause, you typically need application dual-writes plus careful conflict handling.

The example below runs two PostgreSQL containers:

  • old-db on local port 5433 (source)
  • new-db on local port 5434 (target)

Step 1: Start Source and Target PostgreSQL Containers

Create docker-compose.yml:

version: "3.9"
services:
  old-db:
    image: postgres:16
    container_name: old-db
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: appdb
    ports:
      - "5433:5432"
    command:
      - postgres
      - -c
      - wal_level=logical
      - -c
      - max_wal_senders=20
      - -c
      - max_replication_slots=20
    volumes:
      - old_data:/var/lib/postgresql/data
  new-db:
    image: postgres:16
    container_name: new-db
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: appdb
    ports:
      - "5434:5432"
    volumes:
      - new_data:/var/lib/postgresql/data
volumes:
  old_data:
  new_data:

Start both containers:

docker compose up -d

Step 2: Seed Sample Data on Source

docker exec -i old-db psql -U postgres -d appdb <<'SQL'
CREATE TABLE IF NOT EXISTS users (
  id BIGSERIAL PRIMARY KEY,
  email TEXT NOT NULL UNIQUE,
  full_name TEXT NOT NULL,
  created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
INSERT INTO users (email, full_name)
SELECT 'user' || g || '@example.com', 'User ' || g
FROM generate_series(1, 10000) g
ON CONFLICT DO NOTHING;
SQL
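Before going further, it is worth confirming that the source actually picked up the logical-replication setting passed via the compose command:

```sh
docker exec old-db psql -U postgres -d appdb -c "SHOW wal_level;"
# Expect: logical
```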

Step 3: Create Replication User and Publication on Source

docker exec -i old-db psql -U postgres -d appdb <<'SQL'
DO $$
BEGIN
  IF NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = 'repl_user') THEN
    CREATE ROLE repl_user WITH LOGIN REPLICATION PASSWORD 'repl_pass';
  END IF;
END $$;
DROP PUBLICATION IF EXISTS app_pub;
CREATE PUBLICATION app_pub FOR ALL TABLES;
SQL
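As an optional sanity check, you can list which tables the publication covers on the source:

```sql
SELECT * FROM pg_publication_tables WHERE pubname = 'app_pub';
```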
Step 4: Copy Schema to Target

docker exec old-db pg_dump -U postgres -d appdb --schema-only \
| docker exec -i new-db psql -U postgres -d appdb

Step 5: Create Subscription on Target

Use old-db as the host in the connection string: both containers share the same Docker network, so the service name resolves directly. Note the port is 5432 (the container-internal port), not the published 5433.
docker exec -i new-db psql -U postgres -d appdb <<'SQL'
DROP SUBSCRIPTION IF EXISTS app_sub;
CREATE SUBSCRIPTION app_sub
  CONNECTION 'host=old-db port=5432 dbname=appdb user=repl_user password=repl_pass'
  PUBLICATION app_pub
  WITH (copy_data = true, create_slot = true, enabled = true);
SQL
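Because create_slot = true, the subscription creates a replication slot on the source. You can verify it exists there:

```sql
-- Run on old-db; by default the slot is named after the subscription
SELECT slot_name, active, confirmed_flush_lsn FROM pg_replication_slots;
```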

Step 6: Monitor Replication and Validate Data

Check replication status on target:

docker exec -it new-db psql -U postgres -d appdb -c \
"SELECT subname, received_lsn, latest_end_lsn, latest_end_time FROM pg_stat_subscription;"

Compare row counts:

docker exec -it old-db psql -U postgres -d appdb -c "SELECT count(*) FROM users;"
docker exec -it new-db psql -U postgres -d appdb -c "SELECT count(*) FROM users;"

Optional: insert new row on source and confirm it appears on target.

docker exec -i old-db psql -U postgres -d appdb <<'SQL'
INSERT INTO users (email, full_name)
VALUES ('cutover_test@example.com', 'Cutover Test')
ON CONFLICT (email) DO NOTHING;
SQL
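Then confirm the row arrived on the target:

```sh
docker exec -it new-db psql -U postgres -d appdb -c \
  "SELECT id, email FROM users WHERE email = 'cutover_test@example.com';"
```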

Step 7: Cutover

  1. Enable app maintenance mode to block writes briefly.
  2. Wait for subscription lag to reach near zero.
  3. Run final critical table checks.
  4. Switch app DB endpoint from old-db (5433) to new-db (5434).
  5. Disable maintenance mode.
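For step 2, one way to measure remaining lag is to query the walsender view on the source (the subscription shows up under its name as application_name):

```sql
-- Run on old-db: bytes of WAL the subscriber has not yet replayed
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
FROM pg_stat_replication;
```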

Step 8: Reset Sequences on Target

After cutover, set each sequence to at least the current maximum ID of its table.

Single-table example:

docker exec -i new-db psql -U postgres -d appdb <<'SQL'
SELECT setval('users_id_seq', (SELECT COALESCE(MAX(id), 1) FROM users), true);
SQL

Generic multi-table sequence repair:

docker exec -i new-db psql -U postgres -d appdb <<'SQL'
DO $$
DECLARE r record;
BEGIN
  FOR r IN
    SELECT
      c.relname AS table_name,
      a.attname AS column_name,
      s.relname AS seq_name
    FROM pg_class s
    -- deptype 'a' restricts the join to sequences owned by a column
    JOIN pg_depend d ON d.objid = s.oid AND d.deptype = 'a'
    JOIN pg_class c ON d.refobjid = c.oid
    JOIN pg_attribute a ON a.attrelid = c.oid AND a.attnum = d.refobjsubid
    WHERE s.relkind = 'S' AND c.relkind = 'r'
  LOOP
    EXECUTE format(
      'SELECT setval(%L, COALESCE((SELECT MAX(%I) FROM %I), 1), true);',
      r.seq_name, r.column_name, r.table_name
    );
  END LOOP;
END $$;
SQL

Rollback Plan

If critical issues appear:

  1. Re-enable short write freeze.
  2. Point application back to old-db.
  3. Disable freeze and restore normal traffic.
  4. Investigate and retry cutover later.

Keep source database intact for at least 48 to 72 hours after cutover.

Cleanup

Once the target is stable, remove the replication objects:

docker exec -i new-db psql -U postgres -d appdb -c "ALTER SUBSCRIPTION app_sub DISABLE;"
docker exec -i new-db psql -U postgres -d appdb -c "DROP SUBSCRIPTION app_sub;"
docker exec -i old-db psql -U postgres -d appdb -c "DROP PUBLICATION app_pub;"
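DROP SUBSCRIPTION also removes the replication slot it created on the source (it connects back to drop it). A quick check on old-db confirms nothing was left behind:

```sql
-- Run on old-db; expect zero rows once cleanup succeeded
SELECT slot_name, active FROM pg_replication_slots;
```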

Final Notes

  1. Logical replication does not replicate DDL; apply schema changes on both sides.
  2. Use secrets management, not plaintext passwords in production.
  3. Monitor replication lag, WAL growth, and error rates throughout migration.
  4. Test cutover and rollback in staging before production.
  5. If strict zero write pause is required, plan for dual-write architecture.
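For point 1, a common pattern during the migration window is to apply the same DDL to the subscriber first, then the publisher (the nickname column here is hypothetical, purely for illustration):

```sql
-- Run on new-db first, then on old-db:
ALTER TABLE users ADD COLUMN IF NOT EXISTS nickname TEXT;
```

If you later add whole tables to the publication, also run ALTER SUBSCRIPTION app_sub REFRESH PUBLICATION; on the target so the subscription picks them up.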