By default, n8n stores data in SQLite, a file-based database that requires no configuration. SQLite works for local testing but creates problems in production: it locks the entire file on every write, does not support safe hot backups, and is incompatible with queue mode (horizontal scaling with workers). PostgreSQL solves all three issues and is the recommended database for any self-hosted n8n deployment beyond personal use.
In this guide we will set up n8n with PostgreSQL using Docker Compose, and then cover how to migrate an existing SQLite instance safely. The setup takes about 15 minutes and requires no prior database experience.
Why Switch to PostgreSQL?
SQLite stores everything in a single file and locks it entirely on every write. When multiple workflows run at the same time, writes queue up and can time out. In extreme cases the file gets corrupted. There is no way to create a consistent backup while n8n is running: copying the file mid-write captures a broken state.
PostgreSQL uses row-level locking, so parallel workflow executions do not block each other. It supports hot backups via pg_dump without stopping n8n. It is required for queue mode (scaling with Redis and multiple workers) and for multi-main high availability setups. If your n8n instance serves a team or processes more than a few dozen workflows daily, PostgreSQL is the practical choice.
Prerequisites
- A Linux server (Ubuntu 22.04 or 24.04) with at least 2 GB RAM and 20 GB disk.
- Docker and Docker Compose installed.
- SSH access to the server.
- For migration: access to your current n8n data directory (/home/node/.n8n or the Docker volume).
You can rent a VPS on Serverspace and have it ready in minutes. If you prefer to skip manual installation, Serverspace also offers a VPS with n8n pre-installed as a 1-Click App.
Step 1. Generate an Encryption Key
n8n encrypts all stored credentials with this key. If you lose it, saved credentials become permanently unrecoverable. Generate a key and store it in a password manager:
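One convenient way to generate a suitably random key is OpenSSL (any long random string works; 32 bytes rendered as 64 hex characters is shown here):

```shell
# Print a random 32-byte key as 64 hex characters
openssl rand -hex 32
```

Copy the output into N8N_ENCRYPTION_KEY in the .env file created in the next step.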
Step 2. Create the Project Files
Create a directory and place three files inside it:
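For example (the n8n-postgres directory name is arbitrary):

```shell
mkdir -p n8n-postgres && cd n8n-postgres
touch .env docker-compose.yml init-data.sh
chmod +x init-data.sh   # the init script must be executable
```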
.env
POSTGRES_USER=postgres
POSTGRES_PASSWORD=your_strong_postgres_password
POSTGRES_DB=n8n
POSTGRES_NON_ROOT_USER=n8n_user
POSTGRES_NON_ROOT_PASSWORD=your_strong_n8n_password
N8N_ENCRYPTION_KEY=paste_your_generated_key_here
N8N_HOST=0.0.0.0
N8N_PROTOCOL=http
N8N_PORT=5678
GENERIC_TIMEZONE=UTC
docker-compose.yml
volumes:
  db_storage:
  n8n_storage:

services:
  postgres:
    image: postgres:16-alpine
    restart: always
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
      - POSTGRES_NON_ROOT_USER
      - POSTGRES_NON_ROOT_PASSWORD
    volumes:
      - db_storage:/var/lib/postgresql/data
      - ./init-data.sh:/docker-entrypoint-initdb.d/init-data.sh
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
      - DB_POSTGRESDB_USER=${POSTGRES_NON_ROOT_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_NON_ROOT_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
    ports:
      - 5678:5678
    volumes:
      - n8n_storage:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
The n8n_storage volume mounted at /home/node/.n8n is required even with PostgreSQL: n8n keeps its local config file, including a copy of the encryption key, there. DB_POSTGRESDB_HOST must match the Compose service name ("postgres"), not "localhost".
init-data.sh
This script creates the non-root database user on first launch. Create the file and run chmod +x init-data.sh:
#!/bin/bash
set -e;
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE USER "$POSTGRES_NON_ROOT_USER" WITH PASSWORD '$POSTGRES_NON_ROOT_PASSWORD';
GRANT ALL PRIVILEGES ON DATABASE "$POSTGRES_DB" TO "$POSTGRES_NON_ROOT_USER";
GRANT ALL ON SCHEMA public TO "$POSTGRES_NON_ROOT_USER";
EOSQL
Step 3. Start the Stack
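With the three files in place, bring everything up with a standard Compose invocation:

```shell
docker compose up -d          # start postgres and n8n in the background
docker compose logs -f n8n    # follow startup logs; Ctrl+C to detach
```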
Docker pulls the images, starts PostgreSQL, waits for the health check to pass, then starts n8n. n8n connects to PostgreSQL and creates all required tables automatically. Open http://your-server-ip:5678 and create your admin account. The first registered user gets administrator privileges.
Step 4. Verify the Setup
Confirm that n8n is using PostgreSQL by listing the tables:
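One way to list the tables from the host, assuming the service and variable names used in this guide:

```shell
docker compose exec postgres psql -U n8n_user -d n8n -c '\dt'
```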
You should see tables like workflow_entity, credentials_entity, and execution_entity. Create a test workflow with a Manual Trigger and a Set node, run it, and check that the execution appears in the history. Then add a test credential, restart n8n with docker compose restart n8n, and verify the credential is still accessible.
Migrating from SQLite to PostgreSQL
If you already have an n8n instance running on SQLite, use the export:entities CLI command (available since n8n v1.67). It exports all entity types and supports cross-database import.
- Stop n8n: docker compose stop n8n
- Export:
n8n export:entities --outputDir=/home/node/.n8n/backup
Add --includeExecutionHistoryDataTables=true if you want execution history (can be very large).
- Set up PostgreSQL using Steps 1 through 3 above. Use the same N8N_ENCRYPTION_KEY as your old instance. This is critical: a different key means all credentials become unreadable.
- Import:
n8n import:entities --inputDir=/home/node/.n8n/backup
- Start n8n and verify that workflows, credentials, and users are intact.
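Run through Docker, the steps above might look like the following sketch (assuming both the old and new instances are managed by Compose in their respective project directories; docker compose cp requires Compose v2.2 or newer):

```shell
# On the OLD instance (container still running): export all entities
docker compose exec n8n n8n export:entities --outputDir=/home/node/.n8n/backup
# Copy the export out of the container to the host, then stop the old instance
docker compose cp n8n:/home/node/.n8n/backup ./n8n-backup
docker compose stop n8n

# On the NEW PostgreSQL-backed instance (same N8N_ENCRYPTION_KEY):
docker compose cp ./n8n-backup n8n:/home/node/.n8n/backup
docker compose exec n8n n8n import:entities --inputDir=/home/node/.n8n/backup
docker compose restart n8n
```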
For n8n versions older than 1.67 that do not have the export:entities command, use n8n export:workflow --all and n8n export:credentials --all --decrypted. These commands only transfer workflows and credentials, not users, projects, or folders.
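For the older CLI, a sketch of the equivalent export and import (note that --decrypted writes credential secrets in plain text, so delete the files once the import succeeds):

```shell
# Old instance: export workflows and decrypted credentials
docker compose exec n8n mkdir -p /home/node/.n8n/backup
docker compose exec n8n n8n export:workflow --all --output=/home/node/.n8n/backup/workflows.json
docker compose exec n8n n8n export:credentials --all --decrypted --output=/home/node/.n8n/backup/credentials.json

# New instance: import both files
docker compose exec n8n n8n import:workflow --input=/home/node/.n8n/backup/workflows.json
docker compose exec n8n n8n import:credentials --input=/home/node/.n8n/backup/credentials.json
```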
Post-Setup Recommendations
Backups. Run pg_dump regularly to back up the database. Unlike SQLite, this works while n8n is running:
docker compose exec postgres pg_dump -U n8n_user -d n8n > backup_$(date +%F).sql
Schedule this command via cron (for example, daily at 3 AM). Keep several recent backups and test restoring from them periodically.
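As a sketch, a crontab entry for a daily 03:00 dump with seven days of retention (the /opt/n8n project path and /opt/backups target are assumptions; note that % must be escaped in crontab):

```
0 3 * * * cd /opt/n8n && docker compose exec -T postgres pg_dump -U n8n_user -d n8n > /opt/backups/n8n_$(date +\%F).sql && find /opt/backups -name 'n8n_*.sql' -mtime +7 -delete
```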
HTTPS. Place a reverse proxy (Nginx or Caddy) in front of n8n and configure a TLS certificate through Let's Encrypt. Do not expose port 5678 directly to the internet. Without HTTPS, login credentials and workflow data travel in plain text.
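For example, a minimal Caddyfile (the domain n8n.example.com is a placeholder; Caddy obtains and renews the Let's Encrypt certificate automatically):

```
n8n.example.com {
    reverse_proxy localhost:5678
}
```

With the proxy in place, change the port mapping in docker-compose.yml to 127.0.0.1:5678:5678 so n8n is reachable only through the proxy, and set N8N_PROTOCOL=https and WEBHOOK_URL=https://n8n.example.com/ so generated webhook URLs use the public address.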
Execution pruning. n8n saves the result of every workflow run. Since 2025, automatic pruning is enabled by default: 14 days of history, up to 10,000 records. You can adjust this with the environment variables EXECUTIONS_DATA_MAX_AGE (in hours) and EXECUTIONS_DATA_PRUNE_MAX_COUNT.
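For example, to keep one week of history capped at 5,000 records, add the following to .env (values are illustrative) and pass both variables through in the n8n service's environment section:

```
EXECUTIONS_DATA_MAX_AGE=168
EXECUTIONS_DATA_PRUNE_MAX_COUNT=5000
```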
Common Issues
n8n starts but uses SQLite instead of PostgreSQL. Check that DB_TYPE is set to "postgresdb" (not "postgres"). Verify that all DB_POSTGRESDB_* variables are present in the environment. If any are missing, n8n falls back to SQLite silently.
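One quick way to see which DB_* variables actually reached the container:

```shell
docker compose exec n8n printenv | grep '^DB_' | sort
```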
Connection refused on startup. Make sure DB_POSTGRESDB_HOST matches the Compose service name ("postgres"), not "localhost". The two containers communicate over the Docker network, not the loopback interface. Also confirm that the healthcheck on the PostgreSQL container is passing before n8n tries to connect.
Credentials are empty after migration. The N8N_ENCRYPTION_KEY on the new instance does not match the old one. n8n cannot decrypt credentials encrypted with a different key. Verify the key value and re-import.
"Column already exists" error during migration. This happens when you import table data into a database where n8n has already run migrations. The target database should be empty before importing. If you need to start over, drop and recreate the database, then run the import again.
Conclusion
Three files and one docker compose up command give you a production-ready n8n instance backed by PostgreSQL. The setup supports safe hot backups, concurrent workflow execution, and a clear path to queue mode scaling when you need it.
For a quick start, you can deploy n8n on Serverspace with a pre-configured environment and add PostgreSQL following the steps in this guide.