Activepieces uses a relational database to store workflows, executions, users, and all application data. PostgreSQL is required for production deployments.
Database Support
PostgreSQL (Production): Fully supported relational database with:
ACID compliance
Replication support
Advanced querying
Horizontal scaling
Minimum: PostgreSQL 14. Recommended: PostgreSQL 15 or 16.
SQLite (Development Only): Embedded database for local development:
No server setup needed
File-based storage
Limited concurrency
Not suitable for production. Use PostgreSQL instead.
PostgreSQL Setup
Using Docker Compose
The easiest way is to use the provided Docker Compose configuration:
services:
  postgres:
    image: 'postgres:14.4'
    container_name: postgres
    restart: unless-stopped
    environment:
      - POSTGRES_DB=activepieces
      - POSTGRES_PASSWORD=your_secure_password
      - POSTGRES_USER=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - '5432:5432'
volumes:
  postgres_data:
Start PostgreSQL:
docker compose up -d postgres
Manual Installation
Ubuntu/Debian
macOS
Docker
# Add PostgreSQL repository
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
# Install PostgreSQL
sudo apt update
sudo apt install -y postgresql-15 postgresql-contrib-15
# Start PostgreSQL
sudo systemctl start postgresql
sudo systemctl enable postgresql
# Using Homebrew
brew install postgresql@15
# Start PostgreSQL
brew services start postgresql@15
docker run -d \
--name activepieces-postgres \
-e POSTGRES_DB=activepieces \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=your_password \
-v postgres_data:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:15
Create Database
Create database
CREATE DATABASE activepieces;
Create user (optional)
CREATE USER activepieces_user WITH ENCRYPTED PASSWORD 'strong_password';
GRANT ALL PRIVILEGES ON DATABASE activepieces TO activepieces_user;
-- On PostgreSQL 15+, also grant rights on the public schema
-- (run while connected to the activepieces database):
GRANT ALL ON SCHEMA public TO activepieces_user;
Verify connection
psql -h localhost -U postgres -d activepieces -c "SELECT version();"
Database Configuration
Configure Activepieces to connect to PostgreSQL:
Basic Configuration
# Database connection
AP_POSTGRES_DATABASE=activepieces
AP_POSTGRES_HOST=localhost
AP_POSTGRES_PORT=5432
AP_POSTGRES_USERNAME=postgres
AP_POSTGRES_PASSWORD=your_secure_password
Connection URL
Alternatively, use a connection URL:
AP_POSTGRES_URL=postgresql://postgres:password@localhost:5432/activepieces
The connection URL format is: postgresql://[user[:password]@][host][:port][/database][?param1=value1&...]
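As an illustration of how the individual settings map onto a single URL, here is a small sketch; all values are placeholders, and a password containing special characters would need percent-encoding:

```shell
#!/bin/sh
# Assemble a connection URL from the individual AP_POSTGRES_* settings.
# All values below are placeholders, not real credentials.
AP_POSTGRES_USERNAME=postgres
AP_POSTGRES_PASSWORD=your_secure_password
AP_POSTGRES_HOST=localhost
AP_POSTGRES_PORT=5432
AP_POSTGRES_DATABASE=activepieces

AP_POSTGRES_URL="postgresql://${AP_POSTGRES_USERNAME}:${AP_POSTGRES_PASSWORD}@${AP_POSTGRES_HOST}:${AP_POSTGRES_PORT}/${AP_POSTGRES_DATABASE}"
echo "$AP_POSTGRES_URL"
# prints: postgresql://postgres:your_secure_password@localhost:5432/activepieces
```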
SSL Configuration
Enable SSL for secure connections:
AP_POSTGRES_USE_SSL=true
AP_POSTGRES_SSL_CA=/path/to/ca-certificate.crt
With connection URL:
AP_POSTGRES_URL=postgresql://user:pass@host:5432/db?sslmode=require
SSL modes:
disable: No SSL
require: SSL required
verify-ca: Verify CA certificate
verify-full: Verify CA and hostname
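For verify-ca and verify-full the client must also trust the server's CA. In a connection URL the CA file is passed with the standard libpq sslrootcert parameter (the path below is a placeholder):

```
AP_POSTGRES_URL=postgresql://user:pass@host:5432/db?sslmode=verify-full&sslrootcert=/path/to/ca-certificate.crt
```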
Connection Pooling
Configure connection pool for better performance:
# Maximum connections in pool
AP_POSTGRES_POOL_SIZE=20
# Idle connection timeout (milliseconds)
AP_POSTGRES_IDLE_TIMEOUT_MS=30000
Pool size guidelines:
Development: 5-10
Production (single instance): 10-20
Production (multiple instances): divide the database's max_connections (minus headroom for admin and monitoring sessions) by the number of instances
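A quick way to sanity-check the per-instance pool size; the numbers here are illustrative, not recommendations:

```shell
#!/bin/sh
# Illustrative sizing: share usable connections across instances.
MAX_CONNECTIONS=100   # PostgreSQL max_connections
RESERVED=10           # headroom for admin/monitoring sessions
INSTANCES=4           # Activepieces instances sharing the database

POOL_SIZE=$(( (MAX_CONNECTIONS - RESERVED) / INSTANCES ))
echo "AP_POSTGRES_POOL_SIZE=$POOL_SIZE"
# prints: AP_POSTGRES_POOL_SIZE=22
```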
Database Migrations
Activepieces uses TypeORM for database migrations. Migrations run automatically on startup.
Migration Process
When Activepieces starts, it:
1. Connects to the database
2. Checks for pending migrations
3. Applies migrations in order
4. Logs migration results
Migrations are located in:
Community Edition: packages/server/api/src/app/database/migration/
Enterprise Edition: packages/server/api/src/app/ee/database/migrations/
Manual Migration
To run migrations manually:
# Install dependencies
npm install
# Run migrations
npm run migration:run
# Revert last migration
npm run migration:revert
Database Entities
Activepieces creates the following tables (source: database-connection.ts:60):
flow: Workflow definitions
flow_version: Workflow versions
flow_run: Execution logs
project: Projects (workspaces)
user: User accounts
file: File storage metadata
trigger_event: Trigger events
app_connection: OAuth and API connections
piece_metadata: Integration metadata
tag: Tags for organizing pieces
piece_tag: Many-to-many piece-tag relationships
app_event_routing: Event routing rules
trigger_source: Webhook trigger sources
store_entry: Key-value storage
table: Database tables
field: Table columns
record: Table rows
cell: Table cell values
project_member: Team members
project_role: Custom roles
api_key: API keys
audit_event: Audit logs
custom_domain: Custom domains
oauth_app: Custom OAuth apps
signing_key: JWT signing keys
Database Maintenance
Backups
Create a full database backup:
# Backup to file
pg_dump -h localhost -U postgres activepieces > backup.sql
# Backup with timestamp
pg_dump -h localhost -U postgres activepieces > backup-$(date +%Y%m%d-%H%M%S).sql
# Compressed backup
pg_dump -h localhost -U postgres activepieces | gzip > backup.sql.gz
Backup from a Docker container:
# Using docker exec
docker exec postgres pg_dump -U postgres activepieces > backup.sql
# Using docker compose
docker compose exec postgres pg_dump -U postgres activepieces > backup.sql
Create a backup script:
#!/bin/bash
BACKUP_DIR="/var/backups/activepieces"
DATE=$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"
pg_dump -h localhost -U postgres activepieces | gzip > "$BACKUP_DIR/backup-$DATE.sql.gz"
# Keep only the last 30 days
find "$BACKUP_DIR" -name "backup-*.sql.gz" -mtime +30 -delete
Schedule with cron:
# Daily backup at 2 AM
0 2 * * * /path/to/backup.sh
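Before trusting the retention rule with real backups, you can rehearse the find expression against a scratch directory; this sketch assumes GNU touch and find (standard on Linux):

```shell
#!/bin/sh
# Rehearse the 30-day retention rule on throwaway files.
DEMO_DIR=$(mktemp -d)
touch "$DEMO_DIR/backup-new.sql.gz"
touch -d "40 days ago" "$DEMO_DIR/backup-old.sql.gz"   # GNU touch

# Same expression as the backup script, with -print before committing to -delete
find "$DEMO_DIR" -name "backup-*.sql.gz" -mtime +30 -print
# should list only backup-old.sql.gz

find "$DEMO_DIR" -name "backup-*.sql.gz" -mtime +30 -delete
ls "$DEMO_DIR"
# backup-new.sql.gz remains
```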
Restore
1. Stop Activepieces:
docker compose stop activepieces
2. Drop the database:
psql -h localhost -U postgres -c "DROP DATABASE activepieces;"
3. Create a new database:
psql -h localhost -U postgres -c "CREATE DATABASE activepieces;"
4. Restore the backup:
# From SQL file
psql -h localhost -U postgres activepieces < backup.sql
# From compressed file
gunzip -c backup.sql.gz | psql -h localhost -U postgres activepieces
5. Restart Activepieces:
docker compose start activepieces
Vacuum and Analyze
Regularly optimize the database:
# Vacuum (reclaim space)
psql -h localhost -U postgres -d activepieces -c "VACUUM;"
# Analyze (update statistics)
psql -h localhost -U postgres -d activepieces -c "ANALYZE;"
# Full vacuum (locks tables)
psql -h localhost -U postgres -d activepieces -c "VACUUM FULL;"
Automate with cron:
# Weekly vacuum on Sunday at 3 AM
0 3 * * 0 psql -h localhost -U postgres -d activepieces -c "VACUUM ANALYZE;"
PostgreSQL Configuration
Edit postgresql.conf for better performance:
# Memory settings
shared_buffers = 256MB # 25% of RAM
effective_cache_size = 1GB # 50% of RAM
work_mem = 16MB # Per operation
maintenance_work_mem = 128MB # For maintenance
# Connection settings
max_connections = 100
# Write-ahead log
wal_buffers = 16MB
checkpoint_completion_target = 0.9
# Query planner
random_page_cost = 1.1 # For SSD storage
effective_io_concurrency = 200 # For SSD storage
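The percentages in the comments above can be turned into concrete values for a given machine; a rough sketch, assuming 8 GB of RAM (the ratios are the ones noted above, not universal rules):

```shell
#!/bin/sh
# Derive postgresql.conf memory settings from total RAM (illustrative ratios).
RAM_MB=8192   # assumed total RAM of the database host

echo "shared_buffers = $(( RAM_MB / 4 ))MB"        # 25% of RAM
echo "effective_cache_size = $(( RAM_MB / 2 ))MB"  # 50% of RAM
# prints:
# shared_buffers = 2048MB
# effective_cache_size = 4096MB
```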
Restart PostgreSQL after changes:
sudo systemctl restart postgresql
Indexes
Activepieces creates indexes automatically, but you can verify:
-- List all indexes
SELECT tablename, indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'public'
ORDER BY tablename, indexname;
-- Check index usage
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;
Monitoring
Monitor database performance:
-- Active connections
SELECT count(*) FROM pg_stat_activity;
-- Long-running queries
SELECT pid, now() - pg_stat_activity.query_start AS duration, query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY duration DESC;
-- Database size
SELECT pg_size_pretty(pg_database_size('activepieces'));
-- Table sizes
SELECT
tablename,
pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename)) AS size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname || '.' || tablename) DESC;
High Availability
PostgreSQL Replication
Set up primary-replica replication:
Configure the primary
Edit postgresql.conf on the primary:
wal_level = replica
max_wal_senders = 3
wal_keep_size = 1GB
Edit pg_hba.conf:
host replication replicator 10.0.0.0/8 md5
Create replication user
CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'password';
Set up the replica
pg_basebackup -h master-host -U replicator -D /var/lib/postgresql/data -P
Create the standby.signal file:
touch /var/lib/postgresql/data/standby.signal
Alternatively, pass -R to pg_basebackup to have it write standby.signal and the primary connection settings automatically.
Using Managed Services
Cloud PostgreSQL services with built-in HA:
AWS RDS: Multi-AZ deployments with automatic failover
AP_POSTGRES_HOST=mydb.abc123.us-east-1.rds.amazonaws.com
AP_POSTGRES_USE_SSL=true
Google Cloud SQL: high availability with regional replication
AP_POSTGRES_HOST=10.0.0.3
AP_POSTGRES_USE_SSL=true
Azure Database for PostgreSQL: zone-redundant HA with read replicas
AP_POSTGRES_HOST=myserver.postgres.database.azure.com
AP_POSTGRES_USE_SSL=true
SQLite (Development)
For local development only:
SQLite limitations:
Single concurrent writer
No network access
Limited scalability
File corruption risk
Never use in production!
SQLite database location: activepieces.db in the working directory.
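The engine is selected with an environment variable; in the Activepieces docs this is AP_DB_TYPE, but verify the exact name and accepted values against your version:

```
# Select the database engine (verify values against your Activepieces version)
AP_DB_TYPE=SQLITE3   # or POSTGRES for production
```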
Troubleshooting
Check PostgreSQL is running: sudo systemctl status postgresql
Verify port is open: sudo netstat -plnt | grep 5432
Test connection: psql -h localhost -U postgres -c "SELECT 1;"
Check pg_hba.conf: sudo cat /etc/postgresql/15/main/pg_hba.conf
Allow connections (0.0.0.0/0 accepts clients from anywhere; restrict the address range in production): # IPv4 connections:
host all all 0.0.0.0/0 md5
Reload PostgreSQL: sudo systemctl reload postgresql
Check active connections: SELECT count(*) FROM pg_stat_activity;
Increase max_connections in postgresql.conf, or lower AP_POSTGRES_POOL_SIZE so that pool size multiplied by instance count stays under the limit.
Check database size: SELECT pg_size_pretty(pg_database_size('activepieces'));
Vacuum to reclaim space, and enable execution data retention so old run logs are purged: AP_EXECUTION_DATA_RETENTION_DAYS=30
Next Steps
Storage Configure file storage with S3
Scaling Scale your database
Backup Strategy Implement backup automation
Monitoring Setup database monitoring