Activepieces stores files uploaded by users and generated during workflow execution. You can use local storage or S3-compatible object storage.
Storage Options
Local Storage
Default option. Files are stored on the container filesystem:
- Simple setup
- No external dependencies
- Requires persistent volumes
- Limited scalability
Use for: Development, single-server deployments
S3 Storage
Recommended for production. Files are stored in S3-compatible object storage:
- Unlimited scalability
- High availability
- Geographic distribution
- Managed backups
Use for: Production, multi-server deployments
Local Storage
Configuration
Local storage is the default. Files are stored in /usr/src/app/cache:
# Use local storage (default)
AP_FILE_STORAGE_LOCATION=local
Docker Volume
Mount a volume to persist files:
services:
activepieces:
image: ghcr.io/activepieces/activepieces:0.79.0
volumes:
- ./cache:/usr/src/app/cache
Or use a named volume:
services:
activepieces:
volumes:
- activepieces_cache:/usr/src/app/cache
volumes:
activepieces_cache:
Kubernetes Persistent Volume
persistence:
enabled: true
size: 10Gi
storageClass: "standard"
mountPath: "/usr/src/app/cache"
Limitations
Local storage limitations:
- Files lost if container is deleted (without volume)
- Cannot share files across multiple instances
- Limited by disk space
- No built-in redundancy
For production with multiple replicas, use S3 storage.
S3 Storage
Supported Services
Activepieces works with any S3-compatible service:
- AWS S3
- MinIO
- Cloudflare R2
- DigitalOcean Spaces
AWS S3 (Amazon Simple Storage Service):
AP_FILE_STORAGE_LOCATION=s3
AP_S3_BUCKET=activepieces-files
AP_S3_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AP_S3_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AP_S3_REGION=us-east-1
MinIO (self-hosted S3-compatible storage):
AP_FILE_STORAGE_LOCATION=s3
AP_S3_BUCKET=activepieces
AP_S3_ACCESS_KEY_ID=minioadmin
AP_S3_SECRET_ACCESS_KEY=minioadmin
AP_S3_REGION=us-east-1
AP_S3_ENDPOINT=http://minio:9000
Cloudflare R2 (Cloudflare's S3-compatible storage):
AP_FILE_STORAGE_LOCATION=s3
AP_S3_BUCKET=activepieces-bucket
AP_S3_ACCESS_KEY_ID=your_access_key_id
AP_S3_SECRET_ACCESS_KEY=your_secret_access_key
AP_S3_REGION=auto
AP_S3_ENDPOINT=https://<account_id>.r2.cloudflarestorage.com
DigitalOcean Spaces (DigitalOcean's object storage):
AP_FILE_STORAGE_LOCATION=s3
AP_S3_BUCKET=activepieces-space
AP_S3_ACCESS_KEY_ID=your_spaces_key
AP_S3_SECRET_ACCESS_KEY=your_spaces_secret
AP_S3_REGION=nyc3
AP_S3_ENDPOINT=https://nyc3.digitaloceanspaces.com
Configuration
- AP_FILE_STORAGE_LOCATION: Storage backend. Options: local, s3
- AP_S3_BUCKET: S3 bucket name. Create the bucket before deploying Activepieces.
- AP_S3_ACCESS_KEY_ID: AWS access key ID or equivalent. Not required if using AP_S3_USE_IRSA=true on EKS.
- AP_S3_SECRET_ACCESS_KEY: AWS secret access key or equivalent.
- AP_S3_REGION: S3 region. Examples: us-east-1, eu-west-1, auto (for Cloudflare R2).
- AP_S3_ENDPOINT: Custom endpoint for S3-compatible services. Leave empty for AWS S3.
- AP_S3_USE_SIGNED_URLS: Generate pre-signed URLs for file downloads. Enable for private buckets: AP_S3_USE_SIGNED_URLS=true
- AP_S3_USE_IRSA: Use IAM Roles for Service Accounts (EKS). When enabled, no access key/secret is required; authentication uses the pod's IAM role.
S3 Setup Guide
AWS S3
Create S3 bucket
aws s3 mb s3://activepieces-files --region us-east-1
Or via AWS Console:
- Go to S3 service
- Click “Create bucket”
- Enter bucket name: activepieces-files
- Select region: us-east-1
- Keep default settings
- Click “Create bucket”
Configure bucket policy
For public access (not recommended):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::activepieces-files/*"
}
]
}
For private access (recommended), use signed URLs:
AP_S3_USE_SIGNED_URLS=true
Create IAM user
Create an IAM user with S3 access:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::activepieces-files",
"arn:aws:s3:::activepieces-files/*"
]
}
]
}
Generate access keys and add them to .env.
Configure Activepieces
AP_FILE_STORAGE_LOCATION=s3
AP_S3_BUCKET=activepieces-files
AP_S3_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AP_S3_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AP_S3_REGION=us-east-1
AP_S3_USE_SIGNED_URLS=true
MinIO (Self-Hosted)
Deploy MinIO
services:
minio:
image: minio/minio:latest
command: server /data --console-address ":9001"
ports:
- '9000:9000'
- '9001:9001'
environment:
MINIO_ROOT_USER: minioadmin
MINIO_ROOT_PASSWORD: minioadmin
volumes:
- minio_data:/data
volumes:
minio_data:
Start MinIO:
docker compose up -d minio
Create bucket
Access the MinIO Console at http://localhost:9001 and log in with:
- Username: minioadmin
- Password: minioadmin
Create a bucket named activepieces.
Configure Activepieces
AP_FILE_STORAGE_LOCATION=s3
AP_S3_BUCKET=activepieces
AP_S3_ACCESS_KEY_ID=minioadmin
AP_S3_SECRET_ACCESS_KEY=minioadmin
AP_S3_REGION=us-east-1
AP_S3_ENDPOINT=http://minio:9000
EKS with IRSA
Use IAM Roles for Service Accounts (no credentials needed):
Create IAM policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::activepieces-files",
"arn:aws:s3:::activepieces-files/*"
]
}
]
}
Create IAM role
eksctl create iamserviceaccount \
--name activepieces \
--namespace default \
--cluster my-cluster \
--attach-policy-arn arn:aws:iam::ACCOUNT_ID:policy/ActivepiecesS3Policy \
--approve
Configure Helm
serviceAccount:
create: true
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/activepieces-role
s3:
enabled: true
bucket: activepieces-files
region: us-east-1
useIrsa: true
useSignedUrls: true
File Structure
Files are organized by platform and project:
/usr/src/app/cache/ (local)
or
s3://bucket-name/ (S3)
├── platform/
│   └── {platform_id}/
│       ├── FILE/
│       │   └── {file_id}
│       └── PACKAGE_ARCHIVE/
│           └── {archive_id}
└── project/
    └── {project_id}/
        ├── FILE/
        │   └── {file_id}
        ├── FLOW_RUN_LOG/
        │   └── {run_id}.log
        └── STEP_FILE/
            └── {step_file_id}
File types (from s3-helper.ts:13):
- FILE: User-uploaded files
- FLOW_RUN_LOG: Execution logs
- STEP_FILE: Step output files
- PACKAGE_ARCHIVE: Piece package archives
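Based on the layout above, key construction can be sketched as follows. These are hypothetical Python helpers for illustration only; the actual key building happens inside the Activepieces backend:

```python
def platform_file_key(platform_id: str, file_type: str, file_id: str) -> str:
    """Build a key matching the platform/{platform_id}/{TYPE}/{file_id} layout above."""
    return f"platform/{platform_id}/{file_type}/{file_id}"

def project_file_key(project_id: str, file_type: str, file_id: str) -> str:
    """Build a key matching the project/{project_id}/{TYPE}/{file_id} layout above."""
    return f"project/{project_id}/{file_type}/{file_id}"

# Example: the key for a flow run log in project "proj_123"
print(project_file_key("proj_123", "FLOW_RUN_LOG", "run_456.log"))
# → project/proj_123/FLOW_RUN_LOG/run_456.log
```

The same key works for both backends: it is a filesystem path under the cache directory locally, and an object key in the bucket on S3.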
File Operations
Activepieces uses the AWS SDK for S3 operations (source: s3-helper.ts):
Upload
const s3Key = await s3Helper.uploadFile(key, buffer)
Uploads file to S3 using PutObjectCommand.
Download
const buffer = await s3Helper.getFile(s3Key)
Downloads file from S3 using GetObjectCommand.
Signed URLs
const url = await s3Helper.getS3SignedUrl(s3Key, fileName)
Generates pre-signed URL valid for 7 days.
Delete
await s3Helper.deleteFiles([key1, key2, key3])
Batch-deletes up to 100 files per request (Cloudflare R2 limit).
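Because of that per-request cap, larger deletions have to be split into batches. A minimal sketch of the batching logic, in Python for illustration (the hypothetical `chunk_keys` helper is not part of the Activepieces codebase):

```python
from typing import List

R2_DELETE_LIMIT = 100  # Cloudflare R2 caps batch deletes at 100 keys per request

def chunk_keys(keys: List[str], limit: int = R2_DELETE_LIMIT) -> List[List[str]]:
    """Split a key list into batches no larger than the per-request limit."""
    return [keys[i:i + limit] for i in range(0, len(keys), limit)]

# 250 keys become three requests: two full batches and one remainder
batches = chunk_keys([f"file-{i}" for i in range(250)])
print([len(b) for b in batches])  # → [100, 100, 50]
```

Each resulting batch would then be issued as its own delete request.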
Monitoring
Storage Usage
Check disk usage:
# Docker container
docker exec activepieces du -sh /usr/src/app/cache
# Host system
du -sh ./cache
AWS CLI:
aws s3 ls s3://activepieces-files --recursive --human-readable --summarize
Or check AWS Console → S3 → Metrics
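For scripted monitoring of local storage, the same measurement as `du -sb` can be done with the Python standard library. A sketch (hypothetical helper; the cache path mirrors the mount used above):

```python
import os

def dir_size_bytes(path: str) -> int:
    """Sum the sizes of all regular files under path, like `du -sb`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):  # skip broken symlinks
                total += os.path.getsize(fp)
    return total

# Example: report the local cache size in MiB
# print(f"{dir_size_bytes('/usr/src/app/cache') / 2**20:.1f} MiB")
```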
Cleanup
Configure lifecycle policies to automatically delete old files:
S3 Lifecycle
MinIO Lifecycle
{
"Rules": [
{
"Id": "DeleteOldLogs",
"Status": "Enabled",
"Filter": {
"Prefix": "FLOW_RUN_LOG/"
},
"Expiration": {
"Days": 30
}
}
]
}
Apply the policy:
aws s3api put-bucket-lifecycle-configuration \
--bucket activepieces-files \
--lifecycle-configuration file://lifecycle.json
Or with MinIO:
mc ilm add --expiry-days 30 myminio/activepieces/FLOW_RUN_LOG
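The effect of the 30-day rule can be illustrated with a small sketch. This is not how S3 evaluates lifecycle rules internally (that happens server-side); `is_expired_log` is a hypothetical helper showing which objects the policy targets:

```python
from datetime import datetime, timedelta, timezone

EXPIRY = timedelta(days=30)  # matches the 30-day lifecycle rule above

def is_expired_log(key: str, last_modified: datetime, now: datetime) -> bool:
    """True for FLOW_RUN_LOG objects older than the expiry window."""
    return "/FLOW_RUN_LOG/" in key and (now - last_modified) > EXPIRY

now = datetime(2024, 3, 1, tzinfo=timezone.utc)
old = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(is_expired_log("project/p1/FLOW_RUN_LOG/r1.log", old, now))  # → True
```

Note that other file types (FILE, STEP_FILE) are untouched; only run logs age out.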
Migration
Local to S3
Setup S3 bucket
Create and configure S3 bucket as described above.
Sync existing files
# Copy files from container to S3
docker cp activepieces:/usr/src/app/cache ./temp-cache
aws s3 sync ./temp-cache s3://activepieces-files/
rm -rf ./temp-cache
Update configuration
AP_FILE_STORAGE_LOCATION=s3
AP_S3_BUCKET=activepieces-files
# ... S3 credentials ...
Restart Activepieces
docker compose restart activepieces
Troubleshooting
Test S3 configuration:
# AWS CLI
aws s3 ls s3://activepieces-files --region us-east-1
# Using environment variables
AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy aws s3 ls s3://bucket
Check logs:
docker compose logs activepieces | grep -i s3
Verify IAM permissions include:
- s3:PutObject
- s3:GetObject
- s3:DeleteObject
- s3:ListBucket
Check bucket policy allows your IAM user/role.
Enable signed URLs:
AP_S3_USE_SIGNED_URLS=true
Verify the bucket is private (not public) and check URL expiration (7 days by default).
Files not persisting (local)
Ensure the volume is mounted:
docker inspect activepieces | grep -A 10 Mounts
Recreate with the volume:
docker run -v ./cache:/usr/src/app/cache ...
Best Practices
Use S3 for Production
Always use S3-compatible storage for production deployments with:
- Multiple replicas
- High availability requirements
- Large file volumes
Enable Versioning
Enable S3 bucket versioning to protect against accidental deletion:
aws s3api put-bucket-versioning \
--bucket activepieces-files \
--versioning-configuration Status=Enabled
Use Signed URLs
Keep buckets private and use pre-signed URLs:
AP_S3_USE_SIGNED_URLS=true
Configure Lifecycle
Automatically delete old logs and temporary files to reduce costs.
Next Steps
Environment Variables
Complete S3 configuration reference
Database
Configure PostgreSQL
Scaling
Scale file storage