Activepieces stores files uploaded by users and generated during workflow execution. You can use local storage or S3-compatible object storage.

Storage Options

Local Storage

Default option. Files are stored on the container filesystem:
  • Simple setup
  • No external dependencies
  • Requires persistent volumes
  • Limited scalability
Use for: Development, single-server deployments

S3 Storage

Recommended for production. Files are stored in S3-compatible object storage:
  • Unlimited scalability
  • High availability
  • Geographic distribution
  • Managed backups
Use for: Production, multi-server deployments

Local Storage

Configuration

Local storage is the default. Files are stored in /usr/src/app/cache:
.env
# Use local storage (default)
AP_FILE_STORAGE_LOCATION=local

Docker Volume

Mount a volume to persist files:
docker-compose.yml
services:
  activepieces:
    image: ghcr.io/activepieces/activepieces:0.79.0
    volumes:
      - ./cache:/usr/src/app/cache
Or use a named volume:
docker-compose.yml
services:
  activepieces:
    volumes:
      - activepieces_cache:/usr/src/app/cache

volumes:
  activepieces_cache:

Kubernetes Persistent Volume

values.yaml
persistence:
  enabled: true
  size: 10Gi
  storageClass: "standard"
  mountPath: "/usr/src/app/cache"

Limitations

Local storage limitations:
  • Files lost if container is deleted (without volume)
  • Cannot share files across multiple instances
  • Limited by disk space
  • No built-in redundancy
For production with multiple replicas, use S3 storage.

S3 Storage

Supported Services

Activepieces works with any S3-compatible service:
Example: Amazon S3 (Amazon Simple Storage Service)
.env
AP_FILE_STORAGE_LOCATION=s3
AP_S3_BUCKET=activepieces-files
AP_S3_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AP_S3_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AP_S3_REGION=us-east-1

Configuration

AP_FILE_STORAGE_LOCATION
enum
required
Storage backend. Options: local, s3
AP_S3_BUCKET
string
required
S3 bucket name. Create the bucket before deploying Activepieces.
AP_S3_ACCESS_KEY_ID
string
required
AWS access key ID or equivalent
Not required if using AP_S3_USE_IRSA=true on EKS.
AP_S3_SECRET_ACCESS_KEY
string
required
AWS secret access key or equivalent
AP_S3_REGION
string
required
S3 region. Examples: us-east-1, eu-west-1, auto (for Cloudflare R2)
AP_S3_ENDPOINT
string
Custom endpoint for S3-compatible services. Leave empty for AWS S3.
AP_S3_USE_SIGNED_URLS
boolean
default:"false"
Generate pre-signed URLs for file downloads. Enable for private buckets:
AP_S3_USE_SIGNED_URLS=true
AP_S3_USE_IRSA
boolean
default:"false"
Use IAM Roles for Service Accounts (EKS)
AP_S3_USE_IRSA=true
When enabled, no access key or secret is required; authentication uses the pod's IAM role.
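For an S3-compatible service such as Cloudflare R2, point AP_S3_ENDPOINT at the service and use the auto region. A sketch with placeholder values (the account ID, bucket name, and keys below are illustrative, not defaults):

```
# Hypothetical Cloudflare R2 configuration; substitute your own values.
AP_FILE_STORAGE_LOCATION=s3
AP_S3_BUCKET=activepieces-files
AP_S3_ACCESS_KEY_ID=<R2_ACCESS_KEY_ID>
AP_S3_SECRET_ACCESS_KEY=<R2_SECRET_ACCESS_KEY>
AP_S3_REGION=auto
AP_S3_ENDPOINT=https://<ACCOUNT_ID>.r2.cloudflarestorage.com
```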

S3 Setup Guide

AWS S3

1. Create S3 bucket

aws s3 mb s3://activepieces-files --region us-east-1
Or via AWS Console:
  1. Go to S3 service
  2. Click “Create bucket”
  3. Enter bucket name: activepieces-files
  4. Select region: us-east-1
  5. Keep default settings
  6. Click “Create bucket”
2. Configure bucket policy

For public access (not recommended):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::activepieces-files/*"
    }
  ]
}
For private access (recommended), use signed URLs:
AP_S3_USE_SIGNED_URLS=true
3. Create IAM user

Create IAM user with S3 access:
IAM Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::activepieces-files",
        "arn:aws:s3:::activepieces-files/*"
      ]
    }
  ]
}
Generate access keys for this IAM user and add them to .env.
4. Configure Activepieces

.env
AP_FILE_STORAGE_LOCATION=s3
AP_S3_BUCKET=activepieces-files
AP_S3_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AP_S3_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AP_S3_REGION=us-east-1
AP_S3_USE_SIGNED_URLS=true

MinIO (Self-Hosted)

1. Deploy MinIO

docker-compose.yml
services:
  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    ports:
      - '9000:9000'
      - '9001:9001'
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - minio_data:/data

volumes:
  minio_data:
Start MinIO:
docker compose up -d minio
2. Create bucket

Access the MinIO Console at http://localhost:9001 and log in with:
  • Username: minioadmin
  • Password: minioadmin
Then create a bucket named activepieces.
3. Configure Activepieces

.env
AP_FILE_STORAGE_LOCATION=s3
AP_S3_BUCKET=activepieces
AP_S3_ACCESS_KEY_ID=minioadmin
AP_S3_SECRET_ACCESS_KEY=minioadmin
AP_S3_REGION=us-east-1
AP_S3_ENDPOINT=http://minio:9000

EKS with IRSA

Use IAM Roles for Service Accounts (no credentials needed):
1. Create IAM policy

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::activepieces-files",
        "arn:aws:s3:::activepieces-files/*"
      ]
    }
  ]
}
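Save the JSON above to a file (policy.json is an assumed name) and create the policy so its ARN can be attached in the next step:

```
aws iam create-policy \
  --policy-name ActivepiecesS3Policy \
  --policy-document file://policy.json
```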
2. Create IAM role

eksctl create iamserviceaccount \
  --name activepieces \
  --namespace default \
  --cluster my-cluster \
  --attach-policy-arn arn:aws:iam::ACCOUNT_ID:policy/ActivepiecesS3Policy \
  --approve
3. Configure Helm

values.yaml
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/activepieces-role

s3:
  enabled: true
  bucket: activepieces-files
  region: us-east-1
  useIrsa: true
  useSignedUrls: true

File Structure

Files are organized by platform and project:
/usr/src/app/cache/ (local) or s3://bucket-name/ (S3)
├── platform/
│   └── {platform_id}/
│       ├── FILE/
│       │   └── {file_id}
│       └── PACKAGE_ARCHIVE/
│           └── {archive_id}
└── project/
    └── {project_id}/
        ├── FILE/
        │   └── {file_id}
        ├── FLOW_RUN_LOG/
        │   └── {run_id}.log
        └── STEP_FILE/
            └── {step_file_id}
File types (from s3-helper.ts:13):
  • FILE: User-uploaded files
  • FLOW_RUN_LOG: Execution logs
  • STEP_FILE: Step output files
  • PACKAGE_ARCHIVE: Piece package archives
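The layout above can be expressed as simple key-building helpers. A sketch in Python (the helper names are hypothetical; the real implementation is the TypeScript s3-helper.ts and may differ in detail):

```python
# Hypothetical helpers mirroring the key layout above; the actual logic
# lives in Activepieces' s3-helper.ts.
def platform_file_key(platform_id: str, file_type: str, file_id: str) -> str:
    """Key for platform-scoped files (FILE, PACKAGE_ARCHIVE)."""
    return f"platform/{platform_id}/{file_type}/{file_id}"

def project_file_key(project_id: str, file_type: str, file_id: str) -> str:
    """Key for project-scoped files (FILE, FLOW_RUN_LOG, STEP_FILE)."""
    return f"project/{project_id}/{file_type}/{file_id}"

print(project_file_key("proj_123", "FLOW_RUN_LOG", "run_456.log"))
# project/proj_123/FLOW_RUN_LOG/run_456.log
```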

File Operations

Activepieces uses the AWS SDK for S3 operations (source: s3-helper.ts):

Upload

const s3Key = await s3Helper.uploadFile(key, buffer)
Uploads file to S3 using PutObjectCommand.

Download

const buffer = await s3Helper.getFile(s3Key)
Downloads file from S3 using GetObjectCommand.

Signed URLs

const url = await s3Helper.getS3SignedUrl(s3Key, fileName)
Generates a pre-signed URL valid for 7 days.
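The 7-day validity corresponds to a signing window of 604,800 seconds. A minimal sketch of the window check (the function name is illustrative, not part of Activepieces):

```python
from datetime import datetime, timedelta, timezone

# Activepieces signs download URLs with a 7-day validity window.
SIGNED_URL_TTL = timedelta(days=7)  # 604800 seconds

def url_still_valid(signed_at: datetime, now: datetime) -> bool:
    """True while the pre-signed URL's 7-day window has not elapsed."""
    return now - signed_at < SIGNED_URL_TTL

assert int(SIGNED_URL_TTL.total_seconds()) == 604800
```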

Delete

await s3Helper.deleteFiles([key1, key2, key3])
Batch-deletes up to 100 files per request (Cloudflare R2 limit).
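Callers with more than 100 keys therefore need client-side chunking. A minimal sketch (batch_delete is a stand-in for the SDK's DeleteObjects call):

```python
from typing import Callable

MAX_KEYS_PER_DELETE = 100  # Cloudflare R2 caps DeleteObjects at 100 keys

def delete_in_batches(keys: list[str],
                      batch_delete: Callable[[list[str]], None]) -> int:
    """Delete keys in chunks of at most 100; returns the request count."""
    requests = 0
    for start in range(0, len(keys), MAX_KEYS_PER_DELETE):
        batch_delete(keys[start:start + MAX_KEYS_PER_DELETE])
        requests += 1
    return requests

# Deleting 250 keys issues 3 requests (100 + 100 + 50).
```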

Monitoring

Storage Usage

Check disk usage:
# Docker container
docker exec activepieces du -sh /usr/src/app/cache

# Host system
du -sh ./cache
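The same check can be scripted, for example to feed a metrics exporter. A sketch that walks the cache directory (the default path is assumed; adjust if you mount elsewhere):

```python
import os

def cache_size_bytes(root: str = "/usr/src/app/cache") -> int:
    """Total bytes of all files under root, similar to `du -sb`."""
    total = 0
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):  # skip broken symlinks
                total += os.path.getsize(path)
    return total
```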

Cleanup

Configure lifecycle policies to expire old files automatically. Note that an S3 lifecycle rule cannot combine the deprecated top-level Prefix with a Filter, and prefix filters match from the bucket root with no wildcards, so a rule cannot target only the FLOW_RUN_LOG/ folders nested under per-project prefixes. The example below expires everything under project/ after 30 days (including user-uploaded files, so scope with care):
{
  "Rules": [
    {
      "Id": "DeleteOldProjectFiles",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "project/"
      },
      "Expiration": {
        "Days": 30
      }
    }
  ]
}
Apply policy:
aws s3api put-bucket-lifecycle-configuration \
  --bucket activepieces-files \
  --lifecycle-configuration file://lifecycle.json
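To confirm the rules were applied, read the configuration back:

```
aws s3api get-bucket-lifecycle-configuration \
  --bucket activepieces-files
```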

Migration

Local to S3

1. Set up the S3 bucket

Create and configure S3 bucket as described above.
2. Sync existing files

# Copy files from container to S3
docker cp activepieces:/usr/src/app/cache ./temp-cache
aws s3 sync ./temp-cache s3://activepieces-files/
rm -rf ./temp-cache
3. Update configuration

.env
AP_FILE_STORAGE_LOCATION=s3
AP_S3_BUCKET=activepieces-files
# ... S3 credentials ...
4. Restart Activepieces

docker compose restart activepieces

Troubleshooting

S3 connection errors

Test the S3 configuration:
# AWS CLI
aws s3 ls s3://activepieces-files --region us-east-1

# Using environment variables
AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy aws s3 ls s3://bucket
Check the logs:
docker compose logs activepieces | grep -i s3
Verify IAM permissions include:
  • s3:PutObject
  • s3:GetObject
  • s3:DeleteObject
  • s3:ListBucket
Check that the bucket policy allows your IAM user/role.

Files publicly accessible or downloads failing

Enable signed URLs:
AP_S3_USE_SIGNED_URLS=true
Verify the bucket is private (not public). Check the URL expiration (7 days by default).

Files lost after container restart (local storage)

Ensure the volume is mounted:
docker inspect activepieces | grep -A 10 Mounts
Recreate the container with a volume:
docker run -v ./cache:/usr/src/app/cache ...

Best Practices

Use S3 for Production

Always use S3-compatible storage for production deployments with:
  • Multiple replicas
  • High availability requirements
  • Large file volumes

Enable Versioning

Enable S3 bucket versioning to protect against accidental deletion:
aws s3api put-bucket-versioning \
  --bucket activepieces-files \
  --versioning-configuration Status=Enabled

Use Signed URLs

Keep buckets private and use pre-signed URLs:
AP_S3_USE_SIGNED_URLS=true

Configure Lifecycle

Automatically delete old logs and temporary files to reduce costs.

Next Steps

  • Environment Variables: complete S3 configuration reference
  • Database: configure PostgreSQL
  • Scaling: scale file storage
  • Backup: back up S3 files