File Storage Configuration (Optional)

By default, all media files (uploads, attachments, etc.) are stored locally on your server. For improved scalability, security, and reliability, you may optionally configure external file storage using AWS S3, Google Cloud Storage (GCS), or any S3-compatible storage provider (Cloudflare R2, Wasabi, Backblaze B2, MinIO, etc.).

Note: Configure only one storage provider (AWS S3, GCS, or S3-compatible); do not set variables for more than one at the same time.

Why Use External File Storage?

  • Offload media files from your application server
  • Improve performance and scalability
  • Enable secure, presigned URLs for file access
  • Recommended for production deployments

Configuration Steps

Option 1: AWS S3 Configuration

  1. Create an S3 bucket in your AWS account.
  2. Create an IAM user with read/write access to the bucket and obtain its Access Key ID and Secret Access Key.
  3. Update your web-server.env file with the following (uncomment and fill in your details):
# AWS S3 Configuration
AWS_BUCKET_NAME='your-bucket-name'
AWS_ACCESS_KEY_ID='your_aws_access_key'
AWS_SECRET_ACCESS_KEY='your_aws_secret_key'
AWS_REGION='us-east-1'
AWS_PRESIGNED_EXPIRY='43200'
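Before starting the server, it can help to sanity-check these values. The snippet below is a minimal, illustrative sketch (the `check_aws_s3_config` helper is hypothetical, not part of WhautoChat) that verifies the required variables are set and that the presigned-URL expiry is a positive number of seconds — 43200 seconds is 12 hours:

```python
# Hypothetical validation helper for the AWS S3 variables above.
REQUIRED_AWS_VARS = [
    "AWS_BUCKET_NAME",
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "AWS_REGION",
]

def check_aws_s3_config(env: dict) -> list:
    """Return a list of problems with the AWS S3 configuration (empty = OK)."""
    problems = [f"missing {name}" for name in REQUIRED_AWS_VARS if not env.get(name)]
    expiry = env.get("AWS_PRESIGNED_EXPIRY", "43200")  # default: 12 hours
    if not expiry.isdigit() or int(expiry) <= 0:
        problems.append("AWS_PRESIGNED_EXPIRY must be a positive integer (seconds)")
    return problems

# Placeholder values, mirroring the .env block above.
example = {
    "AWS_BUCKET_NAME": "your-bucket-name",
    "AWS_ACCESS_KEY_ID": "your_aws_access_key",
    "AWS_SECRET_ACCESS_KEY": "your_aws_secret_key",
    "AWS_REGION": "us-east-1",
    "AWS_PRESIGNED_EXPIRY": "43200",
}
print(check_aws_s3_config(example))  # → []
```

Running a check like this at startup surfaces a missing or malformed variable immediately, rather than at the first file upload.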

Option 2: Google Cloud Storage (GCS) Configuration

  1. Create a GCS bucket in your Google Cloud project.
  2. Create a service account with the Storage Object Admin role and generate a JSON key for it.
  3. Update your web-server.env file with the following (uncomment and fill in your details):
# Google Cloud Storage Configuration
GCS_BUCKET='your-bucket-name'
GCP_PROJECT_ID='your-project-id'
GCP_PRIVATE_KEY_ID='your-private-key-id'
GCP_PRIVATE_KEY='your-private-key'
GCP_CLIENT_EMAIL='your-service-account-email'
GCP_CLIENT_ID='your-client-id'
GCS_BUCKET_PREFIX='optional-prefix'
GCS_PRESIGNED_EXPIRY_SECONDS='86400'
GCP_AUTH_URI='your_gcp_auth_uri'
GCP_TOKEN_URI='your_gcp_token_uri'
GCP_AUTH_PROVIDER_X509_CERT_URL='your_gcp_auth_provider_x509_cert_url'
GCP_CLIENT_X509_CERT_URL='your_gcp_client_x509_cert_url'
GCP_UNIVERSE_DOMAIN='your_gcp_universe_domain'
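The `GCP_*` variables correspond one-to-one to fields in the standard Google service-account key JSON file. As an illustration (the `service_account_info` helper is hypothetical, not WhautoChat's actual code), this is roughly how those variables reassemble into a credentials dict — note that `.env` files typically store the private key with literal `\n` sequences, which must be converted back to real newlines:

```python
import json

def service_account_info(env: dict) -> dict:
    """Map the GCP_* variables onto standard service-account key-file fields."""
    return {
        "type": "service_account",
        "project_id": env["GCP_PROJECT_ID"],
        "private_key_id": env["GCP_PRIVATE_KEY_ID"],
        # .env files store the key with literal "\n"; restore real newlines.
        "private_key": env["GCP_PRIVATE_KEY"].replace("\\n", "\n"),
        "client_email": env["GCP_CLIENT_EMAIL"],
        "client_id": env["GCP_CLIENT_ID"],
        "auth_uri": env["GCP_AUTH_URI"],
        "token_uri": env["GCP_TOKEN_URI"],
        "auth_provider_x509_cert_url": env["GCP_AUTH_PROVIDER_X509_CERT_URL"],
        "client_x509_cert_url": env["GCP_CLIENT_X509_CERT_URL"],
        "universe_domain": env["GCP_UNIVERSE_DOMAIN"],
    }

# Placeholder values, mirroring the .env block above.
example = {
    "GCP_PROJECT_ID": "your-project-id",
    "GCP_PRIVATE_KEY_ID": "your-private-key-id",
    "GCP_PRIVATE_KEY": "-----BEGIN PRIVATE KEY-----\\nABC\\n-----END PRIVATE KEY-----",
    "GCP_CLIENT_EMAIL": "your-service-account-email",
    "GCP_CLIENT_ID": "your-client-id",
    "GCP_AUTH_URI": "https://accounts.google.com/o/oauth2/auth",
    "GCP_TOKEN_URI": "https://oauth2.googleapis.com/token",
    "GCP_AUTH_PROVIDER_X509_CERT_URL": "https://www.googleapis.com/oauth2/v1/certs",
    "GCP_CLIENT_X509_CERT_URL": "your_gcp_client_x509_cert_url",
    "GCP_UNIVERSE_DOMAIN": "googleapis.com",
}
print(json.dumps(service_account_info(example), indent=2))
```

In practice, copying each field of the downloaded key JSON into the matching variable (and escaping newlines in the private key) is all the mapping requires.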

Option 3: S3-Compatible Storage Configuration (Cloudflare R2, Wasabi, Backblaze B2, MinIO, etc.)

  1. Create a bucket with your S3-compatible provider.
  2. Generate access credentials (Access Key ID and Secret Access Key) with appropriate permissions.
  3. Update your web-server.env file with the following (uncomment and fill in your details):
# S3-Compatible Storage Configuration (Cloudflare R2, Wasabi, Backblaze B2, MinIO, etc.)
# Uncomment and fill these if using an S3-compatible provider
#
# S3-compatible endpoint URL (e.g., https://<account-id>.r2.cloudflarestorage.com, https://s3.wasabisys.com, https://s3.us-west-001.backblazeb2.com)
S3_COMPATIBLE_ENDPOINT='https://<account-id>.r2.cloudflarestorage.com'
# Region (e.g., auto for R2, us-east-1 for Wasabi, us-west-001 for Backblaze)
S3_COMPATIBLE_REGION='auto'
# Access key ID
S3_COMPATIBLE_ACCESS_KEY_ID='your_access_key_id'
# Secret access key
S3_COMPATIBLE_SECRET_ACCESS_KEY='your_secret_access_key'
# Bucket name
S3_COMPATIBLE_BUCKET='your-bucket-name'
# Optional prefix for files (e.g., uploads/)
S3_COMPATIBLE_BUCKET_PREFIX='optional-prefix'
# Custom base URL for files (e.g., https://files.yourdomain.com for R2 custom domain)
S3_COMPATIBLE_BASE_URL=''
# URL expiry in seconds (e.g., 3600)
S3_COMPATIBLE_PRESIGNED_EXPIRY='3600'
# Path-style URLs (default: true, required for most S3-compatible providers)
S3_COMPATIBLE_FORCE_PATH_STYLE='true'
# Provider name for logging (e.g., r2, wasabi, backblaze, minio)
S3_COMPATIBLE_PROVIDER='r2'
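To illustrate what `S3_COMPATIBLE_FORCE_PATH_STYLE` and `S3_COMPATIBLE_BASE_URL` control, here is a hypothetical sketch of how an object URL could be assembled under each addressing style (the `object_url` helper is illustrative, not WhautoChat's actual code):

```python
def object_url(endpoint: str, bucket: str, key: str,
               base_url: str = "", force_path_style: bool = True) -> str:
    """Build an object URL, illustrating the two S3 addressing styles.

    Path-style:           https://endpoint/bucket/key  (most S3-compatible providers)
    Virtual-hosted style: https://bucket.endpoint/key
    A custom base URL (e.g. an R2 custom domain) overrides both.
    """
    if base_url:
        return f"{base_url.rstrip('/')}/{key}"
    scheme, host = endpoint.split("://", 1)
    if force_path_style:
        return f"{scheme}://{host.rstrip('/')}/{bucket}/{key}"
    return f"{scheme}://{bucket}.{host.rstrip('/')}/{key}"

print(object_url("https://s3.wasabisys.com", "my-bucket", "uploads/a.png"))
# → https://s3.wasabisys.com/my-bucket/uploads/a.png
```

This is why `S3_COMPATIBLE_FORCE_PATH_STYLE='true'` is the safe default: many S3-compatible endpoints do not serve buckets as subdomains, so virtual-hosted URLs would fail to resolve.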

Additional Notes

  • If no external provider (AWS S3, GCS, or S3-compatible) is configured, all files are stored locally on your server (the default).
  • Make sure to keep your cloud credentials secure and never share them publicly.
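The selection and fallback behaviour described above can be sketched as follows — a hypothetical helper, not the actual WhautoChat code, assuming each provider is detected by its bucket variable:

```python
def storage_backend(env: dict) -> str:
    """Pick the storage backend from whichever provider's variables are set.

    Mirrors the rules above: at most one external provider should be
    configured; with none configured, files stay on local disk.
    """
    configured = []
    if env.get("AWS_BUCKET_NAME"):
        configured.append("aws-s3")
    if env.get("GCS_BUCKET"):
        configured.append("gcs")
    if env.get("S3_COMPATIBLE_BUCKET"):
        configured.append("s3-compatible")
    if len(configured) > 1:
        raise ValueError(f"configure only one storage provider, found: {configured}")
    return configured[0] if configured else "local"

print(storage_backend({}))  # → local
```

Treating a multi-provider configuration as an error (rather than silently picking one) makes misconfiguration obvious at startup.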

For more details, refer to the official AWS S3 and Google Cloud Storage documentation, or contact WhautoChat support for guidance on best practices.