Object Operations via CLI

With your tooling configured, you can perform advanced object operations. While basic uploads and downloads are straightforward, enterprise environments require multipart uploads, metadata manipulation, and optimized sync operations.

Basic vs Multipart Uploads

Standard Upload

For small files (< 100MB), a standard PUT operation via the CLI is sufficient:

r2-aws cp ./local-video.mp4 s3://media-bucket/raw/

Multipart Uploads (Files > 5GB)

R2 enforces a hard limit of roughly 5GB on a single PUT object. Larger files must be uploaded in parts via multipart uploads. The AWS CLI handles this automatically behind the scenes, but you can tune the thresholds for performance.
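When picking a chunk size, keep in mind that S3-compatible APIs cap a multipart upload at 10,000 parts, so the chunk size bounds the maximum file size you can upload. A quick back-of-the-envelope check in the shell:

```shell
# How many parts does a 50 GB file need with 100 MB chunks?
FILE_SIZE_MB=$((50 * 1024))   # 50 GB expressed in MB
CHUNK_MB=100

# Ceiling division: round up so a partial final chunk counts.
PARTS=$(( (FILE_SIZE_MB + CHUNK_MB - 1) / CHUNK_MB ))
echo "parts needed: $PARTS"   # 512, comfortably under the 10,000-part cap
```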

To force multipart behavior and increase chunk sizes for a 50GB database dump:

# Set multipart threshold to 100MB and chunk size to 100MB
aws configure set default.s3.multipart_threshold 100MB
aws configure set default.s3.multipart_chunksize 100MB

# The copy command will now automatically split the file into 100MB chunks
r2-aws cp ./database.sql s3://backup-bucket/
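If a large transfer is interrupted, the parts already uploaded linger as an in-progress multipart upload and continue to consume storage. You can list and abort them with the lower-level s3api commands. This sketch assumes your alias wraps the plain aws binary, so the endpoint is passed explicitly via a placeholder $R2_ENDPOINT variable:

```shell
# List multipart uploads that were started but never completed.
aws s3api list-multipart-uploads \
  --endpoint-url "$R2_ENDPOINT" \
  --bucket backup-bucket

# Abort one to free the storage held by its orphaned parts
# (the UploadId comes from the listing above).
aws s3api abort-multipart-upload \
  --endpoint-url "$R2_ENDPOINT" \
  --bucket backup-bucket \
  --key database.sql \
  --upload-id "<UploadId>"
```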

Syncing Directories and Optimization

The sync command is the workhorse of object storage. It recursively traverses directories and only uploads new or modified files.

Standard Sync

r2-aws sync ./assets/ s3://static-bucket/assets/
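Before running a sync against a production bucket, you can rehearse it. The standard --dryrun flag prints every action sync would take without transferring anything:

```shell
# Show which files would be uploaded, without touching the bucket.
r2-aws sync ./assets/ s3://static-bucket/assets/ --dryrun
```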

Enterprise Sync Tuning

When syncing thousands of files (e.g., node_modules, log directories), the default AWS CLI settings become a bottleneck. Increase concurrency:

# Increase concurrent requests from 10 (default) to 50
aws configure set default.s3.max_concurrent_requests 50

# Sync with exact timestamp matching and deletion of removed files
r2-aws sync ./assets/ s3://static-bucket/assets/ --exact-timestamps --delete

[!WARNING] Using --delete will permanently delete files in the R2 bucket if they do not exist in the local source directory. Use with extreme caution.
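Filtering also helps at scale: the standard --exclude flag skips files that should never reach the bucket, which both speeds up the sync and narrows the blast radius of --delete. The patterns below are illustrative:

```shell
# Sync everything except source maps and hidden files.
r2-aws sync ./assets/ s3://static-bucket/assets/ \
  --exclude "*.map" \
  --exclude ".*"
```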

Setting Custom Metadata and Cache Headers

By default, the AWS CLI infers the Content-Type from the file extension. For CDN delivery, however, you should also set caching headers during the upload.

Uploading with Cache-Control

To tell the Cloudflare Edge to cache an image for 1 year (immutable):

r2-aws cp ./logo.png s3://static-bucket/images/ \
--content-type "image/png" \
--cache-control "public, max-age=31536000, immutable"
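To confirm the headers were stored as sent, you can inspect the object with the lower-level head-object call. As in the sketches above, this assumes your alias wraps the plain aws binary, with $R2_ENDPOINT as a placeholder for your account's R2 endpoint:

```shell
# Print the stored ContentType and CacheControl for the object.
aws s3api head-object \
  --endpoint-url "$R2_ENDPOINT" \
  --bucket static-bucket \
  --key images/logo.png \
  --query '{ContentType: ContentType, CacheControl: CacheControl}'
```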

Adding Custom Metadata

You can attach arbitrary key-value pairs to objects, which can be read later by Cloudflare Workers or your backend:

r2-aws cp ./user-upload.pdf s3://documents-bucket/ \
--metadata "user_id=12345,category=invoice"

Note: The AWS CLI automatically prepends x-amz-meta- to custom metadata keys when sending the request.
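To read the metadata back, head-object returns a Metadata map with the x-amz-meta- prefix already stripped. Again, $R2_ENDPOINT is a placeholder for your R2 endpoint:

```shell
# Prints the custom metadata map, e.g.
# {"user_id": "12345", "category": "invoice"}
aws s3api head-object \
  --endpoint-url "$R2_ENDPOINT" \
  --bucket documents-bucket \
  --key user-upload.pdf \
  --query Metadata
```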

Pre-signed URLs via CLI

Sometimes you need to give a user temporary access to download a private file. You can generate a presigned URL directly from the CLI.

Generate a URL valid for exactly 1 hour (3600 seconds):

r2-aws presign s3://private-bucket/confidential-report.pdf --expires-in 3600

This returns a long, cryptographically signed URL that you can share via Slack or email; anyone holding the link can download the file until the signature expires.
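In a script, you would typically capture the URL and pass it straight to an HTTP client:

```shell
# Generate the URL, then download with curl before it expires.
URL=$(r2-aws presign s3://private-bucket/confidential-report.pdf \
  --expires-in 3600)
curl -fSL "$URL" -o confidential-report.pdf
```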