Migration and Rclone
Migrating terabytes of data from legacy providers (AWS S3, GCS) to R2 requires careful planning to minimize downtime and egress costs. Cloudflare provides Super Slurper, but Rclone remains the preferred tool for complex, granular migrations.
Super Slurper (Cloudflare Dashboard)
Super Slurper is Cloudflare's managed migration tool. It operates entirely on Cloudflare's backend infrastructure.
Key Benefits
- Zero Local Bandwidth: Your local machine's internet connection is not used.
- Automatic Retries: Handles intermittent network failures automatically.
Limitations & Edge Cases
- No Deletes: Super Slurper does not currently sync deletions. If you delete a file on AWS during the migration, Super Slurper will not delete it on R2.
- Speed Variations: Because it is a free, shared service, migration speeds can fluctuate heavily based on Cloudflare's overall network load.
- Limited Logging: Debugging failed objects is difficult as detailed logs are not always surfaced in the UI.
Advanced Migrations with Rclone
For enterprise migrations where you need absolute control, aggressive concurrency, and perfect synchronization, rclone is the gold standard.
Configuring Rclone for R2
[cloudflare-r2]
type = s3
provider = Cloudflare
access_key_id = <YOUR_ACCESS_KEY_ID>
secret_access_key = <YOUR_SECRET_ACCESS_KEY>
region = auto
endpoint = https://<ACCOUNT_ID>.r2.cloudflarestorage.com
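The config block above can also be written non-interactively. A minimal shell sketch, assuming you substitute the placeholder credentials and account ID before real use; rclone's default config location is `~/.config/rclone/rclone.conf`, but a temp path is used here so the sketch is safe to run as-is:

```shell
# Write the R2 remote definition without running `rclone config`.
# NOTE: <YOUR_ACCESS_KEY_ID>, <YOUR_SECRET_ACCESS_KEY>, and <ACCOUNT_ID>
# are placeholders — substitute real values, and write to
# ~/.config/rclone/rclone.conf (rclone's default path) for actual use.
CONF="${TMPDIR:-/tmp}/rclone-r2.conf"
cat > "$CONF" <<'EOF'
[cloudflare-r2]
type = s3
provider = Cloudflare
access_key_id = <YOUR_ACCESS_KEY_ID>
secret_access_key = <YOUR_SECRET_ACCESS_KEY>
region = auto
endpoint = https://<ACCOUNT_ID>.r2.cloudflarestorage.com
EOF
echo "wrote $CONF"
```

Rclone can then be pointed at this file with `--config "$CONF"` if you keep it outside the default location.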
Tuning for Massive Datasets (The Fast List)
When syncing buckets with millions of objects, the standard discovery phase (listing all files) can take hours and cost thousands of Class A/B API operations.
You must use --fast-list. This flag tells Rclone to use more RAM to store the entire directory structure in memory, drastically reducing API calls.
rclone sync aws-s3:old-bucket cloudflare-r2:new-bucket \
--progress \
--fast-list \
--transfers 64 \
--checkers 128
- --transfers 64: Upload 64 files simultaneously.
- --checkers 128: Compare 128 files simultaneously (by size/modtime, or hash where both sides support it).
Bandwidth Throttling (Protecting Source Servers)
If you are migrating from a local production server to R2, saturating the server's uplink will take your production site offline.
Use the --bwlimit flag to restrict Rclone:
# Limit migration to 50 Megabytes per second
rclone copy /var/www/uploads/ cloudflare-r2:production-uploads/ \
--bwlimit 50M \
--progress
You can even schedule bandwidth limits based on time of day:
--bwlimit "08:00,10M 18:00,50M" (10MB/s during business hours, 50MB/s overnight).
The Migration Cutover Strategy
To migrate a live application without downtime:
- Initial Bulk Sync: Run a massive rclone sync (this can take days or weeks). The application is still writing to AWS S3.
- Delta Syncs: Run rclone sync daily. It will only copy the new files.
- The Freeze: Put your application in "Read-Only" mode.
- Final Sync: Run one last rclone sync. Because the delta is tiny, this takes seconds.
- Cutover: Update application environment variables to point to R2 credentials.
- Unfreeze: Application resumes normal operation, writing to R2.
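The steps above can be strung together as a runbook script. A sketch, assuming the aws-s3 and cloudflare-r2 remotes configured earlier; the rclone commands are only echoed here (via the `run` wrapper) so the sequence can be reviewed end to end before anything is actually transferred:

```shell
#!/bin/sh
# Hypothetical cutover runbook, sketched with echoed commands.
# Swap `echo "would run: $*"` for `"$@"` when you are ready to execute.
SRC="aws-s3:old-bucket"
DST="cloudflare-r2:new-bucket"
run() { echo "would run: $*"; }

run rclone sync "$SRC" "$DST" --fast-list --transfers 64 --checkers 128  # 1. initial bulk sync
run rclone sync "$SRC" "$DST" --fast-list                                # 2. daily delta sync
echo "freeze: put the application in read-only mode"                     # 3. the freeze
run rclone sync "$SRC" "$DST" --fast-list                                # 4. final, tiny delta
echo "cutover: point app env vars at R2, then unfreeze"                  # 5-6. cutover + unfreeze
```

Keeping the delta syncs identical to the final sync means the last run before cutover is just another rehearsal of a command you have already exercised daily.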