R2 Object Storage
By the end of this lesson you will understand how R2 solves the egress fee problem, and know how to create buckets, upload and serve objects, use R2 inside Workers, and migrate from AWS S3.
What Is Cloudflare R2?
R2 is Cloudflare's S3-compatible object storage service. It allows you to store large amounts of unstructured data — images, videos, backups, build artifacts, logs — and serve them globally across Cloudflare's network.
Think of it like a warehouse connected directly to every delivery hub in the world. Traditional cloud storage (AWS S3, GCS) is a warehouse in one city — you pay shipping (egress fees) every time something leaves. R2's warehouse is connected to every Cloudflare PoP, and the "shipping" is always free.
```mermaid
flowchart LR
    subgraph Traditional["Traditional (AWS S3)"]
        T_USER["User\n(Global)"] -->|"Request"| T_CDN["CDN\n(Extra Cost)"]
        T_CDN -->|"Cache Miss\n(Egress fee!)"| T_S3["S3\n(us-east-1)"]
    end
    subgraph R2["Cloudflare R2"]
        R_USER["User\n(Global)"] -->|"Request"| R_CF["Cloudflare Edge\n(Nearest PoP)"]
        R_CF -->|"Cache Miss\n(Zero egress)"| R_R2["R2\n(Global)"]
    end
    style T_S3 fill:#dc2626,color:#fff,stroke:#b91c1c
    style R_R2 fill:#16a34a,color:#fff,stroke:#15803d
    style R_CF fill:#f6821f,color:#fff,stroke:#e5711e
```
The Egress Fee Problem
Egress fees are charges that cloud providers apply every time data leaves their network — and they add up fast.
Suppose you store your users' profile photos on S3. You have 1 million users, and each user loads 10 photos per day at ~500KB each. That's 5TB of egress per day → ~$450/day in egress fees alone.
This is not a niche problem — egress fees are one of the largest hidden costs in cloud infrastructure. Cloudflare R2 eliminates this entirely.
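The arithmetic behind the scenario above, as a quick sanity check (assuming a flat ~$0.09/GB S3 egress rate; real AWS pricing is tiered and varies by region):

```typescript
// Back-of-the-envelope egress cost for the profile-photo scenario.
const users = 1_000_000;
const photosPerUserPerDay = 10;
const photoSizeKB = 500;

const egressGBPerDay = (users * photosPerUserPerDay * photoSizeKB) / 1_000_000; // KB → GB
const s3CostPerDay = egressGBPerDay * 0.09; // assumed ~$0.09/GB egress rate
const r2CostPerDay = 0;                     // R2 never charges for egress

console.log(egressGBPerDay, s3CostPerDay); // ≈ 5000 GB/day, ~$450/day
```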
| Scenario | AWS S3 Cost | R2 Cost |
|---|---|---|
| Get 1 TB/month from storage to CDN | ~$90 | $0 |
| Serve a public image 1M times/day | Egress charges | $0 |
| Build artifact storage (30GB) | Storage + egress per pull | Storage only |
| Video CDN (10TB/month) | $900+ egress | $0 |
R2 vs Amazon S3 vs GCS
| Feature | Cloudflare R2 | Amazon S3 (Standard) | Google Cloud Storage |
|---|---|---|---|
| Zero Egress Fees | ✅ Always | ❌ ~$0.09/GB | ❌ ~$0.08/GB |
| Global CDN | ✅ Built-in (free) | 💰 CloudFront extra | 💰 Cloud CDN extra |
| S3 API Compatible | ✅ Full compatibility | ✅ Native | Partial |
| Free tier (Storage) | ✅ 10 GB/month | ✅ 5 GB (12mo only) | ✅ 5 GB standard |
| Free tier (Reads) | ✅ 10M Class B ops | Limited | Limited |
| Workers native binding | ✅ Yes | ❌ | ❌ |
| Object lifecycle | Limited | Full | Full |
| Lock / Versioning | Beta | ✅ Full | ✅ Full |
Free Tier
| Resource | Free Plan |
|---|---|
| Storage | 10 GB per month |
| Class A Ops (Write, List, Delete) | 1,000,000 per month |
| Class B Ops (Read/Head) | 10,000,000 per month |
| Egress to Internet | ✅ Unlimited — always free |
| Egress to Workers | ✅ Always free |
| Egress to Cloudflare CDN | ✅ Always free |
Core Concepts
Before writing any code, understand R2's building blocks:
| Concept | What It Is | Real-World Analogy |
|---|---|---|
| Bucket | A top-level container for objects | A folder or drive |
| Object | A single stored file (any type, up to 5TB) | A file on disk |
| Key | The unique name/path of an object in a bucket | A file's full path |
| Binding | A Workers-native connection to a bucket | An import / API client |
| R2 API Token | Credential for S3-compatible tools | An API key |
In R2, an object's "key" is its identifier. The key images/2024/avatar.png means an object at path images/2024/avatar.png inside your bucket. It is not a URL — it becomes one when you expose the bucket publicly.
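That mapping is mechanical: once a bucket is public, the object key is the request URL's path without its leading slash. A one-line sketch (note that `pathname` keeps percent-encoding, so decode it if your keys contain special characters):

```typescript
// Derive the R2 object key from a public URL: strip the leading slash.
function keyFromUrl(requestUrl: string): string {
  return new URL(requestUrl).pathname.slice(1);
}

// keyFromUrl("https://assets.example.com/images/2024/avatar.png")
//   → "images/2024/avatar.png"
```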
Step 1: Create a Bucket
Via Dashboard
- Go to your Cloudflare Dashboard → R2 Object Storage
- Click "Create bucket"
- Enter a name (e.g., my-assets) — names must be unique within your account
- Choose a Location hint (optional):
| Hint | Region |
|---|---|
| ENAM | Eastern North America |
| WNAM | Western North America |
| WEUR | Western Europe |
| EEUR | Eastern Europe |
| APAC | Asia-Pacific |
If your Workers or users are concentrated in a specific region, using a location hint can reduce latency for the first "cache miss" fetch. It does not limit where your data is accessible — objects are still globally available.
Via Wrangler CLI
```shell
# Create a bucket
wrangler r2 bucket create my-assets

# List all your buckets
wrangler r2 bucket list

# Delete a bucket (must be empty first)
wrangler r2 bucket delete my-assets
```
Step 2: Upload and Manage Objects
Via Wrangler CLI
```shell
# Upload a local file
wrangler r2 object put my-assets/images/logo.png --file ./local-logo.png

# Download a file
wrangler r2 object get my-assets/images/logo.png --file ./downloaded-logo.png

# Delete an object
wrangler r2 object delete my-assets/images/logo.png

# List objects in a bucket (prefix optional)
wrangler r2 object list my-assets --prefix images/
```
Via AWS CLI (S3-Compatible)
Any S3-compatible tool works with R2. Set up the AWS CLI with your R2 credentials:
```ini
# ~/.aws/credentials
[cloudflare-r2]
aws_access_key_id = YOUR_R2_ACCESS_KEY_ID
aws_secret_access_key = YOUR_R2_SECRET_ACCESS_KEY

# ~/.aws/config
[profile cloudflare-r2]
region = auto
```
```shell
# List all buckets
aws s3 ls \
  --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com \
  --profile cloudflare-r2

# Copy a local file to R2
aws s3 cp ./my-video.mp4 s3://my-assets/videos/my-video.mp4 \
  --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com \
  --profile cloudflare-r2

# Sync a local directory to R2 (great for deploying static assets)
aws s3 sync ./dist s3://my-assets/static/ \
  --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com \
  --profile cloudflare-r2

# Delete an object
aws s3 rm s3://my-assets/videos/old.mp4 \
  --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com \
  --profile cloudflare-r2
```
Step 3: Use R2 in a Worker
Workers connect to R2 buckets via bindings — zero-credential, native API access.
Configure the Binding
```toml
# wrangler.toml
name = "my-worker"
main = "src/index.ts"
compatibility_date = "2024-01-01"

[[r2_buckets]]
binding = "ASSETS"        # Name used in your Worker code
bucket_name = "my-assets" # Name of the R2 bucket
```
Full CRUD Example
```typescript
export interface Env {
  ASSETS: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const url = new URL(request.url);
    // Remove leading slash to get the object key
    // e.g. /images/logo.png → "images/logo.png"
    const key = url.pathname.slice(1);

    if (!key) {
      return new Response("Missing object key", { status: 400 });
    }

    switch (request.method) {
      // ─── READ ───────────────────────────────────────────────────────────
      case "GET": {
        const object = await env.ASSETS.get(key);
        if (!object) {
          return new Response("Object Not Found", { status: 404 });
        }
        // Copy HTTP metadata (Content-Type, Cache-Control, etc.) from R2
        const headers = new Headers();
        object.writeHttpMetadata(headers);
        headers.set("ETag", object.httpEtag);
        return new Response(object.body, { headers });
      }

      // ─── HEAD (metadata only, no body) ─────────────────────────────────
      case "HEAD": {
        const object = await env.ASSETS.head(key);
        if (!object) {
          return new Response("Object Not Found", { status: 404 });
        }
        const headers = new Headers();
        object.writeHttpMetadata(headers);
        headers.set("ETag", object.httpEtag);
        return new Response(null, { headers });
      }

      // ─── WRITE ──────────────────────────────────────────────────────────
      case "PUT": {
        await env.ASSETS.put(key, request.body, {
          httpMetadata: {
            contentType: request.headers.get("Content-Type") ?? "application/octet-stream",
          },
          // Store custom key-value pairs alongside the object
          customMetadata: {
            uploadedAt: new Date().toISOString(),
            uploadedBy: "my-worker",
          },
        });
        return new Response(`Stored: ${key}`, { status: 201 });
      }

      // ─── DELETE ─────────────────────────────────────────────────────────
      case "DELETE": {
        await env.ASSETS.delete(key);
        return new Response("Deleted", { status: 200 });
      }

      default:
        return new Response("Method Not Allowed", { status: 405 });
    }
  },
};
```
Listing Objects
```typescript
const listed = await env.ASSETS.list({
  prefix: "images/",                                   // Only list objects under images/
  limit: 100,                                          // Max 1000
  cursor: url.searchParams.get("cursor") ?? undefined, // Pagination
});

const keys = listed.objects.map(obj => ({
  key: obj.key,
  size: obj.size,
  uploaded: obj.uploaded,
  etag: obj.etag,
}));

return Response.json({
  objects: keys,
  truncated: listed.truncated,
  cursor: listed.truncated ? listed.cursor : null,
});
```
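To walk an entire prefix rather than a single page, loop on the truncated/cursor fields. A minimal sketch, with just the parts of the list() surface the loop needs modeled as an interface (so it runs outside the Workers runtime):

```typescript
// Minimal model of the R2 list() result shape used by the pagination loop.
interface ListedObject { key: string; size: number; }
interface ListResult { objects: ListedObject[]; truncated: boolean; cursor?: string; }
interface Lister {
  list(opts: { prefix?: string; cursor?: string; limit?: number }): Promise<ListResult>;
}

// Collect every key under a prefix by following cursors until truncated is false.
async function listAllKeys(bucket: Lister, prefix: string): Promise<string[]> {
  const keys: string[] = [];
  let cursor: string | undefined;
  do {
    const page = await bucket.list({ prefix, cursor, limit: 1000 });
    keys.push(...page.objects.map(o => o.key));
    cursor = page.truncated ? page.cursor : undefined;
  } while (cursor);
  return keys;
}
```

In a Worker you would pass `env.ASSETS` directly, since `R2Bucket.list()` matches this shape.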
Step 4: Make Objects Publicly Accessible
By default, R2 buckets are private — only accessible via Workers or S3-compatible API with credentials. To serve objects directly to users:
Option A: Custom Domain (Recommended for Production)
- Go to R2 → your bucket → Settings → Custom Domains
- Click "Connect Domain"
- Enter your domain (e.g., assets.example.com)
- Cloudflare automatically creates a CNAME record and issues a TLS certificate
Your objects are now accessible at:
https://assets.example.com/images/logo.png
The Cloudflare CDN automatically caches responses — files served from this domain go through Cloudflare's full performance and security stack.
Option B: r2.dev Subdomain (Quick Testing Only)
- Go to R2 → your bucket → Settings → Public Access
- Enable the r2.dev subdomain
The r2.dev subdomain is rate-limited and intended only for testing. Do not use it in production — it will throttle under any meaningful load.
Option C: Serve via Worker (Most Control)
Route traffic through your Worker (as shown in the CRUD example above), giving you full control over authentication, caching headers, transformations, and access rules:
```typescript
case "GET": {
  // Check authorization before serving
  // (isValidToken is a placeholder — implement your own check)
  const authHeader = request.headers.get("Authorization");
  if (!isValidToken(authHeader)) {
    return new Response("Unauthorized", { status: 401 });
  }

  const object = await env.ASSETS.get(key);
  if (!object) return new Response("Not Found", { status: 404 });

  const headers = new Headers();
  object.writeHttpMetadata(headers);
  // Add aggressive caching for authenticated users
  headers.set("Cache-Control", "private, max-age=3600");
  return new Response(object.body, { headers });
}
```
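One possible shape for the isValidToken() check referenced above, assuming a static bearer token stored as a Worker secret (e.g. a hypothetical `env.API_TOKEN`). The constant-time comparison loop is a stand-in for a proper timing-safe compare:

```typescript
// Validate an "Authorization: Bearer <token>" header against an expected secret.
// Comparison avoids early exit on the first mismatched character.
function isValidToken(authHeader: string | null, expected: string): boolean {
  if (!authHeader || !authHeader.startsWith("Bearer ")) return false;
  const token = authHeader.slice("Bearer ".length);
  if (token.length !== expected.length) return false;
  let diff = 0;
  for (let i = 0; i < token.length; i++) {
    diff |= token.charCodeAt(i) ^ expected.charCodeAt(i);
  }
  return diff === 0;
}
```

In the Worker you would call it as `isValidToken(authHeader, env.API_TOKEN)`, with the secret set via `wrangler secret put`.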
Generating S3 API Tokens
To use R2 with external tools (AWS CLI, rclone, Cyberduck, Terraform, etc.) you need an R2 API Token — not your Cloudflare API Token.
- Go to Cloudflare Dashboard → R2 → Manage R2 API Tokens
- Click "Create API Token"
- Set permissions:
- Admin Read & Write — full access (use for backup tools)
- Object Read & Write — access to specific buckets (more secure)
- Object Read only — for CDN/read-only clients
- Copy the generated:
- Access Key ID (like an AWS Access Key)
- Secret Access Key (like an AWS Secret Key)
- Endpoint URL: https://<ACCOUNT_ID>.r2.cloudflarestorage.com
Migrating from S3 to R2 with rclone
Add both remotes to your rclone config file (rclone.conf):

```ini
[s3]
type = s3
provider = AWS
access_key_id = YOUR_AWS_KEY
secret_access_key = YOUR_AWS_SECRET
region = us-east-1

[r2]
type = s3
provider = Cloudflare
access_key_id = YOUR_R2_KEY
secret_access_key = YOUR_R2_SECRET
endpoint = https://<ACCOUNT_ID>.r2.cloudflarestorage.com
```
```shell
# Sync all objects (dry run first)
rclone sync s3:my-s3-bucket r2:my-assets --dry-run

# Apply the sync
rclone sync s3:my-s3-bucket r2:my-assets --progress
```
Common Misconceptions
"R2 is just Cloudflare's cache"
Reality: R2 is persistent object storage — data stays until you explicitly delete it. It is not a cache. Data stored in R2 survives indefinitely and can be read by Workers, public endpoints, or S3 API clients. Cloudflare's CDN does cache R2 responses at the edge, but the underlying storage is R2.
"S3 compatibility is only partial"
Reality: R2 implements the core S3 API — PutObject, GetObject, DeleteObject, ListObjectsV2, multipart uploads, presigned URLs, and more. Features that depend on AWS-specific services (IAM policies, S3 Glacier storage classes, S3 Replication) are either unsupported or replaced by Cloudflare-managed equivalents.
"R2 egress fees kick in after the free tier"
Reality: R2 never charges egress fees. Once you exceed the free tier, you pay for storage (~$0.015/GB) and operations ($4.50/M writes, $0.36/M reads) — never for bandwidth.
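A hedged sketch of what a monthly bill past the free tier looks like, using the rates quoted above (storage ~$0.015/GB-month, $4.50 per million Class A ops, $0.36 per million Class B ops) and the free-tier allowances from the table earlier. Check Cloudflare's current pricing page before relying on these numbers:

```typescript
// Approximate R2 monthly cost: free-tier allowances are subtracted first,
// and there is deliberately no bandwidth/egress term anywhere.
function r2MonthlyCost(storageGB: number, classAOps: number, classBOps: number): number {
  const billableStorageGB = Math.max(0, storageGB - 10);        // 10 GB free
  const billableA = Math.max(0, classAOps - 1_000_000);          // 1M Class A free
  const billableB = Math.max(0, classBOps - 10_000_000);         // 10M Class B free
  return (
    billableStorageGB * 0.015 +
    (billableA / 1_000_000) * 4.5 +
    (billableB / 1_000_000) * 0.36
  );
}

// Entirely within the free tier → $0
console.log(r2MonthlyCost(10, 1_000_000, 10_000_000));
```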
"I need to set up CloudFront or another CDN in front of R2"
Reality: R2's Custom Domain feature automatically connects your bucket to the Cloudflare CDN. No additional CDN setup is needed — it is built in.
Anti-Patterns to Avoid
| Don't Do This | Do This Instead |
|---|---|
| Expose the r2.dev subdomain in production | Use a Custom Domain with your own hostname |
| Store and read secrets/keys directly in your bucket key path | Use a flat prefix structure and access controls in Workers |
| Use R2 like a database (tiny, frequent reads/writes) | Use Workers KV or D1 for small, frequent access patterns |
| Skip Content-Type when uploading | Always set contentType in httpMetadata for files to display correctly in browsers |
| Put all objects in the bucket root with long names | Use prefix-based "folders" (images/, videos/, backups/) for organization |
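A small sketch of the prefix-based "folder" convention from the last row of the table. The category/year/month layout here is illustrative, not an R2 requirement — keys are flat strings and any consistent scheme works:

```typescript
// Build an organized object key like "images/2024/05/avatar.png".
function makeKey(
  category: "images" | "videos" | "backups",
  filename: string,
  when: Date = new Date(),
): string {
  const yyyy = when.getUTCFullYear();
  const mm = String(when.getUTCMonth() + 1).padStart(2, "0");
  return `${category}/${yyyy}/${mm}/${filename}`;
}

// makeKey("images", "avatar.png", new Date(Date.UTC(2024, 4, 1)))
//   → "images/2024/05/avatar.png"
```

Prefixes like these also make `list({ prefix: "images/2024/" })` and scoped lifecycle or cleanup jobs straightforward.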
Key Takeaways
- R2 is S3-compatible object storage that eliminates egress fees entirely — even in paid tiers.
- Free tier: 10GB storage, 1M write ops, 10M read ops per month.
- Use Worker bindings for native, zero-credential access from your edge code.
- Use S3 API Tokens to integrate external tools like AWS CLI, rclone, or Terraform.
- Attach a Custom Domain for production-grade CDN-backed public access.
- R2 is ideal for static assets, images, videos, backups, and build artifacts — not for tiny/frequent key-value access.
- Migrating from S3 to R2 is straightforward using rclone — the S3 API compatibility is high-fidelity.
What's Next
- Continue to Workers AI to learn how to run machine learning models at the edge.