Workers Integration
The highest-performance pattern for R2 is binding it directly to a Cloudflare Worker. Because Workers and R2 run on the same Cloudflare network, requests never leave the edge, latency is minimal, and there are no egress fees.
The R2 Binding API
Instead of importing a heavyweight AWS SDK, Workers use native bindings, declared in wrangler.toml:
# wrangler.toml
[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "production-assets"
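Once declared, the bucket appears as a property on the Worker's env object. A minimal sketch of a streaming upload handler (the key scheme and content-type fallback are our own choices, not part of the binding API):

```javascript
// Minimal upload handler: streams the request body straight into R2
// via the binding. `MY_BUCKET` matches `binding` in wrangler.toml.
const worker = {
  async fetch(request, env) {
    if (request.method !== 'PUT') {
      return new Response('Method Not Allowed', { status: 405 });
    }
    // Use the URL path (minus the leading slash) as the object key.
    const key = new URL(request.url).pathname.slice(1);
    // put() accepts a ReadableStream, so the body is never buffered
    // in Worker memory.
    const object = await env.MY_BUCKET.put(key, request.body, {
      httpMetadata: {
        contentType:
          request.headers.get('content-type') ?? 'application/octet-stream',
      },
    });
    return Response.json({ key: object.key, etag: object.etag });
  },
};
```

The same `env.MY_BUCKET` handle exposes get(), head(), delete(), and list() with no credentials or endpoint configuration.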
R2Object vs. R2ObjectBody
When you retrieve data, R2 returns specific objects:
- R2Object: returned by head(). Contains only metadata (size, etag, custom metadata). It does not contain the file contents. Fast and cheap.
- R2ObjectBody: returned by get(). Inherits from R2Object but adds the .body property (a ReadableStream) containing the file data.
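When all you need is metadata — an existence check, a size lookup — head() avoids transferring the body entirely. A small sketch (the helper name is ours):

```javascript
// Look up an object's size without downloading it. head() returns
// an R2Object (metadata only) or null if the key does not exist.
async function objectSize(bucket, key) {
  const head = await bucket.head(key);
  return head === null ? null : head.size;
}
```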
Advanced Pattern: The Caching Layer
While R2 is fast, reading the same object 10,000 times a second still incurs Class B operation fees. Wrap your R2 reads in the Workers Cache API (caches.default) so hot objects are served from the local data center's cache instead.
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    const key = url.pathname.slice(1);

    // 1. Check the edge cache first
    const cache = caches.default;
    let response = await cache.match(request);
    if (response) {
      return response; // Cache HIT
    }

    // 2. Cache MISS: fetch from R2
    const object = await env.MY_BUCKET.get(key);
    if (object === null) {
      return new Response('Not Found', { status: 404 });
    }

    const headers = new Headers();
    object.writeHttpMetadata(headers);
    headers.set('etag', object.httpEtag);
    // Cache at the edge for 1 hour
    headers.set('Cache-Control', 'public, max-age=3600');

    response = new Response(object.body, { headers });

    // 3. Store in the cache asynchronously (ctx.waitUntil keeps the
    //    Worker alive after the response has been returned)
    ctx.waitUntil(cache.put(request, response.clone()));
    return response;
  },
};
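One caveat: if the R2 object is overwritten before max-age expires, the cached copy goes stale. cache.delete() evicts an entry explicitly, though only in the data center that handles the call. A sketch (the helper name is ours; the cache is taken as a parameter so any Cache instance works):

```javascript
// After overwriting an object, evict the stale edge-cached copy.
// cache.delete() only removes the entry in the current data center;
// other locations keep their copy until max-age expires.
async function overwriteAndEvict(env, cache, key, objectUrl, body) {
  await env.MY_BUCKET.put(key, body);
  // Resolves true if a cached entry existed here and was removed.
  return cache.delete(new Request(objectUrl));
}
```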
Advanced Pattern: Multipart Uploads in Workers
If users are uploading massive files (e.g., 2 GB videos) through your Worker to R2, a single .put() will fail: the request body exceeds Cloudflare's per-request upload limits, and buffering it would exhaust the Worker's memory. Use the Multipart Upload API natively in the Worker instead.
export default {
  async fetch(request, env) {
    const url = new URL(request.url);

    if (request.method === 'POST' && url.pathname === '/upload-start') {
      // 1. Initialize the multipart upload
      const multipartUpload = await env.MY_BUCKET.createMultipartUpload('large-video.mp4');
      return Response.json({ uploadId: multipartUpload.uploadId });
    }

    if (request.method === 'PUT' && url.pathname === '/upload-part') {
      // Pass identifiers in the query string so the request body stays
      // the raw chunk (reading JSON from it first would consume the stream)
      const uploadId = url.searchParams.get('uploadId');
      const partNumber = parseInt(url.searchParams.get('partNumber'), 10);
      // Resume the upload object
      const multipartUpload = env.MY_BUCKET.resumeMultipartUpload('large-video.mp4', uploadId);
      // 2. Upload the chunk
      const uploadedPart = await multipartUpload.uploadPart(partNumber, request.body);
      return Response.json({ part: uploadedPart });
    }

    if (request.method === 'POST' && url.pathname === '/upload-complete') {
      const { uploadId, parts } = await request.json();
      const multipartUpload = env.MY_BUCKET.resumeMultipartUpload('large-video.mp4', uploadId);
      // 3. Complete the assembly; `parts` is the array of
      //    { partNumber, etag } objects returned by uploadPart()
      await multipartUpload.complete(parts);
      return new Response('Upload Success');
    }

    return new Response('Not Found', { status: 404 });
  },
};
This lets your frontend slice a massive file into parts and stream each one through the Worker, keeping every individual request comfortably inside the platform's limits.
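On the client side, the part boundaries can be computed up front. R2 requires every part except the last to be the same size, with a 5 MiB minimum. A sketch of the split (the 10 MiB part size is an arbitrary choice):

```javascript
// Compute { partNumber, start, end } byte ranges for a multipart
// upload; a slice of the file for each range can then be PUT to
// the Worker's /upload-part endpoint.
const PART_SIZE = 10 * 1024 * 1024; // 10 MiB (R2's minimum is 5 MiB)

function partRanges(totalBytes, partSize = PART_SIZE) {
  const ranges = [];
  for (let start = 0, n = 1; start < totalBytes; start += partSize, n++) {
    ranges.push({
      partNumber: n,
      start,
      end: Math.min(start + partSize, totalBytes), // last part may be short
    });
  }
  return ranges;
}
```

In the browser, File.prototype.slice(start, end) then produces each chunk lazily, so the whole file never has to be read into memory at once.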