Automation and Events

R2 Event Notifications allow you to build asynchronous, event-driven architectures. Instead of your backend polling the bucket for new files, R2 actively pushes a notification to a Cloudflare Queue, which then triggers a Worker.

Real-World Architecture: Image Optimization

Consider a pipeline where users upload raw 4K .tiff images, and your system automatically converts them to the optimized WebP format.

graph LR
User[User Upload] --> |PutObject| R2Raw[(Raw Bucket)]
R2Raw --> |Event Notification| Queue[Cloudflare Queue]
Queue --> |Batch Deliver| Worker{Optimizer Worker}
Worker --> |Write| R2Opt[(Optimized Bucket)]
Worker -.-> |Failure| DLQ[Dead Letter Queue]

The Event Schema

When an event triggers, the Worker receives a JSON payload. Understanding this payload is critical for writing robust handlers.

{
  "action": "PutObject",
  "bucket": "raw-uploads",
  "object": {
    "key": "user_123/avatar.tiff",
    "size": 14500000,
    "eTag": "d41d8cd98f00b204e9800998ecf8427e"
  },
  "eventTime": "2024-03-22T10:00:00Z"
}
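Since the handler trusts fields like object.key, it is worth validating this shape before touching the bucket. A minimal sketch — the isR2PutEvent helper name is ours, not part of the R2 API:

```javascript
// Hypothetical type guard for the payload shown above.
// Returns true only when every field the handler relies on is present.
function isR2PutEvent(event) {
  return (
    event !== null &&
    typeof event === 'object' &&
    event.action === 'PutObject' &&
    typeof event.bucket === 'string' &&
    typeof event.object?.key === 'string' &&
    typeof event.object?.size === 'number'
  );
}
```

A handler can call this at the top of the loop and ack-and-skip anything that fails the check, rather than throwing and burning a retry on a message that will never parse.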

Implementing the Pipeline

Step 1: The wrangler.toml Configuration

We must define the consuming queue, the dead letter queue (DLQ) for failures, and the R2 bucket event trigger.

name = "image-optimizer"
main = "src/index.js"

# The Queue our Worker consumes from
[[queues.consumers]]
queue = "r2-events-queue"
max_batch_size = 10
max_retries = 3
dead_letter_queue = "r2-events-dlq" # Send here if it fails 3 times

# The bucket that generates the events. Note: the notification rule itself
# is not set in wrangler.toml; it is created once with the Wrangler CLI
# (`wrangler r2 bucket notification create`) or in the dashboard.
[[r2_buckets]]
binding = "RAW_BUCKET"
bucket_name = "raw-uploads"

# The bucket we write optimized images to
[[r2_buckets]]
binding = "OPT_BUCKET"
bucket_name = "optimized-images"
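The notification rule that links the bucket to the queue is created with the Wrangler CLI. A sketch using this example's bucket and queue names — flag spellings follow recent Wrangler releases, so verify against wrangler r2 bucket notification create --help for your version:

```shell
# Deliver object-create events (PutObject, CompleteMultipartUpload, etc.)
# from raw-uploads to r2-events-queue. Requires an authenticated
# Wrangler session with R2 and Queues enabled on the account.
npx wrangler r2 bucket notification create raw-uploads \
  --event-type object-create \
  --queue r2-events-queue
```

This only needs to run once per rule; the Worker and queue consumer configuration above stay in wrangler.toml as usual.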

Step 2: The Worker Logic

export default {
  async queue(batch, env) {
    for (const message of batch.messages) {
      const event = message.body;

      try {
        if (event.action !== 'PutObject') {
          message.ack(); // Not an upload event; nothing to do
          continue;
        }

        // 1. Fetch the raw image
        const rawImage = await env.RAW_BUCKET.get(event.object.key);
        if (rawImage === null) {
          // Object was deleted before we got to it; don't retry forever
          message.ack();
          continue;
        }

        // 2. Process it (pseudo-code for your image library)
        const webpBuffer = await processToWebP(await rawImage.arrayBuffer());

        // 3. Save to the optimized bucket (anchor the match so only a
        //    trailing .tiff extension is rewritten)
        const newKey = event.object.key.replace(/\.tiff$/, '.webp');
        await env.OPT_BUCKET.put(newKey, webpBuffer, {
          httpMetadata: { contentType: 'image/webp' }
        });

        // 4. Mark the message as successful
        message.ack();
      } catch (error) {
        console.error(`Failed processing ${event.object?.key}:`, error);
        // Do NOT ack(). Let it retry, and eventually fall to the DLQ.
        message.retry();
      }
    }
  }
};
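One detail worth unit-testing in isolation is the key derivation. A naive replace('.tiff', '.webp') rewrites the first match anywhere in the key, so a path like archive.tiff.bak/photo.tiff would be mangled; anchoring the pattern to the end of the string avoids that. A small sketch (the optimizedKey helper is ours):

```javascript
// Derive the optimized object key from the raw key.
// The anchored pattern (\.tiff$) only rewrites a trailing extension,
// leaving ".tiff" substrings elsewhere in the path untouched.
function optimizedKey(rawKey) {
  return rawKey.replace(/\.tiff$/, '.webp');
}

console.log(optimizedKey('user_123/avatar.tiff')); // "user_123/avatar.webp"
```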

Dead Letter Queues (DLQ)

Notice the dead_letter_queue configuration. If processToWebP throws (e.g., on a corrupted TIFF file), the message is retried up to 3 times, per max_retries. If every attempt fails, it is pushed to the DLQ.

This ensures your main queue doesn't get clogged by a "poison" message, and you can later inspect the DLQ to see exactly which files failed processing, without losing the event data.
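The DLQ is itself just a queue, so a second Worker can consume it for inspection. A minimal sketch — the dlqConsumer name and log-then-ack approach are ours (in practice you might write failures to a bucket or page an on-call system), and it assumes a [[queues.consumers]] entry pointing at r2-events-dlq:

```javascript
// Hypothetical DLQ consumer (export this object as the Worker's default).
// It records each failed event so operators can see exactly which objects
// never processed, then acks so the DLQ drains.
const dlqConsumer = {
  async queue(batch, env) {
    for (const message of batch.messages) {
      const event = message.body;
      console.error(
        `DLQ: ${event?.action} on ${event?.object?.key ?? '(unknown key)'} at ${event?.eventTime}`
      );
      message.ack(); // Recorded; drop it from the DLQ
    }
  },
};
```

Because the handler only reads the message body, it tolerates malformed payloads instead of throwing, which matters in a consumer whose whole job is handling messages that already failed once.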