# CORS and Presigned URLs
Allowing users to upload files directly from the browser to R2 requires two pieces: a Cross-Origin Resource Sharing (CORS) configuration and presigned URLs. This pattern keeps large file streams off your backend server entirely, saving significant bandwidth and CPU.
## The CORS Preflight Dance
When a browser attempts a cross-origin PUT request (e.g., a page on `https://my-app.com` uploading to `https://my-bucket.r2.cloudflarestorage.com`), it first sends an OPTIONS request known as a preflight.
If the R2 bucket does not respond to the OPTIONS request with the correct headers, the browser blocks the actual PUT request and throws a CORS error.
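With the configuration shown in the next section in place, a successful exchange looks roughly like this (headers abbreviated; the origin and bucket hostname are placeholders):

```http
OPTIONS /avatars/123.jpg HTTP/1.1
Host: my-bucket.r2.cloudflarestorage.com
Origin: https://app.mycompany.com
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: content-type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://app.mycompany.com
Access-Control-Allow-Methods: PUT, GET, HEAD
Access-Control-Allow-Headers: content-type
Access-Control-Max-Age: 86400
```

Only after receiving the matching `Access-Control-Allow-*` headers does the browser send the real PUT.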
## Configuring Strict CORS
Use the AWS CLI's `s3api` commands, pointed at your R2 endpoint, to enforce strict CORS. Do not use `*` for `AllowedOrigins` in production.
```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://app.mycompany.com"],
      "AllowedMethods": ["PUT", "GET", "HEAD"],
      "AllowedHeaders": ["Content-Type", "Content-Length"],
      "ExposeHeaders": ["ETag"],
      "MaxAgeSeconds": 86400
    }
  ]
}
```
```sh
aws s3api put-bucket-cors \
  --bucket user-uploads \
  --cors-configuration file://cors-production.json \
  --endpoint-url "https://<ACCOUNT_ID>.r2.cloudflarestorage.com"
```
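Because a wildcard origin is easy to let slip into production, it can be worth linting the config file before deploying it. The sketch below is a hypothetical CI guard (the function name and checks are illustrative, not part of any SDK); it assumes the `{ "CORSRules": [...] }` shape shown above:

```javascript
// Hypothetical pre-deploy guard: returns a list of problems found in a
// parsed CORS configuration object, or an empty list if it looks strict.
function findCorsProblems(config) {
  const problems = [];
  for (const [i, rule] of (config.CORSRules || []).entries()) {
    const origins = rule.AllowedOrigins || [];
    if (origins.includes("*")) {
      problems.push(`rule ${i}: wildcard AllowedOrigins`);
    }
    for (const origin of origins) {
      // Require HTTPS for any concrete origin (the wildcard is flagged above).
      if (origin !== "*" && !origin.startsWith("https://")) {
        problems.push(`rule ${i}: non-HTTPS origin ${origin}`);
      }
    }
  }
  return problems;
}
```

A CI step could read `cors-production.json`, run it through this check, and fail the build before `put-bucket-cors` ever runs.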
## The Presigned URL Upload Pattern
### 1. Backend Generation (Node.js)
Your backend authenticates the user, generates a secure URL valid for only 5 minutes, and restricts the content type to prevent users from uploading malicious executables instead of images.
```js
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  },
});

// Assumes an Express `app` and auth middleware that populates `req.user`.
app.post("/api/get-upload-url", async (req, res) => {
  // Validate user session here...
  const command = new PutObjectCommand({
    Bucket: "user-uploads",
    Key: `avatars/${req.user.id}.jpg`,
    ContentType: "image/jpeg", // Enforce JPEG; the client must send this exact header
  });
  const url = await getSignedUrl(s3, command, { expiresIn: 300 }); // 5 minutes
  res.json({ url });
});
```
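The handler above hard-codes `.jpg` and `image/jpeg`. If you later accept several image formats, one way to keep the object `Key` and `ContentType` from drifting apart is a whitelist that derives the extension from the MIME type. This is a hypothetical helper, not part of the AWS SDK (`ALLOWED_TYPES` and `extensionFor` are names invented for this sketch):

```javascript
// Hypothetical whitelist mapping accepted MIME types to the file extension
// used in the object Key. Anything outside the map is rejected with null.
const ALLOWED_TYPES = {
  "image/jpeg": "jpg",
  "image/png": "png",
  "image/webp": "webp",
};

function extensionFor(contentType) {
  return ALLOWED_TYPES[contentType] ?? null;
}
```

In the route handler you would then read the requested type from the client, return a 400 when `extensionFor` yields `null`, and otherwise use the same value for both `Key: \`avatars/${req.user.id}.${ext}\`` and `ContentType`.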
### 2. Frontend Implementation (React/Vanilla JS)
The frontend requests the URL, then uses `XMLHttpRequest` (or Axios) to perform the upload. We use XHR here rather than `fetch` because `fetch` does not natively expose upload progress events.
```js
async function uploadFile(file) {
  // 1. Get the Presigned URL
  const response = await fetch('/api/get-upload-url', { method: 'POST' });
  const { url } = await response.json();

  // 2. Perform the Upload with Progress
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();

    xhr.upload.addEventListener("progress", (event) => {
      if (event.lengthComputable) {
        const percentComplete = Math.round((event.loaded / event.total) * 100);
        console.log(`Upload progress: ${percentComplete}%`);
        // Update UI progress bar here
      }
    });

    xhr.addEventListener("load", () => {
      if (xhr.status >= 200 && xhr.status < 300) {
        resolve("Upload successful");
      } else {
        reject(`Upload failed with status: ${xhr.status}`);
      }
    });

    xhr.addEventListener("error", () => reject("Network error during upload"));

    xhr.open("PUT", url);
    // MUST match the ContentType specified in the backend PutObjectCommand
    xhr.setRequestHeader("Content-Type", file.type);
    xhr.send(file);
  });
}
```
> [!IMPORTANT]
> The `Content-Type` set in the frontend `xhr.setRequestHeader` call must exactly match the `ContentType` defined in the backend `PutObjectCommand`. If they differ even slightly, AWS signature verification will fail with a `403 SignatureDoesNotMatch` error.