Caching and Performance
While Cloudflare automatically caches static assets (images, CSS, JS) based on standard HTTP headers, dynamic API responses generated by a Worker are NOT cached by default.
To maximize performance and minimize backend load, Cloudflare Workers have native, programmatic access to the Edge Cache via the Cache API.
By the end of this module, you will know how to intercept requests, check the cache for an existing response, save new responses to the cache, and understand when Smart Placement helps.
The Default Cache Approach (fetch options)
If your Worker simply fetches data from an upstream URL, you can tell Cloudflare to cache the response directly inside the fetch() call.
```javascript
export default {
  async fetch(request, env, ctx) {
    const url = "https://api.github.com/users/octocat";

    // Standard fetch: goes to GitHub every time
    // const response = await fetch(url);

    // Cached fetch: caches the GitHub response at the edge for 1 hour (3600s)
    const response = await fetch(url, {
      cf: {
        cacheTtl: 3600,
        cacheEverything: true
      }
    });

    return response;
  }
};
```
Note: The `cf` object is a Cloudflare-specific extension to the standard `RequestInit` dictionary accepted by `fetch()`.
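The `cf` object also accepts `cacheTtlByStatus`, which varies the TTL by origin status code (for example, cache successes for an hour but never cache server errors). The helper below is not part of any Cloudflare API; it is a small pure sketch of how such status-range rules resolve, so you can reason about a TTL policy outside a Worker.

```javascript
// cf.cacheTtlByStatus maps status ranges to TTLs, e.g.:
//   cf: { cacheTtlByStatus: { "200-299": 3600, "404": 60, "500-599": 0 } }
// This hypothetical helper mirrors how such rules resolve for a given status.
function resolveTtl(rules, status) {
  for (const [range, ttl] of Object.entries(rules)) {
    const [lo, hi] = range.includes("-")
      ? range.split("-").map(Number)
      : [Number(range), Number(range)];
    if (status >= lo && status <= hi) return ttl;
  }
  return 0; // no matching rule: do not cache
}

const rules = { "200-299": 3600, "404": 60, "500-599": 0 };
console.log(resolveTtl(rules, 200)); // 3600
console.log(resolveTtl(rules, 404)); // 60
console.log(resolveTtl(rules, 503)); // 0
```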
Programmatic Cache API
If your Worker generates the response (e.g., parsing JSON from D1 to construct a custom payload), you must manually read and write to the cache.
The Standard Cache Object
You access the cache via the global `caches.default` object.
```javascript
export default {
  async fetch(request, env, ctx) {
    // 1. Caching only works for GET requests
    if (request.method !== "GET") return new Response("Method not allowed", { status: 405 });

    const cacheUrl = new URL(request.url);
    // Create a generic cache key based on the URL
    const cacheKey = new Request(cacheUrl.toString(), request);
    const cache = caches.default;

    // 2. Try to find the response in the cache
    let response = await cache.match(cacheKey);

    if (!response) {
      // 3. Cache Miss: Generate the response from scratch
      console.log(`Cache miss for: ${request.url}`);
      const data = await env.MY_DB.prepare("SELECT * FROM heavy_table").all();

      // Must set Cache-Control headers so the cache knows how long to store it
      response = Response.json(data, {
        headers: {
          "Cache-Control": "s-maxage=3600" // Cache for 1 hour at the edge
        }
      });

      // 4. Write it to the cache in the background!
      // Use waitUntil so the user doesn't wait for the cache write to finish
      ctx.waitUntil(cache.put(cacheKey, response.clone()));
    } else {
      console.log(`Cache hit for: ${request.url}`);
    }

    // 5. Return the response to the user
    return response;
  }
};
```
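The example above uses the raw request URL as the cache key, so URLs that differ only in irrelevant query parameters produce needless cache misses. One common refinement is to normalize the URL before building the key. The helper below is a sketch, assuming `utm_*` tracking parameters never affect the response; adjust the filter to your own API.

```javascript
// Hypothetical helper: build a normalized cache-key URL so that requests
// differing only in irrelevant query parameters share one cache entry.
// Assumption: utm_* tracking parameters never change the response body.
function normalizedCacheKeyUrl(rawUrl) {
  const url = new URL(rawUrl);
  for (const key of [...url.searchParams.keys()]) {
    if (key.startsWith("utm_")) url.searchParams.delete(key);
  }
  url.searchParams.sort(); // ?b=2&a=1 and ?a=1&b=2 become identical keys
  return url.toString();
}

console.log(normalizedCacheKeyUrl("https://example.com/api?b=2&utm_source=x&a=1"));
// → "https://example.com/api?a=1&b=2"
```

Inside the Worker, you would then build the key with `new Request(normalizedCacheKeyUrl(request.url), request)` instead of using `request.url` directly.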
Important Cache API Rules
- Never Cache Personal Data: Do not blindly cache responses that depend on a user's session cookie or `Authorization` header.
- `response.clone()` is mandatory: You cannot pass a response stream back to the user and into the cache simultaneously without cloning it first.
- `ctx.waitUntil()`: Always wrap `cache.put()` in the execution context to prevent the isolate from shutting down mid-write.
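The first rule can be enforced mechanically. The guard below is a hypothetical helper (not a Cloudflare API): it refuses to cache anything that is not a credential-free GET, using the standard `Authorization` and `Cookie` header names.

```javascript
// Hypothetical guard for the "never cache personal data" rule:
// only treat credential-free GET requests as cacheable.
function isCacheable(method, headers) {
  if (method !== "GET") return false;
  // Headers#get is case-insensitive per the Fetch spec.
  if (headers.get("Authorization")) return false;
  if (headers.get("Cookie")) return false;
  return true;
}
```

In the Worker, call `isCacheable(request.method, request.headers)` before touching `cache.match()` or `cache.put()`, and fall through to an uncached response when it returns false.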
Tiered Caching
When a user in Tokyo requests a file, the request first checks the Tokyo PoP (Point of Presence). On a cache miss, that PoP would typically query your origin server in New York directly.
With Tiered Caching enabled in the Cloudflare Dashboard, the Tokyo PoP will first check larger regional hub PoPs (like Singapore or San Jose) before hitting your origin.
In Workers, Tiered Caching applies automatically to any fetch() requests that pass through the edge cache.
Smart Placement
By default, Cloudflare spins up a Worker Isolate in the data center physically closest to the user.
However, if your Worker script does nothing but query a database located in us-east-1, running the Worker in Sydney is inefficient: the Sydney Worker still has to establish a TLS connection back to us-east-1 anyway.
Smart Placement is a feature you can enable in your wrangler.toml:
```toml
[placement]
mode = "smart"
```
Cloudflare analyzes your script's behavior. If it detects multiple synchronous requests to a slow backend, Cloudflare will automatically execute the Worker script near the backend databases rather than near the user, drastically lowering total latency.
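The benefit is easy to see with a back-of-envelope latency model. The round-trip times below are illustrative assumptions, not measurements: with several sequential, dependent database queries, one long hop plus cheap in-region queries beats paying the long hop on every query.

```javascript
// Back-of-envelope latency model for Smart Placement.
// All round-trip times (RTTs) are illustrative assumptions, in ms.
const userToSydney = 10;   // user <-> nearest PoP (Sydney)
const sydneyToDb = 200;    // Sydney Worker <-> database in us-east-1
const userToUsEast = 210;  // user <-> a PoP near the database
const inRegion = 2;        // Worker <-> database in the same region
const queries = 5;         // sequential, dependent DB queries

// Worker near the user: every query crosses the ocean.
const nearUser = userToSydney + queries * sydneyToDb;

// Worker near the database: one long hop, then cheap local queries.
const nearDb = userToUsEast + queries * inRegion;

console.log(nearUser, nearDb); // 1010 220
```

Under these assumed numbers, Smart Placement cuts total latency from roughly 1010 ms to 220 ms; the gap grows with every additional sequential query.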
What's Next
Writing high-performance code locally is great, but deploying manually via npx wrangler deploy from your laptop is dangerous for production systems.
Proceed to Module 9: CI/CD GitHub Actions to learn how to automate deployments safely.