
State and Storage

Because Cloudflare Workers run in V8 isolates distributed across data centers in 300+ cities, they have no local file system and are effectively stateless. You cannot save a file to disk, and although a global variable may survive between requests on the same isolate, isolates can be evicted at any time, so in-memory state cannot be relied on.

To persist data, Cloudflare provides four distinct, edge-native storage products that integrate with Workers via bindings.

Learning Focus

By the end of this module, you will understand the distinct use cases for KV, D1, Durable Objects, and R2, and how to read/write to them within a Worker environment.


1. Workers KV

(Global, Read-Heavy Key-Value Store)

KV (Key-Value) is intended for high-read, low-write data. It stores data centrally and caches it at the edge nearest the user.

  • Best for: Configuration flags, translation strings, redirects, caching API responses.
  • Limitation: Data is eventually consistent. A write can take up to 60 seconds to propagate globally, so KV is not suitable for real-time counters or anything that must immediately read its own latest write.

Basic Usage

Once bound to your environment as MY_KV, the API is incredibly simple:

export default {
  async fetch(request, env, ctx) {
    // 1. WRITE: Set a key
    await env.MY_KV.put("welcome_message", "Hello from edge caching!");

    // 2. READ: Get a key
    const message = await env.MY_KV.get("welcome_message");

    // 3. TTL: Write a temporary key that expires after 60 seconds
    await env.MY_KV.put("session_id", "xyz123", { expirationTtl: 60 });

    return new Response(message);
  },
};
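KV can also parse stored JSON for you via the standard `type` option on get(). Below is a minimal sketch of a helper that reads a JSON config key; the `feature_flags` key name and the default object are illustrative only:

```javascript
// Sketch: reading a JSON value from KV. `kv` is a KV binding such as
// env.MY_KV; `{ type: "json" }` makes get() parse the stored JSON.
async function readFlags(kv) {
  const flags = await kv.get("feature_flags", { type: "json" });
  // get() returns null for missing keys, so fall back to defaults
  return flags ?? { darkMode: false };
}
```

In a Worker you would call this as `readFlags(env.MY_KV)`.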

2. D1 Relational Database

(Serverless SQLite)

D1 is Cloudflare's serverless relational SQL database. It is built on SQLite and queried directly through a binding, so there are no connection strings or connection pools to manage. D1 can also maintain read replicas closer to users.

  • Best for: User profiles, structured application data, complex queries, traditional relational architecture.
  • Limitation: As a relational database, D1 requires a defined schema, and schema changes must be rolled out through migrations.

Basic Usage

export default {
  async fetch(request, env, ctx) {
    // Prepare a secure, SQL-injection-safe query
    const statement = env.MY_DB.prepare(
      "SELECT * FROM Users WHERE email = ? LIMIT 1"
    ).bind("user@example.com");

    // Execute the query
    const { results } = await statement.all();

    if (results.length === 0) return new Response("Not found", { status: 404 });

    return Response.json(results[0]);
  },
};
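Writes use the same prepared-statement pattern. A small sketch, assuming the `Users` table from the query above has `email` and `name` columns (illustrative names):

```javascript
// Sketch: inserting a row into D1. run() executes the statement and
// resolves with a result object whose `success` flag reports the outcome.
async function createUser(db, email, name) {
  const result = await db
    .prepare("INSERT INTO Users (email, name) VALUES (?, ?)")
    .bind(email, name)
    .run();
  return result.success;
}
```

In a Worker you would pass the D1 binding, e.g. `createUser(env.MY_DB, "user@example.com", "Ada")`.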

3. Durable Objects

(Strong Consistency & Stateful Compute)

Durable Objects (DOs) are unique to the Cloudflare ecosystem. Each object is a single, globally consistent point of execution: when you create a Durable Object, Cloudflare instantiates it in exactly one location and attaches durable, transactional storage to that instance.

  • Best for: Real-time chat apps, multiplayer game states, rate limiting, and exact transactional counters.
  • Limitation: All traffic for a specific DO routes to its single location, so requests from users physically far from the object incur higher latency.

Basic Conceptual Usage (Client Side)

Note: Writing the Durable Object class itself requires advanced setup out of scope for this overview, but calling an existing DO from a Worker looks like this:

export default {
  async fetch(request, env, ctx) {
    // 1. Get a unique ID for "chat-room-A"
    const id = env.CHAT_ROOMS.idFromName("chat-room-A");

    // 2. Create the connection stub to that exact global instance
    const roomStub = env.CHAT_ROOMS.get(id);

    // 3. Forward the request to that specific Durable Object
    return await roomStub.fetch(request);
  },
};
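For orientation only, the server side is a class with a `fetch` handler and access to transactional storage. A minimal sketch of an exact counter (in a real project this class is exported from the Worker script and declared in your Wrangler configuration):

```javascript
// Sketch: a minimal Durable Object. All requests for a given object ID
// are serialized through this single instance, so the read-modify-write
// below cannot race with itself.
class Counter {
  constructor(state, env) {
    this.state = state; // durable, transactional storage lives here
  }

  async fetch(request) {
    let count = (await this.state.storage.get("count")) ?? 0;
    count += 1;
    await this.state.storage.put("count", count);
    return new Response(String(count));
  }
}
```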

4. R2 Object Storage

(S3-Compatible Blob Storage)

R2 is Cloudflare's answer to AWS S3. It is designed to store large blob objects (images, videos, backups) with zero egress fees.

  • Best for: Image hosting, user avatars, podcast hosting, large JSON file backups.
  • Limitation: Higher latency than KV for small, frequently read values.

Basic Usage

Within a Worker, R2 exposes a streaming API directly on the binding; you do not need the AWS SDK.

export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);
    const objectKey = url.pathname.slice(1); // e.g. "image.png"

    if (request.method === "PUT") {
      // Stream the upload directly into R2
      // (in production, authenticate this request first)
      await env.MY_BUCKET.put(objectKey, request.body);
      return new Response("Uploaded successfully!");
    }

    if (request.method === "GET") {
      // Get the object
      const object = await env.MY_BUCKET.get(objectKey);

      if (object === null) {
        return new Response("Object Not Found", { status: 404 });
      }

      // Stream the download back to the user
      return new Response(object.body);
    }

    // A fetch handler must always return a Response
    return new Response("Method Not Allowed", { status: 405 });
  },
};
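The GET branch above returns the raw body without a Content-Type. R2 objects carry the HTTP metadata they were uploaded with; here is a small sketch of a helper that copies that metadata onto the response (`writeHttpMetadata` and `httpEtag` are part of the R2 object API):

```javascript
// Sketch: building a Response that preserves an R2 object's stored
// Content-Type and exposes its ETag for browser caching.
function objectToResponse(object) {
  const headers = new Headers();
  object.writeHttpMetadata(headers); // copies e.g. Content-Type
  headers.set("etag", object.httpEtag);
  return new Response(object.body, { headers });
}
```

In the GET branch you would `return objectToResponse(object);` instead of constructing the Response by hand.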

Decision Matrix

When architecting a Worker, use this matrix to choose your storage backend:

  • Fast reads, global distribution, eventual consistency → KV (e.g. redirect maps, translated string caches).
  • Structured data, SQL queries, joins → D1 (e.g. user authentication tables, e-commerce product catalogs).
  • Strict accuracy, real-time state, WebSockets → Durable Objects (e.g. multiplayer syncing, live bidding systems).
  • Large files, blobs, streaming → R2 (e.g. profile avatars, video streams).

What's Next

To use these storage products, your Worker must be explicitly granted access via "Bindings". Proceed to Module 6: Bindings and Secrets to understand how to connect infrastructure securely.