diff --git a/public/__redirects b/public/__redirects index e4e6c653484babb..fe089fa9c9827c6 100644 --- a/public/__redirects +++ b/public/__redirects @@ -15,6 +15,7 @@ /deprecations/ /fundamentals/api/reference/deprecations/ 301 /learning-paths/ /resources/ 301 /markdown.zip /llms-full.txt 301 +/dynamic-workers/examples/codemode/ /agents/api-reference/codemode/ 301 # changelog /changelog/rss.xml /changelog/rss/index.xml 301 diff --git a/src/content/directory/dynamic-workers.yaml b/src/content/directory/dynamic-workers.yaml new file mode 100644 index 000000000000000..cff988421f94d8d --- /dev/null +++ b/src/content/directory/dynamic-workers.yaml @@ -0,0 +1,12 @@ +name: Dynamic Workers + +entry: + title: Dynamic Workers + url: /dynamic-workers/ + group: Developer platform + additional_groups: [AI] + +meta: + title: Cloudflare Dynamic Workers docs + description: Spin up isolated Workers on demand to execute code + author: "@cloudflare" diff --git a/src/content/docs/dynamic-workers/api-reference.mdx b/src/content/docs/dynamic-workers/api-reference.mdx new file mode 100644 index 000000000000000..935edf394866279 --- /dev/null +++ b/src/content/docs/dynamic-workers/api-reference.mdx @@ -0,0 +1,195 @@ +--- +title: API reference +description: Reference for the Worker Loader binding and the WorkerCode object. +pcx_content_type: reference +sidebar: + order: 5 +--- + +## `load` + +`load(code): WorkerStub` + +Loads a Worker from the provided `WorkerCode` and returns a `WorkerStub` which may be used to invoke the Worker. + +Unlike `get()`, `load()` does not cache by ID. Each call creates a fresh Worker. + +Use `load()` when the code is always new, such as for AI-generated code, one-shot scripts, or previews where reuse does not matter. + +## `get` + +`get(id, getCodeCallback): WorkerStub` + +Loads a Worker with the given ID, returning a `WorkerStub` which may be used to invoke the Worker. + +As a convenience, the loader implements caching of isolates. 
When a new ID is seen the first time, a new isolate is loaded. But the isolate may be kept warm in memory for a while. If later invocations of the loader request the same ID, the existing isolate may be returned again rather than creating a new one. But there is no guarantee. A later call with the same ID may instead start a new isolate from scratch. + +Whenever the system determines it needs to start a new isolate, and it does not already have a copy of the code cached, it invokes `getCodeCallback` to get the Worker's code. This is an async callback, so the application can load the code from remote storage if desired. The callback returns a `WorkerCode` object. + +Because of the caching, you should ensure that the callback always returns exactly the same content when called for the same ID. If anything about the content changes, you must use a new ID. But if the content has not changed, it is best to reuse the same ID to take advantage of caching. If the `WorkerCode` is different every time, you can pass a random ID. + +You could, for example, use IDs of the form `name:version`, where the version number increments every time the code changes. Or you could compute IDs based on a hash of the code and config, so that any change results in a new ID. + +`get()` returns a `WorkerStub`, which can be used to send requests to the loaded Worker. Note that the stub is returned synchronously. You do not have to await it. If the Worker is not loaded yet, requests made to the stub wait for the Worker to load before they are delivered. If loading fails, the request throws an exception. + +It is never guaranteed that two requests go to the same isolate. Even if you use the same `WorkerStub` to make multiple requests, they could execute in different isolates. The callback passed to `loader.get()` could be called any number of times, although it is unusual for it to be called more than once. + +## `WorkerCode` + +This is the structure returned by `getCodeCallback` to represent a Worker. 
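+As a sketch, a complete `WorkerCode` object might look like the following. The compatibility date, module names, and module contents here are illustrative placeholders, not values the API requires; the individual fields are documented in the sections that follow. + +```js
// Hypothetical getCodeCallback: returns a WorkerCode object describing
// a Worker with a JavaScript main module and an importable text module.
async function getCodeCallback() {
  return {
    compatibilityDate: "2025-06-01", // placeholder date
    mainModule: "index.js",
    modules: {
      "index.js": `
        import greeting from "./greeting.txt";
        export default {
          async fetch(request, env, ctx) {
            return new Response(greeting);
          },
        };
      `,
      // A {text: string} module is importable as a plain string
      "greeting.txt": { text: "Hello from a dynamic Worker!" },
    },
  };
}
```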
+ +### `compatibilityDate` + +The [compatibility date](/workers/configuration/compatibility-dates/) for the Worker. This has the same meaning as the `compatibility_date` setting in a Wrangler config file. + +### `compatibilityFlags` Optional + +An optional list of [compatibility flags](/workers/configuration/compatibility-flags/) augmenting the compatibility date. This has the same meaning as the `compatibility_flags` setting in a Wrangler config file. + +### `allowExperimental` Optional + +If `true`, experimental compatibility flags are permitted in `compatibilityFlags`. To set this, the Worker calling the loader must itself have the compatibility flag `experimental` set. Experimental flags cannot be enabled in production. + +### `mainModule` + +The name of the Worker's main module. This must be one of the modules listed in `modules`. + +### `modules` + +A dictionary object mapping module names to their string contents. If the module content is a plain string, the module name must have a file extension indicating its type: either `.js` or `.py`. + +A module's content can also be specified as an object to specify its type independent from the name. The allowed objects are: + +- `{js: string}`: A JavaScript module using ES modules syntax for imports and exports. +- `{cjs: string}`: A CommonJS module using `require()` syntax for imports. +- `{py: string}`: A Python module. +- `{text: string}`: An importable string value. +- `{data: ArrayBuffer}`: An importable `ArrayBuffer` value. +- `{json: object}`: An importable object. The value must be JSON-serializable. However, the value is provided as a parsed object and is delivered as a parsed object. Neither side actually sees the JSON serialization. + +:::caution[Warning] + +While Dynamic Workers support Python, Python Workers are much slower to start than JavaScript Workers, which may defeat some of the benefits of dynamic isolate loading. 
They may also be priced differently when Worker Loaders become generally available. + +::: + +### `globalOutbound` Optional + +Controls whether the dynamic Worker has access to the network. The global `fetch()` and `connect()` functions can be blocked or redirected to isolate the Worker. + +If `globalOutbound` is not specified, the default is to inherit the parent Worker's network access, which usually means the dynamic Worker has full access to the public Internet. + +If `globalOutbound` is `null`, the dynamic Worker is totally cut off from the network. Both `fetch()` and `connect()` throw exceptions. + +`globalOutbound` can also be set to any service binding, including service bindings in the parent Worker's `env` as well as loopback bindings from `ctx.exports`. + +Using `ctx.exports` is particularly useful because it lets you customize the binding further for the specific sandbox by setting the value of `ctx.props` that should be passed back to it. The props can contain information to identify the specific dynamic Worker that made the request. + +For example: + +```js +import { WorkerEntrypoint } from "cloudflare:workers"; + +export class Greeter extends WorkerEntrypoint { + fetch(request) { + return new Response(`Hello, ${this.ctx.props.name}!`); + } +} + +export default { + async fetch(request, env, ctx) { + let worker = env.LOADER.get("alice", () => { + return { + // Redirect the Worker's global outbound to send all requests + // to the `Greeter` class, filling in `ctx.props.name` with + // the name "Alice", so that it always responds "Hello, Alice!". + globalOutbound: ctx.exports.Greeter({ props: { name: "Alice" } }), + + // ... code ... + }; + }); + + return worker.getEntrypoint().fetch(request); + }, +}; +``` + +### `env` + +The environment object to provide to the dynamic Worker. + +Using this, you can provide custom bindings to the Worker. 
+ +`env` is serialized and transferred into the dynamic Worker, where it is used directly as the value of `env` there. It may contain: + +- [Structured clonable types](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm). +- [Service Bindings](/workers/runtime-apis/bindings/service-bindings/), including loopback bindings from `ctx.exports`. + +The second point is the key to creating custom bindings. You can define a binding with any arbitrary API by defining a `WorkerEntrypoint` class implementing an RPC API, and then giving it to the dynamic Worker as a Service Binding. + +Moreover, by using `ctx.exports` loopback bindings, you can further customize the bindings for the specific dynamic Worker by setting `ctx.props`, just as described for `globalOutbound` above. + +```js +import { WorkerEntrypoint } from "cloudflare:workers"; + +// Implement a binding which can be called by the dynamic Worker. +export class Greeter extends WorkerEntrypoint { + greet() { + return `Hello, ${this.ctx.props.name}!`; + } +} + +export default { + async fetch(request, env, ctx) { + let worker = env.LOADER.get("alice", () => { + return { + env: { + // Provide a binding which has a method greet() which can be called + // to receive a greeting. The binding knows the Worker's name. + GREETER: ctx.exports.Greeter({ props: { name: "Alice" } }), + }, + + // ... code ... + }; + }); + + return worker.getEntrypoint().fetch(request); + }, +}; +``` + +### `tails` Optional + +You may specify one or more Tail Workers which observe console logs, errors, and other details about the dynamically-loaded Worker's execution. A tail event is delivered to the Tail Worker upon completion of a request to the dynamically-loaded Worker. As always, you can implement the Tail Worker as an alternative entrypoint in your parent Worker, referring to it with `ctx.exports`. 
+ +```js +import { WorkerEntrypoint } from "cloudflare:workers"; + +export default { + async fetch(request, env, ctx) { + let worker = env.LOADER.get("alice", () => { + return { + // Send logs, errors, and other details to `LogTailer`. + // We pass `name` in `ctx.props` so that `LogTailer` knows + // what generated the logs. + tails: [ctx.exports.LogTailer({ props: { name: "alice" } })], + + // ... code ... + }; + }); + + return worker.getEntrypoint().fetch(request); + }, +}; + +export class LogTailer extends WorkerEntrypoint { + async tail(events) { + let name = this.ctx.props.name; + + await fetch(`https://example.com/submit-logs/${name}`, { + method: "POST", + body: JSON.stringify(events), + }); + } +} +``` diff --git a/src/content/docs/dynamic-workers/configuration/bindings.mdx b/src/content/docs/dynamic-workers/configuration/bindings.mdx new file mode 100644 index 000000000000000..c17d0f4c76d0096 --- /dev/null +++ b/src/content/docs/dynamic-workers/configuration/bindings.mdx @@ -0,0 +1,197 @@ +--- +title: Bindings +description: Pass data and resource bindings to Dynamic Workers. +pcx_content_type: how-to +sidebar: + order: 1 +--- + +import { WranglerConfig } from "~/components"; + +You can pass [bindings](/workers/runtime-apis/bindings/) to Dynamic Workers to allow them to: + +- Write and read data from a [KV namespace](/kv/) +- Query a [D1 database](/d1/) +- Store and retrieve objects from an [R2 bucket](/r2/) +- Run inference with [Workers AI](/workers-ai/) + +Normally, to give a Worker access to a binding, you declare that binding in the Worker's Wrangler configuration and deploy it with that configuration. + +Dynamic Workers work differently. Because they are created at runtime, they do not come with bindings declared ahead of time. Instead, the Worker that creates the Dynamic Worker is responsible for passing bindings in when it calls `load()`. + +This gives the Worker that loads the Dynamic Worker control over what the Dynamic Worker can access. 
It can decide which resources to pass through, which names to expose them under, and whether to pass the original binding directly or wrap it in a custom interface so it can add its own logic around each operation. + +To pass bindings to a Dynamic Worker, you need to: + +1. Add the binding to the Worker that is loading the Dynamic Workers, so it can grant the Dynamic Worker access to it. +2. When calling `load()` to create the Dynamic Worker, pass the binding in the [`env`](/workers/runtime-apis/handlers/fetch/#parameters) object. This is where you choose the name the Dynamic Worker will use for that binding. +3. In the Dynamic Worker, read the binding from `env` using the name you assigned in `load()`. + +```txt +┌─────────────────────────────────────────────────────────┐ +│ Loader Worker Bindings │ +│ ┌───────────────┐ ┌──────────┐ ┌──────────┐ │ +│ │ worker_loaders│ │ MY_KV │ │ MY_D1 │ │ +│ └───────┬───────┘ └─────┬────┘ └─────┬────┘ │ +│ │ │ │ │ +│ ▼ │ │ │ +│ ┌───────────────────────┼────────────┼──────────────┐ │ +│ │ Loader Worker │ │ │ │ +│ │ env.LOADER │ │ │ │ +│ │ env.MY_KV ◄─────────┘ │ │ │ +│ │ env.MY_D1 ◄───────────────────────┘ │ │ +│ │ │ │ +│ │ load({ │ │ +│ │ env: { │ │ +│ │ STORAGE: env.MY_KV, ──┐ │ │ +│ │ DB: env.MY_D1, ──┼──┐ │ │ +│ │ } │ │ │ │ +│ │ }) │ │ │ │ +│ └─────────────────────────────┼──┼───────────────────┘ │ +│ │ │ │ +│ ▼ ▼ │ +│ ┌─────────────────────────────────────────────────────┐ │ +│ │ Dynamic Worker (created at runtime) │ │ +│ │ env.STORAGE → reads/writes KV │ │ +│ │ env.DB → queries D1 │ │ +│ └─────────────────────────────────────────────────────┘ │ +└─────────────────────────────────────────────────────────┘ +``` + +#### Add the binding to the loader Worker + +To pass a resource to a Dynamic Worker, first add that resource as a binding in the loader Worker's Wrangler configuration. 
+ +The loader Worker is the deployed Worker that uses the `worker_loaders` binding to create Dynamic Workers: + + + +```toml +[[worker_loaders]] +binding = "LOADER" + +[[kv_namespaces]] +binding = "MY_KV" +id = "" +``` + + + +#### Pass the binding when creating the Dynamic Worker + +When you call `load()` to create the Dynamic Worker, include an [`env`](/workers/runtime-apis/handlers/fetch/#parameters) object that specifies which bindings the Dynamic Worker should have access to. The keys you set in `env` become the binding names inside the Dynamic Worker. + +You can also use `env` to pass plain values like strings, numbers, booleans, objects, arrays, and `ArrayBuffer` values. These values are copied into the Dynamic Worker when it is created, so changes inside the Dynamic Worker do not affect the original values in the loader Worker. + +```js +const worker = env.LOADER.load({ + // ... + env: { + STORAGE: env.MY_KV, + GREETING: "Hello", + }, +}); +``` + +#### Use the binding in the Dynamic Worker + +The Dynamic Worker accesses `env.STORAGE` to read from KV and `env.GREETING` to use the plain value passed in by the loader Worker: + +```js +// Inside the Dynamic Worker +export default { + async fetch(request, env) { + const name = await env.STORAGE.get("user:1"); + return new Response(`${env.GREETING}, ${name}`); + }, +}; +``` + +## Custom bindings + +When you pass a binding like [KV](/kv/) or [D1](/d1/) to a Dynamic Worker, the Dynamic Worker can call any method on it directly. If you want to control what happens when those calls are made — for example, logging every write, validating inputs before they reach the database, or returning custom errors — you can create a custom binding instead. + +A custom binding is a class you define in the loader Worker with its own methods. You pass an instance of it to the Dynamic Worker in place of the original binding. 
When the Dynamic Worker calls `env.STORAGE.get("user:1")`, that does not go to KV directly — it runs the `get()` method you defined in your loader Worker, where you can add logging, validation, or any other logic before reading from KV. + +### Logging KV writes on Dynamic Workers to Workers Analytics Engine + +In this example, you create a custom binding so that every time the Dynamic Worker makes a write to KV, that operation is logged to [Workers Analytics Engine](/analytics/analytics-engine/) to track usage. + +#### Add bindings to the loader Worker + +First, grant the loader Worker access to the [KV namespace](/kv/) that the Dynamic Worker will read and write to, and a [Workers Analytics Engine](/analytics/analytics-engine/) dataset so you can log each write operation. + + + +```toml +[[worker_loaders]] +binding = "LOADER" + +[[kv_namespaces]] +binding = "MY_KV" +id = "" + +[[analytics_engine_datasets]] +binding = "ANALYTICS" +dataset = "kv_usage" +``` + + + +#### Define the custom binding + +Next, create a class in the loader Worker that extends `WorkerEntrypoint` and define the methods you want the Dynamic Worker to be able to call. `WorkerEntrypoint` makes these methods callable from the Dynamic Worker using [RPC](/workers/runtime-apis/rpc/), allowing the Dynamic Worker to call them as if they were local, like `env.STORAGE.put()`, even though each call actually runs in the loader Worker where the actual KV binding and your custom logic live. 
+ +In this example, the exported class is called `TrackedKV` and has two methods: + +- `get(key)`: reads a value from KV +- `put(key, value)`: writes a value to KV, then logs the write to Analytics Engine + +```js +import { WorkerEntrypoint } from "cloudflare:workers"; + +export class TrackedKV extends WorkerEntrypoint { + async get(key) { + return this.env.MY_KV.get(key); + } + + async put(key, value) { + await this.env.MY_KV.put(key, value); + + // Write the KV write operation to Workers Analytics Engine + this.env.ANALYTICS.writeDataPoint({ + indexes: ["kv_write"], + blobs: [key], + doubles: [typeof value === "string" ? value.length : 0], + }); + } +} +``` + +Once `TrackedKV` is exported, it becomes available on [`ctx.exports`](/workers/runtime-apis/context/#exports) in the loader Worker's `fetch()` handler — `ctx` is the handler's third parameter, after `request` and `env`. This is how you will pass it to the Dynamic Worker in the next step. + +#### Pass the custom binding to the Dynamic Worker + +Now, instead of passing `env.MY_KV` directly to the Dynamic Worker (which would give it raw access to KV and bypass the logging logic you added), pass an instance of the `TrackedKV` class you defined in the previous step. Create an instance with `ctx.exports.TrackedKV()` and pass it through the `env` object in `load()`, just like you would with any other binding: + +```js +const worker = env.LOADER.load({ + // ... + env: { + STORAGE: ctx.exports.TrackedKV(), + }, +}); +``` + +The Dynamic Worker will see `STORAGE` as a binding and will be able to call `env.STORAGE.get()` and `env.STORAGE.put()` because those are the methods you defined on `TrackedKV`. When the Dynamic Worker calls those methods, they will run in the loader Worker, where your `TrackedKV` class reads from KV and logs writes to Analytics Engine. 
+ +```js +// Inside the Dynamic Worker +export default { + async fetch(request, env) { + await env.STORAGE.put("user:1", "Alice"); + const value = await env.STORAGE.get("user:1"); + return new Response(value); + }, +}; +``` diff --git a/src/content/docs/dynamic-workers/configuration/egress-control.mdx b/src/content/docs/dynamic-workers/configuration/egress-control.mdx new file mode 100644 index 000000000000000..b33e901387ad0cc --- /dev/null +++ b/src/content/docs/dynamic-workers/configuration/egress-control.mdx @@ -0,0 +1,114 @@ +--- +title: Egress control +description: Restrict, intercept, and audit outbound network access for dynamic Workers. +pcx_content_type: how-to +sidebar: + order: 3 +--- + +When you run untrusted or AI-generated code in a dynamic Worker, you need to control what it can access on the network. You might want to: + +- block all outbound access so the dynamic Worker can only use the bindings you give it +- restrict outbound requests to a specific set of allowed destinations +- inject credentials into outbound requests without exposing secrets to the dynamic Worker +- log or audit every outbound request for observability + +The `globalOutbound` option in the `WorkerCode` object returned by `get()` or passed to `load()` controls all of this. It intercepts every `fetch()` and `connect()` call the dynamic Worker makes. + +## Block all outbound access + +Set `globalOutbound` to `null` to fully isolate the dynamic Worker from the network: + +```js +return { + mainModule: "index.js", + modules: { "index.js": code }, + globalOutbound: null, +}; +``` + +This causes any `fetch()` or `connect()` request from the dynamic Worker to throw an exception. If the dynamic Worker needs network access, you can intercept outbound requests instead. + +## Intercept outbound requests + +To intercept outbound requests, define a `WorkerEntrypoint` class in the loader Worker that acts as a gateway. 
Every `fetch()` and `connect()` call the dynamic Worker makes goes through this gateway instead of hitting the network directly. Pass the gateway to the dynamic Worker with `globalOutbound` and `ctx.exports`: + +```js +import { WorkerEntrypoint } from "cloudflare:workers"; + +export class HttpGateway extends WorkerEntrypoint { + async fetch(request) { + // Every outbound fetch() from the dynamic Worker arrives here. + // Inspect, modify, block, or forward the request. + return fetch(request); + } +} + +export default { + async fetch(request, env, ctx) { + const worker = env.LOADER.get("my-worker", async () => { + return { + mainModule: "index.js", + modules: { "index.js": code }, + + // Pass the gateway as a service binding. + // The dynamic Worker's fetch() and connect() calls + // are routed through HttpGateway instead of going + // to the network directly. + globalOutbound: ctx.exports.HttpGateway(), + }; + }); + + return worker.getEntrypoint().fetch(request); + }, +}; +``` + +From here, you can add any logic to the gateway, such as restricting destinations, injecting credentials, or logging requests. + +## Inject credentials + +A common pattern is attaching credentials to outbound requests so the dynamic Worker never sees the secret. Use `ctx.props` to pass per-tenant or per-request context to the gateway. + +The dynamic Worker calls `fetch()` normally. `HttpGateway` intercepts the request, attaches the token from the loader Worker's environment, and forwards it. The dynamic Worker never has access to `API_TOKEN`. 
+ +```js +import { WorkerEntrypoint } from "cloudflare:workers"; + +export class HttpGateway extends WorkerEntrypoint { + async fetch(request) { + const headers = new Headers(request.headers); + headers.set("Authorization", `Bearer ${this.env.API_TOKEN}`); + headers.set("X-Tenant-Id", this.ctx.props.tenantId); + + return fetch(new Request(request, { headers })); + } +} + +export default { + async fetch(request, env, ctx) { + const tenantId = getTenantFromRequest(request); + + const worker = env.LOADER.get(`tenant:${tenantId}`, async () => { + return { + mainModule: "index.js", + modules: { + "index.js": ` + export default { + async fetch() { + const resp = await fetch("https://api.example.com/data"); + return new Response(await resp.text()); + }, + }; + `, + }, + globalOutbound: ctx.exports.HttpGateway({ + props: { tenantId }, + }), + }; + }); + + return worker.getEntrypoint().fetch(request); + }, +}; +``` diff --git a/src/content/docs/dynamic-workers/configuration/index.mdx b/src/content/docs/dynamic-workers/configuration/index.mdx new file mode 100644 index 000000000000000..23fbe49550c02aa --- /dev/null +++ b/src/content/docs/dynamic-workers/configuration/index.mdx @@ -0,0 +1,15 @@ +--- +title: Configuration +description: Configure bindings, static assets, network access, and logs. +pcx_content_type: navigation +sidebar: + order: 3 + group: + hideIndex: true +--- + +import { DirectoryListing } from "~/components"; + +This section covers the main runtime controls for Dynamic Workers. + + diff --git a/src/content/docs/dynamic-workers/configuration/observability.mdx b/src/content/docs/dynamic-workers/configuration/observability.mdx new file mode 100644 index 000000000000000..c45c87d284455be --- /dev/null +++ b/src/content/docs/dynamic-workers/configuration/observability.mdx @@ -0,0 +1,126 @@ +--- +title: Observability +description: Capture, retrieve, and forward logs from dynamic Workers. 
+pcx_content_type: how-to +sidebar: + order: 4 +--- + +import { WranglerConfig } from "~/components"; + +Dynamic Workers emit logs during execution: `console.log()` output, exceptions, and request metadata. To access those logs, you attach a [Tail Worker](/workers/observability/logs/tail-workers/), a callback that runs after the Dynamic Worker finishes and passes along everything it collected. + +This guide will show you how to: + +- Store Dynamic Worker logs so you can search, filter, and query them +- Collect logs during execution and return them in real time, for development and debugging + +## Capture logs with Tail Workers + +To save logs emitted by a Dynamic Worker, you need to capture them and write them somewhere they can be stored. Setting this up requires three steps: + +1. Enabling [Workers Logs](/workers/observability/logs/workers-logs/) on the loader Worker so that log output is saved. +2. Defining a Tail Worker that receives logs from the Dynamic Worker and writes them to Workers Logs. +3. Attaching the Tail Worker to the Dynamic Worker when you create it. + +:::note +Tail Workers run asynchronously after the Dynamic Worker has already sent its response, so they do not add latency to the request. +::: + +### Enable Workers Logs on the loader Worker + +Enable [Workers Logs](/workers/observability/logs/workers-logs/) by adding the `observability` setting to the loader Worker's Wrangler configuration. However, Workers Logs only captures log output from the loader Worker itself. Dynamic Workers are separate, so their `console.log()` calls are not included automatically. To get Dynamic Worker logs into Workers Logs, you need to define a Tail Worker that receives logs from the Dynamic Worker and writes them into the loader Worker's Workers Logs. 
+ + + +```toml +[observability] +enabled = true +head_sampling_rate = 1 +``` + + + +### Define the Tail Worker + +When a Dynamic Worker runs, the runtime collects all of its `console.log()` calls, exceptions, and request metadata. By default, those logs are discarded after the Dynamic Worker finishes. + +To keep them, you define a Tail Worker on the loader Worker. A Tail Worker is a class with a `tail()` method. This is where you write the code that decides what happens with the logs. The runtime will call this method after the Dynamic Worker finishes, passing in everything it collected during execution. + +Inside `tail()`, you write each log entry to Workers Logs by calling `console.log()` with a JSON object. Include a `workerId` field in each entry so you can tell which Dynamic Worker produced each log and use it to filter and search the logs by Dynamic Worker later on. + +```js +import { WorkerEntrypoint } from "cloudflare:workers"; + +export class DynamicWorkerTail extends WorkerEntrypoint { + async tail(events) { + for (const event of events) { + for (const log of event.logs) { + console.log({ + source: "dynamic-worker-tail", + workerId: this.ctx.props.workerId, + level: log.level, + message: log.message, + }); + } + } + } +} +``` + +The Tail Worker reads `workerId` from `this.ctx.props.workerId`. You set this value when you attach the Tail Worker to the Dynamic Worker in the next step. + +Since the Tail Worker is defined within the loader Worker, its `console.log()` output is saved to Workers Logs along with the loader Worker's own logs. + +### Attach the Tail Worker to the Dynamic Worker + +When you create the Dynamic Worker, pass the Tail Worker in the [`tails`](/dynamic-workers/api-reference/#tails) array. This tells the runtime: after this Dynamic Worker finishes, send its collected logs to the Tail Worker you defined. 
+ +To reference the `DynamicWorkerTail` class you defined in the previous step, use [`ctx.exports`](/workers/runtime-apis/context/#exports). `ctx` is the third parameter in the loader Worker's `fetch(request, env, ctx)` handler. `ctx.exports` gives you access to classes that are exported from the loader Worker. Because the Dynamic Worker runs in a separate context and cannot access the class directly, you use `ctx.exports.DynamicWorkerTail()` to create a reference that the runtime can wire up to the Dynamic Worker. + +You also need to tell the Tail Worker which Dynamic Worker it is logging for. Since the Tail Worker runs separately from the loader Worker's `fetch()` handler, it does not have access to your local variables. To pass it information, use the [`props`](/workers/runtime-apis/context/#props) option when you create the instance. `props` is a plain object of key-value pairs that you set when attaching the Tail Worker and that the Tail Worker can read at `this.ctx.props` when it runs. In this case, you pass the `workerId` so the Tail Worker knows which Dynamic Worker produced the logs. + +```js +const worker = env.LOADER.get(workerId, () => ({ + mainModule: WORKER_MAIN, + modules: { + [WORKER_MAIN]: WORKER_SOURCE, + }, + tails: [ + ctx.exports.DynamicWorkerTail({ + props: { workerId }, + }), + ], +})); + +return worker.getEntrypoint().fetch(request); +``` + +## Return logs in real time + +The setup above stores logs for later, but sometimes you need logs right away for real-time development. The challenge is that the Tail Worker and the loader Worker's `fetch()` handler run separately. The Tail Worker has the logs, but the `fetch()` handler is the one building the response. You need a shared place where the Tail Worker can write the logs and the `fetch()` handler can read them. + +A [Durable Object](/durable-objects/) works well for this. Both the Tail Worker and the `fetch()` handler can look up the same Durable Object instance by name. 
The Tail Worker writes logs into it after the Dynamic Worker finishes, and the `fetch()` handler reads them out and includes them in the response. + +The pattern works like this: + +1. The `fetch()` handler creates a log session in a Durable Object before running the Dynamic Worker. +2. The Dynamic Worker runs and produces logs. +3. After the Dynamic Worker finishes, the Tail Worker writes the collected logs to the same Durable Object. +4. The `fetch()` handler reads the logs from the Durable Object and returns them in the response. + +```js +import { exports } from "cloudflare:workers"; + +// 1. Create a log session before running the Dynamic Worker. +const logSession = exports.LogSession.getByName(workerName); +const logWaiter = await logSession.waitForLogs(); + +// 2. Run the Dynamic Worker. +const response = await worker.getEntrypoint().fetch(request); + +// 3. Wait up to 1 second for the Tail Worker to deliver logs. +const logs = await logWaiter.getLogs(1000); +``` + +For a full working implementation, refer to this example. diff --git a/src/content/docs/dynamic-workers/configuration/static-assets.mdx b/src/content/docs/dynamic-workers/configuration/static-assets.mdx new file mode 100644 index 000000000000000..6fff22812b843d8 --- /dev/null +++ b/src/content/docs/dynamic-workers/configuration/static-assets.mdx @@ -0,0 +1,191 @@ +--- +title: Static assets +description: Serve HTML, JavaScript, images, and other assets from Dynamic Workers. +pcx_content_type: how-to +sidebar: + order: 2 +--- + +import { WranglerConfig } from "~/components"; + +Dynamic Workers can serve [static assets](/workers/static-assets/) like HTML pages, JavaScript bundles, images, and other files alongside your Worker code. This is useful when you need a Dynamic Worker to serve a full-stack application. + +Static assets for Dynamic Workers work differently from static assets in regular Workers. 
Instead of [uploading assets at deploy time](/workers/static-assets/direct-upload/), you provide them at runtime through the Worker Loader `get()` callback, sourcing them from R2, KV, or another storage backend. + +## How it works + +There are three parts to setting up static assets for Dynamic Workers: + +1. **Store the assets** — Upload static files to a [KV namespace](/kv/), keyed by project ID and pathname. +2. **Define an asset binding in the loader Worker** — Create a class that handles requests for static files by reading them from KV and returning them with the correct headers. +3. **Pass the binding to the Dynamic Worker** — The Dynamic Worker uses it to serve static files by calling `env.ASSETS.fetch(request)`. + +### Store the static assets + +Static assets are stored in a [KV namespace](/kv/), separated by project ID so each project's files are isolated from each other: + +```txt +project/{projectId}/assets/index.html → file content +project/{projectId}/assets/app.js → file content +project/{projectId}/manifest → asset manifest +``` + +When a user deploys their project through your platform's upload API, store each file in KV under its pathname: + +```js +await env.KV_ASSETS.put(`project/${projectId}/assets${pathname}`, fileContent); +``` + +You also need to store a manifest, a mapping that tells the asset handler which files exist and what their content types are. 
Use `buildAssetManifest()` from [`@cloudflare/worker-bundler`](https://www.npmjs.com/package/@cloudflare/worker-bundler) to generate it from your assets: + +```js +import { buildAssetManifest } from "@cloudflare/worker-bundler"; + +const assets = { + "/index.html": htmlContent, + "/app.js": jsContent, + "/style.css": cssContent, +}; + +const manifest = await buildAssetManifest(assets); + +await env.KV_ASSETS.put( + `project/${projectId}/manifest`, + JSON.stringify(manifest), +); +``` + +### Add bindings to the loader Worker + +Grant the loader Worker access to the KV namespace where you stored the assets: + +<WranglerConfig> + +```jsonc +{ + "worker_loaders": [{ "binding": "LOADER" }], + "kv_namespaces": [ + { + "binding": "KV_ASSETS", + "id": "<YOUR_KV_NAMESPACE_ID>", + }, + ], +} +``` + +</WranglerConfig> + +### Define the asset binding + +Create a class in the loader Worker that extends `WorkerEntrypoint` and define a `fetch()` method. `WorkerEntrypoint` makes this method callable from the Dynamic Worker using [RPC](/workers/runtime-apis/rpc/). When the Dynamic Worker calls `env.ASSETS.fetch(request)`, it runs this method in the loader Worker, where the KV binding and your asset-serving logic live. + +The class takes a `projectId` [prop](/workers/runtime-apis/context/#props) so it knows which project's assets to look up. When `fetch()` is called, it: + +1. Loads the project's asset manifest from KV. +2. Resolves the request pathname to a file. +3. Fetches the file content from KV. +4. Returns a `Response` with the correct `Content-Type` header. + +#### Use @cloudflare/worker-bundler to handle static asset serving + +Instead of writing your own logic to match request paths to files, detect content types, and set cache headers, use the [`@cloudflare/worker-bundler`](https://www.npmjs.com/package/@cloudflare/worker-bundler) package to handle static asset serving.
In your `fetch()` method, pass `handleAssetRequest()` two things: + +- **A manifest**, the path-to-content-type mapping you stored in KV during upload, built with `buildAssetManifest()`. This tells `handleAssetRequest()` which files exist and what their content types are. +- **A storage object**, which tells `handleAssetRequest()` how to read files from your KV namespace. It has one method, `get(pathname)`, which reads and returns the content for a given file path. + +`handleAssetRequest()` serves the file if it finds a match in the manifest, with the correct headers for content type and caching. + +```js +import { WorkerEntrypoint } from "cloudflare:workers"; +import { handleAssetRequest } from "@cloudflare/worker-bundler"; + +export class AssetBinding extends WorkerEntrypoint { + async fetch(request) { + const { projectId } = this.ctx.props; + + // Load the project's asset manifest from KV + const manifest = await this.env.KV_ASSETS.get( + `project/${projectId}/manifest`, + { type: "json", cacheTtl: 300 }, + ); + + if (!manifest) { + return new Response("No assets found", { status: 404 }); + } + + // Storage object — handleAssetRequest calls get() to + // read file content when it needs to serve an asset. + // get() is an arrow function so `this` still refers to the + // AssetBinding instance rather than the storage object itself. + const storage = { + get: async (pathname) => + this.env.KV_ASSETS.get( + `project/${projectId}/assets${pathname}`, + { type: "arrayBuffer", cacheTtl: 86_400 }, + ), + }; + + const response = await handleAssetRequest(request, manifest, storage); + return response ?? new Response("Not Found", { status: 404 }); + } +} +``` + +**Note:** The `cacheTtl` option caches KV results so repeated requests do not hit KV storage every time. The manifest uses a shorter cache (5 minutes) so new deploys are picked up quickly. Asset content uses a longer cache (24 hours) since files at the same path do not change between deploys.
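The manifest-plus-storage contract described above is small enough to sketch. The following `serveAsset()` function is a hypothetical, simplified stand-in for what an asset handler like `handleAssetRequest()` does internally; the function name and the `{ contentType }` manifest entry shape are illustrative assumptions, not the package's actual implementation:

```javascript
// Hypothetical sketch of an asset handler. It resolves the request
// path against a manifest, reads the file from storage, and attaches
// Content-Type and caching headers. It returns null when no asset
// matches, so the caller can decide how to respond with a 404.
async function serveAsset(request, manifest, storage) {
  let pathname = new URL(request.url).pathname;

  // Treat "/" as the HTML entry point, a common convention.
  if (pathname === "/") pathname = "/index.html";

  const entry = manifest[pathname];
  if (!entry) return null;

  const body = await storage.get(pathname);
  if (body === null) return null;

  return new Response(body, {
    headers: {
      "Content-Type": entry.contentType,
      // Long-lived cache: content at a given path does not change between deploys.
      "Cache-Control": "public, max-age=86400",
    },
  });
}
```

Whatever the real implementation looks like, this is the shape of the contract: the manifest decides whether a path exists and what content type it has, and the storage object is only consulted for the bytes.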
+ +Once `AssetBinding` is exported, it becomes available on [`ctx.exports`](/workers/runtime-apis/context/#exports) in the loader Worker's `fetch()` handler. `ctx` is the handler's third parameter, after `request` and `env`. This is how you pass it to the Dynamic Worker in the next step. + +### Pass the asset binding to the Dynamic Worker + +When you call `get()` to create the Dynamic Worker, include the `AssetBinding` in the `env` object so the Dynamic Worker can use it to serve static files. To reference the `AssetBinding` class you defined in the previous step, use `ctx.exports.AssetBinding()` and pass the `projectId` as a [prop](/workers/runtime-apis/context/#props) so it knows which project's assets to serve. This works the same way as [custom bindings](/dynamic-workers/configuration/bindings/#custom-bindings) — `props` is how you pass information to the class, and the class reads it at `this.ctx.props` when it runs. + +```js +export default { + async fetch(request, env, ctx) { + const projectId = getProjectIdFromRequest(request); + + const worker = env.LOADER.get(projectId, async () => { + const serverCode = await loadServerCode(projectId); + + return { + mainModule: "index.js", + modules: { + "index.js": { js: serverCode }, + }, + compatibilityDate: "2025-01-01", + env: { + ASSETS: ctx.exports.AssetBinding({ + props: { projectId }, + }), + }, + }; + }); + + return await worker.getEntrypoint().fetch(request); + }, +}; +``` + +The Dynamic Worker sees `ASSETS` as a binding and can call `env.ASSETS.fetch(request)` because that is the method you defined on `AssetBinding`. When the Dynamic Worker calls that method, it runs in the loader Worker, where your `AssetBinding` class reads the manifest and file content from KV. + +### Use the asset binding in the Dynamic Worker + +From the Dynamic Worker's perspective, `env.ASSETS` works like any other binding. 
The user writes their server code and calls `env.ASSETS.fetch()` to serve static files: + +```js +// Inside the Dynamic Worker +export default { + async fetch(request, env) { + const url = new URL(request.url); + + // Handle API routes directly + if (url.pathname.startsWith("/api/")) { + return Response.json({ hello: "world" }); + } + + // Everything else — serve static assets + return env.ASSETS.fetch(request); + }, +}; +``` + +When the Dynamic Worker calls `env.ASSETS.fetch(request)`, the call goes through RPC to the loader Worker's `AssetBinding`, which looks up the file in the manifest and reads it from KV. The Dynamic Worker does not need to handle any of this — it calls `env.ASSETS.fetch(request)` and gets back the file with the correct headers, ready to return to the client. diff --git a/src/content/docs/dynamic-workers/examples/agents-executing-code.mdx b/src/content/docs/dynamic-workers/examples/agents-executing-code.mdx new file mode 100644 index 000000000000000..5cf13e4a73d1dbe --- /dev/null +++ b/src/content/docs/dynamic-workers/examples/agents-executing-code.mdx @@ -0,0 +1,7 @@ +--- +title: Execute code from Agents +description: Use Dynamic Workers when Agents need to generate and execute code safely. +pcx_content_type: example +sidebar: + order: 3 +--- diff --git a/src/content/docs/dynamic-workers/examples/codemode.mdx b/src/content/docs/dynamic-workers/examples/codemode.mdx new file mode 100644 index 000000000000000..e9b08f48f57b02d --- /dev/null +++ b/src/content/docs/dynamic-workers/examples/codemode.mdx @@ -0,0 +1,8 @@ +--- +title: Codemode +description: Read the Codemode reference in Cloudflare Agents docs. 
+pcx_content_type: navigation +external_link: /agents/api-reference/codemode/ +sidebar: + order: 2 +--- diff --git a/src/content/docs/dynamic-workers/examples/dynamic-workers-playground.mdx b/src/content/docs/dynamic-workers/examples/dynamic-workers-playground.mdx new file mode 100644 index 000000000000000..bedff0af92c39cf --- /dev/null +++ b/src/content/docs/dynamic-workers/examples/dynamic-workers-playground.mdx @@ -0,0 +1,7 @@ +--- +title: Dynamic Workers Playground +description: Explore a playground built with Dynamic Workers. +pcx_content_type: example +sidebar: + order: 1 +--- diff --git a/src/content/docs/dynamic-workers/examples/index.mdx b/src/content/docs/dynamic-workers/examples/index.mdx new file mode 100644 index 000000000000000..f1f62de0a0e7b91 --- /dev/null +++ b/src/content/docs/dynamic-workers/examples/index.mdx @@ -0,0 +1,15 @@ +--- +title: Examples +description: See common Dynamic Workers patterns for code execution and sandboxing. +pcx_content_type: navigation +sidebar: + order: 4 + group: + hideIndex: true +--- + +import { DirectoryListing } from "~/components"; + +These examples show how teams use Dynamic Workers to run code safely, bundle dependencies, and return results to an application or agent. + + diff --git a/src/content/docs/dynamic-workers/examples/running-ai-generated-code.mdx b/src/content/docs/dynamic-workers/examples/running-ai-generated-code.mdx new file mode 100644 index 000000000000000..d0cdcfb371e0b15 --- /dev/null +++ b/src/content/docs/dynamic-workers/examples/running-ai-generated-code.mdx @@ -0,0 +1,7 @@ +--- +title: Run AI-generated code +description: Validate, bundle, and execute AI-generated code with Dynamic Workers. 
+pcx_content_type: example +sidebar: + order: 4 +--- diff --git a/src/content/docs/dynamic-workers/getting-started.mdx b/src/content/docs/dynamic-workers/getting-started.mdx new file mode 100644 index 000000000000000..99964c1de260f51 --- /dev/null +++ b/src/content/docs/dynamic-workers/getting-started.mdx @@ -0,0 +1,143 @@ +--- +title: Getting started +description: Load and run a Dynamic Worker. +pcx_content_type: get-started +sidebar: + order: 2 +--- + +import { WranglerConfig, TypeScriptExample } from "~/components"; + +Worker Loader lets a Worker create and run other Workers at runtime. Each Dynamic Worker runs in its own isolated sandbox. You provide the code, choose which bindings the Dynamic Worker can access, and control whether the Dynamic Worker can reach the network. + +Dynamic Workers support two loading modes: + +- `load()` creates a fresh Dynamic Worker for one-time execution. +- `get(id, callback)` caches a Dynamic Worker by ID so it can stay warm across requests. + +`load()` is best for one-time code execution, for example when using [Codemode](/agents/api-reference/codemode/). `get(id, callback)` is better when the same code will receive subsequent requests, for example a deployed application that serves many requests. + +### Try it out + +Deploy the Dynamic Workers Playground to get a preconfigured Worker that bundles and executes Dynamic Workers. Import GitHub repos and deploy them as Dynamic Workers, with real-time logs and observability. + +[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/dinasaur404/dynamic-workers-playground) + +## Configure Worker Loader + +To use Dynamic Workers, first deploy a Worker with a `worker_loaders` binding, which exposes `env.LOADER` for creating Dynamic Workers at runtime. + +This Worker creates and runs Dynamic Workers at runtime.
It also defines the platform-level controls for those Dynamic Workers, including: + +- which bindings they can access +- whether they can make outbound network requests +- any other logic for how they are created and invoked + +<WranglerConfig> + +```jsonc +{ + "worker_loaders": [ + { + "binding": "LOADER", + }, + ], +} +``` + +</WranglerConfig> + +## Run a Dynamic Worker + +Use `env.LOADER.load()` to create a Dynamic Worker and run it: + +<TypeScriptExample> + +```ts +export default { + async fetch(request: Request, env: Env): Promise<Response> { + const worker = env.LOADER.load({ + compatibilityDate: "2026-01-01", + mainModule: "src/index.js", + modules: { + "src/index.js": ` + export default { + fetch() { + return new Response("Hello from a dynamic Worker"); + }, + }; + `, + }, + // Block all outbound network access from the Dynamic Worker. + globalOutbound: null, + }); + + return worker.getEntrypoint().fetch(request); + }, +}; +``` + +</TypeScriptExample> + +In this example, `env.LOADER.load()` creates a Dynamic Worker from the code defined in `modules` and returns a stub that represents it. + +`worker.getEntrypoint().fetch(request)` sends the incoming request to the Dynamic Worker's `fetch()` handler, which processes it and returns a response. + +### Reusing a Dynamic Worker across requests + +If the same code will handle multiple requests, use [`get(id, callback)`](/dynamic-workers/api-reference/#get) instead of `load()`. The `id` identifies the Dynamic Worker. When the runtime sees the same `id` again, it can reuse the existing Worker instead of creating a new one. + +<TypeScriptExample> + +```ts +const worker = env.LOADER.get("hello-v1", async () => ({ + compatibilityDate: "2026-01-01", + mainModule: "index.js", + modules: { + "index.js": ` + export default { + fetch() { + return new Response("Hello from a dynamic Worker"); + }, + }; + `, + }, + globalOutbound: null, +})); +``` + +</TypeScriptExample> + +## Supported languages + +Dynamic Workers support JavaScript (ES modules and CommonJS) and Python. The code is passed as strings in the `modules` object.
There is no build step, so languages like TypeScript must be compiled to JavaScript before being passed to `load()` or `get()`. + +For the full list of supported module types, refer to the [API reference](/dynamic-workers/api-reference/#modules). + +### Using TypeScript and npm dependencies + +If your Dynamic Worker needs TypeScript compilation or npm dependencies, use [`@cloudflare/worker-bundler`](https://www.npmjs.com/package/@cloudflare/worker-bundler) to bundle source files into a format that `load()` and `get()` accept: + +```ts +import { createWorker } from "@cloudflare/worker-bundler"; + +const worker = env.LOADER.get("my-worker", async () => { + const { mainModule, modules } = await createWorker({ + files: { + "src/index.ts": ` + import { Hono } from 'hono'; + const app = new Hono(); + app.get('/', (c) => c.text('Hello from Hono!')); + export default app; + `, + "package.json": JSON.stringify({ + dependencies: { hono: "^4.0.0" }, + }), + }, + }); + + return { mainModule, modules, compatibilityDate: "2026-01-01" }; +}); +``` + +`createWorker()` handles TypeScript compilation, dependency resolution from npm, and bundling. It returns `mainModule` and `modules` ready to pass directly to `load()` or `get()`. diff --git a/src/content/docs/dynamic-workers/index.mdx b/src/content/docs/dynamic-workers/index.mdx new file mode 100644 index 000000000000000..fc75f8ad0862b81 --- /dev/null +++ b/src/content/docs/dynamic-workers/index.mdx @@ -0,0 +1,44 @@ +--- +title: Dynamic Workers +description: Spin up isolated Workers on demand to execute code. +pcx_content_type: overview +sidebar: + order: 1 +--- + +import { Description } from "~/components"; + +<Description> + +Spin up Workers at runtime to execute code on demand in a secure, sandboxed environment. + +</Description> + +Dynamic Workers let you spin up an unlimited number of Workers to execute code. Because they are created at runtime, they work well for fast, secure code execution.
+ +Dynamic Workers are the lowest-level primitive for spinning up a Worker, giving you full control over how the Worker is composed: which bindings it receives, whether it can reach the network, and more. + +### Get started + +Deploy the Dynamic Workers Playground to create and run Workers dynamically from code you write or import from GitHub, with real-time logs and observability. + +[![Deploy to Cloudflare](https://deploy.workers.cloudflare.com/button)](https://deploy.workers.cloudflare.com/?url=https://github.com/dinasaur404/dynamic-workers-playground) + +## Use Dynamic Workers for + +Use this pattern when code needs to run quickly in a secure, isolated environment. + +- **Code Mode**: LLMs are trained to write code. Run tool-calling logic written in code instead of stepping through many tool calls, which can save up to 80% in inference tokens and cost. +- **AI agents executing code**: Run code for tasks like data analysis, file transformation, API calls, and chained actions. +- **Running AI-generated code**: Run generated code for prototypes, projects, and automations in a secure, isolated sandbox. +- **Fast development and previews**: Load prototypes, previews, and playgrounds in milliseconds. +- **Custom automations**: Create custom tools on the fly that execute a task, call an integration, or automate a workflow. + +## Features + +Because you compose the Worker that runs the code at runtime, you control how that Worker is configured and what it can access. + +- **[Bindings](/dynamic-workers/configuration/bindings/)**: Decide which bindings and structured data the dynamic Worker receives. +- **[Static assets](/dynamic-workers/configuration/static-assets/)**: Decide how the dynamic Worker reads and serves static assets. +- **[Observability](/dynamic-workers/configuration/observability/)**: Attach Tail Workers and capture logs for each run.
+- **[Network access](/dynamic-workers/configuration/egress-control/)**: Intercept or block Internet access for outbound requests. diff --git a/src/icons/dynamic-workers.svg b/src/icons/dynamic-workers.svg new file mode 100644 index 000000000000000..8621d61087440ac --- /dev/null +++ b/src/icons/dynamic-workers.svg @@ -0,0 +1 @@ + diff --git a/src/util/sidebar.ts b/src/util/sidebar.ts index 060aa8a846d672d..0d5fe5b6ddeffb8 100644 --- a/src/util/sidebar.ts +++ b/src/util/sidebar.ts @@ -110,9 +110,12 @@ export async function generateSidebar(group: Group) { const product = directory.find((p) => p.id === group.label); if (product && product.data.entry.group === "Developer platform") { const links = [ - ["llms.txt", `/${product.id}/llms.txt`], + ["llms.txt", `${product.data.entry.url}llms.txt`], ["prompt.txt", "/workers/prompt.txt"], - [`${product.data.name} llms-full.txt`, `/${product.id}/llms-full.txt`], + [ + `${product.data.name} llms-full.txt`, + `${product.data.entry.url}llms-full.txt`, + ], ["Developer Platform llms-full.txt", "/developer-platform/llms-full.txt"], ];