Description
Cloudflare Containers (Wrangler/Miniflare) local dev failures: container port not found, Monitor failed to find container, and missing cloudflare-dev/* images
Summary
When running a Cloudflare Containers-based Worker locally via wrangler dev (Miniflare) on macOS + Docker, the first requests frequently fail with errors like:
```
Error checking if container is ready: connect(): Connection refused: container port not found. Make sure you exposed the port in your container definition.
Container error: [Error: Monitor failed to find container]
Uncaught Error: No such image available named cloudflare-dev/<image>:<hash>
```
We have already ensured the relevant ports are exposed in the image metadata, and we have implemented retries + a dev-only image-tag “seeding” workaround. The issue appears to be a race/consistency bug in the local dev container monitor lifecycle and/or Wrangler’s internal cloudflare-dev/* image tagging / cleanup behavior.
Code reference (shareable)
All references below are either embedded verbatim in this document or link to the public repo. If any links differ from what you see here, treat the embedded snippets in this report as authoritative.
- Repository: https://github.com/ZinTrust/zintrust
- Branch: `release`
What we are building
We run a “gateway” Worker that routes by URL prefix to container-backed Durable Objects using @cloudflare/containers.
- `/mysql/*` → `ZintrustMySqlProxyContainer` on port `8789`
- `/postgres/*` → `ZintrustPostgresProxyContainer` on port `8790`
- `/redis/*` → `ZintrustRedisProxyContainer` on port `8791`
- `/mongodb/*` → `ZintrustMongoDbProxyContainer` on port `8792`
- `/sqlserver/*` → `ZintrustSqlServerProxyContainer` on port `8793`
- `/smtp/*` → `ZintrustSmtpProxyContainer` on port `8794`
Key implementation details:
- Each DO class extends `Container` and starts a single container with `startAndWaitForPorts()`.
- A single Docker image is used for all services; the runtime entrypoint selects which proxy to run (`proxy:mysql`, `proxy:redis`, etc.) and which port.
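The service-to-port routing above can be sketched as a small table. This is an illustrative helper, not the repo's actual code: the service names, ports, and the `proxy:<name>` script convention come from this report, but the exact entrypoint shape (`npm run proxy:<name> -- --port <n>`) is an assumption.

```typescript
// Hypothetical sketch: one image for all services; the entrypoint picks
// which proxy to run and on which port. Ports mirror the routing table above.
type ProxyService = 'mysql' | 'postgres' | 'redis' | 'mongodb' | 'sqlserver' | 'smtp';

const PROXY_PORTS: Record<ProxyService, number> = {
  mysql: 8789,
  postgres: 8790,
  redis: 8791,
  mongodb: 8792,
  sqlserver: 8793,
  smtp: 8794,
};

// Builds an entrypoint such as:
//   ["npm", "run", "proxy:mysql", "--", "--port", "8789"]
// (the argv layout here is our assumption, not the repo's verbatim code)
function createProxyEntrypointSketch(service: ProxyService): string[] {
  const port = PROXY_PORTS[service];
  return ['npm', 'run', `proxy:${service}`, '--', '--port', String(port)];
}
```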
Relevant source:
- Worker + DO container classes: https://github.com/ZinTrust/zintrust/blob/release/packages/cloudflare-containers-proxy/src/index.ts
- Local dev Wrangler config: https://github.com/ZinTrust/zintrust/blob/release/wrangler.containers-proxy.dev.jsonc
- Deployment Wrangler config: https://github.com/ZinTrust/zintrust/blob/release/wrangler.containers-proxy.jsonc
Desired outcome
Local development should be deterministic:
- `wrangler dev` should not fail on first startup because an internal `cloudflare-dev/*` image tag is missing.
- The container monitor should reliably discover the container instance and the configured port without repeated 500s.
- A "first request after starting dev server" should succeed (or at least return a clean 503 "starting" response) without crashing the Worker process.
Environment
- OS: macOS
- Docker: Docker Desktop (local daemon)
- Node: >= 20 (project requirement)
- Miniflare (dev dependency): `miniflare@4.20260217.0`
- Wrangler version: `4.67.0`
Wrangler config validation note (containers[].port)
Wrangler 4.67.0 warns that `port` is an unexpected field under `containers`:

```
Unexpected fields found in containers field: "port"
```

Based on Cloudflare's published Wrangler configuration schema for containers, `port` is not a supported key.

In our setup, the port the container listens on is defined in code via the container-enabled Durable Object class (`defaultPort`) and the `startAndWaitForPorts({ ports: <port> })` call.

So, the correct config is to omit `containers[].port` entirely (this report's embedded configs reflect that).
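For reference, a minimal `containers` entry with the unsupported key omitted might look like the following. The class name and Dockerfile path are taken from the embedded dev config; the inline comment is ours.

```jsonc
{
  "containers": [
    {
      "class_name": "ZintrustMySqlProxyContainer",
      "image": "./docker/containers-proxy-dev/Dockerfile",
      "max_instances": 10
      // no "port" here; the listen port lives in code via defaultPort
    }
  ]
}
```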
How to reproduce
1) Start local dev
From the repo root:
```sh
npm ci

# Start Containers Worker locally
npx wrangler dev --config wrangler.containers-proxy.dev.jsonc --env staging
```

Notes:
- If you hit missing internal `cloudflare-dev/*` image tags (see error below), you can apply the workaround in "Dev-only seeding…" using plain Docker commands.
- This uses the local-dev config (embedded in this document, and also linked above).
- The container `image` in Wrangler points to a wrapper Dockerfile: `docker/containers-proxy-dev/Dockerfile`.
- Some embedded JSONC comments mention `npm run dev:cp` (our internal convenience wrapper). It is not required for Cloudflare reproduction; the canonical commands are `npx wrangler dev ...` + `docker ...` shown in this report.
2) Hit the health endpoints
Using curl:
```sh
curl -i http://localhost:8787/health
curl -i http://localhost:8787/mysql/health
curl -i http://localhost:8787/redis/health
```

Or using the REST Client file:
Actual behavior (observed)
Often on first startup (or first request), the logs show repeated readiness failures:
```
Error checking if container is ready: connect(): Connection refused: container port not found. Make sure you exposed the port in your container definition.
✘ [ERROR] Uncaught Error: No such image available named cloudflare-dev/zintrustmysqlproxycontainer:dcba7228
[dev:cp] Seeding missing image tag: cloudflare-dev/zintrustmysqlproxycontainer:dcba7228
... repeated "container port not found" ...
✘ [ERROR] Container error: [Error: Monitor failed to find container]
✘ [ERROR] Uncaught Error: Monitor failed to find container
... eventually ...
Port 8789 is ready
```
Impacts:
- The Worker may crash with “Uncaught Error …” during startup.
- Even when the process stays up, the “first request” can return 500/uncaught failures.
- The monitor may later recover and report `Port <n> is ready`, but reliability is inconsistent.
Expected behavior
- If the container image is present and `EXPOSE`s the relevant port, `startAndWaitForPorts()` should not produce `container port not found`.
- If the container instance is starting, we expect a clean "not ready yet" state (503 or retry), not an uncaught error.
- Wrangler should not abort because it tries to remove an internal `cloudflare-dev/*` tag that does not exist.
Evidence that ports are exposed in the image
We explicitly set exposed ports in our Dockerfiles:

- Base runtime image `Dockerfile` exposes both the app server port and all proxy ports:

  ```dockerfile
  EXPOSE 7772 8789 8790 8791 8792 8793 8794
  ```

  See: https://github.com/ZinTrust/zintrust/blob/release/Dockerfile

- Local dev wrapper image disables the base image healthcheck and exposes the proxy ports:

  ```dockerfile
  HEALTHCHECK NONE
  EXPOSE 8789 8790 8791 8792 8793 8794
  ```

  See: https://github.com/ZinTrust/zintrust/blob/release/docker/containers-proxy-dev/Dockerfile

Despite this, the monitor still reports `container port not found` intermittently.
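A quick way to confirm the `EXPOSE` metadata on the exact tag Wrangler is running is to read `Config.ExposedPorts` via `docker inspect`. This is a diagnostic sketch of ours (it assumes a plain Docker CLI on PATH); the parsing helper is kept pure so it can be checked without a Docker daemon.

```typescript
import { execFileSync } from 'node:child_process';

// Docker's `Config.ExposedPorts` is a map like { "8789/tcp": {} }, or null
// when the image declares no EXPOSE at all.
function imageExposesPort(exposedPorts: Record<string, unknown> | null, port: number): boolean {
  if (!exposedPorts) return false;
  return Object.keys(exposedPorts).some((key) => key.split('/')[0] === String(port));
}

// Shells out to `docker inspect` for the given image reference and checks
// whether the port is present in the image metadata.
function checkImageExposesPort(image: string, port: number): boolean {
  const raw = execFileSync('docker', [
    'inspect',
    '--format', '{{json .Config.ExposedPorts}}',
    image,
  ]).toString().trim();
  return imageExposesPort(JSON.parse(raw), port);
}

// Example (tag taken from the error logs in this report):
//   checkImageExposesPort('cloudflare-dev/zintrustmysqlproxycontainer:dcba7228', 8789)
```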
Current mitigations / workarounds we implemented
1) Dev-only seeding of missing cloudflare-dev/* tags
Wrangler sometimes fails with:

```
No such image available named cloudflare-dev/<name>:<hash>
```

We added a dev wrapper script to detect those messages and run:

```sh
docker pull docker.io/zintrust/zintrust:latest
# Example (replace with the exact tag Wrangler prints):
docker tag docker.io/zintrust/zintrust:latest cloudflare-dev/zintrustmysqlproxycontainer:dcba7228
```

Implementation:

This is a workaround for local dev only. It does not address the underlying "why is Wrangler referencing a tag that doesn't exist yet?" problem.
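The seeding step above can be sketched as follows. The helper names and the regex are ours; only the error text and the `docker pull`/`docker tag` commands come from this report.

```typescript
import { execFileSync } from 'node:child_process';

// Source image we re-tag from (taken from the workaround commands above).
const SOURCE_IMAGE = 'docker.io/zintrust/zintrust:latest';

// Extracts the missing tag from a Wrangler log line such as:
//   "Uncaught Error: No such image available named cloudflare-dev/foo:abc1234"
function parseMissingImageTag(logLine: string): string | null {
  const match = logLine.match(/No such image available named (cloudflare-dev\/\S+)/);
  return match ? match[1] : null;
}

// Pulls the Hub image and tags it under the name Wrangler expects.
function seedMissingTag(logLine: string): void {
  const tag = parseMissingImageTag(logLine);
  if (!tag) return;
  execFileSync('docker', ['pull', SOURCE_IMAGE], { stdio: 'inherit' });
  execFileSync('docker', ['tag', SOURCE_IMAGE, tag], { stdio: 'inherit' });
}
```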
2) Gateway retries when the container monitor is not ready
When the gateway fetch to the DO stub returns an internal 500 that contains:
- `Monitor failed to find container`
- `container port not found`
- `Connection refused`
…we retry up to 20 times with a short delay, returning a 503 JSON response after max retries.
Implementation:
- `fetchWithContainerRetry()` in https://github.com/ZinTrust/zintrust/blob/release/packages/cloudflare-containers-proxy/src/index.ts
This reduces first-hit failures, but it still depends on the underlying monitor eventually becoming consistent.
3) Use a lightweight ping endpoint for readiness
We set the Container DO `pingEndpoint` to a lightweight endpoint (`/containerstarthealthcheck`) intended to return 200 quickly without depending on downstream DB connectivity.
Implementation:
- `pingEndpoint = 'containerstarthealthcheck'` in each DO class
- See https://github.com/ZinTrust/zintrust/blob/release/packages/cloudflare-containers-proxy/src/index.ts
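For context, the container-side endpoint only needs to do this much. This is an illustrative Node sketch of ours, not the actual server in our image; only the `/containerstarthealthcheck` path comes from this report.

```typescript
import { createServer } from 'node:http';

// Pure routing decision: 200 for the readiness path, 404 otherwise.
// No DB connectivity is involved, so the answer is immediate.
function handleHealthPath(url: string | undefined): { status: number; body: string } {
  if (url === '/containerstarthealthcheck') {
    return { status: 200, body: 'ok' };
  }
  return { status: 404, body: 'not found' };
}

// Minimal HTTP server wiring the readiness path, e.g. startHealthServer(8789).
function startHealthServer(port: number) {
  return createServer((req, res) => {
    const { status, body } = handleHealthPath(req.url);
    res.writeHead(status, { 'content-type': 'text/plain' });
    res.end(body);
  }).listen(port);
}
```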
Why we think this is an upstream (Wrangler/Miniflare/monitor) issue
From the symptoms:
- `cloudflare-dev/*` tags are referenced before they exist
  - Wrangler emits errors trying to use or remove a `cloudflare-dev/<name>:<hash>` image that is not present.
  - It seems like Wrangler expects the tag to exist but does not guarantee it has been created.
- Readiness checks sometimes claim "port not found" even when `EXPOSE` is present
  - Our Dockerfiles include the required `EXPOSE` metadata.
  - Yet the monitor sometimes cannot find the configured port.
  - This suggests either:
    - it is inspecting the wrong image (stale tag/reference),
    - the monitor is reading image metadata before the image exists locally,
    - or there is a race between container creation and the monitor lookup.
- Monitor/container discovery appears racy
  - `Monitor failed to find container` indicates the monitor lost track of, or never observed, the container instance it is looking for.
  - In our experience this is most frequent right after startup.
What we’d like Cloudflare to address
A) Make internal image cleanup/tagging robust
- If Wrangler runs `docker rmi cloudflare-dev/...` and the tag doesn't exist, treat it as non-fatal (ignore missing images).
- Ensure `cloudflare-dev/<name>:<hash>` tags are created deterministically before they are referenced.
B) Make container monitor readiness deterministic
- If the container exists but is not ready, return a stable “starting” state (503) instead of throwing uncaught errors.
- Ensure the monitor checks the correct image reference and does not read stale metadata.
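As a sketch of the clean "starting" state we are asking for: instead of an uncaught error, the Worker could surface a 503 with a retry hint. This is a hypothetical helper of ours (the report's gateway uses a similar `createContainerNotReadyResponse()`), relying only on the standard `Response` API.

```typescript
// Returns a stable 503 "starting" response instead of crashing the Worker.
// The Retry-After value is an arbitrary illustrative choice.
function createStartingResponse(detail: string): Response {
  return new Response(
    JSON.stringify({ status: 'starting', detail }),
    {
      status: 503,
      headers: {
        'content-type': 'application/json',
        'retry-after': '2', // hint clients to retry shortly
      },
    }
  );
}
```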
C) Improve documentation and/or configuration ergonomics
- Document clearly that ports must be in `EXPOSE` in the image metadata (Compose/YAML cannot add `EXPOSE`).
- Consider allowing `containers[].image` to be a plain image reference (e.g. `docker.io/foo/bar:tag`) for local dev, not only a Dockerfile path.
Useful artifacts to request when diagnosing
When reproducing internally, it would help to capture:
```sh
npx wrangler --version
node --version
uname -a
docker version
docker image ls | grep -E 'cloudflare-dev/zintrust|zintrust/zintrust'
docker ps -a --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}'
```

If you want, we can provide a full log from `npx wrangler dev --config wrangler.containers-proxy.dev.jsonc --env staging` showing the complete sequence of events.
Real code excerpts (current)
Worker + DO container classes
File: https://github.com/ZinTrust/zintrust/blob/release/packages/cloudflare-containers-proxy/src/index.ts
Container DO startup:
```ts
import { Container } from '@cloudflare/containers';

const ensureContainerStarted = async (
  container: Container,
  port: number,
  start: { envVars: Record<string, string>; entrypoint: string[] }
): Promise<void> => {
  await container.startAndWaitForPorts({
    startOptions: {
      envVars: start.envVars,
      entrypoint: start.entrypoint,
      enableInternet: true,
    },
    ports: port,
  });
};
```

Example DO class:
```ts
export class ZintrustMySqlProxyContainer extends Container {
  defaultPort = 8789;
  sleepAfter = '10m';

  // Keep this lightweight: the proxy root path responds quickly (401 without
  // signing headers) and does not depend on DB connectivity like /health.
  pingEndpoint = 'containerstarthealthcheck';

  async fetch(request: Request): Promise<Response> {
    const env = getContainerEnv(this);
    await ensureContainerStarted(this, 8789, {
      envVars: createMySqlProxyEnvVars(env),
      entrypoint: createProxyEntrypoint(env, 'proxy:mysql', 8789),
    });
    return super.fetch(request);
  }
}
```

Gateway retry logic for the startup errors:
```ts
const CONTAINER_RETRY_ATTEMPTS = 20;
const CONTAINER_RETRY_DELAY_MS = 500;

const isContainerNotReadyMessage = (value: string): boolean => {
  return (
    value.includes('Monitor failed to find container') ||
    value.includes('container port not found') ||
    value.includes('Connection refused')
  );
};

const fetchWithContainerRetry = async (
  stub: { fetch(request: Request): Promise<Response> },
  request: Request,
  attempt = 1
): Promise<Response> => {
  try {
    const response = await stub.fetch(request);
    const notReady = await responseIndicatesContainerNotReady(response);
    if (!notReady) return response;
    if (attempt >= CONTAINER_RETRY_ATTEMPTS) {
      return createContainerNotReadyResponse('Container monitor not ready (max retries reached).');
    }
    Logger.warn('Container not ready; retrying', { attempt, max: CONTAINER_RETRY_ATTEMPTS });
    await sleepMs(CONTAINER_RETRY_DELAY_MS);
    return fetchWithContainerRetry(stub, request, attempt + 1);
  } catch (error) {
    if (!errorIndicatesContainerNotReady(error)) throw error;
    if (attempt >= CONTAINER_RETRY_ATTEMPTS) {
      return createContainerNotReadyResponse(String(error));
    }
    Logger.warn('Container connection error; retrying', {
      attempt,
      max: CONTAINER_RETRY_ATTEMPTS,
      error: String(error),
    });
    await sleepMs(CONTAINER_RETRY_DELAY_MS);
    return fetchWithContainerRetry(stub, request, attempt + 1);
  }
};
```

Local dev Wrangler config
File: https://github.com/ZinTrust/zintrust/blob/release/wrangler.containers-proxy.dev.jsonc
Full file: wrangler.containers-proxy.dev.jsonc (verbatim)
```jsonc
/**
 * =========================================================================
 * ZinTrust Cloudflare Containers Proxy (local dev)
 * =========================================================================
 * This config is optimised for local development:
 * - Uses the prebuilt Docker Hub image as the container base so `wrangler dev`
 *   does NOT require a local build on every startup.
 * - Wrangler's buildx step pulls the image automatically on first run.
 * - Refresh the Hub image when you want to pick up the latest release:
 *     npm run dev:cp -- --pull
 *
 * Base image (both amd64 + arm64 on Hub):
 *   docker.io/zintrust/zintrust:latest
 *
 * Thin wrapper Dockerfile location:
 *   docker/containers-proxy-dev/Dockerfile
 * =========================================================================
 */
{
  "name": "zintrust-proxy",
  "main": "./packages/cloudflare-containers-proxy/src/index.ts",
  "compatibility_date": "2025-04-21",
  "compatibility_flags": ["nodejs_compat"],
  "workers_dev": true,
  "minify": false,
  // --------------------------------------------------------------------------
  // CONTAINERS
  // --------------------------------------------------------------------------
  // Wrangler requires a Dockerfile path (not a bare image tag) for the `image`
  // field. The wrapper Dockerfile at docker/containers-proxy-dev/Dockerfile does
  // a single `FROM docker.io/zintrust/zintrust:latest`, so Wrangler's
  // buildx step pulls the Hub image automatically.
  //
  // To refresh the Hub image before starting:
  //   npm run dev:cp -- --pull
  "containers": [
    {
      "class_name": "ZintrustMySqlProxyContainer",
      "image": "./docker/containers-proxy-dev/Dockerfile",
      "max_instances": 10,
    },
    {
      "class_name": "ZintrustPostgresProxyContainer",
      "image": "./docker/containers-proxy-dev/Dockerfile",
      "max_instances": 10,
    },
    {
      "class_name": "ZintrustRedisProxyContainer",
      "image": "./docker/containers-proxy-dev/Dockerfile",
      "max_instances": 10,
    },
    {
      "class_name": "ZintrustMongoDbProxyContainer",
      "image": "./docker/containers-proxy-dev/Dockerfile",
      "max_instances": 10,
    },
    {
      "class_name": "ZintrustSqlServerProxyContainer",
      "image": "./docker/containers-proxy-dev/Dockerfile",
      "max_instances": 10,
    },
    {
      "class_name": "ZintrustSmtpProxyContainer",
      "image": "./docker/containers-proxy-dev/Dockerfile",
      "max_instances": 10,
    },
  ],
  "durable_objects": {
    "bindings": [
      { "name": "ZT_PROXY_MYSQL", "class_name": "ZintrustMySqlProxyContainer" },
      { "name": "ZT_PROXY_POSTGRES", "class_name": "ZintrustPostgresProxyContainer" },
      { "name": "ZT_PROXY_REDIS", "class_name": "ZintrustRedisProxyContainer" },
      { "name": "ZT_PROXY_MONGODB", "class_name": "ZintrustMongoDbProxyContainer" },
      { "name": "ZT_PROXY_SQLSERVER", "class_name": "ZintrustSqlServerProxyContainer" },
      { "name": "ZT_PROXY_SMTP", "class_name": "ZintrustSmtpProxyContainer" },
    ],
  },
  "migrations": [
    {
      "tag": "containers-proxy-v1",
      "new_sqlite_classes": [
        "ZintrustMySqlProxyContainer",
        "ZintrustPostgresProxyContainer",
        "ZintrustRedisProxyContainer",
        "ZintrustMongoDbProxyContainer",
        "ZintrustSqlServerProxyContainer",
        "ZintrustSmtpProxyContainer",
      ],
    },
  ],
  // --------------------------------------------------------------------------
  // ENVIRONMENTS
  // --------------------------------------------------------------------------
  "env": {
    "staging": {
      "name": "zintrust-proxy-dev",
      "minify": false,
      "vars": {
        "ENVIRONMENT": "staging",
        "APP_NAME": "ZinTrust",
        "CSRF_SKIP_PATHS": "/api/*,/queue-monitor/*",
      },
      // Wrangler env selection does not reliably inherit container/DO bindings
      // from the top-level config for JSONC configs. Keep these duplicated so
      // `wrangler dev --env staging` works.
      "containers": [
        {
          "class_name": "ZintrustMySqlProxyContainer",
          "image": "./docker/containers-proxy-dev/Dockerfile",
          "max_instances": 10,
        },
        {
          "class_name": "ZintrustPostgresProxyContainer",
          "image": "./docker/containers-proxy-dev/Dockerfile",
          "max_instances": 10,
        },
        {
          "class_name": "ZintrustRedisProxyContainer",
          "image": "./docker/containers-proxy-dev/Dockerfile",
          "max_instances": 10,
        },
        {
          "class_name": "ZintrustMongoDbProxyContainer",
          "image": "./docker/containers-proxy-dev/Dockerfile",
          "max_instances": 10,
        },
        {
          "class_name": "ZintrustSqlServerProxyContainer",
          "image": "./docker/containers-proxy-dev/Dockerfile",
          "max_instances": 10,
        },
        {
          "class_name": "ZintrustSmtpProxyContainer",
          "image": "./docker/containers-proxy-dev/Dockerfile",
          "max_instances": 10,
        },
      ],
      "durable_objects": {
        "bindings": [
          { "name": "ZT_PROXY_MYSQL", "class_name": "ZintrustMySqlProxyContainer" },
          { "name": "ZT_PROXY_POSTGRES", "class_name": "ZintrustPostgresProxyContainer" },
          { "name": "ZT_PROXY_REDIS", "class_name": "ZintrustRedisProxyContainer" },
          { "name": "ZT_PROXY_MONGODB", "class_name": "ZintrustMongoDbProxyContainer" },
          { "name": "ZT_PROXY_SQLSERVER", "class_name": "ZintrustSqlServerProxyContainer" },
          { "name": "ZT_PROXY_SMTP", "class_name": "ZintrustSmtpProxyContainer" },
        ],
      },
      "migrations": [
        {
          "tag": "containers-proxy-v1",
          "new_sqlite_classes": [
            "ZintrustMySqlProxyContainer",
            "ZintrustPostgresProxyContainer",
            "ZintrustRedisProxyContainer",
            "ZintrustMongoDbProxyContainer",
            "ZintrustSqlServerProxyContainer",
            "ZintrustSmtpProxyContainer",
          ],
        },
      ],
    },
  },
}
```

Full file: wrangler.containers-proxy.jsonc (verbatim)
```jsonc
/**
 * =========================================================================
 * ZinTrust Cloudflare Containers Proxy
 * =========================================================================
 * This Worker acts as the "gateway" for the DB/KV/SMTP proxy stack, similar to
 * `docker-compose.proxy.yml`'s `proxy-gateway`, but on Cloudflare.
 *
 * Requests are routed by path prefix:
 *   /mysql/*     -> ZintrustMySqlProxyContainer     (port 8789)
 *   /postgres/*  -> ZintrustPostgresProxyContainer  (port 8790)
 *   /redis/*     -> ZintrustRedisProxyContainer     (port 8791)
 *   /mongodb/*   -> ZintrustMongoDbProxyContainer   (port 8792)
 *   /sqlserver/* -> ZintrustSqlServerProxyContainer (port 8793)
 *   /smtp/*      -> ZintrustSmtpProxyContainer      (port 8794)
 *
 * Runtime env vars are provided via:
 * - `vars` (non-secret)
 * - `wrangler secret put <NAME>` (secret)
 *
 * Notes:
 * - Docker Compose is NOT used on Cloudflare; this file is the deployment source.
 * - `wrangler deploy` builds the container image(s) using your local Docker.
 * =========================================================================
 */
{
  "name": "zintrust-proxys",
  "main": "./packages/cloudflare-containers-proxy/src/index.ts",
  "compatibility_date": "2025-04-21",
  "compatibility_flags": ["nodejs_compat"],
  "workers_dev": true,
  "minify": false,
  // --------------------------------------------------------------------------
  // CONTAINERS
  // --------------------------------------------------------------------------
  // One DO class per proxy (Option A).
  "containers": [
    {
      "class_name": "ZintrustMySqlProxyContainer",
      "image": "docker.io/zintrust/zintrust:latest",
      "max_instances": 10,
    },
    {
      "class_name": "ZintrustPostgresProxyContainer",
      "image": "docker.io/zintrust/zintrust:latest",
      "max_instances": 10,
    },
    {
      "class_name": "ZintrustRedisProxyContainer",
      "image": "docker.io/zintrust/zintrust:latest",
      "max_instances": 10,
    },
    {
      "class_name": "ZintrustMongoDbProxyContainer",
      "image": "docker.io/zintrust/zintrust:latest",
      "max_instances": 10,
    },
    {
      "class_name": "ZintrustSqlServerProxyContainer",
      "image": "docker.io/zintrust/zintrust:latest",
      "max_instances": 10,
    },
    {
      "class_name": "ZintrustSmtpProxyContainer",
      "image": "docker.io/zintrust/zintrust:latest",
      "max_instances": 10,
    },
  ],
  // Durable Object bindings (required to talk to containers)
  "durable_objects": {
    "bindings": [
      { "name": "ZT_PROXY_MYSQL", "class_name": "ZintrustMySqlProxyContainer" },
      { "name": "ZT_PROXY_POSTGRES", "class_name": "ZintrustPostgresProxyContainer" },
      { "name": "ZT_PROXY_REDIS", "class_name": "ZintrustRedisProxyContainer" },
      { "name": "ZT_PROXY_MONGODB", "class_name": "ZintrustMongoDbProxyContainer" },
      { "name": "ZT_PROXY_SQLSERVER", "class_name": "ZintrustSqlServerProxyContainer" },
      { "name": "ZT_PROXY_SMTP", "class_name": "ZintrustSmtpProxyContainer" },
    ],
  },
  // Migrations must use `new_sqlite_classes` for container-enabled Durable Objects
  "migrations": [
    {
      "tag": "containers-proxy-v1",
      "new_sqlite_classes": [
        "ZintrustMySqlProxyContainer",
        "ZintrustPostgresProxyContainer",
        "ZintrustRedisProxyContainer",
        "ZintrustMongoDbProxyContainer",
        "ZintrustSqlServerProxyContainer",
        "ZintrustSmtpProxyContainer",
      ],
    },
  ],
  // --------------------------------------------------------------------------
  // ENVIRONMENTS
  // --------------------------------------------------------------------------
  "env": {
    "staging": {
      "name": "zintrust-proxy-staging",
      "minify": false,
      "vars": {
        "ENVIRONMENT": "staging",
        "APP_NAME": "ZinTrust",
        "CSRF_SKIP_PATHS": "/api/*,/queue-monitor/*",
      },
      // NOTE: Wrangler's env selection does not reliably inherit container/DO
      // bindings from the top-level config for JSONC configs.
      // Keep these duplicated so `wrangler dev --env staging` works.
      "containers": [
        {
          "class_name": "ZintrustMySqlProxyContainer",
          "image": "./Dockerfile",
          "max_instances": 10,
        },
        {
          "class_name": "ZintrustPostgresProxyContainer",
          "image": "./Dockerfile",
          "max_instances": 10,
        },
        {
          "class_name": "ZintrustRedisProxyContainer",
          "image": "./Dockerfile",
          "max_instances": 10,
        },
        {
          "class_name": "ZintrustMongoDbProxyContainer",
          "image": "./Dockerfile",
          "max_instances": 10,
        },
        {
          "class_name": "ZintrustSqlServerProxyContainer",
          "image": "./Dockerfile",
          "max_instances": 10,
        },
        {
          "class_name": "ZintrustSmtpProxyContainer",
          "image": "./Dockerfile",
          "max_instances": 10,
        },
      ],
      "durable_objects": {
        "bindings": [
          { "name": "ZT_PROXY_MYSQL", "class_name": "ZintrustMySqlProxyContainer" },
          { "name": "ZT_PROXY_POSTGRES", "class_name": "ZintrustPostgresProxyContainer" },
          { "name": "ZT_PROXY_REDIS", "class_name": "ZintrustRedisProxyContainer" },
          { "name": "ZT_PROXY_MONGODB", "class_name": "ZintrustMongoDbProxyContainer" },
          { "name": "ZT_PROXY_SQLSERVER", "class_name": "ZintrustSqlServerProxyContainer" },
          { "name": "ZT_PROXY_SMTP", "class_name": "ZintrustSmtpProxyContainer" },
        ],
      },
      "migrations": [
        {
          "tag": "containers-proxy-v1",
          "new_sqlite_classes": [
            "ZintrustMySqlProxyContainer",
            "ZintrustPostgresProxyContainer",
            "ZintrustRedisProxyContainer",
            "ZintrustMongoDbProxyContainer",
            "ZintrustSqlServerProxyContainer",
            "ZintrustSmtpProxyContainer",
          ],
        },
      ],
    },
  },
}
```