Automated build & deployment backend for Zyotra. Handles cloning, building, packaging, and delivering artifacts to S3; premium users can leverage dockerized builds and deployments to isolated VPS agents.
- Elysia + Bun HTTP API (`src/index.ts`)
- JWT-protected routes via `middlewares/checkAuth.checkAuthPlugin`
- Git cloning with `controllers/cloneRepoController`
- Build orchestration with `controllers/BuildRepoController`
- Artifact upload to S3 with `controllers/uploadBuildController` → `bucket/uploadFile`
- Utility checks: repo URL validation (`utils/checkUrl`), size gating (`utils/checkSize`), repo existence (`utils/checkRepoId`), ID generation (`utils/generateId`)
- Premium-only dockerized build/deploy pipeline (see below)
- `src/index.ts` — server bootstrap and route wiring
- Controllers: clone, build, upload
- S3 client: `bucket/s3`
- Auth: `middlewares/checkAuth`, `jwt/verifyTokens`
- Types: `types/types`
- Utils: `utils`
- Bun runtime
- Git client
- AWS S3-compatible endpoint/credentials
- JWT secret for access tokens
- Optional: Docker (for premium builds/deploys) and SSH access to VPS agents
Create a `.env` (or export in your environment):

- `PORT` — HTTP port (default: `5052`)
- `S3_REGION`
- `S3_ENDPOINT` (if using a custom S3-compatible store)
- `ACCESS_KEY_ID`
- `SECRET_ACCESS_KEY`
- `BUCKET_NAME` (used by the uploader)
- `ACCESS_TOKEN_SECRET` (JWT verification in `jwt/verifyTokens`)
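A minimal `.env` sketch with placeholder values (the endpoint line only applies to S3-compatible stores):

```bash
# Placeholder values for local development; adjust to your environment.
PORT=5052
S3_REGION=us-east-1
S3_ENDPOINT=https://s3.example.com
ACCESS_KEY_ID=replace-me
SECRET_ACCESS_KEY=replace-me
BUCKET_NAME=zyotra-artifacts
ACCESS_TOKEN_SECRET=replace-me
```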
Install and run:

```bash
bun install
bun run dev
```

The server listens on `PORT` and logs host/port at startup.
All routes are protected by `Authorization: Bearer <token>` via `checkAuthPlugin`.
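As a rough illustration, a guard like `checkAuthPlugin` could be built on Elysia's `onBeforeHandle` hook; the actual implementation may differ:

```ts
// Minimal sketch of a bearer-token guard; details of the real plugin are assumed.
import { Elysia } from "elysia";
import { verifyAccessToken } from "./jwt/verifyTokens"; // actual export/signature may differ

export const checkAuthPlugin = new Elysia().onBeforeHandle(({ headers, set }) => {
  const auth = headers["authorization"];
  if (!auth?.startsWith("Bearer ")) {
    set.status = 401;
    return { error: "Missing bearer token" }; // returning a value skips the handler
  }
  try {
    verifyAccessToken(auth.slice("Bearer ".length)); // throws on bad/expired tokens
  } catch {
    set.status = 401;
    return { error: "Invalid or expired token" };
  }
});
```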
**Clone repo**

- Body: `{ repoUrl, packageInstallerCommand?, buildCommand?, startCommand?, outputDir?, projectType? }` (example request below)
- Validates the URL (`checkUrl`) and size (`checkSize`); clones into `./cloned-repo/<id>`.
- Returns repo metadata and defaults (build/start commands, `outputDir`).
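For example, a clone request might look like this (the `/clone-repo` path and response shape are assumptions inferred from the controller name):

```ts
// Hypothetical client call; only repoUrl is required, the rest fall back to defaults.
const accessToken = process.env.ACCESS_TOKEN ?? "";

const res = await fetch("http://localhost:5052/clone-repo", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ repoUrl: "https://github.com/user/repo" }),
});

const meta = await res.json(); // e.g. { repoId, buildCommand, startCommand, outputDir, ... }
```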
**Build repo (`/build-repo`)**

- Body: `{ repoId, buildCommand, startCommand, outputDir, projectType, packageInstallerCommand }`
- Runs install + build (via Bun `spawn`) in `./cloned-repo/<repoId>` (sketch below).
- Ensures `outputDir` exists before returning build metadata.
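A minimal sketch of the install + build step with `Bun.spawn`, assuming whitespace-split commands; the real controller may handle shells and quoting differently:

```ts
// Run a command in the cloned repo and surface stderr on failure (illustrative).
async function run(cmd: string[], cwd: string): Promise<void> {
  const proc = Bun.spawn(cmd, { cwd, stderr: "pipe" });
  if ((await proc.exited) !== 0) {
    throw new Error(await new Response(proc.stderr).text());
  }
}

export async function buildRepo(repoId: string, install: string, build: string) {
  const cwd = `./cloned-repo/${repoId}`;
  await run(install.split(" "), cwd); // e.g. "bun install"
  await run(build.split(" "), cwd);   // e.g. "bun run build"
}
```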
**Upload build (`/upload-build`)**

- Body: `{ repoId, outputDir, startCommand }`
- Streams `./cloned-repo/<repoId>/<outputDir>` to S3 using `uploadFile`; keys are prefixed with `<repoId>/` (sketch below).
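The actual upload goes through `bucket/uploadFile`; as a sketch of the prefix-keyed approach, assuming the AWS SDK v3 client configured from the env vars above:

```ts
import { readdir, readFile, stat } from "node:fs/promises";
import { join } from "node:path";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  region: process.env.S3_REGION,
  endpoint: process.env.S3_ENDPOINT, // only needed for S3-compatible stores
  credentials: {
    accessKeyId: process.env.ACCESS_KEY_ID!,
    secretAccessKey: process.env.SECRET_ACCESS_KEY!,
  },
});

// Upload every file under the build output, keyed as <repoId>/<relative path>.
export async function uploadBuild(repoId: string, outputDir: string) {
  const root = join("cloned-repo", repoId, outputDir);
  for (const rel of await readdir(root, { recursive: true })) {
    const abs = join(root, rel);
    if (!(await stat(abs)).isFile()) continue; // skip nested directories
    await s3.send(
      new PutObjectCommand({
        Bucket: process.env.BUCKET_NAME!,
        Key: `${repoId}/${rel}`,
        Body: await readFile(abs),
      })
    );
  }
}
```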
For isolation and reproducibility:
- Build Image: Use a base image (e.g., `node:lts` or language-specific). Embed common tooling and cacheable layers for faster installs.
- Mount Repo: `docker run --rm -v $(pwd)/cloned-repo/<id>:/workspace -w /workspace <build-image> <packageInstallerCommand>`, then `<buildCommand>` (combined example after this list).
- Output Handling: Ensure `outputDir` is within the mounted workspace; hand off to `/upload-build` to push to S3.
- Security: Run containers as non-root users where possible; limit network access if needed; use per-tenant images for strict isolation.
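Putting the list above together, one isolated build invocation could look like this (image, user ID, and commands are placeholders):

```bash
# Hypothetical isolated build: non-root user, install then build in one container.
ID=abc123  # repo ID returned by the clone step (placeholder)
docker run --rm \
  --user 1000:1000 \
  -v "$(pwd)/cloned-repo/$ID:/workspace" \
  -w /workspace \
  node:lts \
  sh -c "npm ci && npm run build"
```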
VPS agents pull artifacts from S3 and run them inside containers:
- Download: The agent fetches `s3://<BUCKET_NAME>/<repoId>/<artifact>` to a temp dir (see the sketch after this list).
- Runtime Image: Choose a runtime (e.g., `nginx` for static sites, `node:lts` for SSR). Copy or volume-mount the artifact into the container.
- Start: Use the `startCommand` returned by `/build-repo` to launch the app inside the container.
- Health & Rollback: Add health checks; keep the previous artifact to enable rollback on failure.
- Security: Use distinct IAM credentials per agent; restrict S3 access by prefix (`<repoId>/`).
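A sketch of the agent flow for a static artifact, assuming the AWS CLI is configured with the agent's scoped credentials (names and ports are placeholders):

```bash
# Hypothetical pull-and-run for a static build served by nginx.
REPO_ID=abc123
WORKDIR=$(mktemp -d)

# Download the artifact by its repoId prefix.
aws s3 cp "s3://$BUCKET_NAME/$REPO_ID/" "$WORKDIR" --recursive

# Serve it from a container; keep the previous artifact around for rollback.
docker run -d --name "app-$REPO_ID" \
  -v "$WORKDIR:/usr/share/nginx/html:ro" \
  -p 8080:80 \
  nginx
```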
- Repo size limit: 100 MB (checked via the GitHub API in `checkSize`; sketch below).
- Error handling: Build/install errors are captured from stderr; status codes come from `types/StatusCode`.
- Auth: Tokens are verified with `verifyAccessToken`; ensure `ACCESS_TOKEN_SECRET` is configured.
- S3 keys: Stored under the `repoId` prefix to avoid collisions.
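For reference, a size gate like `checkSize` could use the GitHub repos API, whose `size` field is reported in kilobytes (a sketch; the real util may differ):

```ts
// Illustrative 100 MB gate via the public GitHub API (repo size is in KB).
const MAX_KB = 100 * 1024;

export async function isWithinSizeLimit(repoUrl: string): Promise<boolean> {
  const [, owner, repo] = new URL(repoUrl).pathname.replace(/\.git$/, "").split("/");
  const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`);
  if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
  const { size } = (await res.json()) as { size: number };
  return size <= MAX_KB;
}
```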
- Add containerized build runner abstraction (per premium policy).
- Add artifact integrity checks (hashing) before deploy.
- Add tests and CI.