feat: add composite upload option for large file writes #1254
mishushakov wants to merge 14 commits into main
Conversation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🦋 Changeset detected — latest commit: 603f317. The changes in this PR will be included in the next version bump. This PR includes changesets to release 2 packages.
PR Summary (Medium Risk): Updates the JS SDK. Written by Cursor Bugbot for commit 603f317.
Package Artifacts — built from 8c9603e.
- JS SDK: `npm install ./e2b-2.19.1-mishushakov-composite-upload.0.tgz`
- CLI: `npm install ./e2b-cli-2.9.1-mishushakov-composite-upload.0.tgz`
- Python SDK: `pip install ./e2b-2.20.0+mishushakov.composite.upload-py3-none-any.whl`
Cursor Bugbot has reviewed your changes and found 2 potential issues.
- Build `chunk_paths` deterministically before `asyncio.gather` in async `_composite_write`
- Use the `Username` type instead of a bare string in JS `compositeWrite`
Use the already-materialized blob/content instead of re-reading the original data.
When data fits in a single chunk, fall through to the normal write path instead of duplicating the upload logic inside `compositeWrite`.
Remove the `composite` option from `write()`. Files over 64MB are now automatically chunked and uploaded via the composite path when the envd version supports it.
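The automatic routing described above can be sketched as a tiny dispatcher. This is a hedged illustration, not the SDK's code: the function shape, the `envd_supports_compose` flag, and the string return values are assumptions made to show the decision logic.

```python
COMPOSITE_THRESHOLD = 64 * 1024 * 1024  # 64 MB

def choose_write_path(data: bytes, envd_supports_compose: bool) -> str:
    # Callers never pass a `composite` flag; the SDK routes purely on
    # payload size and on whether envd exposes the compose endpoint.
    if len(data) > COMPOSITE_THRESHOLD and envd_supports_compose:
        return "composite"  # chunk, upload in parallel, then compose
    return "plain"          # single-request upload path
```

Data at or under the 64MB threshold, or targeting an older envd, falls through to the normal single-request write.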
When data is an IO object and ≤64MB, `to_upload_body()` consumes the stream. Pass the materialized bytes to `write_files()` instead of the exhausted IO object.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9de5bc1fe8
Composite upload's primary benefit is parallel chunk uploading, which the sync SDK cannot leverage (sequential HTTP requests negate the performance advantage). Only the async Python SDK and the JS SDK retain composite upload support, via `asyncio.gather()` and `Promise.all()` respectively.
`write_files()` already calls `to_upload_body` internally, so the pre-materialization in `write()` was unnecessary after removing the composite upload size check.
Replace `asyncio.gather` with `asyncio.TaskGroup` for structured concurrency, and offload gzip compression to a thread to avoid blocking the event loop.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
`asyncio.TaskGroup` requires Python 3.11+, which the SDK's type checker does not support. Revert to `asyncio.gather` for broader compatibility.

Summary
- `write()` transparently splits data into 64MB chunks, uploads them in parallel, then composes them server-side using zero-copy concatenation via the new `POST /files/compose` endpoint
- Adds the `/files/compose` endpoint to the envd OpenAPI spec with a `ComposeRequest` schema (`source_paths`, `destination`, `username`) and regenerates JS SDK types

Test plan
- `gzip: true` on large file uploads

🤖 Generated with Claude Code
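The compose step from the summary can be sketched as building the request body for `POST /files/compose`. The field names follow the `ComposeRequest` schema described in this PR; the dataclass itself and the `compose_body` helper are illustrative, not the generated client code.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ComposeRequest:
    # Fields from the ComposeRequest schema in the envd OpenAPI spec;
    # the generated client's exact types may differ.
    source_paths: list[str]
    destination: str
    username: str

def compose_body(chunk_paths: list[str], dest: str, user: str) -> str:
    # The server concatenates the uploaded chunks (zero-copy) into
    # `destination`, in the order given by `source_paths`.
    return json.dumps(asdict(ComposeRequest(chunk_paths, dest, user)))
```

The order of `source_paths` matters: it must match the chunk order produced by the client-side split.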