[compute logs] standardize on fetching 1MB at a time (#28101)
Get the websocket and polling code paths aligned on a smaller chunk size.

## How I Tested These Changes

Loaded large compute logs with websockets both on and off.
  • Loading branch information
alangenfeld authored Feb 27, 2025
1 parent eccaeda commit b0ac0ea
Showing 2 changed files with 2 additions and 2 deletions.
@@ -216,7 +216,7 @@ const CAPTURED_LOGS_METADATA_QUERY = gql`
   }
 `;

-const QUERY_LOG_LIMIT = 100000;
+const QUERY_LOG_LIMIT = 1048576; // 1MB
 const POLL_INTERVAL = 5000;

 const CapturedLogsSubscriptionProvider = ({
@@ -13,7 +13,7 @@
 from dagster._record import record
 from dagster._serdes import whitelist_for_serdes

-MAX_BYTES_CHUNK_READ: Final = 4194304  # 4 MB
+MAX_BYTES_CHUNK_READ: Final = 1048576  # 1 MB


 class ComputeIOType(Enum):
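The effect of the backend change is that no single read of a captured log file returns more than 1 MB. A minimal sketch of that chunked-read pattern (the helper name `iter_log_chunks` is hypothetical, not part of Dagster's API; only the `MAX_BYTES_CHUNK_READ` constant comes from the diff):

```python
from typing import Iterator

MAX_BYTES_CHUNK_READ = 1048576  # 1 MB, matching the constant in the diff


def iter_log_chunks(path: str, max_bytes: int = MAX_BYTES_CHUNK_READ) -> Iterator[bytes]:
    # Yield the file's contents in fixed-size pieces so that no single
    # fetch ever exceeds max_bytes, regardless of total log size.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(max_bytes)
            if not chunk:
                break
            yield chunk
```

With this scheme a 5 MB log arrives as five full 1 MB chunks plus one final partial chunk, which keeps each websocket frame or polling response small and uniform.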

1 comment on commit b0ac0ea

@github-actions

Deploy preview for dagit-core-storybook ready!

✅ Preview
https://dagit-core-storybook-egvx3p3tt-elementl.vercel.app

Built with commit b0ac0ea.
This pull request is being automatically deployed with vercel-action.
