[Flight] Emit timestamps only in forwards advancing time in debug info #33482


Merged 14 commits (Jun 10, 2025)

Conversation

sebmarkbage
Collaborator

@sebmarkbage sebmarkbage commented Jun 7, 2025

Previously, you weren't guaranteed to get only advancing time entries; you could jump back in time. Now it omits unnecessary duplicates and automatically clamps any entry that would move backwards in time, enforcing forwards-only order.
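A rough sketch of this behavior (a hypothetical helper, not React's actual code): one forwards-only clock backs two kinds of markers, where advancing time omits duplicates and backwards jumps entirely, while marking an end time always emits but clamps.

```javascript
// Hypothetical sketch (not React's implementation) of forwards-only
// debug-info timestamps: advancing omits duplicates and backwards jumps;
// end-time markers are always emitted, but clamped to the last time.
function createDebugClock(emit) {
  let lastTime = -Infinity;
  return {
    // Advance time before emitting an operation. Duplicates and
    // backwards jumps are omitted entirely.
    advanceTaskTime(time) {
      if (time > lastTime) {
        lastTime = time;
        emit(time);
      }
    },
    // Mark the end of an operation. Always emitted, but clamped so it
    // never moves backwards.
    markOperationEndTime(time) {
      if (time < lastTime) {
        time = lastTime; // clamp to enforce forwards order
      }
      lastTime = time;
      emit(time);
    },
  };
}
```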

The reason I didn't do this originally is that awaits can jump around in order, because we're trying to encode a graph into a flat timeline for simplicity of the protocol and its consumers.

```js
async function a() {
  await fetch1();
  await fetch2();
}

async function b() {
  await fetch3();
}

async function foo() {
  const p = a();
  await b();
  return p;
}
```

This can effectively create two parallel sequences:

```
--1.................----2.......--
------3......---------------------
```

This can now be flattened to either:

```
--1.................3---2.......--
```

Or:

```
------3......1......----2.......--
```

Depending on which one we visit first. Regardless, information is lost.

I'd say that the second one is a worse encoding of this scenario because it pretends that we weren't waiting for part of the timespan that we were. To solve this, I think we should probably make emitAsyncSequence create a temporary flat list and then sort it by start time before emitting.

Although, we weren't actually blocked, since some CPU time was able to proceed to get to 3. So maybe the second one is actually better. If we wanted that consistently, we'd have to figure out what the intersection was.
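The suggested fix can be sketched like this (illustrative, not the actual emitAsyncSequence implementation): collect the parallel sequences into one flat list and sort by start time before emitting, so the flattened timeline only ever moves forwards. Since Array.prototype.sort is stable, entries with equal start times keep their visit order.

```javascript
// Illustrative sketch of flattening parallel await sequences by start
// time before emitting (not the actual emitAsyncSequence code).
function flattenSequences(sequences) {
  const entries = [];
  for (const sequence of sequences) {
    for (const entry of sequence) {
      entries.push(entry);
    }
  }
  // Stable sort by start time: the flat timeline only moves forwards.
  entries.sort((a, b) => a.start - b.start);
  return entries;
}
```

With the two sequences from the diagram, entry 3 (starting between 1 and 2) would be emitted between them, matching the first flattening.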

@sebmarkbage sebmarkbage requested a review from unstubbable June 7, 2025 21:53
@github-actions github-actions bot added the React Core Team Opened by a member of the React Core Team label Jun 7, 2025
@react-sizebot

react-sizebot commented Jun 7, 2025

Comparing: 80c03eb...2d22526

Critical size changes

Includes critical production bundles, as well as any change greater than 2%:

| Name | +/- | Base | Current | +/- gzip | Base gzip | Current gzip |
| --- | --- | --- | --- | --- | --- | --- |
| oss-stable/react-dom/cjs/react-dom.production.js | = | 6.68 kB | 6.68 kB | = | 1.83 kB | 1.83 kB |
| oss-stable/react-dom/cjs/react-dom-client.production.js | = | 530.07 kB | 530.07 kB | = | 93.57 kB | 93.57 kB |
| oss-experimental/react-dom/cjs/react-dom.production.js | = | 6.69 kB | 6.69 kB | = | 1.83 kB | 1.83 kB |
| oss-experimental/react-dom/cjs/react-dom-client.production.js | = | 651.16 kB | 651.16 kB | = | 114.70 kB | 114.69 kB |
| facebook-www/ReactDOM-prod.classic.js | = | 676.11 kB | 676.11 kB | = | 118.97 kB | 118.97 kB |
| facebook-www/ReactDOM-prod.modern.js | = | 666.39 kB | 666.39 kB | = | 117.36 kB | 117.36 kB |

Significant size changes

Includes any change greater than 0.2%:

| Name | +/- | Base | Current | +/- gzip | Base gzip | Current gzip |
| --- | --- | --- | --- | --- | --- | --- |
| oss-experimental/react-server-dom-esm/cjs/react-server-dom-esm-client.browser.development.js | +1.47% | 114.37 kB | 116.05 kB | +0.66% | 21.54 kB | 21.68 kB |
| oss-experimental/react-server-dom-parcel/cjs/react-server-dom-parcel-client.browser.development.js | +1.47% | 114.72 kB | 116.41 kB | +0.58% | 21.48 kB | 21.60 kB |
| oss-experimental/react-client/cjs/react-client-flight.development.js | +1.45% | 116.02 kB | 117.70 kB | +0.69% | 21.30 kB | 21.45 kB |
| oss-experimental/react-server-dom-turbopack/cjs/react-server-dom-turbopack-client.browser.development.js | +1.45% | 116.55 kB | 118.24 kB | +0.62% | 21.93 kB | 22.07 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js | +1.44% | 117.11 kB | 118.79 kB | +0.71% | 22.07 kB | 22.23 kB |
| oss-experimental/react-server-dom-esm/cjs/react-server-dom-esm-client.node.development.js | +1.42% | 118.96 kB | 120.65 kB | +0.55% | 22.34 kB | 22.46 kB |
| oss-experimental/react-server-dom-parcel/cjs/react-server-dom-parcel-client.edge.development.js | +1.40% | 120.25 kB | 121.94 kB | +0.64% | 22.68 kB | 22.83 kB |
| oss-experimental/react-server-dom-parcel/cjs/react-server-dom-parcel-client.node.development.js | +1.39% | 121.49 kB | 123.18 kB | +0.75% | 22.78 kB | 22.95 kB |
| oss-experimental/react-server-dom-turbopack/cjs/react-server-dom-turbopack-client.edge.development.js | +1.37% | 123.35 kB | 125.03 kB | +0.63% | 23.18 kB | 23.33 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-client.edge.development.js | +1.37% | 123.44 kB | 125.12 kB | +0.63% | 23.21 kB | 23.36 kB |
| oss-experimental/react-server-dom-turbopack/cjs/react-server-dom-turbopack-client.node.development.js | +1.35% | 124.42 kB | 126.10 kB | +0.73% | 23.29 kB | 23.46 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-client.node.unbundled.development.js | +1.33% | 126.77 kB | 128.46 kB | +0.72% | 23.53 kB | 23.70 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-client.node.development.js | +1.32% | 128.10 kB | 129.79 kB | +0.71% | 23.78 kB | 23.95 kB |
| oss-experimental/react-server-dom-esm/cjs/react-server-dom-esm-server.node.development.js | +0.40% | 169.60 kB | 170.28 kB | +0.20% | 31.61 kB | 31.67 kB |
| oss-experimental/react-server-dom-parcel/cjs/react-server-dom-parcel-server.node.development.js | +0.39% | 174.27 kB | 174.94 kB | +0.23% | 32.06 kB | 32.13 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.unbundled.development.js | +0.37% | 181.00 kB | 181.68 kB | +0.18% | 33.13 kB | 33.19 kB |
| oss-experimental/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.node.development.js | +0.37% | 182.15 kB | 182.83 kB | +0.19% | 33.42 kB | 33.49 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.development.js | +0.37% | 182.21 kB | 182.89 kB | +0.19% | 33.43 kB | 33.50 kB |
| oss-experimental/react-markup/cjs/react-markup.react-server.development.js | +0.32% | 584.66 kB | 586.56 kB | +0.17% | 104.61 kB | 104.78 kB |
| oss-experimental/react-server-dom-esm/esm/react-server-dom-esm-client.browser.development.js | +0.26% | 153.96 kB | 154.36 kB | +0.32% | 35.92 kB | 36.03 kB |

Generated by 🚫 dangerJS against 2d22526

@sebmarkbage
Collaborator Author

Another possible representation would be:

```
--1...3......1......----2.......--
```

Where it's as if we re-awaited on 1.

@sebmarkbage sebmarkbage force-pushed the flattendebugtime branch 4 times, most recently from d70bf88 to f269fe7 Compare June 8, 2025 02:53
```diff
-childTrackTime = childEndTime;
+if (childEndTime > childTrackTime) {
+  childTrackTime = childEndTime;
+}
```
Collaborator Author

@sebmarkbage sebmarkbage Jun 8, 2025


This fixes the bug where there could end up being overlapping components when they should've been bumped to a parallel track.

(I do think this type of view is not good for this since any async micro-task can cause them to start overlapping and spawn too many parallel tracks. I have another view in mind for the future)

```js
if (ioNode.tag === PROMISE_NODE) {
  // If the ioNode was a Promise, then that means we found one in user space since otherwise
  // we would've returned an IO node. We assume this has the best stack.
  match = ioNode;
```
Collaborator Author


This fixes the issue where the ThirdPartyComponent's delay call showed ThirdPartyComponent as the I/O's name instead of delay.

That was because, when there's no explicit await, we picked the outermost Promise for the stack. That made no sense, since that could just be the top component. We want the inner one, and we hope that it has an "async stack" which includes the higher-up abstractions as well.

@sebmarkbage sebmarkbage force-pushed the flattendebugtime branch 2 times, most recently from fa5159a to a902c81 Compare June 8, 2025 21:24
```diff
@@ -129,42 +147,50 @@ export function initAsyncDebugInfo(): void {
     }
     pendingOperations.set(asyncId, node);
   },
+  before(asyncId: number): void {
```
Collaborator Author


I was trying to avoid implementing this phase, since it requires a lot more lookups into the map all the time.

However, it simplifies a lot to just put the resolution here. It avoids all the special cases for unresolved promises inside the thing that's currently being resolved, and the special cases for lazily filling in the I/O end time.

What could happen before is that the end time for an already resolved await would be marked after it finished its then callback. That meant that any awaits started inside the then would get cut out, since they started before the previously emitted await.
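The idea can be sketched outside of React. This is a hypothetical reduction, not the actual implementation: the two functions below stand in for the `init` and `before` callbacks of a Node `async_hooks` hook, and `pendingOperations` mirrors the map mentioned above.

```javascript
// Hypothetical sketch of resolving awaits in the async_hooks `before`
// phase (not React's actual code). In a real setup these would be wired
// into async_hooks.createHook({ init, before }).
const pendingOperations = new Map();

// `init` phase: a promise was created; remember when, and what triggered it.
function recordInit(asyncId, triggerAsyncId, now) {
  pendingOperations.set(asyncId, {
    trigger: triggerAsyncId,
    start: now,
    end: undefined,
  });
}

// `before` phase: the continuation after an await is about to run, so the
// await has resolved. Marking the end time here avoids lazily filling it
// in later, after the then callback has already executed.
function recordBefore(asyncId, now) {
  const node = pendingOperations.get(asyncId);
  if (node !== undefined && node.end === undefined) {
    node.end = now;
  }
}
```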

@unstubbable
Collaborator

I've added overlapping I/O to the Bar component in the Flight fixture that could be represented like this:

```
-1..........-2.....------
---3....................-
```

How do we expect this to be flattened? It currently looks suboptimal, and inconsistent between the Requests track and the Components track:
[Screenshot 2025-06-09 at 12:13:37]

sebmarkbage added a commit that referenced this pull request Jun 9, 2025
This adds some I/O to go get the third party thing to test how it
overlaps.

With #33482, this is what it looks like. The await gets cut off when the third-party component starts rendering, i.e. after the latency to start.

[Screenshot 2025-06-08 at 5:42:46 PM](https://github.com/user-attachments/assets/f68d9a84-05a1-4125-b3f0-8f3e4eaaa5c1)

This doesn't fully simulate everything because it should actually also
simulate each chunk of the stream coming back too. We could wrap the
ReadableStream to simulate that. In that scenario, it would probably get
some awaits on the chunks at the end too.
sebmarkbage and others added 8 commits June 9, 2025 10:05
This is tricky because there can be parallel visits, and really we should probably generate a list, sort it, and then emit the sorted list.
Only count something as contributing to the children's end time if it was actually logged.

This excludes the timing of promises that we didn't log an await for, such as microtasks.

Avoids bumping too many parallel levels.
promiseResolve, for already resolved promises, is invoked at the end of
its execution phase. That means that any awaits that happen inside would
have their start time before the emitted end time, which cuts them off.
```
-1..........-2.....------
---3....................-
```
@sebmarkbage
Collaborator Author

sebmarkbage commented Jun 9, 2025

It's expected to be suboptimal and very inconsistent when making small changes to the order.

The Requests are never flattened, but a Request can be excluded if its await started before the last await, in which case its I/O is excluded too. It's similar to Promise.all, where the I/O for everything but the last thing to unblock gets excluded. That's not so bad, because it wasn't part of what was blocking the result, and it wasn't part of what was blocking the I/O from starting in the first place. So you could see things like all of delayTwice getting excluded, because it has no impact on the waterfall that led to the result.

In this case we could help this by collecting entries before we advance the task time, to include more, but it shouldn't be necessary if they're not part of the waterfall.

But this is still not showing what I'd expect, because I think we should've started by visiting the "twice" first. However, there's a big difference between returning a promise and using an explicit await: without the explicit await, the await happens within the runtime, and everything changes. So if you add an explicit await, you get different results. I think that causes the dependency graph to flip somehow. This is because the promise inside the runtime of the async function ends up being resolved by the returned promise, but it doesn't have any "previous", since the previous of that promise is whatever created the function instance when it called the async function. So information is lost about what was blocking the promise from being returned in the first place.

It's also possible for the awaits to have different time spans than the Requests, because the microtasks that unblock them can be blocked by other CPU work. So it's common to see an await end later than the Request, although not by this much when there's nothing else going on.

I think the end time of the "10ms" one is unexpected. While the end time of the outer await will be at the 40ms mark, what we're logging is the inner await, which would end earlier. I'd expect us to log 10ms and then leave it blank for the remaining 10ms (which is of course also suboptimal).

@sebmarkbage
Collaborator Author

Other than that, there are just the general inconsistencies that need follow-ups.

I think the main flaw was that an await that doesn't get a resolve before the begin phase would use its own start time as the end. That's not quite right, because it seems possible to get a sequence where begin happens first even though it was delayed. Also, if the end time is the same as the current time, then we don't emit another timestamp afterwards, which means the next timestamp would be used as the implicit end. We need to always emit a timestamp after the await to get an end time.

We solve both these issues by just using the begin time as the end time of awaits.

advanceTaskTime should be used before an operation to advance time before
emitting that operation.

markOperationEndTime should be used as its own marker after an operation.
This needs to always be emitted even if it doesn't advance time.
An async function's last piece of code doesn't have any way to be traced
back to the last await it was waiting for inside the body. Instead, we
guess this from whatever executed last.
@sebmarkbage
Collaborator Author

Ok, I figured out a way to work around the lack of an await.

The last piece of code inside an async function runs in the execution of the async function's Promise itself. If you did this in user space, there would have to be another Promise around it, but the internal V8 mechanism doesn't do that. Unfortunately, this means that there's no metadata about what that scope was depending on. Its trigger is whatever called the async function, not the await inside of it.

My hacky workaround is to track whatever executed last and assume that is what ultimately unblocked the next task. It's not flawless, but it's something.
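The workaround can be sketched roughly like this (illustrative names, not the actual React code): one slot remembers the last node whose execution finished, and the tail of an async function falls back to it when it has no awaited trigger.

```javascript
// Rough sketch of the workaround (names are illustrative, not React's).
// Track the last node whose execution finished; when an async function's
// final continuation has no await to trace back to, guess that the last
// executed node is what unblocked it.
let lastExecutedNode = null;

// Called when a node finishes executing (e.g. the async_hooks `after` phase).
function onExecutionEnd(node) {
  lastExecutedNode = node;
}

// Resolve what a continuation was waiting on.
function resolveTrigger(node) {
  if (node.awaited !== null) {
    // Normal case: an explicit await recorded the dependency.
    return node.awaited;
  }
  // Tail of an async function: no await metadata, so fall back to
  // whatever executed last.
  return lastExecutedNode;
}
```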

Now we can track that the returning of the first promise was blocked on the second promise. Therefore, now we consistently print the long blocking period first and then a zero-width period since the second returned promise was already resolved by the time we got there.

[Screenshot 2025-06-09 at 2:57:01 PM]

…itted

This lets us include zero-width entries for parallel things that weren't
blocking but might still be interesting.
@sebmarkbage
Collaborator Author

sebmarkbage commented Jun 9, 2025

I also went back to making the cutoff for whether to include an await the timestamp at the start of the sequence, instead of the one that updates as we go. That way we can include all the awaits inside this component. Those awaits are now zero-width.

[Screenshot 2025-06-09 at 3:19:26 PM]
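The cutoff change can be sketched as follows (hypothetical names, not the actual code): the inclusion test compares against the sequence's fixed start time, while clamping against the advancing current time is what turns already-finished awaits into zero-width entries.

```javascript
// Sketch of the cutoff (illustrative names): include an await if it ended
// after the start of the whole sequence, not after a cursor that advances
// as entries are emitted. Awaits that already finished are clamped into
// zero-width entries rather than dropped.
function toTimelineEntry(awaitNode, sequenceStartTime, currentTime) {
  if (awaitNode.end <= sequenceStartTime) {
    // Finished before the sequence even started: not interesting.
    return null;
  }
  const start = Math.max(awaitNode.start, currentTime);
  const end = Math.max(awaitNode.end, start); // may collapse to zero width
  return { start, end };
}
```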

We visit all the previous nodes before making any decisions about what to
include, so we potentially visit a lot of nodes over and over.

We can optimize a bit by just stopping at anything that finished before
the request start.
Collaborator

@unstubbable unstubbable left a comment


Awesome!

Comment on lines +1595 to +1610
```js
[
  "delay",
  "/packages/react-server/src/__tests__/ReactFlightAsyncDebugInfo-test.js",
  133,
  12,
  132,
  3,
],
[
  "delayTwice",
  "/packages/react-server/src/__tests__/ReactFlightAsyncDebugInfo-test.js",
  1319,
  13,
  1317,
  5,
],
```
Collaborator


This is missing the stack frames for delayTrice and Bar. Is this an accepted limitation or a bug?

[Screenshot 2025-06-10 at 10:42:19]

Collaborator Author


This is a limitation of async stacks in V8. It essentially resumes from the await, and since there's no await here, it can't resume the async stack. So unfortunately we're stuck with it, unless we disable async stacks in V8 and then reimplement them ourselves by stitching together stacks from the async hooks. Not worth it.

Co-authored-by: Hendrik Liebau <[email protected]>
@sebmarkbage sebmarkbage merged commit 56408a5 into facebook:main Jun 10, 2025
239 of 240 checks passed
sebmarkbage added a commit that referenced this pull request Jun 11, 2025
Stacked on #33482.

There's a flaw with getting information from the execution context of
the ping. For the soft-deprecated "throw a promise" technique, this is a
bit unreliable because you could in theory throw the same one multiple
times. Similarly, a more fundamental flaw with that API is that it
doesn't allow for tracking the information of Promises that are already
synchronously able to resolve.

This stops tracking the async debug info in the case of throwing a
Promise and only when you render a Promise. That means some loss of data
but we should just warn for throwing a Promise anyway.

Instead, this also adds support for tracking `use()`d thenables and
forwarding `_debugInfo` from them. This is done by extracting the info
from the Promise after the fact instead of in the resolve so that it
only happens once at the end after the pings are done.

This also supports passing the same Promise in multiple places and
tracking the debug info at each location, even if it was already
instrumented with a synchronous value by the time of the second use.
Labels: CLA Signed, React Core Team