`pageInsights` can take a LONG time, longer than 30 seconds.
We have a few options:

1. Use the OpenAI streaming API and stream chunks to the client (LLMs stream over SSE, so we could use SSE, or WebSockets).
2. Persist the answer to a DB and have the client re-request it after a timeout.
3. Send intermittent "it's coming soon" messages, then send the final result via a subscription.
The best option is probably the first, because streaming chunks is what people are used to from LLM interfaces.
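As a rough sketch of what option 1 could look like on the server, assuming an Express app and the official `openai` Node SDK; the route, model name, and prompt here are placeholders, not our actual code:

```ts
import express from "express";
import OpenAI from "openai";

const app = express();
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Hypothetical route: relay OpenAI's streamed chunks to the browser over SSE.
app.get("/page-insights/stream", async (req, res) => {
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");

  const stream = await openai.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model
    messages: [{ role: "user", content: "Generate page insights" }], // placeholder prompt
    stream: true,
  });

  // Each chunk carries a small delta of the answer; forward it immediately.
  for await (const chunk of stream) {
    const delta = chunk.choices[0]?.delta?.content ?? "";
    if (delta) res.write(`data: ${JSON.stringify({ delta })}\n\n`);
  }

  res.write("data: [DONE]\n\n");
  res.end();
});

app.listen(3000);
```

The client sees tokens as they arrive instead of waiting 30+ seconds for the whole answer; the same relay would work over a WebSocket if SSE doesn't fit our infra.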
To do that, we could use GraphQL subscriptions or the new `@stream` directive. The latter is the natural choice, because it means we wouldn't have to change our schema based on whether or not we want the data to arrive in chunks. Unfortunately, `@stream` won't be ready until graphql-js v17. So if we want to implement this now, we should use subscriptions, since we've already got that pattern set up. If we can postpone, we should wait for graphql-js v17.
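A minimal sketch of the subscriptions route, assuming `graphql-subscriptions` for the PubSub and Apollo-style resolvers; `requestPageInsights`, `pageInsightsReady`, and `computePageInsights` are hypothetical names, not our actual schema:

```ts
import { PubSub } from "graphql-subscriptions";

const pubsub = new PubSub();
const PAGE_INSIGHTS_READY = "PAGE_INSIGHTS_READY"; // hypothetical trigger name

// Stand-in for the slow LLM call that actually produces the insights.
async function computePageInsights(pageId: string): Promise<string> {
  return `insights for ${pageId}`;
}

export const resolvers = {
  Mutation: {
    // Kick off the slow job and return immediately; the result arrives
    // later on the subscription instead of blocking this request.
    requestPageInsights: (_: unknown, { pageId }: { pageId: string }) => {
      void computePageInsights(pageId).then((insights) =>
        pubsub.publish(PAGE_INSIGHTS_READY, {
          pageInsightsReady: { pageId, insights },
        })
      );
      return true;
    },
  },
  Subscription: {
    pageInsightsReady: {
      subscribe: () => pubsub.asyncIterator([PAGE_INSIGHTS_READY]),
    },
  },
};
```

With `@stream`, by contrast, the field could stay a plain list in the schema (e.g. `insights: [String!]`), and a client that wants chunks would just add `@stream(initialCount: 0)` to its selection, which is why it avoids the schema fork; but that depends on graphql-js v17.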