Feature Description
The query engines do not fully support async yet; as a result, llama-index only allows async streaming through the chat engine, and only with a RetrieverQueryEngine instance.
This forces users into sync mode for TransformQueryEngine, which is what HyDE relies on.
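To illustrate the current limitation, here is a minimal sketch of the sync-only situation and the usual workaround in an async server: push the blocking query call onto a worker thread. The `sync_hyde_query` function is a hypothetical stand-in for a blocking `TransformQueryEngine.query()` call, not the real llama-index API.

```python
import asyncio

def sync_hyde_query(query_text: str) -> str:
    # Hypothetical stand-in for TransformQueryEngine.query() with a
    # HyDE transform; the real call is blocking, which is the
    # limitation this issue describes.
    return f"answer to: {query_text}"

async def handler(query_text: str) -> str:
    # Workaround today: run the blocking call on a worker thread so
    # the event loop is not stalled while HyDE + retrieval execute.
    return await asyncio.to_thread(sync_hyde_query, query_text)

print(asyncio.run(handler("what is HyDE?")))
```

This keeps the event loop responsive, but it still buffers the whole answer; it does not give token-level streaming.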
Reason
This appears to stem from the query engines not yet fully supporting async streaming.
Value of Feature
Most Python users serve llama-index based applications with FastAPI, which performs best when async is used correctly.
Adding async streaming support to TransformQueryEngine and the other query engines would be a clear benefit for running llama-index in production apps.
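Until native async streaming lands, a sync token stream can be bridged into an async generator that a FastAPI StreamingResponse could consume. This is a hedged sketch using only the standard library; `sync_token_stream` is a hypothetical stand-in for a sync query engine's `response_gen`.

```python
import asyncio
from typing import AsyncIterator, Iterator

def sync_token_stream() -> Iterator[str]:
    # Hypothetical stand-in for the response_gen of a sync
    # streaming query response.
    yield from ["Hello", " ", "world"]

async def to_async_stream(gen: Iterator[str]) -> AsyncIterator[str]:
    # Pull each token on a worker thread so a blocking generator can
    # feed an async consumer (e.g. a FastAPI StreamingResponse)
    # without stalling the event loop between tokens.
    sentinel = object()
    while True:
        token = await asyncio.to_thread(next, gen, sentinel)
        if token is sentinel:
            break
        yield token

async def main() -> str:
    return "".join([tok async for tok in to_async_stream(sync_token_stream())])

print(asyncio.run(main()))  # prints "Hello world"
```

This bridge pays a thread hop per token; first-class async streaming in the query engines themselves would avoid that overhead, which is the point of this request.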