I'm finding that the ingestion pipeline can't handle very large amounts of data: it appears to load all of the documents into memory before persisting them in a storage context. Is there a way to partition documents by loader (or something similar) so that we don't clear out all documents when we run a single loader? I'm specifically looking at the code in https://github.com/run-llama/create-llama/blob/main/templates/types/streaming/fastapi/app/engine/generate.py#L78
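A minimal sketch of the partitioning idea, independent of the actual `generate.py` code: run each loader separately and persist documents in fixed-size batches, so only one batch is held in memory at a time. The names `batched`, `ingest`, and `persist` are hypothetical helpers, not part of the create-llama template; `persist` stands in for whatever pipeline-run-and-persist step the real code performs.

```python
from typing import Callable, Iterable, Iterator, List

def batched(docs: Iterable[str], size: int) -> Iterator[List[str]]:
    """Yield documents in lists of at most `size`, without materializing all of them."""
    batch: List[str] = []
    for doc in docs:
        batch.append(doc)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def ingest(loaders: List[Callable[[], Iterable[str]]],
           persist: Callable[[List[str]], None],
           batch_size: int = 100) -> int:
    """Run each loader independently and persist its documents batch by batch.

    Because loaders are processed one at a time and each batch is persisted
    before the next is loaded, peak memory is bounded by one batch, and
    re-running a single loader never touches another loader's documents.
    Returns the total number of documents ingested.
    """
    total = 0
    for load in loaders:
        for batch in batched(load(), batch_size):
            persist(batch)  # e.g. run the pipeline on the batch, then persist the storage context
            total += len(batch)
    return total
```

For example, `ingest([lambda: ["a", "b", "c"], lambda: ["d"]], persisted.append, batch_size=2)` persists `["a", "b"]`, `["c"]`, and `["d"]` in turn. Whether this maps cleanly onto the template depends on whether its vector store supports incremental upserts rather than a single bulk persist.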