🚀 Fix and Feature: Improve CrewAI Execution Stability and Add Streaming Support #3856
Summary
This pull request combines two key improvements to the CrewAI runtime system:
- Fix: when no `final_answer` is produced, the failure is now surfaced by raising a proper exception instead of failing silently.
- Feature: a new streaming module provides real-time token output from LLM executions.

🧩 Fix: Prevent Early Stop Without Final Answer
Problem
In certain cases, CrewAI would stop execution early when the model reached its limit or failed to return a `final_answer`. This caused silent failures and made debugging and error handling difficult.
Solution
Execution now always ends with either a valid `final_answer` or an explicit error. Parse failures in `format_answer` are re-raised rather than silently turned into a final answer.
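A minimal sketch of the new behavior, using stand-in types rather than CrewAI's exact internals (the real change lives in `format_answer` in `utilities/agent_utils.py`, per the change summary below):

```python
# Illustrative sketch only: the exception and parser here are stand-ins.

class OutputParserError(Exception):
    """Stand-in for the parser exception raised on malformed LLM output."""

def parse_llm_output(text: str) -> str:
    """Stand-in parser: accepts well-formed output, raises otherwise."""
    if "Final Answer:" not in text and "Action:" not in text:
        raise OutputParserError(f"could not parse LLM output: {text!r}")
    return text

def format_answer(answer: str) -> str:
    try:
        return parse_llm_output(answer)
    except OutputParserError:
        # Previously the raw text was wrapped in an AgentFinish, so a parse
        # failure looked like a successful final answer. Re-raising makes
        # the failure visible to callers, which can retry or report it.
        raise
```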
Impact
This change ensures consistent, predictable execution, particularly in multi-agent or long-running tasks.
⚙️ Feature: Add Streaming Module for Real-Time Token Output
Description
Introduces `stream_crew_execution(...)`, an async function that streams tokens from the LLM in real time. This enables better UX for interactive applications such as web dashboards, chat interfaces, or CLI tools.
Function Signature
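A sketch of the signature, inferred from the parameters described under Benefits below; exact parameter types and defaults may differ from the implementation:

```python
from collections.abc import AsyncIterator

async def stream_crew_execution(
    crew,                                  # a configured crewai.Crew
    agent_ids: list[str] | None = None,    # which agents to stream from
    sleep_time: float = 0.0,               # pacing between token yields
    wait_for_final_answer: bool = True,    # block until the final result
) -> AsyncIterator[str]:
    """Yield tokens from the crew's LLM calls as they are produced."""
    ...
```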
Example Usage
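A minimal usage sketch, assuming an already-configured crew; the agent, task, and agent id here are placeholders:

```python
import asyncio
from crewai import Agent, Crew, Task
from crewai.streaming import stream_crew_execution  # module added by this PR

async def main() -> None:
    researcher = Agent(role="researcher", goal="Summarize topics",
                       backstory="An analyst who writes terse summaries.")
    task = Task(description="Summarize the latest release notes.",
                expected_output="A short summary.", agent=researcher)
    crew = Crew(agents=[researcher], tasks=[task])

    # Print tokens as they arrive instead of waiting for the final answer.
    async for token in stream_crew_execution(crew, agent_ids=["researcher"]):
        print(token, end="", flush=True)

asyncio.run(main())
```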
Benefits
- Enables live token streaming for better user feedback.
- Provides fine-grained control via optional parameters:
  - `agent_ids`: specify which agents to stream from.
  - `sleep_time`: adjust pacing between token yields.
  - `wait_for_final_answer`: toggle blocking on the final result.
- Compatible with existing CrewAI agent infrastructure.
✅ Summary of Changes
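- New `crewai/streaming` module exposing `CrewStreamer`, `CrewStreamListener`, and `stream_crew_execution(...)`.
- `format_answer` in `utilities/agent_utils.py` now re-raises parse exceptions instead of returning an `AgentFinish`.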
🧠 Impact
These updates improve CrewAI’s stability, developer experience, and interactivity.
📦 Type of Change
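- Bug fix
- New feature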
Note
Introduces token-level streaming APIs and changes LLM parse failures to raise exceptions instead of returning a final answer.
- Adds a `crewai/streaming` module with `CrewStreamer`, `CrewStreamListener`, and `stream_crew_execution(...)` to stream tokens from LLM executions (supports `agent_ids`, `sleep_time`, `wait_for_final_answer`).
- Updates `format_answer` in `utilities/agent_utils.py` to re-raise parse exceptions instead of returning `AgentFinish`.

Written by Cursor Bugbot for commit 1ceaeac.