@SuZeAI commented Nov 7, 2025

🚀 Fix and Feature: Improve CrewAI Execution Stability and Add Streaming Support

Summary

This pull request combines two key improvements to the CrewAI runtime system:

  1. Fix: Prevents premature termination when no final_answer is produced by raising a proper exception instead of failing silently.
  2. Feature: Introduces a new streaming module that enables real-time token-level streaming of responses from LLM executions.

🧩 Fix: Prevent Early Stop Without Final Answer

Problem

In certain cases, CrewAI would stop execution early when the model reached its limit or failed to return a final_answer.
This caused silent failures and made debugging and error handling difficult.

Solution

  • Added exception raising on LLM response parsing failure (a minimal sketch follows this list).
  • Ensured the execution flow continues until a valid final_answer or an explicit error is produced.
  • Improved the reliability and visibility of runtime issues.
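
A minimal sketch of the new behavior (only format_answer and AgentFinish are named in this PR; the parser and exception below are illustrative stand-ins, not the exact CrewAI internals):

def_sketch = None  # see below

class OutputParserError(Exception):
    """Raised when the LLM response cannot be parsed (illustrative name)."""

class AgentFinish:
    """Stand-in for CrewAI's final-answer container."""
    def __init__(self, output: str):
        self.output = output

def parse(answer: str) -> AgentFinish:
    """Stand-in parser: fails when no final-answer marker is present."""
    if "Final Answer:" not in answer:
        raise OutputParserError(f"No final answer found in: {answer!r}")
    return AgentFinish(answer.split("Final Answer:", 1)[1].strip())

def format_answer(answer: str) -> AgentFinish:
    try:
        return parse(answer)
    except OutputParserError:
        # Before this PR: the failure was swallowed and a bare AgentFinish
        # was returned, ending the run silently without a real final_answer.
        # After: the exception propagates so the execution loop can retry
        # or surface an explicit error.
        raise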

Impact

This change ensures consistent, predictable execution, particularly in multi-agent or long-running tasks.


⚙️ Feature: Add Streaming Module for Real-Time Token Output

Description

Introduces stream_crew_execution(...), an async function that streams tokens from the LLM in real time.
This enables better UX for interactive applications such as web dashboards, chat interfaces, or CLI tools.

Function Signature

from typing import List, Optional

async def stream_crew_execution(
    crew_instance,
    inputs: dict,
    agent_ids: Optional[List[str]] = None,
    sleep_time: float = 0.01,
    wait_for_final_answer: bool = True,
):
    ...

Example Usage

import asyncio

from crewai.streaming import stream_crew_execution

async def example_1_simple_usage(crew_instance):
    """
    Example 1: Simple usage with the convenience function.
    Pass in a crew instance you have already created.
    """
    print("=== Example 1: Simple Streaming ===")

    inputs = {"task": "Write a poem about AI"}

    try:
        print("Starting stream...")
        async for token in stream_crew_execution(crew_instance, inputs):
            print(token, end="", flush=True)
        print("\n\nStream completed!")

    except Exception as e:
        print(f"Error: {e}")

# Run it with: asyncio.run(example_1_simple_usage(your_crew_instance))

Benefits

  • Enables live token streaming for better user feedback.
  • Provides fine-grained control via optional parameters (see the sketch after this list):
    • agent_ids: specify which agents to stream from.
    • sleep_time: adjust the pacing between token yields.
    • wait_for_final_answer: toggle blocking on the final result.
  • Compatible with the existing CrewAI agent infrastructure.
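
As a sketch of how those parameters combine (the agent IDs and crew setup below are placeholders; only stream_crew_execution and its keyword arguments come from this PR):

import asyncio

from crewai.streaming import stream_crew_execution

async def stream_selected_agents(crew_instance):
    # Stream only the named agents, pace the yields at 50 ms, and return
    # as soon as streaming ends instead of blocking on final_answer.
    async for token in stream_crew_execution(
        crew_instance,
        inputs={"task": "Summarize the latest research findings"},
        agent_ids=["researcher", "writer"],  # placeholder agent IDs
        sleep_time=0.05,
        wait_for_final_answer=False,
    ):
        print(token, end="", flush=True)

# asyncio.run(stream_selected_agents(my_crew))  # my_crew is your crew instance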


✅ Summary of Changes

  • fix(crew): raise exception on LLM response parsing failure to prevent early stop.
  • feat(crew): add streaming module with token streaming and customizable parameters.
  • Added a simple working example for quick testing and integration.

🧠 Impact

These updates improve CrewAI’s stability, developer experience, and interactivity:

  • No more silent runtime halts.
  • Support for dynamic, progressive response rendering.
  • Easier debugging and user-facing integration.

📦 Type of Change

  • 🐞 Bug fix (non-breaking change)
  • ✨ New feature (non-breaking change)
  • ⚠️ Breaking change
  • 🧪 Code improvement / developer experience

Note

Introduces token-level streaming APIs and changes LLM parse failures to raise exceptions instead of returning a final answer.

  • Streaming:
    • Adds crewai/streaming module with CrewStreamer, CrewStreamListener, and stream_crew_execution(...) to stream tokens from LLM executions (supports agent_ids, sleep_time, wait_for_final_answer).
  • Agent Utils:
    • Updates format_answer in utilities/agent_utils.py to re-raise parse exceptions instead of returning AgentFinish.

Written by Cursor Bugbot for commit 1ceaeac.
