feat: improve error handling of Agent component, solves Empty ExceptionWithMessageError #6097

Open · wants to merge 7 commits into base: main

Conversation

edwinjosechittilappilly (Collaborator)

This pull request includes several changes to improve error handling and logging in the langflow backend. The most important changes include adding logging for exceptions, introducing a custom error class, updating the initialization of an existing error class, and refining validation and error handling in the agent component.

Error Handling and Logging Improvements:

- Custom Error Class
- Existing Error Class Update
- Agent Component Enhancements
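
The PR description does not include the new error class itself, so the following is only a minimal sketch of the idea named in the title: an ExceptionWithMessageError that never surfaces with an empty message, with the failure logged before it is raised. The constructor signature, fallback text, and the raise_with_logging helper are illustrative assumptions, not the actual diff.

from langflow.logging import logger  # same logger the generated tests below import


class ExceptionWithMessageError(Exception):
    """Sketch: an error that always carries a non-empty message."""

    def __init__(self, message: str | None = None):
        # Assumed fallback text; the real PR may word this differently.
        if not message:
            message = "Agent component failed without an error message."
        super().__init__(message)
        self.message = message


def raise_with_logging(original: Exception) -> None:
    """Hypothetical helper: log the underlying failure, then re-raise with a guaranteed message."""
    logger.error(f"Agent component error: {original!s}")
    raise ExceptionWithMessageError(str(original) or None) from original

With this shape, an exception whose str() is empty falls back to the default message instead of propagating as an empty error.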

@github-actions bot added and removed the enhancement (New feature or request) label on Feb 3, 2025
@edwinjosechittilappilly marked this pull request as ready for review on February 3, 2025 at 20:59
@dosubot bot added the size:L label (this PR changes 100-499 lines, ignoring generated files) on Feb 3, 2025
Comment on lines +125 to +139
if not isinstance(self.agent_llm, str):
    return self.agent_llm, None

try:
    provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)
    if not provider_info:
        msg = f"Invalid model provider: {self.agent_llm}"
        raise ValueError(msg)

    component_class = provider_info.get("component_class")
    display_name = component_class.display_name
    inputs = provider_info.get("inputs")
    prefix = provider_info.get("prefix", "")

    return self._build_llm_model(component_class, inputs, prefix), display_name
Suggested change

Removed:

    if not isinstance(self.agent_llm, str):
        return self.agent_llm, None
    try:
        provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)
        if not provider_info:
            msg = f"Invalid model provider: {self.agent_llm}"
            raise ValueError(msg)
        component_class = provider_info.get("component_class")
        display_name = component_class.display_name
        inputs = provider_info.get("inputs")
        prefix = provider_info.get("prefix", "")
        return self._build_llm_model(component_class, inputs, prefix), display_name

Added:

    agent_llm = self.agent_llm
    if not isinstance(agent_llm, str):
        return agent_llm, None
    provider_info = MODEL_PROVIDERS_DICT.get(agent_llm)
    if provider_info is None:
        msg = f"Invalid model provider: {agent_llm}"
        raise ValueError(msg)
    component_class = provider_info["component_class"]
    display_name = component_class.display_name
    inputs = provider_info["inputs"]
    prefix = provider_info.get("prefix", "")
    try:
        llm_model = self._build_llm_model(component_class, inputs, prefix)
        return llm_model, display_name
    except Exception as e:  # except clause reconstructed; the rendered diff dropped this line
        error_message = f"Error building {agent_llm} language model: {e!s}"
        logger.error(error_message)
        raise ValueError(f"Failed to initialize language model: {e!s}") from e

A contributor replied:
Looks like the diff is messed up here. I suppose there was a real optimization, but the rendered output is incorrect. This is happening because of our dependency on a buggy diffing library; we are looking into fixing it.


codeflash-ai bot commented Feb 3, 2025

⚡️ Codeflash found optimizations for this PR

📄 5,023% (50.23x) speedup for AgentComponent.get_llm in src/backend/base/langflow/components/agents/agent.py

⏱️ Runtime: 555 microseconds → 10.8 microseconds (best of 38 runs)

📝 Explanation and details

To optimize this Python program for both runtime and memory, we will focus on a few key areas.

  1. We will use dictionary methods that are faster and more memory-efficient where applicable.
  2. We will reduce nesting and simplify exception handling where possible to improve performance.
  3. We will minimize repeated attribute lookups to improve speed.

Explanation of Changes

  1. Reduced Deep Nesting: The provider check and the exception handling are separated, which makes the control flow clearer and removes nested code paths.
  2. Removed Unnecessary Re-assignments: self.agent_llm is read once into a local variable, avoiding repeated attribute lookups.
  3. Optimized Exception Handling: The try block now wraps only the model-building call, and the error-message string interpolation happens inside the exception handler, so it only runs when a failure actually occurs.
  4. Direct Dictionary Access: Required keys are read with direct indexing and optional ones with dict.get, with fewer intermediate steps.

These changes improve runtime efficiency and reduce memory overhead by streamlining the control flow and the dictionary operations.
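
To make those points concrete, here is a small generic sketch of the same patterns; the names (PROVIDERS, resolve_prefix) are hypothetical and not taken from agent.py:

PROVIDERS = {"openai": {"prefix": "oai_"}}  # hypothetical registry standing in for MODEL_PROVIDERS_DICT


def resolve_prefix(component) -> str:
    provider = component.provider   # cache the attribute lookup in a local variable
    info = PROVIDERS.get(provider)  # one dict lookup instead of a membership test plus indexing
    if info is None:
        raise ValueError(f"Invalid provider: {provider}")
    try:
        return info["prefix"]       # required key: direct indexing fails fast
    except KeyError as e:
        # The error string is built only when the failure actually happens.
        raise ValueError(f"Provider {provider!r} is missing a prefix") from e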

Correctness verification report:

Test                          | Status
⚙️ Existing Unit Tests        | 🔘 None Found
🌀 Generated Regression Tests | 6 Passed
⏪ Replay Tests               | 🔘 None Found
🔎 Concolic Coverage Tests    | 🔘 None Found
📊 Tests Coverage             | undefined
🌀 Generated Regression Tests Details
from unittest.mock import MagicMock, patch

# imports
import pytest  # used for our unit tests
from langflow.base.models.model_input_constants import MODEL_PROVIDERS_DICT
from langflow.components.agents.agent import AgentComponent
# function to test
from langflow.components.langchain_utilities.tool_calling import \
    ToolCallingAgentComponent
from langflow.logging import logger

MODEL_PROVIDERS_DICT: dict[str, dict] = {}


# unit tests

# Mock classes and inputs for testing
class MockInput:
    def __init__(self, name):
        self.name = name

class MockComponentClass:
    display_name = "MockComponentClass"
    
    def set(self, **kwargs):
        self.kwargs = kwargs
        return self
    
    def build_model(self):
        return "mock_model"

class MockComponentClassThatRaisesException:
    display_name = "MockComponentClass"
    
    def set(self, **kwargs):
        raise Exception("Mock exception during set")

@pytest.fixture
def setup_model_providers_dict():
    global MODEL_PROVIDERS_DICT
    MODEL_PROVIDERS_DICT = {
        "valid_provider": {
            "component_class": MockComponentClass,
            "inputs": [MockInput(name="param1"), MockInput(name="param2")],
            "prefix": "test_"
        },
        "incomplete_provider": {
            "inputs": [MockInput(name="param1"), MockInput(name="param2")]
            # Missing "component_class" and "prefix"
        },
        "large_input_provider": {
            "component_class": MockComponentClass,
            "inputs": [MockInput(name=f"param{i}") for i in range(1000)],
            "prefix": "test_"
        }
    }

@pytest.fixture
def agent_component():
    return AgentComponent()



def test_invalid_model_provider(setup_model_providers_dict, agent_component):
    agent_component.agent_llm = "invalid_provider"
    with pytest.raises(ValueError, match="Invalid model provider: invalid_provider"):
        agent_component.get_llm()


def test_empty_model_providers_dict(agent_component):
    global MODEL_PROVIDERS_DICT
    MODEL_PROVIDERS_DICT = {}
    agent_component.agent_llm = "any_provider"
    with pytest.raises(ValueError, match="Invalid model provider: any_provider"):
        agent_component.get_llm()




@pytest.fixture
def mock_logger():
    # Reconstructed fixture (not shown in the generated output): patch logger.error so the test can assert on it.
    with patch.object(logger, "error") as mock_error:
        yield mock_error


def test_logging_on_error(mock_logger, setup_model_providers_dict, agent_component):
    agent_component.agent_llm = "invalid_provider"
    with pytest.raises(ValueError):
        agent_component.get_llm()
    mock_logger.assert_called_with("Error building invalid_provider language model: Invalid model provider: invalid_provider")

def test_state_modification(setup_model_providers_dict, agent_component):
    original_dict = MODEL_PROVIDERS_DICT.copy()
    agent_component.agent_llm = "some_provider"
    with pytest.raises(ValueError):
        agent_component.get_llm()
    # A failed lookup should leave the provider registry untouched.
    assert MODEL_PROVIDERS_DICT == original_dict
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

from unittest.mock import Mock, patch

# imports
import pytest  # used for our unit tests
from langflow.base.models.model_input_constants import MODEL_PROVIDERS_DICT
from langflow.components.agents.agent import AgentComponent
# function to test
from langflow.components.langchain_utilities.tool_calling import \
    ToolCallingAgentComponent
from langflow.logging import logger

MODEL_PROVIDERS_DICT: dict[str, dict] = {}


# unit tests

# Mock classes for testing
class MockComponentClass:
    display_name = "MockComponent"

    def set(self, **kwargs):
        return self

    def build_model(self):
        return "MockModel"

class MockInput:
    def __init__(self, name):
        self.name = name

class MockModelObject:
    pass

@pytest.fixture
def agent_component():
    return AgentComponent()



def test_invalid_model_provider(agent_component):
    # Edge case: invalid model provider string
    agent_component.agent_llm = "invalid_provider"
    with pytest.raises(ValueError, match="Invalid model provider: invalid_provider"):
        agent_component.get_llm()

def test_empty_model_providers_dict(agent_component):
    # Edge case: empty MODEL_PROVIDERS_DICT
    agent_component.agent_llm = "valid_provider"
    MODEL_PROVIDERS_DICT.clear()
    with pytest.raises(ValueError, match="Invalid model provider: valid_provider"):
        agent_component.get_llm()





def test_logging_errors(agent_component):
    # Side effects: logging errors
    agent_component.agent_llm = "invalid_provider"
    with patch.object(logger, 'error') as mock_logger_error:
        with pytest.raises(ValueError, match="Invalid model provider: invalid_provider"):
            agent_component.get_llm()
        mock_logger_error.assert_called_once_with("Error building invalid_provider language model: Invalid model provider: invalid_provider")
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
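
One caveat about both generated test modules: reassigning MODEL_PROVIDERS_DICT at module level only rebinds the name inside the test file; it does not change the dictionary that the agent component actually reads. A sketch of a fixture that empties the shared registry in place (assuming the component reads the MODEL_PROVIDERS_DICT exported by langflow.base.models.model_input_constants) might look like this:

from unittest.mock import patch

import pytest
from langflow.base.models import model_input_constants


@pytest.fixture
def empty_model_providers():
    # patch.dict(..., clear=True) empties the real dict for the duration of the test and restores it
    # afterwards; because the object is mutated in place, modules that imported it see the change.
    with patch.dict(model_input_constants.MODEL_PROVIDERS_DICT, {}, clear=True):
        yield model_input_constants.MODEL_PROVIDERS_DICT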

Codeflash

Labels: enhancement (New feature or request), size:L (This PR changes 100-499 lines, ignoring generated files)