
Conversation

Contributor

@savy-91 savy-91 commented Jul 24, 2025

Why are these changes needed?

Currently the mem0 memory implementation works by adding a system message to the context; however, not all LLMs support multiple non-consecutive system messages.

This PR keeps the current behavior as the default, while adding a parameter to the constructor of the mem0 implementation that instead simulates the LLM making a tool call and mem0 returning the relevant memories.

In other words, the system message in the conversation history is replaced by an AssistantMessage and a FunctionExecutionResultMessage.
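The shape of the two injection modes can be sketched as follows. The dataclasses below are simplified stand-ins for the framework's message types, and the tool name `retrieve_memories` is illustrative, not taken from the PR:

```python
from dataclasses import dataclass
from typing import List, Union

# Simplified stand-ins for the framework's message classes; the real PR
# uses autogen's AssistantMessage / FunctionExecutionResultMessage types.
@dataclass
class SystemMessage:
    content: str

@dataclass
class FunctionCall:
    id: str
    name: str
    arguments: str

@dataclass
class AssistantMessage:
    content: List[FunctionCall]

@dataclass
class FunctionExecutionResult:
    call_id: str
    content: str

@dataclass
class FunctionExecutionResultMessage:
    content: List[FunctionExecutionResult]

def inject_memories(memories: str, mode: str) -> list:
    """Return the context messages produced for the given injection mode."""
    if mode == "system_message":
        # Current behavior: one extra system message mid-conversation.
        return [SystemMessage(content=f"Relevant memories:\n{memories}")]
    # New behavior: pretend the assistant called a memory tool and that
    # the retrieved memories came back as the tool's execution result.
    call = FunctionCall(id="memory-1", name="retrieve_memories", arguments="{}")
    return [
        AssistantMessage(content=[call]),
        FunctionExecutionResultMessage(
            content=[FunctionExecutionResult(call_id=call.id, content=memories)]
        ),
    ]

msgs = inject_memories("User prefers metric units.", "function_call")
```

The pairing matters: the result message's `call_id` must match the `id` of the simulated call, or models will reject the history as malformed.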

codecov bot commented Jul 25, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 79.94%. Comparing base (e26bb1c) to head (61fafca).
⚠️ Report is 7 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6850      +/-   ##
==========================================
+ Coverage   79.92%   79.94%   +0.01%     
==========================================
  Files         233      233              
  Lines       18108    18126      +18     
==========================================
+ Hits        14473    14491      +18     
  Misses       3635     3635              
Flag Coverage Δ
unittests 79.94% <100.00%> (+0.01%) ⬆️



chore: Export ContextInjectionMode enum

fix: tests

fix: format

chore: remove mem0 deps to fix tests

fix: patch of memory client

fix: check we are dealing with a FunctionCall before accessing attributes

fix: use public constructor

fix: add empty line at end of file
@savy-91 savy-91 force-pushed the feat/multiple_context_injection_modes_mem0 branch from d3c7e8b to 61fafca Compare July 25, 2025 08:52
class ContextInjectionMode(Enum):
"""Enum for context injection modes."""

SYSTEM_MESSAGE = "system_message"
Collaborator

Can we also add USER_MESSAGE? It is a natural choice when the model doesn't support a system message in the middle of the conversation.

Contributor Author

My concern with adding a USER_MESSAGE mode is that we'd then need a more opinionated message format: it would have to tell the LLM to use the retrieved memories, while also not replying as if the user had actually sent them in a message.

"""Enum for context injection modes."""

SYSTEM_MESSAGE = "system_message"
FUNCTION_CALL = "function_call"
Collaborator

Add a description for each mode enum. Also, note that using the function call mode may cause problems with models that require the tool definition to be available when calling the model.
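Per-mode descriptions could look like the sketch below; the docstring wording is illustrative, not the wording merged in the PR:

```python
from enum import Enum

class ContextInjectionMode(Enum):
    """How retrieved mem0 memories are injected into the model context."""

    SYSTEM_MESSAGE = "system_message"
    """Inject memories as an extra system message. May fail on models
    that reject multiple non-consecutive system messages."""

    FUNCTION_CALL = "function_call"
    """Inject memories as a simulated tool call plus its execution result.
    May fail on models that require the tool definition to be present in
    the request."""

print(ContextInjectionMode.FUNCTION_CALL.value)  # prints "function_call"
```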
