Bug Description
Summary
When using the @vectorize-io/hindsight-openclaw plugin (v0.4.7) with OpenClaw (v2026.1.30), retrieved memories are displayed as visible user messages in the chat interface rather than being injected as hidden system context for the LLM.
Expected Behavior
Memories retrieved by the Hindsight plugin should be passed to the LLM as part of the system prompt/context, invisible to the end user in the chat UI. The user should only see their own messages and the assistant's responses, not the internal memory retrieval data.
Actual Behavior
Memories appear as a visible "You" message in the chat containing:
- The instruction text: "The following are relevant memories from past conversations with this user. Use them to personalize your response but do not display them raw:"
- Raw JSON-formatted memory data including the document_id, chunk_id, score, and content fields
- The session initialization prompt
This exposes internal system prompts and raw memory data to users, which should remain hidden.
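For illustration, a hypothetical excerpt of the raw memory data that appears in that visible "You" message (the field names match what is shown in the chat; the values below are invented):

```json
[
  {
    "document_id": "doc_123",
    "chunk_id": "chunk_456",
    "score": 0.87,
    "content": "User prefers concise answers and is working on a TypeScript project."
  }
]
```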
Environment
- OpenClaw Version: 2026.1.30
- hindsight-openclaw Version: 0.4.7 (installed via npm)
- OS: macOS
- Channel: Direct gateway chat (agent:main:main session)
Steps to Reproduce
1. Install OpenClaw (v2026.1.30)
2. Install the hindsight-openclaw plugin via openclaw plugins enable hindsight-openclaw
3. Configure the plugin in ~/.openclaw/openclaw.json with "plugins": { "entries": { "hindsight-openclaw": { "enabled": true } }, "slots": { "memory": "hindsight-openclaw" } } (written out in full in the sketch after this list)
4. Start the OpenClaw gateway with openclaw gateway
5. Open the gateway dashboard at http://localhost:18789/chat
6. Start a new chat session using /new or click "New session"
7. Observe that the first "You" message contains raw memory data and system prompts that should be hidden
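For readability, here is the step 3 configuration written out as it would appear in ~/.openclaw/openclaw.json (only the keys mentioned above; any other existing settings are omitted):

```json
{
  "plugins": {
    "entries": {
      "hindsight-openclaw": { "enabled": true }
    },
    "slots": {
      "memory": "hindsight-openclaw"
    }
  }
}
```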
Additional Context
The issue persists even after starting new sessions. The memories are correctly retrieved but are being injected at the wrong point in the message pipeline, causing them to appear as user-visible content rather than system context.
Disabling the plugin via the Config UI stops the memory injection, but existing sessions retain the corrupted message history.
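A minimal sketch of what this looks like, assuming an OpenAI-style chat message array; the function names and types below are hypothetical illustrations, not the actual hindsight-openclaw or OpenClaw APIs:

```typescript
type Role = "system" | "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string;
}

const MEMORY_PREAMBLE =
  "The following are relevant memories from past conversations with this user. " +
  "Use them to personalize your response but do not display them raw:";

// Observed (buggy) behavior: memories are appended as a user-role message,
// so the chat UI renders them as a visible "You" turn.
function injectAsUserMessage(history: ChatMessage[], memories: string): ChatMessage[] {
  return [...history, { role: "user", content: `${MEMORY_PREAMBLE}\n${memories}` }];
}

// Expected behavior: memories are folded into system context, which the UI
// never displays, so only genuine user and assistant turns remain visible.
function injectAsSystemContext(history: ChatMessage[], memories: string): ChatMessage[] {
  return [
    { role: "system", content: `${MEMORY_PREAMBLE}\n${memories}` },
    ...history,
  ];
}
```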
Version
No response
LLM Provider
None