Built-in skill tool does not inject SKILL.md content into LLM context when system_message mode is "replace" #997
Environment
- Copilot SDK: github-copilot-sdk v0.2.0
- Copilot CLI version: v1.0.2
- Provider: OpenAI BYOM (GPT5Chat, GPT51Reasoning, etc.)
- Session config: system_message.mode = "replace"
Description
When creating or resuming a Copilot SDK session with system_message.mode set to "replace", the built-in skill tool does not inject the contents of SKILL.md files into the LLM context. The LLM only receives a short "loaded successfully" confirmation message as the tool result, while the actual skill instructions from SKILL.md are silently discarded.
As a result, the agent acknowledges that the skill was loaded but has no knowledge of what the skill actually contains, making skill-based workflows non-functional under "replace" mode.
Steps to Reproduce
1. Create a session with system_message.mode = "replace" and a custom system prompt.
2. Configure skill_directories pointing to a directory with valid skill subdirectories (each containing a SKILL.md).
3. Have the LLM invoke the built-in skill tool with a valid skill name.
4. Observe the tool result returned to the LLM.
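For reference, a minimal session config along the lines described above might look like the following (field names are taken from this report; the exact schema and file layout are not confirmed here):

```json
{
  "system_message": {
    "mode": "replace",
    "content": "You are a custom agent with caller-owned instructions."
  },
  "skill_directories": ["./skills"]
}
```

With `./skills/my-skill/SKILL.md` present, step 3 is triggered by asking the model to use `my-skill`.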
Expected Behavior
The skill tool should return the full contents of SKILL.md as the tool result (or inject it into the system prompt / context), so the LLM can follow the skill instructions.
Actual Behavior
The skill tool returns only a brief confirmation (e.g., "loaded successfully") without the SKILL.md content. The skill instructions never reach the LLM.
This likely happens because in "replace" mode, the CLI skips the system prompt augmentation path that would normally append skill content to the context. The built-in skill tool appears to rely on mutating the system prompt to deliver skill content, which conflicts with "replace" mode where the caller owns the system prompt entirely.
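The suspected control flow can be sketched in a few lines. This is an illustrative model of the hypothesis above, not the actual CLI source; the function and variable names are hypothetical:

```python
# Hypothetical model of the suspected prompt-building logic.
# In "append" mode, loaded skill content is merged into the system prompt;
# in "replace" mode, the caller-supplied prompt is used verbatim and the
# skill content is silently dropped, matching the observed behavior.
def build_system_prompt(mode: str, caller_prompt: str, skill_content: str) -> str:
    if mode == "replace":
        # Caller owns the prompt entirely; skill augmentation path is skipped,
        # so skill_content never reaches the LLM.
        return caller_prompt
    # Default/augmentation path: skill instructions are appended.
    return caller_prompt + "\n\n" + skill_content

prompt = build_system_prompt("replace", "custom prompt", "SKILL.md body")
# The skill body is absent from the final prompt under "replace" mode.
```

A fix consistent with the Expected Behavior section would be to deliver the SKILL.md content through the tool result instead of (or in addition to) the system prompt, so it survives regardless of mode.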