
Conversation

@torresmateo (Collaborator) commented Dec 10, 2025

Preview here: https://docs-git-mateo-dev-25-write-connecting-arcade-a84eaa-arcade-ai.vercel.app/en/home/connect-arcade-to-your-llm


Note

Introduces a step-by-step Python guide for wiring Arcade tool-calling into an LLM app.

  • New setup-arcade-with-your-llm-python guide: project setup with uv, environment config, initializing Arcade and OpenRouter clients, fetching formatted tool definitions, a helper for tool authorization/execution, a ReAct-style multi-turn loop with tool calls, an interactive chat runner, and example code (see the sketch after this list)
  • Updates app/en/guides/agent-frameworks/_meta.tsx to include the new guide in navigation
  • Refreshes public/llms.txt (git-sha/timestamp) and adds a "Connect Arcade to your LLM" entry for discovery
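
As a rough orientation, here is a minimal sketch of the setup those bullets describe, assuming the arcadepy and openai packages; the environment variable names and the "gmail" toolkit are illustrative assumptions, not taken from the guide:

import os

from arcadepy import Arcade
from openai import OpenAI

# Arcade client: tool definitions, authorization, and execution.
arcade = Arcade(api_key=os.environ["ARCADE_API_KEY"])

# OpenRouter exposes an OpenAI-compatible API, so the standard OpenAI SDK
# works once base_url points at OpenRouter.
llm = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

# Fetch tool definitions pre-formatted for OpenAI-style tool calling.
# The "gmail" toolkit name is a placeholder.
tools = list(arcade.tools.formatted.list(format="openai", toolkit="gmail"))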

Written by Cursor Bugbot for commit 120198a. This will update automatically on new commits.

@vercel vercel bot commented Dec 10, 2025

The latest updates on your projects.

Project   Deployment   Review                     Updated (UTC)
docs      Ready        Ready (Preview, Comment)   Jan 9, 2026 2:04am

"content": tool_result,
})

continue

Bug: Missing assistant message before tool results in history

When the LLM returns tool calls, the code appends tool result messages to history but never appends the assistant message that contained the tool_calls. The OpenAI API (and compatible APIs like OpenRouter) requires the assistant message with tool_calls to appear in the conversation history before the corresponding tool result messages. This will cause an API error on the next iteration of the loop when the malformed history is sent back to the model.
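
A minimal sketch of the fix inside the guide's multi-turn loop, assuming an OpenAI-style response object; execute_tool stands in for the guide's authorization/execution helper and is a hypothetical name:

message = response.choices[0].message

if message.tool_calls:
    # Append the assistant message first, so each role="tool" result below
    # can be matched to a tool_call_id the API has already seen.
    history.append({
        "role": "assistant",
        "content": message.content,
        "tool_calls": [tc.model_dump() for tc in message.tool_calls],
    })
    for tool_call in message.tool_calls:
        history.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": execute_tool(tool_call),  # hypothetical helper name
        })
    continue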


@nearestnabors (Contributor) left a comment


I need more OpenRouter tokens to actually test that it works, but I wanted to share what I have so far!

torresmateo and others added 2 commits December 16, 2025 14:58
Co-authored-by: RL "Nearest" Nabors <[email protected]>
Co-authored-by: RL "Nearest" Nabors <[email protected]>

# Print the latest assistant response
assistant_response = history[-1]["content"]
print(f"\n🤖 Assistant: {assistant_response}\n")

Bug: Tool result shown as assistant response when max_turns exhausted

When invoke_llm exhausts max_turns while the assistant is still making tool calls, the function returns with a tool response as the last history item. The chat() function then accesses history[-1]["content"] and prints it prefixed with "🤖 Assistant:", displaying raw tool output as if it were the assistant's response. This produces confusing output when many consecutive tool calls are needed.
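
One possible guard on the chat() side, a sketch assuming invoke_llm mutates history in place:

last_message = history[-1]
if last_message["role"] == "assistant":
    print(f"\n🤖 Assistant: {last_message['content']}\n")
else:
    # history ended on a role="tool" message: the turn budget ran out
    # while the model was still calling tools.
    print("\n⚠️ Tool-call limit reached before the assistant produced a final answer.\n")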


@nearestnabors (Contributor) left a comment


Still getting the same malformed directory structure after running the commands in step 1:

[Two screenshots of the malformed directory structure, Dec 17, 2025]

torresmateo enabled auto-merge (squash) December 19, 2025 16:44
torresmateo disabled auto-merge December 19, 2025 17:10
@torresmateo (Collaborator, Author)

Holding the merge until nav is merged.
FYI @nearestnabors


# Print the latest assistant response
assistant_response = history[-1]["content"]
print(f"\n🤖 Assistant: {assistant_response}\n")

Max turns exceeded causes a tool result to be printed as the response

Medium Severity

The chat() function assumes history[-1] is always an assistant message and prints its content as the assistant's response. However, when invoke_llm exits because max_turns is exceeded while still processing tool calls, history[-1] is a tool message with role: "tool". This causes the raw tool result (likely a JSON string) to be displayed to the user as "🤖 Assistant:" output, creating confusing behavior.

🔬 Verification Test

Why verification test was not possible: This edge case requires the LLM to continuously make tool calls until the max_turns limit (5) is reached without providing a final response. This is difficult to trigger reliably in testing as it depends on LLM behavior and requires actual API credentials. The logic flaw is evident from code inspection: when the while loop exits due to turns >= max_turns during tool processing, no assistant message is appended, yet chat() unconditionally treats history[-1]["content"] as the assistant response.
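
Alternatively, the invariant can be restored inside invoke_llm itself, so history[-1] is always an assistant message. A sketch under the same assumptions as above (the signature, model id, and execute_tool helper are illustrative, not confirmed source code):

def invoke_llm(history, llm, tools, max_turns=5):
    for _ in range(max_turns):
        response = llm.chat.completions.create(
            model="openai/gpt-4o",  # illustrative OpenRouter model id
            messages=history,
            tools=tools,
        )
        message = response.choices[0].message
        if not message.tool_calls:
            history.append({"role": "assistant", "content": message.content})
            return history
        history.append({
            "role": "assistant",
            "content": message.content,
            "tool_calls": [tc.model_dump() for tc in message.tool_calls],
        })
        for tool_call in message.tool_calls:
            history.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": execute_tool(tool_call),  # hypothetical helper
            })
    # max_turns exhausted while still tool-calling: end on an assistant
    # message so chat() can safely print history[-1]["content"].
    history.append({
        "role": "assistant",
        "content": "I reached the tool-call limit before finishing; please try again.",
    })
    return history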

