Inspect what requests are being made to LLM endpoints under the hood. Check the Medium article "Chat Completion Inspector: Debugging LLM Interactions" for more details.
Install uv (if not already installed):

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.local/bin/env
```

After cloning the repository, install the dependencies:
```bash
uv sync
```

## Running the Inspector API
```bash
uv run openai_inspector_server.py
```

Every request made to the LLM endpoints will then be logged by the inspector API to a JSON file in the root directory.
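For the inspector to see your traffic, your application has to send its requests through it. As a minimal sketch (the host, port, and `/v1` path below are assumptions, not confirmed by this README; check the server's startup output for the actual address), pointing an OpenAI client at the inspector might look like this:

```python
from openai import OpenAI

# Assumption: the inspector listens locally and forwards requests to the
# real provider. Replace the base_url with whatever the server prints.
client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical inspector address
    api_key="sk-...",                     # your real key, forwarded upstream
)

# Any call made through this client is what the inspector records.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```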
To verify that everything is working, run the sample agent:

```bash
uv run sample_langgraph_agent.py
```
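After a run, you can read the captured requests back out of the log file. The filename and entry schema below are hypothetical (this README only says a JSON file is written to the root directory), so adjust them to match the file the inspector actually creates:

```python
import json
from pathlib import Path

# Assumption: the inspector writes a JSON array of captured request
# bodies to one file in the repo root; the name below is hypothetical.
log_file = Path("requests_log.json")

entries = json.loads(log_file.read_text())

# Print a quick summary of each captured chat-completion request.
for entry in entries:
    messages = entry.get("messages", [])
    print(f"model={entry.get('model')!r}, {len(messages)} message(s)")
```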