
Containerising inspector #257


Merged: 19 commits merged into modelcontextprotocol:main from the dockerfile branch on Jun 16, 2025

Conversation

@aaronpowell (Contributor) commented Apr 3, 2025

Fixes #237

Motivation and Context

I want to be able to use a container version of the inspector rather than via npx.

How Has This Been Tested?

I merged the branch into main on my fork and created a tag so I could publish a release. Here's the workflow run (ignore the failing package job; I can't publish to npm) and here's the published image.

Note: since the workflow is configured to publish to GitHub Packages, not Docker Hub, the GITHUB_TOKEN needs to be able to write packages. This is configured in the workflow, but you also need to update the repo settings to allow the workflow to request a token with write permissions:

[screenshot]
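For reference, the permissions block that lets the automatically provided GITHUB_TOKEN push packages typically looks like this (a sketch of the general pattern, not necessarily the exact contents of this PR's workflow):

# allow the workflow's GITHUB_TOKEN to push packages (e.g. to ghcr.io)
permissions:
  contents: read
  packages: write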

Breaking Changes

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation update

Checklist

  • I have read the MCP Documentation
  • My code follows the repository's style guidelines
  • New and existing tests pass locally
  • I have added appropriate error handling
  • I have added or updated documentation as needed

Additional context

Currently, the image only has the Node.js runtime in it, so it doesn't support running stdio servers with Python/.NET/Java/etc. Instead, those would probably be handled as different tags. My primary focus is on supporting servers that use SSE (and Streamable HTTP in the future).

@aaronpowell aaronpowell marked this pull request as ready for review April 8, 2025 01:46
@cliffhall (Contributor)

We really need something in the "how has this been tested" section of these PRs, particularly if it hooks into CI.

@cliffhall cliffhall added enhancement New feature or request waiting on submitter Waiting for the submitter to provide more info labels Apr 8, 2025
@aaronpowell (Contributor, Author)

We really need something in the "how has this been tested" section of these PRs, particularly if it hooks into CI.

I'll update that section, but it's mainly running a docker build and starting a container.

Testing the CI pipeline is a bit harder, since it's artifact publishing, so I'm just using the instructions from the GitHub docs.

@cliffhall (Contributor)

We really need something in the "how has this been tested" section of these PRs, particularly if it hooks into CI.

I'll update that section, but it's mainly running a docker build and starting a container.

Testing the CI pipeline is a bit harder, since it's artifact publishing, so I'm just using the instructions from the GitHub docs.

Got it. But I don't want to break the inspector with an untested workflow. Can you please test it in your fork and take screenshots of every step?

@aaronpowell (Contributor, Author)

@cliffhall I've updated the issue description with how I've tested it and links to the Actions run on my fork that I used.

Here are some screenshots of the image on my fork:

[screenshot]

And the workflow run (sorry, it's pretty long, so I'll break it a bit 🤣):

[screenshots]

@cliffhall (Contributor)

@aaronpowell thanks for doing that testing, I really appreciate it. Will give it closer eyes soon.

@aaronpowell (Contributor, Author)

@aaronpowell thanks for doing that testing, I really appreciate it. Will give it closer eyes soon.

Let me know if you need any more details on the workflow or the life cycle.

@olaservo (Member)

Dropping a note here too - looking into how we might get this pushed as an official image on Docker Hub. (There appears to be an old version published there that is about 4 months old.)

@cliffhall (Contributor) commented Apr 30, 2025

@aaronpowell This is what I get when I try to pull your container image with:

docker pull ghcr.io/aaronpowell/inspector:sha256-8fe64aece8867d5c619e6910f1aa7f7c3c4267bb7374a42ec44bc6ab028f4ea5
[screenshots]

@dahlsailrunner

I got the same error as above when pulling the container with the :sha... tag, but when I did this everything worked fine, including running the container.

docker pull ghcr.io/aaronpowell/inspector

@cliffhall (Contributor) commented May 8, 2025

I got the same error as above when pulling the container with the :sha... tag, but when I did this everything worked fine, including running the container.

docker pull ghcr.io/aaronpowell/inspector

Yep, this works. I was able to run the container, but of course the container's ports are not accessible from the host out of the box, so the link doesn't work. You have to map the ports in Docker Desktop's Optional Settings on the run dialog, or with docker run -p 6277:6277 -p 6274:6274.

Then you're good to go.

[screenshots]

@cliffhall (Contributor) left a comment:

@aaronpowell this is looking good. There is some setup outside the files in this repo though. Will look into it.

Dockerfile Outdated
@@ -0,0 +1,26 @@
FROM node:22-slim
Contributor review comment:
It's likely advantageous to have separate build and runtime stages:

# stage 1 - build stage
FROM node:22-slim AS build
WORKDIR /app

# copy all the files over and install dependencies
COPY . .
RUN npm ci

# actually compile the TypeScript, which emits "server/build/" and "client/dist/"
RUN npm run build

# stage 2 - runtime stage with the compiled/bundled JavaScript
FROM node:22-slim
WORKDIR /app

# copy over all build/dist files (including node_module dependencies)
COPY --from=build /app/client/dist ./client/dist
COPY --from=build /app/server/build ./server/build
COPY --from=build /app/node_modules ./node_modules

# etc. etc. rest of dockerfile

Dockerfile Outdated
Comment on lines 9 to 12
# Install dependencies
# Working around https://github.com/npm/cli/issues/4828
# RUN npm ci
RUN npm install --no-package-lock
Contributor review comment:

Why this workaround? Are there platform-specific optional dependencies in the inspector that need to be pulled into the image for running?

@cliffhall (Contributor) May 17, 2025

Should probably be npm ci instead.

@olaservo (Member)

cc @slimslenderslacks

@Stefano-Pedretti

Hi, we may also need to change the listening host; right now it's hardcoded to 127.0.0.1, which is not exposed on the Docker network.

New environment variables like CLIENT_HOST and SERVER_HOST should be added.
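For illustration, if such variables existed (CLIENT_HOST and SERVER_HOST are just the names suggested here, not existing options), the container could be run with something like:

# hypothetical variables - bind both processes to all interfaces so Docker port mapping works
docker run --rm -p 6274:6274 -p 6277:6277 \
  -e CLIENT_HOST=0.0.0.0 -e SERVER_HOST=0.0.0.0 \
  ghcr.io/modelcontextprotocol/inspector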

@cliffhall (Contributor)

Hey @slimslenderslacks and @aaronpowell! Can we determine what the exact gap is with this? There is already another PR #471 looking to add this. We have at least been able to test this on Aaron's deployment, so I'm inclined to want this one to make it in.

In addition to addressing the comments above by @jpmcb, I think we need to set up some GitHub secrets that @dsp-ant, @ihrpr, @jspahrsummers or other core team members could wrangle for us.

@aaronpowell (Contributor, Author)

@cliffhall I've updated the Dockerfile based on the last round of feedback, but I'll admit the container is a little clunky to use because of the environment variables. What I intended was for you to be able to just run docker run --rm -p 8080:6274 -p 8081:6277 inspector, but that has a problem: the server is exposed on the host at 8081, but the client app doesn't know that and thinks it's on 6277 (which it is, but only inside the container). The result is that you'd have to go into the inspector UI and reconfigure the server endpoint so that it uses the port exposed to the host.

Instead, you run docker run --rm -p 8080:8080 -p 8081:8081 -e CLIENT_PORT=8080 -e SERVER_PORT=8081 inspector, specifying the environment variables to override the ports the client and server listen on inside the container.

Admittedly, this is only a problem if you're mapping non-default ports; docker run --rm -p 6274:6274 -p 6277:6277 inspector is fine because the host and container ports match (and are the defaults).
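To summarise the two cases described above (inspector is just the image name used in these examples):

# default ports - host and container ports match, nothing else to configure
docker run --rm -p 6274:6274 -p 6277:6277 inspector

# custom ports - map them AND tell the client/server processes which ports to use
docker run --rm -p 8080:8080 -p 8081:8081 \
  -e CLIENT_PORT=8080 -e SERVER_PORT=8081 \
  inspector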

@Stefano-Pedretti

@cliffhall I've updated the Dockerfile ...
Admittedly, this is only a problem if you're mapping a non-default port, as docker run --rm -p 6274:6274 -p 6277:6277 inspector is fine as the ports are matching each other (and the defaults).

Thanks, please consider a reverse proxy scenario (e.g. Traefik) where a path prefix may be added. I can provide a Docker Compose example if needed.
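As a rough sketch of the kind of setup I mean (the service name, path prefix, and Traefik labels below are illustrative only, not a tested configuration):

# docker-compose.yml (illustrative sketch)
services:
  inspector:
    image: ghcr.io/modelcontextprotocol/inspector
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.inspector.rule=PathPrefix(`/inspector`)"
      - "traefik.http.middlewares.inspector-strip.stripprefix.prefixes=/inspector"
      - "traefik.http.routers.inspector.middlewares=inspector-strip"
      - "traefik.http.services.inspector.loadbalancer.server.port=6274"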

@cliffhall (Contributor)

@aaronpowell Is there an updated image I can pull? I see the SHA digest still matches what was there before.

[screenshot]

@aaronpowell (Contributor, Author)

New tagged image up:

docker pull ghcr.io/aaronpowell/inspector:docker-test-2

Here it is running under WSL on my machine:

[screenshot]

@cliffhall (Contributor)

OK, success. I was testing against a streamableHttp server running on localhost (outside the container) and so had to turn on host networking so it could see localhost:3001 and connect to it.

docker run --rm --network host -p 6274:6274 -p 6277:6277 ghcr.io/aaronpowell/inspector:docker-test-2
[screenshot]

with:
  registry: ghcr.io
  username: ${{ github.actor }}
  password: ${{ secrets.GITHUB_TOKEN }}
@aaronpowell (Contributor, Author) review comment:

That secret is provided by GitHub as part of Actions; it doesn't need to be set up.

@cliffhall (Contributor) commented Jun 5, 2025

That secret is provided by GitHub as part of Actions, it doesn't need to be setup

@aaronpowell Nice, thanks.

I'm trying to determine what the gap is so we can commit and try this.

  • Any other special settings to allow publishing to ghcr.io (that we might need someone with admin rights to do first)?
  • If we try to run this workflow what will the final name be?
    • Your last one was ghcr.io/aaronpowell/inspector:docker-test-2
    • I see that was a release tag in your repo
    • If this runs on every release (0.15.0, 0.16.0, 0.16.1 etc.), I'm guessing the list of images would end up being:
      • ghcr.io/modelcontextprotocol/inspector:0.15.0
      • ghcr.io/modelcontextprotocol/inspector:0.16.0
      • ghcr.io/modelcontextprotocol/inspector:0.16.1

@aaronpowell (Contributor, Author)

@cliffhall here are the docs that describe how to publish to GitHub Container Registry (there is also a section there on publishing to Docker Hub).

From an auth perspective, it uses the GITHUB_TOKEN secret that Actions includes, and the Action requests some additional permissions, which I already have in the workflow file.

If we try to run this workflow what will the final name be?

The name of the image will be ghcr.io/modelcontextprotocol/inspector, which is defined in this line. The reason mine is aaronpowell/inspector is that the workflow uses the github.repository variable. If you want to push to Docker Hub, you'd remove the ghcr.io part.

The action only runs when a release is created (per this condition). For my testing I was just creating tags/releases with random names, but with your release structure you'd end up with 0.15.0, etc. as tags on the image.
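For reference, a release-triggered publish step of the general shape described above might look roughly like this (a sketch only; action versions, step names, and tag expression are illustrative, not the exact workflow in this PR):

# illustrative sketch of a publish job that runs when a release is created
- name: Log in to GitHub Container Registry
  uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}

- name: Build and push image
  uses: docker/build-push-action@v6
  with:
    context: .
    push: true
    tags: ghcr.io/${{ github.repository }}:${{ github.event.release.tag_name }}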

@aaronpowell (Contributor, Author)

The workflow step I added here is pretty simplistic in terms of what it generates and publishes, but you can add more registries, tags and metadata if you desire. This is an example of a more complex image build that I use on another project that generates a lot more tags and supports multiple container registries (GitHub Container Registry and Azure Container Registry).

@cliffhall (Contributor)

The workflow step I added here is pretty simplistic in terms of what it generates and publishes, but you can add more registries, tags and metadata if you desire. This is an example of a more complex image build that I use on another project that generates a lot more tags and supports multiple container registries (GitHub Container Registry and Azure Container Registry).

Right. I think we might want to publish to Docker Hub as well, but we'll need to set up a namespace and login info that goes into secrets, etc. For the first go, since this should work with an admin's approval and no further changes, I think this looks pretty good.

@dsp-ant dsp-ant merged commit 9051727 into modelcontextprotocol:main Jun 16, 2025
3 checks passed
@cliffhall cliffhall removed the waiting on submitter Waiting for the submitter to provide more info label Jun 16, 2025
@aaronpowell aaronpowell deleted the dockerfile branch June 16, 2025 23:49
cliffhall added a commit to cliffhall/mcp-inspector that referenced this pull request Jun 17, 2025
  - Need to do a patch level release in order to test the ghcr docker container publishing feature added to main.yml in modelcontextprotocol#257
@cliffhall cliffhall mentioned this pull request Jun 17, 2025
Labels: enhancement (New feature or request)

Successfully merging this pull request may close these issues:

  • Make inspector available as a Docker image

7 participants