Conversation

@ibetitsmike
Contributor

Summary

Adds desktop deep links via the custom mux:// protocol so a URL can launch/focus Mux, select a project, and open a new chat draft with the prompt prefilled (prefill only; the prompt is never auto-sent).
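
For example, a deep link of the following shape (the parameter names come from the shared parser added in this PR; the path and prompt values are purely illustrative) selects a project and prefills a prompt:

```
mux://chat/new?projectPath=%2Fhome%2Fme%2Fmy-app&prompt=Investigate%20the%20flaky%20CI%20test
```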

Background

External tools (scripts, issue trackers, notifications) can now jump directly into Mux with context, without requiring users to manually find the project and paste an initial prompt.

Implementation

  • Added common deep link payload types + a pure URL parser (mux://chat/new?...) shared by main/preload/renderer.
  • Updated CLI argv routing so packaged Electron launches the desktop UI when invoked with a mux://... arg.
  • Registered the protocol in electron-builder config and added main-process capture (sketched after this list) for:
    • macOS open-url
    • Windows/Linux startup argv + second-instance argv
    • buffered delivery until the main window finishes loading
  • Bridged deep links through preload with buffering + window.api.consumePendingDeepLinks() / window.api.onDeepLink().
  • Renderer handles deep links by:
    • resolving projectPath / projectId
    • creating a fresh workspace draft
    • persisting the prefilled prompt in the draft input key
    • navigating to the project + draft.
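
A minimal sketch of that main-process capture, assuming a fairly standard Electron bootstrap (the Electron events and the mux:deep-link IPC channel name come from this PR; the module path, variable names, and window-creation details are assumptions):

```ts
// main.ts (sketch) – capture mux:// URLs from macOS open-url and from Windows/Linux
// argv (startup and second-instance), and buffer them until the window has loaded.
// Protocol registration for packaged builds lives in the electron-builder config.
import { app, BrowserWindow } from "electron";
import { parseMuxDeepLink, type MuxDeepLinkPayload } from "./common/deepLink"; // path assumed

let mainWindow: BrowserWindow | null = null;
let windowReady = false;
const buffered: MuxDeepLinkPayload[] = [];

function handleDeepLinkUrl(raw: string): void {
  const payload = parseMuxDeepLink(raw);
  if (!payload) return; // unrecognized links are ignored rather than crashing
  if (windowReady && mainWindow) {
    mainWindow.webContents.send("mux:deep-link", payload);
  } else {
    buffered.push(payload); // delivered once did-finish-load fires
  }
}

const findMuxArg = (argv: string[]) => argv.find((arg) => arg.startsWith("mux://"));

// second-instance only fires for the instance holding the single-instance lock.
if (!app.requestSingleInstanceLock()) app.quit();

// macOS delivers deep links through open-url.
app.on("open-url", (event, url) => {
  event.preventDefault();
  handleDeepLinkUrl(url);
});

// Windows/Linux deliver deep links in argv, via second-instance when already running.
app.on("second-instance", (_event, argv) => {
  const url = findMuxArg(argv);
  if (url) handleDeepLinkUrl(url);
  if (mainWindow) {
    if (mainWindow.isMinimized()) mainWindow.restore();
    mainWindow.focus();
  }
});

app.whenReady().then(() => {
  mainWindow = new BrowserWindow({ /* preload path, size, etc. elided */ });
  mainWindow.webContents.once("did-finish-load", () => {
    windowReady = true;
    for (const payload of buffered.splice(0)) {
      mainWindow?.webContents.send("mux:deep-link", payload);
    }
  });
  // ...and in argv at first launch (cold start from a link).
  const startupUrl = findMuxArg(process.argv);
  if (startupUrl) handleDeepLinkUrl(startupUrl);
  // mainWindow.loadURL(...) / loadFile(...) elided
});
```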

Validation

  • make static-check

Risks

  • Protocol handling differs across platforms; mitigated by handling both open-url and argv/second-instance paths and buffering until renderer is ready.
  • If the target project can't be resolved, the deep link is ignored (no crash).

Generated with mux • Model: openai:gpt-5.2 • Thinking: xhigh • Cost: $7.99

Add a shared MuxDeepLinkPayload type and parseMuxDeepLink() helper for parsing mux://chat/new deep links (projectPath/projectId/prompt/sectionId), with unit tests.
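
A sketch of what the shared parser could look like, assuming the payload is roughly a flat object of the query parameters named above (the actual type and helper in the PR may be shaped differently):

```ts
export interface MuxDeepLinkPayload {
  kind: "chat/new";
  projectPath?: string;
  projectId?: string;
  prompt?: string;
  sectionId?: string;
}

export function parseMuxDeepLink(raw: string): MuxDeepLinkPayload | null {
  let url: URL;
  try {
    url = new URL(raw);
  } catch {
    return null; // not a valid URL at all
  }
  if (url.protocol !== "mux:") return null;
  // For a non-special scheme, "mux://chat/new" parses with host "chat" and pathname "/new".
  if (url.host !== "chat" || url.pathname !== "/new") return null;
  const q = url.searchParams;
  return {
    kind: "chat/new",
    projectPath: q.get("projectPath") ?? undefined,
    projectId: q.get("projectId") ?? undefined,
    prompt: q.get("prompt") ?? undefined,
    sectionId: q.get("sectionId") ?? undefined,
  };
}
```
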
In packaged Electron, Windows/Linux deep links are passed in argv. Treating `mux://...` as an Electron launch arg ensures the desktop app is opened instead of routing to CLI help.
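
The routing idea, sketched with hypothetical entrypoint names (launchDesktopApp and runCli do not exist in the repo as written; only the mux:// prefix check reflects the change described above):

```ts
// Illustration only – the real CLI entrypoint is structured differently;
// launchDesktopApp and runCli are hypothetical names.
declare function launchDesktopApp(): void; // start the Electron desktop UI
declare function runCli(argv: string[]): void; // normal CLI routing (help, subcommands, ...)

function hasMuxDeepLinkArg(argv: string[]): boolean {
  return argv.some((arg) => arg.startsWith("mux://"));
}

if (hasMuxDeepLinkArg(process.argv)) {
  // The OS launched the packaged app to handle a mux:// link,
  // so open the desktop UI instead of falling through to CLI help.
  launchDesktopApp();
} else {
  runCli(process.argv.slice(2));
}
```
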
Register mux:// protocol and forward deep links from main to renderer.

Bridge mux:deep-link IPC events through preload by buffering payloads until the renderer subscribes, and expose them on window.api. Also extend WindowApi typings with consumePendingDeepLinks() and onDeepLink().
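
A sketch of the preload bridge under those names (contextBridge/ipcRenderer are standard Electron APIs; the buffering shown here reflects the final behaviour after the later race fix in this PR, i.e. payloads are only buffered while no subscriber exists):

```ts
// preload.ts (sketch) – only the deep-link slice of window.api is shown; the real
// api object exposes more. Channel and method names come from this PR.
import { contextBridge, ipcRenderer } from "electron";
import type { MuxDeepLinkPayload } from "./common/deepLink"; // path assumed

type DeepLinkListener = (payload: MuxDeepLinkPayload) => void;

const pending: MuxDeepLinkPayload[] = [];
const listeners = new Set<DeepLinkListener>();

ipcRenderer.on("mux:deep-link", (_event, payload: MuxDeepLinkPayload) => {
  if (listeners.size === 0) {
    pending.push(payload); // buffer only while nobody has subscribed yet
  } else {
    listeners.forEach((listener) => listener(payload));
  }
});

contextBridge.exposeInMainWorld("api", {
  // Drain anything that arrived before the renderer subscribed.
  consumePendingDeepLinks(): MuxDeepLinkPayload[] {
    return pending.splice(0);
  },
  // Subscribe to future deep links; returns an unsubscribe function.
  onDeepLink(listener: DeepLinkListener): () => void {
    listeners.add(listener);
    return () => {
      listeners.delete(listener);
    };
  },
});
```
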
Handle mux:// deep links by opening a fresh workspace draft for the resolved project and prefilling the creation input text (without auto-sending). Startup links that arrive before projects load are buffered in-memory and retried.
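
A heavily simplified sketch of the renderer side; the window.api method names and the "fresh draft, prefill only, never auto-send" behaviour come from this PR, while the helper functions declared below are hypothetical stand-ins for the real project/draft/navigation code:

```ts
import type { MuxDeepLinkPayload } from "./common/deepLink"; // path assumed

declare global {
  interface Window {
    api: {
      consumePendingDeepLinks(): MuxDeepLinkPayload[];
      onDeepLink(listener: (payload: MuxDeepLinkPayload) => void): () => void;
    };
  }
}

// Hypothetical helpers (the real code resolves projects via ProjectContext, etc.):
declare function projectsReady(): boolean;
declare function resolveProject(projectPath?: string, projectId?: string): { id: string } | null;
declare function createWorkspaceDraft(projectId: string): string;
declare function setDraftInput(draftId: string, text: string): void;
declare function navigateToProjectDraft(projectId: string, draftId: string): void;
declare function queueForRetry(payload: MuxDeepLinkPayload): void;

function handleDeepLink(payload: MuxDeepLinkPayload): void {
  if (!projectsReady()) {
    queueForRetry(payload); // startup race: retried once projects finish loading
    return;
  }
  const project = resolveProject(payload.projectPath, payload.projectId);
  if (!project) return; // unresolvable links are ignored (no crash)
  const draftId = createWorkspaceDraft(project.id); // always a fresh draft
  if (payload.prompt) setDraftInput(draftId, payload.prompt); // prefill only; never auto-send
  navigateToProjectDraft(project.id, draftId);
}

// Subscribe before draining so a link arriving in between is not lost
// (this ordering is the race fix discussed later in this PR).
window.api.onDeepLink(handleDeepLink);
window.api.consumePendingDeepLinks().forEach(handleDeepLink);
```
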
---

_Generated with `mux` • Model: `openai:gpt-5.2` • Thinking: `xhigh` • Cost: `$1.91`_

<!-- mux-attribution: model=openai:gpt-5.2 thinking=xhigh costs=1.91 -->
---

_Generated with `mux` • Model: `openai:gpt-5.2` • Thinking: `xhigh` • Cost: `$7.99`_

<!-- mux-attribution: model=openai:gpt-5.2 thinking=xhigh costs=7.99 -->
github-actions bot added the enhancement (New feature or functionality) label on Jan 30, 2026

chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a957bf3272

@ibetitsmike
Contributor Author

@codex review

Subscribe before draining deep-link buffers and only buffer when there are no subscribers.

This avoids a startup race where mux:// deep links arriving between the initial consume call and the subscription could be lost.

---

_Generated with `mux` • Model: `openai:gpt-5.2` • Thinking: `xhigh` • Cost: `$8.49`_

<!-- mux-attribution: model=openai:gpt-5.2 thinking=xhigh costs=8.49 -->
@ibetitsmike
Contributor Author

@codex review

Fixed the deep-link startup race by subscribing before draining the preload buffer, and updated preload to only buffer when there are no subscribers.
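
The ordering change, sketched against the window.api surface from the earlier preload sketch (handleDeepLink is the renderer handler from the sketch above; both snippets are illustrative, not the exact diff):

```ts
// Racy ordering (roughly the previous shape): drain first, then subscribe –
// a deep link delivered between the two calls could be missed.
window.api.consumePendingDeepLinks().forEach(handleDeepLink);
window.api.onDeepLink(handleDeepLink);

// Fixed ordering: subscribe first, then drain; combined with preload only
// buffering while there are no subscribers, every link takes exactly one path.
window.api.onDeepLink(handleDeepLink);
window.api.consumePendingDeepLinks().forEach(handleDeepLink);
```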

chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 42a73fc7bb

Queue unresolved mux:// deep links when the API/projects list is not ready yet, and retry once projects load.

This avoids dropping links during startup in the window where ProjectContext can report loading=false before the API connection is established.

---

_Generated with `mux` • Model: `openai:gpt-5.2` • Thinking: `xhigh` • Cost: `$8.49`_

<!-- mux-attribution: model=openai:gpt-5.2 thinking=xhigh costs=8.49 -->
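
A hypothetical shape for that retry path, assuming the renderer is React (ProjectContext is mentioned above); the hook, context fields, and handler names below are stand-ins, not the real identifiers:

```ts
import { useEffect, type MutableRefObject } from "react";
import type { MuxDeepLinkPayload } from "./common/deepLink"; // path assumed

// Stand-ins for the real context/handler:
declare function useProjects(): { projects: unknown[]; loading: boolean; apiConnected: boolean };
declare function handleDeepLink(payload: MuxDeepLinkPayload): void;

export function useDeepLinkRetry(pendingRef: MutableRefObject<MuxDeepLinkPayload[]>): void {
  const { projects, loading, apiConnected } = useProjects();

  useEffect(() => {
    // Don't retry while the API is still connecting or projects are still loading;
    // the context can briefly report loading=false before the connection is up.
    if (!apiConnected || loading) return;
    const retry = pendingRef.current.splice(0);
    retry.forEach(handleDeepLink); // re-resolve now that the projects list is available
  }, [apiConnected, loading, projects, pendingRef]);
}
```
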
@ibetitsmike
Contributor Author

@codex review

Handled startup project-loading race: unresolved mux:// deep links are now buffered when the API isn't connected yet / projects list isn't ready, and retried once projects load.

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. You're on a roll.
