
Conversation

@GregHolmes
Contributor

Description

Add documentation for user input messaging in AI Transport, covering how users send prompts to AI agents over Ably channels.

Topics covered:

  • Agent subscription to receive user messages
  • User identification with verified clientId
  • Publishing prompts with message correlation
  • Correlating agent responses to original input
  • Streaming responses with input correlation
  • Handling multiple concurrent prompts

@GregHolmes GregHolmes self-assigned this Jan 5, 2026
@GregHolmes GregHolmes added the review-app Create a Heroku review app label Jan 5, 2026
@coderabbitai

coderabbitai bot commented Jan 5, 2026

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@GregHolmes GregHolmes changed the title from "Ait 106 feature documentation accepting user input" to "[AIT-106] - Feature documentation accepting user input" Jan 5, 2026
@ably-ci ably-ci temporarily deployed to ably-docs-ait-106-featu-wfvmlm January 5, 2026 15:59 Inactive
@GregHolmes GregHolmes changed the base branch from main to AIT-129-AIT-Docs-release-branch January 6, 2026 14:20
@mschristensen mschristensen force-pushed the AIT-129-AIT-Docs-release-branch branch from aebe2c1 to ea0ac8d January 7, 2026 11:48
Contributor

@mschristensen mschristensen left a comment


Looks really great so far, thanks Greg. Left some suggestions and comments.

3. The agent receives the message, processes it, and generates a response.
4. The agent publishes the response back to the channel, correlating it to the original input.

This decoupled approach means agents don't need persistent connections to individual users. Instead, they subscribe to channels and respond to messages as they arrive.
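
For reference, a minimal sketch of this flow in ably-js (the channel name, event names, and the generateResponse helper are illustrative, not taken from the docs under review):

import * as Ably from 'ably';

const realtime = new Ably.Realtime({ key: process.env.ABLY_API_KEY, clientId: 'agent' });
const channel = realtime.channels.get('ai:session-123');

// The agent subscribes once and handles user prompts as they arrive
await channel.subscribe('user-input', async (message) => {
  const reply = await generateResponse(message.data.prompt); // placeholder for the model call

  // Publish the response back to the same channel, correlated to the original input
  await channel.publish('agent-response', {
    response: reply,
    inputMessageId: message.id
  });
});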
Contributor

Suggested change
This decoupled approach means agents don't need persistent connections to individual users. Instead, they subscribe to channels and respond to messages as they arrive.
This decoupled approach means agents don't need to manage persistent connections to individual users. Instead, they subscribe to channels and respond to messages as they arrive.
<Aside data-type="further-reading">
Learn more about channel-based communication in [channel-oriented sessions](/docs/ai-transport/features/sessions-identity#connection-oriented-vs-channel-oriented-sessions).
</Aside>

Contributor Author


This decoupled approach means agents don't need persistent connections to individual users. Instead, they subscribe to channels and respond to messages as they arrive.

## Subscribe to user input <a id="subscribe"/>
Contributor

I think this should appear after we've illustrated how the user input message was published.

Contributor Author

</Code>

<Aside data-type="note">
The agent can use the `message.clientId` to identify which user sent the prompt. This is a verified identity when using [identified clients](/docs/ai-transport/features/sessions-identity/identifying-users-and-agents).
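
As a sketch, that identity surfaces directly in the agent's subscription handler (the handler body is illustrative):

await channel.subscribe('user-input', (message) => {
  // message.clientId is the sender's verified identity when identified clients are used
  const userId = message.clientId;
  console.log(`Prompt from ${userId}: ${message.data.prompt}`);
});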
Contributor

Suggested change
The agent can use the `message.clientId` to identify which user sent the prompt. This is a verified identity when using [identified clients](/docs/ai-transport/features/sessions-identity/identifying-users-and-agents).
The agent can use the `message.clientId` to identify which user sent the prompt. This is a verified identity when using [identified clients](/docs/ai-transport/features/sessions-identity/identifying-users-and-agents#user-identity).

Contributor Author


## Identify the user <a id="identify-user"/>

Users must be [identified clients](/docs/auth/identified-clients) to send input to agents. This ensures the agent can trust the identity of message senders and prevents users from impersonating others.
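
A minimal sketch of establishing that identity server-side with ably-js token auth (the clientId value and the surrounding route are illustrative; it assumes your app has already authenticated the user):

import * as Ably from 'ably';

const rest = new Ably.Rest({ key: process.env.ABLY_API_KEY });

// Mint a token request bound to the authenticated user's clientId.
// A client that authenticates with this token cannot publish as anyone else.
const tokenRequest = await rest.auth.createTokenRequest({ clientId: 'user-123' });
// Return tokenRequest to the user's client, which passes it to authCallback/authUrl.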
Contributor

Suggested change
Users must be [identified clients](/docs/auth/identified-clients) to send input to agents. This ensures the agent can trust the identity of message senders and prevents users from impersonating others.
Use [identified clients](/docs/auth/identified-clients) to establish a verified identity for each user client that sends input to agents. This ensures the agent can trust the identity of message senders and prevents users from impersonating others.

Contributor Author

The agent can use the `message.clientId` to identify which user sent the prompt. This is a verified identity when using [identified clients](/docs/ai-transport/features/sessions-identity/identifying-users-and-agents).
</Aside>

## Identify the user <a id="identify-user"/>
Contributor

I think the simplest requirement is that the sender has a verified role: /docs/ai-transport/features/sessions-identity/identifying-users-and-agents#user-claims i.e. so that the agent can determine the message is from a "user" rather than e.g. another agent sharing the channel. If the agent needs to know the identity of a specific user that sent a message, then verified clients should be used.

Do you think we can describe both patterns in these docs?

Contributor Author

I've rewritten this section to use both patterns and explain why you would use each: a28b828


// Track sent prompts
async function sendPrompt(prompt) {
  const result = await channel.publish('user-input', { prompt });
Contributor

And here, this can just be `await channel.publish('user-input', prompt);`

Contributor Author

Would we not want the result if we're using the newer version of Ably-js?

Such as:

async function sendPrompt(prompt) {
  const result = await channel.publish('user-input', prompt);
  pendingPrompts.set(result.serial, { prompt });
  return result.serial;
}

// Track sent prompts
async function sendPrompt(prompt) {
  const result = await channel.publish('user-input', { prompt });
  pendingPrompts.set(result.id, { prompt, sentAt: Date.now() });
Contributor

I think we should avoid adding unrelated functionality to code examples, so let's just store the prompt and exclude the `sentAt`.
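
i.e. a sketch of the trimmed example, keeping the publish and result usage exactly as in the hunk above:

// Track sent prompts
async function sendPrompt(prompt) {
  const result = await channel.publish('user-input', { prompt });
  pendingPrompts.set(result.id, { prompt });
}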

</Code>

<Aside data-type="note">
When appending tokens, include the `extras` with all headers to preserve them on the message. If you omit `extras` from an append operation, any existing headers will be removed. See [token streaming](/docs/ai-transport/features/token-streaming/message-per-response) for more details.
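
For context, a sketch of how headers ride along in `extras` on a publish (field names are illustrative, `firstToken` and `inputMessageId` are assumed to be in scope, and the append call itself is covered on the token streaming page linked above):

await channel.publish({
  name: 'agent-response',
  data: firstToken,
  extras: {
    headers: { inputMessageId }
  }
});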
Contributor

Suggested change
When appending tokens, include the `extras` with all headers to preserve them on the message. If you omit `extras` from an append operation, any existing headers will be removed. See [token streaming](/docs/ai-transport/features/token-streaming/message-per-response) for more details.
When appending tokens, include the `extras` with all headers to preserve them on the message. If you omit `extras` from an append operation, any existing headers will be removed. See token streaming with the [message per response](/docs/ai-transport/features/token-streaming/message-per-response) pattern for more details.

activeRequests.set(inputMessageId, {
  userId,
  prompt: message.data.prompt,
  startedAt: Date.now()
Contributor

Same here, this is superfluous to requirements
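
i.e. the block above reduced to:

activeRequests.set(inputMessageId, {
  userId,
  prompt: message.data.prompt
});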

name: 'Messaging',
pages: [
  {
    name: 'User input',
Contributor

On reflection, can we call this "Accepting user input", which aligns with the style used in other sections?

Contributor Author

8885c59

I've also renamed the file so it follows the same naming for the URL.

Document how users send prompts to AI agents over Ably channels,
including identified clients, message correlation, and handling
concurrent prompts.
@GregHolmes GregHolmes force-pushed the AIT-106-Feature-documentation-accepting-user-input branch from 16f6594 to 95627c1 January 8, 2026 10:38
@GregHolmes GregHolmes temporarily deployed to ably-docs-ait-106-featu-wfvmlm January 8, 2026 10:38 Inactive
@GregHolmes GregHolmes temporarily deployed to ably-docs-ait-106-featu-wfvmlm January 8, 2026 11:12 Inactive
@GregHolmes GregHolmes temporarily deployed to ably-docs-ait-106-featu-wfvmlm January 8, 2026 11:14 Inactive
@GregHolmes GregHolmes temporarily deployed to ably-docs-ait-106-featu-wfvmlm January 8, 2026 11:19 Inactive
@GregHolmes GregHolmes temporarily deployed to ably-docs-ait-106-featu-wfvmlm January 8, 2026 11:32 Inactive
@GregHolmes GregHolmes temporarily deployed to ably-docs-ait-106-featu-wfvmlm January 8, 2026 13:42 Inactive
@GregHolmes GregHolmes temporarily deployed to ably-docs-ait-106-featu-wfvmlm January 8, 2026 14:07 Inactive
@GregHolmes
Contributor Author

Thank you @mschristensen, I've updated most of them. There are a couple of comments on the others, though.



4 participants