Description:
I have encountered an issue where group messages sent from an AsyncWebsocketConsumer's `receive` method are not processed in real time by the same consumer if it is self-subscribed to the group. Instead, the messages accumulate and are processed together only after the initial `receive` call completes. This behaviour is unexpected and not explained in the documentation.
Steps to Reproduce:
- Create an async consumer that subscribes to a group (a minimal reproduction is sketched after this list).
- In the `receive` method of the consumer, perform a long-running process.
- Within the long-running process, call `group_send` to send messages to the same group the consumer is subscribed to.
- Observe that the group messages are not processed until the initial message's long-running process is completed.
- Other consumers subscribed to the same group do receive the messages in real time.
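Here is a minimal sketch of the setup I mean. The group name ("demo"), the sleep standing in for the long-running process, and the event names are all illustrative, not taken from my actual code:

```python
import asyncio

from channels.generic.websocket import AsyncWebsocketConsumer


class DemoConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        # Self-subscribe: this consumer joins the same group it sends to.
        await self.channel_layer.group_add("demo", self.channel_name)
        await self.accept()

    async def disconnect(self, code):
        await self.channel_layer.group_discard("demo", self.channel_name)

    async def receive(self, text_data=None, bytes_data=None):
        # Long-running process: each group_send below reaches *other*
        # consumers immediately, but this consumer only handles them
        # after receive() returns.
        for i in range(5):
            await asyncio.sleep(2)  # stand-in for real work
            await self.channel_layer.group_send(
                "demo", {"type": "demo.message", "text": f"chunk {i}"}
            )

    async def demo_message(self, event):
        # Group event handler ("demo.message" dispatches here); on the
        # sending consumer this runs only after receive() has completed.
        await self.send(text_data=event["text"])
```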
Expected Behaviour:
When a group message is sent from an async consumer using `group_send`, it should be processed in real time, even when the receiving consumer is the same one that sent it.
Actual Behaviour:
The group messages sent from an async consumer are accumulated and processed together after the completion of the initial message's long-running process. This causes delays in message delivery and affects the real-time nature of the application. Other consumers in the same group receive the messages as they are emitted.
Environment:
- django-channels==4.0.0
- channels-redis==4.1.0
- Tested with both Daphne and Uvicorn
Additional Information:
I have noticed that if I pass the consumer instance to the async callback and call `send` instead of `group_send`, the messages are received correctly by the client, but other consumers subscribed to the group do not receive them. A sketch of this variant is below.
Expected Resolution:
If the current behaviour is intentional and part of the channels architecture, it should be documented to help developers understand and work around this limitation. However, if this behaviour is unintended, I believe it should be treated as a bug and addressed in a future release.
Use case:
Our use case is a chat application with an LLM (large language model) that streams the tokens of its responses. When a user sends a message, the consumer performs the long-running process of generating tokens with the LLM. As each token is generated, it needs to be sent to all users in the chat room in real time; currently everyone except the user who called the LLM receives the tokens one at a time, while the caller only gets them at the end. A sketch of this pattern follows.
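For concreteness, a hedged sketch of the pattern (generate_tokens is a hypothetical stand-in for the LLM's streaming API, and "room" is a placeholder group name):

```python
import asyncio

from channels.generic.websocket import AsyncWebsocketConsumer


async def generate_tokens(prompt):
    # Hypothetical stand-in for the LLM's streaming token API.
    for token in ["Hello", " ", "world"]:
        await asyncio.sleep(0.1)  # simulate generation latency
        yield token


class ChatConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        await self.channel_layer.group_add("room", self.channel_name)
        await self.accept()

    async def receive(self, text_data=None, bytes_data=None):
        # Token generation is the long-running process; each group_send
        # should reach every room member in real time as tokens arrive,
        # including the consumer of the user who sent the prompt.
        async for token in generate_tokens(text_data):
            await self.channel_layer.group_send(
                "room", {"type": "chat.token", "token": token}
            )

    async def chat_token(self, event):
        await self.send(text_data=event["token"])
```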
Please let me know if any further information or code samples are needed.