
Open SSE connections with pending streams block graceful shutdown #2673

Open
1 task done
sulami opened this issue Mar 24, 2024 · 2 comments
Comments

@sulami

sulami commented Mar 24, 2024

  • I have looked for existing issues (including closed) about this

Bug Report

Version

  • axum 0.7.4
  • axum-core 0.4.3

Platform

Darwin Continuitiy.local 23.3.0 Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000 arm64

(M1 Pro, macOS 14.3.1)

Description

In one of my projects, I have an SSE endpoint that listens on a broadcast channel and forwards items from it over SSE. When I try to gracefully shut down the application, the upstream channel is still pending, which prevents the graceful shutdown from completing until an event is actually sent. If no event is ever sent, the application hangs forever.

Naively, I would have assumed the connection would be closed when a shutdown event occurs, probably after sending a 204 No Content to stop the client from trying to reconnect. The code in question used to be a WebSocket, which does close the connection on shutdown.

I haven't looked at the relevant axum code yet, but I'd be happy to submit a change to that effect, if desired.

Reproduction

I've set up a minimal example for this. Cargo.toml:

[package]
name = "axum-sse-shutdown"
version = "0.1.0"
edition = "2021"

[dependencies]
axum = "0.7"
futures = "0.3"
tokio = { version = "1", features = ["full"] }

src/main.rs:

use std::convert::Infallible;

use axum::{
    response::{sse::Event, IntoResponse, Sse},
    routing::get,
    Router,
};
use tokio::{net::TcpListener, select, signal};

#[tokio::main]
async fn main() {
    let app = Router::new().route("/sse", get(sse_handler));
    let listener = TcpListener::bind("0.0.0.0:8080").await.unwrap();
    axum::serve(listener, app.into_make_service())
        .with_graceful_shutdown(shutdown_handler())
        .await
        .unwrap();
}

async fn sse_handler() -> impl IntoResponse {
    let stream = futures::stream::pending::<Result<Event, Infallible>>();
    Sse::new(stream)
}

async fn shutdown_handler() {
    let ctrl_c = async {
        signal::ctrl_c()
            .await
            .expect("failed to install Ctrl+C handler");
    };

    let terminate = async {
        signal::unix::signal(signal::unix::SignalKind::terminate())
            .unwrap()
            .recv()
            .await;
    };

    select! {
        _ = ctrl_c => {},
        _ = terminate => {},
    }
}

Alternatively, replace the stream with

    let stream = futures::stream::once(async {
        tokio::time::sleep(std::time::Duration::from_secs(10)).await;
        Ok::<_, Infallible>(Event::default())
    });

to observe the graceful shutdown hang until the timer expires and the future resolves.

cargo run the server and access the SSE endpoint at http://localhost:8080/sse, then try to C-c the server while a client is waiting for an event.

@mladedav
Collaborator

This is not SSE-specific; graceful shutdown waits for all connections to finish. I don't think axum itself can decide which connections to shut down, but it could perhaps expose some kind of ShutdownToken through an extractor.

I'm not sure how the WebSockets worked, but maybe they react differently to graceful shutdown than hyper's HTTP connections do.

@Threated
Contributor

My guess is that the difference comes from the fact that WebSockets use HTTP upgrades, while SSE just uses a streaming HTTP body.
