
Multiplexing with nginx causes model observation to fail silently #203

Open
ihasdapie opened this issue Jun 14, 2024 · 6 comments

ihasdapie commented Jun 14, 2024

Describe the bug
This is a really bizarre issue, and I don't really know where to start with it. I don't have a minimal reproducible example yet, but I can try to create one later.

Basically, I'm trying to multiplex my Django service and another service serving the frontend behind the same domain (app.example.com) via the X-Server-Select header (following https://sites.psu.edu/jasonheffner/2015/06/19/nginx-use-different-backend-based-on-http-header/) to avoid having to preflight HTTP requests.

The problem is, when HTTP requests are made to the backend via app. (routed via X-Server-Select) while the websocket is routed via api. (no X-Server-Select), model observation specifically breaks. Consumers written with AsyncAPIConsumer work flawlessly, and so do the initial handshake @action calls for model observation (connection and authentication work correctly, and we see responses from the backend over ws). However, model changes, whether triggered by ssh-ing into a live server and updating the model via the Django shell, or by HTTP requests to the backend, do not produce any observation events.

And it works flawlessly if we don't have the X-Server-Select routing! The only changes are in the nginx configuration.

I'm wondering if there is some hostname configuration or similar assumption that this breaks, or if anyone has suggestions on where to look for a solution.


# vim: ft=nginx

server { # backend
    listen 80;
    listen [::]:80;
    server_name api.example.com;
    set $upstreamhttp http://api:8000;
    set $upstreamws http://api:5000;

    location /ws/ {
        proxy_pass $upstreamws;
        proxy_http_version  1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $http_host;
    }
    location / {
        proxy_pass  $upstreamhttp;
        proxy_set_header    Host                $http_host;
        proxy_set_header    X-Real-IP           $remote_addr;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto   $http_x_forwarded_proto;
     }
}

map $http_x_server_select $frontendupstreamhttp {
    default http://frontend:80;
    "api" http://api:8000;
}

server { # frontend
    listen 80;
    listen [::]:80;
    server_name app.example.com;

    location / {
        proxy_pass $frontendupstreamhttp;
        proxy_set_header    Host                $http_host;
        proxy_set_header    X-Real-IP           $remote_addr;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto   $http_x_forwarded_proto;
    }
}

I've also tried putting ws under app. as well, but hit the same issue.

server { # frontend
    listen 80;
    listen [::]:80;
    server_name app.example.com;

    set $upstreamws http://web:5000;
    location /ws/ {
        proxy_pass $upstreamws;
        proxy_http_version  1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $http_host;
    }


    location / {
        proxy_pass $frontendupstreamhttp;
        proxy_set_header    Host                $http_host;
        proxy_set_header    X-Real-IP           $remote_addr;
        proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Proto   $http_x_forwarded_proto;
    }
}

With logging I notice that the model observer never gets invoked, nor does groups_for_signal. Which is really odd, since all the auth and setup functions work fine. If I run the frontend pointing at api. for requests, then the ws events work perfectly fine. Also, when pointing at app. for HTTP requests, I have verified that the requests are going through and the models are being correctly updated.

  • OS: Debian
  • Version: bullseye

(running in a python:3.11.5-bullseye container)

Running ASGI via daphne and WSGI via gunicorn.

Any ideas or help would be greatly appreciated. Thanks!

@ihasdapie (Author)

Update: on further testing, sending requests via api.example.com does trigger the model observer properly, but sending requests via app.example.com does not, while the ws is connected via app.example.com. Why would a different host prevent Django signals from working as intended?

@hishnash (Member)

Hi @ihasdapie

What channel layer are you using?

So that the Django signals get registered, you need to ensure you import the consumers, even for the WSGI server and your ssh session.

The observation uses Django signal hooks, so the files need to be evaluated (imported) so that these hooks are registered in every instance, not just the websocket instance.
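
For example, a minimal sketch of forcing that import from an AppConfig.ready() (the app name and module path here are placeholders, not from this project):

    # myapp/apps.py -- sketch only; "myapp" and the consumers module are placeholder names
    from django.apps import AppConfig

    class MyAppConfig(AppConfig):
        name = "myapp"

        def ready(self):
            # Importing the consumers module evaluates the observer decorators,
            # which connect the Django signal handlers. ready() runs in every
            # process that sets up Django (ASGI, WSGI, shell), so the signals
            # fire no matter which process saves the model.
            from . import consumers  # noqa: F401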

@ihasdapie (Author) commented Jun 18, 2024

Hey, thanks for the reply!

I'm using the Redis channel layer. I've imported all the consumers in the app config; should this suffice to get them registered for both WSGI and ASGI? It doesn't appear to work, and I'm seeing the same behaviour as I mentioned previously between the api and app variants of the endpoint. I've logged the exact proxied requests in nginx, and they are identical by the time they reach the Django service; I've gone as far as to spoof the Host header from app. to api..

For reference, I'm running WSGI via gunicorn and ASGI via a gunicorn uvicorn worker:

	gunicorn project.asgi:application --workers 4 --threads 8 --log-level info --bind 0.0.0.0:5000 -c ./gunicorn_asgi.config.py &
	gunicorn project.wsgi --bind 0.0.0.0:8000 --timeout 6000 --workers=2 --threads=4 --worker-class gevent -c ./gunicorn_wsgi.config.py &
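
(For context, the ASGI config mostly just selects the uvicorn worker class; a rough sketch of what gunicorn_asgi.config.py amounts to, not the literal file:)

    # gunicorn_asgi.config.py -- rough sketch, not the literal file from the project
    # Select uvicorn's gunicorn worker so gunicorn can serve the ASGI app.
    worker_class = "uvicorn.workers.UvicornWorker"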

@ihasdapie (Author)

Switching to using only the ASGI worker seems to have resolved this problem. However, I'd still like to use the WSGI worker for handling everything but websockets. Not sure if it is out of scope, but is it possible to configure this? It appears that even if the consumers are loaded into WSGI, those events do not make it to django-channels.

@hishnash (Member)

It should be possible to configure this.

To confirm: are the consumers imported within gunicorn_wsgi.config.py, not just gunicorn_asgi.config.py, and when using WSGI are you configuring the channel layer the same way?

Are you able to connect to your Redis dashboard to check whether messages are being sent to Redis?
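
A quick way to test this from the WSGI side (a minimal sketch; "test_group" is just a throwaway name) is to push a message through the channel layer from a Django shell and watch Redis:

    # Run inside `python manage.py shell` in the WSGI container.
    from asgiref.sync import async_to_sync
    from channels.layers import get_channel_layer

    layer = get_channel_layer()
    # With a working Redis layer this produces visible traffic in Redis,
    # e.g. when watching with `redis-cli MONITOR`.
    async_to_sync(layer.group_send)("test_group", {"type": "test.message"})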

@ihasdapie (Author)

I am configuring the channel layer the same way for both ASGI and WSGI. No messages appear to be sent to Redis, however.
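
For reference, it's a single shared settings entry, along the lines of this sketch (the Redis hostname and port here are stand-ins, not my exact values):

    # settings.py -- sketch; the "redis" hostname and port are stand-ins
    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "channels_redis.core.RedisChannelLayer",
            "CONFIG": {
                "hosts": [("redis", 6379)],
            },
        },
    }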

In the meantime, I've worked around it by using the uvicorn worker to serve everything.
