
Lockup at client when awaiting JSON body #26

Open · japhb opened this issue Jan 10, 2020 · 2 comments
japhb commented Jan 10, 2020

I have been converting the cro-websocket test files into performance tests of the various pieces. This had been working fine with most of the parser/serializer/handler bits, but when I did a test of the whole client/server pair, I ran into a lockup at the client.

See https://gist.github.com/japhb/c07f5699dbb6d2e45a392865b52abe58 for the test file. (The perf test version of this file just comments out the say calls and raises $repeat to something in the ~1000 range.) It's pretty straightforward; I was comparing the performance of round trips of text and JSON bodies, using a server side taken nearly as-is from t/http-router-websocket.t. The client should be pretty uncontroversial, but though it seems to work fine for plain text round trips, it locks up while awaiting the body-text on the client side for JSON round trips.

Any ideas what's going on here? The only obvious difference I can see between the plain text and JSON cases with CRO_TRACE=1 is that the plain text response is sent unfragmented, while the JSON response message is sent as a fragment frame containing the entirety of the JSON data followed by an empty continuation frame. But if that's really the problem, I'm surprised cro-websocket passes its own test suite.


jnthn commented Jan 10, 2020

Messages sent on a Supply are processed one at a time; the sender only gets control back once everything downstream has processed the message. This provides a backpressure mechanism. I think the await on a fragmented message induces a circular wait: the frame parser doesn't get control back because the message it emitted is not yet fully processed, but the await can't make progress until the frame parser delivers the continuation frame. Probably we need to introduce a little more concurrency somewhere to avoid this.
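The circular wait described here can be sketched with plain Raku Supplies. This is a hypothetical minimal reproduction, not Cro's actual internals; the frame names are invented:

```raku
# A Supplier's emit runs its taps synchronously, so emit does not
# return until the tap block does. If the tap awaits a Promise that
# only a *later* emission would keep, the emitter deadlocks.
my $frames     = Supplier.new;
my $body-ready = Promise.new;

$frames.Supply.tap: -> $frame {
    if $frame eq 'fragment' {
        await $body-ready;        # blocks this tap (and the emitter)
    }
    else {
        $body-ready.keep;         # would unblock it, but never runs
    }
}

$frames.emit('fragment');         # never returns: downstream is stuck
$frames.emit('continuation');     # never reached, so the wait is circular
```

Here the second emit plays the role of the continuation frame: it is the only thing that could keep the Promise, but it can never run because the first emit never returns.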


japhb commented Jan 12, 2020

jnthn: Where are you thinking about adding the additional concurrency? I can imagine making changes anywhere from the most specific (hacking additional concurrency into the frame parser / message parser boundary) to the most general (making sure that type of circular wait can't happen in Rakudo), with lots of points in between (such as allowing Cro pipelines to declare that they do fan-in or fan-out, and making sure the pipeline implementation accounts for that).
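One shape the most specific option might take is buffering frames through a Channel so the frame parser regains control immediately after emitting. This is a sketch under assumptions, not a patch to Cro itself; `process-message` is a hypothetical downstream handler:

```raku
my $frames = Supplier.new;
my $buffer = $frames.Supply.Channel;   # emit now just enqueues a frame

# Drain the buffer on another thread; an await in the handler no
# longer holds up the frame parser, breaking the circular wait.
my $worker = start react whenever $buffer -> $frame {
    process-message($frame);           # hypothetical handler; may await
}
```

The trade-off is that the Channel is unbounded, so this weakens the backpressure guarantee that processing messages one at a time on the Supply was providing.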

japhb added a commit to japhb/cro-websocket that referenced this issue Jan 24, 2020
Mostly taken from examples in t/, most of these tests measure the performance
of one of the key Cro::WebSocket modules, but there are two exceptions:

  * masking-perf.p6 tests *only* the masking operation in isolation.
  * router-perf.p6 tests pretty much the whole stack.

The tests all have a default number of iterations, but this can be overridden
on the command line; they all accept a single positional argument.  Note that
the default iteration count for all tests except masking-perf.p6 assumes that
the first optimization (faster masking) has been applied; otherwise the frame
modules will run two orders of magnitude slower.

Finally, note that router-perf.p6 exposes a deadlock in Cro::WebSocket, which
is why it defaults to one iteration and has debug prints to show where the
lockup occurs.  See croservices#26 for more details.