Replies: 5 comments
-
Hi Frink, the current master branch was recently rewritten to use only WebSockets and drop WebRTC entirely. Current master also uses the establishment of a WebSocket connection with certain HTTP query parameters as the signal to launch a (remote) application. WebRTC was dropped because WebSockets make it far easier to implement inter-host copy-paste operations: a data-source file descriptor can be represented by a WebSocket URL, which makes it easy for the data receiver to connect and receive the data. Dropping WebRTC also eliminates the need for an intermediate signaling server. Another advantage of WebSockets is that they offer far more predictable network behavior, which makes them easier to set up in a network environment than WebRTC. Your ideas are definitely in line with what I had in mind. If you or others are interested in exploring this further, you can always reach me on the #wayland IRC channel on FreeNode.
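To make the "connection as launch signal" idea concrete, here is a minimal sketch of building such a WebSocket URL. The endpoint path and parameter names (`/launch`, `appId`, `seat`) are invented for illustration, not Greenfield's actual wire format:

```typescript
// Illustrative sketch: encode an application-launch request in the
// WebSocket URL's query parameters, so that merely opening the socket
// tells the server which remote application to start.
function buildLaunchUrl(host: string, appId: string, seat: string): string {
  const url = new URL(`wss://${host}/launch`);
  url.searchParams.set("appId", appId);
  url.searchParams.set("seat", seat);
  return url.toString();
}

// In the browser one would then simply open the socket:
//   const ws = new WebSocket(buildLaunchUrl("compositor.example", "gimp", "seat0"));
```

The nice property is that no separate signaling step is needed: the connection itself carries the launch intent.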
-
I would really like to flesh out this idea more. I may have some time for a side project in a month or so. Here are several things that may need to be addressed for a complete ecosystem:
The big aha for me is realizing that if applications are running in the cloud from everywhere and users are connecting to the same computer from everywhere, the interface paradigm changes: many applications become multi-tenant, and within those seats, multi-cursor. The definition of an interface changes, and the definition of a server takes on new meaning as well. It's a brave new world. This stuff is really pushing the limits...
-
I've thought about all these points as well, so allow me to address them in order:
All these points (and much more) show the endless potential of this approach. 🌈 🦄 But let's first get the foundation rock solid. There are still about 122 TODOs & FIXMEs in the code, and fixing all of those will probably generate another 100 issues. 🚧 Most of the concrete for the foundation is poured. 🏗️ Let's pour the rest, let it harden and stabilize, and build the most awesome temple you can imagine. 🏛️
-
In point three I'm talking about the distributed nature of programs. What if...
When we realize that this could happen in both single-seat and multi-seat environments, there is a lot to think through. Giving each part of the system a way to collate data and reorder messages that arrive out of order becomes even more important than in simple multi-threaded programming. Now you've got threads running on multiple boxes with milliseconds of latency between them. That's an interesting set of problems to figure out... And that's before you add in things like DDoS prevention, which becomes even more of a concern in this distributed world.
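The reordering problem above is the classic sequence-number reorder buffer. Here is a minimal, illustrative sketch (not from any actual codebase) of buffering early arrivals and releasing messages strictly in order:

```typescript
// Illustrative sketch: a reorder buffer that delivers messages strictly
// in sequence-number order, holding back any that arrive early over a
// high-latency link between hosts.
class ReorderBuffer<T> {
  private next = 0;                      // next sequence number to deliver
  private pending = new Map<number, T>(); // early arrivals, keyed by sequence

  // Accept a message with its sequence number; return every message
  // that can now be delivered in order (possibly none).
  push(seq: number, msg: T): T[] {
    this.pending.set(seq, msg);
    const ready: T[] = [];
    while (this.pending.has(this.next)) {
      ready.push(this.pending.get(this.next)!);
      this.pending.delete(this.next);
      this.next++;
    }
    return ready;
  }
}
```

A real system would also need timeouts and bounded buffering (which is exactly where the DDoS concern creeps back in: an attacker can deliberately leave gaps to make buffers grow).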
I agree 100% - I'll try to carve out some time to help...
-
I think the first thing is to get this to the point of replacing the noVNC and XPRA use cases. //edit I have moved your installation issue to a new ticket #10 -Zubnix
-
I think you could use the WebSocket implementation in Facil.io to replace WebRTC if you wanted...
You mentioned thinking about this in your Fosdem 2019 presentation...
Obviously, the server piece of this protocol is the most important part in terms of controllable latency. In theory, WebSockets could be used asynchronously, outside the synchronous draw loop of the Wayland protocol. Essentially, you could send interface events back over one channel while continuously receiving new frames. This may further hide the latency incurred by the network.
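One way to get that two-channel behavior over a single socket is to tag each binary message with a one-byte type, so input events can be sent at any time without waiting on the frame stream. The tags and layout below are invented for the sketch, not Greenfield's actual wire format:

```typescript
// Illustrative message tagging: frames and input events share one
// WebSocket but are distinguished by a leading type byte, so neither
// direction ever blocks on the other.
const FRAME = 0;       // server -> client: pixel data
const INPUT_EVENT = 1; // client -> server: pointer/keyboard events

function encode(type: number, payload: Uint8Array): Uint8Array {
  const out = new Uint8Array(1 + payload.length);
  out[0] = type;       // one-byte tag up front
  out.set(payload, 1); // payload follows
  return out;
}

function decode(buf: Uint8Array): { type: number; payload: Uint8Array } {
  return { type: buf[0], payload: buf.subarray(1) };
}
```

On the receiving side, a `ws.onmessage` handler would switch on `type` and dispatch frames to the renderer and events to the compositor, independently of each other.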
The idea would be that you can essentially split applications between various hosts using the web worker scenario. You have the web worker load WASM so that you get a very fast drawing library, and then you can send draw calls instead of finished frames. Essentially what X11 does now...
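To illustrate the "draw calls instead of finished frames" idea: the server ships a compact list of rendering operations and the client replays them locally, much like X11 rendering requests. The opcodes and the stand-in rasterizer below are entirely made up for the sketch:

```typescript
// Illustrative only: a draw-call list the server could send in place of
// a rendered frame; the client (e.g. a WASM rasterizer in a web worker)
// replays it locally.
type DrawCall =
  | { op: "rect"; x: number; y: number; w: number; h: number }
  | { op: "text"; x: number; y: number; s: string };

// Stand-in for a real canvas/WASM rasterizer: describe what would be drawn.
function replay(calls: DrawCall[]): string[] {
  return calls.map(c =>
    c.op === "rect"
      ? `rect ${c.x},${c.y} ${c.w}x${c.h}`
      : `text "${c.s}" @ ${c.x},${c.y}`
  );
}
```

The trade-off is the classic one: draw calls are tiny compared to pixel buffers, but they push rendering work (and a rendering library) to the client side.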
Another thing that occurs to me is using the WebRTC protocol to deliver WASM apps to bare-metal servers to spin up more processing power. This would be truly cloud computing on both the back end and the front end, because a call to a web host would then spin up worker instances on raw hosts that live as long as they are needed. This would allow a modest terminal with a nice GPU to be paired with hefty backend processes doing all sorts of odd number crunching.
In this sort of scenario, the network latency between the "cloud threads" (distributed WASM packages in small containers) means their responses may or may not come back in a timely manner, depending on how the app is built...
Regardless, I like where your head is at here. The opportunities this technology provides are the closest I've seen to a true neural net using separate synapses to compute a result simultaneously.
I think it's wise to have both WebRTC and WebSockets to explore every protocol possible. In reality, we really need to be building some of this on UDP directly. But maybe that will be exposed in browsers in the near future. Who knows...
Kudos.