
Remote Visualisation/FrameBuffer Streaming #50

Open
Robadob opened this issue Mar 27, 2021 · 2 comments
Labels
enhancement New feature or request

Robadob commented Mar 27, 2021

This has been a blue-sky feature for sdl_exp for a while; recently Paul suggested we propose a small undergraduate project to work towards it.

Users without a GPU may be running their simulations on remote headless (Linux) machines (probably HPC). We also run tutorials/workshops with a cloud backend (e.g. controlled via a Jupyter notebook). It would be beneficial for these users if there was a way for them to visualise their models in real time.

There are three levels at which this could be completed, and it would make sense to investigate and apply them in the same order.

  1. Remote Video: Rendering a visualisation on a headless machine directly to a video file.
  2. Streaming Video: Streaming a visualisation so that it can be received in real-time on a different machine.
  3. Remote Visualisation: Streaming visualisation, with the ability to interact (e.g. move the camera).

Remote Video

Directly pass framebuffers to NVEnc to encode them using H.265(?) and write this video stream to a file.

It's likely an additional step will be required to mux the raw video stream into a suitable container. This could be as simple as running it through ffmpeg (LGPL)/mkvmerge (GPL) to produce an .mp4/.mkv. It's not clear whether we would need to mux the H.265 streams automatically; we might leave this stage as a form of debugging (VLC can reportedly play raw .h265 files).
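If we do automate muxing, shelling out to ffmpeg is probably the simplest route. A minimal sketch, assuming ffmpeg is on the PATH (the helper name is illustrative, not part of the visualiser API):

```cpp
#include <cstdlib>
#include <string>

// Illustrative helper: build the ffmpeg command that muxes a raw H.265
// elementary stream into an .mp4 container. -f hevc selects the raw HEVC
// demuxer; -c copy remuxes without re-encoding; -y overwrites the output.
std::string buildMuxCommand(const std::string &rawH265, const std::string &outMp4) {
    return "ffmpeg -y -f hevc -i " + rawH265 + " -c copy " + outMp4;
}

// Usage (after the encoder has flushed its stream to disk):
//     std::system(buildMuxCommand("sim.h265", "sim.mp4").c_str());
```

Because `-c copy` only rewrites container metadata, this step should be near-instant even for long captures.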

It's worth noting that I previously investigated use of NVEnc and found that its OpenGL integration is only supported on Linux, so this feature would only be available to Linux builds of the flamegpu2 visualiser, unless use of CUDA-OpenGL interop enabled use of the cross-platform CUDA-NVEnc support.

Streaming Video

The next challenge is to stream the video in real time; there are a couple of options for this.

  • Integrate with a dedicated RTMP (GPL), HLS, or WebRTC (1|2|3) streaming server library
  • Integrate with OBS project (GPL)?

There may be other options I haven't considered.

Initially, streaming to Twitch or YouTube should suffice as a proof of concept, but a final solution would be better served by a dedicated client or self-hosted webpage to automate the connection to some degree.

For the special case of streaming from HPC, this may provide some useful info: https://rse.shef.ac.uk/blog/2019-01-31-ssh-forwarding/ Due to the dependencies required, it would likely need to be packaged into a (Singularity) container.

Remote Visualisation

Providing control of the visualisation remotely requires the client to send data back to the flamegpu2 visualisation. The implementation will depend on how the video is being received. Supporting this is a fairly low priority, as FLAMEGPU2 already allows a user to specify the initial camera configuration via the visualisation config.
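As a sketch of what the return channel might carry, a small fixed-size camera message would be enough for basic control. Everything below (struct name, fields, byte layout) is illustrative, and the real transport would depend on how the video is delivered (e.g. a WebRTC data channel):

```cpp
#include <array>
#include <cstdint>
#include <cstring>

// Illustrative only: a minimal message a remote client might send back
// to drive the camera of a running visualisation.
struct CameraUpdate {
    float eye[3];     // camera position
    float target[3];  // look-at point
};

// Serialise to a byte buffer suitable for a datagram.
std::array<std::uint8_t, sizeof(CameraUpdate)> pack(const CameraUpdate &msg) {
    std::array<std::uint8_t, sizeof(CameraUpdate)> buf{};
    std::memcpy(buf.data(), &msg, sizeof msg);
    return buf;
}

// Deserialise on the visualiser side.
CameraUpdate unpack(const std::array<std::uint8_t, sizeof(CameraUpdate)> &buf) {
    CameraUpdate msg{};
    std::memcpy(&msg, buf.data(), sizeof msg);
    return msg;
}
```

A plain POD and memcpy keep the sketch dependency-free; a real implementation would want a version field and explicit endianness handling.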

@Robadob Robadob added the enhancement New feature or request label Mar 27, 2021

Robadob commented Apr 19, 2021

Relevant links Pete/Paul found related to headless rendering; it's starting to seem like this might not require as many changes to the code as initially presumed.

https://virtualgl.org/About/Introduction
carla-simulator/carla#225
https://www.researchgate.net/figure/Communication-between-CUDA-and-OpenGL-off-screen-rendering-contexts-The-spectral-volume_fig8_274900913
https://developer.nvidia.com/blog/egl-eye-opengl-visualization-without-x-server/
SDL added offscreen support in 2019, although it's disabled in builds by default: libsdl-org/SDL@6898537


ptheywood commented Apr 21, 2021

Can confirm that the offscreen video init target is not available in the libsdl2 package from Ubuntu 20.04, so we will need to build SDL at configure time (maybe only if offscreen support is desired via our CMake, and/or if SDL2 could not be found; not sure how long libsdl2 takes to build).

I.e. by inserting the following into the top of Visualiser::init():

    if (SDL_VideoInit("offscreen") < 0) {
        SDL_Log("Couldn't initialize the offscreen video driver: %s\n", SDL_GetError());
        return SDL_FALSE;
    }

which outputs:

INFO: Couldn't initialize the offscreen video driver: offscreen not available

Note: the snippet above is taken from the test defined in the SDL commit referenced above.
