Thank you very much for your work — the data compression results are excellent.
May I ask whether you have measured the change in VRAM usage during rendering before and after compression?
In the paper:
We note that integrating codebooks and half-float type handling into the renderer would
enable the same reduction for a scene’s VRAM consumption as for its required disk storage.
May I ask if you have specific test results? I would like to know the concrete numbers. Thank you.
Hello and thanks a lot for the question.
The short answer is no, we don't have specific test results for VRAM consumption reduction.
The long answer is that runtime VRAM requirements are difficult to derive from the primitive count and attribute details. In our paper we focused on representation size, i.e. the stored file size on disk, which is why we don't report any numbers on VRAM consumption during training or inference/testing; no such measurements were acquired.

That said, the decrease in the number of points translates directly into less VRAM during rendering (proportionally, though not by the same percentage, since the runtime has some memory overhead). For the other two approaches, a more involved renderer implementation could reduce the runtime memory requirements as well (a quick-and-dirty implementation for the variable SH bands is included in the code). The amount of the actual and potential reduction, however, is difficult to calculate.
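To make the proportionality argument concrete, here is a rough back-of-the-envelope sketch of per-Gaussian attribute storage and how point count, SH degree, and float width each scale it. The attribute layout (position, scale, rotation quaternion, opacity, SH coefficients) follows the standard 3DGS representation, but the function names and the assumption that runtime memory scales with this raw attribute size are mine — the actual renderer adds overhead this ignores.

```python
# Back-of-the-envelope estimate of raw attribute memory for a
# 3DGS-style scene. This is a sketch, not the project's code:
# it counts only per-Gaussian attributes and ignores any
# renderer-side buffers and overhead.

def bytes_per_gaussian(sh_degree: int, float_bytes: int = 4) -> int:
    """Attribute bytes for one Gaussian at a given SH degree."""
    sh_coeffs = (sh_degree + 1) ** 2  # SH coefficients per color channel
    floats = (
        3                # position (x, y, z)
        + 3              # scale
        + 4              # rotation quaternion
        + 1              # opacity
        + 3 * sh_coeffs  # SH coefficients, 3 color channels
    )
    return floats * float_bytes

def scene_mib(num_points: int, sh_degree: int, float_bytes: int = 4) -> float:
    """Total attribute memory in MiB for the whole point set."""
    return num_points * bytes_per_gaussian(sh_degree, float_bytes) / 2**20

# Example: a 3M-point scene at full SH (degree 3) in fp32,
# the same scene in fp16, and a reduced-SH (degree 1) variant.
full_fp32 = scene_mib(3_000_000, 3, float_bytes=4)  # ~675 MiB
full_fp16 = scene_mib(3_000_000, 3, float_bytes=2)  # halved
low_sh = scene_mib(3_000_000, 1, float_bytes=4)
```

This illustrates why fewer points and half-float handling reduce memory proportionally at the attribute level, while the percentage seen in practice differs because of runtime overhead.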
Hope that helps