Can we degrade the performance of the xrender backend? #569
Replies: 5 comments
-
cc @tryone144
-
To be honest, why provide an xrender backend in the first place? Drop it, focus on GL, clean up the code as much as possible while providing one solid and clean backend.
-
I don't think we should drop the xrender backend completely, as I am sure there are some users who benefit from it. However, the xrender backend should not become a burden to the glx backend with (questionable) performance hacks.
@yshui Can you elaborate on how we could utilize the depth buffer? Aren't we currently drawing back-to-front for transparency?
-
@tryone144 It's possible to do a depth-only pass first to create the depth buffer. We can make it so pixels with alpha < 1 don't update the depth value. This, combined with early depth testing, should get us performance very close to what we have now. It's kind of like off-loading the region operations to the GPU (see the sketch below).
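For concreteness, here is a minimal sketch of what such a two-pass scheme could look like in OpenGL. `draw_window()`, `win_tex`, and the per-window depth are hypothetical stand-ins, not picom's actual API:

```c
/* Sketch only: assumes draw_window(w) submits one textured quad at a
 * per-window depth (topmost window = smallest z), and that shaders and
 * GL state are already set up. Not picom's real rendering path. */

/* Pass 1 fragment shader: discard translucent pixels so only fully
 * opaque pixels write depth and can occlude windows behind them. */
static const char *depth_frag =
    "#version 330 core\n"
    "uniform sampler2D win_tex;\n"
    "in vec2 uv;\n"
    "void main() {\n"
    "    if (texture(win_tex, uv).a < 1.0) discard;\n"
    "}\n";

void paint_all(struct window **wins, int nwins) {
	/* Pass 1: depth-only pre-pass. Color writes off, depth writes on. */
	glEnable(GL_DEPTH_TEST);
	glDepthFunc(GL_LESS);
	glDepthMask(GL_TRUE);
	glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
	glClear(GL_DEPTH_BUFFER_BIT);
	for (int i = 0; i < nwins; i++)   /* wins[0] is the topmost window */
		draw_window(wins[i]);         /* uses depth_frag */

	/* Pass 2: back-to-front with blending, as today. GL_LEQUAL lets the
	 * opaque fragment that wrote the depth value pass its own test;
	 * anything strictly behind an opaque pixel is rejected by early
	 * depth testing before its fragment shader ever runs. */
	glDepthFunc(GL_LEQUAL);
	glDepthMask(GL_FALSE);
	glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
	glEnable(GL_BLEND);
	glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
	for (int i = nwins - 1; i >= 0; i--)
		draw_window(wins[i]);         /* normal shading path */
}
```

Pass 2 can keep drawing back-to-front exactly as the current code does; the depth buffer from pass 1 just culls the pixels that today's region tracking would have clipped away on the CPU.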
-
How much slower? 50%? Less?
-
I am willing to make the xrender backend less performant if that means we could simplify the codebase.
One example I have right now: if we make use of the depth buffer in OpenGL, we could ditch the whole `reg_ignore` logic, potentially removing up to 1k LOC. But that would also make xrender slower, because there is no depth-testing capability there. Is this worth it? I'd like to hear what the users think. I hope there aren't many users who are forced to use xrender.