Questions #3
Submitting a shape is faster than executing the individual commands every frame in the following cases:
In other words, if you can render a static frame of your scene fast enough, animating it shouldn't make a big difference. Culling (on your side) might also help but again it really depends on the use case.
Hope I answered your questions adequately :) If not, say so and I'll try to give more detailed answers. |
Hello again. I don't know if you ever used the code and/or are still using it, but here it goes... Regarding q7 of your list: I just committed some changes to the experimental branch. The path functionality has been moved out of the renderer into a separate class (path.cpp/.h). It might be helpful for generating paths for collision and mouse picking. Instead of executing the path commands on the renderer instance, create a vg::Path, execute the commands on it, and then get back the generated vertices. You can treat each sub-path as a concave polygon, which you can pass to your physics engine of choice or use for point-in-polygon tests for mouse picking. I plan on also moving the stroke and fill functions out of the renderer, to clean things up a bit more. The stroker might be more useful for generating collision geometry if you use large stroke widths. |
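The point-in-polygon test mentioned above could look like the sketch below. This is a generic even-odd ray-casting test, not code from vg::Path itself; it only assumes you can get a sub-path back as a flat array of (x, y) floats, and it works for concave polygons, which is what each sub-path can be.

```cpp
#include <cstddef>

// Even-odd (ray-casting) point-in-polygon test.
// verts is a flat array: x0, y0, x1, y1, ...; numVerts is the number of (x, y) pairs.
bool pointInPolygon(const float* verts, size_t numVerts, float px, float py)
{
    bool inside = false;
    for (size_t i = 0, j = numVerts - 1; i < numVerts; j = i++) {
        const float xi = verts[i * 2], yi = verts[i * 2 + 1];
        const float xj = verts[j * 2], yj = verts[j * 2 + 1];
        // Does the horizontal ray from (px, py) cross edge (j -> i)?
        const bool crosses = ((yi > py) != (yj > py))
            && (px < (xj - xi) * (py - yi) / (yj - yi) + xi);
        if (crosses)
            inside = !inside;
    }
    return inside;
}
```

For mouse picking you would run this once per sub-path against the cursor position; for physics you'd hand the same vertex array to your engine's polygon shape instead.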
Hello, I'm currently not using it; all I have done is export C functions to be used from C#, but I will probably do further work on it in the future. I feel like these changes are heading in the right direction. |
If I manage to separate the stroke/fill functions from the renderer, it'll probably be close to what you describe. The only dependency I'll probably keep is bx because I like the AllocatorI interface (currently using it in my code) and the idea of being able to easily compile the code w/o linking to the CRT (haven't tried it but I think it's an option for most of the code I'm using). |
Yeah, bx is a cool lib. I think it's header-only, so it's not a real dependency; you could add it as a submodule. |
You could get some inspiration from the Nuklear and ImGui libraries, both dependency-free and without a renderer. E.g. a single interleaved vertex struct:

```cpp
struct vg_vertex {
    float position[3];
    uchar col[4];
    float uv[2];
};
```
|
The problem with such a struct is that:
I think it might be better to keep the current design (separate pos/col/uv streams) because you can set all the colors and UVs faster in some cases. Either way, I hope to find the time to make the changes. One other thing I'm considering is completely removing the IRenderer interface. This means you won't be able to switch between the current renderer and NanoVG. I haven't thought about it much, but it feels like having such an interface limits the things I can try out. |
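The "set all colors faster" point can be illustrated with a toy comparison of the two layouts. This is a sketch, not the library's actual buffers: with a separate color stream, filling every vertex with one color is a tight loop over contiguous memory, whereas the interleaved struct forces a strided write that drags position/uv bytes through the cache.

```cpp
#include <cstdint>
#include <vector>

// Separate streams (roughly what the current design resembles):
// filling one attribute touches only that attribute's contiguous array.
struct MeshSoA {
    std::vector<float>    pos;   // x, y pairs
    std::vector<uint32_t> color; // one packed RGBA value per vertex
};

void setAllColorsSoA(MeshSoA& m, uint32_t rgba)
{
    // Contiguous fill; trivially vectorizable.
    for (uint32_t& c : m.color)
        c = rgba;
}

// Interleaved layout: the same operation becomes a strided write.
struct VertexAoS {
    float    pos[2];
    uint32_t color;
    float    uv[2];
};

void setAllColorsAoS(std::vector<VertexAoS>& verts, uint32_t rgba)
{
    for (VertexAoS& v : verts)
        v.color = rgba;
}
```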
Hi, reading this, I realized another possible advantage of this lib over nanovg might be that the stencil buffer is not used? Does that mean it should be much easier to implement clipping to an arbitrary shape, since we could use the stencil for that? I struggled to find a way to do rounded-rect clipping in nanovg and never found a proper solution. Also, I second removing the interface. Simple code is the best code. If someone wants to compare nanovg and this library, they can write their own interface; I don't think it belongs in the library itself (maybe in a test wrapper?). |
Hello and happy new year!
It should be possible, yes. It's on my list of things to try at some point. I don't really have a use for it at the moment, which is why I haven't done it yet. The only reason I thought about it is that I wanted to replicate fastuidraw's clipIn/clipOut functionality and its painter-cell demo (a rotated grid of 100 individually rotated rectangular cells, each one clipping an image and a string of text). I know it won't be as fast as fastuidraw (~2ms, 1 draw call on my machine), but I'd like to get an idea of how much slower it'll be (see this for a comparison with other libs).
If I remove the interface, it would be because I want to try out things not supported by NanoVG. This means the comparison won't be possible even with a custom wrapper. In the meantime, nothing stops you from using BGFXVGRenderer directly instead of the interface. |
@hugoam I uploaded a proof of concept (i.e. haven't tested it thoroughly) to the experimental branch. It's currently limited to 254 different clip regions, because I cannot clear the stencil buffer mid-frame (0 = the initial stencil value, +1 stencil value for each clip region); more than that requires a separate view. Instead of following the canvas API, I decided to use Begin/End pairs for specifying clip regions. This means you can also clip something to the stroke of another shape, or transform the clip paths independently. Example:
|
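The bookkeeping behind the 254-region limit described above can be sketched on the CPU side. This is a toy model of the scheme, not the actual implementation: an 8-bit stencil buffer is cleared to 0 once per frame, each clip region burns one new stencil reference value, and once the values run out a new bgfx view would be needed.

```cpp
#include <cstdint>

// Toy model of per-frame clip-region bookkeeping (assumptions: 8-bit
// stencil cleared to 0 per frame, +1 ref value per region, cap of 254
// regions per view as stated above).
struct ClipAllocator {
    uint8_t nextRef = 1; // 0 is the cleared / "no clip" value
    int     used    = 0;

    // Returns the stencil ref for a new clip region, or -1 if this
    // frame ran out of stencil values (a separate view is required).
    int beginClip()
    {
        if (used >= 254)
            return -1;
        ++used;
        return nextRef++;
    }
};
```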
So I almost completely (and successfully) migrated to your renderer; it was a breeze since the two APIs are similar. A few issues I encountered:
Apart from that, props for the cool library. I didn't get to measuring performance or trying the experimental clipping functionality yet, but I'll get back to you when I fix the remaining issues I have. |
First of all thanks for trying out the code and for the feedback. Having said that, I hope you kept your nanovg code around in case you end up deciding this lib doesn't work for you :)
After adding the clipping functionality, I'm seriously thinking of dropping the IRenderer interface and just keeping BGFXVGRenderer. I haven't touched NanoVGRenderer in a while and I'm pretty sure it won't work except for basic stuff (no shapes/caching, no scissor clipping, etc.). The problem is that I'm currently reimplementing my UI, and until I manage to fully migrate my code to it, I'm not going to touch vg-renderer (except for fixing bugs, of course). |
I solved the second-to-last issue I had, which is that gradients and solid-color fills were not rendered in the proper order (that would be submission order). Bgfx has a sequential mode to solve that issue, so you need to add |
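The call being referred to is presumably bgfx's sequential view mode, which makes draws within a view execute in submission order instead of being sorted by state/depth keys. In current bgfx that would be something like the following (the `viewId` variable is whatever view vg-renderer submits to):

```cpp
// Draws in this view execute in submission order rather than being
// sorted, so overlapping fills and gradients composite correctly.
bgfx::setViewMode(viewId, bgfx::ViewMode::Sequential);
```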
Now I realized the last remaining issue I had was actually an error in my code, so that's a wrap: the migration is complete! To respond to your previous answers: Now that's about it. I just have another possible feature request, but I might implement it myself and send you a pull request. It would allow doing something like a hue selector or a color wheel; the former, with current gradients, would require concatenating many gradients, and the latter is almost impossible: |
👍
Don't worry, it won't disappear :) I'm more concerned that you will end up needing something nanovg already supports that is hard to implement in vg-renderer. If you don't see any perf difference between the two and you cannot do what you want, it's logical to return to nanovg asap. That's why I mentioned it.
Yes, you need sequential mode, the same way you need it in nanovg. I'm setting the mode outside of the renderer, but since this is bgfx-specific I might as well do it in the constructor.
Actually, in my case, the perf issue was the large number of uniforms uploaded. If you have a single program for all cases, you have to upload the uniforms for each draw call. Since you cannot reduce the number of draw calls, you have to upload the uniforms of the program/shader path which uses the most of them. Solid-color paths need far fewer uniforms than gradients, so you end up uploading the same info over and over even though it's not used by the selected shader path. E.g. see the numbers from this: https://twitter.com/jdryg/status/834491103680331776
That should be relatively easy to implement as I described above. Hope to be able to try that sometime today and get back to you.
The way the nanovg sample draws the color "circle" is by using 6 arcs to make a circle, with each arc using 1 linear gradient. I haven't implemented arcs yet, but for a linear hue selector you can try that out with boxes. The color circle is indeed hard (impossible?) to implement with gradients. You can try baking the gradient into a texture and using that to fill a circle; it should work. A couple of things to keep in mind regarding performance (some of them are obvious and some are already described in the readme).
(*) Regarding scissor rects: I tried passing the scissor rect to the fragment shader and discarding pixels based on that, to be able to merge more draw calls together. It was actually worse in my case, so I haven't uploaded the code. I might reimplement the idea in the future, with a compile-time flag to turn it on/off on demand. If this is the case for you, say so and I'll try to implement it sooner. PS. Don't use vg::String! It'll probably be removed in the near future. |
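Baking the hue gradient into a texture, as suggested above, is straightforward on the CPU. A minimal sketch (plain HSV-to-RGBA8 at full saturation/value, written into a 1D pixel row that could then be uploaded as a texture and used to fill the selector or wheel):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Convert a hue in [0, 1) (full saturation and value) to packed
// little-endian RGBA8 (R in the low byte, A = 0xff in the high byte).
uint32_t hueToRgba(float hue)
{
    const float h = hue * 6.0f;
    const int   i = (int)h % 6;
    const float f = h - std::floor(h);
    const uint8_t q = (uint8_t)((1.0f - f) * 255.0f + 0.5f);
    const uint8_t t = (uint8_t)(f * 255.0f + 0.5f);
    uint8_t r = 0, g = 0, b = 0;
    switch (i) {
    case 0:  r = 255; g = t;   break; // red -> yellow
    case 1:  r = q;   g = 255; break; // yellow -> green
    case 2:  g = 255; b = t;   break; // green -> cyan
    case 3:  g = q;   b = 255; break; // cyan -> blue
    case 4:  r = t;   b = 255; break; // blue -> magenta
    default: r = 255; b = q;   break; // magenta -> red
    }
    return (uint32_t)r | ((uint32_t)g << 8) | ((uint32_t)b << 16) | 0xff000000u;
}

// Bake a width-pixel hue strip, ready to upload as an RGBA8 texture.
std::vector<uint32_t> bakeHueStrip(int width)
{
    std::vector<uint32_t> pixels(width);
    for (int x = 0; x < width; ++x)
        pixels[x] = hueToRgba((float)x / (float)width);
    return pixels;
}
```

For the circular wheel the same lookup works with the texture sampled by angle instead of by x.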
I just committed the changes for a stroke with a gradient. It doesn't support AA because currently the gradient shader doesn't take per-vertex colors into account. Actually, none of the gradient/image functions (fillConvex/fillConcave) take AA into account. I should fix those at some point; I just didn't have a use for them, which is why I postponed it for so long. I guess it's time to fix that and also add strokes with images, and concave paths with images/gradients, for completeness. It will take a bit more time to fix them all, so if you have any particular need, I'll be glad to give it priority. |
So actually I'm afraid I must reopen the IntersectScissor topic. |
Wouldn't a getScissor() function be better for this? This way you can read the current scissor rect at any time you want and perform whatever operation you want with it. What do you think? Also, it might be more helpful if we move the discussion about specific features into their own issues to keep track of them more easily. |
I actually tried implementing a getScissor() function, but then it got trickier: the scissor is actually stored in global coordinates, whereas I need the current/local one. getScissor() would then have to transform it back to local space, and for that you need the inverse transform... So it ended up trickier than implementing a checkIntersect function like so:

```cpp
bool checkIntersectScissor(Context* ctx, float x, float y, float w, float h)
{
    State* state = getState(ctx);
    const float* stateTransform = state->m_TransformMtx;
    const float* scissorRect = state->m_ScissorRect;

    // A zero-sized scissor rect means "no scissor"; everything passes.
    if (scissorRect[2] == 0.0f || scissorRect[3] == 0.0f)
        return true;

    // Transform the rect into the same (global) space as the scissor rect.
    float pos[2], size[2];
    vgutil::transformPos2D(x, y, stateTransform, &pos[0]);
    vgutil::transformVec2D(w, h, stateTransform, &size[0]);

    // Standard AABB overlap test.
    return !(scissorRect[0] > pos[0] + size[0]
        || scissorRect[1] > pos[1] + size[1]
        || scissorRect[0] + scissorRect[2] < pos[0]
        || scissorRect[1] + scissorRect[3] < pos[1]);
}
```
|
I just added both getScissor() and getTransform() before reading your comment. Can you implement the function you posted using those two in your code for now? Keep in mind that when you are using command lists, all transformations and scissor rects are just recorded into the command list's buffer; no command is applied until you submit the command list for rendering. So in order to perform those tests in your UI while using command lists, you have to keep track of the state hierarchy on your own. Please be 100% sure you actually need such a function by implementing it on your side using getScissor()/getTransform(); if the overhead of calling them starts to affect performance, I will add it to the library. |
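The user-side check could then be built from the two getters. Below is a self-contained sketch of just the math, under the assumption that getTransform() returns a nanovg-style 2x3 affine matrix t[6] = {a, b, c, d, tx, ty} and getScissor() returns a global-space {x, y, w, h} rect; the actual return shapes of those functions are not confirmed here.

```cpp
#include <algorithm>

// Apply a 2x3 affine matrix {a, b, c, d, tx, ty} to a point.
void transformPoint(const float* t, float x, float y, float* out)
{
    out[0] = x * t[0] + y * t[2] + t[4];
    out[1] = x * t[1] + y * t[3] + t[5];
}

// Transform the local rect into global space and AABB-test it against
// the scissor rect. A zero-sized scissor means "no scissor".
bool intersectsScissor(const float* t, const float* scissor,
                       float x, float y, float w, float h)
{
    if (scissor[2] == 0.0f || scissor[3] == 0.0f)
        return true;

    float p0[2], p1[2];
    transformPoint(t, x, y, p0);
    transformPoint(t, x + w, y + h, p1);
    const float minx = std::min(p0[0], p1[0]), maxx = std::max(p0[0], p1[0]);
    const float miny = std::min(p0[1], p1[1]), maxy = std::max(p0[1], p1[1]);

    return !(scissor[0] > maxx
          || scissor[1] > maxy
          || scissor[0] + scissor[2] < minx
          || scissor[1] + scissor[3] < miny);
}
```

Like the snippet posted earlier in the thread, this transforms only two corners, so it is exact for translation/scale but conservative-only under rotation; a rotated rect would need all four corners transformed.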
I finally got to try the gradient stroke you implemented weeks ago, so I'm finally getting back to you. It's perfect! Those are some really nice-looking gradients! They actually look better than the NanoVG ones (almost as if they are interpolated in sRGB space). EDIT: forget it, I was looking at gradients between different colors, hence why I thought the NanoVG one was worse. Never mind! |
Since I can see this is where the clip feature was born, and it is not closed, I will comment on my changes here. I'm not doing a pull request since I believe the changes are too special-cased for my needs, but I wanted to share what I did anyway. Here is the commit: carloscm@09795c7

I needed to draw this: basically one or more circles whose areas fuse together when touching, with alpha blending and without overdrawing them. With a bit of out-of-the-box thinking I managed to do the fill by drawing the circles as the clipping shapes in In mode, and then drawing a fullscreen translucent quad.

The stroke part was impossible. The stencil buffer from the previous part was perfect, but issuing a new draw for the stroked circles didn't work, since I needed Out mode for them. So I had to redraw the clip shapes just to change the mode to Out, and if I did that, the filled areas from the previous call disappeared. It appears it was somehow drawing out of order with respect to the moment its stencil pass was valid, or maybe the stencil write does not happen when the stencil buffer already has a value. I don't know about those low-level details.

My change sidesteps both possibilities by adding a "hold" version of the clip modes. This simply means the stencil value is not incremented after the stencil draw commands are done. The stencil contents are shared with the next draw by virtue of it using the same stencil value, and, most importantly, the next draw will also stencil-test against that same value.

This is still a bad design: even though the stencil and its value are shared, some kind of clip shape still has to be submitted, otherwise vg does not enable the stencil test. In my user code I am just doing offscreen draws to force it. I believe it would be a better idea to expose a limited stencil API like BeginStencil/EndStencil and then allow some kind of stencil-test mode flag in the calls that can issue draw commands, like stroke/fill. Maybe even user control of the stencil value for more flexibility; there's space in the u32 flags.

Anyway, just my brain dump on this. vg-renderer rocks and I am a very happy user, thank you so much for it! |
Thanks for using vg-renderer and sorry for the delayed reply. I could change the BeginClip()/EndClip() API to return some kind of handle (similar to images and gradients). You'll then be able to use this handle with the appropriate clipping mode for each draw command. E.g.
Issues:
|
That would allow for great flexibility indeed; it would be a more useful API than the existing one or my hack. The concern I can see is that there is only one stencil buffer, but the user can make perfectly API-legal calls to beginClip/endClip that will overwrite stencil values from previous calls. Due to the name and usage, users may think they are dealing with a geometry-level clipping API and not a stencil-based one; that's why I proposed beginStencil/endStencil earlier. But it all boils down to documentation and examples in the end.
|
I think the availability of stencil values depends on the framebuffers you are rendering to. The current layer implementation (layers branch) accepts a different bgfx viewID per layer, and vg-renderer doesn't know whether two different viewIDs refer to the same framebuffer or not. E.g. if one viewID draws to the window back buffer and another to an offscreen buffer with a depth-stencil texture attached, then the available stencil values are twice as many, BUT they are not shared between the two buffers/layers. If, on the other hand, you have 2 layers with 2 different viewIDs, both drawing to the window back buffer, then the available stencil values are only those of the back buffer's stencil range, and they are shared between the layers. There are ways to overcome the limited stencil range by clearing the stencil mid-frame (e.g. drawing a fullscreen quad while replacing existing stencil values with 0), but that complicates things a bit more.

Also, regarding no. 1, it's a bit more complicated than I initially thought, because there's only 1 stencil ref value, which must be used both for testing and for calculating the next value. The next stencil value can only be 1 greater (inc) or 1 less (dec) than the ref value. So if you are clipping a clip mask, you have to use the stencil value of the clip mask as the ref to test against, which in turn means you have to use inc/dec as the stencil op to generate the new clipped clip mask.

Hope the above makes sense :) English is not my native language. Either way, some corners must be cut, because a generic clipping API with unlimited clipping masks and layers is too complicated. I have to think about it a bit more. |
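The ref-value constraint described above can be modeled in a few lines. This is a toy CPU model of one stencil texel, assuming GL/D3D-style semantics: the test compares the stored value against a single ref, and the pass op INCR increments the stored value, so clipping a clip mask of value N can only produce N+1.

```cpp
#include <cstdint>

// One stencil texel while clipping a clip mask:
// func = EQUAL (test stored against ref), op = INCR on pass, KEEP on fail.
// Only texels inside the existing mask (stored == ref) advance to the
// next mask value, which is necessarily ref + 1.
uint8_t clipIntoMask(uint8_t stored, uint8_t ref)
{
    const bool insideMask = (stored == ref);
    return insideMask ? (uint8_t)(stored + 1) : stored;
}
```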
I wasn't aware of the extra complexity in the layers branch. It then makes sense to pick a simpler design that fits it well, since it's the current development branch: no nesting, attached to whatever is the closest representation of the underlying stencil (I guess that would be the view ID), etc.
I have a couple of questions; sorry if they seem stupid sometimes, but maybe you can clear my mind:
- What type of matrix should be supplied to ApplyTransform (3x2, 4x4)?
- Is the supplied matrix global to the IRenderer, or relative to the current state in the tree (PushState/PopState methods)?
- What are the pros/cons of using Shape instances? Are they faster? Are they suited only for static shapes, or can they be animated?
- Is this library suited for complex animations (dynamic shapes + dynamic colors)?
- Is it possible to rasterize a Shape into a texture?
- Are the shapes' vertices stored in the vertex buffer, or are they just quads, with the shaders responsible for drawing the outline?
- What is the best approach to implementing colliders/mouse input detection?
Thank you