Unable to parallelize pixmap loading #67
Comments
Did you run into the same problem I did in b94d095, that … |
I am not sure what you mean. From the looks of it, … |
Before b94d095 I was using Rayon to parallelise loading SVGs. But when I switched from using …
Seeing as using Rayon was really just a silly case of premature optimisation, I removed it without looking into the underlying problem. What problem are you seeing? |
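For reference, a minimal sketch of what Rayon-based parallel loading can look like. The `Pixmap` placeholder and the `load_and_rasterise` helper below are hypothetical stand-ins, not Spreet's actual types or API, and the real rasterisation step (usvg/resvg) is elided:

```rust
use std::path::{Path, PathBuf};

use rayon::prelude::*;

// Hypothetical placeholder for the rendered raster (e.g. a tiny-skia
// pixmap); not Spreet's actual type.
struct Pixmap;

// Hypothetical helper: read one SVG from disk and rasterise it.
fn load_and_rasterise(path: &Path) -> Result<Pixmap, std::io::Error> {
    let _svg_data = std::fs::read(path)?;
    // ... parse with usvg and render with resvg here ...
    Ok(Pixmap)
}

// Fan the per-file work out across Rayon's thread pool; `collect`
// short-circuits on the first error.
fn load_all(paths: &[PathBuf]) -> Result<Vec<Pixmap>, std::io::Error> {
    paths.par_iter().map(|path| load_and_rasterise(path)).collect()
}
```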
I am not seeing any problems, other than the desire to optimize the process (in theory we should have proper benchmarks for all this). Rasterisation (getting the pixmap) sounds like an expensive process, so if it can be done together with the file I/O, I think it would speed things up.

I was not using Rayon; I use async/Tokio. The main idea: create multiple futures, each of which reads a file and parses it, then await all the futures together, so that once they are all done the results are passed on to Spreet to generate the spritesheet (this is what I had in the previous version). Now I do almost the same, except that I only await the file loading, whereas the pixmaps are generated sequentially by Spreet. Notice that there are no Rc or RefCell or any other things like that.

My thinking is that we could have two Sprite types, loaded and parsed (this can be done using this amazing pattern), so a user can pre-render the data asynchronously if needed, or let Spreet do the rendering. |
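A rough sketch of the workflow described above, assuming the `futures` crate and Tokio with its `fs` feature; `ParsedSvg`, `Pixmap`, and `rasterise` are hypothetical placeholders rather than Spreet's types. All file reads run concurrently, and rasterisation happens sequentially afterwards:

```rust
use std::path::PathBuf;

// Hypothetical stand-ins for a parsed SVG and a rendered pixmap.
struct ParsedSvg(Vec<u8>);
struct Pixmap;

// Hypothetical CPU-bound rasterisation step, run sequentially once all
// of the I/O has finished (as in the workflow described above).
fn rasterise(_svg: &ParsedSvg) -> Pixmap {
    Pixmap
}

// One future per file: read it asynchronously and parse it.
async fn load_one(path: PathBuf) -> std::io::Result<ParsedSvg> {
    let data = tokio::fs::read(&path).await?;
    Ok(ParsedSvg(data))
}

async fn load_then_render(paths: Vec<PathBuf>) -> std::io::Result<Vec<Pixmap>> {
    // Await all of the reads together; the first error aborts the batch.
    let parsed = futures::future::try_join_all(paths.into_iter().map(load_one)).await?;
    // Generate the pixmaps sequentially from the in-memory SVGs.
    Ok(parsed.iter().map(rasterise).collect())
}
```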
With the recent release it is no longer possible to pre-process each SVG pixmap independently and merge the results. I was doing it all in parallel using Tokio, and I think there is a significant enough benefit, especially in a high-load scenario, to offer rapid sprite generation.
How could we make this a bit more optimized? Thanks!
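One way to support both workflows, following the two-state Sprite idea from the comment above, is a simple typestate-style split. The names `LoadedSprite` and `RenderedSprite` below are hypothetical illustrations, not Spreet's API:

```rust
// A sprite that has been read from disk but not yet rasterised.
struct LoadedSprite {
    svg_data: Vec<u8>,
}

// A sprite whose pixmap has already been generated (possibly in
// parallel by the caller).
struct RenderedSprite {
    pixels: Vec<u8>, // stand-in for a real pixmap type
}

impl LoadedSprite {
    // Consuming `self` makes the transition one-way: once rendered,
    // the raw-SVG state no longer exists.
    fn render(self) -> RenderedSprite {
        // Placeholder: real code would rasterise `self.svg_data` here.
        let _ = self.svg_data;
        RenderedSprite { pixels: Vec::new() }
    }
}
```

A spritesheet builder could then accept either type: callers that pre-render pixmaps concurrently hand over `RenderedSprite`s, while everyone else passes `LoadedSprite`s and lets the library call `render` itself.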