-
Yes, I also thought that repeatedly computing the hash may be a problem. As a solution, I was thinking about caching the hash of each file. This way the hash is computed only once per file, so it is not recomputed for the same include. Another, even faster alternative might be to hash only the file name itself. However, in that case changes to the file will not trigger a shader update, which makes it of limited use.
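The per-file caching idea above could be sketched roughly like this. Everything here is illustrative, not the actual engine API: `FileHashCache` and its methods are hypothetical names, and `std::hash` stands in for whatever hash function the engine really uses.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical sketch: memoize each include file's hash by path so that
// repeated includes of the same file never re-hash its contents.
class FileHashCache
{
public:
    // Returns the cached hash for Path if present; otherwise computes
    // the hash of Contents, stores it, and returns it.
    size_t GetHash(const std::string& Path, const std::string& Contents)
    {
        auto it = m_Cache.find(Path);
        if (it != m_Cache.end())
            return it->second; // cache hit: no re-hashing

        // std::hash is a placeholder for the engine's real hash function.
        size_t Hash = std::hash<std::string>{}(Contents);
        m_Cache.emplace(Path, Hash);
        return Hash;
    }

    size_t CacheSize() const { return m_Cache.size(); }

private:
    std::unordered_map<std::string, size_t> m_Cache;
};
```

A real implementation would also need to invalidate an entry when the file changes on disk (e.g. by checking the modification time), otherwise this degenerates into the filename-only variant and misses shader updates.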
-
Hmm, the current computation is reliable, and your first solution sounds even better. In a testing-release build, we hack a
-
Problem
With this project setup: about 2400+ shader permutations, each shader (with all includes) about 10000+ lines. Even when the cache is populated, the subsequent load times are unsatisfactory.
Early inspection
When loading, `RenderStateCache` computes the hash of the full source again and again. Maybe we can get the internal hash string and save it, provide that hash string next time, and have `RenderStateCache` check the `Dearchiver` first?
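The proposal above, reusing a saved hash string to skip hashing the full source, could look roughly like the following. This is a sketch under assumptions: `ComputeSourceHash` and `GetOrComputeHash` are hypothetical names, not the actual `RenderStateCache`/`Dearchiver` interface, and `std::hash` stands in for the engine's real hash.

```cpp
#include <functional>
#include <optional>
#include <string>

// Stand-in for the engine's real full-source hash computation,
// which is the expensive step for ~10000-line sources.
std::string ComputeSourceHash(const std::string& FullSource)
{
    return std::to_string(std::hash<std::string>{}(FullSource));
}

// Hypothetical entry point: if the application saved the hash string
// from a previous run, reuse it and go straight to the archive lookup;
// only hash the full source when no precomputed hash is supplied.
std::string GetOrComputeHash(const std::optional<std::string>& PrecomputedHash,
                             const std::string&                FullSource)
{
    if (PrecomputedHash)
        return *PrecomputedHash; // skip full-source hashing entirely

    return ComputeSourceHash(FullSource);
}
```

The trade-off is the same as with filename-only hashing: a stale saved hash string would silently map a modified shader to an old archived blob, so the application would be responsible for discarding saved hashes when sources change.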