Why does the implementation of Lengauer-Tarjan use std::deque for a bucket? #383
Comments
Yeah, that bucket type should be changed. Other performance improvements to consider: changing from a "struct of vectors" to a "vector of structs" would probably improve cache efficiency, though I'm not sure about that. I recommend using Google's benchmark library to get your results. Please include a graph of the comparison in any pull request. Also, please don't do all the performance changes together in one PR -- better to do them one at a time so as not to hurt my brain. Thanks. :)
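For illustration only (this is not code from the library, and the field names are invented): the "struct of vectors" versus "vector of structs" distinction is about memory layout, i.e. whether the per-vertex fields that the algorithm touches together live in separate arrays or side by side in one record.

```cpp
// Hypothetical sketch of the layout change; field names are illustrative only.
#include <vector>

// "Struct of vectors" (the current style): each per-vertex field lives in its
// own array, so reading several fields of one vertex touches several arrays.
struct StateSoA {
    std::vector<int> semi;      // semidominator numbers
    std::vector<int> ancestor;  // forest ancestors used by eval()/link()
    std::vector<int> label;     // labels used by path compression
};

// "Vector of structs": the fields of one vertex sit next to each other,
// which tends to be friendlier to the cache when they are accessed together.
struct VertexState {
    int semi;
    int ancestor;
    int label;
};
using StateAoS = std::vector<VertexState>;
```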
@jeremy-murphy Thank you very much for the answer and for all the information and suggestions. I believe developing benchmarks with the Google Benchmark project may itself be a good first issue for someone who would like to join Boost development. They could start from a set of "real" CFGs taken from the Boost sources or from other applications; LLVM or even BOLT may help a lot with collecting the CFGs. Then the benchmarks could be added to the library.
That sounds essentially right. I wouldn't trouble yourself getting 'real' CFGs from LLVM, for example; it should be sufficient to generate synthetic ones. If we add Google Benchmark tests to the library, they will be optional, as we don't want to make that library a required dependency.
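As a sketch of what such an optional benchmark might look like (nothing below exists in the library; the benchmark name, the synthetic-graph helper, and the use of a random digraph as a stand-in for a real CFG are all my assumptions):

```cpp
#include <benchmark/benchmark.h>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/dominator_tree.hpp>
#include <boost/graph/erdos_renyi_generator.hpp>
#include <boost/random/mersenne_twister.hpp>
#include <vector>

using Graph  = boost::adjacency_list<boost::listS, boost::vecS, boost::bidirectionalS>;
using Vertex = boost::graph_traits<Graph>::vertex_descriptor;

// A random digraph stands in for a "real" CFG here; average out-degree is ~4.
static Graph make_synthetic_graph(std::size_t n) {
    boost::mt19937 gen(42);
    using ERIter = boost::erdos_renyi_iterator<boost::mt19937, Graph>;
    return Graph(ERIter(gen, n, 4.0 / n), ERIter(), n);
}

static void BM_LengauerTarjan(benchmark::State& state) {
    const Graph g = make_synthetic_graph(static_cast<std::size_t>(state.range(0)));
    std::vector<Vertex> dom(num_vertices(g), boost::graph_traits<Graph>::null_vertex());
    auto domMap = boost::make_iterator_property_map(dom.begin(), get(boost::vertex_index, g));
    for (auto _ : state)
        boost::lengauer_tarjan_dominator_tree(g, vertex(0, g), domMap);
}
BENCHMARK(BM_LengauerTarjan)->Range(1 << 8, 1 << 16);

BENCHMARK_MAIN();
```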
@jeremy-murphy Hello, sorry for the late response. I would like to implement this point (replacing the bucket container), but have a question: what about initializing the replacement? The current members are initialized in the constructor like this:

```cpp
, buckets_(num_vertices(g))
, bucketMap_(make_iterator_property_map(buckets_.begin(), indexMap))
```

whereas boost::unordered_flat_map offers a constructor that takes a bucket count:

```cpp
explicit unordered_flat_map(size_type n,
    const hasher& hf = hasher(),
    const key_equal& eql = key_equal(),
    const allocator_type& a = allocator_type());
```

Here, n sets the initial bucket count. The second question is about using the flat map as a backbone for the bucket property map.
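As a sketch of the initialization I have in mind (the member names just mirror the ones quoted above, and I am assuming associative_property_map accepts boost::unordered_flat_map, since it only needs operator[] on the container):

```cpp
#include <boost/graph/adjacency_list.hpp>
#include <boost/property_map/property_map.hpp>
#include <boost/unordered/unordered_flat_map.hpp>
#include <vector>

using Graph   = boost::adjacency_list<boost::listS, boost::vecS, boost::bidirectionalS>;
using Vertex  = boost::graph_traits<Graph>::vertex_descriptor;
using Buckets = boost::unordered_flat_map<Vertex, std::vector<Vertex>>;

struct BucketState {
    Buckets buckets_;
    boost::associative_property_map<Buckets> bucketMap_;

    explicit BucketState(const Graph& g)
        : buckets_(num_vertices(g))   // pre-size the hash table via the bucket-count constructor
        , bucketMap_(buckets_)        // property-map view over the flat map
    {}
};
```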
About this suggestion: I've replaced the baseline bucket container with the flat map.

I tried initializing an entry with an empty vector for every vertex in the flat map and, at the end of each iteration, clearing the vectors for the corresponding vertices instead of erasing the entries. I benchmarked both variants, with and without that initialization.

These numbers should be compared with the second chart in my earlier comment: they are a little worse for the last two (large) cases -- 17325 vs 15915 (about 8%) and 15990 vs 14380 (about 10%) -- but a little better for the small cases: 700 vs 760 (8%) for Muchnick and 1184 vs 1211 (2%) for Tarjan's paper. I believe the numbers demonstrate that a flat map brings no clear overall improvement here.
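A rough sketch of the two variants I am comparing, under my reading of the description above (identifiers are made up):

```cpp
#include <boost/unordered/unordered_flat_map.hpp>
#include <cstddef>
#include <vector>

using Vertex  = std::size_t;
using Buckets = boost::unordered_flat_map<Vertex, std::vector<Vertex>>;

// Variant A: every vertex gets an empty vector up front; when a bucket is
// done, only clear() it, so the map entry and the vector's capacity survive.
void finish_bucket_keep_entry(Buckets& buckets, Vertex v) {
    buckets[v].clear();
}

// Variant B: erase the entry entirely; the next push_back() re-inserts it
// and the vector has to grow from scratch.
void finish_bucket_erase_entry(Buckets& buckets, Vertex v) {
    buckets.erase(v);
}
```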
Thanks for trying all these different ideas. Well, we fixed the essential issue of using std::deque.
The compilation of such a huge graph is guarded by the WITH_INLINED_GRAPH compiler directive. Issue: boostorg#383
I was going through the implementation of the Lengauer-Tarjan algorithm in BGL's dominator_tree.hpp and found that a vector of std::deque is used to store buckets: vertices with a given semidominator.
graph/include/boost/graph/dominator_tree.hpp, line 198 (commit 5557ccf)
I read the original paper as well as some explanations carefully and found nothing suggesting that the order of processing buckets[parent[w]] matters. Moreover, I see the deque is used there just as a container, not as a backend for the std::queue adapter: the code pushes vertices at the back of the deque, walks through the deque with the corresponding iterators, and then clears it. In my understanding, std::vector<std::vector<>> could be used there instead.

Could you tell me why a deque is used to store a bucket in the code? My idea is just to eliminate the reallocation overhead during the repeatedly performed push_back() and clear() operations; this is a technical matter only and has no impact on the correctness of the algorithm.

Thank you very much.
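To make the usage pattern concrete, here is a stand-alone sketch of the std::vector<std::vector<>> alternative (not the library's actual code); the point is that clear() keeps a vector's capacity, so the repeated fill/walk/clear cycle stops allocating once a bucket has reached its working size:

```cpp
#include <cstddef>
#include <vector>

using Vertex = std::size_t;

int main() {
    const std::size_t n = 8;                      // number of vertices
    std::vector<std::vector<Vertex>> buckets(n);  // one bucket per vertex

    // The algorithm repeatedly fills a bucket, walks it, and clears it.
    for (int pass = 0; pass < 3; ++pass) {
        buckets[3].push_back(1);                  // push vertices at the back
        buckets[3].push_back(5);

        for (Vertex v : buckets[3]) {             // walk with iterators
            (void)v;                              // ... process v ...
        }

        buckets[3].clear();                       // capacity is retained, so
                                                  // later passes don't reallocate
    }
}
```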