
Improve basic_string::find_first_of and basic_string::find_last_of vectorization for large needles or very large haystacks #5029

Draft: wants to merge 44 commits into main
Conversation

@AlexGuteniev AlexGuteniev commented Oct 20, 2024

Follow up on #4934 (comment):

The case bm<AlgType::str_member_last, char>/400/50 changed from 113 ns to 195 ns, a speedup ratio of 0.58 (that is, a regression).

I looked closer into that case and made it even faster than it originally was.

🗺️ Summary of changes

This PR consists of the following changes:

  • Introduced the __std_find_first_of_trivial_pos_N family, used by strings and string_view. The existing __std_find_first_of_trivial_N is still used by the standalone algorithm
  • Moved most of the vectorization decision making into the separately compiled code (further simplifying control flow in the header code as a side effect)
  • Added a vectorized bitmap algorithm, in addition to the existing vectorized nested loop (two of them, for different element sizes), scalar bitmap, and scalar nested loop algorithms
  • Reimplemented a copy of the scalar bitmap algorithm in the separately compiled code
  • Implemented a threshold system that better corresponds to the expected run time
  • Restored using the scalar bitmap algorithm in the header in constexpr context, because why not

⚙️ Vector bitmap algorithm

It is an AVX2-only algorithm. It processes 8 values at once.

Like the existing scalar bitmap algorithm, it can be used when no needle character value exceeds 255. Instead of an array of 256 bool values, it uses an actual bitmap. The whole bitmap fits into a __m256i variable, that is, an AVX2 register.

If another AVX2 register contains 8 32-bit values that are indices into the 32-bit parts of the bitmap, _mm256_permutevar8x32_epi32 (vpermd) can look up 8 parts at once. The index of a part is the high 3 bits of the 8-bit value. The low 5 bits can then be used to obtain the exact bit within the 32-bit part by a shift. AVX2 has variable 32-bit shifts that take a vector of shift counts instead of a single count for all elements: _mm256_srlv_epi32 and _mm256_sllv_epi32. The resulting mask can be obtained by _mm256_movemask_ps.
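A scalar model of this lookup may make it concrete. Here the bitmap is eight 32-bit words, mirroring the dword lanes of the __m256i; the helper name is made up for illustration:

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Scalar model of the AVX2 lookup: vpermd selects one of eight 32-bit
// bitmap words by the high 3 bits of the value, then a variable shift
// (vpsrlvd) brings the addressed bit down to position 0.
inline bool bitmap_contains(const std::array<uint32_t, 8>& bitmap, uint8_t value) {
    const uint32_t word = bitmap[value >> 5];    // high 3 bits: word index
    return ((word >> (value & 0x1F)) & 1u) != 0; // low 5 bits: bit index
}
```

In the vectorized version, all 8 lookups happen with one vpermd and one variable shift, so 8 haystack elements are classified per iteration.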

Bitmap building

Small needles

Unfortunately, there's no instruction in AVX2 that can combine bits from different values of the same vector into a single element. This means that the bitmap building has to be fully scalar, or at least partially scalar (doing some processing in parallel, but the final combining steps in scalar code).
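Conceptually, the scalar build sets one bit per needle byte. A sketch with made-up names (as discussed below, the actual implementation ends up doing this in vector registers instead):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Scalar bitmap build: one bit per possible byte value, 256 bits total,
// stored as eight 32-bit words.
inline std::array<uint32_t, 8> build_bitmap(const uint8_t* needle, size_t length) {
    std::array<uint32_t, 8> bitmap{};
    for (size_t i = 0; i != length; ++i) {
        bitmap[needle[i] >> 5] |= 1u << (needle[i] & 0x1F);
    }
    return bitmap;
}
```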

The scalar bitmap building loop performs rather poorly, worse than a loop that builds a bool array. So I implemented a loop that uses vector instructions for this: it uses vector registers and no stack, and it seems faster than creating a stack array and loading it afterwards. The key thing in this approach is that a value from one of the shifts is widened via _mm256_cvtepu8_epi64, so a 32-bit shift becomes a 256-bit shift of coarser granularity; the granularity is restored by another shift.

I've managed to get only a slight improvement when trying to partially parallelize it, while the complexity of bitmap building grew significantly, so it's probably not worth doing.

Different variations of non-parallel bitmap building have about the same performance, so I kept almost the same code I tried at first, adjusted to also work on 32-bit x86.

Large needles

The vector-instruction build loop performs poorly relative to the bool array building loop. At some point it makes sense to build the bool array and compress it into a bitmap. As the size of the array/bitmap is constant, the compression is a constant instruction sequence, without a loop, and it takes constant time.
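The bool-array-then-compress idea can be sketched as follows (made-up names; the actual compression is a fixed, loop-free instruction sequence rather than these plain loops):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>

// For long needles: first mark bytes in a bool-per-value table (one cheap
// store per needle element), then compress the 256 bools into the 256-bit
// bitmap. The compression cost is constant, independent of needle length.
inline std::array<uint32_t, 8> compress_table(const std::array<bool, 256>& table) {
    std::array<uint32_t, 8> bitmap{};
    for (size_t value = 0; value != 256; ++value) {
        if (table[value]) {
            bitmap[value >> 5] |= 1u << (value & 0x1F);
        }
    }
    return bitmap;
}
```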

Test for suitable element values

This is done separately, before creating the bitmap. This separate check is vectorized and allows bailing out quickly if the values don't fit, without building the bitmap. There isn't a specific benchmark for that currently, but I expect it to work well.
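In scalar form, the pre-check is just a scan (a sketch with a made-up name; the real check is vectorized):

```cpp
#include <cassert>
#include <cstddef>

// Bail out early if any needle element doesn't fit into the 0..255 range
// covered by the 256-bit bitmap.
template <class Elem>
inline bool fits_in_256_bits(const Elem* needle, size_t length) {
    for (size_t i = 0; i != length; ++i) {
        if (needle[i] > 255) {
            return false;
        }
    }
    return true;
}
```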

Advantage over existing

The new algorithm wins in the cases where the needle and haystack product is big enough to make the existing vector algorithms slow, but the haystack is still much bigger than the needle, so the scalar bitmap lookup is also slow. Added some of these cases to the benchmark.

Surprisingly, this extends to the case of very small needles. With a haystack of over roughly 1000 elements, the vector bitmap wins over SSE4.2 even for just a few needle elements.

Can we have this in SSE?

No. There's _mm256_shuffle_epi8 to do the bitmap-part extraction, but there's no variable vector shift. There isn't even a variable vector shift in AVX2 with a vector element width smaller than 32 bits. So there's probably nothing better than using the 8-element AVX2 vector.

⚖️ Selecting algorithm

⚠️ Actual run time vs. full haystack length

The problem with estimating run time in advance is that we don't know how long it will run. The algorithm doesn't traverse the full haystack if the position is found earlier.

But when selecting the algorithm, we know only the full length. Knowing the full length, we can at least estimate the worst case.

Let's still start with the worst case; we will get back to the early-return possibility later on.

Run time evaluation

The nested loop algorithms, the scalar one and both vector ones, are O(m*n), and the vector algorithms are definitely preferred for any noticeably high values of m and n.

Also, any bitmap algorithm is faster than the scalar nested loop, unless the element is found in the very first position. So we can safely exclude the scalar nested loop from consideration.

Both scalar and vector bitmap algorithms are roughly O(n + m), but with quite different weights for m and n. Specifically, the vector bitmap algorithm penalizes needle length far more than haystack length, because the bitmap build is not parallel, while the scalar bitmap algorithm weighs them almost equally (surprisingly, the needle has slightly less weight).

The vector nested loop algorithm clearly outperforms when both n and m are small, so their product is also small. In specific cases, the vector algorithm is linear: if either n or m fits within a single vectorization unit, there is no nested loop at all (for a short needle this is a deliberate optimization; for a small haystack it is the result of the separate haystack tail processing).

After benchmarking these edge cases, it can be seen that the vector nested loop outperforms everything for a long needle and a small haystack, but it doesn't always outperform the vector bitmap for a short needle / large haystack. The former allows excluding the scalar bitmap algorithm from consideration: with any haystack that isn't very small, the vector bitmap algorithm's advantage is noticeable. The very small set of cases where the scalar bitmap can win (a small but not very small haystack and a long needle) still doesn't give it a solid win; these cases are ultimately bound by the same scalar bitmap building loop for both algorithms. The benchmark may still show a noticeable difference there, but only because these are different instances of that loop, and codegen or other random factors might affect them.

So we need to pick:

  • Between vector bitmap and vector nested loop for AVX2
  • Between scalar bitmap and vector nested loop for SSE4.2

It is hard to reason about the threshold functions, so the thresholds were obtained by aggressive benchmarking.
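To illustrate the shape of the resulting dispatch (everything here, names and constants alike, is a made-up placeholder; the actual tuned thresholds were obtained by benchmarking and live in the separately compiled code):

```cpp
#include <cassert>
#include <cstddef>

enum class Strategy { VectorNestedLoop, ScalarBitmap, VectorBitmap };

// Hypothetical threshold model: the nested loop costs about m*n, while a
// bitmap costs about a*n + b*m (haystack scan plus bitmap build), with
// different weights a, b for the scalar and vector bitmap variants. The
// constants below are placeholders, not the benchmarked values.
inline Strategy pick_strategy(size_t haystack_len, size_t needle_len, bool has_avx2) {
    if (has_avx2) {
        return haystack_len * needle_len > 16 * haystack_len + 64 * needle_len
                   ? Strategy::VectorBitmap
                   : Strategy::VectorNestedLoop;
    }
    return haystack_len * needle_len > 8 * haystack_len + 8 * needle_len
               ? Strategy::ScalarBitmap
               : Strategy::VectorNestedLoop;
}
```

With a model of this shape, large needle-haystack products route to a bitmap algorithm, while small cases keep the vector nested loop.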

Considering early return

There is an early-return possibility.
If we don't account for it, we may pick a bitmap algorithm where the vector nested loop is better.
If we expect it, but it doesn't happen, we may pick the vector nested loop when a bitmap algorithm is better.

It looks like the latter gives the worse error.

Generally, the price of a wrong choice is small for short needles. Long needles are the gambling cases. But even for long needles, the price of not picking the vector nested loop when it is better is no more than 2x.

Why is this dispatch not in headers?

No big reason.

There's an overflow-checking multiply intrinsic used from <intrin.h>, but it is not essential.

Maybe this will also make maintenance easier, by having fewer functions exposed from vector_algorithms.cpp.

Otherwise, I guess I'm just sweeping the complexity under the carpet.

🛑 Risks

This time I don't see anything that seems incorrect, but it is a complex change with some risks to consider:

  • Regressing performance in some cases due to time spent deciding/dispatching. This cost exists, but it is small.
  • Regressing performance due to a potentially worse choice of algorithm in some cases. The current thresholds give a better big picture, but in some border cases they might give a slightly worse answer
  • In particular, they might give a worse choice for the best case, where the element is found immediately (discussed above)
  • Different performance behavior on different CPUs might invalidate the fine-tuning. Older AMD CPUs that execute AVX2 instructions in two halves are the main concern.
  • Complexity of the vector tricks as usual
  • Changed __std_find_last_of_trivial_pos_N usage, see below

Changed __std_find_last_of_trivial_pos_N usage

__std_find_last_of_trivial_pos_N has been shipped in #4934. Now it attempts the bitmap, which is not what old code expects. However, the worst that would happen is this: when the old header implementation rejects the scalar bitmap due to bad values, the separately compiled code would unnecessarily try the bitmap again. This second attempt would be even faster due to the vectorized checking, unless the user does not have SSE4.2.

I just don't want to add more functions with more names just for this reason.

Not wanting to have this situation for another function is the reason I made this PR before the _not_ vectorization (remaining for the find 🐱 family).

⏱️ Benchmark results

Benchmark main this
bm<AlgType::str_member_first, char>/2/3 6.23 ns 5.45 ns
bm<AlgType::str_member_first, char>/6/81 36.5 ns 21.9 ns
bm<AlgType::str_member_first, char>/7/4 13.2 ns 14.5 ns
bm<AlgType::str_member_first, char>/9/3 11.1 ns 13.8 ns
bm<AlgType::str_member_first, char>/22/5 11.7 ns 14.4 ns
bm<AlgType::str_member_first, char>/58/2 12.8 ns 15.9 ns
bm<AlgType::str_member_first, char>/75/85 49.4 ns 47.2 ns
bm<AlgType::str_member_first, char>/102/4 14.9 ns 18.9 ns
bm<AlgType::str_member_first, char>/200/46 72.9 ns 39.8 ns
bm<AlgType::str_member_first, char>/325/1 35.4 ns 37.5 ns
bm<AlgType::str_member_first, char>/400/50 131 ns 53.6 ns
bm<AlgType::str_member_first, char>/1011/11 88.9 ns 94.9 ns
bm<AlgType::str_member_first, char>/1280/46 443 ns 128 ns
bm<AlgType::str_member_first, char>/1502/23 359 ns 136 ns
bm<AlgType::str_member_first, char>/2203/54 563 ns 206 ns
bm<AlgType::str_member_first, char>/3056/7 253 ns 257 ns
bm<AlgType::str_member_first, wchar_t>/2/3 12.3 ns 13.7 ns
bm<AlgType::str_member_first, wchar_t>/6/81 38.2 ns 29.0 ns
bm<AlgType::str_member_first, wchar_t>/7/4 14.2 ns 17.4 ns
bm<AlgType::str_member_first, wchar_t>/9/3 13.0 ns 18.4 ns
bm<AlgType::str_member_first, wchar_t>/22/5 13.9 ns 19.6 ns
bm<AlgType::str_member_first, wchar_t>/58/2 18.5 ns 24.4 ns
bm<AlgType::str_member_first, wchar_t>/75/85 74.7 ns 61.0 ns
bm<AlgType::str_member_first, wchar_t>/102/4 25.6 ns 29.7 ns
bm<AlgType::str_member_first, wchar_t>/200/46 109 ns 56.2 ns
bm<AlgType::str_member_first, wchar_t>/325/1 64.8 ns 67.2 ns
bm<AlgType::str_member_first, wchar_t>/400/50 180 ns 71.1 ns
bm<AlgType::str_member_first, wchar_t>/1011/11 482 ns 141 ns
bm<AlgType::str_member_first, wchar_t>/1280/46 493 ns 182 ns
bm<AlgType::str_member_first, wchar_t>/1502/23 552 ns 188 ns
bm<AlgType::str_member_first, wchar_t>/2203/54 815 ns 264 ns
bm<AlgType::str_member_first, wchar_t>/3056/7 556 ns 330 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/2/3 16.4 ns 16.7 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/6/81 195 ns 28.0 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/7/4 25.4 ns 16.5 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/9/3 13.3 ns 17.3 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/22/5 14.0 ns 18.4 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/58/2 18.3 ns 22.8 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/75/85 190 ns 188 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/102/4 25.4 ns 28.8 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/200/46 274 ns 279 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/325/1 65.2 ns 66.0 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/400/50 584 ns 580 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/1011/11 479 ns 474 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/1280/46 1624 ns 1593 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/1502/23 991 ns 958 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/2203/54 3074 ns 3043 ns
bm<AlgType::str_member_first, wchar_t, L'\x03B1'>/3056/7 553 ns 524 ns
bm<AlgType::str_member_first, char32_t>/2/3 12.0 ns 12.3 ns
bm<AlgType::str_member_first, char32_t>/6/81 40.8 ns 42.8 ns
bm<AlgType::str_member_first, char32_t>/7/4 14.9 ns 14.9 ns
bm<AlgType::str_member_first, char32_t>/9/3 13.1 ns 15.5 ns
bm<AlgType::str_member_first, char32_t>/22/5 14.8 ns 18.0 ns
bm<AlgType::str_member_first, char32_t>/58/2 14.3 ns 18.4 ns
bm<AlgType::str_member_first, char32_t>/75/85 63.6 ns 51.2 ns
bm<AlgType::str_member_first, char32_t>/102/4 20.1 ns 21.0 ns
bm<AlgType::str_member_first, char32_t>/200/46 109 ns 46.3 ns
bm<AlgType::str_member_first, char32_t>/325/1 33.0 ns 31.3 ns
bm<AlgType::str_member_first, char32_t>/400/50 180 ns 63.2 ns
bm<AlgType::str_member_first, char32_t>/1011/11 332 ns 126 ns
bm<AlgType::str_member_first, char32_t>/1280/46 487 ns 143 ns
bm<AlgType::str_member_first, char32_t>/1502/23 548 ns 155 ns
bm<AlgType::str_member_first, char32_t>/2203/54 809 ns 243 ns
bm<AlgType::str_member_first, char32_t>/3056/7 538 ns 534 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/2/3 15.2 ns 14.4 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/6/81 192 ns 24.3 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/7/4 24.3 ns 15.0 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/9/3 13.1 ns 15.0 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/22/5 14.5 ns 17.5 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/58/2 14.0 ns 18.5 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/75/85 204 ns 203 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/102/4 19.5 ns 21.2 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/200/46 285 ns 285 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/325/1 22.3 ns 33.1 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/400/50 601 ns 597 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/1011/11 336 ns 322 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/1280/46 1751 ns 1729 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/1502/23 1012 ns 1002 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/2203/54 3465 ns 3451 ns
bm<AlgType::str_member_first, char32_t, U'\x03B1'>/3056/7 534 ns 536 ns
bm<AlgType::str_member_last, char>/2/3 5.43 ns 5.44 ns
bm<AlgType::str_member_last, char>/6/81 33.1 ns 18.4 ns
bm<AlgType::str_member_last, char>/7/4 11.8 ns 12.4 ns
bm<AlgType::str_member_last, char>/9/3 10.5 ns 12.5 ns
bm<AlgType::str_member_last, char>/22/5 11.1 ns 12.9 ns
bm<AlgType::str_member_last, char>/58/2 12.1 ns 14.2 ns
bm<AlgType::str_member_last, char>/75/85 52.6 ns 43.9 ns
bm<AlgType::str_member_last, char>/102/4 14.7 ns 16.6 ns
bm<AlgType::str_member_last, char>/200/46 72.0 ns 33.6 ns
bm<AlgType::str_member_last, char>/325/1 33.1 ns 36.3 ns
bm<AlgType::str_member_last, char>/400/50 138 ns 50.9 ns
bm<AlgType::str_member_last, char>/1011/11 90.6 ns 98.8 ns
bm<AlgType::str_member_last, char>/1280/46 425 ns 122 ns
bm<AlgType::str_member_last, char>/1502/23 358 ns 133 ns
bm<AlgType::str_member_last, char>/2203/54 605 ns 201 ns
bm<AlgType::str_member_last, char>/3056/7 267 ns 228 ns
bm<AlgType::str_member_last, wchar_t>/2/3 14.5 ns 12.3 ns
bm<AlgType::str_member_last, wchar_t>/6/81 41.7 ns 25.4 ns
bm<AlgType::str_member_last, wchar_t>/7/4 15.7 ns 14.7 ns
bm<AlgType::str_member_last, wchar_t>/9/3 13.2 ns 15.1 ns
bm<AlgType::str_member_last, wchar_t>/22/5 13.7 ns 15.6 ns
bm<AlgType::str_member_last, wchar_t>/58/2 18.0 ns 19.8 ns
bm<AlgType::str_member_last, wchar_t>/75/85 76.4 ns 53.0 ns
bm<AlgType::str_member_last, wchar_t>/102/4 25.1 ns 24.2 ns
bm<AlgType::str_member_last, wchar_t>/200/46 116 ns 45.4 ns
bm<AlgType::str_member_last, wchar_t>/325/1 65.1 ns 41.2 ns
bm<AlgType::str_member_last, wchar_t>/400/50 191 ns 62.2 ns
bm<AlgType::str_member_last, wchar_t>/1011/11 402 ns 135 ns
bm<AlgType::str_member_last, wchar_t>/1280/46 515 ns 147 ns
bm<AlgType::str_member_last, wchar_t>/1502/23 718 ns 182 ns
bm<AlgType::str_member_last, wchar_t>/2203/54 862 ns 255 ns
bm<AlgType::str_member_last, wchar_t>/3056/7 554 ns 326 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/2/3 16.8 ns 13.2 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/6/81 162 ns 25.3 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/7/4 27.3 ns 14.8 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/9/3 13.6 ns 15.6 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/22/5 13.8 ns 16.2 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/58/2 18.0 ns 20.0 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/75/85 169 ns 170 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/102/4 25.2 ns 28.0 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/200/46 239 ns 240 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/325/1 63.8 ns 67.2 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/400/50 525 ns 527 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/1011/11 406 ns 407 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/1280/46 1409 ns 1432 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/1502/23 865 ns 859 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/2203/54 2821 ns 2811 ns
bm<AlgType::str_member_last, wchar_t, L'\x03B1'>/3056/7 555 ns 557 ns

@AlexGuteniev AlexGuteniev requested a review from a team as a code owner October 20, 2024 14:28
@StephanTLavavej StephanTLavavej added the performance Must go faster label Oct 20, 2024
@StephanTLavavej StephanTLavavej self-assigned this Oct 20, 2024
@AlexGuteniev (Contributor, Author) commented:
https://github.com/AlexGuteniev/STL/tree/ascii-table-experiment is a branch with an altered benchmark program that I used to confirm the threshold characteristics and find out their values. It is experimental science, not just plain theory! 🔬

@AlexGuteniev AlexGuteniev changed the title Improve basic_string::find_first_of and basic_string::find_last_of vectorization for large needles Improve basic_string::find_first_of and basic_string::find_last_of vectorization for large needles or very large haystacks Oct 21, 2024
@AlexGuteniev AlexGuteniev marked this pull request as draft October 22, 2024 10:04
@AlexGuteniev AlexGuteniev marked this pull request as ready for review October 27, 2024 16:58
@AlexGuteniev AlexGuteniev removed their assignment Oct 27, 2024
@StephanTLavavej StephanTLavavej self-assigned this Oct 27, 2024
Even though sll is potentially more expensive than sllv,
the broadcast is still more expensive
Comment on lines 3908 to 3921
if (_Use_sse42()) {
    if (_Use_avx2()) {
        if (_Bitmap::_Use_bitmap_avx<_Ty>(_Haystack_length, _Needle_length)
            && _Bitmap::_Can_fit_256_bits_sse(static_cast<const _Ty*>(_Needle), _Needle_length)) {
            return _Bitmap::_Impl_last_avx<_Ty>(_Haystack, _Haystack_length, _Needle, _Needle_length);
        }
    } else {
        if (_Bitmap::_Use_bitmap_sse<_Ty>(_Haystack_length, _Needle_length)
            && _Bitmap::_Can_fit_256_bits_sse(static_cast<const _Ty*>(_Needle), _Needle_length)) {
            _Bitmap::_Scalar_table_t _Table = {};
            _Bitmap::_Build_scalar_table_no_check<_Ty>(_Needle, _Needle_length, _Table);
            return _Bitmap::_Impl_last_scalar<_Ty>(_Haystack, _Haystack_length, _Table);
        }
    }
Member commented:
Question about this control flow: If we can use AVX2, but the _Use_bitmap_avx threshold isn't met, we don't try the _Use_bitmap_sse threshold, and instead just fall through. But if we can only use SSE4.2, we try the SSE4.2 threshold before falling through, indicating that the guarded code is valuable. From a brief glance, I see that there are cases where the AVX2 threshold wouldn't activate, but the SSE4.2 threshold would.

Given that the threshold checks are cheap integer comparisons, should we follow the more common pattern of "if we can use AVX2 and the AVX2 threshold is met, return cool AVX2 thing. If we can use SSE4.2 and the SSE4.2 threshold is met, return slightly less cool SSE4.2 thing. Return fallback thing."?

Contributor (Author) replied:

I wanted to simplify the problem by not looking into these cases and not picking among three algorithms.

The cases we pessimize by this approximation are the cases where the needle greatly exceeds the haystack, and the pessimization is by a noticeable percentage, but not on the order of 2x.


If we want to do this right, we should handle not only the cases you see here: "AVX bitmap is not preferable over vector nested loop, but scalar bitmap is preferable over vector nested loop", but also a set of cases "AVX bitmap is preferable over vector nested loop, and scalar bitmap is also preferable over vector nested loop, and scalar bitmap is preferable over AVX bitmap". The latter is a greater set.

At some point the AVX threshold function returned an enum; we would need to return to that:

enum class _Strategy { _No_bitmap, _Scalar_bitmap, _Vector_bitmap };

_Strategy _Pick_strategy_avx(const size_t _Count1, const size_t _Count2) noexcept { ... }

So I don't mind handling these cases better, at the expense of threshold complication. Do you want me to do that?

@AlexGuteniev (Contributor, Author) commented:

@StephanTLavavej mentioned on Discord that it is OK to use a three-way strategy function.

As its implementation still needs to be established (there was one, but it matched the previous AVX bitmap implementation), I'm moving this back to work-in-progress.

@AlexGuteniev AlexGuteniev marked this pull request as draft November 9, 2024 18:48