Daniel Lemire's points about low-level hardware optimization notwithstanding, it's worth pointing out that binary search (or low-level implementation variants) is the best only if you know *nothing* about the data beyond the fact that it is sorted / monotonic.

If you have priors about the data distribution, then it's possible to design algorithms which use that extra information to perform MUCH better. E.g.: a human searching a physical paper dictionary can zoom into the right bunch of pages faster than pure idealized binary search; it's a separate matter that it's hard for humans to carry a binary search through to the very end, and we might default to scanning linearly for the last few iterations (cognitive convenience / affordances of human wetware / etc.).

In mathematical language, searching a sorted list is basically *inverting* a monotonic function using a closed-loop control algorithm. Often, we could very well construct a suitable cost function and use gradient descent or its accelerated cousins.

More generally, the best bet for solving a problem more efficiently is always to use more information about the specific problem you want to solve, instead of pulling up the solution for an overly abstract representation. That can offer scalable orders-of-magnitude speedups, compared to constant-factor speedups from just using hardware better.
> it's worth pointing out that binary search (or low-level implementation variants) is the best only if you know nothing about the data beyond the fact that it is sorted / monotonic

Also if you do not learn anything about the data while performing the binary search, no? Like, if you are constantly below the estimate, you could guess that the distribution is biased toward large values and adjust your guess based on this prediction.
For a list of sorted values with no other knowledge, binary search is optimal. Provably so: it is simple information theory on binary comparisons.

You can do better if the list is stable, by reusing information.

But gathering that information during searches would require great complexity to leverage, since searches are an irregular information-gathering scheme.

So build RAM structures for speedups up front:

1) Create a table that maps the first 8 bits to upper and lower indexes in the list, then binary search over the last 8 bits. That roughly halves the number of search steps.

2) Go all the way and create an array of 32,768 indexes, with all 1's for misses. That makes lookup O(1).

Stable lists allow for sliding parametric trade-offs between RAM lookup and binary search, from full lookup to full binary.
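A rough sketch of option 1 for sorted 16-bit keys; the names (`bucket_lo`, `build_buckets`, `lookup`) are illustrative, not from any particular library:

    #include <stdbool.h>
    #include <stdint.h>

    /* bucket_lo[b] .. bucket_lo[b+1] is the index range whose top byte is b */
    static int32_t bucket_lo[257];

    void build_buckets(const uint16_t *arr, int32_t n) {
        int32_t i = 0;
        for (int32_t b = 0; b <= 256; b++) {
            while (i < n && (arr[i] >> 8) < b) i++;
            bucket_lo[b] = i;
        }
    }

    bool lookup(const uint16_t *arr, uint16_t key) {
        int32_t lo = bucket_lo[key >> 8], hi = bucket_lo[(key >> 8) + 1];
        while (lo < hi) {                  /* ordinary binary search within the bucket */
            int32_t mid = lo + (hi - lo) / 2;
            if (arr[mid] < key) lo = mid + 1;
            else if (arr[mid] > key) hi = mid;
            else return true;
        }
        return false;
    }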
It's not possible to learn anything about other elements when performing binary search, _except_ the only thing there is to learn: whether the target is before or after the recently compared element.

If we guess that there is a bias in the distribution based on recently seen elements, the guess is at least as likely to be wrong as it is to be right. And if we guess incorrectly, in the worst case, the algorithm degrades to a linear scan.

Unless we have prior knowledge. For example: if there is a particular distribution, or if we know we're dealing with integers without any repetition (i.e. each element is strictly greater than the previous one), etc.
> It's not possible to learn anything about other elements when performing binary search, _except_ the only thing there is to learn: whether the target is before or after the recently compared element.

You have another piece of information: you don't only know whether the element was before or after the compared element. You also know the delta between what you looked at and what you're looking for, and the delta from the previous item you looked at.
I swear I read an article about treaps, but instead of being used to balance the tree, the weights were used to Huffman-encode the search depth to reduce the average access time for heterogeneous fetch frequencies.

I did not bookmark it, and about twice a year I go searching for it again. Some say he's still searching to this day.
> In mathematical language, searching a sorted list is basically inverting a monotonic function using a closed-loop control algorithm.

Never thought about it this way. Brilliant!
Furthermore, with the vast and immediate knowledge that LLMs have, we could see a proliferation of domain-specific sorting algorithms designed for all types of purposes.
> use that extra information to perform MUCH better

Do you mean using a better estimator for the median value? Or something else?
> If you have priors about the data distribution, then it's possible to design algorithms which use that extra information to perform MUCH better.

You don't even need priors. See interpolation search, where knowing the position and value of two elements in a sorted list already lets the search make an educated guess about where the element it's looking for sits, by interpolating between those elements.
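A minimal sketch of interpolation search over a sorted uint16_t array (illustrative, not code from the article; expected O(log log n) probes on roughly uniform data, but it can degrade toward linear on skewed data):

    #include <stdbool.h>
    #include <stdint.h>

    bool interp_search(const uint16_t *arr, int32_t n, uint16_t key) {
        int32_t lo = 0, hi = n - 1;
        while (lo <= hi && key >= arr[lo] && key <= arr[hi]) {
            if (arr[hi] == arr[lo]) return arr[lo] == key;
            /* estimate where key would sit if values were evenly spread */
            int32_t mid = lo + (int32_t)(((int64_t)(key - arr[lo]) * (hi - lo))
                                         / (arr[hi] - arr[lo]));
            if (arr[mid] < key) lo = mid + 1;
            else if (arr[mid] > key) hi = mid - 1;
            else return true;
        }
        return false;
    }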
> knowing the position and value of two elements in a sorted list

That's a prior about the distribution, if a relatively weak one (in some sense, at least).
This relies on knowledge of the distribution: interpolation search on

    A = [1, 2, 4, 8, 16, ..., 2^(n-1)]

is slower than just querying in the middle (binary search).
Since the CPU always accesses a full cache line (64 bytes) at a time, you might as well search the entire cache line (it's practically free once the data is on-CPU). So I'd like to try a 'binary' search that tests all the values in the 'middle cache line' and then chooses to go left or right if none match. You can do the cache-line search as a single 512-bit SIMD instruction. A cache line is 64 bytes (or 32 16-bit integers), so each probe does a 32-way split instead of a 2-way one; that cuts the number of memory accesses by roughly 5x (log2 32 = 5 levels collapsed into one), and those accesses will dominate in most realistic programs.
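A sketch of that whole-cache-line test with AVX-512 (requires AVX-512BW; `line` is assumed to point at the 32 uint16_t values of the middle cache line of the current range):

    #include <immintrin.h>
    #include <stdbool.h>
    #include <stdint.h>

    bool line_contains(const uint16_t *line, uint16_t key) {
        __m512i needle = _mm512_set1_epi16((short)key);
        __m512i data   = _mm512_loadu_si512((const void *)line); /* 32 x uint16_t */
        __mmask32 hits = _mm512_cmpeq_epi16_mask(data, needle);
        return hits != 0;
    }

If no lane matches, comparing `key` against the line's last element decides whether to recurse left or right.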
Searching the upper cache lines in your binary search tree (sorted vector) for your target is unlikely to yield results. Instead you want to use the extra data in the line to shorten the search, which leads you to a B-tree or B+tree.

For 4-byte keys and 4-byte child pointers (or indexes into an array), your inner nodes would have 7 keys, 8 child pointers and 1 next pointer, completely filling a 64-byte cache line, and your tree depth for 1 million entries would go down from ~20 to ~7, the top few levels of which are likely to remain cache resident.

With some thought, it's possible to use SIMD on B-tree nodes to speed up the search within the node, but it's all very data dependent.
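A sketch of an inner-node layout matching those numbers (just the struct, not a full B-tree implementation):

    #include <stdint.h>

    typedef struct {
        uint32_t keys[7];    /* 28 bytes: separator keys */
        uint32_t child[8];   /* 32 bytes: indexes of child nodes in a node array */
        uint32_t next;       /*  4 bytes: next node at the same level */
    } inner_node;            /* 64 bytes = one cache line */

    _Static_assert(sizeof(inner_node) == 64, "node should fill one cache line");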
I also wrote recently [1] about Exponential Search [2], which is another useful algorithm if you need to repeatedly binary search in an array where the elements you're searching for are themselves sorted. It allowed for an 8x speedup in our workload!

[1] https://lalitm.com/post/exponential-search/

[2] https://en.wikipedia.org/wiki/Exponential_search
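A rough sketch of the idea (a galloping lower_bound; the `start` parameter is where the previous query landed, which is what makes this cheap when successive queries are themselves sorted):

    #include <stdint.h>

    /* index of the first element >= key in arr[start..n) */
    int32_t exponential_lower_bound(const uint16_t *arr, int32_t start,
                                    int32_t n, uint16_t key) {
        int32_t bound = 1;
        while (start + bound < n && arr[start + bound] < key)
            bound *= 2;                               /* gallop: 1, 2, 4, 8, ... */
        int32_t lo = start + bound / 2;
        int32_t hi = (start + bound < n) ? start + bound : n;
        while (lo < hi) {                             /* binary search the bracket */
            int32_t mid = lo + (hi - lo) / 2;
            if (arr[mid] < key) lo = mid + 1; else hi = mid;
        }
        return lo;
    }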
Exponential search is useful when you're querying a REST API that addresses resources with sequential IDs and you need the last ID, but there's no dedicated endpoint for it:

    HEAD /users/1 -> 200 OK
    HEAD /users/2 -> 200 OK
    HEAD /users/4 -> 200 OK
    ...
    HEAD /users/2048 -> 200 OK
    HEAD /users/4096 -> 404 Not Found

And then a binary search between 2048 and 4096 to find the most recent user (and incidentally, the number of users). Great info to have if you're researching competing SaaS companies.
Isn't "quaternary" just sort of unrolling the binary search loop by one level? I mean, to find the partition in which the item is located, you still do roughly the same rough number of comparisons. You're just taking them 4 at a time, not 2 at a time. Seems like loop unrolling would give you the same.
It's trickier than that. Modern processors are speculative, which means that they guess at the result of a comparison and keep going along one side of a branch as far as they can, until they are told they guessed wrong or hit some internal limit. If they guessed wrong, they throw away the speculative work, take a penalty of a handful of cycles, and do the same thing again from a different starting point.

Essentially, this means that all loops are already unrolled from the processor's point of view, minus a tiny bit of overhead for the loop itself that can often be ignored. Since in binary search the main cost is grabbing data from memory (or from cache in the "warm cache" examples), the real game is how to get the processor to issue the requests for the data you will eventually need as far in advance as possible, so you don't have to wait as long for it to arrive.

The difference for quad search (or anything wider than binary) is that instead of taking one side of each branch (and thus prefetching deeply in one direction), you prefetch all the possible cases but to lesser depth. This way you are guaranteed to have issued the prefetch you will eventually need, and you spend slightly less of your bandwidth budget on data that will never be used on the actual execution path.

As others are pointing out, "number of comparisons" is an almost useless metric for comparing search algorithms if your goal is predicting real-world performance. The limiting factor is almost never the number of comparisons you can do. Instead, the potential for speedup depends on making maximal use of memory and cache bandwidth. So yes, you can view this as loop unrolling, but only if you consider how branching on modern processors works under the hood.
Yeah, I get that the actual comparison instruction itself is insignificant; it's everything that goes along with it. It does seem like quaternary is fetching more data, however.

For instance, if you have 8 elements, 01234567, and you're looking for 1: with binary, you'd fetch 4, 2, and then 1. With quaternary, you'd fetch 2, 4, 6, then 1. Obviously, if you only have 8 elements, you'd just delegate to the SIMD instruction, but if this were a much larger array, you'd be doing more work.

I guess on a modern processor, eliminating the data dependency is worth it because the processor's branch prediction and speculation effectively only follow a single path.

Would be interesting to see this at a machine-cycle level on a real processor to understand exactly what is happening.
It's not about doing more or less work; it's about doing the work *faster*. For instance, it's relatively common to discover that some recomputation can be faster than caching or lookup tables. Similarly, fetching more from memory can also be faster if it means you make fewer round trips.
Well, that's where I thought this link was going to go before it went down the SIMD path... We already have a way to beat binary search: it's called B-trees. It rests on the same basic insight: take 64 evenly spaced elements from your data set, compare against all of them rapidly, and instead of bifurcating your search space once, you do the equivalent of six bifurcations at once; because the 64 elements are stored contiguously in memory, they take one fetch and you get cache locality... But as you have more elements, you need to repeat this lookup table three or four or five times, which costs a bit of extra space. So what if we make it not cost space, by just storing the data itself in these lookup tables...
Yes, this can be seen as unrolling the loop a bit. It improves performance not by significantly reducing the number of instructions or memory reads, but by relaxing the dependencies between operations so that it doesn't have to be executed purely serially. You could also look at it as akin to speculatively executing both sides of the branch.
Quaternary search effectively performs *both* of the next loop iteration's possible comparisons simultaneously with the current iteration's comparison. This is a little more complex than simple loop unrolling.

Regardless, both kinds of search are O(log N) with different constants. The constants don't matter so much in algorithms class, but in the real world they can matter a lot.
Sort of, yes, but you're also removing a data dependency between the unrolled stages.
It is because processors do not do what one might naively think they do.
If you are talking about smaller arrays, linear search with a sentinel value at the end is already tough to beat. The thing that sucks about that claim is that "smaller" is such a nebulous qualifier that it is really hard to internalize.
This is simply not true - if you look at this article's excellent benchmarking, linear search falls behind somewhere around 200-400 elements.

In general I love this article: it took something I've often wondered about and did a perfect job exploring it, with useful ablation studies.
I don't really see how this implies the above commenter's statement is "simply not true".
I don't think std::find typically uses a sentinel, though?
For that machine and compiler version, yes.
Except on Apple, where binary search always wins. Does anyone know why?
Prior to the current generation Intel designs, Apple’s branch predictor tables were a good deal larger than Intel’s IIRC, so depending on benchmarking details it’s plausible that Apple Silicon was predicting every branch perfectly in the benchmark, while Intel had a more real-world mispredict rate. Perf counters would confirm.
That's not what the article is about.
The algorithm description was a bit confusing for me.

The SIMD part is just the last step, where it uses SIMD to search the final 16 elements.

The quad part is that it checks 3 points to create 4 paths, but also that it's searching for the right block, not just the right key.

The details are a bit interesting. The author chooses to use the last element in each block for the quad search. I'm curious how the algorithm would change if you used the first element in each block instead, or even an arbitrary element.
As a teenager I spent a weekend thinking that if binary search was good, because it cuts the search space in half at every step, then wouldn't a *ternary* search be better? Because we'd cut it into thirds at every step.

So instead of just comparing the middle value, we'd compare the one at the 1/3 point, and if that turns out to be too low then we compare the value at the 2/3 point.

Unfortunately, although we cut the search space to 2/3 of what it was for binary search at each step (1/3 vs 1/2), we do 3/2 as many comparisons at each step (one comparison 50% of the time, two comparisons the other 50%), so it averages out to equivalence.

EDIT: See zamadatix's reply; it's actually 5/3 as many comparisons, because 2/3 of the time you have to do 2.
This ternary approach doesn't actually average 3/2 comparisons per level:

- First third: 1 comparison

- Second third: 2 comparisons

- Third third: 2 comparisons

(1+2+2)/3 = 5/3 average comparisons. I think the gap starts with assuming it's 50% of the time, because it feels like "either you do 1 comparison or 2", but really there is a 1/3 chance it's settled by the 1st comparison and a 2/3 chance it takes 2 comparisons.

This lets us show ternary is worse in total average comparisons, but just barely: since log_3(n) = log_2(n)/log_2(3), we get 5/3 * log_3(n) = 5/(3*log_2(3)) * log_2(n) = 1.052... * log_2(n).

In other words, you end up with fewer levels but doing more comparisons (on average) to get to the end. This is true for all searches of this type where the number of splits is > 2 (with a few general assumptions, like the values being searched for being evenly distributed and the cost of the operations being idealized, which is where the main article comes in).
It turns out that teenager you had something.

Not for the ternary version of the binary search algorithm, because what you had is just a skewed binary search, not an actual ternary search. Comparisons are binary by nature, so any search algorithm built on comparisons is a type of binary search, and any choice other than the middle element is less efficient in terms of algorithmic complexity, though in some conditions it may be better on real hardware. For an actual ternary search, you need a 3-way comparison as an elementary operation.

Where it gets interesting is when you consider "radix efficiency" [1], for which the best choice is 3, the natural number closest to e. And it is relevant to tree search; that is, a ternary tree may be better than a binary tree.

[1] https://en.wikipedia.org/wiki/Optimal_radix_choice
Note that CPUs have also gotten dramatically wider in both execution width and vector capability since you were a teenager. The increased throughput shifts the balance more toward being able to burn operations to reduce dependency chains. It's possible for your idea to have been both non-viable on the CPUs at the time and more viable on CPUs now.
Did you continue by fantasizing about CPUs that contain ternary comparators?
> Unfortunately although we cut the search space to 2/3 of what it was for binary search at each step (1/3 vs 1/2), we do 3/2 as many comparisons at each step (one comparison 50% of the time, two comparisons the other 50%), so it averages out to equivalence.

True, but is there some particular reason you want to minimize the number of comparisons rather than have a faster run time? Daniel doesn't overly emphasize it, but as he mentions in the article: "The net result might generate a few more instructions but the number of instructions is likely not the limiting factor."

The main thing this article shows is that (at least sometimes, on some processors) a quad search is faster than a binary search _despite_ the fact that it performs theoretically unnecessary comparisons. While some computer scientists might scoff, I'd bet heavily that an optimized ternary search could also frequently outperform.
This idea is closely related to the famous "stooge sort", which recursively sorts the first two-thirds of the array, then the last two-thirds, then the first two-thirds again. Naively, one might think the uneven, overlapping splits are harmless, but the running time works out to roughly O(n^2.7), far worse than quicksort.

For years--maybe still?--analyzing its running time was a staple of the first or second problem set in a college-level "Introduction to Algorithms" course.

https://en.wikipedia.org/wiki/Stooge_sort
When you can't seek quickly, e.g. on a disk, you can use a B-tree with say 128-way search. Fetching 128 keys doesn't cost much more than fetching 1 but it saves an additional 7 fetches.
Isn't it a bit better on average, although not as much as you'd hoped? For example 19 steps of binary search get you down to 1/524288 of the original search space with 19 comparisons. 12 steps of ternary search get you down to 1/3^12 = 1/531441 of the original search space with, on average, 12 * 3/2 = 18 comparisons.
Imagine if you split the search space N times, no middles. Then you could just compare the value.
The classical canonical Comp Sci algorithms are effectively "designed" for CPUs with no parallelism (whether across multiple cores, via hyper-threading, or "just" SIMD-style instructions), and where all memory accesses take the same amount of time (so no concept of L1/L2/L3/etc. caches of varying latencies). And all working on general/random data.

As soon as you move away from either (or both) of these assumptions, there are likely to be many tweaks you can make to get better performance.

What the classical algorithms do offer is a very good starting point for developing a more optimal/efficient solution once you know more about the specific shape of the data or the quirks/features of a specific CPU.

When you get to the pointy end of optimising things, you generally end up looking at how the data is stored and accessed in memory, and whether any changes you make to improve this hurt things further down the line. In a job many years ago, I remember someone who spent way too long optimising a specific part of some code, only to find that the overall application ran slower because the optimisations meant that a lot of information needed later had been evicted from the cache.

(This is probably just another way of stating Rob Pike's 5th rule of programming, which was itself a restatement of something by Fred Brooks in _The Mythical Man Month_. Ref: https://www.cs.unc.edu/~stotts/COMP590-059-f24/robsrules.html)
I thought this would be about how you can beat binary search in the 'Guess Who?' game. There's a cool math paper about it [0] and an approachable video by the author [1].

[0] https://arxiv.org/abs/1509.03327

[1] https://www.youtube.com/watch?v=_3RNB8eOSx0
You can't beat binary search in Guess Who. From the abstract:

>> Instead, the optimal strategy for the player who trails is to make certain bold plays in an attempt catch up.

The reason that's optimal, if you're losing, is that you assume that your opponent, who isn't losing, is going to use binary search. They're going to use binary search because it's the optimal way to find the secret.

Since you're behind, if you also use binary search, both players will progress toward the goal at the same rate, and you'll lose.

Trying to get lucky means that you intentionally play badly in order to get more victories. You're redistributing guesses taken between games in a negative-sum manner: you take more total guesses (because your search strategy is inferior to binary search), but they are unevenly distributed across your games, and in the relatively few games where you perform well above expectation, you can score a victory.
You're mixing two different objectives the paper presents. You can't beat binary search when the objective is to minimise the expected number of turns *in a single-player setting*.

However, in a two-player setting, using the strategies presented in the paper, you will beat an adversary that uses binary search in more than 50% of the games played.

Here's another visual demonstration: https://www.youtube.com/watch?v=zmvn4dnq82U
What do you think you're saying that I didn't already say?

> in a two-player setting, using the strategies presented in the paper, you will beat an adversary that uses binary search in more than 50% of the games played.

This is technically true. But 50 percentage points of your "more than 50%" of games played are games where you exclusively use binary search. For the remainder, you're redistributing luck around between potential games in a way that is negative-sum, exactly like I just said.
> But 50 percentage points of your "more than 50%" of games played are games where you exclusively use binary search

Although I think I get your point, saying "You can't beat binary search in Guess Who" is misleading, considering you would probably yourself describe the optimal strategy as "play binary search when ahead; when behind, don't".

> Trying to get lucky means that you intentionally play badly in order to get more victories

That's quite an uncommon definition of good and bad.
If you know about the distribution of keys you can do even better by factoring that knowledge into where you split.
…for 16-bit integers, and it’s still a binary search with the same asymptotic complexity, just a constant-factor speedup.
Some of the plots would have been much more helpful if, instead of absolute values in seconds, the y-axis were the multiplier w.r.t. binary search (and eyeballing suggests a relatively constant multiplier).

Obviously this isn't changing the big-O complexity, but in the "real world" it's still nice to see a 2-4x speedup.
If you are storing 16-bit integers, wouldn't an 8kB bitmap be even faster?
The library the author is talking about selects between bitmap and array dynamically, depending on density.

https://roaringbitmap.org/
The range is 1..4096, so 4096 bits = a 512-byte bitmap would suffice.

That is, if you're only ever going to test for membership in the set.
If you need metadata, you could store it in a packed array and use the population count of the bit-vector before the lookup bit as an index into it.
For each word of bits, store the accumulated population count of the words before it to speed up lookup.
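A sketch of that layout for the full 16-bit case: 8 KiB of bits plus a per-word running popcount (illustrative names; uses the GCC/Clang `__builtin_popcountll`):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint64_t bits[1024];   /* 65,536 bits = 8 KiB membership bitmap */
        uint32_t rank[1024];   /* popcount of all words before word i */
    } rank_bitmap;

    void rank_bitmap_build(rank_bitmap *b) {
        uint32_t acc = 0;
        for (int i = 0; i < 1024; i++) {
            b->rank[i] = acc;
            acc += (uint32_t)__builtin_popcountll(b->bits[i]);
        }
    }

    bool rank_bitmap_contains(const rank_bitmap *b, uint16_t pos) {
        return (b->bits[pos >> 6] >> (pos & 63)) & 1;
    }

    /* index of pos's entry in the packed metadata array (valid only if present) */
    uint32_t rank_bitmap_index(const rank_bitmap *b, uint16_t pos) {
        uint64_t below = b->bits[pos >> 6] & ((1ULL << (pos & 63)) - 1);
        return b->rank[pos >> 6] + (uint32_t)__builtin_popcountll(below);
    }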
Modern CPUs are memory-bound, so I don't think SIMD would help much over using 64-bit words. For 4096 bits / 64, that would be 64 additional bytes.
On optimizing binary search: https://en.algorithmica.org/hpc/data-structures/binary-search/
I once had a need for binary search in memory-mapped files, and I experimented with the Eytzinger layout (which I learned from https://bannalia.blogspot.com/2015/06/cache-friendly-binary-search.html). It turned out to be slower than plain binary search, I think because the keys I was looking up were often clumped together, so plain binary search played quite well with the cache anyway.
I always wondered if we could get any faster than O(log n). Glad we're making progress!
The title is slightly misleading. I mean, yes, naive binary search might have a larger constant, but the algorithm is still O(log(n)). This is still a "divide and conquer" style algorithm, just with a bunch of CPU-specific optimizations. Also, this works well with simple data, like integers; with more complex objects (custom comparators) it matters less.
The complexity of binary search in terms of "search" (comparison) operations is exactly log_2(n)+1, not just O(n). This algorithm just uses modern and *current* processor architecture artifacts to "improve" it on arrays of up to 4096 elements.

So not exactly "n" as in O(n).

Also: only for 16-bit integers.
> The complexity of binary search in terms of "search" (comparison) operations is exactly log_2(n)+1, not just O(n)

> So not exactly "n" as in O(n).

For large enough inputs, the algorithm with better big-O complexity will eventually win (at least in the worst cases). Yes, sometimes it never happens in practice when the constants are too large. But, say, 100 * n will eventually beat 5 * n * log(n) for large enough n. Some advanced algorithms use sub-algorithms with worse big-O complexity but smaller constants for small enough sub-problems to improve performance. But that's more of an optimization detail than a completely different algorithm.

> This algorithm just uses modern and current processor architecture artifacts to "improve" it on arrays of up to 4096

Yes, that's my point. It's basically "I made binary search for integers X times faster on some specific CPUs". "Beating binary search" is somewhat misleading; it's more like "micro-optimizing binary search".
> The title is slightly misleading, I mean yes, naive binary search might have larger constant but the algorithm is still O(log(n)).

I think the title is not misleading, since big-O notation is only supposed to give a rough estimate of the performance of an algorithm.

(I agree, though, that binary search is already extremely fast, so making it twice as fast won't move the needle for the vast majority of applications, where the speed bottleneck is elsewhere. Even infinite speed, i.e. instant sorted search, would likely not be noticeable for most software.)
For me it's slightly misleading because it's almost like saying "I wrote a faster quicksort implementation, so it beats quicksort!". In this case the fundamental binary-search idea of "divide and conquer" is still there; the article just applies micro-optimizations (which seem not to be very portable, and are less relevant/applicable for more complex data structures) to reduce the constant factor.

Yes, algorithmic complexity is theoretical and often ignores real-world constants, but it is usually useful when comparing algorithms for larger inputs, unless we are talking about "galactic algorithms" with insanely large constants.
The (AI generated?) image on this article is absolutely not helpful, and I think it's wrong based on how I read the article. Better not to have an image at all.
Seriously. It makes it seem like this is going to be a blog post either intended for elementary school students, or more likely for teachers on how to better explain some arithmetic concept to elementary school students.<p>It's absolutely bizarre. Images communicate meaning. Much better to have no image than to have an image that is completely misleading about the target audience or level of technical sophistication.
Agreed, it threw me off at first but the rest of the article was quite nice.
I remember I had a pedagogy class at uni, taught by psychology faculty, and I was messing with them by proposing a mock syllabus where we'd teach students binary search, then the advanced ones ternary search, and the very advanced ones Quaternary, with a big Q, as in the geological period. Joke's on me now, I suppose.
A beautiful algorithm.

Would there be any value in using SIMD to check the whole cache line that you fetch for exact matches during the narrowing phase, for an early out?
See also: Static search trees: 40x faster than binary search

- https://curiouscoding.nl/slides/p99-text/

- https://curiouscoding.nl/posts/static-search-tree/

- https://news.ycombinator.com/item?id=42562847 (656 points; 232 comments)
So is the SIMD the magic piece here, or is it the interpolation search? If the data is evenly distributed, that is pretty optimal for the interpolation search..
This was the entry-level project we did in a hardware optimization course I took maybe 15 years ago, using SIMD instructions. Lots of things can be naively optimized by unrolling loops like this. Compilers do some of this themselves.
You can improve interpolated search by monitoring progress and, if it's not converging fast enough, alternating with bisection steps (and, as is clear from the article, switching to linear/vector scanning when the range is small enough).

Often when an interpolated search is wrong, the interpolation will tend to nail you against one side or the other of the range, so the worst case is linear. By allowing only a finite number of failed probes (meaning probes that only move the same boundary as last time; an optimally working search will on average alternate hi/lo), you can maintain the log guarantee of bisection.
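A rough sketch of that guard (the threshold of two consecutive same-side probes is arbitrary, just for illustration):

    #include <stdbool.h>
    #include <stdint.h>

    bool guarded_interp_search(const uint16_t *arr, int32_t n, uint16_t key) {
        int32_t lo = 0, hi = n - 1, same_side = 0, last_side = 0;
        while (lo <= hi && key >= arr[lo] && key <= arr[hi]) {
            int32_t mid;
            if (same_side >= 2 || arr[hi] == arr[lo]) {
                mid = lo + (hi - lo) / 2;      /* bisection step restores the log bound */
                same_side = 0;
            } else {
                mid = lo + (int32_t)(((int64_t)(key - arr[lo]) * (hi - lo))
                                     / (arr[hi] - arr[lo]));
            }
            if (arr[mid] == key) return true;
            int32_t side = (arr[mid] < key) ? 1 : -1;   /* which boundary moves */
            same_side = (side == last_side) ? same_side + 1 : 1;
            last_side = side;
            if (side > 0) lo = mid + 1; else hi = mid - 1;
        }
        return false;
    }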
What about non-binary search?
And here I thought this was going to be related to quaternions
>"Virtually all processors today have data parallel instructions (sometimes called SIMD) that can check several values at once.<p>[...]<p><i>The binary search checks one value at a time. However, recent processors can load and check more than one value at once. They have excellent memory-level parllelism. This suggest that instead of a binary search, we might want to try a quaternary search...</i>"<p>First of all, brilliant observations! (Overall, a great article too!)<p>Yes, today's processors indeed have a parallelism which was unconceived of at the time the original Mathematicians, then-to-be Computer Scientists, conceived of Binary Search...<p>Now I myself wonder if these ideas might be extended to GPU's, that is, if the massively parallel execution capability of GPU's could be extended to search for data like Binary Search does, and what such an appropriately parallelized algorithm/data structure would look like... keep in mind, if we consider an updateable data structure, then that means that parts of it may need to be appropriately locked at the same time that multiple searches and updates are occurring simultaneously... so what data structure/algorithm would be the most efficient for a massively parallel scenario like that?<p>Anyway, great article and brilliant observations!
Previous related: https://news.ycombinator.com/item?id=47726340

40x Faster Binary Search: "This talk will first expose the lie that binary search takes O(lg n) time — it very much does not! Instead, we will see that binary search has only constant overhead compared to an oracle. Then, we will exploit everything that modern CPUs have to offer (SIMD, ILP, prefetching, efficient caching) in order to gain 40x increased throughput over the Rust standard library implementation."
Here's my version with a key spine improvement. I should really write this up...

#include <stdbool.h>
#include <stdint.h>
#include <arm_neon.h>

/* Author: aam@fastmail.fm
 *
 * Apple M4 Max (P-core) variant of simd_quad which uses a key spine
 * to great effect (blog post summary incoming!)
 */
bool simd_quad_m4(const uint16_t *carr, int32_t cardinality, uint16_t pos) {
  enum { gap = 64 };

  if (cardinality < gap) {
if (cardinality >= 32) {
// 32 <= n < 64: NEON-compare the first 32 as a single x4 load,
// sweep the remainder.
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8x4_t v = vld1q_u16_x4(carr);
uint16x8_t hit = vorrq_u16(
vorrq_u16(vceqq_u16(v.val[0], needle), vceqq_u16(v.val[1], needle)),
vorrq_u16(vceqq_u16(v.val[2], needle), vceqq_u16(v.val[3], needle)));
if (vmaxvq_u16(hit) != 0) return true;
for (int32_t j = 32; j < cardinality; j++) {
uint16_t x = carr[j];
if (x >= pos) return x == pos;
}
return false;
}
if (cardinality >= 16) {
// 16 <= n < 32: paired x2 load + sweep tail.
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8x2_t v = vld1q_u16_x2(carr);
uint16x8_t hit = vorrq_u16(vceqq_u16(v.val[0], needle),
vceqq_u16(v.val[1], needle));
if (vmaxvq_u16(hit) != 0) return true;
for (int32_t j = 16; j < cardinality; j++) {
uint16_t x = carr[j];
if (x >= pos) return x == pos;
}
return false;
}
if (cardinality >= 8) {
// 8 <= n < 16: single 128-bit compare + sweep tail.
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8_t v = vld1q_u16(carr);
if (vmaxvq_u16(vceqq_u16(v, needle)) != 0) return true;
for (int32_t j = 8; j < cardinality; j++) {
uint16_t x = carr[j];
if (x >= pos) return x == pos;
}
return false;
}
for (int32_t j = 0; j < cardinality; j++) {
uint16_t v = carr[j];
if (v >= pos) return v == pos;
}
return false;
}
int32_t num_blocks = cardinality / gap;
int32_t base = 0;
int32_t n = num_blocks;
while (n > 3) {
int32_t quarter = n >> 2;
int32_t k1 = carr[(base + quarter + 1) * gap - 1];
int32_t k2 = carr[(base + 2 * quarter + 1) * gap - 1];
int32_t k3 = carr[(base + 3 * quarter + 1) * gap - 1];
int32_t c1 = (k1 < pos);
int32_t c2 = (k2 < pos);
int32_t c3 = (k3 < pos);
base += (c1 + c2 + c3) * quarter;
n -= 3 * quarter;
}
while (n > 1) {
int32_t half = n >> 1;
base = (carr[(base + half + 1) * gap - 1] < pos) ? base + half : base;
n -= half;
}
int32_t lo = (carr[(base + 1) * gap - 1] < pos) ? base + 1 : base;
if (lo < num_blocks) {
const uint16_t *blk = carr + lo * gap;
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8x4_t a = vld1q_u16_x4(blk);
uint16x8x4_t b = vld1q_u16_x4(blk + 32);
uint16x8_t h0 = vorrq_u16(
vorrq_u16(vceqq_u16(a.val[0], needle), vceqq_u16(a.val[1], needle)),
vorrq_u16(vceqq_u16(a.val[2], needle), vceqq_u16(a.val[3], needle)));
uint16x8_t h1 = vorrq_u16(
vorrq_u16(vceqq_u16(b.val[0], needle), vceqq_u16(b.val[1], needle)),
vorrq_u16(vceqq_u16(b.val[2], needle), vceqq_u16(b.val[3], needle)));
return vmaxvq_u16(vorrq_u16(h0, h1)) != 0;
}
for (int32_t j = num_blocks * gap; j < cardinality; j++) {
uint16_t v = carr[j];
if (v >= pos) return v == pos;
}
  return false;
}

/*
* Spine variant, M4 edition.
*
* pack the interpolation probe keys into a dense contiguous region so the
* cold-cache pointer chase streams through consecutive cache lines:
*
* n=4096 -> 64 spine keys -> 128 B = 1 M4 cache line
* n=2048 -> 32 spine keys -> 64 B = half a line
* n=1024 -> 16 spine keys -> 32 B
*
* The entire interpolation phase for a max-sized Roaring container now
* lives in one cache line. The final SIMD block check still loads from
* carr.
*
* The num_blocks <= 3 fallback:
* with very few blocks the carr-based probes accidentally prime the final
* block's lines, which the spine path disrupts.
 */
bool simd_quad_m4_spine(const uint16_t *carr, const uint16_t *spine,
                        int32_t cardinality, uint16_t pos) {
  enum { gap = 64 };

  if (cardinality < gap) {
// Same fast paths as simd_quad_m4 -- spine is irrelevant here.
if (cardinality >= 32) {
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8x4_t v = vld1q_u16_x4(carr);
uint16x8_t hit = vorrq_u16(
vorrq_u16(vceqq_u16(v.val[0], needle), vceqq_u16(v.val[1], needle)),
vorrq_u16(vceqq_u16(v.val[2], needle), vceqq_u16(v.val[3], needle)));
if (vmaxvq_u16(hit) != 0) return true;
for (int32_t j = 32; j < cardinality; j++) {
uint16_t x = carr[j];
if (x >= pos) return x == pos;
}
return false;
}
if (cardinality >= 16) {
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8x2_t v = vld1q_u16_x2(carr);
uint16x8_t hit = vorrq_u16(vceqq_u16(v.val[0], needle),
vceqq_u16(v.val[1], needle));
if (vmaxvq_u16(hit) != 0) return true;
for (int32_t j = 16; j < cardinality; j++) {
uint16_t x = carr[j];
if (x >= pos) return x == pos;
}
return false;
}
if (cardinality >= 8) {
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8_t v = vld1q_u16(carr);
if (vmaxvq_u16(vceqq_u16(v, needle)) != 0) return true;
for (int32_t j = 8; j < cardinality; j++) {
uint16_t x = carr[j];
if (x >= pos) return x == pos;
}
return false;
}
for (int32_t j = 0; j < cardinality; j++) {
uint16_t v = carr[j];
if (v >= pos) return v == pos;
}
return false;
}
int32_t num_blocks = cardinality / gap;
if (num_blocks <= 3) {
return simd_quad_m4(carr, cardinality, pos);
}
int32_t base = 0;
int32_t n = num_blocks;
// Pull the whole spine into L1 up front. For n in [256, 4096] this is
// 1 line (128 B); for smaller n it is a partial line. Cheap on cold.
__builtin_prefetch(spine);
while (n > 3) {
int32_t quarter = n >> 2;
int32_t k1 = spine[base + quarter];
int32_t k2 = spine[base + 2 * quarter];
int32_t k3 = spine[base + 3 * quarter];
int32_t c1 = (k1 < pos);
int32_t c2 = (k2 < pos);
int32_t c3 = (k3 < pos);
base += (c1 + c2 + c3) * quarter;
n -= 3 * quarter;
}
while (n > 1) {
int32_t half = n >> 1;
base = (spine[base + half] < pos) ? base + half : base;
n -= half;
}
int32_t lo = (spine[base] < pos) ? base + 1 : base;
if (lo < num_blocks) {
const uint16_t *blk = carr + lo * gap;
uint16x8_t needle = vdupq_n_u16(pos);
uint16x8x4_t a = vld1q_u16_x4(blk);
uint16x8x4_t b = vld1q_u16_x4(blk + 32);
uint16x8_t h0 = vorrq_u16(
vorrq_u16(vceqq_u16(a.val[0], needle), vceqq_u16(a.val[1], needle)),
vorrq_u16(vceqq_u16(a.val[2], needle), vceqq_u16(a.val[3], needle)));
uint16x8_t h1 = vorrq_u16(
vorrq_u16(vceqq_u16(b.val[0], needle), vceqq_u16(b.val[1], needle)),
vorrq_u16(vceqq_u16(b.val[2], needle), vceqq_u16(b.val[3], needle)));
return vmaxvq_u16(vorrq_u16(h0, h1)) != 0;
}
for (int32_t j = num_blocks * gap; j < cardinality; j++) {
uint16_t v = carr[j];
if (v >= pos) return v == pos;
}
  return false;
}

// Build the spine for a given carr. Caller allocates cardinality/64 u16s.
void simd_quad_m4_build_spine(const uint16_t *carr, int32_t cardinality,
                              uint16_t *spine) {
  enum { gap = 64 };
  int32_t num_blocks = cardinality / gap;
  for (int32_t i = 0; i < num_blocks; i++) {
    spine[i] = carr[(i + 1) * gap - 1];
}
}
Will I get a job if I say I can beat binary search?
Since binary search is already very fast with its O(log n) time complexity: are there any real world applications which could practically benefit from this improvement?
I guess it matters if you have to do the lookup in a tight loop. If you only do it occasionally, I don't think it's worth it, especially for complex objects with custom comparators. The algorithm is still O(log(n)), just a more advanced "divide and conquer" with a smaller constant.
I would expect the standard library of various languages to provide an optimised implementation such as this. Then everyone downstream benefits, and benefits again from future improvements when compiled with a newer version of the language / executed under a newer version of the runtime.

You see this in Rust, where they replaced the hash tables many years ago, the channel implementation a couple of years ago, and most recently the sort implementations for both stable and unstable sort. I expect other languages/runtimes do similar things over time as CPUs change and new approaches are discovered.
This is a drop-in improvement for essentially any binary search over 16-bit integer members.