### Faster than the speed of binary search

[Updated 2006-10-28, new text and code at the end.]

Everybody (at least everybody competent enough to be worth mentioning) knows how to search a sorted array with the binary search algorithm. But everybody (or almost everybody) thinks that this is the fastest possible approach to searching. Is everybody (or almost everybody) correct on that point? It depends. In general, the answer is yes - binary search is fastest. But that only holds if we know nothing about the data stored in the array - in other words, if we pretend that it is random (but sorted, of course).

Want an example? Take an array [1..99] of byte. Fill it with numbers in the range 1..100, but leave one (random) number out. Use each number only once. Sort it. You'll get an array where the first few numbers sit at the index equal to their value (a[i] = i) and the rest are shifted one place (a[i] = i + 1). To find a value X in such an array, you don't need a loop at all - X can only be stored at index X or X - 1.

Are there more general guidelines to speeding up a binary search? Yes, and it's quite simple - we only have to guess the position of the value we're searching for better than with the "it's somewhere in the middle" approach. In a way, that's exactly what we did in the example above. This approach is very simple to use if the data in the array is distributed uniformly. In that case, we can take the values at the low and high ends of the interval (not the indices, as in binary search, but the values stored at those indices) and use linear interpolation to guess a probable position of the value.

Something similar happened to me last week. I was solving a DVB-related problem and built an array of video fragments (video PES, if you know anything about DVB internals) in a video file (actually, I only stored the file offset of each fragment). Later, I needed to find (many times) the fragment closest to some file offset (a different offset every time). Of course, I could have used a simple binary search, but it was such a nice opportunity to try out a better approach and run some tests ... For test data I used a small, 40 MB video file.
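The toy example above can be sketched in Python (a sketch of mine, not the post's Delphi code; the array layout and lookup rule follow directly from the construction described):

```python
import random

def make_array():
    """99 slots filled with the numbers 1..100, one (random) number
    left out; the result is already sorted by construction."""
    missing = random.randint(1, 100)
    return [n for n in range(1, 101) if n != missing]

def find(a, value):
    """Constant-time lookup: with 0-based indexing, `value` can only
    sit at index value-1 (no number missing before it) or value-2
    (one number missing before it). Returns the index, or -1 if
    `value` is the number that was left out."""
    for i in (value - 1, value - 2):
        if 0 <= i < len(a) and a[i] == value:
            return i
    return -1
```

No loop and no bisection - knowing the structure of the data replaces the search entirely.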
There were 51737 entries in the array and the code executed 3128 searches. With standard bisection, the code made 15.45 trips through the bisection loop per search (on average). With interpolated search, this number dropped to 4.13 rounds per search. That's better by a factor of 3.7! And in real life the file will be some 50x larger - around 2 GB - and the difference will be even bigger. The rounds-per-search distribution for both approaches:
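For reference, the standard bisection those round counts compare against might look like this in Python (my sketch, not the original Delphi code, with a counter for trips through the loop to match the measurement above):

```python
def binary_search(a, value):
    """Standard bisection over a sorted array.
    Returns (found, index, rounds), where `rounds` counts trips
    through the bisection loop."""
    low, high, rounds = 0, len(a) - 1, 0
    while low <= high:
        rounds += 1
        mid = (low + high) // 2  # always "somewhere in the middle"
        if a[mid] == value:
            return True, mid, rounds
        elif a[mid] < value:
            low = mid + 1
        else:
            high = mid - 1
    return False, low, rounds
```

For an array of ~51737 entries this needs around log2(51737) ≈ 15.7 rounds per search, which matches the measured 15.45.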
We can see that the standard binary search needed 16 rounds for the majority of the searches, and almost no searches completed in fewer than 15 rounds. On the other hand, the interpolated binary search shows an almost Gaussian distribution from 2 to 6 rounds. But as the code is more complicated (see below), the total speedup was not that impressive - only somewhere around 1.6 times. Still, it was a personal satisfaction to see the theoretically better code run faster in reality, too :)

function TGpTSPIDIndex.Find(transportStreamOffset: int64; var index: integer): boolean;
[Updated 2006-10-28] While thinking about the comment posted by the anonymous reader, I looked at the code from a new angle and suddenly realized that I'm an idiot. The 'while' condition in the bisection loop in my version should not differ from the condition in the standard binary search! When I put this right, I could suddenly remove all the extra tests at the beginning, as they were no longer needed. The new code (tested to work the same as the old code and as the standard binary search - at least on my data) is posted below. A word of warning: all this works only if there are no repeated elements in the sorted array! If this condition is not satisfied, you'll soon get a 'division by 0' exception in the calculation of the weighted midpoint.

function TGpTSPIDIndex.Find(transportStreamOffset: int64; var index: integer): boolean;
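The weighted-midpoint search described above can be sketched in Python as follows (a sketch of the general technique under my own names, not the original TGpTSPIDIndex code; it assumes a sorted array with no repeated elements, as warned above):

```python
def interpolated_search(a, value):
    """Bisection with a weighted midpoint: linearly interpolate
    between the values at both ends of the current interval to guess
    the position. `a` must be sorted with no repeated elements,
    otherwise a[high] - a[low] can be 0 for low < high and the
    midpoint calculation divides by zero.
    Returns (found, index)."""
    low, high = 0, len(a) - 1
    while low <= high:
        if value < a[low] or value > a[high]:
            return False, low           # value cannot be in this interval
        if a[low] == a[high]:
            return True, low            # one element left; it must match
        # weighted midpoint instead of (low + high) // 2;
        # the guard above keeps the divisor non-zero and mid in range
        mid = low + (value - a[low]) * (high - low) // (a[high] - a[low])
        if a[mid] == value:
            return True, mid
        elif a[mid] < value:
            low = mid + 1
        else:
            high = mid - 1
    return False, low
```

On uniformly distributed keys (such as file offsets of similarly sized fragments) the first guess already lands very close to the target, which is where the drop from 15.45 to 4.13 rounds per search comes from.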