Improving cache locality of binary search by doing local linear search?


Binary search of a sorted array may have poor cache locality because of its scattered memory accesses, while linear search is slow on a large array. Is it possible to design a hybrid algorithm? For example, divide a sorted array of length 100 into 10 chunks, each of length 10. Within a chunk I do a simple linear search, but at the "global" level I do a binary search over chunks in the conventional sense: if the searched value is in neither the 1st chunk nor the 10th chunk, I look at the middle chunk, i.e. the 5th chunk (because 5 = floor((1 + 10)/2)). If the 5th chunk doesn't contain the searched value, I look at the 3rd chunk or the 7th chunk, depending on whether the searched value is smaller than or greater than the numbers in the 5th chunk.
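
For concreteness, here is one way the idea could look in C++ (a minimal sketch, assuming a sorted `std::vector<int>` and a fixed chunk size of 10; `hybrid_search` and `CHUNK` are names made up for illustration, not an existing library routine):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Assumed tuning parameter: number of elements scanned linearly per chunk.
constexpr std::size_t CHUNK = 10;

// Binary search over chunks, linear scan within the probed chunk.
// Returns the index of `key` in the sorted vector `a`, or -1 if absent.
long long hybrid_search(const std::vector<int>& a, int key) {
    if (a.empty()) return -1;
    std::size_t num_chunks = (a.size() + CHUNK - 1) / CHUNK;
    std::size_t lo = 0, hi = num_chunks;        // half-open range of chunk indices
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;   // middle chunk
        std::size_t begin = mid * CHUNK;
        std::size_t end = std::min(begin + CHUNK, a.size());
        if (key < a[begin]) {
            hi = mid;                           // key lies before this chunk
        } else if (key > a[end - 1]) {
            lo = mid + 1;                       // key lies after this chunk
        } else {
            // Key can only be inside this chunk: do a cache-friendly linear scan.
            for (std::size_t i = begin; i < end; ++i)
                if (a[i] == key) return static_cast<long long>(i);
            return -1;
        }
    }
    return -1;
}
```

In practice one would presumably size the chunk to match the hardware rather than picking 10, e.g. one cache line (64 bytes holds 16 `int`s), so that the linear scan touches memory that is already loaded.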

Have people considered this kind of algorithm? Is it worth the implementation effort?
