Performance comparison: linear search vs binary search

    While working on the merge sort implementation promised in the previous article, I realized that I'd like to use one neat little trick which is worth its own post. It is a simple strategy for sorting and other comparison-based tasks that works wonderfully when the input data is small enough.

    Suppose that we have a very small array and we want to sort it as fast as possible. Applying some fancy O(N log N) algorithm is not a good idea here: although it has optimal asymptotic performance, its logic is too complicated to outperform simple bubble-sort-like algorithms, which take O(N^2) time instead. That's why every well-optimized sorting routine based on quicksort (e.g. std::sort) or mergesort falls back to a simple quadratic algorithm for sufficiently small subarrays, e.g. N <= 32.
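    To make the cutoff idea concrete, here is a minimal hypothetical sketch of such a hybrid (the threshold of 32, the insertion-sort fallback, and the use of std::sort for large inputs are illustrative assumptions, not the article's code):

```cpp
#include <algorithm>
#include <cstddef>

// Insertion sort: a simple quadratic algorithm that is very fast for tiny arrays.
template <typename T>
void insertion_sort(T* a, std::size_t n) {
    for (std::size_t i = 1; i < n; ++i) {
        T key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) {  // shift larger elements right
            a[j] = a[j - 1];
            --j;
        }
        a[j] = key;
    }
}

// Hybrid entry point: below the cutoff, the quadratic sort wins.
template <typename T>
void hybrid_sort(T* a, std::size_t n) {
    const std::size_t kCutoff = 32;  // assumed threshold for illustration
    if (n <= kCutoff)
        insertion_sort(a, n);
    else
        std::sort(a, a + n);  // stand-in for the full O(N log N) algorithm
}
```

    Real implementations apply the same cutoff inside the recursion, so every small subarray produced by quicksort or mergesort benefits from it.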

    What exactly should we strive for in an algorithm that is efficient for small N? Here is a list of things to look for:

    1. Avoid branches whenever possible: unpredictable ones are very slow.
    2. Reduce data dependencies: this allows full utilization of the processing units in the CPU pipeline.
    3. Prefer simple data access and manipulation patterns: this makes it possible to vectorize the algorithm.
    4. Avoid complicated algorithms: they almost always fail on one of the previous points, and they sometimes do too much work for small inputs.
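    As a tiny illustration of the first point, a compare-exchange built from std::min/std::max typically compiles to conditional moves rather than conditional jumps, and fixed sequences of such compare-exchanges form sorting networks with no data-dependent branching at all (a hypothetical sketch, not code from the article):

```cpp
#include <algorithm>

// Branchless compare-exchange: afterwards a <= b. On integer arguments,
// std::min/std::max usually compile to cmov instructions, so there is no
// branch for the predictor to guess.
inline void compare_exchange(int& a, int& b) {
    int lo = std::min(a, b);
    int hi = std::max(a, b);
    a = lo;
    b = hi;
}

// 3-element sorting network: the same three compare-exchanges are executed
// regardless of the input, which also makes the access pattern trivially simple.
inline void sort3(int& x, int& y, int& z) {
    compare_exchange(x, y);
    compare_exchange(y, z);
    compare_exchange(x, y);
}
```

    The same structure scales to small fixed sizes (N = 4, 5, ...) with known-optimal networks, which is one way the checklist above can be satisfied in practice.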

    I decided to start by investigating a simpler problem, the one solved by std::lower_bound: given a sorted array of elements and a key, find the index of the first array element greater than or equal to the key. That investigation soon developed into a full-length standalone article.
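    For a flavor of what a branch-free take on this problem can look like, here is a hypothetical sketch (not the article's implementation): the result of each comparison steers a pointer update that compilers typically lower to a conditional move instead of an unpredictable branch.

```cpp
#include <cstddef>

// Branchless binary search: returns the index of the first element >= key,
// or n if no such element exists. `a` must be sorted ascending.
std::size_t lower_bound_branchless(const int* a, std::size_t n, int key) {
    const int* base = a;
    std::size_t len = n;
    while (len > 1) {
        std::size_t half = len / 2;
        // Conditional pointer advance: usually a cmov, not a branch.
        base += (base[half - 1] < key) ? half : 0;
        len -= half;
    }
    // One final comparison decides between the candidate and the slot after it.
    return static_cast<std::size_t>(base - a) + ((n > 0 && *base < key) ? 1 : 0);
}
```

    Note that the loop always runs a fixed number of iterations for a given n, so the only data-dependent work is the comparison feeding the pointer update.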

