Java Performance: Sorted vs Unsorted Array Processing

Tags: arraylist, java, performance

The title refers to the question "Why is it faster to process a sorted array than an unsorted array?"

Is this a branch prediction effect, too? Beware: here, processing the sorted array is the slower case!

Consider the following code:

private static final int LIST_LENGTH = 1000 * 1000;
private static final long SLOW_ITERATION_MILLIS = 1000L * 10L;

@Test
public void testBinarySearch() {
    Random r = new Random(0);
    List<Double> list = new ArrayList<>(LIST_LENGTH);
    for (int i = 0; i < LIST_LENGTH; i++) {
        list.add(r.nextDouble());
    }
    //Collections.sort(list);
    // remove possible artifacts due to the sorting call
    // and rebuild the list from scratch:
    list = new ArrayList<>(list);

    int nIterations = 0;
    long startTime = System.currentTimeMillis();
    do {
        int index = r.nextInt(LIST_LENGTH);
        assertEquals(index, list.indexOf(list.get(index)));
        nIterations++;
    } while (System.currentTimeMillis() < startTime + SLOW_ITERATION_MILLIS);
    long duration = System.currentTimeMillis() - startTime;
    double slowFindsPerSec = (double) nIterations / duration * 1000;
    System.out.println(slowFindsPerSec);

    ...
}

This prints out a value of around 720 on my machine.

Now if I activate the Collections.sort call, that value drops down to 142. Why?!?

The results are consistent: they don't change if I increase the number of iterations or the measurement time.

The Java version is 1.8.0_71 (Oracle VM, 64-bit), running under Windows 10, as a JUnit test in Eclipse Mars.

UPDATE

This seems to be related to contiguous memory access (Double objects accessed in sequential order vs. in random order). The effect starts to vanish for me at list lengths of around 10k and below.

Thanks to assylias for providing the results:

Benchmark                     Mode  Cnt  Score   Error  Units
SO35018999.shuffled           avgt   10  8.895 ± 1.534  ms/op
SO35018999.sorted             avgt   10  8.093 ± 3.093  ms/op
SO35018999.sorted_contiguous  avgt   10  1.665 ± 0.397  ms/op
SO35018999.unsorted           avgt   10  2.700 ± 0.302  ms/op
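
assylias's benchmark source is not included here, but judging from the name, the sorted_contiguous case presumably rebuilds the list after sorting so that the Double objects are allocated in list order again. A minimal sketch of that idea (class and method names are my own, not from the benchmark):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class SortedContiguous {
    // Sort first, then copy every value into a freshly boxed Double so that
    // allocation order matches list order again, restoring sequential memory
    // access for indexOf scans.
    static List<Double> sortedContiguous(List<Double> source) {
        List<Double> sorted = new ArrayList<>(source);
        Collections.sort(sorted);
        List<Double> contiguous = new ArrayList<>(sorted.size());
        for (Double d : sorted) {
            // new Double(...) forces a fresh allocation instead of reusing the
            // original (now scattered) objects; it is deprecated since Java 9,
            // but fine on the Java 8 build used in the question.
            contiguous.add(new Double(d.doubleValue()));
        }
        return contiguous;
    }
}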

Best Answer

It looks like a caching/prefetching effect.

The clue is that you compare Doubles (objects), not doubles (primitives). When objects are allocated in one thread, they are typically laid out sequentially in memory. So when indexOf scans the list, it walks through sequential memory addresses, which plays well with the CPU's cache-prefetching heuristics.

But after you sort the list, you still have to do the same number of memory lookups on average, only now the accesses happen in random order: sorting rearranges only the references, so consecutive list slots point to Double objects scattered across the heap.
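
The Double-versus-double point can be checked directly: a primitive double[] stores the 8-byte values themselves, so a linear scan touches the same contiguous memory whether or not the array has been sorted, and one would expect the sorted/unsorted gap to disappear. Below is a self-contained sketch of that check (my own code, not part of the answer):

import java.util.Arrays;
import java.util.Random;

public class PrimitiveScanDemo {

    public static void main(String[] args) {
        Random r = new Random(0);
        double[] values = new double[1_000_000];
        for (int i = 0; i < values.length; i++) {
            values[i] = r.nextDouble();
        }

        // The array holds the values directly (no references to chase), so the
        // scan is sequential memory access in both cases.
        System.out.println("unsorted lookups/s: " + lookupsPerSecond(values, r));
        Arrays.sort(values);
        System.out.println("sorted lookups/s:   " + lookupsPerSecond(values, r));
    }

    // Linear search, analogous to List.indexOf but without any pointer chasing.
    static int indexOf(double[] a, double key) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == key) {
                return i;
            }
        }
        return -1;
    }

    static double lookupsPerSecond(double[] a, Random r) {
        long start = System.currentTimeMillis();
        int lookups = 0;
        do {
            int index = r.nextInt(a.length);
            if (indexOf(a, a[index]) != index) {
                throw new AssertionError("duplicate value or lookup bug");
            }
            lookups++;
        } while (System.currentTimeMillis() - start < 5_000L);
        return lookups * 1000.0 / (System.currentTimeMillis() - start);
    }
}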

UPDATE

Here is a benchmark showing that the allocation order of the objects is what matters:

Benchmark            (generator)  (length)  (postprocess)  Mode  Cnt  Score   Error  Units
ListIndexOf.indexOf       random   1000000           none  avgt   10  1.243 ± 0.031  ms/op
ListIndexOf.indexOf       random   1000000           sort  avgt   10  6.496 ± 0.456  ms/op
ListIndexOf.indexOf       random   1000000        shuffle  avgt   10  6.485 ± 0.412  ms/op
ListIndexOf.indexOf   sequential   1000000           none  avgt   10  1.249 ± 0.053  ms/op
ListIndexOf.indexOf   sequential   1000000           sort  avgt   10  1.247 ± 0.037  ms/op
ListIndexOf.indexOf   sequential   1000000        shuffle  avgt   10  6.579 ± 0.448  ms/op
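
The benchmark source is not reproduced here. The following is only a sketch of how a JMH benchmark with these parameters could be wired up: the class name ListIndexOf and the parameter names come from the output above, while the parameter semantics are my assumptions ("sequential" = values in allocation order, so sorting is a no-op; "random" = random values; postprocess = none/sort/shuffle).

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
public class ListIndexOf {

    @Param({"random", "sequential"})
    public String generator;

    @Param({"1000000"})
    public int length;

    @Param({"none", "sort", "shuffle"})
    public String postprocess;

    List<Double> list;
    Random random;

    @Setup
    public void setup() {
        random = new Random(0);
        list = new ArrayList<>(length);
        for (int i = 0; i < length; i++) {
            // "sequential": values follow allocation order, so sorting keeps it;
            // "random": random values, so sorting scrambles the reference order.
            list.add("sequential".equals(generator) ? (double) i : random.nextDouble());
        }
        if ("sort".equals(postprocess)) {
            Collections.sort(list);
        } else if ("shuffle".equals(postprocess)) {
            Collections.shuffle(list, random);
        }
        // "none": keep the list in allocation order.
    }

    @Benchmark
    public int indexOf() {
        // Linear scan for a randomly chosen element; the cost is dominated by
        // chasing one Double reference per visited list slot.
        return list.indexOf(list.get(random.nextInt(length)));
    }
}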