Java – Why Is Manually-Unrolled Loop Faster Than Original Loop?

java, jit, optimization, performance

Consider the following two snippets of code on an array of length 2:

boolean isOK(int i) {
    for (int j = 0; j < filters.length; ++j) {
        if (!filters[j].isOK(i)) {
            return false;
        }
    }
    return true;
}

and

boolean isOK(int i) {
    return filters[0].isOK(i) && filters[1].isOK(i);
}

I would assume that the performance of these two pieces should be similar after sufficient warm-up.
I've checked this using the JMH micro-benchmarking framework as described e.g. here and here, and observed that the second snippet is more than 10% faster.

Question: why hasn't Java optimized my first snippet using the basic loop unrolling technique?
In particular, I'd like to understand the following:

  1. I can easily produce code that is optimal for the case of 2 filters and still works for any other number of filters (imagine a simple builder; a sketch of one follows this list):
    return filters.length == 2 ? new FilterChain2(filters) : new FilterChain1(filters). Can the JITC do the same, and if not, why not?
  2. Can the JITC detect that 'filters.length == 2' is the most frequent case and produce code that is optimal for this case after some warm-up? This should be almost as good as the manually unrolled version.
  3. Can the JITC detect that a particular instance is used very frequently and then produce code for this specific instance (for which it knows that the number of filters is always 2)?
    Update: got an answer that the JITC works only on a class level. OK, got it.
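
For illustration, a minimal sketch of such a builder (hypothetical class names; unlike FilterChain1/FilterChain2 in the benchmark below, these take the filter array via the constructor):

static Filter buildChain(Filter[] filters) {
    // Pick a hand-specialized implementation for the common two-filter case,
    // fall back to the generic looping implementation otherwise.
    return filters.length == 2
            ? new TwoFilterChain(filters[0], filters[1])
            : new LoopingFilterChain(filters);
}

static final class TwoFilterChain implements Filter {
    private final Filter first, second;

    TwoFilterChain(Filter first, Filter second) {
        this.first = first;
        this.second = second;
    }

    @Override
    public boolean isOK(int i) {
        return first.isOK(i) && second.isOK(i);
    }
}

static final class LoopingFilterChain implements Filter {
    private final Filter[] filters;

    LoopingFilterChain(Filter[] filters) {
        this.filters = filters;
    }

    @Override
    public boolean isOK(int i) {
        for (Filter f : filters) {
            if (!f.isOK(i)) {
                return false;
            }
        }
        return true;
    }
}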

Ideally, I would like to receive an answer from someone with a deep understanding of how JITC works.

Benchmark run details:

  • Tried on the latest versions of Java 8 OpenJDK and Oracle HotSpot; the results are similar
  • Used Java flags: -Xmx4g -Xms4g -server -Xbatch -XX:CICompilerCount=2 (got similar results without the fancy flags as well)
  • By the way, I get a similar run-time ratio if I simply run it several billion times in a loop (not via JMH), i.e. the second snippet is always clearly faster

Typical benchmark output:

Benchmark                            (filterIndex)  Mode  Cnt   Score   Error  Units
LoopUnrollingBenchmark.runBenchmark              0  avgt  400  44.202 ± 0.224  ns/op
LoopUnrollingBenchmark.runBenchmark              1  avgt  400  38.347 ± 0.063  ns/op

(The first result row corresponds to the first snippet, the second row to the second.)

Complete benchmark code:

public class LoopUnrollingBenchmark {

    @State(Scope.Benchmark)
    public static class BenchmarkData {
        public Filter[] filters;
        @Param({"0", "1"})
        public int filterIndex;
        public int num;

        @Setup(Level.Invocation) // similar ratio with Level.Trial
        public void setUp() {
            filters = new Filter[]{new FilterChain1(), new FilterChain2()};
            num = new Random().nextInt();
        }
    }

    @Benchmark
    @Fork(warmups = 5, value = 20)
    @BenchmarkMode(Mode.AverageTime)
    @OutputTimeUnit(TimeUnit.NANOSECONDS)
    public int runBenchmark(BenchmarkData data) {
        Filter filter = data.filters[data.filterIndex];
        int sum = 0;
        int num = data.num;
        if (filter.isOK(num)) {
            ++sum;
        }
        if (filter.isOK(num + 1)) {
            ++sum;
        }
        if (filter.isOK(num - 1)) {
            ++sum;
        }
        if (filter.isOK(num * 2)) {
            ++sum;
        }
        if (filter.isOK(num * 3)) {
            ++sum;
        }
        if (filter.isOK(num * 5)) {
            ++sum;
        }
        return sum;
    }


    interface Filter {
        boolean isOK(int i);
    }

    static class Filter1 implements Filter {
        @Override
        public boolean isOK(int i) {
            return i % 3 == 1;
        }
    }

    static class Filter2 implements Filter {
        @Override
        public boolean isOK(int i) {
            return i % 7 == 3;
        }
    }

    static class FilterChain1 implements Filter {
        final Filter[] filters = createLeafFilters();

        @Override
        public boolean isOK(int i) {
            for (int j = 0; j < filters.length; ++j) {
                if (!filters[j].isOK(i)) {
                    return false;
                }
            }
            return true;
        }
    }

    static class FilterChain2 implements Filter {
        final Filter[] filters = createLeafFilters();

        @Override
        public boolean isOK(int i) {
            return filters[0].isOK(i) && filters[1].isOK(i);
        }
    }

    private static Filter[] createLeafFilters() {
        Filter[] filters = new Filter[2];
        filters[0] = new Filter1();
        filters[1] = new Filter2();
        return filters;
    }

    public static void main(String[] args) throws Exception {
        org.openjdk.jmh.Main.main(args);
    }
}

Best Answer

The loop presented likely falls into the "non-counted" category of loops: loops whose iteration count can be determined neither at compile time nor at run time. That is not only because of @Andreas' argument about the array size, but also because of the randomly conditional break (which was in your benchmark when I wrote this post).

State-of-the-art compilers do not aggressively optimize them, since unrolling a non-counted loop typically also requires duplicating the loop's exit condition, which improves run-time performance only if subsequent compiler optimizations can simplify the unrolled code. See this 2017 paper for details; it also proposes ways to unroll such loops.
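
To illustrate the distinction with a minimal example of my own (not taken from the paper), using the Filter interface from your benchmark:

// Counted loop: the trip count is a loop-invariant constant and there is no
// early exit, so this is a routine unrolling target for the JIT.
int sumFirst16(int[] a) {
    int sum = 0;
    for (int j = 0; j < 16; ++j) {
        sum += a[j];
    }
    return sum;
}

// Non-counted loop: the conditional return makes the number of iterations
// depend on the data, so unrolling it would also mean duplicating the exit check.
boolean allOK(Filter[] filters, int i) {
    for (int j = 0; j < filters.length; ++j) {
        if (!filters[j].isOK(i)) {
            return false; // early, data-dependent exit
        }
    }
    return true;
}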

From this it follows that your assumption that you performed a sort of "manual unrolling" of the loop does not hold. You consider transforming an iteration over an array with a conditional break into an &&-chained boolean expression to be a basic loop-unrolling technique; I'd consider it a rather special case and would be surprised to see a HotSpot optimizer perform such a complex refactoring on the fly. Here they discuss what it actually might do; perhaps that reference is interesting.

The following would reflect the mechanics of contemporary unrolling more closely, and is perhaps still nowhere near what the unrolled machine code would look like:

if (!filters[0].isOK(i)) {
    return false;
}
if (!filters[1].isOK(i)) {
    return false;
}
return true;

You conclude that because one piece of code runs faster than another, the loop didn't unroll. Even if it did, you could still see a runtime difference because you're comparing different implementations.

If you want more certainty, there's the JITWatch analyzer/visualizer of the actual JIT operations, including the machine code (github) (presentation slides). If there's something to see, I'd trust my own eyes over any opinion about what the JIT may or may not do in general, since every case has its specifics. Here they discuss the difficulty of making general statements about specific cases as far as the JIT is concerned and provide some interesting links.
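
A sketch of how you could capture a JITWatch-readable log straight from your JMH harness (the jvmArgsAppend values are the usual LogCompilation recipe; -XX:+PrintAssembly additionally requires the hsdis disassembler library to be installed):

// Hypothetical variation of the question's @Fork line; the forked JVM then
// writes a hotspot compilation log that JITWatch can open.
@Fork(value = 1, jvmArgsAppend = {
        "-XX:+UnlockDiagnosticVMOptions",
        "-XX:+TraceClassLoading",
        "-XX:+LogCompilation",
        "-XX:+PrintAssembly"
})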

Since your goal is minimum runtime, the a && b && c ... form is likely the most efficient one if you don't want to rely on loop unrolling happening, and at least more efficient than anything else presented so far. But you can't have it in a generic way. With functional composition from java.util.function there's considerable overhead again (each function is an object, and each call is a virtual method that needs dispatch). Perhaps in such a scenario it might make sense to subvert the language level and generate custom bytecode at runtime. On the other hand, && logic requires branching at the bytecode level as well and may be equivalent to if/return (which also can't be made generic without overhead).
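
For comparison, a sketch of such a functional composition using java.util.function.IntPredicate (my own illustration, not code from the question); it stays generic, but every test still goes through extra object indirection that the hard-coded && chain avoids:

import java.util.function.IntPredicate;

class PredicateChainSketch {
    // Compose any number of predicates into one with and(); this short-circuits
    // like &&, but each link is an object whose test() must be dispatched.
    static IntPredicate allOf(IntPredicate... parts) {
        IntPredicate result = i -> true;
        for (IntPredicate p : parts) {
            result = result.and(p);
        }
        return result;
    }

    public static void main(String[] args) {
        IntPredicate chain = allOf(i -> i % 3 == 1, i -> i % 7 == 3);
        System.out.println(chain.test(10)); // true: 10 % 3 == 1 and 10 % 7 == 3
    }
}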