Java 8 Stream Performance Benchmarks
When I read Angelika Langer's Java performance tutorial – How fast are the Java 8 streams? – I couldn't believe that for a specific operation they took about 15 times longer than for loops. Could stream performance really be that bad? I had to find out!
Coincidentally, I recently watched a cool talk about microbenchmarking Java code, and I decided to put to work what I learned there. So let's see whether streams really are that slow.
Overview
As usual, I will start with a dull prologue. This one will explain why you should be very careful with what I present here, how I produced the numbers, and how you can easily repeat and tweak the benchmark. If you don't care about any of this, jump right to Stream Performance.

But first, two quick pointers: all benchmark code is up on GitHub, and this Google spreadsheet contains the resulting data.
Prologue

Disclaimer

This post contains a lot of numbers, and numbers are deceitful. They seem all scientific and precise and stuff, and they lure us into focusing on their interrelation and interpretation. But we should always pay equal attention to how they came to be!
The numbers I'll present below were produced on my system with very specific test cases. It is easy to over-generalize them! I should also add that I have only two days' worth of experience with non-trivial benchmarking techniques (i.e. ones that are not based on looping and manual System.currentTimeMillis()).
Be very careful with incorporating the insights you gained here into your mental performance model. The devil hiding in the details is the JVM itself, and it is a deceitful beast. It is entirely possible that my benchmarks fell victim to optimizations that skewed the numbers.
System

- CPU: Intel(R) Core(TM) i7-4800MQ CPU @ 2.70GHz
- RAM: Samsung DDR3 16GB @ 1.60GHz (the tests ran entirely in RAM)
- OS: Ubuntu 15.04, kernel version 3.19.0-26-generic
- Java: 1.8.0_60
- JMH: 1.10.5
Benchmark

JMH

The benchmarks were created using the wonderful Java Microbenchmark Harness (JMH), which is developed and used by the JVM performance team itself. It's thoroughly documented, easy to set up and use, and the explanation via samples is awesome!

If you prefer a casual introduction, you might like Aleksey Shipilev's talk from Devoxx UK 2013.
Setup

To create somewhat reliable results, benchmarks are run individually and repeatedly. There is a separate run for each benchmark method that is made up of several forks, each running a number of warmup iterations before the actual measurement iterations.

I ran separate benchmarks with 50’000, 500’000, 5’000’000, 10’000’000, and 50’000’000 elements. Except for the last one, all had two forks, both consisting of five warmup and five measurement iterations, where each iteration was three seconds long. Parts of the last one were run in one fork, with two warmup and three measurement iterations, each 30 seconds long.
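In JMH, this setup is expressed with annotations. Here is a minimal sketch of such a configuration, using the values described above; the class and benchmark method are made up for illustration, the real annotations live in the repository:

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Fork(2)
@Warmup(iterations = 5, time = 3, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 3, timeUnit = TimeUnit.SECONDS)
@State(Scope.Benchmark)
public class SketchedBenchmark {

    private int[] intArray;

    @Setup
    public void createArray() {
        // ordered scenario: each int equals its position
        intArray = new int[500_000];
        for (int i = 0; i < intArray.length; i++)
            intArray[i] = i;
    }

    @Benchmark
    public int array_max_for() {
        int m = Integer.MIN_VALUE;
        for (int i = 0; i < intArray.length; i++)
            if (intArray[i] > m)
                m = intArray[i];
        return m;
    }
}
```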
Langer's article states that their arrays are populated with random integers. I compared this to the more pleasant case where each int in the array equals its position therein. The deviation between the two scenarios averaged 1.2%, with the largest difference being 5.4%.

Since creating millions of randomized integers takes considerable time, I opted to execute the majority of the benchmarks on the ordered sequences only, so unless otherwise noted the numbers pertain to this scenario.
Code

The benchmark code itself is available on GitHub. To run it, simply go to the command line, build the project, and execute the resulting jar:

```sh
mvn clean install
java -jar target/benchmarks.jar
```
Some easy tweaks:

- Adding a regular expression at the end of the execution call will only benchmark methods whose fully-qualified name matches that expression; e.g. to only run ControlStructuresBenchmark:

```sh
java -jar target/benchmarks.jar Control
```

- The annotations on AbstractIterationBenchmark govern how often and how long each benchmark is executed.
- The constant NUMBER_OF_ELEMENTS defines the length of the array/list that is being iterated over.
- Tweak CREATE_ELEMENTS_RANDOMLY to switch between an array of ordered or of random numbers (both scenarios are sketched below).
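A minimal sketch of these two scenarios, assuming the constant names from the list above (the helper method createIntArray is made up for illustration; the real code is in the repository):

```java
import java.util.Random;

class ElementCreation {

    static final int NUMBER_OF_ELEMENTS = 500_000;
    static final boolean CREATE_ELEMENTS_RANDOMLY = false;

    static int[] createIntArray() {
        int[] intArray = new int[NUMBER_OF_ELEMENTS];
        if (CREATE_ELEMENTS_RANDOMLY) {
            // the scenario from Langer's article: random integers
            Random random = new Random();
            for (int i = 0; i < intArray.length; i++)
                intArray[i] = random.nextInt();
        } else {
            // the ordered scenario: each int equals its position
            for (int i = 0; i < intArray.length; i++)
                intArray[i] = i;
        }
        return intArray;
    }
}
```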
Stream Performance

Repeating the Experiment

Let's start with the case that triggered me to write this post: finding the maximum value in an array of 500’000 random elements.
```java
int m = Integer.MIN_VALUE;
for (int i = 0; i < intArray.length; i++)
    if (intArray[i] > m)
        m = intArray[i];
```
First thing I noticed: my laptop performs much better than the machine used for the JAX article. This was to be expected, as it was described as "outdated hardware (dual core, no dynamic overclocking)", but it made me happy nevertheless since I paid enough for the damn thing. Instead of 0.36 ms, it only took 0.130 ms to loop through the array.

More interesting are the results for using a stream to find the maximum:
```java
// the article uses 'reduce', to which 'max' delegates
Arrays.stream(intArray).max();
```
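For reference, the Javadoc of IntStream.max states that it is equivalent to exactly that reduction, so the article's variant looks like this:

```java
// equivalent to the call above: 'max' is specified as 'reduce(Math::max)'
OptionalInt max = Arrays.stream(intArray).reduce(Math::max);
```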
Langer reports a runtime of 5.35 ms for this, which compared to the loop's 0.36 ms yields the reported slowdown of x15. I consistently measured about 0.560 ms, so I end up with a slowdown of "only" x4.5. Still a lot, though.
Next, the article compares iterating over lists against streaming them.
```java
// for better comparability with looping over the array,
// I do not use a for-each loop (unlike Langer's article);
// measurements show that this makes things a little faster
int m = Integer.MIN_VALUE;
for (int i = 0; i < intList.size(); i++)
    if (intList.get(i) > m)
        m = intList.get(i);
```

```java
intList.stream().max(Math::max);
```
The results are 6.55 ms for the for loop and 8.33 ms for the stream. My measurements are 0.700 ms and 3.272 ms. While this changes their relative performance considerably, it creates the same order:
| operation | Angelika Langer: time (ms) | Angelika Langer: slower | me: time (ms) | me: slower |
|---|---|---|---|---|
| array_max_for | 0.36 | – | 0.123 | – |
| array_max_stream | 5.35 | 14’861% | 0.599 | 487% |
| list_max_for | 6.55 | 22% | 0.700 | 17% |
| list_max_stream | 8.33 | 27% | 3.272 | 467% |
I ascribe the marked difference between iterations over arrays and lists to boxing; or rather, to the resulting indirection. The primitive array is packed with the values we need, but the list is backed by an array of Integers, i.e. references to the desired values that we must first resolve.
The considerable difference between Langer's and my series of relative changes (+14’861%, +22%, +27% vs. +487%, +17%, +467%) underlines her statement that "the performance model of streams is not a trivial one".
Bringing this part to a close, her article makes the following observation:

> We just compare two integers, which after JIT compilation is barely more than one assembly instruction. For this reason, our benchmarks illustrate the cost of element access – which need not necessarily be a typical situation. The performance figures change substantially if the functionality applied to each element in the sequence is CPU intensive. You will find that there is no measurable difference any more between for-loop and sequential stream if the functionality is heavily CPU bound.
So let's have a look at something other than just integer comparison.
Comparing Operations

I compared the following operations:
- max: finding the maximum value.
- sum: computing the sum of all values; aggregated in an int, ignoring overflows.
- arithmetic: to model a less simple numeric operation, I combined the values with a handful of bit shifts and multiplications (sketched below).
- string: to model a complex operation that creates new objects, I converted the elements to strings and XOR'ed them character by character (sketched below).
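To make the last two tangible, here is a sketch of what such operations could look like. These are illustrative stand-ins, not the exact implementations from my repository:

```java
// illustrative: combine the values with a handful of bit shifts and multiplications
int arithmeticOperation(int left, int right) {
    int result = left ^ right;
    result = (result << 2) * 5;
    return result ^ (result >>> 3);
}

// illustrative: convert the element to a string and XOR it character by character
int stringOperation(int left, int right) {
    int result = left;
    for (char c : Integer.toString(right).toCharArray())
        result ^= c;
    return result;
}
```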
These were the results (for 500’000 ordered elements; in milliseconds):
| | max: array | max: list | sum: array | sum: list | arithmetic: array | arithmetic: list | string: array | string: list |
|---|---|---|---|---|---|---|---|---|
| for | 0.123 | 0.700 | 0.186 | 0.714 | 4.405 | 4.099 | 49.533 | 49.943 |
| stream | 0.559 | 3.272 | 1.394 | 3.584 | 4.100 | 7.776 | 52.236 | 64.989 |
This underlines how cheap comparison really is; even addition takes a whopping 50% longer. We can also see how more complex operations bring looping and streaming closer together: the difference drops from almost 400% to 25%. Similarly, the difference between arrays and lists is reduced considerably. Apparently the arithmetic and string operations are CPU bound, so resolving the references had no negative impact.
(Don't ask me why, for the arithmetic operation, streaming the array's elements is faster than looping over them. I have been banging my head against that wall for a while.)

So let's fix the operation and have a look at the iteration mechanism.
Comparing Iteration Mechanisms
There are at least two important variables in assessing the performance of an iteration mechanism: its overhead and whether it causes boxing, which will hurt performance for memory bound operations. I decided to try to bypass boxing by executing a CPU bound operation. As we have seen above, the arithmetic operation fulfills this on my machine.
Iteration was implemented with straightforward for and for-each loops. For streams I made some additional experiments:
```java
@Benchmark
public int array_stream() {
    // implicitly unboxed
    return Arrays
            .stream(intArray)
            .reduce(0, this::arithmeticOperation);
}

@Benchmark
public int array_stream_boxed() {
    // explicitly boxed
    return Arrays
            .stream(intArray)
            .boxed()
            .reduce(0, this::arithmeticOperation);
}

@Benchmark
public int list_stream_unbox() {
    // naively unboxed
    return intList
            .stream()
            .mapToInt(Integer::intValue)
            .reduce(0, this::arithmeticOperation);
}

@Benchmark
public int list_stream() {
    // implicitly boxed
    return intList
            .stream()
            .reduce(0, this::arithmeticOperation);
}
```
Here, boxing and unboxing do not relate to how the data is stored (it's unboxed in the array and boxed in the list) but to how the values are processed by the stream.

Note that boxed converts the IntStream, a specialized implementation of Stream that only deals with primitive ints, to a Stream<Integer>, a stream over objects. This should have a negative impact on performance, but the extent depends on how well escape analysis works.

Since the list is generic (i.e. there is no specialized IntArrayList), it returns a Stream<Integer>. The benchmark method list_stream_unbox calls mapToInt, which returns an IntStream. This is a naive attempt to unbox the stream elements.
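For completeness, the for-each counterparts could look roughly like this (a sketch; the actual benchmark methods are on GitHub):

```java
@Benchmark
public int array_for_each() {
    // no boxing: iterates the primitive array directly
    int result = 0;
    for (int i : intArray)
        result = arithmeticOperation(result, i);
    return result;
}

@Benchmark
public int list_for_each() {
    // boxing: each element is an Integer that is unboxed on access
    int result = 0;
    for (int i : intList)
        result = arithmeticOperation(result, i);
    return result;
}
```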
The results for the arithmetic operation (in milliseconds):

| | array | list |
|---|---|---|
| for | 4.405 | 4.099 |
| for-each | 4.434 | 4.707 |
| stream (unboxed) | 4.100 | 4.518 |
| stream (boxed) | 7.694 | 7.776 |
Well, look at that! Apparently the naive unboxing does work (in this case). I have some vague notions why that might be the case, but nothing I am able to express succinctly (or correctly). Ideas, anyone?

(BTW, all this talk about boxing/unboxing and specialized implementations makes me ever more happy that Project Valhalla is advancing so well.)

The more concrete consequence of these tests is that for CPU bound operations, streaming seems to have no considerable performance costs. After fearing a considerable disadvantage, this is good to hear.
Comparing Number of Elements
In general, the results are pretty stable across runs with varying sequence lengths (from 50’000 to 50’000’000). To assess this, I examined the performance normalized to 1’000’000 elements across those runs.
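The normalization itself is just a linear scaling of each measured runtime (variable names made up for illustration):

```java
// scale a measured runtime to a common baseline of 1'000'000 elements
double normalizedMillis = measuredMillis * 1_000_000.0 / numberOfElements;
```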
But I was pretty astonished that performance does not automatically improve with longer sequences. My simple mind assumed that this would give the JVM the opportunity to apply more optimizations. Instead there are some notable cases where performance actually dropped:
From 500’000 to 50’000’000 elements:

| method | time |
|---|---|
| array_max_for | +44.3% |
| array_sum_for | +13.4% |
| list_max_for | +12.8% |
Interesting that these are the simplest iteration mechanisms and operations.

The winners are more complex iteration mechanisms over simple operations:
From 500’000 to 50’000’000 elements:

| method | time |
|---|---|
| array_sum_stream | -84.9% |
| list_max_stream | -13.5% |
| list_sum_stream | -7.0% |
This means that the table we have seen above for 500’000 elements looks a little different for 50’000’000 (normalized to 1’000’000 elements; in milliseconds):
| | max: array | max: list | sum: array | sum: list | arithmetic: array | arithmetic: list | string: array | string: list |
|---|---|---|---|---|---|---|---|---|
| for (500’000) | 0.246 | 1.400 | 0.372 | 1.428 | 8.810 | 8.199 | 99.066 | 98.650 |
| stream (500’000) | 1.118 | 6.544 | 2.788 | 7.168 | 8.200 | 15.552 | 104.472 | 129.978 |
| for (50’000’000) | 0.355 | 1.579 | 0.422 | 1.522 | 8.884 | 8.313 | 93.949 | 97.900 |
| stream (50’000’000) | 1.203 | 3.954 | 0.421 | 6.710 | 8.408 | 15.723 | 96.550 | 117.690 |
We can see that there is almost no change for the arithmetic and string operations. But things change for the simpler max and sum operations, where more elements brought the field closer together.
Reflection

All in all, I'd say that there were no big revelations. We have seen that palpable differences between loops and streams exist only with the simplest operations. It was a bit surprising, though, that the gap is closing when we come into the millions of elements. So there is little need to fear a considerable slowdown when using streams.

There are still some open questions, though. The most notable: what about parallel streams? Then I am curious to find out at which operation complexity I can see the change from iteration dependent (like sum and max) to iteration independent (like arithmetic) performance. I also wonder about the impact of hardware. Sure, it will change the numbers, but will there be qualitative differences as well?

Another takeaway for me is that microbenchmarking is not so hard. Or so I think until someone points out all my errors…
Published at DZone with permission of Nicolai Parlog, DZone MVB. See the original article here.