After finishing the implementation of the mapping and conversion from Ecore to ProtoBuf, I wanted to benchmark its performance against the existing BinaryResourceImpl and XMIResourceImpl. So I started looking for articles and frameworks on Java performance benchmarking. The most interesting find was the article series “Robust Java benchmarking” at IBM developerWorks, which even includes a benchmarking framework implementing all the hints given in the articles. Running my benchmarks with this framework worked and returned sensible results, but two things bothered me: the long execution time of 2–3 minutes per benchmark run, and the fact that it requires three JARs containing a large amount of unused code. Besides, I don’t need nanosecond-accurate measurements or all the statistics the framework provides. Therefore, I looked for an alternative.

My main requirements for an alternative framework were:

  1. support for code warmup, so the JIT compiler has a chance to optimize the code
  2. time measurement with System.nanoTime() instead of System.currentTimeMillis()
  3. compatibility with Eclipse’s JUnit Plug-in Tests
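In essence, requirements 1 and 2 boil down to a measurement loop like the following. This is a minimal, stdlib-only sketch of the idea; the class and method names are mine, not taken from any of the frameworks discussed here:

```java
/**
 * Minimal benchmarking sketch: a few untimed warmup rounds for the JIT,
 * then timed rounds measured with System.nanoTime().
 */
public class MiniBench {

    /**
     * Runs the task for warmupRounds untimed iterations, then returns the
     * average time per iteration over timedRounds, in nanoseconds.
     */
    public static long averageNanos(Runnable task, int warmupRounds, int timedRounds) {
        for (int i = 0; i < warmupRounds; i++) {
            task.run();                     // warmup: give the JIT a chance to compile/optimize
        }
        long start = System.nanoTime();     // monotonic, nanosecond-resolution timer
        for (int i = 0; i < timedRounds; i++) {
            task.run();
        }
        long elapsed = System.nanoTime() - start;
        return elapsed / timedRounds;
    }

    public static void main(String[] args) {
        // Dummy workload standing in for resource serialization.
        long avg = averageNanos(() -> {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 1000; i++) sb.append(i);
        }, 100, 1000);
        System.out.println("avg ns/round: " + avg);
    }
}
```

Real frameworks add more on top of this (dead-code elimination guards, statistical analysis, forked JVMs), but this loop is the core that all the candidates share.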

Here is the list of candidates I assessed and which requirements they met:

I tried to implement the benchmarks with JUnitBenchmarks and Caliper. Although Caliper seemed quite promising, it turned out not to work well on Windows or with JUnit Plug-in Tests, because it spawns new JVM processes for benchmarking. I didn’t have these problems with JUnitBenchmarks. Its downside is that it uses System.currentTimeMillis() and displays results only with a resolution of 0.01 seconds. (There is actually an ongoing discussion on whether it should use System.nanoTime() instead.) In the end, I settled on JUnitBenchmarks and was able to reproduce the results I had obtained with the “Robust Java benchmarking” framework, with a much lower overall execution time.
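The resolution gap between the two timers is easy to observe empirically. The following stdlib-only sketch (class and method names are mine) spins until each clock visibly advances, which gives a rough estimate of the smallest tick each one can report:

```java
/**
 * Rough estimate of the smallest observable tick of System.currentTimeMillis()
 * versus System.nanoTime(), by spinning until each clock advances.
 */
public class TimerGranularity {

    /** Smallest observed advance of System.currentTimeMillis(), converted to nanoseconds. */
    public static long millisTickNanos() {
        long t0 = System.currentTimeMillis();
        long t1;
        while ((t1 = System.currentTimeMillis()) == t0) {
            // spin until the millisecond clock advances
        }
        return (t1 - t0) * 1_000_000L;
    }

    /** Smallest observed advance of System.nanoTime(), in nanoseconds. */
    public static long nanoTick() {
        long t0 = System.nanoTime();
        long t1;
        while ((t1 = System.nanoTime()) == t0) {
            // spin until the nanosecond clock advances
        }
        return t1 - t0;
    }

    public static void main(String[] args) {
        System.out.println("currentTimeMillis tick ~ " + millisTickNanos() + " ns");
        System.out.println("nanoTime tick          ~ " + nanoTick() + " ns");
    }
}
```

The exact numbers depend on the OS and JVM (the millisecond clock is notoriously coarse on Windows), but the millisecond timer's tick is at least a million times larger than what System.nanoTime() can resolve, which is why the discussion around JUnitBenchmarks matters for short-running benchmarks.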

In the next post I will present the benchmark results.