Caliper: why not use an annotation to define a benchmark?


Just found out about Caliper, and going through the documentation it looks like a great tool (thanks to Kevin and the gang at Google for open-sourcing it).

Question: why isn't there an annotation-based mechanism to define benchmarks for the common use cases? It seems that something like:

public class Foo {
  // Foo's actual code, followed by...

  @Benchmark
  static void timeFoobar(int reps) {
    Foo foo = new Foo();
    for (int i = 0; i < reps; ++i) foo.bar();
  }
}

would save a few lines of code and enhance readability.
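
For what it's worth, the annotation itself would be tiny. Here's a hypothetical sketch (the @Benchmark type below is my own invention, not something Caliper currently ships):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker annotation: retained at runtime so a benchmark
// runner could discover annotated methods via reflection.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Benchmark {}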


There are 2 answers

Jesse Wilson (accepted answer)

We decided to use timeFoo(int reps) rather than @Time foo(int reps) for a few reasons:

  • We still have a lot of JUnit 3.8 tests and like the consistency with its testFoo() scheme.
  • There's no need to import com.google.caliper.Time.
  • We'll end up reporting the benchmark name for timeFoo as Foo. This is easy: it's just methodName.substring(4). If we used annotations, we'd end up with more complicated machinery to handle names like @Time timeFoo(int reps), @Time benchmarkFoo(int reps), and @Time foo(int reps).

That said, we're reconsidering this for Caliper 1.0.
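
For context, a benchmark written against the existing naming convention might look roughly like this. It's a minimal sketch, assuming the SimpleBenchmark base class from the Caliper 0.x releases; the FooBenchmark class, the timeBar method name, and bar() returning an int are made up for illustration:

import com.google.caliper.SimpleBenchmark;

public class FooBenchmark extends SimpleBenchmark {
  // Convention-based: the "time" prefix marks this as a benchmark method,
  // and the report strips it, so it shows up as "Bar" (methodName.substring(4)).
  public int timeBar(int reps) {
    Foo foo = new Foo();
    int dummy = 0;
    for (int i = 0; i < reps; ++i) {
      dummy += foo.bar();  // assumes bar() returns an int; returning a value helps avoid dead-code elimination
    }
    return dummy;
  }
}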

Stephen C

One possible explanation is that benchmarks that use annotations cannot be run on pre-Java 1.5 JVMs. (That's not a very persuasive reason, given how old Java 1.5 is.)


Actually, this is implausible: the latest Caliper codebase defines an annotation called @VmOption, so they can't be aiming to support pre-Java 1.5 platforms. (Not that I'm suggesting they should ...)