I have encountered a pattern in false positive results from Coverity Scan. I have an interface I and two implementations, IImpl and FakeI:
interface I {
    String f();
}

class IImpl implements I {
    @Override
    public String f() {
        return "f";
    }
}

class FakeI implements I {
    @Override
    public String f() {
        return null; // the fake deliberately returns null
    }
}
Given this code, if I then call f() through the interface and dereference the result

void g(I i, String other) { // i may be any implementation of I
    i.f().equals(other);
}
I get a null dereference warning, because the result of i.f() could be the null returned by FakeI.f(). FakeI is implemented in test code, so my production code does not even see it. But Coverity does not know that.
What are the possible solutions? I thought of either removing test code from the analysis completely, or revisiting my fakes to make sure they never return null. Is there some Coverity feature which might help with handling this?
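To illustrate the second option, a minimal sketch of what a null-free fake could look like (the sentinel value "fake" is just an illustrative choice, not anything Coverity requires):

class FakeI implements I {
    @Override
    public String f() {
        return "fake"; // non-null sentinel instead of null
    }
}

If no implementation visible to the analyzer returns null, the dereference warning should disappear.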
Static analyzers in general do not benefit from including test code in the analysis. This is in contrast to dynamic analysis, where the tests play a crucial role: they are what is being executed, so that there is something to analyze at all. Since tests represent simplified (shorter, self-contained) usage of the APIs, it is also easier to analyze reports generated from tests than from an actual running binary.
There are some benefits to including test code in static analysis, such as the analyzer finding defects in the tests themselves.
There are disadvantages, though, especially the one I was asking about here: fakes deliberately return degenerate values such as null, and those values then taint the analysis of the production code.
I am now trying to remove the tests from the scope of the analysis, which actually seems to be what the Coverity Scan documentation recommends. Their suggested Maven build command is

mvn compile

and the compile phase builds only the production sources; test sources are not compiled until the later test-compile phase, so the fakes never enter the analysis.
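For reference, a sketch of what the capture step might look like with Coverity's build tool (cov-int is the conventional intermediate directory name from the Scan instructions; adjust it to your setup):

cov-build --dir cov-int mvn clean compile

Since only src/main/java is compiled by this command, FakeI is invisible to the subsequent analysis.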