I am currently doing regression testing, so I need fixed results rather than random ones. I have tried several methods, including setting the random seed to 0 and set_option('smt.arith.random_initial_value', False), but nothing changes: the float solutions still seem random. Here is a simple example:
from z3 import *

set_option('smt.arith.random_initial_value', False)
set_option('smt.random_seed', 0)

def test_short_expont():
    float_type = Float32()
    solver = Solver()
    solver.set("randomize", False)
    fp_5 = FP('fp_5', float_type)
    fp_10 = FP('fp_10', float_type)
    solver.add(fpEQ(fpAdd(RNE(), fp_5, fp_5), fp_10))
    # solver.add(fpEQ(fpMul(RNE(), fp_5, fp_10), fp_5))
    if solver.check() == sat:
        model = solver.model()
        return model
    else:
        return None
I expect the solutions to stay the same, but every run produces different results:
[fp_10 = 1.3322999477386474609375*(2**-112),
fp_5 = 1.3322999477386474609375*(2**-113)]
[fp_10 = -1.66065156459808349609375*(2**1),
fp_5 = -1.66065156459808349609375]
[fp_10 = 1.2059905529022216796875*(2**-126),
fp_5 = 0.60299527645111083984375*(2**-126)]
It seems that fixing the random seed works for the int solutions, but has no influence on the float solutions.
Is there any way to make the float solutions deterministic?
The problem here isn't z3, but rather Python, I'm afraid. If you print the generated SMTLib program (e.g., via solver.sexpr()) right before you call solver.check(), and add a (check-sat) to it, you can run that program with z3 directly; you'll see that it produces the exact same output on every run. In fact, your Python program will generate the same output as well if you start it from scratch each time; but not if you call test_short_expont repeatedly within one process, which is what you are observing. There was a related question on the GitHub z3 tracker, see: https://github.com/Z3Prover/z3/issues/6679
Bottom line: if you want determinism, don't use Python. Or, use Python to generate the SMTLib, save that to a file, and call z3 directly on it. (You can of course use Python to invoke the external z3 executable if you like.) Otherwise, you won't get determinism.
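To make that workaround concrete, here is a minimal sketch. The file name and the fp.eq/fp.add translation of your constraint are my own choices, not anything z3 mandates; the final step assumes a z3 executable on your PATH, so it is shown commented out:

```python
import subprocess  # only needed if you actually invoke z3 at the end

# The question's constraint, written directly in SMTLib 2 (FP theory):
# Float32 is (_ FloatingPoint 8 24); fpAdd with RNE is (fp.add RNE ...).
smtlib = """(declare-const fp_5 (_ FloatingPoint 8 24))
(declare-const fp_10 (_ FloatingPoint 8 24))
(assert (fp.eq (fp.add RNE fp_5 fp_5) fp_10))
(check-sat)
(get-model)
"""

# Save the program; regression tests can then diff z3's output on this file.
with open("test_short_expont.smt2", "w") as f:
    f.write(smtlib)

# Running the saved file with the z3 binary gives the same model every run:
# subprocess.run(["z3", "test_short_expont.smt2"], check=True)
```

Because each z3 invocation starts from scratch on the same input file, the model it reports is stable from run to run.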
I should add that there is another source of non-determinism that comes with building regressions like this: what happens if you upgrade z3? (For instance, compiling from the GitHub master.) That, I'm afraid, is unavoidable; i.e., you'll get different results depending on which version of z3 you use. The only way to get deterministic results in that scenario is to arrange for your test to have only one possible satisfying model. (Of course, that defeats the whole purpose of testing.)
The best idea here is not to rely on determinism, but rather have your test-suite get the model, and check that it is a valid model on the Python side. That way even if the model is different, you can rest assured it's still doing the right thing. Of course, without knowing your exact requirements/setup, that might be a tall order.