Yu Feng

Reputation: 303

Performance of check() in Z3 for Java

I am using Z3 for Java to check the satisfiability of terms with uninterpreted functions, such as (type(o)>1 or type(p)<1). I am running into a performance issue caused by the check() function. For instance, it takes 6 ms to run solver.check() on a very simple constraint (type(o)>2 and type(o)=1).

        FuncDecl typeFun = ctx.mkFuncDecl("type", ctx.getIntSort(), ctx.getIntSort());
        Expr o = ctx.mkConst("o", ctx.getIntSort());
        // type(o)
        IntExpr to = (IntExpr) typeFun.apply(o);
        // type(o) = 1
        BoolExpr subExpr1 = ctx.mkEq(to, ctx.mkInt(1));
        // type(o) > 2
        BoolExpr subExpr2 = ctx.mkGt(to, ctx.mkInt(2));
        // type(o) > 2 and type(o) = 1
        BoolExpr expr = ctx.mkAnd(subExpr1, subExpr2);

        solver.add(expr);
        // this step takes about 6 ms
        solver.check();

Given that the actual constraints in my project are much bigger than this example (though each term is very simple, such as type(o1)=1, type(o2)>1, etc.) and there are billions of such constraints to resolve:

1. Is the performance of check() supposed to be like this?
2. If the answer to 1 is yes, is there an alternative way to work around the performance issue?

Thanks in advance.

@ChristophWintersteiger: I think a large portion of the constraints in my system should be SAT. I am implementing a pointer analysis for Java and I am using Z3 to resolve the potential targets of virtual calls in a bottom-up way. Suppose I have a virtual callsite v.foo(); this callsite may invoke different methods depending on the dynamic type of v. So for each candidate callee foo(), I introduce a constraint type(o) = T, where o denotes the points-to set of the receiver v and T is the class that declares foo. The constraint means that v.foo() can invoke the foo() declared in T when one of the objects in the dynamic points-to set of v has type T. All the constraints in my current system are linear arithmetic with only one uninterpreted function, type(o). But since I analyze the call graph in a bottom-up manner, the constraint associated with each virtual callsite keeps expanding until the analysis reaches the root level and all the points-to targets of the receivers have been resolved.
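For illustration, one such callsite constraint could be assembled with the Java API roughly as follows. This is only a sketch: the class ID constants are made up, and a shared Context ctx and Solver solver are assumed to exist.

// Hypothetical class IDs for two candidate callees A.foo() and B.foo()
int ID_A = 1;
int ID_B = 2;

FuncDecl typeFun = ctx.mkFuncDecl("type", ctx.getIntSort(), ctx.getIntSort());
Expr o = ctx.mkConst("o", ctx.getIntSort());
IntExpr to = (IntExpr) typeFun.apply(o);

// v.foo() can dispatch to A.foo when type(o) = ID_A, and to B.foo when type(o) = ID_B
BoolExpr callsAFoo = ctx.mkEq(to, ctx.mkInt(ID_A));
BoolExpr callsBFoo = ctx.mkEq(to, ctx.mkInt(ID_B));

// The callsite constraint is the disjunction over all candidate callees
solver.add(ctx.mkOr(callsAFoo, callsBFoo));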

Upvotes: 4

Views: 333

Answers (2)

Christoph Wintersteiger

Reputation: 8402

6ms sounds pretty fast to me. Why is this too slow?

solver.check() calls out to a native .dll/.so, so there is quite a bit of overhead involved before the call can be executed. Once .check() returns, there is another bit of overhead in checking the result for errors. So, it might well take 6ms simply because the application is using the Java API.

If the application is performance critical to such a high degree, then there might be no way around using the C API, because all the other APIs incur some non-zero overhead.

Addendum: I've done some profiling of the following simple kind:

long startTime = System.nanoTime();
Status status = Status.UNKNOWN;
for (int i = 0; i < 10000; i++) {
    status = solver.check();
}
long elapsedTime = System.nanoTime() - startTime;

which takes about 22 seconds to run. The default solver that gets invoked here does indeed exhibit suboptimal performance on this particular instance. We can force Z3 to fall back to an older solver by adding the ignore_solver1 option (or by using push/pop commands), i.e., setting it up as follows:

Solver solver = ctx.mkSolver();
Params solver_params = ctx.mkParams();
solver_params.add("ignore_solver1", true);
solver.setParameters(solver_params);

Solving the same problem 10k times now takes ~100 ms (I get 9 µs per check).

The solver which gets invoked on these commands also supports incremental solving, so we can add push/pop commands almost for free:

for (int i = 0; i < 10000; i++) {
    solver.push();
    status = solver.check();
    solver.pop();
}

which increases the runtime for 10k checks to ~120ms. Now, moving solver.add(expr) into this loop increases the runtime considerably, i.e., for this:

for (int i = 0; i < 10000; i++) {
    solver.push();
    solver.add(expr);
    status = solver.check();
    solver.pop();
}

I get a runtime of ~275 ms, i.e., close to double what we had before. Thus the time required for expression construction and adding is of the same order of magnitude as the solving time, and at this point the bottleneck is indeed the Java API.

I should perhaps add that I only tried the example provided in the question. It may very well be the case that the solvers behave differently on a slightly different example, and that adding the ignore_solver1 option makes things worse.
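For reference, the experiments above can be combined into one self-contained harness roughly like the following. This is a sketch using the standard com.microsoft.z3 Java bindings, not the exact code used for the timings above, and the numbers will of course vary by machine.

import com.microsoft.z3.*;

public class CheckTiming {
    public static void main(String[] args) {
        Context ctx = new Context();

        // Build the same unsatisfiable constraint as in the question:
        // type(o) = 1 and type(o) > 2
        FuncDecl typeFun = ctx.mkFuncDecl("type", ctx.getIntSort(), ctx.getIntSort());
        Expr o = ctx.mkConst("o", ctx.getIntSort());
        IntExpr to = (IntExpr) typeFun.apply(o);
        BoolExpr expr = ctx.mkAnd(ctx.mkEq(to, ctx.mkInt(1)),
                                  ctx.mkGt(to, ctx.mkInt(2)));

        // Fall back to the older, incremental solver
        Solver solver = ctx.mkSolver();
        Params p = ctx.mkParams();
        p.add("ignore_solver1", true);
        solver.setParameters(p);

        long start = System.nanoTime();
        Status status = Status.UNKNOWN;
        for (int i = 0; i < 10000; i++) {
            solver.push();
            solver.add(expr);
            status = solver.check();
            solver.pop();
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(status + " after 10000 checks in " + elapsedMs + " ms");
    }
}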

Upvotes: 2

usr

Reputation: 171246

You could try using custom tactics. The default solver uses quite an extensive portfolio of algorithms (I believe). Try using a simple tactic without preprocessing, such as mkTactic("smt").
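In the Java bindings this would look roughly like the following sketch, assuming the same Context ctx and BoolExpr expr as in the question:

// Build a solver from the plain "smt" tactic, bypassing the default strategy
Tactic smt = ctx.mkTactic("smt");
Solver solver = ctx.mkSolver(smt);
solver.add(expr);
Status status = solver.check();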

Upvotes: 1
