mirror of https://github.com/Z3Prover/z3 synced 2025-08-30 15:00:08 +00:00

Spacer Global Guidance (#6026)

* Make spacer_sem_matcher::reset() public

* Add .clang-format for src/muz/spacer

* Mark substitution::get_bindings() as const

* Fix in spacer_antiunify

* Various helper methods in spacer_util

Minor functions to compute number of free variables, detect presence of certain
sub-expressions, etc.

The diff is ugly because of clang-format

* Add spacer_cluster for clustering lemmas

A cluster of lemmas is a set of lemmas that are all instances of the same
pattern, where a pattern is a qff formula with free variables.

Currently, the instances are required to be explicit, that is, they are all
obtained by substituting concrete values (i.e., numbers) for free variables of
the pattern.

Lemmas are clustered in cluster_db in each predicate transformer.

* Integrate spacer_cluster into spacer_context

* Custom clang-format pragmas for spacer_context

spacer_context.(cpp|h) are large and have inconsistent formatting. Disable
clang-format for them until merge with main z3 branch and re-format.

* Computation of convex closure and matrix kernel

Various LA functions. The implementations are somewhat preliminary.

Convex closure is implemented via a syntactic convex closure procedure.
Kernel computation handles many common cases.

spacer_arith_kernel_sage implements kernel computation by calling an external
Sage binary. It is used only for debugging and experiments. There is no
link-time dependence on Sage. If desired, it can be removed.

* Add spacer_concretize

* Utility methods for spacer conjecture rule

* Add spacer_expand_bnd_generalizer

Generalizes arithmetic inequality literals of the form x <= c,
by changing constant c to other constants found in the problem.

* Add spacer_global_generalizer

Global generalizer checks every new lemma against a cluster
of previously learned lemmas and, if possible, conjectures
a new pob that, when blocked, generalizes multiple existing
lemmas.

* Remove fp.spacer.print_json option

The option was used to dump the state of spacer into JSON for debugging.

It has been replaced by `fp.spacer.trace_file` that allows dumping an execution
of spacer. The json file can be reconstructed from the trace file elsewhere.

* Workaround for segfault in spacer_proof_utils

Issue #3 in hgvk94/z3

Segfault in some proof reductions. Avoided by bailing out of the reduction.

* Revert bug for incomplete models

* Use local fresh variables in spacer_global_generalizer

* Cleanup of spacer_convex_closure

* Allow arbitrary expressions to name cols in convex_closure

* WIP: convex closure

* WIP: convex closure

* Fix bindings order in spacer_global_generalizer

The matcher creates the substitution using std_order, which is the
reverse of the expected order (variable 0 is last). Adjust the code
accordingly.

* Increase verbosity level for smt_context stats

* Dead code in qe_mbp

* bug fixes in spacer_global_generalizer::subsumer

* Partially remove dependence on the size of m_alphas

I want m_alphas to potentially be larger than the number of alpha variables
currently in use. This is helpful for reusing them across multiple calls to
convex closure.

* Subtle bug in kernel computation

Coefficient was being passed by reference and, therefore, was
being changed indirectly.

In the process, updated the code to be more generic to avoid rational
computation in the middle of matrix manipulation.

* another test for sparse_matrix_ops::kernel

* Implementation of matrix kernel using Fraction Free Elimination

Ensures that the kernel is int for int matrices. All divisions are exact.

* clang-format sparse_matrix_ops.h

* another implementation of ffe kernel in sparse_matrix_ops

* Re-do arith_kernel and convex_closure

* update spacer_global_generalization for new subsumer

* remove spacer.gg.use_sage parameter

* cleanup of spacer_global_generalizer

* Removed dependency on sage

* fix in spacer_convex_closure

* spacer_sem_matcher: consider an additional semantic matching

disabled until it is shown useful

* spacer_global_generalizer: improve do_conjecture

 - if conjecture does not apply to pob, use lemma instead
 - better normalization
 - improve debug prints

* spacer_conjecture: formatting

* spacer_cluster: improve debug prints

* spacer_context: improve debug prints

* spacer_context: re-queue may pobs

enabled even if global re-queue is disabled

* spacer_cluster print formatting

* reset methods on pob

* cleanup of print and local variable names

* formatting

* reset generalization data once it has been used

* refactored extra pob creation during global guidance

* fix bug copying sparse matrix into spacer matrix

* bug fix in spacer_convex_closure

* formatting change in spacer_context

* spacer_cluster: get_min_lvl

choose the level based on the pob as well as the lemmas

* spacer_context: add desired_level to pob

desired_level indicates the level at which the pob should be proved.
A pob will be pushed to desired_level if necessary.

* spacer_context: renamed subsume stats

the names of success/failed were switched

* spacer_convex_closure: fix prototype of is_congruent_mod()

* spacer_convex_closure: hacks in infer_div_pred()

* spacer_util: do not expand literals with mod

By default, an equality literal t=p is expanded into t<=p && t>=p.

Disable the expansion when t contains the 'mod' operator, since such
expansion is usually not helpful for divisibility reasoning.

* spacer_util: rename m_util into m_arith

* spacer_util: cleanup normalize()

* spacer_util: formatting

* spacer_context: formatting cleanup on subsume and conjecture

* spacer_context: fix handling may pobs when abs_weakness is enabled

A pob might be undef, so weakness must be bumped up

* spacer_arith_kernel: enhance debug print

* spacer_global_generalizer: improve matching on conjecture

* spacer_global_generalizer: set desired level on conjecture pob

* spacer_global_generalizer: debug print

* spacer_global_generalizer: set min level on new pobs

the new level should not be higher than that of the pob that was generalized

* spacer_global_generalizer: do not re-create closed pobs

If a generalized pob exists and is closed, do not re-create it.

* spacer_context: normalize twice

* spacer_context: forward propagate only same kind of pobs

* sketch of inductive generalizer

A better implementation of inductive generalizer that in addition to dropping
literals also attempts to weaken them.

The current implementation is a sketch to be extended based on examples/requirements.

* fix ordering in spacer_cluster_util

* fix resetting of substitution matcher in spacer_conjecture

The old code forgot to reset the substitution provided to the sem_matcher.
Thus, once the substitution was matched (i.e., one literal of interest was
found), no other literal would be matched.

* add spacer_util is_normalized() method

used for debugging only

* simplify normalization of pob expressions

pob expressions are normalized to increase syntactic matching.
Some of the normalization rules seem out of place, so removing them for now.

* fix in spacer_global_generalizer

If conjecture fails, do not try other generalization strategies -- they will not apply.

* fix in spacer_context

do not check that may pob is blocked by existing lemmas.
It is likely to be blocked. Our goal is to block it again and generalize
to a new lemma.

This can be further improved by moving directly to generalization when pob is
blocked by existing lemmas...

Co-authored-by: hgvk94 <hgvk94@gmail.com>
This commit is contained in:
Arie Gurfinkel 2022-08-30 18:47:00 -04:00 committed by GitHub
parent 1a79d92f3a
commit d2b618df23
41 changed files with 6063 additions and 1917 deletions

@ -20,6 +20,7 @@ Notes:
--*/
// clang-format off
#include <sstream>
#include <iomanip>
@ -51,30 +52,36 @@ Notes:
#include "muz/transforms/dl_mk_rule_inliner.h"
#include "muz/spacer/spacer_qe_project.h"
#include "muz/spacer/spacer_sat_answer.h"
#include "muz/spacer/spacer_concretize.h"
#include "muz/spacer/spacer_global_generalizer.h"
#define WEAKNESS_MAX 65535
namespace spacer {
/// pob -- proof obligation
pob::pob (pob* parent, pred_transformer& pt,
unsigned level, unsigned depth, bool add_to_parent):
m_ref_count (0),
m_parent (parent), m_pt (pt),
m_post (m_pt.get_ast_manager ()),
m_binding(m_pt.get_ast_manager()),
m_new_post (m_pt.get_ast_manager ()),
m_level (level), m_depth (depth),
m_open (true), m_use_farkas (true), m_in_queue(false),
m_weakness(0), m_blocked_lvl(0) {
pob::pob(pob *parent, pred_transformer &pt, unsigned level, unsigned depth,
bool add_to_parent)
: m_ref_count(0), m_parent(parent), m_pt(pt),
m_post(m_pt.get_ast_manager()), m_binding(m_pt.get_ast_manager()),
m_new_post(m_pt.get_ast_manager()), m_level(level), m_depth(depth),
m_desired_level(0), m_open(true), m_use_farkas(true), m_in_queue(false), m_is_conjecture(false),
m_enable_local_gen(true), m_enable_concretize(false), m_is_subsume(false),
m_enable_expand_bnd_gen(false), m_weakness(0), m_blocked_lvl(0),
m_concretize_pat(m_pt.get_ast_manager()),
m_gas(0) {
if (add_to_parent && m_parent) {
m_parent->add_child(*this);
}
if (m_parent) {
m_is_conjecture = m_parent->is_conjecture();
// m_is_subsume = m_parent->is_subsume();
m_gas = m_parent->get_gas();
}
}
void pob::set_post(expr* post) {
app_ref_vector empty_binding(get_ast_manager());
set_post(post, empty_binding);
set_post(post, {get_ast_manager()});
}
void pob::set_post(expr* post, app_ref_vector const &binding) {
@ -91,6 +98,11 @@ void pob::inherit(pob const &p) {
SASSERT(!is_in_queue());
SASSERT(m_parent == p.m_parent);
SASSERT(&m_pt == &p.m_pt);
// -- HACK: normalize second time because th_rewriter is not idempotent
if (m_post != p.m_post) {
normalize(m_post, m_post, false, false);
}
SASSERT(m_post == p.m_post);
SASSERT(!m_new_post);
@ -99,11 +111,21 @@ void pob::inherit(pob const &p) {
m_level = p.m_level;
m_depth = p.m_depth;
m_desired_level = std::max(m_desired_level, p.m_desired_level);
m_open = p.m_open;
m_use_farkas = p.m_use_farkas;
m_is_conjecture = p.m_is_conjecture;
m_enable_local_gen = p.m_enable_local_gen;
m_enable_concretize = p.m_enable_concretize;
m_is_subsume = p.m_is_subsume;
m_enable_expand_bnd_gen = p.m_enable_expand_bnd_gen;
m_weakness = p.m_weakness;
m_derivation = nullptr;
m_gas = p.m_gas;
}
void pob::close () {
@ -756,6 +778,8 @@ void pred_transformer::collect_statistics(statistics& st) const
m_must_reachable_watch.get_seconds ());
st.update("time.spacer.ctp", m_ctp_watch.get_seconds());
st.update("time.spacer.mbp", m_mbp_watch.get_seconds());
// -- Max cluster size can decrease during run
st.update("SPACER max cluster size", m_cluster_db.get_max_cluster_size());
}
void pred_transformer::reset_statistics()
@ -1193,6 +1217,7 @@ expr_ref pred_transformer::get_origin_summary (model &mdl,
for (auto* s : summary) {
if (!is_quantifier(s) && !mdl.is_true(s)) {
TRACE("spacer", tout << "Summary not true in the model: " << mk_pp(s, m) << "\n";);
return expr_ref(m);
}
}
@ -1258,12 +1283,11 @@ void pred_transformer::get_pred_bg_invs(expr_ref_vector& out) {
/// \brief Returns true if the obligation is already blocked by current lemmas
bool pred_transformer::is_blocked (pob &n, unsigned &uses_level)
{
bool pred_transformer::is_blocked(pob &n, unsigned &uses_level, model_ref *model) {
ensure_level (n.level ());
prop_solver::scoped_level _sl (*m_solver, n.level ());
m_solver->set_core (nullptr);
m_solver->set_model (nullptr);
m_solver->set_model(model);
expr_ref_vector post(m), _aux(m);
post.push_back (n.post ());
@ -1335,7 +1359,7 @@ lbool pred_transformer::is_reachable(pob& n, expr_ref_vector* core,
model_ref* model, unsigned& uses_level,
bool& is_concrete, datalog::rule const*& r,
bool_vector& reach_pred_used,
unsigned& num_reuse_reach)
unsigned& num_reuse_reach, bool use_iuc)
{
TRACE("spacer",
tout << "is-reachable: " << head()->get_name() << " level: "
@ -1349,7 +1373,8 @@ lbool pred_transformer::is_reachable(pob& n, expr_ref_vector* core,
// prepare the solver
prop_solver::scoped_level _sl(*m_solver, n.level());
prop_solver::scoped_subset_core _sc (*m_solver, !n.use_farkas_generalizer ());
prop_solver::scoped_subset_core _sc(
*m_solver, !(use_iuc && n.use_farkas_generalizer()));
prop_solver::scoped_weakness _sw(*m_solver, 0,
ctx.weak_abs() ? n.weakness() : UINT_MAX);
m_solver->set_core(core);
@ -1406,9 +1431,10 @@ lbool pred_transformer::is_reachable(pob& n, expr_ref_vector* core,
r = find_rule(**model, is_concrete, reach_pred_used, num_reuse_reach);
TRACE("spacer",
tout << "reachable is_sat: " << is_sat << " "
<< r << " is_concrete " << is_concrete << " rused: " << reach_pred_used << "\n";
ctx.get_datalog_context().get_rule_manager().display_smt2(*r, tout) << "\n";
);
<< r << " is_concrete " << is_concrete << " rused: " << reach_pred_used << "\n";);
CTRACE("spacer", r,
ctx.get_datalog_context().get_rule_manager().display_smt2(*r, tout);
tout << "\n";);
TRACE("spacer_sat", tout << "model is:\n" << **model << "\n";);
}
@ -2272,7 +2298,8 @@ context::context(fp_params const& params, ast_manager& m) :
m_last_result(l_undef),
m_inductive_lvl(0),
m_expanded_lvl(0),
m_json_marshaller(this),
m_global_gen(nullptr),
m_expand_bnd_gen(nullptr),
m_trace_stream(nullptr) {
params_ref p;
@ -2290,6 +2317,7 @@ context::context(fp_params const& params, ast_manager& m) :
m_pool1 = alloc(solver_pool, pool1_base.get(), max_num_contexts);
m_pool2 = alloc(solver_pool, pool2_base.get(), max_num_contexts);
m_lmma_cluster = alloc(lemma_cluster_finder, m);
updt_params();
if (m_params.spacer_trace_file().is_non_empty_string()) {
@ -2302,6 +2330,7 @@ context::context(fp_params const& params, ast_manager& m) :
context::~context()
{
reset_lemma_generalizers();
dealloc(m_lmma_cluster);
reset();
if (m_trace_stream) {
@ -2347,7 +2376,13 @@ void context::updt_params() {
m_restart_initial_threshold = m_params.spacer_restart_initial_threshold();
m_pdr_bfs = m_params.spacer_gpdr_bfs();
m_use_bg_invs = m_params.spacer_use_bg_invs();
m_global = m_params.spacer_global();
m_expand_bnd = m_params.spacer_expand_bnd();
m_gg_conjecture = m_params.spacer_gg_conjecture();
m_gg_subsume = m_params.spacer_gg_subsume();
m_gg_concretize = m_params.spacer_gg_concretize();
m_use_iuc = m_params.spacer_use_iuc();
if (m_use_gpdr) {
// set options to be compatible with GPDR
m_weak_abs = false;
@ -2665,7 +2700,8 @@ void context::init_lemma_generalizers()
//m_lemma_generalizers.push_back (alloc (unsat_core_generalizer, *this));
if (m_use_ind_gen) {
m_lemma_generalizers.push_back(alloc(lemma_bool_inductive_generalizer, *this, 0));
// m_lemma_generalizers.push_back(alloc(lemma_bool_inductive_generalizer, *this, 0));
m_lemma_generalizers.push_back(alloc_lemma_inductive_generalizer(*this));
}
// after the lemma is minimized (maybe should also do before)
@ -2678,6 +2714,15 @@ void context::init_lemma_generalizers()
m_lemma_generalizers.push_back(alloc(lemma_array_eq_generalizer, *this));
}
if (m_global) {
m_global_gen = alloc(lemma_global_generalizer, *this);
m_lemma_generalizers.push_back(m_global_gen);
}
if (m_expand_bnd) {
m_expand_bnd_gen = alloc(lemma_expand_bnd_generalizer, *this);
m_lemma_generalizers.push_back(m_expand_bnd_gen);
}
if (m_validate_lemmas) {
m_lemma_generalizers.push_back(alloc(lemma_sanity_checker, *this));
}
@ -3020,9 +3065,7 @@ lbool context::solve_core (unsigned from_lvl)
if (check_reachability()) { return l_true; }
if (lvl > 0 && m_use_propagate)
if (propagate(m_expanded_lvl, lvl, UINT_MAX)) { dump_json(); return l_false; }
dump_json();
if (propagate(m_expanded_lvl, lvl, UINT_MAX)) { return l_false; }
if (is_inductive()){
return l_false;
@ -3070,6 +3113,8 @@ void context::log_expand_pob(pob &n) {
if (n.parent()) pob_id = std::to_string(n.parent()->post()->get_id());
*m_trace_stream << "** expand-pob: " << n.pt().head()->get_name()
<< (n.is_conjecture() ? " CONJ" : "")
<< (n.is_subsume() ? " SUBS" : "")
<< " level: " << n.level()
<< " depth: " << (n.depth() - m_pob_queue.min_depth())
<< " exprID: " << n.post()->get_id() << " pobID: " << pob_id << "\n"
@ -3077,13 +3122,18 @@ void context::log_expand_pob(pob &n) {
}
TRACE("spacer", tout << "expand-pob: " << n.pt().head()->get_name()
<< (n.is_conjecture() ? " CONJ" : "")
<< (n.is_subsume() ? " SUBS" : "")
<< " level: " << n.level()
<< " depth: " << (n.depth() - m_pob_queue.min_depth())
<< " fvsz: " << n.get_free_vars_size() << "\n"
<< " fvsz: " << n.get_free_vars_size()
<< " gas: " << n.get_gas() << "\n"
<< mk_pp(n.post(), m) << "\n";);
STRACE("spacer_progress",
tout << "** expand-pob: " << n.pt().head()->get_name()
<< (n.is_conjecture() ? " CONJ" : "")
<< (n.is_subsume() ? " SUBS" : "")
<< " level: " << n.level()
<< " depth: " << (n.depth() - m_pob_queue.min_depth()) << "\n"
<< mk_epp(n.post(), m) << "\n\n";);
@ -3151,13 +3201,21 @@ bool context::check_reachability ()
node = last_reachable;
last_reachable = nullptr;
if (m_pob_queue.is_root(*node)) { return true; }
if (is_reachable (*node->parent())) {
last_reachable = node->parent ();
// do not check the parent if its may pob status is different
if (node->parent()->is_may_pob() != node->is_may_pob())
{
last_reachable = nullptr;
break;
}
if (is_reachable(*node->parent())) {
last_reachable = node->parent();
SASSERT(last_reachable->is_closed());
last_reachable->close ();
last_reachable->close();
} else if (!node->parent()->is_closed()) {
/* bump node->parent */
node->parent ()->bump_weakness();
node->parent()->bump_weakness();
}
}
@ -3202,20 +3260,37 @@ bool context::check_reachability ()
case l_true:
SASSERT(m_pob_queue.size() == old_sz);
SASSERT(new_pobs.empty());
node->close();
last_reachable = node;
last_reachable->close ();
if (m_pob_queue.is_root(*node)) {return true;}
if (m_pob_queue.is_root(*node)) { return true; }
break;
case l_false:
SASSERT(m_pob_queue.size() == old_sz);
// re-queue all pobs introduced by global gen and any pobs that can be blocked at a higher level
for (auto pob : new_pobs) {
if (is_requeue(*pob)) {m_pob_queue.push(*pob);}
TRACE("gg", tout << "pob: is_may_pob " << pob->is_may_pob()
<< " with post:\n"
<< mk_pp(pob->post(), m)
<< "\n";);
//if ((pob->is_may_pob() && pob->post() != node->post()) || is_requeue(*pob)) {
if (is_requeue(*pob)) {
TRACE("gg",
tout << "Adding back blocked pob at level "
<< pob->level()
<< " and depth " << pob->depth() << "\n");
m_pob_queue.push(*pob);
}
}
if (m_pob_queue.is_root(*node)) {return false;}
break;
case l_undef:
SASSERT(m_pob_queue.size() == old_sz);
// collapse may pobs if the reachability of one of them cannot
// be estimated
if ((node->is_may_pob()) && new_pobs.size() == 0) {
close_all_may_parents(node);
}
for (auto pob : new_pobs) {m_pob_queue.push(*pob);}
break;
}
@ -3228,7 +3303,10 @@ bool context::check_reachability ()
/// returns true if the given pob can be re-scheduled
bool context::is_requeue(pob &n) {
if (!m_push_pob) {return false;}
// if have not reached desired level, then requeue
if (n.level() <= n.desired_level()) { return true; }
if (!m_push_pob) { return false; }
unsigned max_depth = m_push_pob_max_depth;
return (n.level() >= m_pob_queue.max_level() ||
m_pob_queue.max_level() - n.level() <= max_depth);
@ -3270,9 +3348,9 @@ bool context::is_reachable(pob &n)
unsigned saved = n.level ();
// TBD: don't expose private field
n.m_level = infty_level ();
lbool res = n.pt().is_reachable(n, nullptr, &mdl,
uses_level, is_concrete, r,
reach_pred_used, num_reuse_reach);
lbool res =
n.pt().is_reachable(n, nullptr, &mdl, uses_level, is_concrete, r,
reach_pred_used, num_reuse_reach, m_use_iuc);
n.m_level = saved;
if (res != l_true || !is_concrete) {
@ -3326,16 +3404,6 @@ bool context::is_reachable(pob &n)
return next ? is_reachable(*next) : true;
}
void context::dump_json()
{
if (m_params.spacer_print_json().is_non_empty_string()) {
std::ofstream of;
of.open(m_params.spacer_print_json().bare_str());
m_json_marshaller.marshal(of);
of.close();
}
}
void context::predecessor_eh()
{
for (unsigned i = 0; i < m_callbacks.size(); i++) {
@ -3446,14 +3514,15 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
log_expand_pob(n);
stopwatch watch;
IF_VERBOSE (1, verbose_stream () << "expand: " << n.pt ().head ()->get_name ()
<< " (" << n.level () << ", "
IF_VERBOSE(1, verbose_stream()
<< "expand: " << n.pt().head()->get_name() << " ("
<< n.level() << ", "
<< (n.depth () - m_pob_queue.min_depth ()) << ") "
<< (n.use_farkas_generalizer () ? "FAR " : "SUB ")
<< " w(" << n.weakness() << ") "
<< n.post ()->get_id ();
verbose_stream().flush ();
watch.start (););
<< (n.is_conjecture() ? "CONJ " : "")
<< (n.is_subsume() ? " SUBS" : "") << " w("
<< n.weakness() << ") " << n.post()->get_id();
verbose_stream().flush(); watch.start(););
// used in case n is unreachable
unsigned uses_level = infty_level ();
@ -3468,25 +3537,59 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
unsigned num_reuse_reach = 0;
if (m_push_pob && n.pt().is_blocked(n, uses_level)) {
if (!n.is_may_pob() && m_push_pob && n.pt().is_blocked(n, uses_level)) {
// if (!m_pob_queue.is_root (n)) n.close ();
IF_VERBOSE (1, verbose_stream () << " K "
<< std::fixed << std::setprecision(2)
<< watch.get_seconds () << "\n";);
n.inc_level();
out.push_back(&n);
return l_false;
}
<< watch.get_seconds () << "\n";);
n.inc_level();
out.push_back(&n);
return l_false;
}
if (/* XXX noop */ n.pt().is_qblocked(n)) {
STRACE("spacer_progress",
tout << "This pob can be blocked by instantiation\n";);
}
if (/* XXX noop */ n.pt().is_qblocked(n)) {
STRACE("spacer_progress",
tout << "This pob can be blocked by instantiation\n";);
}
if ((n.is_may_pob()) && n.get_gas() == 0) {
TRACE("global", tout << "Cant prove may pob. Collapsing "
<< mk_pp(n.post(), m) << "\n";);
m_stats.m_num_pob_ofg++;
return l_undef;
}
// Decide whether to concretize pob
// get a model that satisfies the pob and the current set of lemmas
// TODO: if push_pob is enabled, avoid calling is_blocked twice
if (m_gg_concretize && n.is_concretize_enabled() &&
!n.pt().is_blocked(n, uses_level, &model)) {
TRACE("global",
tout << "Concretizing: " << mk_pp(n.post(), m) << "\n"
<< "\t" << n.get_gas() << " attempts left\n";);
SASSERT(m_global_gen);
if (pob *new_pob = m_global_gen->mk_concretize_pob(n, model)) {
m_stats.m_num_concretize++;
out.push_back(new_pob);
out.push_back(&n);
IF_VERBOSE(1, verbose_stream()
<< " C " << std::fixed << std::setprecision(2)
<< watch.get_seconds() << "\n";);
unsigned gas = n.get_gas();
SASSERT(gas > 0);
// dec gas for orig pob to limit number of concretizations
new_pob->set_gas(gas--);
n.set_gas(gas);
return l_undef;
}
}
model = nullptr;
predecessor_eh();
lbool res = n.pt ().is_reachable (n, &cube, &model, uses_level, is_concrete, r,
reach_pred_used, num_reuse_reach);
lbool res =
n.pt().is_reachable(n, &cube, &model, uses_level, is_concrete, r,
reach_pred_used, num_reuse_reach, m_use_iuc);
if (model) model->set_model_completion(false);
if (res == l_undef && model) res = handle_unknown(n, r, *model);
@ -3534,7 +3637,18 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
out.push_back (next);
}
}
if(n.is_subsume())
m_stats.m_num_subsume_pob_reachable++;
if(n.is_conjecture())
m_stats.m_num_conj_failed++;
CTRACE("global", n.is_conjecture(),
tout << "Failed to block conjecture "
<< n.post()->get_id() << "\n";);
CTRACE("global", n.is_subsume(),
tout << "Failed to block subsume generalization "
<< mk_pp(n.post(), m) << "\n";);
IF_VERBOSE(1, verbose_stream () << (next ? " X " : " T ")
<< std::fixed << std::setprecision(2)
@ -3565,7 +3679,7 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
throw unknown_exception();
}
case l_false: {
// n is unreachable, create new summary facts
// n is unreachable, create a new lemma
timeit _timer (is_trace_enabled("spacer_timeit"),
"spacer::expand_pob::false",
verbose_stream ());
@ -3573,40 +3687,68 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
// -- only update expanded level when new lemmas are generated at it.
if (n.level() < m_expanded_lvl) { m_expanded_lvl = n.level(); }
TRACE("spacer", tout << "cube:\n";
for (unsigned j = 0; j < cube.size(); ++j)
tout << mk_pp(cube[j].get(), m) << "\n";);
TRACE("spacer", tout << "cube:\n" << cube << "\n";);
if(n.is_conjecture()) m_stats.m_num_conj_success++;
if(n.is_subsume()) m_stats.m_num_subsume_pob_blckd++;
pob_ref nref(&n);
// -- create lemma from a pob and last unsat core
lemma_ref lemma = alloc(class lemma, pob_ref(&n), cube, uses_level);
lemma_ref lemma_pob;
if (n.is_local_gen_enabled()) {
lemma_pob = alloc(class lemma, nref, cube, uses_level);
// -- run all lemma generalizers
for (unsigned i = 0;
// -- only generalize if lemma was constructed using farkas
n.use_farkas_generalizer() && !lemma_pob->is_false() &&
i < m_lemma_generalizers.size();
++i) {
checkpoint ();
(*m_lemma_generalizers[i])(lemma_pob);
}
} else if (m_global_gen || m_expand_bnd_gen) {
m_stats.m_non_local_gen++;
// -- run all lemma generalizers
for (unsigned i = 0;
// -- only generalize if lemma was constructed using farkas
n.use_farkas_generalizer () && !lemma->is_false() &&
i < m_lemma_generalizers.size(); ++i) {
checkpoint ();
(*m_lemma_generalizers[i])(lemma);
expr_ref_vector pob_cube(m);
n.get_post_simplified(pob_cube);
lemma_pob = alloc(class lemma, nref, pob_cube, n.level());
TRACE("global", tout << "Disabled local gen on pob (id: "
<< n.post()->get_id() << ")\n"
<< mk_pp(n.post(), m) << "\n"
<< "Lemma:\n"
<< mk_and(lemma_pob->get_cube()) << "\n";);
if (m_global_gen) (*m_global_gen)(lemma_pob);
if (m_expand_bnd_gen) (*m_expand_bnd_gen)(lemma_pob);
} else {
lemma_pob = alloc(class lemma, nref, cube, uses_level);
}
DEBUG_CODE(
lemma_sanity_checker sanity_checker(*this);
sanity_checker(lemma);
);
CTRACE("global", n.is_conjecture() || n.is_subsume(),
tout << "Blocked "
<< (n.is_conjecture() ? "conjecture " : "subsume ") << n.post()->get_id()
<< " at level " << n.level()
<< " using lemma\n" << mk_pp(lemma_pob->get_expr(), m) << "\n";);
TRACE("spacer", tout << "invariant state: "
<< (is_infty_level(lemma->level())?"(inductive)":"")
<< mk_pp(lemma->get_expr(), m) << "\n";);
DEBUG_CODE(lemma_sanity_checker sanity_checker(*this);
sanity_checker(lemma_pob););
bool v = n.pt().add_lemma (lemma.get());
if (v) { m_stats.m_num_lemmas++; }
TRACE("spacer",
tout << "invariant state: "
<< (is_infty_level(lemma_pob->level()) ? "(inductive)" : "")
<< mk_pp(lemma_pob->get_expr(), m) << "\n";);
bool is_new = n.pt().add_lemma(lemma_pob.get());
if (is_new) {
if (m_global) m_lmma_cluster->cluster(lemma_pob);
m_stats.m_num_lemmas++;
}
// Optionally update the node to be the negation of the lemma
if (v && m_use_lemma_as_pob) {
if (is_new && m_use_lemma_as_pob) {
expr_ref c(m);
c = mk_and(lemma->get_cube());
c = mk_and(lemma_pob->get_cube());
// check that the post condition is different
if (c != n.post()) {
pob *f = n.pt().find_pob(n.parent(), c);
@ -3622,6 +3764,28 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
}
}
if (m_global_gen) {
// if global gen is enabled, post-process the pob to create new subsume or conjecture pob
if (pob* new_pob = m_global_gen->mk_subsume_pob(n)) {
new_pob->set_gas(n.get_gas() - 1);
n.set_gas(n.get_gas() - 1);
out.push_back(new_pob);
m_stats.m_num_subsume_pobs++;
TRACE("global_verbose",
tout << "New subsume pob\n" << mk_pp(new_pob->post(), m) << "\n"
<< "gas:" << new_pob->get_gas() << "\n";);
} else if (pob* new_pob = m_gg_conjecture ? m_global_gen->mk_conjecture_pob(n) : nullptr) {
new_pob->set_gas(n.get_gas() - 1);
n.set_gas(n.get_gas() - 1);
out.push_back(new_pob);
m_stats.m_num_conj++;
TRACE("global",
tout << "New conjecture pob\n" << mk_pp(new_pob->post(), m) << "\n";);
}
}
// schedule the node to be placed back in the queue
n.inc_level();
out.push_back(&n);
@ -3636,7 +3800,24 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
return l_false;
}
case l_undef:
// something went wrong
// if the pob is a may pob, handle specially
if (n.is_may_pob()) {
// do not create children, but bump weakness
// bail out if this does not help
// AG: do not know why this is a good strategy
if (n.weakness() < 10) {
SASSERT(out.empty());
n.bump_weakness();
return expand_pob(n, out);
}
n.close();
m_stats.m_expand_pob_undef++;
IF_VERBOSE(1, verbose_stream() << " UNDEF "
<< std::fixed << std::setprecision(2)
<< watch.get_seconds () << "\n";);
return l_undef;
}
if (n.weakness() < 10 /* MAX_WEAKENSS */) {
bool has_new_child = false;
SASSERT(m_weak_abs);
@ -3933,6 +4114,11 @@ bool context::create_children(pob& n, datalog::rule const& r,
!mdl.is_true(n.post())))
{ kid->reset_derivation(); }
if (kid->is_may_pob()) {
SASSERT(n.get_gas() > 0);
n.set_gas(n.get_gas() - 1);
kid->set_gas(n.get_gas() - 1);
}
out.push_back(kid);
m_stats.m_num_queries++;
return true;
@ -3971,6 +4157,17 @@ void context::collect_statistics(statistics& st) const
st.update("SPACER num lemmas", m_stats.m_num_lemmas);
// -- number of restarts taken
st.update("SPACER restarts", m_stats.m_num_restarts);
// -- number of time pob abstraction was invoked
st.update("SPACER conj", m_stats.m_num_conj);
st.update("SPACER conj success", m_stats.m_num_conj_success);
st.update("SPACER conj failed",
m_stats.m_num_conj_failed);
st.update("SPACER pob out of gas", m_stats.m_num_pob_ofg);
st.update("SPACER subsume pob", m_stats.m_num_subsume_pobs);
st.update("SPACER subsume failed", m_stats.m_num_subsume_pob_reachable);
st.update("SPACER subsume success", m_stats.m_num_subsume_pob_blckd);
st.update("SPACER concretize", m_stats.m_num_concretize);
st.update("SPACER non local gen", m_stats.m_non_local_gen);
// -- time to initialize the rules
st.update ("time.spacer.init_rules", m_init_rules_watch.get_seconds ());
@ -3991,6 +4188,7 @@ void context::collect_statistics(statistics& st) const
for (unsigned i = 0; i < m_lemma_generalizers.size(); ++i) {
m_lemma_generalizers[i]->collect_statistics(st);
}
m_lmma_cluster->collect_statistics(st);
}
void context::reset_statistics()
@ -4008,6 +4206,7 @@ void context::reset_statistics()
m_lemma_generalizers[i]->reset_statistics();
}
m_lmma_cluster->reset_statistics();
m_init_rules_watch.reset ();
m_solve_watch.reset ();
m_propagate_watch.reset ();
@ -4093,8 +4292,6 @@ void context::add_constraint (expr *c, unsigned level)
}
void context::new_lemma_eh(pred_transformer &pt, lemma *lem) {
if (m_params.spacer_print_json().is_non_empty_string())
m_json_marshaller.register_lemma(lem);
bool handle=false;
for (unsigned i = 0; i < m_callbacks.size(); i++) {
handle|=m_callbacks[i]->new_lemma();
@ -4116,10 +4313,7 @@ void context::new_lemma_eh(pred_transformer &pt, lemma *lem) {
}
}
void context::new_pob_eh(pob *p) {
if (m_params.spacer_print_json().is_non_empty_string())
m_json_marshaller.register_pob(p);
}
void context::new_pob_eh(pob *p) { }
bool context::is_inductive() {
// check that inductive level (F infinity) of the query predicate
@ -4140,6 +4334,10 @@ inline bool pob_lt_proc::operator() (const pob *pn1, const pob *pn2) const
if (n1.depth() != n2.depth()) { return n1.depth() < n2.depth(); }
if (n1.is_subsume() != n2.is_subsume()) { return n1.is_subsume(); }
if (n1.is_conjecture() != n2.is_conjecture()) { return n1.is_conjecture(); }
if (n1.get_gas() != n2.get_gas()) { return n1.get_gas() > n2.get_gas(); }
// -- a more deterministic order of proof obligations in a queue
// if (!n1.get_context ().get_params ().spacer_nondet_tie_break ())
{
@ -4193,5 +4391,27 @@ inline bool pob_lt_proc::operator() (const pob *pn1, const pob *pn2) const
}
// set gas of each may parent to 0
// TODO: close siblings as well. kids of a pob are not stored in the pob
void context::close_all_may_parents(pob_ref node) {
pob_ref_vector to_do;
to_do.push_back(node.get());
while (to_do.size() != 0) {
pob_ref t = to_do.back();
t->set_gas(0);
if (t->is_may_pob()) {
t->close();
} else
break;
to_do.pop_back();
to_do.push_back(t->parent());
}
}
// construct a simplified version of the post
void pob::get_post_simplified(expr_ref_vector &pob_cube) {
pob_cube.reset();
pob_cube.push_back(m_post);
flatten_and(pob_cube);
simplify_bounds(pob_cube);
}
}