mirror of
https://github.com/Z3Prover/z3
synced 2025-12-05 19:42:23 +00:00
* Make spacer_sem_matcher::reset() public
* Add .clang-format for src/muz/spacer
* Mark substitution::get_bindings() as const
* Fix in spacer_antiunify
* Various helper methods in spacer_util
  Minor functions to compute the number of free variables, detect the presence of certain sub-expressions, etc. The diff is ugly because of clang-format.
* Add spacer_cluster for clustering lemmas
  A cluster of lemmas is a set of lemmas that are all instances of the same pattern, where a pattern is a qff formula with free variables. Currently, the instances are required to be explicit, that is, they are all obtained by substituting concrete values (i.e., numbers) for the free variables of the pattern. Lemmas are clustered in cluster_db in each predicate transformer.
* Integrate spacer_cluster into spacer_context
* Custom clang-format pragmas for spacer_context
  spacer_context.(cpp|h) are large and have inconsistent formatting. Disable clang-format for them until merged with the main z3 branch, then re-format.
* Computation of convex closure and matrix kernel
  Various LA functions. The implementations are somewhat preliminary. Convex closure is implemented via a syntactic convex closure procedure. Kernel computation handles many common cases. spacer_arith_kernel_sage implements kernel computation by calling an external Sage binary. It is used only for debugging and experiments. There is no link dependence on Sage; if desired, it can be removed.
* Add spacer_concretize
* Utility methods for spacer conjecture rule
* Add spacer_expand_bnd_generalizer
  Generalizes arithmetic inequality literals of the form x <= c by changing the constant c to other constants found in the problem.
* Add spacer_global_generalizer
  The global generalizer checks every new lemma against a cluster of previously learned lemmas and, if possible, conjectures a new pob that, when blocked, generalizes multiple existing lemmas.
* Remove fp.spacer.print_json option
  The option was used to dump the state of spacer into json for debugging. It has been replaced by `fp.spacer.trace_file`, which allows dumping an execution of spacer. The json file can be reconstructed from the trace file elsewhere.
* Workaround for segfault in spacer_proof_utils
  Issue #3 in hgvk94/z3. Segfault in some proof reduction. Avoid by bailing out on reduction.
* Revert bug for incomplete models
* Use local fresh variables in spacer_global_generalizer
* Cleanup of spacer_convex_closure
* Allow arbitrary expressions to name cols in convex_closure
* WIP: convex closure
* WIP: convex closure
* Fix bindings order in spacer_global_generalizer
  The matcher creates a substitution using std_order, which is the reverse of the expected order (variable 0 is last). Adjust the code accordingly.
* Increase verbosity level for smt_context stats
* Dead code in qe_mbp
* Bug fixes in spacer_global_generalizer::subsumer
* Partially remove dependence on the size of m_alphas
  I want m_alphas to potentially be greater than the number of currently used alpha variables. This helps reuse them across multiple calls to convex closure.
* Subtle bug in kernel computation
  A coefficient was being passed by reference and was therefore being changed indirectly. In the process, updated the code to be more generic, avoiding rational computation in the middle of matrix manipulation.
* Another test for sparse_matrix_ops::kernel
* Implementation of matrix kernel using Fraction-Free Elimination
  Ensures that the kernel is integral for integer matrices. All divisions are exact.
* clang-format sparse_matrix_ops.h
* Another implementation of ffe kernel in sparse_matrix_ops
* Re-do arith_kernel and convex_closure
* Update spacer_global_generalization for new subsumer
* Remove spacer.gg.use_sage parameter
* Cleanup of spacer_global_generalizer
* Removed dependency on sage
* Fix in spacer_convex_closure
* spacer_sem_matcher: consider an additional semantic matching
  Disabled until it is shown useful.
* spacer_global_generalizer: improve do_conjecture
  - If the conjecture does not apply to the pob, use the lemma instead
  - Better normalization
  - Improve debug prints
* spacer_conjecture: formatting
* spacer_cluster: improve debug prints
* spacer_context: improve debug prints
* spacer_context: re-queue may pobs
  Enabled even if global re-queue is disabled.
* spacer_cluster: print formatting
* Reset methods on pob
* Cleanup of prints and local variable names
* Formatting
* Reset generalization data once it has been used
* Refactored extra pob creation during global guidance
* Fix bug copying sparse matrix into spacer matrix
* Bug fix in spacer_convex_closure
* Formatting change in spacer_context
* spacer_cluster: get_min_lvl
  Choose the level based on the pob as well as the lemmas.
* spacer_context: add desired_level to pob
  desired_level indicates the level at which the pob should be proved. A pob will be pushed to desired_level if necessary.
* spacer_context: renamed subsume stats
  The names of success/failed were switched.
* spacer_convex_closure: fix prototype of is_congruent_mod()
* spacer_convex_closure: hacks in infer_div_pred()
* spacer_util: do not expand literals with mod
  By default, an equality literal t=p is expanded into t<=p && t>=p. Disable the expansion when t contains the 'mod' operator, since such expansion is usually not helpful for divisibility.
* spacer_util: rename m_util into m_arith
* spacer_util: cleanup normalize()
* spacer_util: formatting
* spacer_context: formatting cleanup of subsume and conjecture
* spacer_context: fix handling of may pobs when abs_weakness is enabled
  A pob might be undef, so weakness must be bumped up.
* spacer_arith_kernel: enhance debug print
* spacer_global_generalizer: improve matching on conjecture
* spacer_global_generalizer: set desired level on conjecture pob
* spacer_global_generalizer: debug print
* spacer_global_generalizer: set min level on new pobs
  The new level should not be higher than that of the pob that was generalized.
* spacer_global_generalizer: do not re-create closed pobs
  If a generalized pob exists and is closed, do not re-create it.
* spacer_context: normalize twice
* spacer_context: forward propagate only the same kind of pobs
* Sketch of inductive generalizer
  A better implementation of the inductive generalizer that, in addition to dropping literals, also attempts to weaken them. The current implementation is a sketch, to be extended based on examples/requirements.
* Fix ordering in spacer_cluster_util
* Fix resetting of substitution matcher in spacer_conjecture
  The old code would forget to reset the substitution provided to the sem_matcher. Thus, if the substitution was matched once (i.e., one literal of interest was found), no other literal would be matched.
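The reset bug described in the last item can be reproduced with a toy single-variable matcher (hypothetical names; the real sem_matcher works on z3 expressions): a binding left over from one match blocks any later literal that would bind the variable differently, which is why the substitution must be reset before each match attempt.

```cpp
#include <map>
#include <string>

// Toy single-variable matcher: binds pattern variable "?x" to a term,
// or checks agreement with an existing binding. A hypothetical sketch,
// not the spacer sem_matcher API.
using subst = std::map<std::string, std::string>;

bool match_lit(const std::string &term, subst &s) {
    auto it = s.find("?x");
    if (it == s.end()) {
        s["?x"] = term; // fresh binding: match succeeds
        return true;
    }
    return it->second == term; // an existing binding must agree
}
```

Without clearing the substitution between literals, every literal after the first successful match is compared against a stale binding.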
* Add spacer_util is_normalized() method
  Used for debugging only.
* Simplify normalization of pob expressions
  pob expressions are normalized to increase syntactic matching. Some of the normalization rules seem out of place, so removing them for now.
* Fix in spacer_global_generalizer
  If the conjecture fails, do not try other generalization strategies -- they will not apply.
* Fix in spacer_context
  Do not check that a may pob is blocked by existing lemmas. It is likely to be blocked; our goal is to block it again and generalize it to a new lemma. This can be further improved by moving directly to generalization when the pob is blocked by existing lemmas.

Co-authored-by: hgvk94 <hgvk94@gmail.com>
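The spacer_expand_bnd_generalizer entry above generalizes a literal x <= c by substituting other constants from the problem for c. One natural candidate selection, sketched here with a hypothetical standalone function (the real generalizer manipulates z3 expressions and re-checks inductiveness for each candidate), is to try only larger constants, each of which yields a strictly weaker literal:

```cpp
#include <algorithm>
#include <vector>

// For a literal x <= c, collect candidate replacement constants k > c
// from the problem; each gives a strictly weaker bound x <= k.
// Sorted ascending so the least weakening is tried first.
// Hypothetical sketch of the expand_bnd idea, not the z3 code.
std::vector<int> expand_bnd_candidates(int c, const std::vector<int> &consts) {
    std::vector<int> out;
    for (int k : consts)
        if (k > c) out.push_back(k);
    std::sort(out.begin(), out.end());
    return out;
}
```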
303 lines
9.6 KiB
C++
#include "ast/array_decl_plugin.h"
#include "ast/expr_functors.h"

#include "muz/spacer/spacer_context.h"

using namespace spacer;

namespace {

class contains_array_op_proc : public i_expr_pred {
    ast_manager &m;
    family_id m_array_fid;

  public:
    contains_array_op_proc(ast_manager &manager)
        : m(manager), m_array_fid(array_util(m).get_family_id()) {}
    bool operator()(expr *e) override {
        return is_app(e) && to_app(e)->get_family_id() == m_array_fid;
    }
};

class lemma_inductive_generalizer : public lemma_generalizer {
    struct stats {
        unsigned count;
        unsigned weaken_success;
        unsigned weaken_fail;
        stopwatch watch;
        stats() { reset(); }
        void reset() {
            count = 0;
            weaken_success = 0;
            weaken_fail = 0;
            watch.reset();
        }
    };

    ast_manager &m;
    expr_ref m_true;
    stats m_st;
    bool m_only_array_eligible;
    bool m_enable_litweak;

    contains_array_op_proc m_contains_array_op;
    check_pred m_contains_array_pred;

    expr_ref_vector m_pinned;
    lemma *m_lemma = nullptr;
    spacer::pred_transformer *m_pt = nullptr;
    unsigned m_weakness = 0;
    unsigned m_level = 0;
    ptr_vector<expr> m_cube;

    // temporary vector
    expr_ref_vector m_core;

  public:
    lemma_inductive_generalizer(spacer::context &ctx,
                                bool only_array_eligible = false,
                                bool enable_literal_weakening = true)
        : lemma_generalizer(ctx), m(ctx.get_ast_manager()),
          m_true(m.mk_true(), m), m_only_array_eligible(only_array_eligible),
          m_enable_litweak(enable_literal_weakening), m_contains_array_op(m),
          m_contains_array_pred(m_contains_array_op, m),
          m_pinned(m), m_core(m) {}

  private:
    // -- true if literal \p lit is eligible to be generalized
    bool is_eligible(expr *lit) {
        return !m_only_array_eligible || has_arrays(lit);
    }

    bool has_arrays(expr *lit) { return m_contains_array_op(lit); }

    void reset() {
        m_cube.reset();
        m_weakness = 0;
        m_level = 0;
        m_pt = nullptr;
        m_pinned.reset();
        m_core.reset();
    }

    void setup(lemma_ref &lemma) {
        // check that we start in an uninitialized state
        SASSERT(m_pt == nullptr);
        m_lemma = lemma.get();
        m_pt = &lemma->get_pob()->pt();
        m_weakness = lemma->weakness();
        m_level = lemma->level();
        auto &cube = lemma->get_cube();
        m_cube.reset();
        for (auto *lit : cube) { m_cube.push_back(lit); }
    }

    // loads the current generalization from m_cube into m_core
    void load_cube_to_core() {
        m_core.reset();
        for (unsigned i = 0, sz = m_cube.size(); i < sz; ++i) {
            auto *lit = m_cube.get(i);
            if (lit == m_true) continue;
            m_core.push_back(lit);
        }
    }

    // returns true if m_cube is inductive
    bool is_cube_inductive() {
        load_cube_to_core();
        if (m_core.empty()) return false;

        unsigned used_level;
        if (m_pt->check_inductive(m_level, m_core, used_level, m_weakness)) {
            m_level = std::max(m_level, used_level);
            return true;
        }
        return false;
    }

    // intersect m_cube with m_core
    unsigned update_cube_by_core(unsigned from = 0) {
        // generalize away all literals in m_cube that are not in m_core;
        // do not assume anything about the order of literals in m_core
        unsigned success = 0;

        // mark core
        ast_fast_mark2 marked_core;
        for (auto *v : m_core) { marked_core.mark(v); }

        // replace unmarked literals by m_true in m_cube
        for (unsigned i = from, sz = m_cube.size(); i < sz; ++i) {
            auto *lit = m_cube.get(i);
            if (lit == m_true) continue;
            if (!marked_core.is_marked(lit)) {
                m_cube[i] = m_true;
                success++;
            }
        }
        return success;
    }

    // generalizes m_core and removes from m_cube all generalized literals
    unsigned generalize_core(unsigned from = 0) {
        unsigned success = 0;
        unsigned used_level;

        // -- while it is possible that a single literal can be generalized
        // -- to false, it is fairly unlikely; thus, we give up generalizing
        // -- in this case
        if (m_core.empty()) return 0;

        // -- check whether the candidate in m_core is inductive
        if (m_pt->check_inductive(m_level, m_core, used_level, m_weakness)) {
            success += update_cube_by_core(from);
            // update m_level to the largest level at which the current
            // candidate in m_cube is inductive
            m_level = std::max(m_level, used_level);
        }

        return success;
    }

    // generalizes (i.e., drops) a specific literal of m_cube
    unsigned generalize1(unsigned lit_idx) {
        if (!is_eligible(m_cube.get(lit_idx))) return 0;

        // -- populate m_core with all literals except the one being
        // -- generalized
        m_core.reset();
        for (unsigned i = 0, sz = m_cube.size(); i < sz; ++i) {
            auto *lit = m_cube.get(i);
            if (lit == m_true || i == lit_idx) continue;
            m_core.push_back(lit);
        }

        return generalize_core(lit_idx);
    }

    // generalizes all literals of m_cube in a given range
    unsigned generalize_range(unsigned from, unsigned to) {
        unsigned success = 0;
        for (unsigned i = from; i < to; ++i) { success += generalize1(i); }
        return success;
    }

    // weakens a given literal of m_cube
    // weakening replaces a literal by weaker literal(s);
    // for example, x=y might get weakened into one of x<=y or y<=x
    unsigned weaken1(unsigned lit_idx) {
        if (!is_eligible(m_cube.get(lit_idx))) return 0;
        if (m_cube.get(lit_idx) == m_true) return 0;

        unsigned success = 0;
        unsigned cube_sz = m_cube.size();

        // -- save the literal to be weakened, and replace it by true
        expr *saved_lit = m_cube.get(lit_idx);
        m_cube[lit_idx] = m_true;

        // -- add new, weaker literals to the end of m_cube and attempt to
        // -- generalize
        expr_ref_vector weakening(m);
        weakening.push_back(saved_lit);
        expand_literals(m, weakening);
        if (weakening.get(0) != saved_lit) {
            for (auto *lit : weakening) {
                m_cube.push_back(lit);
                m_pinned.push_back(lit);
            }

            if (m_cube.size() - cube_sz >= 2) {
                // normal case: generalize the new weakening
                success += generalize_range(cube_sz, m_cube.size());
            } else {
                // special case -- the literal was weakened into a single
                // literal; check that the cube is still inductive
                success += (is_cube_inductive() ? 1 : 0);
            }
        }

        // -- failed to generalize, restore the removed literal and m_cube
        if (success == 0) {
            m_cube[lit_idx] = saved_lit;
            m_cube.shrink(cube_sz);
            m_st.weaken_fail++;
        } else {
            m_st.weaken_success++;
        }

        return success;
    }

    // weakens literals of m_cube in a given range
    unsigned weaken_range(unsigned from, unsigned to) {
        unsigned success = 0;
        for (unsigned i = from; i < to; ++i) { success += weaken1(i); }
        return success;
    }

  public:
    // entry point for generalization
    void operator()(lemma_ref &lemma) override {
        if (lemma->get_cube().empty()) return;

        m_st.count++;
        scoped_watch _w_(m_st.watch);

        setup(lemma);

        unsigned num_gens = 0;

        // -- first round -- generalize by dropping literals
        num_gens += generalize_range(0, m_cube.size());

        // -- if weakening is enabled, start the next round
        if (m_enable_litweak) {
            unsigned cube_sz = m_cube.size();
            // -- second round -- weaken literals that cannot be dropped
            num_gens += weaken_range(0, cube_sz);

            // -- third round -- weaken literals produced in the previous round
            if (cube_sz < m_cube.size())
                num_gens += weaken_range(cube_sz, m_cube.size());
        }

        // if there is at least one generalization, update the lemma
        if (num_gens > 0) {
            TRACE("indgen",
                  tout << "Generalized " << num_gens << " literals\n";);

            // reuse m_core since it is not needed for anything else
            m_core.reset();
            for (auto *lit : m_cube) {
                if (lit != m_true) m_core.push_back(lit);
            }

            TRACE("indgen", tout << "Original: " << lemma->get_cube() << "\n"
                                 << "Generalized: " << m_core << "\n";);

            lemma->update_cube(lemma->get_pob(), m_core);
            lemma->set_level(m_level);
        }

        reset();
    }

    void collect_statistics(statistics &st) const override {
        st.update("time.spacer.solve.reach.gen.ind", m_st.watch.get_seconds());
        st.update("SPACER inductive gen", m_st.count);
        st.update("SPACER inductive gen weaken success", m_st.weaken_success);
        st.update("SPACER inductive gen weaken fail", m_st.weaken_fail);
    }

    void reset_statistics() override { m_st.reset(); }
};
} // namespace

namespace spacer {
lemma_generalizer *
alloc_lemma_inductive_generalizer(spacer::context &ctx,
                                  bool only_array_eligible,
                                  bool enable_literal_weakening) {
    return alloc(lemma_inductive_generalizer, ctx, only_array_eligible,
                 enable_literal_weakening);
}

} // namespace spacer