mirror of https://github.com/Z3Prover/z3 synced 2025-04-23 09:05:31 +00:00

Spacer Global Guidance (#6026)

* Make spacer_sem_matcher::reset() public

* Add .clang-format for src/muz/spacer

* Mark substitution::get_bindings() as const

* Fix in spacer_antiunify

* Various helper methods in spacer_util

Minor functions to compute number of free variables, detect presence of certain
sub-expressions, etc.

The diff is ugly because of clang-format

* Add spacer_cluster for clustering lemmas

A cluster of lemmas is a set of lemmas that are all instances of the same
pattern, where a pattern is a qff formula with free variables.

Currently, the instances are required to be explicit, that is, they are all
obtained by substituting concrete values (i.e., numbers) for free variables of
the pattern.

Lemmas are clustered in cluster_db in each predicate transformer.
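The pattern/instance relationship can be illustrated with a toy sketch (this is not z3's representation; `literal`, `match_pattern`, and `cluster` are hypothetical stand-ins for the real lemma and substitution machinery):

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <vector>

// Toy literal "term <= constant"; the pattern leaves the constant free.
struct literal {
    std::string term;  // e.g. "x + y"
    int bound;         // concrete numeral
};

// A lemma is an instance of the pattern iff it has the same term skeleton;
// the witnessing "substitution" is the concrete numeral for the free variable.
std::optional<int> match_pattern(const std::string &pattern_term,
                                 const literal &lemma) {
    if (lemma.term != pattern_term) return std::nullopt;
    return lemma.bound;
}

// A cluster collects all lemmas that are instances of one pattern.
struct cluster {
    std::string pattern_term;  // pattern "term <= v0" with free v0
    std::vector<int> instances;
    bool add(const literal &l) {
        auto sub = match_pattern(pattern_term, l);
        if (!sub) return false;
        instances.push_back(*sub);
        return true;
    }
};
```

In spacer the pattern is a full quantifier-free formula and matching is done by `sem_matcher`, but the shape is the same: instances differ from the pattern only in the numerals substituted for its free variables.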

* Integrate spacer_cluster into spacer_context

* Custom clang-format pragmas for spacer_context

spacer_context.(cpp|h) are large and have inconsistent formatting. Disable
clang-format for them until they are merged with the main z3 branch and re-formatted.

* Computation of convex closure and matrix kernel

Various linear algebra functions. The implementations are somewhat preliminary.

Convex closure is implemented via a syntactic convex closure procedure.
Kernel computation handles many common cases.

spacer_arith_kernel_sage implements kernel computation by calling an external
Sage binary. It is used only for debugging and experiments. There is no
link dependence on Sage; if desired, it can be removed.

* Add spacer_concretize

* Utility methods for spacer conjecture rule

* Add spacer_expand_bnd_generalizer

Generalizes arithmetic inequality literals of the form x <= c
by changing the constant c to other constants found in the problem.
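The generalization loop can be sketched as follows (a hypothetical simplification: `expand_bound` and `is_inductive` are illustrative names, and the real generalizer works on lemma literals rather than bare integers):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Given an inductive lemma literal x <= c, try to weaken it to x <= c'
// for larger constants c' harvested from the problem, keeping the largest
// c' that still passes the inductiveness check.
int expand_bound(int c, const std::vector<int> &candidates,
                 const std::function<bool(int)> &is_inductive) {
    int best = c;
    for (int cand : candidates) {
        // only weakening (a larger bound) can generalize x <= c
        if (cand > best && is_inductive(cand)) best = cand;
    }
    return best;
}
```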

* Add spacer_global_generalizer

Global generalizer checks every new lemma against a cluster
of previously learned lemmas, and, if possible, conjectures
a new pob, that, when blocked, generalizes multiple existing
lemmas.

* Remove fp.spacer.print_json option

The option was used to dump the state of spacer into JSON for debugging.

It has been replaced by `fp.spacer.trace_file` that allows dumping an execution
of spacer. The json file can be reconstructed from the trace file elsewhere.

* Workaround for segfault in spacer_proof_utils

Issue #3 in hgvk94/z3

Segfault in some proof reductions. Avoid it by bailing out of the reduction.

* Revert bug for incomplete models

* Use local fresh variables in spacer_global_generalizer

* Cleanup of spacer_convex_closure

* Allow arbitrary expressions to name cols in convex_closure

* WIP: convex closure

* WIP: convex closure

* Fix bindings order in spacer_global_generalizer

The matcher creates the substitution using std_order, which is the
reverse of the expected order (variable 0 is last). Adjust the code
appropriately.
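The index flip can be sketched in isolation (a toy helper, not spacer's `substitution` API):

```cpp
#include <cassert>
#include <vector>

// In std_order the binding for variable 0 is stored last, so when the
// surrounding code expects the binding of variable i at position i,
// flip the index.
template <typename T>
const T &binding_for_var(const std::vector<T> &std_order_bindings,
                         unsigned i) {
    return std_order_bindings[std_order_bindings.size() - 1 - i];
}
```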

* Increase verbosity level for smt_context stats

* Dead code in qe_mbp

* bug fixes in spacer_global_generalizer::subsumer

* Partially remove dependence on the size of m_alphas

Allow m_alphas to be larger than the number of alpha variables currently in use.
This helps reuse them across multiple calls to convex closure.

* Subtle bug in kernel computation

Coefficient was being passed by reference and, therefore, was
being changed indirectly.

In the process, updated the code to be more generic to avoid rational
computation in the middle of matrix manipulation.
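The aliasing bug can be reproduced in miniature (a hypothetical sketch, not the actual kernel code): when the coefficient is a reference into the very row being scaled, the first write changes the coefficient for every subsequent iteration.

```cpp
#include <cassert>

// Buggy variant: coeff may be a reference into the row, so scaling the
// row also rescales coeff mid-loop and corrupts subsequent updates.
void scale_row_buggy(int *row, int n, const int &coeff) {
    for (int j = 0; j < n; ++j) row[j] *= coeff;  // row may alias coeff
}

// Fixed variant: take the coefficient by value so later writes to the
// row cannot change it.
void scale_row_fixed(int *row, int n, int coeff) {
    for (int j = 0; j < n; ++j) row[j] *= coeff;
}
```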

* another test for sparse_matrix_ops::kernel

* Implementation of matrix kernel using Fraction Free Elimination

Ensures that the kernel is integral for integer matrices. All divisions are exact.
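The exact-division property of fraction-free elimination can be illustrated with Bareiss elimination on a dense integer matrix (a sketch only: z3's version works on sparse matrices and computes the kernel rather than the determinant, and this toy omits pivoting for zero pivots):

```cpp
#include <cassert>
#include <vector>

// Bareiss fraction-free elimination: every division is by the previous
// pivot and is provably exact, so all intermediate entries stay integral.
// Returns the determinant (the last pivot). Assumes nonzero pivots.
long long bareiss_det(std::vector<std::vector<long long>> a) {
    const int n = (int)a.size();
    long long prev = 1;
    for (int k = 0; k < n - 1; ++k) {
        for (int i = k + 1; i < n; ++i)
            for (int j = k + 1; j < n; ++j)
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) / prev;
        prev = a[k][k];
    }
    return a[n - 1][n - 1];
}
```

The same one-step elimination rule, applied to an augmented matrix, keeps a kernel basis integral for integer input, which is the property the commit relies on.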

* clang-format sparse_matrix_ops.h

* another implementation of ffe kernel in sparse_matrix_ops

* Re-do arith_kernel and convex_closure

* update spacer_global_generalization for new subsumer

* remove spacer.gg.use_sage parameter

* cleanup of spacer_global_generalizer

* Removed dependency on sage

* fix in spacer_convex_closure

* spacer_sem_matcher: consider an additional semantic matching

disabled until it is shown useful

* spacer_global_generalizer: improve do_conjecture

 - if conjecture does not apply to pob, use lemma instead
 - better normalization
 - improve debug prints

* spacer_conjecture: formatting

* spacer_cluster: improve debug prints

* spacer_context: improve debug prints

* spacer_context: re-queue may pobs

enabled even if global re-queue is disabled

* spacer_cluster print formatting

* reset methods on pob

* cleanup of print and local variable names

* formatting

* reset generalization data once it has been used

* refactored extra pob creation during global guidance

* fix bug copying sparse matrix into spacer matrix

* bug fix in spacer_convex_closure

* formatting change in spacer_context

* spacer_cluster: get_min_lvl

choose the level based on the pob as well as the lemmas

* spacer_context: add desired_level to pob

desired_level indicates at which level the pob should be proved.
A pob will be pushed to desired_level if necessary.

* spacer_context: renamed subsume stats

the names of the success/failed stats were switched

* spacer_convex_closure: fix prototype of is_congruent_mod()

* spacer_convex_closure: hacks in infer_div_pred()

* spacer_util: do not expand literals with mod

By default, an equality literal t=p is expanded into t<=p && t>=p.

Disable the expansion when t contains a 'mod' operator, since such
expansion is usually not helpful for divisibility.
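A minimal sketch of the guard, on string-level terms for illustration only (spacer operates on `expr` trees and detects the operator structurally, not by substring search):

```cpp
#include <cassert>
#include <string>

// Expand "t = p" into "t <= p && t >= p" only when t contains no 'mod'
// operator; for divisibility constraints the split form is rarely useful.
bool should_expand_eq(const std::string &term) {
    return term.find("mod") == std::string::npos;
}

std::string expand_eq(const std::string &t, const std::string &p) {
    if (!should_expand_eq(t))
        return t + " = " + p;  // keep the equality intact
    return t + " <= " + p + " && " + t + " >= " + p;
}
```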

* spacer_util: rename m_util into m_arith

* spacer_util: cleanup normalize()

* spacer_util: formatting

* spacer_context: formatting cleanup on subsume and conjecture

* spacer_context: fix handling may pobs when abs_weakness is enabled

A pob might be undef, so weakness must be bumped up

* spacer_arith_kernel: enhance debug print

* spacer_global_generalizer: improve matching on conjecture

* spacer_global_generalizer: set desired level on conjecture pob

* spacer_global_generalizer: debug print

* spacer_global_generalizer: set min level on new pobs

the new level should not be higher than that of the pob that was generalized

* spacer_global_generalizer: do not re-create closed pobs

If a generalized pob exists and is closed, do not re-create it.

* spacer_context: normalize twice

* spacer_context: forward propagate only same kind of pobs

* sketch of inductive generalizer

A better implementation of the inductive generalizer that, in addition to
dropping literals, also attempts to weaken them.

The current implementation is a sketch, to be extended based on examples and
requirements.

* fix ordering in spacer_cluster_util

* fix resetting of substitution matcher in spacer_conjecture

The old code did not reset the substitution provided to the sem_matcher.
Thus, once the substitution was matched (i.e., one literal of interest was
found), no other literal would be matched.
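The failure mode can be reproduced with a toy single-variable matcher (hypothetical names; the real code uses `sem_matcher` and `substitution`): a stale binding from a previous match rejects every later literal unless the substitution is reset per attempt.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Toy matcher: pattern variable "v" binds to any literal, but an existing
// binding must agree with the new literal, so a stale binding blocks it.
struct toy_matcher {
    std::map<std::string, std::string> sub;  // variable -> bound literal
    bool match(const std::string &var, const std::string &lit) {
        auto it = sub.find(var);
        if (it != sub.end()) return it->second == lit;  // stale binding
        sub[var] = lit;
        return true;
    }
    void reset() { sub.clear(); }
};

// Count literals of interest, resetting the substitution per attempt.
int count_matches(toy_matcher &m, const std::vector<std::string> &lits) {
    int n = 0;
    for (const auto &l : lits) {
        m.reset();  // the fix: without this only the first literal matches
        if (m.match("v", l)) ++n;
    }
    return n;
}
```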

* add spacer_util is_normalized() method

used for debugging only

* simplify normalization of pob expressions

pob expressions are normalized to increase syntactic matching.
Some of the normalization rules seem out of place, so removing them for now.

* fix in spacer_global_generalizer

If conjecture fails, do not try other generalization strategies -- they will not apply.

* fix in spacer_context

do not check that may pob is blocked by existing lemmas.
It is likely to be blocked. Our goal is to block it again and generalize
to a new lemma.

This can be further improved by moving directly to generalization when pob is
blocked by existing lemmas...

Co-authored-by: hgvk94 <hgvk94@gmail.com>
Arie Gurfinkel 2022-08-30 18:47:00 -04:00 committed by GitHub
parent 1a79d92f3a
commit d2b618df23
41 changed files with 6063 additions and 1917 deletions


@@ -169,7 +169,6 @@ def_module_params('fp',
('spacer.p3.share_lemmas', BOOL, False, 'Share frame lemmas'),
('spacer.p3.share_invariants', BOOL, False, "Share invariants lemmas"),
('spacer.min_level', UINT, 0, 'Minimal level to explore'),
('spacer.print_json', SYMBOL, '', 'Print pobs tree in JSON format to a given file'),
('spacer.trace_file', SYMBOL, '', 'Log file for progress events'),
('spacer.ctp', BOOL, True, 'Enable counterexample-to-pushing'),
('spacer.use_inc_clause', BOOL, True, 'Use incremental clause to represent trans'),
@@ -181,4 +180,10 @@ def_module_params('fp',
('spacer.use_lim_num_gen', BOOL, False, 'Enable limit numbers generalizer to get smaller numbers'),
('spacer.logic', SYMBOL, '', 'SMT-LIB logic to configure internal SMT solvers'),
('spacer.arith.solver', UINT, 2, 'arithmetic solver: 0 - no solver, 1 - bellman-ford based solver (diff. logic only), 2 - simplex based solver, 3 - floyd-warshall based solver (diff. logic only) and no theory combination 4 - utvpi, 5 - infinitary lra, 6 - lra solver'),
('spacer.global', BOOL, False, 'Enable global guidance'),
('spacer.gg.concretize', BOOL, True, 'Enable global guidance concretize'),
('spacer.gg.conjecture', BOOL, True, 'Enable global guidance conjecture'),
('spacer.gg.subsume', BOOL, True, 'Enable global guidance subsume'),
('spacer.use_iuc', BOOL, True, 'Enable Interpolating Unsat Core(IUC) for lemma generalization'),
('spacer.expand_bnd', BOOL, False, 'Enable expand-bound lemma generalization'),
))


@@ -0,0 +1,8 @@
---
BasedOnStyle: LLVM
AllowShortFunctionsOnASingleLine: All
IndentWidth: '4'
AllowShortBlocksOnASingleLine: true
AllowShortIfStatementsOnASingleLine: true
AllowShortLoopsOnASingleLine: true
...


@@ -10,6 +10,7 @@ z3_add_component(spacer
spacer_prop_solver.cpp
spacer_sym_mux.cpp
spacer_util.cpp
spacer_cluster_util.cpp
spacer_iuc_solver.cpp
spacer_legacy_mbp.cpp
spacer_proof_utils.cpp
@@ -22,12 +23,19 @@ z3_add_component(spacer
spacer_sem_matcher.cpp
spacer_quant_generalizer.cpp
spacer_arith_generalizers.cpp
spacer_global_generalizer.cpp
spacer_ind_lemma_generalizer.cpp
spacer_expand_bnd_generalizer.cpp
spacer_cluster.cpp
spacer_callback.cpp
spacer_json.cpp
spacer_iuc_proof.cpp
spacer_mbc.cpp
spacer_pdr.cpp
spacer_sat_answer.cpp
spacer_concretize.cpp
spacer_convex_closure.cpp
spacer_conjecture.cpp
spacer_arith_kernel.cpp
COMPONENT_DEPENDENCIES
arith_tactics
core_tactics


@@ -155,6 +155,7 @@ void anti_unifier::operator()(expr *e1, expr *e2, expr_ref &res,
m_pinned.push_back(u);
m_cache.insert(n1, n2, u);
}
m_todo.pop_back();
}
expr *r;


@@ -0,0 +1,108 @@
/**++
Copyright (c) 2020 Arie Gurfinkel
Module Name:
spacer_arith_kernel.cpp
Abstract:
Compute kernel of a matrix
Author:
Hari Govind
Arie Gurfinkel
Notes:
--*/
#include "muz/spacer/spacer_arith_kernel.h"
#include "math/simplex/sparse_matrix_def.h"
#include "math/simplex/sparse_matrix_ops.h"
using namespace spacer;
bool spacer_arith_kernel::compute_kernel() {
SASSERT(m_matrix.num_rows() > 1);
if (false && m_matrix.compute_linear_deps(m_kernel)) {
// the matrix cannot be reduced further
if (m_matrix.num_cols() - m_kernel.num_rows() <= 1) return true;
m_kernel.reset(m_kernel.num_cols());
SASSERT(m_matrix.num_cols() > 2);
}
if (m_matrix.num_cols() > 2) m_st.m_failed++;
if (m_plugin /* && m_matrix.num_cols() > 2 */) {
return m_plugin->compute_kernel(m_matrix, m_kernel, m_basic_vars);
}
return false;
}
namespace {
class simplex_arith_kernel_plugin : public spacer_arith_kernel::plugin {
public:
simplex_arith_kernel_plugin() {}
bool compute_kernel(const spacer_matrix &in, spacer_matrix &out,
vector<unsigned> &basics) override {
using qmatrix = simplex::sparse_matrix<simplex::mpq_ext>;
unsynch_mpq_manager m;
qmatrix qmat(m);
// extra column for column of 1
qmat.ensure_var(in.num_cols());
for (unsigned i = 0, n_rows = in.num_rows(); i < n_rows; ++i) {
auto row_id = qmat.mk_row();
unsigned j, n_cols;
for (j = 0, n_cols = in.num_cols(); j < n_cols; ++j) {
qmat.add_var(row_id, in.get(i, j).to_mpq(), j);
}
qmat.add_var(row_id, rational::one().to_mpq(), n_cols);
}
TRACE("gg", qmat.display(tout););
qmatrix kern(m);
simplex::sparse_matrix_ops::kernel_ffe<simplex::mpq_ext>(qmat, kern,
basics);
out.reset(kern.num_vars());
vector<rational> vec;
for (auto row : kern.get_rows()) {
vec.reset();
vec.reserve(kern.num_vars(), rational(0));
for (auto &[coeff, v] : kern.get_row(row)) {
vec[v] = rational(coeff);
}
out.add_row(vec);
}
TRACE("gg", {
tout << "Computed kernel\n";
qmat.display(tout);
tout << "\n";
kern.display(tout);
tout << "\n";
tout << "basics: " << basics << "\n";
out.display(tout);
});
return out.num_rows() > 0;
}
void collect_statistics(statistics &st) const override {}
void reset_statistics() override {}
void reset() override {}
};
} // namespace
namespace spacer {
spacer_arith_kernel::plugin *mk_simplex_kernel_plugin() {
return alloc(simplex_arith_kernel_plugin);
}
} // namespace spacer


@@ -0,0 +1,93 @@
#pragma once
/**++
Copyright (c) 2020 Arie Gurfinkel
Module Name:
spacer_arith_kernel.h
Abstract:
Compute kernel of a matrix
Author:
Hari Govind
Arie Gurfinkel
Notes:
--*/
#include "spacer_matrix.h"
#include "util/statistics.h"
namespace spacer {
/**
Computes a kernel of a matrix.
*/
class spacer_arith_kernel {
public:
class plugin {
public:
virtual ~plugin() {}
virtual bool compute_kernel(const spacer_matrix &in_matrix,
spacer_matrix &out_kernel,
vector<unsigned> &basics) = 0;
virtual void collect_statistics(statistics &st) const = 0;
virtual void reset_statistics() = 0;
virtual void reset() = 0;
};
protected:
struct stats {
unsigned m_failed;
stats() { reset(); }
void reset() { m_failed = 0; }
};
stats m_st;
/// Input matrix for which kernel is to be computed
const spacer_matrix &m_matrix;
/// Output matrix representing the kernel
spacer_matrix m_kernel;
/// columns in the kernel that correspond to basic vars
vector<unsigned> m_basic_vars;
scoped_ptr<plugin> m_plugin;
public:
spacer_arith_kernel(spacer_matrix &matrix)
: m_matrix(matrix), m_kernel(0, 0) {}
virtual ~spacer_arith_kernel() = default;
void set_plugin(spacer_arith_kernel::plugin *plugin) { m_plugin = plugin; }
/// Computes kernel of a matrix
/// returns true if the computation was successful
/// use \p spacer_arith_kernel::get_kernel() to get the kernel
bool compute_kernel();
bool operator()() { return compute_kernel(); }
const spacer_matrix &get_kernel() const { return m_kernel; }
const vector<unsigned> &get_basic_vars() const { return m_basic_vars; }
void reset() {
m_kernel = spacer_matrix(0, 0);
if (m_plugin) m_plugin->reset();
}
virtual void collect_statistics(statistics &st) const {
st.update("SPACER arith kernel failed", m_st.m_failed);
if (m_plugin) { m_plugin->collect_statistics(st); }
}
virtual void reset_statistics() {
m_st.reset();
if (m_plugin) m_plugin->reset_statistics();
}
};
spacer_arith_kernel::plugin *mk_simplex_kernel_plugin();
} // namespace spacer


@@ -0,0 +1,397 @@
/*++
Copyright (c) 2020 Arie Gurfinkel
Module Name:
spacer_cluster.cpp
Abstract:
Discover and mark lemma clusters
Author:
Hari Govind V K
Arie Gurfinkel
--*/
#include <algorithm>
#include "ast/arith_decl_plugin.h"
#include "ast/ast_util.h"
#include "ast/rewriter/var_subst.h"
#include "ast/substitution/substitution.h"
#include "muz/spacer/spacer_antiunify.h"
#include "muz/spacer/spacer_cluster.h"
#include "muz/spacer/spacer_context.h"
#include "muz/spacer/spacer_manager.h"
#include "muz/spacer/spacer_util.h"
#include "smt/tactic/unit_subsumption_tactic.h"
#include "util/mpq.h"
#include "util/vector.h"
#define MAX_CLUSTER_SIZE 5
#define MAX_CLUSTERS 5
#define GAS_INIT 10
namespace spacer {
using var_offset = std::pair<unsigned, unsigned>;
lemma_cluster::lemma_cluster(const expr_ref &pattern)
: m(pattern.get_manager()), m_arith(m), m_bv(m), m_ref_count(0),
m_pattern(pattern), m_matcher(m), m_gas(GAS_INIT) {
m_num_vars = get_num_vars(m_pattern);
}
lemma_cluster::lemma_cluster(const lemma_cluster &other)
: m(other.get_manager()), m_arith(m), m_bv(m), m_ref_count(0),
m_pattern(other.get_pattern()), m_num_vars(other.m_num_vars),
m_matcher(m), m_gas(other.get_gas()) {
for (const auto &li : other.get_lemmas()) { m_lemma_vec.push_back(li); }
}
/// Get a conjunction of all the lemmas in cluster
void lemma_cluster::get_conj_lemmas(expr_ref &e) const {
expr_ref_vector conj(m);
for (const auto &lem : get_lemmas()) {
conj.push_back(lem.get_lemma()->get_expr());
}
e = mk_and(conj);
}
bool lemma_cluster::contains(const lemma_ref &lemma) {
for (const auto &li : get_lemmas()) {
if (lemma->get_expr() == li.get_lemma()->get_expr()) return true;
}
return false;
}
unsigned lemma_cluster::get_min_lvl() {
if (m_lemma_vec.empty()) return 0;
unsigned lvl = m_lemma_vec[0].get_lemma()->level();
for (auto l : m_lemma_vec) { lvl = std::min(lvl, l.get_lemma()->level()); }
// if all lemmas are at infinity, use the level of the lowest pob
if (is_infty_level(lvl)) {
for (auto l : m_lemma_vec) {
if (l.get_lemma()->has_pob())
lvl = std::min(lvl, l.get_lemma()->get_pob()->level());
}
}
return lvl;
}
/// Checks whether \p e matches the pattern of the cluster
/// Returns true on success and sets \p sub to the corresponding substitution
bool lemma_cluster::match(const expr_ref &e, substitution &sub) {
bool pos;
var_offset var;
expr_offset r;
m_matcher.reset();
bool is_match = m_matcher(m_pattern, e, sub, pos);
if (!(is_match && pos)) return false;
unsigned n_binds = sub.get_num_bindings();
auto is_numeral = [&](expr *e) {
return m_arith.is_numeral(e) || m_bv.is_numeral(e);
};
// All the matches should be numerals
for (unsigned i = 0; i < n_binds; i++) {
sub.get_binding(i, var, r);
if (!is_numeral(r.get_expr())) return false;
}
return true;
}
bool lemma_cluster::can_contain(const lemma_ref &lemma) {
substitution sub(m);
expr_ref cube(m);
sub.reserve(1, m_num_vars);
cube = mk_and(lemma->get_cube());
normalize_order(cube, cube);
return match(cube, sub);
}
lemma_cluster::lemma_info *
lemma_cluster::get_lemma_info(const lemma_ref &lemma) {
SASSERT(contains(lemma));
for (auto &li : m_lemma_vec) {
if (lemma == li.get_lemma()) { return &li; }
}
UNREACHABLE();
return nullptr;
}
/// Removes subsumed lemmas in the cluster
///
/// Removed lemmas are placed into \p removed_lemmas
void lemma_cluster::rm_subsumed(lemma_info_vector &removed_lemmas) {
removed_lemmas.reset();
if (m_lemma_vec.size() <= 1) return;
// set up and run the simplifier
tactic_ref simplifier = mk_unit_subsumption_tactic(m);
goal_ref g(alloc(goal, m, false, false, false));
goal_ref_buffer result;
for (auto l : m_lemma_vec) { g->assert_expr(l.get_lemma()->get_expr()); }
(*simplifier)(g, result);
SASSERT(result.size() == 1);
goal *r = result[0];
// nothing removed
if (r->size() == m_lemma_vec.size()) return;
// collect removed lemmas
lemma_info_vector keep;
for (auto lem : m_lemma_vec) {
bool found = false;
for (unsigned i = 0; i < r->size(); i++) {
if (lem.get_lemma()->get_expr() == r->form(i)) {
found = true;
keep.push_back(lem);
TRACE("cluster_stats_verb", tout << "Keeping lemma "
<< lem.get_lemma()->get_cube()
<< "\n";);
break;
}
}
if (!found) {
TRACE("cluster_stats_verb", tout << "Removing subsumed lemma "
<< lem.get_lemma()->get_cube()
<< "\n";);
removed_lemmas.push_back(lem);
}
}
m_lemma_vec.reset();
m_lemma_vec.append(keep);
}
/// Add a lemma to a cluster
///
/// Removes subsumed lemmas if \p subs_check is true
///
/// Returns false if lemma does not match the pattern or if it is already in the
/// cluster. Repetition of lemmas is avoided by doing a linear scan over the
/// lemmas in the cluster. Adding a lemma can reduce the size of the cluster due
/// to subsumption reduction.
bool lemma_cluster::add_lemma(const lemma_ref &lemma, bool subsume) {
substitution sub(m);
expr_ref cube(m);
sub.reserve(1, m_num_vars);
cube = mk_and(lemma->get_cube());
normalize_order(cube, cube);
if (!match(cube, sub)) return false;
// cluster already contains the lemma
if (contains(lemma)) return false;
TRACE("cluster_stats_verb",
tout << "Trying to add lemma " << lemma->get_cube() << "\n";);
lemma_cluster::lemma_info li(lemma, sub);
m_lemma_vec.push_back(li);
if (subsume) {
lemma_info_vector removed_lemmas;
rm_subsumed(removed_lemmas);
for (auto rm : removed_lemmas) {
// There is going to be at most one removed lemma that matches li
// if there is one, return false since the new lemma was not added
if (rm.get_lemma() == li.get_lemma()) return false;
}
}
TRACE("cluster_stats", tout << "Added lemma\n" << mk_and(lemma->get_cube()) << "\n"
<< "to existing cluster\n" << m_pattern << "\n";);
return true;
}
lemma_cluster_finder::lemma_cluster_finder(ast_manager &_m)
: m(_m), m_arith(m), m_bv(m) {}
/// Check whether \p cube1 and \p cube2 differ only in interpreted constants
bool lemma_cluster_finder::are_neighbours(const expr_ref &cube1,
const expr_ref &cube2) {
SASSERT(is_ground(cube1));
SASSERT(is_ground(cube2));
anti_unifier antiunify(m);
expr_ref pat(m);
substitution sub1(m), sub2(m);
antiunify(cube1, cube2, pat, sub1, sub2);
SASSERT(sub1.get_num_bindings() == sub2.get_num_bindings());
return is_numeric_sub(sub1) && is_numeric_sub(sub2);
}
/// Compute antiunification of \p cube with all formulas in \p fmls.
///
/// Should return
/// \exist res (\forall f \in fmls (\exist i_sub res[i_sub] == f))
/// However, the algorithm is incomplete: it returns such a res iff
/// res \in {antiU(cube, e) | e \in fmls}
/// Returns true if res is found
/// TODO: do complete n-ary anti-unification. Not done now
/// because anti_unifier does not support free variables
bool lemma_cluster_finder::anti_unify_n_intrp(const expr_ref &cube,
expr_ref_vector &fmls,
expr_ref &res) {
expr_ref_vector patterns(m);
expr_ref pat(m);
anti_unifier antiunify(m);
substitution sub1(m), sub2(m);
TRACE("cluster_stats_verb",
tout << "Trying to generate a general pattern for " << cube
<< " neighbours are " << fmls << "\n";);
// collect candidates for res
for (expr *c : fmls) {
antiunify.reset();
sub1.reset();
sub2.reset();
SASSERT(are_neighbours(cube, {c, m}));
antiunify(cube, expr_ref(c, m), pat, sub1, sub2);
patterns.push_back(pat);
}
// go through all the patterns to see if there is a pattern which is general
// enough to include all lemmas.
bool is_general_pattern = false, pos = true, all_same = true;
sem_matcher matcher(m);
unsigned n_vars_pat = 0;
for (expr *e : patterns) {
TRACE("cluster_stats_verb",
tout << "Checking pattern " << mk_pp(e, m) << "\n";);
is_general_pattern = true;
n_vars_pat = get_num_vars(e);
all_same = all_same && n_vars_pat == 0;
for (auto *lcube : fmls) {
matcher.reset();
sub1.reset();
sub1.reserve(1, n_vars_pat);
if (!(matcher(e, lcube, sub1, pos) && pos)) {
// this pattern is no good
is_general_pattern = false;
break;
}
}
if (is_general_pattern) {
SASSERT(e != nullptr);
TRACE("cluster_stats",
tout << "Found a general pattern\n" << mk_pp(e, m) << "\n";);
// found a good pattern
res = expr_ref(e, m);
return true;
}
}
CTRACE("cluster_stats", !all_same,
tout << "Failed to find a general pattern for cluster. Cube is: "
<< cube << " Patterns are " << patterns << "\n";);
return false;
}
/// Add a new lemma \p lemma to a cluster
///
/// Creates a new cluster for the lemma if necessary
void lemma_cluster_finder::cluster(lemma_ref &lemma) {
scoped_watch _w_(m_st.watch);
pred_transformer &pt = (lemma->get_pob())->pt();
// check whether the lemma has already been added
if (pt.clstr_contains(lemma)) return;
/// Add the lemma to a cluster it is matched against
lemma_cluster *clstr = pt.clstr_match(lemma);
if (clstr && clstr->get_size() <= MAX_CLUSTER_SIZE) {
TRACE("cluster_stats_verb", {
tout << "Trying to add lemma\n" << lemma->get_cube()
<< " to an existing cluster\n";
for (auto lem : clstr->get_lemmas())
tout << lem.get_lemma()->get_cube() << "\n";
});
clstr->add_lemma(lemma);
return;
}
/// Don't create more than MAX_CLUSTERS clusters
if (clstr && pt.clstr_count(clstr->get_pattern()) > MAX_CLUSTERS) {
return;
}
// Try to create a new cluster with lemma even if it can belong to an
// oversized cluster. The new cluster will not contain any lemma that is
// already in another cluster.
lemma_ref_vector all_lemmas;
pt.get_all_lemmas(all_lemmas, false);
expr_ref lcube(m), cube(m);
lcube = mk_and(lemma->get_cube());
normalize_order(lcube, lcube);
expr_ref_vector lma_cubes(m);
lemma_ref_vector neighbours;
for (auto *l : all_lemmas) {
cube.reset();
cube = mk_and(l->get_cube());
normalize_order(cube, cube);
// make sure that l is not in any other clusters
if (are_neighbours(lcube, cube) && cube != lcube &&
!pt.clstr_contains(l)) {
neighbours.push_back(l);
lma_cubes.push_back(cube);
}
}
if (neighbours.empty()) return;
// compute the most general pattern to which lemmas fit
expr_ref pattern(m);
bool is_cluster = anti_unify_n_intrp(lcube, lma_cubes, pattern);
// no general pattern
if (!is_cluster || get_num_vars(pattern) == 0) return;
// When creating a cluster, its size can exceed MAX_CLUSTER_SIZE. The size
// limit applies only when adding new lemmas to an existing cluster, and the
// exact number is arbitrary. What matters is that a cluster is not allowed
// to grow indefinitely: for example, given a cluster in which one lemma
// subsumes all the others, GSpacer produces the exact same pob no matter
// how big the cluster is, which can lead to divergence. The subsumption
// check used here is based on unit propagation; it is not complete.
lemma_cluster *cluster = pt.mk_cluster(pattern);
TRACE("cluster_stats",
tout << "created new cluster with pattern:\n" << pattern << "\n"
<< " and lemma cube:\n" << lcube << "\n";);
IF_VERBOSE(2, verbose_stream() << "\ncreated new cluster with pattern: "
<< pattern << "\n"
<< " and lemma cube: " << lcube << "\n";);
for (const lemma_ref &l : neighbours) {
SASSERT(cluster->can_contain(l));
bool added = cluster->add_lemma(l, false);
CTRACE("cluster_stats", added,
tout << "Added neighbour lemma\n" << mk_and(l->get_cube()) << "\n";);
}
// finally add the lemma and do subsumption check
cluster->add_lemma(lemma, true);
SASSERT(cluster->get_size() >= 1);
}
void lemma_cluster_finder::collect_statistics(statistics &st) const {
st.update("time.spacer.solve.reach.cluster", m_st.watch.get_seconds());
}
} // namespace spacer


@@ -0,0 +1,176 @@
#pragma once
/*++
Copyright (c) 2020 Arie Gurfinkel
Module Name:
spacer_cluster.h
Abstract:
Discover and mark lemma clusters
Author:
Hari Govind V K
Arie Gurfinkel
--*/
#include <ast/ast_util.h>
#include <ast/substitution/substitution.h>
#include <muz/spacer/spacer_sem_matcher.h>
#include <util/ref_vector.h>
#include <util/statistics.h>
#include <util/stopwatch.h>
#define GAS_POB_COEFF 5
namespace spacer {
class lemma;
using lemma_ref = ref<lemma>;
/// Representation of a cluster of lemmas
///
/// A cluster of lemmas is a collection of lemma instances. A cluster is
/// defined by a \p pattern that is a qff formula with free variables, and
/// contains lemmas that are instances of the pattern (i.e., obtained from the
/// pattern by substitution of constants for variables). That is, each lemma
/// in the cluster matches the pattern.
class lemma_cluster {
/// Lemma in a cluster
///
/// A lemma and a substitution witnessing that lemma is an instance of a
/// pattern
class lemma_info {
// a lemma
lemma_ref m_lemma;
// a substitution such that for some pattern, \p m_lemma is an instance
// substitution is stored in std_order for quantifiers (i.e., reverse of
// expected)
substitution m_sub;
public:
lemma_info(const lemma_ref &body, const substitution &sub)
: m_lemma(body), m_sub(sub) {}
const lemma_ref &get_lemma() const { return m_lemma; }
const substitution &get_sub() const { return m_sub; }
};
public:
using lemma_info_vector = vector<lemma_cluster::lemma_info, true>;
private:
ast_manager &m;
arith_util m_arith;
bv_util m_bv;
// reference counter
unsigned m_ref_count;
// pattern defining the cluster
expr_ref m_pattern;
unsigned m_num_vars;
// vector of lemmas in the cluster
lemma_info_vector m_lemma_vec;
// shared matcher object to match lemmas against the pattern
sem_matcher m_matcher;
// The number of times CSM has to be tried using this cluster
unsigned m_gas;
/// Remove subsumed lemmas in the cluster.
///
/// Returns list of removed lemmas in \p removed_lemmas
void rm_subsumed(lemma_info_vector &removed_lemmas);
/// Checks whether \p e matches m_pattern.
///
/// Returns true on success and sets \p sub to the corresponding
/// substitution
bool match(const expr_ref &e, substitution &sub);
ast_manager &get_manager() const { return m; }
public:
lemma_cluster(const expr_ref &pattern);
lemma_cluster(const lemma_cluster &other);
const lemma_info_vector &get_lemmas() const { return m_lemma_vec; }
void dec_gas() {
if (m_gas > 0) m_gas--;
}
unsigned get_gas() const { return m_gas; }
unsigned get_pob_gas() const { return GAS_POB_COEFF * m_lemma_vec.size(); }
/// Get a conjunction of all the lemmas in cluster
void get_conj_lemmas(expr_ref &e) const;
/// Try to add \p lemma to cluster. Remove subsumed lemmas if \p subs_check
/// is true
///
/// Returns false if lemma does not match the pattern or if it is already in
/// the cluster. Repetition of lemmas is avoided by doing a linear scan over
/// the lemmas in the cluster. Adding a lemma can reduce the size of the
/// cluster due to subsumption
bool add_lemma(const lemma_ref &lemma, bool subsume = false);
bool contains(const lemma_ref &lemma);
bool can_contain(const lemma_ref &lemma);
/// Return the minimum level of lemmas in the cluster
unsigned get_min_lvl();
lemma_cluster::lemma_info *get_lemma_info(const lemma_ref &lemma);
unsigned get_size() const { return m_lemma_vec.size(); }
const expr_ref &get_pattern() const { return m_pattern; }
void inc_ref() { ++m_ref_count; }
void dec_ref() {
--m_ref_count;
if (m_ref_count == 0) { dealloc(this); }
}
};
class lemma_cluster_finder {
struct stats {
unsigned max_group_size;
stopwatch watch;
stats() { reset(); }
void reset() {
max_group_size = 0;
watch.reset();
}
};
stats m_st;
ast_manager &m;
arith_util m_arith;
bv_util m_bv;
/// Check whether \p cube and \p lcube differ only in interpreted constants
bool are_neighbours(const expr_ref &cube, const expr_ref &lcube);
/// N-ary antiunify
///
/// Returns whether there is a substitution with only interpreted consts
bool anti_unify_n_intrp(const expr_ref &cube, expr_ref_vector &fmls,
expr_ref &res);
public:
lemma_cluster_finder(ast_manager &m);
/// Add a new lemma \p lemma to a cluster
///
/// Creates a new cluster for the lemma if necessary
void cluster(lemma_ref &lemma);
void collect_statistics(statistics &st) const;
void reset_statistics() { m_st.reset(); }
};
using lemma_info_vector = lemma_cluster::lemma_info_vector;
} // namespace spacer


@@ -0,0 +1,240 @@
/**++
Copyright (c) 2020 Arie Gurfinkel
Module Name:
spacer_cluster_util.cpp
Abstract:
Utility methods for clustering
Author:
Hari Govind
Arie Gurfinkel
Notes:
--*/
#include "ast/arith_decl_plugin.h"
#include "ast/ast.h"
#include "ast/ast_pp.h"
#include "ast/for_each_expr.h"
#include "ast/rewriter/rewriter.h"
#include "ast/rewriter/rewriter_def.h"
#include "ast/rewriter/th_rewriter.h"
#include "muz/spacer/spacer_util.h"
namespace spacer {
/// Arithmetic term order
struct arith_add_less_proc {
const arith_util &m_arith;
arith_add_less_proc(const arith_util &arith) : m_arith(arith) {}
bool operator()(expr *e1, expr *e2) const {
if (e1 == e2) return false;
ast_lt_proc ast_lt;
expr *k1 = nullptr, *t1 = nullptr, *k2 = nullptr, *t2 = nullptr;
// k1*t1 < k2*t2 iff t1 < t2 or t1 == t2 && k1 < k2
// k1 and k2 can be null
if (!m_arith.is_mul(e1, k1, t1)) { t1 = e1; }
if (!m_arith.is_mul(e2, k2, t2)) { t2 = e2; }
SASSERT(t1 && t2);
if (t1 != t2) return ast_lt(t1, t2);
// here: t1 == t2 && k1 != k2
SASSERT(k1 != k2);
// check for null
if (!k1 || !k2) return !k1;
return ast_lt(k1, k2);
}
};
struct bool_and_less_proc {
ast_manager &m;
const arith_util &m_arith;
bool_and_less_proc(ast_manager &mgr, const arith_util &arith)
: m(mgr), m_arith(arith) {}
bool operator()(expr *e1, expr *e2) const {
expr *a1 = nullptr, *a2 = nullptr;
bool is_not1, is_not2;
if (e1 == e2) return false;
is_not1 = m.is_not(e1, a1);
a1 = is_not1 ? a1 : e1;
is_not2 = m.is_not(e2, a2);
a2 = is_not2 ? a2 : e2;
return a1 == a2 ? is_not1 < is_not2 : arith_lt(a1, a2);
}
bool arith_lt(expr *e1, expr *e2) const {
ast_lt_proc ast_lt;
expr *t1, *k1, *t2, *k2;
if (e1 == e2) return false;
if (e1->get_kind() != e2->get_kind()) return e1->get_kind() < e2->get_kind();
if (!is_app(e1)) return ast_lt(e1, e2);
app *a1 = to_app(e1), *a2 = to_app(e2);
if (a1->get_family_id() != a2->get_family_id())
return a1->get_family_id() < a2->get_family_id();
if (a1->get_decl_kind() != a2->get_decl_kind())
return a1->get_decl_kind() < a2->get_decl_kind();
if (!(m_arith.is_le(e1, t1, k1) || m_arith.is_lt(e1, t1, k1) ||
m_arith.is_ge(e1, t1, k1) || m_arith.is_gt(e1, t1, k1))) {
t1 = e1;
k1 = nullptr;
}
if (!(m_arith.is_le(e2, t2, k2) || m_arith.is_lt(e2, t2, k2) ||
m_arith.is_ge(e2, t2, k2) || m_arith.is_gt(e2, t2, k2))) {
t2 = e2;
k2 = nullptr;
}
if (!k1 || !k2) { return k1 == k2 ? ast_lt(t1, t2) : k1 < k2; }
if (t1 == t2) return ast_lt(k1, k2);
if (t1->get_kind() != t2->get_kind())
return t1->get_kind() < t2->get_kind();
if (!is_app(t1)) return ast_lt(t1, t2);
unsigned d1 = to_app(t1)->get_depth();
unsigned d2 = to_app(t2)->get_depth();
if (d1 != d2) return d1 < d2;
// AG: order by the leading uninterpreted constant
expr *u1 = nullptr, *u2 = nullptr;
u1 = get_first_uc(t1);
u2 = get_first_uc(t2);
if (!u1 || !u2) { return u1 == u2 ? ast_lt(t1, t2) : u1 < u2; }
return u1 == u2 ? ast_lt(t1, t2) : ast_lt(u1, u2);
}
/// Returns the first uninterpreted constant of \p e in a left-most traversal
///
/// Returns null when no uninterpreted constant is found.
/// Recursive, assumes that expression is shallow and recursion is bounded.
expr *get_first_uc(expr *e) const {
expr *t, *k;
if (is_uninterp_const(e))
return e;
else if (m_arith.is_add(e)) {
if (to_app(e)->get_num_args() == 0) return nullptr;
expr *a1 = to_app(e)->get_arg(0);
// HG: for 3 + a, returns nullptr
return get_first_uc(a1);
} else if (m_arith.is_mul(e, k, t)) {
return get_first_uc(t);
}
return nullptr;
}
};
// Rewriter for normalize_order()
struct term_ordered_rpp : public default_rewriter_cfg {
ast_manager &m;
arith_util m_arith;
arith_add_less_proc m_add_less;
bool_and_less_proc m_and_less;
term_ordered_rpp(ast_manager &man)
: m(man), m_arith(m), m_add_less(m_arith), m_and_less(m, m_arith) {}
bool is_add(func_decl const *n) const {
return is_decl_of(n, m_arith.get_family_id(), OP_ADD);
}
br_status reduce_app(func_decl *f, unsigned num, expr *const *args,
expr_ref &result, proof_ref &result_pr) {
br_status st = BR_FAILED;
if (is_add(f)) {
ptr_buffer<expr> kids;
kids.append(num, args);
std::stable_sort(kids.data(), kids.data() + kids.size(),
m_add_less);
result = m_arith.mk_add(num, kids.data());
return BR_DONE;
}
if (m.is_and(f)) {
ptr_buffer<expr> kids;
kids.append(num, args);
std::stable_sort(kids.data(), kids.data() + kids.size(),
m_and_less);
result = m.mk_and(num, kids.data());
return BR_DONE;
}
return st;
}
};
// Normalize an arithmetic expression using term order
void normalize_order(expr *e, expr_ref &out) {
params_ref params;
// -- arith_rewriter params
params.set_bool("sort_sums", true);
// params.set_bool("gcd_rounding", true);
// params.set_bool("arith_lhs", true);
// -- poly_rewriter params
// params.set_bool("som", true);
// params.set_bool("flat", true);
// apply theory rewriter
th_rewriter rw1(out.m(), params);
rw1(e, out);
STRACE("spacer_normalize_order'",
tout << "OUT Before:" << mk_pp(out, out.m()) << "\n";);
// apply term ordering
term_ordered_rpp t_ordered(out.m());
rewriter_tpl<term_ordered_rpp> rw2(out.m(), false, t_ordered);
rw2(out.get(), out);
STRACE("spacer_normalize_order'",
tout << "OUT After :" << mk_pp(out, out.m()) << "\n";);
}
/// Multiply an expression \p fml by a rational \p num
///
/// \p fml should be of sort Int, Real, or BitVec
/// multiplication is simplifying
void mul_by_rat(expr_ref &fml, rational num) {
if (num.is_one()) return;
ast_manager &m = fml.get_manager();
arith_util m_arith(m);
bv_util m_bv(m);
expr_ref e(m);
SASSERT(m_arith.is_int_real(fml) || m_bv.is_bv(fml));
if (m_arith.is_int_real(fml)) {
e = m_arith.mk_mul(m_arith.mk_numeral(num, m_arith.is_int(fml)), fml);
} else if (m_bv.is_bv(fml)) {
unsigned sz = m_bv.get_bv_size(fml);
e = m_bv.mk_bv_mul(m_bv.mk_numeral(num, sz), fml);
}
// use theory rewriter to simplify
params_ref params;
params.set_bool("som", true);
params.set_bool("flat", true);
th_rewriter rw(m, params);
rw(e, fml);
}
} // namespace spacer
template class rewriter_tpl<spacer::term_ordered_rpp>;


@@ -0,0 +1,208 @@
/*++
Copyright (c) 2020 Arie Gurfinkel
Module Name:
spacer_concretize.cpp
Abstract:
Concretize a pob
Author:
Hari Govind V K
Arie Gurfinkel
--*/
#include "spacer_concretize.h"
namespace pattern_var_marker_ns {
struct proc {
arith_util &m_arith;
expr_fast_mark2 &m_marks;
proc(arith_util &arith, expr_fast_mark2 &marks)
: m_arith(arith), m_marks(marks) {}
void operator()(var *n) const {}
void operator()(quantifier *q) const {}
void operator()(app const *n) const {
expr *e1, *e2;
if (m_arith.is_mul(n, e1, e2)) {
if (is_var(e1) && !is_var(e2))
m_marks.mark(e2);
else if (is_var(e2) && !is_var(e1))
m_marks.mark(e1);
}
}
};
}; // namespace pattern_var_marker_ns
namespace spacer {
void pob_concretizer::mark_pattern_vars() {
pattern_var_marker_ns::proc proc(m_arith, m_var_marks);
quick_for_each_expr(proc, const_cast<expr *>(m_pattern));
}
bool pob_concretizer::push_out(expr_ref_vector &out, const expr_ref &e) {
// using m_var_marks to mark both variables and expressions sent to out
// the two sets are distinct so we can reuse the same marks
if (!m_var_marks.is_marked(e)) {
m_var_marks.mark(e);
out.push_back(e);
return true;
}
return false;
}
bool pob_concretizer::apply(const expr_ref_vector &cube, expr_ref_vector &out) {
// mark variables that are being split out
mark_pattern_vars();
for (auto *lit : cube) {
if (!apply_lit(lit, out)) {
out.reset();
m_var_marks.reset();
return false;
}
}
m_var_marks.reset();
return true;
}
bool pob_concretizer::is_split_var(expr *e, expr *&var, bool &pos) {
expr *e1, *e2;
rational n;
if (m_var_marks.is_marked(e)) {
var = e;
pos = true;
return true;
} else if (m_arith.is_mul(e, e1, e2) && m_arith.is_numeral(e1, n) &&
m_var_marks.is_marked(e2)) {
var = e2;
pos = !n.is_neg();
return true;
}
return false;
}
void pob_concretizer::split_lit_le_lt(expr *_lit, expr_ref_vector &out) {
expr *e1, *e2;
expr *lit = _lit;
m.is_not(_lit, lit);
VERIFY(m_arith.is_le(lit, e1, e2) || m_arith.is_gt(lit, e1, e2) ||
m_arith.is_lt(lit, e1, e2) || m_arith.is_ge(lit, e1, e2));
ptr_buffer<expr> kids;
expr *var;
bool pos;
expr_ref val(m);
for (auto *arg : *to_app(e1)) {
if (is_split_var(arg, var, pos)) {
val = m_model->operator()(var);
// reuse val to keep the new literal
val = pos ? m_arith.mk_le(var, val) : m_arith.mk_ge(var, val);
push_out(out, val);
} else {
kids.push_back(arg);
}
}
if (kids.empty()) return;
// -- nothing was changed in the literal, move it out as is
if (kids.size() == to_app(e1)->get_num_args()) {
push_out(out, {_lit, m});
return;
}
// create new leftover literal using remaining arguments
expr_ref lhs(m);
if (kids.size() == 1) {
lhs = kids.get(0);
} else
lhs = m_arith.mk_add(kids.size(), kids.data());
expr_ref rhs = m_model->operator()(lhs);
expr_ref new_lit(m_arith.mk_le(lhs, rhs), m);
push_out(out, new_lit);
}
void pob_concretizer::split_lit_ge_gt(expr *_lit, expr_ref_vector &out) {
expr *e1, *e2;
expr *lit = _lit;
m.is_not(_lit, lit);
VERIFY(m_arith.is_le(lit, e1, e2) || m_arith.is_gt(lit, e1, e2) ||
m_arith.is_lt(lit, e1, e2) || m_arith.is_ge(lit, e1, e2));
ptr_buffer<expr> kids;
expr *var;
bool pos;
expr_ref val(m);
for (auto *arg : *to_app(e1)) {
if (is_split_var(arg, var, pos)) {
val = m_model->operator()(var);
// reuse val to keep the new literal
val = pos ? m_arith.mk_ge(var, val) : m_arith.mk_le(var, val);
push_out(out, val);
} else {
kids.push_back(arg);
}
}
if (kids.empty()) return;
// -- nothing was changed in the literal, move it out as is
if (kids.size() == to_app(e1)->get_num_args()) {
push_out(out, {_lit, m});
return;
}
// create new leftover literal using remaining arguments
expr_ref lhs(m);
if (kids.size() == 1) {
lhs = kids.get(0);
} else
lhs = m_arith.mk_add(kids.size(), kids.data());
expr_ref rhs = m_model->operator()(lhs);
expr_ref new_lit(m_arith.mk_ge(lhs, rhs), m);
push_out(out, new_lit);
}
bool pob_concretizer::apply_lit(expr *_lit, expr_ref_vector &out) {
expr *lit = _lit;
bool is_neg = m.is_not(_lit, lit);
// split literals of the form a1*x1 + ... + an*xn ~ c, where c is a
// constant, ~ is <, <=, >, or >=, and the top level operator of LHS is +
expr *e1, *e2;
if ((m_arith.is_lt(lit, e1, e2) || m_arith.is_le(lit, e1, e2)) &&
m_arith.is_add(e1)) {
SASSERT(m_arith.is_numeral(e2));
if (!is_neg) {
split_lit_le_lt(_lit, out);
} else {
split_lit_ge_gt(_lit, out);
}
} else if ((m_arith.is_gt(lit, e1, e2) || m_arith.is_ge(lit, e1, e2)) &&
m_arith.is_add(e1)) {
SASSERT(m_arith.is_numeral(e2));
if (!is_neg) {
split_lit_ge_gt(_lit, out);
} else {
split_lit_le_lt(_lit, out);
}
} else {
out.push_back(_lit);
}
return true;
}
} // namespace spacer


@@ -0,0 +1,77 @@
#pragma once
/*++
Copyright (c) 2020 Arie Gurfinkel
Module Name:
spacer_concretize.h
Abstract:
Concretize a pob
Author:
Hari Govind V K
Arie Gurfinkel
--*/
#include "ast/ast.h"
#include "ast/rewriter/expr_safe_replace.h"
#include "muz/spacer/spacer_context.h"
#include "muz/spacer/spacer_util.h"
#include "tactic/core/ctx_simplify_tactic.h"
#include "util/obj_ref_hashtable.h"
#include "util/rational.h"
namespace spacer {
class pob_concretizer {
ast_manager &m;
arith_util m_arith;
model_ref &m_model;
const expr *m_pattern;
expr_fast_mark2 m_var_marks;
// Marks all constants to be split in the pattern
void mark_pattern_vars();
// Adds a literal to \p out (unless it is already there)
bool push_out(expr_ref_vector &out, const expr_ref &e);
// Determines whether \p e is a*var, for some variable in \p m_var_marks
// Sets \p pos to sign(a)
bool is_split_var(expr *e, expr *&var, bool &pos);
// Splits a < or <= literal using the model
void split_lit_le_lt(expr *lit, expr_ref_vector &out);
// See split_lit_le_lt
void split_lit_ge_gt(expr *lit, expr_ref_vector &out);
public:
pob_concretizer(ast_manager &_m, model_ref &model, const expr *pattern)
: m(_m), m_arith(m), m_model(model), m_pattern(pattern) {}
/// Concretize \p cube into conjunction of simpler literals
///
/// Returns true on success and adds new literals to out
/// ensures: mk_and(out) ==> cube
bool apply(expr *cube, expr_ref_vector &out) {
expr_ref_vector flat(m);
flatten_and(cube, flat);
return apply(flat, out);
}
/// Concretizes a vector of literals
bool apply(const expr_ref_vector &cube, expr_ref_vector &out);
/// Concretizes a single literal
///
/// Returns true on success, new literals are added to \p out
bool apply_lit(expr *lit, expr_ref_vector &out);
};
} // namespace spacer


@@ -0,0 +1,98 @@
/*++
Copyright (c) 2019 Microsoft Corporation and Arie Gurfinkel
Module Name:
spacer_conjecture.cpp
Abstract:
Methods to implement conjecture rule in gspacer
Author:
Arie Gurfinkel
Hari Govind
Notes:
--*/
#include "ast/for_each_expr.h"
#include "muz/spacer/spacer_context.h"
#include "muz/spacer/spacer_util.h"
namespace spacer {
/// Returns true if \p lit is LA/BV inequality with a single free variable
bool is_mono_var_lit(expr *lit, ast_manager &m) {
expr *e;
bv_util bv(m);
arith_util a_util(m);
if (m.is_not(lit, e)) return is_mono_var_lit(e, m);
if (a_util.is_arith_expr(lit) || bv.is_bv_ule(lit) ||
bv.is_bv_sle(lit)) {
return get_num_vars(lit) == 1 && !has_nonlinear_var_mul(lit, m);
}
return false;
}
/// Returns true if \p pattern contains exactly one mono-var literal
///
/// The unique literal is returned in \p res
bool find_unique_mono_var_lit(const expr_ref &pattern, expr_ref &res) {
if (get_num_vars(pattern) != 1) return false;
ast_manager &m = res.m();
// if the pattern has multiple literals, check whether exactly one of them
// is leq
expr_ref_vector conj(m);
conj.push_back(pattern);
flatten_and(conj);
unsigned count = 0;
for (auto *lit : conj) {
if (is_mono_var_lit(lit, m)) {
if (count) return false;
res = lit;
count++;
}
}
SASSERT(count <= 1);
return count == 1;
}
/// Filter out a given literal \p lit from a list of literals
///
/// Returns true if at least one literal was filtered out;
/// \p out contains all the remaining (not filtered) literals
bool filter_out_lit(const expr_ref_vector &vec, const expr_ref &lit, expr_ref_vector &out) {
ast_manager &m = vec.get_manager();
bool dirty = false, pos = false;
sem_matcher matcher(m);
substitution sub(m);
out.reset();
unsigned lit_num_vars = get_num_vars(lit.get());
SASSERT(!(m.is_not(lit) && m.is_eq(to_app(lit)->get_arg(0))));
for (auto &c : vec) {
sub.reset();
sub.reserve(1, lit_num_vars);
matcher.reset();
if (matcher(lit, c, sub, pos) && pos) {
if (is_numeric_sub(sub)) {
dirty = true;
continue;
}
}
out.push_back(c);
}
CTRACE("global", dirty,
tout << "Filtered " << lit << " from " << vec << "\n got " << out << "\n";);
return dirty;
}
} // namespace spacer


@@ -20,6 +20,7 @@ Notes:
--*/
// clang-format off
#include <sstream>
#include <iomanip>
@@ -51,30 +52,36 @@ Notes:
#include "muz/transforms/dl_mk_rule_inliner.h"
#include "muz/spacer/spacer_qe_project.h"
#include "muz/spacer/spacer_sat_answer.h"
#include "muz/spacer/spacer_concretize.h"
#include "muz/spacer/spacer_global_generalizer.h"
#define WEAKNESS_MAX 65535
namespace spacer {
/// pob -- proof obligation
pob::pob (pob* parent, pred_transformer& pt,
unsigned level, unsigned depth, bool add_to_parent):
m_ref_count (0),
m_parent (parent), m_pt (pt),
m_post (m_pt.get_ast_manager ()),
m_binding(m_pt.get_ast_manager()),
m_new_post (m_pt.get_ast_manager ()),
m_level (level), m_depth (depth),
m_open (true), m_use_farkas (true), m_in_queue(false),
m_weakness(0), m_blocked_lvl(0) {
pob::pob(pob *parent, pred_transformer &pt, unsigned level, unsigned depth,
bool add_to_parent)
: m_ref_count(0), m_parent(parent), m_pt(pt),
m_post(m_pt.get_ast_manager()), m_binding(m_pt.get_ast_manager()),
m_new_post(m_pt.get_ast_manager()), m_level(level), m_depth(depth),
m_desired_level(0), m_open(true), m_use_farkas(true), m_in_queue(false), m_is_conjecture(false),
m_enable_local_gen(true), m_enable_concretize(false), m_is_subsume(false),
m_enable_expand_bnd_gen(false), m_weakness(0), m_blocked_lvl(0),
m_concretize_pat(m_pt.get_ast_manager()),
m_gas(0) {
if (add_to_parent && m_parent) {
m_parent->add_child(*this);
}
if (m_parent) {
m_is_conjecture = m_parent->is_conjecture();
// m_is_subsume = m_parent->is_subsume();
m_gas = m_parent->get_gas();
}
}
void pob::set_post(expr* post) {
app_ref_vector empty_binding(get_ast_manager());
set_post(post, empty_binding);
set_post(post, {get_ast_manager()});
}
void pob::set_post(expr* post, app_ref_vector const &binding) {
@@ -91,6 +98,11 @@ void pob::inherit(pob const &p) {
SASSERT(!is_in_queue());
SASSERT(m_parent == p.m_parent);
SASSERT(&m_pt == &p.m_pt);
// -- HACK: normalize second time because th_rewriter is not idempotent
if (m_post != p.m_post) {
normalize(m_post, m_post, false, false);
}
SASSERT(m_post == p.m_post);
SASSERT(!m_new_post);
@@ -99,11 +111,21 @@ void pob::inherit(pob const &p) {
m_level = p.m_level;
m_depth = p.m_depth;
m_desired_level = std::max(m_desired_level, p.m_desired_level);
m_open = p.m_open;
m_use_farkas = p.m_use_farkas;
m_is_conjecture = p.m_is_conjecture;
m_enable_local_gen = p.m_enable_local_gen;
m_enable_concretize = p.m_enable_concretize;
m_is_subsume = p.m_is_subsume;
m_enable_expand_bnd_gen = p.m_enable_expand_bnd_gen;
m_weakness = p.m_weakness;
m_derivation = nullptr;
m_gas = p.m_gas;
}
void pob::close () {
@@ -756,6 +778,8 @@ void pred_transformer::collect_statistics(statistics& st) const
m_must_reachable_watch.get_seconds ());
st.update("time.spacer.ctp", m_ctp_watch.get_seconds());
st.update("time.spacer.mbp", m_mbp_watch.get_seconds());
// -- Max cluster size can decrease during run
st.update("SPACER max cluster size", m_cluster_db.get_max_cluster_size());
}
void pred_transformer::reset_statistics()
@@ -1193,6 +1217,7 @@ expr_ref pred_transformer::get_origin_summary (model &mdl,
for (auto* s : summary) {
if (!is_quantifier(s) && !mdl.is_true(s)) {
TRACE("spacer", tout << "Summary not true in the model: " << mk_pp(s, m) << "\n";);
return expr_ref(m);
}
}
@@ -1258,12 +1283,11 @@ void pred_transformer::get_pred_bg_invs(expr_ref_vector& out) {
/// \brief Returns true if the obligation is already blocked by current lemmas
bool pred_transformer::is_blocked (pob &n, unsigned &uses_level)
{
bool pred_transformer::is_blocked(pob &n, unsigned &uses_level, model_ref *model) {
ensure_level (n.level ());
prop_solver::scoped_level _sl (*m_solver, n.level ());
m_solver->set_core (nullptr);
m_solver->set_model (nullptr);
m_solver->set_model(model);
expr_ref_vector post(m), _aux(m);
post.push_back (n.post ());
@@ -1335,7 +1359,7 @@ lbool pred_transformer::is_reachable(pob& n, expr_ref_vector* core,
model_ref* model, unsigned& uses_level,
bool& is_concrete, datalog::rule const*& r,
bool_vector& reach_pred_used,
unsigned& num_reuse_reach)
unsigned& num_reuse_reach, bool use_iuc)
{
TRACE("spacer",
tout << "is-reachable: " << head()->get_name() << " level: "
@@ -1349,7 +1373,8 @@ lbool pred_transformer::is_reachable(pob& n, expr_ref_vector* core,
// prepare the solver
prop_solver::scoped_level _sl(*m_solver, n.level());
prop_solver::scoped_subset_core _sc (*m_solver, !n.use_farkas_generalizer ());
prop_solver::scoped_subset_core _sc(
*m_solver, !(use_iuc && n.use_farkas_generalizer()));
prop_solver::scoped_weakness _sw(*m_solver, 0,
ctx.weak_abs() ? n.weakness() : UINT_MAX);
m_solver->set_core(core);
@@ -1406,9 +1431,10 @@ lbool pred_transformer::is_reachable(pob& n, expr_ref_vector* core,
r = find_rule(**model, is_concrete, reach_pred_used, num_reuse_reach);
TRACE("spacer",
tout << "reachable is_sat: " << is_sat << " "
<< r << " is_concrete " << is_concrete << " rused: " << reach_pred_used << "\n";
ctx.get_datalog_context().get_rule_manager().display_smt2(*r, tout) << "\n";
);
<< r << " is_concrete " << is_concrete << " rused: " << reach_pred_used << "\n";);
CTRACE("spacer", r,
ctx.get_datalog_context().get_rule_manager().display_smt2(*r, tout);
tout << "\n";);
TRACE("spacer_sat", tout << "model is:\n" << **model << "\n";);
}
@@ -2272,7 +2298,8 @@ context::context(fp_params const& params, ast_manager& m) :
m_last_result(l_undef),
m_inductive_lvl(0),
m_expanded_lvl(0),
m_json_marshaller(this),
m_global_gen(nullptr),
m_expand_bnd_gen(nullptr),
m_trace_stream(nullptr) {
params_ref p;
@@ -2290,6 +2317,7 @@ context::context(fp_params const& params, ast_manager& m) :
m_pool1 = alloc(solver_pool, pool1_base.get(), max_num_contexts);
m_pool2 = alloc(solver_pool, pool2_base.get(), max_num_contexts);
m_lmma_cluster = alloc(lemma_cluster_finder, m);
updt_params();
if (m_params.spacer_trace_file().is_non_empty_string()) {
@@ -2302,6 +2330,7 @@ context::context(fp_params const& params, ast_manager& m) :
context::~context()
{
reset_lemma_generalizers();
dealloc(m_lmma_cluster);
reset();
if (m_trace_stream) {
@@ -2347,7 +2376,13 @@ void context::updt_params() {
m_restart_initial_threshold = m_params.spacer_restart_initial_threshold();
m_pdr_bfs = m_params.spacer_gpdr_bfs();
m_use_bg_invs = m_params.spacer_use_bg_invs();
m_global = m_params.spacer_global();
m_expand_bnd = m_params.spacer_expand_bnd();
m_gg_conjecture = m_params.spacer_gg_conjecture();
m_gg_subsume = m_params.spacer_gg_subsume();
m_gg_concretize = m_params.spacer_gg_concretize();
m_use_iuc = m_params.spacer_use_iuc();
if (m_use_gpdr) {
// set options to be compatible with GPDR
m_weak_abs = false;
@@ -2665,7 +2700,8 @@ void context::init_lemma_generalizers()
//m_lemma_generalizers.push_back (alloc (unsat_core_generalizer, *this));
if (m_use_ind_gen) {
m_lemma_generalizers.push_back(alloc(lemma_bool_inductive_generalizer, *this, 0));
// m_lemma_generalizers.push_back(alloc(lemma_bool_inductive_generalizer, *this, 0));
m_lemma_generalizers.push_back(alloc_lemma_inductive_generalizer(*this));
}
// after the lemma is minimized (maybe should also do before)
@@ -2678,6 +2714,15 @@ void context::init_lemma_generalizers()
m_lemma_generalizers.push_back(alloc(lemma_array_eq_generalizer, *this));
}
if (m_global) {
m_global_gen = alloc(lemma_global_generalizer, *this);
m_lemma_generalizers.push_back(m_global_gen);
}
if (m_expand_bnd) {
m_expand_bnd_gen = alloc(lemma_expand_bnd_generalizer, *this);
m_lemma_generalizers.push_back(m_expand_bnd_gen);
}
if (m_validate_lemmas) {
m_lemma_generalizers.push_back(alloc(lemma_sanity_checker, *this));
}
@@ -3020,9 +3065,7 @@ lbool context::solve_core (unsigned from_lvl)
if (check_reachability()) { return l_true; }
if (lvl > 0 && m_use_propagate)
if (propagate(m_expanded_lvl, lvl, UINT_MAX)) { dump_json(); return l_false; }
dump_json();
if (propagate(m_expanded_lvl, lvl, UINT_MAX)) { return l_false; }
if (is_inductive()){
return l_false;
@@ -3070,6 +3113,8 @@ void context::log_expand_pob(pob &n) {
if (n.parent()) pob_id = std::to_string(n.parent()->post()->get_id());
*m_trace_stream << "** expand-pob: " << n.pt().head()->get_name()
<< (n.is_conjecture() ? " CONJ" : "")
<< (n.is_subsume() ? " SUBS" : "")
<< " level: " << n.level()
<< " depth: " << (n.depth() - m_pob_queue.min_depth())
<< " exprID: " << n.post()->get_id() << " pobID: " << pob_id << "\n"
@@ -3077,13 +3122,18 @@
}
TRACE("spacer", tout << "expand-pob: " << n.pt().head()->get_name()
<< (n.is_conjecture() ? " CONJ" : "")
<< (n.is_subsume() ? " SUBS" : "")
<< " level: " << n.level()
<< " depth: " << (n.depth() - m_pob_queue.min_depth())
<< " fvsz: " << n.get_free_vars_size() << "\n"
<< " fvsz: " << n.get_free_vars_size()
<< " gas: " << n.get_gas() << "\n"
<< mk_pp(n.post(), m) << "\n";);
STRACE("spacer_progress",
tout << "** expand-pob: " << n.pt().head()->get_name()
<< (n.is_conjecture() ? " CONJ" : "")
<< (n.is_subsume() ? " SUBS" : "")
<< " level: " << n.level()
<< " depth: " << (n.depth() - m_pob_queue.min_depth()) << "\n"
<< mk_epp(n.post(), m) << "\n\n";);
@@ -3151,13 +3201,21 @@ bool context::check_reachability ()
node = last_reachable;
last_reachable = nullptr;
if (m_pob_queue.is_root(*node)) { return true; }
if (is_reachable (*node->parent())) {
last_reachable = node->parent ();
// do not check the parent if its may pob status is different
if (node->parent()->is_may_pob() != node->is_may_pob())
{
last_reachable = nullptr;
break;
}
if (is_reachable(*node->parent())) {
last_reachable = node->parent();
SASSERT(last_reachable->is_closed());
last_reachable->close ();
last_reachable->close();
} else if (!node->parent()->is_closed()) {
/* bump node->parent */
node->parent ()->bump_weakness();
node->parent()->bump_weakness();
}
}
@@ -3202,20 +3260,37 @@ bool context::check_reachability ()
case l_true:
SASSERT(m_pob_queue.size() == old_sz);
SASSERT(new_pobs.empty());
node->close();
last_reachable = node;
last_reachable->close ();
if (m_pob_queue.is_root(*node)) {return true;}
if (m_pob_queue.is_root(*node)) { return true; }
break;
case l_false:
SASSERT(m_pob_queue.size() == old_sz);
// re-queue all pobs introduced by global gen and any pobs that can be blocked at a higher level
for (auto pob : new_pobs) {
if (is_requeue(*pob)) {m_pob_queue.push(*pob);}
TRACE("gg", tout << "pob: is_may_pob " << pob->is_may_pob()
<< " with post:\n"
<< mk_pp(pob->post(), m)
<< "\n";);
//if ((pob->is_may_pob() && pob->post() != node->post()) || is_requeue(*pob)) {
if (is_requeue(*pob)) {
TRACE("gg",
tout << "Adding back blocked pob at level "
<< pob->level()
<< " and depth " << pob->depth() << "\n");
m_pob_queue.push(*pob);
}
}
if (m_pob_queue.is_root(*node)) {return false;}
break;
case l_undef:
SASSERT(m_pob_queue.size() == old_sz);
// collapse may pobs if the reachability of one of them cannot
// be estimated
if ((node->is_may_pob()) && new_pobs.size() == 0) {
close_all_may_parents(node);
}
for (auto pob : new_pobs) {m_pob_queue.push(*pob);}
break;
}
@@ -3228,7 +3303,10 @@ bool context::check_reachability ()
/// returns true if the given pob can be re-scheduled
bool context::is_requeue(pob &n) {
if (!m_push_pob) {return false;}
// if have not reached desired level, then requeue
if (n.level() <= n.desired_level()) { return true; }
if (!m_push_pob) { return false; }
unsigned max_depth = m_push_pob_max_depth;
return (n.level() >= m_pob_queue.max_level() ||
m_pob_queue.max_level() - n.level() <= max_depth);
@@ -3270,9 +3348,9 @@ bool context::is_reachable(pob &n)
unsigned saved = n.level ();
// TBD: don't expose private field
n.m_level = infty_level ();
lbool res = n.pt().is_reachable(n, nullptr, &mdl,
uses_level, is_concrete, r,
reach_pred_used, num_reuse_reach);
lbool res =
n.pt().is_reachable(n, nullptr, &mdl, uses_level, is_concrete, r,
reach_pred_used, num_reuse_reach, m_use_iuc);
n.m_level = saved;
if (res != l_true || !is_concrete) {
@@ -3326,16 +3404,6 @@ bool context::is_reachable(pob &n)
return next ? is_reachable(*next) : true;
}
void context::dump_json()
{
if (m_params.spacer_print_json().is_non_empty_string()) {
std::ofstream of;
of.open(m_params.spacer_print_json().bare_str());
m_json_marshaller.marshal(of);
of.close();
}
}
void context::predecessor_eh()
{
for (unsigned i = 0; i < m_callbacks.size(); i++) {
@@ -3446,14 +3514,15 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
log_expand_pob(n);
stopwatch watch;
IF_VERBOSE (1, verbose_stream () << "expand: " << n.pt ().head ()->get_name ()
<< " (" << n.level () << ", "
IF_VERBOSE(1, verbose_stream()
<< "expand: " << n.pt().head()->get_name() << " ("
<< n.level() << ", "
<< (n.depth () - m_pob_queue.min_depth ()) << ") "
<< (n.use_farkas_generalizer () ? "FAR " : "SUB ")
<< " w(" << n.weakness() << ") "
<< n.post ()->get_id ();
verbose_stream().flush ();
watch.start (););
<< (n.is_conjecture() ? "CONJ " : "")
<< (n.is_subsume() ? " SUBS" : "") << " w("
<< n.weakness() << ") " << n.post()->get_id();
verbose_stream().flush(); watch.start(););
// used in case n is unreachable
unsigned uses_level = infty_level ();
@@ -3468,25 +3537,59 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
unsigned num_reuse_reach = 0;
if (m_push_pob && n.pt().is_blocked(n, uses_level)) {
if (!n.is_may_pob() && m_push_pob && n.pt().is_blocked(n, uses_level)) {
// if (!m_pob_queue.is_root (n)) n.close ();
IF_VERBOSE (1, verbose_stream () << " K "
<< std::fixed << std::setprecision(2)
<< watch.get_seconds () << "\n";);
n.inc_level();
out.push_back(&n);
return l_false;
}
<< watch.get_seconds () << "\n";);
n.inc_level();
out.push_back(&n);
return l_false;
}
if (/* XXX noop */ n.pt().is_qblocked(n)) {
STRACE("spacer_progress",
tout << "This pob can be blocked by instantiation\n";);
}
if (/* XXX noop */ n.pt().is_qblocked(n)) {
STRACE("spacer_progress",
tout << "This pob can be blocked by instantiation\n";);
}
if ((n.is_may_pob()) && n.get_gas() == 0) {
TRACE("global", tout << "Can't prove may pob. Collapsing "
<< mk_pp(n.post(), m) << "\n";);
m_stats.m_num_pob_ofg++;
return l_undef;
}
// Decide whether to concretize pob
// get a model that satisfies the pob and the current set of lemmas
// TODO: if push_pob is enabled, avoid calling is_blocked twice
if (m_gg_concretize && n.is_concretize_enabled() &&
!n.pt().is_blocked(n, uses_level, &model)) {
TRACE("global",
tout << "Concretizing: " << mk_pp(n.post(), m) << "\n"
<< "\t" << n.get_gas() << " attempts left\n";);
SASSERT(m_global_gen);
if (pob *new_pob = m_global_gen->mk_concretize_pob(n, model)) {
m_stats.m_num_concretize++;
out.push_back(new_pob);
out.push_back(&n);
IF_VERBOSE(1, verbose_stream()
<< " C " << std::fixed << std::setprecision(2)
<< watch.get_seconds() << "\n";);
unsigned gas = n.get_gas();
SASSERT(gas > 0);
// dec gas for orig pob to limit number of concretizations
new_pob->set_gas(gas--);
n.set_gas(gas);
return l_undef;
}
}
model = nullptr;
predecessor_eh();
lbool res = n.pt ().is_reachable (n, &cube, &model, uses_level, is_concrete, r,
reach_pred_used, num_reuse_reach);
lbool res =
n.pt().is_reachable(n, &cube, &model, uses_level, is_concrete, r,
reach_pred_used, num_reuse_reach, m_use_iuc);
if (model) model->set_model_completion(false);
if (res == l_undef && model) res = handle_unknown(n, r, *model);
@@ -3534,7 +3637,18 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
out.push_back (next);
}
}
if(n.is_subsume())
m_stats.m_num_subsume_pob_reachable++;
if(n.is_conjecture())
m_stats.m_num_conj_failed++;
CTRACE("global", n.is_conjecture(),
tout << "Failed to block conjecture "
<< n.post()->get_id() << "\n";);
CTRACE("global", n.is_subsume(),
tout << "Failed to block subsume generalization "
<< mk_pp(n.post(), m) << "\n";);
IF_VERBOSE(1, verbose_stream () << (next ? " X " : " T ")
<< std::fixed << std::setprecision(2)
@@ -3565,7 +3679,7 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
throw unknown_exception();
}
case l_false: {
// n is unreachable, create new summary facts
// n is unreachable, create a new lemma
timeit _timer (is_trace_enabled("spacer_timeit"),
"spacer::expand_pob::false",
verbose_stream ());
@@ -3573,40 +3687,68 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
// -- only update expanded level when new lemmas are generated at it.
if (n.level() < m_expanded_lvl) { m_expanded_lvl = n.level(); }
TRACE("spacer", tout << "cube:\n";
for (unsigned j = 0; j < cube.size(); ++j)
tout << mk_pp(cube[j].get(), m) << "\n";);
TRACE("spacer", tout << "cube:\n" << cube << "\n";);
if(n.is_conjecture()) m_stats.m_num_conj_success++;
if(n.is_subsume()) m_stats.m_num_subsume_pob_blckd++;
pob_ref nref(&n);
// -- create lemma from a pob and last unsat core
lemma_ref lemma = alloc(class lemma, pob_ref(&n), cube, uses_level);
lemma_ref lemma_pob;
if (n.is_local_gen_enabled()) {
lemma_pob = alloc(class lemma, nref, cube, uses_level);
// -- run all lemma generalizers
for (unsigned i = 0;
// -- only generalize if lemma was constructed using farkas
n.use_farkas_generalizer() && !lemma_pob->is_false() &&
i < m_lemma_generalizers.size();
++i) {
checkpoint ();
(*m_lemma_generalizers[i])(lemma_pob);
}
} else if (m_global_gen || m_expand_bnd_gen) {
m_stats.m_non_local_gen++;
// -- run all lemma generalizers
for (unsigned i = 0;
// -- only generalize if lemma was constructed using farkas
n.use_farkas_generalizer () && !lemma->is_false() &&
i < m_lemma_generalizers.size(); ++i) {
checkpoint ();
(*m_lemma_generalizers[i])(lemma);
expr_ref_vector pob_cube(m);
n.get_post_simplified(pob_cube);
lemma_pob = alloc(class lemma, nref, pob_cube, n.level());
TRACE("global", tout << "Disabled local gen on pob (id: "
<< n.post()->get_id() << ")\n"
<< mk_pp(n.post(), m) << "\n"
<< "Lemma:\n"
<< mk_and(lemma_pob->get_cube()) << "\n";);
if (m_global_gen) (*m_global_gen)(lemma_pob);
if (m_expand_bnd_gen) (*m_expand_bnd_gen)(lemma_pob);
} else {
lemma_pob = alloc(class lemma, nref, cube, uses_level);
}
DEBUG_CODE(
lemma_sanity_checker sanity_checker(*this);
sanity_checker(lemma);
);
CTRACE("global", n.is_conjecture() || n.is_subsume(),
tout << "Blocked "
<< (n.is_conjecture() ? "conjecture " : "subsume ") << n.post()->get_id()
<< " at level " << n.level()
<< " using lemma\n" << mk_pp(lemma_pob->get_expr(), m) << "\n";);
TRACE("spacer", tout << "invariant state: "
<< (is_infty_level(lemma->level())?"(inductive)":"")
<< mk_pp(lemma->get_expr(), m) << "\n";);
DEBUG_CODE(lemma_sanity_checker sanity_checker(*this);
sanity_checker(lemma_pob););
bool v = n.pt().add_lemma (lemma.get());
if (v) { m_stats.m_num_lemmas++; }
TRACE("spacer",
tout << "invariant state: "
<< (is_infty_level(lemma_pob->level()) ? "(inductive)" : "")
<< mk_pp(lemma_pob->get_expr(), m) << "\n";);
bool is_new = n.pt().add_lemma(lemma_pob.get());
if (is_new) {
if (m_global) m_lmma_cluster->cluster(lemma_pob);
m_stats.m_num_lemmas++;
}
// Optionally update the node to be the negation of the lemma
if (v && m_use_lemma_as_pob) {
if (is_new && m_use_lemma_as_pob) {
expr_ref c(m);
c = mk_and(lemma->get_cube());
c = mk_and(lemma_pob->get_cube());
// check that the post condition is different
if (c != n.post()) {
pob *f = n.pt().find_pob(n.parent(), c);
@@ -3622,6 +3764,28 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
}
}
if (m_global_gen) {
// if global gen is enabled, post-process the pob to create new subsume or conjecture pob
if (pob* new_pob = m_global_gen->mk_subsume_pob(n)) {
new_pob->set_gas(n.get_gas() - 1);
n.set_gas(n.get_gas() - 1);
out.push_back(new_pob);
m_stats.m_num_subsume_pobs++;
TRACE("global_verbose",
tout << "New subsume pob\n" << mk_pp(new_pob->post(), m) << "\n"
<< "gas:" << new_pob->get_gas() << "\n";);
} else if (pob* new_pob = m_gg_conjecture ? m_global_gen->mk_conjecture_pob(n) : nullptr) {
new_pob->set_gas(n.get_gas() - 1);
n.set_gas(n.get_gas() - 1);
out.push_back(new_pob);
m_stats.m_num_conj++;
TRACE("global",
tout << "New conjecture pob\n" << mk_pp(new_pob->post(), m) << "\n";);
}
}
// schedule the node to be placed back in the queue
n.inc_level();
out.push_back(&n);
@@ -3636,7 +3800,24 @@ lbool context::expand_pob(pob& n, pob_ref_buffer &out)
return l_false;
}
case l_undef:
// something went wrong
// if the pob is a may pob, handle specially
if (n.is_may_pob()) {
// do not create children, but bump weakness
// bail out if this does not help
// AG: do not know why this is a good strategy
if (n.weakness() < 10) {
SASSERT(out.empty());
n.bump_weakness();
return expand_pob(n, out);
}
n.close();
m_stats.m_expand_pob_undef++;
IF_VERBOSE(1, verbose_stream() << " UNDEF "
<< std::fixed << std::setprecision(2)
<< watch.get_seconds () << "\n";);
return l_undef;
}
if (n.weakness() < 10 /* MAX_WEAKENSS */) {
bool has_new_child = false;
SASSERT(m_weak_abs);
@@ -3933,6 +4114,11 @@ bool context::create_children(pob& n, datalog::rule const& r,
!mdl.is_true(n.post())))
{ kid->reset_derivation(); }
if (kid->is_may_pob()) {
SASSERT(n.get_gas() > 0);
n.set_gas(n.get_gas() - 1);
kid->set_gas(n.get_gas() - 1);
}
out.push_back(kid);
m_stats.m_num_queries++;
return true;
@@ -3971,6 +4157,17 @@ void context::collect_statistics(statistics& st) const
st.update("SPACER num lemmas", m_stats.m_num_lemmas);
// -- number of restarts taken
st.update("SPACER restarts", m_stats.m_num_restarts);
// -- number of time pob abstraction was invoked
st.update("SPACER conj", m_stats.m_num_conj);
st.update("SPACER conj success", m_stats.m_num_conj_success);
st.update("SPACER conj failed",
m_stats.m_num_conj_failed);
st.update("SPACER pob out of gas", m_stats.m_num_pob_ofg);
st.update("SPACER subsume pob", m_stats.m_num_subsume_pobs);
st.update("SPACER subsume failed", m_stats.m_num_subsume_pob_reachable);
st.update("SPACER subsume success", m_stats.m_num_subsume_pob_blckd);
st.update("SPACER concretize", m_stats.m_num_concretize);
st.update("SPACER non local gen", m_stats.m_non_local_gen);
// -- time to initialize the rules
st.update ("time.spacer.init_rules", m_init_rules_watch.get_seconds ());
@@ -3991,6 +4188,7 @@ void context::collect_statistics(statistics& st) const
for (unsigned i = 0; i < m_lemma_generalizers.size(); ++i) {
m_lemma_generalizers[i]->collect_statistics(st);
}
m_lmma_cluster->collect_statistics(st);
}
void context::reset_statistics()
@@ -4008,6 +4206,7 @@ void context::reset_statistics()
m_lemma_generalizers[i]->reset_statistics();
}
m_lmma_cluster->reset_statistics();
m_init_rules_watch.reset ();
m_solve_watch.reset ();
m_propagate_watch.reset ();
@@ -4093,8 +4292,6 @@ void context::add_constraint (expr *c, unsigned level)
}
void context::new_lemma_eh(pred_transformer &pt, lemma *lem) {
if (m_params.spacer_print_json().is_non_empty_string())
m_json_marshaller.register_lemma(lem);
bool handle=false;
for (unsigned i = 0; i < m_callbacks.size(); i++) {
handle|=m_callbacks[i]->new_lemma();
@@ -4116,10 +4313,7 @@ void context::new_lemma_eh(pred_transformer &pt, lemma *lem) {
}
}
void context::new_pob_eh(pob *p) {
if (m_params.spacer_print_json().is_non_empty_string())
m_json_marshaller.register_pob(p);
}
void context::new_pob_eh(pob *p) { }
bool context::is_inductive() {
// check that inductive level (F infinity) of the query predicate
@@ -4140,6 +4334,10 @@ inline bool pob_lt_proc::operator() (const pob *pn1, const pob *pn2) const
if (n1.depth() != n2.depth()) { return n1.depth() < n2.depth(); }
if (n1.is_subsume() != n2.is_subsume()) { return n1.is_subsume(); }
if (n1.is_conjecture() != n2.is_conjecture()) { return n1.is_conjecture(); }
if (n1.get_gas() != n2.get_gas()) { return n1.get_gas() > n2.get_gas(); }
// -- a more deterministic order of proof obligations in a queue
// if (!n1.get_context ().get_params ().spacer_nondet_tie_break ())
{
@@ -4193,5 +4391,27 @@ inline bool pob_lt_proc::operator() (const pob *pn1, const pob *pn2) const
}
// set gas of each may parent to 0
// TODO: close siblings as well. kids of a pob are not stored in the pob
void context::close_all_may_parents(pob_ref node) {
pob_ref_vector to_do;
to_do.push_back(node.get());
while (to_do.size() != 0) {
pob_ref t = to_do.back();
t->set_gas(0);
if (t->is_may_pob()) {
t->close();
} else
break;
to_do.pop_back();
to_do.push_back(t->parent());
}
}
// construct a simplified version of the post
void pob::get_post_simplified(expr_ref_vector &pob_cube) {
pob_cube.reset();
pob_cube.push_back(m_post);
flatten_and(pob_cube);
simplify_bounds(pob_cube);
}
}

File diff suppressed because it is too large


@@ -0,0 +1,389 @@
/**++
Copyright (c) 2020 Arie Gurfinkel
Module Name:
spacer_convex_closure.cpp
Abstract:
Compute convex closure of polyhedra
Author:
Hari Govind
Arie Gurfinkel
Notes:
--*/
#include "muz/spacer/spacer_convex_closure.h"
#include "ast/rewriter/th_rewriter.h"
namespace {
bool is_int_matrix(const spacer::spacer_matrix &matrix) {
rational val;
for (unsigned i = 0, rows = matrix.num_rows(); i < rows; i++) {
for (unsigned j = 0, cols = matrix.num_cols(); j < cols; j++)
if (!matrix.get(i, j).is_int()) return false;
}
return true;
}
bool is_sorted(const vector<rational> &data) {
for (unsigned i = 0, sz = data.size(); i + 1 < sz; i++) {
if (!(data[i] >= data[i + 1])) return false;
}
return true;
}
/// Check whether all elements of \p data are congruent modulo \p m
bool is_congruent_mod(const vector<rational> &data, const rational &m) {
SASSERT(data.size() > 0);
rational p = data[0] % m;
for (auto k : data)
if (k % m != p) return false;
return true;
}
app *mk_bvadd(bv_util &bv, unsigned num, expr *const *args) {
if (num == 0) return nullptr;
if (num == 1) return is_app(args[0]) ? to_app(args[0]) : nullptr;
if (num == 2) { return bv.mk_bv_add(args[0], args[1]); }
/// XXX no mk_bv_add for n-ary bv_add
return bv.get_manager().mk_app(bv.get_fid(), OP_BADD, num, args);
}
} // namespace
namespace spacer {
convex_closure::convex_closure(ast_manager &_m)
: m(_m), m_arith(m), m_bv(m), m_bv_sz(0), m_enable_implicit(true), m_dim(0),
m_data(0, 0), m_col_vars(m), m_kernel(m_data), m_alphas(m),
m_implicit_cc(m), m_explicit_cc(m) {
m_kernel.set_plugin(mk_simplex_kernel_plugin());
}
void convex_closure::reset(unsigned n_cols) {
m_dim = n_cols;
m_kernel.reset();
m_data.reset(m_dim);
m_col_vars.reset();
m_col_vars.reserve(m_dim);
m_dead_cols.reset();
m_dead_cols.reserve(m_dim, false);
m_alphas.reset();
m_bv_sz = 0;
m_enable_implicit = true;
}
void convex_closure::collect_statistics(statistics &st) const {
st.update("time.spacer.solve.reach.gen.global.cc",
m_st.watch.get_seconds());
st.update("SPACER cc num dim reduction success", m_st.m_num_reductions);
st.update("SPACER cc max reduced dim", m_st.m_max_dim);
m_kernel.collect_statistics(st);
}
// call m_kernel to reduce dimensions of m_data
// return the rank of m_data
unsigned convex_closure::reduce() {
if (m_dim <= 1) return m_dim;
bool has_kernel = m_kernel.compute_kernel();
if (!has_kernel) {
TRACE("cvx_dbg",
tout << "No linear dependencies between pattern vars\n";);
return m_dim;
}
const spacer_matrix &ker = m_kernel.get_kernel();
SASSERT(ker.num_rows() > 0);
SASSERT(ker.num_rows() <= m_dim);
SASSERT(ker.num_cols() == m_dim + 1);
// m_dim - ker.num_rows() is the number of variables that have no linear
// dependencies
for (auto v : m_kernel.get_basic_vars())
// XXX sometimes a constant can be basic, need to find a way to
// switch it to var
if (v < m_dead_cols.size()) m_dead_cols[v] = true;
return m_dim - ker.num_rows();
}
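Here, `reduce()` obtains the rank indirectly from the kernel computed by `spacer_arith_kernel` (rank = `m_dim - ker.num_rows()`). As a standalone illustration of what a rank computation does, the sketch below uses plain Gaussian elimination over doubles; `matrix_rank` and its floating-point tolerance are illustrative assumptions, not Spacer code.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical helper: rank of a dense matrix via Gaussian elimination.
// Spacer instead derives the rank as m_dim - ker.num_rows().
int matrix_rank(std::vector<std::vector<double>> a) {
    const double eps = 1e-9;
    int rows = (int)a.size();
    int cols = rows ? (int)a[0].size() : 0;
    int rank = 0;
    for (int col = 0; col < cols && rank < rows; ++col) {
        // find a pivot row for this column
        int pivot = -1;
        for (int r = rank; r < rows; ++r)
            if (std::fabs(a[r][col]) > eps) { pivot = r; break; }
        if (pivot < 0) continue; // column depends linearly on earlier ones
        std::swap(a[rank], a[pivot]);
        // eliminate the column below the pivot
        for (int r = rank + 1; r < rows; ++r) {
            double f = a[r][col] / a[rank][col];
            for (int c = col; c < cols; ++c) a[r][c] -= f * a[rank][c];
        }
        ++rank;
    }
    return rank;
}
```

A rank deficit of k corresponds to k rows in the kernel, i.e. k linear equalities satisfied by all data points.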
// For row \p row in m_kernel, construct the equality:
//
// row * m_col_vars = 0
//
// In the equality, exactly one variable from m_col_vars is on the lhs
void convex_closure::kernel_row2eq(const vector<rational> &row, expr_ref &out) {
expr_ref_buffer lhs(m);
expr_ref e1(m);
bool is_int = false;
for (unsigned i = 0, sz = row.size(); i < sz; ++i) {
rational val_i = row.get(i);
if (val_i.is_zero()) continue;
SASSERT(val_i.is_int());
if (i < sz - 1) {
e1 = m_col_vars.get(i);
is_int |= m_arith.is_int(e1);
mul_by_rat(e1, val_i);
} else {
e1 = mk_numeral(val_i, is_int);
}
lhs.push_back(e1);
}
e1 = !has_bv() ? mk_add(lhs) : mk_bvadd(m_bv, lhs.size(), lhs.data());
e1 = m.mk_eq(e1, mk_numeral(rational::zero(), is_int));
// revisit this simplification step, it is here only to prevent/simplify
// formula construction everywhere else
params_ref params;
params.set_bool("som", true);
params.set_bool("flat", true);
th_rewriter rw(m, params);
rw(e1, out);
}
/// Generates linear equalities implied by m_data
///
/// The linear equalities are m_kernel * m_col_vars = 0 (where * is matrix
/// multiplication). For example, for the row [0, 1, 0, 1, 1] in m_kernel,
/// the equality v1 = -1*v3 + -1*1 is constructed. The resulting equalities
/// are appended to \p out
void convex_closure::kernel2fmls(expr_ref_vector &out) {
// assume kernel has been computed already
const spacer_matrix &kern = m_kernel.get_kernel();
SASSERT(kern.num_rows() > 0);
TRACE("cvx_dbg", kern.display(tout););
expr_ref eq(m);
for (unsigned i = kern.num_rows(); i > 0; i--) {
auto &row = kern.get_row(i - 1);
kernel_row2eq(row, eq);
out.push_back(eq);
}
}
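The rearrangement done by `kernel_row2eq` can be sketched in isolation: given a kernel row `[c_0, ..., c_{n-1}, c_n]` (last entry the constant), pick the first nonzero coefficient as the lhs variable and solve for it. The function `row_to_eq` below is a hypothetical standalone version over doubles, not the Spacer implementation.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Hypothetical sketch: for a kernel row [c_0, ..., c_{n-1}, const], pick the
// first nonzero coefficient c_i as the lhs variable and solve
//   c_i * v_i = -(sum_{j != i} c_j * v_j + const)
// for v_i. Returns the lhs index and the rhs coefficients (constant last).
std::pair<int, std::vector<double>> row_to_eq(const std::vector<double> &row) {
    int lhs = -1;
    for (std::size_t i = 0; i + 1 < row.size(); ++i)
        if (row[i] != 0) { lhs = (int)i; break; }
    if (lhs < 0) return {-1, {}}; // row constrains no variable
    std::vector<double> rhs;
    for (std::size_t j = 0; j < row.size(); ++j) {
        if ((int)j == lhs) continue;
        rhs.push_back(-row[j] / row[lhs]);
    }
    return {lhs, rhs};
}
```

For the documented example row [0, 1, 0, 1, 1] this picks v1 as lhs and yields rhs coefficients 0, 0, -1 with constant -1, i.e. v1 = -1*v3 + -1*1.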
}
expr *convex_closure::mk_add(const expr_ref_buffer &vec) {
SASSERT(!vec.empty());
expr_ref s(m);
if (vec.size() == 1) {
return vec[0];
} else if (vec.size() > 1) {
return m_arith.mk_add(vec.size(), vec.data());
}
UNREACHABLE();
return nullptr;
}
expr *convex_closure::mk_numeral(const rational &n, bool is_int) {
if (!has_bv())
return m_arith.mk_numeral(n, is_int);
else
return m_bv.mk_numeral(n, m_bv_sz);
}
/// Construct the equality ((m_alphas . m_data[*][i]) = m_col_vars[i])
///
/// Where . is the dot product, m_data[*][i] is
/// the ith column of m_data. Add the result to res_vec.
void convex_closure::cc_col2eq(unsigned col, expr_ref_vector &out) {
SASSERT(!has_bv());
expr_ref_buffer sum(m);
for (unsigned row = 0, sz = m_data.num_rows(); row < sz; row++) {
expr_ref alpha(m);
auto n = m_data.get(row, col);
if (n.is_zero()) {
; // noop
} else {
alpha = m_alphas.get(row);
if (!n.is_one()) {
alpha = m_arith.mk_mul(
m_arith.mk_numeral(n, false /* is_int */), alpha);
}
}
if (alpha) sum.push_back(alpha);
}
SASSERT(!sum.empty());
expr_ref s(m);
s = mk_add(sum);
expr_ref v(m);
expr *vi = m_col_vars.get(col);
v = m_arith.is_int(vi) ? m_arith.mk_to_real(vi) : vi;
out.push_back(m.mk_eq(s, v));
}
void convex_closure::cc2fmls(expr_ref_vector &out) {
sort_ref real_sort(m_arith.mk_real(), m);
expr_ref zero(m_arith.mk_real(rational::zero()), m);
for (unsigned row = 0, sz = m_data.num_rows(); row < sz; row++) {
if (row >= m_alphas.size()) {
m_alphas.push_back(m.mk_fresh_const("a!cc", real_sort));
}
SASSERT(row < m_alphas.size());
// forall j :: alpha_j >= 0
out.push_back(m_arith.mk_ge(m_alphas.get(row), zero));
}
for (unsigned k = 0, sz = m_col_vars.size(); k < sz; k++) {
if (m_col_vars.get(k) && !m_dead_cols[k]) cc_col2eq(k, out);
}
//(\Sum j . m_new_vars[j]) = 1
out.push_back(m.mk_eq(
m_arith.mk_add(m_data.num_rows(),
reinterpret_cast<expr *const *>(m_alphas.data())),
m_arith.mk_real(rational::one())));
}
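The constraints emitted by `cc2fmls` encode membership in the convex hull: alphas are non-negative, sum to one, and reproduce each coordinate. The numeric sketch below (an illustration, not Spacer code) only evaluates a candidate alpha against these three conditions rather than solving for one.

```cpp
#include <cassert>
#include <vector>

// Hypothetical numeric sketch of what the generated constraints assert:
// p lies in the convex hull of the rows of `data` iff some alpha with
// alpha_j >= 0 and sum alpha_j = 1 satisfies, for every column k,
// sum_j alpha_j * data[j][k] == p[k].
bool witnesses_convex_comb(const std::vector<std::vector<double>> &data,
                           const std::vector<double> &alpha,
                           const std::vector<double> &p) {
    double sum = 0;
    for (double a : alpha) {
        if (a < 0) return false; // forall j :: alpha_j >= 0
        sum += a;
    }
    if (sum != 1.0) return false; // (\Sum j . alpha_j) = 1
    for (std::size_t k = 0; k < p.size(); ++k) { // one equality per column
        double v = 0;
        for (std::size_t j = 0; j < data.size(); ++j)
            v += alpha[j] * data[j][k];
        if (v != p[k]) return false;
    }
    return true;
}
```

In the real encoding the alphas are fresh real-sorted constants, so the formula asserts the existence of such a witness rather than checking a given one.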
#define MAX_DIV_BOUND 101
// check whether \exists m, d s.t. data[i] mod m = d. Returns the smallest such
// m and the corresponding d
// TODO: find the largest divisor, not the smallest.
// TODO: improve efficiency
bool convex_closure::infer_div_pred(const vector<rational> &data, rational &m,
rational &d) {
TRACE("cvx_dbg_verb", {
tout << "computing div constraints for ";
for (rational r : data) tout << r << " ";
tout << "\n";
});
SASSERT(data.size() > 1);
SASSERT(is_sorted(data));
m = rational(2);
// special handling for even/odd
if (is_congruent_mod(data, m)) {
mod(data.back(), m, d);
return true;
}
// hard cut off to save time
rational bnd(MAX_DIV_BOUND);
rational big = data.back();
// AG: why (m < big)? Note that 'big' is the smallest element of data
for (; m < big && m < bnd; m++) {
if (is_congruent_mod(data, m)) break;
}
if (m >= big) return false;
if (m == bnd) return false;
mod(data[0], m, d);
SASSERT(d >= rational::zero());
TRACE("cvx_dbg_verb", tout << "div constraint generated. cf " << m
<< " and off " << d << "\n";);
return true;
}
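The divisor search above can be restated without z3's `rational` type. The sketch below is a hypothetical standalone version: it scans moduli from 2 up to a bound and returns the first one to which all points are congruent, ignoring Spacer's extra cut-offs (the even/odd fast path, MAX_DIV_BOUND, and stopping at the smallest datum).

```cpp
#include <cassert>
#include <vector>

// Hypothetical standalone version of the divisor search: find the smallest
// modulus m in [2, bound) such that all data points are congruent mod m,
// returning the common remainder through d.
bool infer_div(const std::vector<long> &data, long bound, long &m, long &d) {
    for (m = 2; m < bound; ++m) {
        long r = ((data[0] % m) + m) % m; // normalize remainder into [0, m)
        bool ok = true;
        for (long x : data)
            if (((x % m) + m) % m != r) { ok = false; break; }
        if (ok) { d = r; return true; }
    }
    return false;
}
```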
bool convex_closure::compute() {
scoped_watch _w_(m_st.watch);
SASSERT(is_int_matrix(m_data));
unsigned rank = reduce();
// store dim var before rewrite
expr_ref var(m_col_vars.get(0), m);
if (rank < dims()) {
m_st.m_num_reductions++;
kernel2fmls(m_explicit_cc);
TRACE("cvx_dbg", tout << "Linear equalities true of the matrix "
<< mk_and(m_explicit_cc) << "\n";);
}
m_st.m_max_dim = std::max(m_st.m_max_dim, rank);
if (rank == 0) {
// AG: Is this possible?
return false;
} else if (rank > 1) {
if (m_enable_implicit) {
TRACE("subsume", tout << "Computing syntactic convex closure\n";);
cc2fmls(m_implicit_cc);
} else {
return false;
}
return true;
}
SASSERT(rank == 1);
cc_1dim(var, m_explicit_cc);
return true;
}
// construct the formula v <= n or v >= n
expr *convex_closure::mk_le_ge(expr *v, rational n, bool is_le) {
if (m_arith.is_int_real(v)) {
expr *en = m_arith.mk_numeral(n, m_arith.is_int(v));
return is_le ? m_arith.mk_le(v, en) : m_arith.mk_ge(v, en);
} else if (m_bv.is_bv(v)) {
expr *en = m_bv.mk_numeral(n, m_bv.get_bv_size(v->get_sort()));
return is_le ? m_bv.mk_ule(v, en) : m_bv.mk_ule(en, v);
} else {
UNREACHABLE();
}
return nullptr;
}
void convex_closure::cc_1dim(const expr_ref &var, expr_ref_vector &out) {
// XXX assumes that var corresponds to col 0
// The convex closure over one dimension is just a bound
vector<rational> data;
m_data.get_col(0, data);
auto gt_proc = [](rational const &x, rational const &y) -> bool {
return x > y;
};
std::sort(data.begin(), data.end(), gt_proc);
// -- compute LB <= var <= UB
expr_ref res(m);
res = var;
// upper-bound
out.push_back(mk_le_ge(res, data[0], true));
// lower-bound
out.push_back(mk_le_ge(res, data.back(), false));
// -- compute divisibility constraints
rational cr, off;
// add div constraints for all variables.
for (unsigned j = 0; j < m_data.num_cols(); j++) {
auto *v = m_col_vars.get(j);
if (v && (m_arith.is_int(v) || m_bv.is_bv(v))) {
data.reset();
m_data.get_col(j, data);
std::sort(data.begin(), data.end(), gt_proc);
if (infer_div_pred(data, cr, off)) {
out.push_back(mk_eq_mod(v, cr, off));
}
}
}
}
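Apart from divisibility constraints, the one-dimensional case reduces to interval bounds. The helper below is a hedged sketch of that core step, mirroring the descending sort done with `gt_proc`; it is illustrative, not the Spacer code.

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Hypothetical sketch of the 1-dimensional case: after sorting the column in
// descending order, the convex closure collapses to LB <= var <= UB, i.e. the
// interval [data.back(), data.front()].
std::pair<long, long> one_dim_bounds(std::vector<long> data) {
    std::sort(data.begin(), data.end(),
              [](long x, long y) { return x > y; }); // descending, like gt_proc
    return {data.back(), data.front()};              // {LB, UB}
}
```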
expr *convex_closure::mk_eq_mod(expr *v, rational d, rational r) {
expr *res = nullptr;
if (m_arith.is_int(v)) {
res = m.mk_eq(m_arith.mk_mod(v, m_arith.mk_int(d)), m_arith.mk_int(r));
} else if (m_bv.is_bv(v)) {
res = m.mk_eq(m_bv.mk_bv_urem(v, m_bv.mk_numeral(d, m_bv_sz)),
m_bv.mk_numeral(r, m_bv_sz));
} else {
UNREACHABLE();
}
return res;
}
} // namespace spacer


@@ -0,0 +1,187 @@
#pragma once
/**++
Copyright (c) 2020 Arie Gurfinkel
Module Name:
spacer_convex_closure.h
Abstract:
Compute convex closure of polyhedra
Author:
Hari Govind
Arie Gurfinkel
Notes:
--*/
#include "ast/arith_decl_plugin.h"
#include "ast/ast.h"
#include "ast/ast_util.h"
#include "muz/spacer/spacer_arith_kernel.h"
#include "muz/spacer/spacer_matrix.h"
#include "muz/spacer/spacer_util.h"
#include "util/statistics.h"
namespace spacer {
/// Computes a convex closure of a set of points
class convex_closure {
struct stats {
unsigned m_num_reductions;
unsigned m_max_dim;
stopwatch watch;
stats() { reset(); }
void reset() {
m_num_reductions = 0;
m_max_dim = 0;
watch.reset();
}
};
stats m_st;
ast_manager &m;
arith_util m_arith;
bv_util m_bv;
// size of all bit vectors in m_col_vars
unsigned m_bv_sz;
// Enable computation of implicit syntactic convex closure
bool m_enable_implicit;
// number of columns in \p m_data
unsigned m_dim;
// A vector of rational valued points
spacer_matrix m_data;
// Variables naming columns in `m_data`
// \p m_col_vars[k] is a var for column \p k
expr_ref_vector m_col_vars;
vector<bool> m_dead_cols;
// Kernel of \p m_data
// Set at the end of computation
spacer_arith_kernel m_kernel;
// Free variables introduced by syntactic convex closure
// These variables are always of sort Real
expr_ref_vector m_alphas;
expr_ref_vector m_implicit_cc;
expr_ref_vector m_explicit_cc;
/// Reduces dimension of \p m_data and returns its rank
unsigned reduce();
/// Constructs an equality corresponding to a given row in the kernel
///
/// The equality conceptually corresponds to
/// row * m_col_vars = 0
/// where row is a row vector and m_col_vars is a column vector.
/// However, the equality is put in a form so that exactly one variable from
/// \p m_col_vars is on the LHS
void kernel_row2eq(const vector<rational> &row, expr_ref &out);
/// Construct all linear equations implied by points in \p m_data
/// This is defined by \p m_kernel * m_col_vars = 0
void kernel2fmls(expr_ref_vector &out);
/// Compute syntactic convex closure of \p m_data
void cc2fmls(expr_ref_vector &out);
/// Construct the equality ((m_alphas . m_data[*][k]) = m_col_vars[k])
///
/// \p m_data[*][k] is the kth column of m_data
/// The equality is added to \p out.
void cc_col2eq(unsigned k, expr_ref_vector &out);
/// Compute one dimensional convex closure over \p var
///
/// \p var is the dimension over which convex closure is computed
/// Result is stored in \p out
void cc_1dim(const expr_ref &var, expr_ref_vector &out);
/// Computes div constraint implied by a set of data points
///
/// Finds the smallest modulus \p m (and corresponding \p d) such that \p m_data[i] mod m = d
/// Returns true if successful
bool infer_div_pred(const vector<rational> &data, rational &m, rational &d);
/// Constructs a formula \p var ~ n , where ~ = is_le ? <= : >=
expr *mk_le_ge(expr *var, rational n, bool is_le);
expr *mk_add(const expr_ref_buffer &vec);
expr *mk_numeral(const rational &n, bool is_int);
/// Returns equality (v = r mod d)
expr *mk_eq_mod(expr *v, rational d, rational r);
bool has_bv() { return m_bv_sz > 0; }
public:
convex_closure(ast_manager &_m);
/// Resets all data points
///
/// n_cols is the number of dimensions of new expected data points
void reset(unsigned n_cols);
/// Turn on support for fixed-size bit-vectors of size \p sz
///
/// Disables syntactic convex closure as a side-effect
void set_bv(unsigned sz) {
SASSERT(sz > 0);
m_bv_sz = sz;
m_enable_implicit = false;
}
/// \brief Name dimension \p i with a variable \p v.
void set_col_var(unsigned i, expr *v) {
SASSERT(i < dims());
SASSERT(m_col_vars.get(i) == nullptr);
m_col_vars[i] = v;
}
/// \brief Return number of dimensions of each point
unsigned dims() const { return m_dim; }
/// \brief Add an n-dimensional point to convex closure
void add_row(const vector<rational> &point) {
SASSERT(point.size() == dims());
m_data.add_row(point);
};
bool operator()() { return this->compute(); }
bool compute();
bool has_implicit() { return !m_implicit_cc.empty(); }
bool has_explicit() { return !m_explicit_cc.empty(); }
/// Returns the implicit component of convex closure (if available)
///
/// Implicit component contains constants from get_alphas() that are
/// implicitly existentially quantified
const expr_ref_vector &get_implicit() { return m_implicit_cc; }
/// \brief Return implicit constants in implicit convex closure
const expr_ref_vector &get_alphas() const { return m_alphas; }
/// Returns the explicit component of convex closure (if available)
///
/// The explicit component is in term of column variables
const expr_ref_vector &get_explicit() { return m_explicit_cc; }
/// Returns constants used to name columns
///
/// Explicit convex closure is in terms of these variables
const expr_ref_vector &get_col_vars() { return m_col_vars; }
void collect_statistics(statistics &st) const;
void reset_statistics() { m_st.reset(); }
};
} // namespace spacer


@@ -0,0 +1,229 @@
/*++
Copyright (c) 2020 Arie Gurfinkel
Module Name:
spacer_expand_bnd_generalizer.cpp
Abstract:
Strengthen lemmas by changing numeral constants inside arithmetic literals
Author:
Hari Govind V K
Arie Gurfinkel
--*/
#include "muz/spacer/spacer_expand_bnd_generalizer.h"
#include "ast/for_each_expr.h"
#include "ast/rewriter/expr_safe_replace.h"
namespace {
/// Returns true if \p e is an arithmetic comparison
///
/// Returns true if \p e is of the form \p lhs op rhs, where
/// op in {<=, <, >, >=}, and rhs is a numeric value
bool is_arith_comp(const expr *e, expr *&lhs, rational &rhs, bool &is_int,
ast_manager &m) {
arith_util arith(m);
expr *e1, *e2;
if (m.is_not(e, e1)) return is_arith_comp(e1, lhs, rhs, is_int, m);
if (arith.is_le(e, lhs, e2) || arith.is_lt(e, lhs, e2) ||
arith.is_ge(e, lhs, e2) || arith.is_gt(e, lhs, e2))
return arith.is_numeral(e2, rhs, is_int);
return false;
}
bool is_arith_comp(const expr *e, expr *&lhs, rational &rhs, ast_manager &m) {
bool is_int;
return is_arith_comp(e, lhs, rhs, is_int, m);
}
bool is_arith_comp(const expr *e, rational &rhs, ast_manager &m) {
expr *lhs;
return is_arith_comp(e, lhs, rhs, m);
}
/// If \p lit is of the form (x op v), replace v with num
///
/// Supports arithmetic literals where op is <, <=, >, >=, or negation
bool update_bound(const expr *lit, rational num, expr_ref &res,
bool negate = false) {
SASSERT(is_app(lit));
ast_manager &m = res.get_manager();
expr *e1;
if (m.is_not(lit, e1)) { return update_bound(e1, num, res, !negate); }
arith_util arith(m);
expr *lhs;
rational val;
bool is_int;
if (!is_arith_comp(lit, lhs, val, is_int, m)) return false;
res = m.mk_app(to_app(lit)->get_decl(), lhs, arith.mk_numeral(num, is_int));
if (negate) { res = m.mk_not(res); }
return true;
}
} // namespace
namespace spacer {
namespace collect_rationals_ns {
/// Finds rationals in an expression
struct proc {
ast_manager &m;
arith_util m_arith;
vector<rational> &m_res;
proc(ast_manager &a_m, vector<rational> &res)
: m(a_m), m_arith(m), m_res(res) {}
void operator()(expr *n) const {}
void operator()(app *n) {
rational val;
if (m_arith.is_numeral(n, val)) m_res.push_back(val);
}
};
} // namespace collect_rationals_ns
/// Extract all numerals from an expression
void collect_rationals(expr *e, vector<rational> &res, ast_manager &m) {
collect_rationals_ns::proc proc(m, res);
quick_for_each_expr(proc, e);
}
lemma_expand_bnd_generalizer::lemma_expand_bnd_generalizer(context &ctx)
: lemma_generalizer(ctx), m(ctx.get_ast_manager()), m_arith(m) {
// -- collect rationals from initial condition and transition relation
for (auto &kv : ctx.get_pred_transformers()) {
collect_rationals(kv.m_value->init(), m_values, m);
collect_rationals(kv.m_value->transition(), m_values, m);
}
// remove duplicates
std::sort(m_values.begin(), m_values.end());
auto last = std::unique(m_values.begin(), m_values.end());
for (unsigned i = 0, sz = std::distance(last, m_values.end()); i < sz; ++i)
m_values.pop_back();
}
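The duplicate removal in the constructor above is the classic sort/unique pattern, with the tail popped element by element. On a `std::vector` the same effect is the idiomatic sort/unique/erase one-liner sketched here (illustrative, not z3 code, whose `vector` type drives the pop_back formulation).

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// The sort/unique/erase idiom: equivalent to sorting, std::unique, and then
// popping the duplicate tail element by element.
void dedupe(std::vector<int> &v) {
    std::sort(v.begin(), v.end()); // group equal values together
    v.erase(std::unique(v.begin(), v.end()), v.end()); // drop adjacent repeats
}
```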
void lemma_expand_bnd_generalizer::operator()(lemma_ref &lemma) {
scoped_watch _w_(m_st.watch);
if (!lemma->get_pob()->is_expand_bnd_enabled()) return;
expr_ref_vector cube(lemma->get_cube());
// -- temporary stores a core
expr_ref_vector core(m);
expr_ref lit(m), new_lit(m);
rational bnd;
// for every literal
for (unsigned i = 0, sz = cube.size(); i < sz; i++) {
lit = cube.get(i);
if (m.is_true(lit)) continue;
if (!is_arith_comp(lit, bnd, m)) continue;
TRACE("expand_bnd", tout << "Attempting to expand " << lit << " inside "
<< cube << "\n";);
// for every value
for (rational n : m_values) {
if (!is_interesting(lit, bnd, n)) continue;
m_st.atmpts++;
TRACE("expand_bnd", tout << "Attempting to expand " << lit
<< " with numeral " << n << "\n";);
// -- update bound on lit
VERIFY(update_bound(lit, n, new_lit));
// -- update lit to new_lit for a new candidate lemma
cube[i] = new_lit;
core.reset();
core.append(cube);
// -- check that candidate is inductive
if (check_inductive(lemma, core)) {
expr_fast_mark1 in_core;
for (auto *e : core) in_core.mark(e);
for (unsigned i = 0, sz = cube.size(); i < sz; ++i) {
if (!in_core.is_marked(cube.get(i))) cube[i] = m.mk_true();
}
// move to next literal if the current has been removed
if (!in_core.is_marked(new_lit)) break;
} else {
// -- candidate not inductive, restore original lit
cube[i] = lit;
}
}
}
// Currently, we allow for only one round of expand bound per lemma
// Mark lemma as already expanded so that it is not generalized in this way
// again
lemma->get_pob()->disable_expand_bnd_gen();
}
/// Check whether \p candidate is a possible generalization for \p lemma.
/// Side-effect: update \p lemma with the new candidate
bool lemma_expand_bnd_generalizer::check_inductive(lemma_ref &lemma,
expr_ref_vector &candidate) {
TRACE("expand_bnd_verb",
tout << "Attempting to update lemma with " << candidate << "\n";);
unsigned uses_level = 0;
auto &pt = lemma->get_pob()->pt();
bool res = pt.check_inductive(lemma->level(), candidate, uses_level,
lemma->weakness());
if (res) {
m_st.success++;
lemma->update_cube(lemma->get_pob(), candidate);
lemma->set_level(uses_level);
TRACE("expand_bnd", tout << "expand_bnd succeeded with "
<< mk_and(candidate) << " at level "
<< uses_level << "\n";);
}
return res;
}
/// Check whether lit ==> lit[val |--> n] (barring special cases). That is,
/// whether \p lit becomes weaker if \p val is replaced with \p n
///
/// \p lit has to be of the form t <= v where v is a numeral.
/// Special cases:
/// In the trivial case in which \p val == \p n, return false.
/// if lit is an equality or the negation of an equality, return true.
bool lemma_expand_bnd_generalizer::is_interesting(const expr *lit, rational val,
rational new_val) {
SASSERT(lit);
// the only case in which negation and non negation agree
if (val == new_val) return false;
if (m.is_eq(lit)) return true;
// negation is the actual negation modulo val == n
expr *e1;
if (m.is_not(lit, e1)) {
return m.is_eq(e1) || !is_interesting(e1, val, new_val);
}
SASSERT(val != new_val);
SASSERT(is_app(lit));
if (to_app(lit)->get_family_id() != m_arith.get_family_id()) return false;
switch (to_app(lit)->get_decl_kind()) {
case OP_LE:
case OP_LT:
return new_val > val;
case OP_GT:
case OP_GE:
return new_val < val;
default:
return false;
}
}
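The monotonicity reasoning in `is_interesting` can be stated abstractly: enlarging an upper bound or shrinking a lower bound weakens the literal. The sketch below is a hypothetical standalone version over plain integers (`cmp_op` and `weakens` are illustrative names); it omits the equality and negation handling done above.

```cpp
#include <cassert>

// Hypothetical standalone version of the bound-weakening check: replacing the
// bound `val` by `new_val` weakens t <= val (or t < val) iff new_val > val,
// and weakens t >= val (or t > val) iff new_val < val; an unchanged bound is
// never interesting.
enum class cmp_op { LE, LT, GE, GT };

bool weakens(cmp_op op, long val, long new_val) {
    if (val == new_val) return false;
    switch (op) {
    case cmp_op::LE:
    case cmp_op::LT:
        return new_val > val; // larger upper bound admits more models
    case cmp_op::GE:
    case cmp_op::GT:
        return new_val < val; // smaller lower bound admits more models
    }
    return false;
}
```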
void lemma_expand_bnd_generalizer::collect_statistics(statistics &st) const {
st.update("time.spacer.solve.reach.gen.expand", m_st.watch.get_seconds());
st.update("SPACER expand_bnd attmpts", m_st.atmpts);
st.update("SPACER expand_bnd success", m_st.success);
}
} // namespace spacer


@@ -0,0 +1,68 @@
#pragma once
/*++
Copyright (c) 2020 Arie Gurfinkel
Module Name:
spacer_expand_bnd_generalizer.h
Abstract:
Strengthen lemmas by changing numeral constants inside arithmetic literals
Author:
Hari Govind V K
Arie Gurfinkel
--*/
#include "muz/spacer/spacer_context.h"
namespace spacer {
class lemma_expand_bnd_generalizer : public lemma_generalizer {
struct stats {
unsigned atmpts;
unsigned success;
stopwatch watch;
stats() { reset(); }
void reset() {
watch.reset();
atmpts = 0;
success = 0;
}
};
stats m_st;
ast_manager &m;
arith_util m_arith;
/// A set of numeral values that can be used to expand bound
vector<rational> m_values;
public:
lemma_expand_bnd_generalizer(context &ctx);
~lemma_expand_bnd_generalizer() override {}
void operator()(lemma_ref &lemma) override;
void collect_statistics(statistics &st) const override;
void reset_statistics() override { m_st.reset(); }
private:
/// Check whether lit ==> lit[val |--> n] (barring special cases). That is,
/// whether \p lit becomes weaker if \p val is replaced with \p n
///
/// \p lit has to be of the form t <= v where v is a numeral.
/// Special cases:
/// In the trivial case in which \p val == \p n, return false.
/// if lit is an equality or the negation of an equality, return true.
bool is_interesting(const expr *lit, rational val, rational n);
/// check whether \p conj is a possible generalization for \p lemma.
/// update \p lemma if it is.
bool check_inductive(lemma_ref &lemma, expr_ref_vector &candidate);
};
} // namespace spacer


@@ -21,6 +22,7 @@ Revision History:
#include "ast/arith_decl_plugin.h"
#include "muz/spacer/spacer_context.h"
#include "muz/spacer/spacer_expand_bnd_generalizer.h"
namespace spacer {
@@ -174,5 +175,10 @@ class limit_num_generalizer : public lemma_generalizer {
void collect_statistics(statistics &st) const override;
void reset_statistics() override { m_st.reset(); }
};
} // namespace spacer
lemma_generalizer *
alloc_lemma_inductive_generalizer(spacer::context &ctx,
bool only_array_eligible = false,
bool enable_literal_weakening = true);
} // namespace spacer


@@ -0,0 +1,796 @@
/*++
Copyright (c) 2020 Arie Gurfinkel
Module Name:
spacer_global_generalizer.cpp
Abstract:
Global Guidance for Spacer
Author:
Hari Govind V K
Arie Gurfinkel
--*/
#include "muz/spacer/spacer_global_generalizer.h"
#include "ast/arith_decl_plugin.h"
#include "ast/ast_util.h"
#include "ast/for_each_expr.h"
#include "ast/rewriter/expr_safe_replace.h"
#include "muz/spacer/spacer_cluster.h"
#include "muz/spacer/spacer_concretize.h"
#include "muz/spacer/spacer_context.h"
#include "muz/spacer/spacer_manager.h"
#include "muz/spacer/spacer_matrix.h"
#include "muz/spacer/spacer_util.h"
#include "smt/smt_solver.h"
using namespace spacer;
namespace {
// LOCAL HELPER FUNCTIONS IN ANONYMOUS NAMESPACE
class to_real_stripper {
ast_manager &m;
arith_util m_arith;
public:
to_real_stripper(ast_manager &_m) : m(_m), m_arith(m) {}
bool operator()(expr_ref &e, unsigned depth = 8) {
rational num;
if (m_arith.is_int(e)) return true;
if (depth == 0) return false;
if (!is_app(e)) return false;
if (m_arith.is_to_real(e)) {
// strip to_real()
e = to_app(e)->get_arg(0);
return true;
} else if (m_arith.is_numeral(e, num)) {
// convert number to an integer
if (denominator(num).is_one()) {
e = m_arith.mk_int(num);
return true;
} else {
return false;
}
}
app *e_app = to_app(e);
expr_ref_buffer args(m);
expr_ref kid(m);
bool dirty = false;
for (unsigned i = 0, sz = e_app->get_num_args(); i < sz; ++i) {
auto *arg = e_app->get_arg(i);
kid = arg;
if (this->operator()(kid, depth - 1)) {
dirty |= (kid.get() != arg);
args.push_back(std::move(kid));
} else {
return false;
}
}
if (dirty)
e = m.mk_app(e_app->get_family_id(), e_app->get_decl_kind(),
args.size(), args.data());
return true;
}
bool operator()(expr_ref_vector &vec, unsigned depth = 8) {
bool res = true;
expr_ref e(m);
for (unsigned i = 0, sz = vec.size(); res && i < sz; ++i) {
e = vec.get(i);
res = this->operator()(e, depth);
if (res) { vec[i] = e; }
}
return res;
}
};
// Check whether \p sub contains a mapping to a bv_numeral.
// return bv_size of the bv_numeral in the first such mapping.
bool contains_bv(ast_manager &m, const substitution &sub, unsigned &sz) {
bv_util m_bv(m);
std::pair<unsigned, unsigned> v;
expr_offset r;
rational num;
for (unsigned j = 0, n = sub.get_num_bindings(); j < n; j++) {
sub.get_binding(j, v, r);
if (m_bv.is_numeral(r.get_expr(), num, sz)) return true;
}
return false;
}
// Check whether 1) all expressions in the range of \p sub are bv_numerals 2)
// all bv_numerals in range are of size sz
bool all_same_sz(ast_manager &m, const substitution &sub, unsigned sz) {
bv_util m_bv(m);
std::pair<unsigned, unsigned> v;
expr_offset r;
rational num;
unsigned n_sz;
for (unsigned j = 0; j < sub.get_num_bindings(); j++) {
sub.get_binding(j, v, r);
if (!m_bv.is_numeral(r.get_expr(), num, n_sz) || n_sz != sz)
return false;
}
return true;
}
} // namespace
namespace spacer {
lemma_global_generalizer::subsumer::subsumer(ast_manager &a_m, bool ground_pob)
: m(a_m), m_arith(m), m_bv(m), m_tags(m), m_used_tags(0), m_col_names(m),
m_ground_pob(ground_pob) {
scoped_ptr<solver_factory> factory(
mk_smt_strategic_solver_factory(symbol::null));
m_solver = (*factory)(m, params_ref::get_empty(), false, true, false,
symbol::null);
}
app *lemma_global_generalizer::subsumer::mk_fresh_tag() {
if (m_used_tags == m_tags.size()) {
auto *bool_sort = m.mk_bool_sort();
// -- create 4 new tags
m_tags.push_back(m.mk_fresh_const(symbol("t"), bool_sort));
m_tags.push_back(m.mk_fresh_const(symbol("t"), bool_sort));
m_tags.push_back(m.mk_fresh_const(symbol("t"), bool_sort));
m_tags.push_back(m.mk_fresh_const(symbol("t"), bool_sort));
}
return m_tags.get(m_used_tags++);
}
lemma_global_generalizer::lemma_global_generalizer(context &ctx)
: lemma_generalizer(ctx), m(ctx.get_ast_manager()),
m_subsumer(m, ctx.use_ground_pob()), m_do_subsume(ctx.do_subsume()) {}
void lemma_global_generalizer::operator()(lemma_ref &lemma) {
scoped_watch _w_(m_st.watch);
generalize(lemma);
}
void lemma_global_generalizer::subsumer::mk_col_names(const lemma_cluster &lc) {
expr_offset r;
std::pair<unsigned, unsigned> v;
auto &lemmas = lc.get_lemmas();
SASSERT(!lemmas.empty());
const substitution &sub = lemmas.get(0).get_sub();
m_col_names.reserve(sub.get_num_bindings());
for (unsigned j = 0, sz = sub.get_num_bindings(); j < sz; j++) {
// get var id (sub is in reverse order)
sub.get_binding(sz - 1 - j, v, r);
auto *sort = r.get_expr()->get_sort();
if (!m_col_names.get(j) || m_col_names.get(j)->get_sort() != sort) {
// create a fresh skolem constant for the jth variable
// reuse variables if they are already here and have matching sort
m_col_names[j] = m.mk_fresh_const("mrg_cvx!!", sort);
}
}
// -- each lcm entry corresponds to a column; reset them since the column
// names have potentially changed
// -- this is just-in-case
m_col_lcm.reset();
}
// Set up the convex closure \p cc by 1) collecting all substitutions in the
// cluster \p lc and 2) normalizing them to integer numerals
void lemma_global_generalizer::subsumer::setup_cvx_closure(
convex_closure &cc, const lemma_cluster &lc) {
expr_offset r;
std::pair<unsigned, unsigned> v;
mk_col_names(lc);
const lemma_info_vector &lemmas = lc.get_lemmas();
m_col_lcm.reset();
unsigned n_vars = 0;
rational num;
bool is_first = true;
for (const auto &lemma : lemmas) {
const substitution &sub = lemma.get_sub();
if (is_first) {
n_vars = sub.get_num_bindings();
m_col_lcm.reserve(n_vars, rational::one());
is_first = false;
}
for (unsigned j = 0; j < n_vars; j++) {
sub.get_binding(n_vars - 1 - j, v, r);
if (is_numeral(r.get_expr(), num)) {
m_col_lcm[j] = lcm(m_col_lcm.get(j), abs(denominator(num)));
}
}
}
cc.reset(n_vars);
unsigned bv_width;
if (contains_bv(m, lc.get_lemmas()[0].get_sub(), bv_width)) {
cc.set_bv(bv_width);
}
for (unsigned j = 0; j < n_vars; ++j)
cc.set_col_var(j, mk_rat_mul(m_col_lcm.get(j), m_col_names.get(j)));
vector<rational> row;
for (const auto &lemma : lemmas) {
row.reset();
const substitution &sub = lemma.get_sub();
for (unsigned j = 0, sz = sub.get_num_bindings(); j < sz; j++) {
sub.get_binding(sz - 1 - j, v, r);
VERIFY(is_numeral(r.get_expr(), num));
row.push_back(m_col_lcm.get(j) * num);
}
// -- add normalized row to convex closure
cc.add_row(row);
}
}
/// Find a representative for \p c
// TODO: replace with a symbolic representative
expr *lemma_global_generalizer::subsumer::find_repr(const model_ref &mdl,
const app *c) {
return mdl->get_const_interp(c->get_decl());
}
/// Skolemize implicitly existentially quantified constants
///
/// Constants in \p m_col_names are existentially quantified in \p f. They
/// are replaced by specific skolem constants. The \p out vector is populated
/// with corresponding instantiations. Currently, instantiations are values
/// chosen from the model
void lemma_global_generalizer::subsumer::skolemize_for_quic3(
expr_ref &f, const model_ref &mdl, app_ref_vector &out) {
unsigned idx = out.size();
app_ref sk(m);
expr_ref eval(m);
expr_safe_replace sub(m);
expr_ref_vector f_cnsts(m);
spacer::collect_uninterp_consts(f, f_cnsts);
expr_fast_mark2 marks;
for (auto *c : f_cnsts) { marks.mark(c); }
for (unsigned i = 0, sz = m_col_names.size(); i < sz; i++) {
app *c = m_col_names.get(i);
if (!marks.is_marked(c)) continue;
SASSERT(m_arith.is_int(c));
// Make skolem constants for ground pob
sk = mk_zk_const(m, i + idx, c->get_sort());
eval = find_repr(mdl, c);
SASSERT(is_app(eval));
out.push_back(to_app(eval));
sub.insert(c, sk);
}
sub(f.get(), f);
TRACE("subsume", tout << "skolemized into " << f << "\n";);
m_col_names.reset();
}
bool lemma_global_generalizer::subsumer::find_model(
const expr_ref_vector &cc, const expr_ref_vector &alphas, expr *bg,
model_ref &out_model) {
// push because we re-use the solver
solver::scoped_push _sp(*m_solver);
if (bg) m_solver->assert_expr(bg);
// -- assert syntactic convex closure constraints
m_solver->assert_expr(cc);
// if there are alphas, we have syntactic convex closure
if (!alphas.empty()) {
SASSERT(alphas.size() >= 2);
// try to get an interior point in convex closure that also satisfies bg
{
// push because this might be unsat
solver::scoped_push _sp2(*m_solver);
expr_ref zero(m_arith.mk_real(0), m);
for (auto *alpha : alphas) {
m_solver->assert_expr(m_arith.mk_gt(alpha, zero));
}
auto res = m_solver->check_sat();
if (res == l_true) {
m_solver->get_model(out_model);
return true;
}
}
}
// failed, try to get any point in convex closure
auto res = m_solver->check_sat();
if (res == l_true) {
m_solver->get_model(out_model);
return true;
}
// something went wrong and there is no model, even though one was expected
UNREACHABLE();
return false;
}
/// Returns false if subsumption is not supported for \p lc
bool lemma_global_generalizer::subsumer::is_handled(const lemma_cluster &lc) {
// check whether all substitutions are to bv_numerals
unsigned sz = 0;
bool bv_clus = contains_bv(m, lc.get_lemmas()[0].get_sub(), sz);
// If there are no BV numerals, the cluster is handled.
// TODO: put restriction on Arrays, non linear arithmetic etc
if (!bv_clus) return true;
if (!all_same_sz(m, lc.get_lemmas()[0].get_sub(), sz)) {
TRACE("subsume",
tout << "cannot compute cvx cls of different size variables\n";);
return false;
}
return true;
}
void lemma_global_generalizer::subsumer::reset() {
m_used_tags = 0;
m_col_lcm.reset();
}
bool lemma_global_generalizer::subsumer::subsume(const lemma_cluster &lc,
expr_ref_vector &new_post,
app_ref_vector &bindings) {
if (!is_handled(lc)) return false;
convex_closure cvx_closure(m);
reset();
setup_cvx_closure(cvx_closure, lc);
// compute convex closure
if (!cvx_closure.compute()) { return false; }
bool is_syntactic = cvx_closure.has_implicit();
if (is_syntactic) { m_st.m_num_syn_cls++; }
CTRACE("subsume_verb", is_syntactic,
tout << "Convex closure introduced new variables. Implicit part of "
"closure is: "
<< mk_and(cvx_closure.get_implicit()) << "\n";);
expr_ref grounded(m);
ground_free_vars(lc.get_pattern(), grounded);
expr_ref_vector vec(m);
auto &implicit_cc = cvx_closure.get_implicit();
auto &explicit_cc = cvx_closure.get_explicit();
vec.append(implicit_cc.size(), implicit_cc.data());
vec.append(explicit_cc.size(), explicit_cc.data());
// get a model for mbp
model_ref mdl;
auto &alphas = cvx_closure.get_alphas();
find_model(vec, alphas, grounded, mdl);
app_ref_vector vars(m);
expr_ref conj(m);
vec.reset();
// eliminate real-valued alphas from syntactic convex closure
if (!implicit_cc.empty()) {
vec.append(implicit_cc.size(), implicit_cc.data());
conj = mk_and(vec);
vars.append(alphas.size(),
reinterpret_cast<app *const *>(alphas.data()));
qe_project(m, vars, conj, *mdl.get(), true, true, !m_ground_pob);
// mbp failed, not expected, bail out
if (!vars.empty()) return false;
}
// vec = [implicit_cc]
// store full cc, this is what we want to over-approximate explicitly
vec.append(explicit_cc.size(), explicit_cc.data());
flatten_and(grounded, vec);
// vec = [implicit_cc(alpha_j, v_i), explicit_cc(v_i), phi(v_i)]
expr_ref full_cc(mk_and(vec), m);
vec.reset();
if (conj) {
// if the explicit version of the implicit cc was successfully computed,
// conj holds it, but we must ensure it contains no to_real()
to_real_stripper stripper(m);
flatten_and(conj, vec);
stripper(vec);
}
vec.append(explicit_cc.size(), explicit_cc.data());
flatten_and(grounded, vec);
// here vec is [cc(v_i), phi(v_i)], and we need to eliminate v_i from it
vars.reset();
vars.append(m_col_names.size(),
reinterpret_cast<app *const *>(m_col_names.data()));
conj = mk_and(vec);
qe_project(m, vars, conj, *mdl.get(), true, true, !m_ground_pob);
// failed
if (!vars.empty()) return false;
// at the end, new_post must over-approximate the implicit convex closure
flatten_and(conj, new_post);
return over_approximate(new_post, full_cc);
}
/// Find a weakening of \p a such that \p b ==> a
///
/// Returns true on success and sets \p a to the result
bool lemma_global_generalizer::subsumer::over_approximate(expr_ref_vector &a,
const expr_ref b) {
// B && !(A1 && A2 && A3) is encoded as
// B && ((tag1 && !A1) || (tag2 && !A2) || (tag3 && !A3))
// iterate and check tags
expr_ref_vector tags(m), tagged_a(m);
std::string tag_prefix = "o";
for (auto *lit : a) {
tags.push_back(mk_fresh_tag());
tagged_a.push_back(m.mk_implies(tags.back(), lit));
}
TRACE("subsume_verb", tout << "weakening " << mk_and(a)
<< " to over approximate " << b << "\n";);
solver::scoped_push _sp(*m_solver);
m_solver->assert_expr(b);
m_solver->assert_expr(push_not(mk_and(tagged_a)));
while (true) {
lbool res = m_solver->check_sat(tags.size(), tags.data());
if (res == l_false) {
break;
} else if (res == l_undef) {
break;
}
// flip tags for all satisfied literals of !A
model_ref mdl;
m_solver->get_model(mdl);
for (unsigned i = 0, sz = a.size(); i < sz; ++i) {
if (!m.is_not(tags.get(i)) && mdl->is_false(a.get(i))) {
tags[i] = m.mk_not(tags.get(i));
}
}
}
expr_ref_buffer res(m);
// remove all expressions whose tags are false
for (unsigned i = 0, sz = tags.size(); i < sz; i++) {
if (!m.is_not(tags.get(i))) { res.push_back(a.get(i)); }
}
a.reset();
a.append(res.size(), res.data());
if (a.empty()) {
// could not find an over approximation
TRACE("subsume",
tout << "mbp did not over-approximate convex closure\n";);
m_st.m_num_no_ovr_approx++;
return false;
}
TRACE("subsume",
tout << "over approximate produced " << mk_and(a) << "\n";);
return true;
}
/// Attempt to set a conjecture on pob \p n.
///
/// Done by dropping literal \p lit from the post of \p n. \p lvl is the
/// level for the conjecture pob and \p gas is its gas. Returns true if the
/// conjecture is set.
bool lemma_global_generalizer::do_conjecture(pob_ref &n, lemma_ref &lemma,
const expr_ref &lit, unsigned lvl,
unsigned gas) {
arith_util arith(m);
expr_ref_vector fml_vec(m);
expr_ref n_post(n->post(), m);
normalize(n_post, n_post, false, false);
// normalize_order(n_post, n_post);
fml_vec.push_back(n_post);
flatten_and(fml_vec);
expr_ref_vector conj(m);
bool is_filtered = filter_out_lit(fml_vec, lit, conj);
expr *e1 = nullptr, *e2 = nullptr;
if (!is_filtered &&
(arith.is_le(lit, e1, e2) || arith.is_ge(lit, e1, e2))) {
// if lit is '<=' or '>=', try matching '=='
is_filtered =
filter_out_lit(fml_vec, expr_ref(m.mk_eq(e1, e2), m), conj);
}
if (!is_filtered) {
// -- try using the corresponding lemma instead
conj.reset();
n_post = mk_and(lemma->get_cube());
normalize_order(n_post, n_post);
fml_vec.reset();
fml_vec.push_back(n_post);
flatten_and(fml_vec);
is_filtered = filter_out_lit(fml_vec, lit, conj);
}
SASSERT(0 < gas && gas < UINT_MAX);
if (conj.empty()) {
// If the pob cannot be abstracted, stop using generalization on
// it
TRACE("global", tout << "stop local generalization on pob " << n_post
<< " id is " << n_post->get_id() << "\n";);
n->disable_local_gen();
return false;
} else if (!is_filtered) {
// The literal to be abstracted is not in the pob
TRACE("global", tout << "Conjecture failed:\n"
<< lit << "\n"
<< n_post << "\n"
<< "conj:" << conj << "\n";);
n->disable_local_gen();
m_st.m_num_cant_abs++;
return false;
}
pob *root = n->parent();
while (root->parent()) root = root->parent();
scoped_ptr<pob> new_pob = alloc(pob, root, n->pt(), lvl, n->depth(), false);
if (!new_pob) return false;
new_pob->set_desired_level(n->level());
new_pob->set_post(mk_and(conj));
new_pob->set_conjecture();
// -- register with current pob
n->set_data(new_pob.detach());
// -- update properties of the current pob itself
n->set_expand_bnd();
n->set_gas(gas);
n->disable_local_gen();
TRACE("global", tout << "set conjecture " << mk_pp(n->get_data()->post(), m)
<< " at level " << n->get_data()->level() << "\n";);
return true;
}
// Decide global guidance based on lemma
void lemma_global_generalizer::generalize(lemma_ref &lemma) {
// -- pob that the lemma blocks
pob_ref &pob = lemma->get_pob();
// -- cluster that the lemma belongs to
lemma_cluster *cluster = pob->pt().clstr_match(lemma);
// Lemma does not belong to any cluster; return
if (!cluster) return;
// if the cluster does not have enough gas, stop local generalization
// and return
if (cluster->get_gas() == 0) {
m_st.m_num_cls_ofg++;
pob->disable_local_gen();
TRACE("global", tout << "stop local generalization on pob "
<< mk_pp(pob->post(), m) << " id is "
<< pob->post()->get_id() << "\n";);
return;
}
// -- local cluster that includes the new lemma
lemma_cluster lc(*cluster);
// XXX most of the time lemma clustering happens before generalization
// XXX so `add_lemma` is likely to return false, but this does not mean
// XXX that the lemma is not new
bool is_new = lc.add_lemma(lemma, true);
(void)is_new;
const expr_ref &pat = lc.get_pattern();
TRACE("global", {
tout << "Global generalization of:\n"
<< mk_and(lemma->get_cube()) << "\n"
<< "at lvl: " << lemma->level() << "\n"
<< (is_new ? "new" : "old") << "\n"
<< "Using cluster:\n"
<< pat << "\n"
<< "Existing lemmas in the cluster:\n";
for (const auto &li : cluster->get_lemmas()) {
tout << mk_and(li.get_lemma()->get_cube())
<< " lvl:" << li.get_lemma()->level() << "\n";
}
});
// Concretize
if (has_nonlinear_var_mul(pat, m)) {
m_st.m_num_non_lin++;
TRACE("global",
tout << "Found non linear pattern. Marked to concretize \n";);
// not constructing the concrete pob here since we need a model for
// n->post()
pob->set_concretize_pattern(pat);
pob->set_concretize(true);
pob->set_gas(cluster->get_pob_gas());
cluster->dec_gas();
return;
}
// Conjecture
expr_ref lit(m);
if (find_unique_mono_var_lit(pat, lit)) {
// Create a conjecture by dropping literal from pob.
TRACE("global", tout << "Conjecture with pattern\n"
<< mk_pp(pat, m) << "\n"
<< "with gas " << cluster->get_gas() << "\n";);
unsigned gas = cluster->get_pob_gas();
unsigned lvl = lc.get_min_lvl();
if (pob) lvl = std::min(lvl, pob->level());
if (do_conjecture(pob, lemma, lit, lvl, gas)) {
// decrease the number of times this cluster is going to be used
// for conjecturing
cluster->dec_gas();
return;
} else {
// -- if conjecture failed, there is nothing else to do.
// -- the pob matched pre-condition for conjecture, so it should not
// be subsumed
return;
}
}
// if subsumption removed all the other lemmas, there is nothing to
// generalize
if (lc.get_size() < 2) return;
if (!m_do_subsume) return;
// -- new pob that is blocked by generalized lemma
expr_ref_vector new_post(m);
// -- bindings for free variables of new_pob
// -- subsumer might introduce extra free variables
app_ref_vector bindings(lemma->get_bindings());
if (m_subsumer.subsume(lc, new_post, bindings)) {
class pob *root = pob->parent();
while (root->parent()) root = root->parent();
unsigned new_lvl = lc.get_min_lvl();
if (pob) new_lvl = std::min(new_lvl, pob->level());
scoped_ptr<class pob> new_pob =
alloc(class pob, root, pob->pt(), new_lvl, pob->depth(), false);
if (!new_pob) return;
new_pob->set_desired_level(pob->level());
new_pob->set_post(mk_and(new_post), bindings);
new_pob->set_subsume();
pob->set_data(new_pob.detach());
// -- update properties of the pob itself
pob->set_gas(cluster->get_pob_gas() + 1);
pob->set_expand_bnd();
// Stop local generalization. Perhaps not the best choice in general.
// Helped with one instance on our benchmarks
pob->disable_local_gen();
cluster->dec_gas();
TRACE("global", tout << "Create subsume pob at level " << new_lvl
<< "\n"
<< mk_and(new_post) << "\n";);
}
}
/// Replace bound vars in \p fml with uninterpreted constants
void lemma_global_generalizer::subsumer::ground_free_vars(expr *pat,
expr_ref &out) {
SASSERT(!is_ground(pat));
var_subst vs(m, false);
// m_col_names might be bigger since it contains previously used constants.
// We rely on the fact that m_col_lcm was just set; it would be more robust
// to compute the free vars of \p pat
SASSERT(m_col_lcm.size() <= m_col_names.size());
out = vs(pat, m_col_lcm.size(),
reinterpret_cast<expr *const *>(m_col_names.data()));
SASSERT(is_ground(out));
}
pob *lemma_global_generalizer::mk_concretize_pob(pob &n, model_ref &model) {
expr_ref_vector new_post(m);
spacer::pob_concretizer proc(m, model, n.get_concretize_pattern());
if (proc.apply(n.post(), new_post)) {
pob *new_pob = n.pt().mk_pob(n.parent(), n.level(), n.depth(),
mk_and(new_post), n.get_binding());
TRACE("concretize", tout << "pob:\n"
<< mk_pp(n.post(), m)
<< " is concretized into:\n"
<< mk_pp(new_pob->post(), m) << "\n";);
return new_pob;
}
return nullptr;
}
pob *lemma_global_generalizer::mk_subsume_pob(pob &n) {
if (!(n.get_gas() >= 0 && n.has_data() && n.get_data()->is_subsume()))
return nullptr;
pob *data = n.get_data();
pob *f = n.pt().find_pob(data->parent(), data->post());
if (f && (f->is_in_queue() || f->is_closed())) {
n.reset_data();
return nullptr;
}
TRACE("global", tout << "mk_subsume_pob at level " << data->level()
<< " with post state:\n"
<< mk_pp(data->post(), m) << "\n";);
f = n.pt().mk_pob(data->parent(), data->level(), data->depth(),
data->post(), n.get_binding());
f->set_subsume();
f->inherit(*data);
n.reset_data();
return f;
}
pob *lemma_global_generalizer::mk_conjecture_pob(pob &n) {
if (!(n.has_data() && n.get_data()->is_conjecture() && n.get_gas() > 0))
return nullptr;
pob *data = n.get_data();
pob *f = n.pt().find_pob(data->parent(), data->post());
if (f && (f->is_in_queue() || f->is_closed())) {
n.reset_data();
return nullptr;
}
f = n.pt().mk_pob(data->parent(), data->level(), data->depth(),
data->post(), {m});
// inherit all metadata from data
f->inherit(*data);
n.reset_data();
return f;
}
void lemma_global_generalizer::subsumer::collect_statistics(
statistics &st) const {
st.update("SPACER num no over approximate", m_st.m_num_no_ovr_approx);
st.update("SPACER num sync cvx cls", m_st.m_num_syn_cls);
st.update("SPACER num mbp failed", m_st.m_num_mbp_failed);
// m_cvx_closure.collect_statistics(st);
}
void lemma_global_generalizer::collect_statistics(statistics &st) const {
st.update("time.spacer.solve.reach.gen.global", m_st.watch.get_seconds());
st.update("SPACER cluster out of gas", m_st.m_num_cls_ofg);
st.update("SPACER num non lin", m_st.m_num_non_lin);
st.update("SPACER num cant abstract", m_st.m_num_cant_abs);
}
} // namespace spacer
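The tag-based weakening loop in `over_approximate()` above can be illustrated outside of SMT. The following is a minimal Python sketch, not the Spacer implementation: literals and the formula `b` are predicates over assignments, and an exhaustive search over a small finite domain stands in for the solver's `check_sat` with tag assumptions. As in the C++ loop, each counterexample model of `b` drops every literal it falsifies.

```python
from itertools import product

def over_approximate(literals, b, domain):
    """Weaken the conjunction `literals` until b implies it.

    Mirrors the tag-flip loop of subsumer::over_approximate: find a model
    of b that violates the current conjunction, then drop every literal
    that the model falsifies; repeat until no counterexample remains.
    """
    keep = list(literals)
    names = sorted(domain)
    while True:
        cex = None
        for vals in product(*(domain[n] for n in names)):
            mdl = dict(zip(names, vals))
            if b(mdl) and not all(lit(mdl) for lit in keep):
                cex = mdl
                break
        if cex is None:
            return keep  # b implies the remaining conjunction
        keep = [lit for lit in keep if lit(cex)]
        if not keep:
            return None  # could not find an over-approximation

# b: x == 0 and y == 5; initial cube: x <= 1 and y <= 2
dom = {"x": range(-2, 7), "y": range(-2, 7)}
b = lambda m: m["x"] == 0 and m["y"] == 5
lits = [lambda m: m["x"] <= 1, lambda m: m["y"] <= 2]
res = over_approximate(lits, b, dom)
# the model (x=0, y=5) falsifies y <= 2, which is dropped; x <= 1 survives
```

The real implementation avoids re-asserting the weakened cube by guarding each literal with a fresh boolean tag and flipping tags instead of rebuilding the query.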


@@ -0,0 +1,176 @@
#pragma once
/*++
Copyright (c) 2020 Arie Gurfinkel
Module Name:
spacer_global_generalizer.h
Abstract:
Global Guidance for Spacer
Author:
Hari Govind V K
Arie Gurfinkel
--*/
#include "muz/spacer/spacer_context.h"
#include "muz/spacer/spacer_convex_closure.h"
namespace spacer {
/// Global guided generalization
///
/// See Hari Govind et al. Global Guidance for Local Generalization in Model
/// Checking. CAV 2020
class lemma_global_generalizer : public lemma_generalizer {
/// Subsumption strategy
class subsumer {
struct stats {
unsigned m_num_syn_cls;
unsigned m_num_mbp_failed;
unsigned m_num_no_ovr_approx;
stopwatch watch;
stats() { reset(); }
void reset() {
watch.reset();
m_num_syn_cls = 0;
m_num_mbp_failed = 0;
m_num_no_ovr_approx = 0;
}
};
stats m_st;
ast_manager &m;
arith_util m_arith;
bv_util m_bv;
// boolean variables used as local tags
app_ref_vector m_tags;
// number of tags currently used
unsigned m_used_tags;
// save fresh constants for mbp
app_ref_vector m_col_names;
vector<rational> m_col_lcm;
// create pob without free vars
bool m_ground_pob;
// Local solver to get model for computing mbp and to check whether
// cvx_cls ==> mbp
ref<solver> m_solver;
/// Return a fresh boolean variable
app *mk_fresh_tag();
void reset();
/// Returns false if subsumption is not supported for given cluster
bool is_handled(const lemma_cluster &lc);
/// Find a representative for \p c
expr *find_repr(const model_ref &mdl, const app *c);
/// Skolemize constants of m_col_names in \p f
///
/// \p cnsts is appended with ground terms from \p mdl
void skolemize_for_quic3(expr_ref &f, const model_ref &mdl,
app_ref_vector &cnsts);
/// Create new vars to compute convex cls
void mk_col_names(const lemma_cluster &lc);
void setup_cvx_closure(convex_closure &cc, const lemma_cluster &lc);
/// Make \p fml ground using m_col_names. Store result in \p out
void ground_free_vars(expr *fml, expr_ref &out);
/// Weaken \p a such that (and a) overapproximates \p b
bool over_approximate(expr_ref_vector &a, const expr_ref b);
bool find_model(const expr_ref_vector &cc,
const expr_ref_vector &alphas, expr *bg,
model_ref &out_model);
bool is_numeral(const expr *e, rational &n) {
return m_arith.is_numeral(e, n) || m_bv.is_numeral(e, n);
}
expr *mk_rat_mul(rational n, expr *v) {
if (n.is_one()) return v;
return m_arith.mk_mul(m_arith.mk_numeral(n, m_arith.is_int(v)), v);
}
public:
subsumer(ast_manager &m, bool ground_pob);
void collect_statistics(statistics &st) const;
/// Compute a cube \p res such that \neg res subsumes all the lemmas in
/// \p lc
///
/// \p cnsts is a set of constants that can be used to make \p res
/// ground
bool subsume(const lemma_cluster &lc, expr_ref_vector &res,
app_ref_vector &cnsts);
};
struct stats {
unsigned m_num_cls_ofg;
unsigned m_num_syn_cls;
unsigned m_num_mbp_failed;
unsigned m_num_non_lin;
unsigned m_num_no_ovr_approx;
unsigned m_num_cant_abs;
stopwatch watch;
stats() { reset(); }
void reset() {
watch.reset();
m_num_cls_ofg = 0;
m_num_non_lin = 0;
m_num_syn_cls = 0;
m_num_mbp_failed = 0;
m_num_no_ovr_approx = 0;
m_num_cant_abs = 0;
}
};
stats m_st;
ast_manager &m;
subsumer m_subsumer;
/// Decide global guidance based on lemma
void generalize(lemma_ref &lemma);
/// Attempt to set a conjecture on pob \p n.
///
/// Done by dropping literal \p lit from the post of \p n. \p lvl is the
/// level for the conjecture pob and \p gas is its gas. Returns true if the
/// conjecture is set.
bool do_conjecture(pob_ref &n, lemma_ref &lemma, const expr_ref &lit, unsigned lvl,
unsigned gas);
/// Enable/disable subsume rule
bool m_do_subsume;
public:
lemma_global_generalizer(context &ctx);
~lemma_global_generalizer() override {}
void operator()(lemma_ref &lemma) override;
void collect_statistics(statistics &st) const override;
void reset_statistics() override { m_st.reset(); }
// post-actions for pobs produced during generalization
pob *mk_concretize_pob(pob &n, model_ref &model);
pob *mk_subsume_pob(pob &n);
pob *mk_conjecture_pob(pob &n);
};
} // namespace spacer
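Before computing the convex closure, `setup_cvx_closure()` scales each column of substitution values by the LCM of its denominators so every matrix entry becomes an integer. A self-contained sketch of that normalization step, using hypothetical names rather than the Spacer API:

```python
from fractions import Fraction
from math import lcm

def normalize_columns(rows):
    """Scale column j by col_lcm[j], the LCM of the denominators in that
    column, turning every rational entry into an integer (cf. the
    m_col_lcm computation in setup_cvx_closure)."""
    n_cols = len(rows[0])
    col_lcm = [lcm(*(r[j].denominator for r in rows)) for j in range(n_cols)]
    scaled = [[int(r[j] * col_lcm[j]) for j in range(n_cols)] for r in rows]
    return col_lcm, scaled

# two lemma substitutions over two pattern variables
rows = [[Fraction(1, 2), Fraction(3, 1)],
        [Fraction(5, 4), Fraction(1, 3)]]
col_lcm, scaled = normalize_columns(rows)
# col_lcm == [4, 3]; scaled == [[2, 9], [5, 1]]
```

In the C++ code the same scaling factor is attached to the column variable via `mk_rat_mul`, so the closure expressed over scaled columns remains equivalent to one over the original rationals.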


@@ -0,0 +1,303 @@
#include "ast/expr_functors.h"
#include "muz/spacer/spacer_context.h"
using namespace spacer;
namespace {
class contains_array_op_proc : public i_expr_pred {
ast_manager &m;
family_id m_array_fid;
public:
contains_array_op_proc(ast_manager &manager)
: m(manager), m_array_fid(array_util(m).get_family_id()) {}
bool operator()(expr *e) override {
return is_app(e) && to_app(e)->get_family_id() == m_array_fid;
}
};
class lemma_inductive_generalizer : public lemma_generalizer {
struct stats {
unsigned count;
unsigned weaken_success;
unsigned weaken_fail;
stopwatch watch;
stats() { reset(); }
void reset() {
count = 0;
weaken_success = 0;
weaken_fail = 0;
watch.reset();
}
};
ast_manager &m;
expr_ref m_true;
stats m_st;
bool m_only_array_eligible;
bool m_enable_litweak;
contains_array_op_proc m_contains_array_op;
check_pred m_contains_array_pred;
expr_ref_vector m_pinned;
lemma *m_lemma = nullptr;
spacer::pred_transformer *m_pt = nullptr;
unsigned m_weakness = 0;
unsigned m_level = 0;
ptr_vector<expr> m_cube;
// temporary vector
expr_ref_vector m_core;
public:
lemma_inductive_generalizer(spacer::context &ctx,
bool only_array_eligible = false,
bool enable_literal_weakening = true)
: lemma_generalizer(ctx), m(ctx.get_ast_manager()),
m_true(m.mk_true(), m), m_only_array_eligible(only_array_eligible),
m_enable_litweak(enable_literal_weakening), m_contains_array_op(m),
m_contains_array_pred(m_contains_array_op, m),
m_pinned(m), m_core(m) {}
private:
// -- true if literal \p lit is eligible to be generalized
bool is_eligible(expr *lit) {
return !m_only_array_eligible || has_arrays(lit);
}
bool has_arrays(expr *lit) { return m_contains_array_op(lit); }
void reset() {
m_cube.reset();
m_weakness = 0;
m_level = 0;
m_pt = nullptr;
m_pinned.reset();
m_core.reset();
}
void setup(lemma_ref &lemma) {
// check that we start in uninitialized state
SASSERT(m_pt == nullptr);
m_lemma = lemma.get();
m_pt = &lemma->get_pob()->pt();
m_weakness = lemma->weakness();
m_level = lemma->level();
auto &cube = lemma->get_cube();
m_cube.reset();
for (auto *lit : cube) { m_cube.push_back(lit); }
}
// loads current generalization from m_cube to m_core
void load_cube_to_core() {
m_core.reset();
for (unsigned i = 0, sz = m_cube.size(); i < sz; ++i) {
auto *lit = m_cube.get(i);
if (lit == m_true) continue;
m_core.push_back(lit);
}
}
// returns true if m_cube is inductive
bool is_cube_inductive() {
load_cube_to_core();
if (m_core.empty()) return false;
unsigned used_level;
if (m_pt->check_inductive(m_level, m_core, used_level, m_weakness)) {
m_level = std::max(m_level, used_level);
return true;
}
return false;
}
// intersect m_cube with m_core
unsigned update_cube_by_core(unsigned from = 0) {
// generalize away all literals in m_cube that are not in m_core
// do not assume anything about order of literals in m_core
unsigned success = 0;
// mark core
ast_fast_mark2 marked_core;
for (auto *v : m_core) { marked_core.mark(v); }
// replace unmarked literals by m_true in m_cube
for (unsigned i = from, sz = m_cube.size(); i < sz; ++i) {
auto *lit = m_cube.get(i);
if (lit == m_true) continue;
if (!marked_core.is_marked(lit)) {
m_cube[i] = m_true;
success++;
}
}
return success;
}
// generalizes m_core and removes from m_cube all generalized literals
unsigned generalize_core(unsigned from = 0) {
unsigned success = 0;
unsigned used_level;
// -- while it is possible that a single literal can be generalized to
// false,
// -- it is fairly unlikely. Thus, we give up generalizing in this case.
if (m_core.empty()) return 0;
// -- check whether candidate in m_core is inductive
if (m_pt->check_inductive(m_level, m_core, used_level, m_weakness)) {
success += update_cube_by_core(from);
// update m_level to the largest level at which the current
// candidate in m_cube is inductive
m_level = std::max(m_level, used_level);
}
return success;
}
// generalizes (i.e., drops) a specific literal of m_cube
unsigned generalize1(unsigned lit_idx) {
if (!is_eligible(m_cube.get(lit_idx))) return 0;
// -- populate m_core with all literals except the one being generalized
m_core.reset();
for (unsigned i = 0, sz = m_cube.size(); i < sz; ++i) {
auto *lit = m_cube.get(i);
if (lit == m_true || i == lit_idx) continue;
m_core.push_back(lit);
}
return generalize_core(lit_idx);
}
// generalizes all literals of m_cube in a given range
unsigned generalize_range(unsigned from, unsigned to) {
unsigned success = 0;
for (unsigned i = from; i < to; ++i) { success += generalize1(i); }
return success;
}
// weakens a given literal of m_cube
// weakening replaces a literal by a weaker literal(s)
// for example, x=y might get weakened into one of x<=y or y<=x
unsigned weaken1(unsigned lit_idx) {
if (!is_eligible(m_cube.get(lit_idx))) return 0;
if (m_cube.get(lit_idx) == m_true) return 0;
unsigned success = 0;
unsigned cube_sz = m_cube.size();
// -- save literal to be generalized, and replace it by true
expr *saved_lit = m_cube.get(lit_idx);
m_cube[lit_idx] = m_true;
// -- add new weaker literals to end of m_cube and attempt to generalize
expr_ref_vector weakening(m);
weakening.push_back(saved_lit);
expand_literals(m, weakening);
if (weakening.get(0) != saved_lit) {
for (auto *lit : weakening) {
m_cube.push_back(lit);
m_pinned.push_back(lit);
}
if (m_cube.size() - cube_sz >= 2) {
// normal case: generalize new weakening
success += generalize_range(cube_sz, m_cube.size());
} else {
// special case -- weaken literal by another literal, check that
// cube is still inductive
success += (is_cube_inductive() ? 1 : 0);
}
}
// -- failed to generalize, restore removed literal and m_cube
if (success == 0) {
m_cube[lit_idx] = saved_lit;
m_cube.shrink(cube_sz);
m_st.weaken_fail++;
} else {
m_st.weaken_success++;
}
return success;
}
// weakens literals of m_cube in a given range
unsigned weaken_range(unsigned from, unsigned to) {
unsigned success = 0;
for (unsigned i = from; i < to; ++i) { success += weaken1(i); }
return success;
}
public:
// entry point for generalization
void operator()(lemma_ref &lemma) override {
if (lemma->get_cube().empty()) return;
m_st.count++;
scoped_watch _w_(m_st.watch);
setup(lemma);
unsigned num_gens = 0;
// -- first round -- generalize by dropping literals
num_gens += generalize_range(0, m_cube.size());
// -- if weakening is enabled, start next round
if (m_enable_litweak) {
unsigned cube_sz = m_cube.size();
// -- second round -- weaken literals that cannot be dropped
num_gens += weaken_range(0, cube_sz);
// -- third round -- weaken literals produced in prev round
if (cube_sz < m_cube.size())
num_gens += weaken_range(cube_sz, m_cube.size());
}
// if there is at least one generalization, update lemma
if (num_gens > 0) {
TRACE("indgen",
tout << "Generalized " << num_gens << " literals\n";);
// reuse m_core since it is not needed for anything else
m_core.reset();
for (auto *lit : m_cube) {
if (lit != m_true) m_core.push_back(lit);
}
TRACE("indgen", tout << "Original: " << lemma->get_cube() << "\n"
<< "Generalized: " << m_core << "\n";);
lemma->update_cube(lemma->get_pob(), m_core);
lemma->set_level(m_level);
}
reset();
return;
}
void collect_statistics(statistics &st) const override {
st.update("time.spacer.solve.reach.gen.ind", m_st.watch.get_seconds());
st.update("SPACER inductive gen", m_st.count);
st.update("SPACER inductive gen weaken success", m_st.weaken_success);
st.update("SPACER inductive gen weaken fail", m_st.weaken_fail);
}
void reset_statistics() override { m_st.reset(); }
};
} // namespace
namespace spacer {
lemma_generalizer *
alloc_lemma_inductive_generalizer(spacer::context &ctx,
bool only_array_eligible,
bool enable_literal_weakening) {
return alloc(lemma_inductive_generalizer, ctx, only_array_eligible,
enable_literal_weakening);
}
} // namespace spacer
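The core drop-one-literal strategy of `lemma_inductive_generalizer` can be sketched abstractly. In this Python sketch the `is_inductive` oracle stands in for `pred_transformer::check_inductive`, which is not reproduced here; the real code also replaces dropped literals with `true` in place rather than rebuilding the list, and weakens undroppable literals in later rounds.

```python
def generalize(cube, is_inductive):
    """Greedily drop literals: if the cube minus one literal is still
    inductive, remove that literal permanently (cf. generalize1)."""
    cube = list(cube)
    i = 0
    while i < len(cube):
        candidate = cube[:i] + cube[i + 1:]
        if candidate and is_inductive(candidate):
            cube = candidate  # literal dropped; re-test at the same index
        else:
            i += 1
    return cube

# toy oracle: any cube that retains "x >= 0" is inductive
oracle = lambda core: "x >= 0" in core
print(generalize(["y <= 5", "x >= 0", "z == 1"], oracle))
# prints ['x >= 0']
```

As in the C++ version, generalizing to the empty cube is never attempted, since a cube that is inductive with no literals would mean the lemma is trivially `false`.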


@ -1,191 +0,0 @@
/**++
Copyright (c) 2017 Microsoft Corporation and Matteo Marescotti
Module Name:
spacer_json.cpp
Abstract:
SPACER json marshalling support
Author:
Matteo Marescotti
Notes:
--*/
#include <iomanip>
#include "spacer_context.h"
#include "spacer_json.h"
#include "spacer_util.h"
namespace spacer {
static std::ostream &json_marshal(std::ostream &out, ast *t, ast_manager &m) {
mk_epp pp = mk_epp(t, m);
std::ostringstream ss;
ss << pp;
out << "\"";
for (auto &c:ss.str()) {
switch (c) {
case '"':
out << "\\\"";
break;
case '\\':
out << "\\\\";
break;
case '\b':
out << "\\b";
break;
case '\f':
out << "\\f";
break;
case '\n':
out << "\\n";
break;
case '\r':
out << "\\r";
break;
case '\t':
out << "\\t";
break;
default:
if ('\x00' <= c && c <= '\x1f') {
out << "\\u"
<< std::hex << std::setw(4) << std::setfill('0') << (int) c;
} else {
out << c;
}
}
}
out << "\"";
return out;
}
static std::ostream &json_marshal(std::ostream &out, lemma *l) {
out << "{"
<< R"("init_level":")" << l->init_level()
<< R"(", "level":")" << l->level()
<< R"(", "expr":)";
json_marshal(out, l->get_expr(), l->get_ast_manager());
out << "}";
return out;
}
static std::ostream &json_marshal(std::ostream &out, const lemma_ref_vector &lemmas) {
std::ostringstream ls;
for (auto l:lemmas) {
ls << ((unsigned)ls.tellp() == 0 ? "" : ",");
json_marshal(ls, l);
}
out << "[" << ls.str() << "]";
return out;
}
void json_marshaller::register_lemma(lemma *l) {
if (l->has_pob()) {
m_relations[&*l->get_pob()][l->get_pob()->depth()].push_back(l);
}
}
void json_marshaller::register_pob(pob *p) {
m_relations[p];
}
void json_marshaller::marshal_lemmas_old(std::ostream &out) const {
unsigned pob_id = 0;
for (auto &pob_map:m_relations) {
std::ostringstream pob_lemmas;
for (auto &depth_lemmas : pob_map.second) {
pob_lemmas << ((unsigned)pob_lemmas.tellp() == 0 ? "" : ",")
<< "\"" << depth_lemmas.first << "\":";
json_marshal(pob_lemmas, depth_lemmas.second);
}
if (pob_lemmas.tellp()) {
out << ((unsigned)out.tellp() == 0 ? "" : ",\n");
out << "\"" << pob_id << "\":{" << pob_lemmas.str() << "}";
}
pob_id++;
}
}
void json_marshaller::marshal_lemmas_new(std::ostream &out) const {
unsigned pob_id = 0;
for (auto &pob_map:m_relations) {
std::ostringstream pob_lemmas;
pob *n = pob_map.first;
unsigned i = 0;
for (auto *l : n->lemmas()) {
pob_lemmas << ((unsigned)pob_lemmas.tellp() == 0 ? "" : ",")
<< "\"" << i++ << "\":";
lemma_ref_vector lemmas_vec;
lemmas_vec.push_back(l);
json_marshal(pob_lemmas, lemmas_vec);
}
if (pob_lemmas.tellp()) {
out << ((unsigned)out.tellp() == 0 ? "" : ",\n");
out << "\"" << pob_id << "\":{" << pob_lemmas.str() << "}";
}
pob_id++;
}
}
std::ostream &json_marshaller::marshal(std::ostream &out) const {
std::ostringstream nodes;
std::ostringstream edges;
std::ostringstream lemmas;
if (m_old_style)
marshal_lemmas_old(lemmas);
else
marshal_lemmas_new(lemmas);
unsigned pob_id = 0;
unsigned depth = 0;
while (true) {
double root_expand_time = m_ctx->get_root().get_expand_time(depth);
bool a = false;
pob_id = 0;
for (auto &pob_map:m_relations) {
pob *n = pob_map.first;
double expand_time = n->get_expand_time(depth);
if (expand_time > 0) {
a = true;
std::ostringstream pob_expr;
json_marshal(pob_expr, n->post(), n->get_ast_manager());
nodes << ((unsigned)nodes.tellp() == 0 ? "" : ",\n") <<
"{\"id\":\"" << depth << n <<
"\",\"relative_time\":\"" << expand_time / root_expand_time <<
"\",\"absolute_time\":\"" << std::setprecision(2) << expand_time <<
"\",\"predicate\":\"" << n->pt().head()->get_name() <<
"\",\"expr_id\":\"" << n->post()->get_id() <<
"\",\"pob_id\":\"" << pob_id <<
"\",\"depth\":\"" << depth <<
"\",\"expr\":" << pob_expr.str() << "}";
if (n->parent()) {
edges << ((unsigned)edges.tellp() == 0 ? "" : ",\n") <<
"{\"from\":\"" << depth << n->parent() <<
"\",\"to\":\"" << depth << n << "\"}";
}
}
pob_id++;
}
if (!a) {
break;
}
depth++;
}
out << "{\n\"nodes\":[\n" << nodes.str() << "\n],\n";
out << "\"edges\":[\n" << edges.str() << "\n],\n";
out << "\"lemmas\":{\n" << lemmas.str() << "\n}\n}\n";
return out;
}
}
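For reference, the character-escaping switch in `json_marshal` above can be lifted into a standalone helper; the `json_escape` name is illustrative and not part of the Spacer API:

```cpp
#include <cassert>
#include <iomanip>
#include <sstream>
#include <string>

// Escape a string for embedding in a JSON document, mirroring the
// switch in json_marshal: named escapes for the common control
// characters, \uXXXX for the remaining ones below 0x20.
std::string json_escape(const std::string &in) {
    std::ostringstream out;
    for (char c : in) {
        switch (c) {
        case '"':  out << "\\\""; break;
        case '\\': out << "\\\\"; break;
        case '\b': out << "\\b";  break;
        case '\f': out << "\\f";  break;
        case '\n': out << "\\n";  break;
        case '\r': out << "\\r";  break;
        case '\t': out << "\\t";  break;
        default:
            if ('\x00' <= c && c <= '\x1f')
                out << "\\u" << std::hex << std::setw(4)
                    << std::setfill('0') << (int)c;
            else
                out << c;
        }
    }
    return out.str();
}
```

The caller is expected to add the surrounding quotes itself, as `json_marshal` does.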


@@ -1,59 +0,0 @@
/**++
Copyright (c) 2017 Microsoft Corporation and Matteo Marescotti
Module Name:
spacer_json.h
Abstract:
SPACER json marshalling support
Author:
Matteo Marescotti
Notes:
--*/
#pragma once
#include<ostream>
#include<map>
#include "util/ref.h"
#include "util/ref_vector.h"
class ast;
class ast_manager;
namespace spacer {
class lemma;
typedef sref_vector<lemma> lemma_ref_vector;
class context;
class pob;
class json_marshaller {
context *m_ctx;
bool m_old_style;
std::map<pob*, std::map<unsigned, lemma_ref_vector>> m_relations;
void marshal_lemmas_old(std::ostream &out) const;
void marshal_lemmas_new(std::ostream &out) const;
public:
json_marshaller(context *ctx, bool old_style = false) :
m_ctx(ctx), m_old_style(old_style) {}
void register_lemma(lemma *l);
void register_pob(pob *p);
std::ostream &marshal(std::ostream &out) const;
};
}


@@ -17,143 +17,169 @@ Revision History:
--*/
#include "muz/spacer/spacer_matrix.h"
namespace spacer
{
spacer_matrix::spacer_matrix(unsigned m, unsigned n) : m_num_rows(m), m_num_cols(n)
{
for (unsigned i=0; i < m; ++i)
{
vector<rational> v;
for (unsigned j=0; j < n; ++j)
{
v.push_back(rational(0));
namespace spacer {
spacer_matrix::spacer_matrix(unsigned m, unsigned n)
: m_num_rows(m), m_num_cols(n) {
m_matrix.reserve(m_num_rows);
for (unsigned i = 0; i < m_num_rows; ++i) {
m_matrix[i].reserve(m_num_cols, rational(0));
}
}
void spacer_matrix::get_col(unsigned i, vector<rational> &row) const {
SASSERT(i < m_num_cols);
row.reset();
row.reserve(m_num_rows);
unsigned j = 0;
for (auto &v : m_matrix) { row[j++] = (v.get(i)); }
SASSERT(row.size() == m_num_rows);
}
void spacer_matrix::add_row(const vector<rational> &row) {
SASSERT(row.size() == m_num_cols);
m_matrix.push_back(row);
m_num_rows = m_matrix.size();
}
unsigned spacer_matrix::perform_gaussian_elimination() {
unsigned i = 0;
unsigned j = 0;
while (i < m_matrix.size() && j < m_matrix[0].size()) {
// find maximal element in column with row index bigger or equal i
rational max = m_matrix[i][j];
unsigned max_index = i;
for (unsigned k = i + 1; k < m_matrix.size(); ++k) {
if (max < m_matrix[k][j]) {
max = m_matrix[k][j];
max_index = k;
}
m_matrix.push_back(v);
}
}
unsigned spacer_matrix::num_rows()
{
return m_num_rows;
}
unsigned spacer_matrix::num_cols()
{
return m_num_cols;
}
const rational& spacer_matrix::get(unsigned int i, unsigned int j)
{
SASSERT(i < m_num_rows);
SASSERT(j < m_num_cols);
return m_matrix[i][j];
}
void spacer_matrix::set(unsigned int i, unsigned int j, const rational& v)
{
SASSERT(i < m_num_rows);
SASSERT(j < m_num_cols);
m_matrix[i][j] = v;
}
unsigned spacer_matrix::perform_gaussian_elimination()
{
unsigned i=0;
unsigned j=0;
while(i < m_matrix.size() && j < m_matrix[0].size())
if (max.is_zero()) // skip this column
{
// find maximal element in column with row index bigger or equal i
rational max = m_matrix[i][j];
unsigned max_index = i;
++j;
} else {
// reorder rows if necessary
vector<rational> tmp = m_matrix[i];
m_matrix[i] = m_matrix[max_index];
m_matrix[max_index] = tmp;
for (unsigned k=i+1; k < m_matrix.size(); ++k)
{
if (max < m_matrix[k][j])
{
max = m_matrix[k][j];
max_index = k;
// normalize row
rational pivot = m_matrix[i][j];
if (!pivot.is_one()) {
for (unsigned k = 0; k < m_matrix[i].size(); ++k) {
m_matrix[i][k] = m_matrix[i][k] / pivot;
}
}
if (max.is_zero()) // skip this column
{
++j;
}
else
{
// reorder rows if necessary
vector<rational> tmp = m_matrix[i];
m_matrix[i] = m_matrix[max_index];
m_matrix[max_index] = tmp;
// normalize row
rational pivot = m_matrix[i][j];
if (!pivot.is_one())
{
for (unsigned k=0; k < m_matrix[i].size(); ++k)
{
m_matrix[i][k] = m_matrix[i][k] / pivot;
// subtract row from all other rows
for (unsigned k = 1; k < m_matrix.size(); ++k) {
if (k != i) {
rational factor = m_matrix[k][j];
for (unsigned l = 0; l < m_matrix[k].size(); ++l) {
m_matrix[k][l] =
m_matrix[k][l] - (factor * m_matrix[i][l]);
}
}
// subtract row from all other rows
for (unsigned k=1; k < m_matrix.size(); ++k)
{
if (k != i)
{
rational factor = m_matrix[k][j];
for (unsigned l=0; l < m_matrix[k].size(); ++l)
{
m_matrix[k][l] = m_matrix[k][l] - (factor * m_matrix[i][l]);
}
}
}
++i;
++j;
}
}
if (get_verbosity_level() >= 1)
{
SASSERT(m_matrix.size() > 0);
++i;
++j;
}
return i; //i points to the row after the last row which is non-zero
}
void spacer_matrix::print_matrix()
{
verbose_stream() << "\nMatrix\n";
for (const auto& row : m_matrix)
{
for (const auto& element : row)
{
verbose_stream() << element << ", ";
}
verbose_stream() << "\n";
}
verbose_stream() << "\n";
if (get_verbosity_level() >= 1) { SASSERT(m_matrix.size() > 0); }
return i; // i points to the row after the last row which is non-zero
}
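A minimal sketch of the same elimination loop, using `double` in place of `rational` and an absolute-value pivot choice (a small deviation for floating-point stability); the function name is illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Row-reduction in the same shape as perform_gaussian_elimination:
// pick the largest pivot in the current column, swap it up, scale the
// pivot row to 1, eliminate the column from all other rows, and
// return the index one past the last non-zero row (the rank).
unsigned gaussian_elimination(std::vector<std::vector<double>> &m) {
    unsigned i = 0, j = 0;
    while (i < m.size() && j < m[0].size()) {
        // find the (absolute) maximal element in column j, row >= i
        unsigned max_index = i;
        for (unsigned k = i + 1; k < m.size(); ++k)
            if (std::fabs(m[max_index][j]) < std::fabs(m[k][j])) max_index = k;
        if (m[max_index][j] == 0) { ++j; continue; } // skip zero column
        std::swap(m[i], m[max_index]);               // reorder rows
        double pivot = m[i][j];
        for (auto &x : m[i]) x /= pivot;             // normalize pivot row
        for (unsigned k = 0; k < m.size(); ++k) {    // eliminate column j
            if (k == i) continue;
            double f = m[k][j];
            for (unsigned l = 0; l < m[k].size(); ++l) m[k][l] -= f * m[i][l];
        }
        ++i; ++j;
    }
    return i;
}
```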
std::ostream &spacer_matrix::display(std::ostream &out) const {
out << "Matrix\n";
for (const auto &row : m_matrix) {
for (const auto &element : row) { out << element << ", "; }
out << "\n";
}
void spacer_matrix::normalize()
{
rational den = rational::one();
for (unsigned i=0; i < m_num_rows; ++i)
{
for (unsigned j=0; j < m_num_cols; ++j)
{
den = lcm(den, denominator(m_matrix[i][j]));
}
out << "\n";
return out;
}
void spacer_matrix::normalize() {
rational den = rational::one();
for (unsigned i = 0; i < m_num_rows; ++i) {
for (unsigned j = 0; j < m_num_cols; ++j) {
den = lcm(den, denominator(m_matrix[i][j]));
}
for (unsigned i=0; i < m_num_rows; ++i)
{
for (unsigned j=0; j < m_num_cols; ++j)
{
m_matrix[i][j] = den * m_matrix[i][j];
SASSERT(m_matrix[i][j].is_int());
}
}
for (unsigned i = 0; i < m_num_rows; ++i) {
for (unsigned j = 0; j < m_num_cols; ++j) {
m_matrix[i][j] = den * m_matrix[i][j];
SASSERT(m_matrix[i][j].is_int());
}
}
}
// check whether columns i and j satisfy a fixed linear relation
// (coeff1 * col_i + coeff2 * col_j + off == 0) across all rows
bool spacer_matrix::is_lin_reltd(unsigned i, unsigned j, rational &coeff1,
rational &coeff2, rational &off) const {
SASSERT(m_num_rows > 1);
coeff1 = m_matrix[0][j] - m_matrix[1][j];
coeff2 = m_matrix[1][i] - m_matrix[0][i];
off = (m_matrix[0][i] * m_matrix[1][j]) - (m_matrix[1][i] * m_matrix[0][j]);
for (unsigned k = 0; k < m_num_rows; k++) {
if (((coeff1 * m_matrix[k][i]) + (coeff2 * m_matrix[k][j]) + off) !=
rational::zero()) {
TRACE("cvx_dbg_verb",
tout << "Didn't work for " << m_matrix[k][i] << " and "
<< m_matrix[k][j] << " with coefficients " << coeff1
<< " , " << coeff2 << " and offset " << off << "\n";);
return false;
}
}
rational div = gcd(coeff1, gcd(coeff2, off));
if (div == 0) return false;
coeff1 = coeff1 / div;
coeff2 = coeff2 / div;
off = off / div;
return true;
}
bool spacer_matrix::compute_linear_deps(spacer_matrix &eq) const {
SASSERT(m_num_rows > 1);
eq.reset(m_num_cols + 1);
rational coeff1, coeff2, off;
vector<rational> lin_dep;
lin_dep.reserve(m_num_cols + 1);
for (unsigned i = 0; i < m_num_cols; i++) {
for (unsigned j = i + 1; j < m_num_cols; j++) {
if (is_lin_reltd(i, j, coeff1, coeff2, off)) {
SASSERT(!(coeff1 == 0 && coeff2 == 0 && off == 0));
lin_dep[i] = coeff1;
lin_dep[j] = coeff2;
lin_dep[m_num_cols] = off;
eq.add_row(lin_dep);
TRACE("cvx_dbg_verb", {
tout << "Adding row ";
for (rational r : lin_dep) tout << r << " ";
tout << "\n";
});
// reset everything
lin_dep[i] = rational::zero();
lin_dep[j] = rational::zero();
lin_dep[m_num_cols] = 0;
// Found a dependency for this row, move on.
// sound because of transitivity of is_lin_reltd
break;
}
}
}
return eq.num_rows() > 0;
}
} // namespace spacer


@@ -22,24 +22,44 @@ Revision History:
namespace spacer {
class spacer_matrix {
public:
spacer_matrix(unsigned m, unsigned n); // m rows, n columns
class spacer_matrix {
private:
unsigned m_num_rows;
unsigned m_num_cols;
vector<vector<rational>> m_matrix;
unsigned num_rows();
unsigned num_cols();
bool is_lin_reltd(unsigned i, unsigned j, rational &coeff1,
rational &coeff2, rational &off) const;
const rational& get(unsigned i, unsigned j);
void set(unsigned i, unsigned j, const rational& v);
public:
spacer_matrix(unsigned m, unsigned n); // m rows, n columns
unsigned perform_gaussian_elimination();
unsigned num_rows() const { return m_num_rows; }
unsigned num_cols() const { return m_num_cols; }
void print_matrix();
void normalize();
private:
unsigned m_num_rows;
unsigned m_num_cols;
vector<vector<rational>> m_matrix;
};
}
const rational &get(unsigned i, unsigned j) const { return m_matrix[i][j]; }
void set(unsigned i, unsigned j, const rational &v) { m_matrix[i][j] = v; }
const vector<rational> &get_row(unsigned i) const {
SASSERT(i < num_rows());
return m_matrix.get(i);
}
/// Returns a copy of column \p i in \p row
void get_col(unsigned i, vector<rational> &row) const;
void add_row(const vector<rational> &row);
void reset(unsigned n_cols) {
m_num_rows = 0;
m_num_cols = n_cols;
m_matrix.reset();
}
std::ostream &display(std::ostream &out) const;
void normalize();
unsigned perform_gaussian_elimination();
bool compute_linear_deps(spacer_matrix &eq) const;
};
} // namespace spacer
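The `normalize()` member declared above scales the whole matrix by the lcm of all denominators so every entry becomes integral. A per-row variant of the same idea, using plain `(num, den)` pairs in place of `rational` (names illustrative):

```cpp
#include <cassert>
#include <numeric>
#include <utility>
#include <vector>

// num / den with den > 0
using frac = std::pair<long long, long long>;

// Scale a row of fractions by the lcm of its denominators,
// mirroring what spacer_matrix::normalize does matrix-wide.
std::vector<long long> normalize_row(const std::vector<frac> &row) {
    long long den = 1;
    for (const auto &f : row) den = std::lcm(den, f.second);
    std::vector<long long> out;
    for (const auto &f : row)
        out.push_back(f.first * (den / f.second)); // exact: f.second divides den
    return out;
}
```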


@@ -284,6 +284,11 @@ namespace spacer {
ptr_buffer<proof> const &parents,
unsigned num_params,
parameter const *params) {
if(num_params != parents.size() + 1) {
//TODO: fix bug
TRACE("spacer.fkab", tout << "UNEXPECTED INPUT TO FUNCTION. Bailing out\n";);
return proof_ref(m);
}
SASSERT(num_params == parents.size() + 1 /* one param is missing */);
arith_util a(m);
th_rewriter rw(m);


@@ -84,7 +84,7 @@ bool sem_matcher::operator()(expr * e1, expr * e2, substitution & s, bool &pos)
top = false;
if (n1->get_decl() != n2->get_decl()) {
expr *e1 = nullptr, *e2 = nullptr;
expr *e1 = nullptr, *e2 = nullptr, *e3 = nullptr, *e4 = nullptr, *e5 = nullptr;
rational val1, val2;
// x<=y == !(x>y)
@@ -120,6 +120,26 @@ bool sem_matcher::operator()(expr * e1, expr * e2, substitution & s, bool &pos)
else {
return false;
}
#if 0
// x >= var and !(y <= n)
// match (x, y) and (var, n+1)
if (m_arith.is_ge(n1, e1, e2) && is_var(e2) &&
m.is_not(n2, e3) && m_arith.is_le(e3, e4, e5) &&
m_arith.is_int(e5) &&
m_arith.is_numeral(e5, val2)) {
expr* num2 = m_arith.mk_numeral(val2 + 1, true);
m_pinned.push_back(num2);
if (!match_var(to_var(e2), num2)) return false;
m_todo.pop_back();
m_todo.push_back(expr_pair(e1, e4));
continue;
}
#endif
}
unsigned num_args1 = n1->get_num_args();


@@ -38,11 +38,11 @@ class sem_matcher {
substitution * m_subst;
svector<expr_pair> m_todo;
void reset();
bool match_var(var *v, expr *e);
public:
sem_matcher(ast_manager &man);
void reset();
/**
\brief Return true if e2 is an instance of e1.


@@ -396,8 +396,8 @@ namespace spacer {
matrix.set(i, map[pair.second], pair.first);
}
}
matrix.print_matrix();
IF_VERBOSE(10, matrix.display(verbose_stream()););
// 3. normalize matrix to integer values
matrix.normalize();

File diff suppressed because it is too large


@@ -21,126 +21,163 @@ Revision History:
#pragma once
#include "ast/arith_decl_plugin.h"
#include "ast/array_decl_plugin.h"
#include "ast/ast.h"
#include "ast/ast_pp.h"
#include "ast/ast_util.h"
#include "ast/bv_decl_plugin.h"
#include "ast/expr_map.h"
#include "model/model.h"
#include "util/obj_hashtable.h"
#include "util/ref_vector.h"
#include "util/trace.h"
#include "util/vector.h"
#include "ast/arith_decl_plugin.h"
#include "ast/array_decl_plugin.h"
#include "ast/bv_decl_plugin.h"
#include "ast/ast_util.h"
#include "ast/expr_map.h"
#include "model/model.h"
#include "util/stopwatch.h"
#include "muz/spacer/spacer_antiunify.h"
#include "util/stopwatch.h"
class model;
class model_core;
namespace spacer {
inline unsigned infty_level () {
return UINT_MAX;
}
inline unsigned infty_level() { return UINT_MAX; }
inline bool is_infty_level(unsigned lvl) {
// XXX: level is 16 bits in class pob
return lvl >= 65535;
}
inline unsigned next_level(unsigned lvl) {
return is_infty_level(lvl)?lvl:(lvl+1);
}
inline unsigned prev_level (unsigned lvl) {
if (is_infty_level(lvl)) return infty_level();
if (lvl == 0) return 0;
return lvl - 1;
}
struct pp_level {
unsigned m_level;
pp_level(unsigned l): m_level(l) {}
};
inline std::ostream& operator<<(std::ostream& out, pp_level const& p) {
if (is_infty_level(p.m_level)) {
return out << "oo";
} else {
return out << p.m_level;
}
}
typedef ptr_vector<app> app_vector;
typedef ptr_vector<func_decl> decl_vector;
typedef obj_hashtable<func_decl> func_decl_set;
/**
\brief hoist non-boolean if expressions.
*/
void to_mbp_benchmark(std::ostream &out, const expr* fml, const app_ref_vector &vars);
// TBD: deprecate by qe::mbp
/**
* do the following in sequence
* 1. use qe_lite to cheaply eliminate vars
* 2. for remaining boolean vars, substitute using M
* 3. use MBP for remaining array and arith variables
* 4. for any remaining arith variables, substitute using M
*/
void qe_project (ast_manager& m, app_ref_vector& vars,
expr_ref& fml, model &mdl,
bool reduce_all_selects=false,
bool native_mbp=false,
bool dont_sub=false);
// deprecate
void qe_project (ast_manager& m, app_ref_vector& vars, expr_ref& fml,
model_ref& M, expr_map& map);
// TBD: sort out
void expand_literals(ast_manager &m, expr_ref_vector& conjs);
expr_ref_vector compute_implicant_literals(model &mdl,
expr_ref_vector &formula);
void simplify_bounds (expr_ref_vector &lemmas);
void normalize(expr *e, expr_ref &out, bool use_simplify_bounds = true, bool factor_eqs = false);
/**
* Ground expression by replacing all free variables by skolem
* constants. On return, out is the resulting expression, and vars is
* a map from variable ids to corresponding skolem constants.
*/
void ground_expr (expr *e, expr_ref &out, app_ref_vector &vars);
void mbqi_project(model &mdl, app_ref_vector &vars, expr_ref &fml);
bool contains_selects (expr* fml, ast_manager& m);
void get_select_indices (expr* fml, app_ref_vector& indices);
void find_decls (expr* fml, app_ref_vector& decls, std::string& prefix);
/**
* extended pretty-printer
* used for debugging
* disables aliasing of common sub-expressions
*/
struct mk_epp : public mk_pp {
params_ref m_epp_params;
expr_ref m_epp_expr;
mk_epp(ast *t, ast_manager &m, unsigned indent = 0, unsigned num_vars = 0, char const * var_prefix = nullptr);
void rw(expr *e, expr_ref &out);
};
bool is_clause(ast_manager &m, expr *n);
bool is_literal(ast_manager &m, expr *n);
bool is_atom(ast_manager &m, expr *n);
// set f to true in model
void set_true_in_mdl(model &model, func_decl *f);
inline bool is_infty_level(unsigned lvl) {
// XXX: level is 16 bits in class pob
return lvl >= 65535;
}
inline unsigned next_level(unsigned lvl) {
return is_infty_level(lvl) ? lvl : (lvl + 1);
}
inline unsigned prev_level(unsigned lvl) {
if (is_infty_level(lvl)) return infty_level();
if (lvl == 0) return 0;
return lvl - 1;
}
struct pp_level {
unsigned m_level;
pp_level(unsigned l) : m_level(l) {}
};
inline std::ostream &operator<<(std::ostream &out, pp_level const &p) {
if (is_infty_level(p.m_level)) {
return out << "oo";
} else {
return out << p.m_level;
}
}
typedef ptr_vector<app> app_vector;
typedef ptr_vector<func_decl> decl_vector;
typedef obj_hashtable<func_decl> func_decl_set;
/**
\brief hoist non-boolean if expressions.
*/
void to_mbp_benchmark(std::ostream &out, const expr *fml,
const app_ref_vector &vars);
// TBD: deprecate by qe::mbp
/**
* do the following in sequence
* 1. use qe_lite to cheaply eliminate vars
* 2. for remaining boolean vars, substitute using M
* 3. use MBP for remaining array and arith variables
* 4. for any remaining arith variables, substitute using M
*/
void qe_project(ast_manager &m, app_ref_vector &vars, expr_ref &fml, model &mdl,
bool reduce_all_selects = false, bool native_mbp = false,
bool dont_sub = false);
// deprecate
void qe_project(ast_manager &m, app_ref_vector &vars, expr_ref &fml,
model_ref &M, expr_map &map);
// TBD: sort out
void expand_literals(ast_manager &m, expr_ref_vector &conjs);
expr_ref_vector compute_implicant_literals(model &mdl,
expr_ref_vector &formula);
void simplify_bounds(expr_ref_vector &lemmas);
bool is_normalized(expr_ref e, bool use_simplify_bounds = true,
bool factor_eqs = false);
void normalize(expr *e, expr_ref &out, bool use_simplify_bounds = true,
bool factor_eqs = false);
void normalize_order(expr *e, expr_ref &out);
/**
* Ground expression by replacing all free variables by skolem
* constants. On return, out is the resulting expression, and vars is
* a map from variable ids to corresponding skolem constants.
*/
void ground_expr(expr *e, expr_ref &out, app_ref_vector &vars);
void mbqi_project(model &mdl, app_ref_vector &vars, expr_ref &fml);
bool contains_selects(expr *fml, ast_manager &m);
void get_select_indices(expr *fml, app_ref_vector &indices);
void find_decls(expr *fml, app_ref_vector &decls, std::string &prefix);
/**
* extended pretty-printer
* used for debugging
* disables aliasing of common sub-expressions
*/
struct mk_epp : public mk_pp {
params_ref m_epp_params;
expr_ref m_epp_expr;
mk_epp(ast *t, ast_manager &m, unsigned indent = 0, unsigned num_vars = 0,
char const *var_prefix = nullptr);
void rw(expr *e, expr_ref &out);
};
bool is_clause(ast_manager &m, expr *n);
bool is_literal(ast_manager &m, expr *n);
bool is_atom(ast_manager &m, expr *n);
// set f to true in model
void set_true_in_mdl(model &model, func_decl *f);
/// Returns number of free variables in \p e
unsigned get_num_vars(expr *e);
// Return all uninterpreted constants of \p q
void collect_uninterp_consts(expr *a, expr_ref_vector &out);
bool has_nonlinear_mul(expr *e, ast_manager &m);
// Returns true if \p e contains a multiplication a variable by not-a-number
bool has_nonlinear_var_mul(expr *e, ast_manager &m);
// check whether lit is an instance of mono_var_pattern
bool is_mono_var(expr *lit, ast_manager &m, arith_util &a_util);
// a mono_var_pattern has only one variable in the whole expression and is
// linear. lit is the literal with the variable
bool find_unique_mono_var_lit(const expr_ref &p, expr_ref &lit);
/// Drop all literals that numerically match \p lit, from \p fml_vec.
///
/// \p abs_fml holds the result. Returns true if any literal has been dropped
bool filter_out_lit(const expr_ref_vector &in, const expr_ref &lit,
expr_ref_vector &out);
/// Returns true if range of s is numeric
bool is_numeric_sub(const substitution &s);
// Returns true if \p e contains \p mod
bool contains_mod(const expr_ref &e);
// Returns true if \p e contains a real-valued sub-term
bool contains_real(const expr_ref &e);
// multiply fml with num and simplify rationals to ints
// fml should be in LIA/LRA/Arrays
// assumes that fml is a sum of products
void mul_by_rat(expr_ref &fml, rational num);
} // namespace spacer
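The level helpers declared at the top of this header are small enough to restate standalone; this sketch copies their semantics verbatim:

```cpp
#include <cassert>
#include <climits>

// Infinity is represented by UINT_MAX, but any value >= 65535 counts
// as infinite, because class pob stores levels in 16 bits.
inline unsigned infty_level() { return UINT_MAX; }
inline bool is_infty_level(unsigned lvl) { return lvl >= 65535; }
// next/prev saturate at infinity and at level 0, respectively.
inline unsigned next_level(unsigned lvl) {
    return is_infty_level(lvl) ? lvl : lvl + 1;
}
inline unsigned prev_level(unsigned lvl) {
    if (is_infty_level(lvl)) return infty_level();
    return lvl == 0 ? 0 : lvl - 1;
}
```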