mirror of https://github.com/Z3Prover/z3 synced 2025-08-26 21:16:02 +00:00

Merge branch 'parallel-solving' of github.com:ilanashapiro/z3 into parallel-solving

This commit is contained in:
Ilana Shapiro 2025-07-29 16:45:56 -07:00
commit 375d537471
23 changed files with 525 additions and 218 deletions

PARALLEL_PROJECT_NOTES.md Normal file

@ -0,0 +1,125 @@
# Parallel project notes
We track notes for updates to
[smt/smt_parallel.cpp](https://github.com/Z3Prover/z3/blob/master/src/smt/smt_parallel.cpp)
and possibly
[solver/parallel_tactic.cpp](https://github.com/Z3Prover/z3/blob/master/src/solver/parallel_tactical.cpp).
## Variable selection heuristics
* Lookahead solvers:
* lookahead in the smt directory performs a simplistic lookahead search using unit propagation.
* lookahead in the sat directory uses a custom lookahead solver based on MARCH. March is described in the Handbook of Satisfiability and Knuth Volume 4.
* They both rely on a cost model where the most useful variable to branch on is the one that maximally _reduces_ the set of clauses
through unit propagation. In other words, if a literal _p_ is set to true, and _p_ occurs in the clause $\neg p \vee q \vee r$, then the
clause shrinks from size 3 to size 2 (because $\neg p$ will be false after propagating _p_).
* Selected references: SAT handbook, Knuth Volume 4, Marijn's March solver on github, [implementation of march in z3](https://github.com/Z3Prover/z3/blob/master/src/sat/sat_lookahead.cpp)
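To make the cost model concrete, here is a toy sketch (ours, not z3's implementation): clauses are lists of signed integers, and setting a literal _p_ to true shortens every clause containing $\neg p$.

```python
def reduction_score(clauses, p):
    """Count how many clauses shrink when literal p is set to true.

    A clause containing -p (and not already satisfied by p) loses one
    literal under unit propagation of p.
    """
    return sum(1 for clause in clauses if -p in clause and p not in clause)

clauses = [[-1, 2, 3], [-1, 4], [1, 5]]
# Setting literal 1 to true shortens the first two clauses.
```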
* VSIDS:
* As referenced in Matteo and Antti's solvers.
* Variable activity is a proxy for how useful it is to case split on a variable during search. Variables with a higher VSIDS are split first.
* VSIDS is updated dynamically during search. It was introduced in the Chaff paper by Moskewicz, Malik, et al. in the early 2000s. A good overview is in Armin's tutorial slides (also in my overview of SMT).
* VSIDS does not keep track of variable phases (if the variable was set to true or false).
* Selected references: [DAC 2001](https://www.princeton.edu/~chaff/publication/DAC2001v56.pdf) and [Biere Tutorial, slide 64 on Variable Scoring Schemes](https://alexeyignatiev.github.io/ssa-school-2019/slides/ab-satsmtar19-slides.pdf)
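A minimal sketch of the scheme (the class name, bump amount, and decay constant are our own illustrative choices):

```python
class Vsids:
    def __init__(self, num_vars, decay=0.95):
        self.activity = [0.0] * (num_vars + 1)  # variables are 1-indexed
        self.decay = decay

    def on_conflict(self, conflict_vars):
        # Bump the variables involved in the conflict, then decay all
        # activities so that recently conflicting variables dominate.
        for v in conflict_vars:
            self.activity[v] += 1.0
        for v in range(1, len(self.activity)):
            self.activity[v] *= self.decay

    def pick(self, unassigned):
        # Split on the unassigned variable with the highest activity.
        # Note that no phase (true/false) is recorded here.
        return max(unassigned, key=lambda v: self.activity[v])
```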
* Proof prefix:
* Collect the literals that occur in learned clauses. Count their occurrences based on polarity. This gets tracked in a weighted score.
* The weight function can be formulated to take into account clause sizes.
* The score assignment may also decay similar to VSIDS.
* We could also use a doubly linked list for literals used in conflicts and reinsert literals into the list when they are used. This would be a "variable move to front" (VMTF) variant.
* Selected references: [Battleman et al](https://www.cs.cmu.edu/~mheule/publications/proofix-SAT25.pdf)
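A possible sketch of such a scoring function (the clause-size weighting and decay schedule are our assumptions, not the cited paper's exact formulation):

```python
from collections import defaultdict

def proof_prefix_scores(learned_clauses, decay=0.9):
    """Score literals by their occurrences in learned clauses.

    Polarity is kept by scoring signed literals; shorter clauses weigh
    more, and older clauses decay, similar in spirit to VSIDS.
    """
    score = defaultdict(float)
    weight = 1.0
    for clause in reversed(learned_clauses):  # newest clause first
        for lit in clause:
            score[lit] += weight / len(clause)
        weight *= decay
    return score
```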
* From local search:
* Note also that local search solvers can be used to assign variable branch priorities.
* We are not going to directly run a local search solver in the mix up front, but let us consider this heuristic for completeness.
* The heuristic is documented in Biere and Cai's journal paper on integrating local search for CDCL.
* Roughly, it considers clauses that move from the UNSAT set to the SAT set of clauses. It then keeps track of the literals involved.
* Selected references: [Cai et al](https://www.jair.org/index.php/jair/article/download/13666/26833/)
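The UNSAT-to-SAT movement can be illustrated with a toy helper (ours, not the formulation in the cited paper):

```python
def made_clauses(clauses, assignment, var):
    """Clauses that move from the UNSAT set to the SAT set when `var`
    is flipped. `assignment` maps each variable to a bool."""
    def satisfied(clause, a):
        return any((lit > 0) == a[abs(lit)] for lit in clause)

    flipped = dict(assignment)
    flipped[var] = not flipped[var]
    # The literals occurring in these clauses are the ones the
    # heuristic keeps track of.
    return [c for c in clauses
            if not satisfied(c, assignment) and satisfied(c, flipped)]
```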
* Assignment trails:
* We could also consider the assignments to variables during search.
* Variables that are always assigned to the same truth value could be considered to be safe to assign that truth value.
* The cubes resulting from such variables might be a direction towards finding satisfying solutions.
* Selected references: [Alex and Vadim](https://link.springer.com/chapter/10.1007/978-3-319-94144-8_7) and most recently [Robin et al](https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.SAT.2024.9).
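A sketch of this heuristic over a set of recorded trails (representing a trail as a list of signed literals is our assumption):

```python
def stable_phases(trails):
    """Variables assigned the same truth value on every trail; these are
    candidates for 'safe' cube literals pointing towards a model."""
    phase, unstable = {}, set()
    for trail in trails:  # each trail is a list of signed literals
        for lit in trail:
            v = abs(lit)
            if v in phase and phase[v] != (lit > 0):
                unstable.add(v)
            phase[v] = lit > 0
    return {v: p for v, p in phase.items() if v not in unstable}
```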
## Algorithms
This section considers various possible algorithms.
In the following, $F$ refers to the original goal, $T$ is the number of CPU cores or CPU threads.
### Base algorithm
The existing algorithm in **smt_parallel** is as follows:
1. Run a solver on $F$ with a bounded number of conflicts.
2. If the result is SAT/UNSAT, or UNKNOWN due to an interrupt or timeout, return. If the maximal number of conflicts was reached, continue.
3. Spawn $T$ solvers on $F$ with a bounded number of conflicts; wait until a thread returns UNSAT/SAT or all threads have reached the maximal number of conflicts.
4. Perform a similar check as in 2.
5. Share unit literals learned by each thread.
6. Compute a unit cube for each thread.
7. Spawn $T$ solvers with $F \wedge \ell$, where $\ell$ is a unit literal determined by the lookahead function in each thread.
8. Perform a similar check as in 2. But note that a thread can be UNSAT because the unit cube $\ell$ contradicted $F$. In this case learn the unit literal $\neg \ell$.
9. Share unit literals learned by each thread, increase the maximal number of conflicts, and go to 3.
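Steps 7-8 can be sketched as follows, with threads emulated sequentially for clarity; `solve` stands in for a bounded-conflict check returning "sat", "unsat", or "unknown" (all names here are illustrative):

```python
def cube_round(solve, formula, cube_literals):
    """One round over unit cubes: solve F /\\ l for each cube literal l.

    Per step 8, a cube that comes back unsat may only mean that l
    contradicts F, so the unit literal -l is learned rather than
    concluding that F itself is unsat.
    """
    learned_units = []
    for lit in cube_literals:  # stand-in for spawning T threads
        result = solve(formula, lit)
        if result == "sat":
            return "sat", learned_units
        if result == "unsat":
            learned_units.append(-lit)  # learn the negated cube literal
    return "unknown", learned_units
```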
### Algorithm Variants
* Instead of using lookahead solving to find unit cubes use the proof-prefix based scoring function.
* Instead of using independent unit cubes, perform a systematic (where systematic can mean many things) cube and conquer strategy.
* Spawn some threads to work in "SAT" mode, tuning to find models instead of short resolution proofs.
* Change the synchronization barrier discipline.
* [Future] Include in-processing
### Cube and Conquer strategy
We could maintain a global decomposition of the search space by maintaining a list of _cubes_.
Initially, the list of cubes has just one element, the cube with no literals $[ [] ]$.
By using a list of cubes instead of a _set_ of cubes we can refer to an ordering.
For example, cubes can be ordered by a suffix traversal of the _cube tree_ (the tree formed by
case splitting on the first literal, children of the _true_ branch are the cubes where the first
literal is true, children of the _false_ branch are the cubes where the first literal is false).
The main question is going to be how the cube decomposition is created.
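For a fixed sequence of split literals, one concrete leaf ordering of the cube tree (true branch before false branch) can be sketched as:

```python
from itertools import product

def leaf_cubes(split_literals):
    """All 2^k leaf cubes of the cube tree over the given split
    literals, ordered so that the true branch of each literal
    precedes the false branch."""
    return [[lit if positive else -lit
             for lit, positive in zip(split_literals, signs)]
            for signs in product([True, False], repeat=len(split_literals))]
```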
#### Static cubing
We can aim for a static cube strategy that uses a few initial (concurrent) probes to find cube literals.
This strategy would be a parallel implementation of the proof-prefix approach. The computed cubes are inserted
into the list of cubes and the list is consumed by a second round.
#### Growing cubes on demand
Based on experiences with cubing so far, there is high variance in how easy cubes are to solve.
Some cubes will be harder than others to solve. For hard cubes, it is tempting to develop a recursive
cubing strategy. Ideally, a recursive cubing strategy is symmetric to top-level cubing.
* The solver would have to identify hard cubes vs. easy cubes.
* It would have to know when to stop working on a hard cube and replace it in the list of cubes by
a new list of sub-cubes.
* Ideally, we don't need any static cubing and cubing is grown on demand while all threads are utilized.
* If we spawn $T$ threads to initially work with empty cubes, we could extract up to $T$ independent cubes
by examining the proof-prefix of their traces. This can form the basis for the first, up to $2^T$ cubes.
* After a round of solving with each thread churning on some cubes, we may obtain more proof-prefixes from
_hard_ cubes. It is not obvious that we want to share cubes from different proof prefixes at this point.
But a starting point is to split a hard cube into two by using the proof-prefix from attempting to solve it.
* Suppose we take the proof-prefix sampling algorithm at heart: It says to start with some initial cube prefix
and then sample for other cube literals. If we translate it to the case where multiple cubes are being processed
in parallel, then an analogy is to share candidates for new cube literals among cubes that are close to each other.
For example, if thread $t_1$ processes cube $a, b, c$ and $t_2$ processes $a, b, \neg c$, they are close: separated
only by Hamming distance 1. If $t_1$ finds cube literal $d$ and $t_2$ finds cube literal $e$, we could consider the cubes
$a, b, c, d, e$, and $a, b, c, d, \neg e$, $\ldots$, $a, b, \neg c, \neg d, \neg e$.
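The neighbor-sharing idea in the last bullet can be sketched for two sibling cubes at Hamming distance 1 (the helper is hypothetical):

```python
from itertools import product

def refine_sibling_cubes(shared_prefix, split_lit, new_literals):
    """Refine the sibling cubes prefix+[c] and prefix+[-c] with every
    polarity combination of the candidate literals found by the two
    threads (e.g. d from one thread, e from the other)."""
    return [shared_prefix + [c] + [s * l for s, l in zip(signs, new_literals)]
            for c in (split_lit, -split_lit)
            for signs in product([1, -1], repeat=len(new_literals))]
```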
#### Representing cubes implicitly
We can represent a list of cubes by using intervals and only represent start and end-points of the intervals.
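One possible encoding (our assumption, not taken from z3): number the $2^k$ leaf cubes so that bit $i$ gives the polarity of the $i$-th split literal; an index interval then denotes a whole block of cubes.

```python
def decode_cube(split_literals, index):
    """Decode leaf cube number `index` (0 <= index < 2^k); an interval
    [lo, hi) of indices then stands for a whole list of cubes, so only
    the interval endpoints need to be stored."""
    return [lit if (index >> i) & 1 == 0 else -lit
            for i, lit in enumerate(split_literals)]
```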
#### Batching
Threads can work on more than one cube in a batch.
### Synchronization
* The first thread to time out or finish could kill other threads instead of joining on all threads to finish.
* Instead of synchronization barriers have threads continue concurrently without terminating. They synchronize on signals and new units. This is trickier to implement, but in some guises accomplished in [sat/sat_parallel.cpp](https://github.com/Z3Prover/z3/blob/master/src/sat/sat_parallel.cpp)
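The first bullet can be sketched with a shared stop flag (illustrative only; z3 has its own cancellation infrastructure):

```python
import threading

def run_portfolio(workers):
    """Run workers concurrently; the first definitive answer sets a stop
    event that the remaining workers poll, instead of joining a barrier."""
    stop = threading.Event()
    results, lock = [], threading.Lock()

    def wrap(worker):
        answer = worker(stop)  # workers check the stop event periodically
        if answer is not None:
            with lock:
                results.append(answer)
            stop.set()  # tell the other workers to give up

    threads = [threading.Thread(target=wrap, args=(w,)) for w in workers]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results[0] if results else None
```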


@ -225,13 +225,15 @@ extern "C" {
Z3_TRY;
LOG_Z3_mk_fresh_func_decl(c, prefix, domain_size, domain, range);
RESET_ERROR_CODE();
CHECK_IS_SORT(range, nullptr);
CHECK_SORTS(domain_size, domain, nullptr);
if (prefix == nullptr) {
prefix = "";
}
func_decl* d = mk_c(c)->m().mk_fresh_func_decl(prefix,
domain_size,
reinterpret_cast<sort*const*>(domain),
to_sorts(domain),
to_sort(range), false);
mk_c(c)->save_ast_trail(d);
@ -243,9 +245,11 @@ extern "C" {
Z3_TRY;
LOG_Z3_mk_fresh_const(c, prefix, ty);
RESET_ERROR_CODE();
CHECK_IS_SORT(ty, nullptr);
if (prefix == nullptr) {
prefix = "";
}
app* a = mk_c(c)->m().mk_fresh_const(prefix, to_sort(ty), false);
mk_c(c)->save_ast_trail(a);
RETURN_Z3(of_ast(a));
@ -654,6 +658,7 @@ extern "C" {
Z3_TRY;
LOG_Z3_get_sort_name(c, t);
RESET_ERROR_CODE();
CHECK_IS_SORT(t, of_symbol(symbol::null));
CHECK_VALID_AST(t, of_symbol(symbol::null));
return of_symbol(to_sort(t)->get_name());
Z3_CATCH_RETURN(of_symbol(symbol::null));


@ -286,10 +286,13 @@ namespace api {
inline api::context * mk_c(Z3_context c) { return reinterpret_cast<api::context*>(c); }
#define RESET_ERROR_CODE() { mk_c(c)->reset_error_code(); }
#define SET_ERROR_CODE(ERR, MSG) { mk_c(c)->set_error_code(ERR, MSG); }
#define CHECK_NON_NULL(_p_,_ret_) { if (_p_ == 0) { SET_ERROR_CODE(Z3_INVALID_ARG, "ast is null"); return _ret_; } }
#define CHECK_VALID_AST(_a_, _ret_) { if (_a_ == 0 || !CHECK_REF_COUNT(_a_)) { SET_ERROR_CODE(Z3_INVALID_ARG, "not a valid ast"); return _ret_; } }
#define CHECK_NON_NULL(_p_,_ret_) { if (_p_ == nullptr) { SET_ERROR_CODE(Z3_INVALID_ARG, "ast is null"); return _ret_; } }
#define CHECK_VALID_AST(_a_, _ret_) { if (_a_ == nullptr || !CHECK_REF_COUNT(_a_)) { SET_ERROR_CODE(Z3_INVALID_ARG, "not a valid ast"); return _ret_; } }
inline bool is_expr(Z3_ast a) { return is_expr(to_ast(a)); }
#define CHECK_IS_EXPR(_p_, _ret_) { if (_p_ == 0 || !is_expr(_p_)) { SET_ERROR_CODE(Z3_INVALID_ARG, "ast is not an expression"); return _ret_; } }
#define CHECK_IS_EXPR(_p_, _ret_) { if (_p_ == nullptr || !is_expr(_p_)) { SET_ERROR_CODE(Z3_INVALID_ARG, "ast is not an expression"); return _ret_; } }
#define CHECK_IS_SORT(_p_, _ret_) { if (_p_ == nullptr || !is_sort(_p_)) { SET_ERROR_CODE(Z3_INVALID_ARG, "ast is not a sort"); return _ret_; } }
#define CHECK_SORTS(_n_, _ps_, _ret_) { for (unsigned i = 0; i < _n_; ++i) if (!is_sort(_ps_[i])) { SET_ERROR_CODE(Z3_INVALID_ARG, "ast is not a sort"); return _ret_; } }
inline bool is_bool_expr(Z3_context c, Z3_ast a) { return is_expr(a) && mk_c(c)->m().is_bool(to_expr(a)); }
#define CHECK_FORMULA(_a_, _ret_) { if (_a_ == 0 || !CHECK_REF_COUNT(_a_) || !is_bool_expr(c, _a_)) { SET_ERROR_CODE(Z3_INVALID_ARG, nullptr); return _ret_; } }
#define CHECK_FORMULA(_a_, _ret_) { if (_a_ == nullptr || !CHECK_REF_COUNT(_a_) || !is_bool_expr(c, _a_)) { SET_ERROR_CODE(Z3_INVALID_ARG, nullptr); return _ret_; } }
inline void check_sorts(Z3_context c, ast * n) { mk_c(c)->check_sorts(n); }


@ -67,6 +67,7 @@ inline ast * const * to_asts(Z3_ast const* a) { return reinterpret_cast<ast* con
inline sort * to_sort(Z3_sort a) { return reinterpret_cast<sort*>(a); }
inline Z3_sort of_sort(sort* s) { return reinterpret_cast<Z3_sort>(s); }
inline bool is_sort(Z3_sort a) { return is_sort(to_sort(a)); }
inline sort * const * to_sorts(Z3_sort const* a) { return reinterpret_cast<sort* const*>(a); }
inline Z3_sort const * of_sorts(sort* const* s) { return reinterpret_cast<Z3_sort const*>(s); }


@ -1506,6 +1506,8 @@ def Consts(names, sort):
def FreshConst(sort, prefix="c"):
"""Create a fresh constant of a specified sort"""
if z3_debug():
_z3_assert(is_sort(sort), f"Z3 sort expected, got {type(sort)}")
ctx = _get_ctx(sort.ctx)
return _to_expr_ref(Z3_mk_fresh_const(ctx.ref(), prefix, sort.ast), ctx)


@ -14,7 +14,7 @@ Author:
Nikolaj Bjorner (nbjorner) 2023-11-11
Completion modulo AC
E set of eqs
pick critical pair xy = z by j1 xu = v by j2 in E
Add new equation zu = xyu = vy by j1, j2
@ -22,7 +22,7 @@ Completion modulo AC
Sets P - processed, R - reductions, S - to simplify
new equality l = r:
new equality l = r:
reduce l = r modulo R if equation is external
orient l = r - if it cannot be oriented, discard
if l = r is a reduction rule then reduce R, S using l = r, insert into R
@ -46,9 +46,9 @@ backward subsumption e as (l = r) using (l' = r') in P u S:
is reduction rule e as (l = r):
l is a unit, and r is unit, is empty, or is zero.
superpose e as (l = r) with (l' = r') in P:
if l and l' share a common subset x.
if l and l' share a common subset x.
forward simplify (l' = r') in P u S using e as (l = r):
@ -56,10 +56,10 @@ forward simplify (l' = r') in P u S using e as (l = r):
More notes:
Justifications for new equations are joined (requires extension to egraph/justification)
Process new merges so use list is updated
Justifications for processed merges are recorded
Updated equations are recorded for restoration on backtracking
Keep track of foreign / shared occurrences of AC functions.
@ -91,7 +91,7 @@ More notes:
TODOs:
- Efficiency of handling shared terms.
- The shared terms hash table is not incremental.
- The shared terms hash table is not incremental.
It could be made incremental by updating it on every merge similar to how the egraph handles it.
- V2 using multiplicities instead of repeated values in monomials.
- Squash trail updates when equations or monomials are modified within the same epoch.
@ -131,20 +131,18 @@ namespace euf {
return;
for (auto arg : enode_args(n))
if (is_op(arg))
register_shared(arg);
register_shared(arg);
}
// unit -> {}
void ac_plugin::add_unit(enode* u) {
m_units.push_back(u);
mk_node(u);
auto m_id = to_monomial(u, {});
init_equation(eq(to_monomial(u), m_id, justification::axiom(get_id())));
void ac_plugin::add_unit(enode* u) {
push_equation(u, nullptr);
}
// zero x -> zero
void ac_plugin::add_zero(enode* z) {
mk_node(z)->is_zero = true;
// zeros persist
}
void ac_plugin::register_shared(enode* n) {
@ -165,12 +163,16 @@ namespace euf {
push_undo(is_register_shared);
}
void ac_plugin::push_scope_eh() {
push_undo(is_push_scope);
}
void ac_plugin::undo() {
auto k = m_undo.back();
m_undo.pop_back();
switch (k) {
case is_add_eq: {
m_active.pop_back();
case is_queue_eq: {
m_queued.pop_back();
break;
}
case is_add_node: {
@ -180,14 +182,15 @@ namespace euf {
n->~node();
break;
}
case is_add_monomial: {
m_monomials.pop_back();
case is_push_scope: {
m_active.reset();
m_passive.reset();
m_units.reset();
m_queue_head = 0;
break;
}
case is_update_eq: {
auto const& [idx, eq] = m_update_eq_trail.back();
m_active[idx] = eq;
m_update_eq_trail.pop_back();
case is_add_monomial: {
m_monomials.pop_back();
break;
}
case is_add_shared_index: {
@ -203,7 +206,7 @@ namespace euf {
break;
}
case is_register_shared: {
auto s = m_shared.back();
auto s = m_shared.back();
m_shared_nodes[s.n->get_id()] = false;
m_shared.pop_back();
break;
@ -316,14 +319,24 @@ namespace euf {
}
void ac_plugin::merge_eh(enode* l, enode* r) {
if (l == r)
return;
push_equation(l, r);
}
void ac_plugin::pop_equation(enode* l, enode* r) {
m_fuel += m_fuel_inc;
auto j = justification::equality(l, r);
auto m1 = to_monomial(l);
auto m2 = to_monomial(r);
TRACE(plugin, tout << "merge: " << m_name << " " << g.bpp(l) << " == " << g.bpp(r) << " " << m_pp_ll(*this, monomial(m1)) << " == " << m_pp_ll(*this, monomial(m2)) << "\n");
init_equation(eq(m1, m2, j));
if (!r) {
m_units.push_back(l);
mk_node(l);
auto m_id = to_monomial(l, {});
init_equation(eq(to_monomial(l), m_id, justification::axiom(get_id())), true);
}
else {
auto j = justification::equality(l, r);
auto m1 = to_monomial(l);
auto m2 = to_monomial(r);
TRACE(plugin, tout << "merge: " << m_name << " " << g.bpp(l) << " == " << g.bpp(r) << " " << m_pp_ll(*this, monomial(m1)) << " == " << m_pp_ll(*this, monomial(m2)) << "\n");
init_equation(eq(m1, m2, j), true);
}
}
void ac_plugin::diseq_eh(enode* eq) {
@ -336,64 +349,80 @@ namespace euf {
register_shared(b);
}
bool ac_plugin::init_equation(eq const& e) {
m_active.push_back(e);
auto& eq = m_active.back();
deduplicate(monomial(eq.l).m_nodes, monomial(eq.r).m_nodes);
if (orient_equation(eq)) {
auto& ml = monomial(eq.l);
auto& mr = monomial(eq.r);
unsigned eq_id = m_active.size() - 1;
if (ml.size() == 1 && mr.size() == 1)
push_merge(ml[0]->n, mr[0]->n, eq.j);
for (auto n : ml) {
if (!n->n->is_marked2()) {
n->eqs.push_back(eq_id);
n->n->mark2();
push_undo(is_add_eq_index);
m_node_trail.push_back(n);
for (auto s : n->shared)
m_shared_todo.insert(s);
}
}
for (auto n : mr) {
if (!n->n->is_marked2()) {
n->eqs.push_back(eq_id);
n->n->mark2();
push_undo(is_add_eq_index);
m_node_trail.push_back(n);
for (auto s : n->shared)
m_shared_todo.insert(s);
}
}
for (auto n : ml)
n->n->unmark2();
for (auto n : mr)
n->n->unmark2();
SASSERT(well_formed(eq));
TRACE(plugin, display_equation_ll(tout, eq) << " shared: " << m_shared_todo << "\n");
m_to_simplify_todo.insert(eq_id);
m_new_eqs.push_back(eq_id);
//display_equation_ll(verbose_stream() << "init " << eq_id << ": ", eq) << "\n";
return true;
}
else {
m_active.pop_back();
return false;
}
void ac_plugin::push_equation(enode* l, enode* r) {
if (l == r)
return;
m_queued.push_back({ l, r });
push_undo(is_queue_eq);
}
bool ac_plugin::init_equation(eq eq, bool is_active) {
deduplicate(monomial(eq.l), monomial(eq.r));
if (!orient_equation(eq))
return false;
#if 1
if (is_reducing(eq))
is_active = true;
#else
is_active = true; // set to active by default
#endif
if (!is_active) {
m_passive.push_back(eq);
return true;
}
m_active.push_back(eq);
auto& ml = monomial(eq.l);
auto& mr = monomial(eq.r);
unsigned eq_id = m_active.size() - 1;
if (ml.size() == 1 && mr.size() == 1)
push_merge(ml[0]->n, mr[0]->n, eq.j);
for (auto n : ml) {
if (!n->n->is_marked2()) {
n->eqs.push_back(eq_id);
n->n->mark2();
push_undo(is_add_eq_index);
m_node_trail.push_back(n);
for (auto s : n->shared)
m_shared_todo.insert(s);
}
}
for (auto n : mr) {
if (!n->n->is_marked2()) {
n->eqs.push_back(eq_id);
n->n->mark2();
push_undo(is_add_eq_index);
m_node_trail.push_back(n);
for (auto s : n->shared)
m_shared_todo.insert(s);
}
}
for (auto n : ml)
n->n->unmark2();
for (auto n : mr)
n->n->unmark2();
SASSERT(well_formed(eq));
TRACE(plugin, display_equation_ll(tout, eq) << " shared: " << m_shared_todo << "\n");
m_to_simplify_todo.insert(eq_id);
m_new_eqs.push_back(eq_id);
//display_equation_ll(verbose_stream() << "init " << eq_id << ": ", eq) << "\n";
return true;
}
bool ac_plugin::orient_equation(eq& e) {
auto& ml = monomial(e.l);
auto& mr = monomial(e.r);
@ -402,7 +431,7 @@ namespace euf {
if (ml.size() < mr.size()) {
std::swap(e.l, e.r);
return true;
}
}
else {
sort(ml);
sort(mr);
@ -412,7 +441,7 @@ namespace euf {
if (ml[i]->id() < mr[i]->id())
std::swap(e.l, e.r);
return true;
}
}
return false;
}
}
@ -429,7 +458,7 @@ namespace euf {
return false;
if (!is_sorted(mr))
return false;
for (unsigned i = 0; i < ml.size(); ++i) {
for (unsigned i = 0; i < ml.size(); ++i) {
if (ml[i]->id() == mr[i]->id())
continue;
if (ml[i]->id() < mr[i]->id())
@ -455,8 +484,11 @@ namespace euf {
uint64_t ac_plugin::filter(monomial_t& m) {
auto& bloom = m.m_bloom;
if (bloom.m_tick == m_tick)
if (bloom.m_tick == m_tick) {
SASSERT(bloom_filter_is_correct(m.m_nodes, m.m_bloom));
return bloom.m_filter;
}
bloom.m_filter = 0;
for (auto n : m)
bloom.m_filter |= (1ull << (n->id() % 64ull));
@ -466,6 +498,13 @@ namespace euf {
return bloom.m_filter;
}
bool ac_plugin::bloom_filter_is_correct(ptr_vector<node> const& m, bloom const& b) const {
uint64_t f = 0;
for (auto n : m)
f |= (1ull << (n->id() % 64ull));
return b.m_filter == f;
}
bool ac_plugin::can_be_subset(monomial_t& subset, monomial_t& superset) {
if (subset.size() > superset.size())
return false;
@ -477,6 +516,7 @@ namespace euf {
bool ac_plugin::can_be_subset(monomial_t& subset, ptr_vector<node> const& m, bloom& bloom) {
if (subset.size() > m.size())
return false;
SASSERT(bloom.m_tick != m_tick || bloom_filter_is_correct(m, bloom));
if (bloom.m_tick != m_tick) {
bloom.m_filter = 0;
for (auto n : m)
@ -501,10 +541,10 @@ namespace euf {
ns.push_back(n);
for (unsigned i = 0; i < ns.size(); ++i) {
auto k = ns[i];
if (is_op(k))
ns.append(k->num_args(), k->args());
else
m.push_back(mk_node(k));
if (is_op(k))
ns.append(k->num_args(), k->args());
else
m.push_back(mk_node(k));
}
return to_monomial(n, m);
}
@ -562,6 +602,10 @@ namespace euf {
}
void ac_plugin::propagate() {
while (m_queue_head < m_queued.size()) {
auto [l, r] = m_queued[m_queue_head++];
pop_equation(l, r);
}
while (true) {
loop_start:
if (m_fuel == 0)
@ -575,15 +619,15 @@ namespace euf {
SASSERT(well_formed(m_active[eq_id]));
// simplify eq using processed
TRACE(plugin,
for (auto other_eq : forward_iterator(eq_id))
tout << "forward iterator " << eq_id << " vs " << other_eq << " " << is_processed(other_eq) << "\n");
TRACE(plugin,
for (auto other_eq : forward_iterator(eq_id))
tout << "forward iterator " << eq_id << " vs " << other_eq << " " << is_processed(other_eq) << "\n");
for (auto other_eq : forward_iterator(eq_id))
if (is_processed(other_eq) && forward_simplify(eq_id, other_eq))
goto loop_start;
auto& eq = m_active[eq_id];
deduplicate(monomial(eq.l).m_nodes, monomial(eq.r).m_nodes);
deduplicate(monomial(eq.l), monomial(eq.r));
if (monomial(eq.l).size() == 0) {
set_status(eq_id, eq_status::is_dead_eq);
continue;
@ -605,8 +649,8 @@ namespace euf {
for (auto other_eq : backward_iterator(eq_id))
if (is_active(other_eq))
backward_simplify(eq_id, other_eq);
forward_subsume_new_eqs();
forward_subsume_new_eqs();
// superpose, create new equations
unsigned new_sup = 0;
for (auto other_eq : superpose_iterator(eq_id))
@ -623,12 +667,20 @@ namespace euf {
}
unsigned ac_plugin::pick_next_eq() {
init_pick:
while (!m_to_simplify_todo.empty()) {
unsigned id = *m_to_simplify_todo.begin();
if (id < m_active.size() && is_to_simplify(id))
return id;
m_to_simplify_todo.remove(id);
}
if (!m_passive.empty()) {
auto eq = m_passive.back();
// verbose_stream() << "pick passive " << eq_pp_ll(*this, eq) << "\n";
m_passive.pop_back();
init_equation(eq, true);
goto init_pick;
}
return UINT_MAX;
}
@ -637,14 +689,10 @@ namespace euf {
auto& eq = m_active[id];
if (eq.status == eq_status::is_dead_eq)
return;
if (are_equal(monomial(eq.l), monomial(eq.r)))
if (are_equal(monomial(eq.l), monomial(eq.r)))
s = eq_status::is_dead_eq;
if (eq.status != s) {
m_update_eq_trail.push_back({ id, eq });
eq.status = s;
push_undo(is_update_eq);
}
eq.status = s;
switch (s) {
case eq_status::is_processed_eq:
case eq_status::is_reducing_eq:
@ -657,7 +705,7 @@ namespace euf {
set_status(id, eq_status::is_dead_eq);
}
break;
}
}
}
//
@ -673,7 +721,7 @@ namespace euf {
}
//
// backward iterator allows simplification of eq
// forward iterator allows simplification of eq
// The rhs of eq is a super-set of lhs of other eq.
//
unsigned_vector const& ac_plugin::forward_iterator(unsigned eq_id) {
@ -733,7 +781,7 @@ namespace euf {
unsigned j = 0;
m_eq_seen.reserve(m_active.size() + 1, false);
for (unsigned i = 0; i < m_eq_occurs.size(); ++i) {
unsigned id = m_eq_occurs[i];
unsigned id = m_eq_occurs[i];
if (m_eq_seen[id])
continue;
if (id == eq_id)
@ -749,7 +797,7 @@ namespace euf {
}
//
// forward iterator simplifies other eqs where their rhs is a superset of lhs of eq
// backward iterator simplifies other eqs where their rhs is a superset of lhs of eq
//
unsigned_vector const& ac_plugin::backward_iterator(unsigned eq_id) {
auto& eq = m_active[eq_id];
@ -768,7 +816,7 @@ namespace euf {
}
void ac_plugin::init_ref_counts(monomial_t const& monomial, ref_counts& counts) const {
init_ref_counts(monomial.m_nodes, counts);
init_ref_counts(monomial.m_nodes, counts);
}
void ac_plugin::init_ref_counts(ptr_vector<node> const& monomial, ref_counts& counts) const {
@ -786,7 +834,7 @@ namespace euf {
init_ref_counts(m, check);
return
all_of(counts, [&](unsigned i) { return check[i] == counts[i]; }) &&
all_of(check, [&](unsigned i) { return check[i] == counts[i]; });
all_of(check, [&](unsigned i) { return check[i] == counts[i]; });
}
void ac_plugin::backward_simplify(unsigned src_eq, unsigned dst_eq) {
@ -843,10 +891,8 @@ namespace euf {
reduce(m_src_r, j);
auto new_r = to_monomial(m_src_r);
index_new_r(dst_eq, monomial(m_active[dst_eq].r), monomial(new_r));
m_update_eq_trail.push_back({ dst_eq, m_active[dst_eq] });
m_active[dst_eq].r = new_r;
m_active[dst_eq].j = j;
push_undo(is_update_eq);
m_src_r.reset();
m_src_r.append(monomial(src.r).m_nodes);
TRACE(plugin_verbose, tout << "rewritten to " << m_pp_ll(*this, monomial(new_r)) << "\n");
@ -862,7 +908,7 @@ namespace euf {
//
// dst_ids, dst_count contain rhs of dst_eq
//
TRACE(plugin, tout << "backward simplify " << eq_pp_ll(*this, src) << " " << eq_pp_ll(*this, dst) << " can-be-subset: " << can_be_subset(monomial(src.l), monomial(dst.r)) << "\n");
TRACE(plugin, tout << "forward simplify " << eq_pp_ll(*this, src) << " " << eq_pp_ll(*this, dst) << " can-be-subset: " << can_be_subset(monomial(src.l), monomial(dst.r)) << "\n");
if (forward_subsumes(src_eq, dst_eq)) {
set_status(dst_eq, eq_status::is_dead_eq);
@ -873,11 +919,11 @@ namespace euf {
// check that src.l is a subset of dst.r
if (!can_be_subset(monomial(src.l), monomial(dst.r)))
return false;
if (!is_subset(m_dst_r_counts, m_src_l_counts, monomial(src.l)))
return false;
if (monomial(dst.r).size() == 0)
return false;
if (!is_subset(m_dst_r_counts, m_src_l_counts, monomial(src.l)))
return false;
if (monomial(dst.r).size() == 0)
return false;
SASSERT(is_correct_ref_count(monomial(dst.r), m_dst_r_counts));
@ -885,17 +931,15 @@ namespace euf {
init_ref_counts(monomial(src.l), m_src_l_counts);
//verbose_stream() << "forward simplify " << eq_pp_ll(*this, src_eq) << " for " << eq_pp_ll(*this, dst_eq) << "\n";
rewrite1(m_src_l_counts, monomial(src.r), m_dst_r_counts, m);
auto j = justify_rewrite(src_eq, dst_eq);
reduce(m, j);
auto new_r = to_monomial(m);
index_new_r(dst_eq, monomial(m_active[dst_eq].r), monomial(new_r));
m_update_eq_trail.push_back({ dst_eq, m_active[dst_eq] });
m_active[dst_eq].r = new_r;
m_active[dst_eq].j = j;
TRACE(plugin, tout << "rewritten to " << m_pp(*this, monomial(new_r)) << "\n");
push_undo(is_update_eq);
return true;
}
@ -913,7 +957,7 @@ namespace euf {
m_new_eqs.reset();
}
bool ac_plugin::is_forward_subsumed(unsigned eq_id) {
bool ac_plugin::is_forward_subsumed(unsigned eq_id) {
return any_of(forward_iterator(eq_id), [&](unsigned other_eq) { return forward_subsumes(other_eq, eq_id); });
}
@ -968,14 +1012,16 @@ namespace euf {
}
// now dst.r and src.r should align and have the same elements.
// since src.r is a subset of dst.r we iterate over dst.r
if (!all_of(monomial(src.r), [&](node* n) {
unsigned id = n->id();
if (!all_of(monomial(src.r), [&](node* n) {
unsigned id = n->id();
return m_src_r_counts[id] == m_dst_r_counts[id]; })) {
TRACE(plugin_verbose, tout << "dst.r and src.r do not align\n");
SASSERT(!are_equal(m_active[src_eq], m_active[dst_eq]));
return false;
}
return all_of(monomial(dst.r), [&](node* n) { unsigned id = n->id(); return m_src_r_counts[id] == m_dst_r_counts[id]; });
bool r = all_of(monomial(dst.r), [&](node* n) { unsigned id = n->id(); return m_src_r_counts[id] == m_dst_r_counts[id]; });
SASSERT(r || !are_equal(m_active[src_eq], m_active[dst_eq]));
return r;
}
// src_l_counts, src_r_counts are initialized for src.l, src.r
@ -990,7 +1036,7 @@ namespace euf {
return false;
unsigned size_diff = monomial(dst.l).size() - monomial(src.l).size();
if (size_diff != monomial(dst.r).size() - monomial(src.r).size())
return false;
return false;
if (!is_superset(m_src_l_counts, m_dst_l_counts, monomial(dst.l)))
return false;
if (!is_superset(m_src_r_counts, m_dst_r_counts, monomial(dst.r)))
@ -1026,14 +1072,14 @@ namespace euf {
unsigned dst_count = dst_counts[id];
unsigned src_count = src_l[id];
SASSERT(dst_count > 0);
if (src_count == 0) {
dst[j++] = n;
dst[j++] = n;
}
else if (src_count < dst_count) {
dst[j++] = n;
dst_counts.dec(id, 1);
}
}
}
dst.shrink(j);
dst.append(src_r.m_nodes);
@ -1047,11 +1093,11 @@ namespace euf {
init_loop:
if (m.size() == 1)
return change;
bloom b;
bloom b;
init_ref_counts(m, m_m_counts);
for (auto n : m) {
if (n->is_zero) {
m[0] = n;
m[0] = n;
m.shrink(1);
break;
}
@ -1060,9 +1106,9 @@ namespace euf {
if (!is_reducing(eq)) // also can use processed?
continue;
auto& src = m_active[eq];
if (!is_equation_oriented(src))
continue;
if (!is_equation_oriented(src))
continue;
if (!can_be_subset(monomial(src.l), m, b))
continue;
if (!is_subset(m_m_counts, m_eq_counts, monomial(src.l)))
@ -1078,9 +1124,8 @@ namespace euf {
change = true;
goto init_loop;
}
}
}
while (false);
}
} while (false);
VERIFY(sz >= m.size());
return change;
}
@ -1122,7 +1167,7 @@ namespace euf {
auto& src = m_active[src_eq];
auto& dst = m_active[dst_eq];
unsigned max_left = std::max(monomial(src.l).size(), monomial(dst.l).size());
unsigned max_left = std::max(monomial(src.l).size(), monomial(dst.l).size());
unsigned min_right = std::max(monomial(src.r).size(), monomial(dst.r).size());
@ -1151,7 +1196,7 @@ namespace euf {
m_src_r.shrink(src_r_size);
return false;
}
// compute CD
for (auto n : monomial(src.l)) {
unsigned id = n->id();
@ -1171,18 +1216,18 @@ namespace euf {
reduce(m_src_r, j);
deduplicate(m_src_r, m_dst_r);
bool added_eq = false;
auto src_r = src.r;
unsigned max_left_new = std::max(m_src_r.size(), m_dst_r.size());
unsigned min_right_new = std::min(m_src_r.size(), m_dst_r.size());
if (max_left_new <= max_left && min_right_new <= min_right)
added_eq = init_equation(eq(to_monomial(m_src_r), to_monomial(m_dst_r), j));
if (max_left_new <= max_left && min_right_new <= min_right)
added_eq = init_equation(eq(to_monomial(m_src_r), to_monomial(m_dst_r), j), false);
CTRACE(plugin, added_eq,
tout << "superpose: " << m_name << " " << eq_pp_ll(*this, src) << " " << eq_pp_ll(*this, dst) << " --> ";
tout << m_pp_ll(*this, m_src_r) << "== " << m_pp_ll(*this, m_dst_r) << "\n";);
tout << m_pp_ll(*this, m_src_r) << "== " << m_pp_ll(*this, m_dst_r) << "\n";);
m_src_r.reset();
m_src_r.append(monomial(src_r).m_nodes);
return added_eq;
@ -1191,7 +1236,7 @@ namespace euf {
bool ac_plugin::is_reducing(eq const& e) const {
auto const& l = monomial(e.l);
auto const& r = monomial(e.r);
return l.size() == 1 && r.size() <= 1;
return l.size() == 1 && r.size() <= 1;
}
void ac_plugin::backward_reduce(unsigned eq_id) {
@ -1202,23 +1247,36 @@ namespace euf {
SASSERT(is_active(other_eq));
backward_reduce(eq, other_eq);
}
for (auto p : m_passive) {
bool change = false;
if (backward_reduce_monomial(eq, p, monomial(p.l)))
change = true;
if (backward_reduce_monomial(eq, p, monomial(p.r)))
change = true;
(void)change;
CTRACE(plugin, change,
verbose_stream() << "backward reduce " << eq_pp_ll(*this, eq) << " " << eq_pp_ll(*this, p) << "\n");
}
}
// TODO: this is destructive. It breaks reversibility.
// TODO: also need justifications from eq if there is a change.
void ac_plugin::backward_reduce(eq const& eq, unsigned other_eq_id) {
auto& other_eq = m_active[other_eq_id];
auto& other_eq = m_active[other_eq_id];
TRACE(plugin_verbose,
tout << "backward reduce " << eq_pp_ll(*this, eq) << " " << eq_pp_ll(*this, other_eq) << "\n");
bool change = false;
if (backward_reduce_monomial(eq, other_eq, monomial(other_eq.l)))
change = true;
if (backward_reduce_monomial(eq, other_eq, monomial(other_eq.r)))
change = true;
CTRACE(plugin, change,
tout << "backward reduce " << eq_pp_ll(*this, eq) << " " << eq_pp_ll(*this, other_eq) << "\n");
if (change) {
set_status(other_eq_id, eq_status::is_to_simplify_eq);
}
}
bool ac_plugin::backward_reduce_monomial(eq const& src, eq& dst, monomial_t& m) {
auto const& r = monomial(src.r);
unsigned j = 0;
bool change = false;
for (auto n : m) {
@ -1241,7 +1299,11 @@ namespace euf {
m.m_nodes[j++] = r[0];
}
m.m_nodes.shrink(j);
if (change) {
m.m_bloom.m_tick = 0;
dst.j = join(dst.j, src);
}
return change;
}
bool ac_plugin::are_equal(monomial_t& a, monomial_t& b) {
@ -1252,8 +1314,8 @@ namespace euf {
if (a.size() != b.size())
return false;
m_eq_counts.reset();
for (auto n : a)
m_eq_counts.inc(n->id(), 1);
for (auto n : b) {
unsigned id = n->id();
if (m_eq_counts[id] == 0)
@ -1277,21 +1339,29 @@ namespace euf {
return true;
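The counting scheme in `are_equal` treats each monomial as a multiset of node ids: the sizes must match, one pass increments a count per id on the left, and a second pass decrements per id on the right, failing on any shortfall. A standalone sketch of that idea (illustrative only, not z3 code; `multiset_equal` is a hypothetical name):

```cpp
#include <cassert>
#include <unordered_map>
#include <vector>

// Two AC monomials are equal iff they contain the same ids with the same
// multiplicities; the map plays the role of m_eq_counts.
static bool multiset_equal(const std::vector<int>& a, const std::vector<int>& b) {
    if (a.size() != b.size())
        return false;
    std::unordered_map<int, unsigned> counts;
    for (int id : a)
        ++counts[id];                       // count occurrences on the left
    for (int id : b) {
        auto it = counts.find(id);
        if (it == counts.end() || it->second == 0)
            return false;                   // id missing or over-consumed
        --it->second;
    }
    return true;
}
```

Because the sizes are equal and every right-hand id is matched against a positive count, no final pass over the map is needed.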
}
void ac_plugin::deduplicate(monomial_t& a, monomial_t& b) {
unsigned sza = a.size(), szb = b.size();
deduplicate(a.m_nodes, b.m_nodes);
if (sza != a.size())
a.m_bloom.m_tick = 0;
if (szb != b.size())
b.m_bloom.m_tick = 0;
}
void ac_plugin::deduplicate(ptr_vector<node>& a, ptr_vector<node>& b) {
{
for (auto n : a) {
if (n->is_zero) {
a[0] = n;
a.shrink(1);
break;
}
}
for (auto n : b) {
if (n->is_zero) {
b[0] = n;
b.shrink(1);
break;
}
}
@ -1340,14 +1410,14 @@ namespace euf {
while (!m_shared_todo.empty()) {
auto idx = *m_shared_todo.begin();
m_shared_todo.remove(idx);
if (idx < m_shared.size())
simplify_shared(idx, m_shared[idx]);
}
m_monomial_table.reset();
for (auto const& s1 : m_shared) {
shared s2;
TRACE(plugin_verbose, tout << "shared " << s1.m << ": " << m_pp_ll(*this, monomial(s1.m)) << "\n");
if (!m_monomial_table.find(s1.m, s2))
m_monomial_table.insert(s1.m, s1);
else if (s2.n->get_root() != s1.n->get_root()) {
TRACE(plugin, tout << "merge shared " << g.bpp(s1.n->get_root()) << " and " << g.bpp(s2.n->get_root()) << "\n");
@ -1380,14 +1450,12 @@ namespace euf {
}
}
for (auto n : monomial(old_m))
n->n->unmark2();
m_update_shared_trail.push_back({ idx, s });
push_undo(is_update_shared);
m_shared[idx].m = new_m;
m_shared[idx].j = j;
TRACE(plugin_verbose, tout << "shared simplified to " << m_pp_ll(*this, monomial(new_m)) << "\n");
push_merge(old_n, new_n, j);
}
@ -1397,19 +1465,15 @@ namespace euf {
}
justification::dependency* ac_plugin::justify_equation(unsigned eq) {
return m_dep_manager.mk_leaf(m_active[eq].j);
}
justification ac_plugin::join(justification j, eq const& eq) {
return justification::dependent(m_dep_manager.mk_join(m_dep_manager.mk_leaf(j), m_dep_manager.mk_leaf(eq.j)));
}
}

View file

@ -123,6 +123,7 @@ namespace euf {
func_decl* m_decl = nullptr;
bool m_is_injective = false;
vector<eq> m_active, m_passive;
enode_pair_vector m_queued;
ptr_vector<node> m_nodes;
bool_vector m_shared_nodes;
vector<monomial_t> m_monomials;
@ -146,21 +147,21 @@ namespace euf {
// backtrackable state
enum undo_kind {
is_add_eq,
is_queue_eq,
is_add_monomial,
is_add_node,
is_update_eq,
is_add_shared_index,
is_add_eq_index,
is_register_shared,
is_update_shared,
is_push_scope
};
svector<undo_kind> m_undo;
ptr_vector<node> m_node_trail;
unsigned m_queue_head = 0;
svector<std::pair<unsigned, shared>> m_update_shared_trail;
svector<std::tuple<node*, unsigned, unsigned>> m_merge_trail;
svector<std::pair<unsigned, eq>> m_update_eq_trail;
@ -178,6 +179,7 @@ namespace euf {
enode* from_monomial(ptr_vector<node> const& m);
monomial_t const& monomial(unsigned i) const { return m_monomials[i]; }
monomial_t& monomial(unsigned i) { return m_monomials[i]; }
void sort(monomial_t& monomial);
bool is_sorted(monomial_t const& monomial) const;
uint64_t filter(monomial_t& m);
@ -188,11 +190,12 @@ namespace euf {
bool are_equal(eq const& a, eq const& b) {
return are_equal(monomial(a.l), monomial(b.l)) && are_equal(monomial(a.r), monomial(b.r));
}
bool bloom_filter_is_correct(ptr_vector<node> const& m, bloom const& b) const;
bool well_formed(eq const& e) const;
bool is_reducing(eq const& e) const;
void backward_reduce(unsigned eq_id);
void backward_reduce(eq const& eq, unsigned dst);
bool backward_reduce_monomial(eq const& src, eq & dst, monomial_t& m);
void forward_subsume_new_eqs();
bool is_forward_subsumed(unsigned dst_eq);
bool forward_subsumes(unsigned src_eq, unsigned dst_eq);
@ -207,8 +210,10 @@ namespace euf {
UNREACHABLE();
return nullptr;
}
bool init_equation(eq e, bool is_active);
void push_equation(enode* l, enode* r);
void pop_equation(enode* l, enode* r);
bool orient_equation(eq& e);
void set_status(unsigned eq_id, eq_status s);
unsigned pick_next_eq();
@ -217,6 +222,7 @@ namespace euf {
void backward_simplify(unsigned eq_id, unsigned using_eq);
bool forward_simplify(unsigned eq_id, unsigned using_eq);
bool superpose(unsigned src_eq, unsigned dst_eq);
void deduplicate(monomial_t& a, monomial_t& b);
void deduplicate(ptr_vector<node>& a, ptr_vector<node>& b);
ptr_vector<node> m_src_r, m_src_l, m_dst_r, m_dst_l;
@ -260,8 +266,8 @@ namespace euf {
justification justify_rewrite(unsigned eq1, unsigned eq2);
justification::dependency* justify_equation(unsigned eq);
justification join(justification j1, eq const& eq);
bool is_correct_ref_count(monomial_t const& m, ref_counts const& counts) const;
bool is_correct_ref_count(ptr_vector<node> const& m, ref_counts const& counts) const;
@ -301,6 +307,8 @@ namespace euf {
void undo() override;
void push_scope_eh() override;
void propagate() override;
std::ostream& display(std::ostream& out) const override;

View file

@ -43,6 +43,11 @@ namespace euf {
void undo() override;
void push_scope_eh() override {
m_add.push_scope_eh();
m_mul.push_scope_eh();
}
void propagate() override;
std::ostream& display(std::ostream& out) const override;

View file

@ -103,6 +103,9 @@ namespace euf {
m_scopes.push_back(m_updates.size());
m_region.push_scope();
m_updates.push_back(update_record(m_new_th_eqs_qhead, update_record::new_th_eq_qhead()));
for (auto p : m_plugins)
if (p)
p->push_scope_eh();
}
SASSERT(m_new_th_eqs_qhead <= m_new_th_eqs.size());
}

View file

@ -52,6 +52,8 @@ namespace euf {
virtual void propagate() = 0;
virtual void undo() = 0;
virtual void push_scope_eh() {}
virtual std::ostream& display(std::ostream& out) const = 0;

View file

@ -2265,6 +2265,20 @@ br_status bv_rewriter::mk_bv_ext_rotate_left(expr * arg1, expr * arg2, expr_ref
unsigned shift = static_cast<unsigned>((r2 % numeral(bv_size)).get_uint64() % static_cast<uint64_t>(bv_size));
return mk_bv_rotate_left(shift, arg1, result);
}
expr* x = nullptr, * y = nullptr;
if (m_util.is_ext_rotate_right(arg1, x, y) && arg2 == y) {
// bv_ext_rotate_left(bv_ext_rotate_right(x, y), y) --> x
result = x;
return BR_DONE;
}
if (m_util.is_ext_rotate_left(arg1, x, y)) {
result = m_util.mk_bv_rotate_left(x, m_util.mk_bv_add(y, arg2));
return BR_REWRITE2;
}
if (m_util.is_ext_rotate_right(arg1, x, y)) {
result = m_util.mk_bv_rotate_left(x, m_util.mk_bv_sub(arg2, y));
return BR_REWRITE2;
}
return BR_FAILED;
}
@ -2275,6 +2289,20 @@ br_status bv_rewriter::mk_bv_ext_rotate_right(expr * arg1, expr * arg2, expr_ref
unsigned shift = static_cast<unsigned>((r2 % numeral(bv_size)).get_uint64() % static_cast<uint64_t>(bv_size));
return mk_bv_rotate_right(shift, arg1, result);
}
expr* x = nullptr, * y = nullptr;
if (m_util.is_ext_rotate_left(arg1, x, y) && arg2 == y) {
// bv_ext_rotate_right(bv_ext_rotate_left(x, y), y) --> x
result = x;
return BR_DONE;
}
if (m_util.is_ext_rotate_right(arg1, x, y)) {
result = m_util.mk_bv_rotate_right(x, m_util.mk_bv_add(y, arg2));
return BR_REWRITE2;
}
if (m_util.is_ext_rotate_left(arg1, x, y)) {
result = m_util.mk_bv_rotate_right(x, m_util.mk_bv_sub(arg2, y));
return BR_REWRITE2;
}
return BR_FAILED;
}

View file

@ -1224,16 +1224,40 @@ namespace seq {
let n = len(x)
- len(a ++ b) = len(a) + len(b) if x = a ++ b
- len(unit(u)) = 1 if x = unit(u)
- len(extract(x, o, l)) = l if len(x) >= o + l etc
- len(str) = str.length() if x = str
- len(empty) = 0 if x = empty
- len(int.to.str(i)) >= 1 if x = int.to.str(i) and more generally if i = 0 then 1 else 1+floor(log(|i|))
- len(x) >= 0 otherwise
*/
void axioms::length_axiom(expr* n) {
expr* x = nullptr, * y = nullptr, * offs = nullptr, * l = nullptr;
VERIFY(seq.str.is_length(n, x));
if (seq.str.is_concat(x) && to_app(x)->get_num_args() != 0) {
ptr_vector<expr> args;
for (auto arg : *to_app(x))
args.push_back(seq.str.mk_length(arg));
expr_ref len(a.mk_add(args), m);
add_clause(mk_eq(len, n));
}
else if (seq.str.is_extract(x, y, offs, l)) {
// len(extract(y, o, l)) = l if len(y) >= o + l, o >= 0, l >= 0
// len(extract(y, o, l)) = 0 if o < 0 or l <= 0 or len(y) < o
// len(extract(y, o, l)) = len(y) - o if o <= len(y) < o + l
expr_ref len_y(mk_len(y), m);
expr_ref z(a.mk_int(0), m);
expr_ref y_ge_l = mk_ge(a.mk_sub(len_y, a.mk_add(offs, l)), 0);
expr_ref y_ge_o = mk_ge(a.mk_sub(len_y, offs), 0);
expr_ref offs_ge_0 = mk_ge(offs, 0);
expr_ref l_ge_0 = mk_ge(l, 0);
add_clause(~offs_ge_0, ~l_ge_0, ~y_ge_l, mk_eq(n, l));
add_clause(offs_ge_0, mk_eq(n, z));
add_clause(l_ge_0, mk_eq(n, z));
add_clause(y_ge_o, mk_eq(n, z));
add_clause(~y_ge_o, y_ge_l, mk_eq(n, a.mk_sub(len_y, offs)));
}
else if (seq.str.is_unit(x) ||
seq.str.is_empty(x) ||
seq.str.is_string(x)) {
expr_ref len(n, m);
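The three extract cases in the new clauses amount to a small total function on integer lengths: out-of-range parameters yield the empty sequence, a fully in-range window has length `l`, and a window running past the end is truncated. A concrete model of that case split (illustrative only; `extract_len` is a hypothetical name, not z3 code):

```cpp
#include <cassert>

// Length of extract(y, o, l) given len(y) = len_y, mirroring the axioms:
//   o < 0, l < 0, or len(y) < o          => 0
//   len(y) >= o + l (with o, l >= 0)     => l
//   o <= len(y) < o + l                  => len(y) - o
static long long extract_len(long long len_y, long long o, long long l) {
    if (o < 0 || l < 0 || len_y < o)
        return 0;            // out of range: empty result
    if (len_y >= o + l)
        return l;            // window fully inside the sequence
    return len_y - o;        // window truncated at the end
}
```

Note the axioms only force `n = 0` for `l < 0`; when `l = 0` and the offset is in range, the in-range clause already gives `n = l = 0`, so the cases agree on the boundary.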

View file

@ -6021,6 +6021,12 @@ bool seq_rewriter::reduce_eq_empty(expr* l, expr* r, expr_ref& result) {
result = m_autil.mk_lt(s, zero());
return true;
}
// at(s, offset) = "" <=> len(s) <= offset or offset < 0
if (str().is_at(r, s, offset)) {
expr_ref len_s(str().mk_length(s), m());
result = m().mk_or(m_autil.mk_le(len_s, offset), m_autil.mk_lt(offset, zero()));
return true;
}
return false;
}

View file

@ -112,7 +112,7 @@ eliminate:
--*/
#include "params/smt_params_helper.hpp"
#include "ast/ast_ll_pp.h"
#include "ast/ast_pp.h"
#include "ast/recfun_decl_plugin.h"
@ -166,7 +166,7 @@ void elim_unconstrained::eliminate() {
expr_ref rr(m.mk_app(t->get_decl(), t->get_num_args(), m_args.data() + sz), m);
bool inverted = m_inverter(t->get_decl(), t->get_num_args(), m_args.data() + sz, r);
proof_ref pr(m);
if (inverted && m_config.m_enable_proofs) {
expr * s = m.mk_app(t->get_decl(), t->get_num_args(), m_args.data() + sz);
expr * eq = m.mk_eq(s, r);
proof * pr1 = m.mk_def_intro(eq);
@ -267,7 +267,7 @@ void elim_unconstrained::reset_nodes() {
*/
void elim_unconstrained::init_nodes() {
m_config.m_enable_proofs = false;
m_trail.reset();
m_fmls.freeze_suffix();
@ -276,7 +276,7 @@ void elim_unconstrained::init_nodes() {
auto [f, p, d] = m_fmls[i]();
terms.push_back(f);
if (p)
m_config.m_enable_proofs = true;
}
m_heap.reset();
@ -303,7 +303,7 @@ void elim_unconstrained::init_nodes() {
for (expr* e : terms)
get_node(e).set_top();
m_inverter.set_produce_proofs(m_config.m_enable_proofs);
}
@ -422,6 +422,8 @@ void elim_unconstrained::update_model_trail(generic_model_converter& mc, vector<
}
void elim_unconstrained::reduce() {
if (!m_config.m_enabled)
return;
generic_model_converter_ref mc = alloc(generic_model_converter, m, "elim-unconstrained");
m_inverter.set_model_converter(mc.get());
m_created_compound = true;
@ -436,3 +438,8 @@ void elim_unconstrained::reduce() {
mc->reset();
}
}
void elim_unconstrained::updt_params(params_ref const& p) {
smt_params_helper sp(p);
m_config.m_enabled = sp.elim_unconstrained();
}

View file

@ -79,6 +79,10 @@ class elim_unconstrained : public dependent_expr_simplifier {
unsigned m_num_eliminated = 0;
void reset() { m_num_eliminated = 0; }
};
struct config {
bool m_enabled = true;
bool m_enable_proofs = false;
};
expr_inverter m_inverter;
ptr_vector<node> m_nodes;
var_lt m_lt;
@ -86,8 +90,8 @@ class elim_unconstrained : public dependent_expr_simplifier {
expr_ref_vector m_trail;
expr_ref_vector m_args;
stats m_stats;
config m_config;
bool m_created_compound = false;
bool is_var_lt(int v1, int v2) const;
node& get_node(unsigned n) const { return *m_nodes[n]; }
@ -119,4 +123,7 @@ public:
void collect_statistics(statistics& st) const override { st.update("elim-unconstrained", m_stats.m_num_eliminated); }
void reset_statistics() override { m_stats.reset(); }
void updt_params(params_ref const& p) override;
};

View file

@ -46,6 +46,7 @@ Outline of a presumably better scheme:
#include "ast/simplifiers/solve_context_eqs.h"
#include "ast/converters/generic_model_converter.h"
#include "params/tactic_params.hpp"
#include "params/smt_params_helper.hpp"
namespace euf {
@ -118,7 +119,10 @@ namespace euf {
SASSERT(j == var2id(v));
if (m_fmls.frozen(v))
continue;
if (!m_config.m_enable_non_ground && has_quantifiers(t))
continue;
bool is_safe = true;
unsigned todo_sz = todo.size();
@ -126,6 +130,8 @@ namespace euf {
// all time-stamps must be at or above current level
// unexplored variables that are part of substitution are appended to work list.
SASSERT(m_todo.empty());
m_todo.push_back(t);
expr_fast_mark1 visited;
while (!m_todo.empty()) {
@ -224,6 +230,9 @@ namespace euf {
void solve_eqs::reduce() {
if (!m_config.m_enabled)
return;
m_fmls.freeze_suffix();
for (extract_eq* ex : m_extract_plugins)
@ -330,6 +339,9 @@ namespace euf {
for (auto* ex : m_extract_plugins)
ex->updt_params(p);
m_rewriter.updt_params(p);
smt_params_helper sp(p);
m_config.m_enabled = sp.solve_eqs();
m_config.m_enable_non_ground = sp.solve_eqs_non_ground();
}
void solve_eqs::collect_param_descrs(param_descrs& r) {

View file

@ -41,6 +41,8 @@ namespace euf {
struct config {
bool m_context_solve = true;
unsigned m_max_occs = UINT_MAX;
bool m_enabled = true;
bool m_enable_non_ground = true;
};
stats m_stats;

View file

@ -386,7 +386,7 @@ public:
void change_basis(unsigned entering, unsigned leaving) {
TRACE(lar_solver, tout << "entering = " << entering << ", leaving = " << leaving << "\n";);
SASSERT(m_basis_heading[entering] < 0);
SASSERT(m_basis_heading[leaving] >= 0);
int place_in_basis = m_basis_heading[leaving];
int place_in_non_basis = - m_basis_heading[entering] - 1;
@ -568,17 +568,17 @@ public:
insert_column_into_inf_heap(j);
}
void insert_column_into_inf_heap(unsigned j) {
if (!m_inf_heap.contains(j)) {
m_inf_heap.reserve(j+1);
m_inf_heap.insert(j);
TRACE(lar_solver_inf_heap, tout << "insert into inf_heap j = " << j << "\n";);
}
SASSERT(!column_is_feasible(j));
}
void remove_column_from_inf_heap(unsigned j) {
if (m_inf_heap.contains(j)) {
TRACE(lar_solver_inf_heap, tout << "erase from heap j = " << j << "\n";);
m_inf_heap.erase(j);
}
SASSERT(column_is_feasible(j));
}

View file

@ -20,6 +20,7 @@ def_module_params(module_name='smt',
('delay_units_threshold', UINT, 32, 'maximum number of learned unit clauses before restarting, ignored if delay_units is false'),
('elim_unconstrained', BOOL, True, 'pre-processing: eliminate unconstrained subterms'),
('solve_eqs', BOOL, True, 'pre-processing: solve equalities'),
('solve_eqs.non_ground', BOOL, True, 'pre-processing: solve equalities. Allow eliminating variables by non-ground solutions which can break behavior for model evaluation.'),
('propagate_values', BOOL, True, 'pre-processing: propagate values'),
('bound_simplifier', BOOL, True, 'apply bounds simplification during pre-processing'),
('pull_nested_quantifiers', BOOL, False, 'pre-processing: pull nested quantifiers'),

View file

@ -198,6 +198,7 @@ namespace smt {
};
lit_node* m_dll_lits;
svector<std::array<double, 2>> m_lit_scores;
clause_vector m_aux_clauses;
clause_vector m_lemmas;
vector<clause_vector> m_clauses_to_reinit;

View file

@ -1545,6 +1545,7 @@ namespace smt {
auto new_score = m_lit_scores[v][0] * m_lit_scores[v][1];
m_pq_scores.set(v, new_score);
}
}
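The score update above keeps a per-phase activity in `m_lit_scores[v][0]` and `m_lit_scores[v][1]` and uses their product as the priority-queue key, so variables that look useful in *both* phases are preferred over ones useful in only one. A minimal sketch of that combination rule (an assumption for illustration; `combined_score` is a hypothetical name, not z3 API):

```cpp
#include <array>
#include <cassert>

// Priority-queue key for a Boolean variable: the product of its
// positive-phase and negative-phase scores. A variable with one dead
// phase (score 0) gets key 0, regardless of the other phase.
static double combined_score(const std::array<double, 2>& phase_scores) {
    return phase_scores[0] * phase_scores[1];
}
```

Compared with taking the maximum of the two phases, the product penalizes one-sided variables, which matches the intent of picking cube literals that split the search space evenly.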

View file

@ -92,6 +92,7 @@ namespace smt {
sl.push_child(&(new_m->limit()));
}
auto cube = [](context& ctx, expr_ref_vector& lasms, expr_ref& c) {
lookahead lh(ctx);
c = lh.choose();
@ -122,6 +123,7 @@ namespace smt {
expr* e = ctx.bool_var2expr(node.key);
if (!e) continue;
expr_ref lit(e, m);
conjuncts.push_back(lit);

View file

@ -439,7 +439,6 @@ final_check_status theory_seq::final_check_eh() {
}
bool theory_seq::set_empty(expr* x) {
add_axiom(~mk_eq(m_autil.mk_int(0), mk_len(x), false), mk_eq_empty(x));
return true;
@ -475,9 +474,8 @@ bool theory_seq::check_fixed_length(bool is_zero, bool check_long_strings) {
bool found = false;
for (unsigned i = 0; i < m_length.size(); ++i) {
expr* e = m_length.get(i);
if (fixed_length(e, is_zero, check_long_strings))
found = true;
}
return found;
}