
Add global backbones to parallel architecture (#9343)

* setting up new backbone experiment

* fix phase scores bug

* debug crash from negated atoms

* backbone thread/global backbones in progress, does NOT compile yet

* debug, still need to add backbones worker as a new thread

* setting up complicated condition variable thing for backbones worker thread

* debug

* debug lock contention

* it's a little messy, but change how I'm checking backbones by initiating with a batch check

* don't split on global backbones, share global backbones once detected. still need to prune search tree with backbones

* close global backbone branches in search tree

* fix backbone ranking (take average of bb age over cubes and incorporate hits/num cubes the bb appears in)

* add stats to backbone experiment

* gate the backbones experiment by local vs global

* update stats and fix bug about unsat core size=1 means global backbone

* phase negation ablation

* unforce phase ablation

* reset ablations

* add todo notes

* fix backbone aging

* first draft of Janota Alg 7

* process exactly 10 bb candidates in each batch

* fixing the Janota algorithm

* add backbone stats for Janota algorithm

* fix bug about global backbones not being checked unless local is also true

* hopefully fix bug about closing global backbones in search tree

* fix another bug in janota alg

* report random seed for debug

* print random seed for debug

* refactor janota alg code, still can't repro the crash

* fix some bugs in the janota algorithm

* try to fix weird memory leak thing with ramon/linux

* revert fix, it didn't work

* add second backbones thread

* increase chunk size when undef

* fix how the 2 backbone threads work on batches (they each race to finish the same batch). This was very complicated to code due to thread synchronization, and while it runs there may be bugs

* update how we report stats for backbones

* first draft of doing the bb threads in neg and pos mode, needs revising

* fix some bugs in the positive version of the bb check, still need to review

* debug some more things in the positive bb worker

* keep bb candidates sorted, increase batch and chunk size

* try to resolve a couple of bugs

* fix very bad bug about backbones workers not doing anything

* ablate positive backbone thread

* fix how we record backbones in positive mode (shouldn't impact previous run)

* clarify code about adding found backbones

* add back the positive bb thread

* try to fix the random segfault bug + ablate the positive bb thread again

* clean up logs

* share clauses with bb threads

* fixes

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* resolve deadlock

* add comment about SAT bb case

* todo comments

* complete TODOs in code, still need to debug bb threads

* debug bb threads, add bb_positive thread back in

* ablate bb_positive thread

* style

* configure num bb threads as param

* enable sat and unsat mode

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* updates

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove while true

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* updates

* try to fix rewriter_exception bug

* possibly reduce code under lock when only 1 bb thread

* add some copilot-suggested optimizations

* add copilot suggestions to fix condition variable synchronization with bb threads

* revert changes that are too messy with the code

* ablate collect clauses

* ablate condition variable logic changes

* ablate reset batch

* revert ablation

* remove m_batch_in_progress that makes the bb threads wait until both have exited the batch after one signals cancel (can be long if one is stuck in ctx check)

* sharing theory lemmas

* finish setup for search tree thread modes, and fix local bb setup to pull from the global pool

* variable renames

* update bb hyperparams after copilot (hopefully??) ran tuning experiments

* fix possible AST manager bug

* ablate collect clauses

* remove bb collect shared clauses

* fix local bb experiment bug and reinstate collect clauses for global bb

* local bb cands are thread-local ablation

* remove thread-local local bb ablation

* fix bug in nonthread-local bb experiment

* fix more nonthreadlocal bb bugs

* try to fix local bb bug

* AST manager mismatch bugfix

* attempt to fix another canonicalization bug

* try another bugfix

* try another bugfix

* try yet another bugfix

* thread local bb ablation

* ablate force phase

* ablate set activity

* undo ablations since apparently it's not forcing phase or boosting activities

* remove old experiments

* try guarding m_birthdate size

* try to fix several bugs including with m_birthdate initialization and how we're storing original phases

* one more bugfix

* remove local bb experiment after negative signals on experiments, and change bb ranking to VSIDS scores as opposed to phase

* select bb polarity based on phase, not VSIDS

* first attempt with Codex. Codex notes (a minimal sketch of the lease mechanism follows these notes):
What changed:

  - Each tree node now tracks:
      - active worker count
      - lease epoch
      - cancel epoch
  - get_cube() now hands each worker an explicit lease: (node, epoch, cancel_epoch).
  - try_split() and backtrack() now operate on that lease, and the batch manager releases the worker’s lease under the tree lock before mutating the
    node.
  - If another worker closes the leased node or subtree, the batch manager cancels only the workers whose current leased nodes are now closed.
  - Workers detect canceled leases after check(), reset their local cancel flag, abandon the stale lease, and continue instead of turning that into a
    global exception.
  - The “reopen immediately into the open queue” policy is preserved. I did not add a barrier waiting for all workers on a node to finish.
  - Active-worker accounting is now separate from the open/active/closed scheduling status, so reopening a node no longer erases the fact that other
    workers are still on it.

  I also updated search_tree bookkeeping so:

  - closure bumps node cancel/lease epochs
  - active-node counting uses actual active-worker presence, not just status == active
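
A minimal, self-contained C++ sketch of the lease bookkeeping described in these notes. The types are simplified stand-ins, not the actual Z3 classes; the real node_lease struct appears later in this diff.

    #include <atomic>

    struct tree_node {
        std::atomic<unsigned> active_workers{0}; // workers currently holding a lease on this node
        unsigned epoch = 0;                      // bumped on structural changes (split/close)
        unsigned cancel_epoch = 0;               // bumped when the node or its subtree is closed
    };

    struct tree_lease {
        tree_node* n = nullptr;
        unsigned epoch = 0;        // tree_node::epoch at acquisition time
        unsigned cancel_epoch = 0; // tree_node::cancel_epoch at acquisition time
    };

    // A lease is stale if the node was split/closed since it was handed out,
    // and canceled if the node's subtree was closed underneath the worker.
    inline bool is_stale(tree_lease const& l)    { return l.n && l.epoch != l.n->epoch; }
    inline bool is_canceled(tree_lease const& l) { return l.n && l.cancel_epoch != l.n->cancel_epoch; }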

* fix smts bugfix git merge issues with backtrack

* fix(parallel-smt): gate split/backtrack by lease epoch

What it changes (a sketch follows the list below):

  - util/search_tree.h
      - bumps node epoch on split
      - threads epoch through should_split(...) and try_split(...)
      - always records effort, but only split/reopen if the lease epoch still matches
  - smt/smt_parallel.cpp
      - requires is_lease_valid(..., lease.epoch) before backtrack(...)
      - passes lease.epoch into m_search_tree.try_split(...)
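
A hedged sketch of this gating, using simplified types rather than the real search_tree API: effort accounting is unconditional, but the structural split only happens when the lease's epoch still matches the node's.

    struct split_node  { unsigned epoch = 0; unsigned effort = 0; };
    struct split_lease { split_node* n = nullptr; unsigned epoch = 0; };

    bool try_split_sketch(split_lease const& l, unsigned effort) {
        if (!l.n)
            return false;
        l.n->effort += effort;          // effort is always recorded
        if (l.epoch != l.n->epoch)
            return false;               // stale lease: the node changed since it was handed out
        l.n->epoch++;                   // splitting is a structural change, so bump the epoch
        // ... create child cubes, reopen the node, etc. (omitted)
        return true;
    }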

* clean up code and add some comments

* fix bug about backtracking condition being too strict: The epoch guard should not block backtrack(...) the same way it blocks try_split(...). A stale worker that proves UNSAT for n should still be able to
  close n, and that closure should then cancel the other workers on n and its subtree.

  I changed smt/smt_parallel.cpp accordingly:

  - try_split(...) still uses epoch to reject stale structural splits
  - backtrack(...) no longer requires is_lease_valid(..., epoch); it only requires that the lease is not already canceled

  So the intended asymmetry is now restored (sketched after this list):

  - stale split: reject
  - stale unsat/backtrack: allow closure, then cancel affected workers
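
Illustrating the asymmetry with the same kind of simplified types (hypothetical helpers, not the real smt_parallel code): a split must hold a current epoch, while a closure only needs a lease that has not been canceled.

    struct close_node  { unsigned epoch = 0; unsigned cancel_epoch = 0; };
    struct close_lease { close_node* n = nullptr; unsigned epoch = 0; unsigned cancel_epoch = 0; };

    // stale structural split: rejected when the epoch no longer matches
    bool may_split(close_lease const& l) {
        return l.n && l.epoch == l.n->epoch;
    }

    // stale UNSAT/backtrack: allowed as long as the lease is not canceled;
    // the resulting closure then cancels everyone else still working under the node
    bool may_backtrack(close_lease const& l) {
        return l.n && l.cancel_epoch == l.n->cancel_epoch;
    }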

* ablate to no backtracking on stale leases

* fix merge

* revert codex change about exception handling

* fix linux bugs

* ablate backtrack gating

* attempt to fix linux crashes

* ablate backtracking on global bb

* the rare bb bug appears to be from creating the synthetic lease for a bb node and then backtracking on the synthetic lease. this is an attempt to fix it

* clean up code

* try to fix bug about active worker counts/lease accounting. The current policy should hold:
  - stale leases: release/decrement
  - canceled leases: do not release/decrement (just ignore, since we have an invariant that canceled leases mean closed nodes that are never revisited)
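
A small sketch of that release policy (hypothetical helper, simplified bookkeeping): only leases that are not canceled still count toward the node's active-worker total.

    struct acct_node  { unsigned active_workers = 0; unsigned cancel_epoch = 0; };
    struct acct_lease { acct_node* n = nullptr; unsigned cancel_epoch = 0; };

    void release_lease_sketch(acct_lease const& l) {
        if (!l.n)
            return;
        if (l.cancel_epoch != l.n->cancel_epoch)
            return;                     // canceled lease: the node is closed and never revisited
        if (l.n->active_workers > 0)
            --l.n->active_workers;      // stale or current lease: the worker was counted, so decrement
    }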

* delay premature root activation

* fix major semantic bug about threads continually choosing the root if their lease is reset

* fix cancellation to unknown status

* fix very bad bug about all threads needing to start at the root

* ablate active ranking: now nodes are only reopened if they are truly inactive (active worker count is 0)

* fix some bugs about leases

* ablate adding static effort only

* fix some bugs about leases

* don't explode effort for portfolio nodes

* fix: still accumulate per-node effort, but don't over-accumulate on portfolio solves

* restore dynamically scaled effort

* clean up merge from cherry pick

* tighten which nodes we detect for proven global bb closure (only detect nonclosed nodes)

* fix cancel to unknown exception on bb code

* lease cancellation doesn't touch rlimit now, it just sets max conflicts to 0. Also fix a VERY BAD BUG about effort never being updated until all leases are done on a node, which meant we never left the root

* cross-thread modification of max conflicts is unsafe, so create an atomic lease canceled variable that's checked in ctx where max conflicts is also checked
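
A sketch of that mechanism (hypothetical names, not the actual smt_context fields): the batch manager only flips an atomic flag, and the solver polls it at the same point where it already polls its conflict budget, so no cross-thread write to max conflicts is needed.

    #include <atomic>

    struct solver_budget {
        std::atomic<bool> lease_canceled{false};  // set by the batch manager from another thread
        unsigned num_conflicts = 0;
        unsigned max_conflicts = 1000;

        // checked at the same safe point as the conflict budget
        bool should_stop() const {
            return num_conflicts >= max_conflicts ||
                   lease_canceled.load(std::memory_order_relaxed);
        }
    };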

* move atomic lease check in the context to the more global get_cancel_flag function

* Fix new SIGSEGV. The root cause: get_cancel_flag() is called from within propagation loops (mid-BCP, mid-equality-propagation, mid-atom-propagation). When it returns true there, the solver exits early and leaves the context in an intermediate state:
  propagation queues partially processed, theory state potentially inconsistent with boolean state.

  For the global cancel (m.limit().cancel()), this is harmless: the worker exits entirely and the context is destroyed. Intermediate state doesn't matter.

  For a lease cancel, the context is reused — the worker gets a new cube and calls ctx->check() again on the same context object. Re-entering check() on a context interrupted mid-propagation causes it to access that corrupted intermediate
  state → SIGSEGV.

  The m_max_conflicts check is the only checkpoint that's safe for re-entry: it only fires post-conflict-resolution, pre-decision, when propagation queues are empty and theory state is consistent.

  Fix: Remove m_lease_canceled from get_cancel_flag(). Keep it only at safe, between-phase checkpoints where the context is in a known-consistent state. The result is two safe checkpoints for m_lease_canceled: after each conflict (post-resolution, queues empty) and before each theory final check (not yet entered the theory). Neither interrupts the solver mid-mutation. The SIGSEGV should be
   gone, and NIA performance should improve because long theory final checks (where NIA burns most time) are now preemptable before they start.
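
A sketch contrasting the two cancellation channels described above (simplified, hypothetical names): the global reslimit cancel may fire anywhere because the context is then thrown away, while the per-lease cancel is only honored at known-consistent checkpoints.

    #include <atomic>

    struct cancel_sketch {
        bool global_canceled = false;            // stands in for m.limit().is_canceled()
        std::atomic<bool> lease_canceled{false};

        // polled from inside propagation loops; must NOT look at lease_canceled,
        // because bailing out mid-propagation leaves state a reused context cannot survive
        bool get_cancel_flag() const { return global_canceled; }

        // polled only between phases (post-conflict-resolution, pre-final-check),
        // where propagation queues are empty and theory state is consistent
        bool lease_checkpoint() const { return lease_canceled.load(); }
    };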

* fix new inconsistent theory bug: The problem is returning FC_GIVEUP from inside final_check() after some theories have already run final_check_eh() and pushed propagations into the queue. Those pending propagations reference context state that gets invalidated on the next check() call → SIGSEGV.

  The fix: check m_lease_canceled before entering final_check() in bounded_search(), never from inside it. That way the context is always in a clean pre-final-check state when we bail out. This is safe: decide() returned false (all variables assigned, no pending propagations), theories haven't been touched yet, context is in a fully consistent state.

  For NIA, this is still a meaningful win: we avoid entering expensive arithmetic final checks entirely when the lease is already canceled.
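
A hedged sketch of that control flow (hypothetical names, not the real bounded_search): the lease flag is consulted before entering the theory final check, never from inside it, so theories are either run to completion or not touched at all.

    #include <atomic>

    enum class sres { sat, unsat, unknown };

    std::atomic<bool> g_lease_canceled{false};        // set by the batch manager

    sres final_check_sketch() { return sres::sat; }   // placeholder for the expensive theory final check

    sres bounded_search_sketch() {
        for (;;) {
            // ... propagate, resolve conflicts, decide ... (omitted)
            bool all_assigned = true;                 // stands in for decide() returning false
            if (all_assigned) {
                if (g_lease_canceled.load())
                    return sres::unknown;             // bail out in a clean pre-final-check state
                return final_check_sketch();          // only entered while the lease is still live
            }
        }
    }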

* ablate lease cancel check in ctx final theory check due to crash (??)

* gate bb-specific code behind param

* try some possible bugfixes for the sigsegv

* ablate some bugfixes

* remove second lease cancel check in smt_context, not sure it's safe. only check where we do the max conflicts check

* restore exception handling logic to master branch

* restore reslimit cancels since the bug appears to be latent

* add bookkeeping for race condition of multiple lease cancels on a single node (messes with reslimit)

* restore unrelated code to master

---------

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MacBook-Pro.local>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.lan1>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Ilana Shapiro <ilanashapiro@Mac.localdomain>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.localdomain>
Co-authored-by: Ilana Shapiro <ilanashapiro@Mac.lan1>
Ilana Shapiro 2026-04-20 09:22:07 -07:00 committed by GitHub
parent 75039d631c
commit 19e95f40af
9 changed files with 750 additions and 33 deletions

View file

@ -4,4 +4,5 @@ def_module_params('smt_parallel',
params=(
('inprocessing', BOOL, False, 'integrate in-processing as a heuristic simplification'),
('sls', BOOL, False, 'add sls-tactic as a separate worker thread outside the search tree parallelism'),
('num_global_bb_threads', UINT, 0, 'default is 0 (off), can configure to 1 (negative mode only) or 2 (negative and positive mode) separate worker thread(s) outside the search tree parallelism'),
))

View file

@ -288,8 +288,13 @@ namespace smt {
if (!decision && d.m_phase == l.sign())
m_agility += (1.0 - m_fparams.m_agility_factor);
}
bool new_phase = !l.sign();
m_stats.m_num_assignments++;
if (d.m_phase_available && d.m_phase != new_phase)
m_birthdate[l.var()] = m_stats.m_num_assignments; // reset birthdate when phase changes
d.m_phase_available = true;
d.m_phase = !l.sign();
d.m_phase = new_phase;
TRACE(assign_core, tout << (decision?"decision: ":"propagating: ") << l << " ";
display_literal_smt2(tout, l) << "\n";
tout << "relevant: " << is_relevant_core(l) << " level: " << m_scope_lvl << " is atom " << d.is_atom() << "\n";

View file

@ -138,6 +138,7 @@ namespace smt {
scoped_ptr<base_dependent_expr_state> m_fmls;
svector<double> m_lit_scores[2];
svector<unsigned> m_birthdate;
// -----------------------------------

View file

@ -423,6 +423,7 @@ namespace smt {
st.update("minimized lits", m_stats.m_num_minimized_lits);
st.update("num checks", m_stats.m_num_checks);
st.update("mk bool var", m_stats.m_num_mk_bool_var ? m_stats.m_num_mk_bool_var - 1 : 0);
st.update("random seed", m_fparams.m_random_seed);
m_qmanager->collect_statistics(st);
m_asserted_formulas.collect_statistics(st);
for (theory* th : m_theory_set) {

View file

@ -935,6 +935,8 @@ namespace smt {
m_lit_scores[0].reserve(v + 1);
m_lit_scores[1].reserve(v + 1);
m_lit_scores[0][v] = m_lit_scores[1][v] = 0.0;
m_birthdate.reserve(v+1);
m_birthdate[v] = 0;
literal l(v, false);
literal not_l(v, true);
@ -1884,4 +1886,3 @@ namespace smt {
SASSERT(th->is_attached_to_var(n));
}
};

View file

@ -29,6 +29,7 @@ Author:
#include <cmath>
#include <mutex>
#include <condition_variable>
class bounded_pp_exprs {
expr_ref_vector const &es;
@ -61,6 +62,7 @@ namespace smt {
#include <thread>
#define LOG_WORKER(lvl, s) IF_VERBOSE(lvl, verbose_stream() << "Worker " << id << s)
#define LOG_BB_WORKER(lvl, s) IF_VERBOSE(lvl, verbose_stream() << "Backbones Worker " << id << s)
namespace smt {
@ -107,6 +109,292 @@ namespace smt {
}
}
void parallel::backbones_worker::run() {
bb_candidates bb_candidates;
while (m.inc()) {
if (!b.wait_for_backbone_job(id, m_g2l, bb_candidates, m.limit()))
return;
if (bb_candidates.empty())
continue;
LOG_BB_WORKER(1, " received batch of " << bb_candidates.size() << " candidates\n");
unsigned local_cancel_epoch = b.get_cancel_epoch();
auto canceled = [&] { return local_cancel_epoch != b.get_cancel_epoch(); };
auto fallback_singletons = [&](expr_ref_vector const& chunk_lits, expr_ref_vector& bb_candidate_lits) {
m_stats.m_fallback_singleton_checks++;
for (expr* c : chunk_lits) {
if (!bb_candidate_lits.contains(c)) continue; // already handled in singleton core → backbone case below
expr_ref bb_ref(c, m);
if (!m.inc() || canceled())
return;
if (m_mode == bb_mode::bb_positive)
bb_ref = mk_not(bb_ref); // F ∧ U since check_backbone flips it back
if (!b.is_global_backbone(m_l2g, bb_ref) && check_backbone(bb_ref)) {
m_stats.m_backbones_detected++;
LOG_BB_WORKER(1, " fallback found backbone: " << mk_bounded_pp(bb_ref.get(), m, 3) << "\n");
bool is_new_bb = b.collect_global_backbone(m_l2g, bb_ref);
if (is_new_bb) {
m_stats.m_backbones_found++;
ctx->assert_expr(bb_ref.get()); // since bb workers don't collect clauses they themselves shared
}
}
bb_candidate_lits.erase(c);
}
};
m_stats.m_batches_total++;
m_stats.m_candidates_total += bb_candidates.size();
expr_ref_vector bb_candidate_lits(m);
for (auto const& c : bb_candidates)
bb_candidate_lits.push_back(c.lit);
unsigned chunk_delta = 1;
// in mode bb_neg this is Algorithm 7 from https://sat.inesc-id.pt/~mikolas/bb-aicom-preprint.pdf
while (!bb_candidate_lits.empty() && !canceled() && m.inc()) {
// remove candidates that the other backbone thread found to be backbones
if (m_num_global_bb_threads > 1) {
for (unsigned i = 0; i < bb_candidate_lits.size();) {
expr* tmp = bb_candidate_lits.get(i);
if (b.is_global_backbone(m_l2g, tmp))
bb_candidate_lits.erase(i);
else
++i;
}
}
unsigned chunk_size = std::min(m_bb_chunk_size * chunk_delta, bb_candidate_lits.size());
expr_ref_vector chunk_lits(m);
expr_ref_vector negated_chunk_lits(m);
for (unsigned i = 0; i < chunk_size; ++i) {
expr *e = bb_candidate_lits.get(i);
chunk_lits.push_back(e);
negated_chunk_lits.push_back(m.mk_not(e));
}
expr_ref_vector bb_asms(m);
if (m_mode == bb_mode::bb_negated)
bb_asms.append(negated_chunk_lits); // F ∧ ¬U
else
bb_asms.append(chunk_lits); // F ∧ U
collect_shared_clauses();
while (true) {
if (!m.inc())
return;
if (canceled())
break;
m_stats.m_core_refinement_rounds++;
unsigned base_asms_sz = asms.size();
for (expr* a : bb_asms)
asms.push_back(a);
lbool r = l_undef;
try {
r = ctx->check(asms.size(), asms.data());
} catch (z3_error &err) {
if (!m.limit().is_canceled())
b.set_exception(err.error_code());
} catch (z3_exception &ex) {
if (!m.limit().is_canceled() && !is_cancellation_exception(ex.what()))
b.set_exception(ex.what());
} catch (...) {
if (!m.limit().is_canceled())
b.set_exception("unknown exception");
}
asms.shrink(base_asms_sz);
if (!m.inc())
return;
if (canceled())
break;
if (r == l_undef) {
LOG_BB_WORKER(1, " UNDEF at chunk_size=" << chunk_size << "\n");
if (chunk_size < bb_candidate_lits.size()) {
chunk_delta++; // try again with a bigger chunk
m_stats.m_num_chunk_increases++;
break;
}
LOG_BB_WORKER(1, " UNDEF and max chunk → fallback\n");
fallback_singletons(chunk_lits, bb_candidate_lits);
m_stats.m_fallback_reason_undef++;
chunk_delta = 1;
break;
}
if (r == l_true) {
LOG_BB_WORKER(1, " batch check returned SAT, thus entire formula is SAT\n");
model_ref mdl;
ctx->get_model(mdl);
b.set_sat(m_l2g, *mdl);
bb_candidates.reset();
return;
}
// ----- UNSAT: inspect core -----
expr_ref_vector bb_asms_in_core(m);
auto const& unsat_core = ctx->unsat_core();
for (expr* a : unsat_core)
if (bb_asms.contains(a))
bb_asms_in_core.push_back(a);
// ---- empty core intersection → formula is UNSAT independent of backbone assumptions ----
if (bb_asms_in_core.empty()) {
expr_ref_vector core_vec(m);
for (expr* e : unsat_core)
core_vec.push_back(e);
b.set_unsat(m_l2g, core_vec);
return;
}
// ---- singleton core → backbone ----
if (bb_asms_in_core.size() == 1) {
expr* a = bb_asms_in_core.back();
expr_ref backbone_lit(m.mk_not(a), m);
m_stats.m_singleton_backbones++;
m_stats.m_backbones_detected++;
bool is_new_bb = b.collect_global_backbone(m_l2g, backbone_lit);
if (is_new_bb) {
m_stats.m_backbones_found++;
ctx->assert_expr(backbone_lit.get()); // since bb workers don't collect clauses they themselves shared
}
expr* candidate_to_remove =
(m_mode == bb_mode::bb_negated)
? backbone_lit.get() // since core contains ¬candidates in negated mode
: a; // since core contains candidates in positive mode
bb_candidate_lits.erase(candidate_to_remove);
}
unsigned sz_before = bb_asms.size();
for (expr* a : bb_asms_in_core)
bb_asms.erase(a);
m_stats.m_lits_removed_by_core += sz_before - bb_asms.size();
chunk_delta = 1;
if (bb_asms.empty()) {
LOG_BB_WORKER(1, " no more negated chunk literals, fallback to individual checks\n");
fallback_singletons(chunk_lits, bb_candidate_lits);
m_stats.m_fallback_reason_chunk_exhausted++;
break;
}
}
}
if (!canceled()) {
b.cancel_current_backbone_batch();
}
bb_candidates.reset();
}
}
void parallel::backbones_worker::cancel() {
LOG_BB_WORKER(1, " BACKBONES WORKER cancelling\n");
m.limit().cancel();
}
bool parallel::batch_manager::collect_global_backbone(ast_translation &l2g, expr_ref const &backbone) {
IF_VERBOSE(1, verbose_stream() << "collect-global-backbone\n");
std::scoped_lock lock(mux);
SASSERT(&m == &l2g.to());
if (is_global_backbone_unlocked(l2g, backbone))
return false;
expr_ref g_bb_ref(l2g(backbone.get()), m);
m_global_backbones.push_back(g_bb_ref);
IF_VERBOSE(1, verbose_stream() << " Found and sharing new global backbone: " << mk_bounded_pp(g_bb_ref, m, 3) << "\n");
collect_clause_unlocked(l2g, /*source_worker_id=*/UINT_MAX, backbone.get());
expr_ref neg_g_bb_ref(mk_not(g_bb_ref), m);
ptr_vector<node> matches;
m_search_tree.find_nonclosed_nodes_with_literal(neg_g_bb_ref, matches);
if (!matches.empty()) {
IF_VERBOSE(1, verbose_stream() << " Closing negation of the new global backbone: " << mk_bounded_pp(g_bb_ref, m, 3) << "\n");
expr_ref_vector l_core(l2g.from());
l_core.push_back(mk_not(backbone));
vector<node_lease> targets;
for (node* t : matches) {
if (!t || m_search_tree.is_lease_canceled(t, t->get_cancel_epoch()))
continue;
// Keep only highest matching nodes: if an ancestor is already selected,
// closing it will also close this descendant subtree.
bool is_highest_ancestor = true;
for (node* p = t->parent(); p; p = p->parent()) {
if (any_of(targets, [&](node_lease const& target) { return target.node == p; })) {
is_highest_ancestor = false;
break;
}
}
if (!is_highest_ancestor)
continue;
targets.push_back({ t, t->epoch(), t->get_cancel_epoch() });
}
if (!targets.empty())
backtrack_unlocked(l2g, UINT_MAX, l_core, nullptr, &targets);
}
return true;
}
void parallel::backbones_worker::collect_statistics(::statistics& st) const {
st.update("bb-batches-total", m_stats.m_batches_total);
st.update("bb-candidates-total", m_stats.m_candidates_total);
st.update("bb-backbones-detected", m_stats.m_backbones_detected);
st.update("bb-backbones-found", m_stats.m_backbones_found);
st.update("bb-core-refinement-rounds", m_stats.m_core_refinement_rounds);
st.update("bb-singleton-backbones", m_stats.m_singleton_backbones);
st.update("bb-fallback-singleton-checks", m_stats.m_fallback_singleton_checks);
st.update("bb-fallback-chunk-exhausted", m_stats.m_fallback_reason_chunk_exhausted);
st.update("bb-fallback-undef", m_stats.m_fallback_reason_undef);
st.update("bb-literals-removed-by-core", m_stats.m_lits_removed_by_core);
st.update("bb-num-chunk-increases", m_stats.m_num_chunk_increases);
double backbone_yield = 0.0;
if (m_stats.m_candidates_total > 0)
backbone_yield = 100.0 * (double)m_stats.m_backbones_found / (double)m_stats.m_candidates_total;
double avg_backbones_per_batch = 0.0;
if (m_stats.m_batches_total > 0)
avg_backbones_per_batch = (double)m_stats.m_backbones_found / (double)m_stats.m_batches_total;
double core_refinement_cost = 0.0;
if (m_stats.m_batches_total > 0)
core_refinement_cost = (double)m_stats.m_core_refinement_rounds / (double)m_stats.m_batches_total;
double core_effectiveness = 0.0;
if (m_stats.m_core_refinement_rounds > 0)
core_effectiveness = (double)m_stats.m_lits_removed_by_core / (double)m_stats.m_core_refinement_rounds;
st.update("bb-backbone-yield-pct", backbone_yield);
st.update("bb-avg-backbones-per-batch", avg_backbones_per_batch);
st.update("bb-core-refinement-rounds-per-batch", core_refinement_cost);
st.update("bb-core-effectiveness-lit-removed-per-round", core_effectiveness);
}
void parallel::sls_worker::cancel() {
IF_VERBOSE(1, verbose_stream() << " SLS WORKER cancelling\n");
m.limit().cancel();
@ -131,6 +419,14 @@ namespace smt {
check_cube_start:
LOG_WORKER(1, " CUBE SIZE IN MAIN LOOP: " << cube.size() << "\n");
if (m_config.m_global_backbones) {
bb_candidates local_candidates = find_backbone_candidates();
b.collect_backbone_candidates(m_l2g, local_candidates);
if (!m.inc())
return;
}
lbool r = check_cube(cube);
if (b.lease_canceled(lease)) {
@ -210,6 +506,7 @@ namespace smt {
smt_parallel_params pp(p.ctx.m_params);
m_config.m_inprocessing = pp.inprocessing();
m_config.m_global_backbones = pp.num_global_bb_threads() > 0;
}
parallel::sls_worker::sls_worker(parallel& p)
@ -219,6 +516,101 @@ namespace smt {
m_sls = alloc(sls::smt_solver, m, m_params);
}
parallel::backbones_worker::backbones_worker(unsigned id, parallel &p, expr_ref_vector const &_asms)
: id(id), b(p.m_batch_manager), m(), asms(m), m_smt_params(p.ctx.get_fparams()), m_g2l(p.ctx.m, m), m_l2g(m, p.ctx.m) {
for (auto e : _asms)
asms.push_back(m_g2l(e));
IF_VERBOSE(1, verbose_stream() << "Initialized backbones thread " << id << "\n");
m_mode = id == 0 ? bb_mode::bb_negated : bb_mode::bb_positive;
ctx = alloc(context, m, m_smt_params, p.ctx.get_params());
ctx->set_logic(p.ctx.m_setup.get_logic());
ctx->get_fparams().m_max_conflicts = m_bb_conflicts_per_chunk;
context::copy(p.ctx, *ctx, true);
smt_parallel_params pp(p.ctx.m_params);
m_num_global_bb_threads = pp.num_global_bb_threads();
}
parallel::bb_candidates parallel::worker::find_backbone_candidates(unsigned k) {
bb_candidates backbone_candidates;
expr_ref candidate(m);
unsigned curr_time = ctx->m_stats.m_num_assignments;
for (bool_var v = 0; v < ctx->get_num_bool_vars(); ++v) {
if (ctx->get_assignment(v) != l_undef && ctx->get_assign_level(v) == ctx->m_base_lvl)
continue;
candidate = ctx->bool_var2expr(v);
if (!candidate)
continue;
auto birth = ctx->m_birthdate[v];
auto age = curr_time - birth;
auto const& d = ctx->get_bdata(v);
if (d.m_phase_available && !d.m_phase)
candidate = m.mk_not(candidate);
if (b.is_global_backbone(m_l2g, candidate))
continue;
bb_candidate bb_cand(m, candidate, age, 1);
backbone_candidates.push_back(bb_cand);
}
// sort from oldest to youngest
std::stable_sort(
backbone_candidates.begin(),
backbone_candidates.end(),
[](bb_candidate const& a, bb_candidate const& b) {
return a.age > b.age;
}
);
// take top-k oldest
if (backbone_candidates.size() > k)
backbone_candidates.shrink(k);
return backbone_candidates;
}
//
// Assume the negation of all candidates (or a batch of them)
// run the solver with a low budget of conflicts
// if the unsat core contains a single candidate we have found a backbone literal
//
bool parallel::backbones_worker::check_backbone(expr* bb_candidate) {
unsigned sz = asms.size();
asms.push_back(m.mk_not(bb_candidate));
lbool r = l_undef;
try {
r = ctx->check(asms.size(), asms.data());
} catch (z3_error &err) {
if (!m.limit().is_canceled())
b.set_exception(err.error_code());
} catch (z3_exception &ex) {
if (!m.limit().is_canceled() && !is_cancellation_exception(ex.what()))
b.set_exception(ex.what());
} catch (...) {
if (!m.limit().is_canceled())
b.set_exception("unknown exception");
}
asms.shrink(sz);
LOG_BB_WORKER(1, " RESULT: " << r << " FOR CANDIDATE: " << mk_bounded_pp(bb_candidate, m, 3) << "\n");
if (r == l_false) {
auto core = ctx->unsat_core();
if (core.size() == 1) {
return true;
}
}
return false;
}
void parallel::worker::share_units() {
// Collect new units learned locally by this worker and send to batch manager
@ -243,7 +635,7 @@ namespace smt {
continue;
if (lit.sign())
e = m.mk_not(e); // negate if literal is negative
e = mk_not(e); // negate if literal is negative
b.collect_clause(m_l2g, id, e);
}
m_num_shared_units = sz;
@ -351,32 +743,56 @@ namespace smt {
continue;
auto const& lease = m_worker_leases[worker_id];
// only cancel workers that currently hold a lease, and whose lease is canceled
if (lease.node && m_search_tree.is_lease_canceled(lease.node, lease.cancel_epoch))
// only cancel workers that currently hold a lease, whose lease is canceled,
// and haven't already been signaled (prevents multiple inc_cancel() for same lease)
if (lease.node && !lease.cancel_signaled && m_search_tree.is_lease_canceled(lease.node, lease.cancel_epoch)) {
p.m_workers[worker_id]->cancel_lease();
m_worker_leases[worker_id].cancel_signaled = true;
}
}
}
void parallel::batch_manager::backtrack(ast_translation &l2g, unsigned worker_id, expr_ref_vector const &core,
node_lease const &lease) {
std::scoped_lock lock(mux);
backtrack_unlocked(l2g, worker_id, core, &lease, nullptr);
}
void parallel::batch_manager::backtrack_unlocked(ast_translation& l2g, unsigned worker_id, expr_ref_vector const& core,
node_lease const* lease, vector<node_lease> const* targets) {
if (m_state != state::is_running)
return;
// we close/backtrack regardless of whether this lease is stale or not, as long as the lease isn't canceled
// i.e. worker 1 splits this node, but then worker 2 determines UNSAT --> worker 2 is stale but we still close this node and backtrack
if (m_search_tree.is_lease_canceled(lease.node, lease.cancel_epoch))
return;
IF_VERBOSE(1, verbose_stream() << "Batch manager backtracking.\n");
release_lease_unlocked(worker_id, lease.node, lease.epoch);
vector<cube_config::literal> g_core;
for (auto c : core)
g_core.push_back(expr_ref(l2g(c), m));
m_search_tree.backtrack(lease.node, g_core);
SASSERT((lease != nullptr) != (targets != nullptr));
if (lease) {
// we close/backtrack regardless of whether this lease is stale or not, as long as the lease isn't canceled
// i.e. worker 1 splits this node, but then worker 2 determines UNSAT --> worker 2 is stale but we still close this node and backtrack
if (m_search_tree.is_lease_canceled(lease->node, lease->cancel_epoch))
return;
IF_VERBOSE(1, verbose_stream() << "Batch manager backtracking.\n");
release_lease_unlocked(worker_id, lease->node, lease->epoch);
m_search_tree.backtrack(lease->node, g_core);
}
else {
bool has_open_targets = false;
for (auto const& target : *targets) {
if (m_search_tree.is_lease_canceled(target.node, target.cancel_epoch))
continue;
has_open_targets = true;
IF_VERBOSE(1, verbose_stream() << "Batch manager backtracking external targets.\n");
m_search_tree.backtrack(target.node, g_core);
}
if (!has_open_targets)
return;
}
// terminate on-demand the workers that are currently exploring the now-closed nodes
cancel_closed_leases_unlocked(worker_id);
@ -428,6 +844,10 @@ namespace smt {
void parallel::batch_manager::collect_clause(ast_translation &l2g, unsigned source_worker_id, expr *clause) {
std::scoped_lock lock(mux);
collect_clause_unlocked(l2g, source_worker_id, clause);
}
void parallel::batch_manager::collect_clause_unlocked(ast_translation &l2g, unsigned source_worker_id, expr *clause) {
expr *g_clause = l2g(clause);
if (!shared_clause_set.contains(g_clause)) {
shared_clause_set.insert(g_clause);
@ -445,6 +865,138 @@ namespace smt {
}
}
void parallel::backbones_worker::collect_shared_clauses() {
expr_ref_vector new_clauses = b.return_shared_clauses(m_g2l, m_shared_clause_limit, UINT_MAX);
// iterate over new clauses and assert them in the local context
for (expr *e : new_clauses) {
ctx->assert_expr(e);
LOG_BB_WORKER(4, " asserting shared clause: " << mk_bounded_pp(e, m, 3) << "\n");
}
}
void parallel::batch_manager::collect_backbone_candidates(ast_translation& l2g, bb_candidates& bb_candidates) {
std::scoped_lock lock(mux);
auto find_existing_candidate_idx = [&](expr* e) -> int {
for (unsigned i = 0; i < m_bb_candidates.size(); ++i) {
if (m_bb_candidates[i].lit.get() == e)
return i;
}
return -1;
};
auto rank_of = [&](bb_candidate const& c) {
return c.age * std::log2(2.0 + c.hits);
};
for (auto const& c : bb_candidates) {
if (is_global_backbone_unlocked(l2g, c.lit))
continue;
expr* worker_lit = c.lit.get();
expr_ref g_lit(l2g(worker_lit), m);
double age = c.age;
int idx = find_existing_candidate_idx(g_lit.get());
if (idx >= 0) {
auto& existing = m_bb_candidates[idx];
existing.age = (existing.age * existing.hits + age) / (existing.hits + 1);
existing.hits++;
continue;
}
if (m_bb_candidates.size() < m_max_global_bb_candidates) {
m_bb_candidates.push_back(bb_candidate(m, g_lit.get(), age, 1));
continue;
}
// Find worst candidate to evict
unsigned worst_idx = 0;
double worst_rank = rank_of(m_bb_candidates[0]);
for (unsigned i = 1; i < m_bb_candidates.size(); ++i) {
double r = rank_of(m_bb_candidates[i]);
if (r < worst_rank) {
worst_rank = r;
worst_idx = i;
}
}
bb_candidate new_bb_candidate = bb_candidate(m, g_lit.get(), age, 1);
if (rank_of(new_bb_candidate) > worst_rank)
m_bb_candidates[worst_idx] = new_bb_candidate;
}
if (!m_bb_candidates.empty()) {
std::stable_sort(
m_bb_candidates.begin(),
m_bb_candidates.end(),
[&](bb_candidate const& a, bb_candidate const& b) {
return rank_of(a) > rank_of(b);
}
);
m_bb_cv.notify_all();
}
}
parallel::bb_candidates parallel::batch_manager::return_global_bb_candidates(ast_translation& g2l) {
bb_candidates bb_candidates_local;
std::scoped_lock lock(mux);
for (auto const& gc : m_bb_candidates) {
expr_ref l_lit(g2l(gc.lit.get()), g2l.to());
bb_candidates_local.push_back(bb_candidate(g2l.to(), l_lit, gc.age, gc.hits));
}
return bb_candidates_local;
}
bool parallel::batch_manager::wait_for_backbone_job(unsigned bb_thread_id, ast_translation& g2l, bb_candidates& out, reslimit& lim) {
out.reset();
std::unique_lock<std::mutex> lock(mux);
// ---- WAIT UNTIL:
// (a) a new batch is ready that this thread hasn't seen yet, OR
// (b) candidates are available AND the previous batch is finished (not in progress)
m_bb_cv.wait(lock, [&]() {
return lim.is_canceled() ||
m_state != state::is_running ||
m_bb_last_batch_processed[bb_thread_id] < m_bb_batch_id ||
!m_bb_candidates.empty();
});
if (lim.is_canceled())
return false;
if (m_state != state::is_running)
return false;
// ---- NEED NEW BATCH? ----
// Only create a new batch if this thread has already seen the current batch.
if (m_bb_last_batch_processed[bb_thread_id] == m_bb_batch_id) {
// pop new batch once
unsigned n = std::min<unsigned>(m_bb_batch_size, m_bb_candidates.size());
m_bb_current_batch.reset();
for (unsigned i = 0; i < n; ++i) {
m_bb_current_batch.push_back(m_bb_candidates.back());
m_bb_candidates.pop_back();
}
m_bb_batch_id++;
// wake all threads to see new batch
m_bb_cv.notify_all();
}
for (auto const& gc : m_bb_current_batch) {
expr_ref l_lit(g2l(gc.lit.get()), g2l.to());
out.push_back(bb_candidate(g2l.to(), l_lit, gc.age, gc.hits));
}
m_bb_last_batch_processed[bb_thread_id] = m_bb_batch_id;
return true;
}
expr_ref_vector parallel::batch_manager::return_shared_clauses(ast_translation &g2l, unsigned &worker_limit,
unsigned worker_id) {
std::scoped_lock lock(mux);
@ -494,6 +1046,14 @@ namespace smt {
if (!e)
continue;
// don't split on a backbone
if (m_config.m_global_backbones) {
expr_ref e_ref(e, m);
expr_ref neg_e_ref(mk_not(e_ref), m);
if (b.is_global_backbone(m_l2g, e_ref) || b.is_global_backbone(m_l2g, neg_e_ref))
continue;
}
double new_score = ctx->m_lit_scores[0][v] * ctx->m_lit_scores[1][v];
ctx->m_lit_scores[0][v] /= 2;
@ -611,10 +1171,17 @@ namespace smt {
return true;
}
void parallel::batch_manager::initialize(unsigned initial_max_thread_conflicts) {
void parallel::batch_manager::initialize(unsigned num_global_bb_threads, unsigned initial_max_thread_conflicts) {
m_state = state::is_running;
m_num_global_bb_threads = num_global_bb_threads;
m_bb_last_batch_processed.reset();
m_bb_last_batch_processed.resize(m_num_global_bb_threads);
m_bb_candidates.reset();
m_search_tree.reset();
m_search_tree.set_effort_unit(initial_max_thread_conflicts);
m_worker_leases.reset();
m_worker_leases.resize(p.m_workers.size());
}
@ -627,8 +1194,9 @@ namespace smt {
lbool parallel::operator()(expr_ref_vector const &asms) {
smt_parallel_params pp(ctx.m_params);
unsigned num_workers = std::min((unsigned)std::thread::hardware_concurrency(), ctx.get_fparams().m_threads);
unsigned num_sls_threads = (pp.sls() ? 1 : 0);
unsigned total_threads = num_workers + num_sls_threads;
unsigned num_sls_threads = (pp.sls() ? 1 : 0);
unsigned num_global_bb_threads = pp.num_global_bb_threads();
unsigned total_threads = num_workers + num_sls_threads + num_global_bb_threads;
IF_VERBOSE(1, verbose_stream() << "Parallel SMT with " << total_threads << " threads\n";);
ast_manager &m = ctx.m;
@ -642,6 +1210,7 @@ namespace smt {
~scoped_clear() {
p.m_workers.reset();
p.m_sls_worker = nullptr;
p.m_global_backbones_workers.reset();
}
};
scoped_clear clear(*this);
@ -660,18 +1229,28 @@ namespace smt {
m_sls_worker = alloc(sls_worker, *this);
sl.push_child(&(m_sls_worker->limit()));
}
for (unsigned i = 0; i < num_global_bb_threads; ++i) {
auto *w = alloc(backbones_worker, i, *this, asms);
m_global_backbones_workers.push_back(w);
sl.push_child(&(w->limit()));
}
IF_VERBOSE(1, verbose_stream() << "Launched " << m_workers.size() << " CDCL threads, "
<< (m_sls_worker ? 1 : 0) << " SLS threads\n";);
<< (m_sls_worker ? 1 : 0) << " SLS threads, "
<< m_global_backbones_workers.size() << " global backbone threads.\n";);
m_batch_manager.initialize();
m_batch_manager.initialize(num_global_bb_threads);
// Launch threads
vector<std::thread> threads;
threads.resize(total_threads);
unsigned thread_idx = 0;
for (auto* w : m_workers)
threads[thread_idx++] = std::thread([&, w]() { w->run(); });
for (auto* w : m_workers)
threads[thread_idx++] = std::thread([&, w]() { w->run(); });
if (m_sls_worker)
threads[thread_idx++] = std::thread([&]() { m_sls_worker->run(); });
for (auto* w : m_global_backbones_workers)
threads[thread_idx++] = std::thread([&, w]() { w->run(); });
// Wait for all threads to finish
for (auto &th : threads)
@ -682,6 +1261,8 @@ namespace smt {
m_batch_manager.collect_statistics(ctx.m_aux_stats);
if (m_sls_worker)
m_sls_worker->collect_statistics(ctx.m_aux_stats);
for (auto* bb_w : m_global_backbones_workers)
bb_w->collect_statistics(ctx.m_aux_stats);
return m_batch_manager.get_result();
}

View file

@ -23,6 +23,7 @@ Revision History:
#include "ast/sls/sls_smt_solver.h"
#include <thread>
#include <mutex>
#include <condition_variable>
namespace smt {
@ -41,18 +42,31 @@ namespace smt {
expr_ref clause;
};
struct bb_candidate {
expr_ref lit;
double age;
unsigned hits; // how many cubes reported it
bb_candidate(ast_manager& m, expr* e, double s, unsigned h) : lit(e, m), age(s), hits(h) {}
};
using bb_candidates = vector<bb_candidate>;
struct node_lease {
search_tree::node<cube_config>* node = nullptr;
// Version counter for structural mutations of this node (e.g., split/close).
// Used to detect stale leases: if a worker's lease.epoch != node.epoch,
// the node has changed since it was acquired and must not be mutated.
unsigned epoch = 0;
unsigned epoch = 0;
// Cancellation generation counter for this node/subtree.
// Incremented when the node is closed; used to signal that all
// workers holding leases on this node (or its descendants)
// must abandon work immediately.
unsigned cancel_epoch = 0;
unsigned cancel_epoch = 0;
// Guards against multiple inc_cancel() calls for the same lease.
// Set when cancel_lease() is signaled; cleared when a new lease is assigned.
bool cancel_signaled = false;
};
class batch_manager {
@ -69,8 +83,6 @@ namespace smt {
unsigned m_max_cube_depth = 0;
unsigned m_num_cubes = 0;
};
ast_manager& m;
parallel& p;
std::mutex mux;
@ -85,6 +97,19 @@ namespace smt {
vector<shared_clause> shared_clause_trail; // store all shared clauses with worker IDs
obj_hashtable<expr> shared_clause_set; // for duplicate filtering on per-thread clause expressions
bb_candidates m_bb_candidates;
unsigned m_max_global_bb_candidates = 100;
unsigned m_bb_batch_size = 150;
expr_ref_vector m_global_backbones;
// Backbone job queue
std::condition_variable m_bb_cv;
bb_candidates m_bb_current_batch;
unsigned m_bb_batch_id = 0;
unsigned m_num_global_bb_threads = 0;
unsigned_vector m_bb_last_batch_processed;
unsigned m_bb_cancel_epoch = 0; // When a backbone worker finishes early, it increments m_bb_cancel_epoch and notifies all
// called from batch manager to cancel other workers if we've reached a verdict
void cancel_workers() {
IF_VERBOSE(1, verbose_stream() << "Canceling workers\n");
@ -99,18 +124,37 @@ namespace smt {
p.m_sls_worker->cancel();
}
void cancel_background_threads() {
cancel_workers();
cancel_sls_worker();
void cancel_backbones_worker() {
IF_VERBOSE(1, verbose_stream() << "Canceling backbones workers\n");
for (auto* w : p.m_global_backbones_workers)
w->cancel();
}
void cancel_background_threads() {
cancel_workers();
cancel_sls_worker();
if (!p.m_global_backbones_workers.empty()) {
cancel_backbones_worker();
m_bb_cv.notify_all();
}
}
// to avoid deadlock
bool is_global_backbone_unlocked(ast_translation& l2g, expr* bb_cand) {
expr_ref cand(l2g(bb_cand), l2g.to());
return any_of(m_global_backbones, [&](expr *bb) { return bb == cand.get(); });
}
void backtrack_unlocked(ast_translation& l2g, unsigned worker_id, expr_ref_vector const& core,
node_lease const* lease = nullptr, vector<node_lease> const* targets = nullptr);
void collect_clause_unlocked(ast_translation &l2g, unsigned source_worker_id, expr *clause);
void release_lease_unlocked(unsigned worker_id, node* n, unsigned epoch);
void cancel_closed_leases_unlocked(unsigned source_worker_id);
public:
batch_manager(ast_manager& m, parallel& p) : m(m), p(p), m_search_tree(expr_ref(m)) { }
batch_manager(ast_manager& m, parallel& p) : m(m), p(p), m_search_tree(expr_ref(m)), m_global_backbones(m) { }
void initialize(unsigned initial_max_thread_conflicts = 1000); // TODO: pass in from worker defaults
void initialize(unsigned num_global_bb_threads, unsigned initial_max_thread_conflicts = 1000); // TODO: pass in from worker config
void set_unsat(ast_translation& l2g, expr_ref_vector const& unsat_core);
void set_sat(ast_translation& l2g, model& m);
@ -118,6 +162,11 @@ namespace smt {
void set_exception(unsigned error_code);
void collect_statistics(::statistics& st) const;
void collect_backbone_candidates(ast_translation& l2g, bb_candidates& bb_candidates);
bool collect_global_backbone(ast_translation& l2g, expr_ref const& backbone);
bool wait_for_backbone_job(unsigned bb_thread_id, ast_translation& g2l, vector<parallel::bb_candidate>& out, reslimit& lim);
bb_candidates return_global_bb_candidates(ast_translation& g2l);
bool get_cube(ast_translation& g2l, unsigned id, expr_ref_vector& cube, bool is_first_run, node_lease& lease);
void backtrack(ast_translation& l2g, unsigned worker_id, expr_ref_vector const& core, node_lease const& lease);
void try_split(ast_translation& l2g, unsigned worker_id, node_lease const& lease, expr* atom, unsigned effort);
@ -128,6 +177,22 @@ namespace smt {
expr_ref_vector return_shared_clauses(ast_translation& g2l, unsigned& worker_limit, unsigned worker_id);
lbool get_result() const;
bool is_global_backbone(ast_translation& l2g, expr* bb_cand) {
std::scoped_lock lock(mux);
return is_global_backbone_unlocked(l2g, bb_cand);
}
void cancel_current_backbone_batch() {
std::scoped_lock lock(mux);
m_bb_cancel_epoch++;
m_bb_cv.notify_all();
}
unsigned get_cancel_epoch() {
std::scoped_lock lock(mux);
return m_bb_cancel_epoch;
}
};
class worker {
@ -139,6 +204,7 @@ namespace smt {
bool m_share_units_initial_only = true;
double m_max_conflict_mul = 1.5;
bool m_inprocessing = false;
bool m_global_backbones = false;
bool m_sls = false;
unsigned m_inprocessing_delay = 1;
unsigned m_max_cube_depth = 20;
@ -172,6 +238,7 @@ namespace smt {
} // allow for backoff scheme of conflicts within the thread for cube timeouts.
void simplify();
bb_candidates find_backbone_candidates(unsigned k = 10);
public:
worker(unsigned id, parallel& p, expr_ref_vector const& _asms);
@ -208,9 +275,53 @@ namespace smt {
}
};
class backbones_worker {
struct stats {
unsigned m_batches_total = 0;
unsigned m_candidates_total = 0;
unsigned m_singleton_backbones = 0;
unsigned m_backbones_detected = 0;
unsigned m_backbones_found = 0;
unsigned m_fallback_singleton_checks = 0;
unsigned m_fallback_reason_chunk_exhausted = 0;
unsigned m_fallback_reason_undef = 0;
unsigned m_core_refinement_rounds = 0;
unsigned m_lits_removed_by_core = 0;
unsigned m_num_chunk_increases = 0;
};
enum bb_mode {
bb_negated,
bb_positive
};
unsigned id; // unique identifier for the worker
batch_manager& b;
ast_manager m;
expr_ref_vector asms;
smt_params m_smt_params;
scoped_ptr<context> ctx;
ast_translation m_g2l, m_l2g;
unsigned m_bb_chunk_size = 20;
unsigned m_bb_conflicts_per_chunk = 1000;
stats m_stats;
bb_mode m_mode;
unsigned m_num_global_bb_threads = 1; // used to toggle behavior when testing bb candidates
unsigned m_shared_clause_limit = 0; // remembers the index into shared_clause_trail marking the boundary between "old" and "new" clauses to share
bool check_backbone(expr* bb_candidate);
public:
backbones_worker(unsigned id, parallel &p, expr_ref_vector const &_asms);
void cancel();
void collect_statistics(::statistics& st) const;
void run();
void collect_shared_clauses();
reslimit &limit() { return m.limit(); }
};
batch_manager m_batch_manager;
scoped_ptr_vector<worker> m_workers;
scoped_ptr<sls_worker> m_sls_worker;
scoped_ptr_vector<backbones_worker> m_global_backbones_workers;
public:
parallel(context& ctx) :

View file

@ -45,6 +45,7 @@ namespace smt {
unsigned m_num_checks;
unsigned m_num_simplifications;
unsigned m_num_del_clauses;
unsigned m_num_assignments;
statistics() {
reset();
}

View file

@ -170,7 +170,7 @@ namespace search_tree {
// Used for tree expansion throttling policy in should_split()
// SMTS says set to num workers, but our experiments show a big regression
// Leaving at 0 for now, but making it confirgurable for future experimentation
// Leaving at 0 for now, but making it configurable for future experimentation
unsigned m_min_tree_size = 0;
struct candidate {
@ -524,6 +524,21 @@ namespace search_tree {
return m_root.get();
}
void find_nonclosed_nodes_with_literal(literal const& lit, ptr_vector<node<Config>>& out) {
find_nonclosed_nodes_with_literal_rec(m_root.get(), lit, out);
}
void find_nonclosed_nodes_with_literal_rec(node<Config>* n, literal const& lit, ptr_vector<node<Config>>& out) {
if (!n)
return;
if (!Config::literal_is_null(n->get_literal()) && n->get_literal() == lit && n->get_status() != status::closed)
out.push_back(n);
find_nonclosed_nodes_with_literal_rec(n->left(), lit, out);
find_nonclosed_nodes_with_literal_rec(n->right(), lit, out);
}
void dec_active_workers(node<Config>* n) {
if (!n)
return;