From d02e9fcfab20fb5672fe714c92b061a6ed2c0379 Mon Sep 17 00:00:00 2001 From: Ilana Shapiro Date: Mon, 11 May 2026 15:08:23 -0700 Subject: [PATCH] Final version of parallel architecture for FMCAD26 submission (#9476) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Final version of parallel architecture for FMCAD26 submission (#9475) * setting up new backbone experiment * fix phase scores bug * debug crash from negated atoms * backbone thread/global backbones in progress, does NOT compile yet * debug, still need to add backbones worker as a new thread * setting up complicated condition variable thing for backbones worker thread * debug * debug lock contention * it's a little messy, but change how i'm checking backbones by initiating with batch check * don't split on global backbones, share global backbones once detected. still need to prune search tree with backbones * close global backbone branches in search tree * fix backbone ranking (take average of bb age over cubes and incorporate hits/num cubes the bb appears in * add stats to backbone experiment * gate the backbones experiment by local vs global * update stats and fix bug about unsat core size=1 means global backbone * phase negation ablation * unforce phase ablation * reset ablations * add todo notes * fix backbone aging * first draft of Janota Alg 7 * process exactly 10 bb candidates in each batch * fixing the Janota algorithm * add backbone stats for Janota algorithm * fix bug about global backbones not being checked unless local is also true * hopefully fix bug about closing global backbones in search tree * fix another bug in janota alg * report random seed for debug * print random seed for debug * refactor janota alg code, still can't repro the crash * fix some bugs in the janota algorithm * try to fix weird memory leak thing with ramon/linux * revert fix, it didn't work * add second backbones thread * increase chunk size when undef * fix how the 2 backbone threads work on batches (they each race to finish the same batch). 
this was very complicated to code due to thread synchronization and while it runs there may be bugs * update how we report stats for backbones * first draft of doing the bb threads in neg and pos mode, needs revising * fix some bugs in the positive version of the bb check, still need to review * debug some more things in the positive bb worker * keep bb candidates sorted, increase batch and chunk size * try to resolve a couple of bugs * fix very bad bug about backbones workers not doing anything * ablate positive backbone thread * fix how we record backbones in positive mode (shouldn't impact previous run) * clarify code about adding found backbones * add back the positive bb thread * try to fix the random segfault bug + ablate the positive bb thread again * clean up logs * share clauses with bb threads * fixes Signed-off-by: Nikolaj Bjorner * resolve deadlock * add comment about SAT bb case * todo comments * complete TODOs in code, still need to debug bb threads * debug bb threads, add bb_positive thread back in * ablate bb_positive thread * style * configure num bb threads as param * enable sat and unsat mode Signed-off-by: Nikolaj Bjorner * updates Signed-off-by: Nikolaj Bjorner * remove while true Signed-off-by: Nikolaj Bjorner * updates * try to fix rewriter_exception bug * possibly reduce code under lock when only 1 bb thread * add some copilot-suggested optimizations * add copilot suggestions to fix condition variable synchronization with bb threads * revert changes that are too messy with the code * ablate collect clauses * ablate condition variable logic changes * ablate reset batch * revert ablation * remove m_batch_in_progress that makes the bb threads wait until both have exited the batch after one signals cancel (can be long if one is stuck in ctx check) * sharing theory lemmas * finish setup for search tree thread modes, and fix local bb setup to pull from the global pool * variable renames * update bb hyperparams after copilot (hopefully??) ran tuning experiments * fix possible AST manager bug * ablate collect clauses * remove bb collect shared clauses * fix local bb experiment bug and reinstate collect clauses for global bb * local bb cands are thread-local ablation * remove thread-local local bb ablation * fix bug in nonthread-local bb experiment * fix more nonthreadlocal bb bugs * try to fix local bb bug * AST manager mismatch bugfix * attempt to fix another canonicalization bug * try another bugfix * try another bugfix * try yet another bugfix * thread local bb ablation * ablate force phase * ablate set activity * undo ablations since apparently it's not forcing phase or boosting activities * remove old experiments * try guarding m_birthdate size * try to fix several bugs including with m_birthdate initialization and how we're storing original phases * one more bugfix * remove local bb experiment after negative signals on experiments, and change bb ranking to VSIDS scores as opposed to phase * select bb polarity based on phase, not VSIDS * first attempt with codex. Codex notes: What changed: - Each tree node now tracks: - active worker count - lease epoch - cancel epoch - get_cube() now hands each worker an explicit lease: (node, epoch, cancel_epoch). - try_split() and backtrack() now operate on that lease, and the batch manager releases the worker’s lease under the tree lock before mutating the node. - If another worker closes the leased node or subtree, the batch manager cancels only the workers whose current leased nodes are now closed.
- Workers detect canceled leases after check(), reset their local cancel flag, abandon the stale lease, and continue instead of turning that into a global exception. - The “reopen immediately into the open queue” policy is preserved. I did not add a barrier waiting for all workers on a node to finish. - Active-worker accounting is now separate from the open/active/closed scheduling status, so reopening a node no longer erases the fact that other workers are still on it. I also updated search_tree bookkeeping so: - closure bumps node cancel/lease epochs - active-node counting uses actual active-worker presence, not just status == active * fix smts bugfix git merge issues with backtrack * fix(parallel-smt): gate split/backtrack by lease epoch What it changes: - util/search_tree.h - bumps node epoch on split - threads epoch through should_split(...) and try_split(...) - always records effort, but only split/reopen if the lease epoch still matches - smt/smt_parallel.cpp - requires is_lease_valid(..., lease.epoch) before backtrack(...) - passes lease.epoch into m_search_tree.try_split(...) * clean up code and add some comments * fix bug about backtracking condition being too strict: The epoch guard should not block backtrack(...) the same way it blocks try_split(...). A stale worker that proves UNSAT for n should still be able to close n, and that closure should then cancel the other workers on n and its subtree. I changed smt/smt_parallel.cpp accordingly: - try_split(...) still uses epoch to reject stale structural splits - backtrack(...) no longer requires is_lease_valid(..., epoch); it only requires that the lease is not already canceled So the intended asymmetry is now restored: - stale split: reject - stale unsat/backtrack: allow closure, then cancel affected workers * ablate to no backtracking on stale leases * fix merge * revert codex change about exception handling * fix linux bugs * ablate backtrack gating * attempt to fix linux crashes * ablate backtracking on global bb * the rare bb bug appears to be from creating the synthetic lease for a bb node and then backtracking on the synthetic lease. this is an attempt to fix it * clean up code * try to fix bug about active worker counts/lease accounting. current policy should hold: - stale leases: release/decrement - canceled leases: do not release/decrement (just ignore since we have an invariant that canceled leases mean closed nodes that are never revisited) * delay premature root activation * fix major semantic bug about threads continually choosing the root if their lease is reset * fix cancellation to unknown status * fix very bad bug about all threads needing to start at the root * ablate active ranking: now nodes are only reopened if they are truly inactive (active worker count is 0) * fix some bugs about leases * ablate adding static effort only * fix some bugs about leases * don't explode effort for portfolio nodes * fix: still accumulate per-node effort, but don't over-accumulate on portfolio solves * restore dynamically scaled effort * clean up merge from cherry pick * tighten which nodes we detect for proven global bb closure (only detect nonclosed nodes) * fix cancel to unknown exception on bb code * lease cancellation doesn't touch rlimit now, it just sets max conflicts to 0.
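For readers of this log: the lease/cancel-epoch policy described in the notes above can be summarized by a small standalone sketch (the names Node, Lease, try_split and backtrack below are hypothetical stand-ins, not the actual z3 declarations). A worker's lease snapshots the node's cancel epoch; a stale or canceled lease may no longer split the node, but a lease whose node is still open may close it (backtrack), even if the lease is otherwise stale.

// Minimal standalone sketch of the lease/cancel-epoch policy; illustrative names only.
#include <cassert>
#include <cstdio>

struct Node {
    unsigned cancel_epoch = 0;   // bumped whenever the node (or its subtree) is closed
    bool     closed       = false;
};

struct Lease {
    Node*    node         = nullptr;
    unsigned cancel_epoch = 0;   // snapshot taken when the lease was handed out
};

static bool lease_canceled(Lease const& l) {
    return !l.node || l.node->closed || l.node->cancel_epoch != l.cancel_epoch;
}

// A canceled or stale lease must not introduce new structure under the node.
static bool try_split(Lease const& l) {
    if (lease_canceled(l))
        return false;            // stale split: reject
    // ... create the two children of l.node here ...
    return true;
}

// A lease that is merely stale may still close the node: an UNSAT result for
// the node's cube is valid regardless of who else is currently working on it.
static void backtrack(Lease const& l) {
    if (!l.node || l.node->closed)
        return;                  // already closed elsewhere, nothing to do
    l.node->closed = true;
    ++l.node->cancel_epoch;      // cancels every lease still pointing at this node
}

int main() {
    Node n;
    Lease a{&n, n.cancel_epoch}, b{&n, n.cancel_epoch};
    backtrack(a);                // worker A proves the cube of n UNSAT
    assert(lease_canceled(b));   // worker B's lease is now canceled...
    assert(!try_split(b));       // ...so B can no longer split the closed node
    std::puts("lease policy sketch ok");
    return 0;
}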
also fix a VERY BAD BUG about effort never being updated until all leases are done on a node, which meant we never left the root * cross-thread modification of max conflicts is unsafe, so create an atomic lease canceled variable that's checked in ctx where max conflicts is also checked * move atomic lease check in the context to the more global get_cancel_flag function * Fix new SIGSEGV. Issue: The root cause: get_cancel_flag() is called from within propagation loops (mid-BCP, mid-equality-propagation, mid-atom-propagation). When it returns true there, the solver exits early and leaves the context in an intermediate state — propagation queues partially processed, theory state potentially inconsistent with boolean state. For the global cancel (m.limit().cancel()), this is harmless: the worker exits entirely and the context is destroyed. Intermediate state doesn't matter. For a lease cancel, the context is reused — the worker gets a new cube and calls ctx->check() again on the same context object. Re-entering check() on a context interrupted mid-propagation causes it to access that corrupted intermediate state → SIGSEGV. The m_max_conflicts check is the only checkpoint that's safe for re-entry: it only fires post-conflict-resolution, pre-decision, when propagation queues are empty and theory state is consistent. Fix: Remove m_lease_canceled from get_cancel_flag(). Keep it only at safe, between-phase checkpoints where the context is in a known-consistent state. The result is two safe checkpoints for m_lease_canceled: after each conflict (post-resolution, queues empty) and before each theory final check (not yet entered the theory). Neither interrupts the solver mid-mutation. The SIGSEGV should be gone, and NIA performance should improve because long theory final checks (where NIA burns most time) are now preemptable before they start. * fix new inconsistent theory bug: The problem is returning FC_GIVEUP from inside final_check() after some theories have already run final_check_eh() and pushed propagations into the queue. Those pending propagations reference context state that gets invalidated on the next check() call → SIGSEGV. The fix: check m_lease_canceled before entering final_check() in bounded_search(), never from inside it. That way the context is always in a clean pre-final-check state when we bail out. This is safe: decide() returned false (all variables assigned, no pending propagations), theories haven't been touched yet, context is in a fully consistent state. For NIA, this is still a meaningful win — we avoid entering expensive arithmetic final checks entirely when the lease is already canceled. * ablate lease cancel check in ctx final theory check due to crash (??) * gate bb-specific code behind param * try some possible bugfixes for the SIGSEGV * ablate some bugfixes * remove second lease cancel check in smt_context, not sure it's safe.
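The safe-checkpoint argument above is the key invariant. A minimal standalone illustration (ToySolver and its members are hypothetical names, not the smt::context API) of polling a lease-cancel flag only where the conflict budget is polled, between phases rather than inside propagation:

// Toy illustration of "cancel only at safe checkpoints"; names are hypothetical.
#include <atomic>
#include <cstdio>

struct ToySolver {
    std::atomic<bool> lease_canceled{false};
    unsigned          conflicts = 0, max_conflicts = 100;

    void propagate() { /* must run to completion; no cancellation inside */ }

    bool should_stop_at_checkpoint() const {
        // post-conflict / pre-decision: propagation queues are empty and the
        // state is consistent, so stopping here leaves the solver re-entrant
        return conflicts >= max_conflicts || lease_canceled.load(std::memory_order_relaxed);
    }

    // returns true if the (pretend) search finished, false if it stopped at a checkpoint
    bool bounded_search() {
        while (true) {
            propagate();                       // never interrupted mid-propagation
            if (should_stop_at_checkpoint())
                return false;                  // safe to hand the context back and re-enter later
            ++conflicts;                       // stand-in for resolve_conflict() / decide()
            if (conflicts == 3)
                return true;                   // pretend the search completed
        }
    }
};

int main() {
    ToySolver s;
    s.lease_canceled = true;                   // another thread closed our leased node
    std::printf("stopped at a safe checkpoint: %s\n", s.bounded_search() ? "no" : "yes");
}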
only check where we do the max conflicts check * restore exception handling logic to master branch * restore reslimit cancels since the bug appears to be latent * add bookkeeping for race condition of multiple lease cancels on a single node (messes with reslimit) * restore unrelated code to master * restore local bb experiment * ablate restore local bb phase/activity after search * undo local bb ablation about resetting phase/activities, and reinstate the shared lemmas of length 2 and 3 experiment * re-ablate restore local bb phase/activity after search, due to positive experimental signal on smt comp LIA * change split policy from lightweight proof skeleton to VSIDS. NOTE: enabling local bb will mess with this since we aren't restoring activities right now * backtrack more aggressively in search tree: close matching external targets (i.e. repeat literals on other branches) * find_shallowest_timed_out_leaf_depth is now shallowest_unsolved_leaf_depth and is based on num activations > 0, not effort > 0 * fix soundness bug about closing external targets with nontrivial cores * epoch is no longer needed, just cancel epoch. remove epoch * core minimize draft, and fix bug in tree expansion policy about shallowest leaf depth needing to be timed out * core minimization thread (remove search tree worker core min since it was blocking) * collect shared clauses in core min thread * bugfixes in core min algorithm * fix more bugs in core min algorithm * more core min bugfixes based on feedback and increase m_core_minimize_conflict_budget to 5000 (might need to increase it more for harder SMT COMP problems) * fix bug in backtrack_unlocked * fix compiler error * more core min bugfix from nikolaj * clean up * failed literal probe collects shared clauses * core min thread shares units * failed literal thread now tests the top 500 global bb cands each round, instead of scanning everything. on QF_LIA/Sz128_2823.smt2 this got us from 51->75 discovered backbones * remove core minimizer unit sharing (experimentally showed no effect) * core minimization thread candidate cores are ranked first by depth (deepest->shallowest) then by size (largest->smallest). also, the core's node is set to the deepest node in the core which is not necessarily the search node (slightly semantically stronger). finally, clean up bb/failed literal params * failed literal probe runs continuously * fix a lot of things about the FL thread and how bb cands are being processed. also re-add the local bb experiment for ablations * ablate continuous run on FL thread (up to the max BM examples) * ablate m_max_failed_literal_prioritized_size back to 100 * redesign FL probe again * ablate FL continuous probe * reinstate continuous FL probe after positive NIA signal, but also re-add the BM maintaining 1000 bb cands and these are used as backups instead of just looping over the top 100 all the time * change FL thread scheduling to attempt to do fewer duplicate checks * restore some old FL behavior * batch manager dedups global bb cands by atoms, not literals. if we have 2 of the same atom, the polarity with higher rank is kept for the stored bb candidate literal. * ablations * reinstate FL check for new batch with epochs, before merge, this is a temp branch * undo comment out cv call * restore old changes * bb batch mode is continuous, with checks for new candidates after the first round * separate FL probe into 2 threads for pos and neg mode * attempt to add unit-based bb detection in chunking mode * add bb detection via workers' units.
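For context on the failed-literal (FL) bullets above: the FL backbone test assumes the negation of a candidate literal under a small conflict budget, and the candidate is declared a backbone when that check comes back UNSAT with the assumption in the core (probe_literal in the diff below implements this against the real context). A toy brute-force version over two Booleans, just to show the logic:

// Toy failed-literal backbone test; brute-force "solver", not the smt::context API.
#include <cstdio>

// F = (x or y) and (not y); the backbones of F are { x, not y }.
static bool F(bool x, bool y) { return (x || y) && !y; }

// SAT check of F under one assumption on x (true => assume x, false => assume not x).
static bool sat_with_x(bool assumed_x) {
    for (int y = 0; y < 2; ++y)
        if (F(assumed_x, y != 0))
            return true;
    return false;
}

int main() {
    // Failed-literal test for candidate literal x:
    // assume its negation; if F && !x is UNSAT, then x holds in every model of F,
    // i.e. x is a backbone and can be asserted/shared as a unit.
    bool x_is_backbone = !sat_with_x(false);
    std::printf("x is %sa backbone of F\n", x_is_backbone ? "" : "not ");
}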
also rename some variables * modify the fallback policies for bb detection in batch mode but also in FL mode * ablate continuous checking for batch bb mode * major refactor for bb code. we share units and collect them as pruning bb's in all threads now (including core min and bb threads). we always check for units in batch mode now. finally, the batch mode fallback is now FL probing * bb candidates are atoms, not literals, since we currently test both polarities in parallel. batch mode retry terminates if we made zero progress after a retry round to avoid resource stress fix bug about bb ranking being backwards for how we process them * fix polarity bug for FL mode dedup * restore polarity-sensitive bb candidate ranking via lits * ablate sharing non-worker units * ablate share unit as bb * ablate incomplete-theory give-up paths * restore unit sharing as bb collection on workers * restore incomplete-theory give-up paths * clean up code * clean up code * clean up code --------- Signed-off-by: Nikolaj Bjorner Co-authored-by: Ilana Shapiro Co-authored-by: Ilana Shapiro Co-authored-by: Nikolaj Bjorner Co-authored-by: Ilana Shapiro Co-authored-by: Ilana Shapiro Co-authored-by: Ilana Shapiro * add ablate_backtracking experiment --------- Signed-off-by: Nikolaj Bjorner Co-authored-by: Ilana Shapiro Co-authored-by: Ilana Shapiro Co-authored-by: Nikolaj Bjorner Co-authored-by: Ilana Shapiro Co-authored-by: Ilana Shapiro Co-authored-by: Ilana Shapiro --- src/params/smt_parallel_params.pyg | 9 +- src/smt/smt_parallel.cpp | 1097 +++++++++++++++++++++------- src/smt/smt_parallel.h | 121 ++- src/util/search_tree.h | 33 +- 4 files changed, 947 insertions(+), 313 deletions(-) diff --git a/src/params/smt_parallel_params.pyg b/src/params/smt_parallel_params.pyg index ff4590afa..1a043b6fb 100644 --- a/src/params/smt_parallel_params.pyg +++ b/src/params/smt_parallel_params.pyg @@ -4,6 +4,9 @@ def_module_params('smt_parallel', params=( ('inprocessing', BOOL, False, 'integrate in-processing as a heuristic simplification'), ('sls', BOOL, False, 'add sls-tactic as a separate worker thread outside the search tree parallelism'), - ('failed_literal_backbones', BOOL, False, 'use failed literal backbone test'), - ('num_global_bb_threads', UINT, 0, 'default is 0 (off), can configure to 1 (negative mode only) or 2 (negative and positive mode) separate worker thread(s) outside the search tree parallelism'), - )) \ No newline at end of file + ('num_global_bb_fl_threads', UINT, 0, 'run failed-literal backbone worker threads; default is 0 (off), supported values are 1 (negative mode only) or 2 (negative and positive mode)'), + ('num_global_bb_batch_threads', UINT, 0, 'run Janota-style chunking backbone worker threads; default is 0 (off), supported values are 1 (negative mode only) or 2 (negative and positive mode)'), + ('local_backbones', BOOL, False, 'enable local backbones experiment within the search tree parallelism'), + ('core_minimize', BOOL, True, 'minimize unsat cores used for parallel cube backtracking'), + ('ablate_backtracking', BOOL, False, 'ablation: pass entire cube as core instead of unsat core during backtracking'), + )) diff --git a/src/smt/smt_parallel.cpp b/src/smt/smt_parallel.cpp index 39c3dd607..7d6dbe612 100644 --- a/src/smt/smt_parallel.cpp +++ b/src/smt/smt_parallel.cpp @@ -111,78 +111,50 @@ namespace smt { run_failed_literal_mode(); else run_batch_mode(); - } - - lbool parallel::backbones_worker::check_sat(expr_ref_vector const &asms) { - return b.check(asms, *ctx); } void 
parallel::backbones_worker::run_failed_literal_mode() { ctx->get_fparams().m_max_conflicts = 10; - auto num_atoms = ctx->get_num_bool_vars(); - uint_set units; - for (unsigned v = 0; v < num_atoms && m.inc(); ++v) - if (ctx->get_assignment(v) != l_undef && ctx->get_assign_level(v) == ctx->m_base_lvl) - units.insert(v); - while (m.inc()) { - for (unsigned v = 0; v < num_atoms && m.inc(); ++v) { - if (units.contains(v)) - continue; - expr_ref e(ctx->bool_var2expr(v), m); - if (!e) - continue; - if (m.is_or(e) || m.is_ite(e) || m.is_and(e) || m.is_iff(e)) - continue; - if (ctx->get_assignment(v) != l_undef && ctx->get_assign_level(v) == ctx->m_base_lvl) { - bool is_true = ctx->get_assignment(v) == l_true; - IF_VERBOSE(2, verbose_stream() << "backbone on trail " << e << "\n"); - units.insert(v); - if (!is_true) - e = m.mk_not(e); - if (b.collect_global_backbone(m_l2g, e)) - m_stats.m_backbones_found++; - continue; + + auto is_unit = [&](unsigned v) { + return ctx->get_assignment(v) != l_undef && ctx->get_assign_level(v) == ctx->m_base_lvl; + }; + + auto probe_var = [&](unsigned v, expr* preferred, bool is_retry) -> lbool { + expr_ref e(ctx->bool_var2expr(v), m); + if (!e) + return l_undef; + if (m.is_or(e) || m.is_ite(e) || m.is_and(e) || m.is_iff(e)) + return l_undef; + + if (is_unit(v)) { + bool is_true = ctx->get_assignment(v) == l_true; + IF_VERBOSE(2, verbose_stream() << "backbone on trail " << mk_bounded_pp(e.get(), m) << "\n"); + if (!is_true) + e = m.mk_not(e); + if (b.collect_global_backbone(m_l2g, e)) { + m_stats.m_internal_backbones_found++; + if (is_retry) + m_stats.m_retry_backbones_found++; } - auto r = probe_literal(v, units, e); - if (r != l_undef) - break; - if (units.contains(v)) - continue; - e = mk_not(e); - r = probe_literal(v, units, e); - if (r != l_undef) - break; + return l_undef; } - } - } - lbool parallel::backbones_worker::probe_literal(bool_var v, uint_set& units, expr *e) { - asms.push_back(e); - auto r = check_sat(asms); - asms.pop_back(); - if (r == l_false) { - if (!ctx->unsat_core().contains(e)) { - b.set_unsat(m_l2g, ctx->unsat_core()); - return l_false; + expr_ref first(e, m), second(mk_not(e), m); + if (preferred) { + expr* atom = preferred; + bool is_negated = m.is_not(preferred, atom); + first = is_negated ? mk_not(e) : e; + second = is_negated ? 
e : mk_not(e); } - IF_VERBOSE(2, verbose_stream() << "failed literal " << mk_bounded_pp(e, m) << "\n"); - expr_ref not_e(mk_not(m, e), m); - if (b.collect_global_backbone(m_l2g, not_e)) - m_stats.m_backbones_found++; - ctx->assert_expr(not_e); - units.insert(v); - r = l_undef; - } - if (r == l_true) { - model_ref mdl; - ctx->get_model(mdl); - b.set_sat(m_l2g, *mdl); - } - return r; - } + lbool r = probe_literal(v, first.get(), is_retry); + if (r != l_undef || is_unit(v)) + return r; + + return probe_literal(v, second.get(), is_retry); + }; - void parallel::backbones_worker::run_batch_mode() { bb_candidates bb_candidates; while (m.inc()) { if (!b.wait_for_backbone_job(id, m_g2l, bb_candidates, m.limit())) @@ -191,63 +163,221 @@ namespace smt { if (bb_candidates.empty()) continue; - LOG_BB_WORKER(1, " received batch of " << bb_candidates.size() << " candidates\n"); - + collect_shared_clauses(); + unsigned local_cancel_epoch = b.get_cancel_epoch(); auto canceled = [&] { return local_cancel_epoch != b.get_cancel_epoch(); }; - auto fallback_singletons = [&](expr_ref_vector const& chunk_lits, expr_ref_vector& bb_candidate_lits) { - m_stats.m_fallback_singleton_checks++; - for (expr* c : chunk_lits) { - if (!bb_candidate_lits.contains(c)) continue; // already handled in singleton core → backbone case below - - expr_ref bb_ref(c, m); - if (!m.inc() || canceled()) - return; - - if (m_mode == bb_mode::bb_positive) - bb_ref = mk_not(bb_ref); // F ∧ U since check_backbone flips it back - - if (!b.is_global_backbone(m_l2g, bb_ref) && check_backbone(bb_ref)) { - m_stats.m_backbones_detected++; - LOG_BB_WORKER(1, " fallback found backbone: " << mk_bounded_pp(bb_ref.get(), m, 3) << "\n"); - bool is_new_bb = b.collect_global_backbone(m_l2g, bb_ref); - if (is_new_bb) { - m_stats.m_backbones_found++; - ctx->assert_expr(bb_ref.get()); // since bb workers don't collect clauses they themselves shared - } - } - bb_candidate_lits.erase(c); - } - }; - - m_stats.m_batches_total++; - m_stats.m_candidates_total += bb_candidates.size(); + bool is_retry = false; + unsigned bb_candidate_epoch = b.get_bb_candidate_epoch(); expr_ref_vector bb_candidate_lits(m); for (auto const& c : bb_candidates) bb_candidate_lits.push_back(c.lit); + while (m.inc() && !canceled()) { + lbool terminal_result = l_undef; + uint_set seen_vars; // polarity dedup (since the same variable can appear in both polarities in the candidate list) + for (expr* lit : bb_candidate_lits) { + if (is_retry && b.has_new_backbone_candidates(bb_candidate_epoch)) + break; + if (!m.inc() || canceled()) + break; + + expr* atom = lit; + m.is_not(lit, atom); + if (!ctx->b_internalized(atom)) + continue; + sat::bool_var v = ctx->get_bool_var(atom); + if (v == sat::null_bool_var || seen_vars.contains(v)) + continue; + seen_vars.insert(v); + + terminal_result = probe_var(v, lit, is_retry); + if (terminal_result != l_undef) + break; + } + + if (terminal_result != l_undef) + break; + + if (b.has_new_backbone_candidates(bb_candidate_epoch) || canceled() || !m.inc()) + break; + + is_retry = true; + + expr_ref_vector bb_snapshot = b.get_global_backbones_snapshot(m_g2l); + expr_mark bb_mark; + for (expr* e : bb_snapshot) { + bb_mark.mark(e); + bb_mark.mark(mk_not(m, e)); + } + bb_candidate_lits.reset(); + for (auto const& c : bb_candidates) + if (!bb_mark.is_marked(c.lit.get())) + bb_candidate_lits.push_back(c.lit); + } + + if (!m.inc()) + return; + if (!canceled()) + b.cancel_current_backbone_batch(); + bb_candidates.reset(); + } + } + + lbool 
parallel::backbones_worker::probe_literal(bool_var v, expr *e, bool is_retry) { + asms.push_back(e); + auto terminal_result = b.check(asms, *ctx); + asms.pop_back(); + if (terminal_result == l_false) { + // If the tested literal is not part of the unsat core, then the + // formula is UNSAT independently of this failed-literal probe. + if (!ctx->unsat_core().contains(e)) { + b.set_unsat(m_l2g, ctx->unsat_core()); + return l_false; + } + // Ordinary failed-literal backbone discovery is non-terminal: + // share/assert the backbone, then continue probing. + IF_VERBOSE(2, verbose_stream() << "failed literal " << mk_bounded_pp(e, m) << "\n"); + expr_ref not_e(mk_not(m, e), m); + + m_stats.m_backbones_detected++; + if (b.collect_global_backbone(m_l2g, not_e)) { + m_stats.m_internal_backbones_found++; + if (is_retry) + m_stats.m_retry_backbones_found++; + } + ctx->assert_expr(not_e); + terminal_result = l_undef; + } + if (terminal_result == l_true) { + model_ref mdl; + ctx->get_model(mdl); + b.set_sat(m_l2g, *mdl); + } + return terminal_result; + } + + void parallel::backbones_worker::run_batch_mode() { + bb_candidates bb_curr_batch_candidates; + + while (m.inc()) { + if (!b.wait_for_backbone_job(id, m_g2l, bb_curr_batch_candidates, m.limit())) + return; + + if (bb_curr_batch_candidates.empty()) + continue; + + LOG_BB_WORKER(1, " received batch of " << bb_curr_batch_candidates.size() << " candidates\n"); + collect_shared_clauses(); + + unsigned local_cancel_epoch = b.get_cancel_epoch(); + auto canceled = [&] { return local_cancel_epoch != b.get_cancel_epoch(); }; + unsigned bb_candidate_epoch = b.get_bb_candidate_epoch(); + + auto fallback_failed_literal_probe = [&](expr_ref_vector const& chunk_lits, expr_ref_vector& bb_candidate_lits, bool is_retry = false) { + unsigned old_max_conflicts = ctx->get_fparams().m_max_conflicts; + ctx->get_fparams().m_max_conflicts = 10; + if (is_retry) + ++m_stats.m_bb_retries; + else + ++m_stats.m_fallback_singleton_checks; + + for (expr* lit : chunk_lits) { + if (is_retry && b.has_new_backbone_candidates(bb_candidate_epoch)) { + ctx->get_fparams().m_max_conflicts = old_max_conflicts; + return; + } + if (!m.inc() || canceled()) { + ctx->get_fparams().m_max_conflicts = old_max_conflicts; + return; + } + if (!bb_candidate_lits.contains(lit)) // already handled in singleton core → backbone case below + continue; + + expr_ref bb_ref(lit, m); + if (m_mode == bb_mode::bb_positive) + bb_ref = mk_not(m, bb_ref); // Normalize to the backbone literal for this mode; probe_literal tests its negation + + if (!b.is_global_backbone_or_negation(m_l2g, bb_ref)) { + expr_ref backbone(m); + if (try_get_unit_backbone(bb_ref.get(), backbone)) { + m_stats.m_backbones_detected++; + LOG_BB_WORKER(1, " fallback found unit backbone: " << mk_bounded_pp(backbone.get(), m, 3) << "\n"); + if (b.collect_global_backbone(m_l2g, backbone)) + m_stats.m_internal_backbones_found++; + } else { + expr* atom = bb_ref.get(); + m.is_not(bb_ref.get(), atom); + if (ctx->b_internalized(atom)) { + sat::bool_var v = ctx->get_bool_var(atom); + + if (v != sat::null_bool_var) { + lbool terminal_result = probe_literal(v, mk_not(m, bb_ref), is_retry); // failed literal probing (i.e. 
probe the negation of the bb candidate) + LOG_BB_WORKER(1, " RESULT: " << terminal_result << " FOR CANDIDATE: " << mk_bounded_pp(bb_ref.get(), m, 3) << "\n"); + } + } + } + } + bb_candidate_lits.erase(lit); + } + ctx->get_fparams().m_max_conflicts = old_max_conflicts; + }; + + m_stats.m_batches_total++; + m_stats.m_candidates_total += bb_curr_batch_candidates.size(); + + expr_ref_vector bb_candidate_lits(m); + for (auto const& c : bb_curr_batch_candidates) + bb_candidate_lits.push_back(c.lit); + unsigned chunk_delta = 1; // in mode bb_neg this is Algorithm 7 from https://sat.inesc-id.pt/~mikolas/bb-aicom-preprint.pdf while (!bb_candidate_lits.empty() && !canceled() && m.inc()) { - // remove candidates that the other backbone thread found to be backbones - if (m_num_global_bb_threads > 1) { + // remove candidates that the other threads found to be backbones + { unsigned j = 0; - for (auto tmp : bb_candidate_lits) - if (!b.is_global_backbone(m_l2g, tmp)) - bb_candidate_lits[j++] = tmp; + for (auto lit : bb_candidate_lits) { + if (!b.is_global_backbone_or_negation(m_l2g, lit)) + bb_candidate_lits[j++] = lit; + } + bb_candidate_lits.shrink(j); + } + + // remove candidates that are units and assert them as backbones + { + unsigned j = 0; + for (expr* lit : bb_candidate_lits) { + expr_ref backbone(m); + if (try_get_unit_backbone(lit, backbone)) { + IF_VERBOSE(2, verbose_stream() << "backbone on trail " << mk_bounded_pp(backbone.get(), m) << "\n"); + if (b.collect_global_backbone(m_l2g, backbone)) + m_stats.m_internal_backbones_found++; + m_stats.m_backbones_detected++; + continue; + } + bb_candidate_lits[j++] = lit; + } bb_candidate_lits.shrink(j); } unsigned chunk_size = std::min(m_bb_chunk_size * chunk_delta, bb_candidate_lits.size()); expr_ref_vector chunk_lits(m); expr_ref_vector negated_chunk_lits(m); + expr_mark chunk_atoms; - for (unsigned i = 0; i < chunk_size; ++i) { - expr *e = bb_candidate_lits.get(i); - chunk_lits.push_back(e); - negated_chunk_lits.push_back(m.mk_not(e)); + // Keep at most one polarity per atom in a chunk since otherwise this leads to + // immediate contradictions and thus no progress on finding backbones in the batch + for (unsigned i = 0; i < bb_candidate_lits.size() && chunk_lits.size() < chunk_size; ++i) { + expr* lit = bb_candidate_lits.get(i); + expr* atom = lit; + m.is_not(lit, atom); + if (chunk_atoms.is_marked(atom)) + continue; + chunk_atoms.mark(atom); + chunk_lits.push_back(lit); + negated_chunk_lits.push_back(mk_not(m, lit)); } expr_ref_vector bb_asms(m); @@ -266,16 +396,13 @@ namespace smt { break; m_stats.m_core_refinement_rounds++; - unsigned base_asms_sz = asms.size(); for (expr* a : bb_asms) asms.push_back(a); - lbool r = check_sat(asms); + lbool r = b.check(asms, *ctx); asms.shrink(base_asms_sz); - if (!m.inc()) - return; - if (canceled()) + if (!m.inc() || canceled()) break; if (r == l_undef) { @@ -289,7 +416,7 @@ namespace smt { LOG_BB_WORKER(1, " UNDEF and max chunk → fallback\n"); - fallback_singletons(chunk_lits, bb_candidate_lits); + fallback_failed_literal_probe(chunk_lits, bb_candidate_lits); m_stats.m_fallback_reason_undef++; chunk_delta = 1; break; @@ -300,7 +427,7 @@ namespace smt { model_ref mdl; ctx->get_model(mdl); b.set_sat(m_l2g, *mdl); - bb_candidates.reset(); + bb_curr_batch_candidates.reset(); return; } @@ -321,15 +448,13 @@ namespace smt { // ---- singleton core → backbone ---- if (bb_asms_in_core.size() == 1) { expr* a = bb_asms_in_core.back(); - expr_ref backbone_lit(m.mk_not(a), m); + expr_ref backbone_lit(mk_not(m, a),
m); m_stats.m_singleton_backbones++; m_stats.m_backbones_detected++; - bool is_new_bb = b.collect_global_backbone(m_l2g, backbone_lit); - - if (is_new_bb) { - m_stats.m_backbones_found++; + if (b.collect_global_backbone(m_l2g, backbone_lit)) { + m_stats.m_internal_backbones_found++; ctx->assert_expr(backbone_lit.get()); // since bb workers don't collect clauses they themselves shared } @@ -349,17 +474,48 @@ namespace smt { if (bb_asms.empty()) { LOG_BB_WORKER(1, " no more negated chunk literals, fallback to individual checks\n"); - fallback_singletons(chunk_lits, bb_candidate_lits); + fallback_failed_literal_probe(chunk_lits, bb_candidate_lits); m_stats.m_fallback_reason_chunk_exhausted++; break; } } } - if (!canceled()) { - b.cancel_current_backbone_batch(); + // retry loop so thread isn't idle + // experiments on my laptop showed this had about a 0% success rate and i'm not sure if this + // messes with resource scheduling little enough to justify keeping it + // for now we only retry if we make progress on finding at least 1 backbone in each retry round + while (!b.has_new_backbone_candidates(bb_candidate_epoch) && !canceled() && m.inc()) { + collect_shared_clauses(); + unsigned found_before = m_stats.m_internal_backbones_found; + + // filter candidates for retry + expr_ref_vector bb_snapshot = b.get_global_backbones_snapshot(m_g2l); + expr_mark bb_mark; + for (expr* e : bb_snapshot) { + bb_mark.mark(e); + bb_mark.mark(mk_not(m, e)); + } + bb_candidate_lits.reset(); + for (auto const& c : bb_curr_batch_candidates) + if (!bb_mark.is_marked(c.lit.get())) + bb_candidate_lits.push_back(c.lit); + + if (bb_candidate_lits.empty()) + break; + + fallback_failed_literal_probe(bb_candidate_lits, bb_candidate_lits, true); + + // break if we made no progress on this batch + // unlikely to make progress on future runs and idk if this creates any kind of resource stress + if (m_stats.m_internal_backbones_found == found_before) + break; } - bb_candidates.reset(); + + if (!canceled()) + b.cancel_current_backbone_batch(); + + bb_curr_batch_candidates.reset(); } } @@ -368,11 +524,70 @@ namespace smt { m.limit().cancel(); } + // returns true if the global bb is new, false if it was already known + bool parallel::batch_manager::collect_global_backbone(ast_translation &l2g, expr_ref const &backbone, unsigned source_worker_id) { + IF_VERBOSE(1, verbose_stream() << "collect-global-backbone\n"); + std::scoped_lock lock(mux); + SASSERT(&m == &l2g.to()); + + if (is_global_backbone_unlocked(l2g, backbone)) + return false; + + expr_ref g_bb_ref(l2g(backbone.get()), m); + m_global_backbones.insert(g_bb_ref.get()); + ++m_stats.m_backbones_found; + + IF_VERBOSE(1, verbose_stream() << " Found and sharing new global backbone: " << mk_bounded_pp(g_bb_ref, m, 3) << "\n"); + collect_clause_unlocked(l2g, source_worker_id, backbone.get()); + + expr_ref neg_g_bb_ref(mk_not(g_bb_ref), m); + vector g_core; + g_core.push_back(neg_g_bb_ref); + vector targets; + collect_matching_targets_unlocked(nullptr, neg_g_bb_ref, g_core, targets); + + if (!targets.empty()) { + IF_VERBOSE(1, verbose_stream() << " Closing negation of the new global backbone: " << mk_bounded_pp(g_bb_ref, m, 3) << "\n"); + + if (m_ablate_backtracking) { + // Ablation: for each target, pass the entire path from root to that node + for (auto const& target : targets) { + if (m_search_tree.is_lease_canceled(target.leased_node, target.cancel_epoch)) + continue; + + // Reconstruct the full path from root to this target node + expr_ref_vector full_cube(l2g.from()); + 
node* n = target.leased_node; + while (n) { + if (!cube_config::literal_is_null(n->get_literal())) { + expr* lit = n->get_literal().get(); + full_cube.push_back(expr_ref(lit, l2g.from())); + } + n = n->parent(); + } + + // Backtrack this one target with its full path + vector single_target = { target }; + backtrack_unlocked(l2g, UINT_MAX, full_cube, nullptr, &single_target); + } + } else { + // Normal: just use the negated backbone + expr_ref_vector l_core(l2g.from()); + l_core.push_back(mk_not(backbone)); + backtrack_unlocked(l2g, UINT_MAX, l_core, nullptr, &targets); + } + } + + return true; + } + void parallel::backbones_worker::collect_statistics(::statistics& st) const { st.update("bb-batches-total", m_stats.m_batches_total); st.update("bb-candidates-total", m_stats.m_candidates_total); st.update("bb-backbones-detected", m_stats.m_backbones_detected); - st.update("bb-backbones-found", m_stats.m_backbones_found); + st.update("bb-internal-backbones-found", m_stats.m_internal_backbones_found); + st.update("bb-retry-backbones-found", m_stats.m_retry_backbones_found); + st.update("bb-retries", m_stats.m_bb_retries); st.update("bb-core-refinement-rounds", m_stats.m_core_refinement_rounds); st.update("bb-singleton-backbones", m_stats.m_singleton_backbones); st.update("bb-fallback-singleton-checks", m_stats.m_fallback_singleton_checks); @@ -383,10 +598,10 @@ namespace smt { double backbone_yield = 0.0; if (m_stats.m_candidates_total > 0) - backbone_yield = 100.0 * (double)m_stats.m_backbones_found / (double)m_stats.m_candidates_total; + backbone_yield = 100.0 * (double)m_stats.m_internal_backbones_found / (double)m_stats.m_candidates_total; double avg_backbones_per_batch = 0.0; if (m_stats.m_batches_total > 0) - avg_backbones_per_batch = (double)m_stats.m_backbones_found / (double)m_stats.m_batches_total; + avg_backbones_per_batch = (double)m_stats.m_internal_backbones_found / (double)m_stats.m_batches_total; double core_refinement_cost = 0.0; if (m_stats.m_batches_total > 0) core_refinement_cost = (double)m_stats.m_core_refinement_rounds / (double)m_stats.m_batches_total; @@ -409,6 +624,134 @@ namespace smt { m_sls->collect_statistics(st); } + parallel::core_minimizer_worker::core_minimizer_worker(parallel& p, expr_ref_vector const& _asms) + : b(p.m_batch_manager), asms(m), m_smt_params(p.ctx.get_fparams()), m_g2l(p.ctx.m, m), m_l2g(m, p.ctx.m) { + for (expr* e : _asms) + asms.push_back(m_g2l(e)); + IF_VERBOSE(1, verbose_stream() << "Initialized core minimizer thread\n"); + ctx = alloc(context, m, m_smt_params, p.ctx.get_params()); + ctx->set_logic(p.ctx.m_setup.get_logic()); + context::copy(p.ctx, *ctx, true); + ctx->pop_to_base_lvl(); + ctx->get_fparams().m_preprocess = false; + } + + void parallel::core_minimizer_worker::cancel() { + IF_VERBOSE(1, verbose_stream() << "Core minimizer cancelling\n"); + m.limit().cancel(); + } + + void parallel::core_minimizer_worker::collect_statistics(::statistics& st) const { + ctx->collect_statistics(st); + st.update("parallel-core-minimize-calls", m_num_core_minimize_calls); + st.update("parallel-core-minimize-undef", m_num_core_minimize_undef); + st.update("parallel-core-minimize-refined", m_num_core_minimize_refined); + st.update("parallel-core-minimize-lits-removed", m_num_core_minimize_lits_removed); + st.update("parallel-core-minimize-found-sat", m_num_core_minimize_found_sat); + } + + void parallel::core_minimizer_worker::minimize_unsat_core(expr_ref_vector& core) { + expr_ref_vector unknown(core), mus(m), trial(m); // mus = literals we have NOT 
managed to eliminate + + unsigned original_size = core.size(); + ++m_num_core_minimize_calls; + + + // Invariant: F and mus and unknown is UNSAT. + while (!unknown.empty()) { + if (!m.inc()) { + core.reset(); + core.append(mus); + core.append(unknown); + return; + } + + expr* lit = unknown.back(); + unknown.pop_back(); + expr_ref not_lit(mk_not(m, lit), m); + + trial.reset(); + trial.append(mus); + trial.append(unknown); + trial.push_back(not_lit); + + lbool r = l_undef; + try { + flet _max_conflicts(ctx->get_fparams().m_max_conflicts, m_core_minimize_conflict_budget); + r = ctx->check(trial.size(), trial.data()); + } + catch (...) { + r = l_undef; + } + + switch (r) { + case l_undef: // the solver failed to show that lit is removable, so we must keep it to be safe + ++m_num_core_minimize_undef; + mus.push_back(lit); + break; + case l_true: { // If all asms are true (or as an approximation, if asms is empty), it found a model. It can report sat and exit the minimization worker thread. + if (!asms.empty()) { + mus.push_back(lit); + break; + } + ++m_num_core_minimize_found_sat; + model_ref mdl; + ctx->get_model(mdl); + b.set_sat(m_l2g, *mdl); + return; + } + case l_false: { + auto const& unsat_core = ctx->unsat_core(); + if (!unsat_core.contains(not_lit)) { + ++m_num_core_minimize_refined; + unknown.reset(); + expr_ref_vector new_mus(m); + for (expr* c : unsat_core) { + if (mus.contains(c)) + new_mus.push_back(c); + else + unknown.push_back(c); + } + mus.reset(); + mus.append(new_mus); + } + break; + } + default: + UNREACHABLE(); + } + } + + core.reset(); + core.append(mus); + core.append(unknown); // to reflect loop invariant, and in case we add an early exit + if (core.size() < original_size) + m_num_core_minimize_lits_removed += original_size - core.size(); + return; + } + + void parallel::core_minimizer_worker::run() { + while (m.inc()) { + node* source = nullptr; + expr_ref_vector core(m); + if (!b.wait_for_core_min_job(m_g2l, source, core, m.limit())) + return; + + unsigned original_size = core.size(); + if (original_size <= 1) + continue; + + collect_shared_clauses(); + + expr_ref_vector minimized(m); + minimized.append(core); + minimize_unsat_core(minimized); + + if (minimized.size() < original_size) + b.publish_minimized_core(m_l2g, asms, source, original_size, minimized); + } + } + void parallel::worker::run() { bool is_first_run = true; node_lease lease; @@ -425,7 +768,7 @@ namespace smt { check_cube_start: LOG_WORKER(1, " CUBE SIZE IN MAIN LOOP: " << cube.size() << "\n"); - if (m_config.m_global_backbones) { + if (m_config.m_global_backbones) { bb_candidates local_candidates = find_backbone_candidates(); b.collect_backbone_candidates(m_l2g, local_candidates); if (!m.inc()) @@ -479,7 +822,17 @@ namespace smt { } LOG_WORKER(1, " found unsat cube\n"); - b.backtrack(m_l2g, id, unsat_core, lease); + node* source = lease.leased_node; + + // When ablating backtracking, use the entire cube path instead of the unsat core + expr_ref_vector const& core_to_use = m_config.m_ablate_backtracking ? 
cube : unsat_core; + if (m_config.m_ablate_backtracking) { + LOG_WORKER(1, " ablating backtracking: using full cube path of size " << core_to_use.size() << "\n"); + } + + b.backtrack(m_l2g, id, core_to_use, lease); + if (m_config.m_core_minimize) + b.enqueue_core_minimization(m_l2g, source, unsat_core); lease = {}; if (m_config.m_share_conflicts) @@ -511,7 +864,15 @@ namespace smt { smt_parallel_params pp(p.ctx.m_params); m_config.m_inprocessing = pp.inprocessing(); - m_config.m_global_backbones = pp.num_global_bb_threads() > 0 && !pp.failed_literal_backbones(); + m_config.m_global_backbones = pp.num_global_bb_batch_threads() > 0 || pp.num_global_bb_fl_threads() > 0; + m_config.m_local_backbones = pp.local_backbones(); + m_config.m_core_minimize = pp.core_minimize(); + m_config.m_ablate_backtracking = pp.ablate_backtracking(); + + // When ablating backtracking, disable core minimization since we're using the full cube path + if (m_config.m_ablate_backtracking) { + m_config.m_core_minimize = false; + } } parallel::sls_worker::sls_worker(parallel& p) @@ -531,10 +892,12 @@ namespace smt { ctx->set_logic(p.ctx.m_setup.get_logic()); ctx->get_fparams().m_max_conflicts = m_bb_conflicts_per_chunk; context::copy(p.ctx, *ctx, true); + ctx->pop_to_base_lvl(); + m_shared_units_prefix = ctx->assigned_literals().size(); + m_num_initial_atoms = ctx->get_num_bool_vars(); smt_parallel_params pp(p.ctx.m_params); - m_num_global_bb_threads = pp.num_global_bb_threads(); - m_use_failed_literal_test = pp.failed_literal_backbones(); + m_use_failed_literal_test = pp.num_global_bb_fl_threads() > 0; } parallel::bb_candidates parallel::worker::find_backbone_candidates(unsigned k) { @@ -557,7 +920,7 @@ namespace smt { if (d.m_phase_available && !d.m_phase) candidate = m.mk_not(candidate); - if (b.is_global_backbone(m_l2g, candidate)) + if (b.is_global_backbone_or_negation(m_l2g, candidate)) continue; bb_candidate bb_cand(m, candidate, age, 1); @@ -580,22 +943,20 @@ namespace smt { return backbone_candidates; } - // - // Assume the negation of all candidates (or a batch of them) - // run the solver with a low budget of conflicts - // if the unsat core contains a single candidate we have found a backbone literal - // - bool parallel::backbones_worker::check_backbone(expr* bb_candidate) { - unsigned sz = asms.size(); - asms.push_back(m.mk_not(bb_candidate)); - - lbool r = b.check(asms, *ctx); - - asms.shrink(sz); - - LOG_BB_WORKER(1, " RESULT: " << r << " FOR CANDIDATE: " << mk_bounded_pp(bb_candidate, m, 3) << "\n"); - - return r == l_false && ctx->unsat_core().size() == 1; + // checks if candidate or its negation is a unit backbone on the trail and returns the backbone if so + bool parallel::backbones_worker::try_get_unit_backbone(expr* candidate, expr_ref& backbone) { + expr* atom = candidate; + m.is_not(candidate, atom); + if (!ctx->b_internalized(atom)) + return false; + sat::bool_var v = ctx->get_bool_var(atom); + if (v == sat::null_bool_var || ctx->get_assignment(v) == l_undef || ctx->get_assign_level(v) != ctx->m_base_lvl) + return false; + bool is_true = ctx->get_assignment(v) == l_true; + backbone = expr_ref(atom, m); + if (!is_true) + backbone = mk_not(backbone); + return true; } // NSB review: the code appares to use the assumption that we are not at base level @@ -607,7 +968,8 @@ namespace smt { // so we would re-examine literals that are not necessarily on base level in later calls. 
// void parallel::worker::share_units() { - // Collect new units learned locally by this worker and send to batch manager + // Collect new base-level units learned locally by this worker. + // Such units are globally valid and are thus part of the backbone unsigned sz = ctx->assigned_literals().size(); unsigned prefix_sz = m_shared_units_prefix; @@ -642,7 +1004,8 @@ namespace smt { if (lit.sign()) e = mk_not(e); // negate if literal is negative - b.collect_clause(m_l2g, id, e); + + b.collect_global_backbone(m_l2g, e, id); } m_shared_units_prefix = prefix_sz; } @@ -750,65 +1113,13 @@ namespace smt { return r; } - bool parallel::batch_manager::collect_global_backbone(ast_translation &l2g, expr_ref const &backbone) { - IF_VERBOSE(1, verbose_stream() << "collect-global-backbone\n"); - std::scoped_lock lock(mux); - SASSERT(&m == &l2g.to()); - - if (is_global_backbone_unlocked(l2g, backbone)) - return false; - - expr_ref g_bb_ref(l2g(backbone.get()), m); - m_global_backbones.push_back(g_bb_ref); - - IF_VERBOSE(1, verbose_stream() << " Found and sharing new global backbone: " << mk_bounded_pp(g_bb_ref, m, 3) - << "\n"); - collect_clause_unlocked(l2g, /*source_worker_id=*/UINT_MAX, backbone.get()); - - expr_ref neg_g_bb_ref(mk_not(g_bb_ref), m); - ptr_vector matches; - m_search_tree.find_nonclosed_nodes_with_literal(neg_g_bb_ref, matches); - - if (!matches.empty()) { - IF_VERBOSE(1, verbose_stream() << " Closing negation of the new global backbone: " - << mk_bounded_pp(g_bb_ref, m, 3) << "\n"); - expr_ref_vector l_core(l2g.from()); - l_core.push_back(mk_not(backbone)); - - vector targets; - for (node *t : matches) { - if (!t || m_search_tree.is_lease_canceled(t, t->get_cancel_epoch())) - continue; - - // Keep only highest matching nodes: if an ancestor is already selected, - // closing it will also close this descendant subtree. 
- bool is_highest_ancestor = true; - for (node *p = t->parent(); p; p = p->parent()) { - if (any_of(targets, [&](node_lease const &target) { return target.node == p; })) { - is_highest_ancestor = false; - break; - } - } - if (!is_highest_ancestor) - continue; - - targets.push_back({t, t->epoch(), t->get_cancel_epoch()}); - } - - if (!targets.empty()) - backtrack_unlocked(l2g, UINT_MAX, l_core, nullptr, &targets); - } - - return true; - } - - void parallel::batch_manager::release_lease_unlocked(unsigned worker_id, node* n, unsigned epoch) { + void parallel::batch_manager::release_lease_unlocked(unsigned worker_id, node* n) { if (worker_id >= m_worker_leases.size()) return; auto &lease = m_worker_leases[worker_id]; - if (!lease.node || lease.node != n || lease.epoch != epoch) + if (!lease.leased_node || lease.leased_node != n) return; - m_search_tree.dec_active_workers(lease.node); + m_search_tree.dec_active_workers(lease.leased_node); lease = {}; } @@ -821,7 +1132,7 @@ namespace smt { // only cancel workers that currently hold a lease, whose lease is canceled, // and haven't already been signaled (prevents multiple inc_cancel() for same lease) - if (lease.node && !lease.cancel_signaled && m_search_tree.is_lease_canceled(lease.node, lease.cancel_epoch)) { + if (lease.leased_node && !lease.cancel_signaled && m_search_tree.is_lease_canceled(lease.leased_node, lease.cancel_epoch)) { p.m_workers[worker_id]->cancel_lease(); m_worker_leases[worker_id].cancel_signaled = true; } @@ -831,7 +1142,232 @@ namespace smt { void parallel::batch_manager::backtrack(ast_translation &l2g, unsigned worker_id, expr_ref_vector const &core, node_lease const &lease) { std::scoped_lock lock(mux); - backtrack_unlocked(l2g, worker_id, core, &lease, nullptr); + vector g_core; + for (auto c : core) + g_core.push_back(expr_ref(l2g(c), m)); + + vector targets; + collect_matching_targets_unlocked(lease.leased_node, lease.leased_node->get_literal().get(), g_core, targets); + backtrack_unlocked(l2g, worker_id, core, &lease, targets.empty() ? 
nullptr : &targets); + } + + void parallel::batch_manager::enqueue_core_minimization(ast_translation& l2g, node* source, + expr_ref_vector const& core) { + std::scoped_lock lock(mux); + if (m_state != state::is_running || !p.m_core_minimizer_worker || !source || core.empty()) + return; + if (core.size() <= 1) { + ++m_stats.m_core_min_jobs_skipped; + return; + } + + source = find_core_source_unlocked(l2g, source, core); + if (!source) { + ++m_stats.m_core_min_jobs_skipped; + return; + } + + scoped_ptr job = alloc(core_min_job, m, source); + for (expr* c : core) + job->core.push_back(l2g(c)); + m_core_min_jobs.push_back(job.detach()); + ++m_stats.m_core_min_jobs_enqueued; + m_core_min_cv.notify_one(); + } + + bool parallel::batch_manager::wait_for_core_min_job(ast_translation& g2l, node*& source, + expr_ref_vector& core, reslimit& lim) { + std::unique_lock lock(mux); + m_core_min_cv.wait(lock, [&]() { + return lim.is_canceled() || m_state != state::is_running || !m_core_min_jobs.empty(); + }); + + if (lim.is_canceled() || m_state != state::is_running) + return false; + + unsigned best_idx = select_best_core_min_job_unlocked(); + m_core_min_jobs.swap(best_idx, m_core_min_jobs.size() - 1); + core_min_job* job = m_core_min_jobs.detach_back(); + m_core_min_jobs.pop_back(); + SASSERT(job); + source = job->source; + core.reset(); + for (expr* c : job->core) + core.push_back(g2l(c)); + dealloc(job); + return source != nullptr; + } + + // Given a newly closed node, source, and its core, find the lowest ancestor of source that + // contains a core literal, and return it as the source for the core minimization job + parallel::node* parallel::batch_manager::find_core_source_unlocked( + ast_translation& l2g, node* source, expr_ref_vector const& core) { + if (!source) + return nullptr; + + vector g_core; + for (expr* c : core) + g_core.push_back(expr_ref(l2g(c), m)); + + for (node* cur = source; cur; cur = cur->parent()) { + if (cube_config::literal_is_null(cur->get_literal())) + continue; + if (any_of(g_core, [&](cube_config::literal const& lit) { return lit == cur->get_literal(); })) + return cur; + } + return nullptr; + } + + unsigned parallel::batch_manager::select_best_core_min_job_unlocked() const { + SASSERT(!m_core_min_jobs.empty()); + + unsigned best_idx = 0; + node* best_source = m_core_min_jobs[0]->source; + unsigned best_depth = best_source ? best_source->depth() : 0; + unsigned best_core_size = m_core_min_jobs[0]->core.size(); + + for (unsigned i = 1; i < m_core_min_jobs.size(); ++i) { + core_min_job* job = m_core_min_jobs[i]; + node* job_source = job->source; + unsigned job_depth = job_source ? 
job_source->depth() : 0; + unsigned job_core_size = job->core.size(); + + // rank first by core source node depth (deepest -> shallowest), then by core size (largest -> smallest) + if (job_depth > best_depth || (job_depth == best_depth && job_core_size > best_core_size)) { + best_idx = i; + best_depth = job_depth; + best_core_size = job_core_size; + } + } + return best_idx; + } + + void parallel::batch_manager::publish_minimized_core(ast_translation& l2g, expr_ref_vector const& asms, node* source, + unsigned original_core_size, expr_ref_vector const& minimized_core) { + std::scoped_lock lock(mux); + if (m_state != state::is_running || !source || minimized_core.size() >= original_core_size) { + ++m_stats.m_core_min_jobs_skipped; + return; + } + + vector g_core; + for (expr* c : minimized_core) + g_core.push_back(expr_ref(l2g(c), m)); + + // don't publish a minimized core if the node already has an equal-or-smaller core by the time the minimizer thread finishes + // (e.g. from another thread or from backtracking resolution propagation) + if (source->get_core().size() <= g_core.size()) { + ++m_stats.m_core_min_jobs_skipped; + return; + } + + IF_VERBOSE(1, verbose_stream() << "Batch manager publishing minimized core " + << original_core_size << " -> " << g_core.size() << "\n"); + + if (all_of(g_core, [&](cube_config::literal const& lit) { return asms.contains(lit.get()); })) { + IF_VERBOSE(1, verbose_stream() << "Minimized core removed all path literals, setting UNSAT\n"); + m_state = state::is_unsat; + SASSERT(p.ctx.m_unsat_core.empty()); + for (expr* e : minimized_core) + p.ctx.m_unsat_core.push_back(l2g(e)); + ++m_stats.m_core_min_jobs_published; + ++m_stats.m_core_min_global_unsat; + cancel_background_threads(); + return; + } + + // do not backtrack through the batch manager since this only handles non-closed leases + // and the batch manager also tries to search for external matching targets in the tree + // which is a problem since we must backtrack only on the source node or the core is invalid + m_search_tree.backtrack(source, g_core); + + vector targets; + if (!g_core.empty()) { + collect_matching_targets_unlocked(source, g_core[0].get(), g_core, targets); + for (auto const& target : targets) { + if (!m_search_tree.is_lease_canceled(target.leased_node, target.cancel_epoch)) + m_search_tree.backtrack(target.leased_node, g_core); + } + } + + ++m_stats.m_core_min_jobs_published; + cancel_closed_leases_unlocked(UINT_MAX); + + IF_VERBOSE(2, m_search_tree.display(verbose_stream() << bounded_pp_exprs(minimized_core) << "\n");); + if (m_search_tree.is_closed()) { + IF_VERBOSE(1, verbose_stream() << "Search tree closed by minimized core, setting UNSAT\n"); + m_state = state::is_unsat; + SASSERT(p.ctx.m_unsat_core.empty()); + for (auto e : m_search_tree.get_core_from_root()) + p.ctx.m_unsat_core.push_back(e); + cancel_background_threads(); + } + } + + void parallel::batch_manager::collect_matching_targets_unlocked(node* source, expr* lit, vector const& core, + vector& targets) { + targets.reset(); + if (!lit) + return; + + auto is_ancestor_of = [&](node* ancestor, node* cur) { + if (!ancestor) + return false; + for (node* p = cur; p; p = p->parent()) { + if (p == ancestor) + return true; + } + return false; + }; + + auto path_contains = [&](node* cur, cube_config::literal const& lit) { + for (node* p = cur; p; p = p->parent()) { + if (p->get_literal() == lit) + return true; + } + return false; + }; + + auto path_contains_core = [&](node* cur) { + return all_of(core, [&](cube_config::literal
const& c) { + return path_contains(cur, c); + }); + }; + + ptr_vector matches; + m_search_tree.find_nonclosed_nodes_with_literal(expr_ref(lit, m), matches); + for (node* t : matches) { + if (!t || t == source) + continue; + if (m_search_tree.is_lease_canceled(t, t->get_cancel_epoch())) + continue; + + // When source is provided, keep only external matches. Nodes in the + // same branch are already closed by backtracking on the source node. + if (source && (is_ancestor_of(source, t) || is_ancestor_of(t, source))) + continue; + + // Reusing a conflict on another branch is sound only if + // the path from that node->root contains every literal in the core. + // Matching on the closing literal alone is insufficient: F & a & l + // may be UNSAT while F & c & l is SAT. + if (!path_contains_core(t)) + continue; + + // Keep only highest matching nodes: closing an ancestor also closes + // all of its matching descendants. + bool is_highest_ancestor = true; + for (node* p = t->parent(); p; p = p->parent()) { + if (any_of(targets, [&](node_lease const& target) { return target.leased_node == p; })) { + is_highest_ancestor = false; + break; + } + } + if (!is_highest_ancestor) + continue; + + targets.push_back({ t, t->get_cancel_epoch() }); + } } void parallel::batch_manager::backtrack_unlocked(ast_translation& l2g, unsigned worker_id, expr_ref_vector const& core, @@ -843,37 +1379,34 @@ namespace smt { for (auto c : core) g_core.push_back(expr_ref(l2g(c), m)); - SASSERT((lease != nullptr) != (targets != nullptr)); + SASSERT(lease != nullptr || targets != nullptr); + bool did_backtrack = false; - if (lease) { + if (lease && !m_search_tree.is_lease_canceled(lease->leased_node, lease->cancel_epoch)) { // we close/backtrack regardless of whether this lease is stale or not, as long as the lease isn't canceled // i.e.
worker 1 splits this node, but then worker 2 determines UNSAT --> worker 2 is stale but we still close this node and backtrack - if (m_search_tree.is_lease_canceled(lease->node, lease->cancel_epoch)) - return; - + did_backtrack = true; IF_VERBOSE(1, verbose_stream() << "Batch manager backtracking.\n"); - - release_lease_unlocked(worker_id, lease->node, lease->epoch); - m_search_tree.backtrack(lease->node, g_core); + release_lease_unlocked(worker_id, lease->leased_node); + m_search_tree.backtrack(lease->leased_node, g_core); } - else { - bool has_open_targets = false; + if (targets) { for (auto const& target : *targets) { - if (m_search_tree.is_lease_canceled(target.node, target.cancel_epoch)) + if (m_search_tree.is_lease_canceled(target.leased_node, target.cancel_epoch)) continue; - has_open_targets = true; + did_backtrack = true; IF_VERBOSE(1, verbose_stream() << "Batch manager backtracking external targets.\n"); - m_search_tree.backtrack(target.node, g_core); + m_search_tree.backtrack(target.leased_node, g_core); } - if (!has_open_targets) - return; } + if (!did_backtrack) + return; // terminate on-demand the workers that are currently exploring the now-closed nodes cancel_closed_leases_unlocked(worker_id); - IF_VERBOSE(1, m_search_tree.display(verbose_stream() << bounded_pp_exprs(core) << "\n");); + IF_VERBOSE(2, m_search_tree.display(verbose_stream() << bounded_pp_exprs(core) << "\n");); if (m_search_tree.is_closed()) { IF_VERBOSE(1, verbose_stream() << "Search tree closed, setting UNSAT\n"); m_state = state::is_unsat; @@ -891,31 +1424,31 @@ namespace smt { if (m_state != state::is_running) return; - if (m_search_tree.is_lease_canceled(lease.node, lease.cancel_epoch)) + if (m_search_tree.is_lease_canceled(lease.leased_node, lease.cancel_epoch)) return; expr_ref lit(m), nlit(m); lit = l2g(atom); nlit = mk_not(m, lit); - bool did_split = m_search_tree.try_split(lease.node, lease.epoch, lease.cancel_epoch, lit, nlit, effort); + bool did_split = m_search_tree.try_split(lease.leased_node, lease.cancel_epoch, lit, nlit, effort); - release_lease_unlocked(worker_id, lease.node, lease.epoch); + release_lease_unlocked(worker_id, lease.leased_node); if (did_split) { ++m_stats.m_num_cubes; - m_stats.m_max_cube_depth = std::max(m_stats.m_max_cube_depth, lease.node->depth() + 1); + m_stats.m_max_cube_depth = std::max(m_stats.m_max_cube_depth, lease.leased_node->depth() + 1); IF_VERBOSE(1, verbose_stream() << "Batch manager splitting on literal: " << mk_bounded_pp(lit, m, 3) << "\n"); } } void parallel::batch_manager::release_lease(unsigned worker_id, node_lease const &lease) { std::scoped_lock lock(mux); - release_lease_unlocked(worker_id, lease.node, lease.epoch); + release_lease_unlocked(worker_id, lease.leased_node); } bool parallel::batch_manager::lease_canceled(node_lease const &lease) { std::scoped_lock lock(mux); - return m_state == state::is_running && m_search_tree.is_lease_canceled(lease.node, lease.cancel_epoch); + return m_state == state::is_running && m_search_tree.is_lease_canceled(lease.leased_node, lease.cancel_epoch); } void parallel::batch_manager::collect_clause(ast_translation &l2g, unsigned source_worker_id, expr *clause) { @@ -950,8 +1483,19 @@ namespace smt { } } + void parallel::core_minimizer_worker::collect_shared_clauses() { + expr_ref_vector new_clauses = b.return_shared_clauses(m_g2l, m_shared_clause_limit, UINT_MAX); + // iterate over new clauses and assert them in the local context + for (expr *e : new_clauses) { + ctx->assert_expr(e); + IF_VERBOSE(4, verbose_stream() 
<< "Core minimizer asserting shared clause: " + << mk_bounded_pp(e, m, 3) << "\n";); + } + } + void parallel::batch_manager::collect_backbone_candidates(ast_translation& l2g, bb_candidates& bb_candidates) { std::scoped_lock lock(mux); + bool changed = false; auto find_existing_candidate_idx = [&](expr* e) -> int { for (unsigned i = 0; i < m_bb_candidates.size(); ++i) { @@ -966,11 +1510,10 @@ namespace smt { }; for (auto const& c : bb_candidates) { - if (is_global_backbone_unlocked(l2g, c.lit)) + expr_ref g_lit(l2g(c.lit.get()), m); + if (is_global_backbone_or_negation_unlocked(l2g, c.lit)) continue; - expr* worker_lit = c.lit.get(); - expr_ref g_lit(l2g(worker_lit), m); double age = c.age; int idx = find_existing_candidate_idx(g_lit.get()); @@ -983,48 +1526,37 @@ namespace smt { if (m_bb_candidates.size() < m_max_global_bb_candidates) { m_bb_candidates.push_back(bb_candidate(m, g_lit.get(), age, 1)); + changed = true; continue; } - // Find worst candidate to evict - unsigned worst_idx = 0; - double worst_rank = rank_of(m_bb_candidates[0]); - - for (unsigned i = 1; i < m_bb_candidates.size(); ++i) { - double r = rank_of(m_bb_candidates[i]); - if (r < worst_rank) { - worst_rank = r; - worst_idx = i; - } - } - bb_candidate new_bb_candidate = bb_candidate(m, g_lit.get(), age, 1); - if (rank_of(new_bb_candidate) > worst_rank) - m_bb_candidates[worst_idx] = new_bb_candidate; - } - - if (!m_bb_candidates.empty()) { - std::stable_sort( + auto worst_it = std::min_element( m_bb_candidates.begin(), m_bb_candidates.end(), [&](bb_candidate const& a, bb_candidate const& b) { - return rank_of(a) > rank_of(b); + return rank_of(a) < rank_of(b); + } + ); + if (worst_it != m_bb_candidates.end() && rank_of(new_bb_candidate) > rank_of(*worst_it)) { + *worst_it = new_bb_candidate; // replace worst candidate with new candidate + changed = true; + } + } + + if (changed && !m_bb_candidates.empty()) { + m_bb_candidate_epoch.fetch_add(1, std::memory_order_release); + std::sort( + m_bb_candidates.begin(), + m_bb_candidates.end(), + [&](bb_candidate const& a, bb_candidate const& b) { + return rank_of(a) < rank_of(b); // sort ascending so we can pop off the best candidates from the end in O(1) in the bb threads } ); m_bb_cv.notify_all(); } } - parallel::bb_candidates parallel::batch_manager::return_global_bb_candidates(ast_translation& g2l) { - bb_candidates bb_candidates_local; - std::scoped_lock lock(mux); - for (auto const& gc : m_bb_candidates) { - expr_ref l_lit(g2l(gc.lit.get()), g2l.to()); - bb_candidates_local.push_back(bb_candidate(g2l.to(), l_lit, gc.age, gc.hits)); - } - return bb_candidates_local; - } - bool parallel::batch_manager::wait_for_backbone_job(unsigned bb_thread_id, ast_translation& g2l, bb_candidates& out, reslimit& lim) { out.reset(); std::unique_lock lock(mux); @@ -1112,18 +1644,20 @@ namespace smt { if (!e) continue; - // don't split on a backbone + // don't split on a backbone or its negation if (m_config.m_global_backbones) { - expr_ref e_ref(e, m); - expr_ref neg_e_ref(mk_not(e_ref), m); - if (b.is_global_backbone(m_l2g, e_ref) || b.is_global_backbone(m_l2g, neg_e_ref)) + if (b.is_global_backbone_or_negation(m_l2g, e)) continue; } - double new_score = ctx->m_lit_scores[0][v] * ctx->m_lit_scores[1][v]; + // Lightweight Proof Skeleton Approach + // double new_score = ctx->m_lit_scores[0][v] * ctx->m_lit_scores[1][v]; - ctx->m_lit_scores[0][v] /= 2; - ctx->m_lit_scores[1][v] /= 2; + // ctx->m_lit_scores[0][v] /= 2; + // ctx->m_lit_scores[1][v] /= 2; + + // VSIDS Approach + double new_score = 
ctx->get_activity(v); if (new_score > score || !result || (new_score == score && m_rand(++n) == 0)) { score = new_score; @@ -1216,10 +1750,9 @@ namespace smt { if (!t) return false; - IF_VERBOSE(1, m_search_tree.display(verbose_stream()); verbose_stream() << "\n";); + IF_VERBOSE(2, m_search_tree.display(verbose_stream()); verbose_stream() << "\n";); - lease.node = t; - lease.epoch = t->epoch(); + lease.leased_node = t; lease.cancel_epoch = t->get_cancel_epoch(); if (id >= m_worker_leases.size()) m_worker_leases.resize(id + 1); @@ -1244,25 +1777,45 @@ namespace smt { m_bb_last_batch_processed.reset(); m_bb_last_batch_processed.resize(m_num_global_bb_threads); m_bb_candidates.reset(); + m_global_backbones.reset(); + m_bb_candidate_epoch.store(0, std::memory_order_release); + m_core_min_jobs.reset(); m_search_tree.reset(); m_search_tree.set_effort_unit(initial_max_thread_conflicts); m_worker_leases.reset(); m_worker_leases.resize(p.m_workers.size()); + + smt_parallel_params pp(p.ctx.m_params); + m_ablate_backtracking = pp.ablate_backtracking(); } void parallel::batch_manager::collect_statistics(::statistics &st) const { st.update("parallel-num_cubes", m_stats.m_num_cubes); st.update("parallel-max-cube-size", m_stats.m_max_cube_depth); + st.update("bb-backbones-found", m_stats.m_backbones_found); + st.update("parallel-core-min-jobs-enqueued", m_stats.m_core_min_jobs_enqueued); + st.update("parallel-core-min-jobs-published", m_stats.m_core_min_jobs_published); + st.update("parallel-core-min-jobs-skipped", m_stats.m_core_min_jobs_skipped); + st.update("parallel-core-min-global-unsat", m_stats.m_core_min_global_unsat); } lbool parallel::operator()(expr_ref_vector const &asms) { smt_parallel_params pp(ctx.m_params); + unsigned num_global_bb_batch_threads = pp.num_global_bb_batch_threads(); + if (num_global_bb_batch_threads > 2) + throw default_exception("smt_parallel.num_global_bb_batch_threads must be 0, 1, or 2"); unsigned num_workers = std::min((unsigned)std::thread::hardware_concurrency(), ctx.get_fparams().m_threads); unsigned num_sls_threads = (pp.sls() ? 1 : 0); - unsigned num_global_bb_threads = pp.num_global_bb_threads(); - unsigned total_threads = num_workers + num_sls_threads + num_global_bb_threads; + unsigned num_core_min_threads = (pp.core_minimize() ? 1 : 0); + unsigned num_global_bb_fl_threads = pp.num_global_bb_fl_threads(); + if (num_global_bb_fl_threads > 2) + throw default_exception("smt_parallel.num_global_bb_fl_threads must be 0, 1, or 2"); + if (num_global_bb_fl_threads > 0 && num_global_bb_batch_threads > 0) + throw default_exception("smt_parallel.num_global_bb_fl_threads and smt_parallel.num_global_bb_batch_threads cannot both be enabled"); + unsigned num_global_bb_threads = num_global_bb_fl_threads > 0 ? 
num_global_bb_fl_threads : num_global_bb_batch_threads; + unsigned total_threads = num_workers + num_sls_threads + num_core_min_threads + num_global_bb_threads; IF_VERBOSE(1, verbose_stream() << "Parallel SMT with " << total_threads << " threads\n";); ast_manager &m = ctx.m; @@ -1276,13 +1829,14 @@ namespace smt { ~scoped_clear() { p.m_workers.reset(); p.m_sls_worker = nullptr; + p.m_core_minimizer_worker = nullptr; p.m_global_backbones_workers.reset(); } }; scoped_clear clear(*this); m_workers.reset(); - + m_core_minimizer_worker = nullptr; scoped_limits sl(m.limit()); flet _nt(ctx.m_fparams.m_threads, 1); SASSERT(num_workers > 1); @@ -1295,6 +1849,10 @@ namespace smt { m_sls_worker = alloc(sls_worker, *this); sl.push_child(&(m_sls_worker->limit())); } + if (pp.core_minimize()) { + m_core_minimizer_worker = alloc(core_minimizer_worker, *this, asms); + sl.push_child(&(m_core_minimizer_worker->limit())); + } for (unsigned i = 0; i < num_global_bb_threads; ++i) { auto *w = alloc(backbones_worker, i, *this, asms); m_global_backbones_workers.push_back(w); @@ -1302,6 +1860,7 @@ namespace smt { } IF_VERBOSE(1, verbose_stream() << "Launched " << m_workers.size() << " CDCL threads, " << (m_sls_worker ? 1 : 0) << " SLS threads, " + << (m_core_minimizer_worker ? 1 : 0) << " core minimizer threads, " << m_global_backbones_workers.size() << " global backbone threads.\n";); m_batch_manager.initialize(num_global_bb_threads); @@ -1313,6 +1872,8 @@ namespace smt { threads[thread_idx++] = std::thread([&, w]() { w->run(); }); if (m_sls_worker) threads[thread_idx++] = std::thread([&]() { m_sls_worker->run(); }); + if (m_core_minimizer_worker) + threads[thread_idx++] = std::thread([&]() { m_core_minimizer_worker->run(); }); for (auto* w : m_global_backbones_workers) threads[thread_idx++] = std::thread([&, w]() { w->run(); }); @@ -1326,6 +1887,8 @@ namespace smt { m_batch_manager.collect_statistics(ctx.m_aux_stats); if (m_sls_worker) m_sls_worker->collect_statistics(ctx.m_aux_stats); + if (m_core_minimizer_worker) + m_core_minimizer_worker->collect_statistics(ctx.m_aux_stats); for (auto* bb_w : m_global_backbones_workers) bb_w->collect_statistics(ctx.m_aux_stats); diff --git a/src/smt/smt_parallel.h b/src/smt/smt_parallel.h index fa45dc6f6..64fcc1186 100644 --- a/src/smt/smt_parallel.h +++ b/src/smt/smt_parallel.h @@ -21,6 +21,7 @@ Revision History: #include "smt/smt_context.h" #include "util/search_tree.h" #include "ast/sls/sls_smt_solver.h" +#include #include #include #include @@ -36,6 +37,8 @@ namespace smt { class parallel { context& ctx; + class core_minimizer_worker; + using node = search_tree::node; struct shared_clause { unsigned source_worker_id; @@ -52,11 +55,7 @@ namespace smt { using bb_candidates = vector; struct node_lease { - search_tree::node* node = nullptr; - // Version counter for structural mutations of this node (e.g., split/close). - // Used to detect stale leases: if a worker's lease.epoch != node.epoch, - // the node has changed since it was acquired and must not be mutated. - unsigned epoch = 0; + node* leased_node = nullptr; // Cancellation generation counter for this node/subtree. 
// Incremented when the node is closed; used to signal that all @@ -82,13 +81,22 @@ namespace smt { struct stats { unsigned m_max_cube_depth = 0; unsigned m_num_cubes = 0; + unsigned m_backbones_found = 0; + unsigned m_core_min_jobs_enqueued = 0; + unsigned m_core_min_jobs_published = 0; + unsigned m_core_min_jobs_skipped = 0; + unsigned m_core_min_global_unsat = 0; + }; + struct core_min_job { + node* source = nullptr; + expr_ref_vector core; + core_min_job(ast_manager& m, node* source) : source(source), core(m) {} }; ast_manager& m; parallel& p; std::mutex mux; state m_state = state::is_running; stats m_stats; - using node = search_tree::node; search_tree::tree m_search_tree; vector m_worker_leases; @@ -100,7 +108,8 @@ namespace smt { bb_candidates m_bb_candidates; unsigned m_max_global_bb_candidates = 100; unsigned m_bb_batch_size = 150; - expr_ref_vector m_global_backbones; + obj_hashtable m_global_backbones; + std::atomic m_bb_candidate_epoch = 0; // Backbone job queue std::condition_variable m_bb_cv; @@ -110,6 +119,12 @@ namespace smt { unsigned_vector m_bb_last_batch_processed; unsigned m_bb_cancel_epoch = 0; // When a backbone worker finishes early, it increments m_bb_cancel_epoch and notifies all + // Core minimization job queue + std::condition_variable m_core_min_cv; + scoped_ptr_vector m_core_min_jobs; + + bool m_ablate_backtracking = false; + // called from batch manager to cancel other workers if we've reached a verdict void cancel_workers() { IF_VERBOSE(1, verbose_stream() << "Canceling workers\n"); @@ -137,22 +152,36 @@ namespace smt { cancel_backbones_worker(); m_bb_cv.notify_all(); } + if (p.m_core_minimizer_worker) { + p.m_core_minimizer_worker->cancel(); + m_core_min_cv.notify_all(); + } } // to avoid deadlock bool is_global_backbone_unlocked(ast_translation& l2g, expr* bb_cand) { expr_ref cand(l2g(bb_cand), l2g.to()); - return any_of(m_global_backbones, [&](expr *bb) { return bb == cand.get(); }); + return m_global_backbones.contains(cand.get()); + } + + bool is_global_backbone_or_negation_unlocked(ast_translation& l2g, expr* bb_cand) { + expr_ref cand(l2g(bb_cand), l2g.to()); + expr_ref neg_cand(mk_not(l2g.to(), cand), l2g.to()); + return m_global_backbones.contains(cand.get()) || m_global_backbones.contains(neg_cand.get()); } void backtrack_unlocked(ast_translation& l2g, unsigned worker_id, expr_ref_vector const& core, node_lease const* lease = nullptr, vector const* targets = nullptr); void collect_clause_unlocked(ast_translation &l2g, unsigned source_worker_id, expr *clause); - void release_lease_unlocked(unsigned worker_id, node* n, unsigned epoch); + void release_lease_unlocked(unsigned worker_id, node* n); void cancel_closed_leases_unlocked(unsigned source_worker_id); + void collect_matching_targets_unlocked(node* source, expr* lit, vector const& core, + vector& targets); + node* find_core_source_unlocked(ast_translation& l2g, node* source, expr_ref_vector const& core); + unsigned select_best_core_min_job_unlocked() const; public: - batch_manager(ast_manager& m, parallel& p) : m(m), p(p), m_search_tree(expr_ref(m)), m_global_backbones(m) { } + batch_manager(ast_manager& m, parallel& p) : m(m), p(p), m_search_tree(expr_ref(m)) { } void initialize(unsigned num_global_bb_threads, unsigned initial_max_thread_conflicts = 1000); // TODO: pass in from worker config @@ -163,12 +192,30 @@ namespace smt { void collect_statistics(::statistics& st) const; void collect_backbone_candidates(ast_translation& l2g, bb_candidates& bb_candidates); - bool 
collect_global_backbone(ast_translation& l2g, expr_ref const& backbone); + void collect_backbone_evidence(ast_translation& l2g, expr* lit, double delta); + bool collect_global_backbone(ast_translation& l2g, expr_ref const& backbone, unsigned source_worker_id = UINT_MAX); bool wait_for_backbone_job(unsigned bb_thread_id, ast_translation& g2l, vector& out, reslimit& lim); - bb_candidates return_global_bb_candidates(ast_translation& g2l); + bool has_new_backbone_candidates(unsigned epoch) { + return m_bb_candidate_epoch.load(std::memory_order_acquire) != epoch; + } + unsigned get_bb_candidate_epoch() const { + return m_bb_candidate_epoch.load(std::memory_order_acquire); + } + expr_ref_vector get_global_backbones_snapshot(ast_translation& g2l) { + std::scoped_lock lock(mux); + expr_ref_vector snapshot(g2l.to()); + for (expr* gb : m_global_backbones) + snapshot.push_back(g2l(gb)); + return snapshot; + } bool get_cube(ast_translation& g2l, unsigned id, expr_ref_vector& cube, bool is_first_run, node_lease& lease); void backtrack(ast_translation& l2g, unsigned worker_id, expr_ref_vector const& core, node_lease const& lease); + void enqueue_core_minimization(ast_translation& l2g, node* source, expr_ref_vector const& core); + bool wait_for_core_min_job(ast_translation& g2l, node*& source, + expr_ref_vector& core, reslimit& lim); + void publish_minimized_core(ast_translation& l2g, expr_ref_vector const& asms, node* source, + unsigned original_core_size, expr_ref_vector const& minimized_core); void try_split(ast_translation& l2g, unsigned worker_id, node_lease const& lease, expr* atom, unsigned effort); void release_lease(unsigned worker_id, node_lease const& lease); bool lease_canceled(node_lease const& lease); @@ -178,9 +225,9 @@ namespace smt { lbool get_result() const; - bool is_global_backbone(ast_translation& l2g, expr* bb_cand) { + bool is_global_backbone_or_negation(ast_translation& l2g, expr* bb_cand) { std::scoped_lock lock(mux); - return is_global_backbone_unlocked(l2g, bb_cand); + return is_global_backbone_or_negation_unlocked(l2g, bb_cand); } void cancel_current_backbone_batch() { @@ -207,14 +254,15 @@ namespace smt { double m_max_conflict_mul = 1.5; bool m_inprocessing = false; bool m_global_backbones = false; + bool m_local_backbones = false; bool m_sls = false; unsigned m_inprocessing_delay = 1; unsigned m_max_cube_depth = 20; unsigned m_max_conflicts = UINT_MAX; + bool m_core_minimize = false; + bool m_ablate_backtracking = false; }; - using node = search_tree::node; - unsigned id; // unique identifier for the worker parallel& p; batch_manager& b; @@ -279,13 +327,42 @@ namespace smt { } }; + class core_minimizer_worker { + batch_manager &b; + ast_manager m; + expr_ref_vector asms; + smt_params m_smt_params; + scoped_ptr ctx; + ast_translation m_g2l, m_l2g; + + unsigned m_num_core_minimize_calls = 0; + unsigned m_num_core_minimize_undef = 0; + unsigned m_num_core_minimize_refined = 0; + unsigned m_num_core_minimize_lits_removed = 0; + unsigned m_num_core_minimize_found_sat = 0; + unsigned m_core_minimize_conflict_budget = 5000; + unsigned m_shared_clause_limit = 0; + + void minimize_unsat_core(expr_ref_vector& core); + void collect_shared_clauses(); + + public: + core_minimizer_worker(parallel& p, expr_ref_vector const& _asms); + void run(); + void cancel(); + void collect_statistics(::statistics& st) const; + reslimit& limit() { return m.limit(); } + }; + class backbones_worker { struct stats { unsigned m_batches_total = 0; unsigned m_candidates_total = 0; unsigned 
m_singleton_backbones = 0; unsigned m_backbones_detected = 0; - unsigned m_backbones_found = 0; + unsigned m_internal_backbones_found = 0; + unsigned m_retry_backbones_found = 0; + unsigned m_bb_retries = 0; unsigned m_fallback_singleton_checks = 0; unsigned m_fallback_reason_chunk_exhausted = 0; unsigned m_fallback_reason_undef = 0; @@ -308,16 +385,17 @@ namespace smt { ast_translation m_g2l, m_l2g; unsigned m_bb_chunk_size = 20; unsigned m_bb_conflicts_per_chunk = 1000; + uint_set m_known_units; bool m_use_failed_literal_test; stats m_stats; bb_mode m_mode; - unsigned m_num_global_bb_threads = 1; // used to toggle behavior when testing bb candidates unsigned m_shared_clause_limit = 0; // remembers the index into shared_clause_trail marking the boundary between "old" and "new" clauses to share - bool check_backbone(expr* bb_candidate); + unsigned m_shared_units_prefix = 0; + unsigned m_num_initial_atoms = 0; + bool try_get_unit_backbone(expr* candidate, expr_ref& backbone); void run_batch_mode(); void run_failed_literal_mode(); - lbool check_sat(expr_ref_vector const &asms); - lbool probe_literal(bool_var v, uint_set& units, expr *e); + lbool probe_literal(bool_var v, expr *e, bool is_retry); public: backbones_worker(unsigned id, parallel &p, expr_ref_vector const &_asms); void cancel(); @@ -330,6 +408,7 @@ namespace smt { batch_manager m_batch_manager; scoped_ptr_vector m_workers; scoped_ptr m_sls_worker; + scoped_ptr m_core_minimizer_worker; scoped_ptr_vector m_global_backbones_workers; public: diff --git a/src/util/search_tree.h b/src/util/search_tree.h index 2c42657d3..85294d550 100644 --- a/src/util/search_tree.h +++ b/src/util/search_tree.h @@ -50,7 +50,6 @@ namespace search_tree { unsigned m_effort_spent = 0; unsigned m_round_max_effort = 0; unsigned m_active_workers = 0; - unsigned m_epoch = 0; unsigned m_cancel_epoch = 0; public: @@ -78,7 +77,6 @@ namespace search_tree { SASSERT(!m_right); m_left = alloc(node, a, this); m_right = alloc(node, b, this); - inc_epoch(); } node* left() const { return m_left; } @@ -150,12 +148,6 @@ namespace search_tree { m_round_max_effort = effort; m_effort_spent += m_round_max_effort; } - unsigned epoch() const { - return m_epoch; - } - void inc_epoch() { - ++m_epoch; - } unsigned get_cancel_epoch() const { return m_cancel_epoch; } @@ -242,7 +234,7 @@ namespace search_tree { count_active_nodes(cur->right()); } - // Find the shallowest leaf node that at least 1 worker has visited + // Find the depth of the shallowest leaf node that at least 1 worker has timed out on // Used for tree expansion policy void find_shallowest_timed_out_leaf_depth(node* cur, unsigned& best_depth) const { if (!cur || cur->get_status() == status::closed) @@ -255,8 +247,8 @@ namespace search_tree { find_shallowest_timed_out_leaf_depth(cur->right(), best_depth); } - bool should_split(node* n, unsigned epoch) { - if (!is_lease_valid(n, epoch) || !n->is_leaf()) + bool should_split(node* n) { + if (!n || n->get_status() != status::active || !n->is_leaf()) return false; unsigned num_active_nodes = count_active_nodes(m_root.get()); @@ -348,7 +340,6 @@ namespace search_tree { void close(node *n, vector const &C) { if (!n || n->get_status() == status::closed) return; - n->inc_epoch(); n->inc_cancel_epoch(); n->set_status(status::closed); n->set_core(C); @@ -373,8 +364,10 @@ namespace search_tree { node *p = n->parent(); - // The conflict does NOT depend on the decision literal at node n, so n’s split literal is irrelevant to this conflict - // thus the entire subtree under n is closed 
regardless of the split, so the conflict should be attached higher, at the nearest ancestor that does participate + // The conflict does NOT depend on the decision literal at node n, so n’s decision literal is irrelevant to this conflict + // thus the entire subtree under n is closed, so the conflict should be attached higher, at the nearest ancestor that does participate + // NOTE: I think this is dead code because the backtrack function already walks up to the nearest ancestor whose literal is in the conflict, which is the only place where this is called + // Keep for now since it does generalize this function to be used for arbitrary conflict attachment if (p && all_of(C, [n](auto const &l) { return l != n->get_literal(); })) { close_with_core(p, C); return; @@ -451,7 +444,7 @@ namespace search_tree { // On timeout, either expand the current leaf or reopen the node for a // later revisit, depending on the tree-expansion heuristic. - bool try_split(node *n, unsigned epoch, unsigned cancel_epoch, literal const &a, literal const &b, unsigned effort) { + bool try_split(node *n, unsigned cancel_epoch, literal const &a, literal const &b, unsigned effort) { if (is_lease_canceled(n, cancel_epoch)) return false; @@ -460,7 +453,7 @@ namespace search_tree { n->update_round_max_effort(effort); bool did_split = false; - if (should_split(n, epoch)) { + if (should_split(n)) { n->split(a, b); did_split = true; } @@ -494,6 +487,8 @@ namespace search_tree { // Walk upward to find the nearest ancestor whose decision participates in the conflict while (n) { + // Does the UNSAT core contain the decision literal at node n? + // If yes, i.e. if the core contains n->literal, then the conflict depends on the decision made at node n. if (any_of(conflict, [&](auto const &a) { return a == n->get_literal(); })) { // close the subtree under n (preserves core attached to n), and attempt to resolve upwards close_with_core(n, conflict); @@ -532,7 +527,6 @@ namespace search_tree { find_nonclosed_nodes_with_literal_rec(m_root.get(), lit, out); } - private: void find_nonclosed_nodes_with_literal_rec(node* n, literal const& lit, ptr_vector>& out) { if (!n) return; @@ -544,17 +538,12 @@ namespace search_tree { find_nonclosed_nodes_with_literal_rec(n->right(), lit, out); } - public: void dec_active_workers(node* n) { if (!n) return; n->dec_active_workers(); } - bool is_lease_valid(node* n, unsigned epoch) const { - return n && n->get_status() == status::active && n->epoch() == epoch; - } - bool is_lease_canceled(node* n, unsigned cancel_epoch) const { return !n || n->get_status() == status::closed || n->get_cancel_epoch() != cancel_epoch; }
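// ------------------------------------------------------------------------
// A minimal illustrative sketch of the lease/cancel-epoch protocol used by
// node_lease and is_lease_canceled above. The types toy_node and toy_lease
// are hypothetical stand-ins, not the real search_tree classes: a worker
// snapshots the node's cancel epoch when it acquires a lease; closing the
// node bumps the epoch, so any lease taken earlier reads as canceled.
#include <cassert>

struct toy_node {
    bool     closed = false;
    unsigned cancel_epoch = 0;
    void close() { closed = true; ++cancel_epoch; }
};

struct toy_lease {
    toy_node* leased_node = nullptr;
    unsigned  cancel_epoch = 0;      // epoch observed at acquisition time
};

inline toy_lease acquire(toy_node& n) { return { &n, n.cancel_epoch }; }

inline bool is_lease_canceled(toy_lease const& l) {
    return !l.leased_node || l.leased_node->closed ||
           l.leased_node->cancel_epoch != l.cancel_epoch;
}

int main() {
    toy_node n;
    toy_lease l = acquire(n);
    assert(!is_lease_canceled(l));   // fresh lease is still valid
    n.close();                       // another worker closes the leased node
    assert(is_lease_canceled(l));    // the stale lease is now detected
}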
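// ------------------------------------------------------------------------
// A minimal illustrative sketch of the epoch-polling idiom behind
// m_bb_candidate_epoch / has_new_backbone_candidates in the diff. The names
// candidate_pool and worker_poll are hypothetical: a writer bumps an atomic
// epoch with release ordering after mutating the shared candidate list, and
// a reader compares an acquire-loaded epoch against the last epoch it saw
// before taking the lock to refresh its snapshot.
#include <atomic>
#include <mutex>
#include <string>
#include <vector>

struct candidate_pool {
    std::mutex mux;
    std::vector<std::string> candidates;   // stand-in for the bb_candidates vector
    std::atomic<unsigned> epoch{0};

    void add(std::string lit) {
        std::scoped_lock lock(mux);
        candidates.push_back(std::move(lit));
        epoch.fetch_add(1, std::memory_order_release);   // publish the change
    }
    bool has_new(unsigned seen) const {
        return epoch.load(std::memory_order_acquire) != seen;
    }
};

// A worker remembers the epoch of its last snapshot and only takes the lock
// to refresh its local copy when the epoch has moved.
void worker_poll(candidate_pool& pool, unsigned& seen_epoch,
                 std::vector<std::string>& local_snapshot) {
    if (!pool.has_new(seen_epoch))
        return;                                          // nothing new, skip the lock
    std::scoped_lock lock(pool.mux);
    local_snapshot = pool.candidates;                    // consistent snapshot under the lock
    seen_epoch = pool.epoch.load(std::memory_order_acquire);
}

int main() {
    candidate_pool pool;
    unsigned seen = 0;
    std::vector<std::string> snapshot;
    pool.add("x1");
    worker_poll(pool, seen, snapshot);                   // picks up the new candidate
    worker_poll(pool, seen, snapshot);                   // no change, returns immediately
}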