mirror of https://github.com/Z3Prover/z3 synced 2026-05-15 22:55:33 +00:00

Final version of parallel architecture for FMCAD26 submission (#9476)

* Final version of parallel architecture for FMCAD26 submission (#9475)

* setting up new backbone experiment

* fix phase scores bug

* debug crash from negated atoms

* backbone thread/global backbones in progress, does NOT compile yet

* debug, still need to add backbones worker as a new thread

* setting up complicated condition variable thing for backbones worker thread

* debug

* debug lock contention

* it's a little messy, but change how i'm checking backbones by initiating with batch check

* don't split on global backbones, share global backbones once detected. still need to prune search tree with backbones

* close global backbone branches in search tree

* fix backbone ranking (take average of bb age over cubes and incorporate hits/num cubes the bb appears in)
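
  One way to combine these two signals in an illustrative C++ snippet (the exact weighting here is an assumption, not the implemented formula):

    struct bb_stats {
        unsigned total_age = 0;  // sum of ages over the cubes the candidate appears in
        unsigned hits      = 0;  // how many times the candidate was hit/confirmed
        unsigned num_cubes = 0;  // number of cubes the candidate appears in
    };

    double bb_rank(bb_stats const& s) {
        if (s.num_cubes == 0)
            return 0.0;
        double avg_age  = double(s.total_age) / s.num_cubes;
        double hit_rate = double(s.hits) / s.num_cubes;
        return hit_rate / (1.0 + avg_age);   // prefer frequently hit, recently seen candidates
    }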

* add stats to backbone experiment

* gate the backbones experiment by local vs global

* update stats and fix bug where unsat core size=1 means global backbone

* phase negation ablation

* unforce phase ablation

* reset ablations

* add todo notes

* fix backbone aging

* first draft of Janota Alg 7

* process exactly 10 bb candidates in each batch

* fixing the Janota algorithm

* add backbone stats for Janota algorithm
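
  For orientation, the Janota-style chunk test driving these entries can be sketched against the public z3 C++ API (an illustrative re-implementation, not the internal backbones_worker code; the chunk size of 10 mirrors the entry above, and candidates are assumed to be literals true in some model of the formula):

    #include "z3++.h"
    #include <algorithm>
    #include <vector>

    // Chunk-based backbone detection (cf. Janota et al.), illustrative only.
    std::vector<z3::expr> chunked_backbones(z3::solver& s, std::vector<z3::expr> candidates,
                                            unsigned chunk_size = 10) {
        std::vector<z3::expr> backbones;
        while (!candidates.empty()) {
            unsigned n = std::min<unsigned>(chunk_size, (unsigned)candidates.size());
            // Assert the negation of the whole chunk: F /\ (!l1 \/ ... \/ !ln)
            z3::expr_vector negs(s.ctx());
            for (unsigned i = 0; i < n; ++i)
                negs.push_back(!candidates[i]);
            s.push();
            s.add(z3::mk_or(negs));
            z3::check_result r = s.check();
            if (r == z3::unsat) {
                // No model flips any literal in the chunk: all of them are backbones.
                for (unsigned i = 0; i < n; ++i)
                    backbones.push_back(candidates[i]);
                s.pop();
                candidates.erase(candidates.begin(), candidates.begin() + n);
            }
            else if (r == z3::sat) {
                // The model falsifies at least one chunk literal; prune every candidate it falsifies.
                z3::model m = s.get_model();
                s.pop();
                std::vector<z3::expr> kept;
                for (auto& l : candidates)
                    if (m.eval(l, true).is_true())
                        kept.push_back(l);
                candidates.swap(kept);
            }
            else { // unknown: give up on the remaining candidates
                s.pop();
                break;
            }
        }
        return backbones;
    }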

* fix bug about global backbones not being checked unless local is also true

* hopefully fix bug about closing global backbones in search tree

* fix another bug in janota alg

* report random seed for debug

* print random seed for debug

* refactor janota alg code, still can't repro the crash

* fix some bugs in the janota algorithm

* try to fix weird memory leak thing with ramon/linux

* revert fix, it didn't work

* add second backbones thread

* increase chunk size when undef

* fix how the 2 backbone threads work on batches (they each race to finish the same batch). this was very complicated to code due to thread synchronization and while it runs there may be bugs

* update how we report stats for backbones

* first draft of doing the bb threads in neg and pos mode, needs revising

* fix some bugs in the positive version of the bb check, still need to review

* debug some more things in the positive bb worker

* keep bb candidates sorted, increase batch and chunk size

* try to resolve a couple of bugs

* fix very bad bug about backbones workers not doing anything

* ablate positive backbone thread

* fix how we record backbones in positive mode (shouldn't impact previous run)

* clarify code about adding found backbones

* add back the positive bb thread

* try to fix the random segfault bug + ablate the positive bb thread again

* clean up logs

* share clauses with bb threads

* fixes

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* resolve deadlock

* add comment about SAT bb case

* todo comments

* complete TODOs in code, still need to debug bb threads

* debug bb threads, add bb_positive thread back in

* ablate bb_positive thread

* style

* configure num bb threads as param

* enable sat and unsat mode

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* updates

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove while true

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* updates

* try to fix rewriter_exception bug

* possibly reduce code under lock when only 1 bb thread

* add some copilot-suggested optimizations

* add copilot suggestions to fix condition variable synchronization with bb threads

* revert changes that are too messy with the code

* ablate collect clauses

* ablate condition variable logic changes

* ablate reset batch

* revert ablation

* remove m_batch_in_progress that makes the bb threads wait until both have exited the batch after one signals cancel (can be long if one is stuck in ctx check)

* sharing theory lemmas

* finish setup for search tree thread modes, and fix local bb setup to pull from the global pool

* variable renames

* update bb hyperparams after copilot (hopefully??) ran tuning experiments

* fix possible AST manager bug

* ablate collect clauses

* remove bb collect shared clauses

* fix local bb experiment bug and reinstate collect clauses for global bb

* local bb cands are thread-local ablation

* remove thread-local local bb ablation

* fix bug in nonthread-local bb experiment

* fix more nonthreadlocal bb bugs

* try to fix local bb bug

* AST manager mismatch bugfix

* attempt to fix another canonicalization bug

* try another bugfix

* try another bugfix

* try yet another bugfix

* thread local bb ablation

* ablate force phase

* ablate set activity

* undo ablations since apparently it's not forcing phase or boosting activities

* remove old experiments

* try guarding m_birthdate size

* try to fix several bugs including with m_birthdate initialization and how we're storing original phases

* one more bugfix

* remove local bb experiment after negative signals on experiments, and change bb ranking to VSIDS scores as opposed to phase

* select bb polarity based on phase, not VSIDS

* first attempt with codex. Codex notes:
What changed:

  - Each tree node now tracks:
      - active worker count
      - lease epoch
      - cancel epoch
  - get_cube() now hands each worker an explicit lease: (node, epoch, cancel_epoch).
  - try_split() and backtrack() now operate on that lease, and the batch manager releases the worker’s lease under the tree lock before mutating the
    node.
  - If another worker closes the leased node or subtree, the batch manager cancels only the workers whose current leased nodes are now closed.
  - Workers detect canceled leases after check(), reset their local cancel flag, abandon the stale lease, and continue instead of turning that into a
    global exception.
  - The “reopen immediately into the open queue” policy is preserved. I did not add a barrier waiting for all workers on a node to finish.
  - Active-worker accounting is now separate from the open/active/closed scheduling status, so reopening a node no longer erases the fact that other
    workers are still on it.

  I also updated search_tree bookkeeping so:

  - closure bumps node cancel/lease epochs
  - active-node counting uses actual active-worker presence, not just status == active
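
  For reference, the lease bookkeeping above can be condensed into a small illustrative C++ sketch (type and member names are simplified stand-ins for the real search_tree / smt_parallel code, and the batch-manager lock is omitted):

    struct tree_node {
        unsigned active_workers = 0;  // workers currently holding a lease on this node
        unsigned epoch          = 0;  // bumped on structural changes (split/close)
        unsigned cancel_epoch   = 0;  // bumped when the node (or its subtree) is closed
        bool     closed         = false;
    };

    struct node_lease {
        tree_node* node = nullptr;
        unsigned   epoch = 0;         // snapshots taken when the lease was handed out
        unsigned   cancel_epoch = 0;
    };

    // get_cube(): hand the worker an explicit lease (node, epoch, cancel_epoch).
    node_lease acquire_lease(tree_node& n) {
        ++n.active_workers;
        return { &n, n.epoch, n.cancel_epoch };
    }

    // Stale lease: the node was split or closed since the lease was acquired,
    // so the worker must not mutate it structurally.
    bool lease_stale(node_lease const& l) {
        return !l.node || l.node->epoch != l.epoch;
    }

    // Canceled lease: the leased node (or its subtree) was closed. The worker
    // detects this after check(), abandons the stale cube and continues with a
    // new one instead of turning the situation into a global exception.
    bool lease_canceled(node_lease const& l) {
        return !l.node || l.node->closed || l.node->cancel_epoch != l.cancel_epoch;
    }

    // Closing a node cancels exactly the workers whose leased node is now closed.
    void close_node(tree_node& n) {
        n.closed = true;
        ++n.epoch;
        ++n.cancel_epoch;
    }

    // Releasing a lease only decrements the active-worker count; scheduling
    // status (open/active/closed) is tracked separately, so reopening a node
    // does not erase the fact that other workers are still on it.
    void release_lease(node_lease const& l) {
        if (l.node && l.node->active_workers > 0)
            --l.node->active_workers;
    }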

* fix smts bugfix git merge issues with backtrack

* fix(parallel-smt): gate split/backtrack by lease epoch

What it changes:

  - util/search_tree.h
      - bumps node epoch on split
      - threads epoch through should_split(...) and try_split(...)
      - always records effort, but only split/reopen if the lease epoch still matches
  - smt/smt_parallel.cpp
      - requires is_lease_valid(..., lease.epoch) before backtrack(...)
      - passes lease.epoch into m_search_tree.try_split(...)

* clean up code and add some comments

* fix bug about backtracking condition being too strict: The epoch guard should not block backtrack(...) the same way it blocks try_split(...). A stale worker that proves UNSAT for n should still be able to
  close n, and that closure should then cancel the other workers on n and its subtree.

  I changed smt/smt_parallel.cpp accordingly:

  - try_split(...) still uses epoch to reject stale structural splits
  - backtrack(...) no longer requires is_lease_valid(..., epoch); it only requires that the lease is not already canceled

  So the intended asymmetry is now restored:

  - stale split: reject
  - stale unsat/backtrack: allow closure, then cancel affected workers
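
  A compact sketch of the intended asymmetry (again with illustrative stand-in types, not the actual smt_parallel code):

    struct node_lease { bool stale; bool canceled; };  // stale: node changed since leased; canceled: node/subtree closed

    // Stale structural split: reject. Splitting a node whose shape already
    // changed under us would corrupt the search tree.
    bool try_split(node_lease const& lease /*, split literal, effort ... */) {
        if (lease.stale || lease.canceled)
            return false;
        // ... perform the split under the tree lock ...
        return true;
    }

    // Stale unsat/backtrack: allow. An UNSAT core for node n stays valid even
    // if the lease went stale, so close n (which then cancels the other workers
    // on its subtree), unless the lease was already canceled, i.e. n is closed.
    void backtrack(node_lease const& lease /*, unsat core ... */) {
        if (lease.canceled)
            return;
        // ... close the node with the core and cancel affected workers ...
    }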

* ablate to no backtracking on stale leases

* fix merge

* revert codex change about exception handling

* fix linux bugs

* ablate backtrack gating

* attempt to fix linux crashes

* ablate backtracking on global bb

* the rare bb bug appears to be from creating the synthetic lease for a bb node and then backtracking on the synthetic lease. this is an attempt to fix it

* clean up code

* try to fix bug about active worker counts/lease accounting. current policy should hold:
  - stale leases: release/decrement
  - canceled leases: do not release/decrement (just ignore, since we have an invariant that canceled leases mean closed nodes that are never revisited)

* delay premature root activation

* fix major semantic bug about threads continually choosing the root if their lease is reset

* fix cancellation to unknown status

* fix very bad bug about all threads needing to start at the root

* ablate active ranking: now nodes are only reopened if they are truly inactive (active worker count is 0)

* fix some bugs about leases

* ablate adding static effort only

* fix some bugs about leases

* don't explode effort for portfolio nodes

* fix: still accumulate per-node effort, but don't over-accumulate on portfolio solves

* restore dynamically scaled effort

* clean up merge from cherry pick

* tighten which nodes we detect for proven global bb closure (only detect nonclosed nodes)

* fix cancel to unknown exception on bb code

* lease cancellation doesn't touch rlimit now, it just sets max conflicts to 0. also fix a VERY BAD BUG about effort never being updated until all leases are done on a node, which meant we never left the root

* cross-thread modification of max conflicts is unsafe, so create an atomic lease canceled variable that's checked in ctx where max conflicts is also checked

* move atomic lease check in the context to the more global get_cancel_flag function

* Fix new SIGSEGV issue. The root cause: get_cancel_flag() is called from within propagation loops (mid-BCP, mid-equality-propagation, mid-atom-propagation). When it returns true there, the solver exits early and leaves the context in an intermediate state —
  propagation queues partially processed, theory state potentially inconsistent with boolean state.

  For the global cancel (m.limit().cancel()), this is harmless: the worker exits entirely and the context is destroyed. Intermediate state doesn't matter.

  For a lease cancel, the context is reused — the worker gets a new cube and calls ctx->check() again on the same context object. Re-entering check() on a context interrupted mid-propagation causes it to access that corrupted intermediate
  state → SIGSEGV.

  The m_max_conflicts check is the only checkpoint that's safe for re-entry: it only fires post-conflict-resolution, pre-decision, when propagation queues are empty and theory state is consistent.

  Fix: Remove m_lease_canceled from get_cancel_flag(). Keep it only at safe, between-phase checkpoints where the context is in a known-consistent state. The result is two safe checkpoints for m_lease_canceled: after each conflict (post-resolution, queues empty) and before each theory final check (not yet entered the theory). Neither interrupts the solver mid-mutation. The SIGSEGV should be
   gone, and NIA performance should improve because long theory final checks (where NIA burns most time) are now preemptable before they start.

* fix new inconsistent theory bug: The problem is returning FC_GIVEUP from inside final_check() after some theories have already run final_check_eh() and pushed propagations into the queue. Those pending propagations reference context state that gets invalidated on the next check() call → SIGSEGV.

  The fix: check m_lease_canceled before entering final_check() in bounded_search(), never from inside it. That way the context is always in a clean pre-final-check state when we bail out. This is safe: decide() returned false (all variables assigned, no pending propagations), theories haven't been touched yet, context is in a fully consistent state.

  For NIA, this is still a meaningful win — we avoid entering expensive arithmetic final checks entirely when the lease is already canceled.
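
  A minimal sketch of the two safe checkpoints described in these entries (stand-in names, simplified control flow; the real checks live inside smt::context):

    #include <atomic>

    struct worker_ctx {
        std::atomic<bool> m_lease_canceled{false};
        unsigned m_num_conflicts = 0;
        unsigned m_max_conflicts = 1000;

        // NOT consulted from get_cancel_flag(): that is called mid-propagation,
        // and bailing out there leaves the context in an inconsistent state that
        // the next check() on the same (reused) context would trip over.
        bool lease_canceled() const {
            return m_lease_canceled.load(std::memory_order_acquire);
        }

        // Safe checkpoint 1: right where max conflicts is checked, i.e. after
        // conflict resolution and before the next decision (queues are empty).
        bool resource_limit_reached() const {
            return m_num_conflicts >= m_max_conflicts || lease_canceled();
        }

        // Safe checkpoint 2: before entering the theory final check, while the
        // context is still in a clean pre-final-check state.
        bool may_enter_final_check() const {
            return !lease_canceled();
        }
    };

    // Another thread cancels the lease without touching rlimit or m_max_conflicts.
    inline void cancel_lease(worker_ctx& c) {
        c.m_lease_canceled.store(true, std::memory_order_release);
    }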

* ablate lease cancel check in ctx final theory check due to crash (??)

* gate bb-specific code behind param

* try some possible bugfixes for the SIGSEGV

* ablate some bugfixes

* remove second lease cancel check in smt_context, not sure it's safe. only check where we do the max conflicts check

* restore exception handling logic to master branch

* restore reslimit cancels since the bug appears to be latent

* add bookkeeping for race condition of multiple lease cancels on a single node (messes with reslimit)

* restore unrelated code to master

* restore local bb experiment

* ablate restore local bb phase/activity after search

* undo local bb ablation about resetting phase/activities, and reinstate the shared lemmas of length 2 and 3 experiment

* re-ablate restore local bb phase/activity after search, due to positive experimental signal on smt comp LIA

* change split policy from lightweight proof skeleton to VSIDS. NOTE: enabling local bb will mess with this since we aren't restoring activities right now

* backtrack more aggressively in search tree: close matching external targets (i.e. repeat literals on other branches)

* find_shallowest_timed_out_leaf_depth is now shallowest_unsolved_leaf_depth and is based on num activations > 0, not effort > 0

* fix soundness bug about closing external targets with nontrivial cores

* epoch is no longer needed, just cancel epoch. remove epoch

* core minimize draft, and fix bug in tree expansion policy about shallowest leaf depth needing to be timed out

* core minimization thread (remove search tree worker core min since it was blocking)

* collect shared clauses in core min thread

* bugfixes in core min algorithm

* fix more bugs in core min algorithm

* more core min bugfixes based on feedback and increase m_core_minimize_conflict_budget to 5000 (might need to increase it more for harder SMT COMP problems)
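
  For reference, deletion-based core minimization can be sketched against the public z3 C++ API (an illustrative re-implementation, not the internal core_minimizer_worker; the real worker additionally bounds each check by a conflict budget, m_core_minimize_conflict_budget):

    #include "z3++.h"

    // Try to drop one assumption at a time; keep the drop whenever the
    // remainder is still unsat.
    z3::expr_vector minimize_core(z3::solver& s, z3::expr_vector core) {
        for (unsigned i = 0; i < core.size(); ) {
            z3::expr_vector trial(s.ctx());
            for (unsigned j = 0; j < core.size(); ++j)
                if (j != i)
                    trial.push_back(core[j]);
            if (s.check(trial) == z3::unsat)
                core = trial;   // assumption i was redundant; keep the smaller core
            else
                ++i;            // sat or unknown within budget: assumption i stays
        }
        return core;
    }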

* fix bug in backtrack_unlocked

* fix compiler error

* more core min bugfix from nikolaj

* clean up

* failed literal probe collects shared clauses

* core min thread shares units

* failed literal thread now tests the top 500 global bb cands each round, instead of scanning everything. on QF_LIA/Sz128_2823.smt2 this got us from 51->75 discovered backbones

* remove core minimizer unit sharing (experimentally showed no effect)

* core minimization thread candidate cores are ranked first by depth (deepest->shallowest) then by size (largest->smallest). also, the core's node is set to the deepest node in the core which is not necessarily the search node (slightly semantically stronger). finally, clean up bb/failed literal params
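
  A small illustrative comparator for this ranking (roughly what a helper like select_best_core_min_job_unlocked would compute; struct and field names are stand-ins):

    #include <vector>

    struct core_min_job_info {
        unsigned node_depth;   // depth of the deepest node whose literal occurs in the core
        unsigned core_size;
    };

    // Deepest first, then largest first.
    unsigned select_best_core_min_job(std::vector<core_min_job_info> const& jobs) {
        auto better = [](core_min_job_info const& a, core_min_job_info const& b) {
            if (a.node_depth != b.node_depth) return a.node_depth > b.node_depth;
            return a.core_size > b.core_size;
        };
        unsigned best = 0;
        for (unsigned i = 1; i < jobs.size(); ++i)
            if (better(jobs[i], jobs[best]))
                best = i;
        return best;
    }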

* failed literal probe runs continuously

* fix a lot of things about the FL thread and how bb cands are being processed. also re-add the local bb experiment for ablations

* ablate continuous run on FL thread (up to the max BM examples)

* ablate m_max_failed_literal_prioritized_size back to 100

* redesign FL probe again

* ablate FL continuous probe

* reinstate continuous FL probe after positive NIA signal, but also re-add the BM maintaining 1000 bb cands and these are used as backups instead of just looping over the top 100 all the time

* change FL thread scheduling to attempt to do less duplicate checks

* restore some old FL behavior

* batch manager dedups global bb cands by atoms, not literals. if we have 2 of the same atom, the polarity with higher rank is kept for the stored bb candidate literal.
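
  An illustrative sketch of this dedup step (stand-in types, not the batch-manager code): when both polarities of an atom are proposed, keep the literal with the higher rank.

    #include <map>
    #include <vector>

    struct bb_cand { int atom_id; bool positive; double rank; };

    std::vector<bb_cand> dedup_by_atom(std::vector<bb_cand> const& cands) {
        std::map<int, bb_cand> best;  // atom id -> highest-ranked literal over that atom
        for (auto const& c : cands) {
            auto it = best.find(c.atom_id);
            if (it == best.end())
                best.emplace(c.atom_id, c);
            else if (c.rank > it->second.rank)
                it->second = c;       // same atom, other polarity or duplicate: keep the higher rank
        }
        std::vector<bb_cand> out;
        for (auto const& kv : best)
            out.push_back(kv.second);
        return out;
    }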

* ablations

* reinstate FL check for new batch with epochs, before merge, this is a temp branch

* undo comment out cv call

* restore old changes

* bb batch mode is continuous, with checks for new candidates after the first round

* separate FL probe into 2 threads for pos and neg mode
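
  For reference, the basic failed-literal test in the two modes can be sketched against the public z3 C++ API (illustrative only, not the internal FL worker; the worker also applies conflict budgets and candidate scheduling):

    #include "z3++.h"

    // Returns true when the probe proves a backbone: lit itself in negative
    // mode, !lit in positive mode.
    bool probe_backbone(z3::solver& s, z3::expr const& lit, bool negative_mode) {
        z3::expr_vector assumptions(s.ctx());
        assumptions.push_back(negative_mode ? !lit : lit);
        // negative mode: F /\ !lit unsat  =>  lit is a backbone
        // positive mode: F /\ lit unsat   =>  !lit is a backbone
        // sat refutes the candidate; unknown leaves it undecided
        return s.check(assumptions) == z3::unsat;
    }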

* attempt to add unit-based bb detection in chunking mode

* add bb detection via workers' units. also rename some variables

* modify the fallback policies for bb detection in batch mode but also in FL mode

* ablate continuous checking for batch bb mode

* major refactor for bb code. we share units and collect them as pruning bb's in all threads now (including core min and bb threads). we always check for units in batch mode now. finally, the batch mode fallback is now FL probing

* bb candidates are atoms, not literals, since we currently test both polarities in parallel.
batch mode retry terminates if we made zero progress after a retry round to avoid resource stress
fix bug about bb ranking being backwards for how we process them

* fix polarity bug for FL mode dedup

* restore polarity-sensitive bb candidate ranking via lits

* ablate sharing non-worker units

* ablate share unit as bb

* ablate incomplete-theory give-up paths

* restore unit sharing as bb collection on workers

* restore incomplete-theory give-up paths

* clean up code

* clean up code

* clean up code

---------

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MacBook-Pro.local>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.lan1>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Ilana Shapiro <ilanashapiro@Mac.localdomain>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.localdomain>
Co-authored-by: Ilana Shapiro <ilanashapiro@Mac.lan1>

* add ablate_backtracking experiment

---------

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MacBook-Pro.local>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.lan1>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Ilana Shapiro <ilanashapiro@Mac.localdomain>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.localdomain>
Co-authored-by: Ilana Shapiro <ilanashapiro@Mac.lan1>
Ilana Shapiro committed on 2026-05-11 15:08:23 -07:00 (committed by GitHub)
parent 4ea5ec0287
commit d02e9fcfab
4 changed files with 947 additions and 313 deletions


@ -4,6 +4,9 @@ def_module_params('smt_parallel',
params=(
('inprocessing', BOOL, False, 'integrate in-processing as a heuristic simplification'),
('sls', BOOL, False, 'add sls-tactic as a separate worker thread outside the search tree parallelism'),
('failed_literal_backbones', BOOL, False, 'use failed literal backbone test'),
('num_global_bb_threads', UINT, 0, 'default is 0 (off), can configure to 1 (negative mode only) or 2 (negative and positive mode) separate worker thread(s) outside the search tree parallelism'),
))
('num_global_bb_fl_threads', UINT, 0, 'run failed-literal backbone worker threads; default is 0 (off), supported values are 1 (negative mode only) or 2 (negative and positive mode)'),
('num_global_bb_batch_threads', UINT, 0, 'run Janota-style chunking backbone worker threads; default is 0 (off), supported values are 1 (negative mode only) or 2 (negative and positive mode)'),
('local_backbones', BOOL, False, 'enable local backbones experiment within the search tree parallelism'),
('core_minimize', BOOL, True, 'minimize unsat cores used for parallel cube backtracking'),
('ablate_backtracking', BOOL, False, 'ablation: pass entire cube as core instead of unsat core during backtracking'),
))
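
Assuming these module parameters are wired up in the usual way, and that the parallel SMT solver itself is enabled separately (e.g. via smt.threads), a run that turns on two failed-literal backbone threads and core minimization might look like the following; only the smt_parallel.* names come from the diff above, the rest is an assumption:

    z3 smt.threads=4 smt_parallel.num_global_bb_fl_threads=2 smt_parallel.core_minimize=true problem.smt2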

File diff suppressed because it is too large.


@ -21,6 +21,7 @@ Revision History:
#include "smt/smt_context.h"
#include "util/search_tree.h"
#include "ast/sls/sls_smt_solver.h"
#include <atomic>
#include <thread>
#include <mutex>
#include <condition_variable>
@ -36,6 +37,8 @@ namespace smt {
class parallel {
context& ctx;
class core_minimizer_worker;
using node = search_tree::node<cube_config>;
struct shared_clause {
unsigned source_worker_id;
@ -52,11 +55,7 @@ namespace smt {
using bb_candidates = vector<bb_candidate>;
struct node_lease {
search_tree::node<cube_config>* node = nullptr;
// Version counter for structural mutations of this node (e.g., split/close).
// Used to detect stale leases: if a worker's lease.epoch != node.epoch,
// the node has changed since it was acquired and must not be mutated.
unsigned epoch = 0;
node* leased_node = nullptr;
// Cancellation generation counter for this node/subtree.
// Incremented when the node is closed; used to signal that all
@ -82,13 +81,22 @@ namespace smt {
struct stats {
unsigned m_max_cube_depth = 0;
unsigned m_num_cubes = 0;
unsigned m_backbones_found = 0;
unsigned m_core_min_jobs_enqueued = 0;
unsigned m_core_min_jobs_published = 0;
unsigned m_core_min_jobs_skipped = 0;
unsigned m_core_min_global_unsat = 0;
};
struct core_min_job {
node* source = nullptr;
expr_ref_vector core;
core_min_job(ast_manager& m, node* source) : source(source), core(m) {}
};
ast_manager& m;
parallel& p;
std::mutex mux;
state m_state = state::is_running;
stats m_stats;
using node = search_tree::node<cube_config>;
search_tree::tree<cube_config> m_search_tree;
vector<node_lease> m_worker_leases;
@ -100,7 +108,8 @@ namespace smt {
bb_candidates m_bb_candidates;
unsigned m_max_global_bb_candidates = 100;
unsigned m_bb_batch_size = 150;
expr_ref_vector m_global_backbones;
obj_hashtable<expr> m_global_backbones;
std::atomic<unsigned> m_bb_candidate_epoch = 0;
// Backbone job queue
std::condition_variable m_bb_cv;
@ -110,6 +119,12 @@ namespace smt {
unsigned_vector m_bb_last_batch_processed;
unsigned m_bb_cancel_epoch = 0; // When a backbone worker finishes early, it increments m_bb_cancel_epoch and notifies all
// Core minimization job queue
std::condition_variable m_core_min_cv;
scoped_ptr_vector<core_min_job> m_core_min_jobs;
bool m_ablate_backtracking = false;
// called from batch manager to cancel other workers if we've reached a verdict
void cancel_workers() {
IF_VERBOSE(1, verbose_stream() << "Canceling workers\n");
@ -137,22 +152,36 @@ namespace smt {
cancel_backbones_worker();
m_bb_cv.notify_all();
}
if (p.m_core_minimizer_worker) {
p.m_core_minimizer_worker->cancel();
m_core_min_cv.notify_all();
}
}
// to avoid deadlock
bool is_global_backbone_unlocked(ast_translation& l2g, expr* bb_cand) {
expr_ref cand(l2g(bb_cand), l2g.to());
return any_of(m_global_backbones, [&](expr *bb) { return bb == cand.get(); });
return m_global_backbones.contains(cand.get());
}
bool is_global_backbone_or_negation_unlocked(ast_translation& l2g, expr* bb_cand) {
expr_ref cand(l2g(bb_cand), l2g.to());
expr_ref neg_cand(mk_not(l2g.to(), cand), l2g.to());
return m_global_backbones.contains(cand.get()) || m_global_backbones.contains(neg_cand.get());
}
void backtrack_unlocked(ast_translation& l2g, unsigned worker_id, expr_ref_vector const& core,
node_lease const* lease = nullptr, vector<node_lease> const* targets = nullptr);
void collect_clause_unlocked(ast_translation &l2g, unsigned source_worker_id, expr *clause);
void release_lease_unlocked(unsigned worker_id, node* n, unsigned epoch);
void release_lease_unlocked(unsigned worker_id, node* n);
void cancel_closed_leases_unlocked(unsigned source_worker_id);
void collect_matching_targets_unlocked(node* source, expr* lit, vector<cube_config::literal> const& core,
vector<node_lease>& targets);
node* find_core_source_unlocked(ast_translation& l2g, node* source, expr_ref_vector const& core);
unsigned select_best_core_min_job_unlocked() const;
public:
batch_manager(ast_manager& m, parallel& p) : m(m), p(p), m_search_tree(expr_ref(m)), m_global_backbones(m) { }
batch_manager(ast_manager& m, parallel& p) : m(m), p(p), m_search_tree(expr_ref(m)) { }
void initialize(unsigned num_global_bb_threads, unsigned initial_max_thread_conflicts = 1000); // TODO: pass in from worker config
@ -163,12 +192,30 @@ namespace smt {
void collect_statistics(::statistics& st) const;
void collect_backbone_candidates(ast_translation& l2g, bb_candidates& bb_candidates);
bool collect_global_backbone(ast_translation& l2g, expr_ref const& backbone);
void collect_backbone_evidence(ast_translation& l2g, expr* lit, double delta);
bool collect_global_backbone(ast_translation& l2g, expr_ref const& backbone, unsigned source_worker_id = UINT_MAX);
bool wait_for_backbone_job(unsigned bb_thread_id, ast_translation& g2l, vector<parallel::bb_candidate>& out, reslimit& lim);
bb_candidates return_global_bb_candidates(ast_translation& g2l);
bool has_new_backbone_candidates(unsigned epoch) {
return m_bb_candidate_epoch.load(std::memory_order_acquire) != epoch;
}
unsigned get_bb_candidate_epoch() const {
return m_bb_candidate_epoch.load(std::memory_order_acquire);
}
expr_ref_vector get_global_backbones_snapshot(ast_translation& g2l) {
std::scoped_lock lock(mux);
expr_ref_vector snapshot(g2l.to());
for (expr* gb : m_global_backbones)
snapshot.push_back(g2l(gb));
return snapshot;
}
bool get_cube(ast_translation& g2l, unsigned id, expr_ref_vector& cube, bool is_first_run, node_lease& lease);
void backtrack(ast_translation& l2g, unsigned worker_id, expr_ref_vector const& core, node_lease const& lease);
void enqueue_core_minimization(ast_translation& l2g, node* source, expr_ref_vector const& core);
bool wait_for_core_min_job(ast_translation& g2l, node*& source,
expr_ref_vector& core, reslimit& lim);
void publish_minimized_core(ast_translation& l2g, expr_ref_vector const& asms, node* source,
unsigned original_core_size, expr_ref_vector const& minimized_core);
void try_split(ast_translation& l2g, unsigned worker_id, node_lease const& lease, expr* atom, unsigned effort);
void release_lease(unsigned worker_id, node_lease const& lease);
bool lease_canceled(node_lease const& lease);
@ -178,9 +225,9 @@ namespace smt {
lbool get_result() const;
bool is_global_backbone(ast_translation& l2g, expr* bb_cand) {
bool is_global_backbone_or_negation(ast_translation& l2g, expr* bb_cand) {
std::scoped_lock lock(mux);
return is_global_backbone_unlocked(l2g, bb_cand);
return is_global_backbone_or_negation_unlocked(l2g, bb_cand);
}
void cancel_current_backbone_batch() {
@ -207,14 +254,15 @@ namespace smt {
double m_max_conflict_mul = 1.5;
bool m_inprocessing = false;
bool m_global_backbones = false;
bool m_local_backbones = false;
bool m_sls = false;
unsigned m_inprocessing_delay = 1;
unsigned m_max_cube_depth = 20;
unsigned m_max_conflicts = UINT_MAX;
bool m_core_minimize = false;
bool m_ablate_backtracking = false;
};
using node = search_tree::node<cube_config>;
unsigned id; // unique identifier for the worker
parallel& p;
batch_manager& b;
@ -279,13 +327,42 @@ namespace smt {
}
};
class core_minimizer_worker {
batch_manager &b;
ast_manager m;
expr_ref_vector asms;
smt_params m_smt_params;
scoped_ptr<context> ctx;
ast_translation m_g2l, m_l2g;
unsigned m_num_core_minimize_calls = 0;
unsigned m_num_core_minimize_undef = 0;
unsigned m_num_core_minimize_refined = 0;
unsigned m_num_core_minimize_lits_removed = 0;
unsigned m_num_core_minimize_found_sat = 0;
unsigned m_core_minimize_conflict_budget = 5000;
unsigned m_shared_clause_limit = 0;
void minimize_unsat_core(expr_ref_vector& core);
void collect_shared_clauses();
public:
core_minimizer_worker(parallel& p, expr_ref_vector const& _asms);
void run();
void cancel();
void collect_statistics(::statistics& st) const;
reslimit& limit() { return m.limit(); }
};
class backbones_worker {
struct stats {
unsigned m_batches_total = 0;
unsigned m_candidates_total = 0;
unsigned m_singleton_backbones = 0;
unsigned m_backbones_detected = 0;
unsigned m_backbones_found = 0;
unsigned m_internal_backbones_found = 0;
unsigned m_retry_backbones_found = 0;
unsigned m_bb_retries = 0;
unsigned m_fallback_singleton_checks = 0;
unsigned m_fallback_reason_chunk_exhausted = 0;
unsigned m_fallback_reason_undef = 0;
@ -308,16 +385,17 @@ namespace smt {
ast_translation m_g2l, m_l2g;
unsigned m_bb_chunk_size = 20;
unsigned m_bb_conflicts_per_chunk = 1000;
uint_set m_known_units;
bool m_use_failed_literal_test;
stats m_stats;
bb_mode m_mode;
unsigned m_num_global_bb_threads = 1; // used to toggle behavior when testing bb candidates
unsigned m_shared_clause_limit = 0; // remembers the index into shared_clause_trail marking the boundary between "old" and "new" clauses to share
bool check_backbone(expr* bb_candidate);
unsigned m_shared_units_prefix = 0;
unsigned m_num_initial_atoms = 0;
bool try_get_unit_backbone(expr* candidate, expr_ref& backbone);
void run_batch_mode();
void run_failed_literal_mode();
lbool check_sat(expr_ref_vector const &asms);
lbool probe_literal(bool_var v, uint_set& units, expr *e);
lbool probe_literal(bool_var v, expr *e, bool is_retry);
public:
backbones_worker(unsigned id, parallel &p, expr_ref_vector const &_asms);
void cancel();
@ -330,6 +408,7 @@ namespace smt {
batch_manager m_batch_manager;
scoped_ptr_vector<worker> m_workers;
scoped_ptr<sls_worker> m_sls_worker;
scoped_ptr<core_minimizer_worker> m_core_minimizer_worker;
scoped_ptr_vector<backbones_worker> m_global_backbones_workers;
public:


@ -50,7 +50,6 @@ namespace search_tree {
unsigned m_effort_spent = 0;
unsigned m_round_max_effort = 0;
unsigned m_active_workers = 0;
unsigned m_epoch = 0;
unsigned m_cancel_epoch = 0;
public:
@ -78,7 +77,6 @@ namespace search_tree {
SASSERT(!m_right);
m_left = alloc(node<Config>, a, this);
m_right = alloc(node<Config>, b, this);
inc_epoch();
}
node* left() const { return m_left; }
@ -150,12 +148,6 @@ namespace search_tree {
m_round_max_effort = effort;
m_effort_spent += m_round_max_effort;
}
unsigned epoch() const {
return m_epoch;
}
void inc_epoch() {
++m_epoch;
}
unsigned get_cancel_epoch() const {
return m_cancel_epoch;
}
@ -242,7 +234,7 @@ namespace search_tree {
count_active_nodes(cur->right());
}
// Find the shallowest leaf node that at least 1 worker has visited
// Find the depth of the shallowest leaf node that at least 1 worker has timed out on
// Used for tree expansion policy
void find_shallowest_timed_out_leaf_depth(node<Config>* cur, unsigned& best_depth) const {
if (!cur || cur->get_status() == status::closed)
@ -255,8 +247,8 @@ namespace search_tree {
find_shallowest_timed_out_leaf_depth(cur->right(), best_depth);
}
bool should_split(node<Config>* n, unsigned epoch) {
if (!is_lease_valid(n, epoch) || !n->is_leaf())
bool should_split(node<Config>* n) {
if (!n || n->get_status() != status::active || !n->is_leaf())
return false;
unsigned num_active_nodes = count_active_nodes(m_root.get());
@ -348,7 +340,6 @@ namespace search_tree {
void close(node<Config> *n, vector<literal> const &C) {
if (!n || n->get_status() == status::closed)
return;
n->inc_epoch();
n->inc_cancel_epoch();
n->set_status(status::closed);
n->set_core(C);
@ -373,8 +364,10 @@ namespace search_tree {
node<Config> *p = n->parent();
// The conflict does NOT depend on the decision literal at node n, so n's split literal is irrelevant to this conflict
// thus the entire subtree under n is closed regardless of the split, so the conflict should be attached higher, at the nearest ancestor that does participate
// The conflict does NOT depend on the decision literal at node n, so n's decision literal is irrelevant to this conflict
// thus the entire subtree under n is closed, so the conflict should be attached higher, at the nearest ancestor that does participate
// NOTE: I think this is dead code because the backtrack function already walks up to the nearest ancestor whose literal is in the conflict, which is the only place where this is called
// Keep for now since it does generalize this function to be used for arbitrary conflict attachment
if (p && all_of(C, [n](auto const &l) { return l != n->get_literal(); })) {
close_with_core(p, C);
return;
@ -451,7 +444,7 @@ namespace search_tree {
// On timeout, either expand the current leaf or reopen the node for a
// later revisit, depending on the tree-expansion heuristic.
bool try_split(node<Config> *n, unsigned epoch, unsigned cancel_epoch, literal const &a, literal const &b, unsigned effort) {
bool try_split(node<Config> *n, unsigned cancel_epoch, literal const &a, literal const &b, unsigned effort) {
if (is_lease_canceled(n, cancel_epoch))
return false;
@ -460,7 +453,7 @@ namespace search_tree {
n->update_round_max_effort(effort);
bool did_split = false;
if (should_split(n, epoch)) {
if (should_split(n)) {
n->split(a, b);
did_split = true;
}
@ -494,6 +487,8 @@ namespace search_tree {
// Walk upward to find the nearest ancestor whose decision participates in the conflict
while (n) {
// Does the UNSAT core contain the decision literal at node n?
// If yes, i.e. if the core contains n->literal, then the conflict depends on the decision made at node n.
if (any_of(conflict, [&](auto const &a) { return a == n->get_literal(); })) {
// close the subtree under n (preserves core attached to n), and attempt to resolve upwards
close_with_core(n, conflict);
@ -532,7 +527,6 @@ namespace search_tree {
find_nonclosed_nodes_with_literal_rec(m_root.get(), lit, out);
}
private:
void find_nonclosed_nodes_with_literal_rec(node<Config>* n, literal const& lit, ptr_vector<node<Config>>& out) {
if (!n)
return;
@ -544,17 +538,12 @@ namespace search_tree {
find_nonclosed_nodes_with_literal_rec(n->right(), lit, out);
}
public:
void dec_active_workers(node<Config>* n) {
if (!n)
return;
n->dec_active_workers();
}
bool is_lease_valid(node<Config>* n, unsigned epoch) const {
return n && n->get_status() == status::active && n->epoch() == epoch;
}
bool is_lease_canceled(node<Config>* n, unsigned cancel_epoch) const {
return !n || n->get_status() == status::closed || n->get_cancel_epoch() != cancel_epoch;
}