mk_qfnia_preamble invoked lia2card with no params, so the default
max_range=101 was in effect. Any integer variable with a concrete
range hi-lo <= 101 was expanded into that many fresh Booleans plus
a sum-of-ITEs, bloating SAT search alongside the nonlinear structure.
On an observed QF_UFNIA benchmark this drove a 0.2s problem to a 30s
timeout.
Mirror the throttle already applied in mk_preamble_tactic
(qflia_tactic.cpp, commit 99cbfa715): limit lia2card to 0-1 integer
variables and nesting depth 1. Wrap with using_params so the
override survives and_then's downstream updt_params calls (passing
the params to mk_lia2card_tactic alone is overwritten when and_then
re-propagates the ambient params to each child).
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
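The override mechanics described above can be illustrated with a toy model (class and parameter names here are invented for illustration, not Z3's actual C++ API): an and_then combinator re-propagates its ambient params to every child, clobbering params set directly on a child tactic, while a using_params wrapper re-applies its frozen overrides after any updt_params call.

```python
# Toy model of tactic parameter propagation; names are illustrative,
# not Z3's real API. The point: and_then pushes its ambient params down
# to each child, so params set directly on a child get clobbered unless
# a using_params-style wrapper re-applies its frozen overrides.

class Tactic:
    def __init__(self):
        self.params = {}
    def updt_params(self, p):
        self.params = dict(p)

class UsingParams(Tactic):
    """Wrap a tactic so fixed overrides survive downstream updt_params."""
    def __init__(self, inner, overrides):
        super().__init__()
        self.inner = inner
        self.overrides = dict(overrides)
        self.inner.updt_params(self.overrides)
    def updt_params(self, p):
        merged = dict(p)
        merged.update(self.overrides)   # overrides always win
        self.inner.updt_params(merged)

class AndThen(Tactic):
    def __init__(self, *children):
        super().__init__()
        self.children = children
    def updt_params(self, p):
        for c in self.children:         # ambient params re-propagated
            c.updt_params(p)

# Without the wrapper: the direct param setting is lost.
lia2card = Tactic()
lia2card.updt_params({"max_range": 1})
AndThen(lia2card).updt_params({"max_range": 101})
print(lia2card.params["max_range"])     # 101: override clobbered

# With the wrapper: the override survives re-propagation.
lia2card2 = Tactic()
wrapped = UsingParams(lia2card2, {"max_range": 1})
AndThen(wrapped).updt_params({"max_range": 101})
print(lia2card2.params["max_range"])    # 1: override preserved
```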
* setting up new backbone experiment
* fix phase scores bug
* debug crash from negated atoms
* backbone thread/global backbones in progress, does NOT compile yet
* debug, still need to add backbones worker as a new thread
* setting up complicated condition variable thing for backbones worker thread
* debug
* debug lock contention
* it's a little messy, but change how i'm checking backbones by initiating with batch check
* don't split on global backbones, share global backbones once detected. still need to prune search tree with backbones
* close global backbone branches in search tree
* fix backbone ranking (take average of bb age over cubes and incorporate hits/num cubes the bb appears in)
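One possible reading of that ranking, as a sketch (the actual scoring code may differ; `rank_backbone` and its inputs are invented for illustration):

```python
# Hypothetical sketch of the backbone-candidate ranking described above;
# the real scoring in the experiment may differ. Each candidate records
# its age in every cube it appeared in, plus a global hit count.

def rank_backbone(ages_per_cube, hits):
    """Average age over the cubes the candidate appears in, boosted by
    hits per appearance. Higher score = more promising candidate."""
    num_cubes = len(ages_per_cube)
    if num_cubes == 0:
        return 0.0
    avg_age = sum(ages_per_cube) / num_cubes
    return avg_age + hits / num_cubes

print(rank_backbone([2, 4], hits=3))  # avg age 3.0 + 1.5 hits/cube = 4.5
```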
* add stats to backbone experiment
* gate the backbones experiment by local vs global
* update stats and fix bug about unsat core size=1 means global backbone
* phase negation ablation
* unforce phase ablation
* reset ablations
* add todo notes
* fix backbone aging
* first draft of Janota Alg 7
* process exactly 10 bb candidates in each batch
* fixing the Janota algorithm
* add backbone stats for Janota algorithm
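For context, an illustrative sketch (not the code in this PR): a backbone literal is one that takes the same polarity in every model. Janota-style algorithms keep a candidate set seeded from one model and shrink it with further SAT calls; a brute-force version just intersects all models:

```python
from itertools import product

def backbones(clauses, n_vars):
    """Brute-force backbone computation for a CNF over vars 1..n_vars.
    Literals are +v / -v. A backbone literal holds in every model; we
    seed candidates from the first model found and prune against later
    models, mirroring the candidate-filtering idea in Janota-style
    algorithms (which use SAT calls instead of enumeration)."""
    candidates = None
    for bits in product([False, True], repeat=n_vars):
        assign = {v + 1: bits[v] for v in range(n_vars)}
        if all(any(assign[abs(l)] == (l > 0) for l in cl) for cl in clauses):
            model = {v if val else -v for v, val in assign.items()}
            candidates = model if candidates is None else candidates & model
            if not candidates:
                break
    return candidates or set()

# (x1) and (~x1 or x2) and (x2 or x3): x1, x2 forced; x3 free.
print(sorted(backbones([[1], [-1, 2], [2, 3]], 3)))  # [1, 2]
```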
* fix bug about global backbones not being checked unless local is also true
* hopefully fix bug about closing global backbones in search tree
* fix another bug in janota alg
* report random seed for debug
* print random seed for debug
* refactor janota alg code, still can't repro the crash
* fix some bugs in the janota algorithm
* try to fix weird memory leak thing with ramon/linux
* revert fix, it didn't work
* add second backbones thread
* increase chunk size when undef
* fix how the 2 backbone threads work on batches (they each race to finish the same batch). This was very complicated to code due to thread synchronization, and while it runs there may be bugs
* update how we report stats for backbones
* first draft of doing the bb threads in neg and pos mode, needs revising
* fix some bugs in the positive version of the bb check, still need to review
* debug some more things in the positive bb worker
* keep bb candidates sorted, increase batch and chunk size
* try to resolve a couple of bugs
* fix very bad bug about backbones workers not doing anything
* ablate positive backbone thread
* fix how we record backbones in positive mode (shouldn't impact previous run)
* clarify code about adding found backbones
* add back the positive bb thread
* try to fix the random segfault bug + ablate the positive bb thread again
* clean up logs
* share clauses with bb threads
* fixes
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
* resolve deadlock
* add comment about SAT bb case
* todo comments
* complete TODOs in code, still need to debug bb threads
* debug bb threads, add bb_positive thread back in
* ablate bb_positive thread
* style
* configure num bb threads as param
* enable sat and unsat mode
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
* updates
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
* remove while true
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
* updates
* try to fix rewriter_exception bug
* possibly reduce code under lock when only 1 bb thread
* add some copilot-suggested optimizations
* add copilot suggestions to fix condition variable synchronization with bb threads
* revert changes that are too messy with the code
* ablate collect clauses
* ablate condition variable logic changes
* ablate reset batch
* revert ablation
* remove m_batch_in_progress that makes the bb threads wait until both have exited the batch after one signals cancel (can be long if one is stuck in ctx check)
* sharing theory lemmas
* finish setup for search tree thread modes, and fix local bb setup to pull from the global pool
* variable renames
* update bb hyperparams after copilot (hopefully??) ran tuning experiments
* fix possible AST manager bug
* ablate collect clauses
* remove bb collect shared clauses
* fix local bb experiment bug and reinstate collect clauses for global bb
* local bb cands are thread-local ablation
* remove thread-local local bb ablation
* fix bug in nonthread-local bb experiment
* fix more nonthreadlocal bb bugs
* try to fix local bb bug
* AST manager mismatch bugfix
* attempt to fix another canonicalization bug
* try another bugfix
* try another bugfix
* try yet another bugfix
* thread local bb ablation
* ablate force phase
* ablate set activity
* undo ablations since apparently it's not forcing phase or boosting activities
* remove old experiments
* try guarding m_birthdate size
* try to fix several bugs including with m_birthdate initialization and how we're storing original phases
* one more bugfix
* remove local bb experiment after negative signals on experiments, and change bb ranking to VSIDS scores as opposed to phase
* select bb polarity based on phase, not VSIDS
* first attempt with codex. Codex notes:
What changed:
- Each tree node now tracks:
- active worker count
- lease epoch
- cancel epoch
- get_cube() now hands each worker an explicit lease: (node, epoch, cancel_epoch).
- try_split() and backtrack() now operate on that lease, and the batch manager releases the worker’s lease under the tree lock before mutating the
node.
- If another worker closes the leased node or subtree, the batch manager cancels only the workers whose current leased nodes are now closed.
- Workers detect canceled leases after check(), reset their local cancel flag, abandon the stale lease, and continue instead of turning that into a
global exception.
- The “reopen immediately into the open queue” policy is preserved. I did not add a barrier waiting for all workers on a node to finish.
- Active-worker accounting is now separate from the open/active/closed scheduling status, so reopening a node no longer erases the fact that other
workers are still on it.
I also updated search_tree bookkeeping so:
- closure bumps node cancel/lease epochs
- active-node counting uses actual active-worker presence, not just status == active
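A minimal single-threaded sketch of the lease/epoch idea described above (the data layout and names are illustrative, not the actual search_tree code): each node carries a lease epoch bumped on closure, a handed-out lease records the epoch it saw, and structural operations are accepted only if the epochs still match.

```python
# Illustrative sketch of node leases with epochs; not the real
# search_tree code. Closing a node bumps its epoch, which invalidates
# every lease handed out earlier.

from dataclasses import dataclass

@dataclass
class Node:
    lease_epoch: int = 0
    active_workers: int = 0
    closed: bool = False

@dataclass
class Lease:
    node: Node
    epoch: int

def get_cube(node):
    """Hand a worker an explicit lease on a node."""
    node.active_workers += 1
    return Lease(node, node.lease_epoch)

def close(node):
    """Closure bumps the epoch, invalidating outstanding leases."""
    node.closed = True
    node.lease_epoch += 1

def try_split(lease):
    """Structural splits are rejected if the lease went stale."""
    if lease.epoch != lease.node.lease_epoch:
        return False                  # stale: someone closed/split the node
    lease.node.lease_epoch += 1       # a successful split also bumps the epoch
    return True

n = Node()
a, b = get_cube(n), get_cube(n)       # two workers lease the same node
close(n)                              # worker a's result closes the node
print(try_split(b))                   # False: b's lease is now stale
```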
* fix smts bugfix git merge issues with backtrack
* fix(parallel-smt): gate split/backtrack by lease epoch
What it changes:
- util/search_tree.h
- bumps node epoch on split
- threads epoch through should_split(...) and try_split(...)
- always records effort, but only split/reopen if the lease epoch still matches
- smt/smt_parallel.cpp
- requires is_lease_valid(..., lease.epoch) before backtrack(...)
- passes lease.epoch into m_search_tree.try_split(...)
* clean up code and add some comments
* fix bug about backtracking condition being too strict: The epoch guard should not block backtrack(...) the same way it blocks try_split(...). A stale worker that proves UNSAT for n should still be able to
close n, and that closure should then cancel the other workers on n and its subtree.
I changed smt/smt_parallel.cpp accordingly:
- try_split(...) still uses epoch to reject stale structural splits
- backtrack(...) no longer requires is_lease_valid(..., epoch); it only requires that the lease is not already canceled
So the intended asymmetry is now restored:
- stale split: reject
- stale unsat/backtrack: allow closure, then cancel affected workers
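That asymmetry can be made concrete with toy guards (field names are illustrative, not the real smt_parallel code): splits demand an exact epoch match, while an UNSAT closure only requires that the lease has not been canceled.

```python
# Toy lease guards illustrating the stale-split vs. stale-unsat
# asymmetry described above; names are illustrative, not the real code.

class Lease:
    def __init__(self, epoch):
        self.epoch = epoch
        self.canceled = False

class Node:
    def __init__(self):
        self.epoch = 0

def may_split(node, lease):
    # Structural split: reject unless the lease is exactly current.
    return not lease.canceled and lease.epoch == node.epoch

def may_backtrack(node, lease):
    # UNSAT closure: a stale (but not canceled) lease may still close n.
    return not lease.canceled

n = Node()
stale = Lease(epoch=0)
n.epoch = 1                      # someone else split/closed in between
print(may_split(n, stale))       # False: stale structural split rejected
print(may_backtrack(n, stale))   # True: stale UNSAT closure still allowed
```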
* ablate to no backtracking on stale leases
* fix merge
* revert codex change about exception handling
* fix linux bugs
* ablate backtrack gating
* attempt to fix linux crashes
* ablate backtracking on global bb
* the rare bb bug appears to be from creating the synthetic lease for a bb node and then backtracking on the synthetic lease. this is an attempt to fix it
* clean up code
* try to fix bug about active worker counts/lease accounting. current policy should hold:
- stale leases: release/decrement
- canceled leases: do not release/decrement (just ignore, since we have an invariant that canceled leases mean closed nodes that are never revisited)
* delay premature root activation
* fix major semantic bug about threads continually choosing the root if their lease is reset
* fix cancellation to unknown status
* fix very bad bug about all threads needing to start at the root
* ablate active ranking: now nodes are only reopened if they are truly inactive (active worker count is 0)
* fix some bugs about leases
* ablate adding static effort only
* fix some bugs about leases
* don't explode effort for portfolio nodes
* fix: still accumulate per-node effort, but don't over-accumulate on portfolio solves
* restore dynamically scaled effort
* clean up merge from cherry pick
* tighten which nodes we detect for proven global bb closure (only detect nonclosed nodes)
* fix cancel to unknown exception on bb code
* lease cancellation doesn't touch rlimit now, it just sets max conflicts to 0. also fix a VERY BAD BUG about effort never being updated until all leases are done on a node, which meant we never left the root
* cross-thread modification of max conflicts is unsafe, so create an atomic lease canceled variable that's checked in ctx where max conflicts is also checked
* move atomic lease check in the context to the more global get_cancel_flag function
* Fix new SIGSEGV. The root cause: get_cancel_flag() is called from within propagation loops (mid-BCP, mid-equality-propagation, mid-atom-propagation). When it returns true there, the solver exits early and leaves the context in an intermediate state —
propagation queues partially processed, theory state potentially inconsistent with boolean state.
For the global cancel (m.limit().cancel()), this is harmless: the worker exits entirely and the context is destroyed. Intermediate state doesn't matter.
For a lease cancel, the context is reused — the worker gets a new cube and calls ctx->check() again on the same context object. Re-entering check() on a context interrupted mid-propagation causes it to access that corrupted intermediate
state → SIGSEGV.
The m_max_conflicts check is the only checkpoint that's safe for re-entry: it only fires post-conflict-resolution, pre-decision, when propagation queues are empty and theory state is consistent.
Fix: Remove m_lease_canceled from get_cancel_flag(). Keep it only at safe, between-phase checkpoints where the context is in a known-consistent state. The result is two safe checkpoints for m_lease_canceled: after each conflict (post-resolution, queues empty) and before each theory final check (not yet entered the theory). Neither interrupts the solver mid-mutation. The SIGSEGV should be
gone, and NIA performance should improve because long theory final checks (where NIA burns most time) are now preemptable before they start.
* fix new inconsistent theory bug: The problem is returning FC_GIVEUP from inside final_check() after some theories have already run final_check_eh() and pushed propagations into the queue. Those pending propagations reference context state that gets invalidated on the next check() call → SIGSEGV. The fix: check m_lease_canceled before entering final_check() in bounded_search(), never from inside it. That way the context is always in a clean pre-final-check state when we bail out. This is safe: decide() returned false (all variables assigned, no pending propagations), theories haven't been touched yet, context is in a fully consistent state. For NIA, this is still a meaningful win — we avoid entering expensive arithmetic final checks entirely when the lease is already canceled.
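The checkpointing discipline above can be sketched abstractly (a toy loop, not the real smt_context code): the lease-cancel flag is consulted only between phases, where the state is consistent, never inside propagation.

```python
# Toy bounded-search loop illustrating safe cancellation checkpoints;
# purely illustrative of the idea, not the real solver code. The
# lease-cancel flag is only consulted where the state is consistent.

class ToySolver:
    def __init__(self):
        self.lease_canceled = False
        self.queue_empty = True
        self.checkpoints = []

    def propagate(self):
        self.queue_empty = False   # mid-propagation: NOT safe to bail here
        # ... deliberately no cancel check inside this phase ...
        self.queue_empty = True

    def bounded_search(self, max_rounds):
        for _ in range(max_rounds):
            self.propagate()
            # Safe checkpoint 1: post-conflict-resolution, queues empty.
            assert self.queue_empty
            self.checkpoints.append("after_conflict")
            if self.lease_canceled:
                return "canceled"
            # Safe checkpoint 2: before entering the theory final check.
            self.checkpoints.append("before_final_check")
            if self.lease_canceled:
                return "canceled"
        return "done"

s = ToySolver()
s.lease_canceled = True
print(s.bounded_search(5))   # "canceled": bailed at a safe checkpoint
```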
* ablate lease cancel check in ctx final theory check due to crash (??)
* gate bb-specific code behind param
* try some possible bugfixes for the SIGSEGV
* ablate some bugfixes
* remove second lease cancel check in smt_context, not sure it's safe. only check where we do the max conflicts check
* restore exception handling logic to master branch
* restore reslimit cancels since the bug appears to be latent
* add bookkeeping for race condition of multiple lease cancels on a single node (messes with reslimit)
* restore unrelated code to master
---------
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MacBook-Pro.local>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.lan1>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Ilana Shapiro <ilanashapiro@Mac.localdomain>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.localdomain>
Co-authored-by: Ilana Shapiro <ilanashapiro@Mac.lan1>
* remove old code
* ablate backtracking gate
* check epoch match in release_lease_unlocked
---------
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MacBook-Pro.local>
Co-authored-by: Ilana Shapiro <ilanashapiro@Mac.lan1>
- bincover.py: typo `NOne` -> `None` in _value2bin fallback path
(would raise NameError if bin_index is out of range).
- complex/complex.py: rename `__neq__` to `__ne__`. Python has no
`__neq__` dunder, so `!=` was not using the intended definition.
On Python 3 it silently fell back to the auto-derived inverse of
`__eq__`; on Python 2 it fell back to identity comparison.
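The `__neq__` pitfall is easy to reproduce (a standalone demo, not the complex.py code): Python has no `__neq__` hook, so defining it is inert, and on Python 3 `!=` falls back to the automatic negation of `__eq__`.

```python
# Demonstrates the __neq__ vs __ne__ pitfall fixed above. Python has no
# __neq__ dunder: defining it is inert, and on Python 3 the `!=`
# operator falls back to the auto-derived negation of __eq__.

class WithNeq:
    def __init__(self, v): self.v = v
    def __eq__(self, o): return self.v == o.v
    def __neq__(self, o):                  # wrong name: never called
        raise RuntimeError("never reached")

class WithNe:
    def __init__(self, v): self.v = v
    def __eq__(self, o): return self.v == o.v
    def __ne__(self, o):                   # correct dunder for !=
        return self.v != o.v

print(WithNeq(1) != WithNeq(2))   # True, via auto-derived not __eq__
print(WithNe(1) != WithNe(2))     # True, via the explicit __ne__
```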
Fixes a double-free (SIGSEGV in mpz_manager::del) in
algebraic_numbers::manager::del_poly, reached through the
destruction of nlsat::evaluator's scoped_anum_vector members on a
subsequent call to nra::solver::reset.
Root cause: sort_roots runs std::sort over a numeral_vector with a
comparator (lt_proc -> manager::lt -> compare_core) that legitimately
throws when the reslimit fires mid-comparison. libc++'s insertion sort
shifts elements via move-assignment inside its inner loop, and because
anum previously had only compiler-generated shallow copy/move (both
just copied m_cell without nulling the source), a throw between two
consecutive shifts could leave two vector slots pointing at the same
algebraic_cell. When the owning scoped_anum_vector was later destroyed
it del'd the same cell twice, reading through a freed chunk whose
first bytes had been overwritten by small_object_allocator's free-list
next pointer.
Fix: give anum proper move constructor and move assignment that
transfer the tagged m_cell pointer and null the source. Copy stays
a shallow handle copy (ownership is still tracked externally by the
manager / owning vector, as before). With the new move, every
intermediate state of sort's move-via-tmp sequence has at most one
slot referencing any given cell, so a throwing comparator can leak
the in-flight tmp cell but cannot produce aliased slots and therefore
cannot cause the downstream double-free.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
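The aliasing mechanism is C++-specific, but the invariant can be mimicked in a toy ownership model (illustrative only, not the actual anum code): a shallow move leaves source and destination pointing at the same cell, so a comparator that throws mid-shift strands two slots on one cell and the owner later frees it twice; a proper move nulls the source.

```python
# Toy ownership model of the sort_roots double-free; illustrative only.
# A "shallow move" copies the cell pointer without nulling the source,
# so a comparator throwing between two insertion-sort shifts can leave
# two slots aliasing the same cell; destroying the owning vector then
# frees that cell twice. A proper move nulls the source.

def shift_with_throw(slots, shallow):
    """Mimic one insertion-sort shift interrupted by a throwing compare:
    slots[1] is moved into slots[0], then the comparator throws before
    slots[1] would have been overwritten by the next element."""
    slots[0] = slots[1]          # move-assign slots[1] -> slots[0]
    if not shallow:
        slots[1] = None          # proper move: null the source
    raise RuntimeError("reslimit fired mid-comparison")

def free_count(slots, cell):
    """How many times `cell` would be del'd on vector destruction."""
    return sum(1 for s in slots if s is cell)

cell = object()
shallow_slots = [object(), cell]
try:
    shift_with_throw(shallow_slots, shallow=True)
except RuntimeError:
    pass
print(free_count(shallow_slots, cell))   # 2 -> double free

proper_slots = [object(), cell]
try:
    shift_with_throw(proper_slots, shallow=False)
except RuntimeError:
    pass
print(free_count(proper_slots, cell))    # 1 -> safe
```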
* Fix broken term_comparer in m_normalized_terms_to_columns lookup
The `m_normalized_terms_to_columns` map in `lar_solver` uses a
`term_comparer` that delegates to `lar_term::operator==`, which
intentionally returns `false` (with comment "take care not to create
identical terms"). This makes `fetch_normalized_term_column` unable to
find any term, rendering the Horner module's `interval_from_term`
bounds-recovery path dead code.
History: `lar_term::operator==` returning `false` has been present since
the original "merge LRA" commit (911b24784, 2018). The
`m_normalized_terms_to_columns` lookup was added later (dfe0e856,
c95f66e0, Aug 2019) as "toward fetching existing terms intervals from
lar_solver". The initial code had `lp_assert(find == end)` on
registration (always true with broken ==) and `lp_assert(find != end)`
on deregister (always false). The very next commit (207c1c50, one day
later) removed both asserts, replacing them with soft checks. The
`term_comparer` struct delegating to `operator==` was introduced during
a later PIMPL refactor (b375faa77).
Fix: Replace the `term_comparer` implementation with a structural
comparison that checks size and then verifies each coefficient-variable
pair via `coeffs().find_core()`. This is localized to the
`m_normalized_terms_to_columns` map and does not change
`lar_term::operator==`, preserving its intentional semantics elsewhere.
Validated: on a QF_UFNIA benchmark, `interval_from_term` lookups go
from 0/573 successful to 34/573 successful. Unit test added for the
`fetch_normalized_term_column` round-trip.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
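The failure mode maps directly onto Python semantics (a toy analogue, not the lar_solver code): a key type whose equality always reports false makes every map lookup miss, and a structural comparison (size check, then per-variable coefficient lookup) restores them.

```python
# Toy analogue of the broken term_comparer: a map keyed by terms whose
# equality always returns False can never find anything. A structural
# comparison, size check then per-variable coefficient lookup, fixes it.

class BrokenTerm:
    def __init__(self, coeffs): self.coeffs = dict(coeffs)
    def __hash__(self): return 0
    def __eq__(self, other): return False    # "never create identical terms"

class StructuralTerm:
    def __init__(self, coeffs): self.coeffs = dict(coeffs)
    def __hash__(self): return hash(frozenset(self.coeffs.items()))
    def __eq__(self, other):                 # size, then each coeff pair
        if len(self.coeffs) != len(other.coeffs):
            return False
        return all(other.coeffs.get(v) == c for v, c in self.coeffs.items())

broken = {BrokenTerm({1: 2, 3: -1}): "col7"}
print(BrokenTerm({1: 2, 3: -1}) in broken)     # False: lookup always misses

fixed = {StructuralTerm({1: 2, 3: -1}): "col7"}
print(StructuralTerm({1: 2, 3: -1}) in fixed)  # True: round-trip works
```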
* Disable operator== for lar_term
The operator== for lar_term was never intended to be used.
This change physically disables it to identify what happens to depend
on the operator.
* Work around missing lar_term==
Previous commit disabled lar_term==. This is the only use of the
operator that seems meaningful. Changed it to compare by references
instead.
Compiles, but not sure this is the best solution.
* replace with e
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
* Delete unused ineq::operator==
The operator is unused, so there is no need to figure out what the
best fix for it would be.
* Remove lp tests that use ineq::operator==
---------
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
* fix: address issues 1,2,4,5 and add Goal API to Go bindings
Issue 2 (Go): Add Substitute, SubstituteVars, SubstituteFuns to Expr
Issue 4 (Go): Add GetDecl, NumArgs, Arg to Expr for AST app introspection
Goal API (Go): Add IsInconsistent and ToDimacsString to Goal
ASTVector (Go): Add public Size, Get, String methods
ASTMap (Go): Add ASTMap type with full CRUD API in spacer.go
Issue 1 (Go): Add Spacer fixedpoint methods QueryFromLvl, GetGroundSatAnswer,
GetRulesAlongTrace, GetRuleNamesAlongTrace, AddInvariant, GetReachable
Issue 1 (Go): Add context-level QE functions ModelExtrapolate, QeLite,
QeModelProject, QeModelProjectSkolem, QeModelProjectWithWitness
Issue 5 (OCaml): Add substitute_funs to z3.ml and z3.mli
Agent-Logs-Url: https://github.com/Z3Prover/z3/sessions/afa18588-47af-4720-8cea-55fe0544ae55
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
* fix: add substitute_funs to Expr module sig in z3.ml
The internal sig...end block in z3.ml (the module type declaration for Expr)
was missing val substitute_funs, causing OCaml compiler error:
The value substitute_funs is required but not provided
Agent-Logs-Url: https://github.com/Z3Prover/z3/sessions/c6662702-46a3-4aa0-b225-d6b73c2a2505
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
The is_mod handler in theory_lra called ensure_nla(), which
unnecessarily created the NLA solver for pure linear problems, causing
the optimizer to return a finite value instead of -infinity.
Fix: check `m_nla` instead of calling `ensure_nla()`, matching the
pattern used by the is_idiv handler. The mod division is only registered
when NLA is already active due to nonlinear terms.
Update mod_factor tests to use QF_NIA logic and assert the mul term
before the mod term so that internalize_mul triggers ensure_nla() before
mod internalization.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* mini_quip: port to Python 3 and fix several bugs
examples/python/mini_quip.py was Python 2 only and had several
latent bugs that prevented it from running on Python 3 or producing
correct results on benchmarks beyond horn1..5.
Python 3 / import fixes:
- Convert `print stmt` to `print(...)` calls (lines 457-458, 567,
710, 747, 765, 776).
- The bare `print("Test file: %s") % file` form was applying `%`
to the return value of print() (None); rewrite as
`print("Test file: %s" % file)`.
- Add `import sys` (used by sys.stdout.write/flush) and
`import copy` (used by QReach.state2cube via copy.deepcopy);
neither was previously imported.
- next()/prev() passed `zip(...)` directly to z3.substitute. In
Python 3 zip returns a one-shot generator; wrap with list() the
same way mini_ic3 already does.
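Both pitfalls are easy to reproduce in isolation (a standalone demo, not the mini_quip code itself):

```python
# Reproduces the two Python 3 pitfalls fixed above, in isolation.

# 1. `print("...") % x`: print returns None, so the `%` applies to None
#    and raises a TypeError instead of formatting the string.
try:
    print("Test file: %s") % "horn1.smt2"
except TypeError as e:
    msg = str(e)   # unsupported operand type(s) for %: 'NoneType' and 'str'

# 2. zip() is a one-shot iterator on Python 3: consuming it once leaves
#    nothing for a second use, so it must be wrapped in list() if reused.
pairs = zip([1, 2], ["a", "b"])
first = list(pairs)    # consumes the iterator
second = list(pairs)   # already exhausted
print(first, second)   # [(1, 'a'), (2, 'b')] []
```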
Bug fixes:
- is_transition(): when an init rule's body is an And without any
Invariant predicate, is_body() returns (And(...), None). The
function then passed inv0=None to subst_vars and crashed inside
get_vars(). Add an explicit None check so the rule falls through
to is_init() (same fix as mini_ic3).
- generalize(): guard against an empty unsat core. Without the
guard, an empty core can be returned and become
cube2clause([])=Or([])=False, poisoning all frames (same class
of bug as in mini_ic3).
- check_reachable(): self.prev(cube) on an empty cube produced an
empty list which was then added to a solver as a no-op
constraint, so an empty cube would always look reachable. Only
add the constraint when cube is non-empty.
- quip_blocked() at f==0 for must goals contained
`assert is_sat == s.check()` where `is_sat` is undefined in that
scope; the intent is `assert sat == s.check()`.
- Inside the lemma-pushing loop in quip_blocked(), `is_sat == unsat`
was a comparison whose result was discarded; the intended
assignment is `is_sat = unsat`.
Verified on horn1..5 (unchanged behavior, all return same
SAFE/UNSAFE result and validate). Larger benchmarks (h_CRC,
h_FIFO, cache_coherence_three) now at least run without exceptions
(performance is a separate matter).
* mini_quip: guard against None from QReach.intersect in CEX trace loop
In quip_blocked, the must-goal CEX-tracing loop calls
self.reachable.intersect(self.prev(r)) and immediately uses
r.children() on the result. QReach.intersect can return None when
the model literals do not match any state in the partial reachable
set, which crashes with AttributeError: 'NoneType' object has no
attribute 'children'. Reproduces on data/h_FIFO.smt2.
Fix: save the model, and when intersect returns None fall back to
the raw self.project0(model) as the predecessor cube. This still
gives a concrete predecessor and lets the CEX trace make progress
instead of crashing.
* Refactor parallel search tree to use global node selection (SMTS-style) instead of DFS traversal.
Introduce effort-based prioritization, allow activation of any open node, and add controlled/gated
expansion to prevent over-partitioning and improve load balancing.
* clean up code
* ablations
* ablations2: effort
* ablations2: activation
* ablations3: more activations
* ablations4: visit all nodes before splitting
* throttle tree size min is based on workers not activated nodes
* ablate random throttling
* ablate nonlinear effort
* clean up code
* ablate throttle
* ablate where add_effort is
* reset
* clean up a function and add comment
---------
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.localdomain>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MacBook-Pro.local>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.lan1>
Two fixes in examples/python/mini_ic3.py:
1. generalize(): the polarity of the disjointness check was inverted,
and there was no guard against an empty unsat core. With an empty
core, And([])=True so check_disjoint(init, prev(True)) is always
False (init is sat), and the code returned the empty core. That
empty core then became cube2clause([])=Or([])=False, which got
added as a lemma to all frames. The frame became inconsistent and
is_valid() returned And(Or())=False as the "inductive invariant".
Fix: require len(core) > 0 AND check_disjoint(init, prev(core))
(without the spurious 'not'), so the core is only used when it
is genuinely disjoint from init.
2. is_transition(): when an init rule's body happens to be an And
without any Invariant predicate (e.g. (and (not A) (not B) ...)),
is_body() returns (And(...), None). is_transition then passed
inv0=None to subst_vars() which crashed inside get_vars(). Add an
explicit None check so the rule falls through to is_init().
Verified on horn1..5 (unchanged behavior), h_CRC and h_FIFO from the
blocksys benchmarks (now correctly return CEX matching z3 spacer),
and cache_coherence_three (no longer collapses to (and or)).
When a monic x*y has a factor x with mod(x, p) = 0 (fixed), propagate
mod(x*y, p) = 0. This enables Z3 to prove divisibility properties like
x mod p = 0 => (x*y) mod p = 0, which previously timed out even for
p = 2. The lemma fires in the NLA divisions check and allows Gröbner
basis and LIA to subsequently derive distributivity of div over addition.
Extends division tuples from (q, x, y) to (q, x, y, r) to track the
mod lpvar. Also registers bounded divisions from the mod internalization
path in theory_lra, not just the idiv path.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
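The propagated lemma itself is simple number theory and can be spot-checked exhaustively over a small range (a sanity check, not the NLA code):

```python
# Sanity-check of the propagated lemma over a small range: whenever
# x mod p == 0 (x is a multiple of p), (x * y) mod p == 0 as well,
# for any integer factor y.

def lemma_holds(p, lo, hi):
    return all((x * y) % p == 0
               for x in range(lo, hi)
               if x % p == 0
               for y in range(lo, hi))

print(all(lemma_holds(p, -20, 20) for p in (2, 3, 5, 7)))  # True
```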