`sat.smt=true` could return `sat` with an invalid model for QF_UFBV
formulas combining Bool-valued UFs and BV range constraints. The failure
came from broken model-trail reconstruction in `elim_unconstrained`,
where `ADD` entries were effectively turned into identity substitutions.
- **Root-cause fix: restore model-trail substitution composition**
- In `elim_unconstrained::update_model_trail`,
`generic_model_converter::ADD` entries now use `entry.m_def` (rewritten
through accumulated substitutions) instead of creating self-referential
const-to-const mappings.
- This re-enables correct back-substitution for eliminated unconstrained
terms during witness reconstruction.
- **Regression coverage: QF_UFBV + `sat.smt=true` + model validation**
- Added a focused unit test in `src/test/simplifier.cpp` for:
- Bool-valued UF predicate over BV vars
- split BV range constraints (`bvuge` / `bvult`)
- `:sat.smt true` and `:model_validate true`
- The test asserts the solver returns `sat` without emitting an
invalid-model error.
```cpp
// before (effectively no-op back-mapping)
new_def = m.mk_const(entry.m_f);
sub->insert(new_def, new_def, nullptr, nullptr);
// after (compose and apply recorded definition)
new_def = entry.m_def;
(*rp)(new_def);
sub->insert(m.mk_const(entry.m_f), new_def, nullptr, nullptr);
```
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
Recent GCC builds report two warning classes in core codepaths: a
possible uninitialized read in `bound_propagator::relevant_bound`, and
repeated `-Warray-bounds` diagnostics in `mpz_manager::get_sign_cell`
when materializing small integers into a reserved `mpz_cell`.
- **Warning cleanup in bound propagation**
- Initialize `interval_size` at declaration in
`src/ast/simplifiers/bound_propagator.cpp` so the compiler can prove
safety across templated `LOWER/UPPER` instantiations using `std::clamp`.
- Preserves existing control flow and refinement heuristics (`bounded`
remains the gate for interval-based logic).
- **Warning cleanup in mpz small-value cell materialization**
- In `src/util/mpz.h`, replace direct writes through `cell->m_digits[0]`
with writes via a derived digits pointer
(`reinterpret_cast<digit_t*>(cell + 1)`), avoiding zero-length
trailing-array indexing diagnostics.
- Keeps memory layout and semantics unchanged for the stack-reserved
`sign_cell` buffer.
- **Representative change**
```cpp
cell = reserve;
cell->m_size = 1;
digit_t* cell_digits = reinterpret_cast<digit_t*>(cell + 1);
cell_digits[0] = a.value() < 0 ? -a.value() : a.value();
```
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
This updates `src/solver/parallel_tactical2.cpp` with targeted
simplifications to align style and reduce local verbosity without
changing solver behavior. The changes are limited to member naming
consistency, cast style consistency, and a small `expr_ref` construction
cleanup.
- **`batch_manager` member naming consistency**
- Renamed the two outlier private fields to follow the existing `m_`
convention:
- `shared_clause_trail` → `m_shared_clause_trail`
- `shared_clause_set` → `m_shared_clause_set`
- Updated all corresponding reads/writes in clause collection, reset,
and clause-return paths.
- **Cast style consistency**
- Replaced C-style cast in thread count calculation with
`static_cast<unsigned>(...)` for consistency with style already used in
the file.
- **Simplified `expr_ref` construction in cube assembly**
- Collapsed default-init + assignment into direct construction when
pushing cube literals.
```cpp
// Before
expr_ref lit(g2l.to());
lit = g2l(cur->get_literal().get());
cube.push_back(std::move(lit));
// After
cube.push_back(expr_ref(g2l(cur->get_literal().get()), g2l.to()));
```
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
Discussion #9524 reported systematic TPTP result polarity errors
(notably `Unsatisfiable/Theorem` cases returned as `Satisfiable`) and
timeout-path instability in the frontend. The dominant issue was
semantic: free variables in FOF/CNF formulas were parsed as constants
instead of implicitly universally quantified variables.
- **Parser semantics: free variables now follow TPTP rules**
- In `src/cmd_context/tptp_frontend.cpp`, free
variables encountered while parsing an annotated formula are now
collected in formula scope and wrapped with a top-level `forall`.
- This applies to both term and atomic-formula paths, so variable
occurrences are treated consistently.
- Explicitly bound quantifier variables continue to take precedence over
implicit ones.
- **Scoped implementation cleanup**
- Added scoped state for implicit-variable collection to avoid leaking
parser state across formulas.
- Kept variable binding order stable so quantifier construction is
deterministic.
- **Timeout-path robustness**
- Updated frontend exception catches to `const&` in the TPTP stream
entrypoint to make timeout/error handling behavior consistent with
thrown exception forms.
- **Regression tests**
- Extended `src/test/tptp.cpp` with focused
cases for:
- FOF free-variable implicit universal quantification.
- CNF free-variable implicit universal quantification.
```tptp
cnf(c1,axiom, p(X)).
cnf(c2,axiom, ~ p(a)).
```
This now maps to `forall X. p(X)` plus `~p(a)`, yielding `% SZS status
Unsatisfiable` as expected.
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
`fpa2bv_converter::mk_to_real` computed `2^(1/|exp|)` instead of
`1/(2^|exp|)` for floats with negative exponents, causing the NRA solver
to reach contradictory conclusions and return spurious `unsat` for
satisfiable QF_FPLRA formulas.
## Root Cause
After the loop that evaluates `exp2 = |unbiased_exp|` as an integer, the
code took `1/exp2` (reciprocal of the integer) before calling
`mk_power`, yielding `2^(1/3)` instead of `2^(-3) = 1/8` for a float
with exponent -3:
```cpp
// Buggy
one_div_exp2 = mk_div(one, exp2); // 1/|exp|, not 1/2^|exp|
exp2 = mk_ite(exp_is_neg, one_div_exp2, exp2);
two_exp2 = mk_power(two, exp2); // 2^(1/3) ≠ 1/8 for exp=-3
```
## Fix
Compute the power of 2 first, then invert it:
```cpp
// Fixed
two_exp2 = mk_power(two, exp2); // 2^|exp|
one_div_two_exp2 = mk_div(one, two_exp2); // 1/(2^|exp|)
two_exp2 = mk_ite(exp_is_neg, one_div_two_exp2, two_exp2); // correct 2^exp
```
## Impact
- **QF_FPLRA**: `to_fp(RTZ, r)` with a symbolic real `r` constrained to
an interval containing a float's exact rational value now correctly
returns `sat`.
- **fp.to_real**: Fixes incorrect real-valued encoding for all floats
with negative exponents, including denormals (which adjust the exponent
by subtracting leading-zero count).
A regression test covering the reported case is added to
`src/test/fpa.cpp`.
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
This PR targets the main TFF frontend parsing failures: bare numeric
atoms were being treated as uninterpreted terms (`U`) in formula
position, decimal literals were not parsed, and `$uminus` was not
recognized as an arithmetic builtin. These issues caused widespread
parse/type failures in arithmetic-heavy TPTP inputs.
- **Numeric atom parsing in formulas (TFF)**
- Added a shared numeric-literal parser path for term/formula contexts.
- In atomic formulas, numeric LHS now parses as arithmetic numerals
before `=`/`!=` handling, avoiding `U` vs `Int/Real` mismatches.
- **Decimal literal support**
- Extended numeral parsing to accept `d.d` forms and construct `Real`
numerals.
- Keeps existing integer (`d`) and rational (`d/d`) behavior on the same
code path.
- **`$uminus` builtin support**
- Added explicit handling for `$uminus(<arg>)` in term parsing.
- Enforces arity 1 and arithmetic-argument checks; maps directly to
arithmetic unary minus.
- **Focused regression coverage**
- Added/updated TPTP unit cases for:
- bare integer inequality: `31 != 12`
- decimal arithmetic literal usage
- `$uminus` in arithmetic predicates
Example of now-supported inputs:
```tptp
tff(c1,conjecture, 31 != 12).
tff(c2,conjecture, ~ $less(-3.25,-8.69)).
tff(c3,conjecture, $less($uminus(2),0)).
```
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
* Initial plan
* Simplify parallel SMT code: clean comments and deduplicate stat computation
- Normalize tab to spaces on line 7 of smt_parallel_params.pyg
- Remove extraneous blank line after ++m_num_core_minimize_calls
- Replace informal retry-loop comments with professional descriptions
- Extract repeated safe-division pattern into safe_ratio lambda in backbones_worker::collect_statistics
Agent-Logs-Url: https://github.com/Z3Prover/z3/sessions/e0418d2f-7d4d-4980-897f-98d4057bddc3
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
* Final version of parallel architecture for FMCAD26 submission (#9475)
* setting up new backbone experiment
* fix phase scores bug
* debug crash from negated atoms
* backbone thread/global backbones in progress, does NOT compile yet
* debug, still need to add backbones worker as a new thread
* setting up complicated condition variable thing for backbones worker thread
* debug
* debug lock contention
* it's a little messy, but change how i'm checking backbones by initiating with batch check
* don't split on global backbones, share global backbones once detected. still need to prune search tree with backbones
* close global backbone branches in search tree
* fix backbone ranking (take average of bb age over cubes and incorporate hits/num cubes the bb appears in)
* add stats to backbone experiment
* gate the backbones experiment by local vs global
* update stats and fix bug about unsat core size=1 means global backbone
* phase negation ablation
* unforce phase ablation
* reset ablations
* add todo notes
* fix backbone aging
* first draft of Janota Alg 7
* process exactly 10 bb candidates in each batch
* fixing the Janota algorithm
* add backbone stats for Janota algorithm
* fix bug about global backbones not being checked unless local is also true
* hopefully fix bug about closing global backbones in search tree
* fix another bug in janota alg
* report random seed for debug
* print random seed for debug
* refactor janota alg code, still can't repro the crash
* fix some bugs in the janota algorithm
* try to fix weird memory leak thing with ramon/linux
* revert fix, it didn't work
* add second backbones thread
* increase chunk size when undef
* fix how the 2 backbone threads work on batches (they each race to finish the same batch). this was very complicated to code due to thread synchronization and while it runs there may be bugs
* update how we report stats for backbones
* first draft of doing the bb threads in neg and pos mode, needs revising
* fix some bugs in the positive version of the bb check, still need to review
* debug some more things in the positive bb worker
* keep bb candidates sorted, increase batch and chunk size
* try to resolve a couple of bugs
* fix very bad bug about backbones workers not doing anything
* ablate positive backbone thread
* fix how we record backbones in positive mode (shouldn't impact previous run)
* clarify code about adding found backbones
* add back the positive bb thread
* try to fix the random segfault bug + ablate the positive bb thread again
* clean up logs
* share clauses with bb threads
* fixes
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
* resolve deadlock
* add comment about SAT bb case
* todo comments
* complete TODOs in code, still need to debug bb threads
* debug bb threads, add bb_positive thread back in
* ablate bb_positive thread
* style
* configure num bb threads as param
* enable sat and unsat mode
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
* updates
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
* remove while true
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
* updates
* try to fix rewriter_exception bug
* possibly reduce code under lock when only 1 bb thread
* add some copilot-suggested optimizations
* add copilot suggestions to fix condition variable synchronization with bb threads
* revert changes that are too messy with the code
* ablate collect clauses
* ablate condition variable logic changes
* ablate reset batch
* revert ablation
* remove m_batch_in_progress that makes the bb threads wait until both have exited the batch after one signals cancel (can be long if one is stuck in ctx check)
* sharing theory lemmas
* finish setup for search tree thread modes, and fix local bb setup to pull from the global pool
* variable renames
* update bb hyperparams after copilot (hopefully??) ran tuning experiments
* fix possible AST manager bug
* ablate collect clauses
* remove bb collect shared clauses
* fix local bb experiment bug and reinstate collect clauses for global bb
* local bb cands are thread-local ablation
* remove thread-local local bb ablation
* fix bug in nonthread-local bb experiment
* fix more nonthreadlocal bb bugs
* try to fix local bb bug
* AST manager mismatch bugfix
* attempt to fix another canonicalization bug
* try another bugfix
* try another bugfix
* try yet another bugfix
* thread local bb ablation
* ablate force phase
* ablate set activity
* undo ablations since apparently it's not forcing phase or boosting activities
* remove old experiments
* try guarding m_birthdate size
* try to fix several bugs including with m_birthdate initialization and how we're storing original phases
* one more bugfix
* remove local bb experiment after negative signals on experiments, and change bb ranking to VSIDS scores as opposed to phase
* select bb polarity based on phase, not VSIDS
* first attempt with codex. Codex notes:
What changed:
- Each tree node now tracks:
- active worker count
- lease epoch
- cancel epoch
- get_cube() now hands each worker an explicit lease: (node, epoch, cancel_epoch).
- try_split() and backtrack() now operate on that lease, and the batch manager releases the worker’s lease under the tree lock before mutating the
node.
- If another worker closes the leased node or subtree, the batch manager cancels only the workers whose current leased nodes are now closed.
- Workers detect canceled leases after check(), reset their local cancel flag, abandon the stale lease, and continue instead of turning that into a
global exception.
- The “reopen immediately into the open queue” policy is preserved. I did not add a barrier waiting for all workers on a node to finish.
- Active-worker accounting is now separate from the open/active/closed scheduling status, so reopening a node no longer erases the fact that other
workers are still on it.
I also updated search_tree bookkeeping so:
- closure bumps node cancel/lease epochs
- active-node counting uses actual active-worker presence, not just status == active
* fix smts bugfix git merge issues with backtrack
* fix(parallel-smt): gate split/backtrack by lease epoch
What it changes:
- util/search_tree.h
- bumps node epoch on split
- threads epoch through should_split(...) and try_split(...)
- always records effort, but only split/reopen if the lease epoch still matches
- smt/smt_parallel.cpp
- requires is_lease_valid(..., lease.epoch) before backtrack(...)
- passes lease.epoch into m_search_tree.try_split(...)
* clean up code and add some comments
* fix bug about backtracking condition being too strict: The epoch guard should not block backtrack(...) the same way it blocks try_split(...). A stale worker that proves UNSAT for n should still be able to
close n, and that closure should then cancel the other workers on n and its subtree.
I changed smt/smt_parallel.cpp accordingly:
- try_split(...) still uses epoch to reject stale structural splits
- backtrack(...) no longer requires is_lease_valid(..., epoch); it only requires that the lease is not already canceled
So the intended asymmetry is now restored:
- stale split: reject
- stale unsat/backtrack: allow closure, then cancel affected workers
* ablate to no backtracking on stale leases
* fix merge
* revert codex change about exception handling
* fix linux bugs
* ablate backtrack gating
* attempt to fix linux crashes
* ablate backtracking on global bb
* the rare bb bug appears to be from creating the synthetic lease for a bb node and then backtracking on the synthetic lease. this is an attempt to fix it
* clean up code
* try to fix bug about active worker counts/lease accounting. current policy should hold: - stale leases: release/decrement
- canceled leases: do not release/decrement (just ignore, since we have an invariant that canceled leases mean closed nodes that are never revisited)
* delay premature root activation
* fix major semantic bug about threads continually choosing the root if their lease is reset
* fix cancellation to unknown status
* fix very bad bug about all threads needing to start at the root
* ablate active ranking: now nodes are only reopened if they are truly inactive (active worker count is 0)
* fix some bugs about leases
* ablate adding static effort only
* fix some bugs about leases
* don't explode effort for portfolio nodes
* fix: still accumulate per-node effort, but don't over-accumulate on portfolio solves
* restore dynamically scaled effort
* clean up merge from cherry pick
* tighten which nodes we detect for proven global bb closure (only detect nonclosed nodes)
* fix cancel to unknown exception on bb code
* lease cancellation doesn't touch rlimit now, it just sets max conflicts to 0. also fix a VERY BAD BUG about effort never being updated until all leases are done on a node, which meant we never left the root
* cross-thread modification of max conflicts is unsafe, so create an atomic lease canceled variable that's checked in ctx where max conflicts is also checked
* move atomic lease check in the context to the more global get_cancel_flag function
* Fix new SIGSEGV. The root cause: get_cancel_flag() is called from within propagation loops (mid-BCP, mid-equality-propagation, mid-atom-propagation). When it returns true there, the solver exits early and leaves the context in an intermediate state —
propagation queues partially processed, theory state potentially inconsistent with boolean state.
For the global cancel (m.limit().cancel()), this is harmless: the worker exits entirely and the context is destroyed. Intermediate state doesn't matter.
For a lease cancel, the context is reused — the worker gets a new cube and calls ctx->check() again on the same context object. Re-entering check() on a context interrupted mid-propagation causes it to access that corrupted intermediate
state → SIGSEGV.
The m_max_conflicts check is the only checkpoint that's safe for re-entry: it only fires post-conflict-resolution, pre-decision, when propagation queues are empty and theory state is consistent.
Fix: Remove m_lease_canceled from get_cancel_flag(). Keep it only at safe, between-phase checkpoints where the context is in a known-consistent state. The result is two safe checkpoints for m_lease_canceled: after each conflict (post-resolution, queues empty) and before each theory final check (not yet entered the theory). Neither interrupts the solver mid-mutation. The SIGSEGV should be
gone, and NIA performance should improve because long theory final checks (where NIA burns most time) are now preemptable before they start.
* fix new inconsistent theory bug: The problem is returning FC_GIVEUP from inside final_check() after some theories have already run final_check_eh() and pushed propagations into the queue. Those pending propagations reference context state that gets invalidated on the next check() call → SIGSEGV. The fix: check m_lease_canceled before entering final_check() in bounded_search(), never from inside it. That way the context is always in a clean pre-final-check state when we bail out. This is safe: decide() returned false (all variables assigned, no pending propagations), theories haven't been touched yet, context is in a fully consistent state. For NIA, this is still a meaningful win — we avoid entering expensive arithmetic final checks entirely when the lease is already canceled.
* ablate lease cancel check in ctx final theory check due to crash (??)
* gate bb-specific code behind param
* try some possible bugfixes for the SIGSEGV
* ablate some bugfixes
* remove second lease cancel check in smt_context, not sure it's safe. only check where we do the max conflicts check
* restore exception handling logic to master branch
* restore reslimit cancels since the bug appears to be latent
* add bookkeeping for race condition of multiple lease cancels on a single node (messes with reslimit)
* restore unrelated code to master
* restore local bb experiment
* ablate restore local bb phase/activity after search
* undo local bb ablation about resetting phase/activities, and reinstate the shared lemmas of length 2 and 3 experiment
* re-ablate restore local bb phase/activity after search, due to positive experimental signal on smt comp LIA
* change split policy from lightweight proof skeleton to VSIDS. NOTE: enabling local bb will mess with this since we aren't restoring activities right now
* backtrack more aggressively in search tree: close matching external targets (i.e. repeat literals on other branches)
* find_shallowest_timed_out_leaf_depth is now shallowest_unsolved_leaf_depth and is based on num activations > 0, not effort > 0
* fix soundness bug about closing external targets with nontrivial cores
* epoch is no longer needed, just cancel epoch. remove epoch
* core minimize draft, and fix bug in tree expansion policy about shallowest leaf depth needing to be timed out
* core minimization thread (remove search tree worker core min since it was blocking)
* collect shared clauses in core min thread
* bugfixes in core min algorithm
* fix more bugs in core min algorithm
* more core min bugfixes based on feedback and increase m_core_minimize_conflict_budget to 5000 (might need to increase it more for harder SMT COMP problems)
* fix bug in backtrack_unlocked
* fix compiler error
* more core min bugfix from nikolaj
* clean up
* failed literal probe collects shared clauses
* core min thread shares units
* failed literal thread now tests the top 500 global bb cands each round, instead of scanning everything. on QF_LIA/Sz128_2823.smt2 this got us from 51->75 discovered backbones
* remove core minimizer unit sharing (experimentally showed no effect)
* core minimization thread candidate cores are ranked first by depth (deepest->shallowest) then by size (largest->smallest). also, the core's node is set to the deepest node in the core which is not necessarily the search node (slightly semantically stronger). finally, clean up bb/failed literal params
* failed literal probe runs continuously
* fix a lot of things about the FL thread and how bb cands are being processed. also re-add the local bb experiment for ablations
* ablate continuous run on FL thread (up to the max BM examples)
* ablate m_max_failed_literal_prioritized_size back to 100
* redesign FL probe again
* ablate FL continuous probe
* reinstate continuous FL probe after positive NIA signal, but also re-add the BM maintaining 1000 bb cands and these are used as backups instead of just looping over the top 100 all the time
* change FL thread scheduling to attempt to do less duplicate checks
* restore some old FL behavior
* batch manager dedups global bb cands by atoms, not literals. if we have 2 of the same atom, the polarity with higher rank is kept for the stored bb candidate literal.
* ablations
* reinstate FL check for new batch with epochs, before merge, this is a temp branch
* undo comment out cv call
* restore old changes
* bb batch mode is continuous, with checks for new candidates after the first round
* separate FL probe into 2 threads for pos and neg mode
* attempt to add unit-based bb detection in chunking mode
* add bb detection via workers' units. also rename some variables
* modify the fallback policies for bb detection in batch mode but also in FL mode
* ablate continuous checking for batch bb mode
* major refactor for bb code. we share units and collect them as pruning bb's in all threads now (including core min and bb threads). we always check for units in batch mode now. finally, the batch mode fallback is now FL probing
* bb candidates are atoms, not literals, since we currently test both polarities in parallel.
batch mode retry terminates if we made zero progress after a retry round to avoid resource stress
fix bug about bb ranking being backwards for how we process them
* fix polarity bug for FL mode dedup
* restore polarity-sensitive bb candidate ranking via lits
* ablate sharing non-worker units
* ablate share unit as bb
* ablate incomplete-theory give-up paths
* restore unit sharing as bb collection on workers
* restore incomplete-theory give-up paths
* clean up code
* clean up code
* clean up code
---------
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MacBook-Pro.local>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.lan1>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Ilana Shapiro <ilanashapiro@Mac.localdomain>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.localdomain>
Co-authored-by: Ilana Shapiro <ilanashapiro@Mac.lan1>
* add ablate_backtracking experiment
---------
Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MacBook-Pro.local>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.lan1>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Ilana Shapiro <ilanashapiro@Mac.localdomain>
Co-authored-by: Ilana Shapiro <ilanashapiro@Ilanas-MBP.localdomain>
Co-authored-by: Ilana Shapiro <ilanashapiro@Mac.lan1>
mk_preamble_tactic in qflia_tactic.cpp constructed lia2card with a
throttled params_ref but did not wrap the call in using_params, so
when the preamble is invoked standalone the throttle is silently
clobbered by and_then's ambient param propagation: each child gets
updt_params(outer_p) called on it, re-reading lia2card.max_range
from the outer params (default 101) and discarding the constructor
override.
mk_qflia_tactic masks the bug because it wraps the whole chain in
using_params(..., main_p) where main_p also carries
lia2card.max_range=1. But QF_UFNIA goals reach mk_preamble_tactic
through the fall-through tail in mk_default_tactic
(default_tactic.cpp:52) without that outer wrap: is_qfnia_probe
rejects goals containing UF, so QF_UFNIA does not route through
mk_qfnia_tactic and instead lands on the unguarded preamble. Any
integer variable with concrete range hi-lo <= 101 then gets
one-hot encoded into ~hi-lo indicator Booleans plus a sum-of-ITEs,
inflating SAT search and bloating each NLA refutation that touches
the partition.
Fix: wrap the call with using_params(mk_lia2card_tactic(m),
lia2card_p) so the throttle survives ambient propagation.
Verified on a Certora QF_UFNIA VC with a 0..98 integer: metrics now
match running with explicit tactic.lia2card.max_range=0 (mk-bool-var,
decisions, nlsat-restarts all within run-to-run noise of the
workaround), confirming the built-in throttle is finally effective.
This mirrors the pattern from commit 87e45accd ("Throttle lia2card
in QF_NIA preamble", #9362) which fixed the same bug in
mk_qfnia_preamble but did not propagate the fix to the QF_LIA
preamble. The original throttle parameters (max_range=1,
max_ite_nesting=1) were introduced by Nikolaj's commit 99cbfa715
("Add a sharp throttle to lia2card tactic to control overhead in
default tactic mode") in Feb 2025; that commit set the params at
construction time, which works under mk_qflia_tactic's outer
using_params wrap but not under standalone invocation.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* Compute term generations based on minimal match
* Tidy up get_*_f_app
* Update euf_mam to the minimum generation number among matches
* Update euf_mam.cpp
* Move the UNREACHABLE() test to smt_mam.cpp
* Enforce stickiness of max-generation
* Add current generation tracking to bind structure
* Fix build error
---------
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Two TRACE blocks under the existing nla_solver tag:
1. theory_lra::false_case_of_check_nla emits a "varmap:" line for each
NLA lemma, listing j-var → SMT-name mappings for variables in the
lemma's collect_vars set. Lets lemur nla resolve the LP-internal
j-numbering back to the original SMT term names when displaying
lemmas. Without this, lemma-level analysis has to either guess at
variable identities (and j-numbers are reused across nlsat
invocations under backtracking — see j-vars-unstable note) or use a
different trace tag entirely (-tr:nra) for stable algebraic-number
IDs.
2. nla_grobner emits a "grobner-linear-eq:" line at each call to
add_term + update_column_type_and_bound that produces a Linear
Propagation row from completion. Lets us count Gröbner's effective
contribution to the LP tableau independently of the lemma stream.
Useful when investigating Gröbner-deficit hypotheses in NLA cascade
diagnosis.
Both are pure trace emission, behind TRACE(nla_solver, ...). Zero
runtime cost when tracing is off; no semantic change.
* Add adaptive growth knobs for Gröbner under arith.nl.grobner_adaptive
When enabled, the per-call growth budget (m_eqs_growth, m_expr_size_growth,
m_expr_degree_growth, m_max_simplified) is scaled by m_growth_boost:
- two consecutive productive runs bump the boost by 3/2 (cap 4x)
- a miss resets the streak and decays the boost toward 1.0x by 1/4 of excess
Default is off; the existing miss-frequency throttle (m_quota / m_delay_base)
is unchanged, so this only affects per-call power, not call frequency.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* Update src/params/smt_params_helper.pyg
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: Arie Gurfinkel <arie.gurfinkel@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
theory_lra reports num_final_check_levels()==2: full nlsat (m_nra.check)
only runs at level >= 2. When a level-1 round-trip ends with FC_GIVEUP
and the loop escalates to level 2, the previously accumulated 'result',
'f', and 'm_incomplete_theories' were retained, so a subsequent
successful (FC_DONE) round at level 2 was still reported as
(incomplete (theory arithmetic)). Reset that state on each level
escalation.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Add dual-row shared-factor sandwich for NLA bound propagation
When enabled via arith.nl.monomial_sandwich (default off), monomial_bounds
finds LP term columns whose term has shape a_m * m + a_v * v with exactly
two variables — both factors of a binary monomial m = u*v. The term column's
bound bounds (a_m * m + a_v * v); substituting m = u*v gives v * (a_m*u + a_v),
and sign-aware interval division by v plus an affine shift yields a numeric
bound on u. The derived interval is fed to the existing propagate_value path
so the lemma channel and integer rounding logic are shared with the rest of
NLA's forward/backward propagation; no new emit code.
Catches conflicts of the form
α_v1 * v + α_m * m ≥ k1
α_v2 * v + α_m * m ≤ k2
that today require nlsat (when no single row alone yields infeasibility but
their conjunction tightly bounds u after factoring v).
Scope: binary monomials only (m.size()==2, no squares); cap of 16 term-columns
scanned per call; one lemma per (u,v) attempt to keep the lemma channel quiet.
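The interval derivation above can be sketched numerically for the lower-bound side. This is a self-contained illustration of the sign-aware division and affine shift, not the actual monomial_bounds code; it assumes a sign-determined positive v and a_m > 0:

```cpp
// Sketch: from a row bound  a_m*(u*v) + a_v*v >= k  and a sign-determined
// interval v in [v_lo, v_hi] with v_lo > 0, factor out v:
//   v * (a_m*u + a_v) >= k
// then divide by v sign-aware and shift affinely to bound u (a_m > 0).
inline double sandwich_lower_bound(double a_m, double a_v, double k,
                                   double v_lo, double v_hi) {
    // Let w = a_m*u + a_v, so w >= k / v for the (unknown) v in [v_lo, v_hi].
    // The safe bound divides by v_hi when k >= 0 and by v_lo when k < 0.
    double w_lo = k >= 0 ? k / v_hi : k / v_lo;
    // affine shift: u >= (w_lo - a_v) / a_m
    return (w_lo - a_v) / a_m;
}
```

For example, with u*v >= 4 and v in [1, 2], the derived bound is u >= 2; the real code feeds such intervals to propagate_value rather than returning them directly.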
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* Add arith.nl.order.binomial_sign flag (default true)
Granular gate for order_lemma_on_binomial_sign — the only order family that
embeds a model-snapshot literal (x ≷ val(x)) in the lemma body. Disabling it
keeps the always-good structural mon-ol family running while removing the
SAT-splitter shape that cascades under model perturbations (e.g., from
arith.nl.monomial_sandwich tightening factor bounds).
Default true preserves master behaviour; the flag is intended as an
experimental knob to measure how much of an observed cascade is specifically
attributable to the binomial-sign splitter vs. the structural cancellation
lemmas in the same module.
See ord-binom-opportunities.md for the full gap analysis and the
deterministic-replacement directions (sandwich, McCormick) that would let
this flag eventually default to false without regressing leaves where
ord-binom currently carries the proof.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* Add sign-pinned binomial bound for NLA (Opportunity 1 from ord-binom doc)
When enabled via arith.nl.monomial_binomial_sign (default off), monomial_bounds
adds a third pass alongside propagate_down (existing) and propagate_shared_factor
(sandwich). For a binary monomial m = u*v in m_to_refine whose model value mv
disagrees with val(u)*val(v), and where v has a determined sign:
1. synthesize a one-sided interval for m.var() at mv (no deps; the snapshot
enters as a literal in the lemma body, not as an antecedent)
2. divide by v's interval (sign-aware via dep.div<with_deps>) to get a
deterministic interval for u
3. emit a propagate_value-style lemma whose body is
m.var() < mv (or > mv) ∨ u-bound
conditioned on v's bound witness
Targets the case ord-binom currently handles: the factors have determined
signs, and m.var() may have no LP bound. The clause is sound modulo the
monomial definition (the same condition that propagate_down,
propagate_shared_factor, and ord-binom already rely on).
A new throttle kind MONOMIAL_BINOMIAL_SIGN keyed on (m.var, u, v, direction)
prevents cascading: without it, each new val(m.var()) snapshot would re-emit
across model changes the same way ord-binom does.
Validated via smt.arith.validate=true: 0 soundness errors across the
32-leaf test corpus.
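The throttle keying can be sketched as a set of seen (m.var, u, v, direction) tuples; the snapshot value is deliberately absent from the key, so a new val(m.var()) cannot re-trigger the lemma. The class and type names below are illustrative, not the actual NLA throttle infrastructure:

```cpp
#include <set>
#include <tuple>

// Sketch of the MONOMIAL_BINOMIAL_SIGN throttle: at most one emission
// per (m.var, u, v, direction) key, independent of the model snapshot.
using lpvar = unsigned;
enum class bound_dir { lower, upper };

class binomial_sign_throttle {
    std::set<std::tuple<lpvar, lpvar, lpvar, bound_dir>> m_seen;
public:
    // Returns true the first time a key is seen; false on repeats,
    // so repeated model snapshots cannot cascade into repeated lemmas.
    bool try_emit(lpvar m_var, lpvar u, lpvar v, bound_dir d) {
        return m_seen.insert({m_var, u, v, d}).second;
    }
};
```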
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* Add McCormick box-corner tangent points (Opportunity 2 from ord-binom doc)
When enabled via arith.nl.tangents.box_corners (default off, sub-flag of
arith.nl.tangents), tangent_imp::get_points selects m_a, m_b at the corners
of the bound box [x_lo, x_hi] × [y_lo, y_hi] instead of the model-centered
points val(x) ± delta. The selection follows the classical McCormick
under/over envelope:
- m_below=true (under-approximation):
m_a = (x_lo, y_lo), m_b = (x_hi, y_hi)
- m_below=false (over-approximation):
m_a = (x_lo, y_hi), m_b = (x_hi, y_lo)
The existing generate_plane already produces the McCormick linear form
xy ≷ pl.y·x + pl.x·y − pl.x·pl.y at any chosen point pl. push_point is
skipped in box-corner mode: corners are extremes, so doubling the offset
moves out of the box and would invalidate the McCormick property.
Falls back to the existing model-driven point selection when either factor
has an unbounded side or the box is degenerate (single-point in a
dimension).
Soundness — non-strict inequality at corners. The classical model-driven
flow uses pl strictly in the interior of the box, so generate_plane emits
xy > T (strict). At the box corners the tangent meets the surface along
the box's edges (xy = T when x = pl.x or y = pl.y), so the strict
inequality is violated by any model with x at the box boundary. A new
m_pl_strict_interior member, set false on a successful set_box_corners(),
switches generate_plane's emission to ≥/≤ (non-strict). The model-driven
path keeps strict — its push_point + plane_is_correct_cut chain already
guarantees pl is interior.
Validated via smt.arith.validate=true: 0 validate_conflict() failures
across the 32-leaf test corpus with box_corners=true.
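The under-envelope case can be checked numerically. The sketch below evaluates generate_plane's linear form T(x, y) = pl.y·x + pl.x·y − pl.x·pl.y at the two under-approximation corners and verifies xy ≥ T holds non-strictly on the box (function names are illustrative, not the actual tangent_imp code):

```cpp
// Sketch of the box-corner selection and the McCormick plane it induces.
struct point { double x, y; };

// generate_plane's linear form at point pl: T(x,y) = pl.y*x + pl.x*y - pl.x*pl.y
inline double mccormick_plane(point pl, double x, double y) {
    return pl.y * x + pl.x * y - pl.x * pl.y;
}

// Under-approximation (m_below = true) corners: (x_lo,y_lo) and (x_hi,y_hi).
// The classical envelope guarantees x*y >= T(x,y) for all (x,y) in the box,
// with equality on the box's edges -- hence the non-strict inequality.
inline bool under_envelope_holds(double x_lo, double x_hi,
                                 double y_lo, double y_hi,
                                 double x, double y) {
    point a{x_lo, y_lo}, b{x_hi, y_hi};
    return x * y >= mccormick_plane(a, x, y)
        && x * y >= mccormick_plane(b, x, y);
}
```

Note that at a corner such as (x_lo, y_lo) the product equals the plane value exactly, which is precisely why m_pl_strict_interior must switch the emission to non-strict.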
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Updates to nlsat polynomial simplification introduced checkpoints. These can
throw exceptions (for example, when a timeout is set). The code that uses them
was not properly protected against exceptions, and could not distinguish
timeout-based tactic cancellation from genuine exceptions that should
terminate solving altogether.
See updates such as: 117da362f0
1. Unit extraction could skip some units.
2. Create a shared method for checking satisfiability that handles the
exception cases. These cases are the same among different workers, so the
method is shared in the batch manager.
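The shared check described in point 2 can be sketched as a wrapper that translates cancellation into an "unknown" result while letting genuine errors propagate. The exception and result types below are illustrative stand-ins for Z3's own, not the actual batch-manager API:

```cpp
#include <stdexcept>

// Sketch of a shared "check satisfiability" wrapper: a checkpoint
// cancellation (e.g. a timeout) yields l_undef so the worker keeps going;
// any other exception propagates and terminates solving altogether.
enum class lbool { l_false, l_undef, l_true };

struct canceled_exception : std::runtime_error {  // stand-in for Z3's cancel
    canceled_exception() : std::runtime_error("canceled") {}
};

template <typename Solver>
lbool checked_check_sat(Solver& s) {
    try {
        return s.check();
    }
    catch (canceled_exception const&) {
        return lbool::l_undef;  // timeout-based tactic: not a real error
    }
    // anything else (bad_alloc, internal errors) is deliberately not
    // caught here and terminates solving.
}
```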