
Parallel solving (#7860)

* very basic setup

* ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* respect smt configuration parameter in elim_unconstrained simplifier

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* indentation

* add bash files for test runs

* add option to selectively disable variable solving for only ground expressions

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove verbose output

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix #7745

axioms for len(substr(...)) escaped due to nested rewriting

* ensure atomic constraints are processed by arithmetic solver

* #7739 optimization

add simplification rule for at(x, offset) = ""

Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions.
The example highlights some opportunities for simplification, notably at(..) = "".
The example is solved in both versions after adding this simplification.
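
As a side note, a small z3py check of the equivalence this rule exploits (not code from this change; str.at(x, i) is substr(x, i, 1), which is empty exactly when the offset is out of range):

    from z3 import String, Int, StringVal, Length, SubString, Solver, Or

    x, i = String('x'), Int('i')
    at_x_i = SubString(x, i, 1)                  # str.at(x, i) is defined as substr(x, i, 1)
    out_of_range = Or(i < 0, i >= Length(x))

    s = Solver()
    # the rule: at(x, i) = "" holds exactly when the offset is out of range
    s.add((at_x_i == StringVal("")) != out_of_range)
    print(s.check())                             # expected: unsat, i.e. the two sides are equivalent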

* fix unsound len(substr) axiom

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* FreshConst is_sort (#7748)

* #7750

add pre-processing simplification

* Add parameter validation for selected API functions

* updates to ac-plugin

fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop.

* enable passive, add check for bloom up-to-date

* add top-k fixed-sized min-heap priority queue for top scoring literals
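
For reference, the idea in a small generic sketch (plain Python, not the Z3 implementation): keep a min-heap of size k whose root is the weakest of the kept literals; a new literal only displaces it when it scores higher.

    import heapq

    def top_k_update(heap, k, literal, score):
        # heap holds (score, literal); heap[0] is the lowest score among the kept top-k
        if len(heap) < k:
            heapq.heappush(heap, (score, literal))
        elif score > heap[0][0]:
            heapq.heapreplace(heap, (score, literal))

    heap, k = [], 3
    for lit, score in [("p", 0.9), ("q", 0.1), ("r", 0.5), ("s", 0.7), ("t", 0.2)]:
        top_k_update(heap, k, lit, score)
    print(sorted(heap, reverse=True))            # the three highest-scoring literals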

* set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still

* fix bug in parallel solving batch setup

* fix bug

* allow for internalize implies

* disable pre-processing during cubing

* debugging

* remove default constructor

* remove a bunch of string copies

* Update euf_ac_plugin.cpp

include reduction rules in forward simplification

* Update euf_completion.cpp

try out restricting the scope of equalities added by instantiation

* Update smt_parallel.cpp

Drop non-relevant units from shared structures.

* process cubes as lists of individual lits

* merge

* Add support for Algebraic Datatypes in JavaScript/TypeScript bindings (#7734)

* Initial plan

* Add datatype type definitions to types.ts (work in progress)

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete datatype type definitions with working TypeScript compilation

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Implement core datatype functionality with TypeScript compilation success

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete datatype implementation with full Context integration and tests

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* chipping away at the new code structure

* comments

* debug infinite recursion and split cubes on existing split atoms that aren't in the cube

* share lemmas, learn from unsat core, try to debug a couple of things, there was a subtle bug that i have a hard time repro'ing

* merge

* fix #7603: race condition in Ctrl-C handling (#7755)

* fix #7603: race condition in Ctrl-C handling

* fix race in cancel_eh

* fix build

* add arithmetic saturation

* add an option to register callback on quantifier instantiation

Suppose a user propagator encodes axioms using quantifiers and uses E-matching for instantiation. If it wants to implement a custom priority scheme or drop some instances based on internal checks, it can register a callback on quantifier instantiation.
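
A rough sketch of how such a callback could look from Python; the C API entry point added here is Z3_solver_propagate_on_binding, but the Python-level registration name and the callback signature below are assumptions for illustration only:

    from z3 import SimpleSolver, UserPropagateBase, Function, ForAll, Int, IntSort

    class InstanceFilter(UserPropagateBase):
        def __init__(self, s):
            super().__init__(s)
            # hypothetical registration; the actual Python wrapper name may differ
            self.add_on_binding(self.on_binding)

        def on_binding(self, quantifier, binding):
            # inspect the candidate instantiation; return False to drop it,
            # or apply an internal priority scheme before admitting it
            return True

    s = SimpleSolver()
    filt = InstanceFilter(s)
    f = Function('f', IntSort(), IntSort())
    x = Int('x')
    s.add(ForAll([x], f(x) >= 0), f(3) < 10)
    print(s.check())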

* missing new closure

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* add Z3_solver_propagate_on_binding to ml callback declarations

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* add python file

Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local>

* debug under defined calls

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* more untangle params

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* precalc parameters to define the eval order

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* remove a printout

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* rename a Python file

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* add on_binding callbacks across APIs

update release notes,
add to Java, .Net, C++

* use jboolean in Native interface

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* register on_binding attribute

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix java build for java bindings

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* avoid interfering side-effects in function calls

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove theory_str and classes that are only used by it

* remove automata from python build

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove ref to theory_str

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* get the finest factorizations before project

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* rename add_lcs to add_lc

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* resolve bad bug about l2g and g2l translators using wrong global context. add some debug prints

* initial attempt at dynamically switching from greedy to frugal splitting strategy in return_cubes. need to test. also there is some bug where the threads take forever to cancel?

* Update RELEASE_NOTES.md

* resolve bug about not translating managers correctly for the second phase of the greedy cubing, and the frugal fallback

* remove unused square-free check

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* add some debug prints and implement internal polynomial fix

* restore the square-free check

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* add some comments and debug m_assumptions_used

* redo greedy->frugal strategy so we don't split on existing cubes in frugal at all (eliminate the incorrect/wasteful step by processing current batch first)

* set up initial scaffolding for sharing clauses between threads and batch manager. needs some reworking/debug still

* Add .github/copilot-instructions.md with comprehensive Z3 development guide (#7766)

* Initial plan

* Add comprehensive .github/copilot-instructions.md with validated build commands and timing

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Remove test_example binary file from repository

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Bump actions/checkout from 4 to 5 (#7773)

Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* turn off logging at level 0 for testing

* add max thread conflicts backoff

* Parallel solving (#7775)

* very basic setup

* very basic setup (#7741)

* add score access and reset

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* added notes

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* Update PARALLEL_PROJECT_NOTES.md

* Update PARALLEL_PROJECT_NOTES.md

* Update PARALLEL_PROJECT_NOTES.md

* add bash files for test runs

* fix compilation

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* more notes

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* Update PARALLEL_PROJECT_NOTES.md

* Update PARALLEL_PROJECT_NOTES.md

* Update PARALLEL_PROJECT_NOTES.md

* Update PARALLEL_PROJECT_NOTES.md

* Update PARALLEL_PROJECT_NOTES.md

* Update PARALLEL_PROJECT_NOTES.md

* Update PARALLEL_PROJECT_NOTES.md

* Update PARALLEL_PROJECT_NOTES.md

* Update PARALLEL_PROJECT_NOTES.md

* add top-k fixed-sized min-heap priority queue for top scoring literals

* fixed-size min-heap for tracking top-k literals (#7752)

* very basic setup

* ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* respect smt configuration parameter in elim_unconstrained simplifier

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* indentation

* add bash files for test runs

* add option to selectively disable variable solving for only ground expressions

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove verbose output

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix #7745

axioms for len(substr(...)) escaped due to nested rewriting

* ensure atomic constraints are processed by arithmetic solver

* #7739 optimization

add simplification rule for at(x, offset) = ""

Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions.
The example highlights some opportunities for simplification, notably at(..) = "".
The example is solved in both versions after adding this simplification.

* fix unsound len(substr) axiom

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* FreshConst is_sort (#7748)

* #7750

add pre-processing simplification

* Add parameter validation for selected API functions

* updates to ac-plugin

fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop.

* enable passive, add check for bloom up-to-date

* add top-k fixed-sized min-heap priority queue for top scoring literals

---------

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com>

* set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still

* fix bug in parallel solving batch setup

* fix bug

* debugging

* process cubes as lists of individual lits

* Parallel solving (#7756)

* very basic setup

* ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* respect smt configuration parameter in elim_unconstrained simplifier

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* indentation

* add bash files for test runs

* add option to selectively disable variable solving for only ground expressions

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove verbose output

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix #7745

axioms for len(substr(...)) escaped due to nested rewriting

* ensure atomic constraints are processed by arithmetic solver

* #7739 optimization

add simplification rule for at(x, offset) = ""

Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions.
The example highlights some opportunities for simplification, notably at(..) = "".
The example is solved in both versions after adding this simplification.

* fix unsound len(substr) axiom

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* FreshConst is_sort (#7748)

* #7750

add pre-processing simplification

* Add parameter validation for selected API functions

* updates to ac-plugin

fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop.

* enable passive, add check for bloom up-to-date

* add top-k fixed-sized min-heap priority queue for top scoring literals

* set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still

* fix bug in parallel solving batch setup

* fix bug

* allow for internalize implies

* disable pre-processing during cubing

* debugging

* process cubes as lists of individual lits

---------

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com>

* snapshot

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* pair programming

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* pair programming

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* merge

* chipping away at the new code structure

* Parallel solving (#7758)

* very basic setup

* ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* respect smt configuration parameter in elim_unconstrained simplifier

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* indentation

* add bash files for test runs

* add option to selectively disable variable solving for only ground expressions

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove verbose output

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix #7745

axioms for len(substr(...)) escaped due to nested rewriting

* ensure atomic constraints are processed by arithmetic solver

* #7739 optimization

add simplification rule for at(x, offset) = ""

Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions.
The example highlights some opportunities for simplification, notably at(..) = "".
The example is solved in both versions after adding this simplification.

* fix unsound len(substr) axiom

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* FreshConst is_sort (#7748)

* #7750

add pre-processing simplification

* Add parameter validation for selected API functions

* updates to ac-plugin

fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop.

* enable passive, add check for bloom up-to-date

* add top-k fixed-sized min-heap priority queue for top scoring literals

* set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still

* fix bug in parallel solving batch setup

* fix bug

* allow for internalize implies

* disable pre-processing during cubing

* debugging

* remove default constructor

* remove a bunch of string copies

* Update euf_ac_plugin.cpp

include reduction rules in forward simplification

* Update euf_completion.cpp

try out restricting the scope of equalities added by instantiation

* Update smt_parallel.cpp

Drop non-relevant units from shared structures.

* process cubes as lists of individual lits

* merge

* Add support for Algebraic Datatypes in JavaScript/TypeScript bindings (#7734)

* Initial plan

* Add datatype type definitions to types.ts (work in progress)

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete datatype type definitions with working TypeScript compilation

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Implement core datatype functionality with TypeScript compilation success

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete datatype implementation with full Context integration and tests

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* chipping away at the new code structure

---------

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com>
Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* updates

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* comments

* debug infinite recursion and split cubes on existing split atoms that aren't in the cube

* share lemmas, learn from unsat core, try to debug a couple of things, there was a subtle bug that i have a hard time repro'ing

* Parallel solving (#7759)

* very basic setup

* ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* respect smt configuration parameter in elim_unconstrained simplifier

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* indentation

* add bash files for test runs

* add option to selectively disable variable solving for only ground expressions

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove verbose output

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix #7745

axioms for len(substr(...)) escaped due to nested rewriting

* ensure atomic constraints are processed by arithmetic solver

* #7739 optimization

add simplification rule for at(x, offset) = ""

Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions.
The example highlights some opportunities for simplification, notably at(..) = "".
The example is solved in both versions after adding this simplification.

* fix unsound len(substr) axiom

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* FreshConst is_sort (#7748)

* #7750

add pre-processing simplification

* Add parameter validation for selected API functions

* updates to ac-plugin

fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop.

* enable passive, add check for bloom up-to-date

* add top-k fixed-sized min-heap priority queue for top scoring literals

* set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still

* fix bug in parallel solving batch setup

* fix bug

* allow for internalize implies

* disable pre-processing during cubing

* debugging

* remove default constructor

* remove a bunch of string copies

* Update euf_ac_plugin.cpp

include reduction rules in forward simplification

* Update euf_completion.cpp

try out restricting the scope of equalities added by instantiation

* Update smt_parallel.cpp

Drop non-relevant units from shared structures.

* process cubes as lists of individual lits

* merge

* Add support for Algebraic Datatypes in JavaScript/TypeScript bindings (#7734)

* Initial plan

* Add datatype type definitions to types.ts (work in progress)

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete datatype type definitions with working TypeScript compilation

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Implement core datatype functionality with TypeScript compilation success

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete datatype implementation with full Context integration and tests

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* chipping away at the new code structure

* comments

* debug infinite recursion and split cubes on existing split atoms that aren't in the cube

* share lemmas, learn from unsat core, try to debug a couple of things, there was a subtle bug that i have a hard time repro'ing

---------

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com>
Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* updates

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* simplify output

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* merge

* resolve bad bug about l2g and g2l translators using wrong global context. add some debug prints

* initial attempt at dynamically switching from greedy to frugal splitting strategy in return_cubes. need to test. also there is some bug where the threads take forever to cancel?

* Parallel solving (#7769)

* very basic setup

* ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* respect smt configuration parameter in elim_unconstrained simplifier

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* indentation

* add bash files for test runs

* add option to selectively disable variable solving for only ground expressions

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove verbose output

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix #7745

axioms for len(substr(...)) escaped due to nested rewriting

* ensure atomic constraints are processed by arithmetic solver

* #7739 optimization

add simplification rule for at(x, offset) = ""

Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions.
The example highlights some opportunities for simplification, notably at(..) = "".
The example is solved in both versions after adding this simplification.

* fix unsound len(substr) axiom

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* FreshConst is_sort (#7748)

* #7750

add pre-processing simplification

* Add parameter validation for selected API functions

* updates to ac-plugin

fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop.

* enable passive, add check for bloom up-to-date

* add top-k fixed-sized min-heap priority queue for top scoring literals

* set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still

* fix bug in parallel solving batch setup

* fix bug

* allow for internalize implies

* disable pre-processing during cubing

* debugging

* remove default constructor

* remove a bunch of string copies

* Update euf_ac_plugin.cpp

include reduction rules in forward simplification

* Update euf_completion.cpp

try out restricting the scope of equalities added by instantiation

* Update smt_parallel.cpp

Drop non-relevant units from shared structures.

* process cubes as lists of individual lits

* merge

* Add support for Algebraic Datatypes in JavaScript/TypeScript bindings (#7734)

* Initial plan

* Add datatype type definitions to types.ts (work in progress)

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete datatype type definitions with working TypeScript compilation

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Implement core datatype functionality with TypeScript compilation success

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete datatype implementation with full Context integration and tests

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* chipping away at the new code structure

* comments

* debug infinite recursion and split cubes on existing split atoms that aren't in the cube

* share lemmas, learn from unsat core, try to debug a couple of things, there was a subtle bug that i have a hard time repro'ing

* merge

* resolve bad bug about l2g and g2l translators using wrong global context. add some debug prints

* initial attempt at dynamically switching from greedy to frugal splitting strategy in return_cubes. need to test. also there is some bug where the threads take forever to cancel?

---------

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com>
Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* resolve bug about not translating managers correctly for the second phase of the greedy cubing, and the frugal fallback

* Parallel solving (#7771)

* very basic setup

* ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* respect smt configuration parameter in elim_unconstrained simplifier

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* indentation

* add bash files for test runs

* add option to selectively disable variable solving for only ground expressions

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove verbose output

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix #7745

axioms for len(substr(...)) escaped due to nested rewriting

* ensure atomic constraints are processed by arithmetic solver

* #7739 optimization

add simplification rule for at(x, offset) = ""

Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions.
The example highlights some opportunities for simplification, notably at(..) = "".
The example is solved in both versions after adding this simplification.

* fix unsound len(substr) axiom

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* FreshConst is_sort (#7748)

* #7750

add pre-processing simplification

* Add parameter validation for selected API functions

* updates to ac-plugin

fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop.

* enable passive, add check for bloom up-to-date

* add top-k fixed-sized min-heap priority queue for top scoring literals

* set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still

* fix bug in parallel solving batch setup

* fix bug

* allow for internalize implies

* disable pre-processing during cubing

* debugging

* remove default constructor

* remove a bunch of string copies

* Update euf_ac_plugin.cpp

include reduction rules in forward simplification

* Update euf_completion.cpp

try out restricting the scope of equalities added by instantiation

* Update smt_parallel.cpp

Drop non-relevant units from shared structures.

* process cubes as lists of individual lits

* merge

* Add support for Algebraic Datatypes in JavaScript/TypeScript bindings (#7734)

* Initial plan

* Add datatype type definitions to types.ts (work in progress)

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete datatype type definitions with working TypeScript compilation

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Implement core datatype functionality with TypeScript compilation success

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete datatype implementation with full Context integration and tests

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* chipping away at the new code structure

* comments

* debug infinite recursion and split cubes on existing split atoms that aren't in the cube

* share lemmas, learn from unsat core, try to debug a couple of things, there was a subtle bug that i have a hard time repro'ing

* merge

* fix #7603: race condition in Ctrl-C handling (#7755)

* fix #7603: race condition in Ctrl-C handling

* fix race in cancel_eh

* fix build

* add arithmetic saturation

* add an option to register callback on quantifier instantiation

Suppose a user propagator encodes axioms using quantifiers and uses E-matching for instantiation. If it wants to implement a custom priority scheme or drop some instances based on internal checks, it can register a callback on quantifier instantiation.

* missing new closure

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* add Z3_solver_propagate_on_binding to ml callback declarations

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* add python file

Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local>

* debug under defined calls

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* more untangle params

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* precalc parameters to define the eval order

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* remove a printout

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* rename a Python file

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* add on_binding callbacks across APIs

update release notes,
add to Java, .Net, C++

* use jboolean in Native interface

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* register on_binding attribute

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix java build for java bindings

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* avoid interfering side-effects in function calls

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove theory_str and classes that are only used by it

* remove automata from python build

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove ref to theory_str

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* get the finest factorizations before project

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* rename add_lcs to add_lc

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* resolve bad bug about l2g and g2l translators using wrong global context. add some debug prints

* initial attempt at dynamically switching from greedy to frugal splitting strategy in return_cubes. need to test. also there is some bug where the threads take forever to cancel?

* Update RELEASE_NOTES.md

* resolve bug about not translating managers correctly for the second phase of the greedy cubing, and the frugal fallback

---------

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local>
Signed-off-by: Lev Nachmanson <levnach@hotmail.com>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com>
Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
Co-authored-by: Lev Nachmanson <levnach@hotmail.com>

* code and notes

* add some debug prints and implement internal polynomial fix

* add some comments and debug m_assumptions_used

* redo greedy->frugal strategy so we don't split on existing cubes in frugal at all (eliminate the incorrect/wasteful step by processing current batch first)

* set up initial scaffolding for sharing clauses between threads and batch manager. needs some reworking/debug still

* Parallel solving (#7774)

* very basic setup

* ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* respect smt configuration parameter in elim_unconstrained simplifier

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* indentation

* add bash files for test runs

* add option to selectively disable variable solving for only ground expressions

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove verbose output

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix #7745

axioms for len(substr(...)) escaped due to nested rewriting

* ensure atomic constraints are processed by arithmetic solver

* #7739 optimization

add simplification rule for at(x, offset) = ""

Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions.
The example highlights some opportunities for simplification, notably at(..) = "".
The example is solved in both versions after adding this simplification.

* fix unsound len(substr) axiom

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* FreshConst is_sort (#7748)

* #7750

add pre-processing simplification

* Add parameter validation for selected API functions

* updates to ac-plugin

fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop.

* enable passive, add check for bloom up-to-date

* add top-k fixed-sized min-heap priority queue for top scoring literals

* set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still

* fix bug in parallel solving batch setup

* fix bug

* allow for internalize implies

* disable pre-processing during cubing

* debugging

* remove default constructor

* remove a bunch of string copies

* Update euf_ac_plugin.cpp

include reduction rules in forward simplification

* Update euf_completion.cpp

try out restricting the scope of equalities added by instantiation

* Update smt_parallel.cpp

Drop non-relevant units from shared structures.

* process cubes as lists of individual lits

* merge

* Add support for Algebraic Datatypes in JavaScript/TypeScript bindings (#7734)

* Initial plan

* Add datatype type definitions to types.ts (work in progress)

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete datatype type definitions with working TypeScript compilation

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Implement core datatype functionality with TypeScript compilation success

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete datatype implementation with full Context integration and tests

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* chipping away at the new code structure

* comments

* debug infinite recursion and split cubes on existing split atoms that aren't in the cube

* share lemmas, learn from unsat core, try to debug a couple of things, there was a subtle bug that i have a hard time repro'ing

* merge

* fix #7603: race condition in Ctrl-C handling (#7755)

* fix #7603: race condition in Ctrl-C handling

* fix race in cancel_eh

* fix build

* add arithmetic saturation

* add an option to register callback on quantifier instantiation

Suppose a user propagator encodes axioms using quantifiers and uses E-matching for instantiation. If it wants to implement a custom priority scheme or drop some instances based on internal checks, it can register a callback on quantifier instantiation.

* missing new closure

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* add Z3_solver_propagate_on_binding to ml callback declarations

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* add python file

Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local>

* debug under defined calls

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* more untangle params

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* precalc parameters to define the eval order

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* remove a printout

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* rename a Python file

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* add on_binding callbacks across APIs

update release notes,
add to Java, .Net, C++

* use jboolean in Native interface

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* register on_binding attribute

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix java build for java bindings

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* avoid interfering side-effects in function calls

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove theory_str and classes that are only used by it

* remove automata from python build

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove ref to theory_str

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* get the finest factorizations before project

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* rename add_lcs to add_lc

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* resolve bad bug about l2g and g2l translators using wrong global context. add some debug prints

* initial attempt at dynamically switching from greedy to frugal splitting strategy in return_cubes. need to test. also there is some bug where the threads take forever to cancel?

* Update RELEASE_NOTES.md

* resolve bug about not translating managers correctly for the second phase of the greedy cubing, and the frugal fallback

* remove unused square-free check

Signed-off-by: Lev Nachmanson <levnach@hotmail.com>

* add some debug prints and implement internal polynomial fix

* add some comments and debug m_assumptions_used

* redo greedy->frugal strategy so we don't split on existing cubes in frugal at all (eliminate the incorrect/wasteful step by processing current batch first)

* set up initial scaffolding for sharing clauses between threads and batch manager. needs some reworking/debug still

---------

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local>
Signed-off-by: Lev Nachmanson <levnach@hotmail.com>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com>
Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
Co-authored-by: Lev Nachmanson <levnach@hotmail.com>

* sign of life

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* add notes on parameter tuning

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* add notes on parameter tuning

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* add notes on parameter tuning

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* add notes on parameter tuning

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* turn off logging at level 0 for testing

* add max thread conflicts backoff

---------

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local>
Signed-off-by: Lev Nachmanson <levnach@hotmail.com>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com>
Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
Co-authored-by: Lev Nachmanson <levnach@hotmail.com>

* fix #7776

* add > operator as shorthand for Array

* updates to euf completion

* resolve bug about not popping local ctx to base level before collecting shared lits

* Add virtual translate method to solver_factory class (#7780)

* Initial plan

* Add virtual translate method to solver_factory base class and all implementations

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Add documentation for the translate method in solver_factory

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* put return_cubes under lock

* Revert "resolve bug about not popping local ctx to base level before collecting shared lits"

This reverts commit bba1111e1b.

* Update seq_rewriter.cpp

* fix releaseNotesSource to inline

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* Use solver factory translate method in Z3_solver_translate (#7782)

* Initial plan

* Fix Z3_solver_translate to use solver factory translate method

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Revert "Parallel solving (#7775)" (#7777)

This reverts commit c8e866f568.

* remove upload artifact for azure-pipeline

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* Fix compilation warning: add missing is_passive_eq case to switch statement (#7785)

* Initial plan

* Fix compilation warning: add missing is_passive_eq case to switch statement

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Remove NugetPublishNightly stage from nightly.yaml (#7787)

* Initial plan

* Remove NugetPublishNightly stage from nightly.yaml

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* add more params

* enable pypi public

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* Fix nullptr dereference in pp_symbol when handling null symbol names (#7790)

* Initial plan

* Fix nullptr dereference in pp_symbol with null symbol names

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* add option to control epsilon #7791

#7791 reports on using model values during lex optimization that break soft constraints.
This is an artifact of using optimization where optimal values can be arbitrarily close to a rational.
In a way it is by design, but we now give the user an option to control the starting point for epsilon when converting infinitesimals into rationals.
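
For illustration, a small z3py example of the artifact being described: with a strict bound the optimum is infinitesimally below the bound, and the concrete model value depends on how epsilon is instantiated. The name of the new option is not spelled out in this log, so it is not set below.

    from z3 import Optimize, Real

    x = Real('x')
    opt = Optimize()
    opt.add(x < 1)
    h = opt.maximize(x)
    print(opt.check())       # sat
    print(opt.upper(h))      # the reported optimum involves epsilon, e.g. 1 + -1*epsilon
    print(opt.model()[x])    # the rational picked for x depends on how epsilon is instantiated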

* update on euf

* check for internalized in solve_for

* fix #7792

add missing revert operations

* update version number to 4.15.4

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix #7753

* fix #7796

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* Create centralized version management with VERSION.txt (#7802)

* Initial plan

* Create VERSION.txt and update CMakeLists.txt to read version from file

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete centralized version management system

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Fix version update script and finalize implementation

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Create centralized version management with VERSION.txt

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* read version from VERSION.txt

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix version parse

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* fix parsing of version

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* add param tuning experiment in python

* Fix Azure Pipeline PyPI package builds by including VERSION.txt in source distribution (#7808)

* Initial plan

* Fix Azure Pipeline PyPI package builds by including VERSION.txt in source distribution

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Update nightly.yaml to match release.yml NuGet tool installer changes (#7810)

* Initial plan

* Update nightly.yaml to match release.yml NuGet tool installer changes

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Attempt at adding the README to the NuGet package (#7807)

* Attempt at adding README to NuGet package

* Forgot to enable publishing

* add resources

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove resources directive again

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* Document how to use system-installed Z3 with CMake projects (#7809)

* Initial plan

* Add documentation for using system-installed Z3 with CMake

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Fix Julia bindings linker errors on Windows MSVC (#7794)

* Initial plan

* Fix Julia bindings linker errors on Windows MSVC

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Complete Julia bindings fix validation and testing

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Fix Julia bindings linker errors on Windows MSVC

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* add print for version file

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* add more logging to setup.py

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* try different dirs

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* try src_dir_repo

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* try other dir

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* remove extra characters

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* more output

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* print dirs

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* copy VERSION from SRC_DIR

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* Move VERSION.txt to scripts directory and update all references (#7811)

* Initial plan

* Move VERSION.txt to scripts/ and update all references

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* clean up a little of the handling of VERSION.txt

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* add implementation and toggleable param for splitting frugal + choosing deepest cubes only

* remove priority queue for top-k lits and replace with simple linear scan. the PQ implementation backend still remains in case we want to switch back
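
For reference, the linear-scan selection in a small generic sketch (plain Python, illustrative only); with a small k and few candidates it is simpler than maintaining a heap:

    def top_k_linear(scores, k):
        # scores: dict literal -> score; returns the k best (score, literal) pairs
        best = []
        for lit, score in scores.items():
            if len(best) < k:
                best.append((score, lit))
            elif score > min(best)[0]:
                best.remove(min(best))
                best.append((score, lit))
        return sorted(best, reverse=True)

    print(top_k_linear({"p": 0.9, "q": 0.1, "r": 0.5, "s": 0.7}, 2))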

* Add new configurations for SMT parallel settings

* Bugfix: post-build sanity check when an old version of ocaml-z3 is installed (#7815)

* fix: add generating META for ocamlfind.

* Patch macOS. We need to keep the `@rpath` and use an environment variable to enable the test, because fixing it is left to package managers.

* Trigger CI.

* Debug.

* Debug.

* Debug.

* Debug.

* Debug.

* Debug.

* Hacky fix for ocaml building warning.

* Fix typo and rename variables.

* Fix cmake for the ocaml test, explicitly using the local libz3.

* Rename configuration from 'shareconflicts' to 'depthsplitting'

* Fix configuration for depth splitting in notes

* rename variables

* remove double tweak versioning

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>

* attempting to add backbone code; it does not work yet. Still debugging the error: ASSERTION VIOLATION
File: /home/t-ilshapiro/z3/src/ast/ast.cpp
Line: 388
UNEXPECTED CODE WAS REACHED. I left a comment on the line where it is crashing.

* depth splitting now applies to greedy+frugal unless specified otherwise

* debug the backbone crash (it was references not being counted)

* iterative deepening experiment (no PQ yet). the hardness heuristic is still naive!

* fix iterative deepening bug: unsolved cube needs to get re-enqueued even if we don't split it further

* debug iterative deepening some more and add first attempt at PQ (untested)

* fix some bugs and the PQ approach is working for now. the depth sets approach is actually unsound, but I am going to focus on the PQ approach for now since it has more potential for SAT problems with the right hardness metric

* add new attempt at hardness function

* attempt to add different hardness functions including heule schur and march, need to re-examine/debug/evaluate

* implement march and heule schur hardness functions based on sat_lookahead.cpp implementations. they seem to be buggy, need to revisit. also set up experimental params for running on polytest

* add a lot of debug prints that need to be removed. some bugs are resolved but others remain

* debug in progress

* remove the incorrect preselection functions for march and heule-schur. Update explicit-hardness with bugfixes. It works now, but I am not sure there is a real perf increase on my handpicked examples. I tried several variations of hardness ratios, as you can see commented out, and there are debug prints still commented out. Also, return_cubes now takes in a single cube instead of a list C_worker, to align with the single-cube hardness/should_split metrics; it doesn't change anything because we only pass in one cube to begin with.

* debug a couple of things and change the hardness function

* tree version in progress

* cube tree data structure version for sharing maximal solver context. it compiles but segfaults immediately so it needs a lot of debugging

* slowly debugging (committing for saving progress)

* debugged get_next_cube to align with how we're storing the prev_cube and active status. other things are still wrong

* debug manager translation problem

* don't actually prune tree for UNSAT, too risky with multithreads. instead marks all such 'removed' nodes as inactive

* it runs! but then crashes after a while

* add optimization (or what is hopefully an optimization) about threads only working in the frontier determined by their split atoms, and only overlapping frontiers once they've exhausted their own

* debug some things. exhausting the cube tree should return unsat now, not crash. also fix a couple of things in get_next_cube. also, setting k=1 for split atoms might be better, not sure

* fix small bug about making sure we're searching in the thread's frontier when we just started a new frontier (like if we're at the empty cube at the root)

* debug a big problem about incorrectly deep-copying children when assigning new frontier nodes. I think it works now, but it's still not as fast as I would want.
Also, there are a lot of messy debug prints I will need to remove.

* clean up comments and fix a bug in cubetree: when a subcube tautology is UNSAT, the entire formula is UNSAT

* clean up short redundant code

* iterative deepening with cubetree, early set unsat if the entire tree is found closed by a thread, and finally multiple threads can work on the same cube if it's the only active leaf

* fix small bug about cubetree node removal

* cubetree is now a per-thread subtree, still needs cleaning but all the ugly 'frontier' logic is now gone

---------

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local>
Signed-off-by: Lev Nachmanson <levnach@hotmail.com>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com>
Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt>
Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
Co-authored-by: Lev Nachmanson <levnach@hotmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Solal Pirelli <SolalPirelli@users.noreply.github.com>
Co-authored-by: Shiwei Weng 翁士伟 <arbipher@gmail.com>
Ilana Shapiro 2025-09-10 10:01:17 -07:00 committed by GitHub
parent ac80dfaa8b
commit e140965b90
6 changed files with 206 additions and 258 deletions


@ -10,14 +10,30 @@ struct CubeNode;
typedef expr_ref_vector Cube; // shorthand
enum State {
open,
closed,
active
};
inline const char* to_string(State s) {
switch (s) {
case open: return "open";
case closed: return "closed";
case active: return "active";
default: return "unknown";
}
}
struct CubeNode {
Cube cube;
CubeNode* parent;
std::vector<CubeNode*> children;
bool active = true;
State state;
CubeNode(const Cube& c, CubeNode* p = 0)
: cube(c), parent(p) {}
: cube(c), parent(p), state(open) {}
bool is_leaf() const { return children.empty(); }
};
@ -25,8 +41,8 @@ struct CubeNode {
class CubeTree {
public:
CubeTree(ast_manager& m) {
Cube root_cube(m); // empty cube
root = nullptr;
Cube empty_cube(m);
root = new CubeNode(empty_cube); // root is allocated properly
std::srand((unsigned)std::time(0)); // is seeding the pseudo-random number generator used by std::rand()
}
@ -72,45 +88,76 @@ public:
}
}
// mark node as inactive and propagate upward if parent becomes a leaf (all children inactive)
// mark node as closed and propagate upward if its polarity pair is also closed (so we have a tautology, so its parent is closed, and thus all its siblings are closed)
// return pointer to last affected ancestor (or nullptr if none) so we can select one of its siblings as the next cube
CubeNode* remove_node_and_propagate(CubeNode* node) {
if (!node || node == root || !node->is_leaf()) return nullptr; // error, root, or not a leaf
CubeNode* remove_node_and_propagate(CubeNode* node, ast_manager& m) {
if (!node) return nullptr;
CubeNode* parent = node->parent;
CubeNode* last_marked = node;
CubeNode* last_closed = node;
// mark this node as inactive
node->active = false;
// helper: recursively mark a subtree inactive
std::function<void(CubeNode*)> close_subtree = [&](CubeNode* n) {
if (!n)
return;
n->state = closed;
for (CubeNode* child : n->children)
close_subtree(child);
};
// propagate upward if parent became a "leaf" (all children inactive)
while (parent && parent != root) {
bool all_inactive = true;
for (CubeNode* child : parent->children) {
if (child->active) {
all_inactive = false;
break;
}
// mark this node as closed
close_subtree(node);
// propagate upward if parent becomes UNSAT because one of its child polarity pairs (i.e. tautology) is closed
while (parent) {
// get the index of the node in its parent's children
auto it = std::find(parent->children.begin(), parent->children.end(), last_closed);
SASSERT(it != parent->children.end());
unsigned idx = std::distance(parent->children.begin(), it);
CubeNode* polarity_pair = nullptr;
if (idx % 2 == 0 && idx + 1 < parent->children.size()) {
polarity_pair = parent->children[idx + 1]; // even index -> polarity pair is right sibling
} else if (idx % 2 == 1) {
polarity_pair = parent->children[idx - 1]; // odd index -> polarity pair is left sibling
}
if (!all_inactive) break; // stop propagating
// print the contents of the cube and its polarity pair; loop over each cube's literals
IF_VERBOSE(1, {
verbose_stream() << "CubeTree: checking if parent node can be closed. Current node cube size: " << last_closed->cube.size() << " State: " << to_string(last_closed->state) << " Cube: ";
for (auto* e : last_closed->cube) {
verbose_stream() << mk_bounded_pp(e, m, 3) << " ";
}
verbose_stream() << "\n";
if (polarity_pair) {
verbose_stream() << "CubeTree: polarity pair cube size: " << polarity_pair->cube.size() << " State: " << to_string(polarity_pair->state) << " Cube: ";
for (auto* e : polarity_pair->cube) {
verbose_stream() << mk_bounded_pp(e, m, 3) << " ";
}
verbose_stream() << "\n";
} else {
verbose_stream() << "CubeTree: no polarity pair found for current node\n";
}
});
SASSERT(parent->active); // parent must not be currently worked on
last_marked = parent; // track the last ancestor we mark
parent->active = false; // mark parent inactive
// the node and its polarity pair are both closed; they form a tautology, so the parent and its entire subtree are closed
if (polarity_pair && polarity_pair->state == closed) {
SASSERT(parent->state != active); // parent must not be currently worked on
close_subtree(parent); // mark parent and its subtree as closed
last_closed = parent; // track the last ancestor we mark
parent = parent->parent;
} else {
break; // stop propagation
}
}
return last_marked;
return last_closed;
}
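A minimal stand-alone check (not part of the diff) of the even/odd index convention used above to locate a node's polarity pair among its parent's children, assuming children are always appended as [pos, neg] pairs:

#include <cassert>
#include <cstddef>

// Given a child's index, return the index of its polarity pair,
// assuming children are appended pairwise as [pos0, neg0, pos1, neg1, ...].
static std::size_t polarity_pair_index(std::size_t idx) {
    return (idx % 2 == 0) ? idx + 1 : idx - 1;
}

int main() {
    assert(polarity_pair_index(0) == 1);  // pos0 -> neg0
    assert(polarity_pair_index(1) == 0);  // neg0 -> pos0
    assert(polarity_pair_index(4) == 5);  // pos2 -> neg2
    return 0;
}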
// get the closest cube to current: pick a random sibling of current (if current was UNSAT and we removed it from the tree),
// or descend randomly to a leaf (if we split the current node) to get the newest cube split from it.
// we descend randomly to a leaf instead of just taking a random child because another thread may have added more descendants
CubeNode* get_next_cube(CubeNode* current, std::vector<CubeNode*>& frontier_roots, ast_manager& m, unsigned worker_id) {
CubeNode* get_next_cube(CubeNode* current, ast_manager& m, unsigned worker_id) {
print_tree(m);
IF_VERBOSE(1, verbose_stream() << "CubeTree: current cube is null: " << (current == nullptr) << "\n");
@ -118,117 +165,61 @@ public:
IF_VERBOSE(1, verbose_stream() << "CubeTree: getting next cube from current of size " << current->cube.size() << "\n");
// lambda to find any active leaf in the subtree (explore all branches)
std::function<CubeNode*(CubeNode*)> find_active_leaf = [&](CubeNode* node) -> CubeNode* {
// lambda to find any open leaf in the subtree (explore all branches)
std::function<CubeNode*(CubeNode*)> find_open_leaf = [&](CubeNode* node) -> CubeNode* {
if (!node) return nullptr;
if (node->is_leaf() && node->active) return node;
if (node->is_leaf() && node->state == open) return node;
for (CubeNode* child : node->children) {
CubeNode* active_leaf = find_active_leaf(child);
if (active_leaf) return active_leaf;
CubeNode* open_leaf = find_open_leaf(child);
if (open_leaf) return open_leaf;
}
return nullptr;
};
CubeNode* node = current;
std::vector<CubeNode*> remaining_frontier_roots = frontier_roots;
bool is_unexplored_frontier = frontier_roots.size() > 0 && current->cube.size() < frontier_roots[0]->cube.size(); // i.e. current is above the frontier (which always happens when we start with the empty cube!!)
IF_VERBOSE(1, verbose_stream() << "CubeTree: current cube is " << (is_unexplored_frontier ? "above" : "within") << " the frontier. Current cube has the following children: \n");
IF_VERBOSE(1, verbose_stream() << "CubeTree: Current cube has the following children: \n");
for (auto* child : current->children) {
IF_VERBOSE(1, verbose_stream() << " Child cube size: " << child->cube.size() << " Active: " << child->active << " Cube: ");
IF_VERBOSE(1, verbose_stream() << " Child cube size: " << child->cube.size() << " State: " << to_string(child->state) << " Cube: ");
for (auto* e : child->cube) {
IF_VERBOSE(1, verbose_stream() << mk_bounded_pp(e, m, 3) << " ");
}
IF_VERBOSE(1, verbose_stream() << "\n");
}
// if current is above the frontier, start searching from the first frontier root
if (is_unexplored_frontier && !frontier_roots.empty()) {
IF_VERBOSE(1, verbose_stream() << "CubeTree: Worker " << worker_id << " starting search from first frontier root. Frontier roots are:\n");
for (auto* x : frontier_roots) {
IF_VERBOSE(1, verbose_stream() << " Cube size: " << x->cube.size() << " Active: " << x->active << " Cube: ");
for (auto* e : x->cube) {
IF_VERBOSE(1, verbose_stream() << mk_bounded_pp(e, m, 3) << " ");
}
IF_VERBOSE(1, verbose_stream() << "\n");
}
node = frontier_roots[0];
IF_VERBOSE(1, verbose_stream() << "CubeTree: Worker " << worker_id << " selected frontier root: ");
for (auto* e : node->cube) {
IF_VERBOSE(1, verbose_stream() << mk_bounded_pp(e, m, 3) << " ");
}
IF_VERBOSE(1, verbose_stream() << "\n");
IF_VERBOSE(1, verbose_stream() << "This frontier root has children:\n");
for (auto* child : node->children) {
IF_VERBOSE(1, verbose_stream() << " Child cube size: " << child->cube.size() << " Active: " << child->active << " Cube: ");
for (auto* e : child->cube) {
IF_VERBOSE(1, verbose_stream() << mk_bounded_pp(e, m, 3) << " ");
}
IF_VERBOSE(1, verbose_stream() << "\n");
}
}
while (node) {
// check active leaf descendants
CubeNode* leaf_descendant = nullptr;
leaf_descendant = find_active_leaf(node);
// check open leaf descendants
CubeNode* nearest_open_leaf = nullptr;
nearest_open_leaf = find_open_leaf(node); // find an open leaf descendant
if (leaf_descendant) {
IF_VERBOSE(1, {verbose_stream() << "CubeTree: Worker " << worker_id << " found active leaf descendant under node (which could be the node itself): ";
if (nearest_open_leaf) {
IF_VERBOSE(1, {verbose_stream() << "CubeTree: Worker " << worker_id << " found open leaf descendant under node (which could be the node itself): ";
for (auto* e : node->cube) {
verbose_stream() << mk_bounded_pp(e, m, 3) << " ";
}
verbose_stream() << "\n Active leaf descendant is: ";
for (auto* e : leaf_descendant->cube) {
verbose_stream() << "\n Open leaf descendant is: ";
for (auto* e : nearest_open_leaf->cube) {
verbose_stream() << mk_bounded_pp(e, m, 3) << " ";
}
verbose_stream() << "\n";
});
return leaf_descendant;
return nearest_open_leaf;
}
IF_VERBOSE(1, {verbose_stream() << "CubeTree: Worker " << worker_id << " found no active leaf descendants found under node: ";
IF_VERBOSE(1, {verbose_stream() << "CubeTree: Worker " << worker_id << " found no open leaf descendants found under node: ";
for (auto* e : node->cube) {
verbose_stream() << mk_bounded_pp(e, m, 3) << " ";
}
verbose_stream() << "\n";
});
// we do not need to check for active leaves: that could only happen if we were in another thread's subtree while that thread
// works on some leaf, and once we exhaust our own subtree the problem must be UNSAT because of polarity-pair tautologies,
// so it suffices to check for open leaves
// we also do not need to check siblings and their active leaf descendants,
// since that is handled by the recursion up the tree,
// and checking siblings here is unsafe under the thread-frontier optimizations
// see if we're at a boundary of the frontier (i.e. we hit one of the frontier roots)
auto it = std::find(remaining_frontier_roots.begin(), remaining_frontier_roots.end(), node);
// get the index of the node in remaining_frontier_roots
unsigned curr_root_idx = std::distance(remaining_frontier_roots.begin(), it);
if (it != remaining_frontier_roots.end()) { // i.e. the node is in the list of remaining_frontier_roots
IF_VERBOSE(1, verbose_stream() << "CubeTree: hit frontier root " << node << "\n");
if (!remaining_frontier_roots.empty()) {
IF_VERBOSE(1, verbose_stream() << "CubeTree: picking next frontier root to search from.\n");
// pick the next frontier root (wrap around if at end)
// we do this so we either move onto the next split atom in the frontier (if we just processed neg(atom))
// or we get the negation of the atom we just processed (if we just processed pos(atom))
// since the order the splits are added in is [pos, neg, ...] for each split atom
node = remaining_frontier_roots[curr_root_idx + 1 < remaining_frontier_roots.size() ? curr_root_idx + 1 : 0];
// Remove exhausted frontier root
remaining_frontier_roots.erase(it);
} else {
IF_VERBOSE(1, verbose_stream() << "CubeTree: resetting frontier after exhausting\n");
// Frontier exhausted: reset frontier_roots for next iteration
frontier_roots.clear();
// Start "global" search from current node
node = node->parent;
}
continue;
}
// Move up in the current frontier
node = node->parent;
}
@ -246,17 +237,6 @@ public:
private:
CubeNode* root;
// mark leaf as inactive instead of deleting it
void mark_leaf_inactive(CubeNode* node) {
if (!node || !node->active) return;
// must be a leaf
SASSERT(node->children.empty());
// just mark inactive
node->active = false;
}
void delete_subtree(CubeNode* node) {
if (!node) return;
for (CubeNode* child : node->children) {
@ -273,7 +253,7 @@ private:
IF_VERBOSE(1, verbose_stream() << "Node@" << node
<< " size=" << node->cube.size()
<< " active=" << node->active
<< " state=" << to_string(node->state)
<< " cube={ ");
for (expr* e : node->cube) {

View file

@ -104,12 +104,12 @@ namespace smt {
LOG_WORKER(1, " Curr cube node is null: " << (m_curr_cube_node == nullptr) << "\n");
if (m_config.m_cubetree) {
// use std::tie so we don't shadow cube_node
std::tie(cube_node, cube) = b.get_cube_from_tree(m_g2l, frontier_roots, id, m_curr_cube_node); // cube node is the reference to the node in the tree, tells us how to get the next cube. "cube" is the translated cube we need for the solver
std::tie(cube_node, cube) = get_cube_from_tree(id, m_curr_cube_node); // cube_node is the reference to the node in the tree and tells us how to get the next cube; "cube" is the cube we hand to the solver
LOG_WORKER(1, " Got cube node from CubeTree. Is null: " << (cube_node == nullptr) << "\n");
if (!cube_node) { // i.e. no more cubes
LOG_WORKER(1, " No more cubes from CubeTree, exiting\n");
LOG_WORKER(1, " Cube_Tree ran out of nodes, problem is UNSAT\n");
b.set_unsat(m_g2l, cube);
return;
}
m_curr_cube_node = cube_node; // store the current cube so we know how to get the next closest cube from the tree
@ -125,7 +125,7 @@ namespace smt {
}
collect_shared_clauses(m_g2l);
if (!m.inc()) {
if (m.limit().is_canceled()) {
b.set_exception("context cancelled");
return;
}
@ -149,7 +149,7 @@ namespace smt {
// let's automatically do iterative deepening for beam search.
// when using more advanced metrics like explicit_hardness, we need one of two things: (1) split if hardness is greater than or equal to the avg hardness, or (2) enter this branch only when cube.size() > 0; otherwise we get stuck in a loop and never deepen.
if (m_config.m_iterative_deepening || m_config.m_beam_search) {
if (!m_config.m_cubetree && (m_config.m_iterative_deepening || m_config.m_beam_search)) {
LOG_WORKER(1, " applying iterative deepening\n");
double cube_hardness;
@ -171,30 +171,28 @@ namespace smt {
b.return_cubes(m_l2g, cube, split_atoms, should_split, cube_hardness);
} else if (m_config.m_cubetree) {
IF_VERBOSE(1, verbose_stream() << " returning undef cube to CubeTree. Cube node is null: " << (cube_node == nullptr) << "\n");
bool should_split = true;
bool is_new_frontier = frontier_roots.empty();
b.return_cubes_tree(m_l2g, cube_node, cube, split_atoms, frontier_roots);
if (m_config.m_iterative_deepening) {
LOG_WORKER(1, " applying iterative deepening\n");
if (is_new_frontier) {
IF_VERBOSE(1, {
verbose_stream() << " Worker " << id << " has new frontier roots, with the following children: \n";
for (auto* node : frontier_roots) {
verbose_stream() << " Cube size: " << node->cube.size() << " Active: " << node->active << " Cube: ";
for (auto* e : node->cube) {
verbose_stream() << mk_bounded_pp(e, m, 3) << " ";
double cube_hardness;
if (m_config.m_explicit_hardness) {
cube_hardness = explicit_hardness(cube);
// LOG_WORKER(1, " explicit hardness: " << cube_hardness << "\n");
} else { // default to naive hardness
cube_hardness = naive_hardness();
// LOG_WORKER(1, " naive hardness: " << cube_hardness << "\n");
}
verbose_stream() << " Children: ";
for (auto* child : node->children) {
verbose_stream() << "[";
for (auto* e : child->cube) {
verbose_stream() << mk_bounded_pp(e, m, 3) << " ";
}
verbose_stream() << "] ";
}
verbose_stream() << "\n";
}
});
const double avg_hardness = update_avg_cube_hardness_worker(cube_hardness); // let's only compare to hardness on the same thread
const double factor = 1; // can tune for multiple of avg hardness later
should_split = cube_hardness >= avg_hardness * factor; // must be >= otherwise we never deepen
LOG_WORKER(1, " cube hardness: " << cube_hardness << " avg: " << avg_hardness << " avg*factor: " << avg_hardness * factor << " should-split: " << should_split << "\n");
}
return_cubes_tree(cube_node, split_atoms, should_split);
} else {
b.return_cubes(m_l2g, cube, split_atoms);
}
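As a rough, stand-alone illustration (not part of the diff) of the should_split decision above, reusing the same running-average update formula as update_avg_cube_hardness_worker with factor = 1:

#include <cassert>

// Running per-worker average of solved-cube hardness; a cube is split further
// only when it is at least as hard as the running average times a tunable factor.
struct HardnessTracker {
    double avg = 0.0;
    unsigned solved = 0;
    double update(double h) {
        avg = (avg * solved + h) / (solved + 1);
        ++solved;
        return avg;
    }
};

int main() {
    HardnessTracker t;
    const double factor = 1.0;
    bool split0 = 3.0 >= t.update(3.0) * factor;   // first cube: avg == 3.0, so ">=" lets us deepen
    assert(split0);
    bool split1 = 1.0 >= t.update(1.0) * factor;   // avg is now (3.0 + 1.0) / 2 == 2.0
    assert(!split1);                               // the easier cube is not split further
    return 0;
}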
@ -244,7 +242,7 @@ namespace smt {
// prune the tree now that we know the cube is unsat
if (m_config.m_cubetree) {
IF_VERBOSE(1, verbose_stream() << " removing cube node from CubeTree and propagate deletion\n");
b.remove_node_and_propagate(m_curr_cube_node);
remove_node_and_propagate(m_curr_cube_node, m);
}
break;
}
@ -257,7 +255,9 @@ namespace smt {
parallel::worker::worker(unsigned id, parallel& p, expr_ref_vector const& _asms):
id(id), p(p), b(p.m_batch_manager), m_smt_params(p.ctx.get_fparams()), asms(m),
m_g2l(p.ctx.m, m),
m_l2g(m, p.ctx.m) {
m_l2g(m, p.ctx.m),
m_cubes_tree(m),
m_split_atoms(m) {
for (auto e : _asms)
asms.push_back(m_g2l(e));
LOG_WORKER(1, " created with " << asms.size() << " assumptions\n");
@ -319,6 +319,8 @@ namespace smt {
void parallel::worker::collect_statistics(::statistics& st) const {
ctx->collect_statistics(st);
st.update("parallel-num_cubes", m_stats.m_num_cubes);
st.update("parallel-max-cube-size", m_stats.m_max_cube_depth);
}
void parallel::worker::cancel() {
@ -417,18 +419,14 @@ namespace smt {
return r;
}
std::pair<CubeNode*, expr_ref_vector> parallel::batch_manager::get_cube_from_tree(ast_translation& g2l, std::vector<CubeNode*>& frontier_roots, unsigned worker_id, CubeNode* prev_cube) {
std::scoped_lock lock(mux);
expr_ref_vector l_cube(g2l.to());
std::pair<CubeNode*, expr_ref_vector> parallel::worker::get_cube_from_tree(unsigned worker_id, CubeNode* prev_cube) {
expr_ref_vector l_cube(m);
SASSERT(m_config.m_cubetree);
if (m_cubes_tree.empty()) {
// special initialization: the first cube is empty, have the worker work on an empty cube.
IF_VERBOSE(1, verbose_stream() << "Batch manager giving out empty cube.\n");
expr_ref_vector g_cube(g2l.from());
CubeNode* new_cube_node = new CubeNode(g_cube, nullptr);
CubeNode* new_cube_node = new CubeNode(l_cube, nullptr);
m_cubes_tree.add_node(new_cube_node, nullptr);
return {new_cube_node, l_cube}; // return empty cube
} else if (!prev_cube) {
@ -437,21 +435,20 @@ namespace smt {
// get a cube from the CubeTree
SASSERT(!m_cubes_tree.empty());
CubeNode* next_cube_node = m_cubes_tree.get_next_cube(prev_cube, m, worker_id); // get the next cube in the tree closest to the prev cube (i.e. longest common prefix)
CubeNode* next_cube_node = m_cubes_tree.get_next_cube(prev_cube, frontier_roots, g2l.to(), worker_id); // get the next cube in the tree closest to the prev cube (i.e. longest common prefix)
IF_VERBOSE(1, verbose_stream() << "Batch manager giving out cube from CubeTree. Is null: " << (next_cube_node==nullptr) << "\n");
LOG_WORKER(1, " giving out cube from CubeTree. Is null: " << (next_cube_node==nullptr) << "\n");
if (!next_cube_node) { // i.e. no more cubes
IF_VERBOSE(1, verbose_stream() << " No more cubes from CubeTree, exiting\n");
LOG_WORKER(1, " No more cubes from CubeTree, exiting\n");
return {nullptr, l_cube}; // return nullptr and empty cube
}
for (auto& e : next_cube_node->cube) {
l_cube.push_back(g2l(e));
l_cube.push_back(e);
}
next_cube_node->active = false; // mark the cube as inactive (i.e. being processed by a worker)
next_cube_node->state = active; // mark the cube as active (i.e. being processed by a worker)
return {next_cube_node, l_cube};
}
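A simplified stand-alone sketch (not part of the diff) of the node life cycle that get_cube_from_tree, return_cubes_tree and remove_node_and_propagate implement together:

#include <cassert>

// Life cycle of a cube node in the tree:
//   open   -> handed out by get_cube_from_tree              -> active
//   active -> cube proved unsat (remove_node_and_propagate) -> closed
//   active -> returned unsplit by return_cubes_tree          -> open again
//   active -> split by return_cubes_tree: the node becomes an interior node
//             whose newly added children start out open
enum class NodeState { open, active, closed };

int main() {
    NodeState s = NodeState::open;
    s = NodeState::active;                      // a worker picks the cube up
    bool proved_unsat = false;                  // suppose check_cube returned l_undef
    bool split = false;                         // and the cube was not split (e.g. at max depth)
    if (proved_unsat)   s = NodeState::closed;
    else if (!split)    s = NodeState::open;    // matches cube_node->state = open above
    assert(s == NodeState::open);
    return 0;
}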
@ -463,8 +460,7 @@ namespace smt {
if (m_cubes.size() == 1 && m_cubes[0].empty()
|| m_config.m_beam_search && m_cubes_pq.empty()
|| m_config.m_depth_splitting_only && m_cubes_depth_sets.empty()
|| m_config.m_cubetree && m_cubes_tree.empty()) {
|| m_config.m_depth_splitting_only && m_cubes_depth_sets.empty()) {
// special initialization: the first cube is empty, have the worker work on an empty cube.
IF_VERBOSE(1, verbose_stream() << "Batch manager giving out empty cube.\n");
return l_cube; // return empty cube
@ -821,76 +817,49 @@ namespace smt {
}
}
void parallel::batch_manager::return_cubes_tree(ast_translation& l2g, CubeNode* cube_node, expr_ref_vector const& l_cube, expr_ref_vector const& A_worker, std::vector<CubeNode*>& frontier_roots) {
IF_VERBOSE(1, verbose_stream() << " Returning cube to batch manager's cube tree. Cube node null: " << (cube_node == nullptr) << " PROCESSING CUBE of size: " << l_cube.size() << "\n");
bool is_new_frontier = frontier_roots.empty(); // need to save this as a bool here, bc otherwise the frontier stops being populated after a single split atom
IF_VERBOSE(1, verbose_stream() << " Is new frontier: " << is_new_frontier << "\n");
void parallel::worker::return_cubes_tree(CubeNode* cube_node, expr_ref_vector const& A_worker, bool should_split) {
LOG_WORKER(1, " Returning cube to batch manager's cube tree. Cube node null: " << (cube_node == nullptr) << " PROCESSING CUBE of size: " << cube_node->cube.size() << "\n");
auto atom_in_cube = [&](expr_ref_vector const& cube, expr* atom) {
return any_of(cube, [&](expr* e) { return e == atom || (m.is_not(e, e) && e == atom); });
};
// apply the frugal strategy to ALL incoming worker cubes, but save in the PQ data structure for beam search
auto add_split_atom_tree = [&](expr* atom) {
IF_VERBOSE(1, verbose_stream() << " Adding split atom to tree: " << mk_bounded_pp(atom, m, 3) << "\n");
expr_ref_vector g_cube(l2g.to());
for (auto& atom : l_cube)
g_cube.push_back(l2g(atom));
// Positive atom branch
expr_ref_vector g_cube_pos = g_cube;
g_cube_pos.push_back(atom);
// Negative atom branch
expr_ref_vector g_cube_neg = g_cube;
g_cube_neg.push_back(m.mk_not(atom));
IF_VERBOSE(1, verbose_stream() << " Splitting positive and negative atoms: " << mk_pp(atom, m) << "," << mk_pp(m.mk_not(atom), m) << " from root: ");
//print the cube
LOG_WORKER(1, " Adding split atom to tree: " << mk_bounded_pp(atom, m, 3) << " from root: ");
for (auto& e : cube_node->cube)
IF_VERBOSE(1, verbose_stream() << mk_bounded_pp(e, m, 3) << " ");
IF_VERBOSE(1, verbose_stream() << "\n");
LOG_WORKER(1, mk_bounded_pp(e, m, 3) << " ");
LOG_WORKER(1, "\n");
auto [left_child, right_child] = m_cubes_tree.add_children(cube_node, g_cube_pos, g_cube_neg); // note: default is active
expr_ref_vector l_cube_pos = cube_node->cube;
l_cube_pos.push_back(atom);
expr_ref_vector l_cube_neg = cube_node->cube;
l_cube_neg.push_back(m.mk_not(atom));
if (is_new_frontier) {
frontier_roots.push_back(left_child);
frontier_roots.push_back(right_child);
IF_VERBOSE(1, verbose_stream() << " Added split to frontier roots. Size now: "
<< frontier_roots.size() << "\n");
}
auto [left_child, right_child] = m_cubes_tree.add_children(cube_node, l_cube_pos, l_cube_neg); // note: default is open
m_stats.m_num_cubes += 2;
m_stats.m_max_cube_depth = std::max(m_stats.m_max_cube_depth, g_cube.size() + 1);
m_stats.m_max_cube_depth = std::max(m_stats.m_max_cube_depth, cube_node->cube.size() + 1);
};
std::scoped_lock lock(mux);
if (l_cube.size() >= m_config.m_max_cube_depth) {
IF_VERBOSE(1, verbose_stream() << " Skipping split of cube at max depth " << m_config.m_max_cube_depth << "\n";);
cube_node->active = true; // mark the cube as active again since we didn't split it
if (cube_node->cube.size() >= m_config.m_max_cube_depth || !should_split) {
LOG_WORKER(1, " Skipping split of cube at max depth " << m_config.m_max_cube_depth << "\n");
cube_node->state = open; // mark the cube as open again since we didn't split it
return;
}
// --- Frugal approach: only process NEW worker cubes with NEW atoms ---
for (unsigned i = 0; i < A_worker.size(); ++i) {
IF_VERBOSE(1, verbose_stream() << " Processing worker atom " << mk_bounded_pp(A_worker[i], m, 3) << "\n");
expr_ref g_atom(l2g(A_worker[i]), l2g.to());
if (!m_split_atoms.contains(g_atom))
m_split_atoms.push_back(g_atom);
LOG_WORKER(1, " Processing worker atom " << mk_bounded_pp(A_worker[i], m, 3) << "\n");
if (!m_split_atoms.contains(A_worker[i]))
m_split_atoms.push_back(A_worker[i]);
IF_VERBOSE(1, verbose_stream() << " splitting worker cube on new atom " << mk_bounded_pp(g_atom, m, 3) << "\n");
add_split_atom_tree(g_atom);
LOG_WORKER(1, " splitting worker cube on new atom " << mk_bounded_pp(A_worker[i], m, 3) << "\n");
add_split_atom_tree(A_worker[i]);
}
IF_VERBOSE(1, verbose_stream() << " The returned cube:");
for (auto& e : l_cube)
LOG_WORKER(1, " The returned cube:");
for (auto& e : cube_node->cube)
IF_VERBOSE(1, verbose_stream() << mk_bounded_pp(e, m, 3) << " ");
IF_VERBOSE(1, verbose_stream() << " now has the following children:\n");
for (auto child : cube_node->children) {
@ -1017,7 +986,7 @@ namespace smt {
}
expr_ref_vector parallel::worker::get_split_atoms() {
unsigned k = 2;
unsigned k = 1;
// auto candidates = ctx->m_pq_scores.get_heap();
std::vector<std::pair<double, expr*>> top_k; // will hold at most k elements
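The top-k selection of split literals mentioned in the commit (a fixed-size min-heap of the best-scoring candidates) can be illustrated stand-alone; this generic sketch is not the get_split_atoms implementation and uses strings in place of expr*:

#include <cstddef>
#include <iostream>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Keep the k highest-scoring candidates with a min-heap of size <= k:
// the heap top is the weakest of the current best k, so a new candidate
// enters only if it beats that top.
template <typename T>
std::vector<std::pair<double, T>> top_k_by_score(const std::vector<std::pair<double, T>>& xs, std::size_t k) {
    using Item = std::pair<double, T>;
    auto cmp = [](const Item& a, const Item& b) { return a.first > b.first; }; // min-heap on score
    std::priority_queue<Item, std::vector<Item>, decltype(cmp)> heap(cmp);
    for (const auto& x : xs) {
        if (heap.size() < k) heap.push(x);
        else if (x.first > heap.top().first) { heap.pop(); heap.push(x); }
    }
    std::vector<Item> out;
    while (!heap.empty()) { out.push_back(heap.top()); heap.pop(); }
    return out; // ascending by score
}

int main() {
    std::vector<std::pair<double, std::string>> scored = {
        {0.3, "a"}, {2.5, "b"}, {1.1, "c"}, {0.7, "d"}
    };
    for (const auto& [score, name] : top_k_by_score(scored, 2))
        std::cout << name << " " << score << "\n";   // prints: c 1.1 then b 2.5
    return 0;
}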
@ -1067,8 +1036,6 @@ namespace smt {
m_cubes_depth_sets.clear();
} else if (m_config.m_beam_search) {
m_cubes_pq = CubePQ();
} else if (m_config.m_cubetree) {
m_cubes_tree = CubeTree(m);
} else {
m_cubes.reset();
m_cubes.push_back(expr_ref_vector(m)); // push empty cube

View file

@ -72,7 +72,6 @@ namespace smt {
stats m_stats;
expr_ref_vector m_split_atoms; // atoms to split on
vector<expr_ref_vector> m_cubes;
CubeTree m_cubes_tree;
struct ScoredCube {
double score;
@ -117,7 +116,7 @@ namespace smt {
void init_parameters_state();
public:
batch_manager(ast_manager& m, parallel& p) : m(m), p(p), m_split_atoms(m), m_cubes_tree(m) { }
batch_manager(ast_manager& m, parallel& p) : m(m), p(p), m_split_atoms(m) { }
void initialize();
@ -133,33 +132,16 @@ namespace smt {
//
expr_ref_vector get_cube(ast_translation& g2l); // FOR ALL NON-TREE VERSIONS
std::pair<CubeNode*, expr_ref_vector> get_cube_from_tree(ast_translation& g2l, std::vector<CubeNode*>& frontier_roots, unsigned worker_id, CubeNode* prev_cube = nullptr);
//
// worker threads return unprocessed cubes to the batch manager together with split literal candidates.
// the batch manager re-enqueues unprocessed cubes and optionally splits them using the split_atoms returned by this and workers.
//
void return_cubes_tree(ast_translation& l2g, CubeNode* cube_node, expr_ref_vector const& cube, expr_ref_vector const& split_atoms, std::vector<CubeNode*>& frontier_roots);
// FOR ALL NON-TREE VERSIONS
void return_cubes(ast_translation& l2g, expr_ref_vector const& cube, expr_ref_vector const& split_atoms, const bool should_split=true, const double hardness=1.0);
void report_assumption_used(ast_translation& l2g, expr* assumption);
void collect_clause(ast_translation& l2g, unsigned source_worker_id, expr* e);
expr_ref_vector return_shared_clauses(ast_translation& g2l, unsigned& worker_limit, unsigned worker_id);
void remove_node_and_propagate(CubeNode* node) {
std::scoped_lock lock(mux);
SASSERT(m_config.m_cubetree);
CubeNode* last_removed = m_cubes_tree.remove_node_and_propagate(node);
if (last_removed) {
IF_VERBOSE(1, verbose_stream() << "Cube tree: removed node and propagated up to depth " << last_removed->cube.size() << "\n");
} else {
IF_VERBOSE(1, verbose_stream() << "Cube tree: ERROR removing node with no propagation\n");
}
}
double update_avg_cube_hardness(double hardness) {
std::scoped_lock lock(mux);
IF_VERBOSE(1, verbose_stream() << "Cube hardness: " << hardness << ", previous avg: " << m_avg_cube_hardness << ", solved cubes: " << m_solved_cube_count << "\n";);
@ -183,6 +165,7 @@ namespace smt {
double m_max_conflict_mul = 1.5;
bool m_share_units_initial_only = false;
bool m_cube_initial_only = false;
unsigned m_max_cube_depth = 20;
unsigned m_max_greedy_cubes = 1000;
unsigned m_num_split_lits = 2;
bool m_backbone_detection = false;
@ -192,6 +175,11 @@ namespace smt {
bool m_cubetree = false;
};
struct stats {
unsigned m_max_cube_depth = 0;
unsigned m_num_cubes = 0;
};
unsigned id; // unique identifier for the worker
parallel& p;
batch_manager& b;
@ -202,12 +190,33 @@ namespace smt {
scoped_ptr<context> ctx;
ast_translation m_g2l, m_l2g;
CubeNode* m_curr_cube_node = nullptr;
std::vector<CubeNode*> frontier_roots;
CubeTree m_cubes_tree;
stats m_stats;
expr_ref_vector m_split_atoms;
unsigned m_num_shared_units = 0;
unsigned m_num_initial_atoms = 0;
unsigned m_shared_clause_limit = 0; // remembers the index into shared_clause_trail marking the boundary between "old" and "new" clauses to share
double m_avg_cube_hardness = 0.0;
unsigned m_solved_cube_count = 0;
void remove_node_and_propagate(CubeNode* node, ast_manager& m) {
SASSERT(m_config.m_cubetree);
// CubeNode* last_removed = m_cubes_tree.remove_node_and_propagate(node, m);
// if (last_removed) {
// IF_VERBOSE(1, verbose_stream() << "Cube tree: removed node and propagated up to depth " << last_removed->cube.size() << "\n");
// } else {
// IF_VERBOSE(1, verbose_stream() << "Cube tree: ERROR removing node with no propagation\n");
// }
}
double update_avg_cube_hardness_worker(double hardness) {
IF_VERBOSE(1, verbose_stream() << "Cube hardness: " << hardness << ", previous avg: " << m_avg_cube_hardness << ", solved cubes: " << m_solved_cube_count << "\n";);
m_avg_cube_hardness = (m_avg_cube_hardness * m_solved_cube_count + hardness) / (m_solved_cube_count + 1);
m_solved_cube_count++;
return m_avg_cube_hardness;
}
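// illustration (not in the diff): successive hardness values 2, 4 and 6 give running averages 2, then (2*1 + 4)/2 = 3, then (3*2 + 6)/3 = 4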
void share_units(ast_translation& l2g);
lbool check_cube(expr_ref_vector const& cube);
@ -217,6 +226,8 @@ namespace smt {
expr_ref_vector find_backbone_candidates();
expr_ref_vector get_backbones_from_candidates(expr_ref_vector const& candidates);
std::pair<CubeNode*, expr_ref_vector> get_cube_from_tree(unsigned worker_id, CubeNode* prev_cube = nullptr);
void return_cubes_tree(CubeNode* cube_node, expr_ref_vector const& split_atoms, const bool should_split=true);
double naive_hardness();
double explicit_hardness(expr_ref_vector const& cube);

View file

@ -28,6 +28,8 @@ Author:
#include "params/smt_parallel_params.hpp"
#include <cmath>
#include <condition_variable>
#include <mutex>
class bounded_pp_exprs {
expr_ref_vector const& es;
@ -117,13 +119,9 @@ namespace smt {
return;
}
case l_false: {
// if unsat core only contains (external) assumptions (i.e. all the unsat core are asms), then unsat and return as this does NOT depend on cubes
// otherwise, extract lemmas that can be shared (units (and unsat core?)).
// share with batch manager.
// process next cube.
expr_ref_vector const& unsat_core = ctx->unsat_core();
LOG_WORKER(2, " unsat core:\n"; for (auto c : unsat_core) verbose_stream() << mk_bounded_pp(c, m, 3) << "\n");
// If the unsat core only contains assumptions,
// If the unsat core only contains external assumptions,
// unsatisfiability does not depend on the current cube and the entire problem is unsat.
if (all_of(unsat_core, [&](expr* e) { return asms.contains(e); })) {
LOG_WORKER(1, " determined formula unsat\n");
@ -303,16 +301,15 @@ namespace smt {
}
void parallel2::worker::collect_shared_clauses(ast_translation& g2l) {
expr_ref_vector new_clauses = b.return_shared_clauses(g2l, m_shared_clause_limit, id); // get new clauses from the batch manager
expr_ref_vector new_clauses = b.return_shared_clauses(g2l, m_shared_clause_limit, id);
// iterate over new clauses and assert them in the local context
for (expr* e : new_clauses) {
expr_ref local_clause(e, g2l.to()); // e was already translated to the local context in the batch manager!!
ctx->assert_expr(local_clause); // assert the clause in the local context
expr_ref local_clause(e, g2l.to());
ctx->assert_expr(local_clause);
LOG_WORKER(2, " asserting shared clause: " << mk_bounded_pp(local_clause, m, 3) << "\n");
}
}
// get new clauses from the batch manager and assert them in the local context
expr_ref_vector parallel2::batch_manager::return_shared_clauses(ast_translation& g2l, unsigned& worker_limit, unsigned worker_id) {
std::scoped_lock lock(mux);
expr_ref_vector result(g2l.to());
@ -363,7 +360,6 @@ namespace smt {
double new_score = ctx->m_lit_scores[0][v] * ctx->m_lit_scores[1][v];
// decay the scores
ctx->m_lit_scores[0][v] /= 2;
ctx->m_lit_scores[1][v] /= 2;
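A stand-alone sketch (not part of the diff) of the scoring/decay scheme shown above: a variable's split score is the product of its two phase scores, and each read halves both phases so that older activity decays geometrically:

#include <cstdio>

int main() {
    double pos = 8.0, neg = 4.0;                // accumulated phase scores for one variable
    for (int round = 0; round < 3; ++round) {
        double score = pos * neg;               // combined score used for ranking split candidates
        std::printf("round %d score %.2f\n", round, score);  // prints 32, 8, 2
        pos /= 2;                               // decay both phases
        neg /= 2;
    }
    return 0;
}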
@ -392,9 +388,7 @@ namespace smt {
return;
m_state = state::is_unsat;
// every time we do a check_sat call, don't want to have old info coming from a prev check_sat call
// the unsat core gets reset internally in the context after each check_sat, so we assert this property here
// takeaway: each call to check_sat needs to have a fresh unsat core
// each call to check_sat needs to have a fresh unsat core
SASSERT(p.ctx.m_unsat_core.empty());
for (expr* e : unsat_core)
p.ctx.m_unsat_core.push_back(l2g(e));
@ -509,7 +503,6 @@ namespace smt {
}
void parallel2::batch_manager::collect_statistics(::statistics& st) const {
//ctx->collect_statistics(st);
st.update("parallel-num_cubes", m_stats.m_num_cubes);
st.update("parallel-max-cube-size", m_stats.m_max_cube_depth);
}
@ -536,11 +529,6 @@ namespace smt {
for (unsigned i = 0; i < num_threads; ++i)
m_workers.push_back(alloc(worker, i, *this, asms)); // i.e. "new worker(i, *this, asms)"
// this allows cancellation of all the child threads:
// within the lexical scope of the code block, it creates a data structure to which child limit
// objects are pushed, so if someone cancels the parent object, the cancellation propagates to the children,
// and that cancellation has the lifetime of the scope.
// even if this code doesn't explicitly kill the main thread, it still applies: if you e.g. Ctrl+C the main thread, the child threads need to be cancelled
for (auto w : m_workers)
sl.push_child(&(w->limit()));
@ -561,7 +549,7 @@ namespace smt {
m_batch_manager.collect_statistics(ctx.m_aux_stats);
}
return m_batch_manager.get_result(); // i.e. all threads have finished all of their cubes -- so if state::is_running is still true, means the entire formula is unsat (otherwise a thread would have returned l_undef)
return m_batch_manager.get_result();
}
}

View file

@ -21,6 +21,7 @@ Revision History:
#include "smt/smt_context.h"
#include "util/search_tree.h"
#include <thread>
#include <condition_variable>
#include <mutex>
#include <condition_variable>

View file

@ -4,6 +4,7 @@
#include <mutex>
#include <cmath>
#include <iostream>
#include <condition_variable>
// Initially there are no cubes.