mirror of https://github.com/Z3Prover/z3
synced 2025-10-09 09:21:56 +00:00
Parallel solving (#7848)
* very basic setup
* ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743
  Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
* respect smt configuration parameter in elim_unconstrained simplifier
* indentation
* add bash files for test runs
* add option to selectively disable variable solving for only ground expressions
* remove verbose output
* fix #7745: axioms for len(substr(...)) escaped due to nested rewriting
* ensure atomic constraints are processed by the arithmetic solver
* #7739 optimization: add simplification rule for at(x, offset) = "".
  Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions. The example highlights some opportunities for simplification, notably at(..) = "". The example is solved in both versions after adding this simplification.
* fix unsound len(substr) axiom
* FreshConst is_sort (#7748)
* #7750: add pre-processing simplification
* Add parameter validation for selected API functions
* updates to ac-plugin: fix incrementality bugs by allowing destructive updates during saturation, at the cost of redoing saturation after a pop.
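The at(x, offset) = "" simplification noted above rests on a simple fact: seq.at returns the empty sequence exactly when the offset is out of bounds. A minimal plain-Python sketch of that equivalence (illustrative only, not Z3's rewriter code):

```python
# Sketch of the fact behind the at(x, offset) = "" rewrite:
# seq.at yields a unit string when 0 <= offset < len(x), else "".
# So (at(x, offset) = "") can be rewritten to (offset < 0 or offset >= len(x)).

def seq_at(x: str, offset: int) -> str:
    """Semantics of seq.at: unit string at offset, or "" when out of bounds."""
    if 0 <= offset < len(x):
        return x[offset]
    return ""

def at_eq_empty(x: str, offset: int) -> bool:
    """Left-hand side of the rewrite: at(x, offset) = ""."""
    return seq_at(x, offset) == ""

def rewritten(x: str, offset: int) -> bool:
    """Right-hand side after the rewrite: offset is out of bounds."""
    return offset < 0 or offset >= len(x)

# The two sides agree on all inputs:
assert all(at_eq_empty(x, o) == rewritten(x, o)
           for x in ["", "a", "abc"] for o in range(-2, 5))
```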
* enable passive; add check that bloom is up to date
* add top-k fixed-size min-heap priority queue for top-scoring literals
* set up worker-thread batch manager for the multithreaded batch-cubes paradigm (still debugging a segfault)
* fix bug in parallel solving batch setup
* fix bug
* allow internalizing implies
* disable pre-processing during cubing
* debugging
* remove default constructor
* remove a bunch of string copies
* Update euf_ac_plugin.cpp: include reduction rules in forward simplification
* Update euf_completion.cpp: try restricting the scope of equalities added by instantiation
* Update smt_parallel.cpp: drop non-relevant units from shared structures
* process cubes as lists of individual literals
* merge
* Add support for Algebraic Datatypes in JavaScript/TypeScript bindings (#7734)
  * Initial plan
  * Add datatype type definitions to types.ts (work in progress)
  * Complete datatype type definitions with working TypeScript compilation
  * Implement core datatype functionality with successful TypeScript compilation
  * Complete datatype implementation with full Context integration and tests
  Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
  Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
* chipping away at the new code structure
* comments
* debug infinite recursion; split cubes on existing split atoms that aren't in the cube
* share lemmas; learn from the unsat core; debug a couple of things, including a subtle bug that is hard to reproduce
* merge
* fix #7603: race condition in Ctrl-C handling (#7755)
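The top-k fixed-size min-heap mentioned above is a standard idiom: keep the k best-scoring items in a min-heap of size k whose root is the current cutoff score. The Z3 implementation is in C++; this is a hedged sketch of the technique with illustrative names:

```python
import heapq

class TopK:
    """Track the k highest-scoring items with a size-bounded min-heap.

    The heap root is the smallest retained score, so each new item either
    beats the root (replace it) or is discarded, in O(log k)."""

    def __init__(self, k: int):
        self.k = k
        self.heap = []  # (score, item) pairs ordered as a min-heap by score

    def push(self, score: float, item) -> None:
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, (score, item))
        elif score > self.heap[0][0]:
            # New score beats the current cutoff: evict the smallest.
            heapq.heapreplace(self.heap, (score, item))

    def items(self):
        # Retained items, highest score first.
        return [it for _, it in sorted(self.heap, reverse=True)]

topk = TopK(3)
for score, lit in [(1.0, "p"), (9.0, "q"), (4.0, "r"), (7.0, "s"), (2.0, "t")]:
    topk.push(score, lit)
assert topk.items() == ["q", "s", "r"]
```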
* fix race in cancel_eh
* fix build
* add arithmetic saturation
* add an option to register a callback on quantifier instantiation.
  Suppose a user propagator encodes axioms using quantifiers and uses E-matching for instantiation. If it wants to implement a custom priority scheme, or drop some instances based on internal checks, it can register a callback with quantifier instantiation.
* missing new closure
  Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
* add Z3_solver_propagate_on_binding to ml callback declarations
* add python file
  Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local>
* debug under-defined calls
  Signed-off-by: Lev Nachmanson <levnach@hotmail.com>
* further untangle params
* precalculate parameters to define the eval order
* remove a printout
* rename a Python file
* add on_binding callbacks across APIs; update release notes; add to Java, .Net, C++
* use jboolean in Native interface
* register on_binding attribute
* fix java build for java bindings
* avoid interfering side-effects in function calls
* remove theory_str and classes that are only used by it
* remove automata from python build
* remove ref to theory_str
* get the finest factorizations before project
* rename add_lcs to add_lc
* resolve bad bug about l2g and g2l translators using the wrong global context; add some debug prints
* initial attempt at dynamically switching from the greedy to the frugal splitting strategy in return_cubes; needs testing. Also, there is some bug where the threads take forever to cancel?
* Update RELEASE_NOTES.md
* resolve bug about not translating managers correctly for the second phase of the greedy cubing, and the frugal fallback
* remove unused square-free check
* add some debug prints and implement internal polynomial fix
* restore the square-free check
* add some comments and debug m_assumptions_used
* redo the greedy->frugal strategy so we don't split on existing cubes in frugal at all (eliminate the incorrect/wasteful step by processing the current batch first)
* set up initial scaffolding for sharing clauses between threads and the batch manager; needs some reworking/debugging still
* Add .github/copilot-instructions.md with comprehensive Z3 development guide (#7766)
  * Initial plan
  * Add comprehensive .github/copilot-instructions.md with validated build commands and timing
  * Remove test_example binary file from repository
  Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
  Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
* Bump actions/checkout from 4 to 5 (#7773)
  Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
  - [Release notes](https://github.com/actions/checkout/releases)
  - [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
  - [Commits](https://github.com/actions/checkout/compare/v4...v5)
  ---
  updated-dependencies:
  - dependency-name: actions/checkout
    dependency-version: '5'
    dependency-type: direct:production
    update-type: version-update:semver-major
  ...
Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * turn off logging at level 0 for testing * add max thread conflicts backoff * Parallel solving (#7775) * very basic setup * very basic setup (#7741) * add score access and reset Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * added notes Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * Update PARALLEL_PROJECT_NOTES.md * Update PARALLEL_PROJECT_NOTES.md * Update PARALLEL_PROJECT_NOTES.md * add bash files for test runs * fix compilation Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * more notes Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * Update PARALLEL_PROJECT_NOTES.md * Update PARALLEL_PROJECT_NOTES.md * Update PARALLEL_PROJECT_NOTES.md * Update PARALLEL_PROJECT_NOTES.md * Update PARALLEL_PROJECT_NOTES.md * Update PARALLEL_PROJECT_NOTES.md * Update PARALLEL_PROJECT_NOTES.md * Update PARALLEL_PROJECT_NOTES.md * Update PARALLEL_PROJECT_NOTES.md * add top-k fixed-sized min-heap priority queue for top scoring literals * fixed-size min-heap for tracking top-k literals (#7752) * very basic setup * ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743 Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * respect smt configuration parameter in elim_unconstrained simplifier Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * indentation * add bash files for test runs * add option to selectively disable variable solving for only ground expressions Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * remove verbose output Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * fix #7745 axioms for len(substr(...)) escaped due to nested rewriting * ensure atomic constraints are processed by arithmetic solver * #7739 optimization add simplification rule for at(x, offset) = "" Introducing j just postpones some rewrites that prevent useful simplifications. 
Z3 already uses common sub-expressions. The example highlights some opportunities for simplification, noteworthy at(..) = "". The example is solved in both versions after adding this simplification. * fix unsound len(substr) axiom Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * FreshConst is_sort (#7748) * #7750 add pre-processing simplification * Add parameter validation for selected API functions * updates to ac-plugin fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop. * enable passive, add check for bloom up-to-date * add top-k fixed-sized min-heap priority queue for top scoring literals --------- Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com> * set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still * fix bug in parallel solving batch setup * fix bug * debugging * process cubes as lists of individual lits * Parallel solving (#7756) * very basic setup * ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743 Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * respect smt configuration parameter in elim_unconstrained simplifier Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * indentation * add bash files for test runs * add option to selectively disable variable solving for only ground expressions Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * remove verbose output Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * fix #7745 axioms for len(substr(...)) escaped due to nested rewriting * ensure atomic constraints are processed by arithmetic solver * #7739 optimization add simplification rule for at(x, offset) = "" Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions. 
The example highlights some opportunities for simplification, noteworthy at(..) = "". The example is solved in both versions after adding this simplification. * fix unsound len(substr) axiom Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * FreshConst is_sort (#7748) * #7750 add pre-processing simplification * Add parameter validation for selected API functions * updates to ac-plugin fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop. * enable passive, add check for bloom up-to-date * add top-k fixed-sized min-heap priority queue for top scoring literals * set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still * fix bug in parallel solving batch setup * fix bug * allow for internalize implies * disable pre-processing during cubing * debugging * process cubes as lists of individual lits --------- Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com> * snapshot Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * pair programming Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * pair programming Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * merge * chipping away at the new code structure * Parallel solving (#7758) * very basic setup * ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743 Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * respect smt configuration parameter in elim_unconstrained simplifier Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * indentation * add bash files for test runs * add option to selectively disable variable solving for only ground expressions Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * remove verbose output Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * fix #7745 axioms for 
len(substr(...)) escaped due to nested rewriting * ensure atomic constraints are processed by arithmetic solver * #7739 optimization add simplification rule for at(x, offset) = "" Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions. The example highlights some opportunities for simplification, noteworthy at(..) = "". The example is solved in both versions after adding this simplification. * fix unsound len(substr) axiom Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * FreshConst is_sort (#7748) * #7750 add pre-processing simplification * Add parameter validation for selected API functions * updates to ac-plugin fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop. * enable passive, add check for bloom up-to-date * add top-k fixed-sized min-heap priority queue for top scoring literals * set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still * fix bug in parallel solving batch setup * fix bug * allow for internalize implies * disable pre-processing during cubing * debugging * remove default constructor * remove a bunch of string copies * Update euf_ac_plugin.cpp include reduction rules in forward simplification * Update euf_completion.cpp try out restricting scope of equalities added by instantation * Update smt_parallel.cpp Drop non-relevant units from shared structures. 
* process cubes as lists of individual lits * merge * Add support for Algebraic Datatypes in JavaScript/TypeScript bindings (#7734) * Initial plan * Add datatype type definitions to types.ts (work in progress) Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Complete datatype type definitions with working TypeScript compilation Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Implement core datatype functionality with TypeScript compilation success Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Complete datatype implementation with full Context integration and tests Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * chipping away at the new code structure --------- Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com> Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt> Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * updates Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * comments * debug infinite recursion and split cubes on existing split atoms that aren't in the cube * share lemmas, learn from unsat core, try to debug a couple of things, there was a subtle bug that i have a hard time repro'ing * Parallel solving (#7759) * very basic setup * ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743 Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * respect smt configuration parameter in elim_unconstrained simplifier Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * indentation * add bash 
files for test runs * add option to selectively disable variable solving for only ground expressions Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * remove verbose output Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * fix #7745 axioms for len(substr(...)) escaped due to nested rewriting * ensure atomic constraints are processed by arithmetic solver * #7739 optimization add simplification rule for at(x, offset) = "" Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions. The example highlights some opportunities for simplification, noteworthy at(..) = "". The example is solved in both versions after adding this simplification. * fix unsound len(substr) axiom Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * FreshConst is_sort (#7748) * #7750 add pre-processing simplification * Add parameter validation for selected API functions * updates to ac-plugin fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop. * enable passive, add check for bloom up-to-date * add top-k fixed-sized min-heap priority queue for top scoring literals * set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still * fix bug in parallel solving batch setup * fix bug * allow for internalize implies * disable pre-processing during cubing * debugging * remove default constructor * remove a bunch of string copies * Update euf_ac_plugin.cpp include reduction rules in forward simplification * Update euf_completion.cpp try out restricting scope of equalities added by instantation * Update smt_parallel.cpp Drop non-relevant units from shared structures. 
* process cubes as lists of individual lits * merge * Add support for Algebraic Datatypes in JavaScript/TypeScript bindings (#7734) * Initial plan * Add datatype type definitions to types.ts (work in progress) Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Complete datatype type definitions with working TypeScript compilation Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Implement core datatype functionality with TypeScript compilation success Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Complete datatype implementation with full Context integration and tests Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * chipping away at the new code structure * comments * debug infinite recursion and split cubes on existing split atoms that aren't in the cube * share lemmas, learn from unsat core, try to debug a couple of things, there was a subtle bug that i have a hard time repro'ing --------- Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com> Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt> Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * updates Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * simplify output Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * merge * resolve bad bug about l2g and g2l translators using wrong global context. add some debug prints * initial attempt at dynamically switching from greedy to frugal splitting strategy in return_cubes. need to test. 
also there is some bug where the threads take forever to cancel? * Parallel solving (#7769) * very basic setup * ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743 Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * respect smt configuration parameter in elim_unconstrained simplifier Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * indentation * add bash files for test runs * add option to selectively disable variable solving for only ground expressions Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * remove verbose output Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * fix #7745 axioms for len(substr(...)) escaped due to nested rewriting * ensure atomic constraints are processed by arithmetic solver * #7739 optimization add simplification rule for at(x, offset) = "" Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions. The example highlights some opportunities for simplification, noteworthy at(..) = "". The example is solved in both versions after adding this simplification. * fix unsound len(substr) axiom Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * FreshConst is_sort (#7748) * #7750 add pre-processing simplification * Add parameter validation for selected API functions * updates to ac-plugin fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop. 
* enable passive, add check for bloom up-to-date * add top-k fixed-sized min-heap priority queue for top scoring literals * set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still * fix bug in parallel solving batch setup * fix bug * allow for internalize implies * disable pre-processing during cubing * debugging * remove default constructor * remove a bunch of string copies * Update euf_ac_plugin.cpp include reduction rules in forward simplification * Update euf_completion.cpp try out restricting scope of equalities added by instantation * Update smt_parallel.cpp Drop non-relevant units from shared structures. * process cubes as lists of individual lits * merge * Add support for Algebraic Datatypes in JavaScript/TypeScript bindings (#7734) * Initial plan * Add datatype type definitions to types.ts (work in progress) Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Complete datatype type definitions with working TypeScript compilation Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Implement core datatype functionality with TypeScript compilation success Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Complete datatype implementation with full Context integration and tests Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * chipping away at the new code structure * comments * debug infinite recursion and split cubes on existing split atoms that aren't in the cube * share lemmas, learn from unsat core, try to debug a couple of things, there was a subtle bug that i have a hard time repro'ing * merge * resolve bad bug about l2g and g2l translators using wrong global context. 
add some debug prints * initial attempt at dynamically switching from greedy to frugal splitting strategy in return_cubes. need to test. also there is some bug where the threads take forever to cancel? --------- Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com> Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt> Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * resolve bug about not translating managers correctly for the second phase of the greedy cubing, and the frugal fallback * Parallel solving (#7771) * very basic setup * ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743 Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * respect smt configuration parameter in elim_unconstrained simplifier Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * indentation * add bash files for test runs * add option to selectively disable variable solving for only ground expressions Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * remove verbose output Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * fix #7745 axioms for len(substr(...)) escaped due to nested rewriting * ensure atomic constraints are processed by arithmetic solver * #7739 optimization add simplification rule for at(x, offset) = "" Introducing j just postpones some rewrites that prevent useful simplifications. Z3 already uses common sub-expressions. The example highlights some opportunities for simplification, noteworthy at(..) = "". The example is solved in both versions after adding this simplification. 
* fix unsound len(substr) axiom Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * FreshConst is_sort (#7748) * #7750 add pre-processing simplification * Add parameter validation for selected API functions * updates to ac-plugin fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop. * enable passive, add check for bloom up-to-date * add top-k fixed-sized min-heap priority queue for top scoring literals * set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still * fix bug in parallel solving batch setup * fix bug * allow for internalize implies * disable pre-processing during cubing * debugging * remove default constructor * remove a bunch of string copies * Update euf_ac_plugin.cpp include reduction rules in forward simplification * Update euf_completion.cpp try out restricting scope of equalities added by instantation * Update smt_parallel.cpp Drop non-relevant units from shared structures. 
* process cubes as lists of individual lits * merge * Add support for Algebraic Datatypes in JavaScript/TypeScript bindings (#7734) * Initial plan * Add datatype type definitions to types.ts (work in progress) Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Complete datatype type definitions with working TypeScript compilation Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Implement core datatype functionality with TypeScript compilation success Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Complete datatype implementation with full Context integration and tests Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * chipping away at the new code structure * comments * debug infinite recursion and split cubes on existing split atoms that aren't in the cube * share lemmas, learn from unsat core, try to debug a couple of things, there was a subtle bug that i have a hard time repro'ing * merge * fix #7603: race condition in Ctrl-C handling (#7755) * fix #7603: race condition in Ctrl-C handling * fix race in cancel_eh * fix build * add arithemtic saturation * add an option to register callback on quantifier instantiation Suppose a user propagator encodes axioms using quantifiers and uses E-matching for instantiation. 
If it wants to implement a custom priority scheme or drop some instances based on internal checks it can register a callback with quantifier instantiation * missing new closure Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * add Z3_solver_propagate_on_binding to ml callback declarations Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * add python file Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local> * debug under defined calls Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * more untangle params Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * precalc parameters to define the eval order Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * remove a printout Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * rename a Python file Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * add on_binding callbacks across APIs update release notes, add to Java, .Net, C++ * use jboolean in Native interface Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * register on_binding attribute Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * fix java build for java bindings Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * avoid interferring side-effects in function calls Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * remove theory_str and classes that are only used by it * remove automata from python build Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * remove ref to theory_str Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * get the finest factorizations before project Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * rename add_lcs to add_lc Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * resolve bad bug about l2g and g2l translators using wrong global context. add some debug prints * initial attempt at dynamically switching from greedy to frugal splitting strategy in return_cubes. need to test. also there is some bug where the threads take forever to cancel? 
* Update RELEASE_NOTES.md * resolve bug about not translating managers correctly for the second phase of the greedy cubing, and the frugal fallback --------- Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local> Signed-off-by: Lev Nachmanson <levnach@hotmail.com> Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com> Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt> Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> Co-authored-by: Lev Nachmanson <levnach@hotmail.com> * code and notes * add some debug prints and impelement internal polynomial fix * add some comments and debug m_assumptions_used * redo greedy->frugal strategy so we don't split on existing cubes in frugal at all (eliminate the incorrect/wasteful step by processing current batch first) * set up initial scaffolding for sharing clauses between threads and batch manager. needs some reworking/debug still * Parallel solving (#7774) * very basic setup * ensure solve_eqs is fully disabled when smt.solve_eqs=false, #7743 Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * respect smt configuration parameter in elim_unconstrained simplifier Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * indentation * add bash files for test runs * add option to selectively disable variable solving for only ground expressions Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * remove verbose output Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * fix #7745 axioms for len(substr(...)) escaped due to nested rewriting * ensure atomic constraints are processed by arithmetic solver * #7739 optimization add simplification rule for at(x, offset) = "" Introducing j just postpones some rewrites that prevent useful simplifications. 
Z3 already uses common sub-expressions. The example highlights some opportunities for simplification, notably at(..) = "". The example is solved in both versions after adding this simplification. * fix unsound len(substr) axiom Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * FreshConst is_sort (#7748) * #7750 add pre-processing simplification * Add parameter validation for selected API functions * updates to ac-plugin fix incrementality bugs by allowing destructive updates during saturation at the cost of redoing saturation after a pop. * enable passive, add check for bloom up-to-date * add top-k fixed-sized min-heap priority queue for top scoring literals * set up worker thread batch manager for multithreaded batch cubes paradigm, need to debug as I am getting segfault still * fix bug in parallel solving batch setup * fix bug * allow for internalize implies * disable pre-processing during cubing * debugging * remove default constructor * remove a bunch of string copies * Update euf_ac_plugin.cpp include reduction rules in forward simplification * Update euf_completion.cpp try out restricting scope of equalities added by instantiation * Update smt_parallel.cpp Drop non-relevant units from shared structures. 
* process cubes as lists of individual lits * merge * Add support for Algebraic Datatypes in JavaScript/TypeScript bindings (#7734) * Initial plan * Add datatype type definitions to types.ts (work in progress) Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Complete datatype type definitions with working TypeScript compilation Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Implement core datatype functionality with TypeScript compilation success Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Complete datatype implementation with full Context integration and tests Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * chipping away at the new code structure * comments * debug infinite recursion and split cubes on existing split atoms that aren't in the cube * share lemmas, learn from unsat core, try to debug a couple of things, there was a subtle bug that i have a hard time repro'ing * merge * fix #7603: race condition in Ctrl-C handling (#7755) * fix #7603: race condition in Ctrl-C handling * fix race in cancel_eh * fix build * add arithmetic saturation * add an option to register callback on quantifier instantiation Suppose a user propagator encodes axioms using quantifiers and uses E-matching for instantiation. 
If it wants to implement a custom priority scheme or drop some instances based on internal checks it can register a callback with quantifier instantiation * missing new closure Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * add Z3_solver_propagate_on_binding to ml callback declarations Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * add python file Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local> * debug under defined calls Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * more untangle params Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * precalc parameters to define the eval order Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * remove a printout Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * rename a Python file Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * add on_binding callbacks across APIs update release notes, add to Java, .Net, C++ * use jboolean in Native interface Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * register on_binding attribute Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * fix java build for java bindings Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * avoid interfering side-effects in function calls Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * remove theory_str and classes that are only used by it * remove automata from python build Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * remove ref to theory_str Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * get the finest factorizations before project Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * rename add_lcs to add_lc Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * resolve bad bug about l2g and g2l translators using wrong global context. add some debug prints * initial attempt at dynamically switching from greedy to frugal splitting strategy in return_cubes. need to test. also there is some bug where the threads take forever to cancel? 
* Update RELEASE_NOTES.md * resolve bug about not translating managers correctly for the second phase of the greedy cubing, and the frugal fallback * remove unused square-free check Signed-off-by: Lev Nachmanson <levnach@hotmail.com> * add some debug prints and implement internal polynomial fix * add some comments and debug m_assumptions_used * redo greedy->frugal strategy so we don't split on existing cubes in frugal at all (eliminate the incorrect/wasteful step by processing current batch first) * set up initial scaffolding for sharing clauses between threads and batch manager. needs some reworking/debug still --------- Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local> Signed-off-by: Lev Nachmanson <levnach@hotmail.com> Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com> Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt> Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> Co-authored-by: Lev Nachmanson <levnach@hotmail.com> * sign of life Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * add notes on parameter tuning Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * add notes on parameter tuning Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * add notes on parameter tuning Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * add notes on parameter tuning Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * turn off logging at level 0 for testing * add max thread conflicts backoff --------- Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local> Signed-off-by: Lev Nachmanson <levnach@hotmail.com> Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: humnrdble 
<83878671+humnrdble@users.noreply.github.com> Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt> Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> Co-authored-by: Lev Nachmanson <levnach@hotmail.com> * fix #7776 * add > operator as shorthand for Array * updates to euf completion * resolve bug about not popping local ctx to base level before collecting shared lits * Add virtual translate method to solver_factory class (#7780) * Initial plan * Add virtual translate method to solver_factory base class and all implementations Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Add documentation for the translate method in solver_factory Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * put return_cubes under lock * Revert "resolve bug about not popping local ctx to base level before collecting shared lits" This reverts commit bba1111e1b
. * Update seq_rewriter.cpp * fix releaseNotesSource to inline Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * Use solver factory translate method in Z3_solver_translate (#7782) * Initial plan * Fix Z3_solver_translate to use solver factory translate method Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Revert "Parallel solving (#7775)" (#7777) This reverts commit c8e866f568
. * remove upload artifact for azure-pipeline Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * Fix compilation warning: add missing is_passive_eq case to switch statement (#7785) * Initial plan * Fix compilation warning: add missing is_passive_eq case to switch statement Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Remove NugetPublishNightly stage from nightly.yaml (#7787) * Initial plan * Remove NugetPublishNightly stage from nightly.yaml Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * add more params * enable pypi public Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * Fix nullptr dereference in pp_symbol when handling null symbol names (#7790) * Initial plan * Fix nullptr dereference in pp_symbol with null symbol names Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * add option to control epsilon #7791 #7791 reports on using model values during lex optimization that break soft constraints. This is an artifact of using optimization where optimal values can be arbitrarily close to a rational. In a way it is by design, but we give the user now an option to control the starting point for epsilon when converting infinitesimals into rationals. 
* update on euf * check for internalized in solve_for * fix #7792 add missing revert operations * update version number to 4.15.4 Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * fix #7753 * fix #7796 Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * Create centralized version management with VERSION.txt (#7802) * Initial plan * Create VERSION.txt and update CMakeLists.txt to read version from file Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Complete centralized version management system Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Fix version update script and finalize implementation Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Create centralized version management with VERSION.txt Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * read version from VERSION.txt Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * fix version parse Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * fix parsing of version Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * add param tuning experiment in python * Fix Azure Pipeline PyPI package builds by including VERSION.txt in source distribution (#7808) * Initial plan * Fix Azure Pipeline PyPI package builds by including VERSION.txt in source distribution Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Update nightly.yaml to match release.yml NuGet tool installer changes (#7810) * Initial plan * Update nightly.yaml to match release.yml NuGet tool installer changes 
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Attempt at adding the README to the NuGet package (#7807) * Attempt at adding README to NuGet package * Forgot to enable publishing * add resources Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * remove resources directive again Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * Document how to use system-installed Z3 with CMake projects (#7809) * Initial plan * Add documentation for using system-installed Z3 with CMake Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Fix Julia bindings linker errors on Windows MSVC (#7794) * Initial plan * Fix Julia bindings linker errors on Windows MSVC Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Complete Julia bindings fix validation and testing Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * Fix Julia bindings linker errors on Windows MSVC Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * add print for version file Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * add more logging to setup.py Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * try different dirs Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * try src_dir_repo Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * try other dir Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * 
remove extra characters Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * more output Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * print dirs Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * copy VERSION from SRC_DIR Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * Move VERSION.txt to scripts directory and update all references (#7811) * Initial plan * Move VERSION.txt to scripts/ and update all references Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> * clean up a little of the handling of VERSION.txt Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * add implementation and toggleable param for splitting frugal + choosing deepest cubes only * remove priority queue for top-k lits and replace with simple linear scan. the PQ implementation backend still remains in case we want to switch back * Add new configurations for SMT parallel settings * Bugfix: post-build sanity check when an old version of ocaml-z3 is installed (#7815) * fix: add generating META for ocamlfind. * Patch macos. We need to keep the `@rpath` and use environment var to enable the test because we need to leave it to be fixed by package managers. * Trigger CI. * Debug. * Debug. * Debug. * Debug. * Debug. * Debug. * Hacky fix for ocaml building warning. * Fix typo and rename variables. * Fix cmake for ocaml test, using local libz3 explicit. * Rename configuration from 'shareconflicts' to 'depthsplitting' * Fix configuration for depth splitting in notes * rename variables * remove double tweak versioning Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> * attempting to add backbone code, it does not work. still debugging the error: ASSERTION VIOLATION File: /home/t-ilshapiro/z3/src/ast/ast.cpp Line: 388 UNEXPECTED CODE WAS REACHED. 
I left a comment on the line where it's crashing * depth splitting now applies to greedy+frugal unless specified otherwise * debug the backbone crash (it was references not being counted) * iterative deepening experiment (no PQ yet). the hardness heuristic is still naive! * fix iterative deepening bug: unsolved cube needs to get re-enqueued even if we don't split it further * debug iterative deepening some more and add first attempt at PQ (untested) * fix some bugs and the PQ approach is working for now. the depth sets approach is actually unsound, but I am going to focus on the PQ approach for now since it has more potential for SAT problems with the right hardness metric * add new attempt at hardness function * attempt to add different hardness functions including heule schur and march, need to re-examine/debug/evaluate * implement march and heule schur hardness functions based on sat_lookahead.cpp implementations. they seem to be buggy, need to revisit. also set up experimental params for running on polytest * add a lot of debug prints that need to be removed. some bugs are resolved but others remain * debug in progress * remove the incorrect preselection functions for march and heule-schur. update explicit-hardness with bugfixes. now it works but i am not sure there is a good perf increase based on my handpicked examples. i tried several variations of hardness ratios as you can see commented out. there are debug prints still commented out. also return_cubes now takes in a single cube instead of a list C_worker to align with the single-cube hardness/should_split metrics, it doesn't change anything bc we only pass in 1 cube to begin with * debug a couple of things and change the hardness function * tree version in progress * cube tree data structure version for sharing maximal solver context. 
it compiles but segfaults immediately so it needs a lot of debugging --------- Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com> Signed-off-by: Lev Nachmanson <levnach@Levs-MacBook-Pro.local> Signed-off-by: Lev Nachmanson <levnach@hotmail.com> Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: Nikolaj Bjorner <nbjorner@microsoft.com> Co-authored-by: humnrdble <83878671+humnrdble@users.noreply.github.com> Co-authored-by: Nuno Lopes <nuno.lopes@tecnico.ulisboa.pt> Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com> Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com> Co-authored-by: Lev Nachmanson <levnach@hotmail.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Solal Pirelli <SolalPirelli@users.noreply.github.com> Co-authored-by: Shiwei Weng 翁士伟 <arbipher@gmail.com>
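The log above keeps contrasting a "greedy" and a "frugal" cube-splitting strategy in `return_cubes`. As a rough, dependency-free sketch of the distinction (plain Python lists stand in for Z3 cubes; the function names are illustrative, not Z3's):

```python
def split_greedy(pool, atom):
    """Greedy: split every cube in the shared pool on the new atom,
    doubling the pool (2^k growth over k split atoms)."""
    out = []
    for cube in pool:
        out.append(cube + [atom])        # branch where atom is assumed true
        out.append(cube + ["!" + atom])  # branch where atom is assumed false
    return out

def split_frugal(pool, current, atom):
    """Frugal: leave the existing pool untouched and split only the cube
    the worker just processed, so the pool grows by a constant."""
    return pool + [current + [atom], current + ["!" + atom]]
```

The commits about "dynamically switching from greedy to frugal" amount to starting with the first policy and falling back to the second once the shared pool gets large.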
This commit is contained in:
parent
2a272e1507
commit
09cc4c4f5f
4 changed files with 447 additions and 151 deletions
@@ -18,4 +18,5 @@ def_module_params('smt_parallel',
         ('iterative_deepening', BOOL, False, 'deepen cubes based on iterative hardness cutoff heuristic'),
         ('beam_search', BOOL, False, 'use beam search with PQ to rank cubes given to threads'),
         ('explicit_hardness', BOOL, False, 'use explicit hardness metric for cube'),
+        ('cubetree', BOOL, False, 'use cube tree data structure for storing cubes'),
     ))
177 src/smt/CubeTree.h Normal file
@@ -0,0 +1,177 @@
#include "ast/ast_translation.h"

#include <vector>
#include <functional> // std::function (used by get_next_cube)
#include <cstdlib>    // rand()
#include <ctime>

// forward declare
struct CubeNode;

typedef expr_ref_vector Cube; // shorthand

struct CubeNode {
    Cube cube;
    CubeNode* parent;
    std::vector<CubeNode*> children;
    bool active = true;

    CubeNode(const Cube& c, CubeNode* p = 0)
        : cube(c), parent(p) {}

    bool is_leaf() const { return children.empty(); }
};

class CubeTree {
public:
    CubeTree(ast_manager& m) {
        Cube root_cube(m); // empty cube
        root = nullptr;
        std::srand((unsigned)std::time(0)); // seed the pseudo-random number generator used by std::rand()
    }

    ~CubeTree() {
        clear();
    }

    void clear() {
        delete_leaf(root);
        root = nullptr;
    }

    bool empty() const {
        return root == nullptr;
    }

    CubeNode* get_root() { return root; } // if root is nullptr, tree is empty

    void add_children(CubeNode* parent,
                      const Cube& left_cube,
                      const Cube& right_cube) {
        CubeNode* left = new CubeNode(left_cube, parent);
        CubeNode* right = new CubeNode(right_cube, parent);
        parent->children.push_back(left);
        parent->children.push_back(right);
    }

    // Add a new node under an existing parent
    void add_node(CubeNode* node, CubeNode* parent) {
        if (!node) return;

        // If no parent is specified, assume it's the root
        if (!parent) {
            root = node;
            node->parent = nullptr;
        } else {
            parent->children.push_back(node);
            node->parent = parent;
        }
    }

    // remove node and propagate upward if parent becomes leaf
    // return pointer to last removed ancestor (or nullptr if none) so we can select one of its siblings as the next cube
    CubeNode* remove_node_and_propagate(CubeNode* node) {
        if (!node || node == root || node->children.empty()) return nullptr; // error or root or not a leaf

        CubeNode* parent = node->parent;
        CubeNode* last_removed = node;

        // erase node from parent's children
        for (size_t i = 0; i < parent->children.size(); ++i) {
            if (parent->children[i] == node) {
                delete_leaf(node);
                parent->children.erase(parent->children.begin() + i);
                break;
            }
        }

        // propagate upward if parent became leaf
        while (parent && parent != root && parent->is_leaf()) {
            SASSERT(parent->active); // parent only just became a leaf node -- no thread should be working on it! i.e. must NOT be inactive!
            CubeNode* gp = parent->parent;
            for (size_t i = 0; i < gp->children.size(); ++i) {
                if (gp->children[i] == parent) {
                    last_removed = parent; // track the last ancestor we delete
                    delete parent;
                    gp->children.erase(gp->children.begin() + i);
                    break;
                }
            }
            parent = gp;
        }

        return last_removed;
    }

    // get closest cube to current by getting a random sibling of current (if current was UNSAT and we removed it from the tree)
    // or by descending randomly to a leaf (if we split the current node) to get the newest cube split from the current
    // we descend randomly to a leaf instead of just taking a random child because it's possible another thread made more descendants
    CubeNode* get_next_cube(CubeNode* current) {
        if (!current) return nullptr;

        // must be a leaf
        SASSERT(current->is_leaf());

        // lambda to find any active leaf in the subtree (explore all branches)
        std::function<CubeNode*(CubeNode*)> find_active_leaf;
        find_active_leaf = [&](CubeNode* node) -> CubeNode* {
            if (!node || !node->active) return nullptr;
            if (node->is_leaf()) return node;

            for (CubeNode* child : node->children) {
                CubeNode* leaf = find_active_leaf(child);
                if (leaf) return leaf; // return first found active leaf
            }
            return nullptr;
        };

        CubeNode* node = current;

        while (node->parent) {
            CubeNode* parent = node->parent;

            // gather active siblings
            std::vector<CubeNode*> siblings;
            for (CubeNode* s : parent->children) {
                if (s != node && s->active)
                    siblings.push_back(s);
            }

            if (!siblings.empty()) {
                // try each sibling until we find an active leaf
                for (CubeNode* sibling : siblings) {
                    CubeNode* leaf = find_active_leaf(sibling);
                    if (leaf) return leaf;
                }
            }

            // no active leaf among siblings → climb up
            node = parent;
        }

        // nothing found
        return nullptr;
    }

private:
    CubeNode* root;

    void delete_leaf(CubeNode* node) {
        if (!node || !node->active) return;

        // must be a leaf
        SASSERT(node->children.empty());

        // detach from parent
        if (node->parent) {
            auto& siblings = node->parent->children;
            for (auto it = siblings.begin(); it != siblings.end(); ++it) {
                if (*it == node) {
                    siblings.erase(it);
                    break;
                }
            }
        }

        delete node;
    }
};
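`CubeTree.h` above depends on Z3 internals (`expr_ref_vector`, `ast_manager`, `SASSERT`), so it cannot be exercised standalone. The following is a simplified, self-contained sketch of the sibling-walk policy that `get_next_cube` implements, with lists of ints standing in for cubes:

```python
class Node:
    """Minimal stand-in for CubeNode: a cube, parent/children links,
    and an `active` flag (inactive = claimed or exhausted)."""
    def __init__(self, cube, parent=None):
        self.cube = cube
        self.parent = parent
        self.children = []
        self.active = True

def add_children(parent, left_cube, right_cube):
    # split a leaf into two child cubes, as CubeTree::add_children does
    parent.children = [Node(left_cube, parent), Node(right_cube, parent)]

def find_active_leaf(node):
    # depth-first search for any active leaf in the subtree
    if node is None or not node.active:
        return None
    if not node.children:
        return node
    for child in node.children:
        leaf = find_active_leaf(child)
        if leaf:
            return leaf
    return None

def get_next_cube(current):
    """Climb from the current leaf; return the first active leaf found
    under any sibling subtree, else keep climbing toward the root."""
    node = current
    while node.parent:
        for sib in node.parent.children:
            if sib is not node and sib.active:
                leaf = find_active_leaf(sib)
                if leaf:
                    return leaf
        node = node.parent
    return None
```

The policy matches the comments in the header: prefer a leaf near the cube just processed (a sibling subtree, where another thread may already have split further), and only climb toward the root when no such leaf exists.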
@@ -66,32 +66,29 @@ namespace smt {
             auto& clause = *cp;
             unsigned n_false = 0;
             bool satisfied = false;
+            unsigned sz = 0;
             // LOG_WORKER(1, " Clause has num literals: " << clause.get_num_literals() << "\n");
             for (literal l : clause) {
                 // LOG_WORKER(1, " Processing literal " << l << " with val: " << ctx->get_assignment(l) << "\n");
                 bool_var v = l.var();
-                lbool val = ctx->get_assignment(l);
-                if (val == l_undef) continue;
-                unsigned lvl = ctx->get_assign_level(l);
                 // Only include assignments made after the base scope level (i.e. those made by specifically assuming the cube)
                 // LOG_WORKER(1, " Literal " << l << " at level " << lvl << " is below scope level " << ctx->get_search_level() << ": " << bool(lvl < ctx->get_search_level()) << "\n");
+                unsigned lvl = ctx->get_assign_level(l);
                 if (lvl < ctx->get_search_level()) continue;
+                lbool val = ctx->get_assignment(l);
                 if (val == l_true) { satisfied = true; break; }
                 if (val == l_false) n_false++;
+                sz++;
             }
-            if (satisfied || n_false == 0) continue; // meaning, the clause is true (at least 1 true atom), or we had no true OR false atoms so the whole thing is undefined
-            unsigned sz = clause.get_num_literals();
+            if (satisfied || sz == 0) continue;
             // LOG_WORKER(1, " Clause of size " << sz << " has " << n_false << " false literals in cube. Is satisfied: " << satisfied << "\n");
-            // double reduction_ratio = static_cast<double>(sz) / n_false; // n_false/sz -> higher value=easier //std::max(1u, reduction);
-            double reduction_ratio = pow(0.5, sz) * (1 / n_false);
+            // double reduction_ratio = static_cast<double>(sz) / std::max(1u, sz - n_false); // n_false/sz -> higher value=easier //std::max(1u, reduction);
+            double reduction_ratio = pow(0.5, sz) * (1.0 / std::max(1u, sz - n_false));
             // LOG_WORKER(1, " Clause contributes " << reduction_ratio << " to hardness metric. n_false: " << n_false << "\n");
             overall_hardness += reduction_ratio;
         }
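The rewritten hardness metric above scores each clause as `pow(0.5, sz) * (1.0 / std::max(1u, sz - n_false))`: short clauses weigh more (the `0.5^sz` factor), and clauses that are nearly falsified under the cube weigh more (few non-false literals remain). A small numeric model of just this formula (not Z3 code; `sz` and `n_false` are assumed to be precomputed per clause, as in the loop above):

```python
def clause_contribution(sz, n_false, satisfied):
    # satisfied clauses, or clauses with no literals at the cube's level, add nothing
    if satisfied or sz == 0:
        return 0.0
    # shorter clauses and nearly-falsified clauses contribute more hardness
    return 0.5 ** sz * (1.0 / max(1, sz - n_false))

def explicit_hardness(clauses):
    """clauses: iterable of (sz, n_false, satisfied) triples."""
    return sum(clause_contribution(*c) for c in clauses)
```

For example, a binary clause with one false literal contributes 0.25, while a fully false ternary clause contributes 0.125, because the `max(1, ...)` guard clamps the denominator when every counted literal is false.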
@@ -101,105 +98,116 @@ namespace smt {

     void parallel::worker::run() {
         while (m.inc()) { // inc: increase the limit and check if it is canceled, vs m.limit().is_canceled() is readonly. the .limit() is also not necessary (m.inc() etc provides a convenience wrapper)
-            vector<expr_ref_vector> cubes;
-            b.get_cubes(m_g2l, cubes);
-            if (cubes.empty())
-                return;
+            expr_ref_vector cube(m);
+            CubeNode* cube_node;
+            if (m_config.m_cubetree) {
+                cube_node = b.get_cube_from_tree(m_g2l, m_curr_cube_node);
+                m_curr_cube_node = cube_node; // store the current cube so we know how to get the next closest cube from the tree
+                cube = (expr_ref_vector)(cube_node->cube);
+                IF_VERBOSE(1, verbose_stream() << " Worker " << id << " got cube of size " << cube.size() << " from CubeTree\n");
+            }
+            else {
+                cube = b.get_cube(m_g2l);
+            }
             collect_shared_clauses(m_g2l);
-            for (auto& cube : cubes) {
-                if (!m.inc()) {
-                    b.set_exception("context cancelled");
-                    return;
-                }
+            if (!m.inc()) {
+                b.set_exception("context cancelled");
+                return;
+            }
             LOG_WORKER(1, " CUBE SIZE IN MAIN LOOP: " << cube.size() << "\n");
             lbool r = check_cube(cube);
             if (m.limit().is_canceled()) {
                 LOG_WORKER(1, " cancelled\n");
                 return;
             }
             switch (r) {
             case l_undef: {
                 LOG_WORKER(1, " found undef cube\n");
                 // return unprocessed cubes to the batch manager
                 // add a split literal to the batch manager.
                 // optionally process other cubes and delay sending back unprocessed cubes to batch manager.
                 if (!m_config.m_never_cube) {
-                    // vector<expr_ref_vector> returned_cubes;
-                    // returned_cubes.push_back(cube);
                     auto split_atoms = get_split_atoms();
                     // let's automatically do iterative deepening for beam search.
                     // when using more advanced metrics like explicit_hardness etc: need one of two things: (1) split if greater than OR EQUAL TO the avg hardness, or (2) enter this branch only when cube.size() > 0, or else we get stuck in a loop of never deepening.
                     if (m_config.m_iterative_deepening || m_config.m_beam_search) {
                         LOG_WORKER(1, " applying iterative deepening\n");
                         double cube_hardness;
                         if (m_config.m_explicit_hardness) {
                             cube_hardness = explicit_hardness(cube);
                             // LOG_WORKER(1, " explicit hardness: " << cube_hardness << "\n");
                         } else { // default to naive hardness
                             cube_hardness = naive_hardness();
                             // LOG_WORKER(1, " naive hardness: " << cube_hardness << "\n");
                         }
                         const double avg_hardness = b.update_avg_cube_hardness(cube_hardness);
                         const double factor = 1; // can tune for multiple of avg hardness later
                         bool should_split = cube_hardness >= avg_hardness * factor;
|
|
||||||
LOG_WORKER(1, " cube hardness: " << cube_hardness << " avg: " << avg_hardness << " avg*factor: " << avg_hardness * factor << " should-split: " << should_split << "\n");
|
|
||||||
// we still need to call return_cubes, even if we don't split, since we need to re-enqueue the current unsolved cube to the batch manager!
|
|
||||||
// should_split tells return_cubes whether to further split the unsolved cube.
|
|
||||||
b.return_cubes(m_l2g, cube, split_atoms, should_split);
|
|
||||||
} else {
|
|
||||||
b.return_cubes(m_l2g, cube, split_atoms);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
|
const double avg_hardness = b.update_avg_cube_hardness(cube_hardness);
|
||||||
|
const double factor = 1.5; // can tune for multiple of avg hardness later
|
||||||
|
bool should_split = cube_hardness >= avg_hardness * factor; // must be >= otherwise we never deepen
|
||||||
|
|
||||||
|
LOG_WORKER(1, " cube hardness: " << cube_hardness << " avg: " << avg_hardness << " avg*factor: " << avg_hardness * factor << " should-split: " << should_split << "\n");
|
||||||
|
// we still need to call return_cubes, even if we don't split, since we need to re-enqueue the current unsolved cube to the batch manager!
|
||||||
|
// should_split tells return_cubes whether to further split the unsolved cube.
|
||||||
|
b.return_cubes(m_l2g, cube, split_atoms, should_split, cube_hardness);
|
||||||
|
} else if (m_config.m_cubetree) {
|
||||||
|
b.return_cubes_tree(m_l2g, cube_node, split_atoms);
|
||||||
|
} else {
|
||||||
|
b.return_cubes(m_l2g, cube, split_atoms);
|
||||||
}
|
}
|
||||||
if (m_config.m_backbone_detection) {
|
|
||||||
expr_ref_vector backbone_candidates = find_backbone_candidates();
|
|
||||||
expr_ref_vector backbones = get_backbones_from_candidates(backbone_candidates);
|
|
||||||
if (!backbones.empty()) { // QUESTION: how do we avoid splitting on backbones????
|
|
||||||
for (expr* bb : backbones) {
|
|
||||||
ctx->assert_expr(bb); // local pruning
|
|
||||||
b.collect_clause(m_l2g, id, bb); // share globally // QUESTION: gatekeep this behind share_units param????
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
update_max_thread_conflicts();
|
|
||||||
break;
|
|
||||||
}
|
}
|
||||||
case l_true: {
|
if (m_config.m_backbone_detection) {
|
||||||
LOG_WORKER(1, " found sat cube\n");
|
expr_ref_vector backbone_candidates = find_backbone_candidates();
|
||||||
model_ref mdl;
|
expr_ref_vector backbones = get_backbones_from_candidates(backbone_candidates);
|
||||||
ctx->get_model(mdl);
|
if (!backbones.empty()) { // QUESTION: how do we avoid splitting on backbones????
|
||||||
b.set_sat(m_l2g, *mdl);
|
for (expr* bb : backbones) {
|
||||||
|
ctx->assert_expr(bb); // local pruning
|
||||||
|
b.collect_clause(m_l2g, id, bb); // share globally // QUESTION: gatekeep this behind share_units param????
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
update_max_thread_conflicts();
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
case l_true: {
|
||||||
|
LOG_WORKER(1, " found sat cube\n");
|
||||||
|
model_ref mdl;
|
||||||
|
ctx->get_model(mdl);
|
||||||
|
b.set_sat(m_l2g, *mdl);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
case l_false: {
|
||||||
|
// if unsat core only contains (external) assumptions (i.e. all the unsat core are asms), then unsat and return as this does NOT depend on cubes
|
||||||
|
// otherwise, extract lemmas that can be shared (units (and unsat core?)).
|
||||||
|
// share with batch manager.
|
||||||
|
// process next cube.
|
||||||
|
expr_ref_vector const& unsat_core = ctx->unsat_core();
|
||||||
|
LOG_WORKER(2, " unsat core:\n"; for (auto c : unsat_core) verbose_stream() << mk_bounded_pp(c, m, 3) << "\n");
|
||||||
|
// If the unsat core only contains assumptions,
|
||||||
|
// unsatisfiability does not depend on the current cube and the entire problem is unsat.
|
||||||
|
if (all_of(unsat_core, [&](expr* e) { return asms.contains(e); })) {
|
||||||
|
LOG_WORKER(1, " determined formula unsat\n");
|
||||||
|
b.set_unsat(m_l2g, unsat_core);
|
||||||
return;
|
return;
|
||||||
}
|
}
|
||||||
case l_false: {
|
for (expr* e : unsat_core)
|
||||||
// if unsat core only contains (external) assumptions (i.e. all the unsat core are asms), then unsat and return as this does NOT depend on cubes
|
if (asms.contains(e))
|
||||||
// otherwise, extract lemmas that can be shared (units (and unsat core?)).
|
b.report_assumption_used(m_l2g, e); // report assumptions used in unsat core, so they can be used in final core
|
||||||
// share with batch manager.
|
|
||||||
// process next cube.
|
|
||||||
expr_ref_vector const& unsat_core = ctx->unsat_core();
|
|
||||||
LOG_WORKER(2, " unsat core:\n"; for (auto c : unsat_core) verbose_stream() << mk_bounded_pp(c, m, 3) << "\n");
|
|
||||||
// If the unsat core only contains assumptions,
|
|
||||||
// unsatisfiability does not depend on the current cube and the entire problem is unsat.
|
|
||||||
if (all_of(unsat_core, [&](expr* e) { return asms.contains(e); })) {
|
|
||||||
LOG_WORKER(1, " determined formula unsat\n");
|
|
||||||
b.set_unsat(m_l2g, unsat_core);
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
for (expr* e : unsat_core)
|
|
||||||
if (asms.contains(e))
|
|
||||||
b.report_assumption_used(m_l2g, e); // report assumptions used in unsat core, so they can be used in final core
|
|
||||||
|
|
||||||
LOG_WORKER(1, " found unsat cube\n");
|
LOG_WORKER(1, " found unsat cube\n");
|
||||||
if (m_config.m_share_conflicts)
|
if (m_config.m_share_conflicts)
|
||||||
b.collect_clause(m_l2g, id, mk_not(mk_and(unsat_core)));
|
b.collect_clause(m_l2g, id, mk_not(mk_and(unsat_core)));
|
||||||
break;
|
|
||||||
|
// prune the tree now that we know the cube is unsat
|
||||||
|
if (m_config.m_cubetree) {
|
||||||
|
b.remove_node_and_propagate(m_curr_cube_node);
|
||||||
}
|
}
|
||||||
|
break;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
if (m_config.m_share_units)
|
if (m_config.m_share_units)
|
||||||
|
@@ -235,6 +243,7 @@ namespace smt {
        m_config.m_iterative_deepening = pp.iterative_deepening();
        m_config.m_beam_search = pp.beam_search();
        m_config.m_explicit_hardness = pp.explicit_hardness();
        m_config.m_cubetree = pp.cubetree();

        // don't share initial units
        ctx->pop_to_base_lvl();
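The iterative-deepening gate configured above (split a cube only when its hardness reaches a multiple of the running average) can be sketched in isolation. This is a hypothetical standalone model, not the patch's actual types: `hardness_tracker` and its members are invented names; only the running-average update and the `>=` comparison against `avg * factor` mirror the logic in the worker loop.

```cpp
#include <cassert>

// Hypothetical sketch: a worker reports the hardness of each finished cube and
// the batch manager decides whether to deepen (split) based on a running average.
struct hardness_tracker {
    double avg = 0.0;     // running average hardness of solved cubes
    unsigned solved = 0;  // number of solved cubes folded into avg

    // Fold one more solved cube into the average and return the updated value.
    double update(double hardness) {
        avg = (avg * solved + hardness) / (solved + 1);
        ++solved;
        return avg;
    }

    // Split only cubes at least `factor` times as hard as the average.
    // ">=" matters: with ">" a cube whose hardness equals the average
    // (e.g. the very first cube) would never trigger deepening.
    bool should_split(double hardness, double factor = 1.5) const {
        return hardness >= avg * factor;
    }
};
```

The same `>=`-versus-`>` point appears as a comment in the patch itself ("must be >= otherwise we never deepen").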
@@ -368,58 +377,89 @@ namespace smt {
        return r;
    }

    CubeNode* parallel::batch_manager::get_cube_from_tree(ast_translation& g2l, CubeNode* prev_cube) {
        std::scoped_lock lock(mux);
        expr_ref_vector l_cube(g2l.to());
        SASSERT(m_config.m_cubetree);

        if (m_cubes_tree.empty()) {
            // special initialization: the first cube is empty, have the worker work on an empty cube.
            IF_VERBOSE(1, verbose_stream() << "Batch manager giving out empty cube.\n");
            CubeNode* new_cube_node = new CubeNode(l_cube, nullptr);
            m_cubes_tree.add_node(new_cube_node, nullptr);
            return new_cube_node; // return empty cube
        }

        // get a cube from the CubeTree
        SASSERT(!m_cubes_tree.empty());
        CubeNode* next_cube_node = m_cubes_tree.get_next_cube(prev_cube); // the cube in the tree closest to prev_cube (i.e. with the longest common prefix)
        expr_ref_vector& next_cube = next_cube_node->cube;
        IF_VERBOSE(1, verbose_stream() << "Batch manager giving out cube from CubeTree.\n");

        for (auto& e : next_cube) {
            l_cube.push_back(g2l(e));
        }
        // NOTE: l_cube (the g2l translation) is not returned; the caller currently reads cube_node->cube directly.

        next_cube_node->active = false; // mark the cube as inactive (i.e. being processed by a worker)

        return next_cube_node;
    }

    // FOR ALL NON-TREE VERSIONS
    expr_ref_vector parallel::batch_manager::get_cube(ast_translation& g2l) {
        std::scoped_lock lock(mux);
        expr_ref_vector l_cube(g2l.to());
        if (m_cubes.size() == 1 && m_cubes[0].empty()
            || m_config.m_beam_search && m_cubes_pq.empty()
            || m_config.m_depth_splitting_only && m_cubes_depth_sets.empty()
            || m_config.m_cubetree && m_cubes_tree.empty()) {
            // special initialization: the first cube is empty, have the worker work on an empty cube.
            IF_VERBOSE(1, verbose_stream() << "Batch manager giving out empty cube.\n");
            return l_cube; // return empty cube
        }

        if (m_config.m_depth_splitting_only || m_config.m_iterative_deepening) {
            // get a random cube from the deepest set of cubes
            auto& deepest_cubes = m_cubes_depth_sets.rbegin()->second;
            unsigned idx = rand() % deepest_cubes.size();
            auto& cube = deepest_cubes[idx];

            for (auto& e : cube) {
                l_cube.push_back(g2l(e));
            }

            deepest_cubes.erase(deepest_cubes.begin() + idx); // remove the cube from the set
            if (deepest_cubes.empty())
                m_cubes_depth_sets.erase(m_cubes_depth_sets.size() - 1);
        }
        else if (m_config.m_beam_search) {
            // get the highest ranked cube
            SASSERT(!m_cubes_pq.empty());
            IF_VERBOSE(1, verbose_stream() << "Batch manager giving out cube with score " << m_cubes_pq.top().score << ".\n");

            ScoredCube const& scored_cube = m_cubes_pq.top();
            auto& cube = scored_cube.cube;
            for (auto& e : cube) {
                l_cube.push_back(g2l(e));
            }

            m_cubes_pq.pop();
        }
        else {
            IF_VERBOSE(1, verbose_stream() << "Batch manager giving out cube.\n");
            auto& cube = m_cubes.back();
            // print out the cubes in m_cubes
            for (auto& e : m_cubes) {
                IF_VERBOSE(1, verbose_stream() << "Cube: " << e << "\n");
            }
            for (auto& e : cube) {
                l_cube.push_back(g2l(e));
            }
            m_cubes.pop_back();
        }

        return l_cube;
    }
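`get_next_cube` is only called above, not defined in this patch. A plausible sketch of its "closest cube" policy follows, under the stated assumption (from the comment) that proximity means the longest common prefix of literals, so a worker reuses as much of its solver state as possible. `cube`, `common_prefix`, and `next_cube` are illustrative names, and literals are modeled as strings rather than Z3 expressions.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// A cube is an ordered sequence of literals; "!a" stands for the negation of "a".
using cube = std::vector<std::string>;

// Length of the common prefix of two cubes.
static std::size_t common_prefix(cube const& a, cube const& b) {
    std::size_t n = 0;
    while (n < a.size() && n < b.size() && a[n] == b[n])
        ++n;
    return n;
}

// Index of the pending cube sharing the longest common prefix with the cube
// just processed. The caller ensures `pending` is non-empty.
static std::size_t next_cube(std::vector<cube> const& pending, cube const& prev) {
    std::size_t best = 0, best_lcp = 0;
    for (std::size_t i = 0; i < pending.size(); ++i) {
        std::size_t lcp = common_prefix(pending[i], prev);
        if (lcp > best_lcp) { best_lcp = lcp; best = i; }
    }
    return best;
}
```

A tree structure like CubeTree can answer the same query without scanning all pending cubes, by walking up from the previous node to the nearest ancestor with an unexplored branch.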
    void parallel::batch_manager::set_sat(ast_translation& l2g, model& m) {
@@ -548,6 +588,7 @@ namespace smt {
    ------------------------------------------------------------------------------------------------------------------------------------------------------------
    Final thought (do this!): use the greedy strategy, chosen by a policy, while C_batch, A_batch, and A_worker are "small"; switch to the frugal strategy after reaching a size limit.
    */
    // FOR ALL NON-TREE VERSIONS
    void parallel::batch_manager::return_cubes(ast_translation& l2g, expr_ref_vector const& c, expr_ref_vector const& A_worker, const bool should_split, const double hardness) {
        // SASSERT(!m_config.never_cube());
        auto atom_in_cube = [&](expr_ref_vector const& cube, expr* atom) {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
void parallel::batch_manager::return_cubes_tree(ast_translation& l2g, CubeNode* cube_node, expr_ref_vector const& A_worker) {
|
||||||
|
expr_ref_vector const& c = cube_node->cube;
|
||||||
|
|
||||||
|
auto atom_in_cube = [&](expr_ref_vector const& cube, expr* atom) {
|
||||||
|
return any_of(cube, [&](expr* e) { return e == atom || (m.is_not(e, e) && e == atom); });
|
||||||
|
};
|
||||||
|
|
||||||
|
// apply the frugal strategy to ALL incoming worker cubes, but save in the PQ data structure for beam search
|
||||||
|
auto add_split_atom_pq = [&](expr* atom) {
|
||||||
|
// IF_VERBOSE(1, verbose_stream() << " Adding split atom to PQ: " << mk_bounded_pp(atom, m, 3) << "\n");
|
||||||
|
expr_ref_vector g_cube(l2g.to());
|
||||||
|
for (auto& atom : c)
|
||||||
|
g_cube.push_back(l2g(atom));
|
||||||
|
|
||||||
|
// Positive atom branch
|
||||||
|
expr_ref_vector cube_pos = g_cube;
|
||||||
|
cube_pos.push_back(atom);
|
||||||
|
|
||||||
|
// Negative atom branch
|
||||||
|
expr_ref_vector cube_neg = g_cube;
|
||||||
|
cube_neg.push_back(m.mk_not(atom));
|
||||||
|
|
||||||
|
m_cubes_tree.add_children(cube_node, cube_pos, cube_neg);
|
||||||
|
|
||||||
|
// IF_VERBOSE(1, verbose_stream() << " PQ size now: " << m_cubes_pq.size() << ". PQ is empty: " << m_cubes_pq.empty() << "\n");
|
||||||
|
|
||||||
|
m_stats.m_num_cubes += 2;
|
||||||
|
m_stats.m_max_cube_depth = std::max(m_stats.m_max_cube_depth, g_cube.size() + 1);
|
||||||
|
};
|
||||||
|
|
||||||
|
std::scoped_lock lock(mux);
|
||||||
|
expr_ref_vector g_cube(l2g.to());
|
||||||
|
|
||||||
|
for (auto& atom : c)
|
||||||
|
g_cube.push_back(l2g(atom));
|
||||||
|
|
||||||
|
if (c.size() >= m_config.m_max_cube_depth) {
|
||||||
|
// IF_VERBOSE(1, verbose_stream() << " Skipping split of cube at max depth " << m_config.m_max_cube_depth << "\n";);
|
||||||
|
cube_node->active = true; // mark the cube as active again since we didn't split it
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- Frugal approach: only process NEW worker cubes with NEW atoms ---
|
||||||
|
for (unsigned i = 0; i < A_worker.size(); ++i) {
|
||||||
|
expr_ref g_atom(l2g(A_worker[i]), l2g.to());
|
||||||
|
if (!m_split_atoms.contains(g_atom))
|
||||||
|
m_split_atoms.push_back(g_atom);
|
||||||
|
|
||||||
|
IF_VERBOSE(1, verbose_stream() << " splitting worker cubes on new atom for PQ " << mk_bounded_pp(g_atom, m, 3) << "\n");
|
||||||
|
add_split_atom_pq(g_atom);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
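The frugal split above turns one unsolved cube plus one new split atom into exactly two children: the cube extended with the atom, and the cube extended with its negation. A minimal standalone sketch, with invented names (`split_on_atom`), atoms as strings, and `"!"` standing in for `m.mk_not`:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// A cube is an ordered sequence of literals.
using cube = std::vector<std::string>;

// Split a cube on one atom: the positive branch gets the atom,
// the negative branch gets its negation. Depth grows by exactly one.
static std::pair<cube, cube> split_on_atom(cube const& c, std::string const& atom) {
    cube pos = c, neg = c;
    pos.push_back(atom);        // positive branch
    neg.push_back("!" + atom);  // negative branch
    return {pos, neg};
}
```

Since each split doubles only the one cube being returned (rather than crossing every queued cube with every new atom), the number of stored cubes grows linearly in the number of splits performed, which is what makes the strategy "frugal".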
    expr_ref_vector parallel::worker::find_backbone_candidates() {
        expr_ref_vector backbone_candidates(m);
        // Find backbone candidates based on the current state of the worker
@@ -886,14 +980,16 @@ namespace smt {
    void parallel::batch_manager::initialize() {
        m_state = state::is_running;

        if (m_config.m_depth_splitting_only || m_config.m_iterative_deepening) {
            m_cubes_depth_sets.clear();
        }
        else if (m_config.m_beam_search) {
            m_cubes_pq = CubePQ();
        }
        else if (m_config.m_cubetree) {
            m_cubes_tree = CubeTree(m);
        }
        else {
            m_cubes.reset();
            m_cubes.push_back(expr_ref_vector(m)); // push empty cube
        }

        m_split_atoms.reset();
@@ -904,6 +1000,7 @@ namespace smt {
        m_config.m_depth_splitting_only = sp.depth_splitting_only();
        m_config.m_iterative_deepening = sp.iterative_deepening();
        m_config.m_beam_search = sp.beam_search();
        m_config.m_cubetree = sp.cubetree();
    }

    void parallel::batch_manager::collect_statistics(::statistics& st) const {
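The `CubePQ` reset above backs the beam-search mode: cubes carry a score and the batch manager hands out the highest-scoring one first. A hypothetical sketch of that shape using `std::priority_queue`; `scored_cube`, `by_score`, and `cube_pq` are illustrative stand-ins for the patch's `ScoredCube`/`CubePQ`, and the scoring function itself is not shown in this diff.

```cpp
#include <cassert>
#include <queue>
#include <string>
#include <vector>

// A cube with a heuristic score (higher = more promising to explore next).
struct scored_cube {
    double score;
    std::vector<std::string> cube;
};

// Comparator for a max-heap: the highest score sits on top.
struct by_score {
    bool operator()(scored_cube const& a, scored_cube const& b) const {
        return a.score < b.score;
    }
};

using cube_pq = std::priority_queue<scored_cube, std::vector<scored_cube>, by_score>;
```

Note that `std::priority_queue::top` returns a const reference, which is why `get_cube` copies the literals out (translating each one) before calling `pop`.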
@@ -19,6 +19,8 @@ Revision History:
#pragma once

#include "smt/smt_context.h"
#include "smt/CubeTree.h"

#include <thread>
#include <map>
#include <queue>
@@ -55,6 +57,7 @@ namespace smt {
            bool m_depth_splitting_only = false;
            bool m_iterative_deepening = false;
            bool m_beam_search = false;
            bool m_cubetree = false;
        };
        struct stats {
            unsigned m_max_cube_depth = 0;
@@ -69,6 +72,7 @@ namespace smt {
        stats m_stats;
        expr_ref_vector m_split_atoms; // atoms to split on
        vector<expr_ref_vector> m_cubes;
        CubeTree m_cubes_tree;

        struct ScoredCube {
            double score;
@@ -113,7 +117,7 @@ namespace smt {
        void init_parameters_state();

    public:
        batch_manager(ast_manager& m, parallel& p) : m(m), p(p), m_split_atoms(m), m_cubes_tree(m) { }

        void initialize();
|
@ -123,21 +127,34 @@ namespace smt {
|
||||||
void set_exception(unsigned error_code);
|
void set_exception(unsigned error_code);
|
||||||
|
|
||||||
//
|
//
|
||||||
// worker threads ask the batch manager for a supply of cubes to check.
|
// worker threads ask the batch manager for a cube to check.
|
||||||
// they pass in a translation function from the global context to local context (ast-manager). It is called g2l.
|
// they pass in a translation function from the global context to local context (ast-manager). It is called g2l.
|
||||||
// The batch manager returns a list of cubes to solve.
|
// The batch manager returns the next cube to
|
||||||
//
|
//
|
||||||
void get_cubes(ast_translation& g2l, vector<expr_ref_vector>& cubes);
|
expr_ref_vector get_cube(ast_translation& g2l); // FOR ALL NON-TREE VERSIONS
|
||||||
|
CubeNode* get_cube_from_tree(ast_translation& g2l, CubeNode* prev_cube = nullptr);
|
||||||
|
|
||||||
//
|
//
|
||||||
// worker threads return unprocessed cubes to the batch manager together with split literal candidates.
|
// worker threads return unprocessed cubes to the batch manager together with split literal candidates.
|
||||||
// the batch manager re-enqueues unprocessed cubes and optionally splits them using the split_atoms returned by this and workers.
|
// the batch manager re-enqueues unprocessed cubes and optionally splits them using the split_atoms returned by this and workers.
|
||||||
//
|
//
|
||||||
|
void return_cubes_tree(ast_translation& l2g, CubeNode* cube, expr_ref_vector const& split_atoms);
|
||||||
|
// FOR ALL NON-TREE VERSIONS
|
||||||
void return_cubes(ast_translation& l2g, expr_ref_vector const& cube, expr_ref_vector const& split_atoms, const bool should_split=true, const double hardness=1.0);
|
void return_cubes(ast_translation& l2g, expr_ref_vector const& cube, expr_ref_vector const& split_atoms, const bool should_split=true, const double hardness=1.0);
|
||||||
void report_assumption_used(ast_translation& l2g, expr* assumption);
|
void report_assumption_used(ast_translation& l2g, expr* assumption);
|
||||||
void collect_clause(ast_translation& l2g, unsigned source_worker_id, expr* e);
|
void collect_clause(ast_translation& l2g, unsigned source_worker_id, expr* e);
|
||||||
expr_ref_vector return_shared_clauses(ast_translation& g2l, unsigned& worker_limit, unsigned worker_id);
|
expr_ref_vector return_shared_clauses(ast_translation& g2l, unsigned& worker_limit, unsigned worker_id);
|
||||||
|
|
||||||
|
void remove_node_and_propagate(CubeNode* node) {
|
||||||
|
SASSERT(m_config.m_cubetree);
|
||||||
|
CubeNode* last_removed = m_cubes_tree.remove_node_and_propagate(node);
|
||||||
|
if (last_removed) {
|
||||||
|
IF_VERBOSE(1, verbose_stream() << "Cube tree: removed node and propagated up to depth " << last_removed->cube.size() << "\n");
|
||||||
|
} else {
|
||||||
|
IF_VERBOSE(1, verbose_stream() << "Cube tree: ERROR removing node with no propagation\n");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
double update_avg_cube_hardness(double hardness) {
|
double update_avg_cube_hardness(double hardness) {
|
||||||
IF_VERBOSE(1, verbose_stream() << "Cube hardness: " << hardness << ", previous avg: " << m_avg_cube_hardness << ", solved cubes: " << m_solved_cube_count << "\n";);
|
IF_VERBOSE(1, verbose_stream() << "Cube hardness: " << hardness << ", previous avg: " << m_avg_cube_hardness << ", solved cubes: " << m_solved_cube_count << "\n";);
|
||||||
m_avg_cube_hardness = (m_avg_cube_hardness * m_solved_cube_count + hardness) / (m_solved_cube_count + 1);
|
m_avg_cube_hardness = (m_avg_cube_hardness * m_solved_cube_count + hardness) / (m_solved_cube_count + 1);
|
||||||
|
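`remove_node_and_propagate` prunes the cube tree once a cube is refuted: the node is deleted, and if that leaves a parent with no remaining children, the parent's cube is itself refuted, so deletion propagates toward the root. CubeTree's real node type is not part of this diff; the toy below models nodes as indices in a flat pool and returns the number of nodes removed (the real method instead returns the last node removed, for logging).

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Toy cube tree tracking only parent/child structure.
struct toy_tree {
    std::vector<int> parent;                 // parent[i] == -1 for the root
    std::vector<std::vector<int>> children;  // child indices per node

    int add(int p) {
        int id = (int)parent.size();
        parent.push_back(p);
        children.push_back({});
        if (p >= 0) children[p].push_back(id);
        return id;
    }

    // Remove node n and every ancestor left childless; return the removal count.
    int remove_and_propagate(int n) {
        int removed = 0;
        while (n >= 0) {
            int p = parent[n];
            if (p >= 0) {
                auto& cs = children[p];
                cs.erase(std::find(cs.begin(), cs.end(), n));
            }
            ++removed;
            // Propagate only if the parent just lost its last child.
            n = (p >= 0 && children[p].empty()) ? p : -1;
        }
        return removed;
    }
};
```

In solver terms: if both the positive and the negative branch of a split are unsat, their conjunction-free parent cube is unsat, so no worker should ever be handed it again.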
@@ -166,6 +183,7 @@ namespace smt {
            bool m_iterative_deepening = false;
            bool m_beam_search = false;
            bool m_explicit_hardness = false;
            bool m_cubetree = false;
        };

        unsigned id; // unique identifier for the worker
@@ -177,9 +195,12 @@ namespace smt {
        config m_config;
        scoped_ptr<context> ctx;
        ast_translation m_g2l, m_l2g;
        CubeNode* m_curr_cube_node = nullptr;
        unsigned m_num_shared_units = 0;
        unsigned m_num_initial_atoms = 0;
        unsigned m_shared_clause_limit = 0; // remembers the index into shared_clause_trail marking the boundary between "old" and "new" clauses to share

        void share_units(ast_translation& l2g);
        lbool check_cube(expr_ref_vector const& cube);
        void update_max_thread_conflicts() {