z3/src/math/simplex/simplex.h
Arie Gurfinkel d2b618df23
Spacer Global Guidance (#6026)
* Make spacer_sem_matcher::reset() public

* Add .clang-format for src/muz/spacer

* Mark substitution::get_bindings() as const

* Fix in spacer_antiunify

* Various helper methods in spacer_util

Minor functions to compute the number of free variables, detect the presence of certain
sub-expressions, etc.

The diff is ugly because of clang-format

* Add spacer_cluster for clustering lemmas

A cluster of lemmas is a set of lemmas that are all instances of the same
pattern, where a pattern is a quantifier-free formula with free variables.

Currently, the instances are required to be explicit, that is, they are all
obtained by substituting concrete values (i.e., numbers) for the free variables
of the pattern.

Lemmas are clustered in cluster_db in each predicate transformer.
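
For example (illustrative, not from the commit): the lemmas x + y <= 1, x + y <= 3,
and x + y <= 8 are all instances of the pattern x + y <= v0, obtained by substituting
the numbers 1, 3, and 8 for the free variable v0, so they belong to the same cluster.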

* Integrate spacer_cluster into spacer_context

* Custom clang-format pragmas for spacer_context

spacer_context.(cpp|h) are large and have inconsistent formatting. Disable
clang-format for them until they are merged with the main z3 branch and re-formatted.

* Computation of convex closure and matrix kernel

Various linear algebra (LA) functions. The implementations are somewhat preliminary.

Convex closure is implemented via a syntactic convex closure procedure.
Kernel computation handles many common cases.
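
For illustration (not from the commit): the syntactic convex closure of the two points
(0, 2) and (4, 6) over variables (x, y) introduces fresh multipliers a0, a1 and produces
the constraints a0 >= 0, a1 >= 0, a0 + a1 = 1, x = 0*a0 + 4*a1, y = 2*a0 + 6*a1;
these multipliers are the alpha variables mentioned further below.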

spacer_arith_kernel_sage implements kernel computation by calling an external
Sage binary. It is used only for debugging and experiments. There is no
link dependence on Sage. If desired, it can be removed.

* Add spacer_concretize

* Utility methods for spacer conjecture rule

* Add spacer_expand_bnd_generalizer

Generalizes arithmetic inequality literals of the form x <= c
by changing the constant c to other constants found in the problem.
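
For example (illustrative): given a lemma x <= 2 and constants 5 and 10 occurring
elsewhere in the problem, the generalizer tries the weaker literals x <= 5 and x <= 10
and keeps a weakening only if the resulting lemma still holds.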

* Add spacer_global_generalizer

The global generalizer checks every new lemma against a cluster
of previously learned lemmas and, if possible, conjectures
a new pob that, when blocked, generalizes multiple existing
lemmas.
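
For illustration (not from the commit): if a predicate has accumulated the lemmas
x >= 1, x >= 2, and x >= 3, all matching the pattern x >= v0, the generalizer can
conjecture a pob whose blocking yields a single, more general lemma that covers the
whole cluster instead of letting the sequence of similar lemmas grow indefinitely.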

* Remove fp.spacer.print_json option

The option was used to dump the state of spacer into JSON for debugging.

It has been replaced by `fp.spacer.trace_file`, which allows dumping an execution
of spacer. The JSON file can be reconstructed from the trace file elsewhere.

* Workaround for segfault in spacer_proof_utils

Issue #3 in hgvk94/z3

Segfault in some proof reductions. Avoided by bailing out of the reduction.

* Revert bug for incomplete models

* Use local fresh variables in spacer_global_generalizer

* Cleanup of spacer_convex_closure

* Allow arbitrary expressions to name cols in convex_closure

* WIP: convex closure

* WIP: convex closure

* Fix bindings order in spacer_global_generalizer

The matcher creates a substitution using std_order, which is the
reverse of the expected order (variable 0 is last). Adjust the code
accordingly.

* Increase verbosity level for smt_context stats

* Dead code in qe_mbp

* bug fixes in spacer_global_generalizer::subsumer

* Partially remove dependence on the size of m_alphas

I want m_alphas to potentially be larger than the number of alpha variables currently
in use. This is helpful for reusing them across multiple calls to convex closure.

* Subtle bug in kernel computation

The coefficient was being passed by reference and, therefore, was
being changed indirectly.

In the process, the code was updated to be more generic, avoiding rational
computation in the middle of matrix manipulation.
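
A minimal self-contained sketch (not the actual spacer code) of this kind of
pass-by-reference aliasing pitfall:

#include <iostream>
#include <vector>

// Scales every entry of `dst` by `c`. If `c` is bound to an element of `dst`,
// the coefficient silently changes once that element has been scaled.
void scale_row(std::vector<int> &dst, const int &c) {
    for (auto &x : dst)
        x *= c;                    // after dst[0] is updated, c may be stale
}

int main() {
    std::vector<int> row = {2, 3, 4};
    scale_row(row, row[0]);        // aliasing: c refers to row[0]
    std::cout << row[1] << "\n";   // prints 12, not the intended 6 (3 * 2)
    return 0;
}

Passing the coefficient by value (or copying it before the loop) avoids the problem.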

* another test for sparse_matrix_ops::kernel

* Implementation of matrix kernel using Fraction Free Elimination

Ensures that the kernel is integral for integer matrices. All divisions are exact.
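
A minimal self-contained sketch of fraction-free (Bareiss) elimination on a dense
integer matrix, showing why every division is exact so that integer input stays
integer throughout; the kernel_ffe routines declared at the end of simplex.h below
operate on z3's sparse_matrix instead and return a kernel basis K:

#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

using Matrix = std::vector<std::vector<long long>>;

// In-place fraction-free (Bareiss) forward elimination. Each update divides by
// the previous pivot; the Bareiss identity guarantees the division is exact.
void bareiss(Matrix &A) {
    const std::size_t rows = A.size(), cols = rows ? A[0].size() : 0;
    long long prev_pivot = 1;
    std::size_t r = 0;
    for (std::size_t c = 0; c < cols && r < rows; ++c) {
        std::size_t p = r;
        while (p < rows && A[p][c] == 0) ++p;   // find a nonzero pivot
        if (p == rows) continue;                // column already eliminated
        std::swap(A[p], A[r]);
        for (std::size_t i = r + 1; i < rows; ++i) {
            for (std::size_t j = c + 1; j < cols; ++j)
                A[i][j] = (A[i][j] * A[r][c] - A[i][c] * A[r][j]) / prev_pivot;
            A[i][c] = 0;
        }
        prev_pivot = A[r][c];
        ++r;
    }
}

int main() {
    // rank-2 integer matrix: the third row is the sum of the first two
    Matrix A = {{2, 4, 6}, {1, 3, 5}, {3, 7, 11}};
    bareiss(A);
    for (auto const &row : A) {
        for (auto v : row) std::cout << v << ' ';
        std::cout << '\n';
    }
    return 0;
}

The run ends with an all-integer row echelon form (2 4 6 / 0 2 4 / 0 0 0), from which
an integer kernel vector such as (1, -2, 1) can be read off.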

* clang-format sparse_matrix_ops.h

* another implementation of ffe kernel in sparse_matrix_ops

* Re-do arith_kernel and convex_closure

* update spacer_global_generalization for new subsumer

* remove spacer.gg.use_sage parameter

* cleanup of spacer_global_generalizer

* Removed dependency on sage

* fix in spacer_convex_closure

* spacer_sem_matcher: consider an additional semantic matching

disabled until it is shown useful

* spacer_global_generalizer: improve do_conjecture

 - if conjecture does not apply to pob, use lemma instead
 - better normalization
 - improve debug prints

* spacer_conjecture: formatting

* spacer_cluster: improve debug prints

* spacer_context: improve debug prints

* spacer_context: re-queue may pobs

enabled even if global re-queue is disabled

* spacer_cluster print formatting

* reset methods on pob

* cleanup of print and local variable names

* formatting

* reset generalization data once it has been used

* refactored extra pob creation during global guidance

* fix bug copying sparse matrix into spacer matrix

* bug fix in spacer_convex_closure

* formatting change in spacer_context

* spacer_cluster: get_min_lvl

choose the level based on the pob as well as the lemmas

* spacer_context: add desired_level to pob

desired_level indicates the level at which a pob should be proved.
A pob will be pushed to desired_level if necessary.

* spacer_context: renamed subsume stats

the names of success/failed were switched

* spacer_convex_closure: fix prototype of is_congruent_mod()

* spacer_convex_closure: hacks in infer_div_pred()

* spacer_util: do not expand literals with mod

By default, an equality literal t=p is expanded into t<=p && t>=p.

Disable the expansion when t contains the 'mod' operator, since such
expansion is usually not helpful for divisibility reasoning.
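
For example (illustrative): x = 5 is still rewritten to x <= 5 && x >= 5, but a
literal such as (y mod 4) = 3 is kept as an equality.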

* spacer_util: rename m_util into m_arith

* spacer_util: cleanup normalize()

* spacer_util: formatting

* spacer_context: formatting cleanup on subsume and conjecture

* spacer_context: fix handling may pobs when abs_weakness is enabled

A pob might be undef, so weakness must be bumped up

* spacer_arith_kernel: enhance debug print

* spacer_global_generalizer: improve matching on conjecture

* spacer_global_generalizer: set desired level on conjecture pob

* spacer_global_generalizer: debug print

* spacer_global_generalizer: set min level on new pobs

the new level should not be higher than that of the pob that was generalized

* spacer_global_generalizer: do not re-create closed pobs

If a generalized pob already exists and is closed, do not re-create it.

* spacer_context: normalize twice

* spacer_context: forward propagate only same kind of pobs

* sketch of inductive generalizer

A better implementation of the inductive generalizer that, in addition to dropping
literals, also attempts to weaken them.

The current implementation is a sketch to be extended based on examples/requirements.
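
For illustration (not from the commit): besides trying to drop a literal such as
y <= 3 altogether, the sketch also tries weakening a literal such as x <= 5 to
x <= 10, keeping the change only if the result is still inductive.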

* fix ordering in spacer_cluster_util

* fix resetting of substitution matcher in spacer_conjecture

The old code forgot to reset the substitution provided to the sem_matcher.
Thus, once the substitution was matched (i.e., one literal of interest was
found), no other literal would be matched.

* add spacer_util is_normalized() method

used for debugging only

* simplify normalization of pob expressions

pob expressions are normalized to increase syntactic matching.
Some of the normalization rules seem out of place, so removing them for now.

* fix in spacer_global_generalizer

If conjecture fails, do not try other generalization strategies -- they will not apply.

* fix in spacer_context

do not check whether a may pob is blocked by existing lemmas.
It is likely to be blocked. Our goal is to block it again and generalize
to a new lemma.

This can be further improved by moving directly to generalization when a pob is
blocked by existing lemmas...

Co-authored-by: hgvk94 <hgvk94@gmail.com>
2022-08-30 15:47:00 -07:00

208 lines | 7.5 KiB | C++

/*++
Copyright (c) 2014 Microsoft Corporation

Module Name:

    simplex.h

Abstract:

    Multi-precision simplex tableau.

    - It uses code from theory_arith where applicable.
    - It is detached from the theory class and ASTs.
    - It uses non-shared mpz/mpq's avoiding global locks and operations on rationals.
    - It follows the same sparse tableau layout (no LU yet).
    - It does not include features for non-linear arithmetic.
    - Branch/bound/cuts is external.

Author:

    Nikolaj Bjorner (nbjorner) 2014-01-15

Notes:

--*/
#pragma once
#include "math/simplex/sparse_matrix.h"
#include "util/mpq_inf.h"
#include "util/rational.h"
#include "util/heap.h"
#include "util/lbool.h"
#include "util/uint_set.h"
namespace simplex {

    template<typename Ext>
    class simplex {

        typedef unsigned var_t;
        typedef typename Ext::eps_numeral eps_numeral;
        typedef typename Ext::numeral numeral;
        typedef typename Ext::manager manager;
        typedef typename Ext::eps_manager eps_manager;
        typedef typename Ext::scoped_numeral scoped_numeral;
        typedef _scoped_numeral<eps_manager> scoped_eps_numeral;
        typedef _scoped_numeral_vector<eps_manager> scoped_eps_numeral_vector;
        typedef sparse_matrix<Ext> matrix;

        struct var_lt {
            bool operator()(var_t v1, var_t v2) const { return v1 < v2; }
        };
        typedef heap<var_lt> var_heap;

        struct stats {
            unsigned m_num_pivots;
            unsigned m_num_infeasible;
            unsigned m_num_checks;
            stats() { reset(); }
            void reset() {
                memset(this, 0, sizeof(*this));
            }
        };

        enum pivot_strategy_t {
            S_BLAND,
            S_GREATEST_ERROR,
            S_LEAST_ERROR,
            S_DEFAULT
        };

        struct var_info {
            unsigned m_base2row:29;
            unsigned m_is_base:1;
            unsigned m_lower_valid:1;
            unsigned m_upper_valid:1;
            eps_numeral m_value;
            eps_numeral m_lower;
            eps_numeral m_upper;
            numeral m_base_coeff;
            var_info():
                m_base2row(0),
                m_is_base(false),
                m_lower_valid(false),
                m_upper_valid(false)
            {}
        };

        static const var_t null_var;
        reslimit& m_limit;
        mutable manager m;
        mutable eps_manager em;
        mutable matrix M;
        unsigned m_max_iterations;
        var_heap m_to_patch;
        vector<var_info> m_vars;
        svector<var_t> m_row2base;
        bool m_bland;
        unsigned m_blands_rule_threshold;
        random_gen m_random;
        uint_set m_left_basis;
        unsigned m_infeasible_var;
        unsigned_vector m_base_vars;
        stats m_stats;

    public:
        simplex(reslimit& lim):
            m_limit(lim),
            M(m),
            m_max_iterations(UINT_MAX),
            m_to_patch(1024),
            m_bland(false),
            m_blands_rule_threshold(1000) {}

        ~simplex();

        typedef typename matrix::row row;
        typedef typename matrix::row_iterator row_iterator;
        typedef typename matrix::col_iterator col_iterator;

        void ensure_var(var_t v);
        row add_row(var_t base, unsigned num_vars, var_t const* vars, numeral const* coeffs);
        row get_infeasible_row();
        var_t get_base_var(row const& r) const { return m_row2base[r.id()]; }
        numeral const& get_base_coeff(row const& r) const { return m_vars[m_row2base[r.id()]].m_base_coeff; }
        void del_row(var_t base_var);
        void set_lower(var_t var, eps_numeral const& b);
        void set_upper(var_t var, eps_numeral const& b);
        void get_lower(var_t var, scoped_eps_numeral& b) const { b = m_vars[var].m_lower; }
        void get_upper(var_t var, scoped_eps_numeral& b) const { b = m_vars[var].m_upper; }
        eps_numeral const& get_lower(var_t var) const { return m_vars[var].m_lower; }
        eps_numeral const& get_upper(var_t var) const { return m_vars[var].m_upper; }
        bool above_lower(var_t var, eps_numeral const& b) const;
        bool below_upper(var_t var, eps_numeral const& b) const;
        bool below_lower(var_t v) const;
        bool above_upper(var_t v) const;
        bool lower_valid(var_t var) const { return m_vars[var].m_lower_valid; }
        bool upper_valid(var_t var) const { return m_vars[var].m_upper_valid; }
        void unset_lower(var_t var);
        void unset_upper(var_t var);
        void set_value(var_t var, eps_numeral const& b);
        void set_max_iterations(unsigned n) { m_max_iterations = n; }
        void reset();
        lbool make_feasible();
        lbool minimize(var_t var);
        eps_numeral const& get_value(var_t v);
        void display(std::ostream& out) const;
        void display_row(std::ostream& out, row const& r, bool values = true);
        unsigned get_num_vars() const { return m_vars.size(); }
        row_iterator row_begin(row const& r) { return M.row_begin(r); }
        row_iterator row_end(row const& r) { return M.row_end(r); }
        void collect_statistics(::statistics & st) const;

    private:
        void del_row(row const& r);
        var_t select_var_to_fix();
        pivot_strategy_t pivot_strategy();
        var_t select_smallest_var() { return m_to_patch.empty()?null_var:m_to_patch.erase_min(); }
        var_t select_error_var(bool least);
        void check_blands_rule(var_t v, unsigned& num_repeated);
        bool make_var_feasible(var_t x_i);
        void update_and_pivot(var_t x_i, var_t x_j, numeral const& a_ij, eps_numeral const& new_value);
        void update_value(var_t v, eps_numeral const& delta);
        void update_value_core(var_t v, eps_numeral const& delta);
        void pivot(var_t x_i, var_t x_j, numeral const& a_ij);
        void move_to_bound(var_t x, bool to_lower);
        var_t select_pivot(var_t x_i, bool is_below, scoped_numeral& out_a_ij);
        var_t select_pivot_blands(var_t x_i, bool is_below, scoped_numeral& out_a_ij);
        var_t select_pivot_core(var_t x_i, bool is_below, scoped_numeral& out_a_ij);
        int get_num_non_free_dep_vars(var_t x_j, int best_so_far);
        var_t pick_var_to_leave(var_t x_j, bool is_pos,
                                scoped_eps_numeral& gain, scoped_numeral& new_a_ij, bool& inc);
        void select_pivot_primal(var_t v, var_t& x_i, var_t& x_j, scoped_numeral& a_ij, bool& inc_x_i, bool& inc_x_j);
        bool at_lower(var_t v) const;
        bool at_upper(var_t v) const;
        bool above_lower(var_t v) const;
        bool below_upper(var_t v) const;
        bool outside_bounds(var_t v) const { return below_lower(v) || above_upper(v); }
        bool is_free(var_t v) const { return !m_vars[v].m_lower_valid && !m_vars[v].m_upper_valid; }
        bool is_non_free(var_t v) const { return !is_free(v); }
        bool is_base(var_t x) const { return m_vars[x].m_is_base; }
        void add_patch(var_t v);
        bool well_formed() const;
        bool well_formed_row(row const& r) const;
        bool is_feasible() const;
    };

    void ensure_rational_solution(simplex<mpq_ext>& s);
    void kernel(sparse_matrix<mpq_ext>& s, vector<vector<rational>>& K);
    void kernel_ffe(sparse_matrix<mpq_ext> &s, vector<vector<rational>> &K);
};