mirror of https://github.com/Z3Prover/z3 synced 2026-05-16 15:15:35 +00:00

cleanup service

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
This commit is contained in:
Nikolaj Bjorner 2026-04-24 17:22:04 -07:00
parent cedd896ea5
commit abbe36561d
4 changed files with 32 additions and 20 deletions

View file

@@ -30,7 +30,7 @@ Workflows may optionally include:
- Workflow files: `.github/workflows/*.md` and `.github/workflows/**/*.md`
- Workflow lock files: `.github/workflows/*.lock.yml`
- Shared components: `.github/workflows/shared/*.md`
-- Configuration: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/github-agentic-workflows.md
+- Configuration: https://github.com/github/gh-aw/blob/v0.68.3/.github/aw/github-agentic-workflows.md
## Problems This Solves
@@ -52,7 +52,7 @@ When you interact with this agent, it will:
### Create New Workflow
**Load when**: User wants to create a new workflow from scratch, add automation, or design a workflow that doesn't exist yet
-**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/create-agentic-workflow.md
+**Prompt file**: https://github.com/github/gh-aw/blob/v0.68.3/.github/aw/create-agentic-workflow.md
**Use cases**:
- "Create a workflow that triages issues"
@@ -62,7 +62,7 @@ When you interact with this agent, it will:
### Update Existing Workflow
**Load when**: User wants to modify, improve, or refactor an existing workflow
-**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/update-agentic-workflow.md
+**Prompt file**: https://github.com/github/gh-aw/blob/v0.68.3/.github/aw/update-agentic-workflow.md
**Use cases**:
- "Add web-fetch tool to the issue-classifier workflow"
@@ -72,7 +72,7 @@ When you interact with this agent, it will:
### Debug Workflow
**Load when**: User needs to investigate, audit, debug, or understand a workflow, troubleshoot issues, analyze logs, or fix errors
-**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/debug-agentic-workflow.md
+**Prompt file**: https://github.com/github/gh-aw/blob/v0.68.3/.github/aw/debug-agentic-workflow.md
**Use cases**:
- "Why is this workflow failing?"
@@ -82,7 +82,7 @@ When you interact with this agent, it will:
### Upgrade Agentic Workflows
**Load when**: User wants to upgrade workflows to a new gh-aw version or fix deprecations
-**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/upgrade-agentic-workflows.md
+**Prompt file**: https://github.com/github/gh-aw/blob/v0.68.3/.github/aw/upgrade-agentic-workflows.md
**Use cases**:
- "Upgrade all workflows to the latest version"
@@ -92,7 +92,7 @@ When you interact with this agent, it will:
### Create a Report-Generating Workflow
**Load when**: The workflow being created or updated produces reports — recurring status updates, audit summaries, analyses, or any structured output posted as a GitHub issue, discussion, or comment
-**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/report.md
+**Prompt file**: https://github.com/github/gh-aw/blob/v0.68.3/.github/aw/report.md
**Use cases**:
- "Create a weekly CI health report"
@@ -102,7 +102,7 @@ When you interact with this agent, it will:
### Create Shared Agentic Workflow
**Load when**: User wants to create a reusable workflow component or wrap an MCP server
-**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/create-shared-agentic-workflow.md
+**Prompt file**: https://github.com/github/gh-aw/blob/v0.68.3/.github/aw/create-shared-agentic-workflow.md
**Use cases**:
- "Create a shared component for Notion integration"
@@ -112,7 +112,7 @@ When you interact with this agent, it will:
### Fix Dependabot PRs
**Load when**: User needs to close or fix open Dependabot PRs that update dependencies in generated manifest files (`.github/workflows/package.json`, `.github/workflows/requirements.txt`, `.github/workflows/go.mod`)
-**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/dependabot.md
+**Prompt file**: https://github.com/github/gh-aw/blob/v0.68.3/.github/aw/dependabot.md
**Use cases**:
- "Fix the open Dependabot PRs for npm dependencies"
@@ -122,7 +122,7 @@ When you interact with this agent, it will:
### Analyze Test Coverage
**Load when**: The workflow reads, analyzes, or reports test coverage — whether triggered by a PR, a schedule, or a slash command. Always consult this prompt before designing the coverage data strategy.
-**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/test-coverage.md
+**Prompt file**: https://github.com/github/gh-aw/blob/v0.68.3/.github/aw/test-coverage.md
**Use cases**:
- "Create a workflow that comments coverage on PRs"
@@ -169,9 +169,10 @@ gh aw compile --validate
## Important Notes
-- Always reference the instructions file at https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/github-agentic-workflows.md for complete documentation
+- Always reference the instructions file at https://github.com/github/gh-aw/blob/v0.68.3/.github/aw/github-agentic-workflows.md for complete documentation
- Use the MCP tool `agentic-workflows` when running in GitHub Copilot Cloud
- Workflows must be compiled to `.lock.yml` files before running in GitHub Actions
- **Bash tools are enabled by default** - Don't restrict bash commands unnecessarily since workflows are sandboxed by the AWF
- Follow security best practices: minimal permissions, explicit network access, no template injection
- **Network configuration**: Use ecosystem identifiers (`node`, `python`, `go`, etc.) or explicit FQDNs in `network.allowed`. Bare shorthands like `npm` or `pypi` are **not** valid. See https://github.com/github/gh-aw/blob/v0.68.3/.github/aw/network.md for the full list of valid ecosystem identifiers and domain patterns.
- **Single-file output**: When creating a workflow, produce exactly **one** workflow `.md` file. Do not create separate documentation files (architecture docs, runbooks, usage guides, etc.). If documentation is needed, add a brief `## Usage` section inside the workflow file itself.
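The network note above is easiest to see in a frontmatter sketch. This fragment is hypothetical (the domain is invented for illustration); only the rule it demonstrates comes from the note: `network.allowed` takes ecosystem identifiers or explicit FQDNs, never bare registry shorthands.

```yaml
# Hypothetical workflow frontmatter -- illustrates the network.allowed rule.
network:
  allowed:
    - node                # ecosystem identifier (valid)
    - python              # ecosystem identifier (valid)
    - api.example.com     # explicit FQDN (hypothetical domain)
    # - npm               # INVALID: bare registry shorthand, use "node"
```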

View file

@@ -1233,6 +1233,13 @@ namespace seq {
return true;
}
+    euf::snode* nielsen_graph::mk_rewrite(expr* e) {
+        expr_ref er(e, m);
+        th_rewriter rw(m);
+        rw(er);
+        return m_sg.mk(er);
+    }
// -----------------------------------------------------------------------
// nielsen_graph: search
// -----------------------------------------------------------------------
@@ -1875,7 +1882,7 @@ namespace seq {
}
euf::snode* replacement = dir_concat(m_sg, prefix_sn, var_node, fwd);
nielsen_subst s(var_node, replacement,
-m_sg.mk(a.mk_sub(compute_length_expr(var_node), compute_length_expr(prefix_sn))),
+mk_rewrite(a.mk_sub(compute_length_expr(var_node), compute_length_expr(prefix_sn))),
eq.m_dep);
nielsen_node* child = mk_child(node);
nielsen_edge* e = mk_edge(node, child, true);
@@ -1943,7 +1950,7 @@ namespace seq {
euf::snode* replacement = dir_concat(m_sg, char_head, var_head, fwd);
child = mk_child(node);
e = mk_edge(node, child, false);
-nielsen_subst s2(var_head, replacement, m_sg.mk(a.mk_sub(compute_length_expr(var_head), a.mk_int(1))), eq.m_dep);
+nielsen_subst s2(var_head, replacement, mk_rewrite(a.mk_sub(compute_length_expr(var_head), a.mk_int(1))), eq.m_dep);
e->add_subst(s2);
child->apply_subst(m_sg, s2);
return true;
@@ -1992,7 +1999,7 @@ namespace seq {
nielsen_node* child = mk_child(node);
nielsen_edge* e = mk_edge(node, child, false);
nielsen_subst s(lhead, replacement,
-m_sg.mk(a.mk_sub(compute_length_expr(lhead), compute_length_expr(rhead))),
+mk_rewrite(a.mk_sub(compute_length_expr(lhead), compute_length_expr(rhead))),
eq.m_dep);
e->add_subst(s);
child->apply_subst(m_sg, s);
@@ -2003,7 +2010,7 @@ namespace seq {
nielsen_node* child = mk_child(node);
nielsen_edge* e = mk_edge(node, child, false);
nielsen_subst s(rhead, replacement,
-m_sg.mk(a.mk_sub(compute_length_expr(rhead), compute_length_expr(lhead))),
+mk_rewrite(a.mk_sub(compute_length_expr(rhead), compute_length_expr(lhead))),
eq.m_dep);
e->add_subst(s);
child->apply_subst(m_sg, s);
@@ -3469,7 +3476,7 @@ namespace seq {
euf::snode* replacement = m_sg.mk_concat(fresh_char, first);
nielsen_node* child = mk_child(node);
nielsen_edge* e = mk_edge(node, child, false);
-nielsen_subst s(first, replacement, m_sg.mk(a.mk_sub(compute_length_expr(first), a.mk_int(1))), mem.m_dep);
+nielsen_subst s(first, replacement, mk_rewrite(a.mk_sub(compute_length_expr(first), a.mk_int(1))), mem.m_dep);
e->add_subst(s);
child->apply_subst(m_sg, s);
// Constrain fresh_char to the character class of this minterm.
@@ -3755,8 +3762,9 @@ namespace seq {
}
euf::snode *length_term = nullptr;
if (m_length_info.find(n->id(), length_term) && length_term)
return expr_ref(length_term->get_expr(), m);
return expr_ref(m_seq.str.mk_length(n->get_expr()), m);
}
@@ -3962,7 +3970,7 @@ namespace seq {
auto& c = node->constraints()[i];
m_solver.assert_expr(c.fml);
auto lit = m_literal_if_false(c.fml);
-std::cout << "Internalizing literal " << mk_pp(c.fml, m) << " [" << (lit == sat::null_literal) << "]" << std::endl;
+// std::cout << "Internalizing literal " << mk_pp(c.fml, m) << " [" << (lit == sat::null_literal) << "]" << std::endl;
if (lit != sat::null_literal)
node->set_external_conflict(lit, c.dep);
}

View file

@@ -1038,6 +1038,9 @@ namespace seq {
// only once per node across DFS iterations.
void apply_parikh_to_node(nielsen_node& node);
+// simplify expression and create a node from simplified expression.
+euf::snode *mk_rewrite(expr *e);
// create a fresh variable with a unique name and the given sequence sort
euf::snode* mk_fresh_var(sort* s);

View file

@@ -735,7 +735,7 @@ namespace smt {
}
}
-std::cout << "[" << m_num_final_checks << "]" << std::endl;
+// std::cout << "[" << m_num_final_checks << "]" << std::endl;
IF_VERBOSE(1, verbose_stream() << "nseq final_check: calling solve()\n";);
// here the actual Nielsen solving happens
@@ -802,7 +802,7 @@ namespace smt {
bool all_sat = true;
ctx.push_trail(reset_vector(m_nielsen_literals));
for (auto const& c : m_nielsen.sat_node()->constraints()) {
-std::cout << "Assumption: " << mk_pp(c.fml, m) << std::endl;
+// std::cout << "Assumption: " << mk_pp(c.fml, m) << std::endl;
auto lit = mk_literal(c.fml);
m_nielsen_literals.push_back(lit);
// Ensure Nielsen assumptions participate in SAT search instead of
@@ -827,7 +827,7 @@ namespace smt {
// or maybe it can happen if we have a "le" dependency
TRACE(seq, tout << "nseq final_check: nielsen assumption " << c.fml << " is false; internalized - " << ctx.e_internalized(c.fml) << "\n");
all_sat = false;
-std::cout << "False [" << lit << "]: " << mk_pp(c.fml, m) << std::endl;
+// std::cout << "False [" << lit << "]: " << mk_pp(c.fml, m) << std::endl;
ctx.push_trail(value_trail(m_should_internalize));
m_should_internalize = true;
break;