mirror of https://github.com/Z3Prover/z3 synced 2026-02-27 10:35:38 +00:00

Merge branch 'Z3Prover:master' into master

Ruijie Fang 2026-02-24 13:39:36 -06:00 committed by GitHub
commit 4f4e657449
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
1068 changed files with 63480 additions and 43094 deletions

@@ -9,6 +9,7 @@ IndentWidth: 4
 TabWidth: 4
 UseTab: Never
 # Column width
 ColumnLimit: 120
@@ -34,6 +35,8 @@ BraceWrapping:
   AfterControlStatement: false
   AfterNamespace: false
   AfterStruct: false
+  BeforeElse : true
+  AfterCaseLabel: false
 # Spacing
 SpaceAfterCStyleCast: false
 SpaceAfterLogicalNot: false
@@ -42,7 +45,6 @@ SpaceInEmptyParentheses: false
 SpacesInCStyleCastParentheses: false
 SpacesInParentheses: false
 SpacesInSquareBrackets: false
-IndentCaseLabels: false
 # Alignment
 AlignConsecutiveAssignments: false
@@ -56,6 +58,7 @@ BinPackArguments: true
 BinPackParameters: true
 BreakBeforeBinaryOperators: None
 BreakBeforeTernaryOperators: true
+# BreakBeforeElse: true
 # Includes
 SortIncludes: false # Z3 has specific include ordering conventions
@@ -63,6 +66,11 @@ SortIncludes: false # Z3 has specific include ordering conventions
 # Namespaces
 NamespaceIndentation: All
+# Switch statements
+IndentCaseLabels: false
+AllowShortCaseLabelsOnASingleLine: true
+IndentCaseBlocks: false
 # Comments and documentation
 ReflowComments: true
 SpacesBeforeTrailingComments: 2

.gitattributes vendored

@@ -3,4 +3,7 @@
 src/api/dotnet/Properties/AssemblyInfo.cs text eol=crlf
 .github/workflows/*.lock.yml linguist-generated=true merge=ours
+# Use bd merge for beads JSONL files
+.beads/issues.jsonl merge=beads

.github/CI_MIGRATION.md vendored Normal file

@@ -0,0 +1,123 @@
# Azure Pipelines to GitHub Actions Migration
## Overview
This document describes the migration from Azure Pipelines (`azure-pipelines.yml`) to GitHub Actions (`.github/workflows/ci.yml`).
## Migration Summary
All jobs from the Azure Pipelines configuration have been migrated to GitHub Actions with equivalent or improved functionality.
### Jobs Migrated
| Azure Pipelines Job | GitHub Actions Job | Status |
|---------------------|-------------------|--------|
| LinuxPythonDebug (MT) | linux-python-debug (MT) | ✅ Migrated |
| LinuxPythonDebug (ST) | linux-python-debug (ST) | ✅ Migrated |
| ManylinuxPythonBuildAmd64 | manylinux-python-amd64 | ✅ Migrated |
| ManyLinuxPythonBuildArm64 | manylinux-python-arm64 | ✅ Migrated |
| UbuntuOCaml | ubuntu-ocaml | ✅ Migrated |
| UbuntuOCamlStatic | ubuntu-ocaml-static | ✅ Migrated |
| UbuntuCMake (releaseClang) | ubuntu-cmake (releaseClang) | ✅ Migrated |
| UbuntuCMake (debugClang) | ubuntu-cmake (debugClang) | ✅ Migrated |
| UbuntuCMake (debugGcc) | ubuntu-cmake (debugGcc) | ✅ Migrated |
| UbuntuCMake (releaseSTGcc) | ubuntu-cmake (releaseSTGcc) | ✅ Migrated |
| MacOSPython | macos-python | ✅ Migrated |
| MacOSCMake | macos-cmake | ✅ Migrated |
| LinuxMSan | N/A | ⚠️ Was disabled (condition: eq(0,1)) |
| MacOSOCaml | N/A | ⚠️ Was disabled (condition: eq(0,1)) |
## Key Differences
### Syntax Changes
1. **Trigger Configuration**
- Azure: `jobs:` with implicit triggers
- GitHub: Explicit `on:` section with `push`, `pull_request`, and `workflow_dispatch`
2. **Job Names**
- Azure: `displayName` field
- GitHub: `name` field
3. **Steps**
- Azure: `script:` for shell commands
- GitHub: `run:` for shell commands
4. **Checkout**
- Azure: Implicit checkout
- GitHub: Explicit `uses: actions/checkout@v4`
5. **Python Setup**
- Azure: Implicit Python availability
- GitHub: Explicit `uses: actions/setup-python@v5`
6. **Variables**
- Azure: Top-level `variables:` section
- GitHub: Inline in job steps or matrix configuration
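As a concrete cross-check of the job mapping in the table above, the two YAML files can be compared mechanically. A minimal sketch, assuming regex-based parsing is good enough for these particular files; the helper names and the CamelCase-to-kebab conversion are illustrative, not part of the migration itself:

```python
import re

def azure_job_names(yaml_text):
    """Collect displayName: values from an Azure Pipelines YAML body."""
    return {m.strip() for m in re.findall(r"displayName:\s*([^\n]+)", yaml_text)}

def actions_job_names(yaml_text):
    """Collect name: values from a GitHub Actions workflow body."""
    return {m.strip() for m in re.findall(r"\bname:\s*([^\n]+)", yaml_text)}

def kebab(job_name):
    """CamelCase -> kebab-case; acronyms (e.g. 'OCaml') still need manual fixes."""
    return re.sub(r"(?<!^)(?=[A-Z])", "-", job_name).lower()
```

For example, `kebab("LinuxPythonDebug")` yields `linux-python-debug`, matching the table row above.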
### Template Scripts
Azure Pipelines used external template files (e.g., `scripts/test-z3.yml`, `scripts/test-regressions.yml`). These have been inlined into the GitHub Actions workflow:
- `scripts/test-z3.yml`: Unit tests → Inlined as "Run unit tests" step
- `scripts/test-regressions.yml`: Regression tests → Inlined as "Run regressions" step
- `scripts/test-examples-cmake.yml`: CMake examples → Inlined as "Run examples" step
- `scripts/generate-doc.yml`: Documentation → Inlined as "Generate documentation" step
### Matrix Strategies
Both Azure Pipelines and GitHub Actions support matrix builds. The migration maintains the same matrix configurations:
- **linux-python-debug**: 2 variants (MT, ST)
- **ubuntu-cmake**: 4 variants (releaseClang, debugClang, debugGcc, releaseSTGcc)
### Container Jobs
Manylinux builds continue to use container images:
- `quay.io/pypa/manylinux_2_34_x86_64:latest` for AMD64
- `quay.io/pypa/manylinux2014_x86_64:latest` for ARM64 cross-compilation
### Disabled Jobs
Two jobs were disabled in Azure Pipelines (with `condition: eq(0,1)`) and have not been migrated:
- **LinuxMSan**: Memory sanitizer builds
- **MacOSOCaml**: macOS OCaml builds
These can be re-enabled in the future if needed by adding them to the workflow file.
## Benefits of GitHub Actions
1. **Unified Platform**: All CI/CD in one place (GitHub)
2. **Better Integration**: Native integration with GitHub features (checks, status, etc.)
3. **Actions Marketplace**: Access to pre-built actions
4. **Improved Caching**: Better artifact and cache management
5. **Cost**: Free for public repositories
## Testing
To test the new workflow:
1. Push a branch or create a pull request
2. The workflow will automatically trigger
3. Monitor progress in the "Actions" tab
4. Review job logs for any issues
## Deprecation Plan
1. ✅ Create new GitHub Actions workflow (`.github/workflows/ci.yml`)
2. 🔄 Test and validate the new workflow
3. ⏳ Run both pipelines in parallel for a transition period
4. ⏳ Once stable, deprecate `azure-pipelines.yml`
## Rollback Plan
If issues arise with the GitHub Actions workflow:
1. The original `azure-pipelines.yml` remains in the repository
2. Azure Pipelines can be re-enabled if needed
3. Both can run in parallel during the transition
## Additional Resources
- [GitHub Actions Documentation](https://docs.github.com/en/actions)
- [Migrating from Azure Pipelines to GitHub Actions](https://docs.github.com/en/actions/migrating-to-github-actions/migrating-from-azure-pipelines-to-github-actions)
- [GitHub Actions Syntax Reference](https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions)

.github/CI_TESTING.md vendored Normal file

@@ -0,0 +1,132 @@
# Testing the CI Workflow
This document provides instructions for testing the new GitHub Actions CI workflow after migration from Azure Pipelines.
## Quick Test
To test the workflow:
1. **Push a branch or create a PR**: The workflow automatically triggers on all branches
2. **View workflow runs**: Go to the "Actions" tab in GitHub
3. **Monitor progress**: Click on a workflow run to see job details
## Manual Trigger
You can also manually trigger the workflow:
1. Go to the "Actions" tab
2. Select "CI" from the left sidebar
3. Click "Run workflow"
4. Choose your branch
5. Click "Run workflow"
## Local Validation
Before pushing, you can validate the YAML syntax locally:
```bash
# Using yamllint (install with: pip install yamllint)
yamllint .github/workflows/ci.yml
# Using Python PyYAML
python3 -c "import yaml; yaml.safe_load(open('.github/workflows/ci.yml'))"
# Using actionlint (install from https://github.com/rhysd/actionlint)
actionlint .github/workflows/ci.yml
```
## Job Matrix
The CI workflow includes these job categories:
### Linux Jobs
- **linux-python-debug**: Python-based build with make (MT and ST variants)
- **manylinux-python-amd64**: Python wheel build for manylinux AMD64
- **manylinux-python-arm64**: Python wheel build for manylinux ARM64 (cross-compile)
- **ubuntu-ocaml**: OCaml bindings build
- **ubuntu-ocaml-static**: OCaml static library build
- **ubuntu-cmake**: CMake builds with multiple compilers (4 variants)
### macOS Jobs
- **macos-python**: Python-based build with make
- **macos-cmake**: CMake build with Julia support
## Expected Runtime
Approximate job durations:
- Linux Python builds: 20-30 minutes
- Manylinux Python builds: 15-25 minutes
- OCaml builds: 25-35 minutes
- CMake builds: 25-35 minutes per variant
- macOS builds: 30-40 minutes
Total workflow time (all jobs in parallel): ~40-60 minutes
## Debugging Failed Jobs
If a job fails:
1. **Click on the failed job** to see the log
2. **Expand failed steps** to see detailed output
3. **Check for common issues**:
- Missing dependencies
- Test failures
- Build errors
- Timeout (increase timeout-minutes if needed)
4. **Re-run failed jobs**:
- Click "Re-run failed jobs" button
- Or "Re-run all jobs" to test everything
## Comparing with Azure Pipelines
To compare results:
1. Check the last successful Azure Pipelines run
2. Compare job names and steps with the GitHub Actions workflow
3. Verify all tests pass with similar coverage
## Differences from Azure Pipelines
1. **Checkout**: Explicit `actions/checkout@v4` step (was implicit)
2. **Python Setup**: Explicit `actions/setup-python@v5` step (was implicit)
3. **Template Files**: Inlined instead of external templates
4. **Artifacts**: Uses `actions/upload-artifact` (if needed in future)
5. **Caching**: Can add `actions/cache` for dependencies (optional optimization)
## Adding Jobs or Modifying
To add or modify jobs:
1. Edit `.github/workflows/ci.yml`
2. Follow the existing job structure
3. Use matrix strategy for variants
4. Add appropriate timeouts (default: 90 minutes)
5. Test your changes on a branch first
## Optimization Opportunities
Future optimizations to consider:
1. **Caching**: Add dependency caching (npm, pip, opam, etc.)
2. **Artifacts**: Share build artifacts between jobs
3. **Concurrency**: Add concurrency groups to cancel outdated runs
4. **Selective Execution**: Skip jobs based on changed files
5. **Self-hosted Runners**: For faster builds (if available)
## Rollback Plan
If the GitHub Actions workflow has issues:
1. The original `azure-pipelines.yml` is still in the repository
2. Azure Pipelines can be re-enabled if needed
3. Both systems can run in parallel during transition
## Support
For issues or questions:
1. Check GitHub Actions documentation: https://docs.github.com/en/actions
2. Review the migration document: `.github/CI_MIGRATION.md`
3. Check existing GitHub Actions workflows in `.github/workflows/`
4. Open an issue in the repository

.github/agentics/deeptest.md vendored Normal file

@@ -0,0 +1,344 @@
<!-- This prompt will be imported in the agentic workflow .github/workflows/deeptest.md at runtime. -->
<!-- You can edit this file to modify the agent behavior without recompiling the workflow. -->
# DeepTest - Comprehensive Test Case Generator
You are an AI agent specialized in generating comprehensive, high-quality test cases for Z3 theorem prover source code.
Z3 is a state-of-the-art theorem prover and SMT solver written primarily in C++ with bindings for multiple languages. Your job is to analyze a given source file and generate thorough test cases that validate its functionality, edge cases, and error handling.
## Your Task
### 1. Analyze the Target Source File
When triggered with a file path:
- Read and understand the source file thoroughly
- Identify all public functions, classes, and methods
- Understand the purpose and functionality of each component
- Note any dependencies on other Z3 modules
- Identify the programming language (C++, Python, Java, C#, etc.)
**File locations to consider:**
- **C++ core**: `src/**/*.cpp`, `src/**/*.h`
- **Python API**: `src/api/python/**/*.py`
- **Java API**: `src/api/java/**/*.java`
- **C# API**: `src/api/dotnet/**/*.cs`
- **C API**: `src/api/z3*.h`
### 2. Generate Comprehensive Test Cases
For each identified function or method, generate test cases covering:
**Basic Functionality Tests:**
- Happy path scenarios with typical inputs
- Verify expected return values and side effects
- Test basic use cases documented in comments
**Edge Case Tests:**
- Boundary values (min/max integers, empty collections, null/nullptr)
- Zero and negative values where applicable
- Very large inputs
- Empty strings, arrays, or containers
- Uninitialized or default-constructed objects
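For numeric boundary values, a small helper can enumerate the candidates systematically. A pure-Python sketch, independent of Z3; the particular value set is one reasonable convention, not a Z3 requirement:

```python
def boundary_ints(bits):
    """Candidate boundary values for a signed two's-complement integer width."""
    lo = -(1 << (bits - 1))      # most negative value, e.g. -128 for 8 bits
    hi = (1 << (bits - 1)) - 1   # most positive value, e.g. 127 for 8 bits
    return [lo, lo + 1, -1, 0, 1, hi - 1, hi]
```

`boundary_ints(8)` returns `[-128, -127, -1, 0, 1, 126, 127]`, a useful seed set for bit-vector edge-case tests.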
**Error Handling Tests:**
- Invalid input parameters
- Null pointer handling (for C/C++)
- Out-of-bounds access
- Type mismatches (where applicable)
- Exception handling (for languages with exceptions)
- Assertion violations
**Integration Tests:**
- Test interactions between multiple functions
- Test with realistic SMT-LIB2 formulas
- Test solver workflows (create context, add assertions, check-sat, get-model)
- Test combinations of theories (arithmetic, bit-vectors, arrays, etc.)
**Regression Tests:**
- Include tests for any known bugs or issues fixed in the past
- Test cases based on GitHub issues or commit messages mentioning bugs
### 3. Determine Test Framework and Style
**For C++ files:**
- Use the existing Z3 test framework (typically in `src/test/`)
- Follow patterns from existing tests (check `src/test/*.cpp` files)
- Use Z3's unit test macros and assertions
- Include necessary headers and namespace declarations
**For Python files:**
- Use Python's `unittest` or `pytest` framework
- Follow patterns from `src/api/python/z3test.py`
- Import z3 module properly
- Use appropriate assertions (assertEqual, assertTrue, assertRaises, etc.)
**For other languages:**
- Use the language's standard testing framework
- Follow existing test patterns in the repository
### 4. Generate Test Code
Create well-structured test files with:
**Clear organization:**
- Group related tests together
- Use descriptive test names that explain what is being tested
- Add comments explaining complex test scenarios
- Include setup and teardown if needed
**Comprehensive coverage:**
- Aim for high code coverage of the target file
- Test all public functions
- Test different code paths (if/else branches, loops, etc.)
- Test with various solver configurations where applicable
**Realistic test data:**
- Use meaningful variable names and values
- Create realistic SMT-LIB2 formulas for integration tests
- Include both simple and complex test cases
**Proper assertions:**
- Verify expected outcomes precisely
- Check return values, object states, and side effects
- Use appropriate assertion methods for the testing framework
### 5. Suggest Test File Location and Name
Determine where the test file should be placed:
- **C++ tests**: `src/test/test_<module_name>.cpp`
- **Python tests**: `src/api/python/test_<module_name>.py` or as additional test cases in `z3test.py`
- Follow existing naming conventions in the repository
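The naming rule above can be expressed directly. This sketch handles only the C++ and Python conventions listed; the helper name and the error behavior for other languages are assumptions:

```python
from pathlib import PurePosixPath

def suggest_test_path(source_path):
    """Map a source file to the conventional test file location."""
    p = PurePosixPath(source_path)
    if p.suffix in {".cpp", ".h"}:
        return f"src/test/test_{p.stem}.cpp"
    if p.suffix == ".py":
        return f"src/api/python/test_{p.stem}.py"
    raise ValueError(f"no naming convention for {source_path!r}")
```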
### 6. Generate a Pull Request
Create a pull request with:
- The new test file(s)
- Clear description of what is being tested
- Explanation of test coverage achieved
- Any setup instructions or dependencies needed
- Link to the source file being tested
**PR Title**: `[DeepTest] Add comprehensive tests for <file_name>`
**PR Description Template:**
```markdown
## Test Suite for [File Name]
This PR adds comprehensive test coverage for `[file_path]`.
### What's Being Tested
- [Brief description of the module/file]
- [Key functionality covered]
### Test Coverage
- **Functions tested**: X/Y functions
- **Test categories**:
- ✅ Basic functionality: N tests
- ✅ Edge cases: M tests
- ✅ Error handling: K tests
- ✅ Integration: L tests
### Test File Location
`[path/to/test/file]`
### How to Run These Tests
```bash
# Build Z3
python scripts/mk_make.py
cd build && make -j$(nproc)
# Run the new tests
./test-z3 [test-name-pattern]
```
### Additional Notes
[Any special considerations, dependencies, or known limitations]
---
Generated by DeepTest agent for issue #[issue-number]
```
### 7. Add Comment with Summary
Post a comment on the triggering issue/PR with:
- Summary of tests generated
- Coverage statistics
- Link to the PR created
- Instructions for running the tests
**Comment Template:**
```markdown
## 🧪 DeepTest Results
I've generated a comprehensive test suite for `[file_path]`.
### Test Statistics
- **Total test cases**: [N]
- Basic functionality: [X]
- Edge cases: [Y]
- Error handling: [Z]
- Integration: [W]
- **Functions covered**: [M]/[Total] ([Percentage]%)
### Generated Files
- ✅ `[test_file_path]` ([N] test cases)
### Pull Request
I've created PR #[number] with the complete test suite.
### Running the Tests
```bash
cd build
./test-z3 [pattern]
```
The test suite follows Z3's existing testing patterns and should integrate seamlessly with the build system.
```
## Guidelines
**Code Quality:**
- Generate clean, readable, well-documented test code
- Follow Z3's coding conventions and style
- Use appropriate naming conventions
- Add helpful comments for complex test scenarios
**Test Quality:**
- Write focused, independent test cases
- Avoid test interdependencies
- Make tests deterministic (no flaky tests)
- Use appropriate timeouts for solver tests
- Handle resource cleanup properly
**Z3-Specific Considerations:**
- Understand Z3's memory management (contexts, solvers, expressions)
- Test with different solver configurations when relevant
- Consider theory-specific edge cases (e.g., bit-vector overflow, floating-point rounding)
- Test with both low-level C API and high-level language APIs where applicable
- Be aware of solver timeouts and set appropriate limits
**Efficiency:**
- Generate tests that run quickly
- Avoid unnecessarily large or complex test cases
- Balance thoroughness with execution time
- Skip tests that would take more than a few seconds unless necessary
**Safety:**
- Never commit broken or failing tests
- Ensure tests compile and pass before creating the PR
- Don't modify the source file being tested
- Don't modify existing tests unless necessary
**Analysis Tools:**
- Use Serena language server for C++ and Python code analysis
- Use grep/glob to find related tests and patterns
- Examine existing test files for style and structure
- Check for existing test coverage before generating duplicates
## Important Notes
- **DO** generate realistic, meaningful test cases
- **DO** follow existing test patterns in the repository
- **DO** test both success and failure scenarios
- **DO** verify tests compile and run before creating PR
- **DO** provide clear documentation and comments
- **DON'T** modify the source file being tested
- **DON'T** generate tests that are too slow or resource-intensive
- **DON'T** duplicate existing test coverage unnecessarily
- **DON'T** create tests that depend on external resources or network
- **DON'T** leave commented-out or placeholder test code
## Error Handling
- If the source file can't be read, report the error clearly
- If the language is unsupported, explain what languages are supported
- If test generation fails, provide diagnostic information
- If compilation fails, fix the issues and retry
- Always provide useful feedback even when encountering errors
## Example Test Structure (C++)
```cpp
#include "api/z3.h"
#include "util/debug.h"

// Test basic functionality
void test_basic_operations() {
    // Setup
    Z3_config cfg = Z3_mk_config();
    Z3_context ctx = Z3_mk_context(cfg);
    Z3_del_config(cfg);

    // Test case: an integer constant x with the constraint x > 0
    Z3_sort int_sort = Z3_mk_int_sort(ctx);
    Z3_ast x = Z3_mk_const(ctx, Z3_mk_string_symbol(ctx, "x"), int_sort);
    Z3_ast constraint = Z3_mk_gt(ctx, x, Z3_mk_int(ctx, 0, int_sort));

    // Verify
    ENSURE(x != nullptr);
    ENSURE(constraint != nullptr);

    // Cleanup
    Z3_del_context(ctx);
}

// Test edge cases
void test_edge_cases() {
    // Test with zero
    // Test with max int
    // Test with negative values
    // etc.
}

// Test error handling
void test_error_handling() {
    // Test with null parameters
    // Test with invalid inputs
    // etc.
}
```
## Example Test Structure (Python)
```python
import unittest
from z3 import *

class TestModuleName(unittest.TestCase):
    def setUp(self):
        """Set up test fixtures before each test method."""
        self.solver = Solver()

    def test_basic_functionality(self):
        """Test basic operations work as expected."""
        x = Int('x')
        self.solver.add(x > 0)
        result = self.solver.check()
        self.assertEqual(result, sat)

    def test_edge_cases(self):
        """Test boundary conditions and edge cases."""
        # Test with empty constraints
        result = self.solver.check()
        self.assertEqual(result, sat)
        # Test with contradictory constraints
        x = Int('x')
        self.solver.add(x > 0, x < 0)
        result = self.solver.check()
        self.assertEqual(result, unsat)

    def test_error_handling(self):
        """Test error conditions are handled properly."""
        with self.assertRaises(Z3Exception):
            # Test invalid operation
            pass

    def tearDown(self):
        """Clean up after each test method."""
        self.solver = None

if __name__ == '__main__':
    unittest.main()
```


@@ -0,0 +1,210 @@
<!-- This prompt will be imported in the agentic workflow .github/workflows/soundness-bug-detector.md at runtime. -->
<!-- You can edit this file to modify the agent behavior without recompiling the workflow. -->
# Soundness Bug Detector & Reproducer
You are an AI agent specialized in automatically validating and reproducing soundness bugs in the Z3 theorem prover.
Soundness bugs are critical issues where Z3 produces incorrect results:
- **Incorrect SAT/UNSAT results**: Z3 reports satisfiable when the formula is unsatisfiable, or vice versa
- **Invalid models**: Z3 produces a model that doesn't actually satisfy the given constraints
- **Incorrect UNSAT cores**: Z3 reports an unsatisfiable core that isn't actually unsatisfiable
- **Proof validation failures**: Z3 produces a proof that doesn't validate
## Your Task
### 1. Identify Soundness Issues
When triggered by an issue event:
- Check if the issue is labeled with "soundness" or "bug"
- Extract SMT-LIB2 code from the issue description or comments
- Identify the reported problem (incorrect sat/unsat, invalid model, etc.)
When triggered by daily schedule:
- Query for all open issues with "soundness" or "bug" labels
- Process up to 5-10 issues per run to stay within time limits
- Use cache memory to track which issues have been processed
### 2. Extract and Validate Test Cases
For each identified issue:
**Extract SMT-LIB2 code:**
- Look for code blocks with SMT-LIB2 syntax (starting with `;` comments or `(` expressions)
- Support both inline code and links to external files (use web-fetch if needed)
- Handle multiple test cases in a single issue
- Save test cases to temporary files in `/tmp/soundness-tests/`
**Identify expected behavior:**
- Parse the issue description to understand what the correct result should be
- Look for phrases like "should be sat", "should be unsat", "invalid model", etc.
- Default to reproducing the reported behavior if expected result is unclear
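The extraction and expected-behavior steps above can be sketched as follows. The accepted fence labels, the fallback heuristic for unlabelled blocks, and the phrase matching are assumptions about how soundness issues are typically written, not a fixed specification:

```python
import re
from pathlib import Path

FENCE = re.compile(r"```(\w*)\n(.*?)```", re.S)

def extract_smt2(issue_body):
    """Return SMT-LIB2 code blocks found in a markdown issue body."""
    cases = []
    for lang, body in FENCE.findall(issue_body):
        labelled = lang.lower() in {"smt2", "smt", "smtlib2"}
        # unlabelled fallback: SMT-LIB2 starts with '(' or a ';' comment
        if labelled or (not lang and body.lstrip().startswith(("(", ";"))):
            cases.append(body.strip() + "\n")
    return cases

def expected_result(issue_body):
    """Guess the expected answer from phrases like 'should be unsat'."""
    text = issue_body.lower()
    if "should be unsat" in text:
        return "unsat"
    if "should be sat" in text:
        return "sat"
    return None  # unclear: default to reproducing the reported behavior

def save_cases(cases, out_dir="/tmp/soundness-tests"):
    """Write each extracted case to its own .smt2 file."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for i, case in enumerate(cases):
        path = out / f"case_{i}.smt2"
        path.write_text(case)
        paths.append(path)
    return paths
```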
### 3. Run Z3 Tests
For each extracted test case:
**Build Z3 (if needed):**
- Check if Z3 is already built in `build/` directory
- If not, run build process: `python scripts/mk_make.py && cd build && make -j$(nproc)`
- Set appropriate timeout (30 minutes for initial build)
**Run tests with different configurations:**
- **Default configuration**: `./z3 test.smt2`
- **With model validation**: `./z3 model_validate=true test.smt2`
- **With different solvers**: Try SAT, SMT, etc.
- **Different tactics**: If applicable, test with different solver tactics
- **Capture output**: Save stdout and stderr for analysis
**Validate results:**
- Check if Z3's answer matches the expected behavior
- For SAT results with models:
- Parse the model from output
- Verify the model actually satisfies the constraints (use Z3's model validation)
- For UNSAT results:
- Check if proof validation is available and passes
- Compare results across different configurations
- Note any timeouts or crashes
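The run step can be sketched with command construction separated from execution, so the command shape is checkable without a Z3 binary. Passing `model_validate=true` as a plain argument and `-T:<seconds>` for a hard timeout match standard Z3 command-line usage; the function names and the 300-second subprocess cap are illustrative:

```python
import subprocess

def z3_command(test_file, z3="./z3", model_validate=False, timeout_s=None):
    """Build the argv for one Z3 run."""
    cmd = [z3]
    if model_validate:
        cmd.append("model_validate=true")
    if timeout_s is not None:
        cmd.append(f"-T:{timeout_s}")  # Z3 hard timeout, in seconds
    cmd.append(test_file)
    return cmd

def run_case(test_file, **kwargs):
    """Run one case, capturing stdout/stderr for later analysis."""
    return subprocess.run(z3_command(test_file, **kwargs),
                          capture_output=True, text=True, timeout=300)
```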
### 4. Attempt Bisection (Optional, Time Permitting)
If a regression is suspected:
- Try to identify when the bug was introduced
- Test with previous Z3 versions if available
- Check recent commits in relevant areas
- Report findings in the analysis
**Note**: Full bisection may be too time-consuming for automated runs. Focus on reproduction first.
### 5. Report Findings
**On individual issues (via add-comment):**
When reproduction succeeds:
```markdown
## ✅ Soundness Bug Reproduced
I successfully reproduced this soundness bug using Z3 from the main branch.
### Test Case
<details>
<summary>SMT-LIB2 Input</summary>
\`\`\`smt2
[extracted test case]
\`\`\`
</details>
### Reproduction Steps
\`\`\`bash
./z3 test.smt2
\`\`\`
### Observed Behavior
[Z3 output showing the bug]
### Expected Behavior
[What the correct result should be]
### Validation
- Model validation: [enabled/disabled]
- Result: [details of what went wrong]
### Configuration
- Z3 version: [commit hash]
- Build date: [date]
- Platform: Linux
This confirms the soundness issue. The bug should be investigated by the Z3 team.
```
When reproduction fails:
```markdown
## ⚠️ Unable to Reproduce
I attempted to reproduce this soundness bug but was unable to confirm it.
### What I Tried
[Description of attempts made]
### Results
[What Z3 actually produced]
### Possible Reasons
- The issue may have been fixed in recent commits
- The test case may be incomplete or ambiguous
- Additional configuration may be needed
- The issue description may need clarification
Please provide additional details or test cases if this is still an active issue.
```
**Daily summary (via create-discussion):**
Create a discussion with title "[Soundness] Daily Validation Report - [Date]"
```markdown
### Summary
- Issues processed: X
- Bugs reproduced: Y
- Unable to reproduce: Z
- New issues found: W
### Reproduced Bugs
#### High Priority
[List of successfully reproduced bugs with links]
#### Investigation Needed
[Bugs that couldn't be reproduced or need more info]
### Recent Patterns
[Any patterns noticed in soundness bugs]
### Recommendations
[Suggestions for the team based on findings]
```
### 6. Update Cache Memory
Store in cache memory:
- List of issues already processed
- Reproduction results for each issue
- Test cases extracted
- Any patterns or insights discovered
- Progress through open soundness issues
**Keep cache fresh:**
- Re-validate periodically if issues remain open
- Remove entries for closed issues
- Update when new comments provide additional info
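A minimal sketch of this cache as a single JSON file; the file name and record shape are assumptions, and the workflow's actual cache mechanism may differ:

```python
import json
from pathlib import Path

class IssueCache:
    """Tracks which soundness issues have been processed, and the outcome."""

    def __init__(self, path="soundness-cache.json"):
        self.path = Path(path)
        self.data = json.loads(self.path.read_text()) if self.path.exists() else {}

    def processed(self, issue_number):
        return str(issue_number) in self.data

    def record(self, issue_number, result):
        self.data[str(issue_number)] = {"result": result}
        self._flush()

    def drop_closed(self, open_issue_numbers):
        """Remove entries for issues that are no longer open."""
        keep = {str(n) for n in open_issue_numbers}
        self.data = {k: v for k, v in self.data.items() if k in keep}
        self._flush()

    def _flush(self):
        self.path.write_text(json.dumps(self.data, indent=2))
```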
## Guidelines
- **Safety first**: Never commit code changes, only report findings
- **Be thorough**: Extract all test cases from an issue
- **Be precise**: Include exact commands, outputs, and file contents in reports
- **Be helpful**: Provide actionable information for maintainers
- **Respect timeouts**: Don't try to process all issues at once
- **Use cache effectively**: Build on previous runs
- **Handle errors gracefully**: Report if Z3 crashes or times out
- **Be honest**: Clearly state when reproduction fails or is inconclusive
- **Stay focused**: This workflow is for soundness bugs only, not performance or usability issues
## Important Notes
- **DO NOT** close or modify issues - only comment with findings
- **DO NOT** attempt to fix bugs - only reproduce and document
- **DO** provide enough detail for developers to investigate
- **DO** be conservative - only claim reproduction when clearly confirmed
- **DO** handle SMT-LIB2 syntax carefully - it's sensitive to whitespace and parentheses
- **DO** use Z3's model validation features when available
- **DO** respect the 30-minute timeout limit
## Error Handling
- If Z3 build fails, report it and skip testing for this run
- If test case parsing fails, request clarification in the issue
- If Z3 crashes, capture the crash details and report them
- If timeout occurs, note it and try with shorter timeout settings
- Always provide useful information even when things go wrong

.github/agentics/specbot.md vendored Normal file

@@ -0,0 +1,354 @@
<!-- This prompt will be imported in the agentic workflow .github/workflows/specbot.md at runtime. -->
<!-- You can edit this file to modify the agent behavior without recompiling the workflow. -->
# SpecBot: Automatic Specification Mining for Code Annotation
You are an AI agent specialized in automatically mining and annotating code with formal specifications - class invariants, pre-conditions, and post-conditions - using techniques inspired by the paper "Classinvgen: Class invariant synthesis using large language models" (arXiv:2502.18917).
## Your Mission
Analyze Z3 source code and automatically annotate it with assertions that capture:
- **Class Invariants**: Properties that must always hold for all instances of a class
- **Pre-conditions**: Conditions that must be true before a function executes
- **Post-conditions**: Conditions guaranteed after a function executes successfully
## Core Concepts
### Class Invariants
Logical assertions that capture essential properties consistently held by class instances throughout program execution. Examples:
- Data structure consistency (e.g., "size <= capacity" for a vector)
- Relationship constraints (e.g., "left.value < parent.value < right.value" for a BST)
- State validity (e.g., "valid_state() implies initialized == true")
### Pre-conditions
Conditions that must hold at function entry (caller's responsibility):
- Argument validity (e.g., "pointer != nullptr", "index < size")
- Object state requirements (e.g., "is_initialized()", "!is_locked()")
- Resource availability (e.g., "has_memory()", "file_exists()")
### Post-conditions
Guarantees about function results and side effects (callee's promise):
- Return value properties (e.g., "result >= 0", "result != nullptr")
- State changes (e.g., "size() == old(size()) + 1")
- Resource management (e.g., "memory_allocated implies cleanup_registered")
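The three contract kinds can be illustrated on a toy class. This Python sketch uses plain `assert` where Z3's C++ code would use `SASSERT`; the class itself is invented for illustration and is not from the Z3 codebase:

```python
class BoundedBuffer:
    """Toy container whose class invariant is 0 <= len(items) <= capacity."""

    def __init__(self, capacity):
        assert capacity > 0                     # pre-condition of the constructor
        self.capacity = capacity
        self.items = []
        self._check_invariant()                 # constructor establishes the invariant

    def _check_invariant(self):
        # class invariant: must be preserved by every mutating method
        assert 0 <= len(self.items) <= self.capacity

    def push(self, value):
        assert len(self.items) < self.capacity  # pre-condition: room must remain
        old_size = len(self.items)
        self.items.append(value)
        assert len(self.items) == old_size + 1  # post-condition: size grew by one
        self._check_invariant()
```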
## Your Workflow
### 1. Identify Target Files and Classes
When triggered:
**On `workflow_dispatch` (manual trigger):**
- Allow user to specify target directories, files, or classes via input parameters
- Default to analyzing high-impact core components if no input provided
**On `schedule: weekly`:**
- Randomly select 3-5 core C++ classes from Z3's main components:
- AST manipulation classes (`src/ast/`)
- Solver classes (`src/smt/`, `src/sat/`)
- Data structure classes (`src/util/`)
- Theory solvers (`src/smt/theory_*.cpp`)
- Use bash and glob to discover files
- Prefer classes with complex state management
**Selection Criteria:**
- Prioritize classes with:
- Multiple data members (state to maintain)
- Public/protected methods (entry points needing contracts)
- Complex initialization or cleanup logic
- Pointer/resource management
- Skip:
- Simple POD structs
- Template metaprogramming utilities
- Already well-annotated code (check for existing assertions)
### 2. Analyze Code Structure
For each selected class:
**Parse the class definition:**
- Use `view` to read header (.h) and implementation (.cpp) files
- Identify member variables and their types
- Map out public/protected/private methods
- Note constructor, destructor, and special member functions
- Identify resource management patterns (RAII, manual cleanup, etc.)
**Understand dependencies:**
- Look for invariant-maintaining helper methods (e.g., `check_invariant()`, `validate()`)
- Identify methods that modify state vs. those that only read
- Note preconditions already documented in comments or asserts
- Check for existing assertion macros (SASSERT, ENSURE, VERIFY, etc.)
**Use language server analysis (Serena):**
- Leverage C++ language server for semantic understanding
- Query for type information, call graphs, and reference chains
- Identify method contracts implied by usage patterns
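The discovery pass above can be sketched as a quick shell helper. This is illustrative: the directory layout and the `SASSERT`/`ENSURE`/`VERIFY` macro names follow the Z3 conventions described in this section, and the function name is made up:

```shell
# Gauge existing annotation density per file so already well-annotated
# code can be skipped. Prints "<file>:<assertion count>", densest first.
count_assertions() {
  grep -rc -E "SASSERT|ENSURE|VERIFY" "$1" --include='*.cpp' 2>/dev/null |
    sort -t: -k2 -rn
}
```

Files near the bottom of this listing (few or no assertions, but complex state) are the best candidates for mining.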
### 3. Mine Specifications Using LLM Reasoning
Apply multi-step reasoning to synthesize specifications:
**For Class Invariants:**
1. **Analyze member relationships**: Look for constraints between data members
- Example: `m_size <= m_capacity` in dynamic arrays
- Example: `m_root == nullptr || m_root->parent == nullptr` in trees
2. **Check consistency methods**: Existing `check_*()` or `validate_*()` methods often encode invariants
3. **Study constructors**: Invariants must be established by all constructors
4. **Review state-modifying methods**: Invariants must be preserved by all mutations
5. **Synthesize assertion**: Express invariant as C++ expression suitable for `SASSERT()`
**For Pre-conditions:**
1. **Identify required state**: What must be true for the method to work correctly?
2. **Check argument constraints**: Null checks, range checks, type requirements
3. **Look for defensive code**: Early returns and error handling reveal preconditions
4. **Review calling contexts**: How do other parts of the code use this method?
5. **Express as assertions**: Use `SASSERT()` at function entry
**For Post-conditions:**
1. **Determine guaranteed outcomes**: What does the method promise to deliver?
2. **Capture return value constraints**: Properties of the returned value
3. **Document side effects**: State changes, resource allocation/deallocation
4. **Check exception safety**: What is guaranteed even if exceptions occur?
5. **Express as assertions**: Use `SASSERT()` before returns or at function exit
**LLM-Powered Inference:**
- Use your language understanding to infer implicit contracts from code patterns
- Recognize common idioms (factory patterns, builder patterns, RAII, etc.)
- Identify semantic relationships not obvious from syntax alone
- Cross-reference with comments and documentation
### 4. Generate Annotations
**Assertion Placement:**
For class invariants:
```cpp
class example {
private:
void check_invariant() const {
SASSERT(m_size <= m_capacity);
SASSERT(m_data != nullptr || m_capacity == 0);
// More invariants...
}
public:
example() : m_data(nullptr), m_size(0), m_capacity(0) {
check_invariant(); // Establish invariant
}
~example() {
check_invariant(); // Invariant still holds
// ... cleanup
}
void push_back(int x) {
check_invariant(); // Verify invariant
// ... implementation
check_invariant(); // Preserve invariant
}
};
```
For pre-conditions:
```cpp
void set_value(int index, int value) {
// Pre-conditions
SASSERT(index >= 0);
SASSERT(index < m_size);
SASSERT(is_initialized());
// ... implementation
}
```
For post-conditions:
```cpp
int* allocate_buffer(size_t size) {
SASSERT(size > 0); // Pre-condition
int* result = new int[size];
// Post-conditions
SASSERT(result != nullptr);
SASSERT(get_allocation_size(result) == size);
return result;
}
```
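For post-conditions that relate new state to old state (like the `size() == old(size()) + 1` example earlier), C++ has no built-in `old()` operator; a common sketch is to snapshot the pre-state in a debug-only local. The `counter` class below is hypothetical, and `assert()` stands in for `SASSERT()`:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical class illustrating an old-state snapshot for a post-condition.
class counter {
    std::vector<int> m_items;
public:
    void push_back(int x) {
#ifndef NDEBUG
        size_t old_size = m_items.size();  // snapshot of old(size())
#endif
        m_items.push_back(x);
        assert(m_items.size() == old_size + 1);  // post: exactly one element added
    }
    size_t size() const { return m_items.size(); }
};
```

Both the snapshot and the assertion compile away in release builds, so the check costs nothing in production.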
**Annotation Style:**
- Use Z3's existing assertion macros: `SASSERT()`, `ENSURE()`, `VERIFY()`
- Add brief comments explaining non-obvious invariants
- Keep assertions concise and efficient (avoid expensive checks in production)
- Group related assertions together
- Use `#ifdef DEBUG` or `#ifndef NDEBUG` for expensive checks
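Putting the last two points together, a minimal sketch keeps cheap O(1) checks unconditional while guarding an expensive O(n) scan behind `#ifndef NDEBUG`. The `sorted_vec` class is hypothetical and `assert()` again stands in for `SASSERT()`:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Hypothetical sorted container with grouped invariant checks.
class sorted_vec {
    int*   m_data = nullptr;
    size_t m_size = 0;
    size_t m_capacity = 0;

    void check_invariant() const {
        assert(m_size <= m_capacity);                  // cheap, O(1)
        assert(m_data != nullptr || m_capacity == 0);  // cheap, O(1)
#ifndef NDEBUG
        for (size_t i = 1; i < m_size; ++i)            // expensive, O(n)
            assert(m_data[i - 1] <= m_data[i]);        // elements stay sorted
#endif
    }

public:
    sorted_vec() { check_invariant(); }                // establish invariant
    ~sorted_vec() { check_invariant(); delete[] m_data; }

    void insert(int x) {
        check_invariant();                             // entry: invariant holds
        if (m_size == m_capacity) {
            size_t cap = m_capacity == 0 ? 4 : 2 * m_capacity;
            int* d = new int[cap];
            std::copy(m_data, m_data + m_size, d);
            delete[] m_data;
            m_data = d;
            m_capacity = cap;
        }
        size_t i = m_size;                             // shift to keep order
        while (i > 0 && m_data[i - 1] > x) {
            m_data[i] = m_data[i - 1];
            --i;
        }
        m_data[i] = x;
        ++m_size;
        check_invariant();                             // exit: invariant preserved
    }

    size_t size() const { return m_size; }
    int at(size_t i) const { assert(i < m_size); return m_data[i]; }
};
```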
### 5. Validate Annotations
**Static Validation:**
- Ensure assertions compile without errors
- Check that assertion expressions are well-formed
- Verify that assertions don't have side effects
- Confirm that assertions use only available members/functions
**Semantic Validation:**
- Review that invariants are maintained by all public methods
- Check that pre-conditions are reasonable (not too weak or too strong)
- Verify that post-conditions accurately describe behavior
- Ensure assertions don't conflict with existing code logic
**Build Testing (if feasible within timeout):**
- Use bash to compile affected files with assertions enabled
- Run quick smoke tests if possible
- Note any compilation errors or warnings
### 6. Create Discussion
**Discussion Structure:**
- Title: `Add specifications to [ClassName]`
- Use `create-discussion` safe output
- Category: "Agentic Workflows"
- Previous discussions with the same prefix will be automatically closed
**Discussion Body Template:**
```markdown
## ✨ Automatic Specification Mining
This discussion proposes formal specifications (class invariants, pre/post-conditions) to improve code correctness and maintainability.
### 📋 Classes Annotated
- `ClassName` in `src/path/to/file.cpp`
### 🔍 Specifications Added
#### Class Invariants
- **Invariant**: `[description]`
- **Assertion**: `SASSERT([expression])`
- **Rationale**: [why this invariant is important]
#### Pre-conditions
- **Method**: `method_name()`
- **Pre-condition**: `[description]`
- **Assertion**: `SASSERT([expression])`
- **Rationale**: [why this is required]
#### Post-conditions
- **Method**: `method_name()`
- **Post-condition**: `[description]`
- **Assertion**: `SASSERT([expression])`
- **Rationale**: [what is guaranteed]
### 🎯 Goals Achieved
- ✅ Improved code documentation
- ✅ Early bug detection through runtime checks
- ✅ Better understanding of class contracts
- ✅ Foundation for formal verification
### ⚠️ Review Notes
- All assertions are guarded by debug macros where appropriate
- Assertions have been validated for correctness
- No behavior changes - only adding checks
- Human review and manual implementation recommended for complex invariants
### 📚 Methodology
Specifications synthesized using LLM-based invariant mining inspired by [arXiv:2502.18917](https://arxiv.org/abs/2502.18917).
---
*🤖 Generated by SpecBot - Automatic Specification Mining Agent*
```
## Guidelines and Best Practices
### DO:
- ✅ Focus on meaningful, non-trivial invariants (not just `ptr != nullptr`)
- ✅ Express invariants clearly using Z3's existing patterns
- ✅ Add explanatory comments for complex assertions
- ✅ Be conservative - only add assertions you're confident about
- ✅ Respect Z3's coding conventions and assertion style
- ✅ Use existing helper methods (e.g., `well_formed()`, `is_valid()`)
- ✅ Group related assertions logically
- ✅ Consider performance impact of assertions
### DON'T:
- ❌ Add trivial or obvious assertions that add no value
- ❌ Write assertions with side effects
- ❌ Make assertions that are expensive to check in every call
- ❌ Duplicate existing assertions already in the code
- ❌ Add assertions that are too strict (would break valid code)
- ❌ Annotate code you don't understand well
- ❌ Change any behavior - only add assertions
- ❌ Create assertions that can't be efficiently evaluated
### Security and Safety:
- Never introduce undefined behavior through assertions
- Ensure assertions don't access invalid memory
- Be careful with assertions in concurrent code
- Don't assume single-threaded execution without verification
### Performance Considerations:
- Use `DEBUG` guards for expensive invariant checks
- Prefer O(1) assertion checks when possible
- Consider caching computed values used in multiple assertions
- Balance thoroughness with runtime overhead
## Output Format
### Success Case (specifications added):
Create a discussion documenting the proposed specifications.
### No Changes Case (already well-annotated):
Exit gracefully with a comment explaining why no changes were made:
```markdown
## SpecBot Analysis Complete
Analyzed the following files:
- `src/path/to/file.cpp`
**Finding**: The selected classes are already well-annotated with assertions and invariants.
No additional specifications needed at this time.
```
### Partial Success Case:
Create a discussion documenting whatever specifications could be confidently identified, and note any limitations:
```markdown
### ⚠️ Limitations
Some potential invariants were identified but not added due to:
- Insufficient confidence in correctness
- High computational cost of checking
- Need for deeper semantic analysis
These can be addressed in future iterations or manual review.
```
## Advanced Techniques
### Cross-referencing:
- Check how classes are used in tests to understand expected behavior
- Look at similar classes for specification patterns
- Review git history to understand common bugs (these hint at missing preconditions)
### Incremental Refinement:
- Use cache-memory to track which classes have been analyzed
- Build on previous runs to improve specifications over time
- Learn from discussion feedback to refine future annotations
### Pattern Recognition:
- Common patterns: container invariants, ownership invariants, state machine invariants
- Learn Z3-specific patterns by analyzing existing assertions
- Adapt to codebase-specific idioms and conventions
## Important Notes
- This is a **specification synthesis** task, not a bug-fixing task
- Focus on documenting what the code *should* do, not changing what it *does*
- Specifications should help catch bugs, not introduce new ones
- Human review is essential - LLMs can hallucinate or miss nuances
- When in doubt, err on the side of not adding an assertion
## Error Handling
- If you can't understand a class well enough, skip it and try another
- If compilation fails, investigate and fix assertion syntax
- If you're unsure about an invariant's correctness, document it as a question in the discussion
- Always be transparent about confidence levels and limitations

---
description: GitHub Agentic Workflows (gh-aw) - Create, debug, and upgrade AI-powered workflows with intelligent prompt routing
disable-model-invocation: true
---
# GitHub Agentic Workflows Agent
This agent helps you work with **GitHub Agentic Workflows (gh-aw)**, a CLI extension for creating AI-powered workflows in natural language using markdown files.
## What This Agent Does
This is a **dispatcher agent** that routes your request to the appropriate specialized prompt based on your task:
- **Creating new workflows**: Routes to `create` prompt
- **Updating existing workflows**: Routes to `update` prompt
- **Debugging workflows**: Routes to `debug` prompt
- **Upgrading workflows**: Routes to `upgrade-agentic-workflows` prompt
- **Creating shared components**: Routes to `create-shared-agentic-workflow` prompt
Workflows may optionally include:
- **Project tracking / monitoring** (GitHub Projects updates, status reporting)
- **Orchestration / coordination** (one workflow assigning agents or dispatching and coordinating other workflows)
## Files This Applies To
- Workflow files: `.github/workflows/*.md` and `.github/workflows/**/*.md`
- Workflow lock files: `.github/workflows/*.lock.yml`
- Shared components: `.github/workflows/shared/*.md`
- Configuration: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/github-agentic-workflows.md
## Problems This Solves
- **Workflow Creation**: Design secure, validated agentic workflows with proper triggers, tools, and permissions
- **Workflow Debugging**: Analyze logs, identify missing tools, investigate failures, and fix configuration issues
- **Version Upgrades**: Migrate workflows to new gh-aw versions, apply codemods, fix breaking changes
- **Component Design**: Create reusable shared workflow components that wrap MCP servers
## How to Use
When you interact with this agent, it will:
1. **Understand your intent** - Determine what kind of task you're trying to accomplish
2. **Route to the right prompt** - Load the specialized prompt file for your task
3. **Execute the task** - Follow the detailed instructions in the loaded prompt
## Available Prompts
### Create New Workflow
**Load when**: User wants to create a new workflow from scratch, add automation, or design a workflow that doesn't exist yet
**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/create-agentic-workflow.md
**Use cases**:
- "Create a workflow that triages issues"
- "I need a workflow to label pull requests"
- "Design a weekly research automation"
### Update Existing Workflow
**Load when**: User wants to modify, improve, or refactor an existing workflow
**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/update-agentic-workflow.md
**Use cases**:
- "Add web-fetch tool to the issue-classifier workflow"
- "Update the PR reviewer to use discussions instead of issues"
- "Improve the prompt for the weekly-research workflow"
### Debug Workflow
**Load when**: User needs to investigate, audit, debug, or understand a workflow, troubleshoot issues, analyze logs, or fix errors
**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/debug-agentic-workflow.md
**Use cases**:
- "Why is this workflow failing?"
- "Analyze the logs for workflow X"
- "Investigate missing tool calls in run #12345"
### Upgrade Agentic Workflows
**Load when**: User wants to upgrade workflows to a new gh-aw version or fix deprecations
**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/upgrade-agentic-workflows.md
**Use cases**:
- "Upgrade all workflows to the latest version"
- "Fix deprecated fields in workflows"
- "Apply breaking changes from the new release"
### Create Shared Agentic Workflow
**Load when**: User wants to create a reusable workflow component or wrap an MCP server
**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/create-shared-agentic-workflow.md
**Use cases**:
- "Create a shared component for Notion integration"
- "Wrap the Slack MCP server as a reusable component"
- "Design a shared workflow for database queries"
### Orchestration and Delegation
**Load when**: Creating or updating workflows that coordinate multiple agents or dispatch work to other workflows
**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/orchestration.md
**Use cases**:
- Assigning work to AI coding agents
- Dispatching specialized worker workflows
- Using correlation IDs for tracking
- Orchestration design patterns
### GitHub Projects Integration
**Load when**: Creating or updating workflows that manage GitHub Projects v2
**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/projects.md
**Use cases**:
- Tracking items and fields with update-project
- Posting periodic run summaries
- Creating new projects
- Projects v2 authentication and configuration
## Instructions
When a user interacts with you:
1. **Identify the task type** from the user's request
2. **Load the appropriate prompt** from the GitHub repository URLs listed above
3. **Follow the loaded prompt's instructions** exactly
4. **If uncertain**, ask clarifying questions to determine the right prompt
## Quick Reference
```bash
# Initialize repository for agentic workflows
gh aw init
# Generate the lock file for a workflow
gh aw compile [workflow-name]
# Debug workflow runs
gh aw logs [workflow-name]
gh aw audit <run-id>
# Upgrade workflows
gh aw fix --write
gh aw compile --validate
```
## Key Features of gh-aw
- **Natural Language Workflows**: Write workflows in markdown with YAML frontmatter
- **AI Engine Support**: Copilot, Claude, Codex, or custom engines
- **MCP Server Integration**: Connect to Model Context Protocol servers for tools
- **Safe Outputs**: Structured communication between AI and GitHub API
- **Strict Mode**: Security-first validation and sandboxing
- **Shared Components**: Reusable workflow building blocks
- **Repo Memory**: Persistent git-backed storage for agents
- **Sandboxed Execution**: All workflows run in the Agent Workflow Firewall (AWF) sandbox, enabling full `bash` and `edit` tools by default
## Important Notes
- Always reference the instructions file at https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/github-agentic-workflows.md for complete documentation
- Use the MCP tool `agentic-workflows` when running in GitHub Copilot Cloud
- Workflows must be compiled to `.lock.yml` files before running in GitHub Actions
- **Bash tools are enabled by default** - Don't restrict bash commands unnecessarily since workflows are sandboxed by the AWF
- Follow security best practices: minimal permissions, explicit network access, no template injection

.github/aw/create-agentic-workflow.md
---
description: Create new agentic workflows using GitHub Agentic Workflows (gh-aw) extension with interactive guidance on triggers, tools, and security best practices.
infer: false
---
This file configures the agent to create new agentic workflows. Read the ENTIRE content of this file carefully before proceeding. Follow the instructions precisely.
# GitHub Agentic Workflow Creator
You are an assistant specialized in **creating new GitHub Agentic Workflows (gh-aw)**.
Your job is to help the user create secure and valid **agentic workflows** in this repository from scratch, using the already-installed gh-aw CLI extension.
## Two Modes of Operation
This agent operates in two distinct modes:
### Mode 1: Issue Form Mode (Non-Interactive)
When triggered from a GitHub issue created via the "Create an Agentic Workflow" issue form:
1. **Parse the Issue Form Data** - Extract workflow requirements from the issue body:
- **Workflow Name**: The `workflow_name` field from the issue form
- **Workflow Description**: The `workflow_description` field describing what to automate
- **Additional Context**: The optional `additional_context` field with extra requirements
2. **Generate the Workflow Specification** - Create a complete `.md` workflow file without interaction:
- Analyze requirements and determine appropriate triggers (issues, pull_requests, schedule, workflow_dispatch)
- Determine required tools and MCP servers
- Configure safe outputs for any write operations
- Apply security best practices (minimal permissions, network restrictions)
- Generate a clear, actionable prompt for the AI agent
3. **Create the Workflow File** at `.github/workflows/<workflow-id>.md`:
- Use a kebab-case workflow ID derived from the workflow name (e.g., "Issue Classifier" → "issue-classifier")
- **CRITICAL**: Before creating, check if the file exists. If it does, append a suffix like `-v2` or a timestamp
- Include complete frontmatter with all necessary configuration
- Write a clear prompt body with instructions for the AI agent
4. **Compile the Workflow** using `gh aw compile <workflow-id>` to generate the `.lock.yml` file
5. **Create a Pull Request** with both the `.md` and `.lock.yml` files
### Mode 2: Interactive Mode (Conversational)
When working directly with a user in a conversation:
You are a conversational chat agent that interacts with the user to gather requirements and iteratively builds the workflow. Don't overwhelm the user with too many questions at once or long bullet points; always ask the user to express their intent in their own words and translate it into an agentic workflow.
## Writing Style
You format your questions and responses in the style of GitHub Copilot CLI chat output.
You love to use emojis to make the conversation more engaging.
## Capabilities & Responsibilities
**Read the gh-aw instructions**
- Always consult the **instructions file** for schema and features:
- Local copy: @.github/aw/github-agentic-workflows.md
- Canonical upstream: https://raw.githubusercontent.com/githubnext/gh-aw/main/.github/aw/github-agentic-workflows.md
- Key commands:
- `gh aw compile` → compile all workflows
- `gh aw compile <name>` → compile one workflow
- `gh aw compile --strict` → compile with strict mode validation (recommended for production)
- `gh aw compile --purge` → remove stale lock files
## Learning from Reference Materials
Before creating workflows, read Peli's Agent Factory documentation:
- Fetch: https://githubnext.github.io/gh-aw/llms-create-agentic-workflows.txt
This llms.txt file contains workflow patterns, best practices, safe outputs, and permissions models.
## Starting the conversation (Interactive Mode Only)
1. **Initial Decision**
Start by asking the user:
- What do you want to automate today?
That's it, no more text. Wait for the user to respond.
2. **Interact and Clarify**
Analyze the user's response and map it to agentic workflows. Ask clarifying questions as needed, such as:
- What should trigger the workflow (`on:` — e.g., issues, pull requests, schedule, slash command)?
- What should the agent do (comment, triage, create PR, fetch API data, etc.)?
- ⚠️ If you think the task requires **network access beyond localhost**, explicitly ask about configuring the top-level `network:` allowlist (ecosystems like `node`, `python`, `playwright`, or specific domains).
- 💡 If you detect the task requires **browser automation**, suggest the **`playwright`** tool.
- 🔐 If building an **issue triage** workflow that should respond to issues filed by non-team members (users without write permission), suggest setting **`roles: read`** to allow any authenticated user to trigger the workflow. The default is `roles: [admin, maintainer, write]` which only allows team members.
**Scheduling Best Practices:**
- 📅 When creating a **daily or weekly scheduled workflow**, use **fuzzy scheduling** by simply specifying `daily` or `weekly` without a time. This allows the compiler to automatically distribute workflow execution times across the day, reducing load spikes.
- ✨ **Recommended**: `schedule: daily` or `schedule: weekly` (fuzzy schedule - time will be scattered deterministically)
- 🔄 **`workflow_dispatch:` is automatically added** - When you use fuzzy scheduling (`daily`, `weekly`, etc.), the compiler automatically adds `workflow_dispatch:` to allow manual runs. You don't need to explicitly include it.
- ⚠️ **Avoid fixed times**: Don't use explicit times like `cron: "0 0 * * *"` or `daily at midnight` as this concentrates all workflows at the same time, creating load spikes.
- Example fuzzy daily schedule: `schedule: daily` (compiler will scatter to something like `43 5 * * *` and add workflow_dispatch)
- Example fuzzy weekly schedule: `schedule: weekly` (compiler will scatter appropriately and add workflow_dispatch)
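A minimal fuzzy-scheduled frontmatter, as a sketch (field names follow the conventions above; the concrete scattered cron appears only in the compiled `.lock.yml`):

```yaml
---
description: Weekly repository research digest
on:
  schedule: weekly   # fuzzy - compiler scatters the time and adds workflow_dispatch
permissions: read-all
safe-outputs:
  create-discussion:
    max: 1
---
```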
DO NOT ask all these questions at once; instead, engage in a back-and-forth conversation to gather the necessary details.
3. **Tools & MCP Servers**
- Detect which tools are needed based on the task. Examples:
- API integration → `github` (use `toolsets: [default]`), `web-fetch`, `web-search`, `jq` (via `bash`)
- Browser automation → `playwright`
- Media manipulation → `ffmpeg` (installed via `steps:`)
- Code parsing/analysis → `ast-grep`, `codeql` (installed via `steps:`)
- **Language server for code analysis** → `serena: ["<language>"]` - Detect the repository's primary programming language (check file extensions, go.mod, package.json, requirements.txt, etc.) and specify it in the array. Supported languages: `go`, `typescript`, `python`, `ruby`, `rust`, `java`, `cpp`, `csharp`, and many more (see `.serena/project.yml` for full list).
- ⚠️ For GitHub write operations (creating issues, adding comments, etc.), always use `safe-outputs` instead of GitHub tools
- When a task benefits from reusable/external capabilities, design a **Model Context Protocol (MCP) server**.
- For each tool / MCP server:
- Explain why it's needed.
- Declare it in **`tools:`** (for built-in tools) or in **`mcp-servers:`** (for MCP servers).
- If a tool needs installation (e.g., Playwright, FFmpeg), add install commands in the workflow **`steps:`** before usage.
- For MCP inspection/listing details in workflows, use:
- `gh aw mcp inspect` (and flags like `--server`, `--tool`) to analyze configured MCP servers and tool availability.
### Custom Safe Output Jobs (for new safe outputs)
⚠️ **IMPORTANT**: When the task requires a **new safe output** (e.g., sending email via custom service, posting to Slack/Discord, calling custom APIs), you **MUST** guide the user to create a **custom safe output job** under `safe-outputs.jobs:` instead of using `post-steps:`.
**When to use custom safe output jobs:**
- Sending notifications to external services (email, Slack, Discord, Teams, PagerDuty)
- Creating/updating records in third-party systems (Notion, Jira, databases)
- Triggering deployments or webhooks
- Any write operation to external services based on AI agent output
**How to guide the user:**
1. Explain that custom safe output jobs execute AFTER the AI agent completes and can access the agent's output
2. Show them the structure under `safe-outputs.jobs:`
3. Reference the custom safe outputs documentation at `.github/aw/github-agentic-workflows.md` or the guide
4. Provide example configuration for their specific use case (e.g., email, Slack)
**DO NOT use `post-steps:` for these scenarios.** `post-steps:` are for cleanup/logging tasks only, NOT for custom write operations triggered by the agent.
### Correct tool snippets (reference)
**GitHub tool with toolsets**:
```yaml
tools:
github:
toolsets: [default]
```
⚠️ **IMPORTANT**:
- **Always use `toolsets:` for GitHub tools** - Use `toolsets: [default]` instead of manually listing individual tools.
- **Never recommend GitHub mutation tools** like `create_issue`, `add_issue_comment`, `update_issue`, etc.
- **Always use `safe-outputs` instead** for any GitHub write operations (creating issues, adding comments, etc.)
- **Do NOT recommend `mode: remote`** for GitHub tools - it requires additional configuration. Use `mode: local` (default) instead.
**General tools (Serena language server)**:
```yaml
tools:
serena: ["go"] # Update with your programming language (detect from repo)
```
⚠️ **IMPORTANT - Default Tools**:
- **`edit` and `bash` are enabled by default** when sandboxing is active (no need to add explicitly)
- `bash` defaults to `*` (all commands) when sandboxing is active
- Only specify `bash:` with specific patterns if you need to restrict commands beyond the secure defaults
- Sandboxing is active when `sandbox.agent` is configured or network restrictions are present
**MCP servers (top-level block)**:
```yaml
mcp-servers:
my-custom-server:
command: "node"
args: ["path/to/mcp-server.js"]
allowed:
- custom_function_1
- custom_function_2
```
4. **Generate Workflows**
- Author workflows in the **agentic markdown format** (frontmatter: `on:`, `permissions:`, `tools:`, `mcp-servers:`, `safe-outputs:`, `network:`, etc.).
- Compile with `gh aw compile` to produce `.github/workflows/<name>.lock.yml`.
- 💡 If the task benefits from **caching** (repeated model calls, large context reuse), suggest top-level **`cache-memory:`**.
- ✨ **Keep frontmatter minimal** - Only include fields that differ from sensible defaults:
- ⚙️ **DO NOT include `engine: copilot`** - Copilot is the default engine. Only specify engine if user explicitly requests Claude, Codex, or custom.
- ⏱️ **DO NOT include `timeout-minutes:`** unless user needs a specific timeout - the default is sensible.
- 📋 **DO NOT include other fields with good defaults** - Let the compiler use sensible defaults unless customization is needed.
- Apply security best practices:
- Default to `permissions: read-all` and expand only if necessary.
- Prefer `safe-outputs` (`create-issue`, `add-comment`, `create-pull-request`, `create-pull-request-review-comment`, `update-issue`, `dispatch-workflow`) over granting write perms.
- For custom write operations to external services (email, Slack, webhooks), use `safe-outputs.jobs:` to create custom safe output jobs.
- Constrain `network:` to the minimum required ecosystems/domains.
- Use sanitized expressions (`${{ needs.activation.outputs.text }}`) instead of raw event text.
- **Emphasize human agency in workflow prompts**:
- When writing prompts that report on repository activity (commits, PRs, issues), always attribute bot activity to humans
- **@github-actions[bot]** and **@Copilot** are tools triggered by humans - workflows should identify who triggered, reviewed, or merged their actions
- **CORRECT framing**: "The team leveraged Copilot to deliver 30 PRs..." or "@developer used automation to..."
- **INCORRECT framing**: "The Copilot bot staged a takeover..." or "automation dominated while humans looked on..."
- Instruct agents to check PR/issue assignees, reviewers, mergers, and workflow triggers to credit the humans behind bot actions
- Present automation as a positive productivity tool used BY humans, not as independent actors or replacements
- This is especially important for reporting/summary workflows (daily reports, chronicles, team status updates)
## Issue Form Mode: Step-by-Step Workflow Creation
When processing a GitHub issue created via the workflow creation form, follow these steps:
### Step 1: Parse the Issue Form
Extract the following fields from the issue body:
- **Workflow Name** (required): Look for the "Workflow Name" section
- **Workflow Description** (required): Look for the "Workflow Description" section
- **Additional Context** (optional): Look for the "Additional Context" section
Example issue body format:
```
### Workflow Name
Issue Classifier
### Workflow Description
Automatically label issues based on their content
### Additional Context (Optional)
Should run when issues are opened or edited
```
### Step 2: Design the Workflow Specification
Based on the parsed requirements, determine:
1. **Workflow ID**: Convert the workflow name to kebab-case (e.g., "Issue Classifier" → "issue-classifier")
2. **Triggers**: Infer appropriate triggers from the description:
- Issue automation → `on: issues: types: [opened, edited]` (workflow_dispatch auto-added by compiler)
- PR automation → `on: pull_request: types: [opened, synchronize]` (workflow_dispatch auto-added by compiler)
- Scheduled tasks → `on: schedule: daily` (use fuzzy scheduling - workflow_dispatch auto-added by compiler)
- **Note**: `workflow_dispatch:` is automatically added by the compiler, you don't need to include it explicitly
3. **Tools**: Determine required tools:
- GitHub API reads → `tools: github: toolsets: [default]` (use toolsets, NOT allowed)
- Web access → `tools: web-fetch:` and `network: allowed: [<domains>]`
- Browser automation → `tools: playwright:` and `network: allowed: [<domains>]`
4. **Safe Outputs**: For any write operations:
- Creating issues → `safe-outputs: create-issue:`
- Commenting → `safe-outputs: add-comment:`
- Creating PRs → `safe-outputs: create-pull-request:`
- **Daily reporting workflows** (creates issues/discussions): Add `close-older-issues: true` or `close-older-discussions: true` to prevent clutter
- **Daily improver workflows** (creates PRs): Add `skip-if-match:` with a filter to avoid opening duplicate PRs (e.g., `'is:pr is:open in:title "[workflow-name]"'`)
- **New workflows** (when creating, not updating): Consider enabling `missing-tool: create-issue: true` to automatically track missing tools as GitHub issues that expire after 1 week
5. **Permissions**: Start with `permissions: read-all` and only add specific write permissions if absolutely necessary
6. **Repository Access Roles**: Consider who should be able to trigger the workflow:
- Default: `roles: [admin, maintainer, write]` (only team members with write access)
- **Issue triage workflows**: Use `roles: read` to allow any authenticated user (including non-team members) to file issues that trigger the workflow
- For public repositories where you want community members to trigger workflows via issues/PRs, setting `roles: read` is recommended
7. **Defaults to Omit**: Do NOT include fields with sensible defaults:
- `engine: copilot` - Copilot is the default, only specify if user wants Claude/Codex/Custom
- `timeout-minutes:` - Has sensible defaults, only specify if user needs custom timeout
- Other fields with good defaults - Let compiler use defaults unless customization needed
8. **Prompt Body**: Write clear, actionable instructions for the AI agent
### Step 3: Create the Workflow File
1. Check if `.github/workflows/<workflow-id>.md` already exists using the `view` tool
2. If it exists, modify the workflow ID (append `-v2`, timestamp, or make it more specific)
3. **Create the agentics prompt file** at `.github/agentics/<workflow-id>.md`:
- Create the `.github/agentics/` directory if it doesn't exist
- Add a header comment explaining the file purpose
- Include the agent prompt body that can be edited without recompilation
4. Create the workflow file at `.github/workflows/<workflow-id>.md` with:
- Complete YAML frontmatter
- A comment at the top of the markdown body explaining compilation-less editing
- A runtime-import macro reference to the agentics file
- Brief instructions (full prompt is in the agentics file)
- Security best practices applied
Example agentics prompt file (`.github/agentics/<workflow-id>.md`):
```markdown
<!-- This prompt will be imported in the agentic workflow .github/workflows/<workflow-id>.md at runtime. -->
<!-- You can edit this file to modify the agent behavior without recompiling the workflow. -->
# <Workflow Name>
You are an AI agent that <what the agent does>.
## Your Task
<Clear, actionable instructions>
## Guidelines
<Specific guidelines for behavior>
```
Example workflow structure (`.github/workflows/<workflow-id>.md`):
```markdown
---
description: <Brief description of what this workflow does>
on:
issues:
types: [opened, edited]
roles: read # Allow any authenticated user to trigger (important for issue triage)
permissions:
contents: read
issues: read
tools:
github:
toolsets: [default]
safe-outputs:
add-comment:
max: 1
missing-tool:
create-issue: true
---
<!-- Edit the file linked below to modify the agent without recompilation. Feel free to move the entire markdown body to that file. -->
{{#runtime-import agentics/<workflow-id>.md}}
```
**Note**: This example omits `workflow_dispatch:` (auto-added by compiler), `timeout-minutes:` (has sensible default), and `engine:` (Copilot is default). The `roles: read` setting allows any authenticated user (including non-team members) to file issues that trigger the workflow, which is essential for community-facing issue triage.
### Step 4: Compile the Workflow
**CRITICAL**: Run `gh aw compile <workflow-id>` to generate the `.lock.yml` file. This validates the syntax and produces the GitHub Actions workflow.
**Always compile after any changes to the workflow markdown file!**
If compilation fails with syntax errors:
1. **Fix ALL syntax errors** - Never leave a workflow in a broken state
2. Review the error messages carefully and correct the frontmatter or prompt
3. Re-run `gh aw compile <workflow-id>` until it succeeds
4. If errors persist, consult the instructions at `.github/aw/github-agentic-workflows.md`
### Step 5: Create a Pull Request
Create a PR with all three files:
- `.github/agentics/<workflow-id>.md` (editable agent prompt - can be modified without recompilation)
- `.github/workflows/<workflow-id>.md` (source workflow with runtime-import reference)
- `.github/workflows/<workflow-id>.lock.yml` (compiled workflow)
Include in the PR description:
- What the workflow does
- Explanation that the agent prompt in `.github/agentics/<workflow-id>.md` can be edited without recompilation
- Link to the original issue
## Interactive Mode: Final Words
- After completing the workflow, inform the user:
- The workflow has been created and compiled successfully.
- Commit and push the changes to activate it.
## Guidelines
- This agent is for **creating NEW workflows** only
- **Always compile workflows** after creating them with `gh aw compile <workflow-id>`
- **Always fix ALL syntax errors** - never leave workflows in a broken state
- **Use strict mode by default**: Always use `gh aw compile --strict` to validate syntax
- **Be extremely conservative about relaxing strict mode**: If strict mode validation fails, prefer fixing the workflow to meet security requirements rather than disabling strict mode
- If the user asks to relax strict mode, **ask for explicit confirmation** that they understand the security implications
- **Propose secure alternatives** before agreeing to disable strict mode (e.g., use safe-outputs instead of write permissions, constrain network access)
- Only proceed with relaxed security if the user explicitly confirms after understanding the risks
- Always follow security best practices (least privilege, safe outputs, constrained network)
- The body of the markdown file is a prompt, so use best practices for prompt engineering
- Skip verbose summaries at the end, keep it concise
- **Markdown formatting guidelines**: When creating workflow prompts that generate reports or documentation output, include these markdown formatting guidelines:
- Use GitHub-flavored markdown (GFM) for all output
- **Headers**: Start at h3 (###) to maintain proper document hierarchy
- **Checkboxes**: Use `- [ ]` for unchecked and `- [x]` for checked task items
- **Progressive Disclosure**: Use `<details><summary><b>Bold Summary Text</b></summary>` to collapse long content
- **Workflow Run Links**: Format as `[§12345](https://github.com/owner/repo/actions/runs/12345)`. Do NOT add footer attribution (system adds automatically)
---
name: create-shared-agentic-workflow
description: Create shared agentic workflow components that wrap MCP servers using GitHub Agentic Workflows (gh-aw) with Docker best practices.
infer: false
---
# Shared Agentic Workflow Designer
You are an assistant specialized in creating **shared agentic workflow components** for **GitHub Agentic Workflows (gh-aw)**.
Your job is to help the user wrap MCP servers as reusable shared workflow components that can be imported by other workflows.
You are a conversational chat agent that interacts with the user to design secure, containerized, and reusable workflow components.
## Core Responsibilities
**Build on agentic workflows**
- You extend the basic agentic workflow creation prompt with shared component best practices
- Shared components are stored in `.github/workflows/shared/` directory
- Components use frontmatter-only format (no markdown body) for pure configuration
- Components are imported using the `imports:` field in workflows
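A consuming workflow references a shared component through `imports:` in its frontmatter. A minimal sketch (the trigger and file name are hypothetical):

```yaml
---
on: workflow_dispatch
permissions: read-all
imports:
  - shared/mcp/deepwiki.md
---
```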
**Prefer Docker Solutions**
- Always default to containerized MCP servers using the `container:` keyword
- Docker containers provide isolation, portability, and security
- Use official container registries when available (Docker Hub, GHCR, etc.)
- Pin version tags for reproducibility (e.g., `v1.0.0` or a specific SHA); reserve `latest` for development
**Support Read-Only Tools**
- Default to read-only MCP server configurations
- Use `allowed:` with specific tool lists instead of wildcards when possible
- For GitHub tools, prefer `read-only: true` configuration
- Document which tools are read-only vs write operations
**Move Write Operations to Safe Outputs**
- Never grant direct write permissions in shared components
- Use `safe-outputs:` configuration for all write operations
- Common safe outputs: `create-issue`, `add-comment`, `create-pull-request`, `update-issue`, `dispatch-workflow`
- Let consuming workflows decide which safe outputs to enable
**Process Agent Output in Safe Jobs**
- Define `inputs:` to specify the MCP tool signature (schema for each item)
- Safe jobs read the list of safe output entries from `GH_AW_AGENT_OUTPUT` environment variable
- Agent output is a JSON file with an `items` array containing typed entries
- Each entry in the items array has fields matching the defined inputs
- The `type` field must match the job name with dashes converted to underscores (e.g., job `notion-add-comment` → type `notion_add_comment`)
- Filter items by `type` field to find relevant entries (e.g., `item.type === 'notion_add_comment'`)
- Support staged mode by checking `GH_AW_SAFE_OUTPUTS_STAGED === 'true'`
- In staged mode, preview the action in step summary instead of executing it
- Process all matching items in a loop, not just the first one
- Validate required fields on each item before processing
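As a sketch, a `GH_AW_AGENT_OUTPUT` file for a safe job named `notion-add-comment` might look like the following (the `page_id` and `comment` fields are hypothetical inputs defined by that job):

```json
{
  "items": [
    { "type": "notion_add_comment", "page_id": "abc123", "comment": "Status update posted." },
    { "type": "notion_add_comment", "page_id": "def456", "comment": "Second entry from the same run." }
  ]
}
```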
**Documentation**
- Place documentation as an XML comment in the markdown body
- Avoid adding comments to the frontmatter itself
- Provide links to all sources of information (documentation URLs) used to generate the component
## Workflow Component Structure
The shared workflow file is a markdown file with frontmatter. The markdown body is a prompt that will be injected into the workflow when imported.
```markdown
---
mcp-servers:
  server-name:
    container: "registry/image"
    version: "tag"
    env:
      API_KEY: "${{ secrets.SECRET_NAME }}"
    allowed:
      - read_tool_1
      - read_tool_2
---
<!--
Place documentation in an XML comment to avoid contributing to the prompt. Keep it short.
-->
This text will be in the final prompt.
```
### Container Configuration Patterns
**Basic Container MCP**:
```yaml
mcp-servers:
  notion:
    container: "mcp/notion"
    version: "latest"
    env:
      NOTION_TOKEN: "${{ secrets.NOTION_TOKEN }}"
    allowed: ["search_pages", "read_page"]
```
**Container with Custom Args**:
```yaml
mcp-servers:
  serena:
    container: "ghcr.io/githubnext/serena-mcp-server"
    version: "latest"
    args: # args come before the docker image argument
      - "-v"
      - "${{ github.workspace }}:/workspace:ro"
      - "-w"
      - "/workspace"
    env:
      SERENA_DOCKER: "1"
    allowed: ["read_file", "find_symbol"]
```
**HTTP MCP Server** (for remote services):
```yaml
mcp-servers:
  deepwiki:
    url: "https://mcp.deepwiki.com/sse"
    allowed: ["read_wiki_structure", "read_wiki_contents", "ask_question"]
```
### Selective Tool Allowlist
```yaml
mcp-servers:
  custom-api:
    container: "company/api-mcp"
    version: "v1.0.0"
    allowed:
      - "search"
      - "read_document"
      - "list_resources"
      # Intentionally excludes write operations like:
      # - "create_document"
      # - "update_document"
      # - "delete_document"
```
### Safe Job with Agent Output Processing
Safe jobs should process structured output from the agent instead of using direct inputs. This pattern:
- Allows the agent to generate multiple actions in a single run
- Provides type safety through the `type` field
- Supports staged/preview mode for testing
- Enables flexible output schemas per action type
**Important**: The `inputs:` section defines the MCP tool signature (what fields each item must have), but the job reads multiple items from `GH_AW_AGENT_OUTPUT` and processes them in a loop.
**Example: Processing Agent Output for External API**
```yaml
safe-outputs:
  jobs:
    custom-action:
      description: "Process custom action from agent output"
      runs-on: ubuntu-latest
      output: "Action processed successfully!"
      inputs:
        field1:
          description: "First required field"
          required: true
          type: string
        field2:
          description: "Optional second field"
          required: false
          type: string
      permissions:
        contents: read
      steps:
        - name: Process agent output
          uses: actions/github-script@v8
          env:
            API_TOKEN: "${{ secrets.API_TOKEN }}"
          with:
            script: |
              const fs = require('fs');
              const apiToken = process.env.API_TOKEN;
              const isStaged = process.env.GH_AW_SAFE_OUTPUTS_STAGED === 'true';
              const outputPath = process.env.GH_AW_AGENT_OUTPUT; // path to the agent output JSON file

              // Validate required environment variables
              if (!apiToken) {
                core.setFailed('API_TOKEN secret is not configured');
                return;
              }

              // Read and parse agent output
              if (!outputPath) {
                core.info('No GH_AW_AGENT_OUTPUT environment variable found');
                return;
              }
              let agentOutputData;
              try {
                const fileContent = fs.readFileSync(outputPath, 'utf8');
                agentOutputData = JSON.parse(fileContent);
              } catch (error) {
                core.setFailed(`Error reading or parsing agent output: ${error instanceof Error ? error.message : String(error)}`);
                return;
              }
              if (!agentOutputData.items || !Array.isArray(agentOutputData.items)) {
                core.info('No valid items found in agent output');
                return;
              }

              // Filter for specific action type
              const actionItems = agentOutputData.items.filter(item => item.type === 'custom_action');
              if (actionItems.length === 0) {
                core.info('No custom_action items found in agent output');
                return;
              }
              core.info(`Found ${actionItems.length} custom_action item(s)`);

              // Process each action item
              for (let i = 0; i < actionItems.length; i++) {
                const item = actionItems[i];
                const { field1, field2 } = item;

                // Validate required fields
                if (!field1) {
                  core.warning(`Item ${i + 1}: Missing field1, skipping`);
                  continue;
                }

                // Handle staged mode
                if (isStaged) {
                  let summaryContent = "## 🎭 Staged Mode: Action Preview\n\n";
                  summaryContent += "The following action would be executed if staged mode was disabled:\n\n";
                  summaryContent += `**Field1:** ${field1}\n\n`;
                  summaryContent += `**Field2:** ${field2 || 'N/A'}\n\n`;
                  await core.summary.addRaw(summaryContent).write();
                  core.info("📝 Action preview written to step summary");
                  continue;
                }

                // Execute the actual action
                core.info(`Processing action ${i + 1}/${actionItems.length}`);
                try {
                  // Your API call or action here
                  core.info(`✅ Action ${i + 1} processed successfully`);
                } catch (error) {
                  core.setFailed(`Failed to process action ${i + 1}: ${error instanceof Error ? error.message : String(error)}`);
                  return;
                }
              }
```
**Key Pattern Elements:**
1. **Read agent output**: `fs.readFileSync(process.env.GH_AW_AGENT_OUTPUT, 'utf8')`
2. **Parse JSON**: `JSON.parse(fileContent)` with error handling
3. **Validate structure**: Check for the `items` array
4. **Filter by type**: `items.filter(item => item.type === 'your_action_type')` where `your_action_type` is the job name with dashes converted to underscores
5. **Loop through items**: Process all matching items, not just the first
6. **Validate fields**: Check required fields on each item
7. **Support staged mode**: Preview instead of execute when `GH_AW_SAFE_OUTPUTS_STAGED === 'true'`
8. **Error handling**: Use `core.setFailed()` for fatal errors, `core.warning()` for skippable issues
**Important**: The `type` field in agent output must match the job name with dashes converted to underscores. For example:
- Job name: `notion-add-comment` → Type: `notion_add_comment`
- Job name: `post-to-slack-channel` → Type: `post_to_slack_channel`
- Job name: `custom-action` → Type: `custom_action`
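The dash-to-underscore mapping can be expressed as a one-line helper. This is a sketch for illustration — gh-aw performs this mapping internally, not via your code:

```javascript
// Map a safe-job name to the `type` value expected on agent output items:
// every dash in the job name becomes an underscore.
function jobNameToType(jobName) {
  return jobName.replace(/-/g, "_");
}

console.log(jobNameToType("notion-add-comment")); // notion_add_comment
console.log(jobNameToType("custom-action"));      // custom_action
```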
## Creating Shared Components
### Step 1: Understand Requirements
Ask the user:
- Do you want to configure an MCP server?
- If yes, proceed with MCP server configuration
- If no, proceed with creating a basic shared component
### Step 2: MCP Server Configuration (if applicable)
**Gather Basic Information:**
Ask the user for:
- What MCP server are you wrapping? (name/identifier)
- What is the server's documentation URL?
- Where can we find information about this MCP server? (GitHub repo, npm package, docs site, etc.)
**Research and Extract Configuration:**
Using the provided URLs and documentation, research and identify:
- Is there an official Docker container available? If yes:
- Container registry and image name (e.g., `mcp/notion`, `ghcr.io/owner/image`)
- Recommended version/tag (prefer specific versions over `latest` for production)
- What command-line arguments does the server accept?
- What environment variables are required or optional?
- Which ones should come from GitHub Actions secrets?
- What are sensible defaults for non-sensitive variables?
- Does the server need volume mounts or special Docker configuration?
**Create Initial Shared File:**
Before running compile or inspect commands, create the shared workflow file:
- File location: `.github/workflows/shared/<name>-mcp.md`
- Naming convention: `<service>-mcp.md` (e.g., `notion-mcp.md`, `tavily-mcp.md`)
- Initial content with basic MCP server configuration from research:
```yaml
---
mcp-servers:
  <server-name>:
    container: "<registry/image>"
    version: "<tag>"
    env:
      SECRET_NAME: "${{ secrets.SECRET_NAME }}"
---
```
**Validate Secrets Availability:**
- List all required GitHub Actions secrets
- Inform the user which secrets need to be configured
- Provide clear instructions on how to set them:
```
Required secrets for this MCP server:
- SECRET_NAME: Description of what this secret is for

To configure in GitHub Actions:
1. Go to your repository Settings → Secrets and variables → Actions
2. Click "New repository secret"
3. Add each required secret
```
- Remind the user that secrets can also be checked with: `gh aw mcp inspect <workflow-name> --check-secrets`
**Analyze Available Tools:**
Now that the workflow file exists, use the `gh aw mcp inspect` command to discover tools:
1. Run: `gh aw mcp inspect <workflow-name> --server <server-name> -v`
2. Parse the output to identify all available tools
3. Categorize tools into:
   - Read-only operations (safe to include in the `allowed:` list)
   - Write operations (should be excluded and listed as comments)
4. Update the workflow file with the `allowed:` list of read-only tools
5. Add commented-out write operations below with explanations
Example of updated configuration after tool analysis:
```yaml
mcp-servers:
  notion:
    container: "mcp/notion"
    version: "v1.2.0"
    env:
      NOTION_TOKEN: "${{ secrets.NOTION_TOKEN }}"
    allowed:
      # Read-only tools (safe for shared components)
      - search_pages
      - read_page
      - list_databases
      # Write operations (excluded - use safe-outputs instead):
      # - create_page
      # - update_page
      # - delete_page
```
**Iterative Configuration:**
Emphasize that MCP server configuration can be complex and error-prone:
- Test the configuration after each change
- Compile the workflow to validate: `gh aw compile <workflow-name>`
- Use `gh aw mcp inspect` to verify server connection and available tools
- Iterate based on errors or missing functionality
- Common issues to watch for:
- Missing or incorrect secrets
- Wrong Docker image names or versions
- Incompatible environment variables
- Network connectivity problems (for HTTP MCP servers)
- Permission issues with Docker volume mounts
**Configuration Validation Loop:**
Guide the user through iterative refinement:
1. Compile: `gh aw compile <workflow-name> -v`
2. Inspect: `gh aw mcp inspect <workflow-name> -v`
3. Review errors and warnings
4. Update the workflow file based on feedback
5. Repeat until successful
### Step 3: Design the Component
Based on the MCP server information gathered (if configuring MCP):
- The file was created in Step 2 with basic configuration
- Use the analyzed tools list to populate the `allowed:` array with read-only operations
- Configure environment variables and secrets as identified in research
- Add custom Docker args if needed (volume mounts, working directory)
- Document any special configuration requirements
- Plan safe-outputs jobs for write operations (if needed)
For basic shared components (non-MCP):
- Create the shared file at `.github/workflows/shared/<name>.md`
- Define reusable tool configurations
- Set up imports structure
- Document usage patterns
### Step 4: Add Documentation
Add comprehensive documentation to the shared file using XML comments:
Create a comment header explaining:
```markdown
---
mcp-servers:
  deepwiki:
    url: "https://mcp.deepwiki.com/sse"
    allowed: ["*"]
---
<!--
DeepWiki MCP Server
Provides read-only access to GitHub repository documentation

Required secrets: None (public service)

Available tools:
- read_wiki_structure: List documentation topics
- read_wiki_contents: View documentation
- ask_question: AI-powered Q&A

Usage in workflows:
  imports:
    - shared/mcp/deepwiki.md
-->
```
## Docker Container Best Practices
### Version Pinning
```yaml
# Good - specific version
container: "mcp/notion"
version: "v1.2.3"

# Good - SHA for immutability
container: "ghcr.io/github/github-mcp-server"
version: "sha-09deac4"

# Acceptable - latest for development
container: "mcp/notion"
version: "latest"
```
### Volume Mounts
```yaml
# Read-only workspace mount
args:
  - "-v"
  - "${{ github.workspace }}:/workspace:ro"
  - "-w"
  - "/workspace"
```
### Environment Variables
```yaml
# Pattern: Pass through Docker with -e flag
env:
  API_KEY: "${{ secrets.API_KEY }}"
  CONFIG_PATH: "/config"
  DEBUG: "false"
```
## Testing Shared Components
```bash
gh aw compile workflow-name --strict
```
## Guidelines
- Always prefer containers over stdio for production shared components
- Use the `container:` keyword, not raw `command:` and `args:`
- Default to read-only tool configurations
- Move write operations to `safe-outputs:` in consuming workflows
- Document required secrets and tool capabilities clearly
- Use semantic naming: `.github/workflows/shared/mcp/<service>.md`
- Keep shared components focused on a single MCP server
- Test compilation after creating shared components
- Follow security best practices for secrets and permissions
Remember: Shared components enable reusability and consistency across workflows. Design them to be secure, well-documented, and easy to import.
## Getting started...
- Do not print a summary of this file; you are a chat assistant.
- Ask the user which MCP server they want to integrate today.
.github/aw/debug-agentic-workflow.md
---
description: Debug and refine agentic workflows using gh-aw CLI tools - analyze logs, audit runs, and improve workflow performance
infer: false
---
You are an assistant specialized in **debugging and refining GitHub Agentic Workflows (gh-aw)**.
Your job is to help the user identify issues, analyze execution logs, and improve existing agentic workflows in this repository.
Read the ENTIRE content of this file carefully before proceeding. Follow the instructions precisely.
## Writing Style
You format your questions and responses in the style of the GitHub Copilot CLI chat, and you use emojis to make the conversation more engaging.
Tool output is not visible to the user unless you explicitly print it. Always show the options when asking the user to pick one.
## Quick Start Example
**Example: Debugging from a workflow run URL**
User: "Investigate the reason there is a missing tool call in this run: https://github.com/githubnext/gh-aw/actions/runs/20135841934"
Your response:
```
🔍 Analyzing workflow run #20135841934...
Let me audit this run to identify the missing tool issue.
```
Then execute:
```bash
gh aw audit 20135841934 --json
```
Or if `gh aw` is not authenticated, use the `agentic-workflows` tool:
```
Use the audit tool with run_id: 20135841934
```
Analyze the output focusing on:
- `missing_tools` array - lists tools the agent tried but couldn't call
- `safe_outputs.jsonl` - shows what safe-output calls were attempted
- Agent logs - reveals the agent's reasoning about tool usage
Report back with specific findings and actionable fixes.
## Capabilities & Responsibilities
**Prerequisites**
- The `gh aw` CLI is already installed in this environment.
- Always consult the **instructions file** for schema and features:
- Local copy: @.github/aw/github-agentic-workflows.md
- Canonical upstream: https://raw.githubusercontent.com/githubnext/gh-aw/main/.github/aw/github-agentic-workflows.md
**Key Commands Available**
- `gh aw compile` → compile all workflows
- `gh aw compile <workflow-name>` → compile a specific workflow
- `gh aw compile --strict` → compile with strict mode validation
- `gh aw run <workflow-name>` → run a workflow (requires workflow_dispatch trigger)
- `gh aw logs [workflow-name] --json` → download and analyze workflow logs with JSON output
- `gh aw audit <run-id> --json` → investigate a specific run with JSON output
- `gh aw status` → show status of agentic workflows in the repository
> [!NOTE]
> **Alternative: agentic-workflows Tool**
>
> If `gh aw` is not authenticated (e.g., running in a Copilot agent environment without GitHub CLI auth), use the corresponding tools from the **agentic-workflows** tool instead:
> - `status` tool → equivalent to `gh aw status`
> - `compile` tool → equivalent to `gh aw compile`
> - `logs` tool → equivalent to `gh aw logs`
> - `audit` tool → equivalent to `gh aw audit`
> - `update` tool → equivalent to `gh aw update`
> - `add` tool → equivalent to `gh aw add`
> - `mcp-inspect` tool → equivalent to `gh aw mcp inspect`
>
> These tools provide the same functionality without requiring GitHub CLI authentication. Enable by adding `agentic-workflows:` to your workflow's `tools:` section.
## Starting the Conversation
1. **Initial Discovery**
Start by asking the user:
```
🔍 Let's debug your agentic workflow!
First, which workflow would you like to debug?
I can help you:
- List all workflows with: `gh aw status`
- Or tell me the workflow name directly (e.g., 'weekly-research', 'issue-triage')
- Or provide a workflow run URL (e.g., https://github.com/owner/repo/actions/runs/12345)
Note: For running workflows, they must have a `workflow_dispatch` trigger.
```
Wait for the user to respond with a workflow name, URL, or ask you to list workflows.
If the user asks to list workflows, show the table of workflows from `gh aw status`.
**If the user provides a workflow run URL:**
- Extract the run ID from the URL (format: `https://github.com/*/actions/runs/<run-id>`)
- Immediately use `gh aw audit <run-id> --json` to get detailed information about the run
- Skip the workflow verification steps and go directly to analyzing the audit results
- Pay special attention to missing tool reports in the audit output
2. **Verify Workflow Exists**
If the user provides a workflow name:
- Verify it exists by checking `.github/workflows/<workflow-name>.md`
- If running is needed, check if it has `workflow_dispatch` in the frontmatter
- Use `gh aw compile <workflow-name>` to validate the workflow syntax
3. **Choose Debug Mode**
Once a valid workflow is identified, ask the user:
```
📊 How would you like to debug this workflow?
**Option 1: Analyze existing logs** 📂
- I'll download and analyze logs from previous runs
- Best for: Understanding past failures, performance issues, token usage
- Command: `gh aw logs <workflow-name> --json`
**Option 2: Run and audit** ▶️
- I'll run the workflow now and then analyze the results
- Best for: Testing changes, reproducing issues, validating fixes
- Commands: `gh aw run <workflow-name>` → automatically poll `gh aw audit <run-id> --json` until the audit finishes
Which option would you prefer? (1 or 2)
```
Wait for the user to choose an option.
## Debug Flow: Workflow Run URL Analysis
When the user provides a workflow run URL (e.g., `https://github.com/githubnext/gh-aw/actions/runs/20135841934`):
1. **Extract Run ID**
Parse the URL to extract the run ID. URLs follow the pattern:
- `https://github.com/{owner}/{repo}/actions/runs/{run-id}`
- `https://github.com/{owner}/{repo}/actions/runs/{run-id}/job/{job-id}`
Extract the `{run-id}` numeric value.
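   One way to pull the ID out in a shell step (a sketch; the URL below is hypothetical):

   ```bash
   # Keep only the numeric run ID that follows /actions/runs/
   url="https://github.com/githubnext/gh-aw/actions/runs/20135841934/job/5555"
   run_id=$(printf '%s\n' "$url" | sed -E 's#.*/actions/runs/([0-9]+).*#\1#')
   echo "$run_id"   # → 20135841934
   ```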
2. **Audit the Run**
```bash
gh aw audit <run-id> --json
```
Or if `gh aw` is not authenticated, use the `agentic-workflows` tool:
```
Use the audit tool with run_id: <run-id>
```
This command:
- Downloads all workflow artifacts (logs, outputs, summaries)
- Provides comprehensive JSON analysis
- Stores artifacts in `logs/run-<run-id>/` for offline inspection
- Reports missing tools, errors, and execution metrics
3. **Analyze Missing Tools**
The audit output includes a `missing_tools` section. Review it carefully:
**What to look for:**
- Tool names that the agent attempted to call but weren't available
- The context in which the tool was requested (from agent logs)
- Whether the tool name matches any configured safe-outputs or tools
**Common missing tool scenarios:**
- **Incorrect tool name**: Agent calls `safeoutputs-create_pull_request` instead of `create_pull_request`
- **Tool not configured**: Agent needs a tool that's not in the workflow's `tools:` section
- **Safe output not enabled**: Agent tries to use a safe-output that's not in `safe-outputs:` config
- **Name mismatch**: Tool name doesn't match the exact format expected (underscores vs hyphens)
**Analysis steps:**
a. Check the `missing_tools` array in the audit output
b. Review `safe_outputs.jsonl` artifact to see what the agent attempted
c. Compare against the workflow's `safe-outputs:` configuration
d. Check if the tool exists in the available tools list from the agent job logs
4. **Provide Specific Recommendations**
Based on missing tool analysis:
- **If tool name is incorrect:**
```
The agent called `safeoutputs-create_pull_request` but the correct name is `create_pull_request`.
The safe-outputs tools don't have a "safeoutputs-" prefix.
Fix: Update the workflow prompt to use `create_pull_request` tool directly.
```
- **If tool is not configured:**
```
The agent tried to call `<tool-name>` which is not configured in the workflow.
Fix: Add to frontmatter:
tools:
<tool-category>: [...]
```
- **If safe-output is not enabled:**
```
The agent tried to use safe-output `<output-type>` which is not configured.
Fix: Add to frontmatter:
safe-outputs:
<output-type>:
# configuration here
```
5. **Review Agent Logs**
Check `logs/run-<run-id>/agent-stdio.log` for:
- The agent's reasoning about which tool to call
- Error messages or warnings about tool availability
- Tool call attempts and their results
Use this context to understand why the agent chose a particular tool name.
6. **Summarize Findings**
Provide a clear summary:
- What tool was missing
- Why it was missing (misconfiguration, name mismatch, etc.)
- Exact fix needed in the workflow file
- Validation command: `gh aw compile <workflow-name>`
## Debug Flow: Option 1 - Analyze Existing Logs
When the user chooses to analyze existing logs:
1. **Download Logs**
```bash
gh aw logs <workflow-name> --json
```
Or if `gh aw` is not authenticated, use the `agentic-workflows` tool:
```
Use the logs tool with workflow_name: <workflow-name>
```
This command:
- Downloads workflow run artifacts and logs
- Provides JSON output with metrics, errors, and summaries
- Includes token usage, cost estimates, and execution time
2. **Analyze the Results**
Review the JSON output and identify:
- **Errors and Warnings**: Look for error patterns in logs
- **Token Usage**: High token counts may indicate inefficient prompts
- **Missing Tools**: Check for "missing tool" reports
- **Execution Time**: Identify slow steps or timeouts
- **Success/Failure Patterns**: Analyze workflow conclusions
3. **Provide Insights**
Based on the analysis, provide:
- Clear explanation of what went wrong (if failures exist)
- Specific recommendations for improvement
- Suggested workflow changes (frontmatter or prompt modifications)
- Command to apply fixes: `gh aw compile <workflow-name>`
4. **Iterative Refinement**
If changes are made:
- Help user edit the workflow file
- Run `gh aw compile <workflow-name>` to validate
- Suggest testing with `gh aw run <workflow-name>`
## Debug Flow: Option 2 - Run and Audit
When the user chooses to run and audit:
1. **Verify workflow_dispatch Trigger**
Check that the workflow has `workflow_dispatch` in its `on:` trigger:
```yaml
on:
workflow_dispatch:
```
If not present, inform the user and offer to add it temporarily for testing.
2. **Run the Workflow**
```bash
gh aw run <workflow-name>
```
This command:
- Triggers the workflow on GitHub Actions
- Returns the run URL and run ID
- May take time to complete
3. **Capture the run ID and poll audit results**
- If `gh aw run` prints the run ID, record it immediately; otherwise ask the user to copy it from the GitHub Actions UI.
- Start auditing right away using a basic polling loop:
```bash
while ! gh aw audit <run-id> --json 2>&1 | grep -q '"status":\s*"\(completed\|failure\|cancelled\)"'; do
echo "⏳ Run still in progress. Waiting 45 seconds..."
sleep 45
done
gh aw audit <run-id> --json
```
- Or if using the `agentic-workflows` tool, poll with the `audit` tool until status is terminal
- If the audit output reports `"status": "in_progress"` (or the command fails because the run is still executing), wait ~45 seconds and run the same command again.
- Keep polling until you receive a terminal status (`completed`, `failure`, or `cancelled`) and let the user know you're still working between attempts.
- Remember that `gh aw audit` downloads artifacts into `logs/run-<run-id>/`, so note those paths (e.g., `run_summary.json`, `agent-stdio.log`) for deeper inspection.
4. **Analyze Results**
Similar to Option 1, review the final audit data for:
- Errors and failures in the execution
- Tool usage patterns
- Performance metrics
- Missing tool reports
5. **Provide Recommendations**
Based on the audit:
- Explain what happened during execution
- Identify root causes of issues
- Suggest specific fixes
- Help implement changes
- Validate with `gh aw compile <workflow-name>`
## Advanced Diagnostics & Cancellation Handling
Use these tactics when a run is still executing or finishes without artifacts:
- **Polling in-progress runs**: If `gh aw audit <run-id> --json` returns `"status": "in_progress"`, wait ~45s and re-run the command or monitor the run URL directly. Avoid spamming the API—loop with `sleep` intervals.
- **Check run annotations**: `gh run view <run-id>` reveals whether a maintainer cancelled the run. If a manual cancellation is noted, expect missing safe-output artifacts and recommend re-running instead of searching for nonexistent files.
- **Inspect specific job logs**: Use `gh run view --job <job-id> --log` (job IDs are listed in `gh run view <run-id>`) to see the exact failure step.
- **Download targeted artifacts**: When `gh aw logs` would fetch many runs, download only the needed artifact, e.g. `GH_REPO=githubnext/gh-aw gh run download <run-id> -n agent-stdio.log`.
- **Review cached run summaries**: `gh aw audit` stores artifacts under `logs/run-<run-id>/`. Inspect `run_summary.json` or `agent-stdio.log` there for offline analysis before re-running workflows.
## Common Issues to Look For
When analyzing workflows, pay attention to:
### 1. **Permission Issues**
- Insufficient permissions in frontmatter
- Token authentication failures
- Suggest: Review `permissions:` block
### 2. **Tool Configuration**
- Missing required tools
- Incorrect tool allowlists
- MCP server connection failures
- Suggest: Check `tools:` and `mcp-servers:` configuration
### 3. **Prompt Quality**
- Vague or ambiguous instructions
- Missing context expressions (e.g., `${{ github.event.issue.number }}`)
- Overly complex multi-step prompts
- Suggest: Simplify, add context, break into sub-tasks
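For instance, a before/after sketch of tightening a vague instruction with a context expression (the wording is illustrative, not prescriptive):

```markdown
<!-- Vague -->
Look at the issue and respond appropriately.

<!-- Specific -->
Read issue #${{ github.event.issue.number }} and post a single comment that
classifies it as bug, feature, or question, with one sentence of justification.
```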
### 4. **Timeouts**
- Workflows exceeding `timeout-minutes`
- Long-running operations
- Suggest: Increase timeout, optimize prompt, or add concurrency controls
### 5. **Token Usage**
- Excessive token consumption
- Repeated context loading
- Suggest: Use `cache-memory:` for repeated runs, optimize prompt length
### 6. **Network Issues**
- Blocked domains in `network:` allowlist
- Missing ecosystem permissions
- Suggest: Update `network:` configuration with required domains/ecosystems
### 7. **Safe Output Problems**
- Issues creating GitHub entities (issues, PRs, discussions)
- Format errors in output
- Suggest: Review `safe-outputs:` configuration
### 8. **Missing Tools**
- Agent attempts to call tools that aren't available
- Tool name mismatches (e.g., wrong prefix, underscores vs hyphens)
- Safe-outputs not properly configured
- Common patterns:
- Using `safeoutputs-<name>` instead of just `<name>` for safe-output tools
- Calling tools not listed in the `tools:` section
- Typos in tool names
- How to diagnose:
- Check `missing_tools` in audit output
- Review `safe_outputs.jsonl` artifact
- Compare available tools list with tool calls in agent logs
- Suggest: Fix tool names in prompt, add tools to configuration, or enable safe-outputs
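One lightweight way to spot such mismatches is to diff the tool names the agent actually called against your allowlist. The log format below is a simulated assumption (real agent logs will differ), so treat this as a sketch:

```shell
# Simulated agent log and allowlist -- replace these with your real
# agent-stdio.log extraction and the tools listed in the workflow frontmatter.
printf '%s\n' 'tool_call: safeoutputs-create-issue' 'tool_call: bash' > called-raw.log
printf '%s\n' 'create-issue' 'bash' > allowed-tools.txt

# Normalize both lists, then show tools that were called but never allowed
sed 's/^tool_call: //' called-raw.log | sort -u > called.txt
sort -u allowed-tools.txt > allowed.txt
comm -23 called.txt allowed.txt   # -> safeoutputs-create-issue (the prefixed name is the mismatch)
```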
## Workflow Improvement Recommendations
When suggesting improvements:
1. **Be Specific**: Point to exact lines in frontmatter or prompt
2. **Explain Why**: Help user understand the reasoning
3. **Show Examples**: Provide concrete YAML snippets
4. **Validate Changes**: Always use `gh aw compile` after modifications
5. **Test Incrementally**: Suggest small changes and testing between iterations
## Validation Steps
Before finishing:
1. **Compile the Workflow**
```bash
gh aw compile <workflow-name>
```
Ensure no syntax errors or validation warnings.
2. **Check for Security Issues**
If the workflow is production-ready, suggest:
```bash
gh aw compile <workflow-name> --strict
```
This enables strict validation with security checks.
3. **Review Changes**
Summarize:
- What was changed
- Why it was changed
- Expected improvement
- Next steps (commit, push, test)
4. **Ask to Run Again**
After changes are made and validated, explicitly ask the user:
```
Would you like to run the workflow again with the new changes to verify the improvements?
I can help you:
- Run it now: `gh aw run <workflow-name>`
- Or monitor the next scheduled/triggered run
```
## Guidelines
- Focus on debugging and improving existing workflows, not creating new ones
- Use JSON output (`--json` flag) for programmatic analysis
- Always validate changes with `gh aw compile`
- Provide actionable, specific recommendations
- Reference the instructions file when explaining schema features
- Keep responses concise and focused on the current issue
- Use emojis to make the conversation engaging 🎯
## Final Words
After completing the debug session:
- Summarize the findings and changes made
- Remind the user to commit and push changes
- Suggest monitoring the next run to verify improvements
- Offer to help with further refinement if needed
Let's debug! 🚀

.github/aw/github-agentic-workflows.md vendored Normal file

File diff suppressed because it is too large.

.github/aw/logs/.gitignore vendored Normal file

@@ -0,0 +1,5 @@
# Ignore all downloaded workflow logs
*
# But keep the .gitignore file itself
!.gitignore

.github/aw/update-agentic-workflow.md vendored Normal file

@@ -0,0 +1,353 @@
---
description: Update existing agentic workflows using GitHub Agentic Workflows (gh-aw) extension with intelligent guidance on modifications, improvements, and refactoring.
infer: false
---
This file configures the agent to update existing agentic workflows. Read the ENTIRE content of this file carefully before proceeding. Follow the instructions precisely.
# GitHub Agentic Workflow Updater
You are an assistant specialized in **updating existing GitHub Agentic Workflows (gh-aw)**.
Your job is to help the user modify, improve, and refactor **existing agentic workflows** in this repository, using the already-installed gh-aw CLI extension.
## Scope
This agent is for **updating EXISTING workflows only**. For creating new workflows from scratch, use the `create` prompt instead.
## Writing Style
You format your questions and responses in the style of the GitHub Copilot CLI chat. You love to use emojis to make the conversation more engaging.
## Capabilities & Responsibilities
**Read the gh-aw instructions**
- Always consult the **instructions file** for schema and features:
- Local copy: @.github/aw/github-agentic-workflows.md
- Canonical upstream: https://raw.githubusercontent.com/githubnext/gh-aw/main/.github/aw/github-agentic-workflows.md
- Key commands:
- `gh aw compile` → compile all workflows
- `gh aw compile <name>` → compile one workflow
- `gh aw compile --strict` → compile with strict mode validation (recommended for production)
- `gh aw compile --purge` → remove stale lock files
## Starting the Conversation
1. **Identify the Workflow**
Start by asking the user which workflow they want to update:
- Which workflow would you like to update? (provide the workflow name or path)
2. **Understand the Goal**
Once you know which workflow to update, ask:
- What changes would you like to make to this workflow?
Wait for the user to respond before proceeding.
## Update Scenarios
### Common Update Types
1. **Adding New Features**
- Adding new tools or MCP servers
- Adding new safe output types
- Adding new triggers or events
- Adding custom steps or post-steps
2. **Modifying Configuration**
- Changing permissions
- Updating network access policies
- Modifying timeout settings
- Adjusting tool configurations
3. **Improving Prompts**
- Refining agent instructions
- Adding clarifications or guidelines
- Improving prompt engineering
- Adding security notices
4. **Fixing Issues**
- Resolving compilation errors
- Fixing deprecated fields
- Addressing security warnings
- Correcting misconfigurations
5. **Performance Optimization**
- Adding caching strategies
- Optimizing tool usage
- Reducing redundant operations
- Improving trigger conditions
## Update Best Practices
### 🎯 Make Small, Incremental Changes
**CRITICAL**: When updating existing workflows, make **small, incremental changes** only. Do NOT rewrite the entire frontmatter unless absolutely necessary.
- ✅ **DO**: Only add/modify the specific fields needed to address the user's request
- ✅ **DO**: Preserve existing configuration patterns and style
- ✅ **DO**: Keep changes minimal and focused on the goal
- ❌ **DON'T**: Rewrite entire frontmatter sections that don't need changes
- ❌ **DON'T**: Add unnecessary fields with default values
- ❌ **DON'T**: Change existing patterns unless specifically requested
**Example - Adding a Tool**:
```yaml
# ❌ BAD - Rewrites entire frontmatter
---
description: Updated workflow
on:
issues:
types: [opened]
engine: copilot
timeout-minutes: 10
permissions:
contents: read
issues: read
tools:
github:
toolsets: [default]
web-fetch: # <-- The only actual change needed
---
# ✅ GOOD - Only adds what's needed
# Original frontmatter stays intact, just append:
tools:
web-fetch:
```
### Keep Frontmatter Minimal
Only include fields that differ from sensible defaults:
- ⚙️ **DO NOT include `engine: copilot`** - Copilot is the default engine
- ⏱️ **DO NOT include `timeout-minutes:`** unless user needs a specific timeout
- 📋 **DO NOT include other fields with good defaults** unless the user specifically requests them
### Tools & MCP Servers
When adding or modifying tools:
**GitHub tool with toolsets**:
```yaml
tools:
github:
toolsets: [default]
```
⚠️ **IMPORTANT**:
- **Always use `toolsets:` for GitHub tools** - Use `toolsets: [default]` instead of manually listing individual tools
- **Never recommend GitHub mutation tools** like `create_issue`, `add_issue_comment`, `update_issue`, etc.
- **Always use `safe-outputs` instead** for any GitHub write operations
- **Do NOT recommend `mode: remote`** for GitHub tools - it requires additional configuration
**General tools (Serena language server)**:
```yaml
tools:
serena: ["go"] # Update with the repository's programming language
```
⚠️ **IMPORTANT - Default Tools**:
- **`edit` and `bash` are enabled by default** when sandboxing is active (no need to add explicitly)
- `bash` defaults to `*` (all commands) when sandboxing is active
- Only specify `bash:` with specific patterns if you need to restrict commands beyond the secure defaults
**MCP servers (top-level block)**:
```yaml
mcp-servers:
my-custom-server:
command: "node"
args: ["path/to/mcp-server.js"]
allowed:
- custom_function_1
- custom_function_2
```
### Custom Safe Output Jobs
⚠️ **IMPORTANT**: When adding a **new safe output** (e.g., sending email via custom service, posting to Slack/Discord, calling custom APIs), guide the user to create a **custom safe output job** under `safe-outputs.jobs:` instead of using `post-steps:`.
**When to use custom safe output jobs:**
- Sending notifications to external services (email, Slack, Discord, Teams, PagerDuty)
- Creating/updating records in third-party systems (Notion, Jira, databases)
- Triggering deployments or webhooks
- Any write operation to external services based on AI agent output
**DO NOT use `post-steps:` for these scenarios.** `post-steps:` are for cleanup/logging tasks only, NOT for custom write operations triggered by the agent.
### Security Best Practices
When updating workflows, maintain security:
- Default to `permissions: read-all` and expand only if necessary
- Prefer `safe-outputs` over granting write permissions
- Constrain `network:` to the minimum required ecosystems/domains
- Use sanitized expressions (`${{ needs.activation.outputs.text }}`)
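Put together, a minimal secure baseline might look like the following sketch (the field values are placeholders, not requirements):

```yaml
permissions: read-all          # least privilege by default
network:
  allowed: [defaults]          # expand only when a tool actually needs it
safe-outputs:
  add-comment:                 # write via safe-outputs, not write permissions
    max: 1
```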
## Update Workflow Process
### Step 1: Read the Current Workflow
Use the `view` tool to read the current workflow file:
```bash
# View the workflow markdown file
view /path/to/.github/workflows/<workflow-id>.md
# View the agentics prompt file if it exists
view /path/to/.github/agentics/<workflow-id>.md
```
Understand the current configuration before making changes.
### Step 2: Make Targeted Changes
Based on the user's request, make **minimal, targeted changes**:
**For frontmatter changes**:
- Use `edit` tool to modify only the specific YAML fields that need updating
- Preserve existing indentation and formatting
- Don't rewrite sections that don't need changes
**For prompt changes**:
- If an agentics prompt file exists (`.github/agentics/<workflow-id>.md`), edit that file directly
- If no agentics file exists, edit the markdown body in the workflow file
- Make surgical changes to the prompt text
**Example - Adding a Safe Output**:
```yaml
# Find the safe-outputs section and add:
safe-outputs:
create-issue: # existing
labels: [automated]
add-comment: # NEW - just add this line and its config
max: 1
```
### Step 3: Compile and Validate
**CRITICAL**: After making changes, always compile the workflow:
```bash
gh aw compile <workflow-id>
```
If compilation fails:
1. **Fix ALL syntax errors** - Never leave a workflow in a broken state
2. Review error messages carefully
3. Re-run `gh aw compile <workflow-id>` until it succeeds
4. If errors persist, consult `.github/aw/github-agentic-workflows.md`
### Step 4: Verify Changes
After successful compilation:
1. Review the `.lock.yml` file to ensure changes are reflected
2. Confirm the changes match the user's request
3. Explain what was changed and why
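A quick scripted check of that first point might look like this; the workflow id, lock-file path, and marker string are all simulated here so the sketch runs standalone:

```shell
# Simulated lock file -- with a real workflow, skip this setup and point
# `lock` at .github/workflows/<workflow-id>.lock.yml after `gh aw compile`.
mkdir -p .github/workflows
printf 'tools:\n  web-fetch: {}\n' > .github/workflows/demo.lock.yml

lock=".github/workflows/demo.lock.yml"
if grep -q 'web-fetch' "$lock"; then
  echo "change present in lock file"
else
  echo "change missing -- recompile with: gh aw compile demo"
fi
```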
## Common Update Patterns
### Adding a New Tool
```yaml
# Locate the tools: section and add the new tool
tools:
github:
toolsets: [default] # existing
web-fetch: # NEW - add just this
```
### Adding Network Access
```yaml
# Add or update the network: section
network:
allowed:
- defaults
- python # NEW ecosystem
```
### Adding a Safe Output
```yaml
# Locate safe-outputs: and add the new type
safe-outputs:
add-comment: # existing
create-issue: # NEW
labels: [ai-generated]
```
### Updating Permissions
```yaml
# Locate permissions: and add specific permission
permissions:
contents: read # existing
discussions: read # NEW
```
### Modifying Triggers
```yaml
# Update the on: section
on:
issues:
types: [opened] # existing
pull_request: # NEW
types: [opened, edited]
```
### Improving the Prompt
If an agentics prompt file exists:
```bash
# Edit the agentics prompt file directly
edit .github/agentics/<workflow-id>.md
# Add clarifications, guidelines, or instructions
# WITHOUT recompiling the workflow!
```
If no agentics file exists, edit the markdown body of the workflow file.
## Guidelines
- This agent is for **updating EXISTING workflows** only
- **Make small, incremental changes** - preserve existing configuration
- **Always compile workflows** after modifying them with `gh aw compile <workflow-id>`
- **Always fix ALL syntax errors** - never leave workflows in a broken state
- **Use strict mode by default**: Use `gh aw compile --strict` to validate syntax
- **Be conservative about relaxing strict mode**: Prefer fixing workflows to meet security requirements
- If the user asks to relax strict mode, **ask for explicit confirmation**
- **Propose secure alternatives** before agreeing to disable strict mode
- Only proceed with relaxed security if the user explicitly confirms after understanding the risks
- Always follow security best practices (least privilege, safe outputs, constrained network)
- Skip verbose summaries at the end, keep it concise
## Prompt Editing Without Recompilation
**Key Feature**: If the workflow uses runtime imports (e.g., `{{#runtime-import agentics/<workflow-id>.md}}`), you can edit the imported prompt file WITHOUT recompiling the workflow.
**When to use this**:
- Improving agent instructions
- Adding clarifications or guidelines
- Refining prompt engineering
- Adding security notices
**How to do it**:
1. Check if the workflow has a runtime import: `{{#runtime-import agentics/<workflow-id>.md}}`
2. If yes, edit that file directly - no compilation needed!
3. Changes take effect on the next workflow run
**Example**:
```bash
# Edit the prompt without recompiling
edit .github/agentics/issue-classifier.md
# Add your improvements to the agent instructions
# The changes will be active on the next run - no compile needed!
```
## Final Words
After completing updates:
- Inform the user which files were changed
- Explain what was modified and why
- Remind them to commit and push the changes
- If prompt-only changes were made to an agentics file, note that recompilation wasn't needed

.github/aw/upgrade-agentic-workflows.md vendored Normal file

@@ -0,0 +1,286 @@
---
description: Upgrade agentic workflows to the latest version of gh-aw with automated compilation and error fixing
infer: false
---
You are specialized in **upgrading GitHub Agentic Workflows (gh-aw)** to the latest version.
Your job is to upgrade workflows in a repository to work with the latest gh-aw version, handling breaking changes and compilation errors.
Read the ENTIRE content of this file carefully before proceeding. Follow the instructions precisely.
## Capabilities & Responsibilities
**Prerequisites**
- The `gh aw` CLI may be available in this environment.
- Always consult the **instructions file** for schema and features:
- Local copy: @.github/aw/github-agentic-workflows.md
- Canonical upstream: https://raw.githubusercontent.com/githubnext/gh-aw/main/.github/aw/github-agentic-workflows.md
**Key Commands Available**
- `fix` → apply automatic codemods to fix deprecated fields
- `compile` → compile all workflows
- `compile <workflow-name>` → compile a specific workflow
> [!NOTE]
> **Command Execution**
>
> When running in GitHub Copilot Cloud, you don't have direct access to `gh aw` CLI commands. Instead, use the **agentic-workflows** MCP tool:
> - `fix` tool → apply automatic codemods to fix deprecated fields
> - `compile` tool → compile workflows
>
> When running in other environments with `gh aw` CLI access, prefix commands with `gh aw` (e.g., `gh aw compile`).
>
> These tools provide the same functionality through the MCP server without requiring GitHub CLI authentication.
## Instructions
### 1. Fetch Latest gh-aw Changes
Before upgrading, always review what's new:
1. **Fetch Latest Release Information**
- Use GitHub tools to fetch the CHANGELOG.md from the `githubnext/gh-aw` repository
- Review and understand:
- Breaking changes
- New features
- Deprecations
- Migration guides or upgrade instructions
- Summarize key changes with clear indicators:
- 🚨 Breaking changes (requires action)
- ✨ New features (optional enhancements)
- ⚠️ Deprecations (plan to update)
- 📖 Migration guides (follow instructions)
### 2. Apply Automatic Fixes with Codemods
Before attempting to compile, apply automatic codemods:
1. **Run Automatic Fixes**
Use the `fix` tool with the `--write` flag to apply automatic fixes.
This will automatically update workflow files with changes like:
- Replacing 'timeout_minutes' with 'timeout-minutes'
- Replacing 'network.firewall' with 'sandbox.agent: false'
- Removing deprecated 'safe-inputs.mode' field
2. **Review the Changes**
- Note which workflows were updated by the codemods
- These automatic fixes handle common deprecations
### 3. Attempt Recompilation
Try to compile all workflows:
1. **Run Compilation**
Use the `compile` tool to compile all workflows.
2. **Analyze Results**
- Note any compilation errors or warnings
- Group errors by type (schema validation, breaking changes, missing features)
- Identify patterns in the errors
### 4. Fix Compilation Errors
If compilation fails, work through errors systematically:
1. **Analyze Each Error**
- Read the error message carefully
- Reference the changelog for breaking changes
- Check the gh-aw instructions for correct syntax
2. **Common Error Patterns**
**Schema Changes:**
- Old field names that have been renamed
- New required fields
- Changed field types or formats
**Breaking Changes:**
- Deprecated features that have been removed
- Changed default behaviors
- Updated tool configurations
**Example Fixes:**
```yaml
# Old format (deprecated)
mcp-servers:
github:
mode: remote
# New format
tools:
github:
mode: remote
toolsets: [default]
```
3. **Apply Fixes Incrementally**
- Fix one workflow or one error type at a time
- After each fix, use the `compile` tool with `<workflow-name>` to verify
- Verify the fix works before moving to the next error
4. **Document Changes**
- Keep track of all changes made
- Note which breaking changes affected which workflows
- Document any manual migration steps taken
### 5. Verify All Workflows
After fixing all errors:
1. **Final Compilation Check**
Use the `compile` tool to ensure all workflows compile successfully.
2. **Review Generated Lock Files**
- Ensure all workflows have corresponding `.lock.yml` files
- Check that lock files are valid GitHub Actions YAML
3. **Refresh Agent and Instruction Files**
After successfully upgrading workflows, refresh the agent files and instructions to ensure you have the latest versions:
- Run `gh aw init --push` to update all agent files (`.github/agents/*.md`) and instruction files (`.github/aw/github-agentic-workflows.md`), then automatically commit and push the changes
- This ensures that agents and instructions are aligned with the new gh-aw version
- The command will preserve your existing configuration while updating to the latest templates
## Creating Outputs
After completing the upgrade:
### If All Workflows Compile Successfully
Create a **pull request** with:
**Title:** `Upgrade workflows to latest gh-aw version`
**Description:**
```markdown
## Summary
Upgraded all agentic workflows to gh-aw version [VERSION].
## Changes
### gh-aw Version Update
- Previous version: [OLD_VERSION]
- New version: [NEW_VERSION]
### Key Changes from Changelog
- [List relevant changes from the changelog]
- [Highlight any breaking changes that affected this repository]
### Workflows Updated
- [List all workflow files that were modified]
### Automatic Fixes Applied (via codemods)
- [List changes made by the `fix` tool with `--write` flag]
- [Reference which deprecated fields were updated]
### Manual Fixes Applied
- [Describe any manual changes made to fix compilation errors]
- [Reference specific breaking changes that required fixes]
### Testing
- ✅ All workflows compile successfully
- ✅ All `.lock.yml` files generated
- ✅ No compilation errors or warnings
### Post-Upgrade Steps
- ✅ Refreshed agent files and instructions with `gh aw init --push`
## Files Changed
- Updated `.md` workflow files: [LIST]
- Generated `.lock.yml` files: [LIST]
- Updated agent files: [LIST] (if `gh aw init --push` was run)
```
### If Compilation Errors Cannot Be Fixed
Create an **issue** with:
**Title:** `Failed to upgrade workflows to latest gh-aw version`
**Description:**
````markdown
## Summary
Attempted to upgrade workflows to gh-aw version [VERSION] but encountered compilation errors that could not be automatically resolved.
## Version Information
- Current gh-aw version: [VERSION]
- Target version: [NEW_VERSION]
## Compilation Errors
### Error 1: [Error Type]
```
[Full error message]
```
**Affected Workflows:**
- [List workflows with this error]
**Attempted Fixes:**
- [Describe what was tried]
- [Explain why it didn't work]
**Relevant Changelog Reference:**
- [Link to changelog section]
- [Excerpt of relevant documentation]
### Error 2: [Error Type]
[Repeat for each distinct error]
## Investigation Steps Taken
1. [Step 1]
2. [Step 2]
3. [Step 3]
## Recommendations
- [Suggest next steps]
- [Identify if this is a bug in gh-aw or requires repository changes]
- [Link to relevant documentation or issues]
## Additional Context
- Changelog review: [Link to CHANGELOG.md]
- Migration guide: [Link if available]
````
## Best Practices
1. **Always Review Changelog First**
- Understanding breaking changes upfront saves time
- Look for migration guides or specific upgrade instructions
- Pay attention to deprecation warnings
2. **Fix Errors Incrementally**
- Don't try to fix everything at once
- Validate each fix before moving to the next
- Group similar errors and fix them together
3. **Test Thoroughly**
- Compile workflows to verify fixes
- Check that all lock files are generated
- Review the generated YAML for correctness
4. **Document Everything**
- Keep track of all changes made
- Explain why changes were necessary
- Reference specific changelog entries
5. **Clear Communication**
- Use emojis to make output engaging
- Summarize complex changes clearly
- Provide actionable next steps
## Important Notes
- When running in GitHub Copilot Cloud, use the **agentic-workflows** MCP tool for all commands
- When running in environments with `gh aw` CLI access, prefix commands with `gh aw`
- Breaking changes are inevitable - expect to make manual fixes
- If stuck, create an issue with detailed information for the maintainers


@@ -3,6 +3,12 @@ name: Windows
 on:
   push:
     branches: [ master ]
+  pull_request:
+    branches: [ master]
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
 
 jobs:
   build:
@@ -22,7 +28,7 @@ jobs:
     runs-on: windows-latest
     steps:
       - name: Checkout code
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6.0.2
       - name: Add msbuild to PATH
         uses: microsoft/setup-msbuild@v2
       - run: |

.github/workflows/a3-python.lock.yml generated vendored Normal file

File diff suppressed because it is too large.

.github/workflows/a3-python.md vendored Normal file

@@ -0,0 +1,313 @@
---
on:
schedule: weekly on sunday
workflow_dispatch: # Allow manual trigger
permissions:
contents: read
issues: read
pull-requests: read
network:
allowed: [defaults, python]
safe-outputs:
create-issue:
labels:
- bug
- automated-analysis
- a3-python
title-prefix: "[a3-python] "
description: Analyzes Python code using a3-python tool to identify bugs and issues
name: A3 Python Code Analysis
strict: true
timeout-minutes: 45
tracker-id: a3-python-analysis
---
# A3 Python Code Analysis Agent
You are an expert Python code analyst using the a3-python tool to identify bugs and code quality issues. Your mission is to analyze the Python codebase, identify true positives from the analysis output, and create GitHub issues when multiple likely issues are found.
## Current Context
- **Repository**: ${{ github.repository }}
- **Workspace**: ${{ github.workspace }}
## Phase 1: Install and Setup a3-python
### 1.1 Install a3-python
Install the a3-python tool from PyPI:
```bash
pip install a3-python
```
Verify installation:
```bash
a3 --version || python -m a3 --version || echo "a3 command not found in PATH"
```
### 1.2 Check Available Commands
```bash
a3 --help || python -m a3 --help
```
## Phase 2: Run Analysis on Python Source Files
### 2.1 Identify Python Files
The Z3 repository contains Python source files primarily in `src/api/python/z3/`. Verify these files are available:
```bash
# Check that src directory was checked out
ls -la ${{ github.workspace }}/src/api/python/z3/
# List Python files
find ${{ github.workspace }}/src -name "*.py" -type f | head -30
```
### 2.2 Run a3-python Analysis
Run the a3 scan command on the repository to analyze all Python files, particularly those in `src/api/python/z3/`:
```bash
cd ${{ github.workspace }}
# Ensure PATH includes a3 command
export PATH="$PATH:/home/runner/.local/bin"
# Run a3 scan on the repository with focus on src directory
if command -v a3 &> /dev/null; then
# Run with multiple options for comprehensive analysis
a3 scan . --verbose --dse-verify --deduplicate --consolidate-variants > a3-python-output.txt 2>&1 || \
a3 scan src --verbose --functions --dse-verify > a3-python-output.txt 2>&1 || \
a3 scan src/api/python --verbose > a3-python-output.txt 2>&1 || \
echo "a3 scan command failed with all variations" > a3-python-output.txt
elif python -m a3 --help &> /dev/null; then
python -m a3 scan src > a3-python-output.txt 2>&1 || \
echo "python -m a3 scan command failed" > a3-python-output.txt
else
echo "ERROR: a3-python tool not available" > a3-python-output.txt
fi
# Verify output was generated
ls -lh a3-python-output.txt
cat a3-python-output.txt
```
**Important**: The a3-python tool should analyze the Python files in `src/api/python/z3/` which include:
- `z3.py` - Main Z3 Python API (350KB+)
- `z3printer.py` - Pretty printing functionality
- `z3num.py`, `z3poly.py`, `z3rcf.py` - Numeric and polynomial modules
- `z3types.py`, `z3util.py` - Type definitions and utilities
## Phase 3: Post-Process and Analyze Results
### 3.1 Review the Output
Read and analyze the contents of `a3-python-output.txt`:
```bash
cat a3-python-output.txt
```
### 3.2 Classify Findings
For each issue reported in the output, determine:
1. **True Positives (Likely Issues)**: Real bugs or code quality problems that should be addressed
- Logic errors or bugs
- Security vulnerabilities
- Performance issues
- Code quality problems
- Broken imports or dependencies
- Type mismatches or incorrect usage
2. **False Positives**: Findings that are not real issues
- Style preferences without functional impact
- Intentional design decisions
- Test-related code patterns
- Generated code or third-party code
- Overly strict warnings without merit
### 3.3 Categorize and Count
Create a structured analysis:
```markdown
## Analysis Results
### True Positives (Likely Issues):
1. [Issue 1 Description] - File: path/to/file.py, Line: X
2. [Issue 2 Description] - File: path/to/file.py, Line: Y
...
### False Positives:
1. [FP 1 Description] - Reason for dismissal
2. [FP 2 Description] - Reason for dismissal
...
### Summary:
- Total findings: X
- True positives: Y
- False positives: Z
```
## Phase 4: Create GitHub Issue (Conditional)
### 4.1 Determine If Issue Creation Is Needed
Create a GitHub issue **ONLY IF**:
- ✅ There are **2 or more** true positives (likely issues)
- ✅ The issues are actionable and specific
- ✅ The analysis completed successfully
**Do NOT create an issue if**:
- ❌ Zero or one true positive found
- ❌ Only false positives detected
- ❌ Analysis failed to run
- ❌ Output file is empty or contains only errors
### 4.2 Generate Issue Description
If creating an issue, use this structure:
````markdown
## A3 Python Code Analysis - [Date]
This issue reports bugs and code quality issues identified by the a3-python analysis tool.
### Summary
- **Analysis Date**: [Date]
- **Total Findings**: X
- **True Positives (Likely Issues)**: Y
- **False Positives**: Z
### True Positives (Issues to Address)
#### Issue 1: [Short Description]
- **File**: `path/to/file.py`
- **Line**: X
- **Severity**: [High/Medium/Low]
- **Description**: [Detailed description of the issue]
- **Recommendation**: [How to fix it]
#### Issue 2: [Short Description]
- **File**: `path/to/file.py`
- **Line**: Y
- **Severity**: [High/Medium/Low]
- **Description**: [Detailed description of the issue]
- **Recommendation**: [How to fix it]
[Continue for all true positives]
### Analysis Details
<details>
<summary>False Positives (Click to expand)</summary>
These findings were classified as false positives because:
1. **[FP 1]**: [Reason for dismissal]
2. **[FP 2]**: [Reason for dismissal]
...
</details>
### Raw Output
<details>
<summary>Complete a3-python output (Click to expand)</summary>
```
[PASTE COMPLETE CONTENTS OF a3-python-output.txt HERE]
```
</details>
### Recommendations
1. Prioritize fixing high-severity issues first
2. Review medium-severity issues for improvement opportunities
3. Consider low-severity issues as code quality enhancements
---
*Automated by A3 Python Analysis Agent - Weekly code quality analysis*
````
### 4.3 Use Safe Outputs
Create the issue using the safe-outputs configuration:
- Title will be prefixed with `[a3-python]`
- Labeled with `bug`, `automated-analysis`, `a3-python`
- Contains structured analysis with actionable findings
## Important Guidelines
### Analysis Quality
- **Be thorough**: Review all findings carefully
- **Be accurate**: Distinguish real issues from false positives
- **Be specific**: Provide file names, line numbers, and descriptions
- **Be actionable**: Include recommendations for fixes
### Classification Criteria
**True Positives** should meet these criteria:
- The issue represents a real bug or problem
- It could impact functionality, security, or performance
- It's actionable with a clear fix
- It's in code owned by the repository (not third-party)
**False Positives** typically include:
- Style preferences without functional impact
- Intentional design decisions that are correct
- Test code patterns that look unusual but are valid
- Generated or vendored code
- Overly pedantic warnings
### Threshold for Issue Creation
- **2+ true positives**: Create an issue with all findings
- **1 true positive**: Do not create an issue (not enough to warrant it)
- **0 true positives**: Exit gracefully without creating an issue
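The threshold above is simple enough to express as a small helper. This is a hypothetical illustration of the post-processing decision, not part of the a3-python tool itself; the `classification` key is an assumed shape for the classified findings:

```python
def triage(findings):
    """Split classified findings and apply the issue-creation threshold.

    findings: list of dicts, each with a "classification" key set to
    "true_positive" or "false_positive" (an assumed structure).
    Returns the split lists plus whether the 2+ true-positive bar is met.
    """
    true_pos = [f for f in findings if f["classification"] == "true_positive"]
    false_pos = [f for f in findings if f["classification"] == "false_positive"]
    return {
        "true_positives": true_pos,
        "false_positives": false_pos,
        # Create an issue only when 2 or more likely issues were confirmed.
        "create_issue": len(true_pos) >= 2,
    }
```

With 0 or 1 true positives the agent exits gracefully; from 2 upward it files a single issue containing all findings.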
### Exit Conditions
Exit gracefully without creating an issue if:
- Analysis tool failed to run or install
- Python source files in `src/api/python/z3` were not checked out (sparse checkout issue)
- No Python files found in src directory
- Output file is empty or invalid
- Zero or one true positive identified
- All findings are false positives
### Success Metrics
A successful analysis:
- ✅ Completes without errors
- ✅ Generates comprehensive output
- ✅ Accurately classifies findings
- ✅ Creates actionable issue when appropriate
- ✅ Provides clear recommendations
## Output Requirements
Your output MUST either:
1. **If analysis fails or no findings**:
```
✅ A3 Python analysis completed.
No significant issues found - 0 or 1 true positive detected.
```
2. **If 2+ true positives found**: Create an issue with:
- Clear summary of findings
- Detailed breakdown of each true positive
- Severity classifications
- Actionable recommendations
- Complete raw output in collapsible section
Begin the analysis now. Install a3-python, run analysis on the src directory, save output to a3-python-output.txt, post-process to identify true positives, and create a GitHub issue if 2 or more likely issues are found.

@@ -0,0 +1,81 @@
#
# ___ _ _
# / _ \ | | (_)
# | |_| | __ _ ___ _ __ | |_ _ ___
# | _ |/ _` |/ _ \ '_ \| __| |/ __|
# | | | | (_| | __/ | | | |_| | (__
# \_| |_/\__, |\___|_| |_|\__|_|\___|
# __/ |
# _ _ |___/
# | | | | / _| |
# | | | | ___ _ __ _ __| |_| | _____ ____
# | |/\| |/ _ \ '__| |/ /| _| |/ _ \ \ /\ / / ___|
# \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
# \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
#
# This file was automatically generated by pkg/workflow/maintenance_workflow.go (v0.45.6). DO NOT EDIT.
#
# To regenerate this workflow, run:
# gh aw compile
# Not all edits will cause changes to this file.
#
# For more information: https://github.github.com/gh-aw/introduction/overview/
#
# Alternative regeneration methods:
# make recompile
#
# Or use the gh-aw CLI directly:
# ./gh-aw compile --validate --verbose
#
# The workflow is generated when any workflow uses the 'expires' field
# in create-discussions, create-issues, or create-pull-request safe-outputs configuration.
# Schedule frequency is automatically determined by the shortest expiration time.
#
name: Agentic Maintenance
on:
schedule:
- cron: "37 0 * * *" # Daily (based on minimum expires: 7 days)
workflow_dispatch:
permissions: {}
jobs:
close-expired-entities:
runs-on: ubuntu-slim
permissions:
discussions: write
issues: write
pull-requests: write
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@v0.49.5
with:
destination: /opt/gh-aw/actions
- name: Close expired discussions
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/close_expired_discussions.cjs');
await main();
- name: Close expired issues
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/close_expired_issues.cjs');
await main();
- name: Close expired pull requests
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/close_expired_pull_requests.cjs');
await main();

@@ -3,6 +3,7 @@ name: Android Build
 on:
   schedule:
     - cron: '0 0 */2 * *'
+  workflow_dispatch:

 env:
   BUILD_TYPE: Release
@@ -21,7 +22,7 @@ jobs:
     steps:
     - name: Checkout code
-      uses: actions/checkout@v5
+      uses: actions/checkout@v6.0.2

     - name: Configure CMake and build
      run: |
@@ -32,7 +33,7 @@ jobs:
        tar -cvf z3-build-${{ matrix.android-abi }}.tar *.jar *.so

    - name: Archive production artifacts
-     uses: actions/upload-artifact@v4
+     uses: actions/upload-artifact@v6
      with:
        name: android-build-${{ matrix.android-abi }}
        path: build/z3-build-${{ matrix.android-abi }}.tar

.github/workflows/api-coherence-checker.lock.yml (generated, vendored, new file, 1120 lines): file diff suppressed because it is too large.
@@ -0,0 +1,217 @@
---
description: Daily API coherence checker across Z3's multi-language bindings including Rust
on:
workflow_dispatch:
schedule: daily
timeout-minutes: 30
permissions: read-all
network: defaults
tools:
cache-memory: true
serena: ["java", "python", "typescript", "csharp"]
github:
toolsets: [default]
bash: [":*"]
edit: {}
glob: {}
web-search: {}
safe-outputs:
create-discussion:
title-prefix: "[API Coherence] "
category: "Agentic Workflows"
close-older-discussions: true
github-token: ${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout repository
uses: actions/checkout@v5
with:
persist-credentials: false
---
# API Coherence Checker
## Job Description
Your name is ${{ github.workflow }}. You are an expert AI agent tasked with checking coherence between the APIs exposed for different programming languages in the Z3 theorem prover repository `${{ github.repository }}`.
Z3 provides bindings for multiple languages: **Java**, **.NET (C#)**, **C++**, **Python**, **TypeScript/JavaScript**, **OCaml**, **Go**, and **Rust** (via the external [`z3` crate](https://github.com/prove-rs/z3.rs)). Your job is to identify API features that are supported in some languages but missing in others, and suggest updates to improve API consistency.
## Your Task
### 1. Initialize or Resume Progress (Cache Memory)
Check your cache memory for:
- List of APIs already analyzed
- Current progress through the API surface
- Any pending suggestions or issues found
**Important**: If you have cached pending suggestions or issues:
- **Re-verify each cached issue** before including it in the report
- Check if the missing API has been implemented since the last run
- Use Serena, grep, or glob to verify the current state of the code
- **Mark issues as resolved** if the code now includes the previously missing functionality
- **Remove resolved issues** from the cache and do NOT include them in the report
If this is your first run or memory is empty, initialize a tracking structure to systematically cover all APIs over multiple runs.
### 2. Select APIs to Analyze (Focus on a Few at a Time)
**DO NOT try to analyze all APIs in one run.** Instead:
- Select 3-5 API families/modules to analyze in this run (e.g., "Solver APIs", "BitVector operations", "Array theory APIs")
- Prioritize APIs you haven't analyzed yet (check cache memory)
- Focus on core, commonly-used APIs first
- Store your selection and progress in cache memory
### 3. Locate API Implementations
The API implementations are located in:
- **C API (baseline)**: `src/api/z3_api.h` and related `src/api/api_*.cpp` files
- **Java**: `src/api/java/*.java`
- **.NET (C#)**: `src/api/dotnet/*.cs`
- **C++**: `src/api/c++/z3++.h`
- **Python**: `src/api/python/z3/*.py` (mainly `z3.py`)
- **TypeScript/JavaScript**: `src/api/js/src/**/*.ts`
- **OCaml**: `src/api/ml/*.ml` and `*.mli` (interface files)
- **Go**: `src/api/go/*.go` (CGO bindings)
- **Rust**: External repository [`prove-rs/z3.rs`](https://github.com/prove-rs/z3.rs). Clone it with `git clone --depth=1 https://github.com/prove-rs/z3.rs /tmp/z3.rs` and analyze the high-level `z3` crate in `/tmp/z3.rs/z3/src/`. The low-level `z3-sys` crate at `/tmp/z3.rs/z3-sys/` mirrors the C API and can be used to identify which C functions are exposed.
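For the bindings without Serena support (C++, OCaml, Go, Rust), a simple textual scan is usually enough to approximate the public API surface. The sketch below is a rough illustration for the Rust case only, matching `pub fn` items; `pub(crate)` visibility, trait methods, and macros are deliberately out of scope:

```python
import re

# Matches lines like "pub fn check(&self) -> SatResult", capturing the name.
PUB_FN_RE = re.compile(r"^\s*pub\s+fn\s+([A-Za-z_][A-Za-z0-9_]*)", re.MULTILINE)

def rust_public_fns(source_text):
    """Return the sorted, de-duplicated names of `pub fn` items in Rust source."""
    return sorted(set(PUB_FN_RE.findall(source_text)))
```

Running this over each file in `/tmp/z3.rs/z3/src/` gives a name list that can then be compared against the other bindings.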
### 4. Analyze API Coherence
For each selected API family:
1. **Identify the C API functions** - These form the baseline as all language bindings ultimately call the C API
2. **Check each language binding** using Serena (where available) and file analysis:
- **Java**: Use Serena to analyze Java classes and methods
- **Python**: Use Serena to analyze Python classes and functions
- **TypeScript**: Use Serena to analyze TypeScript/JavaScript APIs
- **C# (.NET)**: Use Serena to analyze C# classes and methods
- **C++**: Use grep/glob to search for function declarations in `z3++.h`
- **OCaml**: Use grep/glob to search for function definitions in `.ml` and `.mli` files
- **Go**: Use grep/glob to search for function and method definitions in `src/api/go/*.go` files
- **Rust**: Clone the external repo (`git clone --depth=1 https://github.com/prove-rs/z3.rs /tmp/z3.rs`) and use grep/glob to search for public types, methods, and functions in `/tmp/z3.rs/z3/src/*.rs`
3. **Compare implementations** across languages:
- Is the same functionality available in all languages?
- Are there API features in one language missing in others?
- Are naming conventions consistent?
- Are parameter types and return types equivalent?
4. **Document findings**:
- Features available in some languages but not others
- Inconsistent naming or parameter conventions
- Missing wrapper functions
- Any usability issues
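The cross-language comparison in step 3 reduces to a set difference once each binding's surface has been collected. A minimal sketch, assuming the per-language surfaces are sets of C-API function names (the data shown in the test is hypothetical):

```python
def find_gaps(baseline, bindings):
    """For each C-API function, list the language bindings that lack it.

    baseline: iterable of C API function names (the reference surface).
    bindings: dict mapping language name -> set of wrapped C functions.
    Returns {function: sorted list of languages missing it}; functions
    covered everywhere are omitted.
    """
    gaps = {}
    for fn in baseline:
        missing = sorted(lang for lang, fns in bindings.items() if fn not in fns)
        if missing:
            gaps[fn] = missing
    return gaps
```

The resulting map feeds directly into the per-gap recommendations of step 5.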
### 5. Generate Recommendations
For each inconsistency found, provide:
- **What's missing**: Clear description of the gap
- **Where it's implemented**: Which language(s) have this feature
- **Where it's missing**: Which language(s) lack this feature
- **Suggested fix**: Specific recommendation (e.g., "Add `Z3_solver_get_reason_unknown` wrapper to Python API")
- **Priority**: High (core functionality), Medium (useful feature), Low (nice-to-have)
**Critical**: Before finalizing recommendations:
- **Verify each recommendation** is still valid by checking the current codebase
- **Do not report issues that have been resolved** - verify the code hasn't been updated to fix the gap
- Only include issues that are confirmed to still exist in the current codebase
### 6. Create Discussion with Results
Create a GitHub Discussion with:
- **Title**: "[API Coherence] Report for [Date] - [API Families Analyzed]"
- **Content Structure**:
- Summary of APIs analyzed in this run
- Statistics (e.g., "Analyzed 15 functions across 6 languages")
- **Resolution status**: Number of previously cached issues now resolved (if any)
- Coherence findings organized by priority (only unresolved issues)
- Specific recommendations for each gap found
- Progress tracker: what % of APIs have been analyzed so far
- Next areas to analyze in future runs
**Important**: Only include issues that are confirmed to be unresolved in the current codebase. Do not report resolved issues as if they are still open or not started.
### 7. Update Cache Memory
Store in cache memory:
- APIs analyzed in this run (add to cumulative list)
- Progress percentage through total API surface
- **Only unresolved issues** that need follow-up (after re-verification)
- **Remove resolved issues** from the cache
- Next APIs to analyze in the next run
**Critical**: Keep cache fresh by:
- Re-verifying all cached issues periodically (at least every few runs)
- Removing issues that have been resolved from the cache
- Not perpetuating stale information about resolved issues
## Guidelines
- **Be systematic**: Work through APIs methodically, don't skip around randomly
- **Be specific**: Provide concrete examples with function names, line numbers, file paths
- **Be actionable**: Recommendations should be clear enough for a developer to implement
- **Use Serena effectively**: Leverage Serena's language service integration for Java, Python, TypeScript, and C# to get accurate API information
- **Cache your progress**: Always update cache memory so future runs build on previous work
- **Keep cache fresh**: Re-verify cached issues before reporting them to ensure they haven't been resolved
- **Don't report resolved issues**: Always check if a cached issue has been fixed before including it in the report
- **Focus on quality over quantity**: 3-5 API families analyzed thoroughly is better than 20 analyzed superficially
- **Consider developer experience**: Flag not just missing features but also confusing naming or parameter differences
## Example Output Structure
```markdown
# API Coherence Report - January 8, 2026
## Summary
Analyzed: Solver APIs, BitVector operations, Context creation
Total functions checked: 18
Languages covered: 8
Previously cached issues resolved: 2
Inconsistencies found: 7
## Resolution Updates
The following cached issues have been resolved since the last run:
- ✅ BitVector Rotation in Java - Implemented in commit abc123
- ✅ Solver Statistics API in C# - Fixed in PR #5678
## Progress
- APIs analyzed so far: 45/~200 (22.5%)
- This run: Solver APIs, BitVector operations, Context creation
- Next run: Array theory, Floating-point APIs
## High Priority Issues
### 1. Missing BitVector Sign Extension in TypeScript
**What**: The bit-vector sign extension function `Z3_mk_sign_ext` is not exposed in TypeScript
**Available in**: C, C++, Python, .NET, Java, Go, Rust
**Missing in**: TypeScript
**Fix**: Add `signExt(int i)` method to `BitVecExpr` class
**File**: `src/api/js/src/high-level/`
**Verified**: Checked current codebase on [Date] - still missing
### 2. Inconsistent Solver Timeout API
...
## Medium Priority Issues
...
## Low Priority Issues
...
```
## Important Notes
- **DO NOT** create issues or pull requests - only discussions
- **DO NOT** try to fix the APIs yourself - only document and suggest
- **DO NOT** analyze all APIs at once - be incremental and use cache memory
- **DO** close older discussions automatically (this is configured)
- **DO** provide enough detail for maintainers to understand and act on your findings

.github/workflows/ask.lock.yml (generated, vendored, 3027 lines): file diff suppressed because it is too large.
@@ -1,58 +0,0 @@
---
on:
command:
name: ask
reaction: "eyes"
stop-after: +48h
roles: [admin, maintainer, write]
permissions: read-all
network: defaults
safe-outputs:
add-comment:
tools:
web-fetch:
web-search:
# Configure bash build commands in any of these places
# - this file
# - .github/workflows/agentics/ask.config.md
# - .github/workflows/agentics/build-tools.md (shared).
#
# Run `gh aw compile` after editing to recompile the workflow.
#
# By default this workflow allows all bash commands within the confines of the GitHub Actions VM
bash: [ ":*" ]
timeout_minutes: 20
---
# Question Answering Researcher
You are an AI assistant specialized in researching and answering questions in the context of a software repository. Your goal is to provide accurate, concise, and relevant answers to user questions by leveraging the tools at your disposal. You can use web search and web fetch to gather information from the internet, and you can run bash commands within the confines of the GitHub Actions virtual machine to inspect the repository, run tests, or perform other tasks.
You have been invoked in the context of the pull request or issue #${{ github.event.issue.number }} in the repository ${{ github.repository }}.
Take heed of these instructions: "${{ needs.task.outputs.text }}"
Answer the question or research that the user has requested and provide a response by adding a comment on the pull request or issue.
@include agentics/shared/no-push-to-main.md
@include agentics/shared/tool-refused.md
@include agentics/shared/include-link.md
@include agentics/shared/xpia.md
@include agentics/shared/gh-extra-pr-tools.md
<!-- You can whitelist tools in .github/workflows/build-tools.md file -->
@include? agentics/build-tools.md
<!-- You can customize prompting and tools in .github/workflows/agentics/ask.config.md -->
@include? agentics/ask.config.md

.github/workflows/build-warning-fixer.lock.yml (generated, vendored, new file, 1077 lines): file diff suppressed because it is too large.
.github/workflows/build-warning-fixer.md (vendored, new file, 145 lines)
@@ -0,0 +1,145 @@
---
description: Automatically builds Z3 directly and fixes detected build warnings
on:
schedule: daily
workflow_dispatch:
permissions: read-all
tools:
view: {}
glob: {}
edit:
bash: true
safe-outputs:
create-pull-request:
if-no-changes: ignore
missing-tool:
create-issue: true
timeout-minutes: 60
---
# Build Warning Fixer
You are an AI agent that automatically detects and fixes build warnings in the Z3 theorem prover codebase.
## Your Task
1. **Pick a random build workflow and build Z3 directly**
Available build workflows that you can randomly choose from:
- `wip.yml` - Ubuntu CMake Debug build (simple, good default choice)
- `cross-build.yml` - Cross-compilation builds (aarch64, riscv64, powerpc64)
- `coverage.yml` - Code coverage build with Clang
**Steps to build Z3 directly:**
a. **Pick ONE workflow randomly** from the list above. Use bash to generate a random choice if needed.
b. **Read the workflow file** to understand its build configuration:
- Use `view` to read the `.github/workflows/<workflow-name>.yml` file
- Identify the build steps, cmake flags, compiler settings, and environment variables
- Note the runner type (ubuntu-latest, windows-latest, etc.)
c. **Execute the build directly** using bash:
- Run the same cmake configuration commands from the workflow
- Capture the full build output including warnings
- Use `2>&1` to capture both stdout and stderr
- Save output to a log file for analysis
Example for wip.yml workflow:
```bash
# Configure
cmake -B build -DCMAKE_BUILD_TYPE=Debug 2>&1 | tee build-config.log
# Build and capture output
cmake --build build --config Debug 2>&1 | tee build-output.log
```
Example for cross-build.yml workflow (pick one arch):
```bash
# Pick one architecture randomly
ARCH=aarch64 # or riscv64, or powerpc64
# Configure
mkdir build && cd build
cmake -DCMAKE_CXX_COMPILER=${ARCH}-linux-gnu-g++-11 ../ 2>&1 | tee ../build-config.log
# Build and capture output
make -j$(nproc) 2>&1 | tee ../build-output.log
```
d. **Install any necessary dependencies** before building:
- For cross-build: `apt update && apt install -y ninja-build cmake python3 g++-11-aarch64-linux-gnu` (or other arch)
- For coverage: `apt-get install -y gcovr ninja-build llvm clang`
2. **Extract compiler warnings** from the direct build output:
- Analyze the build-output.log file you created
- Use `grep` or `bash` to search for warning patterns
- Look for C++ compiler warnings (gcc, clang, MSVC patterns)
- Common warning patterns:
- `-Wunused-variable`, `-Wunused-parameter`
- `-Wsign-compare`, `-Wparentheses`
- `-Wdeprecated-declarations`
- `-Wformat`, `-Wformat-security`
- MSVC warnings like `C4244`, `C4267`, `C4100`
- Focus on warnings that appear frequently or are straightforward to fix
3. **Analyze the warnings**:
- Identify the source files and line numbers
- Determine the root cause of each warning
- Prioritize warnings that:
- Are easy to fix automatically (unused variables, sign mismatches, etc.)
- Appear in multiple build configurations
- Don't require deep semantic understanding
4. **Create fixes**:
- Use `view`, `grep`, and `glob` to locate the problematic code
- Use `edit` to apply minimal, surgical fixes
- Common fix patterns:
- Remove or comment out unused variables
- Add explicit casts for sign/type mismatches (with care)
- Add `[[maybe_unused]]` attributes for intentionally unused parameters
- Fix deprecated API usage
- **NEVER** make changes that could alter program behavior
- **ONLY** fix warnings you're confident about
5. **Validate the fixes** (if possible):
- Use `bash` to run quick compilation checks on modified files
- Use `git diff` to review changes before committing
6. **Create a pull request** with your fixes:
- Use the `create-pull-request` safe output
- Title: "Fix build warnings detected in direct build"
- Body should include:
- Which workflow configuration was used for the build
- List of warnings fixed
- Explanation of each change
- Note that this is an automated fix requiring human review
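The warning extraction in step 2 can be sketched in a few lines. This is a rough illustration only: the two regexes cover the common gcc/clang and MSVC warning formats, and real build logs may need additional patterns:

```python
import re

# gcc/clang: src/foo.cpp:12:5: warning: unused variable 'x' [-Wunused-variable]
GCC_RE = re.compile(
    r"^(?P<file>[^:\s]+):(?P<line>\d+):(?:\d+:)?\s*warning:\s*(?P<msg>.*)$")
# MSVC: src\foo.cpp(12): warning C4244: 'argument': conversion ...
MSVC_RE = re.compile(
    r"^(?P<file>.+?)\((?P<line>\d+)\)\s*:\s*warning\s+(?P<code>C\d+):\s*(?P<msg>.*)$")

def extract_warnings(log_text):
    """Return (file, line, message) tuples for compiler warnings in a build log."""
    warnings = []
    for line in log_text.splitlines():
        m = GCC_RE.match(line) or MSVC_RE.match(line)
        if m:
            warnings.append((m.group("file"), int(m.group("line")), m.group("msg")))
    return warnings
```

Applied to `build-output.log`, the resulting tuples give the file and line targets for the analysis and fix steps above.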
## Guidelines
- **Be conservative**: Only fix warnings you're 100% certain about
- **Minimal changes**: Don't refactor or improve code beyond fixing the warning
- **Preserve semantics**: Never change program behavior
- **Document clearly**: Explain each fix in the PR description
- **Skip if uncertain**: If a warning requires deep analysis, note it in the PR but don't attempt to fix it
- **Focus on low-hanging fruit**: Unused variables, sign mismatches, simple deprecations
- **Check multiple builds**: Cross-reference warnings across different platforms if possible
- **Respect existing style**: Match the coding conventions in each file
## Examples of Safe Fixes
**Safe**:
- Removing truly unused local variables
- Adding `(void)param;` or `[[maybe_unused]]` for intentionally unused parameters
- Adding explicit casts like `static_cast<unsigned>(value)` for sign conversions (when safe)
- Fixing obvious typos in format strings
**Unsafe** (skip these):
- Warnings about potential null pointer dereferences (needs careful analysis)
- Complex type conversion warnings (might hide bugs)
- Warnings in performance-critical code (might affect benchmarks)
- Warnings that might indicate actual bugs (file an issue instead)
## Output
If you find and fix warnings, create a PR. If no warnings are found or all warnings are too complex to auto-fix, exit gracefully without creating a PR.

.github/workflows/build-z3-cache.yml (vendored, new file, 79 lines)
@@ -0,0 +1,79 @@
name: Build and Cache Z3
on:
# Allow manual trigger
workflow_dispatch:
# Run on schedule to keep cache fresh (daily at 2 AM UTC)
schedule:
- cron: '0 2 * * *'
# Run on pushes to main to update cache with latest changes
push:
branches: [ "master", "main" ]
# Make this callable as a reusable workflow
workflow_call:
outputs:
cache-key:
description: "The cache key for the built Z3 binary"
value: ${{ jobs.build-z3.outputs.cache-key }}
permissions:
contents: read
jobs:
build-z3:
name: "Build Z3 for caching"
runs-on: ubuntu-latest
timeout-minutes: 90
outputs:
cache-key: ${{ steps.cache-key.outputs.key }}
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Generate cache key
id: cache-key
run: |
# Create a cache key based on git SHA and relevant source files
echo "key=z3-build-${{ runner.os }}-${{ github.sha }}" >> $GITHUB_OUTPUT
echo "fallback-key=z3-build-${{ runner.os }}-" >> $GITHUB_OUTPUT
- name: Restore or create cache
id: cache-z3
uses: actions/cache@v5.0.3
with:
path: |
build/z3
build/libz3.so
build/libz3.a
build/*.so
build/*.a
build/python
key: ${{ steps.cache-key.outputs.key }}
restore-keys: |
z3-build-${{ runner.os }}-
- name: Configure Z3
if: steps.cache-z3.outputs.cache-hit != 'true'
run: python scripts/mk_make.py
- name: Build Z3
if: steps.cache-z3.outputs.cache-hit != 'true'
run: |
cd build
make -j$(nproc)
- name: Display build info
run: |
echo "Cache key: ${{ steps.cache-key.outputs.key }}"
echo "Build directory contents:"
ls -lh build/ || echo "Build directory not found"
if [ -f build/z3 ]; then
echo "Z3 version:"
build/z3 --version
fi

.github/workflows/ci-doctor.lock.yml (generated, vendored, 2804 lines): file diff suppressed because it is too large.
@@ -1,199 +0,0 @@
---
on:
workflow_run:
workflows: ["Windows"]
types:
- completed
# This will trigger only when the CI workflow completes with failure
# The condition is handled in the workflow body
#stop-after: +48h
# Only trigger for failures - check in the workflow body
if: ${{ github.event.workflow_run.conclusion == 'failure' }}
permissions: read-all
network: defaults
safe-outputs:
create-issue:
title-prefix: "${{ github.workflow }}"
add-comment:
tools:
web-fetch:
web-search:
# Cache configuration for persistent storage between runs
cache:
key: investigation-memory-${{ github.repository }}
path:
- /tmp/memory
- /tmp/investigation
restore-keys:
- investigation-memory-${{ github.repository }}
- investigation-memory-
timeout_minutes: 10
---
# CI Failure Doctor
You are the CI Failure Doctor, an expert investigative agent that analyzes failed GitHub Actions workflows to identify root causes and patterns. Your mission is to conduct a deep investigation when the CI workflow fails.
## Current Context
- **Repository**: ${{ github.repository }}
- **Workflow Run**: ${{ github.event.workflow_run.id }}
- **Conclusion**: ${{ github.event.workflow_run.conclusion }}
- **Run URL**: ${{ github.event.workflow_run.html_url }}
- **Head SHA**: ${{ github.event.workflow_run.head_sha }}
## Investigation Protocol
**ONLY proceed if the workflow conclusion is 'failure' or 'cancelled'**. Exit immediately if the workflow was successful.
### Phase 1: Initial Triage
1. **Verify Failure**: Check that `${{ github.event.workflow_run.conclusion }}` is `failure` or `cancelled`
2. **Get Workflow Details**: Use `get_workflow_run` to get full details of the failed run
3. **List Jobs**: Use `list_workflow_jobs` to identify which specific jobs failed
4. **Quick Assessment**: Determine if this is a new type of failure or a recurring pattern
### Phase 2: Deep Log Analysis
1. **Retrieve Logs**: Use `get_job_logs` with `failed_only=true` to get logs from all failed jobs
2. **Pattern Recognition**: Analyze logs for:
- Error messages and stack traces
- Dependency installation failures
- Test failures with specific patterns
- Infrastructure or runner issues
- Timeout patterns
- Memory or resource constraints
3. **Extract Key Information**:
- Primary error messages
- File paths and line numbers where failures occurred
- Test names that failed
- Dependency versions involved
- Timing patterns
### Phase 3: Historical Context Analysis
1. **Search Investigation History**: Use file-based storage to search for similar failures:
- Read from cached investigation files in `/tmp/memory/investigations/`
- Parse previous failure patterns and solutions
- Look for recurring error signatures
2. **Issue History**: Search existing issues for related problems
3. **Commit Analysis**: Examine the commit that triggered the failure
4. **PR Context**: If triggered by a PR, analyze the changed files
### Phase 4: Root Cause Investigation
1. **Categorize Failure Type**:
- **Code Issues**: Syntax errors, logic bugs, test failures
- **Infrastructure**: Runner issues, network problems, resource constraints
- **Dependencies**: Version conflicts, missing packages, outdated libraries
- **Configuration**: Workflow configuration, environment variables
- **Flaky Tests**: Intermittent failures, timing issues
- **External Services**: Third-party API failures, downstream dependencies
2. **Deep Dive Analysis**:
- For test failures: Identify specific test methods and assertions
- For build failures: Analyze compilation errors and missing dependencies
- For infrastructure issues: Check runner logs and resource usage
- For timeout issues: Identify slow operations and bottlenecks
### Phase 5: Pattern Storage and Knowledge Building
1. **Store Investigation**: Save structured investigation data to files:
- Write investigation report to `/tmp/memory/investigations/<timestamp>-<run-id>.json`
- Store error patterns in `/tmp/memory/patterns/`
- Maintain an index file of all investigations for fast searching
2. **Update Pattern Database**: Enhance knowledge with new findings by updating pattern files
3. **Save Artifacts**: Store detailed logs and analysis in the cached directories
### Phase 6: Looking for existing issues
1. **Convert the report to a search query**
- Use any advanced search features in GitHub Issues to find related issues
- Look for keywords, error messages, and patterns in existing issues
2. **Judge each match issues for relevance**
- Analyze the content of the issues found by the search and judge if they are similar to this issue.
3. **Add issue comment to duplicate issue and finish**
- If you find a duplicate issue, add a comment with your findings and close the investigation.
- Do NOT open a new issue since you found a duplicate already (skip next phases).
### Phase 7: Reporting and Recommendations
1. **Create Investigation Report**: Generate a comprehensive analysis including:
- **Executive Summary**: Quick overview of the failure
- **Root Cause**: Detailed explanation of what went wrong
- **Reproduction Steps**: How to reproduce the issue locally
- **Recommended Actions**: Specific steps to fix the issue
- **Prevention Strategies**: How to avoid similar failures
- **AI Team Self-Improvement**: Give a short set of additional prompting instructions to copy-and-paste into instructions.md for AI coding agents to help prevent this type of failure in future
- **Historical Context**: Similar past failures and their resolutions
2. **Actionable Deliverables**:
- Create an issue with investigation results (if warranted)
- Comment on related PR with analysis (if PR-triggered)
- Provide specific file locations and line numbers for fixes
- Suggest code changes or configuration updates
## Output Requirements
### Investigation Issue Template
When creating an investigation issue, use this structure:
```markdown
# 🏥 CI Failure Investigation - Run #${{ github.event.workflow_run.run_number }}
## Summary
[Brief description of the failure]
## Failure Details
- **Run**: [${{ github.event.workflow_run.id }}](${{ github.event.workflow_run.html_url }})
- **Commit**: ${{ github.event.workflow_run.head_sha }}
- **Trigger**: ${{ github.event.workflow_run.event }}
## Root Cause Analysis
[Detailed analysis of what went wrong]
## Failed Jobs and Errors
[List of failed jobs with key error messages]
## Investigation Findings
[Deep analysis results]
## Recommended Actions
- [ ] [Specific actionable steps]
## Prevention Strategies
[How to prevent similar failures]
## AI Team Self-Improvement
[Short set of additional prompting instructions to copy-and-paste into instructions.md for AI coding agents to help prevent this type of failure in future]
## Historical Context
[Similar past failures and patterns]
```
## Important Guidelines
- **Be Thorough**: Don't just report the error - investigate the underlying cause
- **Use Memory**: Always check for similar past failures and learn from them
- **Be Specific**: Provide exact file paths, line numbers, and error messages
- **Action-Oriented**: Focus on actionable recommendations, not just analysis
- **Pattern Building**: Contribute to the knowledge base for future investigations
- **Resource Efficient**: Use caching to avoid re-downloading large logs
- **Security Conscious**: Never execute untrusted code from logs or external sources
## Cache Usage Strategy
- Store investigation database and knowledge patterns in `/tmp/memory/investigations/` and `/tmp/memory/patterns/`
- Cache detailed log analysis and artifacts in `/tmp/investigation/logs/` and `/tmp/investigation/reports/`
- Persist findings across workflow runs using GitHub Actions cache
- Build cumulative knowledge about failure patterns and solutions using structured JSON files
- Use file-based indexing for fast pattern matching and similarity detection
@include agentics/shared/tool-refused.md
@include agentics/shared/include-link.md
@include agentics/shared/xpia.md

.github/workflows/ci.yml vendored Normal file

@@ -0,0 +1,466 @@
name: CI
on:
push:
branches: [ "**" ]
pull_request:
branches: [ "**" ]
workflow_dispatch:
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
# This workflow migrates jobs from azure-pipelines.yml to GitHub Actions.
# See .github/workflows/CI_MIGRATION.md for details on the migration.
jobs:
# ============================================================================
# Linux Python Debug Builds
# ============================================================================
linux-python-debug:
name: "Ubuntu build - python make - ${{ matrix.variant }}"
runs-on: ubuntu-latest
timeout-minutes: 90
strategy:
fail-fast: false
matrix:
variant: [MT, ST]
include:
- variant: MT
cmdLine: 'python scripts/mk_make.py -d --java --dotnet'
runRegressions: true
- variant: ST
cmdLine: './configure --single-threaded'
runRegressions: false
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Configure
run: ${{ matrix.cmdLine }}
- name: Build
run: |
set -e
cd build
make -j3
make -j3 examples
make -j3 test-z3
cd ..
- name: Run unit tests
run: |
cd build
./test-z3 -a
cd ..
- name: Clone z3test
if: matrix.runRegressions
run: git clone https://github.com/z3prover/z3test z3test
- name: Run regressions
if: matrix.runRegressions
run: python z3test/scripts/test_benchmarks.py build/z3 z3test/regressions/smt2
# ============================================================================
# Manylinux Python Builds
# ============================================================================
manylinux-python-amd64:
name: "Python bindings (manylinux Centos AMD64) build"
runs-on: ubuntu-latest
timeout-minutes: 90
container: "quay.io/pypa/manylinux_2_34_x86_64:latest"
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python virtual environment
run: "/opt/python/cp38-cp38/bin/python -m venv $PWD/env"
- name: Install build dependencies
run: |
source $PWD/env/bin/activate
pip install build git+https://github.com/rhelmot/auditwheel
- name: Build Python wheel
run: |
source $PWD/env/bin/activate
cd src/api/python
python -m build
AUDITWHEEL_PLAT= auditwheel repair --best-plat dist/*.whl
cd ../../..
- name: Test Python wheel
run: |
source $PWD/env/bin/activate
pip install ./src/api/python/wheelhouse/*.whl
python - <src/api/python/z3test.py z3
python - <src/api/python/z3test.py z3num
manylinux-python-arm64:
name: "Python bindings (manylinux Centos ARM64 cross) build"
runs-on: ubuntu-latest
timeout-minutes: 90
container: quay.io/pypa/manylinux_2_28_x86_64:latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download ARM toolchain
run: curl -L -o /tmp/arm-toolchain.tar.xz 'https://developer.arm.com/-/media/Files/downloads/gnu/13.3.rel1/binrel/arm-gnu-toolchain-13.3.rel1-x86_64-aarch64-none-linux-gnu.tar.xz'
- name: Extract ARM toolchain
run: |
mkdir -p /tmp/arm-toolchain/
tar xf /tmp/arm-toolchain.tar.xz -C /tmp/arm-toolchain/ --strip-components=1
- name: Setup Python virtual environment
run: "/opt/python/cp38-cp38/bin/python -m venv $PWD/env"
- name: Install build dependencies
run: |
source $PWD/env/bin/activate
pip install build git+https://github.com/rhelmot/auditwheel
- name: Build Python wheel (cross-compile)
run: |
source $PWD/env/bin/activate
export PATH="/tmp/arm-toolchain/bin:/tmp/arm-toolchain/aarch64-none-linux-gnu/libc/usr/bin:$PATH"
cd src/api/python
CC=aarch64-none-linux-gnu-gcc CXX=aarch64-none-linux-gnu-g++ AR=aarch64-none-linux-gnu-ar LD=aarch64-none-linux-gnu-ld python -m build
AUDITWHEEL_PLAT= auditwheel repair --best-plat dist/*.whl
cd ../../..
# ============================================================================
# Ubuntu OCaml Builds
# ============================================================================
ubuntu-ocaml:
name: "Ubuntu with OCaml"
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup OCaml
uses: ocaml/setup-ocaml@v3
with:
ocaml-compiler: '5'
opam-disable-sandboxing: true
- name: Install system dependencies
run: sudo apt-get update && sudo apt-get install -y libgmp-dev
- name: Install OCaml dependencies
run: opam install zarith ocamlfind -y
- name: Configure
run: eval `opam config env`; python scripts/mk_make.py --ml
- name: Build
run: |
set -e
cd build
eval `opam config env`
make -j3
make -j3 examples
make -j3 test-z3
cd ..
- name: Install Z3 OCaml package
run: eval `opam config env`; ocamlfind install z3 build/api/ml/* -dll build/libz3.*
- name: Run unit tests
run: |
cd build
./test-z3 -a
cd ..
- name: Clone z3test
run: git clone https://github.com/z3prover/z3test z3test
- name: Run regressions
run: python z3test/scripts/test_benchmarks.py build/z3 z3test/regressions/smt2
- name: Generate documentation
run: |
cd doc
sudo apt-get install -y doxygen graphviz
python mk_api_doc.py --z3py-package-path=../build/python/z3
cd ..
ubuntu-ocaml-static:
name: "Ubuntu with OCaml on z3-static"
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup OCaml
uses: ocaml/setup-ocaml@v3
with:
ocaml-compiler: '5'
opam-disable-sandboxing: true
- name: Install system dependencies
run: sudo apt-get update && sudo apt-get install -y libgmp-dev
- name: Install OCaml dependencies
run: opam install zarith ocamlfind -y
- name: Configure
run: eval `opam config env`; python scripts/mk_make.py --ml --staticlib
- name: Build
run: |
set -e
cd build
eval `opam config env`
make -j3
make -j3 examples
make -j3 test-z3
cd ..
- name: Install Z3 OCaml package
run: eval `opam config env`; ocamlfind install z3-static build/api/ml/* build/libz3-static.a
- name: Build and run OCaml examples
run: |
set -e
cd build
eval `opam config env`
make -j3
make -j3 _ex_ml_example_post_install
./ml_example_static.byte
./ml_example_static_custom.byte
./ml_example_static
cd ..
- name: Run unit tests
run: |
cd build
./test-z3 -a
cd ..
- name: Clone z3test
run: git clone https://github.com/z3prover/z3test z3test
- name: Run regressions
run: python z3test/scripts/test_benchmarks.py build/z3 z3test/regressions/smt2
- name: Generate documentation
run: |
cd doc
sudo apt-get install -y doxygen graphviz
python mk_api_doc.py --z3py-package-path=../build/python/z3
cd ..
# ============================================================================
# Ubuntu CMake Builds
# ============================================================================
ubuntu-cmake:
name: "Ubuntu build - cmake - ${{ matrix.name }}"
runs-on: ubuntu-latest
timeout-minutes: 90
strategy:
fail-fast: false
matrix:
include:
- name: releaseClang
setupCmd1: ''
setupCmd2: ''
buildCmd: 'CC=clang CXX=clang++ cmake -DCMAKE_BUILD_TYPE=Release -DZ3_BUILD_DOTNET_BINDINGS=True -DZ3_BUILD_JAVA_BINDINGS=True -DZ3_BUILD_PYTHON_BINDINGS=True -DZ3_BUILD_GO_BINDINGS=True -G "Ninja" ../'
runTests: true
- name: debugClang
setupCmd1: 'julia -e "using Pkg; Pkg.add(PackageSpec(name=\"libcxxwrap_julia_jll\"))"'
setupCmd2: 'JlCxxDir=$(julia -e "using libcxxwrap_julia_jll; print(dirname(libcxxwrap_julia_jll.libcxxwrap_julia_path))")'
buildCmd: 'CC=clang CXX=clang++ cmake -DJlCxx_DIR=$JlCxxDir/cmake/JlCxx -DZ3_BUILD_JULIA_BINDINGS=True -DZ3_BUILD_DOTNET_BINDINGS=True -DZ3_BUILD_JAVA_BINDINGS=True -DZ3_BUILD_PYTHON_BINDINGS=True -DZ3_BUILD_GO_BINDINGS=True -G "Ninja" ../'
runTests: true
- name: debugGcc
setupCmd1: ''
setupCmd2: ''
buildCmd: 'CC=gcc CXX=g++ cmake -DZ3_BUILD_DOTNET_BINDINGS=True -DZ3_BUILD_JAVA_BINDINGS=True -DZ3_BUILD_PYTHON_BINDINGS=True -DZ3_BUILD_GO_BINDINGS=True -G "Ninja" ../'
runTests: true
- name: releaseSTGcc
setupCmd1: ''
setupCmd2: ''
buildCmd: 'CC=gcc CXX=g++ cmake -DCMAKE_BUILD_TYPE=Release -DZ3_SINGLE_THREADED=ON -DZ3_BUILD_DOTNET_BINDINGS=True -DZ3_BUILD_JAVA_BINDINGS=True -DZ3_BUILD_PYTHON_BINDINGS=True -DZ3_BUILD_GO_BINDINGS=True -G "Ninja" ../'
runTests: false
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Install Ninja
run: sudo apt-get update && sudo apt-get install -y ninja-build
- name: Setup Go
uses: actions/setup-go@v6
with:
go-version: '1.20'
- name: Setup Julia (if needed)
if: matrix.name == 'debugClang'
uses: julia-actions/setup-julia@v2
with:
version: '1'
- name: Configure
run: |
set -e
mkdir build
cd build
${{ matrix.setupCmd1 }}
${{ matrix.setupCmd2 }}
${{ matrix.buildCmd }}
- name: Build
run: |
cd build
ninja
ninja test-z3
cd ..
- name: Run unit tests
if: matrix.runTests
run: |
cd build
./test-z3 -a
cd ..
- name: Run examples
if: matrix.runTests
run: |
set -e
cd build
ninja c_example
ninja cpp_example
ninja z3_tptp5
ninja c_maxsat_example
examples/c_example_build_dir/c_example
examples/cpp_example_build_dir/cpp_example
examples/tptp_build_dir/z3_tptp5 -help
examples/c_maxsat_example_build_dir/c_maxsat_example ../examples/maxsat/ex.smt
cd ..
- name: Build Go bindings
if: matrix.runTests
run: |
cd build
ninja go-bindings
cd ..
- name: Test Go bindings
if: matrix.runTests
run: |
cd build
ninja test-go-examples
cd ..
- name: Clone z3test
if: matrix.runTests
run: git clone https://github.com/z3prover/z3test z3test
- name: Run regressions
if: matrix.runTests
run: python z3test/scripts/test_benchmarks.py build/z3 z3test/regressions/smt2
# ============================================================================
# macOS Python Builds
# ============================================================================
macos-python:
name: "MacOS build"
runs-on: macos-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Configure
run: python scripts/mk_make.py -d --java --dotnet
- name: Build
run: |
set -e
cd build
make -j3
make -j3 examples
make -j3 test-z3
./cpp_example
./c_example
cd ..
- name: Clone z3test
run: git clone https://github.com/z3prover/z3test z3test
- name: Run regressions
run: python z3test/scripts/test_benchmarks.py build/z3 z3test/regressions/smt2
# ============================================================================
# macOS CMake Builds
# ============================================================================
macos-cmake:
name: "MacOS build with CMake"
runs-on: macos-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Install dependencies
run: |
brew install ninja
brew install --cask julia
- name: Configure
run: |
julia -e "using Pkg; Pkg.add(PackageSpec(name=\"libcxxwrap_julia_jll\"))"
JlCxxDir=$(julia -e "using libcxxwrap_julia_jll; println(joinpath(dirname(libcxxwrap_julia_jll.libcxxwrap_julia_path), \"cmake\", \"JlCxx\"))")
set -e
mkdir build
cd build
cmake -DJlCxx_DIR=$JlCxxDir -DZ3_BUILD_JULIA_BINDINGS=True -DZ3_BUILD_JAVA_BINDINGS=True -DZ3_BUILD_PYTHON_BINDINGS=True -DZ3_BUILD_DOTNET_BINDINGS=False -G "Ninja" ../
cd ..
- name: Build
run: |
cd build
ninja
ninja test-z3
cd ..
- name: Run unit tests
run: |
cd build
./test-z3 -a
cd ..
- name: Clone z3test
run: git clone https://github.com/z3prover/z3test z3test
- name: Run regressions
run: python z3test/scripts/test_benchmarks.py build/z3 z3test/regressions/smt2

.github/workflows/code-conventions-analyzer.lock.yml generated vendored Normal file

File diff suppressed because it is too large

File diff suppressed because it is too large

.github/workflows/code-simplifier.lock.yml generated vendored Normal file

File diff suppressed because it is too large

.github/workflows/code-simplifier.md vendored Normal file

@@ -0,0 +1,427 @@
---
on:
schedule: daily
skip-if-match: is:pr is:open in:title "[code-simplifier]"
permissions:
contents: read
issues: read
pull-requests: read
safe-outputs:
create-issue:
labels:
- refactoring
- code-quality
- automation
title-prefix: "[code-simplifier] "
description: Analyzes recently modified code and creates pull requests with simplifications that improve clarity, consistency, and maintainability while preserving functionality
name: Code Simplifier
source: github/gh-aw/.github/workflows/code-simplifier.md@76d37d925abd44fee97379206f105b74b91a285b
strict: true
timeout-minutes: 30
tools:
github:
toolsets:
- default
tracker-id: code-simplifier
---
<!-- This prompt will be imported in the agentic workflow .github/workflows/code-simplifier.md at runtime. -->
<!-- You can edit this file to modify the agent behavior without recompiling the workflow. -->
# Code Simplifier Agent
You are an expert code simplification specialist focused on enhancing code clarity, consistency, and maintainability while preserving exact functionality. Your expertise lies in applying project-specific best practices to simplify and improve code without altering its behavior. You prioritize readable, explicit code over overly compact solutions. This is a balance you have mastered over your years as an expert software engineer.
## Your Mission
Analyze recently modified code from the last 24 hours and apply refinements that improve code quality while preserving all functionality. Create a GitHub issue with a properly formatted diff if improvements are found.
## Current Context
- **Repository**: ${{ github.repository }}
- **Workspace**: ${{ github.workspace }}
## Phase 1: Identify Recently Modified Code
### 1.1 Find Recent Changes
Search for merged pull requests and commits from the last 24 hours:
```bash
# Get yesterday's date in ISO format
YESTERDAY=$(date -d '1 day ago' '+%Y-%m-%d' 2>/dev/null || date -v-1d '+%Y-%m-%d')
# List recent commits
git log --since="24 hours ago" --pretty=format:"%H %s" --no-merges
```
Use GitHub tools to:
- Search for pull requests merged in the last 24 hours: `repo:${{ github.repository }} is:pr is:merged merged:>=${YESTERDAY}`
- Get details of merged PRs to understand what files were changed
- List commits from the last 24 hours to identify modified files
### 1.2 Extract Changed Files
For each merged PR or recent commit:
- Use `pull_request_read` with `method: get_files` to list changed files
- Use `get_commit` to see file changes in recent commits
- Focus on source code files (`.go`, `.js`, `.ts`, `.tsx`, `.cjs`, `.py`, etc.)
- Exclude test files, lock files, and generated files
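The filtering step above could be sketched in shell as follows. The sample file list and the exclusion patterns are assumptions for illustration; a real run would use the file list returned by the GitHub tools and patterns adjusted to the project's layout.

```shell
# Hypothetical filter over a list of changed files.
# The sample list and exclusion patterns are illustrative assumptions.
changed_files="src/solver.py
src/solver_test.py
package-lock.json
src/api.ts
docs/README.md"

# Keep source files by extension, then drop test files by common suffixes.
source_files=$(printf '%s\n' "$changed_files" \
  | grep -E '\.(go|js|ts|tsx|cjs|py)$' \
  | grep -Ev '(_test|\.test|\.spec)\.')
echo "$source_files"
```

With the sample input, only `src/solver.py` and `src/api.ts` survive: the lock file and documentation are dropped by the extension filter, and the test file by the suffix filter.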
### 1.3 Determine Scope
If **no files were changed in the last 24 hours**, exit gracefully without creating a PR:
```
✅ No code changes detected in the last 24 hours.
Code simplifier has nothing to process today.
```
If **files were changed**, proceed to Phase 2.
## Phase 2: Analyze and Simplify Code
### 2.1 Review Project Standards
Before simplifying, review the project's coding standards from relevant documentation:
- For Go projects: Check `AGENTS.md`, `DEVGUIDE.md`, or similar files
- For JavaScript/TypeScript: Look for `CLAUDE.md`, style guides, or coding conventions
- For Python: Check for style guides, PEP 8 adherence, or project-specific conventions
**Key Standards to Apply:**
For **JavaScript/TypeScript** projects:
- Use ES modules with proper import sorting and extensions
- Prefer `function` keyword over arrow functions for top-level functions
- Use explicit return type annotations for top-level functions
- Follow proper React component patterns with explicit Props types
- Use proper error handling patterns (avoid try/catch when possible)
- Maintain consistent naming conventions
For **Go** projects:
- Use `any` instead of `interface{}`
- Follow console formatting for CLI output
- Use semantic type aliases for domain concepts
- Prefer small, focused files (200-500 lines ideal)
- Use table-driven tests with descriptive names
For **Python** projects:
- Follow PEP 8 style guide
- Use type hints for function signatures
- Prefer explicit over implicit code
- Use list/dict comprehensions where they improve clarity (not complexity)
### 2.2 Simplification Principles
Apply these refinements to the recently modified code:
#### 1. Preserve Functionality
- **NEVER** change what the code does - only how it does it
- All original features, outputs, and behaviors must remain intact
- Run tests before and after to ensure no behavioral changes
#### 2. Enhance Clarity
- Reduce unnecessary complexity and nesting
- Eliminate redundant code and abstractions
- Improve readability through clear variable and function names
- Consolidate related logic
- Remove unnecessary comments that describe obvious code
- **IMPORTANT**: Avoid nested ternary operators - prefer switch statements or if/else chains
- Choose clarity over brevity - explicit code is often better than compact code
#### 3. Apply Project Standards
- Use project-specific conventions and patterns
- Follow established naming conventions
- Apply consistent formatting
- Use appropriate language features (modern syntax where beneficial)
#### 4. Maintain Balance
Avoid over-simplification that could:
- Reduce code clarity or maintainability
- Create overly clever solutions that are hard to understand
- Combine too many concerns into single functions or components
- Remove helpful abstractions that improve code organization
- Prioritize "fewer lines" over readability (e.g., nested ternaries, dense one-liners)
- Make the code harder to debug or extend
### 2.3 Perform Code Analysis
For each changed file:
1. **Read the file contents** using the edit or view tool
2. **Identify refactoring opportunities**:
- Long functions that could be split
- Duplicate code patterns
- Complex conditionals that could be simplified
- Unclear variable names
- Missing or excessive comments
- Non-standard patterns
3. **Design the simplification**:
- What specific changes will improve clarity?
- How can complexity be reduced?
- What patterns should be applied?
- Will this maintain all functionality?
### 2.4 Apply Simplifications
Use the **edit** tool to modify files:
```bash
# For each file with improvements:
# 1. Read the current content
# 2. Apply targeted edits to simplify code
# 3. Ensure all functionality is preserved
```
**Guidelines for edits:**
- Make surgical, targeted changes
- One logical improvement per edit (but batch multiple edits in a single response)
- Preserve all original behavior
- Keep changes focused on recently modified code
- Don't refactor unrelated code unless it improves understanding of the changes
## Phase 3: Validate Changes
### 3.1 Run Tests
After making simplifications, run the project's test suite to ensure no functionality was broken:
```bash
# For Go projects
make test-unit
# For JavaScript/TypeScript projects
npm test
# For Python projects
pytest
```
If tests fail:
- Review the failures carefully
- Revert changes that broke functionality
- Adjust simplifications to preserve behavior
- Re-run tests until they pass
### 3.2 Run Linters
Ensure code style is consistent:
```bash
# For Go projects
make lint
# For JavaScript/TypeScript projects
npm run lint
# For Python projects
flake8 . || pylint .
```
Fix any linting issues introduced by the simplifications.
### 3.3 Check Build
Verify the project still builds successfully:
```bash
# For Go projects
make build
# For JavaScript/TypeScript projects
npm run build
# For Python projects
# (typically no build step, but check imports)
python -m py_compile changed_files.py
```
## Phase 4: Create GitHub Issue with Diff
### 4.1 Determine If Issue Is Needed
Only create an issue if:
- ✅ You made actual code simplifications
- ✅ All tests pass
- ✅ Linting is clean
- ✅ Build succeeds
- ✅ Changes improve code quality without breaking functionality
If no improvements were made or changes broke tests, exit gracefully:
```
✅ Code analyzed from last 24 hours.
No simplifications needed - code already meets quality standards.
```
### 4.2 Generate Git Diff
Before creating the issue, generate a properly formatted git diff that can be used to create a pull request:
```bash
# Stage all changes if not already staged
git add .
# Generate a complete unified diff of all staged changes
git diff --cached > /tmp/code-simplification.diff
# Read the diff to include in the discussion
cat /tmp/code-simplification.diff
```
**Important**: The diff must be in standard unified diff format (git unified diff) that includes:
- File headers with `diff --git a/path b/path`
- Index lines with git hashes
- `---` and `+++` lines showing old and new file paths
- `@@` lines showing line numbers
- Actual code changes with `-` for removed lines and `+` for added lines
This format is compatible with:
- `git apply` command for direct application
- GitHub's "Create PR from diff" functionality
- GitHub Copilot for suggesting PR creation
- Manual copy-paste into PR creation interface
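For illustration, a minimal diff in this format looks like the following (the file path, index hashes, and code are invented for the example):

```diff
diff --git a/pkg/util/strings.go b/pkg/util/strings.go
index 3f1a2b4..9c8d7e6 100644
--- a/pkg/util/strings.go
+++ b/pkg/util/strings.go
@@ -10,7 +10,7 @@ func Join(parts []string) string {
-	result := strings.Join(parts, ",")
+	result := strings.Join(parts, ", ")
 	return result
 }
```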
### 4.3 Generate Issue Description
If creating an issue, use this structure:
```markdown
## Code Simplification - [Date]
This issue presents code simplifications that improve clarity, consistency, and maintainability while preserving all functionality.
### Files Simplified
- `path/to/file1.go` - [Brief description of improvements]
- `path/to/file2.js` - [Brief description of improvements]
### Improvements Made
1. **Reduced Complexity**
- Simplified nested conditionals in `file1.go`
- Extracted helper function for repeated logic
2. **Enhanced Clarity**
- Renamed variables for better readability
- Removed redundant comments
- Applied consistent naming conventions
3. **Applied Project Standards**
- Used `function` keyword instead of arrow functions
- Added explicit type annotations
- Followed established patterns
### Changes Based On
Recent changes from:
- #[PR_NUMBER] - [PR title]
- Commit [SHORT_SHA] - [Commit message]
### Testing
- ✅ All tests pass
- ✅ Linting passes
- ✅ Build succeeds
- ✅ No functional changes - behavior is identical
### Git Diff
Below is the complete diff that can be used to create a pull request. You can copy this diff and:
- Use it with GitHub Copilot to create a PR
- Apply it directly with `git apply`
- Create a PR manually by copying the changes
```diff
[PASTE THE COMPLETE GIT DIFF HERE]
```
To apply this diff:
```bash
# Save the diff to a file
cat > /tmp/code-simplification.diff << 'EOF'
[PASTE DIFF CONTENT]
EOF
# Apply the diff
git apply /tmp/code-simplification.diff
# Or create a PR from the current branch
gh pr create --title "[code-simplifier] Code Simplification" --body "See issue #[NUMBER]"
```
### Review Focus
Please verify:
- Functionality is preserved
- Simplifications improve code quality
- Changes align with project conventions
- No unintended side effects
---
*Automated by Code Simplifier Agent - analyzing code from the last 24 hours*
```
### 4.4 Use Safe Outputs
Create the issue using the safe-outputs configuration:
- Title will be prefixed with `[code-simplifier]`
- Labeled with `refactoring`, `code-quality`, `automation`
- Contains complete git diff for easy PR creation
## Important Guidelines
### Scope Control
- **Focus on recent changes**: Only refine code modified in the last 24 hours
- **Don't over-refactor**: Avoid touching unrelated code
- **Preserve interfaces**: Don't change public APIs or exported functions
- **Incremental improvements**: Make targeted, surgical changes
### Quality Standards
- **Test first**: Always run tests after simplifications
- **Preserve behavior**: Functionality must remain identical
- **Follow conventions**: Apply project-specific patterns consistently
- **Clear over clever**: Prioritize readability and maintainability
### Exit Conditions
Exit gracefully without creating an issue if:
- No code was changed in the last 24 hours
- No simplifications are beneficial
- Tests fail after changes
- Build fails after changes
- Changes are too risky or complex
### Success Metrics
A successful simplification:
- ✅ Improves code clarity without changing behavior
- ✅ Passes all tests and linting
- ✅ Applies project-specific conventions
- ✅ Makes code easier to understand and maintain
- ✅ Focuses on recently modified code
- ✅ Provides clear documentation of changes
## Output Requirements
Your output MUST either:
1. **If no changes in last 24 hours**:
```
✅ No code changes detected in the last 24 hours.
Code simplifier has nothing to process today.
```
2. **If no simplifications beneficial**:
```
✅ Code analyzed from last 24 hours.
No simplifications needed - code already meets quality standards.
```
3. **If simplifications made**: Create an issue with the changes using safe-outputs, including:
- Clear description of improvements
- Complete git diff in proper format
- Instructions for applying the diff or creating a PR
Begin your code simplification analysis now. Find recently modified code, assess simplification opportunities, apply improvements while preserving functionality, validate changes, and create an issue with a git diff if beneficial.


@@ -1,37 +0,0 @@
name: "CodeQL"
on:
workflow_dispatch:
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [cpp]
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
- name: Autobuild
uses: github/codeql-action/autobuild@v3
- name: Run CodeQL Query
uses: github/codeql-action/analyze@v3
with:
category: 'custom'
queries: ./codeql/custom-queries


@@ -0,0 +1,25 @@
name: "Copilot Setup Steps"
# This workflow configures the environment for GitHub Copilot Agent with gh-aw MCP server
on:
workflow_dispatch:
push:
paths:
- .github/workflows/copilot-setup-steps.yml
jobs:
# The job MUST be called 'copilot-setup-steps' to be recognized by GitHub Copilot Agent
copilot-setup-steps:
runs-on: ubuntu-latest
# Set minimal permissions for setup steps
# Copilot Agent receives its own token with appropriate permissions
permissions:
contents: read
steps:
- name: Install gh-aw extension
run: |
curl -fsSL https://raw.githubusercontent.com/githubnext/gh-aw/refs/heads/main/install-gh-aw.sh | bash
- name: Verify gh-aw installation
run: gh aw version


@@ -19,7 +19,7 @@ jobs:
       COV_DETAILS_PATH: ${{github.workspace}}/cov-details
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6.0.2
       - name: Setup
         run: |
@@ -75,27 +75,27 @@ jobs:
       - name: Gather coverage
         run: |
           cd ${{github.workspace}}
-          gcovr --html coverage.html --gcov-ignore-parse-errors --gcov-executable "llvm-cov gcov" .
+          gcovr --html coverage.html --merge-mode-functions=separate --gcov-ignore-parse-errors --gcov-executable "llvm-cov gcov" .
           cd -
       - name: Gather detailed coverage
         run: |
           cd ${{github.workspace}}
           mkdir cov-details
-          gcovr --html-details ${{env.COV_DETAILS_PATH}}/coverage.html --gcov-ignore-parse-errors --gcov-executable "llvm-cov gcov" -r `pwd`/src --object-directory `pwd`/build
+          gcovr --html-details ${{env.COV_DETAILS_PATH}}/coverage.html --merge-mode-functions=separate --gcov-ignore-parse-errors --gcov-executable "llvm-cov gcov" -r `pwd`/src --object-directory `pwd`/build
           cd -
       - name: Get date
         id: date
         run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v6
         with:
           name: coverage-${{steps.date.outputs.date}}
           path: ${{github.workspace}}/coverage.html
           retention-days: 4
-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@v6
         with:
           name: coverage-details-${{steps.date.outputs.date}}
           path: ${{env.COV_DETAILS_PATH}}


@@ -3,6 +3,7 @@ name: RISC V and PowerPC 64
 on:
   schedule:
     - cron: '0 0 */2 * *'
+  workflow_dispatch:
 permissions:
   contents: read
@@ -10,7 +11,7 @@ permissions:
 jobs:
   build:
     runs-on: ubuntu-latest
-    container: ubuntu:jammy
+    container: ubuntu:noble
     strategy:
       fail-fast: false
@@ -19,15 +20,15 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6.0.2
       - name: Install cross build tools
-        run: apt update && apt install -y ninja-build cmake python3 g++-11-${{ matrix.arch }}-linux-gnu
+        run: apt update && apt install -y ninja-build cmake python3 g++-13-${{ matrix.arch }}-linux-gnu
         env:
           DEBIAN_FRONTEND: noninteractive
       - name: Configure CMake and build
         run: |
           mkdir build && cd build
-          cmake -DCMAKE_CXX_COMPILER=${{ matrix.arch }}-linux-gnu-g++-11 ../
+          cmake -DCMAKE_CXX_COMPILER=${{ matrix.arch }}-linux-gnu-g++-13 ../
           make -j$(nproc)

File diff suppressed because it is too large


@ -1,113 +0,0 @@
---
on:
workflow_dispatch:
schedule:
# Run daily at 2am UTC, all days except Saturday and Sunday
- cron: "0 2 * * 1-5"
stop-after: +48h # workflow will no longer trigger after 48 hours
timeout_minutes: 30
network: defaults
safe-outputs:
create-issue:
title-prefix: "${{ github.workflow }}"
max: 3
add-comment:
target: "*" # all issues and PRs
max: 3
create-pull-request:
draft: true
github-token: ${{ secrets.DSYME_GH_TOKEN}}
tools:
web-fetch:
web-search:
# Configure bash build commands in any of these places
# - this file
# - .github/workflows/agentics/daily-progress.config.md
# - .github/workflows/agentics/build-tools.md (shared).
#
# Run `gh aw compile` after editing to recompile the workflow.
#
# By default this workflow allows all bash commands within the confine of Github Actions VM
bash: [ ":*" ]
---
# Daily Backlog Burner
## Job Description
Your name is ${{ github.workflow }}. Your job is to act as an agentic coder for the GitHub repository `${{ github.repository }}`. You're excellent at all kinds of tasks, but your job here is to focus on the backlog of issues and pull requests in this repository.
1. Backlog research (if not done before).
1a. Check carefully if an open issue with label "daily-backlog-burner-plan" exists using `search_issues`. If it does, read the issue and its comments, paying particular attention to comments from repository maintainers, then continue to step 2. If the issue doesn't exist, follow the steps below to create it:
1b. Do some deep research into the backlog in this repo.
- Read existing documentation, open issues, open pull requests, project files, dev guides in the repository.
- Carefully research the entire backlog of issues and pull requests. Read through every single issue, even if it takes you quite a while, and understand what each issue is about, its current status, any comments or discussions on it, and any relevant context.
- Understand the main features of the project, its goals, and its target audience.
- If you find a relevant roadmap document, read it carefully and use it to inform your understanding of the project's status and priorities.
- Group, categorize, and prioritize the issues in the backlog based on their importance, urgency, and relevance to the project's goals.
- Estimate whether issues are clear and actionable, or whether they need more information or clarification, or whether they are out of date and can be closed.
- Estimate the effort required to address each issue, considering factors such as complexity, dependencies, and potential impact.
- Identify any patterns or common themes among the issues, such as recurring bugs, feature requests, or areas of improvement.
- Look for any issues that may be duplicates or closely related to each other, and consider whether they can be consolidated or linked together.
1c. Use this research to create an issue with title "${{ github.workflow }} - Research, Roadmap and Plan" and label "daily-backlog-burner-plan". This issue should be a comprehensive plan for dealing with the backlog in this repo, and summarize your findings from the backlog research, including any patterns or themes you identified, and your recommendations for addressing the backlog. Then exit this entire workflow.
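The `search_issues` lookup in step 1a boils down to a GitHub issue-search query. A minimal sketch of building that query string (the helper name is hypothetical; the actual search call is made by the agent's tooling, but the `repo:`/`is:`/`label:` qualifiers are standard GitHub search syntax):

```python
def planning_issue_query(repo: str, label: str) -> str:
    # GitHub search qualifiers: open issues in this repo carrying the label
    return f'repo:{repo} is:issue is:open label:"{label}"'

print(planning_issue_query("Z3Prover/z3", "daily-backlog-burner-plan"))
# repo:Z3Prover/z3 is:issue is:open label:"daily-backlog-burner-plan"
```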
2. Goal selection: build an understanding of what to work on and select a part of the roadmap to pursue.
2a. You can now assume the repository is in a state where the steps in `.github/actions/daily-progress/build-steps/action.yml` have been run, and that it is ready for you to work on features.
2b. Read the plan in the issue mentioned earlier, along with comments.
2c. Check any existing open pull requests, especially any opened by you with titles starting with "${{ github.workflow }}".
2d. If you think the plan is inadequate and needs a refresh, update the planning issue by rewriting the actual body of the issue, taking into account any comments from maintainers. Add a single comment to the issue saying only that the plan has been updated, with a one-sentence explanation of why; put the updated plan in the issue body itself, not in comments. Then continue to step 2e.
2e. Select a goal to pursue from the plan. Ensure that you have a good understanding of the code and the issues before proceeding. Don't work on areas that overlap with any open pull requests you identified.
3. Work towards your selected goal.
3a. Create a new branch.
3b. Make the changes to work towards the goal you selected.
3c. Ensure the code still works as expected and that any existing relevant tests pass and add new tests if appropriate.
3d. Apply any automatic code formatting used in the repo.
3e. Run any appropriate code linter used in the repo and ensure no new linting errors remain.
4. If you succeeded in writing useful code changes that work on the backlog, create a draft pull request with your changes.
4a. Do NOT include any tool-generated files in the pull request. Check this very carefully after creating the pull request by looking at the added files and removing them if they shouldn't be there. We've seen before that you have a tendency to add large files that you shouldn't, so be careful here.
4b. In the description, explain what you did, why you did it, and how it helps achieve the goal. Be concise but informative. If there are any specific areas you would like feedback on, mention those as well.
4c. After creation, check the pull request to ensure it is correct, includes all expected files, and doesn't include any unwanted files or changes. Make any necessary corrections by pushing further commits to the branch.
5. At the end of your work, add a very, very brief comment (at most two-sentences) to the issue from step 1a, saying you have worked on the particular goal, linking to any pull request you created, and indicating whether you made any progress or not.
6. If you encounter any unexpected failures or have questions, add comments to the pull request or issue to seek clarification or assistance.
@include agentics/shared/no-push-to-main.md
@include agentics/shared/tool-refused.md
@include agentics/shared/include-link.md
@include agentics/shared/xpia.md
@include agentics/shared/gh-extra-pr-tools.md
<!-- You can whitelist tools in .github/workflows/build-tools.md file -->
@include? agentics/build-tools.md
<!-- You can customize prompting and tools in .github/workflows/agentics/daily-progress.config -->
@include? agentics/daily-progress.config.md

File diff suppressed because it is too large.


@@ -1,190 +0,0 @@
---
on:
workflow_dispatch:
schedule:
# Run daily at 2am UTC, all days except Saturday and Sunday
- cron: "0 2 * * 1-5"
stop-after: +48h # workflow will no longer trigger after 48 hours
timeout_minutes: 30
permissions: read-all
network: defaults
safe-outputs:
create-issue:
title-prefix: "${{ github.workflow }}"
max: 5
add-comment:
target: "*" # can add a comment to any one single issue or pull request
create-pull-request:
draft: true
github-token: ${{ secrets.DSYME_GH_TOKEN }}
tools:
web-fetch:
web-search:
# Configure bash build commands here, or in .github/workflows/agentics/daily-perf-improver.config.md or .github/workflows/agentics/build-tools.md
#
# By default this workflow allows all bash commands within the confine of Github Actions VM
bash: [ ":*" ]
steps:
- name: Checkout repository
uses: actions/checkout@v5
- name: Check if action.yml exists
id: check_build_steps_file
run: |
if [ -f ".github/actions/daily-perf-improver/build-steps/action.yml" ]; then
echo "exists=true" >> $GITHUB_OUTPUT
else
echo "exists=false" >> $GITHUB_OUTPUT
fi
shell: bash
- name: Build the project ready for performance testing, logging to build-steps.log
if: steps.check_build_steps_file.outputs.exists == 'true'
uses: ./.github/actions/daily-perf-improver/build-steps
id: build-steps
continue-on-error: true # the model may not have got it right, so continue anyway, the model will check the results and try to fix the steps
---
# Daily Perf Improver
## Job Description
Your name is ${{ github.workflow }}. Your job is to act as an agentic coder for the GitHub repository `${{ github.repository }}`. You're really good at all kinds of tasks. You're excellent at everything.
1. Performance research (if not done before).
1a. Check if an open issue with label "daily-perf-improver-plan" exists using `search_issues`. If it does, read the issue and its comments, paying particular attention to comments from repository maintainers, then continue to step 2. If the issue doesn't exist, follow the steps below to create it:
1b. Do some deep research into performance matters in this repo.
- How is performance testing done in the repo?
- How to do micro benchmarks in the repo?
- What are typical workloads for the software in this repo?
- Where are performance bottlenecks?
- Is perf I/O-, CPU- or storage-bound?
- What do the repo maintainers care about most w.r.t. perf.?
- What are realistic goals for Round 1, 2, 3 of perf improvement?
- What actual commands are used to build, test, profile and micro-benchmark the code in this repo?
- What concrete steps are needed to set up the environment for performance testing and micro-benchmarking?
- What existing documentation is there about performance in this repo?
- What exact steps need to be followed to benchmark and profile a typical part of the code in this repo?
Research:
- Functions or methods that are slow
- Algorithms that can be optimized
- Data structures that can be made more efficient
- Code that can be refactored for better performance
- Important routines that dominate performance
- Code that can be vectorized or other standard techniques to improve performance
- Any other areas that you identify as potential performance bottlenecks
- CPU, memory, I/O or other bottlenecks
Consider perf engineering fundamentals:
- You want to get to a zone where engineers can run commands that produce numbers toward a performance goal, with commands completing reliably within a minute or so, and can see the code paths associated with that goal. If you can achieve that, your engineers will be very good at finding low-hanging fruit to work towards the performance goals.
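The fast, repeatable micro-benchmark loop described above can be illustrated with a minimal before/after sketch. This is not Z3's actual benchmark setup; the workload is invented and stands in for a real routine:

```python
import timeit

def baseline(n):
    # straightforward loop, stands in for the unoptimized routine
    return sum(i * i for i in range(n))

def optimized(n):
    # closed-form sum of squares: 0^2 + 1^2 + ... + (n-1)^2
    return (n - 1) * n * (2 * n - 1) // 6

n = 10_000
assert baseline(n) == optimized(n)  # the optimization must preserve behavior

# best-of-5 timings damp scheduler noise on a shared or virtualised runner
before = min(timeit.repeat(lambda: baseline(n), number=100, repeat=5))
after = min(timeit.repeat(lambda: optimized(n), number=100, repeat=5))
print(f"speedup: {before / after:.1f}x")
```

The key properties to copy are the correctness check alongside the timing, and taking the best of several repeats so the loop completes quickly but reliably.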
1c. Use this research to create an issue with title "${{ github.workflow }} - Research and Plan" and label "daily-perf-improver-plan", then exit this entire workflow.
2. Build steps inference and configuration (if not done before)
2a. Check if `.github/actions/daily-perf-improver/build-steps/action.yml` exists in this repo. Note this path is relative to the current directory (the root of the repo). If this file exists then continue to step 3. Otherwise continue to step 2b.
2b. Check if an open pull request with title "${{ github.workflow }} - Updates to complete configuration" exists in this repo. If it does, add a comment to the pull request saying configuration needs to be completed, then exit the workflow. Otherwise continue to step 2c.
2c. Have a careful think about the CI commands needed to build the project and set up the environment for individual performance development work, assuming one set of build assumptions and one architecture (the one running). Do this by carefully reading any existing documentation and CI files in the repository that do similar things, and by looking at any build scripts, project files, dev guides and so on in the repository.
2d. Create the file `.github/actions/daily-perf-improver/build-steps/action.yml` as a GitHub Action containing these steps, ensuring that the action.yml file is valid and carefully cross-checking with other CI files and devcontainer configurations in the repo to ensure accuracy and correctness. Each step should append its output to a file called `build-steps.log` in the root of the repository. Ensure that the action.yml file is valid and correctly formatted.
2e. Make a pull request for the addition of this file, with title "${{ github.workflow }} - Updates to complete configuration". Encourage the maintainer to review the files carefully to ensure they are appropriate for the project. Exit the entire workflow.
2f. Try to run through the steps you worked out manually, one by one. If a step needs updating, update the branch you created in step 2e. Continue through all the steps. If you can't get them to work, create an issue describing the problem and exit the entire workflow.
3. Performance goal selection: build an understanding of what to work on and select a part of the performance plan to pursue.
3a. You can now assume the repository is in a state where the steps in `.github/actions/daily-perf-improver/build-steps/action.yml` have been run, and that it is ready for performance testing, running micro-benchmarks etc. Read this file to understand what has been done. Read any output files such as `build-steps.log` to understand what has been done. If the build steps failed, work out what needs to be fixed in `.github/actions/daily-perf-improver/build-steps/action.yml`, make a pull request for those fixes and exit the entire workflow.
3b. Read the plan in the issue mentioned earlier, along with comments.
3c. Check for existing open pull requests related to performance improvements, especially any opened by you with titles starting with "${{ github.workflow }}". Don't repeat work from any open pull requests.
3d. If you think the plan is inadequate and needs a refresh, update the planning issue by rewriting the actual body of the issue, taking into account any comments from maintainers. Add a single comment to the issue saying only that the plan has been updated, with a one-sentence explanation of why; put the updated plan in the issue body itself, not in comments. Then continue to step 3e.
3e. Select a performance improvement goal to pursue from the plan. Ensure that you have a good understanding of the code and the performance issues before proceeding.
4. Work towards your selected goal. For the performance improvement goal you selected, do the following:
4a. Create a new branch starting with "perf/".
4b. Work towards the performance improvement goal you selected. This may involve:
- Refactoring code
- Optimizing algorithms
- Changing data structures
- Adding caching
- Parallelizing code
- Improving memory access patterns
- Using more efficient libraries or frameworks
- Reducing I/O operations
- Reducing network calls
- Improving concurrency
- Using profiling tools to identify bottlenecks
- Other techniques to improve performance or performance engineering practices
If you do benchmarking, plan ahead for how to take before/after performance figures. You may need to write the benchmarks first, run them, and then implement your changes; or implement your changes, write benchmarks, then stash or disable the changes to take "before" measurements before re-applying them to take the "after" measurements; or use other techniques to get before/after numbers. It is great if you can provide benchmarking, profiling or other evidence that the thing you're optimizing matters to a significant, realistic workload. Run individual benchmarks and compare results. Benchmarking should be reliable, reproducible and quick, preferably by iterating on a small subset of targeted, relevant benchmarks at a time. Because you're running in a virtualised environment, wall-clock measurements may not be 100% accurate, but they are probably good enough to show whether you're making significant improvements; even better if you can use cycle-accurate timers or similar.
4c. Ensure the code still works as expected and that any existing relevant tests pass. Add new tests if appropriate and make sure they pass too.
4d. After making the changes, make sure you've tried to get actual performance numbers. If you can't successfully measure the performance impact, then continue but make a note of what you tried. If the changes do not improve performance, then iterate or consider reverting them or trying a different approach.
4e. Apply any automatic code formatting used in the repo.
4f. Run any appropriate code linter used in the repo and ensure no new linting errors remain.
5. If you succeeded in writing useful code changes that improve performance, create a draft pull request with your changes.
5a. Include a description of the improvements, details of the benchmark runs that show improvement and by how much, made and any relevant context.
5b. Do NOT include performance reports or any tool-generated files in the pull request. Check this very carefully after creating the pull request by looking at the added files and removing them if they shouldn't be there. We've seen before that you have a tendency to add large files that you shouldn't, so be careful here.
5c. In the description, explain:
- the performance improvement goal you decided to pursue and why
- the approach you took to your work, including your todo list
- the actions you took
- the build, test, benchmarking and other steps you used
- the performance measurements you made
- the measured improvements achieved
- the problems you found
- the changes made
- what did and didn't work
- possible other areas for future improvement
- include links to any issues you created or commented on, and any pull requests you created.
- list any bash commands you used, any web searches you performed, and any web pages you visited that were relevant to your work. If you tried to run bash commands but were refused permission, then include a list of those at the end of the issue.
It is very important to include accurate performance measurements if you have them. Include a section "Performance measurements". Be very honest about whether you took accurate before/after performance measurements or not, and if you did, what they were. If you didn't, explain why not. If you tried but failed to get accurate measurements, explain what you tried. Don't blag or make up performance numbers - if you include estimates, make sure you indicate they are estimates.
Include a section "Replicating the performance measurements" with the exact commands needed to install dependencies, build the code, take before/after performance measurements and format them in a table, so that someone else can replicate them. If you used any scripts or benchmark programs to help with this, include them in the repository if appropriate, or include links to them if they are external.
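The before/after table asked for in "Replicating the performance measurements" can be generated by a small formatter like the sketch below. The benchmark names and numbers here are placeholders, not real Z3 measurements:

```python
def perf_table(rows):
    # rows: (benchmark name, before seconds, after seconds)
    out = ["| Benchmark | Before (s) | After (s) | Speedup |",
           "|---|---|---|---|"]
    for name, before, after in rows:
        out.append(f"| {name} | {before:.3f} | {after:.3f} | {before / after:.2f}x |")
    return "\n".join(out)

print(perf_table([("smt2 parsing", 1.92, 1.41), ("QF_BV solving", 12.30, 9.87)]))
```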
5d. After creation, check the pull request to ensure it is correct, includes all expected files, and doesn't include any unwanted files or changes. Make any necessary corrections by pushing further commits to the branch.
6. At the end of your work, add a very, very brief comment (at most two-sentences) to the issue from step 1a, saying you have worked on the particular goal, linking to any pull request you created, and indicating whether you made any progress or not.
@include agentics/shared/no-push-to-main.md
@include agentics/shared/tool-refused.md
@include agentics/shared/include-link.md
@include agentics/shared/xpia.md
@include agentics/shared/gh-extra-pr-tools.md
<!-- You can whitelist tools in .github/workflows/build-tools.md file -->
@include? agentics/build-tools.md
<!-- You can customize prompting and tools in .github/workflows/agentics/daily-perf-improver.config -->
@include? agentics/daily-perf-improver.config.md

File diff suppressed because it is too large.


@@ -1,169 +0,0 @@
---
on:
workflow_dispatch:
schedule:
# Run daily at 2am UTC, all days except Saturday and Sunday
- cron: "0 2 * * 1-5"
stop-after: +48h # workflow will no longer trigger after 48 hours
timeout_minutes: 30
permissions: read-all
network: defaults
safe-outputs:
create-issue: # needed to create planning issue
title-prefix: "${{ github.workflow }}"
update-issue: # can update the planning issue if it already exists
target: "*" # one single issue
body: # can update the issue title/body only
title: # can update the issue title/body only
add-comment:
target: "*" # can add a comment to any one single issue or pull request
create-pull-request: # can create a pull request
draft: true
github-token: ${{ secrets.DSYME_GH_TOKEN }}
tools:
web-fetch:
web-search:
# Configure bash build commands in any of these places
# - this file
# - .github/workflows/agentics/daily-test-improver.config.md
# - .github/workflows/agentics/build-tools.md (shared).
#
# Run `gh aw compile` after editing to recompile the workflow.
#
# By default this workflow allows all bash commands within the confine of Github Actions VM
bash: [ ":*" ]
steps:
- name: Checkout repository
uses: actions/checkout@v5
- name: Check if action.yml exists
id: check_coverage_steps_file
run: |
if [ -f ".github/actions/daily-test-improver/coverage-steps/action.yml" ]; then
echo "exists=true" >> $GITHUB_OUTPUT
else
echo "exists=false" >> $GITHUB_OUTPUT
fi
shell: bash
- name: Build the project and produce coverage report, logging to coverage-steps.log
if: steps.check_coverage_steps_file.outputs.exists == 'true'
uses: ./.github/actions/daily-test-improver/coverage-steps
id: coverage-steps
continue-on-error: true # the model may not have got it right, so continue anyway, the model will check the results and try to fix the steps
---
# Daily Test Coverage Improver
## Job Description
Your name is ${{ github.workflow }}. Your job is to act as an agentic coder for the GitHub repository `${{ github.repository }}`. You're really good at all kinds of tasks. You're excellent at everything.
1. Testing research (if not done before)
1a. Check if an open issue with label "daily-test-improver-plan" exists using `search_issues`. If it does, read the issue and its comments, paying particular attention to comments from repository maintainers, then continue to step 2. If the issue doesn't exist, follow the steps below to create it:
1b. Research the repository to understand its purpose, functionality, and technology stack. Look at the README.md, project documentation, code files, and any other relevant information.
1c. Research the current state of test coverage in the repository. Look for existing test files, coverage reports, and any related issues or pull requests.
1d. Create an issue with title "${{ github.workflow }} - Research and Plan" and label "daily-test-improver-plan" that includes:
- A summary of your findings about the repository, its testing strategies, its test coverage
- A plan for how you will approach improving test coverage, including specific areas to focus on and strategies to use
- Details of the commands needed to run to build the project, run tests, and generate coverage reports
- Details of how tests are organized in the repo, and how new tests should be organized
- Opportunities for new ways of greatly increasing test coverage
- Any questions or clarifications needed from maintainers
1e. Continue to step 2.
2. Coverage steps inference and configuration (if not done before)
2a. Check if `.github/actions/daily-test-improver/coverage-steps/action.yml` exists in this repo. Note this path is relative to the current directory (the root of the repo). If it exists then continue to step 3. Otherwise continue to step 2b.
2b. Check if an open pull request with title "${{ github.workflow }} - Updates to complete configuration" exists in this repo. If it does, add a comment to the pull request saying configuration needs to be completed, then exit the workflow. Otherwise continue to step 2c.
2c. Have a careful think about the CI commands needed to build the repository, run tests, produce a combined coverage report and upload it as an artifact. Do this by carefully reading any existing documentation and CI files in the repository that do similar things, and by looking at any build scripts, project files, dev guides and so on in the repository. If multiple projects are present, perform build and coverage testing on as many as possible, and where possible merge the coverage reports into one combined report. Write out the steps you worked out, in order, as a series of YAML steps suitable for inclusion in a GitHub Action.
2d. Create the file `.github/actions/daily-test-improver/coverage-steps/action.yml` containing these steps, ensuring that the action.yml file is valid. Leave comments in the file to explain what the steps are doing, where the coverage report will be generated, and any other relevant information. Ensure that the steps include uploading the coverage report(s) as an artifact called "coverage". Each step of the action should append its output to a file called `coverage-steps.log` in the root of the repository. Ensure that the action.yml file is valid and correctly formatted.
2e. Before running any of the steps, make a pull request for the addition of the `action.yml` file, with title "${{ github.workflow }} - Updates to complete configuration". Encourage the maintainer to review the files carefully to ensure they are appropriate for the project.
2f. Try to run through the steps you worked out manually, one by one. If a step needs updating, update the branch you created in step 2e. Continue through all the steps. If you can't get them to work, create an issue describing the problem and exit the entire workflow.
2g. Exit the entire workflow.
3. Decide what to work on
3a. You can assume that the repository is in a state where the steps in `.github/actions/daily-test-improver/coverage-steps/action.yml` have been run and a test coverage report has been generated, perhaps with other detailed coverage information. Look at the steps in `.github/actions/daily-test-improver/coverage-steps/action.yml` to work out what has been run and where the coverage report should be, and find it. Also read any output files such as `coverage-steps.log` to understand what has been done. If the coverage steps failed, work out what needs to be fixed in `.github/actions/daily-test-improver/coverage-steps/action.yml` and make a pull request for those fixes and exit the entire workflow. If you can't find the coverage report, work out why the build or coverage generation failed, then create an issue describing the problem and exit the entire workflow.
3b. Read the coverage report. Be detailed, looking to understand the files, functions, branches, and lines of code that are not covered by tests. Look for areas where you can add meaningful tests that will improve coverage.
3c. Check the most recent pull request with title starting with "${{ github.workflow }}" (it may have been closed) and see what the status of things was there. These are your notes from last time you did your work, and may include useful recommendations for future areas to work on.
3d. Check for existing open pull requests opened by you with titles starting with "${{ github.workflow }}". Don't repeat work from any open pull requests.
3e. If you think the plan is inadequate and needs a refresh, update the planning issue by rewriting the actual body of the issue, taking into account any comments from maintainers. Add a single comment to the issue saying only that the plan has been updated, with a one-sentence explanation of why; put the updated plan in the issue body itself, not in comments. Then continue to step 3f.
3f. Based on all of the above, select an area of relatively low coverage that appears tractable for further test additions.
4. Do the following:
4a. Create a new branch
4b. Write new tests to improve coverage. Ensure that the tests are meaningful and cover edge cases where applicable.
4c. Build the tests if necessary and remove any build errors.
4d. Run the new tests to ensure they pass.
4e. Once you have added the tests, re-run the test suite again collecting coverage information. Check that overall coverage has improved. If coverage has not improved then exit.
4f. Apply any automatic code formatting used in the repo.
4g. Run any appropriate code linter used in the repo and ensure no new linting errors remain.
4h. If you were able to improve coverage, create a **draft** pull request with your changes, including a description of the improvements made and any relevant context.
- Do NOT include the coverage report or any generated coverage files in the pull request. Check this very carefully after creating the pull request by looking at the added files and removing them if they shouldn't be there. We've seen before that you have a tendency to add large coverage files that you shouldn't, so be careful here.
- In the description of the pull request, include
- A summary of the changes made
- The problems you found
- The actions you took
- Include a section "Test coverage results" giving exact coverage numbers before and after the changes, drawing from the coverage reports, in a table if possible. Include changes in the numbers for overall coverage. If coverage numbers are guesstimates rather than based on coverage reports, say so. Don't blag, be honest. Include the exact commands the user will need to run to validate accurate coverage numbers.
- Include a section "Replicating the test coverage measurements" with the exact commands needed to install dependencies, build the code, run tests, generate coverage reports including a summary before/after table, so that someone else can replicate them. If you used any scripts or programs to help with this, include them in the repository if appropriate, or include links to them if they are external.
- List possible other areas for future improvement
- In a collapsed section list
- all bash commands you ran
- all web searches you performed
- all web pages you fetched
- After creation, check the pull request to ensure it is correct, includes all expected files, and doesn't include any unwanted files or changes. Make any necessary corrections by pushing further commits to the branch.
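The overall percentages in the "Test coverage results" section should come straight out of the coverage reports. A small sketch of extracting that figure from an lcov/genhtml-style summary line; the summary format is an assumption and will vary by toolchain:

```python
import re

def overall_line_coverage(summary: str) -> float:
    # matches e.g. "  lines......: 78.4% (12345 of 15742 lines)"
    m = re.search(r"lines\.*\s*:\s*([\d.]+)%", summary)
    if m is None:
        raise ValueError("no line-coverage figure found")
    return float(m.group(1))

# the percentages below are invented, for illustration only
before = overall_line_coverage("lines......: 76.1% (11980 of 15742 lines)")
after = overall_line_coverage("lines......: 78.4% (12345 of 15742 lines)")
print(f"overall line coverage: {before}% -> {after}% (+{after - before:.1f}%)")
```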
5. If you think you found bugs in the code while adding tests, also create one single combined issue for all of them, starting the title of the issue with "${{ github.workflow }}". Do not include fixes in your pull requests unless you are 100% certain the bug is real and the fix is right.
6. At the end of your work, add a very, very brief comment (at most two-sentences) to the issue from step 1a, saying you have worked on the particular goal, linking to any pull request you created, and indicating whether you made any progress or not.
@include agentics/shared/no-push-to-main.md
@include agentics/shared/tool-refused.md
@include agentics/shared/include-link.md
@include agentics/shared/xpia.md
@include agentics/shared/gh-extra-pr-tools.md
<!-- You can whitelist tools in .github/workflows/build-tools.md file -->
@include? agentics/build-tools.md
<!-- You can customize prompting and tools in .github/workflows/agentics/daily-test-improver.config.md -->
@include? agentics/daily-test-improver.config.md


@@ -1,19 +0,0 @@
name: GenAI Find Duplicate Issues
on:
issues:
types: [opened, reopened]
permissions:
models: read
issues: write
concurrency:
group: ${{ github.workflow }}-${{ github.event.issue.number }}
cancel-in-progress: true
jobs:
genai-issue-dedup:
runs-on: ubuntu-latest
steps:
- name: Run action-issue-dedup Action
uses: pelikhan/action-genai-issue-dedup@v0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
github_issue: ${{ github.event.issue.number }}

.github/workflows/deeptest.lock.yml generated vendored Normal file

File diff suppressed because it is too large.

.github/workflows/deeptest.md vendored Normal file

@@ -0,0 +1,59 @@
---
description: Generate comprehensive test cases for Z3 source files
on:
workflow_dispatch:
inputs:
file_path:
description: 'Path to the source file to generate tests for (e.g., src/util/vector.h)'
required: true
type: string
issue_number:
description: 'Issue number to link the generated tests to (optional)'
required: false
type: string
permissions: read-all
network: defaults
tools:
cache-memory: true
serena: ["python"]
github:
toolsets: [default]
bash: [":*"]
edit: {}
glob: {}
safe-outputs:
create-pull-request:
title-prefix: "[DeepTest] "
labels: [automated-tests, deeptest]
draft: false
add-comment:
max: 2
missing-tool:
create-issue: true
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@v5
---
<!-- Edit the file linked below to modify the agent without recompilation. Feel free to move the entire markdown body to that file. -->
{{#runtime-import agentics/deeptest.md}}
## Context
You are the DeepTest agent for the Z3 theorem prover repository.
**Workflow dispatch file path**: ${{ github.event.inputs.file_path }}
**Issue number** (if linked): ${{ github.event.inputs.issue_number }}
## Instructions
Follow the workflow steps defined in the imported prompt above to generate comprehensive test cases for the specified source file.

.github/workflows/docs.yml vendored Normal file

@@ -0,0 +1,142 @@
name: Documentation
on:
workflow_dispatch:
release:
types: [published]
permissions:
contents: read
concurrency:
group: "pages"
cancel-in-progress: false
env:
EM_VERSION: 3.1.73
jobs:
build-go-docs:
name: Build Go Documentation
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v6.0.2
- name: Setup Go
uses: actions/setup-go@v6
with:
go-version: '1.21'
- name: Generate Go Documentation
working-directory: doc
run: |
python3 mk_go_doc.py --output-dir=api/html/go --go-api-path=../src/api/go
- name: Upload Go Documentation
uses: actions/upload-artifact@v6
with:
name: go-docs
path: doc/api/html/go/
retention-days: 1
build-docs:
name: Build Documentation
runs-on: ubuntu-latest
needs: build-go-docs
steps:
- name: Checkout
uses: actions/checkout@v6.0.2
- name: Setup node
uses: actions/setup-node@v6
with:
node-version: "lts/*"
# Setup OCaml via action
- uses: ocaml/setup-ocaml@v3
with:
ocaml-compiler: 5
opam-disable-sandboxing: true
- name: Install system dependencies
run: |
sudo apt-get update
sudo apt-get install -y doxygen graphviz python3 python3-pip
sudo apt-get install -y \
bubblewrap m4 libgmp-dev pkg-config
- name: Install required opam packages
run: opam install -y ocamlfind zarith
- name: Build Z3 natively for Python documentation
run: |
eval $(opam env)
echo "CC: $CC"
echo "CXX: $CXX"
echo "OCAMLFIND: $(which ocamlfind)"
echo "OCAMLC: $(which ocamlc)"
echo "OCAMLOPT: $(which ocamlopt)"
echo "OCAML_VERSION: $(ocamlc -version)"
echo "OCAMLLIB: $OCAMLLIB"
mkdir build-x64
python3 scripts/mk_make.py --python --ml --build=build-x64
cd build-x64
make -j$(nproc)
- name: Generate Documentation (from doc directory)
working-directory: doc
run: |
eval $(opam env)
python3 mk_api_doc.py --mld --go --output-dir=api --z3py-package-path=../build-x64/python/z3 --build=../build-x64
Z3BUILD=build-x64 python3 mk_params_doc.py
mkdir api/html/ml
ocamldoc -html -d api/html/ml -sort -hide Z3 -I $( ocamlfind query zarith ) -I ../build-x64/api/ml ../build-x64/api/ml/z3enums.mli ../build-x64/api/ml/z3.mli
- name: Setup emscripten
uses: mymindstorm/setup-emsdk@v14
with:
no-install: true
version: ${{env.EM_VERSION}}
actions-cache-folder: "emsdk-cache"
- name: Install dependencies
working-directory: src/api/js
run: npm ci
- name: Build TypeScript
working-directory: src/api/js
run: npm run build:ts
- name: Build wasm
working-directory: src/api/js
run: |
emsdk install ${EM_VERSION}
emsdk activate ${EM_VERSION}
source $(dirname $(which emsdk))/emsdk_env.sh
which node
which clang++
npm run build:wasm
- name: Generate JS Documentation (from doc directory)
working-directory: doc
run: |
eval $(opam env)
python3 mk_api_doc.py --js --go --output-dir=api --mld --z3py-package-path=../build-x64/python/z3 --build=../build-x64
- name: Download Go Documentation
uses: actions/download-artifact@v7
with:
name: go-docs
path: doc/api/html/go/
- name: Deploy to z3prover.github.io
uses: peaceiris/actions-gh-pages@v4
with:
deploy_key: ${{ secrets.ACTIONS_DEPLOY_KEY }}
external_repository: Z3Prover/z3prover.github.io
destination_dir: ./api
publish_branch: master
publish_dir: ./doc/api
user_name: github-actions[bot]
user_email: github-actions[bot]@users.noreply.github.com

.github/workflows/issue-backlog-processor.lock.yml generated vendored Normal file (1150 lines)
File diff suppressed because it is too large
@@ -0,0 +1,250 @@
---
description: Processes the backlog of open issues every second day, creates a discussion with findings, and comments on relevant issues
on:
schedule: every 2 days
workflow_dispatch:
permissions: read-all
tools:
cache-memory: true
github:
toolsets: [default]
safe-outputs:
create-discussion:
title-prefix: "[Issue Backlog] "
category: "Agentic Workflows"
close-older-discussions: true
add-comment:
max: 20
github-token: ${{ secrets.GITHUB_TOKEN }}
timeout-minutes: 60
---
# Issue Backlog Processor
## Job Description
Your name is ${{ github.workflow }}. You are an expert AI agent tasked with processing the backlog of open issues in the Z3 theorem prover repository `${{ github.repository }}`. Your mission is to analyze open issues systematically and help maintainers manage the backlog effectively by surfacing actionable insights and providing helpful comments.
## Your Task
### 1. Initialize or Resume Progress (Cache Memory)
Check your cache memory for:
- List of issue numbers already processed and commented on in previous runs
- Issues previously flagged for closure, duplication, or merge
- Date of last run
If cache data exists:
- Skip re-commenting on issues already commented on in a recent run (within the last 4 days)
- Re-evaluate previously flagged issues to see if their status has changed
- Note any new issues that have been opened since the last run
If this is the first run or memory is empty, initialize a fresh tracking structure.
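For concreteness, the cache bookkeeping described above could be sketched as follows. This is an illustrative Python sketch only: the field names and the 4-day cooldown mirror the rules in this section but are not a fixed schema required by the workflow.

```python
import json
from datetime import datetime, timezone

def load_or_init_cache(raw):
    """Return the tracking structure, starting fresh when no cache exists."""
    if raw:
        return json.loads(raw)
    return {
        "processed": [],   # issue numbers analyzed in previous runs
        "commented": {},   # issue number (as str) -> ISO date of last comment
        "flagged": {"close": [], "duplicate": [], "merge": []},
        "last_run": None,
    }

def needs_comment(cache, issue, now, cooldown_days=4):
    """Skip issues commented on within the cooldown window."""
    last = cache["commented"].get(str(issue))
    if last is None:
        return True
    return (now - datetime.fromisoformat(last)).days >= cooldown_days
```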
### 2. Fetch Open Issues
Use the GitHub API to list all open issues in the repository:
- Retrieve all open issues (paginate through all pages to get the full list)
- Exclude pull requests (filter where `pull_request` is not present)
- Sort by last updated date (most recently updated first)
- For each issue, collect:
- Issue number, title, body, labels, author
- Date created and last updated
- Number of comments
- All comments (for issues with comments)
- Any referenced pull requests, commits, or other issues
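The issues endpoint returns pull requests alongside issues, so the PR filter and the sort step above can be sketched as a pure function over the fetched items (a minimal sketch; it assumes each item carries the standard `pull_request` marker and an ISO-8601 `updated_at` field):

```python
def open_issues_only(items):
    """Drop pull requests (they carry a 'pull_request' key in the issues API)
    and sort by last update, most recent first."""
    issues = [it for it in items if "pull_request" not in it]
    # ISO-8601 timestamps sort correctly as strings
    return sorted(issues, key=lambda it: it["updated_at"], reverse=True)
```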
### 3. Analyze Each Issue
For each open issue, perform the following analysis:
#### 3.1 Identify Issues to Close
An issue can be safely closed if any of the following apply:
- A merged pull request explicitly references the issue (e.g., "fixes #NNN", "closes #NNN") and the fix has been shipped
- Comments from the reporter or maintainers indicate the issue is resolved
- The described behavior no longer occurs in the current codebase (based on code inspection or comments confirming resolution)
- The issue is a question that has been fully answered and no further action is needed
- The issue is clearly obsolete (e.g., references a version or feature that no longer exists)
**Be conservative**: When in doubt, do NOT flag an issue for closure. Only flag issues where you have high confidence.
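Detecting "fixes #NNN" / "closes #NNN" references in merged PR descriptions can be approximated with a regular expression. The sketch below covers the common closing keywords only; it is a simplification, not the full set of patterns GitHub recognizes:

```python
import re

# Common closing keywords followed by an issue reference, e.g. "fixes #123"
CLOSING_RE = re.compile(
    r"\b(?:fix(?:es|ed)?|close[sd]?|resolve[sd]?)\s+#(\d+)",
    re.IGNORECASE,
)

def closing_references(text):
    """Issue numbers a PR description claims to fix or close."""
    return {int(n) for n in CLOSING_RE.findall(text)}
```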
#### 3.2 Identify Duplicate or Mergeable Issues
An issue may be a duplicate or candidate for merging if:
- Another open issue describes the same bug, behavior, or feature request
- The same root cause has been identified across multiple issues
- Issues are closely related enough that they should be tracked together
For each potential duplicate, identify:
- The original or canonical issue it duplicates
- The reason you believe they are related
#### 3.3 Identify Suggested Fixes
For issues describing bugs, incorrect behavior, or missing features:
- Analyze the issue description, stack traces, SMT-LIB2 examples, or code snippets provided
- Identify the likely Z3 component(s) involved (e.g., SAT solver, arithmetic theory, string solver, API, language bindings)
- Point to specific source files or modules in `src/` that are likely relevant
- Suggest what kind of fix might be needed (e.g., edge case handling, missing method, API inconsistency)
Focus on issues where a reasonable fix can be concisely described. Do not guess at fixes for complex soundness or performance issues.
#### 3.4 Determine If a Comment Is Warranted
Add a comment to an issue if you have **genuinely useful and specific information** to contribute, such as:
- A related merged PR or commit that might resolve or partially address the issue
- A confirmed duplicate with a reference to the canonical issue
- A request for clarification or additional diagnostic information that would help resolve the issue
- Confirmation that a fix has been shipped in a recent release
- Specific guidance on which component to look at for a fix
**Do NOT add generic comments**, low-value acknowledgments, or comments that simply restate the issue.
### 4. Create a Discussion with Findings
Create a GitHub Discussion summarizing the analysis results.
**Title:** "[Issue Backlog] Backlog Analysis - [Date]"
**Content structure:**
```markdown
# Issue Backlog Analysis - [Date]
## Executive Summary
- **Total open issues analyzed**: N
- **Issues recommended for closure**: N
- **Potential duplicates / merge candidates**: N
- **Issues with suggested fixes**: N
- **Issues commented on**: N
---
## Issues Recommended for Closure
These issues appear to be already resolved, no longer relevant, or fully answered.
| Issue | Title | Reason for Closure |
|-------|-------|-------------------|
| #NNN | [title] | [reasoning] |
...
---
## Potential Duplicates / Merge Candidates
These issues appear to overlap with other open or closed issues.
| Issue | Title | Duplicate of | Notes |
|-------|-------|-------------|-------|
| #NNN | [title] | #MMM | [reasoning] |
...
---
## Issues with Suggested Fixes
These issues have identifiable root causes or affected components.
### #NNN - [Issue Title]
- **Component**: [e.g., arithmetic solver, Python bindings, SMT-LIB2 parser]
- **Relevant source files**: [e.g., `src/smt/theory_arith.cpp`]
- **Suggested fix direction**: [concise description]
...
---
## Issues Needing More Information
These issues lack sufficient detail to investigate or reproduce.
| Issue | Title | Missing Information |
|-------|-------|-------------------|
| #NNN | [title] | [what is needed] |
...
---
## Notable Issues Deserving Attention
Issues that are particularly impactful or have been waiting a long time.
| Issue | Title | Age | Notes |
|-------|-------|-----|-------|
| #NNN | [title] | [days old] | [why notable] |
...
---
*Automated by Issue Backlog Processor - runs every 2 days*
```
### 5. Comment on Issues
For each issue identified in step 3.4 as warranting a comment, post a helpful comment using the `add-comment` safe output.
**Comment guidelines:**
- Be specific and actionable
- Reference relevant PRs, commits, or other issues by number
- Use a professional and respectful tone
- Identify yourself as an automated analysis agent at the end of each comment
- For potential closures, ask the reporter to confirm whether the issue is still relevant
- For duplicates, politely link to the canonical issue
Example comment for a potentially resolved issue:
```
It appears that PR #MMM (merged on [date]) may have addressed this issue by [brief description]. Could you confirm whether this problem still occurs with the latest code? If it has been resolved, we can close this issue.
*This comment was added by the automated Issue Backlog Processor.*
```
Example comment for a duplicate:
```
This issue appears to be related to (or a duplicate of) #MMM which describes a similar problem. Linking the two for tracking purposes.
*This comment was added by the automated Issue Backlog Processor.*
```
### 6. Update Cache Memory
After completing the analysis, update cache memory with:
- List of issue numbers processed in this run
- Issues that were commented on (to avoid duplicate comments in future runs)
- Issues flagged for closure, duplication, or merge
- Date and timestamp of this run
- Count of total issues analyzed
## Guidelines
- **Prioritize accuracy over coverage**: It is better to analyze 20 issues well than 200 issues poorly
- **Be conservative on closures**: Incorrectly closing a valid issue is harmful; when in doubt, keep it open
- **Respect the community**: Z3 is used by researchers, security engineers, and developers — treat all issues respectfully
- **Focus on actionable output**: Every item in the discussion should be actionable for a maintainer
- **Avoid comment spam**: Do not add comments unless they provide specific and useful information
- **Understand Z3's complexity**: Soundness bugs (wrong answers) are critical and should never be auto-closed
## Z3-Specific Context
Z3 is an industrial-strength theorem prover and SMT solver used in program verification, security analysis, and formal methods. Key components to be aware of:
- **SMT solver** (`src/smt/`): Core solving engine with theory plugins
- **SAT solver** (`src/sat/`): Boolean satisfiability engine
- **Theory solvers**: Arithmetic (`src/smt/theory_arith*`), arrays, bit-vectors, strings, etc.
- **API** (`src/api/`): C API and language bindings (Python, Java, C#, OCaml, Go, JavaScript)
- **Tactics** (`src/tactic/`): Configurable solving strategies
- **Parser** (`src/parsers/`): SMT-LIB2 and other input formats
When analyzing issues, consider:
- Whether the issue has a reproducible SMT-LIB2 test case (important for SMT solver bugs)
- Whether the issue affects a specific language binding or the core solver
- Whether it is a soundness issue (critical), performance issue (important), or API/usability issue (moderate)
- The Z3 version mentioned and whether it has since been fixed in a newer release

@@ -1,21 +0,0 @@
name: GenAI Issue Labeller
on:
issues:
types: [opened, reopened, edited]
permissions:
contents: read
issues: write
models: read
concurrency:
group: ${{ github.workflow }}-${{ github.event.issue.number }}
cancel-in-progress: true
jobs:
genai-issue-labeller:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: pelikhan/action-genai-issue-labeller@v0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
github_issue: ${{ github.event.issue.number }}
debug: "*"

@@ -14,7 +14,7 @@ jobs:
       BUILD_TYPE: Release
     steps:
     - name: Checkout Repo
-      uses: actions/checkout@v5
+      uses: actions/checkout@v6.0.2
     - name: Build
       run: |

@@ -14,7 +14,7 @@ jobs:
       BUILD_TYPE: Release
     steps:
     - name: Checkout Repo
-      uses: actions/checkout@v5
+      uses: actions/checkout@v6.0.2
     - name: Build
       run: |

.github/workflows/nightly-validation.yml vendored Normal file (808 lines)

@@ -0,0 +1,808 @@
name: Nightly Build Validation
on:
workflow_run:
workflows: ["Nightly Build"]
types:
- completed
workflow_dispatch:
inputs:
release_tag:
description: 'Release tag to validate (default: Nightly)'
required: false
default: 'Nightly'
permissions:
contents: read
jobs:
# ============================================================================
# VALIDATION JOBS FOR NUGET PACKAGES
# ============================================================================
validate-nuget-windows-x64:
name: "Validate NuGet on Windows x64"
runs-on: windows-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup .NET
uses: actions/setup-dotnet@v5
with:
dotnet-version: '8.x'
- name: Download NuGet package from release
env:
GH_TOKEN: ${{ github.token }}
shell: pwsh
run: |
$tag = "${{ github.event.inputs.release_tag }}"
if ([string]::IsNullOrEmpty($tag)) {
$tag = "Nightly"
}
gh release download $tag --pattern "*.nupkg" --dir nuget-packages
- name: Create test project
shell: pwsh
run: |
mkdir test-nuget
cd test-nuget
dotnet new console
$nupkgFile = Get-ChildItem ../nuget-packages/*.nupkg -Exclude *.symbols.nupkg | Select-Object -First 1
dotnet add package Microsoft.Z3 --source ../nuget-packages --prerelease
- name: Create test code
shell: pwsh
run: |
@"
using Microsoft.Z3;
class Program {
static void Main() {
using (Context ctx = new Context()) {
IntExpr x = ctx.MkIntConst("x");
Solver solver = ctx.MkSolver();
solver.Assert(ctx.MkGt(x, ctx.MkInt(0)));
if (solver.Check() == Status.SATISFIABLE) {
System.Console.WriteLine("sat");
System.Console.WriteLine(solver.Model);
}
}
}
}
"@ | Out-File -FilePath test-nuget/Program.cs -Encoding utf8
- name: Run test
shell: pwsh
run: |
cd test-nuget
dotnet run
validate-nuget-ubuntu-x64:
name: "Validate NuGet on Ubuntu x64"
runs-on: ubuntu-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup .NET
uses: actions/setup-dotnet@v5
with:
dotnet-version: '8.x'
- name: Download NuGet package from release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ github.event.inputs.release_tag }}"
if [ -z "$TAG" ]; then
TAG="Nightly"
fi
gh release download $TAG --pattern "*.nupkg" --dir nuget-packages
- name: Create test project
run: |
mkdir test-nuget
cd test-nuget
dotnet new console
dotnet add package Microsoft.Z3 --source ../nuget-packages --prerelease
- name: Create test code
run: |
cat > test-nuget/Program.cs << 'EOF'
using Microsoft.Z3;
class Program {
static void Main() {
using (Context ctx = new Context()) {
IntExpr x = ctx.MkIntConst("x");
Solver solver = ctx.MkSolver();
solver.Assert(ctx.MkGt(x, ctx.MkInt(0)));
if (solver.Check() == Status.SATISFIABLE) {
System.Console.WriteLine("sat");
System.Console.WriteLine(solver.Model);
}
}
}
}
EOF
- name: Run test
run: |
cd test-nuget
dotnet run
validate-nuget-macos-x64:
name: "Validate NuGet on macOS x64"
runs-on: macos-15-intel
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup .NET
uses: actions/setup-dotnet@v5
with:
dotnet-version: '8.x'
- name: Download NuGet package from release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ github.event.inputs.release_tag }}"
if [ -z "$TAG" ]; then
TAG="Nightly"
fi
gh release download $TAG --pattern "*.nupkg" --dir nuget-packages
- name: Create test project
run: |
mkdir test-nuget
cd test-nuget
dotnet new console
dotnet add package Microsoft.Z3 --source ../nuget-packages --prerelease
# Configure project to properly load native dependencies on macOS x64
# Use AnyCPU without RuntimeIdentifier to avoid architecture mismatch
# The .NET runtime will automatically select the correct native library from runtimes/osx-x64/native/
cat > test-nuget.csproj << 'CSPROJ'
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<PlatformTarget>AnyCPU</PlatformTarget>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Z3" Version="*" />
</ItemGroup>
</Project>
CSPROJ
- name: Create test code
run: |
cat > test-nuget/Program.cs << 'EOF'
using Microsoft.Z3;
class Program {
static void Main() {
using (Context ctx = new Context()) {
IntExpr x = ctx.MkIntConst("x");
Solver solver = ctx.MkSolver();
solver.Assert(ctx.MkGt(x, ctx.MkInt(0)));
if (solver.Check() == Status.SATISFIABLE) {
System.Console.WriteLine("sat");
System.Console.WriteLine(solver.Model);
}
}
}
}
EOF
- name: Run test
run: |
cd test-nuget
dotnet run
validate-nuget-macos-arm64:
name: "Validate NuGet on macOS ARM64"
runs-on: macos-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup .NET
uses: actions/setup-dotnet@v5
with:
dotnet-version: '8.x'
- name: Download NuGet package from release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ github.event.inputs.release_tag }}"
if [ -z "$TAG" ]; then
TAG="Nightly"
fi
gh release download $TAG --pattern "*.nupkg" --dir nuget-packages
- name: Create test project
run: |
mkdir test-nuget
cd test-nuget
dotnet new console
dotnet add package Microsoft.Z3 --source ../nuget-packages --prerelease
# Configure project to properly load native dependencies on macOS ARM64
# Use AnyCPU without RuntimeIdentifier to avoid architecture mismatch
# The .NET runtime will automatically select the correct native library from runtimes/osx-arm64/native/
cat > test-nuget.csproj << 'CSPROJ'
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net8.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<PlatformTarget>AnyCPU</PlatformTarget>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Z3" Version="*" />
</ItemGroup>
</Project>
CSPROJ
- name: Create test code
run: |
cat > test-nuget/Program.cs << 'EOF'
using Microsoft.Z3;
class Program {
static void Main() {
using (Context ctx = new Context()) {
IntExpr x = ctx.MkIntConst("x");
Solver solver = ctx.MkSolver();
solver.Assert(ctx.MkGt(x, ctx.MkInt(0)));
if (solver.Check() == Status.SATISFIABLE) {
System.Console.WriteLine("sat");
System.Console.WriteLine(solver.Model);
}
}
}
}
EOF
- name: Run test
run: |
cd test-nuget
dotnet run
# ============================================================================
# VALIDATION JOBS FOR EXECUTABLES
# ============================================================================
validate-exe-windows-x64:
name: "Validate executable on Windows x64"
runs-on: windows-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download Windows x64 build from release
env:
GH_TOKEN: ${{ github.token }}
shell: pwsh
run: |
$tag = "${{ github.event.inputs.release_tag }}"
if ([string]::IsNullOrEmpty($tag)) {
$tag = "Nightly"
}
gh release download $tag --pattern "*x64-win*.zip" --dir downloads
- name: Extract and test
shell: pwsh
run: |
$zipFile = Get-ChildItem downloads/*x64-win*.zip | Select-Object -First 1
Expand-Archive -Path $zipFile -DestinationPath z3-test
$z3Dir = Get-ChildItem z3-test -Directory | Select-Object -First 1
& "$z3Dir/bin/z3.exe" --version
# Test basic SMT solving
@"
(declare-const x Int)
(assert (> x 0))
(check-sat)
(get-model)
"@ | & "$z3Dir/bin/z3.exe" -in
validate-exe-windows-x86:
name: "Validate executable on Windows x86"
runs-on: windows-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download Windows x86 build from release
env:
GH_TOKEN: ${{ github.token }}
shell: pwsh
run: |
$tag = "${{ github.event.inputs.release_tag }}"
if ([string]::IsNullOrEmpty($tag)) {
$tag = "Nightly"
}
gh release download $tag --pattern "*x86-win*.zip" --dir downloads
- name: Extract and test
shell: pwsh
run: |
$zipFile = Get-ChildItem downloads/*x86-win*.zip | Select-Object -First 1
Expand-Archive -Path $zipFile -DestinationPath z3-test
$z3Dir = Get-ChildItem z3-test -Directory | Select-Object -First 1
& "$z3Dir/bin/z3.exe" --version
# Test basic SMT solving
@"
(declare-const x Int)
(assert (> x 0))
(check-sat)
(get-model)
"@ | & "$z3Dir/bin/z3.exe" -in
validate-exe-ubuntu-x64:
name: "Validate executable on Ubuntu x64"
runs-on: ubuntu-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download Ubuntu x64 build from release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ github.event.inputs.release_tag }}"
if [ -z "$TAG" ]; then
TAG="Nightly"
fi
gh release download $TAG --pattern "*x64-glibc*.zip" --dir downloads
- name: Extract and test
run: |
cd downloads
unzip *x64-glibc*.zip
Z3_DIR=$(find . -maxdepth 1 -type d -name "z3-*" | head -n 1)
cd "$Z3_DIR"
./bin/z3 --version
# Test basic SMT solving
echo "(declare-const x Int)
(assert (> x 0))
(check-sat)
(get-model)" | ./bin/z3 -in
validate-exe-macos-x64:
name: "Validate executable on macOS x64"
runs-on: macos-15-intel
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download macOS x64 build from release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ github.event.inputs.release_tag }}"
if [ -z "$TAG" ]; then
TAG="Nightly"
fi
gh release download $TAG --pattern "*x64-osx*.zip" --dir downloads
- name: Extract and test
run: |
cd downloads
unzip *x64-osx*.zip
Z3_DIR=$(find . -maxdepth 1 -type d -name "z3-*" | head -n 1)
cd "$Z3_DIR"
./bin/z3 --version
# Test basic SMT solving
echo "(declare-const x Int)
(assert (> x 0))
(check-sat)
(get-model)" | ./bin/z3 -in
validate-exe-macos-arm64:
name: "Validate executable on macOS ARM64"
runs-on: macos-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download macOS ARM64 build from release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ github.event.inputs.release_tag }}"
if [ -z "$TAG" ]; then
TAG="Nightly"
fi
gh release download $TAG --pattern "*arm64-osx*.zip" --dir downloads
- name: Extract and test
run: |
cd downloads
unzip *arm64-osx*.zip
Z3_DIR=$(find . -maxdepth 1 -type d -name "z3-*" | head -n 1)
cd "$Z3_DIR"
./bin/z3 --version
# Test basic SMT solving
echo "(declare-const x Int)
(assert (> x 0))
(check-sat)
(get-model)" | ./bin/z3 -in
# ============================================================================
# REGRESSION TEST VALIDATION
# ============================================================================
validate-regressions-ubuntu:
name: "Validate regression tests on Ubuntu"
runs-on: ubuntu-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 60
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download Ubuntu x64 build from release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ github.event.inputs.release_tag }}"
if [ -z "$TAG" ]; then
TAG="Nightly"
fi
gh release download $TAG --pattern "*x64-glibc*.zip" --dir downloads
- name: Extract build
run: |
cd downloads
unzip *x64-glibc*.zip
cd ..
- name: Clone z3test repository
run: git clone https://github.com/z3prover/z3test z3test
- name: Run regression tests
run: |
Z3_PATH=$(find downloads -name z3 -type f | head -n 1)
chmod +x $Z3_PATH
python z3test/scripts/test_benchmarks.py $Z3_PATH z3test/regressions/smt2
validate-regressions-windows:
name: "Validate regression tests on Windows"
runs-on: windows-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 60
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download Windows x64 build from release
env:
GH_TOKEN: ${{ github.token }}
shell: pwsh
run: |
$tag = "${{ github.event.inputs.release_tag }}"
if ([string]::IsNullOrEmpty($tag)) {
$tag = "Nightly"
}
gh release download $tag --pattern "*x64-win*.zip" --dir downloads
- name: Extract build
shell: pwsh
run: |
$zipFile = Get-ChildItem downloads/*x64-win*.zip | Select-Object -First 1
Expand-Archive -Path $zipFile -DestinationPath downloads
- name: Clone z3test repository
run: git clone https://github.com/z3prover/z3test z3test
- name: Run regression tests
shell: pwsh
run: |
$z3Path = Get-ChildItem downloads -Filter z3.exe -Recurse | Select-Object -First 1
python z3test/scripts/test_benchmarks.py $z3Path.FullName z3test/regressions/smt2
validate-regressions-macos:
name: "Validate regression tests on macOS"
runs-on: macos-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 60
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download macOS ARM64 build from release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ github.event.inputs.release_tag }}"
if [ -z "$TAG" ]; then
TAG="Nightly"
fi
gh release download $TAG --pattern "*arm64-osx*.zip" --dir downloads
- name: Extract build
run: |
cd downloads
unzip *arm64-osx*.zip
cd ..
- name: Clone z3test repository
run: git clone https://github.com/z3prover/z3test z3test
- name: Run regression tests
run: |
Z3_PATH=$(find downloads -name z3 -type f | head -n 1)
chmod +x $Z3_PATH
python z3test/scripts/test_benchmarks.py $Z3_PATH z3test/regressions/smt2
# ============================================================================
# PYTHON WHEEL VALIDATION
# ============================================================================
validate-python-wheel-ubuntu-x64:
name: "Validate Python wheel on Ubuntu x64"
runs-on: ubuntu-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download Python wheel from release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ github.event.inputs.release_tag }}"
if [ -z "$TAG" ]; then
TAG="Nightly"
fi
gh release download $TAG --pattern "*manylinux*x86_64.whl" --dir wheels
- name: Install and test wheel
run: |
pip install wheels/*.whl
python -c "import z3; x = z3.Int('x'); s = z3.Solver(); s.add(x > 0); print('Result:', s.check()); print('Model:', s.model())"
validate-python-wheel-macos-arm64:
name: "Validate Python wheel on macOS ARM64"
runs-on: macos-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download Python wheel from release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ github.event.inputs.release_tag }}"
if [ -z "$TAG" ]; then
TAG="Nightly"
fi
gh release download $TAG --pattern "*macosx*arm64.whl" --dir wheels
- name: Install and test wheel
run: |
pip install wheels/*.whl
python -c "import z3; x = z3.Int('x'); s = z3.Solver(); s.add(x > 0); print('Result:', s.check()); print('Model:', s.model())"
validate-python-wheel-macos-x64:
name: "Validate Python wheel on macOS x64"
runs-on: macos-15-intel
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download Python wheel from release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ github.event.inputs.release_tag }}"
if [ -z "$TAG" ]; then
TAG="Nightly"
fi
gh release download $TAG --pattern "*macosx*x86_64.whl" --dir wheels
- name: Install and test wheel
run: |
pip install wheels/*.whl
python -c "import z3; x = z3.Int('x'); s = z3.Solver(); s.add(x > 0); print('Result:', s.check()); print('Model:', s.model())"
validate-python-wheel-windows-x64:
name: "Validate Python wheel on Windows x64"
runs-on: windows-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download Python wheel from release
env:
GH_TOKEN: ${{ github.token }}
shell: pwsh
run: |
$tag = "${{ github.event.inputs.release_tag }}"
if ([string]::IsNullOrEmpty($tag)) {
$tag = "Nightly"
}
gh release download $tag --pattern "*win_amd64.whl" --dir wheels
- name: Install and test wheel
shell: pwsh
run: |
$wheel = Get-ChildItem wheels/*.whl | Select-Object -First 1
pip install $wheel.FullName
python -c "import z3; x = z3.Int('x'); s = z3.Solver(); s.add(x > 0); print('Result:', s.check()); print('Model:', s.model())"
# ============================================================================
# MACOS DYLIB HEADERPAD VALIDATION
# ============================================================================
validate-macos-headerpad-x64:
name: "Validate macOS x64 dylib headerpad"
runs-on: macos-15-intel
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download macOS x64 build from release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ github.event.inputs.release_tag }}"
if [ -z "$TAG" ]; then
TAG="Nightly"
fi
gh release download $TAG --pattern "*x64-osx*.zip" --dir downloads
- name: Extract build
run: |
cd downloads
unzip *x64-osx*.zip
Z3_DIR=$(find . -maxdepth 1 -type d -name "z3-*" | head -n 1)
echo "Z3_DIR=$Z3_DIR" >> $GITHUB_ENV
- name: Test install_name_tool with headerpad
run: |
cd downloads/$Z3_DIR/bin
# Get the original install name
ORIGINAL_NAME=$(otool -D libz3.dylib | tail -n 1)
echo "Original install name: $ORIGINAL_NAME"
# Create a test path with same length as typical setup-z3 usage
# This simulates what setup-z3 does: changing to absolute path
TEST_PATH="/Users/runner/hostedtoolcache/z3/latest/x64/z3-test-dir/bin/libz3.dylib"
# Try to change the install name - this will fail if headerpad is insufficient
install_name_tool -id "$TEST_PATH" -change "$ORIGINAL_NAME" "$TEST_PATH" libz3.dylib
# Verify the change was successful
NEW_NAME=$(otool -D libz3.dylib | tail -n 1)
echo "New install name: $NEW_NAME"
if [ "$NEW_NAME" = "$TEST_PATH" ]; then
echo "✓ install_name_tool succeeded - headerpad is sufficient"
else
echo "✗ install_name_tool failed to update install name"
exit 1
fi
validate-macos-headerpad-arm64:
name: "Validate macOS ARM64 dylib headerpad"
runs-on: macos-latest
if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
timeout-minutes: 30
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download macOS ARM64 build from release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ github.event.inputs.release_tag }}"
if [ -z "$TAG" ]; then
TAG="Nightly"
fi
gh release download $TAG --pattern "*arm64-osx*.zip" --dir downloads
- name: Extract build
run: |
cd downloads
unzip *arm64-osx*.zip
Z3_DIR=$(find . -maxdepth 1 -type d -name "z3-*" | head -n 1)
echo "Z3_DIR=$Z3_DIR" >> $GITHUB_ENV
- name: Test install_name_tool with headerpad
run: |
cd downloads/$Z3_DIR/bin
# Get the original install name
ORIGINAL_NAME=$(otool -D libz3.dylib | tail -n 1)
echo "Original install name: $ORIGINAL_NAME"
# Create a test path with same length as typical setup-z3 usage
# This simulates what setup-z3 does: changing to absolute path
TEST_PATH="/Users/runner/hostedtoolcache/z3/latest/arm64/z3-test-dir/bin/libz3.dylib"
# Try to change the install name - this will fail if headerpad is insufficient
install_name_tool -id "$TEST_PATH" -change "$ORIGINAL_NAME" "$TEST_PATH" libz3.dylib
# Verify the change was successful
NEW_NAME=$(otool -D libz3.dylib | tail -n 1)
echo "New install name: $NEW_NAME"
if [ "$NEW_NAME" = "$TEST_PATH" ]; then
echo "✓ install_name_tool succeeded - headerpad is sufficient"
else
echo "✗ install_name_tool failed to update install name"
exit 1
fi

.github/workflows/nightly.yml (new file, 762 lines)
name: Nightly Build
on:
schedule:
# Run at 2 AM UTC every day
- cron: '0 2 * * *'
workflow_dispatch:
inputs:
force_build:
description: 'Force nightly build'
required: false
default: 'true'
publish_test_pypi:
description: 'Publish to Test PyPI'
required: false
type: boolean
default: false
permissions:
contents: write
env:
MAJOR: '4'
MINOR: '17'
PATCH: '0'
jobs:
# ============================================================================
# BUILD STAGE
# ============================================================================
mac-build-x64:
name: "Mac Build x64"
runs-on: macos-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build
run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk --arch=x64
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: macOsBuild
path: dist/*.zip
retention-days: 2
mac-build-arm64:
name: "Mac ARM64 Build"
runs-on: macos-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build
run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk --arch=arm64
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: MacArm64
path: dist/*.zip
retention-days: 2
# ============================================================================
# VALIDATION STAGE
# ============================================================================
validate-macos-headerpad-x64:
name: "Validate macOS x64 dylib headerpad"
needs: [mac-build-x64]
runs-on: macos-latest
timeout-minutes: 15
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download macOS x64 Build
uses: actions/download-artifact@v7
with:
name: macOsBuild
path: artifacts
- name: Extract build
run: |
cd artifacts
unzip z3-*-x64-osx*.zip
Z3_DIR=$(find . -maxdepth 1 -type d -name "z3-*" | head -n 1)
echo "Z3_DIR=$Z3_DIR" >> $GITHUB_ENV
- name: Test install_name_tool with headerpad
run: |
cd artifacts/$Z3_DIR/bin
# Get the original install name
ORIGINAL_NAME=$(otool -D libz3.dylib | tail -n 1)
echo "Original install name: $ORIGINAL_NAME"
# Create a test path with same length as typical setup-z3 usage
# This simulates what setup-z3 does: changing to absolute path
TEST_PATH="/Users/runner/hostedtoolcache/z3/latest/x64/z3-test-dir/bin/libz3.dylib"
# Try to change the install name - this will fail if headerpad is insufficient
install_name_tool -id "$TEST_PATH" -change "$ORIGINAL_NAME" "$TEST_PATH" libz3.dylib
# Verify the change was successful
NEW_NAME=$(otool -D libz3.dylib | tail -n 1)
echo "New install name: $NEW_NAME"
if [ "$NEW_NAME" = "$TEST_PATH" ]; then
echo "✓ install_name_tool succeeded - headerpad is sufficient"
else
echo "✗ install_name_tool failed to update install name"
exit 1
fi
validate-macos-headerpad-arm64:
name: "Validate macOS ARM64 dylib headerpad"
needs: [mac-build-arm64]
runs-on: macos-latest
timeout-minutes: 15
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download macOS ARM64 Build
uses: actions/download-artifact@v7
with:
name: MacArm64
path: artifacts
- name: Extract build
run: |
cd artifacts
unzip z3-*-arm64-osx*.zip
Z3_DIR=$(find . -maxdepth 1 -type d -name "z3-*" | head -n 1)
echo "Z3_DIR=$Z3_DIR" >> $GITHUB_ENV
- name: Test install_name_tool with headerpad
run: |
cd artifacts/$Z3_DIR/bin
# Get the original install name
ORIGINAL_NAME=$(otool -D libz3.dylib | tail -n 1)
echo "Original install name: $ORIGINAL_NAME"
# Create a test path with same length as typical setup-z3 usage
# This simulates what setup-z3 does: changing to absolute path
TEST_PATH="/Users/runner/hostedtoolcache/z3/latest/arm64/z3-test-dir/bin/libz3.dylib"
# Try to change the install name - this will fail if headerpad is insufficient
install_name_tool -id "$TEST_PATH" -change "$ORIGINAL_NAME" "$TEST_PATH" libz3.dylib
# Verify the change was successful
NEW_NAME=$(otool -D libz3.dylib | tail -n 1)
echo "New install name: $NEW_NAME"
if [ "$NEW_NAME" = "$TEST_PATH" ]; then
echo "✓ install_name_tool succeeded - headerpad is sufficient"
else
echo "✗ install_name_tool failed to update install name"
exit 1
fi
ubuntu-build:
name: "Ubuntu build"
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build
run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk
- name: Clone z3test
run: git clone https://github.com/z3prover/z3test z3test
- name: Test
run: python z3test/scripts/test_benchmarks.py build-dist/z3 z3test/regressions/smt2
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: UbuntuBuild
path: dist/*.zip
retention-days: 2
ubuntu-arm64:
name: "Ubuntu ARM64 build"
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download ARM toolchain
run: curl -L -o /tmp/arm-toolchain.tar.xz 'https://developer.arm.com/-/media/Files/downloads/gnu/13.3.rel1/binrel/arm-gnu-toolchain-13.3.rel1-x86_64-aarch64-none-linux-gnu.tar.xz'
- name: Extract ARM toolchain
run: |
mkdir -p /tmp/arm-toolchain/
tar xf /tmp/arm-toolchain.tar.xz -C /tmp/arm-toolchain/ --strip-components=1
- name: Build
run: |
export PATH="/tmp/arm-toolchain/bin:/tmp/arm-toolchain/aarch64-none-linux-gnu/libc/usr/bin:$PATH"
echo $PATH
stat /tmp/arm-toolchain/bin/aarch64-none-linux-gnu-gcc
python scripts/mk_unix_dist.py --nodotnet --arch=arm64
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: UbuntuArm64
path: dist/*.zip
retention-days: 2
ubuntu-doc:
name: "Ubuntu Doc build"
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Install dependencies
run: |
pip3 install importlib-resources
sudo apt-get update
sudo apt-get install -y ocaml opam libgmp-dev doxygen graphviz
- name: Setup OCaml
run: |
opam init -y
eval $(opam config env)
opam install zarith ocamlfind -y
- name: Build
run: |
eval $(opam config env)
python scripts/mk_make.py --ml
cd build
make -j3
make -j3 examples
make -j3 test-z3
cd ..
- name: Generate documentation
run: |
eval $(opam config env)
cd doc
python3 mk_api_doc.py --mld --z3py-package-path=../build/python/z3
python3 mk_params_doc.py
mkdir -p api/html/ml
ocamldoc -html -d api/html/ml -sort -hide Z3 -I $(ocamlfind query zarith) -I ../build/api/ml ../build/api/ml/z3enums.mli ../build/api/ml/z3.mli
cd ..
- name: Create documentation archive
run: zip -r z3doc.zip doc/api
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: UbuntuDoc
path: z3doc.zip
retention-days: 2
manylinux-python-amd64:
name: "Python bindings (manylinux AMD64)"
runs-on: ubuntu-latest
timeout-minutes: 90
container: quay.io/pypa/manylinux_2_28_x86_64:latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python environment
run: |
/opt/python/cp38-cp38/bin/python -m venv $PWD/env
echo "$PWD/env/bin" >> $GITHUB_PATH
- name: Install build tools
run: pip install build git+https://github.com/rhelmot/auditwheel
- name: Build wheels
run: cd src/api/python && python -m build && AUDITWHEEL_PLAT= auditwheel repair --best-plat dist/*.whl && cd ../../..
- name: Test wheels
run: pip install ./src/api/python/wheelhouse/*.whl && python - <src/api/python/z3test.py z3 && python - <src/api/python/z3test.py z3num
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: ManyLinuxPythonBuildAMD64
path: src/api/python/wheelhouse/*.whl
retention-days: 2
manylinux-python-arm64:
name: "Python bindings (manylinux ARM64 cross)"
runs-on: ubuntu-latest
timeout-minutes: 90
container: quay.io/pypa/manylinux_2_28_x86_64:latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download ARM toolchain
run: curl -L -o /tmp/arm-toolchain.tar.xz 'https://developer.arm.com/-/media/Files/downloads/gnu/13.3.rel1/binrel/arm-gnu-toolchain-13.3.rel1-x86_64-aarch64-none-linux-gnu.tar.xz'
- name: Extract ARM toolchain
run: |
mkdir -p /tmp/arm-toolchain/
tar xf /tmp/arm-toolchain.tar.xz -C /tmp/arm-toolchain/ --strip-components=1
- name: Setup Python environment
run: |
/opt/python/cp38-cp38/bin/python -m venv $PWD/env
echo "$PWD/env/bin" >> $GITHUB_PATH
echo "/tmp/arm-toolchain/bin" >> $GITHUB_PATH
echo "/tmp/arm-toolchain/aarch64-none-linux-gnu/libc/usr/bin" >> $GITHUB_PATH
- name: Install build tools
run: |
echo $PATH
stat $(which aarch64-none-linux-gnu-gcc)
pip install build git+https://github.com/rhelmot/auditwheel
- name: Build wheels
run: cd src/api/python && CC=aarch64-none-linux-gnu-gcc CXX=aarch64-none-linux-gnu-g++ AR=aarch64-none-linux-gnu-ar LD=aarch64-none-linux-gnu-ld Z3_CROSS_COMPILING=aarch64 python -m build && AUDITWHEEL_PLAT= auditwheel repair --best-plat dist/*.whl && cd ../../..
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: ManyLinuxPythonBuildArm64
path: src/api/python/wheelhouse/*.whl
retention-days: 2
windows-build-x64:
name: "Windows x64 build"
runs-on: windows-latest
timeout-minutes: 120
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build
shell: cmd
run: |
call "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" x64
python scripts\mk_win_dist.py --x64-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --zip
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: WindowsBuild-x64
path: dist/*.zip
retention-days: 2
windows-build-x86:
name: "Windows x86 build"
runs-on: windows-latest
timeout-minutes: 120
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build
shell: cmd
run: |
call "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" x86
python scripts\mk_win_dist.py --x86-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --zip
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: WindowsBuild-x86
path: dist/*.zip
retention-days: 2
windows-build-arm64:
name: "Windows ARM64 build"
runs-on: windows-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build
shell: cmd
run: |
call "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" amd64_arm64
python scripts\mk_win_dist_cmake.py --arm64-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --assembly-version=${{ env.MAJOR }}.${{ env.MINOR }}.${{ env.PATCH }} --zip
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: WindowsBuild-arm64
path: dist/arm64/*.zip
retention-days: 2
# ============================================================================
# PACKAGE STAGE
# ============================================================================
nuget-package-x64:
name: "NuGet 64 packaging"
needs: [windows-build-x64, windows-build-arm64, ubuntu-build, ubuntu-arm64, mac-build-x64, mac-build-arm64]
runs-on: windows-latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download Win64 Build
uses: actions/download-artifact@v7
with:
name: WindowsBuild-x64
path: package
- name: Download Win ARM64 Build
uses: actions/download-artifact@v7
with:
name: WindowsBuild-arm64
path: package
- name: Download Ubuntu Build
uses: actions/download-artifact@v7
with:
name: UbuntuBuild
path: package
- name: Download Ubuntu ARM64 Build
uses: actions/download-artifact@v7
with:
name: UbuntuArm64
path: package
- name: Download macOS Build
uses: actions/download-artifact@v7
with:
name: macOsBuild
path: package
- name: Download macOS Arm64 Build
uses: actions/download-artifact@v7
with:
name: MacArm64
path: package
- name: Setup NuGet
uses: nuget/setup-nuget@v2
with:
nuget-version: 'latest'
- name: Assemble NuGet package
shell: cmd
run: |
cd package
python ..\scripts\mk_nuget_task.py . ${{ env.MAJOR }}.${{ env.MINOR }}.${{ env.PATCH }}.${{ github.run_number }} https://github.com/Z3Prover/z3 ${{ github.ref_name }} ${{ github.sha }} ${{ github.workspace }} symbols
- name: Pack NuGet package
shell: cmd
run: |
cd package
nuget pack out\Microsoft.Z3.sym.nuspec -Version ${{ env.MAJOR }}.${{ env.MINOR }}.${{ env.PATCH }}.${{ github.run_number }} -OutputDirectory . -Verbosity detailed -Symbols -SymbolPackageFormat snupkg -BasePath out
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: NuGet
path: |
package/*.nupkg
package/*.snupkg
retention-days: 2
nuget-package-x86:
name: "NuGet 32 packaging"
needs: [windows-build-x86]
runs-on: windows-latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download artifacts
uses: actions/download-artifact@v7
with:
name: WindowsBuild-x86
path: package
- name: Setup NuGet
uses: nuget/setup-nuget@v2
with:
nuget-version: 'latest'
- name: Assemble NuGet package
shell: cmd
run: |
cd package
python ..\scripts\mk_nuget_task.py . ${{ env.MAJOR }}.${{ env.MINOR }}.${{ env.PATCH }}.${{ github.run_number }} https://github.com/Z3Prover/z3 ${{ github.ref_name }} ${{ github.sha }} ${{ github.workspace }} symbols x86
- name: Pack NuGet package
shell: cmd
run: |
cd package
nuget pack out\Microsoft.Z3.x86.sym.nuspec -Version ${{ env.MAJOR }}.${{ env.MINOR }}.${{ env.PATCH }}.${{ github.run_number }} -OutputDirectory . -Verbosity detailed -Symbols -SymbolPackageFormat snupkg -BasePath out
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: NuGet32
path: |
package/*.nupkg
package/*.snupkg
retention-days: 2
python-package:
name: "Python packaging"
needs: [mac-build-x64, mac-build-arm64, windows-build-x64, windows-build-x86, windows-build-arm64, manylinux-python-amd64, manylinux-python-arm64]
runs-on: ubuntu-24.04
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download macOS x64 Build
uses: actions/download-artifact@v7
with:
name: macOsBuild
path: artifacts
- name: Download macOS Arm64 Build
uses: actions/download-artifact@v7
with:
name: MacArm64
path: artifacts
- name: Download Win64 Build
uses: actions/download-artifact@v7
with:
name: WindowsBuild-x64
path: artifacts
- name: Download Win32 Build
uses: actions/download-artifact@v7
with:
name: WindowsBuild-x86
path: artifacts
- name: Download Win ARM64 Build
uses: actions/download-artifact@v7
with:
name: WindowsBuild-arm64
path: artifacts
- name: Download ManyLinux AMD64 Build
uses: actions/download-artifact@v7
with:
name: ManyLinuxPythonBuildAMD64
path: artifacts
- name: Download ManyLinux Arm64 Build
uses: actions/download-artifact@v7
with:
name: ManyLinuxPythonBuildArm64
path: artifacts
- name: Extract builds
run: |
cd artifacts
ls
mkdir -p osx-x64-bin osx-arm64-bin win32-bin win64-bin win-arm64-bin
cd osx-x64-bin && unzip ../z3-*-x64-osx*.zip && cd ..
cd osx-arm64-bin && unzip ../z3-*-arm64-osx*.zip && cd ..
cd win32-bin && unzip ../z3-*-x86-win*.zip && cd ..
cd win64-bin && unzip ../z3-*-x64-win*.zip && cd ..
cd win-arm64-bin && unzip ../z3-*-arm64-win*.zip && cd ..
- name: Build Python packages
run: |
python3 -m pip install --user -U setuptools
cd src/api/python
# Build source distribution
python3 setup.py sdist
# Build wheels from macOS and Windows release zips
echo $PWD/../../../artifacts/win32-bin/* | xargs printf 'PACKAGE_FROM_RELEASE=%s\n' | xargs -I '{}' env '{}' python3 setup.py bdist_wheel
echo $PWD/../../../artifacts/win64-bin/* | xargs printf 'PACKAGE_FROM_RELEASE=%s\n' | xargs -I '{}' env '{}' python3 setup.py bdist_wheel
echo $PWD/../../../artifacts/win-arm64-bin/* | xargs printf 'PACKAGE_FROM_RELEASE=%s\n' | xargs -I '{}' env '{}' python3 setup.py bdist_wheel
echo $PWD/../../../artifacts/osx-x64-bin/* | xargs printf 'PACKAGE_FROM_RELEASE=%s\n' | xargs -I '{}' env '{}' python3 setup.py bdist_wheel
echo $PWD/../../../artifacts/osx-arm64-bin/* | xargs printf 'PACKAGE_FROM_RELEASE=%s\n' | xargs -I '{}' env '{}' python3 setup.py bdist_wheel
- name: Copy Linux Python packages
run: |
cp artifacts/*.whl src/api/python/dist/.
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: PythonPackages
path: src/api/python/dist/*
retention-days: 2
# ============================================================================
# DEPLOYMENT STAGE
# ============================================================================
deploy-nightly:
name: "Deploy to GitHub Releases"
needs: [
windows-build-x86,
windows-build-x64,
windows-build-arm64,
mac-build-x64,
mac-build-arm64,
ubuntu-build,
ubuntu-arm64,
ubuntu-doc,
python-package,
nuget-package-x64,
nuget-package-x86,
validate-macos-headerpad-x64,
validate-macos-headerpad-arm64
]
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download all artifacts
uses: actions/download-artifact@v7
with:
path: tmp
- name: Display structure of downloaded files
run: ls -R tmp
- name: Delete existing Nightly release and tag
continue-on-error: true
env:
GH_TOKEN: ${{ github.token }}
run: |
# Delete the release first (this also deletes assets)
gh release delete Nightly --yes || echo "No release to delete"
# Delete the tag explicitly
git push origin :refs/tags/Nightly || echo "No tag to delete"
- name: Create Nightly release
env:
GH_TOKEN: ${{ github.token }}
run: |
ls
find tmp -type f \( -name "*.zip" -o -name "*.whl" -o -name "*.tar.gz" -o -name "*.nupkg" -o -name "*.snupkg" \) -print0 > release_files.txt
# Deduplicate files - keep only first occurrence of each basename
# Use NUL-delimited input/output to handle spaces in filenames safely
declare -A seen_basenames
declare -a unique_files
while IFS= read -r -d $'\0' filepath || [ -n "$filepath" ]; do
[ -z "$filepath" ] && continue
basename="${filepath##*/}"
# Keep only first occurrence of each basename
if [ -z "${seen_basenames[$basename]}" ]; then
seen_basenames[$basename]=1
unique_files+=("$filepath")
fi
done < release_files.txt
# Create release with properly quoted file arguments
if [ ${#unique_files[@]} -gt 0 ]; then
gh release create Nightly \
--title "Nightly" \
--notes "Automated nightly build from commit ${{ github.sha }}" \
--prerelease \
--target ${{ github.sha }} \
"${unique_files[@]}"
else
echo "No files to release after deduplication"
exit 1
fi
publish-test-pypi:
name: "Publish to test.PyPI"
if: ${{ github.event.inputs.publish_test_pypi == 'true' }}
needs: [python-package]
runs-on: ubuntu-latest
environment: pypi
permissions:
id-token: write
contents: read
steps:
- name: Download Python packages
uses: actions/download-artifact@v7
with:
name: PythonPackages
path: dist
- name: Publish to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: dist
repository-url: https://test.pypi.org/legacy/

.github/workflows/nuget-build.yml (new file, 257 lines)
name: Build NuGet Package
on:
workflow_dispatch:
inputs:
version:
description: 'Version number for the NuGet package (e.g., 4.17.0)'
required: true
default: '4.17.0'
push:
tags:
- 'z3-*'
permissions:
contents: write
jobs:
# Build Windows binaries
build-windows-x64:
runs-on: windows-latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build Windows x64
shell: cmd
run: |
call "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" x64
python scripts\mk_win_dist.py --x64-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --assembly-version=${{ github.event.inputs.version || '4.17.0' }} --zip
- name: Upload Windows x64 artifact
uses: actions/upload-artifact@v6
with:
name: windows-x64
path: dist/*.zip
retention-days: 1
build-windows-x86:
runs-on: windows-latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build Windows x86
shell: cmd
run: |
call "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" x86
python scripts\mk_win_dist.py --x86-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --assembly-version=${{ github.event.inputs.version || '4.17.0' }} --zip
- name: Upload Windows x86 artifact
uses: actions/upload-artifact@v6
with:
name: windows-x86
path: dist/*.zip
retention-days: 1
build-windows-arm64:
runs-on: windows-latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build Windows ARM64
shell: cmd
run: |
call "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" amd64_arm64
python scripts\mk_win_dist_cmake.py --arm64-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --assembly-version=${{ github.event.inputs.version || '4.17.0' }} --zip
- name: Upload Windows ARM64 artifact
uses: actions/upload-artifact@v6
with:
name: windows-arm64
path: build-dist\arm64\dist\*.zip
retention-days: 1
build-ubuntu:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build Ubuntu
run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk
- name: Upload Ubuntu artifact
uses: actions/upload-artifact@v6
with:
name: ubuntu
path: dist/*.zip
retention-days: 1
build-macos-x64:
runs-on: macos-14
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build macOS x64
run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk
- name: Upload macOS x64 artifact
uses: actions/upload-artifact@v6
with:
name: macos-x64
path: dist/*.zip
retention-days: 1
build-macos-arm64:
runs-on: macos-14
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build macOS ARM64
run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk --arch=arm64
- name: Upload macOS ARM64 artifact
uses: actions/upload-artifact@v6
with:
name: macos-arm64
path: dist/*.zip
retention-days: 1
# Package NuGet x64 (includes all platforms except x86)
package-nuget-x64:
needs: [build-windows-x64, build-windows-arm64, build-ubuntu, build-macos-x64, build-macos-arm64]
runs-on: windows-latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download all artifacts
uses: actions/download-artifact@v7
with:
path: packages
- name: List downloaded artifacts
shell: bash
run: find packages -type f
- name: Move artifacts to flat directory
shell: bash
run: |
mkdir -p package-files
find packages -name "*.zip" -exec cp {} package-files/ \;
ls -la package-files/
- name: Setup NuGet
uses: nuget/setup-nuget@v2
with:
nuget-version: 'latest'
- name: Assemble NuGet package
shell: cmd
run: |
cd package-files
python ..\scripts\mk_nuget_task.py . ${{ github.event.inputs.version || '4.17.0' }} https://github.com/Z3Prover/z3 ${{ github.ref_name }} ${{ github.sha }} ${{ github.workspace }} symbols
- name: Pack NuGet package
shell: cmd
run: |
cd package-files
nuget pack out\Microsoft.Z3.sym.nuspec -OutputDirectory . -Verbosity detailed -Symbols -SymbolPackageFormat snupkg -BasePath out
- name: Upload NuGet package
uses: actions/upload-artifact@v6
with:
name: nuget-x64
path: |
package-files/*.nupkg
package-files/*.snupkg
retention-days: 30
# Package NuGet x86
package-nuget-x86:
needs: [build-windows-x86]
runs-on: windows-latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download x86 artifact
uses: actions/download-artifact@v7
with:
name: windows-x86
path: packages
- name: List downloaded artifacts
shell: bash
run: find packages -type f
- name: Setup NuGet
uses: nuget/setup-nuget@v2
with:
nuget-version: 'latest'
- name: Assemble NuGet package
shell: cmd
run: |
cd packages
python ..\scripts\mk_nuget_task.py . ${{ github.event.inputs.version || '4.17.0' }} https://github.com/Z3Prover/z3 ${{ github.ref_name }} ${{ github.sha }} ${{ github.workspace }} symbols x86
- name: Pack NuGet package
shell: cmd
run: |
cd packages
nuget pack out\Microsoft.Z3.x86.sym.nuspec -OutputDirectory . -Verbosity detailed -Symbols -SymbolPackageFormat snupkg -BasePath out
- name: Upload NuGet package
uses: actions/upload-artifact@v6
with:
name: nuget-x86
path: |
packages/*.nupkg
packages/*.snupkg
retention-days: 30


@@ -17,11 +17,11 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6.0.2
       # Cache ccache (shared across runs)
       - name: Cache ccache
-        uses: actions/cache@v4
+        uses: actions/cache@v5.0.3
         with:
           path: ~/.ccache
           key: ${{ runner.os }}-ccache-${{ github.sha }}
@@ -30,7 +30,7 @@ jobs:
       # Cache opam (compiler + packages)
       - name: Cache opam
-        uses: actions/cache@v4
+        uses: actions/cache@v5.0.3
         with:
           path: ~/.opam
           key: ${{ runner.os }}-opam-${{ matrix.ocaml-version }}-${{ github.sha }}

.github/workflows/pr-fix.lock.yml (generated, 3683 lines)

File diff suppressed because it is too large.


@@ -1,74 +0,0 @@
---
on:
command:
name: pr-fix
reaction: "eyes"
stop-after: +48h
permissions: read-all
roles: [admin, maintainer, write]
network: defaults
safe-outputs:
push-to-pr-branch:
create-issue:
title-prefix: "${{ github.workflow }}"
add-comment:
    github-token: ${{ secrets.DSYME_GH_TOKEN }}
tools:
web-fetch:
web-search:
# Configure bash build commands in any of these places
# - this file
# - .github/workflows/agentics/pr-fix.config.md
# - .github/workflows/agentics/build-tools.md (shared).
#
# Run `gh aw compile` after editing to recompile the workflow.
#
  # By default this workflow allows all bash commands within the confines of the GitHub Actions VM
bash: [ ":*" ]
timeout_minutes: 20
---
# PR Fix
You are an AI assistant specialized in fixing pull requests with failing CI checks. Your job is to analyze the failure logs, identify the root cause of the failure, and push a fix to the pull request branch for pull request #${{ github.event.issue.number }} in the repository ${{ github.repository }}.
1. Read the pull request and the comments
2. Take heed of these instructions: "${{ needs.task.outputs.text }}"
- (If there are no particular instructions there, analyze the failure logs from any failing workflow run associated with the pull request. Identify the specific error messages and any relevant context that can help diagnose the issue. Based on your analysis, determine the root cause of the failure. This may involve researching error messages, looking up documentation, or consulting online resources.)
3. Formulate a plan to follow the instructions, fix the CI failure, or fix the PR generally. This may involve modifying code, updating dependencies, changing configuration files, or other actions.
4. Implement the fix.
5. Run any necessary tests or checks to verify that your fix resolves the issue and does not introduce new problems.
6. Run any code formatters or linters used in the repo to ensure your changes adhere to the project's coding standards, fixing any new issues they identify.
7. Push the changes to the pull request branch.
8. Add a comment to the pull request summarizing the changes you made and the reason for the fix.
@include agentics/shared/no-push-to-main.md
@include agentics/shared/tool-refused.md
@include agentics/shared/include-link.md
@include agentics/shared/xpia.md
@include agentics/shared/gh-extra-pr-tools.md
<!-- You can whitelist tools in .github/workflows/build-tools.md file -->
@include? agentics/build-tools.md
<!-- You can customize prompting and tools in .github/workflows/agentics/pr-fix.config.md -->
@include? agentics/pr-fix.config.md


@@ -1,19 +0,0 @@
name: GenAI Pull Request Descriptor
on:
pull_request:
types: [opened, reopened, ready_for_review]
permissions:
contents: read
pull-requests: write
models: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
generate-pull-request-description:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: pelikhan/action-genai-pull-request-descriptor@v0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}


@@ -3,6 +3,7 @@ name: Pyodide Build
 on:
   schedule:
     - cron: '0 0 */2 * *'
+  workflow_dispatch:
 env:
   BUILD_TYPE: Release
@@ -19,7 +20,7 @@ jobs:
     steps:
       - name: Checkout code
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6.0.2
       - name: Setup packages
         run: sudo apt-get update && sudo apt-get install -y python3-dev python3-pip python3-venv

.github/workflows/release-notes-updater.lock.yml (generated, new file, 1031 lines)

File diff suppressed because it is too large.


@@ -0,0 +1,218 @@
---
description: Weekly release notes updater that generates updates based on changes since last release
on:
workflow_dispatch:
schedule: weekly
timeout-minutes: 30
permissions: read-all
network: defaults
tools:
github:
toolsets: [default]
bash: [":*"]
edit: {}
glob: {}
view: {}
safe-outputs:
create-discussion:
title-prefix: "[Release Notes] "
category: "Announcements"
close-older-discussions: false
github-token: ${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout repository
uses: actions/checkout@v5
with:
fetch-depth: 0 # Fetch full history for analyzing commits
---
# Release Notes Updater
## Job Description
Your name is ${{ github.workflow }}. You are an expert AI agent tasked with updating the RELEASE_NOTES.md file in the Z3 theorem prover repository `${{ github.repository }}` with information about changes since the last release.
## Your Task
### 1. Determine the Next Release Version
Read the file `scripts/VERSION.txt` to find the next release version number. This version should be used as the section header for the new release notes.
### 2. Identify the Last Release
The RELEASE_NOTES.md file contains release history. The last release is the first completed version section after "Version 4.next" (which is for planned features).
Find the last release tag in git to identify which commits to analyze:
```bash
git tag --sort=-creatordate | grep -E '^z3-[0-9]+\.[0-9]+\.[0-9]+$' | head -1
```
If no tags are found, use the last 3 months of commits as a fallback.
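The tag filter can be exercised on a sample list before running it against the real repository; a minimal sketch (the tag names below are hypothetical, and the list stands in for the already-sorted output of `git tag --sort=-creatordate`):

```shell
# Hypothetical tag list, newest-first, as git would emit it.
tags='z3-4.15.4
Nightly
z3-4.15.3'
# Same filter as above: keep only final release tags, take the newest.
last=$(printf '%s\n' "$tags" | grep -E '^z3-[0-9]+\.[0-9]+\.[0-9]+$' | head -1)
echo "${last:-no release tag found}"
```

An empty `$last` is the signal to fall back to the three-month window.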
### 3. Analyze Commits Since Last Release
Get all commits since the last release:
```bash
# If a tag was found (e.g., z3-4.15.4):
git log --format='%H|%an|%ae|%s' <last-tag>..HEAD
# Or if using date fallback:
git log --format='%H|%an|%ae|%s' --since="3 months ago"
```
For each commit, you need to:
- Determine if it's from a maintainer or external contributor
- Assess whether it's substantial (affects functionality, features, or performance)
- Understand what changed by examining the commit (use `git show <commit-hash>`)
**Identifying Maintainers:**
- Maintainers typically have `@microsoft.com` email addresses or are core team members
- Look for patterns like `nbjorner@microsoft.com` (Nikolaj Bjorner - core maintainer)
- External contributors often have GitHub email addresses or non-Microsoft domains
- Pull request commits merged by maintainers are considered maintainer changes
- Commits from external contributors through PRs should be identified by checking if they're merge commits
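The classification heuristic above can be sketched as a small filter over the `git log` format from the previous step; the sample log lines and names here are hypothetical:

```shell
# Hypothetical sample of `git log --format='%H|%an|%ae|%s'` output.
log='abc123|Nikolaj Bjorner|nbjorner@microsoft.com|Fix rewriter loop
def456|Jane Doe|jane@users.noreply.github.com|Fix typo in docs'
# Split each record on '|' and classify by email domain.
classified=$(printf '%s\n' "$log" | while IFS='|' read -r hash name email subject; do
    case "$email" in
        *@microsoft.com) echo "maintainer: $subject" ;;
        *)               echo "external: $subject" ;;
    esac
done)
echo "$classified"
```

This is only a first pass; merge commits and co-authored changes still need the manual checks described above.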
**Determining Substantial Changes:**
Substantial changes include:
- New features or APIs
- Performance improvements
- Bug fixes that affect core functionality
- Changes to solving algorithms
- Deprecations or breaking changes
- Security fixes
NOT substantial (but still acknowledge external contributions):
- Documentation typos
- Code style changes
- Minor refactoring without functional impact
- Build script tweaks (unless they fix major issues)
### 4. Check for Related Pull Requests
For significant changes, try to find the associated pull request number:
- Look in commit messages for `#NNNN` references
- Search GitHub for PRs that were merged around the same time
- This helps with proper attribution
Use GitHub tools to search for pull requests:
```bash
# Search for merged PRs since last release
```
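Extracting the `#NNNN` references from commit subjects is a one-liner; a minimal sketch with hypothetical subjects:

```shell
# Hypothetical commit subjects; the real ones come from `git log`.
subjects='Merge pull request #7988 from user/security-flags
fix model evaluation (#7955)
refactor with no reference'
# Pull out every #NNNN token, one per line.
refs=$(printf '%s\n' "$subjects" | grep -oE '#[0-9]+')
echo "$refs"
```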
### 5. Format the Release Notes
**CRITICAL: Maintain Consistent Formatting**
Study the existing RELEASE_NOTES.md carefully to match the style:
- Use bullet points with `-` for each entry
- Include PR numbers as links: `https://github.com/Z3Prover/z3/pull/NNNN`
- Include issue numbers as `#NNNN`
- Give credit: "thanks to [Name]" for external contributions
- Group related changes together
- Order by importance: major features first, then improvements, then bug fixes
- Use proper technical terminology consistent with existing entries
**Format Examples from Existing Release Notes:**
```markdown
Version X.Y.Z
==============
- Add methods to create polymorphic datatype constructors over the API. The prior method was that users had to manage
parametricity using their own generation of instances. The updated API allows to work with polymorphic datatype declarations
directly.
- MSVC build by default respect security flags, https://github.com/Z3Prover/z3/pull/7988
- Using a new algorithm for smt.threads=k, k > 1 using a shared search tree. Thanks to Ilana Shapiro.
- Thanks for several pull requests improving usability, including
- https://github.com/Z3Prover/z3/pull/7955
- https://github.com/Z3Prover/z3/pull/7995
- https://github.com/Z3Prover/z3/pull/7947
```
### 6. Prepare the Release Notes Content
**CRITICAL: Maintain Consistent Formatting**
Study the existing RELEASE_NOTES.md carefully to match the style. Your formatted content should be ready to insert **immediately after** the "Version 4.next" section:
1. Read the current RELEASE_NOTES.md to understand the format
2. Find the "Version 4.next" section (it should be at the top)
3. Format your release notes to be inserted after it but before the previous release sections
4. The "Version 4.next" section should remain intact - don't modify it
The structure for the formatted content should be:
```markdown
Version X.Y.Z
==============
[your new release notes here]
```
This content will be shared in a discussion where maintainers can review it before applying it to RELEASE_NOTES.md.
### 7. Check for Existing Discussions
Before creating a new discussion, check if there's already an open discussion for release notes updates:
```bash
# Search for open discussions with "[Release Notes]" in the title
gh search discussions --repo Z3Prover/z3 "[Release Notes] in:title" --json number,title
```
If a recent discussion already exists (within the last week):
- Do NOT create a new discussion
- Exit gracefully
### 8. Create Discussion
If there are substantial updates to add AND no recent discussion exists:
- Create a discussion with the release notes analysis
- Use a descriptive title like "Release notes for version X.Y.Z"
- In the discussion body, include:
- The formatted release notes content that should be added to RELEASE_NOTES.md
- Number of maintainer changes included
- Number of external contributions acknowledged
- Any notable features or improvements
- Date range of commits analyzed
- Instructions for maintainers on how to apply these updates to RELEASE_NOTES.md
If there are NO substantial changes since the last release:
- Do NOT create a discussion
- Exit gracefully
## Guidelines
- **Be selective**: Only include changes that matter to users
- **Be accurate**: Verify commit details before including them
- **Be consistent**: Match the existing release notes style exactly
- **Be thorough**: Don't miss significant changes, but don't include trivial ones
- **Give credit**: Always acknowledge external contributors
- **Use proper links**: Include PR and issue links where applicable
- **Stay focused**: This is about documenting changes, not reviewing code quality
- **No empty updates**: Only create a discussion if there are actual changes to document
## Important Notes
- The next version in `scripts/VERSION.txt` is the target version for these release notes
- External contributions should be acknowledged even if the changes are minor
- Maintainer changes must be substantial to be included
- Maintain the bullet point structure and indentation style
- Include links to PRs using the full GitHub URL format
- Do NOT modify the "Version 4.next" section - only add a new section below it
- Do NOT create a discussion if there are no changes to document
- The discussion should provide ready-to-apply content for RELEASE_NOTES.md
## Example Workflow
1. Read `scripts/VERSION.txt` → version is 4.15.5.0 → next release is 4.15.5
2. Find last release tag → `z3-4.15.4`
3. Get commits: `git log --format='%H|%an|%ae|%s' z3-4.15.4..HEAD`
4. Analyze each commit to determine if substantial
5. Format the changes following existing style
6. Check for existing discussions
7. Create discussion with the release notes analysis and formatted content
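Step 1's version derivation is a simple suffix strip; a minimal sketch:

```shell
# VERSION.txt carries a four-component version; the release drops the last component.
version='4.15.5.0'
release="${version%.*}"
echo "$release"    # 4.15.5
```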

.github/workflows/release.yml (new file)

@ -0,0 +1,792 @@
name: Release Build
on:
workflow_dispatch:
inputs:
publish_github:
description: 'Publish to GitHub Releases'
required: false
type: boolean
default: true
publish_nuget:
description: 'Publish to NuGet.org'
required: false
type: boolean
default: false
publish_pypi:
description: 'Publish to PyPI'
required: false
type: boolean
default: false
permissions:
contents: write
env:
RELEASE_VERSION: '4.17.0'
jobs:
# ============================================================================
# BUILD STAGE
# ============================================================================
mac-build-x64:
name: "Mac Build x64"
runs-on: macos-15
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build
run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk --arch=x64
- name: Clone z3test
run: git clone https://github.com/z3prover/z3test z3test
- name: Test
run: python z3test/scripts/test_benchmarks.py build-dist/z3 z3test/regressions/smt2
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: macOsBuild
path: dist/*.zip
retention-days: 7
mac-build-arm64:
name: "Mac ARM64 Build"
runs-on: macos-15
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build
run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk --arch=arm64
- name: Clone z3test
run: git clone https://github.com/z3prover/z3test z3test
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: MacArm64
path: dist/*.zip
retention-days: 7
# ============================================================================
# VALIDATION STAGE
# ============================================================================
validate-macos-headerpad-x64:
name: "Validate macOS x64 dylib headerpad"
needs: [mac-build-x64]
runs-on: macos-15
timeout-minutes: 15
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download macOS x64 Build
uses: actions/download-artifact@v7
with:
name: macOsBuild
path: artifacts
- name: Extract build
run: |
cd artifacts
unzip z3-*-x64-osx*.zip
Z3_DIR=$(find . -maxdepth 1 -type d -name "z3-*" | head -n 1)
echo "Z3_DIR=$Z3_DIR" >> $GITHUB_ENV
- name: Test install_name_tool with headerpad
run: |
cd artifacts/$Z3_DIR/bin
# Get the original install name
ORIGINAL_NAME=$(otool -D libz3.dylib | tail -n 1)
echo "Original install name: $ORIGINAL_NAME"
# Create a test path with same length as typical setup-z3 usage
# This simulates what setup-z3 does: changing to absolute path
TEST_PATH="/Users/runner/hostedtoolcache/z3/latest/x64/z3-test-dir/bin/libz3.dylib"
# Try to change the install name - this will fail if headerpad is insufficient
install_name_tool -id "$TEST_PATH" -change "$ORIGINAL_NAME" "$TEST_PATH" libz3.dylib
# Verify the change was successful
NEW_NAME=$(otool -D libz3.dylib | tail -n 1)
echo "New install name: $NEW_NAME"
if [ "$NEW_NAME" = "$TEST_PATH" ]; then
echo "✓ install_name_tool succeeded - headerpad is sufficient"
else
echo "✗ install_name_tool failed to update install name"
exit 1
fi
validate-macos-headerpad-arm64:
name: "Validate macOS ARM64 dylib headerpad"
needs: [mac-build-arm64]
runs-on: macos-15
timeout-minutes: 15
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download macOS ARM64 Build
uses: actions/download-artifact@v7
with:
name: MacArm64
path: artifacts
- name: Extract build
run: |
cd artifacts
unzip z3-*-arm64-osx*.zip
Z3_DIR=$(find . -maxdepth 1 -type d -name "z3-*" | head -n 1)
echo "Z3_DIR=$Z3_DIR" >> $GITHUB_ENV
- name: Test install_name_tool with headerpad
run: |
cd artifacts/$Z3_DIR/bin
# Get the original install name
ORIGINAL_NAME=$(otool -D libz3.dylib | tail -n 1)
echo "Original install name: $ORIGINAL_NAME"
# Create a test path with same length as typical setup-z3 usage
# This simulates what setup-z3 does: changing to absolute path
TEST_PATH="/Users/runner/hostedtoolcache/z3/latest/arm64/z3-test-dir/bin/libz3.dylib"
# Try to change the install name - this will fail if headerpad is insufficient
install_name_tool -id "$TEST_PATH" -change "$ORIGINAL_NAME" "$TEST_PATH" libz3.dylib
# Verify the change was successful
NEW_NAME=$(otool -D libz3.dylib | tail -n 1)
echo "New install name: $NEW_NAME"
if [ "$NEW_NAME" = "$TEST_PATH" ]; then
echo "✓ install_name_tool succeeded - headerpad is sufficient"
else
echo "✗ install_name_tool failed to update install name"
exit 1
fi
ubuntu-build:
name: "Ubuntu build"
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build
run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk
- name: Clone z3test
run: git clone https://github.com/z3prover/z3test z3test
- name: Test
run: python z3test/scripts/test_benchmarks.py build-dist/z3 z3test/regressions/smt2
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: UbuntuBuild
path: dist/*.zip
retention-days: 7
ubuntu-arm64:
name: "Ubuntu ARM64 build"
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download ARM toolchain
run: curl -L -o /tmp/arm-toolchain.tar.xz 'https://developer.arm.com/-/media/Files/downloads/gnu/13.3.rel1/binrel/arm-gnu-toolchain-13.3.rel1-x86_64-aarch64-none-linux-gnu.tar.xz'
- name: Extract ARM toolchain
run: |
mkdir -p /tmp/arm-toolchain/
tar xf /tmp/arm-toolchain.tar.xz -C /tmp/arm-toolchain/ --strip-components=1
- name: Build
run: |
export PATH="/tmp/arm-toolchain/bin:/tmp/arm-toolchain/aarch64-none-linux-gnu/libc/usr/bin:$PATH"
echo $PATH
stat /tmp/arm-toolchain/bin/aarch64-none-linux-gnu-gcc
python scripts/mk_unix_dist.py --nodotnet --arch=arm64
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: UbuntuArm64
path: dist/*.zip
retention-days: 7
ubuntu-doc:
name: "Ubuntu Doc build"
runs-on: ubuntu-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Install dependencies
run: |
pip3 install importlib-resources
sudo apt-get update
sudo apt-get install -y ocaml opam libgmp-dev doxygen graphviz
- name: Setup OCaml
run: |
opam init -y
eval $(opam config env)
opam install zarith ocamlfind -y
- name: Build
run: |
eval $(opam config env)
python scripts/mk_make.py --ml
cd build
make -j3
make -j3 examples
make -j3 test-z3
cd ..
- name: Generate documentation
run: |
eval $(opam config env)
cd doc
python3 mk_api_doc.py --mld --z3py-package-path=../build/python/z3
python3 mk_params_doc.py
mkdir -p api/html/ml
ocamldoc -html -d api/html/ml -sort -hide Z3 -I $(ocamlfind query zarith) -I ../build/api/ml ../build/api/ml/z3enums.mli ../build/api/ml/z3.mli
cd ..
- name: Create documentation archive
run: zip -r z3doc.zip doc/api
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: UbuntuDoc
path: z3doc.zip
retention-days: 7
manylinux-python-amd64:
name: "Python bindings (manylinux AMD64)"
runs-on: ubuntu-latest
timeout-minutes: 90
container: quay.io/pypa/manylinux_2_28_x86_64:latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python environment
run: |
/opt/python/cp38-cp38/bin/python -m venv $PWD/env
echo "$PWD/env/bin" >> $GITHUB_PATH
- name: Install build tools
run: pip install build git+https://github.com/rhelmot/auditwheel
- name: Build wheels
run: cd src/api/python && python -m build && AUDITWHEEL_PLAT= auditwheel repair --best-plat dist/*.whl && cd ../../..
- name: Test wheels
run: pip install ./src/api/python/wheelhouse/*.whl && python - <src/api/python/z3test.py z3 && python - <src/api/python/z3test.py z3num
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: ManyLinuxPythonBuildAMD64
path: src/api/python/wheelhouse/*.whl
retention-days: 7
manylinux-python-arm64:
name: "Python bindings (manylinux ARM64 cross)"
runs-on: ubuntu-latest
timeout-minutes: 90
container: quay.io/pypa/manylinux_2_28_x86_64:latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download ARM toolchain
run: curl -L -o /tmp/arm-toolchain.tar.xz 'https://developer.arm.com/-/media/Files/downloads/gnu/13.3.rel1/binrel/arm-gnu-toolchain-13.3.rel1-x86_64-aarch64-none-linux-gnu.tar.xz'
- name: Extract ARM toolchain
run: |
mkdir -p /tmp/arm-toolchain/
tar xf /tmp/arm-toolchain.tar.xz -C /tmp/arm-toolchain/ --strip-components=1
- name: Setup Python environment
run: |
/opt/python/cp38-cp38/bin/python -m venv $PWD/env
echo "$PWD/env/bin" >> $GITHUB_PATH
echo "/tmp/arm-toolchain/bin" >> $GITHUB_PATH
echo "/tmp/arm-toolchain/aarch64-none-linux-gnu/libc/usr/bin" >> $GITHUB_PATH
- name: Install build tools
run: |
echo $PATH
stat $(which aarch64-none-linux-gnu-gcc)
pip install build git+https://github.com/rhelmot/auditwheel
- name: Build wheels
run: cd src/api/python && CC=aarch64-none-linux-gnu-gcc CXX=aarch64-none-linux-gnu-g++ AR=aarch64-none-linux-gnu-ar LD=aarch64-none-linux-gnu-ld Z3_CROSS_COMPILING=aarch64 python -m build && AUDITWHEEL_PLAT= auditwheel repair --best-plat dist/*.whl && cd ../../..
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: ManyLinuxPythonBuildArm64
path: src/api/python/wheelhouse/*.whl
retention-days: 7
windows-build-x64:
name: "Windows x64 build"
runs-on: windows-latest
timeout-minutes: 120
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build
shell: cmd
run: |
call "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" x64
python scripts\mk_win_dist.py --x64-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --zip
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: WindowsBuild-x64
path: dist/*.zip
retention-days: 7
windows-build-x86:
name: "Windows x86 build"
runs-on: windows-latest
timeout-minutes: 120
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build
shell: cmd
run: |
call "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" x86
python scripts\mk_win_dist.py --x86-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --zip
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: WindowsBuild-x86
path: dist/*.zip
retention-days: 7
windows-build-arm64:
name: "Windows ARM64 build"
runs-on: windows-latest
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Build
shell: cmd
run: |
call "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsall.bat" amd64_arm64
python scripts\mk_win_dist_cmake.py --arm64-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --assembly-version=${{ env.RELEASE_VERSION }} --zip
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: WindowsBuild-arm64
path: dist/arm64/*.zip
retention-days: 7
# ============================================================================
# PACKAGE STAGE
# ============================================================================
nuget-package-x64:
name: "NuGet 64 packaging"
needs: [windows-build-x64, windows-build-arm64, ubuntu-build, ubuntu-arm64, mac-build-x64, mac-build-arm64]
runs-on: windows-latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download Win64 Build
uses: actions/download-artifact@v7
with:
name: WindowsBuild-x64
path: package
- name: Download Win ARM64 Build
uses: actions/download-artifact@v7
with:
name: WindowsBuild-arm64
path: package
- name: Download Ubuntu Build
uses: actions/download-artifact@v7
with:
name: UbuntuBuild
path: package
- name: Download Ubuntu ARM64 Build
uses: actions/download-artifact@v7
with:
name: UbuntuArm64
path: package
- name: Download macOS Build
uses: actions/download-artifact@v7
with:
name: macOsBuild
path: package
- name: Download macOS Arm64 Build
uses: actions/download-artifact@v7
with:
name: MacArm64
path: package
- name: Setup NuGet
uses: nuget/setup-nuget@v2
with:
nuget-version: 'latest'
- name: Assemble NuGet package
shell: cmd
run: |
cd package
python ..\scripts\mk_nuget_task.py . ${{ env.RELEASE_VERSION }} https://github.com/Z3Prover/z3 ${{ github.ref_name }} ${{ github.sha }} ${{ github.workspace }} symbols
- name: Pack NuGet package
shell: cmd
run: |
cd package
nuget pack out\Microsoft.Z3.sym.nuspec -OutputDirectory . -Verbosity detailed -Symbols -SymbolPackageFormat snupkg -BasePath out
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: NuGet
path: |
package/*.nupkg
package/*.snupkg
retention-days: 7
nuget-package-x86:
name: "NuGet 32 packaging"
needs: [windows-build-x86]
runs-on: windows-latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download artifacts
uses: actions/download-artifact@v7
with:
name: WindowsBuild-x86
path: package
- name: Setup NuGet
uses: nuget/setup-nuget@v2
with:
nuget-version: 'latest'
- name: Assemble NuGet package
shell: cmd
run: |
cd package
python ..\scripts\mk_nuget_task.py . ${{ env.RELEASE_VERSION }} https://github.com/Z3Prover/z3 ${{ github.ref_name }} ${{ github.sha }} ${{ github.workspace }} symbols x86
- name: Pack NuGet package
shell: cmd
run: |
cd package
nuget pack out\Microsoft.Z3.x86.sym.nuspec -OutputDirectory . -Verbosity detailed -Symbols -SymbolPackageFormat snupkg -BasePath out
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: NuGet32
path: |
package/*.nupkg
package/*.snupkg
retention-days: 7
python-package:
name: "Python packaging"
needs: [mac-build-x64, mac-build-arm64, windows-build-x64, windows-build-x86, windows-build-arm64, manylinux-python-amd64, manylinux-python-arm64]
runs-on: ubuntu-24.04
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Download macOS x64 Build
uses: actions/download-artifact@v7
with:
name: macOsBuild
path: artifacts
- name: Download macOS Arm64 Build
uses: actions/download-artifact@v7
with:
name: MacArm64
path: artifacts
- name: Download Win64 Build
uses: actions/download-artifact@v7
with:
name: WindowsBuild-x64
path: artifacts
- name: Download Win32 Build
uses: actions/download-artifact@v7
with:
name: WindowsBuild-x86
path: artifacts
- name: Download Win ARM64 Build
uses: actions/download-artifact@v7
with:
name: WindowsBuild-arm64
path: artifacts
- name: Download ManyLinux AMD64 Build
uses: actions/download-artifact@v7
with:
name: ManyLinuxPythonBuildAMD64
path: artifacts
- name: Download ManyLinux Arm64 Build
uses: actions/download-artifact@v7
with:
name: ManyLinuxPythonBuildArm64
path: artifacts
- name: Extract builds
run: |
cd artifacts
ls
mkdir -p osx-x64-bin osx-arm64-bin win32-bin win64-bin win-arm64-bin
cd osx-x64-bin && unzip ../z3-*-x64-osx*.zip && cd ..
cd osx-arm64-bin && unzip ../z3-*-arm64-osx*.zip && cd ..
cd win32-bin && unzip ../z3-*-x86-win*.zip && cd ..
cd win64-bin && unzip ../z3-*-x64-win*.zip && cd ..
cd win-arm64-bin && unzip ../z3-*-arm64-win*.zip && cd ..
- name: Build Python packages
run: |
python3 -m pip install --user -U setuptools
cd src/api/python
python3 setup.py sdist
echo $PWD/../../../artifacts/win32-bin/* | xargs printf 'PACKAGE_FROM_RELEASE=%s\n' | xargs -I '{}' env '{}' python3 setup.py bdist_wheel
echo $PWD/../../../artifacts/win64-bin/* | xargs printf 'PACKAGE_FROM_RELEASE=%s\n' | xargs -I '{}' env '{}' python3 setup.py bdist_wheel
echo $PWD/../../../artifacts/win-arm64-bin/* | xargs printf 'PACKAGE_FROM_RELEASE=%s\n' | xargs -I '{}' env '{}' python3 setup.py bdist_wheel
echo $PWD/../../../artifacts/osx-x64-bin/* | xargs printf 'PACKAGE_FROM_RELEASE=%s\n' | xargs -I '{}' env '{}' python3 setup.py bdist_wheel
echo $PWD/../../../artifacts/osx-arm64-bin/* | xargs printf 'PACKAGE_FROM_RELEASE=%s\n' | xargs -I '{}' env '{}' python3 setup.py bdist_wheel
- name: Copy Linux Python packages
run: |
cp artifacts/*.whl src/api/python/dist/.
- name: Upload artifact
uses: actions/upload-artifact@v6
with:
name: PythonPackage
path: src/api/python/dist/*
retention-days: 7
# ============================================================================
# PUBLISH STAGE
# ============================================================================
publish-github:
name: "Publish to GitHub Releases"
if: ${{ github.event.inputs.publish_github == 'true' }}
needs: [
windows-build-x86,
windows-build-x64,
windows-build-arm64,
mac-build-x64,
mac-build-arm64,
ubuntu-build,
ubuntu-arm64,
ubuntu-doc,
python-package,
nuget-package-x64,
nuget-package-x86,
validate-macos-headerpad-x64,
validate-macos-headerpad-arm64
]
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download all artifacts
uses: actions/download-artifact@v7
with:
path: tmp
- name: Display structure of downloaded files
run: ls -R tmp
- name: Create Release
env:
GH_TOKEN: ${{ github.token }}
run: |
ls
find tmp -type f \( -name "*.zip" -o -name "*.whl" -o -name "*.tar.gz" -o -name "*.nupkg" -o -name "*.snupkg" \) -print0 > release_files.txt
# Deduplicate files - keep only first occurrence of each basename
# Use NUL-delimited input/output to handle spaces in filenames safely
declare -A seen_basenames
declare -a unique_files
while IFS= read -r -d $'\0' filepath || [ -n "$filepath" ]; do
[ -z "$filepath" ] && continue
basename="${filepath##*/}"
# Keep only first occurrence of each basename
if [ -z "${seen_basenames[$basename]}" ]; then
seen_basenames[$basename]=1
unique_files+=("$filepath")
fi
done < release_files.txt
# Create release with properly quoted file arguments
if [ ${#unique_files[@]} -gt 0 ]; then
gh release create z3-${{ env.RELEASE_VERSION }} \
--title "z3-${{ env.RELEASE_VERSION }}" \
--notes "${{ env.RELEASE_VERSION }} release" \
--draft \
--prerelease \
--target ${{ github.sha }} \
"${unique_files[@]}"
else
echo "No files to release after deduplication"
exit 1
fi
publish-nuget:
name: "Publish to NuGet.org"
if: ${{ github.event.inputs.publish_nuget == 'true' }}
needs: [nuget-package-x64, nuget-package-x86]
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Download NuGet packages
uses: actions/download-artifact@v7
with:
name: NuGet
path: packages
- name: Download NuGet32 packages
uses: actions/download-artifact@v7
with:
name: NuGet32
path: packages
- name: Setup NuGet
uses: nuget/setup-nuget@v2
with:
nuget-version: 'latest'
- name: Publish to NuGet
env:
NUGET_API_KEY: ${{ secrets.NUGET_API_KEY }}
run: |
nuget push packages/*.nupkg -Source https://api.nuget.org/v3/index.json -ApiKey $NUGET_API_KEY
publish-pypi:
name: "Publish to PyPI"
if: ${{ github.event.inputs.publish_pypi == 'true' }}
needs: [python-package]
runs-on: ubuntu-latest
environment: pypi
permissions:
id-token: write
contents: read
steps:
- name: Download Python packages
uses: actions/download-artifact@v7
with:
name: PythonPackage
path: dist
- name: Publish to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
with:
packages-dir: dist
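The basename-deduplication loop in the publish-github job can be exercised in isolation; a minimal sketch with hypothetical artifact paths, newline-delimited here for brevity where the job itself uses NUL delimiters:

```shell
# Two of these hypothetical paths share the basename Microsoft.Z3.nupkg;
# only the first occurrence should survive, as in the release job.
files='tmp/NuGet/Microsoft.Z3.nupkg
tmp/UbuntuBuild/z3-x64.zip
tmp/mirror/Microsoft.Z3.nupkg'
unique=$(printf '%s\n' "$files" | while IFS= read -r filepath; do
    base="${filepath##*/}"
    case " $seen " in
        *" $base "*) ;;                            # duplicate basename: skip
        *) seen="$seen $base"; echo "$filepath" ;;
    esac
done)
echo "$unique"
```

The NUL-delimited version in the job is the safer choice for real artifact trees, since paths may contain spaces.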

.github/workflows/soundness-bug-detector.lock.yml (generated; diff suppressed because it is too large)

.github/workflows/soundness-bug-detector.md (new file)

@ -0,0 +1,41 @@
---
description: Automatically validate and reproduce reported soundness bugs
on:
issues:
types: [opened, labeled]
schedule: daily
roles: all
permissions: read-all
network: defaults
tools:
cache-memory: true
github:
toolsets: [default]
bash: [":*"]
web-fetch: {}
safe-outputs:
add-comment:
max: 2
create-discussion:
title-prefix: "[Soundness] "
category: "Agentic Workflows"
close-older-discussions: true
missing-tool:
create-issue: true
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@v5
---
<!-- Edit the file linked below to modify the agent without recompilation. Feel free to move the entire markdown body to that file. -->
@./agentics/soundness-bug-detector.md

.github/workflows/specbot.lock.yml (generated; diff suppressed because it is too large)

.github/workflows/specbot.md (new file)

@ -0,0 +1,58 @@
---
description: Automatically annotate code with assertions capturing class invariants, pre-conditions, and post-conditions using LLM-based specification mining
on:
schedule: weekly
workflow_dispatch:
inputs:
target_path:
description: 'Target directory or file to analyze (e.g., src/ast/, src/smt/smt_context.cpp)'
required: false
default: ''
target_class:
description: 'Specific class name to analyze (optional)'
required: false
default: ''
roles: [write, maintain, admin]
env:
GH_TOKEN: ${{ secrets.BOT_PAT }}
permissions:
contents: read
issues: read
pull-requests: read
tools:
github:
toolsets: [default]
view: {}
glob: {}
edit: {}
bash:
- ":*"
mcp-servers:
serena:
container: "ghcr.io/githubnext/serena-mcp-server"
version: "latest"
safe-outputs:
create-discussion:
title-prefix: "[SpecBot] "
category: "Agentic Workflows"
close-older-discussions: true
missing-tool:
create-issue: true
timeout-minutes: 45
steps:
- name: Checkout repository
uses: actions/checkout@v5
---
<!-- Edit the file linked below to modify the agent without recompilation. Feel free to move the entire markdown body to that file. -->
@./agentics/specbot.md

.github/workflows/tactic-to-simplifier.lock.yml (generated; diff suppressed because it is too large)

.github/workflows/tactic-to-simplifier.md (new file)

@ -0,0 +1,278 @@
---
description: Compares exposed tactics and simplifiers in Z3, and creates issues for tactics that can be converted to simplifiers
on:
schedule: weekly
workflow_dispatch:
timeout-minutes: 30
permissions:
contents: read
issues: read
pull-requests: read
network: defaults
tools:
cache-memory: true
github:
toolsets: [default]
bash: [":*"]
glob: {}
view: {}
safe-outputs:
create-issue:
labels:
- enhancement
- refactoring
- tactic-to-simplifier
title-prefix: "[tactic-to-simplifier] "
max: 3
github-token: ${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout repository
uses: actions/checkout@v5
with:
persist-credentials: false
---
# Tactic-to-Simplifier Comparison Agent
You are an expert Z3 theorem prover developer. Your task is to compare the tactics and simplifiers exposed in the Z3 codebase, identify tactics that could be converted into simplifiers, and create GitHub issues with the proposed code changes.
## Background
Z3 has two related but distinct abstraction layers:
- **Tactics** (`tactic` base class in `src/tactic/tactic.h`): Operate on *goals* (sets of formulas). Registered with `ADD_TACTIC` macros.
- **Simplifiers** (`dependent_expr_simplifier` base class in `src/ast/simplifiers/dependent_expr_state.h`): Operate on individual `dependent_expr` objects in a `dependent_expr_state`. Registered with `ADD_SIMPLIFIER` macros.
The preferred modern pattern wraps a simplifier as a tactic using `dependent_expr_state_tactic` (see `src/tactic/dependent_expr_state_tactic.h`). Example from `src/tactic/core/propagate_values2_tactic.h`:
```cpp
inline tactic * mk_propagate_values2_tactic(ast_manager & m, params_ref const & p = params_ref()) {
return alloc(dependent_expr_state_tactic, m, p,
[](auto& m, auto& p, auto &s) -> dependent_expr_simplifier* { return alloc(propagate_values, m, p, s); });
}
/*
ADD_TACTIC("propagate-values2", "propagate constants.", "mk_propagate_values2_tactic(m, p)")
ADD_SIMPLIFIER("propagate-values", "propagate constants.", "alloc(propagate_values, m, p, s)")
*/
```
## Your Task
### Step 1: Collect All Tactics
Scan all header files in `src/` to extract every `ADD_TACTIC` registration:
```bash
grep -rn "ADD_TACTIC(" src/ --include="*.h" | grep -v "^Binary"
```
Parse each line to extract:
- Tactic name (first quoted string)
- Description (second quoted string)
- Factory expression (third quoted string)
- File path
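The three quoted fields can be pulled out of a registration line with a simple pattern; a minimal sketch using the `propagate-values2` registration shown earlier (the grep `file:line` prefix is assumed to be already stripped):

```shell
# One ADD_TACTIC registration line as it appears in the source.
line='ADD_TACTIC("propagate-values2", "propagate constants.", "mk_propagate_values2_tactic(m, p)")'
# Extract each quoted string, then drop the surrounding quotes.
fields=$(printf '%s\n' "$line" | grep -oE '"[^"]*"' | tr -d '"')
echo "$fields"
```

The first line is the tactic name, the second the description, the third the factory expression.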
### Step 2: Collect All Simplifiers
Scan all header files to extract every `ADD_SIMPLIFIER` registration:
```bash
grep -rn "ADD_SIMPLIFIER(" src/ --include="*.h" | grep -v "^Binary"
```
Parse each line to extract:
- Simplifier name
- Description
- Factory expression
- File path
### Step 3: Compare and Find Gaps
Build a comparison table. For each tactic, check if there is a corresponding simplifier with the same or a closely related name.
Key rules for matching:
- Exact name match: tactic `simplify` ↔ simplifier `simplify`
- Version suffix: tactic `propagate-values2` corresponds to simplifier `propagate-values` (the "2" suffix indicates the tactic wraps the simplifier)
- Suffix `2`: tactics with `2` suffix (e.g., `elim-uncnstr2`, `propagate-bv-bounds2`) typically already have a simplifier counterpart
Identify tactics that have **no corresponding simplifier**.
### Step 4: Evaluate Convertibility
For each tactic without a corresponding simplifier, assess whether it is a good candidate for conversion by examining its implementation:
```bash
# Read the tactic's header file to understand its implementation
grep -rn "mk_<tactic_name>_tactic\|class <tactic_name>" src/ --include="*.h" --include="*.cpp"
```
A tactic is a **good conversion candidate** if:
1. It transforms formulas in a formula-by-formula way (no goal splitting/branching)
2. It does not produce multiple goals from one
3. It is a pure simplification (rewrites terms without adding new conjuncts that split the goal)
4. It doesn't require global goal analysis beyond what `dependent_expr_state` provides
A tactic is **not suitable** for conversion if:
- It splits goals into multiple subgoals
- It requires tight coupling to the goal infrastructure
- It depends on features unavailable in `dependent_expr_simplifier` (e.g., `goal_num_occurs`)
- It only makes sense in a tactic pipeline (e.g., `fail`, `skip`, `sat`, `smt`, solver tactics)
- It is a portfolio tactic (combines multiple tactics)
### Step 5: Check for Existing Issues
Before creating any issue, search for existing open issues to avoid duplicates:
Use GitHub tools to search: `repo:${{ github.repository }} is:issue is:open label:tactic-to-simplifier "<tactic-name>"`
Also check cache memory for previously created issues.
### Step 6: Create Issues for Convertible Tactics
For each convertible tactic that does not already have an open issue:
1. Read the tactic's existing implementation carefully (header + cpp files)
2. Design the corresponding `dependent_expr_simplifier` subclass
3. Draft the code for the new simplifier
Use `create-issue` to create a GitHub issue with:
**Title**: `Convert tactic '<tactic-name>' to a simplifier`
**Body**:
```markdown
## Summary
The `<tactic-name>` tactic (described as: "<description>") currently exists only as a `tactic`
and has no corresponding `dependent_expr_simplifier`. This issue proposes converting it so that
it is exposed both as a tactic (via the `dependent_expr_state_tactic` wrapper) and as a simplifier.
## Background
Z3 provides two abstraction layers for formula transformation:
- **Tactics** (`tactic` base class): Operate on goals
- **Simplifiers** (`dependent_expr_simplifier` base class): Operate on individual formulas in a `dependent_expr_state`
The modern pattern wraps a simplifier inside a tactic using `dependent_expr_state_tactic`.
## Current Implementation
File: `<path/to/tactic/header.h>`
```cpp
// Existing tactic registration
ADD_TACTIC("<tactic-name>", "<description>", "mk_<tactic_name>_tactic(m, p)")
```
## Proposed Change
### 1. Create a new simplifier class in `src/ast/simplifiers/<name>_simplifier.h`:
```cpp
#pragma once
#include "ast/simplifiers/dependent_expr_state.h"
// ... other includes
class <name>_simplifier : public dependent_expr_simplifier {
// ... internal state
public:
<name>_simplifier(ast_manager& m, params_ref const& p, dependent_expr_state& s)
: dependent_expr_simplifier(m, s) { }
char const* name() const override { return "<simplifier-name>"; }
void reduce() override {
for (unsigned idx : indices()) {
auto& d = m_fmls[idx];
// ... transform d.fml() ...
expr_ref new_fml(m);
// apply simplification
m_fmls.update(idx, dependent_expr(m, new_fml, nullptr, d.dep()));
}
}
};
```
### 2. Update `<path/to/existing/tactic_header.h>` to add the simplifier registration and new tactic factory:
```cpp
#include "tactic/dependent_expr_state_tactic.h"
#include "ast/simplifiers/<name>_simplifier.h"
inline tactic* mk_<name>2_tactic(ast_manager& m, params_ref const& p = params_ref()) {
return alloc(dependent_expr_state_tactic, m, p,
[](auto& m, auto& p, auto& s) -> dependent_expr_simplifier* {
return alloc(<name>_simplifier, m, p, s);
});
}
/*
ADD_TACTIC("<tactic-name>2", "<description>", "mk_<name>2_tactic(m, p)")
ADD_SIMPLIFIER("<tactic-name>", "<description>", "alloc(<name>_simplifier, m, p, s)")
*/
```
## Benefits
- Enables use of `<tactic-name>` in Z3's simplifier pipeline (used by the new solver engine)
- Follows the established modern pattern for formula simplification in Z3
- No behavioral change for existing tactic users
## Notes
- The original `mk_<tactic_name>_tactic` should remain for backward compatibility
- The simplifier should implement `supports_proofs()` if proof generation is relevant
```
**Important instructions for issue body**:
- Replace all placeholders (`<tactic-name>`, `<name>`, `<description>`, `<path>`, etc.) with **real, specific values** from the actual source code
- Provide **actual code** based on reading the tactic's implementation, not generic templates
- Include the real factory expression, include paths, and class names from the existing implementation
- If the tactic has parameters, include them in the simplifier
- If the tactic wraps another component (rewriter, solver, etc.), include that in the simplifier too
### Step 7: Update Cache Memory
Store in cache memory:
- The list of all tactics analyzed in this run
- The list of issues created (tactic name → issue number)
- Tactics determined to be non-convertible and why
- Tactics with existing issues (to skip in future runs)
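For concreteness, the cached record could look like this (a hypothetical layout; the names and issue numbers below are illustrative only):

```python
# Hypothetical cache-memory layout for tracking analysis across runs.
cache = {
    "analyzed_tactics": ["simplify", "propagate-values2", "nnf"],
    "issues_created": {"nnf": 1234},  # tactic name -> issue number
    "not_convertible": {"split-clause": "splits goals into multiple subgoals"},
    "existing_issues": ["elim-uncnstr"],  # skip in future runs
}
```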
## Conversion Criteria Reference
### Likely Convertible (if no simplifier exists)
- Pure term rewriting tactics (apply rewriting rules)
- Bound propagation tactics
- Variable elimination tactics that work formula-by-formula
- Normalization tactics (NNF, SNF) that apply local transformations
- Tactics that simplify based on syntactic structure
### Not Convertible (skip these)
- Solver-based tactics (`ctx-solver-simplify`, `sat`, `smt`, etc.) — require a solver
- Portfolio/combinator tactics (`then`, `or-else`, `repeat`, etc.)
- Decision procedure tactics (`qfbv`, `qflra`, etc.)
- Tactics that split goals (`split-clause`, `tseitin-cnf`, `occf`)
- Tactics that only make sense in goal context (`fail`, `skip`)
- Tactics using `goal_num_occurs` for occurrence counting (the simplifier doesn't have this)
- Tactics that produce multiple result goals
## Guidelines
- **Be specific**: Provide actual file paths, class names, and factory expressions — no generic placeholders
- **Be careful**: Only create issues for tactics that are genuinely good candidates
- **Avoid duplicates**: Always check existing issues before creating new ones
- **One issue per tactic**: Create separate issues for each convertible tactic
- **Read the code**: Examine the actual tactic implementation before proposing code for the simplifier
- **Be incremental**: If there are many candidates, focus on the most impactful ones first
- **Limit per run**: Create at most 3 new issues per run to avoid flooding the issue tracker

```diff
@@ -21,17 +21,17 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6.0.2
       - name: Setup node
-        uses: actions/setup-node@v5
+        uses: actions/setup-node@v6
         with:
           node-version: "lts/*"
           registry-url: "https://registry.npmjs.org"
       - name: Prepare for publish
         run: |
-          npm version $(node -e 'console.log(fs.readFileSync("../../../scripts/release.yml", "utf8").match(/ReleaseVersion:\s*\x27(\S+)\x27/)[1])')
+          npm version $(node -e 'console.log(fs.readFileSync("../../../.github/workflows/release.yml", "utf8").match(/RELEASE_VERSION:\s*\x27(\S+)\x27/)[1])')
          mv PUBLISHED_README.md README.md
          cp ../../../LICENSE.txt .
```

```diff
@@ -21,10 +21,10 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6.0.2
       - name: Setup node
-        uses: actions/setup-node@v5
+        uses: actions/setup-node@v6
        with:
          node-version: "lts/*"
```

```diff
@@ -3,6 +3,7 @@ name: Open Issues
 on:
   schedule:
     - cron: '0 0 */2 * *'
+  workflow_dispatch:
 env:
   BUILD_TYPE: Debug
@@ -15,7 +16,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6.0.2
       - name: Configure CMake
         run: cmake -B ${{github.workspace}}/build -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}}
```

`.github/workflows/workflow-suggestion-agent.lock.yml` (generated, 1161 lines): file diff suppressed because it is too large.

New file (+383 lines):
---
description: Weekly agent that suggests which agentic workflow agents should be added to the Z3 repository
on:
schedule: weekly
timeout-minutes: 30
permissions: read-all
network: defaults
tools:
cache-memory: true
serena: ["python", "java", "csharp"]
github:
toolsets: [default]
bash: [":*"]
glob: {}
safe-outputs:
create-discussion:
title-prefix: "[Workflow Suggestions] "
category: "Agentic Workflows"
close-older-discussions: true
github-token: ${{ secrets.GITHUB_TOKEN }}
steps:
- name: Checkout repository
uses: actions/checkout@v5
with:
persist-credentials: false
---
# Workflow Suggestion Agent
## Job Description
Your name is ${{ github.workflow }}. You are an expert AI agent tasked with analyzing the Z3 theorem prover repository `${{ github.repository }}` to identify automation opportunities and suggest new agentic workflow agents that would be valuable for the development team.
## Your Task
### 1. Initialize or Resume Progress (Cache Memory)
Check your cache memory for:
- List of workflow suggestions already made
- Workflows that have been implemented since last run
- Repository patterns and insights discovered
- Areas of the codebase already analyzed
**Important**: If you have cached suggestions:
- **Re-verify each cached suggestion** before including it in the report
- Check if a workflow has been created for that suggestion since the last run
- Use glob to find workflow files and grep to search for specific automation
- **Mark suggestions as implemented** if a workflow now exists
- **Remove implemented suggestions** from the cache and celebrate them in the report
- Only carry forward suggestions that are still relevant and unimplemented
If this is your first run or memory is empty, initialize a tracking structure.
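The re-verification pass can be sketched as follows (a sketch under assumptions: the cache layout shown is hypothetical, and suggested workflow names are assumed to map to slugified `.md` file names under `.github/workflows/`):

```python
from pathlib import Path

def verify_cached_suggestions(cache: dict, workflows_dir: str = ".github/workflows"):
    """Split cached suggestions into still-open vs. implemented, based on
    whether a matching workflow file now exists in the repository."""
    existing = {p.stem.lower() for p in Path(workflows_dir).glob("*.md")}
    still_open, implemented = [], []
    for suggestion in cache.get("suggestions", []):
        slug = suggestion["name"].lower().replace(" ", "-")
        (implemented if slug in existing else still_open).append(suggestion)
    return still_open, implemented
```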
### 2. Analyze the Repository Context
Examine the Z3 repository to understand:
**Development Patterns:**
- What types of issues are commonly reported? (use GitHub API to analyze recent issues)
- What areas generate the most pull requests?
- What languages and frameworks are used? (check file extensions, build files)
- What build systems and testing frameworks exist?
- What documentation exists and where gaps might be?
**Current Automation:**
- What GitHub Actions workflows already exist? (check `.github/workflows/` for both `.yml` and `.md` files)
- What agentic workflows are already in place? (`.md` files in `.github/workflows/`)
- What types of automation are missing?
**Development Pain Points:**
- Repetitive tasks that could be automated
- Quality assurance gaps (linting, testing, security)
- Documentation maintenance needs
- Community management needs (issue triage, PR reviews)
- Release management tasks
- Performance monitoring needs
### 3. Identify Automation Opportunities
Look for patterns that suggest automation opportunities:
**Issue Management:**
- Issues without labels or incorrect labels
- Questions that could be auto-answered
- Issues needing triage or categorization
- Stale issues that need attention
- Duplicate issues that could be detected
**Pull Request Management:**
- PRs needing code review
- PRs with merge conflicts
- PRs missing tests or documentation
- PRs that need performance validation
- PRs that could benefit from automated checks
**Code Quality:**
- Code that could benefit from automated refactoring
- Patterns that violate project conventions
- Security vulnerabilities to monitor
- Performance regressions to detect
- Test coverage gaps
**Documentation:**
- Out-of-date documentation
- Missing API documentation
- Tutorial gaps
- Release notes maintenance
- Changelog generation
**Community & Communication:**
- Weekly/monthly status reports
- Contributor recognition
- Onboarding automation
- Community health metrics
**Release & Deployment:**
- Release preparation tasks
- Version bumping
- Binary distribution
- Package publishing
**Research & Monitoring:**
- Academic paper tracking (for theorem provers)
- Competitor analysis
- Dependency updates
- Security advisory monitoring
### 4. Consider Workflow Feasibility
For each potential automation opportunity, assess:
**Technical Feasibility:**
- Can it be done with available tools (GitHub API, bash, Serena, web-fetch, etc.)?
- Does it require external services or APIs?
- Is the data needed accessible?
- Would it need special permissions?
**Value Assessment:**
- How much time would it save?
- How many people would benefit?
- What's the impact on code quality/velocity?
- Is it solving a real pain point or just nice-to-have?
**Safety & Security:**
- Can it be done safely with safe-outputs?
- Does it need write permissions (try to avoid)?
- Could it cause harm if the AI makes mistakes?
- Does it handle sensitive data appropriately?
### 5. Learn from Existing Workflows
Study the existing agentic workflows in this repository:
- What patterns do they follow?
- What tools do they use?
- How are they triggered?
- What safe-outputs do they use?
Use these as templates for your suggestions.
### 6. Generate Workflow Suggestions
For each suggestion, provide:
**Workflow Name:** Clear, descriptive name (e.g., "Performance Regression Detector")
**Purpose:** What problem does it solve? Who benefits?
**Trigger:** When should it run?
- `issues` - When issues are opened/edited
- `pull_request` - When PRs are opened/updated
- `schedule: daily` or `schedule: weekly` - Regular schedules
- `workflow_dispatch` - Manual trigger (auto-added by compiler with fuzzy schedules)
**Required Tools:**
- GitHub API (via `toolsets: [default]`)
- Other tools (web-search, web-fetch, bash, Serena, etc.)
- Any required network access
**Safe Outputs:**
- What write operations are needed? (create-discussion, add-comment, create-issue, create-pull-request)
- For daily reporting workflows, include `close-older-discussions: true` to prevent clutter
**Priority:** High (addresses critical gap), Medium (valuable improvement), Low (nice-to-have)
**Implementation Notes:**
- Key challenges or considerations
- Similar workflows to reference
- Special permissions or setup needed
**Example Workflow Snippet:**
Provide a minimal example of the workflow frontmatter to show feasibility:
```yaml
---
description: Brief description
on:
schedule: daily
permissions: read-all
tools:
github:
toolsets: [default]
safe-outputs:
create-discussion:
close-older-discussions: true
---
```
### 7. Check for Recent Suggestions
Before creating a new discussion, check if there's already an open discussion for workflow suggestions:
- Look for discussions with "[Workflow Suggestions]" in the title
- Check if it was created within the last 3 days
If a very recent discussion exists:
- Do NOT create a new discussion
- Exit gracefully
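The three-day recency check can be sketched as (assuming GitHub's ISO-8601 `createdAt` timestamps):

```python
from datetime import datetime, timedelta, timezone

def is_recent(created_at: str, days: int = 3) -> bool:
    """True if an ISO-8601 timestamp (e.g. '2026-01-21T10:00:00Z') is within `days` of now."""
    created = datetime.fromisoformat(created_at.replace("Z", "+00:00"))
    return datetime.now(timezone.utc) - created < timedelta(days=days)
```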
### 8. Create Discussion with Suggestions
Create a GitHub Discussion with:
**Title:** "[Workflow Suggestions] Weekly Report - [Date]"
**Content Structure:**
- **Executive Summary:** Number of suggestions, priority breakdown
- **Implemented Since Last Run:** Celebrate any previously suggested workflows that have been implemented (if any)
- **Top Priority Suggestions:** 2-3 high-value workflows that should be implemented soon
- **Medium Priority Suggestions:** 3-5 valuable improvements
- **Low Priority Suggestions:** Nice-to-have ideas
- **Repository Insights:** Any interesting patterns or observations about the repository
- **Progress Tracker:** What % of repository automation potential has been covered
**Formatting Guidelines:**
- Use progressive disclosure with `<details><summary>` for each suggestion
- Include code blocks for workflow examples
- Use checkboxes `- [ ]` for tracking implementation
- Keep it actionable and specific
**Important Notes:**
- Only include suggestions that are confirmed to be unimplemented in the current repository
- Verify each suggestion is still relevant before including it
- Celebrate implemented suggestions but don't re-suggest them
### 9. Update Cache Memory
Store in cache memory:
- All new suggestions made in this run
- **Remove implemented suggestions** from the cache
- Repository patterns and insights discovered
- Areas of automation already well-covered
- Next areas to focus on in future runs
**Critical:** Keep cache fresh by:
- Removing suggestions that have been implemented
- Updating suggestions based on repository changes
- Not perpetuating stale information
## Guidelines
- **Be strategic:** Focus on high-impact automation opportunities
- **Be specific:** Provide concrete workflow examples, not vague ideas
- **Be realistic:** Only suggest workflows that are technically feasible
- **Be safety-conscious:** Prioritize workflows that use safe-outputs over those needing write permissions
- **Use cache effectively:** Build on previous runs' knowledge
- **Keep cache fresh:** Verify suggestions are still relevant and remove implemented ones
- **Learn from examples:** Study existing workflows in the repository
- **Consider the team:** What would save the most time for Z3 maintainers?
- **Quality over quantity:** 5 excellent suggestions are better than 20 mediocre ones
- **Celebrate progress:** Acknowledge when suggestions get implemented
## Z3-Specific Context
Z3 is a theorem prover and SMT solver used in:
- Program verification
- Security analysis
- Compiler optimization
- Test generation
- Formal methods research
**Key considerations for Z3:**
- Academic research community
- Multi-language bindings (C++, Python, Java, C#, OCaml, JavaScript)
- Performance is critical
- Correctness is paramount
- Used in production by major companies
- Active contributor community
**Common Z3 tasks that could benefit from automation:**
- API consistency across language bindings
- Performance benchmark tracking
- Research paper and citation tracking
- Example code validation
- Tutorial maintenance
- Solver regression detection
- Build time optimization
- Cross-platform compatibility testing
- Community contribution recognition
- Issue triage by solver component (SAT, SMT, theory solvers)
## Important Notes
- **DO NOT** create issues or pull requests - only discussions
- **DO NOT** suggest workflows for things that are already well-automated
- **DO** verify suggestions are still relevant before reporting them
- **DO** close older discussions automatically (this is configured)
- **DO** provide enough detail for maintainers to quickly assess and implement suggestions
- **DO** consider the unique needs of a theorem prover project
- **DO** suggest workflows that respect the expertise of the Z3 team (assist, don't replace)
## Example Output Structure
```markdown
# Workflow Suggestions - January 21, 2026
## Executive Summary
- 8 new suggestions this run
- 1 previously suggested workflow now implemented! 🎉
- Priority: 2 High, 4 Medium, 2 Low
## 🎉 Implemented Since Last Run
- **API Coherence Checker** - Successfully implemented and running daily!
## High Priority Suggestions
<details>
<summary><b>1. Performance Regression Detector</b></summary>
**Purpose:** Automatically detect performance regressions in solver benchmarks
**Trigger:** `pull_request` (on PRs that modify solver code)
**Tools Needed:**
- GitHub API (`toolsets: [default]`)
- Bash for running benchmarks
- Network defaults for downloading benchmark sets
**Safe Outputs:**
- `add-comment:` to report results on PRs
**Value:** Critical for maintaining Z3's performance characteristics
**Implementation Notes:**
- Could use existing benchmark suite
- Compare against baseline from main branch
- Report significant regressions (>5% slowdown)
**Example:**
\`\`\`yaml
---
description: Detect performance regressions in solver benchmarks
on:
pull_request:
types: [opened, synchronize]
paths: ['src/**/*.cpp', 'src/**/*.h']
permissions: read-all
tools:
github:
toolsets: [default]
bash: [":*"]
safe-outputs:
add-comment:
max: 3
---
\`\`\`
</details>
## Medium Priority Suggestions
...
## Low Priority Suggestions
...
## Repository Insights
...
```

`.gitignore` (+2 lines):

```diff
@@ -115,3 +115,5 @@ genaisrc/genblogpost.genai.mts
 *.mts
 # Bazel generated files
 bazel-*
+# Local issue tracking
+.beads
```

Deleted file (-3 lines):

```diff
@@ -1,3 +0,0 @@
-**/genaiscript.d.ts
-**/package-lock.json
-**/yarn.lock
```

```diff
@@ -27,7 +27,7 @@ cmake(
     out_shared_libs = select({
         "@platforms//os:linux": ["libz3.so"],
         # "@platforms//os:osx": ["libz3.dylib"],  # FIXME: this is not working, libz3<version>.dylib is not copied
-        # "@platforms//os:windows": ["z3.dll"],  # TODO: test this
+        "@platforms//os:windows": ["libz3.dll"],
         "//conditions:default": ["@platforms//:incompatible"],
     }),
     visibility = ["//visibility:public"],
@@ -45,7 +45,7 @@ cmake(
     out_static_libs = select({
         "@platforms//os:linux": ["libz3.a"],
         "@platforms//os:osx": ["libz3.a"],
-        # "@platforms//os:windows": ["z3.lib"],  # TODO: test this
+        "@platforms//os:windows": ["libz3.lib"],  # MSVC with Control Flow Guard enabled by default
         "//conditions:default": ["@platforms//:incompatible"],
     }),
     visibility = ["//visibility:public"],
```

```diff
@@ -362,34 +362,75 @@ endif()
 include(${PROJECT_SOURCE_DIR}/cmake/compiler_lto.cmake)
 ################################################################################
-# Control flow integrity
+# Control flow integrity (Clang only)
 ################################################################################
-option(Z3_ENABLE_CFI "Enable control flow integrity checking" OFF)
+option(Z3_ENABLE_CFI "Enable Control Flow Integrity security checks" OFF)
 if (Z3_ENABLE_CFI)
-    set(build_types_with_cfi "RELEASE" "RELWITHDEBINFO")
+    if (NOT CMAKE_CXX_COMPILER_ID MATCHES "Clang")
+        message(FATAL_ERROR "Z3_ENABLE_CFI is only supported with Clang compiler. "
+            "Current compiler: ${CMAKE_CXX_COMPILER_ID}. "
+            "You should set Z3_ENABLE_CFI to OFF or use Clang to compile.")
+    endif()
     if (NOT Z3_LINK_TIME_OPTIMIZATION)
-        message(FATAL_ERROR "Cannot enable control flow integrity checking without link-time optimization."
+        message(FATAL_ERROR "Cannot enable Control Flow Integrity without link-time optimization. "
             "You should set Z3_LINK_TIME_OPTIMIZATION to ON or Z3_ENABLE_CFI to OFF.")
     endif()
+    set(build_types_with_cfi "RELEASE" "RELWITHDEBINFO")
     if (DEFINED CMAKE_CONFIGURATION_TYPES)
         # Multi configuration generator
         message(STATUS "Note CFI is only enabled for the following configurations: ${build_types_with_cfi}")
         # No need for else because this is the same as the set that LTO requires.
     endif()
-    if ("${CMAKE_CXX_COMPILER_ID}" MATCHES "Clang")
-        z3_add_cxx_flag("-fsanitize=cfi" REQUIRED)
-        z3_add_cxx_flag("-fsanitize-cfi-cross-dso" REQUIRED)
-    elseif (CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
-        z3_add_cxx_flag("/guard:cf" REQUIRED)
-        message(STATUS "Enabling CFI for MSVC")
-        foreach (_build_type ${build_types_with_cfi})
-            message(STATUS "Enabling CFI for MSVC")
-            string(APPEND CMAKE_EXE_LINKER_FLAGS_${_build_type} " /GUARD:CF")
-            string(APPEND CMAKE_SHARED_LINKER_FLAGS_${_build_type} " /GUARD:CF")
-        endforeach()
-    else()
-        message(FATAL_ERROR "Can't enable control flow integrity for compiler \"${CMAKE_CXX_COMPILER_ID}\"."
-            "You should set Z3_ENABLE_CFI to OFF or use Clang or MSVC to compile.")
-    endif()
+    message(STATUS "Enabling Control Flow Integrity (CFI) for Clang")
+    z3_add_cxx_flag("-fsanitize=cfi" REQUIRED)
+    z3_add_cxx_flag("-fsanitize-cfi-cross-dso" REQUIRED)
+endif()
+# End CFI section
+################################################################################
+# Control Flow Guard (MSVC only)
+################################################################################
+# Default CFG to ON for MSVC, OFF for other compilers.
+if (CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
+    option(Z3_ENABLE_CFG "Enable Control Flow Guard security checks" ON)
+else()
+    option(Z3_ENABLE_CFG "Enable Control Flow Guard security checks" OFF)
+endif()
+if (Z3_ENABLE_CFG)
+    if (NOT CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
+        message(FATAL_ERROR "Z3_ENABLE_CFG is only supported with MSVC compiler. "
+            "Current compiler: ${CMAKE_CXX_COMPILER_ID}. "
+            "You should remove Z3_ENABLE_CFG or set it to OFF or use MSVC to compile.")
+    endif()
+    # Check for incompatible options (handle both / and - forms for robustness)
+    string(REGEX MATCH "[-/]ZI" _has_ZI "${CMAKE_CXX_FLAGS} ${CMAKE_CXX_FLAGS_DEBUG} ${CMAKE_CXX_FLAGS_RELEASE} ${CMAKE_CXX_FLAGS_RELWITHDEBINFO} ${CMAKE_CXX_FLAGS_MINSIZEREL}")
+    string(REGEX MATCH "[-/]clr" _has_clr "${CMAKE_CXX_FLAGS} ${CMAKE_CXX_FLAGS_DEBUG} ${CMAKE_CXX_FLAGS_RELEASE} ${CMAKE_CXX_FLAGS_RELWITHDEBINFO} ${CMAKE_CXX_FLAGS_MINSIZEREL}")
+    if(_has_ZI)
+        message(WARNING "/guard:cf is incompatible with /ZI (Edit and Continue debug information). "
+            "Control Flow Guard will be disabled due to /ZI option.")
+    elseif(_has_clr)
+        message(WARNING "/guard:cf is incompatible with /clr (Common Language Runtime compilation). "
+            "Control Flow Guard will be disabled due to /clr option.")
+    else()
+        # Enable Control Flow Guard if no incompatible options are present
+        message(STATUS "Enabling Control Flow Guard (/guard:cf) and ASLR (/DYNAMICBASE) for MSVC")
+        z3_add_cxx_flag("/guard:cf" REQUIRED)
+        string(APPEND CMAKE_EXE_LINKER_FLAGS " /GUARD:CF /DYNAMICBASE")
+        string(APPEND CMAKE_SHARED_LINKER_FLAGS " /GUARD:CF /DYNAMICBASE")
+    endif()
+else()
+    if (CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
+        # Explicitly disable Control Flow Guard when Z3_ENABLE_CFG is OFF
+        message(STATUS "Disabling Control Flow Guard (/guard:cf-) for MSVC")
+        z3_add_cxx_flag("/guard:cf-" REQUIRED)
+        string(APPEND CMAKE_EXE_LINKER_FLAGS " /GUARD:NO")
+        string(APPEND CMAKE_SHARED_LINKER_FLAGS " /GUARD:NO")
+    endif()
 endif()
@@ -507,21 +548,93 @@ set(Z3_GENERATED_FILE_EXTRA_DEPENDENCIES
 )
 ################################################################################
-# Z3 components, library and executables
+# API header files
 ################################################################################
-include(${PROJECT_SOURCE_DIR}/cmake/z3_add_component.cmake)
-include(${PROJECT_SOURCE_DIR}/cmake/z3_append_linker_flag_list_to_target.cmake)
-add_subdirectory(src)
+# This lists the API header files that are scanned by
+# some of the build rules to generate some files needed
+# by the build; needs to come before add_subdirectory(src)
+set(Z3_API_HEADER_FILES_TO_SCAN
+    z3_api.h
+    z3_ast_containers.h
+    z3_algebraic.h
+    z3_polynomial.h
+    z3_rcf.h
+    z3_fixedpoint.h
+    z3_optimization.h
+    z3_fpa.h
+    z3_spacer.h
+)
+set(Z3_FULL_PATH_API_HEADER_FILES_TO_SCAN "")
+foreach (header_file ${Z3_API_HEADER_FILES_TO_SCAN})
+    set(full_path_api_header_file "${CMAKE_CURRENT_SOURCE_DIR}/src/api/${header_file}")
+    list(APPEND Z3_FULL_PATH_API_HEADER_FILES_TO_SCAN "${full_path_api_header_file}")
+    if (NOT EXISTS "${full_path_api_header_file}")
+        message(FATAL_ERROR "API header file \"${full_path_api_header_file}\" does not exist")
+    endif()
+endforeach()
 ################################################################################
 # Create `Z3Config.cmake` and related files for the build tree so clients can
 # use Z3 via CMake.
 ################################################################################
 include(CMakePackageConfigHelpers)
-export(EXPORT Z3_EXPORTED_TARGETS
-    NAMESPACE z3::
-    FILE "${PROJECT_BINARY_DIR}/Z3Targets.cmake"
-)
+option(Z3_BUILD_LIBZ3_CORE "Build the core libz3 library" ON)
+# Only export targets if we built libz3
+if (Z3_BUILD_LIBZ3_CORE)
+    ################################################################################
+    # Z3 components, library and executables
+    ################################################################################
+    include(${PROJECT_SOURCE_DIR}/cmake/z3_add_component.cmake)
+    include(${PROJECT_SOURCE_DIR}/cmake/z3_append_linker_flag_list_to_target.cmake)
+    add_subdirectory(src)
+    export(EXPORT Z3_EXPORTED_TARGETS
+        NAMESPACE z3::
+        FILE "${PROJECT_BINARY_DIR}/Z3Targets.cmake"
+    )
+else()
+    # When not building libz3, we need to find it
+    message(STATUS "Not building libz3, will look for pre-installed library")
+    find_library(Z3_LIBRARY NAMES z3 libz3
+        HINTS ${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_LIBDIR}
+        PATH_SUFFIXES lib lib64
+    )
+    if (NOT Z3_LIBRARY)
+        message(FATAL_ERROR "Could not find pre-installed libz3. Please ensure libz3 is installed or set Z3_BUILD_LIBZ3_CORE=ON")
+    endif()
+    message(STATUS "Found libz3: ${Z3_LIBRARY}")
+    # Create an imported target for the pre-installed libz3
+    add_library(libz3 SHARED IMPORTED)
+    set_target_properties(libz3 PROPERTIES
+        IMPORTED_LOCATION "${Z3_LIBRARY}"
+    )
+    # Set include directories for the imported target
+    target_include_directories(libz3 INTERFACE
+        ${CMAKE_INSTALL_PREFIX}/${CMAKE_INSTALL_INCLUDEDIR}
+    )
+endif()
+################################################################################
+# Z3 API bindings
+################################################################################
+option(Z3_BUILD_PYTHON_BINDINGS "Build Python bindings for Z3" OFF)
+if (Z3_BUILD_PYTHON_BINDINGS)
+    # Validate configuration for Python bindings
+    if (Z3_BUILD_LIBZ3_CORE)
+        # Building libz3 together with Python bindings
+        if (NOT Z3_BUILD_LIBZ3_SHARED)
+            message(FATAL_ERROR "The python bindings will not work with a static libz3. "
+                "You either need to disable Z3_BUILD_PYTHON_BINDINGS or enable Z3_BUILD_LIBZ3_SHARED")
+        endif()
+    else()
+        # Using pre-installed libz3 for Python bindings
+        message(STATUS "Building Python bindings with pre-installed libz3")
+    endif()
+    add_subdirectory(src/api/python)
+endif()
 set(Z3_FIRST_PACKAGE_INCLUDE_DIR "${PROJECT_BINARY_DIR}/src/api")
 set(Z3_SECOND_PACKAGE_INCLUDE_DIR "${PROJECT_SOURCE_DIR}/src/api")
 set(Z3_CXX_PACKAGE_INCLUDE_DIR "${PROJECT_SOURCE_DIR}/src/api/c++")
@@ -552,12 +665,15 @@ configure_file("${CMAKE_CURRENT_SOURCE_DIR}/z3.pc.cmake.in"
 # Create `Z3Config.cmake` and related files for install tree so clients can use
 # Z3 via CMake.
 ################################################################################
-install(EXPORT
-    Z3_EXPORTED_TARGETS
-    FILE "Z3Targets.cmake"
-    NAMESPACE z3::
-    DESTINATION "${CMAKE_INSTALL_Z3_CMAKE_PACKAGE_DIR}"
-)
+# Only install targets if we built libz3
+if (Z3_BUILD_LIBZ3_CORE)
+    install(EXPORT
+        Z3_EXPORTED_TARGETS
+        FILE "Z3Targets.cmake"
+        NAMESPACE z3::
+        DESTINATION "${CMAKE_INSTALL_Z3_CMAKE_PACKAGE_DIR}"
+    )
+endif()
 set(Z3_INSTALL_TREE_CMAKE_CONFIG_FILE "${PROJECT_BINARY_DIR}/cmake/Z3Config.cmake")
 set(Z3_FIRST_PACKAGE_INCLUDE_DIR "${CMAKE_INSTALL_INCLUDEDIR}")
 set(Z3_SECOND_INCLUDE_DIR "")
```
@@ -1,6 +1,6 @@
module(
    name = "z3",
    version = "4.17.0",  # TODO: Read from VERSION.txt - currently manual sync required
    bazel_compatibility = [">=7.0.0"],
)
@@ -365,6 +365,35 @@ build type when invoking ``cmake`` by passing ``-DCMAKE_BUILD_TYPE=<build_type>``
For multi-configuration generators (e.g. Visual Studio) you don't set the build type
when invoking CMake and instead set the build type within Visual Studio itself.
## MSVC Security Features
When building with Microsoft Visual C++ (MSVC), Z3 automatically enables several security features by default:
### Control Flow Guard (CFG)
- **CMake Option**: `Z3_ENABLE_CFG` - Defaults to `ON` for MSVC builds
- **Compiler flag**: `/guard:cf` - Automatically enabled when `Z3_ENABLE_CFG=ON`
- **Linker flag**: `/GUARD:CF` - Automatically enabled when `Z3_ENABLE_CFG=ON`
- **Purpose**: Control Flow Guard analyzes control flow for indirect call targets at compile time and inserts runtime verification code to detect attempts to compromise your code by redirecting control flow to attacker-controlled locations
- **Note**: Automatically enables `/DYNAMICBASE` as required by `/GUARD:CF`
### Address Space Layout Randomization (ASLR)
- **Linker flag**: `/DYNAMICBASE` - Enabled when Control Flow Guard is active
- **Purpose**: Randomizes memory layout to make exploitation more difficult
- **Note**: Required for Control Flow Guard to function properly
### Incompatibilities
Control Flow Guard is incompatible with:
- `/ZI` (Edit and Continue debug information format)
- `/clr` (Common Language Runtime compilation)
When these incompatible options are detected, Control Flow Guard will be automatically disabled with a warning message.
### Disabling Control Flow Guard
To disable Control Flow Guard, set the CMake option:
```bash
cmake -DZ3_ENABLE_CFG=OFF ../
```
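The interaction between Control Flow Guard and the incompatible options can be sketched as a small shell script. This is illustrative only; the flag names `/guard:cf`, `/GUARD:CF`, `/DYNAMICBASE`, `/ZI`, and `/clr` come from the text above, while the variable names and the `/O2 /ZI` flag set are hypothetical:

```bash
# Illustrative sketch only -- the real logic lives in Z3's CMake files.
# CFG is requested by default for MSVC, but must be dropped when an
# incompatible flag such as /ZI or /clr is present.
ENABLE_CFG=ON
CXXFLAGS="/O2 /ZI"    # hypothetical flags for this demonstration

case " $CXXFLAGS " in
  *" /ZI "*|*" /clr "*)
    echo "warning: Control Flow Guard disabled (incompatible with /ZI or /clr)"
    ENABLE_CFG=OFF
    ;;
esac

if [ "$ENABLE_CFG" = ON ]; then
  echo "compile with /guard:cf, link with /GUARD:CF /DYNAMICBASE"
fi
echo "Z3_ENABLE_CFG=$ENABLE_CFG"
```

With `/ZI` present, the sketch reports CFG as disabled, mirroring the warning behavior described above.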
## Useful options
The following useful options can be passed to CMake whilst configuring.
@@ -381,9 +410,10 @@ The following useful options can be passed to CMake whilst configuring.
* ``Python3_EXECUTABLE`` - STRING. The python executable to use during the build.
* ``Z3_ENABLE_TRACING_FOR_NON_DEBUG`` - BOOL. If set to ``TRUE`` enable tracing in non-debug builds, if set to ``FALSE`` disable tracing in non-debug builds. Note in debug builds tracing is always enabled.
* ``Z3_BUILD_LIBZ3_SHARED`` - BOOL. If set to ``TRUE`` build libz3 as a shared library otherwise build as a static library.
* ``Z3_BUILD_LIBZ3_CORE`` - BOOL. If set to ``TRUE`` (default) build the core libz3 library. If set to ``FALSE``, skip building libz3 and look for a pre-installed library instead. This is useful when building only Python bindings on top of an already-installed libz3.
* ``Z3_ENABLE_EXAMPLE_TARGETS`` - BOOL. If set to ``TRUE`` add the build targets for building the API examples.
* ``Z3_USE_LIB_GMP`` - BOOL. If set to ``TRUE`` use the GNU multiple precision library. If set to ``FALSE`` use an internal implementation.
* ``Z3_BUILD_PYTHON_BINDINGS`` - BOOL. If set to ``TRUE`` then Z3's python bindings will be built. When ``Z3_BUILD_LIBZ3_CORE`` is ``FALSE``, this will build only the Python bindings using a pre-installed libz3.
* ``Z3_INSTALL_PYTHON_BINDINGS`` - BOOL. If set to ``TRUE`` and ``Z3_BUILD_PYTHON_BINDINGS`` is ``TRUE`` then running the ``install`` target will install Z3's Python bindings.
* ``Z3_BUILD_DOTNET_BINDINGS`` - BOOL. If set to ``TRUE`` then Z3's .NET bindings will be built.
* ``Z3_INSTALL_DOTNET_BINDINGS`` - BOOL. If set to ``TRUE`` and ``Z3_BUILD_DOTNET_BINDINGS`` is ``TRUE`` then running the ``install`` target will install Z3's .NET bindings.
@@ -393,6 +423,7 @@ The following useful options can be passed to CMake whilst configuring.
* ``Z3_INSTALL_JAVA_BINDINGS`` - BOOL. If set to ``TRUE`` and ``Z3_BUILD_JAVA_BINDINGS`` is ``TRUE`` then running the ``install`` target will install Z3's Java bindings.
* ``Z3_JAVA_JAR_INSTALLDIR`` - STRING. The path to the directory to install the Z3 Java ``.jar`` file. This path should be relative to ``CMAKE_INSTALL_PREFIX``.
* ``Z3_JAVA_JNI_LIB_INSTALLDIR`` - STRING. The path to the directory to install the Z3 Java JNI bridge library. This path should be relative to ``CMAKE_INSTALL_PREFIX``.
* ``Z3_BUILD_GO_BINDINGS`` - BOOL. If set to ``TRUE`` then Z3's Go bindings will be built. Requires Go 1.20+ and ``Z3_BUILD_LIBZ3_SHARED=ON``.
* ``Z3_BUILD_OCAML_BINDINGS`` - BOOL. If set to ``TRUE`` then Z3's OCaml bindings will be built.
* ``Z3_BUILD_JULIA_BINDINGS`` - BOOL. If set to ``TRUE`` then Z3's Julia bindings will be built.
* ``Z3_INSTALL_JULIA_BINDINGS`` - BOOL. If set to ``TRUE`` and ``Z3_BUILD_JULIA_BINDINGS`` is ``TRUE`` then running the ``install`` target will install Z3's Julia bindings.
@@ -404,8 +435,11 @@ The following useful options can be passed to CMake whilst configuring.
* ``Z3_ALWAYS_BUILD_DOCS`` - BOOL. If set to ``TRUE`` and ``Z3_BUILD_DOCUMENTATION`` is ``TRUE`` then documentation for API bindings will always be built.
Disabling this is useful for faster incremental builds. The documentation can be manually built by invoking the ``api_docs`` target.
* ``Z3_LINK_TIME_OPTIMIZATION`` - BOOL. If set to ``TRUE`` link time optimization will be enabled.
* ``Z3_ENABLE_CFI`` - BOOL. If set to ``TRUE`` will enable Control Flow Integrity security checks. This is only supported by Clang and will
fail on other compilers. This requires Z3_LINK_TIME_OPTIMIZATION to also be enabled.
* ``Z3_ENABLE_CFG`` - BOOL. If set to ``TRUE`` will enable Control Flow Guard security checks. This is only supported by MSVC and will
fail on other compilers. This does not require link time optimization. Control Flow Guard is enabled by default for MSVC builds.
Note: Control Flow Guard is incompatible with ``/ZI`` (Edit and Continue debug information) and ``/clr`` (Common Language Runtime compilation).
* ``Z3_API_LOG_SYNC`` - BOOL. If set to ``TRUE`` will enable experimental API log sync feature.
* ``WARNINGS_AS_ERRORS`` - STRING. If set to ``ON`` compiler warnings will be treated as errors. If set to ``OFF`` compiler warnings will not be treated as errors.
If set to ``SERIOUS_ONLY`` a subset of compiler warnings will be treated as errors.
@@ -432,6 +466,49 @@ cmake -DCMAKE_BUILD_TYPE=Release -DZ3_ENABLE_TRACING_FOR_NON_DEBUG=FALSE ../
Z3 exposes various language bindings for its API. Below are some notes on building
and/or installing these bindings when building Z3 with CMake.
### Python bindings
#### Building Python bindings with libz3
The default behavior when ``Z3_BUILD_PYTHON_BINDINGS=ON`` is to build both the libz3 library
and the Python bindings together:
```
mkdir build
cd build
cmake -DZ3_BUILD_PYTHON_BINDINGS=ON -DZ3_BUILD_LIBZ3_SHARED=ON ../
make
```
#### Building only Python bindings (using pre-installed libz3)
For package managers like conda-forge that want to avoid rebuilding libz3 for each Python version,
you can build only the Python bindings by setting ``Z3_BUILD_LIBZ3_CORE=OFF``. This assumes
libz3 is already installed on your system:
```
# First, build and install libz3 (once)
mkdir build-libz3
cd build-libz3
cmake -DZ3_BUILD_LIBZ3_SHARED=ON -DCMAKE_INSTALL_PREFIX=/path/to/prefix ../
make
make install
# Then, build Python bindings for each Python version (quickly, without rebuilding libz3)
cd ..
mkdir build-py310
cd build-py310
cmake -DZ3_BUILD_LIBZ3_CORE=OFF \
-DZ3_BUILD_PYTHON_BINDINGS=ON \
-DCMAKE_INSTALL_PREFIX=/path/to/prefix \
-DPython3_EXECUTABLE=/path/to/python3.10 ../
make
make install
```
This approach significantly reduces build time when packaging for multiple Python versions,
as the expensive libz3 compilation happens only once.
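For several interpreters, the second step can be scripted in a loop. A minimal sketch follows; the prefix path and version list are placeholders, and `echo` stands in for the real `cmake`/`make` invocations (shown in comments):

```bash
PREFIX=/path/to/prefix            # placeholder install prefix
for PY in python3.10 python3.11 python3.12; do
  BUILD_DIR="build-${PY#python}"  # e.g. build-3.10
  echo "configuring $BUILD_DIR with Z3_BUILD_LIBZ3_CORE=OFF for $PY"
  # The real invocation would be:
  #   cmake -S . -B "$BUILD_DIR" -DZ3_BUILD_LIBZ3_CORE=OFF \
  #         -DZ3_BUILD_PYTHON_BINDINGS=ON \
  #         -DCMAKE_INSTALL_PREFIX="$PREFIX" \
  #         -DPython3_EXECUTABLE="$(command -v $PY)"
  #   cmake --build "$BUILD_DIR" && cmake --install "$BUILD_DIR"
done
```

Each iteration reuses the single installed libz3 and produces only the version-specific Python artifacts.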
### Java bindings
The CMake build uses the ``FindJava`` and ``FindJNI`` cmake modules to detect the
@@ -447,6 +524,41 @@ where ``VERSION`` is the Z3 version. Under non Windows systems a
symbolic link named ``com.microsoft.z3.jar`` is provided. This symbolic
link is not created when building under Windows.
### Go bindings
Go bindings can be built by setting ``Z3_BUILD_GO_BINDINGS=ON``. The Go bindings use CGO to wrap
the Z3 C API, so you'll need:
* Go 1.20 or later installed on your system
* ``Z3_BUILD_LIBZ3_SHARED=ON`` (Go bindings require the shared library)
Example:
```
mkdir build
cd build
cmake -DZ3_BUILD_GO_BINDINGS=ON -DZ3_BUILD_LIBZ3_SHARED=ON ../
make
```
If CMake detects a Go installation (via ``go`` executable in PATH), it will create two optional targets:
* ``go-bindings`` - Builds the Go bindings
* ``test-go-examples`` - Runs the Go examples
Note that the Go bindings are installed as source files (not compiled) since Go packages are
typically distributed as source and compiled by the user's Go toolchain.
To use the installed Go bindings, set the appropriate CGO flags:
```
export CGO_CFLAGS="-I/path/to/z3/include"
export CGO_LDFLAGS="-L/path/to/z3/lib -lz3"
export LD_LIBRARY_PATH="/path/to/z3/lib:$LD_LIBRARY_PATH" # Linux/macOS
```
For detailed usage examples and API documentation, see ``src/api/go/README.md`` and ``examples/go/``.
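The three exports above can also be derived from a single installation prefix, which avoids repeating the path. A small sketch (the prefix path is a placeholder):

```bash
Z3_PREFIX=/path/to/z3             # placeholder: where libz3 was installed
export CGO_CFLAGS="-I${Z3_PREFIX}/include"
export CGO_LDFLAGS="-L${Z3_PREFIX}/lib -lz3"
# Append to LD_LIBRARY_PATH only if it is already set (Linux/macOS)
export LD_LIBRARY_PATH="${Z3_PREFIX}/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$CGO_CFLAGS"
echo "$CGO_LDFLAGS"
```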
## Developer/packager notes
These notes help developers and packagers of Z3.
@@ -17,9 +17,34 @@ See the [release notes](RELEASE_NOTES.md) for notes on various stable releases of
## Build status
### Pull Request & Push Workflows
| WASM Build | Windows Build | CI | OCaml Binding |
| ---------- | ------------- | -- | ------------- |
| [![WASM Build](https://github.com/Z3Prover/z3/actions/workflows/wasm.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/wasm.yml) | [![Windows](https://github.com/Z3Prover/z3/actions/workflows/Windows.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/Windows.yml) | [![CI](https://github.com/Z3Prover/z3/actions/workflows/ci.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/ci.yml) | [![OCaml Binding CI](https://github.com/Z3Prover/z3/actions/workflows/ocaml.yaml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/ocaml.yaml) |
### Scheduled Workflows
| Open Bugs | Android Build | Pyodide Build | Nightly Build | Cross Build |
| -----------|---------------|---------------|---------------|-------------|
| [![Open Issues](https://github.com/Z3Prover/z3/actions/workflows/wip.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/wip.yml) | [![Android Build](https://github.com/Z3Prover/z3/actions/workflows/android-build.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/android-build.yml) | [![Pyodide Build](https://github.com/Z3Prover/z3/actions/workflows/pyodide.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/pyodide.yml) | [![Nightly Build](https://github.com/Z3Prover/z3/actions/workflows/nightly.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/nightly.yml) | [![RISC V and PowerPC 64](https://github.com/Z3Prover/z3/actions/workflows/cross-build.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/cross-build.yml) |
| MSVC Static | MSVC Clang-CL | Build Z3 Cache |
|-------------|---------------|----------------|
| [![MSVC Static Build](https://github.com/Z3Prover/z3/actions/workflows/msvc-static-build.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/msvc-static-build.yml) | [![MSVC Clang-CL Static Build](https://github.com/Z3Prover/z3/actions/workflows/msvc-static-build-clang-cl.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/msvc-static-build-clang-cl.yml) | [![Build and Cache Z3](https://github.com/Z3Prover/z3/actions/workflows/build-z3-cache.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/build-z3-cache.yml) |
### Manual & Release Workflows
| Documentation | Release Build | WASM Release | NuGet Build |
|---------------|---------------|--------------|-------------|
| [![Documentation](https://github.com/Z3Prover/z3/actions/workflows/docs.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/docs.yml) | [![Release Build](https://github.com/Z3Prover/z3/actions/workflows/release.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/release.yml) | [![WebAssembly Publish](https://github.com/Z3Prover/z3/actions/workflows/wasm-release.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/wasm-release.yml) | [![Build NuGet Package](https://github.com/Z3Prover/z3/actions/workflows/nuget-build.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/nuget-build.yml) |
### Specialized Workflows
| Nightly Validation | Copilot Setup | Agentics Maintenance |
|--------------------|---------------|----------------------|
| [![Nightly Build Validation](https://github.com/Z3Prover/z3/actions/workflows/nightly-validation.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/nightly-validation.yml) | [![Copilot Setup Steps](https://github.com/Z3Prover/z3/actions/workflows/copilot-setup-steps.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/copilot-setup-steps.yml) | [![Agentics Maintenance](https://github.com/Z3Prover/z3/actions/workflows/agentics-maintenance.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/agentics-maintenance.yml) |
### Agentic Workflows
| A3 Python | API Coherence | Code Simplifier | Deeptest | Release Notes | Specbot | Workflow Suggestion |
| ----------|---------------|-----------------|----------|---------------|---------|---------------------|
| [![A3 Python Code Analysis](https://github.com/Z3Prover/z3/actions/workflows/a3-python.lock.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/a3-python.lock.yml) | [![API Coherence Checker](https://github.com/Z3Prover/z3/actions/workflows/api-coherence-checker.lock.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/api-coherence-checker.lock.yml) | [![Code Simplifier](https://github.com/Z3Prover/z3/actions/workflows/code-simplifier.lock.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/code-simplifier.lock.yml) | [![Deeptest](https://github.com/Z3Prover/z3/actions/workflows/deeptest.lock.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/deeptest.lock.yml) | [![Release Notes Updater](https://github.com/Z3Prover/z3/actions/workflows/release-notes-updater.lock.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/release-notes-updater.lock.yml) | [![Specbot](https://github.com/Z3Prover/z3/actions/workflows/specbot.lock.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/specbot.lock.yml) | [![Workflow Suggestion Agent](https://github.com/Z3Prover/z3/actions/workflows/workflow-suggestion-agent.lock.yml/badge.svg)](https://github.com/Z3Prover/z3/actions/workflows/workflow-suggestion-agent.lock.yml) |
[1]: #building-z3-on-windows-using-visual-studio-command-prompt
[2]: #building-z3-using-make-and-gccclang
@@ -49,7 +74,12 @@ cd build
nmake
```
Z3 uses C++20. The recommended version of Visual Studio is therefore VS2019 or later.
**Security Features (MSVC)**: When building with Visual Studio/MSVC, a couple of security features are enabled by default for Z3:
- Control Flow Guard (`/guard:cf`) - enabled by default to detect attempts to compromise your code by preventing calls to locations other than function entry points, making it more difficult for attackers to execute arbitrary code through control flow redirection
- Address Space Layout Randomization (`/DYNAMICBASE`) - enabled by default for memory layout randomization, required by the `/GUARD:CF` linker option
- These can be disabled using `python scripts/mk_make.py --no-guardcf` (Python build) or `cmake -DZ3_ENABLE_CFG=OFF` (CMake build) if needed
## Building Z3 using make and GCC/Clang
@@ -161,8 +191,18 @@ See [``examples/c++``](examples/c++) for examples.
Use the ``--java`` command line flag with ``mk_make.py`` to enable building these.
For IDE setup instructions (Eclipse, IntelliJ IDEA, Visual Studio Code) and troubleshooting, see the [Java IDE Setup Guide](doc/JAVA_IDE_SETUP.md).
See [``examples/java``](examples/java) for examples.
### ``Go``
Use the ``--go`` command line flag with ``mk_make.py`` to enable building these. Note that Go bindings use CGO and require a Go toolchain (Go 1.20 or later) to build.
With CMake, use the ``-DZ3_BUILD_GO_BINDINGS=ON`` option.
See [``examples/go``](examples/go) for examples and [``src/api/go/README.md``](src/api/go/README.md) for complete API documentation.
### ``OCaml``
Use the ``--ml`` command line flag with ``mk_make.py`` to enable building these.
@@ -225,6 +265,10 @@ A WebAssembly build with associated TypeScript typings is published on npm as [z
Project [MachineArithmetic](https://github.com/shingarov/MachineArithmetic) provides a Smalltalk interface
to Z3's C API. For more information, see [MachineArithmetic/README.md](https://github.com/shingarov/MachineArithmetic/blob/pure-z3/MachineArithmetic/README.md).
### AIX
[Build settings for AIX are described here.](https://github.com/Z3Prover/z3/pull/8113)
## System Overview
![System Diagram](https://github.com/Z3Prover/doc/blob/master/programmingz3/images/Z3Overall.jpg)
@@ -1,11 +1,118 @@
RELEASE NOTES
Version 4.17.0
==============
- A FiniteSets theory solver.
FiniteSets is a theory with a sort (FiniteSet S) for base sort S.
Inhabitants of (FiniteSet S) are finite sets of elements over S.
The main operations are creating empty sets, singleton sets, union, intersection, set difference, ranges of integers, subset modulo a predicate.
Constraints are: membership, subset.
The size of a set is obtained using set.size.
It is possible to map a function over elements of a set using set.map.
Support for set.range, set.map is partial.
Support for set.size exists, but is without any optimization. The source code contains comments on ways to make it more efficient. File a GitHub issue if you want to contribute.
- Add Python API convenience methods for improved usability. Thanks to Daniel Tang.
- Solver.solutions(t) method for finding all solutions to constraints, https://github.com/Z3Prover/z3/pull/8633
- ArithRef.__abs__ alias to integrate with Python's abs() builtin, https://github.com/Z3Prover/z3/pull/8623
- Improved error message in ModelRef.__getitem__ to suggest using eval(), https://github.com/Z3Prover/z3/pull/8626
- Documentation example for Solver.sexpr(), https://github.com/Z3Prover/z3/pull/8631
- Performance improvements by replacing unnecessary copy operations with std::move semantics for better efficiency.
Thanks to Nuno Lopes, https://github.com/Z3Prover/z3/pull/8583
- Fix spurious sort error with nested quantifiers in model finder. `Fixes #8563`
- NLSAT optimizations including improvements to handle_nullified_poly and levelwise algorithm. Thanks to Lev Nachmanson.
Version 4.16.0
==============
- Add Go bindings to supported APIs
Version 4.15.8
==============
- Fix release pipeline to publish all supported python wheels properly.
- Re-enable npm tokens for publishing npm packages.
Version 4.15.7
==============
- Bug fix release
Version 4.15.6
==============
- Optimize mpz (multi-precision integer) implementation using pointer tagging to reduce memory footprint and improve performance.
https://github.com/Z3Prover/z3/pull/8447, thanks to Nuno Lopes.
- Fix macOS install_name_tool issue by adding -Wl,-headerpad_max_install_names linker flag to all dylib builds. Resolves
"larger updated load commands do not fit" errors when modifying library install names on macOS.
https://github.com/Z3Prover/z3/pull/8535, `fixes #7623`
- Optimize parameter storage by storing rational values directly in variant instead of using pointers. Thanks to Nuno Lopes.
https://github.com/Z3Prover/z3/pull/8518
Version 4.15.5
==============
- NLSAT now uses the levelwise algorithm for projection. https://arxiv.org/abs/2212.09309
- Add RCF (Real Closed Field) API to TypeScript bindings, achieving feature parity with Python, Java, C++, and C# implementations.
The API includes 38 functions for exact real arithmetic with support for π, e, algebraic roots, and infinitesimals.
https://github.com/Z3Prover/z3/pull/8225
- Add sequence higher-order functions (map, fold) to Java, C#, and TypeScript APIs. Functions include SeqMap, SeqMapi, SeqFoldl, and SeqFoldli
for functional programming patterns over sequences.
- Java API: https://github.com/Z3Prover/z3/pull/8226
- C# API: https://github.com/Z3Prover/z3/pull/8227
- TypeScript API included in https://github.com/Z3Prover/z3/pull/8228
- Add benchmark export functionality to C# and TypeScript APIs for exporting solver problems as SMTLIB2 benchmarks.
https://github.com/Z3Prover/z3/pull/8228
- Fix UNKNOWN bug in search tree with inconsistent end state during nonchronological backjumping. The fix ensures all node closing
occurs in backtrack to maintain consistency between search tree and batch manager state. Thanks to Ilana Shapiro.
https://github.com/Z3Prover/z3/pull/8214
- Fix segmentation fault in dioph_eq.cpp when processing UFNIRA problems without explicit set-logic declarations. Added bounds checks
before accessing empty column vectors. https://github.com/Z3Prover/z3/pull/8218, fixes #8208
- Migrate build and release infrastructure from Azure Pipelines to GitHub Actions, including CI workflows, nightly builds, and release packaging.
- Bug fixes including #8195
- Add functional datatype update operation to language bindings. The `datatype_update_field` function enables immutable updates
to datatype fields, returning a modified copy while preserving the original datatype value.
https://github.com/Z3Prover/z3/pull/8500
- Add comprehensive regex support to TypeScript API with 21 functions including Re, Loop, Range, Union, Intersect, Complement,
and character class operations. Enables pattern matching and regular expression constraints in TypeScript applications.
https://github.com/Z3Prover/z3/pull/8499
- Add move constructor and move assignment operator to z3::context class for efficient resource transfer. Enables move semantics
for context objects while maintaining safety with explicit checks against moved-from usage.
https://github.com/Z3Prover/z3/pull/8508
- Add solve_for and import_model_converter functions to C++ solver API, achieving parity with Python API for LRA variable solving.
https://github.com/Z3Prover/z3/pull/8465
- Add missing solver APIs to Java and C# bindings including add_string, set_phase, get_units, get_non_units, and get_levels methods.
https://github.com/Z3Prover/z3/pull/8464
- Add polymorphic datatype APIs to Java and ML bindings for creating and manipulating parameterized datatypes.
https://github.com/Z3Prover/z3/pull/8438, https://github.com/Z3Prover/z3/pull/8378
https://github.com/Z3Prover/z3/pull/8507, https://github.com/Z3Prover/z3/pull/8467, https://github.com/Z3Prover/z3/pull/8494
- Add SLS (Stochastic Local Search) tactic as a separate worker thread for parallel solving. Thanks to Ilana Shapiro.
https://github.com/Z3Prover/z3/pull/8263
- Add Windows ARM64 platform support for Python wheels, expanding platform coverage for ARM-based Windows systems.
https://github.com/Z3Prover/z3/pull/8280
- Optimize bitvector operations for large bitwidths by avoiding unnecessary power-of-two computations in has_sign_bit and mod2k operations.
Thanks to Nuno Lopes.
- Optimize linear arithmetic solver with throttled patch_basic_columns() calls, especially beneficial for unsatisfiable cases.
Thanks to Lev Nachmanson.
- Fix memory leak in undo_fixed_column when handling big number cleanup. Thanks to Lev Nachmanson.
- Fix assertion violation in mpzzp_manager::eq from non-normalized values during fresh variable peeking.
https://github.com/Z3Prover/z3/pull/8439
- Fix memory corruption in Z3_polynomial_subresultants API where allocating result vector corrupted internal converter mappings.
Restructured to complete polynomial computation before allocation. https://github.com/Z3Prover/z3/pull/8264, thanks to Lev Nachmanson.
- Fix missing newline after attributes in benchmark_to_smtlib_string output formatting. Thanks to Josh Berdine.
https://github.com/Z3Prover/z3/pull/8276
- Fix NuGet packaging to handle dynamic glibc versions across different Linux distributions.
https://github.com/Z3Prover/z3/pull/8474
- Preserve initial solver state with push/pop operations for multiple objectives optimization. Thanks to Lev Nachmanson.
https://github.com/Z3Prover/z3/pull/8264
Version 4.15.4
==============
- Add methods to create polymorphic datatype constructors over the API. Previously, users had to manage
parametricity by generating instances themselves. The updated API allows working with polymorphic datatype declarations
directly.
- MSVC builds respect security flags by default, https://github.com/Z3Prover/z3/pull/7988
- A new algorithm for smt.threads=k, k > 1, based on a shared search tree. Thanks to Ilana Shapiro.
- Thanks for several pull requests improving usability, including
- https://github.com/Z3Prover/z3/pull/7955
- https://github.com/Z3Prover/z3/pull/7995
- https://github.com/Z3Prover/z3/pull/7947
Version 4.15.3
==============
a3/a3-python.md
@@ -0,0 +1,309 @@
---
on:
  schedule: weekly on sunday
  workflow_dispatch: # Allow manual trigger
permissions:
  contents: read
  issues: read
  pull-requests: read
network:
  allowed: [defaults, python]
safe-outputs:
  create-issue:
    labels:
      - bug
      - automated-analysis
      - a3-python
    title-prefix: "[a3-python] "
description: Analyzes Python code using a3-python tool to identify bugs and issues
name: A3 Python Code Analysis
strict: true
timeout-minutes: 45
tracker-id: a3-python-analysis
---
# A3 Python Code Analysis Agent
You are an expert Python code analyst using the a3-python tool to identify bugs and code quality issues. Your mission is to analyze the Python codebase, identify true positives from the analysis output, and create GitHub issues when multiple likely issues are found.
## Current Context
- **Repository**: ${{ github.repository }}
- **Workspace**: ${{ github.workspace }}
## Phase 1: Install and Setup a3-python
### 1.1 Install a3-python
Install the a3-python tool from PyPI:
```bash
pip install a3-python
```
Verify installation:
```bash
a3 --version || python -m a3 --version || echo "a3 command not found in PATH"
```
### 1.2 Check Available Commands
```bash
a3 --help || python -m a3 --help
```
## Phase 2: Run Analysis on Python Source Files
### 2.1 Identify Python Files
Discover Python source files in the repository:
```bash
# Check for Python files in common locations
find ${{ github.workspace }} -name "*.py" -type f | head -30
# Count total Python files
echo "Total Python files found: $(find ${{ github.workspace }} -name "*.py" -type f | wc -l)"
```
### 2.2 Run a3-python Analysis
Run the a3 scan command on the repository to analyze all Python files:
```bash
cd ${{ github.workspace }}
# Ensure PATH includes a3 command
export PATH="$PATH:/home/runner/.local/bin"
# Run a3 scan on the repository
if command -v a3 &> /dev/null; then
# Run with multiple options for comprehensive analysis
a3 scan . --verbose --dse-verify --deduplicate --consolidate-variants > a3-python-output.txt 2>&1 || \
a3 scan . --verbose --functions --dse-verify > a3-python-output.txt 2>&1 || \
a3 scan . --verbose > a3-python-output.txt 2>&1 || \
echo "a3 scan command failed with all variations" > a3-python-output.txt
elif python -m a3 --help &> /dev/null; then
python -m a3 scan . > a3-python-output.txt 2>&1 || \
echo "python -m a3 scan command failed" > a3-python-output.txt
else
echo "ERROR: a3-python tool not available" > a3-python-output.txt
fi
# Verify output was generated
ls -lh a3-python-output.txt
cat a3-python-output.txt
```
**Important**: The a3-python tool should analyze the Python files in the repository, which may include various Python modules and packages depending on the project structure.
## Phase 3: Post-Process and Analyze Results
### 3.1 Review the Output
Read and analyze the contents of `a3-python-output.txt`:
```bash
cat a3-python-output.txt
```
### 3.2 Classify Findings
For each issue reported in the output, determine:
1. **True Positives (Likely Issues)**: Real bugs or code quality problems that should be addressed
- Logic errors or bugs
- Security vulnerabilities
- Performance issues
- Code quality problems
- Broken imports or dependencies
- Type mismatches or incorrect usage
2. **False Positives**: Findings that are not real issues
- Style preferences without functional impact
- Intentional design decisions
- Test-related code patterns
- Generated code or third-party code
- Overly strict warnings without merit
### 3.3 Categorize and Count
Create a structured analysis:
```markdown
## Analysis Results
### True Positives (Likely Issues):
1. [Issue 1 Description] - File: path/to/file.py, Line: X
2. [Issue 2 Description] - File: path/to/file.py, Line: Y
...
### False Positives:
1. [FP 1 Description] - Reason for dismissal
2. [FP 2 Description] - Reason for dismissal
...
### Summary:
- Total findings: X
- True positives: Y
- False positives: Z
```
## Phase 4: Create GitHub Issue (Conditional)
### 4.1 Determine If Issue Creation Is Needed
Create a GitHub issue **ONLY IF**:
- ✅ There are **2 or more** true positives (likely issues)
- ✅ The issues are actionable and specific
- ✅ The analysis completed successfully
**Do NOT create an issue if**:
- ❌ Zero or one true positive found
- ❌ Only false positives detected
- ❌ Analysis failed to run
- ❌ Output file is empty or contains only errors
### 4.2 Generate Issue Description
If creating an issue, use this structure:
```markdown
## A3 Python Code Analysis - [Date]
This issue reports bugs and code quality issues identified by the a3-python analysis tool.
### Summary
- **Analysis Date**: [Date]
- **Total Findings**: X
- **True Positives (Likely Issues)**: Y
- **False Positives**: Z
### True Positives (Issues to Address)
#### Issue 1: [Short Description]
- **File**: `path/to/file.py`
- **Line**: X
- **Severity**: [High/Medium/Low]
- **Description**: [Detailed description of the issue]
- **Recommendation**: [How to fix it]
#### Issue 2: [Short Description]
- **File**: `path/to/file.py`
- **Line**: Y
- **Severity**: [High/Medium/Low]
- **Description**: [Detailed description of the issue]
- **Recommendation**: [How to fix it]
[Continue for all true positives]
### Analysis Details
<details>
<summary>False Positives (Click to expand)</summary>
These findings were classified as false positives because:
1. **[FP 1]**: [Reason for dismissal]
2. **[FP 2]**: [Reason for dismissal]
...
</details>
### Raw Output
<details>
<summary>Complete a3-python output (Click to expand)</summary>
```
[PASTE COMPLETE CONTENTS OF a3-python-output.txt HERE]
```
</details>
### Recommendations
1. Prioritize fixing high-severity issues first
2. Review medium-severity issues for improvement opportunities
3. Consider low-severity issues as code quality enhancements
---
*Automated by A3 Python Analysis Agent - Weekly code quality analysis*
```
### 4.3 Use Safe Outputs
Create the issue using the safe-outputs configuration:
- Title will be prefixed with `[a3-python]`
- Labeled with `bug`, `automated-analysis`, `a3-python`
- Contains structured analysis with actionable findings
## Important Guidelines
### Analysis Quality
- **Be thorough**: Review all findings carefully
- **Be accurate**: Distinguish real issues from false positives
- **Be specific**: Provide file names, line numbers, and descriptions
- **Be actionable**: Include recommendations for fixes
### Classification Criteria
**True Positives** should meet these criteria:
- The issue represents a real bug or problem
- It could impact functionality, security, or performance
- It's actionable with a clear fix
- It's in code owned by the repository (not third-party)
**False Positives** typically include:
- Style preferences without functional impact
- Intentional design decisions that are correct
- Test code patterns that look unusual but are valid
- Generated or vendored code
- Overly pedantic warnings
### Threshold for Issue Creation
- **2+ true positives**: Create an issue with all findings
- **1 true positive**: Do not create an issue (not enough to warrant it)
- **0 true positives**: Exit gracefully without creating an issue
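The threshold above can be sketched as a tiny helper. This is only an illustration; the findings structure and the function name are assumptions, not part of a3-python:

```python
# Sketch of the issue-creation threshold described above.
# The findings structure and function name are illustrative assumptions.

def should_create_issue(findings):
    """Create an issue only when two or more true positives were found."""
    true_positives = [f for f in findings if f["true_positive"]]
    return len(true_positives) >= 2

findings = [
    {"desc": "broken import in utils.py", "true_positive": True},
    {"desc": "style preference", "true_positive": False},
    {"desc": "type mismatch in parser.py", "true_positive": True},
]
print(should_create_issue(findings))  # two true positives, so an issue is warranted
```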
### Exit Conditions
Exit gracefully without creating an issue if:
- Analysis tool failed to run or install
- Python source files were not checked out (sparse checkout issue)
- No Python files found in repository
- Output file is empty or invalid
- Zero or one true positive identified
- All findings are false positives
### Success Metrics
A successful analysis:
- ✅ Completes without errors
- ✅ Generates comprehensive output
- ✅ Accurately classifies findings
- ✅ Creates actionable issue when appropriate
- ✅ Provides clear recommendations
## Output Requirements
Your output MUST either:
1. **If analysis fails or no findings**:
```
✅ A3 Python analysis completed.
No significant issues found - 0 or 1 true positive detected.
```
2. **If 2+ true positives found**: Create an issue with:
- Clear summary of findings
- Detailed breakdown of each true positive
- Severity classifications
- Actionable recommendations
- Complete raw output in collapsible section
Begin the analysis now. Install a3-python, run analysis on the repository, save output to a3-python-output.txt, post-process to identify true positives, and create a GitHub issue if 2 or more likely issues are found.
a3/a3-rust.md Normal file
@ -0,0 +1,202 @@
---
description: Analyzes a3-rust verifier artifacts to identify true positive bugs and reports findings in GitHub Discussions
on:
workflow_dispatch:
permissions:
contents: read
actions: read
strict: false
sandbox: true
network:
allowed:
- "*.blob.core.windows.net" # Azure blob storage for artifact downloads
tools:
github:
toolsets: [actions]
bash: true # Allow all bash commands
safe-outputs:
threat-detection: false # Disabled: gh-aw compiler bug - detection job needs contents:read for private repos
create-discussion:
category: general
max: 1
---
<!-- This prompt will be imported in the agentic workflow .github/workflows/a3-rust.md at runtime. -->
<!-- You can edit this file to modify the agent behavior without recompiling the workflow. -->
# A3-Rust Verifier Output Analyzer
You are an AI agent that analyzes a3-rust verifier output artifacts to identify and verify true positive bugs.
## Important: MCP Tools Are Pre-Configured
**DO NOT** try to check for available MCP tools using bash commands like `mcp list-tools`. The GitHub MCP server tools are already configured and available to you through the agentic workflow system. You should use them directly by calling the tool functions (e.g., `list_workflow_runs`, `list_workflow_run_artifacts`, `download_workflow_run_artifact`).
**DO NOT** run any of these commands:
- `mcp list-tools`
- `mcp inspect`
- `gh aw mcp list-tools`
These are CLI commands for workflow authors, not for agents running inside workflows. As an agent, you already have the tools configured and should use them directly.
## Your Task
### Step 1: Download and Extract the Artifact
Use the GitHub MCP server tools (actions toolset) — not bash/curl. For `owner` and `repo` parameters, extract them from `${{ github.repository }}` (format: `owner/repo`).
1. Call `list_workflow_runs` with `resource_id: a3-rust.yml`. Take the run ID from the first (most recent) result.
2. Call `list_workflow_run_artifacts` with `resource_id:` set to that run ID. Find the artifact named `a3-rust-output`.
3. Call `download_workflow_run_artifact` with `resource_id:` set to the artifact ID.
4. Extract the downloaded zip with `unzip` and read `tmp/verifier-output.txt`.
### Step 2: Parse Bug Reports
Identify all bug reports in the log file. Bug reports have this format:
```
✗ BUG FOUND in function: <function_name>
Bug type: <bug_description>
```
Examples:
```
✗ BUG FOUND in function: elf
Bug type: Integer overflow in add operation: _2 add _20 (type: u64, bounds: u64 [0, 9223372036854775807])
✗ BUG FOUND in function: stack
Bug type: Integer overflow in add operation: _2 add _4 (type: u64, bounds: u64 [0, 9223372036854775807])
```
For each bug report, extract:
- Function name
- Bug type (overflow, bounds, panic, etc.)
- Operation details
- File path and line number (if present in the log)
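As a sketch, this extraction step could be done with a small regex pass. The two-line report format is taken from the examples above; the regex and helper names are assumptions, not part of the verifier:

```python
import re

# Parse "BUG FOUND" reports from verifier output. The two-line format is
# taken from the examples above; the regex itself is an assumption.
REPORT = re.compile(
    r"BUG FOUND in function: (?P<function>\S+)\s*\n\s*Bug type: (?P<bug_type>.+)"
)

def parse_bug_reports(log_text):
    """Return a list of {'function': ..., 'bug_type': ...} dicts."""
    return [m.groupdict() for m in REPORT.finditer(log_text)]

log = """\
✗ BUG FOUND in function: elf
  Bug type: Integer overflow in add operation: _2 add _20 (type: u64)
✗ BUG FOUND in function: stack
  Bug type: Integer overflow in add operation: _2 add _4 (type: u64)
"""
for report in parse_bug_reports(log):
    print(report["function"], "->", report["bug_type"])
```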
### Step 3: Review Each Bug Report
For each identified bug:
1. **Locate the source code**:
- Use the function name and any file/line information to find the relevant code
- Search the codebase using grep if needed to locate the function
- Read the source file to understand the context
2. **Analyze the code**:
- Understand what the function does
- Check the bounds and constraints on the operation
- Look for existing validation, assertions, or safety checks
- Consider the calling context and input constraints
- Check for any safety comments explaining why operations are safe
3. **Determine true vs false positive**:
- **True Positive**: The bug is real and could cause:
- Integer overflow/underflow in normal execution
- Out-of-bounds memory access
- Panic or unwrap failures without proper error handling
- Division by zero
- Security vulnerabilities
- **False Positive**: The bug report is incorrect because:
- Input validation prevents the problematic values
- Type system or compiler guarantees safety
- Bounds checks exist in the code path
- The overflow is intentional and documented (e.g., wrapping arithmetic)
- The operation is unreachable or guarded by conditions
### Step 4: Create GitHub Discussion
Create a comprehensive GitHub Discussion summarizing the findings:
**Discussion Title**: `A3-Rust Verifier Analysis - [Date]`
**Discussion Body** (use GitHub-flavored markdown):
```markdown
# A3-Rust Verifier Analysis Report
**Workflow Run**: [Link to a3-rust.yml run]
**Analysis Date**: [Current date]
**Analyzed Artifact**: a3-rust-output (from verifier-output.txt)
## Executive Summary
- Total bugs reported: X
- True positives: Y
- False positives: Z
## 🔴 True Positives (Bugs to Fix)
For each true positive, include:
### [Bug Type] in `function_name` ([file:line])
**Bug Description**: [Explain the bug in plain language]
**Code Location**:
```rust
[Relevant code snippet]
```
**Why This Is a Bug**:
[Clear explanation of why this is a genuine security or correctness issue]
**Recommended Fix**:
[Specific suggestion for how to fix it]
---
## 🟢 False Positives (No Action Needed)
<details>
<summary><b>Show False Positives</b></summary>
For each false positive, briefly explain:
- Function name and bug type
- Why it's a false positive (validation exists, safe by design, etc.)
</details>
## Next Steps
1. Review and prioritize the true positive findings
2. Create issues for each critical bug
3. Implement fixes with proper testing
4. Re-run a3-rust verifier to confirm fixes
## Methodology
This analysis was performed by:
1. Downloading the most recent a3-rust.yml artifact
2. Parsing all bug reports from verifier-output.txt
3. Reviewing source code for each reported bug
4. Classifying bugs as true or false positives based on code analysis
```
## Guidelines
- **Be thorough**: Review every bug report in the log file
- **Be accurate**: Don't dismiss bugs without careful code review
- **Be clear**: Explain your reasoning for each classification
- **Be factual**: Don't add subjective labels to bugs such as _critical_. This is up to the developer to decide
- **Prioritize security**: Integer overflows in security-critical code take priority, though an overflow is not necessarily serious on its own
- **Context matters**: Consider the purpose and domain of the codebase being analyzed
- **Use evidence**: Quote relevant code when explaining your decisions
- **Format properly**: Use GitHub-flavored markdown with proper headers, code blocks, and progressive disclosure
- **Link back**: Include a link to the workflow run that generated the artifact
## Important Notes
- The a3-rust verifier uses static analysis and may have false positives
- When in doubt, classify as a true positive and let maintainers decide
- Focus on actionable findings rather than theoretical edge cases
- Use file paths and line numbers to help maintainers locate issues quickly
- If the artifact is missing or empty, clearly report this in the discussion
## Artifact Contents
The `a3-rust-output` zip contains:
- `tmp/verifier-output.txt` - Main verifier output **(analyze this)**
- `tmp/build-output.txt` - Build log (optional reference)
- `tmp/mir_files/*.mir` - MIR files (optional reference)
- `tmp/mir_errors/*.err` - MIR error logs (optional reference)
@ -1,306 +0,0 @@
variables:
cmakeJulia: '-DZ3_BUILD_JULIA_BINDINGS=True'
cmakeJava: '-DZ3_BUILD_JAVA_BINDINGS=True'
cmakeNet: '-DZ3_BUILD_DOTNET_BINDINGS=True'
cmakePy: '-DZ3_BUILD_PYTHON_BINDINGS=True'
cmakeStdArgs: '-DZ3_BUILD_DOTNET_BINDINGS=True -DZ3_BUILD_JAVA_BINDINGS=True -DZ3_BUILD_PYTHON_BINDINGS=True -G "Ninja" ../'
asanEnv: 'CXXFLAGS="${CXXFLAGS} -fsanitize=address -fno-omit-frame-pointer" CFLAGS="${CFLAGS} -fsanitize=address -fno-omit-frame-pointer"'
ubsanEnv: 'CXXFLAGS="${CXXFLAGS} -fsanitize=undefined" CFLAGS="${CFLAGS} -fsanitize=undefined"'
msanEnv: 'CC=clang LDFLAGS="-L../libcxx/libcxx_msan/lib -lc++abi -Wl,-rpath=../libcxx/libcxx_msan/lib" CXX=clang++ CXXFLAGS="${CXXFLAGS} -stdlib=libc++ -fsanitize-memory-track-origins -fsanitize=memory -fPIE -fno-omit-frame-pointer -g -O2" CFLAGS="${CFLAGS} -stdlib=libc -fsanitize=memory -fsanitize-memory-track-origins -fno-omit-frame-pointer -g -O2"'
# TBD:
# test python bindings
# build documentation
# Asan, ubsan, msan
# Disabled pending clang dependencies for std::unordered_map
jobs:
- job: "LinuxPythonDebug"
displayName: "Ubuntu build - python make - debug"
timeoutInMinutes: 90
pool:
vmImage: "ubuntu-latest"
strategy:
matrix:
MT:
cmdLine: 'python scripts/mk_make.py -d --java --dotnet'
runRegressions: 'True'
ST:
cmdLine: './configure --single-threaded'
runRegressions: 'False'
steps:
- script: $(cmdLine)
- script: |
set -e
cd build
make -j3
make -j3 examples
make -j3 test-z3
cd ..
- template: scripts/test-z3.yml
- ${{if eq(variables['runRegressions'], 'True')}}:
- template: scripts/test-regressions.yml
- job: "ManylinuxPythonBuildAmd64"
displayName: "Python bindings (manylinux Centos AMD64) build"
timeoutInMinutes: 90
pool:
vmImage: "ubuntu-latest"
container: "quay.io/pypa/manylinux2014_x86_64:latest"
steps:
- script: "/opt/python/cp38-cp38/bin/python -m venv $PWD/env"
- script: 'echo "##vso[task.prependpath]$PWD/env/bin"'
- script: "pip install build git+https://github.com/rhelmot/auditwheel"
- script: "cd src/api/python && python -m build && AUDITWHEEL_PLAT= auditwheel repair --best-plat dist/*.whl && cd ../../.."
- script: "pip install ./src/api/python/wheelhouse/*.whl && python - <src/api/python/z3test.py z3 && python - <src/api/python/z3test.py z3num"
- job: ManyLinuxPythonBuildArm64
timeoutInMinutes: 90
displayName: "Python bindings (manylinux Centos ARM64 cross) build"
variables:
name: ManyLinux
python: "/opt/python/cp37-cp37m/bin/python"
pool:
vmImage: "ubuntu-latest"
container: "quay.io/pypa/manylinux2014_x86_64:latest"
steps:
- script: curl -L -o /tmp/arm-toolchain.tar.xz 'https://developer.arm.com/-/media/Files/downloads/gnu/11.2-2022.02/binrel/gcc-arm-11.2-2022.02-x86_64-aarch64-none-linux-gnu.tar.xz?rev=33c6e30e5ac64e6dba8f0431f2c35f1b&hash=9918A05BF47621B632C7A5C8D2BB438FB80A4480'
- script: mkdir -p /tmp/arm-toolchain/
- script: tar xf /tmp/arm-toolchain.tar.xz -C /tmp/arm-toolchain/ --strip-components=1
- script: "/opt/python/cp38-cp38/bin/python -m venv $PWD/env"
- script: 'echo "##vso[task.prependpath]$PWD/env/bin"'
- script: echo '##vso[task.prependpath]/tmp/arm-toolchain/bin'
- script: echo '##vso[task.prependpath]/tmp/arm-toolchain/aarch64-none-linux-gnu/libc/usr/bin'
- script: echo $PATH
- script: "stat `which aarch64-none-linux-gnu-gcc`"
- script: "pip install build git+https://github.com/rhelmot/auditwheel"
- script: "cd src/api/python && CC=aarch64-none-linux-gnu-gcc CXX=aarch64-none-linux-gnu-g++ AR=aarch64-none-linux-gnu-ar LD=aarch64-none-linux-gnu-ld python -m build && AUDITWHEEL_PLAT= auditwheel repair --best-plat dist/*.whl && cd ../../.."
- job: "UbuntuOCaml"
displayName: "Ubuntu with OCaml"
timeoutInMinutes: 90
pool:
vmImage: "Ubuntu-latest"
steps:
- script: sudo apt-get install ocaml opam libgmp-dev
- script: opam init -y
- script: eval `opam config env`; opam install zarith ocamlfind -y
- script: eval `opam config env`; python scripts/mk_make.py --ml
- script: |
set -e
cd build
eval `opam config env`
make -j3
make -j3 examples
make -j3 test-z3
cd ..
- script: eval `opam config env`; ocamlfind install z3 build/api/ml/* -dll build/libz3.*
- template: scripts/test-z3.yml
- template: scripts/test-regressions.yml
- template: scripts/generate-doc.yml
- job: "UbuntuOCamlStatic"
displayName: "Ubuntu with OCaml on z3-static"
timeoutInMinutes: 90
pool:
vmImage: "Ubuntu-latest"
steps:
- script: sudo apt-get install ocaml opam libgmp-dev
- script: opam init -y
- script: eval `opam config env`; opam install zarith ocamlfind -y
- script: eval `opam config env`; python scripts/mk_make.py --ml --staticlib
- script: |
set -e
cd build
eval `opam config env`
make -j3
make -j3 examples
make -j3 test-z3
cd ..
- script: eval `opam config env`; ocamlfind install z3-static build/api/ml/* build/libz3-static.a
- script: |
set -e
cd build
eval `opam config env`
make -j3
make -j3 _ex_ml_example_post_install
./ml_example_static.byte
./ml_example_static_custom.byte
./ml_example_static
cd ..
- template: scripts/test-z3.yml
- template: scripts/test-regressions.yml
- template: scripts/generate-doc.yml
- job: "LinuxMSan"
displayName: "Ubuntu build - cmake"
timeoutInMinutes: 90
condition: eq(0,1)
pool:
vmImage: "ubuntu-latest"
strategy:
matrix:
msanClang:
cmdLine: '$(msanEnv) cmake $(cmakeStdArgs)'
runUnitTest: 'True'
runExample: 'False' # Examples don't seem to build with MSAN
steps:
- script: sudo apt-get install ninja-build libc++-dev libc++abi-dev
- script: ./scripts/build_libcxx_msan.sh
- script: |
set -e
mkdir build
cd build
$(cmdLine)
ninja
ninja test-z3
cd ..
- script: |
cd build
export MSAN_SYMBOLIZER_PATH=/usr/lib/llvm-6.0/bin/llvm-symbolizer
./test-z3 -a
cd ..
condition: eq(variables['runUnitTest'], 'True')
- ${{if eq(variables['runExample'], 'True')}}:
- template: scripts/test-examples-cmake.yml
# - template: scripts/test-jupyter.yml
# - template: scripts/test-java-cmake.yml
# - template: scripts/test-regressions.yml
- job: "UbuntuCMake"
displayName: "Ubuntu build - cmake"
timeoutInMinutes: 90
pool:
vmImage: "ubuntu-latest"
strategy:
matrix:
releaseClang:
setupCmd1: ''
setupCmd2: ''
buildCmd: 'CC=clang CXX=clang++ cmake -DCMAKE_BUILD_TYPE=Release $(cmakeStdArgs)'
runTests: 'True'
debugClang:
setupCmd1: 'julia -e "using Pkg; Pkg.add(PackageSpec(name=\"libcxxwrap_julia_jll\"))"'
setupCmd2: 'JlCxxDir=$(julia -e "using libcxxwrap_julia_jll; print(dirname(libcxxwrap_julia_jll.libcxxwrap_julia_path))")'
buildCmd: 'CC=clang CXX=clang++ cmake -DJlCxx_DIR=$JlCxxDir/cmake/JlCxx $(cmakeJulia) $(cmakeStdArgs)'
runTests: 'True'
debugGcc:
setupCmd1: ''
setupCmd2: ''
buildCmd: 'CC=gcc CXX=g++ cmake $(cmakeStdArgs)'
runTests: 'True'
releaseSTGcc:
setupCmd1: ''
setupCmd2: ''
buildCmd: 'CC=gcc CXX=g++ cmake -DCMAKE_BUILD_TYPE=Release -DZ3_SINGLE_THREADED=ON $(cmakeStdArgs)'
runTests: 'True'
steps:
- script: sudo apt-get install ninja-build
- script: |
set -e
mkdir build
cd build
$(setupCmd1)
$(setupCmd2)
$(buildCmd)
ninja
ninja test-z3
cd ..
- script: |
cd build
./test-z3 -a
cd ..
condition: eq(variables['runTests'], 'True')
- ${{if eq(variables['runTests'], 'True')}}:
- template: scripts/test-examples-cmake.yml
# - template: scripts/test-jupyter.yml
# - template: scripts/test-java-cmake.yml
- ${{if eq(variables['runTests'], 'True')}}:
- template: scripts/test-regressions.yml
- job: "MacOSPython"
displayName: "MacOS build"
timeoutInMinutes: 90
pool:
vmImage: "macOS-latest"
steps:
- script: python scripts/mk_make.py -d --java --dotnet
- script: |
set -e
cd build
make -j3
make -j3 examples
make -j3 test-z3
./cpp_example
./c_example
# java -cp api/java/classes; JavaExample
cd ..
# Skip as dead-slow in debug mode:
# - template: scripts/test-z3.yml
- template: scripts/test-regressions.yml
- job: "MacOSCMake"
displayName: "MacOS build with CMake"
timeoutInMinutes: 90
pool:
vmImage: "macOS-latest"
steps:
- script: brew install ninja
- script: brew install --cask julia
- script: |
julia -e "using Pkg; Pkg.add(PackageSpec(name=\"libcxxwrap_julia_jll\"))"
JlCxxDir=$(julia -e "using libcxxwrap_julia_jll; println(joinpath(dirname(libcxxwrap_julia_jll.libcxxwrap_julia_path), \"cmake\", \"JlCxx\"))")
set -e
mkdir build
cd build
cmake -DJlCxx_DIR=$JlCxxDir $(cmakeJulia) $(cmakeJava) $(cmakePy) -DZ3_BUILD_DOTNET_BINDINGS=False -G "Ninja" ../
ninja
ninja test-z3
cd ..
- template: scripts/test-z3.yml
# - template: scripts/test-examples-cmake.yml
- template: scripts/test-regressions.yml
# - template: scripts/test-java-cmake.yml
- job: "MacOSOCaml"
displayName: "MacOS build with OCaml"
timeoutInMinutes: 90
condition: eq(0,1)
pool:
vmImage: "macOS-latest"
steps:
- script: brew install opam
- script: opam init -y
- script: eval `opam config env`; opam install zarith ocamlfind -y
- script: eval `opam config env`; python scripts/mk_make.py --ml
- script: |
set -e
cd build
eval `opam config env`
make -j3
make -j3 examples
make -j3 test-z3
cd ..
- script: eval `opam config env`; ocamlfind install z3 build/api/ml/* -dll build/libz3.*
- script: |
set -e
cd build
eval `opam config env`
make -j3
make -j3 _ex_ml_example_post_install
# ./ml_example_shared.byte
# ./ml_example_shared_custom.byte
# ./ml_example_shared
cd ..
# Skip as dead-slow in debug mode:
# - template: scripts/test-z3.yml
- template: scripts/test-regressions.yml
build_z3.bat Normal file
@ -0,0 +1,30 @@
@echo off
REM Z3 Build Script
echo Checking for build directory...
if not exist C:\z3\build (
echo Creating build directory...
mkdir C:\z3\build
) else (
echo Build directory already exists
)
echo Changing to build directory...
cd /d C:\z3\build
echo Running CMake configuration...
cmake ..
if errorlevel 1 (
echo CMake configuration failed!
exit /b 1
)
echo Building Z3 with parallel 8...
cmake --build . --parallel 8
if errorlevel 1 (
echo Build failed!
exit /b 1
)
echo Build completed successfully!
exit /b 0
@ -6,7 +6,13 @@ set(GCC_AND_CLANG_WARNINGS
"-Wall"
)
set(GCC_ONLY_WARNINGS "")
# Disable C++98 compatibility warnings to prevent excessive warning output
# when building with clang-cl or when -Weverything is enabled.
# These warnings are not useful for Z3 since it requires C++20.
set(CLANG_ONLY_WARNINGS
"-Wno-c++98-compat"
"-Wno-c++98-compat-pedantic"
)
set(MSVC_WARNINGS "/W3")
################################################################################
@ -109,9 +109,9 @@
# [XML_INJECT xml_injection])
# ```
#
# Require 3.10 for batch copy multiple files
cmake_minimum_required(VERSION 3.10.0)
IF(DOTNET_FOUND)
RETURN()
@ -9,6 +9,7 @@ set(MK_API_DOC_SCRIPT "${CMAKE_CURRENT_SOURCE_DIR}/mk_api_doc.py")
set(PYTHON_API_OPTIONS "")
set(DOTNET_API_OPTIONS "")
set(JAVA_API_OPTIONS "")
set(GO_API_OPTIONS "")
SET(DOC_EXTRA_DEPENDS "")
if (Z3_BUILD_PYTHON_BINDINGS)
@ -41,6 +42,15 @@ else()
list(APPEND JAVA_API_OPTIONS "--no-java")
endif()
if (Z3_BUILD_GO_BINDINGS)
list(APPEND GO_API_OPTIONS "--go")
list(APPEND GO_API_OPTIONS "--go-search-paths"
"${PROJECT_SOURCE_DIR}/src/api/go"
)
else()
# Go bindings don't have a --no-go option, just omit --go
endif()
option(Z3_ALWAYS_BUILD_DOCS "Always build documentation for API bindings" ON)
if (Z3_ALWAYS_BUILD_DOCS)
set(ALWAYS_BUILD_DOCS_ARG "ALL")
@ -66,12 +76,26 @@ add_custom_target(api_docs ${ALWAYS_BUILD_DOCS_ARG}
${PYTHON_API_OPTIONS}
${DOTNET_API_OPTIONS}
${JAVA_API_OPTIONS}
${GO_API_OPTIONS}
DEPENDS
${DOC_EXTRA_DEPENDS}
COMMENT "Generating documentation"
USES_TERMINAL
)
# Add separate target for Go documentation
if (Z3_BUILD_GO_BINDINGS)
set(MK_GO_DOC_SCRIPT "${CMAKE_CURRENT_SOURCE_DIR}/mk_go_doc.py")
add_custom_target(go_docs
COMMAND
"${Python3_EXECUTABLE}" "${MK_GO_DOC_SCRIPT}"
--output-dir "${DOC_DEST_DIR}/html/go"
--go-api-path "${PROJECT_SOURCE_DIR}/src/api/go"
COMMENT "Generating Go API documentation"
USES_TERMINAL
)
endif()
# Remove generated documentation when running `clean` target.
set_property(DIRECTORY APPEND PROPERTY
ADDITIONAL_MAKE_CLEAN_FILES
doc/JAVA_IDE_SETUP.md Normal file
@ -0,0 +1,315 @@
# Z3 Java Bindings: IDE Setup Guide
This guide explains how to set up and use Z3 Java bindings in popular Java IDEs (Eclipse, IntelliJ IDEA, and Visual Studio Code).
## Prerequisites
Before setting up Z3 in your IDE, you need to obtain the Z3 binaries:
### Option 1: Download Pre-built Binaries (Recommended)
1. Download the appropriate Z3 release for your platform from the [Z3 Releases page](https://github.com/Z3Prover/z3/releases)
- For Windows: `z3-x.x.x-x64-win.zip`
- For Linux: `z3-x.x.x-x64-glibc-x.x.zip`
- For macOS: `z3-x.x.x-x64-osx-x.x.zip`
2. Extract the archive to a location on your system (e.g., `C:\z3` on Windows or `/opt/z3` on Linux/macOS)
3. The extracted folder contains:
- `bin/com.microsoft.z3.jar` - The Java API library
- `bin/libz3.dll` (Windows) / `bin/libz3.so` (Linux) / `bin/libz3.dylib` (macOS) - The native Z3 library
- `bin/libz3java.dll` (Windows) / `bin/libz3java.so` (Linux) / `bin/libz3java.dylib` (macOS) - The Java JNI bridge
### Option 2: Build from Source
If you need to build Z3 from source with Java bindings enabled:
```bash
python scripts/mk_make.py --java
cd build
make
```
The resulting files will be in the `build` directory.
## Eclipse Setup
### Step 1: Add Z3 JAR to Build Path
1. Right-click on your Java project in the **Package Explorer**
2. Select **Build Path** → **Configure Build Path...**
3. In the **Libraries** tab, click **Add External JARs...**
4. Navigate to the Z3 `bin` folder and select `com.microsoft.z3.jar`
5. Click **Apply and Close**
### Step 2: Configure Native Library Path
Eclipse needs to know where to find the native Z3 libraries (`.dll`, `.so`, or `.dylib` files).
**Method 1: Using Eclipse Native Library Location (Recommended)**
1. In the **Package Explorer**, expand the **Referenced Libraries** section
2. Find and expand `com.microsoft.z3.jar`
3. Right-click on **Native Library Location** and select **Edit...**
4. Click **External Folder...** and select the Z3 `bin` folder (where the native libraries are located)
5. Click **OK**
**Method 2: Using VM Arguments**
1. Right-click on your project → **Run As** → **Run Configurations...**
2. Select your Java application configuration (or create a new one)
3. Go to the **Arguments** tab
4. In the **VM arguments** field, add:
```
-Djava.library.path=C:\path\to\z3\bin
```
(Replace with your actual Z3 bin directory path)
5. Click **Apply** and **Run**
**Method 3: Using System PATH (Windows)**
1. Add the Z3 `bin` directory to your system PATH environment variable:
- Open **System Properties** → **Advanced** → **Environment Variables**
- Under **System Variables**, find and edit the `Path` variable
- Add the full path to your Z3 `bin` directory (e.g., `C:\z3\bin`)
- Click **OK** and restart Eclipse
### Step 3: Verify Setup
Create a test Java file with the following code:
```java
import com.microsoft.z3.*;
public class Z3Test {
public static void main(String[] args) {
System.out.println("Creating Z3 context...");
Context ctx = new Context();
System.out.println("Z3 version: " + Version.getFullVersion());
// Simple example: x > 0
IntExpr x = ctx.mkIntConst("x");
Solver solver = ctx.mkSolver();
solver.add(ctx.mkGt(x, ctx.mkInt(0)));
if (solver.check() == Status.SATISFIABLE) {
System.out.println("SAT");
System.out.println("Model: " + solver.getModel());
}
ctx.close();
System.out.println("Success!");
}
}
```
Run the program. If you see the Z3 version and "Success!" printed, your setup is working correctly.
## IntelliJ IDEA Setup
### Step 1: Add Z3 JAR to Project
1. Open your project in IntelliJ IDEA
2. Go to **File** → **Project Structure** (or press `Ctrl+Alt+Shift+S`)
3. Select **Modules** → **Dependencies**
4. Click the **+** button and select **JARs or directories...**
5. Navigate to the Z3 `bin` folder and select `com.microsoft.z3.jar`
6. Click **OK** and **Apply**
### Step 2: Configure Native Library Path
**Method 1: Using Run Configuration (Recommended)**
1. Go to **Run** → **Edit Configurations...**
2. Select your application configuration (or create a new one)
3. In the **VM options** field, add:
```
-Djava.library.path=/path/to/z3/bin
```
(Replace with your actual Z3 bin directory path)
4. Click **OK**
**Method 2: Using Environment Variable (Windows)**
Add the Z3 `bin` directory to your system PATH as described in the Eclipse section above, then restart IntelliJ IDEA.
### Step 3: Verify Setup
Use the same test code from the Eclipse section to verify your setup.
## Visual Studio Code Setup
### Step 1: Install Java Extension Pack
1. Open Visual Studio Code
2. Install the **Extension Pack for Java** from the Extensions marketplace
### Step 2: Add Z3 JAR to Classpath
Create or edit `.vscode/settings.json` in your project root:
```json
{
"java.project.referencedLibraries": [
"path/to/z3/bin/com.microsoft.z3.jar"
]
}
```
### Step 3: Configure Native Library Path
Create or edit `.vscode/launch.json` in your project root:
```json
{
"version": "0.2.0",
"configurations": [
{
"type": "java",
"name": "Launch with Z3",
"request": "launch",
"mainClass": "YourMainClass",
"vmArgs": "-Djava.library.path=/path/to/z3/bin"
}
]
}
```
Replace `YourMainClass` with your actual main class name and adjust the path to your Z3 bin directory.
### Step 4: Verify Setup
Use the same test code to verify your setup.
## Command-Line Build and Run
If you prefer to build and run from the command line:
### Compiling
```bash
# Windows
javac -cp "C:\path\to\z3\bin\com.microsoft.z3.jar;." YourProgram.java
# Linux/macOS
javac -cp "/path/to/z3/bin/com.microsoft.z3.jar:." YourProgram.java
```
### Running
```bash
# Windows
java -cp "C:\path\to\z3\bin\com.microsoft.z3.jar;." -Djava.library.path=C:\path\to\z3\bin YourProgram
# Linux
LD_LIBRARY_PATH=/path/to/z3/bin java -cp "/path/to/z3/bin/com.microsoft.z3.jar:." YourProgram
# macOS
DYLD_LIBRARY_PATH=/path/to/z3/bin java -cp "/path/to/z3/bin/com.microsoft.z3.jar:." YourProgram
```
## Troubleshooting
### ClassNotFoundException: com.microsoft.z3.Context
**Problem:** Java cannot find the Z3 classes.
**Solution:**
- Verify that `com.microsoft.z3.jar` is in your project's classpath
- In Eclipse: Check **Project Properties** → **Java Build Path** → **Libraries**
- In IntelliJ: Check **File** → **Project Structure** → **Modules** → **Dependencies**
- Ensure you're not just copying the JAR to your project's bin folder; it must be explicitly added to the classpath
### UnsatisfiedLinkError: no z3java in java.library.path
**Problem:** Java can find the Z3 classes but cannot load the native libraries.
**Solution:**
- Verify that `libz3.dll`/`libz3.so`/`libz3.dylib` and `libz3java.dll`/`libz3java.so`/`libz3java.dylib` are in a directory accessible to Java
- Add the Z3 `bin` directory to:
- The `java.library.path` VM argument, or
- The system PATH environment variable (Windows), or
- The LD_LIBRARY_PATH (Linux) / DYLD_LIBRARY_PATH (macOS) environment variable
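To see which directories the JVM will actually search, a small diagnostic program can print `java.library.path` and flag entries that contain a Z3 native library (a minimal sketch; the class name is illustrative):

```java
import java.io.File;

public class LibPathCheck {
    public static void main(String[] args) {
        // java.library.path holds the directories searched by System.loadLibrary
        String libPath = System.getProperty("java.library.path", "");
        for (String dir : libPath.split(File.pathSeparator)) {
            if (dir.isEmpty()) continue;
            // Flag directories that contain a Z3 native library for any platform
            boolean hasZ3 = new File(dir, "libz3.dll").exists()
                    || new File(dir, "libz3.so").exists()
                    || new File(dir, "libz3.dylib").exists();
            System.out.println((hasZ3 ? "[z3] " : "     ") + dir);
        }
    }
}
```

Run it with the same VM arguments as your Z3 program; the directory containing the `libz3java` library must appear in the output.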
### ExceptionInInitializerError or Z3Exception
**Problem:** Z3 fails to initialize properly.
**Solution:**
- Ensure all Z3 files (JAR and native libraries) are from the same version
- Check that you're using a compatible Java version (Java 8 or later)
- Verify that the native libraries match your system architecture (32-bit vs 64-bit)
### Running on Different Platforms
**Windows:**
- Use semicolons (`;`) as classpath separators
- Native libraries: `libz3.dll` and `libz3java.dll`
- Set PATH or use `-Djava.library.path`
**Linux:**
- Use colons (`:`) as classpath separators
- Native libraries: `libz3.so` and `libz3java.so`
- Set LD_LIBRARY_PATH or use `-Djava.library.path`
**macOS:**
- Use colons (`:`) as classpath separators
- Native libraries: `libz3.dylib` and `libz3java.dylib`
- Set DYLD_LIBRARY_PATH or use `-Djava.library.path`
## Maven/Gradle Setup
For Maven or Gradle projects, you can use system-scoped dependencies:
### Maven
```xml
<dependency>
<groupId>com.microsoft</groupId>
<artifactId>z3</artifactId>
<version>x.x.x</version> <!-- Replace with your Z3 version -->
<scope>system</scope>
<systemPath>${project.basedir}/lib/com.microsoft.z3.jar</systemPath>
</dependency>
```
Place the Z3 JAR in your project's `lib` directory and configure the native library path as described above.
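Note that `system` scope is deprecated in recent Maven versions, and the test JVM is forked by the Surefire plugin, so `-Djava.library.path` must be passed to it separately. A hedged sketch (the `lib` path is an assumption about your layout):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Forked test JVMs need the native library path as well -->
    <argLine>-Djava.library.path=${project.basedir}/lib</argLine>
  </configuration>
</plugin>
```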
### Gradle
```gradle
dependencies {
implementation files('lib/com.microsoft.z3.jar')
}
```
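As with Maven, Gradle's `run` and `test` tasks fork their own JVMs, so the library path has to be set on those tasks too; a sketch assuming the JAR and native libraries sit in `lib/`:

```gradle
// Forked JVMs (application run, tests) need java.library.path as well
application {
    applicationDefaultJvmArgs = ["-Djava.library.path=${file('lib').absolutePath}"]
}

tasks.withType(Test).configureEach {
    systemProperty 'java.library.path', file('lib').absolutePath
}
```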
## Additional Resources
- [Z3 GitHub Repository](https://github.com/Z3Prover/z3)
- [Z3 Java API Documentation](https://z3prover.github.io/api/html/namespacecom_1_1microsoft_1_1z3.html)
- [Z3 Examples](https://github.com/Z3Prover/z3/tree/master/examples/java)
- [Z3 Guide](https://microsoft.github.io/z3guide/)
## Building Z3 with Java Support
If you need to build Z3 from source with Java bindings:
```bash
# Clone the repository
git clone https://github.com/Z3Prover/z3.git
cd z3
# Configure with Java support
python scripts/mk_make.py --java
# Build
cd build
make
# The Java bindings will be in the build directory
# - com.microsoft.z3.jar
# - libz3java.so / libz3java.dll / libz3java.dylib
# - libz3.so / libz3.dll / libz3.dylib
```
For more details on building Z3, see the main [README.md](../README.md).
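Alternatively, the CMake build enables the same bindings via configuration flags; a sketch, assuming a recent checkout where the options are named `Z3_BUILD_JAVA_BINDINGS` and `Z3_BUILD_LIBZ3_SHARED`:

```bash
mkdir build && cd build
cmake -DZ3_BUILD_JAVA_BINDINGS=ON -DZ3_BUILD_LIBZ3_SHARED=ON ..
make -j4
```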


@@ -1,7 +1,7 @@
 API documentation
 -----------------
-To generate the API documentation for the C, C++, .NET, Java and Python APIs, we must execute
+To generate the API documentation for the C, C++, .NET, Java, Python, and Go APIs, we must execute
 python mk_api_doc.py
@@ -10,6 +10,12 @@ We must have doxygen installed in our system.
 The documentation will be stored in the subdirectory './api/html'.
 The main file is './api/html/index.html'
+To include Go API documentation, use:
+python mk_api_doc.py --go
+Note: Go documentation requires Go to be installed (for godoc support).
 Code documentation
 ------------------

@@ -15,24 +15,27 @@ import subprocess
 ML_ENABLED=False
 MLD_ENABLED=False
 JS_ENABLED=False
+GO_ENABLED=False
 BUILD_DIR='../build'
 DOXYGEN_EXE='doxygen'
 TEMP_DIR=os.path.join(os.getcwd(), 'tmp')
 OUTPUT_DIRECTORY=os.path.join(os.getcwd(), 'api')
 Z3PY_PACKAGE_PATH='../src/api/python/z3'
 JS_API_PATH='../src/api/js'
+GO_API_PATH='../src/api/go'
 Z3PY_ENABLED=True
 DOTNET_ENABLED=True
 JAVA_ENABLED=True
 Z3OPTIONS_ENABLED=True
 DOTNET_API_SEARCH_PATHS=['../src/api/dotnet']
 JAVA_API_SEARCH_PATHS=['../src/api/java']
+GO_API_SEARCH_PATHS=['../src/api/go']
 SCRIPT_DIR=os.path.abspath(os.path.dirname(__file__))
 
 def parse_options():
     global ML_ENABLED, MLD_ENABLED, BUILD_DIR, DOXYGEN_EXE, TEMP_DIR, OUTPUT_DIRECTORY
-    global Z3PY_PACKAGE_PATH, Z3PY_ENABLED, DOTNET_ENABLED, JAVA_ENABLED, JS_ENABLED
-    global DOTNET_API_SEARCH_PATHS, JAVA_API_SEARCH_PATHS, JS_API_PATH
+    global Z3PY_PACKAGE_PATH, Z3PY_ENABLED, DOTNET_ENABLED, JAVA_ENABLED, JS_ENABLED, GO_ENABLED
+    global DOTNET_API_SEARCH_PATHS, JAVA_API_SEARCH_PATHS, GO_API_SEARCH_PATHS, JS_API_PATH, GO_API_PATH
     parser = argparse.ArgumentParser(description=__doc__)
     parser.add_argument('-b',
                         '--build',
@@ -54,6 +57,11 @@ def parse_options():
                         default=False,
                         help='Include JS/TS API documentation'
     )
+    parser.add_argument('--go',
+                        action='store_true',
+                        default=False,
+                        help='Include Go API documentation'
+    )
     parser.add_argument('--doxygen-executable',
                         dest='doxygen_executable',
                         default=DOXYGEN_EXE,
@@ -109,10 +117,17 @@ def parse_options():
                         default=JAVA_API_SEARCH_PATHS,
                         help='Specify one or more paths to look for Java files (default: %(default)s).',
     )
+    parser.add_argument('--go-search-paths',
+                        dest='go_search_paths',
+                        nargs='+',
+                        default=GO_API_SEARCH_PATHS,
+                        help='Specify one or more paths to look for Go files (default: %(default)s).',
+    )
     pargs = parser.parse_args()
     ML_ENABLED = pargs.ml
     MLD_ENABLED = pargs.mld
     JS_ENABLED = pargs.js
+    GO_ENABLED = pargs.go
     BUILD_DIR = pargs.build
     DOXYGEN_EXE = pargs.doxygen_executable
     TEMP_DIR = pargs.temp_dir
@@ -123,6 +138,7 @@ def parse_options():
     JAVA_ENABLED = not pargs.no_java
     DOTNET_API_SEARCH_PATHS = pargs.dotnet_search_paths
     JAVA_API_SEARCH_PATHS = pargs.java_search_paths
+    GO_API_SEARCH_PATHS = pargs.go_search_paths
 
     if Z3PY_ENABLED:
         if not os.path.exists(Z3PY_PACKAGE_PATH):
@@ -288,6 +304,18 @@ try:
         print("Java documentation disabled")
         doxygen_config_substitutions['JAVA_API_FILES'] = ''
         doxygen_config_substitutions['JAVA_API_SEARCH_PATHS'] = ''
+    if GO_ENABLED:
+        print("Go documentation enabled")
+        doxygen_config_substitutions['GO_API_FILES'] = '*.go'
+        go_api_search_path_str = ""
+        for p in GO_API_SEARCH_PATHS:
+            # Quote path so that paths with spaces are handled correctly
+            go_api_search_path_str += "\"{}\" ".format(p)
+        doxygen_config_substitutions['GO_API_SEARCH_PATHS'] = go_api_search_path_str
+    else:
+        print("Go documentation disabled")
+        doxygen_config_substitutions['GO_API_FILES'] = ''
+        doxygen_config_substitutions['GO_API_SEARCH_PATHS'] = ''
     if JS_ENABLED:
         print('Javascript documentation enabled')
     else:
@@ -350,6 +378,13 @@ try:
             prefix=bullet_point_prefix)
     else:
         website_dox_substitutions['JS_API'] = ''
+    if GO_ENABLED:
+        website_dox_substitutions['GO_API'] = (
+            '{prefix}<a class="el" href="go/index.html">Go API</a>'
+        ).format(
+            prefix=bullet_point_prefix)
+    else:
+        website_dox_substitutions['GO_API'] = ''
     configure_file(
         doc_path('website.dox.in'),
         temp_path('website.dox'),
@@ -428,6 +463,11 @@ try:
             exit(1)
         print("Generated Javascript documentation.")
+
+    if GO_ENABLED:
+        # Go documentation is generated by mk_go_doc.py separately and downloaded as an artifact
+        # We just need to register that it exists for the link in the index
+        print("Go documentation link will be included in index.")
     print("Documentation was successfully generated at subdirectory '{}'.".format(OUTPUT_DIRECTORY))
 except Exception:
     exctype, value = sys.exc_info()[:2]
doc/mk_go_doc.py (new file, 654 lines)

@@ -0,0 +1,654 @@
#!/usr/bin/env python
# Copyright (c) Microsoft Corporation 2025
"""
Z3 Go API documentation generator script
This script generates HTML documentation for the Z3 Go bindings.
It creates a browsable HTML interface for the Go API documentation.
"""
import os
import sys
import subprocess
import argparse
import re
SCRIPT_DIR = os.path.abspath(os.path.dirname(__file__))
GO_API_PATH = os.path.join(SCRIPT_DIR, '..', 'src', 'api', 'go')
OUTPUT_DIR = os.path.join(SCRIPT_DIR, 'api', 'html', 'go')
def extract_types_and_functions(filepath):
"""Extract type and function names from a Go source file."""
types = []
functions = []
try:
with open(filepath, 'r', encoding='utf-8') as f:
content = f.read()
# Extract type declarations
type_pattern = r'type\s+(\w+)\s+(?:struct|interface)'
types = re.findall(type_pattern, content)
# Extract function/method declarations
# Match both: func Name() and func (r *Type) Name()
func_pattern = r'func\s+(?:\([^)]+\)\s+)?(\w+)\s*\('
functions = re.findall(func_pattern, content)
except Exception as e:
print(f"Warning: Could not parse {filepath}: {e}")
return types, functions
def extract_detailed_api(filepath):
"""Extract detailed type and function information with signatures and comments."""
types_info = {}
functions_info = []
context_methods = [] # Special handling for Context methods
try:
with open(filepath, 'r', encoding='utf-8') as f:
lines = f.readlines()
i = 0
while i < len(lines):
line = lines[i].strip()
# Extract type with comment
if line.startswith('type ') and ('struct' in line or 'interface' in line):
# Look back for comment
comment = ""
j = i - 1
while j >= 0 and (lines[j].strip().startswith('//') or lines[j].strip() == ''):
if lines[j].strip().startswith('//'):
comment = lines[j].strip()[2:].strip() + " " + comment
j -= 1
match = re.match(r'type\s+(\w+)\s+', line)
if match:
type_name = match.group(1)
types_info[type_name] = {
'comment': comment.strip(),
'methods': []
}
# Extract function/method with signature and comment
if line.startswith('func '):
# Look back for comment
comment = ""
j = i - 1
while j >= 0 and (lines[j].strip().startswith('//') or lines[j].strip() == ''):
if lines[j].strip().startswith('//'):
comment = lines[j].strip()[2:].strip() + " " + comment
j -= 1
# Extract full signature (may span multiple lines)
signature = line
k = i + 1
while k < len(lines) and '{' not in signature:
signature += ' ' + lines[k].strip()
k += 1
# Remove body
if '{' in signature:
signature = signature[:signature.index('{')].strip()
# Parse receiver if method
method_match = re.match(r'func\s+\(([^)]+)\)\s+(\w+)', signature)
func_match = re.match(r'func\s+(\w+)', signature)
if method_match:
receiver = method_match.group(1).strip()
func_name = method_match.group(2)
# Extract receiver type
receiver_type = receiver.split()[-1].strip('*')
# Only add if function name is public
if func_name[0].isupper():
if receiver_type == 'Context':
# Special handling for Context methods - add to context_methods
context_methods.append({
'name': func_name,
'signature': signature,
'comment': comment.strip()
})
elif receiver_type in types_info:
types_info[receiver_type]['methods'].append({
'name': func_name,
'signature': signature,
'comment': comment.strip()
})
elif func_match:
func_name = func_match.group(1)
# Only add if it's public (starts with capital)
if func_name[0].isupper():
functions_info.append({
'name': func_name,
'signature': signature,
'comment': comment.strip()
})
i += 1
# If we have Context methods but no other content, return them as functions
if context_methods and not types_info and not functions_info:
functions_info = context_methods
elif context_methods:
# Add Context pseudo-type
types_info['Context'] = {
'comment': 'Context methods (receiver omitted for clarity)',
'methods': context_methods
}
except Exception as e:
print(f"Warning: Could not parse detailed API from {filepath}: {e}")
return types_info, functions_info
def extract_package_comment(filepath):
"""Extract the package-level documentation comment from a Go file."""
try:
with open(filepath, 'r', encoding='utf-8') as f:
lines = f.readlines()
comment_lines = []
in_comment = False
for line in lines:
stripped = line.strip()
if stripped.startswith('/*'):
in_comment = True
comment_lines.append(stripped[2:].strip())
elif in_comment:
if '*/' in stripped:
comment_lines.append(stripped.replace('*/', '').strip())
break
comment_lines.append(stripped.lstrip('*').strip())
elif stripped.startswith('//'):
comment_lines.append(stripped[2:].strip())
elif stripped.startswith('package'):
break
return ' '.join(comment_lines).strip() if comment_lines else None
except Exception as e:
return None
def parse_args():
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument('-o', '--output-dir',
dest='output_dir',
default=OUTPUT_DIR,
help='Output directory for documentation (default: %(default)s)',
)
parser.add_argument('--go-api-path',
dest='go_api_path',
default=GO_API_PATH,
help='Path to Go API source files (default: %(default)s)',
)
return parser.parse_args()
def check_go_installed():
"""Check if Go is installed and available."""
try:
# Try to find go in common locations
go_paths = [
'go',
'C:\\Program Files\\Go\\bin\\go.exe',
'C:\\Go\\bin\\go.exe',
]
for go_cmd in go_paths:
try:
result = subprocess.run([go_cmd, 'version'],
capture_output=True,
text=True,
check=True,
timeout=5)
print(f"Found Go: {result.stdout.strip()}")
return go_cmd
except (subprocess.CalledProcessError, FileNotFoundError, subprocess.TimeoutExpired):
continue
print("WARNING: Go is not installed or not in PATH")
print("Install Go from https://golang.org/dl/ for enhanced documentation")
return None
except Exception as e:
print(f"WARNING: Could not check Go installation: {e}")
return None
def generate_godoc_markdown(go_cmd, go_api_path, output_dir):
"""Generate markdown documentation using godoc."""
print("Generating documentation with godoc...")
os.makedirs(output_dir, exist_ok=True)
try:
# Change to the Go API directory
orig_dir = os.getcwd()
os.chdir(go_api_path)
# Run go doc to get package documentation
result = subprocess.run(
[go_cmd, 'doc', '-all'],
capture_output=True,
text=True,
timeout=30
)
if result.returncode == 0:
# Create markdown file
doc_text = result.stdout
godoc_md = os.path.join(output_dir, 'godoc.md')
with open(godoc_md, 'w', encoding='utf-8') as f:
f.write('# Z3 Go API Documentation (godoc)\n\n')
f.write(doc_text)
print(f"Generated godoc markdown at: {godoc_md}")
os.chdir(orig_dir)
return True
else:
print(f"godoc returned error: {result.stderr}")
os.chdir(orig_dir)
return False
except Exception as e:
print(f"Error generating godoc markdown: {e}")
try:
os.chdir(orig_dir)
except:
pass
return False
def generate_module_page(module_filename, description, go_api_path, output_dir):
"""Generate a detailed HTML page for a single Go module."""
file_path = os.path.join(go_api_path, module_filename)
if not os.path.exists(file_path):
return
module_name = module_filename.replace('.go', '')
output_path = os.path.join(output_dir, f'{module_name}.html')
# Extract detailed API information
types_info, functions_info = extract_detailed_api(file_path)
with open(output_path, 'w', encoding='utf-8') as f:
f.write('<!DOCTYPE html>\n<html lang="en">\n<head>\n')
f.write(' <meta charset="UTF-8">\n')
f.write(f' <title>{module_filename} - Z3 Go API</title>\n')
f.write(' <style>\n')
f.write(' body { font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif; margin: 0; padding: 0; line-height: 1.6; }\n')
f.write(' header { background: #2d3748; color: white; padding: 2rem; }\n')
f.write(' header h1 { margin: 0; font-size: 2rem; }\n')
f.write(' header p { margin: 0.5rem 0 0 0; opacity: 0.9; }\n')
f.write(' .container { max-width: 1200px; margin: 0 auto; padding: 2rem; }\n')
f.write(' .nav { background: #edf2f7; padding: 1rem; margin-bottom: 2rem; border-radius: 4px; }\n')
f.write(' .nav a { color: #2b6cb0; text-decoration: none; margin-right: 1rem; }\n')
f.write(' .nav a:hover { text-decoration: underline; }\n')
f.write(' h2 { color: #2d3748; border-bottom: 2px solid #4299e1; padding-bottom: 0.5rem; margin-top: 2rem; }\n')
f.write(' h3 { color: #2d3748; margin-top: 1.5rem; }\n')
f.write(' .type-section, .function-section { margin: 1.5rem 0; }\n')
f.write(' .api-item { background: #f7fafc; padding: 1rem; margin: 1rem 0; border-left: 4px solid #4299e1; border-radius: 4px; }\n')
f.write(' .api-item h4 { margin: 0 0 0.5rem 0; color: #2b6cb0; font-family: monospace; }\n')
f.write(' .signature { background: #2d3748; color: #e2e8f0; padding: 0.75rem; border-radius: 4px; font-family: monospace; overflow-x: auto; margin: 0.5rem 0; }\n')
f.write(' .comment { color: #4a5568; margin: 0.5rem 0; }\n')
f.write(' code { background: #e2e8f0; padding: 2px 6px; border-radius: 3px; font-family: monospace; }\n')
f.write(' .method-list { margin-left: 1rem; }\n')
f.write(' </style>\n')
f.write('</head>\n<body>\n')
f.write(' <header>\n')
f.write(f' <h1>{module_filename}</h1>\n')
f.write(f' <p>{description}</p>\n')
f.write(' </header>\n')
f.write(' <div class="container">\n')
f.write(' <div class="nav">\n')
f.write(' <a href="index.html">← Back to Index</a>\n')
f.write(' <a href="godoc.md">Complete API Reference (markdown)</a>\n')
f.write(' <a href="README.md">README</a>\n')
f.write(' <a href="../index.html">All Languages</a>\n')
f.write(' </div>\n')
# Types section
if types_info:
f.write(' <h2>Types</h2>\n')
for type_name in sorted(types_info.keys()):
type_data = types_info[type_name]
f.write(' <div class="type-section">\n')
f.write(f' <h3>type {type_name}</h3>\n')
if type_data['comment']:
f.write(f' <p class="comment">{type_data["comment"]}</p>\n')
# Methods
if type_data['methods']:
f.write(' <div class="method-list">\n')
f.write(' <h4>Methods:</h4>\n')
for method in sorted(type_data['methods'], key=lambda m: m['name']):
f.write(' <div class="api-item">\n')
f.write(f' <h4>{method["name"]}</h4>\n')
f.write(f' <div class="signature">{method["signature"]}</div>\n')
if method['comment']:
f.write(f' <p class="comment">{method["comment"]}</p>\n')
f.write(' </div>\n')
f.write(' </div>\n')
f.write(' </div>\n')
# Package functions section
if functions_info:
f.write(' <h2>Functions</h2>\n')
f.write(' <div class="function-section">\n')
for func in sorted(functions_info, key=lambda f: f['name']):
f.write(' <div class="api-item">\n')
f.write(f' <h4>{func["name"]}</h4>\n')
f.write(f' <div class="signature">{func["signature"]}</div>\n')
if func['comment']:
f.write(f' <p class="comment">{func["comment"]}</p>\n')
f.write(' </div>\n')
f.write(' </div>\n')
if not types_info and not functions_info:
f.write(' <p><em>No public API documentation extracted. See godoc for complete reference.</em></p>\n')
f.write(' <div class="nav" style="margin-top: 3rem;">\n')
f.write(' <a href="index.html">← Back to Index</a>\n')
f.write(' </div>\n')
f.write(' </div>\n')
f.write('</body>\n</html>\n')
print(f"Generated module page: {output_path}")
def generate_html_docs(go_api_path, output_dir):
"""Generate HTML documentation for Go bindings."""
# Create output directory
os.makedirs(output_dir, exist_ok=True)
# Go source files and their descriptions
go_files = {
'z3.go': 'Core types (Context, Config, Symbol, Sort, Expr, FuncDecl, Quantifier, Lambda) and basic operations',
'solver.go': 'Solver and Model API for SMT solving',
'tactic.go': 'Tactics, Goals, Probes, and Parameters for goal-based solving',
'arith.go': 'Arithmetic operations (integers, reals) and comparisons',
'array.go': 'Array operations (select, store, constant arrays)',
'bitvec.go': 'Bit-vector operations and constraints',
'fp.go': 'IEEE 754 floating-point operations',
'seq.go': 'Sequences, strings, and regular expressions',
'datatype.go': 'Algebraic datatypes, tuples, and enumerations',
'optimize.go': 'Optimization with maximize/minimize objectives',
'fixedpoint.go': 'Fixedpoint solver for Datalog and constrained Horn clauses (CHC)',
'log.go': 'Interaction logging for debugging and analysis',
}
# Generate main index.html
index_path = os.path.join(output_dir, 'index.html')
with open(index_path, 'w', encoding='utf-8') as f:
f.write('<!DOCTYPE html>\n')
f.write('<html lang="en">\n')
f.write('<head>\n')
f.write(' <meta charset="UTF-8">\n')
f.write(' <meta name="viewport" content="width=device-width, initial-scale=1.0">\n')
f.write(' <title>Z3 Go API Documentation</title>\n')
f.write(' <style>\n')
f.write(' body { font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif; margin: 0; padding: 0; line-height: 1.6; }\n')
f.write(' header { background: #2d3748; color: white; padding: 2rem; }\n')
f.write(' header h1 { margin: 0; font-size: 2.5rem; }\n')
f.write(' header p { margin: 0.5rem 0 0 0; font-size: 1.1rem; opacity: 0.9; }\n')
f.write(' .container { max-width: 1200px; margin: 0 auto; padding: 2rem; }\n')
f.write(' .section { margin: 2rem 0; }\n')
f.write(' .section h2 { color: #2d3748; border-bottom: 2px solid #4299e1; padding-bottom: 0.5rem; }\n')
f.write(' .file-list { list-style: none; padding: 0; }\n')
f.write(' .file-item { background: #f7fafc; border-left: 4px solid #4299e1; margin: 1rem 0; padding: 1rem; border-radius: 4px; }\n')
f.write(' .file-item h3 { margin: 0 0 0.5rem 0; color: #2d3748; }\n')
f.write(' .file-item h3 a { color: #2b6cb0; text-decoration: none; }\n')
f.write(' .file-item h3 a:hover { color: #4299e1; text-decoration: underline; }\n')
f.write(' .file-item p { margin: 0; color: #4a5568; }\n')
f.write(' .code-block { background: #2d3748; color: #e2e8f0; padding: 1.5rem; border-radius: 4px; overflow-x: auto; }\n')
f.write(' .code-block pre { margin: 0; }\n')
f.write(' .install-section { background: #edf2f7; padding: 1.5rem; border-radius: 4px; margin: 1rem 0; }\n')
f.write(' .back-link { display: inline-block; margin-top: 2rem; color: #2b6cb0; text-decoration: none; }\n')
f.write(' .back-link:hover { text-decoration: underline; }\n')
f.write(' </style>\n')
f.write('</head>\n')
f.write('<body>\n')
f.write(' <header>\n')
f.write(' <h1>Z3 Go API Documentation</h1>\n')
f.write(' <p>Go bindings for the Z3 Theorem Prover</p>\n')
f.write(' </header>\n')
f.write(' <div class="container">\n')
# Overview section
f.write(' <div class="section">\n')
f.write(' <h2>Overview</h2>\n')
f.write(' <p>The Z3 Go bindings provide idiomatic Go access to the Z3 SMT solver. These bindings use CGO to wrap the Z3 C API and provide automatic memory management through Go finalizers.</p>\n')
f.write(' <p><strong>Package:</strong> <code>github.com/Z3Prover/z3/src/api/go</code></p>\n')
f.write(' </div>\n')
# Quick start
f.write(' <div class="section">\n')
f.write(' <h2>Quick Start</h2>\n')
f.write(' <div class="code-block">\n')
f.write(' <pre>package main\n\n')
f.write('import (\n')
f.write(' "fmt"\n')
f.write(' "github.com/Z3Prover/z3/src/api/go"\n')
f.write(')\n\n')
f.write('func main() {\n')
f.write(' // Create a context\n')
f.write(' ctx := z3.NewContext()\n\n')
f.write(' // Create integer variable\n')
f.write(' x := ctx.MkIntConst("x")\n\n')
f.write(' // Create solver\n')
f.write(' solver := ctx.NewSolver()\n\n')
f.write(' // Add constraint: x > 0\n')
f.write(' zero := ctx.MkInt(0, ctx.MkIntSort())\n')
f.write(' solver.Assert(ctx.MkGt(x, zero))\n\n')
f.write(' // Check satisfiability\n')
f.write(' if solver.Check() == z3.Satisfiable {\n')
f.write(' fmt.Println("sat")\n')
f.write(' model := solver.Model()\n')
f.write(' if val, ok := model.Eval(x, true); ok {\n')
f.write(' fmt.Println("x =", val.String())\n')
f.write(' }\n')
f.write(' }\n')
f.write('}</pre>\n')
f.write(' </div>\n')
f.write(' </div>\n')
# Installation
f.write(' <div class="section">\n')
f.write(' <h2>Installation</h2>\n')
f.write(' <div class="install-section">\n')
f.write(' <p><strong>Prerequisites:</strong></p>\n')
f.write(' <ul>\n')
f.write(' <li>Go 1.20 or later</li>\n')
f.write(' <li>Z3 built as a shared library</li>\n')
f.write(' <li>CGO enabled (default)</li>\n')
f.write(' </ul>\n')
f.write(' <p><strong>Build Z3 with Go bindings:</strong></p>\n')
f.write(' <div class="code-block">\n')
f.write(' <pre># Using CMake\n')
f.write('mkdir build && cd build\n')
f.write('cmake -DZ3_BUILD_GO_BINDINGS=ON -DZ3_BUILD_LIBZ3_SHARED=ON ..\n')
f.write('make\n\n')
f.write('# Using Python build script\n')
f.write('python scripts/mk_make.py --go\n')
f.write('cd build && make</pre>\n')
f.write(' </div>\n')
f.write(' <p><strong>Set environment variables:</strong></p>\n')
f.write(' <div class="code-block">\n')
f.write(' <pre>export CGO_CFLAGS="-I${Z3_ROOT}/src/api"\n')
f.write('export CGO_LDFLAGS="-L${Z3_ROOT}/build -lz3"\n')
f.write('export LD_LIBRARY_PATH="${Z3_ROOT}/build:$LD_LIBRARY_PATH"</pre>\n')
f.write(' </div>\n')
f.write(' </div>\n')
f.write(' </div>\n')
# API modules with detailed documentation
f.write(' <div class="section">\n')
f.write(' <h2>API Modules</h2>\n')
for filename, description in go_files.items():
file_path = os.path.join(go_api_path, filename)
if os.path.exists(file_path):
module_name = filename.replace('.go', '')
# Generate individual module page
generate_module_page(filename, description, go_api_path, output_dir)
# Extract types and functions from the file
types, functions = extract_types_and_functions(file_path)
f.write(f' <div class="file-item" id="{module_name}">\n')
f.write(f' <h3><a href="{module_name}.html">{filename}</a></h3>\n')
f.write(f' <p>{description}</p>\n')
if types:
f.write(' <p><strong>Types:</strong> ')
f.write(', '.join([f'<code>{t}</code>' for t in sorted(types)]))
f.write('</p>\n')
if functions:
# Filter public functions
public_funcs = [f for f in functions if f and len(f) > 0 and f[0].isupper()]
if public_funcs:
f.write(' <p><strong>Key Functions:</strong> ')
# Show first 15 functions to keep it manageable
funcs_to_show = sorted(public_funcs)[:15]
f.write(', '.join([f'<code>{func}()</code>' for func in funcs_to_show]))
if len(public_funcs) > 15:
f.write(f' <em>(+{len(public_funcs)-15} more)</em>')
f.write('</p>\n')
f.write(f' <p><a href="{module_name}.html">→ View full API reference</a></p>\n')
f.write(' </div>\n')
f.write(' </div>\n')
# Features section
f.write(' <div class="section">\n')
f.write(' <h2>Features</h2>\n')
f.write(' <ul>\n')
f.write(' <li><strong>Core SMT:</strong> Boolean logic, arithmetic, arrays, quantifiers</li>\n')
f.write(' <li><strong>Bit-vectors:</strong> Fixed-size bit-vector arithmetic and operations</li>\n')
f.write(' <li><strong>Floating-point:</strong> IEEE 754 floating-point arithmetic</li>\n')
f.write(' <li><strong>Strings & Sequences:</strong> String constraints and sequence operations</li>\n')
f.write(' <li><strong>Regular Expressions:</strong> Pattern matching and regex constraints</li>\n')
f.write(' <li><strong>Datatypes:</strong> Algebraic datatypes, tuples, enumerations</li>\n')
f.write(' <li><strong>Tactics:</strong> Goal-based solving with tactic combinators</li>\n')
f.write(' <li><strong>Optimization:</strong> MaxSMT with maximize/minimize objectives</li>\n')
f.write(' <li><strong>Memory Management:</strong> Automatic via Go finalizers</li>\n')
f.write(' </ul>\n')
f.write(' </div>\n')
# Resources
f.write(' <div class="section">\n')
f.write(' <h2>Resources</h2>\n')
f.write(' <ul>\n')
f.write(' <li><a href="https://github.com/Z3Prover/z3">Z3 GitHub Repository</a></li>\n')
f.write(' <li><a href="../index.html">All API Documentation</a></li>\n')
# Check if README exists and copy it
readme_path = os.path.join(go_api_path, 'README.md')
if os.path.exists(readme_path):
# Copy README.md to output directory
readme_dest = os.path.join(output_dir, 'README.md')
try:
import shutil
shutil.copy2(readme_path, readme_dest)
f.write(' <li><a href="README.md">Go API README (markdown)</a></li>\n')
print(f"Copied README.md to: {readme_dest}")
except Exception as e:
print(f"Warning: Could not copy README.md: {e}")
# Link to godoc.md if it will be generated
f.write(' <li><a href="godoc.md">Complete API Reference (godoc markdown)</a></li>\n')
f.write(' </ul>\n')
f.write(' </div>\n')
f.write(' <a href="../index.html" class="back-link">← Back to main API documentation</a>\n')
f.write(' </div>\n')
f.write('</body>\n')
f.write('</html>\n')
print(f"Generated Go documentation index at: {index_path}")
return True
def main():
args = parse_args()
print("Z3 Go API Documentation Generator")
print("=" * 50)
# Check if Go is installed
go_cmd = check_go_installed()
# Verify Go API path exists
if not os.path.exists(args.go_api_path):
print(f"ERROR: Go API path does not exist: {args.go_api_path}")
return 1
# Generate documentation
print(f"\nGenerating documentation from: {args.go_api_path}")
print(f"Output directory: {args.output_dir}")
# Try godoc first if Go is available
godoc_success = False
if go_cmd:
godoc_success = generate_godoc_markdown(go_cmd, args.go_api_path, args.output_dir)
# Always generate our custom HTML documentation
if not generate_html_docs(args.go_api_path, args.output_dir):
print("ERROR: Failed to generate documentation")
return 1
if godoc_success:
print("\n✓ Generated both godoc markdown and custom HTML documentation")
print("\n" + "=" * 50)
print("Documentation generated successfully!")
print(f"Open {os.path.join(args.output_dir, 'index.html')} in your browser.")
return 0
if __name__ == '__main__':
try:
sys.exit(main())
except KeyboardInterrupt:
print("\nInterrupted by user")
sys.exit(1)
except Exception as e:
print(f"ERROR: {e}")
import traceback
traceback.print_exc()
sys.exit(1)

Some files were not shown because too many files have changed in this diff.