mirror of https://github.com/Z3Prover/z3 synced 2026-02-12 11:54:07 +00:00

remove readmes

Signed-off-by: Nikolaj Bjorner <nbjorner@microsoft.com>
Nikolaj Bjorner 2026-02-10 17:39:09 -08:00
parent 74502a969c
commit b086050568
3 changed files with 0 additions and 427 deletions


@ -1,60 +0,0 @@
# A3 Python Workflow Fix
## Problem
The a3-python workflow was not finding Python files in the repository because it was using sparse-checkout and only checking out `.github` and `.agents` directories. The Python source files in `src/api/python/z3/` were not being checked out.
## Solution
Added a custom `steps:` configuration in the workflow frontmatter to run `git sparse-checkout add src` before the agent executes. This ensures the Python source files are available for analysis.
## Changes Made
### 1. Frontmatter Update (lines 24-28)
```yaml
steps:
  - name: Checkout Python source files
    run: |
      git sparse-checkout add src
      echo "Python source files checked out from src directory"
```
This step runs before the AI agent starts, expanding the sparse-checkout to include the `src/` directory.
### 2. Updated Analysis Instructions
- Updated Phase 2.1 to explicitly verify `src/api/python/z3/` files are available
- Changed the a3 command from `a3 analyze` to `a3 scan` with proper flags
- Added specific documentation about which Python files should be analyzed:
- `z3.py` - Main Z3 Python API (350KB+)
- `z3printer.py` - Pretty printing functionality
- `z3num.py`, `z3poly.py`, `z3rcf.py` - Numeric and polynomial modules
- `z3types.py`, `z3util.py` - Type definitions and utilities
### 3. Updated Exit Conditions
Added mention of sparse checkout issues in the exit conditions to help with future debugging.
## How to Recompile
After modifying `a3-python.md`, you must recompile the workflow to generate the updated `.lock.yml` file:
```bash
# Compile the specific workflow
gh aw compile a3-python
# Or compile all workflows
gh aw compile
```
This will update `a3-python.lock.yml` with the new sparse-checkout step.
## Testing
To test the workflow locally or in a PR:
1. Trigger the workflow manually via workflow_dispatch
2. Check the logs to verify:
- The "Checkout Python source files" step runs successfully
- Python files in `src/api/python/z3/` are found
- The a3 scan command analyzes those files
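The sparse-checkout behavior behind the fix can also be reproduced locally in a throwaway repository. This is a minimal sketch: the file names and repo layout below are mocked up for illustration, and cone-mode sparse checkout requires git >= 2.25.

```shell
# Reproduce the failure mode and the fix in a scratch repo.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
mkdir -p src/api/python/z3 .github/workflows
echo 'print("hi")' > src/api/python/z3/z3.py
echo 'name: a3-python' > .github/workflows/a3.yml
git add -A
git -c user.name=demo -c user.email=demo@example.com commit -qm init
# Limit the checkout to .github, as the broken workflow effectively did:
git sparse-checkout set .github
test -e src/api/python/z3/z3.py || echo "src missing before the fix"
# The fix: expand the sparse checkout to include src
git sparse-checkout add src
test -e src/api/python/z3/z3.py && echo "src present after the fix"
```

Running this prints "src missing before the fix" and then "src present after the fix", mirroring what the added workflow step does in CI.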
## Related Files
- `a3-python.md` - The workflow definition (source of truth)
- `a3-python.lock.yml` - The compiled GitHub Actions workflow (auto-generated)
- `src/api/python/z3/*.py` - The Python files that should be analyzed

View file

@ -1,222 +0,0 @@
# Z3 Build Cache Workflow
This workflow builds Z3 and caches the built binaries for use by other workflows, eliminating the need for repeated builds and saving significant CI time.
## Purpose
The `build-z3-cache.yml` workflow:
1. Builds Z3 from source using the Python build system
2. Caches the built binaries and libraries
3. Can be triggered manually, on schedule, or on pushes to main
4. Can be called as a reusable workflow from other workflows
## Cache Contents
The workflow caches:
- `build/z3` - Main Z3 executable
- `build/libz3.so` and `build/libz3.a` - Z3 libraries
- `build/*.so` and `build/*.a` - Additional library files
- `build/python` - Python bindings
## Cache Key Format
The cache key is: `z3-build-{OS}-{git-sha}`
**How it works:**
- Each commit gets its own cache entry based on its git SHA
- When restoring, the `restore-keys` pattern allows falling back to caches from previous commits
- The most recent cache matching the `restore-keys` pattern will be used if an exact key match isn't found
- This means workflows typically restore from a recent build and save a new cache for the current commit
**Example:**
- A workflow on commit `abc123` will look for `z3-build-Linux-abc123`
- If not found, it falls back to the most recent `z3-build-Linux-*` cache
- After building, it saves a new cache with key `z3-build-Linux-abc123`
This ensures:
- Caches are OS-specific
- Each new commit can reuse builds from recent commits
- The cache stays relatively up-to-date with the repository
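The lookup order can be sketched as a toy shell model: try the exact key, otherwise fall back to a key matching the `restore-keys` prefix. This is only an illustration, not the actual `actions/cache` implementation (which picks the most recently created match), and the keys below are made up.

```shell
# Toy model of the cache lookup: exact key first, then a restore-keys
# prefix match. Keys are invented for illustration.
available="z3-build-Linux-abc123
z3-build-Linux-def456"
key="z3-build-Linux-fff999"
prefix="z3-build-Linux-"
if printf '%s\n' "$available" | grep -qx "$key"; then
  echo "exact hit: $key"
else
  fallback=$(printf '%s\n' "$available" | grep "^$prefix" | tail -n 1)
  echo "prefix fallback: $fallback"
fi
```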
## Using the Cached Build
### Option 1: Direct Cache Restore (Recommended)
Add these steps to your workflow before you need to use Z3:
```yaml
- name: Checkout code
  uses: actions/checkout@v6

- name: Setup Python
  uses: actions/setup-python@v6
  with:
    python-version: '3.x'

- name: Restore Z3 cache
  id: cache-z3
  uses: actions/cache/restore@v4
  with:
    path: |
      build/z3
      build/libz3.so
      build/libz3.a
      build/*.so
      build/*.a
      build/python
    key: z3-build-${{ runner.os }}-${{ github.sha }}
    restore-keys: |
      z3-build-${{ runner.os }}-

- name: Build Z3 (if not cached)
  if: steps.cache-z3.outputs.cache-hit != 'true'
  run: |
    python scripts/mk_make.py
    cd build
    make -j$(nproc)

- name: Verify Z3 is available
  run: |
    build/z3 --version
```
### Option 2: Call as Reusable Workflow
You can also call this workflow before your job:
```yaml
jobs:
  build-cache:
    uses: ./.github/workflows/build-z3-cache.yml

  your-job:
    needs: build-cache
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v6

      - name: Restore Z3 cache
        uses: actions/cache/restore@v4
        with:
          path: |
            build/z3
            build/libz3.so
            build/libz3.a
            build/*.so
            build/*.a
            build/python
          key: ${{ needs.build-cache.outputs.cache-key }}

      - name: Use Z3
        run: build/z3 --version
```
## Benefits
- **Time Savings**: Eliminates the 15-17 minute build whenever there is a cache hit
- **Resource Efficiency**: Reduces CI compute usage
- **Reliability**: Ensures consistent Z3 binary across workflow runs
- **Flexibility**: Works with manual triggers, schedules, and pushes
## Example: Soundness Bug Detector
The soundness bug detector workflow can now use cached Z3:
```yaml
name: Soundness Bug Detector

on:
  schedule:
    - cron: '0 4 * * *'

jobs:
  detect-bugs:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v6

      - name: Setup Python
        uses: actions/setup-python@v6
        with:
          python-version: '3.x'

      - name: Restore Z3 cache
        uses: actions/cache/restore@v4
        with:
          path: |
            build/z3
            build/libz3.so
            build/libz3.a
            build/*.so
            build/*.a
            build/python
          key: z3-build-${{ runner.os }}-${{ github.sha }}
          restore-keys: |
            z3-build-${{ runner.os }}-

      - name: Build Z3 (if not cached)
        run: |
          if [ ! -f build/z3 ]; then
            python scripts/mk_make.py
            cd build
            make -j$(nproc)
          fi

      - name: Test soundness issues
        run: |
          # Z3 is now available at build/z3
          build/z3 --version
          # ... rest of soundness testing logic
```
## Cache Maintenance
**GitHub Actions Cache Policies:**
- **Retention**: Caches are removed after 7 days without being accessed
- **Repository Limit**: Total cache size per repository is limited to 10GB
- **Eviction**: Least Recently Used (LRU) caches are evicted when the limit is reached
- **Size per Entry**: Each Z3 build cache is approximately 70MB (z3 executable, libz3 libraries, and Python bindings)
**This Workflow's Strategy:**
- **Daily Schedule**: Runs daily at 2 AM UTC to refresh the cache
- **Push Trigger**: Updates cache on pushes to master/main branches
- **Manual Trigger**: Can be triggered manually via workflow_dispatch
**Best Practices:**
- The daily schedule ensures at least one cache is always available (within 7-day retention)
- Multiple commits may share the same cache via the `restore-keys` fallback pattern
- If you need guaranteed cache availability for a specific commit, manually trigger the workflow
## Troubleshooting
### Cache miss on every run
If you're getting cache misses, check:
1. The cache key format matches between save and restore
2. The git SHA is consistent (use the same checkout before restore)
3. The runner OS matches
### Build not working after restore
After restoring from cache:
1. Verify all library files are present: `ls -la build/`
2. Check library dependencies: `ldd build/z3`
3. Ensure Python bindings are in the correct location
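The checks above can be wrapped in a small script. This is a sketch: `check_artifact` is a helper invented here (not part of any workflow), and the paths match the cache list earlier in this README.

```shell
# Post-restore sanity check; check_artifact is a helper made up for this
# sketch. Run from the repository root after actions/cache/restore.
check_artifact() {
  if [ -e "$1" ]; then
    echo "present: $1"
  else
    echo "MISSING: $1"
  fi
}
for f in build/z3 build/libz3.so build/libz3.a build/python; do
  check_artifact "$f"
done
# When the binary exists, also confirm its shared libraries resolve:
#   ldd build/z3 | grep "not found" && echo "unresolved libraries"
```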
### Need different build configuration
If you need a different Z3 build (e.g., with debug symbols), you can:
1. Create a separate cache workflow with a different cache key
2. Or build Z3 directly in your workflow with custom flags
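A separate debug-build cache might look like the fragment below. This is a hypothetical sketch, not part of the existing workflow: the `-debug` key segment is an invented convention, and the `mk_make.py` flag should be verified against `python scripts/mk_make.py --help` before use.

```yaml
# Hypothetical second cache entry for a debug build, distinguished only
# by its cache key. Flag and key naming are assumptions.
- name: Restore Z3 debug cache
  id: cache-z3-debug
  uses: actions/cache/restore@v4
  with:
    path: build
    key: z3-build-debug-${{ runner.os }}-${{ github.sha }}
    restore-keys: |
      z3-build-debug-${{ runner.os }}-

- name: Build Z3 with debug symbols (if not cached)
  if: steps.cache-z3-debug.outputs.cache-hit != 'true'
  run: |
    python scripts/mk_make.py --debug
    cd build
    make -j$(nproc)
```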
## Related Workflows
- `ci.yml` - Main CI workflow (could benefit from caching)
- `soundness-bug-detector.lock.yml` - Automated soundness testing
- `nightly.yml` - Nightly builds (uses its own build process)
## Implementation Note
This addresses the recommendation from [GitHub Discussion #8458](https://github.com/Z3Prover/z3/discussions/8458), which identified that the soundness bug detector and other workflows were unable to test properly due to missing Z3 binaries and build tool limitations.


@ -1,145 +0,0 @@
# DeepTest - Automated Test Case Generator
## Overview
DeepTest is an AI-powered GitHub Agentic Workflow that automatically generates comprehensive test cases for Z3 source files. It analyzes a given source file and creates high-quality, production-ready tests covering basic functionality, edge cases, error handling, and integration scenarios.
## Features
- **Comprehensive Test Coverage**: Generates tests for basic functionality, edge cases, error handling, and integration scenarios
- **Multiple Languages**: Supports C++, Python, Java, C#, and other Z3 API languages
- **Smart Analysis**: Uses Serena language server for Python code analysis and grep/glob for C++ analysis
- **Automated PR Creation**: Creates a pull request with generated tests automatically
- **Follows Z3 Conventions**: Generates tests that match existing Z3 testing patterns and style
- **Persistent Memory**: Uses cache memory to track progress across runs
## How to Use
### Workflow Dispatch (Manual Trigger)
1. Go to **Actions** → **Deeptest** in the GitHub repository
2. Click **Run workflow**
3. Enter the file path (e.g., `src/util/vector.h`)
4. Optionally link to an issue number
5. Click **Run workflow**
The workflow will:
1. Analyze the source file
2. Generate comprehensive tests
3. Create a pull request with the test files
4. Optionally add a comment to the linked issue with statistics and instructions
## What Gets Generated
DeepTest creates test files that include:
### For C++ Files
- Unit tests using Z3's testing framework
- Located in `src/test/test_<module_name>.cpp`
- Follows existing test patterns in the repository
- Includes necessary headers and setup/teardown code
### For Python Files
- Unit tests using `unittest` or `pytest`
- Located in `src/api/python/test_<module_name>.py`
- Follows patterns from existing Python tests
- Includes proper imports and test fixtures
## Test Categories
Generated tests cover:
1. **Basic Functionality**: Happy path scenarios with typical inputs
2. **Edge Cases**: Boundary values, empty inputs, zero/negative values, very large inputs
3. **Error Handling**: Invalid parameters, null pointers, exceptions, assertion violations
4. **Integration Tests**: Realistic SMT-LIB2 formulas, solver workflows, theory combinations
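A generated Python test file might be shaped like the sketch below. Everything here is illustrative: `simplify_coeffs` is a stand-in helper invented for this example (not a real Z3 API), and the integration category is omitted because it would require a built Z3.

```python
import unittest

# Stand-in helper, invented for illustration; not part of the Z3 Python API.
def simplify_coeffs(coeffs):
    """Drop zero coefficients from a list."""
    if not isinstance(coeffs, list):
        raise TypeError("coeffs must be a list")
    return [c for c in coeffs if c != 0]

class TestSimplifyCoeffs(unittest.TestCase):
    # Basic functionality: happy path with typical input
    def test_basic_functionality(self):
        self.assertEqual(simplify_coeffs([3, 0, 2]), [3, 2])

    # Edge cases: empty input and all-zero input
    def test_edge_cases(self):
        self.assertEqual(simplify_coeffs([]), [])
        self.assertEqual(simplify_coeffs([0, 0]), [])

    # Error handling: invalid parameter type
    def test_error_handling(self):
        with self.assertRaises(TypeError):
            simplify_coeffs("3 0 2")

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```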
## Output
After running, DeepTest will:
1. **Create a Pull Request** with:
- Title: `[DeepTest] Add comprehensive tests for <filename>`
- Generated test file(s)
- Detailed description of test coverage
- Instructions for running the tests
- Labels: `automated-tests`, `deeptest`
2. **Post a Comment** with:
- Test statistics (number of test cases by category)
- Coverage percentage
- Link to the created PR
- Instructions for running the tests
## Example Usage
### Example 1: Test a C++ utility file
Via workflow dispatch with file path: `src/util/vector.h`
### Example 2: Test a Python API file
Via workflow dispatch with file path: `src/api/python/z3/z3.py`
### Example 3: Link to an issue
- File path: `src/ast/ast.cpp`
- Issue number: `1234` (optional)
## Running Generated Tests
After the PR is merged, run the tests:
```bash
# Build Z3
python scripts/mk_make.py
cd build && make -j$(nproc)
# Run the new tests
./test-z3 [test-name-pattern]
```
## Configuration
The workflow is configured with:
- **Timeout**: 30 minutes
- **Permissions**: Read-only (safe-outputs handle writes)
- **Network**: Default curated allow-list
- **Tools**: Serena (Python), bash, edit, grep, glob, GitHub API
- **Cache**: Persistent memory enabled
## Customization
To modify the agent behavior without recompiling:
1. Edit `.github/agentics/deeptest.md`
2. Changes take effect immediately (no compilation needed)
3. For configuration changes in `.github/workflows/deeptest.md`, run:
```bash
gh aw compile deeptest
```
## Limitations
- Does not modify existing source files (only creates new test files)
- Focuses on public APIs and functions
- May not cover all internal implementation details
- Generated tests should be reviewed before merging
## Security
- **Read-only permissions** for the main job
- **Safe outputs** handle all write operations (PR creation, comments)
- **Network access** restricted to curated allow-list
- **No secrets exposed** to the AI agent
## Support
If DeepTest encounters issues or needs additional tools:
- It will automatically create an issue labeled `missing-tool`
- The issue will expire after 1 week if not addressed
- Check the workflow run logs for detailed error information
## See Also
- [GitHub Agentic Workflows Documentation](../.github/aw/github-agentic-workflows.md)
- [Z3 Testing Guide](../../README.md)
- [Z3 Build Instructions](../../README-CMake.md)