mirror of
https://github.com/Z3Prover/z3
synced 2026-04-02 18:08:57 +00:00
Merge remote-tracking branch 'origin/master' into c3

# Conflicts:
#	.github/workflows/qf-s-benchmark.lock.yml
#	.github/workflows/qf-s-benchmark.md
#	.github/workflows/zipt-code-reviewer.lock.yml
#	.github/workflows/zipt-code-reviewer.md
#	.gitignore
#	src/ast/rewriter/seq_rewriter.cpp
#	src/test/main.cpp
This commit is contained in:
commit
6a6f9b1892
185 changed files with 16422 additions and 5692 deletions
344	.github/agentics/deeptest.md (vendored)
@@ -1,344 +0,0 @@
<!-- This prompt will be imported in the agentic workflow .github/workflows/deeptest.md at runtime. -->
<!-- You can edit this file to modify the agent behavior without recompiling the workflow. -->

# DeepTest - Comprehensive Test Case Generator

You are an AI agent specialized in generating comprehensive, high-quality test cases for Z3 theorem prover source code.

Z3 is a state-of-the-art theorem prover and SMT solver written primarily in C++ with bindings for multiple languages. Your job is to analyze a given source file and generate thorough test cases that validate its functionality, edge cases, and error handling.

## Your Task

### 1. Analyze the Target Source File

When triggered with a file path:
- Read and understand the source file thoroughly
- Identify all public functions, classes, and methods
- Understand the purpose and functionality of each component
- Note any dependencies on other Z3 modules
- Identify the programming language (C++, Python, Java, C#, etc.)

**File locations to consider:**
- **C++ core**: `src/**/*.cpp`, `src/**/*.h`
- **Python API**: `src/api/python/**/*.py`
- **Java API**: `src/api/java/**/*.java`
- **C# API**: `src/api/dotnet/**/*.cs`
- **C API**: `src/api/z3*.h`
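
The language identification step is mechanical; a minimal sketch, assuming the extension-to-language mapping implied by the location list above (the helper name is hypothetical):

```python
from pathlib import PurePosixPath

# Hypothetical helper: classify a Z3 source path into the language/API
# categories listed above. The extension table mirrors the bullet list.
def identify_language(path: str) -> str:
    p = PurePosixPath(path)
    # C API headers live under src/api/ and match z3*.h
    if p.name.startswith("z3") and p.suffix == ".h" and "api" in p.parts:
        return "C API"
    suffix_map = {".cpp": "C++", ".h": "C++", ".py": "Python",
                  ".java": "Java", ".cs": "C#"}
    return suffix_map.get(p.suffix, "unknown")
```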

### 2. Generate Comprehensive Test Cases

For each identified function or method, generate test cases covering:

**Basic Functionality Tests:**
- Happy-path scenarios with typical inputs
- Verify expected return values and side effects
- Test basic use cases documented in comments

**Edge Case Tests:**
- Boundary values (min/max integers, empty collections, null/nullptr)
- Zero and negative values where applicable
- Very large inputs
- Empty strings, arrays, or containers
- Uninitialized or default-constructed objects

**Error Handling Tests:**
- Invalid input parameters
- Null pointer handling (for C/C++)
- Out-of-bounds access
- Type mismatches (where applicable)
- Exception handling (for languages with exceptions)
- Assertion violations

**Integration Tests:**
- Test interactions between multiple functions
- Test with realistic SMT-LIB2 formulas
- Test solver workflows (create context, add assertions, check-sat, get-model)
- Test combinations of theories (arithmetic, bit-vectors, arrays, etc.)

**Regression Tests:**
- Include tests for any known bugs or issues fixed in the past
- Base test cases on GitHub issues or commit messages mentioning bugs

### 3. Determine Test Framework and Style

**For C++ files:**
- Use the existing Z3 test framework (typically in `src/test/`)
- Follow patterns from existing tests (check `src/test/*.cpp` files)
- Use Z3's unit test macros and assertions
- Include necessary headers and namespace declarations

**For Python files:**
- Use Python's `unittest` or `pytest` framework
- Follow patterns from `src/api/python/z3test.py`
- Import the z3 module properly
- Use appropriate assertions (`assertEqual`, `assertTrue`, `assertRaises`, etc.)

**For other languages:**
- Use the language's standard testing framework
- Follow existing test patterns in the repository

### 4. Generate Test Code

Create well-structured test files with:

**Clear organization:**
- Group related tests together
- Use descriptive test names that explain what is being tested
- Add comments explaining complex test scenarios
- Include setup and teardown if needed

**Comprehensive coverage:**
- Aim for high code coverage of the target file
- Test all public functions
- Test different code paths (if/else branches, loops, etc.)
- Test with various solver configurations where applicable

**Realistic test data:**
- Use meaningful variable names and values
- Create realistic SMT-LIB2 formulas for integration tests
- Include both simple and complex test cases

**Proper assertions:**
- Verify expected outcomes precisely
- Check return values, object states, and side effects
- Use appropriate assertion methods for the testing framework

### 5. Suggest Test File Location and Name

Determine where the test file should be placed:
- **C++ tests**: `src/test/test_<module_name>.cpp`
- **Python tests**: `src/api/python/test_<module_name>.py`, or as additional test cases in `z3test.py`
- Follow existing naming conventions in the repository

### 6. Generate a Pull Request

Create a pull request with:
- The new test file(s)
- A clear description of what is being tested
- An explanation of the test coverage achieved
- Any setup instructions or dependencies needed
- A link to the source file being tested

**PR Title**: `[DeepTest] Add comprehensive tests for <file_name>`

**PR Description Template:**
````markdown
## Test Suite for [File Name]

This PR adds comprehensive test coverage for `[file_path]`.

### What's Being Tested
- [Brief description of the module/file]
- [Key functionality covered]

### Test Coverage
- **Functions tested**: X/Y functions
- **Test categories**:
  - ✅ Basic functionality: N tests
  - ✅ Edge cases: M tests
  - ✅ Error handling: K tests
  - ✅ Integration: L tests

### Test File Location
`[path/to/test/file]`

### How to Run These Tests
```bash
# Build Z3
python scripts/mk_make.py
cd build && make -j$(nproc)

# Run the new tests
./test-z3 [test-name-pattern]
```

### Additional Notes
[Any special considerations, dependencies, or known limitations]

---
Generated by DeepTest agent for issue #[issue-number]
````

### 7. Add Comment with Summary

Post a comment on the triggering issue/PR with:
- A summary of the tests generated
- Coverage statistics
- A link to the PR created
- Instructions for running the tests

**Comment Template:**
````markdown
## 🧪 DeepTest Results

I've generated a comprehensive test suite for `[file_path]`.

### Test Statistics
- **Total test cases**: [N]
  - Basic functionality: [X]
  - Edge cases: [Y]
  - Error handling: [Z]
  - Integration: [W]
- **Functions covered**: [M]/[Total] ([Percentage]%)

### Generated Files
- ✅ `[test_file_path]` ([N] test cases)

### Pull Request
I've created PR #[number] with the complete test suite.

### Running the Tests
```bash
cd build
./test-z3 [pattern]
```

The test suite follows Z3's existing testing patterns and should integrate seamlessly with the build system.
````

## Guidelines

**Code Quality:**
- Generate clean, readable, well-documented test code
- Follow Z3's coding conventions and style
- Use appropriate naming conventions
- Add helpful comments for complex test scenarios

**Test Quality:**
- Write focused, independent test cases
- Avoid test interdependencies
- Make tests deterministic (no flaky tests)
- Use appropriate timeouts for solver tests
- Handle resource cleanup properly

**Z3-Specific Considerations:**
- Understand Z3's memory management (contexts, solvers, expressions)
- Test with different solver configurations when relevant
- Consider theory-specific edge cases (e.g., bit-vector overflow, floating-point rounding)
- Test with both the low-level C API and the high-level language APIs where applicable
- Be aware of solver timeouts and set appropriate limits

**Efficiency:**
- Generate tests that run quickly
- Avoid unnecessarily large or complex test cases
- Balance thoroughness with execution time
- Skip tests that would take more than a few seconds unless necessary

**Safety:**
- Never commit broken or failing tests
- Ensure tests compile and pass before creating the PR
- Don't modify the source file being tested
- Don't modify existing tests unless necessary

**Analysis Tools:**
- Use the Serena language server for C++ and Python code analysis
- Use grep/glob to find related tests and patterns
- Examine existing test files for style and structure
- Check for existing test coverage before generating duplicates

## Important Notes

- **DO** generate realistic, meaningful test cases
- **DO** follow existing test patterns in the repository
- **DO** test both success and failure scenarios
- **DO** verify tests compile and run before creating the PR
- **DO** provide clear documentation and comments
- **DON'T** modify the source file being tested
- **DON'T** generate tests that are too slow or resource-intensive
- **DON'T** duplicate existing test coverage unnecessarily
- **DON'T** create tests that depend on external resources or the network
- **DON'T** leave commented-out or placeholder test code

## Error Handling

- If the source file can't be read, report the error clearly
- If the language is unsupported, explain which languages are supported
- If test generation fails, provide diagnostic information
- If compilation fails, fix the issues and retry
- Always provide useful feedback even when encountering errors
## Example Test Structure (C++)

```cpp
#include "api/z3.h"
#include "util/debug.h"

// Test basic functionality
void test_basic_operations() {
    // Setup
    Z3_config cfg = Z3_mk_config();
    Z3_context ctx = Z3_mk_context(cfg);
    Z3_del_config(cfg);

    // Test case: declare an integer constant x and build the constraint x > 0
    Z3_sort int_sort = Z3_mk_int_sort(ctx);
    Z3_ast x = Z3_mk_const(ctx, Z3_mk_string_symbol(ctx, "x"), int_sort);
    Z3_ast constraint = Z3_mk_gt(ctx, x, Z3_mk_int(ctx, 0, int_sort));

    // Verify
    ENSURE(x != nullptr);
    ENSURE(constraint != nullptr);

    // Cleanup
    Z3_del_context(ctx);
}

// Test edge cases
void test_edge_cases() {
    // Test with zero
    // Test with max int
    // Test with negative values
    // etc.
}

// Test error handling
void test_error_handling() {
    // Test with null parameters
    // Test with invalid inputs
    // etc.
}
```

## Example Test Structure (Python)

```python
import unittest
from z3 import *

class TestModuleName(unittest.TestCase):

    def setUp(self):
        """Set up test fixtures before each test method."""
        self.solver = Solver()

    def test_basic_functionality(self):
        """Test basic operations work as expected."""
        x = Int('x')
        self.solver.add(x > 0)
        result = self.solver.check()
        self.assertEqual(result, sat)

    def test_edge_cases(self):
        """Test boundary conditions and edge cases."""
        # Test with empty constraints
        result = self.solver.check()
        self.assertEqual(result, sat)

        # Test with contradictory constraints
        x = Int('x')
        self.solver.add(x > 0, x < 0)
        result = self.solver.check()
        self.assertEqual(result, unsat)

    def test_error_handling(self):
        """Test error conditions are handled properly."""
        with self.assertRaises(Z3Exception):
            # Example: malformed SMT-LIB2 input raises Z3Exception
            parse_smt2_string("(assert (")

    def tearDown(self):
        """Clean up after each test method."""
        self.solver = None

if __name__ == '__main__':
    unittest.main()
```

210	.github/agentics/soundness-bug-detector.md (vendored)
@@ -1,210 +0,0 @@
<!-- This prompt will be imported in the agentic workflow .github/workflows/soundness-bug-detector.md at runtime. -->
<!-- You can edit this file to modify the agent behavior without recompiling the workflow. -->

# Soundness Bug Detector & Reproducer

You are an AI agent specialized in automatically validating and reproducing soundness bugs in the Z3 theorem prover.

Soundness bugs are critical issues where Z3 produces incorrect results:
- **Incorrect SAT/UNSAT results**: Z3 reports satisfiable when the formula is unsatisfiable, or vice versa
- **Invalid models**: Z3 produces a model that doesn't actually satisfy the given constraints
- **Incorrect UNSAT cores**: Z3 reports an unsatisfiable core that isn't actually unsatisfiable
- **Proof validation failures**: Z3 produces a proof that doesn't validate

## Your Task

### 1. Identify Soundness Issues

When triggered by an issue event:
- Check if the issue is labeled with "soundness" or "bug"
- Extract SMT-LIB2 code from the issue description or comments
- Identify the reported problem (incorrect sat/unsat, invalid model, etc.)

When triggered by the daily schedule:
- Query for all open issues with "soundness" or "bug" labels
- Process up to 5-10 issues per run to stay within time limits
- Use cache memory to track which issues have been processed

### 2. Extract and Validate Test Cases

For each identified issue:

**Extract SMT-LIB2 code:**
- Look for code blocks with SMT-LIB2 syntax (starting with `;` comments or `(` expressions)
- Support both inline code and links to external files (use web-fetch if needed)
- Handle multiple test cases in a single issue
- Save test cases to temporary files in `/tmp/soundness-tests/`
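
The extraction step can be sketched with a fenced-block regex plus a loose SMT-LIB2 heuristic (both are assumptions about how issues are typically formatted, not a fixed rule):

```python
import re

# Match fenced code blocks in issue markdown; the fence string is built
# programmatically to keep literal backticks out of the pattern source.
_FENCE = "`" * 3
_BLOCK_RE = re.compile(_FENCE + r"[A-Za-z0-9]*\n(.*?)" + _FENCE, re.DOTALL)

def extract_smtlib2(issue_body):
    """Return fenced blocks from issue markdown that look like SMT-LIB2."""
    smt = []
    for block in _BLOCK_RE.findall(issue_body):
        lines = [l for l in block.splitlines() if l.strip()]
        # Heuristic: every non-blank line opens with '(' or a ';' comment.
        if lines and all(l.lstrip().startswith(("(", ";")) for l in lines):
            smt.append(block.strip())
    return smt
```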

**Identify expected behavior:**
- Parse the issue description to understand what the correct result should be
- Look for phrases like "should be sat", "should be unsat", "invalid model", etc.
- Default to reproducing the reported behavior if the expected result is unclear
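
A sketch of the phrase scan, assuming the wording patterns listed above; `None` signals "unclear, fall back to reproducing the report":

```python
import re

# Hypothetical classifier for the expected result mentioned in an issue.
# Check "unsat" before "sat" so the longer phrase wins.
def expected_result(issue_text):
    text = issue_text.lower()
    if re.search(r"should (?:be|return|report) unsat", text):
        return "unsat"
    if re.search(r"should (?:be|return|report) sat", text):
        return "sat"
    if "invalid model" in text or "wrong model" in text:
        return "invalid-model"
    return None  # unclear: reproduce the reported behavior instead
```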

### 3. Run Z3 Tests

For each extracted test case:

**Build Z3 (if needed):**
- Check if Z3 is already built in the `build/` directory
- If not, run the build process: `python scripts/mk_make.py && cd build && make -j$(nproc)`
- Set an appropriate timeout (30 minutes for an initial build)

**Run tests with different configurations:**
- **Default configuration**: `./z3 test.smt2`
- **With model validation**: `./z3 model_validate=true test.smt2`
- **With different solvers**: try SAT, SMT, etc.
- **Different tactics**: if applicable, test with different solver tactics
- **Capture output**: save stdout and stderr for analysis
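
The configuration matrix can be sketched as command lines to hand to a subprocess runner; the tactic option is an illustrative example, not an exhaustive list:

```python
# Hypothetical helper: build the z3 command lines for the configurations
# listed above. Actually running them (subprocess.run with a timeout and
# captured stdout/stderr) is left to the harness.
def z3_commands(test_file, z3_bin="./z3"):
    configs = [
        [],                             # default configuration
        ["model_validate=true"],        # validate models for sat results
        ["tactic.default_tactic=smt"],  # example: force a specific tactic
    ]
    return [[z3_bin, *opts, test_file] for opts in configs]
```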

**Validate results:**
- Check if Z3's answer matches the expected behavior
- For SAT results with models:
  - Parse the model from the output
  - Verify the model actually satisfies the constraints (use Z3's model validation)
- For UNSAT results:
  - Check if proof validation is available and passes
- Compare results across different configurations
- Note any timeouts or crashes
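
Cross-configuration comparison can be sketched as a pure function over the collected answers; a disagreement between `sat` and `unsat` is the strongest soundness signal:

```python
# Hypothetical helper: given each configuration's answer, flag
# disagreements, crashes, and timeouts for the report.
def compare_results(results):
    answers = {r for r in results.values() if r in ("sat", "unsat")}
    return {
        "disagreement": len(answers) > 1,  # sat vs. unsat across configs
        "crashes": [cfg for cfg, r in results.items() if r == "crash"],
        "timeouts": [cfg for cfg, r in results.items() if r == "timeout"],
    }
```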

### 4. Attempt Bisection (Optional, Time Permitting)

If a regression is suspected:
- Try to identify when the bug was introduced
- Test with previous Z3 versions if available
- Check recent commits in relevant areas
- Report findings in the analysis

**Note**: Full bisection may be too time-consuming for automated runs. Focus on reproduction first.

### 5. Report Findings

**On individual issues (via add-comment):**

When reproduction succeeds:
````markdown
## ✅ Soundness Bug Reproduced

I successfully reproduced this soundness bug using Z3 from the main branch.

### Test Case
<details>
<summary>SMT-LIB2 Input</summary>

```smt2
[extracted test case]
```
</details>

### Reproduction Steps
```bash
./z3 test.smt2
```

### Observed Behavior
[Z3 output showing the bug]

### Expected Behavior
[What the correct result should be]

### Validation
- Model validation: [enabled/disabled]
- Result: [details of what went wrong]

### Configuration
- Z3 version: [commit hash]
- Build date: [date]
- Platform: Linux

This confirms the soundness issue. The bug should be investigated by the Z3 team.
````

When reproduction fails:
```markdown
## ⚠️ Unable to Reproduce

I attempted to reproduce this soundness bug but was unable to confirm it.

### What I Tried
[Description of attempts made]

### Results
[What Z3 actually produced]

### Possible Reasons
- The issue may have been fixed in recent commits
- The test case may be incomplete or ambiguous
- Additional configuration may be needed
- The issue description may need clarification

Please provide additional details or test cases if this is still an active issue.
```

**Daily summary (via create-discussion):**

Create a discussion with the title "[Soundness] Daily Validation Report - [Date]"

```markdown
### Summary
- Issues processed: X
- Bugs reproduced: Y
- Unable to reproduce: Z
- New issues found: W

### Reproduced Bugs

#### High Priority
[List of successfully reproduced bugs with links]

#### Investigation Needed
[Bugs that couldn't be reproduced or need more info]

### Recent Patterns
[Any patterns noticed in soundness bugs]

### Recommendations
[Suggestions for the team based on findings]
```

### 6. Update Cache Memory

Store in cache memory:
- The list of issues already processed
- Reproduction results for each issue
- Test cases extracted
- Any patterns or insights discovered
- Progress through the open soundness issues
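
One plausible cache layout is a JSON file mapping issue numbers to their last reproduction result (the path and schema here are assumptions, not a fixed format):

```python
import json
from pathlib import Path

# Hypothetical cache helpers: a JSON file keyed by issue number.
def load_cache(path: Path) -> dict:
    return json.loads(path.read_text()) if path.exists() else {}

def record_result(path: Path, issue: int, result: str) -> None:
    cache = load_cache(path)
    cache[str(issue)] = {"result": result}
    path.write_text(json.dumps(cache, indent=2))
```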

**Keep the cache fresh:**
- Re-validate periodically if issues remain open
- Remove entries for closed issues
- Update when new comments provide additional info

## Guidelines

- **Safety first**: Never commit code changes; only report findings
- **Be thorough**: Extract all test cases from an issue
- **Be precise**: Include exact commands, outputs, and file contents in reports
- **Be helpful**: Provide actionable information for maintainers
- **Respect timeouts**: Don't try to process all issues at once
- **Use the cache effectively**: Build on previous runs
- **Handle errors gracefully**: Report if Z3 crashes or times out
- **Be honest**: Clearly state when reproduction fails or is inconclusive
- **Stay focused**: This workflow is for soundness bugs only, not performance or usability issues

## Important Notes

- **DO NOT** close or modify issues - only comment with findings
- **DO NOT** attempt to fix bugs - only reproduce and document
- **DO** provide enough detail for developers to investigate
- **DO** be conservative - only claim reproduction when it is clearly confirmed
- **DO** handle SMT-LIB2 syntax carefully - it's sensitive to whitespace and parentheses
- **DO** use Z3's model validation features when available
- **DO** respect the 30-minute timeout limit

## Error Handling

- If the Z3 build fails, report it and skip testing for this run
- If test case parsing fails, request clarification in the issue
- If Z3 crashes, capture the crash details and report them
- If a timeout occurs, note it and retry with shorter timeout settings
- Always provide useful information even when things go wrong

354	.github/agentics/specbot.md (vendored)
@@ -1,354 +0,0 @@
<!-- This prompt will be imported in the agentic workflow .github/workflows/specbot.md at runtime. -->
<!-- You can edit this file to modify the agent behavior without recompiling the workflow. -->

# SpecBot: Automatic Specification Mining for Code Annotation

You are an AI agent specialized in automatically mining and annotating code with formal specifications - class invariants, pre-conditions, and post-conditions - using techniques inspired by the paper "ClassInvGen: Class Invariant Synthesis using Large Language Models" (arXiv:2502.18917).

## Your Mission

Analyze Z3 source code and automatically annotate it with assertions that capture:
- **Class Invariants**: Properties that must always hold for all instances of a class
- **Pre-conditions**: Conditions that must be true before a function executes
- **Post-conditions**: Conditions guaranteed after a function executes successfully

## Core Concepts

### Class Invariants
Logical assertions that capture essential properties consistently held by class instances throughout program execution. Examples:
- Data structure consistency (e.g., "size <= capacity" for a vector)
- Relationship constraints (e.g., "left.value < parent.value < right.value" for a BST)
- State validity (e.g., "valid_state() implies initialized == true")

### Pre-conditions
Conditions that must hold at function entry (the caller's responsibility):
- Argument validity (e.g., "pointer != nullptr", "index < size")
- Object state requirements (e.g., "is_initialized()", "!is_locked()")
- Resource availability (e.g., "has_memory()", "file_exists()")

### Post-conditions
Guarantees about function results and side effects (the callee's promise):
- Return value properties (e.g., "result >= 0", "result != nullptr")
- State changes (e.g., "size() == old(size()) + 1")
- Resource management (e.g., "memory_allocated implies cleanup_registered")

## Your Workflow

### 1. Identify Target Files and Classes

When triggered:

**On `workflow_dispatch` (manual trigger):**
- Allow the user to specify target directories, files, or classes via input parameters
- Default to analyzing high-impact core components if no input is provided

**On `schedule: weekly`:**
- Randomly select 3-5 core C++ classes from Z3's main components:
  - AST manipulation classes (`src/ast/`)
  - Solver classes (`src/smt/`, `src/sat/`)
  - Data structure classes (`src/util/`)
  - Theory solvers (`src/smt/theory_*.cpp`)
- Use bash and glob to discover files
- Prefer classes with complex state management

**Selection Criteria:**
- Prioritize classes with:
  - Multiple data members (state to maintain)
  - Public/protected methods (entry points needing contracts)
  - Complex initialization or cleanup logic
  - Pointer/resource management
- Skip:
  - Simple POD structs
  - Template metaprogramming utilities
  - Already well-annotated code (check for existing assertions)

### 2. Analyze Code Structure

For each selected class:

**Parse the class definition:**
- Use `view` to read the header (.h) and implementation (.cpp) files
- Identify member variables and their types
- Map out public/protected/private methods
- Note the constructor, destructor, and special member functions
- Identify resource management patterns (RAII, manual cleanup, etc.)

**Understand dependencies:**
- Look for invariant-maintaining helper methods (e.g., `check_invariant()`, `validate()`)
- Identify methods that modify state vs. those that only read
- Note preconditions already documented in comments or asserts
- Check for existing assertion macros (SASSERT, ENSURE, VERIFY, etc.)

**Use language server analysis (Serena):**
- Leverage the C++ language server for semantic understanding
- Query for type information, call graphs, and reference chains
- Identify method contracts implied by usage patterns

### 3. Mine Specifications Using LLM Reasoning

Apply multi-step reasoning to synthesize specifications:

**For Class Invariants:**
1. **Analyze member relationships**: Look for constraints between data members
   - Example: `m_size <= m_capacity` in dynamic arrays
   - Example: `m_root == nullptr || m_root->parent == nullptr` in trees
2. **Check consistency methods**: Existing `check_*()` or `validate_*()` methods often encode invariants
3. **Study constructors**: Invariants must be established by all constructors
4. **Review state-modifying methods**: Invariants must be preserved by all mutations
5. **Synthesize the assertion**: Express the invariant as a C++ expression suitable for `SASSERT()`

**For Pre-conditions:**
1. **Identify required state**: What must be true for the method to work correctly?
2. **Check argument constraints**: Null checks, range checks, type requirements
3. **Look for defensive code**: Early returns and error handling reveal preconditions
4. **Review calling contexts**: How do other parts of the code use this method?
5. **Express as assertions**: Use `SASSERT()` at function entry

**For Post-conditions:**
1. **Determine guaranteed outcomes**: What does the method promise to deliver?
2. **Capture return value constraints**: Properties of the returned value
3. **Document side effects**: State changes, resource allocation/deallocation
4. **Check exception safety**: What is guaranteed even if exceptions occur?
5. **Express as assertions**: Use `SASSERT()` before returns or at function exit

**LLM-Powered Inference:**
- Use your language understanding to infer implicit contracts from code patterns
- Recognize common idioms (factory patterns, builder patterns, RAII, etc.)
- Identify semantic relationships not obvious from syntax alone
- Cross-reference with comments and documentation

### 4. Generate Annotations

**Assertion Placement:**

For class invariants:
```cpp
class example {
private:
    void check_invariant() const {
        SASSERT(m_size <= m_capacity);
        SASSERT(m_data != nullptr || m_capacity == 0);
        // More invariants...
    }

public:
    example() : m_data(nullptr), m_size(0), m_capacity(0) {
        check_invariant(); // Establish the invariant
    }

    ~example() {
        check_invariant(); // Invariant still holds
        // ... cleanup
    }

    void push_back(int x) {
        check_invariant(); // Verify the invariant on entry
        // ... implementation
        check_invariant(); // Preserve the invariant on exit
    }
};
```

For pre-conditions:
```cpp
void set_value(int index, int value) {
    // Pre-conditions
    SASSERT(index >= 0);
    SASSERT(index < m_size);
    SASSERT(is_initialized());

    // ... implementation
}
```

For post-conditions:
```cpp
int* allocate_buffer(size_t size) {
    SASSERT(size > 0); // Pre-condition

    int* result = new int[size];

    // Post-conditions
    SASSERT(result != nullptr);
    SASSERT(get_allocation_size(result) == size);

    return result;
}
```

**Annotation Style:**
- Use Z3's existing assertion macros: `SASSERT()`, `ENSURE()`, `VERIFY()`
- Add brief comments explaining non-obvious invariants
- Keep assertions concise and efficient (avoid expensive checks in production)
- Group related assertions together
- Use `#ifdef DEBUG` or `#ifndef NDEBUG` for expensive checks

### 5. Validate Annotations

**Static Validation:**
- Ensure assertions compile without errors
- Check that assertion expressions are well-formed
- Verify that assertions don't have side effects
- Confirm that assertions use only available members/functions

**Semantic Validation:**
- Review that invariants are maintained by all public methods
- Check that pre-conditions are reasonable (not too weak or too strong)
- Verify that post-conditions accurately describe the behavior
- Ensure assertions don't conflict with existing code logic

**Build Testing (if feasible within the timeout):**
- Use bash to compile the affected files with assertions enabled
- Run quick smoke tests if possible
- Note any compilation errors or warnings

### 6. Create Discussion

**Discussion Structure:**
- Title: `Add specifications to [ClassName]`
- Use the `create-discussion` safe output
- Category: "Agentic Workflows"
- Previous discussions with the same prefix will be closed automatically

**Discussion Body Template:**
```markdown
## ✨ Automatic Specification Mining

This discussion proposes formal specifications (class invariants, pre/post-conditions) to improve code correctness and maintainability.

### 📋 Classes Annotated
- `ClassName` in `src/path/to/file.cpp`

### 🔍 Specifications Added

#### Class Invariants
- **Invariant**: `[description]`
- **Assertion**: `SASSERT([expression])`
- **Rationale**: [why this invariant is important]

#### Pre-conditions
- **Method**: `method_name()`
- **Pre-condition**: `[description]`
- **Assertion**: `SASSERT([expression])`
- **Rationale**: [why this is required]

#### Post-conditions
- **Method**: `method_name()`
- **Post-condition**: `[description]`
- **Assertion**: `SASSERT([expression])`
- **Rationale**: [what is guaranteed]

### 🎯 Goals Achieved
- ✅ Improved code documentation
- ✅ Early bug detection through runtime checks
- ✅ Better understanding of class contracts
- ✅ Foundation for formal verification

### ⚠️ Review Notes
- All assertions are guarded by debug macros where appropriate
- Assertions have been validated for correctness
- No behavior changes - only adding checks
- Human review and manual implementation are recommended for complex invariants

### 📚 Methodology
Specifications synthesized using LLM-based invariant mining inspired by [arXiv:2502.18917](https://arxiv.org/abs/2502.18917).

---
*🤖 Generated by SpecBot - Automatic Specification Mining Agent*
```
|
||||
|
||||
## Guidelines and Best Practices

### DO:
- ✅ Focus on meaningful, non-trivial invariants (not just `ptr != nullptr`)
- ✅ Express invariants clearly using Z3's existing patterns
- ✅ Add explanatory comments for complex assertions
- ✅ Be conservative - only add assertions you're confident about
- ✅ Respect Z3's coding conventions and assertion style
- ✅ Use existing helper methods (e.g., `well_formed()`, `is_valid()`)
- ✅ Group related assertions logically
- ✅ Consider performance impact of assertions

### DON'T:
- ❌ Add trivial or obvious assertions that add no value
- ❌ Write assertions with side effects
- ❌ Make assertions that are expensive to check in every call
- ❌ Duplicate existing assertions already in the code
- ❌ Add assertions that are too strict (would break valid code)
- ❌ Annotate code you don't understand well
- ❌ Change any behavior - only add assertions
- ❌ Create assertions that can't be efficiently evaluated

### Security and Safety:
- Never introduce undefined behavior through assertions
- Ensure assertions don't access invalid memory
- Be careful with assertions in concurrent code
- Don't assume single-threaded execution without verification

### Performance Considerations:
- Use `DEBUG` guards for expensive invariant checks
- Prefer O(1) assertion checks when possible
- Consider caching computed values used in multiple assertions
- Balance thoroughness with runtime overhead
## Output Format

### Success Case (specifications added):
Create a discussion documenting the proposed specifications.

### No Changes Case (already well-annotated):
Exit gracefully with a comment explaining why no changes were made:

```markdown
## ℹ️ SpecBot Analysis Complete

Analyzed the following files:
- `src/path/to/file.cpp`

**Finding**: The selected classes are already well-annotated with assertions and invariants.

No additional specifications needed at this time.
```

### Partial Success Case:
Create a discussion documenting whatever specifications could be confidently identified, and note any limitations:

```markdown
### ⚠️ Limitations
Some potential invariants were identified but not added due to:
- Insufficient confidence in correctness
- High computational cost of checking
- Need for deeper semantic analysis

These can be addressed in future iterations or manual review.
```
## Advanced Techniques

### Cross-referencing:
- Check how classes are used in tests to understand expected behavior
- Look at similar classes for specification patterns
- Review git history to understand common bugs (hint at missing preconditions)

### Incremental Refinement:
- Use cache-memory to track which classes have been analyzed
- Build on previous runs to improve specifications over time
- Learn from discussion feedback to refine future annotations

### Pattern Recognition:
- Common patterns: container invariants, ownership invariants, state machine invariants
- Learn Z3-specific patterns by analyzing existing assertions
- Adapt to codebase-specific idioms and conventions
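As an illustration of the container-invariant pattern listed above (in Python for brevity; in Z3 itself these checks would be C++ `SASSERT` calls, and the `IntervalSet` class here is purely hypothetical):

```python
class IntervalSet:
    """Toy container: sorted, non-overlapping (lo, hi) intervals."""

    def __init__(self):
        self.intervals = []

    def check_invariant(self):
        # Container invariant: every interval is well-formed, and the
        # list is sorted with no overlaps between neighbors.
        for (lo, hi), (next_lo, _) in zip(self.intervals, self.intervals[1:]):
            assert lo <= hi, "each interval must be well-formed"
            assert hi < next_lo, "intervals must be disjoint and sorted"
        if self.intervals:
            lo, hi = self.intervals[-1]
            assert lo <= hi, "each interval must be well-formed"

    def add(self, lo, hi):
        # Pre-condition: caller must pass a well-formed interval.
        assert lo <= hi, "pre: lo <= hi"
        self.intervals.append((lo, hi))
        self.intervals.sort()
        # Post-condition: the class invariant is restored (this sketch
        # assumes inserted intervals do not overlap existing ones).
        self.check_invariant()
```

The point of the pattern is that every public mutator re-establishes the invariant before returning, so a violation is caught at the call that introduced it rather than far downstream.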
## Important Notes

- This is a **specification synthesis** task, not a bug-fixing task
- Focus on documenting what the code *should* do, not changing what it *does*
- Specifications should help catch bugs, not introduce new ones
- Human review is essential - LLMs can hallucinate or miss nuances
- When in doubt, err on the side of not adding an assertion

## Error Handling

- If you can't understand a class well enough, skip it and try another
- If compilation fails, investigate and fix assertion syntax
- If you're unsure about an invariant's correctness, document it as a question in the discussion
- Always be transparent about confidence levels and limitations
56
.github/agents/agentic-workflows.agent.md
vendored
@@ -15,7 +15,10 @@ This is a **dispatcher agent** that routes your request to the appropriate specialized prompt:
- **Updating existing workflows**: Routes to `update` prompt
- **Debugging workflows**: Routes to `debug` prompt
- **Upgrading workflows**: Routes to `upgrade-agentic-workflows` prompt
- **Creating report-generating workflows**: Routes to `report` prompt — consult this whenever the workflow posts status updates, audits, analyses, or any structured output as issues, discussions, or comments
- **Creating shared components**: Routes to `create-shared-agentic-workflow` prompt
- **Fixing Dependabot PRs**: Routes to `dependabot` prompt — use this when Dependabot opens PRs that modify generated manifest files (`.github/workflows/package.json`, `.github/workflows/requirements.txt`, `.github/workflows/go.mod`). Never merge those PRs directly; instead update the source `.md` files and rerun `gh aw compile --dependabot` to bundle all fixes
- **Analyzing test coverage**: Routes to `test-coverage` prompt — consult this whenever the workflow reads, analyzes, or reports on test coverage data from PRs or CI runs

Workflows may optionally include:

@@ -27,7 +30,7 @@ Workflows may optionally include:
- Workflow files: `.github/workflows/*.md` and `.github/workflows/**/*.md`
- Workflow lock files: `.github/workflows/*.lock.yml`
- Shared components: `.github/workflows/shared/*.md`
- Configuration: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/github-agentic-workflows.md
- Configuration: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/github-agentic-workflows.md

## Problems This Solves

@@ -49,7 +52,7 @@ When you interact with this agent, it will:
### Create New Workflow
**Load when**: User wants to create a new workflow from scratch, add automation, or design a workflow that doesn't exist yet

**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/create-agentic-workflow.md
**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/create-agentic-workflow.md

**Use cases**:
- "Create a workflow that triages issues"

@@ -59,7 +62,7 @@ When you interact with this agent, it will:
### Update Existing Workflow
**Load when**: User wants to modify, improve, or refactor an existing workflow

**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/update-agentic-workflow.md
**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/update-agentic-workflow.md

**Use cases**:
- "Add web-fetch tool to the issue-classifier workflow"

@@ -69,7 +72,7 @@ When you interact with this agent, it will:
### Debug Workflow
**Load when**: User needs to investigate, audit, debug, or understand a workflow, troubleshoot issues, analyze logs, or fix errors

**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/debug-agentic-workflow.md
**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/debug-agentic-workflow.md

**Use cases**:
- "Why is this workflow failing?"
@@ -79,46 +82,52 @@ When you interact with this agent, it will:
### Upgrade Agentic Workflows
**Load when**: User wants to upgrade workflows to a new gh-aw version or fix deprecations

**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/upgrade-agentic-workflows.md
**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/upgrade-agentic-workflows.md

**Use cases**:
- "Upgrade all workflows to the latest version"
- "Fix deprecated fields in workflows"
- "Apply breaking changes from the new release"

### Create a Report-Generating Workflow
**Load when**: The workflow being created or updated produces reports — recurring status updates, audit summaries, analyses, or any structured output posted as a GitHub issue, discussion, or comment

**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/report.md

**Use cases**:
- "Create a weekly CI health report"
- "Post a daily security audit to Discussions"
- "Add a status update comment to open PRs"

### Create Shared Agentic Workflow
**Load when**: User wants to create a reusable workflow component or wrap an MCP server

**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/create-shared-agentic-workflow.md
**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/create-shared-agentic-workflow.md

**Use cases**:
- "Create a shared component for Notion integration"
- "Wrap the Slack MCP server as a reusable component"
- "Design a shared workflow for database queries"

### Orchestration and Delegation
### Fix Dependabot PRs
**Load when**: User needs to close or fix open Dependabot PRs that update dependencies in generated manifest files (`.github/workflows/package.json`, `.github/workflows/requirements.txt`, `.github/workflows/go.mod`)

**Load when**: Creating or updating workflows that coordinate multiple agents or dispatch work to other workflows

**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/orchestration.md
**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/dependabot.md

**Use cases**:
- Assigning work to AI coding agents
- Dispatching specialized worker workflows
- Using correlation IDs for tracking
- Orchestration design patterns
- "Fix the open Dependabot PRs for npm dependencies"
- "Bundle and close the Dependabot PRs for workflow dependencies"
- "Update @playwright/test to fix the Dependabot PR"

### GitHub Projects Integration
### Analyze Test Coverage
**Load when**: The workflow reads, analyzes, or reports test coverage — whether triggered by a PR, a schedule, or a slash command. Always consult this prompt before designing the coverage data strategy.

**Load when**: Creating or updating workflows that manage GitHub Projects v2

**Prompt file**: https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/projects.md
**Prompt file**: https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/test-coverage.md

**Use cases**:
- Tracking items and fields with update-project
- Posting periodic run summaries
- Creating new projects
- Projects v2 authentication and configuration
- "Create a workflow that comments coverage on PRs"
- "Analyze coverage trends over time"
- "Add a coverage gate that blocks PRs below a threshold"

## Instructions
@@ -160,8 +169,9 @@ gh aw compile --validate

## Important Notes

- Always reference the instructions file at https://github.com/github/gh-aw/blob/v0.45.3/.github/aw/github-agentic-workflows.md for complete documentation
- Always reference the instructions file at https://github.com/github/gh-aw/blob/v0.57.2/.github/aw/github-agentic-workflows.md for complete documentation
- Use the MCP tool `agentic-workflows` when running in GitHub Copilot Cloud
- Workflows must be compiled to `.lock.yml` files before running in GitHub Actions
- **Bash tools are enabled by default** - Don't restrict bash commands unnecessarily since workflows are sandboxed by the AWF
- Follow security best practices: minimal permissions, explicit network access, no template injection
- **Single-file output**: When creating a workflow, produce exactly **one** workflow `.md` file. Do not create separate documentation files (architecture docs, runbooks, usage guides, etc.). If documentation is needed, add a brief `## Usage` section inside the workflow file itself.
180
.github/agents/z3.md
vendored
Normal file

@@ -0,0 +1,180 @@
---
name: z3
description: 'Z3 theorem prover agent: SMT solving, code quality analysis, and verification.'
---

## Instructions

You are the Z3 Agent, a Copilot agent for the Z3 theorem prover. You handle two classes of requests: (1) SMT solving workflows where users formulate, solve, and interpret constraint problems, and (2) code quality workflows where users verify the Z3 codebase itself for memory bugs, static analysis findings, and solver correctness. Route to the appropriate skills based on the request.

### Workflow

1. **Classify the request**: Is the user asking to solve an SMT problem, or to verify/test the Z3 codebase?

2. **For SMT problems**:
   - Encode the problem into SMT-LIB2 if needed (via **encode**).
   - Route to the appropriate solving skill (**solve**, **prove**, **optimize**, **simplify**).
   - Interpret the result (via **explain**).
   - Measure performance if relevant (via **benchmark**).

3. **For code quality**:
   - Route to **memory-safety** or **static-analysis** depending on the goal.
   - Independent skills may run in parallel.
   - Aggregate and deduplicate findings across skills.

4. **Report**: Present results clearly. For SMT problems, interpret models and proofs. For code quality, sort findings by severity with file locations.

5. **Iterate**: On follow-ups, refine the formulation or narrow the scope. Do not re-run the full pipeline when only a narrow adjustment is needed.
### Available Skills

| # | Skill | Domain | Purpose |
|---|-------|--------|---------|
| 1 | solve | SMT | Check satisfiability. Extract models or unsat cores. |
| 2 | prove | SMT | Establish validity by checking the negation for unsatisfiability. |
| 3 | optimize | SMT | Minimize or maximize objectives subject to constraints. |
| 4 | simplify | SMT | Apply tactic chains to reduce formula complexity. |
| 5 | encode | SMT | Translate problem descriptions into SMT-LIB2 syntax. |
| 6 | explain | SMT | Interpret Z3 output (models, cores, proofs, statistics) in plain language. |
| 7 | benchmark | SMT | Measure solving performance, collect statistics, compare configurations. |
| 8 | memory-safety | Quality | Run ASan/UBSan on the Z3 test suite to detect memory errors and undefined behavior. |
| 9 | static-analysis | Quality | Run Clang Static Analyzer over Z3 source for null derefs, leaks, dead stores, logic errors. |

### Skill Dependencies

SMT solving skills have ordering constraints:

```
encode -> solve
encode -> prove
encode -> optimize
encode -> simplify
solve -> explain
prove -> explain
optimize -> explain
simplify -> explain
benchmark -> explain
solve -> benchmark
optimize -> benchmark
```

Code quality skills are independent and may run in parallel:

```
memory-safety (independent)
static-analysis (independent)
```
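The ordering constraints above form a dependency graph, so a valid execution order for any requested subset of skills falls out of a topological sort; a minimal sketch (the `pipeline_order` helper is illustrative, not part of the agent):

```python
from graphlib import TopologicalSorter

# Mirrors the dependency graph above: skill -> set of prerequisite skills.
DEPS = {
    "solve": {"encode"},
    "prove": {"encode"},
    "optimize": {"encode"},
    "simplify": {"encode"},
    "explain": {"solve", "prove", "optimize", "simplify", "benchmark"},
    "benchmark": {"solve", "optimize"},
}

def pipeline_order(skills):
    """Return the requested skills in an order that respects DEPS."""
    requested = set(skills)
    # Keep only prerequisites that were actually requested.
    graph = {s: DEPS.get(s, set()) & requested for s in requested}
    return list(TopologicalSorter(graph).static_order())

# e.g. a "why is this slow?" request touching solve, benchmark, explain:
order = pipeline_order({"explain", "benchmark", "solve"})
```

Here `solve` is guaranteed to come before `benchmark`, and `benchmark` before `explain`, without hard-coding the three-step pipeline.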
### Skill Selection

**SMT problems:**

- "Is this formula satisfiable?" : `solve`
- "Find a model for these constraints" : `solve` then `explain`
- "Prove that P implies Q" : `encode` (if needed) then `prove` then `explain`
- "Optimize this scheduling problem" : `encode` then `optimize` then `explain`
- "Simplify this expression" : `simplify` then `explain`
- "Convert to CNF" : `simplify`
- "Translate this problem to SMT-LIB2" : `encode`
- "Why is Z3 returning unknown?" : `explain`
- "Why is this query slow?" : `benchmark` then `explain`
- "What does this model mean?" : `explain`
- "Get the unsat core" : `solve` then `explain`

**Code quality:**

- "Check for memory bugs" : `memory-safety`
- "Run ASan on the test suite" : `memory-safety`
- "Find undefined behavior" : `memory-safety` (UBSan mode)
- "Run static analysis" : `static-analysis`
- "Find null pointer bugs" : `static-analysis`
- "Full verification pass" : `memory-safety` + `static-analysis`
- "Verify this pull request" : `memory-safety` + `static-analysis` (scope to changed files)

When the request is ambiguous, prefer the most informative pipeline.
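A first-match keyword router is one simple way to realize the mapping above; a sketch in which the keyword lists and the `route` helper are purely illustrative, not the agent's actual logic:

```python
# Maps request keywords to a skill pipeline; first matching entry wins.
ROUTES = [
    (("satisfiable", "model", "unsat core"), ["solve", "explain"]),
    (("prove", "implies", "valid"), ["encode", "prove", "explain"]),
    (("optimize", "minimize", "maximize", "schedule"), ["encode", "optimize", "explain"]),
    (("simplify", "cnf"), ["simplify", "explain"]),
    (("slow", "performance"), ["benchmark", "explain"]),
    (("memory", "asan", "ubsan"), ["memory-safety"]),
    (("static analysis", "null pointer"), ["static-analysis"]),
]

def route(request):
    """Pick a skill pipeline for a natural-language request."""
    text = request.lower()
    for keywords, pipeline in ROUTES:
        if any(k in text for k in keywords):
            return pipeline
    # Ambiguous request: fall back to the most informative SMT pipeline.
    return ["encode", "solve", "explain"]
```

In practice an LLM classifier replaces the keyword matching, but the output shape (an ordered pipeline of skill names) stays the same.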
### Examples

User: "Is (x > 0 and y > 0 and x + y < 1) satisfiable over the reals?"

1. **solve**: Assert the conjunction over real-valued variables. Run `(check-sat)`.
2. **explain**: Present the model or state unsatisfiability.

User: "Prove that for all integers x, if x^2 is even then x is even."

1. **encode**: Formalize and negate the statement.
2. **prove**: Check the negation for unsatisfiability.
3. **explain**: Present the validity result or counterexample.

User: "Schedule five tasks on two machines to minimize makespan."

1. **encode**: Define integer variables, encode machine capacity and precedence constraints.
2. **optimize**: Minimize the makespan variable.
3. **explain**: Present the optimal schedule and binding constraints.

User: "Why is my bitvector query so slow?"

1. **benchmark**: Run with statistics collection.
2. **explain**: Identify cost centers and suggest parameter adjustments.

User: "Check for memory bugs in the SAT solver."

1. **memory-safety**: Build with ASan, run SAT solver tests, collect sanitizer reports.
2. Report findings with stack traces categorized by bug type.

User: "Full verification pass before the release."

1. Launch both quality skills in parallel:
   - **memory-safety**: Full test suite under ASan and UBSan.
   - **static-analysis**: Full source tree scan.
2. Aggregate findings, deduplicate, sort by severity.
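The first example's query can be assembled programmatically before being piped to `z3 -in`; a minimal sketch (the `build_real_query` helper is hypothetical, and Z3 itself is not invoked here):

```python
def build_real_query(names, constraints):
    """Build an SMT-LIB2 satisfiability query over real variables."""
    lines = ["(set-option :produce-models true)"]
    lines += [f"(declare-const {n} Real)" for n in names]
    lines += [f"(assert {c})" for c in constraints]
    lines += ["(check-sat)", "(get-model)"]
    return "\n".join(lines)

query = build_real_query(
    ["x", "y"],
    ["(> x 0)", "(> y 0)", "(< (+ x y) 1)"],
)
# The resulting text would be fed to the solver, e.g.:
# subprocess.run(["z3", "-in"], input=query, text=True)
```

Generating the script as text keeps the encode step inspectable: the exact query sent to the solver can be logged and replayed.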
### Build Configurations

Code quality skills may require specific builds:

**memory-safety (ASan)**:
```bash
mkdir build-asan && cd build-asan
cmake .. -DCMAKE_CXX_FLAGS="-fsanitize=address -fno-omit-frame-pointer" -DCMAKE_C_FLAGS="-fsanitize=address -fno-omit-frame-pointer" -DCMAKE_BUILD_TYPE=Debug
make -j$(nproc)
```

**memory-safety (UBSan)**:
```bash
mkdir build-ubsan && cd build-ubsan
cmake .. -DCMAKE_CXX_FLAGS="-fsanitize=undefined" -DCMAKE_C_FLAGS="-fsanitize=undefined" -DCMAKE_BUILD_TYPE=Debug
make -j$(nproc)
```

**static-analysis**:
```bash
mkdir build-analyze && cd build-analyze
scan-build cmake .. -DCMAKE_BUILD_TYPE=Debug
scan-build make -j$(nproc)
```
### Error Handling

**unknown from Z3**: Check `(get-info :reason-unknown)`. If "incomplete," suggest alternative encodings. If "timeout," suggest parameter tuning or the **simplify** skill.

**syntax or sort errors**: Report the exact Z3 error message, identify the offending declaration, suggest a correction.

**resource exhaustion**: Suggest simplifying the problem, eliminating quantifiers, or using incremental solving.

**build failure**: Report compiler errors. Common cause: sanitizer flags incompatible with optimization levels.

**flaky sanitizer reports**: Re-run flagged tests three times to confirm reproducibility. Mark non-reproducible findings as "intermittent."

**false positives in static analysis**: Flag likely false positives but do not suppress without user confirmation.

### Notes

- Validate SMT-LIB2 syntax before invoking Z3.
- Prefer incremental mode (`(push)` / `(pop)`) when the user is iterating on a formula.
- Use `(set-option :produce-models true)` by default for satisfiability queries.
- Collect statistics with `z3 -st` when performance is relevant.
- Present models in readable table format, not raw S-expressions.
- Sanitizer builds are slower than Release builds. Set timeouts to at least 3x normal.
- Store code quality artifacts in `.z3-agent/`.
- Never fabricate results or suppress findings.
39
.github/aw/actions-lock.json
vendored
Normal file

@@ -0,0 +1,39 @@
{
  "entries": {
    "actions/cache/restore@v5.0.4": {
      "repo": "actions/cache/restore",
      "version": "v5.0.4",
      "sha": "668228422ae6a00e4ad889ee87cd7109ec5666a7"
    },
    "actions/cache/save@v5.0.4": {
      "repo": "actions/cache/save",
      "version": "v5.0.4",
      "sha": "668228422ae6a00e4ad889ee87cd7109ec5666a7"
    },
    "actions/checkout@v6.0.2": {
      "repo": "actions/checkout",
      "version": "v6.0.2",
      "sha": "de0fac2e4500dabe0009e67214ff5f5447ce83dd"
    },
    "actions/download-artifact@v8.0.1": {
      "repo": "actions/download-artifact",
      "version": "v8.0.1",
      "sha": "3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c"
    },
    "actions/github-script@v8": {
      "repo": "actions/github-script",
      "version": "v8",
      "sha": "ed597411d8f924073f98dfc5c65a23a2325f34cd"
    },
    "actions/upload-artifact@v7.0.0": {
      "repo": "actions/upload-artifact",
      "version": "v7.0.0",
      "sha": "bbbca2ddaa5d8feaa63e36b76fdaad77386f024f"
    },
    "github/gh-aw/actions/setup@v0.63.0": {
      "repo": "github/gh-aw/actions/setup",
      "version": "v0.63.0",
      "sha": "4248ac6884048ea9d35c81a56c34091747faa2ba"
    }
  }
}
51
.github/scripts/fetch-artifacts.sh
vendored
Executable file

@@ -0,0 +1,51 @@
#!/usr/bin/env bash
# fetch-artifacts.sh: download + extract ASan/UBSan artifact ZIPs.
#
# The agent gets temporary download URLs via GitHub MCP tools, then
# passes them here so the download is logged and repeatable.
#
# usage: fetch-artifacts.sh <asan_url> [ubsan_url]
# output: /tmp/reports/{asan-reports,ubsan-reports}/

set -euo pipefail

REPORT_DIR="/tmp/reports"
LOG="/tmp/fetch-artifacts.log"

log() { printf '[%s] %s\n' "$(date -u +%H:%M:%S)" "$*" | tee -a "$LOG"; }

asan_url="${1:?usage: $0 <asan_url> [ubsan_url]}"
ubsan_url="${2:-}"

rm -rf "$REPORT_DIR"
mkdir -p "$REPORT_DIR/asan-reports" "$REPORT_DIR/ubsan-reports"
: > "$LOG"

download_and_extract() {
    local name=$1
    local url=$2
    local dest=$3
    local zip="/tmp/${name}.zip"

    log "$name: downloading"
    # Use `||` so $? inside the handler is curl's real exit status;
    # inside `if ! curl ...`, $? would be 0 (the negated condition).
    curl -fsSL "$url" -o "$zip" || {
        log "$name: download failed (curl exit $?)"
        return 1
    }
    log "$name: $(stat -c%s "$zip") bytes"

    unzip -oq "$zip" -d "$dest"
    log "$name: extracted $(ls -1 "$dest" | wc -l) files"
    ls -1 "$dest" | while read -r f; do log "  $f"; done
}

download_and_extract "asan" "$asan_url" "$REPORT_DIR/asan-reports"

if [ -n "$ubsan_url" ]; then
    download_and_extract "ubsan" "$ubsan_url" "$REPORT_DIR/ubsan-reports"
else
    log "ubsan: skipped (no url)"
fi

log "all done"
echo "$REPORT_DIR"
201
.github/scripts/parse_sanitizer_reports.py
vendored
Normal file

@@ -0,0 +1,201 @@
#!/usr/bin/env python3
"""Parse ASan/UBSan artifacts from the memory-safety workflow.

Reads the report directory produced by fetch-artifacts.sh, extracts
findings from per-PID log files and stdout captures, writes structured
JSON to /tmp/parsed-report.json.

Usage:
    parse_sanitizer_reports.py [report_dir]

report_dir defaults to /tmp/reports.
"""

import json
import re
import sys
from pathlib import Path

REPORT_DIR = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("/tmp/reports")
OUT = Path("/tmp/parsed-report.json")

ASAN_DIR = REPORT_DIR / "asan-reports"
UBSAN_DIR = REPORT_DIR / "ubsan-reports"

# Patterns for real sanitizer findings (not Z3 internal errors).
ASAN_ERROR = re.compile(
    r"==\d+==ERROR: (AddressSanitizer|LeakSanitizer): (.+)"
)
ASAN_SUMMARY = re.compile(
    r"SUMMARY: (AddressSanitizer|LeakSanitizer): (\d+) byte"
)
UBSAN_ERROR = re.compile(
    r"(.+:\d+:\d+): runtime error: (.+)"
)
# Stack frame: #N 0xADDR in func file:line
STACK_FRAME = re.compile(
    r"\s+#(\d+) 0x[0-9a-f]+ in (.+?) (.+)"
)


def read_text(path):
    if path.is_file():
        return path.read_text(errors="replace")
    return ""


def find_pid_files(directory, prefix):
    """Return paths matching prefix.* (asan.12345, ubsan.67890, etc)."""
    if not directory.is_dir():
        return []
    return sorted(
        p for p in directory.iterdir()
        if p.name.startswith(prefix + ".") and p.name != prefix
    )


def parse_asan_block(text):
    """Pull individual ASan error blocks from a log."""
    findings = []
    current = None

    for line in text.splitlines():
        m = ASAN_ERROR.match(line)
        if m:
            if current:
                findings.append(current)
            current = {
                "tool": m.group(1),
                "type": m.group(2).strip(),
                "location": "",
                "frames": [],
                "raw": line,
            }
            continue

        if current and len(current["frames"]) < 5:
            fm = STACK_FRAME.match(line)
            if fm:
                frame = {"func": fm.group(2), "location": fm.group(3)}
                current["frames"].append(frame)
                if not current["location"] and ":" in fm.group(3):
                    current["location"] = fm.group(3).strip()

    if current:
        findings.append(current)
    return findings


def parse_ubsan_lines(text):
    """Pull UBSan runtime-error lines."""
    findings = []
    seen = set()
    for line in text.splitlines():
        m = UBSAN_ERROR.search(line)
        if m:
            key = (m.group(1), m.group(2))
            if key not in seen:
                seen.add(key)
                findings.append({
                    "tool": "UBSan",
                    "type": m.group(2).strip(),
                    "location": m.group(1).strip(),
                    "raw": line.strip(),
                })
    return findings


def scan_directory(directory, prefix, parse_pid_fn, log_pattern):
    """Scan a report directory and return structured results."""
    summary_text = read_text(directory / "summary.md")
    pid_files = find_pid_files(directory, prefix)

    pid_findings = []
    for pf in pid_files:
        pid_findings.extend(parse_pid_fn(pf.read_text(errors="replace")))

    log_findings = []
    log_hit_count = 0
    for logfile in sorted(directory.glob("*.log")):
        content = logfile.read_text(errors="replace")
        hits = len(log_pattern.findall(content))
        log_hit_count += hits
        log_findings.extend(parse_pid_fn(content))

    # Deduplicate log_findings against pid_findings by (type, location).
    pid_keys = {(f["type"], f["location"]) for f in pid_findings}
    unique_log = [f for f in log_findings if (f["type"], f["location"]) not in pid_keys]

    all_findings = pid_findings + unique_log
    files = sorted(p.name for p in directory.iterdir()) if directory.is_dir() else []

    return {
        "summary": summary_text,
        "pid_file_count": len(pid_files),
        "log_hit_count": log_hit_count,
        "findings": all_findings,
        "finding_count": len(all_findings),
        "files": files,
    }


def load_suppressions():
    """Read suppressions from contrib/suppressions/sanitizers/."""
    base = Path("contrib/suppressions/sanitizers")
    result = {}
    for name in ("asan", "ubsan", "lsan"):
        path = base / f"{name}.txt"
        entries = []
        if path.is_file():
            for line in path.read_text().splitlines():
                line = line.strip()
                if line and not line.startswith("#"):
                    entries.append(line)
        result[name] = entries
    return result


def main():
    if not REPORT_DIR.is_dir():
        print(f"error: {REPORT_DIR} not found", file=sys.stderr)
        sys.exit(1)

    asan = scan_directory(ASAN_DIR, "asan", parse_asan_block, ASAN_ERROR)
    ubsan = scan_directory(UBSAN_DIR, "ubsan", parse_ubsan_lines, UBSAN_ERROR)
    suppressions = load_suppressions()

    report = {
        "asan": asan,
        "ubsan": ubsan,
        "suppressions": suppressions,
        "total_findings": asan["finding_count"] + ubsan["finding_count"],
    }

    OUT.write_text(json.dumps(report, indent=2))

    # Human-readable summary to stdout.
    total = report["total_findings"]
    print(f"asan: {asan['finding_count']} findings ({asan['pid_file_count']} pid files, {asan['log_hit_count']} log hits)")
    print(f"ubsan: {ubsan['finding_count']} findings ({ubsan['pid_file_count']} pid files, {ubsan['log_hit_count']} log hits)")

    if total == 0:
        print("result: clean")
    else:
        print(f"result: {total} finding(s)")
        for f in asan["findings"]:
            print(f"  [{f['tool']}] {f['type']} at {f['location']}")
        for f in ubsan["findings"]:
            print(f"  [UBSan] {f['type']} at {f['location']}")

    if any(suppressions.values()):
        print("suppressions:")
        for tool, entries in suppressions.items():
            for e in entries:
                print(f"  {tool}: {e}")

    print(f"\njson: {OUT}")


if __name__ == "__main__":
    main()
72 .github/skills/README.md vendored Normal file
@@ -0,0 +1,72 @@
# Z3 Agent Skills

Reusable, composable verification primitives for the Z3 theorem prover.
Each skill is a self-contained unit: a `SKILL.md` prompt that guides the
LLM agent, backed by a Python validation script in `scripts/`.

## Skill Catalogue

| Skill | Status | Description |
|-------|--------|-------------|
| solve | implemented | Check satisfiability of SMT-LIB2 formulas; return models or unsat cores |
| prove | implemented | Prove validity by negation and satisfiability checking |
| encode | implemented | Translate constraint problems into SMT-LIB2 or Z3 Python API code |
| simplify | implemented | Reduce formula complexity using configurable Z3 tactic chains |
| optimize | implemented | Solve constrained optimization (minimize/maximize) over numeric domains |
| explain | implemented | Parse and interpret Z3 output: models, cores, statistics, errors |
| benchmark | implemented | Measure Z3 performance and collect solver statistics |
| static-analysis | implemented | Run Clang Static Analyzer on Z3 source and log structured findings |
| memory-safety | implemented | Run ASan/UBSan on Z3 test suite to detect memory errors and undefined behavior |

## Agent

A single orchestration agent composes these skills into end-to-end workflows:

| Agent | Role |
|-------|------|
| z3 | SMT solving, code quality analysis, and stress testing |

## Shared Infrastructure

All scripts share a common library at `shared/z3db.py` with:

* `Z3DB`: SQLite wrapper for tracking runs, formulas, findings, and interaction logs.
* `run_z3()`: Pipe SMT-LIB2 into `z3 -in` with timeout handling.
* `find_z3()`: Locate the Z3 binary across build directories and PATH.
* Parsers: `parse_model()`, `parse_stats()`, `parse_unsat_core()`.

The database schema lives in `shared/schema.sql`.
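To make the parser interface concrete, here is a minimal sketch of how a `:key value` statistics block from `z3 -st` can be turned into a dict. This is a simplified stand-in for illustration, not the actual `parse_stats()` implementation in `shared/z3db.py`; it assumes numeric values only.

```python
import re

def parse_stats_sketch(text: str) -> dict:
    """Parse `:key value` pairs from a Z3 `-st` statistics block.

    Simplified stand-in for parse_stats(); assumes numeric values.
    """
    stats = {}
    for key, val in re.findall(r":([A-Za-z0-9._-]+)\s+([0-9.]+)", text):
        # Keep integers as int, fractional values as float
        stats[key] = float(val) if "." in val else int(val)
    return stats

sample = "(:conflicts 42 :decisions 100 :memory 17.32)"
print(parse_stats_sketch(sample))
```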

## Relationship to a3/ Workflows

The `a3/` directory at the repository root contains two existing agentic workflow
prompts that predate the skill architecture:

* `a3/a3-python.md`: A3 Python Code Analysis agent (uses the a3-python pip tool
  to scan Python source, classifies findings, creates GitHub issues).
* `a3/a3-rust.md`: A3 Rust Verifier Output Analyzer (downloads a3-rust build
  artifacts, parses bug reports, creates GitHub discussions).

These workflows are complementary to the skills defined here, not replaced by
them. The a3 prompts focus on external analysis tooling and GitHub integration,
while skills focus on Z3 solver operations and their validation. Both may be
composed by the same orchestrating agent.

## Usage

Check database status and recent runs:

```
python shared/z3db.py status
python shared/z3db.py runs --skill solve --last 5
python shared/z3db.py log --run-id 12
python shared/z3db.py query "SELECT skill, COUNT(*) FROM runs GROUP BY skill"
```

Run an individual skill script directly:

```
python solve/scripts/solve.py --file problem.smt2
python encode/scripts/encode.py --validate formula.smt2
python benchmark/scripts/benchmark.py --file problem.smt2
```
79 .github/skills/benchmark/SKILL.md vendored Normal file
@@ -0,0 +1,79 @@
---
name: benchmark
description: Measure Z3 performance on a formula or file. Collects wall-clock time, theory solver statistics, memory usage, and conflict counts. Results are logged to z3agent.db for longitudinal tracking.
---

Given an SMT-LIB2 formula or file, run Z3 with statistics enabled and report performance characteristics. This is useful for identifying performance regressions, comparing tactic strategies, and profiling theory solver workload distribution.

# Step 1: Run Z3 with statistics

Action:
Invoke benchmark.py with the formula or file. Use `--runs N` for repeated timing.

Expectation:
The script invokes `z3 -st`, parses the statistics block, and prints a performance summary. A run entry is logged to z3agent.db.

Result:
Timing and statistics are displayed. Proceed to Step 2 to interpret.

```bash
python3 scripts/benchmark.py --file problem.smt2
python3 scripts/benchmark.py --file problem.smt2 --runs 5
python3 scripts/benchmark.py --formula "(declare-const x Int)..." --debug
```

# Step 2: Interpret the output

Action:
Review wall-clock time, memory usage, conflict counts, and per-theory breakdowns.

Expectation:
A complete performance profile including min/median/max timing when multiple runs are requested.

Result:
If performance is acceptable, no action needed.
If slow, try **simplify** to reduce the formula or adjust tactic strategies.

The output includes:

- wall-clock time (ms)
- result (sat/unsat/unknown/timeout)
- memory usage (MB)
- conflicts, decisions, propagations
- per-theory breakdown (arithmetic, bv, array, etc.)

With `--runs N`, the script runs Z3 N times and reports min/median/max timing.
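The repeated-run summary can be sketched with the standard `statistics` module, mirroring what `benchmark.py` computes over its collected timings:

```python
import statistics

def summarize(timings_ms):
    """Summarize repeated benchmark timings as min/median/max,
    the way `--runs N` reports them."""
    return {
        "min": min(timings_ms),
        "median": statistics.median(timings_ms),
        "max": max(timings_ms),
    }

print(summarize([120, 95, 110, 130, 101]))
```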

# Step 3: Compare over time

Action:
Query past benchmark runs from z3agent.db to detect regressions or improvements.

Expectation:
Historical run data is available for comparison, ordered by recency.

Result:
If performance regressed, investigate recent formula or tactic changes.
If improved, record the successful configuration.

```bash
python3 ../../shared/z3db.py runs --skill benchmark --last 20
python3 ../../shared/z3db.py query "SELECT smtlib2, result, stats FROM formulas WHERE run_id IN (SELECT run_id FROM runs WHERE skill='benchmark') ORDER BY run_id DESC LIMIT 5"
```

# Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| formula | string | no | | SMT-LIB2 formula |
| file | path | no | | path to .smt2 file |
| runs | int | no | 1 | number of repeated runs for timing |
| timeout | int | no | 60 | seconds per run |
| z3 | path | no | auto | path to z3 binary |
| debug | flag | no | off | verbose tracing |
| db | path | no | .z3-agent/z3agent.db | logging database |
80 .github/skills/benchmark/scripts/benchmark.py vendored Normal file
@@ -0,0 +1,80 @@
#!/usr/bin/env python3
"""
benchmark.py: measure Z3 performance with statistics.

Usage:
    python benchmark.py --file problem.smt2
    python benchmark.py --file problem.smt2 --runs 5
"""

import argparse
import statistics
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent.parent.parent / "shared"))
from z3db import Z3DB, run_z3, parse_stats, setup_logging


def main():
    parser = argparse.ArgumentParser(prog="benchmark")
    parser.add_argument("--formula")
    parser.add_argument("--file")
    parser.add_argument("--runs", type=int, default=1)
    parser.add_argument("--timeout", type=int, default=60)
    parser.add_argument("--z3", default=None)
    parser.add_argument("--db", default=None)
    parser.add_argument("--debug", action="store_true")
    args = parser.parse_args()

    setup_logging(args.debug)

    if args.file:
        formula = Path(args.file).read_text()
    elif args.formula:
        formula = args.formula
    else:
        parser.error("provide --formula or --file")
        return

    db = Z3DB(args.db)
    timings = []

    for i in range(args.runs):
        run_id = db.start_run("benchmark", formula)
        result = run_z3(
            formula,
            z3_bin=args.z3,
            timeout=args.timeout,
            args=["-st"],
            debug=args.debug,
        )

        stats = parse_stats(result["stdout"])
        db.log_formula(run_id, formula, result["result"], stats=stats)
        db.finish_run(
            run_id, result["result"], result["duration_ms"], result["exit_code"]
        )
        timings.append(result["duration_ms"])

    if args.runs == 1:
        print(f"result: {result['result']}")
        print(f"time: {result['duration_ms']}ms")
        if stats:
            print("statistics:")
            for k, v in sorted(stats.items()):
                print(f" :{k} {v}")

    if args.runs > 1:
        print(f"runs: {args.runs}")
        print(f"min: {min(timings)}ms")
        print(f"median: {statistics.median(timings):.0f}ms")
        print(f"max: {max(timings)}ms")
        print(f"result: {result['result']}")

    db.close()
    sys.exit(0 if result["exit_code"] == 0 else 1)


if __name__ == "__main__":
    main()
73 .github/skills/encode/SKILL.md vendored Normal file
@@ -0,0 +1,73 @@
---
name: encode
description: Translate constraint problems into SMT-LIB2 or Z3 Python API code. Handles common problem classes including scheduling, graph coloring, arithmetic puzzles, and verification conditions.
---

Given a problem description (natural language, pseudocode, or a partial formulation), produce a complete, syntactically valid SMT-LIB2 encoding or Z3 Python script. The encoding should declare all variables, assert all constraints, and include the appropriate check-sat / get-model commands.

# Step 1: Identify the problem class

Action:
Determine the SMT theory and variable sorts required by the problem description.

Expectation:
A clear mapping from the problem to one of the supported theories (LIA, LRA, QF_BV, etc.).

Result:
If the theory is identified, proceed to Step 2. If the problem spans multiple theories, select the appropriate combined logic.

| Problem class | Theory | Typical sorts |
|---------------|--------|---------------|
| Integer arithmetic | LIA / NIA | Int |
| Real arithmetic | LRA / NRA | Real |
| Bitvector operations | QF_BV | (_ BitVec N) |
| Arrays and maps | QF_AX | (Array Int Int) |
| Strings and regex | QF_S | String, RegLan |
| Uninterpreted functions | QF_UF | custom sorts |
| Mixed theories | AUFLIA, etc. | combination |

# Step 2: Generate the encoding

Action:
Invoke encode.py with the problem description and desired output format.

Expectation:
The script produces a complete SMT-LIB2 file or Z3 Python script with all declarations, constraints, and check-sat commands.

Result:
For `smtlib2` format: pass the output to **solve**.
For `python` format: execute the script directly.
Proceed to Step 3 for validation.

```bash
python3 scripts/encode.py --problem "Find integers x, y such that x^2 + y^2 = 25 and x > 0" --format smtlib2
python3 scripts/encode.py --problem "Schedule 4 tasks on 2 machines minimizing makespan" --format python
```

# Step 3: Validate the encoding

Action:
The script runs a syntax check by piping the output through `z3 -in` in parse-only mode.

Expectation:
No parse errors. If errors occur, the offending line is reported.

Result:
On success: the encoding is ready for **solve**, **prove**, or **optimize**.
On parse error: fix the reported line and re-run.
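The syntax check relies on Z3 emitting `(error "...")` s-expressions on bad input. The extraction step, using the same regex that `encode.py` defines in its `find_errors()` helper, looks like this:

```python
import re

def find_errors(output: str):
    """Extract (error "...") messages from Z3 output.

    Same regex as encode.py's find_errors()."""
    return re.findall(r'\(error\s+"([^"]+)"\)', output)

out = '(error "line 3 column 10: unknown sort Bool2")\nsat'
print(find_errors(out))
```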

# Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| problem | string | yes | | problem description |
| format | string | no | smtlib2 | output format: smtlib2 or python |
| output | path | no | stdout | write to file instead of stdout |
| validate | flag | no | on | run syntax check on the output |
| debug | flag | no | off | verbose tracing |
| db | path | no | .z3-agent/z3agent.db | logging database |
149 .github/skills/encode/scripts/encode.py vendored Normal file
@@ -0,0 +1,149 @@
#!/usr/bin/env python3
"""
encode.py: validate and format SMT-LIB2 encodings.

Usage:
    python encode.py --validate formula.smt2
    python encode.py --validate formula.smt2 --debug
"""

import argparse
import re
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent.parent.parent / "shared"))
from z3db import Z3DB, run_z3, setup_logging

VALIDATION_TIMEOUT = 5


def read_input(path_or_stdin: str) -> str:
    """Read formula from a file path or stdin (when path is '-')."""
    if path_or_stdin == "-":
        return sys.stdin.read()
    p = Path(path_or_stdin)
    if not p.exists():
        print(f"file not found: {p}", file=sys.stderr)
        sys.exit(1)
    return p.read_text()


def find_errors(output: str) -> list:
    """Extract (error ...) messages from Z3 output."""
    return re.findall(r'\(error\s+"([^"]+)"\)', output)


def validate(formula: str, z3_bin: str = None, debug: bool = False) -> dict:
    """
    Validate an SMT-LIB2 formula by piping it through z3 -in.
    Returns a dict with 'valid' (bool), 'errors' (list), and 'raw' output.
    """
    result = run_z3(
        formula,
        z3_bin=z3_bin,
        timeout=VALIDATION_TIMEOUT,
        debug=debug,
    )
    errors = find_errors(result["stdout"]) + find_errors(result["stderr"])

    if result["result"] == "timeout":
        # Timeout during validation is not a syntax error: the formula
        # parsed successfully but solving exceeded the limit. That counts
        # as syntactically valid.
        return {"valid": True, "errors": [], "raw": result}

    if errors or result["exit_code"] != 0:
        return {"valid": False, "errors": errors, "raw": result}

    return {"valid": True, "errors": [], "raw": result}


def report_errors(errors: list, formula: str):
    """Print each syntax error with surrounding context."""
    lines = formula.splitlines()
    print(f"validation failed: {len(errors)} error(s)", file=sys.stderr)
    for err in errors:
        print(f" : {err}", file=sys.stderr)
    if len(lines) <= 20:
        print("formula:", file=sys.stderr)
        for i, line in enumerate(lines, 1):
            print(f" {i:3d} {line}", file=sys.stderr)


def write_output(formula: str, output_path: str, fmt: str):
    """Write the validated formula to a file or stdout."""
    if fmt == "python":
        print(
            "python format output is generated by the agent, not by this script",
            file=sys.stderr,
        )
        sys.exit(1)

    if output_path:
        Path(output_path).write_text(formula)
        print(f"written to {output_path}")
    else:
        print(formula)


def main():
    parser = argparse.ArgumentParser(prog="encode")
    parser.add_argument(
        "--validate",
        metavar="FILE",
        help="path to .smt2 file to validate, or '-' for stdin",
    )
    parser.add_argument(
        "--format",
        choices=["smtlib2", "python"],
        default="smtlib2",
        help="output format (default: smtlib2)",
    )
    parser.add_argument(
        "--output",
        metavar="FILE",
        default=None,
        help="write result to file instead of stdout",
    )
    parser.add_argument("--z3", default=None, help="path to z3 binary")
    parser.add_argument("--db", default=None)
    parser.add_argument("--debug", action="store_true")
    args = parser.parse_args()

    setup_logging(args.debug)

    if not args.validate:
        parser.error("provide --validate FILE")
        return

    formula = read_input(args.validate)

    db = Z3DB(args.db)
    run_id = db.start_run("encode", formula)

    result = validate(formula, z3_bin=args.z3, debug=args.debug)

    if result["valid"]:
        db.log_formula(run_id, formula, "valid")
        db.finish_run(run_id, "valid", result["raw"]["duration_ms"], 0)
        write_output(formula, args.output, args.format)
        db.close()
        sys.exit(0)
    else:
        db.log_formula(run_id, formula, "error")
        for err in result["errors"]:
            db.log_finding(run_id, "syntax", err, severity="error")
        db.finish_run(
            run_id,
            "error",
            result["raw"]["duration_ms"],
            result["raw"]["exit_code"],
        )
        report_errors(result["errors"], formula)
        db.close()
        sys.exit(1)


if __name__ == "__main__":
    main()
83 .github/skills/explain/SKILL.md vendored Normal file
@@ -0,0 +1,83 @@
---
name: explain
description: Parse and interpret Z3 output for human consumption. Handles models, unsat cores, proofs, statistics, and error messages. Translates solver internals into plain-language explanations.
---

Given raw Z3 output (from the **solve**, **prove**, **optimize**, or **benchmark** skills), produce a structured explanation. This skill is for cases where the solver output is large, nested, or otherwise difficult to read directly.

# Step 1: Identify the output type

Action:
Determine the category of Z3 output to explain: model, core, statistics, error, or proof.

Expectation:
The output type maps to one of the recognized formats in the table below.

Result:
If the type is ambiguous, use `--type auto` and let the script detect it.
Proceed to Step 2.

| Output contains | Explanation type |
|----------------|-----------------|
| `(define-fun ...)` blocks | model explanation |
| unsat core labels | conflict explanation |
| `:key value` statistics | performance breakdown |
| `(error ...)` | error diagnosis |
| proof terms | proof sketch |
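The `--type auto` detection is a small set of textual heuristics applied in order, reproduced here as a runnable sketch of the `detect_type` function in `explain.py`:

```python
import re

def detect_type(text: str) -> str:
    """Auto-detect the Z3 output category (same heuristics as explain.py)."""
    if "(define-fun" in text:
        return "model"
    if "(error" in text:
        return "error"
    if re.search(r":\S+\s+[\d.]+", text):
        return "stats"
    # A bare "unsat" first line suggests a core may follow
    if text.strip().split("\n")[0].strip() == "unsat":
        return "core"
    return "unknown"

print(detect_type("sat\n(model\n (define-fun x () Int 3)\n)"))
print(detect_type("unsat\n(a1 a2)"))
```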

# Step 2: Run the explainer

Action:
Invoke explain.py with the output file or stdin.

Expectation:
The script auto-detects the output type and produces a structured plain-language summary.

Result:
A formatted explanation is printed. If detection fails, re-run with an explicit `--type` flag.

```bash
python3 scripts/explain.py --file output.txt
python3 scripts/explain.py --stdin < output.txt
python3 scripts/explain.py --file output.txt --debug
```

# Step 3: Interpret the explanation

Action:
Review the structured explanation for accuracy and completeness.

Expectation:
Models list each variable with its value and sort. Cores list conflicting assertions. Statistics show time and memory breakdowns.

Result:
Use the explanation to answer the user query or to guide the next skill invocation.

For models:
- Each variable is listed with its value and sort
- Array and function interpretations are expanded
- Bitvector values are shown in decimal and hex

For unsat cores:
- The conflicting named assertions are listed
- A minimal conflict set is highlighted

For statistics:
- Time breakdown by phase (preprocessing, solving, model construction)
- Theory solver load distribution
- Memory high-water mark

# Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| file | path | no | | file containing Z3 output |
| stdin | flag | no | off | read from stdin |
| type | string | no | auto | force output type: model, core, stats, error |
| debug | flag | no | off | verbose tracing |
| db | path | no | .z3-agent/z3agent.db | logging database |
129 .github/skills/explain/scripts/explain.py vendored Normal file
@@ -0,0 +1,129 @@
#!/usr/bin/env python3
"""
explain.py: interpret Z3 output in a readable form.

Usage:
    python explain.py --file output.txt
    echo "sat\n(model ...)" | python explain.py --stdin
"""

import argparse
import re
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent.parent.parent / "shared"))
from z3db import Z3DB, parse_model, parse_stats, parse_unsat_core, setup_logging


def detect_type(text: str) -> str:
    if "(define-fun" in text:
        return "model"
    if "(error" in text:
        return "error"
    if re.search(r":\S+\s+[\d.]+", text):
        return "stats"
    first = text.strip().split("\n")[0].strip()
    if first == "unsat":
        return "core"
    return "unknown"


def explain_model(text: str):
    model = parse_model(text)
    if not model:
        print("no model found in output")
        return
    print("satisfying assignment:")
    for name, val in model.items():
        # show hex for large integers (likely bitvectors)
        try:
            n = int(val)
            if abs(n) > 255:
                print(f" {name} = {val} (0x{n:x})")
            else:
                print(f" {name} = {val}")
        except ValueError:
            print(f" {name} = {val}")


def explain_core(text: str):
    core = parse_unsat_core(text)
    if core:
        print(f"conflicting assertions ({len(core)}):")
        for label in core:
            print(f" {label}")
    else:
        print("unsat (no named assertions for core extraction)")


def explain_stats(text: str):
    stats = parse_stats(text)
    if not stats:
        print("no statistics found")
        return
    print("performance breakdown:")
    for k in sorted(stats):
        print(f" :{k} {stats[k]}")

    if "time" in stats:
        print(f"\ntotal time: {stats['time']}s")
    if "memory" in stats:
        print(f"peak memory: {stats['memory']} MB")


def explain_error(text: str):
    errors = re.findall(r'\(error\s+"([^"]+)"\)', text)
    if errors:
        print(f"Z3 reported {len(errors)} error(s):")
        for e in errors:
            print(f" {e}")
    else:
        print("error in output but could not parse message")


def main():
    parser = argparse.ArgumentParser(prog="explain")
    parser.add_argument("--file")
    parser.add_argument("--stdin", action="store_true")
    parser.add_argument(
        "--type", choices=["model", "core", "stats", "error", "auto"], default="auto"
    )
    parser.add_argument("--db", default=None)
    parser.add_argument("--debug", action="store_true")
    args = parser.parse_args()

    setup_logging(args.debug)

    if args.file:
        text = Path(args.file).read_text()
    elif args.stdin:
        text = sys.stdin.read()
    else:
        parser.error("provide --file or --stdin")
        return

    output_type = args.type if args.type != "auto" else detect_type(text)

    db = Z3DB(args.db)
    run_id = db.start_run("explain", text[:200])

    if output_type == "model":
        explain_model(text)
    elif output_type == "core":
        explain_core(text)
    elif output_type == "stats":
        explain_stats(text)
    elif output_type == "error":
        explain_error(text)
    else:
        print("could not determine output type")
        print("raw output:")
        print(text[:500])

    db.finish_run(run_id, "success", 0, 0)
    db.close()


if __name__ == "__main__":
    main()
92 .github/skills/memory-safety/SKILL.md vendored Normal file
@@ -0,0 +1,92 @@
---
name: memory-safety
description: Run AddressSanitizer and UndefinedBehaviorSanitizer on the Z3 test suite to detect memory errors, undefined behavior, and leaks. Logs each finding to z3agent.db.
---

Build Z3 with compiler-based sanitizer instrumentation, execute the test suite, and parse the runtime output for memory safety violations. Supported sanitizers are AddressSanitizer (heap and stack buffer overflows, use-after-free, double-free, memory leaks) and UndefinedBehaviorSanitizer (signed integer overflow, null pointer dereference, misaligned access, shift errors). Findings are deduplicated and stored in z3agent.db for triage and longitudinal tracking.

# Step 1: Configure and build

Action:
Invoke the script with the desired sanitizer flag. The script calls cmake with the appropriate `-fsanitize` flags and builds the `test-z3` target. Each sanitizer uses a separate build directory to avoid flag conflicts.

Expectation:
cmake configures successfully and make compiles the instrumented binary. If a prior build exists with matching flags, only incremental compilation runs.

Result:
On success: an instrumented `test-z3` binary is ready in the build directory. Proceed to Step 2.
On failure: verify compiler support for the requested sanitizer flags and review cmake output.

```bash
python3 scripts/memory_safety.py --sanitizer asan
python3 scripts/memory_safety.py --sanitizer ubsan
python3 scripts/memory_safety.py --sanitizer both
```

To reuse an existing build:
```bash
python3 scripts/memory_safety.py --sanitizer asan --skip-build --build-dir build/sanitizer-asan
```

# Step 2: Run and collect

Action:
Execute the instrumented test binary with halt_on_error=0 so all violations are reported rather than aborting on the first.

Expectation:
The script parses AddressSanitizer, UndefinedBehaviorSanitizer, and LeakSanitizer patterns from combined output, extracts source locations, and deduplicates by category/file/line.

Result:
On `clean`: no violations detected.
On `findings`: one or more violations found, each printed with severity, category, message, and source location.
On `timeout`: test suite did not finish; increase timeout or investigate.
On `error`: build or execution failed before sanitizer output.

```bash
python3 scripts/memory_safety.py --sanitizer asan --timeout 900 --debug
```
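The deduplication step can be sketched as follows. The two regexes are the ones `memory_safety.py` defines; the single-line log format is a simplifying assumption for illustration (real ASan reports put the source location on a separate stack-frame line):

```python
import re

# Regexes as defined in memory_safety.py
ASAN_ERROR = re.compile(r"ERROR:\s*AddressSanitizer:\s*(\S+)")
LOCATION = re.compile(r"(\S+\.(?:cpp|c|h|hpp)):(\d+)")

def dedupe_findings(log: str):
    """Deduplicate sanitizer findings by (category, file, line).

    Simplified sketch: assumes category and location appear on one line."""
    seen = set()
    findings = []
    for line in log.splitlines():
        m = ASAN_ERROR.search(line)
        if not m:
            continue
        loc = LOCATION.search(line)
        key = (m.group(1),
               loc.group(1) if loc else "?",
               loc.group(2) if loc else "?")
        if key not in seen:
            seen.add(key)
            findings.append(key)
    return findings

log = (
    "ERROR: AddressSanitizer: heap-use-after-free src/util/vector.cpp:88\n"
    "ERROR: AddressSanitizer: heap-use-after-free src/util/vector.cpp:88\n"
)
print(dedupe_findings(log))
```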

# Step 3: Interpret results

Action:
Review printed findings and query z3agent.db for historical comparison.

Expectation:
Each finding includes severity, category, message, and source location. The database query returns prior runs for trend analysis.

Result:
On `clean`: no action required; proceed.
On `findings`: triage by severity and category. Compare against prior runs to distinguish new regressions from known issues.
On `timeout`: increase the deadline or investigate a possible infinite loop.
On `error`: inspect build logs before re-running.

Query past runs:
```bash
python3 ../../shared/z3db.py runs --skill memory-safety --last 10
python3 ../../shared/z3db.py query "SELECT category, severity, file, line, message FROM findings WHERE run_id IN (SELECT run_id FROM runs WHERE skill='memory-safety') ORDER BY run_id DESC LIMIT 20"
```

# Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| sanitizer | choice | no | asan | which sanitizer to enable: asan, ubsan, or both |
| build-dir | path | no | build/sanitizer-{name} | path to the build directory |
| timeout | int | no | 600 | seconds before killing the test run |
| skip-build | flag | no | off | reuse an existing instrumented build |
| debug | flag | no | off | verbose cmake, make, and test output |
| db | path | no | .z3-agent/z3agent.db | path to the logging database |
287 .github/skills/memory-safety/scripts/memory_safety.py vendored Normal file
@@ -0,0 +1,287 @@
#!/usr/bin/env python3
|
||||
"""
|
||||
memory_safety.py: run sanitizer checks on Z3 test suite.
|
||||
|
||||
Usage:
|
||||
python memory_safety.py --sanitizer asan
|
||||
python memory_safety.py --sanitizer ubsan --debug
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import logging
|
||||
import os
|
||||
import re
import shutil
import subprocess
import sys
import time
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent.parent.parent / "shared"))
from z3db import Z3DB, setup_logging

logger = logging.getLogger("z3agent")

SANITIZER_FLAGS = {
    "asan": "-fsanitize=address -fno-omit-frame-pointer",
    "ubsan": "-fsanitize=undefined -fno-omit-frame-pointer",
}

ASAN_ERROR = re.compile(r"ERROR:\s*AddressSanitizer:\s*(\S+)")
UBSAN_ERROR = re.compile(r":\d+:\d+:\s*runtime error:\s*(.+)")
LEAK_ERROR = re.compile(r"ERROR:\s*LeakSanitizer:")
LOCATION = re.compile(r"(\S+\.(?:cpp|c|h|hpp)):(\d+)")


def check_dependencies():
    """Fail early if required build tools are not on PATH."""
    missing = []
    if not shutil.which("cmake"):
        missing.append(("cmake", "sudo apt install cmake"))
    if not shutil.which("make"):
        missing.append(("make", "sudo apt install build-essential"))

    cc = shutil.which("clang") or shutil.which("gcc")
    if not cc:
        missing.append(("clang or gcc", "sudo apt install clang"))

    if missing:
        print("required tools not found:", file=sys.stderr)
        for tool, install in missing:
            print(f"  {tool}: {install}", file=sys.stderr)
        sys.exit(1)


def find_repo_root() -> Path:
    d = Path.cwd()
    for _ in range(10):
        if (d / "CMakeLists.txt").exists() and (d / "src").is_dir():
            return d
        parent = d.parent
        if parent == d:
            break
        d = parent
    logger.error("could not locate Z3 repository root")
    sys.exit(1)


def build_is_configured(build_dir: Path, sanitizer: str) -> bool:
    """Check whether the build directory already has a matching cmake config."""
    cache = build_dir / "CMakeCache.txt"
    if not cache.is_file():
        return False
    expected = SANITIZER_FLAGS[sanitizer].split()[0]
    return expected in cache.read_text()


def configure(build_dir: Path, sanitizer: str, repo_root: Path) -> bool:
    """Run cmake with the requested sanitizer flags."""
    flags = SANITIZER_FLAGS[sanitizer]
    build_dir.mkdir(parents=True, exist_ok=True)
    cmd = [
        "cmake", str(repo_root),
        f"-DCMAKE_C_FLAGS={flags}",
        f"-DCMAKE_CXX_FLAGS={flags}",
        f"-DCMAKE_EXE_LINKER_FLAGS={flags}",
        "-DCMAKE_BUILD_TYPE=Debug",
        "-DZ3_BUILD_TEST=ON",
    ]
    logger.info("configuring %s build in %s", sanitizer, build_dir)
    logger.debug("cmake command: %s", " ".join(cmd))
    proc = subprocess.run(cmd, cwd=build_dir, capture_output=True, text=True)
    if proc.returncode != 0:
        logger.error("cmake failed:\n%s", proc.stderr)
        return False
    return True


def compile_tests(build_dir: Path) -> bool:
    """Compile the test-z3 target."""
    nproc = os.cpu_count() or 4
    cmd = ["make", f"-j{nproc}", "test-z3"]
    logger.info("compiling test-z3 (%d parallel jobs)", nproc)
    proc = subprocess.run(cmd, cwd=build_dir, capture_output=True, text=True)
    if proc.returncode != 0:
        logger.error("compilation failed:\n%s", proc.stderr[-2000:])
        return False
    return True


def run_tests(build_dir: Path, timeout: int) -> dict:
    """Execute test-z3 under sanitizer runtime and capture output."""
    test_bin = build_dir / "test-z3"
    if not test_bin.is_file():
        logger.error("test-z3 not found at %s", test_bin)
        return {"stdout": "", "stderr": "binary not found", "exit_code": -1,
                "duration_ms": 0}

    env = os.environ.copy()
    env["ASAN_OPTIONS"] = "detect_leaks=1:halt_on_error=0:print_stacktrace=1"
    env["UBSAN_OPTIONS"] = "print_stacktrace=1:halt_on_error=0"

    cmd = [str(test_bin), "/a"]
    logger.info("running: %s", " ".join(cmd))
    start = time.monotonic()
    try:
        proc = subprocess.run(
            cmd, capture_output=True, text=True, timeout=timeout,
            cwd=build_dir, env=env,
        )
    except subprocess.TimeoutExpired:
        ms = int((time.monotonic() - start) * 1000)
        logger.warning("test-z3 timed out after %dms", ms)
        return {"stdout": "", "stderr": "timeout", "exit_code": -1,
                "duration_ms": ms}

    ms = int((time.monotonic() - start) * 1000)
    logger.debug("exit_code=%d duration=%dms", proc.returncode, ms)
    return {
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "exit_code": proc.returncode,
        "duration_ms": ms,
    }


def parse_findings(output: str) -> list:
    """Extract sanitizer error reports from combined stdout and stderr."""
    findings = []
    lines = output.split("\n")

    for i, line in enumerate(lines):
        entry = None

        m = ASAN_ERROR.search(line)
        if m:
            entry = {"category": "asan", "message": m.group(1),
                     "severity": "high"}

        if not entry:
            m = LEAK_ERROR.search(line)
            if m:
                entry = {"category": "leak",
                         "message": "detected memory leaks",
                         "severity": "high"}

        if not entry:
            m = UBSAN_ERROR.search(line)
            if m:
                entry = {"category": "ubsan", "message": m.group(1),
                         "severity": "medium"}

        if not entry:
            continue

        file_path, line_no = None, None
        window = lines[max(0, i - 2):i + 5]
        for ctx in window:
            loc = LOCATION.search(ctx)
            if loc and "/usr/" not in loc.group(1):
                file_path = loc.group(1)
                line_no = int(loc.group(2))
                break

        entry["file"] = file_path
        entry["line"] = line_no
        entry["raw"] = line.strip()
        findings.append(entry)

    return findings


def deduplicate(findings: list) -> list:
    """Remove duplicate reports at the same category, file, and line."""
    seen = set()
    result = []
    for f in findings:
        key = (f["category"], f["file"], f["line"], f["message"])
        if key in seen:
            continue
        seen.add(key)
        result.append(f)
    return result


def main():
    parser = argparse.ArgumentParser(prog="memory-safety")
    parser.add_argument("--sanitizer", choices=["asan", "ubsan", "both"],
                        default="asan",
                        help="sanitizer to enable (default: asan)")
    parser.add_argument("--build-dir", default=None,
                        help="path to build directory")
    parser.add_argument("--timeout", type=int, default=600,
                        help="seconds before killing test run")
    parser.add_argument("--skip-build", action="store_true",
                        help="reuse existing instrumented build")
    parser.add_argument("--db", default=None,
                        help="path to z3agent.db")
    parser.add_argument("--debug", action="store_true")
    args = parser.parse_args()

    setup_logging(args.debug)
    check_dependencies()
    repo_root = find_repo_root()

    sanitizers = ["asan", "ubsan"] if args.sanitizer == "both" else [args.sanitizer]
    all_findings = []

    db = Z3DB(args.db)

    for san in sanitizers:
        if args.build_dir:
            build_dir = Path(args.build_dir)
        else:
            build_dir = repo_root / "build" / f"sanitizer-{san}"

        run_id = db.start_run("memory-safety", f"sanitizer={san}")
        db.log(f"sanitizer: {san}, build: {build_dir}", run_id=run_id)

        if not args.skip_build:
            needs_configure = not build_is_configured(build_dir, san)
            if needs_configure and not configure(build_dir, san, repo_root):
                db.finish_run(run_id, "error", 0, exit_code=1)
                print(f"FAIL: cmake configuration failed for {san}")
                continue
            if not compile_tests(build_dir):
                db.finish_run(run_id, "error", 0, exit_code=1)
                print(f"FAIL: compilation failed for {san}")
                continue

        result = run_tests(build_dir, args.timeout)
        combined = result["stdout"] + "\n" + result["stderr"]
        findings = deduplicate(parse_findings(combined))

        for f in findings:
            db.log_finding(
                run_id,
                category=f["category"],
                message=f["message"],
                severity=f["severity"],
                file=f["file"],
                line=f["line"],
                details={"raw": f["raw"]},
            )

        status = "clean" if not findings else "findings"
        if result["exit_code"] == -1:
            status = "timeout" if "timeout" in result["stderr"] else "error"

        db.finish_run(run_id, status, result["duration_ms"], result["exit_code"])
        all_findings.extend(findings)
        print(f"{san}: {len(findings)} finding(s), {result['duration_ms']}ms")

    if all_findings:
        print(f"\nTotal: {len(all_findings)} finding(s)")
        for f in all_findings:
            loc = f"{f['file']}:{f['line']}" if f["file"] else "unknown location"
            print(f"  [{f['severity']}] {f['category']}: {f['message']} at {loc}")
        db.close()
        sys.exit(1)
    else:
        print("\nNo sanitizer findings.")
        db.close()
        sys.exit(0)


if __name__ == "__main__":
    main()
75 .github/skills/optimize/SKILL.md vendored Normal file
@@ -0,0 +1,75 @@
---
name: optimize
description: Solve constrained optimization problems using Z3. Supports minimization and maximization of objective functions over integer, real, and bitvector domains.
---

Given a set of constraints and an objective function, find the optimal value. Z3 supports both hard constraints (must hold) and soft constraints (weighted preferences), as well as lexicographic multi-objective optimization.

# Step 1: Formulate the problem

Action:
Write constraints and an objective using `(minimize ...)` or
`(maximize ...)` directives, followed by `(check-sat)` and `(get-model)`.

Expectation:
A valid SMT-LIB2 formula with at least one optimization directive and
all variables declared.

Result:
If the formula is well-formed, proceed to Step 2. For multi-objective
problems, list directives in priority order for lexicographic optimization.

Example: minimize `x + y` subject to `x >= 1`, `y >= 2`, `x + y <= 10`:
```smtlib
(declare-const x Int)
(declare-const y Int)
(assert (>= x 1))
(assert (>= y 2))
(assert (<= (+ x y) 10))
(minimize (+ x y))
(check-sat)
(get-model)
```
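The introduction mentions soft constraints and lexicographic multi-objective optimization; a minimal sketch of both, using Z3's `assert-soft` directive (the constraints and weights here are illustrative, not from the skill's own examples):

```smtlib
(declare-const x Int)
(declare-const y Int)
(assert (>= x 0))                 ; hard constraint: must hold
(assert-soft (<= x 5) :weight 2)  ; soft: violating it costs 2
(assert-soft (= y x) :weight 1)   ; soft: violating it costs 1
(minimize (+ x y))                ; first objective (highest priority)
(maximize x)                      ; second objective, lexicographic order
(check-sat)
(get-model)
```

By default Z3 treats multiple objectives lexicographically in declaration order; the soft constraints themselves induce a MaxSMT penalty objective alongside the explicit `minimize`/`maximize` directives.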
# Step 2: Run the optimizer

Action:
Invoke optimize.py with the formula or file path.

Expectation:
The script prints `sat` with the optimal assignment, `unsat`, `unknown`,
or `timeout`. A run entry is logged to z3agent.db.

Result:
On `sat`: proceed to Step 3 to read the optimal values.
On `unsat` or `timeout`: check constraints for contradictions or simplify.

```bash
python3 scripts/optimize.py --file scheduling.smt2
python3 scripts/optimize.py --formula "<inline smt-lib2>" --debug
```

# Step 3: Interpret the output

Action:
Parse the objective value and satisfying assignment from the output.

Expectation:
`sat` with a model containing the optimal value, `unsat` indicating
infeasibility, or `unknown`/`timeout`.

Result:
On `sat`: report the optimal value and assignment.
On `unsat`: the constraints are contradictory; no feasible solution exists.
On `unknown`/`timeout`: relax constraints or try **simplify**.

# Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| formula | string | no | | SMT-LIB2 formula with minimize/maximize |
| file | path | no | | path to .smt2 file |
| timeout | int | no | 60 | seconds |
| z3 | path | no | auto | path to z3 binary |
| debug | flag | no | off | verbose tracing |
| db | path | no | .z3-agent/z3agent.db | logging database |
58 .github/skills/optimize/scripts/optimize.py vendored Normal file
@@ -0,0 +1,58 @@
#!/usr/bin/env python3
"""
optimize.py: solve constrained optimization problems via Z3.

Usage:
    python optimize.py --file scheduling.smt2
    python optimize.py --formula "(declare-const x Int)..." --debug
"""

import argparse
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent.parent.parent / "shared"))
from z3db import Z3DB, run_z3, parse_model, setup_logging


def main():
    parser = argparse.ArgumentParser(prog="optimize")
    parser.add_argument("--formula")
    parser.add_argument("--file")
    parser.add_argument("--timeout", type=int, default=60)
    parser.add_argument("--z3", default=None)
    parser.add_argument("--db", default=None)
    parser.add_argument("--debug", action="store_true")
    args = parser.parse_args()

    setup_logging(args.debug)

    if args.file:
        formula = Path(args.file).read_text()
    elif args.formula:
        formula = args.formula
    else:
        parser.error("provide --formula or --file")
        return

    db = Z3DB(args.db)
    run_id = db.start_run("optimize", formula)

    result = run_z3(formula, z3_bin=args.z3, timeout=args.timeout, debug=args.debug)

    model = parse_model(result["stdout"]) if result["result"] == "sat" else None

    db.log_formula(run_id, formula, result["result"], str(model) if model else None)
    db.finish_run(run_id, result["result"], result["duration_ms"], result["exit_code"])

    print(result["result"])
    if model:
        for name, val in model.items():
            print(f"  {name} = {val}")

    db.close()
    sys.exit(0 if result["exit_code"] == 0 else 1)


if __name__ == "__main__":
    main()
83 .github/skills/prove/SKILL.md vendored Normal file
@@ -0,0 +1,83 @@
---
name: prove
description: Prove validity of logical statements by negation and satisfiability checking. If the negation is unsatisfiable, the original statement is valid. Otherwise a counterexample is returned.
---

Given a conjecture (an SMT-LIB2 assertion or a natural language claim), determine whether it holds universally. The method is standard: negate the conjecture and check satisfiability. If the negation is unsatisfiable, the original is valid. If satisfiable, the model is a counterexample.

# Step 1: Prepare the negated formula

Action:
Wrap the conjecture in `(assert (not ...))` and append
`(check-sat)(get-model)`.

Expectation:
A complete SMT-LIB2 formula that negates the original conjecture with
all variables declared.

Result:
If the negation is well-formed, proceed to Step 2.
If the conjecture is natural language, run **encode** first.

Example: to prove that `(> x 3)` implies `(> x 1)`:
```smtlib
(declare-const x Int)
(assert (not (=> (> x 3) (> x 1))))
(check-sat)
(get-model)
```
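For contrast, a sketch of the invalid case (this conjecture is not part of the skill's own examples): attempting to prove that `(> x 1)` implies `(> x 3)`. The negation is satisfiable, so the conjecture is invalid and the model (e.g. `x = 2`, which satisfies `x > 1` but not `x > 3`) is a concrete counterexample:

```smtlib
(declare-const x Int)
(assert (not (=> (> x 1) (> x 3))))
(check-sat)
(get-model)
```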
# Step 2: Run the prover

Action:
Invoke prove.py with the conjecture and variable declarations.

Expectation:
The script prints `valid`, `invalid` (with counterexample), `unknown`,
or `timeout`. A run entry is logged to z3agent.db.

Result:
On `valid`: proceed to **explain** if the user needs a summary.
On `invalid`: report the counterexample directly.
On `unknown`/`timeout`: try **simplify** first, or increase the timeout.

```bash
python3 scripts/prove.py --conjecture "(=> (> x 3) (> x 1))" --vars "x:Int"
```

For file input where the file contains the full negated formula:
```bash
python3 scripts/prove.py --file negated.smt2
```

With debug tracing:
```bash
python3 scripts/prove.py --conjecture "(=> (> x 3) (> x 1))" --vars "x:Int" --debug
```

# Step 3: Interpret the output

Action:
Read the prover output to determine validity of the conjecture.

Expectation:
One of `valid`, `invalid` (with counterexample), `unknown`, or `timeout`.

Result:
On `valid`: the conjecture holds universally.
On `invalid`: the model shows a concrete counterexample.
On `unknown`/`timeout`: the conjecture may require auxiliary lemmas or induction.

# Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| conjecture | string | no | | the assertion to prove (without negation) |
| vars | string | no | | variable declarations as "name:sort" pairs, comma-separated |
| file | path | no | | .smt2 file with the negated formula |
| timeout | int | no | 30 | seconds |
| z3 | path | no | auto | path to z3 binary |
| debug | flag | no | off | verbose tracing |
| db | path | no | .z3-agent/z3agent.db | logging database |

Either `conjecture` (with `vars`) or `file` must be provided.
82 .github/skills/prove/scripts/prove.py vendored Normal file
@@ -0,0 +1,82 @@
#!/usr/bin/env python3
"""
prove.py: prove validity by negation + satisfiability check.

Usage:
    python prove.py --conjecture "(=> (> x 3) (> x 1))" --vars "x:Int"
    python prove.py --file negated.smt2
"""

import argparse
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent.parent.parent / "shared"))
from z3db import Z3DB, run_z3, parse_model, setup_logging


def build_formula(conjecture: str, vars_str: str) -> str:
    lines = []
    if vars_str:
        for v in vars_str.split(","):
            v = v.strip()
            name, sort = v.split(":")
            lines.append(f"(declare-const {name.strip()} {sort.strip()})")
    lines.append(f"(assert (not {conjecture}))")
    lines.append("(check-sat)")
    lines.append("(get-model)")
    return "\n".join(lines)


def main():
    parser = argparse.ArgumentParser(prog="prove")
    parser.add_argument("--conjecture", help="assertion to prove")
    parser.add_argument("--vars", help="variable declarations, e.g. 'x:Int,y:Bool'")
    parser.add_argument("--file", help="path to .smt2 file with negated formula")
    parser.add_argument("--timeout", type=int, default=30)
    parser.add_argument("--z3", default=None)
    parser.add_argument("--db", default=None)
    parser.add_argument("--debug", action="store_true")
    args = parser.parse_args()

    setup_logging(args.debug)

    if args.file:
        formula = Path(args.file).read_text()
    elif args.conjecture:
        formula = build_formula(args.conjecture, args.vars or "")
    else:
        parser.error("provide --conjecture or --file")
        return

    db = Z3DB(args.db)
    run_id = db.start_run("prove", formula)

    result = run_z3(formula, z3_bin=args.z3, timeout=args.timeout, debug=args.debug)

    if result["result"] == "unsat":
        verdict = "valid"
    elif result["result"] == "sat":
        verdict = "invalid"
    else:
        verdict = result["result"]

    model = parse_model(result["stdout"]) if verdict == "invalid" else None

    db.log_formula(run_id, formula, verdict, str(model) if model else None)
    db.finish_run(run_id, verdict, result["duration_ms"], result["exit_code"])

    print(verdict)
    if model:
        print("counterexample:")
        for name, val in model.items():
            print(f"  {name} = {val}")

    db.close()
    # Exit 0 when we successfully determined validity or invalidity;
    # exit 1 only for errors/timeouts.
    sys.exit(0 if verdict in ("valid", "invalid") else 1)


if __name__ == "__main__":
    main()
57 .github/skills/shared/schema.sql vendored Normal file
@@ -0,0 +1,57 @@
-- z3agent schema v1

PRAGMA journal_mode=WAL;
PRAGMA foreign_keys=ON;

CREATE TABLE IF NOT EXISTS runs (
    run_id      INTEGER PRIMARY KEY AUTOINCREMENT,
    skill       TEXT NOT NULL,
    input_hash  TEXT,
    status      TEXT NOT NULL DEFAULT 'running',
    duration_ms INTEGER,
    exit_code   INTEGER,
    timestamp   TEXT NOT NULL DEFAULT (datetime('now'))
);

CREATE INDEX IF NOT EXISTS idx_runs_skill ON runs(skill);
CREATE INDEX IF NOT EXISTS idx_runs_status ON runs(status);

CREATE TABLE IF NOT EXISTS formulas (
    formula_id INTEGER PRIMARY KEY AUTOINCREMENT,
    run_id     INTEGER REFERENCES runs(run_id) ON DELETE CASCADE,
    smtlib2    TEXT NOT NULL,
    result     TEXT,
    model      TEXT,
    stats      TEXT,
    timestamp  TEXT NOT NULL DEFAULT (datetime('now'))
);

CREATE INDEX IF NOT EXISTS idx_formulas_run ON formulas(run_id);
CREATE INDEX IF NOT EXISTS idx_formulas_result ON formulas(result);

CREATE TABLE IF NOT EXISTS findings (
    finding_id INTEGER PRIMARY KEY AUTOINCREMENT,
    run_id     INTEGER REFERENCES runs(run_id) ON DELETE CASCADE,
    category   TEXT NOT NULL,
    severity   TEXT,
    file       TEXT,
    line       INTEGER,
    message    TEXT NOT NULL,
    details    TEXT,
    timestamp  TEXT NOT NULL DEFAULT (datetime('now'))
);

CREATE INDEX IF NOT EXISTS idx_findings_run ON findings(run_id);
CREATE INDEX IF NOT EXISTS idx_findings_category ON findings(category);
CREATE INDEX IF NOT EXISTS idx_findings_severity ON findings(severity);

CREATE TABLE IF NOT EXISTS interaction_log (
    log_id    INTEGER PRIMARY KEY AUTOINCREMENT,
    run_id    INTEGER REFERENCES runs(run_id) ON DELETE SET NULL,
    level     TEXT NOT NULL DEFAULT 'info',
    message   TEXT NOT NULL,
    timestamp TEXT NOT NULL DEFAULT (datetime('now'))
);

CREATE INDEX IF NOT EXISTS idx_log_run ON interaction_log(run_id);
CREATE INDEX IF NOT EXISTS idx_log_level ON interaction_log(level);
364 .github/skills/shared/z3db.py vendored Normal file
@@ -0,0 +1,364 @@
#!/usr/bin/env python3
"""
z3db: shared library and CLI for Z3 skill scripts.

Library usage:
    from z3db import Z3DB, find_z3, run_z3

CLI usage:
    python z3db.py init
    python z3db.py status
    python z3db.py log [--run-id N]
    python z3db.py runs [--skill solve] [--last N]
    python z3db.py query "SELECT ..."
"""

import argparse
import hashlib
import json
import logging
import os
import re
import shutil
import sqlite3
import subprocess
import sys
import time
from pathlib import Path
from typing import Optional

SCHEMA_PATH = Path(__file__).parent / "schema.sql"
DEFAULT_DB_DIR = ".z3-agent"
DEFAULT_DB_NAME = "z3agent.db"

logger = logging.getLogger("z3agent")


def setup_logging(debug: bool = False):
    level = logging.DEBUG if debug else logging.INFO
    fmt = (
        "[%(levelname)s] %(message)s"
        if not debug
        else "[%(levelname)s %(asctime)s] %(message)s"
    )
    logging.basicConfig(level=level, format=fmt, stream=sys.stderr)


class Z3DB:
    """SQLite handle for z3agent.db; tracks runs, formulas, findings, logs."""

    def __init__(self, db_path: Optional[str] = None):
        if db_path is None:
            db_dir = Path(DEFAULT_DB_DIR)
            db_dir.mkdir(exist_ok=True)
            db_path = str(db_dir / DEFAULT_DB_NAME)
        self.db_path = db_path
        self.conn = sqlite3.connect(db_path)
        self.conn.execute("PRAGMA foreign_keys=ON")
        self.conn.row_factory = sqlite3.Row
        self._init_schema()

    def _init_schema(self):
        self.conn.executescript(SCHEMA_PATH.read_text())

    def close(self):
        self.conn.close()

    def start_run(self, skill: str, input_text: str = "") -> int:
        input_hash = hashlib.sha256(input_text.encode()).hexdigest()[:16]
        cur = self.conn.execute(
            "INSERT INTO runs (skill, input_hash) VALUES (?, ?)",
            (skill, input_hash),
        )
        self.conn.commit()
        run_id = cur.lastrowid
        logger.debug("started run %d (skill=%s, hash=%s)", run_id, skill, input_hash)
        return run_id

    def finish_run(
        self, run_id: int, status: str, duration_ms: int, exit_code: int = 0
    ):
        self.conn.execute(
            "UPDATE runs SET status=?, duration_ms=?, exit_code=? WHERE run_id=?",
            (status, duration_ms, exit_code, run_id),
        )
        self.conn.commit()
        logger.debug("finished run %d: %s (%dms)", run_id, status, duration_ms)

    def log_formula(
        self,
        run_id: int,
        smtlib2: str,
        result: str = None,
        model: str = None,
        stats: dict = None,
    ) -> int:
        cur = self.conn.execute(
            "INSERT INTO formulas (run_id, smtlib2, result, model, stats) "
            "VALUES (?, ?, ?, ?, ?)",
            (run_id, smtlib2, result, model, json.dumps(stats) if stats else None),
        )
        self.conn.commit()
        return cur.lastrowid

    def log_finding(
        self,
        run_id: int,
        category: str,
        message: str,
        severity: str = None,
        file: str = None,
        line: int = None,
        details: dict = None,
    ) -> int:
        cur = self.conn.execute(
            "INSERT INTO findings (run_id, category, severity, file, line, "
            "message, details) VALUES (?, ?, ?, ?, ?, ?, ?)",
            (
                run_id,
                category,
                severity,
                file,
                line,
                message,
                json.dumps(details) if details else None,
            ),
        )
        self.conn.commit()
        return cur.lastrowid

    def log(self, message: str, level: str = "info", run_id: int = None):
        """Write to stderr and to the interaction_log table."""
        getattr(logger, level, logger.info)(message)
        self.conn.execute(
            "INSERT INTO interaction_log (run_id, level, message) VALUES (?, ?, ?)",
            (run_id, level, message),
        )
        self.conn.commit()

    def get_runs(self, skill: str = None, last: int = 10):
        sql = "SELECT * FROM runs"
        params = []
        if skill:
            sql += " WHERE skill = ?"
            params.append(skill)
        sql += " ORDER BY run_id DESC LIMIT ?"
        params.append(last)
        return self.conn.execute(sql, params).fetchall()

    def get_status(self) -> dict:
        rows = self.conn.execute(
            "SELECT status, COUNT(*) as cnt FROM runs GROUP BY status"
        ).fetchall()
        total = sum(r["cnt"] for r in rows)
        by_status = {r["status"]: r["cnt"] for r in rows}
        last = self.conn.execute(
            "SELECT timestamp FROM runs ORDER BY run_id DESC LIMIT 1"
        ).fetchone()
        return {
            "total": total,
            **by_status,
            "last_run": last["timestamp"] if last else None,
        }

    def get_logs(self, run_id: int = None, last: int = 50):
        if run_id:
            return self.conn.execute(
                "SELECT * FROM interaction_log WHERE run_id=? "
                "ORDER BY log_id DESC LIMIT ?",
                (run_id, last),
            ).fetchall()
        return self.conn.execute(
            "SELECT * FROM interaction_log ORDER BY log_id DESC LIMIT ?", (last,)
        ).fetchall()

    def query(self, sql: str):
        return self.conn.execute(sql).fetchall()


def find_z3(hint: str = None) -> str:
    """Locate the z3 binary: explicit path > build dirs > PATH."""
    candidates = []
    if hint:
        candidates.append(hint)

    repo_root = _find_repo_root()
    if repo_root:
        for build_dir in ["build", "build/release", "build/debug"]:
            candidates.append(str(repo_root / build_dir / "z3"))

    path_z3 = shutil.which("z3")
    if path_z3:
        candidates.append(path_z3)

    for c in candidates:
        p = Path(c)
        if p.is_file() and os.access(p, os.X_OK):
            logger.debug("found z3: %s", p)
            return str(p)

    logger.error("z3 binary not found. Searched: %s", candidates)
    sys.exit(1)


def _find_repo_root() -> Optional[Path]:
    d = Path.cwd()
    for _ in range(10):
        if (d / "CMakeLists.txt").exists() and (d / "src").is_dir():
            return d
        parent = d.parent
        if parent == d:
            break
        d = parent
    return None


def run_z3(
    formula: str,
    z3_bin: str = None,
    timeout: int = 30,
    args: list = None,
    debug: bool = False,
) -> dict:
    """Pipe an SMT-LIB2 formula into z3 -in, return parsed output."""
    z3_path = find_z3(z3_bin)
    cmd = [z3_path, "-in"] + (args or [])

    logger.debug("cmd: %s", " ".join(cmd))
    logger.debug("stdin:\n%s", formula)

    start = time.monotonic()
    try:
        proc = subprocess.run(
            cmd,
            input=formula,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        duration_ms = int((time.monotonic() - start) * 1000)
        logger.warning("z3 timed out after %dms", duration_ms)
        return {
            "stdout": "",
            "stderr": "timeout",
            "exit_code": -1,
            "duration_ms": duration_ms,
            "result": "timeout",
        }

    duration_ms = int((time.monotonic() - start) * 1000)

    logger.debug("exit_code=%d duration=%dms", proc.returncode, duration_ms)
    logger.debug("stdout:\n%s", proc.stdout)
    if proc.stderr:
        logger.debug("stderr:\n%s", proc.stderr)

    first_line = proc.stdout.strip().split("\n")[0].strip() if proc.stdout else ""
    result = first_line if first_line in ("sat", "unsat", "unknown") else "error"

    return {
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "exit_code": proc.returncode,
        "duration_ms": duration_ms,
        "result": result,
    }


def parse_model(stdout: str) -> Optional[dict]:
    """Pull define-fun entries from a (get-model) response."""
    model = {}
    for m in re.finditer(r"\(define-fun\s+(\S+)\s+\(\)\s+\S+\s+(.+?)\)", stdout):
        model[m.group(1)] = m.group(2).strip()
    return model if model else None


def parse_stats(stdout: str) -> Optional[dict]:
    """Parse :key value pairs from z3 -st output."""
    stats = {}
    for m in re.finditer(r":(\S+)\s+([\d.]+)", stdout):
        key, val = m.group(1), m.group(2)
        stats[key] = float(val) if "." in val else int(val)
    return stats if stats else None


def parse_unsat_core(stdout: str) -> Optional[list]:
    for line in stdout.strip().split("\n"):
        line = line.strip()
        if line.startswith("(") and not line.startswith("(error"):
            labels = line.strip("()").split()
            if labels:
                return labels
    return None


def cli():
    parser = argparse.ArgumentParser(
        description="Z3 Agent database CLI",
        prog="z3db",
    )
    parser.add_argument("--db", default=None, help="path to z3agent.db")
    parser.add_argument("--debug", action="store_true", help="verbose output")

    sub = parser.add_subparsers(dest="command")

    sub.add_parser("init", help="initialize the database")

    sub.add_parser("status", help="show run summary")

    log_p = sub.add_parser("log", help="show interaction log")
    log_p.add_argument("--run-id", type=int, help="filter by run ID")
    log_p.add_argument("--last", type=int, default=50)

    runs_p = sub.add_parser("runs", help="list runs")
    runs_p.add_argument("--skill", help="filter by skill name")
    runs_p.add_argument("--last", type=int, default=10)

    query_p = sub.add_parser("query", help="run raw SQL")
    query_p.add_argument("sql", help="SQL query string")

    args = parser.parse_args()
    setup_logging(args.debug)

    db = Z3DB(args.db)

    if args.command == "init":
        print(f"Database initialized at {db.db_path}")

    elif args.command == "status":
        s = db.get_status()
        print(
            f"Runs: {s['total']}"
            f" | success: {s.get('success', 0)}"
            f" | error: {s.get('error', 0)}"
            f" | timeout: {s.get('timeout', 0)}"
            f" | Last: {s['last_run'] or 'never'}"
        )

    elif args.command == "log":
        for row in db.get_logs(args.run_id, args.last):
            print(
                f"[{row['level']}] {row['timestamp']} "
                f"(run {row['run_id']}): {row['message']}"
            )

    elif args.command == "runs":
        for row in db.get_runs(args.skill, args.last):
            print(
                f"#{row['run_id']} {row['skill']} {row['status']} "
                f"{row['duration_ms']}ms @ {row['timestamp']}"
            )

    elif args.command == "query":
        for row in db.query(args.sql):
            print(dict(row))

    else:
        parser.print_help()

    db.close()


if __name__ == "__main__":
    cli()
76 .github/skills/simplify/SKILL.md vendored Normal file

@ -0,0 +1,76 @@
---
name: simplify
description: Reduce formula complexity using Z3 tactic chains. Supports configurable tactic pipelines for boolean, arithmetic, and bitvector simplification.
---

Given a formula, apply a sequence of Z3 tactics to produce an equivalent but simpler form. This is useful for understanding what Z3 sees after preprocessing, debugging tactic selection, and reducing formula size before solving.

# Step 1: Choose tactics

Action:
Select a tactic chain from the available Z3 tactics based on the formula's theory.

Expectation:
A comma-separated list of tactic names suitable for the formula domain.

Result:
If unsure, use the default chain: `simplify,propagate-values,ctx-simplify`.
For bitvector formulas, add `bit-blast`. Proceed to Step 2.

| Tactic | What it does |
|--------|-------------|
| simplify | constant folding, algebraic identities |
| propagate-values | substitute known equalities |
| ctx-simplify | context-dependent simplification |
| elim-uncnstr | remove unconstrained variables |
| solve-eqs | Gaussian elimination |
| bit-blast | reduce bitvectors to booleans |
| tseitin-cnf | convert to CNF |
| aig | and-inverter graph reduction |

# Step 2: Run simplification

Action:
Invoke simplify.py with the formula and optional tactic chain.

Expectation:
The script applies each tactic in sequence and prints the simplified formula. A run entry is logged to z3agent.db.

Result:
If the output is simpler, pass it to **solve** or **prove**.
If unchanged, try a different tactic chain.

```bash
python3 scripts/simplify.py --formula "(assert (and (> x 0) (> x 0)))" --vars "x:Int"
python3 scripts/simplify.py --file formula.smt2 --tactics "simplify,propagate-values,ctx-simplify"
python3 scripts/simplify.py --file formula.smt2 --debug
```

Without `--tactics`, the script applies the default chain: `simplify`, `propagate-values`, `ctx-simplify`.

# Step 3: Interpret the output

Action:
Read the simplified formula output in SMT-LIB2 syntax.

Expectation:
One or more `(assert ...)` blocks representing equivalent subgoals.

Result:
A smaller formula indicates successful reduction. Pass the result to **solve**, **prove**, or **optimize** as needed.

# Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| formula | string | no | | SMT-LIB2 formula to simplify |
| vars | string | no | | variable declarations as "name:sort" pairs |
| file | path | no | | path to .smt2 file |
| tactics | string | no | simplify,propagate-values,ctx-simplify | comma-separated tactic names |
| timeout | int | no | 30 | seconds |
| z3 | path | no | auto | path to z3 binary |
| debug | flag | no | off | verbose tracing |
| db | path | no | .z3-agent/z3agent.db | logging database |
82 .github/skills/simplify/scripts/simplify.py vendored Normal file

@ -0,0 +1,82 @@
#!/usr/bin/env python3
"""
simplify.py: apply Z3 tactics to simplify an SMT-LIB2 formula.

Usage:
    python simplify.py --formula "(assert (and (> x 0) (> x 0)))" --vars "x:Int"
    python simplify.py --file formula.smt2 --tactics "simplify,solve-eqs"
"""

import argparse
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent.parent.parent / "shared"))
from z3db import Z3DB, run_z3, setup_logging

DEFAULT_TACTICS = "simplify,propagate-values,ctx-simplify"


def build_tactic_formula(base_formula: str, tactics: str) -> str:
    tactic_list = [t.strip() for t in tactics.split(",")]
    if len(tactic_list) == 1:
        tactic_expr = f"(then {tactic_list[0]} skip)"
    else:
        tactic_expr = "(then " + " ".join(tactic_list) + ")"
    return base_formula + f"\n(apply {tactic_expr})\n"
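To see exactly what `build_tactic_formula` appends, here is a standalone sketch of the same logic run on a toy formula:

```python
def build_tactic_formula(base_formula, tactics):
    # Chain the tactics with (then ...); a lone tactic is padded with skip
    tactic_list = [t.strip() for t in tactics.split(",")]
    if len(tactic_list) == 1:
        tactic_expr = f"(then {tactic_list[0]} skip)"
    else:
        tactic_expr = "(then " + " ".join(tactic_list) + ")"
    return base_formula + f"\n(apply {tactic_expr})\n"

print(build_tactic_formula("(assert (> x 0))", "simplify, solve-eqs"))
# (assert (> x 0))
# (apply (then simplify solve-eqs))
```

The `(apply ...)` command makes Z3 print the simplified goals instead of solving, which is what lets the script return the reduced formula.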


def build_formula_from_parts(formula_str: str, vars_str: str) -> str:
    lines = []
    if vars_str:
        for v in vars_str.split(","):
            v = v.strip()
            name, sort = v.split(":")
            lines.append(f"(declare-const {name.strip()} {sort.strip()})")
    lines.append(formula_str)
    return "\n".join(lines)
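`build_formula_from_parts` can likewise be exercised on its own; a minimal sketch with illustrative variable declarations:

```python
def build_formula_from_parts(formula_str, vars_str):
    # Turn "name:sort" pairs into declare-const lines, then append the assertion
    lines = []
    if vars_str:
        for v in vars_str.split(","):
            name, sort = v.strip().split(":")
            lines.append(f"(declare-const {name.strip()} {sort.strip()})")
    lines.append(formula_str)
    return "\n".join(lines)

print(build_formula_from_parts("(assert (< x y))", "x:Int, y:Int"))
# (declare-const x Int)
# (declare-const y Int)
# (assert (< x y))
```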


def main():
    parser = argparse.ArgumentParser(prog="simplify")
    parser.add_argument("--formula")
    parser.add_argument("--vars")
    parser.add_argument("--file")
    parser.add_argument("--tactics", default=DEFAULT_TACTICS)
    parser.add_argument("--timeout", type=int, default=30)
    parser.add_argument("--z3", default=None)
    parser.add_argument("--db", default=None)
    parser.add_argument("--debug", action="store_true")
    args = parser.parse_args()

    setup_logging(args.debug)

    if args.file:
        base = Path(args.file).read_text()
    elif args.formula:
        base = build_formula_from_parts(args.formula, args.vars or "")
    else:
        parser.error("provide --formula or --file")
        return

    formula = build_tactic_formula(base, args.tactics)

    db = Z3DB(args.db)
    run_id = db.start_run("simplify", formula)

    result = run_z3(formula, z3_bin=args.z3, timeout=args.timeout, debug=args.debug)

    status = "success" if result["exit_code"] == 0 else "error"
    db.log_formula(run_id, formula, status)
    db.finish_run(run_id, status, result["duration_ms"], result["exit_code"])

    print(result["stdout"])
    if result["stderr"] and result["exit_code"] != 0:
        print(result["stderr"], file=sys.stderr)

    db.close()
    sys.exit(0 if result["exit_code"] == 0 else 1)


if __name__ == "__main__":
    main()
75 .github/skills/solve/SKILL.md vendored Normal file

@ -0,0 +1,75 @@
---
name: solve
description: Check satisfiability of SMT-LIB2 formulas using Z3. Returns sat/unsat with models or unsat cores. Logs every invocation to z3agent.db for auditability.
---

Given an SMT-LIB2 formula (or a set of constraints described in natural language), determine whether the formula is satisfiable. If sat, extract a satisfying assignment. If unsat and tracking labels are present, extract the unsat core.

# Step 1: Prepare the formula

Action:
Convert the input to valid SMT-LIB2. If the input is natural language, use the **encode** skill first.

Expectation:
A syntactically valid SMT-LIB2 formula ending with `(check-sat)` and either `(get-model)` or `(get-unsat-core)` as appropriate.

Result:
If valid SMT-LIB2 is ready, proceed to Step 2.
If encoding is needed, run **encode** first and return here.

# Step 2: Run Z3

Action:
Invoke solve.py with the formula string or file path.

Expectation:
The script pipes the formula to `z3 -in`, logs the run to `.z3-agent/z3agent.db`, and prints the result.

Result:
Output is one of `sat`, `unsat`, `unknown`, or `timeout`.
Proceed to Step 3 to interpret.

```bash
python3 scripts/solve.py --formula "(declare-const x Int)(assert (> x 0))(check-sat)(get-model)"
```

For file input:
```bash
python3 scripts/solve.py --file problem.smt2
```

With debug tracing:
```bash
python3 scripts/solve.py --file problem.smt2 --debug
```

# Step 3: Interpret the output

Action:
Parse the Z3 output to determine satisfiability and extract any model or unsat core.

Expectation:
`sat` with a model, `unsat` optionally with a core, `unknown`, or `timeout`.

Result:
On `sat`: report the model to the user.
On `unsat`: report the core if available.
On `unknown`/`timeout`: try **simplify** or increase the timeout.

# Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| formula | string | no | | SMT-LIB2 formula as a string |
| file | path | no | | path to an .smt2 file |
| timeout | int | no | 30 | seconds before killing z3 |
| z3 | path | no | auto | explicit path to z3 binary |
| debug | flag | no | off | print z3 command, stdin, stdout, stderr, timing |
| db | path | no | .z3-agent/z3agent.db | path to the logging database |

Either `formula` or `file` must be provided.
64 .github/skills/solve/scripts/solve.py vendored Normal file

@ -0,0 +1,64 @@
#!/usr/bin/env python3
"""
solve.py: check satisfiability of an SMT-LIB2 formula via Z3.

Usage:
    python solve.py --formula "(declare-const x Int)(assert (> x 0))(check-sat)(get-model)"
    python solve.py --file problem.smt2
    python solve.py --file problem.smt2 --debug --timeout 60
"""

import argparse
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent.parent.parent / "shared"))
from z3db import Z3DB, run_z3, parse_model, parse_unsat_core, setup_logging


def main():
    parser = argparse.ArgumentParser(prog="solve")
    parser.add_argument("--formula", help="SMT-LIB2 formula string")
    parser.add_argument("--file", help="path to .smt2 file")
    parser.add_argument("--timeout", type=int, default=30)
    parser.add_argument("--z3", default=None, help="path to z3 binary")
    parser.add_argument("--db", default=None)
    parser.add_argument("--debug", action="store_true")
    args = parser.parse_args()

    setup_logging(args.debug)

    if args.file:
        formula = Path(args.file).read_text()
    elif args.formula:
        formula = args.formula
    else:
        parser.error("provide --formula or --file")
        return

    db = Z3DB(args.db)
    run_id = db.start_run("solve", formula)

    result = run_z3(formula, z3_bin=args.z3, timeout=args.timeout, debug=args.debug)

    model = parse_model(result["stdout"]) if result["result"] == "sat" else None
    core = parse_unsat_core(result["stdout"]) if result["result"] == "unsat" else None

    db.log_formula(run_id, formula, result["result"], str(model) if model else None)
    db.finish_run(run_id, result["result"], result["duration_ms"], result["exit_code"])

    print(result["result"])
    if model:
        for name, val in model.items():
            print(f"  {name} = {val}")
    if core:
        print("unsat core:", " ".join(core))
    if result["stderr"] and result["result"] == "error":
        print(result["stderr"], file=sys.stderr)

    db.close()
    sys.exit(0 if result["exit_code"] == 0 else 1)


if __name__ == "__main__":
    main()
81 .github/skills/static-analysis/SKILL.md vendored Normal file

@ -0,0 +1,81 @@
---
name: static-analysis
description: Run Clang Static Analyzer (scan-build) on Z3 source and log structured findings to z3agent.db.
---

Run the Clang Static Analyzer over a CMake build of Z3, parse the resulting plist diagnostics, and record each finding with file, line, category, and description. This skill wraps scan-build into a reproducible, logged workflow suitable for regular analysis sweeps and regression tracking.

# Step 1: Run the analysis

Action:
Invoke the script pointing at the CMake build directory. The script runs `scan-build cmake ..` followed by `scan-build make` and writes checker output to the output directory.

Expectation:
scan-build completes within the timeout, producing plist diagnostic files in the output directory (defaults to a `scan-results` subdirectory of the build directory).

Result:
On success: diagnostics are parsed and findings are printed. Proceed to Step 2.
On failure: verify that clang and scan-build are installed and that the build directory contains a valid CMake configuration.

```bash
python3 scripts/static_analysis.py --build-dir build
python3 scripts/static_analysis.py --build-dir build --output-dir /tmp/sa-results --debug
python3 scripts/static_analysis.py --build-dir build --timeout 1800
```

# Step 2: Interpret the output

Action:
Review the printed findings and the summary table grouped by category.

Expectation:
Each finding shows its source location, category, and description. The summary table ranks categories by frequency for quick triage.

Result:
On zero findings: the codebase passes all enabled static checks.
On findings: prioritize by category frequency and severity. Address null dereferences and use-after-free classes first.

Example output:

```
[Dead store] src/ast/ast.cpp:142: Value stored to 'result' is never read
[Null dereference] src/smt/theory_lra.cpp:87: Access to field 'next' results in a dereference of a null pointer
```

# Step 3: Review historical findings

Action:
Query z3agent.db to compare current results against prior analysis runs.

Expectation:
Queries return category counts and run history, enabling regression detection across commits.

Result:
On stable or decreasing counts: no regressions introduced.
On increased counts: cross-reference new findings with recent commits to identify the responsible change.

```bash
python3 ../../shared/z3db.py query "SELECT category, COUNT(*) as cnt FROM findings WHERE run_id IN (SELECT run_id FROM runs WHERE skill='static-analysis') GROUP BY category ORDER BY cnt DESC"
python3 ../../shared/z3db.py runs --skill static-analysis --last 10
```

# Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| build-dir | path | yes | | path to the CMake build directory |
| output-dir | path | no | BUILD/scan-results | directory for scan-build output |
| timeout | int | no | 1200 | seconds allowed for the full build |
| db | path | no | .z3-agent/z3agent.db | logging database |
| debug | flag | no | off | verbose tracing |
260 .github/skills/static-analysis/scripts/static_analysis.py vendored Normal file

@ -0,0 +1,260 @@
#!/usr/bin/env python3
"""
static_analysis.py: run Clang Static Analyzer on Z3 source.

Usage:
    python static_analysis.py --build-dir build
    python static_analysis.py --build-dir build --output-dir /tmp/sa-results
    python static_analysis.py --build-dir build --debug
"""

import argparse
import logging
import os
import plistlib
import shutil
import subprocess
import sys
import time
from collections import Counter
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent.parent.parent / "shared"))
from z3db import Z3DB, setup_logging

logger = logging.getLogger("z3agent")

SCAN_BUILD_NAMES = ["scan-build", "scan-build-14", "scan-build-15", "scan-build-16"]


def find_scan_build() -> str:
    """Locate the scan-build binary on PATH."""
    for name in SCAN_BUILD_NAMES:
        path = shutil.which(name)
        if path:
            logger.debug("found scan-build: %s", path)
            return path
    print(
        "scan-build not found on PATH.\n"
        "Install one of the following:\n"
        "  Ubuntu/Debian: sudo apt install clang-tools\n"
        "  macOS: brew install llvm\n"
        "  Fedora: sudo dnf install clang-tools-extra\n"
        f"searched for: {', '.join(SCAN_BUILD_NAMES)}",
        file=sys.stderr,
    )
    sys.exit(1)


def run_configure(scan_build: str, build_dir: Path, output_dir: Path,
                  timeout: int) -> bool:
    """Run scan-build cmake to configure the project."""
    repo_root = build_dir.parent
    cmd = [
        scan_build,
        "-o", str(output_dir),
        "cmake",
        str(repo_root),
    ]
    logger.info("configuring: %s", " ".join(cmd))
    try:
        proc = subprocess.run(
            cmd, cwd=str(build_dir),
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        logger.error("cmake configuration timed out after %ds", timeout)
        return False

    if proc.returncode != 0:
        logger.error("cmake configuration failed (exit %d)", proc.returncode)
        logger.error("stderr: %s", proc.stderr[:2000])
        return False

    logger.info("configuration complete")
    return True


def run_build(scan_build: str, build_dir: Path, output_dir: Path,
              timeout: int) -> bool:
    """Run scan-build make to build and analyze."""
    nproc = os.cpu_count() or 4
    cmd = [
        scan_build,
        "-o", str(output_dir),
        "--status-bugs",
        "make",
        f"-j{nproc}",
    ]
    logger.info("building with analysis: %s", " ".join(cmd))
    try:
        proc = subprocess.run(
            cmd, cwd=str(build_dir),
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        logger.error("build timed out after %ds", timeout)
        return False

    # scan-build returns nonzero when bugs are found (due to --status-bugs),
    # so a nonzero exit code is not necessarily a build failure.
    if proc.returncode != 0:
        logger.info(
            "scan-build exited with code %d (nonzero may indicate findings)",
            proc.returncode,
        )
    else:
        logger.info("build complete, no bugs reported by scan-build")

    if proc.stderr:
        logger.debug("build stderr (last 2000 chars): %s", proc.stderr[-2000:])
    return True


def collect_plist_files(output_dir: Path) -> list:
    """Recursively find all .plist diagnostic files under the output directory."""
    plists = sorted(output_dir.rglob("*.plist"))
    logger.debug("found %d plist files in %s", len(plists), output_dir)
    return plists


def parse_plist_findings(plist_path: Path) -> list:
    """Extract findings from a single Clang plist diagnostic file.

    Returns a list of dicts with keys: file, line, col, category, type, description.
    """
    findings = []
    try:
        with open(plist_path, "rb") as f:
            data = plistlib.load(f)
    except Exception as exc:
        logger.warning("could not parse %s: %s", plist_path, exc)
        return findings

    source_files = data.get("files", [])
    for diag in data.get("diagnostics", []):
        location = diag.get("location", {})
        file_idx = location.get("file", 0)
        source_file = source_files[file_idx] if file_idx < len(source_files) else "<unknown>"
        findings.append({
            "file": source_file,
            "line": location.get("line", 0),
            "col": location.get("col", 0),
            "category": diag.get("category", "uncategorized"),
            "type": diag.get("type", ""),
            "description": diag.get("description", ""),
        })
    return findings
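The plist shape this parser expects can be illustrated with a round-trip through `plistlib`: a "files" table plus "diagnostics" entries that reference files by index. The path, line, and message below are illustrative samples, not real scan-build output:

```python
import plistlib
import tempfile
import os

# A minimal document in the analyzer's layout (illustrative values only)
doc = {
    "files": ["src/ast/ast.cpp"],
    "diagnostics": [{
        "location": {"file": 0, "line": 142, "col": 9},
        "category": "Dead store",
        "type": "Dead assignment",
        "description": "Value stored to 'result' is never read",
    }],
}

# Write it out as a plist, then read it back the same way parse_plist_findings does
with tempfile.NamedTemporaryFile(suffix=".plist", delete=False) as f:
    plistlib.dump(doc, f)
    path = f.name

with open(path, "rb") as f:
    data = plistlib.load(f)
os.unlink(path)

diag = data["diagnostics"][0]
src = data["files"][diag["location"]["file"]]
print(f"[{diag['category']}] {src}:{diag['location']['line']}: {diag['description']}")
# [Dead store] src/ast/ast.cpp:142: Value stored to 'result' is never read
```

The index indirection through `files` is why the parser guards `file_idx < len(source_files)` before resolving the source path.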


def collect_all_findings(output_dir: Path) -> list:
    """Parse every plist file under output_dir and return merged findings."""
    all_findings = []
    for plist_path in collect_plist_files(output_dir):
        all_findings.extend(parse_plist_findings(plist_path))
    return all_findings


def log_findings(db, run_id: int, findings: list):
    """Persist each finding to z3agent.db."""
    for f in findings:
        db.log_finding(
            run_id,
            category=f["category"],
            message=f["description"],
            severity=f.get("type"),
            file=f["file"],
            line=f["line"],
            details={"col": f["col"], "type": f["type"]},
        )


def print_findings(findings: list):
    """Print individual findings and a category summary."""
    if not findings:
        print("No findings reported.")
        return

    for f in findings:
        label = f["category"]
        if f["type"]:
            label = f["type"]
        print(f"[{label}] {f['file']}:{f['line']}: {f['description']}")

    print()
    counts = Counter(f["category"] for f in findings)
    print(f"Total findings: {len(findings)}")
    print("By category:")
    for cat, cnt in counts.most_common():
        print(f"  {cat}: {cnt}")


def main():
    parser = argparse.ArgumentParser(
        prog="static_analysis",
        description="Run Clang Static Analyzer on Z3 and log findings.",
    )
    parser.add_argument(
        "--build-dir", required=True,
        help="path to the CMake build directory",
    )
    parser.add_argument(
        "--output-dir", default=None,
        help="directory for scan-build results (default: BUILD/scan-results)",
    )
    parser.add_argument(
        "--timeout", type=int, default=1200,
        help="seconds allowed for the full analysis build",
    )
    parser.add_argument("--db", default=None, help="path to z3agent.db")
    parser.add_argument("--debug", action="store_true", help="verbose tracing")
    args = parser.parse_args()

    setup_logging(args.debug)

    scan_build = find_scan_build()

    build_dir = Path(args.build_dir).resolve()
    build_dir.mkdir(parents=True, exist_ok=True)

    output_dir = Path(args.output_dir) if args.output_dir else build_dir / "scan-results"
    output_dir = output_dir.resolve()
    output_dir.mkdir(parents=True, exist_ok=True)

    db = Z3DB(args.db)
    run_id = db.start_run("static-analysis", f"build_dir={build_dir}")
    start = time.monotonic()

    if not run_configure(scan_build, build_dir, output_dir, timeout=args.timeout):
        elapsed = int((time.monotonic() - start) * 1000)
        db.finish_run(run_id, "error", elapsed, exit_code=1)
        db.close()
        sys.exit(1)

    if not run_build(scan_build, build_dir, output_dir, timeout=args.timeout):
        elapsed = int((time.monotonic() - start) * 1000)
        db.finish_run(run_id, "error", elapsed, exit_code=1)
        db.close()
        sys.exit(1)

    elapsed = int((time.monotonic() - start) * 1000)

    findings = collect_all_findings(output_dir)
    log_findings(db, run_id, findings)

    status = "clean" if len(findings) == 0 else "findings"
    db.finish_run(run_id, status, elapsed, exit_code=0)

    db.log(
        f"static analysis complete: {len(findings)} finding(s) in {elapsed}ms",
        run_id=run_id,
    )

    print_findings(findings)

    db.close()
    sys.exit(0)


if __name__ == "__main__":
    main()
2 .github/workflows/Windows.yml vendored

@ -30,7 +30,7 @@ jobs:
      - name: Checkout code
        uses: actions/checkout@v6.0.2
      - name: Add msbuild to PATH
        uses: microsoft/setup-msbuild@v2
        uses: microsoft/setup-msbuild@v3
      - run: |
          md build
          cd build
|||
564
.github/workflows/a3-python.lock.yml
generated
vendored
564
.github/workflows/a3-python.lock.yml
generated
vendored
|
|
@ -13,7 +13,7 @@
|
|||
# \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
|
||||
# \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
|
||||
#
|
||||
# This file was automatically generated by gh-aw (v0.45.6). DO NOT EDIT.
|
||||
# This file was automatically generated by gh-aw (v0.57.2). DO NOT EDIT.
|
||||
#
|
||||
# To update this file, edit the corresponding .md file and run:
|
||||
# gh aw compile
|
||||
|
|
@ -23,7 +23,7 @@
|
|||
#
|
||||
# Analyzes Python code using a3-python tool to identify bugs and issues
|
||||
#
|
||||
# gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"e0bad93581cdf2abd9d7463c3d17c24341868f3e72928d533c73bd53e1bafa44"}
|
||||
# gh-aw-metadata: {"schema_version":"v2","frontmatter_hash":"b070efd760f3adb920cf3555ebb4342d451f942f24a114965f2eba0ea6d79419","compiler_version":"v0.57.2","strict":true}
|
||||
|
||||
name: "A3 Python Code Analysis"
|
||||
"on":
|
||||
|
|
@ -47,19 +47,51 @@ jobs:
|
|||
outputs:
|
||||
comment_id: ""
|
||||
comment_repo: ""
|
||||
model: ${{ steps.generate_aw_info.outputs.model }}
|
||||
secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
|
||||
steps:
|
||||
- name: Setup Scripts
|
||||
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
|
||||
uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
|
||||
with:
|
||||
destination: /opt/gh-aw/actions
|
||||
- name: Generate agentic run info
|
||||
id: generate_aw_info
|
||||
env:
|
||||
GH_AW_INFO_ENGINE_ID: "copilot"
|
||||
GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
|
||||
GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
|
||||
GH_AW_INFO_VERSION: ""
|
||||
GH_AW_INFO_AGENT_VERSION: "latest"
|
||||
GH_AW_INFO_CLI_VERSION: "v0.57.2"
|
||||
GH_AW_INFO_WORKFLOW_NAME: "A3 Python Code Analysis"
|
||||
GH_AW_INFO_EXPERIMENTAL: "false"
|
||||
GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"
|
||||
GH_AW_INFO_STAGED: "false"
|
||||
GH_AW_INFO_ALLOWED_DOMAINS: '["defaults","python"]'
|
||||
GH_AW_INFO_FIREWALL_ENABLED: "true"
|
||||
GH_AW_INFO_AWF_VERSION: "v0.23.0"
|
||||
GH_AW_INFO_AWMG_VERSION: ""
|
||||
GH_AW_INFO_FIREWALL_TYPE: "squid"
|
||||
GH_AW_COMPILED_STRICT: "true"
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
with:
|
||||
script: |
|
||||
const { main } = require('/opt/gh-aw/actions/generate_aw_info.cjs');
|
||||
await main(core, context);
|
||||
- name: Validate COPILOT_GITHUB_TOKEN secret
|
||||
id: validate-secret
|
||||
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
|
||||
env:
|
||||
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
|
||||
- name: Checkout .github and .agents folders
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
with:
|
||||
persist-credentials: false
|
||||
sparse-checkout: |
|
||||
.github
|
||||
.agents
|
||||
sparse-checkout-cone-mode: true
|
||||
fetch-depth: 1
|
||||
persist-credentials: false
|
||||
- name: Check workflow file timestamps
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
env:
|
||||
|
|
@ -84,41 +116,18 @@ jobs:
|
|||
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
|
||||
run: |
|
||||
bash /opt/gh-aw/actions/create_prompt_first.sh
|
||||
cat << 'GH_AW_PROMPT_EOF' > "$GH_AW_PROMPT"
|
||||
{
|
||||
cat << 'GH_AW_PROMPT_EOF'
|
||||
<system>
|
||||
GH_AW_PROMPT_EOF
|
||||
cat "/opt/gh-aw/prompts/xpia.md" >> "$GH_AW_PROMPT"
|
||||
cat "/opt/gh-aw/prompts/temp_folder_prompt.md" >> "$GH_AW_PROMPT"
|
||||
cat "/opt/gh-aw/prompts/markdown.md" >> "$GH_AW_PROMPT"
|
||||
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
|
||||
<safe-outputs>
|
||||
<description>GitHub API Access Instructions</description>
|
||||
<important>
|
||||
The gh CLI is NOT authenticated. Do NOT use gh commands for GitHub operations.
|
||||
</important>
|
||||
<instructions>
|
||||
To create or modify GitHub resources (issues, discussions, pull requests, etc.), you MUST call the appropriate safe output tool. Simply writing content will NOT work - the workflow requires actual tool calls.
|
||||
|
||||
Temporary IDs: Some safe output tools support a temporary ID field (usually named temporary_id) so you can reference newly-created items elsewhere in the SAME agent output (for example, using #aw_abc1 in a later body).
|
||||
|
||||
**IMPORTANT - temporary_id format rules:**
|
||||
- If you DON'T need to reference the item later, OMIT the temporary_id field entirely (it will be auto-generated if needed)
|
||||
- If you DO need cross-references/chaining, you MUST match this EXACT validation regex: /^aw_[A-Za-z0-9]{3,8}$/i
|
||||
- Format: aw_ prefix followed by 3 to 8 alphanumeric characters (A-Z, a-z, 0-9, case-insensitive)
|
||||
- Valid alphanumeric characters: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
|
||||
- INVALID examples: aw_ab (too short), aw_123456789 (too long), aw_test-id (contains hyphen), aw_id_123 (contains underscore)
|
||||
- VALID examples: aw_abc, aw_abc1, aw_Test123, aw_A1B2C3D4, aw_12345678
|
||||
- To generate valid IDs: use 3-8 random alphanumeric characters or omit the field to let the system auto-generate
|
||||
|
||||
Do NOT invent other aw_* formats — downstream steps will reject them with validation errors matching against /^aw_[A-Za-z0-9]{3,8}$/i.
|
||||
|
||||
Discover available tools from the safeoutputs MCP server.
|
||||
|
||||
**Critical**: Tool calls write structured data that downstream jobs process. Without tool calls, follow-up actions will be skipped.
|
||||
|
||||
**Note**: If you made no other safe output tool calls during this workflow execution, call the "noop" tool to provide a status message indicating completion or that no actions were needed.
|
||||
</instructions>
|
||||
</safe-outputs>
|
||||
cat "/opt/gh-aw/prompts/xpia.md"
cat "/opt/gh-aw/prompts/temp_folder_prompt.md"
cat "/opt/gh-aw/prompts/markdown.md"
cat "/opt/gh-aw/prompts/safe_outputs_prompt.md"
cat << 'GH_AW_PROMPT_EOF'
<safe-output-tools>
Tools: create_issue, missing_tool, missing_data, noop
</safe-output-tools>
<github-context>
The following GitHub context information is available for this workflow:
{{#if __GH_AW_GITHUB_ACTOR__ }}

@@ -148,12 +157,13 @@ jobs:
</github-context>

GH_AW_PROMPT_EOF
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
cat << 'GH_AW_PROMPT_EOF'
</system>
GH_AW_PROMPT_EOF
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
cat << 'GH_AW_PROMPT_EOF'
{{#runtime-import .github/workflows/a3-python.md}}
GH_AW_PROMPT_EOF
} > "$GH_AW_PROMPT"
- name: Interpolate variables and render templates
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:

@@ -178,8 +188,6 @@ jobs:
GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: ${{ needs.pre_activation.outputs.activated }}
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: ${{ needs.pre_activation.outputs.matched_command }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');

@@ -198,9 +206,7 @@ jobs:
GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER,
GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,
GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID,
GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE,
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED,
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND
GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE
}
});
- name: Validate prompt placeholders

@@ -211,12 +217,14 @@ jobs:
env:
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
run: bash /opt/gh-aw/actions/print_prompt_summary.sh
- name: Upload prompt artifact
- name: Upload activation artifact
if: success()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: prompt
path: /tmp/gh-aw/aw-prompts/prompt.txt
name: activation
path: |
/tmp/gh-aw/aw_info.json
/tmp/gh-aw/aw-prompts/prompt.txt
retention-days: 1

agent:

@@ -240,14 +248,16 @@ jobs:
GH_AW_WORKFLOW_ID_SANITIZED: a3python
outputs:
checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }}
detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
detection_success: ${{ steps.detection_conclusion.outputs.success }}
has_patch: ${{ steps.collect_output.outputs.has_patch }}
model: ${{ steps.generate_aw_info.outputs.model }}
inference_access_error: ${{ steps.detect-inference-error.outputs.inference_access_error || 'false' }}
model: ${{ needs.activation.outputs.model }}
output: ${{ steps.collect_output.outputs.output }}
output_types: ${{ steps.collect_output.outputs.output_types }}
secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
with:
destination: /opt/gh-aw/actions
- name: Checkout repository

@@ -263,6 +273,7 @@ jobs:
run: |
git config --global user.email "github-actions[bot]@users.noreply.github.com"
git config --global user.name "github-actions[bot]"
git config --global am.keepcr true
# Re-authenticate git with GitHub token
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"

@@ -270,7 +281,7 @@ jobs:
- name: Checkout PR branch
id: checkout-pr
if: |
github.event.pull_request
(github.event.pull_request) || (github.event.issue.pull_request)
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}

@@ -281,59 +292,10 @@ jobs:
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
await main();
- name: Generate agentic run info
id: generate_aw_info
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const fs = require('fs');

const awInfo = {
engine_id: "copilot",
engine_name: "GitHub Copilot CLI",
model: process.env.GH_AW_MODEL_AGENT_COPILOT || "",
version: "",
agent_version: "0.0.410",
cli_version: "v0.45.6",
workflow_name: "A3 Python Code Analysis",
experimental: false,
supports_tools_allowlist: true,
run_id: context.runId,
run_number: context.runNumber,
run_attempt: process.env.GITHUB_RUN_ATTEMPT,
repository: context.repo.owner + '/' + context.repo.repo,
ref: context.ref,
sha: context.sha,
actor: context.actor,
event_name: context.eventName,
staged: false,
allowed_domains: ["defaults","python"],
firewall_enabled: true,
awf_version: "v0.19.1",
awmg_version: "v0.1.4",
steps: {
firewall: "squid"
},
created_at: new Date().toISOString()
};

// Write to /tmp/gh-aw directory to avoid inclusion in PR
const tmpPath = '/tmp/gh-aw/aw_info.json';
fs.writeFileSync(tmpPath, JSON.stringify(awInfo, null, 2));
console.log('Generated aw_info.json at:', tmpPath);
console.log(JSON.stringify(awInfo, null, 2));

// Set model as output for reuse in other steps/jobs
core.setOutput('model', awInfo.model);
- name: Validate COPILOT_GITHUB_TOKEN secret
id: validate-secret
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
env:
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
- name: Install GitHub Copilot CLI
run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
run: /opt/gh-aw/actions/install_copilot_cli.sh latest
- name: Install awf binary
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.19.1
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
- name: Determine automatic lockdown mode for GitHub MCP Server
id: determine-automatic-lockdown
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8

@@ -345,7 +307,7 @@ jobs:
const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
await determineAutomaticLockdown(github, context, core);
- name: Download container images
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.19.1 ghcr.io/github/gh-aw-firewall/squid:0.19.1 ghcr.io/github/gh-aw-mcpg:v0.1.4 ghcr.io/github/github-mcp-server:v0.30.3 node:lts-alpine
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.32.0 node:lts-alpine
- name: Write Safe Outputs Config
run: |
mkdir -p /opt/gh-aw/safeoutputs

@@ -357,7 +319,7 @@ jobs:
cat > /opt/gh-aw/safeoutputs/tools.json << 'GH_AW_SAFE_OUTPUTS_TOOLS_EOF'
[
{
"description": "Create a new GitHub issue for tracking bugs, feature requests, or tasks. Use this for actionable work items that need assignment, labeling, and status tracking. For reports, announcements, or status updates that don't require task tracking, use create_discussion instead. CONSTRAINTS: Maximum 1 issue(s) can be created. Title will be prefixed with \"[a3-python] \". Labels [bug automated-analysis a3-python] will be automatically added.",
"description": "Create a new GitHub issue for tracking bugs, feature requests, or tasks. Use this for actionable work items that need assignment, labeling, and status tracking. For reports, announcements, or status updates that don't require task tracking, use create_discussion instead. CONSTRAINTS: Maximum 1 issue(s) can be created. Title will be prefixed with \"[a3-python] \". Labels [\"bug\" \"automated-analysis\" \"a3-python\"] will be automatically added.",
"inputSchema": {
"additionalProperties": false,
"properties": {

@@ -365,6 +327,10 @@ jobs:
"description": "Detailed issue description in Markdown. Do NOT repeat the title as a heading since it already appears as the issue's h1. Include context, reproduction steps, or acceptance criteria as appropriate.",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"labels": {
"description": "Labels to categorize the issue (e.g., 'bug', 'enhancement'). Labels must exist in the repository.",
"items": {

@@ -379,9 +345,13 @@ jobs:
"string"
]
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
},
"temporary_id": {
"description": "Unique temporary identifier for referencing this issue before it's created. Format: 'aw_' followed by 3 to 8 alphanumeric characters (e.g., 'aw_abc1', 'aw_Test123'). Use '#aw_ID' in body text to reference other issues by their temporary_id; these are replaced with actual issue numbers after creation.",
"pattern": "^aw_[A-Za-z0-9]{3,8}$",
"description": "Unique temporary identifier for referencing this issue before it's created. Format: 'aw_' followed by 3 to 12 alphanumeric characters (e.g., 'aw_abc1', 'aw_Test123'). Use '#aw_ID' in body text to reference other issues by their temporary_id; these are replaced with actual issue numbers after creation.",
"pattern": "^aw_[A-Za-z0-9]{3,12}$",
"type": "string"
},
"title": {

@@ -406,10 +376,18 @@ jobs:
"description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"reason": {
"description": "Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
},
"tool": {
"description": "Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.",
"type": "string"

@@ -427,9 +405,17 @@ jobs:
"inputSchema": {
"additionalProperties": false,
"properties": {
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"message": {
"description": "Status or completion message to log. Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
}
},
"required": [

@@ -456,9 +442,17 @@ jobs:
"description": "Type or description of the missing data or information (max 128 characters). Be specific about what data is needed.",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"reason": {
"description": "Explanation of why this data is needed to complete the task (max 256 characters).",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
}
},
"required": [],

@@ -503,6 +497,31 @@ jobs:
}
}
},
"missing_data": {
"defaultMax": 20,
"fields": {
"alternatives": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"context": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"data_type": {
"type": "string",
"sanitize": true,
"maxLength": 128
},
"reason": {
"type": "string",
"sanitize": true,
"maxLength": 256
}
}
},
"missing_tool": {
"defaultMax": 20,
"fields": {

@@ -595,10 +614,11 @@ jobs:
export MCP_GATEWAY_API_KEY
export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads"
mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}"
export MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD="524288"
export DEBUG="*"

export GH_AW_ENGINE="copilot"
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.4'
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.8'

mkdir -p /home/runner/.copilot
cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh

@@ -606,7 +626,7 @@ jobs:
"mcpServers": {
"github": {
"type": "stdio",
"container": "ghcr.io/github/github-mcp-server:v0.30.3",
"container": "ghcr.io/github/github-mcp-server:v0.32.0",
"env": {
"GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
"GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",

@@ -630,17 +650,11 @@ jobs:
}
}
GH_AW_MCP_CONFIG_EOF
- name: Generate workflow overview
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
- name: Download activation artifact
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
script: |
const { generateWorkflowOverview } = require('/opt/gh-aw/actions/generate_workflow_overview.cjs');
await generateWorkflowOverview(core);
- name: Download prompt artifact
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
with:
name: prompt
path: /tmp/gh-aw/aw-prompts
name: activation
path: /tmp/gh-aw
- name: Clean git credentials
run: bash /opt/gh-aw/actions/clean_git_credentials.sh
- name: Execute GitHub Copilot CLI

@@ -649,20 +663,37 @@ jobs:
timeout-minutes: 45
run: |
set -o pipefail
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains '*.pythonhosted.org,anaconda.org,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,binstar.org,bootstrap.pypa.io,conda.anaconda.org,conda.binstar.org,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,files.pythonhosted.org,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,pip.pypa.io,ppa.launchpad.net,pypi.org,pypi.python.org,raw.githubusercontent.com,registry.npmjs.org,repo.anaconda.com,repo.continuum.io,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com' --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.19.1 --skip-pull \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --allow-all-paths --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_AGENT_COPILOT:+ --model "$GH_AW_MODEL_AGENT_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "*.pythonhosted.org,anaconda.org,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,binstar.org,bootstrap.pypa.io,conda.anaconda.org,conda.binstar.org,crates.io,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,files.pythonhosted.org,github.com,host.docker.internal,index.crates.io,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,pip.pypa.io,ppa.launchpad.net,pypi.org,pypi.python.org,raw.githubusercontent.com,registry.npmjs.org,repo.anaconda.com,repo.continuum.io,s.symcb.com,s.symcd.com,security.ubuntu.com,static.crates.io,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_PHASE: agent
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
GH_AW_VERSION: v0.57.2
GITHUB_API_URL: ${{ github.api_url }}
GITHUB_AW: true
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
GITHUB_WORKSPACE: ${{ github.workspace }}
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_AUTHOR_NAME: github-actions[bot]
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_COMMITTER_NAME: github-actions[bot]
XDG_CONFIG_HOME: /home/runner
- name: Detect inference access error
id: detect-inference-error
if: always()
continue-on-error: true
run: bash /opt/gh-aw/actions/detect_inference_access_error.sh
- name: Configure Git credentials
env:
REPO_NAME: ${{ github.repository }}

@@ -670,6 +701,7 @@ jobs:
run: |
git config --global user.email "github-actions[bot]@users.noreply.github.com"
git config --global user.name "github-actions[bot]"
git config --global am.keepcr true
# Re-authenticate git with GitHub token
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"

@@ -715,9 +747,12 @@ jobs:
SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Append agent step summary
if: always()
run: bash /opt/gh-aw/actions/append_agent_step_summary.sh
- name: Upload Safe Outputs
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: safe-output
path: ${{ env.GH_AW_SAFE_OUTPUTS }}

@@ -728,7 +763,7 @@ jobs:
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
GH_AW_ALLOWED_DOMAINS: "*.pythonhosted.org,anaconda.org,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,binstar.org,bootstrap.pypa.io,conda.anaconda.org,conda.binstar.org,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,files.pythonhosted.org,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,pip.pypa.io,ppa.launchpad.net,pypi.org,pypi.python.org,raw.githubusercontent.com,registry.npmjs.org,repo.anaconda.com,repo.continuum.io,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
GH_AW_ALLOWED_DOMAINS: "*.pythonhosted.org,anaconda.org,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,binstar.org,bootstrap.pypa.io,conda.anaconda.org,conda.binstar.org,crates.io,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,files.pythonhosted.org,github.com,host.docker.internal,index.crates.io,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,pip.pypa.io,ppa.launchpad.net,pypi.org,pypi.python.org,raw.githubusercontent.com,registry.npmjs.org,repo.anaconda.com,repo.continuum.io,s.symcb.com,s.symcd.com,security.ubuntu.com,static.crates.io,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_API_URL: ${{ github.api_url }}
with:

@@ -739,13 +774,13 @@ jobs:
await main();
- name: Upload sanitized agent output
if: always() && env.GH_AW_AGENT_OUTPUT
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent-output
path: ${{ env.GH_AW_AGENT_OUTPUT }}
if-no-files-found: warn
- name: Upload engine output files
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent_outputs
path: |

@@ -790,45 +825,172 @@ jobs:
- name: Upload agent artifacts
if: always()
continue-on-error: true
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent-artifacts
path: |
/tmp/gh-aw/aw-prompts/prompt.txt
/tmp/gh-aw/aw_info.json
/tmp/gh-aw/mcp-logs/
/tmp/gh-aw/sandbox/firewall/logs/
/tmp/gh-aw/agent-stdio.log
/tmp/gh-aw/agent/
if-no-files-found: ignore
# --- Threat Detection (inline) ---
- name: Check if detection needed
id: detection_guard
if: always()
env:
OUTPUT_TYPES: ${{ steps.collect_output.outputs.output_types }}
HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
run: |
if [[ -n "$OUTPUT_TYPES" || "$HAS_PATCH" == "true" ]]; then
echo "run_detection=true" >> "$GITHUB_OUTPUT"
echo "Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH"
else
echo "run_detection=false" >> "$GITHUB_OUTPUT"
echo "Detection skipped: no agent outputs or patches to analyze"
fi
- name: Clear MCP configuration for detection
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
rm -f /tmp/gh-aw/mcp-config/mcp-servers.json
rm -f /home/runner/.copilot/mcp-config.json
rm -f "$GITHUB_WORKSPACE/.gemini/settings.json"
- name: Prepare threat detection files
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
mkdir -p /tmp/gh-aw/threat-detection/aw-prompts
cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true
cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true
for f in /tmp/gh-aw/aw-*.patch; do
[ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true
done
echo "Prepared threat detection files:"
ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true
- name: Setup threat detection
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
WORKFLOW_NAME: "A3 Python Code Analysis"
WORKFLOW_DESCRIPTION: "Analyzes Python code using a3-python tool to identify bugs and issues"
HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
await main();
- name: Ensure threat-detection directory and log
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
mkdir -p /tmp/gh-aw/threat-detection
touch /tmp/gh-aw/threat-detection/detection.log
- name: Execute GitHub Copilot CLI
if: always() && steps.detection_guard.outputs.run_detection == 'true'
id: detection_agentic_execution
# Copilot CLI tool arguments (sorted):
# --allow-tool shell(cat)
# --allow-tool shell(grep)
# --allow-tool shell(head)
# --allow-tool shell(jq)
# --allow-tool shell(ls)
# --allow-tool shell(tail)
# --allow-tool shell(wc)
timeout-minutes: 20
run: |
set -o pipefail
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
GH_AW_PHASE: detection
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_VERSION: v0.57.2
GITHUB_API_URL: ${{ github.api_url }}
GITHUB_AW: true
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
GITHUB_WORKSPACE: ${{ github.workspace }}
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_AUTHOR_NAME: github-actions[bot]
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_COMMITTER_NAME: github-actions[bot]
XDG_CONFIG_HOME: /home/runner
- name: Parse threat detection results
|
||||
id: parse_detection_results
|
||||
if: always() && steps.detection_guard.outputs.run_detection == 'true'
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
with:
|
||||
script: |
|
||||
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
|
||||
setupGlobals(core, github, context, exec, io);
|
||||
const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
|
||||
await main();
|
||||
- name: Upload threat detection log
|
||||
if: always() && steps.detection_guard.outputs.run_detection == 'true'
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: threat-detection.log
|
||||
path: /tmp/gh-aw/threat-detection/detection.log
|
||||
if-no-files-found: ignore
|
||||
- name: Set detection conclusion
|
||||
id: detection_conclusion
|
||||
if: always()
|
||||
env:
|
||||
RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }}
|
||||
DETECTION_SUCCESS: ${{ steps.parse_detection_results.outputs.success }}
|
||||
run: |
|
||||
if [[ "$RUN_DETECTION" != "true" ]]; then
|
||||
echo "conclusion=skipped" >> "$GITHUB_OUTPUT"
|
||||
echo "success=true" >> "$GITHUB_OUTPUT"
|
||||
echo "Detection was not needed, marking as skipped"
|
||||
elif [[ "$DETECTION_SUCCESS" == "true" ]]; then
|
||||
echo "conclusion=success" >> "$GITHUB_OUTPUT"
|
||||
echo "success=true" >> "$GITHUB_OUTPUT"
|
||||
echo "Detection passed successfully"
|
||||
else
|
||||
echo "conclusion=failure" >> "$GITHUB_OUTPUT"
|
||||
echo "success=false" >> "$GITHUB_OUTPUT"
|
||||
echo "Detection found issues"
|
||||
fi
|
||||
|
||||
conclusion:
needs:
- activation
- agent
- detection
- safe_outputs
if: (always()) && (needs.agent.result != 'skipped')
runs-on: ubuntu-slim
permissions:
contents: read
issues: write
concurrency:
group: "gh-aw-conclusion-a3-python"
cancel-in-progress: false
outputs:
noop_message: ${{ steps.noop.outputs.noop_message }}
tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
total_count: ${{ steps.missing_tool.outputs.total_count }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
with:
destination: /opt/gh-aw/actions
- name: Download agent output artifact
id: download-agent-output
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
name: agent-output
path: /tmp/gh-aw/safeoutputs/
- name: Setup agent output environment variable
if: steps.download-agent-output.outcome == 'success'
run: |
mkdir -p /tmp/gh-aw/safeoutputs/
find "/tmp/gh-aw/safeoutputs/" -type f -print

@@ -838,7 +1000,7 @@ jobs:
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
GH_AW_NOOP_MAX: 1
GH_AW_NOOP_MAX: "1"
GH_AW_WORKFLOW_NAME: "A3 Python Code Analysis"
GH_AW_TRACKER_ID: "a3-python-analysis"
with:

@@ -872,8 +1034,12 @@ jobs:
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_WORKFLOW_ID: "a3-python"
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.agent.outputs.secret_verification_result }}
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }}
GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}
GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }}
GH_AW_GROUP_REPORTS: "false"
GH_AW_FAILURE_REPORT_AS_ISSUE: "true"
GH_AW_TIMEOUT_MINUTES: "45"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
script: |

@@ -891,7 +1057,7 @@ jobs:
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
GH_AW_NOOP_REPORT_AS_ISSUE: "true"
GH_AW_NOOP_REPORT_AS_ISSUE: "false"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
script: |

@@ -900,139 +1066,43 @@ jobs:
const { main } = require('/opt/gh-aw/actions/handle_noop_message.cjs');
await main();

detection:
needs: agent
if: needs.agent.outputs.output_types != '' || needs.agent.outputs.has_patch == 'true'
runs-on: ubuntu-latest
permissions: {}
concurrency:
group: "gh-aw-copilot-${{ github.workflow }}"
timeout-minutes: 10
outputs:
success: ${{ steps.parse_results.outputs.success }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
with:
destination: /opt/gh-aw/actions
- name: Download agent artifacts
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
with:
name: agent-artifacts
path: /tmp/gh-aw/threat-detection/
- name: Download agent output artifact
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
with:
name: agent-output
path: /tmp/gh-aw/threat-detection/
- name: Echo agent output types
env:
AGENT_OUTPUT_TYPES: ${{ needs.agent.outputs.output_types }}
run: |
echo "Agent output-types: $AGENT_OUTPUT_TYPES"
- name: Setup threat detection
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
WORKFLOW_NAME: "A3 Python Code Analysis"
WORKFLOW_DESCRIPTION: "Analyzes Python code using a3-python tool to identify bugs and issues"
HAS_PATCH: ${{ needs.agent.outputs.has_patch }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
await main();
- name: Ensure threat-detection directory and log
run: |
mkdir -p /tmp/gh-aw/threat-detection
touch /tmp/gh-aw/threat-detection/detection.log
- name: Validate COPILOT_GITHUB_TOKEN secret
id: validate-secret
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
env:
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
- name: Install GitHub Copilot CLI
run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
- name: Execute GitHub Copilot CLI
id: agentic_execution
# Copilot CLI tool arguments (sorted):
# --allow-tool shell(cat)
# --allow-tool shell(grep)
# --allow-tool shell(head)
# --allow-tool shell(jq)
# --allow-tool shell(ls)
# --allow-tool shell(tail)
# --allow-tool shell(wc)
timeout-minutes: 20
run: |
set -o pipefail
COPILOT_CLI_INSTRUCTION="$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"
mkdir -p /tmp/
mkdir -p /tmp/gh-aw/
mkdir -p /tmp/gh-aw/agent/
mkdir -p /tmp/gh-aw/sandbox/agent/logs/
copilot --add-dir /tmp/ --add-dir /tmp/gh-aw/ --add-dir /tmp/gh-aw/agent/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-tool 'shell(cat)' --allow-tool 'shell(grep)' --allow-tool 'shell(head)' --allow-tool 'shell(jq)' --allow-tool 'shell(ls)' --allow-tool 'shell(tail)' --allow-tool 'shell(wc)' --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$COPILOT_CLI_INSTRUCTION"${GH_AW_MODEL_DETECTION_COPILOT:+ --model "$GH_AW_MODEL_DETECTION_COPILOT"} 2>&1 | tee /tmp/gh-aw/threat-detection/detection.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
GITHUB_WORKSPACE: ${{ github.workspace }}
XDG_CONFIG_HOME: /home/runner
- name: Parse threat detection results
id: parse_results
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
await main();
- name: Upload threat detection log
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
with:
name: threat-detection.log
path: /tmp/gh-aw/threat-detection/detection.log
if-no-files-found: ignore

safe_outputs:
needs:
- agent
- detection
if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.detection.outputs.success == 'true')
needs: agent
if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.agent.outputs.detection_success == 'true')
runs-on: ubuntu-slim
permissions:
contents: read
issues: write
timeout-minutes: 15
env:
GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/a3-python"
GH_AW_ENGINE_ID: "copilot"
GH_AW_TRACKER_ID: "a3-python-analysis"
GH_AW_WORKFLOW_ID: "a3-python"
GH_AW_WORKFLOW_NAME: "A3 Python Code Analysis"
outputs:
code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}
code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}
create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }}
create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }}
created_issue_number: ${{ steps.process_safe_outputs.outputs.created_issue_number }}
created_issue_url: ${{ steps.process_safe_outputs.outputs.created_issue_url }}
process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }}
process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
with:
destination: /opt/gh-aw/actions
- name: Download agent output artifact
id: download-agent-output
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
name: agent-output
path: /tmp/gh-aw/safeoutputs/
- name: Setup agent output environment variable
if: steps.download-agent-output.outcome == 'success'
run: |
mkdir -p /tmp/gh-aw/safeoutputs/
find "/tmp/gh-aw/safeoutputs/" -type f -print

@@ -1042,6 +1112,9 @@ jobs:
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
GH_AW_ALLOWED_DOMAINS: "*.pythonhosted.org,anaconda.org,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,binstar.org,bootstrap.pypa.io,conda.anaconda.org,conda.binstar.org,crates.io,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,files.pythonhosted.org,github.com,host.docker.internal,index.crates.io,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,pip.pypa.io,ppa.launchpad.net,pypi.org,pypi.python.org,raw.githubusercontent.com,registry.npmjs.org,repo.anaconda.com,repo.continuum.io,s.symcb.com,s.symcd.com,security.ubuntu.com,static.crates.io,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_API_URL: ${{ github.api_url }}
GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_issue\":{\"labels\":[\"bug\",\"automated-analysis\",\"a3-python\"],\"max\":1,\"title_prefix\":\"[a3-python] \"},\"missing_data\":{},\"missing_tool\":{}}"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}

@@ -1050,4 +1123,11 @@ jobs:
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/safe_output_handler_manager.cjs');
await main();
- name: Upload safe output items manifest
if: always()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: safe-output-items
path: /tmp/safe-output-items.jsonl
if-no-files-found: warn

.github/workflows/a3-python.md (vendored): 2 changes

@@ -15,6 +15,8 @@ safe-outputs:
- automated-analysis
- a3-python
title-prefix: "[a3-python] "
noop:
report-as-issue: false
description: Analyzes Python code using a3-python tool to identify bugs and issues
name: A3 Python Code Analysis
strict: true

.github/workflows/deeptest.lock.yml → .github/workflows/academic-citation-tracker.lock.yml (generated, vendored): 768 changes
(File diff suppressed because it is too large.)

.github/workflows/academic-citation-tracker.md (vendored): new file, 298 lines

@@ -0,0 +1,298 @@
---
description: >
  Monthly Academic Citation & Research Trend Tracker for Z3.
  Searches arXiv, Semantic Scholar, and GitHub for recent papers and projects
  using Z3, analyses which Z3 features they rely on, and identifies the
  functionality — features or performance — most important to address next.

on:
  schedule:
    - cron: "0 6 1 * *"
  workflow_dispatch:

timeout-minutes: 60

permissions: read-all

network:
  allowed:
    - defaults
    - export.arxiv.org
    - api.semanticscholar.org
    - github

tools:
  cache-memory: true
  web-fetch: {}
  github:
    toolsets: [default, repos]
  bash: [":*"]

safe-outputs:
  mentions: false
  allowed-github-references: []
  max-bot-mentions: 1
  create-discussion:
    title-prefix: "[Research Trends] "
    category: "Agentic Workflows"
    close-older-discussions: true
    expires: 60
  missing-tool:
    create-issue: true
  noop:
    report-as-issue: false
---

# Academic Citation & Research Trend Tracker

## Job Description

Your name is ${{ github.workflow }}. You are an expert research analyst for the Z3
theorem prover repository `${{ github.repository }}`. Your mission is to find recent
academic papers and open-source projects that use Z3, understand *which Z3 features*
they rely on, and synthesise what this reveals about the features and performance
improvements that would have the greatest community impact.

## Your Task

### 1. Initialise or Resume Progress (Cache Memory)

Check cache memory for:
- Papers and projects already covered in the previous run (DOIs, arXiv IDs, GitHub repo URLs)
- Feature-usage counts accumulated across runs
- Date of the last run

Use the cached data so this run focuses on **new** material (last 30 days by default; if no prior cache exists, cover the last 90 days).
Initialise an empty tracking structure if the cache is absent.

### 2. Collect Recent Papers

#### 2.1 arXiv Search

Fetch recent papers that mention Z3 as a core tool. Use the arXiv API.
First compute the date 30 days ago (or 90 days for the initial run) in YYYYMMDD format,
then pass it as the `submittedDate` range filter:
```bash
# Compute the start date (30 days ago)
START_DATE=$(date -d "30 days ago" +%Y%m%d 2>/dev/null || date -v-30d +%Y%m%d)
TODAY=$(date +%Y%m%d)

# Papers mentioning Z3 in cs.PL, cs.LO, cs.SE, cs.CR, cs.FM categories.
# The submittedDate range is part of search_query (the arXiv API has no
# standalone submittedDate parameter).
curl -s "https://export.arxiv.org/api/query?search_query=all:Z3+solver+AND+(cat:cs.PL+OR+cat:cs.LO+OR+cat:cs.SE+OR+cat:cs.CR+OR+cat:cs.FM)+AND+submittedDate:[${START_DATE}0000+TO+${TODAY}2359]&sortBy=submittedDate&sortOrder=descending&max_results=40" \
  -o /tmp/arxiv-results.xml
```

Parse the XML for: title, authors, abstract, arXiv ID, submission date, primary category.
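As a concrete illustration, the titles and arXiv IDs can be pulled out of the Atom feed with the allowed shell tools. The file names and the two-entry feed below are fabricated so the sketch is self-contained; the real run would operate on the `/tmp/arxiv-results.xml` saved above.

```shell
# Fabricated two-entry Atom feed standing in for the real arXiv response
cat > /tmp/arxiv-results.xml <<'EOF'
<feed><entry><id>http://arxiv.org/abs/2401.00001v1</id>
<title>Verifying Widgets with Z3</title></entry>
<entry><id>http://arxiv.org/abs/2401.00002v1</id>
<title>Symbolic Gadgets</title></entry></feed>
EOF

# Titles: one <title> element per entry
grep -o '<title>[^<]*</title>' /tmp/arxiv-results.xml \
  | sed 's/<title>//;s/<\/title>//' > /tmp/arxiv-titles.txt

# arXiv IDs: strip the abs/ URL prefix and the version suffix
grep -o '<id>[^<]*</id>' /tmp/arxiv-results.xml \
  | sed -e 's/<id>//' -e 's/<\/id>//' -e 's|.*/abs/||' -e 's/v[0-9]*$//' \
  > /tmp/arxiv-ids.txt

paste /tmp/arxiv-ids.txt /tmp/arxiv-titles.txt
```

A real feed nests more elements per entry, so a proper XML parser is preferable when available; this grep/sed pass is only a fallback within the allowed tool set.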

#### 2.2 Semantic Scholar Search

Fetch recent papers via the Semantic Scholar API, filtering to the current year
(or year-1 for the initial run) to surface only recent work:

```bash
CURRENT_YEAR=$(date +%Y)

curl -s "https://api.semanticscholar.org/graph/v1/paper/search?query=Z3+theorem+prover&fields=title,authors,year,abstract,externalIds,citationCount,venue&limit=40&sort=relevance&year=${CURRENT_YEAR}" \
  -H "Content-Type: application/json" \
  -o /tmp/s2-results.json
```

Merge with the arXiv results (de-duplicate by DOI / arXiv ID).
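The merge can be sketched with `jq`, assuming both sources have first been normalised to JSON lines carrying an `"id"` field (DOI or arXiv ID). The field name and file names are assumptions for the sketch, not part of either API's response format.

```shell
# Fabricated normalised records; the first two share an id (same paper
# found via arXiv and Semantic Scholar)
cat > /tmp/papers.jsonl <<'EOF'
{"id":"2401.00001","source":"arxiv","title":"Verifying Widgets with Z3"}
{"id":"2401.00001","source":"s2","title":"Verifying Widgets with Z3"}
{"id":"10.1145/123","source":"s2","title":"Another Paper"}
EOF

# Slurp the lines into an array and keep one record per id
jq -s 'unique_by(.id)' /tmp/papers.jsonl > /tmp/papers-merged.json
jq 'length' /tmp/papers-merged.json
```

`unique_by` sorts by the key and keeps one element per group, which is sufficient here since duplicate records describe the same paper.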

#### 2.3 GitHub Projects

Use the GitHub MCP server tools to find recently-active repositories that depend on
or study Z3. Use these example search strategies:
- Repos with the `z3` topic pushed in the last 30 days:
  `topic:z3 pushed:>YYYY-MM-DD` (substitute the actual date)
- Repos depending on the z3 Python package with recent activity:
  `z3-solver in:file filename:requirements.txt pushed:>YYYY-MM-DD`
- Repos referencing Z3Prover in README:
  `Z3Prover/z3 in:readme pushed:>YYYY-MM-DD`

Limit to the 20 most-relevant results; filter out the Z3 repo itself (`Z3Prover/z3`).
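The search strategies above can be assembled in shell before handing them to the MCP search tools, computing the date qualifier once. The file name is illustrative.

```shell
# Cutoff date for the pushed:> qualifier (GNU date, with a BSD fallback)
PUSHED_AFTER=$(date -d "30 days ago" +%Y-%m-%d 2>/dev/null || date -v-30d +%Y-%m-%d)

# One query string per search strategy, sharing the same cutoff
QUERIES=(
  "topic:z3 pushed:>${PUSHED_AFTER}"
  "z3-solver in:file filename:requirements.txt pushed:>${PUSHED_AFTER}"
  "Z3Prover/z3 in:readme pushed:>${PUSHED_AFTER}"
)
printf '%s\n' "${QUERIES[@]}" > /tmp/gh-queries.txt
cat /tmp/gh-queries.txt
```

The qualifier syntax (`topic:`, `in:readme`, `pushed:>`) is the standard GitHub search syntax, so the same strings work whether the search is issued via MCP tools or the web UI.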

#### 2.4 Filter for Genuine Z3 Usage

Keep only results where Z3 is used as a *core* component (not just a passing mention).
Discard:
- Papers that mention Z3 only in a reference list
- Repos that list z3 as an optional or dev dependency only
- Papers behind hard paywalls where the abstract cannot be fetched

### 3. Analyse Feature Usage

For each retained paper or project, extract from the abstract, full text (when
accessible), README, or source code:

**Z3 Feature / API Surface Used:**
- SMT-LIB2 formula input (`check-sat`, `get-model`, theory declarations)
- Python API (`z3py`) — which theories: Int/Real arithmetic, BitVectors, Arrays, Strings/Sequences, Uninterpreted Functions, Quantifiers
- C/C++ API
- Other language bindings (Java, C#, OCaml, JavaScript/WASM)
- Fixedpoint / Datalog (`z3.Fixedpoint`)
- Optimisation (`z3.Optimize`, MaxSMT)
- Proofs / DRAT
- Tactics and solvers (e.g., `qfbv`, `spacer`, `elim-quantifiers`, `nlsat`)
- Incremental solving (`push`/`pop`, assumptions)
- Model generation and evaluation
- Interpolation / Horn clause solving (Spacer/PDR)
- SMTCOMP/evaluation benchmarks

**Application Domain:**
- Program verification / deductive verification
- Symbolic execution / concolic testing
- Security (vulnerability discovery, protocol verification, exploit generation)
- Type checking / language design
- Hardware verification
- Constraint solving / planning / scheduling
- Formal specification / theorem proving assistance
- Compiler correctness
- Machine learning / neural network verification
- Other

**Pain Points Mentioned:**
Note any explicit mentions of Z3 limitations, performance issues, missing features,
workarounds, or comparisons where Z3 underperformed.

### 4. Aggregate Trends

Compute over all papers and projects collected (this run + cache history):
- **Feature popularity ranking**: which APIs/theories appear most frequently
- **Domain ranking**: which application areas use Z3 most
- **Performance pain-point frequency**: mentions of timeouts, scalability, memory, or
  regression across Z3 versions
- **Feature gap signals**: features requested but absent, or workarounds applied
- **New vs. returning features**: compare with previous month's top features to spot
  rising or falling trends
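The frequency rankings can be computed with `jq` over the per-item records, assuming each analysed paper or project was stored as a JSON line with a `"features"` array (the field name and file names are assumptions for this sketch).

```shell
# Fabricated analysis records: one line per paper/project
cat > /tmp/analysed.jsonl <<'EOF'
{"id":"2401.00001","features":["z3py-bitvectors","incremental"]}
{"id":"2401.00002","features":["z3py-bitvectors","optimize"]}
{"id":"owner/repo","features":["smtlib2"]}
EOF

# Flatten all feature tags, group equal tags, count each group,
# and sort most-frequent first
jq -s '[.[].features[]] | group_by(.)
       | map({feature: .[0], count: length})
       | sort_by(-.count)' /tmp/analysed.jsonl > /tmp/feature-counts.json
cat /tmp/feature-counts.json
```

The same pattern, applied to a `"domain"` field, yields the domain ranking; merging this run's counts with the cached cumulative counts is one more `jq` pass.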

### 5. Correlate with Open Issues and PRs

Use the GitHub MCP server to search the Z3 issue tracker and recent PRs for signals
that align with the academic findings:
- Are the performance pain-points also reflected in open issues?
- Do any open feature requests map to high-demand research use-cases?
- Are there recent PRs that address any of the identified gaps?

This produces a prioritised list of development recommendations grounded in both
community usage and academic demand.

### 6. Generate the Discussion Report

Create a GitHub Discussion. Use `###` or lower for all section headers.
Wrap verbose tables or lists in `<details>` tags to keep the report scannable.

Title: `[Research Trends] Academic Citation & Research Trend Report — [Month YYYY]`

Suggested structure:

```markdown
**Period covered**: [start date] – [end date]
**Papers analysed**: N (arXiv: N, Semantic Scholar: N, new this run: N)
**GitHub projects analysed**: N (new this run: N)

### Executive Summary

2–3 sentences: headline finding about where Z3 is being used and what the
community most needs.

### Top Z3 Features Used

| Rank | Feature / API | Papers | Projects | Trend vs. Last Month |
|------|--------------|--------|----------|----------------------|
| 1 | z3py – BitVectors | N | N | ↑ / ↓ / → |
| … |

### Application Domain Breakdown

| Domain | Papers | % of Total |
|--------|--------|------------|
| Program verification | N | N% |
| … |

### Performance & Feature Pain-Points

List the most-cited pain-points with representative quotes or paraphrases from
abstracts/READMEs. Group by theme (scalability, string solver performance, API
ergonomics, missing theories, etc.).

<details>
<summary><b>All Pain-Point Mentions</b></summary>

One entry per paper/project that mentions a pain-point.

</details>

### Recommended Development Priorities

Ranked list of Z3 features or performance improvements most likely to have broad
research impact, with rationale tied to specific evidence:

1. **[Priority 1]** — evidence: N papers, N projects, N related issues
2. …

### Correlation with Open Issues / PRs

Issues and PRs in Z3Prover/z3 that align with the identified research priorities.

| Issue / PR | Title | Alignment |
|-----------|-------|-----------|
| #NNN | … | [feature / pain-point it addresses] |

### Notable New Papers

Brief description of 3–5 particularly interesting papers, their use of Z3, and
any Z3-specific insights.

<details>
<summary><b>All Papers This Run</b></summary>

| Source | Title | Authors | Date | Features Used | Domain |
|--------|-------|---------|------|--------------|--------|
| arXiv:XXXX.XXXXX | … | … | … | … | … |

</details>

<details>
<summary><b>All GitHub Projects This Run</b></summary>

| Repository | Stars | Updated | Features Used | Domain |
|-----------|-------|---------|--------------|--------|
| owner/repo | N | YYYY-MM-DD | … | … |

</details>

### Methodology Note

Brief description of the search strategy, sources, and filters used this run.
```

### 7. Update Cache Memory

Store for next run:
- Set of all paper IDs (DOIs, arXiv IDs) and GitHub repo URLs already covered
- Feature-usage frequency counts (cumulative)
- Domain frequency counts (cumulative)
- Date of this run
- Top-3 pain-point themes for trend comparison

## Guidelines

- **Be accurate**: Only attribute feature usage to Z3 when the paper/code makes it explicit.
- **Be exhaustive within scope**: Cover all material found; don't cherry-pick.
- **Be concise in headlines**: Lead with the most actionable finding.
- **Respect academic citation norms**: Include arXiv IDs and DOIs; do not reproduce
  full paper text — only titles, authors, and abstracts.
- **Track trends**: The cache lets you show month-over-month changes.
- **Stay Z3-specific**: Focus on insights relevant to Z3 development, not general SMT
  or theorem-proving trends.

## Important Notes

- DO NOT create pull requests or modify source files.
- DO NOT reproduce copyrighted paper text beyond short fair-use quotes.
- DO close older Research Trends discussions automatically (configured).
- DO always cite sources (arXiv ID, DOI, GitHub URL) so maintainers can verify.
- DO use cache memory to track longitudinal trends across months.

.github/workflows/agentics-maintenance.yml (vendored): 64 changes

@@ -13,7 +13,7 @@
# \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
# \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
#
# This file was automatically generated by pkg/workflow/maintenance_workflow.go (v0.45.6). DO NOT EDIT.
# This file was automatically generated by pkg/workflow/maintenance_workflow.go (v0.57.2). DO NOT EDIT.
#
# To regenerate this workflow, run:
# gh aw compile

@@ -37,11 +37,24 @@ on:
schedule:
- cron: "37 0 * * *" # Daily (based on minimum expires: 7 days)
workflow_dispatch:
inputs:
operation:
description: 'Optional maintenance operation to run'
required: false
type: choice
default: ''
options:
- ''
- 'disable'
- 'enable'
- 'update'
- 'upgrade'

permissions: {}

jobs:
close-expired-entities:
if: ${{ !github.event.repository.fork && (github.event_name != 'workflow_dispatch' || github.event.inputs.operation == '') }}
runs-on: ubuntu-slim
permissions:
discussions: write

@@ -49,7 +62,7 @@ jobs:
pull-requests: write
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@v0.45.6
uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
with:
destination: /opt/gh-aw/actions

@@ -79,3 +92,50 @@ jobs:
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/close_expired_pull_requests.cjs');
await main();

run_operation:
if: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.operation != '' && !github.event.repository.fork }}
runs-on: ubuntu-slim
permissions:
actions: write
contents: write
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false

- name: Setup Scripts
uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
with:
destination: /opt/gh-aw/actions

- name: Check admin/maintainer permissions
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/check_team_member.cjs');
await main();

- name: Install gh-aw
uses: github/gh-aw/actions/setup-cli@v0.62.5
with:
version: v0.57.2

- name: Run operation
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GH_AW_OPERATION: ${{ github.event.inputs.operation }}
GH_AW_CMD_PREFIX: gh aw
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/run_operation_update_upgrade.cjs');
await main();

2
.github/workflows/android-build.yml
vendored
2
.github/workflows/android-build.yml
vendored
|
|
@ -33,7 +33,7 @@ jobs:
|
|||
tar -cvf z3-build-${{ matrix.android-abi }}.tar *.jar *.so
|
||||
|
||||
- name: Archive production artifacts
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: android-build-${{ matrix.android-abi }}
|
||||
path: build/z3-build-${{ matrix.android-abi }}.tar
|
||||
|
|
|
.github/workflows/api-coherence-checker.lock.yml (generated, vendored): 587 changes

```diff
@@ -13,7 +13,7 @@
 # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
 #  \/  \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
 #
-# This file was automatically generated by gh-aw (v0.45.6). DO NOT EDIT.
+# This file was automatically generated by gh-aw (v0.57.2). DO NOT EDIT.
 #
 # To update this file, edit the corresponding .md file and run:
 #   gh aw compile
@@ -23,7 +23,7 @@
 #
 # Daily API coherence checker across Z3's multi-language bindings including Rust
 #
-# gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"598c1f5c864f7f50ae4874ea58b6a0fb58480c7220cbbd8c9cd2e9386320c5af"}
+# gh-aw-metadata: {"schema_version":"v2","frontmatter_hash":"4e2da3456dfb6002cbd0bca4a01b78acfc1e96fcbb97f8fcc4c0f58e105e4f03","compiler_version":"v0.57.2","strict":true}

 name: "API Coherence Checker"
 "on":
```
```diff
@@ -47,19 +51,51 @@ jobs:
     outputs:
       comment_id: ""
       comment_repo: ""
       model: ${{ steps.generate_aw_info.outputs.model }}
       secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
     steps:
       - name: Setup Scripts
-        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
+        uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
         with:
           destination: /opt/gh-aw/actions
+      - name: Generate agentic run info
+        id: generate_aw_info
+        env:
+          GH_AW_INFO_ENGINE_ID: "copilot"
+          GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
+          GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
+          GH_AW_INFO_VERSION: ""
+          GH_AW_INFO_AGENT_VERSION: "latest"
+          GH_AW_INFO_CLI_VERSION: "v0.57.2"
+          GH_AW_INFO_WORKFLOW_NAME: "API Coherence Checker"
+          GH_AW_INFO_EXPERIMENTAL: "false"
+          GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"
+          GH_AW_INFO_STAGED: "false"
+          GH_AW_INFO_ALLOWED_DOMAINS: '["defaults"]'
+          GH_AW_INFO_FIREWALL_ENABLED: "true"
+          GH_AW_INFO_AWF_VERSION: "v0.23.0"
+          GH_AW_INFO_AWMG_VERSION: ""
+          GH_AW_INFO_FIREWALL_TYPE: "squid"
+          GH_AW_COMPILED_STRICT: "true"
+        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
+        with:
+          script: |
+            const { main } = require('/opt/gh-aw/actions/generate_aw_info.cjs');
+            await main(core, context);
+      - name: Validate COPILOT_GITHUB_TOKEN secret
+        id: validate-secret
+        run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
+        env:
+          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
       - name: Checkout .github and .agents folders
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
+          persist-credentials: false
           sparse-checkout: |
             .github
             .agents
           sparse-checkout-cone-mode: true
           fetch-depth: 1
-          persist-credentials: false
       - name: Check workflow file timestamps
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         env:
```
```diff
@@ -85,42 +117,19 @@ jobs:
           GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
         run: |
           bash /opt/gh-aw/actions/create_prompt_first.sh
-          cat << 'GH_AW_PROMPT_EOF' > "$GH_AW_PROMPT"
+          {
+          cat << 'GH_AW_PROMPT_EOF'
           <system>
           GH_AW_PROMPT_EOF
-          cat "/opt/gh-aw/prompts/xpia.md" >> "$GH_AW_PROMPT"
-          cat "/opt/gh-aw/prompts/temp_folder_prompt.md" >> "$GH_AW_PROMPT"
-          cat "/opt/gh-aw/prompts/markdown.md" >> "$GH_AW_PROMPT"
-          cat "/opt/gh-aw/prompts/cache_memory_prompt.md" >> "$GH_AW_PROMPT"
-          cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
-          <safe-outputs>
-          <description>GitHub API Access Instructions</description>
-          <important>
-          The gh CLI is NOT authenticated. Do NOT use gh commands for GitHub operations.
-          </important>
-          <instructions>
-          To create or modify GitHub resources (issues, discussions, pull requests, etc.), you MUST call the appropriate safe output tool. Simply writing content will NOT work - the workflow requires actual tool calls.
-
-          Temporary IDs: Some safe output tools support a temporary ID field (usually named temporary_id) so you can reference newly-created items elsewhere in the SAME agent output (for example, using #aw_abc1 in a later body).
-
-          **IMPORTANT - temporary_id format rules:**
-          - If you DON'T need to reference the item later, OMIT the temporary_id field entirely (it will be auto-generated if needed)
-          - If you DO need cross-references/chaining, you MUST match this EXACT validation regex: /^aw_[A-Za-z0-9]{3,8}$/i
-          - Format: aw_ prefix followed by 3 to 8 alphanumeric characters (A-Z, a-z, 0-9, case-insensitive)
-          - Valid alphanumeric characters: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
-          - INVALID examples: aw_ab (too short), aw_123456789 (too long), aw_test-id (contains hyphen), aw_id_123 (contains underscore)
-          - VALID examples: aw_abc, aw_abc1, aw_Test123, aw_A1B2C3D4, aw_12345678
-          - To generate valid IDs: use 3-8 random alphanumeric characters or omit the field to let the system auto-generate
-
-          Do NOT invent other aw_* formats — downstream steps will reject them with validation errors matching against /^aw_[A-Za-z0-9]{3,8}$/i.
-
-          Discover available tools from the safeoutputs MCP server.
-
-          **Critical**: Tool calls write structured data that downstream jobs process. Without tool calls, follow-up actions will be skipped.
-
-          **Note**: If you made no other safe output tool calls during this workflow execution, call the "noop" tool to provide a status message indicating completion or that no actions were needed.
-          </instructions>
-          </safe-outputs>
+          cat "/opt/gh-aw/prompts/xpia.md"
+          cat "/opt/gh-aw/prompts/temp_folder_prompt.md"
+          cat "/opt/gh-aw/prompts/markdown.md"
+          cat "/opt/gh-aw/prompts/cache_memory_prompt.md"
+          cat "/opt/gh-aw/prompts/safe_outputs_prompt.md"
+          cat << 'GH_AW_PROMPT_EOF'
+          <safe-output-tools>
+          Tools: create_discussion, missing_tool, missing_data, noop
+          </safe-output-tools>
           <github-context>
           The following GitHub context information is available for this workflow:
           {{#if __GH_AW_GITHUB_ACTOR__ }}
```
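The temporary_id rules in the removed heredoc above (now sourced from `safe_outputs_prompt.md`) come down to one regex: `/^aw_[A-Za-z0-9]{3,8}$/i`. A minimal shell sketch of that check follows; the helper name is illustrative, not part of gh-aw.

```shell
# Validate a candidate temporary_id against the documented pattern
# /^aw_[A-Za-z0-9]{3,8}$/i using grep -E with case-insensitive matching.
is_valid_temp_id() {
  printf '%s' "$1" | grep -Eiq '^aw_[A-Za-z0-9]{3,8}$'
}

is_valid_temp_id "aw_abc1"    && echo "aw_abc1: valid"
is_valid_temp_id "aw_ab"      || echo "aw_ab: invalid (too short)"
is_valid_temp_id "aw_test-id" || echo "aw_test-id: invalid (hyphen)"
```

The same examples the prompt lists as valid and invalid behave accordingly under this check.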
```diff
@@ -150,12 +159,13 @@ jobs:
           </github-context>

           GH_AW_PROMPT_EOF
-          cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
+          cat << 'GH_AW_PROMPT_EOF'
           </system>
           GH_AW_PROMPT_EOF
-          cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
+          cat << 'GH_AW_PROMPT_EOF'
           {{#runtime-import .github/workflows/api-coherence-checker.md}}
           GH_AW_PROMPT_EOF
+          } > "$GH_AW_PROMPT"
       - name: Interpolate variables and render templates
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         env:
@@ -184,8 +194,6 @@ jobs:
           GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
           GH_AW_GITHUB_WORKFLOW: ${{ github.workflow }}
           GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
-          GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: ${{ needs.pre_activation.outputs.activated }}
-          GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: ${{ needs.pre_activation.outputs.matched_command }}
         with:
           script: |
             const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
@@ -208,9 +216,7 @@ jobs:
               GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,
               GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID,
               GH_AW_GITHUB_WORKFLOW: process.env.GH_AW_GITHUB_WORKFLOW,
-              GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE,
-              GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED,
-              GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND
+              GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE
             }
           });
       - name: Validate prompt placeholders
@@ -221,12 +227,14 @@ jobs:
         env:
           GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
         run: bash /opt/gh-aw/actions/print_prompt_summary.sh
-      - name: Upload prompt artifact
+      - name: Upload activation artifact
         if: success()
-        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
         with:
-          name: prompt
-          path: /tmp/gh-aw/aw-prompts/prompt.txt
+          name: activation
+          path: |
+            /tmp/gh-aw/aw_info.json
+            /tmp/gh-aw/aw-prompts/prompt.txt
           retention-days: 1

   agent:
```
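The prompt-assembly rewrite above replaces many separate `>> "$GH_AW_PROMPT"` appends with one `{ ... } > "$GH_AW_PROMPT"` group, so the target file is opened once and written in order. A minimal sketch of that pattern, with illustrative file names:

```shell
# Assemble a file from heredocs and included fragments through a
# single grouped redirection, mirroring the lock file's new style.
PROMPT=/tmp/prompt-demo.txt
printf 'imported fragment\n' > /tmp/fragment-demo.md

{
  cat << 'EOF'
<system>
EOF
  cat /tmp/fragment-demo.md
  cat << 'EOF'
</system>
EOF
} > "$PROMPT"

cat "$PROMPT"
```

Besides fewer opens, the grouped form cannot accidentally append to a stale file from a previous run, since `>` truncates once for the whole group.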
```diff
@@ -247,20 +255,22 @@ jobs:
       GH_AW_WORKFLOW_ID_SANITIZED: apicoherencechecker
     outputs:
       checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }}
+      detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
+      detection_success: ${{ steps.detection_conclusion.outputs.success }}
       has_patch: ${{ steps.collect_output.outputs.has_patch }}
-      model: ${{ steps.generate_aw_info.outputs.model }}
+      inference_access_error: ${{ steps.detect-inference-error.outputs.inference_access_error || 'false' }}
+      model: ${{ needs.activation.outputs.model }}
       output: ${{ steps.collect_output.outputs.output }}
       output_types: ${{ steps.collect_output.outputs.output_types }}
-      secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
     steps:
       - name: Setup Scripts
-        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
+        uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
         with:
          destination: /opt/gh-aw/actions
       - name: Create gh-aw temp directory
         run: bash /opt/gh-aw/actions/create_gh_aw_tmp_dir.sh
       - name: Checkout repository
-        uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           persist-credentials: false

@@ -268,7 +278,7 @@ jobs:
       - name: Create cache-memory directory
         run: bash /opt/gh-aw/actions/create_cache_memory_dir.sh
       - name: Restore cache-memory file share data
-        uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache/restore@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
         with:
           key: memory-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }}
           path: /tmp/gh-aw/cache-memory
@@ -281,6 +291,7 @@ jobs:
         run: |
           git config --global user.email "github-actions[bot]@users.noreply.github.com"
           git config --global user.name "github-actions[bot]"
+          git config --global am.keepcr true
           # Re-authenticate git with GitHub token
           SERVER_URL_STRIPPED="${SERVER_URL#https://}"
           git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
@@ -288,7 +299,7 @@ jobs:
       - name: Checkout PR branch
         id: checkout-pr
         if: |
-          github.event.pull_request
+          (github.event.pull_request) || (github.event.issue.pull_request)
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         env:
           GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -299,59 +310,10 @@ jobs:
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
            await main();
-      - name: Generate agentic run info
-        id: generate_aw_info
-        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
-        with:
-          script: |
-            const fs = require('fs');
-
-            const awInfo = {
-              engine_id: "copilot",
-              engine_name: "GitHub Copilot CLI",
-              model: process.env.GH_AW_MODEL_AGENT_COPILOT || "",
-              version: "",
-              agent_version: "0.0.410",
-              cli_version: "v0.45.6",
-              workflow_name: "API Coherence Checker",
-              experimental: false,
-              supports_tools_allowlist: true,
-              run_id: context.runId,
-              run_number: context.runNumber,
-              run_attempt: process.env.GITHUB_RUN_ATTEMPT,
-              repository: context.repo.owner + '/' + context.repo.repo,
-              ref: context.ref,
-              sha: context.sha,
-              actor: context.actor,
-              event_name: context.eventName,
-              staged: false,
-              allowed_domains: ["defaults"],
-              firewall_enabled: true,
-              awf_version: "v0.19.1",
-              awmg_version: "v0.1.4",
-              steps: {
-                firewall: "squid"
-              },
-              created_at: new Date().toISOString()
-            };
-
-            // Write to /tmp/gh-aw directory to avoid inclusion in PR
-            const tmpPath = '/tmp/gh-aw/aw_info.json';
-            fs.writeFileSync(tmpPath, JSON.stringify(awInfo, null, 2));
-            console.log('Generated aw_info.json at:', tmpPath);
-            console.log(JSON.stringify(awInfo, null, 2));
-
-            // Set model as output for reuse in other steps/jobs
-            core.setOutput('model', awInfo.model);
-      - name: Validate COPILOT_GITHUB_TOKEN secret
-        id: validate-secret
-        run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
-        env:
-          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
       - name: Install GitHub Copilot CLI
-        run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
+        run: /opt/gh-aw/actions/install_copilot_cli.sh latest
       - name: Install awf binary
-        run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.19.1
+        run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
       - name: Determine automatic lockdown mode for GitHub MCP Server
         id: determine-automatic-lockdown
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
```
```diff
@@ -363,7 +325,7 @@ jobs:
            const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
            await determineAutomaticLockdown(github, context, core);
       - name: Download container images
-        run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.19.1 ghcr.io/github/gh-aw-firewall/squid:0.19.1 ghcr.io/github/gh-aw-mcpg:v0.1.4 ghcr.io/github/github-mcp-server:v0.30.3 ghcr.io/github/serena-mcp-server:latest node:lts-alpine
+        run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.32.0 ghcr.io/github/serena-mcp-server:latest node:lts-alpine
       - name: Write Safe Outputs Config
         run: |
           mkdir -p /opt/gh-aw/safeoutputs
@@ -387,6 +349,14 @@ jobs:
                 "description": "Discussion category by name (e.g., 'General'), slug (e.g., 'general'), or ID. If omitted, uses the first available category. Category must exist in the repository.",
                 "type": "string"
               },
+              "integrity": {
+                "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+                "type": "string"
+              },
+              "secrecy": {
+                "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+                "type": "string"
+              },
               "title": {
                 "description": "Concise discussion title summarizing the topic. The title appears as the main heading, so keep it brief and descriptive.",
                 "type": "string"
@@ -409,10 +379,18 @@ jobs:
                 "description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
                 "type": "string"
               },
+              "integrity": {
+                "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+                "type": "string"
+              },
               "reason": {
                 "description": "Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).",
                 "type": "string"
               },
+              "secrecy": {
+                "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+                "type": "string"
+              },
               "tool": {
                 "description": "Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.",
                 "type": "string"
@@ -430,9 +408,17 @@ jobs:
             "inputSchema": {
               "additionalProperties": false,
               "properties": {
+                "integrity": {
+                  "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+                  "type": "string"
+                },
                 "message": {
                   "description": "Status or completion message to log. Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').",
                   "type": "string"
                 },
+                "secrecy": {
+                  "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+                  "type": "string"
+                }
               },
               "required": [
@@ -459,9 +445,17 @@ jobs:
                 "description": "Type or description of the missing data or information (max 128 characters). Be specific about what data is needed.",
                 "type": "string"
               },
+              "integrity": {
+                "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+                "type": "string"
+              },
               "reason": {
                 "description": "Explanation of why this data is needed to complete the task (max 256 characters).",
                 "type": "string"
               },
+              "secrecy": {
+                "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+                "type": "string"
+              }
             },
             "required": [],
```
```diff
@@ -499,6 +493,31 @@ jobs:
               }
             }
           },
+          "missing_data": {
+            "defaultMax": 20,
+            "fields": {
+              "alternatives": {
+                "type": "string",
+                "sanitize": true,
+                "maxLength": 256
+              },
+              "context": {
+                "type": "string",
+                "sanitize": true,
+                "maxLength": 256
+              },
+              "data_type": {
+                "type": "string",
+                "sanitize": true,
+                "maxLength": 128
+              },
+              "reason": {
+                "type": "string",
+                "sanitize": true,
+                "maxLength": 256
+              }
+            }
+          },
           "missing_tool": {
             "defaultMax": 20,
             "fields": {
@@ -591,10 +610,11 @@ jobs:
           export MCP_GATEWAY_API_KEY
           export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads"
           mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}"
+          export MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD="524288"
           export DEBUG="*"

           export GH_AW_ENGINE="copilot"
-          export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.4'
+          export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.8'

           mkdir -p /home/runner/.copilot
           cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh
@@ -602,7 +622,7 @@ jobs:
             "mcpServers": {
               "github": {
                 "type": "stdio",
-                "container": "ghcr.io/github/github-mcp-server:v0.30.3",
+                "container": "ghcr.io/github/github-mcp-server:v0.32.0",
                 "env": {
                   "GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
                   "GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",
```
```diff
@@ -634,17 +654,11 @@ jobs:
             }
           }
           GH_AW_MCP_CONFIG_EOF
-      - name: Generate workflow overview
-        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
+      - name: Download activation artifact
+        uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
         with:
-          script: |
-            const { generateWorkflowOverview } = require('/opt/gh-aw/actions/generate_workflow_overview.cjs');
-            await generateWorkflowOverview(core);
-      - name: Download prompt artifact
-        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
-        with:
-          name: prompt
-          path: /tmp/gh-aw/aw-prompts
+          name: activation
+          path: /tmp/gh-aw
       - name: Clean git credentials
         run: bash /opt/gh-aw/actions/clean_git_credentials.sh
       - name: Execute GitHub Copilot CLI
@@ -653,20 +667,37 @@ jobs:
         timeout-minutes: 30
         run: |
           set -o pipefail
-          sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.19.1 --skip-pull \
-            -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_AGENT_COPILOT:+ --model "$GH_AW_MODEL_AGENT_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
+          touch /tmp/gh-aw/agent-step-summary.md
+          # shellcheck disable=SC1003
+          sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
+            -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
         env:
           COPILOT_AGENT_RUNNER_TYPE: STANDALONE
           COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
+          COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
           GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
           GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
+          GH_AW_PHASE: agent
           GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
           GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
+          GH_AW_VERSION: v0.57.2
           GITHUB_API_URL: ${{ github.api_url }}
           GITHUB_AW: true
           GITHUB_HEAD_REF: ${{ github.head_ref }}
           GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
           GITHUB_REF_NAME: ${{ github.ref_name }}
-          GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
+          GITHUB_SERVER_URL: ${{ github.server_url }}
+          GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
           GITHUB_WORKSPACE: ${{ github.workspace }}
+          GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
+          GIT_AUTHOR_NAME: github-actions[bot]
+          GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
+          GIT_COMMITTER_NAME: github-actions[bot]
           XDG_CONFIG_HOME: /home/runner
+      - name: Detect inference access error
+        id: detect-inference-error
+        if: always()
+        continue-on-error: true
+        run: bash /opt/gh-aw/actions/detect_inference_access_error.sh
       - name: Configure Git credentials
         env:
           REPO_NAME: ${{ github.repository }}
@@ -674,6 +705,7 @@ jobs:
         run: |
           git config --global user.email "github-actions[bot]@users.noreply.github.com"
           git config --global user.name "github-actions[bot]"
+          git config --global am.keepcr true
           # Re-authenticate git with GitHub token
           SERVER_URL_STRIPPED="${SERVER_URL#https://}"
           git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
```
```diff
@@ -719,9 +751,12 @@ jobs:
           SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
           SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
           SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      - name: Append agent step summary
+        if: always()
+        run: bash /opt/gh-aw/actions/append_agent_step_summary.sh
       - name: Upload Safe Outputs
         if: always()
-        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
         with:
           name: safe-output
           path: ${{ env.GH_AW_SAFE_OUTPUTS }}
@@ -743,13 +778,13 @@ jobs:
            await main();
       - name: Upload sanitized agent output
         if: always() && env.GH_AW_AGENT_OUTPUT
-        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
         with:
           name: agent-output
           path: ${{ env.GH_AW_AGENT_OUTPUT }}
           if-no-files-found: warn
       - name: Upload engine output files
-        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
         with:
           name: agent_outputs
           path: |
@@ -792,7 +827,7 @@ jobs:
             echo 'AWF binary not installed, skipping firewall log summary'
           fi
       - name: Upload cache-memory data as artifact
-        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
         if: always()
         with:
           name: cache-memory
```
@@ -800,23 +835,145 @@ jobs:
      - name: Upload agent artifacts
        if: always()
        continue-on-error: true
-       uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+       uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
        with:
          name: agent-artifacts
          path: |
            /tmp/gh-aw/aw-prompts/prompt.txt
            /tmp/gh-aw/aw_info.json
            /tmp/gh-aw/mcp-logs/
            /tmp/gh-aw/sandbox/firewall/logs/
            /tmp/gh-aw/agent-stdio.log
            /tmp/gh-aw/agent/
          if-no-files-found: ignore
+     # --- Threat Detection (inline) ---
+     - name: Check if detection needed
+       id: detection_guard
+       if: always()
+       env:
+         OUTPUT_TYPES: ${{ steps.collect_output.outputs.output_types }}
+         HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
+       run: |
+         if [[ -n "$OUTPUT_TYPES" || "$HAS_PATCH" == "true" ]]; then
+           echo "run_detection=true" >> "$GITHUB_OUTPUT"
+           echo "Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH"
+         else
+           echo "run_detection=false" >> "$GITHUB_OUTPUT"
+           echo "Detection skipped: no agent outputs or patches to analyze"
+         fi
+     - name: Clear MCP configuration for detection
+       if: always() && steps.detection_guard.outputs.run_detection == 'true'
+       run: |
+         rm -f /tmp/gh-aw/mcp-config/mcp-servers.json
+         rm -f /home/runner/.copilot/mcp-config.json
+         rm -f "$GITHUB_WORKSPACE/.gemini/settings.json"
+     - name: Prepare threat detection files
+       if: always() && steps.detection_guard.outputs.run_detection == 'true'
+       run: |
+         mkdir -p /tmp/gh-aw/threat-detection/aw-prompts
+         cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true
+         cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true
+         for f in /tmp/gh-aw/aw-*.patch; do
+           [ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true
+         done
+         echo "Prepared threat detection files:"
+         ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true
+     - name: Setup threat detection
+       if: always() && steps.detection_guard.outputs.run_detection == 'true'
+       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
+       env:
+         WORKFLOW_NAME: "API Coherence Checker"
+         WORKFLOW_DESCRIPTION: "Daily API coherence checker across Z3's multi-language bindings including Rust"
+         HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
+       with:
+         script: |
+           const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
+           setupGlobals(core, github, context, exec, io);
+           const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
+           await main();
+     - name: Ensure threat-detection directory and log
+       if: always() && steps.detection_guard.outputs.run_detection == 'true'
+       run: |
+         mkdir -p /tmp/gh-aw/threat-detection
+         touch /tmp/gh-aw/threat-detection/detection.log
+     - name: Execute GitHub Copilot CLI
+       if: always() && steps.detection_guard.outputs.run_detection == 'true'
+       id: detection_agentic_execution
+       # Copilot CLI tool arguments (sorted):
+       # --allow-tool shell(cat)
+       # --allow-tool shell(grep)
+       # --allow-tool shell(head)
+       # --allow-tool shell(jq)
+       # --allow-tool shell(ls)
+       # --allow-tool shell(tail)
+       # --allow-tool shell(wc)
+       timeout-minutes: 20
+       run: |
+         set -o pipefail
+         touch /tmp/gh-aw/agent-step-summary.md
+         # shellcheck disable=SC1003
+         sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
+           -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
+       env:
+         COPILOT_AGENT_RUNNER_TYPE: STANDALONE
+         COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
+         COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
+         GH_AW_PHASE: detection
+         GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
+         GH_AW_VERSION: v0.57.2
+         GITHUB_API_URL: ${{ github.api_url }}
+         GITHUB_AW: true
+         GITHUB_HEAD_REF: ${{ github.head_ref }}
+         GITHUB_REF_NAME: ${{ github.ref_name }}
+         GITHUB_SERVER_URL: ${{ github.server_url }}
+         GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
+         GITHUB_WORKSPACE: ${{ github.workspace }}
+         GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
+         GIT_AUTHOR_NAME: github-actions[bot]
+         GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
+         GIT_COMMITTER_NAME: github-actions[bot]
+         XDG_CONFIG_HOME: /home/runner
+     - name: Parse threat detection results
+       id: parse_detection_results
+       if: always() && steps.detection_guard.outputs.run_detection == 'true'
+       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
+       with:
+         script: |
+           const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
+           setupGlobals(core, github, context, exec, io);
+           const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
+           await main();
+     - name: Upload threat detection log
+       if: always() && steps.detection_guard.outputs.run_detection == 'true'
+       uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
+       with:
+         name: threat-detection.log
+         path: /tmp/gh-aw/threat-detection/detection.log
+         if-no-files-found: ignore
+     - name: Set detection conclusion
+       id: detection_conclusion
+       if: always()
+       env:
+         RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }}
+         DETECTION_SUCCESS: ${{ steps.parse_detection_results.outputs.success }}
+       run: |
+         if [[ "$RUN_DETECTION" != "true" ]]; then
+           echo "conclusion=skipped" >> "$GITHUB_OUTPUT"
+           echo "success=true" >> "$GITHUB_OUTPUT"
+           echo "Detection was not needed, marking as skipped"
+         elif [[ "$DETECTION_SUCCESS" == "true" ]]; then
+           echo "conclusion=success" >> "$GITHUB_OUTPUT"
+           echo "success=true" >> "$GITHUB_OUTPUT"
+           echo "Detection passed successfully"
+         else
+           echo "conclusion=failure" >> "$GITHUB_OUTPUT"
+           echo "success=false" >> "$GITHUB_OUTPUT"
+           echo "Detection found issues"
+         fi

  conclusion:
    needs:
      - activation
      - agent
      - detection
      - safe_outputs
      - update_cache_memory
    if: (always()) && (needs.agent.result != 'skipped')
@@ -825,22 +982,27 @@ jobs:
      contents: read
      discussions: write
      issues: write
    concurrency:
      group: "gh-aw-conclusion-api-coherence-checker"
      cancel-in-progress: false
    outputs:
      noop_message: ${{ steps.noop.outputs.noop_message }}
      tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
      total_count: ${{ steps.missing_tool.outputs.total_count }}
    steps:
      - name: Setup Scripts
-       uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
+       uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
        with:
          destination: /opt/gh-aw/actions
      - name: Download agent output artifact
        id: download-agent-output
        continue-on-error: true
-       uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
+       uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
        with:
          name: agent-output
          path: /tmp/gh-aw/safeoutputs/
      - name: Setup agent output environment variable
        if: steps.download-agent-output.outcome == 'success'
        run: |
          mkdir -p /tmp/gh-aw/safeoutputs/
          find "/tmp/gh-aw/safeoutputs/" -type f -print
@@ -850,7 +1012,7 @@ jobs:
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
-         GH_AW_NOOP_MAX: 1
+         GH_AW_NOOP_MAX: "1"
          GH_AW_WORKFLOW_NAME: "API Coherence Checker"
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -881,10 +1043,14 @@ jobs:
          GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
          GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
          GH_AW_WORKFLOW_ID: "api-coherence-checker"
-         GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.agent.outputs.secret_verification_result }}
+         GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }}
          GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}
          GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }}
          GH_AW_CREATE_DISCUSSION_ERRORS: ${{ needs.safe_outputs.outputs.create_discussion_errors }}
          GH_AW_CREATE_DISCUSSION_ERROR_COUNT: ${{ needs.safe_outputs.outputs.create_discussion_error_count }}
          GH_AW_GROUP_REPORTS: "false"
          GH_AW_FAILURE_REPORT_AS_ISSUE: "true"
          GH_AW_TIMEOUT_MINUTES: "30"
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
@@ -901,7 +1067,7 @@ jobs:
          GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
          GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
          GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
-         GH_AW_NOOP_REPORT_AS_ISSUE: "true"
+         GH_AW_NOOP_REPORT_AS_ISSUE: "false"
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
@@ -910,112 +1076,9 @@ jobs:
          const { main } = require('/opt/gh-aw/actions/handle_noop_message.cjs');
          await main();

- detection:
-   needs: agent
-   if: needs.agent.outputs.output_types != '' || needs.agent.outputs.has_patch == 'true'
-   runs-on: ubuntu-latest
-   permissions: {}
-   concurrency:
-     group: "gh-aw-copilot-${{ github.workflow }}"
-   timeout-minutes: 10
-   outputs:
-     success: ${{ steps.parse_results.outputs.success }}
-   steps:
-     - name: Setup Scripts
-       uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
-       with:
-         destination: /opt/gh-aw/actions
-     - name: Download agent artifacts
-       continue-on-error: true
-       uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
-       with:
-         name: agent-artifacts
-         path: /tmp/gh-aw/threat-detection/
-     - name: Download agent output artifact
-       continue-on-error: true
-       uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
-       with:
-         name: agent-output
-         path: /tmp/gh-aw/threat-detection/
-     - name: Echo agent output types
-       env:
-         AGENT_OUTPUT_TYPES: ${{ needs.agent.outputs.output_types }}
-       run: |
-         echo "Agent output-types: $AGENT_OUTPUT_TYPES"
-     - name: Setup threat detection
-       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
-       env:
-         WORKFLOW_NAME: "API Coherence Checker"
-         WORKFLOW_DESCRIPTION: "Daily API coherence checker across Z3's multi-language bindings including Rust"
-         HAS_PATCH: ${{ needs.agent.outputs.has_patch }}
-       with:
-         script: |
-           const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
-           setupGlobals(core, github, context, exec, io);
-           const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
-           await main();
-     - name: Ensure threat-detection directory and log
-       run: |
-         mkdir -p /tmp/gh-aw/threat-detection
-         touch /tmp/gh-aw/threat-detection/detection.log
-     - name: Validate COPILOT_GITHUB_TOKEN secret
-       id: validate-secret
-       run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
-       env:
-         COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
-     - name: Install GitHub Copilot CLI
-       run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
-     - name: Execute GitHub Copilot CLI
-       id: agentic_execution
-       # Copilot CLI tool arguments (sorted):
-       # --allow-tool shell(cat)
-       # --allow-tool shell(grep)
-       # --allow-tool shell(head)
-       # --allow-tool shell(jq)
-       # --allow-tool shell(ls)
-       # --allow-tool shell(tail)
-       # --allow-tool shell(wc)
-       timeout-minutes: 20
-       run: |
-         set -o pipefail
-         COPILOT_CLI_INSTRUCTION="$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"
-         mkdir -p /tmp/
-         mkdir -p /tmp/gh-aw/
-         mkdir -p /tmp/gh-aw/agent/
-         mkdir -p /tmp/gh-aw/sandbox/agent/logs/
-         copilot --add-dir /tmp/ --add-dir /tmp/gh-aw/ --add-dir /tmp/gh-aw/agent/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-tool 'shell(cat)' --allow-tool 'shell(grep)' --allow-tool 'shell(head)' --allow-tool 'shell(jq)' --allow-tool 'shell(ls)' --allow-tool 'shell(tail)' --allow-tool 'shell(wc)' --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$COPILOT_CLI_INSTRUCTION"${GH_AW_MODEL_DETECTION_COPILOT:+ --model "$GH_AW_MODEL_DETECTION_COPILOT"} 2>&1 | tee /tmp/gh-aw/threat-detection/detection.log
-       env:
-         COPILOT_AGENT_RUNNER_TYPE: STANDALONE
-         COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
-         GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
-         GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
-         GITHUB_HEAD_REF: ${{ github.head_ref }}
-         GITHUB_REF_NAME: ${{ github.ref_name }}
-         GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
-         GITHUB_WORKSPACE: ${{ github.workspace }}
-         XDG_CONFIG_HOME: /home/runner
-     - name: Parse threat detection results
-       id: parse_results
-       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
-       with:
-         script: |
-           const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
-           setupGlobals(core, github, context, exec, io);
-           const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
-           await main();
-     - name: Upload threat detection log
-       if: always()
-       uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
-       with:
-         name: threat-detection.log
-         path: /tmp/gh-aw/threat-detection/detection.log
-         if-no-files-found: ignore

  safe_outputs:
-   needs:
-     - agent
-     - detection
-   if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.detection.outputs.success == 'true')
+   needs: agent
+   if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.agent.outputs.detection_success == 'true')
    runs-on: ubuntu-slim
    permissions:
      contents: read
@@ -1023,26 +1086,31 @@ jobs:
      issues: write
    timeout-minutes: 15
    env:
      GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/api-coherence-checker"
      GH_AW_ENGINE_ID: "copilot"
      GH_AW_WORKFLOW_ID: "api-coherence-checker"
      GH_AW_WORKFLOW_NAME: "API Coherence Checker"
    outputs:
      code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}
      code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}
      create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }}
      create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }}
      process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }}
      process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
    steps:
      - name: Setup Scripts
-       uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
+       uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
        with:
          destination: /opt/gh-aw/actions
      - name: Download agent output artifact
        id: download-agent-output
        continue-on-error: true
-       uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
+       uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
        with:
          name: agent-output
          path: /tmp/gh-aw/safeoutputs/
      - name: Setup agent output environment variable
        if: steps.download-agent-output.outcome == 'success'
        run: |
          mkdir -p /tmp/gh-aw/safeoutputs/
          find "/tmp/gh-aw/safeoutputs/" -type f -print
@@ -1052,6 +1120,9 @@ jobs:
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
          GITHUB_SERVER_URL: ${{ github.server_url }}
          GITHUB_API_URL: ${{ github.api_url }}
          GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_discussion\":{\"category\":\"agentic workflows\",\"close_older_discussions\":true,\"expires\":168,\"fallback_to_issue\":true,\"max\":1,\"title_prefix\":\"[API Coherence] \"},\"missing_data\":{},\"missing_tool\":{}}"
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -1060,27 +1131,45 @@ jobs:
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/safe_output_handler_manager.cjs');
            await main();
+     - name: Upload safe output items manifest
+       if: always()
+       uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
+       with:
+         name: safe-output-items
+         path: /tmp/safe-output-items.jsonl
+         if-no-files-found: warn

  update_cache_memory:
-   needs:
-     - agent
-     - detection
-   if: always() && needs.detection.outputs.success == 'true'
+   needs: agent
+   if: always() && needs.agent.outputs.detection_success == 'true'
    runs-on: ubuntu-latest
    permissions: {}
    env:
      GH_AW_WORKFLOW_ID_SANITIZED: apicoherencechecker
    steps:
      - name: Setup Scripts
-       uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
+       uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
        with:
          destination: /opt/gh-aw/actions
      - name: Download cache-memory artifact (default)
-       uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
+       id: download_cache_default
+       uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
        continue-on-error: true
        with:
          name: cache-memory
          path: /tmp/gh-aw/cache-memory
+     - name: Check if cache-memory folder has content (default)
+       id: check_cache_default
+       shell: bash
+       run: |
+         if [ -d "/tmp/gh-aw/cache-memory" ] && [ "$(ls -A /tmp/gh-aw/cache-memory 2>/dev/null)" ]; then
+           echo "has_content=true" >> "$GITHUB_OUTPUT"
+         else
+           echo "has_content=false" >> "$GITHUB_OUTPUT"
+         fi
      - name: Save cache-memory to cache (default)
-       uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+       if: steps.check_cache_default.outputs.has_content == 'true'
+       uses: actions/cache/save@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
        with:
          key: memory-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }}
          path: /tmp/gh-aw/cache-memory
 4  .github/workflows/api-coherence-checker.md  vendored

@@ -26,11 +26,13 @@ safe-outputs:
    title-prefix: "[API Coherence] "
    category: "Agentic Workflows"
    close-older-discussions: true
+ noop:
+   report-as-issue: false
  github-token: ${{ secrets.GITHUB_TOKEN }}

steps:
  - name: Checkout repository
-   uses: actions/checkout@v5
+   uses: actions/checkout@v6.0.2
    with:
      persist-credentials: false
 584  .github/workflows/build-warning-fixer.lock.yml  generated  vendored

@@ -13,7 +13,7 @@
 # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
 # \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
 #
-# This file was automatically generated by gh-aw (v0.45.6). DO NOT EDIT.
+# This file was automatically generated by gh-aw (v0.57.2). DO NOT EDIT.
 #
 # To update this file, edit the corresponding .md file and run:
 #   gh aw compile
@@ -23,7 +23,7 @@
 #
 # Automatically builds Z3 directly and fixes detected build warnings
 #
-# gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"8b0dff2ea86746229278e436b3de6a4d6868c48ea5aecca3aad131d326a4c819"}
+# gh-aw-metadata: {"schema_version":"v2","frontmatter_hash":"076f956f53f04fe2f9fc916da97f426b702f68c328045cce4cc1575bed38787d","compiler_version":"v0.57.2","strict":true}

name: "Build Warning Fixer"
"on":
@@ -47,19 +47,51 @@ jobs:
    outputs:
      comment_id: ""
      comment_repo: ""
+     model: ${{ steps.generate_aw_info.outputs.model }}
+     secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
    steps:
      - name: Setup Scripts
-       uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
+       uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
        with:
          destination: /opt/gh-aw/actions
+     - name: Generate agentic run info
+       id: generate_aw_info
+       env:
+         GH_AW_INFO_ENGINE_ID: "copilot"
+         GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
+         GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
+         GH_AW_INFO_VERSION: ""
+         GH_AW_INFO_AGENT_VERSION: "latest"
+         GH_AW_INFO_CLI_VERSION: "v0.57.2"
+         GH_AW_INFO_WORKFLOW_NAME: "Build Warning Fixer"
+         GH_AW_INFO_EXPERIMENTAL: "false"
+         GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"
+         GH_AW_INFO_STAGED: "false"
+         GH_AW_INFO_ALLOWED_DOMAINS: '["defaults"]'
+         GH_AW_INFO_FIREWALL_ENABLED: "true"
+         GH_AW_INFO_AWF_VERSION: "v0.23.0"
+         GH_AW_INFO_AWMG_VERSION: ""
+         GH_AW_INFO_FIREWALL_TYPE: "squid"
+         GH_AW_COMPILED_STRICT: "true"
+       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
+       with:
+         script: |
+           const { main } = require('/opt/gh-aw/actions/generate_aw_info.cjs');
+           await main(core, context);
+     - name: Validate COPILOT_GITHUB_TOKEN secret
+       id: validate-secret
+       run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
+       env:
+         COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
      - name: Checkout .github and .agents folders
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
-         persist-credentials: false
          sparse-checkout: |
            .github
            .agents
          sparse-checkout-cone-mode: true
          fetch-depth: 1
+         persist-credentials: false
      - name: Check workflow file timestamps
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
@@ -84,41 +116,21 @@ jobs:
          GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
        run: |
          bash /opt/gh-aw/actions/create_prompt_first.sh
-         cat << 'GH_AW_PROMPT_EOF' > "$GH_AW_PROMPT"
+         {
+           cat << 'GH_AW_PROMPT_EOF'
          <system>
          GH_AW_PROMPT_EOF
-         cat "/opt/gh-aw/prompts/xpia.md" >> "$GH_AW_PROMPT"
-         cat "/opt/gh-aw/prompts/temp_folder_prompt.md" >> "$GH_AW_PROMPT"
-         cat "/opt/gh-aw/prompts/markdown.md" >> "$GH_AW_PROMPT"
-         cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
-         <safe-outputs>
-         <description>GitHub API Access Instructions</description>
-         <important>
-         The gh CLI is NOT authenticated. Do NOT use gh commands for GitHub operations.
-         </important>
-         <instructions>
-         To create or modify GitHub resources (issues, discussions, pull requests, etc.), you MUST call the appropriate safe output tool. Simply writing content will NOT work - the workflow requires actual tool calls.
-
-         Temporary IDs: Some safe output tools support a temporary ID field (usually named temporary_id) so you can reference newly-created items elsewhere in the SAME agent output (for example, using #aw_abc1 in a later body).
-
-         **IMPORTANT - temporary_id format rules:**
-         - If you DON'T need to reference the item later, OMIT the temporary_id field entirely (it will be auto-generated if needed)
-         - If you DO need cross-references/chaining, you MUST match this EXACT validation regex: /^aw_[A-Za-z0-9]{3,8}$/i
-         - Format: aw_ prefix followed by 3 to 8 alphanumeric characters (A-Z, a-z, 0-9, case-insensitive)
-         - Valid alphanumeric characters: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
-         - INVALID examples: aw_ab (too short), aw_123456789 (too long), aw_test-id (contains hyphen), aw_id_123 (contains underscore)
-         - VALID examples: aw_abc, aw_abc1, aw_Test123, aw_A1B2C3D4, aw_12345678
-         - To generate valid IDs: use 3-8 random alphanumeric characters or omit the field to let the system auto-generate
-
-         Do NOT invent other aw_* formats — downstream steps will reject them with validation errors matching against /^aw_[A-Za-z0-9]{3,8}$/i.
-
-         Discover available tools from the safeoutputs MCP server.
-
-         **Critical**: Tool calls write structured data that downstream jobs process. Without tool calls, follow-up actions will be skipped.
-
-         **Note**: If you made no other safe output tool calls during this workflow execution, call the "noop" tool to provide a status message indicating completion or that no actions were needed.
-         </instructions>
-         </safe-outputs>
+           cat "/opt/gh-aw/prompts/xpia.md"
+           cat "/opt/gh-aw/prompts/temp_folder_prompt.md"
+           cat "/opt/gh-aw/prompts/markdown.md"
+           cat "/opt/gh-aw/prompts/safe_outputs_prompt.md"
+           cat << 'GH_AW_PROMPT_EOF'
+         <safe-output-tools>
+         Tools: create_pull_request, missing_tool, missing_data, noop
+         GH_AW_PROMPT_EOF
+           cat "/opt/gh-aw/prompts/safe_outputs_create_pull_request.md"
+           cat << 'GH_AW_PROMPT_EOF'
+         </safe-output-tools>
          <github-context>
          The following GitHub context information is available for this workflow:
          {{#if __GH_AW_GITHUB_ACTOR__ }}
@@ -148,12 +160,13 @@ jobs:
          </github-context>

          GH_AW_PROMPT_EOF
-         cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
+           cat << 'GH_AW_PROMPT_EOF'
          </system>
          GH_AW_PROMPT_EOF
-         cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
+           cat << 'GH_AW_PROMPT_EOF'
          {{#runtime-import .github/workflows/build-warning-fixer.md}}
          GH_AW_PROMPT_EOF
+         } > "$GH_AW_PROMPT"
      - name: Interpolate variables and render templates
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
@@ -176,8 +189,6 @@ jobs:
          GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
          GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
          GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
-         GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: ${{ needs.pre_activation.outputs.activated }}
-         GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: ${{ needs.pre_activation.outputs.matched_command }}
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
@@ -196,9 +207,7 @@ jobs:
              GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER,
              GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,
              GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID,
-             GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE,
-             GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED,
-             GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND
+             GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE
            }
          });
      - name: Validate prompt placeholders
@@ -209,12 +218,14 @@ jobs:
        env:
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
        run: bash /opt/gh-aw/actions/print_prompt_summary.sh
-     - name: Upload prompt artifact
+     - name: Upload activation artifact
        if: success()
-       uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+       uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
        with:
-         name: prompt
-         path: /tmp/gh-aw/aw-prompts/prompt.txt
+         name: activation
+         path: |
+           /tmp/gh-aw/aw_info.json
+           /tmp/gh-aw/aw-prompts/prompt.txt
          retention-days: 1

  agent:
@@ -235,14 +246,16 @@ jobs:
      GH_AW_WORKFLOW_ID_SANITIZED: buildwarningfixer
    outputs:
      checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }}
+     detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
+     detection_success: ${{ steps.detection_conclusion.outputs.success }}
      has_patch: ${{ steps.collect_output.outputs.has_patch }}
-     model: ${{ steps.generate_aw_info.outputs.model }}
+     inference_access_error: ${{ steps.detect-inference-error.outputs.inference_access_error || 'false' }}
+     model: ${{ needs.activation.outputs.model }}
      output: ${{ steps.collect_output.outputs.output }}
      output_types: ${{ steps.collect_output.outputs.output_types }}
-     secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
    steps:
      - name: Setup Scripts
-       uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
+       uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
        with:
          destination: /opt/gh-aw/actions
      - name: Checkout repository
@@ -258,6 +271,7 @@ jobs:
        run: |
          git config --global user.email "github-actions[bot]@users.noreply.github.com"
          git config --global user.name "github-actions[bot]"
+         git config --global am.keepcr true
          # Re-authenticate git with GitHub token
          SERVER_URL_STRIPPED="${SERVER_URL#https://}"
          git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
@@ -265,7 +279,7 @@ jobs:
      - name: Checkout PR branch
        id: checkout-pr
        if: |
-         github.event.pull_request
+         (github.event.pull_request) || (github.event.issue.pull_request)
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
@@ -276,59 +290,10 @@
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
await main();
- name: Generate agentic run info
id: generate_aw_info
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const fs = require('fs');

const awInfo = {
engine_id: "copilot",
engine_name: "GitHub Copilot CLI",
model: process.env.GH_AW_MODEL_AGENT_COPILOT || "",
version: "",
agent_version: "0.0.410",
cli_version: "v0.45.6",
workflow_name: "Build Warning Fixer",
experimental: false,
supports_tools_allowlist: true,
run_id: context.runId,
run_number: context.runNumber,
run_attempt: process.env.GITHUB_RUN_ATTEMPT,
repository: context.repo.owner + '/' + context.repo.repo,
ref: context.ref,
sha: context.sha,
actor: context.actor,
event_name: context.eventName,
staged: false,
allowed_domains: ["defaults"],
firewall_enabled: true,
awf_version: "v0.19.1",
awmg_version: "v0.1.4",
steps: {
firewall: "squid"
},
created_at: new Date().toISOString()
};

// Write to /tmp/gh-aw directory to avoid inclusion in PR
const tmpPath = '/tmp/gh-aw/aw_info.json';
fs.writeFileSync(tmpPath, JSON.stringify(awInfo, null, 2));
console.log('Generated aw_info.json at:', tmpPath);
console.log(JSON.stringify(awInfo, null, 2));

// Set model as output for reuse in other steps/jobs
core.setOutput('model', awInfo.model);
- name: Validate COPILOT_GITHUB_TOKEN secret
id: validate-secret
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
env:
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
- name: Install GitHub Copilot CLI
run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
run: /opt/gh-aw/actions/install_copilot_cli.sh latest
- name: Install awf binary
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.19.1
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
- name: Determine automatic lockdown mode for GitHub MCP Server
id: determine-automatic-lockdown
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
@@ -340,14 +305,14 @@
const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
await determineAutomaticLockdown(github, context, core);
- name: Download container images
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.19.1 ghcr.io/github/gh-aw-firewall/squid:0.19.1 ghcr.io/github/gh-aw-mcpg:v0.1.4 ghcr.io/github/github-mcp-server:v0.30.3 node:lts-alpine
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.32.0 node:lts-alpine
- name: Write Safe Outputs Config
run: |
mkdir -p /opt/gh-aw/safeoutputs
mkdir -p /tmp/gh-aw/safeoutputs
mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs
cat > /opt/gh-aw/safeoutputs/config.json << 'GH_AW_SAFE_OUTPUTS_CONFIG_EOF'
{"create_missing_tool_issue":{"max":1,"title_prefix":"[missing tool]"},"create_pull_request":{},"missing_data":{},"missing_tool":{},"noop":{"max":1}}
{"create_missing_tool_issue":{"max":1,"title_prefix":"[missing tool]"},"create_pull_request":{"max":1},"missing_data":{},"missing_tool":{},"noop":{"max":1}}
GH_AW_SAFE_OUTPUTS_CONFIG_EOF
cat > /opt/gh-aw/safeoutputs/tools.json << 'GH_AW_SAFE_OUTPUTS_TOOLS_EOF'
[
@@ -364,6 +329,14 @@
"description": "Source branch name containing the changes. If omitted, uses the current working branch.",
"type": "string"
},
"draft": {
"description": "Whether to create the PR as a draft. Draft PRs cannot be merged until marked as ready for review. Use mark_pull_request_as_ready_for_review to convert a draft PR. Default: true.",
"type": "boolean"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"labels": {
"description": "Labels to categorize the PR (e.g., 'enhancement', 'bugfix'). Labels must exist in the repository.",
"items": {
@@ -371,6 +344,14 @@
},
"type": "array"
},
"repo": {
"description": "Target repository in 'owner/repo' format. For multi-repo workflows where the target repo differs from the workflow repo, this must match a repo in the allowed-repos list or the configured target-repo. If omitted, defaults to the configured target-repo (from safe-outputs config), NOT the workflow repository. In most cases, you should omit this parameter and let the system use the configured default.",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
},
"title": {
"description": "Concise PR title describing the changes. Follow repository conventions (e.g., conventional commits). The title appears as the main heading.",
"type": "string"
@@ -393,10 +374,18 @@
"description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"reason": {
"description": "Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
},
"tool": {
"description": "Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.",
"type": "string"
@@ -414,9 +403,17 @@
"inputSchema": {
"additionalProperties": false,
"properties": {
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"message": {
"description": "Status or completion message to log. Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
}
},
"required": [
@@ -443,9 +440,17 @@
"description": "Type or description of the missing data or information (max 128 characters). Be specific about what data is needed.",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"reason": {
"description": "Explanation of why this data is needed to complete the task (max 256 characters).",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
}
},
"required": [],
@@ -472,12 +477,19 @@
"sanitize": true,
"maxLength": 256
},
"draft": {
"type": "boolean"
},
"labels": {
"type": "array",
"itemType": "string",
"itemSanitize": true,
"itemMaxLength": 128
},
"repo": {
"type": "string",
"maxLength": 256
},
"title": {
"required": true,
"type": "string",
@@ -486,6 +498,31 @@
}
}
},
"missing_data": {
"defaultMax": 20,
"fields": {
"alternatives": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"context": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"data_type": {
"type": "string",
"sanitize": true,
"maxLength": 128
},
"reason": {
"type": "string",
"sanitize": true,
"maxLength": 256
}
}
},
"missing_tool": {
"defaultMax": 20,
"fields": {
@@ -578,10 +615,11 @@
export MCP_GATEWAY_API_KEY
export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads"
mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}"
export MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD="524288"
export DEBUG="*"

export GH_AW_ENGINE="copilot"
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.4'
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.8'

mkdir -p /home/runner/.copilot
cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh
@@ -589,7 +627,7 @@
"mcpServers": {
"github": {
"type": "stdio",
"container": "ghcr.io/github/github-mcp-server:v0.30.3",
"container": "ghcr.io/github/github-mcp-server:v0.32.0",
"env": {
"GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
"GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",
@@ -613,17 +651,11 @@
}
}
GH_AW_MCP_CONFIG_EOF
- name: Generate workflow overview
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
- name: Download activation artifact
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
script: |
const { generateWorkflowOverview } = require('/opt/gh-aw/actions/generate_workflow_overview.cjs');
await generateWorkflowOverview(core);
- name: Download prompt artifact
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
with:
name: prompt
path: /tmp/gh-aw/aw-prompts
name: activation
path: /tmp/gh-aw
- name: Clean git credentials
run: bash /opt/gh-aw/actions/clean_git_credentials.sh
- name: Execute GitHub Copilot CLI
@@ -632,20 +664,37 @@
timeout-minutes: 60
run: |
set -o pipefail
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.19.1 --skip-pull \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --allow-all-paths --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_AGENT_COPILOT:+ --model "$GH_AW_MODEL_AGENT_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_PHASE: agent
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
GH_AW_VERSION: v0.57.2
GITHUB_API_URL: ${{ github.api_url }}
GITHUB_AW: true
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
GITHUB_WORKSPACE: ${{ github.workspace }}
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_AUTHOR_NAME: github-actions[bot]
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_COMMITTER_NAME: github-actions[bot]
XDG_CONFIG_HOME: /home/runner
- name: Detect inference access error
id: detect-inference-error
if: always()
continue-on-error: true
run: bash /opt/gh-aw/actions/detect_inference_access_error.sh
- name: Configure Git credentials
env:
REPO_NAME: ${{ github.repository }}
@@ -653,6 +702,7 @@
run: |
git config --global user.email "github-actions[bot]@users.noreply.github.com"
git config --global user.name "github-actions[bot]"
git config --global am.keepcr true
# Re-authenticate git with GitHub token
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
@@ -698,9 +748,12 @@
SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Append agent step summary
if: always()
run: bash /opt/gh-aw/actions/append_agent_step_summary.sh
- name: Upload Safe Outputs
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: safe-output
path: ${{ env.GH_AW_SAFE_OUTPUTS }}
@@ -722,13 +775,13 @@
await main();
- name: Upload sanitized agent output
if: always() && env.GH_AW_AGENT_OUTPUT
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent-output
path: ${{ env.GH_AW_AGENT_OUTPUT }}
if-no-files-found: warn
- name: Upload engine output files
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent_outputs
path: |
@@ -773,24 +826,146 @@ jobs:
- name: Upload agent artifacts
if: always()
continue-on-error: true
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent-artifacts
path: |
/tmp/gh-aw/aw-prompts/prompt.txt
/tmp/gh-aw/aw_info.json
/tmp/gh-aw/mcp-logs/
/tmp/gh-aw/sandbox/firewall/logs/
/tmp/gh-aw/agent-stdio.log
/tmp/gh-aw/agent/
/tmp/gh-aw/aw.patch
/tmp/gh-aw/aw-*.patch
if-no-files-found: ignore
# --- Threat Detection (inline) ---
- name: Check if detection needed
id: detection_guard
if: always()
env:
OUTPUT_TYPES: ${{ steps.collect_output.outputs.output_types }}
HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
run: |
if [[ -n "$OUTPUT_TYPES" || "$HAS_PATCH" == "true" ]]; then
echo "run_detection=true" >> "$GITHUB_OUTPUT"
echo "Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH"
else
echo "run_detection=false" >> "$GITHUB_OUTPUT"
echo "Detection skipped: no agent outputs or patches to analyze"
fi
- name: Clear MCP configuration for detection
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
rm -f /tmp/gh-aw/mcp-config/mcp-servers.json
rm -f /home/runner/.copilot/mcp-config.json
rm -f "$GITHUB_WORKSPACE/.gemini/settings.json"
- name: Prepare threat detection files
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
mkdir -p /tmp/gh-aw/threat-detection/aw-prompts
cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true
cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true
for f in /tmp/gh-aw/aw-*.patch; do
[ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true
done
echo "Prepared threat detection files:"
ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true
- name: Setup threat detection
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
WORKFLOW_NAME: "Build Warning Fixer"
WORKFLOW_DESCRIPTION: "Automatically builds Z3 directly and fixes detected build warnings"
HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
await main();
- name: Ensure threat-detection directory and log
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
mkdir -p /tmp/gh-aw/threat-detection
touch /tmp/gh-aw/threat-detection/detection.log
- name: Execute GitHub Copilot CLI
if: always() && steps.detection_guard.outputs.run_detection == 'true'
id: detection_agentic_execution
# Copilot CLI tool arguments (sorted):
# --allow-tool shell(cat)
# --allow-tool shell(grep)
# --allow-tool shell(head)
# --allow-tool shell(jq)
# --allow-tool shell(ls)
# --allow-tool shell(tail)
# --allow-tool shell(wc)
timeout-minutes: 20
run: |
set -o pipefail
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
GH_AW_PHASE: detection
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_VERSION: v0.57.2
GITHUB_API_URL: ${{ github.api_url }}
GITHUB_AW: true
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
GITHUB_WORKSPACE: ${{ github.workspace }}
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_AUTHOR_NAME: github-actions[bot]
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_COMMITTER_NAME: github-actions[bot]
XDG_CONFIG_HOME: /home/runner
- name: Parse threat detection results
id: parse_detection_results
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
await main();
- name: Upload threat detection log
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: threat-detection.log
path: /tmp/gh-aw/threat-detection/detection.log
if-no-files-found: ignore
- name: Set detection conclusion
id: detection_conclusion
if: always()
env:
RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }}
DETECTION_SUCCESS: ${{ steps.parse_detection_results.outputs.success }}
run: |
if [[ "$RUN_DETECTION" != "true" ]]; then
echo "conclusion=skipped" >> "$GITHUB_OUTPUT"
echo "success=true" >> "$GITHUB_OUTPUT"
echo "Detection was not needed, marking as skipped"
elif [[ "$DETECTION_SUCCESS" == "true" ]]; then
echo "conclusion=success" >> "$GITHUB_OUTPUT"
echo "success=true" >> "$GITHUB_OUTPUT"
echo "Detection passed successfully"
else
echo "conclusion=failure" >> "$GITHUB_OUTPUT"
echo "success=false" >> "$GITHUB_OUTPUT"
echo "Detection found issues"
fi

conclusion:
needs:
- activation
- agent
- detection
- safe_outputs
if: (always()) && (needs.agent.result != 'skipped')
runs-on: ubuntu-slim
@@ -798,22 +973,27 @@
contents: write
issues: write
pull-requests: write
concurrency:
group: "gh-aw-conclusion-build-warning-fixer"
cancel-in-progress: false
outputs:
noop_message: ${{ steps.noop.outputs.noop_message }}
tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
total_count: ${{ steps.missing_tool.outputs.total_count }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
with:
destination: /opt/gh-aw/actions
- name: Download agent output artifact
id: download-agent-output
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
name: agent-output
path: /tmp/gh-aw/safeoutputs/
- name: Setup agent output environment variable
if: steps.download-agent-output.outcome == 'success'
run: |
mkdir -p /tmp/gh-aw/safeoutputs/
find "/tmp/gh-aw/safeoutputs/" -type f -print
@@ -823,7 +1003,7 @@
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
GH_AW_NOOP_MAX: 1
GH_AW_NOOP_MAX: "1"
GH_AW_WORKFLOW_NAME: "Build Warning Fixer"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
@@ -856,8 +1036,14 @@
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_WORKFLOW_ID: "build-warning-fixer"
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.agent.outputs.secret_verification_result }}
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }}
GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}
GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }}
GH_AW_CODE_PUSH_FAILURE_ERRORS: ${{ needs.safe_outputs.outputs.code_push_failure_errors }}
GH_AW_CODE_PUSH_FAILURE_COUNT: ${{ needs.safe_outputs.outputs.code_push_failure_count }}
GH_AW_GROUP_REPORTS: "false"
GH_AW_FAILURE_REPORT_AS_ISSUE: "true"
GH_AW_TIMEOUT_MINUTES: "60"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
script: |
@@ -874,7 +1060,7 @@
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
GH_AW_NOOP_REPORT_AS_ISSUE: "true"
GH_AW_NOOP_REPORT_AS_ISSUE: "false"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
script: |
@@ -897,113 +1083,11 @@
const { main } = require('/opt/gh-aw/actions/handle_create_pr_error.cjs');
await main();

detection:
needs: agent
if: needs.agent.outputs.output_types != '' || needs.agent.outputs.has_patch == 'true'
runs-on: ubuntu-latest
permissions: {}
concurrency:
group: "gh-aw-copilot-${{ github.workflow }}"
timeout-minutes: 10
outputs:
success: ${{ steps.parse_results.outputs.success }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
with:
destination: /opt/gh-aw/actions
- name: Download agent artifacts
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
with:
name: agent-artifacts
path: /tmp/gh-aw/threat-detection/
- name: Download agent output artifact
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
with:
name: agent-output
path: /tmp/gh-aw/threat-detection/
- name: Echo agent output types
env:
AGENT_OUTPUT_TYPES: ${{ needs.agent.outputs.output_types }}
run: |
echo "Agent output-types: $AGENT_OUTPUT_TYPES"
- name: Setup threat detection
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
WORKFLOW_NAME: "Build Warning Fixer"
WORKFLOW_DESCRIPTION: "Automatically builds Z3 directly and fixes detected build warnings"
HAS_PATCH: ${{ needs.agent.outputs.has_patch }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
await main();
- name: Ensure threat-detection directory and log
run: |
mkdir -p /tmp/gh-aw/threat-detection
touch /tmp/gh-aw/threat-detection/detection.log
- name: Validate COPILOT_GITHUB_TOKEN secret
id: validate-secret
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
env:
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
- name: Install GitHub Copilot CLI
run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
- name: Execute GitHub Copilot CLI
id: agentic_execution
# Copilot CLI tool arguments (sorted):
# --allow-tool shell(cat)
# --allow-tool shell(grep)
# --allow-tool shell(head)
# --allow-tool shell(jq)
# --allow-tool shell(ls)
# --allow-tool shell(tail)
# --allow-tool shell(wc)
timeout-minutes: 20
run: |
set -o pipefail
COPILOT_CLI_INSTRUCTION="$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"
mkdir -p /tmp/
mkdir -p /tmp/gh-aw/
mkdir -p /tmp/gh-aw/agent/
mkdir -p /tmp/gh-aw/sandbox/agent/logs/
copilot --add-dir /tmp/ --add-dir /tmp/gh-aw/ --add-dir /tmp/gh-aw/agent/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-tool 'shell(cat)' --allow-tool 'shell(grep)' --allow-tool 'shell(head)' --allow-tool 'shell(jq)' --allow-tool 'shell(ls)' --allow-tool 'shell(tail)' --allow-tool 'shell(wc)' --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$COPILOT_CLI_INSTRUCTION"${GH_AW_MODEL_DETECTION_COPILOT:+ --model "$GH_AW_MODEL_DETECTION_COPILOT"} 2>&1 | tee /tmp/gh-aw/threat-detection/detection.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
|
||||
GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
|
||||
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
|
||||
GITHUB_HEAD_REF: ${{ github.head_ref }}
|
||||
GITHUB_REF_NAME: ${{ github.ref_name }}
|
||||
GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
|
||||
GITHUB_WORKSPACE: ${{ github.workspace }}
|
||||
XDG_CONFIG_HOME: /home/runner
|
||||
- name: Parse threat detection results
|
||||
id: parse_results
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
with:
|
||||
script: |
|
||||
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
|
||||
setupGlobals(core, github, context, exec, io);
|
||||
const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
|
||||
await main();
|
||||
- name: Upload threat detection log
|
||||
if: always()
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
with:
|
||||
name: threat-detection.log
|
||||
path: /tmp/gh-aw/threat-detection/detection.log
|
||||
if-no-files-found: ignore
|
||||
|
||||
safe_outputs:
|
||||
needs:
|
||||
- activation
|
||||
- agent
|
||||
- detection
|
||||
if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.detection.outputs.success == 'true')
|
||||
if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.agent.outputs.detection_success == 'true')
|
||||
runs-on: ubuntu-slim
|
||||
permissions:
|
||||
contents: write
|
||||
|
|
@ -1011,33 +1095,40 @@ jobs:
|
|||
pull-requests: write
|
||||
timeout-minutes: 15
|
||||
env:
|
||||
GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/build-warning-fixer"
|
||||
GH_AW_ENGINE_ID: "copilot"
|
||||
GH_AW_WORKFLOW_ID: "build-warning-fixer"
|
||||
GH_AW_WORKFLOW_NAME: "Build Warning Fixer"
|
||||
outputs:
|
||||
code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}
|
||||
code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}
|
||||
create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }}
|
||||
create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }}
|
||||
created_pr_number: ${{ steps.process_safe_outputs.outputs.created_pr_number }}
|
||||
created_pr_url: ${{ steps.process_safe_outputs.outputs.created_pr_url }}
|
||||
process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }}
|
||||
process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
|
||||
steps:
|
||||
- name: Setup Scripts
|
||||
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
|
||||
uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
|
||||
with:
|
||||
destination: /opt/gh-aw/actions
|
||||
- name: Download agent output artifact
|
||||
id: download-agent-output
|
||||
continue-on-error: true
|
||||
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
|
||||
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
|
||||
with:
|
||||
name: agent-output
|
||||
path: /tmp/gh-aw/safeoutputs/
|
||||
- name: Setup agent output environment variable
|
||||
if: steps.download-agent-output.outcome == 'success'
|
||||
run: |
|
||||
mkdir -p /tmp/gh-aw/safeoutputs/
|
||||
find "/tmp/gh-aw/safeoutputs/" -type f -print
|
||||
echo "GH_AW_AGENT_OUTPUT=/tmp/gh-aw/safeoutputs/agent_output.json" >> "$GITHUB_ENV"
|
||||
- name: Download patch artifact
|
||||
continue-on-error: true
|
||||
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
|
||||
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
|
||||
with:
|
||||
name: agent-artifacts
|
||||
path: /tmp/gh-aw/
|
||||
|
|
@ -1045,6 +1136,7 @@ jobs:
|
|||
if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (contains(needs.agent.outputs.output_types, 'create_pull_request'))
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
with:
|
||||
ref: ${{ github.base_ref || github.event.pull_request.base.ref || github.ref_name || github.event.repository.default_branch }}
|
||||
token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
|
||||
persist-credentials: false
|
||||
fetch-depth: 1
|
||||
|
|
@ -1057,6 +1149,7 @@ jobs:
|
|||
run: |
|
||||
git config --global user.email "github-actions[bot]@users.noreply.github.com"
|
||||
git config --global user.name "github-actions[bot]"
|
||||
git config --global am.keepcr true
|
||||
# Re-authenticate git with GitHub token
|
||||
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
|
||||
git remote set-url origin "https://x-access-token:${GIT_TOKEN}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
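The re-authentication step above strips the scheme from the server URL (`${SERVER_URL#https://}`) and rebuilds the remote using the `x-access-token` credential form. A minimal Python sketch of that same transformation (the function name and sample token are illustrative, not part of the workflow):

```python
def authenticated_remote(server_url: str, repo: str, token: str) -> str:
    # Mirrors the shell parameter expansion ${SERVER_URL#https://}:
    # drop the scheme, then embed the token as the x-access-token credential.
    host = server_url.removeprefix("https://")
    return f"https://x-access-token:{token}@{host}/{repo}.git"

print(authenticated_remote("https://github.com", "Z3Prover/z3", "TOKEN"))
# → https://x-access-token:TOKEN@github.com/Z3Prover/z3.git
```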
@@ -1066,7 +1159,11 @@ jobs:
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_pull_request\":{\"base_branch\":\"${{ github.ref_name }}\",\"if_no_changes\":\"ignore\",\"max\":1,\"max_patch_size\":1024},\"missing_data\":{},\"missing_tool\":{}}"
GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_API_URL: ${{ github.api_url }}
GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_pull_request\":{\"if_no_changes\":\"ignore\",\"max\":1,\"max_patch_size\":1024,\"protected_files\":[\"package.json\",\"bun.lockb\",\"bunfig.toml\",\"deno.json\",\"deno.jsonc\",\"deno.lock\",\"global.json\",\"NuGet.Config\",\"Directory.Packages.props\",\"mix.exs\",\"mix.lock\",\"go.mod\",\"go.sum\",\"stack.yaml\",\"stack.yaml.lock\",\"pom.xml\",\"build.gradle\",\"build.gradle.kts\",\"settings.gradle\",\"settings.gradle.kts\",\"gradle.properties\",\"package-lock.json\",\"yarn.lock\",\"pnpm-lock.yaml\",\"npm-shrinkwrap.json\",\"requirements.txt\",\"Pipfile\",\"Pipfile.lock\",\"pyproject.toml\",\"setup.py\",\"setup.cfg\",\"Gemfile\",\"Gemfile.lock\",\"uv.lock\",\"AGENTS.md\"],\"protected_path_prefixes\":[\".github/\",\".agents/\"]},\"missing_data\":{},\"missing_tool\":{}}"
GH_AW_CI_TRIGGER_TOKEN: ${{ secrets.GH_AW_CI_TRIGGER_TOKEN }}
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
script: |
@@ -1074,4 +1171,11 @@ jobs:
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/safe_output_handler_manager.cjs');
await main();
- name: Upload safe output items manifest
if: always()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: safe-output-items
path: /tmp/safe-output-items.jsonl
if-no-files-found: warn
2 .github/workflows/build-warning-fixer.md vendored
@@ -14,6 +14,8 @@ safe-outputs:
if-no-changes: ignore
missing-tool:
create-issue: true
noop:
report-as-issue: false
timeout-minutes: 60
---
2 .github/workflows/build-z3-cache.yml vendored
@@ -45,7 +45,7 @@ jobs:
- name: Restore or create cache
id: cache-z3
uses: actions/cache@v5.0.3
uses: actions/cache@v5.0.4
with:
path: |
build/z3
102 .github/workflows/ci.yml vendored
@@ -52,9 +52,9 @@ jobs:
run: |
set -e
cd build
make -j3
make -j3 examples
make -j3 test-z3
make -j$(nproc)
make -j$(nproc) examples
make -j$(nproc) test-z3
cd ..
- name: Run unit tests
@@ -171,9 +171,9 @@ jobs:
set -e
cd build
eval `opam config env`
make -j3
make -j3 examples
make -j3 test-z3
make -j$(nproc)
make -j$(nproc) examples
make -j$(nproc) test-z3
cd ..
- name: Install Z3 OCaml package
@@ -226,9 +226,9 @@ jobs:
set -e
cd build
eval `opam config env`
make -j3
make -j3 examples
make -j3 test-z3
make -j$(nproc)
make -j$(nproc) examples
make -j$(nproc) test-z3
cd ..
- name: Install Z3 OCaml package
@@ -239,8 +239,8 @@ jobs:
set -e
cd build
eval `opam config env`
make -j3
make -j3 _ex_ml_example_post_install
make -j$(nproc)
make -j$(nproc) _ex_ml_example_post_install
./ml_example_static.byte
./ml_example_static_custom.byte
./ml_example_static
@@ -402,9 +402,10 @@ jobs:
run: |
set -e
cd build
make -j3
make -j3 examples
make -j3 test-z3
JOBS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || sysctl -n hw.ncpu || echo 1)
make -j"$JOBS"
make -j"$JOBS" examples
make -j"$JOBS" test-z3
./cpp_example
./c_example
cd ..
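The `JOBS=` line above replaces a hard-coded `-j3` with a fallback chain — `getconf _NPROCESSORS_ONLN`, then `sysctl -n hw.ncpu` (macOS), then `1` — so the build uses all host cores. A minimal Python sketch of the same logic (function name illustrative):

```python
import os

def parallel_jobs() -> int:
    # os.cpu_count() covers what getconf/sysctl probe for on Linux and macOS;
    # it can return None in exotic environments, hence the `or 1` fallback,
    # matching the trailing `|| echo 1` in the shell chain.
    return os.cpu_count() or 1

print(parallel_jobs())  # the value passed to `make -j`
```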
@@ -415,6 +416,79 @@ jobs:
- name: Run regressions
run: python z3test/scripts/test_benchmarks.py build/z3 z3test/regressions/smt2
- name: Validate JNI library architecture matches host
run: |
echo "Checking libz3java.dylib architecture..."
ARCH=$(lipo -archs build/libz3java.dylib)
HOST_ARCH=$(uname -m)
echo "libz3java.dylib arch: $ARCH | host arch: $HOST_ARCH"
if [ "$ARCH" != "$HOST_ARCH" ]; then
echo "ERROR: libz3java.dylib has arch '$ARCH' but host is '$HOST_ARCH'"
exit 1
fi
echo "OK: libz3java.dylib correctly built for $HOST_ARCH"
# ============================================================================
# macOS JNI cross-compilation validation (ARM64 host -> x86_64 target)
# ============================================================================
macos-jni-cross-compile:
name: "MacOS JNI cross-compile (ARM64 -> x64) architecture validation"
runs-on: macos-15
timeout-minutes: 90
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Configure (cross-compile ARM64 host -> x86_64 target)
run: |
CXXFLAGS="-arch x86_64" CFLAGS="-arch x86_64" LDFLAGS="-arch x86_64" \
python scripts/mk_make.py --java --arm64=false
- name: Build
run: |
set -e
cd build
NPROC=$(getconf _NPROCESSORS_ONLN 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 1)
make -j"$NPROC" libz3java.dylib
cd ..
- name: Validate libz3java.dylib is x86_64
run: |
echo "Checking libz3java.dylib architecture..."
ARCH=$(lipo -archs build/libz3java.dylib)
echo "libz3java.dylib architecture: $ARCH"
if [ "$ARCH" != "x86_64" ]; then
echo "ERROR: Expected x86_64 (cross-compiled target), got: $ARCH"
echo "This is the regression fixed in: JNI bindings use wrong architecture in macOS cross-compilation"
exit 1
fi
echo "OK: libz3java.dylib correctly built for x86_64 target on ARM64 host"
# ============================================================================
# Python script unit tests (build-script logic validation)
# ============================================================================
python-script-tests:
name: "Python build-script unit tests"
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- name: Checkout code
uses: actions/checkout@v6.0.2
- name: Setup Python
uses: actions/setup-python@v6
with:
python-version: '3.x'
- name: Run Python script unit tests
working-directory: ${{ github.workspace }}
run: python -m unittest discover -s scripts/tests -p "test_*.py" -v
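The discovery command above runs every `scripts/tests/test_*.py` module. A minimal hypothetical test file matching that naming pattern is sketched below — the helper `target_arch` and its behavior are illustrative only (in the spirit of the cross-compilation check above), not actual contents of the Z3 repository:

```python
import unittest

def target_arch(arm64_flag: bool) -> str:
    # Hypothetical build-script helper: with --arm64=false on macOS,
    # the build must target x86_64 even when the host is ARM64.
    return "arm64" if arm64_flag else "x86_64"

class TestArchFlags(unittest.TestCase):
    def test_cross_compile_targets_x86_64(self):
        self.assertEqual(target_arch(arm64_flag=False), "x86_64")

    def test_native_arm64_build(self):
        self.assertEqual(target_arch(arm64_flag=True), "arm64")

if __name__ == "__main__":
    # exit=False keeps the module importable by `unittest discover`
    # and by interactive runs alike.
    unittest.main(exit=False)
```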
# ============================================================================
# macOS CMake Builds
# ============================================================================
601 .github/workflows/code-conventions-analyzer.lock.yml generated vendored
@@ -13,7 +13,7 @@
# \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
# \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
#
# This file was automatically generated by gh-aw (v0.45.6). DO NOT EDIT.
# This file was automatically generated by gh-aw (v0.57.2). DO NOT EDIT.
#
# To update this file, edit the corresponding .md file and run:
# gh aw compile
@@ -23,7 +23,7 @@
#
# Analyzes Z3 codebase for consistent coding conventions and opportunities to use modern C++ features
#
# gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"6d7361c4c87b89662d96d40f58300649076c6abb8614cbc7e3e37bc06baa057a"}
# gh-aw-metadata: {"schema_version":"v2","frontmatter_hash":"5314f869129082f4b6c07bda77b7fa3201da3828ec66262697c72928d1eab973","compiler_version":"v0.57.2","strict":true}
name: "Code Conventions Analyzer"
"on":
@@ -47,19 +47,51 @@ jobs:
outputs:
comment_id: ""
comment_repo: ""
model: ${{ steps.generate_aw_info.outputs.model }}
secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
with:
destination: /opt/gh-aw/actions
- name: Generate agentic run info
id: generate_aw_info
env:
GH_AW_INFO_ENGINE_ID: "copilot"
GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_INFO_VERSION: ""
GH_AW_INFO_AGENT_VERSION: "latest"
GH_AW_INFO_CLI_VERSION: "v0.57.2"
GH_AW_INFO_WORKFLOW_NAME: "Code Conventions Analyzer"
GH_AW_INFO_EXPERIMENTAL: "false"
GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"
GH_AW_INFO_STAGED: "false"
GH_AW_INFO_ALLOWED_DOMAINS: '["defaults"]'
GH_AW_INFO_FIREWALL_ENABLED: "true"
GH_AW_INFO_AWF_VERSION: "v0.23.0"
GH_AW_INFO_AWMG_VERSION: ""
GH_AW_INFO_FIREWALL_TYPE: "squid"
GH_AW_COMPILED_STRICT: "true"
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { main } = require('/opt/gh-aw/actions/generate_aw_info.cjs');
await main(core, context);
- name: Validate COPILOT_GITHUB_TOKEN secret
id: validate-secret
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
env:
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
- name: Checkout .github and .agents folders
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
sparse-checkout: |
.github
.agents
sparse-checkout-cone-mode: true
fetch-depth: 1
persist-credentials: false
- name: Check workflow file timestamps
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
@@ -84,42 +116,19 @@ jobs:
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
run: |
bash /opt/gh-aw/actions/create_prompt_first.sh
cat << 'GH_AW_PROMPT_EOF' > "$GH_AW_PROMPT"
{
cat << 'GH_AW_PROMPT_EOF'
<system>
GH_AW_PROMPT_EOF
cat "/opt/gh-aw/prompts/xpia.md" >> "$GH_AW_PROMPT"
cat "/opt/gh-aw/prompts/temp_folder_prompt.md" >> "$GH_AW_PROMPT"
cat "/opt/gh-aw/prompts/markdown.md" >> "$GH_AW_PROMPT"
cat "/opt/gh-aw/prompts/cache_memory_prompt.md" >> "$GH_AW_PROMPT"
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
<safe-outputs>
<description>GitHub API Access Instructions</description>
<important>
The gh CLI is NOT authenticated. Do NOT use gh commands for GitHub operations.
</important>
<instructions>
To create or modify GitHub resources (issues, discussions, pull requests, etc.), you MUST call the appropriate safe output tool. Simply writing content will NOT work - the workflow requires actual tool calls.
Temporary IDs: Some safe output tools support a temporary ID field (usually named temporary_id) so you can reference newly-created items elsewhere in the SAME agent output (for example, using #aw_abc1 in a later body).
**IMPORTANT - temporary_id format rules:**
- If you DON'T need to reference the item later, OMIT the temporary_id field entirely (it will be auto-generated if needed)
- If you DO need cross-references/chaining, you MUST match this EXACT validation regex: /^aw_[A-Za-z0-9]{3,8}$/i
- Format: aw_ prefix followed by 3 to 8 alphanumeric characters (A-Z, a-z, 0-9, case-insensitive)
- Valid alphanumeric characters: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
- INVALID examples: aw_ab (too short), aw_123456789 (too long), aw_test-id (contains hyphen), aw_id_123 (contains underscore)
- VALID examples: aw_abc, aw_abc1, aw_Test123, aw_A1B2C3D4, aw_12345678
- To generate valid IDs: use 3-8 random alphanumeric characters or omit the field to let the system auto-generate
Do NOT invent other aw_* formats — downstream steps will reject them with validation errors matching against /^aw_[A-Za-z0-9]{3,8}$/i.
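The temporary_id rules in the prompt text above are mechanically checkable. A small sketch validating the documented valid/invalid examples against the stated regex (note that the updated tool schema later in this diff widens the limit to `^aw_[A-Za-z0-9]{3,12}$`; this sketch uses the 3-8 form quoted here):

```python
import re

# Regex exactly as documented in the prompt text above.
TEMP_ID = re.compile(r"^aw_[A-Za-z0-9]{3,8}$", re.IGNORECASE)

valid = ["aw_abc", "aw_abc1", "aw_Test123", "aw_A1B2C3D4", "aw_12345678"]
invalid = ["aw_ab", "aw_123456789", "aw_test-id", "aw_id_123"]

for tid in valid:
    assert TEMP_ID.match(tid), f"{tid} should be accepted"
for tid in invalid:
    assert not TEMP_ID.match(tid), f"{tid} should be rejected"
print("all documented temporary_id examples behave as stated")
```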
|
||||
|
||||
Discover available tools from the safeoutputs MCP server.
|
||||
|
||||
**Critical**: Tool calls write structured data that downstream jobs process. Without tool calls, follow-up actions will be skipped.
|
||||
|
||||
**Note**: If you made no other safe output tool calls during this workflow execution, call the "noop" tool to provide a status message indicating completion or that no actions were needed.
|
||||
</instructions>
|
||||
</safe-outputs>
|
||||
cat "/opt/gh-aw/prompts/xpia.md"
|
||||
cat "/opt/gh-aw/prompts/temp_folder_prompt.md"
|
||||
cat "/opt/gh-aw/prompts/markdown.md"
|
||||
cat "/opt/gh-aw/prompts/cache_memory_prompt.md"
|
||||
cat "/opt/gh-aw/prompts/safe_outputs_prompt.md"
|
||||
cat << 'GH_AW_PROMPT_EOF'
|
||||
<safe-output-tools>
|
||||
Tools: create_issue, create_discussion, missing_tool, missing_data, noop
|
||||
</safe-output-tools>
|
||||
<github-context>
|
||||
The following GitHub context information is available for this workflow:
|
||||
{{#if __GH_AW_GITHUB_ACTOR__ }}
|
||||
|
|
@ -149,12 +158,13 @@ jobs:
|
|||
</github-context>
|
||||
|
||||
GH_AW_PROMPT_EOF
|
||||
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
|
||||
cat << 'GH_AW_PROMPT_EOF'
|
||||
</system>
|
||||
GH_AW_PROMPT_EOF
|
||||
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
|
||||
cat << 'GH_AW_PROMPT_EOF'
|
||||
{{#runtime-import .github/workflows/code-conventions-analyzer.md}}
|
||||
GH_AW_PROMPT_EOF
|
||||
} > "$GH_AW_PROMPT"
|
||||
- name: Interpolate variables and render templates
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
env:
|
||||
|
|
@ -180,8 +190,6 @@ jobs:
|
|||
GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
|
||||
GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
|
||||
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
|
||||
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: ${{ needs.pre_activation.outputs.activated }}
|
||||
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: ${{ needs.pre_activation.outputs.matched_command }}
|
||||
with:
|
||||
script: |
|
||||
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
|
||||
|
|
@ -203,9 +211,7 @@ jobs:
|
|||
GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER,
|
||||
GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,
|
||||
GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID,
|
||||
GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE,
|
||||
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED,
|
||||
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND
|
||||
GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE
|
||||
}
|
||||
});
|
||||
- name: Validate prompt placeholders
|
||||
|
|
@ -216,12 +222,14 @@ jobs:
|
|||
env:
|
||||
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
|
||||
run: bash /opt/gh-aw/actions/print_prompt_summary.sh
|
||||
- name: Upload prompt artifact
|
||||
- name: Upload activation artifact
|
||||
if: success()
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: prompt
|
||||
path: /tmp/gh-aw/aw-prompts/prompt.txt
|
||||
name: activation
|
||||
path: |
|
||||
/tmp/gh-aw/aw_info.json
|
||||
/tmp/gh-aw/aw-prompts/prompt.txt
|
||||
retention-days: 1
|
||||
|
||||
agent:
|
||||
|
|
@ -242,14 +250,16 @@ jobs:
|
|||
GH_AW_WORKFLOW_ID_SANITIZED: codeconventionsanalyzer
|
||||
outputs:
|
||||
checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }}
|
||||
detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
|
||||
detection_success: ${{ steps.detection_conclusion.outputs.success }}
|
||||
has_patch: ${{ steps.collect_output.outputs.has_patch }}
|
||||
model: ${{ steps.generate_aw_info.outputs.model }}
|
||||
inference_access_error: ${{ steps.detect-inference-error.outputs.inference_access_error || 'false' }}
|
||||
model: ${{ needs.activation.outputs.model }}
|
||||
output: ${{ steps.collect_output.outputs.output }}
|
||||
output_types: ${{ steps.collect_output.outputs.output_types }}
|
||||
secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
|
||||
steps:
|
||||
- name: Setup Scripts
|
||||
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
|
||||
uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
|
||||
with:
|
||||
destination: /opt/gh-aw/actions
|
||||
- name: Checkout repository
|
||||
|
|
@ -262,7 +272,7 @@ jobs:
|
|||
- name: Create cache-memory directory
|
||||
run: bash /opt/gh-aw/actions/create_cache_memory_dir.sh
|
||||
- name: Restore cache-memory file share data
|
||||
uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
|
||||
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
|
||||
with:
|
||||
key: memory-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }}
|
||||
path: /tmp/gh-aw/cache-memory
|
||||
|
|
@ -275,6 +285,7 @@ jobs:
|
|||
run: |
|
||||
git config --global user.email "github-actions[bot]@users.noreply.github.com"
|
||||
git config --global user.name "github-actions[bot]"
|
||||
git config --global am.keepcr true
|
||||
# Re-authenticate git with GitHub token
|
||||
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
|
||||
git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
|
||||
|
|
@ -282,7 +293,7 @@ jobs:
|
|||
- name: Checkout PR branch
|
||||
id: checkout-pr
|
||||
if: |
|
||||
github.event.pull_request
|
||||
(github.event.pull_request) || (github.event.issue.pull_request)
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
|
||||
|
|
@ -293,59 +304,10 @@ jobs:
|
|||
setupGlobals(core, github, context, exec, io);
|
||||
const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
|
||||
await main();
|
||||
- name: Generate agentic run info
|
||||
id: generate_aw_info
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
with:
|
||||
script: |
|
||||
const fs = require('fs');
|
||||
|
||||
const awInfo = {
|
||||
engine_id: "copilot",
|
||||
engine_name: "GitHub Copilot CLI",
|
||||
model: process.env.GH_AW_MODEL_AGENT_COPILOT || "",
|
||||
version: "",
|
||||
agent_version: "0.0.410",
|
||||
cli_version: "v0.45.6",
|
||||
workflow_name: "Code Conventions Analyzer",
|
||||
experimental: false,
|
||||
supports_tools_allowlist: true,
|
||||
run_id: context.runId,
|
||||
run_number: context.runNumber,
|
||||
run_attempt: process.env.GITHUB_RUN_ATTEMPT,
|
||||
repository: context.repo.owner + '/' + context.repo.repo,
|
||||
ref: context.ref,
|
||||
sha: context.sha,
|
||||
actor: context.actor,
|
||||
event_name: context.eventName,
|
||||
staged: false,
|
||||
allowed_domains: ["defaults"],
|
||||
firewall_enabled: true,
|
||||
awf_version: "v0.19.1",
|
||||
awmg_version: "v0.1.4",
|
||||
steps: {
|
||||
firewall: "squid"
|
||||
},
|
||||
created_at: new Date().toISOString()
|
||||
};
|
||||
|
||||
// Write to /tmp/gh-aw directory to avoid inclusion in PR
|
||||
const tmpPath = '/tmp/gh-aw/aw_info.json';
|
||||
fs.writeFileSync(tmpPath, JSON.stringify(awInfo, null, 2));
|
||||
console.log('Generated aw_info.json at:', tmpPath);
|
||||
console.log(JSON.stringify(awInfo, null, 2));
|
||||
|
||||
// Set model as output for reuse in other steps/jobs
|
||||
core.setOutput('model', awInfo.model);
|
||||
- name: Validate COPILOT_GITHUB_TOKEN secret
|
||||
id: validate-secret
|
||||
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
|
||||
env:
|
||||
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
|
||||
- name: Install GitHub Copilot CLI
|
||||
run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
|
||||
run: /opt/gh-aw/actions/install_copilot_cli.sh latest
|
||||
- name: Install awf binary
|
||||
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.19.1
|
||||
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
|
||||
- name: Determine automatic lockdown mode for GitHub MCP Server
|
||||
id: determine-automatic-lockdown
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
|
|
@ -357,7 +319,7 @@ jobs:
|
|||
const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
|
||||
await determineAutomaticLockdown(github, context, core);
|
||||
- name: Download container images
|
||||
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.19.1 ghcr.io/github/gh-aw-firewall/squid:0.19.1 ghcr.io/github/gh-aw-mcpg:v0.1.4 ghcr.io/github/github-mcp-server:v0.30.3 node:lts-alpine
|
||||
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.32.0 node:lts-alpine
|
||||
- name: Write Safe Outputs Config
|
||||
run: |
|
||||
mkdir -p /opt/gh-aw/safeoutputs
|
||||
|
|
@ -369,7 +331,7 @@ jobs:
|
|||
cat > /opt/gh-aw/safeoutputs/tools.json << 'GH_AW_SAFE_OUTPUTS_TOOLS_EOF'
|
||||
[
|
||||
{
|
||||
"description": "Create a new GitHub issue for tracking bugs, feature requests, or tasks. Use this for actionable work items that need assignment, labeling, and status tracking. For reports, announcements, or status updates that don't require task tracking, use create_discussion instead. CONSTRAINTS: Maximum 5 issue(s) can be created. Title will be prefixed with \"[Conventions] \". Labels [code-quality automated] will be automatically added.",
|
||||
"description": "Create a new GitHub issue for tracking bugs, feature requests, or tasks. Use this for actionable work items that need assignment, labeling, and status tracking. For reports, announcements, or status updates that don't require task tracking, use create_discussion instead. CONSTRAINTS: Maximum 5 issue(s) can be created. Title will be prefixed with \"[Conventions] \". Labels [\"code-quality\" \"automated\"] will be automatically added.",
|
||||
"inputSchema": {
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
|
|
@@ -377,6 +339,10 @@ jobs:
 "description": "Detailed issue description in Markdown. Do NOT repeat the title as a heading since it already appears as the issue's h1. Include context, reproduction steps, or acceptance criteria as appropriate.",
 "type": "string"
 },
+"integrity": {
+"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+"type": "string"
+},
 "labels": {
 "description": "Labels to categorize the issue (e.g., 'bug', 'enhancement'). Labels must exist in the repository.",
 "items": {
@@ -391,9 +357,13 @@ jobs:
 "string"
 ]
 },
+"secrecy": {
+"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+"type": "string"
+},
 "temporary_id": {
-"description": "Unique temporary identifier for referencing this issue before it's created. Format: 'aw_' followed by 3 to 8 alphanumeric characters (e.g., 'aw_abc1', 'aw_Test123'). Use '#aw_ID' in body text to reference other issues by their temporary_id; these are replaced with actual issue numbers after creation.",
-"pattern": "^aw_[A-Za-z0-9]{3,8}$",
+"description": "Unique temporary identifier for referencing this issue before it's created. Format: 'aw_' followed by 3 to 12 alphanumeric characters (e.g., 'aw_abc1', 'aw_Test123'). Use '#aw_ID' in body text to reference other issues by their temporary_id; these are replaced with actual issue numbers after creation.",
+"pattern": "^aw_[A-Za-z0-9]{3,12}$",
 "type": "string"
 },
 "title": {
@@ -422,6 +392,14 @@ jobs:
 "description": "Discussion category by name (e.g., 'General'), slug (e.g., 'general'), or ID. If omitted, uses the first available category. Category must exist in the repository.",
 "type": "string"
 },
+"integrity": {
+"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+"type": "string"
+},
+"secrecy": {
+"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+"type": "string"
+},
 "title": {
 "description": "Concise discussion title summarizing the topic. The title appears as the main heading, so keep it brief and descriptive.",
 "type": "string"
@@ -444,10 +422,18 @@ jobs:
 "description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
 "type": "string"
 },
+"integrity": {
+"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+"type": "string"
+},
 "reason": {
 "description": "Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).",
 "type": "string"
 },
+"secrecy": {
+"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+"type": "string"
+},
 "tool": {
 "description": "Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.",
 "type": "string"
@@ -465,9 +451,17 @@ jobs:
 "inputSchema": {
 "additionalProperties": false,
 "properties": {
+"integrity": {
+"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+"type": "string"
+},
 "message": {
 "description": "Status or completion message to log. Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').",
 "type": "string"
 },
+"secrecy": {
+"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+"type": "string"
+}
 },
 "required": [
@@ -494,9 +488,17 @@ jobs:
 "description": "Type or description of the missing data or information (max 128 characters). Be specific about what data is needed.",
 "type": "string"
 },
+"integrity": {
+"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+"type": "string"
+},
 "reason": {
 "description": "Explanation of why this data is needed to complete the task (max 256 characters).",
 "type": "string"
 },
+"secrecy": {
+"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+"type": "string"
+}
 },
 "required": [],
@@ -567,6 +569,31 @@ jobs:
 }
 }
 },
+"missing_data": {
+"defaultMax": 20,
+"fields": {
+"alternatives": {
+"type": "string",
+"sanitize": true,
+"maxLength": 256
+},
+"context": {
+"type": "string",
+"sanitize": true,
+"maxLength": 256
+},
+"data_type": {
+"type": "string",
+"sanitize": true,
+"maxLength": 128
+},
+"reason": {
+"type": "string",
+"sanitize": true,
+"maxLength": 256
+}
+}
+},
 "missing_tool": {
 "defaultMax": 20,
 "fields": {
@@ -659,10 +686,11 @@ jobs:
 export MCP_GATEWAY_API_KEY
 export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads"
 mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}"
+export MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD="524288"
 export DEBUG="*"
 
 export GH_AW_ENGINE="copilot"
-export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.4'
+export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.8'
 
 mkdir -p /home/runner/.copilot
 cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh
@@ -670,7 +698,7 @@ jobs:
 "mcpServers": {
 "github": {
 "type": "stdio",
-"container": "ghcr.io/github/github-mcp-server:v0.30.3",
+"container": "ghcr.io/github/github-mcp-server:v0.32.0",
 "env": {
 "GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
 "GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",
@@ -694,17 +722,11 @@ jobs:
 }
 }
 GH_AW_MCP_CONFIG_EOF
-- name: Generate workflow overview
-uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
+- name: Download activation artifact
+uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
 with:
-script: |
-const { generateWorkflowOverview } = require('/opt/gh-aw/actions/generate_workflow_overview.cjs');
-await generateWorkflowOverview(core);
-- name: Download prompt artifact
-uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
-with:
-name: prompt
-path: /tmp/gh-aw/aw-prompts
+name: activation
+path: /tmp/gh-aw
 - name: Clean git credentials
 run: bash /opt/gh-aw/actions/clean_git_credentials.sh
 - name: Execute GitHub Copilot CLI
@@ -732,20 +754,37 @@ jobs:
 timeout-minutes: 20
 run: |
 set -o pipefail
-sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.19.1 --skip-pull \
--- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool github --allow-tool safeoutputs --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(clang-format --version)'\'' --allow-tool '\''shell(date)'\'' --allow-tool '\''shell(echo)'\'' --allow-tool '\''shell(git diff:*)'\'' --allow-tool '\''shell(git log:*)'\'' --allow-tool '\''shell(git show:*)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(pwd)'\'' --allow-tool '\''shell(sort)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(uniq)'\'' --allow-tool '\''shell(wc)'\'' --allow-tool '\''shell(yq)'\'' --allow-tool write --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_AGENT_COPILOT:+ --model "$GH_AW_MODEL_AGENT_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
+touch /tmp/gh-aw/agent-step-summary.md
+# shellcheck disable=SC1003
+sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
+-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool github --allow-tool safeoutputs --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(clang-format --version)'\'' --allow-tool '\''shell(date)'\'' --allow-tool '\''shell(echo)'\'' --allow-tool '\''shell(git diff:*)'\'' --allow-tool '\''shell(git log:*)'\'' --allow-tool '\''shell(git show:*)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(pwd)'\'' --allow-tool '\''shell(sort)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(uniq)'\'' --allow-tool '\''shell(wc)'\'' --allow-tool '\''shell(yq)'\'' --allow-tool write --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
 env:
 COPILOT_AGENT_RUNNER_TYPE: STANDALONE
 COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
+COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
 GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
 GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
+GH_AW_PHASE: agent
 GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
 GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
 GH_AW_VERSION: v0.57.2
 GITHUB_API_URL: ${{ github.api_url }}
 GITHUB_AW: true
 GITHUB_HEAD_REF: ${{ github.head_ref }}
 GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
 GITHUB_REF_NAME: ${{ github.ref_name }}
-GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
+GITHUB_SERVER_URL: ${{ github.server_url }}
+GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
 GITHUB_WORKSPACE: ${{ github.workspace }}
+GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
+GIT_AUTHOR_NAME: github-actions[bot]
+GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
+GIT_COMMITTER_NAME: github-actions[bot]
 XDG_CONFIG_HOME: /home/runner
+- name: Detect inference access error
+id: detect-inference-error
+if: always()
+continue-on-error: true
+run: bash /opt/gh-aw/actions/detect_inference_access_error.sh
 - name: Configure Git credentials
 env:
 REPO_NAME: ${{ github.repository }}
@@ -753,6 +792,7 @@ jobs:
 run: |
 git config --global user.email "github-actions[bot]@users.noreply.github.com"
 git config --global user.name "github-actions[bot]"
+git config --global am.keepcr true
 # Re-authenticate git with GitHub token
 SERVER_URL_STRIPPED="${SERVER_URL#https://}"
 git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
@@ -798,9 +838,12 @@ jobs:
 SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
 SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
 SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+- name: Append agent step summary
+if: always()
+run: bash /opt/gh-aw/actions/append_agent_step_summary.sh
 - name: Upload Safe Outputs
 if: always()
-uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
 with:
 name: safe-output
 path: ${{ env.GH_AW_SAFE_OUTPUTS }}
@@ -822,13 +865,13 @@ jobs:
 await main();
 - name: Upload sanitized agent output
 if: always() && env.GH_AW_AGENT_OUTPUT
-uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
 with:
 name: agent-output
 path: ${{ env.GH_AW_AGENT_OUTPUT }}
 if-no-files-found: warn
 - name: Upload engine output files
-uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
 with:
 name: agent_outputs
 path: |
@@ -871,7 +914,7 @@ jobs:
 echo 'AWF binary not installed, skipping firewall log summary'
 fi
 - name: Upload cache-memory data as artifact
-uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
 if: always()
 with:
 name: cache-memory
@@ -879,23 +922,145 @@ jobs:
 - name: Upload agent artifacts
 if: always()
 continue-on-error: true
-uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
 with:
 name: agent-artifacts
 path: |
 /tmp/gh-aw/aw-prompts/prompt.txt
 /tmp/gh-aw/aw_info.json
 /tmp/gh-aw/mcp-logs/
 /tmp/gh-aw/sandbox/firewall/logs/
 /tmp/gh-aw/agent-stdio.log
 /tmp/gh-aw/agent/
 if-no-files-found: ignore
+# --- Threat Detection (inline) ---
+- name: Check if detection needed
+id: detection_guard
+if: always()
+env:
+OUTPUT_TYPES: ${{ steps.collect_output.outputs.output_types }}
+HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
+run: |
+if [[ -n "$OUTPUT_TYPES" || "$HAS_PATCH" == "true" ]]; then
+echo "run_detection=true" >> "$GITHUB_OUTPUT"
+echo "Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH"
+else
+echo "run_detection=false" >> "$GITHUB_OUTPUT"
+echo "Detection skipped: no agent outputs or patches to analyze"
+fi
+- name: Clear MCP configuration for detection
+if: always() && steps.detection_guard.outputs.run_detection == 'true'
+run: |
+rm -f /tmp/gh-aw/mcp-config/mcp-servers.json
+rm -f /home/runner/.copilot/mcp-config.json
+rm -f "$GITHUB_WORKSPACE/.gemini/settings.json"
+- name: Prepare threat detection files
+if: always() && steps.detection_guard.outputs.run_detection == 'true'
+run: |
+mkdir -p /tmp/gh-aw/threat-detection/aw-prompts
+cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true
+cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true
+for f in /tmp/gh-aw/aw-*.patch; do
+[ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true
+done
+echo "Prepared threat detection files:"
+ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true
+- name: Setup threat detection
+if: always() && steps.detection_guard.outputs.run_detection == 'true'
+uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
+env:
+WORKFLOW_NAME: "Code Conventions Analyzer"
+WORKFLOW_DESCRIPTION: "Analyzes Z3 codebase for consistent coding conventions and opportunities to use modern C++ features"
+HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
+with:
+script: |
+const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
+setupGlobals(core, github, context, exec, io);
+const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
+await main();
+- name: Ensure threat-detection directory and log
+if: always() && steps.detection_guard.outputs.run_detection == 'true'
+run: |
+mkdir -p /tmp/gh-aw/threat-detection
+touch /tmp/gh-aw/threat-detection/detection.log
+- name: Execute GitHub Copilot CLI
+if: always() && steps.detection_guard.outputs.run_detection == 'true'
+id: detection_agentic_execution
+# Copilot CLI tool arguments (sorted):
+# --allow-tool shell(cat)
+# --allow-tool shell(grep)
+# --allow-tool shell(head)
+# --allow-tool shell(jq)
+# --allow-tool shell(ls)
+# --allow-tool shell(tail)
+# --allow-tool shell(wc)
+timeout-minutes: 20
+run: |
+set -o pipefail
+touch /tmp/gh-aw/agent-step-summary.md
+# shellcheck disable=SC1003
+sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
+-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
+env:
+COPILOT_AGENT_RUNNER_TYPE: STANDALONE
+COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
+COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
+GH_AW_PHASE: detection
+GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
+GH_AW_VERSION: v0.57.2
+GITHUB_API_URL: ${{ github.api_url }}
+GITHUB_AW: true
+GITHUB_HEAD_REF: ${{ github.head_ref }}
+GITHUB_REF_NAME: ${{ github.ref_name }}
+GITHUB_SERVER_URL: ${{ github.server_url }}
+GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
+GITHUB_WORKSPACE: ${{ github.workspace }}
+GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
+GIT_AUTHOR_NAME: github-actions[bot]
+GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
+GIT_COMMITTER_NAME: github-actions[bot]
+XDG_CONFIG_HOME: /home/runner
+- name: Parse threat detection results
+id: parse_detection_results
+if: always() && steps.detection_guard.outputs.run_detection == 'true'
+uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
+with:
+script: |
+const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
+setupGlobals(core, github, context, exec, io);
+const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
+await main();
+- name: Upload threat detection log
+if: always() && steps.detection_guard.outputs.run_detection == 'true'
+uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
+with:
+name: threat-detection.log
+path: /tmp/gh-aw/threat-detection/detection.log
+if-no-files-found: ignore
+- name: Set detection conclusion
+id: detection_conclusion
+if: always()
+env:
+RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }}
+DETECTION_SUCCESS: ${{ steps.parse_detection_results.outputs.success }}
+run: |
+if [[ "$RUN_DETECTION" != "true" ]]; then
+echo "conclusion=skipped" >> "$GITHUB_OUTPUT"
+echo "success=true" >> "$GITHUB_OUTPUT"
+echo "Detection was not needed, marking as skipped"
+elif [[ "$DETECTION_SUCCESS" == "true" ]]; then
+echo "conclusion=success" >> "$GITHUB_OUTPUT"
+echo "success=true" >> "$GITHUB_OUTPUT"
+echo "Detection passed successfully"
+else
+echo "conclusion=failure" >> "$GITHUB_OUTPUT"
+echo "success=false" >> "$GITHUB_OUTPUT"
+echo "Detection found issues"
+fi
 
 conclusion:
 needs:
+- activation
 - agent
-- detection
 - safe_outputs
 - update_cache_memory
 if: (always()) && (needs.agent.result != 'skipped')
@@ -904,22 +1069,27 @@ jobs:
 contents: read
 discussions: write
 issues: write
+concurrency:
+group: "gh-aw-conclusion-code-conventions-analyzer"
+cancel-in-progress: false
 outputs:
 noop_message: ${{ steps.noop.outputs.noop_message }}
 tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
 total_count: ${{ steps.missing_tool.outputs.total_count }}
 steps:
 - name: Setup Scripts
-uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
+uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
 with:
 destination: /opt/gh-aw/actions
 - name: Download agent output artifact
 id: download-agent-output
 continue-on-error: true
-uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
+uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
 with:
 name: agent-output
 path: /tmp/gh-aw/safeoutputs/
 - name: Setup agent output environment variable
 if: steps.download-agent-output.outcome == 'success'
 run: |
 mkdir -p /tmp/gh-aw/safeoutputs/
 find "/tmp/gh-aw/safeoutputs/" -type f -print
@@ -929,7 +1099,7 @@ jobs:
 uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
 env:
 GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
-GH_AW_NOOP_MAX: 1
+GH_AW_NOOP_MAX: "1"
 GH_AW_WORKFLOW_NAME: "Code Conventions Analyzer"
 with:
 github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
@@ -962,10 +1132,14 @@ jobs:
 GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
 GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
 GH_AW_WORKFLOW_ID: "code-conventions-analyzer"
-GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.agent.outputs.secret_verification_result }}
+GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }}
 GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}
+GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }}
 GH_AW_CREATE_DISCUSSION_ERRORS: ${{ needs.safe_outputs.outputs.create_discussion_errors }}
 GH_AW_CREATE_DISCUSSION_ERROR_COUNT: ${{ needs.safe_outputs.outputs.create_discussion_error_count }}
 GH_AW_GROUP_REPORTS: "false"
 GH_AW_FAILURE_REPORT_AS_ISSUE: "true"
 GH_AW_TIMEOUT_MINUTES: "20"
 with:
 github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
 script: |
@@ -982,7 +1156,7 @@ jobs:
 GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
 GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
 GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
-GH_AW_NOOP_REPORT_AS_ISSUE: "true"
+GH_AW_NOOP_REPORT_AS_ISSUE: "false"
 with:
 github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
 script: |
@ -991,112 +1165,9 @@ jobs:
|
|||
const { main } = require('/opt/gh-aw/actions/handle_noop_message.cjs');
|
||||
await main();
|
||||
|
||||
detection:
|
||||
needs: agent
|
||||
if: needs.agent.outputs.output_types != '' || needs.agent.outputs.has_patch == 'true'
|
||||
runs-on: ubuntu-latest
|
||||
permissions: {}
|
||||
concurrency:
|
||||
group: "gh-aw-copilot-${{ github.workflow }}"
|
||||
timeout-minutes: 10
|
||||
outputs:
|
||||
success: ${{ steps.parse_results.outputs.success }}
|
||||
steps:
|
||||
- name: Setup Scripts
|
||||
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
|
||||
with:
|
||||
destination: /opt/gh-aw/actions
|
||||
- name: Download agent artifacts
|
||||
continue-on-error: true
|
||||
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
|
||||
with:
|
||||
name: agent-artifacts
|
||||
path: /tmp/gh-aw/threat-detection/
|
||||
- name: Download agent output artifact
|
||||
continue-on-error: true
|
||||
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
|
||||
with:
|
||||
name: agent-output
|
||||
path: /tmp/gh-aw/threat-detection/
|
||||
- name: Echo agent output types
|
||||
env:
|
||||
AGENT_OUTPUT_TYPES: ${{ needs.agent.outputs.output_types }}
|
||||
run: |
|
||||
echo "Agent output-types: $AGENT_OUTPUT_TYPES"
|
||||
- name: Setup threat detection
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
env:
|
||||
WORKFLOW_NAME: "Code Conventions Analyzer"
|
||||
WORKFLOW_DESCRIPTION: "Analyzes Z3 codebase for consistent coding conventions and opportunities to use modern C++ features"
|
||||
HAS_PATCH: ${{ needs.agent.outputs.has_patch }}
|
||||
with:
|
||||
script: |
|
||||
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
|
||||
setupGlobals(core, github, context, exec, io);
|
||||
const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
|
||||
            await main();
      - name: Ensure threat-detection directory and log
        run: |
          mkdir -p /tmp/gh-aw/threat-detection
          touch /tmp/gh-aw/threat-detection/detection.log
      - name: Validate COPILOT_GITHUB_TOKEN secret
        id: validate-secret
        run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
        env:
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
      - name: Install GitHub Copilot CLI
        run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
      - name: Execute GitHub Copilot CLI
        id: agentic_execution
        # Copilot CLI tool arguments (sorted):
        # --allow-tool shell(cat)
        # --allow-tool shell(grep)
        # --allow-tool shell(head)
        # --allow-tool shell(jq)
        # --allow-tool shell(ls)
        # --allow-tool shell(tail)
        # --allow-tool shell(wc)
        timeout-minutes: 20
        run: |
          set -o pipefail
          COPILOT_CLI_INSTRUCTION="$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"
          mkdir -p /tmp/
          mkdir -p /tmp/gh-aw/
          mkdir -p /tmp/gh-aw/agent/
          mkdir -p /tmp/gh-aw/sandbox/agent/logs/
          copilot --add-dir /tmp/ --add-dir /tmp/gh-aw/ --add-dir /tmp/gh-aw/agent/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-tool 'shell(cat)' --allow-tool 'shell(grep)' --allow-tool 'shell(head)' --allow-tool 'shell(jq)' --allow-tool 'shell(ls)' --allow-tool 'shell(tail)' --allow-tool 'shell(wc)' --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$COPILOT_CLI_INSTRUCTION"${GH_AW_MODEL_DETECTION_COPILOT:+ --model "$GH_AW_MODEL_DETECTION_COPILOT"} 2>&1 | tee /tmp/gh-aw/threat-detection/detection.log
        env:
          COPILOT_AGENT_RUNNER_TYPE: STANDALONE
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
          GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
          GITHUB_HEAD_REF: ${{ github.head_ref }}
          GITHUB_REF_NAME: ${{ github.ref_name }}
          GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
          GITHUB_WORKSPACE: ${{ github.workspace }}
          XDG_CONFIG_HOME: /home/runner
      - name: Parse threat detection results
        id: parse_results
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
            await main();
      - name: Upload threat detection log
        if: always()
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: threat-detection.log
          path: /tmp/gh-aw/threat-detection/detection.log
          if-no-files-found: ignore

  safe_outputs:
    needs:
      - agent
      - detection
    if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.detection.outputs.success == 'true')
    needs: agent
    if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.agent.outputs.detection_success == 'true')
    runs-on: ubuntu-slim
    permissions:
      contents: read

@@ -1104,26 +1175,33 @@ jobs:
      issues: write
    timeout-minutes: 15
    env:
      GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/code-conventions-analyzer"
      GH_AW_ENGINE_ID: "copilot"
      GH_AW_WORKFLOW_ID: "code-conventions-analyzer"
      GH_AW_WORKFLOW_NAME: "Code Conventions Analyzer"
    outputs:
      code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}
      code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}
      create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }}
      create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }}
      created_issue_number: ${{ steps.process_safe_outputs.outputs.created_issue_number }}
      created_issue_url: ${{ steps.process_safe_outputs.outputs.created_issue_url }}
      process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }}
      process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
        uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
        with:
          destination: /opt/gh-aw/actions
      - name: Download agent output artifact
        id: download-agent-output
        continue-on-error: true
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
        with:
          name: agent-output
          path: /tmp/gh-aw/safeoutputs/
      - name: Setup agent output environment variable
        if: steps.download-agent-output.outcome == 'success'
        run: |
          mkdir -p /tmp/gh-aw/safeoutputs/
          find "/tmp/gh-aw/safeoutputs/" -type f -print

@@ -1133,6 +1211,9 @@ jobs:
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
          GITHUB_SERVER_URL: ${{ github.server_url }}
          GITHUB_API_URL: ${{ github.api_url }}
          GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_discussion\":{\"category\":\"agentic workflows\",\"close_older_discussions\":true,\"expires\":168,\"fallback_to_issue\":true,\"max\":1,\"title_prefix\":\"Code Conventions Analysis\"},\"create_issue\":{\"labels\":[\"code-quality\",\"automated\"],\"max\":5,\"title_prefix\":\"[Conventions] \"},\"missing_data\":{},\"missing_tool\":{}}"
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}

@@ -1141,27 +1222,45 @@ jobs:
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/safe_output_handler_manager.cjs');
            await main();
      - name: Upload safe output items manifest
        if: always()
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
        with:
          name: safe-output-items
          path: /tmp/safe-output-items.jsonl
          if-no-files-found: warn

  update_cache_memory:
    needs:
      - agent
      - detection
    if: always() && needs.detection.outputs.success == 'true'
    needs: agent
    if: always() && needs.agent.outputs.detection_success == 'true'
    runs-on: ubuntu-latest
    permissions: {}
    env:
      GH_AW_WORKFLOW_ID_SANITIZED: codeconventionsanalyzer
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
        uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
        with:
          destination: /opt/gh-aw/actions
      - name: Download cache-memory artifact (default)
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        id: download_cache_default
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
        continue-on-error: true
        with:
          name: cache-memory
          path: /tmp/gh-aw/cache-memory
      - name: Check if cache-memory folder has content (default)
        id: check_cache_default
        shell: bash
        run: |
          if [ -d "/tmp/gh-aw/cache-memory" ] && [ "$(ls -A /tmp/gh-aw/cache-memory 2>/dev/null)" ]; then
            echo "has_content=true" >> "$GITHUB_OUTPUT"
          else
            echo "has_content=false" >> "$GITHUB_OUTPUT"
          fi
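The non-empty-directory check in the step above relies on `ls -A`, which lists all entries including dotfiles, so a directory containing only hidden files still counts as "has content". A minimal standalone sketch of the same idiom (the function name and temp paths are illustrative, not part of gh-aw):

```shell
#!/usr/bin/env bash
# Emulates the cache-content check from the workflow step above:
# prints "true" if the directory exists and has at least one entry
# (including dotfiles), "false" otherwise.
dir_has_content() {
  local dir="$1"
  if [ -d "$dir" ] && [ "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo "true"
  else
    echo "false"
  fi
}

tmp="$(mktemp -d)"
dir_has_content "$tmp"        # empty directory → false
touch "$tmp/.hidden"
dir_has_content "$tmp"        # a dotfile counts as content → true
rm -rf "$tmp"
```

Using `[ "$(ls -A …)" ]` rather than `ls -1 | wc -l` avoids miscounting filenames containing newlines and keeps the check to a single test.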
      - name: Save cache-memory to cache (default)
        uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
        if: steps.check_cache_default.outputs.has_content == 'true'
        uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
        with:
          key: memory-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }}
          path: /tmp/gh-aw/cache-memory

@@ -27,6 +27,8 @@ safe-outputs:
    close-older-discussions: true
  missing-tool:
    create-issue: true
  noop:
    report-as-issue: false
network: defaults
timeout-minutes: 20
---

563 .github/workflows/code-simplifier.lock.yml generated vendored

@@ -13,7 +13,7 @@
#  \ /\ / (_) | | | | ( | | | | (_) \ V  V /\__ \
#   \/  \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
#
# This file was automatically generated by gh-aw (v0.45.6). DO NOT EDIT.
# This file was automatically generated by gh-aw (v0.57.2). DO NOT EDIT.
#
# To update this file, edit github/gh-aw/.github/workflows/code-simplifier.md@76d37d925abd44fee97379206f105b74b91a285b and run:
#   gh aw compile

@@ -25,7 +25,7 @@
#
# Source: github/gh-aw/.github/workflows/code-simplifier.md@76d37d925abd44fee97379206f105b74b91a285b
#
# gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"ba4361e08cae6f750b8326eb91fd49aa292622523f2a01aaf2051ff7f94a07fb"}
# gh-aw-metadata: {"schema_version":"v2","frontmatter_hash":"6f3bad47dff7f3f86460672a86abd84130d8a7dee19358ef3391e3faf65f4857","compiler_version":"v0.57.2","strict":true}

name: "Code Simplifier"
"on":

@@ -52,19 +52,51 @@ jobs:
    outputs:
      comment_id: ""
      comment_repo: ""
      model: ${{ steps.generate_aw_info.outputs.model }}
      secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
        uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
        with:
          destination: /opt/gh-aw/actions
      - name: Generate agentic run info
        id: generate_aw_info
        env:
          GH_AW_INFO_ENGINE_ID: "copilot"
          GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
          GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
          GH_AW_INFO_VERSION: ""
          GH_AW_INFO_AGENT_VERSION: "latest"
          GH_AW_INFO_CLI_VERSION: "v0.57.2"
          GH_AW_INFO_WORKFLOW_NAME: "Code Simplifier"
          GH_AW_INFO_EXPERIMENTAL: "false"
          GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"
          GH_AW_INFO_STAGED: "false"
          GH_AW_INFO_ALLOWED_DOMAINS: '["defaults"]'
          GH_AW_INFO_FIREWALL_ENABLED: "true"
          GH_AW_INFO_AWF_VERSION: "v0.23.0"
          GH_AW_INFO_AWMG_VERSION: ""
          GH_AW_INFO_FIREWALL_TYPE: "squid"
          GH_AW_COMPILED_STRICT: "true"
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const { main } = require('/opt/gh-aw/actions/generate_aw_info.cjs');
            await main(core, context);
      - name: Validate COPILOT_GITHUB_TOKEN secret
        id: validate-secret
        run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
        env:
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
      - name: Checkout .github and .agents folders
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false
          sparse-checkout: |
            .github
            .agents
          sparse-checkout-cone-mode: true
          fetch-depth: 1
          persist-credentials: false
      - name: Check workflow file timestamps
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:

@@ -89,41 +121,18 @@ jobs:
          GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
        run: |
          bash /opt/gh-aw/actions/create_prompt_first.sh
          cat << 'GH_AW_PROMPT_EOF' > "$GH_AW_PROMPT"
          {
            cat << 'GH_AW_PROMPT_EOF'
          <system>
          GH_AW_PROMPT_EOF
          cat "/opt/gh-aw/prompts/xpia.md" >> "$GH_AW_PROMPT"
          cat "/opt/gh-aw/prompts/temp_folder_prompt.md" >> "$GH_AW_PROMPT"
          cat "/opt/gh-aw/prompts/markdown.md" >> "$GH_AW_PROMPT"
          cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
          <safe-outputs>
          <description>GitHub API Access Instructions</description>
          <important>
          The gh CLI is NOT authenticated. Do NOT use gh commands for GitHub operations.
          </important>
          <instructions>
          To create or modify GitHub resources (issues, discussions, pull requests, etc.), you MUST call the appropriate safe output tool. Simply writing content will NOT work - the workflow requires actual tool calls.

          Temporary IDs: Some safe output tools support a temporary ID field (usually named temporary_id) so you can reference newly-created items elsewhere in the SAME agent output (for example, using #aw_abc1 in a later body).

          **IMPORTANT - temporary_id format rules:**
          - If you DON'T need to reference the item later, OMIT the temporary_id field entirely (it will be auto-generated if needed)
          - If you DO need cross-references/chaining, you MUST match this EXACT validation regex: /^aw_[A-Za-z0-9]{3,8}$/i
          - Format: aw_ prefix followed by 3 to 8 alphanumeric characters (A-Z, a-z, 0-9, case-insensitive)
          - Valid alphanumeric characters: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
          - INVALID examples: aw_ab (too short), aw_123456789 (too long), aw_test-id (contains hyphen), aw_id_123 (contains underscore)
          - VALID examples: aw_abc, aw_abc1, aw_Test123, aw_A1B2C3D4, aw_12345678
          - To generate valid IDs: use 3-8 random alphanumeric characters or omit the field to let the system auto-generate

          Do NOT invent other aw_* formats — downstream steps will reject them with validation errors matching against /^aw_[A-Za-z0-9]{3,8}$/i.

          Discover available tools from the safeoutputs MCP server.

          **Critical**: Tool calls write structured data that downstream jobs process. Without tool calls, follow-up actions will be skipped.

          **Note**: If you made no other safe output tool calls during this workflow execution, call the "noop" tool to provide a status message indicating completion or that no actions were needed.
          </instructions>
          </safe-outputs>
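The temporary_id rules quoted above boil down to one regex. A small standalone validator can be sketched in bash (the function name is illustrative, not a gh-aw API); note that bash's `=~` matching is case-sensitive, so the case-insensitive `/i` flag on the `aw_` prefix has to be spelled out:

```shell
#!/usr/bin/env bash
# Validate a candidate temporary_id against /^aw_[A-Za-z0-9]{3,8}$/i:
# "aw_" prefix (any case) followed by 3-8 alphanumeric characters.
is_valid_temp_id() {
  [[ "$1" =~ ^[Aa][Ww]_[A-Za-z0-9]{3,8}$ ]]
}

for id in aw_abc1 aw_Test123 aw_ab aw_123456789 aw_test-id; do
  if is_valid_temp_id "$id"; then
    echo "$id: valid"
  else
    echo "$id: invalid"
  fi
done
```

The loop reproduces the document's own VALID/INVALID examples: `aw_ab` fails the 3-character minimum, `aw_123456789` exceeds the 8-character maximum, and `aw_test-id` contains a hyphen.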
          cat "/opt/gh-aw/prompts/xpia.md"
          cat "/opt/gh-aw/prompts/temp_folder_prompt.md"
          cat "/opt/gh-aw/prompts/markdown.md"
          cat "/opt/gh-aw/prompts/safe_outputs_prompt.md"
          cat << 'GH_AW_PROMPT_EOF'
          <safe-output-tools>
          Tools: create_issue, missing_tool, missing_data, noop
          </safe-output-tools>
          <github-context>
          The following GitHub context information is available for this workflow:
          {{#if __GH_AW_GITHUB_ACTOR__ }}

@@ -153,12 +162,13 @@ jobs:
          </github-context>

          GH_AW_PROMPT_EOF
          cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
          cat << 'GH_AW_PROMPT_EOF'
          </system>
          GH_AW_PROMPT_EOF
          cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
          cat << 'GH_AW_PROMPT_EOF'
          {{#runtime-import .github/workflows/code-simplifier.md}}
          GH_AW_PROMPT_EOF
          } > "$GH_AW_PROMPT"
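The closing `} > "$GH_AW_PROMPT"` above reflects a refactor from repeated `>> "$GH_AW_PROMPT"` appends to a single command group whose combined stdout is redirected once. A minimal standalone sketch of that pattern (file names and contents here are hypothetical):

```shell
#!/usr/bin/env bash
# Build one output file from several sources with a single redirection
# over a command group, instead of appending after every command.
set -euo pipefail

out="$(mktemp)"
part="$(mktemp)"
echo "imported fragment" > "$part"

{
  cat << 'EOF'
<doc>
EOF
  cat "$part"                # splice in an external file
  cat << 'EOF'
</doc>
EOF
} > "$out"                   # one redirection for the whole group

cat "$out"
rm -f "$out" "$part"
```

Grouping opens the output file once, so the file cannot end up half-written if an intermediate append is skipped, and there is no window where a partially assembled file exists under the final name.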
      - name: Interpolate variables and render templates
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:

@@ -184,7 +194,6 @@ jobs:
          GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
          GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
          GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: ${{ needs.pre_activation.outputs.activated }}
          GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: ${{ needs.pre_activation.outputs.matched_command }}
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');

@@ -204,8 +213,7 @@ jobs:
              GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,
              GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID,
              GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE,
              GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED,
              GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND
              GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED
              }
            });
      - name: Validate prompt placeholders

@@ -216,12 +224,14 @@ jobs:
        env:
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
        run: bash /opt/gh-aw/actions/print_prompt_summary.sh
      - name: Upload prompt artifact
      - name: Upload activation artifact
        if: success()
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
        with:
          name: prompt
          path: /tmp/gh-aw/aw-prompts/prompt.txt
          name: activation
          path: |
            /tmp/gh-aw/aw_info.json
            /tmp/gh-aw/aw-prompts/prompt.txt
          retention-days: 1

  agent:

@@ -245,14 +255,16 @@ jobs:
      GH_AW_WORKFLOW_ID_SANITIZED: codesimplifier
    outputs:
      checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }}
      detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
      detection_success: ${{ steps.detection_conclusion.outputs.success }}
      has_patch: ${{ steps.collect_output.outputs.has_patch }}
      model: ${{ steps.generate_aw_info.outputs.model }}
      inference_access_error: ${{ steps.detect-inference-error.outputs.inference_access_error || 'false' }}
      model: ${{ needs.activation.outputs.model }}
      output: ${{ steps.collect_output.outputs.output }}
      output_types: ${{ steps.collect_output.outputs.output_types }}
      secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
        uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
        with:
          destination: /opt/gh-aw/actions
      - name: Checkout repository

@@ -268,6 +280,7 @@ jobs:
        run: |
          git config --global user.email "github-actions[bot]@users.noreply.github.com"
          git config --global user.name "github-actions[bot]"
          git config --global am.keepcr true
          # Re-authenticate git with GitHub token
          SERVER_URL_STRIPPED="${SERVER_URL#https://}"
          git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"

@@ -275,7 +288,7 @@ jobs:
      - name: Checkout PR branch
        id: checkout-pr
        if: |
          github.event.pull_request
          (github.event.pull_request) || (github.event.issue.pull_request)
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}

@@ -286,59 +299,10 @@ jobs:
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
            await main();
      - name: Generate agentic run info
        id: generate_aw_info
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const fs = require('fs');

            const awInfo = {
              engine_id: "copilot",
              engine_name: "GitHub Copilot CLI",
              model: process.env.GH_AW_MODEL_AGENT_COPILOT || "",
              version: "",
              agent_version: "0.0.410",
              cli_version: "v0.45.6",
              workflow_name: "Code Simplifier",
              experimental: false,
              supports_tools_allowlist: true,
              run_id: context.runId,
              run_number: context.runNumber,
              run_attempt: process.env.GITHUB_RUN_ATTEMPT,
              repository: context.repo.owner + '/' + context.repo.repo,
              ref: context.ref,
              sha: context.sha,
              actor: context.actor,
              event_name: context.eventName,
              staged: false,
              allowed_domains: ["defaults"],
              firewall_enabled: true,
              awf_version: "v0.19.1",
              awmg_version: "v0.1.4",
              steps: {
                firewall: "squid"
              },
              created_at: new Date().toISOString()
            };

            // Write to /tmp/gh-aw directory to avoid inclusion in PR
            const tmpPath = '/tmp/gh-aw/aw_info.json';
            fs.writeFileSync(tmpPath, JSON.stringify(awInfo, null, 2));
            console.log('Generated aw_info.json at:', tmpPath);
            console.log(JSON.stringify(awInfo, null, 2));

            // Set model as output for reuse in other steps/jobs
            core.setOutput('model', awInfo.model);
      - name: Validate COPILOT_GITHUB_TOKEN secret
        id: validate-secret
        run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
        env:
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
      - name: Install GitHub Copilot CLI
        run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
        run: /opt/gh-aw/actions/install_copilot_cli.sh latest
      - name: Install awf binary
        run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.19.1
        run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
      - name: Determine automatic lockdown mode for GitHub MCP Server
        id: determine-automatic-lockdown
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8

@@ -350,7 +314,7 @@ jobs:
            const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
            await determineAutomaticLockdown(github, context, core);
      - name: Download container images
        run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.19.1 ghcr.io/github/gh-aw-firewall/squid:0.19.1 ghcr.io/github/gh-aw-mcpg:v0.1.4 ghcr.io/github/github-mcp-server:v0.30.3 node:lts-alpine
        run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.32.0 node:lts-alpine
      - name: Write Safe Outputs Config
        run: |
          mkdir -p /opt/gh-aw/safeoutputs

@@ -362,7 +326,7 @@ jobs:
          cat > /opt/gh-aw/safeoutputs/tools.json << 'GH_AW_SAFE_OUTPUTS_TOOLS_EOF'
          [
            {
              "description": "Create a new GitHub issue for tracking bugs, feature requests, or tasks. Use this for actionable work items that need assignment, labeling, and status tracking. For reports, announcements, or status updates that don't require task tracking, use create_discussion instead. CONSTRAINTS: Maximum 1 issue(s) can be created. Title will be prefixed with \"[code-simplifier] \". Labels [refactoring code-quality automation] will be automatically added.",
              "description": "Create a new GitHub issue for tracking bugs, feature requests, or tasks. Use this for actionable work items that need assignment, labeling, and status tracking. For reports, announcements, or status updates that don't require task tracking, use create_discussion instead. CONSTRAINTS: Maximum 1 issue(s) can be created. Title will be prefixed with \"[code-simplifier] \". Labels [\"refactoring\" \"code-quality\" \"automation\"] will be automatically added.",
              "inputSchema": {
                "additionalProperties": false,
                "properties": {

@@ -370,6 +334,10 @@ jobs:
                    "description": "Detailed issue description in Markdown. Do NOT repeat the title as a heading since it already appears as the issue's h1. Include context, reproduction steps, or acceptance criteria as appropriate.",
                    "type": "string"
                  },
                  "integrity": {
                    "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
                    "type": "string"
                  },
                  "labels": {
                    "description": "Labels to categorize the issue (e.g., 'bug', 'enhancement'). Labels must exist in the repository.",
                    "items": {

@@ -384,9 +352,13 @@ jobs:
                      "string"
                    ]
                  },
                  "secrecy": {
                    "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
                    "type": "string"
                  },
                  "temporary_id": {
                    "description": "Unique temporary identifier for referencing this issue before it's created. Format: 'aw_' followed by 3 to 8 alphanumeric characters (e.g., 'aw_abc1', 'aw_Test123'). Use '#aw_ID' in body text to reference other issues by their temporary_id; these are replaced with actual issue numbers after creation.",
                    "pattern": "^aw_[A-Za-z0-9]{3,8}$",
                    "description": "Unique temporary identifier for referencing this issue before it's created. Format: 'aw_' followed by 3 to 12 alphanumeric characters (e.g., 'aw_abc1', 'aw_Test123'). Use '#aw_ID' in body text to reference other issues by their temporary_id; these are replaced with actual issue numbers after creation.",
                    "pattern": "^aw_[A-Za-z0-9]{3,12}$",
                    "type": "string"
                  },
                  "title": {

@@ -411,10 +383,18 @@ jobs:
                    "description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
                    "type": "string"
                  },
                  "integrity": {
                    "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
                    "type": "string"
                  },
                  "reason": {
                    "description": "Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).",
                    "type": "string"
                  },
                  "secrecy": {
                    "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
                    "type": "string"
                  },
                  "tool": {
                    "description": "Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.",
                    "type": "string"

@@ -432,9 +412,17 @@ jobs:
              "inputSchema": {
                "additionalProperties": false,
                "properties": {
                  "integrity": {
                    "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
                    "type": "string"
                  },
                  "message": {
                    "description": "Status or completion message to log. Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').",
                    "type": "string"
                  },
                  "secrecy": {
                    "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
                    "type": "string"
                  }
                },
                "required": [

@@ -461,9 +449,17 @@ jobs:
                    "description": "Type or description of the missing data or information (max 128 characters). Be specific about what data is needed.",
                    "type": "string"
                  },
                  "integrity": {
                    "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
                    "type": "string"
                  },
                  "reason": {
                    "description": "Explanation of why this data is needed to complete the task (max 256 characters).",
                    "type": "string"
                  },
                  "secrecy": {
                    "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
                    "type": "string"
                  }
                },
                "required": [],

@@ -508,6 +504,31 @@ jobs:
              }
            }
          },
          "missing_data": {
            "defaultMax": 20,
            "fields": {
              "alternatives": {
                "type": "string",
                "sanitize": true,
                "maxLength": 256
              },
              "context": {
                "type": "string",
                "sanitize": true,
                "maxLength": 256
              },
              "data_type": {
                "type": "string",
                "sanitize": true,
                "maxLength": 128
              },
              "reason": {
                "type": "string",
                "sanitize": true,
                "maxLength": 256
              }
            }
          },
          "missing_tool": {
            "defaultMax": 20,
            "fields": {

@ -600,10 +621,11 @@ jobs:
|
|||
export MCP_GATEWAY_API_KEY
|
||||
export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads"
|
||||
mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}"
|
||||
export MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD="524288"
|
||||
export DEBUG="*"
|
||||
|
||||
export GH_AW_ENGINE="copilot"
|
||||
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.4'
|
||||
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.8'
|
||||
|
||||
mkdir -p /home/runner/.copilot
|
||||
cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh
|
||||
|
|
@ -611,7 +633,7 @@ jobs:
|
|||
"mcpServers": {
|
||||
"github": {
|
||||
"type": "stdio",
|
||||
"container": "ghcr.io/github/github-mcp-server:v0.30.3",
|
||||
"container": "ghcr.io/github/github-mcp-server:v0.32.0",
|
||||
"env": {
|
||||
"GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
|
||||
"GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",
|
||||
|
|
@ -635,17 +657,11 @@ jobs:
|
|||
}
|
||||
}
|
||||
GH_AW_MCP_CONFIG_EOF
|
||||
- name: Generate workflow overview
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
- name: Download activation artifact
|
||||
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
|
||||
with:
|
||||
script: |
|
||||
const { generateWorkflowOverview } = require('/opt/gh-aw/actions/generate_workflow_overview.cjs');
|
||||
await generateWorkflowOverview(core);
|
||||
- name: Download prompt artifact
|
||||
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
|
||||
with:
|
||||
name: prompt
|
||||
path: /tmp/gh-aw/aw-prompts
|
||||
name: activation
|
||||
path: /tmp/gh-aw
|
||||
- name: Clean git credentials
|
||||
run: bash /opt/gh-aw/actions/clean_git_credentials.sh
|
||||
- name: Execute GitHub Copilot CLI
|
||||
|
|
@@ -654,20 +670,37 @@ jobs:
timeout-minutes: 30
run: |
set -o pipefail
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.19.1 --skip-pull \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --allow-all-paths --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_AGENT_COPILOT:+ --model "$GH_AW_MODEL_AGENT_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_PHASE: agent
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
GH_AW_VERSION: v0.57.2
GITHUB_API_URL: ${{ github.api_url }}
GITHUB_AW: true
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
GITHUB_WORKSPACE: ${{ github.workspace }}
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_AUTHOR_NAME: github-actions[bot]
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_COMMITTER_NAME: github-actions[bot]
XDG_CONFIG_HOME: /home/runner
- name: Detect inference access error
id: detect-inference-error
if: always()
continue-on-error: true
run: bash /opt/gh-aw/actions/detect_inference_access_error.sh
- name: Configure Git credentials
env:
REPO_NAME: ${{ github.repository }}
@@ -675,6 +708,7 @@ jobs:
run: |
git config --global user.email "github-actions[bot]@users.noreply.github.com"
git config --global user.name "github-actions[bot]"
git config --global am.keepcr true
# Re-authenticate git with GitHub token
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
@@ -720,9 +754,12 @@ jobs:
SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Append agent step summary
if: always()
run: bash /opt/gh-aw/actions/append_agent_step_summary.sh
- name: Upload Safe Outputs
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: safe-output
path: ${{ env.GH_AW_SAFE_OUTPUTS }}
@@ -744,13 +781,13 @@ jobs:
await main();
- name: Upload sanitized agent output
if: always() && env.GH_AW_AGENT_OUTPUT
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent-output
path: ${{ env.GH_AW_AGENT_OUTPUT }}
if-no-files-found: warn
- name: Upload engine output files
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent_outputs
path: |
@@ -795,45 +832,172 @@ jobs:
- name: Upload agent artifacts
if: always()
continue-on-error: true
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent-artifacts
path: |
/tmp/gh-aw/aw-prompts/prompt.txt
/tmp/gh-aw/aw_info.json
/tmp/gh-aw/mcp-logs/
/tmp/gh-aw/sandbox/firewall/logs/
/tmp/gh-aw/agent-stdio.log
/tmp/gh-aw/agent/
if-no-files-found: ignore
# --- Threat Detection (inline) ---
- name: Check if detection needed
id: detection_guard
if: always()
env:
OUTPUT_TYPES: ${{ steps.collect_output.outputs.output_types }}
HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
run: |
if [[ -n "$OUTPUT_TYPES" || "$HAS_PATCH" == "true" ]]; then
echo "run_detection=true" >> "$GITHUB_OUTPUT"
echo "Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH"
else
echo "run_detection=false" >> "$GITHUB_OUTPUT"
echo "Detection skipped: no agent outputs or patches to analyze"
fi
- name: Clear MCP configuration for detection
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
rm -f /tmp/gh-aw/mcp-config/mcp-servers.json
rm -f /home/runner/.copilot/mcp-config.json
rm -f "$GITHUB_WORKSPACE/.gemini/settings.json"
- name: Prepare threat detection files
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
mkdir -p /tmp/gh-aw/threat-detection/aw-prompts
cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true
cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true
for f in /tmp/gh-aw/aw-*.patch; do
[ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true
done
echo "Prepared threat detection files:"
ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true
- name: Setup threat detection
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
WORKFLOW_NAME: "Code Simplifier"
WORKFLOW_DESCRIPTION: "Analyzes recently modified code and creates pull requests with simplifications that improve clarity, consistency, and maintainability while preserving functionality"
HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
await main();
- name: Ensure threat-detection directory and log
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
mkdir -p /tmp/gh-aw/threat-detection
touch /tmp/gh-aw/threat-detection/detection.log
- name: Execute GitHub Copilot CLI
if: always() && steps.detection_guard.outputs.run_detection == 'true'
id: detection_agentic_execution
# Copilot CLI tool arguments (sorted):
# --allow-tool shell(cat)
# --allow-tool shell(grep)
# --allow-tool shell(head)
# --allow-tool shell(jq)
# --allow-tool shell(ls)
# --allow-tool shell(tail)
# --allow-tool shell(wc)
timeout-minutes: 20
run: |
set -o pipefail
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
GH_AW_PHASE: detection
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_VERSION: v0.57.2
GITHUB_API_URL: ${{ github.api_url }}
GITHUB_AW: true
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
GITHUB_WORKSPACE: ${{ github.workspace }}
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_AUTHOR_NAME: github-actions[bot]
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_COMMITTER_NAME: github-actions[bot]
XDG_CONFIG_HOME: /home/runner
- name: Parse threat detection results
id: parse_detection_results
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
await main();
- name: Upload threat detection log
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: threat-detection.log
path: /tmp/gh-aw/threat-detection/detection.log
if-no-files-found: ignore
- name: Set detection conclusion
id: detection_conclusion
if: always()
env:
RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }}
DETECTION_SUCCESS: ${{ steps.parse_detection_results.outputs.success }}
run: |
if [[ "$RUN_DETECTION" != "true" ]]; then
echo "conclusion=skipped" >> "$GITHUB_OUTPUT"
echo "success=true" >> "$GITHUB_OUTPUT"
echo "Detection was not needed, marking as skipped"
elif [[ "$DETECTION_SUCCESS" == "true" ]]; then
echo "conclusion=success" >> "$GITHUB_OUTPUT"
echo "success=true" >> "$GITHUB_OUTPUT"
echo "Detection passed successfully"
else
echo "conclusion=failure" >> "$GITHUB_OUTPUT"
echo "success=false" >> "$GITHUB_OUTPUT"
echo "Detection found issues"
fi

conclusion:
needs:
- activation
- agent
- detection
- safe_outputs
if: (always()) && (needs.agent.result != 'skipped')
runs-on: ubuntu-slim
permissions:
contents: read
issues: write
concurrency:
group: "gh-aw-conclusion-code-simplifier"
cancel-in-progress: false
outputs:
noop_message: ${{ steps.noop.outputs.noop_message }}
tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
total_count: ${{ steps.missing_tool.outputs.total_count }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
with:
destination: /opt/gh-aw/actions
- name: Download agent output artifact
id: download-agent-output
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
name: agent-output
path: /tmp/gh-aw/safeoutputs/
- name: Setup agent output environment variable
if: steps.download-agent-output.outcome == 'success'
run: |
mkdir -p /tmp/gh-aw/safeoutputs/
find "/tmp/gh-aw/safeoutputs/" -type f -print
@@ -843,7 +1007,7 @@ jobs:
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
GH_AW_NOOP_MAX: 1
GH_AW_NOOP_MAX: "1"
GH_AW_WORKFLOW_NAME: "Code Simplifier"
GH_AW_WORKFLOW_SOURCE: "github/gh-aw/.github/workflows/code-simplifier.md@76d37d925abd44fee97379206f105b74b91a285b"
GH_AW_WORKFLOW_SOURCE_URL: "${{ github.server_url }}/github/gh-aw/tree/76d37d925abd44fee97379206f105b74b91a285b/.github/workflows/code-simplifier.md"
@@ -883,8 +1047,12 @@ jobs:
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_WORKFLOW_ID: "code-simplifier"
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.agent.outputs.secret_verification_result }}
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }}
GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}
GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }}
GH_AW_GROUP_REPORTS: "false"
GH_AW_FAILURE_REPORT_AS_ISSUE: "true"
GH_AW_TIMEOUT_MINUTES: "30"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
script: |
@@ -904,7 +1072,7 @@ jobs:
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
GH_AW_NOOP_REPORT_AS_ISSUE: "true"
GH_AW_NOOP_REPORT_AS_ISSUE: "false"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
script: |
@@ -913,114 +1081,14 @@ jobs:
const { main } = require('/opt/gh-aw/actions/handle_noop_message.cjs');
await main();

detection:
needs: agent
if: needs.agent.outputs.output_types != '' || needs.agent.outputs.has_patch == 'true'
runs-on: ubuntu-latest
permissions: {}
concurrency:
group: "gh-aw-copilot-${{ github.workflow }}"
timeout-minutes: 10
outputs:
success: ${{ steps.parse_results.outputs.success }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
with:
destination: /opt/gh-aw/actions
- name: Download agent artifacts
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
with:
name: agent-artifacts
path: /tmp/gh-aw/threat-detection/
- name: Download agent output artifact
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
with:
name: agent-output
path: /tmp/gh-aw/threat-detection/
- name: Echo agent output types
env:
AGENT_OUTPUT_TYPES: ${{ needs.agent.outputs.output_types }}
run: |
echo "Agent output-types: $AGENT_OUTPUT_TYPES"
- name: Setup threat detection
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
WORKFLOW_NAME: "Code Simplifier"
WORKFLOW_DESCRIPTION: "Analyzes recently modified code and creates pull requests with simplifications that improve clarity, consistency, and maintainability while preserving functionality"
HAS_PATCH: ${{ needs.agent.outputs.has_patch }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
await main();
- name: Ensure threat-detection directory and log
run: |
mkdir -p /tmp/gh-aw/threat-detection
touch /tmp/gh-aw/threat-detection/detection.log
- name: Validate COPILOT_GITHUB_TOKEN secret
id: validate-secret
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
env:
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
- name: Install GitHub Copilot CLI
run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
- name: Execute GitHub Copilot CLI
id: agentic_execution
# Copilot CLI tool arguments (sorted):
# --allow-tool shell(cat)
# --allow-tool shell(grep)
# --allow-tool shell(head)
# --allow-tool shell(jq)
# --allow-tool shell(ls)
# --allow-tool shell(tail)
# --allow-tool shell(wc)
timeout-minutes: 20
run: |
set -o pipefail
COPILOT_CLI_INSTRUCTION="$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"
mkdir -p /tmp/
mkdir -p /tmp/gh-aw/
mkdir -p /tmp/gh-aw/agent/
mkdir -p /tmp/gh-aw/sandbox/agent/logs/
copilot --add-dir /tmp/ --add-dir /tmp/gh-aw/ --add-dir /tmp/gh-aw/agent/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-tool 'shell(cat)' --allow-tool 'shell(grep)' --allow-tool 'shell(head)' --allow-tool 'shell(jq)' --allow-tool 'shell(ls)' --allow-tool 'shell(tail)' --allow-tool 'shell(wc)' --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$COPILOT_CLI_INSTRUCTION"${GH_AW_MODEL_DETECTION_COPILOT:+ --model "$GH_AW_MODEL_DETECTION_COPILOT"} 2>&1 | tee /tmp/gh-aw/threat-detection/detection.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
GITHUB_WORKSPACE: ${{ github.workspace }}
XDG_CONFIG_HOME: /home/runner
- name: Parse threat detection results
id: parse_results
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
await main();
- name: Upload threat detection log
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
with:
name: threat-detection.log
path: /tmp/gh-aw/threat-detection/detection.log
if-no-files-found: ignore

pre_activation:
runs-on: ubuntu-slim
outputs:
activated: ${{ (steps.check_membership.outputs.is_team_member == 'true') && (steps.check_skip_if_match.outputs.skip_check_ok == 'true') }}
matched_command: ''
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
with:
destination: /opt/gh-aw/actions
- name: Check team membership for workflow
@@ -1050,16 +1118,15 @@ jobs:
await main();

safe_outputs:
needs:
- agent
- detection
if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.detection.outputs.success == 'true')
needs: agent
if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.agent.outputs.detection_success == 'true')
runs-on: ubuntu-slim
permissions:
contents: read
issues: write
timeout-minutes: 15
env:
GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/code-simplifier"
GH_AW_ENGINE_ID: "copilot"
GH_AW_TRACKER_ID: "code-simplifier"
GH_AW_WORKFLOW_ID: "code-simplifier"
@@ -1067,22 +1134,28 @@ jobs:
GH_AW_WORKFLOW_SOURCE: "github/gh-aw/.github/workflows/code-simplifier.md@76d37d925abd44fee97379206f105b74b91a285b"
GH_AW_WORKFLOW_SOURCE_URL: "${{ github.server_url }}/github/gh-aw/tree/76d37d925abd44fee97379206f105b74b91a285b/.github/workflows/code-simplifier.md"
outputs:
code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}
code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}
create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }}
create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }}
created_issue_number: ${{ steps.process_safe_outputs.outputs.created_issue_number }}
created_issue_url: ${{ steps.process_safe_outputs.outputs.created_issue_url }}
process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }}
process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
with:
destination: /opt/gh-aw/actions
- name: Download agent output artifact
id: download-agent-output
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
with:
name: agent-output
path: /tmp/gh-aw/safeoutputs/
- name: Setup agent output environment variable
if: steps.download-agent-output.outcome == 'success'
run: |
mkdir -p /tmp/gh-aw/safeoutputs/
find "/tmp/gh-aw/safeoutputs/" -type f -print
@@ -1092,6 +1165,9 @@ jobs:
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_API_URL: ${{ github.api_url }}
GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_issue\":{\"labels\":[\"refactoring\",\"code-quality\",\"automation\"],\"max\":1,\"title_prefix\":\"[code-simplifier] \"},\"missing_data\":{},\"missing_tool\":{}}"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
@@ -1100,4 +1176,11 @@ jobs:
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/safe_output_handler_manager.cjs');
await main();
- name: Upload safe output items manifest
if: always()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: safe-output-items
path: /tmp/safe-output-items.jsonl
if-no-files-found: warn
793 .github/workflows/code-simplifier.md (vendored)
@@ -1,3 +1,4 @@
<<<<<<< current (local changes)
---
on:
schedule: daily
@@ -13,6 +14,8 @@ safe-outputs:
- code-quality
- automation
title-prefix: "[code-simplifier] "
noop:
report-as-issue: false
description: Analyzes recently modified code and creates pull requests with simplifications that improve clarity, consistency, and maintainability while preserving functionality
name: Code Simplifier
source: github/gh-aw/.github/workflows/code-simplifier.md@76d37d925abd44fee97379206f105b74b91a285b
@@ -425,3 +428,793 @@ Your output MUST either:
- Instructions for applying the diff or creating a PR

Begin your code simplification analysis now. Find recently modified code, assess simplification opportunities, apply improvements while preserving functionality, validate changes, and create an issue with a git diff if beneficial.
||||||| base (original)
---
name: Code Simplifier
description: Analyzes recently modified code and creates pull requests with simplifications that improve clarity, consistency, and maintainability while preserving functionality
on:
schedule: daily
skip-if-match: 'is:pr is:open in:title "[code-simplifier]"'

permissions:
contents: read
issues: read
pull-requests: read

tracker-id: code-simplifier

imports:
- shared/reporting.md

safe-outputs:
create-pull-request:
title-prefix: "[code-simplifier] "
labels: [refactoring, code-quality, automation]
reviewers: [copilot]
expires: 7d

tools:
github:
toolsets: [default]

timeout-minutes: 30
strict: true
source: github/gh-aw/.github/workflows/code-simplifier.md@76d37d925abd44fee97379206f105b74b91a285b
---

<!-- This prompt will be imported in the agentic workflow .github/workflows/code-simplifier.md at runtime. -->
<!-- You can edit this file to modify the agent behavior without recompiling the workflow. -->

# Code Simplifier Agent

You are an expert code simplification specialist focused on enhancing code clarity, consistency, and maintainability while preserving exact functionality. Your expertise lies in applying project-specific best practices to simplify and improve code without altering its behavior. You prioritize readable, explicit code over overly compact solutions. This is a balance you have mastered as a result of your years as an expert software engineer.

## Your Mission

Analyze recently modified code from the last 24 hours and apply refinements that improve code quality while preserving all functionality. Create a pull request with the simplified code if improvements are found.

## Current Context

- **Repository**: ${{ github.repository }}
- **Analysis Date**: $(date +%Y-%m-%d)
- **Workspace**: ${{ github.workspace }}

## Phase 1: Identify Recently Modified Code

### 1.1 Find Recent Changes

Search for merged pull requests and commits from the last 24 hours:

```bash
# Get yesterday's date in ISO format
YESTERDAY=$(date -d '1 day ago' '+%Y-%m-%d' 2>/dev/null || date -v-1d '+%Y-%m-%d')

# List recent commits
git log --since="24 hours ago" --pretty=format:"%H %s" --no-merges
```

Use GitHub tools to:
- Search for pull requests merged in the last 24 hours: `repo:${{ github.repository }} is:pr is:merged merged:>=${YESTERDAY}`
- Get details of merged PRs to understand what files were changed
- List commits from the last 24 hours to identify modified files

### 1.2 Extract Changed Files

For each merged PR or recent commit:
- Use `pull_request_read` with `method: get_files` to list changed files
- Use `get_commit` to see file changes in recent commits
- Focus on source code files (`.go`, `.js`, `.ts`, `.tsx`, `.cjs`, `.py`, etc.)
- Exclude test files, lock files, and generated files
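The selection rules above can be sketched as a small filter. This is an illustrative sketch only: the extension list, exclusion markers, and sample file names are assumptions made for the example, not part of the workflow.

```python
# Hypothetical sketch of the file-selection rules: keep source files by
# extension, then drop test, lock, and generated files.
SOURCE_EXTS = (".go", ".js", ".ts", ".tsx", ".cjs", ".py")
EXCLUDE_MARKERS = ("_test.", ".test.", "lock")

def select_candidates(paths):
    """Return the subset of paths worth simplifying, per the rules above."""
    return [
        p for p in paths
        if p.endswith(SOURCE_EXTS) and not any(m in p for m in EXCLUDE_MARKERS)
    ]

# Invented sample input standing in for a real changed-file listing
changed = ["src/main.go", "src/main_test.go", "package-lock.json", "web/app.tsx"]
print(select_candidates(changed))  # → ['src/main.go', 'web/app.tsx']
```

In the workflow itself, the candidate list would come from `git log --name-only` or the PR file listing rather than a hard-coded list.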
### 1.3 Determine Scope

If **no files were changed in the last 24 hours**, exit gracefully without creating a PR:

```
✅ No code changes detected in the last 24 hours.
Code simplifier has nothing to process today.
```

If **files were changed**, proceed to Phase 2.

## Phase 2: Analyze and Simplify Code

### 2.1 Review Project Standards

Before simplifying, review the project's coding standards from relevant documentation:

- For Go projects: Check `AGENTS.md`, `DEVGUIDE.md`, or similar files
- For JavaScript/TypeScript: Look for `CLAUDE.md`, style guides, or coding conventions
- For Python: Check for style guides, PEP 8 adherence, or project-specific conventions

**Key Standards to Apply:**

For **JavaScript/TypeScript** projects:
- Use ES modules with proper import sorting and extensions
- Prefer `function` keyword over arrow functions for top-level functions
- Use explicit return type annotations for top-level functions
- Follow proper React component patterns with explicit Props types
- Use proper error handling patterns (avoid try/catch when possible)
- Maintain consistent naming conventions

For **Go** projects:
- Use `any` instead of `interface{}`
- Follow console formatting for CLI output
- Use semantic type aliases for domain concepts
- Prefer small, focused files (200-500 lines ideal)
- Use table-driven tests with descriptive names

For **Python** projects:
- Follow PEP 8 style guide
- Use type hints for function signatures
- Prefer explicit over implicit code
- Use list/dict comprehensions where they improve clarity (not complexity)
|
||||
|
||||
### 2.2 Simplification Principles
|
||||
|
||||
Apply these refinements to the recently modified code:
|
||||
|
||||
#### 1. Preserve Functionality
|
||||
- **NEVER** change what the code does - only how it does it
|
||||
- All original features, outputs, and behaviors must remain intact
|
||||
- Run tests before and after to ensure no behavioral changes
|
||||
|
||||
#### 2. Enhance Clarity
|
||||
- Reduce unnecessary complexity and nesting
|
||||
- Eliminate redundant code and abstractions
|
||||
- Improve readability through clear variable and function names
|
||||
- Consolidate related logic
|
||||
- Remove unnecessary comments that describe obvious code
|
||||
- **IMPORTANT**: Avoid nested ternary operators - prefer switch statements or if/else chains
|
||||
- Choose clarity over brevity - explicit code is often better than compact code
|
||||
|
||||
#### 3. Apply Project Standards
|
||||
- Use project-specific conventions and patterns
|
||||
- Follow established naming conventions
|
||||
- Apply consistent formatting
|
||||
- Use appropriate language features (modern syntax where beneficial)
|
||||
|
||||
#### 4. Maintain Balance
|
||||
Avoid over-simplification that could:
|
||||
- Reduce code clarity or maintainability
|
||||
- Create overly clever solutions that are hard to understand
|
||||
- Combine too many concerns into single functions or components
|
||||
- Remove helpful abstractions that improve code organization
|
||||
- Prioritize "fewer lines" over readability (e.g., nested ternaries, dense one-liners)
|
||||
- Make the code harder to debug or extend
|
||||
|
||||
### 2.3 Perform Code Analysis
|
||||
|
||||
For each changed file:
|
||||
|
||||
1. **Read the file contents** using the edit or view tool
|
||||
2. **Identify refactoring opportunities**:
|
||||
- Long functions that could be split
|
||||
- Duplicate code patterns
|
||||
- Complex conditionals that could be simplified
|
||||
- Unclear variable names
|
||||
- Missing or excessive comments
|
||||
- Non-standard patterns
|
||||
3. **Design the simplification**:
|
||||
- What specific changes will improve clarity?
|
||||
- How can complexity be reduced?
|
||||
- What patterns should be applied?
|
||||
- Will this maintain all functionality?
|
||||
|
||||
### 2.4 Apply Simplifications
|
||||
|
||||
Use the **edit** tool to modify files:
|
||||
|
||||
```bash
|
||||
# For each file with improvements:
|
||||
# 1. Read the current content
|
||||
# 2. Apply targeted edits to simplify code
|
||||
# 3. Ensure all functionality is preserved
|
||||
```
|
||||
|
||||
**Guidelines for edits:**
|
||||
- Make surgical, targeted changes
|
||||
- One logical improvement per edit (but batch multiple edits in a single response)
|
||||
- Preserve all original behavior
|
||||
- Keep changes focused on recently modified code
|
||||
- Don't refactor unrelated code unless it improves understanding of the changes
|
||||
|
||||
## Phase 3: Validate Changes
|
||||
|
||||
### 3.1 Run Tests
|
||||
|
||||
After making simplifications, run the project's test suite to ensure no functionality was broken:
|
||||
|
||||
```bash
|
||||
# For Go projects
|
||||
make test-unit
|
||||
|
||||
# For JavaScript/TypeScript projects
|
||||
npm test
|
||||
|
||||
# For Python projects
|
||||
pytest
|
||||
```
|
||||
|
||||
If tests fail:
|
||||
- Review the failures carefully
|
||||
- Revert changes that broke functionality
|
||||
- Adjust simplifications to preserve behavior
|
||||
- Re-run tests until they pass
|
||||
|
||||
### 3.2 Run Linters
|
||||
|
||||
Ensure code style is consistent:
|
||||
|
||||
```bash
|
||||
# For Go projects
|
||||
make lint
|
||||
|
||||
# For JavaScript/TypeScript projects
|
||||
npm run lint
|
||||
|
||||
# For Python projects
|
||||
flake8 . || pylint .
|
||||
```
|
||||
|
||||
Fix any linting issues introduced by the simplifications.
|
||||
|
||||
### 3.3 Check Build
|
||||
|
||||
Verify the project still builds successfully:
|
||||
|
||||
```bash
|
||||
# For Go projects
|
||||
make build
|
||||
|
||||
# For JavaScript/TypeScript projects
|
||||
npm run build
|
||||
|
||||
# For Python projects
|
||||
# (typically no build step, but check imports)
|
||||
python -m py_compile changed_files.py
|
||||
```
|
||||
|
||||
## Phase 4: Create Pull Request
|
||||
|
||||
### 4.1 Determine If PR Is Needed
|
||||
|
||||
Only create a PR if:
|
||||
- ✅ You made actual code simplifications
|
||||
- ✅ All tests pass
|
||||
- ✅ Linting is clean
|
||||
- ✅ Build succeeds
|
||||
- ✅ Changes improve code quality without breaking functionality
|
||||
|
||||
If no improvements were made or changes broke tests, exit gracefully:
|
||||
|
||||
```
|
||||
✅ Code analyzed from last 24 hours.
|
||||
No simplifications needed - code already meets quality standards.
|
||||
```
|
||||
|
||||
### 4.2 Generate PR Description
|
||||
|
||||
If creating a PR, use this structure:
|
||||
|
||||
```markdown
|
||||
## Code Simplification - [Date]
|
||||
|
||||
This PR simplifies recently modified code to improve clarity, consistency, and maintainability while preserving all functionality.
|
||||
|
||||
### Files Simplified
|
||||
|
||||
- `path/to/file1.go` - [Brief description of improvements]
|
||||
- `path/to/file2.js` - [Brief description of improvements]
|
||||
|
||||
### Improvements Made
|
||||
|
||||
1. **Reduced Complexity**
|
||||
- Simplified nested conditionals in `file1.go`
|
||||
- Extracted helper function for repeated logic
|
||||
|
||||
2. **Enhanced Clarity**
|
||||
- Renamed variables for better readability
|
||||
- Removed redundant comments
|
||||
- Applied consistent naming conventions
|
||||
|
||||
3. **Applied Project Standards**
|
||||
- Used `function` keyword instead of arrow functions
|
||||
- Added explicit type annotations
|
||||
- Followed established patterns
|
||||
|
||||
### Changes Based On
|
||||
|
||||
Recent changes from:
|
||||
- #[PR_NUMBER] - [PR title]
|
||||
- Commit [SHORT_SHA] - [Commit message]
|
||||
|
||||
### Testing
|
||||
|
||||
- ✅ All tests pass (`make test-unit`)
|
||||
- ✅ Linting passes (`make lint`)
|
||||
- ✅ Build succeeds (`make build`)
|
||||
- ✅ No functional changes - behavior is identical
|
||||
|
||||
### Review Focus
|
||||
|
||||
Please verify:
|
||||
- Functionality is preserved
|
||||
- Simplifications improve code quality
|
||||
- Changes align with project conventions
|
||||
- No unintended side effects
|
||||
|
||||
---
|
||||
|
||||
*Automated by Code Simplifier Agent - analyzing code from the last 24 hours*
|
||||
```
|
||||
|
||||
### 4.3 Use Safe Outputs
|
||||
|
||||
Create the pull request using the safe-outputs configuration:
|
||||
|
||||
- Title will be prefixed with `[code-simplifier]`
|
||||
- Labeled with `refactoring`, `code-quality`, `automation`
|
||||
- Assigned to `copilot` for review
|
||||
- Set as ready for review (not draft)
|
||||
|
||||
## Important Guidelines
|
||||
|
||||
### Scope Control
|
||||
- **Focus on recent changes**: Only refine code modified in the last 24 hours
|
||||
- **Don't over-refactor**: Avoid touching unrelated code
|
||||
- **Preserve interfaces**: Don't change public APIs or exported functions
|
||||
- **Incremental improvements**: Make targeted, surgical changes
|
||||
|
||||
### Quality Standards
|
||||
- **Test first**: Always run tests after simplifications
|
||||
- **Preserve behavior**: Functionality must remain identical
|
||||
- **Follow conventions**: Apply project-specific patterns consistently
|
||||
- **Clear over clever**: Prioritize readability and maintainability
|
||||
|
||||
### Exit Conditions
|
||||
Exit gracefully without creating a PR if:
|
||||
- No code was changed in the last 24 hours
|
||||
- No simplifications are beneficial
|
||||
- Tests fail after changes
|
||||
- Build fails after changes
|
||||
- Changes are too risky or complex
|
||||
|
||||
### Success Metrics
|
||||
A successful simplification:
|
||||
- ✅ Improves code clarity without changing behavior
|
||||
- ✅ Passes all tests and linting
|
||||
- ✅ Applies project-specific conventions
|
||||
- ✅ Makes code easier to understand and maintain
|
||||
- ✅ Focuses on recently modified code
|
||||
- ✅ Provides clear documentation of changes
|
||||
|
||||
## Output Requirements
|
||||
|
||||
Your output MUST either:
|
||||
|
||||
1. **If no changes in last 24 hours**:
|
||||
```
|
||||
✅ No code changes detected in the last 24 hours.
|
||||
Code simplifier has nothing to process today.
|
||||
```
|
||||
|
||||
2. **If no simplifications beneficial**:
|
||||
```
|
||||
✅ Code analyzed from last 24 hours.
|
||||
No simplifications needed - code already meets quality standards.
|
||||
```
|
||||
|
||||
3. **If simplifications made**: Create a PR with the changes using safe-outputs
|
||||
|
||||
Begin your code simplification analysis now. Find recently modified code, assess simplification opportunities, apply improvements while preserving functionality, validate changes, and create a PR if beneficial.
|
||||
=======
|
||||
---
name: Code Simplifier
description: Analyzes recently modified code and creates pull requests with simplifications that improve clarity, consistency, and maintainability while preserving functionality
on:
  schedule: daily
  skip-if-match: 'is:pr is:open in:title "[code-simplifier]"'

permissions:
  contents: read
  issues: read
  pull-requests: read

tracker-id: code-simplifier

imports:
  - shared/activation-app.md
  - shared/reporting.md

safe-outputs:
  create-pull-request:
    title-prefix: "[code-simplifier] "
    labels: [refactoring, code-quality, automation]
    reviewers: [copilot]
    expires: 1d

network:
  allowed:
    - go

tools:
  github:
    toolsets: [default]

timeout-minutes: 30
strict: true
source: github/gh-aw/.github/workflows/code-simplifier.md@6762bfba6ae426a03aac46e8f68701461c667404
---

<!-- This prompt will be imported in the agentic workflow .github/workflows/code-simplifier.md at runtime. -->
<!-- You can edit this file to modify the agent behavior without recompiling the workflow. -->

# Code Simplifier Agent

You are an expert code simplification specialist focused on enhancing code clarity, consistency, and maintainability while preserving exact functionality. Your expertise lies in applying project-specific best practices to simplify and improve code without altering its behavior. You prioritize readable, explicit code over overly compact solutions, a balance you have mastered over your years as an expert software engineer.

## Your Mission

Analyze recently modified code from the last 24 hours and apply refinements that improve code quality while preserving all functionality. Create a pull request with the simplified code if improvements are found.

## Current Context

- **Repository**: ${{ github.repository }}
- **Analysis Date**: $(date +%Y-%m-%d)
- **Workspace**: ${{ github.workspace }}

## Phase 1: Identify Recently Modified Code

### 1.1 Find Recent Changes

Search for merged pull requests and commits from the last 24 hours:

```bash
# Get yesterday's date in ISO format
YESTERDAY=$(date -d '1 day ago' '+%Y-%m-%d' 2>/dev/null || date -v-1d '+%Y-%m-%d')

# List recent commits
git log --since="24 hours ago" --pretty=format:"%H %s" --no-merges
```

Use GitHub tools to:

- Search for pull requests merged in the last 24 hours: `repo:${{ github.repository }} is:pr is:merged merged:>=${YESTERDAY}`
- Get details of merged PRs to understand what files were changed
- List commits from the last 24 hours to identify modified files
### 1.2 Extract Changed Files

For each merged PR or recent commit:

- Use `pull_request_read` with `method: get_files` to list changed files
- Use `get_commit` to see file changes in recent commits
- Focus on source code files (`.go`, `.js`, `.ts`, `.tsx`, `.cjs`, `.py`, `.cs`, etc.)
- Exclude test files, lock files, and generated files
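A minimal sketch of that filtering as a portable shell function; the glob patterns are illustrative assumptions, not an exhaustive list of a project's test or generated-file conventions:

```shell
# Decide whether a changed path is a simplification candidate.
# Prints "source" for code worth simplifying, "skip" otherwise.
classify_file() {
  case "$1" in
    *_test.go|*.test.ts|*.spec.ts)        echo "skip" ;;   # test files
    *.lock|*.lock.yml|package-lock.json)  echo "skip" ;;   # lock files
    *.pb.go|*.generated.*)                echo "skip" ;;   # generated files
    *.go|*.js|*.ts|*.tsx|*.cjs|*.py|*.cs) echo "source" ;;
    *)                                    echo "skip" ;;   # docs, configs, etc.
  esac
}

classify_file src/util.go          # prints "source"
classify_file src/util_test.go     # prints "skip"
```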
### 1.3 Determine Scope

If **no files were changed in the last 24 hours**, exit gracefully without creating a PR:

```
✅ No code changes detected in the last 24 hours.
Code simplifier has nothing to process today.
```

If **files were changed**, proceed to Phase 2.

## Phase 2: Analyze and Simplify Code

### 2.1 Review Project Standards

Before simplifying, review the project's coding standards from relevant documentation:

- For Go projects: Check `AGENTS.md`, `DEVGUIDE.md`, or similar files
- For JavaScript/TypeScript: Look for `CLAUDE.md`, style guides, or coding conventions
- For Python: Check for style guides, PEP 8 adherence, or project-specific conventions
- For .NET/C#: Check `.editorconfig`, `Directory.Build.props`, or coding conventions in docs

**Key Standards to Apply:**

For **JavaScript/TypeScript** projects:

- Use ES modules with proper import sorting and extensions
- Prefer the `function` keyword over arrow functions for top-level functions
- Use explicit return type annotations for top-level functions
- Follow proper React component patterns with explicit Props types
- Use proper error handling patterns (avoid try/catch when possible)
- Maintain consistent naming conventions

For **Go** projects:

- Use `any` instead of `interface{}`
- Follow console formatting for CLI output
- Use semantic type aliases for domain concepts
- Prefer small, focused files (200-500 lines ideal)
- Use table-driven tests with descriptive names

For **Python** projects:

- Follow the PEP 8 style guide
- Use type hints for function signatures
- Prefer explicit over implicit code
- Use list/dict comprehensions where they improve clarity (not complexity)

For **.NET/C#** projects:

- Follow Microsoft C# coding conventions
- Use `var` only when the type is obvious from the right-hand side
- Use file-scoped namespaces (`namespace X;`) where supported
- Prefer pattern matching over type casting
- Use `async`/`await` consistently; avoid `.Result` or `.Wait()`
- Use nullable reference types and annotate nullability

### 2.2 Simplification Principles

Apply these refinements to the recently modified code:

#### 1. Preserve Functionality

- **NEVER** change what the code does - only how it does it
- All original features, outputs, and behaviors must remain intact
- Run tests before and after to ensure no behavioral changes

#### 2. Enhance Clarity

- Reduce unnecessary complexity and nesting
- Eliminate redundant code and abstractions
- Improve readability through clear variable and function names
- Consolidate related logic
- Remove unnecessary comments that describe obvious code
- **IMPORTANT**: Avoid nested ternary operators - prefer switch statements or if/else chains
- Choose clarity over brevity - explicit code is often better than compact code
#### 3. Apply Project Standards

- Use project-specific conventions and patterns
- Follow established naming conventions
- Apply consistent formatting
- Use appropriate language features (modern syntax where beneficial)

#### 4. Maintain Balance

Avoid over-simplification that could:

- Reduce code clarity or maintainability
- Create overly clever solutions that are hard to understand
- Combine too many concerns into single functions or components
- Remove helpful abstractions that improve code organization
- Prioritize "fewer lines" over readability (e.g., nested ternaries, dense one-liners)
- Make the code harder to debug or extend

### 2.3 Perform Code Analysis

For each changed file:

1. **Read the file contents** using the edit or view tool
2. **Identify refactoring opportunities**:
   - Long functions that could be split
   - Duplicate code patterns
   - Complex conditionals that could be simplified
   - Unclear variable names
   - Missing or excessive comments
   - Non-standard patterns
3. **Design the simplification**:
   - What specific changes will improve clarity?
   - How can complexity be reduced?
   - What patterns should be applied?
   - Will this maintain all functionality?

### 2.4 Apply Simplifications

Use the **edit** tool to modify files:

```bash
# For each file with improvements:
# 1. Read the current content
# 2. Apply targeted edits to simplify code
# 3. Ensure all functionality is preserved
```

**Guidelines for edits:**

- Make surgical, targeted changes
- One logical improvement per edit (but batch multiple edits in a single response)
- Preserve all original behavior
- Keep changes focused on recently modified code
- Don't refactor unrelated code unless it improves understanding of the changes

## Phase 3: Validate Changes

### 3.1 Run Tests

After making simplifications, run the project's test suite to ensure no functionality was broken:

```bash
# For Go projects
make test-unit

# For JavaScript/TypeScript projects
npm test

# For Python projects
pytest

# For .NET projects
dotnet test
```

If tests fail:

- Review the failures carefully
- Revert changes that broke functionality
- Adjust simplifications to preserve behavior
- Re-run tests until they pass
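One way to sketch that test-then-revert step as a small helper; the command strings are placeholders, and a real run would pass the project's actual test and revert commands:

```shell
# Run a test command; if it fails, run a revert command and report failure.
run_or_revert() {
  test_cmd=$1
  revert_cmd=$2
  if $test_cmd; then
    return 0
  fi
  echo "tests failed, reverting" >&2
  $revert_cmd
  return 1
}

# e.g.: run_or_revert "make test-unit" "git checkout -- ."
```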
### 3.2 Run Linters

Ensure code style is consistent:

```bash
# For Go projects
make lint

# For JavaScript/TypeScript projects
npm run lint

# For Python projects
flake8 . || pylint .

# For .NET projects
dotnet format --verify-no-changes
```

Fix any linting issues introduced by the simplifications.

### 3.3 Check Build

Verify the project still builds successfully:

```bash
# For Go projects
make build

# For JavaScript/TypeScript projects
npm run build

# For Python projects
# (typically no build step, but check imports)
python -m py_compile changed_files.py

# For .NET projects
dotnet build
```

## Phase 4: Create Pull Request

### 4.1 Determine If PR Is Needed

Only create a PR if:

- ✅ You made actual code simplifications
- ✅ All tests pass
- ✅ Linting is clean
- ✅ Build succeeds
- ✅ Changes improve code quality without breaking functionality

If no improvements were made or changes broke tests, exit gracefully:

```
✅ Code analyzed from last 24 hours.
No simplifications needed - code already meets quality standards.
```

### 4.2 Generate PR Description

If creating a PR, use this structure:

```markdown
## Code Simplification - [Date]

This PR simplifies recently modified code to improve clarity, consistency, and maintainability while preserving all functionality.

### Files Simplified

- `path/to/file1.go` - [Brief description of improvements]
- `path/to/file2.js` - [Brief description of improvements]

### Improvements Made

1. **Reduced Complexity**
   - Simplified nested conditionals in `file1.go`
   - Extracted helper function for repeated logic

2. **Enhanced Clarity**
   - Renamed variables for better readability
   - Removed redundant comments
   - Applied consistent naming conventions

3. **Applied Project Standards**
   - Used `function` keyword instead of arrow functions
   - Added explicit type annotations
   - Followed established patterns

### Changes Based On

Recent changes from:

- #[PR_NUMBER] - [PR title]
- Commit [SHORT_SHA] - [Commit message]

### Testing

- ✅ All tests pass (`make test-unit`)
- ✅ Linting passes (`make lint`)
- ✅ Build succeeds (`make build`)
- ✅ No functional changes - behavior is identical

### Review Focus

Please verify:

- Functionality is preserved
- Simplifications improve code quality
- Changes align with project conventions
- No unintended side effects

---

*Automated by Code Simplifier Agent - analyzing code from the last 24 hours*
```

### 4.3 Use Safe Outputs

Create the pull request using the safe-outputs configuration:

- Title will be prefixed with `[code-simplifier]`
- Labeled with `refactoring`, `code-quality`, `automation`
- Assigned to `copilot` for review
- Set as ready for review (not draft)

## Important Guidelines

### Scope Control

- **Focus on recent changes**: Only refine code modified in the last 24 hours
- **Don't over-refactor**: Avoid touching unrelated code
- **Preserve interfaces**: Don't change public APIs or exported functions
- **Incremental improvements**: Make targeted, surgical changes

### Quality Standards

- **Test first**: Always run tests after simplifications
- **Preserve behavior**: Functionality must remain identical
- **Follow conventions**: Apply project-specific patterns consistently
- **Clear over clever**: Prioritize readability and maintainability

### Exit Conditions

Exit gracefully without creating a PR if:

- No code was changed in the last 24 hours
- No simplifications are beneficial
- Tests fail after changes
- Build fails after changes
- Changes are too risky or complex

### Success Metrics

A successful simplification:

- ✅ Improves code clarity without changing behavior
- ✅ Passes all tests and linting
- ✅ Applies project-specific conventions
- ✅ Makes code easier to understand and maintain
- ✅ Focuses on recently modified code
- ✅ Provides clear documentation of changes

## Output Requirements

Your output MUST be one of the following:

1. **If no changes in last 24 hours**:

   ```
   ✅ No code changes detected in the last 24 hours.
   Code simplifier has nothing to process today.
   ```

2. **If no simplifications beneficial**:

   ```
   ✅ Code analyzed from last 24 hours.
   No simplifications needed - code already meets quality standards.
   ```

3. **If simplifications made**: Create a PR with the changes using safe-outputs

Begin your code simplification analysis now. Find recently modified code, assess simplification opportunities, apply improvements while preserving functionality, validate changes, and create a PR if beneficial.

**Important**: If no action is needed after completing your analysis, you **MUST** call the `noop` safe-output tool with a brief explanation. Failing to call any safe-output tool is the most common cause of safe-output workflow failures.

```json
{"noop": {"message": "No action needed: [brief explanation of what was analyzed and why]"}}
```
4  .github/workflows/coverage.yml  vendored

@@ -89,13 +89,13 @@ jobs:
         id: date
         run: echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_OUTPUT

-      - uses: actions/upload-artifact@v6
+      - uses: actions/upload-artifact@v7
         with:
           name: coverage-${{steps.date.outputs.date}}
           path: ${{github.workspace}}/coverage.html
           retention-days: 4

-      - uses: actions/upload-artifact@v6
+      - uses: actions/upload-artifact@v7
         with:
           name: coverage-details-${{steps.date.outputs.date}}
           path: ${{env.COV_DETAILS_PATH}}
261  .github/workflows/csa-analysis.lock.yml  generated vendored

@@ -13,7 +13,7 @@
 #  \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
 #   \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
 #
-# This file was automatically generated by gh-aw (v0.50.4). DO NOT EDIT.
+# This file was automatically generated by gh-aw (v0.57.2). DO NOT EDIT.
 #
 # To update this file, edit the corresponding .md file and run:
 #   gh aw compile
@@ -23,12 +23,12 @@
 #
 # Weekly Clang Static Analyzer (CSA) build and report for Z3, posting findings to GitHub Discussions
 #
-# gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"b8804724347ec1d5b5fd4088aa50e95480e5d3980da75fcc1cefefdb5c721197","compiler_version":"v0.50.4"}
+# gh-aw-metadata: {"schema_version":"v2","frontmatter_hash":"426e3686de7e6ee862926b83c1f39892898a04643b2ccdf13511ebc3f3108703","compiler_version":"v0.57.2","strict":true}

 name: "Clang Static Analyzer (CSA) Report"
 "on":
   schedule:
-    - cron: "1 12 * * 0"
+    - cron: "49 8 * * 3"
   # Friendly format: weekly (scattered)
   workflow_dispatch:
@@ -47,27 +47,51 @@ jobs:
     outputs:
       comment_id: ""
       comment_repo: ""
+      model: ${{ steps.generate_aw_info.outputs.model }}
+      secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
     steps:
       - name: Setup Scripts
-        uses: github/gh-aw/actions/setup@v0.50.4
+        uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
         with:
           destination: /opt/gh-aw/actions
-      - name: Validate context variables
+      - name: Generate agentic run info
+        id: generate_aw_info
+        env:
+          GH_AW_INFO_ENGINE_ID: "copilot"
+          GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
+          GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
+          GH_AW_INFO_VERSION: ""
+          GH_AW_INFO_AGENT_VERSION: "latest"
+          GH_AW_INFO_CLI_VERSION: "v0.57.2"
+          GH_AW_INFO_WORKFLOW_NAME: "Clang Static Analyzer (CSA) Report"
+          GH_AW_INFO_EXPERIMENTAL: "false"
+          GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"
+          GH_AW_INFO_STAGED: "false"
+          GH_AW_INFO_ALLOWED_DOMAINS: '["defaults"]'
+          GH_AW_INFO_FIREWALL_ENABLED: "true"
+          GH_AW_INFO_AWF_VERSION: "v0.23.0"
+          GH_AW_INFO_AWMG_VERSION: ""
+          GH_AW_INFO_FIREWALL_TYPE: "squid"
+          GH_AW_COMPILED_STRICT: "true"
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         with:
           script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
-           const { main } = require('/opt/gh-aw/actions/validate_context_variables.cjs');
-           await main();
+           const { main } = require('/opt/gh-aw/actions/generate_aw_info.cjs');
+           await main(core, context);
+      - name: Validate COPILOT_GITHUB_TOKEN secret
+        id: validate-secret
+        run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
+        env:
+          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
       - name: Checkout .github and .agents folders
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
+          persist-credentials: false
           sparse-checkout: |
            .github
            .agents
          sparse-checkout-cone-mode: true
          fetch-depth: 1
-         persist-credentials: false
      - name: Check workflow file timestamps
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
@@ -104,7 +128,7 @@ jobs:
          cat "/opt/gh-aw/prompts/safe_outputs_prompt.md"
          cat << 'GH_AW_PROMPT_EOF'
          <safe-output-tools>
-         Tools: create_discussion, missing_tool, missing_data
+         Tools: create_discussion, missing_tool, missing_data, noop
          </safe-output-tools>
          <github-context>
          The following GitHub context information is available for this workflow:
@@ -203,12 +227,14 @@ jobs:
        env:
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
        run: bash /opt/gh-aw/actions/print_prompt_summary.sh
-     - name: Upload prompt artifact
+     - name: Upload activation artifact
        if: success()
-       uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+       uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
        with:
-         name: prompt
-         path: /tmp/gh-aw/aw-prompts/prompt.txt
+         name: activation
+         path: |
+           /tmp/gh-aw/aw_info.json
+           /tmp/gh-aw/aw-prompts/prompt.txt
          retention-days: 1

  agent:
@@ -232,19 +258,19 @@ jobs:
      detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
      detection_success: ${{ steps.detection_conclusion.outputs.success }}
      has_patch: ${{ steps.collect_output.outputs.has_patch }}
-     model: ${{ steps.generate_aw_info.outputs.model }}
+     inference_access_error: ${{ steps.detect-inference-error.outputs.inference_access_error || 'false' }}
+     model: ${{ needs.activation.outputs.model }}
      output: ${{ steps.collect_output.outputs.output }}
      output_types: ${{ steps.collect_output.outputs.output_types }}
-     secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
    steps:
      - name: Setup Scripts
-       uses: github/gh-aw/actions/setup@v0.50.4
+       uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
        with:
          destination: /opt/gh-aw/actions
      - name: Create gh-aw temp directory
        run: bash /opt/gh-aw/actions/create_gh_aw_tmp_dir.sh
      - name: Checkout repository
-       uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5
+       uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false

@@ -252,7 +278,7 @@ jobs:
      - name: Create cache-memory directory
        run: bash /opt/gh-aw/actions/create_cache_memory_dir.sh
      - name: Restore cache-memory file share data
-       uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+       uses: actions/cache/restore@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
        with:
          key: memory-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }}
          path: /tmp/gh-aw/cache-memory
@@ -273,7 +299,7 @@ jobs:
      - name: Checkout PR branch
        id: checkout-pr
        if: |
-         github.event.pull_request
+         (github.event.pull_request) || (github.event.issue.pull_request)
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
@@ -284,57 +310,8 @@ jobs:
          setupGlobals(core, github, context, exec, io);
          const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
          await main();
-     - name: Generate agentic run info
-       id: generate_aw_info
-       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
with:
|
||||
script: |
|
||||
const fs = require('fs');
|
||||
|
||||
const awInfo = {
|
||||
engine_id: "copilot",
|
||||
engine_name: "GitHub Copilot CLI",
|
||||
model: process.env.GH_AW_MODEL_AGENT_COPILOT || "",
|
||||
version: "",
|
||||
agent_version: "0.0.417",
|
||||
cli_version: "v0.50.4",
|
||||
workflow_name: "Clang Static Analyzer (CSA) Report",
|
||||
experimental: false,
|
||||
supports_tools_allowlist: true,
|
||||
run_id: context.runId,
|
||||
run_number: context.runNumber,
|
||||
run_attempt: process.env.GITHUB_RUN_ATTEMPT,
|
||||
repository: context.repo.owner + '/' + context.repo.repo,
|
||||
ref: context.ref,
|
||||
sha: context.sha,
|
||||
actor: context.actor,
|
||||
event_name: context.eventName,
|
||||
staged: false,
|
||||
allowed_domains: ["defaults"],
|
||||
firewall_enabled: true,
|
||||
awf_version: "v0.23.0",
|
||||
awmg_version: "v0.1.5",
|
||||
steps: {
|
||||
firewall: "squid"
|
||||
},
|
||||
created_at: new Date().toISOString()
|
||||
};
|
||||
|
||||
// Write to /tmp/gh-aw directory to avoid inclusion in PR
|
||||
const tmpPath = '/tmp/gh-aw/aw_info.json';
|
||||
fs.writeFileSync(tmpPath, JSON.stringify(awInfo, null, 2));
|
||||
console.log('Generated aw_info.json at:', tmpPath);
|
||||
console.log(JSON.stringify(awInfo, null, 2));
|
||||
|
||||
// Set model as output for reuse in other steps/jobs
|
||||
core.setOutput('model', awInfo.model);
|
||||
- name: Validate COPILOT_GITHUB_TOKEN secret
|
||||
id: validate-secret
|
||||
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
|
||||
env:
|
||||
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
|
||||
- name: Install GitHub Copilot CLI
|
||||
run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.417
|
||||
run: /opt/gh-aw/actions/install_copilot_cli.sh latest
|
||||
- name: Install awf binary
|
||||
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
|
||||
- name: Determine automatic lockdown mode for GitHub MCP Server
|
||||
|
|
@ -348,7 +325,7 @@ jobs:
|
|||
const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
|
||||
await determineAutomaticLockdown(github, context, core);
|
||||
- name: Download container images
|
||||
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.5 ghcr.io/github/github-mcp-server:v0.31.0 node:lts-alpine
|
||||
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.32.0 node:lts-alpine
|
||||
- name: Write Safe Outputs Config
|
||||
run: |
|
||||
mkdir -p /opt/gh-aw/safeoutputs
|
||||
|
|
@ -372,6 +349,14 @@ jobs:
|
|||
"description": "Discussion category by name (e.g., 'General'), slug (e.g., 'general'), or ID. If omitted, uses the first available category. Category must exist in the repository.",
|
||||
"type": "string"
|
||||
},
|
||||
"integrity": {
|
||||
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
|
||||
"type": "string"
|
||||
},
|
||||
"secrecy": {
|
||||
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
|
||||
"type": "string"
|
||||
},
|
||||
"title": {
|
||||
"description": "Concise discussion title summarizing the topic. The title appears as the main heading, so keep it brief and descriptive.",
|
||||
"type": "string"
|
||||
|
|
@ -394,10 +379,18 @@ jobs:
|
|||
"description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
|
||||
"type": "string"
|
||||
},
|
||||
"integrity": {
|
||||
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
|
||||
"type": "string"
|
||||
},
|
||||
"reason": {
|
||||
"description": "Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).",
|
||||
"type": "string"
|
||||
},
|
||||
"secrecy": {
|
||||
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
|
||||
"type": "string"
|
||||
},
|
||||
"tool": {
|
||||
"description": "Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.",
|
||||
"type": "string"
|
||||
|
|
@ -415,9 +408,17 @@ jobs:
|
|||
"inputSchema": {
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
"integrity": {
|
||||
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
|
||||
"type": "string"
|
||||
},
|
||||
"message": {
|
||||
"description": "Status or completion message to log. Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').",
|
||||
"type": "string"
|
||||
},
|
||||
"secrecy": {
|
||||
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
|
||||
"type": "string"
|
||||
}
|
||||
},
|
||||
"required": [
|
||||
|
|
@ -444,9 +445,17 @@ jobs:
|
|||
"description": "Type or description of the missing data or information (max 128 characters). Be specific about what data is needed.",
|
||||
"type": "string"
|
||||
},
|
||||
"integrity": {
|
||||
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
|
||||
"type": "string"
|
||||
},
|
||||
"reason": {
|
||||
"description": "Explanation of why this data is needed to complete the task (max 256 characters).",
|
||||
"type": "string"
|
||||
},
|
||||
"secrecy": {
|
||||
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
|
||||
"type": "string"
|
||||
}
|
||||
},
|
||||
"required": [],
|
||||
|
|
@ -601,10 +610,11 @@ jobs:
|
|||
export MCP_GATEWAY_API_KEY
|
||||
export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads"
|
||||
mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}"
|
||||
export MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD="524288"
|
||||
export DEBUG="*"
|
||||
|
||||
export GH_AW_ENGINE="copilot"
|
||||
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.5'
|
||||
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.8'
|
||||
|
||||
mkdir -p /home/runner/.copilot
|
||||
cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh
|
||||
|
|
@ -612,7 +622,7 @@ jobs:
|
|||
"mcpServers": {
|
||||
"github": {
|
||||
"type": "stdio",
|
||||
"container": "ghcr.io/github/github-mcp-server:v0.31.0",
|
||||
"container": "ghcr.io/github/github-mcp-server:v0.32.0",
|
||||
"env": {
|
||||
"GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
|
||||
"GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",
|
||||
|
|
@ -636,41 +646,50 @@ jobs:
|
|||
}
|
||||
}
|
||||
GH_AW_MCP_CONFIG_EOF
|
||||
- name: Generate workflow overview
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
- name: Download activation artifact
|
||||
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
|
||||
with:
|
||||
script: |
|
||||
const { generateWorkflowOverview } = require('/opt/gh-aw/actions/generate_workflow_overview.cjs');
|
||||
await generateWorkflowOverview(core);
|
||||
- name: Download prompt artifact
|
||||
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
|
||||
with:
|
||||
name: prompt
|
||||
path: /tmp/gh-aw/aw-prompts
|
||||
name: activation
|
||||
path: /tmp/gh-aw
|
||||
- name: Clean git credentials
|
||||
run: bash /opt/gh-aw/actions/clean_git_credentials.sh
|
||||
- name: Execute GitHub Copilot CLI
|
||||
id: agentic_execution
|
||||
# Copilot CLI tool arguments (sorted):
|
||||
timeout-minutes: 90
|
||||
timeout-minutes: 180
|
||||
run: |
|
||||
set -o pipefail
|
||||
touch /tmp/gh-aw/agent-step-summary.md
|
||||
# shellcheck disable=SC1003
|
||||
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
|
||||
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_AGENT_COPILOT:+ --model "$GH_AW_MODEL_AGENT_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
|
||||
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
|
||||
env:
|
||||
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
|
||||
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
|
||||
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
|
||||
GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
|
||||
GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
|
||||
GH_AW_PHASE: agent
|
||||
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
|
||||
GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
|
||||
GH_AW_VERSION: v0.57.2
|
||||
GITHUB_API_URL: ${{ github.api_url }}
|
||||
GITHUB_AW: true
|
||||
GITHUB_HEAD_REF: ${{ github.head_ref }}
|
||||
GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
|
||||
GITHUB_REF_NAME: ${{ github.ref_name }}
|
||||
GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
|
||||
GITHUB_SERVER_URL: ${{ github.server_url }}
|
||||
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
|
||||
GITHUB_WORKSPACE: ${{ github.workspace }}
|
||||
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
|
||||
GIT_AUTHOR_NAME: github-actions[bot]
|
||||
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
|
||||
GIT_COMMITTER_NAME: github-actions[bot]
|
||||
XDG_CONFIG_HOME: /home/runner
|
||||
- name: Detect inference access error
|
||||
id: detect-inference-error
|
||||
if: always()
|
||||
continue-on-error: true
|
||||
run: bash /opt/gh-aw/actions/detect_inference_access_error.sh
|
||||
- name: Configure Git credentials
|
||||
env:
|
||||
REPO_NAME: ${{ github.repository }}
|
||||
|
|
@ -724,9 +743,12 @@ jobs:
|
|||
SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
|
||||
SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
|
||||
SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Append agent step summary
|
||||
if: always()
|
||||
run: bash /opt/gh-aw/actions/append_agent_step_summary.sh
|
||||
- name: Upload Safe Outputs
|
||||
if: always()
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: safe-output
|
||||
path: ${{ env.GH_AW_SAFE_OUTPUTS }}
|
||||
|
|
@ -748,13 +770,13 @@ jobs:
|
|||
await main();
|
||||
- name: Upload sanitized agent output
|
||||
if: always() && env.GH_AW_AGENT_OUTPUT
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: agent-output
|
||||
path: ${{ env.GH_AW_AGENT_OUTPUT }}
|
||||
if-no-files-found: warn
|
||||
- name: Upload engine output files
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: agent_outputs
|
||||
path: |
|
||||
|
|
@ -797,7 +819,7 @@ jobs:
|
|||
echo 'AWF binary not installed, skipping firewall log summary'
|
||||
fi
|
||||
- name: Upload cache-memory data as artifact
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
if: always()
|
||||
with:
|
||||
name: cache-memory
|
||||
|
|
@ -805,12 +827,11 @@ jobs:
|
|||
- name: Upload agent artifacts
|
||||
if: always()
|
||||
continue-on-error: true
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: agent-artifacts
|
||||
path: |
|
||||
/tmp/gh-aw/aw-prompts/prompt.txt
|
||||
/tmp/gh-aw/aw_info.json
|
||||
/tmp/gh-aw/mcp-logs/
|
||||
/tmp/gh-aw/sandbox/firewall/logs/
|
||||
/tmp/gh-aw/agent-stdio.log
|
||||
|
|
@ -880,18 +901,28 @@ jobs:
|
|||
timeout-minutes: 20
|
||||
run: |
|
||||
set -o pipefail
|
||||
touch /tmp/gh-aw/agent-step-summary.md
|
||||
# shellcheck disable=SC1003
|
||||
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
|
||||
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_DETECTION_COPILOT:+ --model "$GH_AW_MODEL_DETECTION_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
|
||||
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
|
||||
env:
|
||||
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
|
||||
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
|
||||
GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
|
||||
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
|
||||
GH_AW_PHASE: detection
|
||||
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
|
||||
GH_AW_VERSION: v0.57.2
|
||||
GITHUB_API_URL: ${{ github.api_url }}
|
||||
GITHUB_AW: true
|
||||
GITHUB_HEAD_REF: ${{ github.head_ref }}
|
||||
GITHUB_REF_NAME: ${{ github.ref_name }}
|
||||
GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
|
||||
GITHUB_SERVER_URL: ${{ github.server_url }}
|
||||
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
|
||||
GITHUB_WORKSPACE: ${{ github.workspace }}
|
||||
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
|
||||
GIT_AUTHOR_NAME: github-actions[bot]
|
||||
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
|
||||
GIT_COMMITTER_NAME: github-actions[bot]
|
||||
XDG_CONFIG_HOME: /home/runner
|
||||
- name: Parse threat detection results
|
||||
id: parse_detection_results
|
||||
|
|
@ -905,7 +936,7 @@ jobs:
|
|||
await main();
|
||||
- name: Upload threat detection log
|
||||
if: always() && steps.detection_guard.outputs.run_detection == 'true'
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: threat-detection.log
|
||||
path: /tmp/gh-aw/threat-detection/detection.log
|
||||
|
|
@ -943,22 +974,27 @@ jobs:
|
|||
contents: read
|
||||
discussions: write
|
||||
issues: write
|
||||
concurrency:
|
||||
group: "gh-aw-conclusion-csa-analysis"
|
||||
cancel-in-progress: false
|
||||
outputs:
|
||||
noop_message: ${{ steps.noop.outputs.noop_message }}
|
||||
tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
|
||||
total_count: ${{ steps.missing_tool.outputs.total_count }}
|
||||
steps:
|
||||
- name: Setup Scripts
|
||||
uses: github/gh-aw/actions/setup@v0.50.4
|
||||
uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
|
||||
with:
|
||||
destination: /opt/gh-aw/actions
|
||||
- name: Download agent output artifact
|
||||
id: download-agent-output
|
||||
continue-on-error: true
|
||||
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
|
||||
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
|
||||
with:
|
||||
name: agent-output
|
||||
path: /tmp/gh-aw/safeoutputs/
|
||||
- name: Setup agent output environment variable
|
||||
if: steps.download-agent-output.outcome == 'success'
|
||||
run: |
|
||||
mkdir -p /tmp/gh-aw/safeoutputs/
|
||||
find "/tmp/gh-aw/safeoutputs/" -type f -print
|
||||
|
|
@ -1001,11 +1037,14 @@ jobs:
|
|||
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
|
||||
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
|
||||
GH_AW_WORKFLOW_ID: "csa-analysis"
|
||||
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.agent.outputs.secret_verification_result }}
|
||||
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }}
|
||||
GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}
|
||||
GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }}
|
||||
GH_AW_CREATE_DISCUSSION_ERRORS: ${{ needs.safe_outputs.outputs.create_discussion_errors }}
|
||||
GH_AW_CREATE_DISCUSSION_ERROR_COUNT: ${{ needs.safe_outputs.outputs.create_discussion_error_count }}
|
||||
GH_AW_GROUP_REPORTS: "false"
|
||||
GH_AW_FAILURE_REPORT_AS_ISSUE: "true"
|
||||
GH_AW_TIMEOUT_MINUTES: "180"
|
||||
with:
|
||||
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
|
||||
script: |
|
||||
|
|
@ -1022,7 +1061,7 @@ jobs:
|
|||
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
|
||||
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
|
||||
GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
|
||||
GH_AW_NOOP_REPORT_AS_ISSUE: "true"
|
||||
GH_AW_NOOP_REPORT_AS_ISSUE: "false"
|
||||
with:
|
||||
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
|
||||
script: |
|
||||
|
|
@ -1041,6 +1080,7 @@ jobs:
|
|||
issues: write
|
||||
timeout-minutes: 15
|
||||
env:
|
||||
GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/csa-analysis"
|
||||
GH_AW_ENGINE_ID: "copilot"
|
||||
GH_AW_WORKFLOW_ID: "csa-analysis"
|
||||
GH_AW_WORKFLOW_NAME: "Clang Static Analyzer (CSA) Report"
|
||||
|
|
@ -1053,16 +1093,18 @@ jobs:
|
|||
process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
|
||||
steps:
|
||||
- name: Setup Scripts
|
||||
uses: github/gh-aw/actions/setup@v0.50.4
|
||||
uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
|
||||
with:
|
||||
destination: /opt/gh-aw/actions
|
||||
- name: Download agent output artifact
|
||||
id: download-agent-output
|
||||
continue-on-error: true
|
||||
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
|
||||
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
|
||||
with:
|
||||
name: agent-output
|
||||
path: /tmp/gh-aw/safeoutputs/
|
||||
- name: Setup agent output environment variable
|
||||
if: steps.download-agent-output.outcome == 'success'
|
||||
run: |
|
||||
mkdir -p /tmp/gh-aw/safeoutputs/
|
||||
find "/tmp/gh-aw/safeoutputs/" -type f -print
|
||||
|
|
@ -1072,6 +1114,9 @@ jobs:
|
|||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
env:
|
||||
GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
|
||||
GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
|
||||
GITHUB_SERVER_URL: ${{ github.server_url }}
|
||||
GITHUB_API_URL: ${{ github.api_url }}
|
||||
GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_discussion\":{\"category\":\"agentic workflows\",\"close_older_discussions\":true,\"expires\":168,\"fallback_to_issue\":true,\"max\":1,\"title_prefix\":\"[CSA] \"},\"missing_data\":{},\"missing_tool\":{}}"
|
||||
with:
|
||||
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
|
||||
|
|
@ -1082,7 +1127,7 @@ jobs:
|
|||
await main();
|
||||
- name: Upload safe output items manifest
|
||||
if: always()
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: safe-output-items
|
||||
path: /tmp/safe-output-items.jsonl
|
||||
|
|
@ -1097,12 +1142,12 @@ jobs:
|
|||
GH_AW_WORKFLOW_ID_SANITIZED: csaanalysis
|
||||
steps:
|
||||
- name: Setup Scripts
|
||||
uses: github/gh-aw/actions/setup@v0.50.4
|
||||
uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
|
||||
with:
|
||||
destination: /opt/gh-aw/actions
|
||||
- name: Download cache-memory artifact (default)
|
||||
id: download_cache_default
|
||||
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
|
||||
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
|
||||
continue-on-error: true
|
||||
with:
|
||||
name: cache-memory
|
||||
|
|
@ -1118,7 +1163,7 @@ jobs:
|
|||
fi
|
||||
- name: Save cache-memory to cache (default)
|
||||
if: steps.check_cache_default.outputs.has_content == 'true'
|
||||
uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
|
||||
uses: actions/cache/save@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
|
||||
with:
|
||||
key: memory-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }}
|
||||
path: /tmp/gh-aw/cache-memory
|
||||
|
|
|
|||
.github/workflows/csa-analysis.md (vendored, 6 changes)

@@ -5,7 +5,7 @@ on:
 schedule: weekly
 workflow_dispatch:

-timeout-minutes: 90
+timeout-minutes: 180

 permissions: read-all

@@ -26,10 +26,12 @@ safe-outputs:
 close-older-discussions: true
 missing-tool:
 create-issue: true
+noop:
+report-as-issue: false

 steps:
 - name: Checkout repository
-uses: actions/checkout@v5
+uses: actions/checkout@v6.0.2
 with:
 persist-credentials: false
.github/workflows/deeptest.md (vendored, 59 changes)

@@ -1,59 +0,0 @@
----
-description: Generate comprehensive test cases for Z3 source files
-
-on:
-workflow_dispatch:
-inputs:
-file_path:
-description: 'Path to the source file to generate tests for (e.g., src/util/vector.h)'
-required: true
-type: string
-issue_number:
-description: 'Issue number to link the generated tests to (optional)'
-required: false
-type: string
-
-permissions: read-all
-
-network: defaults
-
-tools:
-cache-memory: true
-serena: ["python"]
-github:
-toolsets: [default]
-bash: [":*"]
-edit: {}
-glob: {}
-safe-outputs:
-create-pull-request:
-title-prefix: "[DeepTest] "
-labels: [automated-tests, deeptest]
-draft: false
-add-comment:
-max: 2
-missing-tool:
-create-issue: true
-
-timeout-minutes: 30
-
-steps:
-- name: Checkout repository
-uses: actions/checkout@v5
-
----
-
-<!-- Edit the file linked below to modify the agent without recompilation. Feel free to move the entire markdown body to that file. -->
-{{#runtime-import agentics/deeptest.md}}
-
-## Context
-
-You are the DeepTest agent for the Z3 theorem prover repository.
-
-**Workflow dispatch file path**: ${{ github.event.inputs.file_path }}
-
-**Issue number** (if linked): ${{ github.event.inputs.issue_number }}
-
-## Instructions
-
-Follow the workflow steps defined in the imported prompt above to generate comprehensive test cases for the specified source file.
.github/workflows/docs.yml (vendored, 4 changes)

@@ -34,7 +34,7 @@ jobs:
 python3 mk_go_doc.py --output-dir=api/html/go --go-api-path=../src/api/go

 - name: Upload Go Documentation
-uses: actions/upload-artifact@v6
+uses: actions/upload-artifact@v7
 with:
 name: go-docs
 path: doc/api/html/go/

@@ -125,7 +125,7 @@ jobs:
 python3 mk_api_doc.py --js --go --output-dir=api --mld --z3py-package-path=../build-x64/python/z3 --build=../build-x64

 - name: Download Go Documentation
-uses: actions/download-artifact@v7
+uses: actions/download-artifact@v8.0.1
 with:
 name: go-docs
 path: doc/api/html/go/
613
.github/workflows/issue-backlog-processor.lock.yml
generated
vendored
613
.github/workflows/issue-backlog-processor.lock.yml
generated
vendored
|
|
@@ -13,7 +13,7 @@
 # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
 #  \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
 #
-# This file was automatically generated by gh-aw (v0.45.6). DO NOT EDIT.
+# This file was automatically generated by gh-aw (v0.57.2). DO NOT EDIT.
 #
 # To update this file, edit the corresponding .md file and run:
 #   gh aw compile
@@ -23,7 +23,7 @@
 #
 # Processes the backlog of open issues every second day, creates a discussion with findings, and comments on relevant issues
 #
-# gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"81ff1a035a0bcdc0cfe260b8d19a5c10e874391ce07c33664f144a94c04c891c"}
+# gh-aw-metadata: {"schema_version":"v2","frontmatter_hash":"5424d9402b8dedb25217216c006f6c53d734986434b89278b9a1ed4feccb6ac7","compiler_version":"v0.57.2","strict":true}

 name: "Issue Backlog Processor"
 "on":
@@ -47,19 +47,51 @@ jobs:
     outputs:
       comment_id: ""
       comment_repo: ""
       model: ${{ steps.generate_aw_info.outputs.model }}
       secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
     steps:
       - name: Setup Scripts
-        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
+        uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
         with:
           destination: /opt/gh-aw/actions
       - name: Generate agentic run info
         id: generate_aw_info
         env:
           GH_AW_INFO_ENGINE_ID: "copilot"
           GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
           GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
           GH_AW_INFO_VERSION: ""
           GH_AW_INFO_AGENT_VERSION: "latest"
           GH_AW_INFO_CLI_VERSION: "v0.57.2"
           GH_AW_INFO_WORKFLOW_NAME: "Issue Backlog Processor"
           GH_AW_INFO_EXPERIMENTAL: "false"
           GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"
           GH_AW_INFO_STAGED: "false"
           GH_AW_INFO_ALLOWED_DOMAINS: '["defaults"]'
           GH_AW_INFO_FIREWALL_ENABLED: "true"
           GH_AW_INFO_AWF_VERSION: "v0.23.0"
           GH_AW_INFO_AWMG_VERSION: ""
           GH_AW_INFO_FIREWALL_TYPE: "squid"
           GH_AW_COMPILED_STRICT: "true"
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         with:
           script: |
             const { main } = require('/opt/gh-aw/actions/generate_aw_info.cjs');
             await main(core, context);
       - name: Validate COPILOT_GITHUB_TOKEN secret
         id: validate-secret
         run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
         env:
           COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
       - name: Checkout .github and .agents folders
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           persist-credentials: false
           sparse-checkout: |
             .github
             .agents
           sparse-checkout-cone-mode: true
           fetch-depth: 1
           persist-credentials: false
       - name: Check workflow file timestamps
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         env:
@@ -85,42 +117,19 @@ jobs:
           GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
         run: |
           bash /opt/gh-aw/actions/create_prompt_first.sh
-          cat << 'GH_AW_PROMPT_EOF' > "$GH_AW_PROMPT"
+          {
+          cat << 'GH_AW_PROMPT_EOF'
           <system>
           GH_AW_PROMPT_EOF
-          cat "/opt/gh-aw/prompts/xpia.md" >> "$GH_AW_PROMPT"
-          cat "/opt/gh-aw/prompts/temp_folder_prompt.md" >> "$GH_AW_PROMPT"
-          cat "/opt/gh-aw/prompts/markdown.md" >> "$GH_AW_PROMPT"
-          cat "/opt/gh-aw/prompts/cache_memory_prompt.md" >> "$GH_AW_PROMPT"
-          cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
-          <safe-outputs>
-          <description>GitHub API Access Instructions</description>
-          <important>
-          The gh CLI is NOT authenticated. Do NOT use gh commands for GitHub operations.
-          </important>
-          <instructions>
-          To create or modify GitHub resources (issues, discussions, pull requests, etc.), you MUST call the appropriate safe output tool. Simply writing content will NOT work - the workflow requires actual tool calls.
-
-          Temporary IDs: Some safe output tools support a temporary ID field (usually named temporary_id) so you can reference newly-created items elsewhere in the SAME agent output (for example, using #aw_abc1 in a later body).
-
-          **IMPORTANT - temporary_id format rules:**
-          - If you DON'T need to reference the item later, OMIT the temporary_id field entirely (it will be auto-generated if needed)
-          - If you DO need cross-references/chaining, you MUST match this EXACT validation regex: /^aw_[A-Za-z0-9]{3,8}$/i
-          - Format: aw_ prefix followed by 3 to 8 alphanumeric characters (A-Z, a-z, 0-9, case-insensitive)
-          - Valid alphanumeric characters: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
-          - INVALID examples: aw_ab (too short), aw_123456789 (too long), aw_test-id (contains hyphen), aw_id_123 (contains underscore)
-          - VALID examples: aw_abc, aw_abc1, aw_Test123, aw_A1B2C3D4, aw_12345678
-          - To generate valid IDs: use 3-8 random alphanumeric characters or omit the field to let the system auto-generate
-
-          Do NOT invent other aw_* formats — downstream steps will reject them with validation errors matching against /^aw_[A-Za-z0-9]{3,8}$/i.
-
-          Discover available tools from the safeoutputs MCP server.
-
-          **Critical**: Tool calls write structured data that downstream jobs process. Without tool calls, follow-up actions will be skipped.
-
-          **Note**: If you made no other safe output tool calls during this workflow execution, call the "noop" tool to provide a status message indicating completion or that no actions were needed.
-          </instructions>
-          </safe-outputs>
+          cat "/opt/gh-aw/prompts/xpia.md"
+          cat "/opt/gh-aw/prompts/temp_folder_prompt.md"
+          cat "/opt/gh-aw/prompts/markdown.md"
+          cat "/opt/gh-aw/prompts/cache_memory_prompt.md"
+          cat "/opt/gh-aw/prompts/safe_outputs_prompt.md"
+          cat << 'GH_AW_PROMPT_EOF'
+          <safe-output-tools>
+          Tools: add_comment, create_discussion, missing_tool, missing_data, noop
+          </safe-output-tools>
           <github-context>
           The following GitHub context information is available for this workflow:
           {{#if __GH_AW_GITHUB_ACTOR__ }}
@@ -150,12 +159,13 @@ jobs:
           </github-context>

           GH_AW_PROMPT_EOF
-          cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
+          cat << 'GH_AW_PROMPT_EOF'
           </system>
           GH_AW_PROMPT_EOF
-          cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
+          cat << 'GH_AW_PROMPT_EOF'
           {{#runtime-import .github/workflows/issue-backlog-processor.md}}
           GH_AW_PROMPT_EOF
+          } > "$GH_AW_PROMPT"
       - name: Interpolate variables and render templates
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         env:
@@ -184,8 +194,6 @@ jobs:
           GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
           GH_AW_GITHUB_WORKFLOW: ${{ github.workflow }}
           GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
-          GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: ${{ needs.pre_activation.outputs.activated }}
-          GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: ${{ needs.pre_activation.outputs.matched_command }}
         with:
           script: |
             const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
@@ -208,9 +216,7 @@ jobs:
               GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,
               GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID,
               GH_AW_GITHUB_WORKFLOW: process.env.GH_AW_GITHUB_WORKFLOW,
-              GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE,
-              GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED,
-              GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND
+              GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE
             }
           });
       - name: Validate prompt placeholders
@@ -221,12 +227,14 @@ jobs:
         env:
           GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
         run: bash /opt/gh-aw/actions/print_prompt_summary.sh
-      - name: Upload prompt artifact
+      - name: Upload activation artifact
         if: success()
-        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
         with:
-          name: prompt
-          path: /tmp/gh-aw/aw-prompts/prompt.txt
+          name: activation
+          path: |
+            /tmp/gh-aw/aw_info.json
+            /tmp/gh-aw/aw-prompts/prompt.txt
           retention-days: 1

   agent:
@@ -247,14 +255,16 @@ jobs:
       GH_AW_WORKFLOW_ID_SANITIZED: issuebacklogprocessor
     outputs:
+      checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }}
       detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
       detection_success: ${{ steps.detection_conclusion.outputs.success }}
       has_patch: ${{ steps.collect_output.outputs.has_patch }}
-      model: ${{ steps.generate_aw_info.outputs.model }}
+      inference_access_error: ${{ steps.detect-inference-error.outputs.inference_access_error || 'false' }}
+      model: ${{ needs.activation.outputs.model }}
       output: ${{ steps.collect_output.outputs.output }}
       output_types: ${{ steps.collect_output.outputs.output_types }}
-      secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
     steps:
       - name: Setup Scripts
-        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
+        uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
         with:
           destination: /opt/gh-aw/actions
       - name: Checkout repository
@@ -267,7 +277,7 @@ jobs:
       - name: Create cache-memory directory
         run: bash /opt/gh-aw/actions/create_cache_memory_dir.sh
       - name: Restore cache-memory file share data
-        uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+        uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
         with:
           key: memory-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }}
           path: /tmp/gh-aw/cache-memory
@@ -280,6 +290,7 @@ jobs:
         run: |
           git config --global user.email "github-actions[bot]@users.noreply.github.com"
           git config --global user.name "github-actions[bot]"
+          git config --global am.keepcr true
           # Re-authenticate git with GitHub token
           SERVER_URL_STRIPPED="${SERVER_URL#https://}"
           git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
@@ -287,7 +298,7 @@ jobs:
       - name: Checkout PR branch
         id: checkout-pr
         if: |
-          github.event.pull_request
+          (github.event.pull_request) || (github.event.issue.pull_request)
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         env:
           GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -298,59 +309,10 @@ jobs:
             setupGlobals(core, github, context, exec, io);
             const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
             await main();
-      - name: Generate agentic run info
-        id: generate_aw_info
-        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
-        with:
-          script: |
-            const fs = require('fs');
-
-            const awInfo = {
-              engine_id: "copilot",
-              engine_name: "GitHub Copilot CLI",
-              model: process.env.GH_AW_MODEL_AGENT_COPILOT || "",
-              version: "",
-              agent_version: "0.0.410",
-              cli_version: "v0.45.6",
-              workflow_name: "Issue Backlog Processor",
-              experimental: false,
-              supports_tools_allowlist: true,
-              run_id: context.runId,
-              run_number: context.runNumber,
-              run_attempt: process.env.GITHUB_RUN_ATTEMPT,
-              repository: context.repo.owner + '/' + context.repo.repo,
-              ref: context.ref,
-              sha: context.sha,
-              actor: context.actor,
-              event_name: context.eventName,
-              staged: false,
-              allowed_domains: ["defaults"],
-              firewall_enabled: true,
-              awf_version: "v0.19.1",
-              awmg_version: "v0.1.4",
-              steps: {
-                firewall: "squid"
-              },
-              created_at: new Date().toISOString()
-            };
-
-            // Write to /tmp/gh-aw directory to avoid inclusion in PR
-            const tmpPath = '/tmp/gh-aw/aw_info.json';
-            fs.writeFileSync(tmpPath, JSON.stringify(awInfo, null, 2));
-            console.log('Generated aw_info.json at:', tmpPath);
-            console.log(JSON.stringify(awInfo, null, 2));
-
-            // Set model as output for reuse in other steps/jobs
-            core.setOutput('model', awInfo.model);
-      - name: Validate COPILOT_GITHUB_TOKEN secret
-        id: validate-secret
-        run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
-        env:
-          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
       - name: Install GitHub Copilot CLI
-        run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
+        run: /opt/gh-aw/actions/install_copilot_cli.sh latest
       - name: Install awf binary
-        run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.19.1
+        run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
       - name: Determine automatic lockdown mode for GitHub MCP Server
         id: determine-automatic-lockdown
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
@@ -362,7 +324,7 @@ jobs:
           const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
           await determineAutomaticLockdown(github, context, core);
       - name: Download container images
-        run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.19.1 ghcr.io/github/gh-aw-firewall/squid:0.19.1 ghcr.io/github/gh-aw-mcpg:v0.1.4 ghcr.io/github/github-mcp-server:v0.30.3 node:lts-alpine
+        run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.32.0 node:lts-alpine
       - name: Write Safe Outputs Config
         run: |
           mkdir -p /opt/gh-aw/safeoutputs
@@ -386,6 +348,14 @@ jobs:
               "description": "Discussion category by name (e.g., 'General'), slug (e.g., 'general'), or ID. If omitted, uses the first available category. Category must exist in the repository.",
               "type": "string"
             },
+            "integrity": {
+              "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+              "type": "string"
+            },
+            "secrecy": {
+              "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+              "type": "string"
+            },
             "title": {
               "description": "Concise discussion title summarizing the topic. The title appears as the main heading, so keep it brief and descriptive.",
               "type": "string"
@@ -400,7 +370,7 @@ jobs:
           "name": "create_discussion"
         },
         {
-          "description": "Add a comment to an existing GitHub issue, pull request, or discussion. Use this to provide feedback, answer questions, or add information to an existing conversation. For creating new items, use create_issue, create_discussion, or create_pull_request instead. IMPORTANT: Comments are subject to validation constraints enforced by the MCP server - maximum 65536 characters for the complete comment (including footer which is added automatically), 10 mentions (@username), and 50 links. Exceeding these limits will result in an immediate error with specific guidance. CONSTRAINTS: Maximum 20 comment(s) can be added.",
+          "description": "Add a comment to an existing GitHub issue, pull request, or discussion. Use this to provide feedback, answer questions, or add information to an existing conversation. For creating new items, use create_issue, create_discussion, or create_pull_request instead. IMPORTANT: Comments are subject to validation constraints enforced by the MCP server - maximum 65536 characters for the complete comment (including footer which is added automatically), 10 mentions (@username), and 50 links. Exceeding these limits will result in an immediate error with specific guidance. NOTE: By default, this tool requires discussions:write permission. If your GitHub App lacks Discussions permission, set 'discussions: false' in the workflow's safe-outputs.add-comment configuration to exclude this permission. CONSTRAINTS: Maximum 20 comment(s) can be added.",
           "inputSchema": {
             "additionalProperties": false,
             "properties": {
@@ -408,9 +378,25 @@ jobs:
               "description": "The comment text in Markdown format. This is the 'body' field - do not use 'comment_body' or other variations. Provide helpful, relevant information that adds value to the conversation. CONSTRAINTS: The complete comment (your body text + automatically added footer) must not exceed 65536 characters total. Maximum 10 mentions (@username), maximum 50 links (http/https URLs). A footer (~200-500 characters) is automatically appended with workflow attribution, so leave adequate space. If these limits are exceeded, the tool call will fail with a detailed error message indicating which constraint was violated.",
               "type": "string"
             },
+            "integrity": {
+              "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+              "type": "string"
+            },
             "item_number": {
-              "description": "The issue, pull request, or discussion number to comment on. This is the numeric ID from the GitHub URL (e.g., 123 in github.com/owner/repo/issues/123). If omitted, the tool will attempt to resolve the target from the current workflow context (triggering issue, PR, or discussion).",
-              "type": "number"
+              "description": "The issue, pull request, or discussion number to comment on. This is the numeric ID from the GitHub URL (e.g., 123 in github.com/owner/repo/issues/123). Can also be a temporary_id (e.g., 'aw_abc123') from a previously created issue in the same workflow run. If omitted, the tool auto-targets the issue, PR, or discussion that triggered this workflow. Auto-targeting only works for issue, pull_request, discussion, and comment event triggers — it does NOT work for schedule, workflow_dispatch, push, or workflow_run triggers. For those trigger types, always provide item_number explicitly, or the tool call will fail with an error.",
+              "type": [
+                "number",
+                "string"
+              ]
             },
+            "secrecy": {
+              "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+              "type": "string"
+            },
+            "temporary_id": {
+              "description": "Unique temporary identifier for this comment. Format: 'aw_' followed by 3 to 12 alphanumeric characters (e.g., 'aw_abc1', 'aw_Test123'). Auto-generated if not provided. The temporary ID is returned in the tool response so you can reference this comment later.",
+              "pattern": "^aw_[A-Za-z0-9]{3,12}$",
+              "type": "string"
+            }
           },
           "required": [
@@ -429,10 +415,18 @@ jobs:
               "description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
              "type": "string"
             },
+            "integrity": {
+              "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+              "type": "string"
+            },
             "reason": {
               "description": "Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).",
               "type": "string"
             },
+            "secrecy": {
+              "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+              "type": "string"
+            },
             "tool": {
               "description": "Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.",
               "type": "string"
@@ -450,9 +444,17 @@ jobs:
           "inputSchema": {
             "additionalProperties": false,
             "properties": {
+              "integrity": {
+                "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+                "type": "string"
+              },
               "message": {
                 "description": "Status or completion message to log. Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').",
                 "type": "string"
               },
+              "secrecy": {
+                "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+                "type": "string"
+              }
             },
             "required": [
@@ -479,9 +481,17 @@ jobs:
               "description": "Type or description of the missing data or information (max 128 characters). Be specific about what data is needed.",
               "type": "string"
             },
+            "integrity": {
+              "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+              "type": "string"
+            },
             "reason": {
               "description": "Explanation of why this data is needed to complete the task (max 256 characters).",
               "type": "string"
             },
+            "secrecy": {
+              "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+              "type": "string"
+            }
           },
           "required": [],
@@ -504,6 +514,10 @@ jobs:
           },
+          "item_number": {
+            "issueOrPRNumber": true
+          },
           "repo": {
             "type": "string",
             "maxLength": 256
           }
         }
       },
@@ -533,6 +547,31 @@ jobs:
           }
         }
       },
+      "missing_data": {
+        "defaultMax": 20,
+        "fields": {
+          "alternatives": {
+            "type": "string",
+            "sanitize": true,
+            "maxLength": 256
+          },
+          "context": {
+            "type": "string",
+            "sanitize": true,
+            "maxLength": 256
+          },
+          "data_type": {
+            "type": "string",
+            "sanitize": true,
+            "maxLength": 128
+          },
+          "reason": {
+            "type": "string",
+            "sanitize": true,
+            "maxLength": 256
+          }
+        }
+      },
       "missing_tool": {
         "defaultMax": 20,
         "fields": {
@@ -625,10 +664,11 @@ jobs:
           export MCP_GATEWAY_API_KEY
           export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads"
           mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}"
+          export MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD="524288"
           export DEBUG="*"

           export GH_AW_ENGINE="copilot"
-          export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.4'
+          export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.8'

           mkdir -p /home/runner/.copilot
           cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh
@@ -636,7 +676,7 @@ jobs:
             "mcpServers": {
               "github": {
                 "type": "stdio",
-                "container": "ghcr.io/github/github-mcp-server:v0.30.3",
+                "container": "ghcr.io/github/github-mcp-server:v0.32.0",
                 "env": {
                   "GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
                   "GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",
@@ -660,17 +700,11 @@ jobs:
             }
           }
           GH_AW_MCP_CONFIG_EOF
-      - name: Generate workflow overview
-        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
+      - name: Download activation artifact
+        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
         with:
-          script: |
-            const { generateWorkflowOverview } = require('/opt/gh-aw/actions/generate_workflow_overview.cjs');
-            await generateWorkflowOverview(core);
-      - name: Download prompt artifact
-        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
-        with:
-          name: prompt
-          path: /tmp/gh-aw/aw-prompts
+          name: activation
+          path: /tmp/gh-aw
       - name: Clean git credentials
         run: bash /opt/gh-aw/actions/clean_git_credentials.sh
       - name: Execute GitHub Copilot CLI
@@ -679,20 +713,37 @@ jobs:
         timeout-minutes: 60
         run: |
           set -o pipefail
-          sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.19.1 --skip-pull \
-            -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_AGENT_COPILOT:+ --model "$GH_AW_MODEL_AGENT_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
+          touch /tmp/gh-aw/agent-step-summary.md
+          # shellcheck disable=SC1003
+          sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
+            -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
         env:
           COPILOT_AGENT_RUNNER_TYPE: STANDALONE
           COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
           COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
           GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
           GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
           GH_AW_PHASE: agent
           GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
           GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
           GH_AW_VERSION: v0.57.2
           GITHUB_API_URL: ${{ github.api_url }}
           GITHUB_AW: true
           GITHUB_HEAD_REF: ${{ github.head_ref }}
           GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
           GITHUB_REF_NAME: ${{ github.ref_name }}
-          GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
           GITHUB_SERVER_URL: ${{ github.server_url }}
+          GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
           GITHUB_WORKSPACE: ${{ github.workspace }}
           GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
           GIT_AUTHOR_NAME: github-actions[bot]
           GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
           GIT_COMMITTER_NAME: github-actions[bot]
           XDG_CONFIG_HOME: /home/runner
+      - name: Detect inference access error
+        id: detect-inference-error
+        if: always()
+        continue-on-error: true
+        run: bash /opt/gh-aw/actions/detect_inference_access_error.sh
       - name: Configure Git credentials
         env:
           REPO_NAME: ${{ github.repository }}
@@ -700,6 +751,7 @@ jobs:
         run: |
           git config --global user.email "github-actions[bot]@users.noreply.github.com"
           git config --global user.name "github-actions[bot]"
+          git config --global am.keepcr true
           # Re-authenticate git with GitHub token
           SERVER_URL_STRIPPED="${SERVER_URL#https://}"
           git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
@ -745,9 +797,12 @@ jobs:
|
|||
SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
|
||||
SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
|
||||
SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
- name: Append agent step summary
|
||||
if: always()
|
||||
run: bash /opt/gh-aw/actions/append_agent_step_summary.sh
|
||||
- name: Upload Safe Outputs
|
||||
if: always()
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: safe-output
|
||||
path: ${{ env.GH_AW_SAFE_OUTPUTS }}
|
||||
|
|
@ -769,13 +824,13 @@ jobs:
|
|||
await main();
|
||||
- name: Upload sanitized agent output
|
||||
if: always() && env.GH_AW_AGENT_OUTPUT
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: agent-output
|
||||
path: ${{ env.GH_AW_AGENT_OUTPUT }}
|
||||
if-no-files-found: warn
|
||||
- name: Upload engine output files
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: agent_outputs
|
||||
path: |
|
||||
|
|
@ -818,7 +873,7 @@ jobs:
|
|||
echo 'AWF binary not installed, skipping firewall log summary'
|
||||
fi
|
||||
- name: Upload cache-memory data as artifact
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
if: always()
|
||||
with:
|
||||
name: cache-memory
|
||||
|
|
@ -826,23 +881,145 @@ jobs:
|
|||
- name: Upload agent artifacts
|
||||
if: always()
|
||||
continue-on-error: true
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: agent-artifacts
|
||||
path: |
|
||||
/tmp/gh-aw/aw-prompts/prompt.txt
|
||||
/tmp/gh-aw/aw_info.json
|
||||
/tmp/gh-aw/mcp-logs/
|
||||
/tmp/gh-aw/sandbox/firewall/logs/
|
||||
/tmp/gh-aw/agent-stdio.log
|
||||
/tmp/gh-aw/agent/
|
||||
if-no-files-found: ignore
|
||||
# --- Threat Detection (inline) ---
|
||||
      - name: Check if detection needed
        id: detection_guard
        if: always()
        env:
          OUTPUT_TYPES: ${{ steps.collect_output.outputs.output_types }}
          HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
        run: |
          if [[ -n "$OUTPUT_TYPES" || "$HAS_PATCH" == "true" ]]; then
            echo "run_detection=true" >> "$GITHUB_OUTPUT"
            echo "Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH"
          else
            echo "run_detection=false" >> "$GITHUB_OUTPUT"
            echo "Detection skipped: no agent outputs or patches to analyze"
          fi
      - name: Clear MCP configuration for detection
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        run: |
          rm -f /tmp/gh-aw/mcp-config/mcp-servers.json
          rm -f /home/runner/.copilot/mcp-config.json
          rm -f "$GITHUB_WORKSPACE/.gemini/settings.json"
      - name: Prepare threat detection files
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        run: |
          mkdir -p /tmp/gh-aw/threat-detection/aw-prompts
          cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true
          cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true
          for f in /tmp/gh-aw/aw-*.patch; do
            [ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true
          done
          echo "Prepared threat detection files:"
          ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true
      - name: Setup threat detection
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          WORKFLOW_NAME: "Issue Backlog Processor"
          WORKFLOW_DESCRIPTION: "Processes the backlog of open issues every second day, creates a discussion with findings, and comments on relevant issues"
          HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
            await main();
      - name: Ensure threat-detection directory and log
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        run: |
          mkdir -p /tmp/gh-aw/threat-detection
          touch /tmp/gh-aw/threat-detection/detection.log
      - name: Execute GitHub Copilot CLI
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        id: detection_agentic_execution
        # Copilot CLI tool arguments (sorted):
        # --allow-tool shell(cat)
        # --allow-tool shell(grep)
        # --allow-tool shell(head)
        # --allow-tool shell(jq)
        # --allow-tool shell(ls)
        # --allow-tool shell(tail)
        # --allow-tool shell(wc)
        timeout-minutes: 20
        run: |
          set -o pipefail
          touch /tmp/gh-aw/agent-step-summary.md
          # shellcheck disable=SC1003
          sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
            -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
        env:
          COPILOT_AGENT_RUNNER_TYPE: STANDALONE
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
          COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
          GH_AW_PHASE: detection
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
          GH_AW_VERSION: v0.57.2
          GITHUB_API_URL: ${{ github.api_url }}
          GITHUB_AW: true
          GITHUB_HEAD_REF: ${{ github.head_ref }}
          GITHUB_REF_NAME: ${{ github.ref_name }}
          GITHUB_SERVER_URL: ${{ github.server_url }}
          GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
          GITHUB_WORKSPACE: ${{ github.workspace }}
          GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
          GIT_AUTHOR_NAME: github-actions[bot]
          GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
          GIT_COMMITTER_NAME: github-actions[bot]
          XDG_CONFIG_HOME: /home/runner
      - name: Parse threat detection results
        id: parse_detection_results
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
            await main();
      - name: Upload threat detection log
        if: always() && steps.detection_guard.outputs.run_detection == 'true'
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
        with:
          name: threat-detection.log
          path: /tmp/gh-aw/threat-detection/detection.log
          if-no-files-found: ignore
      - name: Set detection conclusion
        id: detection_conclusion
        if: always()
        env:
          RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }}
          DETECTION_SUCCESS: ${{ steps.parse_detection_results.outputs.success }}
        run: |
          if [[ "$RUN_DETECTION" != "true" ]]; then
            echo "conclusion=skipped" >> "$GITHUB_OUTPUT"
            echo "success=true" >> "$GITHUB_OUTPUT"
            echo "Detection was not needed, marking as skipped"
          elif [[ "$DETECTION_SUCCESS" == "true" ]]; then
            echo "conclusion=success" >> "$GITHUB_OUTPUT"
            echo "success=true" >> "$GITHUB_OUTPUT"
            echo "Detection passed successfully"
          else
            echo "conclusion=failure" >> "$GITHUB_OUTPUT"
            echo "success=false" >> "$GITHUB_OUTPUT"
            echo "Detection found issues"
          fi
  conclusion:
    needs:
      - activation
      - agent
      - detection
      - safe_outputs
      - update_cache_memory
    if: (always()) && (needs.agent.result != 'skipped')

@@ -852,22 +1029,27 @@ jobs:
      discussions: write
      issues: write
      pull-requests: write
    concurrency:
      group: "gh-aw-conclusion-issue-backlog-processor"
      cancel-in-progress: false
    outputs:
      noop_message: ${{ steps.noop.outputs.noop_message }}
      tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
      total_count: ${{ steps.missing_tool.outputs.total_count }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
        uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
        with:
          destination: /opt/gh-aw/actions
      - name: Download agent output artifact
        id: download-agent-output
        continue-on-error: true
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
        with:
          name: agent-output
          path: /tmp/gh-aw/safeoutputs/
      - name: Setup agent output environment variable
        if: steps.download-agent-output.outcome == 'success'
        run: |
          mkdir -p /tmp/gh-aw/safeoutputs/
          find "/tmp/gh-aw/safeoutputs/" -type f -print

@@ -877,7 +1059,7 @@ jobs:
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_NOOP_MAX: 1
          GH_AW_NOOP_MAX: "1"
          GH_AW_WORKFLOW_NAME: "Issue Backlog Processor"
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}

@@ -908,10 +1090,14 @@ jobs:
          GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
          GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
          GH_AW_WORKFLOW_ID: "issue-backlog-processor"
          GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.agent.outputs.secret_verification_result }}
          GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }}
          GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}
          GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }}
          GH_AW_CREATE_DISCUSSION_ERRORS: ${{ needs.safe_outputs.outputs.create_discussion_errors }}
          GH_AW_CREATE_DISCUSSION_ERROR_COUNT: ${{ needs.safe_outputs.outputs.create_discussion_error_count }}
          GH_AW_GROUP_REPORTS: "false"
          GH_AW_FAILURE_REPORT_AS_ISSUE: "true"
          GH_AW_TIMEOUT_MINUTES: "60"
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |

@@ -928,7 +1114,7 @@ jobs:
          GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
          GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
          GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
          GH_AW_NOOP_REPORT_AS_ISSUE: "true"
          GH_AW_NOOP_REPORT_AS_ISSUE: "false"
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |

@@ -937,112 +1123,9 @@ jobs:
            const { main } = require('/opt/gh-aw/actions/handle_noop_message.cjs');
            await main();
  detection:
    needs: agent
    if: needs.agent.outputs.output_types != '' || needs.agent.outputs.has_patch == 'true'
    runs-on: ubuntu-latest
    permissions: {}
    concurrency:
      group: "gh-aw-copilot-${{ github.workflow }}"
    timeout-minutes: 10
    outputs:
      success: ${{ steps.parse_results.outputs.success }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
        with:
          destination: /opt/gh-aw/actions
      - name: Download agent artifacts
        continue-on-error: true
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        with:
          name: agent-artifacts
          path: /tmp/gh-aw/threat-detection/
      - name: Download agent output artifact
        continue-on-error: true
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        with:
          name: agent-output
          path: /tmp/gh-aw/threat-detection/
      - name: Echo agent output types
        env:
          AGENT_OUTPUT_TYPES: ${{ needs.agent.outputs.output_types }}
        run: |
          echo "Agent output-types: $AGENT_OUTPUT_TYPES"
      - name: Setup threat detection
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          WORKFLOW_NAME: "Issue Backlog Processor"
          WORKFLOW_DESCRIPTION: "Processes the backlog of open issues every second day, creates a discussion with findings, and comments on relevant issues"
          HAS_PATCH: ${{ needs.agent.outputs.has_patch }}
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
            await main();
      - name: Ensure threat-detection directory and log
        run: |
          mkdir -p /tmp/gh-aw/threat-detection
          touch /tmp/gh-aw/threat-detection/detection.log
      - name: Validate COPILOT_GITHUB_TOKEN secret
        id: validate-secret
        run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
        env:
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
      - name: Install GitHub Copilot CLI
        run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
      - name: Execute GitHub Copilot CLI
        id: agentic_execution
        # Copilot CLI tool arguments (sorted):
        # --allow-tool shell(cat)
        # --allow-tool shell(grep)
        # --allow-tool shell(head)
        # --allow-tool shell(jq)
        # --allow-tool shell(ls)
        # --allow-tool shell(tail)
        # --allow-tool shell(wc)
        timeout-minutes: 20
        run: |
          set -o pipefail
          COPILOT_CLI_INSTRUCTION="$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"
          mkdir -p /tmp/
          mkdir -p /tmp/gh-aw/
          mkdir -p /tmp/gh-aw/agent/
          mkdir -p /tmp/gh-aw/sandbox/agent/logs/
          copilot --add-dir /tmp/ --add-dir /tmp/gh-aw/ --add-dir /tmp/gh-aw/agent/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-tool 'shell(cat)' --allow-tool 'shell(grep)' --allow-tool 'shell(head)' --allow-tool 'shell(jq)' --allow-tool 'shell(ls)' --allow-tool 'shell(tail)' --allow-tool 'shell(wc)' --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$COPILOT_CLI_INSTRUCTION"${GH_AW_MODEL_DETECTION_COPILOT:+ --model "$GH_AW_MODEL_DETECTION_COPILOT"} 2>&1 | tee /tmp/gh-aw/threat-detection/detection.log
        env:
          COPILOT_AGENT_RUNNER_TYPE: STANDALONE
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
          GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
          GITHUB_HEAD_REF: ${{ github.head_ref }}
          GITHUB_REF_NAME: ${{ github.ref_name }}
          GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
          GITHUB_WORKSPACE: ${{ github.workspace }}
          XDG_CONFIG_HOME: /home/runner
      - name: Parse threat detection results
        id: parse_results
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
            await main();
      - name: Upload threat detection log
        if: always()
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: threat-detection.log
          path: /tmp/gh-aw/threat-detection/detection.log
          if-no-files-found: ignore

  safe_outputs:
    needs:
      - agent
      - detection
    if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.detection.outputs.success == 'true')
    needs: agent
    if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.agent.outputs.detection_success == 'true')
    runs-on: ubuntu-slim
    permissions:
      contents: read

@@ -1051,26 +1134,33 @@ jobs:
      pull-requests: write
    timeout-minutes: 15
    env:
      GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/issue-backlog-processor"
      GH_AW_ENGINE_ID: "copilot"
      GH_AW_WORKFLOW_ID: "issue-backlog-processor"
      GH_AW_WORKFLOW_NAME: "Issue Backlog Processor"
    outputs:
      code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}
      code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}
      comment_id: ${{ steps.process_safe_outputs.outputs.comment_id }}
      comment_url: ${{ steps.process_safe_outputs.outputs.comment_url }}
      create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }}
      create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }}
      process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }}
      process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
        uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
        with:
          destination: /opt/gh-aw/actions
      - name: Download agent output artifact
        id: download-agent-output
        continue-on-error: true
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
        with:
          name: agent-output
          path: /tmp/gh-aw/safeoutputs/
      - name: Setup agent output environment variable
        if: steps.download-agent-output.outcome == 'success'
        run: |
          mkdir -p /tmp/gh-aw/safeoutputs/
          find "/tmp/gh-aw/safeoutputs/" -type f -print
@@ -1080,6 +1170,9 @@ jobs:
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
          GITHUB_SERVER_URL: ${{ github.server_url }}
          GITHUB_API_URL: ${{ github.api_url }}
          GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"add_comment\":{\"max\":20},\"create_discussion\":{\"category\":\"agentic workflows\",\"close_older_discussions\":true,\"expires\":168,\"fallback_to_issue\":true,\"max\":1,\"title_prefix\":\"[Issue Backlog] \"},\"missing_data\":{},\"missing_tool\":{}}"
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}

@@ -1088,27 +1181,45 @@ jobs:
          setupGlobals(core, github, context, exec, io);
          const { main } = require('/opt/gh-aw/actions/safe_output_handler_manager.cjs');
          await main();
      - name: Upload safe output items manifest
        if: always()
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
        with:
          name: safe-output-items
          path: /tmp/safe-output-items.jsonl
          if-no-files-found: warn

  update_cache_memory:
    needs:
      - agent
      - detection
    if: always() && needs.detection.outputs.success == 'true'
    needs: agent
    if: always() && needs.agent.outputs.detection_success == 'true'
    runs-on: ubuntu-latest
    permissions: {}
    env:
      GH_AW_WORKFLOW_ID_SANITIZED: issuebacklogprocessor
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
        uses: github/gh-aw/actions/setup@48d8fdfddc8cad854ac0c70ceb573f09fb8f9c9b # v0.62.5
        with:
          destination: /opt/gh-aw/actions
      - name: Download cache-memory artifact (default)
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        id: download_cache_default
        uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8
        continue-on-error: true
        with:
          name: cache-memory
          path: /tmp/gh-aw/cache-memory
      - name: Check if cache-memory folder has content (default)
        id: check_cache_default
        shell: bash
        run: |
          if [ -d "/tmp/gh-aw/cache-memory" ] && [ "$(ls -A /tmp/gh-aw/cache-memory 2>/dev/null)" ]; then
            echo "has_content=true" >> "$GITHUB_OUTPUT"
          else
            echo "has_content=false" >> "$GITHUB_OUTPUT"
          fi
      - name: Save cache-memory to cache (default)
        uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
        if: steps.check_cache_default.outputs.has_content == 'true'
        uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
        with:
          key: memory-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }}
          path: /tmp/gh-aw/cache-memory
2 .github/workflows/issue-backlog-processor.md vendored

@@ -19,6 +19,8 @@ safe-outputs:
    close-older-discussions: true
  add-comment:
    max: 20
  noop:
    report-as-issue: false
  github-token: ${{ secrets.GITHUB_TOKEN }}

timeout-minutes: 60
60 .github/workflows/mark-prs-ready-for-review.yml vendored Normal file

@@ -0,0 +1,60 @@
name: Mark Pull Requests Ready for Review

on:
  pull_request:
    types: [opened]
  workflow_dispatch:
  schedule:
    - cron: '0 0 * * *'

permissions: {}

jobs:
  mark-ready:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - name: Mark all draft pull requests ready for review
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |
            async function markReady(nodeId, number, title) {
              core.info(`Marking PR #${number} "${title}" ready for review.`);
              try {
                await github.graphql(`
                  mutation($id: ID!) {
                    markPullRequestReadyForReview(input: { pullRequestId: $id }) {
                      pullRequest { number isDraft }
                    }
                  }
                `, { id: nodeId });
              } catch (err) {
                core.warning(`Failed to mark PR #${number} ready for review: ${err.message}`);
              }
            }

            if (context.eventName === 'pull_request') {
              const pr = context.payload.pull_request;
              if (pr.draft) {
                await markReady(pr.node_id, pr.number, pr.title);
              } else {
                core.info(`PR #${pr.number} is already ready for review. Nothing to do.`);
              }
            } else {
              const pulls = await github.paginate(github.rest.pulls.list, {
                owner: context.repo.owner,
                repo: context.repo.repo,
                state: 'open',
              });

              const drafts = pulls.filter(pr => pr.draft);
              core.info(`Found ${drafts.length} draft pull request(s).`);

              for (const pr of drafts) {
                await markReady(pr.node_id, pr.number, pr.title);
              }
            }

            core.info('Done.');
File diff suppressed because it is too large

224 .github/workflows/memory-safety-report.md vendored Normal file

@@ -0,0 +1,224 @@
---
description: >
  Analyze ASan/UBSan sanitizer logs from the memory-safety workflow
  and post findings as a GitHub Discussion.

on:
  workflow_run:
    workflows: ["Memory Safety Analysis"]
    types: [completed]
    branches:
      - master
  workflow_dispatch:

timeout-minutes: 30

permissions:
  actions: read
  contents: read
  discussions: read
  issues: read
  pull-requests: read

env:
  GH_TOKEN: ${{ github.token }}

network: defaults

tools:
  cache-memory: true
  github:
    toolsets: [default, actions]
  bash: [":*"]
  glob: {}
  view: {}

safe-outputs:
  mentions: false
  allowed-github-references: []
  max-bot-mentions: 1
  create-discussion:
    title-prefix: "[Memory Safety] "
    category: "Agentic Workflows"
    close-older-discussions: true
    expires: 7
  missing-tool:
    create-issue: true
  noop:
    report-as-issue: false

steps:
  - name: Checkout repository
    uses: actions/checkout@v6.0.2
    with:
      persist-credentials: false

---

# Memory Safety Analysis Report Generator

## Job Description

Your name is ${{ github.workflow }}. You are an expert memory safety analyst for the Z3 theorem prover repository `${{ github.repository }}`. Your task is to download, analyze, and report on the results from the Memory Safety Analysis workflow, covering runtime sanitizer (ASan/UBSan) findings.

**The `gh` CLI is not authenticated inside AWF.** Use GitHub MCP tools for all GitHub API interaction. Do not use `gh run download` or any other `gh` command.

## Your Task

### 1. Download Artifacts from the Triggering Workflow Run

If triggered by `workflow_run`, the run ID is `${{ github.event.workflow_run.id }}`. If manual dispatch (empty run ID), call `github-mcp-server-actions_list` with method `list_workflow_runs` for the "Memory Safety Analysis" workflow and pick the latest completed run.

Get the artifact list and download URLs:

1. Call `github-mcp-server-actions_list` with method `list_workflow_run_artifacts` and the run ID. The run produces two artifacts: `asan-reports` and `ubsan-reports`.
2. For each artifact, call `github-mcp-server-actions_get` with method `download_workflow_run_artifact` and the artifact ID. This returns a temporary download URL.
3. Run the helper scripts to download, extract, and parse:

```bash
bash .github/scripts/fetch-artifacts.sh "$ASAN_URL" "$UBSAN_URL"
python3 .github/scripts/parse_sanitizer_reports.py /tmp/reports
```

After this, `/tmp/reports/{asan,ubsan}-reports/` contain the extracted files, `/tmp/parsed-report.json` has structured findings, and `/tmp/fetch-artifacts.log` has the download log.
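The structured findings file can then be tallied in a few lines. The sketch below is hypothetical: the exact schema of `/tmp/parsed-report.json` is an assumption (a `findings` list whose entries carry `sanitizer`, `error_type`, and `location` fields), not documented behavior of `parse_sanitizer_reports.py`.

```python
from collections import Counter

def summarize(parsed):
    # Tally findings by error type and by sanitizer.
    # Assumed schema: {"findings": [{"sanitizer": ..., "error_type": ..., "location": ...}]}
    findings = parsed.get("findings", [])
    by_type = Counter(f["error_type"] for f in findings)
    by_sanitizer = Counter(f["sanitizer"] for f in findings)
    return by_type, by_sanitizer

# Inline sample standing in for json.load(open("/tmp/parsed-report.json")).
sample = {
    "findings": [
        {"sanitizer": "asan", "error_type": "heap-buffer-overflow",
         "location": "src/util/vector.h:112"},
        {"sanitizer": "ubsan", "error_type": "signed-integer-overflow",
         "location": "src/math/lp/lar_solver.cpp:88"},
    ]
}
by_type, by_sanitizer = summarize(sample)
print(dict(by_type))  # → {'heap-buffer-overflow': 1, 'signed-integer-overflow': 1}
```

The same tallies feed the Executive Summary table directly.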
### 2. Analyze Sanitizer Reports

Read `/tmp/parsed-report.json` for structured data. Also inspect the raw files if needed:

```bash
# Check ASan results
if [ -d /tmp/reports/asan-reports ]; then
  cat /tmp/reports/asan-reports/summary.md
  ls /tmp/reports/asan-reports/
fi

# Check UBSan results
if [ -d /tmp/reports/ubsan-reports ]; then
  cat /tmp/reports/ubsan-reports/summary.md
  ls /tmp/reports/ubsan-reports/
fi
```

For each sanitizer finding, extract:
- **Error type** (heap-buffer-overflow, heap-use-after-free, stack-buffer-overflow, signed-integer-overflow, null-pointer-dereference, etc.)
- **Source location** (file, line, column)
- **Stack trace** (first 5 frames)
- **Allocation/deallocation site** (for memory errors)
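When a field is missing from the structured report, the raw logs can be scraped directly. A minimal sketch; the regexes assume the conventional ASan report shape (an `ERROR: AddressSanitizer:` header plus `#N 0x... in func file:line` frames) and are not part of this repository's scripts:

```python
import re

# Header line: "==PID==ERROR: AddressSanitizer: <error-type> on address ..."
ASAN_HEADER = re.compile(r"ERROR: AddressSanitizer: (?P<error>[\w-]+) on address")
# Frame line: "    #0 0x4f2a10 in func file:line"
FRAME = re.compile(r"#(?P<n>\d+) 0x[0-9a-f]+ in (?P<func>\S+) (?P<file>[^\s:]+):(?P<line>\d+)")

def extract(report_text, max_frames=5):
    m = ASAN_HEADER.search(report_text)
    frames = [f.groupdict() for f in FRAME.finditer(report_text)][:max_frames]
    return {"error_type": m.group("error") if m else None, "frames": frames}

# Hypothetical log snippet; the Z3 file names here are illustrative only.
demo = """==12345==ERROR: AddressSanitizer: heap-use-after-free on address 0x602000000010
    #0 0x4f2a10 in expr_ref::get src/ast/ast.cpp:421
    #1 0x4f30b2 in rewriter::mk_app src/ast/rewriter/rewriter.cpp:77
"""
info = extract(demo)
print(info["error_type"], len(info["frames"]))  # → heap-use-after-free 2
```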
### 3. Compare with Previous Results

Check cache memory for previous run results:
- Total findings from last run (ASan + UBSan)
- List of previously known issues
- Identify new findings (regressions) vs. resolved findings (improvements)
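The regression/improvement split above is a plain set difference over stable finding keys. A sketch, where the `(error_type, location)` tuple is an assumed identifier; any stable key derivable from the parsed report works:

```python
def diff_findings(previous, current):
    """Classify findings as new, resolved, or unchanged."""
    prev, curr = set(previous), set(current)
    return {
        "new": sorted(curr - prev),        # regressions
        "resolved": sorted(prev - curr),   # improvements
        "unchanged": sorted(prev & curr),
    }

# Illustrative keys; the file:line values are hypothetical.
previous = {("heap-buffer-overflow", "src/util/vector.h:112"),
            ("signed-integer-overflow", "src/math/lp/lar_solver.cpp:88")}
current = {("heap-buffer-overflow", "src/util/vector.h:112"),
           ("null-pointer-dereference", "src/sat/sat_solver.cpp:301")}
trend = diff_findings(previous, current)
print(len(trend["new"]), len(trend["resolved"]), len(trend["unchanged"]))  # → 1 1 1
```

The three counts map one-to-one onto the Trend section of the discussion report.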
### 4. Generate the Discussion Report

Create a GitHub Discussion. Use `###` or lower for section headers, never `##` or `#`. Wrap verbose sections in `<details>` tags to keep the report scannable.

```markdown
**Date**: YYYY-MM-DD
**Commit**: `<short SHA>` ([full_sha](link)) on branch `<branch>`
**Commit message**: first line of commit message
**Triggered by**: push / workflow_dispatch (Memory Safety Analysis run [#<run_id>](link))
**Report run**: [#<run_id>](link)

### Executive Summary

| Category | ASan | UBSan | Total |
|----------|------|-------|-------|
| Buffer Overflow | Y | - | Z |
| Use-After-Free | Y | - | Z |
| Double-Free | Y | - | Z |
| Null Dereference | - | - | Z |
| Integer Overflow | - | Y | Z |
| Undefined Behavior | - | Y | Z |
| Other | Y | Z | Z |
| **Total** | **Y** | **Z** | **N** |

### Trend

- New findings since last run: N
- Resolved since last run: N
- Unchanged: N

### Critical Findings (Immediate Action Needed)

[List any high-severity findings: buffer overflows, use-after-free, double-free]

### Important Findings (Should Fix)

[List medium-severity findings: null dereferences, integer overflows]

### Low-Severity / Informational

[List warnings and potential issues]

<details>
<summary><b>ASan Findings</b></summary>

[Each finding with error type, location, and stack trace snippet]

</details>

<details>
<summary><b>UBSan Findings</b></summary>

[Each finding with error type, location, and explanation]

</details>

### Top Affected Files

| File | Findings |
|------|----------|
| src/... | N |

### Known Suppressions

[List from parsed-report.json suppressions field]

### Recommendations

1. [Actionable recommendations based on the findings]
2. [Patterns to address]

<details>
<summary><b>Raw Data</b></summary>

[Compressed summary of all data for future reference]

</details>
```

If there are zero findings across all tools, create a discussion noting a clean run, with the commit and workflow run link.

### 5. Update Cache Memory

Store the current run's results in cache memory for future comparison:

- Total count by category
- List of file:line pairs with findings
- Run metadata (commit SHA, date, run ID)

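A minimal sketch of the cache record. The `cache-memory` path, file name, and JSON field names are assumptions for illustration, not a fixed contract; the sample finding is demo data.

```shell
# Persist this run's results so the next run can compute regressions/improvements.
mkdir -p cache-memory
cat > cache-memory/memory-safety-latest.json <<EOF
{
  "commit": "${GITHUB_SHA:-deadbeef}",
  "date": "$(date -u +%Y-%m-%d)",
  "run_id": "${GITHUB_RUN_ID:-0}",
  "totals": { "asan": 0, "ubsan": 0 },
  "findings": ["src/a.cpp:10 heap-buffer-overflow"]
}
EOF
```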
### 6. Handle Edge Cases

- If the triggering workflow failed entirely, report that analysis could not complete and include any partial results.
- If no artifacts are available, report that and suggest running the workflow manually.
- If the helper scripts fail, report the error in the discussion body and stop.

## Guidelines

- Be thorough: analyze every available artifact and log file.
- Be accurate: distinguish between ASan and UBSan findings.
- Be actionable: for each finding, include enough context to locate and understand the issue.
- Track trends: use cache memory to identify regressions and improvements over time.
- Prioritize: critical memory safety issues (buffer overflows, use-after-free, double-free) should be prominently highlighted.

## Important Notes

- DO NOT create pull requests or modify source files.
- DO NOT attempt to fix the findings automatically.
- DO close older Memory Safety discussions automatically (configured via `close-older-discussions: true`).
- DO always report the commit SHA so findings can be correlated with specific code versions.
- DO use cache memory to track trends over multiple runs.

249 .github/workflows/memory-safety.yml vendored Normal file

@@ -0,0 +1,249 @@
name: Memory Safety Analysis

on:
  schedule:
    - cron: '0 0 * * 1'
  workflow_dispatch:
    inputs:
      full_scan:
        description: 'Run full codebase scan (not just changed files)'
        required: false
        default: 'false'
        type: boolean

permissions:
  contents: read
  actions: read

concurrency:
  group: memory-safety-${{ github.ref }}
  cancel-in-progress: true

jobs:
  # ============================================================================
  # Job 1: AddressSanitizer Build and Tests
  # ============================================================================
  asan-test:
    name: "ASan Build & Test"
    runs-on: ubuntu-latest
    timeout-minutes: 120
    env:
      ASAN_OPTIONS: "detect_leaks=1:halt_on_error=0:print_stats=1:log_path=/tmp/asan"
    steps:
      - name: Checkout repository
        uses: actions/checkout@v6.0.2

      - name: Setup Python
        uses: actions/setup-python@v6
        with:
          python-version: '3.x'

      - name: Install dependencies
        run: sudo apt-get update && sudo apt-get install -y ninja-build clang

      - name: Configure with ASan
        run: |
          mkdir -p build-asan
          cd build-asan
          CC=clang CXX=clang++ cmake \
            -DCMAKE_BUILD_TYPE=Debug \
            -DCMAKE_C_FLAGS="-fsanitize=address -fno-omit-frame-pointer -fno-optimize-sibling-calls" \
            -DCMAKE_CXX_FLAGS="-fsanitize=address -fno-omit-frame-pointer -fno-optimize-sibling-calls" \
            -DCMAKE_EXE_LINKER_FLAGS="-fsanitize=address" \
            -DCMAKE_SHARED_LINKER_FLAGS="-fsanitize=address" \
            -G Ninja ../

      - name: Build Z3 with ASan
        run: |
          cd build-asan
          ninja -j$(nproc)
          ninja test-z3

      - name: Run unit tests under ASan
        run: |
          cd build-asan
          ./test-z3 -a 2>&1 | tee /tmp/asan-unit-test.log
        continue-on-error: true

      - name: Run SMT-LIB2 benchmarks under ASan
        run: |
          cd build-asan
          for f in ../examples/SMT-LIB2/bounded\ model\ checking/*.smt2; do
            echo "=== Testing: $f ==="
            timeout 60 ./z3 "$f" 2>&1 || true
          done | tee /tmp/asan-benchmark.log
        continue-on-error: true

      - name: Run regression tests under ASan
        run: |
          git clone --depth=1 https://github.com/z3prover/z3test z3test
          python z3test/scripts/test_benchmarks.py build-asan/z3 z3test/regressions/smt2 2>&1 | tee /tmp/asan-regression.log
        continue-on-error: true

      - name: Collect ASan reports
        if: always()
        run: |
          mkdir -p /tmp/asan-reports
          cp /tmp/asan* /tmp/asan-reports/ 2>/dev/null || true
          if ls /tmp/asan.* 1>/dev/null 2>&1; then
            cp /tmp/asan.* /tmp/asan-reports/
          fi
          echo "# ASan Summary" > /tmp/asan-reports/summary.md
          echo "" >> /tmp/asan-reports/summary.md
          if ls /tmp/asan-reports/asan.* 1>/dev/null 2>&1; then
            echo "## Errors Found" >> /tmp/asan-reports/summary.md
            for f in /tmp/asan-reports/asan.*; do
              echo '```' >> /tmp/asan-reports/summary.md
              head -50 "$f" >> /tmp/asan-reports/summary.md
              echo '```' >> /tmp/asan-reports/summary.md
              echo "" >> /tmp/asan-reports/summary.md
            done
          else
            echo "No ASan errors detected." >> /tmp/asan-reports/summary.md
          fi

      - name: Upload ASan reports
        if: always()
        uses: actions/upload-artifact@v7
        with:
          name: asan-reports
          path: /tmp/asan-reports/
          retention-days: 30

  # ============================================================================
  # Job 2: UndefinedBehaviorSanitizer Build and Tests
  # ============================================================================
  ubsan-test:
    name: "UBSan Build & Test"
    runs-on: ubuntu-latest
    timeout-minutes: 120
    env:
      UBSAN_OPTIONS: "print_stacktrace=1:halt_on_error=0:log_path=/tmp/ubsan"
    steps:
      - name: Checkout repository
        uses: actions/checkout@v6.0.2

      - name: Setup Python
        uses: actions/setup-python@v6
        with:
          python-version: '3.x'

      - name: Install dependencies
        run: sudo apt-get update && sudo apt-get install -y ninja-build clang

      - name: Configure with UBSan
        run: |
          mkdir -p build-ubsan
          cd build-ubsan
          CC=clang CXX=clang++ cmake \
            -DCMAKE_BUILD_TYPE=Debug \
            -DCMAKE_C_FLAGS="-fsanitize=undefined -fno-omit-frame-pointer -fsanitize-recover=all" \
            -DCMAKE_CXX_FLAGS="-fsanitize=undefined -fno-omit-frame-pointer -fsanitize-recover=all" \
            -DCMAKE_EXE_LINKER_FLAGS="-fsanitize=undefined" \
            -DCMAKE_SHARED_LINKER_FLAGS="-fsanitize=undefined" \
            -G Ninja ../

      - name: Build Z3 with UBSan
        run: |
          cd build-ubsan
          ninja -j$(nproc)
          ninja test-z3

      - name: Run unit tests under UBSan
        run: |
          cd build-ubsan
          ./test-z3 -a 2>&1 | tee /tmp/ubsan-unit-test.log
        continue-on-error: true

      - name: Run SMT-LIB2 benchmarks under UBSan
        run: |
          cd build-ubsan
          for f in ../examples/SMT-LIB2/bounded\ model\ checking/*.smt2; do
            echo "=== Testing: $f ==="
            timeout 60 ./z3 "$f" 2>&1 || true
          done | tee /tmp/ubsan-benchmark.log
        continue-on-error: true

      - name: Run regression tests under UBSan
        run: |
          git clone --depth=1 https://github.com/z3prover/z3test z3test
          python z3test/scripts/test_benchmarks.py build-ubsan/z3 z3test/regressions/smt2 2>&1 | tee /tmp/ubsan-regression.log
        continue-on-error: true

      - name: Collect UBSan reports
        if: always()
        run: |
          mkdir -p /tmp/ubsan-reports
          cp /tmp/ubsan* /tmp/ubsan-reports/ 2>/dev/null || true
          if ls /tmp/ubsan.* 1>/dev/null 2>&1; then
            cp /tmp/ubsan.* /tmp/ubsan-reports/
          fi
          echo "# UBSan Summary" > /tmp/ubsan-reports/summary.md
          echo "" >> /tmp/ubsan-reports/summary.md
          if ls /tmp/ubsan-reports/ubsan.* 1>/dev/null 2>&1; then
            echo "## Errors Found" >> /tmp/ubsan-reports/summary.md
            for f in /tmp/ubsan-reports/ubsan.*; do
              echo '```' >> /tmp/ubsan-reports/summary.md
              head -50 "$f" >> /tmp/ubsan-reports/summary.md
              echo '```' >> /tmp/ubsan-reports/summary.md
              echo "" >> /tmp/ubsan-reports/summary.md
            done
          else
            echo "No UBSan errors detected." >> /tmp/ubsan-reports/summary.md
          fi

      - name: Upload UBSan reports
        if: always()
        uses: actions/upload-artifact@v7
        with:
          name: ubsan-reports
          path: /tmp/ubsan-reports/
          retention-days: 30

  # ============================================================================
  # Job 3: Summary Report
  # ============================================================================
  summary:
    name: "Memory Safety Summary"
    runs-on: ubuntu-latest
    needs: [asan-test, ubsan-test]
    if: always()
    steps:
      - name: Download all artifacts
        uses: actions/download-artifact@v8.0.1
        with:
          path: reports/

      - name: Generate summary
        run: |
          echo "# Memory Safety Analysis Report" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "**Commit**: \`${{ github.sha }}\`" >> $GITHUB_STEP_SUMMARY
          echo "**Branch**: \`${{ github.ref_name }}\`" >> $GITHUB_STEP_SUMMARY
          echo "**Trigger**: \`${{ github.event_name }}\`" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY

          echo "## Job Results" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "| Analysis | Status |" >> $GITHUB_STEP_SUMMARY
          echo "|----------|--------|" >> $GITHUB_STEP_SUMMARY
          echo "| AddressSanitizer | \`${{ needs.asan-test.result }}\` |" >> $GITHUB_STEP_SUMMARY
          echo "| UndefinedBehaviorSanitizer | \`${{ needs.ubsan-test.result }}\` |" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY

          if [ -f reports/asan-reports/summary.md ]; then
            echo "## ASan Results" >> $GITHUB_STEP_SUMMARY
            cat reports/asan-reports/summary.md >> $GITHUB_STEP_SUMMARY
            echo "" >> $GITHUB_STEP_SUMMARY
          fi

          if [ -f reports/ubsan-reports/summary.md ]; then
            echo "## UBSan Results" >> $GITHUB_STEP_SUMMARY
            cat reports/ubsan-reports/summary.md >> $GITHUB_STEP_SUMMARY
            echo "" >> $GITHUB_STEP_SUMMARY
          fi

          echo "## Artifacts" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "- Sanitizer logs are available as workflow artifacts" >> $GITHUB_STEP_SUMMARY
          echo "- Run with \`workflow_dispatch\` and \`full_scan: true\` for complete codebase analysis" >> $GITHUB_STEP_SUMMARY
55 .github/workflows/nightly-validation.yml vendored

@@ -165,23 +165,6 @@ jobs:
          cd test-nuget
          dotnet new console
          dotnet add package Microsoft.Z3 --source ../nuget-packages --prerelease
-          # Configure project to properly load native dependencies on macOS x64
-          # Use AnyCPU without RuntimeIdentifier to avoid architecture mismatch
-          # The .NET runtime will automatically select the correct native library from runtimes/osx-x64/native/
-          cat > test-nuget.csproj << 'CSPROJ'
-          <Project Sdk="Microsoft.NET.Sdk">
-            <PropertyGroup>
-              <OutputType>Exe</OutputType>
-              <TargetFramework>net8.0</TargetFramework>
-              <ImplicitUsings>enable</ImplicitUsings>
-              <Nullable>enable</Nullable>
-              <PlatformTarget>AnyCPU</PlatformTarget>
-            </PropertyGroup>
-            <ItemGroup>
-              <PackageReference Include="Microsoft.Z3" Version="*" />
-            </ItemGroup>
-          </Project>
-          CSPROJ

      - name: Create test code
        run: |
@@ -237,23 +220,6 @@ jobs:
          cd test-nuget
          dotnet new console
          dotnet add package Microsoft.Z3 --source ../nuget-packages --prerelease
-          # Configure project to properly load native dependencies on macOS ARM64
-          # Use AnyCPU without RuntimeIdentifier to avoid architecture mismatch
-          # The .NET runtime will automatically select the correct native library from runtimes/osx-arm64/native/
-          cat > test-nuget.csproj << 'CSPROJ'
-          <Project Sdk="Microsoft.NET.Sdk">
-            <PropertyGroup>
-              <OutputType>Exe</OutputType>
-              <TargetFramework>net8.0</TargetFramework>
-              <ImplicitUsings>enable</ImplicitUsings>
-              <Nullable>enable</Nullable>
-              <PlatformTarget>AnyCPU</PlatformTarget>
-            </PropertyGroup>
-            <ItemGroup>
-              <PackageReference Include="Microsoft.Z3" Version="*" />
-            </ItemGroup>
-          </Project>
-          CSPROJ

      - name: Create test code
        run: |
@@ -806,3 +772,24 @@
            echo "✗ install_name_tool failed to update install name"
            exit 1
          fi
+
+  # ============================================================================
+  # BUILD SCRIPT UNIT TESTS
+  # ============================================================================
+
+  validate-build-script-tests:
+    name: "Validate build script unit tests"
+    runs-on: ubuntu-latest
+    if: ${{ github.event.workflow_run.conclusion == 'success' || github.event_name == 'workflow_dispatch' }}
+    timeout-minutes: 10
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v6.0.2
+
+      - name: Setup Python
+        uses: actions/setup-python@v6
+        with:
+          python-version: '3.x'
+
+      - name: Run build script unit tests
+        run: python -m unittest discover -s scripts/tests -p "test_*.py" -v
68 .github/workflows/nightly.yml vendored
@@ -46,7 +46,7 @@ jobs:
      run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk --arch=x64

    - name: Upload artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: macOsBuild
        path: dist/*.zip

@@ -69,7 +69,7 @@ jobs:
      run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk --arch=arm64

    - name: Upload artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: MacArm64
        path: dist/*.zip

@@ -89,7 +89,7 @@ jobs:
      uses: actions/checkout@v6.0.2

    - name: Download macOS x64 Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: macOsBuild
        path: artifacts

@@ -137,7 +137,7 @@ jobs:
      uses: actions/checkout@v6.0.2

    - name: Download macOS ARM64 Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: MacArm64
        path: artifacts

@@ -198,7 +198,7 @@ jobs:
      run: python z3test/scripts/test_benchmarks.py build-dist/z3 z3test/regressions/smt2

    - name: Upload artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: UbuntuBuild
        path: dist/*.zip

@@ -233,7 +233,7 @@ jobs:
        python scripts/mk_unix_dist.py --nodotnet --arch=arm64

    - name: Upload artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: UbuntuArm64
        path: dist/*.zip

@@ -269,9 +269,9 @@ jobs:
        eval $(opam config env)
        python scripts/mk_make.py --ml
        cd build
-       make -j3
-       make -j3 examples
-       make -j3 test-z3
+       make -j$(nproc)
+       make -j$(nproc) examples
+       make -j$(nproc) test-z3
        cd ..

    - name: Generate documentation

@@ -288,7 +288,7 @@ jobs:
      run: zip -r z3doc.zip doc/api

    - name: Upload artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: UbuntuDoc
        path: z3doc.zip

@@ -318,7 +318,7 @@ jobs:
      run: pip install ./src/api/python/wheelhouse/*.whl && python - <src/api/python/z3test.py z3 && python - <src/api/python/z3test.py z3num

    - name: Upload artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: ManyLinuxPythonBuildAMD64
        path: src/api/python/wheelhouse/*.whl

@@ -358,7 +358,7 @@ jobs:
      run: cd src/api/python && CC=aarch64-none-linux-gnu-gcc CXX=aarch64-none-linux-gnu-g++ AR=aarch64-none-linux-gnu-ar LD=aarch64-none-linux-gnu-ld Z3_CROSS_COMPILING=aarch64 python -m build && AUDITWHEEL_PLAT= auditwheel repair --best-plat dist/*.whl && cd ../../..

    - name: Upload artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: ManyLinuxPythonBuildArm64
        path: src/api/python/wheelhouse/*.whl

@@ -384,7 +384,7 @@ jobs:
        python scripts\mk_win_dist.py --x64-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --zip

    - name: Upload artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: WindowsBuild-x64
        path: dist/*.zip

@@ -410,7 +410,7 @@ jobs:
        python scripts\mk_win_dist.py --x86-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --zip

    - name: Upload artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: WindowsBuild-x86
        path: dist/*.zip

@@ -436,7 +436,7 @@ jobs:
        python scripts\mk_win_dist_cmake.py --arm64-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --assembly-version=${{ env.MAJOR }}.${{ env.MINOR }}.${{ env.PATCH }} --zip

    - name: Upload artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: WindowsBuild-arm64
        path: dist/arm64/*.zip

@@ -460,37 +460,37 @@ jobs:
        python-version: '3.x'

    - name: Download Win64 Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: WindowsBuild-x64
        path: package

    - name: Download Win ARM64 Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: WindowsBuild-arm64
        path: package

    - name: Download Ubuntu Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: UbuntuBuild
        path: package

    - name: Download Ubuntu ARM64 Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: UbuntuArm64
        path: package

    - name: Download macOS Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: macOsBuild
        path: package

    - name: Download macOS Arm64 Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: MacArm64
        path: package

@@ -513,7 +513,7 @@ jobs:
        nuget pack out\Microsoft.Z3.sym.nuspec -Version ${{ env.MAJOR }}.${{ env.MINOR }}.${{ env.PATCH }}.${{ github.run_number }} -OutputDirectory . -Verbosity detailed -Symbols -SymbolPackageFormat snupkg -BasePath out

    - name: Upload artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: NuGet
        path: |

@@ -535,7 +535,7 @@ jobs:
        python-version: '3.x'

    - name: Download artifacts
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: WindowsBuild-x86
        path: package

@@ -558,7 +558,7 @@ jobs:
        nuget pack out\Microsoft.Z3.x86.sym.nuspec -Version ${{ env.MAJOR }}.${{ env.MINOR }}.${{ env.PATCH }}.${{ github.run_number }} -OutputDirectory . -Verbosity detailed -Symbols -SymbolPackageFormat snupkg -BasePath out

    - name: Upload artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: NuGet32
        path: |

@@ -580,43 +580,43 @@ jobs:
        python-version: '3.x'

    - name: Download macOS x64 Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: macOsBuild
        path: artifacts

    - name: Download macOS Arm64 Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: MacArm64
        path: artifacts

    - name: Download Win64 Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: WindowsBuild-x64
        path: artifacts

    - name: Download Win32 Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: WindowsBuild-x86
        path: artifacts

    - name: Download Win ARM64 Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: WindowsBuild-arm64
        path: artifacts

    - name: Download ManyLinux AMD64 Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: ManyLinuxPythonBuildAMD64
        path: artifacts

    - name: Download ManyLinux Arm64 Build
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: ManyLinuxPythonBuildArm64
        path: artifacts

@@ -651,7 +651,7 @@ jobs:
        cp artifacts/*.whl src/api/python/dist/.

    - name: Upload artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: PythonPackages
        path: src/api/python/dist/*

@@ -684,7 +684,7 @@ jobs:
      uses: actions/checkout@v6.0.2

    - name: Download all artifacts
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        path: tmp

@@ -749,7 +749,7 @@ jobs:
      contents: read
    steps:
    - name: Download Python packages
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: PythonPackages
        path: dist
20 .github/workflows/nuget-build.yml vendored
@@ -34,7 +34,7 @@ jobs:
        python scripts\mk_win_dist.py --x64-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --assembly-version=${{ github.event.inputs.version || '4.17.0' }} --zip

    - name: Upload Windows x64 artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: windows-x64
        path: dist/*.zip

@@ -58,7 +58,7 @@ jobs:
        python scripts\mk_win_dist.py --x86-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --assembly-version=${{ github.event.inputs.version || '4.17.0' }} --zip

    - name: Upload Windows x86 artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: windows-x86
        path: dist/*.zip

@@ -82,7 +82,7 @@ jobs:
        python scripts\mk_win_dist_cmake.py --arm64-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --assembly-version=${{ github.event.inputs.version || '4.17.0' }} --zip

    - name: Upload Windows ARM64 artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: windows-arm64
        path: build-dist\arm64\dist\*.zip

@@ -103,7 +103,7 @@ jobs:
      run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk

    - name: Upload Ubuntu artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: ubuntu
        path: dist/*.zip

@@ -124,7 +124,7 @@ jobs:
      run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk

    - name: Upload macOS x64 artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: macos-x64
        path: dist/*.zip

@@ -145,7 +145,7 @@ jobs:
      run: python scripts/mk_unix_dist.py --dotnet-key=$GITHUB_WORKSPACE/resources/z3.snk --arch=arm64

    - name: Upload macOS ARM64 artifact
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: macos-arm64
        path: dist/*.zip

@@ -165,7 +165,7 @@ jobs:
        python-version: '3.x'

    - name: Download all artifacts
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        path: packages

@@ -198,7 +198,7 @@ jobs:
        nuget pack out\Microsoft.Z3.sym.nuspec -OutputDirectory . -Verbosity detailed -Symbols -SymbolPackageFormat snupkg -BasePath out

    - name: Upload NuGet package
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: nuget-x64
        path: |

@@ -220,7 +220,7 @@ jobs:
        python-version: '3.x'

    - name: Download x86 artifact
-     uses: actions/download-artifact@v7
+     uses: actions/download-artifact@v8.0.1
      with:
        name: windows-x86
        path: packages

@@ -247,7 +247,7 @@ jobs:
        nuget pack out\Microsoft.Z3.x86.sym.nuspec -OutputDirectory . -Verbosity detailed -Symbols -SymbolPackageFormat snupkg -BasePath out

    - name: Upload NuGet package
-     uses: actions/upload-artifact@v6
+     uses: actions/upload-artifact@v7
      with:
        name: nuget-x86
        path: |
4 .github/workflows/ocaml.yaml vendored
@@ -21,7 +21,7 @@ jobs:

      # Cache ccache (shared across runs)
      - name: Cache ccache
-       uses: actions/cache@v5.0.3
+       uses: actions/cache@v5.0.4
        with:
          path: ~/.ccache
          key: ${{ runner.os }}-ccache-${{ github.sha }}

@@ -30,7 +30,7 @@ jobs:

      # Cache opam (compiler + packages)
      - name: Cache opam
-       uses: actions/cache@v5.0.3
+       uses: actions/cache@v5.0.4
        with:
          path: ~/.opam
          key: ${{ runner.os }}-opam-${{ matrix.ocaml-version }}-${{ github.sha }}
|||
624 .github/workflows/specbot.lock.yml → .github/workflows/ostrich-benchmark.lock.yml generated vendored
@ -13,7 +13,7 @@
|
|||
# \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
|
||||
# \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
|
||||
#
|
||||
# This file was automatically generated by gh-aw (v0.45.6). DO NOT EDIT.
|
||||
# This file was automatically generated by gh-aw (v0.57.2). DO NOT EDIT.
|
||||
#
|
||||
# To update this file, edit the corresponding .md file and run:
|
||||
# gh aw compile
|
||||
|
|
@ -21,35 +21,22 @@
|
|||
#
|
||||
# For more information: https://github.github.com/gh-aw/introduction/overview/
|
||||
#
|
||||
# Automatically annotate code with assertions capturing class invariants, pre-conditions, and post-conditions using LLM-based specification mining
|
||||
# Run Z3 string solver benchmarks (seq vs nseq) and ZIPT on all Ostrich benchmarks from tests/ostrich.zip on the c3 branch and post results as a GitHub discussion
|
||||
#
|
||||
# gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"375828e8a6e53eff88da442a8f8ab3894d7977dc514fce1046ff05bb53acc1b9"}
|
||||
# gh-aw-metadata: {"schema_version":"v2","frontmatter_hash":"5da2ead1263e4a6b19d8bab174217a23a5312abe581843899042fffc18e9858f","compiler_version":"v0.57.2","strict":true}
|
||||
|
||||
name: "Specbot"
|
||||
name: "Ostrich Benchmark: Z3 c3 branch vs ZIPT"
|
||||
"on":
|
||||
schedule:
|
||||
- cron: "3 7 * * 4"
|
||||
# Friendly format: weekly (scattered)
|
||||
- cron: "0 6 * * *"
|
||||
workflow_dispatch:
|
||||
inputs:
|
||||
target_class:
|
||||
default: ""
|
||||
description: Specific class name to analyze (optional)
|
||||
required: false
|
||||
target_path:
|
||||
default: ""
|
||||
description: Target directory or file to analyze (e.g., src/ast/, src/smt/smt_context.cpp)
|
||||
required: false
|
||||
|
||||
permissions: {}
|
||||
|
||||
concurrency:
|
||||
group: "gh-aw-${{ github.workflow }}"
|
||||
|
||||
run-name: "Specbot"
|
||||
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.BOT_PAT }}
|
||||
run-name: "Ostrich Benchmark: Z3 c3 branch vs ZIPT"
|
||||
|
||||
jobs:
|
||||
activation:
|
||||
|
|
@@ -59,23 +46,55 @@ jobs:
outputs:
comment_id: ""
comment_repo: ""
model: ${{ steps.generate_aw_info.outputs.model }}
secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
with:
destination: /opt/gh-aw/actions
- name: Generate agentic run info
id: generate_aw_info
env:
GH_AW_INFO_ENGINE_ID: "copilot"
GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_INFO_VERSION: ""
GH_AW_INFO_AGENT_VERSION: "latest"
GH_AW_INFO_CLI_VERSION: "v0.57.2"
GH_AW_INFO_WORKFLOW_NAME: "Ostrich Benchmark: Z3 c3 branch vs ZIPT"
GH_AW_INFO_EXPERIMENTAL: "false"
GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"
GH_AW_INFO_STAGED: "false"
GH_AW_INFO_ALLOWED_DOMAINS: '["defaults"]'
GH_AW_INFO_FIREWALL_ENABLED: "true"
GH_AW_INFO_AWF_VERSION: "v0.23.0"
GH_AW_INFO_AWMG_VERSION: ""
GH_AW_INFO_FIREWALL_TYPE: "squid"
GH_AW_COMPILED_STRICT: "true"
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { main } = require('/opt/gh-aw/actions/generate_aw_info.cjs');
await main(core, context);
- name: Validate COPILOT_GITHUB_TOKEN secret
id: validate-secret
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
env:
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
- name: Checkout .github and .agents folders
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
sparse-checkout: |
.github
.agents
sparse-checkout-cone-mode: true
fetch-depth: 1
persist-credentials: false
- name: Check workflow file timestamps
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_WORKFLOW_FILE: "specbot.lock.yml"
GH_AW_WORKFLOW_FILE: "ostrich-benchmark.lock.yml"
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
@@ -96,41 +115,18 @@ jobs:
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
run: |
bash /opt/gh-aw/actions/create_prompt_first.sh
cat << 'GH_AW_PROMPT_EOF' > "$GH_AW_PROMPT"
{
cat << 'GH_AW_PROMPT_EOF'
<system>
GH_AW_PROMPT_EOF
cat "/opt/gh-aw/prompts/xpia.md" >> "$GH_AW_PROMPT"
cat "/opt/gh-aw/prompts/temp_folder_prompt.md" >> "$GH_AW_PROMPT"
cat "/opt/gh-aw/prompts/markdown.md" >> "$GH_AW_PROMPT"
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
<safe-outputs>
<description>GitHub API Access Instructions</description>
<important>
The gh CLI is NOT authenticated. Do NOT use gh commands for GitHub operations.
</important>
<instructions>
To create or modify GitHub resources (issues, discussions, pull requests, etc.), you MUST call the appropriate safe output tool. Simply writing content will NOT work - the workflow requires actual tool calls.

Temporary IDs: Some safe output tools support a temporary ID field (usually named temporary_id) so you can reference newly-created items elsewhere in the SAME agent output (for example, using #aw_abc1 in a later body).

**IMPORTANT - temporary_id format rules:**
- If you DON'T need to reference the item later, OMIT the temporary_id field entirely (it will be auto-generated if needed)
- If you DO need cross-references/chaining, you MUST match this EXACT validation regex: /^aw_[A-Za-z0-9]{3,8}$/i
- Format: aw_ prefix followed by 3 to 8 alphanumeric characters (A-Z, a-z, 0-9, case-insensitive)
- Valid alphanumeric characters: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
- INVALID examples: aw_ab (too short), aw_123456789 (too long), aw_test-id (contains hyphen), aw_id_123 (contains underscore)
- VALID examples: aw_abc, aw_abc1, aw_Test123, aw_A1B2C3D4, aw_12345678
- To generate valid IDs: use 3-8 random alphanumeric characters or omit the field to let the system auto-generate

Do NOT invent other aw_* formats — downstream steps will reject them with validation errors matching against /^aw_[A-Za-z0-9]{3,8}$/i.

Discover available tools from the safeoutputs MCP server.

**Critical**: Tool calls write structured data that downstream jobs process. Without tool calls, follow-up actions will be skipped.

**Note**: If you made no other safe output tool calls during this workflow execution, call the "noop" tool to provide a status message indicating completion or that no actions were needed.
</instructions>
</safe-outputs>
cat "/opt/gh-aw/prompts/xpia.md"
cat "/opt/gh-aw/prompts/temp_folder_prompt.md"
cat "/opt/gh-aw/prompts/markdown.md"
cat "/opt/gh-aw/prompts/safe_outputs_prompt.md"
cat << 'GH_AW_PROMPT_EOF'
<safe-output-tools>
Tools: create_discussion, missing_tool, missing_data, noop
</safe-output-tools>
<github-context>
The following GitHub context information is available for this workflow:
{{#if __GH_AW_GITHUB_ACTOR__ }}
@@ -160,16 +156,19 @@ jobs:
</github-context>

GH_AW_PROMPT_EOF
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
cat << 'GH_AW_PROMPT_EOF'
</system>
GH_AW_PROMPT_EOF
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
{{#runtime-import .github/workflows/specbot.md}}
cat << 'GH_AW_PROMPT_EOF'
{{#runtime-import .github/workflows/ostrich-benchmark.md}}
GH_AW_PROMPT_EOF
} > "$GH_AW_PROMPT"
- name: Interpolate variables and render templates
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
@@ -188,8 +187,6 @@ jobs:
GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: ${{ needs.pre_activation.outputs.activated }}
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: ${{ needs.pre_activation.outputs.matched_command }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
@@ -208,9 +205,7 @@ jobs:
GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER,
GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,
GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID,
GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE,
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED,
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND
GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE
}
});
- name: Validate prompt placeholders
@@ -221,21 +216,20 @@ jobs:
env:
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
run: bash /opt/gh-aw/actions/print_prompt_summary.sh
- name: Upload prompt artifact
- name: Upload activation artifact
if: success()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: prompt
path: /tmp/gh-aw/aw-prompts/prompt.txt
name: activation
path: |
/tmp/gh-aw/aw_info.json
/tmp/gh-aw/aw-prompts/prompt.txt
retention-days: 1

agent:
needs: activation
runs-on: ubuntu-latest
permissions:
contents: read
issues: read
pull-requests: read
permissions: read-all
concurrency:
group: "gh-aw-copilot-${{ github.workflow }}"
env:
@@ -247,23 +241,29 @@ jobs:
GH_AW_SAFE_OUTPUTS: /opt/gh-aw/safeoutputs/outputs.jsonl
GH_AW_SAFE_OUTPUTS_CONFIG_PATH: /opt/gh-aw/safeoutputs/config.json
GH_AW_SAFE_OUTPUTS_TOOLS_PATH: /opt/gh-aw/safeoutputs/tools.json
GH_AW_WORKFLOW_ID_SANITIZED: specbot
GH_AW_WORKFLOW_ID_SANITIZED: ostrichbenchmark
outputs:
checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }}
detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
detection_success: ${{ steps.detection_conclusion.outputs.success }}
has_patch: ${{ steps.collect_output.outputs.has_patch }}
model: ${{ steps.generate_aw_info.outputs.model }}
inference_access_error: ${{ steps.detect-inference-error.outputs.inference_access_error || 'false' }}
model: ${{ needs.activation.outputs.model }}
output: ${{ steps.collect_output.outputs.output }}
output_types: ${{ steps.collect_output.outputs.output_types }}
secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
with:
destination: /opt/gh-aw/actions
- name: Create gh-aw temp directory
run: bash /opt/gh-aw/actions/create_gh_aw_tmp_dir.sh
- name: Checkout repository
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5
- name: Checkout c3 branch
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 1
persist-credentials: false
ref: c3

- name: Configure Git credentials
env:
@@ -272,6 +272,7 @@ jobs:
run: |
git config --global user.email "github-actions[bot]@users.noreply.github.com"
git config --global user.name "github-actions[bot]"
git config --global am.keepcr true
# Re-authenticate git with GitHub token
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
@@ -279,7 +280,7 @@ jobs:
- name: Checkout PR branch
id: checkout-pr
if: |
github.event.pull_request
(github.event.pull_request) || (github.event.issue.pull_request)
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
@@ -290,59 +291,10 @@ jobs:
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
await main();
- name: Generate agentic run info
id: generate_aw_info
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const fs = require('fs');

const awInfo = {
engine_id: "copilot",
engine_name: "GitHub Copilot CLI",
model: process.env.GH_AW_MODEL_AGENT_COPILOT || "",
version: "",
agent_version: "0.0.410",
cli_version: "v0.45.6",
workflow_name: "Specbot",
experimental: false,
supports_tools_allowlist: true,
run_id: context.runId,
run_number: context.runNumber,
run_attempt: process.env.GITHUB_RUN_ATTEMPT,
repository: context.repo.owner + '/' + context.repo.repo,
ref: context.ref,
sha: context.sha,
actor: context.actor,
event_name: context.eventName,
staged: false,
allowed_domains: ["defaults"],
firewall_enabled: true,
awf_version: "v0.19.1",
awmg_version: "v0.1.4",
steps: {
firewall: "squid"
},
created_at: new Date().toISOString()
};

// Write to /tmp/gh-aw directory to avoid inclusion in PR
const tmpPath = '/tmp/gh-aw/aw_info.json';
fs.writeFileSync(tmpPath, JSON.stringify(awInfo, null, 2));
console.log('Generated aw_info.json at:', tmpPath);
console.log(JSON.stringify(awInfo, null, 2));

// Set model as output for reuse in other steps/jobs
core.setOutput('model', awInfo.model);
- name: Validate COPILOT_GITHUB_TOKEN secret
id: validate-secret
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
env:
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
- name: Install GitHub Copilot CLI
run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
run: /opt/gh-aw/actions/install_copilot_cli.sh latest
- name: Install awf binary
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.19.1
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
- name: Determine automatic lockdown mode for GitHub MCP Server
id: determine-automatic-lockdown
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
@@ -354,7 +306,7 @@ jobs:
const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
await determineAutomaticLockdown(github, context, core);
- name: Download container images
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.19.1 ghcr.io/github/gh-aw-firewall/squid:0.19.1 ghcr.io/github/gh-aw-mcpg:v0.1.4 ghcr.io/github/github-mcp-server:v0.30.3 ghcr.io/github/serena-mcp-server:latest ghcr.io/githubnext/serena-mcp-server:latest node:lts-alpine
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.32.0 node:lts-alpine
- name: Write Safe Outputs Config
run: |
mkdir -p /opt/gh-aw/safeoutputs
@@ -366,7 +318,7 @@ jobs:
cat > /opt/gh-aw/safeoutputs/tools.json << 'GH_AW_SAFE_OUTPUTS_TOOLS_EOF'
[
{
"description": "Create a GitHub discussion for announcements, Q\u0026A, reports, status updates, or community conversations. Use this for content that benefits from threaded replies, doesn't require task tracking, or serves as documentation. For actionable work items that need assignment and status tracking, use create_issue instead. CONSTRAINTS: Maximum 1 discussion(s) can be created. Title will be prefixed with \"[SpecBot] \". Discussions will be created in category \"agentic workflows\".",
"description": "Create a GitHub discussion for announcements, Q\u0026A, reports, status updates, or community conversations. Use this for content that benefits from threaded replies, doesn't require task tracking, or serves as documentation. For actionable work items that need assignment and status tracking, use create_issue instead. CONSTRAINTS: Maximum 1 discussion(s) can be created. Title will be prefixed with \"[Ostrich Benchmark] \". Discussions will be created in category \"agentic workflows\".",
"inputSchema": {
"additionalProperties": false,
"properties": {
@@ -378,6 +330,14 @@ jobs:
"description": "Discussion category by name (e.g., 'General'), slug (e.g., 'general'), or ID. If omitted, uses the first available category. Category must exist in the repository.",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
},
"title": {
"description": "Concise discussion title summarizing the topic. The title appears as the main heading, so keep it brief and descriptive.",
"type": "string"
@@ -400,10 +360,18 @@ jobs:
"description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"reason": {
"description": "Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
},
"tool": {
"description": "Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.",
"type": "string"
@@ -421,9 +389,17 @@ jobs:
"inputSchema": {
"additionalProperties": false,
"properties": {
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"message": {
"description": "Status or completion message to log. Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
}
},
"required": [
@@ -450,9 +426,17 @@ jobs:
"description": "Type or description of the missing data or information (max 128 characters). Be specific about what data is needed.",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"reason": {
"description": "Explanation of why this data is needed to complete the task (max 256 characters).",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
}
},
"required": [],
@@ -490,6 +474,31 @@ jobs:
}
}
},
"missing_data": {
"defaultMax": 20,
"fields": {
"alternatives": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"context": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"data_type": {
"type": "string",
"sanitize": true,
"maxLength": 128
},
"reason": {
"type": "string",
"sanitize": true,
"maxLength": 256
}
}
},
"missing_tool": {
"defaultMax": 20,
"fields": {
@@ -582,10 +591,11 @@ jobs:
export MCP_GATEWAY_API_KEY
export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads"
mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}"
export MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD="524288"
export DEBUG="*"

export GH_AW_ENGINE="copilot"
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.4'
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.8'

mkdir -p /home/runner/.copilot
cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh
@@ -593,7 +603,7 @@ jobs:
"mcpServers": {
"github": {
"type": "stdio",
"container": "ghcr.io/github/github-mcp-server:v0.30.3",
"container": "ghcr.io/github/github-mcp-server:v0.32.0",
"env": {
"GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
"GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",
@@ -607,14 +617,6 @@ jobs:
"headers": {
"Authorization": "\${GH_AW_SAFE_OUTPUTS_API_KEY}"
}
},
"serena": {
"type": "stdio",
"container": "ghcr.io/github/serena-mcp-server:latest",
"args": ["--network", "host"],
"entrypoint": "serena",
"entrypointArgs": ["start-mcp-server", "--context", "codex", "--project", "\${GITHUB_WORKSPACE}"],
"mounts": ["\${GITHUB_WORKSPACE}:\${GITHUB_WORKSPACE}:rw"]
}
},
"gateway": {
@@ -625,39 +627,50 @@ jobs:
}
}
GH_AW_MCP_CONFIG_EOF
- name: Generate workflow overview
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
- name: Download activation artifact
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
with:
script: |
const { generateWorkflowOverview } = require('/opt/gh-aw/actions/generate_workflow_overview.cjs');
await generateWorkflowOverview(core);
- name: Download prompt artifact
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
with:
name: prompt
path: /tmp/gh-aw/aw-prompts
name: activation
path: /tmp/gh-aw
- name: Clean git credentials
run: bash /opt/gh-aw/actions/clean_git_credentials.sh
- name: Execute GitHub Copilot CLI
id: agentic_execution
# Copilot CLI tool arguments (sorted):
timeout-minutes: 45
timeout-minutes: 180
run: |
set -o pipefail
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.19.1 --skip-pull \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --allow-all-paths --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_AGENT_COPILOT:+ --model "$GH_AW_MODEL_AGENT_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_PHASE: agent
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
GH_AW_VERSION: v0.57.2
GITHUB_API_URL: ${{ github.api_url }}
GITHUB_AW: true
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
GITHUB_WORKSPACE: ${{ github.workspace }}
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_AUTHOR_NAME: github-actions[bot]
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_COMMITTER_NAME: github-actions[bot]
XDG_CONFIG_HOME: /home/runner
- name: Detect inference access error
id: detect-inference-error
if: always()
continue-on-error: true
run: bash /opt/gh-aw/actions/detect_inference_access_error.sh
- name: Configure Git credentials
env:
REPO_NAME: ${{ github.repository }}
@@ -665,6 +678,7 @@ jobs:
run: |
git config --global user.email "github-actions[bot]@users.noreply.github.com"
git config --global user.name "github-actions[bot]"
git config --global am.keepcr true
# Re-authenticate git with GitHub token
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
@@ -710,9 +724,12 @@ jobs:
SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Append agent step summary
if: always()
run: bash /opt/gh-aw/actions/append_agent_step_summary.sh
- name: Upload Safe Outputs
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: safe-output
path: ${{ env.GH_AW_SAFE_OUTPUTS }}
@@ -734,13 +751,13 @@ jobs:
await main();
- name: Upload sanitized agent output
if: always() && env.GH_AW_AGENT_OUTPUT
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent-output
path: ${{ env.GH_AW_AGENT_OUTPUT }}
if-no-files-found: warn
- name: Upload engine output files
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent_outputs
path: |
@@ -785,23 +802,145 @@ jobs:
- name: Upload agent artifacts
if: always()
continue-on-error: true
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent-artifacts
path: |
/tmp/gh-aw/aw-prompts/prompt.txt
/tmp/gh-aw/aw_info.json
/tmp/gh-aw/mcp-logs/
/tmp/gh-aw/sandbox/firewall/logs/
/tmp/gh-aw/agent-stdio.log
/tmp/gh-aw/agent/
if-no-files-found: ignore
# --- Threat Detection (inline) ---
- name: Check if detection needed
id: detection_guard
if: always()
env:
OUTPUT_TYPES: ${{ steps.collect_output.outputs.output_types }}
HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
run: |
if [[ -n "$OUTPUT_TYPES" || "$HAS_PATCH" == "true" ]]; then
echo "run_detection=true" >> "$GITHUB_OUTPUT"
echo "Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH"
else
echo "run_detection=false" >> "$GITHUB_OUTPUT"
echo "Detection skipped: no agent outputs or patches to analyze"
fi
- name: Clear MCP configuration for detection
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
rm -f /tmp/gh-aw/mcp-config/mcp-servers.json
rm -f /home/runner/.copilot/mcp-config.json
rm -f "$GITHUB_WORKSPACE/.gemini/settings.json"
- name: Prepare threat detection files
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
mkdir -p /tmp/gh-aw/threat-detection/aw-prompts
cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true
cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true
for f in /tmp/gh-aw/aw-*.patch; do
[ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true
done
echo "Prepared threat detection files:"
ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true
- name: Setup threat detection
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
WORKFLOW_NAME: "Ostrich Benchmark: Z3 c3 branch vs ZIPT"
WORKFLOW_DESCRIPTION: "Run Z3 string solver benchmarks (seq vs nseq) and ZIPT on all Ostrich benchmarks from tests/ostrich.zip on the c3 branch and post results as a GitHub discussion"
HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
await main();
- name: Ensure threat-detection directory and log
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
mkdir -p /tmp/gh-aw/threat-detection
touch /tmp/gh-aw/threat-detection/detection.log
- name: Execute GitHub Copilot CLI
if: always() && steps.detection_guard.outputs.run_detection == 'true'
id: detection_agentic_execution
# Copilot CLI tool arguments (sorted):
# --allow-tool shell(cat)
# --allow-tool shell(grep)
# --allow-tool shell(head)
# --allow-tool shell(jq)
# --allow-tool shell(ls)
# --allow-tool shell(tail)
# --allow-tool shell(wc)
timeout-minutes: 20
run: |
set -o pipefail
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
|
||||
env:
|
||||
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
|
||||
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
|
||||
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
|
||||
GH_AW_PHASE: detection
|
||||
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
|
||||
GH_AW_VERSION: v0.57.2
|
||||
GITHUB_API_URL: ${{ github.api_url }}
|
||||
GITHUB_AW: true
|
||||
GITHUB_HEAD_REF: ${{ github.head_ref }}
|
||||
GITHUB_REF_NAME: ${{ github.ref_name }}
|
||||
GITHUB_SERVER_URL: ${{ github.server_url }}
|
||||
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
|
||||
GITHUB_WORKSPACE: ${{ github.workspace }}
|
||||
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
|
||||
GIT_AUTHOR_NAME: github-actions[bot]
|
||||
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
|
||||
GIT_COMMITTER_NAME: github-actions[bot]
|
||||
XDG_CONFIG_HOME: /home/runner
|
||||
- name: Parse threat detection results
|
||||
id: parse_detection_results
|
||||
if: always() && steps.detection_guard.outputs.run_detection == 'true'
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
with:
|
||||
script: |
|
||||
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
|
||||
setupGlobals(core, github, context, exec, io);
|
||||
const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
|
||||
await main();
|
||||
- name: Upload threat detection log
|
||||
if: always() && steps.detection_guard.outputs.run_detection == 'true'
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: threat-detection.log
|
||||
path: /tmp/gh-aw/threat-detection/detection.log
|
||||
if-no-files-found: ignore
|
||||
- name: Set detection conclusion
|
||||
id: detection_conclusion
|
||||
if: always()
|
||||
env:
|
||||
RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }}
|
||||
DETECTION_SUCCESS: ${{ steps.parse_detection_results.outputs.success }}
|
||||
run: |
|
||||
if [[ "$RUN_DETECTION" != "true" ]]; then
|
||||
echo "conclusion=skipped" >> "$GITHUB_OUTPUT"
|
||||
echo "success=true" >> "$GITHUB_OUTPUT"
|
||||
echo "Detection was not needed, marking as skipped"
|
||||
elif [[ "$DETECTION_SUCCESS" == "true" ]]; then
|
||||
echo "conclusion=success" >> "$GITHUB_OUTPUT"
|
||||
echo "success=true" >> "$GITHUB_OUTPUT"
|
||||
echo "Detection passed successfully"
|
||||
else
|
||||
echo "conclusion=failure" >> "$GITHUB_OUTPUT"
|
||||
echo "success=false" >> "$GITHUB_OUTPUT"
|
||||
echo "Detection found issues"
|
||||
fi
|
||||
|
||||
conclusion:
|
||||
needs:
|
||||
- activation
|
||||
- agent
|
||||
- detection
|
||||
- safe_outputs
|
||||
if: (always()) && (needs.agent.result != 'skipped')
|
||||
runs-on: ubuntu-slim
|
||||
|
|
@@ -809,22 +948,27 @@ jobs:
      contents: read
      discussions: write
      issues: write
    concurrency:
      group: "gh-aw-conclusion-ostrich-benchmark"
      cancel-in-progress: false
    outputs:
      noop_message: ${{ steps.noop.outputs.noop_message }}
      tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
      total_count: ${{ steps.missing_tool.outputs.total_count }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
        uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
        with:
          destination: /opt/gh-aw/actions
      - name: Download agent output artifact
        id: download-agent-output
        continue-on-error: true
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
        with:
          name: agent-output
          path: /tmp/gh-aw/safeoutputs/
      - name: Setup agent output environment variable
        if: steps.download-agent-output.outcome == 'success'
        run: |
          mkdir -p /tmp/gh-aw/safeoutputs/
          find "/tmp/gh-aw/safeoutputs/" -type f -print

@@ -834,8 +978,8 @@ jobs:
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_NOOP_MAX: 1
          GH_AW_WORKFLOW_NAME: "Specbot"
          GH_AW_NOOP_MAX: "1"
          GH_AW_WORKFLOW_NAME: "Ostrich Benchmark: Z3 c3 branch vs ZIPT"
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |

@@ -850,7 +994,7 @@ jobs:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_MISSING_TOOL_CREATE_ISSUE: "true"
          GH_AW_MISSING_TOOL_TITLE_PREFIX: "[missing tool]"
          GH_AW_WORKFLOW_NAME: "Specbot"
          GH_AW_WORKFLOW_NAME: "Ostrich Benchmark: Z3 c3 branch vs ZIPT"
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |

@@ -863,14 +1007,18 @@ jobs:
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_WORKFLOW_NAME: "Specbot"
          GH_AW_WORKFLOW_NAME: "Ostrich Benchmark: Z3 c3 branch vs ZIPT"
          GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
          GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
          GH_AW_WORKFLOW_ID: "specbot"
          GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.agent.outputs.secret_verification_result }}
          GH_AW_WORKFLOW_ID: "ostrich-benchmark"
          GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }}
          GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}
          GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }}
          GH_AW_CREATE_DISCUSSION_ERRORS: ${{ needs.safe_outputs.outputs.create_discussion_errors }}
          GH_AW_CREATE_DISCUSSION_ERROR_COUNT: ${{ needs.safe_outputs.outputs.create_discussion_error_count }}
          GH_AW_GROUP_REPORTS: "false"
          GH_AW_FAILURE_REPORT_AS_ISSUE: "true"
          GH_AW_TIMEOUT_MINUTES: "180"
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |

@@ -883,11 +1031,11 @@ jobs:
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_WORKFLOW_NAME: "Specbot"
          GH_AW_WORKFLOW_NAME: "Ostrich Benchmark: Z3 c3 branch vs ZIPT"
          GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
          GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
          GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
          GH_AW_NOOP_REPORT_AS_ISSUE: "true"
          GH_AW_NOOP_REPORT_AS_ISSUE: "false"
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |

@@ -896,112 +1044,9 @@ jobs:
            const { main } = require('/opt/gh-aw/actions/handle_noop_message.cjs');
            await main();

  detection:
    needs: agent
    if: needs.agent.outputs.output_types != '' || needs.agent.outputs.has_patch == 'true'
    runs-on: ubuntu-latest
    permissions: {}
    concurrency:
      group: "gh-aw-copilot-${{ github.workflow }}"
    timeout-minutes: 10
    outputs:
      success: ${{ steps.parse_results.outputs.success }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
        with:
          destination: /opt/gh-aw/actions
      - name: Download agent artifacts
        continue-on-error: true
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        with:
          name: agent-artifacts
          path: /tmp/gh-aw/threat-detection/
      - name: Download agent output artifact
        continue-on-error: true
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        with:
          name: agent-output
          path: /tmp/gh-aw/threat-detection/
      - name: Echo agent output types
        env:
          AGENT_OUTPUT_TYPES: ${{ needs.agent.outputs.output_types }}
        run: |
          echo "Agent output-types: $AGENT_OUTPUT_TYPES"
      - name: Setup threat detection
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          WORKFLOW_NAME: "Specbot"
          WORKFLOW_DESCRIPTION: "Automatically annotate code with assertions capturing class invariants, pre-conditions, and post-conditions using LLM-based specification mining"
          HAS_PATCH: ${{ needs.agent.outputs.has_patch }}
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
            await main();
      - name: Ensure threat-detection directory and log
        run: |
          mkdir -p /tmp/gh-aw/threat-detection
          touch /tmp/gh-aw/threat-detection/detection.log
      - name: Validate COPILOT_GITHUB_TOKEN secret
        id: validate-secret
        run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
        env:
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
      - name: Install GitHub Copilot CLI
        run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
      - name: Execute GitHub Copilot CLI
        id: agentic_execution
        # Copilot CLI tool arguments (sorted):
        # --allow-tool shell(cat)
        # --allow-tool shell(grep)
        # --allow-tool shell(head)
        # --allow-tool shell(jq)
        # --allow-tool shell(ls)
        # --allow-tool shell(tail)
        # --allow-tool shell(wc)
        timeout-minutes: 20
        run: |
          set -o pipefail
          COPILOT_CLI_INSTRUCTION="$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"
          mkdir -p /tmp/
          mkdir -p /tmp/gh-aw/
          mkdir -p /tmp/gh-aw/agent/
          mkdir -p /tmp/gh-aw/sandbox/agent/logs/
          copilot --add-dir /tmp/ --add-dir /tmp/gh-aw/ --add-dir /tmp/gh-aw/agent/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-tool 'shell(cat)' --allow-tool 'shell(grep)' --allow-tool 'shell(head)' --allow-tool 'shell(jq)' --allow-tool 'shell(ls)' --allow-tool 'shell(tail)' --allow-tool 'shell(wc)' --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$COPILOT_CLI_INSTRUCTION"${GH_AW_MODEL_DETECTION_COPILOT:+ --model "$GH_AW_MODEL_DETECTION_COPILOT"} 2>&1 | tee /tmp/gh-aw/threat-detection/detection.log
        env:
          COPILOT_AGENT_RUNNER_TYPE: STANDALONE
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
          GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
          GITHUB_HEAD_REF: ${{ github.head_ref }}
          GITHUB_REF_NAME: ${{ github.ref_name }}
          GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
          GITHUB_WORKSPACE: ${{ github.workspace }}
          XDG_CONFIG_HOME: /home/runner
      - name: Parse threat detection results
        id: parse_results
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
            await main();
      - name: Upload threat detection log
        if: always()
        uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
        with:
          name: threat-detection.log
          path: /tmp/gh-aw/threat-detection/detection.log
          if-no-files-found: ignore

  safe_outputs:
    needs:
      - agent
      - detection
    if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.detection.outputs.success == 'true')
    needs: agent
    if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.agent.outputs.detection_success == 'true')
    runs-on: ubuntu-slim
    permissions:
      contents: read

@@ -1009,26 +1054,31 @@ jobs:
      issues: write
    timeout-minutes: 15
    env:
      GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/ostrich-benchmark"
      GH_AW_ENGINE_ID: "copilot"
      GH_AW_WORKFLOW_ID: "specbot"
      GH_AW_WORKFLOW_NAME: "Specbot"
      GH_AW_WORKFLOW_ID: "ostrich-benchmark"
      GH_AW_WORKFLOW_NAME: "Ostrich Benchmark: Z3 c3 branch vs ZIPT"
    outputs:
      code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}
      code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}
      create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }}
      create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }}
      process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }}
      process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
    steps:
      - name: Setup Scripts
        uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
        uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
        with:
          destination: /opt/gh-aw/actions
      - name: Download agent output artifact
        id: download-agent-output
        continue-on-error: true
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
        with:
          name: agent-output
          path: /tmp/gh-aw/safeoutputs/
      - name: Setup agent output environment variable
        if: steps.download-agent-output.outcome == 'success'
        run: |
          mkdir -p /tmp/gh-aw/safeoutputs/
          find "/tmp/gh-aw/safeoutputs/" -type f -print

@@ -1038,7 +1088,10 @@ jobs:
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_discussion\":{\"category\":\"agentic workflows\",\"close_older_discussions\":true,\"expires\":168,\"fallback_to_issue\":true,\"max\":1,\"title_prefix\":\"[SpecBot] \"},\"missing_data\":{},\"missing_tool\":{}}"
          GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
          GITHUB_SERVER_URL: ${{ github.server_url }}
          GITHUB_API_URL: ${{ github.api_url }}
          GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_discussion\":{\"category\":\"agentic workflows\",\"close_older_discussions\":true,\"expires\":168,\"fallback_to_issue\":true,\"max\":1,\"title_prefix\":\"[Ostrich Benchmark] \"},\"missing_data\":{},\"missing_tool\":{}}"
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |

@@ -1046,4 +1099,11 @@ jobs:
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/safe_output_handler_manager.cjs');
            await main();
      - name: Upload safe output items manifest
        if: always()
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
        with:
          name: safe-output-items
          path: /tmp/safe-output-items.jsonl
          if-no-files-found: warn
404
.github/workflows/ostrich-benchmark.md
vendored
Normal file
@@ -0,0 +1,404 @@
---
description: Run Z3 string solver benchmarks (seq vs nseq) and ZIPT on all Ostrich benchmarks from tests/ostrich.zip on the c3 branch and post results as a GitHub discussion

on:
  schedule:
    - cron: "0 6 * * *"
  workflow_dispatch:

permissions: read-all

network: defaults

tools:
  bash: true
  github:
    toolsets: [default]

safe-outputs:
  create-discussion:
    title-prefix: "[Ostrich Benchmark] "
    category: "Agentic Workflows"
    close-older-discussions: true
  missing-tool:
    create-issue: true
  noop:
    report-as-issue: false

timeout-minutes: 180

steps:
  - name: Checkout c3 branch
    uses: actions/checkout@v6.0.2
    with:
      ref: c3
      fetch-depth: 1
      persist-credentials: false

---

# Ostrich Benchmark: Z3 c3 branch vs ZIPT

You are an AI agent that benchmarks Z3 string solvers (`seq` and `nseq`) and the standalone ZIPT solver on all SMT-LIB2 benchmarks from the `tests/ostrich.zip` archive on the `c3` branch, and publishes a summary report as a GitHub discussion.

## Context

- **Repository**: ${{ github.repository }}
- **Workspace**: ${{ github.workspace }}
- **Branch**: c3 (already checked out by the workflow setup step)

## Phase 1: Build Z3

Build Z3 from the checked-out `c3` branch using CMake + Ninja, including the .NET bindings required by ZIPT.

```bash
cd ${{ github.workspace }}

# Install build dependencies if missing
sudo apt-get install -y ninja-build cmake python3 zstd dotnet-sdk-8.0 unzip 2>/dev/null || true

# Configure the build in Release mode for better performance and lower memory usage
# (Release mode is sufficient for benchmarking; the workflow does not use -tr: trace flags)
mkdir -p build
cd build
cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Release -DZ3_BUILD_DOTNET_BINDINGS=ON 2>&1 | tail -20

# Build the z3 binary and .NET bindings SYNCHRONOUSLY (do NOT add & to background these commands).
# Running ninja in the background while the LLM agent is also active causes OOM and kills the
# agent process. Wait for each build command to finish before continuing.
# -j1 limits parallelism to reduce peak memory usage alongside the LLM agent process.
ninja -j1 z3 2>&1 | tail -30
ninja -j1 build_z3_dotnet_bindings 2>&1 | tail -20

# Verify the build succeeded
./z3 --version

# Locate the Microsoft.Z3.dll produced by the build
Z3_DOTNET_DLL=$(find . -name "Microsoft.Z3.dll" -not -path "*/obj/*" | head -1)
if [ -z "$Z3_DOTNET_DLL" ]; then
  echo "ERROR: Microsoft.Z3.dll not found after build"
  exit 1
fi
echo "Found Microsoft.Z3.dll at: $Z3_DOTNET_DLL"
```

If the build fails, report the error clearly and exit without proceeding.

## Phase 2a: Clone and Build ZIPT

Clone the ZIPT solver from the `parikh` branch and compile it against the Z3 .NET bindings built in Phase 1.

```bash
cd ${{ github.workspace }}

# Re-locate the Microsoft.Z3.dll if needed. Use an absolute path so the csproj
# HintPath resolves from /tmp/zipt rather than relative to the workspace.
Z3_DOTNET_DLL="${{ github.workspace }}/$(find build -name "Microsoft.Z3.dll" -not -path "*/obj/*" | head -1)"
Z3_LIB_DIR=${{ github.workspace }}/build

# Clone ZIPT (parikh branch)
git clone --depth=1 --branch parikh https://github.com/CEisenhofer/ZIPT.git /tmp/zipt

# Patch ZIPT.csproj to point at the freshly built Microsoft.Z3.dll
# (the repo has a Windows-relative hardcoded path that won't exist here)
sed -i "s|<HintPath>.*</HintPath>|<HintPath>$Z3_DOTNET_DLL</HintPath>|" /tmp/zipt/ZIPT/ZIPT.csproj

# Build ZIPT in Release mode
cd /tmp/zipt/ZIPT
dotnet build --configuration Release 2>&1 | tail -20

# Locate the built ZIPT.dll
ZIPT_DLL=$(find /tmp/zipt/ZIPT/bin/Release -name "ZIPT.dll" | head -1)
if [ -z "$ZIPT_DLL" ]; then
  echo "ERROR: ZIPT.dll not found after build"
  exit 1
fi
echo "ZIPT binary: $ZIPT_DLL"

# Make libz3.so visible to the .NET runtime at ZIPT startup
ZIPT_OUT_DIR=$(dirname "$ZIPT_DLL")
if cp "$Z3_LIB_DIR/libz3.so" "$ZIPT_OUT_DIR/" 2>/dev/null; then
  echo "Copied libz3.so to $ZIPT_OUT_DIR"
else
  echo "WARNING: could not copy libz3.so to $ZIPT_OUT_DIR — setting LD_LIBRARY_PATH fallback"
fi
export LD_LIBRARY_PATH="$Z3_LIB_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "ZIPT build complete."
```

If the ZIPT build fails, note the error in the report but continue with the Z3-only benchmark columns.

## Phase 2b: Extract Benchmark Files

Extract all SMT-LIB2 files from the `tests/ostrich.zip` archive.

```bash
cd ${{ github.workspace }}

# Extract the zip archive
mkdir -p /tmp/ostrich_benchmarks
unzip -q tests/ostrich.zip -d /tmp/ostrich_benchmarks

# List all .smt2 files
find /tmp/ostrich_benchmarks -name "*.smt2" -type f | sort > /tmp/all_ostrich_files.txt
TOTAL_FILES=$(wc -l < /tmp/all_ostrich_files.txt)
echo "Total Ostrich .smt2 files: $TOTAL_FILES"

if [ "$TOTAL_FILES" -eq 0 ]; then
  echo "ERROR: No .smt2 files found in tests/ostrich.zip"
  exit 1
fi
```

## Phase 3: Run Benchmarks

Run every file from `/tmp/all_ostrich_files.txt` with both Z3 string solvers and ZIPT. Use a **5-second timeout** per run.

For each file, run:
1. `z3 smt.string_solver=seq -T:5 <file>` — seq solver
2. `z3 smt.string_solver=nseq -T:5 <file>` — nseq (ZIPT) solver
3. `dotnet <ZIPT.dll> -t:5000 <file>` — standalone ZIPT solver (milliseconds)

Capture:
- **Verdict**: `sat`, `unsat`, `unknown`, `timeout` (if exit code indicates timeout or process is killed), or `bug` (if a solver crashes / produces a non-standard result)
- **Time** (seconds): wall-clock time for the run
- A row is flagged `SOUNDNESS_DISAGREEMENT` when any two solvers that both produced a definitive answer (sat/unsat) disagree

Use a bash script to automate this:

```bash
#!/usr/bin/env bash
# -e is deliberately omitted: timeouts and failed grep chains return nonzero
# statuses that are expected here and must not abort the whole run.
set -uo pipefail

Z3=${{ github.workspace }}/build/z3
ZIPT_DLL=$(find /tmp/zipt/ZIPT/bin/Release -name "ZIPT.dll" 2>/dev/null | head -1)
ZIPT_AVAILABLE=false
[ -n "$ZIPT_DLL" ] && ZIPT_AVAILABLE=true

# Ensure libz3.so is on the dynamic-linker path for the .NET runtime
export LD_LIBRARY_PATH=${{ github.workspace }}/build${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}

RESULTS=/tmp/benchmark_results.tsv
mkdir -p /tmp/ostrich_run

echo -e "file\tseq_verdict\tseq_time\tnseq_verdict\tnseq_time\tzipt_verdict\tzipt_time\tnotes" > "$RESULTS"

run_z3_seq() {
  local file="$1"
  local start end elapsed verdict output exit_code

  start=$(date +%s%3N)
  output=$(timeout 7 "$Z3" "smt.string_solver=seq" -T:5 "$file" 2>&1)
  exit_code=$?
  end=$(date +%s%3N)
  elapsed=$(echo "scale=3; ($end - $start) / 1000" | bc)

  if echo "$output" | grep -q "^unsat"; then
    verdict="unsat"
  elif echo "$output" | grep -q "^sat"; then
    verdict="sat"
  elif echo "$output" | grep -q "^unknown"; then
    verdict="unknown"
  elif [ "$exit_code" -eq 124 ]; then
    verdict="timeout"
  elif echo "$output" | grep -qi "error\|assertion\|segfault\|SIGABRT\|exception"; then
    verdict="bug"
  else
    verdict="unknown"
  fi

  echo "$verdict $elapsed"
}

run_z3_nseq() {
  local file="$1"
  local start end elapsed verdict output exit_code

  start=$(date +%s%3N)
  output=$(timeout 7 "$Z3" "smt.string_solver=nseq" -T:5 "$file" 2>&1)
  exit_code=$?
  end=$(date +%s%3N)
  elapsed=$(echo "scale=3; ($end - $start) / 1000" | bc)

  if echo "$output" | grep -q "^unsat"; then
    verdict="unsat"
  elif echo "$output" | grep -q "^sat"; then
    verdict="sat"
  elif echo "$output" | grep -q "^unknown"; then
    verdict="unknown"
  elif [ "$exit_code" -eq 124 ]; then
    verdict="timeout"
  elif echo "$output" | grep -qi "error\|assertion\|segfault\|SIGABRT\|exception"; then
    verdict="bug"
  else
    verdict="unknown"
  fi

  echo "$verdict $elapsed"
}

run_zipt() {
  local file="$1"
  local start end elapsed verdict output exit_code

  if [ "$ZIPT_AVAILABLE" != "true" ]; then
    echo "n/a 0.000"
    return
  fi

  start=$(date +%s%3N)
  # ZIPT prints the filename on the first line, then SAT/UNSAT/UNKNOWN on subsequent lines
  output=$(timeout 7 dotnet "$ZIPT_DLL" -t:5000 "$file" 2>&1)
  exit_code=$?
  end=$(date +%s%3N)
  elapsed=$(echo "scale=3; ($end - $start) / 1000" | bc)

  if echo "$output" | grep -qi "^UNSAT$"; then
    verdict="unsat"
  elif echo "$output" | grep -qi "^SAT$"; then
    verdict="sat"
  elif echo "$output" | grep -qi "^UNKNOWN$"; then
    verdict="unknown"
  elif [ "$exit_code" -eq 124 ]; then
    verdict="timeout"
  elif echo "$output" | grep -qi "error\|crash\|exception\|Unsupported"; then
    verdict="bug"
  else
    verdict="unknown"
  fi

  echo "$verdict $elapsed"
}

COUNTER=0
while IFS= read -r file; do
  COUNTER=$((COUNTER + 1))
  fname=$(basename "$file")

  seq_result=$(run_z3_seq "$file")
  nseq_result=$(run_z3_nseq "$file")
  zipt_result=$(run_zipt "$file")

  seq_verdict=$(echo "$seq_result" | cut -d' ' -f1)
  seq_time=$(echo "$seq_result" | cut -d' ' -f2)
  nseq_verdict=$(echo "$nseq_result" | cut -d' ' -f1)
  nseq_time=$(echo "$nseq_result" | cut -d' ' -f2)
  zipt_verdict=$(echo "$zipt_result" | cut -d' ' -f1)
  zipt_time=$(echo "$zipt_result" | cut -d' ' -f2)

  # Flag soundness disagreement when any two definitive verdicts disagree.
  # The array must be reset for every file: a bare `declare -A` does not clear
  # entries left over from the previous iteration.
  notes=""
  unset definitive_map
  declare -A definitive_map
  [ "$seq_verdict" = "sat" ] || [ "$seq_verdict" = "unsat" ] && definitive_map[seq]="$seq_verdict"
  [ "$nseq_verdict" = "sat" ] || [ "$nseq_verdict" = "unsat" ] && definitive_map[nseq]="$nseq_verdict"
  [ "$zipt_verdict" = "sat" ] || [ "$zipt_verdict" = "unsat" ] && definitive_map[zipt]="$zipt_verdict"
  has_sat=false; has_unsat=false
  for v in "${definitive_map[@]}"; do
    [ "$v" = "sat" ] && has_sat=true
    [ "$v" = "unsat" ] && has_unsat=true
  done
  if $has_sat && $has_unsat; then
    notes="SOUNDNESS_DISAGREEMENT"
  fi

  echo -e "$fname\t$seq_verdict\t$seq_time\t$nseq_verdict\t$nseq_time\t$zipt_verdict\t$zipt_time\t$notes" >> "$RESULTS"
  echo "[$COUNTER] [$fname] seq=$seq_verdict(${seq_time}s) nseq=$nseq_verdict(${nseq_time}s) zipt=$zipt_verdict(${zipt_time}s) $notes"
done < /tmp/all_ostrich_files.txt

echo "Benchmark run complete. Results saved to $RESULTS"
```

Save this script to `/tmp/run_ostrich_benchmarks.sh`, make it executable, and run it. Do not skip any file.

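Before formatting the report, the results file can be aggregated in a single pass. The following is a sketch, not part of the generated workflow: the two demo rows are made up, and the only assumption carried over from Phase 3 is the TSV header it writes. Pointing the same awk program at the real `/tmp/benchmark_results.tsv` yields the numbers for the Summary table.

```shell
# Demo input mirroring the Phase 3 TSV header (real runs read /tmp/benchmark_results.tsv)
RESULTS=$(mktemp)
printf 'file\tseq_verdict\tseq_time\tnseq_verdict\tnseq_time\tzipt_verdict\tzipt_time\tnotes\n' > "$RESULTS"
printf 'a.smt2\tsat\t0.100\tsat\t0.200\tsat\t0.300\t\n' >> "$RESULTS"
printf 'b.smt2\tunsat\t0.400\tsat\t0.500\tunsat\t0.600\tSOUNDNESS_DISAGREEMENT\n' >> "$RESULTS"

# One awk pass: per-solver verdict counts, total times, and the disagreement count
awk -F'\t' 'NR > 1 {
    seq[$2]++;  seq_t  += $3
    nseq[$4]++; nseq_t += $5
    zipt[$6]++; zipt_t += $7
    if ($8 == "SOUNDNESS_DISAGREEMENT") dis++
    total++
}
END {
    printf "total=%d disagreements=%d\n", total, dis + 0
    printf "seq: sat=%d unsat=%d time=%.3f\n",  seq["sat"],  seq["unsat"],  seq_t
    printf "nseq: sat=%d unsat=%d time=%.3f\n", nseq["sat"], nseq["unsat"], nseq_t
    printf "zipt: sat=%d unsat=%d time=%.3f\n", zipt["sat"], zipt["unsat"], zipt_t
}' "$RESULTS"
# -> total=2 disagreements=1
```

The aggregation tool is the agent's choice; awk is suggested only because it handles the tab-separated layout without extra dependencies.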
## Phase 4: Generate Summary Report
|
||||
|
||||
Read `/tmp/benchmark_results.tsv` and compute statistics. Then generate a Markdown report.
|
||||
|
||||
Compute:
|
||||
- **Total benchmarks**: total number of files run
|
||||
- **Per solver (seq, nseq, and ZIPT)**: count of sat / unsat / unknown / timeout / bug verdicts
|
||||
- **Total time used**: sum of all times for each solver
|
||||
- **Average time per benchmark**: total_time / total_files
|
||||
- **Soundness disagreements**: files where any two solvers that both returned a definitive answer disagree
|
||||
- **Bugs / crashes**: files with error/crash verdicts
|
||||
|
||||
Format the report as a GitHub Discussion post (GitHub-flavored Markdown):

```markdown
### Ostrich Benchmark Report — Z3 c3 branch

**Date**: <today's date>
**Branch**: c3
**Benchmark set**: Ostrich (all files from tests/ostrich.zip)
**Timeout**: 5 seconds per benchmark (`-T:5` for Z3; `-t:5000` for ZIPT)

---

### Summary

| Metric | seq solver | nseq solver | ZIPT solver |
|--------|-----------|-------------|-------------|
| sat | X | X | X |
| unsat | X | X | X |
| unknown | X | X | X |
| timeout | X | X | X |
| bug/crash | X | X | X |
| **Total time (s)** | X.XXX | X.XXX | X.XXX |
| **Avg time/benchmark (s)** | X.XXX | X.XXX | X.XXX |

**Soundness disagreements** (any two solvers return conflicting sat/unsat): N

---

### Per-File Results

<details>
<summary>Click to expand full per-file table</summary>

| # | File | seq verdict | seq time (s) | nseq verdict | nseq time (s) | ZIPT verdict | ZIPT time (s) | Notes |
|---|------|-------------|-------------|--------------|--------------|--------------|--------------|-------|
| 1 | benchmark_0001.smt2 | sat | 0.123 | sat | 0.456 | sat | 0.789 | |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |

</details>

---

### Notable Issues

#### Soundness Disagreements (Critical)
<list files where any two solvers disagree on sat/unsat, naming which solvers disagree>

#### Crashes / Bugs
<list files where any solver crashed or produced an error>

#### Slow Benchmarks (> 4s)
<list files that took more than 4 seconds for any solver>

---

*Generated automatically by the Ostrich Benchmark workflow on the c3 branch.*
```
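The per-file table body can be generated mechanically from the TSV; the following sketch (a hypothetical `table_rows` helper, assuming the same column order as the benchmark script's TSV) emits one Markdown row per benchmark:

```shell
#!/usr/bin/env bash
# Sketch: render the per-file Markdown table rows from the results TSV.
# Assumes columns: file, seq_verdict, seq_time, nseq_verdict, nseq_time,
# zipt_verdict, zipt_time, notes; the header row (NR == 1) is skipped.
table_rows() {
  local tsv="${1:-/tmp/benchmark_results.tsv}"
  awk -F'\t' 'NR > 1 {
    printf "| %d | %s | %s | %s | %s | %s | %s | %s | %s |\n",
           NR - 1, $1, $2, $3, $4, $5, $6, $7, $8
  }' "$tsv"
}
```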
## Phase 5: Post to GitHub Discussion

Post the Markdown report as a new GitHub Discussion using the `create-discussion` safe output.

- **Category**: "Agentic Workflows"
- **Title**: `[Ostrich Benchmark] Z3 c3 branch — <date>`
- Close older discussions with the same title prefix to avoid clutter.
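The actual posting is handled by the gh-aw `create-discussion` safe output. Purely as an illustration of the shape of such an entry, a JSON line could be appended like this; the `type` value and field names here are assumptions, not the documented gh-aw schema, so defer to the gh-aw tooling in practice:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: append one create-discussion entry as a compact JSON line.
# The schema (type/title/body) is assumed for illustration only.
report_file=/tmp/report.md
printf '### Ostrich Benchmark Report\n\nSummary goes here.\n' > "$report_file"
jq -cn --arg title "[Ostrich Benchmark] Z3 c3 branch — $(date +%F)" \
       --rawfile body "$report_file" \
       '{type: "create-discussion", title: $title, body: $body}' \
  >> "${GH_AW_SAFE_OUTPUTS:-/tmp/safe_outputs.jsonl}"
```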
## Guidelines

- **Always build from c3 branch**: The workspace is already checked out on c3; don't change branches.
- **Synchronous builds only**: Never run `ninja` (or any other build command) in the background using `&`. Running the build concurrently with LLM inference causes the agent process to be killed by the OOM killer (exit 137) because C++ compilation and the LLM together exceed available RAM. Always wait for each build command to finish before proceeding.
- **Release build**: The build uses `CMAKE_BUILD_TYPE=Release` for lower memory footprint and faster compilation on the GitHub Actions runner. The benchmark only needs verdict and timing output; no `-tr:` trace flags are used.
- **Run all benchmarks**: Unlike the QF_S workflow, run every file in the archive — do not randomly sample.
- **5-second timeout**: Pass `-T:5` to Z3 (both seq and nseq) and `-t:5000` to ZIPT (milliseconds). Use `timeout 7` as the outer OS-level guard to allow the solver to exit cleanly before being killed.
- **Be precise with timing**: Use millisecond-precision timestamps and report times in seconds with 3 decimal places.
- **Distinguish timeout from unknown**: A timeout is different from `(unknown)` returned by a solver within its time budget.
- **ZIPT output format**: ZIPT prints the input filename on the first line, then `SAT`, `UNSAT`, or `UNKNOWN` on subsequent lines. Parse accordingly.
- **Report soundness bugs prominently**: If any benchmark shows a conflict between any two solvers that both returned a definitive sat/unsat answer, highlight it as a critical finding and name which pair disagrees.
- **Handle build failures gracefully**: If Z3 fails to build, report the error and create a brief discussion noting the build failure. If ZIPT fails to build, continue with only the seq/nseq columns and note `n/a` for ZIPT results.
- **Large report**: Always put the per-file table in a `<details>` collapsible section since there may be many files.
- **Progress logging**: Print a line per file as you run it (e.g., `[N] [filename] seq=...`) so the workflow log shows progress even for large benchmark sets.
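The timeout-vs-unknown distinction and millisecond timing above can be sketched as follows; `classify_verdict` is a hypothetical helper for illustration, not part of the benchmark script:

```shell
#!/usr/bin/env bash
# Exit code 124 from `timeout` means the outer OS guard fired; otherwise the
# verdict comes from the solver's own output. Match "unsat" before "sat",
# since "sat" is a substring of "unsat" (ZIPT prints uppercase variants).
classify_verdict() {
  local exit_code="$1" output="$2"
  if [ "$exit_code" -eq 124 ]; then echo "timeout"; return; fi
  case "$output" in
    *unsat*|*UNSAT*)     echo "unsat" ;;
    *sat*|*SAT*)         echo "sat" ;;
    *unknown*|*UNKNOWN*) echo "unknown" ;;
    *)                   echo "bug" ;;
  esac
}

# Millisecond-precision wall-clock timing, reported with 3 decimal places
start=$(date +%s%3N)
out=$(timeout 7 echo sat 2>&1); ec=$?
end=$(date +%s%3N)
elapsed=$(awk -v ms=$((end - start)) 'BEGIN { printf "%.3f", ms / 1000 }')
echo "verdict=$(classify_verdict "$ec" "$out") time=${elapsed}s"
```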
100  .github/workflows/qf-s-benchmark.lock.yml  (generated, vendored)
@@ -13,7 +13,7 @@
 # \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
 #  \/  \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
 #
-# This file was automatically generated by gh-aw (v0.53.4). DO NOT EDIT.
+# This file was automatically generated by gh-aw (v0.57.2). DO NOT EDIT.
 #
 # To update this file, edit the corresponding .md file and run:
 #   gh aw compile
@@ -23,13 +23,12 @@
 #
 # Run Z3 string solver benchmarks (seq vs nseq) on QF_S test suite from the c3 branch and post results as a GitHub discussion
 #
-# gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"11e7fe880a77098e320d93169917eed62c8c0c2288cd5d3e54f9251ed6edbf7e","compiler_version":"v0.53.4"}
+# gh-aw-metadata: {"schema_version":"v2","frontmatter_hash":"d7c341a4c4224962ddf5d76ae2e39b3fc7965a5d9a7899d0674877de090be242","compiler_version":"v0.57.2","strict":true}

-name: "Qf S Benchmark"
+name: "ZIPT String Solver Benchmark"
 "on":
   schedule:
-    - cron: "16 3 * * 3"
-      # Friendly format: weekly (scattered)
+    - cron: "0 0,12 * * *"
   workflow_dispatch:

 permissions: {}
@@ -37,7 +36,7 @@ permissions: {}
 concurrency:
   group: "gh-aw-${{ github.workflow }}"

-run-name: "Qf S Benchmark"
+run-name: "ZIPT String Solver Benchmark"

 jobs:
   activation:
@@ -51,7 +50,7 @@
       secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
     steps:
       - name: Setup Scripts
-        uses: github/gh-aw/actions/setup@v0.53.4
+        uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
         with:
           destination: /opt/gh-aw/actions
       - name: Generate agentic run info
@@ -61,9 +60,9 @@
           GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
           GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
           GH_AW_INFO_VERSION: ""
-          GH_AW_INFO_AGENT_VERSION: "0.0.421"
-          GH_AW_INFO_CLI_VERSION: "v0.53.4"
-          GH_AW_INFO_WORKFLOW_NAME: "Qf S Benchmark"
+          GH_AW_INFO_AGENT_VERSION: "latest"
+          GH_AW_INFO_CLI_VERSION: "v0.57.2"
+          GH_AW_INFO_WORKFLOW_NAME: "ZIPT String Solver Benchmark"
           GH_AW_INFO_EXPERIMENTAL: "false"
           GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"
           GH_AW_INFO_STAGED: "false"
@@ -72,6 +71,7 @@
           GH_AW_INFO_AWF_VERSION: "v0.23.0"
           GH_AW_INFO_AWMG_VERSION: ""
           GH_AW_INFO_FIREWALL_TYPE: "squid"
+          GH_AW_COMPILED_STRICT: "true"
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         with:
           script: |
@@ -85,12 +85,12 @@
       - name: Checkout .github and .agents folders
         uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
-          persist-credentials: false
           sparse-checkout: |
             .github
             .agents
           sparse-checkout-cone-mode: true
           fetch-depth: 1
+          persist-credentials: false
       - name: Check workflow file timestamps
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         env:
@@ -167,6 +167,8 @@
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         env:
           GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
+          GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
+          GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
         with:
           script: |
             const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
@@ -216,7 +218,7 @@
         run: bash /opt/gh-aw/actions/print_prompt_summary.sh
       - name: Upload activation artifact
         if: success()
-        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
         with:
           name: activation
           path: |
@@ -251,13 +253,13 @@
       output_types: ${{ steps.collect_output.outputs.output_types }}
     steps:
       - name: Setup Scripts
-        uses: github/gh-aw/actions/setup@v0.53.4
+        uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
         with:
           destination: /opt/gh-aw/actions
       - name: Create gh-aw temp directory
         run: bash /opt/gh-aw/actions/create_gh_aw_tmp_dir.sh
       - name: Checkout c3 branch
-        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v5
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
         with:
           fetch-depth: 1
           persist-credentials: false
@@ -290,7 +292,7 @@
             const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
             await main();
       - name: Install GitHub Copilot CLI
-        run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.421
+        run: /opt/gh-aw/actions/install_copilot_cli.sh latest
       - name: Install awf binary
         run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
       - name: Determine automatic lockdown mode for GitHub MCP Server
@@ -304,7 +306,7 @@
             const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
             await determineAutomaticLockdown(github, context, core);
       - name: Download container images
-        run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.31.0 node:lts-alpine
+        run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.32.0 node:lts-alpine
       - name: Write Safe Outputs Config
         run: |
           mkdir -p /opt/gh-aw/safeoutputs
@@ -316,7 +318,7 @@
           cat > /opt/gh-aw/safeoutputs/tools.json << 'GH_AW_SAFE_OUTPUTS_TOOLS_EOF'
           [
             {
-              "description": "Create a GitHub discussion for announcements, Q\u0026A, reports, status updates, or community conversations. Use this for content that benefits from threaded replies, doesn't require task tracking, or serves as documentation. For actionable work items that need assignment and status tracking, use create_issue instead. CONSTRAINTS: Maximum 1 discussion(s) can be created. Title will be prefixed with \"[QF_S Benchmark] \". Discussions will be created in category \"agentic workflows\".",
+              "description": "Create a GitHub discussion for announcements, Q\u0026A, reports, status updates, or community conversations. Use this for content that benefits from threaded replies, doesn't require task tracking, or serves as documentation. For actionable work items that need assignment and status tracking, use create_issue instead. CONSTRAINTS: Maximum 1 discussion(s) can be created. Title will be prefixed with \"[ZIPT Benchmark] \". Discussions will be created in category \"agentic workflows\".",
               "inputSchema": {
                 "additionalProperties": false,
                 "properties": {
@@ -601,7 +603,7 @@
           "mcpServers": {
             "github": {
               "type": "stdio",
-              "container": "ghcr.io/github/github-mcp-server:v0.31.0",
+              "container": "ghcr.io/github/github-mcp-server:v0.32.0",
               "env": {
                 "GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
                 "GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",
@@ -626,7 +628,7 @@
           }
           GH_AW_MCP_CONFIG_EOF
       - name: Download activation artifact
-        uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8.0.0
+        uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
         with:
           name: activation
           path: /tmp/gh-aw
@@ -638,6 +640,7 @@
         timeout-minutes: 90
         run: |
           set -o pipefail
+          touch /tmp/gh-aw/agent-step-summary.md
           # shellcheck disable=SC1003
           sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
             -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
@@ -646,15 +649,22 @@
           COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
           COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
           GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
+          GH_AW_PHASE: agent
           GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
           GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
+          GH_AW_VERSION: v0.57.2
+          GITHUB_API_URL: ${{ github.api_url }}
+          GITHUB_AW: true
           GITHUB_HEAD_REF: ${{ github.head_ref }}
           GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
           GITHUB_REF_NAME: ${{ github.ref_name }}
           GITHUB_SERVER_URL: ${{ github.server_url }}
-          GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
+          GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
           GITHUB_WORKSPACE: ${{ github.workspace }}
+          GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
+          GIT_AUTHOR_NAME: github-actions[bot]
+          GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
+          GIT_COMMITTER_NAME: github-actions[bot]
           XDG_CONFIG_HOME: /home/runner
       - name: Detect inference access error
         id: detect-inference-error
@@ -714,9 +724,12 @@
           SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
           SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
           SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+      - name: Append agent step summary
+        if: always()
+        run: bash /opt/gh-aw/actions/append_agent_step_summary.sh
       - name: Upload Safe Outputs
         if: always()
-        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
         with:
           name: safe-output
           path: ${{ env.GH_AW_SAFE_OUTPUTS }}
@@ -738,13 +751,13 @@
             await main();
       - name: Upload sanitized agent output
         if: always() && env.GH_AW_AGENT_OUTPUT
-        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
         with:
           name: agent-output
           path: ${{ env.GH_AW_AGENT_OUTPUT }}
           if-no-files-found: warn
       - name: Upload engine output files
-        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
         with:
           name: agent_outputs
           path: |
@@ -789,7 +802,7 @@
       - name: Upload agent artifacts
         if: always()
         continue-on-error: true
-        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
         with:
           name: agent-artifacts
           path: |
@@ -835,7 +848,7 @@
         if: always() && steps.detection_guard.outputs.run_detection == 'true'
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         env:
-          WORKFLOW_NAME: "Qf S Benchmark"
+          WORKFLOW_NAME: "ZIPT String Solver Benchmark"
           WORKFLOW_DESCRIPTION: "Run Z3 string solver benchmarks (seq vs nseq) on QF_S test suite from the c3 branch and post results as a GitHub discussion"
           HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
         with:
@@ -863,6 +876,7 @@
         timeout-minutes: 20
         run: |
           set -o pipefail
+          touch /tmp/gh-aw/agent-step-summary.md
           # shellcheck disable=SC1003
           sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
             -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
@@ -870,13 +884,20 @@
           COPILOT_AGENT_RUNNER_TYPE: STANDALONE
           COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
           COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
+          GH_AW_PHASE: detection
           GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
+          GH_AW_VERSION: v0.57.2
+          GITHUB_API_URL: ${{ github.api_url }}
+          GITHUB_AW: true
           GITHUB_HEAD_REF: ${{ github.head_ref }}
           GITHUB_REF_NAME: ${{ github.ref_name }}
           GITHUB_SERVER_URL: ${{ github.server_url }}
-          GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
+          GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
           GITHUB_WORKSPACE: ${{ github.workspace }}
+          GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
+          GIT_AUTHOR_NAME: github-actions[bot]
+          GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
+          GIT_COMMITTER_NAME: github-actions[bot]
           XDG_CONFIG_HOME: /home/runner
       - name: Parse threat detection results
         id: parse_detection_results
@@ -890,7 +911,7 @@
             await main();
       - name: Upload threat detection log
         if: always() && steps.detection_guard.outputs.run_detection == 'true'
-        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
         with:
           name: threat-detection.log
           path: /tmp/gh-aw/threat-detection/detection.log
@@ -936,13 +957,13 @@
       total_count: ${{ steps.missing_tool.outputs.total_count }}
     steps:
       - name: Setup Scripts
-        uses: github/gh-aw/actions/setup@v0.53.4
+        uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
         with:
           destination: /opt/gh-aw/actions
       - name: Download agent output artifact
         id: download-agent-output
         continue-on-error: true
-        uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8.0.0
+        uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
         with:
           name: agent-output
           path: /tmp/gh-aw/safeoutputs/
@@ -958,7 +979,7 @@
         env:
           GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
           GH_AW_NOOP_MAX: "1"
-          GH_AW_WORKFLOW_NAME: "Qf S Benchmark"
+          GH_AW_WORKFLOW_NAME: "ZIPT String Solver Benchmark"
         with:
           github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
           script: |
@@ -973,7 +994,7 @@
           GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
           GH_AW_MISSING_TOOL_CREATE_ISSUE: "true"
           GH_AW_MISSING_TOOL_TITLE_PREFIX: "[missing tool]"
-          GH_AW_WORKFLOW_NAME: "Qf S Benchmark"
+          GH_AW_WORKFLOW_NAME: "ZIPT String Solver Benchmark"
         with:
           github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
           script: |
@@ -986,7 +1007,7 @@
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         env:
           GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
-          GH_AW_WORKFLOW_NAME: "Qf S Benchmark"
+          GH_AW_WORKFLOW_NAME: "ZIPT String Solver Benchmark"
           GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
           GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
           GH_AW_WORKFLOW_ID: "qf-s-benchmark"
@@ -996,6 +1017,7 @@
           GH_AW_CREATE_DISCUSSION_ERRORS: ${{ needs.safe_outputs.outputs.create_discussion_errors }}
           GH_AW_CREATE_DISCUSSION_ERROR_COUNT: ${{ needs.safe_outputs.outputs.create_discussion_error_count }}
           GH_AW_GROUP_REPORTS: "false"
+          GH_AW_FAILURE_REPORT_AS_ISSUE: "true"
           GH_AW_TIMEOUT_MINUTES: "90"
         with:
           github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
@@ -1009,11 +1031,11 @@
         uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
         env:
           GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
-          GH_AW_WORKFLOW_NAME: "Qf S Benchmark"
+          GH_AW_WORKFLOW_NAME: "ZIPT String Solver Benchmark"
           GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
           GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
           GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
-          GH_AW_NOOP_REPORT_AS_ISSUE: "true"
+          GH_AW_NOOP_REPORT_AS_ISSUE: "false"
         with:
           github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
           script: |
@@ -1035,7 +1057,7 @@
           GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/qf-s-benchmark"
           GH_AW_ENGINE_ID: "copilot"
           GH_AW_WORKFLOW_ID: "qf-s-benchmark"
-          GH_AW_WORKFLOW_NAME: "Qf S Benchmark"
+          GH_AW_WORKFLOW_NAME: "ZIPT String Solver Benchmark"
         outputs:
           code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}
           code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}
@@ -1045,13 +1067,13 @@
       process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
     steps:
      - name: Setup Scripts
-        uses: github/gh-aw/actions/setup@v0.53.4
+        uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
         with:
           destination: /opt/gh-aw/actions
       - name: Download agent output artifact
         id: download-agent-output
         continue-on-error: true
-        uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8.0.0
+        uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
         with:
           name: agent-output
           path: /tmp/gh-aw/safeoutputs/
@@ -1069,7 +1091,7 @@
           GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
           GITHUB_SERVER_URL: ${{ github.server_url }}
           GITHUB_API_URL: ${{ github.api_url }}
-          GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_discussion\":{\"category\":\"agentic workflows\",\"close_older_discussions\":true,\"expires\":168,\"fallback_to_issue\":true,\"max\":1,\"title_prefix\":\"[QF_S Benchmark] \"},\"missing_data\":{},\"missing_tool\":{}}"
+          GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_discussion\":{\"category\":\"agentic workflows\",\"close_older_discussions\":true,\"expires\":168,\"fallback_to_issue\":true,\"max\":1,\"title_prefix\":\"[ZIPT Benchmark] \"},\"missing_data\":{},\"missing_tool\":{}}"
         with:
           github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
           script: |
@@ -1079,7 +1101,7 @@
             await main();
       - name: Upload safe output items manifest
         if: always()
-        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
+        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
         with:
           name: safe-output-items
           path: /tmp/safe-output-items.jsonl
445  .github/workflows/qf-s-benchmark.md  (vendored)
@@ -2,7 +2,8 @@
 description: Run Z3 string solver benchmarks (seq vs nseq) on QF_S test suite from the c3 branch and post results as a GitHub discussion

 on:
-  schedule: weekly
+  schedule:
+    - cron: "0 0,12 * * *"
   workflow_dispatch:

 permissions: read-all
@@ -16,17 +17,19 @@ tools:

 safe-outputs:
   create-discussion:
-    title-prefix: "[QF_S Benchmark] "
+    title-prefix: "[ZIPT Benchmark] "
     category: "Agentic Workflows"
     close-older-discussions: true
   missing-tool:
     create-issue: true
+  noop:
+    report-as-issue: false

 timeout-minutes: 90

 steps:
   - name: Checkout c3 branch
-    uses: actions/checkout@v5
+    uses: actions/checkout@v6.0.2
     with:
       ref: c3
       fetch-depth: 1
@ -34,5 +37,437 @@ steps:
|
|||
|
||||
---
|
||||
|
||||
<!-- Edit the file linked below to modify the agent without recompilation. Feel free to move the entire markdown body to that file. -->
|
||||
@./agentics/qf-s-benchmark.md
|
||||
|
||||
# ZIPT String Solver Benchmark
|
||||
|
||||
You are an AI agent that benchmarks Z3 string solvers (`seq` and `nseq`) and the standalone ZIPT solver on QF_S SMT-LIB2 benchmarks from the `c3` branch, and publishes a summary report as a GitHub discussion.
|
||||
|
||||
## Context
|
||||
|
||||
- **Repository**: ${{ github.repository }}
|
||||
- **Workspace**: ${{ github.workspace }}
|
||||
- **Branch**: c3 (already checked out by the workflow setup step)
|
||||
|
||||
## Phase 1: Build Z3
|
||||
|
||||
Build Z3 from the checked-out `c3` branch using CMake + Ninja, including the .NET bindings required by ZIPT.
|
||||
|
||||
```bash
|
||||
cd ${{ github.workspace }}
|
||||
|
||||
# Install build dependencies if missing
|
||||
sudo apt-get install -y ninja-build cmake python3 zstd dotnet-sdk-8.0 2>/dev/null || true
|
||||
|
||||
# Configure the build in Debug mode to enable assertions and tracing
|
||||
# (Debug mode is required for -tr: trace flags to produce meaningful output)
|
||||
mkdir -p build
|
||||
cd build
|
||||
cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Debug -DZ3_BUILD_DOTNET_BINDINGS=ON 2>&1 | tail -20
|
||||
|
||||
# Build z3 binary and .NET bindings (this takes ~15-17 minutes)
|
||||
ninja z3 2>&1 | tail -30
|
||||
ninja build_z3_dotnet_bindings 2>&1 | tail -20
|
||||
|
||||
# Verify the build succeeded
|
||||
./z3 --version
|
||||
|
||||
# Locate the Microsoft.Z3.dll produced by the build
|
||||
Z3_DOTNET_DLL=$(find . -name "Microsoft.Z3.dll" -not -path "*/obj/*" | head -1)
|
||||
if [ -z "$Z3_DOTNET_DLL" ]; then
|
||||
echo "ERROR: Microsoft.Z3.dll not found after build"
|
||||
exit 1
|
||||
fi
|
||||
echo "Found Microsoft.Z3.dll at: $Z3_DOTNET_DLL"
|
||||
```
|
||||
|
||||
If the build fails, report the error clearly and exit without proceeding.
|
||||
|
||||
## Phase 2a: Clone and Build ZIPT
|
||||
|
||||
Clone the ZIPT solver from the `parikh` branch and compile it against the Z3 .NET bindings built in Phase 1.
|
||||
|
||||
```bash
|
||||
cd ${{ github.workspace }}
|
||||
|
||||
# Re-locate the Microsoft.Z3.dll if needed
|
||||
Z3_DOTNET_DLL=$(find build -name "Microsoft.Z3.dll" -not -path "*/obj/*" | head -1)
|
||||
Z3_LIB_DIR=${{ github.workspace }}/build
|
||||
|
||||
# Clone ZIPT (parikh branch)
|
||||
git clone --depth=1 --branch parikh https://github.com/CEisenhofer/ZIPT.git /tmp/zipt
|
||||
|
||||
# Patch ZIPT.csproj to point at the freshly built Microsoft.Z3.dll
|
||||
# (the repo has a Windows-relative hardcoded path that won't exist here)
|
||||
sed -i "s|<HintPath>.*</HintPath>|<HintPath>$Z3_DOTNET_DLL</HintPath>|" /tmp/zipt/ZIPT/ZIPT.csproj
|
||||
|
||||
# Build ZIPT in Release mode
|
||||
cd /tmp/zipt/ZIPT
|
||||
dotnet build --configuration Release 2>&1 | tail -20
|
||||
|
||||
# Locate the built ZIPT.dll
|
||||
ZIPT_DLL=$(find /tmp/zipt/ZIPT/bin/Release -name "ZIPT.dll" | head -1)
|
||||
if [ -z "$ZIPT_DLL" ]; then
|
||||
echo "ERROR: ZIPT.dll not found after build"
|
||||
exit 1
|
||||
fi
|
||||
echo "ZIPT binary: $ZIPT_DLL"
|
||||
|
||||
# Make libz3.so visible to the .NET runtime at ZIPT startup
|
||||
ZIPT_OUT_DIR=$(dirname "$ZIPT_DLL")
|
||||
if cp "$Z3_LIB_DIR/libz3.so" "$ZIPT_OUT_DIR/" 2>/dev/null; then
|
||||
echo "Copied libz3.so to $ZIPT_OUT_DIR"
|
||||
else
|
||||
echo "WARNING: could not copy libz3.so to $ZIPT_OUT_DIR — setting LD_LIBRARY_PATH fallback"
|
||||
fi
|
||||
export LD_LIBRARY_PATH="$Z3_LIB_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
|
||||
echo "ZIPT build complete."
|
||||
```
|
||||
|
||||
If the ZIPT build fails, note the error in the report but continue with the Z3-only benchmark columns.
|
||||
|
||||
## Phase 2b: Extract and Select Benchmark Files
|
||||
|
||||
Extract the QF_S benchmark archive and randomly select 50 files.
|
||||
|
||||
```bash
|
||||
cd ${{ github.workspace }}
|
||||
|
||||
# Extract the archive
|
||||
mkdir -p /tmp/qfs_benchmarks
|
||||
tar --zstd -xf tests/QF_S.tar.zst -C /tmp/qfs_benchmarks
|
||||
|
||||
# List all .smt2 files
|
||||
find /tmp/qfs_benchmarks -name "*.smt2" -type f > /tmp/all_qfs_files.txt
|
||||
TOTAL_FILES=$(wc -l < /tmp/all_qfs_files.txt)
|
||||
echo "Total QF_S files: $TOTAL_FILES"
|
||||
|
||||
# Randomly select 200 files
|
||||
shuf -n 200 /tmp/all_qfs_files.txt > /tmp/selected_files.txt
|
||||
echo "Selected 200 files for benchmarking"
|
||||
cat /tmp/selected_files.txt
|
||||
```
|
||||
|
||||
## Phase 3: Run Benchmarks

Run each of the 200 selected files with both Z3 string solvers and ZIPT. All three runs use a 5-second solver-internal timeout (`-T:5` for seq and nseq, `-t:5000` for ZIPT), with a longer outer `timeout` wrapper (7 s for seq, 12 s for nseq and ZIPT) as a safety net against hangs.

For each file, run:

1. `z3 smt.string_solver=seq -tr:seq -T:5 <file>` — seq solver with sequence-solver tracing enabled; rename the `.z3-trace` output after each run so it is not overwritten. Use `-T:5` when tracing to cap trace size.
2. `z3 smt.string_solver=nseq -T:5 <file>` — nseq solver without tracing (timing only).
3. `dotnet <ZIPT.dll> -t:5000 <file>` — ZIPT solver (timeout in milliseconds).

Capture:

- **Verdict**: `sat`, `unsat`, `unknown`, `timeout` (if the exit code indicates a timeout or the process is killed), or `bug` (if a solver crashes or produces a non-standard result)
- **Time** (seconds): wall-clock time for the run
- A row is flagged `SOUNDNESS_DISAGREEMENT` when any two solvers that both produced a definitive answer (sat/unsat) disagree

Use a bash script to automate this:

```bash
#!/usr/bin/env bash
set -euo pipefail

Z3=${{ github.workspace }}/build/z3
ZIPT_DLL=$(find /tmp/zipt/ZIPT/bin/Release -name "ZIPT.dll" 2>/dev/null | head -1)
ZIPT_AVAILABLE=false
[ -n "$ZIPT_DLL" ] && ZIPT_AVAILABLE=true

# Ensure libz3.so is on the dynamic-linker path for the .NET runtime
export LD_LIBRARY_PATH=${{ github.workspace }}/build${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}

RESULTS=/tmp/benchmark_results.tsv
TRACES_DIR=/tmp/seq_traces
mkdir -p "$TRACES_DIR"

echo -e "file\tseq_verdict\tseq_time\tnseq_verdict\tnseq_time\tzipt_verdict\tzipt_time\tnotes" > "$RESULTS"

# Map Z3 output and exit code to a verdict. Checking "^unsat" before "^sat"
# is safe because both patterns are anchored to the start of a line.
classify_z3() {
    local output="$1" exit_code="$2"
    if echo "$output" | grep -q "^unsat"; then echo "unsat"
    elif echo "$output" | grep -q "^sat"; then echo "sat"
    elif echo "$output" | grep -q "^unknown"; then echo "unknown"
    elif [ "$exit_code" -eq 124 ]; then echo "timeout"
    elif echo "$output" | grep -qi "error\|assertion\|segfault\|SIGABRT\|exception"; then echo "bug"
    else echo "unknown"
    fi
}

run_z3_seq_traced() {
    # Run the seq solver with -tr:seq tracing. Cap at 5 s so trace files stay manageable.
    local file="$1"
    local trace_dest="$2"
    local start end elapsed verdict output exit_code

    # Remove any leftover trace from a prior run so we can detect whether one was produced.
    rm -f .z3-trace

    start=$(date +%s%3N)
    exit_code=0
    # Capture the exit code explicitly so a timeout (124) does not trip `set -e`.
    output=$(timeout 7 "$Z3" "smt.string_solver=seq" -tr:seq -T:5 "$file" 2>&1) || exit_code=$?
    end=$(date +%s%3N)
    elapsed=$(echo "scale=3; ($end - $start) / 1000" | bc)

    # Rename the trace file immediately so the next run does not overwrite it.
    if [ -f .z3-trace ]; then
        mv .z3-trace "$trace_dest"
    else
        # Write a sentinel so Phase 4 can detect the absence of a trace.
        echo "(no trace produced)" > "$trace_dest"
    fi

    verdict=$(classify_z3 "$output" "$exit_code")
    echo "$verdict $elapsed"
}

run_z3_nseq() {
    local file="$1"
    local start end elapsed verdict output exit_code

    start=$(date +%s%3N)
    exit_code=0
    output=$(timeout 12 "$Z3" "smt.string_solver=nseq" -T:5 "$file" 2>&1) || exit_code=$?
    end=$(date +%s%3N)
    elapsed=$(echo "scale=3; ($end - $start) / 1000" | bc)

    verdict=$(classify_z3 "$output" "$exit_code")
    echo "$verdict $elapsed"
}

run_zipt() {
    local file="$1"
    local start end elapsed verdict output exit_code

    if [ "$ZIPT_AVAILABLE" != "true" ]; then
        echo "n/a 0.000"
        return
    fi

    start=$(date +%s%3N)
    exit_code=0
    # ZIPT prints the filename on the first line, then SAT/UNSAT/UNKNOWN on subsequent lines
    output=$(timeout 12 dotnet "$ZIPT_DLL" -t:5000 "$file" 2>&1) || exit_code=$?
    end=$(date +%s%3N)
    elapsed=$(echo "scale=3; ($end - $start) / 1000" | bc)

    if echo "$output" | grep -qi "^UNSAT$"; then
        verdict="unsat"
    elif echo "$output" | grep -qi "^SAT$"; then
        verdict="sat"
    elif echo "$output" | grep -qi "^UNKNOWN$"; then
        verdict="unknown"
    elif [ "$exit_code" -eq 124 ]; then
        verdict="timeout"
    elif echo "$output" | grep -qi "error\|crash\|exception\|Unsupported"; then
        verdict="bug"
    else
        verdict="unknown"
    fi

    echo "$verdict $elapsed"
}

while IFS= read -r file; do
    fname=$(basename "$file")
    # Use a sanitised filename (non-alphanumeric characters replaced with _) for the trace path.
    safe_name=$(echo "$fname" | tr -cs 'A-Za-z0-9._-' '_')
    trace_path="$TRACES_DIR/${safe_name}.z3-trace"

    seq_result=$(run_z3_seq_traced "$file" "$trace_path")
    nseq_result=$(run_z3_nseq "$file")
    zipt_result=$(run_zipt "$file")

    seq_verdict=$(echo "$seq_result" | cut -d' ' -f1)
    seq_time=$(echo "$seq_result" | cut -d' ' -f2)
    nseq_verdict=$(echo "$nseq_result" | cut -d' ' -f1)
    nseq_time=$(echo "$nseq_result" | cut -d' ' -f2)
    zipt_verdict=$(echo "$zipt_result" | cut -d' ' -f1)
    zipt_time=$(echo "$zipt_result" | cut -d' ' -f2)

    # Flag a soundness disagreement when any two definitive verdicts conflict.
    notes=""
    # Reset the map on every iteration; `declare -A` alone would keep entries
    # from the previous file.
    unset definitive_map
    declare -A definitive_map
    if [ "$seq_verdict" = "sat" ] || [ "$seq_verdict" = "unsat" ]; then definitive_map[seq]="$seq_verdict"; fi
    if [ "$nseq_verdict" = "sat" ] || [ "$nseq_verdict" = "unsat" ]; then definitive_map[nseq]="$nseq_verdict"; fi
    if [ "$zipt_verdict" = "sat" ] || [ "$zipt_verdict" = "unsat" ]; then definitive_map[zipt]="$zipt_verdict"; fi
    # Any sat verdict alongside any unsat verdict is a conflict.
    has_sat=false; has_unsat=false
    for v in "${definitive_map[@]}"; do
        [ "$v" = "sat" ] && has_sat=true
        [ "$v" = "unsat" ] && has_unsat=true
    done
    if $has_sat && $has_unsat; then
        notes="SOUNDNESS_DISAGREEMENT"
    fi

    echo -e "$fname\t$seq_verdict\t$seq_time\t$nseq_verdict\t$nseq_time\t$zipt_verdict\t$zipt_time\t$notes" >> "$RESULTS"
    echo "[$fname] seq=$seq_verdict(${seq_time}s) nseq=$nseq_verdict(${nseq_time}s) zipt=$zipt_verdict(${zipt_time}s) $notes"
done < /tmp/selected_files.txt

echo "Benchmark run complete. Results saved to $RESULTS"
echo "Trace files saved to $TRACES_DIR"
```

Save this script to `/tmp/run_benchmarks.sh`, make it executable, and run it.

## Phase 3.5: Identify seq-fast / nseq-slow Cases and Analyse Traces

After the benchmark loop completes, identify files where seq solved the instance quickly but nseq was significantly slower (or timed out). For each such file, read its saved seq trace and produce a hypothesis for why nseq is slower.

**Definition of "seq-fast / nseq-slow"**: seq_time < 1.0 s AND nseq_time > 3 × seq_time (and nseq_time > 0.5 s).

For each matching file:

1. Read the corresponding trace file from `/tmp/seq_traces/`.
2. Look for the sequence of lemmas, reductions, or decisions that led seq to a fast conclusion.
3. Identify patterns absent or less exploited in nseq: e.g., length-based propagation early in the trace, Parikh constraints eliminating possibilities, Nielsen graph pruning, equation splitting, or overlap resolution.
4. Write a 3–5 sentence hypothesis explaining the likely reason for the nseq slowdown, referencing specific trace entries where possible.

Use a script to collect the candidates:

```bash
#!/usr/bin/env bash
RESULTS=/tmp/benchmark_results.tsv
TRACES_DIR=/tmp/seq_traces
ANALYSIS=/tmp/trace_analysis.md

echo "# Trace Analysis: seq-fast / nseq-slow Candidates" > "$ANALYSIS"
echo "" >> "$ANALYSIS"

# Skip the header line; columns: file seq_verdict seq_time nseq_verdict nseq_time ...
tail -n +2 "$RESULTS" | while IFS=$'\t' read -r fname seq_verdict seq_time nseq_verdict nseq_time _rest; do
    # Use bc for the floating-point comparisons, one test per pipeline.
    is_fast=$(echo "$seq_time < 1.0" | bc -l 2>/dev/null || echo 0)
    threshold=$(echo "$seq_time * 3" | bc -l 2>/dev/null || echo 99999)
    is_slow_threshold=$(echo "$nseq_time > $threshold" | bc -l 2>/dev/null || echo 0)
    # Extra guard: exclude trivially fast seq cases where 3x is still < 0.5 s.
    is_over_half=$(echo "$nseq_time > 0.5" | bc -l 2>/dev/null || echo 0)

    if [ "$is_fast" = "1" ] && [ "$is_slow_threshold" = "1" ] && [ "$is_over_half" = "1" ]; then
        safe_name=$(echo "$fname" | tr -cs 'A-Za-z0-9._-' '_')
        trace_path="$TRACES_DIR/${safe_name}.z3-trace"
        {
            echo "## $fname"
            echo ""
            echo "seq: ${seq_time}s (${seq_verdict}), nseq: ${nseq_time}s (${nseq_verdict})"
            echo ""
            echo "### Trace excerpt (first 200 lines)"
            echo '```'
            head -200 "$trace_path" 2>/dev/null || echo "(trace file not found on disk)"
            echo '```'
            echo ""
            echo "---"
            echo ""
        } >> "$ANALYSIS"
    fi
done

echo "Candidate list written to $ANALYSIS"
cat "$ANALYSIS"
```

Save this to `/tmp/analyse_traces.sh`, make it executable, and run it. Then read the trace excerpts collected in `/tmp/trace_analysis.md` and, for each candidate, write your hypothesis in the Phase 4 summary report under a **"Trace Analysis"** section.

## Phase 4: Generate Summary Report

Read `/tmp/benchmark_results.tsv` and compute statistics. Then generate a Markdown report.

Compute:

- **Total benchmarks**: 200
- **Per solver (seq, nseq, and ZIPT)**: count of sat / unsat / unknown / timeout / bug verdicts
- **Total time used**: sum of all times for each solver
- **Average time per benchmark**: total_time / 200
- **Soundness disagreements**: files where any two solvers that both returned a definitive answer disagree (these are the most critical bugs)
- **Bugs / crashes**: files with error/crash verdicts
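A minimal sketch of that computation (illustrative only; the column layout comes from the header the Phase 3 script writes, and the two sample rows here are fabricated so the snippet runs standalone):

```shell
# Per-solver verdict counts plus total/average time from the Phase 3 TSV.
# Columns: file, then (verdict, time) pairs for seq (2,3), nseq (4,5), zipt (6,7), then notes.
RESULTS=/tmp/benchmark_results.tsv
printf 'file\tseq_verdict\tseq_time\tnseq_verdict\tnseq_time\tzipt_verdict\tzipt_time\tnotes\n' > "$RESULTS"
printf 'a.smt2\tsat\t0.100\tsat\t0.300\tsat\t0.200\t\n' >> "$RESULTS"
printf 'b.smt2\tunsat\t0.200\ttimeout\t12.000\tsat\t0.400\tSOUNDNESS_DISAGREEMENT\n' >> "$RESULTS"

for pair in 2:seq 4:nseq 6:zipt; do
  col=${pair%%:*}; name=${pair##*:}
  awk -F'\t' -v c="$col" -v n="$name" '
    NR > 1 { count[$c]++; total += $(c + 1); rows++ }
    END {
      printf "%s: sat=%d unsat=%d unknown=%d timeout=%d bug=%d total=%.3f avg=%.3f\n",
             n, count["sat"], count["unsat"], count["unknown"],
             count["timeout"], count["bug"], total, rows ? total / rows : 0
    }' "$RESULTS"
done

# Soundness disagreements are already flagged in the notes column:
grep -c 'SOUNDNESS_DISAGREEMENT' "$RESULTS"
```

In the workflow itself, skip the two sample `printf` rows; the TSV already exists after Phase 3.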
Format the report as a GitHub Discussion post (GitHub-flavored Markdown):

```markdown
### ZIPT Benchmark Report — Z3 c3 branch

**Date**: <today's date>
**Branch**: c3
**Benchmark set**: QF_S (200 randomly selected files from tests/QF_S.tar.zst)
**Timeout**: 5 seconds for seq (`-T:5`); 5 seconds for nseq (`-T:5`) and ZIPT (`-t:5000`)

---

### Summary

| Metric | seq solver | nseq solver | ZIPT solver |
|--------|-----------|-------------|-------------|
| sat | X | X | X |
| unsat | X | X | X |
| unknown | X | X | X |
| timeout | X | X | X |
| bug/crash | X | X | X |
| **Total time (s)** | X.XXX | X.XXX | X.XXX |
| **Avg time/benchmark (s)** | X.XXX | X.XXX | X.XXX |

**Soundness disagreements** (any two solvers return conflicting sat/unsat): N

---

### Per-File Results

| # | File | seq verdict | seq time (s) | nseq verdict | nseq time (s) | ZIPT verdict | ZIPT time (s) | Notes |
|---|------|-------------|-------------|--------------|--------------|--------------|--------------|-------|
| 1 | benchmark_0001.smt2 | sat | 0.123 | sat | 0.456 | sat | 0.789 | |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |

---

### Notable Issues

#### Soundness Disagreements (Critical)
<list files where any two solvers disagree on sat/unsat, naming which solvers disagree>

#### Crashes / Bugs
<list files where any solver crashed or produced an error>

#### Slow Benchmarks (> 8s)
<list files that took more than 8 seconds for any solver>

#### Trace Analysis: seq-fast / nseq-slow Hypotheses
<For each file where seq finished in < 1 s and nseq took > 3× longer, write a 3–5 sentence hypothesis based on the trace excerpt, referencing specific trace entries where possible. If no such files were found, state "No seq-fast / nseq-slow cases were observed in this run.">

---

*Generated automatically by the ZIPT Benchmark workflow on the c3 branch.*
```

## Phase 5: Post to GitHub Discussion

Post the Markdown report as a new GitHub Discussion using the `create-discussion` safe output.

- **Category**: "Agentic Workflows"
- **Title**: `[ZIPT Benchmark] Z3 c3 branch — <date>`
- Close older discussions with the same title prefix to avoid clutter.

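As an illustration only, the payload the safe output needs to carry is roughly the following; the field names are assumptions based on the `create_discussion` schema and are not verified here:

```json
{
  "title": "[ZIPT Benchmark] Z3 c3 branch — 2026-01-01",
  "category": "Agentic Workflows",
  "body": "### ZIPT Benchmark Report — Z3 c3 branch\n\n..."
}
```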
## Guidelines

- **Always build from c3 branch**: The workspace is already checked out on c3; don't change branches.
- **Debug build required**: The build must use `CMAKE_BUILD_TYPE=Debug` so that Z3's internal assertions and trace infrastructure are active; `-tr:` trace flags have no effect in Release builds.
- **Tracing time cap**: Always pass `-T:5` when running with `-tr:seq` to limit solver runtime and keep trace files a manageable size. The nseq and ZIPT runs use `-T:5` / `-t:5000` as before.
- **Rename trace files immediately**: After each seq run, rename `.z3-trace` to a per-benchmark path before starting the next run, or the next invocation will overwrite it.
- **Handle build failures gracefully**: If Z3 fails to build, report the error and create a brief discussion noting the build failure. If ZIPT fails to build, continue with only the seq/nseq columns and note `n/a` for ZIPT results.
- **Handle missing zstd**: If `tar --zstd` fails, try `zstd -d tests/QF_S.tar.zst --stdout | tar -x -C /tmp/qfs_benchmarks`.
- **Be precise with timing**: Use millisecond-precision timestamps and report times in seconds with 3 decimal places.
- **Distinguish timeout from unknown**: A timeout (process killed after the 7 s outer / 5 s Z3-internal limit for seq, or the 12 s outer / 5 s internal limit for nseq and ZIPT) is different from `(unknown)` returned by a solver.
- **ZIPT timeout unit**: ZIPT's `-t` flag takes **milliseconds**, so pass `-t:5000` for a 5-second limit.
- **ZIPT output format**: ZIPT prints the input filename on the first line, then `SAT`, `UNSAT`, or `UNKNOWN` on subsequent lines. Parse accordingly.
- **Report soundness bugs prominently**: If any benchmark shows a conflict between any two solvers that both returned a definitive sat/unsat answer, highlight it as a critical finding and name which pair disagrees.
- **Don't skip any file**: Run all 200 files even if some fail.
- **Large report**: If the per-file table is very long, put it in a `<details>` collapsible section.
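For instance, the per-file table can be wrapped as:

```markdown
### Per-File Results

<details>
<summary>Full per-file results (200 rows)</summary>

| # | File | seq verdict | ... |
|---|------|-------------|-----|
| 1 | benchmark_0001.smt2 | sat | ... |

</details>
```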

559 .github/workflows/release-notes-updater.lock.yml generated vendored

@@ -13,7 +13,7 @@
# \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
# \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
#
# This file was automatically generated by gh-aw (v0.45.6). DO NOT EDIT.
# This file was automatically generated by gh-aw (v0.57.2). DO NOT EDIT.
#
# To update this file, edit the corresponding .md file and run:
# gh aw compile

@@ -23,7 +23,7 @@
#
# Weekly release notes updater that generates updates based on changes since last release
#
# gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"ea00e3f06493e27d8163a18fbbbd37f5c9fdad4497869fcd70ca66c83b546a04"}
# gh-aw-metadata: {"schema_version":"v2","frontmatter_hash":"f1c5ca93aaf4a1971d65fe091f4954b074f555289bfc951ca5c232c58d2d5b36","compiler_version":"v0.57.2","strict":true}

name: "Release Notes Updater"
"on":

@@ -47,19 +47,51 @@ jobs:
outputs:
comment_id: ""
comment_repo: ""
model: ${{ steps.generate_aw_info.outputs.model }}
secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
with:
destination: /opt/gh-aw/actions
- name: Generate agentic run info
id: generate_aw_info
env:
GH_AW_INFO_ENGINE_ID: "copilot"
GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_INFO_VERSION: ""
GH_AW_INFO_AGENT_VERSION: "latest"
GH_AW_INFO_CLI_VERSION: "v0.57.2"
GH_AW_INFO_WORKFLOW_NAME: "Release Notes Updater"
GH_AW_INFO_EXPERIMENTAL: "false"
GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"
GH_AW_INFO_STAGED: "false"
GH_AW_INFO_ALLOWED_DOMAINS: '["defaults"]'
GH_AW_INFO_FIREWALL_ENABLED: "true"
GH_AW_INFO_AWF_VERSION: "v0.23.0"
GH_AW_INFO_AWMG_VERSION: ""
GH_AW_INFO_FIREWALL_TYPE: "squid"
GH_AW_COMPILED_STRICT: "true"
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { main } = require('/opt/gh-aw/actions/generate_aw_info.cjs');
await main(core, context);
- name: Validate COPILOT_GITHUB_TOKEN secret
id: validate-secret
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
env:
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
- name: Checkout .github and .agents folders
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
sparse-checkout: |
.github
.agents
sparse-checkout-cone-mode: true
fetch-depth: 1
persist-credentials: false
- name: Check workflow file timestamps
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:

@@ -85,41 +117,18 @@ jobs:
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
run: |
bash /opt/gh-aw/actions/create_prompt_first.sh
cat << 'GH_AW_PROMPT_EOF' > "$GH_AW_PROMPT"
{
cat << 'GH_AW_PROMPT_EOF'
<system>
GH_AW_PROMPT_EOF
cat "/opt/gh-aw/prompts/xpia.md" >> "$GH_AW_PROMPT"
cat "/opt/gh-aw/prompts/temp_folder_prompt.md" >> "$GH_AW_PROMPT"
cat "/opt/gh-aw/prompts/markdown.md" >> "$GH_AW_PROMPT"
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
<safe-outputs>
<description>GitHub API Access Instructions</description>
<important>
The gh CLI is NOT authenticated. Do NOT use gh commands for GitHub operations.
</important>
<instructions>
To create or modify GitHub resources (issues, discussions, pull requests, etc.), you MUST call the appropriate safe output tool. Simply writing content will NOT work - the workflow requires actual tool calls.

Temporary IDs: Some safe output tools support a temporary ID field (usually named temporary_id) so you can reference newly-created items elsewhere in the SAME agent output (for example, using #aw_abc1 in a later body).

**IMPORTANT - temporary_id format rules:**
- If you DON'T need to reference the item later, OMIT the temporary_id field entirely (it will be auto-generated if needed)
- If you DO need cross-references/chaining, you MUST match this EXACT validation regex: /^aw_[A-Za-z0-9]{3,8}$/i
- Format: aw_ prefix followed by 3 to 8 alphanumeric characters (A-Z, a-z, 0-9, case-insensitive)
- Valid alphanumeric characters: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
- INVALID examples: aw_ab (too short), aw_123456789 (too long), aw_test-id (contains hyphen), aw_id_123 (contains underscore)
- VALID examples: aw_abc, aw_abc1, aw_Test123, aw_A1B2C3D4, aw_12345678
- To generate valid IDs: use 3-8 random alphanumeric characters or omit the field to let the system auto-generate

Do NOT invent other aw_* formats — downstream steps will reject them with validation errors matching against /^aw_[A-Za-z0-9]{3,8}$/i.

Discover available tools from the safeoutputs MCP server.

**Critical**: Tool calls write structured data that downstream jobs process. Without tool calls, follow-up actions will be skipped.

**Note**: If you made no other safe output tool calls during this workflow execution, call the "noop" tool to provide a status message indicating completion or that no actions were needed.
</instructions>
</safe-outputs>
cat "/opt/gh-aw/prompts/xpia.md"
cat "/opt/gh-aw/prompts/temp_folder_prompt.md"
cat "/opt/gh-aw/prompts/markdown.md"
cat "/opt/gh-aw/prompts/safe_outputs_prompt.md"
cat << 'GH_AW_PROMPT_EOF'
<safe-output-tools>
Tools: create_discussion, missing_tool, missing_data, noop
</safe-output-tools>
<github-context>
The following GitHub context information is available for this workflow:
{{#if __GH_AW_GITHUB_ACTOR__ }}

@@ -149,12 +158,13 @@ jobs:
</github-context>

GH_AW_PROMPT_EOF
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
cat << 'GH_AW_PROMPT_EOF'
</system>
GH_AW_PROMPT_EOF
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
cat << 'GH_AW_PROMPT_EOF'
{{#runtime-import .github/workflows/release-notes-updater.md}}
GH_AW_PROMPT_EOF
} > "$GH_AW_PROMPT"
- name: Interpolate variables and render templates
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:

@@ -180,8 +190,6 @@ jobs:
GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
GH_AW_GITHUB_WORKFLOW: ${{ github.workflow }}
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: ${{ needs.pre_activation.outputs.activated }}
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: ${{ needs.pre_activation.outputs.matched_command }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');

@@ -201,9 +209,7 @@ jobs:
GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,
GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID,
GH_AW_GITHUB_WORKFLOW: process.env.GH_AW_GITHUB_WORKFLOW,
GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE,
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED,
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND
GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE
}
});
- name: Validate prompt placeholders

@@ -214,12 +220,14 @@ jobs:
env:
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
run: bash /opt/gh-aw/actions/print_prompt_summary.sh
- name: Upload prompt artifact
- name: Upload activation artifact
if: success()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: prompt
path: /tmp/gh-aw/aw-prompts/prompt.txt
name: activation
path: |
/tmp/gh-aw/aw_info.json
/tmp/gh-aw/aw-prompts/prompt.txt
retention-days: 1

agent:

@@ -240,22 +248,25 @@ jobs:
GH_AW_WORKFLOW_ID_SANITIZED: releasenotesupdater
outputs:
checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }}
detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
detection_success: ${{ steps.detection_conclusion.outputs.success }}
has_patch: ${{ steps.collect_output.outputs.has_patch }}
model: ${{ steps.generate_aw_info.outputs.model }}
inference_access_error: ${{ steps.detect-inference-error.outputs.inference_access_error || 'false' }}
model: ${{ needs.activation.outputs.model }}
output: ${{ steps.collect_output.outputs.output }}
output_types: ${{ steps.collect_output.outputs.output_types }}
secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
with:
destination: /opt/gh-aw/actions
- name: Create gh-aw temp directory
run: bash /opt/gh-aw/actions/create_gh_aw_tmp_dir.sh
- name: Checkout repository
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
fetch-depth: 0
persist-credentials: false

- name: Configure Git credentials
env:

@@ -264,6 +275,7 @@ jobs:
run: |
git config --global user.email "github-actions[bot]@users.noreply.github.com"
git config --global user.name "github-actions[bot]"
git config --global am.keepcr true
# Re-authenticate git with GitHub token
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"

@@ -271,7 +283,7 @@ jobs:
- name: Checkout PR branch
id: checkout-pr
if: |
github.event.pull_request
(github.event.pull_request) || (github.event.issue.pull_request)
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

@@ -282,59 +294,10 @@ jobs:
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
await main();
- name: Generate agentic run info
id: generate_aw_info
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const fs = require('fs');

const awInfo = {
engine_id: "copilot",
engine_name: "GitHub Copilot CLI",
model: process.env.GH_AW_MODEL_AGENT_COPILOT || "",
version: "",
agent_version: "0.0.410",
cli_version: "v0.45.6",
workflow_name: "Release Notes Updater",
experimental: false,
supports_tools_allowlist: true,
run_id: context.runId,
run_number: context.runNumber,
run_attempt: process.env.GITHUB_RUN_ATTEMPT,
repository: context.repo.owner + '/' + context.repo.repo,
ref: context.ref,
sha: context.sha,
actor: context.actor,
event_name: context.eventName,
staged: false,
allowed_domains: ["defaults"],
firewall_enabled: true,
awf_version: "v0.19.1",
awmg_version: "v0.1.4",
steps: {
firewall: "squid"
},
created_at: new Date().toISOString()
};

// Write to /tmp/gh-aw directory to avoid inclusion in PR
const tmpPath = '/tmp/gh-aw/aw_info.json';
fs.writeFileSync(tmpPath, JSON.stringify(awInfo, null, 2));
console.log('Generated aw_info.json at:', tmpPath);
console.log(JSON.stringify(awInfo, null, 2));

// Set model as output for reuse in other steps/jobs
core.setOutput('model', awInfo.model);
- name: Validate COPILOT_GITHUB_TOKEN secret
id: validate-secret
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
env:
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
- name: Install GitHub Copilot CLI
run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
run: /opt/gh-aw/actions/install_copilot_cli.sh latest
- name: Install awf binary
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.19.1
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
- name: Determine automatic lockdown mode for GitHub MCP Server
id: determine-automatic-lockdown
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8

@@ -346,7 +309,7 @@ jobs:
const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
await determineAutomaticLockdown(github, context, core);
- name: Download container images
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.19.1 ghcr.io/github/gh-aw-firewall/squid:0.19.1 ghcr.io/github/gh-aw-mcpg:v0.1.4 ghcr.io/github/github-mcp-server:v0.30.3 node:lts-alpine
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.32.0 node:lts-alpine
- name: Write Safe Outputs Config
run: |
mkdir -p /opt/gh-aw/safeoutputs

@@ -370,6 +333,14 @@ jobs:
"description": "Discussion category by name (e.g., 'General'), slug (e.g., 'general'), or ID. If omitted, uses the first available category. Category must exist in the repository.",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
},
"title": {
"description": "Concise discussion title summarizing the topic. The title appears as the main heading, so keep it brief and descriptive.",
"type": "string"

@@ -392,10 +363,18 @@ jobs:
"description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"reason": {
"description": "Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
},
"tool": {
"description": "Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.",
"type": "string"
@@ -413,9 +392,17 @@ jobs:
"inputSchema": {
"additionalProperties": false,
"properties": {
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"message": {
"description": "Status or completion message to log. Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
}
},
"required": [
@@ -442,9 +429,17 @@ jobs:
"description": "Type or description of the missing data or information (max 128 characters). Be specific about what data is needed.",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"reason": {
"description": "Explanation of why this data is needed to complete the task (max 256 characters).",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
}
},
"required": [],
@@ -482,6 +477,31 @@ jobs:
}
}
},
"missing_data": {
"defaultMax": 20,
"fields": {
"alternatives": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"context": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"data_type": {
"type": "string",
"sanitize": true,
"maxLength": 128
},
"reason": {
"type": "string",
"sanitize": true,
"maxLength": 256
}
}
},
"missing_tool": {
"defaultMax": 20,
"fields": {
@@ -574,10 +594,11 @@ jobs:
export MCP_GATEWAY_API_KEY
export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads"
mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}"
export MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD="524288"
export DEBUG="*"

export GH_AW_ENGINE="copilot"
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.4'
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.8'

mkdir -p /home/runner/.copilot
cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh
@@ -585,7 +606,7 @@ jobs:
"mcpServers": {
"github": {
"type": "stdio",
"container": "ghcr.io/github/github-mcp-server:v0.30.3",
"container": "ghcr.io/github/github-mcp-server:v0.32.0",
"env": {
"GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
"GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",
@@ -609,17 +630,11 @@ jobs:
}
}
GH_AW_MCP_CONFIG_EOF
- name: Generate workflow overview
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
- name: Download activation artifact
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
with:
script: |
const { generateWorkflowOverview } = require('/opt/gh-aw/actions/generate_workflow_overview.cjs');
await generateWorkflowOverview(core);
- name: Download prompt artifact
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
with:
name: prompt
path: /tmp/gh-aw/aw-prompts
name: activation
path: /tmp/gh-aw
- name: Clean git credentials
run: bash /opt/gh-aw/actions/clean_git_credentials.sh
- name: Execute GitHub Copilot CLI
@@ -628,20 +643,37 @@ jobs:
timeout-minutes: 30
run: |
set -o pipefail
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.19.1 --skip-pull \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --allow-all-paths --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_AGENT_COPILOT:+ --model "$GH_AW_MODEL_AGENT_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_PHASE: agent
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
GH_AW_VERSION: v0.57.2
GITHUB_API_URL: ${{ github.api_url }}
GITHUB_AW: true
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
GITHUB_WORKSPACE: ${{ github.workspace }}
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_AUTHOR_NAME: github-actions[bot]
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_COMMITTER_NAME: github-actions[bot]
XDG_CONFIG_HOME: /home/runner
- name: Detect inference access error
id: detect-inference-error
if: always()
continue-on-error: true
run: bash /opt/gh-aw/actions/detect_inference_access_error.sh
- name: Configure Git credentials
env:
REPO_NAME: ${{ github.repository }}
@@ -649,6 +681,7 @@ jobs:
run: |
git config --global user.email "github-actions[bot]@users.noreply.github.com"
git config --global user.name "github-actions[bot]"
git config --global am.keepcr true
# Re-authenticate git with GitHub token
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
@@ -694,9 +727,12 @@ jobs:
SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Append agent step summary
if: always()
run: bash /opt/gh-aw/actions/append_agent_step_summary.sh
- name: Upload Safe Outputs
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: safe-output
path: ${{ env.GH_AW_SAFE_OUTPUTS }}
@@ -718,13 +754,13 @@ jobs:
await main();
- name: Upload sanitized agent output
if: always() && env.GH_AW_AGENT_OUTPUT
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent-output
path: ${{ env.GH_AW_AGENT_OUTPUT }}
if-no-files-found: warn
- name: Upload engine output files
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent_outputs
path: |
@@ -769,23 +805,145 @@ jobs:
- name: Upload agent artifacts
if: always()
continue-on-error: true
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent-artifacts
path: |
/tmp/gh-aw/aw-prompts/prompt.txt
/tmp/gh-aw/aw_info.json
/tmp/gh-aw/mcp-logs/
/tmp/gh-aw/sandbox/firewall/logs/
/tmp/gh-aw/agent-stdio.log
/tmp/gh-aw/agent/
if-no-files-found: ignore
# --- Threat Detection (inline) ---
- name: Check if detection needed
id: detection_guard
if: always()
env:
OUTPUT_TYPES: ${{ steps.collect_output.outputs.output_types }}
HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
run: |
if [[ -n "$OUTPUT_TYPES" || "$HAS_PATCH" == "true" ]]; then
echo "run_detection=true" >> "$GITHUB_OUTPUT"
echo "Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH"
else
echo "run_detection=false" >> "$GITHUB_OUTPUT"
echo "Detection skipped: no agent outputs or patches to analyze"
fi
- name: Clear MCP configuration for detection
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
rm -f /tmp/gh-aw/mcp-config/mcp-servers.json
rm -f /home/runner/.copilot/mcp-config.json
rm -f "$GITHUB_WORKSPACE/.gemini/settings.json"
- name: Prepare threat detection files
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
mkdir -p /tmp/gh-aw/threat-detection/aw-prompts
cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true
cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true
for f in /tmp/gh-aw/aw-*.patch; do
[ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true
done
echo "Prepared threat detection files:"
ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true
- name: Setup threat detection
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
WORKFLOW_NAME: "Release Notes Updater"
WORKFLOW_DESCRIPTION: "Weekly release notes updater that generates updates based on changes since last release"
HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
await main();
- name: Ensure threat-detection directory and log
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
mkdir -p /tmp/gh-aw/threat-detection
touch /tmp/gh-aw/threat-detection/detection.log
- name: Execute GitHub Copilot CLI
if: always() && steps.detection_guard.outputs.run_detection == 'true'
id: detection_agentic_execution
# Copilot CLI tool arguments (sorted):
# --allow-tool shell(cat)
# --allow-tool shell(grep)
# --allow-tool shell(head)
# --allow-tool shell(jq)
# --allow-tool shell(ls)
# --allow-tool shell(tail)
# --allow-tool shell(wc)
timeout-minutes: 20
run: |
set -o pipefail
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
GH_AW_PHASE: detection
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_VERSION: v0.57.2
GITHUB_API_URL: ${{ github.api_url }}
GITHUB_AW: true
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
GITHUB_WORKSPACE: ${{ github.workspace }}
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_AUTHOR_NAME: github-actions[bot]
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_COMMITTER_NAME: github-actions[bot]
XDG_CONFIG_HOME: /home/runner
- name: Parse threat detection results
id: parse_detection_results
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
await main();
- name: Upload threat detection log
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: threat-detection.log
path: /tmp/gh-aw/threat-detection/detection.log
if-no-files-found: ignore
- name: Set detection conclusion
id: detection_conclusion
if: always()
env:
RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }}
DETECTION_SUCCESS: ${{ steps.parse_detection_results.outputs.success }}
run: |
if [[ "$RUN_DETECTION" != "true" ]]; then
echo "conclusion=skipped" >> "$GITHUB_OUTPUT"
echo "success=true" >> "$GITHUB_OUTPUT"
echo "Detection was not needed, marking as skipped"
elif [[ "$DETECTION_SUCCESS" == "true" ]]; then
echo "conclusion=success" >> "$GITHUB_OUTPUT"
echo "success=true" >> "$GITHUB_OUTPUT"
echo "Detection passed successfully"
else
echo "conclusion=failure" >> "$GITHUB_OUTPUT"
echo "success=false" >> "$GITHUB_OUTPUT"
echo "Detection found issues"
fi

conclusion:
needs:
- activation
- agent
- detection
- safe_outputs
if: (always()) && (needs.agent.result != 'skipped')
runs-on: ubuntu-slim
@@ -793,22 +951,27 @@ jobs:
contents: read
discussions: write
issues: write
concurrency:
group: "gh-aw-conclusion-release-notes-updater"
cancel-in-progress: false
outputs:
noop_message: ${{ steps.noop.outputs.noop_message }}
tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
total_count: ${{ steps.missing_tool.outputs.total_count }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
with:
destination: /opt/gh-aw/actions
- name: Download agent output artifact
id: download-agent-output
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
with:
name: agent-output
path: /tmp/gh-aw/safeoutputs/
- name: Setup agent output environment variable
if: steps.download-agent-output.outcome == 'success'
run: |
mkdir -p /tmp/gh-aw/safeoutputs/
find "/tmp/gh-aw/safeoutputs/" -type f -print
@@ -818,7 +981,7 @@ jobs:
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
GH_AW_NOOP_MAX: 1
GH_AW_NOOP_MAX: "1"
GH_AW_WORKFLOW_NAME: "Release Notes Updater"
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -849,10 +1012,14 @@ jobs:
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_WORKFLOW_ID: "release-notes-updater"
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.agent.outputs.secret_verification_result }}
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }}
GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}
GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }}
GH_AW_CREATE_DISCUSSION_ERRORS: ${{ needs.safe_outputs.outputs.create_discussion_errors }}
GH_AW_CREATE_DISCUSSION_ERROR_COUNT: ${{ needs.safe_outputs.outputs.create_discussion_error_count }}
GH_AW_GROUP_REPORTS: "false"
GH_AW_FAILURE_REPORT_AS_ISSUE: "true"
GH_AW_TIMEOUT_MINUTES: "30"
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
@@ -869,7 +1036,7 @@ jobs:
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
GH_AW_NOOP_REPORT_AS_ISSUE: "true"
GH_AW_NOOP_REPORT_AS_ISSUE: "false"
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
@@ -878,112 +1045,9 @@ jobs:
const { main } = require('/opt/gh-aw/actions/handle_noop_message.cjs');
await main();

detection:
needs: agent
if: needs.agent.outputs.output_types != '' || needs.agent.outputs.has_patch == 'true'
runs-on: ubuntu-latest
permissions: {}
concurrency:
group: "gh-aw-copilot-${{ github.workflow }}"
timeout-minutes: 10
outputs:
success: ${{ steps.parse_results.outputs.success }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
with:
destination: /opt/gh-aw/actions
- name: Download agent artifacts
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
with:
name: agent-artifacts
path: /tmp/gh-aw/threat-detection/
- name: Download agent output artifact
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
with:
name: agent-output
path: /tmp/gh-aw/threat-detection/
- name: Echo agent output types
env:
AGENT_OUTPUT_TYPES: ${{ needs.agent.outputs.output_types }}
run: |
echo "Agent output-types: $AGENT_OUTPUT_TYPES"
- name: Setup threat detection
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
WORKFLOW_NAME: "Release Notes Updater"
WORKFLOW_DESCRIPTION: "Weekly release notes updater that generates updates based on changes since last release"
HAS_PATCH: ${{ needs.agent.outputs.has_patch }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
await main();
- name: Ensure threat-detection directory and log
run: |
mkdir -p /tmp/gh-aw/threat-detection
touch /tmp/gh-aw/threat-detection/detection.log
- name: Validate COPILOT_GITHUB_TOKEN secret
id: validate-secret
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
env:
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
- name: Install GitHub Copilot CLI
run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
- name: Execute GitHub Copilot CLI
id: agentic_execution
# Copilot CLI tool arguments (sorted):
# --allow-tool shell(cat)
# --allow-tool shell(grep)
# --allow-tool shell(head)
# --allow-tool shell(jq)
# --allow-tool shell(ls)
# --allow-tool shell(tail)
# --allow-tool shell(wc)
timeout-minutes: 20
run: |
set -o pipefail
COPILOT_CLI_INSTRUCTION="$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"
mkdir -p /tmp/
mkdir -p /tmp/gh-aw/
mkdir -p /tmp/gh-aw/agent/
mkdir -p /tmp/gh-aw/sandbox/agent/logs/
copilot --add-dir /tmp/ --add-dir /tmp/gh-aw/ --add-dir /tmp/gh-aw/agent/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-tool 'shell(cat)' --allow-tool 'shell(grep)' --allow-tool 'shell(head)' --allow-tool 'shell(jq)' --allow-tool 'shell(ls)' --allow-tool 'shell(tail)' --allow-tool 'shell(wc)' --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$COPILOT_CLI_INSTRUCTION"${GH_AW_MODEL_DETECTION_COPILOT:+ --model "$GH_AW_MODEL_DETECTION_COPILOT"} 2>&1 | tee /tmp/gh-aw/threat-detection/detection.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
GITHUB_WORKSPACE: ${{ github.workspace }}
XDG_CONFIG_HOME: /home/runner
- name: Parse threat detection results
id: parse_results
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
await main();
- name: Upload threat detection log
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
with:
name: threat-detection.log
path: /tmp/gh-aw/threat-detection/detection.log
if-no-files-found: ignore

safe_outputs:
needs:
- agent
- detection
if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.detection.outputs.success == 'true')
needs: agent
if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.agent.outputs.detection_success == 'true')
runs-on: ubuntu-slim
permissions:
contents: read
@@ -991,26 +1055,31 @@ jobs:
issues: write
timeout-minutes: 15
env:
GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/release-notes-updater"
GH_AW_ENGINE_ID: "copilot"
GH_AW_WORKFLOW_ID: "release-notes-updater"
GH_AW_WORKFLOW_NAME: "Release Notes Updater"
outputs:
code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}
code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}
create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }}
create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }}
process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }}
process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
with:
destination: /opt/gh-aw/actions
- name: Download agent output artifact
id: download-agent-output
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
with:
name: agent-output
path: /tmp/gh-aw/safeoutputs/
- name: Setup agent output environment variable
if: steps.download-agent-output.outcome == 'success'
run: |
mkdir -p /tmp/gh-aw/safeoutputs/
find "/tmp/gh-aw/safeoutputs/" -type f -print
@@ -1020,7 +1089,10 @@ jobs:
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_discussion\":{\"category\":\"announcements\",\"expires\":168,\"fallback_to_issue\":true,\"max\":1,\"title_prefix\":\"[Release Notes] \"},\"missing_data\":{},\"missing_tool\":{}}"
GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_API_URL: ${{ github.api_url }}
GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_discussion\":{\"category\":\"announcements\",\"close_older_discussions\":false,\"expires\":168,\"fallback_to_issue\":true,\"max\":1,\"title_prefix\":\"[Release Notes] \"},\"missing_data\":{},\"missing_tool\":{}}"
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
@ -1028,4 +1100,11 @@ jobs:
|
|||
setupGlobals(core, github, context, exec, io);
|
||||
const { main } = require('/opt/gh-aw/actions/safe_output_handler_manager.cjs');
|
||||
await main();
|
||||
- name: Upload safe output items manifest
|
||||
if: always()
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: safe-output-items
|
||||
path: /tmp/safe-output-items.jsonl
|
||||
if-no-files-found: warn
|
||||
|
||||
|
|
|
|||
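The hunk above replaces the `GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG` JSON with a version that adds a `close_older_discussions` flag to the `create_discussion` handler. A minimal sketch of how a downstream handler might read that flag; the handler logic is an assumption for illustration, not gh-aw's actual implementation:

```python
import json
import os

# Parse the handler config the way a downstream safe-output handler might.
# The env-var name and JSON shape match the workflow diff above; the default
# literal mirrors the new config value.
config_json = os.environ.get(
    "GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG",
    '{"create_discussion": {"category": "announcements", '
    '"close_older_discussions": false, "expires": 168, '
    '"fallback_to_issue": true, "max": 1, '
    '"title_prefix": "[Release Notes] "}}',
)
config = json.loads(config_json)
discussion_cfg = config.get("create_discussion", {})

# The new key defaults to false, so older discussions are left open.
if discussion_cfg.get("close_older_discussions", False):
    print("would close older discussions with the same title prefix")
else:
    print("leaving older discussions open")
```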
5 .github/workflows/release-notes-updater.md (vendored)

@@ -24,13 +24,16 @@ safe-outputs:
     title-prefix: "[Release Notes] "
     category: "Announcements"
+    close-older-discussions: false
+  noop:
+    report-as-issue: false
   github-token: ${{ secrets.GITHUB_TOKEN }}

 steps:
   - name: Checkout repository
-    uses: actions/checkout@v5
+    uses: actions/checkout@v6.0.2
     with:
       fetch-depth: 0 # Fetch full history for analyzing commits
       persist-credentials: false

 ---
72
.github/workflows/release.yml
vendored
72
.github/workflows/release.yml
vendored
|
|
@ -53,7 +53,7 @@ jobs:
|
|||
run: python z3test/scripts/test_benchmarks.py build-dist/z3 z3test/regressions/smt2
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: macOsBuild
|
||||
path: dist/*.zip
|
||||
|
|
@ -79,7 +79,7 @@ jobs:
|
|||
run: git clone https://github.com/z3prover/z3test z3test
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: MacArm64
|
||||
path: dist/*.zip
|
||||
|
|
@ -99,7 +99,7 @@ jobs:
|
|||
uses: actions/checkout@v6.0.2
|
||||
|
||||
- name: Download macOS x64 Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: macOsBuild
|
||||
path: artifacts
|
||||
|
|
@ -147,7 +147,7 @@ jobs:
|
|||
uses: actions/checkout@v6.0.2
|
||||
|
||||
- name: Download macOS ARM64 Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: MacArm64
|
||||
path: artifacts
|
||||
|
|
@ -208,7 +208,7 @@ jobs:
|
|||
run: python z3test/scripts/test_benchmarks.py build-dist/z3 z3test/regressions/smt2
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: UbuntuBuild
|
||||
path: dist/*.zip
|
||||
|
|
@ -243,7 +243,7 @@ jobs:
|
|||
python scripts/mk_unix_dist.py --nodotnet --arch=arm64
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: UbuntuArm64
|
||||
path: dist/*.zip
|
||||
|
|
@ -279,9 +279,9 @@ jobs:
|
|||
eval $(opam config env)
|
||||
python scripts/mk_make.py --ml
|
||||
cd build
|
||||
make -j3
|
||||
make -j3 examples
|
||||
make -j3 test-z3
|
||||
make -j$(nproc)
|
||||
make -j$(nproc) examples
|
||||
make -j$(nproc) test-z3
|
||||
cd ..
|
||||
|
||||
- name: Generate documentation
|
||||
|
|
@ -298,7 +298,7 @@ jobs:
|
|||
run: zip -r z3doc.zip doc/api
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: UbuntuDoc
|
||||
path: z3doc.zip
|
||||
|
|
@ -328,7 +328,7 @@ jobs:
|
|||
run: pip install ./src/api/python/wheelhouse/*.whl && python - <src/api/python/z3test.py z3 && python - <src/api/python/z3test.py z3num
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: ManyLinuxPythonBuildAMD64
|
||||
path: src/api/python/wheelhouse/*.whl
|
||||
|
|
@ -368,7 +368,7 @@ jobs:
|
|||
run: cd src/api/python && CC=aarch64-none-linux-gnu-gcc CXX=aarch64-none-linux-gnu-g++ AR=aarch64-none-linux-gnu-ar LD=aarch64-none-linux-gnu-ld Z3_CROSS_COMPILING=aarch64 python -m build && AUDITWHEEL_PLAT= auditwheel repair --best-plat dist/*.whl && cd ../../..
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: ManyLinuxPythonBuildArm64
|
||||
path: src/api/python/wheelhouse/*.whl
|
||||
|
|
@ -394,7 +394,7 @@ jobs:
|
|||
python scripts\mk_win_dist.py --x64-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --zip
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: WindowsBuild-x64
|
||||
path: dist/*.zip
|
||||
|
|
@ -420,7 +420,7 @@ jobs:
|
|||
python scripts\mk_win_dist.py --x86-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --zip
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: WindowsBuild-x86
|
||||
path: dist/*.zip
|
||||
|
|
@ -446,7 +446,7 @@ jobs:
|
|||
python scripts\mk_win_dist_cmake.py --arm64-only --dotnet-key=%GITHUB_WORKSPACE%\resources\z3.snk --assembly-version=${{ env.RELEASE_VERSION }} --zip
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: WindowsBuild-arm64
|
||||
path: dist/arm64/*.zip
|
||||
|
|
@ -470,37 +470,37 @@ jobs:
|
|||
python-version: '3.x'
|
||||
|
||||
- name: Download Win64 Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: WindowsBuild-x64
|
||||
path: package
|
||||
|
||||
- name: Download Win ARM64 Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: WindowsBuild-arm64
|
||||
path: package
|
||||
|
||||
- name: Download Ubuntu Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: UbuntuBuild
|
||||
path: package
|
||||
|
||||
- name: Download Ubuntu ARM64 Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: UbuntuArm64
|
||||
path: package
|
||||
|
||||
- name: Download macOS Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: macOsBuild
|
||||
path: package
|
||||
|
||||
- name: Download macOS Arm64 Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: MacArm64
|
||||
path: package
|
||||
|
|
@ -523,7 +523,7 @@ jobs:
|
|||
nuget pack out\Microsoft.Z3.sym.nuspec -OutputDirectory . -Verbosity detailed -Symbols -SymbolPackageFormat snupkg -BasePath out
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: NuGet
|
||||
path: |
|
||||
|
|
@ -545,7 +545,7 @@ jobs:
|
|||
python-version: '3.x'
|
||||
|
||||
- name: Download artifacts
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: WindowsBuild-x86
|
||||
path: package
|
||||
|
|
@ -568,7 +568,7 @@ jobs:
|
|||
nuget pack out\Microsoft.Z3.x86.sym.nuspec -OutputDirectory . -Verbosity detailed -Symbols -SymbolPackageFormat snupkg -BasePath out
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: NuGet32
|
||||
path: |
|
||||
|
|
@ -590,43 +590,43 @@ jobs:
|
|||
python-version: '3.x'
|
||||
|
||||
- name: Download macOS x64 Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: macOsBuild
|
||||
path: artifacts
|
||||
|
||||
- name: Download macOS Arm64 Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: MacArm64
|
||||
path: artifacts
|
||||
|
||||
- name: Download Win64 Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: WindowsBuild-x64
|
||||
path: artifacts
|
||||
|
||||
- name: Download Win32 Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: WindowsBuild-x86
|
||||
path: artifacts
|
||||
|
||||
- name: Download Win ARM64 Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: WindowsBuild-arm64
|
||||
path: artifacts
|
||||
|
||||
- name: Download ManyLinux AMD64 Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: ManyLinuxPythonBuildAMD64
|
||||
path: artifacts
|
||||
|
||||
- name: Download ManyLinux Arm64 Build
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: ManyLinuxPythonBuildArm64
|
||||
path: artifacts
|
||||
|
|
@ -658,7 +658,7 @@ jobs:
|
|||
cp artifacts/*.whl src/api/python/dist/.
|
||||
|
||||
- name: Upload artifact
|
||||
uses: actions/upload-artifact@v6
|
||||
uses: actions/upload-artifact@v7
|
||||
with:
|
||||
name: PythonPackage
|
||||
path: src/api/python/dist/*
|
||||
|
|
@ -692,7 +692,7 @@ jobs:
|
|||
uses: actions/checkout@v6.0.2
|
||||
|
||||
- name: Download all artifacts
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
path: tmp
|
||||
|
||||
|
|
@ -748,13 +748,13 @@ jobs:
|
|||
uses: actions/checkout@v6.0.2
|
||||
|
||||
- name: Download NuGet packages
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: NuGet
|
||||
path: packages
|
||||
|
||||
- name: Download NuGet32 packages
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: NuGet32
|
||||
path: packages
|
||||
|
|
@ -781,7 +781,7 @@ jobs:
|
|||
contents: read
|
||||
steps:
|
||||
- name: Download Python packages
|
||||
uses: actions/download-artifact@v7
|
||||
uses: actions/download-artifact@v8.0.1
|
||||
with:
|
||||
name: PythonPackage
|
||||
path: dist
|
||||
|
|
|
|||
41 .github/workflows/soundness-bug-detector.md (vendored)

@@ -1,41 +0,0 @@
----
-description: Automatically validate and reproduce reported soundness bugs
-
-on:
-  issues:
-    types: [opened, labeled]
-  schedule: daily
-
-roles: all
-
-permissions: read-all
-
-network: defaults
-
-tools:
-  cache-memory: true
-  github:
-    toolsets: [default]
-  bash: [":*"]
-  web-fetch: {}
-
-safe-outputs:
-  add-comment:
-    max: 2
-  create-discussion:
-    title-prefix: "[Soundness] "
-    category: "Agentic Workflows"
-    close-older-discussions: true
-  missing-tool:
-    create-issue: true
-
-timeout-minutes: 30
-
-steps:
-  - name: Checkout repository
-    uses: actions/checkout@v5
-
----
-
-<!-- Edit the file linked below to modify the agent without recompilation. Feel free to move the entire markdown body to that file. -->
-@./agentics/soundness-bug-detector.md
58 .github/workflows/specbot.md (vendored)

@@ -1,58 +0,0 @@
----
-description: Automatically annotate code with assertions capturing class invariants, pre-conditions, and post-conditions using LLM-based specification mining
-
-on:
-  schedule: weekly
-  workflow_dispatch:
-    inputs:
-      target_path:
-        description: 'Target directory or file to analyze (e.g., src/ast/, src/smt/smt_context.cpp)'
-        required: false
-        default: ''
-      target_class:
-        description: 'Specific class name to analyze (optional)'
-        required: false
-        default: ''
-
-roles: [write, maintain, admin]
-
-env:
-  GH_TOKEN: ${{ secrets.BOT_PAT }}
-
-permissions:
-  contents: read
-  issues: read
-  pull-requests: read
-
-tools:
-  github:
-    toolsets: [default]
-  view: {}
-  glob: {}
-  edit: {}
-  bash:
-    - ":*"
-
-mcp-servers:
-  serena:
-    container: "ghcr.io/githubnext/serena-mcp-server"
-    version: "latest"
-
-safe-outputs:
-  create-discussion:
-    title-prefix: "[SpecBot] "
-    category: "Agentic Workflows"
-    close-older-discussions: true
-  missing-tool:
-    create-issue: true
-
-timeout-minutes: 45
-
-steps:
-  - name: Checkout repository
-    uses: actions/checkout@v5
-
----
-
-<!-- Edit the file linked below to modify the agent without recompilation. Feel free to move the entire markdown body to that file. -->
-@./agentics/specbot.md
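The generated lock file below embeds `temporary_id` validation rules in its prompt text (pattern `/^aw_[A-Za-z0-9]{3,8}$/i`), and this update widens the corresponding JSON-schema pattern to `^aw_[A-Za-z0-9]{3,12}$`. A minimal sketch of that check, using the valid/invalid examples quoted in the lock file; this is illustrative only, not gh-aw's actual validator:

```python
import re

# Old rule: 3-8 alphanumeric characters after the "aw_" prefix.
# New rule (this update): the JSON-schema pattern allows 3-12 characters.
OLD_TEMP_ID = re.compile(r"^aw_[A-Za-z0-9]{3,8}$", re.IGNORECASE)
NEW_TEMP_ID = re.compile(r"^aw_[A-Za-z0-9]{3,12}$", re.IGNORECASE)

def is_valid(temp_id: str, pattern: "re.Pattern[str]" = NEW_TEMP_ID) -> bool:
    """Return True when temp_id matches the temporary_id format."""
    return pattern.fullmatch(temp_id) is not None

# Examples taken from the prompt text embedded in the lock file.
assert is_valid("aw_abc1")
assert is_valid("aw_Test123")
assert not is_valid("aw_ab")          # too short
assert not is_valid("aw_test-id")     # hyphen not allowed
# IDs of 9-12 characters pass only under the widened pattern.
assert not is_valid("aw_123456789", OLD_TEMP_ID)
assert is_valid("aw_123456789", NEW_TEMP_ID)
```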
595
.github/workflows/tactic-to-simplifier.lock.yml
generated
vendored
595
.github/workflows/tactic-to-simplifier.lock.yml
generated
vendored
|
|
@ -13,7 +13,7 @@
|
|||
# \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
|
||||
# \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
|
||||
#
|
||||
# This file was automatically generated by gh-aw (v0.45.6). DO NOT EDIT.
|
||||
# This file was automatically generated by gh-aw (v0.57.2). DO NOT EDIT.
|
||||
#
|
||||
# To update this file, edit the corresponding .md file and run:
|
||||
# gh aw compile
|
||||
|
|
@ -23,7 +23,7 @@
|
|||
#
|
||||
# Compares exposed tactics and simplifiers in Z3, and creates issues for tactics that can be converted to simplifiers
|
||||
#
|
||||
# gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"50116844aa0308890a39445e2e30a0cc857b66711c75cecd175c4e064608b1aa"}
|
||||
# gh-aw-metadata: {"schema_version":"v2","frontmatter_hash":"e13e5cf1ed6470a8d8ed325d91a01992a543c67b3b1393c2a01d8008b90992dc","compiler_version":"v0.57.2","strict":true}
|
||||
|
||||
name: "Tactic-to-Simplifier Comparison Agent"
|
||||
"on":
|
||||
|
|
@ -47,19 +47,51 @@ jobs:
|
|||
outputs:
|
||||
comment_id: ""
|
||||
comment_repo: ""
|
||||
model: ${{ steps.generate_aw_info.outputs.model }}
|
||||
secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
|
||||
steps:
|
||||
- name: Setup Scripts
|
||||
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
|
||||
uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
|
||||
with:
|
||||
destination: /opt/gh-aw/actions
|
||||
- name: Generate agentic run info
|
||||
id: generate_aw_info
|
||||
env:
|
||||
GH_AW_INFO_ENGINE_ID: "copilot"
|
||||
GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
|
||||
GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
|
||||
GH_AW_INFO_VERSION: ""
|
||||
GH_AW_INFO_AGENT_VERSION: "latest"
|
||||
GH_AW_INFO_CLI_VERSION: "v0.57.2"
|
||||
GH_AW_INFO_WORKFLOW_NAME: "Tactic-to-Simplifier Comparison Agent"
|
||||
GH_AW_INFO_EXPERIMENTAL: "false"
|
||||
GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"
|
||||
GH_AW_INFO_STAGED: "false"
|
||||
GH_AW_INFO_ALLOWED_DOMAINS: '["defaults"]'
|
||||
GH_AW_INFO_FIREWALL_ENABLED: "true"
|
||||
GH_AW_INFO_AWF_VERSION: "v0.23.0"
|
||||
GH_AW_INFO_AWMG_VERSION: ""
|
||||
GH_AW_INFO_FIREWALL_TYPE: "squid"
|
||||
GH_AW_COMPILED_STRICT: "true"
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
with:
|
||||
script: |
|
||||
const { main } = require('/opt/gh-aw/actions/generate_aw_info.cjs');
|
||||
await main(core, context);
|
||||
- name: Validate COPILOT_GITHUB_TOKEN secret
|
||||
id: validate-secret
|
||||
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
|
||||
env:
|
||||
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
|
||||
- name: Checkout .github and .agents folders
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
with:
|
||||
persist-credentials: false
|
||||
sparse-checkout: |
|
||||
.github
|
||||
.agents
|
||||
sparse-checkout-cone-mode: true
|
||||
fetch-depth: 1
|
||||
persist-credentials: false
|
||||
- name: Check workflow file timestamps
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
env:
|
||||
|
|
@ -84,42 +116,19 @@ jobs:
|
|||
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
|
||||
run: |
|
||||
bash /opt/gh-aw/actions/create_prompt_first.sh
|
||||
cat << 'GH_AW_PROMPT_EOF' > "$GH_AW_PROMPT"
|
||||
{
|
||||
cat << 'GH_AW_PROMPT_EOF'
|
||||
<system>
|
||||
GH_AW_PROMPT_EOF
|
||||
cat "/opt/gh-aw/prompts/xpia.md" >> "$GH_AW_PROMPT"
|
||||
cat "/opt/gh-aw/prompts/temp_folder_prompt.md" >> "$GH_AW_PROMPT"
|
||||
cat "/opt/gh-aw/prompts/markdown.md" >> "$GH_AW_PROMPT"
|
||||
cat "/opt/gh-aw/prompts/cache_memory_prompt.md" >> "$GH_AW_PROMPT"
|
||||
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
|
||||
<safe-outputs>
|
||||
<description>GitHub API Access Instructions</description>
|
||||
<important>
|
||||
The gh CLI is NOT authenticated. Do NOT use gh commands for GitHub operations.
|
||||
</important>
|
||||
<instructions>
|
||||
To create or modify GitHub resources (issues, discussions, pull requests, etc.), you MUST call the appropriate safe output tool. Simply writing content will NOT work - the workflow requires actual tool calls.
|
||||
|
||||
Temporary IDs: Some safe output tools support a temporary ID field (usually named temporary_id) so you can reference newly-created items elsewhere in the SAME agent output (for example, using #aw_abc1 in a later body).
|
||||
|
||||
**IMPORTANT - temporary_id format rules:**
|
||||
- If you DON'T need to reference the item later, OMIT the temporary_id field entirely (it will be auto-generated if needed)
|
||||
- If you DO need cross-references/chaining, you MUST match this EXACT validation regex: /^aw_[A-Za-z0-9]{3,8}$/i
|
||||
- Format: aw_ prefix followed by 3 to 8 alphanumeric characters (A-Z, a-z, 0-9, case-insensitive)
|
||||
- Valid alphanumeric characters: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
|
||||
- INVALID examples: aw_ab (too short), aw_123456789 (too long), aw_test-id (contains hyphen), aw_id_123 (contains underscore)
|
||||
- VALID examples: aw_abc, aw_abc1, aw_Test123, aw_A1B2C3D4, aw_12345678
|
||||
- To generate valid IDs: use 3-8 random alphanumeric characters or omit the field to let the system auto-generate
|
||||
|
||||
Do NOT invent other aw_* formats — downstream steps will reject them with validation errors matching against /^aw_[A-Za-z0-9]{3,8}$/i.
|
||||
|
||||
Discover available tools from the safeoutputs MCP server.
|
||||
|
||||
**Critical**: Tool calls write structured data that downstream jobs process. Without tool calls, follow-up actions will be skipped.
|
||||
|
||||
**Note**: If you made no other safe output tool calls during this workflow execution, call the "noop" tool to provide a status message indicating completion or that no actions were needed.
|
||||
</instructions>
|
||||
</safe-outputs>
|
||||
cat "/opt/gh-aw/prompts/xpia.md"
|
||||
cat "/opt/gh-aw/prompts/temp_folder_prompt.md"
|
||||
cat "/opt/gh-aw/prompts/markdown.md"
|
||||
cat "/opt/gh-aw/prompts/cache_memory_prompt.md"
|
||||
cat "/opt/gh-aw/prompts/safe_outputs_prompt.md"
|
||||
cat << 'GH_AW_PROMPT_EOF'
|
||||
<safe-output-tools>
|
||||
Tools: create_issue, missing_tool, missing_data, noop
|
||||
</safe-output-tools>
|
||||
<github-context>
|
||||
The following GitHub context information is available for this workflow:
|
||||
{{#if __GH_AW_GITHUB_ACTOR__ }}
|
||||
|
|
@ -149,12 +158,13 @@ jobs:
|
|||
</github-context>
|
||||
|
||||
GH_AW_PROMPT_EOF
|
||||
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
|
||||
cat << 'GH_AW_PROMPT_EOF'
|
||||
</system>
|
||||
GH_AW_PROMPT_EOF
|
||||
cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
|
||||
cat << 'GH_AW_PROMPT_EOF'
|
||||
{{#runtime-import .github/workflows/tactic-to-simplifier.md}}
|
||||
GH_AW_PROMPT_EOF
|
||||
} > "$GH_AW_PROMPT"
|
||||
- name: Interpolate variables and render templates
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
env:
|
||||
|
|
@ -181,8 +191,6 @@ jobs:
|
|||
GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
|
||||
GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
|
||||
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
|
||||
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: ${{ needs.pre_activation.outputs.activated }}
|
||||
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: ${{ needs.pre_activation.outputs.matched_command }}
|
||||
with:
|
||||
script: |
|
||||
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
|
||||
|
|
@ -204,9 +212,7 @@ jobs:
|
|||
GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER,
|
||||
GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,
|
||||
GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID,
|
||||
GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE,
|
||||
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED,
|
||||
GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND
|
||||
GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE
|
||||
}
|
||||
});
|
||||
- name: Validate prompt placeholders
|
||||
|
|
@ -217,12 +223,14 @@ jobs:
|
|||
env:
|
||||
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
|
||||
run: bash /opt/gh-aw/actions/print_prompt_summary.sh
|
||||
- name: Upload prompt artifact
|
||||
- name: Upload activation artifact
|
||||
if: success()
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: prompt
|
||||
path: /tmp/gh-aw/aw-prompts/prompt.txt
|
||||
name: activation
|
||||
path: |
|
||||
/tmp/gh-aw/aw_info.json
|
||||
/tmp/gh-aw/aw-prompts/prompt.txt
|
||||
retention-days: 1
|
||||
|
||||
agent:
|
||||
|
|
@ -246,20 +254,22 @@ jobs:
|
|||
GH_AW_WORKFLOW_ID_SANITIZED: tactictosimplifier
|
||||
outputs:
|
||||
checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }}
|
||||
detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
|
||||
detection_success: ${{ steps.detection_conclusion.outputs.success }}
|
||||
has_patch: ${{ steps.collect_output.outputs.has_patch }}
|
||||
model: ${{ steps.generate_aw_info.outputs.model }}
|
||||
inference_access_error: ${{ steps.detect-inference-error.outputs.inference_access_error || 'false' }}
|
||||
model: ${{ needs.activation.outputs.model }}
|
||||
output: ${{ steps.collect_output.outputs.output }}
|
||||
output_types: ${{ steps.collect_output.outputs.output_types }}
|
||||
secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
|
||||
steps:
|
||||
- name: Setup Scripts
|
||||
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
|
||||
uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
|
||||
with:
|
||||
destination: /opt/gh-aw/actions
|
||||
- name: Create gh-aw temp directory
|
||||
run: bash /opt/gh-aw/actions/create_gh_aw_tmp_dir.sh
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
with:
|
||||
persist-credentials: false
|
||||
|
||||
|
|
@ -267,7 +277,7 @@ jobs:
|
|||
- name: Create cache-memory directory
|
||||
run: bash /opt/gh-aw/actions/create_cache_memory_dir.sh
|
||||
- name: Restore cache-memory file share data
|
||||
uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
|
||||
uses: actions/cache/restore@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
|
||||
with:
|
||||
key: memory-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }}
|
||||
path: /tmp/gh-aw/cache-memory
|
||||
|
|
@ -280,6 +290,7 @@ jobs:
|
|||
run: |
|
||||
git config --global user.email "github-actions[bot]@users.noreply.github.com"
|
||||
git config --global user.name "github-actions[bot]"
|
||||
git config --global am.keepcr true
|
||||
# Re-authenticate git with GitHub token
|
||||
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
|
||||
git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
|
||||
|
|
@ -287,7 +298,7 @@ jobs:
|
|||
- name: Checkout PR branch
|
||||
id: checkout-pr
|
||||
if: |
|
||||
github.event.pull_request
|
||||
(github.event.pull_request) || (github.event.issue.pull_request)
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
env:
|
||||
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
|
@ -298,59 +309,10 @@ jobs:
|
|||
setupGlobals(core, github, context, exec, io);
|
||||
const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
|
||||
await main();
|
||||
- name: Generate agentic run info
|
||||
id: generate_aw_info
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
with:
|
||||
script: |
|
||||
const fs = require('fs');
|
||||
|
||||
const awInfo = {
|
||||
engine_id: "copilot",
|
||||
engine_name: "GitHub Copilot CLI",
|
||||
model: process.env.GH_AW_MODEL_AGENT_COPILOT || "",
|
||||
version: "",
|
||||
agent_version: "0.0.410",
|
||||
cli_version: "v0.45.6",
|
||||
workflow_name: "Tactic-to-Simplifier Comparison Agent",
|
||||
experimental: false,
|
||||
supports_tools_allowlist: true,
|
||||
run_id: context.runId,
|
||||
run_number: context.runNumber,
|
||||
run_attempt: process.env.GITHUB_RUN_ATTEMPT,
|
||||
repository: context.repo.owner + '/' + context.repo.repo,
|
||||
ref: context.ref,
|
||||
sha: context.sha,
|
||||
actor: context.actor,
|
||||
event_name: context.eventName,
|
||||
staged: false,
|
||||
allowed_domains: ["defaults"],
|
||||
firewall_enabled: true,
|
||||
awf_version: "v0.19.1",
|
||||
awmg_version: "v0.1.4",
|
||||
steps: {
|
||||
firewall: "squid"
|
||||
},
|
||||
created_at: new Date().toISOString()
|
||||
};
|
||||
|
||||
// Write to /tmp/gh-aw directory to avoid inclusion in PR
|
||||
const tmpPath = '/tmp/gh-aw/aw_info.json';
|
||||
fs.writeFileSync(tmpPath, JSON.stringify(awInfo, null, 2));
|
||||
console.log('Generated aw_info.json at:', tmpPath);
|
||||
console.log(JSON.stringify(awInfo, null, 2));
|
||||
|
||||
// Set model as output for reuse in other steps/jobs
|
||||
core.setOutput('model', awInfo.model);
|
||||
- name: Validate COPILOT_GITHUB_TOKEN secret
|
||||
id: validate-secret
|
||||
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
|
||||
env:
|
||||
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
|
||||
- name: Install GitHub Copilot CLI
|
||||
run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
|
||||
run: /opt/gh-aw/actions/install_copilot_cli.sh latest
|
||||
- name: Install awf binary
|
||||
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.19.1
|
||||
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
|
||||
- name: Determine automatic lockdown mode for GitHub MCP Server
|
||||
id: determine-automatic-lockdown
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
|
|
@ -362,7 +324,7 @@ jobs:
|
|||
const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
|
||||
await determineAutomaticLockdown(github, context, core);
|
||||
- name: Download container images
|
||||
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.19.1 ghcr.io/github/gh-aw-firewall/squid:0.19.1 ghcr.io/github/gh-aw-mcpg:v0.1.4 ghcr.io/github/github-mcp-server:v0.30.3 node:lts-alpine
|
||||
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.32.0 node:lts-alpine
|
||||
- name: Write Safe Outputs Config
|
||||
run: |
|
||||
mkdir -p /opt/gh-aw/safeoutputs
|
||||
|
|
@ -374,7 +336,7 @@ jobs:
|
|||
cat > /opt/gh-aw/safeoutputs/tools.json << 'GH_AW_SAFE_OUTPUTS_TOOLS_EOF'
|
||||
[
|
||||
{
|
||||
"description": "Create a new GitHub issue for tracking bugs, feature requests, or tasks. Use this for actionable work items that need assignment, labeling, and status tracking. For reports, announcements, or status updates that don't require task tracking, use create_discussion instead. CONSTRAINTS: Maximum 3 issue(s) can be created. Title will be prefixed with \"[tactic-to-simplifier] \". Labels [enhancement refactoring tactic-to-simplifier] will be automatically added.",
|
||||
"description": "Create a new GitHub issue for tracking bugs, feature requests, or tasks. Use this for actionable work items that need assignment, labeling, and status tracking. For reports, announcements, or status updates that don't require task tracking, use create_discussion instead. CONSTRAINTS: Maximum 3 issue(s) can be created. Title will be prefixed with \"[tactic-to-simplifier] \". Labels [\"enhancement\" \"refactoring\" \"tactic-to-simplifier\"] will be automatically added.",
|
||||
"inputSchema": {
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
|
|
@ -382,6 +344,10 @@ jobs:
|
|||
"description": "Detailed issue description in Markdown. Do NOT repeat the title as a heading since it already appears as the issue's h1. Include context, reproduction steps, or acceptance criteria as appropriate.",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"labels": {
"description": "Labels to categorize the issue (e.g., 'bug', 'enhancement'). Labels must exist in the repository.",
"items": {
@ -396,9 +362,13 @@ jobs:
"string"
]
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
},
"temporary_id": {
"description": "Unique temporary identifier for referencing this issue before it's created. Format: 'aw_' followed by 3 to 8 alphanumeric characters (e.g., 'aw_abc1', 'aw_Test123'). Use '#aw_ID' in body text to reference other issues by their temporary_id; these are replaced with actual issue numbers after creation.",
"pattern": "^aw_[A-Za-z0-9]{3,8}$",
"description": "Unique temporary identifier for referencing this issue before it's created. Format: 'aw_' followed by 3 to 12 alphanumeric characters (e.g., 'aw_abc1', 'aw_Test123'). Use '#aw_ID' in body text to reference other issues by their temporary_id; these are replaced with actual issue numbers after creation.",
"pattern": "^aw_[A-Za-z0-9]{3,12}$",
"type": "string"
},
"title": {
@ -423,10 +393,18 @@ jobs:
"description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"reason": {
"description": "Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
},
"tool": {
"description": "Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.",
"type": "string"
@ -444,9 +422,17 @@ jobs:
"inputSchema": {
"additionalProperties": false,
"properties": {
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"message": {
"description": "Status or completion message to log. Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
}
},
"required": [
@ -473,9 +459,17 @@ jobs:
"description": "Type or description of the missing data or information (max 128 characters). Be specific about what data is needed.",
"type": "string"
},
"integrity": {
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
"type": "string"
},
"reason": {
"description": "Explanation of why this data is needed to complete the task (max 256 characters).",
"type": "string"
},
"secrecy": {
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
"type": "string"
}
},
"required": [],
@ -520,6 +514,31 @@ jobs:
}
}
},
"missing_data": {
"defaultMax": 20,
"fields": {
"alternatives": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"context": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"data_type": {
"type": "string",
"sanitize": true,
"maxLength": 128
},
"reason": {
"type": "string",
"sanitize": true,
"maxLength": 256
}
}
},
"missing_tool": {
"defaultMax": 20,
"fields": {
@ -612,10 +631,11 @@ jobs:
export MCP_GATEWAY_API_KEY
export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads"
mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}"
export MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD="524288"
export DEBUG="*"

export GH_AW_ENGINE="copilot"
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.4'
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.8'

mkdir -p /home/runner/.copilot
cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh
@ -623,7 +643,7 @@ jobs:
"mcpServers": {
"github": {
"type": "stdio",
"container": "ghcr.io/github/github-mcp-server:v0.30.3",
"container": "ghcr.io/github/github-mcp-server:v0.32.0",
"env": {
"GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
"GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",
@ -647,17 +667,11 @@ jobs:
}
}
GH_AW_MCP_CONFIG_EOF
- name: Generate workflow overview
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
- name: Download activation artifact
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
with:
script: |
const { generateWorkflowOverview } = require('/opt/gh-aw/actions/generate_workflow_overview.cjs');
await generateWorkflowOverview(core);
- name: Download prompt artifact
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
with:
name: prompt
path: /tmp/gh-aw/aw-prompts
name: activation
path: /tmp/gh-aw
- name: Clean git credentials
run: bash /opt/gh-aw/actions/clean_git_credentials.sh
- name: Execute GitHub Copilot CLI
@ -666,20 +680,37 @@ jobs:
timeout-minutes: 30
run: |
set -o pipefail
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.19.1 --skip-pull \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_AGENT_COPILOT:+ --model "$GH_AW_MODEL_AGENT_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_PHASE: agent
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
GH_AW_VERSION: v0.57.2
GITHUB_API_URL: ${{ github.api_url }}
GITHUB_AW: true
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
GITHUB_WORKSPACE: ${{ github.workspace }}
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_AUTHOR_NAME: github-actions[bot]
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_COMMITTER_NAME: github-actions[bot]
XDG_CONFIG_HOME: /home/runner
- name: Detect inference access error
id: detect-inference-error
if: always()
continue-on-error: true
run: bash /opt/gh-aw/actions/detect_inference_access_error.sh
- name: Configure Git credentials
env:
REPO_NAME: ${{ github.repository }}
@ -687,6 +718,7 @@ jobs:
run: |
git config --global user.email "github-actions[bot]@users.noreply.github.com"
git config --global user.name "github-actions[bot]"
git config --global am.keepcr true
# Re-authenticate git with GitHub token
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
@ -732,9 +764,12 @@ jobs:
SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Append agent step summary
if: always()
run: bash /opt/gh-aw/actions/append_agent_step_summary.sh
- name: Upload Safe Outputs
if: always()
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: safe-output
path: ${{ env.GH_AW_SAFE_OUTPUTS }}
@ -756,13 +791,13 @@ jobs:
await main();
- name: Upload sanitized agent output
if: always() && env.GH_AW_AGENT_OUTPUT
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent-output
path: ${{ env.GH_AW_AGENT_OUTPUT }}
if-no-files-found: warn
- name: Upload engine output files
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent_outputs
path: |
@ -805,7 +840,7 @@ jobs:
echo 'AWF binary not installed, skipping firewall log summary'
fi
- name: Upload cache-memory data as artifact
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
if: always()
with:
name: cache-memory
@ -813,23 +848,145 @@ jobs:
- name: Upload agent artifacts
if: always()
continue-on-error: true
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: agent-artifacts
path: |
/tmp/gh-aw/aw-prompts/prompt.txt
/tmp/gh-aw/aw_info.json
/tmp/gh-aw/mcp-logs/
/tmp/gh-aw/sandbox/firewall/logs/
/tmp/gh-aw/agent-stdio.log
/tmp/gh-aw/agent/
if-no-files-found: ignore
# --- Threat Detection (inline) ---
- name: Check if detection needed
id: detection_guard
if: always()
env:
OUTPUT_TYPES: ${{ steps.collect_output.outputs.output_types }}
HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
run: |
if [[ -n "$OUTPUT_TYPES" || "$HAS_PATCH" == "true" ]]; then
echo "run_detection=true" >> "$GITHUB_OUTPUT"
echo "Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH"
else
echo "run_detection=false" >> "$GITHUB_OUTPUT"
echo "Detection skipped: no agent outputs or patches to analyze"
fi
- name: Clear MCP configuration for detection
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
rm -f /tmp/gh-aw/mcp-config/mcp-servers.json
rm -f /home/runner/.copilot/mcp-config.json
rm -f "$GITHUB_WORKSPACE/.gemini/settings.json"
- name: Prepare threat detection files
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
mkdir -p /tmp/gh-aw/threat-detection/aw-prompts
cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true
cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true
for f in /tmp/gh-aw/aw-*.patch; do
[ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true
done
echo "Prepared threat detection files:"
ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true
- name: Setup threat detection
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
WORKFLOW_NAME: "Tactic-to-Simplifier Comparison Agent"
WORKFLOW_DESCRIPTION: "Compares exposed tactics and simplifiers in Z3, and creates issues for tactics that can be converted to simplifiers"
HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
await main();
- name: Ensure threat-detection directory and log
if: always() && steps.detection_guard.outputs.run_detection == 'true'
run: |
mkdir -p /tmp/gh-aw/threat-detection
touch /tmp/gh-aw/threat-detection/detection.log
- name: Execute GitHub Copilot CLI
if: always() && steps.detection_guard.outputs.run_detection == 'true'
id: detection_agentic_execution
# Copilot CLI tool arguments (sorted):
# --allow-tool shell(cat)
# --allow-tool shell(grep)
# --allow-tool shell(head)
# --allow-tool shell(jq)
# --allow-tool shell(ls)
# --allow-tool shell(tail)
# --allow-tool shell(wc)
timeout-minutes: 20
run: |
set -o pipefail
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
GH_AW_PHASE: detection
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_VERSION: v0.57.2
GITHUB_API_URL: ${{ github.api_url }}
GITHUB_AW: true
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
GITHUB_WORKSPACE: ${{ github.workspace }}
GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_AUTHOR_NAME: github-actions[bot]
GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
GIT_COMMITTER_NAME: github-actions[bot]
XDG_CONFIG_HOME: /home/runner
- name: Parse threat detection results
id: parse_detection_results
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
await main();
- name: Upload threat detection log
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: threat-detection.log
path: /tmp/gh-aw/threat-detection/detection.log
if-no-files-found: ignore
- name: Set detection conclusion
id: detection_conclusion
if: always()
env:
RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }}
DETECTION_SUCCESS: ${{ steps.parse_detection_results.outputs.success }}
run: |
if [[ "$RUN_DETECTION" != "true" ]]; then
echo "conclusion=skipped" >> "$GITHUB_OUTPUT"
echo "success=true" >> "$GITHUB_OUTPUT"
echo "Detection was not needed, marking as skipped"
elif [[ "$DETECTION_SUCCESS" == "true" ]]; then
echo "conclusion=success" >> "$GITHUB_OUTPUT"
echo "success=true" >> "$GITHUB_OUTPUT"
echo "Detection passed successfully"
else
echo "conclusion=failure" >> "$GITHUB_OUTPUT"
echo "success=false" >> "$GITHUB_OUTPUT"
echo "Detection found issues"
fi

conclusion:
needs:
- activation
- agent
- detection
- safe_outputs
- update_cache_memory
if: (always()) && (needs.agent.result != 'skipped')
@ -837,22 +994,27 @@ jobs:
permissions:
contents: read
issues: write
concurrency:
group: "gh-aw-conclusion-tactic-to-simplifier"
cancel-in-progress: false
outputs:
noop_message: ${{ steps.noop.outputs.noop_message }}
tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
total_count: ${{ steps.missing_tool.outputs.total_count }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
with:
destination: /opt/gh-aw/actions
- name: Download agent output artifact
id: download-agent-output
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
with:
name: agent-output
path: /tmp/gh-aw/safeoutputs/
- name: Setup agent output environment variable
if: steps.download-agent-output.outcome == 'success'
run: |
mkdir -p /tmp/gh-aw/safeoutputs/
find "/tmp/gh-aw/safeoutputs/" -type f -print
@ -862,7 +1024,7 @@ jobs:
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
GH_AW_NOOP_MAX: 1
GH_AW_NOOP_MAX: "1"
GH_AW_WORKFLOW_NAME: "Tactic-to-Simplifier Comparison Agent"
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
@ -893,8 +1055,12 @@ jobs:
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_WORKFLOW_ID: "tactic-to-simplifier"
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.agent.outputs.secret_verification_result }}
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }}
GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}
GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }}
GH_AW_GROUP_REPORTS: "false"
GH_AW_FAILURE_REPORT_AS_ISSUE: "true"
GH_AW_TIMEOUT_MINUTES: "30"
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
|
@ -911,7 +1077,7 @@ jobs:
|
|||
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
|
||||
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
|
||||
GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
|
||||
GH_AW_NOOP_REPORT_AS_ISSUE: "true"
|
||||
GH_AW_NOOP_REPORT_AS_ISSUE: "false"
|
||||
with:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
script: |
|
||||
|
|
@ -920,138 +1086,42 @@ jobs:
|
|||
const { main } = require('/opt/gh-aw/actions/handle_noop_message.cjs');
|
||||
await main();
|
||||
|
||||
detection:
|
||||
needs: agent
|
||||
if: needs.agent.outputs.output_types != '' || needs.agent.outputs.has_patch == 'true'
|
||||
runs-on: ubuntu-latest
|
||||
permissions: {}
|
||||
concurrency:
|
||||
group: "gh-aw-copilot-${{ github.workflow }}"
|
||||
timeout-minutes: 10
|
||||
outputs:
|
||||
success: ${{ steps.parse_results.outputs.success }}
|
||||
steps:
|
||||
- name: Setup Scripts
|
||||
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
|
||||
with:
|
||||
destination: /opt/gh-aw/actions
|
||||
- name: Download agent artifacts
|
||||
continue-on-error: true
|
||||
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
|
||||
with:
|
||||
name: agent-artifacts
|
||||
path: /tmp/gh-aw/threat-detection/
|
||||
- name: Download agent output artifact
|
||||
continue-on-error: true
|
||||
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
|
||||
with:
|
||||
name: agent-output
|
||||
path: /tmp/gh-aw/threat-detection/
|
||||
- name: Echo agent output types
|
||||
env:
|
||||
AGENT_OUTPUT_TYPES: ${{ needs.agent.outputs.output_types }}
|
||||
run: |
|
||||
echo "Agent output-types: $AGENT_OUTPUT_TYPES"
|
||||
- name: Setup threat detection
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
env:
|
||||
WORKFLOW_NAME: "Tactic-to-Simplifier Comparison Agent"
|
||||
WORKFLOW_DESCRIPTION: "Compares exposed tactics and simplifiers in Z3, and creates issues for tactics that can be converted to simplifiers"
|
||||
HAS_PATCH: ${{ needs.agent.outputs.has_patch }}
|
||||
with:
|
||||
script: |
|
||||
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
|
||||
setupGlobals(core, github, context, exec, io);
|
||||
const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
|
||||
await main();
|
||||
- name: Ensure threat-detection directory and log
|
||||
run: |
|
||||
mkdir -p /tmp/gh-aw/threat-detection
|
||||
touch /tmp/gh-aw/threat-detection/detection.log
|
||||
- name: Validate COPILOT_GITHUB_TOKEN secret
|
||||
id: validate-secret
|
||||
run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
|
||||
env:
|
||||
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
|
||||
- name: Install GitHub Copilot CLI
|
||||
run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
|
||||
- name: Execute GitHub Copilot CLI
|
||||
id: agentic_execution
|
||||
# Copilot CLI tool arguments (sorted):
|
||||
# --allow-tool shell(cat)
|
||||
# --allow-tool shell(grep)
|
||||
# --allow-tool shell(head)
|
||||
# --allow-tool shell(jq)
|
||||
# --allow-tool shell(ls)
|
||||
# --allow-tool shell(tail)
|
||||
# --allow-tool shell(wc)
|
||||
timeout-minutes: 20
|
||||
run: |
|
||||
set -o pipefail
|
||||
COPILOT_CLI_INSTRUCTION="$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"
|
||||
mkdir -p /tmp/
|
||||
mkdir -p /tmp/gh-aw/
|
||||
mkdir -p /tmp/gh-aw/agent/
|
||||
mkdir -p /tmp/gh-aw/sandbox/agent/logs/
|
||||
copilot --add-dir /tmp/ --add-dir /tmp/gh-aw/ --add-dir /tmp/gh-aw/agent/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-tool 'shell(cat)' --allow-tool 'shell(grep)' --allow-tool 'shell(head)' --allow-tool 'shell(jq)' --allow-tool 'shell(ls)' --allow-tool 'shell(tail)' --allow-tool 'shell(wc)' --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$COPILOT_CLI_INSTRUCTION"${GH_AW_MODEL_DETECTION_COPILOT:+ --model "$GH_AW_MODEL_DETECTION_COPILOT"} 2>&1 | tee /tmp/gh-aw/threat-detection/detection.log
|
||||
env:
|
||||
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
|
||||
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
|
||||
GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
|
||||
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
|
||||
GITHUB_HEAD_REF: ${{ github.head_ref }}
|
||||
GITHUB_REF_NAME: ${{ github.ref_name }}
|
||||
GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
|
||||
GITHUB_WORKSPACE: ${{ github.workspace }}
|
||||
XDG_CONFIG_HOME: /home/runner
|
||||
- name: Parse threat detection results
|
||||
id: parse_results
|
||||
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
|
||||
with:
|
||||
script: |
|
||||
const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
|
||||
setupGlobals(core, github, context, exec, io);
|
||||
const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
|
||||
await main();
|
||||
- name: Upload threat detection log
|
||||
if: always()
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
with:
|
||||
name: threat-detection.log
|
||||
path: /tmp/gh-aw/threat-detection/detection.log
|
||||
if-no-files-found: ignore
|
||||
|
||||
safe_outputs:
needs:
- agent
- detection
if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.detection.outputs.success == 'true')
needs: agent
if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.agent.outputs.detection_success == 'true')
runs-on: ubuntu-slim
permissions:
contents: read
issues: write
timeout-minutes: 15
env:
GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/tactic-to-simplifier"
GH_AW_ENGINE_ID: "copilot"
GH_AW_WORKFLOW_ID: "tactic-to-simplifier"
GH_AW_WORKFLOW_NAME: "Tactic-to-Simplifier Comparison Agent"
outputs:
code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}
code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}
create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }}
create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }}
created_issue_number: ${{ steps.process_safe_outputs.outputs.created_issue_number }}
created_issue_url: ${{ steps.process_safe_outputs.outputs.created_issue_url }}
process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }}
process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
steps:
- name: Setup Scripts
uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
with:
destination: /opt/gh-aw/actions
- name: Download agent output artifact
id: download-agent-output
continue-on-error: true
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
with:
name: agent-output
path: /tmp/gh-aw/safeoutputs/
- name: Setup agent output environment variable
if: steps.download-agent-output.outcome == 'success'
run: |
mkdir -p /tmp/gh-aw/safeoutputs/
find "/tmp/gh-aw/safeoutputs/" -type f -print
@@ -1061,6 +1131,9 @@ jobs:
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
          GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
          GITHUB_SERVER_URL: ${{ github.server_url }}
          GITHUB_API_URL: ${{ github.api_url }}
          GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_issue\":{\"labels\":[\"enhancement\",\"refactoring\",\"tactic-to-simplifier\"],\"max\":3,\"title_prefix\":\"[tactic-to-simplifier] \"},\"missing_data\":{},\"missing_tool\":{}}"
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
@@ -1069,27 +1142,45 @@ jobs:
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/safe_output_handler_manager.cjs');
            await main();
      - name: Upload safe output items manifest
        if: always()
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
        with:
          name: safe-output-items
          path: /tmp/safe-output-items.jsonl
          if-no-files-found: warn

  update_cache_memory:
-   needs:
-     - agent
-     - detection
-   if: always() && needs.detection.outputs.success == 'true'
+   needs: agent
+   if: always() && needs.agent.outputs.detection_success == 'true'
    runs-on: ubuntu-latest
    permissions: {}
    env:
      GH_AW_WORKFLOW_ID_SANITIZED: tactictosimplifier
    steps:
      - name: Setup Scripts
-       uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
+       uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
        with:
          destination: /opt/gh-aw/actions
      - name: Download cache-memory artifact (default)
-       uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
        id: download_cache_default
+       uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
        continue-on-error: true
        with:
          name: cache-memory
          path: /tmp/gh-aw/cache-memory
      - name: Check if cache-memory folder has content (default)
        id: check_cache_default
        shell: bash
        run: |
          if [ -d "/tmp/gh-aw/cache-memory" ] && [ "$(ls -A /tmp/gh-aw/cache-memory 2>/dev/null)" ]; then
            echo "has_content=true" >> "$GITHUB_OUTPUT"
          else
            echo "has_content=false" >> "$GITHUB_OUTPUT"
          fi
      - name: Save cache-memory to cache (default)
-       uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
        if: steps.check_cache_default.outputs.has_content == 'true'
+       uses: actions/cache/save@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
        with:
          key: memory-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }}
          path: /tmp/gh-aw/cache-memory
4 .github/workflows/tactic-to-simplifier.md vendored
@@ -30,11 +30,13 @@ safe-outputs:
      - tactic-to-simplifier
    title-prefix: "[tactic-to-simplifier] "
    max: 3
  noop:
    report-as-issue: false
  github-token: ${{ secrets.GITHUB_TOKEN }}

steps:
  - name: Checkout repository
-   uses: actions/checkout@v5
+   uses: actions/checkout@v6.0.2
    with:
      persist-credentials: false
587 .github/workflows/workflow-suggestion-agent.lock.yml generated vendored
@@ -13,7 +13,7 @@
# \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
# \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
#
- # This file was automatically generated by gh-aw (v0.45.6). DO NOT EDIT.
+ # This file was automatically generated by gh-aw (v0.57.2). DO NOT EDIT.
#
# To update this file, edit the corresponding .md file and run:
#   gh aw compile
@@ -23,7 +23,7 @@
#
# Weekly agent that suggests which agentic workflow agents should be added to the Z3 repository
#
- # gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"4b33fde33f7b00d5b78ebf13851b0c74a0b8a72ccd1d51ac5714095269b61862"}
+ # gh-aw-metadata: {"schema_version":"v2","frontmatter_hash":"59124869a8a5924dd1000f62007eb3bbcc53c3e16a0ea8a30cc80f008206de6d","compiler_version":"v0.57.2","strict":true}

name: "Workflow Suggestion Agent"
"on":
@@ -47,19 +47,51 @@ jobs:
    outputs:
      comment_id: ""
      comment_repo: ""
      model: ${{ steps.generate_aw_info.outputs.model }}
      secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
    steps:
      - name: Setup Scripts
-       uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
+       uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
        with:
          destination: /opt/gh-aw/actions
      - name: Generate agentic run info
        id: generate_aw_info
        env:
          GH_AW_INFO_ENGINE_ID: "copilot"
          GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
          GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
          GH_AW_INFO_VERSION: ""
          GH_AW_INFO_AGENT_VERSION: "latest"
          GH_AW_INFO_CLI_VERSION: "v0.57.2"
          GH_AW_INFO_WORKFLOW_NAME: "Workflow Suggestion Agent"
          GH_AW_INFO_EXPERIMENTAL: "false"
          GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"
          GH_AW_INFO_STAGED: "false"
          GH_AW_INFO_ALLOWED_DOMAINS: '["defaults"]'
          GH_AW_INFO_FIREWALL_ENABLED: "true"
          GH_AW_INFO_AWF_VERSION: "v0.23.0"
          GH_AW_INFO_AWMG_VERSION: ""
          GH_AW_INFO_FIREWALL_TYPE: "squid"
          GH_AW_COMPILED_STRICT: "true"
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        with:
          script: |
            const { main } = require('/opt/gh-aw/actions/generate_aw_info.cjs');
            await main(core, context);
      - name: Validate COPILOT_GITHUB_TOKEN secret
        id: validate-secret
        run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
        env:
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
      - name: Checkout .github and .agents folders
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
-         persist-credentials: false
          sparse-checkout: |
            .github
            .agents
          sparse-checkout-cone-mode: true
          fetch-depth: 1
+         persist-credentials: false
      - name: Check workflow file timestamps
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
@@ -85,42 +117,19 @@ jobs:
          GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
        run: |
          bash /opt/gh-aw/actions/create_prompt_first.sh
-         cat << 'GH_AW_PROMPT_EOF' > "$GH_AW_PROMPT"
+         {
+           cat << 'GH_AW_PROMPT_EOF'
          <system>
          GH_AW_PROMPT_EOF
-         cat "/opt/gh-aw/prompts/xpia.md" >> "$GH_AW_PROMPT"
-         cat "/opt/gh-aw/prompts/temp_folder_prompt.md" >> "$GH_AW_PROMPT"
-         cat "/opt/gh-aw/prompts/markdown.md" >> "$GH_AW_PROMPT"
-         cat "/opt/gh-aw/prompts/cache_memory_prompt.md" >> "$GH_AW_PROMPT"
-         cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
-         <safe-outputs>
-         <description>GitHub API Access Instructions</description>
-         <important>
-         The gh CLI is NOT authenticated. Do NOT use gh commands for GitHub operations.
-         </important>
-         <instructions>
-         To create or modify GitHub resources (issues, discussions, pull requests, etc.), you MUST call the appropriate safe output tool. Simply writing content will NOT work - the workflow requires actual tool calls.
-
-         Temporary IDs: Some safe output tools support a temporary ID field (usually named temporary_id) so you can reference newly-created items elsewhere in the SAME agent output (for example, using #aw_abc1 in a later body).
-
-         **IMPORTANT - temporary_id format rules:**
-         - If you DON'T need to reference the item later, OMIT the temporary_id field entirely (it will be auto-generated if needed)
-         - If you DO need cross-references/chaining, you MUST match this EXACT validation regex: /^aw_[A-Za-z0-9]{3,8}$/i
-         - Format: aw_ prefix followed by 3 to 8 alphanumeric characters (A-Z, a-z, 0-9, case-insensitive)
-         - Valid alphanumeric characters: ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789
-         - INVALID examples: aw_ab (too short), aw_123456789 (too long), aw_test-id (contains hyphen), aw_id_123 (contains underscore)
-         - VALID examples: aw_abc, aw_abc1, aw_Test123, aw_A1B2C3D4, aw_12345678
-         - To generate valid IDs: use 3-8 random alphanumeric characters or omit the field to let the system auto-generate
-
-         Do NOT invent other aw_* formats — downstream steps will reject them with validation errors matching against /^aw_[A-Za-z0-9]{3,8}$/i.
-
-         Discover available tools from the safeoutputs MCP server.
-
-         **Critical**: Tool calls write structured data that downstream jobs process. Without tool calls, follow-up actions will be skipped.
-
-         **Note**: If you made no other safe output tool calls during this workflow execution, call the "noop" tool to provide a status message indicating completion or that no actions were needed.
-         </instructions>
-         </safe-outputs>
+           cat "/opt/gh-aw/prompts/xpia.md"
+           cat "/opt/gh-aw/prompts/temp_folder_prompt.md"
+           cat "/opt/gh-aw/prompts/markdown.md"
+           cat "/opt/gh-aw/prompts/cache_memory_prompt.md"
+           cat "/opt/gh-aw/prompts/safe_outputs_prompt.md"
+           cat << 'GH_AW_PROMPT_EOF'
+         <safe-output-tools>
+         Tools: create_discussion, missing_tool, missing_data, noop
+         </safe-output-tools>
          <github-context>
          The following GitHub context information is available for this workflow:
          {{#if __GH_AW_GITHUB_ACTOR__ }}
@@ -150,12 +159,13 @@ jobs:
          </github-context>

          GH_AW_PROMPT_EOF
-         cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
+           cat << 'GH_AW_PROMPT_EOF'
          </system>
          GH_AW_PROMPT_EOF
-         cat << 'GH_AW_PROMPT_EOF' >> "$GH_AW_PROMPT"
+           cat << 'GH_AW_PROMPT_EOF'
          {{#runtime-import .github/workflows/workflow-suggestion-agent.md}}
          GH_AW_PROMPT_EOF
+         } > "$GH_AW_PROMPT"
      - name: Interpolate variables and render templates
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
@@ -184,8 +194,6 @@ jobs:
          GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
          GH_AW_GITHUB_WORKFLOW: ${{ github.workflow }}
          GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
-         GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: ${{ needs.pre_activation.outputs.activated }}
-         GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: ${{ needs.pre_activation.outputs.matched_command }}
        with:
          script: |
            const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
@@ -208,9 +216,7 @@ jobs:
              GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,
              GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID,
              GH_AW_GITHUB_WORKFLOW: process.env.GH_AW_GITHUB_WORKFLOW,
-             GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE,
-             GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_ACTIVATED,
-             GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND: process.env.GH_AW_NEEDS_PRE_ACTIVATION_OUTPUTS_MATCHED_COMMAND
+             GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE
            }
          });
      - name: Validate prompt placeholders
@@ -221,12 +227,14 @@ jobs:
        env:
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
        run: bash /opt/gh-aw/actions/print_prompt_summary.sh
-     - name: Upload prompt artifact
+     - name: Upload activation artifact
        if: success()
-       uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+       uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
        with:
-         name: prompt
-         path: /tmp/gh-aw/aw-prompts/prompt.txt
+         name: activation
+         path: |
+           /tmp/gh-aw/aw_info.json
+           /tmp/gh-aw/aw-prompts/prompt.txt
          retention-days: 1

  agent:
@@ -247,20 +255,22 @@ jobs:
      GH_AW_WORKFLOW_ID_SANITIZED: workflowsuggestionagent
    outputs:
      checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }}
      detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
      detection_success: ${{ steps.detection_conclusion.outputs.success }}
      has_patch: ${{ steps.collect_output.outputs.has_patch }}
-     model: ${{ steps.generate_aw_info.outputs.model }}
      inference_access_error: ${{ steps.detect-inference-error.outputs.inference_access_error || 'false' }}
+     model: ${{ needs.activation.outputs.model }}
      output: ${{ steps.collect_output.outputs.output }}
      output_types: ${{ steps.collect_output.outputs.output_types }}
      secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
    steps:
      - name: Setup Scripts
-       uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
+       uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
        with:
          destination: /opt/gh-aw/actions
      - name: Create gh-aw temp directory
        run: bash /opt/gh-aw/actions/create_gh_aw_tmp_dir.sh
      - name: Checkout repository
-       uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5
+       uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false
@@ -268,7 +278,7 @@ jobs:
      - name: Create cache-memory directory
        run: bash /opt/gh-aw/actions/create_cache_memory_dir.sh
      - name: Restore cache-memory file share data
-       uses: actions/cache/restore@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
+       uses: actions/cache/restore@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
        with:
          key: memory-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }}
          path: /tmp/gh-aw/cache-memory
@@ -281,6 +291,7 @@ jobs:
        run: |
          git config --global user.email "github-actions[bot]@users.noreply.github.com"
          git config --global user.name "github-actions[bot]"
+         git config --global am.keepcr true
          # Re-authenticate git with GitHub token
          SERVER_URL_STRIPPED="${SERVER_URL#https://}"
          git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
@@ -288,7 +299,7 @@ jobs:
      - name: Checkout PR branch
        id: checkout-pr
        if: |
-         github.event.pull_request
+         (github.event.pull_request) || (github.event.issue.pull_request)
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -299,59 +310,10 @@ jobs:
            setupGlobals(core, github, context, exec, io);
            const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
            await main();
-     - name: Generate agentic run info
-       id: generate_aw_info
-       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
-       with:
-         script: |
-           const fs = require('fs');
-
-           const awInfo = {
-             engine_id: "copilot",
-             engine_name: "GitHub Copilot CLI",
-             model: process.env.GH_AW_MODEL_AGENT_COPILOT || "",
-             version: "",
-             agent_version: "0.0.410",
-             cli_version: "v0.45.6",
-             workflow_name: "Workflow Suggestion Agent",
-             experimental: false,
-             supports_tools_allowlist: true,
-             run_id: context.runId,
-             run_number: context.runNumber,
-             run_attempt: process.env.GITHUB_RUN_ATTEMPT,
-             repository: context.repo.owner + '/' + context.repo.repo,
-             ref: context.ref,
-             sha: context.sha,
-             actor: context.actor,
-             event_name: context.eventName,
-             staged: false,
-             allowed_domains: ["defaults"],
-             firewall_enabled: true,
-             awf_version: "v0.19.1",
-             awmg_version: "v0.1.4",
-             steps: {
-               firewall: "squid"
-             },
-             created_at: new Date().toISOString()
-           };
-
-           // Write to /tmp/gh-aw directory to avoid inclusion in PR
-           const tmpPath = '/tmp/gh-aw/aw_info.json';
-           fs.writeFileSync(tmpPath, JSON.stringify(awInfo, null, 2));
-           console.log('Generated aw_info.json at:', tmpPath);
-           console.log(JSON.stringify(awInfo, null, 2));
-
-           // Set model as output for reuse in other steps/jobs
-           core.setOutput('model', awInfo.model);
-     - name: Validate COPILOT_GITHUB_TOKEN secret
-       id: validate-secret
-       run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
-       env:
-         COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
      - name: Install GitHub Copilot CLI
-       run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
+       run: /opt/gh-aw/actions/install_copilot_cli.sh latest
      - name: Install awf binary
-       run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.19.1
+       run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
      - name: Determine automatic lockdown mode for GitHub MCP Server
        id: determine-automatic-lockdown
        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
@@ -363,7 +325,7 @@ jobs:
            const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
            await determineAutomaticLockdown(github, context, core);
      - name: Download container images
-       run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.19.1 ghcr.io/github/gh-aw-firewall/squid:0.19.1 ghcr.io/github/gh-aw-mcpg:v0.1.4 ghcr.io/github/github-mcp-server:v0.30.3 ghcr.io/github/serena-mcp-server:latest node:lts-alpine
+       run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.32.0 ghcr.io/github/serena-mcp-server:latest node:lts-alpine
      - name: Write Safe Outputs Config
        run: |
          mkdir -p /opt/gh-aw/safeoutputs
@@ -387,6 +349,14 @@ jobs:
            "description": "Discussion category by name (e.g., 'General'), slug (e.g., 'general'), or ID. If omitted, uses the first available category. Category must exist in the repository.",
            "type": "string"
          },
+         "integrity": {
+           "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+           "type": "string"
+         },
+         "secrecy": {
+           "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+           "type": "string"
+         },
          "title": {
            "description": "Concise discussion title summarizing the topic. The title appears as the main heading, so keep it brief and descriptive.",
            "type": "string"
@@ -409,10 +379,18 @@ jobs:
            "description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
            "type": "string"
          },
+         "integrity": {
+           "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+           "type": "string"
+         },
          "reason": {
            "description": "Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).",
            "type": "string"
          },
+         "secrecy": {
+           "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+           "type": "string"
+         },
          "tool": {
            "description": "Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.",
            "type": "string"
@@ -430,9 +408,17 @@ jobs:
        "inputSchema": {
          "additionalProperties": false,
          "properties": {
+           "integrity": {
+             "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+             "type": "string"
+           },
            "message": {
              "description": "Status or completion message to log. Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').",
              "type": "string"
            },
+           "secrecy": {
+             "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+             "type": "string"
+           }
          },
          "required": [
@@ -459,9 +445,17 @@ jobs:
            "description": "Type or description of the missing data or information (max 128 characters). Be specific about what data is needed.",
            "type": "string"
          },
+         "integrity": {
+           "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
+           "type": "string"
+         },
          "reason": {
            "description": "Explanation of why this data is needed to complete the task (max 256 characters).",
            "type": "string"
          },
+         "secrecy": {
+           "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
+           "type": "string"
+         }
        },
        "required": [],
@@ -499,6 +493,31 @@ jobs:
              }
            }
          },
+         "missing_data": {
+           "defaultMax": 20,
+           "fields": {
+             "alternatives": {
+               "type": "string",
+               "sanitize": true,
+               "maxLength": 256
+             },
+             "context": {
+               "type": "string",
+               "sanitize": true,
+               "maxLength": 256
+             },
+             "data_type": {
+               "type": "string",
+               "sanitize": true,
+               "maxLength": 128
+             },
+             "reason": {
+               "type": "string",
+               "sanitize": true,
+               "maxLength": 256
+             }
+           }
+         },
          "missing_tool": {
            "defaultMax": 20,
            "fields": {
@@ -591,10 +610,11 @@ jobs:
          export MCP_GATEWAY_API_KEY
          export MCP_GATEWAY_PAYLOAD_DIR="/tmp/gh-aw/mcp-payloads"
          mkdir -p "${MCP_GATEWAY_PAYLOAD_DIR}"
+         export MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD="524288"
          export DEBUG="*"

          export GH_AW_ENGINE="copilot"
-         export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.4'
+         export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.8'

          mkdir -p /home/runner/.copilot
          cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh
@@ -602,7 +622,7 @@ jobs:
            "mcpServers": {
              "github": {
                "type": "stdio",
-               "container": "ghcr.io/github/github-mcp-server:v0.30.3",
+               "container": "ghcr.io/github/github-mcp-server:v0.32.0",
                "env": {
                  "GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
                  "GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",
@@ -634,17 +654,11 @@ jobs:
          }
          }
          GH_AW_MCP_CONFIG_EOF
-     - name: Generate workflow overview
-       uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
-       with:
-         script: |
-           const { generateWorkflowOverview } = require('/opt/gh-aw/actions/generate_workflow_overview.cjs');
-           await generateWorkflowOverview(core);
-     - name: Download prompt artifact
-       uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
-       with:
-         name: prompt
-         path: /tmp/gh-aw/aw-prompts
+     - name: Download activation artifact
+       uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
+       with:
+         name: activation
+         path: /tmp/gh-aw
      - name: Clean git credentials
        run: bash /opt/gh-aw/actions/clean_git_credentials.sh
      - name: Execute GitHub Copilot CLI
@@ -653,20 +667,37 @@ jobs:
        timeout-minutes: 30
        run: |
          set -o pipefail
-         sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.19.1 --skip-pull \
-           -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_AGENT_COPILOT:+ --model "$GH_AW_MODEL_AGENT_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
+         touch /tmp/gh-aw/agent-step-summary.md
+         # shellcheck disable=SC1003
+         sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
+           -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-all-tools --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
        env:
          COPILOT_AGENT_RUNNER_TYPE: STANDALONE
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
          COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
          GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
          GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
          GH_AW_PHASE: agent
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
          GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
          GH_AW_VERSION: v0.57.2
          GITHUB_API_URL: ${{ github.api_url }}
          GITHUB_AW: true
          GITHUB_HEAD_REF: ${{ github.head_ref }}
          GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          GITHUB_REF_NAME: ${{ github.ref_name }}
-         GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
          GITHUB_SERVER_URL: ${{ github.server_url }}
+         GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
          GITHUB_WORKSPACE: ${{ github.workspace }}
          GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
          GIT_AUTHOR_NAME: github-actions[bot]
          GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
          GIT_COMMITTER_NAME: github-actions[bot]
          XDG_CONFIG_HOME: /home/runner
      - name: Detect inference access error
        id: detect-inference-error
        if: always()
        continue-on-error: true
        run: bash /opt/gh-aw/actions/detect_inference_access_error.sh
      - name: Configure Git credentials
        env:
          REPO_NAME: ${{ github.repository }}
@@ -674,6 +705,7 @@ jobs:
        run: |
          git config --global user.email "github-actions[bot]@users.noreply.github.com"
          git config --global user.name "github-actions[bot]"
+         git config --global am.keepcr true
          # Re-authenticate git with GitHub token
          SERVER_URL_STRIPPED="${SERVER_URL#https://}"
          git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
@@ -719,9 +751,12 @@ jobs:
          SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
          SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
          SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+     - name: Append agent step summary
+       if: always()
+       run: bash /opt/gh-aw/actions/append_agent_step_summary.sh
      - name: Upload Safe Outputs
        if: always()
-       uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
+       uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
        with:
          name: safe-output
          path: ${{ env.GH_AW_SAFE_OUTPUTS }}
@ -743,13 +778,13 @@ jobs:
|
|||
await main();
|
||||
- name: Upload sanitized agent output
|
||||
if: always() && env.GH_AW_AGENT_OUTPUT
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: agent-output
|
||||
path: ${{ env.GH_AW_AGENT_OUTPUT }}
|
||||
if-no-files-found: warn
|
||||
- name: Upload engine output files
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
with:
|
||||
name: agent_outputs
|
||||
path: |
|
||||
|
|
@ -792,7 +827,7 @@ jobs:
|
|||
echo 'AWF binary not installed, skipping firewall log summary'
|
||||
fi
|
||||
- name: Upload cache-memory data as artifact
|
||||
uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
|
||||
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
|
||||
if: always()
|
||||
with:
|
||||
name: cache-memory
|
||||
|
|
@@ -800,23 +835,145 @@ jobs:
- name: Upload agent artifacts
  if: always()
  continue-on-error: true
  uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
  uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
  with:
    name: agent-artifacts
    path: |
      /tmp/gh-aw/aw-prompts/prompt.txt
      /tmp/gh-aw/aw_info.json
      /tmp/gh-aw/mcp-logs/
      /tmp/gh-aw/sandbox/firewall/logs/
      /tmp/gh-aw/agent-stdio.log
      /tmp/gh-aw/agent/
    if-no-files-found: ignore
# --- Threat Detection (inline) ---
- name: Check if detection needed
  id: detection_guard
  if: always()
  env:
    OUTPUT_TYPES: ${{ steps.collect_output.outputs.output_types }}
    HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
  run: |
    if [[ -n "$OUTPUT_TYPES" || "$HAS_PATCH" == "true" ]]; then
      echo "run_detection=true" >> "$GITHUB_OUTPUT"
      echo "Detection will run: output_types=$OUTPUT_TYPES, has_patch=$HAS_PATCH"
    else
      echo "run_detection=false" >> "$GITHUB_OUTPUT"
      echo "Detection skipped: no agent outputs or patches to analyze"
    fi
- name: Clear MCP configuration for detection
  if: always() && steps.detection_guard.outputs.run_detection == 'true'
  run: |
    rm -f /tmp/gh-aw/mcp-config/mcp-servers.json
    rm -f /home/runner/.copilot/mcp-config.json
    rm -f "$GITHUB_WORKSPACE/.gemini/settings.json"
- name: Prepare threat detection files
  if: always() && steps.detection_guard.outputs.run_detection == 'true'
  run: |
    mkdir -p /tmp/gh-aw/threat-detection/aw-prompts
    cp /tmp/gh-aw/aw-prompts/prompt.txt /tmp/gh-aw/threat-detection/aw-prompts/prompt.txt 2>/dev/null || true
    cp /tmp/gh-aw/agent_output.json /tmp/gh-aw/threat-detection/agent_output.json 2>/dev/null || true
    for f in /tmp/gh-aw/aw-*.patch; do
      [ -f "$f" ] && cp "$f" /tmp/gh-aw/threat-detection/ 2>/dev/null || true
    done
    echo "Prepared threat detection files:"
    ls -la /tmp/gh-aw/threat-detection/ 2>/dev/null || true
- name: Setup threat detection
  if: always() && steps.detection_guard.outputs.run_detection == 'true'
  uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
  env:
    WORKFLOW_NAME: "Workflow Suggestion Agent"
    WORKFLOW_DESCRIPTION: "Weekly agent that suggests which agentic workflow agents should be added to the Z3 repository"
    HAS_PATCH: ${{ steps.collect_output.outputs.has_patch }}
  with:
    script: |
      const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
      setupGlobals(core, github, context, exec, io);
      const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
      await main();
- name: Ensure threat-detection directory and log
  if: always() && steps.detection_guard.outputs.run_detection == 'true'
  run: |
    mkdir -p /tmp/gh-aw/threat-detection
    touch /tmp/gh-aw/threat-detection/detection.log
- name: Execute GitHub Copilot CLI
  if: always() && steps.detection_guard.outputs.run_detection == 'true'
  id: detection_agentic_execution
  # Copilot CLI tool arguments (sorted):
  # --allow-tool shell(cat)
  # --allow-tool shell(grep)
  # --allow-tool shell(head)
  # --allow-tool shell(jq)
  # --allow-tool shell(ls)
  # --allow-tool shell(tail)
  # --allow-tool shell(wc)
  timeout-minutes: 20
  run: |
    set -o pipefail
    touch /tmp/gh-aw/agent-step-summary.md
    # shellcheck disable=SC1003
    sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
      -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
  env:
    COPILOT_AGENT_RUNNER_TYPE: STANDALONE
    COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
    COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
    GH_AW_PHASE: detection
    GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
    GH_AW_VERSION: v0.57.2
    GITHUB_API_URL: ${{ github.api_url }}
    GITHUB_AW: true
    GITHUB_HEAD_REF: ${{ github.head_ref }}
    GITHUB_REF_NAME: ${{ github.ref_name }}
    GITHUB_SERVER_URL: ${{ github.server_url }}
    GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
    GITHUB_WORKSPACE: ${{ github.workspace }}
    GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
    GIT_AUTHOR_NAME: github-actions[bot]
    GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
    GIT_COMMITTER_NAME: github-actions[bot]
    XDG_CONFIG_HOME: /home/runner
- name: Parse threat detection results
  id: parse_detection_results
  if: always() && steps.detection_guard.outputs.run_detection == 'true'
  uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
  with:
    script: |
      const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
      setupGlobals(core, github, context, exec, io);
      const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
      await main();
- name: Upload threat detection log
  if: always() && steps.detection_guard.outputs.run_detection == 'true'
  uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
  with:
    name: threat-detection.log
    path: /tmp/gh-aw/threat-detection/detection.log
    if-no-files-found: ignore
- name: Set detection conclusion
  id: detection_conclusion
  if: always()
  env:
    RUN_DETECTION: ${{ steps.detection_guard.outputs.run_detection }}
    DETECTION_SUCCESS: ${{ steps.parse_detection_results.outputs.success }}
  run: |
    if [[ "$RUN_DETECTION" != "true" ]]; then
      echo "conclusion=skipped" >> "$GITHUB_OUTPUT"
      echo "success=true" >> "$GITHUB_OUTPUT"
      echo "Detection was not needed, marking as skipped"
    elif [[ "$DETECTION_SUCCESS" == "true" ]]; then
      echo "conclusion=success" >> "$GITHUB_OUTPUT"
      echo "success=true" >> "$GITHUB_OUTPUT"
      echo "Detection passed successfully"
    else
      echo "conclusion=failure" >> "$GITHUB_OUTPUT"
      echo "success=false" >> "$GITHUB_OUTPUT"
      echo "Detection found issues"
    fi

conclusion:
  needs:
    - activation
    - agent
    - detection
    - safe_outputs
    - update_cache_memory
  if: (always()) && (needs.agent.result != 'skipped')
@@ -825,22 +982,27 @@ jobs:
contents: read
discussions: write
issues: write
concurrency:
  group: "gh-aw-conclusion-workflow-suggestion-agent"
  cancel-in-progress: false
outputs:
  noop_message: ${{ steps.noop.outputs.noop_message }}
  tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
  total_count: ${{ steps.missing_tool.outputs.total_count }}
steps:
  - name: Setup Scripts
    uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
    uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
    with:
      destination: /opt/gh-aw/actions
  - name: Download agent output artifact
    id: download-agent-output
    continue-on-error: true
    uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
    uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
    with:
      name: agent-output
      path: /tmp/gh-aw/safeoutputs/
  - name: Setup agent output environment variable
    if: steps.download-agent-output.outcome == 'success'
    run: |
      mkdir -p /tmp/gh-aw/safeoutputs/
      find "/tmp/gh-aw/safeoutputs/" -type f -print

@@ -850,7 +1012,7 @@ jobs:
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
  GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
  GH_AW_NOOP_MAX: 1
  GH_AW_NOOP_MAX: "1"
  GH_AW_WORKFLOW_NAME: "Workflow Suggestion Agent"
with:
  github-token: ${{ secrets.GITHUB_TOKEN }}

@@ -881,10 +1043,14 @@ jobs:
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_WORKFLOW_ID: "workflow-suggestion-agent"
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.agent.outputs.secret_verification_result }}
GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }}
GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}
GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }}
GH_AW_CREATE_DISCUSSION_ERRORS: ${{ needs.safe_outputs.outputs.create_discussion_errors }}
GH_AW_CREATE_DISCUSSION_ERROR_COUNT: ${{ needs.safe_outputs.outputs.create_discussion_error_count }}
GH_AW_GROUP_REPORTS: "false"
GH_AW_FAILURE_REPORT_AS_ISSUE: "true"
GH_AW_TIMEOUT_MINUTES: "30"
with:
  github-token: ${{ secrets.GITHUB_TOKEN }}
  script: |

@@ -901,7 +1067,7 @@ jobs:
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
GH_AW_NOOP_REPORT_AS_ISSUE: "true"
GH_AW_NOOP_REPORT_AS_ISSUE: "false"
with:
  github-token: ${{ secrets.GITHUB_TOKEN }}
  script: |

@@ -910,112 +1076,9 @@ jobs:
const { main } = require('/opt/gh-aw/actions/handle_noop_message.cjs');
await main();

detection:
  needs: agent
  if: needs.agent.outputs.output_types != '' || needs.agent.outputs.has_patch == 'true'
  runs-on: ubuntu-latest
  permissions: {}
  concurrency:
    group: "gh-aw-copilot-${{ github.workflow }}"
  timeout-minutes: 10
  outputs:
    success: ${{ steps.parse_results.outputs.success }}
  steps:
    - name: Setup Scripts
      uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
      with:
        destination: /opt/gh-aw/actions
    - name: Download agent artifacts
      continue-on-error: true
      uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
      with:
        name: agent-artifacts
        path: /tmp/gh-aw/threat-detection/
    - name: Download agent output artifact
      continue-on-error: true
      uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
      with:
        name: agent-output
        path: /tmp/gh-aw/threat-detection/
    - name: Echo agent output types
      env:
        AGENT_OUTPUT_TYPES: ${{ needs.agent.outputs.output_types }}
      run: |
        echo "Agent output-types: $AGENT_OUTPUT_TYPES"
    - name: Setup threat detection
      uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
      env:
        WORKFLOW_NAME: "Workflow Suggestion Agent"
        WORKFLOW_DESCRIPTION: "Weekly agent that suggests which agentic workflow agents should be added to the Z3 repository"
        HAS_PATCH: ${{ needs.agent.outputs.has_patch }}
      with:
        script: |
          const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
          setupGlobals(core, github, context, exec, io);
          const { main } = require('/opt/gh-aw/actions/setup_threat_detection.cjs');
          await main();
    - name: Ensure threat-detection directory and log
      run: |
        mkdir -p /tmp/gh-aw/threat-detection
        touch /tmp/gh-aw/threat-detection/detection.log
    - name: Validate COPILOT_GITHUB_TOKEN secret
      id: validate-secret
      run: /opt/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
      env:
        COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
    - name: Install GitHub Copilot CLI
      run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.410
    - name: Execute GitHub Copilot CLI
      id: agentic_execution
      # Copilot CLI tool arguments (sorted):
      # --allow-tool shell(cat)
      # --allow-tool shell(grep)
      # --allow-tool shell(head)
      # --allow-tool shell(jq)
      # --allow-tool shell(ls)
      # --allow-tool shell(tail)
      # --allow-tool shell(wc)
      timeout-minutes: 20
      run: |
        set -o pipefail
        COPILOT_CLI_INSTRUCTION="$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"
        mkdir -p /tmp/
        mkdir -p /tmp/gh-aw/
        mkdir -p /tmp/gh-aw/agent/
        mkdir -p /tmp/gh-aw/sandbox/agent/logs/
        copilot --add-dir /tmp/ --add-dir /tmp/gh-aw/ --add-dir /tmp/gh-aw/agent/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-tool 'shell(cat)' --allow-tool 'shell(grep)' --allow-tool 'shell(head)' --allow-tool 'shell(jq)' --allow-tool 'shell(ls)' --allow-tool 'shell(tail)' --allow-tool 'shell(wc)' --share /tmp/gh-aw/sandbox/agent/logs/conversation.md --prompt "$COPILOT_CLI_INSTRUCTION"${GH_AW_MODEL_DETECTION_COPILOT:+ --model "$GH_AW_MODEL_DETECTION_COPILOT"} 2>&1 | tee /tmp/gh-aw/threat-detection/detection.log
      env:
        COPILOT_AGENT_RUNNER_TYPE: STANDALONE
        COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
        GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
        GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
        GITHUB_HEAD_REF: ${{ github.head_ref }}
        GITHUB_REF_NAME: ${{ github.ref_name }}
        GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
        GITHUB_WORKSPACE: ${{ github.workspace }}
        XDG_CONFIG_HOME: /home/runner
    - name: Parse threat detection results
      id: parse_results
      uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
      with:
        script: |
          const { setupGlobals } = require('/opt/gh-aw/actions/setup_globals.cjs');
          setupGlobals(core, github, context, exec, io);
          const { main } = require('/opt/gh-aw/actions/parse_threat_detection_results.cjs');
          await main();
    - name: Upload threat detection log
      if: always()
      uses: actions/upload-artifact@b7c566a772e6b6bfb58ed0dc250532a479d7789f # v6
      with:
        name: threat-detection.log
        path: /tmp/gh-aw/threat-detection/detection.log
        if-no-files-found: ignore

safe_outputs:
  needs:
    - agent
    - detection
  if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.detection.outputs.success == 'true')
  needs: agent
  if: ((!cancelled()) && (needs.agent.result != 'skipped')) && (needs.agent.outputs.detection_success == 'true')
  runs-on: ubuntu-slim
  permissions:
    contents: read
@@ -1023,26 +1086,31 @@ jobs:
issues: write
timeout-minutes: 15
env:
  GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/workflow-suggestion-agent"
  GH_AW_ENGINE_ID: "copilot"
  GH_AW_WORKFLOW_ID: "workflow-suggestion-agent"
  GH_AW_WORKFLOW_NAME: "Workflow Suggestion Agent"
outputs:
  code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}
  code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}
  create_discussion_error_count: ${{ steps.process_safe_outputs.outputs.create_discussion_error_count }}
  create_discussion_errors: ${{ steps.process_safe_outputs.outputs.create_discussion_errors }}
  process_safe_outputs_processed_count: ${{ steps.process_safe_outputs.outputs.processed_count }}
  process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
steps:
  - name: Setup Scripts
    uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
    uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
    with:
      destination: /opt/gh-aw/actions
  - name: Download agent output artifact
    id: download-agent-output
    continue-on-error: true
    uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
    uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
    with:
      name: agent-output
      path: /tmp/gh-aw/safeoutputs/
  - name: Setup agent output environment variable
    if: steps.download-agent-output.outcome == 'success'
    run: |
      mkdir -p /tmp/gh-aw/safeoutputs/
      find "/tmp/gh-aw/safeoutputs/" -type f -print

@@ -1052,6 +1120,9 @@ jobs:
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
  GH_AW_AGENT_OUTPUT: ${{ env.GH_AW_AGENT_OUTPUT }}
  GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com"
  GITHUB_SERVER_URL: ${{ github.server_url }}
  GITHUB_API_URL: ${{ github.api_url }}
  GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_discussion\":{\"category\":\"agentic workflows\",\"close_older_discussions\":true,\"expires\":168,\"fallback_to_issue\":true,\"max\":1,\"title_prefix\":\"[Workflow Suggestions] \"},\"missing_data\":{},\"missing_tool\":{}}"
with:
  github-token: ${{ secrets.GITHUB_TOKEN }}

@@ -1060,27 +1131,45 @@ jobs:
setupGlobals(core, github, context, exec, io);
const { main } = require('/opt/gh-aw/actions/safe_output_handler_manager.cjs');
await main();
- name: Upload safe output items manifest
  if: always()
  uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
  with:
    name: safe-output-items
    path: /tmp/safe-output-items.jsonl
    if-no-files-found: warn

update_cache_memory:
  needs:
    - agent
    - detection
  if: always() && needs.detection.outputs.success == 'true'
  needs: agent
  if: always() && needs.agent.outputs.detection_success == 'true'
  runs-on: ubuntu-latest
  permissions: {}
  env:
    GH_AW_WORKFLOW_ID_SANITIZED: workflowsuggestionagent
  steps:
    - name: Setup Scripts
      uses: github/gh-aw/actions/setup@c3acb23c6772826a8df80b2b68ae13d268ff43e1 # v0.45.6
      uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
      with:
        destination: /opt/gh-aw/actions
    - name: Download cache-memory artifact (default)
      uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6
      id: download_cache_default
      uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
      continue-on-error: true
      with:
        name: cache-memory
        path: /tmp/gh-aw/cache-memory
    - name: Check if cache-memory folder has content (default)
      id: check_cache_default
      shell: bash
      run: |
        if [ -d "/tmp/gh-aw/cache-memory" ] && [ "$(ls -A /tmp/gh-aw/cache-memory 2>/dev/null)" ]; then
          echo "has_content=true" >> "$GITHUB_OUTPUT"
        else
          echo "has_content=false" >> "$GITHUB_OUTPUT"
        fi
    - name: Save cache-memory to cache (default)
      uses: actions/cache/save@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
      if: steps.check_cache_default.outputs.has_content == 'true'
      uses: actions/cache/save@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5.0.3
      with:
        key: memory-${{ env.GH_AW_WORKFLOW_ID_SANITIZED }}-${{ github.run_id }}
        path: /tmp/gh-aw/cache-memory
@@ -23,11 +23,13 @@ safe-outputs:
title-prefix: "[Workflow Suggestions] "
category: "Agentic Workflows"
close-older-discussions: true
noop:
  report-as-issue: false
github-token: ${{ secrets.GITHUB_TOKEN }}

steps:
  - name: Checkout repository
    uses: actions/checkout@v5
    uses: actions/checkout@v6.0.2
    with:
      persist-credentials: false

120 .github/workflows/zipt-code-reviewer.lock.yml generated vendored
@@ -13,7 +13,7 @@
# \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
#  \/  \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
#
# This file was automatically generated by gh-aw (v0.51.6). DO NOT EDIT.
# This file was automatically generated by gh-aw (v0.57.2). DO NOT EDIT.
#
# To update this file, edit the corresponding .md file and run:
#   gh aw compile

@@ -23,7 +23,7 @@
#
# Reviews Z3 string/sequence graph implementation (euf_sgraph, euf_seq_plugin, src/smt/seq) by comparing with the ZIPT reference implementation and reporting improvements as git diffs in GitHub issues
#
# gh-aw-metadata: {"schema_version":"v1","frontmatter_hash":"adecdddc8c5555c7d326638cfa13674b67a5ef94e37a23c4c4d84824ab82ad9c","compiler_version":"v0.51.6"}
# gh-aw-metadata: {"schema_version":"v2","frontmatter_hash":"d9207e6b6bf1f4cf435599de0128969e89aac9bc6235e505631482c35af1d1c4","compiler_version":"v0.57.2","strict":true}

name: "ZIPT Code Reviewer"
"on":

@@ -50,7 +50,7 @@ jobs:
secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
steps:
  - name: Setup Scripts
    uses: github/gh-aw/actions/setup@v0.51.6
    uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
    with:
      destination: /opt/gh-aw/actions
  - name: Generate agentic run info

@@ -60,8 +60,8 @@ jobs:
GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
GH_AW_INFO_VERSION: ""
GH_AW_INFO_AGENT_VERSION: "0.0.420"
GH_AW_INFO_CLI_VERSION: "v0.51.6"
GH_AW_INFO_AGENT_VERSION: "latest"
GH_AW_INFO_CLI_VERSION: "v0.57.2"
GH_AW_INFO_WORKFLOW_NAME: "ZIPT Code Reviewer"
GH_AW_INFO_EXPERIMENTAL: "false"
GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"

@@ -71,6 +71,7 @@ jobs:
GH_AW_INFO_AWF_VERSION: "v0.23.0"
GH_AW_INFO_AWMG_VERSION: ""
GH_AW_INFO_FIREWALL_TYPE: "squid"
GH_AW_COMPILED_STRICT: "true"
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
  script: |

@@ -84,12 +85,12 @@ jobs:
- name: Checkout .github and .agents folders
  uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
  with:
    persist-credentials: false
    sparse-checkout: |
      .github
      .agents
    sparse-checkout-cone-mode: true
    fetch-depth: 1
    persist-credentials: false
- name: Check workflow file timestamps
  uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
  env:

@@ -253,18 +254,19 @@ jobs:
detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
detection_success: ${{ steps.detection_conclusion.outputs.success }}
has_patch: ${{ steps.collect_output.outputs.has_patch }}
inference_access_error: ${{ steps.detect-inference-error.outputs.inference_access_error || 'false' }}
model: ${{ needs.activation.outputs.model }}
output: ${{ steps.collect_output.outputs.output }}
output_types: ${{ steps.collect_output.outputs.output_types }}
steps:
  - name: Setup Scripts
    uses: github/gh-aw/actions/setup@v0.51.6
    uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
    with:
      destination: /opt/gh-aw/actions
  - name: Create gh-aw temp directory
    run: bash /opt/gh-aw/actions/create_gh_aw_tmp_dir.sh
  - name: Checkout repository
    uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v5
    uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
    with:
      persist-credentials: false
@ -305,7 +307,7 @@ jobs:
|
|||
const { main } = require('/opt/gh-aw/actions/checkout_pr_branch.cjs');
|
||||
await main();
|
||||
- name: Install GitHub Copilot CLI
|
||||
run: /opt/gh-aw/actions/install_copilot_cli.sh 0.0.420
|
||||
run: /opt/gh-aw/actions/install_copilot_cli.sh latest
|
||||
- name: Install awf binary
|
||||
run: bash /opt/gh-aw/actions/install_awf_binary.sh v0.23.0
|
||||
- name: Determine automatic lockdown mode for GitHub MCP Server
|
||||
|
|
@ -319,7 +321,7 @@ jobs:
|
|||
const determineAutomaticLockdown = require('/opt/gh-aw/actions/determine_automatic_lockdown.cjs');
|
||||
await determineAutomaticLockdown(github, context, core);
|
||||
- name: Download container images
|
||||
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.6 ghcr.io/github/github-mcp-server:v0.31.0 node:lts-alpine
|
||||
run: bash /opt/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.23.0 ghcr.io/github/gh-aw-firewall/api-proxy:0.23.0 ghcr.io/github/gh-aw-firewall/squid:0.23.0 ghcr.io/github/gh-aw-mcpg:v0.1.8 ghcr.io/github/github-mcp-server:v0.32.0 node:lts-alpine
|
||||
- name: Write Safe Outputs Config
|
||||
run: |
|
||||
mkdir -p /opt/gh-aw/safeoutputs
|
||||
|
|
@ -331,7 +333,7 @@ jobs:
|
|||
cat > /opt/gh-aw/safeoutputs/tools.json << 'GH_AW_SAFE_OUTPUTS_TOOLS_EOF'
|
||||
[
|
||||
{
|
||||
"description": "Create a new GitHub issue for tracking bugs, feature requests, or tasks. Use this for actionable work items that need assignment, labeling, and status tracking. For reports, announcements, or status updates that don't require task tracking, use create_discussion instead. CONSTRAINTS: Maximum 3 issue(s) can be created. Title will be prefixed with \"[zipt-review] \". Labels [code-quality automated string-solver] will be automatically added.",
|
||||
"description": "Create a new GitHub issue for tracking bugs, feature requests, or tasks. Use this for actionable work items that need assignment, labeling, and status tracking. For reports, announcements, or status updates that don't require task tracking, use create_discussion instead. CONSTRAINTS: Maximum 3 issue(s) can be created. Title will be prefixed with \"[zipt-review] \". Labels [\"code-quality\" \"automated\" \"string-solver\"] will be automatically added.",
|
||||
"inputSchema": {
|
||||
"additionalProperties": false,
|
||||
"properties": {
|
||||
|
|
@ -339,6 +341,10 @@ jobs:
|
|||
"description": "Detailed issue description in Markdown. Do NOT repeat the title as a heading since it already appears as the issue's h1. Include context, reproduction steps, or acceptance criteria as appropriate.",
|
||||
"type": "string"
|
||||
},
|
||||
"integrity": {
|
||||
"description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
|
||||
"type": "string"
|
||||
},
|
||||
"labels": {
|
||||
"description": "Labels to categorize the issue (e.g., 'bug', 'enhancement'). Labels must exist in the repository.",
|
||||
"items": {
|
||||
|
|
@ -353,9 +359,13 @@ jobs:
|
|||
"string"
|
||||
]
|
||||
},
|
||||
"secrecy": {
|
||||
"description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
|
||||
"type": "string"
|
||||
},
|
||||
"temporary_id": {
|
||||
"description": "Unique temporary identifier for referencing this issue before it's created. Format: 'aw_' followed by 3 to 8 alphanumeric characters (e.g., 'aw_abc1', 'aw_Test123'). Use '#aw_ID' in body text to reference other issues by their temporary_id; these are replaced with actual issue numbers after creation.",
|
||||
"pattern": "^aw_[A-Za-z0-9]{3,8}$",
|
||||
"description": "Unique temporary identifier for referencing this issue before it's created. Format: 'aw_' followed by 3 to 12 alphanumeric characters (e.g., 'aw_abc1', 'aw_Test123'). Use '#aw_ID' in body text to reference other issues by their temporary_id; these are replaced with actual issue numbers after creation.",
|
||||
"pattern": "^aw_[A-Za-z0-9]{3,12}$",
|
||||
"type": "string"
|
||||
},
|
||||
"title": {
|
||||
|
|
@ -380,10 +390,18 @@ jobs:
                "description": "Any workarounds, manual steps, or alternative approaches the user could take (max 256 characters).",
                "type": "string"
              },
              "integrity": {
                "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
                "type": "string"
              },
              "reason": {
                "description": "Explanation of why this tool is needed or what information you want to share about the limitation (max 256 characters).",
                "type": "string"
              },
              "secrecy": {
                "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
                "type": "string"
              },
              "tool": {
                "description": "Optional: Name or description of the missing tool or capability (max 128 characters). Be specific about what functionality is needed.",
                "type": "string"
@ -401,9 +419,17 @@ jobs:
              "inputSchema": {
                "additionalProperties": false,
                "properties": {
                  "integrity": {
                    "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
                    "type": "string"
                  },
                  "message": {
                    "description": "Status or completion message to log. Should explain what was analyzed and the outcome (e.g., 'Code review complete - no issues found', 'Analysis complete - all tests passing').",
                    "type": "string"
                  },
                  "secrecy": {
                    "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
                    "type": "string"
                  }
                },
                "required": [
@ -430,9 +456,17 @@ jobs:
                "description": "Type or description of the missing data or information (max 128 characters). Be specific about what data is needed.",
                "type": "string"
              },
              "integrity": {
                "description": "Trustworthiness level of the message source (e.g., \"low\", \"medium\", \"high\").",
                "type": "string"
              },
              "reason": {
                "description": "Explanation of why this data is needed to complete the task (max 256 characters).",
                "type": "string"
              },
              "secrecy": {
                "description": "Confidentiality level of the message content (e.g., \"public\", \"internal\", \"private\").",
                "type": "string"
              }
            },
            "required": [],
@ -598,7 +632,7 @@ jobs:
          export DEBUG="*"
          export GH_AW_ENGINE="copilot"
-         export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.6'
+         export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_LOCKDOWN -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.1.8'
          mkdir -p /home/runner/.copilot
          cat << GH_AW_MCP_CONFIG_EOF | bash /opt/gh-aw/actions/start_mcp_gateway.sh
@ -606,7 +640,7 @@ jobs:
            "mcpServers": {
              "github": {
                "type": "stdio",
-               "container": "ghcr.io/github/github-mcp-server:v0.31.0",
+               "container": "ghcr.io/github/github-mcp-server:v0.32.0",
                "env": {
                  "GITHUB_LOCKDOWN_MODE": "$GITHUB_MCP_LOCKDOWN",
                  "GITHUB_PERSONAL_ACCESS_TOKEN": "\${GITHUB_MCP_SERVER_TOKEN}",
@ -664,24 +698,37 @@ jobs:
        timeout-minutes: 30
        run: |
          set -o pipefail
          touch /tmp/gh-aw/agent-step-summary.md
          # shellcheck disable=SC1003
          sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "*.githubusercontent.com,api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,codeload.github.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github-cloud.githubusercontent.com,github-cloud.s3.amazonaws.com,github.com,github.githubassets.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,lfs.github.com,objects.githubusercontent.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-           -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool github --allow-tool safeoutputs --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(clang-format:*)'\'' --allow-tool '\''shell(date)'\'' --allow-tool '\''shell(echo)'\'' --allow-tool '\''shell(git diff:*)'\'' --allow-tool '\''shell(git log:*)'\'' --allow-tool '\''shell(git show:*)'\'' --allow-tool '\''shell(git status)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(pwd)'\'' --allow-tool '\''shell(sort)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(uniq)'\'' --allow-tool '\''shell(wc)'\'' --allow-tool '\''shell(yq)'\'' --allow-tool web_fetch --allow-tool write --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_AGENT_COPILOT:+ --model "$GH_AW_MODEL_AGENT_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
+           -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool github --allow-tool safeoutputs --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(clang-format:*)'\'' --allow-tool '\''shell(date)'\'' --allow-tool '\''shell(echo)'\'' --allow-tool '\''shell(git diff:*)'\'' --allow-tool '\''shell(git log:*)'\'' --allow-tool '\''shell(git show:*)'\'' --allow-tool '\''shell(git status)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(pwd)'\'' --allow-tool '\''shell(sort)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(uniq)'\'' --allow-tool '\''shell(wc)'\'' --allow-tool '\''shell(yq)'\'' --allow-tool web_fetch --allow-tool write --add-dir /tmp/gh-aw/cache-memory/ --allow-all-paths --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
        env:
          COPILOT_AGENT_RUNNER_TYPE: STANDALONE
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
          COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
          GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json
          GH_AW_MODEL_AGENT_COPILOT: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }}
          GH_AW_PHASE: agent
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
          GH_AW_SAFE_OUTPUTS: ${{ env.GH_AW_SAFE_OUTPUTS }}
          GH_AW_VERSION: v0.57.2
          GITHUB_API_URL: ${{ github.api_url }}
          GITHUB_AW: true
          GITHUB_HEAD_REF: ${{ github.head_ref }}
          GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          GITHUB_REF_NAME: ${{ github.ref_name }}
          GITHUB_SERVER_URL: ${{ github.server_url }}
-         GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
+         GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
          GITHUB_WORKSPACE: ${{ github.workspace }}
          GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
          GIT_AUTHOR_NAME: github-actions[bot]
          GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
          GIT_COMMITTER_NAME: github-actions[bot]
          XDG_CONFIG_HOME: /home/runner
      - name: Detect inference access error
        id: detect-inference-error
        if: always()
        continue-on-error: true
        run: bash /opt/gh-aw/actions/detect_inference_access_error.sh
      - name: Configure Git credentials
        env:
          REPO_NAME: ${{ github.repository }}
@ -735,6 +782,9 @@ jobs:
          SECRET_GH_AW_GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN }}
          SECRET_GH_AW_GITHUB_TOKEN: ${{ secrets.GH_AW_GITHUB_TOKEN }}
          SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - name: Append agent step summary
        if: always()
        run: bash /opt/gh-aw/actions/append_agent_step_summary.sh
      - name: Upload Safe Outputs
        if: always()
        uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
@ -890,20 +940,28 @@ jobs:
        timeout-minutes: 20
        run: |
          set -o pipefail
          touch /tmp/gh-aw/agent-step-summary.md
          # shellcheck disable=SC1003
          sudo -E awf --env-all --container-workdir "${GITHUB_WORKSPACE}" --allow-domains "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,raw.githubusercontent.com,registry.npmjs.org,telemetry.enterprise.githubcopilot.com" --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --enable-host-access --image-tag 0.23.0 --skip-pull --enable-api-proxy \
-           -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"${GH_AW_MODEL_DETECTION_COPILOT:+ --model "$GH_AW_MODEL_DETECTION_COPILOT"}' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
+           -- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --add-dir "${GITHUB_WORKSPACE}" --disable-builtin-mcps --allow-tool '\''shell(cat)'\'' --allow-tool '\''shell(grep)'\'' --allow-tool '\''shell(head)'\'' --allow-tool '\''shell(jq)'\'' --allow-tool '\''shell(ls)'\'' --allow-tool '\''shell(tail)'\'' --allow-tool '\''shell(wc)'\'' --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
        env:
          COPILOT_AGENT_RUNNER_TYPE: STANDALONE
          COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
-         GH_AW_MODEL_DETECTION_COPILOT: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
+         COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
          GH_AW_PHASE: detection
          GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
          GH_AW_VERSION: v0.57.2
          GITHUB_API_URL: ${{ github.api_url }}
          GITHUB_AW: true
          GITHUB_HEAD_REF: ${{ github.head_ref }}
          GITHUB_REF_NAME: ${{ github.ref_name }}
          GITHUB_SERVER_URL: ${{ github.server_url }}
-         GITHUB_STEP_SUMMARY: ${{ env.GITHUB_STEP_SUMMARY }}
+         GITHUB_STEP_SUMMARY: /tmp/gh-aw/agent-step-summary.md
          GITHUB_WORKSPACE: ${{ github.workspace }}
          GIT_AUTHOR_EMAIL: github-actions[bot]@users.noreply.github.com
          GIT_AUTHOR_NAME: github-actions[bot]
          GIT_COMMITTER_EMAIL: github-actions[bot]@users.noreply.github.com
          GIT_COMMITTER_NAME: github-actions[bot]
          XDG_CONFIG_HOME: /home/runner
      - name: Parse threat detection results
        id: parse_detection_results
@ -954,22 +1012,27 @@ jobs:
    permissions:
      contents: read
      issues: write
    concurrency:
      group: "gh-aw-conclusion-zipt-code-reviewer"
      cancel-in-progress: false
    outputs:
      noop_message: ${{ steps.noop.outputs.noop_message }}
      tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
      total_count: ${{ steps.missing_tool.outputs.total_count }}
    steps:
      - name: Setup Scripts
-       uses: github/gh-aw/actions/setup@v0.51.6
+       uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
        with:
          destination: /opt/gh-aw/actions
      - name: Download agent output artifact
        id: download-agent-output
        continue-on-error: true
        uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
        with:
          name: agent-output
          path: /tmp/gh-aw/safeoutputs/
      - name: Setup agent output environment variable
        if: steps.download-agent-output.outcome == 'success'
        run: |
          mkdir -p /tmp/gh-aw/safeoutputs/
          find "/tmp/gh-aw/safeoutputs/" -type f -print
@ -1014,7 +1077,10 @@ jobs:
          GH_AW_WORKFLOW_ID: "zipt-code-reviewer"
          GH_AW_SECRET_VERIFICATION_RESULT: ${{ needs.activation.outputs.secret_verification_result }}
          GH_AW_CHECKOUT_PR_SUCCESS: ${{ needs.agent.outputs.checkout_pr_success }}
          GH_AW_INFERENCE_ACCESS_ERROR: ${{ needs.agent.outputs.inference_access_error }}
          GH_AW_GROUP_REPORTS: "false"
          GH_AW_FAILURE_REPORT_AS_ISSUE: "true"
          GH_AW_TIMEOUT_MINUTES: "30"
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |

@ -1031,7 +1097,7 @@ jobs:
          GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
          GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
          GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
-         GH_AW_NOOP_REPORT_AS_ISSUE: "true"
+         GH_AW_NOOP_REPORT_AS_ISSUE: "false"
        with:
          github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
          script: |
@ -1049,7 +1115,7 @@ jobs:
      issues: write
    timeout-minutes: 15
    env:
-     GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/${{ github.workflow }}"
+     GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/zipt-code-reviewer"
      GH_AW_ENGINE_ID: "copilot"
      GH_AW_WORKFLOW_ID: "zipt-code-reviewer"
      GH_AW_WORKFLOW_NAME: "ZIPT Code Reviewer"
@ -1064,16 +1130,18 @@ jobs:
      process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
    steps:
      - name: Setup Scripts
-       uses: github/gh-aw/actions/setup@v0.51.6
+       uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
        with:
          destination: /opt/gh-aw/actions
      - name: Download agent output artifact
        id: download-agent-output
        continue-on-error: true
        uses: actions/download-artifact@70fc10c6e5e1ce46ad2ea6f2b72d43f7d47b13c3 # v8
        with:
          name: agent-output
          path: /tmp/gh-aw/safeoutputs/
      - name: Setup agent output environment variable
        if: steps.download-agent-output.outcome == 'success'
        run: |
          mkdir -p /tmp/gh-aw/safeoutputs/
          find "/tmp/gh-aw/safeoutputs/" -type f -print

@ -1111,7 +1179,7 @@ jobs:
      GH_AW_WORKFLOW_ID_SANITIZED: ziptcodereviewer
    steps:
      - name: Setup Scripts
-       uses: github/gh-aw/actions/setup@v0.51.6
+       uses: github/gh-aw/actions/setup@32b3a711a9ee97d38e3989c90af0385aff0066a7 # v0.57.2
        with:
          destination: /opt/gh-aw/actions
      - name: Download cache-memory artifact (default)
4 .github/workflows/zipt-code-reviewer.md (vendored)

@ -35,12 +35,14 @@ safe-outputs:
    max: 3
  missing-tool:
    create-issue: true
  noop:
    report-as-issue: false

timeout-minutes: 30

steps:
  - name: Checkout repository
-   uses: actions/checkout@v5
+   uses: actions/checkout@v6.0.2
    with:
      persist-credentials: false
1 .gitignore (vendored)

@ -118,3 +118,4 @@ bazel-*
# Local issue tracking
.beads
build/
+.z3-agent/
20 README.md

@ -27,9 +27,9 @@ See the [release notes](RELEASE_NOTES.md) for notes on various stable releases o
| -----------|---------------|---------------|---------------|-------------|
| [](https://github.com/Z3Prover/z3/actions/workflows/wip.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/android-build.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/pyodide.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/nightly.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/cross-build.yml) |

- | MSVC Static | MSVC Clang-CL | Build Z3 Cache |
- |-------------|---------------|----------------|
- | [](https://github.com/Z3Prover/z3/actions/workflows/msvc-static-build.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/msvc-static-build-clang-cl.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/build-z3-cache.yml) |
+ | MSVC Static | MSVC Clang-CL | Build Z3 Cache | Code Coverage | Memory Safety | Mark PRs Ready |
+ |-------------|---------------|----------------|---------------|---------------|----------------|
+ | [](https://github.com/Z3Prover/z3/actions/workflows/msvc-static-build.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/msvc-static-build-clang-cl.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/build-z3-cache.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/coverage.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/memory-safety.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/mark-prs-ready-for-review.yml) |

### Manual & Release Workflows
| Documentation | Release Build | WASM Release | NuGet Build |

@ -42,9 +42,17 @@ See the [release notes](RELEASE_NOTES.md) for notes on various stable releases o
| [](https://github.com/Z3Prover/z3/actions/workflows/nightly-validation.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/copilot-setup-steps.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/agentics-maintenance.yml) |

### Agentic Workflows
- | A3 Python | API Coherence | Code Simplifier | Deeptest | Release Notes | Specbot | Workflow Suggestion |
- | ----------|---------------|-----------------|----------|---------------|---------|---------------------|
- | [](https://github.com/Z3Prover/z3/actions/workflows/a3-python.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/api-coherence-checker.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/code-simplifier.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/deeptest.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/release-notes-updater.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/specbot.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/workflow-suggestion-agent.lock.yml) |
+ | A3 Python | API Coherence | Code Simplifier | Release Notes | Workflow Suggestion |
+ | ----------|---------------|-----------------|---------------|---------------------|
+ | [](https://github.com/Z3Prover/z3/actions/workflows/a3-python.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/api-coherence-checker.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/code-simplifier.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/release-notes-updater.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/workflow-suggestion-agent.lock.yml) |

| Academic Citation | Build Warning Fixer | Code Conventions | CSA Report | Issue Backlog |
| ------------------|---------------------|------------------|------------|---------------|
| [](https://github.com/Z3Prover/z3/actions/workflows/academic-citation-tracker.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/build-warning-fixer.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/code-conventions-analyzer.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/csa-analysis.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/issue-backlog-processor.lock.yml) |

| Memory Safety Report | Ostrich Benchmark | QF-S Benchmark | Tactic-to-Simplifier | ZIPT Code Reviewer |
| ---------------------|-------------------|----------------|----------------------|--------------------|
| [](https://github.com/Z3Prover/z3/actions/workflows/memory-safety-report.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/ostrich-benchmark.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/qf-s-benchmark.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/tactic-to-simplifier.lock.yml) | [](https://github.com/Z3Prover/z3/actions/workflows/zipt-code-reviewer.lock.yml) |

[1]: #building-z3-on-windows-using-visual-studio-command-prompt
[2]: #building-z3-using-make-and-gccclang
@ -21,6 +21,45 @@ Version 4.17.0
  Thanks to Nuno Lopes, https://github.com/Z3Prover/z3/pull/8583
- Fix spurious sort error with nested quantifiers in model finder. `Fixes #8563`
- NLSAT optimizations including improvements to handle_nullified_poly and levelwise algorithm. Thanks to Lev Nachmanson.
- Add ASan/UBSan memory safety CI workflow for continuous runtime safety checking. Thanks to Angelica Moreira.
  https://github.com/Z3Prover/z3/pull/8856
- Add missing API bindings across multiple languages:
  - Python: BvNand, BvNor, BvXnor operations, Optimize.translate()
  - Go: MkAsArray, MkRecFuncDecl, AddRecDef, Model.Translate, MkBVRotateLeft, MkBVRotateRight, MkRepeat, and 8 BV overflow/underflow check functions
  - TypeScript: Array.fromFunc, Model.translate
  - OCaml: Model.translate, mk_re_allchar (thanks to Filipe Marques, https://github.com/Z3Prover/z3/pull/8785)
  - Java: as-array method (thanks to Ruijie Fang, https://github.com/Z3Prover/z3/pull/8762)
- Fix #7507: simplify (>= product_of_consecutive_ints 0) to true
- Fix #7951: add cancellation checks to polynomial gcd_prs and HNF computation
- Fix #7677: treat FC_CONTINUE from check_nla as FEASIBLE in maximize
- Fix assertion violation in q_mbi diagnostic output
- Fix memory leaks in model_based_opt def ref-counting
- Fix NoSuchFieldError in JNI for BoolPtr: use Z field descriptor and SetBooleanField
- Fix TypeScript Array.fromFunc to use f.ptr instead of f.ast for Z3_func_decl type
- Fix intblast ubv_to_int bug: add bv2int axioms for compound expressions
- Fix static analysis findings: uninitialized variables, bitwise shift undefined behavior, and null pointer dereferences
- Convert bv1-blast and blast-term-ite tactics to also expose as simplifiers for more flexible integration
- Change default of param lws_subs_witness_disc to true for improved NLSAT performance. Thanks to Lev Nachmanson.
- Nl2Lin integrates a linear under-approximation of a CAD cell by Valentin Promies for improved NLSAT performance on nonlinear arithmetic problems.
  https://github.com/Z3Prover/z3/pull/8982
- Fix incorrect optimization of mod in box mode. Fixes #9012
- Fix inconsistent optimization with scaled objectives in the LP optimizer when nonlinear constraints prevent exploration of the full feasible region.
  https://github.com/Z3Prover/z3/pull/8998
- Fix NLA optimization regression and improve LP restore_x handling.
  https://github.com/Z3Prover/z3/pull/8944
- Enable sum of monomials simplification in the optimizer for improved nonlinear arithmetic optimization.
- Convert injectivity and special-relations tactics to simplifier-based implementations for better integration with the simplifier pipeline.
  https://github.com/Z3Prover/z3/pull/8954, https://github.com/Z3Prover/z3/pull/8955
- Fix assertion violation in mpz.cpp when running with -tr:arith tracing.
  https://github.com/Z3Prover/z3/pull/8945
- Additional API improvements:
  - Java: numeral extraction helpers (getInt, getLong, getDouble for ArithExpr and BitVecNum). Thanks to Angelica Moreira, https://github.com/Z3Prover/z3/pull/8978
  - Java: missing AST query methods (isTrue, isFalse, isNot, isOr, isAnd, isDistinct, getBoolValue, etc.). Thanks to Angelica Moreira, https://github.com/Z3Prover/z3/pull/8977
  - Go: Goal, FuncEntry, Model APIs; TypeScript: Seq higher-order operations (map, fold). https://github.com/Z3Prover/z3/pull/9006
- Fix API coherence issues across Go, Java, C++, and TypeScript bindings.
  https://github.com/Z3Prover/z3/pull/8983
- Fix deep API bugs in Z3 C API (null pointer handling, error propagation).
  https://github.com/Z3Prover/z3/pull/8972

Version 4.16.0
==============
463 Z3-AGENT.md (new file)

@ -0,0 +1,463 @@
# Z3 Agent

A Copilot agent for the Z3 theorem prover. It wraps 9 skills that cover
SMT solving and code quality analysis.

## What it does

The agent handles two kinds of requests:

1. **SMT solving**: formulate constraints, check satisfiability, prove
   properties, optimize objectives, simplify expressions.
2. **Code quality**: run sanitizers (ASan, UBSan) and Clang Static Analyzer
   against the Z3 codebase to catch memory bugs and logic errors.

## Prerequisites

You need a built Z3 binary. The scripts look for it in this order:

1. An explicit `--z3 path/to/z3`
2. `build/z3`, `build/release/z3`, `build/debug/z3` (relative to the repo root)
3. `z3` on your PATH

For the code quality skills you also need:

- **memory-safety**: cmake, make, and a compiler with sanitizer support
  (gcc or clang). The script checks at startup and tells you what is missing.
- **static-analysis**: scan-build (part of clang-tools). Same early check,
  with install instructions if it is absent.

## Using the agent in Copilot Chat

Mention `@z3` and describe what you want in plain language.
The agent figures out which skill to use, builds the formula if needed,
runs Z3, and gives you the result.

### solve: check satisfiability

```
@z3 is (x > 0 and y > 0 and x + y < 5) satisfiable over the integers?
```

Expected response: **sat** with a model like `x = 1, y = 1`.

```
@z3 can x + y = 10 and x - y = 4 both hold at the same time?
```

Expected response: **sat** with `x = 7, y = 3`.

```
@z3 is there an integer x where x > 0, x < 0?
```

Expected response: **unsat** (no such integer exists).

### prove: check if something is always true

```
@z3 prove that x * x >= 0 for all integers x
```

Expected response: **valid** (the negation is unsatisfiable, so the property holds).

```
@z3 is it true that (a + b)^2 >= 0 for all real a and b?
```

Expected response: **valid**.

```
@z3 prove that if x > y and y > z then x > z, for integers
```

Expected response: **valid** (transitivity of >).
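
Proving by refutation can be pictured as a small SMT-LIB2 query: assert the negation of the conjecture and ask for a counterexample. For the first example above, the generated query would look roughly like this (a sketch of the idea, not necessarily the script's exact output):

```
(declare-const x Int)
(assert (not (>= (* x x) 0)))
(check-sat)
```

Z3 reports `unsat`, meaning no integer violates the property, so `x * x >= 0` is valid.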
### optimize: find the best value

```
@z3 maximize 3x + 2y where x >= 1, y >= 1, and x + y <= 20
```

Expected response: **sat** with `x = 19, y = 1` (objective = 59).

```
@z3 minimize x + y where x >= 5, y >= 3, and x + y >= 10
```

Expected response: **sat** with `x = 5, y = 5` or similar (objective = 10).
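
Behind this skill sits Z3's optimization extension of SMT-LIB2 (`maximize`, `minimize`, `get-objectives`). A sketch of how the first query could be encoded; the exact formula the agent emits may differ:

```
(declare-const x Int)
(declare-const y Int)
(assert (and (>= x 1) (>= y 1) (<= (+ x y) 20)))
(maximize (+ (* 3 x) (* 2 y)))
(check-sat)
(get-objectives)
```

Running this file with `z3 file.smt2` prints `sat` plus the optimal objective value under `(get-objectives)`.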
|
||||
|
||||
### simplify: reduce an expression
|
||||
|
||||
```
|
||||
@z3 simplify x + 0 + 1*x
|
||||
```
|
||||
|
||||
Expected response: `2*x`.
|
||||
|
||||
```
|
||||
@z3 simplify (a and true) or (a and false)
|
||||
```
|
||||
|
||||
Expected response: `a`.
|
||||
|
||||
### encode: translate a problem to SMT-LIB2
|
||||
|
||||
```
|
||||
@z3 encode this as SMT-LIB2: find integers x and y where x + y = 10 and x > y
|
||||
```
|
||||
|
||||
Expected response: the SMT-LIB2 formula:
|
||||
```
|
||||
(declare-const x Int)
|
||||
(declare-const y Int)
|
||||
(assert (= (+ x y) 10))
|
||||
(assert (> x y))
|
||||
(check-sat)
|
||||
(get-model)
|
||||
```
|
||||
|
||||
### explain: interpret Z3 output
|
||||
|
||||
```
|
||||
@z3 what does this Z3 output mean?
|
||||
sat
|
||||
(
|
||||
(define-fun x () Int 7)
|
||||
(define-fun y () Int 3)
|
||||
)
|
||||
```
|
||||
|
||||
Expected response: a readable summary like "satisfying assignment: x = 7, y = 3".
|
||||
|
||||
```
|
||||
@z3 Z3 returned unknown, what does that mean?
|
||||
```
|
||||
|
||||
Expected response: an explanation of common causes (timeout, incomplete theory, quantifiers).
|
||||
|
||||
### benchmark: measure performance
|
||||
|
||||
```
|
||||
@z3 how fast can Z3 solve (x > 0 and y > 0 and x + y < 100)? run it 5 times
|
||||
```
|
||||
|
||||
Expected response: timing stats like min/median/max in milliseconds and the result.
|
||||
|
||||
### memory-safety: find memory bugs in Z3
|
||||
|
||||
```
|
||||
@z3 run AddressSanitizer on the Z3 test suite
|
||||
```
|
||||
|
||||
Expected response: builds Z3 with ASan, runs the tests, reports any findings
|
||||
with category, file, and line number. If clean, reports no findings.
|
||||
|
||||
```
|
||||
@z3 check for undefined behavior in Z3
|
||||
```
|
||||
|
||||
Expected response: runs UBSan, same format.
|
||||
|
||||
```
|
||||
@z3 run both sanitizers
|
||||
```
|
||||
|
||||
Expected response: runs ASan and UBSan, aggregates findings from both.
|
||||
|
||||
### static-analysis: find bugs without running the code

```
@z3 run static analysis on the Z3 source
```

Expected response: runs Clang Static Analyzer, reports findings grouped by
category (null dereference, dead store, memory leak, etc.) with file and line.

### multi-skill: the agent chains skills when needed

```
@z3 prove that for all integers, if x^2 is even then x is even
```

The agent uses **encode** to formalize and negate the statement, then
**prove** to check it, then **explain** to present the result.

```
@z3 full verification pass before the release
```

The agent runs **memory-safety** (ASan + UBSan) and **static-analysis**
in parallel, then aggregates and deduplicates findings sorted by severity.

## Using the scripts directly

Every skill lives under `.github/skills/<name>/scripts/`. All scripts
accept `--debug` for full tracing and `--db path` to specify where the
SQLite log goes (defaults to `z3agent.db` in the current directory).

### solve

Check whether a set of constraints has a solution.

```
python3 .github/skills/solve/scripts/solve.py \
  --z3 build/release/z3 \
  --formula '
(declare-const x Int)
(declare-const y Int)
(assert (> x 0))
(assert (> y 0))
(assert (< (+ x y) 5))
(check-sat)
(get-model)'
```

Output:

```
sat
x = 1
y = 1
```

### prove

Check whether a property holds for all values. The script negates your
conjecture and asks Z3 if the negation is satisfiable. If it is not,
the property is valid.

```
python3 .github/skills/prove/scripts/prove.py \
  --z3 build/release/z3 \
  --conjecture '(>= (* x x) 0)' \
  --vars 'x:Int'
```

Output:

```
valid
```

If Z3 finds a counterexample, it prints `invalid` followed by the
counterexample values.

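Under the hood, the query the script sends to Z3 looks roughly like this (a sketch; the script's exact preamble may differ):

```
(declare-const x Int)
(assert (not (>= (* x x) 0)))
(check-sat)
```

If Z3 answers `unsat`, the negation has no model, so the original conjecture holds for all values.
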
### optimize

Find the best value of an objective subject to constraints.

```
python3 .github/skills/optimize/scripts/optimize.py \
  --z3 build/release/z3 \
  --formula '
(declare-const x Int)
(declare-const y Int)
(assert (>= x 1))
(assert (>= y 1))
(assert (<= (+ x y) 20))
(maximize (+ (* 3 x) (* 2 y)))
(check-sat)
(get-model)'
```

Output:

```
sat
x = 19
y = 1
```

Here Z3 maximizes `3x + 2y` under the constraint `x + y <= 20`, so it
pushes x as high as possible (19) and keeps y at its minimum (1),
giving `3*19 + 2*1 = 59`.

### simplify

Reduce expressions using Z3 tactic chains.

```
python3 .github/skills/simplify/scripts/simplify.py \
  --z3 build/release/z3 \
  --formula '(declare-const x Int)(simplify (+ x 0 (* 1 x)))'
```

Output:

```
(* 2 x)
(goals
(goal
  :precision precise :depth 1)
)
```

Z3 simplified `x + 0 + 1*x` down to `2*x`.

### benchmark

Measure solving time over multiple runs.

```
python3 .github/skills/benchmark/scripts/benchmark.py \
  --z3 build/release/z3 \
  --runs 5 \
  --formula '
(declare-const x Int)
(declare-const y Int)
(assert (> x 0))
(assert (> y 0))
(assert (< (+ x y) 100))
(check-sat)'
```

Output (times will vary on your machine):

```
runs: 5
min: 27ms
median: 28ms
max: 30ms
result: sat
```

### explain

Interpret Z3 output in readable form. It reads from stdin:

```
echo 'sat
(
  (define-fun x () Int
    19)
  (define-fun y () Int
    1)
)' | python3 .github/skills/explain/scripts/explain.py --stdin --type model
```

Output:

```
satisfying assignment:
x = 19
y = 1
```

### encode

Validate that an SMT-LIB2 file is well-formed by running it through Z3:

```
python3 .github/skills/encode/scripts/encode.py \
  --z3 build/release/z3 \
  --validate problem.smt2
```

If the file parses and runs without errors, it prints the formula back.
If there are syntax or sort errors, it prints the Z3 error message.

### memory-safety

Build Z3 with AddressSanitizer or UndefinedBehaviorSanitizer, run the
test suite, and collect any findings.

```
python3 .github/skills/memory-safety/scripts/memory_safety.py --sanitizer asan
python3 .github/skills/memory-safety/scripts/memory_safety.py --sanitizer ubsan
python3 .github/skills/memory-safety/scripts/memory_safety.py --sanitizer both
```

Use `--skip-build` to reuse a previous instrumented build. Use
`--build-dir path` to control where the build goes (defaults to
`build/sanitizer-asan` or `build/sanitizer-ubsan` under the repo root).

If cmake, make, or a C compiler is not found, the script prints what
you need to install and exits.

### static-analysis

Run Clang Static Analyzer over the Z3 source tree.

```
python3 .github/skills/static-analysis/scripts/static_analysis.py \
  --build-dir build/scan
```

Results go to `build/scan/scan-results/` by default. Findings are
printed grouped by category with file and line number.

If `scan-build` is not on your PATH, the script prints install
instructions for Ubuntu, macOS, and Fedora.

## Debug tracing

Add `--debug` to any command to see the full trace: run IDs, z3 binary
path, the exact command and stdin sent to Z3, stdout/stderr received,
timing, and database logging. Example:

```
python3 .github/skills/solve/scripts/solve.py \
  --z3 build/release/z3 --debug \
  --formula '
(declare-const x Int)
(declare-const y Int)
(assert (> x 0))
(assert (> y 0))
(assert (< (+ x y) 5))
(check-sat)
(get-model)'
```

```
[DEBUG] started run 31 (skill=solve, hash=d64beb5a61842362)
[DEBUG] found z3: build/release/z3
[DEBUG] cmd: build/release/z3 -in
[DEBUG] stdin:
(declare-const x Int)
(declare-const y Int)
(assert (> x 0))
(assert (> y 0))
(assert (< (+ x y) 5))
(check-sat)
(get-model)
[DEBUG] exit_code=0 duration=28ms
[DEBUG] stdout:
sat
(
  (define-fun x () Int
    1)
  (define-fun y () Int
    1)
)
[DEBUG] finished run 31: sat (28ms)
sat
x = 1
y = 1
```

## Logging

Every run is logged to a SQLite database (`z3agent.db` by default).
You can query it directly:

```
sqlite3 z3agent.db "SELECT id, skill, status, duration_ms FROM runs ORDER BY id DESC LIMIT 10;"
```

Use `--db /path/to/file.db` on any script to put the database somewhere
else.

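The same query can be issued from Python's standard library, for example when post-processing runs in a report script. This is a sketch: only the columns shown in the query above (`id`, `skill`, `status`, `duration_ms`) are assumed to exist in the `runs` table.

```python
import sqlite3

def recent_runs(db_path="z3agent.db", limit=10):
    """Return the most recent runs as (id, skill, status, duration_ms) tuples."""
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(
            "SELECT id, skill, status, duration_ms FROM runs "
            "ORDER BY id DESC LIMIT ?", (limit,))
        return cur.fetchall()
    finally:
        con.close()
```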
## Skill list

| Skill | What it does |
|-------|-------------|
| solve | check satisfiability, extract models or unsat cores |
| prove | prove validity by negating and checking unsatisfiability |
| optimize | minimize or maximize objectives under constraints |
| simplify | reduce formulas with Z3 tactic chains |
| encode | translate problems into SMT-LIB2, validate syntax |
| explain | interpret Z3 output (models, cores, stats, errors) |
| benchmark | measure solving time, collect statistics |
| memory-safety | run ASan/UBSan on Z3 test suite |
| static-analysis | run Clang Static Analyzer on Z3 source |

364	agentics/qf-s-benchmark.md	Normal file

@@ -0,0 +1,364 @@
# QF_S String Solver Benchmark

## Job Description

Your name is ${{ github.workflow }}. You are an expert performance analyst for the Z3 theorem prover, specializing in the string/sequence theory. Your task is to benchmark the `seq` solver (classical string theory) against the `nseq` solver (ZIPT-based string theory) on the QF_S test suite from the `c3` branch, and post a structured report as a GitHub Discussion.

The workspace already contains the `c3` branch (checked out by the preceding workflow step).

## Phase 1: Set Up the Build Environment

Install required build tools:

```bash
sudo apt-get update -y
sudo apt-get install -y cmake ninja-build python3 python3-pip time
```

Verify tools:

```bash
cmake --version
ninja --version
python3 --version
```

## Phase 2: Build Z3 in Debug Mode with Seq Tracing

Build Z3 with debug symbols so that tracing and timing data are meaningful.

```bash
mkdir -p /tmp/z3-build
cd /tmp/z3-build
cmake "$GITHUB_WORKSPACE" \
  -G Ninja \
  -DCMAKE_BUILD_TYPE=Debug \
  -DZ3_BUILD_TEST_EXECUTABLES=OFF \
  2>&1 | tee /tmp/z3-cmake.log
ninja z3 2>&1 | tee /tmp/z3-build.log
```

Verify the binary was built:

```bash
/tmp/z3-build/z3 --version
```

If the build fails, report it immediately and stop.

## Phase 3: Discover QF_S Benchmark Files

Find all `.smt2` benchmark files in the workspace that belong to the QF_S logic:

```bash
# Search for explicit QF_S logic declarations
grep -rl 'QF_S' "$GITHUB_WORKSPACE" --include='*.smt2' 2>/dev/null > /tmp/qf_s_files.txt

# Also look in dedicated benchmark directories
find "$GITHUB_WORKSPACE" \
  \( -path "*/QF_S/*" -o -path "*/qf_s/*" -o -path "*/benchmarks/*" \) \
  -name '*.smt2' 2>/dev/null >> /tmp/qf_s_files.txt

# Deduplicate
sort -u /tmp/qf_s_files.txt -o /tmp/qf_s_files.txt

TOTAL=$(wc -l < /tmp/qf_s_files.txt)
echo "Found $TOTAL QF_S benchmark files"
head -20 /tmp/qf_s_files.txt
```

If fewer than 5 files are found, also scan the entire workspace for any `.smt2` file that exercises string constraints:

```bash
if [ "$TOTAL" -lt 5 ]; then
  grep -rl 'declare.*String\|str\.\|seq\.' "$GITHUB_WORKSPACE" \
    --include='*.smt2' 2>/dev/null >> /tmp/qf_s_files.txt
  sort -u /tmp/qf_s_files.txt -o /tmp/qf_s_files.txt
  TOTAL=$(wc -l < /tmp/qf_s_files.txt)
  echo "After extended search: $TOTAL files"
fi
```

Cap the benchmark set to keep total runtime under 60 minutes:

```bash
# Use at most 500 files; take a random sample if more are available
if [ "$TOTAL" -gt 500 ]; then
  shuf -n 500 /tmp/qf_s_files.txt > /tmp/qf_s_sample.txt
else
  cp /tmp/qf_s_files.txt /tmp/qf_s_sample.txt
fi
SAMPLE=$(wc -l < /tmp/qf_s_sample.txt)
echo "Running benchmarks on $SAMPLE files"
```

## Phase 4: Run Benchmarks — seq vs nseq

Run each benchmark with both solvers. Use a per-file timeout of 10 seconds. Set Z3's internal timeout to 9 seconds so it exits cleanly before the shell timeout fires.

```bash
Z3=/tmp/z3-build/z3
TIMEOUT_SEC=10
Z3_TIMEOUT_SEC=9
RESULTS=/tmp/benchmark-results.csv

echo "file,seq_result,seq_time_ms,nseq_result,nseq_time_ms" > "$RESULTS"

total=0
done_count=0
while IFS= read -r smt_file; do
  total=$((total + 1))

  # Run with seq solver; capture both stdout (z3 output) and stderr (time output)
  SEQ_OUT=$({ time timeout "$TIMEOUT_SEC" "$Z3" \
    smt.string_solver=seq \
    -T:"$Z3_TIMEOUT_SEC" \
    "$smt_file" 2>/dev/null; } 2>&1)
  SEQ_RESULT=$(echo "$SEQ_OUT" | grep -E '^(sat|unsat|unknown)' | head -1)
  SEQ_MS=$(echo "$SEQ_OUT" | grep real | awk '{split($2,a,"m"); split(a[2],b,"s"); printf "%d", (a[1]*60+b[1])*1000}')
  [ -z "$SEQ_RESULT" ] && SEQ_RESULT="timeout"
  [ -z "$SEQ_MS" ] && SEQ_MS=$((TIMEOUT_SEC * 1000))

  # Run with nseq solver; same structure
  NSEQ_OUT=$({ time timeout "$TIMEOUT_SEC" "$Z3" \
    smt.string_solver=nseq \
    -T:"$Z3_TIMEOUT_SEC" \
    "$smt_file" 2>/dev/null; } 2>&1)
  NSEQ_RESULT=$(echo "$NSEQ_OUT" | grep -E '^(sat|unsat|unknown)' | head -1)
  NSEQ_MS=$(echo "$NSEQ_OUT" | grep real | awk '{split($2,a,"m"); split(a[2],b,"s"); printf "%d", (a[1]*60+b[1])*1000}')
  [ -z "$NSEQ_RESULT" ] && NSEQ_RESULT="timeout"
  [ -z "$NSEQ_MS" ] && NSEQ_MS=$((TIMEOUT_SEC * 1000))

  SHORT=$(basename "$smt_file")
  echo "$SHORT,$SEQ_RESULT,$SEQ_MS,$NSEQ_RESULT,$NSEQ_MS" >> "$RESULTS"

  done_count=$((done_count + 1))
  if [ $((done_count % 50)) -eq 0 ]; then
    echo "Progress: $done_count / $SAMPLE files completed"
  fi
done < /tmp/qf_s_sample.txt

echo "Benchmark run complete: $done_count files"
```

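The `time`-output parsing above depends on the shell's `time` format. If the awk parsing misbehaves on the runner, a single benchmark run can be measured with Python instead. This is a sketch under the same assumptions as the bash loop (solver flag, `-T:` timeout, the Phase 2 binary path); a negative return code from `subprocess` means the process died from a signal, which maps onto the `crash` category used below.

```python
import subprocess, time

def run_solver(z3, solver, smt_file, timeout_sec=10, z3_timeout_sec=9):
    """Run one benchmark; return (result, elapsed_ms) where result is
    'sat', 'unsat', 'unknown', 'timeout', or 'crash'."""
    cmd = [z3, f"smt.string_solver={solver}", f"-T:{z3_timeout_sec}", smt_file]
    start = time.monotonic()
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True,
                              timeout=timeout_sec)
    except subprocess.TimeoutExpired:
        return "timeout", timeout_sec * 1000
    elapsed_ms = int((time.monotonic() - start) * 1000)
    for line in proc.stdout.splitlines():
        if line in ("sat", "unsat", "unknown"):
            return line, elapsed_ms
    if proc.returncode < 0:          # killed by a signal, e.g. SIGSEGV
        return "crash", elapsed_ms
    return "timeout", elapsed_ms     # no verdict printed before -T: fired
```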
## Phase 5: Collect Seq Traces for Interesting Cases

For benchmarks where `seq` solves in under 2 s but `nseq` times out (seq-fast/nseq-slow cases), collect a brief `seq` trace to understand what algorithm is used:

```bash
Z3=/tmp/z3-build/z3
mkdir -p /tmp/traces

# Find seq-fast / nseq-slow files: seq solved (sat/unsat) in <2000ms AND nseq timed out
awk -F, 'NR>1 && ($2=="sat"||$2=="unsat") && $3<2000 && $4=="timeout" {print $1}' \
  /tmp/benchmark-results.csv > /tmp/seq_fast_nseq_slow.txt
echo "seq-fast / nseq-slow files: $(wc -l < /tmp/seq_fast_nseq_slow.txt)"

# Collect traces for at most 5 such cases
head -5 /tmp/seq_fast_nseq_slow.txt | while IFS= read -r short; do
  # Find the full path
  full=$(grep "/$short$" /tmp/qf_s_sample.txt | head -1)
  [ -z "$full" ] && continue
  timeout 5 "$Z3" \
    smt.string_solver=seq \
    -tr:seq \
    -T:5 \
    "$full" > "/tmp/traces/${short%.smt2}.seq.trace" 2>&1 || true
done
```

## Phase 6: Analyze Results

Compute summary statistics from the CSV. Save the analysis script to a file and run it:

```bash
cat > /tmp/analyze_benchmark.py << 'PYEOF'
import csv, sys

results = []
with open('/tmp/benchmark-results.csv') as f:
    reader = csv.DictReader(f)
    for row in reader:
        results.append(row)

total = len(results)
if total == 0:
    print("No results found.")
    sys.exit(0)

def is_correct(r, solver):
    return r[f'{solver}_result'] in ('sat', 'unsat')

def timed_out(r, solver):
    return r[f'{solver}_result'] == 'timeout'

seq_solved = sum(1 for r in results if is_correct(r, 'seq'))
nseq_solved = sum(1 for r in results if is_correct(r, 'nseq'))
seq_to = sum(1 for r in results if timed_out(r, 'seq'))
nseq_to = sum(1 for r in results if timed_out(r, 'nseq'))

seq_times = [int(r['seq_time_ms']) for r in results if is_correct(r, 'seq')]
nseq_times = [int(r['nseq_time_ms']) for r in results if is_correct(r, 'nseq')]

def median(lst):
    s = sorted(lst)
    n = len(s)
    return s[n//2] if n else 0

def mean(lst):
    return sum(lst)//len(lst) if lst else 0

# Disagreements (sat vs unsat or vice-versa)
disagreements = [
    r for r in results
    if r['seq_result'] in ('sat','unsat')
    and r['nseq_result'] in ('sat','unsat')
    and r['seq_result'] != r['nseq_result']
]

# seq-fast / nseq-slow: seq solved in <2s, nseq timed out
seq_fast_nseq_slow = [
    r for r in results
    if is_correct(r, 'seq') and int(r['seq_time_ms']) < 2000 and timed_out(r, 'nseq')
]
# nseq-fast / seq-slow: nseq solved in <2s, seq timed out
nseq_fast_seq_slow = [
    r for r in results
    if is_correct(r, 'nseq') and int(r['nseq_time_ms']) < 2000 and timed_out(r, 'seq')
]

print(f"TOTAL={total}")
print(f"SEQ_SOLVED={seq_solved}")
print(f"NSEQ_SOLVED={nseq_solved}")
print(f"SEQ_TIMEOUTS={seq_to}")
print(f"NSEQ_TIMEOUTS={nseq_to}")
print(f"SEQ_MEDIAN_MS={median(seq_times)}")
print(f"NSEQ_MEDIAN_MS={median(nseq_times)}")
print(f"SEQ_MEAN_MS={mean(seq_times)}")
print(f"NSEQ_MEAN_MS={mean(nseq_times)}")
print(f"DISAGREEMENTS={len(disagreements)}")
print(f"SEQ_FAST_NSEQ_SLOW={len(seq_fast_nseq_slow)}")
print(f"NSEQ_FAST_SEQ_SLOW={len(nseq_fast_seq_slow)}")

# Print top-10 slowest for nseq that seq handles fast
print("\nTOP_SEQ_FAST_NSEQ_SLOW:")
for r in sorted(seq_fast_nseq_slow, key=lambda x: -int(x['nseq_time_ms']))[:10]:
    print(f"  {r['file']} seq={r['seq_time_ms']}ms nseq={r['nseq_time_ms']}ms seq_result={r['seq_result']} nseq_result={r['nseq_result']}")

print("\nTOP_NSEQ_FAST_SEQ_SLOW:")
for r in sorted(nseq_fast_seq_slow, key=lambda x: -int(x['seq_time_ms']))[:10]:
    print(f"  {r['file']} seq={r['seq_time_ms']}ms nseq={r['nseq_time_ms']}ms seq_result={r['seq_result']} nseq_result={r['nseq_result']}")

if disagreements:
    print(f"\nDISAGREEMENTS ({len(disagreements)}):")
    for r in disagreements[:10]:
        print(f"  {r['file']} seq={r['seq_result']} nseq={r['nseq_result']}")
PYEOF

python3 /tmp/analyze_benchmark.py
```

## Phase 7: Create GitHub Discussion

Use the `create_discussion` safe-output tool to post a structured benchmark report.

The discussion body should be formatted as follows (fill in real numbers from Phase 6):

````markdown
# QF_S Benchmark: seq vs nseq

**Date**: YYYY-MM-DD
**Branch**: c3
**Commit**: `<short SHA>`
**Workflow Run**: [#<run_id>](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})
**Files benchmarked**: N (capped at 500, timeout 10 s per file)

---

## Summary

| Metric | seq | nseq |
|--------|-----|------|
| Files solved (sat/unsat) | SEQ_SOLVED | NSEQ_SOLVED |
| Timeouts | SEQ_TO | NSEQ_TO |
| Median solve time (solved files) | X ms | Y ms |
| Mean solve time (solved files) | X ms | Y ms |
| **Disagreements (sat≠unsat)** | — | N |

---

## Performance Comparison

### seq-fast / nseq-slow (seq < 2 s, nseq timed out)

These are benchmarks where the classical `seq` solver is significantly faster. These represent regression risk for `nseq`.

| File | seq (ms) | nseq (ms) | seq result | nseq result |
|------|----------|-----------|------------|-------------|
[TOP 10 ENTRIES]

### nseq-fast / seq-slow (nseq < 2 s, seq timed out)

These are benchmarks where `nseq` shows a performance advantage.

| File | seq (ms) | nseq (ms) | seq result | nseq result |
|------|----------|-----------|------------|-------------|
[TOP 10 ENTRIES]

---

## Correctness

**Disagreements** (files where seq says `sat` but nseq says `unsat` or vice versa): N

[If disagreements exist, list all of them here with file paths and both results]

---

## seq Trace Analysis (seq-fast / nseq-slow cases)

<details>
<summary>Click to expand trace snippets for top seq-fast/nseq-slow cases</summary>

[Insert trace snippet for each traced file, or "No traces collected" if section was skipped]

</details>

---

## Raw Data

<details>
<summary>Full results CSV (click to expand)</summary>

```csv
[PASTE FIRST 200 LINES OF /tmp/benchmark-results.csv]
```

</details>

---

*Generated by the QF_S Benchmark workflow. To reproduce: build Z3 from the `c3` branch and run `z3 smt.string_solver=seq|nseq -T:10 <file.smt2>`.*
````

## Edge Cases

- If the build fails, call `missing_data` explaining the build error and stop.
- If no benchmark files are found at all, call `missing_data` explaining that no QF_S `.smt2` files were found in the `c3` branch.
- If Z3 crashes (segfault) on a file with either solver, record the result as `crash` and continue.
- If the total benchmark set is very small (< 5 files), note this prominently in the discussion and suggest adding more QF_S benchmarks to the `c3` branch.
- If there are zero disagreements and both solvers time out on the same files, note that the solvers are in agreement.

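The `crash` case can be detected from the exit status of the Phase 4 loop commands. A sketch of a helper, assuming GNU `timeout` (which exits with 124 on timeout and 128 plus the signal number when the child is killed, e.g. 139 for SIGSEGV):

```shell
run_with_status() {
  # Usage: run_with_status <command...>
  # Prints sat/unsat/unknown from the command's output, or
  # "timeout"/"crash"/"no-verdict" derived from its exit code.
  out=$(timeout 10 "$@" 2>/dev/null)
  rc=$?
  result=$(echo "$out" | grep -E '^(sat|unsat|unknown)' | head -1)
  if [ -n "$result" ]; then
    echo "$result"
  elif [ "$rc" -eq 124 ]; then
    echo "timeout"
  elif [ "$rc" -gt 128 ]; then
    echo "crash"   # 128 + signal number, e.g. 139 = SIGSEGV
  else
    echo "no-verdict"
  fi
}
```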
## Important Notes

- **DO NOT** modify any source files or create pull requests.
- **DO NOT** run benchmarks for longer than 80 minutes total (leave buffer for posting).
- **DO** always report the commit SHA so results can be correlated with specific code versions.
- **DO** close older ZIPT Benchmark discussions automatically (configured via `close-older-discussions: true`).
- **DO** highlight disagreements prominently — these are potential correctness bugs.

@@ -2277,6 +2277,200 @@ class JavaExample
    }

    void numeralDoubleExample(Context ctx) throws TestFailedException
    {
        System.out.println("NumeralDoubleExample");
        Log.append("NumeralDoubleExample");

        IntNum n42 = ctx.mkInt(42);
        if (n42.getNumeralDouble() != 42.0)
            throw new TestFailedException();

        RatNum half = ctx.mkReal(1, 2);
        if (Math.abs(half.getNumeralDouble() - 0.5) > 1e-10)
            throw new TestFailedException();

        System.out.println("NumeralDoubleExample passed.");
    }

    void unsignedNumeralExample(Context ctx) throws TestFailedException
    {
        System.out.println("UnsignedNumeralExample");
        Log.append("UnsignedNumeralExample");

        IntNum n100 = ctx.mkInt(100);
        if (n100.getUint() != 100)
            throw new TestFailedException();

        IntNum big = ctx.mkInt(3000000000L);
        if (big.getUint64() != 3000000000L)
            throw new TestFailedException();

        System.out.println("UnsignedNumeralExample passed.");
    }

    void rationalExtractionExample(Context ctx) throws TestFailedException
    {
        System.out.println("RationalExtractionExample");
        Log.append("RationalExtractionExample");

        RatNum r34 = ctx.mkReal(3, 4);

        // getSmall returns [numerator, denominator]
        long[] small = r34.getSmall();
        if (small[0] != 3 || small[1] != 4)
            throw new TestFailedException();

        // getRationalInt64 returns [numerator, denominator] or null
        long[] ri64 = r34.getRationalInt64();
        if (ri64 == null || ri64[0] != 3 || ri64[1] != 4)
            throw new TestFailedException();

        // integer as rational: 7/1
        RatNum r71 = ctx.mkReal(7, 1);
        long[] small71 = r71.getSmall();
        if (small71[0] != 7 || small71[1] != 1)
            throw new TestFailedException();

        System.out.println("RationalExtractionExample passed.");
    }

    void isGroundExample(Context ctx) throws TestFailedException
    {
        System.out.println("IsGroundExample");
        Log.append("IsGroundExample");

        // a constant integer is ground
        IntExpr five = ctx.mkInt(5);
        if (!five.isGround())
            throw new TestFailedException();

        // an uninterpreted constant is also ground (no bound variables)
        IntExpr x = ctx.mkIntConst("x");
        if (!x.isGround())
            throw new TestFailedException();

        // an addition of constants is ground
        Expr sum = ctx.mkAdd(ctx.mkInt(1), ctx.mkInt(2));
        if (!sum.isGround())
            throw new TestFailedException();

        System.out.println("IsGroundExample passed.");
    }

    @SuppressWarnings("unchecked")
    void astDepthExample(Context ctx) throws TestFailedException
    {
        System.out.println("AstDepthExample");
        Log.append("AstDepthExample");

        // a plain integer constant has depth 1
        IntExpr five = ctx.mkInt(5);
        if (five.getDepth() != 1)
            throw new TestFailedException();

        // (x + 1) should have depth 2
        IntExpr x = ctx.mkIntConst("x");
        Expr sum = ctx.mkAdd(x, ctx.mkInt(1));
        if (sum.getDepth() != 2)
            throw new TestFailedException();

        // nested: (x + 1) * y should have depth 3
        IntExpr y = ctx.mkIntConst("y");
        Expr prod = ctx.mkMul(sum, y);
        if (prod.getDepth() != 3)
            throw new TestFailedException();

        System.out.println("AstDepthExample passed.");
    }

    void arrayArityExample(Context ctx) throws TestFailedException
    {
        System.out.println("ArrayArityExample");
        Log.append("ArrayArityExample");

        // Array Int -> Int has arity 1
        ArraySort<IntSort, IntSort> arr1 = ctx.mkArraySort(ctx.getIntSort(), ctx.getIntSort());
        if (arr1.getArity() != 1)
            throw new TestFailedException();

        // Array (Int, Bool) -> Int has arity 2
        ArraySort arr2 = ctx.mkArraySort(new Sort[]{ctx.getIntSort(), ctx.getBoolSort()}, ctx.getIntSort());
        if (arr2.getArity() != 2)
            throw new TestFailedException();

        System.out.println("ArrayArityExample passed.");
    }

    void recursiveDatatypeExample(Context ctx) throws TestFailedException
    {
        System.out.println("RecursiveDatatypeExample");
        Log.append("RecursiveDatatypeExample");

        // a list sort is recursive (cons refers back to the list)
        Constructor<Sort> nil = ctx.mkConstructor("nil", "is_nil", null, null, null);
        Constructor<Sort> cons = ctx.mkConstructor("cons", "is_cons",
                new String[]{"head", "tail"},
                new Sort[]{ctx.getIntSort(), null},
                new int[]{0, 0});
        DatatypeSort<Sort> intList = ctx.mkDatatypeSort("intlist", new Constructor[]{nil, cons});
        if (!intList.isRecursive())
            throw new TestFailedException();

        // a simple pair sort is not recursive
        Constructor<Sort> mkPair = ctx.mkConstructor("mkpair", "is_pair",
                new String[]{"fst", "snd"},
                new Sort[]{ctx.getIntSort(), ctx.getBoolSort()},
                null);
        DatatypeSort<Sort> pair = ctx.mkDatatypeSort("Pair", new Constructor[]{mkPair});
        if (pair.isRecursive())
            throw new TestFailedException();

        System.out.println("RecursiveDatatypeExample passed.");
    }

    void fpNumeralExample(Context ctx) throws TestFailedException
    {
        System.out.println("FpNumeralExample");
        Log.append("FpNumeralExample");

        FPSort fpsort = ctx.mkFPSort32();

        // a floating point numeral
        FPExpr fpval = (FPExpr) ctx.mkFP(3.14, fpsort);
        if (!fpval.isNumeral())
            throw new TestFailedException();

        // a symbolic FP variable is not a numeral
        FPExpr fpvar = (FPExpr) ctx.mkConst("fpx", fpsort);
        if (fpvar.isNumeral())
            throw new TestFailedException();

        System.out.println("FpNumeralExample passed.");
    }

    @SuppressWarnings("unchecked")
    void isLambdaExample(Context ctx) throws TestFailedException
    {
        System.out.println("IsLambdaExample");
        Log.append("IsLambdaExample");

        // build lambda x : Int . x + 1
        IntExpr x = (IntExpr) ctx.mkBound(0, ctx.getIntSort());
        Expr body = ctx.mkAdd(x, ctx.mkInt(1));
        Expr lam = ctx.mkLambda(new Sort[]{ctx.getIntSort()},
                new Symbol[]{ctx.mkSymbol("x")}, body);
        if (!lam.isLambda())
            throw new TestFailedException();

        // an ordinary expression is not a lambda
        IntExpr y = ctx.mkIntConst("y");
        if (y.isLambda())
            throw new TestFailedException();

        System.out.println("IsLambdaExample passed.");
    }

    public static void main(String[] args)
    {
        JavaExample p = new JavaExample();

@@ -2328,6 +2522,15 @@ class JavaExample
            p.finiteDomainExample(ctx);
            p.floatingPointExample1(ctx);
            // core dumps: p.floatingPointExample2(ctx);
            p.numeralDoubleExample(ctx);
            p.unsignedNumeralExample(ctx);
            p.rationalExtractionExample(ctx);
            p.isGroundExample(ctx);
            p.astDepthExample(ctx);
            p.arrayArityExample(ctx);
            p.recursiveDatatypeExample(ctx);
            p.fpNumeralExample(ctx);
            p.isLambdaExample(ctx);
        }

        { // These examples need proof generation turned on.

@@ -1919,11 +1919,8 @@ class JavaDLLComponent(Component):
         if IS_WINDOWS: # On Windows, CL creates a .lib file to link against.
             out.write('\t$(SLINK) $(SLINK_OUT_FLAG)libz3java$(SO_EXT) $(SLINK_FLAGS) %s$(OBJ_EXT) libz3$(LIB_EXT)\n' %
                       os.path.join('api', 'java', 'Native'))
-        elif IS_OSX and IS_ARCH_ARM64:
-            out.write('\t$(SLINK) $(SLINK_OUT_FLAG)libz3java$(SO_EXT) $(SLINK_FLAGS) -arch arm64 %s$(OBJ_EXT) libz3$(SO_EXT)\n' %
-                      os.path.join('api', 'java', 'Native'))
         else:
-            out.write('\t$(SLINK) $(SLINK_OUT_FLAG)libz3java$(SO_EXT) $(SLINK_FLAGS) %s$(OBJ_EXT) libz3$(SO_EXT)\n' %
+            out.write('\t$(SLINK) $(SLINK_OUT_FLAG)libz3java$(SO_EXT) $(SLINK_FLAGS) %s$(OBJ_EXT) libz3$(SO_EXT) $(SLINK_EXTRA_FLAGS)\n' %
                       os.path.join('api', 'java', 'Native'))
         out.write('%s.jar: libz3java$(SO_EXT) ' % self.package_name)
         deps = ''

278	scripts/tests/test_jni_arch_flags.py	Normal file

@@ -0,0 +1,278 @@
############################################
# Copyright (c) 2024 Microsoft Corporation
#
# Unit tests for JNI architecture flags in Makefile generation.
#
# Regression tests for:
#   "JNI bindings use wrong architecture in macOS cross-compilation (arm64 to x64)"
#
# The fix ensures that libz3java.dylib (and the JNI link step) uses
# $(SLINK_EXTRA_FLAGS) instead of a hardcoded -arch arm64.
# $(SLINK_EXTRA_FLAGS) is populated correctly in mk_config() for:
#   - Native ARM64 builds: SLINK_EXTRA_FLAGS contains -arch arm64
#   - Cross-compile to x86_64: SLINK_EXTRA_FLAGS contains -arch x86_64
#   - Other platforms: SLINK_EXTRA_FLAGS has no -arch flag
############################################
import io
import os
import sys
import unittest
from unittest.mock import patch, MagicMock

# Add the scripts directory to the path so we can import mk_util
_SCRIPTS_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if _SCRIPTS_DIR not in sys.path:
    sys.path.insert(0, _SCRIPTS_DIR)

import mk_util


class TestJNIArchitectureFlagsInMakefile(unittest.TestCase):
    """
    Tests that JavaDLLComponent.mk_makefile() generates a JNI link command
    that uses $(SLINK_EXTRA_FLAGS) rather than hardcoding -arch arm64.

    $(SLINK_EXTRA_FLAGS) is set by mk_config() to contain the correct -arch
    flag for the TARGET architecture (not the host), so using it ensures
    cross-compilation works correctly.
    """

    def setUp(self):
        """Save mk_util global state before each test."""
        self._saved_components = list(mk_util._Components)
        self._saved_names = set(mk_util._ComponentNames)
        self._saved_name2component = dict(mk_util._Name2Component)
        self._saved_id = mk_util._Id
        self._saved_javac = mk_util.JAVAC
        self._saved_jar = mk_util.JAR

    def tearDown(self):
        """Restore mk_util global state after each test."""
        mk_util._Components[:] = self._saved_components
        mk_util._ComponentNames.clear()
        mk_util._ComponentNames.update(self._saved_names)
        mk_util._Name2Component.clear()
        mk_util._Name2Component.update(self._saved_name2component)
        mk_util._Id = self._saved_id
        mk_util.JAVAC = self._saved_javac
        mk_util.JAR = self._saved_jar

    def _make_java_dll_component(self):
        """
        Create a JavaDLLComponent instance bypassing the registry check so
        that tests remain independent of each other.
        """
        # Register a stub 'api' component that provides to_src_dir
        api_stub = MagicMock()
        api_stub.to_src_dir = '../src/api'
        mk_util._Name2Component['api'] = api_stub
        mk_util._ComponentNames.add('api')

        # Build the component without going through the full Component.__init__
|
||||
# registration path (which enforces uniqueness globally).
|
||||
comp = mk_util.JavaDLLComponent.__new__(mk_util.JavaDLLComponent)
|
||||
comp.name = 'java'
|
||||
comp.dll_name = 'libz3java'
|
||||
comp.package_name = 'com.microsoft.z3'
|
||||
comp.manifest_file = None
|
||||
comp.to_src_dir = '../src/api/java'
|
||||
comp.src_dir = 'src/api/java'
|
||||
comp.deps = []
|
||||
comp.install = True
|
||||
return comp
|
||||
|
||||
def _generate_makefile(self, comp, *, is_windows, is_osx, is_arch_arm64):
|
||||
"""
|
||||
Call mk_makefile() with the given platform flags and return the
|
||||
generated Makefile text.
|
||||
"""
|
||||
buf = io.StringIO()
|
||||
with patch.object(mk_util, 'JAVA_ENABLED', True), \
|
||||
patch.object(mk_util, 'IS_WINDOWS', is_windows), \
|
||||
patch.object(mk_util, 'IS_OSX', is_osx), \
|
||||
patch.object(mk_util, 'IS_ARCH_ARM64', is_arch_arm64), \
|
||||
patch.object(mk_util, 'JNI_HOME', '/path/to/jni'), \
|
||||
patch.object(mk_util, 'JAVAC', 'javac'), \
|
||||
patch.object(mk_util, 'JAR', 'jar'), \
|
||||
patch.object(mk_util, 'BUILD_DIR', '/tmp/test_build'), \
|
||||
patch('mk_util.mk_dir'), \
|
||||
patch('mk_util.get_java_files', return_value=[]):
|
||||
comp.mk_makefile(buf)
|
||||
return buf.getvalue()
|
||||
|
||||
def _find_jni_link_lines(self, makefile_text):
|
||||
"""Return lines that contain the JNI library link command."""
|
||||
return [
|
||||
line for line in makefile_text.splitlines()
|
||||
if 'libz3java$(SO_EXT)' in line and 'SLINK' in line
|
||||
]
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Tests for non-Windows platforms (where SLINK_EXTRA_FLAGS matters)
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
def test_macos_arm64_native_uses_slink_extra_flags(self):
|
||||
"""
|
||||
On native ARM64 macOS builds, the JNI link command must use
|
||||
$(SLINK_EXTRA_FLAGS) so that the -arch arm64 flag added to
|
||||
SLINK_EXTRA_FLAGS by mk_config() is respected.
|
||||
"""
|
||||
comp = self._make_java_dll_component()
|
||||
text = self._generate_makefile(
|
||||
comp, is_windows=False, is_osx=True, is_arch_arm64=True
|
||||
)
|
||||
link_lines = self._find_jni_link_lines(text)
|
||||
self.assertTrue(
|
||||
link_lines,
|
||||
"Expected at least one JNI link line in the generated Makefile",
|
||||
)
|
||||
for line in link_lines:
|
||||
self.assertIn(
|
||||
'$(SLINK_EXTRA_FLAGS)', line,
|
||||
"JNI link command must use $(SLINK_EXTRA_FLAGS) so the "
|
||||
"correct target architecture flag is applied",
|
||||
)
|
||||
|
||||
def test_macos_arm64_native_no_hardcoded_arch_arm64(self):
|
||||
"""
|
||||
The JNI link command must NOT hardcode -arch arm64.
|
||||
Hardcoding -arch arm64 breaks cross-compilation from an ARM64 host
|
||||
to an x86_64 target, which is the bug this fix addresses.
|
||||
"""
|
||||
comp = self._make_java_dll_component()
|
||||
text = self._generate_makefile(
|
||||
comp, is_windows=False, is_osx=True, is_arch_arm64=True
|
||||
)
|
||||
link_lines = self._find_jni_link_lines(text)
|
||||
self.assertTrue(link_lines, "Expected at least one JNI link line")
|
||||
for line in link_lines:
|
||||
self.assertNotIn(
|
||||
'-arch arm64', line,
|
||||
"JNI link command must not hardcode '-arch arm64'. "
|
||||
"Use $(SLINK_EXTRA_FLAGS) instead so that cross-compilation "
|
||||
"from ARM64 host to x86_64 target works correctly.",
|
||||
)
|
||||
|
||||
def test_macos_x86_64_uses_slink_extra_flags(self):
|
||||
"""
|
||||
When building for x86_64 on macOS (e.g. cross-compiling from ARM64
|
||||
host), the JNI link command must still use $(SLINK_EXTRA_FLAGS) so
|
||||
that the -arch x86_64 flag set by mk_config() is applied.
|
||||
"""
|
||||
comp = self._make_java_dll_component()
|
||||
text = self._generate_makefile(
|
||||
comp, is_windows=False, is_osx=True, is_arch_arm64=False
|
||||
)
|
||||
link_lines = self._find_jni_link_lines(text)
|
||||
self.assertTrue(link_lines, "Expected at least one JNI link line")
|
||||
for line in link_lines:
|
||||
self.assertIn(
|
||||
'$(SLINK_EXTRA_FLAGS)', line,
|
||||
"JNI link command must use $(SLINK_EXTRA_FLAGS)",
|
||||
)
|
||||
|
||||
def test_linux_uses_slink_extra_flags(self):
|
||||
"""On Linux, the JNI link command must use $(SLINK_EXTRA_FLAGS)."""
|
||||
comp = self._make_java_dll_component()
|
||||
text = self._generate_makefile(
|
||||
comp, is_windows=False, is_osx=False, is_arch_arm64=False
|
||||
)
|
||||
link_lines = self._find_jni_link_lines(text)
|
||||
self.assertTrue(link_lines, "Expected at least one JNI link line")
|
||||
for line in link_lines:
|
||||
self.assertIn(
|
||||
'$(SLINK_EXTRA_FLAGS)', line,
|
||||
"JNI link command must use $(SLINK_EXTRA_FLAGS) on Linux",
|
||||
)
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Tests for Windows (different codepath - links against LIB_EXT)
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
def test_windows_links_against_lib_ext(self):
|
||||
"""
|
||||
On Windows the JNI library is linked against the import library
|
||||
(libz3$(LIB_EXT)), not the shared library, and SLINK_EXTRA_FLAGS is
|
||||
handled differently by the VS build system.
|
||||
"""
|
||||
comp = self._make_java_dll_component()
|
||||
text = self._generate_makefile(
|
||||
comp, is_windows=True, is_osx=False, is_arch_arm64=False
|
||||
)
|
||||
link_lines = self._find_jni_link_lines(text)
|
||||
self.assertTrue(link_lines, "Expected at least one JNI link line")
|
||||
for line in link_lines:
|
||||
self.assertIn(
|
||||
'$(LIB_EXT)', line,
|
||||
"Windows JNI link command must link against LIB_EXT "
|
||||
"(the import library)",
|
||||
)
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Consistency check: SLINK_EXTRA_FLAGS in mk_config for cross-compile
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
def test_slibextraflags_contains_x86_64_when_cross_compiling(self):
|
||||
"""
|
||||
When mk_config() runs on an ARM64 macOS host with IS_ARCH_ARM64=False
|
||||
(i.e. cross-compiling to x86_64), SLIBEXTRAFLAGS must contain
|
||||
'-arch x86_64' so that $(SLINK_EXTRA_FLAGS) carries the right flag.
|
||||
|
||||
This validates the mk_config() logic that feeds into $(SLINK_EXTRA_FLAGS).
|
||||
"""
|
||||
# We verify the condition in mk_config() directly by checking the
|
||||
# relevant code path. The cross-compile path in mk_config() is:
|
||||
#
|
||||
# elif IS_OSX and os.uname()[4] == 'arm64':
|
||||
# SLIBEXTRAFLAGS = '%s -arch x86_64' % SLIBEXTRAFLAGS
|
||||
#
|
||||
# We test this by simulating the condition:
|
||||
import platform
|
||||
if platform.system() != 'Darwin' or platform.machine() != 'arm64':
|
||||
self.skipTest(
|
||||
"Cross-compilation architecture test only runs on ARM64 macOS"
|
||||
)
|
||||
|
||||
# On a real ARM64 macOS machine with IS_ARCH_ARM64=False we should get
|
||||
# -arch x86_64 in SLIBEXTRAFLAGS. Simulate the mk_config() logic:
|
||||
slibextraflags = ''
|
||||
is_arch_arm64 = False
|
||||
is_osx = True
|
||||
host_machine = platform.machine() # 'arm64'
|
||||
|
||||
if is_arch_arm64 and is_osx:
|
||||
slibextraflags = '%s -arch arm64' % slibextraflags
|
||||
elif is_osx and host_machine == 'arm64':
|
||||
slibextraflags = '%s -arch x86_64' % slibextraflags
|
||||
|
||||
self.assertIn(
|
||||
'-arch x86_64', slibextraflags,
|
||||
"When cross-compiling from ARM64 macOS to x86_64, "
|
||||
"SLIBEXTRAFLAGS must contain '-arch x86_64'",
|
||||
)
|
||||
|
||||
def test_slibextraflags_contains_arm64_for_native_arm64_build(self):
|
||||
"""
|
||||
When mk_config() runs on a native ARM64 macOS build (IS_ARCH_ARM64=True),
|
||||
SLIBEXTRAFLAGS must contain '-arch arm64'.
|
||||
"""
|
||||
import platform
|
||||
if platform.system() != 'Darwin':
|
||||
self.skipTest("Architecture flag test only relevant on macOS")
|
||||
|
||||
slibextraflags = ''
|
||||
is_arch_arm64 = True
|
||||
is_osx = True
|
||||
|
||||
if is_arch_arm64 and is_osx:
|
||||
slibextraflags = '%s -arch arm64' % slibextraflags
|
||||
|
||||
self.assertIn(
|
||||
'-arch arm64', slibextraflags,
|
||||
"For a native ARM64 macOS build, SLIBEXTRAFLAGS must contain "
|
||||
"'-arch arm64' so that $(SLINK_EXTRA_FLAGS) carries the correct flag",
|
||||
)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
unittest.main()
|
||||
|
|
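The `_find_jni_link_lines` helper keys the whole test on two substrings. As a sanity check, here is a minimal standalone sketch showing how that filter isolates the link command; the Makefile fragment is made up for illustration and real `mk_makefile()` output differs:

```python
# Hypothetical Makefile fragment of the kind mk_makefile() emits.
makefile_text = """\
Native$(OBJ_EXT): ../src/api/java/Native.cpp
\t$(CXX) $(CXXFLAGS) -c api/java/Native.cpp
\t$(SLINK) $(SLINK_OUT_FLAG)libz3java$(SO_EXT) $(SLINK_FLAGS) api/java/Native$(OBJ_EXT) libz3$(SO_EXT) $(SLINK_EXTRA_FLAGS)
"""

def find_jni_link_lines(text):
    # Same filter the test helper uses: a link line mentions both the
    # JNI library name and the SLINK linker variable.
    return [line for line in text.splitlines()
            if 'libz3java$(SO_EXT)' in line and 'SLINK' in line]

link_lines = find_jni_link_lines(makefile_text)
print(len(link_lines))                           # 1: only the link command matches
print('$(SLINK_EXTRA_FLAGS)' in link_lines[0])   # True
```

Only the third line carries both markers, so compile and rule-header lines are ignored.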
@@ -852,12 +852,18 @@ def mk_java(java_src, java_dir, package_name):
             java_wrapper.write(' RELEASELONGAELEMS(a%s, _a%s);\n' % (i, i))

         elif k == OUT or k == INOUT:
-            if param_type(param) == INT or param_type(param) == UINT or param_type(param) == BOOL:
+            if param_type(param) == INT or param_type(param) == UINT:
                 java_wrapper.write(' {\n')
                 java_wrapper.write(' jclass mc = jenv->GetObjectClass(a%s);\n' % i)
                 java_wrapper.write(' jfieldID fid = jenv->GetFieldID(mc, "value", "I");\n')
                 java_wrapper.write(' jenv->SetIntField(a%s, fid, (jint) _a%s);\n' % (i, i))
                 java_wrapper.write(' }\n')
+            elif param_type(param) == BOOL:
+                java_wrapper.write(' {\n')
+                java_wrapper.write(' jclass mc = jenv->GetObjectClass(a%s);\n' % i)
+                java_wrapper.write(' jfieldID fid = jenv->GetFieldID(mc, "value", "Z");\n')
+                java_wrapper.write(' jenv->SetBooleanField(a%s, fid, (jboolean) _a%s);\n' % (i, i))
+                java_wrapper.write(' }\n')
             elif param_type(param) == STRING:
                 java_wrapper.write(' {\n')
                 java_wrapper.write(' jclass mc = jenv->GetObjectClass(a%s);\n' % i)
@@ -38,6 +38,10 @@ extern "C" {
         Z3_TRY;
         LOG_Z3_mk_array_sort_n(c, n, domain, range);
         RESET_ERROR_CODE();
+        if (n == 0) {
+            SET_ERROR_CODE(Z3_INVALID_ARG, "array sort requires at least one domain sort");
+            RETURN_Z3(nullptr);
+        }
         vector<parameter> params;
         for (unsigned i = 0; i < n; ++i) params.push_back(parameter(to_sort(domain[i])));
         params.push_back(parameter(to_sort(range)));
@@ -898,6 +898,10 @@ extern "C" {
         RESET_ERROR_CODE();
         ast_manager & m = mk_c(c)->m();
         expr * a = to_expr(_a);
+        if (num_exprs > 0 && (!_from || !_to)) {
+            SET_ERROR_CODE(Z3_INVALID_ARG, "null from/to arrays with non-zero num_exprs");
+            RETURN_Z3(of_expr(nullptr));
+        }
         expr * const * from = to_exprs(num_exprs, _from);
         expr * const * to = to_exprs(num_exprs, _to);
         expr * r = nullptr;
@@ -1514,6 +1518,10 @@ extern "C" {
         LOG_Z3_translate(c, a, target);
         RESET_ERROR_CODE();
         CHECK_VALID_AST(a, nullptr);
+        if (!target) {
+            SET_ERROR_CODE(Z3_INVALID_ARG, "null target context");
+            RETURN_Z3(nullptr);
+        }
         if (c == target) {
             SET_ERROR_CODE(Z3_INVALID_ARG, nullptr);
             RETURN_Z3(nullptr);
@@ -167,6 +167,7 @@ extern "C" {
        RESET_ERROR_CODE();
        if (ebits < 2 || sbits < 3) {
            SET_ERROR_CODE(Z3_INVALID_ARG, "ebits should be at least 2, sbits at least 3");
            RETURN_Z3(nullptr);
        }
        api::context * ctx = mk_c(c);
        sort * s = ctx->fpautil().mk_float_sort(ebits, sbits);
@@ -64,6 +64,7 @@ extern "C" {
         LOG_Z3_model_get_const_interp(c, m, a);
         RESET_ERROR_CODE();
         CHECK_NON_NULL(m, nullptr);
+        CHECK_NON_NULL(a, nullptr);
         expr * r = to_model_ref(m)->get_const_interp(to_func_decl(a));
         if (!r) {
             RETURN_Z3(nullptr);
@@ -212,6 +213,10 @@ extern "C" {
         Z3_TRY;
         LOG_Z3_model_translate(c, m, target);
         RESET_ERROR_CODE();
+        if (!target) {
+            SET_ERROR_CODE(Z3_INVALID_ARG, "null target context");
+            RETURN_Z3(nullptr);
+        }
         Z3_model_ref* dst = alloc(Z3_model_ref, *mk_c(target));
         ast_translation tr(mk_c(c)->m(), mk_c(target)->m());
         dst->m_model = to_model_ref(m)->translate(tr);
@@ -246,7 +251,8 @@ extern "C" {
         Z3_TRY;
         LOG_Z3_add_func_interp(c, m, f, else_val);
         RESET_ERROR_CODE();
-        CHECK_NON_NULL(f, nullptr);
+        CHECK_NON_NULL(m, nullptr);
+        CHECK_NON_NULL(f, nullptr);
         func_decl* d = to_func_decl(f);
         model* mdl = to_model_ref(m);
         Z3_func_interp_ref * f_ref = alloc(Z3_func_interp_ref, *mk_c(c), mdl);
@@ -94,7 +94,11 @@ extern "C" {
         Z3_TRY;
         LOG_Z3_optimize_assert_soft(c, o, a, weight, id);
         RESET_ERROR_CODE();
         CHECK_FORMULA(a,0);
+        if (!weight) {
+            SET_ERROR_CODE(Z3_INVALID_ARG, "null weight string");
+            return 0;
+        }
         rational w(weight);
         return to_optimize_ptr(o)->add_soft_constraint(to_expr(a), w, to_symbol(id));
         Z3_CATCH_RETURN(0);
@@ -321,6 +321,10 @@ extern "C" {
         Z3_TRY;
         LOG_Z3_mk_pattern(c, num_patterns, terms);
         RESET_ERROR_CODE();
+        if (num_patterns == 0) {
+            SET_ERROR_CODE(Z3_INVALID_ARG, "pattern requires at least one term");
+            RETURN_Z3(nullptr);
+        }
         for (unsigned i = 0; i < num_patterns; ++i) {
             if (!is_app(to_expr(terms[i]))) {
                 SET_ERROR_CODE(Z3_INVALID_ARG, nullptr);
@@ -48,6 +48,10 @@ extern "C" {
         Z3_TRY;
         LOG_Z3_mk_string(c, str);
         RESET_ERROR_CODE();
+        if (!str) {
+            SET_ERROR_CODE(Z3_INVALID_ARG, "null string");
+            RETURN_Z3(nullptr);
+        }
         zstring s(str);
         app* a = mk_c(c)->sutil().str.mk_string(s);
         mk_c(c)->save_ast_trail(a);
@@ -59,6 +63,10 @@ extern "C" {
         Z3_TRY;
         LOG_Z3_mk_lstring(c, sz, str);
         RESET_ERROR_CODE();
+        if (sz > 0 && !str) {
+            SET_ERROR_CODE(Z3_INVALID_ARG, "null string buffer");
+            RETURN_Z3(nullptr);
+        }
         unsigned_vector chs;
         for (unsigned i = 0; i < sz; ++i) chs.push_back((unsigned char)str[i]);
         zstring s(sz, chs.data());
@@ -314,6 +322,10 @@ extern "C" {
         Z3_TRY;
         LOG_Z3_mk_re_loop(c, r, lo, hi);
         RESET_ERROR_CODE();
+        if (hi != 0 && lo > hi) {
+            SET_ERROR_CODE(Z3_INVALID_ARG, "loop lower bound must not exceed upper bound");
+            RETURN_Z3(nullptr);
+        }
         app* a = hi == 0 ? mk_c(c)->sutil().re.mk_loop(to_expr(r), lo) : mk_c(c)->sutil().re.mk_loop(to_expr(r), lo, hi);
         mk_c(c)->save_ast_trail(a);
         RETURN_Z3(of_ast(a));
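The `Z3_mk_re_loop` guard relies on the convention that `hi == 0` encodes an unbounded loop, so `lo > hi` is only an error when `hi` is a genuine upper bound. A small Python sketch of just that predicate (illustrative only, not Z3 API code):

```python
def valid_loop_bounds(lo: int, hi: int) -> bool:
    # Mirrors the Z3_mk_re_loop check: hi == 0 means "no upper bound",
    # so lo > hi is only invalid when hi is an actual bound.
    return hi == 0 or lo <= hi

print(valid_loop_bounds(2, 0))   # True: re.loop(r, 2) with no upper bound
print(valid_loop_bounds(2, 5))   # True: 2..5 repetitions
print(valid_loop_bounds(5, 2))   # False: rejected with Z3_INVALID_ARG
```

This is why the error path tests `hi != 0 && lo > hi` rather than a plain `lo > hi`.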
@@ -379,14 +379,15 @@ extern "C" {
         LOG_Z3_solver_from_file(c, s, file_name);
         char const* ext = get_extension(file_name);
         std::ifstream is(file_name);
+        init_solver(c, s);
         if (!is) {
             SET_ERROR_CODE(Z3_FILE_ACCESS_ERROR, nullptr);
         }
         else if (ext && (std::string("dimacs") == ext || std::string("cnf") == ext)) {
             init_solver(c, s);
             solver_from_dimacs_stream(c, s, is);
         }
         else {
             init_solver(c, s);
             solver_from_stream(c, s, is);
         }
         Z3_CATCH;
@@ -1153,24 +1154,24 @@ extern "C" {
     void Z3_API Z3_solver_propagate_created(Z3_context c, Z3_solver s, Z3_created_eh created_eh) {
         Z3_TRY;
         RESET_ERROR_CODE();
-        user_propagator::created_eh_t c = (void(*)(void*, user_propagator::callback*, expr*))created_eh;
-        to_solver_ref(s)->user_propagate_register_created(c);
+        user_propagator::created_eh_t created_fn = (void(*)(void*, user_propagator::callback*, expr*))created_eh;
+        to_solver_ref(s)->user_propagate_register_created(created_fn);
         Z3_CATCH;
     }

     void Z3_API Z3_solver_propagate_decide(Z3_context c, Z3_solver s, Z3_decide_eh decide_eh) {
         Z3_TRY;
         RESET_ERROR_CODE();
-        user_propagator::decide_eh_t c = (void(*)(void*, user_propagator::callback*, expr*, unsigned, bool))decide_eh;
-        to_solver_ref(s)->user_propagate_register_decide(c);
+        user_propagator::decide_eh_t decide_fn = (void(*)(void*, user_propagator::callback*, expr*, unsigned, bool))decide_eh;
+        to_solver_ref(s)->user_propagate_register_decide(decide_fn);
         Z3_CATCH;
     }

     void Z3_API Z3_solver_propagate_on_binding(Z3_context c, Z3_solver s, Z3_on_binding_eh binding_eh) {
         Z3_TRY;
         RESET_ERROR_CODE();
-        user_propagator::binding_eh_t c = (bool(*)(void*, user_propagator::callback*, expr*, expr*))binding_eh;
-        to_solver_ref(s)->user_propagate_register_on_binding(c);
+        user_propagator::binding_eh_t binding_fn = (bool(*)(void*, user_propagator::callback*, expr*, expr*))binding_eh;
+        to_solver_ref(s)->user_propagate_register_on_binding(binding_fn);
         Z3_CATCH;
     }
@@ -1533,6 +1533,8 @@ namespace z3 {

         expr rotate_left(unsigned i) const { Z3_ast r = Z3_mk_rotate_left(ctx(), i, *this); ctx().check_error(); return expr(ctx(), r); }
         expr rotate_right(unsigned i) const { Z3_ast r = Z3_mk_rotate_right(ctx(), i, *this); ctx().check_error(); return expr(ctx(), r); }
+        expr ext_rotate_left(expr const& n) const { Z3_ast r = Z3_mk_ext_rotate_left(ctx(), *this, n); ctx().check_error(); return expr(ctx(), r); }
+        expr ext_rotate_right(expr const& n) const { Z3_ast r = Z3_mk_ext_rotate_right(ctx(), *this, n); ctx().check_error(); return expr(ctx(), r); }
         expr repeat(unsigned i) const { Z3_ast r = Z3_mk_repeat(ctx(), i, *this); ctx().check_error(); return expr(ctx(), r); }

         friend expr bvredor(expr const & a);
@@ -124,3 +124,33 @@ func (c *Context) MkGt(lhs, rhs *Expr) *Expr {
func (c *Context) MkGe(lhs, rhs *Expr) *Expr {
	return newExpr(c, C.Z3_mk_ge(c.ptr, lhs.ptr, rhs.ptr))
}

// MkPower creates an exponentiation expression (base^exp).
func (c *Context) MkPower(base, exp *Expr) *Expr {
	return newExpr(c, C.Z3_mk_power(c.ptr, base.ptr, exp.ptr))
}

// MkAbs creates an absolute value expression.
func (c *Context) MkAbs(arg *Expr) *Expr {
	return newExpr(c, C.Z3_mk_abs(c.ptr, arg.ptr))
}

// MkInt2Real coerces an integer expression to a real.
func (c *Context) MkInt2Real(arg *Expr) *Expr {
	return newExpr(c, C.Z3_mk_int2real(c.ptr, arg.ptr))
}

// MkReal2Int converts a real expression to an integer (floor).
func (c *Context) MkReal2Int(arg *Expr) *Expr {
	return newExpr(c, C.Z3_mk_real2int(c.ptr, arg.ptr))
}

// MkIsInt creates a predicate that checks whether a real expression is an integer.
func (c *Context) MkIsInt(arg *Expr) *Expr {
	return newExpr(c, C.Z3_mk_is_int(c.ptr, arg.ptr))
}

// MkDivides creates an integer divisibility predicate (t1 divides t2).
func (c *Context) MkDivides(t1, t2 *Expr) *Expr {
	return newExpr(c, C.Z3_mk_divides(c.ptr, t1.ptr, t2.ptr))
}
@@ -64,3 +64,23 @@ func (c *Context) MkArrayDefault(array *Expr) *Expr {
func (c *Context) MkArrayExt(a1, a2 *Expr) *Expr {
	return newExpr(c, C.Z3_mk_array_ext(c.ptr, a1.ptr, a2.ptr))
}

// MkAsArray creates an array from a function declaration.
// The resulting array maps each input to the output of the function.
func (c *Context) MkAsArray(f *FuncDecl) *Expr {
	return newExpr(c, C.Z3_mk_as_array(c.ptr, f.ptr))
}

// MkMap applies a function to the elements of one or more arrays, returning a new array.
// The function f is applied element-wise to the given arrays.
func (c *Context) MkMap(f *FuncDecl, arrays ...*Expr) *Expr {
	cArrays := make([]C.Z3_ast, len(arrays))
	for i, a := range arrays {
		cArrays[i] = a.ptr
	}
	var cArraysPtr *C.Z3_ast
	if len(cArrays) > 0 {
		cArraysPtr = &cArrays[0]
	}
	return newExpr(c, C.Z3_mk_map(c.ptr, f.ptr, C.uint(len(arrays)), cArraysPtr))
}
@@ -158,3 +158,88 @@ func (c *Context) MkSignExt(i uint, expr *Expr) *Expr {
func (c *Context) MkZeroExt(i uint, expr *Expr) *Expr {
	return newExpr(c, C.Z3_mk_zero_ext(c.ptr, C.uint(i), expr.ptr))
}

// MkBVRotateLeft rotates the bits of t to the left by i positions.
func (c *Context) MkBVRotateLeft(i uint, t *Expr) *Expr {
	return newExpr(c, C.Z3_mk_rotate_left(c.ptr, C.uint(i), t.ptr))
}

// MkBVRotateRight rotates the bits of t to the right by i positions.
func (c *Context) MkBVRotateRight(i uint, t *Expr) *Expr {
	return newExpr(c, C.Z3_mk_rotate_right(c.ptr, C.uint(i), t.ptr))
}

// MkRepeat repeats the given bit-vector t a total of i times.
func (c *Context) MkRepeat(i uint, t *Expr) *Expr {
	return newExpr(c, C.Z3_mk_repeat(c.ptr, C.uint(i), t.ptr))
}

// MkBVAddNoOverflow creates a predicate that checks that the bit-wise addition
// of t1 and t2 does not overflow. If isSigned is true, checks for signed overflow.
func (c *Context) MkBVAddNoOverflow(t1, t2 *Expr, isSigned bool) *Expr {
	return newExpr(c, C.Z3_mk_bvadd_no_overflow(c.ptr, t1.ptr, t2.ptr, C.bool(isSigned)))
}

// MkBVAddNoUnderflow creates a predicate that checks that the bit-wise signed addition
// of t1 and t2 does not underflow.
func (c *Context) MkBVAddNoUnderflow(t1, t2 *Expr) *Expr {
	return newExpr(c, C.Z3_mk_bvadd_no_underflow(c.ptr, t1.ptr, t2.ptr))
}

// MkBVSubNoOverflow creates a predicate that checks that the bit-wise signed subtraction
// of t1 and t2 does not overflow.
func (c *Context) MkBVSubNoOverflow(t1, t2 *Expr) *Expr {
	return newExpr(c, C.Z3_mk_bvsub_no_overflow(c.ptr, t1.ptr, t2.ptr))
}

// MkBVSubNoUnderflow creates a predicate that checks that the bit-wise subtraction
// of t1 and t2 does not underflow. If isSigned is true, checks for signed underflow.
func (c *Context) MkBVSubNoUnderflow(t1, t2 *Expr, isSigned bool) *Expr {
	return newExpr(c, C.Z3_mk_bvsub_no_underflow(c.ptr, t1.ptr, t2.ptr, C.bool(isSigned)))
}

// MkBVSdivNoOverflow creates a predicate that checks that the bit-wise signed division
// of t1 and t2 does not overflow.
func (c *Context) MkBVSdivNoOverflow(t1, t2 *Expr) *Expr {
	return newExpr(c, C.Z3_mk_bvsdiv_no_overflow(c.ptr, t1.ptr, t2.ptr))
}

// MkBVNegNoOverflow creates a predicate that checks that bit-wise negation does not overflow
// when t1 is interpreted as a signed bit-vector.
func (c *Context) MkBVNegNoOverflow(t1 *Expr) *Expr {
	return newExpr(c, C.Z3_mk_bvneg_no_overflow(c.ptr, t1.ptr))
}

// MkBVMulNoOverflow creates a predicate that checks that the bit-wise multiplication
// of t1 and t2 does not overflow. If isSigned is true, checks for signed overflow.
func (c *Context) MkBVMulNoOverflow(t1, t2 *Expr, isSigned bool) *Expr {
	return newExpr(c, C.Z3_mk_bvmul_no_overflow(c.ptr, t1.ptr, t2.ptr, C.bool(isSigned)))
}

// MkBVMulNoUnderflow creates a predicate that checks that the bit-wise signed multiplication
// of t1 and t2 does not underflow.
func (c *Context) MkBVMulNoUnderflow(t1, t2 *Expr) *Expr {
	return newExpr(c, C.Z3_mk_bvmul_no_underflow(c.ptr, t1.ptr, t2.ptr))
}

// MkBVRedAnd computes the bitwise AND reduction of a bit-vector, returning a 1-bit vector.
func (c *Context) MkBVRedAnd(t *Expr) *Expr {
	return newExpr(c, C.Z3_mk_bvredand(c.ptr, t.ptr))
}

// MkBVRedOr computes the bitwise OR reduction of a bit-vector, returning a 1-bit vector.
func (c *Context) MkBVRedOr(t *Expr) *Expr {
	return newExpr(c, C.Z3_mk_bvredor(c.ptr, t.ptr))
}

// MkBVExtRotateLeft rotates the bits of t1 to the left by the number of bits given by t2.
// Both t1 and t2 must be bit-vectors of the same width.
func (c *Context) MkBVExtRotateLeft(t1, t2 *Expr) *Expr {
	return newExpr(c, C.Z3_mk_ext_rotate_left(c.ptr, t1.ptr, t2.ptr))
}

// MkBVExtRotateRight rotates the bits of t1 to the right by the number of bits given by t2.
// Both t1 and t2 must be bit-vectors of the same width.
func (c *Context) MkBVExtRotateRight(t1, t2 *Expr) *Expr {
	return newExpr(c, C.Z3_mk_ext_rotate_right(c.ptr, t1.ptr, t2.ptr))
}
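The overflow predicates above (e.g. `MkBVAddNoOverflow` with `isSigned` true) encode standard two's-complement range checks. A Python sketch of the arithmetic being ruled out, for illustration only and not part of the binding:

```python
def signed_add_overflows(a: int, b: int, width: int) -> bool:
    # Two's-complement signed addition overflows when the mathematical sum
    # falls outside the representable range [-2^(width-1), 2^(width-1) - 1].
    lo = -(1 << (width - 1))
    hi = (1 << (width - 1)) - 1
    return not (lo <= a + b <= hi)

print(signed_add_overflows(100, 100, 8))   # True: 200 > 127 for 8-bit signed
print(signed_add_overflows(-100, 50, 8))   # False: -50 is representable
```

`MkBVAddNoOverflow(t1, t2, true)` asserts the negation of this condition as a Z3 predicate over symbolic bit-vectors.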
src/api/go/fp.go (115 lines)

@@ -167,3 +167,118 @@ func (c *Context) MkFPToIEEEBV(expr *Expr) *Expr {
|
|||
func (c *Context) MkFPToReal(expr *Expr) *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_to_real(c.ptr, expr.ptr))
|
||||
}
|
||||
|
||||
// MkFPRNE creates the round-nearest-ties-to-even rounding mode.
|
||||
func (c *Context) MkFPRNE() *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_rne(c.ptr))
|
||||
}
|
||||
|
||||
// MkFPRNA creates the round-nearest-ties-to-away rounding mode.
|
||||
func (c *Context) MkFPRNA() *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_rna(c.ptr))
|
||||
}
|
||||
|
||||
// MkFPRTP creates the round-toward-positive rounding mode.
|
||||
func (c *Context) MkFPRTP() *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_rtp(c.ptr))
|
||||
}
|
||||
|
||||
// MkFPRTN creates the round-toward-negative rounding mode.
|
||||
func (c *Context) MkFPRTN() *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_rtn(c.ptr))
|
||||
}
|
||||
|
||||
// MkFPRTZ creates the round-toward-zero rounding mode.
|
||||
func (c *Context) MkFPRTZ() *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_rtz(c.ptr))
|
||||
}
|
||||
|
||||
// MkFPFP creates a floating-point number from a sign bit (1-bit BV), exponent BV, and significand BV.
|
||||
func (c *Context) MkFPFP(sgn, exp, sig *Expr) *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_fp(c.ptr, sgn.ptr, exp.ptr, sig.ptr))
|
||||
}
|
||||
|
||||
// MkFPNumeralFloat creates a floating-point numeral from a float32 value.
|
||||
func (c *Context) MkFPNumeralFloat(v float32, sort *Sort) *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_numeral_float(c.ptr, C.float(v), sort.ptr))
|
||||
}
|
||||
|
||||
// MkFPNumeralDouble creates a floating-point numeral from a float64 value.
|
||||
func (c *Context) MkFPNumeralDouble(v float64, sort *Sort) *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_numeral_double(c.ptr, C.double(v), sort.ptr))
|
||||
}
|
||||
|
||||
// MkFPNumeralInt creates a floating-point numeral from a signed integer.
|
||||
func (c *Context) MkFPNumeralInt(v int, sort *Sort) *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_numeral_int(c.ptr, C.int(v), sort.ptr))
|
||||
}
|
||||
|
||||
// MkFPNumeralIntUint creates a floating-point numeral from a sign, signed exponent, and unsigned significand.
|
||||
func (c *Context) MkFPNumeralIntUint(sgn bool, exp int, sig uint, sort *Sort) *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_numeral_int_uint(c.ptr, C.bool(sgn), C.int(exp), C.uint(sig), sort.ptr))
|
||||
}
|
||||
|
||||
// MkFPNumeralInt64Uint64 creates a floating-point numeral from a sign, int64 exponent, and uint64 significand.
|
||||
func (c *Context) MkFPNumeralInt64Uint64(sgn bool, exp int64, sig uint64, sort *Sort) *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_numeral_int64_uint64(c.ptr, C.bool(sgn), C.int64_t(exp), C.uint64_t(sig), sort.ptr))
|
||||
}
|
||||
|
||||
// MkFPFMA creates a floating-point fused multiply-add: round((t1 * t2) + t3, rm).
|
||||
func (c *Context) MkFPFMA(rm, t1, t2, t3 *Expr) *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_fma(c.ptr, rm.ptr, t1.ptr, t2.ptr, t3.ptr))
|
||||
}
|
||||
|
||||
// MkFPRem creates a floating-point remainder.
|
||||
func (c *Context) MkFPRem(t1, t2 *Expr) *Expr {
|
||||
return newExpr(c, C.Z3_mk_fpa_rem(c.ptr, t1.ptr, t2.ptr))
|
||||
}
|
||||
|
||||
// MkFPMin creates the minimum of two floating-point values.
|
||||
func (c *Context) MkFPMin(t1, t2 *Expr) *Expr {
|
||||
	return newExpr(c, C.Z3_mk_fpa_min(c.ptr, t1.ptr, t2.ptr))
}

// MkFPMax creates the maximum of two floating-point values.
func (c *Context) MkFPMax(t1, t2 *Expr) *Expr {
	return newExpr(c, C.Z3_mk_fpa_max(c.ptr, t1.ptr, t2.ptr))
}

// MkFPRoundToIntegral creates a floating-point round-to-integral operation.
func (c *Context) MkFPRoundToIntegral(rm, t *Expr) *Expr {
	return newExpr(c, C.Z3_mk_fpa_round_to_integral(c.ptr, rm.ptr, t.ptr))
}

// MkFPToFPBV converts a bit-vector to a floating-point number (reinterpretation of IEEE 754 bits).
func (c *Context) MkFPToFPBV(bv *Expr, sort *Sort) *Expr {
	return newExpr(c, C.Z3_mk_fpa_to_fp_bv(c.ptr, bv.ptr, sort.ptr))
}

// MkFPToFPFloat converts a floating-point number to another floating-point sort with rounding.
func (c *Context) MkFPToFPFloat(rm, t *Expr, sort *Sort) *Expr {
	return newExpr(c, C.Z3_mk_fpa_to_fp_float(c.ptr, rm.ptr, t.ptr, sort.ptr))
}

// MkFPToFPReal converts a real number to a floating-point number with rounding.
func (c *Context) MkFPToFPReal(rm, t *Expr, sort *Sort) *Expr {
	return newExpr(c, C.Z3_mk_fpa_to_fp_real(c.ptr, rm.ptr, t.ptr, sort.ptr))
}

// MkFPToFPSigned converts a signed bit-vector to a floating-point number with rounding.
func (c *Context) MkFPToFPSigned(rm, t *Expr, sort *Sort) *Expr {
	return newExpr(c, C.Z3_mk_fpa_to_fp_signed(c.ptr, rm.ptr, t.ptr, sort.ptr))
}

// MkFPToFPUnsigned converts an unsigned bit-vector to a floating-point number with rounding.
func (c *Context) MkFPToFPUnsigned(rm, t *Expr, sort *Sort) *Expr {
	return newExpr(c, C.Z3_mk_fpa_to_fp_unsigned(c.ptr, rm.ptr, t.ptr, sort.ptr))
}

// MkFPToSBV converts a floating-point number to a signed bit-vector with rounding.
func (c *Context) MkFPToSBV(rm, t *Expr, sz uint) *Expr {
	return newExpr(c, C.Z3_mk_fpa_to_sbv(c.ptr, rm.ptr, t.ptr, C.uint(sz)))
}

// MkFPToUBV converts a floating-point number to an unsigned bit-vector with rounding.
func (c *Context) MkFPToUBV(rm, t *Expr, sz uint) *Expr {
	return newExpr(c, C.Z3_mk_fpa_to_ubv(c.ptr, rm.ptr, t.ptr, C.uint(sz)))
}

@@ -230,3 +230,58 @@ func (c *Context) MkSeqReplaceRe(seq, re, replacement *Expr) *Expr {
func (c *Context) MkSeqReplaceReAll(seq, re, replacement *Expr) *Expr {
	return newExpr(c, C.Z3_mk_seq_replace_re_all(c.ptr, seq.ptr, re.ptr, replacement.ptr))
}

// MkSeqReplaceAll replaces all occurrences of src with dst in seq.
func (c *Context) MkSeqReplaceAll(seq, src, dst *Expr) *Expr {
	return newExpr(c, C.Z3_mk_seq_replace_all(c.ptr, seq.ptr, src.ptr, dst.ptr))
}

// MkSeqNth retrieves the element of seq at the given index.
func (c *Context) MkSeqNth(seq, index *Expr) *Expr {
	return newExpr(c, C.Z3_mk_seq_nth(c.ptr, seq.ptr, index.ptr))
}

// MkSeqLastIndex returns the last index of substr in seq.
func (c *Context) MkSeqLastIndex(seq, substr *Expr) *Expr {
	return newExpr(c, C.Z3_mk_seq_last_index(c.ptr, seq.ptr, substr.ptr))
}

// MkSeqMap applies a function to each element of a sequence, returning a new sequence.
func (c *Context) MkSeqMap(f, seq *Expr) *Expr {
	return newExpr(c, C.Z3_mk_seq_map(c.ptr, f.ptr, seq.ptr))
}

// MkSeqMapi applies an indexed function to each element of a sequence, returning a new sequence.
func (c *Context) MkSeqMapi(f, i, seq *Expr) *Expr {
	return newExpr(c, C.Z3_mk_seq_mapi(c.ptr, f.ptr, i.ptr, seq.ptr))
}

// MkSeqFoldl applies a fold-left operation to a sequence.
func (c *Context) MkSeqFoldl(f, a, seq *Expr) *Expr {
	return newExpr(c, C.Z3_mk_seq_foldl(c.ptr, f.ptr, a.ptr, seq.ptr))
}

// MkSeqFoldli applies an indexed fold-left operation to a sequence.
func (c *Context) MkSeqFoldli(f, i, a, seq *Expr) *Expr {
	return newExpr(c, C.Z3_mk_seq_foldli(c.ptr, f.ptr, i.ptr, a.ptr, seq.ptr))
}
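MkSeqFoldl and MkSeqFoldli carry the usual fold-left semantics: the accumulator a is threaded through the sequence from left to right, and the indexed variant additionally passes each element's position. On concrete slices this reduces to a plain loop, sketched here without Z3 (the helper names foldl/foldli are illustrative, not part of the bindings):

```go
package main

import "fmt"

// foldl threads an accumulator through xs from left to right,
// matching the semantics MkSeqFoldl gives symbolically.
func foldl(f func(acc, x int) int, a int, xs []int) int {
	for _, x := range xs {
		a = f(a, x)
	}
	return a
}

// foldli additionally passes the element index, as MkSeqFoldli does.
func foldli(f func(i, acc, x int) int, a int, xs []int) int {
	for i, x := range xs {
		a = f(i, a, x)
	}
	return a
}

func main() {
	sum := foldl(func(acc, x int) int { return acc + x }, 0, []int{1, 2, 3})
	fmt.Println(sum) // 6

	// Weighted sum using the index: 0*1 + 1*2 + 2*3.
	weighted := foldli(func(i, acc, x int) int { return acc + i*x }, 0, []int{1, 2, 3})
	fmt.Println(weighted) // 8
}
```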

// MkStrLt creates a string less-than comparison.
func (c *Context) MkStrLt(s1, s2 *Expr) *Expr {
	return newExpr(c, C.Z3_mk_str_lt(c.ptr, s1.ptr, s2.ptr))
}

// MkStrLe creates a string less-than-or-equal comparison.
func (c *Context) MkStrLe(s1, s2 *Expr) *Expr {
	return newExpr(c, C.Z3_mk_str_le(c.ptr, s1.ptr, s2.ptr))
}

// MkStringToCode converts a single-character string to its Unicode code point.
func (c *Context) MkStringToCode(s *Expr) *Expr {
	return newExpr(c, C.Z3_mk_string_to_code(c.ptr, s.ptr))
}

// MkStringFromCode converts a Unicode code point to a single-character string.
func (c *Context) MkStringFromCode(code *Expr) *Expr {
	return newExpr(c, C.Z3_mk_string_from_code(c.ptr, code.ptr))
}
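MkStringToCode and MkStringFromCode are mutual inverses between one-character strings and Unicode code points. For concrete values the same mapping is a plain rune conversion in Go, which makes the intended semantics easy to check without Z3 (note the character count is in runes, not UTF-8 bytes):

```go
package main

import "fmt"

func main() {
	// MkStringFromCode: code point -> one-character string.
	code := 0x1F600 // a code point outside the Basic Multilingual Plane
	s := string(rune(code))
	fmt.Println(len([]rune(s))) // 1 rune, even though len(s) is 4 UTF-8 bytes

	// MkStringToCode: one-character string -> code point (the inverse).
	back := int([]rune(s)[0])
	fmt.Println(back == code) // true
}
```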

@@ -52,3 +52,10 @@ func (s *Simplifier) GetHelp() string {
func (s *Simplifier) GetParamDescrs() *ParamDescrs {
	return newParamDescrs(s.ctx, C.Z3_simplifier_get_param_descrs(s.ctx.ptr, s.ptr))
}

// GetSimplifierDescr returns a description of the simplifier with the given name.
func (c *Context) GetSimplifierDescr(name string) string {
	cName := C.CString(name)
	defer C.free(unsafe.Pointer(cName))
	return C.GoString(C.Z3_simplifier_get_descr(c.ptr, cName))
}