mirror of https://github.com/Z3Prover/z3 synced 2026-04-13 15:56:27 +00:00

Apply qf-s-benchmark fix: replace ZIPT/dotnet workflow with seq vs nseq only (#9266)

* Apply qf-s-benchmark fix from agentics/qf-s-benchmark.md: remove ZIPT/dotnet dependency

Agent-Logs-Url: https://github.com/Z3Prover/z3/sessions/c36bada5-c222-4b97-99c4-08392955b32d

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

* Update qf-s-benchmark title prefix and note to QF_S Benchmark

Agent-Logs-Url: https://github.com/Z3Prover/z3/sessions/c36bada5-c222-4b97-99c4-08392955b32d

Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: NikolajBjorner <3085284+NikolajBjorner@users.noreply.github.com>
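The workflow compares Z3's `seq` and `nseq` string solvers on the QF_S suite and posts the results as a discussion. A sketch of how per-benchmark results might be aggregated for such a post; the record layout (`solver`, `status`, `time`) is a hypothetical illustration, not the workflow's actual format:

```python
# Sketch: summarize per-benchmark results for two solver configurations.
# The result records below are hypothetical examples, not workflow output.

def summarize(results):
    """Count outcomes and accumulate total solve time per solver."""
    summary = {}
    for r in results:
        s = summary.setdefault(
            r["solver"], {"sat": 0, "unsat": 0, "unknown": 0, "time": 0.0}
        )
        s[r["status"]] += 1
        s["time"] += r["time"]
    return summary

results = [
    {"solver": "seq", "status": "sat", "time": 0.12},
    {"solver": "seq", "status": "unknown", "time": 5.00},
    {"solver": "nseq", "status": "sat", "time": 0.08},
    {"solver": "nseq", "status": "sat", "time": 0.30},
]
summary = summarize(results)
print(summary["seq"], summary["nseq"])
```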
This commit is contained in:
Copilot 2026-04-10 14:30:44 -07:00 committed by GitHub
parent 58ad1f0918
commit 9c81571eb8
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
3 changed files with 570 additions and 570 deletions


@@ -30,16 +30,6 @@
"version": "v7.0.0",
"sha": "bbbca2ddaa5d8feaa63e36b76fdaad77386f024f"
},
"github/gh-aw-actions/setup-cli@v0.65.4": {
"repo": "github/gh-aw-actions/setup-cli",
"version": "v0.65.4",
"sha": "934698b44320d87a7a9196339f90293f10bd2247"
},
"github/gh-aw-actions/setup@v0.65.4": {
"repo": "github/gh-aw-actions/setup",
"version": "v0.65.4",
"sha": "934698b44320d87a7a9196339f90293f10bd2247"
},
"github/gh-aw/actions/setup@v0.65.4": {
"repo": "github/gh-aw/actions/setup",
"version": "v0.65.4",


@@ -1,3 +1,5 @@
# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"e5e5c332eb206c2bde9e7e5cb0bb1babe7b1c50e0437a00b4093ddb8b5ab80cf","compiler_version":"v0.67.4","strict":true,"agent_id":"copilot"}
# gh-aw-manifest: {"version":1,"secrets":["COPILOT_GITHUB_TOKEN","GH_AW_GITHUB_MCP_SERVER_TOKEN","GH_AW_GITHUB_TOKEN","GITHUB_TOKEN"],"actions":[{"repo":"actions/checkout","sha":"de0fac2e4500dabe0009e67214ff5f5447ce83dd","version":"v6.0.2"},{"repo":"actions/download-artifact","sha":"3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c","version":"v8.0.1"},{"repo":"actions/github-script","sha":"ed597411d8f924073f98dfc5c65a23a2325f34cd","version":"v8"},{"repo":"actions/upload-artifact","sha":"bbbca2ddaa5d8feaa63e36b76fdaad77386f024f","version":"v7"},{"repo":"github/gh-aw-actions/setup","sha":"v0.67.4","version":"v0.67.4"}]}
# ___ _ _
# / _ \ | | (_)
# | |_| | __ _ ___ _ __ | |_ _ ___
@@ -12,7 +14,7 @@
# \ /\ / (_) | | | | ( | | | | (_) \ V V /\__ \
# \/ \/ \___/|_| |_|\_\|_| |_|\___/ \_/\_/ |___/
#
# This file was automatically generated by gh-aw (v0.65.4). DO NOT EDIT.
# This file was automatically generated by gh-aw (v0.67.4). DO NOT EDIT.
#
# To update this file, edit the corresponding .md file and run:
# gh aw compile
@@ -20,11 +22,22 @@
#
# For more information: https://github.github.com/gh-aw/introduction/overview/
#
# Run Z3 string solver benchmarks (seq vs nseq) on QF_S test suite from the c3 branch and post results as a GitHub discussion
# Benchmark Z3 seq vs nseq string solvers on QF_S test suite from the c3 branch and post results as a GitHub discussion
#
# gh-aw-metadata: {"schema_version":"v3","frontmatter_hash":"d7c341a4c4224962ddf5d76ae2e39b3fc7965a5d9a7899d0674877de090be242","compiler_version":"v0.65.4","strict":true,"agent_id":"copilot"}
# Secrets used:
# - COPILOT_GITHUB_TOKEN
# - GH_AW_GITHUB_MCP_SERVER_TOKEN
# - GH_AW_GITHUB_TOKEN
# - GITHUB_TOKEN
#
# Custom actions used:
# - actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
# - actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
# - actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
# - actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
# - github/gh-aw-actions/setup@v0.67.4
name: "ZIPT String Solver Benchmark"
name: "QF_S String Solver Benchmark"
"on":
schedule:
- cron: "0 0,12 * * *"
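The cron expression `0 0,12 * * *` above runs the benchmark at minute 0 of hours 0 and 12 UTC, i.e. twice daily. A minimal sketch of how the hour field's comma list is matched (a toy matcher, not a full cron parser):

```python
# Sketch: check which hours the cron hour field "0,12" matches.
# Handles only comma lists and "*", enough for this schedule.

def matches(field, value):
    if field == "*":
        return True
    return value in {int(p) for p in field.split(",")}

hours = [h for h in range(24) if matches("0,12", h)]
print(hours)  # → [0, 12]: the schedule fires at 00:00 and 12:00 UTC
```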
@@ -41,12 +54,13 @@ permissions: {}
concurrency:
group: "gh-aw-${{ github.workflow }}"
run-name: "ZIPT String Solver Benchmark"
run-name: "QF_S String Solver Benchmark"
jobs:
activation:
runs-on: ubuntu-slim
permissions:
actions: read
contents: read
outputs:
comment_id: ""
@@ -54,27 +68,30 @@ jobs:
lockdown_check_failed: ${{ steps.generate_aw_info.outputs.lockdown_check_failed == 'true' }}
model: ${{ steps.generate_aw_info.outputs.model }}
secret_verification_result: ${{ steps.validate-secret.outputs.verification_result }}
setup-trace-id: ${{ steps.setup.outputs.trace-id }}
steps:
- name: Setup Scripts
uses: github/gh-aw-actions/setup@934698b44320d87a7a9196339f90293f10bd2247 # v0.65.4
id: setup
uses: github/gh-aw-actions/setup@v0.67.4
with:
destination: ${{ runner.temp }}/gh-aw/actions
job-name: ${{ github.job }}
- name: Generate agentic run info
id: generate_aw_info
env:
GH_AW_INFO_ENGINE_ID: "copilot"
GH_AW_INFO_ENGINE_NAME: "GitHub Copilot CLI"
GH_AW_INFO_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || 'auto' }}
GH_AW_INFO_VERSION: "latest"
GH_AW_INFO_AGENT_VERSION: "latest"
GH_AW_INFO_CLI_VERSION: "v0.65.4"
GH_AW_INFO_WORKFLOW_NAME: "ZIPT String Solver Benchmark"
GH_AW_INFO_VERSION: "1.0.20"
GH_AW_INFO_AGENT_VERSION: "1.0.20"
GH_AW_INFO_CLI_VERSION: "v0.67.4"
GH_AW_INFO_WORKFLOW_NAME: "QF_S String Solver Benchmark"
GH_AW_INFO_EXPERIMENTAL: "false"
GH_AW_INFO_SUPPORTS_TOOLS_ALLOWLIST: "true"
GH_AW_INFO_STAGED: "false"
GH_AW_INFO_ALLOWED_DOMAINS: '["defaults"]'
GH_AW_INFO_FIREWALL_ENABLED: "true"
GH_AW_INFO_AWF_VERSION: "v0.25.6"
GH_AW_INFO_AWF_VERSION: "v0.25.18"
GH_AW_INFO_AWMG_VERSION: ""
GH_AW_INFO_FIREWALL_TYPE: "squid"
GH_AW_COMPILED_STRICT: "true"
@@ -87,7 +104,7 @@ jobs:
await main(core, context);
- name: Validate COPILOT_GITHUB_TOKEN secret
id: validate-secret
run: ${RUNNER_TEMP}/gh-aw/actions/validate_multi_secret.sh COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
run: bash "${RUNNER_TEMP}/gh-aw/actions/validate_multi_secret.sh" COPILOT_GITHUB_TOKEN 'GitHub Copilot CLI' https://github.github.com/gh-aw/reference/engines/#github-copilot-default
env:
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
- name: Checkout .github and .agents folders
@@ -99,10 +116,11 @@ jobs:
.agents
sparse-checkout-cone-mode: true
fetch-depth: 1
- name: Check workflow file timestamps
- name: Check workflow lock file
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_WORKFLOW_FILE: "qf-s-benchmark.lock.yml"
GH_AW_CONTEXT_WORKFLOW_REF: "${{ github.workflow_ref }}"
with:
script: |
const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs');
@@ -112,7 +130,7 @@ jobs:
- name: Check compile-agentic version
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_COMPILED_VERSION: "v0.65.4"
GH_AW_COMPILED_VERSION: "v0.67.4"
with:
script: |
const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs');
@@ -130,19 +148,20 @@ jobs:
GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }}
GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
GH_AW_GITHUB_WORKFLOW: ${{ github.workflow }}
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
# poutine:ignore untrusted_checkout_exec
run: |
bash ${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh
bash "${RUNNER_TEMP}/gh-aw/actions/create_prompt_first.sh"
{
cat << 'GH_AW_PROMPT_c81e7aa1da1942d6_EOF'
cat << 'GH_AW_PROMPT_c25676ba2ab40d85_EOF'
<system>
GH_AW_PROMPT_c81e7aa1da1942d6_EOF
GH_AW_PROMPT_c25676ba2ab40d85_EOF
cat "${RUNNER_TEMP}/gh-aw/prompts/xpia.md"
cat "${RUNNER_TEMP}/gh-aw/prompts/temp_folder_prompt.md"
cat "${RUNNER_TEMP}/gh-aw/prompts/markdown.md"
cat "${RUNNER_TEMP}/gh-aw/prompts/safe_outputs_prompt.md"
cat << 'GH_AW_PROMPT_c81e7aa1da1942d6_EOF'
cat << 'GH_AW_PROMPT_c25676ba2ab40d85_EOF'
<safe-output-tools>
Tools: create_discussion, missing_tool, missing_data, noop
</safe-output-tools>
@@ -174,19 +193,20 @@ jobs:
{{/if}}
</github-context>
GH_AW_PROMPT_c81e7aa1da1942d6_EOF
GH_AW_PROMPT_c25676ba2ab40d85_EOF
cat "${RUNNER_TEMP}/gh-aw/prompts/github_mcp_tools_with_safeoutputs_prompt.md"
cat << 'GH_AW_PROMPT_c81e7aa1da1942d6_EOF'
cat << 'GH_AW_PROMPT_c25676ba2ab40d85_EOF'
</system>
{{#runtime-import .github/workflows/qf-s-benchmark.md}}
GH_AW_PROMPT_c81e7aa1da1942d6_EOF
GH_AW_PROMPT_c25676ba2ab40d85_EOF
} > "$GH_AW_PROMPT"
- name: Interpolate variables and render templates
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
GH_AW_GITHUB_WORKFLOW: ${{ github.workflow }}
with:
script: |
const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs');
@@ -204,6 +224,7 @@ jobs:
GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: ${{ github.event.pull_request.number }}
GH_AW_GITHUB_REPOSITORY: ${{ github.repository }}
GH_AW_GITHUB_RUN_ID: ${{ github.run_id }}
GH_AW_GITHUB_WORKFLOW: ${{ github.workflow }}
GH_AW_GITHUB_WORKSPACE: ${{ github.workspace }}
with:
script: |
@@ -223,6 +244,7 @@ jobs:
GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER: process.env.GH_AW_GITHUB_EVENT_PULL_REQUEST_NUMBER,
GH_AW_GITHUB_REPOSITORY: process.env.GH_AW_GITHUB_REPOSITORY,
GH_AW_GITHUB_RUN_ID: process.env.GH_AW_GITHUB_RUN_ID,
GH_AW_GITHUB_WORKFLOW: process.env.GH_AW_GITHUB_WORKFLOW,
GH_AW_GITHUB_WORKSPACE: process.env.GH_AW_GITHUB_WORKSPACE
}
});
@@ -230,12 +252,12 @@ jobs:
env:
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
# poutine:ignore untrusted_checkout_exec
run: bash ${RUNNER_TEMP}/gh-aw/actions/validate_prompt_placeholders.sh
run: bash "${RUNNER_TEMP}/gh-aw/actions/validate_prompt_placeholders.sh"
- name: Print prompt
env:
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
# poutine:ignore untrusted_checkout_exec
run: bash ${RUNNER_TEMP}/gh-aw/actions/print_prompt_summary.sh
run: bash "${RUNNER_TEMP}/gh-aw/actions/print_prompt_summary.sh"
- name: Upload activation artifact
if: success()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
@@ -244,6 +266,8 @@ jobs:
path: |
/tmp/gh-aw/aw_info.json
/tmp/gh-aw/aw-prompts/prompt.txt
/tmp/gh-aw/github_rate_limits.jsonl
if-no-files-found: ignore
retention-days: 1
agent:
@@ -261,16 +285,21 @@ jobs:
GH_AW_WORKFLOW_ID_SANITIZED: qfsbenchmark
outputs:
checkout_pr_success: ${{ steps.checkout-pr.outputs.checkout_pr_success || 'true' }}
effective_tokens: ${{ steps.parse-mcp-gateway.outputs.effective_tokens }}
has_patch: ${{ steps.collect_output.outputs.has_patch }}
inference_access_error: ${{ steps.detect-inference-error.outputs.inference_access_error || 'false' }}
model: ${{ needs.activation.outputs.model }}
output: ${{ steps.collect_output.outputs.output }}
output_types: ${{ steps.collect_output.outputs.output_types }}
setup-trace-id: ${{ steps.setup.outputs.trace-id }}
steps:
- name: Setup Scripts
uses: github/gh-aw-actions/setup@934698b44320d87a7a9196339f90293f10bd2247 # v0.65.4
id: setup
uses: github/gh-aw-actions/setup@v0.67.4
with:
destination: ${{ runner.temp }}/gh-aw/actions
job-name: ${{ github.job }}
trace-id: ${{ needs.activation.outputs.setup-trace-id }}
- name: Set runtime paths
id: set-runtime-paths
run: |
@@ -278,9 +307,9 @@ jobs:
echo "GH_AW_SAFE_OUTPUTS_CONFIG_PATH=${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" >> "$GITHUB_OUTPUT"
echo "GH_AW_SAFE_OUTPUTS_TOOLS_PATH=${RUNNER_TEMP}/gh-aw/safeoutputs/tools.json" >> "$GITHUB_OUTPUT"
- name: Create gh-aw temp directory
run: bash ${RUNNER_TEMP}/gh-aw/actions/create_gh_aw_tmp_dir.sh
run: bash "${RUNNER_TEMP}/gh-aw/actions/create_gh_aw_tmp_dir.sh"
- name: Configure gh CLI for GitHub Enterprise
run: bash ${RUNNER_TEMP}/gh-aw/actions/configure_gh_for_ghe.sh
run: bash "${RUNNER_TEMP}/gh-aw/actions/configure_gh_for_ghe.sh"
env:
GH_TOKEN: ${{ github.token }}
- name: Checkout c3 branch
@@ -294,13 +323,14 @@ jobs:
env:
REPO_NAME: ${{ github.repository }}
SERVER_URL: ${{ github.server_url }}
GITHUB_TOKEN: ${{ github.token }}
run: |
git config --global user.email "github-actions[bot]@users.noreply.github.com"
git config --global user.name "github-actions[bot]"
git config --global am.keepcr true
# Re-authenticate git with GitHub token
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
git remote set-url origin "https://x-access-token:${GITHUB_TOKEN}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
echo "Git configured with standard GitHub Actions identity"
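The updated step above moves the token out of inline `${{ github.token }}` interpolation into a `GITHUB_TOKEN` environment variable and strips the scheme with `${SERVER_URL#https://}`. A minimal Python sketch of the same URL construction (the values here are illustrative; in the workflow they come from environment variables):

```python
# Sketch: build the authenticated remote URL the same way the shell step does.

def remote_url(server_url, repo, token):
    host = server_url.removeprefix("https://")  # mirrors ${SERVER_URL#https://}
    return f"https://x-access-token:{token}@{host}/{repo}.git"

url = remote_url("https://github.com", "Z3Prover/z3", "TOKEN")
print(url)  # → https://x-access-token:TOKEN@github.com/Z3Prover/z3.git
```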
- name: Checkout PR branch
id: checkout-pr
@@ -317,9 +347,11 @@ jobs:
const { main } = require('${{ runner.temp }}/gh-aw/actions/checkout_pr_branch.cjs');
await main();
- name: Install GitHub Copilot CLI
run: ${RUNNER_TEMP}/gh-aw/actions/install_copilot_cli.sh latest
run: bash "${RUNNER_TEMP}/gh-aw/actions/install_copilot_cli.sh" 1.0.20
env:
GH_HOST: github.com
- name: Install AWF binary
run: bash ${RUNNER_TEMP}/gh-aw/actions/install_awf_binary.sh v0.25.6
run: bash "${RUNNER_TEMP}/gh-aw/actions/install_awf_binary.sh" v0.25.18
- name: Determine automatic lockdown mode for GitHub MCP Server
id: determine-automatic-lockdown
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
@@ -331,114 +363,134 @@ jobs:
const determineAutomaticLockdown = require('${{ runner.temp }}/gh-aw/actions/determine_automatic_lockdown.cjs');
await determineAutomaticLockdown(github, context, core);
- name: Download container images
run: bash ${RUNNER_TEMP}/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.25.6 ghcr.io/github/gh-aw-firewall/api-proxy:0.25.6 ghcr.io/github/gh-aw-firewall/squid:0.25.6 ghcr.io/github/gh-aw-mcpg:v0.2.11 ghcr.io/github/github-mcp-server:v0.32.0 node:lts-alpine
run: bash "${RUNNER_TEMP}/gh-aw/actions/download_docker_images.sh" ghcr.io/github/gh-aw-firewall/agent:0.25.18 ghcr.io/github/gh-aw-firewall/api-proxy:0.25.18 ghcr.io/github/gh-aw-firewall/squid:0.25.18 ghcr.io/github/gh-aw-mcpg:v0.2.17 ghcr.io/github/github-mcp-server:v0.32.0 node:lts-alpine
- name: Write Safe Outputs Config
run: |
mkdir -p ${RUNNER_TEMP}/gh-aw/safeoutputs
mkdir -p "${RUNNER_TEMP}/gh-aw/safeoutputs"
mkdir -p /tmp/gh-aw/safeoutputs
mkdir -p /tmp/gh-aw/mcp-logs/safeoutputs
cat > ${RUNNER_TEMP}/gh-aw/safeoutputs/config.json << 'GH_AW_SAFE_OUTPUTS_CONFIG_2df51426bf24e365_EOF'
{"create_discussion":{"category":"agentic workflows","close_older_discussions":true,"expires":168,"fallback_to_issue":true,"max":1,"title_prefix":"[ZIPT Benchmark] "},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"false"}}
GH_AW_SAFE_OUTPUTS_CONFIG_2df51426bf24e365_EOF
cat > "${RUNNER_TEMP}/gh-aw/safeoutputs/config.json" << 'GH_AW_SAFE_OUTPUTS_CONFIG_ef274886e4650c6c_EOF'
{"create_discussion":{"category":"agentic workflows","close_older_discussions":true,"expires":168,"fallback_to_issue":true,"max":1,"title_prefix":"[QF_S Benchmark] "},"create_report_incomplete_issue":{},"missing_data":{},"missing_tool":{},"noop":{"max":1,"report-as-issue":"false"},"report_incomplete":{}}
GH_AW_SAFE_OUTPUTS_CONFIG_ef274886e4650c6c_EOF
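The safe-outputs config above caps the workflow at one discussion and prefixes its title with "[QF_S Benchmark] ". A sketch of how such constraints could be applied; the `apply` helper and request records are hypothetical, only the `max` and `title_prefix` values are copied from the config:

```python
# Sketch: apply the title_prefix and max constraints from the safe-outputs config.

config = {"create_discussion": {"max": 1, "title_prefix": "[QF_S Benchmark] "}}

def apply(requests, cfg):
    c = cfg["create_discussion"]
    out = []
    for req in requests[: c["max"]]:  # drop requests beyond the allowed maximum
        title = req["title"]
        if not title.startswith(c["title_prefix"]):
            title = c["title_prefix"] + title
        out.append({**req, "title": title})
    return out

posts = apply([{"title": "seq vs nseq results"}, {"title": "extra"}], config)
print(len(posts), posts[0]["title"])
```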
- name: Write Safe Outputs Tools
run: |
cat > ${RUNNER_TEMP}/gh-aw/safeoutputs/tools_meta.json << 'GH_AW_SAFE_OUTPUTS_TOOLS_META_919533c9cc87fea1_EOF'
{
"description_suffixes": {
"create_discussion": " CONSTRAINTS: Maximum 1 discussion(s) can be created. Title will be prefixed with \"[ZIPT Benchmark] \". Discussions will be created in category \"agentic workflows\"."
},
"repo_params": {},
"dynamic_tools": []
}
GH_AW_SAFE_OUTPUTS_TOOLS_META_919533c9cc87fea1_EOF
cat > ${RUNNER_TEMP}/gh-aw/safeoutputs/validation.json << 'GH_AW_SAFE_OUTPUTS_VALIDATION_906347b49a85a96b_EOF'
{
"create_discussion": {
"defaultMax": 1,
"fields": {
"body": {
"required": true,
"type": "string",
"sanitize": true,
"maxLength": 65000
},
"category": {
"type": "string",
"sanitize": true,
"maxLength": 128
},
"repo": {
"type": "string",
"maxLength": 256
},
"title": {
"required": true,
"type": "string",
"sanitize": true,
"maxLength": 128
env:
GH_AW_TOOLS_META_JSON: |
{
"description_suffixes": {
"create_discussion": " CONSTRAINTS: Maximum 1 discussion(s) can be created. Title will be prefixed with \"[QF_S Benchmark] \". Discussions will be created in category \"agentic workflows\"."
},
"repo_params": {},
"dynamic_tools": []
}
GH_AW_VALIDATION_JSON: |
{
"create_discussion": {
"defaultMax": 1,
"fields": {
"body": {
"required": true,
"type": "string",
"sanitize": true,
"maxLength": 65000
},
"category": {
"type": "string",
"sanitize": true,
"maxLength": 128
},
"repo": {
"type": "string",
"maxLength": 256
},
"title": {
"required": true,
"type": "string",
"sanitize": true,
"maxLength": 128
}
}
}
},
"missing_data": {
"defaultMax": 20,
"fields": {
"alternatives": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"context": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"data_type": {
"type": "string",
"sanitize": true,
"maxLength": 128
},
"reason": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"missing_data": {
"defaultMax": 20,
"fields": {
"alternatives": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"context": {
"type": "string",
"sanitize": true,
"maxLength": 256
},
"data_type": {
"type": "string",
"sanitize": true,
"maxLength": 128
},
"reason": {
"type": "string",
"sanitize": true,
"maxLength": 256
}
}
}
},
"missing_tool": {
"defaultMax": 20,
"fields": {
"alternatives": {
"type": "string",
"sanitize": true,
"maxLength": 512
},
"reason": {
"required": true,
"type": "string",
"sanitize": true,
"maxLength": 256
},
"tool": {
"type": "string",
"sanitize": true,
"maxLength": 128
},
"missing_tool": {
"defaultMax": 20,
"fields": {
"alternatives": {
"type": "string",
"sanitize": true,
"maxLength": 512
},
"reason": {
"required": true,
"type": "string",
"sanitize": true,
"maxLength": 256
},
"tool": {
"type": "string",
"sanitize": true,
"maxLength": 128
}
}
}
},
"noop": {
"defaultMax": 1,
"fields": {
"message": {
"required": true,
"type": "string",
"sanitize": true,
"maxLength": 65000
},
"noop": {
"defaultMax": 1,
"fields": {
"message": {
"required": true,
"type": "string",
"sanitize": true,
"maxLength": 65000
}
}
},
"report_incomplete": {
"defaultMax": 5,
"fields": {
"details": {
"type": "string",
"sanitize": true,
"maxLength": 65000
},
"reason": {
"required": true,
"type": "string",
"sanitize": true,
"maxLength": 1024
}
}
}
}
}
GH_AW_SAFE_OUTPUTS_VALIDATION_906347b49a85a96b_EOF
node ${RUNNER_TEMP}/gh-aw/actions/generate_safe_outputs_tools.cjs
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('${{ runner.temp }}/gh-aw/actions/generate_safe_outputs_tools.cjs');
await main();
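The validation JSON above marks each output field as required, sanitized, and/or length-limited. A sketch of enforcing the required/maxLength subset of those rules (the `validate` helper is hypothetical; gh-aw's actual validator also sanitizes content):

```python
# Sketch: enforce the required/maxLength rules from the validation JSON.
# Only the create_discussion subset of the schema is shown.

rules = {
    "title": {"required": True, "maxLength": 128},
    "body": {"required": True, "maxLength": 65000},
    "category": {"maxLength": 128},
}

def validate(payload, rules):
    errors = []
    for name, rule in rules.items():
        value = payload.get(name)
        if rule.get("required") and not value:
            errors.append(f"{name}: required")
        elif value is not None and len(value) > rule["maxLength"]:
            errors.append(f"{name}: exceeds {rule['maxLength']} chars")
    return errors

print(validate({"title": "x" * 200, "body": "ok"}, rules))
```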
- name: Generate Safe Outputs MCP Server Config
id: safe-outputs-config
run: |
@@ -461,6 +513,7 @@ jobs:
id: safe-outputs-start
env:
DEBUG: '*'
GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }}
GH_AW_SAFE_OUTPUTS_PORT: ${{ steps.safe-outputs-config.outputs.safe_outputs_port }}
GH_AW_SAFE_OUTPUTS_API_KEY: ${{ steps.safe-outputs-config.outputs.safe_outputs_api_key }}
GH_AW_SAFE_OUTPUTS_TOOLS_PATH: ${{ runner.temp }}/gh-aw/safeoutputs/tools.json
@@ -469,13 +522,14 @@ jobs:
run: |
# Environment variables are set above to prevent template injection
export DEBUG
export GH_AW_SAFE_OUTPUTS
export GH_AW_SAFE_OUTPUTS_PORT
export GH_AW_SAFE_OUTPUTS_API_KEY
export GH_AW_SAFE_OUTPUTS_TOOLS_PATH
export GH_AW_SAFE_OUTPUTS_CONFIG_PATH
export GH_AW_MCP_LOG_DIR
bash ${RUNNER_TEMP}/gh-aw/actions/start_safe_outputs_server.sh
bash "${RUNNER_TEMP}/gh-aw/actions/start_safe_outputs_server.sh"
- name: Start MCP Gateway
id: start-mcp-gateway
@@ -502,10 +556,10 @@ jobs:
export DEBUG="*"
export GH_AW_ENGINE="copilot"
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_GUARD_MIN_INTEGRITY -e GITHUB_MCP_GUARD_REPOS -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.2.11'
export MCP_GATEWAY_DOCKER_COMMAND='docker run -i --rm --network host -v /var/run/docker.sock:/var/run/docker.sock -e MCP_GATEWAY_PORT -e MCP_GATEWAY_DOMAIN -e MCP_GATEWAY_API_KEY -e MCP_GATEWAY_PAYLOAD_DIR -e MCP_GATEWAY_PAYLOAD_SIZE_THRESHOLD -e DEBUG -e MCP_GATEWAY_LOG_DIR -e GH_AW_MCP_LOG_DIR -e GH_AW_SAFE_OUTPUTS -e GH_AW_SAFE_OUTPUTS_CONFIG_PATH -e GH_AW_SAFE_OUTPUTS_TOOLS_PATH -e GH_AW_ASSETS_BRANCH -e GH_AW_ASSETS_MAX_SIZE_KB -e GH_AW_ASSETS_ALLOWED_EXTS -e DEFAULT_BRANCH -e GITHUB_MCP_SERVER_TOKEN -e GITHUB_MCP_GUARD_MIN_INTEGRITY -e GITHUB_MCP_GUARD_REPOS -e GITHUB_REPOSITORY -e GITHUB_SERVER_URL -e GITHUB_SHA -e GITHUB_WORKSPACE -e GITHUB_TOKEN -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e GITHUB_JOB -e GITHUB_ACTION -e GITHUB_EVENT_NAME -e GITHUB_EVENT_PATH -e GITHUB_ACTOR -e GITHUB_ACTOR_ID -e GITHUB_TRIGGERING_ACTOR -e GITHUB_WORKFLOW -e GITHUB_WORKFLOW_REF -e GITHUB_WORKFLOW_SHA -e GITHUB_REF -e GITHUB_REF_NAME -e GITHUB_REF_TYPE -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GH_AW_SAFE_OUTPUTS_PORT -e GH_AW_SAFE_OUTPUTS_API_KEY -v /tmp/gh-aw/mcp-payloads:/tmp/gh-aw/mcp-payloads:rw -v /opt:/opt:ro -v /tmp:/tmp:rw -v '"${GITHUB_WORKSPACE}"':'"${GITHUB_WORKSPACE}"':rw ghcr.io/github/gh-aw-mcpg:v0.2.17'
mkdir -p /home/runner/.copilot
cat << GH_AW_MCP_CONFIG_515b057f9cf50300_EOF | bash ${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.sh
cat << GH_AW_MCP_CONFIG_c990404fbee0ddc5_EOF | bash "${RUNNER_TEMP}/gh-aw/actions/start_mcp_gateway.sh"
{
"mcpServers": {
"github": {
@@ -546,7 +600,7 @@ jobs:
"payloadDir": "${MCP_GATEWAY_PAYLOAD_DIR}"
}
}
GH_AW_MCP_CONFIG_515b057f9cf50300_EOF
GH_AW_MCP_CONFIG_c990404fbee0ddc5_EOF
- name: Download activation artifact
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c # v8.0.1
with:
@@ -554,7 +608,7 @@ jobs:
path: /tmp/gh-aw
- name: Clean git credentials
continue-on-error: true
run: bash ${RUNNER_TEMP}/gh-aw/actions/clean_git_credentials.sh
run: bash "${RUNNER_TEMP}/gh-aw/actions/clean_git_credentials.sh"
- name: Execute GitHub Copilot CLI
id: agentic_execution
# Copilot CLI tool arguments (sorted):
@@ -563,8 +617,8 @@ jobs:
set -o pipefail
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --container-workdir "${GITHUB_WORKSPACE}" --mount "${RUNNER_TEMP}/gh-aw:${RUNNER_TEMP}/gh-aw:ro" --mount "${RUNNER_TEMP}/gh-aw:/host${RUNNER_TEMP}/gh-aw:ro" --env-all --exclude-env COPILOT_GITHUB_TOKEN --exclude-env GITHUB_MCP_SERVER_TOKEN --exclude-env MCP_GATEWAY_API_KEY --allow-domains api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --audit-dir /tmp/gh-aw/sandbox/firewall/audit --enable-host-access --image-tag 0.25.6 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-all-tools --allow-all-paths --add-dir "${GITHUB_WORKSPACE}" --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
sudo -E awf --container-workdir "${GITHUB_WORKSPACE}" --mount "${RUNNER_TEMP}/gh-aw:${RUNNER_TEMP}/gh-aw:ro" --mount "${RUNNER_TEMP}/gh-aw:/host${RUNNER_TEMP}/gh-aw:ro" --env-all --exclude-env COPILOT_GITHUB_TOKEN --exclude-env GITHUB_MCP_SERVER_TOKEN --exclude-env MCP_GATEWAY_API_KEY --allow-domains api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --audit-dir /tmp/gh-aw/sandbox/firewall/audit --enable-host-access --image-tag 0.25.18 --skip-pull --enable-api-proxy \
-- /bin/bash -c 'node ${RUNNER_TEMP}/gh-aw/actions/copilot_driver.cjs /usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-all-tools --allow-all-paths --add-dir "${GITHUB_WORKSPACE}" --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/agent-stdio.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
@@ -573,9 +627,10 @@ jobs:
GH_AW_PHASE: agent
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_SAFE_OUTPUTS: ${{ steps.set-runtime-paths.outputs.GH_AW_SAFE_OUTPUTS }}
GH_AW_VERSION: v0.65.4
GH_AW_VERSION: v0.67.4
GITHUB_API_URL: ${{ github.api_url }}
GITHUB_AW: true
GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_MCP_SERVER_TOKEN: ${{ secrets.GH_AW_GITHUB_MCP_SERVER_TOKEN || secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
GITHUB_REF_NAME: ${{ github.ref_name }}
@@ -591,36 +646,24 @@ jobs:
id: detect-inference-error
if: always()
continue-on-error: true
run: bash ${RUNNER_TEMP}/gh-aw/actions/detect_inference_access_error.sh
run: bash "${RUNNER_TEMP}/gh-aw/actions/detect_inference_access_error.sh"
- name: Configure Git credentials
env:
REPO_NAME: ${{ github.repository }}
SERVER_URL: ${{ github.server_url }}
GITHUB_TOKEN: ${{ github.token }}
run: |
git config --global user.email "github-actions[bot]@users.noreply.github.com"
git config --global user.name "github-actions[bot]"
git config --global am.keepcr true
# Re-authenticate git with GitHub token
SERVER_URL_STRIPPED="${SERVER_URL#https://}"
git remote set-url origin "https://x-access-token:${{ github.token }}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
git remote set-url origin "https://x-access-token:${GITHUB_TOKEN}@${SERVER_URL_STRIPPED}/${REPO_NAME}.git"
echo "Git configured with standard GitHub Actions identity"
- name: Copy Copilot session state files to logs
if: always()
continue-on-error: true
run: |
# Copy Copilot session state files to logs folder for artifact collection
# This ensures they are in /tmp/gh-aw/ where secret redaction can scan them
SESSION_STATE_DIR="$HOME/.copilot/session-state"
LOGS_DIR="/tmp/gh-aw/sandbox/agent/logs"
if [ -d "$SESSION_STATE_DIR" ]; then
echo "Copying Copilot session state files from $SESSION_STATE_DIR to $LOGS_DIR"
mkdir -p "$LOGS_DIR"
cp -v "$SESSION_STATE_DIR"/*.jsonl "$LOGS_DIR/" 2>/dev/null || true
echo "Session state files copied successfully"
else
echo "No session-state directory found at $SESSION_STATE_DIR"
fi
run: bash "${RUNNER_TEMP}/gh-aw/actions/copy_copilot_session_state.sh"
- name: Stop MCP Gateway
if: always()
continue-on-error: true
@@ -629,7 +672,7 @@ jobs:
MCP_GATEWAY_API_KEY: ${{ steps.start-mcp-gateway.outputs.gateway-api-key }}
GATEWAY_PID: ${{ steps.start-mcp-gateway.outputs.gateway-pid }}
run: |
bash ${RUNNER_TEMP}/gh-aw/actions/stop_mcp_gateway.sh "$GATEWAY_PID"
bash "${RUNNER_TEMP}/gh-aw/actions/stop_mcp_gateway.sh" "$GATEWAY_PID"
- name: Redact secrets in logs
if: always()
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
@@ -647,7 +690,7 @@ jobs:
SECRET_GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Append agent step summary
if: always()
run: bash ${RUNNER_TEMP}/gh-aw/actions/append_agent_step_summary.sh
run: bash "${RUNNER_TEMP}/gh-aw/actions/append_agent_step_summary.sh"
- name: Copy Safe Outputs
if: always()
env:
@@ -683,6 +726,7 @@ jobs:
await main();
- name: Parse MCP Gateway logs for step summary
if: always()
id: parse-mcp-gateway
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
@@ -705,6 +749,16 @@ jobs:
else
echo 'AWF binary not installed, skipping firewall log summary'
fi
- name: Parse token usage for step summary
if: always()
continue-on-error: true
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
with:
script: |
const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('${{ runner.temp }}/gh-aw/actions/parse_token_usage.cjs');
await main();
- name: Write agent output placeholder if missing
if: always()
run: |
@@ -722,8 +776,10 @@ jobs:
/tmp/gh-aw/sandbox/agent/logs/
/tmp/gh-aw/redacted-urls.log
/tmp/gh-aw/mcp-logs/
/tmp/gh-aw/agent_usage.json
/tmp/gh-aw/agent-stdio.log
/tmp/gh-aw/agent/
/tmp/gh-aw/github_rate_limits.jsonl
/tmp/gh-aw/safeoutputs.jsonl
/tmp/gh-aw/agent_output.json
/tmp/gh-aw/aw-*.patch
@@ -756,14 +812,18 @@ jobs:
group: "gh-aw-conclusion-qf-s-benchmark"
cancel-in-progress: false
outputs:
incomplete_count: ${{ steps.report_incomplete.outputs.incomplete_count }}
noop_message: ${{ steps.noop.outputs.noop_message }}
tools_reported: ${{ steps.missing_tool.outputs.tools_reported }}
total_count: ${{ steps.missing_tool.outputs.total_count }}
steps:
- name: Setup Scripts
uses: github/gh-aw-actions/setup@934698b44320d87a7a9196339f90293f10bd2247 # v0.65.4
id: setup
uses: github/gh-aw-actions/setup@v0.67.4
with:
destination: ${{ runner.temp }}/gh-aw/actions
job-name: ${{ github.job }}
trace-id: ${{ needs.activation.outputs.setup-trace-id }}
- name: Download agent output artifact
id: download-agent-output
continue-on-error: true
@@ -784,22 +844,25 @@ jobs:
env:
GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }}
GH_AW_NOOP_MAX: "1"
GH_AW_WORKFLOW_NAME: "ZIPT String Solver Benchmark"
GH_AW_WORKFLOW_NAME: "QF_S String Solver Benchmark"
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_NOOP_REPORT_AS_ISSUE: "false"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
script: |
const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('${{ runner.temp }}/gh-aw/actions/noop.cjs');
const { main } = require('${{ runner.temp }}/gh-aw/actions/handle_noop_message.cjs');
await main();
- name: Record Missing Tool
- name: Record missing tool
id: missing_tool
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }}
GH_AW_MISSING_TOOL_CREATE_ISSUE: "true"
GH_AW_MISSING_TOOL_TITLE_PREFIX: "[missing tool]"
GH_AW_WORKFLOW_NAME: "ZIPT String Solver Benchmark"
GH_AW_WORKFLOW_NAME: "QF_S String Solver Benchmark"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
script: |
@@ -807,13 +870,27 @@ jobs:
setupGlobals(core, github, context, exec, io);
const { main } = require('${{ runner.temp }}/gh-aw/actions/missing_tool.cjs');
await main();
- name: Handle Agent Failure
- name: Record incomplete
id: report_incomplete
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }}
GH_AW_REPORT_INCOMPLETE_CREATE_ISSUE: "true"
GH_AW_WORKFLOW_NAME: "QF_S String Solver Benchmark"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
script: |
const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('${{ runner.temp }}/gh-aw/actions/report_incomplete_handler.cjs');
await main();
- name: Handle agent failure
id: handle_agent_failure
if: always()
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }}
GH_AW_WORKFLOW_NAME: "ZIPT String Solver Benchmark"
GH_AW_WORKFLOW_NAME: "QF_S String Solver Benchmark"
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_WORKFLOW_ID: "qf-s-benchmark"
@@ -834,37 +911,27 @@ jobs:
setupGlobals(core, github, context, exec, io);
const { main } = require('${{ runner.temp }}/gh-aw/actions/handle_agent_failure.cjs');
await main();
- name: Handle No-Op Message
id: handle_noop_message
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
GH_AW_AGENT_OUTPUT: ${{ steps.setup-agent-output-env.outputs.GH_AW_AGENT_OUTPUT }}
GH_AW_WORKFLOW_NAME: "ZIPT String Solver Benchmark"
GH_AW_RUN_URL: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
GH_AW_AGENT_CONCLUSION: ${{ needs.agent.result }}
GH_AW_NOOP_MESSAGE: ${{ steps.noop.outputs.noop_message }}
GH_AW_NOOP_REPORT_AS_ISSUE: "false"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
script: |
const { setupGlobals } = require('${{ runner.temp }}/gh-aw/actions/setup_globals.cjs');
setupGlobals(core, github, context, exec, io);
const { main } = require('${{ runner.temp }}/gh-aw/actions/handle_noop_message.cjs');
await main();
detection:
needs: agent
needs:
- activation
- agent
if: >
always() && needs.agent.result != 'skipped' && (needs.agent.outputs.output_types != '' || needs.agent.outputs.has_patch == 'true')
runs-on: ubuntu-latest
permissions:
contents: read
outputs:
detection_conclusion: ${{ steps.detection_conclusion.outputs.conclusion }}
detection_success: ${{ steps.detection_conclusion.outputs.success }}
steps:
- name: Setup Scripts
uses: github/gh-aw-actions/setup@934698b44320d87a7a9196339f90293f10bd2247 # v0.65.4
id: setup
uses: github/gh-aw-actions/setup@v0.67.4
with:
destination: ${{ runner.temp }}/gh-aw/actions
job-name: ${{ github.job }}
trace-id: ${{ needs.activation.outputs.setup-trace-id }}
- name: Download agent output artifact
id: download-agent-output
continue-on-error: true
@@ -879,9 +946,14 @@ jobs:
mkdir -p /tmp/gh-aw/
find "/tmp/gh-aw/" -type f -print
echo "GH_AW_AGENT_OUTPUT=/tmp/gh-aw/agent_output.json" >> "$GITHUB_OUTPUT"
- name: Checkout repository for patch context
if: needs.agent.outputs.has_patch == 'true'
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
with:
persist-credentials: false
# --- Threat Detection ---
- name: Download container images
run: bash ${RUNNER_TEMP}/gh-aw/actions/download_docker_images.sh ghcr.io/github/gh-aw-firewall/agent:0.25.6 ghcr.io/github/gh-aw-firewall/api-proxy:0.25.6 ghcr.io/github/gh-aw-firewall/squid:0.25.6
run: bash "${RUNNER_TEMP}/gh-aw/actions/download_docker_images.sh" ghcr.io/github/gh-aw-firewall/agent:0.25.18 ghcr.io/github/gh-aw-firewall/api-proxy:0.25.18 ghcr.io/github/gh-aw-firewall/squid:0.25.18
- name: Check if detection needed
id: detection_guard
if: always()
@@ -920,8 +992,8 @@ jobs:
if: always() && steps.detection_guard.outputs.run_detection == 'true'
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8
env:
WORKFLOW_NAME: "ZIPT String Solver Benchmark"
WORKFLOW_DESCRIPTION: "Run Z3 string solver benchmarks (seq vs nseq) on QF_S test suite from the c3 branch and post results as a GitHub discussion"
WORKFLOW_NAME: "QF_S String Solver Benchmark"
WORKFLOW_DESCRIPTION: "Benchmark Z3 seq vs nseq string solvers on QF_S test suite from the c3 branch and post results as a GitHub discussion"
HAS_PATCH: ${{ needs.agent.outputs.has_patch }}
with:
script: |
@@ -935,9 +1007,11 @@ jobs:
mkdir -p /tmp/gh-aw/threat-detection
touch /tmp/gh-aw/threat-detection/detection.log
- name: Install GitHub Copilot CLI
run: ${RUNNER_TEMP}/gh-aw/actions/install_copilot_cli.sh latest
run: bash "${RUNNER_TEMP}/gh-aw/actions/install_copilot_cli.sh" 1.0.20
env:
GH_HOST: github.com
- name: Install AWF binary
run: bash ${RUNNER_TEMP}/gh-aw/actions/install_awf_binary.sh v0.25.6
run: bash "${RUNNER_TEMP}/gh-aw/actions/install_awf_binary.sh" v0.25.18
- name: Execute GitHub Copilot CLI
if: always() && steps.detection_guard.outputs.run_detection == 'true'
id: detection_agentic_execution
@@ -947,17 +1021,18 @@ jobs:
set -o pipefail
touch /tmp/gh-aw/agent-step-summary.md
# shellcheck disable=SC1003
sudo -E awf --container-workdir "${GITHUB_WORKSPACE}" --mount "${RUNNER_TEMP}/gh-aw:${RUNNER_TEMP}/gh-aw:ro" --mount "${RUNNER_TEMP}/gh-aw:/host${RUNNER_TEMP}/gh-aw:ro" --env-all --exclude-env COPILOT_GITHUB_TOKEN --allow-domains api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,telemetry.enterprise.githubcopilot.com --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --audit-dir /tmp/gh-aw/sandbox/firewall/audit --enable-host-access --image-tag 0.25.6 --skip-pull --enable-api-proxy \
-- /bin/bash -c '/usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-all-tools --add-dir "${GITHUB_WORKSPACE}" --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
sudo -E awf --container-workdir "${GITHUB_WORKSPACE}" --mount "${RUNNER_TEMP}/gh-aw:${RUNNER_TEMP}/gh-aw:ro" --mount "${RUNNER_TEMP}/gh-aw:/host${RUNNER_TEMP}/gh-aw:ro" --env-all --exclude-env COPILOT_GITHUB_TOKEN --allow-domains api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,github.com,host.docker.internal,telemetry.enterprise.githubcopilot.com --log-level info --proxy-logs-dir /tmp/gh-aw/sandbox/firewall/logs --audit-dir /tmp/gh-aw/sandbox/firewall/audit --enable-host-access --image-tag 0.25.18 --skip-pull --enable-api-proxy \
-- /bin/bash -c 'node ${RUNNER_TEMP}/gh-aw/actions/copilot_driver.cjs /usr/local/bin/copilot --add-dir /tmp/gh-aw/ --log-level all --log-dir /tmp/gh-aw/sandbox/agent/logs/ --disable-builtin-mcps --allow-all-tools --add-dir "${GITHUB_WORKSPACE}" --prompt "$(cat /tmp/gh-aw/aw-prompts/prompt.txt)"' 2>&1 | tee -a /tmp/gh-aw/threat-detection/detection.log
env:
COPILOT_AGENT_RUNNER_TYPE: STANDALONE
COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }}
COPILOT_MODEL: ${{ vars.GH_AW_MODEL_DETECTION_COPILOT || '' }}
GH_AW_PHASE: detection
GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt
GH_AW_VERSION: v0.65.4
GH_AW_VERSION: v0.67.4
GITHUB_API_URL: ${{ github.api_url }}
GITHUB_AW: true
GITHUB_COPILOT_INTEGRATION_ID: agentic-workflows
GITHUB_HEAD_REF: ${{ github.head_ref }}
GITHUB_REF_NAME: ${{ github.ref_name }}
GITHUB_SERVER_URL: ${{ github.server_url }}
@@ -990,6 +1065,7 @@ jobs:
safe_outputs:
needs:
- activation
- agent
- detection
if: (!cancelled()) && needs.agent.result != 'skipped' && needs.detection.result == 'success'
@@ -1001,10 +1077,11 @@ jobs:
timeout-minutes: 15
env:
GH_AW_CALLER_WORKFLOW_ID: "${{ github.repository }}/qf-s-benchmark"
GH_AW_EFFECTIVE_TOKENS: ${{ needs.agent.outputs.effective_tokens }}
GH_AW_ENGINE_ID: "copilot"
GH_AW_ENGINE_MODEL: ${{ needs.agent.outputs.model }}
GH_AW_WORKFLOW_ID: "qf-s-benchmark"
GH_AW_WORKFLOW_NAME: "ZIPT String Solver Benchmark"
GH_AW_WORKFLOW_NAME: "QF_S String Solver Benchmark"
outputs:
code_push_failure_count: ${{ steps.process_safe_outputs.outputs.code_push_failure_count }}
code_push_failure_errors: ${{ steps.process_safe_outputs.outputs.code_push_failure_errors }}
@@ -1014,9 +1091,12 @@ jobs:
process_safe_outputs_temporary_id_map: ${{ steps.process_safe_outputs.outputs.temporary_id_map }}
steps:
- name: Setup Scripts
uses: github/gh-aw-actions/setup@934698b44320d87a7a9196339f90293f10bd2247 # v0.65.4
id: setup
uses: github/gh-aw-actions/setup@v0.67.4
with:
destination: ${{ runner.temp }}/gh-aw/actions
job-name: ${{ github.job }}
trace-id: ${{ needs.activation.outputs.setup-trace-id }}
- name: Download agent output artifact
id: download-agent-output
continue-on-error: true
@@ -1048,7 +1128,7 @@ jobs:
GH_AW_ALLOWED_DOMAINS: "api.business.githubcopilot.com,api.enterprise.githubcopilot.com,api.github.com,api.githubcopilot.com,api.individual.githubcopilot.com,api.snapcraft.io,archive.ubuntu.com,azure.archive.ubuntu.com,crl.geotrust.com,crl.globalsign.com,crl.identrust.com,crl.sectigo.com,crl.thawte.com,crl.usertrust.com,crl.verisign.com,crl3.digicert.com,crl4.digicert.com,crls.ssl.com,github.com,host.docker.internal,json-schema.org,json.schemastore.org,keyserver.ubuntu.com,ocsp.digicert.com,ocsp.geotrust.com,ocsp.globalsign.com,ocsp.identrust.com,ocsp.sectigo.com,ocsp.ssl.com,ocsp.thawte.com,ocsp.usertrust.com,ocsp.verisign.com,packagecloud.io,packages.cloud.google.com,packages.microsoft.com,ppa.launchpad.net,raw.githubusercontent.com,registry.npmjs.org,s.symcb.com,s.symcd.com,security.ubuntu.com,telemetry.enterprise.githubcopilot.com,ts-crl.ws.symantec.com,ts-ocsp.ws.symantec.com,www.googleapis.com"
GITHUB_SERVER_URL: ${{ github.server_url }}
GITHUB_API_URL: ${{ github.api_url }}
GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_discussion\":{\"category\":\"agentic workflows\",\"close_older_discussions\":true,\"expires\":168,\"fallback_to_issue\":true,\"max\":1,\"title_prefix\":\"[ZIPT Benchmark] \"},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"false\"}}"
GH_AW_SAFE_OUTPUTS_HANDLER_CONFIG: "{\"create_discussion\":{\"category\":\"agentic workflows\",\"close_older_discussions\":true,\"expires\":168,\"fallback_to_issue\":true,\"max\":1,\"title_prefix\":\"[QF_S Benchmark] \"},\"create_report_incomplete_issue\":{},\"missing_data\":{},\"missing_tool\":{},\"noop\":{\"max\":1,\"report-as-issue\":\"false\"},\"report_incomplete\":{}}"
with:
github-token: ${{ secrets.GH_AW_GITHUB_TOKEN || secrets.GITHUB_TOKEN }}
script: |
@@ -1056,11 +1136,11 @@ jobs:
setupGlobals(core, github, context, exec, io);
const { main } = require('${{ runner.temp }}/gh-aw/actions/safe_output_handler_manager.cjs');
await main();
- name: Upload Safe Output Items
- name: Upload Safe Outputs Items
if: always()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7
with:
name: safe-output-items
name: safe-outputs-items
path: /tmp/gh-aw/safe-output-items.jsonl
if-no-files-found: ignore
@@ -1,5 +1,5 @@
---
description: Run Z3 string solver benchmarks (seq vs nseq) on QF_S test suite from the c3 branch and post results as a GitHub discussion
description: Benchmark Z3 seq vs nseq string solvers on QF_S test suite from the c3 branch and post results as a GitHub discussion
on:
schedule:
@@ -17,7 +17,7 @@ tools:
safe-outputs:
create-discussion:
title-prefix: "[ZIPT Benchmark] "
title-prefix: "[QF_S Benchmark] "
category: "Agentic Workflows"
close-older-discussions: true
missing-tool:
@@ -37,437 +37,367 @@ steps:
---
# QF_S String Solver Benchmark
# ZIPT String Solver Benchmark
## Job Description
You are an AI agent that benchmarks Z3 string solvers (`seq` and `nseq`) and the standalone ZIPT solver on QF_S SMT-LIB2 benchmarks from the `c3` branch, and publishes a summary report as a GitHub discussion.
Your name is ${{ github.workflow }}. You are an expert performance analyst for the Z3 theorem prover, specializing in the string/sequence theory. Your task is to benchmark the `seq` solver (classical string theory) against the `nseq` solver (ZIPT-based string theory) on the QF_S test suite from the `c3` branch, and post a structured report as a GitHub Discussion.
## Context
The workspace already contains the `c3` branch (checked out by the preceding workflow step).
- **Repository**: ${{ github.repository }}
- **Workspace**: ${{ github.workspace }}
- **Branch**: c3 (already checked out by the workflow setup step)
## Phase 1: Set Up the Build Environment
## Phase 1: Build Z3
Build Z3 from the checked-out `c3` branch using CMake + Ninja, including the .NET bindings required by ZIPT.
Install required build tools:
```bash
cd ${{ github.workspace }}
# Install build dependencies if missing
sudo apt-get install -y ninja-build cmake python3 zstd dotnet-sdk-8.0 2>/dev/null || true
# Configure the build in Debug mode to enable assertions and tracing
# (Debug mode is required for -tr: trace flags to produce meaningful output)
mkdir -p build
cd build
cmake .. -G Ninja -DCMAKE_BUILD_TYPE=Debug -DZ3_BUILD_DOTNET_BINDINGS=ON 2>&1 | tail -20
# Build z3 binary and .NET bindings (this takes ~15-17 minutes)
ninja z3 2>&1 | tail -30
ninja build_z3_dotnet_bindings 2>&1 | tail -20
# Verify the build succeeded
./z3 --version
# Locate the Microsoft.Z3.dll produced by the build
Z3_DOTNET_DLL=$(find . -name "Microsoft.Z3.dll" -not -path "*/obj/*" | head -1)
if [ -z "$Z3_DOTNET_DLL" ]; then
echo "ERROR: Microsoft.Z3.dll not found after build"
exit 1
fi
echo "Found Microsoft.Z3.dll at: $Z3_DOTNET_DLL"
sudo apt-get update -y
sudo apt-get install -y cmake ninja-build python3 python3-pip time
```
If the build fails, report the error clearly and exit without proceeding.
## Phase 2a: Clone and Build ZIPT
Clone the ZIPT solver from the `parikh` branch and compile it against the Z3 .NET bindings built in Phase 1.
Verify tools:
```bash
cd ${{ github.workspace }}
cmake --version
ninja --version
python3 --version
```
# Re-locate the Microsoft.Z3.dll if needed
Z3_DOTNET_DLL=$(find build -name "Microsoft.Z3.dll" -not -path "*/obj/*" | head -1)
Z3_LIB_DIR=${{ github.workspace }}/build
## Phase 2: Build Z3 in Debug Mode with Seq Tracing
# Clone ZIPT (parikh branch)
git clone --depth=1 --branch parikh https://github.com/CEisenhofer/ZIPT.git /tmp/zipt
Build Z3 with debug symbols so that tracing and timing data are meaningful.
# Patch ZIPT.csproj to point at the freshly built Microsoft.Z3.dll
# (the repo has a Windows-relative hardcoded path that won't exist here)
sed -i "s|<HintPath>.*</HintPath>|<HintPath>$Z3_DOTNET_DLL</HintPath>|" /tmp/zipt/ZIPT/ZIPT.csproj
```bash
mkdir -p /tmp/z3-build
cd /tmp/z3-build
cmake "$GITHUB_WORKSPACE" \
-G Ninja \
-DCMAKE_BUILD_TYPE=Debug \
-DZ3_BUILD_TEST_EXECUTABLES=OFF \
2>&1 | tee /tmp/z3-cmake.log
ninja z3 2>&1 | tee /tmp/z3-build.log
```
# Build ZIPT in Release mode
cd /tmp/zipt/ZIPT
dotnet build --configuration Release 2>&1 | tail -20
Verify the binary was built:
# Locate the built ZIPT.dll
ZIPT_DLL=$(find /tmp/zipt/ZIPT/bin/Release -name "ZIPT.dll" | head -1)
if [ -z "$ZIPT_DLL" ]; then
echo "ERROR: ZIPT.dll not found after build"
exit 1
```bash
/tmp/z3-build/z3 --version
```
If the build fails, report it immediately and stop.
## Phase 3: Discover QF_S Benchmark Files
Find all `.smt2` benchmark files in the workspace that belong to the QF_S logic:
```bash
# Search for explicit QF_S logic declarations
grep -rl 'QF_S' "$GITHUB_WORKSPACE" --include='*.smt2' 2>/dev/null > /tmp/qf_s_files.txt
# Also look in dedicated benchmark directories
find "$GITHUB_WORKSPACE" \
\( -path "*/QF_S/*" -o -path "*/qf_s/*" -o -path "*/benchmarks/*" \) \
-name '*.smt2' 2>/dev/null >> /tmp/qf_s_files.txt
# Deduplicate
sort -u /tmp/qf_s_files.txt -o /tmp/qf_s_files.txt
TOTAL=$(wc -l < /tmp/qf_s_files.txt)
echo "Found $TOTAL QF_S benchmark files"
head -20 /tmp/qf_s_files.txt
```
If fewer than 5 files are found, also scan the entire workspace for any `.smt2` file that exercises string constraints:
```bash
if [ "$TOTAL" -lt 5 ]; then
grep -rl 'declare.*String\|str\.\|seq\.' "$GITHUB_WORKSPACE" \
--include='*.smt2' 2>/dev/null >> /tmp/qf_s_files.txt
sort -u /tmp/qf_s_files.txt -o /tmp/qf_s_files.txt
TOTAL=$(wc -l < /tmp/qf_s_files.txt)
echo "After extended search: $TOTAL files"
fi
echo "ZIPT binary: $ZIPT_DLL"
```
# Make libz3.so visible to the .NET runtime at ZIPT startup
ZIPT_OUT_DIR=$(dirname "$ZIPT_DLL")
if cp "$Z3_LIB_DIR/libz3.so" "$ZIPT_OUT_DIR/" 2>/dev/null; then
echo "Copied libz3.so to $ZIPT_OUT_DIR"
Cap the benchmark set to keep total runtime bounded (worst case is 500 files × 2 solvers × 10 s each, though most QF_S instances finish in well under a second):
```bash
# Use at most 500 files; take a random sample if more are available
if [ "$TOTAL" -gt 500 ]; then
shuf -n 500 /tmp/qf_s_files.txt > /tmp/qf_s_sample.txt
else
echo "WARNING: could not copy libz3.so to $ZIPT_OUT_DIR — setting LD_LIBRARY_PATH fallback"
cp /tmp/qf_s_files.txt /tmp/qf_s_sample.txt
fi
export LD_LIBRARY_PATH="$Z3_LIB_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "ZIPT build complete."
SAMPLE=$(wc -l < /tmp/qf_s_sample.txt)
echo "Running benchmarks on $SAMPLE files"
```
If the ZIPT build fails, note the error in the report but continue with the Z3-only benchmark columns.
## Phase 4: Run Benchmarks — seq vs nseq
## Phase 2b: Extract and Select Benchmark Files
Extract the QF_S benchmark archive and randomly select 50 files.
Run each benchmark with both solvers. Use a per-file timeout of 10 seconds. Set Z3's internal timeout to 9 seconds so it exits cleanly before the shell timeout fires.
```bash
cd ${{ github.workspace }}
Z3=/tmp/z3-build/z3
TIMEOUT_SEC=10
Z3_TIMEOUT_SEC=9
RESULTS=/tmp/benchmark-results.csv
# Extract the archive
mkdir -p /tmp/qfs_benchmarks
tar --zstd -xf tests/QF_S.tar.zst -C /tmp/qfs_benchmarks
echo "file,seq_result,seq_time_ms,nseq_result,nseq_time_ms" > "$RESULTS"
# List all .smt2 files
find /tmp/qfs_benchmarks -name "*.smt2" -type f > /tmp/all_qfs_files.txt
TOTAL_FILES=$(wc -l < /tmp/all_qfs_files.txt)
echo "Total QF_S files: $TOTAL_FILES"
total=0
done_count=0
while IFS= read -r smt_file; do
total=$((total + 1))
# Randomly select 200 files
shuf -n 200 /tmp/all_qfs_files.txt > /tmp/selected_files.txt
echo "Selected 200 files for benchmarking"
cat /tmp/selected_files.txt
# Run with seq solver; capture both stdout (z3 output) and stderr (time output)
SEQ_OUT=$({ time timeout "$TIMEOUT_SEC" "$Z3" \
smt.string_solver=seq \
-T:"$Z3_TIMEOUT_SEC" \
"$smt_file" 2>/dev/null; } 2>&1)
SEQ_RESULT=$(echo "$SEQ_OUT" | grep -E '^(sat|unsat|unknown)' | head -1)
SEQ_MS=$(echo "$SEQ_OUT" | grep real | awk '{split($2,a,"m"); split(a[2],b,"s"); printf "%d", (a[1]*60+b[1])*1000}')
[ -z "$SEQ_RESULT" ] && SEQ_RESULT="timeout"
[ -z "$SEQ_MS" ] && SEQ_MS=$((TIMEOUT_SEC * 1000))
# Run with nseq solver; same structure
NSEQ_OUT=$({ time timeout "$TIMEOUT_SEC" "$Z3" \
smt.string_solver=nseq \
-T:"$Z3_TIMEOUT_SEC" \
"$smt_file" 2>/dev/null; } 2>&1)
NSEQ_RESULT=$(echo "$NSEQ_OUT" | grep -E '^(sat|unsat|unknown)' | head -1)
NSEQ_MS=$(echo "$NSEQ_OUT" | grep real | awk '{split($2,a,"m"); split(a[2],b,"s"); printf "%d", (a[1]*60+b[1])*1000}')
[ -z "$NSEQ_RESULT" ] && NSEQ_RESULT="timeout"
[ -z "$NSEQ_MS" ] && NSEQ_MS=$((TIMEOUT_SEC * 1000))
SHORT=$(basename "$smt_file")
echo "$SHORT,$SEQ_RESULT,$SEQ_MS,$NSEQ_RESULT,$NSEQ_MS" >> "$RESULTS"
done_count=$((done_count + 1))
if [ $((done_count % 50)) -eq 0 ]; then
echo "Progress: $done_count / $SAMPLE files completed"
fi
done < /tmp/qf_s_sample.txt
echo "Benchmark run complete: $done_count files"
```
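The `awk` one-liners above convert the shell `time` report (`real 0m1.234s`) into milliseconds. As a sanity check on that conversion, here is a minimal Python equivalent (the sample lines are illustrative, not taken from an actual run):

```python
import re

def time_to_ms(time_line: str) -> int:
    """Convert a bash `time` report line such as 'real\t0m1.234s' to milliseconds."""
    m = re.search(r"(\d+)m([\d.]+)s", time_line)
    if m is None:
        raise ValueError(f"unrecognized time format: {time_line!r}")
    minutes, seconds = int(m.group(1)), float(m.group(2))
    return round((minutes * 60 + seconds) * 1000)

print(time_to_ms("real\t0m1.234s"))  # 1234
print(time_to_ms("real\t2m0.500s"))  # 120500
```

Note that `time`'s decimal separator follows the locale; the benchmark script assumes the default `C`/`en_US` formatting.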
## Phase 5: Collect Seq Traces for Interesting Cases

For benchmarks where `seq` solves in under 2 s but `nseq` times out (seq-fast/nseq-slow cases), collect a brief `seq` trace to understand what algorithm is used:

## Phase 3: Run Benchmarks

Run each of the 200 selected files with both Z3 string solvers and ZIPT. Use a 5-second timeout for seq and a 10-second timeout for nseq and ZIPT.

For each file, run:

1. `z3 smt.string_solver=seq -tr:seq -T:5 <file>` — seq solver with sequence-solver tracing enabled; rename the `.z3-trace` output after each run so it is not overwritten. Use `-T:5` when tracing to cap trace size.
2. `z3 smt.string_solver=nseq -T:5 <file>` — nseq solver without tracing (timing only).
3. `dotnet <ZIPT.dll> -t:5000 <file>` — ZIPT solver (milliseconds).

Capture:

- **Verdict**: `sat`, `unsat`, `unknown`, `timeout` (if the exit code indicates a timeout or the process is killed), or `bug` (if a solver crashes or produces a non-standard result)
- **Time** (seconds): wall-clock time for the run
- A row is flagged `SOUNDNESS_DISAGREEMENT` when any two solvers that both produced a definitive answer (sat/unsat) disagree

Use a bash script to automate this:
```bash
#!/usr/bin/env bash
set -euo pipefail
Z3=/tmp/z3-build/z3
mkdir -p /tmp/traces
Z3=${{ github.workspace }}/build/z3
ZIPT_DLL=$(find /tmp/zipt/ZIPT/bin/Release -name "ZIPT.dll" 2>/dev/null | head -1)
ZIPT_AVAILABLE=false
[ -n "$ZIPT_DLL" ] && ZIPT_AVAILABLE=true
# Find seq-fast / nseq-slow files: seq solved (sat/unsat) in <2000ms AND nseq timed out
awk -F, 'NR>1 && ($2=="sat"||$2=="unsat") && $3<2000 && $4=="timeout" {print $1}' \
/tmp/benchmark-results.csv > /tmp/seq_fast_nseq_slow.txt
echo "seq-fast / nseq-slow files: $(wc -l < /tmp/seq_fast_nseq_slow.txt)"
# Ensure libz3.so is on the dynamic-linker path for the .NET runtime
export LD_LIBRARY_PATH=${{ github.workspace }}/build${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
RESULTS=/tmp/benchmark_results.tsv
TRACES_DIR=/tmp/seq_traces
mkdir -p "$TRACES_DIR"
echo -e "file\tseq_verdict\tseq_time\tnseq_verdict\tnseq_time\tzipt_verdict\tzipt_time\tnotes" > "$RESULTS"
run_z3_seq_traced() {
# Run seq solver with -tr:seq tracing. Cap at 5 s so trace files stay manageable.
local file="$1"
local trace_dest="$2"
local start end elapsed verdict output exit_code
# Remove any leftover trace from a prior run so we can detect whether one was produced.
rm -f .z3-trace
start=$(date +%s%3N)
output=$(timeout 7 "$Z3" "smt.string_solver=seq" -tr:seq -T:5 "$file" 2>&1)
exit_code=$?
end=$(date +%s%3N)
elapsed=$(echo "scale=3; ($end - $start) / 1000" | bc)
# Rename the trace file immediately so the next run does not overwrite it.
if [ -f .z3-trace ]; then
mv .z3-trace "$trace_dest"
else
# Write a sentinel so Phase 4 can detect the absence of a trace.
echo "(no trace produced)" > "$trace_dest"
fi
if echo "$output" | grep -q "^unsat"; then
verdict="unsat"
elif echo "$output" | grep -q "^sat"; then
verdict="sat"
elif echo "$output" | grep -q "^unknown"; then
verdict="unknown"
elif [ "$exit_code" -eq 124 ]; then
verdict="timeout"
elif echo "$output" | grep -qi "error\|assertion\|segfault\|SIGABRT\|exception"; then
verdict="bug"
else
verdict="unknown"
fi
echo "$verdict $elapsed"
}
run_z3_nseq() {
local file="$1"
local start end elapsed verdict output exit_code
start=$(date +%s%3N)
output=$(timeout 12 "$Z3" "smt.string_solver=nseq" -T:5 "$file" 2>&1)
exit_code=$?
end=$(date +%s%3N)
elapsed=$(echo "scale=3; ($end - $start) / 1000" | bc)
if echo "$output" | grep -q "^unsat"; then
verdict="unsat"
elif echo "$output" | grep -q "^sat"; then
verdict="sat"
elif echo "$output" | grep -q "^unknown"; then
verdict="unknown"
elif [ "$exit_code" -eq 124 ]; then
verdict="timeout"
elif echo "$output" | grep -qi "error\|assertion\|segfault\|SIGABRT\|exception"; then
verdict="bug"
else
verdict="unknown"
fi
echo "$verdict $elapsed"
}
run_zipt() {
local file="$1"
local start end elapsed verdict output exit_code
if [ "$ZIPT_AVAILABLE" != "true" ]; then
echo "n/a 0.000"
return
fi
start=$(date +%s%3N)
# ZIPT prints the filename on the first line, then SAT/UNSAT/UNKNOWN on subsequent lines
output=$(timeout 12 dotnet "$ZIPT_DLL" -t:5000 "$file" 2>&1)
exit_code=$?
end=$(date +%s%3N)
elapsed=$(echo "scale=3; ($end - $start) / 1000" | bc)
if echo "$output" | grep -qi "^UNSAT$"; then
verdict="unsat"
elif echo "$output" | grep -qi "^SAT$"; then
verdict="sat"
elif echo "$output" | grep -qi "^UNKNOWN$"; then
verdict="unknown"
elif [ "$exit_code" -eq 124 ]; then
verdict="timeout"
elif echo "$output" | grep -qi "error\|crash\|exception\|Unsupported"; then
verdict="bug"
else
verdict="unknown"
fi
echo "$verdict $elapsed"
}
while IFS= read -r file; do
fname=$(basename "$file")
# Use a sanitised filename (replace non-alphanumeric with _) for the trace path.
safe_name=$(echo "$fname" | tr -cs 'A-Za-z0-9._-' '_')
trace_path="$TRACES_DIR/${safe_name}.z3-trace"
seq_result=$(run_z3_seq_traced "$file" "$trace_path")
nseq_result=$(run_z3_nseq "$file")
zipt_result=$(run_zipt "$file")
seq_verdict=$(echo "$seq_result" | cut -d' ' -f1)
seq_time=$(echo "$seq_result" | cut -d' ' -f2)
nseq_verdict=$(echo "$nseq_result" | cut -d' ' -f1)
nseq_time=$(echo "$nseq_result" | cut -d' ' -f2)
zipt_verdict=$(echo "$zipt_result" | cut -d' ' -f1)
zipt_time=$(echo "$zipt_result" | cut -d' ' -f2)
# Flag soundness disagreement when any two definitive verdicts disagree
notes=""
# Build list of (solver, verdict) pairs for definitive answers only
declare -A definitive_map
[ "$seq_verdict" = "sat" ] || [ "$seq_verdict" = "unsat" ] && definitive_map[seq]="$seq_verdict"
[ "$nseq_verdict" = "sat" ] || [ "$nseq_verdict" = "unsat" ] && definitive_map[nseq]="$nseq_verdict"
[ "$zipt_verdict" = "sat" ] || [ "$zipt_verdict" = "unsat" ] && definitive_map[zipt]="$zipt_verdict"
# Check every pair for conflict
has_sat=false; has_unsat=false
for v in "${definitive_map[@]}"; do
[ "$v" = "sat" ] && has_sat=true
[ "$v" = "unsat" ] && has_unsat=true
done
if $has_sat && $has_unsat; then
notes="SOUNDNESS_DISAGREEMENT"
fi
echo -e "$fname\t$seq_verdict\t$seq_time\t$nseq_verdict\t$nseq_time\t$zipt_verdict\t$zipt_time\t$notes" >> "$RESULTS"
echo "[$fname] seq=$seq_verdict(${seq_time}s) nseq=$nseq_verdict(${nseq_time}s) zipt=$zipt_verdict(${zipt_time}s) $notes"
done < /tmp/selected_files.txt
echo "Benchmark run complete. Results saved to $RESULTS"
echo "Trace files saved to $TRACES_DIR"
```
Save this script to `/tmp/run_benchmarks.sh`, make it executable, and run it.
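The soundness check in the script reduces to one question: among the solvers that returned a definitive answer, do both `sat` and `unsat` occur? A compact Python restatement of that rule (the solver names in the example are just labels):

```python
def soundness_disagreement(verdicts: dict) -> bool:
    """True when two solvers both gave definitive answers that conflict."""
    definitive = {v for v in verdicts.values() if v in ("sat", "unsat")}
    # A conflict exists exactly when both definitive verdicts are present
    return definitive == {"sat", "unsat"}

print(soundness_disagreement({"seq": "sat", "nseq": "unsat", "zipt": "timeout"}))  # True
print(soundness_disagreement({"seq": "sat", "nseq": "unknown", "zipt": "sat"}))    # False
```

One bash caveat worth noting: `declare -A definitive_map` inside the loop does not appear to clear an existing associative array between iterations, so stale verdicts could leak across files; resetting it explicitly (e.g. `definitive_map=()`) at the top of each iteration avoids that.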
## Phase 3.5: Identify seq-fast / nseq-slow Cases and Analyse Traces
After the benchmark loop completes, identify files where seq solved the instance quickly but nseq was significantly slower (or timed out). For each such file, read its saved seq trace and produce a hypothesis for why nseq is slower.
**Definition of "seq-fast / nseq-slow"**: seq_time < 1.0 s AND nseq_time > 3 × seq_time (and nseq_time > 0.5 s).
For each matching file:
1. Read the corresponding trace file from `/tmp/seq_traces/`.
2. Look for the sequence of lemmas, reductions, or decisions that led seq to a fast conclusion.
3. Identify patterns absent or less exploited in nseq: e.g., length-based propagation early in the trace, Parikh constraints eliminating possibilities, Nielsen graph pruning, equation splitting, or overlap resolution.
4. Write a 3-5 sentence hypothesis explaining the likely reason for the nseq slowdown, referencing specific trace entries where possible.
Use a script to collect the candidates:
```bash
#!/usr/bin/env bash
RESULTS=/tmp/benchmark_results.tsv
TRACES_DIR=/tmp/seq_traces
ANALYSIS=/tmp/trace_analysis.md
echo "# Trace Analysis: seq-fast / nseq-slow Candidates" > "$ANALYSIS"
echo "" >> "$ANALYSIS"
# Skip header line; columns: file seq_verdict seq_time nseq_verdict nseq_time ...
tail -n +2 "$RESULTS" | while IFS=$'\t' read -r fname seq_verdict seq_time nseq_verdict nseq_time _rest; do
# Use bc for floating-point comparison; bc does not support && so split into separate tests.
is_fast=$(echo "$seq_time < 1.0" | bc -l 2>/dev/null || echo 0)
threshold=$(echo "$seq_time * 3" | bc -l 2>/dev/null || echo 99999)
is_slow_threshold=$(echo "$nseq_time > $threshold" | bc -l 2>/dev/null || echo 0)
# Extra guard: exclude trivially fast seq cases where 3× is still < 0.5 s
is_over_half=$(echo "$nseq_time > 0.5" | bc -l 2>/dev/null || echo 0)
if [ "$is_fast" = "1" ] && [ "$is_slow_threshold" = "1" ] && [ "$is_over_half" = "1" ]; then
safe_name=$(echo "$fname" | tr -cs 'A-Za-z0-9._-' '_')
trace_path="$TRACES_DIR/${safe_name}.z3-trace"
echo "## $fname" >> "$ANALYSIS"
echo "" >> "$ANALYSIS"
echo "seq: ${seq_time}s (${seq_verdict}), nseq: ${nseq_time}s (${nseq_verdict})" >> "$ANALYSIS"
echo "" >> "$ANALYSIS"
echo "### Trace excerpt (first 200 lines)" >> "$ANALYSIS"
echo '```' >> "$ANALYSIS"
head -200 "$trace_path" 2>/dev/null >> "$ANALYSIS" || echo "(trace file not found on disk)" >> "$ANALYSIS"
echo '```' >> "$ANALYSIS"
echo "" >> "$ANALYSIS"
echo "---" >> "$ANALYSIS"
echo "" >> "$ANALYSIS"
fi
# Collect traces for at most 5 such cases
head -5 /tmp/seq_fast_nseq_slow.txt | while IFS= read -r short; do
# Find the full path
full=$(grep "/$short$" /tmp/qf_s_sample.txt | head -1)
[ -z "$full" ] && continue
timeout 5 "$Z3" \
smt.string_solver=seq \
-tr:seq \
-T:5 \
"$full" > "/tmp/traces/${short%.smt2}.seq.trace" 2>&1 || true
done
echo "Candidate list written to $ANALYSIS"
cat "$ANALYSIS"
```
Save this to `/tmp/analyse_traces.sh`, make it executable, and run it. Then read the trace excerpts collected in `/tmp/trace_analysis.md` and — for each candidate — write your hypothesis in the Phase 4 summary report under a **"Trace Analysis"** section.
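The three `bc` comparisons in the candidate filter implement a single predicate; restated in Python for clarity (thresholds copied from the definition above):

```python
def is_seq_fast_nseq_slow(seq_time: float, nseq_time: float) -> bool:
    """seq_time < 1.0 s AND nseq_time > 3 * seq_time AND nseq_time > 0.5 s."""
    return seq_time < 1.0 and nseq_time > 3 * seq_time and nseq_time > 0.5

print(is_seq_fast_nseq_slow(0.2, 0.9))  # True  (0.9 > 0.6 and 0.9 > 0.5)
print(is_seq_fast_nseq_slow(0.2, 0.4))  # False (0.4 < 0.5)
print(is_seq_fast_nseq_slow(1.5, 9.0))  # False (seq is not fast)
```

The 0.5 s floor exists because for trivially fast seq cases a 3× slowdown is still negligible noise.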
## Phase 6: Analyze Results

## Phase 4: Generate Summary Report

Read `/tmp/benchmark_results.tsv` and compute statistics. Then generate a Markdown report.

Compute:

- **Total benchmarks**: 200
- **Per solver (seq, nseq, and ZIPT)**: count of sat / unsat / unknown / timeout / bug verdicts
- **Total time used**: sum of all times for each solver
- **Average time per benchmark**: total_time / 200
- **Soundness disagreements**: files where any two solvers that both returned a definitive answer disagree (these are the most critical bugs)
- **Bugs / crashes**: files with error/crash verdicts

Format the report as a GitHub Discussion post (GitHub-flavored Markdown):

Compute summary statistics from the CSV. Save the analysis script to a file and run it:

```bash
cat > /tmp/analyze_benchmark.py << 'PYEOF'
import csv, sys

results = []
with open('/tmp/benchmark-results.csv') as f:
reader = csv.DictReader(f)
for row in reader:
results.append(row)
total = len(results)
if total == 0:
print("No results found.")
sys.exit(0)
def is_correct(r, solver):
prefix = 'seq' if solver == 'seq' else 'nseq'
return r[f'{prefix}_result'] in ('sat', 'unsat')
def timed_out(r, solver):
prefix = 'seq' if solver == 'seq' else 'nseq'
return r[f'{prefix}_result'] == 'timeout'
seq_solved = sum(1 for r in results if is_correct(r, 'seq'))
nseq_solved = sum(1 for r in results if is_correct(r, 'nseq'))
seq_to = sum(1 for r in results if timed_out(r, 'seq'))
nseq_to = sum(1 for r in results if timed_out(r, 'nseq'))
seq_times = [int(r['seq_time_ms']) for r in results if is_correct(r, 'seq')]
nseq_times = [int(r['nseq_time_ms']) for r in results if is_correct(r, 'nseq')]
def median(lst):
s = sorted(lst)
n = len(s)
return s[n//2] if n else 0
def mean(lst):
return sum(lst)//len(lst) if lst else 0
# Disagreements (sat vs unsat or vice-versa)
disagreements = [
r for r in results
if r['seq_result'] in ('sat','unsat')
and r['nseq_result'] in ('sat','unsat')
and r['seq_result'] != r['nseq_result']
]
# seq-fast / nseq-slow: seq solved in <2s, nseq timed out
seq_fast_nseq_slow = [
r for r in results
if is_correct(r, 'seq') and int(r['seq_time_ms']) < 2000 and timed_out(r, 'nseq')
]
# nseq-fast / seq-slow: nseq solved in <2s, seq timed out
nseq_fast_seq_slow = [
r for r in results
if is_correct(r, 'nseq') and int(r['nseq_time_ms']) < 2000 and timed_out(r, 'seq')
]
print(f"TOTAL={total}")
print(f"SEQ_SOLVED={seq_solved}")
print(f"NSEQ_SOLVED={nseq_solved}")
print(f"SEQ_TIMEOUTS={seq_to}")
print(f"NSEQ_TIMEOUTS={nseq_to}")
print(f"SEQ_MEDIAN_MS={median(seq_times)}")
print(f"NSEQ_MEDIAN_MS={median(nseq_times)}")
print(f"SEQ_MEAN_MS={mean(seq_times)}")
print(f"NSEQ_MEAN_MS={mean(nseq_times)}")
print(f"DISAGREEMENTS={len(disagreements)}")
print(f"SEQ_FAST_NSEQ_SLOW={len(seq_fast_nseq_slow)}")
print(f"NSEQ_FAST_SEQ_SLOW={len(nseq_fast_seq_slow)}")
# Print top-10 slowest for nseq that seq handles fast
print("\nTOP_SEQ_FAST_NSEQ_SLOW:")
for r in sorted(seq_fast_nseq_slow, key=lambda x: -int(x['nseq_time_ms']))[:10]:
print(f" {r['file']} seq={r['seq_time_ms']}ms nseq={r['nseq_time_ms']}ms seq_result={r['seq_result']} nseq_result={r['nseq_result']}")
print("\nTOP_NSEQ_FAST_SEQ_SLOW:")
for r in sorted(nseq_fast_seq_slow, key=lambda x: -int(x['seq_time_ms']))[:10]:
print(f" {r['file']} seq={r['seq_time_ms']}ms nseq={r['nseq_time_ms']}ms seq_result={r['seq_result']} nseq_result={r['nseq_result']}")
if disagreements:
print(f"\nDISAGREEMENTS ({len(disagreements)}):")
for r in disagreements[:10]:
print(f" {r['file']} seq={r['seq_result']} nseq={r['nseq_result']}")
PYEOF
python3 /tmp/analyze_benchmark.py
```
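One way (an assumption, not a requirement of the workflow) to consume the analyzer's `KEY=VALUE` lines when filling in the report template is to load them as shell variables; `/tmp/stats.txt` is an illustrative path:

```shell
# The printf stands in for: python3 /tmp/analyze_benchmark.py > /tmp/stats.txt
printf 'TOTAL=200\nSEQ_SOLVED=150\n' > /tmp/stats.txt
# Keep only simple KEY=NUMBER lines, skipping the indented TOP_* listings.
eval "$(grep -E '^[A-Z_]+=[0-9]+$' /tmp/stats.txt)"
echo "seq solved $SEQ_SOLVED of $TOTAL"
# prints: seq solved 150 of 200
```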
Format the report as a GitHub Discussion post (GitHub-flavored Markdown). The discussion body should follow this template, with real numbers filled in from the analysis output:
```markdown
# QF_S Benchmark: seq vs nseq
**Date**: YYYY-MM-DD
**Branch**: c3
**Commit**: `<short SHA>`
**Workflow Run**: [#<run_id>](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})
**Files benchmarked**: N (capped at 500, timeout 10 s per file)
---
## Summary
| Metric | seq | nseq |
|--------|-----|------|
| Files solved (sat/unsat) | SEQ_SOLVED | NSEQ_SOLVED |
| Timeouts | SEQ_TO | NSEQ_TO |
| Median solve time (solved files) | X ms | Y ms |
| Mean solve time (solved files) | X ms | Y ms |
| **Disagreements (sat≠unsat)** | — | N |
---
## Performance Comparison
### seq-fast / nseq-slow (seq < 2 s, nseq timed out)
These are benchmarks where the classical `seq` solver is significantly faster. These represent regression risk for `nseq`.
| File | seq (ms) | nseq (ms) | seq result | nseq result |
|------|----------|-----------|------------|-------------|
[TOP 10 ENTRIES]
### nseq-fast / seq-slow (nseq < 2 s, seq timed out)
These are benchmarks where `nseq` shows a performance advantage.
| File | seq (ms) | nseq (ms) | seq result | nseq result |
|------|----------|-----------|------------|-------------|
[TOP 10 ENTRIES]
---
## Correctness
**Disagreements** (files where seq says `sat` but nseq says `unsat` or vice versa): N
[If disagreements exist, list all of them here with file paths and both results]
---
## seq Trace Analysis (seq-fast / nseq-slow cases)
<details>
<summary>Click to expand trace snippets for top seq-fast/nseq-slow cases</summary>
[For each traced file, insert the trace snippet followed by a 3–5 sentence hypothesis referencing specific trace entries where possible, or "No traces collected" if tracing was skipped]
</details>
---
## Raw Data
<details>
<summary>Full results CSV (click to expand)</summary>
```csv
[PASTE FIRST 200 LINES OF /tmp/benchmark-results.csv]
```
</details>

---

*Generated by the QF_S Benchmark workflow. To reproduce: build Z3 from the `c3` branch and run `z3 smt.string_solver=seq|nseq -T:10 <file.smt2>`.*
```

## Phase 5: Post to GitHub Discussion

Post the Markdown report as a new GitHub Discussion using the `create-discussion` safe output.

- **Category**: "Agentic Workflows"
- **Title**: `[QF_S Benchmark] Z3 c3 branch — <date>`
- Close older discussions with the same title prefix to avoid clutter.
## Guidelines
- **Always build from c3 branch**: The workspace is already checked out on c3; don't change branches.
- **Debug build required**: The build must use `CMAKE_BUILD_TYPE=Debug` so that Z3's internal assertions and trace infrastructure are active; `-tr:` trace flags have no effect in Release builds.
- **Tracing time cap**: Always pass `-T:5` when running with `-tr:seq` to limit solver runtime and keep trace files a manageable size.
- **Rename trace files immediately**: After each seq run, rename `.z3-trace` to a per-benchmark path before starting the next run, or the next invocation will overwrite it.
- **Handle build failures gracefully**: If Z3 fails to build, report the error and create a brief discussion noting the build failure.
- **Handle missing zstd**: If `tar --zstd` fails, try `zstd -d tests/QF_S.tar.zst --stdout | tar -x -C /tmp/qfs_benchmarks`.
- **Be precise with timing**: Use millisecond-precision timestamps and report times in seconds with 3 decimal places.
- **Distinguish timeout from unknown**: a timeout (the process killed by the outer `timeout`, or Z3 stopping at its internal `-T` limit) is different from `(unknown)` returned by a solver within its time budget.
- **Report soundness bugs prominently**: If any benchmark shows seq and nseq returning conflicting definitive answers (one `sat`, the other `unsat`), highlight it as a critical finding.
- **Don't skip any file**: Run every file in the selected set even if some fail.
- **Large report**: If the per-file table is very long, put it in a `<details>` collapsible section.
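For the millisecond-precision guideline above, a sketch of the formatting arithmetic (capturing timestamps with GNU `date +%s%3N` is an assumption that holds on Linux runners):

```shell
# start=$(date +%s%3N); ...run solver...; end=$(date +%s%3N)
elapsed_ms=1234               # stand-in for $((end - start))
# Report in seconds with 3 decimal places:
printf '%d.%03d\n' $((elapsed_ms / 1000)) $((elapsed_ms % 1000))
# prints: 1.234
```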
## Edge Cases

- If the build fails, call `missing_data` explaining the build error and stop.
- If no benchmark files are found at all, call `missing_data` explaining that no QF_S `.smt2` files were found in the `c3` branch.
- If Z3 crashes (segfault) on a file with either solver, record the result as `crash` and continue.
- If the total benchmark set is very small (< 5 files), note this prominently in the discussion and suggest adding more QF_S benchmarks to the `c3` branch.
- If zero disagreements and both solvers time out on the same files, note that the solvers are in agreement.
## Important Notes
- **DO NOT** modify any source files or create pull requests.
- **DO NOT** run benchmarks for longer than 80 minutes total (leave buffer for posting).
- **DO** always report the commit SHA so results can be correlated with specific code versions.
- **DO** close older QF_S Benchmark discussions automatically (configured via `close-older-discussions: true`).
- **DO** highlight disagreements prominently — these are potential correctness bugs.