CLI Workflow Recipes
Ready-to-use shell scripts for common AskVerdict CLI workflows: batch creation, CSV export, CI/CD gates, cron summaries, and team workflows. Every recipe is copy-pasteable and works with bash/zsh on Linux and macOS.
Prerequisites
All recipes assume you have set ASKVERDICT_TOKEN (or run askverdict config set-token <key>) and have jq installed for JSON parsing. Install jq with brew install jq (macOS) or apt install jq (Ubuntu).
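A recipe script can verify both prerequisites up front before doing any real work. A minimal sketch (the `preflight` helper is hypothetical, not part of the CLI; paste it at the top of a recipe script):

```shell
#!/usr/bin/env bash
# preflight — fail fast if the token or jq is missing
# (hypothetical helper, not part of the AskVerdict CLI)

preflight() {
  if [[ -z "${ASKVERDICT_TOKEN:-}" ]]; then
    echo "ASKVERDICT_TOKEN is not set; export it or run: askverdict config set-token <key>" >&2
    return 1
  fi
  if ! command -v jq >/dev/null 2>&1; then
    echo "jq is not installed; brew install jq (macOS) or apt install jq (Ubuntu)" >&2
    return 1
  fi
  echo "preflight: OK"
}
```

Call `preflight || exit 1` before the main loop of any recipe so a missing token fails immediately instead of midway through a batch.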
Recipe 1 — Batch Debate Creation
Run multiple debates from a text file, one question per line. Results are appended to a JSONL file for later processing.
#!/usr/bin/env bash
# batch-debates.sh — Run all questions from a file and save results
set -euo pipefail
QUESTIONS_FILE="${1:-questions.txt}"
OUTPUT_FILE="${2:-results.jsonl}"
MODE="${DEBATE_MODE:-balanced}"
if [[ ! -f "$QUESTIONS_FILE" ]]; then
echo "Usage: $0 <questions-file> [output-file]" >&2
exit 1
fi
echo "Reading questions from: $QUESTIONS_FILE"
echo "Output will be written to: $OUTPUT_FILE"
echo "Mode: $MODE"
echo ""
count=0
failed=0
while IFS= read -r question || [[ -n "$question" ]]; do
# Skip blank lines and comments
[[ -z "$question" || "$question" == \#* ]] && continue
count=$((count + 1))
echo "[$count] $question"
result=$(askverdict debate "$question" \
--mode "$MODE" \
--no-stream \
--json 2>/dev/null) || {
echo " ERROR: failed to create debate" >&2
failed=$((failed + 1))
continue
}
# Append question + result to JSONL
echo "$result" \
| jq --arg q "$question" '. + {question: $q}' \
>> "$OUTPUT_FILE"
debate_id=$(echo "$result" | jq -r '.debateId')
echo " Created: $debate_id"
# Respect rate limits — wait 2 seconds between debates
sleep 2
done < "$QUESTIONS_FILE"
echo ""
echo "Done. $count questions processed, $failed failed."
echo "Results saved to: $OUTPUT_FILE"

questions.txt example:
# Architecture decisions Q1 2026
Should we migrate our REST API to GraphQL?
Should we adopt a microservices architecture?
# Infrastructure
Should we move from AWS EC2 to ECS Fargate?
Should we use Terraform or Pulumi for IaC?

Run it:
chmod +x batch-debates.sh
./batch-debates.sh questions.txt debates-q1.jsonl

Recipe 2 — CSV Export Pipeline
List all completed debates and export a summary CSV for sharing with stakeholders.
#!/usr/bin/env bash
# export-csv.sh — Export completed debates to a CSV file
set -euo pipefail
OUTPUT="${1:-debates.csv}"
LIMIT="${DEBATE_LIMIT:-100}"
echo "Fetching up to $LIMIT completed debates..."
# Write CSV header
echo "id,question,mode,confidence,recommendation,created" > "$OUTPUT"
# Fetch debates as JSON, then transform each one
askverdict list \
--status completed \
--limit "$LIMIT" \
--json \
| jq -r '
.debates[]
| [
.id,
(.question | gsub(","; ";") | gsub("\n"; " ")),
(.mode // ""),
"",  # confidence (requires a per-debate fetch; see export-csv-full.sh)
"",  # recommendation
(.createdAt | split("T")[0])
]
| @csv
' >> "$OUTPUT"
echo "Saved $(tail -n +2 "$OUTPUT" | wc -l | tr -d ' ') debates to $OUTPUT"

To include verdict confidence and recommendation, fetch each debate individually:
#!/usr/bin/env bash
# export-csv-full.sh — Full CSV export with verdict data (slower — one request per debate)
set -euo pipefail
OUTPUT="${1:-debates-full.csv}"
LIMIT="${DEBATE_LIMIT:-50}"
# Write CSV header
echo "id,question,mode,confidence,recommendation,created,completed" > "$OUTPUT"
askverdict list --status completed --limit "$LIMIT" --json \
| jq -r '.debates[].id' \
| while read -r debate_id; do
row=$(askverdict view "$debate_id" --json | jq -r '
[
.id,
(.question | gsub(","; ";") | gsub("\n"; " ")),
(.mode // ""),
(if .verdict.recommendation.confidence then
(.verdict.recommendation.confidence * 100 | round | tostring) + "%"
else "" end),
(.verdict.recommendation.recommendation // "" | gsub(","; ";") | gsub("\n"; " ")),
(.createdAt | split("T")[0]),
(if .completedAt then .completedAt | split("T")[0] else "" end)
] | @csv
')
echo "$row" >> "$OUTPUT"
sleep 0.5 # avoid hammering the API
done
echo "Exported to $OUTPUT"

Open directly in Excel or Google Sheets:
# macOS — open in Numbers
./export-csv-full.sh debates-full.csv && open debates-full.csv
# Linux — open in LibreOffice
./export-csv-full.sh debates-full.csv && libreoffice --calc debates-full.csv

Recipe 3 — CI/CD Decision Gate
Block a deployment or merge unless an AI debate recommends proceeding. Exit code 1 fails the pipeline.
#!/usr/bin/env bash
# decision-gate.sh — Run a debate and exit non-zero if confidence is below threshold
set -euo pipefail
QUESTION="${1:-}"
THRESHOLD="${CONFIDENCE_THRESHOLD:-60}" # minimum confidence % to proceed
if [[ -z "$QUESTION" ]]; then
echo "Usage: ASKVERDICT_TOKEN=... $0 'Should we deploy version 2.4.1?'" >&2
exit 1
fi
echo "Running decision gate..."
echo "Question: $QUESTION"
echo "Threshold: ${THRESHOLD}%"
echo ""
# Create debate, get the ID
result=$(askverdict debate "$QUESTION" \
--mode fast \
--no-stream \
--json)
debate_id=$(echo "$result" | jq -r '.debateId')
echo "Debate ID: $debate_id"
# Poll until complete (max 5 minutes)
max_attempts=30
for attempt in $(seq 1 $max_attempts); do
sleep 10
view=$(askverdict view "$debate_id" --json)
status=$(echo "$view" | jq -r '.status')
if [[ "$status" == "completed" ]]; then
confidence=$(echo "$view" \
| jq -r '
if .verdict.recommendation.confidence then
(.verdict.recommendation.confidence * 100) | round
else 0 end
')
recommendation=$(echo "$view" | jq -r '.verdict.recommendation.recommendation // "No recommendation"')
echo ""
echo "Verdict: $recommendation"
echo "Confidence: ${confidence}%"
if (( confidence >= THRESHOLD )); then
echo ""
echo "PASS — confidence ${confidence}% >= threshold ${THRESHOLD}%"
exit 0
else
echo ""
echo "FAIL — confidence ${confidence}% < threshold ${THRESHOLD}%"
echo "See full debate: askverdict view $debate_id"
exit 1
fi
fi
echo " Attempt $attempt/$max_attempts — status: $status"
done
echo "TIMEOUT — debate did not complete within $((max_attempts * 10))s" >&2
exit 1

GitHub Actions integration
name: Pre-deployment Decision Gate
on:
workflow_dispatch:
inputs:
version:
description: "Version to deploy"
required: true
jobs:
decision-gate:
runs-on: ubuntu-latest
steps:
- name: Install AskVerdict CLI
run: npm install -g @askverdict/cli
- name: Run decision gate
env:
ASKVERDICT_TOKEN: ${{ secrets.ASKVERDICT_API_KEY }}
CONFIDENCE_THRESHOLD: "65"
run: |
bash decision-gate.sh \
"Should we deploy ${{ inputs.version }} to production? \
Recent test results: all passing. \
Last incident: 14 days ago."
deploy:
needs: decision-gate # only runs if gate passes (exit 0)
runs-on: ubuntu-latest
steps:
- name: Deploy
run: echo "Deploying ${{ inputs.version }}..."

Set CONFIDENCE_THRESHOLD as a repository variable so you can adjust it without modifying scripts. Higher threshold = stricter gate. A value of 70 is a reasonable starting point for production deploys.
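CI runners occasionally hit transient network errors, which would otherwise read as a failed gate. A small retry helper (hypothetical, not part of the CLI) keeps flaky calls from aborting the job. Wrap idempotent calls such as `askverdict health`, not the gate itself, since the gate's exit 1 is meaningful:

```shell
#!/usr/bin/env bash
# retry — run a command up to N times with a fixed delay between attempts
# (hypothetical helper; adjust attempts/delay to your pipeline)
set -euo pipefail

retry() {
  local attempts="$1" delay="$2" n
  shift 2
  for (( n = 1; n <= attempts; n++ )); do
    if "$@"; then
      return 0
    fi
    echo "attempt $n/$attempts failed: $*" >&2
    if (( n < attempts )); then sleep "$delay"; fi
  done
  return 1
}

# Pre-flight the API before running the gate; give it three tries
# retry 3 10 askverdict health --json >/dev/null
```

A health check that still fails after three attempts is a real outage, and failing the pipeline at that point is the right call.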
Recipe 4 — Daily Summary Cron Job
Fetch yesterday's completed debates and email or Slack a summary. Schedule with cron or launchd.
#!/usr/bin/env bash
# daily-summary.sh — Summarize the previous day's debates
set -euo pipefail
# Yesterday's date in YYYY-MM-DD (works on Linux and macOS)
if date --version 2>/dev/null | grep -q GNU; then
YESTERDAY=$(date -d "yesterday" +%Y-%m-%d) # GNU date (Linux)
else
YESTERDAY=$(date -v-1d +%Y-%m-%d) # BSD date (macOS)
fi
REPORT_FILE="/tmp/askverdict-summary-${YESTERDAY}.md"
echo "# AskVerdict Daily Summary — $YESTERDAY" > "$REPORT_FILE"
echo "" >> "$REPORT_FILE"
# Overall stats
stats=$(askverdict stats --json)
total=$(echo "$stats" | jq -r '.dashboard.totalDebates')
this_week=$(echo "$stats" | jq -r '.dashboard.debatesThisWeek')
accuracy=$(echo "$stats" | jq -r '.score.overallAccuracy // "N/A"')
echo "## Overview" >> "$REPORT_FILE"
echo "" >> "$REPORT_FILE"
echo "- Total debates (all time): $total" >> "$REPORT_FILE"
echo "- Debates this week: $this_week" >> "$REPORT_FILE"
echo "- Decision accuracy: ${accuracy}%" >> "$REPORT_FILE"
echo "" >> "$REPORT_FILE"
# Recent completed debates
echo "## Completed Debates" >> "$REPORT_FILE"
echo "" >> "$REPORT_FILE"
askverdict list \
--status completed \
--limit 20 \
--json \
| jq -r '.debates[] | "- \(.question | .[0:80]) — \(.mode) mode"' \
>> "$REPORT_FILE"
echo "" >> "$REPORT_FILE"
# Pending outcomes
pending=$(askverdict outcomes pending --json | jq -r '.pending | length')
if (( pending > 0 )); then
echo "## Action Required" >> "$REPORT_FILE"
echo "" >> "$REPORT_FILE"
echo "$pending debate(s) need outcome tracking — run \`askverdict outcomes pending\`" \
>> "$REPORT_FILE"
echo "" >> "$REPORT_FILE"
fi
echo "Report written to: $REPORT_FILE"
cat "$REPORT_FILE"
# Optional: post to Slack
# curl -s -X POST "$SLACK_WEBHOOK_URL" \
#   -H "Content-Type: application/json" \
#   -d "$(jq -Rs '{text: .}' "$REPORT_FILE")"

Schedule with cron (runs at 08:00 every morning):
# Edit crontab
crontab -e
# Add this line (replace paths as needed)
0 8 * * * ASKVERDICT_TOKEN=vrd_your_key /usr/local/bin/daily-summary.sh >> /var/log/askverdict-summary.log 2>&1

Or on macOS with launchd — create ~/Library/LaunchAgents/ai.askverdict.summary.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>ai.askverdict.summary</string>
<key>ProgramArguments</key>
<array>
<string>/bin/bash</string>
<string>/usr/local/bin/daily-summary.sh</string>
</array>
<key>EnvironmentVariables</key>
<dict>
<key>ASKVERDICT_TOKEN</key>
<string>vrd_your_key</string>
</dict>
<key>StartCalendarInterval</key>
<dict>
<key>Hour</key>
<integer>8</integer>
<key>Minute</key>
<integer>0</integer>
</dict>
<key>StandardOutPath</key>
<string>/tmp/askverdict-summary.log</string>
<key>StandardErrorPath</key>
<string>/tmp/askverdict-summary.err</string>
</dict>
</plist>

launchctl load ~/Library/LaunchAgents/ai.askverdict.summary.plist

Recipe 5 — Team Workspace Workflow
A collaborative workflow for teams: one member runs the debate, teammates stream it, everyone votes, and results are exported and shared.
#!/usr/bin/env bash
# team-debate.sh — Run a team debate and collect participation
set -euo pipefail
QUESTION="${1:-}"
CONTEXT="${2:-}"
EXPORT_DIR="${EXPORT_DIR:-./debate-exports}"
if [[ -z "$QUESTION" ]]; then
echo "Usage: $0 'Question' 'Optional context'" >&2
exit 1
fi
mkdir -p "$EXPORT_DIR"
echo "Starting team debate..."
echo "Question: $QUESTION"
[[ -n "$CONTEXT" ]] && echo "Context: $CONTEXT"
echo ""
# Build debate command
debate_args=("$QUESTION" --mode thorough --agents 4 --rounds 6 --no-stream --json)
[[ -n "$CONTEXT" ]] && debate_args+=(--context "$CONTEXT")
result=$(askverdict debate "${debate_args[@]}")
debate_id=$(echo "$result" | jq -r '.debateId')
echo "Debate ID: $debate_id"
echo ""
echo "Share this command with your team so they can watch live:"
echo ""
echo " ASKVERDICT_TOKEN=<their-token> askverdict stream $debate_id"
echo ""
# Stream the debate ourselves
echo "Streaming debate (Ctrl+C to detach — debate continues server-side)..."
askverdict stream "$debate_id" || true
# Wait for debate to complete
echo ""
echo "Waiting for debate to complete..."
while true; do
status=$(askverdict view "$debate_id" --json | jq -r '.status')
[[ "$status" == "completed" ]] && break
[[ "$status" == "failed" ]] && { echo "Debate failed." >&2; exit 1; }
sleep 5
done
echo "Debate complete."
echo ""
# Export results
timestamp=$(date +%Y%m%d-%H%M%S)
json_file="$EXPORT_DIR/${debate_id}-${timestamp}.json"
md_file="$EXPORT_DIR/${debate_id}-${timestamp}.md"
askverdict export "$debate_id" --format json --output "$json_file"
askverdict export "$debate_id" --format md --output "$md_file"
echo "Exported:"
echo " JSON: $json_file"
echo " Markdown: $md_file"
echo ""
# Print summary
verdict=$(askverdict view "$debate_id" --json)
recommendation=$(echo "$verdict" | jq -r '.verdict.recommendation.recommendation // "See full report"')
confidence=$(echo "$verdict" | jq -r '
if .verdict.recommendation.confidence then
(.verdict.recommendation.confidence * 100 | round | tostring) + "%"
else "N/A" end
')
echo "Verdict: $recommendation"
echo "Confidence: $confidence"
echo ""
echo "Full report: askverdict view $debate_id"

Typical team session:
# Lead runs the debate
ASKVERDICT_TOKEN=vrd_lead_key \
./team-debate.sh \
"Should we adopt a design system in Q2?" \
"Team size: 6 engineers. Current UI is inconsistent across 3 products."
# Teammates stream live (each with their own token)
ASKVERDICT_TOKEN=vrd_teammate_token askverdict stream dbt_abc123
# After the debate — teammates vote on arguments
askverdict vote dbt_abc123 claim_001 agree
askverdict vote dbt_abc123 claim_007 disagree
# Team lead creates a poll for final alignment
askverdict polls create dbt_abc123 \
-q "Are you aligned with the AI recommendation?" \
--options "Fully agree,Agree with reservations,Need more info,Disagree"
# Teammates vote on the poll (using their tokens)
askverdict polls vote dbt_abc123 poll_abc opt_001
# Check poll results
askverdict polls list dbt_abc123

Each team member needs their own API key. All team members can view and vote on any debate they can access — no separate collaboration feature is required.
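Long `--options` lists are easier to maintain in a file, one option per line. A small sketch of a join helper (`join_lines` is hypothetical, not a CLI feature):

```shell
#!/usr/bin/env bash
# join_lines — join one-option-per-line stdin into a comma-separated list
set -euo pipefail

join_lines() {
  local out="" line
  while IFS= read -r line || [[ -n "$line" ]]; do
    [[ -z "$line" ]] && continue   # skip blank lines
    if [[ -z "$out" ]]; then out="$line"; else out="$out,$line"; fi
  done
  printf '%s\n' "$out"
}

# Usage (poll-options.txt contains one option per line):
#   opts=$(join_lines < poll-options.txt)
#   askverdict polls create dbt_abc123 -q "Are you aligned?" --options "$opts"
```

This keeps the poll wording under version control alongside the team scripts.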
Utility Snippets
Short one-liners useful across multiple workflows.
# Get the ID of the most recent completed debate
askverdict list --status completed --limit 1 --json \
| jq -r '.debates[0].id'
# Watch your accuracy improve over time
askverdict stats --json \
| jq '{accuracy: .score.overallAccuracy, streak: .streak.currentStreak, total: .dashboard.totalDebates}'
# Export all debates from the last month to individual Markdown files
askverdict list --status completed --limit 50 --json \
| jq -r '.debates[].id' \
| while read -r id; do
askverdict export "$id" --format md --output "exports/${id}.md"
done
# Check API health before running a batch script
askverdict health --json | jq -e '.status == "ok"' \
|| { echo "API is not healthy — aborting" >&2; exit 1; }
# Search for past decisions on a topic and list their verdicts
askverdict search "kubernetes" --status completed --json \
| jq -r '.results[] | "\(.id): \(.verdict.recommendation // "no verdict")"'
# Record outcomes in bulk from a CSV (id,outcome,correct columns)
tail -n +2 outcomes.csv | while IFS=, read -r id outcome correct; do
flag=""
[[ "$correct" == "yes" ]] && flag="--correct"
[[ "$correct" == "no" ]] && flag="--incorrect"
askverdict outcomes record "$id" "$outcome" $flag
done