SSE Streaming Events

Complete reference for all Server-Sent Events emitted during an AskVerdict debate stream, with payload schemas and consumption examples.


AskVerdict streams debate progress in real time using Server-Sent Events (SSE). Each event carries a JSON-encoded payload, delivered over a long-lived HTTP connection opened by POST /v1/verdicts?stream=true.

This page documents every event type in the stream, the lifecycle order they arrive in, and how to consume them from JavaScript, Python, and cURL.


Stream Endpoint

bash
POST https://api.askverdict.ai/v1/verdicts?stream=true
Content-Type: application/json
Authorization: Bearer vrd_your_api_key
 
{
  "question": "Should we rewrite our backend in Rust?",
  "mode": "balanced"
}

The response is a text/event-stream with individual events in the format:

plaintext
id: evt_001
event: debate:start
data: {"type":"debate:start","data":{...}}
 
id: evt_002
event: agent:thinking
data: {"type":"agent:thinking","data":{...}}

Each event has an id, an event name matching the event type, and a data field containing the full JSON payload including a type discriminator and a data object.
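The envelope above is simple enough to parse with the standard library. This sketch splits one raw event block into its id, event, and decoded payload; the function name is illustrative, not part of any SDK:

```python
import json

# Sketch: parse one raw SSE event block ("id:", "event:", "data:" lines)
# into a dict, decoding the data field into a Python object.
def parse_sse_block(block: str) -> dict:
    fields = {}
    for line in block.strip().splitlines():
        key, _, value = line.partition(": ")
        fields[key] = value
    fields["payload"] = json.loads(fields["data"])
    return fields

raw = (
    "id: evt_001\n"
    "event: debate:start\n"
    'data: {"type":"debate:start","data":{"debateId":"dbt_abc123"}}'
)
evt = parse_sse_block(raw)
# evt["event"] and evt["payload"]["type"] carry the same discriminator
```

Note that the event name and the type discriminator inside the data field are always identical, so a consumer can dispatch on either.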


Debate Lifecycle

Events arrive in the following general sequence. Not all events appear in every debate — agent:search, graph:update, consensus:check, and analysis events are emitted only in thorough mode.

plaintext
debate:start
  ├── agent:intro           (one per agent, interactive mode only)
  ├── question:clarification (if question is ambiguous)

  └── [ Per round ]
       ├── agent:thinking    (one per agent, before each argument)
       ├── agent:search      (thorough mode: before argument if web search used)
       ├── agent:argument    (one per agent argument)
       ├── agent:error       (if an agent fails — recoverable)
       ├── graph:update      (thorough mode: after all arguments in round)
       ├── consensus:check   (thorough mode: convergence measurement)
       └── analysis:mid_debate (thorough mode: mid-debate summary)
 
  ├── analysis:fact_check    (thorough mode: after final round)
  ├── analysis:controversy   (thorough mode)

  └── verdict:start
       └── synthesis:progress (step-by-step synthesis status)
           └── verdict:complete
               └── debate:complete
 
# Alternate paths and interleaved events:
debate:cached              (identical question found — no new debate run)
debate:error               (unrecoverable failure at any point)
engine:error               (engine-level failure)
cost:update                (SaaS: credit balance, interleaved after each AI call)
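The lifecycle reduces to a small state machine. This sketch (the class name is illustrative) tracks the current round and flags completion, assuming debate:complete, debate:cached, and debate:error are the terminal events as described on this page; engine:error may be recoverable and is not treated as terminal here:

```python
# Sketch: minimal lifecycle tracker. Terminal events per this page are
# debate:complete, debate:cached, and debate:error.
TERMINAL_EVENTS = {"debate:complete", "debate:cached", "debate:error"}

class DebateTracker:
    def __init__(self) -> None:
        self.round = 0
        self.done = False

    def on_event(self, event: dict) -> None:
        data = event.get("data", {})
        # Most per-round events carry a "round" field in their payload.
        if isinstance(data, dict) and "round" in data:
            self.round = data["round"]
        if event["type"] in TERMINAL_EVENTS:
            self.done = True
```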

Core Debate Events

debate:start

Emitted once at the beginning of every debate. Contains the agent lineup and debate configuration.

json
{
  "type": "debate:start",
  "data": {
    "debateId": "dbt_abc123",
    "question": "Should we rewrite our backend in Rust?",
    "context": "We have a Node.js monolith, 4 engineers, shipping every 2 weeks.",
    "agentCount": 3,
    "maxRounds": 3,
    "agents": [
      { "id": "agent_1", "persona": "pragmatist", "name": "Alex" },
      { "id": "agent_2", "persona": "challenger", "name": "Morgan" },
      { "id": "agent_3", "persona": "synthesizer", "name": "Jordan" }
    ]
  }
}

agent:thinking

Emitted just before an agent begins generating its argument. Use this to show a typing indicator in your UI.

json
{
  "type": "agent:thinking",
  "data": {
    "agentId": "agent_1",
    "agentName": "Alex",
    "round": 1
  }
}
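In a terminal UI the typing indicator can be a status line derived directly from the payload; a web UI would instead toggle a spinner keyed by agentId. A sketch (function name illustrative):

```python
# Sketch: map an agent:thinking event to a one-line status message.
def thinking_status(event: dict) -> str:
    d = event["data"]
    return f"{d['agentName']} is thinking (round {d['round']})..."
```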

agent:argument

The core content event. Each argument contains the agent's full response, attached evidence, confidence score, and any cross-examination responses.

json
{
  "type": "agent:argument",
  "data": {
    "agentId": "agent_1",
    "agentName": "Alex",
    "round": 1,
    "content": "Rewriting in Rust would be premature. The team's Node.js throughput is not the bottleneck — database queries are. A rewrite would take 6–12 months and introduce new failure modes before addressing the actual constraint.",
    "evidence": [
      {
        "id": "ev_001",
        "source": "Web search",
        "url": "https://example.com/rust-rewrite-costs",
        "summary": "Typical Rust migration timelines for teams of 4–8 engineers",
        "relevance": 0.87
      }
    ],
    "confidence": 0.82,
    "responses": [],
    "timestamp": 1740045600000
  }
}

agent:error

Emitted when a single agent fails to produce an argument. The debate continues with the remaining agents as long as the failure stays recoverable at the debate level; an unrecoverable failure is reported as debate:error instead.

json
{
  "type": "agent:error",
  "data": {
    "agentId": "agent_2",
    "agentName": "Morgan",
    "round": 2,
    "error": "Provider timeout after 30s",
    "code": "PROVIDER_TIMEOUT"
  }
}
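A consumer can treat the two error events differently, as described above: log agent:error and keep reading, stop on debate:error. A minimal sketch (function name illustrative):

```python
# Sketch: agent:error is recoverable (keep consuming the stream);
# debate:error terminates it. Returns True if reading should continue.
def should_continue(event: dict) -> bool:
    if event["type"] == "agent:error":
        d = event["data"]
        print(f"{d['agentName']} failed in round {d['round']}: {d['error']}")
        return True
    if event["type"] == "debate:error":
        d = event["data"]
        print(f"Debate failed at {d['stage']}: {d['error']}")
        return False
    return True
```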

agent:search

Emitted when an agent performs a web search in thorough mode, before it writes its argument.

json
{
  "type": "agent:search",
  "data": {
    "agentId": "agent_1",
    "agentName": "Alex",
    "query": "Rust rewrite migration costs small team 2025",
    "resultsCount": 5
  }
}

Verdict Events

verdict:start

Signals that synthesis has begun. Emitted once after all debate rounds complete.

json
{
  "type": "verdict:start",
  "data": {
    "round": 3
  }
}

synthesis:progress

Emitted during verdict synthesis to show step-by-step progress. The progress field is a value between 0 and 1.

json
{
  "type": "synthesis:progress",
  "data": {
    "step": "weighing",
    "message": "Weighing argument strength and evidence quality...",
    "progress": 0.5
  }
}

Possible step values: "analyzing", "weighing", "drafting", "finalizing".
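Because progress is a float from 0 to 1, rendering it is a one-liner. A sketch for terminal output (function name and bar width are illustrative choices):

```python
# Sketch: render a synthesis:progress payload as a text progress bar.
def render_progress(data: dict, width: int = 20) -> str:
    filled = int(data["progress"] * width)
    bar = "#" * filled + "-" * (width - filled)
    return f"[{bar}] {data['step']}: {data['message']}"
```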


verdict:complete

The final verdict payload. Contains the full verdict with recommendation, confidence, reasoning, and supporting arguments.

json
{
  "type": "verdict:complete",
  "data": {
    "verdict": {
      "id": "vrd_xyz789",
      "recommendation": "Do not rewrite in Rust at this time",
      "confidence": 0.78,
      "summary": "The team's current bottleneck is database performance, not runtime speed. A Rust rewrite is technically sound but strategically premature given team size and shipping cadence.",
      "winningPosition": "against",
      "consensusScore": 0.71,
      "arguments": { "for": [...], "against": [...] },
      "keyInsights": [
        "Database query optimization can yield 60–80% improvement within 2 weeks",
        "Rust migration risk is high with a 4-person team"
      ]
    }
  }
}

debate:complete

The closing event of every successful debate. Contains the final verdict summary, total cost, and duration. After this event, the stream closes.

json
{
  "type": "debate:complete",
  "data": {
    "debateId": "dbt_abc123",
    "verdict": { "...": "same as verdict:complete" },
    "totalCost": 0.0041,
    "durationSeconds": 28
  }
}

debate:cached

Emitted instead of the normal lifecycle when an identical question was recently debated and the result is served from cache. The stream ends immediately after this event.

json
{
  "type": "debate:cached",
  "data": {
    "cachedDebateId": "dbt_prev456",
    "question": "Should we rewrite our backend in Rust?",
    "verdict": { "...": "cached verdict object" }
  }
}

Error Events

debate:error

An unrecoverable error at the debate level. The stream terminates after this event.

json
{
  "type": "debate:error",
  "data": {
    "error": "All agents failed to respond within the timeout window",
    "stage": "round_2",
    "recoverable": false
  }
}

engine:error

An error at the engine level (not tied to a specific agent or debate stage).

json
{
  "type": "engine:error",
  "data": {
    "error": "Provider rate limit exceeded",
    "code": "RATE_LIMIT",
    "recoverable": true
  }
}

Thorough-Mode Events

These events are emitted only in mode: "thorough" debates.

graph:update

Emitted after all agents complete a round. Contains a snapshot of the argument graph with claim and edge counts.

json
{
  "type": "graph:update",
  "data": {
    "round": 2,
    "claimsAdded": 4,
    "edgesAdded": 6,
    "snapshot": {
      "nodes": [...],
      "edges": [...]
    }
  }
}

consensus:check

Emitted after each round in thorough mode. Tracks convergence between agent positions.

json
{
  "type": "consensus:check",
  "data": {
    "round": 2,
    "state": {
      "convergenceScore": 0.62,
      "hasConverged": false,
      "dominantPosition": "against",
      "positionStrengths": { "for": 0.38, "against": 0.62 }
    }
  }
}
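Collecting convergenceScore from each consensus:check gives a per-round convergence curve, useful for charting how quickly agents align. A sketch (function name illustrative):

```python
# Sketch: extract (round, convergenceScore) pairs from a stream of events.
def convergence_history(events: list[dict]) -> list[tuple[int, float]]:
    return [
        (e["data"]["round"], e["data"]["state"]["convergenceScore"])
        for e in events
        if e["type"] == "consensus:check"
    ]
```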

analysis:mid_debate

A mid-debate summary emitted by the moderator agent in thorough mode.

json
{
  "type": "analysis:mid_debate",
  "data": {
    "round": 2,
    "keyClaimIds": ["claim_1", "claim_3", "claim_7"],
    "summary": "The core disagreement centers on team capacity vs. long-term performance gains.",
    "focusAreas": ["migration cost", "performance baseline", "team upskilling"]
  }
}

analysis:fact_check

Post-debate fact verification of key claims.

json
{
  "type": "analysis:fact_check",
  "data": {
    "round": 3,
    "checks": [
      {
        "claimId": "claim_3",
        "claim": "Rust is 2-5x faster than Node.js for CPU-bound tasks",
        "verified": true,
        "explanation": "Supported by multiple benchmark studies. Effect size is context-dependent."
      }
    ]
  }
}

analysis:controversy

Controversy score for the question, updated after each round.

json
{
  "type": "analysis:controversy",
  "data": {
    "round": 3,
    "score": 0.65,
    "explanation": "Moderate controversy — strong positions exist on both sides but the technical facts are not disputed.",
    "roundAdjustment": 0
  }
}

Status and Lifecycle Events

debate:status

Generic status message. Used for progress updates that do not fit a specific category.

json
{
  "type": "debate:status",
  "data": {
    "message": "Initializing agents...",
    "detail": "Loading model configurations"
  }
}

debate:paused / debate:resumed

Emitted in interactive mode when the debate is paused for user input and when it resumes.

json
{
  "type": "debate:paused",
  "data": { "round": 2, "message": "Waiting for moderator input" }
}
json
{
  "type": "debate:resumed",
  "data": { "round": 2, "message": "Resuming debate with moderator context injected" }
}

question:clarification

Emitted when the engine detects an ambiguous question before agents begin debating.

json
{
  "type": "question:clarification",
  "data": {
    "clarity": 0.55,
    "gaps": ["No timeline specified", "Team size not mentioned"],
    "suggestions": [
      "Should we rewrite our backend in Rust within the next 6 months?",
      "Should a 4-person team rewrite a Node.js monolith in Rust?"
    ],
    "proceedAnyway": true
  }
}

agent:intro

Emitted once per agent at the start of interactive-mode debates. Shows agent identity before the debate begins.

json
{
  "type": "agent:intro",
  "data": {
    "agentId": "agent_1",
    "name": "Alex",
    "persona": "pragmatist",
    "stance": "Focuses on practical constraints and shipping velocity",
    "color": "#4CAF50",
    "emoji": "🔧"
  }
}

moderator:inject

Emitted when a moderator input is injected into the debate mid-stream.

json
{
  "type": "moderator:inject",
  "data": {
    "content": "Consider that the team will double in size next quarter.",
    "round": 2,
    "acknowledgment": "Agents will incorporate this context in the next round."
  }
}

SaaS-Specific Events

cost:update

Emitted after each AI API call when using AskVerdict-managed credits (non-BYOK). Shows current spend and remaining balance.

json
{
  "type": "cost:update",
  "data": {
    "totalCost": 0.0018,
    "lastCallCost": 0.0006,
    "creditsRemaining": 47
  }
}

engine:api_call / engine:api_response

Engine transparency events showing the raw AI API calls being made. Useful for debugging and cost attribution.

json
{
  "type": "engine:api_call",
  "data": {
    "model": "claude-3-5-sonnet",
    "taskType": "argument",
    "tier": "primary",
    "provider": "anthropic",
    "maxTokens": 1024
  }
}
json
{
  "type": "engine:api_response",
  "data": {
    "model": "claude-3-5-sonnet",
    "taskType": "argument",
    "tier": "primary",
    "provider": "anthropic",
    "inputTokens": 512,
    "outputTokens": 384,
    "cost": 0.00063,
    "durationMs": 1840,
    "runningTotalCost": 0.0018
  }
}
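Because every engine:api_response carries taskType and cost, per-task cost attribution is a simple aggregation over the stream. A sketch (function name illustrative):

```python
from collections import defaultdict

# Sketch: sum engine:api_response costs by taskType for attribution.
def attribute_costs(events: list[dict]) -> dict[str, float]:
    totals: dict[str, float] = defaultdict(float)
    for e in events:
        if e["type"] == "engine:api_response":
            totals[e["data"]["taskType"]] += e["data"]["cost"]
    return dict(totals)
```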

Consuming Events

JavaScript (EventSource)

The native EventSource API only supports GET requests, so use fetch with a ReadableStream for POST debates:

typescript
const response = await fetch("https://api.askverdict.ai/v1/verdicts?stream=true", {
  method: "POST",
  headers: {
    Authorization: "Bearer vrd_your_api_key",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ question: "Should we rewrite in Rust?", mode: "balanced" }),
});
 
const reader = response.body!.getReader();
const decoder = new TextDecoder();
 
let buffer = "";
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
 
  // Buffer partial lines: a network chunk can end mid-event,
  // so only complete lines are parsed and the remainder is kept.
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split("\n");
  buffer = lines.pop() ?? "";
 
  for (const line of lines) {
    if (line.startsWith("data: ")) {
      const payload = JSON.parse(line.slice(6));
      handleEvent(payload);
    }
  }
}
 
function handleEvent(event: { type: string; data: unknown }) {
  switch (event.type) {
    case "agent:argument":
      console.log("New argument:", (event.data as { content: string }).content);
      break;
    case "verdict:complete":
      console.log("Verdict:", event.data);
      break;
    case "debate:complete":
      console.log("Done. Cost:", (event.data as { totalCost: number }).totalCost);
      break;
    case "debate:error":
    case "engine:error":
      console.error("Error:", event.data);
      break;
  }
}

Python

python
import httpx
import json
 
url = "https://api.askverdict.ai/v1/verdicts"
 
headers = {
    "Authorization": "Bearer vrd_your_api_key",
    "Content-Type": "application/json",
}
 
payload = {
    "question": "Should we rewrite our backend in Rust?",
    "mode": "balanced",
}
 
with httpx.stream(
    "POST",
    url,
    headers=headers,
    json=payload,
    params={"stream": "true"},
    timeout=120,
) as response:
    for line in response.iter_lines():
        if line.startswith("data: "):
            event = json.loads(line[6:])
            event_type = event.get("type")
            data = event.get("data", {})
 
            if event_type == "agent:argument":
                print(f"[{data['agentName']}] {data['content'][:100]}...")
            elif event_type == "verdict:complete":
                verdict = data["verdict"]
                print(f"\nVerdict: {verdict['recommendation']}")
                print(f"Confidence: {verdict['confidence']:.0%}")
            elif event_type == "debate:complete":
                print(f"\nDone in {data['durationSeconds']}s — cost ${data['totalCost']:.4f}")
                break
            elif event_type in ("debate:error", "engine:error"):
                print(f"Error: {data['error']}")
                break

cURL

Useful for quick testing and CI pipelines:

bash
curl -X POST "https://api.askverdict.ai/v1/verdicts?stream=true" \
  -H "Authorization: Bearer vrd_your_api_key" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -d '{"question": "Should we rewrite our backend in Rust?", "mode": "balanced"}' \
  --no-buffer

To extract only argument content lines:

bash
curl -X POST "https://api.askverdict.ai/v1/verdicts?stream=true" \
  -H "Authorization: Bearer vrd_your_api_key" \
  -H "Content-Type: application/json" \
  -d '{"question":"Should we rewrite in Rust?","mode":"balanced"}' \
  --no-buffer \
  | grep '^data: ' \
  | while IFS= read -r line; do
      echo "${line#data: }" | python3 -c "
import sys, json
e = json.load(sys.stdin)
if e['type'] == 'agent:argument':
    print(f\"[{e['data']['agentName']}] {e['data']['content'][:80]}\")
elif e['type'] == 'verdict:complete':
    print(f\"VERDICT: {e['data']['verdict']['recommendation']}\")
"
    done

SDK Integration

The AskVerdict TypeScript SDK handles SSE parsing automatically:

typescript
import { AskVerdictClient } from "@askverdict/sdk";
 
const client = new AskVerdictClient({ apiKey: "vrd_your_api_key" });
 
const stream = client.verdicts.stream({
  question: "Should we rewrite our backend in Rust?",
  mode: "balanced",
});
 
for await (const event of stream) {
  if (event.type === "agent:argument") {
    process.stdout.write(`[${event.data.agentName}] `);
    console.log(event.data.content);
  }
  if (event.type === "debate:complete") {
    console.log("Complete:", event.data.verdict.recommendation);
  }
}

Event Type Summary

| Event Type | When | Mode |
| --- | --- | --- |
| debate:start | Start of every debate | All |
| agent:intro | Agent introductions | Interactive only |
| question:clarification | Ambiguous question detected | All |
| debate:status | Progress updates | All |
| agent:thinking | Before each argument | All |
| agent:search | Before web-augmented argument | Thorough |
| agent:argument | Each agent argument | All |
| agent:error | Agent failure | All |
| graph:update | After round ends | Thorough |
| consensus:check | After round ends | Thorough |
| analysis:mid_debate | Mid-debate summary | Thorough |
| analysis:fact_check | After final round | Thorough |
| analysis:controversy | After each round | Thorough |
| debate:paused | Awaiting moderator input | Interactive |
| moderator:inject | After moderator injects | Interactive |
| debate:resumed | Debate restarts | Interactive |
| verdict:start | Synthesis begins | All |
| synthesis:progress | During synthesis | All |
| verdict:complete | Synthesis complete | All |
| debate:complete | Stream closing | All |
| debate:cached | Cache hit | All |
| debate:error | Unrecoverable failure | All |
| engine:error | Engine-level failure | All |
| engine:api_call | Before each AI call | All |
| engine:api_response | After each AI call | All |
| cost:update | After each AI call | SaaS credits |
