8 Deliberation Modes
Each mode implements a distinct debate protocol optimized for specific decision types. Choose the mode that matches your use case, or let Auto mode decide for you.
Comparison
| Mode | Rounds | Models | Time | Best For |
|---|---|---|---|---|
| Quick | 1 | 1 | ~15s | Simple factual queries |
| Council | 3 | 2-5 | ~45s | Architecture, technical decisions |
| Deep | 5 | 2-5 | ~90s | High-stakes, mission-critical |
| Blind | 3 | 2-5 | ~45s | Unbiased evaluation, legal |
| Red Team | 1 | 3+ | ~120s | Security, adversarial testing |
| Jury | 3 | 3-5 | ~60s | Risk, healthcare, compliance |
| Market | 5 | 3+ | ~90s | Prediction, probability, finance |
| Auto | 1-5 | 1-5 | ~45s | When unsure which mode to use |
Quick
Single-round rapid analysis for straightforward questions. One model generates a response, the response is evaluated, and you receive the output. No debate, no cross-examination — just a fast, direct answer.
Max Rounds: 1
Models: 1
Phases: 3
When To Use
- Simple factual questions: "What is the time complexity of quicksort?"
- Quick sanity checks on a single idea
- Time-constrained decisions where speed matters more than depth
- Low-stakes queries that don't warrant multi-model debate
Technical Details
Bypasses challenge, rebuttal, voting, and convergence phases entirely. Cost is minimal: one API call to one model. Useful as a baseline to compare against multi-model deliberation results.
Council
The standard multi-round deliberation mode. Models independently propose positions, cross-examine each other's reasoning, defend or revise their claims, then vote using formal social choice theory. The debate continues until mathematical convergence is detected or the maximum number of rounds is reached.
Max Rounds: 3
Models: 2-5
Phases: 8
When To Use
- Architecture decisions: "Should we use microservices or a monolith?"
- Complex technical questions requiring diverse perspectives
- Design decisions where trade-offs need explicit exploration
- Any question where you want genuine multi-model debate
Technical Details
Voting uses the Condorcet method (checking whether any candidate beats all others pairwise) with a Ranked Pairs fallback. Convergence score = 0.4 * ranking_similarity + 0.35 * proposal_similarity + 0.25 * concession_rate; the debate has converged when the score >= 0.85. A Borda count provides confidence-weighted scoring for the full ranking.
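The convergence check translates directly into code. This is a minimal sketch using the weights and 0.85 threshold stated above; the helper names are illustrative, and it assumes the three similarity inputs arrive already normalized to [0, 1] by earlier pipeline stages:

```python
def convergence_score(ranking_similarity: float,
                      proposal_similarity: float,
                      concession_rate: float) -> float:
    """Weighted convergence score with the 0.4 / 0.35 / 0.25 split."""
    return (0.4 * ranking_similarity
            + 0.35 * proposal_similarity
            + 0.25 * concession_rate)


def has_converged(score: float, threshold: float = 0.85) -> bool:
    """The debate stops once the score reaches the threshold."""
    return score >= threshold
```

For example, `convergence_score(0.9, 0.9, 0.8)` yields 0.875, just past the threshold, so that round would end the debate.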
Deep
Extended deliberation for high-stakes decisions. Same phase pipeline as Council, but with 5 rounds instead of 3, plus sub-agent research capability. The additional rounds allow positions to evolve more thoroughly through repeated cycles of challenge and defense.
Max Rounds: 5
Models: 2-5
Phases: 8
When To Use
- Security audits requiring exhaustive vulnerability analysis
- Compliance reviews where missing something has real consequences
- Mission-critical architecture decisions
- Complex research questions needing deep exploration
Technical Details
5 rounds means 5 full cycles of propose-challenge-rebut-evaluate-vote-converge. Sub-agents can be spawned for targeted research within each round. The cost is higher, but the analysis is significantly more thorough. Convergence may trigger early if the score reaches 0.85 before round 5.
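The early-exit round loop might look like the following sketch, where `run_round` is a hypothetical stand-in for one full phase cycle that returns that round's convergence score:

```python
def deliberate(run_round, max_rounds: int = 5, threshold: float = 0.85):
    """Run up to max_rounds propose-challenge-rebut-evaluate-vote-converge
    cycles, stopping early once the convergence score reaches the threshold.

    Returns (rounds_used, final_score).
    """
    score = 0.0
    for round_no in range(1, max_rounds + 1):
        score = run_round(round_no)  # one full phase cycle
        if score >= threshold:
            return round_no, score   # early convergence
    return max_rounds, score
```

With scores of 0.4, 0.7, 0.9 across successive rounds, the loop stops after round 3 instead of running all 5.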
Blind
Anonymous evaluation that eliminates model identity bias. All proposals are stripped of model identity before evaluation, and the judge evaluates arguments in multiple orderings to prevent anchoring bias — ensuring that the quality of reasoning matters, not which company built the model.
Max Rounds: 3
Models: 2-5
Phases: 8
When To Use
- Fair model comparison without brand bias
- Legal analysis where anchoring bias could skew outcomes
- Situations where you suspect evaluators favor certain providers
- Academic or research contexts requiring objectivity
Technical Details
Before evaluation, proposals are anonymized: model IDs are mapped to anonymous_1, anonymous_2, and so on, and AI fingerprints are stripped from the text. The judge sees arguments in randomized orderings to prevent position bias. The legal template uses Blind mode by default with two advocates (risk vs. acceptability).
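A sketch of the anonymization and ordering steps, assuming proposals arrive as a model-ID-to-text mapping. The function names are illustrative, and fingerprint stripping is omitted:

```python
import random


def anonymize(proposals, seed=None):
    """Map real model IDs to anonymous_1..anonymous_N in shuffled order,
    so the alias number leaks nothing about the provider."""
    rng = random.Random(seed)
    ids = list(proposals)
    rng.shuffle(ids)
    alias = {real: f"anonymous_{i + 1}" for i, real in enumerate(ids)}
    blinded = {alias[real]: text for real, text in proposals.items()}
    return blinded, alias  # keep alias to de-anonymize the verdict later


def judge_orderings(blinded, n_orderings=3, seed=None):
    """Yield several randomized presentation orders so the judge never
    sees the arguments in a single fixed (anchoring-prone) order."""
    rng = random.Random(seed)
    keys = list(blinded)
    for _ in range(n_orderings):
        order = keys[:]
        rng.shuffle(order)
        yield order
```

Keeping the alias map private to the orchestrator is what lets the final verdict be mapped back to real model IDs without the judge ever seeing them.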
Red Team
Adversarial testing where models actively try to break each other's arguments. Attackers probe for vulnerabilities across 8 categories, defenders respond, and judges evaluate the validity of each attack and the strength of each defense. Produces a comprehensive vulnerability report.
Max Rounds: 1
Models: 3+
Phases: 5
When To Use
- Security assessment of code, architectures, or proposals
- Finding vulnerabilities in business plans or strategies
- Stress-testing ideas before committing resources
- Code review focused on catching security issues
Technical Details
8 attack categories: LOGICAL_FLAW (reasoning errors), EDGE_CASE (boundary conditions), SECURITY_VULN (security vulnerabilities), BIAS_DETECTION (systematic biases), HALLUCINATION_PROBE (factual accuracy), PROMPT_INJECTION (injection attacks), ROBUSTNESS_TEST (input variations), CONSISTENCY_CHECK (contradictions). The code review template uses this mode with weights: security 30%, correctness 25%, performance 20%, maintainability 15%, style 10%.
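The eight categories and the code-review weights can be captured in a small sketch; the enum values and the scoring helper are illustrative, not the system's actual API:

```python
from enum import Enum


class AttackCategory(Enum):
    """The eight red-team attack categories listed above."""
    LOGICAL_FLAW = "reasoning errors"
    EDGE_CASE = "boundary conditions"
    SECURITY_VULN = "security vulnerabilities"
    BIAS_DETECTION = "systematic biases"
    HALLUCINATION_PROBE = "factual accuracy"
    PROMPT_INJECTION = "injection attacks"
    ROBUSTNESS_TEST = "input variations"
    CONSISTENCY_CHECK = "contradictions"


# Code-review template weights from the text (they sum to 1.0).
CODE_REVIEW_WEIGHTS = {
    "security": 0.30, "correctness": 0.25, "performance": 0.20,
    "maintainability": 0.15, "style": 0.10,
}


def weighted_review_score(scores):
    """Combine per-dimension scores in [0, 1] into one weighted score."""
    return sum(CODE_REVIEW_WEIGHTS[k] * scores[k] for k in CODE_REVIEW_WEIGHTS)
```

The weighting means a security finding moves the overall review score three times as much as an equivalent style finding.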
Jury
Panel deliberation with mandatory dissent reporting. Every result must explicitly declare both majority and minority positions — no decision is presented as unanimous unless consensus is mathematically verified through agglomerative clustering, and dissenting models must state their position even when outvoted.
Max Rounds: 3
Models: 3-5
Phases: 8
When To Use
- Risk assessment where overlooking a minority opinion could be catastrophic
- Healthcare decisions requiring transparent disagreement
- Regulatory compliance requiring documented dissent
- Financial analysis where consensus bias is dangerous
Technical Details
The MANDATORY_DISSENT flag forces dissent detection via agglomerative clustering: a Jaccard similarity matrix is built between proposals, and clusters are iteratively merged (threshold >= 0.5). A single cluster indicates genuine consensus; multiple clusters indicate dissent, reported as majority and minority positions. The risk assessment and finance templates use this mode by default.
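A simplified sketch of the dissent detection: Jaccard similarity over word sets plus greedy agglomerative merging at the 0.5 threshold. The whitespace tokenization and average-linkage choice are assumptions, not documented details:

```python
def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    return len(a & b) / len(a | b) if a | b else 1.0


def cluster_proposals(proposals, threshold=0.5):
    """Greedily merge the most similar pair of clusters until no pair
    reaches the threshold. One surviving cluster = genuine consensus;
    several = dissent, with the largest cluster as the majority."""
    clusters = [[set(p.lower().split())] for p in proposals]

    def link(c1, c2):  # average pairwise similarity between two clusters
        pairs = [(a, b) for a in c1 for b in c2]
        return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

    while len(clusters) > 1:
        i, j = max(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: link(clusters[ij[0]], clusters[ij[1]]))
        if link(clusters[i], clusters[j]) < threshold:
            break  # no remaining pair is similar enough to merge
        clusters[i].extend(clusters.pop(j))
    return clusters
```

Two near-identical proposals and one outlier would come back as two clusters, i.e. a 2-1 majority/minority split rather than false unanimity.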
Market
Prediction market mechanism for probabilistic consensus. Models assign probability distributions to outcomes, then update their positions based on the other models' assessments using log-opinion pooling. The market converges when the maximum difference between consecutive probability distributions falls below 5%.
Max Rounds: 5
Models: 3+
Phases: 5
When To Use
- Prediction and forecasting questions
- Probability estimation ("What's the likelihood of X?")
- Financial analysis with bull/bear perspectives
- Scenarios requiring quantified uncertainty
Technical Details
Log-opinion pooling: each model's probability distribution is combined using logarithmic aggregation, which naturally weights extreme positions less. Convergence triggered when max(|P_t - P_{t-1}|) < 0.05 across all outcome probabilities. Unlike voting-based modes, this produces a probability distribution rather than a single winner.
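A sketch of log-opinion pooling and the 5% convergence check; the equal model weights and the final renormalization step are assumptions:

```python
import math


def log_opinion_pool(distributions, weights=None):
    """Combine probability distributions via a weighted geometric mean
    (logarithmic pooling), then renormalize so the result sums to 1.
    Each distribution maps the same outcomes to probabilities."""
    n = len(distributions)
    weights = weights or [1.0 / n] * n  # equal weights by default
    pooled = {o: math.exp(sum(w * math.log(max(d[o], 1e-12))
                              for d, w in zip(distributions, weights)))
              for o in distributions[0]}
    total = sum(pooled.values())
    return {o: p / total for o, p in pooled.items()}


def market_converged(prev, curr, tol=0.05):
    """Converged when no outcome's probability moved by tol or more."""
    return max(abs(curr[o] - prev[o]) for o in curr) < tol
```

Pooling {A: 0.8, B: 0.2} with {A: 0.6, B: 0.4} gives roughly {A: 0.71, B: 0.29}: a full probability distribution, not a single winning option.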
Auto
Intelligent routing that analyzes your query's complexity and automatically selects the optimal mode, number of models, and round count. It extracts features (token count, code presence, stakes keywords, analytical or creative nature) to score complexity and route accordingly.
Max Rounds: 1-5
Models: 1-5
Phases: 1
When To Use
- When you're not sure which mode to pick
- General-purpose usage where cost optimization matters
- Mixed-complexity workflows with varying query types
- Building applications that handle diverse user queries
Technical Details
Complexity scoring: base from token count (<20: 0.1, ≤100: 0.3, ≤500: 0.5, >500: 0.7). Adjustments: +0.2 for code, +0.3 for stakes keywords (medical/legal/financial/security/compliance/hipaa/soc), +0.2 for analytical, +0.1 for creative, -0.2 for factual. Routing: score <0.3 → Quick/1 model, 0.3-0.6 → Council/3 models, ≥0.6 → Council or Deep with 3-5 models.
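These scoring and routing rules translate almost directly into code. This sketch treats feature detection as flags computed upstream, and picking Deep (rather than Council) for every score at or above 0.6 is an assumption, since the text allows either:

```python
STAKES_KEYWORDS = ("medical", "legal", "financial", "security",
                   "compliance", "hipaa", "soc")


def complexity_score(tokens, has_code=False, is_analytical=False,
                     is_creative=False, is_factual=False, text=""):
    """Base score from token count, then the documented adjustments.
    Feature detection is assumed to happen upstream; flags arrive here."""
    if tokens < 20:
        score = 0.1
    elif tokens <= 100:
        score = 0.3
    elif tokens <= 500:
        score = 0.5
    else:
        score = 0.7
    if has_code:
        score += 0.2
    if any(k in text.lower() for k in STAKES_KEYWORDS):
        score += 0.3
    if is_analytical:
        score += 0.2
    if is_creative:
        score += 0.1
    if is_factual:
        score -= 0.2
    return score


def route(score):
    """Map a complexity score to (mode, model count).
    The Deep-vs-Council tie-break above 0.6 is an assumption."""
    if score < 0.3:
        return ("quick", 1)
    if score < 0.6:
        return ("council", 3)
    return ("deep", 5)
```

A short factual query routes to Quick with a single model, while a long, code-bearing query that mentions HIPAA picks up both adjustments and lands in Deep with 5 models.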