Overview

AI Board of Directors — Multi-Model Deliberation

Submit any business question. Three AI models from competing providers debate it in real-time across multiple rounds, converging on a unified recommendation. Like having a boardroom of advisors on demand.

  • 3 AI models per session
  • 9 model combinations
  • 3 tier levels
  • Live at board.rishonlgcy.tech

The Concept

Real boards of directors bring diverse perspectives, challenge assumptions, and force sharper thinking. This platform does the same with AI — three models from Anthropic, OpenAI, and Google each bring different reasoning strengths.

The debate format matters. Models don't just answer independently — they respond to each other's arguments, identify blind spots, and build on strong points. The result is better than any single model alone.

Key Differentiators

  • Multi-round deliberation with convergence detection
  • Complexity auto-assessment before debate begins
  • BYOK support — users can bring their own API keys
  • Real-time streaming — watch the debate unfold live
  • 9 model combinations across 3 quality tiers
  • Export to Email, Markdown, or JSON

The Three AI Advisors

Claude (Anthropic)
Nuanced reasoning, safety-conscious, strong on ethics and long-form analysis.

GPT-4 (OpenAI)
Broad knowledge, creative solutions, strong on implementation details and structured thinking.

Gemini (Google)
Data-driven analysis, research grounding, strong on factual accuracy and quantitative reasoning.

💡 Why three providers? Single-model responses have blind spots. By using Claude, GPT-4, and Gemini, each trained differently with different strengths, you get responses that challenge and complement each other, making groupthink far less likely.
Core Mechanic

How Deliberation Works

The deliberation engine orchestrates a structured multi-round debate. Each round builds on the previous one, with models responding to each other's arguments until convergence is detected.

Deliberation Pipeline

User Question (business problem) → Complexity (auto-assess) → Round 1 (initial positions) → Round 2+ (debate & respond) → Convergence (unified output)

Step-by-Step Process

  1. User Submits a Question Any business question: strategy, pricing, hiring, product decisions, market analysis, risk assessment. Free-form text input with optional context.
  2. Complexity Auto-Assessment The system analyzes the question's complexity to determine the optimal number of deliberation rounds. Simple questions get 2 rounds. Complex multi-faceted problems get 4-5 rounds.
  3. Round 1: Initial Positions All three models independently analyze the question. Each provides their initial perspective, key considerations, and preliminary recommendation. No model sees another's response yet.
  4. Round 2+: Cross-Examination Each model receives the other two models' previous responses. They identify points of agreement, challenge weak arguments, raise counterpoints, and refine their positions. This is where the real value emerges.
  5. Convergence Detection The system monitors for convergence: when models start agreeing on core recommendations. If convergence is reached early, the debate ends efficiently. If disagreements persist, additional rounds continue.
  6. Unified Summary A final synthesis is generated: areas of consensus, remaining disagreements, the majority recommendation, minority opinions, and concrete action items. This is your "board resolution."
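Convergence detection (step 5) can be approximated in several ways. Here is a minimal sketch, assuming a hypothetical detect_convergence helper that averages pairwise text similarity of the latest round's responses; a production system might compare embedding vectors or ask a judge model to score agreement instead:

```python
from difflib import SequenceMatcher
from itertools import combinations

def detect_convergence(latest_responses: list[str], threshold: float = 0.7) -> bool:
    """Return True when the models' latest positions are similar enough.

    Hypothetical heuristic: average pairwise string similarity across the
    three responses from the most recent round.
    """
    scores = [
        SequenceMatcher(None, a, b).ratio()
        for a, b in combinations(latest_responses, 2)
    ]
    return sum(scores) / len(scores) >= threshold
```

String similarity is a crude proxy for semantic agreement, but it illustrates the control flow: the round loop stops as soon as the score crosses the threshold.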

Live Deliberation Example

Question: "Should we expand to the European market this year or focus on US growth?"
Round 1: Initial Positions

Claude: I'd advise focusing on US growth. European expansion requires GDPR compliance, local partnerships, and cultural adaptation. Without 70%+ US market penetration, you're spreading resources too thin.

GPT-4: I see merit in a phased European entry. Start with the UK (English-speaking, lower barrier) while maintaining US growth. The TAM in Europe is 40% larger than US alone. First-mover advantage matters.

Gemini: Data suggests companies that expand internationally before $50M ARR have 30% lower success rates. I recommend a market study first: test demand with a UK landing page and paid ads before committing resources.

Round 2: Cross-Examination

Claude: Gemini's market-test approach is pragmatic, so I'll revise my position. A UK-only test with minimal investment could validate demand without the full expansion cost. My concern remains: even a "test" requires GDPR work.

GPT-4: Claude raises a valid point on GDPR. I'd estimate 6-8 weeks of engineering work. But that's a one-time cost that unlocks the entire EU market. Gemini's 30% stat: is that controlled for company stage and product-market fit?

[convergence detected after round 3]
Convergence is the goal, not forced agreement. If all three models genuinely disagree, the summary reflects that — with each position's reasoning clearly laid out. Honest disagreement is more valuable than artificial consensus.
Models

Model Tiers & Combinations

Three quality tiers, each with three model combinations. Choose based on question complexity and budget. Higher tiers use more capable (and expensive) models.

Low Tier: Claude 3 Haiku + GPT-4o Mini + Gemini 1.5 Flash (3 combinations)
Best for: Simple questions, brainstorming, quick opinions. Fastest response time.
~$0.02 / session

Mid Tier: Claude 3.5 Sonnet + GPT-4o + Gemini 1.5 Pro (3 combinations)
Best for: Strategy, planning, analysis. Balance of quality and cost. Recommended default.
~$0.15 / session

High Tier: Claude 3.5 Opus + GPT-4 Turbo + Gemini Ultra (3 combinations)
Best for: Critical decisions, complex analysis, high-stakes strategy. Maximum reasoning depth.
~$0.60 / session

9 Model Combinations

All Available Combinations
#   Tier   Claude (Anthropic)   GPT (OpenAI)   Gemini (Google)
1   Low    Haiku                4o-mini        1.5 Flash
2   Low    Haiku                4o-mini        1.5 Flash
3   Low    Haiku                4o-mini        1.5 Flash
4   Mid    3.5 Sonnet           GPT-4o         1.5 Pro
5   Mid    3.5 Sonnet           GPT-4o         1.5 Pro
6   Mid    3.5 Sonnet           GPT-4o         1.5 Pro
7   High   3.5 Opus             GPT-4 Turbo    Ultra
8   High   3.5 Opus             GPT-4 Turbo    Ultra
9   High   3.5 Opus             GPT-4 Turbo    Ultra

Each tier offers 3 rotation variants that adjust model order and prompting strategy for variety.
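The rotation-variant idea could be sketched like this (the model identifiers and the get_models_for_tier helper are illustrative assumptions, not the platform's actual code):

```python
TIER_MODELS = {
    "low":  ["claude-3-haiku", "gpt-4o-mini", "gemini-1.5-flash"],
    "mid":  ["claude-3-5-sonnet", "gpt-4o", "gemini-1.5-pro"],
    "high": ["claude-3-5-opus", "gpt-4-turbo", "gemini-ultra"],
}

def get_models_for_tier(tier: str, variant: int = 0) -> list[str]:
    """Rotate speaking order per variant: 3 variants x 3 tiers = 9 combinations."""
    models = TIER_MODELS[tier]
    shift = variant % len(models)
    return models[shift:] + models[:shift]
```

Rotating the order changes which model anchors the debate in Round 1, which is one cheap way to get variety from the same three models.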

💡 Complexity auto-selects tier. The system can auto-detect question complexity and recommend the appropriate tier. Simple "yes/no" questions use Low. Multi-faceted strategic questions default to High. Users can always override.
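A minimal sketch of the complexity-to-rounds mapping described earlier (the thresholds are hypothetical; the live system presumably scores complexity with a model call):

```python
def complexity_to_rounds(score: float) -> int:
    """Map a complexity score in [0, 1] to a deliberation round budget."""
    if score < 0.35:
        return 2   # simple yes/no questions
    if score < 0.65:
        return 3
    if score < 0.85:
        return 4
    return 5       # multi-faceted strategic questions
```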
Platform

Authentication & Billing

Supabase handles auth. Stripe handles payments. Two tiers: Free (5 sessions/month) and Pro (unlimited). BYOK users bypass billing entirely.

Free Tier

$0
per month
  • 5 deliberation sessions per month
  • Access to Low tier models only
  • Basic export (Markdown)
  • 7-day session history
  • Supabase email/password auth
  • No credit card required

Pro Tier

Unlimited
deliberation sessions
  • Unlimited sessions per month
  • Access to all 3 tiers (Low, Mid, High)
  • Full export (Email, Markdown, JSON)
  • Unlimited session history
  • Priority processing
  • Stripe subscription billing

Authentication Flow

Sign Up (email + password) → Email Verify (Supabase magic link) → Free Tier Active (5 sessions/mo) → Upgrade to Pro (Stripe checkout)

Billing Details

Stripe Integration
  • Payment processor: Stripe Checkout + Customer Portal
  • Subscription model: Monthly recurring. Cancel anytime.
  • Usage tracking: Session count stored in Supabase. Resets monthly on billing cycle date.
  • Upgrade flow: In-app button → Stripe Checkout → Redirect back → Instant access
  • Downgrade: Via Stripe Customer Portal. Access continues until current period ends.
  • BYOK bypass: Users with their own API keys don't need Pro; they pay providers directly.
⚠️ Free tier rate limiting: After 5 sessions, the Submit button is disabled with a clear message showing when sessions reset. Users can upgrade or add BYOK keys to continue.
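The gating logic reduces to a small pure function; a sketch with assumed field names:

```python
FREE_MONTHLY_LIMIT = 5

def can_start_session(plan: str, sessions_this_period: int, has_byok: bool) -> bool:
    """Pro subscribers and BYOK users bypass the cap; Free is limited per billing period."""
    if has_byok or plan == "pro":
        return True
    return sessions_this_period < FREE_MONTHLY_LIMIT
```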
Security

BYOK — Bring Your Own Keys

Users can supply their own API keys for Anthropic, OpenAI, and Google. Keys are encrypted with AES-256-GCM and stored securely. BYOK users bypass subscription billing entirely.

How BYOK Works

  1. User Navigates to Settings API Key management section in their account settings.
  2. Enters API Keys One field per provider: Anthropic (sk-ant-...), OpenAI (sk-...), Google (AIza...). Can provide any combination — don't need all three.
  3. Keys Are Encrypted Each key is encrypted with AES-256-GCM before being stored. The encryption key is derived from a server-side secret. Keys are never stored in plaintext.
  4. Validation Check A minimal API call is made to each provider to verify the key is valid and has the required permissions. Invalid keys are rejected immediately.
  5. Session Uses User Keys When the user starts a deliberation, their keys are decrypted at runtime and used for API calls. Costs go directly to the user's provider accounts.

Encryption Details

AES-256-GCM Implementation
  • Algorithm: AES-256-GCM (Galois/Counter Mode), authenticated encryption
  • Key derivation: Server-side ENCRYPTION_KEY environment variable (256-bit)
  • IV (nonce): Random 12-byte IV generated per encryption operation
  • Auth tag: 128-bit authentication tag prevents tampering
  • Storage format: iv:encrypted:authTag (base64 encoded, stored in Supabase)
  • At rest: Encrypted in Supabase (PostgreSQL). RLS ensures per-user isolation.
  • In transit: HTTPS/TLS for all API communication
  • Key rotation: Users can update keys anytime. Old encrypted values are overwritten.
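The scheme above maps directly onto the `cryptography` package's AESGCM primitive. A sketch assuming the iv:encrypted:authTag layout (the platform's actual key-management code isn't shown here; note that AESGCM.encrypt returns the ciphertext with the 16-byte tag appended, so we split it to match the storage format):

```python
import base64
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_api_key(plaintext: str, master_key: bytes) -> str:
    """Encrypt a provider key into the iv:encrypted:authTag storage format."""
    iv = os.urandom(12)                   # fresh 12-byte nonce per operation
    sealed = AESGCM(master_key).encrypt(iv, plaintext.encode(), None)
    ct, tag = sealed[:-16], sealed[-16:]  # GCM tag is the trailing 16 bytes
    return ":".join(base64.b64encode(part).decode() for part in (iv, ct, tag))

def decrypt_api_key(stored: str, master_key: bytes) -> str:
    """Reverse the storage format; raises if the auth tag doesn't verify."""
    iv, ct, tag = (base64.b64decode(part) for part in stored.split(":"))
    return AESGCM(master_key).decrypt(iv, ct + tag, None).decode()
```

Because GCM is authenticated, any tampering with the stored value makes decryption fail loudly rather than return garbage.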

Benefits of BYOK

  • No subscription needed — use the platform free forever
  • Direct billing — pay providers at their rates (often cheaper at volume)
  • Full tier access — all 9 model combinations available
  • No session limits — unlimited deliberations
  • Enterprise-friendly — keys stay under your org's billing

Security Guarantees

  • Never logged — keys don't appear in server logs
  • Never cached — decrypted only during API call execution
  • Per-user isolation — RLS prevents cross-user access
  • Deletable — users can remove keys at any time
  • Auditable — key usage is logged (not the key itself)
🚨 The ENCRYPTION_KEY environment variable is critical. If it changes, all stored BYOK keys become unreadable. This key must be backed up securely and never committed to version control.
Features

Export & Session History

Every deliberation is saved. Export results via email, Markdown, or JSON. Reference past sessions anytime. Build a library of AI-advised decisions.

Export Formats

Available Formats
  • Email (Pro): Formatted HTML email with the question, all rounds of deliberation, and the final unified summary. Sent to the user's registered email address. Includes action items as a checklist.
  • Markdown (Free): Clean Markdown document with headers, model attributions, and the convergence summary. Perfect for pasting into Notion, Obsidian, or any documentation tool.
  • JSON (Pro): Structured JSON with all metadata: models used, round-by-round responses, confidence scores, convergence status, timestamps. Ideal for programmatic processing or archiving.
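The Markdown formatter is the simplest of the three; a sketch with an assumed session shape (the real schema lives in app/models/ and may differ):

```python
def session_to_markdown(session: dict) -> str:
    """Render a deliberation session as a Markdown document."""
    lines = ["# Board Deliberation", "", f"**Question:** {session['question']}", ""]
    for num, responses in enumerate(session["rounds"], start=1):
        lines.append(f"## Round {num}")
        for model, text in responses.items():
            lines.append(f"- **{model}:** {text}")
        lines.append("")
    lines += ["## Board Resolution", "", session["summary"]]
    return "\n".join(lines)
```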

Session History

What's Stored Per Session
  • Question: The original user question and any provided context
  • Tier & Models: Which model combination was used
  • All Rounds: Complete text of every model's response in every round
  • Convergence Data: When/if convergence was detected, which round, agreement %
  • Final Summary: The unified board resolution with action items
  • Metadata: Timestamp, duration, token usage, cost estimate
  • User Notes: Optional user-added notes post-deliberation (for context)

History Features

  • Searchable — Full-text search across all past sessions
  • Filterable — By date, tier, model, topic
  • Shareable — Generate a read-only link for team members
  • Bookmarkable — Star important deliberations
  • Deletable — Users can remove any session

Retention Policy

  • Free tier: 7-day retention (sessions auto-deleted after)
  • Pro tier: Unlimited retention
  • BYOK users: Unlimited retention
  • Deleted accounts: All data purged within 30 days
  • Export before delete: Users can export all sessions as a bulk JSON download
💡 Decision journal pattern: Over time, your session history becomes a decision journal. You can look back at what the AI board recommended, what you actually did, and how it turned out. This is powerful for improving decision-making over time.
System Design

Architecture

A FastAPI backend orchestrating three AI providers, with Supabase for auth/storage and Stripe for billing. The frontend is vanilla JS for simplicity and speed.

System Overview

Frontend (vanilla JS) → FastAPI (deliberation engine) → Supabase (auth + storage), with the engine calling out to Anthropic (Claude), OpenAI (GPT-4), and Google (Gemini), and Stripe handling billing.

Backend Components

FastAPI Application
  • app/main.py (Core): FastAPI application entry point. CORS, middleware, route mounting.
  • app/deliberation/ (Engine): Deliberation engine: orchestrator, round management, convergence detection, prompt construction.
  • app/providers/ (AI): AI provider adapters for Anthropic, OpenAI, and Google. Unified interface with provider-specific API handling.
  • app/auth/ (Auth): Supabase JWT verification, user session management, BYOK key decryption.
  • app/billing/ (Billing): Stripe integration: checkout sessions, webhooks, subscription status, usage tracking.
  • app/export/ (Export): Export formatters: HTML email, Markdown, JSON. Template rendering for email output.
  • app/models/ (Schema): Pydantic models for request/response validation. Tier definitions, model configurations.
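The "unified interface" in app/providers/ could be expressed as an abstract base class; names and signatures here are illustrative, not the platform's actual code:

```python
from abc import ABC, abstractmethod

class ProviderAdapter(ABC):
    """One adapter per provider; the deliberation engine sees only this interface."""

    name: str

    @abstractmethod
    async def complete(self, prompt: str, history: list[str], api_key: str) -> str:
        """Send one turn to the provider and return the model's text."""

class AnthropicAdapter(ProviderAdapter):
    name = "claude"

    async def complete(self, prompt, history, api_key):
        # Real code would call the Anthropic Messages API here
        raise NotImplementedError
```

Keeping provider quirks behind one async method is what lets the orchestrator gather all three calls in parallel without caring which vendor is on the other end.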

Frontend

  • Vanilla JS — No framework. Pure HTML/CSS/JS.
  • Real-time streaming — SSE (Server-Sent Events) for live debate display
  • Responsive design — Mobile-friendly layout
  • Progressive disclosure — Rounds reveal as they complete
  • Dark mode — System preference detection
  • Served from /frontend/ — Static files via FastAPI
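SSE needs no client library because the wire format is plain text. A minimal framing helper on the Python side (the event names are assumptions):

```python
import json

def sse_frame(event: str, payload: dict) -> str:
    """Serialize one Server-Sent Event: named event, JSON data, blank-line terminator."""
    return f"event: {event}\ndata: {json.dumps(payload)}\n\n"
```

On the client, `new EventSource('/api/...')` plus an `addEventListener` handler per event name is all the vanilla JS needs to render rounds as they stream in.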

Data Layer

  • Supabase Auth — Email/password + magic links
  • PostgreSQL tables:
    • users, profiles
    • sessions (deliberation records)
    • rounds (per-round model responses)
    • api_keys (BYOK, AES-256-GCM encrypted)
    • subscriptions (Stripe sync)
    • usage (session counts per billing period)
  • RLS — Row-level security on all user-facing tables

Deliberation Orchestration

# Simplified orchestration flow (Python)
import asyncio

async def run_deliberation(question, tier, user_keys):
    models = get_models_for_tier(tier)
    complexity = assess_complexity(question)
    max_rounds = complexity_to_rounds(complexity)

    history = []
    for round_num in range(max_rounds):
        # Query all three models in parallel
        responses = await asyncio.gather(
            call_model(models[0], question, history, user_keys),
            call_model(models[1], question, history, user_keys),
            call_model(models[2], question, history, user_keys),
        )
        history.append(responses)
        yield stream_round(round_num, responses)  # SSE event

        if detect_convergence(history):
            break

    summary = generate_synthesis(history)
    yield stream_summary(summary)

All three models are called in parallel each round via asyncio.gather for optimal performance.

Architecture Strengths

  • No frontend build step (vanilla JS = instant deploys)
  • Real-time streaming via SSE (not WebSocket)
  • Parallel model calls = fast rounds
  • BYOK makes it free for power users
  • Convergence detection prevents wasted API calls
  • Supabase RLS = security without middleware

Known Constraints

  • FastAPI single-process (needs uvicorn workers for scale)
  • No rate limiting on API endpoints yet
  • No team/org features (individual accounts only)
  • No caching of deliberation results (each session is fresh)
  • Provider outage = degraded deliberation (2 models only)
  • No offline/export-all capability yet
🎉
Walkthrough complete. You now understand how the AI Board of Directors orchestrates multi-model deliberation, the tier system, BYOK encryption, export capabilities, and the full system architecture. The platform is live at board.rishonlgcy.tech.