FAQ

Fast answers for first-time evaluators.

This page exists so the public front door can answer common install, trust, and support questions without burying them inside the README.

Do I need to build Prompt Switchboard locally?

No. The shortest evaluation path is the packaged release zip from GitHub Releases.

Is there a browser-store install today?

Not yet. The supported install path today is the packaged zip from GitHub Releases. Browser-store submission materials are being kept ready, but GitHub Releases remains the supported install surface for now.

What do I need before the first compare run?

A Chromium-compatible browser with Developer Mode, plus at least one supported AI chat tab already open and signed in inside the same browser profile where you load Prompt Switchboard. The empty state inside the extension now walks you through opening tabs, checking readiness, and seeding the first prompt.

Which sites are supported today?

The public support surface currently targets ChatGPT, Gemini, Perplexity, Qwen, and Grok. The supported-sites page is the canonical public list for this front door.

Does Prompt Switchboard proxy prompts through a hosted backend?

No. The compare flow stays inside your browser and reuses the sessions you already trust.

Where should I go for support or onboarding?

Start with the install guide, first compare guide, supported sites page, and trust boundary page. For follow-up questions, use GitHub Discussions.

Can I export a compare result or share it outside the side panel?

Yes, but the current share surface is still local-first. Prompt Switchboard can copy a compare summary, export Markdown, and keep a readable compare artifact without turning the product into a hosted share-link service.

What are prompt packs?

Prompt packs are reusable compare starters for writing, research, coding explanations, and rewriting. They exist to get you to the first useful compare run faster than a blank input box.

Does the AI Compare Analyst change the trust boundary?

The core compare lane does not change: it still reuses the browser tabs and sessions you already trust. The AI Compare Analyst is optional and runs one structured prompt through a supported browser tab when you enable that lane.

Does this build store an analysis API key?

No. The current repository build does not ship direct browser API-key execution for the analyst lane. The shipped path uses a browser session you already trust, and the direct BYOK path stays gated until there is a safer server-backed token flow.

What does the AI Compare Analyst actually add?

It adds an optional decision-assist lane on top of the core compare board. The analyst can summarize consensus, explain disagreement, recommend the best-fit answer to continue from, and draft the next follow-up question, but it does not pretend to be an absolute automatic judge.

What does the MCP sidecar expose?

The MCP sidecar exposes Prompt Switchboard product actions, not arbitrary browser automation. The current surface covers readiness checks, model-tab opening, compare runs, failed-model retries, session reads, compare export, the browser-session analysis lane, the optional local Switchyard runtime-backed analyst lane, and the built-in next-step workflow tools, including workflow run/list/get/resume actions.

Is there a repo-local operator helper for Codex and Claude Code workflows?

Yes. The main maintainer path is npm run mcp:operator -- doctor, npm run mcp:operator -- server, and npm run mcp:operator -- smoke. The helper also keeps bridge status and workflow staging inside the governed MCP tool contract instead of inventing a second public CLI protocol. Its readiness check returns the cached readiness snapshot, and workflow-get reads the current workflow snapshot through the same local sidecar.
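In practice, the maintainer loop is a small command sequence. This is a sketch of the three subcommands named above, run from the repository root; no additional flags are assumed beyond what this FAQ documents:

```shell
# Repo-local operator helper (run from the repository root).
npm run mcp:operator -- doctor   # check local prerequisites and bridge status
npm run mcp:operator -- server   # start the stdio MCP sidecar for your MCP host
npm run mcp:operator -- smoke    # run the smoke check against the sidecar
```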

Which coding agents are the main MCP bindings today?

The strongest repo-specific MCP bindings today are OpenAI Codex and Claude Code. This repo now also ships starter-kit generic MCP assets for OpenCode and OpenClaw, but those paths are still narrower and less verified than the Codex and Claude Code host lanes.

If you want the actual copy-paste setup snippets, open the MCP starter kits guide.

Is there a public API or SDK today?

Not yet. The current builder-facing surface is the local MCP sidecar plus the shared bridge protocol inside this repository. Prompt Switchboard does not ship a public HTTP API or a general-purpose SDK in this release.

Does the MCP sidecar need a hosted relay?

No. MCP clients connect to a local stdio sidecar, and that sidecar reaches the extension through a localhost bridge on the same machine.
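As an illustration, an MCP client configuration for a local stdio sidecar typically looks like the sketch below. The server name "prompt-switchboard" is a placeholder and the npm script is taken from the operator helper described above; check the MCP starter kits guide for the exact snippet your host expects:

```json
{
  "mcpServers": {
    "prompt-switchboard": {
      "command": "npm",
      "args": ["run", "mcp:operator", "--", "server"]
    }
  }
}
```

Note there is no hosted endpoint in this shape: the client spawns the sidecar locally over stdio, and the sidecar reaches the extension through the localhost bridge on the same machine.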

Are workflow runs durable after the browser session ends?

No. Workflow run snapshots are session-scoped runtime cache. They are useful for the current browser session and current compare turn, but they are not a durable cold-start ledger that survives every restart. Use the workflow list/get/resume tools only while the current browser session still owns that snapshot.

Does the Switchyard runtime replace the browser-tab compare lane?

No. The optional local Switchyard runtime lane only powers the analyst execution path. Prompt Switchboard still keeps the compare-first side panel, tab readiness, compare fan-out, and workflow shell inside the browser.

Does “Seed composer only” run the next compare automatically?

No. Seed-only actions only stage the next prompt in the composer. Use the workflow panel's Run next compare now action when you want Prompt Switchboard to execute another compare turn through the compare-first lane.

Does this release use Chrome Native Messaging?

No. The shipped transport is the local stdio sidecar plus the localhost bridge. Native Messaging remains an optional future hardening path, not the active runtime transport in this build.

What belongs in Discussions, what belongs in Issues, and where do security reports go?

Use Discussions for onboarding, workflow ideas, and open-ended product questions. Use Issues for reproducible bugs on the supported surfaces. Use SECURITY.md for security-sensitive reports instead of posting details in public.

What if a supported site stops working after a UI change?

That is usually selector drift: the site changed its DOM and the extension needs an update. Check the latest packaged build and search Issues or Discussions before filing a new bug report.

Can Prompt Switchboard tell me when a model tab is not ready?

Yes. The compare workspace now surfaces model readiness, so before you send the next run you can see whether a selected tab is ready, still loading, missing, or likely affected by selector drift. The repair center then points to the next action instead of leaving you with a dead status pill.

Do I have to rerun every model when only one site fails?

No. Prompt Switchboard now supports targeted recovery so you can retry the failed model cards instead of replaying the entire comparison board.

Is this a provider SDK or automation framework for any website?

No. Prompt Switchboard is a compare-first browser extension for a defined set of AI chat surfaces. It is intentionally narrower than a generic automation stack, and the MCP sidecar keeps that same product boundary instead of exposing raw browser control.

Public onboarding links

Install guide

Use this when you want the shortest path from download to first compare run.

Open install guide

Supported sites

Use this when you want the current support list and selector-drift expectations.

Open supported sites

Trust boundary

Use this when you want the local-first and no-hosted-relay explanation.

Read trust boundary

Discussions

Use this when you want onboarding help, product discussion, or compare-first feedback.

Open Discussions

MCP-capable coding agents

Use this when you want the shortest explanation of how Prompt Switchboard fits OpenAI Codex and Claude Code through the governed MCP sidecar. Other MCP-capable local coding agents stay in the compatibility-note layer, so Prompt Switchboard does not become generic browser automation.

Open MCP agents guide