Documentation
Welcome to the AIWorkbench.dev technical documentation. Learn how to optimize your LLM development workflow, leverage multi-model comparisons, control your API costs, and keep your keys secure.
API Workbench
Master the core workbench. Connect Anthropic, OpenAI, and Gemini with granular control over sampling parameters (such as temperature) and extended thinking.
Multi-Model Compare
Learn how to benchmark output quality, latency, and token efficiency across different providers using a single prompt.
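A single-prompt comparison run can be thought of as a set of per-provider result records ranked by a throughput metric. The shape below is an illustrative sketch, not the actual AIWorkbench schema:

```typescript
// Sketch of a comparison record: one prompt fanned out to several
// providers, scored on latency and output volume. Field names are
// illustrative assumptions, not the AIWorkbench data model.
interface CompareResult {
  provider: string;
  latencyMs: number;     // wall-clock time for the full response
  outputTokens: number;  // tokens generated by the model
}

// Rank results by tokens produced per millisecond -- a simple
// token-efficiency/throughput metric for side-by-side benchmarking.
function rankByThroughput(results: CompareResult[]): CompareResult[] {
  return [...results].sort(
    (a, b) => b.outputTokens / b.latencyMs - a.outputTokens / a.latencyMs
  );
}
```

Output quality still needs human (or rubric-based) judgment; throughput and latency are the parts you can score automatically.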
Caching Guide
Optimize your token usage with prompt caching. Save up to 90% on API costs for repetitive context loads.
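The arithmetic behind that figure can be sketched as follows. The multipliers mirror Anthropic's published cache pricing (writes at 1.25x the base input rate, reads at roughly 0.1x); other providers differ, so treat the constants as adjustable assumptions:

```typescript
// Sketch: estimating prompt-cache savings for a repeated context block.
// Multipliers assume Anthropic-style cache pricing; adjust per provider.
const CACHE_WRITE_MULTIPLIER = 1.25; // first request writes the cache
const CACHE_READ_MULTIPLIER = 0.1;   // subsequent requests read it

function cacheSavings(
  contextTokens: number, // tokens in the repeated context block
  requests: number,      // requests that reuse that context
  pricePerMTok: number   // base input price, USD per million tokens
): { withoutCache: number; withCache: number; savedPct: number } {
  const withoutCache = (contextTokens * requests * pricePerMTok) / 1e6;
  const withCache =
    (contextTokens * CACHE_WRITE_MULTIPLIER * pricePerMTok) / 1e6 +
    (contextTokens * (requests - 1) * CACHE_READ_MULTIPLIER * pricePerMTok) / 1e6;
  return {
    withoutCache,
    withCache,
    savedPct: 100 * (1 - withCache / withoutCache),
  };
}
```

For example, a 100k-token context reused across 100 requests saves close to 89% of the input cost, which is where the "up to 90%" ceiling comes from as the reuse count grows.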
BYOK Security
Understand the security architecture behind our Bring-Your-Own-Key (BYOK) model. Your keys never leave your browser.
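The core of a browser-only key model is that keys live in client-side state and are attached to provider requests directly from the browser, so no backend ever sees them. A minimal sketch, with illustrative names (the real header values for Anthropic and OpenAI-style APIs are as shown):

```typescript
// Sketch of a client-side key store for a BYOK model. Keys stay in
// memory (or browser storage) and are only ever sent to the provider.
class BrowserKeyVault {
  private keys = new Map<string, string>();

  set(provider: string, apiKey: string): void {
    this.keys.set(provider, apiKey);
  }

  // Display-safe form: never render the full key back into the UI.
  masked(provider: string): string {
    const k = this.keys.get(provider);
    if (!k) return "(not set)";
    return `${k.slice(0, 4)}…${k.slice(-4)}`;
  }

  // Auth headers for a direct browser -> provider request.
  // Anthropic uses x-api-key; OpenAI-style APIs use a Bearer token.
  headersFor(provider: string): Record<string, string> {
    const k = this.keys.get(provider);
    if (!k) throw new Error(`No key stored for ${provider}`);
    return provider === "anthropic"
      ? { "x-api-key": k, "anthropic-version": "2023-06-01" }
      : { Authorization: `Bearer ${k}` };
  }
}
```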
Token Strategy
Understand how tokenization engines differ across providers and manage your context window effectively.
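Exact counts require each provider's own tokenizer, but a rough heuristic is enough for context-window budgeting. The ~4 characters per token figure is a common rule of thumb for English text under BPE tokenizers; treat it as an estimate, never a billing number:

```typescript
// Rough heuristic sketch: ~4 characters per token for English text.
// Real tokenizers (tiktoken, Anthropic's, Gemini's) differ in detail,
// so use this only for quick context-window budgeting.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Check whether a prompt fits, leaving headroom for the model's output.
function fitsInContext(
  text: string,
  contextWindow: number,
  reserveForOutput = 1024
): boolean {
  return estimateTokens(text) + reserveForOutput <= contextWindow;
}
```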
Cost Forecasting
Learn how to estimate and optimize your monthly API spend across major LLM providers.
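A forecast reduces to average request shape times price. A minimal sketch; the prices passed in are illustrative placeholders in USD per million tokens, so substitute your providers' current rates:

```typescript
// Sketch: forecasting monthly spend from average request shape.
// Prices are caller-supplied placeholders, not live provider rates.
interface ModelPrice {
  inputPerMTok: number;  // USD per million input tokens
  outputPerMTok: number; // USD per million output tokens
}

function monthlyCost(
  price: ModelPrice,
  requestsPerDay: number,
  avgInputTokens: number,
  avgOutputTokens: number,
  days = 30
): number {
  const perRequest =
    (avgInputTokens * price.inputPerMTok +
      avgOutputTokens * price.outputPerMTok) / 1e6;
  return perRequest * requestsPerDay * days;
}
```

For example, 100 requests a day at 2,000 input / 500 output tokens against a $3/$15-per-MTok model works out to about $40.50 a month; run the same shape against each provider's price sheet to compare.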
Prompt Management
Version control, library organization, and system-instruction optimization for production.
Privacy Audit
A detailed breakdown of how we handle your data and why our browser-only model is the gold standard for privacy.