Documentation

Welcome to the AIWorkbench.dev technical documentation. Learn how to optimize your LLM development workflow, leverage multi-model comparisons, and manage your API costs securely.

API Workbench

Master the core workbench. Connect Anthropic, OpenAI, and Gemini with granular control over sampling parameters (such as temperature) and extended thinking.

Read Manual

Multi-Model Compare

Learn how to benchmark output quality, latency, and token efficiency across different providers using a single prompt.

Read Manual
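A minimal benchmarking harness might look like the sketch below. The provider calls are supplied by the caller as plain functions (wiring up real SDK clients is out of scope here), and token efficiency uses a rough characters-per-token estimate rather than a provider tokenizer.

```python
import time

def benchmark(prompt: str, providers: dict) -> dict:
    """Run one prompt through several provider callables and record
    latency plus a rough output-token estimate (~4 chars per token).

    `providers` maps a provider name to a function prompt -> response
    text; hooking these up to real SDKs is left to the caller.
    """
    results = {}
    for name, call in providers.items():
        start = time.perf_counter()
        text = call(prompt)
        elapsed = time.perf_counter() - start
        results[name] = {
            "latency_s": round(elapsed, 3),
            "approx_output_tokens": max(1, len(text) // 4),
        }
    return results
```

For example, `benchmark("Summarize this.", {"echo": lambda p: p})` returns one entry per provider, ready to tabulate side by side.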

Caching Guide

Optimize your token usage with prompt caching. Save up to 90% on API costs for repetitive context loads.

Read Manual
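To make the idea concrete, the sketch below builds a request payload that marks a large, repeated system context as cacheable using Anthropic's `cache_control` field (other providers cache differently); the model id is an example. On subsequent requests with the same prefix, the cached portion is billed at a steep discount.

```python
def cached_request(system_context: str, user_message: str) -> dict:
    """Build a Messages API payload whose system context is marked
    cacheable via Anthropic's `cache_control` block."""
    return {
        "model": "claude-sonnet-4-20250514",  # example model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_context,  # the repeated context to cache
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }
```

The savings come from keeping the cached prefix byte-identical across calls, so put stable context in `system` and per-request material in `messages`.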

BYOK Security

Understand the security architecture behind our Bring-Your-Own-Key model. Your keys never leave your browser.

Read Manual
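The essence of BYOK is that authentication headers are assembled on the user's machine and the request goes straight to the provider, never through an intermediary server. The sketch below (Python for illustration; in the actual tool this happens in browser JavaScript) uses the providers' documented header names; Gemini's native API authenticates differently.

```python
def provider_headers(provider: str, api_key: str) -> dict:
    """Build auth headers for a direct client -> provider request.
    Under BYOK the key is injected here, client-side, so it is never
    sent to or stored by any third-party server."""
    if provider == "anthropic":
        return {"x-api-key": api_key, "anthropic-version": "2023-06-01"}
    if provider == "openai":
        return {"Authorization": f"Bearer {api_key}"}
    raise ValueError(f"unknown provider: {provider}")
```

Because the key only ever appears in this client-side header map, revoking it at the provider is the complete remediation path if a device is compromised.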

Token Strategy

Understand how tokenization differs across providers and manage your context window effectively.

Read Manual
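A quick way to reason about context budgets is a coarse estimate, sketched below under the common ~4-characters-per-token rule of thumb for English text. Real counts vary by provider tokenizer, so use the provider's own tokenizer for billing-grade numbers.

```python
def approx_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token for English).
    Provider tokenizers will differ; this is for budgeting only."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, history: list[str],
                 window: int = 200_000, reply_budget: int = 4_096) -> bool:
    """Check whether prompt + history still leaves room for the reply.
    The default window and reply budget are illustrative, not any
    provider's actual limits."""
    used = approx_tokens(prompt) + sum(approx_tokens(m) for m in history)
    return used + reply_budget <= window
```

A check like this before each send is usually enough to decide when to truncate or summarize older history.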

Cost Forecasting

Learn how to estimate and optimize your monthly API spend across major LLM providers.

Read Manual
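The underlying arithmetic is simple: most providers price input and output tokens separately, per million tokens. The sketch below applies that formula; the rates in the usage example are hypothetical, not any provider's actual pricing.

```python
def monthly_cost(input_tokens: int, output_tokens: int,
                 input_price_per_mtok: float,
                 output_price_per_mtok: float) -> float:
    """Estimate monthly API spend. Prices are per million tokens."""
    return (input_tokens * input_price_per_mtok
            + output_tokens * output_price_per_mtok) / 1_000_000
```

For example, 50M input and 5M output tokens at hypothetical rates of $3/$15 per MTok come to `monthly_cost(50_000_000, 5_000_000, 3.0, 15.0)`, i.e. $225.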

Prompt Management

Version control your prompts, organize your library, and optimize your system instructions for production.

Read Manual
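One lightweight versioning approach, sketched below as an illustration rather than the library's actual mechanism, is to derive version ids from prompt content: any edit yields a new id, and identical prompts deduplicate automatically.

```python
import hashlib

def version_id(prompt_text: str) -> str:
    """Derive a stable short version id from prompt content."""
    return hashlib.sha256(prompt_text.encode("utf-8")).hexdigest()[:12]

def save_version(library: dict, name: str, prompt_text: str) -> str:
    """Record a new version of a named prompt in an in-memory library,
    keyed by its content hash."""
    vid = version_id(prompt_text)
    library.setdefault(name, {})[vid] = prompt_text
    return vid
```

Content-addressed ids make it trivial to pin a production deployment to an exact prompt revision and to detect drift between environments.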

Privacy Audit

A detailed breakdown of how we handle your data and why our browser-only model is the gold standard for privacy.

Read Manual