API Workbench
Optimize your LLM testing workflow with industrial-grade control.
The API Workbench is the core testing engine of AIWorkbench.dev. It provides a direct, zero-backend connection between your browser and the world's most powerful AI models.
Core Parameters
Mastering individual model parameters is the key to reproducible, predictable results. Our workbench offers granular control over these settings across all providers.
| Parameter | Function | Typical Range |
|---|---|---|
| Temperature | Controls randomness. Lower is focused; higher is creative. | 0.0 - 2.0 (OpenAI), 0.0 - 1.0 (Anthropic) |
| Top P | Nucleus sampling; limits the model to a cumulative probability. | 0.0 - 1.0 |
| Max Tokens | The absolute limit on completion length. | Model Dependent |
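As a sketch of how these parameters map onto a request body, the helper below assembles them using the field names (`temperature`, `top_p`, `max_tokens`) from the public OpenAI and Anthropic chat APIs. The helper itself is illustrative and not part of AIWorkbench.dev:

```typescript
// Illustrative helper: pack the three core parameters into a chat request body.
// Field names (temperature, top_p, max_tokens) match the public OpenAI and
// Anthropic APIs; the function and its types are assumptions for this sketch.
type CoreParams = { temperature: number; topP: number; maxTokens: number };

function buildBody(model: string, prompt: string, p: CoreParams) {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
    temperature: p.temperature, // 0.0–2.0 (OpenAI), 0.0–1.0 (Anthropic)
    top_p: p.topP,              // cumulative-probability (nucleus) cutoff
    max_tokens: p.maxTokens,    // hard cap on completion length
  };
}

const body = buildBody("gpt-4o", "Hello", {
  temperature: 0.7,
  topP: 1.0,
  maxTokens: 256,
});
```

Note that temperature and top P both shape the sampling distribution; providers generally recommend adjusting one or the other, not both at once.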
Provider Specifics
Anthropic Claude
Claude models feature an Extended Thinking mode. When enabled, the model performs internal reasoning before outputting its final response.
- Thinking Budget: The maximum number of tokens the model may spend on internal reasoning. It is specified in tokens and must be less than the request's overall max tokens limit.
- Requirement: Temperature must be set to precisely 1.0 for thinking to function correctly.
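A minimal sketch of an Anthropic Messages request body with extended thinking enabled. The field names follow Anthropic's public API; the model id and the budget value are example placeholders:

```typescript
// Sketch of an Anthropic Messages API body with extended thinking enabled.
// Field names follow Anthropic's public API; model id and numbers are examples.
const anthropicBody = {
  model: "claude-3-7-sonnet-latest", // example model id
  max_tokens: 4096,                  // must exceed the thinking budget
  temperature: 1.0,                  // required when thinking is enabled
  thinking: {
    type: "enabled",
    budget_tokens: 2048,             // internal-reasoning budget, in tokens
  },
  messages: [{ role: "user", content: "Plan a three-step refactor." }],
};
```

Because the thinking budget is carved out of the same request, setting it close to `max_tokens` leaves little room for the final visible response.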
OpenAI GPT-4o & o-series
The OpenAI o-series (o1, o3-mini) are specialized Reasoning Models.
- These models manage their own internal chain-of-thought and do not support manual temperature or top_p adjustments; requests that include those fields are rejected.
- AIWorkbench.dev automatically grays out these controls when a reasoning model is selected.
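That graying-out behavior can be mirrored in client code with a guard that strips the unsupported sampling fields before a request is sent. The function and the model-name pattern below are hypothetical, written only to illustrate the idea:

```typescript
// Hypothetical guard mirroring the workbench UI: drop sampling controls
// when an OpenAI reasoning model (o1, o3-mini, ...) is selected.
const REASONING_MODEL = /^o\d/; // assumption: o-series ids start with "o" + digit

function prepareBody(
  model: string,
  body: Record<string, unknown>
): Record<string, unknown> {
  if (!REASONING_MODEL.test(model)) return body; // non-reasoning: pass through
  // Reasoning models reject these fields, so remove them from the payload.
  const { temperature: _t, top_p: _p, ...rest } = body;
  return rest;
}
```

For example, `prepareBody("o3-mini", { model: "o3-mini", temperature: 0.7 })` returns a body without `temperature`, while a `gpt-4o` body passes through unchanged.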