API Workbench

Optimize your LLM testing workflow with industrial-grade control.

The API Workbench is the core testing engine of AIWorkbench.dev. It provides a direct, zero-backend connection between your browser and the world's most powerful AI models.
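Because there is no backend, the request (including your API key) travels straight from the browser to the provider. As a minimal sketch, the per-provider auth headers look roughly like this; header names follow the public OpenAI and Anthropic HTTP APIs, and Anthropic additionally requires an explicit opt-in header for direct browser (CORS) access:

```python
def provider_headers(provider: str, api_key: str) -> dict:
    """Build HTTP headers for a direct client-to-provider call (BYOK).

    Sketch only: the key is sent to the provider and nowhere else.
    Header names follow the public OpenAI / Anthropic APIs.
    """
    if provider == "openai":
        return {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        }
    if provider == "anthropic":
        return {
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
            # Anthropic requires this opt-in header for browser (CORS) calls:
            "anthropic-dangerous-direct-browser-access": "true",
        }
    raise ValueError(f"unknown provider: {provider}")
```

The key never touches an intermediate server, which is the core of the BYOK model.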

Core Parameters

Mastering individual sampling parameters is fundamental to controlling model behavior. Our workbench offers granular control over these settings across all providers.

| Parameter | Function | Typical Range |
| --- | --- | --- |
| Temperature | Controls randomness. Lower is focused; higher is creative. | 0.0 - 2.0 (OpenAI), 0.0 - 1.0 (Anthropic) |
| Top P | Nucleus sampling; limits the model to a cumulative probability mass. | 0.0 - 1.0 |
| Max Tokens | The absolute limit on completion length. | Model-dependent |
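To see how these parameters travel over the wire, here is a hedged sketch of a request-body builder. Field names follow the public chat APIs; the model names are illustrative, and the range checks mirror the table above:

```python
def build_request(provider: str, prompt: str, *, temperature: float,
                  top_p: float, max_tokens: int) -> dict:
    """Assemble a minimal chat request body for the given provider.

    Sketch only: field names follow the public OpenAI / Anthropic
    chat APIs; model names here are placeholders.
    """
    if provider == "openai":
        if not 0.0 <= temperature <= 2.0:
            raise ValueError("OpenAI temperature must be in [0.0, 2.0]")
        return {
            "model": "gpt-4o",  # illustrative
            "messages": [{"role": "user", "content": prompt}],
            "temperature": temperature,
            "top_p": top_p,
            "max_tokens": max_tokens,
        }
    if provider == "anthropic":
        if not 0.0 <= temperature <= 1.0:
            raise ValueError("Anthropic temperature must be in [0.0, 1.0]")
        return {
            "model": "claude-sonnet",  # illustrative
            "messages": [{"role": "user", "content": prompt}],
            "temperature": temperature,
            "top_p": top_p,
            "max_tokens": max_tokens,  # required by the Anthropic API
        }
    raise ValueError(f"unknown provider: {provider}")
```

Note that the same temperature value means different things across providers: 1.0 is the ceiling for Anthropic but only the midpoint of OpenAI's range.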

Provider Specifics

Anthropic Claude

Claude models feature an Extended Thinking mode. When enabled, the model performs internal reasoning before outputting its final response.

  • Thinking Budget: Must be set in tokens.
  • Requirement: Temperature must be set to exactly 1.0 while thinking is enabled; the API does not accept other values in this mode.
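The two constraints above can be enforced in one place. A sketch, assuming an Anthropic-style request body (the `thinking` field shape follows Anthropic's public API):

```python
def with_extended_thinking(body: dict, budget_tokens: int) -> dict:
    """Return a copy of an Anthropic request body with extended
    thinking enabled.

    Per the constraints above: the thinking budget is expressed in
    tokens, and temperature is forced to 1.0, which the API requires
    while thinking is enabled.
    """
    out = dict(body)
    out["thinking"] = {"type": "enabled", "budget_tokens": budget_tokens}
    out["temperature"] = 1.0  # required while thinking is enabled
    return out
```

Applying the override in a single helper means a user can freely adjust the temperature slider, and the correct value is substituted only when thinking is actually on.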

OpenAI GPT-4o & o-series

The OpenAI o-series (o1, o3-mini) are specialized Reasoning Models.

  • These models manage their own internal chain-of-thought and do not support manual temperature or top_p adjustments.
  • AIWorkbench.dev automatically grays out these controls when a reasoning model is selected.
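The same guard the UI applies can be sketched in code: before sending, strip the sampling parameters that reasoning models reject. The model list here is a hypothetical placeholder, not an exhaustive one:

```python
# Hypothetical set of reasoning models; the real list depends on the provider.
REASONING_MODELS = {"o1", "o1-mini", "o3-mini"}

def sanitize_for_model(model: str, body: dict) -> dict:
    """Drop sampling parameters that reasoning models do not accept.

    Mirrors the UI behavior described above: when a reasoning model
    is selected, temperature and top_p are removed rather than sent.
    """
    if model in REASONING_MODELS:
        return {k: v for k, v in body.items()
                if k not in ("temperature", "top_p")}
    return dict(body)
```

Filtering at send time, rather than relying on the grayed-out controls alone, keeps saved presets reusable across both model families.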