Qwen: Qwen3 32B

qwen/qwen3-32b

Created Apr 28, 2025 | 40,960 context
$0.10/M input tokens | $0.30/M output tokens

Qwen3-32B is a dense 32.8B parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for tasks like math, coding, and logical inference, and a "non-thinking" mode for faster, general-purpose conversation. The model demonstrates strong performance in instruction-following, agent tool use, creative writing, and multilingual tasks across 100+ languages and dialects. It natively handles 32K token contexts and can extend to 131K tokens using YaRN-based scaling.
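As a rough illustration of using the model via its slug above, the sketch below calls OpenRouter's OpenAI-compatible chat completions endpoint. The `ask` helper is hypothetical, and the `/no_think` soft switch follows Qwen3's documented convention for requesting non-thinking mode; exact behavior may vary by provider.

```python
import os
import requests

# Minimal sketch: call Qwen3-32B through OpenRouter's OpenAI-compatible
# chat completions endpoint. Assumes an API key in OPENROUTER_API_KEY.
API_URL = "https://openrouter.ai/api/v1/chat/completions"
API_KEY = os.environ["OPENROUTER_API_KEY"]

def ask(prompt: str, thinking: bool = True) -> str:
    # Qwen3 exposes a soft switch between modes: appending "/no_think" to the
    # prompt requests the faster non-thinking mode (per Qwen3's documentation;
    # this is an assumption about how a given provider surfaces it).
    if not thinking:
        prompt = f"{prompt} /no_think"
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "qwen/qwen3-32b",
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("What is the derivative of x^3?"))        # thinking mode
    print(ask("Say hello in French.", thinking=False))  # non-thinking mode
```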
