GPT-5.5 launched April 23, 2026 at $5/M input and $30/M output — exactly 2× the prices of GPT-5.4 ($2.50/$15). Both use the same o200k_base tokenizer, so token counts are identical for the same text. This is a flat price hike, not a tokenization shift.
This calculator models the real-world cost. We tokenize your prompt with tiktoken (client-side, exact, no approximations), apply prompt caching at 90% off cached input, and optionally apply the 50% Batch API discount. The comparison table at the bottom shows GPT-5.5 against GPT-5.4, the Claude family, and the Gemini family.
$5 per million input tokens, $30 per million output tokens. Cached input is $0.50/M.
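The pricing above can be folded into a single cost formula. Here's a minimal sketch of that model using the rates and discounts stated on this page; the function name and the example token counts are invented for illustration:

```python
def gpt55_cost(input_tokens: int, cached_tokens: int,
               output_tokens: int, batch: bool = False) -> float:
    """Estimated dollar cost of one GPT-5.5 call.

    cached_tokens is the portion of input_tokens billed at the
    90%-off cached-input rate ($0.50/M instead of $5/M).
    The Batch API discount halves the whole bill.
    """
    assert cached_tokens <= input_tokens
    uncached = input_tokens - cached_tokens
    cost = (uncached * 5.00 + cached_tokens * 0.50
            + output_tokens * 30.00) / 1_000_000
    return cost * 0.5 if batch else cost

# 100k-token prompt, 80k of it a cache hit, 10k tokens of output:
print(round(gpt55_cost(100_000, 80_000, 10_000), 2))              # 0.44
print(round(gpt55_cost(100_000, 80_000, 10_000, batch=True), 2))  # 0.22
```

Note that output tokens dominate here: at $30/M, the 10k tokens of output cost more than the entire 100k-token cached prompt.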
Is GPT-5.5 worth the upgrade from GPT-5.4?
For routine chatbot work (Q&A, summarization, classification), GPT-5.4 produces nearly identical output at half the price. Upgrade only if you've benchmarked GPT-5.4 and seen it fail on your specific task.
What context window does GPT-5.5 have?
1 million tokens, with flat pricing across the window: no higher rate kicks in at long context lengths.
Does GPT-5.5 support prompt caching?
Yes, at 90% off the cached portion of input. Significant savings for production workloads with stable system prompts.
Should I use GPT-5.5 or Claude Opus 4.7?
Similar capabilities at this tier. Opus 4.7 is $5/$25 vs GPT-5.5's $5/$30, making Opus marginally cheaper on output-heavy work. Run an eval on your specific task before committing.
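Since the two models share an input price, the gap comes entirely from output. A quick sketch using the rates quoted above (the model keys and workload numbers are illustrative, not an API):

```python
# Per-million-token rates (input, output) as quoted on this page.
RATES = {
    "gpt-5.5": (5.00, 30.00),
    "claude-opus-4.7": (5.00, 25.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request, no caching or batch discount."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Output-heavy job: 100k tokens in, 1M tokens out.
print(round(request_cost("gpt-5.5", 100_000, 1_000_000), 2))          # 30.5
print(round(request_cost("claude-opus-4.7", 100_000, 1_000_000), 2))  # 25.5
```

At these rates Opus saves $5 per million output tokens regardless of input size, which is why the gap only matters on output-heavy work.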