OpenAI has released two new AI models, GPT-5.4 mini and GPT-5.4 nano. As the names suggest, both are smaller and faster versions of the company’s flagship GPT-5.4 model, designed for high-volume tasks where speed and cost matter.
According to OpenAI, GPT-5.4 mini runs more than twice as fast as its predecessor, GPT-5 mini, and shows improvements in coding, reasoning, understanding images, and using software tools. On several benchmark tests, it approaches the performance of the larger GPT-5.4 model. GPT-5.4 nano is the smaller and cheaper of the two, aimed at simpler tasks such as classifying text, extracting data, and ranking results.
OpenAI says these models are built for situations where slow responses hurt the user experience — for example, coding assistants, systems that read screenshots, or applications that analyze images in real time.
One key use case involves combining models of different sizes. A larger model can handle planning and coordination, while smaller models like GPT-5.4 mini execute specific subtasks in parallel. OpenAI uses this approach in its Codex coding tool.
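The planner/worker pattern described above can be sketched in a few lines. This is a minimal illustration, not OpenAI's actual implementation: the two functions stand in for calls to a large planning model and a small execution model, and all names here are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def plan_tasks(request: str) -> list[str]:
    # Stand-in for the larger model: break the request into independent subtasks.
    return [f"subtask {i}: {request}" for i in range(3)]

def run_subtask(task: str) -> str:
    # Stand-in for a smaller, faster model executing one subtask.
    return f"done: {task}"

def handle(request: str) -> list[str]:
    subtasks = plan_tasks(request)      # planning step (large model)
    with ThreadPoolExecutor() as pool:  # fan the subtasks out in parallel
        return list(pool.map(run_subtask, subtasks))

print(handle("refactor module"))
```

Because the small models only receive narrow subtasks, latency and cost are dominated by the cheap parallel calls rather than the single planning call.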
Pricing and availability:
- GPT-5.4 mini: $0.75 per million input tokens, $4.50 per million output tokens; available via API, Codex, and ChatGPT
- GPT-5.4 nano: $0.20 per million input tokens, $1.25 per million output tokens; available via API only
- For comparison, GPT-5.4 costs $2.50 per million input tokens and $15.00 per million output tokens
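Per-million-token pricing translates into per-request cost with simple arithmetic. The sketch below encodes the figures from the list above; the dictionary keys are informal labels chosen for this example, not official API model identifiers.

```python
# Prices in USD per million tokens, from the list above.
PRICES = {
    "gpt-5.4":      {"input": 2.50, "output": 15.00},
    "gpt-5.4-mini": {"input": 0.75, "output": 4.50},
    "gpt-5.4-nano": {"input": 0.20, "output": 1.25},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    # Cost = tokens × (price per million tokens) / 1,000,000, summed over input and output.
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: 10,000 input tokens and 2,000 output tokens on GPT-5.4 mini
print(round(request_cost("gpt-5.4-mini", 10_000, 2_000), 4))  # → 0.0165
```

The same request costs $0.0045 on GPT-5.4 nano and $0.055 on full GPT-5.4, which is where the savings add up for high-volume workloads.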