OpenAI Unveils New Lightweight Models: GPT-5.4 Mini and GPT-5.4 Nano
OpenAI is expanding its lineup of compact models with GPT-5.4 Mini and GPT-5.4 Nano. According to the company, these are its most capable lightweight models to date, tuned for applications such as coding, automation, and multi-agent workflows.
The update also brings better model performance to ChatGPT's budget tiers, including the Free and Go plans, where users can now access the Mini model via the Thinking option.
GPT-5.4 Mini and GPT-5.4 Nano arrive shortly after the flagship GPT-5.4, which is aimed at professional and developer tasks. While the full GPT-5.4 model is built for complex workflows, the new smaller variants target scenarios where speed and efficiency matter most.
GPT-5.4 Mini replaces GPT-5 Mini, with improvements in coding, logical reasoning, tool use, and multimodal understanding. It is reported to run more than twice as fast as its predecessor while approaching GPT-5.4's scores on a range of benchmarks.
That efficiency makes it well suited to applications where both speed and accuracy matter, such as coding assistants, chatbots, and real-time AI tools.
GPT-5.4 Nano, by contrast, is the smallest model in the GPT-5.4 family, designed for simple but high-frequency tasks.
OpenAI says Nano is well suited to classification, ranking, data extraction, and background automation.
In setups where multiple AI agents work together, Nano can handle small auxiliary tasks, freeing larger models to focus on planning and reasoning.
Both Mini and Nano are optimized for what OpenAI calls sub-agent workflows, in which several AI models coordinate on a single task: a larger model might plan the overall work, while the smaller models carry out specific steps such as file searching, code verification, or data retrieval.
OpenAI argues that this approach not only cuts costs but also keeps response times low, which matters in developer tools, coding assistants, and enterprise applications.
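The sub-agent pattern described above can be sketched in plain Python. This is an illustrative stub only: the planner and worker functions stand in for real model calls, and the decomposition logic is an assumption about how such a workflow might be wired, not OpenAI's implementation.

```python
# Minimal sketch of a sub-agent workflow: a "planner" (standing in for a
# larger model) breaks a task into subtasks, and a cheap "worker" (standing
# in for a small model like Nano) handles each one. All functions here are
# local stubs; no model API is called.

def plan(task: str) -> list[dict]:
    # Stand-in for a flagship-model call that decomposes the task.
    return [
        {"kind": "search", "arg": task},
        {"kind": "extract", "arg": task},
    ]

def nano_worker(subtask: dict) -> str:
    # Stand-in for a small-model call handling a narrow, high-frequency job.
    return f"{subtask['kind']} done for: {subtask['arg']}"

def run(task: str) -> list[str]:
    # Planner delegates each subtask to the cheap worker.
    return [nano_worker(st) for st in plan(task)]

print(run("summarize repo issues"))
```

The design point is the division of labor: the expensive model is invoked once for planning, while the many small, repetitive steps go to the cheaper model.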
The company adds that GPT-5.4 Mini outperforms GPT-5 Mini at similar latency and, in some cases, approaches the capabilities of the flagship GPT-5.4 while running faster.
That makes Mini a good fit for coding tools and real-time assistant applications that need both speed and accuracy.
Availability
GPT-5.4 Mini is now live in ChatGPT, the API, and Codex, while Nano is aimed primarily at developers using the API. In ChatGPT, users on the Free and Go plans can access the Mini model via the Thinking option in the tools menu.
For developers, the API versions of both Mini and Nano support text and image inputs, tool calling, function execution, file search, web search, and computer use.
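The article does not document the API surface for these models. As an illustration only, a request body in the shape of OpenAI's existing Chat Completions API with a function tool might look like the following; the model name "gpt-5.4-mini" and the "search_files" tool are assumptions, and no request is actually sent.

```python
import json

# Illustrative request body in the shape of OpenAI's Chat Completions API.
# "gpt-5.4-mini" is a hypothetical model identifier inferred from the
# article's naming, and "search_files" is a made-up example tool.
request = {
    "model": "gpt-5.4-mini",
    "messages": [
        {"role": "user", "content": "Classify this bug report as UI, backend, or docs."}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "search_files",  # hypothetical tool
                "description": "Search project files for a pattern.",
                "parameters": {
                    "type": "object",
                    "properties": {"pattern": {"type": "string"}},
                    "required": ["pattern"],
                },
            },
        }
    ],
}
print(json.dumps(request, indent=2))
```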
OpenAI says Mini consumes roughly 30% of the GPT-5.4 quota in Codex, letting simpler tasks run at lower cost.
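Taken at face value, that 30% figure means a fixed quota budget covers roughly 3.3x as many Mini tasks as flagship tasks. A quick back-of-envelope check, where the budget size is an arbitrary illustrative number:

```python
# Back-of-envelope: if one Mini task consumes ~30% of the quota of one
# GPT-5.4 task, the same budget covers about 1/0.30 ≈ 3.3x as many Mini tasks.
# The 0.30 factor comes from the article; the budget of 100 is illustrative.

flagship_tasks_per_budget = 100       # assume a budget worth 100 flagship tasks
mini_cost_factor = 0.30               # Mini's per-task quota usage vs. GPT-5.4
mini_tasks_per_budget = flagship_tasks_per_budget / mini_cost_factor
print(round(mini_tasks_per_budget))   # prints 333
```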

OpenAI has not yet disclosed India-specific pricing, but it says Nano is the cheapest model in the GPT-5.4 range, with Mini priced below the flagship GPT-5.4.
For developers in India using the API, that should mean lower running costs, while ChatGPT users on free and budget plans get better performance without needing to upgrade.
Source link: Indiatoday.in.