12/01 2025
Yesterday, OpenAI introduced its latest frontier agentic coding model, GPT-5.1-Codex-Max.
Built on an updated version of OpenAI's foundational reasoning model, GPT-5.1-Codex-Max was trained on agentic tasks spanning software engineering, mathematics, research, and other domains.
GPT-5.1-Codex-Max is faster, more intelligent, and better at working with code at every stage of the development cycle. It is OpenAI's first model trained natively to operate across multiple context windows through a process called compaction, allowing it to work coherently over millions of tokens in a single task. This unlocks large-scale project refactoring, deep debugging, and long-running agent loops.
GPT-5.1-Codex-Max was trained on practical software engineering tasks, including creating pull requests (PRs), conducting code reviews, front-end coding, and Q&A, and it achieves outstanding results on many frontier coding evaluations.
In real-world scenarios, GPT-5.1-Codex-Max also operates effectively in Windows environments and collaborates better with the Codex CLI.
Thanks to more efficient reasoning, GPT-5.1-Codex-Max is markedly more token-efficient: at medium reasoning effort it outperforms GPT-5.1-Codex on SWE-bench Verified while using roughly 30% fewer tokens.
For tasks that are not latency-sensitive, GPT-5.1-Codex-Max introduces a new Extra High (xhigh) reasoning effort, which thinks for longer to produce better answers.
It is also more cost-effective, producing front-end designs of comparable quality to GPT-5.1-Codex at lower cost.
Compaction enables GPT-5.1-Codex-Max to complete tasks that previously ran up against context-window limits, such as complex refactors and long-running agent loops, by pruning session history while preserving the most important context.
In Codex applications, as GPT-5.1-Codex-Max nears its context-window limit, it automatically compacts the session to free up a fresh context window, repeating the process until the task is complete.
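The loop described above can be sketched roughly as follows. This is a minimal illustration, not OpenAI's actual implementation: the names `summarize`, `compact`, `agent_loop`, and the token threshold are all hypothetical stand-ins.

```python
# Illustrative sketch of an agent loop with context compaction.
# All names and thresholds here are hypothetical; OpenAI has not
# published the details of its compaction mechanism.

MAX_TOKENS = 100_000   # assumed context-window budget
KEEP_RECENT = 4        # recent turns kept verbatim after compaction

def count_tokens(history):
    # Crude stand-in for a real tokenizer: ~1 token per 4 characters.
    return sum(len(turn) for turn in history) // 4

def summarize(turns):
    # Placeholder: a real system would ask the model itself to
    # condense the earlier conversation.
    return "[summary of %d earlier turns]" % len(turns)

def compact(history):
    """Replace older turns with a summary, keeping recent turns intact."""
    if len(history) <= KEEP_RECENT:
        return history
    older, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
    return [summarize(older)] + recent

def agent_loop(task, steps):
    history = [task]
    for step in steps:
        history.append(step)
        if count_tokens(history) > MAX_TOKENS:
            history = compact(history)  # free up a fresh context window
    return history
```

The key property is that the working history never grows without bound: whenever it approaches the budget, older turns collapse into a summary while the most recent turns survive verbatim.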
OpenAI says GPT-5.1-Codex-Max can work autonomously for more than 24 hours at a stretch, continuously iterating on its implementation, fixing failing tests, and ultimately delivering a successful result.
Because compaction lets the model work coherently across multiple context windows, it achieves stronger results in long-horizon domains such as sustained coding sessions and cybersecurity.
GPT-5.1-Codex-Max is OpenAI's most capable cybersecurity model deployed to date. OpenAI says it is committed to meeting stringent cybersecurity capability standards, strengthening defenses in the cyber domain, and supporting defenders through initiatives such as Aardvark.
With the release of GPT-5-Codex, OpenAI deployed dedicated cybersecurity monitoring to detect and disrupt malicious activity. It has observed no significant increase in large-scale abuse, and suspicious activity is routed to policy-monitoring systems for review.
Codex's file writes are confined to its workspace, and network access is disabled by default unless a developer explicitly enables it. To aid code review, Codex produces terminal logs and cites its tool calls and test results.
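As a rough illustration of this sandbox posture, a Codex CLI configuration might look like the following. The key names are assumptions based on publicly documented Codex CLI settings; check the current documentation before relying on them.

```toml
# Hypothetical ~/.codex/config.toml sketch; key names are assumptions.
sandbox_mode = "workspace-write"   # file writes limited to the workspace

[sandbox_workspace_write]
network_access = false             # network stays off unless opted in
```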
GPT-5.1-Codex-Max, coupled with continuous upgrades to OpenAI's CLI, IDE extensions, cloud integration, and code review tools, significantly boosts engineering efficiency.
For example, 95% of OpenAI's internal engineers now use Codex weekly, and they submit roughly 70% more pull requests as a result.
GPT-5.1-Codex-Max is now available in Codex across the CLI, IDE extensions, cloud, and code review, with API access coming soon.
References:
https://openai.com/index/gpt-5-1-codex-max/