Last night, OpenAI inadvertently exposed an internal staging environment in Codex: the model picker briefly displayed internal codenames and unreleased models such as GPT-5.5 and oai-2.1 before the entries were pulled. The slip suggests OpenAI's model development is further along than its public announcements indicate. Six weeks ago, Sam Altman argued that current models are already intelligent enough to explore new architectures beyond transformers, describing a self-reinforcing cycle in which models help design their successors. This leak hints that OpenAI's internal research may already be running that flywheel.

OpenAI: OpenAI is a leading AI research organization that builds frontier models and products, including ChatGPT, Codex, and enterprise tools, in pursuit of artificial general intelligence. It recently announced initiatives to accelerate enterprise AI adoption across industries with specialized solutions. In this incident, an internal staging environment was accidentally deployed to production in Codex, exposing unreleased models such as GPT-5.5 and oai-2.1 to Pro users.
Sam Altman: Sam Altman is the CEO of OpenAI, directing its research into advanced AI architectures and its product strategy. He recently argued that current models possess the intelligence to discover architectures beyond transformers, much as transformers surpassed LSTMs, and framed AGI as a precursor to still greater advances. His comments precede and contextualize the leaked Codex models, suggesting OpenAI's internal work may already be realizing this self-accelerating research flywheel.

```json
{
  "Research Momentum": "The leak indicates OpenAI's internal model development is progressing faster than public announcements suggest.",
  "Accidental Exposure": "OpenAI's Codex model picker briefly displayed internal codenames and advanced entries like GPT-5.5 and oai-2.1 during a staging push to production.",
  "Architecture Thesis": "Sam Altman recently predicted that AI models are now capable of researching post-transformer architectures, creating a self-accelerating cycle between models and designs."
}
```