AI Powerhouse Releases Its First Open-Weight Models in 6 Years

OpenAI released two open-weight AI reasoning models, marking its first open-weight release since GPT-2 six years ago.
The launch of gpt-oss-120b and gpt-oss-20b reintroduces the company to an open-source ecosystem altered by competition from Chinese labs and shifting U.S. policy preferences.
The models are designed specifically for reasoning tasks and are licensed under Apache 2.0, one of the most permissive open licenses. This lets developers and enterprises use, modify and monetize the models with minimal restrictions, a departure from OpenAI’s recent API-first, closed-weight approach.
OpenAI said the models are available on repositories including Hugging Face and support inference frameworks such as Transformers, PyTorch, Triton, vLLM, Ollama and LM Studio.
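For readers wondering what that looks like in practice, a minimal loading sketch via Hugging Face Transformers might resemble the following. The hub id "openai/gpt-oss-20b" and the chat-style pipeline call are assumptions based on standard Hugging Face conventions, not details confirmed in OpenAI’s announcement.

```python
# A minimal sketch of running gpt-oss-20b through Hugging Face Transformers.
# Assumptions: the hub id "openai/gpt-oss-20b" exists and a recent transformers
# release with chat-message support in the text-generation pipeline is installed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",  # keep the checkpoint's native precision
    device_map="auto",   # spread weights across available devices
)

messages = [{"role": "user", "content": "Explain mixture-of-experts briefly."}]
print(generator(messages, max_new_tokens=200)[0]["generated_text"])
```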
The release comes as Chinese labs such as DeepSeek, Moonshot AI and Alibaba’s Qwen gain traction with high-performing open models. Meta’s once-leading Llama series has struggled to keep pace. The release also follows calls from the Trump administration for U.S. firms to open-source more AI technologies (see: Trump’s AI Plan Sparks Industry Praise and Warnings of Risk).
GPT-OSS is available in two model sizes. gpt-oss-120b has 117 billion total parameters and is designed to run on a single 80 GB Nvidia H100 chip. gpt-oss-20b, with 21 billion parameters, is built to run on consumer-grade hardware with 16 GB of memory. Both models use a mixture-of-experts architecture that activates only a subset of parameters per token (5.1 billion for gpt-oss-120b and 3.6 billion for gpt-oss-20b), allowing for more efficient generation.
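A back-of-the-envelope check shows why those hardware targets are plausible: total parameter count, not the active subset, dominates the weight-memory footprint. The roughly 4-bit quantization assumed below is an illustration, not a figure from OpenAI’s announcement.

```python
# Rough weight-memory arithmetic, assuming ~4-bit quantized weights.
# Illustrative only, not OpenAI's published figures.
def approx_weight_gb(params_billions: float, bits_per_param: float = 4.0) -> float:
    """Approximate weight storage in gigabytes."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

print(f"gpt-oss-120b: ~{approx_weight_gb(117):.0f} GB")  # ~59 GB, fits one 80 GB H100
print(f"gpt-oss-20b:  ~{approx_weight_gb(21):.0f} GB")   # ~11 GB, fits 16 GB hardware
```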
OpenAI said the models were trained primarily on English-language text with an emphasis on STEM, coding and general knowledge. Unlike the GPT-4o model, GPT-OSS does not support multimodal input such as images or audio. The company applied reinforcement learning during post-training to improve the models’ ability to reason through complex prompts, a technique similar to the one used in developing the proprietary o-series.
OpenAI did not release full benchmark results for comparison with proprietary models but said GPT-OSS performs well on reasoning tasks and is optimized for integration with AI agents capable of chaining reasoning with tool use, such as web search or code execution.
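In schematic form, “chaining reasoning with tool use” typically means a loop in which the model either answers or requests a tool call, with results fed back as context. Every name below (call_model, run_tool) is a hypothetical stand-in, not part of any OpenAI API, and the backends are toy stubs.

```python
import json

# Hypothetical stand-ins: call_model would wrap a gpt-oss inference backend;
# run_tool would dispatch to a real tool such as web search or a code sandbox.
def call_model(messages):
    if any(m["role"] == "tool" for m in messages):
        return {"content": "2 + 2 = 4", "tool_call": None}
    return {"content": None, "tool_call": {"name": "calculator", "arguments": "2+2"}}

def run_tool(name, arguments):
    return {"tool": name, "result": eval(arguments)}  # toy calculator only

def agent_loop(question, max_steps=5):
    messages = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if reply["tool_call"] is None:
            return reply["content"]               # model answered directly
        result = run_tool(**reply["tool_call"])   # execute the requested tool
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "step limit reached"

print(agent_loop("What is 2 + 2?"))
```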
OpenAI did not publish the training datasets used to build GPT-OSS, citing legal considerations. OpenAI, like other AI developers, faces ongoing litigation over whether copyrighted materials were used without consent during training. This puts GPT-OSS in contrast with open models from organizations such as Ai2 that have been more transparent about their training data.
OpenAI said it implemented safety precautions during development to reduce misuse risks. It filtered training data to remove harmful content, including materials related to chemical, biological, radiological and nuclear threats. It also said that safety evaluations conducted internally and by third parties found no evidence that the models could be used to mount cyberattacks or develop biothreats.
The models are text-only and do not support image or voice generation. But they are capable of powering tool-using agents and integrating into cloud-based pipelines, including routing queries to larger proprietary models when needed.
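One way such local-to-hosted routing might look is a simple heuristic gate: serve easy queries with the local open-weight model and escalate likely-hard ones to a hosted model. This is an illustrative sketch, not OpenAI’s design, and both backends are toy stand-ins.

```python
# Illustrative routing heuristic (not OpenAI's design): answer locally with the
# open-weight model, escalate past a crude difficulty cue to a hosted model.
def route(query: str, local_model, hosted_model) -> str:
    needs_escalation = len(query) > 2000 or "prove" in query.lower()
    target = hosted_model if needs_escalation else local_model
    return target(query)

# Toy backends standing in for gpt-oss and a proprietary API.
local = lambda q: f"[local gpt-oss] {q[:40]}"
hosted = lambda q: f"[hosted frontier model] {q[:40]}"
print(route("Summarize this memo.", local, hosted))
print(route("Prove the statement rigorously.", local, hosted))
```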
OpenAI said the models are state of the art among open-weight offerings, but they come with trade-offs. The company has previously acknowledged that smaller models may hallucinate more frequently than their larger counterparts due to having less world knowledge.
OpenAI CEO Sam Altman hinted that this may not be the company’s only release this week: “Big upgrade later this week,” he wrote, prompting speculation about additional open model releases or the long-awaited GPT-5.