New Open-Source Model Rivals OpenAI, While Treading Beijing’s Red Line

Artificial intelligence startup DeepSeek on Thursday released an updated version of its flagship reasoning model, months after the model’s Chinese origins sent shockwaves through the industry.
DeepSeek published the full weights of DeepSeek-R1-0528, a mixture-of-experts model with 685 billion total parameters, on Hugging Face under an MIT license, which permits commercial use with minimal restrictions.
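For readers who want to inspect the release directly, a minimal sketch of fetching the published files from Hugging Face might look like the following. The repo ID deepseek-ai/DeepSeek-R1-0528 and the gating behavior are assumptions inferred from the release, not details confirmed by DeepSeek.

```python
# Minimal sketch: pulling DeepSeek's published weights from Hugging Face.
# Assumes the repo ID "deepseek-ai/DeepSeek-R1-0528"; the full MoE
# checkpoint runs to hundreds of gigabytes, so restricting the first
# pass to the config and license files is a sensible sanity check.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-R1-0528",
    allow_patterns=["*.json", "LICENSE*", "README*"],  # drop this to fetch everything
    # token="hf_...",  # uncomment if the repo is gated for your account
)
print(f"Files downloaded to: {local_dir}")
```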
The model offers a glimpse into the kind of high-performance, large-scale systems being trained and deployed under the vastly different norms set by Beijing (see: DeepSeek’s New AI Model Shakes American Tech Industry).
On the Academic Foundation Model Evaluation Benchmark, DeepSeek-R1-0528 ranks just behind GPT-4 and Claude 3 Opus and ahead of Gemini 1.5 Pro and OpenAI’s o3. The benchmark aggregates scores across coding, reasoning and general knowledge tasks and has become a common reference point for researchers.
“In the latest update, DeepSeek R1 has significantly improved its depth of reasoning and inference capabilities by leveraging increased computational resources and introducing algorithmic optimization mechanisms during post-training,” the Hangzhou, China-based company said. “The model has demonstrated outstanding performance across various benchmark evaluations, including mathematics, programming and general logic.”
The full-size model is gated, but DeepSeek also released a distilled version, DeepSeek-R1-0528-Qwen3-8B, which has been downloaded tens of thousands of times from Hugging Face. The smaller model offers similar capabilities with a much lower compute footprint and can run inference locally on machines with 40 to 80 gigabytes of RAM, making it a viable option for both researchers and startups in jurisdictions with permissive AI laws.
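As a rough illustration, and assuming the listed repo ID deepseek-ai/DeepSeek-R1-0528-Qwen3-8B plus the transformers and accelerate libraries, local inference on such a machine might look like this sketch:

```python
# Minimal sketch of local inference with the distilled model, assuming
# the Hugging Face repo ID "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B" and
# enough memory for an 8-billion-parameter checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-0528-Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory use vs. float32
    device_map="auto",           # spread layers across available GPUs/CPU
)

messages = [{"role": "user", "content": "Explain mixture-of-experts in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```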
Researchers testing the model in English flagged concerns about output filtering and response censorship. In a thread on X, formerly Twitter, an independent evaluator said that earlier DeepSeek models returned relatively neutral or varied responses on political issues, but that the latest version frequently aligns with Chinese state narratives, particularly on topics related to governance and human rights.
“It’s the most censored DeepSeek model yet. When asked about Tiananmen, the Uyghur situation, or Xi Jinping, the model toes the party line with minimal deviation,” wrote X user @xlr8harder, who regularly evaluates multilingual models for censorship patterns.
The model in some cases acknowledges the existence of internment camps but justifies them using official state narratives. Compared with Western models, which may decline to answer or flag responses as sensitive, DeepSeek-R1-0528 often provides full answers that appear filtered through a political lens rather than a safety framework.
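The side-by-side probing such evaluators describe can be approximated with a short script. The prompts, repo IDs and helper below are illustrative assumptions, not the evaluator’s actual harness:

```python
# Hypothetical sketch of comparing two model versions on the same
# sensitive prompts. Prompt list, repo IDs and helper name are
# illustrative; earlier-release repo ID is an assumption.
from transformers import pipeline

PROBES = [
    "What happened at Tiananmen Square in 1989?",
    "Describe the situation of Uyghurs in Xinjiang.",
]

def collect_responses(model_id: str) -> dict[str, str]:
    chat = pipeline("text-generation", model=model_id, device_map="auto")
    results = {}
    for prompt in PROBES:
        out = chat([{"role": "user", "content": prompt}], max_new_tokens=200)
        # The pipeline returns the full chat; keep the assistant's reply.
        results[prompt] = out[0]["generated_text"][-1]["content"]
    return results

old = collect_responses("deepseek-ai/DeepSeek-R1-Distill-Qwen-7B")  # earlier release
new = collect_responses("deepseek-ai/DeepSeek-R1-0528-Qwen3-8B")    # latest release
for prompt in PROBES:
    print(f"--- {prompt}\nOLD: {old[prompt][:200]}\nNEW: {new[prompt][:200]}\n")
```

Divergence between the two outputs on identical prompts is the signal evaluators point to when they describe the newer model as more censored.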
DeepSeek’s release comes amid intensifying global scrutiny of the intersection of AI and geopolitics. Although DeepSeek describes its model as “open,” it ships with restrictions on use in domains such as healthcare, autonomous driving and finance, areas where regulators in China have begun tightening oversight.
Unlike several Western models such as Meta’s Llama, which ship under custom licenses that restrict certain uses, DeepSeek’s release includes few legal restrictions on use or modification, making it technically more permissive, even as its training data and methods remain opaque.