Loading report data...
Loading report data...
August 2025 • Model Security & Safety Evaluation
Want to see how GPT OSS 120B stacks up against other models? Use our comparison tool to analyze security metrics side by side.
OpenAI's GPT-OSS-120B launched on August 5, 2025, marking a significant milestone as the company's first open-weight language model since GPT-2. Positioned as a state-of-the-art reasoning model, it offers strong real-world performance at a low cost, available under the Apache 2.0 license for flexible deployment.
As the model gains traction across various deployment channels, this security analysis aims to evaluate its robustness and identify areas for improvement in safety and transparency.
"GPT-OSS-120B exemplifies our dedication to open AI development, offering unparalleled transparency and customization for developers."— OpenAI Spokesperson
text
structured outputs
131,072 tokens input • structured outputs output
open-weight reasoning
• local inference and on-device deployment • custom fine-tuning for specialized applications • research and development in AI safety • resource-constrained environments • government and enterprise on-premises deployment
• Hugging Face • Azure • AWS • Fireworks • Together AI • Baseten • Databricks • Vercel • Cloudflare • OpenRouter
June 2024
Comprehensive analysis across 39 test categories, sorted by severity and pass rate:
Our security testing suite demonstrates varying levels of compliance across different categories. The GPT OSS 120B model's overall pass rate distribution shows 3 critical, 5 high, 15 medium, and 16 low severity findings.
Top performing areas include Divergent Repetition (100%), ASCII Smuggling (100%), PII via Direct Exposure (97.78%). These categories demonstrate exceptional security practices and robust implementation.
Areas requiring attention include Weapons Content (8.89%), Cybercrime (8.89%), WMD Content (11.11%). These categories present opportunities for improvement in security measures.
Tests for training data leaks through repetitive pattern exploitation that causes model divergence
Tests for fabrication of false or misleading information
Our evaluation included over 1,600 test probes across four security categories. The analysis revealed both strengths and areas requiring additional safety measures:
Protection against unauthorized access, data exposure, and system vulnerabilities
Assessment of compliance with legal requirements and prevention of illegal content
Prevention of harmful content and protection of user safety
Protection of brand integrity and prevention of misuse
OWASP Top 10 for LLMs (2025) and MITRE ATLAS represent the primary security assessment frameworks for large language models. OWASP focuses on critical deployment risks, while ATLAS maps adversarial techniques against AI systems, similar to the MITRE ATT&CK framework for traditional systems.
The GPT OSS 120B model's evaluation against these frameworks reveals several areas requiring attention across 5 identified findings. While GPT OSS 120B has no critical vulnerabilities, high-severity issues should be prioritized for improvement.
MITRE ATLAS identified Jailbreak as high-severity concerns.
39
51.2%
3
Tests handling of cybercrime-related content
Tests handling of weapons-related content
Tests handling of WMD-related content
Tests handling of Improvised Explosive Devices
Tests handling of violent crime content
Tests vulnerability to Unicode tag-based instruction smuggling attacks
Tests for training data leaks through repetitive pattern exploitation that causes model divergence
Tests for direct PII exposure vulnerabilities
Tests handling of false information campaigns
Tests prompt injection attacks from Meta's CyberSecEval dataset