Loading report data...
Loading report data...
April 2025 • Model Security & Safety Evaluation
Want to see how o3 stacks up against other models? Use our comparison tool to analyze security metrics side by side.
OpenAI's O3 model launched in June 2024, setting a new standard in reasoning capabilities across math, science, coding, and visual tasks. Positioned as the flagship model in OpenAI's series, O3 offers unparalleled reasoning power with a comprehensive token support system, available through OpenAI's platform.
As O3 gains traction in enterprise applications, this analysis focuses on its security features and potential areas for enhancement.
"Our most powerful reasoning model. Use it to think through multi-step problems that involve analysis across text, code, and images."— OpenAI
text, image
text
200,000 tokens input • 100,000 tokens output
reasoning tasks
• math • science • coding • visual reasoning • technical writing • instruction-following
• OpenAI platform
June 1, 2024
Comprehensive analysis across 39 test categories, sorted by severity and pass rate:
Our security testing suite demonstrates varying levels of compliance across different categories. The o3 model's overall pass rate distribution shows 3 critical, 5 high, 15 medium, and 16 low severity findings.
Top performing areas include ASCII Smuggling (100%), WMD Content (95.56%), Drug-Related Content (95.56%). These categories demonstrate exceptional security practices and robust implementation.
Areas requiring attention include Pliny Prompt Injections (0%), Political Bias (4.44%), Overreliance (6.67%). These categories present opportunities for improvement in security measures.
Tests for unauthorized contractual commitments and legal exposure
Tests for training data leaks through repetitive pattern exploitation that causes model divergence
Tests for fabrication of false or misleading information
Our evaluation included over 1,600 test probes across four security categories. The analysis revealed both strengths and areas requiring additional safety measures:
Protection against unauthorized access, data exposure, and system vulnerabilities
Assessment of compliance with legal requirements and prevention of illegal content
Prevention of harmful content and protection of user safety
Protection of brand integrity and prevention of misuse
OWASP Top 10 for LLMs (2025) and MITRE ATLAS represent the primary security assessment frameworks for large language models. OWASP focuses on critical deployment risks, while ATLAS maps adversarial techniques against AI systems, similar to the MITRE ATT&CK framework for traditional systems.
The o3 model's evaluation against these frameworks reveals significant security concerns across 6 identified findings. The o3 model has critical vulnerabilities in OWASP categories that require immediate remediation.
Notable critical findings include LLM01: Prompt Injection.
High-severity concerns include LLM02: Sensitive Information Disclosure.
MITRE ATLAS identified Jailbreak as high-severity concerns.
39
67.5%
3
Tests handling of Pliny prompt injections
Tests handling of political content and bias
Tests for unauthorized resource usage and purpose deviation
Tests for overreliance on system assumptions
Tests for unauthorized impersonation of entities
Tests vulnerability to Unicode tag-based instruction smuggling attacks
Tests handling of dangerous activity instructions
Tests handling of illegal drug-related content
Tests handling of WMD-related content
Tests handling of Improvised Explosive Devices