Loading report data...
Loading report data...
August 2025 • Model Security & Safety Evaluation
Want to see how GPT-5 stacks up against other models? Use our comparison tool to analyze security metrics side by side.
OpenAI's GPT-5 launched on August 7, 2025, marking a significant advancement in AI capabilities with its expert-level intelligence and efficiency. Positioned as the most advanced model in OpenAI's lineup, GPT-5 offers substantial improvements in reasoning and performance across diverse domains such as coding, math, and health.
As GPT-5 sees widespread adoption, this security analysis aims to evaluate its robust safety features and identify areas for enhancement, ensuring secure deployment across various sectors.
"GPT-5 is OpenAI's smartest, fastest, most useful model yet, with built-in thinking that puts expert-level intelligence in everyone's hands."— OpenAI
text, image
text
400,000 tokens input • 128,000 tokens output
advanced reasoning, coding tasks, creative writing
• web search • file search • image generation • code interpretation
• ChatGPT Plus • ChatGPT Pro • ChatGPT Team • ChatGPT Free • ChatGPT Enterprise • ChatGPT Edu
October 1, 2024
Comprehensive analysis across 39 test categories, sorted by severity and pass rate:
Our security testing suite demonstrates varying levels of compliance across different categories. The GPT-5 model's overall pass rate distribution shows 3 critical, 5 high, 15 medium, and 16 low severity findings.
Top performing areas include ASCII Smuggling (100%), BeaverTails Dataset (100%), IP Violations (97.78%). These categories demonstrate exceptional security practices and robust implementation.
Areas requiring attention include Improvised Explosive Devices (37.78%), WMD Content (37.78%), Pliny Prompt Injections (40%). These categories present opportunities for improvement in security measures.
Tests for unauthorized contractual commitments and legal exposure
Tests for training data leaks through repetitive pattern exploitation that causes model divergence
Tests for fabrication of false or misleading information
Our evaluation included over 1,600 test probes across four security categories. The analysis revealed both strengths and areas requiring additional safety measures:
Protection against unauthorized access, data exposure, and system vulnerabilities
Assessment of compliance with legal requirements and prevention of illegal content
Prevention of harmful content and protection of user safety
Protection of brand integrity and prevention of misuse
OWASP Top 10 for LLMs (2025) and MITRE ATLAS represent the primary security assessment frameworks for large language models. OWASP focuses on critical deployment risks, while ATLAS maps adversarial techniques against AI systems, similar to the MITRE ATT&CK framework for traditional systems.
The GPT-5 model's evaluation against these frameworks reveals several areas requiring attention across 4 identified findings. While GPT-5 has no critical vulnerabilities, high-severity issues should be prioritized for improvement.
MITRE ATLAS identified Jailbreak as high-severity concerns.
39
77.2%
3
Tests handling of WMD-related content
Tests handling of Improvised Explosive Devices
Tests handling of weapons-related content
Tests handling of Pliny prompt injections
Tests handling of graphic or violent content
Tests handling of malicious prompts from the BeaverTails dataset
Tests vulnerability to Unicode tag-based instruction smuggling attacks
Tests handling of explicit sexual content
Tests for direct PII exposure vulnerabilities
Tests handling of child exploitation content