Claude 4 Sonnet Security Report
May 2025 • Model Security & Safety Evaluation
Compare Model Performance
Want to see how Claude 4 Sonnet stacks up against other models? Use our comparison tool to analyze security metrics side by side.
Anthropic's Claude 4 Sonnet launched on May 22, 2025, marking a significant upgrade over its predecessor, Claude Sonnet 3.7, with superior coding and reasoning capabilities. Available through Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI, this model enhances performance and efficiency.
As industry adoption grows, this analysis aims to evaluate the model's security features and identify areas for potential improvement.
"Claude Sonnet 4 significantly improves on Sonnet 3.7's industry-leading capabilities, excelling in coding with a state-of-the-art 72.7% on SWE-bench."— Anthropic
About This Evaluation
Core Capabilities
text, code
text, code
1 million tokens input • 1 million tokens output
Use Cases
coding, reasoning
• software development • problem-solving • agentic tasks
Availability
• Anthropic API • Amazon Bedrock • Google Cloud's Vertex AI
January 2025
Security Test Results
Comprehensive analysis across 39 test categories, sorted by severity and pass rate:
Performance Overview
Our security testing suite demonstrates varying levels of compliance across different categories. The Claude 4 Sonnet model's overall pass rate distribution shows 3 critical, 5 high, 15 medium, and 16 low severity findings.
Top performing areas include Explicit Content (100%), Divergent Repetition (100%), ASCII Smuggling (100%). These categories demonstrate exceptional security practices and robust implementation.
Areas requiring attention include Pliny Prompt Injections (20%), Graphic Content (51.11%), CyberSecEval Dataset (68.89%). These categories present opportunities for improvement in security measures.
Our evaluation included over 1,600 test probes across four security categories. The analysis revealed both strengths and areas requiring additional safety measures:
Security & Access Control
97% Pass RateProtection against unauthorized access, data exposure, and system vulnerabilities
Compliance & Legal
93% Pass RateAssessment of compliance with legal requirements and prevention of illegal content
Trust & Safety
90% Pass RatePrevention of harmful content and protection of user safety
Brand
88% Pass RateProtection of brand integrity and prevention of misuse
OWASP Top 10 for LLMs (2025) and MITRE ATLAS represent the primary security assessment frameworks for large language models. OWASP focuses on critical deployment risks, while ATLAS maps adversarial techniques against AI systems, similar to the MITRE ATT&CK framework for traditional systems.
The Claude 4 Sonnet model's evaluation against these frameworks reveals several areas requiring attention across 5 identified findings. While Claude 4 Sonnet has no critical vulnerabilities, high-severity issues should be prioritized for improvement.
MITRE ATLAS identified Jailbreak as high-severity concerns.
OWASP Top 10 for LLMs 2025 - Claude 4 Sonnet Assessment
Moderate Risk Areas
MITRE ATLAS Findings for Claude 4 Sonnet
High Severity Findings
Moderate Severity Findings
Total Test Categories
39
Average Pass Rate
90.0%
Critical Issues
3
Severity Distribution
Performance Summary
Security & Access Control
Compliance & Legal
Trust & Safety
Brand
Worst Performing Tests
Pliny Prompt Injections
Tests handling of Pliny prompt injections
Graphic Content
Tests handling of graphic or violent content
CyberSecEval Dataset
Tests prompt injection attacks from Meta's CyberSecEval dataset
Overreliance
Tests for overreliance on system assumptions
Dangerous Activity Content
Tests handling of dangerous activity instructions
Best Performing Tests
PII via Direct Exposure
Tests for direct PII exposure vulnerabilities
Unauthorized Advice
Tests handling of unauthorized professional advice
Child Exploitation
Tests handling of child exploitation content
Self-Harm
Tests handling of self-harm and suicide content
Methamphetamine Content
Tests handling of methamphetamine-related content
Key Observations
Strengths
- Strong performance in handling PII via Direct Exposure (100% pass rate)
- Effective prevention in Unauthorized Advice (100%)
- Consistent performance across critical security tests
Areas for Improvement
- Low pass rate (20%) for Pliny Prompt Injections
- 3 critical severity issues identified
- Average pass rate of 90.0% indicates room for improvement