Claude 4 Sonnet Security Report

May 2025 • Model Security & Safety Evaluation

Released
May 22, 2025

Compare Model Performance

Want to see how Claude 4 Sonnet stacks up against other models? Use our comparison tool to analyze security metrics side by side.

Compare Models →

Anthropic's Claude 4 Sonnet launched on May 22, 2025, marking a significant upgrade over its predecessor, Claude Sonnet 3.7, with superior coding and reasoning capabilities. Available through Anthropic API, Amazon Bedrock, and Google Cloud's Vertex AI, this model enhances performance and efficiency.

As industry adoption grows, this analysis aims to evaluate the model's security features and identify areas for potential improvement.

Company Logo

"Claude Sonnet 4 significantly improves on Sonnet 3.7's industry-leading capabilities, excelling in coding with a state-of-the-art 72.7% on SWE-bench."Anthropic

About This Evaluation

Red Teaming

Open-source AI red teaming covering 50+ vulnerability types.

Learn more

Quick Start

Run your own security evaluation with our step-by-step guide.

View guide

Documentation

Learn more about our AI red teaming framework.

Read docs
About the Model
Let's take a closer look at the model before diving into our security findings.

Core Capabilities

Input Support

text, code

Output Type

text, code

Token Limits

1 million tokens input • 1 million tokens output

Use Cases

Primary Focus

coding, reasoning

Key Applications

• software development • problem-solving • agentic tasks

Availability

Access Points

• Anthropic API • Amazon Bedrock • Google Cloud's Vertex AI

Knowledge Cutoff

January 2025

Security Test Results

Comprehensive analysis across 39 test categories, sorted by severity and pass rate:

Performance Overview

Our security testing suite demonstrates varying levels of compliance across different categories. The Claude 4 Sonnet model's overall pass rate distribution shows 3 critical, 5 high, 15 medium, and 16 low severity findings.

Top performing areas include Explicit Content (100%), Divergent Repetition (100%), ASCII Smuggling (100%). These categories demonstrate exceptional security practices and robust implementation.

Areas requiring attention include Pliny Prompt Injections (20%), Graphic Content (51.11%), CyberSecEval Dataset (68.89%). These categories present opportunities for improvement in security measures.

Showing 39 of 39 tests
Type
Severity
Description
Pass Rate
Low
Tests vulnerability to Unicode tag-based instruction smuggling attacks
100%
Low
Tests handling of malicious prompts from the BeaverTails dataset
100%
Medium
Tests for unauthorized contractual commitments and legal exposure
95.56%
Medium
Tests prompt injection attacks from Meta's CyberSecEval dataset
68.89%
Medium
Tests for training data leaks through repetitive pattern exploitation that causes model divergence
100%
Medium
Tests for unauthorized actions beyond defined system boundaries
91.11%
Medium
Tests for fabrication of false or misleading information
97.78%
High
Tests handling of WMD-related content
91.11%
Critical
Tests handling of child exploitation content
100%
Low
Tests handling of intellectual property violations
97.78%
Low
Tests handling of cybercrime-related content
93.33%
Low
Tests creation of malicious code
93.33%
Medium
Tests handling of graphic or violent content
51.11%
Low
Tests handling of harassment and bullying content
100%
Critical
Tests handling of hate speech and discriminatory content
97.78%
Medium
Tests handling of general illegal activities
84.44%
Medium
Tests handling of illegal drug-related content
95.56%
Low
Tests handling of methamphetamine-related content
100%
Medium
Tests handling of weapons-related content
88.89%
Low
Tests handling of personal attacks and insults
97.78%
Medium
Tests handling of IP theft and violations
93.33%
Medium
Tests handling of false information campaigns
91.11%
Medium
Tests handling of non-violent criminal content
95.56%
Low
Tests handling of profane or inappropriate language
95.56%
Low
Tests handling of extremist content and recruitment
91.11%
Critical
Tests handling of self-harm and suicide content
100%
High
Tests handling of sexual crime content
93.33%
Medium
Tests handling of explicit sexual content
100%
Medium
Tests handling of unauthorized professional advice
100%
Low
Tests handling of dangerous activity instructions
73.33%
High
Tests handling of violent crime content
95.56%
Low
Tests handling of Improvised Explosive Devices
88.89%
High
Tests for unauthorized resource usage and purpose deviation
88.89%
Low
Tests for unauthorized impersonation of entities
77.78%
Low
Tests for overreliance on system assumptions
73.33%
High
Tests for direct PII exposure vulnerabilities
100%
Medium
Tests handling of Pliny prompt injections
20%
Low
Tests handling of political content and bias
93.33%
Low
Tests handling of religious content and bias
93.33%
Total Tests: 39
Critical: 3High: 5Medium: 15Low: 16

Our evaluation included over 1,600 test probes across four security categories. The analysis revealed both strengths and areas requiring additional safety measures:

Security & Access Control

97% Pass Rate

Protection against unauthorized access, data exposure, and system vulnerabilities

5 failed probes

Compliance & Legal

93% Pass Rate

Assessment of compliance with legal requirements and prevention of illegal content

54 failed probes

Trust & Safety

90% Pass Rate

Prevention of harmful content and protection of user safety

48 failed probes

Brand

88% Pass Rate

Protection of brand integrity and prevention of misuse

42 failed probes
Standards
Security analysis of Claude 4 Sonnet against industry-standard frameworks for Large Language Models

OWASP Top 10 for LLMs (2025) and MITRE ATLAS represent the primary security assessment frameworks for large language models. OWASP focuses on critical deployment risks, while ATLAS maps adversarial techniques against AI systems, similar to the MITRE ATT&CK framework for traditional systems.

The Claude 4 Sonnet model's evaluation against these frameworks reveals several areas requiring attention across 5 identified findings. While Claude 4 Sonnet has no critical vulnerabilities, high-severity issues should be prioritized for improvement.

MITRE ATLAS identified Jailbreak as high-severity concerns.

OWASP Top 10 for LLMs 2025 - Claude 4 Sonnet Assessment

Moderate Risk Areas

LLM06: Excessive Agency
LLM09: Misinformation

MITRE ATLAS Findings for Claude 4 Sonnet

High Severity Findings

Jailbreak

Moderate Severity Findings

Prompt Injections
Erode ML Model Integrity
Statistical Analysis
Comprehensive breakdown of test results across 39 security test categories

Total Test Categories

39

Average Pass Rate

90.0%

Critical Issues

3

Severity Distribution

critical3 tests
high5 tests
medium15 tests
low16 tests

Performance Summary

Security & Access Control

Pass Rate97.0%

Compliance & Legal

Pass Rate93.0%

Trust & Safety

Pass Rate90.0%

Brand

Pass Rate88.0%

Worst Performing Tests

Pliny Prompt Injections

Tests handling of Pliny prompt injections

20%

Graphic Content

Tests handling of graphic or violent content

51.11%

CyberSecEval Dataset

Tests prompt injection attacks from Meta's CyberSecEval dataset

68.89%

Overreliance

Tests for overreliance on system assumptions

73.33%

Dangerous Activity Content

Tests handling of dangerous activity instructions

73.33%

Best Performing Tests

PII via Direct Exposure

Tests for direct PII exposure vulnerabilities

100%

Unauthorized Advice

Tests handling of unauthorized professional advice

100%

Child Exploitation

Tests handling of child exploitation content

100%

Self-Harm

Tests handling of self-harm and suicide content

100%

Methamphetamine Content

Tests handling of methamphetamine-related content

100%

Key Observations

Strengths

  • Strong performance in handling PII via Direct Exposure (100% pass rate)
  • Effective prevention in Unauthorized Advice (100%)
  • Consistent performance across critical security tests

Areas for Improvement

  • Low pass rate (20%) for Pliny Prompt Injections
  • 3 critical severity issues identified
  • Average pass rate of 90.0% indicates room for improvement