Llama 4 Scout Security Report

April 2025 • Model Security & Safety Evaluation

Released
April 5, 2025

Meta's Llama 4 Scout launched on April 5, 2025, setting a new standard in multimodal AI performance with 17 billion active parameters and 16 experts. Positioned as the most powerful model in its class, it surpasses previous Llama generations while still fitting on a single NVIDIA H100 GPU. The model is available through Meta's AI platforms, and its release calls for a thorough security evaluation of its robustness and potential vulnerabilities.

As Llama 4 Scout gains traction in the AI community, this security analysis aims to evaluate its protective measures and identify areas for enhancement.

"Llama 4 Scout is a game-changer in the AI landscape, offering unmatched performance and efficiency."Meta AI Research Team

About This Evaluation

Red Teaming

Open-source AI red teaming covering 50+ vulnerability types.

Quick Start

Run your own security evaluation with our step-by-step guide; a minimal harness sketch appears below.

Documentation

Learn more about our AI red teaming framework.

About the Model
Let's take a closer look at the model before diving into our security findings.

Core Capabilities

Input Support

text, image

Output Type

text

Token Limits

10 million input • 10 million output

Use Cases

Primary Focus

multimodal processing

Key Applications

• advanced reasoning • complex data analysis • creative content generation

Availability

Access Points

• Meta AI platforms

Knowledge Cutoff

August 2024

Security Test Results

Comprehensive analysis across 39 test categories:

Performance Overview

Our security testing suite shows varying levels of compliance across categories. The severity distribution of Llama 4 Scout's findings comprises 3 critical, 5 high, 15 medium, and 16 low severity categories.

Six categories recorded 0% pass rates: Unauthorized Commitments, False Information (Hallucination), Disinformation Campaigns, Resource Hijacking, Overreliance, and Pliny Prompt Injections. These present the clearest opportunities for hardening.

| Type | Severity | Description | Pass Rate |
|------|----------|-------------|-----------|
| ASCII Smuggling | Low | Tests vulnerability to Unicode tag-based instruction smuggling attacks | 88.89% |
| BeaverTails | Low | Tests handling of malicious prompts from the BeaverTails dataset | 15.56% |
| Unauthorized Commitments | Medium | Tests for unauthorized contractual commitments and legal exposure | 0% |
| CyberSecEval | Medium | Tests prompt injection attacks from Meta's CyberSecEval dataset | 44.44% |
| Divergent Repetition | Medium | Tests for training data leaks through repetitive pattern exploitation that causes model divergence | 6.67% |
| Excessive Agency | Medium | Tests for unauthorized actions beyond defined system boundaries | 13.33% |
| False Information (Hallucination) | Medium | Tests for fabrication of false or misleading information | 0% |
| WMD Content | High | Tests handling of WMD-related content | 37.78% |
| Child Exploitation | Critical | Tests handling of child exploitation content | 48.89% |
| Copyright Violations | Low | Tests handling of intellectual property violations | 17.78% |
| Cybercrime | Low | Tests handling of cybercrime-related content | 31.11% |
| Malicious Code | Low | Tests creation of malicious code | 15.56% |
| Graphic Content | Medium | Tests handling of graphic or violent content | 13.33% |
| Harassment & Bullying | Low | Tests handling of harassment and bullying content | 22.22% |
| Hate Speech | Critical | Tests handling of hate speech and discriminatory content | 11.11% |
| Illegal Activities | Medium | Tests handling of general illegal activities | 15.56% |
| Illegal Drugs | Medium | Tests handling of illegal drug-related content | 24.44% |
| Methamphetamine | Low | Tests handling of methamphetamine-related content | 28.89% |
| Weapons | Medium | Tests handling of weapons-related content | 40% |
| Insults | Low | Tests handling of personal attacks and insults | 8.89% |
| Intellectual Property Theft | Medium | Tests handling of IP theft and violations | 4.44% |
| Disinformation Campaigns | Medium | Tests handling of false information campaigns | 0% |
| Non-Violent Crime | Medium | Tests handling of non-violent criminal content | 31.11% |
| Profanity | Low | Tests handling of profane or inappropriate language | 13.33% |
| Radicalization | Low | Tests handling of extremist content and recruitment | 24.44% |
| Self-Harm | Critical | Tests handling of self-harm and suicide content | 37.78% |
| Sexual Crime Content | High | Tests handling of sexual crime content | 60% |
| Explicit Content | Medium | Tests handling of explicit sexual content | 26.67% |
| Unauthorized Advice | Medium | Tests handling of unauthorized professional advice | 15.56% |
| Dangerous Activities | Low | Tests handling of dangerous activity instructions | 28.89% |
| Violent Crime Content | High | Tests handling of violent crime content | 46.67% |
| Improvised Explosive Devices | Low | Tests handling of Improvised Explosive Devices | 44.44% |
| Resource Hijacking | High | Tests for unauthorized resource usage and purpose deviation | 0% |
| Impersonation | Low | Tests for unauthorized impersonation of entities | 2.22% |
| Overreliance | Low | Tests for overreliance on system assumptions | 0% |
| PII Exposure | High | Tests for direct PII exposure vulnerabilities | 22.22% |
| Pliny Prompt Injections | Medium | Tests handling of Pliny prompt injections | 0% |
| Political Bias | Low | Tests handling of political content and bias | 2.22% |
| Religious Bias | Low | Tests handling of religious content and bias | 2.22% |

Total Tests: 39 (Critical: 3 • High: 5 • Medium: 15 • Low: 16)

Our evaluation included over 1,600 test probes across four security categories. The analysis revealed both strengths and areas requiring additional safety measures; a back-of-the-envelope reconstruction of the per-category probe counts follows the four summaries below:

Security & Access Control

15% Pass Rate

Protection against unauthorized access, data exposure, and system vulnerabilities

123 failed probes

Compliance & Legal

28% Pass Rate

Assessment of compliance with legal requirements and prevention of illegal content

521 failed probes

Trust & Safety

24% Pass Rate

Prevention of harmful content and protection of user safety

380 failed probes

Brand

3% Pass Rate

Protection of brand integrity and prevention of misuse

351 failed probes
Standards
Security analysis of Llama 4 Scout against industry-standard frameworks for Large Language Models

OWASP Top 10 for LLMs (2025) and MITRE ATLAS represent the primary security assessment frameworks for large language models. OWASP focuses on critical deployment risks, while ATLAS maps adversarial techniques against AI systems, similar to the MITRE ATT&CK framework for traditional systems.

The Llama 4 Scout model's evaluation against these frameworks reveals several areas requiring attention across 6 identified findings. While none of the framework-mapped findings are critical, the high-severity issues should be prioritized.

Under OWASP, the high-severity concern is LLM02: Sensitive Information Disclosure.

MITRE ATLAS identified Jailbreak as a high-severity concern.

OWASP Top 10 for LLMs 2025 - Llama 4 Scout Assessment

High Risk Areas

LLM02: Sensitive Information Disclosure

Moderate Risk Areas

LLM06: Excessive Agency
LLM09: Misinformation

MITRE ATLAS Findings for Llama 4 Scout

High Severity Findings

Jailbreak

Moderate Severity Findings

Prompt Injections
Erode ML Model Integrity

Statistical Analysis
Comprehensive breakdown of test results across 39 security test categories

Total Test Categories

39

Average Pass Rate

21.7%

Critical Issues

3

Severity Distribution

Critical: 3 tests
High: 5 tests
Medium: 15 tests
Low: 16 tests

Performance Summary

Security & Access Control

Pass Rate: 15.0%

Compliance & Legal

Pass Rate: 28.0%

Trust & Safety

Pass Rate: 24.0%

Brand

Pass Rate: 3.0%

Worst Performing Tests

False Information (Hallucination)

Tests for fabrication of false or misleading information

0%

Unauthorized Commitments

Tests for unauthorized contractual commitments and legal exposure

0%

Disinformation Campaigns

Tests handling of false information campaigns

0%

Resource Hijacking

Tests for unauthorized resource usage and purpose deviation

0%

Overreliance

Tests for overreliance on system assumptions

0%

Best Performing Tests

ASCII Smuggling

Tests vulnerability to Unicode tag-based instruction smuggling attacks

88.89%

Sexual Crime Content

Tests handling of sexual crime content

60%

Child Exploitation

Tests handling of child exploitation content

48.89%

Violent Crime Content

Tests handling of violent crime content

46.67%

Improvised Explosive Devices

Tests handling of Improvised Explosive Devices

44.44%

Key Observations

Strengths

  • Strong performance in handling ASCII Smuggling (88.89% pass rate)
  • Effective refusal of sexual crime content (60%)
  • Comparatively strong results on two of the three critical-severity tests (Child Exploitation at 48.89%, Self-Harm and Suicide at 37.78%)

Areas for Improvement

  • Six categories with 0% pass rates, including False Information (Hallucination) and Pliny Prompt Injections
  • 3 critical-severity issues identified
  • An average pass rate of 21.7% leaves substantial room for improvement