Skip to main content

Grok 4 Security Report

July 2025 • Model Security & Safety Evaluation

Released
July 09, 2025

Compare Model Performance

Want to see how Grok 4 stacks up against other models? Use our comparison tool to analyze security metrics side by side.

Compare Models →

xAI's Grok 4 launched on July 09, 2025, marking a significant advancement in frontier intelligence with its integration of native tool use and real-time search capabilities.

As Grok 4 gains traction among enterprise users, this security analysis aims to evaluate its robust security features and identify areas for enhancement.

Company Logo

"Grok 4 represents a leap in frontier intelligence, setting a new state-of-the-art for closed models on ARC-AGI V2 with 15.9%."xAI

About This Evaluation

Red Teaming

Open-source AI red teaming covering 50+ vulnerability types.

Learn more

Quick Start

Run your own security evaluation with our step-by-step guide.

View guide

Documentation

Learn more about our AI red teaming framework.

Read docs
About the Model
Let's take a closer look at the model before diving into our security findings.

Core Capabilities

Input Support

text, vision, voice

Output Type

text, visual analysis

Token Limits

256,000 input • varies by subscription output

Use Cases

Primary Focus

frontier intelligence, real-time search

Key Applications

• enterprise-grade AI solutions • advanced reasoning tasks • multimodal interactions

Availability

Access Points

• SuperGrok and Premium+ subscriptions • xAI API

Knowledge Cutoff

November 2024

Security Test Results

Comprehensive analysis across 39 test categories, sorted by severity and pass rate:

Performance Overview

Our security testing suite demonstrates varying levels of compliance across different categories. The Grok 4 model's overall pass rate distribution shows 3 critical, 5 high, 15 medium, and 16 low severity findings.

Areas requiring attention include Pliny Prompt Injections (0%), CyberSecEval Dataset (0%), Sexual Crime Content (11.11%). These categories present opportunities for improvement in security measures.

Showing 39 of 39 tests
Low

Tests vulnerability to Unicode tag-based instruction smuggling attacks

Low

Tests handling of malicious prompts from the BeaverTails dataset

Medium

Tests for unauthorized contractual commitments and legal exposure

Medium

Tests prompt injection attacks from Meta's CyberSecEval dataset

Medium

Tests for training data leaks through repetitive pattern exploitation that causes model divergence

Medium

Tests for unauthorized actions beyond defined system boundaries

Medium

Tests for fabrication of false or misleading information

High

Tests handling of WMD-related content

Critical

Tests handling of child exploitation content

Low

Tests handling of intellectual property violations

Low

Tests handling of cybercrime-related content

Low

Tests creation of malicious code

Medium

Tests handling of graphic or violent content

Low

Tests handling of harassment and bullying content

Critical

Tests handling of hate speech and discriminatory content

Medium

Tests handling of general illegal activities

Medium

Tests handling of illegal drug-related content

Low

Tests handling of methamphetamine-related content

Medium

Tests handling of weapons-related content

Low

Tests handling of personal attacks and insults

IP Theft35.56%
Medium

Tests handling of IP theft and violations

Medium

Tests handling of false information campaigns

Medium

Tests handling of non-violent criminal content

Profanity37.78%
Low

Tests handling of profane or inappropriate language

Low

Tests handling of extremist content and recruitment

Self-Harm42.22%
Critical

Tests handling of self-harm and suicide content

High

Tests handling of sexual crime content

Medium

Tests handling of explicit sexual content

Medium

Tests handling of unauthorized professional advice

Low

Tests handling of dangerous activity instructions

High

Tests handling of violent crime content

Low

Tests handling of Improvised Explosive Devices

High

Tests for unauthorized resource usage and purpose deviation

Low

Tests for unauthorized impersonation of entities

Low

Tests for overreliance on system assumptions

High

Tests for direct PII exposure vulnerabilities

Medium

Tests handling of Pliny prompt injections

Low

Tests handling of political content and bias

Low

Tests handling of religious content and bias

Total Tests: 39
Critical: 3High: 5Medium: 15Low: 16

Our evaluation included over 1,600 test probes across four security categories. The analysis revealed both strengths and areas requiring additional safety measures:

Security & Access Control

50% Pass Rate

Protection against unauthorized access, data exposure, and system vulnerabilities

72 failed probes

Compliance & Legal

24% Pass Rate

Assessment of compliance with legal requirements and prevention of illegal content

545 failed probes

Trust & Safety

24% Pass Rate

Prevention of harmful content and protection of user safety

378 failed probes

Brand

39% Pass Rate

Protection of brand integrity and prevention of misuse

218 failed probes
Standards
Security analysis of Grok 4 against industry-standard frameworks for Large Language Models

OWASP Top 10 for LLMs (2025) and MITRE ATLAS represent the primary security assessment frameworks for large language models. OWASP focuses on critical deployment risks, while ATLAS maps adversarial techniques against AI systems, similar to the MITRE ATT&CK framework for traditional systems.

The Grok 4 model's evaluation against these frameworks reveals several areas requiring attention across 6 identified findings. While Grok 4 has no critical vulnerabilities, high-severity issues should be prioritized for improvement.

High-severity concerns include LLM02: Sensitive Information Disclosure.

MITRE ATLAS identified Jailbreak as high-severity concerns.

OWASP Top 10 for LLMs 2025 - Grok 4 Assessment

High Risk Areas

LLM02: Sensitive Information Disclosure

Moderate Risk Areas

LLM06: Excessive Agency
LLM09: Misinformation

MITRE ATLAS Findings for Grok 4

High Severity Findings

Jailbreak

Moderate Severity Findings

Prompt Injections
Erode ML Model Integrity
Statistical Analysis
Comprehensive breakdown of test results across 39 security test categories

Total Test Categories

39

Average Pass Rate

28.2%

Critical Issues

3

Severity Distribution

critical3 tests
high5 tests
medium15 tests
low16 tests

Performance Summary

Security & Access Control

77%

Compliance & Legal

26%

Trust & Safety

31%

Brand

66%

Worst Performing Tests

CyberSecEval Dataset

Tests prompt injection attacks from Meta's CyberSecEval dataset

0%

Pliny Prompt Injections

Tests handling of Pliny prompt injections

0%

Child Exploitation

Tests handling of child exploitation content

11.11%

Sexual Crime Content

Tests handling of sexual crime content

11.11%

Non-Violent Crime

Tests handling of non-violent criminal content

13.33%

Best Performing Tests

Divergent Repetition

Tests for training data leaks through repetitive pattern exploitation that causes model divergence

64.44%

Resource Hijacking

Tests for unauthorized resource usage and purpose deviation

53.33%

Entity Impersonation

Tests for unauthorized impersonation of entities

44.44%

Religious Bias

Tests handling of religious content and bias

44.44%

Overreliance

Tests for overreliance on system assumptions

44.44%

Key Observations

Strengths

  • Strong performance in handling Divergent Repetition (64.44% pass rate)
  • Effective prevention in Resource Hijacking (53.33%)
  • Consistent performance across critical security tests

Areas for Improvement

  • Low pass rate (0%) for CyberSecEval Dataset
  • 3 critical severity issues identified
  • Average pass rate of 28.2% indicates room for improvement