GPT-4.1 mini Security Report
April 2025 • Model Security & Safety Evaluation
Compare Model Performance
Want to see how GPT-4.1 mini stacks up against other models? Use our comparison tool to analyze security metrics side by side.
OpenAI's GPT-4.1 mini launched in October 2023, offering a balanced approach to intelligence, speed, and cost efficiency. Positioned as a cost-effective alternative within the GPT-4 series, it introduces enhancements in processing speed and token handling.
As adoption grows across various industries, this security analysis aims to evaluate the model's safety features and identify areas for improvement.
"GPT-4.1 mini provides a balance between intelligence, speed, and cost that makes it an attractive model for many use cases."— OpenAI
About This Evaluation
Core Capabilities
text, image
text
1,047,576 tokens input • 32,768 tokens output
Use Cases
intelligence, speed, cost efficiency
• production workloads • cost-effective AI solutions
Availability
• OpenAI API • integrated platforms
June 1, 2024
Security Test Results
Comprehensive analysis across 39 test categories, sorted by severity and pass rate:
Performance Overview
Our security testing suite demonstrates varying levels of compliance across different categories. The GPT-4.1 mini model's overall pass rate distribution shows 3 critical, 5 high, 15 medium, and 16 low severity findings.
Top performing areas include ASCII Smuggling (100%), Child Exploitation (62.22%), Sexual Crime Content (53.33%). These categories show satisfactory security practices.
Areas requiring attention include Entity Impersonation (0%), Religious Bias (0%), Resource Hijacking (0%). These categories present opportunities for improvement in security measures.
Our evaluation included over 1,600 test probes across four security categories. The analysis revealed both strengths and areas requiring additional safety measures:
Security & Access Control
15% Pass RateProtection against unauthorized access, data exposure, and system vulnerabilities
Compliance & Legal
35% Pass RateAssessment of compliance with legal requirements and prevention of illegal content
Trust & Safety
26% Pass RatePrevention of harmful content and protection of user safety
Brand
4% Pass RateProtection of brand integrity and prevention of misuse
OWASP Top 10 for LLMs (2025) and MITRE ATLAS represent the primary security assessment frameworks for large language models. OWASP focuses on critical deployment risks, while ATLAS maps adversarial techniques against AI systems, similar to the MITRE ATT&CK framework for traditional systems.
The GPT-4.1 mini model's evaluation against these frameworks reveals significant security concerns across 7 identified findings. The GPT-4.1 mini model has critical vulnerabilities in OWASP categories that require immediate remediation.
Notable critical findings include LLM01: Prompt Injection.
High-severity concerns include LLM02: Sensitive Information Disclosure.
MITRE ATLAS identified Jailbreak as high-severity concerns.
OWASP Top 10 for LLMs 2025 - GPT-4.1 mini Assessment
Critical Vulnerabilities
High Risk Areas
Moderate Risk Areas
MITRE ATLAS Findings for GPT-4.1 mini
High Severity Findings
Moderate Severity Findings
Total Test Categories
39
Average Pass Rate
25.9%
Critical Issues
3
Severity Distribution
Performance Summary
Security & Access Control
Compliance & Legal
Trust & Safety
Brand
Worst Performing Tests
Divergent Repetition
Tests for training data leaks through repetitive pattern exploitation that causes model divergence
Pliny Prompt Injections
Tests handling of Pliny prompt injections
Resource Hijacking
Tests for unauthorized resource usage and purpose deviation
Religious Bias
Tests handling of religious content and bias
Entity Impersonation
Tests for unauthorized impersonation of entities
Best Performing Tests
ASCII Smuggling
Tests vulnerability to Unicode tag-based instruction smuggling attacks
Child Exploitation
Tests handling of child exploitation content
Sexual Crime Content
Tests handling of sexual crime content
Violent Crime Content
Tests handling of violent crime content
Improvised Explosive Devices
Tests handling of Improvised Explosive Devices
Key Observations
Strengths
- Strong performance in handling ASCII Smuggling (100% pass rate)
- Effective prevention in Child Exploitation (62.22%)
- Consistent performance across critical security tests
Areas for Improvement
- Low pass rate (0%) for Divergent Repetition
- 3 critical severity issues identified
- Average pass rate of 25.9% indicates room for improvement