Loading report data...
Loading report data...
September 2025 • Model Security & Safety Evaluation
Want to see how Qwen3-235B-A22B-Instruct-2507 stacks up against other models? Use our comparison tool to analyze security metrics side by side.
Qwen's Qwen3-235B-A22B-Instruct-2507 launched on July 30, 2025, representing an advanced mixture-of-experts language model with 22 billion active parameters per forward pass operating exclusively in non-thinking mode. With significant improvements in instruction following, logical reasoning, mathematical capabilities, coding performance, and enhanced 262K context understanding, this model demonstrates superior alignment with user preferences across multiple languages. Available through OpenRouter API, Hugging Face, and Qwen AI platforms, this evaluation assesses its security robustness across diverse attack vectors.
As the model gains traction in diverse applications, this security analysis aims to evaluate its safety features and identify areas for improvement in its deployment.
"The Qwen3-235B-A22B-Instruct-2507 model represents a leap forward in multilingual and logical reasoning capabilities."— Qwen Team
multilingual text, mathematical expressions, code snippets
text generation, logical reasoning, mathematical solutions
262K input • 262K output
general-purpose text generation
• instruction following • logical reasoning • mathematics • coding • tool usage
• OpenRouter API • Hugging Face • Qwen AI platforms • Cerebras Inference
Not published
Comprehensive analysis across 39 test categories, sorted by severity and pass rate:
Our security testing suite demonstrates varying levels of compliance across different categories. The Qwen3-235B-A22B-Instruct-2507 model's overall pass rate distribution shows 3 critical, 5 high, 15 medium, and 16 low severity findings.
Top performing areas include ASCII Smuggling (100%), Divergent Repetition (100%), Sexual Crime Content (93.33%). These categories demonstrate exceptional security practices and robust implementation.
Areas requiring attention include Entity Impersonation (48.89%), IP Violations (48.89%), Religious Bias (48.89%). These categories present opportunities for improvement in security measures.
Tests for unauthorized contractual commitments and legal exposure
Tests for training data leaks through repetitive pattern exploitation that causes model divergence
Tests for fabrication of false or misleading information
Our evaluation included over 1,600 test probes across four security categories. The analysis revealed both strengths and areas requiring additional safety measures:
Protection against unauthorized access, data exposure, and system vulnerabilities
Assessment of compliance with legal requirements and prevention of illegal content
Prevention of harmful content and protection of user safety
Protection of brand integrity and prevention of misuse
OWASP Top 10 for LLMs (2025) and MITRE ATLAS represent the primary security assessment frameworks for large language models. OWASP focuses on critical deployment risks, while ATLAS maps adversarial techniques against AI systems, similar to the MITRE ATT&CK framework for traditional systems.
The Qwen3-235B-A22B-Instruct-2507 model's evaluation against these frameworks reveals several areas requiring attention across 5 identified findings. While Qwen3-235B-A22B-Instruct-2507 has no critical vulnerabilities, high-severity issues should be prioritized for improvement.
MITRE ATLAS identified Jailbreak as high-severity concerns.
39
69.4%
3
Tests handling of cybercrime-related content
Tests prompt injection attacks from Meta's CyberSecEval dataset
Tests handling of religious content and bias
Tests handling of intellectual property violations
Tests for unauthorized impersonation of entities
Tests for training data leaks through repetitive pattern exploitation that causes model divergence
Tests vulnerability to Unicode tag-based instruction smuggling attacks
Tests handling of sexual crime content
Tests handling of hate speech and discriminatory content
Tests for direct PII exposure vulnerabilities