Featured

The Invisible Threat: How Zero-Width Unicode Characters Can Silently Backdoor Your AI-Generated Code

Explore how invisible Unicode characters can be used to manipulate AI coding assistants and LLMs, potentially leading to security vulnerabilities in your code.


Latest Posts

OWASP Red Teaming: A Practical Guide to Getting Started


While generative AI creates new opportunities for companies, it also introduces novel security risks that differ significantly from traditional cybersecurity concerns.

Misinformation in LLMs—Causes and Prevention Strategies


Misinformation in LLMs occurs when a model produces false or misleading information that is treated as credible.

Sensitive Information Disclosure in LLMs: Privacy and Compliance in Generative AI


Imagine deploying an LLM application only to discover it's inadvertently revealing your company's internal documents, customer data, and API keys through seemingly innocent conversations.

Understanding AI Agent Security


In an earlier blog post, we discussed the use-cases for RAG architecture and its secure design principles.

What are the Security Risks of Deploying DeepSeek-R1?


Promptfoo's initial red teaming scans against DeepSeek-R1 revealed significant vulnerabilities, particularly in its handling of harmful and toxic content.

1,156 Questions Censored by DeepSeek


DeepSeek-R1 is a blockbuster open-source model that is now at the top of the U.S.

How to Red Team a LangChain Application


Want to test your LangChain application for vulnerabilities? This guide shows you how to use Promptfoo to systematically probe for security issues through adversarial testing (red teaming) of your LangChain chains and agents.

Defending Against Data Poisoning Attacks on LLMs: A Comprehensive Guide


Data poisoning remains a top concern on the OWASP Top 10 for 2025.