Does Fuzzing LLMs Actually Work?
Fuzzing has been a tried-and-true pentesting technique for years. In essence, it injects malformed or unexpected inputs into an application to surface weaknesses.
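To make the idea concrete, here is a minimal sketch of a classical mutation-based fuzzer. It is illustrative only and not taken from the post: the `mutate`, `fuzz`, and `parse_header` names are hypothetical, and `parse_header` stands in for any input-handling code you might want to stress.

```python
import random

def mutate(seed: str, n_mutations: int = 4) -> str:
    """Randomly perturb a seed input: flip, insert, or delete characters."""
    chars = list(seed)
    for _ in range(n_mutations):
        op = random.choice(["flip", "insert", "delete"])
        if op == "flip" and chars:
            i = random.randrange(len(chars))
            chars[i] = chr(random.randrange(32, 127))
        elif op == "insert":
            i = random.randrange(len(chars) + 1)
            chars.insert(i, chr(random.randrange(0, 256)))
        elif op == "delete" and chars:
            del chars[random.randrange(len(chars))]
    return "".join(chars)

def fuzz(target, seed: str, iterations: int = 1000):
    """Feed mutated inputs to `target` and collect any that crash it."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            target(candidate)
        except Exception as exc:  # a crash is a finding, not a test failure
            crashes.append((candidate, exc))
    return crashes

# Hypothetical target: a naive parser that chokes on unexpected input.
def parse_header(raw: str) -> dict:
    key, value = raw.split(":", 1)  # raises ValueError if ":" is missing
    return {key.strip(): value.strip()}

if __name__ == "__main__":
    for bad_input, error in fuzz(parse_header, "Content-Type: text/html")[:5]:
        print(f"{error!r} <- {bad_input!r}")
```

The same loop structure (mutate, inject, observe) carries over to LLM fuzzing, except the "crash" is a policy violation or unwanted output rather than an exception.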
Latest Posts
How Do You Secure RAG Applications?
In our previous blog post, we discussed the security risks of foundation models.
Prompt Injection: A Comprehensive Guide
In August 2024, security researcher Johann Rehberger uncovered a critical vulnerability in Microsoft 365 Copilot: through a sophisticated prompt injection attack, he demonstrated how sensitive company data could be secretly exfiltrated.
Understanding Excessive Agency in LLMs
Excessive agency in LLMs is a broad security risk where AI systems can do more than they should.
Preventing Bias & Toxicity in Generative AI
When asked to generate recommendation letters for 200 U.S.
How Much Does Foundation Model Security Matter?
At the heart of every Generative AI application is the LLM foundation model (or models) it is built on.
Jailbreaking Black-Box LLMs Using Promptfoo: A Walkthrough
Promptfoo is an open-source framework for testing LLM applications against security, privacy, and policy risks.
Promptfoo for Enterprise
Today we're announcing new Enterprise features for teams developing and securing LLM applications.
New Red Teaming Plugins for LLM Agents: Enhancing API Security
We're excited to announce the release of three new red teaming plugins designed specifically for Large Language Model (LLM) agents with access to internal APIs.
Promptfoo raises $5M to fix vulnerabilities in AI applications
Today, we’re excited to announce that Promptfoo has raised a $5M seed round led by Andreessen Horowitz to help developers find and fix vulnerabilities in their AI applications.