How Much Does Foundation Model Security Matter?
At the heart of every generative AI application is the LLM foundation model (or models) it uses. Because LLMs are notoriously expensive to train from scratch, most enterprises rely on existing foundation models, enhancing them through few-shot or many-shot prompting, retrieval-augmented generation (RAG), and/or fine-tuning.
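As a simple illustration of the first of those techniques, few-shot prompting just means prepending labeled examples to a request so the model infers the task from context; here is a minimal sketch in plain Python (the task, example texts, and labels are hypothetical placeholders, not tied to any particular provider's API):

```python
# Hypothetical labeled examples for a sentiment-classification task.
FEW_SHOT_EXAMPLES = [
    ("The support team resolved my issue in minutes.", "positive"),
    ("The app crashes every time I open it.", "negative"),
]

def build_few_shot_messages(user_input: str) -> list[dict]:
    """Build a chat-style message list: a system instruction, the
    labeled examples as alternating user/assistant turns, and finally
    the real input to classify."""
    messages = [{
        "role": "system",
        "content": "Classify the sentiment of each message as positive or negative.",
    }]
    for text, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": text})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_few_shot_messages("Setup was painless and fast.")
# 1 system turn + 2 examples x 2 turns each + 1 final user turn = 6
print(len(msgs))
```

The resulting message list can then be passed to whichever foundation model the application is built on; RAG and fine-tuning follow the same pattern of adapting a pre-trained model rather than training one.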