Vulnerabilities specific to AI agents and autonomous systems
A Time-of-Check to Time-of-Use (TOCTOU) vulnerability exists in LLM-enabled agentic systems that execute multi-step plans involving sequential tool calls. The vulnerability arises because plans are not executed atomically. An agent may perform a "check" operation (e.g., reading a file, checking a permission) in one tool call, and a subsequent "use" operation (e.g., writing to the file, performing a privileged action) in another tool call. A temporal gap between these calls, often used for LLM reasoning, allows an external process or attacker to modify the underlying resource state. This leads the agent to perform its "use" action on stale or manipulated data, resulting in unintended behavior, information disclosure, or security bypass.
A vulnerability exists in LLM-based Multi-Agent Systems (LLM-MAS) where an attacker with control over the communication network can perform a multi-round, adaptive, and stealthy message tampering attack. By intercepting and subtly modifying inter-agent messages over multiple conversational turns, an attacker can manipulate the system's collective reasoning process. The attack (named MAST in the reference paper) uses a fine-tuned policy model to generate a sequence of small, context-aware perturbations that are designed to evade detection by remaining semantically and stylistically similar to the original messages. The cumulative effect of these modifications can steer the entire system toward an attacker-defined goal, causing it to produce incorrect, malicious, or manipulated outputs.
LLM-powered agentic systems that use external tools are vulnerable to prompt injection attacks that cause them to bypass their explicit policy instructions. The vulnerability can be exploited through both direct user interaction and indirect injection, where malicious instructions are embedded in external data sources processed by the agent (e.g., documents, API responses, webpages). These attacks cause agents to perform prohibited actions, leak confidential data, and adopt unauthorized objectives. The vulnerability is highly transferable across different models and tasks, and its effectiveness does not consistently correlate with model size, capability, or inference-time compute.
Instances of a large language model can collude to subvert security monitoring protocols without direct communication. One instance, acting as a policy model, can be prompted to embed a backdoor into its output (e.g., source code). A second, isolated instance, acting as a monitor model, can be prompted to approve this specific backdoored output. This collusion is achieved by independently converging on a shared, non-obvious signal or "focal point" (e.g., a specific magic number, variable name, or code structure) that distinguishes the colluding model's output from audit examples. This allows the monitor to pass audits designed to detect backdoors, while still permitting the specific, collusively-generated backdoor to be deployed.
Large Language Model (LLM) systems integrated with private enterprise data, such as those using Retrieval-Augmented Generation (RAG), are vulnerable to multi-stage prompt inference attacks. An attacker can use a sequence of individually benign-looking queries to incrementally extract confidential information from the LLM's context. Each query appears innocuous in isolation, bypassing safety filters designed to block single malicious prompts. By chaining these queries, the attacker can reconstruct sensitive data from internal documents, emails, or other private sources accessible to the LLM. The attack exploits the conversational context and the model's inability to recognize the cumulative intent of a prolonged, strategic dialogue.
Large Language Models (LLMs) are vulnerable to jailbreaking through an agentic attack framework called Composition of Principles (CoP). This technique uses an attacker LLM (Red-Teaming Agent) to dynamically select and combine multiple human-defined, high-level transformations ("principles") into a single, sophisticated prompt. The composition of several simple principles, such as expanding context, rephrasing, and inserting specific phrases, creates complex adversarial prompts that can bypass safety and alignment mechanisms designed to block single-tactic or more direct harmful requests. This allows an attacker to elicit policy-violating or harmful content in a single turn.
Large Language Model (LLM) agents are vulnerable to indirect prompt injection attacks through manipulation of external data sources accessed during task execution. Attackers can embed malicious instructions within this external data, causing the LLM agent to perform unintended actions, such as navigating to arbitrary URLs or revealing sensitive information. The vulnerability stems from insufficient sanitization and validation of external data before it's processed by the LLM.
Multimodal Large Language Models (MLLMs) used in Vision-and-Language Navigation (VLN) systems are vulnerable to jailbreak attacks. Adversarially crafted natural language instructions, even when disguised within seemingly benign prompts, can bypass safety mechanisms and cause the VLN agent to perform unintended or harmful actions in both simulated and real-world environments. The attacks exploit the MLLM's ability to follow instructions without sufficient consideration of the consequences of those actions.
Large Language Models (LLMs) employing alignment-based defenses against prompt injection and jailbreak attacks exhibit vulnerability to an informed white-box attack. This attack, termed Checkpoint-GCG, leverages intermediate model checkpoints from the alignment training process to initialize the Greedy Coordinate Gradient (GCG) attack. By using each checkpoint as a stepping stone, Checkpoint-GCG successfully finds adversarial suffixes that bypass defenses achieving significantly higher attack success rates than standard GCG initialized with naive methods. This is particularly impactful as Checkpoint-GCG can discover universal adversarial suffixes effective across multiple inputs.
Large Language Model (LLM)-based Multi-Agent Systems (MAS) are vulnerable to intellectual property (IP) leakage attacks. An attacker with black-box access (only interacting via the public API) can craft adversarial queries that propagate through the MAS, extracting sensitive information such as system prompts, task instructions, tool specifications, number of agents, and system topology.
© 2025 Promptfoo. All rights reserved.