
LLM Vulnerability: Excessive Agency Overview

From prompt injection attacks to overreliance on model output, large language models (LLMs) present security practitioners with a wide variety of risks to mitigate.

Because LLM outputs can affect security in many ways, from leaking sensitive data to exercising insecure plugins or exploitable APIs, LLM-enabled applications must undergo a thorough security review before being released to a broad audience.

Today we take a closer look at Excessive Agency, one of the top ten LLM application vulnerabilities identified by OWASP, and offer insights into how cybersecurity professionals mitigate these risks in their models and agents to improve the security of LLM-enabled web applications.

Excessive Agency Defined

Excessive Agency in the context of Large Language Model (LLM) based applications or chatbots refers to scenarios where the model suggests or performs actions that exceed the intended scope or the permissions granted by its users or administrators.

This can manifest in various ways, such as executing unauthorized commands, making unintended information disclosures, or interacting with external systems beyond its restricted parameters.

This vulnerability can arise from various sources, including hallucinations, both direct and indirect prompt manipulations, malicious plugins, sub-optimally crafted benign prompts, or simply an underperforming model. Each of these factors can contribute to a scenario where the LLM acts outside of expected operational guidelines, potentially leading to damaging consequences.

While Excessive Agency can occur even in systems where an LLM produces a single response or action, the risk grows as systems gain more agency, as with autonomous AI agents. The OWASP Top 10 for LLM Applications lists Excessive Agency as LLM08, one of the most critical vulnerabilities commonly seen in LLM applications, highlighting its potential impact, ease of exploitation, and prevalence in real-world deployments.

Examples of Excessive Agency

Excessive Agency refers to situations where LLMs undertake actions that go beyond what their operators intend or authorize. This overreach can manifest as operational, ethical, or system-level impacts.

Excessive Autonomy

AI systems, including LLMs, might expand their operational scope, for example, by applying learned behaviors in inappropriate contexts.

Example: An autonomous AI system designed to harvest and analyze business information on the internet might broaden its scope to gathering information about individuals on social media, surpassing intended ethical boundaries.

Excessive Functionality

LLM-based systems may have access to plugins or functionality beyond the requirements for their operation.

Example: Consider an AI system designed to use a shell tool to read and write a file on a server in response to user requests. If this system also has access to other shell commands on the server, it may misuse them and thereby compromise confidentiality, integrity, or availability.
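
A minimal Python sketch of the contrast (paths and function names are illustrative): a generic shell tool grants the model far more capability than the task needs, whereas a purpose-built function would not.

```python
import subprocess

# Excessive functionality: the task only needs to read one report file,
# but a generic shell tool lets the model run arbitrary commands.
def run_shell(command: str) -> str:
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout

run_shell("cat /srv/app/report.txt")  # the intended use
# ...but once granted, nothing stops a manipulated model from issuing, say:
# run_shell("cat /etc/ssl/private/server.key")
```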

Excessive Permissions

An LLM-based system that only needs database read access may, if permissions are not properly managed, hold privileges that compromise system integrity. This vulnerability is comparable to unauthorized access and could lead to backend systems being inappropriately accessed and sensitive information being disclosed.

Example: If an AI system designed to read data from a database in response to user requests also holds write or delete permissions, it might drop a table or inadvertently modify data.
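
One way to enforce this separation, sketched here with SQLite for brevity (the database name is illustrative; with PostgreSQL or MySQL you would instead connect as a role that holds only SELECT privileges), is to open the connection in read-only mode so that even a manipulated query cannot modify data.

```python
import sqlite3

# Read-only connection: INSERT, UPDATE, DELETE, and DROP statements all fail
# with sqlite3.OperationalError, no matter what query the model produces.
conn = sqlite3.connect("file:customers.db?mode=ro", uri=True)

def run_model_query(sql: str) -> list[tuple]:
    return conn.execute(sql).fetchall()
```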

System Overload

LLMs can inadvertently consume excessive computational resources while processing large or complex requests, which may lead to denial of service (DoS).

Example: An AI tool intended for data analysis continuously initiates large-scale queries beyond what the task requires, overloading the server and impacting other critical services.
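
One countermeasure, again sketched with SQLite (the row cap and time budget below are illustrative values), is to bound both the size and the duration of any model-initiated query.

```python
import sqlite3
import time

MAX_ROWS = 1_000     # illustrative cap on result size
TIME_BUDGET = 5.0    # illustrative per-query time budget, in seconds

conn = sqlite3.connect("file:analytics.db?mode=ro", uri=True)

def run_budgeted_query(sql: str) -> list[tuple]:
    deadline = time.monotonic() + TIME_BUDGET
    # SQLite aborts the running statement once the handler returns a truthy value.
    conn.set_progress_handler(lambda: time.monotonic() > deadline, 10_000)
    try:
        cursor = conn.execute(sql)
        rows = cursor.fetchmany(MAX_ROWS)
        if cursor.fetchone() is not None:
            raise RuntimeError(f"result exceeded the {MAX_ROWS}-row budget")
        return rows
    finally:
        conn.set_progress_handler(None, 0)  # clear the handler
```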

Sensitive Data Leakage

AI systems powered by LLMs with access to customer document storage may, if given excessive permissions, access and leak information from other customers.

Example: An LLM-powered tool used for creating reports inadvertently includes sensitive information when prompted to complete an otherwise innocent task. 
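
One defensive pattern is to scope every retrieval to the requesting tenant before anything reaches the model's context. A minimal sketch (the in-memory store and field names are illustrative; a real system would query a database or vector index):

```python
from dataclasses import dataclass

@dataclass
class Document:
    tenant_id: str
    text: str

DOCUMENTS: list[Document] = []  # illustrative in-memory store

def retrieve_for_prompt(query: str, tenant_id: str, limit: int = 5) -> list[str]:
    """Only the requesting tenant's documents can ever enter the LLM context."""
    matches = [d.text for d in DOCUMENTS
               if d.tenant_id == tenant_id and query.lower() in d.text.lower()]
    return matches[:limit]
```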

Mitigation Strategies for Excessive Agency

Define Clear Boundaries

It’s essential to establish and enforce strict boundaries for what LLMs can and cannot do. This includes defining the scope of their tasks, the types of data they can access, and how they interact with other systems. To reduce risk, prefer whitelisting allowed plugins and tools over blacklisting disallowed ones.
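
A minimal sketch of the whitelisting idea in Python (the tool names and stub bodies are hypothetical): the dispatcher refuses any tool the model requests that is not explicitly registered.

```python
def read_report(name: str) -> str:   # hypothetical narrow tool
    ...

def summarize(text: str) -> str:     # hypothetical narrow tool
    ...

# Whitelist: only these tools may ever run, regardless of what the model asks for.
ALLOWED_TOOLS = {"read_report": read_report, "summarize": summarize}

def dispatch(tool_name: str, **kwargs):
    try:
        tool = ALLOWED_TOOLS[tool_name]
    except KeyError:
        raise PermissionError(f"tool {tool_name!r} is not whitelisted") from None
    return tool(**kwargs)
```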

Implement Robust Authentication and Authorization

Ensure that all interactions with the LLM are authenticated and that the model operates under strict authorization protocols to prevent it from executing any actions that exceed user permissions.
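 
Building on the dispatcher sketch above, a minimal authorization gate might look like this (the user-to-permission mapping is illustrative): the model may propose any action, but it only executes if the authenticated user is allowed to trigger it.

```python
# Illustrative mapping of authenticated users to the tools they may trigger.
USER_PERMISSIONS = {
    "alice": {"read_report"},
    "bob":   {"read_report", "summarize"},
}

def execute_for_user(user: str, tool_name: str, dispatch, **kwargs):
    """The model can propose any action; it runs only if this user is authorized."""
    if tool_name not in USER_PERMISSIONS.get(user, set()):
        raise PermissionError(f"user {user!r} may not invoke {tool_name!r}")
    return dispatch(tool_name, **kwargs)
```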

Use Autonomous AI Agents With Caution

Autonomous agents by definition work in cycles, taking several subsequent actions from a single input request. This increases the risks associated with the LLM application. When designing agent-powered applications, limit their scope by choosing plugins or tools that are specifically tailored to narrow tasks.

For example, if an autonomous AI agent is required to write data to a file, using a broad shell command plugin could inadvertently allow a vast array of other commands to be executed, posing a significant security risk. Instead, a more secure approach would involve developing a specialized tool designed solely for file writing. This focused approach not only enhances security but also ensures that the AI agent operates strictly within its intended boundaries, thus preventing it from taking unintended actions.
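
A sketch of such a specialized tool (the sandbox directory is illustrative): it can write plain files inside one directory and nothing else, so even a fully compromised prompt cannot turn it into a shell.

```python
from pathlib import Path

OUTPUT_DIR = Path("/srv/agent/output")  # illustrative sandbox directory

def write_file(filename: str, content: str) -> str:
    """The agent's only write capability: plain files, inside one directory."""
    path = (OUTPUT_DIR / filename).resolve()
    if OUTPUT_DIR.resolve() not in path.parents:
        raise PermissionError("writes outside the sandbox directory are denied")
    path.write_text(content)
    return f"wrote {len(content)} characters to {path.name}"
```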

Establish Ethical Guidelines and Governance

It's critical to set up clear ethical guidelines and governance frameworks for the responsible deployment and operation of LLMs and autonomous AI agents. These frameworks should manage acceptable use and ensure AI actions are aligned with organizational and societal ethical standards. This governance supports not just compliance but also fosters trust and reliability in AI applications.

Regular Audits and Monitoring

Continuously monitor the behavior of LLMs to detect any actions that might indicate Excessive Agency. Regular audits can help ensure that the model is operating within its defined limits and that any deviations are quickly addressed.
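
One concrete monitoring hook, sketched in Python (the logger name and record fields are illustrative): wrap every tool so each model-initiated invocation leaves a structured audit record that can be reviewed for out-of-scope behavior.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("llm.audit")

def audited(tool):
    """Decorator: record every model-initiated call to a tool."""
    def wrapper(*args, **kwargs):
        audit_log.info(json.dumps({
            "ts": time.time(),
            "tool": tool.__name__,
            "args": repr(args),
            "kwargs": repr(kwargs),
        }))
        return tool(*args, **kwargs)
    return wrapper

# usage: safe_write = audited(write_file)
```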

Developer Training and Awareness

Developers should be educated about the potential risks associated with LLMs, for example using the OWASP Top 10 for LLM Applications as a reference. Understanding what the model should and should not do helps them recognize and report unexpected behaviors.

Closing: Protecting your LLM-enabled applications

As businesses integrate AI and LLMs more deeply into their operations, the need for robust security measures becomes critical. This post has explored the various facets of Excessive Agency and its potential risks. Future developments in AI are expected to involve ever more agency, making this topic increasingly relevant. To address these risks effectively, tailored security strategies such as those described above are invaluable.

LLM penetration testing assesses and reinforces the defenses of a model's implementation. By identifying vulnerabilities before they impact your systems, these assessments not only protect but also optimize the performance of AI applications, ensuring that AI technologies can deliver their full potential safely and reliably.

Investing in penetration testing is a prudent step toward maintaining a secure and trustworthy AI environment, enabling innovation to advance without undue risk.

Explore other examples of emerging LLM security risks on the Cobalt blog, and see how Cobalt helps companies secure their LLM-enabled applications with AI penetration testing services.

About Adam Lundqvist
Adam Lundqvist is an Engineering Director at Cobalt, where his work sits at the intersection of artificial intelligence and offensive security. Steering the data and infrastructure teams, Adam is a driving force behind the adoption of cutting-edge AI solutions that bolster the effectiveness of Cobalt's security products and its community of security professionals. With a career spanning over two decades, Adam has evolved from a hands-on developer to a strategic leader, amassing a wealth of technical expertise. His nuanced understanding of cybersecurity and the tech world, coupled with his talent for motivating his teams through a collaborative and visionary approach, positions him as a pivotal figure in translating complex technical initiatives into strategic business outcomes. Beyond the digital battleground, Adam is a devoted family man, treasuring time with his partner and their three children. His leisure time reflects his adventurous spirit, whether he's downhill skiing, playing ice hockey, or tackling the grueling challenge of mountain marathons. Adam relishes stepping out of his comfort zone, continually seeking the thrill of new and demanding experiences.