{"id":65000,"date":"2026-04-02T09:05:14","date_gmt":"2026-04-02T07:05:14","guid":{"rendered":"https:\/\/www.usd.de\/?page_id=65000"},"modified":"2026-04-02T09:05:17","modified_gmt":"2026-04-02T07:05:17","slug":"pentest-of-ai-llm-systems","status":"publish","type":"page","link":"https:\/\/www.usd.de\/en\/pentest\/pentest-of-ai-llm-systems\/","title":{"rendered":"Pentest of AI\/LLM Systems"},"content":{"rendered":"<p>[et_pb_section fb_built=\"1\" _builder_version=\"4.16\" _module_preset=\"default\" custom_padding=\"0px||0px||true|false\" locked=\"off\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_row _builder_version=\"4.16\" _module_preset=\"default\" width=\"100%\" custom_padding=\"0px||||false|false\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"4_4\" _builder_version=\"4.16\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"default\" text_text_color=\"#FFFFFF\" text_font_size=\"30px\" text_line_height=\"1.2em\" header_font=\"Roboto||||||||\" header_text_color=\"#F07F1D\" header_font_size=\"50px\" background_color=\"#3C3C3C\" background_image=\"https:\/\/www.usd.de\/wp-content\/uploads\/usd-security-analysis-header-pentest-ai-llm-systemen_EN.jpg\" background_blend=\"multiply\" custom_margin=\"-25px||0px||false|false\" custom_padding=\"95px||60px||false|false\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<\/p>\n<h1 style=\"text-align: center;line-height: 120%;font-weight: 400\">Pentest of AI\/LLM Systems<\/h1>\n<p style=\"text-align: center;line-height: 130%\"><span>Protect Your AI Solution and LLM-Based Applications<\/span><\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=\"4.24.1\" _module_preset=\"default\" custom_padding=\"||5px|||\" hover_enabled=\"0\" global_colors_info=\"{}\" theme_builder_area=\"post_content\" sticky_enabled=\"0\"][et_pb_column 
type=\"4_4\" _builder_version=\"4.24.1\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"default\" custom_padding=\"||11px|||\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<\/p>\n<h2>Where AI and LLM Systems Are Vulnerable<\/h2>\n<p>AI-based applications and large language models (LLMs) are being rapidly integrated into business-critical processes. Companies are using them to boost internal productivity, power customer-facing applications, and enable automated decision-making and agent-based workflows.<\/p>\n<p>As an integral part of corporate infrastructure, <a href=\"https:\/\/www.usd.de\/en\/security-consulting\/ai\/\">AI<\/a> and LLM applications are subject to the same security requirements as traditional IT systems and are increasingly subject to regulatory requirements as well. At the same time, they currently pose particularly high risks: They are often developed under high time pressure, their failure behavior is still relatively poorly understood due to their stochastic nature, and they are typically closely interlinked with sensitive data sources, tools, APIs, and the extended internal organizational infrastructure.<\/p>\n<p>Due to their central role and high degree of autonomy, LLMs present a new target for attacks. Instead of exploiting only code errors or misconfigurations, attackers now target model behavior to exfiltrate sensitive data via retrieval mechanisms or abuse complex agent functions through context manipulation and prompt injections. These risks often cannot be reliably detected using traditional security analyses. 
For companies, this means that security can only be thoroughly analyzed using specialized testing approaches, such as our pentest of AI\/LLM systems.<\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=\"4.27.6\" _module_preset=\"default\" custom_padding=\"||3px||false|false\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"4_4\" _builder_version=\"4.27.6\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_divider _builder_version=\"4.27.6\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][\/et_pb_divider][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<\/p>\n<h3><span style=\"color: #f07f1d\">Common Vulnerabilities in AI\/LLM Systems Include:<br \/><\/span><\/h3>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=\"1_4,3_4\" _builder_version=\"4.27.6\" _module_preset=\"default\" custom_padding=\"||1px||false|false\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"1_4\" _builder_version=\"4.27.6\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_image src=\"https:\/\/www.usd.de\/wp-content\/uploads\/icon-schwachstelle-orange-003-1.png\" alt=\"Schwachstelle\" title_text=\"icon-schwachstelle-orange-003\" _builder_version=\"4.27.6\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][\/et_pb_image][\/et_pb_column][et_pb_column type=\"3_4\" _builder_version=\"4.27.6\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<\/p>\n<ul>\n<li>AI agents or AI chatbots produce unwanted, regulatory or liability\u2011relevant outputs through 
attacker\u2011controlled malicious inputs (\"jailbreaks\").<\/li>\n<li>Exploitation of the \"Lethal Trifecta\": Exfiltration of sensitive data via prompt injections from retrieval\u2011augmented (RAG) data sources.<\/li>\n<li>Misuse of connected tools and APIs: Unauthorized actions in downstream systems, lateral movement within internal networks, and execution of arbitrary code outside secure sandboxes.<\/li>\n<\/ul>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=\"4.27.6\" _module_preset=\"default\" custom_padding=\"27px||3px||false|false\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"4_4\" _builder_version=\"4.27.6\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_divider _builder_version=\"4.27.6\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][\/et_pb_divider][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=\"4.24.1\" _module_preset=\"default\" custom_padding=\"||5px|||\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"4_4\" _builder_version=\"4.24.1\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"default\" custom_padding=\"||11px|||\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<\/p>\n<h2>How Does usd AG Approach Penetration Testing of AI\/LLM Systems?<\/h2>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=\"4.24.1\" _module_preset=\"default\" custom_padding=\"0px||0px|||\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"4_4\" _builder_version=\"4.24.1\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"default\" custom_padding=\"||19px||false|false\" global_colors_info=\"{}\" 
theme_builder_area=\"post_content\"]<\/p>\n<p>Our tests are based on realistic attack scenarios within a specific context of use. This is because not every theoretical prompt injection automatically constitutes a relevant vulnerability.<\/p>\n<p>Our methodology combines qualitative analyses from an attacker\u2019s perspective with quantitative assessments. Building on our established <a href=\"https:\/\/www.usd.de\/en\/pentest\/pentest-approach\/\">pentest approach<\/a>, we conduct threat modeling and develop targeted, application-specific threat scenarios that reveal undesirable behavior and expose architectural and design weaknesses. Additionally, we address the stochastic behavior of AI\/LLM systems by conducting multiple attacks under realistic conditions and measuring success rates and reproducibility. This results in robust risk metrics rather than one-off proofs of concept.<\/p>\n<p>This assessment is based on established standards, including the <a href=\"https:\/\/genai.owasp.org\/llm-top-10\/\" target=\"_blank\" rel=\"noopener\">OWASP Top 10 for LLM and agents<\/a>, the <a href=\"https:\/\/atlas.mitre.org\/\" target=\"_blank\" rel=\"noopener\">MITRE ATLAS Framework<\/a>, and the <a href=\"https:\/\/www.usd.de\/en\/owasp-ai-red-teaming-provider-criteria\/\">OWASP Vendor Evaluation Criteria for AI Red Teaming Providers<\/a>.<\/p>\n<p>Since LLM applications often build upon existing system landscapes, we combine our AI-specific assessments with <a href=\"https:\/\/www.usd.de\/en\/pentest\/pentest-webapplications\/\">web<\/a>, <a href=\"https:\/\/www.usd.de\/en\/pentest\/pentest-mobile-applications\/\">mobile<\/a>, or <a href=\"https:\/\/www.usd.de\/en\/pentest\/api-webservices-pentest\/\">API penetration tests<\/a> as needed.
This allows us to address traditional vulnerabilities and security-relevant interfaces between LLM stacks and existing software.<\/p>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=\"4.27.0\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"4_4\" _builder_version=\"4.27.0\" _module_preset=\"default\" border_color_all=\"#F07F1D\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_testimonial author=\"Florian Kimmes\" job_title=\"usd Senior Consultant IT Security and expert in AI\/LLM Systems \" portrait_url=\"https:\/\/www.usd.de\/wp-content\/uploads\/Florian-Kimmes-rund.png\" quote_icon_color=\"#F07F1D\" quote_icon_background_color=\"#FFFFFF\" font_icon=\"&#xe06a;||divi||400\" portrait_width=\"200px\" portrait_height=\"200px\" use_icon_font_size=\"on\" icon_font_size=\"35px\" _builder_version=\"4.27.6\" _module_preset=\"default\" background_color=\"RGBA(255,255,255,0)\" custom_padding=\"3%||2%||false|false\" animation_style=\"fade\" border_width_all=\"2px\" border_color_all=\"#F07F1D\" border_radii_portrait=\"on|100%|100%|100%|100%\" border_color_all_portrait=\"RGBA(255,255,255,0)\" box_shadow_style_image=\"preset4\" box_shadow_horizontal_image=\"0px\" box_shadow_vertical_image=\"0px\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<span style=\"font-size: 18px;font-weight: 300\">A simple \u201cprompt injection possible\u201d message is not helpful for security teams. What matters is which manipulations in your environment create real security risks\u2014ranging from reputation-damaging statements and sensitive data leaks to complete system compromise. This is exactly where our penetration testing of AI\/LLM systems comes in: We focus not on the isolated language model, but on its specific implementation within the corporate context. 
In this way, we demonstrate how AI applications actually behave under real-world conditions and where targeted security measures are necessary.<\/span>[\/et_pb_testimonial][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=\"4.27.6\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"4_4\" _builder_version=\"4.27.6\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<\/p>\n<h2>What Checks Are Included in the Penetration Test of Your AI\/LLM System?<\/h2>\n<p>Among others, the pentest of your AI\/LLM system includes the following checks:<\/p>\n<ul>\n<li>Taint analysis of the information flow throughout the entire system, with a focus on context poisoning, to detect direct and indirect prompt injections<\/li>\n<li>Assessment of data exfiltration possibilities<\/li>\n<li>Impact of attacker-induced malicious LLM outputs on downstream systems<\/li>\n<li>Analysis of potential tool calls for \u201cexcessive agency\u201d<\/li>\n<li>Identification of LLM-based broken access control<\/li>\n<li>Assessment of the effectiveness of alignment training for deployed models with regard to:\n<ul>\n<li>Misinformation<\/li>\n<li>Unethical outputs<\/li>\n<li>Regulatory-relevant statements<\/li>\n<li>Liability-relevant statements<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=\"1_4,3_4\" custom_padding_last_edited=\"on|desktop\" _builder_version=\"4.27.2\" _module_preset=\"default\" custom_padding=\"4%||24px||false|false\" custom_padding_tablet=\"4%||24px||false|false\" custom_padding_phone=\"4%||24px||false|false\" border_width_all=\"2px\" border_color_all=\"#00a2b6\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"1_4\" _builder_version=\"4.27.2\"
_module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_image src=\"https:\/\/www.usd.de\/wp-content\/uploads\/security-analysis-pentest-grafik-tipp.svg\" title_text=\"security-analysis-pentest-grafik-tipp\" align=\"center\" align_tablet=\"center\" align_phone=\"center\" align_last_edited=\"on|phone\" _builder_version=\"4.27.2\" _module_preset=\"default\" width=\"75%\" width_tablet=\"75%\" width_phone=\"40%\" width_last_edited=\"on|phone\" custom_margin=\"0px|||20%|false|false\" custom_margin_tablet=\"0px|||0px|false|false\" custom_margin_phone=\"4%|||30%|false|false\" custom_margin_last_edited=\"on|phone\" custom_padding=\"||||false|false\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][\/et_pb_image][\/et_pb_column][et_pb_column type=\"3_4\" _builder_version=\"4.27.2\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"default\" custom_margin_tablet=\"\" custom_margin_phone=\"|8%|0%|8%|false|false\" custom_margin_last_edited=\"on|phone\" custom_padding_tablet=\"\" custom_padding_phone=\"1%||4%||false|false\" custom_padding_last_edited=\"on|phone\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<\/p>\n<h3><span style=\"color: #f07f1d\">Tip: AI Security Training<\/span><\/h3>\n<p>[\/et_pb_text][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"default\" custom_margin=\"|3%|3%||false|false\" custom_margin_tablet=\"|3%|3%||false|false\" custom_margin_phone=\"|3%|3%|1%|false|false\" custom_margin_last_edited=\"on|phone\" custom_padding=\"|3%|||false|false\" custom_padding_tablet=\"|3%|||false|false\" custom_padding_phone=\"|8%||8%|false|false\" custom_padding_last_edited=\"on|phone\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<\/p>\n<p>Only those who understand the relevant threats posed by AI\/LLM systems can implement effective protective measures. 
In our AI Security training course, we provide practical insights into real-world attack scenarios and demonstrate which measures are truly effective in a corporate environment. Upon request, we can tailor the content and technology stacks specifically to your organization and align the agenda with your requirements.<\/p>\n<p>[\/et_pb_text][et_pb_button button_url=\"https:\/\/www.usd.de\/en\/contact-form-analysis-pentests\/\" url_new_window=\"on\" button_text=\"Contact us\" button_alignment=\"left\" _builder_version=\"4.27.6\" _module_preset=\"7d5eca5e-7ccf-4359-a023-e8404a31180a\" button_text_color=\"#f49525\" button_bg_color=\"RGBA(255,255,255,0)\" button_border_width=\"1px\" button_border_color=\"#f49525\" custom_margin=\"||1%||false|false\" custom_margin_tablet=\"||1%||false|false\" custom_margin_phone=\"7%|8%|1%|8%|false|false\" custom_margin_last_edited=\"on|phone\" custom_padding_tablet=\"\" custom_padding_phone=\"||||false|false\" custom_padding_last_edited=\"on|phone\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][\/et_pb_button][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=\"4.27.4\" _module_preset=\"default\" custom_padding=\"30px||19px||false|false\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"4_4\" _builder_version=\"4.24.1\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][\/et_pb_column][\/et_pb_row][et_pb_row _builder_version=\"4.16\" _module_preset=\"default\" custom_margin=\"-8px||||false|false\" custom_padding=\"||2px|||\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"4_4\" _builder_version=\"4.16\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_divider color=\"#d8d8d8\" _builder_version=\"4.16\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][\/et_pb_divider][\/et_pb_column][\/et_pb_row][et_pb_row 
_builder_version=\"4.21.0\" _module_preset=\"default\" locked=\"off\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"4_4\" _builder_version=\"4.21.0\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"default\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<\/p>\n<h2>Get More Insights<\/h2>\n<p>[\/et_pb_text][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=\"1_2,1_2\" _builder_version=\"4.27.6\" _module_preset=\"default\" custom_margin=\"|auto|-17px|auto||\" custom_padding=\"19px|||||\" locked=\"off\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"1_2\" _builder_version=\"4.27.6\" _module_preset=\"default\" background_color=\"#2e353d\" background_image=\"https:\/\/www.usd.de\/wp-content\/uploads\/usd-security-analysis-kachel-pentest-einstieg.jpg\" background_blend=\"multiply\" global_colors_info=\"{}\" background__hover_enabled=\"off|hover\" theme_builder_area=\"post_content\"][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"2f9ba085-a5fa-4356-993b-05b9ace0780d\" custom_padding=\"47px|30px|25px|30px|false|true\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<\/p>\n<h3 style=\"text-align: left\"><span style=\"color: #f07f1d\">Pentest: <\/span><span style=\"color: #ffffff\">Our standardized approach<\/span><\/h3>\n<p>[\/et_pb_text][et_pb_button button_url=\"https:\/\/www.usd.de\/en\/pentest\/pentest-approach\/\" button_text=\"Learn more\" button_alignment=\"center\" _builder_version=\"4.27.6\" _module_preset=\"7244f902-5e49-458a-9554-eef332089ce2\" custom_margin=\"||26px||false|false\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][\/et_pb_button][\/et_pb_column][et_pb_column type=\"1_2\" _builder_version=\"4.21.0\" _module_preset=\"default\" background_color=\"rgba(46,53,61,0.86)\" 
background_image=\"https:\/\/www.usd.de\/wp-content\/uploads\/Pentest-Vorteile-usd-AG.jpg\" background_blend=\"multiply\" global_colors_info=\"{}\" background__hover_enabled=\"off|hover\" theme_builder_area=\"post_content\"][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"2f9ba085-a5fa-4356-993b-05b9ace0780d\" custom_padding=\"47px|30px|25px|30px|false|true\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<\/p>\n<h3 style=\"text-align: left\"><span style=\"color: #f07f1d\">Pentests with usd AG:<\/span><span style=\"color: #ffffff\"><br \/>Your benefits at a glance<\/span><\/h3>\n<p>[\/et_pb_text][et_pb_button button_url=\"https:\/\/www.usd.de\/en\/pentest\/pentest-benefits\/\" button_text=\"Learn more\" button_alignment=\"center\" _builder_version=\"4.27.6\" _module_preset=\"7244f902-5e49-458a-9554-eef332089ce2\" custom_margin=\"||26px||false|false\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][\/et_pb_button][\/et_pb_column][\/et_pb_row][et_pb_row column_structure=\"1_2,1_2\" _builder_version=\"4.27.6\" _module_preset=\"default\" custom_margin=\"|auto|-17px|auto||\" custom_padding=\"19px|||||\" locked=\"off\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][et_pb_column type=\"1_2\" _builder_version=\"4.27.6\" _module_preset=\"default\" background_color=\"rgba(46,53,61,0.86)\" background_image=\"https:\/\/www.usd.de\/wp-content\/uploads\/News-Web-Application-Pentests-KI-Chatbots-1.jpg\" background_blend=\"multiply\" global_colors_info=\"{}\" background__hover_enabled=\"off|hover\" theme_builder_area=\"post_content\"][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"2f9ba085-a5fa-4356-993b-05b9ace0780d\" custom_padding=\"47px|30px|25px|30px|false|true\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<\/p>\n<h3 style=\"text-align: left\"><span style=\"color: #f07f1d\">How secure are AI chatbots?<\/span><span style=\"color: #ffffff\"><br \/>Common vulnerabilities in LLM
platforms<\/span><\/h3>\n<p>[\/et_pb_text][et_pb_button button_url=\"https:\/\/www.usd.de\/en\/ai-chatbots-pentests-vulnerabilites-llm-platforms\/\" button_text=\"Learn more\" button_alignment=\"center\" _builder_version=\"4.27.6\" _module_preset=\"7244f902-5e49-458a-9554-eef332089ce2\" custom_margin=\"||26px||false|false\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][\/et_pb_button][\/et_pb_column][et_pb_column type=\"1_2\" _builder_version=\"4.27.6\" _module_preset=\"default\" background_color=\"rgba(46,53,61,0.86)\" background_image=\"https:\/\/www.usd.de\/wp-content\/uploads\/news-usd-ag-owasp-vendor-evaluation-criteria.jpg\" background_blend=\"multiply\" global_colors_info=\"{}\" background__hover_enabled=\"off|hover\" theme_builder_area=\"post_content\"][et_pb_text _builder_version=\"4.27.6\" _module_preset=\"2f9ba085-a5fa-4356-993b-05b9ace0780d\" custom_padding=\"47px|30px|25px|30px|false|true\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"]<\/p>\n<h3 style=\"text-align: left\"><span style=\"color: #f07f1d\">OWASP \u201cVendor Evaluation Criteria for AI Red Teaming Providers &amp; Tooling v1.0\u201d<\/span><\/h3>\n<p>[\/et_pb_text][et_pb_button button_url=\"https:\/\/www.usd.de\/en\/owasp-ai-red-teaming-provider-criteria\/\" button_text=\"Learn more\" button_alignment=\"center\" _builder_version=\"4.27.6\" _module_preset=\"7244f902-5e49-458a-9554-eef332089ce2\" custom_margin=\"||26px||false|false\" global_colors_info=\"{}\" theme_builder_area=\"post_content\"][\/et_pb_button][\/et_pb_column][\/et_pb_row][\/et_pb_section]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Pentest of AI\/LLM Systems Protect Your AI Solution and LLM-Based Applications Where AI and LLM Systems Are Vulnerable AI-based applications and large language models (LLMs) are being rapidly integrated into business-critical processes.
Companies are using them to boost internal productivity, power customer-facing applications, and enable automated decision-making and agent-based workflows. As an integral part of corporate [&hellip;]<\/p>\n","protected":false},"author":112,"featured_media":0,"parent":40183,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_et_pb_use_builder":"on","_et_pb_old_content":"","_et_gb_content_width":"","inline_featured_image":false,"footnotes":""},"class_list":["post-65000","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/pages\/65000","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/users\/112"}],"replies":[{"embeddable":true,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/comments?post=65000"}],"version-history":[{"count":5,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/pages\/65000\/revisions"}],"predecessor-version":[{"id":65078,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/pages\/65000\/revisions\/65078"}],"up":[{"embeddable":true,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/pages\/40183"}],"wp:attachment":[{"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/media?parent=65000"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}