{"id":60375,"date":"2025-08-21T08:50:26","date_gmt":"2025-08-21T06:50:26","guid":{"rendered":"https:\/\/www.usd.de\/?p=60375"},"modified":"2025-08-21T08:50:29","modified_gmt":"2025-08-21T06:50:29","slug":"bsi-ai-criteria-catalogue-finance-administration","status":"publish","type":"post","link":"https:\/\/www.usd.de\/en\/bsi-ai-criteria-catalogue-finance-administration\/","title":{"rendered":"New BSI Criteria Catalogues: Guidelines for the Use of AI in the Financial and Administrative Sectors"},"content":{"rendered":"\n<p>The German Federal Office for Information Security (BSI) has published two new sets of criteria for evaluating Artificial Intelligence (AI). They are intended for federal government organizations as well as companies and institutions in the financial sector.<\/p>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">BSI criteria catalogues and EU AI Act<\/h2>\n\n\n\n<p>AI systems are increasingly taking over decisions in safety-critical, highly regulated, or particularly sensitive areas, such as fraud prevention, identity verification, or risk assessment processes. At the same time, there are growing demands for transparency, tamper-proofing, and disclosure of how such systems work.&nbsp;<\/p>\n\n\n\n<p>With its latest publication, the BSI has significantly clarified the requirements for the use of AI in Germany \u2013 particularly in the context of the EU AI Act. The Act came into force in August 2024 and, for the first time, establishes a binding, EU-wide legal framework for the use of AI systems. It sets clear standards for security, transparency, and accountability.<\/p>\n\n\n\n<p>The EU AI Act will be implemented in stages until 2031. 
A key milestone was reached on 2 August 2025: Since then, essential regulations have been in force, including those governing GPAI models, governance structures, and the work of so-called \u201cnotified bodies\u201d that assess high-risk AI systems.<\/p>\n\n\n\n<div style=\"height:23px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">What should you know about the BSI criteria catalogues?&nbsp;<\/h2>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Which target groups are being addressed?&nbsp;<\/h3>\n\n\n\n<p>There are two customized criteria catalogues for the administrative and financial sectors:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>\"Kriterienkatalog f\u00fcr generative KI in der Verwaltung\" (Criteria catalogue for generative AI in administration, only available in German)&nbsp;<\/li>\n\n\n\n<li>\"Test Criteria Catalogue for AI Systems in Finance\" (only available in English)&nbsp;<\/li>\n<\/ul>\n\n\n\n<p>Although both catalogues are intended for specific sectors, as indicated by their names and introductions, they can also be used by organizations in other industries.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What does the criteria catalogue for AI in the federal government require?<\/h3>\n\n\n\n<p>The <a href=\"https:\/\/www.bsi.bund.de\/SharedDocs\/Downloads\/DE\/BSI\/KI\/Kriterienkatalog_KI-Modelle_Bundesverwaltung.pdf?__blob=publicationFile&amp;v=3\" target=\"_blank\" rel=\"noopener\">criteria catalogue<\/a> is currently designed as a non-binding guide and pursues a holistic, risk-based approach to assessment and regulation. The focus is on the safe, transparent, and traceable use of AI in public authorities. It takes into account the entire life cycle of AI systems, from development through operation to decommissioning. 
In addition, the catalogue includes guidelines for risk analysis, documentation, and regular review of the AI systems used.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What requirements does the criteria catalogue set for AI in the financial sector?<\/h3>\n\n\n\n<p>This <a href=\"https:\/\/www.bsi.bund.de\/SharedDocs\/Downloads\/EN\/BSI\/KI\/AI-Finance_Test-Criteria.pdf?__blob=publicationFile&amp;v=3\" target=\"_blank\" rel=\"noopener\">document<\/a> translates the abstract requirements of the EU AI Act into a practical testing framework for banks, financial service providers, and related organizations. It also aims to establish a holistic and risk-based testing approach, covering key topics through comprehensive test criteria and linking procedural issues with technical testing procedures. These include, in particular, aspects such as robustness, data quality, and IT security of AI systems. It also sets requirements for regular audits and for the continuous further development of the AI systems in use.&nbsp;<\/p>\n\n\n\n<div style=\"height:40px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">The opinion of our experts on the criteria catalogues<\/h2>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-media-text is-stacked-on-mobile\" style=\"grid-template-columns:18% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/www.usd.de\/wp-content\/uploads\/\/Nicole-Trebel-rund-1024x1024.png\" alt=\"\" class=\"wp-image-60426 size-full\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>With the first industry-specific criteria catalogues for AI use, the BSI is laying important foundations for the secure and traceable use of AI. 
I would definitely recommend using the catalogues as a template for internal guidelines. However, it is important to note that they leave a great deal of responsibility to the companies themselves. Compared to other security standards such as ISO\/IEC or NIST, the criteria catalogues do not define specific methods, thresholds, or testing processes. In addition, the BSI emphasizes that fulfillment of the criteria does not automatically mean compliance with the EU AI Act, but should only be seen as a \u201cpossible contribution.\u201d Despite the good foundation, integration into your own company therefore requires extensive internal or even external expertise and, in my opinion, should be combined with an analysis of the requirements of the EU AI Act.<\/em><\/p>\n<cite><strong>Dr. Nicole Trebel, Senior Security Consultant, usd AG<\/strong>&nbsp;<\/cite><\/blockquote>\n<\/div><\/div>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-media-text has-media-on-the-right is-stacked-on-mobile\" style=\"grid-template-columns:auto 18%\"><div class=\"wp-block-media-text__content\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>The criteria are structured in such a detailed manner that they not only form a good basis for guidelines, but are also particularly suitable for internal and external audits. I can definitely recommend checking the security level of your AI systems against the criteria catalogue, or having it checked, in order to develop a roadmap for preparing for the requirements of the EU AI Act. 
In the financial sector, for example, the first audits of this type are already underway.<\/em><\/p>\n<cite><strong>Raphael Heinlein, Managing Security Auditor, usd AG<\/strong><\/cite><\/blockquote>\n<\/div><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"446\" height=\"446\" src=\"https:\/\/www.usd.de\/wp-content\/uploads\/Raphael-Heinlein_rund.jpg\" alt=\"\" class=\"wp-image-58412 size-full\" \/><\/figure><\/div>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<div class=\"wp-block-media-text is-stacked-on-mobile\" style=\"grid-template-columns:18% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"384\" height=\"384\" src=\"https:\/\/www.usd.de\/wp-content\/uploads\/software-security-zitat-st.jpg\" alt=\"Newspost Serie Software Security Zitat Stephan Neumann\" class=\"wp-image-20758 size-full\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>In my opinion, the catalogue combines the methods of auditing and technical security testing in an exemplary manner. This approach is particularly promising, as the combination ensures #moresecurity. When it comes to meeting the requirements, we can draw on our expertise and experience with pentests of LLM applications. 
As Nicole mentioned, because implementation is left to the organizations themselves, it must always be checked individually which measures actually apply. That is precisely where the strength of our pentesters lies.<\/em><\/p>\n<cite><strong>Stephan Neumann, Head of usd HeroLab<\/strong><\/cite><\/blockquote>\n<\/div><\/div>\n\n\n\n<div style=\"height:15px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<p>Are you dealing with the BSI requirements and do you need support with implementation, audits, or technical testing? <a href=\"https:\/\/www.usd.de\/en\/contact-form-analysis-pentests\/\">Contact us<\/a>; our security experts will be happy to help.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The German Federal Office for Information Security (BSI) has published two new sets of criteria for evaluating Artificial Intelligence (AI). They are intended for federal government organizations as well as companies and institutions in the financial sector. BSI criteria catalogues and EU AI Act AI systems are increasingly taking over decisions in safety-critical, highly regulated, [&hellip;]<\/p>\n","protected":false},"author":90,"featured_media":60458,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_et_pb_use_builder":"off","_et_pb_old_content":"<!-- wp:paragraph -->\n<p>Das Bundesamt f\u00fcr Sicherheit in der Informationstechnik (BSI) hat zwei neue Kriterienkataloge zur Bewertung von K\u00fcnstlicher Intelligenz (KI) ver\u00f6ffentlicht. 
Sie richten sich an Organisationen der Bundesverwaltung sowie an Unternehmen und Institutionen des Finanzsektors.<\/p>\n<!-- \/wp:paragraph -->","_et_gb_content_width":"","inline_featured_image":false,"footnotes":""},"categories":[410,373,374,389],"tags":[14474,14452,14453,14454,449,14473,14455,14456,6689,3598,14457,14458,14459],"class_list":["post-60375","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-financial-sector-compliance-en","category-news-en","category-pentests-security-analyses-en","category-security-audits-en","tag-administration","tag-ai-en","tag-ai-security-en","tag-artificial-intelligence-en","tag-bsi-en","tag-bsi-criteria-catalogue","tag-bsi-kriterienkatalog-en","tag-eu-ai-act-en","tag-financial-sector","tag-finanzsektor-en","tag-ki-en","tag-kuenstliche-intelligenz-en","tag-verwaltung-en"],"_links":{"self":[{"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/posts\/60375","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/users\/90"}],"replies":[{"embeddable":true,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/comments?post=60375"}],"version-history":[{"count":5,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/posts\/60375\/revisions"}],"predecessor-version":[{"id":60454,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/posts\/60375\/revisions\/60454"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/media\/60458"}],"wp:attachment":[{"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/media?parent
=60375"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/categories?post=60375"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.usd.de\/en\/wp-json\/wp\/v2\/tags?post=60375"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}