Image: a hand holding a virtual sign with the text "AI", with code visible in the background

New BSI Criteria Catalogues: Guidelines for the Use of AI in the Financial and Administrative Sectors

21 August 2025

The German Federal Office for Information Security (BSI) has published two new sets of criteria for evaluating Artificial Intelligence (AI). They are intended for federal government organizations as well as companies and institutions in the financial sector.

BSI criteria catalogues and EU AI Act

AI systems are increasingly taking over decisions in safety-critical, highly regulated, or particularly sensitive areas, such as fraud prevention, identity verification, or risk assessment processes. At the same time, there are growing demands for transparency, tamper-proofing, and disclosure of how such systems work. 

With its latest publication, the BSI has significantly clarified the requirements for the use of AI in Germany – particularly in the context of the EU AI Act. The Act came into force in August 2024 and, for the first time, establishes a binding, EU-wide legal framework for the use of AI systems. It sets clear standards for security, transparency, and accountability.

The EU AI Act will be implemented in stages until 2031. A key milestone was reached on 2 August 2025: since then, essential regulations have been in force, including those governing general-purpose AI (GPAI) models, governance structures, and the work of so-called “notified bodies” that assess high-risk AI systems.

What is there to know about the BSI criteria catalogues? 

Which target groups are being addressed? 

There are two customized criteria catalogues for the administrative and financial sectors:

  • "Kriterienkatalog für generative KI in der Verwaltung" (Criteria catalog for generative AI in administration, only available in German) 
  • "Test Criteria Catalogue for AI Systems in Finance" (only available in English) 

Although both catalogues are intended for specific sectors, as indicated by their names and introductions, they can also be used by organizations in other industries.

So what does the criteria catalogue for AI in the federal government call for?

The criteria catalogue is currently designed as a non-binding guide and pursues a holistic, risk-based approach to assessment and regulation. The focus is on the safe, transparent, and traceable use of AI in public authorities. It takes into account the entire life cycle of AI systems, from development and operation to decommissioning. In addition, the catalogue includes guidelines for risk analysis, documentation, and regular review of the AI systems used.

What requirements does the criteria catalogue set for AI in the financial sector?

This document translates the abstract requirements of the EU AI Act into a practical testing framework for banks, financial service providers, and related organizations. It also aims to establish a holistic, risk-based testing approach, covering key topics through comprehensive test criteria and linking procedural issues with technical testing procedures. These include, in particular, aspects such as the robustness, data quality, and IT security of AI systems. In addition, it sets requirements for regular audits and the continuous further development of the AI systems in use.
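To make the technical side more tangible, the following minimal Python sketch shows the kind of check such a testing framework might include: a simple robustness probe (prediction stability under small input perturbations) and a basic data-quality summary. The metric, perturbation size, threshold-free scoring, and function names are our own assumptions for illustration; the BSI catalogue itself does not prescribe specific tests, thresholds, or tooling.

    # Illustrative sketch only: the concrete metric, perturbation size, and checks
    # below are assumptions, not requirements taken from the BSI catalogue.
    import numpy as np

    def robustness_score(predict, X, epsilon=0.01, n_trials=10, seed=0):
        """Share of samples whose prediction stays stable under small random
        input perturbations (a simple proxy for model robustness)."""
        rng = np.random.default_rng(seed)
        baseline = predict(X)
        stable = np.ones(len(X), dtype=bool)
        for _ in range(n_trials):
            noise = rng.normal(scale=epsilon, size=X.shape)
            stable &= (predict(X + noise) == baseline)
        return float(stable.mean())

    def data_quality_report(X):
        """Minimal data-quality summary: missing values and constant features."""
        return {
            "missing_ratio": float(np.isnan(X).mean()),
            "constant_features": int((np.nanstd(X, axis=0) == 0).sum()),
        }

    if __name__ == "__main__":
        # Toy model and data purely for demonstration.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 5))
        predict = lambda data: (data.sum(axis=1) > 0).astype(int)
        print("robustness score:", robustness_score(predict, X))
        print("data quality:", data_quality_report(X))

In practice, checks like these would have to be tailored to the specific model, data pipeline, and risk classification of the system under test, and combined with the procedural evidence (documentation, risk analysis, audit trails) that the catalogue calls for.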

The opinion of our experts on the criteria catalogues

With the first industry-specific criteria catalogues for AI use, the BSI is laying important foundations for the secure and traceable use of AI. I would definitely recommend using the catalogues as a template for internal guidelines. However, it is important to note that they leave a great deal of responsibility with companies themselves. Compared to other security standards such as ISO/IEC or NIST, the criteria catalogues do not define specific methods, thresholds, or testing processes. In addition, the BSI emphasizes that fulfilling the criteria does not automatically mean compliance with the EU AI Act, but should only be seen as a “possible contribution.” Despite the good foundation, integrating the catalogues into your own organization therefore requires extensive internal or even external expertise and, in my opinion, should be combined with an analysis of the requirements of the EU AI Act.

Dr. Nicole Trebel, Senior Security Consultant, usd AG 

The criteria are structured in such a detailed manner that they not only form a good basis for guidelines, but are also particularly suitable for internal and external audits. I can definitely recommend assessing the security level of your AI systems against the criteria catalogue, or having it assessed, in order to develop a roadmap for preparing for the requirements of the EU AI Act. In the financial sector, for example, audits of this type are already underway.

Raphael Heinlein, Managing Security Auditor, usd AG

In my opinion, the catalogue combines the methods of auditing and technical security testing in an exemplary manner. This approach is particularly promising, as the combination ensures #moresecurity. When it comes to meeting the requirements, we can draw on our expertise and experience from pentests of LLM applications. As Nicole mentioned, because implementation is left to each organization, it is always necessary to check individually which measures actually apply; this is precisely where the strength of our pentesters lies.

Stephan Neumann, Head of usd HeroLab

Are you dealing with the BSI requirements and need support with implementation, audits, or technical testing? Contact us – our security experts will be happy to help.

Also interesting:

Red Teaming: 5 Questions Every IT Leader Wants Answered

Many companies invest in firewalls, endpoint protection, and awareness training, assuming that this puts them in a strong position. But the reality is different: attackers do not think in terms of tools, but in terms of targets. They combine technical vulnerabilities...

Stronger Together: usd AG Joins Security Network Munich

We are convinced that real progress in cyber security can only be achieved through open knowledge sharing and collaboration. That is why we contribute our expertise to international committees, promote dialogue within the security community and maintain close...
