Add your offcanvas content in here

The Company

Simplifying your IT-security journey.

LLM Security

The challenge

Comprehensive security for Generative AI and LLM (Large Language Model) applications means application security squared.

In applications with Generative AI, specifically in LLM (Large Language Model) applications, the classic web and app vulnerabilities are joined by a variety of other potential security problems: through the use of external or internal models, the use of plugins, agents, RAG connections, etc. In practice, the entire spectrum of cloud security is usually added.

An illuminating example of the "different" nature of the requirements for the security of Generative AI is the way the pattern of injection vulnerabilities presents itself here. In the field of classical programming, injection can be reliably prevented by applying the appropriate countermeasure (for example, prepared statements against SQL injection). In the field of LLMs, we are currently faced with the fundamental impossibility of a similarly secure countermeasure. Simply because the user input - the prompt - must become part of the overall context so that the system can fulfill its purpose.

Our Services

We offer comprehensive support for the secure design, implementation, and operation of your AI.

Security Assessments & Penetration Tests

We test the security of LLM applications based on OWASP LLM Top 10 and Mitre ATLAS

LLM Security Testing

Workshop: Security for LLM Applications

This workshop provides the tools to anchor security in the design and construction of an AI system.

To the workshop

Training: OWASP Top 10 for LLM Applications

Everything designers, architects and software developers need to know to deploy secure AI systems from the ground up.

Training: LLM Security Best Practices

Mirko Richter

I am available to answer any questions about LLM Security.