Talk at heise devsec
About the talk
Since the release of ChatGPT, no topic has been as hyped as Large Language Models (LLMs), and no topic has seen more innovation. It feels like every company is searching for LLM-related business cases so as not to miss the proverbial boat. As so often happens, features take precedence over quality and security.
In our presentation, we share our insights from internal and external LLM experiments and sort them into the context of the OWASP Top 10 for LLMs. We also compare the threats to LLMs with those to "conventional AI" based on Machine Learning (ML). Finally, we relate the threats of the OWASP Top 10 for LLMs to the known and AI-independent paradigms for application security, such as the separation of data and code.
Prior knowledge
Participants should have basic knowledge of application security, e.g. injection vulnerabilities such as XSS and SQL injection. Sound knowledge of AI topics and of how machine learning and generative AI work is helpful, but not strictly necessary. We will briefly explain the essential aspects in the presentation.
Learning objectives
- After the presentation, participants will have an understanding of the OWASP Top 10 for LLMs.
- They will be able to assess the risks of using LLMs in their own working environment and propose countermeasures.
- They can illustrate overlaps and differences with the risks of ML-based applications and explain parallels to known security paradigms.
- Protection against LLMs and their malicious use is out of scope.
Cologne, Tuesday, September 24, 2024, 10:30–11:15
