Talk at the IT Security Summit: How well do LLMs perform at static code analysis for security?
About the talk
The flood of findings from automated, tool-supported security source code analysis can easily overwhelm software projects: specialists who can classify the results correctly are often lacking, or truly relevant observations get lost among masses of so-called "false positives". What if modern large language models (LLMs), with their often impressive programming skills, could help here?
We have addressed this question in empirical studies spanning thousands of tool findings, comparing different models and evaluating the results: What quality can be expected for which types of findings? What context needs to be provided? How do open-source models compare to the large proprietary models? How do specific prompt adjustments affect the results?
In the talk, we will present our findings and offer practical guidance to developers, architects, and other software development stakeholders who would like to pursue a similar approach in their own projects.
Berlin, Tuesday, June 18, 2024 – 14:15 – 15:00
