Large language models (LLMs) are awesome, but pose a potential cyber threat due to their capacity to generate false responses and follow hidden commands. In a two-part discussion with Chenta Lee from the IBM Security team, it first delves into prompt injection, where a malicious actor can manipulate LLMs into creating false realities and potentially accessing unauthorized data. In the second part, Chenta provides more details and explains how to address these potential threats.
View this media item
View this media item