The original post: /r/cybersecurity by /u/pancakebreakfast on 2024-10-10 16:56:59.

Attacks on large language models (LLMs) take less than a minute to complete on average and, when successful, leak sensitive data 90% of the time, according to Pillar Security.

Pillar’s State of Attacks on GenAI report, published Wednesday, revealed new insights into LLM attacks and jailbreaks, based on telemetry data and real-life attack examples from more than 2,000 AI applications.

The Pillar researchers also found that LLM jailbreaks successfully bypass model guardrails in one out of every five attempts. The speed and ease of these exploits demonstrate the risks posed by the growing generative AI (GenAI) attack surface.

“In the near future, every application will be an AI application; that means that everything we know about security is changing,” Pillar Security CEO and Co-founder Dor Sarig told SC Media.