Cisco: Fine-tuned LLMs are now threat multipliers—22x more likely to go rogue
1 min read
Summary
Cisco’s recent survey highlights rising concerns over malicious use of AI, particularly in relation to Large Language Models (LLMs) that have been fine-tuned for offensive purposes.
The report finds that these offensive LLMs are readily available on the darknet and sold as part of a package, including dashboards, APIs and regular updates, for use in phishing and other attacks.
Cisco’s research also provides insights into how these models can be compromised, and how attackers may attempt to extract training data, poison datasets and evade built-in safety controls.
With prices for these malicious tools dropping, the use of LLMs in cyber attacks is likely to increase and spread among more threat actors.
The research highlights the need for additional security controls to protect LLMs and urges organisations to recognize that LLMs are an emerging attack surface that needs a specific focus.