Summary

  • The Adversarial AI Digest for 25 March 2025 offers news and insights on AI safety and security research, reports, and events
  • A report from the Center for AI Policy suggests that open-source AI models are susceptible to backdoor, supply-chain, and model-manipulation threats
  • ISO 42001 checklists from Rhymetec can help organizations achieve AI governance certification
  • OWASP AI Threat Research details how LLM exploit generation can automate security testing
  • Cato Networks’ threat report highlights the rise of ‘zero-knowledge threat actors’: attackers with no coding experience who managed to create a fully functional Google Chrome infostealer by jailbreaking LLMs
  • Research using NYU CTF Bench evaluates how LLMs perform on 200 Capture the Flag (CTF) cybersecurity challenges, assessing their strengths and weaknesses

By Tal Eliyahu