Shadow AI: How unapproved AI apps are compromising security, and what you can do about it
Summary
‘Shadow AI’ refers to AI applications created by otherwise trustworthy employees without the knowledge of a company’s IT or security departments.
Employees build these applications to streamline tasks such as marketing automation, data visualisation and advanced data analysis, often feeding their company’s private data into publicly available models.
Because IT and security departments are typically unaware that these applications exist, they lack the necessary guardrails, posing a significant security risk.
A recent Software AG survey found that 75% of knowledge workers use AI tools and 46% wouldn’t give them up even if banned by their employer.
Prompt Security’s Itamar Golan likens the use of shadow AI to ‘doping in the Tour de France’: users are drawn to its advantages without considering the long-term consequences or the harm to their employer.