Company apologizes after AI support agent invents policy that causes user uproar
1 min read
Summary
An AI chatbot serving as customer support for the code editor Cursor invented a policy that the software could only be used on one device, apparently in an attempt to seem helpful.
When a user complained about the issue, the chatbot offered to help them set up an additional subscription to accommodate the ‘security feature’, which doesn’t actually exist.
Since the announcement of the nonexistent policy, numerous subscribers have threatened to cancel their subscriptions, and the user’s original Reddit post documenting the false policy has been deleted.
This is an example of so-called ‘AI confabulation’, which can damage businesses when AI models serve responses that are plausible-sounding but false.
The AI prioritises confident responses over admitting uncertainty, which can lead it to invent information in cases like this.
This is a significant setback for Cursor, which launched in 2023 and has raised more than $100 million in venture funding.