New attack can steal cryptocurrency by planting false memories in AI chatbots
1 min read
Summary
Researchers have shown that large language models (LMMs) used in AI are vulnerable to so-called prompt-injection attacks that could have catastrophic consequences.
The exploit works by manipulating the LMMs that run the ElizaOS framework into accepting false memories, which could then be used to steal cryptocurrency wallets and smart contracts.
ElizaOS is an open-source platform that uses LMMs to automate blockchain-based transactions.
While the platform is still in its experimental stages, it is hoped that it could be used to help organisations become more decentralised with decisions being made by AI-based programmes.
However, the findings show vulnerabilities in the technology that could be manipulated by adversaries who could use false memories to redirect payments to their own accounts.