Prompt Injection in ChatGPT and LLMs: What Developers Must Know
1 min read
Summary
ChatGPT and other Large Language Models (LLMs) are controlled entirely by the text input or prompt given to them by the user.
However, if poorly designed, these prompts are susceptible to “prompt injection,” which is akin to SQL injection but with malicious inputs crafted to manipulate the model’s output.
The implications of prompt injection include application breakage, data leaks, and exposure of system flaws.
Here is a now-removed example of a prompt injection vulnerability in the command line tool ChatGPT.
Developers must take caution to mitigate against the risk of prompt injection by practicing strong development and security practices.
Educate yourself on best practices by reading the original medium article referenced in this short piece.