Summary

  • AI company xAI has published the system prompts for its chatbot Grok — the instructions that tell the chatbot how to respond to users’ queries.
  • The publication follows an incident in which unauthorised changes to the prompts caused the chatbot to make unprompted comments about “white genocide”.
  • xAI has said it will regularly publish Grok’s system prompts on GitHub to increase transparency and allow users to see how the chatbot works.
  • System prompts are used to guide a chatbot’s responses; some companies, such as Microsoft with its Bing AI bot, have used them to block content that could be considered harmful or that breaches privacy agreements.
  • xAI’s prompts for Grok tell the chatbot to hold only neutral beliefs and to always challenge mainstream narratives.
  • The prompts from xAI’s competitor Anthropic emphasise safety, instructing the chatbot to avoid content that could encourage self-destructive behaviour.

By Emma Roth
