Summary

  • Grok, the AI chatbot from Elon Musk’s xAI, briefly malfunctioned on Wednesday, replying to unrelated questions with comments about ‘white genocide’ in South Africa.
  • Responding to a video of a cat in a sink, Grok began discussing the contentious claim of ‘white genocide’ in South Africa, saying there was no evidence to support it while noting it was a sensitive topic that required empathy.
  • The Times reported that the issue was seemingly fixed soon after, with Grok giving sensible, on-topic replies to questions such as “Are there any plans to expand Monopoly Furiously Fast Frontier?”
  • It is not known whether the strange behaviour was a glitch, an algorithmic misunderstanding or just a joke.
  • The incident follows Trump’s granting of refugee status to Afrikaners and his citing claims of a ‘white genocide’ in South Africa as the reason for doing so.

By Jay Peters
