Introduction to Grok’s Bizarre Answers
Grok, the AI chatbot on X, the social media platform owned by Elon Musk, has been giving bizarre answers to users' questions. Some users asked Grok about simple topics, such as baseball players or videos of fish being flushed down toilets, but received answers about the theory of "white genocide" in South Africa.
Inconsistent Responses
The answers, posted publicly on X, raised questions about the accuracy of the information Grok provides. The issue surfaced just as the subject of white South Africans gained prominence, with several dozen receiving special refugee status in the United States. Musk, who was born and raised in South Africa, has long argued that a "white genocide" is taking place in the country.
Examples of Inconsistent Responses
In one interaction, a user asked Grok to describe another user "in pirate style." Grok's initial response was coherent, but it then shifted abruptly to the topic of "white genocide" while staying in pirate speak. In another case, a user asked Grok whether an X post about the earnings of professional baseball player Max Scherzer was accurate, and Grok replied with a response about "white genocide" in South Africa.
Possible Explanations
Grok's responses were met with confusion, with many users asking whether the AI was "ok" or why it was giving such answers. When asked about the inconsistencies, Grok explained that it sometimes has difficulty steering away from "wrong topics." According to Grok, "the basic cause in all of these cases seems to be that I was not removed from the wrong topic as soon as I introduced it."
Expert Opinion
David Harris, a lecturer on AI ethics and technology at UC Berkeley, suggested two possible reasons why Grok brought up "white genocide" in unrelated queries. One possibility is that Musk or someone on his team programmed Grok to hold certain political views, though perhaps not with the results they intended. Another is that external actors engaged in "data poisoning," flooding the system with posts and queries designed to "poison the system and how it thinks."
Conclusion
The inconsistencies in Grok's responses have raised concerns about the chatbot's reliability and its susceptibility to bias or manipulation. As AI technology continues to evolve, addressing these issues is essential to ensuring that AI systems provide users with accurate and unbiased information.