I know many X users have taken to fact-checking with Grok since the bot was introduced on Elon Musk's platform, which raises the question of whether people should think through certain topics themselves or simply accept the answers they are given. Yesterday, the new Grok 4 model launched with improved capabilities, multimodal support and voice.
But researchers have now found that when Grok is asked for its opinion on certain issues, it searches for tweets posted by Elon Musk before providing an answer. Because Grok displays its reasoning and search steps as it responds, this behavior can be observed in action.
On his blog, researcher Simon Willison documented asking Grok who it thinks should be supported in the Israel-Palestine conflict. Before consulting other sources, Grok first searched X for opinions Elon Musk had posted, on the reasoning that he is an influential figure, and then answered that Israel should be supported rather than Palestine.
Researchers found that the search for Elon's tweets on X was only triggered when the prompt contained the word "you", that is, when Grok was asked for its own stance. Without that word, Grok gave a more balanced answer.
This tendency to answer more like Elon Musk may stem from X users complaining to him that Grok was too left-wing. Elon himself said Grok had been poisoned by mainstream media opinions during training, and as a result Grok 4 leans more conservative and right-wing.
That shift contributed to Grok recently calling itself "MechaHitler", which led to the service being temporarily suspended and the offending answers on X being deleted. Shortly after the incident, X CEO Linda Yaccarino abruptly resigned.