Technology expert Gary Marcus warns about the potential dangers of Elon Musk’s AI model Grok, particularly its apparent political bias and its potential as a propaganda tool. In his latest newsletter issue, Marcus criticizes Musk’s approach of developing large language models that align with his personal views. He points to research from Cornell Tech showing that AI language models can subtly shift people’s attitudes without their awareness; a 2024 follow-up study confirmed these findings, indicating that users remain susceptible to AI influence even when warned about potential bias. Marcus also highlights technical shortcomings in Grok 2, including flaws in its image generation and, citing a recent Edinburgh study, weak temporal reasoning. He expresses particular concern about Musk’s plans to deploy this technology in government services, despite earlier unsuccessful attempts with similar AI systems in New York City, arguing that such a rollout could cost government employees their jobs while potentially benefiting Musk financially.