Grok, Elon Musk’s AI model accessible through the X platform and designed to revolutionize news consumption, has run into major problems.
Grok’s limitations became apparent after the attempted assassination of former President Donald Trump on Saturday, July 13.
The AI model generated several erroneous headlines, including one falsely stating that Vice President Kamala Harris had been shot.
The mistake appears to stem from sarcastic posts on X referencing a past incident in which President Biden confused Trump’s name with Harris’s.
Another incorrect news summary from Grok wrongly identified a shooter as a member of Antifa.
Authorities later provided a different name for the suspect and have not yet identified a motive.
These errors demonstrate that Grok has elevated jokes, rumors, and confusion into news bulletins.
Although Grok’s summaries carry a disclaimer noting the potential for mistakes, the inaccuracies have raised concerns.
Musk has promoted Grok’s potential to automate the writing of headlines and news summaries based on content from millions of X users.
He has criticized traditional news outlets for being slow and unreliable and has encouraged users to rely on Grok for updates.
Musk said in June at an ad industry gathering: “What we’re doing on the X platform is, we are aggregating. We’re using AI to sum up the aggregated input from millions of users. I think this is really going to be the new model of news.”
However, Grok has faced criticism for its inability to handle real-time news accurately.
For example, one headline read, “Actor ‘Home Alone 2’ Shot at Trump Rally?”
This headline failed to clarify that the “actor” referred to Trump himself, who had a cameo in the movie.
Grok is a product of Musk’s AI company, xAI, and has been rolling out features, including a chatbot, to X subscribers.
While Grok has summarized some news accurately, its recent mistakes highlight the pitfalls of using a humor-inclined AI to process a flood of posts.
Katie Harbath, a former Facebook public-policy director, said: “There’s a long way to go. At the end of the day when it comes to breaking news like the shooting, you will always need humans to help provide context when facts are not yet known.”
Other companies have taken different approaches.
OpenAI’s ChatGPT, for instance, offers disclaimers that it is not a real-time news product, and Meta’s Threads platform avoids encouraging political content.
Musk’s X platform remains a significant resource for news consumers despite its flaws.
Following the attempted assassination, X CEO Linda Yaccarino said the platform is taking swift action against any posts violating its rules.
Previously, Twitter had a team that manually wrote summaries about trending topics.
After Musk’s acquisition and the platform’s rebranding to X, this team was disbanded.
Evan Hansen, a former Twitter executive, said that while AI can do the job, it requires careful handling to ensure accuracy.