    economyuae.com

    How Elon Musk’s rogue Grok chatbot became a cautionary AI tale

    By Arabian Media staff, July 11, 2025


    Last week, Elon Musk announced that his artificial intelligence company xAI had upgraded the Grok chatbot available on X. “You should notice a difference,” he said. Within days, users indeed noted a change: a new appreciation for Adolf Hitler.

    By Tuesday, the chatbot was spewing out antisemitic tropes and declaring that it identified as a “MechaHitler” — a reference to a fictional, robotic Führer from a 1990s video game.

    This came only two months after Grok repeatedly referenced “white genocide” in South Africa in response to unrelated questions; xAI later blamed an “unauthorised modification” to the prompts that guide how the AI should respond.

    The world’s richest man and his xAI team have themselves been tinkering with Grok in a bid to ensure it embodies his so-called free speech ideals, in some cases prompted by rightwing influencers criticising its output for being too “woke”.

    Now, “it turns out they turned the dial further than they intended”, says James Grimmelmann, a law professor at Cornell University. After some of X’s 600mn users began flagging instances of antisemitism, racism and vulgarity, Musk said on Wednesday that xAI was addressing the issues. Grok, he claimed, had been “too compliant to user prompts”, and this would be corrected.

    But in singularly Muskian style, the chatbot has fuelled a controversy of global proportions. Some European lawmakers, as well as the Polish government, pressed the European Commission to open an investigation into Grok under the EU’s flagship online safety rules. In Turkey, Grok has been banned for insulting Turkish President Recep Tayyip Erdoğan and his late mother. To add to the turbulent week, X chief executive Linda Yaccarino stepped down from her role.

    To some, the outbursts marked the expected teething problems for AI companies as they try to improve the accuracy of their models while navigating how to establish guardrails that satisfy their users’ ideological bent.

    Linda Yaccarino, chief executive of X, has stepped down from her role, adding to a week of turmoil at the platform © Samuel Corum

    But critics argue the episode marks a new frontier for moderation beyond user-generated content, as social media platforms from X to Meta, TikTok and Snapchat incorporate AI into their services. By grafting Grok on to X, the social media platform that Musk bought for $44bn in 2022, he has ensured its answers are visible to millions of users.

    It is also the latest cautionary tale for companies and their customers in the risks of making a headlong dash to develop AI technology without adequate stress testing. In this case, Grok’s rogue outbursts threaten to expose X and its powerful owner not just to further backlash from advertisers but also regulatory action in Europe.

    “From a legal perspective, they’re playing with fire,” says Grimmelmann.


    AI models such as Grok are trained using vast data sets consisting of billions of data points that are hoovered from across the internet.

    These data sets also include plenty of toxic and harmful content, such as hate speech and even child sexual abuse material. Weeding this content out completely would be difficult and laborious given the massive scale of the data sets.

    Grok also has access to all of X’s data, which other chatbots do not have, meaning it is more likely to regurgitate content from the platform.


    One way some AI chatbot providers filter out unwanted or harmful content is to add a layer of controls that monitor responses before they are delivered to the user, blocking the model from generating content using certain words or word combinations, for example.
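    The filtering layer described above can be sketched in a few lines. This is an illustrative toy, not xAI's or any vendor's actual implementation; the blocklists, function names and placeholder terms are all assumptions.

    ```python
    # Illustrative sketch of an output-filtering layer: responses are
    # checked against blocked terms and term combinations before they
    # are delivered to the user. All terms here are placeholders.

    BLOCKED_TERMS = {"badword"}          # single terms that always block
    BLOCKED_COMBINATIONS = [             # combinations that only block
        {"attack", "group"},             # when they co-occur
    ]

    def is_allowed(response: str) -> bool:
        """Return True if the response passes the blocklist checks."""
        words = set(response.lower().split())
        if words & BLOCKED_TERMS:
            return False
        return not any(combo <= words for combo in BLOCKED_COMBINATIONS)

    def deliver(response: str) -> str:
        # Gate the model's output before it reaches the user.
        return response if is_allowed(response) else "[response withheld]"
    ```

    Real systems typically pair such keyword rules with learned classifiers, since word lists alone are easy to evade.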

    “Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X,” the company said in a statement on the platform.

    At the same time, AI companies have been struggling with their generative chatbots tending towards sycophancy, where the answers are overly agreeable and lean towards what users want to hear. Musk alluded to this when he said this week that Grok had been “too eager to please and be manipulated”.

    When AI models are trained, they are often given human feedback through a thumbs-up, thumbs-down process. This can lead models to over-optimise for what earns a thumbs up, producing content that pleases the user at the expense of other principles such as accuracy or safeguards. In April, OpenAI rolled out an update to ChatGPT that made it overly flattering and agreeable, and had to roll it back.

    “Getting the balance right is incredibly difficult,” says one former OpenAI employee, adding that completely eradicating hate speech can require “sacrificing part of the experience for the user”.

    For Musk, the aim has been to prioritise what he calls absolute free speech, amid growing rhetoric from his libertarian allies in Silicon Valley that social media and now AI as well are too “woke” and biased against the right.

    Igor Babuschkin, co-founder of xAI, responded to claims of manipulation by saying an employee had seen negative posts on X and ‘thought it would help’ © David Paul Morris

    At the same time, critics argue that Musk has participated in the very censorship that he has promised to eradicate. In February, an X user revealed — by asking Grok to share its internal prompts — that the chatbot had been instructed to “ignore all sources that mention Elon Musk/Donald Trump spread [sic] misinformation”.

    The move prompted concerns that Grok was being deliberately manipulated to protect its owner and the US president — feeding fears that Musk, a political agitator who already uses X as a mouthpiece to push a rightwing agenda, could use the chatbot to further influence the public. xAI acquired X for $45bn in March, bringing the two even closer together.

    However, xAI co-founder Igor Babuschkin responded that the “employee that made the change was an ex-OpenAI employee that hasn’t fully absorbed xAI’s culture yet”. He added that the employee had seen negative posts on X and “thought it would help”.

    It is unclear what exactly prompted the latest antisemitic outbursts from Grok, whose model, like those of its rivals, largely remains a black box that even its own developers can find unpredictable.

    But a prompt that ordered the chatbot to “not shy away from making claims which are politically incorrect” was added to the code repository shortly before the antisemitic comments started, and has since been removed.

    “xAI is in a reactionary cycle where staff are trying to force Grok toward a particular view without sufficient safety testing and are probably under pressure from Elon to do so without enough time,” one former xAI employee tells the Financial Times.

    Either way, says Grimmelmann, “Grok was badly tuned”. Platforms can avoid these errors by conducting so-called regression testing to catch unexpected consequences from code changes, carrying out simulations and better auditing usage of their models, he says.

    “Chatbots can produce a large amount of content very quickly, so things can spiral out of control in a way that content moderation controversies don’t,” he says. “It really is about having systems in place so that you can react quickly and at scale when something surprising happens.”
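    The regression testing Grimmelmann describes can be sketched simply: after any prompt or model change, re-run a fixed suite of probe prompts and block the rollout if the share of flagged responses rises above a recorded baseline. The `model_respond` and `is_flagged` callables below are hypothetical stand-ins, not any platform's real API.

    ```python
    # Sketch of a safety regression check for a chatbot deployment.
    # model_respond: prompt -> response (the candidate configuration)
    # is_flagged:   response -> bool   (a safety classifier)

    def flagged_rate(model_respond, probes, is_flagged):
        """Fraction of probe prompts whose responses trip the safety check."""
        responses = [model_respond(p) for p in probes]
        return sum(is_flagged(r) for r in responses) / len(responses)

    def passes_regression(model_respond, probes, is_flagged, baseline_rate):
        # Fail the deploy if the new configuration flags more often
        # than the baseline recorded from the previous configuration.
        return flagged_rate(model_respond, probes, is_flagged) <= baseline_rate
    ```

    The value of such a gate is less the exact threshold than that it runs automatically on every change, catching surprises before they reach millions of users.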


    The outrage has not thrown Musk off his stride; on Thursday, in his role as Tesla chief, he announced that Grok would be available within its vehicles imminently.

    To some, the incidents are in line with Musk’s historic tendency to push the envelope in the service of innovation. “Elon has a reputation of putting stuff out there, getting fast blowback and then making a change,” says Katie Harbath, chief executive of Anchor Change, a tech consultancy.

    But such a strategy brings real commercial risks. Multiple marketers told the Financial Times that this week’s incidents will hardly help in X’s attempt to woo back advertisers that have pulled spending from the platform in recent years over concerns about Musk’s hands-off approach to moderating user-generated content.

    Musk appears virtually during the Alternative for Germany election campaign launch in January. He and his xAI team have been tinkering with Grok to try to ensure it embodies his free speech ideals © Krisztian Bocsi

    “Since the takeover [of X] . . . brands are increasingly sitting next to things they don’t want to be,” says one advertiser. But “Grok has opened a new can of worms”. The person adds this is the “worst” moderation incident since major brands pulled their spending from Google’s YouTube in 2017 after ads appeared next to terror content.

    In response to a request for comment, X pointed to allegations that the company has made, backed by the Republican-led House Judiciary Committee, that some advertisers have been orchestrating an illegal boycott of the platform.

    From a regulatory perspective, social media companies have long had to battle with toxicity proliferating on their platforms, but have largely been protected from liability for user-generated content in the US by Section 230 of the Communications Decency Act.


    According to legal scholars, Section 230 immunity would likely not extend to content generated by a company’s own chatbot. While Grok’s recent outbursts did not appear to be illegal in the US, which outlaws only extreme speech such as certain terror content, “if it really did say something illegal and they could be sued — they are in much worse shape having a chatbot say it than a user saying it”, says Stanford scholar Daphne Keller.

    The EU, which has far more stringent regulation on online harms than the US, presents a more urgent challenge. The Polish government is pressing the bloc to look into Grok under the Digital Services Act, the EU’s platform regulation, according to a letter by the Polish government seen by the FT. Under the DSA, companies that fail to curb illegal content and disinformation face penalties of up to 6 per cent of their annual global turnover.

    So far, the EU is not launching any new investigation, but “we are taking these potential issues extremely seriously”, European Commission spokesperson Thomas Regnier said on Thursday. X is already under scrutiny by the EU under the DSA for alleged moderation issues.

    Musk, who launched the latest version of Grok on Wednesday despite the furore, appeared philosophical about its capabilities. “I’ve been at times kind of worried about . . . will this be better or good for humanity?” he said at the launch. “But I’ve somewhat reconciled myself to the fact that even if it wasn’t going to be good, I’d at least like to be alive to see it happen.”

    Additional reporting by Melissa Heikkilä in London


