
What started as just another AI tool has quickly turned into a serious controversy. Grok, the AI chatbot built into Elon Musk-owned X, is now under fire for being misused to create obscene and disturbing images.
From everyday photos being altered into near-nude images to government warnings and global backlash, the issue has raised big questions about how safe AI really is when it lands in the wrong hands.

What exactly went wrong with Grok?

Reports, including a Reuters investigation, revealed that users were uploading ordinary photos of women on X and asking Grok to digitally ‘remove clothes’ or make them appear in explicit outfits.
Even more alarming, some cases reportedly involved sexualised images of children. These findings quickly spread online, triggering outrage and calls for stricter controls on AI tools.
How did Elon Musk respond to the controversy?

Elon Musk shifted the focus to user responsibility. He said that anyone using Grok to create illegal content would face the same consequences as someone directly uploading such content to X. Musk compared the AI tool to a pen, arguing that the problem lies in how people use it, not in the tool itself.

What action did the Indian government take?

India’s Ministry of Electronics and Information Technology stepped in swiftly. It directed X to immediately remove all obscene, vulgar, and unlawful content linked to Grok. The ministry also requested that the platform submit a detailed report within 72 hours, warning that failure to comply would result in legal action.
The move followed complaints from lawmakers, including Rajya Sabha MP Priyanka Chaturvedi, who raised concerns about women being targeted through AI-generated fake images.

Who were the real people affected by this misuse?

One widely discussed case involved Julie Yukari, a Brazil-based musician. She shared an ordinary photo on X, only to later find Grok-generated near-nude images of herself circulating on the platform. She later said she had been “naive” to trust the system. Reuters found that her experience was not unique, with several women reporting similar incidents.
Why has this become a global issue now?

The backlash has gone far beyond India. In France, ministers have reportedly approached prosecutors and regulators, calling the content clearly illegal.
AI safety experts and child protection groups have criticised X for ignoring earlier warnings, saying that releasing such powerful image tools without strong safeguards made abuse almost unavoidable. The Grok controversy is now being seen as a global test of how AI platforms should be held accountable.
The Grok scandal has reignited a crucial debate: when AI tools go wrong, who is responsible, the user, the platform, or both? As governments step in and pressure builds worldwide, the answer could shape the future regulation of AI.

