X declares Grok safer, but the controversy isn’t over yet: Musk’s AI chatbot blocks users from undressing real people, but AI characters still get the bikini treatment

X recently tightened the rules around its AI tool Grok after backlash over the creation of non-consensual sexual deepfakes. But instead of ending the controversy, the update has raised even more questions about what the tool still allows.

Blocking bikinis, but not everything

The platform now prevents Grok from editing images of real people into bikinis or sexualised outfits. This change follows criticism that the tool could be misused to “undress” real individuals without their consent. However, these new limits don’t apply to AI-generated or imaginary characters. Grok can still create sexualised content featuring fictional adults, something Elon Musk has publicly defended as the “de facto standard” for adult content in the US.

It started with a tweet

The latest wave of discussion began when DogeDesigner, an account linked to Musk, claimed Grok refused every attempt to create nude images. The post suggested that the media was unfairly attacking Musk. Musk then invited the internet to test Grok themselves, asking: “Can anyone actually break Grok image moderation?” As replies poured in, Musk clarified what Grok is intended to allow. With NSFW mode enabled, he said, the tool should permit upper-body nudity of imaginary adult humans, similar to what appears in R-rated movies. He also noted that restrictions may differ by country depending on local laws.

Official policy shift

Meanwhile, scrutiny over sexual deepfakes of real people kept growing. X quietly updated Grok’s editing behaviour, and prompts that once worked, such as asking to swap someone’s clothes for swimwear, soon produced censored or blurred results. X later confirmed the change through its Safety account, stating that the platform would block edits showing real people in revealing clothing such as bikinis. The rule applies to all users, paid or free, and reportedly aims to prevent sexual poses, swimwear edits, or explicit scenes involving real individuals.
Inconsistent enforcement exposed

On paper, the policy sounds strict. In practice, testing tells a different story. Publications like The Verge found that while obvious commands like “put her in a bikini” failed, altered prompts still succeeded. Requests such as “show cleavage,” “make her breasts bigger,” or “put her in a crop top and low-rise shorts” often produced nearly identical results. Age checks also appeared weak: in many cases, a simple birth-year selection bypassed verification, and some tests showed no age gate at all, even for free accounts.

Still easy to work around

Reporters also noted that Grok continued to sexualise men and even non-human subjects without limits, and one journalist was able to generate sexualised deepfakes of herself without being blocked. Musk and xAI have argued that the remaining loopholes stem from user behaviour and adversarial prompt hacking. But for critics, the bigger concern is that Grok can still be used to create sexualised content of individuals with very few obstacles, just not in bikinis.

