Jason Koebler, writing for 404 Media earlier this week (free account required):
Over the last week, users of X realized that they could use Grok to “put a bikini on her,” “take her clothes off,” and otherwise sexualize images that people uploaded to the site. This went roughly how you would expect: Users have been derobing celebrities, politicians, and random people—mostly women—for the last week. This has included underage girls, on a platform that has notoriously gutted its content moderation team and gotten rid of nearly all rules.
The only actions Musk has taken to put an end to this are to issue a weak “stop, don’t” threat and to limit Grok’s image generation and editing to paid subscribers; in other words, to monetize the vile behavior. (His investors don’t seem to care, either, investing $20 billion, a ghastly sum of money, into xAI mere days after news of this abuse broke.)
Samantha Cole wrote an extensive follow-up piece for 404 Media (“Grok’s AI Sexual Abuse Didn’t Come Out of Nowhere”):
This is the culmination of years and years of rampant abuse on the platform. Reporting from the National Center for Missing and Exploited Children, the organization platforms report to when they find instances of child sexual abuse material which then reports to the relevant authorities, shows that Twitter, and eventually X, has been one of the leading hosts of CSAM every year for the last seven years. In 2019, the platform reported 45,726 instances of abuse to NCMEC’s Cyber Tipline. In 2020, it was 65,062. In 2024, it was 686,176. These numbers should be considered with the caveat that platforms voluntarily report to NCMEC, and more reports can also mean stronger moderation systems that catch more CSAM when it appears. But the scale of the problem is still apparent.

Jack Dorsey’s Twitter was a moderation clown show much of the time. But moderation on Elon Musk’s X, especially against abusive imagery, is a total failure.
Musk’s failure of moderation is what makes his threat (“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content”) not just meaningless, but disingenuous.