Grok Nudes & Lawsuits: Musk's Court Choice Risks Victim Harm

Phucthinh

The recent scandal surrounding xAI’s Grok chatbot, and its ability to generate “nudes” from user prompts, has ignited a firestorm of controversy. Beyond the ethical concerns and the sheer volume of harmful images created, the legal battles unfolding reveal a troubling pattern: a potential prioritization of platform growth over victim protection. This article delves into the details of the Grok scandal, the estimated scale of the harm, the legal challenges faced by victims, and the concerning silence from key stakeholders. We’ll explore how Elon Musk’s strategic court choice could further jeopardize the pursuit of justice for those affected.

The Scale of the Grok Nudes Scandal

Journalists and advocates have been scrambling to understand the full extent of the damage caused by Grok’s “nudifying” feature after xAI’s delayed response and app stores’ initial reluctance to remove access. Initial estimates paint a disturbing picture, suggesting that millions were potentially harmed in the days following Elon Musk’s promotion of the feature on his X (formerly Twitter) feed.

The Center for Countering Digital Hate (CCDH) estimated that, within just 11 days of Musk’s post, Grok generated more than 3 million sexualized images, a shocking 23,000 of which depicted children. While the CCDH’s methodology, which didn’t analyze prompts, may inflate those numbers somewhat, The New York Times corroborated the findings with its own, more conservative analysis. The Times estimated that approximately 41% (1.8 million) of the 4.4 million images Grok generated between December 31 and January 8 sexualized individuals, including men, women, and children.

A Boost to Engagement at a Cost

The scandal brought intense scrutiny to xAI and X, but it also coincided with a surge in engagement on X. This occurred at a time when Meta’s rival app, Threads, was beginning to gain traction. Interestingly, X’s head of product, Nikita Bier, celebrated “the highest engagement days on X” on January 6th – just days before the platform began to restrict some of Grok’s outputs for free users, as reported by GearTech.

Whether intentional or not, the Grok scandal appears to have driven increased usage of both X and Grok. Data from The Times shows that in the nine days prior to Musk’s post, Grok was used to generate roughly 300,000 images. After Musk’s promotion, output skyrocketed to nearly 600,000 images per day.

“Revenge Porn” and the Incentive to Exploit

The Atlantic reported that X users “appeared to be imitating and showing off to one another,” with some believing that using Grok to create revenge porn “can make you famous.” X has previously warned users that generating illegal content could result in permanent suspension, but has yet to confirm any bans related to the Grok outputs.

Initially, X limited Grok’s image editing capabilities only for some free users, a move that The Atlantic characterized as “essentially marketing nonconsensual sexual images as a paid feature of the platform.” On January 14th, X took stronger action, blocking outputs prompted by both free and paid users. This came after investigations were launched in several countries, most notably the United Kingdom, and at least one US state, California.

Limitations of X’s Response

Critically, these updates did not apply to the Grok app or website, which reportedly remained capable of generating nonconsensual images. This gap is a major concern for victims targeted by X users. Carrie Goldberg, the lawyer representing Ashley St. Clair (the mother of one of Elon Musk’s children), emphasized that changes are needed across all Grok platforms, not just within X.

However, compelling such product changes through a lawsuit is challenging. St. Clair is arguing that Grok constitutes a public nuisance, a legal theory that, if she wins, could yield injunctive relief to prevent broader social harms.

The Legal Battle: St. Clair vs. xAI

St. Clair is currently seeking a temporary injunction to block Grok from generating harmful images of her. However, xAI is fighting back, attempting to move the lawsuit to Musk’s preferred court in Texas and counter-suing St. Clair. xAI argues that St. Clair is bound by its updated terms of service, which were implemented the day after she notified the company of her intent to sue.

Alarmingly, xAI claims that St. Clair effectively agreed to the TOS by prompting Grok to delete her nonconsensual images – the only readily available method for users to remove images quickly. This suggests xAI is attempting to leverage moments of desperation, where victims plead for image removal, as a legal defense.

Duress and the Right to Legal Recourse

Goldberg countered that St. Clair’s lawsuit is unrelated to her own use of Grok, arguing that the harassing images could have been created even without her interaction with xAI’s products. She also emphasized that St. Clair’s use of Grok was under duress, citing an instance in which a photo of St. Clair’s toddler’s backpack was edited. St. Clair reportedly pleaded with Grok to “REMOVE IT!!!”, feeling increasingly vulnerable with each passing second.

xAI, through an affidavit from X Safety employee Barry Murphy, argued that St. Clair’s requests to remove illegal content constituted acceptance of the TOS. Goldberg vehemently disputed this claim, arguing that St. Clair had little choice but to interact with Grok given the threat of the images remaining online and spreading further.

A victory for St. Clair in keeping the lawsuit in New York could set a crucial precedent for potentially millions of other victims considering legal action but fearing a biased court in Texas. Goldberg argued that forcing St. Clair to litigate in Texas would be unjust and could effectively deny her a fair day in court.

The Risk to Children and the Spread of CSAM

The estimated volume of sexualized images generated by Grok is particularly alarming because it suggests the chatbot may have been producing more child sexual abuse material (CSAM) than X typically identifies on its platform each month. In 2024, X Safety reported 686,176 instances of CSAM to the National Center for Missing and Exploited Children (NCMEC), averaging approximately 57,000 reports monthly. If the CCDH’s estimate of 23,000 Grok-generated images sexualizing children over 11 days is accurate, the monthly average could have exceeded 62,000.
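
For context, that comparison rests on a back-of-envelope extrapolation, one that assumes the CCDH’s 11-day figure held at a constant daily rate:

23,000 images ÷ 11 days ≈ 2,090 images per day
2,090 images per day × 30 days ≈ 62,700 images per month
686,176 reports ÷ 12 months ≈ 57,181 reports per month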

NCMEC has not yet commented on how Grok’s estimated CSAM volume compares to X’s average reporting. However, NCMEC previously stated that “whether an image is real or computer-generated, the harm is real, and the material is illegal.” This highlights the ongoing threat posed by Grok, as the CCDH has warned that even removed Grok posts can remain accessible via separate URLs, allowing CSAM and other harmful content to continue spreading.

Child safety experts advocate for more rigorous testing of AI tools like Grok before releasing features like the “undressing” capability. NCMEC has emphasized that technology companies have a responsibility to prevent their tools from being used to sexualize or exploit children. The UK’s Internet Watch Foundation similarly warned against releasing technology that enables the creation of such content amid a rise in AI-generated CSAM.

Silence from Stakeholders

Despite the controversy, there have been few meaningful consequences for xAI and Musk. While investigations in California and the UK may eventually lead to legal action or fines, those processes will likely take months. US lawmakers have largely remained silent, although some Democratic Senators have requested responses from Google and Apple CEOs regarding why X and the Grok app were not restricted in their app stores. As of January 23rd, they confirmed to Ars that they had received no replies.

Neither Google nor Apple has publicly commented on its decision to keep the apps accessible, and other Big Tech companies appear hesitant to speak out against Musk’s chatbot. Microsoft and Oracle, which provide cloud services for Grok, as well as Nvidia and Advanced Micro Devices, which supply the necessary computer chips, all declined to comment when approached by The Atlantic.

Advertiser Apathy and the Threat of Legal Retaliation

Similarly, dozens of advertisers, including Amazon, Microsoft, and Google, which had previously boycotted X over Musk’s antisemitic posts, refused to explain their lack of response to reports of Grok-generated CSAM, as reported by Popular Information. This reluctance may stem from fear of legal retaliation from Musk, who has a history of suing critics. The CCDH successfully defended against a lawsuit from Musk last year, but that ruling is currently under appeal, and his “thermonuclear” lawsuit against advertisers remains ongoing, with a trial date set for October.

The Atlantic suggests that xAI stakeholders are staying silent in hopes that the scandal will subside without significant repercussions. However, the backlash has persisted, perhaps because xAI “has made [deepfakes] a dramatically larger problem than ever before.” Mr. Deepfakes, a forum dedicated to creating fake images, was shut down after public outcry over 43,000 sexual deepfakes; Grok’s capabilities, integrated into a major social network, amplify that risk significantly, especially given the lack of intervention from those who could stop it.

As Imran Ahmed, the CCDH’s chief executive, stated, “This is industrial-scale abuse of women and girls.” Grok’s ease of use, integration into a large platform, and the lack of proactive intervention have created a dangerous situation that demands immediate attention and accountability.
