Thursday, February 5, 2026

EU Launches Formal Investigation into X Over Sexual Deepfakes by AI Chatbot Grok

Brussels — The European Union has opened a formal investigation into Elon Musk’s social media platform X (formerly Twitter) after its artificial intelligence chatbot Grok was found generating and aiding the dissemination of non-consensual sexualized deepfake images, including content involving women and minors. The inquiry, launched under the bloc’s Digital Services Act (DSA), marks a major escalation in global regulatory scrutiny of AI-generated harmful content online.

The probe comes two weeks after British media regulator Ofcom launched its own investigation over concerns that Grok was creating sexually intimate deepfake images. Indonesia, the Philippines, and Malaysia have already temporarily blocked the chatbot over the same content.

The Commission also described the AI-generated images of undressed women and children being shared on X as unlawful and appalling, joining condemnation from around the world. EU tech chief Henna Virkkunen said in a statement:

“Non-consensual sexual deepfakes of women and children are a violent, unacceptable form of degradation.”

Scope of the Probe and Regulatory Context

The European Commission said the probe will assess whether X, designated as a very large online platform (VLOP) under the DSA, adequately identified and mitigated systemic risks associated with Grok’s image-generation and editing features before making them available to users in the EU. Under the DSA, platforms of X’s size face enhanced obligations to prevent illegal content, protect fundamental rights, and proactively manage risks.

Commission officials noted that while X has taken some measures to restrict Grok’s image functions, including limiting editing to paying subscribers and blocking certain features, these actions appear to fall short of meeting the DSA’s risk-mitigation and transparency requirements. The investigation will scrutinize whether X submitted appropriate risk assessments and implemented effective safeguards before deploying the Grok tool across Europe.

Allegations and Harmful Content

The controversy traces back to late 2025, when independent research and user reports revealed that Grok could be prompted to produce sexually explicit AI-generated images, including:

“digital undressing of individuals based on photos and prompts.”

Some investigations found that Grok complied with requests resulting in non-consensual sexual depictions of women and, in certain cases, minors, content that may qualify as sexual abuse material under EU criminal standards. Regulators stressed that digital platforms must not treat user safety as “collateral damage” of novel AI capabilities.

European Commission President Ursula von der Leyen and Tech Commissioner Henna Virkkunen emphasized that the rights of women and children must be central to digital governance, not secondary to tech experimentation.

According to von der Leyen:

“Europe will not tolerate unthinkable behaviour, such as digital undressing of women and children.”

Legal Basis: The Digital Services Act

The DSA, which came into force in 2022, creates a comprehensive accountability regime for digital service providers in the EU. Under the DSA, very large platforms must conduct systemic risk assessments, report findings to regulators, and deploy proportionate measures to mitigate risks, particularly those linked to harmful or illegal content.

These include deepfakes, disinformation, hate speech, and sexual exploitation imagery. Failure to comply can trigger fines of up to 6% of global annual turnover and mandatory corrective actions. Virkkunen said:

“We will determine whether X has met its legal obligations … or whether it treated rights of European citizens as collateral damage of its service.”

This latest probe expands on a 2023 investigation into X’s content moderation and recommendation systems and follows a €120 million fine imposed on X in December 2025 for previous DSA violations related to deceptive verification practices and transparency failures.

International and Domestic Responses

The EU action reflects a broader global backlash. National regulators in the United Kingdom (Ofcom), Malaysia, and other jurisdictions have also launched inquiries and imposed temporary restrictions on Grok amid concerns over illegal deepfake generation. Some countries, including Indonesia and the Philippines, temporarily blocked access to Grok after widespread abuse reports.

Grok

The EU investigation covers only Grok’s service on X, not Grok’s standalone website and app. That is because the DSA’s enhanced obligations, and the Commission’s direct enforcement powers, apply only to designated very large online platforms, and Grok’s standalone services have not been so designated. The bloc has also been scrutinizing X over allegations that Grok generated antisemitic material and has asked the platform for more information.

Meanwhile, in the United States, dozens of state attorneys general have demanded explanations and corrective plans from X on how the platform will prevent harmful AI-generated content, highlighting cross-border regulatory pressures on big tech. The attorneys general wrote:

“We strongly urge you to be a leader in this space by further addressing the harms resulting from this technology.”

X’s Response and Next Steps

X has publicly stated that it is “committed to making X a safe platform for everyone” and that it has “zero tolerance for child sexual exploitation, nonconsensual nudity, and unwanted sexual content.”

However, EU regulators remain skeptical that these steps fully satisfy legal obligations under the DSA. During the investigation, the European Commission can issue information requests, conduct inspections, and impose interim compliance measures while evidence is gathered.

Should the Commission conclude that X failed to fulfill its duties, it may impose corrective orders or substantial fines, and require systemic changes to Grok’s operation, risk reporting, and moderation frameworks. Critics say this case could become a global regulatory precedent for how AI-enabled platforms are held accountable for harmful content.

Mohsin Pirzada (https://n-laws.com/)
Mohsin Pirzada is a legal analyst and editor focusing on international law, human rights, global governance, and public accountability. His work examines how legal frameworks respond to geopolitical conflicts, executive power, emerging technologies, environmental regulation, and cross-border policy challenges. He regularly analyzes global legal developments, including sanctions regimes, constitutional governance, digital regulation, and international compliance standards, with an emphasis on clarity, accuracy, and public relevance. His writing bridges legal analysis and current affairs, making complex legal issues accessible to a global audience. As the founder and editor of N-LAWS, Mohsin Pirzada curates and publishes in-depth legal commentary, breaking legal news, and policy explainers aimed at scholars, professionals, and informed readers interested in the evolving role of law in global affairs.
