The Digital Services Act (DSA) marks a structural shift in EU digital regulation. While often described as a content moderation law, its deeper legal significance lies in how it is being used to impose systemic accountability on major Very Large Online Platforms (VLOPs), particularly where artificial intelligence systems shape, amplify, or generate content at scale. Through the DSA, the EU is constructing an enforceable legal framework that treats AI not as a neutral tool, but as a governance risk embedded within platform architecture.

Unlike earlier EU digital laws that focused on liability exemptions or post-hoc takedowns, the DSA operationalizes ex ante obligations. These obligations now function as a de facto AI accountability regime for major platforms such as X, Meta, Google, TikTok, and others whose recommender systems, generative AI tools, and automated moderation systems materially affect public discourse and fundamental rights.
Legal Architecture: Why VLOPs Are the Primary Target
The DSA creates a tiered system of obligations, but its most demanding requirements apply to platforms designated as VLOPs: services with more than 45 million average monthly active users in the EU. The legal rationale is proportionality: platforms with systemic reach generate systemic risk.
For AI governance, this is decisive. AI systems deployed by VLOPs do not merely affect individual users; they shape information ecosystems, social behavior, political processes, and exposure to harm. The DSA therefore treats AI-enabled functionalities as core elements of platform risk, not ancillary features.
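The designation test itself is mechanical. As a minimal sketch, assuming the six-month averaging window platforms use when publishing user figures under the DSA, the check looks roughly like this; the function and example figures are hypothetical.

```python
# Minimal sketch of the VLOP designation threshold described above.
# Assumes monthly active EU user counts are averaged over a six-month
# reporting window; names and figures are illustrative only.

VLOP_THRESHOLD = 45_000_000  # average monthly active users in the EU


def is_vlop(monthly_active_users_eu: list[int]) -> bool:
    """Return True if the six-month average meets the VLOP threshold."""
    window = monthly_active_users_eu[-6:]  # most recent six months
    average = sum(window) / len(window)
    return average >= VLOP_THRESHOLD


# Example with hypothetical figures (users per month):
print(is_vlop([44_000_000, 46_000_000, 47_000_000,
               45_500_000, 46_200_000, 47_100_000]))  # True
```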
Systemic Risk as the Central Legal Concept
The cornerstone of AI accountability under the DSA is the concept of systemic risk. Articles 34 and 35 require VLOPs to identify, assess, and mitigate risks arising from the design, functioning, and use of their services.
In AI terms, systemic risks include:
- Generation or amplification of illegal content through AI systems
- Violations of fundamental rights, including privacy, dignity, and non-discrimination
- Harms to minors and vulnerable groups
- Manipulation of public opinion through algorithmic recommender systems
- Dissemination of synthetic or deceptive content at scale
Crucially, liability does not depend on intent. The legal test is foreseeability: whether a platform knew or should have known that its AI systems could produce such harms.
This shifts accountability away from individual user misuse and toward platform design choices, training practices, deployment decisions, and internal governance.
Risk Assessment Obligations as AI Impact Assessments
The DSA’s mandatory systemic risk assessments operate as a functional equivalent of AI impact assessments for VLOPs.
Platforms must:
- Identify foreseeable risks linked to AI tools and recommender systems
- Assess how those risks interact with platform incentives, monetization, and scale
- Evaluate how algorithmic amplification worsens or mitigates harm
- Document internal testing, red-teaming, and safety evaluations
These assessments are not symbolic. They are legally reviewable documents subject to Commission scrutiny, audits, and enforcement actions. Failure to identify known AI risks, such as deepfake abuse, sexual exploitation imagery, or political manipulation, can itself constitute a violation.
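As a sketch of how such an assessment might be documented internally, the snippet below models one systemic-risk entry as a structured record. The DSA prescribes no such schema; the field names and scoring are hypothetical and simply mirror the elements listed above.

```python
# Illustrative sketch of a systemic-risk register entry for an AI feature.
# The DSA does not prescribe this schema; field names are hypothetical and
# mirror the assessment elements described above.
from dataclasses import dataclass, field


@dataclass
class SystemicRiskEntry:
    feature: str                      # e.g. a generative tool or recommender
    risk: str                         # foreseeable harm linked to the feature
    affected_rights: list[str]        # fundamental rights engaged
    likelihood: int                   # 1 (rare) to 5 (near-certain)
    severity: int                     # 1 (minor) to 5 (critical)
    mitigations: list[str] = field(default_factory=list)
    evidence: list[str] = field(default_factory=list)  # tests, red-teaming, audits

    @property
    def priority(self) -> int:
        """Simple likelihood x severity score used to rank mitigation work."""
        return self.likelihood * self.severity


entry = SystemicRiskEntry(
    feature="integrated image-generation tool",
    risk="non-consensual sexual deepfakes created or amplified at scale",
    affected_rights=["dignity", "privacy", "child protection"],
    likelihood=4,
    severity=5,
    mitigations=["prompt filtering", "output classification", "human review"],
    evidence=["pre-launch red-team report", "abuse-rate monitoring"],
)
print(entry.priority)  # 20 -> treated as a top-priority systemic risk
```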
Risk Mitigation: From Policy Promises to Design Duties
Article 35 transforms abstract ethics into enforceable design obligations. Once risks are identified, VLOPs must implement effective and proportionate mitigation measures.
For AI-enabled platforms, this can include:
- Safety-by-design restrictions on generative outputs
- Prompt-level and output-level filtering
- Human-in-the-loop review for sensitive content
- Limits on virality and algorithmic amplification
- Model retraining or feature withdrawal in high-risk contexts
- Strong reporting, redress, and victim-remedy mechanisms
The legal standard is not perfection but effectiveness. Post-hoc fixes or reactive content takedowns are insufficient where harm was predictable at deployment.
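To make the design duty concrete, the sketch below wires several of the measures listed above into a single generation pipeline: a prompt-level filter, an output-level classifier, and human-in-the-loop escalation for borderline cases. It is an illustrative sketch only; the classifiers are stubs and every name is hypothetical rather than an actual platform implementation.

```python
# Minimal sketch of a safety-by-design generation pipeline combining the
# mitigation measures listed above. Classifiers are stubbed; all names are
# hypothetical, not an actual platform implementation.
BLOCKED_CATEGORIES = {"sexual_deepfake", "csam", "targeted_harassment"}


def classify_prompt(prompt: str) -> set[str]:
    """Stub prompt classifier; a real system would use a trained model."""
    flags = set()
    if "undress" in prompt.lower():
        flags.add("sexual_deepfake")
    return flags


def classify_output(image_bytes: bytes) -> float:
    """Stub output classifier returning an abuse-risk score in [0, 1]."""
    return 0.0  # placeholder


def generate_image(prompt: str) -> bytes:
    """Stub generator standing in for the platform's model call."""
    return b"..."


def handle_request(prompt: str, review_queue: list) -> bytes | None:
    # Prompt-level filtering: refuse clearly prohibited requests up front.
    if classify_prompt(prompt) & BLOCKED_CATEGORIES:
        return None
    image = generate_image(prompt)
    # Output-level filtering: block high-risk outputs, escalate borderline ones.
    score = classify_output(image)
    if score >= 0.9:
        return None
    if score >= 0.5:
        review_queue.append((prompt, image))  # human-in-the-loop review
        return None
    return image


queue: list = []
print(handle_request("undress this photo", queue))  # None: blocked at prompt stage
```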
Transparency and Auditability as Enforcement Tools
The DSA breaks with earlier self-regulatory models by giving regulators deep visibility into platform systems.
Major VLOPs must:
- Publish summaries of systemic risk assessments
- Disclose recommender system logic and parameters
- Provide regulators access to internal data, logs, and technical documentation
- Submit to independent audits assessing compliance
This transparency requirement is foundational for AI accountability. Without access to training assumptions, model limitations, testing results, and mitigation logic, enforcement would be impossible. The DSA legally pierces the “black box” defense traditionally used by platforms.
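A machine-readable disclosure of recommender parameters might look something like the sketch below. This is not a format the DSA mandates; the schema, signals, and weights are hypothetical and only indicate the kind of information regulators and auditors would expect to see.

```python
# Illustrative machine-readable summary of recommender "main parameters"
# a platform might publish; the schema and values are hypothetical, not a
# format required by the DSA.
import json

recommender_disclosure = {
    "system": "home-feed ranking",
    "main_parameters": [
        {"signal": "predicted_engagement", "weight": 0.55},
        {"signal": "content_recency", "weight": 0.25},
        {"signal": "account_follow_graph", "weight": 0.20},
    ],
    "user_controls": ["chronological feed option", "topic muting"],
    "profiling_free_option": True,  # non-profiling alternative offered to users
    "last_risk_assessment": "2025-08-31",
}

print(json.dumps(recommender_disclosure, indent=2))
```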
Enforcement Powers: Real Consequences for AI Failures
The European Commission has direct supervisory authority over VLOPs and broad enforcement powers.
Available measures include:
- Binding corrective orders requiring redesign or suspension of AI features
- Interim measures where urgent harm is identified
- Mandatory independent audits and follow-up reporting
- Fines of up to 6% of global annual turnover
- Structural remedies in cases of persistent non-compliance
This enforcement architecture gives the DSA teeth. AI systems that systematically generate or amplify harm can now trigger sanctions comparable to competition law penalties.
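For a rough sense of scale, the 6% ceiling can be applied to a hypothetical turnover figure, as in the sketch below; the numbers are invented purely for illustration.

```python
# Back-of-the-envelope illustration of the DSA fine ceiling mentioned above.
# The turnover figure is hypothetical; only the 6% cap comes from the text.
MAX_FINE_RATE = 0.06  # up to 6% of global annual turnover


def max_dsa_fine(global_annual_turnover_eur: float) -> float:
    return MAX_FINE_RATE * global_annual_turnover_eur


# A platform with a hypothetical global turnover of 120 billion euros:
print(f"{max_dsa_fine(120_000_000_000):,.0f}")  # 7,200,000,000 -> up to 7.2 bn euros
```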
Interaction with the EU AI Act and GDPR
The DSA does not operate in isolation. It forms part of a layered regulatory ecosystem.
The DSA governs systemic platform risk and deployment, while:
- The AI Act regulates model-level risk classification, prohibited practices, and conformity assessments
- The GDPR governs personal data use, biometric content, and automated decision-making
In practice, enforcement is converging. A single AI failure, such as non-consensual synthetic imagery, can simultaneously engage DSA systemic risk duties, AI Act high-risk obligations, and GDPR privacy rules. This regulatory convergence significantly raises compliance stakes for VLOPs.
Why the DSA Model Is Globally Significant
The EU’s approach represents a shift from liability for individual content to liability for systemic governance failure. This has three global implications.
- First, it exports a safety-by-design norm for AI platforms operating worldwide.
- Second, it undermines the argument that platforms are passive intermediaries when AI systems actively shape content.
- Third, it provides a blueprint for democratic oversight of AI without banning innovation outright.

For multinational platforms, EU compliance increasingly determines global product design, as maintaining parallel architectures is costly and risky.
Ongoing and Formal EU DSA Investigations of VLOPs
Under the EU Digital Services Act (DSA), Very Large Online Platforms (VLOPs) are services with more than 45 million monthly active users in the EU. These platforms are subject to the DSA’s highest and most intrusive obligations, including systemic risk assessments, independent audits, and direct supervision by the European Commission.
VLOPs Already Under Commission Scrutiny
According to official EU DSA supervision records, a number of designated VLOPs have been subject to DSA proceedings or compliance checks, including information requests and preliminary inquiries that could evolve into full investigations:
- AliExpress
- Amazon Store
- Apple App Store
- Booking.com
- Google Play & Other Google Services
- Snapchat
- Temu
- TikTok
- X (formerly Twitter)
Beyond these inquiries, the most significant enforcement matters currently involve the following platforms:
X / xAI (Grok AI Inquiry)
The European Commission has opened a formal investigation into X’s AI chatbot Grok over the alleged generation of non-consensual sexual deepfakes, including content possibly involving minors, and is assessing whether the platform sufficiently mitigated these risks as required under the DSA. The probe also extends to X’s algorithmic recommender system in light of AI deployment obligations.
Meta Platforms (Facebook & Instagram)
The Commission has preliminarily found Meta’s platforms in breach of transparency obligations under the DSA, especially for failing to provide adequate access to public data for researchers and for insufficient notice-and-action mechanisms. Meta’s Facebook and Instagram remain under regulatory scrutiny with possible sanctions looming.
TikTok
TikTok is under formal investigation alongside Meta for alleged DSA violations, particularly due to restrictions and burdens on researcher data access and features linked to harmful content exposure.
WhatsApp (Channels Feature)
While not yet the subject of a formal Commission enforcement action, WhatsApp’s “Channels” feature was only recently designated as a VLOP, subjecting it to the same risk-mitigation and compliance oversight regime as other major platforms. Meta has until mid-May 2026 to bring Channels into full DSA compliance, and non-compliance could trigger a future formal investigation.
Platforms under DSA supervision are typically given opportunities to explain or correct behaviour before final fines or orders are imposed. Fines for confirmed breaches can reach up to 6% of global annual turnover.
Comparative Enforcement Timeline — EU DSA Actions Against Major VLOPs
| Platform (VLOP) | Trigger & Date | DSA Articles Implicated | Current Status (2026) | Key Regulatory Concerns | Risk-Severity Score (1–5) | Likely Next Enforcement Steps |
|---|---|---|---|---|---|---|
| X / xAI (Grok AI) | Late 2025–Jan 2026: AI-generated sexual deepfakes; potential child exploitation content | Art. 34 (Systemic Risk), Art. 35 (Risk Mitigation), Arts. 24–31 (Transparency), Arts. 46–49 (Crisis Measures) | Formal investigation opened | AI-enabled harm, sexual deepfakes, child safety failures, inadequate safeguards | 5 – Critical | Independent audits, interim EU restrictions, binding mitigation orders, fines up to 6% of global turnover, AI Act & criminal referrals |
| Meta (Facebook, Instagram, WhatsApp Channels) | 2023–2025 ongoing; escalation late 2025 over data access and transparency | Arts. 34–35, Arts. 24–31, Art. 37 (Audits) | Active supervisory scrutiny | Algorithmic transparency, election risks, researcher access | 4 – High | Mandatory audits, corrective transparency orders, potential financial penalties |
| TikTok | Continuous monitoring 2024–2026 | Arts. 34–35, Arts. 24–31, Art. 37 | Active review | Risks to minors, addictive design, recommender amplification | 4 – High | Escalation to formal probe, child-safety mitigation orders, design changes |
| YouTube / Google Search | Intensified oversight 2024–2026 | Arts. 34–35, transparency and recommender rules | Compliance assessments ongoing | Disinformation, recommender opacity | 3 – Moderate | Targeted audits, mitigation mandates, fines if risks persist |
| Amazon Store | 2024–2026 scrutiny | Arts. 34–35, notice-and-action | Preliminary inquiries | Illegal goods, ranking manipulation, dark patterns | 3 – Moderate | Corrective ranking rules, enforcement actions, penalties |
| AliExpress | 2025–2026 monitoring | Arts. 34–35, transparency obligations | Supervisory phase | Counterfeit and unsafe products | 3 – Moderate | Formal investigation, marketplace governance reforms |
| Apple App Store | VLOP designation triggered oversight | Arts. 34–35, transparency, audits | Routine supervision | App moderation transparency | 2 – Low–Moderate | Targeted audits, corrective compliance orders |
| Google Play Store | VLOP designation | Arts. 34–35 | Under supervision | Illegal or harmful apps | 2 – Low–Moderate | Escalation if systemic failures identified |
|  | VLOP designation | Arts. 34–35 | Routine monitoring | Content moderation, recommender risk | 2 – Low | Platform-specific remedial measures |
| Snapchat | VLOP designation; youth safety focus | Arts. 34–35 | Supervisory oversight | Risks to minors, ephemeral content abuse | 3 – Moderate | Child-safety compliance orders |
|  | VLOP designation | Arts. 34–35 | Monitoring stage | Algorithmic transparency, political content | 2 – Low | Escalation if systemic risks emerge |
| Temu (Retail Platform) | 2025–2026 scrutiny | Arts. 34–35, notice-and-action | Active monitoring | Unsafe imports, counterfeit goods | 3 – Moderate | Enforcement orders, fines, marketplace restructuring |
Why the Grok Deepfake Case Falls Under the DSA
Although the Digital Services Act does not regulate artificial intelligence models per se, it applies directly to online platforms that deploy AI systems capable of generating or amplifying illegal or harmful content. X is designated a Very Large Online Platform (VLOP) under the DSA, meaning it is subject to heightened due diligence obligations.
Grok, as an integrated AI chatbot within X, is legally treated as a functional feature of the platform, not a separate product. As a result, any systemic risks created by Grok are attributed to X itself, including risks arising from AI-generated deepfakes, sexual exploitation imagery, and violations of fundamental rights.

The EU’s investigation therefore focuses not on whether Grok is “AI,” but on whether X adequately assessed, mitigated, and controlled the risks that Grok predictably created.
Core DSA Obligations Engaged by Sexual Deepfakes
The probe centers on several key DSA duties imposed on VLOPs:
Systemic Risk Assessment (Article 34 DSA)
X must identify and analyze systemic risks arising from the design and functioning of its services. Sexual deepfakes, especially non-consensual sexual imagery, are recognized by EU institutions as serious risks to fundamental rights, including dignity, privacy, and child protection.
Failure to anticipate that an AI image-generation tool could be misused for “digital undressing” or sexualized manipulation may itself constitute a breach of Article 34.
Risk Mitigation Measures (Article 35 DSA)
Once risks are identified, platforms must implement effective and proportionate safeguards. This includes technical restrictions, prompt filters, human oversight, and abuse-detection mechanisms.
What EU Regulators Are Examining
EU regulators are examining whether:
- Safeguards were implemented before Grok’s rollout
- Restrictions were meaningful rather than cosmetic
- Risk mitigation applied to all users, not only after abuse reports surfaced
Reactive fixes introduced after public exposure may be deemed legally insufficient. The DSA explicitly integrates EU Charter rights into platform governance. Sexual deepfakes are increasingly viewed as a form of gender-based violence and digital abuse, elevating the legal threshold for compliance. Where minors are involved, regulators may treat failures as aggravated breaches, intersecting with EU criminal law obligations.
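Read together, these expectations amount to a pre-deployment launch gate: every identified risk needs at least one safeguard that is active before rollout and applied to all users by default. The sketch below illustrates such a gate under those assumptions; the classes and checks are hypothetical, not a regulatory tool.

```python
# Hypothetical pre-deployment gate reflecting the expectations above:
# safeguards must exist before rollout and cover all users by default.
from dataclasses import dataclass


@dataclass
class Safeguard:
    name: str
    active_before_launch: bool
    applies_to_all_users: bool


def launch_allowed(identified_risks: dict[str, list[Safeguard]]) -> bool:
    """Block launch unless every identified risk has a real, universal safeguard."""
    for risk, safeguards in identified_risks.items():
        adequate = [s for s in safeguards
                    if s.active_before_launch and s.applies_to_all_users]
        if not adequate:
            print(f"launch blocked: no effective safeguard for '{risk}'")
            return False
    return True


risks = {
    "sexual deepfakes": [Safeguard("output classifier", True, True)],
    "child exploitation imagery": [Safeguard("manual takedown", False, True)],
}
print(launch_allowed(risks))  # False: second risk only has a reactive measure
```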
Why “User Misuse” Is Not a Complete Defense
A common defense raised by platforms is that users, not the platform, generated the harmful content. Under the DSA, this argument carries limited weight.
The law distinguishes between:
- Individual illegal content, and
- Systemic risk created by platform design
If a tool predictably enables abuse at scale, regulators may find that the architecture itself contributed to the harm, regardless of who typed the prompt.
In short, foreseeability matters. Where sexual deepfake misuse was foreseeable, failure to prevent it may constitute non-compliance.
Interaction With Other EU Legal Regimes
The Grok investigation also overlaps with broader EU law:
EU Criminal Law and Child Protection
If AI outputs facilitate content that qualifies as sexual exploitation material under EU standards, platforms face elevated compliance expectations—even absent intent.
AI Act (Future Exposure)
While the EU AI Act’s obligations are not yet fully applicable, regulators are signaling that generative AI systems used for image manipulation will face strict governance requirements. The Grok probe may function as a bridge case between the DSA and future AI Act enforcement.
GDPR Considerations
Non-consensual sexual deepfakes may also involve unlawful processing of personal data, particularly biometric and image data, raising parallel exposure under data-protection law.
Potential Defenses Available to X
X is likely to advance several legal arguments:
- That Grok’s misuse was isolated rather than systemic
- That safeguards were implemented in line with industry standards
- That no intent existed to facilitate illegal content
- That content moderation responses were timely and effective
However, under the DSA, intent is not decisive. The key legal test is whether reasonable and proportionate preventive measures were taken in advance.
Possible Outcomes and Sanctions
If the European Commission finds violations, it may:
- Impose fines of up to 6% of X’s global annual turnover
- Order mandatory changes to Grok’s functionality
- Require ongoing risk audits and reporting
- Impose interim measures restricting certain AI features
In extreme cases of persistent non-compliance, the DSA allows for temporary service restrictions within the EU, though this is considered a last resort.
Broader Legal Significance
This case is widely viewed as a test precedent for platform accountability in the age of generative AI. A finding against X would clarify that:
- AI features are not legally neutral
- Platforms must treat AI risk as a governance issue, not a product experiment
- Sexual deepfakes trigger the highest level of regulatory scrutiny
For global tech companies, the Grok probe signals that Europe’s digital rulebook applies to AI by design, not by label.
Conclusion: A New Legal Paradigm for AI Platforms
Through the Digital Services Act, the EU is building a legally enforceable model of AI platform accountability grounded in systemic risk, foresight, transparency, and design responsibility. Major VLOPs are no longer judged solely by how quickly they remove harmful content, but by whether their AI systems were responsibly designed, tested, and governed from the outset.
This marks a fundamental evolution in digital law. AI governance is no longer merely aspirational or ethical; it is administrative, auditable, and sanctionable. For major platforms, the message is clear: AI systems are now regulated not just by what they do, but by how responsibly they are built and deployed at scale.
The EU’s investigation into Grok is less about AI innovation and more about whether platforms can deploy powerful generative tools without embedding safety into their architecture. Under the Digital Services Act, failure to anticipate and mitigate foreseeable harm is itself a legal violation.
If upheld, the case will reshape how AI-enabled platforms design, deploy, and govern generative systems, not only in Europe but globally. The Grok investigation reflects a decisive shift in EU digital law: AI capability now equals regulatory responsibility. Platforms are no longer judged solely on moderation outcomes, but on whether their systems were designed to prevent foreseeable harm.
From Meta’s algorithm cases to TikTok’s child-safety enforcement, EU law has converged on a single principle: “If harm is predictable, prevention is mandatory.” This makes the X/Grok probe not an isolated dispute, but a bellwether case for how generative AI will be governed under European law.
Frequently Asked Questions
What is the Digital Services Act (DSA)?
The Digital Services Act is a landmark EU regulation governing online platforms, aimed at addressing systemic risks arising from digital services. Rather than focusing only on individual illegal content, the DSA imposes proactive obligations on platforms, especially Very Large Online Platforms (VLOPs), to assess, mitigate, and prevent foreseeable harms linked to their design, algorithms, and governance practices.
What qualifies as a Very Large Online Platform (VLOP)?
A VLOP is any online platform with more than 45 million average monthly active users in the European Union. VLOPs are subject to the DSA’s most stringent obligations, including systemic risk assessments, independent audits, enhanced transparency, and direct supervision by the European Commission.
Why are VLOPs the primary targets of DSA enforcement?
The legal rationale is proportionality. Platforms with massive reach can generate societal-scale harm. When AI systems, recommender algorithms, or monetization incentives operate at VLOP scale, individual failures can become systemic risks affecting elections, public discourse, fundamental rights, and child safety.
How does the DSA regulate artificial intelligence if it is not an AI law?
The DSA does not regulate AI models directly. Instead, it applies to the platforms that deploy them: recommender algorithms, generative tools, and automated moderation systems are treated as part of the platform’s design, so any systemic risks they create fall within the platform’s Article 34 and 35 duties to assess and mitigate harm.
How does the Grok deepfake case engage the DSA?
Grok, as an AI chatbot integrated into X, is treated as a functional feature of a VLOP. The EU investigation focuses on whether X adequately assessed and mitigated foreseeable risks linked to AI-generated sexual deepfakes, including potential violations of dignity, privacy, and child protection standards.
Why is “user misuse” not a complete legal defense?
The DSA distinguishes between isolated illegal content and systemic risk created by platform architecture. If a platform’s design predictably enables abuse at scale, responsibility may attach to the platform regardless of which user initiated the harmful action.
What obligations do Articles 34 and 35 of the DSA impose?
Article 34 requires VLOPs to conduct systemic risk assessments covering AI systems, recommender tools, and content amplification mechanisms.
Article 35 requires platforms to implement effective and proportionate mitigation measures once risks are identified, including technical restrictions, human oversight, and limits on algorithmic amplification.
Are DSA risk assessments legally enforceable?
Yes. Risk assessments are legally reviewable documents. Regulators can demand access, challenge their adequacy, require revisions, and impose penalties if platforms fail to identify known risks or implement meaningful safeguards.
Why is the Grok case considered a benchmark for AI governance?
The case tests whether generative AI tools embedded in platforms must be governed with safety-by-design principles from the outset. A finding against X would confirm that AI capability itself triggers heightened regulatory responsibility under EU law.
