EU Fines X €120 Million Under the Digital Services Act: Cybersecurity and Transparency at Stake

CyberSecureFox 🦊

The European Commission has imposed a €120 million fine on X (formerly Twitter) for alleged violations of the Digital Services Act (DSA), focusing on three areas with direct cybersecurity and information security implications: a misleading account verification system, an opaque advertising repository, and barriers to researcher access to public platform data.

Digital Services Act (DSA): New Compliance Baseline for Online Platforms

The Digital Services Act, in force in the EU since 2022, sets heightened obligations for “very large online platforms”. Beyond illegal content removal, the DSA targets systemic risks such as disinformation, fraud, manipulation of public opinion and threats to civic discourse.

Key DSA provisions require platforms to increase transparency of algorithms, advertising and verification systems. Users must be able to understand who stands behind content and ads, how their feeds are curated, and whether an account’s “verified” status reflects a real identity check or merely a paid feature.

The investigation into X reportedly lasted around two years. During this period, the Commission examined how the platform manages illegal and harmful content and mitigates manipulation risks. Preliminary findings were communicated to X in mid‑2024, giving the company early notice of the regulator’s concerns.

Misleading Account Verification on X and Cybersecurity Risks

Why the blue check has become a regulatory problem

The central allegation concerns X’s blue check verification model. According to the Commission, X’s interface and terminology create the impression of identity verification, while, in practice, users can obtain the badge via a paid subscription without robust identity proofing.

The DSA does not force platforms to verify user identities. However, it explicitly prohibits misleading claims about verification. The EU’s position is that the blue check is widely interpreted by users as confirmation that the account belongs to the person or organization it claims to represent, even when no such vetting has occurred.

Subscription-based verification and growth of phishing and impersonation

From a cybersecurity perspective, this model lowers the barrier for phishing, impersonation and social engineering. Accounts displaying a blue check are often perceived as more credible, which attackers can exploit to pose as brands, media outlets, public officials or subject‑matter experts.

Numerous phishing studies have shown that users are significantly more likely to click links or share sensitive information when messages appear to come from “trusted” or “verified” sources. On social networks, this can facilitate:

• credential theft through malicious links;
• distribution of malware and fraudulent investment schemes;
• targeted attacks on enterprises, government agencies and critical infrastructure via compromised employees’ accounts.

The Commission argues that X’s current verification design makes it harder for users to assess account authenticity, effectively amplifying the success rate of impostor campaigns and information operations.
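To make the impersonation mechanics concrete, here is a minimal, illustrative sketch of the kind of look-alike-handle heuristic defenders use to spot accounts mimicking known brands. The watchlist, handle names, and threshold are assumptions for the example, not anything from X or the Commission:

```python
from difflib import SequenceMatcher

# Hypothetical watchlist of official handles (illustrative only).
OFFICIAL_HANDLES = {"paypal", "microsoft", "europeancommission"}

def impersonation_score(handle: str, official: str) -> float:
    """Similarity between a candidate handle and an official one (0..1)."""
    return SequenceMatcher(None, handle.lower(), official.lower()).ratio()

def flag_lookalikes(handle: str, threshold: float = 0.8) -> list[str]:
    """Return official handles the candidate closely resembles
    without matching exactly (exact matches are not impersonation)."""
    h = handle.lower()
    return [o for o in OFFICIAL_HANDLES
            if o != h and impersonation_score(h, o) >= threshold]

# Classic character-substitution lookalike: "l" swapped for "1".
print(flag_lookalikes("paypa1"))  # → ['paypal']
```

A paid "verified" badge on such a look-alike account is exactly what makes this class of attack more convincing to users.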

Opaque Advertising Repository and Disinformation Risks

The second major issue is advertising transparency. The DSA obliges large platforms to maintain a searchable, open ad repository that clearly shows:

• who paid for a given campaign;
• which audience segments were targeted;
• what creatives, messages and formats were used.
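As an illustrative sketch of what "searchable and structured" means in practice, the three dimensions above could be modelled as a simple record with a search function. The field names and data are assumptions for the example, not the DSA's legal text or any real X API:

```python
from dataclasses import dataclass

# Illustrative schema only: field names are assumptions,
# not taken from the DSA text or any actual platform API.
@dataclass
class AdRepositoryEntry:
    ad_id: str
    payer: str          # who paid for the campaign
    targeting: dict     # which audience segments were targeted
    creative_text: str  # message shown to users
    ad_format: str      # e.g. "promoted_post", "video"

def search_by_payer(entries: list[AdRepositoryEntry],
                    payer_name: str) -> list[AdRepositoryEntry]:
    """Searchability requirement: filter entries by the paying entity."""
    return [e for e in entries if e.payer.lower() == payer_name.lower()]

ads = [
    AdRepositoryEntry("a1", "ExampleCorp", {"country": "DE", "age": "25-34"},
                      "Try our product", "promoted_post"),
    AdRepositoryEntry("a2", "OtherOrg", {"country": "FR"},
                      "Policy message", "video"),
]
print([e.ad_id for e in search_by_payer(ads, "examplecorp")])  # → ['a1']
```

The Commission's complaint is essentially that X's repository fails at this level of basic queryability and structure.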

According to the Commission, X’s ad library does not yet meet these standards. Data is reportedly difficult to search, poorly structured and slow to access. This hinders the detection of:

• fraudulent and deceptive advertising;
• covert political and issue‑based campaigns;
• coordinated influence operations aimed at elections or public policy debates.

For cybersecurity and information integrity, such opacity means disinformation and cyber‑fraud campaigns become harder to trace, attribute and disrupt—not only for regulators and law enforcement, but also for journalists, NGOs and independent researchers monitoring platform abuse.

Restricted Researcher Access to Public Data

The third allegation focuses on access for vetted researchers to public platform data. The DSA requires very large platforms to provide justified access to such data where needed to analyse systemic risks to users and society.

According to the Commission, X has introduced technical and contractual barriers that significantly complicate research on public data. For the security community, this is critical data used to identify:

• bot networks and coordinated inauthentic behaviour;
• organized disinformation and harassment campaigns;
• cross‑platform operations targeting specific individuals, organizations or democratic processes.

Limiting this access weakens early‑warning capabilities and makes it more difficult to build detection tools, threat intelligence feeds and evidence‑based policy responses to large‑scale online abuse.
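The first of those detection tasks can be sketched with a toy heuristic: flag groups of accounts that post identical text within a short time window, a common signal of coordinated inauthentic behaviour. The post data, window, and thresholds here are illustrative assumptions:

```python
from collections import defaultdict

# Toy data: (account, text, unix_timestamp). Illustrative only.
posts = [
    ("acct_a", "Breaking: claim X is true", 1000),
    ("acct_b", "Breaking: claim X is true", 1004),
    ("acct_c", "Breaking: claim X is true", 1007),
    ("acct_d", "Unrelated personal post", 5000),
]

def coordinated_clusters(posts, window=30, min_accounts=3):
    """Group accounts posting identical text within `window` seconds;
    flag clusters with at least `min_accounts` distinct accounts."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((ts, account))
    clusters = []
    for text, items in by_text.items():
        items.sort()
        accounts = {a for _, a in items}
        if (len(accounts) >= min_accounts
                and items[-1][0] - items[0][0] <= window):
            clusters.append((text, sorted(accounts)))
    return clusters

print(coordinated_clusters(posts))
```

Real-world detection relies on far richer features (link graphs, posting cadence, account age), but all of it depends on the public-data access the DSA is meant to guarantee.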

Deadlines, Potential New Fines and Strategic Implications for X

The European Commission has given X 60 working days to address issues related to its verification system and an additional 90 days to present remediation plans for its ad repository and researcher data access mechanisms.

If X fails to comply within these deadlines, the DSA allows the Commission to impose additional periodic penalty payments. In a worst‑case scenario, repeated non‑compliance could trigger stricter measures affecting X’s ability to operate in the EU market.

The case underscores that modern cybersecurity is not limited to firewalls and encryption. Design choices around verification badges, ad targeting transparency and data access policies directly shape the landscape of phishing, impersonation, fraud and disinformation.

Users should treat social‑media "trust signals" critically, verify information across multiple sources and avoid clicking unsolicited links—even when they appear to come from "verified" accounts. Organizations, in turn, should incorporate these platform‑level risks into their security awareness training, incident response planning and brand‑monitoring strategies, strengthening resilience against manipulation across the entire digital ecosystem.
