First DSA Report Points to Key Systemic Risks on Large Online Platforms

The European Commission’s first DSA risk report highlights major threats including the spread of illegal content, challenges to fundamental rights like freedom of expression and non-discrimination, and escalating risks to minors and democratic processes, often worsened by generative AI.



November 28, 2025

The European Commission and the Board of the Digital Services Coordinators, which are responsible for enforcing the Digital Services Act (DSA), have published a report on the landscape of prominent and recurrent risks on large online platforms.

The report identifies systemic risks such as the spread of illegal content and threats to fundamental rights on very large online platforms. It also provides an initial overview of the mitigation measures implemented by platforms based on transparency requirements under the DSA.


You can read the full report here


The report draws on the platforms' own risk assessments, audits and transparency reports, as well as independent research on certain risks and contributions from diverse civil society organisations.

Online platforms like Zalando, AliExpress, X, and Facebook face significant risks of spreading illegal content, which can be embedded in products, advertising, or user-generated reviews. 

The report consolidates initial findings from the first round of risk assessments submitted by very large online platforms (VLOPs) and very large online search engines (VLOSEs), along with input from stakeholders.


Key findings also cover risks to mental health and the protection of minors online, the impact of emerging technologies such as generative AI on online platforms, and challenges to intellectual property protection on online marketplaces.

Notable mitigation measures include the use of automated systems to detect emojis used as code for illegal activities online, such as the sale of illegal drugs.


Threats to Democracy

The year 2024, marked by elections across Europe, saw heightened systemic risks to democratic processes. Both platforms and civil society organisations reported that disinformation, the algorithm-driven amplification of misleading content, and coordinated inauthentic behaviour can distort civic discourse and undermine trust in institutions.

Generative AI was repeatedly cited as an aggravating factor, with risks that AI-generated content may mimic authoritative information sources, answer election-related queries incorrectly, or contribute to polarisation through algorithmic virality.

AI chatbots providing incorrect election dates or misleading information were specifically highlighted, and platforms warned of the repurposing of old violent footage as “real-time” events to trigger panic or inflame tensions, as well as the rapid spread of hostile or extremist content.

Systemic risks to freedom of expression largely arise from content moderation practices. Both providers and Civil Society Organisations (CSOs) highlight the danger of over-moderation of legal content, which harms civic discourse and is often exacerbated by over-reliance on automated systems or weak appeal mechanisms.

Conversely, under-moderation of illegal content (like hate speech) can also discourage free expression and cause self-censorship. 

Key risk factors include advertising systems enabling discriminatory ad targeting (e.g., job vacancies by gender) and recommender systems that may exclude or under-amplify content from specific groups.

Platforms observe limitations in discrimination detection systems for subtle content (e.g., local dialects). Risks also include services failing to function equitably for users with disabilities (Google), and the potential for biased ad targeting affecting access to critical services (Bing).

Finally, internal platform risks include users sharing hate speech, job posters creating discriminatory ads, and biased recommender systems suggesting candidates based on algorithmic bias (LinkedIn).

Wikipedia observes that user-to-user interactions causing distress or emotional harm to vulnerable groups can deter their participation, thus diminishing their freedom of expression and contribution to knowledge.

Likewise, X identifies that abuse, harassment, and hateful conduct create risks through both direct censorship resulting from policy enforcement and widespread self-censorship by users trying to avoid harm. 


Risks to Minors: Exposure, Exploitation and AI-Driven Harms

The protection of minors emerged as one of the most urgent areas of systemic risk in the report. Platforms and CSOs documented widespread issues, including grooming, cyberbullying, sextortion, sexualised content involving minors, self-harm promotion, and child exploitation.

Systemic risks related to gender-based violence disproportionately affect vulnerable groups. Girls are more often negatively impacted by the non-consensual sharing of intimate media than boys (Save the Children, Denmark).

Google identifies risks like image-based abuse (including NCEI and ISPI, often AI-generated) and widespread gender-based and LGBTQIA+ harassment. Instagram notes the disproportionate targeting of the LGBTQIA+ community and female politicians (especially women of colour) with bullying, which silences their voices and leaves them intimidated and fearful for their safety.

The report describes challenges with age-assurance systems, adults posing as minors, and children being redirected to less moderated spaces where exploitation risks increase.

Recommender systems were identified as a major vector of harm, potentially amplifying legal but harmful content due to cumulative exposure—especially for young users.

Civil society organisations also warned of an emerging threat: the use of generative AI to produce child sexual abuse material (CSAM) from any image of a child and its dissemination across major platforms.

They further documented risks from AI chatbots that can foster emotional dependency among minors, potentially disrupting healthy development.

To improve awareness and give users more information about systemic risks, providers mentioned the creation of knowledge or information panels providing authoritative information about elections or crisis events.


They also pointed to media literacy campaigns, wellness help pages for users and creators (e.g. mental well-being resources on bullying and harassment), privacy and safety pages, user helplines, and guides aimed at parents and guardians.

Other measures include the display of banners with information for users and creators to consider before commenting or uploading content. Providers also mentioned labels indicating that an account has been verified or that content has been fact-checked.

Some CSOs suggested awareness-raising measures about the systemic risks of excessive social media usage, especially for minors and first-time users or after the introduction of new features.

This is the first in an annual series of risk landscape reports.