The Commission Releases The Second Version of The Digital Age Verification Blueprint

The European Commission has unveiled the updated blueprint for an age verification system for digital platforms. Five EU countries will pilot the solution in 2025 using a secure 'mini wallet' compatible with Digital Identity Wallets. While the initiative aims to protect adolescents online, privacy groups fear potential loss of freedoms.

By Matthaios Tsimitakis
October 16, 2025

The European Commission published the second version of its age verification blueprint as part of its EU-wide harmonised approach to age verification for access to digital services.

This updated blueprint includes features such as onboarding using passports and ID cards, and support for the Digital Credentials API. 

The blueprint—also known as the “mini wallet”—is technically aligned with the upcoming European Digital Identity Wallets, which are scheduled to be rolled out by the end of 2026, ensuring compatibility and integration capacity in the future.

The Digital Identity Wallets are secure mobile applications that allow EU citizens, residents, and businesses to store and manage their digital identity credentials in one place. These wallets enable users to prove their identity, age, qualifications, or other attributes online and offline while maintaining full control over which information is shared. The wallets support selective disclosure, meaning users can share only what’s necessary without revealing excess personal data.
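As a rough illustration of what selective disclosure means in practice, the Python sketch below uses a hypothetical data model rather than the wallets' actual interface or credential format: the wallet holds a full credential but answers a platform's request with a single derived attribute, such as whether the holder is over 18.

```python
# Conceptual sketch of selective disclosure (hypothetical data model, not the
# actual EU Digital Identity Wallet API). The wallet holds a full credential
# but presents only the attribute a service asks for.

from dataclasses import dataclass

@dataclass
class Credential:
    # Attributes issued by a trusted authority (e.g. from an ID card).
    given_name: str
    birth_year: int
    nationality: str

def present(credential: Credential, requested: set[str], current_year: int) -> dict:
    """Return only the attributes the relying party requested.

    Derived attributes such as 'age_over_18' are computed inside the wallet,
    so the underlying birth date never leaves the device. (Age is computed
    from the birth year only, to keep the illustration simple.)
    """
    available = {
        "age_over_18": current_year - credential.birth_year >= 18,
        "nationality": credential.nationality,
    }
    return {name: available[name] for name in requested if name in available}

wallet_credential = Credential(given_name="Maria", birth_year=1995, nationality="EL")

# An online platform only needs to know the user is an adult:
print(present(wallet_credential, {"age_over_18"}, current_year=2025))
# -> {'age_over_18': True}  (name, nationality and birth year are not shared)
```

In a real wallet the presented attribute would also be cryptographically signed so the platform can trust it without contacting the issuer.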

The age verification solution is also designed to be user-friendly and privacy-preserving: users prove their age without sharing any other personal data. The blueprint's technical specifications, source code, and a beta release have already been made openly available.

The solution will enter a pilot phase involving Member States, online platforms, users, and software providers, with opportunities for customisation, such as translation. Private-sector parties will be required to accept it within 36 months of the implementing acts taking effect across member states.

“Parents should be raising children, not algorithms”

According to the European Commission’s 2025 guidelines on protecting minors online, 97% of young people in the EU use the internet daily, primarily for social networking, yet they remain insufficiently protected there. Recent data reveals the scale of child sexual abuse on the continent, which has been described as a pandemic. Regulating access to online platforms through age verification is seen as one measure to contain the problem.

A 2024 WHO study found that more than 1 in 10 adolescents (11%) showed signs of problematic social media use, struggling to control it and experiencing negative consequences. Girls reported higher levels of problematic social media use than boys (13% vs 9%).

The report defines problematic social media use as a pattern of behaviour characterised by addiction-like symptoms.

These include an inability to control social media usage, experiencing withdrawal when not using it, neglecting other activities in favour of social media, and facing negative consequences in daily life due to excessive use.

On 24 September 2025, EC President Ursula von der Leyen delivered a keynote speech at the high-profile event “Protecting Children in the Digital Age.”

She emphasised that children face serious harms online, including exposure to addictive algorithms, adult content, cyberbullying, and self-harm promotion.

She underscored that “parents, not algorithms, should be raising children,” advocating for empowering families while protecting minors from exploitative tech practices.

Since then, the EU has been actively discussing age verification measures for access to online platforms and services, especially social media, to protect minors in the context of the Digital Services Act.


Henna Virkkunen, Executive Vice-President for Technological Sovereignty and Security, emphasised the importance of the initiative, stating that online platforms “no longer have any excuse” to put children at risk.

She highlighted that the app allows platforms to verify age without accessing personal data, ensuring privacy through separated issuing and submission processes. 
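To make that separation concrete, here is a simplified Python sketch with hypothetical function names; a real deployment would use the blueprint's standardised attestation formats and public-key signatures rather than the shared-secret shorthand below. The idea is that the issuer signs a bare "over 18" claim once, and platforms later check only that claim and its signature.

```python
# Simplified illustration of keeping the issuing and submission steps apart
# (hypothetical code; the real blueprint relies on standardised attestations
# and asymmetric signatures, not the shared-secret stand-in used here).

import hashlib
import hmac
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held only by the attestation issuer

def issue_age_attestation(user_is_over_18: bool) -> dict:
    """Issuing step: the user's ID is checked once and a bare claim is signed.

    The issuer never learns which platforms the attestation will be shown to.
    """
    claim = b"age_over_18=true" if user_is_over_18 else b"age_over_18=false"
    tag = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def issuer_signature_is_valid(claim: bytes, tag: str) -> bool:
    """Stand-in for verifying the issuer's signature with its public key."""
    expected = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

def platform_accepts(attestation: dict) -> bool:
    """Submission step: the platform sees only the claim and its signature.

    No name, document number or birth date ever reaches the platform.
    """
    return (
        issuer_signature_is_valid(attestation["claim"], attestation["tag"])
        and attestation["claim"] == b"age_over_18=true"
    )

attestation = issue_age_attestation(user_is_over_18=True)  # done once, with the issuer
print(platform_accepts(attestation))                       # done later, per platform -> True
```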

On October 10, EU ministers pledged to increase their efforts to protect children online. The Jutland Declaration, drafted by Denmark as it chairs EU ministers' meetings during the second half of 2025, was signed by 25 member states (all but Estonia and Belgium) at an informal meeting of national telecom ministers.

Age verification is recommended in particular for restricting access to social media and to age-restricted content, goods, and services such as pornography, gambling, and alcohol purchases, where national laws set minimum age limits.

Different EU countries currently set their own age limits, typically between 13 and 16 years, but there is increasing momentum toward harmonising a minimum age requirement of 16.

According to Article 8 of the GDPR, where online services are offered directly to a child on the basis of consent, processing of the child's personal data is lawful if the child is at least 16 years old; below that age, consent must be given or authorised by a parent or guardian.

This effectively sets the age of digital consent at 16 across the EU, though member states may lower it to as low as 13 years.

Civil Society: The problem is not age, it's the platforms


The blueprint’s open-source design and privacy-preserving features have garnered support from advocacy groups focused on children's online safety, as well as from digital rights groups, which emphasise the value of a coherent, user-friendly approach across member states.

However, privacy and digital rights groups caution that relying heavily on age verification alone risks overlooking broader online harms and the root causes of vulnerability.

They advocate for more holistic, rights-respecting solutions that go beyond technical gating to include education, moderation, and systemic changes.

The European Digital Rights (EDRi) association emphasises the need for strict safeguards to ensure that only the minimum amount of data necessary is collected and processed during age verification. 

“Age verification has become a seemingly low-hanging fruit for which there is much political appetite. This is a clear example of technosolutionism: the belief that complex social problems can be quickly fixed by technology,” it writes in a statement. “Yet, widespread age gates do nothing to address the deeper issues of platform design: toxic layouts, addictive patterns, and environments that fuel harassment remain untouched. The real risk is that once young people cross the minimum age threshold, they are simply thrust into equally toxic and addictive spaces, where they continue to be exploited for platforms’ economic gain. As a result, the harmful systems persist.”

The Electronic Frontier Foundation (EFF) opposes the EU's age verification app, arguing that while child safety is important, the proposed system risks undermining digital access for marginalised populations.

They are particularly concerned about potential discrimination against refugees, unhoused individuals, and those without official documentation.

The organisation criticises the technical approach, especially the reliance on Zero Knowledge Proofs (ZKPs), which it sees as an incomplete privacy solution. It argues that, without robust regulation, the app could force users into an imbalanced relationship in which sensitive personal data is repeatedly exposed.

The German Chaos Computer Club, the oldest digital rights group on the continent, said in a letter to the German Federal Government that mass surveillance must be clearly rejected: “The tech companies and US intelligence services should no longer be fed with our data.” The group also calls in particular for reduced dependence on tech giants such as Google.

In Australia, where mandatory age restrictions were first conceived and implemented, a government-commissioned Age Assurance Technology Trial tested over 60 age verification technologies. The resulting report found that while many technologies can estimate age quickly and respect privacy, significant accuracy challenges remain, especially for individuals close to the age threshold, young women, and non-Caucasian users. Inaccuracies could wrongly block some teens or grant younger children access. Privacy and security concerns remain paramount among Australian consumers, given the risk of data breaches and the possibility that some age assurance tools are developing data tracking capabilities.

Five countries test the pilot scheme


Digital rights groups fear that mandatory age verification systems, especially those involving biometric or identity document checks, could morph into tools for broader surveillance or profiling, undermining anonymity and user freedoms online.

The pilot countries that will first apply the EU's age verification solution in 2025 are Denmark, France, Greece, Italy, and Spain. These five member states are testing a prototype of the online age verification app developed under the European Commission’s blueprint as part of the pilot phase.

During this phase, the participating countries can either integrate a compatible age verification functionality into their national digital wallets or prepare a customised national age verification app for publication in app stores. Users will then be able to prove they are over 18 (or over another age limit, depending on the country) without revealing personal data to the online services they access.

However, in countries like Greece, concerns have been raised about potentially low adoption due to complexity, digital literacy gaps among citizens, and the over-centralisation of identity data, which could increase the risk of data breaches or surveillance.

Greece is seen as a front-runner in digital identity innovation in Europe, integrating legal recognition, digital signatures, and easy verification methods through dynamic credentials. The challenge remains balancing trust and privacy, both between consumers and tech monopolies and between citizens and governments.