Introduction
Digitalisation is reshaping the administration of justice across Europe. Electronic filing, online portals, and artificial intelligence (AI) tools now structure many of the everyday interactions between individuals and courts. This shift is often presented as a route to broader participation and more efficient, responsive institutions. Yet the experience of many users paints a more complex picture. When digital systems presuppose stable connectivity, up-to-date devices, a high level of digital literacy, or the confidence to navigate unfamiliar online environments, they risk reproducing the very inequalities they are meant to alleviate.
This blog post explores the emerging concept of equitable digital justice within the European Union (EU). Building on recent debates about vulnerability, group-based harm, and the limits of current EU regulatory frameworks, it argues that digital transformation will only improve access to justice if institutional design and regulatory oversight take the realities of digital exclusion seriously.
Digital Exclusion as a Structural Justice Problem
Digitalisation in the justice sector is rarely neutral. It redistributes burdens across the system: what previously required a physical visit to a court or administrative office may now demand a stable internet connection, a functioning device, and the ability to interpret interfaces that resemble commercial platforms. For many individuals, these expectations are manageable; for others, they are prohibitive.
Rather than treating digital vulnerability as a fixed personal condition, it is clearer in this context to describe it as a heightened risk of exclusion, delay, or disadvantage when legal procedures move online. That risk emerges when individual circumstances – such as limited digital skills, disability, low income, unstable connectivity, or language barriers – interact with procedural requirements and interface design. In this sense, the problem is structural and systemic: exclusion is produced not only by what users can or cannot do, but also by how institutions design and govern digital pathways. This understanding resonates with recent work on vulnerability in EU data protection and technology governance (Malgieri, 2023) and with broader research on how digital infrastructures can intensify pre-existing socio-economic inequality (Eubanks, 2018).
Those most exposed to digital exclusion are often people who already face disadvantage: persons with disabilities, older adults, people with limited literacy, people living in rural or deprived areas, and people in precarious economic situations. When those disadvantages meet rigid digital procedures, the result can be delayed claims, abandoned complaints, or an inability to exercise rights at all. If access to justice is to remain meaningful, legal and institutional frameworks must therefore identify and reduce these recurrent patterns of exclusion.
Designing Justice Systems That Include Rather Than Exclude
A growing body of research in law and technology highlights how digital interfaces can shape user behaviour in subtle but significant ways. Seemingly technical design choices – such as the mandatory use of electronic identification, strict upload formats, or time-limited sessions – may render online systems inaccessible to those who face constraints in time, resources, or digital familiarity. These obstacles are often invisible to system designers, yet they can determine whether an individual’s claim is advanced or abandoned.
Recent evaluations of the EU Online Dispute Resolution (ODR) Platform illustrate this point sharply. Although the platform received millions of visits each year, fewer than 2% of complaints led to any substantive engagement with the trader. That outcome does not prove that digital vulnerability was the sole reason for low engagement: weak trader incentives, limited awareness, and the voluntary structure of the scheme also mattered. It does, however, show that the mere existence of a digital interface says little about whether users can engage with it meaningfully. Formal availability is therefore an insufficient measure of accessibility.
For courts and administrative bodies, inclusive design is therefore not a secondary usability concern but a precondition for access to justice. In judicial settings, this is tied directly to Article 47 of the Charter of Fundamental Rights and, more broadly, to the rule-of-law requirement that rights be practically and effectively exercisable. Institutions should provide multiple identification routes, allow users to save progress and return later, ensure compatibility with low-bandwidth connections, and offer assisted-digital support through accessible channels. Just as important, a non-digital pathway must remain genuinely available, not merely as an exceptional fallback. Paper-based or in-person routes should be procedurally equivalent and should never carry an adverse inference.
The Role and Limits of EU Regulation
The EU has taken important steps to regulate the use of digital tools and AI systems. The General Data Protection Regulation (GDPR), the Digital Services Act (DSA), and the Digital Markets Act (DMA) each contribute to a broader framework governing personal data, online intermediaries, and digital markets. The most significant recent development is the AI Act, which entered into force on 1 August 2024 and adopts a risk-based model. Importantly for this discussion, the Act treats certain AI systems used in the administration of justice as high-risk because of their potential impact on fundamental rights, the rule of law, and the right to an effective remedy. In the justice context, those concerns are usually central to the legality and legitimacy of the system itself.
Even so, these instruments do not yet add up to a comprehensive framework for equitable digital justice. The AI Act can certainly be read as moving in that direction, and the DSA likewise shows that EU law can translate rights concerns into interface design and out-of-court dispute settlement duties. The consumer ADR/ODR framework points in the same direction. But the overall picture remains fragmented. These instruments address particular sectors, services, or categories of system; they do not impose a general duty on courts, tribunals, or public authorities to monitor digital exclusion, preserve equivalent offline routes, or assess unequal burdens on groups with limited digital capacity. The GDPR, meanwhile, remains better suited to controller duties and individual complaints than to cumulative forms of group-based exclusion.
This matters because digital injustice is often caused by ordinary design and governance choices rather than by one-off user mistakes. Default digital-only channels, rigid identity checks, inaccessible file formats, short time-outs, or the absence of human support may each seem minor in isolation. Taken together, however, they can systematically disadvantage the same groups. Effective governance, therefore, requires institutions to identify recurring patterns of exclusion, monitor outcomes across different groups, and intervene when disparities persist.
Towards Institutional Models of Equitable Digital Justice
To ensure that digital transformation genuinely advances access to justice, a set of institutional commitments must accompany technological change. Three areas are particularly important.
First, vulnerability-sensitive design should be treated as a substantive obligation, not an optional enhancement. Institutions must test whether their systems remain usable for people with low-cost devices, unstable connectivity, limited digital literacy, disabilities, or limited confidence online. Assisted-digital support is part of that obligation, but it is not enough on its own: a genuinely available non-digital route must remain in place as a prerequisite for fair digitalisation.
Second, robust assessment and audit mechanisms are essential, especially in the justice sector, where procedural fairness, equality of arms, and public trust are at stake. Scholarship on algorithmic accountability has argued for institutional approaches to impact assessment (Selbst, 2021). In this context, fundamental rights impact assessments should be required before deployment and revisited during use. Where such assessments already exist, they should go beyond box-ticking exercises and include disaggregated indicators that reveal uneven effects across different groups. Supervisory bodies should also have a clear mandate to require technical or procedural changes when evidence shows that digital tools are exacerbating inequality.
Third, continuous feedback loops are crucial to ensure that systems evolve in response to their actual performance in the real world. This includes transparent reporting on errors, drop-off rates, and user difficulties; mechanisms for independent oversight; and, where necessary, sunset clauses for underperforming digital services. The history of the EU ODR Platform demonstrates that unused or ineffective digital tools should not remain in place simply because they exist.
Conclusion
Digital transformation has the potential to expand access to justice, reduce administrative burdens, and increase the transparency of legal processes. However, it can also entrench disadvantage if systems are designed without regard to the varied capacities and circumstances of their users. Achieving a more equitable digital justice landscape therefore requires both regulatory reform and institutional discipline: the recognition that digital exclusion is not accidental, but often built into procedures, interfaces, and default assumptions. A practical lesson from our projects on digital justice design is that the most consequential choices are often the most ordinary ones: clear instructions, channel choice, the possibility to pause and return, and access to a human being when something goes wrong.
The challenge for the EU is therefore not whether to digitalise, but how. Ensuring that digital tools enhance, rather than hinder, the exercise of rights demands an approach anchored in inclusiveness, transparency, accountability, and the continuing availability of non-digital routes. If institutions adopt vulnerability-sensitive design, develop meaningful assessment mechanisms, and commit to continuous oversight, digitalisation can serve as a vehicle for justice. Without such safeguards, it risks becoming yet another barrier for those already at the margins.
(Photo: Max Harlynking)