Reading Time: 8 minutes

1. No Erin Brockovich for AI Groups

The famous, by now almost clichéd, reference point for collective redress is the 2000 classic “Erin Brockovich”: a film that beautifully dramatised a true story of environmental damage, powerless and vulnerable individuals, a reckless and influential corporation, visible collective harm, and the long path to justice through collective action. The metaphor is apt because it epitomises a “success story” of individual struggles joined into a common effort, made possible by the recognition of collective damage and the tool of class actions. But what if today’s “contamination” is digital rather than chemical? What if the affected people are not a visible community but dispersed users grouped by machine predictions? Then joined forces, collective redress and the legal path to justice are far less likely to end in a “happy ending.”

Digital harms affect people as members of algorithmically constructed groups, predicted to be of “low creditworthiness,” “persuadable,” or “susceptible” to compulsive spending. These groups do not pre-exist the system as socially recognisable communities; they become operational through the model’s classification and its capacity to act upon it. The relevant point is not whether the group existed prior to inference in a sociological or statistical sense, but that it becomes actionable and harm-bearing in practice while remaining legally “invisible” and thus difficult to represent. This is because such groups’ membership is inferred, fluid, and often unknown, or unknowable, to the people affected.

The challenge is procedural as much as substantive, because current collective redress mechanisms typically rely on legally recognised groups with stable, identifiable membership. Inferred groups, by contrast, are fluid, invisible, and difficult to translate into identifiable claimants, and therefore go unrepresented. The AI Act recognises group-level risks but provides no AI-specific standing rule; inferred groups may consist of non-consumers, and thus fall outside the scope of the RAD, and are difficult to identify and aggregate for GDPR-based compensation. The result is a mismatch between the AI Act’s risk-based logic and the EU’s litigation gateways, leaving systemic harms, such as collective manipulation, biased scoring, and group surveillance, difficult to pursue in court.

This blog post summarises a paper in which I argue that AI inferences create an EU collective redress gap for group-level algorithmic harms. European digital regulation is increasingly group-aware at the risk level, but the main gateways to court-based compensation still run through identifiable individuals or consumers.

The identified gap matters because AI-driven harms are often patterned. A model can repeatedly disadvantage an inferred group of people, repeatedly expose them to manipulation, or intensify discrimination risks. If compensation depends on ex post redress, self-identification, and opt-in participation, the people most exposed to systemic harms can remain practically underprotected and potentially uncompensated, even when the regulatory system is confident enough to label practices as prohibited or high-risk.

Thus, this post explains the mismatch between three instruments that increasingly shape EU digital private law, the General Data Protection Regulation (GDPR), the Representative Actions Directive (RAD), and the Artificial Intelligence Act (AI Act), identifies the “missing groups,” and finally sketches design options for collective redress from the perspective of Member States’ transposition obligations.

2. Three Laws, Three Gateways, One Structural Mismatch

The GDPR and the Data Subject

The GDPR’s enforcement architecture is built around the “data subject,” meaning an identified or identifiable natural person whose personal data are processed. That design supports strong individual rights, but it also draws a boundary around who can directly claim compensation.

Article 80 GDPR is the main representation clause. It allows a data subject to mandate a not-for-profit body to exercise rights and, where Member State law allows, to seek compensation on that data subject’s behalf. It also permits Member States to enable certain bodies to act without a mandate from data subjects. Still, the latter framework remains tied to individual interests, rather than to those of groups as such.

This tie to identifiable individuals is difficult to satisfy for inferred groups generated by large-scale profiling. A model can “act on” a group as a unit (targeting, ranking, exclusion) even where the group is defined through probabilistic criteria and membership cannot be traced to a stable list of individuals at the point of harm. As a result, damages claims tend to be confined to identifiable data subjects and individual participation, even when the alleged wrong is collective in its operation and consequences.

The RAD and the Consumer

The RAD is the EU’s collective redress instrument. It allows qualified entities (QEs) designated by Member States (Art. 4 RAD) to bring representative actions to protect the collective interests of consumers, seeking injunctions (Art. 8 RAD) and redress measures such as compensation (Art. 9 RAD). The RAD is designed as a consumer law tool proper, and its scope covers actions against infringements of provisions of the GDPR and the AI Act.

Both injunctions and redress claims can be pursued against GDPR infringements under Art. 2(1) and Annex I RAD. In CJEU 28 April 2022, META v BVV, the Court confirmed that the GDPR allows consumer associations to bring actions under national consumer law, without a mandate, alleging GDPR violations. However, since the GDPR takes precedence over national law in data protection matters, compensation claims brought without a mandate and without identifying the affected consumers are unlawful: they contradict Art. 80(2) GDPR, which does not permit pursuing compensation under Art. 82 GDPR without a mandate from the damaged party.

If the harmed group is not a group of consumers (workers affected by workplace scoring, students affected by admissions ranking, welfare recipients exposed to automated fraud flags), the RAD does not fit, even when the harm resembles consumer harm in scale (affecting many people) and in structure (generated through the same kind of mechanism, e.g. targeting or profiling). Even where the RAD does apply, injunctions are generally easier to operationalise than compensation, since the latter requires identifying beneficiaries and distributing compensation feasibly.

The consequence is that an injunction claim under Art. 8 RAD and Art. 80 GDPR protects all affected individuals, including data subjects, consumers, and natural persons, even those identified only through inferred profiling. Conversely, compensation for material or non-material damage under Art. 9 RAD and Art. 80(1) GDPR is available only to those who participate and can demonstrate a connection to the unlawful processing, satisfying the GDPR’s causation test.

This is also why the AI Act matters in this analysis. Although Art. 110 extends the RAD’s relevance by bringing AI Act infringements into the RAD framework, it does not remove the RAD’s consumer-centred design or solve the practical difficulty of compensating fluid, inferred groups.

The AI Act: Group-Aware Risk, Individualised Redress

The AI Act is explicitly risk-based and repeatedly frames harm in terms of effects on “persons or groups of persons” (e.g., Articles 5, 10, 13, 27), with recurring attention to vulnerable groups. Yet it does not create an AI-specific collective redress mechanism for those groups. Instead, it relies on public enforcement structures and individual complaints, and it brings AI Act infringements into the RAD framework by amending the RAD’s scope (Article 110). Essentially, the law is group-aware when it classifies and prohibits practices, but chiefly individualised or consumer-mediated when it comes to private enforcement for compensation.

3. Who Is Missing: Organised, Inferred, and Vulnerable Groups

To make the challenge of the (un)representation of inferred groups visible, it is worth distinguishing three types of “groups.” First, organised groups are socially recognisable and often self-aware, such as associations, communities, or minorities. They can frequently mobilise representation and make mandates legible to courts. Second, inferred groups are constructed by a model for operational purposes around labels such as “likely defaulters,” “susceptible users,” and “high-risk applicants.” They may not exist outside the system’s inferences, membership can be fluid, and the group definition can be proprietary or opaque. Finally, vulnerable groups are categories singled out ex ante by law, especially within consumer law’s risk logic: children, persons with disabilities, or groups exposed to manipulation and discrimination risks due to a vulnerability trait such as age or socio-economic background.

The enforcement problem is greatest for inferred groups constructed from large-scale data, particularly where the processing feeds on anonymised data. These groups can be heavily targeted and harmed, yet they are the hardest to fit into the current redress mechanisms and legislative machinery, which depend on identification, mandates, or consumer status. Recent case law confirms this conclusion from different perspectives. For example, in CJEU 28 April 2022, META v BVV, the Court confirmed that national law may allow consumer associations to bring representative actions alleging GDPR infringements without individual mandates. The judgment nevertheless supports the general conclusion: injunctions are possible for inferred groups, but compensation remains tied to individual mandates (paras. 68-72). In another case, The Privacy Collective v Oracle and Salesforce (2024:1651), the claim defined the affected group as “all Dutch internet users whose data were processed” and thus treated a group as the basic unit of harm. Yet standing and remedies are still routed formally through data-subject language.

These cases are encouraging for digital justice, but they also underline the theme: Europe can litigate collective harms, yet compensation remains structurally pulled back toward identification, categorisation, and procedural gates.

4. Design Options for Group-Friendly Collective Redress

If the mismatch lies between group-aware risk regulation and individualised compensation gateways, the response should focus on procedural design. The possible solutions are not limited to the legislation analysed in this blog post; in what follows, I focus on the options most salient from a national transposition perspective:

First, favour opt-out participation where harms are particularly widespread and participation costs are high. For injunctions, Art. 8(3) RAD provides that consumers are not required to express their wish to be represented by a QE. For redress, Art. 9(2) RAD allows Member States to choose between opt-in and opt-out mechanisms, while Art. 9(3) limits non-resident consumers to opt-in. Opt-out should be prioritised in both cases in order to include more affected consumers and to enable EU-level claims to serve as a deterrent.

Second, lower the procedural threshold for representativeness by allowing digital or anonymous expressions of support. An example can be drawn from The Privacy Collective v Oracle/Salesforce, where “likes” on a dedicated webpage and support from civil society organisations were deemed to qualify as proof of “actual support” for the representative action. This approach would expand access to a wider group of potentially affected individuals and reduce participation costs.

Third, operationalise damages through claimant categorisation. Sub-categories (including degrees of non-material harm) can make distribution manageable without pretending that every claimant’s experience and damage are identical.

5. Conclusion: From Group Risk to Group Redress

The EU is developing a dense digital acquis that increasingly recognises group-level risk (notably visible in the AI Act). The missing step is a credible path from group-level harm to group-level redress tools. Closing the EU collective redress gap requires realigning the GDPR’s data-subject architecture, the RAD’s consumer-centred collective enforcement, and the AI Act’s group-aware risk logic. The transformation is procedural as much as substantive: who can sue, for whom, and how compensation is made administrable.

Disclaimer: This post was drafted with the assistance of a generative AI tool (used for outlining and language editing) based on the author’s paper presented at the Collective Redress and Digital Fairness Conference, 10-11 December 2025 at the University of Amsterdam. The author reviewed, revised, and takes full responsibility for the content.

(Photo: Max Harlynking)