
This blog post builds on the analysis developed in our article ‘Towards Collective Redress for Data Harms under the GDPR’, published in Mass Claims Journal 2025/1, pp 67-77.

In a global digital and information economy, certain data protection violations are not incidental lapses but structural consequences of business models relying on large-scale data exploitation. Examples include algorithmic profiling and behavioural targeting based on personal data collected without sufficient transparency or legal basis, or the extensive unauthorised sharing of user data with third parties as part of advertising infrastructures.

Because the harms such violations produce are inherently collective in nature, meaningful redress and enforcement necessarily require collective redress mechanisms and remedies capable of capturing the collective dimension of the harm.

Drawing on scholarship in informational capitalism and recent CJEU case law (both discussed below), we argue that EU law already provides the doctrinal tools for such collective responses—particularly through Articles 80 and 82 of the GDPR and the Representative Actions Directive (RAD). The key question is how these existing instruments should be interpreted and applied to large-scale data protection violations by Big Tech, so as to implement effective collective redress mechanisms that ensure sufficient access to justice.

This blog post will first present the main features of business models based on large-scale data exploitation and explain why the harm they cause has an intrinsically collective dimension. It will then discuss the challenges these harms pose for proper compensation and enforcement, to support the case for a redress framework capable of ensuring truly effective remedies for the victims involved.

  1. From isolated breaches to business model violations

Many of today’s most significant GDPR violations are embedded in the architecture of data-driven business models, rather than isolated breaches or compliance failures. These “business model violations” pertain to large-scale and systematic unlawful data processing practices that sit at the core of many digital platforms’ revenue structures.  To understand these practices, we draw on the literature on informational capitalism, including works such as Shoshana Zuboff, The Age of Surveillance Capitalism (2019), Julie E. Cohen, Between Truth and Power (2019), Salomé Viljoen, A Relational Theory of Data Governance (2021), and Ignacio Cofone, The Privacy Fallacy (2024).

This literature shows how personal data has become a core resource of the digital economy, with data extraction and behavioural prediction being foundational to platform capitalism. Platforms are structurally incentivised to maximise data collection and retention, often relying on opaque processing practices that limit individuals’ ability to understand or control how their data is used. Within these business models, tensions with, and infringements of, data protection requirements arise from their underlying economic and operational logic. Because this logic operates at scale and by design, it affects entire user populations rather than isolated individuals.

  2. The nature of data harms: relational, pattern-based, and collective

These harms have three interconnected characteristics: they are essentially relational, pattern-based, and collective. What a company knows about one person is shaped and co-determined by what it knows about others who share similar characteristics. Harm typically arises through the construction of networks of information and travels along those networks: subjects become progressively more vulnerable as statistical data and inferences enable ever more pervasive privacy intrusions. Problematic patterns often become visible only when viewed across a group. These harms are therefore essentially collective in their manifestation: although ultimately experienced individually, they arise from systemic processes affecting many.

However, the traditional focus on individual harm obscures structural wrongdoing, and is often incapable of properly addressing it. In individual proceedings, each case is assessed in isolation, and the structural character of the decision-making process, anchored in datasets, correlations, and probabilistic inferences, remains largely invisible. Assessed collectively, however, such decisions may reveal a methodology shaped by shared datasets and predictive models. For example, a single rejected job application may appear ordinary; viewed collectively, patterns of algorithmic harm may emerge. This argument builds on scholarship such as Alessandro Mantelero's work on the collective dimension of data protection and the literature on group privacy (e.g., Linnet Taylor et al., Group Privacy, 2017).

  3. Compensation and enforcement gaps

The structural character of business model violations produces two related difficulties. First, there is a compensation gap: the harm generated by large-scale data practices often cannot be adequately recognised or quantified through individual proceedings, because its collective and pattern-based nature becomes visible only at the group level. Second, there is an enforcement gap: both public and private enforcement mechanisms struggle to address systemic infringements effectively.

Only collective litigation, and specifically collective compensation claims, has the power both to reveal business model violations and to address the harm they cause. Although national data protection authorities (DPAs) do impose fines, enforcement remains uneven and often slow. As Gentile & Lynskey argue, the GDPR's transnational enforcement structure suffers from procedural and coordination deficiencies. And even when fines are imposed, they do not compensate victims for the harm suffered.

Individual litigation is equally problematic: data subjects may not know a violation occurred, causation and harm are difficult to prove, individual damages are often small, especially relative to litigation costs, and information asymmetries are severe. As Janciute (2019) and Mulders (2022) have argued, scattered damages and rational apathy make GDPR claims particularly ill-suited for individual litigation. This is not simply a problem of access to justice, but a structural mismatch: individual procedures cannot adequately address collective harms caused by business model GDPR violations.

  4. The GDPR is sufficiently flexible to allow effective collective litigation and full compensation of collective harm

While a GDPR violation does not automatically constitute compensable damage (C-300/21, Österreichische Post), large-scale violations may generate distinct forms of harm that require legal recognition. Yet, we believe that the notion of damages under the GDPR, as interpreted by the CJEU, is sufficiently flexible to allow for compensation of damage, both material and non-material, caused by business model violations. Article 82 GDPR grants any person who has suffered material or non-material damage because of a GDPR violation the right to compensation. Recent CJEU case law has dealt with several questions concerning the definition of what constitutes non-material damage, its functions, and the criteria for its assessment; while some points have been firmly established, many open questions remain. The Court has, for instance, clarified that non-material damage does not require a minimum threshold of seriousness; that fear or loss of control may, in specific circumstances, qualify as compensable harm; and that the award under Article 82 has the sole function of ensuring full and adequate compensation (C-300/21, Österreichische Post; C-340/21, Natsionalna agentsia za prihodite; C-667/21, Krankenversicherung Nordrhein; C-687/21, MediaMarktSaturn; C-741/21, Juris; C-182/22, Scalable Capital; C-456/22, Gemeinde Ummendorf; C-590/22, PS; C-200/23, Agentsia po vpisvaniyata; C-507/23, Patērētāju tiesību aizsardzības centrs; C-655/23, Quirin Privatbank; see for an overview Walree 2025). Yet, scattered as they may be, those developments do not preclude recognising privacy harm and opacity loss as compensable damage, to the extent that the latter are properly understood as concrete negative consequences arising from the infringement, which require full redress.

Importantly, recognising that harms arising from mass data protection violations are distinctive and require collective mechanisms for proper compensation helps address a common criticism, namely, that such mechanisms would be driven by objectives unrelated to the principles of interpersonal justice that define tort law as a distinct field. Collective damages would not "instrumentalise" private law: they are first and foremost a way of ensuring actual compensation of real harms, characterised by a collective dimension that cannot be captured at the individual level. In this sense, they are a genuine redress tool, which does not introduce punitive or policy-driven objectives into private law but rather ensures that existing compensatory principles can operate effectively in contexts where harm manifests collectively rather than individually (see also Van Duin et al. 2024). Compensation itself, if complete and adequate, has, to a certain extent, a deterrent function; that function, however, is a byproduct of compensation, not the main driver behind it.

If this understanding of Article 82 GDPR is possible within the current legislative and judicial framework, we also claim that the most appropriate way of realising it at the collective level – opt-out claims – is equally permitted under Article 80 GDPR. Article 80 GDPR allows data subjects to mandate nonprofit organisations to act on their behalf and permits Member States to enable independent representative actions. Its purpose is to enhance the defence of data subjects' interests. The link this provision establishes between a compensatory claim and a mandate given by data subjects to a representative organisation is sometimes interpreted as meaning that compensatory proceedings can only take an opt-in form. On the contrary, and taking Article 47 of the EU Charter as a normative benchmark, we claim that interpreting Article 80 to exclude collective damages would frustrate effective judicial protection and would essentially run counter to the goals and formulation of the RAD. For these reasons, we instead support an interpretation of Article 80 which does not equate the presence of a mandate with a particular participatory regime, but rather recognises that appropriate forms of support by data subjects and the decision to be ultimately bound by the award represent, in themselves, a 'mandate' for the purpose of this provision (on the topic, see the recent preliminary question posed by the Rotterdam District Court).

  5. Conclusion

Collective redress is not merely a way to overcome rational apathy, a procedural efficiency tool, or a response to scattered damages. It is necessary to properly conceptualise and address harm in the data economy. Collective proceedings enable a qualitatively different form of adjudication than individual proceedings, one capable of recognising patterns, systemic discrimination, and structural opacity loss.

No radical new legal architecture is required. The GDPR and the RAD already contain the normative and procedural foundations for collective redress. The task is interpretative and requires implementation at the Member State level. First, courts must embrace a broad(er) understanding of compensable harm, with an eye to the collective nature of data harms. Second, Member States must implement robust, opt-out collective redress mechanisms.

As Big Tech’s business models continue to evolve, the future of effective data protection will, in our view, depend less on the size of administrative fines and more on the capacity of private law to respond to systemic harm.

(Photo: Max Harlynking)