
The transatlantic speech regulation crisis is in full swing. Since proclaiming in early 2025 that American online platforms are victims of ‘overseas extortion and unfair fines and penalties’, US officials and other prominent public figures have fiercely challenged the legitimacy of EU rules on content moderation. The EU refuses to back down. By finally issuing a hefty fine to X, delivering a preliminary finding that TikTok’s addictive design violated the DSA and swiftly responding to the recent scandal surrounding xAI’s chatbot Grok, the European Commission clearly wishes to signal that it is serious about ramping up its enforcement capacity and upholding its commitment to ensuring a safe, predictable and trusted online environment.

The current standoff between regulators is a serious obstacle to developing a cohesive approach to regulating online platforms. It also raises the question of how geopolitical tensions affect bottom-up efforts to promote common principles and standards for fairness and integrity in content moderation, and to hold platforms accountable where their practices harm individual users or society as a whole.

This blogpost reflects on the current state and future of transnational legal mobilisation in the area of content moderation, encompassing both recourse to courts and law-based advocacy and activism. It argues that, given the increasingly bleak outlook for online safety, there is a dire need for a more visible and influential platform accountability movement spearheaded by non-profit organisations, activists and researchers from both sides of the Atlantic. After briefly discussing the phenomenon of transnational movements, the blogpost identifies the main challenges to collective action and how the present instability could exacerbate them. It then emphasises the vital role of mobilisation beyond national and continental borders in reaffirming respect for the fundamental rights and values affected by content moderation, and highlights the opportunities for shaping and sustaining such mobilisation even under turbulent circumstances.

The role of transnational movements in protecting digital rights

Transnational movements are defined as a form of sustained, collective effort characterised by regular cross-border interaction and cooperation among the actors involved. In their seminal 1998 book Activists beyond Borders: Advocacy Networks in International Politics, Margaret E. Keck and Kathryn Sikkink offered the first systematic account of how transnational advocacy networks (TANs), formed around a shared commitment to human rights, exert significant influence in both global and domestic politics and are capable not only of effecting targeted policy changes but also of reshaping broader normative discourses. While transnational action can take diverse forms, strategic litigation – defined as a legal action initiated to achieve broader social, political, or economic ends (Cebulak, Morvillo and Salomon, 2025) – is arguably one of the most powerful means of opposing the impunity of powerful actors and achieving meaningful societal change.

Given the inherently global nature of the Internet, transnational movements are instrumental in protecting and promoting digital rights (Strobel, 2022). The field of privacy and data protection offers a great example of how civil society actors actively cooperate across jurisdictions to resist global surveillance. As shown in the work of Lehoucq and Tarrow (2020), an alliance between US- and EU-based organisations and activists, fuelled by Snowden’s revelations, paved the way for robust collective action both within and outside courtrooms, despite considerable differences in the respective legal frameworks.

Limitations of transatlantic legal mobilisation initiatives in advancing platform accountability

In both the US and the EU, civil society plays a crucial role in the platform regulation discourse at the domestic level. However, cooperation between societal actors across the Atlantic appears less cohesive and robust than in other areas. There are some important global initiatives aimed at promoting accountability in content moderation. Collaborative projects like the Global Network Initiative (GNI) seek to foster global solidarity, and established events like RightsCon provide an important platform for sharing experiences and identifying common problems and solutions. Cooperation among actors from different parts of the world (including the Global South) has also enabled transnational standard-setting through instruments such as the Manila Principles on Intermediary Liability and the Santa Clara Principles on Transparency and Accountability in Content Moderation. In January 2026, the newly formed DSA Human Rights Alliance, consisting of civil society organisations, researchers and human rights advocates, also released the ‘Principles for a Human-Rights Centred Application of the DSA’, which aims to reaffirm respect for international human rights standards and reflect perspectives from around the globe.

At the same time, civil society actors in the EU and the US often struggle to build unity and mutual support around issues of platform accountability. As my colleague Sarah Tas and I have previously observed, transnational strategic litigation on content moderation involving organisations and activists from both continents has yet to emerge. Growing geopolitical pressure poses a further challenge to existing collaborations. Most American organisations refrain from openly condemning the Trump administration’s war on online safety initiatives, and only a few have expressed solidarity with the individuals affected by the recently adopted travel restrictions.

Barriers to more resilient transatlantic cooperation

There are several reasons why building a more solid and influential transnational coalition to advocate for better content moderation practices remains a daunting task. The existing movement focused on content moderation is far more fragmented than those in other areas. Platform policies and features affect a broad range of rights and societal interests. It is therefore hardly surprising that a highly heterogeneous group of actors is involved in the debates on platform governance and regulation, with organisations and individuals working on issues ranging from child protection and counterterrorism to artistic freedom and electoral integrity. There is also a glaring lack of consensus on what fair and responsible content moderation really is. Since online speech issues are notoriously complex and nuanced, civil society actors often seek very different (and sometimes opposing) outcomes through legal action. All of this stands in stark contrast to the global anti-surveillance movement, which has coalesced around a shared understanding of privacy as a public good and has been steered by lawyers and advocates promoting concrete pathways to safeguard it.

The obstacles to collaborative legal mobilisation efforts also stem from divergent legal histories and political cultures. As rightly noted by Hannah Bloch-Wehba (2024), many prominent digital rights organisations, such as the Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU), operate within a civil libertarian paradigm, focusing primarily on resisting governmental overreach while devoting significantly less attention to abuses of corporate power. This orientation reflects the strong influence of the First Amendment and Section 230 of the Communications Decency Act (CDA), which affords platforms extensive immunity for their content moderation decisions. In Europe, freedom of expression is subject to a more restrictive interpretation. Therefore, unlike their American counterparts, EU-based organisations and activists do not shy away from urging platforms to take a stricter approach to moderating content by prioritising the interests of vulnerable users over freedom of expression concerns. For instance, long before the EU’s regulatory approach to online platforms took shape, NGOs such as the Union of Jewish Students of France (UEJF) and the International League Against Racism and Anti-Semitism (LICRA) confronted Twitter in French courts over its failure to combat hateful content.

Since the adoption of the DSA, there has been a rapidly growing number of private enforcement lawsuits in several Member States. However, the absence of transnational cooperation could undermine the real-world impact of litigation efforts. For instance, in the ongoing case against Meta brought by the Dutch advocacy group Bits of Freedom, the District Court of Amsterdam ordered Meta to ensure easy access to a chronological feed on Facebook and Instagram, but only for users in the Netherlands. By joining forces, civil society actors could enhance the visibility and reach of their activities and achieve more far-reaching outcomes. Such collective action can take various forms, ranging from bringing parallel judicial proceedings in multiple jurisdictions to submitting third-party interventions and exchanging both empirical data and legal expertise.

Way forward

Despite the current transatlantic conundrum, cooperation among US- and EU-based civil society actors remains indispensable, especially as the recent rollback of safety measures by several leading platforms poses a real threat to democracies around the world. Although the US is increasingly portrayed as a stronghold of free speech absolutism, its lawmakers are nonetheless engaging more and more with online safety concerns. Just last year, Congress adopted the TAKE IT DOWN Act, aimed at counteracting non-consensual intimate imagery, with several child safety bills remaining on the agenda. These developments arguably signal the emergence of a shared recognition of the importance of ensuring responsible and trustworthy platform behaviour, which can in turn provide fertile ground for transnational collective action. Even where societal actors struggle to agree on the substantive aspects of content moderation, there can be more room for consensus on its procedural attributes, such as transparency reporting, explainability of algorithmic systems, and data access for researchers, given their salience on both sides of the Atlantic (Palladino, Redeker and Celeste, 2024). Online child and youth safety is another topic of growing concern to non-profit organisations, lawyers, and activists worldwide. In this respect, the ongoing court proceedings targeting online platforms’ addictive design features in the US could carve out a compelling new field for civic cooperation across continents.

The elaborate system of redress mechanisms under the DSA is also expected to play an important role in prompting transnational legal and political mobilisation efforts. Given the well-known hurdles to access to justice, non-profit organisations could be particularly drawn to out-of-court dispute settlement bodies, which offer a swift, professional, and cost-effective review of platforms’ content moderation decisions. Furthermore, civil society actors can exert pressure for supervisory action by bringing complaints before the Digital Services Coordinators (DSCs) or the Commission. In 2024, a collective effort to hold LinkedIn to account for violating the prohibition of targeted advertisements based on profiling using sensitive personal data resulted in the platform voluntarily adjusting its practices to ensure compliance with the DSA, showcasing the strong potential of bottom-up mobilisation. While the regulatory clash is likely to persist for the foreseeable future, the transatlantic platform accountability movement is key to reversing the ongoing decline of digital rights protection and standing up for those most affected by harmful platform practices.

(Photo: Max Harlynking)