Introduction
Algorithm-enabled “personalisation” is a hot topic in legal scholarship. In recent times, “personalised law” has been the subject of books (like this and this), a conference culminating in a high-level online symposium and numerous events. As someone who stays largely out of digitalisation debates, I was impressed to discover the debate that has already flourished in European Private Law. At the same time, when reflecting on the link between personalisation and a subject I am versed in, namely standard terms in consumer contracts, my scepticism ultimately gains the upper hand. I will use this space to make two fundamental points. The first is rather general, concerning the nature of “personalisation” debates from my non-digital scholarly angle. The second is more specific, trying to predict the likely qualities of personalised standard contract terms and their consequences for unfair terms control under EU law.
I will disclose from the outset that my foray into personalisation has led me to agree with the findings of two close colleagues (Marco Loos and Joasia Luzak). At first glance, their conclusions may seem far-fetched, but they hold up well on closer inspection. In their recent advisory report on unfair terms in the digital context, they concluded:
“We could recommend considering as potentially unfair such terms of DSPs, which have been adjusted as a result of automated decision-making for a given consumer, without this having been disclosed to consumers. Moreover, the assessment of unfairness of a given term could account for whether it has been personalised. A rebuttable presumption could be introduced that personalised prices and terms are discriminatory, and, therefore, unfair.”
I reach the same conclusion in two distinct steps:
Step 1: Why all the talk of personalisation?
We have all been there. Joking with friends and colleagues about how the almighty algorithm got it wrong once again, suggesting we follow a political party far removed from our ideas or subscribe to a teenage influencer in another continent. This evidence is of course anecdotal (even though such anecdotes are sometimes remarkable) and many will counter with reports of the Twitter algorithm’s accuracy in suggesting relevant profiles.
Whatever the current and future potential of this technology, marketing agencies are offering a new generation of individualisation tools, which they distinguish from coarser-grained personalisation. These promise ever more precisely targeted interventions in the shopping experience, known as “1:1 personalisation”. To me, there is little doubt that this represents a further commodification of consumers. It hinges on personal data being used to create a fundamentally extractive business model for UX and advertisement companies, who claim these practices lead to large increases in revenue or engagement (for a critical view, see this paper by Nextcloud exec Daphne Muller). The data-as-consideration debate (succinctly reconstructed here) is merely the tip of a vast (if not entirely unexplored) iceberg. At the core of this commodification lie consumer identities being de-constructed and later re-assembled according to the contingent needs of algorithms.
When it comes to standard terms (the basis of nearly all internet transactions) the commercial offering is more limited. Whilst contract automation software is on the rise, the market for contract personalisation tools and services seems rather dormant. While to some extent this may be a matter of time, my intuition is that this is connected to the ultimate nature of personalisation as a marketing tool.
In essence, personalisation is a form of marketing. While not always directly aimed at increasing revenue, it is inevitably geared towards aspects of the consumer experience that receive some degree of attention. This is hardly the case for standard terms, as has been theorised in different forms and empirically investigated in a large body of scholarship. Put simply, if consumers do not base their shopping behaviour on standard terms, why bother personalising? Broadly speaking, companies use contracts to clarify expectations, allocate risk and pre-emptively regulate disputes. Most of these goals are connected with lowering transaction costs. In contrast, unnecessary personalisation efforts may ultimately increase transaction costs while doing little to generate extra revenue or reduce uncertainty.
Targeted advertisement and personalisation, some say, are welfare-enhancing in so far as they improve the customer experience, reduce the nuisance of irrelevant advertisement and allow us to discover products we didn’t know we absolutely wanted to have. To me, the limited commercial interest in personalising standard terms is another reason to doubt this “welfare-enhancing” potential. Even allowing for these (quite tenuous, in comparison) advantages, the reasoning does not easily extend from recommendations to personalised terms. We know from the extensive research above, and from further work generated in different corners, that most standard terms are not salient to consumers when they are making a purchasing decision. Terms that are preference-compliant in abstracto, in other words, rarely have marketing value. Suppose a company does personalise its terms: why would it bother to do so based on consumer preferences?
In practice, personalisation of standard terms ultimately resembles a version of personalised law: unilateral rule-setting by one party in the form of likely quite sticky (that is, hard to alter) default contract terms. If industry personalises faster than legal systems (as it likely will), tensions are bound to arise. The legitimacy issues this poses have been widely discussed, with Auer effectively capturing them in a recent book contribution.
Step 2: Personalised standard terms and European Unfair Terms Control
Given the above considerations, what are the legal implications of personalised standard terms? In what follows, I will sketch an analysis based on the current European rules on unfair terms in consumer contracts.
First, it is possible that personalised standard terms would still be considered standard terms under EU consumer law. Under the definition contained in the Unfair Terms Directive, terms need not be drafted for an indefinite number of uses, in contrast with several Member State doctrines in force prior to the Directive. Even perfectly individualised terms ultimately fall within the Directive, and the frameworks that have developed around it, when they are drafted by the company and the consumer’s input is limited to accepting them.
Second, given our emerging understanding of digital vulnerability and the low salience of most standard terms, there is little reason to assume providers would use personalisation in a mutually beneficial way.
Assuming that personalisation would not immediately change the functions of standard terms, Collins provides the clearest analysis in what can be considered an academic classic. Writing over 20 years ago, he highlights functions covering customisation, risk-allocation/flexibilisation, sanctions and dispute proceduralisation.
The clearest example of customisation is undoubtedly price personalisation, perhaps better known as price discrimination. However, following Porat & Strahilevitz, further customisation seems plausible. Default delivery terms provide a solid example: users deemed likely to opt for home delivery could initially be shown a price including shipping costs, only to later be offered a discount for pick-up. Such terms may benefit consumers in contexts where home delivery is the most popular option, as the overall product price is displayed from the outset. By the same logic, however, there appears to be little reason for a provider to switch to such default terms unless mandated. Notice, furthermore, how this example is based on relatively coarse, market-based personalisation rather than individualisation.
For industry, an even more interesting application of personalisation would be to use consumer data to tailor payment options. Payment by instalment could be made the default for certain consumers based on previous shopping behaviour and other characteristics, such as income and general expenditure levels. Such a default configuration seems likely to contribute to over-indebtedness by encouraging consumers to buy items they clearly cannot afford. This could further be exacerbated as recurring payments mount, increasing the risk of missing a single instalment.
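To make the mechanism concrete, here is a deliberately simplified sketch of the kind of rule a trader might encode when pre-selecting payment defaults. Every field name, threshold and the decision rule itself are hypothetical illustrations for the sake of argument, not a description of any real system:

```python
# Toy sketch of algorithmic payment-default selection.
# All field names, thresholds and the decision rule are hypothetical.

def default_payment_option(profile: dict) -> str:
    """Pick the pre-selected payment term from a crude consumer profile."""
    # Inferred "instalment receptiveness": past behaviour plus how much of
    # the consumer's monthly income is already being spent.
    used_instalments_before = profile.get("past_instalment_purchases", 0) > 0
    income = max(profile.get("monthly_income", 1), 1)
    spend_ratio = profile.get("monthly_spend", 0) / income

    # Note the perverse incentive described above: consumers who are already
    # financially stretched are precisely the ones defaulted into credit.
    if used_instalments_before or spend_ratio > 0.8:
        return "pay_in_instalments"
    return "pay_in_full"

# A consumer spending almost all of their income gets credit pre-selected.
stretched = {"past_instalment_purchases": 0,
             "monthly_income": 1500, "monthly_spend": 1400}
print(default_payment_option(stretched))  # -> pay_in_instalments
```

Nothing in this toy rule weighs the consumer’s interest; the default simply follows predicted acceptance, which is exactly why such configurations risk feeding over-indebtedness rather than preventing it.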
This is enough, I believe, to cast a critical light on personalised standard terms. However, until now we have only mentioned relatively salient terms. What about non-salient ones?
In long-term contracts, flexibilisation is of prime interest to traders. Personalised flexibilisation is already present in some fields, such as insurance contracts. In particular, personalised flexibilisation in online services would provide benefits such as the ability to monitor a variety of user behaviours whilst collecting new user data in real time. This goes far beyond the already extensive data collection and adaptation permitted in so-called “black box” insurance contracts.
Finally, personalisation could be used to implement different termination clauses for different users. For instance, what if companies were able to adapt conventional limitation periods or dispute resolution instructions to the consumer’s projected claim-proclivity, tendency to perform tasks last minute or ability to navigate procedures?
Under EU unfair terms rules, terms which deviate, to the consumer’s detriment, from the otherwise applicable non-mandatory rules of national contract law are likely to be unfair (the so-called Leitbild approach developed since Mohamed Aziz). But how is consumer detriment to be construed in the context of personalised standard terms? Surely companies could simply claim that their personalised default rules cannot be considered unfair under the CJEU doctrine because they aim to enhance the welfare of specific consumers by allowing them to satisfy their preferences more easily? As previously mentioned, examples include payment and delivery options. In such a context, how could an individual consumer ever show that these terms operate to their detriment without an explanation of the algorithm producing them?
All things considered, in the absence of effective algorithmic auditing it seems safest to assume that personalised standard terms are a technological transformation we can happily do without, and should indeed be presumed unfair. This can happen in two plausible ways: first, by arguing that personalised terms are by definition intransparent, because consumers cannot be expected to understand what the algorithm saw as likely consequences of such terms when including them in the contract; second, by more strictly applying the Leitbild approach to all terms deviating from non-mandatory rules. In both cases, it would be open to the companies concerned to show that the personalised terms were in fact intended to be consumer-friendly. Such a presumption would incentivise companies to assess their use of personalisation carefully and leave space for true “win-wins” – while sticking to the wise rule of thumb that if something looks too good to be true, it’s probably not true.
This post is based on my contribution to the conference “Regulating Personalisation in the European Union”, held at the University of Amsterdam on 22-23 September 2022. I am indebted to the organizers for giving me an excuse to look into this fascinating subject. Thanks are further due, in particular, to Christoph Busch for raising the transparency point during the conference, to Marija Bartl and Yannick van den Berg for their comments on the draft post and to Angus Fry for editing.
(Photo: James Osborne)