Since Google announced on 3 March the upcoming end of the third-party cookie, it has been promoting Federated Learning of Cohorts (FLoC) as the privacy-friendly alternative for targeting ads. But FLoC does not mitigate many of the problems that arise in the context of targeted advertising, from manipulative advertising that exploits people’s frailties or, even worse, their baseless political beliefs, to the ecosystem’s dependence on a single player who manages the user interaction. Hence, it should not be considered a final answer to the demands for change – and perhaps no answer at all if adopting FLoC means getting stuck with it instead of continuing to develop better answers.
Federated Learning and FLoC have been in the making for a while
Federated Learning is not new: Google has been talking publicly about the idea for some years already. The idea is that instead of pooling data in one place to train an algorithm, the algorithm is sent out to learn from locally stored data (such as on your smartphone) and to bring back home only the insights, not the data. This means that the data may never “leave your phone”, but it is still used to generate groups of individuals that may be linked by private and rather sensitive characteristics. In the case of FLoC, the algorithm uses the data to group individuals into “cohorts” that share similar interests – as inferred from their browsing behaviour. These interests are then used to target advertising.
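To make the mechanics concrete, here is a minimal sketch of the federated-averaging idea in Python. The toy linear model, the simulated devices and all names are illustrative assumptions, not Google’s actual system; the point is simply that only model updates, never the raw data, travel back to the server.

```python
import numpy as np

# Minimal sketch of federated averaging: each device trains on its own
# data, and only the resulting weights -- never the raw data -- are sent
# back. Toy linear model and parameters are illustrative assumptions.

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a toy linear model on data that never leaves the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w  # only the updated weights travel back to the server

def federated_round(global_weights, devices):
    """The server averages the locally trained weights into a new model."""
    updates = [local_update(global_weights, X, y) for X, y in devices]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):  # five simulated phones, each with its own private data
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ true_w + rng.normal(scale=0.1, size=20)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, devices)
print(w)  # approaches [2.0, -1.0] without any raw data being pooled
```

Even in this toy version, the tension is visible: the raw data stays put, yet the averaged model still encodes what was learned from everyone’s data.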
They are not as different from the status quo as we would want them to be
What does not change with FLoC is that browsing behaviour will still be recorded. It will be tracked primarily by Google (rather than by numerous other third-party trackers) and will be used to identify you not individually, but as part of a group (cohort). There is still some uncertainty as to whether automatically assigning individuals to cohorts is permissible under the GDPR (another reading is that Google is holding back FLoC testing in the EEA for political rather than legal reasons). No data is sent from A to B, and no user is personally identifiable. But does it really give us privacy? Privacy can be understood in at least two ways: first, privacy as valuable in itself, as the freedom from being tracked. And we are still being tracked. Second, privacy as a means to shield us from harm when data about us can be used against us. And this can still happen: precisely because targeting remains opaque and users cannot control it, it is likely to keep influencing users’ perceptions, including through manipulation and the exploitation of vulnerabilities (such as weak emotional states or susceptibility to political polarisation) to target advertising.
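For a flavour of how cohort assignment can work, Chrome’s first FLoC origin trial reportedly derived cohort IDs from a SimHash of the domains a user visited, so that users with similar histories land on similar hashes. The sketch below is a simplified illustration under that assumption; the hash choice, bit width and the clustering of hashes into final cohorts are all toy simplifications, not the production algorithm.

```python
import hashlib

# SimHash-style sketch: a short fingerprint of a browsing history.
# Users with overlapping histories tend to get the same or a nearby
# hash, which is what allows grouping them into interest cohorts.

NUM_BITS = 16  # a deliberately short hash: many users share each value

def domain_vector(domain):
    """Map a domain to a reproducible pseudo-random +/-1 vector."""
    digest = hashlib.sha256(domain.encode()).digest()
    return [1 if (digest[i // 8] >> (i % 8)) & 1 else -1
            for i in range(NUM_BITS)]

def simhash(domains):
    """Sum the per-domain vectors and keep only the sign of each bit."""
    totals = [0] * NUM_BITS
    for d in domains:
        for i, v in enumerate(domain_vector(d)):
            totals[i] += v
    return sum((1 << i) for i, t in enumerate(totals) if t > 0)

history_a = {"news.example", "cooking.example", "football.example"}
history_b = {"news.example", "cooking.example", "tennis.example"}
print(simhash(history_a), simhash(history_b))  # similar histories, close hashes
```

The short, lossy hash is what provides the crowd to hide in – and, at the same time, what still reveals a user’s inferred interests to anyone who can read the cohort ID.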
What is more, FLoC perpetuates the current power relationships and users’ lack of control over personal data. Google holds on to its central role in managing the relationship between consumers and everyone else. It even strengthens its position, because third parties are shut out. What matters is no longer control over the data itself: exclusive control over the algorithm that has been trained on local data will decide who is the big winner in online advertising.
Imagining alternative (ad tech) worlds
Some personalisation, some targeting is useful to help consumers navigate the web – we want newsfeeds, music, films and other online content to be relevant to us. It is great if these personalisation algorithms can learn in a decentralised way how to tailor content to someone’s interests. But what about ad tech? There are at least three alternative ways forward: a) limit behavioural targeting; b) develop collective mechanisms to exercise data rights more effectively; and c) use technologies that give real control to consumers. Let’s look at them in turn.
Limit behavioural targeting: Is it true that the ‘free’ online world will implode the moment personal data is taken out of advertising? Possibly not, judging by the huge margins that Google and Facebook are making on their ad tech products, although the ad sector would certainly have to shrink. A current initiative in the European Parliament to ban behavioural advertising shows that there is real appetite for change. A slightly softer approach would be to allow the use of behavioural data on an opt-in basis (the original idea of the GDPR, which has, however, never made its way into practice), giving consumers the option to decide which data they are willing to share – and perhaps with whom, e.g. if they are keen to hear about local businesses. At the very least, it seems sensible to give consumers an easy way to express their desire not to be tracked and targeted.
Develop collective mechanisms to exercise data rights more effectively: While privacy laws are certainly useful, they often do not translate into tools that allow consumers to make good choices. This is particularly obvious in ad tracking, which is full of dark patterns and overloads consumers with decisions. Adding players to the market who act in consumers’ favour would change the game: one concrete example is the “authorised agent” under the California Consumer Privacy Act. Consumer Reports has piloted such an agent and shown that while it is very useful for consumers, firms are still fairly reluctant to respond to it properly. That gives us a hint of how responsive they might be to individual consumers making demands.
Use technologies that give real control to consumers: While a ban on behavioural targeting and more tools for consumers tend to “go against” the current way ad tech works, a longer-term transformation should make confidential data sharing the default basis for any targeting (some of which may be desirable). More research is needed to make Secure Multi-Party Computation, Homomorphic Encryption and Zero-Knowledge Proofs applicable to a wider range of settings. The German data strategy, for example, sees investment in research into these technologies as a key area of action.
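As a flavour of what confidential data sharing can mean, here is a minimal sketch of additive secret sharing, one of the building blocks of Secure Multi-Party Computation: several parties learn an aggregate (say, total ad conversions) without anyone – including the aggregator – seeing an individual value. The setting and parameters are illustrative, not a production protocol.

```python
import random

# Additive secret sharing: each party splits its private value into
# random shares that sum to it modulo Q. Individually the shares look
# like noise; only the combination of all of them reveals the total.

Q = 2**61 - 1  # arithmetic modulo a large prime

def share(secret, n_parties):
    """Split a value into n random shares that sum to it modulo Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

inputs = [3, 7, 5]  # each party's private value, e.g. local ad conversions
n = len(inputs)

# Each party splits its input and distributes one share to every party.
all_shares = [share(v, n) for v in inputs]

# Each party sums the shares it received; these partial sums are random.
partial_sums = [sum(all_shares[p][i] for p in range(n)) % Q
                for i in range(n)]

# Only combining all partial sums reveals the aggregate.
print(sum(partial_sums) % Q)  # 15, with no single input ever disclosed
```

The design choice matters: the advertiser gets the statistic it arguably needs (an aggregate), while no central player accumulates individual-level behavioural data along the way.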
Let’s use FLoC as an intermediate step at most
Many players are complaining about FLoC because it will reduce their ability to target ads themselves. They are right to complain about the unlevel playing field, but the question is whether the better response to FLoC is to make more data available to more parties or to make less data available to Google. There are good privacy reasons to believe that the latter is preferable. Hence, FLoC might be acceptable if, and only if, it is clear that ad tech cannot stop there, but needs to give more tools to consumers and develop long-term options that eventually put an end to profiling, be it of individuals or cohorts.