
Personalized Information Duties - A Way to Tackle Information Overload?

Author(s)

Louisa Specht-Riemenschneider
Chair of Civil Law, Information Law and Data Law and Co-Director of the Institute for Commercial and Business Law at the University of Bonn
Gregor-Rafael Görgen
Research assistant for the Chair of Civil Law, Information Law and Data Law at the University of Bonn


This post is part of a special series including contributions to the OBLB Annual Conference 2022 on ‘Personalized Law—Law by Algorithm’, held in Oxford on 16 June 2022. This post comes from Louisa Specht-Riemenschneider, who participated on the panel on ‘Personalized Law’, and Gregor-Rafael Görgen.

 

I. Introduction

As a general rule, law is characterised by a certain degree of abstraction: once a rule applies to two individuals, it applies equally to both. However, it is worth considering the idea of personalized information duties. Why? Legislators still respond to almost every consumer law problem by creating new duties to provide information. This is true for both traditional consumer law and digital consumer law, and a similar tendency can be perceived in data protection law. The reason for this is the traditional information paradigm, ie the idea that providing detailed information to consumers is by itself sufficient to lead them to act to their advantage and to enable them to protect their interests through their decision-making. However, decades of consumer behaviour research have revealed that information duties do not necessarily lead to consumers actually being informed. Although an increase in the amount of available information initially contributes to an increase in subjective decision-making efficiency, once a—very individual—amount of information has been provided, information intake decreases or even falls to zero entirely.

A study in which Facebook users were asked whether they had given their consent to the data processing practised by Facebook, for example, came to the conclusion that only 37% of the users were of the opinion that they had given Facebook consent to collect and use their data. About 43% of the respondents said they were not aware of this, and another 20% thought they had never given such consent. A large proportion of data protection consent declarations are therefore made without the data subjects actually taking note of the information provided in accordance with data protection law. They consent without knowing which data processing operations they are consenting to, or that they are consenting at all. In the context of data protection law, as in the context of general terms and conditions, it can usually be observed that the data subject merely scrolls down the text of the privacy policy or the general terms and conditions and ticks the consent declaration without actually reading the relevant texts. Seventy-eight per cent of the Facebook users surveyed in the above-mentioned study, for example, stated that they did not read the privacy policy or only skimmed through it. This is also referred to as the ‘clicking-without-reading’ phenomenon. Reading the privacy policies of every website we visit over the course of a year would cost us about 76 working days of eight hours each. Combined with the low benefit of being aware of privacy notices, this lack of attention may even be rational: in view of excessively high transaction costs, ‘rational ignorance’ is to be expected.
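As a rough plausibility check, the 76-working-day figure can be reproduced with back-of-the-envelope arithmetic. The sketch below uses illustrative inputs (the number of policies per year and the reading time per policy are our own assumptions, not figures from the cited study):

```python
# Back-of-the-envelope check of the '76 working days' figure.
# The two inputs are illustrative assumptions, not data from the cited study.
policies_per_year = 1460   # assumption: roughly four privacy policies encountered per day
minutes_per_policy = 25    # assumption: average time needed to read one policy

total_hours = policies_per_year * minutes_per_policy / 60
working_days = total_hours / 8  # eight-hour working days, as in the text

print(f"{total_hours:.0f} hours, ie about {working_days:.0f} working days per year")
# -> 608 hours, ie about 76 working days per year
```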

Thus, it seems worth considering the idea that everyone should receive only the information that he or she actually needs to make an efficient decision. But knowing which information duty should apply to which people, and to what extent, requires analysing a huge amount of data. This leads to the topic of this contribution: the question of whether personalized information duties and European data protection law are compatible.

To analyse this question, we first present the approaches that have so far been used to counteract the inefficiencies arising from measures based on the traditional information paradigm and its flawed assumptions. In a second step, we discuss the data protection problems that arise from the personalization of information duties, from both a legal and a socio-political perspective. Finally, an alternative to such personalized information duties is presented.

The thesis developed here is that European data protection law does not prohibit the personalization of information duties, but that such personalization opens up the law to highly problematic business models, the development of which we should do our utmost to oppose. If personalized law is not to open Pandora’s box from a data protection perspective, we need safeguards: self-assessment solutions and technical support such as Personal Information Management Systems (PIMS), a form of data trust.

II. Approaches so far to counter the current information overload

There are quite a few initiatives that try to counteract inefficient decision-making by consumers in the field of data protection and privacy policies.

First, there is a proposal for ‘nutrition labelling’ for privacy, developed at Carnegie Mellon University in Pittsburgh, which presents data protection information in a condensed and visualized form, aiming to improve the comprehensibility of privacy policies. In field tests, however, this so-called ‘Privacy Nutrition Label’ did not achieve the desired effect.

The same is true for the so-called one-pager: a study by Conpolicy shows that the clicking-without-reading phenomenon hardly changes when privacy policies are instead presented in abbreviated form as a one-pager. Thus, it has been shown that the data subject fails to register information not only when it is of a certain length, but also when it is presented in condensed text form. There is a lack of awareness that the information could have an impact on the data subject.

This is where visualization solutions come in: empirical studies show that human cognitive abilities respond significantly better to images than to texts, which is referred to as the ‘image superiority effect’ and is explained in particular by the fact that images are perceived holistically, whereas texts are perceived sequentially. Images are recognised well before any text can be captured. To perceive an image in a form in which it can be recognised later, the human brain needs on average about one to two seconds for an image of medium complexity, while only about five to ten words of simple text can be taken in during the same viewing time. Although visualization solutions to convey information were originally included in the draft of the General Data Protection Regulation (GDPR), they did not make it into the final text of the regulation. Today, there are a few projects, eg at the Weizenbaum Institute, to push visual information transfer further, but they have not yet been completed.

As has been shown, despite there being some approaches to counteract inefficient decision-making by consumers, none of them has yet succeeded. So why not try personalized information duties?

III. Personalized information duties based on personality profiles: possible under data protection law but questionable from a socio-political perspective

Personalized information duties intend to provide only the information that is important for the individual data subject concerned. But how can we know which information the data subject concerned does and does not require for making an actually informed decision that benefits him or her? How do we know whether someone is interested in being able to buy particularly fairly produced products in online stores, or in the products being particularly sustainable? Apart from knowing this from a self-assessment by the user (which is not problematic from a data protection perspective and is, therefore, not discussed here), knowing this would require collecting and analysing a great amount of data about each data subject in advance—in other words, it would require the creation and use of personality profiles, which is highly problematic under data protection law.

Data protection law (1) requires that every processing of personal data is subject to a legal basis that permits it, and it also (2) provides some essential principles, three of which are especially relevant in this context as they could stand in the way of personality profiling for the purpose of personalized information duties: data minimization (Art. 5(1)(c) GDPR), storage limitation (Art. 5(1)(e) GDPR) and accuracy (Art. 5(1)(d) GDPR). In addition, (3) Art. 22 GDPR contains requirements for profiling and scoring that must also be met. These essential problems for personalized information duties are discussed in the following.

First of all, as the GDPR applies to the processing of personal data (Art. 2(1) GDPR), the data required for the personalization of information duties must qualify as personal data within the meaning of the GDPR for data protection law to apply at all. The scope of the term ‘personal data’ is very broad: it covers all information relating to an identified or identifiable natural person. A natural person is identifiable in this sense if it is possible by ‘all the means reasonably likely to be used (…) to identify the natural person directly or indirectly’ (Recital 26 GDPR). For personalized information duties to make sense, the data processed for this purpose must be attributable to the person to whom the information is to be provided in a personalized manner; they are therefore by definition personal data. Since the relevant data are personal data, the aforementioned requirements must be complied with.

(1) Processing of personal data under European data protection law is forbidden unless data protection law allows for such processing, ie there must be a legal basis. One such legal basis could be a legal obligation for data processors to collect the data required for personalizing information duties. This is made possible under Art. 6(1)(c) GDPR, but would require the legislator to create such an obligation. Another possible legal basis would be the categorisation of personalizing information duties as a task in the public interest (Art. 6(1)(e) GDPR). This would also require the legislator to become active and create a legal basis from which the categorisation as being in the public interest can be derived (Recital 45 GDPR). However, as long as the legislator does neither, the processing of personal data for the purpose of personalized information duties would only be possible with the consent of the data subject as per Art. 6(1)(a) GDPR, or on the basis of an interest in the data processing that is at least equivalent to the interests of the data subject pursuant to Art. 6(1)(f) GDPR (the latter is very unlikely when it comes to personality profiles). Before giving consent, the data subject must be informed that his or her data will be used for personality profiling for the purpose of personalized information duties, and he or she must have the opportunity to withdraw consent once given. As a result of a withdrawal, the personal data may no longer be processed and must be deleted unless the processing can be based on another legal basis. For special categories of personal data within the meaning of Art. 9(1) GDPR, eg health data, the GDPR provides for even stricter processing requirements. In this case, personality profiling for the purpose of personalizing information duties would only be possible if the data subject gives explicit consent (Art. 9(2)(a) GDPR). Without consent, personality profiling is only possible with personal data which are manifestly made public by the data subject (Art. 9(2)(e) GDPR).
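The order of examination in this legal-basis analysis can be summarized schematically. The sketch below merely encodes the structure of the argument above; it is not legal advice, and all type and field names are our own hypothetical labels:

```python
# Schematic encoding of the legal-basis analysis for personality profiling
# described above (Art. 6 and 9 GDPR). Illustration only, not legal advice.
from dataclasses import dataclass

@dataclass
class ProcessingContext:
    special_categories: bool      # eg health data, Art. 9(1) GDPR
    explicit_consent: bool        # stricter consent standard for Art. 9 data
    manifestly_made_public: bool  # Art. 9(2)(e) GDPR
    legal_obligation: bool        # legislator created a duty, Art. 6(1)(c) GDPR
    public_interest_basis: bool   # legislator designated a public-interest task, Art. 6(1)(e) GDPR
    consent: bool                 # informed, withdrawable consent, Art. 6(1)(a) GDPR

def legal_basis_for_profiling(ctx: ProcessingContext) -> str | None:
    if ctx.special_categories:
        # Stricter regime for special categories of personal data
        if ctx.explicit_consent:
            return "Art. 9(2)(a) GDPR (explicit consent)"
        if ctx.manifestly_made_public:
            return "Art. 9(2)(e) GDPR (data manifestly made public)"
        return None  # processing forbidden
    if ctx.legal_obligation:
        return "Art. 6(1)(c) GDPR (legal obligation)"
    if ctx.public_interest_basis:
        return "Art. 6(1)(e) GDPR (task in the public interest)"
    if ctx.consent:
        return "Art. 6(1)(a) GDPR (consent)"
    # Art. 6(1)(f) balancing is omitted: as argued above, a prevailing
    # legitimate interest is very unlikely for personality profiles.
    return None  # no legal basis -> processing forbidden
```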

In a nutshell, personalized information duties comply with the GDPR if they are based on consent.

(2) However, even if the data subject consents to a data processing operation for purposes of personality profiling to enable personalized information duties, the principles of data minimization and storage limitation must be met. According to the principle of data minimization, the personal data processed must be ‘adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed’ (Art. 5(1)(c) GDPR). This requires an assessment of whether personality profiling to this extent is proportionate considering the benefits of personalized information duties. The principle of storage limitation allows data processing only for the period necessary to achieve the purpose of the data processing (Art. 5(1)(e) GDPR). But how long is data storage for personality profiling purposes necessary to enable personalized information duties? Basically, as long as the data subject wants to be provided with personalized information. Thus, personalized information duties comply with these two data protection principles.

According to Art. 5(1)(d) GDPR, personal data must also be ‘accurate and, where necessary, kept up to date’. Keeping data up to date is necessary in those instances in which the currency of the data is relevant for achieving the purpose of the processing in question. Personalized information duties, as mentioned above, aim to provide each data subject with the information he or she needs to make an efficient decision from his or her point of view, thus preventing efficiency-decreasing information overload. The information a data subject needs for decision-making along these lines may well change over time, for example due to advancing age, changing knowledge, increasing or decreasing professionalism in the technical environment, and changing preferences. Given this complexity of the necessary personality profiles, data processors might not always be able to keep pace in ensuring that the personality profile is accurate and up to date. As a consequence, there is a danger that personalized information duties will not always be based on accurate and up-to-date personality profiles and will therefore lead to suboptimal information provision, which might decrease the data subject’s decision-making efficiency. Ensuring the currency of the relevant data in all instances, and thereby avoiding suboptimal information provision, would require excessively high efforts. However, if these efforts are made, it is at least theoretically possible for personalized information duties to comply with this data protection principle too.

(3) Furthermore, the data subject—under certain circumstances—has the right ‘not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her’ (Art. 22(1) GDPR). Such an effect also occurs if, on the basis of the algorithmic data evaluation, the decision is made that only certain information is to be provided to the data subject. However, according to Art. 22(2)(c) GDPR in conjunction with Art. 22(3), (4) GDPR, this right does not apply if the data subject has consented to the data processing and suitable measures have been taken to safeguard the rights and freedoms of the data subject. Such ‘suitable measures’ are, for example, technical and organisational measures that prevent discrimination and provide for the possibility of correcting inaccurate personal data. The data subject must therefore have the possibility to object to a categorization on the basis of which he or she is given a certain kind of information. This presupposes that the basis on which he or she is categorized is always made available to him or her.

To sum up, personalized information duties on the basis of personality profiling are compatible with European data protection law on the basis of informed consent, but the data subject must be granted the right to object to the categorization made for the purpose of receiving only certain information, or to have that categorization corrected, and this in turn requires that he or she be given access to the basis of this categorization. For this purpose, the data subject may exercise his or her right of access under Art. 15 GDPR.

However, personalized information duties based on comprehensive personality profiles pave the way for socio-politically highly risky business models: by evaluating personal data, the personality of data subjects can be decoded with particular precision. As early as 2012, it was proven that it is possible to predict, from an average of 68 Facebook ‘likes’ by a user, which skin colour the data subject has, whether he or she is homosexual (88% accuracy), and whether the data subject is a Democrat or a Republican (85% accuracy). Intelligence, religious affiliation as well as alcohol, cigarette and drug consumption can also be determined. On the basis of ten Facebook ‘likes’, a person can be assessed better than an average work colleague could assess them. Seventy ‘likes’ are enough to make computer-based judgements about a person that are more accurate than those of a friend, and with 150 ‘likes’ it is possible to surpass even the knowledge of the person’s parents. With 300 ‘likes’, a person’s behaviour can be predicted more accurately than their partner could predict it. And with more ‘likes’, it is even possible to surpass what people think they know about themselves. Furthermore, in 2013, Mayer-Schönberger and Cukier described how personal data can be used to estimate an individual’s likelihood of committing crimes. Personality profiles already exist, but it makes a difference whether we try to curb their use, as the Digital Services Act and the Digital Markets Act currently do, or whether we expand their use even further. The more comprehensively personality profiles are used, the more likely misuse becomes.

So even if personality profiling for the purposes of personalized information duties is possible under data protection law, we need safeguards, one of which could be a prohibition on using the personality profiles for other purposes, even if the user consents to such further data processing. Another supportive instrument could be Personal Information Management Systems.

IV. Personal Information Management Systems as safeguards

Personal Information Management Systems (PIMS) are technical tools that aim to help users control the processing of their personal data. They can, however, greatly exceed this function and provide a service to users in the form of improved information transfer and advisory services (also outside the field of data protection law), and in enforcing data subjects’ rights under data protection law. Moreover, they are suitable for removing dark patterns. In general, PIMS could be of great value for personalizing the information that is given to the user. The user could set default settings telling the PIMS which information he or she would like to have displayed out of the multitude of information available; the PIMS would then have to comply with this and could prepare the information in a comprehensible and simplified way, for example through visualisation. The advantage of PIMS is that only the PIMS would hold the information about the user’s preferences, and PIMS are subject to specific security requirements set out in the Data Governance Act (DGA). The risk of data misuse would therefore decrease not only quantitatively but also qualitatively. However, this would require regulation of PIMS which, in addition to the aforementioned security requirements, also sets incentives for the emergence of PIMS and, above all, ensures legal certainty for their activities in terms of data protection law. Instead, the current regulation sets out extensive requirements that PIMS would have to fulfil without at the same time creating incentives for the emergence of such services, and thereby decreases the potential competitiveness of PIMS. The current regulation of PIMS via the DGA and the GDPR, as well as in Germany via § 26 of the Act on the Regulation of Data Protection and Privacy in Telecommunications and Telemedia, fundamentally complicates the activities of PIMS.
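To illustrate the preference-based filtering described above, here is a minimal sketch of how such a mechanism might work. All names and the topic taxonomy are hypothetical assumptions of ours; a real PIMS would, among other things, also have to satisfy the DGA’s security requirements:

```python
# Minimal sketch of a PIMS filtering information according to user-set
# preferences, as described above. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class InformationItem:
    topic: str  # eg 'sustainability', 'data-sharing', 'marketing'
    text: str

@dataclass
class PIMSProfile:
    # Default settings chosen by the user; stored only inside the PIMS
    relevant_topics: set[str] = field(default_factory=set)

def personalize(items: list[InformationItem], profile: PIMSProfile) -> list[InformationItem]:
    """Return only the items the user has declared relevant."""
    return [item for item in items if item.topic in profile.relevant_topics]

# Usage: a user who cares about sustainability and data sharing
profile = PIMSProfile(relevant_topics={"sustainability", "data-sharing"})
notice = [
    InformationItem("sustainability", "Products are certified as fair trade."),
    InformationItem("data-sharing", "Your data is shared with payment providers."),
    InformationItem("marketing", "We may send you a newsletter."),
]
for item in personalize(notice, profile):
    print(item.text)  # prints only the two items matching the profile
```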

V. Conclusion

Concluding with a brief summary, it has been shown that personalized information duties are possible under European data protection law, either based on a self-assessment by the user or based on personality profiles. However, the latter is quite risky from a socio-political perspective and therefore requires safeguards. Such a safeguard could be a prohibition on using the personality profiles for purposes other than personalizing information, even if the user consents. Moreover, Personal Information Management Systems (PIMS) could be proper safeguards, as they technically restrict the use of the personality profiles.

 

Louisa Specht-Riemenschneider holds the Chair for Civil Law, Information Law and Data Law and is Co-Director of the Institute for Commercial and Business Law at the University of Bonn.

Gregor-Rafael Görgen holds an LL.B. in Law and Economics and studies Law at the University of Bonn and the London School of Economics and Political Science.

