
Reconciling Social Media Business Models with Human Rights Safeguards


Richard Mackenzie-Gray Scott
Postdoctoral Fellow at the Bonavero Institute of Human Rights


Social media platforms form a prominent part of the business and human rights agenda, notably because of their capacity to affect the enjoyment and exercise of many human rights at scale. The business models underpinning these platforms have been shown to drive conduct that contributes to negative effects on human beings. An example is the spread of misinformation, where algorithmic curation of newsfeeds determines whether information remains unobserved, sits stagnant, or goes viral. Misinformation can attain outsized reach even if it is initially shared by only a handful of users, while accurate information capable of countering it may never reach many of those same users. It has been pointed out that ‘all sorts of social and cultural settings may limit an individual’s exposure to information. But by optimizing for engagement and virality at scale, AI-assisted personalization may undermine an individual’s choice to find certain kinds of content.’

A distinctive feature of social media platforms that sets them apart from other sources of information is the bespoke tailoring of content to specific individuals. This practice is designed to increase user engagement, which in turn means users divulge more personal data, facilitating further tailoring of their experience, leading to more time spent on the platform, and so the (not-so) merry-go-round keeps turning. These cycles generate revenue for platforms through the exploitation of the associated data. This business model is prominent in what has become the online attention economy. But it can lead to divisive content being shared widely when that content elicits emotional responses from users. This is one reason why misinformation that is evocative or provocative for large numbers of users is likely to spread widely, often without users knowing they are interacting with false information.

However, new research helps show how human rights safeguards can be built into this business model through digital design. An example is the use of digital nudges. Although these measures require more public scrutiny before being implemented further, they already form part of the online information environment and manifest in various forms, including variants that can be used on social media platforms to reduce the spread of misinformation. While such nudges can be designed in many different ways, their use can flip the problem of engagement-driven business models on its head: rather than reducing user engagement, nudging in part relies on it. For instance, an alternative source digital nudge, which presents different sources of information to users when they interact with content containing misinformation, can be informed by the data mining practices of social media platforms, which provide indicators of the sources of information each user trusts. Algorithms can therefore be designed to detect sources of misinformation and, upon recognising such sources, present alternative sources that the specific user is likely to trust, factoring in their expected receptiveness towards them. If digital design can incorporate human rights considerations in this way, online social network dynamics can become an antidote to the market-driven incentives influencing social media companies. Herein lies the potential to reduce the net negative effects of misinformation through the very user engagement that the owning companies seek to maximize.
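
To make this mechanism more concrete, the following is a minimal sketch of how an alternative source nudge might work in principle. It is illustrative only: the names, data structures, and trust signals (such as LOW_CREDIBILITY_DOMAINS, USER_TRUSTED_SOURCES, and is_low_credibility) are hypothetical assumptions, not any platform’s actual systems, which are proprietary and considerably more complex.

```python
# Hypothetical sketch of an "alternative source" digital nudge.
# Every name, list, and threshold below is an illustrative assumption;
# real platform ranking and trust inference are far more sophisticated.

# Stand-in for a classifier or curated list identifying misinformation sources.
LOW_CREDIBILITY_DOMAINS = {"misinfo-daily.example", "viralclaims.example"}

# Stand-in for per-user trust signals inferred from past engagement data.
USER_TRUSTED_SOURCES = {
    "user_42": ["bbc.co.uk", "reuters.com", "nature.com"],
}


def is_low_credibility(domain: str) -> bool:
    """Return True if the content's source is flagged as low credibility."""
    return domain in LOW_CREDIBILITY_DOMAINS


def alternative_source_nudge(user_id: str, post_domain: str, topic: str):
    """When a user interacts with content from a flagged source, suggest
    coverage of the same topic from outlets this user is likely to trust."""
    if not is_low_credibility(post_domain):
        return None  # no nudge needed for credible sources

    trusted = USER_TRUSTED_SOURCES.get(user_id, [])
    # In a real system, a retrieval step would fetch actual articles on
    # `topic` from each trusted outlet; here we only format suggestions.
    suggestions = [f"Coverage of '{topic}' from {outlet}" for outlet in trusted[:3]]
    return suggestions or [f"Independent fact-checks of '{topic}'"]


if __name__ == "__main__":
    print(alternative_source_nudge("user_42", "misinfo-daily.example", "vaccine safety"))
```

The design point worth noting is that the nudge draws on the same engagement data the business model already collects: the safeguard works with, rather than against, the platform’s incentives.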

The significance from a human rights standpoint is that design matters. If crafted with care, digital nudges can be content-agnostic and promote freedom of thought. Should these measures be geared towards prompting the rational decision-making processes of users, allowing people to think freely and consciously consider other sources of information before choosing what to read and share online, the human right to freedom of thought would be respected. This is crucial in a world where the spread of information via online platforms is a double-edged sword: enabling individual participation in collective action is offset by democratic decay; satisfying the need for connection fosters attention-seeking; producing more knowledge brings both accuracy and inaccuracy.

Online platforms carrying, hosting, and transmitting information showcase the best and worst of what humans have to offer. Because of this mix, content moderation is necessary. It also presents a variety of trade-offs that are tricky to navigate; freedom of expression versus safety and security is one of many. While certain thoughts belong solely in the mind (if anywhere), others, when expressed, are a boon to individual self-fulfilment and community nourishment, including because they contribute to freedom of thought by providing different perspectives to consider.

In grappling with related issues, including those concerning when online content poses risks to the safety and security of a person, group, or society, moderation efforts attempting to mitigate these risks need not overstep in their relationship with human rights. Within the current business models underpinning social media platforms, incentives exist to implement initiatives that can both reduce the spread of misinformation and protect freedom of thought. Arguments can certainly be advanced for changing these business models, though executing such change assumes the lobbying efforts of the owning companies can be overcome. Better digital design offers promise in safeguarding human rights without undertaking endeavours that may meet heavy resistance if they are anticipated to compromise corporate bottom lines.

The above reasoning is not to imply that states should refrain from stepping up their game in the oversight and regulation of social media platforms, including by exercising due diligence to guard against related chilling effects on freedom of expression, information, and thought. A balance can be found between public regulatory mechanisms and private self-regulatory efforts striving for harmony with individual rights and community interests. Further focus needs to fall on the procedural aspects of online content moderation, which legally mandated reporting from platforms would help inform, particularly with respect to the technological architecture that determines what information receives whose attention and why.

Also noteworthy when reflecting on online information circulation and management are questions surrounding business models beyond those underpinning social media platforms, particularly those concerning knowledge production and dissemination. Much quality information online lies behind a paywall, creating a bifurcation of knowledge that depends on personal finances and access to resources. There is also an element of elitism at play, where actors benefiting from informational asymmetry criticise other actors for contributing to it, without apparently realising how they too contribute to its occurrence. If accurate information is to compete with misinformation in online marketplaces of attention, then ideas with veracity cannot remain inaccessible to the large majority of people.

Richard Mackenzie-Gray Scott is Postdoctoral Fellow at the Bonavero Institute of Human Rights and St Antony’s College, University of Oxford.

