How Can States Effectively Regulate Social Media Platforms?
The advent of social media has transformed twenty-first-century communication. This post examines the shortcomings of platforms such as Facebook, Twitter, YouTube, TikTok, Snapchat, and Instagram; it assesses the need for regulatory intervention and proposes an international approach.
Facebook, YouTube and Instagram are the leading social networks, with roughly 2.7 billion, 2 billion, and 1 billion users respectively. Social media has become a source of income for many people and has given a voice to those who would normally be ignored by traditional media. It has also fostered greater civic engagement in political issues (see Alex Rochefort).
Alongside these achievements, social media has brought perils that threaten the fabric of our society. Platforms have been used to spread misinformation or ‘fake news’ and hate speech, while user-generated data is harvested to target political and commercial advertising at personal profiles. A systemic feature of digital platforms is that social media companies monetise our attention. They encourage content creators to produce edgy material that attracts attention and prompts users to ‘click through’ to a story, thereby generating advertising revenue. Fake news stories are often written with an emotional appeal to maximise the production and distribution of such content (see Samantha Bradshaw). Moreover, sophisticated algorithms ensure that users see more of what they already like.
Digital advertisements that target specific segments of the population through demographic factors such as age, gender, or location—commonly referred to as ‘dark ads’—are currently legal. For example, the UK’s Communications Act 2003 regulates political speech on traditional broadcast media, but it does not extend to political advertisements distributed digitally.
The question of social media regulation is inevitable, but what is the best way forward?
The Need for Focusing on the Root Cause, Not the Symptom
Governments around the globe have taken some form of regulatory action. The USA, Germany, Brazil, Bahrain, Hong Kong, India, Switzerland and other jurisdictions have advanced some sort of oversight. At least seventeen countries across Europe, the Middle East, South America, Asia and other regions have passed or proposed laws limiting online content to combat fake news and other problematic digital material (see Freedom House Report). One of the most significant regulatory developments has occurred in Germany in the form of the Network Enforcement Act, or NetzDG, a law that imposes content liability on certain categories of social media services.
Although individual states have responded, their measures do not address the root cause: the ability of social media companies to build critical features into their platforms at the development stage without any regulatory oversight. When individual states respond, they ignore this root cause and instead focus on the symptoms. The German response, for example, is a form of limited regulation in which the onus is on social media organisations to report to the German authorities on how they are discharging their responsibilities. Tackling fake news and hate speech in this way is problematic. Any ‘restriction of comprehensive nature on means of communication raises concerns about limitations on the freedom of citizens or suspected abrogation of communicative power by the state’ (see Jan Van Cuilenburg & Denis McQuail). This is particularly worrisome for marginalised, peripheral voices in despotic regimes. This article does not endorse regulatory intervention that restricts users’ ability to post harmful content. Misinformation and hate speech are, after all, not new phenomena, and most countries—including Germany and the UK—already have hate speech and libel laws under which actions against harmful content can be brought.
Instead, this article proposes an internationally harmonised, comprehensive regulatory approach that targets the root cause of social media’s problems. The more appropriate model is structuralist regulation, wherein ‘the goal is prophylactic: to limit the very structure and business models of these firms and to alter the dynamics of the markets in which they operate, thereby reducing at the source the incentives and conflicts of interest that magnify risks of exploitation, extraction, and fraud’ (see K Sabeel Rahman).
Addressing the Root Cause through Regulations
In the case of social media organisations, structural regulation should focus on systemic changes addressing the mechanics of advertising monetisation, profiling, algorithms, verification and bots.
Currently, platforms like Facebook allow advertisers to target their audience by age, gender, interests, and so on. By enabling such features, platforms shape users’ overall outlook and allow misinformation to be precisely targeted. These features should be curtailed through comprehensive regulation that either prohibits them outright or requires companies to give users the choice of opting out of targeted advertising. In addition, regulation needs to target the algorithms used to steer users’ interests. Facebook, Twitter and YouTube build algorithms that identify the kinds of content users engage with and then promote and highlight that content. Similarly, bots (generally suited to mundane work) have been employed to artificially drive up user engagement by liking, sharing, or retweeting content. Automating these interactions can generate a false sense of popularity or consensus—not only around traditional consumer products but also around political ideologies or individual beliefs. These are active engineering decisions that need to be stopped at the development stage.
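The amplification dynamic described above can be illustrated with a deliberately simplified sketch. The scoring rule and item names here are hypothetical, not any platform’s actual algorithm: content is ranked purely by engagement count, so a modest number of automated accounts can push an item to the top of every feed.

```python
from collections import Counter

def rank_feed(engagements: Counter) -> list[str]:
    """Rank items purely by engagement count (a toy stand-in for
    real recommendation algorithms, which are far more complex)."""
    return [item for item, _ in engagements.most_common()]

# Organic engagement: the genuine news story is more popular.
engagements = Counter({"news_story": 120, "fake_story": 40})
print(rank_feed(engagements))  # ['news_story', 'fake_story']

# A botnet of 100 accounts each "likes" the fake story once,
# manufacturing a false sense of popularity.
for _ in range(100):
    engagements["fake_story"] += 1
print(rank_feed(engagements))  # ['fake_story', 'news_story']
```

The point of the sketch is that the manipulation happens entirely within the engagement-counting mechanic itself, which is why the article argues such design decisions must be addressed at the development stage rather than post hoc.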
Structuralist regulation aimed at systemic reform does not risk undermining freedom of speech. Such reforms are crucial from a public utility perspective: social media platforms have become indispensable infrastructure for the modern economy (see K Sabeel Rahman). This avenue of reform is appealing for its conception of public utilities as not merely economic entities but moral and social actors.
Given the global nature of the internet and of social media platforms, a harmonised international regulatory response, enforceable in domestic courts, is the appropriate way forward. Social media platforms wield significant economic power and have a transnational user base; they play a major role in politics and shape culture through the spread of information and ideas. The dangers they pose should therefore be met through an international mechanism. International responses have recently been developing in comparable areas, such as AI-based autonomous weapons, where control is likewise required at the development stage. An international response would prevent unbalanced, unilateral national measures. It is also well placed to guarantee fundamental rights alongside legitimate restrictions: it could ensure universal access to information and reduce the problems associated with platform manipulation.
Israr Khan is a DPhil candidate at the Centre for Socio-Legal Studies, Faculty of Law, University of Oxford.