The Perils and Promises of Virtual Influencers: Where EU Consumer Law Falls Short
Social media influencers play a significant role in today’s online advertising campaigns. They are independent third-party endorsers who leverage their social media pages to provide targeted recommendations about the goods or services of other enterprises. A Belgian study has indicated that influencer marketing has a significant impact on the transactional decision-making of social media users, especially among young people. Recently, some companies, including Morphe, have been turning away from human influencers due to their high costs and unpredictability: companies often lack control over the messages influencers convey, which sometimes provokes controversy.
Amid this shift, a new type of influencer has emerged: virtual influencers—online personas brought to life through computer-generated imagery (CGI)—that employ anthropomorphism to come across as eerily human-like. They are sometimes referred to as AI influencers, even though they do not necessarily use AI-generated imagery. Virtual influencers are claimed to generate three times more engagement with brands than their human counterparts. The most prominent virtual influencer is Lil Miquela, a musician and arts student who has gained about 2.5 million followers and makes considerable profits for her creators by ‘modelling’ clothing for brands such as Prada and Calvin Klein. In 2018, TIME even named her one of the 25 most influential ‘people’ on the internet. She joins a growing army of virtual influencers, including the Swedish Esther Olofsson and the Belgian Bobbi Lee.
While virtual influencers might be an attractive marketing tool, their lack of authenticity poses a significant danger to consumers. Their product endorsements are by definition disingenuous and fabricated: virtual influencers cannot exercise independent judgement as humans can, and they lack the physical senses necessary to try on clothing, feel the texture of makeup on their skin, or perceive a fragrance. Moreover, a virtual influencer showcasing clothing may not accurately depict how the garments would appear on a real person. In the same vein, Lil Miquela cannot taste the Haribo candy she promotes. Instead, brands dictate the endorsements that virtual influencers convey, while the message is portrayed as based on the influencer’s own experience. Given the high level of anthropomorphism implemented in virtual influencers, one cannot reasonably argue that consumers hold them to a different standard of expectations than human influencers: 42 percent of Gen Z and millennials are unable to distinguish real from unreal online personalities in the absence of proper disclosure.
The poor or absent disclosure of the virtual nature of virtual influencers, and of the brands behind them, enables the deception of consumers. Legal scholars in the United States have warned the Federal Trade Commission (FTC) on numerous occasions in this regard, which prompted the FTC to adapt its Endorsement Guides to the practices of virtual influencers. In the European Union, consumer law rules on unfair commercial practices protect consumers against misleading and aggressive market practices originating from ‘traders’. These rules have been harmonised through the transposition of the Unfair Commercial Practices Directive (UCPD). In the absence of a definitive regulatory response from the European Commission on virtual influencers, we argue that the harmonised standards of the UCPD already tackle most of the dangers they pose.
The UCPD is essentially only applicable to ‘traders’. The term ‘trader’ refers to any natural or legal person who, in commercial practices covered by the Directive, is acting for purposes relating to their trade, business, craft or profession, as well as anyone acting in the name of or on behalf of a trader. While human influencers may be considered traders in some cases, their virtual counterparts cannot be so characterised due to their lack of legal personhood. Consequently, the responsibility shifts to the individuals or entities behind these virtual influencers, specifically the provider (who develops or markets the AI system or puts it into service) and the deployer (who uses it). To qualify as traders under the UCPD, the provider and deployer must use the virtual influencer for purposes directly tied to the promotion, sale or supply of a product to consumers. In our article (and this blogpost), we focus exclusively on the deployer of the virtual influencer, as the provider does not serve as a satisfactory anchor point for applying the UCPD. However, this doctrinal distinction between the two roles does not preclude one person from acting as both provider and deployer simultaneously.
The UCPD prohibits unfair commercial practices, particularly those deemed misleading or aggressive. Annex I to the Directive contains a list of misleading and aggressive commercial practices which are in all circumstances considered unfair, also known as the ‘blacklist’. Practices not on the list can still be prohibited if they are misleading, aggressive, or violate the requirements of professional diligence and materially distort the economic behaviour of the average consumer. A practice materially distorts economic behaviour if it significantly influences consumers to make transactional decisions they would not have taken otherwise. We are convinced that the average consumer has developed a legitimate and reasonable expectation that influencers deliver an honest, personal, experience-based and trustworthy endorsement of a product. To that extent, we argue that when the robot nature of a highly anthropomorphised virtual influencer is not disclosed, the consumer’s economic behaviour is likely to be distorted, as they expect an authentic recommendation, resulting in a forbidden commercial practice. For that reason, the UCPD implicitly mandates the use of #IAmARobot for highly human-like virtual influencers. Additionally, because virtual influencers are entirely controlled by commercial entities, deployers should disclose this brand control using the hashtag #FromTrader[X]. The brand’s full control over the endorsement can be deemed material information that the average consumer needs to make an informed transactional decision, and not disclosing it could conceivably constitute a misleading omission within the scope of the UCPD. Moreover, the disclosure of #FromTrader[X] could strengthen regulatory enforcement against virtual influencers, especially when the entity behind the virtual influencer is not easily identifiable.
Entities must comply with the UCPD once they use a virtual influencer to promote their own products or those of a third-party trader. In both situations, at least one liable party is theoretically known to the consumer. In the latter case, however, while the third-party trader is identifiable, the deployer of the virtual influencer may remain undisclosed. In practice, authorities may thus encounter significant obstacles in effectively monitoring virtual influencers. These challenges are exacerbated by the fact that a non-compliant virtual influencer can easily be deleted, allowing deployers to quickly create a replacement. Another significant concern is virtual influencers’ immunity from reputational damage compared to human influencers. In many countries, the penalties for non-disclosure in influencer marketing are limited to negative public exposure (‘naming and shaming’) without significant sanctions, leaving the legal responsibility of those behind virtual influencers uncertain. This could render the legal framework a paper tiger. We argue that the systemic risk assessment required of major social media platforms under the Digital Services Act offers a suitable tool for addressing this issue, complemented by a more concrete framework that includes specific disclosure requirements and provisions for deferred profile removal. Moreover, platforms should be obliged to retain profile data for a reasonable period after removal, enabling authorities to access and analyse this data if necessary.
Finally, it is pertinent to consider the future advancements that AI may bring, particularly the emergence of fully autonomous virtual influencers that operate without human intervention or brand control. Such influencers could use machine learning to analyse vast datasets of consumer experiences, drawing insights from online reviews, social media posts and engagement rates. They could offer more authentic and personalised endorsements aligned with user interests, but the use of AI also risks misinterpreting feedback, leading to unintended outcomes. In addition, other unfair market practices, such as excessive personalisation, inappropriate targeting of minors, data manipulation and the exploitation of dark patterns, may emerge. Furthermore, AI-generated endorsements could produce fake positive reviews, undermining consumer trust, and imperfect AI systems may engage in offensive or coercive communication, creating risks of aggressive marketing practices. The UCPD nevertheless contains sufficient hard-law provisions to combat the misleading use of data and the potentially aggressive communication caused by such systems.
The authors’ full paper, ‘Virtual Reality, Real Responsibility: The Regulatory Landscape for Virtual Influencers,’ is available here.
Floris Mertens is a PhD researcher at Ghent University, Belgium.
Julie Goetghebuer is a postdoctoral researcher at Ghent University, Belgium.