Disinformation Echo-Chambers on Facebook: Abstract & Intro


This paper is available on arXiv under the CC BY-SA 4.0 DEED license.


(1) Mathias-Felipe de-Lima-Santos, Faculty of Humanities, University of Amsterdam, Institute of Science and Technology, Federal University of São Paulo;

(2) Wilson Ceron, Institute of Science and Technology, Federal University of São Paulo.


Abstract

The landscape of information has experienced significant transformations with the rapid expansion of the internet and the emergence of online social networks. Initially, there was optimism that these platforms would encourage a culture of active participation and diverse communication. However, recent events have brought to light the negative effects of social media platforms, leading to the creation of echo chambers, where users are exposed only to content that aligns with their existing beliefs. Furthermore, malicious individuals exploit these platforms to deceive people and undermine democratic processes. To gain a deeper understanding of these phenomena, this chapter introduces a computational method designed to identify coordinated inauthentic behavior within Facebook groups. The method focuses on analyzing posts, URLs, and images, revealing that certain Facebook groups engage in orchestrated campaigns. These groups simultaneously share identical content, which may expose users to repeated encounters with false or misleading narratives, effectively forming "disinformation echo chambers." This chapter concludes by discussing the theoretical and empirical implications of these findings.

Keywords: echo chambers; disinformation; fake news; coordinated campaigns; coordinated inauthentic behavior; Facebook; vaccines; COVID-19; anti-vaxxer; computational method; Brazil

1. Introduction

The information landscape has undergone significant transformations with the widespread adoption of the internet and online social networks, with both positive and negative consequences. On the positive side, information can now spread quickly and reach a vast audience, and social media platforms have fostered a culture of participation by motivating people to create and share content actively. However, there are also drawbacks. Social media platforms employ algorithms that restrict the diversity of content users are exposed to, reinforcing pre-existing beliefs in what are commonly referred to as “echo chambers” [1]. These occur when individuals are exposed only to opinions that align with their own viewpoints, a dynamic reinforced by “confirmation bias” [2]. Furthermore, like-minded individuals often form homogeneous clusters in which they reinforce and polarize their opinions through content diffusion [3]. The functionalities and features of social media platforms, including ranking algorithms, selective exposure, and confirmation bias, have played a significant role in the development of online echo chambers [4,5].

Online social media platforms have also provided a platform for individuals with malicious intent to disseminate false or misleading narratives, with the intention of deceiving the public and undermining democratic processes [6]. The concern over false and misleading information is not a novel one. However, after the 2016 US Presidential elections [7] and the Brexit referendum in the UK [8], the propagation of extremism, hate speech, violence, and false news on these platforms has become significantly more pronounced, highlighting their societal impact [9]. Such content often falls into the epistemological rabbit hole of “fake news” [10]. In this chapter, we use the term "disinformation" to encompass the concept of information disorder, encapsulating the so-called fake news, which includes false or misleading information created and disseminated for economic gain or intentionally deceiving the public [11].

The issue of disinformation becomes even more concerning in the context of health crises. Previous public health emergencies, such as the 2014 Ebola outbreak [12–14], showcased the widespread dissemination of inaccurate information on social media. Similarly, during the H1N1 epidemic in 2009, an array of erroneous or deceptive content propagated, ranging from conspiracy theories to unfounded rumors intended to cause harm [15,16].

Over the past decades, anti-vaccination movements have gained momentum globally [17], coinciding with a resurgence of previously controlled infectious diseases [18]. Debates on vaccines have been fueled by misleading, incorrect, and taken-out-of-context information, contributing to a perception that vaccinations are unsafe and unnecessary [18,19]. This influx of disinformation is jeopardizing the progress made against vaccine-preventable diseases, as it fuels vaccine hesitancy [20].

The COVID-19 pandemic has vividly demonstrated the disruptive potential of information disorder, shifting the focus from the health crisis toward political disinformation, which can erode the outcomes of public health policies [21]. In this context, particularly concerning are the myriad myths surrounding the safety and efficacy of COVID-19 vaccines [22–24].

Hence, online disinformation permeates all strata of society, necessitating multidisciplinary approaches for comprehension and the implementation of countermeasures. Since there is no one-size-fits-all solution to combating disinformation, experts propose a combination of interventions, such as rectifying false information and enhancing media literacy skills, to mitigate its impact [25].

Despite concerted efforts, curbing disinformation has proven to be more challenging than initially anticipated. For instance, collusive users, as outlined in the literature, purposefully promote false narratives in others’ minds, amassing high counts of retweets, followers, or likes, thereby influencing public discourse. They are often funded or composed of individuals who exchange followers among themselves to amplify their visibility [26]. Such users corrode the trustworthiness and credibility of online platforms in a manner akin to spam accounts.

In response, online social media companies have adopted diverse strategies to combat information disorders. Twitter, for example, has been actively removing accounts engaged in spam and platform manipulation.

Facebook has been actively combating problematic content on its platform since 2018 by employing the concept of “coordinated inauthentic behavior” [27]. While Facebook’s enforcement of these policies has drawn criticism, the company has substantiated the link between coordinated behavior and the dissemination of problematic information as a measure to counter manipulation attempts on its platform. The term encompasses not only bots and trolls that propagate false content but also unwitting citizens and polarized groups recruited to play orchestrated roles in influencing society.

Thus, instead of establishing a distinct demarcation between problematic and non-problematic information, the company has adopted what can be described as an “ill-defined concept of coordinated inauthentic behavior” (CIB). This strategic decision is aimed at effectively tackling and curbing the spread of disorderly information throughout its platform, all the while avoiding the complexity of unequivocally labeling it as false content [9]. Academic literature suggests that these coordinated efforts have become fertile ground for the proliferation of political disinformation [28–30], a phenomenon observed across various social media platforms [31].

This chapter examines disinformation narratives concerning COVID-19 vaccines that have been propagated by users on Facebook. Through the lens of the echo chamber concept, this study delves into the role of user-generated content (UGC) exhibiting signs of “coordinated inauthentic behavior” within Facebook groups. To this end, the study is guided by the following research questions: (RQ1) To what extent is problematic content shared on Facebook?; (RQ2) How are these groups interconnected?; and (RQ3) What characteristics of these coordinated networks contribute to the formation of echo chambers?

To answer these RQs, our approach involved sourcing fact-checked stories related to COVID-19 vaccines from two major Brazilian fact-checking initiatives, namely Agência Lupa and Aos Fatos. These stories were published during the period spanning January 2020 to June 2021, and they provided the foundation for generating keywords that were then utilized in our queries on CrowdTangle. This process was instrumental in identifying false narratives that were actively circulating on Facebook. In total, our study made use of 276 instances of debunked content to uncover and analyze disinformation narratives that were being disseminated across this online social media platform. Our analysis takes the form of a computational strategy aimed at predicting instances of coordinated behavior within Facebook groups. These groups engage in inauthentic tactics with the intent of boosting the visibility and reach of particular content, ultimately contributing to the amplification of problematic information on the platform [9].
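As a rough illustration, the keyword-generation and matching step described above can be sketched as follows. The record format, helper names, and thresholds are assumptions for illustration only; they are not the authors' actual pipeline or the CrowdTangle query interface.

```python
import re

# Minimal English stopword list for the sketch; a real pipeline for
# Brazilian fact-checks would use a Portuguese stopword list.
STOPWORDS = {"the", "a", "an", "of", "to", "in", "is", "are", "and", "that", "for"}

def extract_keywords(fact_check_title, min_len=4):
    """Derive simple query keywords from a debunked story's title."""
    tokens = re.findall(r"[a-z]+", fact_check_title.lower())
    return {t for t in tokens if t not in STOPWORDS and len(t) >= min_len}

def matches_narrative(post_text, keywords, threshold=0.5):
    """Flag a post when a sufficient share of the keywords co-occur in it."""
    if not keywords:
        return False
    text = post_text.lower()
    hits = sum(1 for k in keywords if k in text)
    return hits / len(keywords) >= threshold

# Hypothetical debunked headline and candidate post, invented for the sketch
kw = extract_keywords("Vaccine alters human DNA, scientists warn")
flagged = matches_narrative("Scientists warn the vaccine alters human cells", kw)
```

In practice the keywords would be used as search queries against the platform, and the matching step would filter the returned posts down to those plausibly echoing a debunked narrative.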

Our computational approach involves an analysis of content frequency and similarity, which enables the detection of potential traces of “coordinated inauthentic behavior.” This can manifest through the replication of widely available narratives within specific Facebook groups or the sharing of common links in a condensed timeframe, often leading to external websites. Additionally, we extended our analysis to encompass the coordinated dissemination of visual content, commonly referred to as memes. These images are particularly susceptible to manipulation, rendering them more challenging to identify using conventional computational methods [32]. To address this, we leveraged a computer vision (CV) algorithm provided by Facebook to extract and analyze the textual content embedded within these images. This allowed our method to ascertain whether multiple images shared the same message over a brief period of time.
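The link-sharing part of such an analysis can be sketched as a sliding-window check: flag any URL posted by several distinct groups within a short interval. The tuple format, window size, and group threshold below are illustrative assumptions, not the chapter's exact parameters.

```python
from collections import defaultdict

def find_coordinated_urls(posts, window_seconds=60, min_groups=3):
    """posts: iterable of (group_id, url, unix_timestamp) tuples.

    Returns the set of URLs shared by at least `min_groups` distinct
    groups within some window of `window_seconds`.
    """
    by_url = defaultdict(list)
    for group_id, url, ts in posts:
        by_url[url].append((ts, group_id))

    flagged = set()
    for url, shares in by_url.items():
        shares.sort()  # order each URL's shares by timestamp
        left = 0
        for right in range(len(shares)):
            # shrink the window until it spans at most window_seconds
            while shares[right][0] - shares[left][0] > window_seconds:
                left += 1
            groups = {g for _, g in shares[left:right + 1]}
            if len(groups) >= min_groups:
                flagged.add(url)
                break
    return flagged

# Toy data: three groups push one link within 20 seconds,
# while another link is shared far apart in time.
posts = [
    ("group_a", "http://example.com/claim", 0),
    ("group_b", "http://example.com/claim", 10),
    ("group_c", "http://example.com/claim", 20),
    ("group_a", "http://example.com/other", 0),
    ("group_b", "http://example.com/other", 5000),
]
suspicious = find_coordinated_urls(posts, window_seconds=60, min_groups=3)
```

The same windowing logic extends to text and OCR-extracted image captions once each post is reduced to a comparable key (for instance, a normalized message hash in place of the URL).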

Our findings reveal a concerted endeavor to manipulate public discourse with a strategic objective of establishing “disinformation echo chambers.” This is achieved by fostering a high level of engagement with false narratives across various groups, a substantial number of which are characterized by political affiliations. These fabricated information pieces possess the potential to reinforce existing biases, erode public health efforts, and trigger adverse societal consequences in relation to COVID-19 vaccines. Furthermore, the content propagated within these diverse groups can be construed as beliefs that gain potency through repeated exposure within these tightly-knit communities, effectively shielding them from counterarguments and perpetuating echo chambers [33].

In addition to these implications, the coordinated efforts to manipulate discussions within Facebook groups pose specific societal risks. This manipulation can deceive users into replicating these fabricated narratives in offline scenarios, where the tendency to resist vaccination might be exacerbated. Ultimately, our study concludes by highlighting the overarching dangers posed by these coordinated inauthentic efforts, including the propagation of confusion and mistrust among individuals, all while hindering the effectiveness of public health responses.

This chapter seeks to contribute to the expanding literature on disinformation and digital platforms by illustrating how coordinated inauthentic information can potentially give rise to echo chambers by effectively amplifying specific false or misleading narratives. Moreover, the scrutiny of the structural attributes of these Facebook groups, which exhibit well-defined coordinated networks, offers insights into the potential hazards and challenges posed by disinformation narratives in influencing individuals’ decision-making processes regarding vaccines. This influence can result in ignorance and misperceptions that jeopardize the formulation and execution of crucial public health policies, such as vaccination campaigns [86]. The subsequent subsections delve into an exploration of the current landscape of research concerning echo chambers and disinformation.

1.1. Transitioning from Open Channels of Communication to Echo Chambers

In their initial stages, online social networks were hailed for their potential to influence democracy and the public sphere by facilitating the unrestricted exchange of information, ideas, and discussions [34]. Online social media platforms embodied an optimistic perspective, driven by the disruption of traditional patterns of shaping public opinion, such as the gatekeeping role of newspapers, in favor of other forms of expert and non-expert communication [35]. These hopeful viewpoints championed the expansion of freedom, the transformation of democratic discourse, and the creation of a communal online knowledge hub [36]. However, these positive outlooks have given way to a more pessimistic stance, characterized by the recognition of homophily structures within these networks: users tend to interact more frequently with individuals who share similar viewpoints, resulting in a limited range of perspectives that can foster social division and polarized outlooks [37].

Within this framework, the metaphor of echo chambers has gained prominence as a way to elucidate these behaviors amplified by the algorithms of social media platforms. It illustrates a scenario where existing beliefs are echoed and reinforced, resembling reverberations within an acoustic echo chamber [38]. Alongside the homogeneity inherent to online social networks and exacerbated by their algorithms, the concepts of selective exposure and confirmation bias have also played pivotal roles in the formation of these echo chambers within digital platforms [4,5]. Previous research has indicated that online social networks and search engines contribute to the widening ideological gap between users. Similarly, studies have identified instances of echo chambers on online social media, particularly among groups divided along ideological lines [43,44] and on controversial issues [45].

Although some studies have suggested that these effects are relatively modest [39], others argue that the term “echo chambers” might oversimplify the issue, as it is not solely a consequence of platform mechanisms but also a result of existing social and political polarizations [40]. Scholars have also put forth the argument that the extent of ideological segregation in online social media usage has been overstated, challenging the assertion that echo chambers are universally present [41].

Conversely, Facebook employs several mechanisms that shape exposure to like-minded content: the structure of the social network, the feed-ranking algorithm, and users’ own content selection. Although the combination of these mechanisms can still surface ideologically diverse news and opinions, individuals’ choices play a “stronger role in limiting exposure to cross-cutting content” [42].

On Twitter, researchers have examined both political and nonpolitical matters to understand the presence of echo chambers. Their results indicate that political topics foster more interactions among individuals with similar ideological leanings than nonpolitical subjects do [4,38,47]. In other words, homophilic clusters of users dominate online interactions on Twitter, particularly around political subjects [46].

In the context of studying echo chambers on online social media, it is apparent that conceptual and methodological choices significantly impact research findings [38]. For instance, studies relying on interactions or digital traces tend to indicate a higher prevalence of echo chambers and polarization compared to those focusing on content exposure or self-reported data [38]. These amplifications of preexisting beliefs can also be shaped by the technological features of online social media platforms. In essence, the interplay between online social media interfaces and the user-technology relationship can influence the emergence of echo chambers [48].

Hence, it is crucial not only to analyze the nature of social media interactions but also to comprehend the content that users encounter in their news feeds or the groups they engage with. If the content within online groups limits exposure to diverse perspectives while like-minded groups deliberately disseminate messages to larger audiences, thereby reinforcing a shared narrative, we argue that the network of groups resulting from these coordinated communication dynamics indeed resembles “echo chambers” [49]. In this chapter, we employ the term “echo chamber” to describe Facebook groups where the online media ecosystem is characterized by selective exposure, ideological segregation, and political polarization, with specific users assuming central roles in discussions.

1.2. The Never-Ending Challenge of “Fake News”

Online social networks exist in a paradoxical realm, characterized by the coexistence of homophilous behavior and the potential for information dissemination. This duality has given rise to an environment where conflicting facts and contradictory expert opinions flourish, allowing false news to proliferate and conspiracies to take root [10]. Since 2016, the term “fake news” has gained global recognition as a descriptor for this false or misleading information spread in online spaces. This content can either be fabricated or intentionally manipulated to deceive individuals [11].

However, the term “fake news” has been wielded by politicians to undermine the media [10,86], leading to the emergence of alternative synonyms such as “information disorder,” “fake facts,” and “disinformation” [11]. Scholars engage in debates about differentiating between “disinformation” and “misinformation” [50]. Some argue that the distinction lies in intent, with misinformation lacking the deliberate intent to deceive. Yet, establishing intent can be challenging.

Despite these nuances, the term “disinformation” appears to be the most suitable to encompass this intricate landscape, as it covers both fabricated and intentionally manipulated content [11]. In the current complex information ecosystem, it is crucial to shift our focus from intention to the influence of the narratives that these posts align with. This is because people are not solely influenced by individual posts, but rather by the broader narratives they fit into [87]. The harmful consequences of information disorder arise from the human tendency to default to assuming the truth of a statement in the absence of compelling evidence to the contrary [5].

The rapid surge of disinformation from 2017 onward has fueled an extensive field of study, generating numerous publications approaching this multifaceted issue from diverse angles [51]. Some researchers aim to categorize the various types of information disorders that emerge, while others scrutinize the social and individual dimensions of disinformation’s effects on the public and political spheres [11,52]. Computational methodologies have also been employed to detect the so-called “fake news” [51].

Over the years, automated accounts, or bots, have attracted significant attention from researchers for their potential to influence conversations, shape content distribution, and manipulate public opinion. Although the terms “bots,” “automated accounts,” “fake accounts,” and “spam accounts” are often employed interchangeably, they do not always denote the same type of activity: bots are accounts controlled by software to automate posting or interactions, spammers generate unsolicited mass content, and fake accounts impersonate real individuals on online platforms [53,85].

In this respect, studies demonstrate that external events and major global incidents trigger increased manipulation attempts on platforms, particularly during elections and health crises [21], and traces of coordinated bot behavior have been detected in these events [9,54–56]. For example, on Twitter, estimates vary regarding the prevalence of bots, with some analyses suggesting 9% to 15% of profiles are automated accounts [57]. However, contrasting views also exist, asserting that bot accounts constitute more than 50% of Twitter users [58]. Interestingly, the platform itself provided an official statement in a public filing, indicating that fewer than 5% of its 229 million daily active users are categorized as “false” or “spam” accounts, as determined by an internal review of a sample [59].

To effectively address this issue, computational methods such as textual or social network analysis (SNA) play a crucial role in identifying and suspending harmful bots from platforms. These methods enable scholars to not only detect the detrimental effects of bots but also to mitigate their impact successfully. By understanding the nuanced differences between various types of automated accounts and their behaviors, researchers can develop more targeted strategies for preserving the authenticity and integrity of online conversations and content distribution [58].

Researchers have also identified the role of bots in amplifying the spread of disinformation and hoaxes by analyzing common interactions and network integrations. Hashtags used by these accounts have also been relevant for detecting automation, as human users tend to use more generic ones and maintain a diverse range of social connections. Botometer, formerly known as BotOrNot, has been a widely used tool for bot detection on Twitter. It evaluates the extent to which a Twitter account exhibits characteristics similar to those of social bots, aiding in the study of inauthentic accounts and manipulation on online social media for over a decade [60]. However, scholars have also pointed out that bots are becoming more sophisticated at mimicking human behavior, which exposes the limitations of these tools [61]. Additionally, Botometer is exclusive to Twitter, making it challenging to detect malicious actors on other platforms.
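The hashtag-diversity intuition can be illustrated with a toy entropy score; this is not Botometer's actual feature set, merely a sketch of the underlying heuristic that repetitive, single-topic posting scores lower than varied human-like usage.

```python
import math
from collections import Counter

def hashtag_entropy(hashtags):
    """Shannon entropy (in bits) of an account's hashtag usage.

    Low entropy across many posts suggests repetitive, possibly
    automated behavior; higher entropy suggests varied interests.
    """
    counts = Counter(hashtags)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Invented examples: a varied account vs. one hammering a single campaign tag
human_like = ["#news", "#sports", "#travel", "#food"]
bot_like = ["#vaxfacts"] * 4
```

A real detector would combine many such signals (posting cadence, follower ratios, content similarity) rather than rely on any single score.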

Other techniques have been employed to detect manipulation attempts on online platforms, including disinformation and conspiracy narratives. These methodologies encompass statistical approaches like linear regression [62] as well as social network analysis (SNA) that considers the diverse relationships users form within networks. Additionally, artificial intelligence (AI) methods, such as naive Bayes models and convolutional neural networks (CNN) [63,64] have been utilized. These different techniques have been employed both individually and in combination. Despite their utility, some of these methods come with certain limitations. While AI holds potential for enhanced detection, it necessitates a wide range of input data and exhibits higher accuracy with more recent datasets. Ensuring datasets are consistently up to date is challenging. Additionally, the strategies employed by malicious bots have undergone substantial evolution in recent years, hampering these methods.
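As an example of the simpler end of this spectrum, a multinomial naive Bayes text classifier can be built from scratch in a few lines. The labels and toy messages below are invented for illustration and say nothing about real content; production systems would add proper tokenization, feature selection, and far larger training sets.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Multinomial naive Bayes with Laplace smoothing for short texts."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = text.lower().split()
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label, doc_count in self.label_counts.items():
            score = math.log(doc_count / total_docs)  # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                # add-one (Laplace) smoothed log likelihood
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Tiny invented corpus for the sketch
texts = [
    "miracle cure suppressed by doctors",
    "vaccine alters your dna",
    "health ministry releases vaccination schedule",
    "clinical trial results published in journal",
]
labels = ["disinfo", "disinfo", "credible", "credible"]
clf = NaiveBayesText().fit(texts, labels)
```

The limitations noted above apply directly: the classifier is only as current as its training data, and adversaries can rephrase content to evade word-level features.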


Given these factors, there is a clear need for more sophisticated bot detection models or a greater reliance on methodologies that scrutinize the scope of activity within coordinated campaigns. When multiple entities collaborate within a network to achieve a common goal, the presence of coordination becomes evident [58]. In this vein, CIB strives to monitor the manipulation of information across online social networks, leveraging content dissemination through automated means to amplify its reach. This shift in focus from content and automated accounts to information dynamics within social networks aligns with Facebook’s policies, which link coordinated behavior with the sharing of problematic information [9,65].

Some scholars advocate for the advancement of techniques targeting bot coordination over mere bot detection, as orchestrated bot activities can prove significantly more detrimental [58]. This aligns with Facebook’s approach in its policies, employing the term CIB to underline the association between coordinated behavior and the propagation of problematic information [27].

Similarly, researchers have examined group-level features using graphs to identify orchestrated activities through users’ shared relationships such as friends, hashtags, URLs, or identical messages [65]. In this respect, previous studies have explored CIB through shared links on Facebook pages, groups, and verified public profiles [9].
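The graph idea can be sketched as a projection of a bipartite group-URL structure onto a weighted group-group graph, where edge weights count how many URLs a pair of groups has in common. The data shapes below are illustrative assumptions, not the cited studies' exact construction.

```python
from collections import defaultdict
from itertools import combinations

def co_sharing_graph(shares):
    """shares: iterable of (group_id, url) pairs.

    Returns {(group_a, group_b): shared_url_count}, a weighted
    group-group projection of the bipartite group-URL graph.
    """
    groups_by_url = defaultdict(set)
    for group, url in shares:
        groups_by_url[url].add(group)

    edges = defaultdict(int)
    for url, groups in groups_by_url.items():
        # every pair of groups sharing this URL gains one unit of weight
        for a, b in combinations(sorted(groups), 2):
            edges[(a, b)] += 1
    return dict(edges)

# Toy data: g1 and g2 share two URLs; g3 shares only one of them
shares = [("g1", "u1"), ("g2", "u1"), ("g1", "u2"), ("g2", "u2"), ("g3", "u2")]
graph = co_sharing_graph(shares)
```

Thresholding the edge weights (or the sharing time window, as in the frequency analysis) then separates incidental co-sharing from densely connected, potentially coordinated clusters.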

Coordinated behaviors in online networks have been associated with the creation of echo chambers, as users deliberately orchestrate communication dynamics to disseminate messages to large audiences [66,67]. Another study has revealed a connection between the rapid dissemination of false information and the existence of echo chambers, primarily due to polarized clusters of opinions and networks that contribute to the spread of such information [68].

While researchers have recognized collective behavior among malicious actors driven by economic and ideological motives, the academic literature has not extensively explored coordinated mechanisms for spreading false or misleading content through messages and memes. Notably, the COVID-19 pandemic has highlighted the prevalence of visual content sharing for disseminating disinformation on online social networks [67]. In this context, Facebook groups could serve as pivotal conduits for the propagation of intricate contagions of viral disinformation.

This chapter seeks to address this knowledge gap by delving into this subject, specifically focusing on COVID-19 vaccine disinformation within public Facebook groups. In the subsequent section, we provide an in-depth overview of our methodology for pinpointing echo chambers of disinformation on the Facebook platform.