Waqas Mahmood1* and Muhammad Shahzad2
1Department of Mass Communication & Media Studies, GIFT University, Gujranwala, Pakistan
2Department of Media Studies, The Islamia University of Bahawalpur, Pakistan
Misinformation has become a pressing problem in the digital world, shaping the perceptions of social media users. The degree of influence attracts researchers' attention when users remain affected by misinformation even after it has been corrected. The current research aimed to study the mediating role of the continued influence of misinformation between political and religious intolerance and user engagement. Facebook posts and tweets (N = 200) were analyzed using content analysis to determine the relationship between the variables. Findings revealed that the continued influence of misinformation positively mediates the relationship between intolerance and user engagement. Users continue to engage with misinformation even after its correction and identification by fact-check tools, which contributes to increased political and religious intolerance.
Over the last twenty years, the internet has enabled people to maintain their existing relationships more effectively across geography by offering them new ways to communicate with others that would otherwise have been difficult in person. The impact of Internet-based social networking sites (SNSs) on social relationships has been a topic of much debate, given the widespread use of social media (Dunbar, 2016). The advent of SNSs has given rise to a significant platform for the dissemination of news material, hence establishing a novel ecosystem that facilitates the proliferation of misinformation (Pennycook & Rand, 2019). In the realm of online communication, individuals possess the capability to disseminate a message to a vast multitude of recipients, potentially reaching an audience of millions (Damico & Krutka, 2018). Even though social media makes it easier to build large social networks and connect people all over the world, the composition of online social circles and the number of close friendships are not very different from their offline counterparts (Dunaway, 2021). The average number of friends in online social networks falls within the range of 100 to 200, which is comparable to the number of friends in offline inner circles. Similarly, the number of friends belonging to the two closest circles in online social networks is normally about five and 15, respectively (Scott et al., 2015). New technologies carry an implicit promise of expanding the boundless scope of the social world; in practice, however, that scope remains constrained by a confluence of temporal and cognitive limitations (Dunbar et al., 2015). In the contemporary digital landscape characterized by Web 2.0, narratives have the potential to rapidly achieve viral status, irrespective of their veracity. The contributors come from a diverse range of backgrounds, including trained journalists, scientists, ordinary individuals, and influential governmental and business entities, often engaged in a struggle to establish legitimacy (Smith et al., 2018). The growth of social media, mobile devices, smartphones, and the internet has complicated and worsened the impact of health misinformation.
Digital platforms facilitate the spread of false information about health (Wu et al., 2023). The dissemination of inaccurate information has detrimental consequences, as it diminishes the overall level of knowledge and undermines the foundation of trust. The aforementioned issue is not a recent occurrence, and it is incumbent upon many stakeholders, including technology corporations, media organizations, news outlets, and educators, to collectively tackle this matter. Malicious actors use personal information from users' social media accounts, as well as those of their family, friends, and coworkers, to analyze social connections, find areas of weakness, manipulate anxieties, and exploit preferences (Watts, 2018). The increasing use of social networks has significantly transformed various aspects of human behavior, such as the process of locating, organizing, and coordinating groups of individuals with common interests. Additionally, the proliferation of information and news sources, along with the capacity to elicit and disseminate opinions and ideas across diverse subjects, has also experienced a substantial shift (Marcoux et al., 2021). Users can easily upload their videos, pictures, and textual statuses on social networking sites (SNSs); although these networks offer great options, they also have many faults. Gatekeeping and hurdles are minimized because everyone is now a publisher and can post anything, anytime, from anywhere (Qazvinian et al., 2011).
X users often engage in questioning rumors, with a particular emphasis on scrutinizing false rumors more extensively than those that ultimately prove to be real. Researchers believed that observing how people behave in groups could aid in identifying and separating instances of false information (Lee & Shin, 2021). The spread of misinformation on WhatsApp is a global issue that has detrimental effects on elections, public health, and the security of marginalized communities. One well-known instance is that of the 2018 Brazilian elections, in which con artists managed to simultaneously send messages to hundreds of WhatsApp users. By surreptitiously scraping phone numbers logged into each user's device, new users and groups were automatically created to aid the spread of these messages (Kuru et al., 2022). Social media platforms provide direct access to primary data; however, a significant challenge lies in the ability to discern accurate information from falsehoods and rumors. Social media data often originates from users and may exhibit biases, inaccuracies, and subjectivity in several instances. Moreover, several individuals use social media platforms as a means to disseminate unfounded claims and inaccurate information (Palen & Hughes, 2018). Misinformation refers to the unintentional dissemination of erroneous material, while disinformation involves intentional deceit, frequently relying on blatant fabrications (Tumber & Waisbord, 2021). Misinformation is when people share false information online without meaning to, whereas disinformation is when people create and spread known lies on purpose. The examination of misinformation is a topic that receives little attention in academic and journalistic circles (Bakir & McStay, 2018). False or misleading information presented as real news, generally believed to be intentional but potentially unintentional, could be included in a broader definition of fake news. Fake news is presented as factual even though it has no basis (Muigai, 2019). Misinformation is ambiguous about the motivation behind the falsehood. Because it can be challenging to determine the source's intention, researchers frequently use the term "misinformation" to refer to erroneous claims in general (Shin et al., 2018).
There seems to be a discernible trend towards a burgeoning era of propaganda, dis/misinformation, and media manipulation, further exacerbated by the political instability and election uncertainty that have been a defining feature of European politics in recent times. It is a disconcerting reality that terrorists and cybercriminals manipulate individuals' cognitive processes rather than only targeting their computer systems (Watts, 2018). The task of discerning the motives of individuals who disseminate misinformation, particularly when they are regular users of social media, is a challenging one. The digital era presents a confluence of players, a diverse range of communication channels, and an abundance of opportunities for misinformation (Tumber & Waisbord, 2021). Misinformation pertains to the dissemination of erroneous or false information without any deliberate effort to deceive. It is created and/or spread unintentionally by people from a wide range of fields, including the public, authorities, academics, and journalists (Pennycook et al., 2020). The misinformation ecosystem has three primary actors, that is, authoritative sources of propaganda, websites disseminating false information, and individual purveyors of hoaxes.
This category might also include the whole spectrum of human and non-human intermediaries that facilitate communication between sources and recipients of information (Douglas, 2018). Opinion leaders influence other people's political ideas, attitudes, faiths, inspirations, and behaviors. Consequently, they can manipulate the political ideas and beliefs of teenage social media users. If these users fail to verify false information, teens would be exposed to the propagation of misinformation (Mahmood et al., 2023). The rapid dissemination of misinformation and deception became more prevalent during the COVID-19 pandemic. While not a recent occurrence, the dissemination of false information has grown more apparent and intricate (Niemiec, 2020). Rumors and conspiracy theories may arise from a variety of sources, including both factual and inaccurate material, which may include misinformation, disinformation, and the genre of fake information and news (Egelhofer et al., 2020).
Citizens vary widely in their understanding of politically relevant facts: some are informed, others are uninformed, and still others are misinformed. Most research on misinformation comes from other fields, notably psychology (for instance, belief perseverance) and communications (for instance, source credibility). Sometimes insufficient context, false equivalency effects, or even a straightforward mistake that has not yet been corrected may lead to misinformation. Misinformed people are less likely to change their incorrect beliefs in response to new or contradicting information because they place greater confidence in their knowledge than the uninformed do (Nyhan & Reifler, 2012). In South Asian nations, such as Bangladesh, India, Pakistan, Myanmar, and Sri Lanka, on the other hand, misinformation campaigns primarily targeted religious feelings (Haque et al., 2020). Trust and group dynamics are significant in attempts to explain misinformation encounters on WhatsApp. Smaller, more intimate groups, such as family and close friends, as well as those with comparable demographics and political views, were more likely to be trusted by respondents (Kuru et al., 2023). Many WhatsApp videos feature scenes such as religious leaders breaking physical distancing rules, scenes of police brutality, and apology statements issued by the same religious leaders. Such videos could have unintended consequences, for instance, dehumanizing religious people, publicly disparaging community leaders, and conveying the idea that COVID-19 precautions were imposed without consent and were only tolerated under duress. Most people believe that the COVID-19 pandemic was a plot orchestrated by people from other religions to restrict Muslims from appearing in mosques and practicing their faith (Ittefaq et al., 2020). Moreover, statistical analysis revealed that individuals who are exposed to misleading content and misinformation hold a more negative opinion of social media's effects on society. Individuals who are exposed to more false content are more likely to believe that technology causes political division among people and makes it easier for domestic politicians to manipulate them (Silver, 2019).
It is more challenging to rectify misinformation that stirs up strong emotions, is taken for granted, or for which there is a great deal of uncertainty. Not only can misinformation be readily disseminated on the internet, but it can also be updated there in certain situations (Bode & Vraga, 2018). Social media has become a ubiquitous information source in today's digital landscape, influencing people's beliefs and behaviors as well as public discourse. On the other hand, the spread of misinformation on these platforms is a serious problem that could have a considerable impact on society. Even though this problem is becoming more acute and widely acknowledged, there is still a significant knowledge vacuum regarding the dynamics and mechanisms of misinformation spreading on social media and the consequences this has for both individuals and communities. Insufficient research has been conducted on the mechanisms influencing the dissemination of misinformation on social media. The development of focused interventions requires an understanding of the propagation of misinformation, including the role of influential factors and network structures. Understanding these gaps would help researchers create regulations and interventions that are appropriate for the rapidly changing social media landscape. The current study aimed to provide a thorough understanding of the spread of misinformation on social media by tackling these related issues. The study also examined the continued influence of misinformation even after the correction of fake or manipulated information.
The current study aimed to address the following research objectives:
In the last decade, social media has developed into a fundamental part of daily life, with considerable financial, political, and societal consequences. While the impact of traditional media decreases, social media networks have been taken up around the planet at an extraordinary speed, revealing the astonishing nature of the social media phenomenon. For this reason alone, it is imperative to investigate the influence of social media (Sloan & Quan-Haase, 2017). New media platforms have the capacity for self-correction, yet the extent to which they engage in rectifying rumors is significantly limited. To mitigate the prevalence of disinformation, it is essential to implement a series of measures aimed at increasing the accuracy of corrections (Johnson & Kaye, 2010). Some social media sites have announced measures aimed at mitigating the dissemination of inaccurate or misleading information. In response, several social media businesses, including Facebook, have implemented a variety of algorithmic and policy modifications to mitigate the dissemination of inaccurate information. According to recent research, misleading narratives have persisted on Facebook despite alterations to the platform's news feed algorithm at the beginning of 2018 (Allcott et al., 2019). There is a pressing need for accurate information about the prevention and treatment of Ebola among individuals. However, it is important to note that the veracity of such information cannot be assured (Oyeyemi et al., 2014).
The observed impact of the perceived veracity of disinformation suggests that individuals are more inclined to spread information that they perceive to be truthful. Individual personality traits and particular incentives play a major role in the spread of false information through social media platforms (Chen, 2016). Misinformation undermines the education system, incorrect advertisements may negatively impact productivity in an organization, social media can violate people's privacy and incite violence among young people, and some pointless blogs may also have the power to incite youth to act inappropriately or violently (Siddiqui & Singh, 2016). The dissemination of inaccurate information induces widespread terror and dread among the populace, hence giving rise to a phenomenon known as 'mass hysteria' (Ferrara, 2015). Traditional news fact-checking consists of five steps: selecting which claims to investigate, reaching out to speakers, tracking down incorrect information, consulting experts, and demonstrating how news organizations operate. Fact-checking increases a journalist's workload and has the potential to shape content by combating misinformation (Ejaz et al., 2022). The consequences produced by misinformation are more likely to manifest when individuals experience a heightened cognitive load or possess limited cognitive resources (Ecker et al., 2014).
The use of social media has been a persistent characteristic of the community's reaction to crisis occurrences. Many stakeholders, including the affected people, professional media, and official crisis responders, are using the tools mentioned. Their utilization during a crisis disrupts the conventional methods of information dissemination. Given the inherent danger of misinformation, it is essential for crisis responders to actively participate in and influence the internet discourse during a crisis (Huang et al., 2015). Various actors have strategically utilized social media platforms for political purposes. These instances range from the persistent harassment of media outlets critical of the government in the Philippines to the manipulation of democratic processes in Britain and the United States during 2016, as well as the promotion of "coordinated inauthentic behavior" that has contributed to heightened tensions between India and Pakistan (Starbird et al., 2019).
Misinformation, gossip, and propaganda have long been seen as prevalent features of human communication, with historical roots dating back to the encounter between Antony and Cleopatra during the Roman era. The advent of the Gutenberg printing press in 1493 resulted in a substantial increase in the dissemination of disinformation (Haque et al., 2020). The internet hosts many information actors with conflicting interests, which makes it quite difficult to consistently identify trustworthy information and to develop efficient methods to recognize false information. With the development of artificial intelligence (AI), it would become harder to differentiate between the writings of a human and a robot. The process of resolving rumors has four main components: the identification of rumors, the monitoring of their progression, the classification of the stance taken toward the rumor, and the assessment of its truthfulness (Prakash & Madabushi, 2020).
The primary obstacles encountered while using social media in emergency contexts include the prevalence of rumors and the dissemination of incorrect information. The reliability of the platform is questionable due to the prevalence of misinformation, since users often share news, search queries, and other content without verifying its accuracy (Reuter et al., 2017). There is an urgent need to scrutinize the accuracy and truthfulness of factual assertions that are significant for the general public. Both journalists and civilians dedicate a significant amount of time to this activity. The development of a completely automated fact-checking system is beyond current capabilities (Maddock et al., 2015). The emergence of fake news in the modern era can be attributed to several key factors within the digital media landscape. These factors include the decline in the financial viability of traditional news sources, the accelerated pace of the news cycle, and the rapid dissemination of misinformation and disinformation through user-generated content and propagandists.
Moreover, it also includes the heightened emotional nature of online discourse and the growing number of individuals who exploit the algorithms employed by social media platforms and internet search engines for financial gain (Bakir & McStay, 2018). The rectification of misinformation has proven to be a successful strategy for inducing individuals to revise their beliefs. When presented with factual information, individuals tend to exhibit a decrease in their adherence to previously held misconceptions. This phenomenon is seen across several settings, including social media platforms, where the rapid dissemination of misinformation is prevalent (Bode et al., 2020). The absence of precise and reliable information may give rise to an information void, thereby allowing the dissemination of misinformation. Moreover, those who experience fear and uncertainty tend to have increased vulnerability to misinformation (Niemiec, 2020). Researchers are primarily concerned with investigating the significance of fact-checking and computer-assisted techniques in the automated identification of false information disseminated via internet platforms. A notable disparity is seen in the extent of fact-checking between genuine news material and false news content, with fact-checks being disseminated with a considerable temporal lag after the propagation of the initial disinformation (Egelhofer et al., 2020).
Leading Pakistani religious scholar and missionary group spokesperson Maulana Tariq Jameel told an audience that COVID-19 is the result of the "wrongdoing of women". Jameel later withdrew his comments; however, the false information had already begun to circulate. Some people, even among those who acknowledge the existence of the Coronavirus, believe that Muslims are immune to the virus. One story claims that the illness is God's wrath for the immorality of unbelievers. This segment of the population, once more swayed by religious propaganda, thinks that Muslims are immune to COVID-19 because they perform ablution before each prayer and wash their hands and faces five times a day. In a press conference, another religious scholar and leader of the Jamiat Ulema-e-Islam (JUIF) political party, Fazl-ur-Rehman, stated, "When you sleep, Coronavirus sleeps, when you die, the virus dies with you" (Ittefaq et al., 2020). A participant in the misinformation paradigm typically goes through three steps: witnessing an event, receiving false information after it happens, and taking a memory test at the end. According to some researchers, the misinformation effect might occur when participants are presented with misleading information even though the original detail was never encoded, so their memory of the initial event remains unaffected (Antonio, 2015). The misinformation effect can be regulated and prevented if there is a better understanding of the variables that could influence it, particularly when false memories carry legal implications (Dinneen, 2016). To reduce the negative effects of their networks, social media platforms should revise their privacy policies. Additional investigation is required to examine the fundamental mechanisms and the wider implications of these links in the developing landscape of social media and misinformation (Mahmood & Shahzad, 2023).
The way people react to information that has been corrected after initially being believed to be true has attracted considerable attention in research. These corrections are rarely fully effective, which means that most people still rely on false information even after it has been corrected and acknowledged. This phenomenon has been referred to as the continued influence effect (CIE) (Lewandowsky et al., 2017). The continued influence of misinformation on the human mind is mostly an influence on later cognition: even when people are exposed to correct information, they still keep believing and sharing the false information (Seifert, 2002). Misinformation, or any information that is believed to be true but later proves to be false, may still have an impact on people's decisions and ways of thinking even after it has been corrected by a reliable source and even if the correction is understood and subsequently remembered. There is a suggestion that the ineffectiveness of corrections stems from the fact that a corrected myth tends to recur (Swire et al., 2017). Corrections frequently lessen the impact of false information on reasoning; however, they do not always do so. This phenomenon applies to political as well as non-political subjects (Aird et al., 2018). When a person has a straightforward and credible alternative to bridge the gap left by a retraction in their mental model of an event or conceptual connections, misinformation effects are typically not a problem. In situations where a straightforward substitute is not accessible, individuals frequently persist in depending on retracted false information (Rapp & Braasch, 2023). Misinformation is unavoidable in today's world due to the quick spread of certain news. Unfortunately, empirical data and real-world examples indicate that false information still influences people's attitudes and actions. Given the significant practical implications of misinformation's persistent influence, it is essential to understand strategies to reduce the potential negative impacts of fake news (Kan et al., 2021). When people draw conclusions after receiving a correction, misinformation may still affect their comprehension processes. Online methods can be used to address this issue successfully. This would make it possible to conduct more thorough research on the origins of influence in conclusions and decisions, the processing of corrections, and what ultimately happens to false information (Johnson & Seifert, 1998).
H1 User engagement reflected in social media misinformation content is positively associated with political and religious intolerance levels.
H2 The continued influence of misinformation content positively mediates the relationship between user engagement and political and religious intolerance reflected in social media misinformation content.
The current study aimed to answer the following research questions:
RQ1: How does social media users’ engagement with the political and religious misinformation on X and Facebook affect intolerance?
RQ2: How do social media users react to misinformation even after the identification by the fact-check tools?
RQ3: What is the continued influence of misinformation on political and religious intolerance of Pakistani social media users?
Content analysis, network analysis/algorithm development, public opinion work (surveys, focus groups, interviews), and experimental design are among the best methods that can be used to study misinformation (Lewandowsky et al., 2012). Content analysis was used to analyze the available data, including pictures, videos, and posts on X and Facebook that had already been identified as misinformation by two fact-check sources, namely AFP Pakistan and Soch Fact-Check, in order to investigate the relationship between three variables. These variables include the continued influence of misinformation content, user engagement, and political and religious intolerance reflected in social media misinformation content. Data were collected from the AFP Pakistan and Soch Fact-Check databases. The current study investigated three variables using a content analysis approach to measure the content of misinformation: i) intolerance, ii) user engagement, and iii) continued influence of misinformation. Facebook posts and tweets identified by AFP fact-check and Soch fact-check from January 2019 to April 2022 comprised the population of the study. All the political and religious posts and tweets identified by AFP fact-check and Soch fact-check constituted the sample for this study. Each identified Facebook post and tweet was the unit of analysis. To meet the specific criterion of the purposive sampling technique, only social media posts with at least 1,000 engagements were selected for this research.
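As an illustration of the purposive sampling step described above, the following is a minimal Python sketch. The column names ("source", "platform", "topic", "engagement", "date") and the placeholder records are assumptions for demonstration; the AFP Pakistan and Soch Fact-Check databases do not expose data in this exact form.

```python
import pandas as pd

# Placeholder records for items flagged by the fact-check sources (illustrative only).
flagged = pd.DataFrame([
    {"source": "AFP FC",  "platform": "FB", "topic": "political", "engagement": 2400, "date": "2020-06-11"},
    {"source": "Soch FC", "platform": "X",  "topic": "religious", "engagement": 650,  "date": "2021-03-02"},
    {"source": "AFP FC",  "platform": "X",  "topic": "religious", "engagement": 1800, "date": "2019-08-19"},
])

# Keep only political/religious items flagged between January 2019 and April 2022
# that reached the 1,000-engagement threshold used for purposive sampling.
flagged["date"] = pd.to_datetime(flagged["date"])
sample = flagged[
    flagged["topic"].isin(["political", "religious"])
    & flagged["date"].between("2019-01-01", "2022-04-30")
    & (flagged["engagement"] >= 1000)
]
print(sample)
```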
Intolerance was coded into three categories: risk, hate speech, and target (a minimal coding sketch is provided after the category definitions below). Risk, in the context of the coding sheet, refers to possible drawbacks, injuries, or unfavorable effects connected to particular elements found in the identified misinformation content on X and Facebook. Risk in religion refers to content that may suggest possible harm or unfavorable consequences associated with one's religious identity or beliefs. This could include dangers such as conflict, discrimination based on religion, or other unfavorable outcomes related to one's religious affiliation. Risk to a group means examining material that draws attention to possible drawbacks or difficulties that come with being a member of a particular social, racial, or cultural group. This may entail hazards connected to prejudice, stereotyping, or other unfavorable outcomes resulting from group dynamics.
Positive: The content pertaining to misinformation is considered positive if it is not directed against any gender, group, or religion; therefore, their life and status are not in any kind of danger.
Negative: The content directly or indirectly discusses a gender, group, or religion in a way that puts their lives in danger.
Neutral: Content related to gender, group, or religion is not found in the unit of analysis.
Hate speech refers to the use of offensive language, attacks on groups, and victimization. Instances of hate speech in the analyzed data can be identified and classified in a methodical manner. Offensive language entails the classification of statements that are insulting, disparaging, or contain slurs directed at specific people or groups due to their race, ethnicity, gender, religion, or other protected characteristics. Within the category of hate speech, intolerance offers an organized framework to methodically examine and classify the various aspects of hate speech in the data.
Positive: The content pertaining to misinformation is considered positive if it does not use offensive language, attack groups, or victimize others.
Negative: The content uses offensive language, attacks groups, or victimizes others.
Neutral: Material related to offensive language, attacking groups, or victimizing others is not found in the unit of analysis.
Content targeting a specific social, political, or religious group or party is coded under the target category. It may include references, conversations, or portrayals centered around a group identity, such as a political or religious party or a social movement related to them.
Positive: The content related to misinformation is considered positive if it does not target any political or religious party/group, leaders, or individuals.
Negative: The content targets any political or religious party/group, leaders, or individuals.
Neutral: Content targeting any political or religious party/group, leaders, or individuals is not found in the unit of analysis.
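The sketch below expresses the intolerance coding rules as a simple lookup: each unit of analysis receives a positive, negative, or neutral judgment on risk, hate speech, and target. The numeric convention (+1 / -1 / 0) and the function name are illustrative assumptions, not part of the published coding sheet.

```python
# Hypothetical numeric encoding of the coding-sheet values.
VALUE = {"positive": 1, "negative": -1, "neutral": 0}

def code_intolerance(risk: str, hate_speech: str, target: str) -> dict:
    """Return the coded intolerance values for one Facebook post or tweet."""
    return {
        "risk": VALUE[risk],
        "hate_speech": VALUE[hate_speech],
        "target": VALUE[target],
    }

# Example: a post that attacks a religious group and targets its leaders.
print(code_intolerance(risk="negative", hate_speech="negative", target="negative"))
```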
Engagement was measured by the number of views, likes, comments, retweets, and shares of the political and religious posts and tweets, structured in the form of texts, pictures, and videos, that were identified by the fact-check tools.
To classify whether misinformation continues to exert influence, comments on Facebook posts and tweets were studied even after identification by the fact-check sources (see the sketch following the coding values below).
Yes: The content is fake and flagged by the fact-check sources, yet people continue sharing, uploading, or commenting on it.
No: The content is fake and flagged by the fact-check sources, and people are not sharing, uploading, or commenting on it.
Neutral: No information found related to the continued influence.
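A minimal sketch of how one coded record might be assembled from the engagement counts and the continued-influence judgment. The field names and the simple sum used for total engagement are assumptions for illustration rather than the study's exact operationalization.

```python
from dataclasses import dataclass

@dataclass
class CodedItem:
    views: int
    likes: int
    comments: int
    retweets: int
    shares: int
    continued_influence: str  # "yes", "no", or "neutral"

    @property
    def engagement(self) -> int:
        # Engagement approximated here as the sum of the interaction counts listed above.
        return self.views + self.likes + self.comments + self.retweets + self.shares

item = CodedItem(views=5200, likes=310, comments=85, retweets=40, shares=120,
                 continued_influence="yes")
print(item.engagement, item.continued_influence)
```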
Table 1
Cross-tabulation of Fact-check Sources and Platforms of Posts and Tweets (N = 200)
| Fact-check Source | FB | X | FB + X | Total |
|---|---|---|---|---|
| AFP FC | 40 | 30 | 80 | 150 |
| Soch FC | 14 | 8 | 28 | 50 |
| Total | 54 | 38 | 108 | 200 |
Table 1 presents the cross-tabulation of posts and tweets from the two fact-check sources, that is, AFP fact-check (AFP FC) and Soch fact-check (Soch FC), across the social media platforms Facebook (FB) and X. The sample comprises N = 200 posts and tweets in total. In particular, AFP fact-check contributes 40 items on Facebook, 30 on X, and 80 flagged on both platforms, for a total of 150. In contrast, Soch fact-check contributes 14 items on Facebook, 8 on X, and 28 flagged on both platforms, for a total of 50. Overall, there are 54 items on Facebook, 38 on X, and 108 across both platforms. The cross-tabulation enables a detailed analysis of the relationship between fact-check sources and the platforms on which flagged misinformation circulated. For researchers looking to understand the distribution patterns of fact-check initiatives across social media platforms, the table provides a useful overview.
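A cross-tabulation of this kind can be reproduced with pandas, as in the sketch below. The records are placeholders rather than the study's actual dataset, and the column names are assumed for illustration.

```python
import pandas as pd

# Placeholder records: one row per flagged post/tweet with its fact-check source
# and the platform(s) on which it circulated.
items = pd.DataFrame({
    "source":   ["AFP FC", "AFP FC", "Soch FC", "AFP FC", "Soch FC"],
    "platform": ["FB", "X", "FB + X", "FB + X", "FB"],
})

# Cross-tabulate sources against platforms, with row/column totals as in Table 1.
crosstab = pd.crosstab(items["source"], items["platform"],
                       margins=True, margins_name="Total")
print(crosstab)
```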
Table 2
Regression Analysis for Mediation of Continued Influence of Misinformation (CIM) between User Engagement and Intolerance (N = 200)
| Variable | B | 95% CI | SE B | β | R² | ΔR² |
|---|---|---|---|---|---|---|
| Step 1 | | | | | .06 | .06*** |
| Constant | 5.38*** | [4.731, 6.016] | 0.33 | | | |
| PRI | 0.13*** | [-.36, .11] | 0.19 | .08*** | | |
| Step 2 | | | | | .12 | .06*** |
| Constant | 5.02*** | [4.151, 5.895] | 0.45 | | | |
| PRI | 0.09*** | [-.33, .15] | 0.13 | .06*** | | |
| CIM | 0.12*** | [-.08, .32] | 0.11 | .09*** | | |
Note. PRI = Political and Religious Intolerance Levels; CIM = Continued Influence of Misinformation Content; UE = User Engagement Reflected in Social Media Misinformation Content
***p < .001.
Table 2 shows the regression analysis for mediation conducted to test H1 and H2. It highlights the mediation effect of CIM (MV) between intolerance (IV) and user engagement (DV). Step 1 demonstrates a significant relationship between user engagement and intolerance, with a significant standardized regression coefficient (β = 0.06, p < .001) whose positive sign indicates that higher user engagement is associated with a higher level of intolerance. Additionally, a significant positive relationship between CIM and intolerance was found in Step 2 after CIM was added to the model as a mediator (β = 0.09, p < .001). This suggests that a higher level of CIM is associated with a greater degree of intolerance. Notably, adding CIM increases the R-squared (ΔR² = .06), indicating that it explains an additional 6% of the variance in intolerance. These findings support the hypothesis that CIM mediates the connection between user engagement and intolerance. CIM mediates the relationship between the IV and the DV, meaning that intolerance and user engagement continue to increase even in the presence of CIM.
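The hierarchical regression reported in Table 2 can be sketched as two nested OLS models, where the increment in R-squared from Step 1 to Step 2 corresponds to the ΔR² reported for the mediator. The variable names (pri, cim, engagement) and the simulated data below are illustrative assumptions; the sketch shows only the structure of the analysis, not the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data; the real coded dataset is not reproduced here.
rng = np.random.default_rng(0)
n = 200
pri = rng.normal(size=n)                                   # political and religious intolerance (IV)
cim = 0.4 * pri + rng.normal(size=n)                       # continued influence of misinformation (mediator)
engagement = 0.1 * pri + 0.3 * cim + rng.normal(size=n)    # user engagement (DV)

df = pd.DataFrame({"pri": pri, "cim": cim, "engagement": engagement})

# Step 1: outcome regressed on the predictor only.
step1 = sm.OLS(df["engagement"], sm.add_constant(df[["pri"]])).fit()
# Step 2: mediator added; the change in R-squared is the increment reported as delta R-squared.
step2 = sm.OLS(df["engagement"], sm.add_constant(df[["pri", "cim"]])).fit()

print(step1.rsquared, step2.rsquared, step2.rsquared - step1.rsquared)
```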
In the results, Step 2 of Table 2 presents CIM as a mediator and shows a significant positive association (β = 0.09, p < .001) between CIM and intolerance, suggesting that higher levels of CIM are linked to higher levels of intolerance. These results support the claim that CIM content mediates the relationship between user engagement and intolerance. They suggest that levels of intolerance and user engagement rise even in the presence of CIM, indicating that people continue to engage with misinformation content even after the content has been identified by fact-check tools (Wittenberg & Berinsky, 2020). The results highlight several important correlations and mediators that are relevant to this complex phenomenon. First, the positive association between user engagement and political and religious intolerance revealed a possible relationship between the two, indicating that higher engagement levels could be a factor in the rise of intolerance in these domains. The significant relationship found between user engagement and CIM is also noteworthy, demonstrating the impact this phenomenon has on users' levels of engagement. This link also extended to intolerance based on religion and politics, emphasizing CIM as a catalyst that increases intolerance in social media discourse. The mediation analysis revealed that the relationship between user engagement and intolerance is strongly influenced by CIM, which highlights the complex dynamics of misinformation on social media platforms.
Civil society can and should play the role of a counterbalance and an independent stakeholder, working alongside and in cooperation with private companies and platforms to flag and debunk misinformation. In a time when the consequences of misinformation on politics and religion can be significant and divisive, the current study underscored the imperative of continued efforts to enhance media literacy and critical thinking skills, such as media mindfulness and media mindedness. By doing so, individuals may be empowered to make more informed decisions and contribute to a more reliable and constructive digital discourse, particularly in matters related to politics and religion.
Further research is needed to delve into the specific mechanisms and determinants of media mindfulness and its relationship with misinformation exposure. It is also crucial to recognize that political and religious misinformation is a complex issue, influenced by various societal and psychological factors. It might be vital to look into the effectiveness of various intervention techniques to stop the spread of misinformation. Practical solutions could include evaluating the effects of platform-specific interventions, media literacy initiatives, and fact-checking campaigns. Longitudinal studies that monitor the behavior of individual users over time may be able to identify dynamic changes in how people engage with, consume, and share misinformation.
The authors of the manuscript have no financial or non-financial conflict of interest in the subject matter or materials discussed in this manuscript.
The data associated with this study will be provided by the corresponding author upon request.