Social media users find it difficult to distinguish fake news from real news (Allcott & Gentzkow, 2017; Au et al., 2021; Borges & Gambarato, 2019), and this issue is becoming more complex due to the rise of new technologies, such as deepfakes. Social media facilitate the fast spread of fake news (Kreft & Fydrych, 2018), and prior research shows that fake news enjoys higher levels of exposure (Timmer, 2017) due to the involvement of bots (Vosoughi et al., 2018). Moreover, fake news affects the credibility of traditional news outlets (Fallis, 2021), with trust in such outlets measured at a historic low (Lazer et al., 2018). Concurrently, the number of people relying on social media to find news has increased (Rubin et al., 2016). Several studies indicate that humans are not good at recognizing fake news (e.g., Bond & DePaulo, 2006; Rubin et al., 2016). Without further training or tools, people score about 54% on tasks in which they need to distinguish truth from deception, only slightly better than chance. However, a number of questions regarding what can be done to counter the effects of fake news remain to be addressed (Au et al., 2021), and scientific evidence for the use of certain tools is limited (Paredes et al., 2021).
New technologies and recent events, such as the US elections and the Brexit referendum, have attracted increased interest in the topic (Zhou et al., 2019). Fake news contributes to an increase in (political) polarization (Riedel et al., 2017) and is frequently described as a single, straightforward phenomenon. However, a recent study points towards the need to consider it as a two-dimensional phenomenon (Egelhofer & Lecheler, 2019), distinguishing between fake news as a genre and fake news as a label. The former refers to the intentional creation of fake news, while the latter refers to the use of the term ‘fake news’ to invalidate the media. We focus on fake news as a genre, but even within literature focusing on this dimension, definitions of fake news contain many elements. A common element in these definitions is that fake news refers to messages, of any kind, containing false information (Bakir & McStay, 2018; Lazer et al., 2018). Moreover, while fake news comes in many forms, many authors see the imitation or mimicking of real news messages as an important aspect (Lazer et al., 2018). Third, fake news is often described as not verifiable through facts and figures (Gimpel et al., 2020). We follow the definition of Lazer et al. (2018), who define fake news as “fabricated information that mimics news media content in form but not in organizational process or intent” (Lazer et al., 2018, p. 1094). Misinformation here is simply defined as information that is either false or misleading (Lazer et al., 2018).
The debates surrounding these concepts appear to focus on the role of social media and new technologies. One of the most striking technological developments arguably is the rise of deepfakes: fake videos generated using AI that make it appear as though someone said or did something they never actually did (Dobber et al., 2021). Deepfakes are feared to have an impact in times of political elections, as the continuously improving and easily accessible technology makes it easier to fabricate such videos and more difficult to distinguish them from real ones (Fallis, 2021). Specifically, it is important to consider how deepfakes differ from photoshopped images: while photoshopped images mislead in terms of what we see, deepfakes also affect what we hear (Dobber et al., 2021). The increasing extent to which deepfakes are perceived as realistic or real has an impact on society. Prior research has evaluated the impact of deepfakes in the context of political microtargeting, an increasingly employed technique in which information is gathered on individuals to enable targeted messaging during, for example, electoral periods (Borgesius et al., 2018). Previous research has emphasized that deepfakes have the potential to affect attitudes and, especially due to the rapid developments in terms of quality and ease of fabrication, should be expected to have more impact in the future (Dobber et al., 2021).
Social media plays a large role in the spread of misinformation, both in the form of deepfakes and in other forms (Borges & Gambarato, 2019). While traditional media are characterized by a relative balance in the news that is presented, the goal of large social media corporations is to retain their users (Carlson, 2018). To achieve this, the content presented to users is tailored to their preferences through algorithms. While algorithms might seem neutral due to their data-driven nature, humans are involved in their training, and biases inevitably are built into their design (Gillespie, 2014). Moreover, the inner processes of algorithms are unclear or difficult to understand (Carlson, 2018). As a result, filter bubbles and echo chambers are created through increasing exposure to personalised content (Borges & Gambarato, 2019), which can lead to the reinforcement of existing beliefs and to intellectual isolation. Homogeneity in the content users are exposed to leads to polarisation of opinions, giving way to the growth of fake news (Kreft & Fydrych, 2018). Such homogeneous content and polarised opinions lead to lower acceptance of opposing views and novel information (Lazer et al., 2018). Prior research confirms that people inherently are more likely to believe news that fits their existing beliefs (Hameleers & van der Meer, 2020). Fake news capitalizes on this, showing users what they want to see (Kreft & Fydrych, 2018). This suggests that fake news is more likely to be perceived as true by those whose prior beliefs match the content provided. Furthermore, while the public may not be deceived directly by deepfakes, exposure to them does lead to feelings of uncertainty (Vaccari & Chadwick, 2020). This uncertainty, in turn, may lead to a decrease in trust in traditional news outlets. Deepfakes also affect attitudes regarding politicians (Dobber et al., 2021), an effect that can be enhanced further by microtargeting practices (Borgesius et al., 2018). Finally, people are said to be vulnerable to fake news, and even those who do not mean to often participate in sharing it (Zhou et al., 2019).
2.2 Countering Fake News
Social media also allows for the quick and easy sharing of large volumes of content, which adds to the challenges of detecting and countering fake news (Zhang & Ghorbani, 2020). On top of this, the way in which news is presented has changed. An often-used term to describe this is the ‘tabloidization of news’ (Rowe, 2011), referring to how the speed at which news is delivered has become more important and revenues from advertisements play a large role. As news outlets want to ensure readers click on their articles, this focus on speed may have consequences for the extent to which articles are fact-checked, which may in turn blur the lines between facts and fiction or unverified information. The increase of such ‘clickbait news’ has often been connected to the developments regarding misinformation. Considering the impact of fake news, there have been several attempts to counter it. Fake news comes in various forms, making detection difficult (Zhou et al., 2019). Developing accurate measures is challenging due to the above-mentioned large volumes of fake news shared on social media (Zhang & Ghorbani, 2020), but the fact that fake news consists of many different, complex aspects adds to this as well (Ruchansky et al., 2017). Lazer et al. (2018) identify two categories of measures: one refers to detection and intervention on platforms, the other focuses on empowering individuals. The former involves the employment of algorithms. There exists a considerable body of literature focusing on how data mining can be employed to detect fake news on social media (Ciampaglia et al., 2015; Conroy et al., 2015; Shu et al., 2017). Algorithms and AI simultaneously enable the rise and spread of fake news and help counter it (Kreft & Fydrych, 2018). The way social networking sites, such as Facebook, employ their algorithms to enhance consumer engagement could also be employed to ensure users are exposed to quality content (Lazer et al., 2018). An example of this would be exposing users to diverse political content, rather than merely content confirming their existing beliefs. This could in turn reduce the effect of echo chambers, a phenomenon caused by and reinforcing polarized political opinions (Borges & Gambarato, 2019).
The second category addresses the potential of empowering individuals. There have been initiatives to counter the effects of fake news by training social media users. For instance, Facebook released a tutorial with tips on how to recognize fake news (Brady et al., 2017). Moreover, efforts to uncover the truth behind fake news stories have been made by fact-checkers (Hameleers & van der Meer, 2020). Using expert knowledge is not a new approach to countering fake news; it has, in fact, been deployed for several decades (Fridkin et al., 2015). However, fact-checking conducted by experts seems to have risen in response to growing misinformation revolving around politics (Fridkin et al., 2015). Recent studies show the potential of employing such experts (Clayton et al., 2020). Fact-checkers can potentially reduce polarization and help deal with partisan identities (Hameleers & van der Meer, 2020), and they can affect people’s evaluation of the accuracy of political messages (Fridkin et al., 2015).
It is important to consider the limitations of deploying fact-checkers to counter the effects of fake news, as fact-checkers are only effective when correcting information that fits the prior beliefs of the person exposed to it (Hameleers & van der Meer, 2020). This means that fact-checking efforts can bring those with polarized opinions on either side closer together, and thus have the potential to bridge the gap. The so-called backfire effect, whereby presenting factual information to counter fake news only leads to a stronger belief in the presented misinformation (Nyhan & Reifler, 2010), has raised concern; however, recent research emphasizes that the evidence for such an effect is weaker than initially thought (Wood & Porter, 2019). Still, merely employing fact-checking is not enough to deal with the fast-moving developments in the area of fake news (Ciampaglia et al., 2015).
Although these two approaches are often discussed separately, there is also literature arguing for a more hybrid approach. It is, for instance, argued that machine-based and human-based approaches should not be seen as mutually exclusive (Okoro et al., 2018). Moreover, the technologies currently used and developed are time-consuming, and the fast-moving developments add to their complexity. Studies therefore argue for the need to equip people with the right tools and knowledge to detect fake news (Zhang & Ghorbani, 2020). In addition, developing effective measures requires a joint effort of experts from a variety of disciplines (Zhou et al., 2019). While recent studies point towards the potential of fact-checking efforts in reducing the effects of fake news, the evidence for such efforts is limited (Lazer et al., 2018), and questions remain as to what types of protocols work and how they can be deployed; it is thus not clear how such methods can be implemented and which factors are most important to consider.