Abstract
Deepfake is a term for content generated by artificial intelligence (AI) with the intention that it be perceived as real. Deepfakes have gained notoriety for their potential misuse in disinformation, propaganda, pornography, defamation, or financial fraud. Despite prominent discussions of the potential harms of deepfakes, empirical evidence on the harms of deepfakes to the human mind remains sparse, scattered, and unstructured. This scoping review presents an overview of the research on how deepfakes can negatively affect the human mind and behavior. Out of initially 1,143 papers, 28 were included in the scoping review. Several types of harm were identified: concerns and worries, deception consequences (including false memories, attitude shifts, sharing intention, and false investment choices), mental health harm (including distress, anxiety, reduced self-efficacy, and sexual deepfake victimization), and distrust in media. We conclude that deepfake harm ranges widely and is often hypothetical; hence, empirical investigation of potential harms to the human mind and behavior, and further methodological refinement to validate current findings, are warranted.
1 Introduction
Digital content has long been prone to manipulation for various reasons such as advertising, art, and entertainment, as well as for nefarious motives such as deception, fraud, propaganda, and slander. Technological development accelerates the usability and quality of digital content creation and manipulation. Recent developments in artificial intelligence (AI) have led to easily accessible tools which allow the generation of completely artificial digital media. Generative adversarial networks (GANs) are AI models consisting of a generating component and an adversarial discriminator component whose interplay continually refines the model’s ability to generate the desired output (Creswell et al. 2018). GAN-based models are able to produce synthetic content indistinguishable from real content, which has become colloquially known as “deepfake”, a portmanteau of deep (learning) and fake (Chadha et al. 2021; Lyu 2020; Westerlund 2019). Deepfakes are AI-generated content created to be perceived as real, and can appear as pictures, videos, text, or audio (Farid 2022; Khanjani et al. 2023). Across modalities, human ability to detect deepfakes is at chance level (Diel et al. 2024a). Although deepfakes can hypothetically depict any type of content, they are renowned and notorious for their use in the recreation of real humans. Using AI-based technologies such as face-swapping, deepfakes can be created by projecting one person’s face onto another in the target material (e.g., a video). While the initial use of deepfakes has often been humorous, severe misuse of deepfakes is found in pornography, political propaganda and disinformation, financial fraud and marketplace deception, and academic dishonesty (Albahar and Almalki 2019; Caldwell et al. 2020; Campbell et al. 2022; Fink 2019; Hamed et al. 2023; Ibrahim et al. 2023).
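For illustration, the adversarial interplay described by Creswell et al. (2018) can be sketched as a single training step in PyTorch. This is a minimal sketch under arbitrary assumptions (architectures, dimensions, and hyperparameters are illustrative choices, not drawn from any of the cited works):

```python
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 64, 784  # illustrative sizes (e.g., flattened 28x28 images)

# Generator maps random noise to synthetic samples.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)
# Discriminator scores samples as real (1) or generated (0).
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Discriminator step: learn to separate real from generated samples.
    noise = torch.randn(batch_size, LATENT_DIM)
    fake_batch = generator(noise).detach()  # detach: do not update the generator here
    loss_d = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator step: learn to produce samples the discriminator accepts as real.
    noise = torch.randn(batch_size, LATENT_DIM)
    loss_g = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

It is this feedback loop, where each improvement of the discriminator forces the generator to produce more realistic output, that eventually yields synthetic content humans can no longer reliably distinguish from real content.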
1.1 Deepfake pornography
Deepfakes first gained public awareness due to their use in the creation of AI-generated pornography of celebrities in 2017, as the first use of the term ‘deepfake’ occurred in this context (Fido et al. 2022; Popova 2020; Westerlund 2019). The vast majority of deepfakes on the internet are pornographic in nature (Ajder et al. 2019). Cases of revenge- or extortion-based deepfake pornography have been reported, including cases targeting minors (FBI 2024; Mania 2024). In South Korea, which accounts for about 99% of deepfake pornography content (Home Security Heroes 2023), recent legislation criminalized possession or consumption of deepfake pornography (Jung-joo 2025). Similarly, the creation of deepfake pornography has been criminalized in the United Kingdom (Gov.uk 2025), the publication of nonconsensual sexual deepfakes has been prohibited by the United States’ Take It Down Act (US Congress 2025), and the sharing of non-consensual sexual deepfakes by the EU Directive on Combating Violence against Women and Domestic Violence (European Parliament 2024). The use of a target person’s face for the creation of a sexual deepfake is typically done without their consent, which may lead to considerable harm to the person (Blanchard and Taddeo 2023). When used intentionally to damage a person or their reputation, sexual deepfakes can be considered a type of image-based sexual abuse (IBSA; McGlynn and Toparlak 2025; Rigotti et al. 2024). While conventional IBSA often involves the sharing of sexually explicit material of a target taken in a private real-life environment (e.g., sex footage), deepfakes enable the generation of sexual content involving situations the target has never personally been part of, such as in the role of a victim in violent sexual acts (Rousay 2023). The consequences of such novel forms of IBSA have so far not been investigated thoroughly. Nevertheless, such information is highly relevant for the estimation of personal harm and potential compensatory measures for victimization.
1.2 Propaganda and disinformation
Among the first deepfakes to reach wide publicity was a face-swapped video of Barack Obama by comedian Jordan Peele (Agarwal et al. 2019; Caldera 2019). The potential misuse of deepfakes for political propaganda has since been discussed thoroughly (Myers 2021; Nasiri and Hashemzadeh 2025; Sophia 2025). Deepfakes may provide synthetic yet realistic content, enhancing the deceptive quality of disinformation, e.g., in the form of fake news (Botha and Pieterse 2020; Dack 2019). Systematic uses of deepfake-enhanced propaganda have been observed in political campaigns (Oosterkamp 2025) and terrorist recruitment (Al Waroi 2024). Deepfakes and large language models (LLMs) have reshaped the landscape of propaganda from ‘fake news’ to narrative-driven approaches to disinformation (Nasiri and Hashemzadeh 2025). Empirical evidence shows that political deepfakes can appear credible (Appel and Prietzel 2022; Barari et al. 2021; 2025), especially when the message is consistent with the consumers’ beliefs (Hameleers 2022). This enhancement of deepfake persuasiveness by content-consumer congruence is especially important in the context of microtargeting: microtargeting describes the practice of tailoring content to a specific individual according to available personal information (e.g., demographic data, personality). Algorithms can predict private individual information using large datasets on digital footprints such as visited sites or liked content (Blesik et al. 2018; Kosinski 2024), and media content can be adapted to individual information to maximize persuasive power and shared via recommendation algorithms (e.g., Chouaki et al. 2022). Deepfakes may then be tailored to individuals and shared via platform recommendation systems, further enhancing their danger as propaganda (Dobber et al. 2021; Fathaigh et al. 2021; Kosinski 2024). Consequently, further harm may stem from shifts in attitudes (e.g., political leaning) or behavior (e.g., sharing deepfakes) as a consequence of being deceived.
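To make the trait inference underlying microtargeting concrete (in the spirit of Kosinski et al. 2013), the sketch below fits a simple classifier that predicts a binary trait from binary digital-footprint features. All data, feature counts, and effect sizes are synthetic assumptions for demonstration only, not results from the cited studies:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for a digital-footprint matrix: rows = users, columns = binary
# indicators such as "liked page X" or "visited site Y".
n_users, n_footprints = 2000, 300
X = rng.integers(0, 2, size=(n_users, n_footprints))

# Toy target trait (e.g., a binary personality or demographic label), here
# weakly linked to a small subset of footprints purely for demonstration.
signal = X[:, :10].sum(axis=1)
y = (signal + rng.normal(0, 1.5, n_users) > signal.mean()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point of the sketch is that seemingly innocuous behavioral traces can statistically reveal protected traits, which microtargeting pipelines can then exploit to select who receives which (deepfake-enhanced) message.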
1.3 Marketplace deception and financial fraud
Deepfakes have found various forms of use and misuse in the marketplace, such as in the form of AI-generated advertising (Chen 2024; Kietzmann et al. 2021) or fake reviews (Fagni et al. 2021). Deepfakes have further been used for financial fraud via identity theft, recreating a person’s appearance and voice for digital communication (Bateman 2022; Gupta et al. 2025; Sheyte et al. 2025), as this enables bypassing security measures based on personal authentication (Woolham 2024). The financial loss caused by AI-enabled financial fraud has been estimated at over $12.5 billion for the year 2023 (Katanich 2024). Despite these significant financial consequences, the harms to victims of deepfake financial fraud remain underexplored.
1.4 Societal impact of deepfakes
Deepfakes have been recognized as a serious threat to society by various national security services (Bundesministerium Inneres 2024; BSI 2024; Canadian Security Intelligence 2023; Homeland Security 2022). Unsurprisingly, strategies to mitigate the harms of deepfakes have already reached the political level; for example, the EU AI Act of 2025 outlaws severe cases of deepfake-based identity fraud, mandates transparency of deepfake content through clear labels, and prohibits manipulative use of AI-generated content (European Parliament 2025).
1.5 Research question
Deepfakes are recognized as a serious societal threat, and the potential harms of deepfakes to individuals have been mentioned in various scientific articles as hypotheticals or as reported by news articles (e.g., Feeney 2021; Harris 2021; Johnson and Diakopoulus 2021; Karasavva and Noorbhai 2021; Trifonova and Venkatagiri 2024). Although human susceptibility to deepfake deception has been well investigated (Diel et al. 2024a), actual empirical evidence of the harms of deepfakes to the human mind and behavior remains sparse. Furthermore, the types of harms of deepfakes have not yet been properly synthesized across the different domains of deepfake abuse (pornography, fraud, disinformation). Therefore, the goal of the present scoping review is to summarize current research on how deepfakes may negatively affect mind and behavior.
2 Methods
2.1 Review
Given the lack of previous reviews on this topic and the wide range of deepfake applications, modalities, misuses, and potential harms, a scoping review was conducted to provide a general, initial overview of the research field. A scoping review is considered suitable given the relative novelty of the topic. The review was conducted according to the PRISMA extension for scoping reviews, PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews; Tricco et al. 2018), adapted to the journal’s formatting and reporting requirements. Included studies were classified into categories after the literature selection. The scoping review was not pre-registered.
2.2 Inclusion criteria
The goal of the scoping review is to provide an overview of research on potential harms (direct or indirect negative consequences) caused by deepfake use and misuse on the human mind and behavior. Hence, studies were evaluated according to the following criteria:
1. Human harm: empirical or focused hypothetical, direct or indirect negative consequences for humans, including affective, behavioral, and cognitive consequences (including attitudes and concerns).
2. Deepfake use: a focus on the use of deepfakes and their consequences, including direct (exposure to deepfakes) and indirect (caused by strategies implemented against deepfakes, or by the general prevalence of deepfakes) consequences; excluding works on AI in general without relevant information on deepfakes.
Focused hypothetical work includes theoretical essays or qualitative interviews on potential harms of deepfakes to the human mind and behavior; hence, research focusing on such hypothetical harms is included. General concerns about deepfakes (or about specific misuse cases of deepfakes) are considered an indirect negative consequence, and research focusing on them (e.g., population surveys) is therefore included. Research on AI in general is included if relevant information (human harm caused by deepfakes) is present in the manuscript, even if deepfakes are not the focus. Finally, research focused on deepfake detection (or analogous evaluations of deepfakes such as credibility or authenticity) is excluded, because detection ability in itself is not a negative consequence of deepfakes and because human deepfake detection performance has already been systematically investigated in previous reviews (Diel et al. 2024a; Somoray, Miller, and Holmes 2025). However, harmful consequences of deepfake deception (e.g., sharing intention, attitude changes, or false memories) were considered appropriate to the scoping review’s research question and thus included, as those are potential harms of deepfake use and have not been a topic of previous reviews.
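For illustration only, the screening logic above can be condensed into a simple predicate. The field names below are hypothetical stand-ins, not taken from the review’s actual screening sheet:

```python
from dataclasses import dataclass

@dataclass
class Record:
    """Hypothetical screening fields for one candidate paper."""
    reports_human_harm: bool    # criterion 1: affective/behavioral/cognitive harm
    harm_is_hypothetical: bool  # focused theoretical/interview work also counts
    focuses_on_deepfakes: bool  # criterion 2: deepfake use and its consequences
    detection_only: bool        # pure detection/credibility studies are excluded

def include(r: Record) -> bool:
    # Detection-only work is excluded even when deepfakes are the topic.
    if r.detection_only:
        return False
    # Criterion 1 is satisfied by empirical or focused hypothetical harm;
    # criterion 2 requires a focus on deepfake use and its consequences.
    return (r.reports_human_harm or r.harm_is_hypothetical) and r.focuses_on_deepfakes
```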
2.3 Literature search
A literature search was conducted in April 2025. The search string used can be found in the supplementary materials. The databases IEEE, PubMed, and ScienceGov were searched, in addition to Google Scholar as a secondary source, of which only the first 500 results were screened due to the search engine’s low specificity.
The search term was designed to include as many potential terms as possible and to cover three content topics: stimulus type (e.g., “deepfake”, “AI-generated”, “AI”), misuse (e.g., “abuse”, “manipulate”, “expose”), and negative effect (e.g., “distress”, “harm”). Content topics were combined with AND statements, leading to a search for studies which include stimulus type, misuse, and negative effect terms in their title or abstract. The full search term (divided by content topics) is shown in Table A1; a sketch of its construction follows below.
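The following minimal sketch shows how such a boolean search string can be assembled from the three topic lists. The term lists here are abbreviated stand-ins for the full lists in Table A1:

```python
# Abbreviated stand-ins for the full term lists in Table A1.
stimulus = ["deepfake", "deepfakes", "ai-generated content", "synthetic media"]
misuse = ["abuse", "manipulation", "misuse", "pornography", "threat"]
negative_effect = ["anxiety", "distress", "harm", "trauma", "victimization"]

def or_block(terms: list[str]) -> str:
    """Quote each term and join the block with OR."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Topics are joined with AND: a hit must match at least one term from each
# topic in its title or abstract.
search_string = " AND ".join(or_block(t) for t in (stimulus, misuse, negative_effect))
print(search_string)
```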
2.4 Literature selection and extraction
Three independent raters screened the identified literature according to the criteria above. Discrepancies between raters were discussed among all three raters until consensus was reached. After an initial set of studies was selected, the three raters independently searched the studies and their reference lists for additional potential research to include. Potential studies identified this way were then again discussed by the three raters until consensus was reached.
2.5 Data charting
Data extraction was conducted by one rater and then independently verified and extended by the two other raters. Again, discrepancies were discussed until consensus was reached. Data were charted according to research type, type of deepfake misuse, deepfake modality, research question, measure, results, and thematic topic. Results of the data charting were synthesized by thematic topic, resulting in five categories: concerns and attitudes, deception consequences, harm, hypothetical threats, and trust.
3 Results
3.1 Summary of findings
Findings will be presented by categorizing the included papers into the five categories. A total of 28 papers were included in the scoping review (Table 1). Papers were divided into five categories: concerns and attitudes (5 papers), consequences of deception (10 papers), harm to mind and behavior (4 papers), hypothetical threats (7 papers), and trust (4 papers). The papers focusing on concerns and attitudes show that awareness of and concerns about deepfakes were widespread across international samples and that deepfake misuse was considered harmful. Papers focusing on consequences of deepfake deception found that successful deception by deepfakes can cause sharing of deepfakes across social media, changes in attitudes, false memories, or poor financial choices. Papers focusing on harm to mind and behavior found that having a deepfake revealed after deception can decrease self-efficacy and cause distress; furthermore, victims of deepfake pornography show symptoms of distress and trauma. Hypothetical papers present hypothetical threats of deepfakes across a variety of sectors, such as education, the marketplace, or healthcare. Finally, papers focusing on trust show that exposure to or awareness of deepfakes can lead to distrust towards media in general.
3.2 Literature selection
A total of 1,143 papers were identified in the initial literature search, out of which 28 papers were included in this review. Literature selection is depicted in Fig. 1.
A summary of the included literature (including methods and results) is found in Table 1.
Including pre-print publications, one study was published in 2020, six studies in 2021, five in 2022, seven in 2023, seven in 2024, and two in 2025. Twelve studies used an experimental setup (Ahmed et al. 2021; 2023a; 2023b; Diel et al. 2024b; Dobber et al. 2021; Emett et al. 2024; Hughes et al. 2021; Hwang et al. 2021; Murphy et al. 2023; Shin and Lee 2022; Ternovski et al. 2022; Weikmann et al. 2025). Seven studies used survey-based methodology, conducting surveys of either the general population (Ahmed 2023; Bakir et al. 2024; Cochran et al. 2021; Flynn et al. 2022; Napshin et al. 2024) or victimized populations (Rousay 2023), and one study utilized an experimental survey approach (Kugler and Pace 2021). Five papers conducted qualitative interviews, three with experts (Bakir et al. 2024; Lundberg et al. 2025; Songja et al. 2024) and two with victims of deepfake IBSA (Flynn et al. 2022; Rousay 2023). One study utilized a mixed-methods approach (Murphy and Flynn 2022). Finally, four papers were reviews or essays focusing on potential risks or harms (Abdulai 2025; Mustak et al. 2023; Rini and Cohen 2022; Roe et al. 2024).
Included literature was categorized into the five major themes identified at the end of the data charting process: concerns and attitudes, deception consequences, harm, hypothetical threats, and trust. Results will be presented for each theme. Category- and paper-level summaries of the methods and results are provided in Table 1. A summary of the harms of deepfakes is shown in Fig. 2.
Fig. 2
A summary of the harms (observed and hypothetical) caused by deepfakes to human mind and behavior
3.4 Concerns and attitudes
Five papers focused on concerns and attitudes surrounding deepfakes (Bakir et al. 2024; Cochran et al. 2021; Kugler and Pace 2021; Napshin et al. 2024; Songja et al. 2024). Of those, three were surveys of the general population (Bakir et al. 2024; Cochran et al. 2021; Napshin et al. 2024), one was a qualitative interview study of experts and AI students (Songja et al. 2024), and one was an experimental survey (Kugler and Pace 2021).
Bakir et al. (2024) investigated concerns of 2,056 UK participants on the risks of emotionally manipulative AI, for which deepfakes were presented to participants as one example of a digital manipulation tool. The majority (59%) were uncomfortable with the use of AI-based manipulation tools for political and social causes, with specific concerns expressed about uncertainty about whether information is real or fake (72%) or artificially amplified (73%). Younger participants were found to be more comfortable with the use of digital manipulation tools.
Cochran et al. (2021) investigated awareness of and concerns about deepfakes in a sample of 319 US participants, as well as their views on platforms’ accountability for deepfakes. Almost half (48%) expressed at least some familiarity with deepfakes, and more than a third (37%) thought they had probably been deceived by deepfakes at least once. Higher familiarity with deepfakes was associated with more internet and social media consumption as well as stronger beliefs in personal accountability regarding deepfakes. Furthermore, respondents scored high on concerns about the impact of deepfakes (M = 5.79, SD = 1.2, on a 7-point Likert scale), but also expressed amusement about deepfake media (M = 5.17, SD = 0.6). Belief in platform accountability was high (M = 4.89, SD = 1.6). Positive predictors of belief in platform accountability were female gender and awareness of and concerns about deepfakes; negative predictors were belief in personal accountability and amusement by deepfakes. Similarly, Napshin et al. (2024) investigated the association between concerns about deepfakes and perceptions of individual responsibility in a sample of 1,023 US participants. Concerns about deepfakes and younger age negatively predicted the perception of personal accountability. Humorous perception of deepfakes, meanwhile, positively predicted individual responsibility but negatively predicted concerns about deepfakes.
Songja et al. (2024) conducted qualitative interviews with 30 Thai individuals, including 10 AI-related experts, 10 AI-related students, and 10 members of the general public, on risks and concerns regarding deepfakes. About half (53.3%) were aware of deepfakes, while many were concerned about deepfakes in fake news and advertisement (43.3%) or in relation to data theft or fake social media (36.7%). One fifth (20.0%) saw deepfakes as an easy tool for malicious individuals, 13.3% expressed concern about deepfakes of public figures, and 6.7% worried about a loss of job opportunities.
Finally, Kugler and Pace (2021) conducted multiple experimental surveys on how a total of 1,953 US participants judge the harm, blame, and appropriate punishment for deepfake abuse, especially in a pornographic context (deepfake IBSA). Participants rated scenarios of deepfake IBSA as more blameworthy, punishable, and harmful compared to other examples of deepfake abuse (e.g., defamation), and preferred criminal over civil punishment for deepfake-based IBSA and defamation.
In summary, about half of the samples were aware of deepfakes (Cochran et al. 2021; Songja et al. 2024). Concerns were generally high (Cochran et al. 2021), and considerable proportions of the samples were aware of various potential misuses of deepfakes (Bakir et al. 2024; Songja et al. 2024). Higher levels of concern were associated with lower personal accountability and higher platform accountability (Cochran et al. 2021; Napshin et al. 2024). Participants considered deepfake abuse harmful and deserving of punishment, especially in the context of deepfake IBSA and defamation (Kugler and Pace 2021).
3.5 Deception consequences
Ten studies focused on negative consequences of deception by deepfakes (Ahmed et al. 2021; 2023a; 2023b; Dobber et al. 2021; Emett et al. 2024; Hughes et al. 2021; Hwang et al. 2021; Murphy and Flynn 2022; Murphy et al. 2023; Shin and Lee 2022). Out of those, five focused on predictors of the intention to share deepfake-based false information in social media (Ahmed et al. 2021; Ahmed and Chua 2023; Ahmed et al. 2023; Hwang et al. 2021; Shin and Lee 2022), two focused on false memories created by viewing deepfake content (Murphy and Flynn 2022; Murphy et al. 2023), two focused on changes in attitudes caused by viewing deepfake-based false information (Dobber et al. 2021; Hughes et al. 2021), and one focused on changes of financial behavior caused by viewing deepfake false information (Emett et al. 2024).
3.5.1 Sharing intention
Sharing a deepfake (e.g., as part of fake news or misinformation) can have negative consequences via the spread of false information. Ahmed (2021) investigated 440 US participants’ readiness to share a deepfake of a celebrity confession with or without a caption declaring the video to be a deepfake. Participants were more inclined to share a deepfake if it was not labeled as such, and, surprisingly, high cognitive ability increased the intention to share an unlabeled deepfake, indicating heightened vulnerability in this group.
Ahmed and Chua (2023) investigated the role of deepfake modality (video deepfake, audio deepfake, or cheapfake, i.e., a low-quality video deepfake) in the sharing intention of 309 US participants. Participants were more likely to share video deepfakes than audio variants or cheapfakes, and high cognitive ability (measured via a wordsum test of general intelligence) negatively predicted sharing intention.
Ahmed et al. (2023) furthermore found, in an experiment involving 8,070 participants across 8 countries, that the intention to share deepfakes was predicted by the perceived accuracy of the deepfake as well as by fear of missing out, deficits in self-regulation, low cognitive ability, and social media use.
Hwang et al. (2021) found, in an experiment with 316 Korean participants, that deepfake-enhanced fake news messages elicited a higher intention to share when participants did not receive media literacy education, in the form of a 7-min presentation with information about disinformation and deepfakes, before the experiment. General media literacy education was found to be comparably, and at times more, effective than deepfake-specific literacy education.
Finally, Shin and Lee (2022) found in an experiment involving 230 US participants that fake news stories shown with deepfake videos increased sharing intention, especially when the content of the news stories was congruent with the participants’ opinions. Meanwhile, educating participants on the low cost of producing deepfakes reduced the intention to share deepfake news.
3.5.2 Attitude changes
Two studies included in this review investigated attitude changes following a deepfake presentation (Dobber et al. 2021; Hughes et al. 2021).
Dobber et al. (2021) investigated the effect of political deepfakes and microtargeting on changes in political attitudes in a sample of 278 Dutch participants. Political deepfakes worsened attitudes towards politicians depicted negatively in a deepfake video. When political deepfakes were combined with microtargeting (i.e., adapting the information to individuals based on factors like demographics, personality, or internet behavior), the negative effects of a negative deepfake on attitudes towards the politician and their political party were amplified. There was no significant difference between the effects of deepfakes and those of genuine content.
Hughes et al. (2021) investigated, across seven experiments with a total sample of 2,558 participants, the effect of deepfake content on shifts in explicit and implicit attitudes and intentions towards the individuals depicted in the deepfakes; shifts occurred regardless of whether participants were aware that the content shown was a deepfake. However, the measurement of attitudes and behavioral intentions was not specified.
3.5.3 False memories
Two papers investigated how deepfakes may elicit false memories (Murphy and Flynn 2022; Murphy et al. 2023). Murphy and Flynn (2022) investigated the effects of presenting a fake news text story with a deepfake video versus with a photograph on false memory reports in a sample of 672 participants. While about one third (32%) of participants reported at least one false memory, and false memory rates were higher in the photograph and deepfake video conditions than in a text-only condition, fake news with a deepfake video had no advantage over fake news with a photograph. In another study, Murphy et al. (2023) investigated the effect of presenting deepfake movie remakes on false memory in a sample of 436 participants. Deepfake videos were not more likely to lead to false memories than a text-only condition. Furthermore, participants expressed discomfort with deepfake recasting, citing concerns about artistic integrity, the shared viewing experience, and the unsettling level of control allowed by this technology.
Hence, current research does not support the notion that deepfake content has an advantage in eliciting false memories compared to non-deepfake counterparts.
3.5.4 Financial fraud
One paper focused on the misuse of deepfakes in fake news to deceive investors into false investments (Emett et al. 2024). In three experiments, 536 participants acting as investors saw positive or negative deepfake-enhanced fake news related to companies and stocks before being asked about investment choices. Deepfake-enhanced financial fake news was shown to influence investment choices in both directions (increased investment after positive news; decreased investment after negative news), especially when deepfakes were realistic and when credibility judgments were based on intuitive rather than analytic judgments.
3.6 Harm for human mind and behavior
Four studies were classified as focusing on mental harm caused by deepfake misuse (Diel et al., in review; Flynn et al. 2022; Rousay 2023; Weikmann et al. 2025). Two of these studies investigated participants’ distress or self-efficacy when deepfakes were revealed (Diel et al., in review; Weikmann et al. 2025). The two other studies focused on the consequences of deepfake IBSA for victims (Flynn et al. 2022; Rousay 2023).
3.6.1 Distress and self-efficacy
Diel et al. (in review) investigated the improvement of deepfake detection performance and changes in distress in a feedback-based task, in which 96 German participants received feedback on the correctness of their response after judging whether a shown face was a deepfake or not. Distress, measured as arousal and valence, increased in participants who underwent the feedback-based training compared to control participants who did not receive feedback. Furthermore, participants who received feedback showed an increase in anxiety about AI and a decrease in self-efficacy, indicating that participants experience increased distress, anxiety, and reduced self-efficacy when confronted with their inability to detect deepfakes. Similarly, Weikmann et al. (2025) investigated the effects of revealing a deepfake after deception on self-efficacy in 951 German participants. Self-efficacy decreased after the deepfake was revealed, again showing a negative effect of deepfake deception on self-efficacy.
3.6.2 Deepfake IBSA victimization
Two papers focused on mental health harm caused by deepfake IBSA victimization (Flynn et al. 2022; Rousay 2023). Flynn et al. (2022) employed a general population survey on deepfake IBSA experiences with 6,109 participants, followed by focused qualitative interviews with 75 victims (25 each from the UK, New Zealand, and Australia). In the survey, 14% of participants reported past experiences of deepfake IBSA victimization, with a higher risk of victimization for male, LGBT+, disabled, Black, Asian, Middle Eastern, and younger individuals. The qualitative interviews revealed personal experiences and symptoms of victims of deepfake IBSA, notably strong fear, a sense of violation, dehumanization, and powerlessness; onset of weight gain or addictive behavior (smoking); and increases in avoidance behavior (e.g., deleting social media and social isolation). Despite the higher risk of victimization for men reported in the survey, no male participants disclosed experiences of deepfake and digitally altered imagery abuse in the qualitative interviews.
Rousay (2023) similarly conducted a focused survey with 59 participants recruited from an online community for deepfake IBSA victims, followed by qualitative interviews with 5 victims (3 female, 2 male). The survey revealed that victims experienced monetization of their sexual deepfakes and that more male than female victims were represented. Qualitative interviews revealed a range of symptoms following deepfake IBSA, notably feelings of vulnerability, hopelessness, embarrassment, powerlessness, objectification, violation, degradation, and stigmatization; avoidance behavior (e.g., not thinking about the event, avoiding social media, or not leaving home); increases in distrust, depressive moods, anger, distress, shame, and guilt; a suicide attempt; psychosomatic stress symptoms such as vomiting and high blood pressure; insomnia; loss of appetite; inability to work; and intrusive memories in the form of images of the sexual deepfake material (especially in a violent context) and nightmares.
3.7 Hypothetical threats
Seven papers focused on hypothetical or predicted harm caused by deepfakes (Abdulai 2025; Bakir et al. 2024; Lundberg et al. 2025; Martinez et al. 2024; Mustak et al. 2023; Rini and Cohen 2022; Roe et al. 2024). Four of these papers focused on the risks of deepfakes in particular settings or contexts, namely healthcare (Martinez et al. 2024), entertainment (Lundberg et al. 2025), the marketplace (Mustak et al. 2023), and education (Roe et al. 2024). One paper focused on emotional manipulation by AI (Bakir et al. 2024), one on the dangers of AI for vulnerable populations (Abdulai 2025), and one on the potential dangers of deepfake victimization by impersonation or identity theft, especially in the context of deepfake IBSA (Rini and Cohen 2022).
Martinez et al. (2024) conducted qualitative interviews with 50 nursing students on the risks and benefits of deepfake videos in healthcare. Core risks were divided into the themes of information hoaxes (e.g., disinformation, manipulation of public opinion, defamation) and cyberbullying and other legal risks (e.g., cyberattacks, identity theft, loss of privacy), which could also lead to health risks through wrong information (e.g., deepfake-based disinformation about COVID-19).
Lundberg et al. (2025) investigated potential threats of deepfakes for news and entertainment using cross-sectional qualitative analyses of attitudes on online websites, together with qualitative interviews of experts. Deepfakes may induce fear and confusion about non-existent threats and erode trust in news as a whole. Children in particular may be vulnerable to deepfake deception and its consequences.
Mustak et al. (2023) conducted a review on potential risks of deepfakes in the marketplace by referring to 75 publications. As potential threats to consumers, the authors note that deepfakes can increase uncertainty in the marketplace, mislead customers and cause general distrust and psychological discomfort, and also mention marketplace deceptions through deepfakes that go beyond firm-customer interactions, e.g., employment prospects being harmed through inappropriate search results.
Roe et al. (2024) conducted a review of the risks of deepfake misuse in education, covering 182 publications, and noted disinformation, impersonation, defamation, and deepfake IBSA as potential risks.
Bakir et al. (2024) conducted qualitative expert interviews on the risks of emotional manipulation by AI, including deepfakes. Experts warn against emotional profiling in social media and AI-based exploitation of weakness and vulnerabilities, especially in children and other vulnerable groups.
Abdulai (2025) conducted a review on the risks of generative AI (including deepfakes) on vulnerable populations. Deepfakes could be used to elicit or trigger trauma, or to spread targeted misinformation about vulnerable populations. They pointed out that due to the invisibility of vulnerable minority groups in AI training sets and algorithms, these risks are amplified.
Finally, Rini and Cohen (2022) argued in an essay that deepfake-based defamation or IBSA may lead to objectification, illocutionary harm (i.e., harm caused by being forced to engage with the deepfake content), false memory, identity loss, gaslighting, and manipulation.
3.8 Trust
Finally, four papers focused on the negative impact of deepfakes on general trust in media and media platforms (Ahmed 2023; Ternovski et al. 2022; Vaccari and Chadwick 2020; Weikmann et al. 2025). All four papers focused on a reduction of media credibility through deepfakes.
Ahmed (2023) conducted a survey on concerns about deepfakes and their relation to social media news skepticism with 764 (1,244 unfiltered) US participants. Participants with higher concerns about deepfakes and previous deepfake exposure expressed higher skepticism towards news on social media platforms.
Ternovski et al. (2022) showed that warning participants about deepfakes does not lead to a better discernment of real and fake content, but instead decreases news credibility in general. Similarly, Vaccari and Chadwick (2020) found that uncertainty resulting from viewing deepfake videos negatively impacts trust in news on social media platforms in general. Finally, Weikmann et al. (2025) found that revealing a deepfake after deception decreased the credibility of media regardless of the medium.
4 Discussion
Potential threats and harms of deepfakes are versatile in application and consequence. The present scoping review showcases different uses of deepfake content (general social media, news, pornography, financial fraud) and how they may negatively affect humans at an affective (distress, anxiety, concerns), behavioral (sharing deepfake news, acts of social isolation, investments), or cognitive (attitude shifts, reduced media credibility, false memories) level. Hypothetical threat scenarios were reported for a variety of contexts such as healthcare, education, the marketplace, and defamation.
Concerns about deepfakes are widespread across the population (Sippy et al. 2024). Here, concerns about deepfakes were associated with stronger beliefs in platform accountability and greater skepticism towards news.
Concerns about deepfakes may be at least partially driven by news media coverage on deepfakes, typically focusing on forms of misuse such as fake news and pornography (Gosse and Burkell 2020; Sunvy et al. 2023; Tang, Yin, and Goh 2023). In fact, threats and harms of deepfakes explored by news are often hypothetical worst-case scenarios, which may worsen public concerns (Wahl-Jorgensen and Carlson 2021). In a scoping review on deepfake manipulation, Ching et al. (2025) warn against fear-mongering with hypothetical harms of deepfakes, with little empirical evidence suggesting a superiority of deepfake-based manipulation over non-deepfake variants. Hence, rising concerns about deepfakes may be the result of media outlets focusing on hypothetical threats rather than reports of genuine deepfake misuse.
Four studies found a decrease in media credibility due to deepfakes (Ahmed 2023; Ternovski et al. 2022; Vaccari and Chadwick 2020; Weikmann et al. 2025). Media skepticism was associated with higher concerns about deepfakes (Ahmed 2023) and increased after exposure to deepfakes (Vaccari and Chadwick 2020; Weikmann et al. 2025), potentially because a higher awareness of deepfakes’ deceptive qualities leads to an increase in general distrust. Ternovski et al. (2022) found that raising awareness about deepfakes led to an increase in general distrust instead of better deepfake discernment. The latter may be related to the Continued Influence Effect (CIE), the observation that misinformation tends to affect reasoning even after it has been shown to be misinformation (Desai et al. 2020). Consistent with Ternovski et al.’s (2022) findings, disclaimers have been shown to be ineffective at reducing the effects of false or harmful content in social media (Weber et al. 2022). Increasing awareness of deepfakes may instead increase anxiety and concerns about deepfakes, reducing general media credibility. Furthermore, revealing the nature of deepfakes decreases self-efficacy and media credibility while increasing distress and anxiety about AI (Diel et al. 2024b; Weikmann et al. 2025), indicating that being confronted with the deceptive nature of deepfakes and one’s own inability to adequately discern deepfake material may also raise AI-related anxiety and distrust in media.
Several studies focused on the consequences of deepfake exposure, noting the risk of false memories or attitude shifts caused by the content of deepfakes. Two studies found that deepfakes elicit false memories, yet not more so than non-deepfake false information (Murphy and Flynn 2022; Murphy et al. 2023). Meanwhile, deepfakes may successfully shift attitudes, especially when combined with microtargeting strategies (Dobber et al. 2021; Hughes et al. 2021). However, as noted by Ching et al. (2025), the majority of empirical research on deepfake manipulation does not use proper non-deepfake controls (e.g., a text-based manipulation compared to a deepfake-based manipulation); hence, despite their general effectiveness, there is little evidence of unique threats of deepfakes compared to other forms of media manipulation. Nevertheless, even if deepfakes provide no unique advantage or effectiveness, their wide and easy accessibility makes them a low-threshold tool to create and spread manipulative content.
Another danger of deepfake-based content in social media is the human proclivity to share such content, spreading potential negative effects such as misinformation or attitude changes. Several studies investigated the role of third variables (inter-individual differences and deepfake modality) in the intention to share a deepfake on social media. Predictors were deepfake quality, cognitive ability, fear of missing out, self-regulation, and social media use. However, media literacy training could decrease sharing intention (Hwang et al. 2021), indicating potential countermeasures to deepfake spread in social media.
Deepfake pornography has been a prominent form of misuse since the term deepfake gained notoriety. Two studies investigated deepfake misuse as IBSA using survey-based and qualitative interview designs (Flynn et al. 2022; Rousay 2023). Both studies reveal that victims of deepfake IBSA report a variety of mental health symptoms, including negative feelings (hopelessness, shame, guilt, depression, distress), psychosomatic symptoms (vomiting, high blood pressure, loss of appetite or weight gain), disturbances in everyday life (inability to work or to leave home), distrust, and avoidance (deleting social media). Notably, subjects also reported intrusive memories in the form of nightmares and, interestingly, of the sexual deepfake video material itself. These symptoms reflect those of posttraumatic stress disorder (PTSD), a debilitating mental disorder marked by symptoms of intrusion, hyperarousal, and avoidance following a traumatic event. It is noteworthy that one subject experienced intrusive memories of the sexual deepfake videos as if the events had happened in person; this may highlight a unique danger of deepfake IBSA, in that it may elicit intrusive memories of an event that never happened via identification with the individuals depicted in the videos. However, whether and to what degree victims’ symptoms reflect those of PTSD or other trauma-related disorders is yet to be systematically investigated.
This review presents a general overview of the research on harms of deepfakes to the human mind and behavior, showing that deepfakes are associated with widespread concerns, negative consequences of successful deception, and distress caused by targeted victimization. These societal threats have contributed to legislation outlawing certain cases of deepfake abuse and mandating the flagging of AI content (European Parliament 2025), supporting the emergence of an “ethics by design” concept for generative AI. However, given that both humans and AI classifiers are subpar at identifying AI-generated content (Diel et al. 2024a, b; Sharma et al. 2025), the enforcement of deepfake disclosure is challenged by difficulties in proving whether media material is authentic or synthetic. Beyond legislation, the EU Trustworthy AI Principles have been developed as a guideline on the use of AI, including points such as transparency of AI use, privacy, fairness, and accountability. Similarly, and akin to a “privacy by design” concept, the General Data Protection Regulation (GDPR) further ensures consumers’ control over their private data as well as organizations’ accountability for data handling (European Parliament 2016).
Generative AI is capable of inferring personal and personality information of its users (Hu et al. 2024). As generative AI can be used in conjunction with microtargeting strategies to increase the efficiency of manipulation via personalized advertisement (Dobber et al. 2021; Simchon et al. 2024), the protection of data which can be used to infer personal information becomes of high importance. Social media platforms’ recommendation systems can further be used or exploited to increase the spread of AI-generated propaganda or harmful content (Bösch and Divon 2024; Hobbs 2020). Although the GDPR ensures enhanced control over personal data, an analysis of a reform draft also reveals that data which cannot be used to identify an individual will no longer be considered personal data, even if said data can be used to statistically infer personal information (Noyb 2025). As third parties can use unprotected non-personal data to infer otherwise protected personal information (Kosinski et al. 2013), such a de-classification would endanger the protection of personal data. Microtargeting strategies and recommendation systems may then use this information to target individuals with manipulative AI-generated content tailored to individuals’ or groups’ vulnerabilities (Kosinski 2025). Hence, deepfakes may spread easily via recommendation systems and be tailored specifically to the statistical inferences of social media users’ or cohorts’ personal data despite privacy regulation laws, with hurdles limiting stricter regulations such as transparency, e.g., due to difficulties proving whether content is real or AI-generated.
Of the 28 papers included in this review, 7 focused on hypothetical harms in the form of discussions, reviews, or expert interviews. Five more papers used survey-type methodology, and three utilized qualitative interviews. Finally, only 13 of the 28 papers provided experimental evidence. Hence, experimental research on the harms of deepfakes is still relatively sparse, and nonexistent in certain relevant areas (e.g., harms caused by targeted victimization).
Certain limitations have to be considered for this study. First, given that the goal of a scoping review is to provide an overview of a research field without critical appraisal of the studies’ quality (Tricco et al. 2018), the methodological quality of the included studies has not been evaluated. Although certain information has been categorized that can be used to infer methodological quality, such as sample size or type of evidence (e.g., empirical correlational, experimental, theoretical), the studies themselves have not been judged accordingly, and the summary of results does not take methodological quality into account. Second, there is the possibility that studies have conducted research on deepfakes without using the included search terms and would therefore have been missed in the literature search. Hence, there is a risk that the selected literature is not exhaustive. Third, while a wide range of terms related to harm was used in the literature search, it is possible that some studies used other terms to describe relevant information, which could likewise have been missed. Hence, certain types of harms may have been missed during the literature search.
5 Conclusion
The current scoping review presents an overview of the work on potential and actual harms of deepfakes to the human mind and behavior. Different use cases of deepfakes are identified (e.g., news, social media, pornography, education, healthcare), each with a variety of hypothetically discussed or empirically investigated dangers to humans. Further research is needed (1) to empirically investigate potential or hypothetical harms, (2) to properly control for different conditions to further validate observed harms or dangers, and (3) to systematically measure actual harms using reliable methods, in order to gain a deeper and more accurate understanding of the threats of deepfakes for humans.
Declarations
Conflict of interest
The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Search string used for the literature search. The topics (stimulus type, misuse, negative effect) were combined with AND statements
Stimulus type: ("ai generated content" OR "ai generated media" OR "ai-generated content" OR "ai-generated media" OR "deep fake" OR "deep fakes" OR "deepfake" OR "deepfakes" OR "face swap" OR "face swaps" OR "face-swap" OR "face-swaps" OR "synthetic content" OR "synthetic image" OR "synthetic images" OR "synthetic media" OR "synthetic picture" OR "synthetic pictures") AND
Misuse: ("abuse" OR "abused" OR "abuses" OR "disuse" OR "disused" OR "disuses" OR "exposed" OR "exposes" OR "exposing" OR "exposure" OR "manipulated" OR "manipulates" OR "manipulating" OR "manipulation" OR "misuse" OR "misuses" OR "misusing" OR "porn" OR "porno" OR "pornographic" OR "pornography" OR "pornos" OR "sexual" OR "sexually" OR "threat" OR "threatening" OR "threats") AND
Negative effect: ("anxiety" OR "anxious" OR "depressed" OR "depressing" OR "depression" OR "distress" OR "distressed" OR "distressing" OR "distrust" OR "distrusted" OR "distrusting" OR "emotional response" OR "emotional state" OR "exploitation" OR "exploited" OR "exploiting" OR "harassed" OR "harassing" OR "harassment" OR "harm" OR "harmed" OR "harmful" OR "harmfully" OR "harming" OR "mistrust" OR "mistrusted" OR "mistrusting" OR "non ethical" OR "non-ethical" OR "stress" OR "stressed" OR "stressing" OR "survival" OR "survive" OR "survived" OR "survives" OR "surviving" OR "survivor" OR "survivors" OR "trauma" OR "traumatic" OR "traumatised" OR "traumatising" OR "traumatized" OR "traumatizing" OR "trust" OR "trusted" OR "trusting" OR "unethical" OR "victim" OR "victimization" OR "victims" OR "victim-survivor" OR "victim-survivors" OR "violence" OR "violent" OR "well being" OR "well-being")
Abdulai AF (2025) Is generative AI increasing the risk for technology-mediated trauma among vulnerable populations? Nurs Inq 32(1):e12686CrossRef
Agarwal S, Farid H, Gu Y, He M, Nagano K, Li H (2019) Protecting world leaders against deep fakes. In CVPR workshops (Vol. 1, No. 38).
Ahmed S (2021) Fooled by the fakes: cognitive differences in perceived claim accuracy and sharing intention of non-political deepfakes. Pers Individ Differ 182:111074CrossRef
Ahmed S (2023) Navigating the maze: deepfakes, cognitive ability, and social media news skepticism. New Media Soc 25(5):1108–1129CrossRef
Ahmed S, Ng SWT, Bee AWT (2023) Understanding the role of fear of missing out and deficient self-regulation in sharing of deepfakes on social media: evidence from eight countries. Front Psychol 14:1127507CrossRef
Ajder H, Patrini G, Cavalli F, Cullen L (2019) The state of deepfakes: landscape, threats, and impact. Deeptrace, Amsterdam, p 27
Al Waro MNAL (2024) False reality: deepfakes in terrorist propaganda and recruitment. Security Intell Terrorism J (SITJ) 1(1):41–59CrossRef
Albahar M, Almalki J (2019) Deepfakes: threats and countermeasures systematic review. J Theor Appl Inf Technol 97(22):3242–3250
Appel M, Prietzel F (2022) The detection of political deepfakes. J Comput-Mediat Commun 27(4):zmac008CrossRef
Bakir V, Laffer A, McStay A, Miranda D, Urquhart L (2024) On manipulation by emotional AI: UK adults’ views and governance implications. Front Sociol 9:1339834CrossRef
Barari S, Lucas C, Munger K (2021) Political deepfake videos misinform the public, but no more than other fake media. OSF Preprints 13:1–16
Barari S, Lucas C, Munger K (2025) Political deepfakes are as credible as other fake media and (sometimes) real media. J Polit 87(2):510–526CrossRef
Bateman J (2022) Deepfakes and synthetic media in the financial system: Assessing threat scenarios. Carnegie Endowment for International Peace.
Blesik T, Murawski M, Vurucu M, Bick M (2018). Applying big data analytics to psychometric micro-targeting. Machine Learning for Big Data Analysis, 1–30.
Bösch M, Divon T (2024) The sound of disinformation: TikTok, computational propaganda, and the invasion of Ukraine. New Media Soc 26(9):5081–5106CrossRef
Botha J, Pieterse H (2020) Fake news and deepfakes: A dangerous threat for 21st century information security. In: ICCWS 2020 15th International Conference on Cyber Warfare and Security. Academic Conferences and publishing limited (p. 57).
Campbell C, Plangger K, Sands S, Kietzmann J (2022) Preparing for an era of deepfakes and AI-generated ads: a framework for understanding responses to manipulated advertising. J Advert 51(1):22–38CrossRef
Chadha A, Kumar V, Kashyap S, Gupta M (2021) Deepfake: an overview. In: Proceedings of second international conference on computing, communications, and cyber-security: IC4S 2020 (pp. 557–566). Singapore: Springer Singapore.
Chen Y (2024) Exploring the effect of deepfake-based advertising on consumers’ attitudes and behaviors (Vol. 1). Working Paper Series WP 2024.
Ching D, Twomey J, Aylett MP, Quayle M, Linehan C, Murphy G (2025) Can deepfakes manipulate us? Assessing the evidence via a critical scoping review. PLoS ONE 20(5):e0320124CrossRef
Chouaki S, Bouzenia I, Goga O, Roussillon B (2022) Exploring the online micro-targeting practices of small, medium, and large businesses. Proc ACM Hum Comput Interact 6(CSCW2):1–23CrossRef
Cochran JD, Napshin SA (2021) Deepfakes: awareness, concerns, and platform accountability. Cyberpsychol Behav Soc Netw 24(3):164–172CrossRef
Creswell A, White T, Dumoulin V, Arulkumaran K, Sengupta B, Bharath AA (2018) Generative adversarial networks: an overview. IEEE Signal Process Mag 35(1):53–65CrossRef
Dack S (2019) Deep fakes, fake news, and what comes next. The Henry M. Jackson School of International Studies.
Desai SAC, Pilditch TD, Madsen JK (2020) The rational continued influence of misinformation. Cognition 205:104453CrossRef
Diel A, Lalgi T, Schröter IC, MacDorman KF, Teufel M, Bäuerle A (2024a) Human performance in detecting deepfakes: a systematic review and meta-analysis of 56 papers. Comput Human Behav Rep 16:100538CrossRef
Diel A, Bäuerle A, Teufel M (2024) Inability to detect deepfakes: Deepfake detection training improves detection accuracy, but increases emotional distress and reduces self-efficacy. But Increases Emotional Distress and Reduces Self-Efficacy.
Dobber T, Metoui N, Trilling D, Helberger N, De Vreese C (2021) Do (microtargeted) deepfakes have real effects on political attitudes? Int J Press/politics 26(1):69–91CrossRef
Emett SA, Eulerich M, Pickerd JS, Wood DA (2024) Short and synthetically distort: investor reactions to deepfake financial news. Available at SSRN 4869830.
Fagni T, Falchi F, Gambini M, Martella A, Tesconi M (2021) TweepFake: about detecting deepfake tweets. PLoS ONE 16(5):e0251415
Farid H (2022) Creating, using, misusing, and detecting deep fakes. J Online Trust Saf 1(4)
Fathaigh R, Dobber T, Zuiderveen Borgesius F, Shires J (2021) Microtargeted propaganda by foreign actors: an interdisciplinary exploration. Maastricht J Eur Comp Law 28(6):856–877
Feeney M (2021) Deepfake laws risk creating more problems than they solve. Regulatory Transparency Project.
Fido D, Rao J, Harper CA (2022) Celebrity status, sex, and variation in psychopathy predicts judgements of and proclivity to generate and distribute deepfake pornography. Comput Human Behav 129:107141
Fink GA (2019) Adversarial artificial intelligence. J Inf Warf 18(4):1–23
Flynn A, Powell A, Scott AJ, Cama E (2022) Deepfakes and digitally altered imagery abuse: a cross-country exploration of an emerging form of image-based sexual abuse. Br J Criminol 62(6):1341–1358
Gosse C, Burkell J (2020) Politics and porn: how news media characterizes problems presented by deepfakes. Crit Stud Media Commun 37(5):497–511
Gupta S, Kaushik A, Kaur P (2025) Deepfake overview: generation, detection, risks and opportunities (March 21, 2025)
Hamed SK, Ab Aziz MJ, Yaakub MR (2023) A review of fake news detection approaches: a critical analysis of relevant studies and highlighting key challenges associated with the dataset, feature representation, and data fusion. Heliyon. https://doi.org/10.1016/j.heliyon.2023.e20382
Hameleers M (2022) Separating truth from lies: comparing the effects of news media literacy interventions and fact-checkers in response to political misinformation in the US and Netherlands. Inf Commun Soc 25(1):110–126
Harris KR (2021) Video on demand: what deepfakes do and how they harm. Synthese 199(5):13373–13391
Hobbs R (2020) Propaganda in an age of algorithmic personalization: expanding literacy research and practice. Read Res Q 55(3):521–533
Hu L, He H, Wang D, Zhao Z, Shao Y, Nie L (2024) LLM vs small model? Large language model based text augmentation enhanced personality detection model. In: Proceedings of the AAAI Conference on Artificial Intelligence, Vol 38, No 16, pp 18234–18242
Hughes S, Fried O, Ferguson M, Hughes C, Hughes R, Yao XD, Hussey I (2021) Deepfaked online content is highly effective in manipulating people’s attitudes and intentions (No. FERMILAB-PUB-21-182-T). Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
Hwang Y, Ryu JY, Jeong SH (2021) Effects of disinformation using deepfake: the protective effect of media literacy education. Cyberpsychol Behav Soc Netw 24(3):188–193
Ibrahim H, Liu F, Asim R, Battu B, Benabderrahmane S, Alhafni B, Adnan W, Alhanai T, AlShebli B, Baghdadi R, Bélanger JJ, Beretta E, Celik K, Chaqfeh M, Daqaq MF, Bernoussi ZE, Fougnie D, de Soto BG, Gandolfi A, Gyorgy A, Zaki Y (2023) Author correction: perception, performance, and detectability of conversational artificial intelligence across 32 university courses. Sci Rep 13(1):17101. https://doi.org/10.1038/s41598-023-43998-8
Karasavva V, Noorbhai A (2021) The real threat of deepfake pornography: a review of Canadian policy. Cyberpsychol Behav Soc Netw 24(3):203–209
Katanich D (2024) It’s a scam! How deepfakes and voice cloning taps into your cash. EuroNews. Retrieved August 2025 from https://www.euronews.com/business/2024/04/10/its-a-scam-how-deepfakes-and-voicecloning-taps-into-your-cash
Khanjani Z, Watson G, Janeja VP (2023) Audio deepfakes: a survey. Front Big Data 5:1001063
Kietzmann J, Mills AJ, Plangger K (2021) Deepfakes: perspectives on the future “reality” of advertising and branding. Int J Advert 40(3):473–485
Kosinski M (2024) Using big data. In: Handbook of social psychology
Kosinski M, Stillwell D, Graepel T (2013) Private traits and attributes are predictable from digital records of human behavior. Proc Natl Acad Sci 110(15):5802–5805
Kugler MB, Pace C (2021) Deepfake privacy: attitudes and regulation. Nw UL Rev 116:611
Lundberg E, Mozelius P (2025) The potential effects of deepfakes on news media and entertainment. AI & Soc 40(4):2159–2170
Lyu S (2020) Deepfake detection: Current challenges and next steps. arXiv preprint arXiv:2003.09234.
Mania K (2024) Legal protection of revenge and deepfake porn victims in the European Union: findings from a comparative legal study. Trauma Violence Abuse 25(1):117–129. https://doi.org/10.1177/15248380221143772
Martínez ON, Fernández-García D, Cuartero Monteagudo N, Forero-Rincón O (2024) Possible health benefits and risks of deepfake videos: a qualitative study in nursing students. Nurs Rep 14(4):2746–2757
McGlynn C, Toparlak RT (2025) The ‘new voyeurism’: criminalizing the creation of ‘deepfake porn.’ J Law Soc 52(2):204–228
Murphy G, Ching D, Twomey J, Linehan C (2023) Face/Off: changing the face of movies with deepfakes. PLoS ONE 18(7):e0287503
Murphy G, Flynn E (2022) Deepfake false memories. In: Memory online. Routledge, pp 112–124
Mustak M, Salminen J, Mäntymäki M, Rahman A, Dwivedi YK (2023) Deepfakes: deceptions, mitigations, and opportunities. J Bus Res 154:113368
Myers ME (2021) Propaganda, fake news, and deepfaking. In: Understanding media psychology. Routledge, pp 161–181
Napshin S, Paul J, Cochran J (2024) Individual responsibility around deepfakes: it’s no laughing matter. Cyberpsychol Behav Soc Netw 27(2):105–110
Nasiri S, Hashemzadeh A (2025) The evolution of disinformation from fake news propaganda to AI-driven narratives as deepfake. J Cyberspace Stud 9(1):229–250
European Parliament (2024) Directive (EU) 2024/1385 of the European Parliament and of the Council of 14 May 2024 on combating violence against women and domestic violence. Retrieved November 2025 from https://eur-lex.europa.eu/eli/dir/2024/1385/oj/eng
Popova M (2020) Deepfakes: an introduction. Porn Stud 7(4):350–351
Rigotti C, McGlynn C, Benning F (2024) Image-based sexual abuse and EU law: a critical analysis. German Law J 25(9):1472–1493
Rini R, Cohen L (2022) Deepfakes, deep harms. J Ethics Soc Phil 22:143
Roe J, Perkins M, Furze L (2024) Deepfakes and higher education: a research agenda and scoping review of synthetic media. J Univ Teach Learn Pract 21(10):1–22
Rousay V (2023) Sexual deepfakes and image-based sexual abuse: Victim-survivor experiences and embodied harms (Master's thesis, Harvard University).
Sharma VK, Garg R, Caudron Q (2025) A systematic literature review on deepfake detection techniques. Multimedia Tools Appl 84(20):22187–22229
Shetye A, Jain N, Borade S, Kumar V (2025) Deepfake technologies in financial fraud: generation, detection, and mitigation strategies. In: 2025 Global Conference in Emerging Technology (GINOTECH). IEEE, pp 1–6
Shin SY, Lee J (2022) The effect of deepfake video on news credibility and corrective influence of cost-based knowledge about deepfakes. Digit J 10(3):412–432
Simchon A, Edwards M, Lewandowsky S (2024) The persuasive effects of political microtargeting in the age of generative artificial intelligence. PNAS Nexus 3(2):pgae035
Sippy T, Enock F, Bright J, Margetts HZ (2024) Behind the Deepfake: 8% Create; 90% Concerned. Surveying public exposure to and perceptions of deepfakes in the UK. arXiv preprint arXiv:2407.05529.
Somoray K, Miller DJ, Holmes M (2025) Human performance in deepfake detection: a systematic review. Hum Behav Emerg Technol 2025(1):1833228
Songja R, Promboot I, Haetanurak B, Kerdvibulvech C (2024) Deepfake AI images: should deepfakes be banned in Thailand? AI Ethics 4(4):1519–1531
Sophia LI (2025) The social harms of AI-generated fake news: addressing deepfake and AI political manipulation. Dig Soc Virtual Governance 1(1):72–88
Sunvy AS, Reza RB, Al Imran A (2023) Media coverage of deepfake disinformation: an analysis of three South-Asian countries. Informasi 53(2):295–308
Tang Z, Yin SX, Goh DHL (2023) Understanding major topics and attitudes toward deepfakes: an analysis of news articles. In: International Conference on Human-Computer Interaction. Springer Nature Switzerland, Cham, pp 337–355
Ternovski J, Kalla J, Aronow P (2022) The negative consequences of informing voters about deepfakes: evidence from two survey experiments. J Online Trust Saf 1(2)
Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, Straus SE (2018) PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 169(7):467–473
Trifonova P, Venkatagiri S (2024) Misinformation, fraud, and stereotyping: towards a typology of harm caused by deepfakes. In: Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, pp 533–538
Vaccari C, Chadwick A (2020) Deepfakes and disinformation: exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Soc Media Soc 6(1):2056305120903408
Wahl-Jorgensen K, Carlson M (2021) Conjecturing fearful futures: journalistic discourses on deepfakes. Journalism Pract 15(6):803–820
Weber S, Messingschlager T, Stein JP (2022) This is an Insta-vention! Exploring cognitive countermeasures to reduce negative consequences of social comparisons on Instagram. Media Psychol 25(3):411–440
Weikmann T, Greber H, Nikolaou A (2025) After deception: how falling for a deepfake affects the way we see, hear, and experience media. Int J Press/Politics 30(1):187–210