From moral panic to pragmatic governance: reframing AI’s societal impacts in employment, education, and ethics

  • Open Access
  • 16-02-2026
  • Open Forum


Abstract

This article examines the societal impacts of artificial intelligence, focusing on employment, education, and ethics. It challenges the narrative of moral panic surrounding AI, advocating a pragmatic governance approach that addresses real-world issues. It explores how AI influences job markets, educational outcomes, and ethical considerations, offering actionable solutions for each domain, and introduces the Neo-Triple Helix governance framework, which coordinates roles for government, industry, and universities to ensure credible and democratic oversight. The article also provides a historical perspective on AI's evolution, highlighting the need for evidence-based governance to manage AI's societal impacts effectively. By reframing those impacts, the article aims to guide decision-makers toward practical, measurable solutions.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Public debates about artificial intelligence (AI) oscillate between alarm and reassurance. This pattern aligns with classic analyses of moral panic—episodes of amplified public anxiety in which new technologies are cast as threats to social order through cycles of claims-making, media escalation, and calls for control (Cohen 2011; Critcher 2003; Hier 2008; Goode and Ben-Yehuda 1994). Digital technologies have repeatedly been processed through this repertoire, from videogames to social media (Dubèl et al. 2025; Marwick and Lewis 2017). Yet contemporary AI discourse reflects a distinctive combination of long-horizon uncertainties and evidence-based concerns about bias, opacity, and governance failures. These dual dynamics—panic signals and tractable problems—form the starting point for this paper.
Recent media coverage intensifies this oscillation between alarm and assurance. Headlines warning that “AI Is Taking Over the World” (The Guardian 2020), predicting that machines “Could Destroy Jobs” (The Times 2021), or framing AI as an “extinction-level threat” (CNN 2024) help constitute a discursive environment saturated with existential and socio-economic anxiety. The sociology of expectations shows how such promissory and catastrophic narratives co-produce hype cycles that shape investment priorities and policy agendas well before technical capacities are settled (Borup et al. 2006; Brown and Michael 2003). Communication research further demonstrates how “strong AI” imaginaries—sentience, superintelligence—amplify utopian and apocalyptic projections, while “weak AI” imaginaries sustain more grounded debates about capabilities, limits, and social embedding (Bory et al. 2025). These framings matter: positioning AI as an existential agent pulls attention toward speculative futures, whereas framing it as an infrastructural tool redirects scrutiny toward the actionable, present-day mechanisms through which risks and responsibilities are produced (Chubb et al. 2024).
Against this backdrop, we adopt a pragmatist risk-governance orientation. Rather than casting AI as either existential threat or technological salvation, this approach centers on observable failure surfaces—statistical bias, data lineage and provenance gaps, non-reproducible evaluation, unclear responsibility chains, and assurance dependence—and links each to concrete mechanisms of remediation. This stance aligns with risk-society accounts that highlight institutional reflexivity and “organised irresponsibility” in technologically complex systems (Beck 1992), with regulatory-science work showing how evidence infrastructures render contested technologies governable (Jasanoff 2003), and with analyses of the audit society, where checks lose meaning unless tied to performance evidence (Power 1997). In AI, a substantial toolset now translates principles into verifiable controls: model cards and datasheets (Gebru et al. 2021; Mitchell et al. 2019), algorithmic impact assessments and third-party audits (Raji et al. 2020), risk-management frameworks (NIST 2023), management-system standards (ISO/IEC 42001:2023), and risk-based regulation (EU AI Act 2024). While imperfect, these instruments anchor concerns in testable claims, measurable thresholds, and documentary trails. A pragmatic orientation, therefore, treats AI’s risks as identifiable, diagnosable, and governable, providing the foundation for the governance architecture developed later in the paper.
Although some fears—such as AI triggering human extinction—are clearly exaggerated, they draw attention to the wider societal and ethical stakes of rapid, poorly governed technological change (Bentley and Evans 2025; Hirsch-Kreinsen 2024; Tegmark 2017). Such anxieties echo long-standing patterns in which innovation is framed as both threat and opportunity. The crucial distinction, however, lies in whether concerns are channeled into actionable, evidence-based analysis or amplified into speculative, apocalyptic narratives. A pragmatic orientation focuses on the former: by articulating risks in tractable, empirically grounded terms rather than through moral-panic framings, it becomes possible to design targeted governance measures and policies capable of addressing real failure points.
In this article, moral panic refers to a discourse pattern characterized by disproportionate threat construction and volatile amplification, whereas pragmatism denotes a program for measurable risk remediation (see Table 1). Building on this distinction, we map widely voiced worries in employment, education, and ethics—three domains where AI anxieties most often crystallize—onto two elements: (a) their relevant empirical evidence base and (b) the governance controls capable of materially improving outcomes. This move responds to the tendency for recent practice-oriented literatures to remain under-recognized in broader public debate. We then develop a collaborative governance architecture grounded in the Neo-Triple Helix, coordinating universities, industry, government, standards bodies, and civil society to translate ethical principles into auditable practice while addressing power asymmetries and risks of assurance capture (Cai 2022; DeNardis 2014; Raji et al. 2020; Srnicek 2017).
Table 1
Differentiating moral panic and pragmatism in AI discourse

Definition
- Framing AI as a moral panic: an exaggerated media and societal reaction characterized by fear and speculative narratives about the implications of AI
- Framing AI as a challenge: identified, evidence-based issues arising from the implementation and integration of AI systems

Focus
- Framing AI as a moral panic: speculative fears and uncertainties, often amplified by sensational media discourse
- Framing AI as a challenge: observable shortcomings in AI systems and their societal impacts

Examples of concerns
- Framing AI as a moral panic (problem-oriented only):
  - Employment: automation and job displacement (e.g., autonomous vehicles replacing drivers, automated checkout systems replacing cashiers); unequal opportunities as AI creates high-skill jobs but leaves behind low-skill workers without reskilling opportunities
  - Education: inequitable access to AI-powered educational tools (e.g., Coursera, Duolingo) widening the digital divide; bias in predictive analytics disadvantaging students from underrepresented backgrounds due to biased training data
  - Ethics: algorithmic bias replicating existing inequalities (e.g., facial recognition misidentifying individuals based on race); lack of transparency and accountability in decision-making (e.g., black-box systems in autonomous vehicle errors); privacy and surveillance risks (e.g., facial recognition used for mass surveillance affecting marginalized communities)
- Framing AI as a challenge (recognizing problems with a solution-oriented approach):
  - Employment: reskilling workers to adapt to AI-driven job demands (e.g., training programs for transitioning from routine jobs to tech roles); ensuring equitable access to high-quality job opportunities created by AI advancements
  - Education: closing the digital divide so that underserved regions benefit from AI-powered tools (e.g., adaptive learning); developing fairness-aware machine learning to prevent biases in educational analytics
  - Ethics: implementing fairness-aware algorithms to mitigate bias during model development; enhancing explainability in AI systems to ensure transparency and accountability; strengthening regulations to prevent data misuse and uphold privacy rights

Framing
- Framing AI as a moral panic: dramatic and often lacking grounding in evidence-based risks
- Framing AI as a challenge: rooted in evidence and framed to guide actionable, practical solutions

Outcomes
- Framing AI as a moral panic: heightened fear, resistance to AI adoption, and misinformed public debates
- Framing AI as a challenge: informed discussions, targeted policy interventions, and balanced AI adoption

Approach to solutions
- Framing AI as a moral panic: focus on halting or heavily regulating AI to prevent perceived risks
- Framing AI as a challenge: focus on addressing specific, identified issues through regulation, innovation, and collaboration

Role of media
- Framing AI as a moral panic: amplifies fears and uncertainties through sensational headlines and speculative reporting
- Framing AI as a challenge: highlights real issues while promoting informed and balanced discourse

Governance and policy
- Framing AI as a moral panic: distrust in stakeholders (government, academia, industry) and in their ability to keep pace with AI innovation
- Framing AI as a challenge: proactive regulatory measures, such as UNESCO's AI Ethics Guidelines, foster responsible innovation (Floridi and Cowls 2019); the Quadruple Helix Model strengthens governance by integrating academic research (academia), policy creation (government), technological advances (industry), and public needs (society)

Impact on public perception
- Framing AI as a moral panic: may lead to fear-driven skepticism, hindering constructive engagement with AI
- Framing AI as a challenge: encourages constructive dialog and proactive solutions to maximize AI's benefits
The article proceeds in four steps. First, it traces AI’s evolution into an upstream infrastructure of models, data, and compute, showing why accountability now hinges on lineage, reproducible evaluation, and provenance. Second, it examines social consequences across employment, education, and ethics, mapping widely voiced concerns onto the empirical evidence and the governance levers that can address them. Third, it develops a Neo-Triple Helix coordination framework—linking universities, industry, government, standards bodies, and civil society through participation mechanisms, assurance guarantees, and procurement levers—while specifying the scope conditions under which it can operate. Finally, it outlines a measurement agenda to support cross-site evaluation and concludes by reframing moral panic as an impetus for actionable, portable governance capable of supporting accountable AI development.

2 From moral panic to pragmatic uptake: a brief sociological history of AI

Public debate often treats AI’s sudden visibility—first with deep learning breakthroughs after 2012 and then the rapid diffusion of large-scale generative models from 2022 onward—as if it were an entirely new phenomenon. However, AI is a long-running project with distinct technical phases and institutional arrangements, each with different loci of authority, characteristic failure surfaces, and governance needs (McCorduck 2004; Nilsson 2010; Russell and Norvig 2021). Reconstructing this history is central to a pragmatic stance: it anchors claims about risk and benefit in how AI systems have actually been built and embedded over time, rather than in moral panic headlines.
The mid-twentieth-century imaginary of “thinking machines” mixed public fascination with apprehension about autonomy and control. Turing’s operational proposal for studying machine intelligence (Turing 1950) prompted both enthusiasm and skepticism; critics questioned whether symbolic manipulation could scale to robust intelligence (Dreyfus 1972). Pragmatically, early systems showed narrow but real value: ELIZA’s pattern matching (Weizenbaum 1966), SHRDLU’s language-guided manipulation (Winograd 1972), and Shakey’s integrated perception and action (Nilsson 2010). The moral-panic repertoire (fear of dehumanization, loss of control) traveled alongside practical progress in constrained domains—an early instance of what the sociology of expectations calls the co-production of hype and skepticism (Borup et al. 2006; Brown and Michael 2003). The 1970s–1980s expert-systems wave vested epistemic authority in hand-crafted rules and knowledge engineering. MYCIN and XCON demonstrated concrete benefits in well-bounded settings (McDermott 1982; Shortliffe 1976), even as brittleness and maintenance costs limited generalization (Russell and Norvig 2021). Panics about machines replacing human judgment were countered pragmatically by domain scoping, human-in-the-loop protocols, and liability rules tailored to decision support rather than autonomy—a pattern that still informs safety–critical deployments.
From the 1990s, statistical learning and increased compute/data yielded milestones—Deep Blue’s chess victory (Campbell et al. 2002), and later deep learning leaps in vision and games (Krizhevsky et al. 2012; Silver et al. 2016)—and underwrote routine applications such as spam filtering, recommendation, and navigation (Jordan and Mitchell 2015). Here, moral-panic narratives shifted toward surveillance, opacity, and manipulation, as data-driven services scaled (Crawford 2021; Zuboff 2019). Pragmatic countermeasures emerged: privacy and data-protection law, fairness and accountability research, and the first templates for auditing algorithmic systems (Barocas et al. 2019; Pasquale 2015).
Transformer architectures and massive pre-training turned AI from point solutions into an inference infrastructure whose defaults (training data, architectures, alignment layers, API exposure) travel across tasks and sectors (Bommasani et al. 2021; Vaswani et al. 2017). Generative models extend this to high-fidelity text, image, and audio synthesis (Goodfellow et al. 2014; Ho et al. 2020; Rombach et al. 2022). Sociologically, authority migrates upstream—from local application teams to model/compute providers and benchmark communities—so that lineage, reproducible evaluation, and content provenance become central to accountability (Bender et al. 2021; Plantin et al. 2018). Moral-panic cycles amplify fears of sentience or extinction; a pragmatic response focuses on observable failure surfaces: statistical bias and distribution shift, opaque data provenance, non-deterministic outputs, mis/disinformation risks, and unclear liability chains (Burrell 2016; Mitchell et al. 2019; Raji et al. 2020; Weidinger et al. 2022).
Seen through this history, AI’s current controversies are less about an unprecedented rupture than about governing an evolving infrastructure that pre-formats downstream practice. In workplaces, this means model defaults and API policies shape task allocation and oversight long before managers encounter specific tools; pragmatism thus asks for human–AI teaming protocols (escalation thresholds, override rights), targeted upskilling, and performance evidence linking productivity claims to wages and job quality (Acemoglu and Restrepo 2020; Brynjolfsson et al. 2023; Noy and Zhang 2023). In education, the same infrastructure can scaffold formative feedback yet threatens assessment integrity unless guardrails, disclosure norms, and provenance are built into learning management systems (Holmes et al. 2019; Kasneci et al. 2023; UNESCO 2023). In ethics and governance, pragmatism translates principles into verifiable controls: datasheets and model cards, open benchmarks with reproducible evaluations, incident databases with time-to-disclosure metrics, and risk-based obligations codified in frameworks such as NIST AI RMF 1.0, ISO/IEC 42001, and the EU AI Act (EU 2024; ISO/IEC 2023; Mitchell et al. 2019; NIST 2023). Throughout, moral-panic frames (from folk-devil narratives to extinction talk) remain sociologically important because they steer attention and investment, but their analytic value improves when paired with tractable mechanisms and measurable outcomes (Brown and Michael 2003; Cohen 2011; Goode and Ben-Yehuda 1994).
This historical reconstruction embeds the panic ↔ pragmatism contrast within concrete sociotechnical change (see Table 1). It sets the stage for the sections that follow by showing why today’s debates about employment, education, and ethics must grapple with AI as an upstream inference infrastructure: one that lowers the cost of analysis and synthesis while redistributing decision rights and responsibility, and therefore demands governance that prioritizes lineage, evaluation reproducibility, provenance, and meaningful avenues for contestation and redress (Amann et al. 2020; Floridi and Cowls 2019; Raji et al. 2020).

3 Employment: from automation alarm to task-level evidence

Public discourse about AI and employment often shifts from discussion of potential benefits to a mood of moral panic, with headlines that emphasize mass displacement and existential threats. Early task-based projections, such as Frey and Osborne’s estimates of automatable employment and related scenario planning (Manyika et al. 2017; Spencer 2025), helped focus attention on risk but also fostered a simplified narrative that “robots will take all jobs.” A more measured reading of the accumulating evidence suggests that impacts are nuanced and actionable: effects vary across tasks, sectors, and workers; short-run productivity gains are evident for certain activities; and the net effects on employment and wages depend on complementary investments, institutional settings, and how algorithmic management is governed.
Historical perspectives show how past technological revolutions have caused disruption and job upheaval, while also giving rise to new industries, new roles, and new opportunities. The late twentieth century technological shift—driven by automation, computing, and digitalization—illustrated how gains in efficiency could accompany a decline in traditional labor but also catalyze the growth of knowledge-based sectors and digital economies. Industrial automation reduced the demand for manual labor in fields like shipbuilding, mining, and heavy manufacturing (Harvey 1989), while computer-aided design and manufacturing increased production capabilities and precision. The social costs—long-term unemployment, job insecurity, and weakened career pathways—were real, but so too were the opportunities for new work and new skill sets.
It can be claimed that AI presents a similar duality. It can pose genuine risks to existing job structures, yet it also holds the potential to drive new opportunities, contingent on policy choices, workforce adaptation, and accessible reskilling. Randomized and field studies consistently show that generative and assistive AI can raise measured productivity for routine, well-specified cognitive work, with the largest gains accruing to less experienced workers and thus reducing some gaps in performance (Brynjolfsson et al. 2023; Noy and Zhang 2023). In software engineering, deployments of code assistants can shorten completion times and improve success rates on clearly defined problems, though they tend to shift effort toward review and verification when tasks are open-ended (Peng et al. 2023). Analyses that examine job exposure through task content and model capabilities suggest that many professional and administrative roles are augmented rather than replaced, with changes concentrated in task mixes and required skills (Eloundou et al. 2023; Felten et al. 2021).
On a macro level, caution is warranted against equating productivity gains with net employment growth or higher wages. Historical analyses of automation indicate that whether technology substitutes for or complements labor depends on task reallocation, demand elasticities, and institutional responses (Acemoglu and Restrepo 2020). Cross-country assessments find that generative AI is more likely to reconfigure jobs than to eliminate them outright, with higher exposure in clerical functions and uneven risks by gender and income groups (ILO 2023; OECD 2023; World Bank 2023). In short, augmentation appears most reliable when tasks are templated and evaluable; in open-ended or safety–critical contexts, error risk and oversight costs tend to rise. These employment impacts do not arise from individual AI tools alone. They reflect AI’s role as an inference infrastructure, where upstream choices in data and model design shape how tasks are assigned, monitored, and valued. As a result, job-quality risks appear long before organizations deploy a specific system.
Beyond headcounts, AI influences how work is coordinated and evaluated. The literature on algorithmic management shows that such systems can intensify monitoring, compress discretion, and shift some risk onto workers in sectors such as logistics, ride hailing, and content moderation, with these dynamics gradually expanding into “white collar” settings through dashboards, automated performance targets, and AI-mediated scheduling. The central policy question, therefore, is not only about displacement but about the governance of task delegation, accountability, and job quality in AI-enabled organizations.
To place these concerns in context, apocalyptic claims about “the end of work” tend to generalize from worst-case scenarios and overlook the levers that shape outcomes. A more pragmatic view emphasizes contingency: technologies can lead to different employment trajectories depending on training systems, bargaining institutions, procurement rules, and competition in the model/compute layer (Bessen 2019; Kenney and Zysman 2016). When organizations invest in complementary skills and redesign workflows to support human–AI teaming, productivity gains are more likely to translate into better jobs; conversely, pursuing cost-only automation with weak guardrails increases the likelihood of displacement and erosion of job quality (Acemoglu and Restrepo 2020).
The evidence base, thus, points toward an actionable agenda rather than a narrative of inevitability. Concrete steps include designing human–AI teaming with explicit escalation thresholds, override rights, and error budgets tied to task criticality, while measuring verification workload, quality drift, and throughput (Brynjolfsson et al. 2023; Peng et al. 2023). It also suggests targeted upskilling and clear pathways for mid-career workers, aligning curricula with AI-complementary skills such as data literacy, prompt-driven workflow design, and critical review (ILO 2023; OECD 2023). Safeguards for job quality in algorithmic management are warranted, including transparency about data inputs and performance metrics, audit trails for consequential decisions, and ensuring worker voice in deployment and evaluation (Kellogg et al. 2020; Rosenblat and Stark 2016). Attention to competition and rent sharing is also important, with monitoring of model/compute-layer concentration to ensure productivity gains do not decouple from wage growth and downstream investment (Bessen 2019; Kenney and Zysman 2016).
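To make the idea of escalation thresholds and error budgets concrete, the following is a minimal sketch of how such a routing policy might be encoded. All names, threshold values, and error budgets are illustrative assumptions, not figures drawn from the cited studies.

```python
from dataclasses import dataclass

@dataclass
class Task:
    criticality: str   # "low", "medium", or "high"
    confidence: float  # model's self-reported confidence in [0, 1]

# Hypothetical policy: higher-criticality tasks demand higher confidence
# before an AI-assisted decision proceeds without human review.
ESCALATION_THRESHOLDS = {"low": 0.70, "medium": 0.85, "high": 0.95}

# Hypothetical error budget: the maximum observed error rate tolerated in a
# criticality class before automation is suspended for that class entirely.
ERROR_BUDGETS = {"low": 0.05, "medium": 0.02, "high": 0.005}

def route(task: Task, observed_error_rate: float) -> str:
    """Return 'automate' or 'escalate' for a single task."""
    if observed_error_rate > ERROR_BUDGETS[task.criticality]:
        return "escalate"  # budget exhausted: human review for everything
    if task.confidence < ESCALATION_THRESHOLDS[task.criticality]:
        return "escalate"  # below threshold: human takes over this task
    return "automate"

if __name__ == "__main__":
    print(route(Task("high", 0.97), observed_error_rate=0.001))  # automate
    print(route(Task("high", 0.90), observed_error_rate=0.001))  # escalate
    print(route(Task("low", 0.97), observed_error_rate=0.10))    # escalate
```

The design choice the sketch illustrates is that override rights and budgets are expressed as explicit, auditable parameters rather than left to ad hoc managerial judgment, which is what makes the teaming protocol contestable and measurable.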
Ultimately, the best available evidence does not support a narrative of inevitable widespread redundancy, nor does it justify complacency. The likely trajectory is conditional: outcomes depend on how tasks are recomposed, who controls capability exposure and performance metrics, and whether institutions translate short-run productivity gains into enduring, broad-based improvements.

4 Education: from cheating fears to measured learning gains

Public debate around AI in education often swings between concerns about cheating, surveillance, and deskilling, and optimistic claims of frictionless personalization. This moral panic register can obscure a substantial body of empirical and practice-oriented work that has begun to specify where, how, and under what conditions AI can support learning—and where it can pose risks (Reich 2020; Selwyn 2019; Wang 2025). A pragmatic reading of the evidence suggests that there are identifiable conditions under which benefits may accrue, alongside clear risks, and concrete institutional levers that can influence outcomes. In education, many of these issues emerge from the upstream structure of AI’s inference infrastructure. The way models are trained, aligned, and sourced shapes how they generate feedback, handle uncertainty, and interact with assessment systems, which in turn affects integrity, accuracy, and equity in learning environments.
What we know works emerges from decades of research in Artificial Intelligence in Education (AIED), which indicates that well-designed systems can improve learning, particularly for routine practice and targeted feedback. Meta-analyses of intelligent tutoring systems report moderate to large effects on achievement relative to business-as-usual instruction (George 2025; Ma et al. 2014; VanLehn 2011), and field studies of “personalised learning” models show gains when curricula, teacher development, and data use are implemented as an integrated package rather than as standalone tools (Pane et al. 2015). Recent reviews of AI in higher education similarly find positive, though heterogeneous, effects for feedback, writing support, and study guidance, while cautioning that outcomes depend on pedagogy and context (Holmes et al. 2019; Zawacki-Richter et al. 2019). Early work on generative AI aligns with this picture: large language models (LLMs) can scaffold idea generation, formative feedback, and language support, with the largest benefits observed for less-prepared learners when activities are structured and teacher-mediated (Kasneci et al. 2023).
The risks, however, are real: the same capacities that enable scalable feedback can also challenge assessment validity and potentially widen inequalities if guardrails are not in place. Empirical studies indicate that performance gains from educational technology often track pre-existing advantages unless access, language fit, and digital literacies are explicitly addressed (Reich 2020; Zawacki-Richter et al. 2019). Learning analytics research documents ethical considerations around opacity, consent, and secondary data use in schools (Slade and Prinsloo 2013; Williamson 2017). For LLMs in particular, concerns include hallucination, hidden training data biases, and the risk of task over-automation that could deskill learners if systems replace rather than augment reasoning (Holmes et al. 2019; Kasneci et al. 2023).
From a machine learning perspective, fairness-aware methods are necessary but not sufficient: bias can re-enter through target labels, domain shift, or deployment practices (Mehrabi et al. 2021).
From panic to practice: international guidance has moved beyond generic principles toward actionable controls. UNESCO’s Guidance for Generative AI in Education and Research (2023) recommends limiting high-stakes decision automation, keeping teachers “in the loop,” documenting data sources, and building institutional capacity for prompt and rubric design (UNESCO 2023). OECD analyses similarly emphasize human oversight, the measurement of learning—not only tool usage—and governance for procurement and data protection (OECD 2023). Teacher-facing AIED research shows that co-designed orchestration tools are more likely to be used and to improve instruction than student-only automation (Holstein et al. 2019).
A pragmatic causal story emerges: when AI supports learning, it tends to do so through three mediators—first, more time on task and deliberate practice enabled by instant feedback (Ma et al. 2014; VanLehn 2011); second, cognitive offloading of lower-level tasks so teacher and student attention can shift to higher-order reasoning (Holmes et al. 2019); and third, responsiveness—adaptive hints and examples calibrated to current performance (Pane et al. 2015). Conversely, harms may arise when automation targets ill-defined tasks (open-ended judgments), when incentives emphasize output quantity over understanding, or when data pipelines lack transparency and contestability (Mehrabi et al. 2021; Slade and Prinsloo 2013).
What institutions can implement now points toward practice-oriented levers that translate principles into verifiable routines. SOPs for classroom use can require disclosure of AI assistance on graded work; the logging of prompts and outputs (provenance) within the LMS; and the separation of practice spaces (where AI is allowed) from assessment spaces (where it is restricted) (OECD 2023; UNESCO 2023).
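A provenance log of the kind just described can be made tamper-evident with very simple means. The following sketch, in which the field names and hash-chaining scheme are illustrative assumptions rather than any LMS vendor's API, records each prompt–output pair and chains entries by hash so retroactive edits become detectable.

```python
import hashlib
import json
import time

def _entry_hash(body: dict) -> str:
    """Deterministic SHA-256 over the entry's canonical JSON form."""
    payload = json.dumps(body, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_entry(log: list, student_id: str, prompt: str, output: str) -> dict:
    """Append one prompt/output record, chained to the previous entry."""
    entry = {
        "student_id": student_id,
        "prompt": prompt,
        "output": output,
        "timestamp": time.time(),
        "prev_hash": log[-1]["hash"] if log else None,
    }
    entry["hash"] = _entry_hash(entry)  # hash computed before "hash" is set
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every hash and check each entry points at its predecessor."""
    for i, entry in enumerate(log):
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["hash"] != _entry_hash(body):
            return False  # an entry was altered after logging
        if entry["prev_hash"] != (log[i - 1]["hash"] if i > 0 else None):
            return False  # the chain itself was rearranged
    return True

if __name__ == "__main__":
    log = []
    append_entry(log, "s001", "Outline an essay on moral panic", "1. Intro ...")
    append_entry(log, "s001", "Expand point 1", "Moral panic describes ...")
    print(verify_chain(log))          # True
    log[0]["output"] = "tampered"     # a retroactive edit ...
    print(verify_chain(log))          # ... breaks the chain: False
```

The point of the sketch is governance, not cryptographic sophistication: once disclosure is logged in an append-only, verifiable form, claims about when and how AI assistance was used become checkable evidence rather than assertions.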
Teacher-in-the-loop design should prioritize tools that expose model rationales or exemplars teachers can edit, and provide dashboards that surface uncertainty and suggest next pedagogical actions (Holstein et al. 2019). Assessment integrity could be supported by shifting high-stakes evaluation toward authentic, process-revealing tasks (draft trails, orals, in-class builds) while using AI for low-stakes formative support (Reich 2020; Selwyn 2019). Equity by design calls for budgeting for access (devices, connectivity), language support, and digital literacy instruction, and for evaluating tools against subgroup outcomes, not just averages (Mehrabi et al. 2021; Zawacki-Richter et al. 2019). Measurement should track learning gains relative to baselines, not solely tool engagement, and should include error audits for AI-generated feedback and for student misconceptions corrected versus those introduced (Pane et al. 2015; UNESCO 2023).
The rationale behind these recommendations is not to sensationalize AI as an external threat to academic integrity or the teacher’s role, but to recognize that AI can support learning when it is embedded in pedagogy, oversight, and equity infrastructure, and that it may have diminishing value if used as a shortcut or if governance treats documentation and accountability as optional. The practical agenda—SOPs, teacher-in-the-loop design, equity budgeting, and outcome-focused evaluation—addresses these conditions directly (Holstein et al. 2019; Ma et al. 2014; OECD 2023; UNESCO 2023; VanLehn 2011).

5 Ethics: from abstract principles to auditable practice

The ethical stakes of AI center on accountability and explainability, fairness and bias, human autonomy, and privacy and surveillance; these dimensions interact and therefore require integrated responses rather than piecemeal fixes. In the realm of bias and fairness, empirical audits show that AI systems can reproduce and amplify existing social inequalities when trained on skewed data or deployed without domain-specific guardrails (Metcalf 2025). Benchmark studies have documented disparate error rates in facial analysis for darker-skinned women (Buolamwini and Gebru 2018) and systematic disparities in algorithmic hiring and screening (Cowgill 2020; Raghavan et al. 2020). Survey syntheses map the technical sources of bias—labeling, sampling, and measurement—and the trade-offs among formal fairness criteria (Corbett-Davies and Goel 2018; Mehrabi et al. 2021). Social science work further demonstrates how data infrastructures and institutional arrangements embed inequality upstream, so purely technical remedies are insufficient without organizational change (Benjamin 2019; Eubanks 2018). The implication is that fairness must be treated as a sociotechnical property, combining dataset governance, context-appropriate metrics, and procedures for appeal and redress.
Echoing the dynamics observed in employment and education, the ethical issues surrounding AI likewise arise from its nature as an inference infrastructure. Opacity, authority, and data governance reside upstream in the training pipeline, conditioning how fairness, accountability, and explainability appear downstream. Ethical failures thus reflect systemic infrastructural factors rather than individual model mistakes.
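The disparate error rates documented in audits such as Buolamwini and Gebru (2018) become governable precisely because they are measurable. As a minimal sketch of such an audit, the following computes per-group error rates on synthetic records; the field names and numbers are illustrative assumptions, not data from the cited studies.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns each group's fraction of misclassified examples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        if y_true != y_pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def max_disparity(rates):
    """Largest gap in error rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

if __name__ == "__main__":
    # Synthetic audit data: group A misclassified 10% of the time, group B 30%.
    synthetic = (
        [("A", 1, 1)] * 90 + [("A", 1, 0)] * 10 +
        [("B", 1, 1)] * 70 + [("B", 1, 0)] * 30
    )
    rates = error_rates_by_group(synthetic)
    print(rates)                 # {'A': 0.1, 'B': 0.3}
    print(max_disparity(rates))  # ~0.2: a gap a governance threshold can flag
```

Framing fairness this way supports the sociotechnical argument above: the metric alone settles nothing, but it turns a contested claim into a documented threshold that appeal and redress procedures can act on.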
On accountability and explainability, many high-performing models remain opaque in ways that hinder error tracing and responsibility assignment (Binns 2018; Kroll et al. 2017). While technical work on explainability and interpretability provides useful tools, there are limits: post hoc explanations can mislead, and in safety-critical domains fully interpretable models may be preferable (Adadi and Berrada 2018; Doshi-Velez and Kim 2017; Rudin 2019). Recent risk taxonomies emphasize that accountability must extend across the model lifecycle (data, training, evaluation, and deployment) and be evidenced through logs, documentation, and incident reporting (Raji et al. 2020; Weidinger et al. 2022). The implication is to move from “explainability in principle” to auditable evidence of performance and process.
Regarding human autonomy and manipulation, algorithmic decision support can improve consistency but may also displace judgment or steer behavior in ways that are difficult to detect (Bruneault and Laflamme 2021; Buccella 2025; Yeung 2018; Zuboff 2019). Empirical studies of automated eligibility and scoring illustrate how decision authority can drift from professionals to defaults unless escalation thresholds and override rights are explicit (Eubanks 2018). The implication is to design for human-in-the-loop control where humans have genuine intervention points, supported by training, documentation of model limits, and downstream accountability.
Privacy and surveillance concerns add another layer, as large-scale data collection, model inversion, and membership inference attacks show that personal data (or close proxies) can leak from models even after training (Carlini et al. 2021; Shokri et al. 2017). These risks are unevenly distributed, with marginalized groups bearing disproportionate harms (O’Neil 2016). Regulatory baselines—the GDPR and related jurisprudence—set duties of purpose limitation, data minimization, and rights to contest automated decisions (European Union 2016; Veale and Zuiderveen Borgesius 2021). The implication is that privacy assurance must be tested empirically (for example, through leakage audits and documentation of retention, deletion, and access controls), not merely asserted.
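The claim that privacy assurance must be tested empirically rather than asserted can be made concrete. The sketch below is written for this discussion, not drawn from the article: it illustrates the logic of a loss-threshold membership-inference audit in the spirit of Shokri et al. (2017), where an overfit model that is far more confident on its training records than on held-out ones leaks membership. All names, confidence values, and the threshold are illustrative assumptions.

```python
# Minimal, illustrative sketch of a loss-threshold membership-inference audit.
# Confidence values and the 0.9 threshold are assumptions for the example.

def confidence(model_memory, x, base=0.55):
    """Toy model: near-certain on memorized training points, uncertain elsewhere."""
    return 0.98 if x in model_memory else base

def membership_attack_advantage(members, non_members, model_memory, threshold=0.9):
    """True-positive rate minus false-positive rate of the 'member' guess.

    0.0 means the attacker does no better than chance (no detectable leakage);
    values near 1.0 indicate severe memorization of training records.
    """
    tp = sum(confidence(model_memory, x) > threshold for x in members)
    fp = sum(confidence(model_memory, x) > threshold for x in non_members)
    return tp / len(members) - fp / len(non_members)

train = ["rec-%d" % i for i in range(100)]
held_out = ["rec-%d" % i for i in range(100, 200)]
memorizing_model = set(train)  # worst case: the model memorizes its training set
adv = membership_attack_advantage(train, held_out, memorizing_model)
print(f"attack advantage: {adv:.2f}")  # 1.00 for this fully memorizing toy model
```

A real audit would replace the toy confidence function with per-example losses from the deployed model, but the pass/fail logic (bounding attack advantage and documenting the result) is the same.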
From principles to practice, the field has moved toward operational instruments rather than abstract value lists: model cards and datasheets document intended use, data lineage, and evaluation (Gebru et al. 2021; Mitchell et al. 2019); the NIST AI Risk Management Framework offers process controls for mapping, measuring, and managing risk (NIST 2023); ISO/IEC 42001 establishes an AI management-system standard; and the EU AI Act institutionalizes risk-based obligations—conformity assessment, logging, transparency, and human oversight—for high-risk uses (EU 2024). Persistent challenges include opacity by design (limited access to training data and alignment procedures), normative indeterminacy (no shared thresholds for “acceptable error” across tasks), and assurance capture (auditors reliant on vendor access) (Mittelstadt 2019; Morley et al. 2021; Raji et al. 2020; Weidinger et al. 2022). The implication is that credible governance requires verifiable controls—publicly inspectable documentation of data and model lineage, reproducible evaluations using registered test sets, incident databases with time-to-remedy metrics, and independence safeguards for audits.
The agenda suggested by these considerations is concrete. It begins with documenting and testing: requiring dataset provenance, model cards, and registered evaluation reports with replication artifacts (Gebru et al. 2021; Mitchell et al. 2019). It continues with measuring fairness in context: selecting domain-appropriate metrics, publishing error budgets, and reporting appeal and reversal rates (Corbett-Davies and Goel 2018; Mehrabi et al. 2021). It then emphasizes designing for human control: specifying override rights, escalation thresholds, and training for end users (Eubanks 2018; Yeung 2018). It also calls for hardening privacy: testing for leakage and membership inference and aligning retention and deletion with regulatory duties (Carlini et al. 2021; European Union 2016; Shokri et al. 2017). Strengthening assurance involves adopting NIST and ISO processes and implementing independent audits with access guarantees to avoid capture (EU 2024; ISO/IEC 42001 2023; NIST 2023; Raji et al. 2020).
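To show what measuring fairness in context against a published error budget can look like operationally, the following sketch computes subgroup error rates and checks them against a budget. It is a toy illustration: the metric choice (simple error rate), the group labels, and the budget values are assumptions for the example, not thresholds prescribed by the cited literature.

```python
# Illustrative sketch: subgroup error rates checked against an error budget.
# The 10% budget and 5% maximum gap are assumed values for this example.
from collections import defaultdict

def subgroup_error_rates(records):
    """records: (group, prediction, label) triples -> error rate per group."""
    errs, totals = defaultdict(int), defaultdict(int)
    for group, pred, label in records:
        totals[group] += 1
        errs[group] += (pred != label)
    return {g: errs[g] / totals[g] for g in totals}

def within_error_budget(rates, budget=0.10, max_gap=0.05):
    """Pass only if every group is under the budget AND group gaps stay bounded."""
    worst, best = max(rates.values()), min(rates.values())
    return worst <= budget and (worst - best) <= max_gap

# Synthetic audit data: group A has 4/50 errors, group B has 6/50 errors.
records = (
    [("A", 1, 1)] * 46 + [("A", 0, 1)] * 4 +
    [("B", 1, 1)] * 44 + [("B", 0, 1)] * 6
)
rates = subgroup_error_rates(records)
print(rates, within_error_budget(rates))
```

Publishing the budget, the per-group rates, and the pass/fail outcome is what turns a fairness principle into the kind of auditable evidence the paragraph above calls for.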
Taken together, the literature suggests that AI governance needs to be evidence-based, measurable, and auditable across the AI supply chain, ensuring that fairness, accountability, autonomy, and privacy are verifiable in practice.
For a summary of the key points, see Table 1, which differentiates moral panic and pragmatism in AI discourse, showing contrasting interpretive frames, underlying values, and implications for ethical and regulatory debates.

6 Collaborative governance of AI: a Neo-Triple Helix perspective

Collaborative governance of AI through a Neo-Triple Helix offers a way to move from moral panic to pragmatic, evidence-based oversight. Public discourse often moves between existential alarm and optimistic forecasts of frictionless progress, a pattern that reflects moral panic in which diffuse fears, hostile elements, and media amplification can outpace policy and evidence (Cai 2022; Cohen 2011; Goode and Ben-Yehuda 1994; Sioumalas-Christodoulou and Tympas 2025). A more constructive approach treats risks as identifiable and manageable through concrete controls (documentation, evaluation, redress, and oversight) rather than as abstract threats (Beck 1992; Mitchell et al. 2019; Raji et al. 2020). The Neo-Triple Helix (NTH) provides a sociologically grounded way to organize that pragmatism. It reframes the traditional university–industry–government relationship as a co-evolving ecosystem with fluid roles shaped by local institutions, platform orchestrators, standards bodies, and assurance intermediaries, while treating civil society actors (professional groups, unions, NGOs, and communities) as constitutive participants entering through standards processes, procurement, consultation, and oversight forums (Cai 2022; DeNardis 2014; Etzkowitz and Leydesdorff 2000; Ostrom 2010). In this lens, AI governance becomes a policy mix of regulation, supervision, mission-oriented programs, public procurement, standards, audits, and shared data and model infrastructures that can be iteratively adjusted as new evidence accumulates (Kuhlmann and Rip 2018; Mazzucato 2018; Sabel and Zeitlin 2012).
Operationally, the state steers through risk-based obligations and public-interest infrastructures such as open evaluation suites and incident databases, as exemplified by the EU AI Act’s tiered duties for high-risk systems, the NIST AI Risk Management Framework, and the ISO/IEC 42001 management-system standard (EU 2024; ISO/IEC 2023; NIST 2023). Industry—model vendors, integrators, and deployers in health, finance, and education—governs capability exposure, guardrails, and post-deployment monitoring through documentation, testing, red teaming, and incident response (Raji et al. 2020). Universities and public research bodies contribute public value by advancing fairness, interpretability, and evaluation through datasheets, model cards, benchmark design, and incident taxonomies (Gebru et al. 2021; Mitchell et al. 2019). Intermediaries such as standards organizations, professional associations, auditors, and certification bodies translate broad norms into auditable practice and indicators, linking ethics to evidence (Gorwa 2019; Power 1997).
This ecosystemic view aligns with analyses of employment, education, and ethics and helps move the debate beyond panic. In employment, randomized and field studies show that generative AI can raise productivity on routine cognitive tasks, often with the greatest gains for less experienced workers, while creating new verification work in open-ended tasks (Brynjolfsson et al. 2023; Noy and Zhang 2023; Peng et al. 2023). Macro-level evidence cautions that productivity gains do not automatically lead to higher employment or wages without complementary investment and task reallocation (Acemoglu and Restrepo 2020; Eloundou et al. 2023; Webb 2020). The NTH guides concrete responses: social dialog institutions and skills agencies co-design human–AI teaming with escalation thresholds, override rights, and error budgets, while competition and procurement policies address rent concentration at model/compute layers (Kenney and Zysman 2016; Srnicek 2017). In education, guidance from UNESCO and the OECD specifies guardrails (limit high-stakes automation, keep teachers involved, protect data, and require provenance), and institutions report learning and writing gains from scaffolded AI feedback, particularly for less-prepared learners, contingent on pedagogy and integrity measures (Holmes et al. 2019; Kasneci et al. 2023; OECD 2023; UNESCO 2023). The NTH helps convert these principles into practical operations: disclosure norms for AI assistance, provenance and logging in learning management systems, recognition of teacher workload, and shared evaluation resources that enable cross-vendor comparison. In ethics and assurance, a practical toolkit exists (datasheets and model cards, risk-management frameworks, management-system standards, and end-to-end audit protocols), and these can be tied to verifiable evidence such as error budgets, appeal and reversal rates, time to remedy, and adoption of content provenance (Floridi and Cowls 2019; ISO/IEC 2023; Mitchell et al. 2019; Mittelstadt 2019; NIST 2023; Raji et al. 2020). Persistent opacity in training data, alignment procedures, and evaluation sets requires independent scrutiny and clear disclosure incentives (Weidinger et al. 2022).
Three families of instruments make the NTH actionable across sectors. First, participation mechanisms involve standing, well-resourced forums where affected publics and professional communities co-define evidence thresholds and review deployments—such as citizens’ assemblies, worker councils, and panels for patients and parents—consistent with civic epistemologies and experimentalist governance (Davis et al. 2012; Jasanoff 2003; Sabel and Zeitlin 2012). Second, assurance guarantees call for documented data and model lineage, reproducible evaluations on registered test suites, continuous logging for audits and incident investigations, independent red-teaming, and post-deployment monitoring with time-to-disclosure service-level agreements, complemented by third-party audits to reduce “compliance theater” and auditor dependence (EU 2024; ISO/IEC 2023; Mitchell et al. 2019; NIST 2023; Power 1997; Raji et al. 2020). Third, procurement levers use public and large-buyer procurement to embed accountability upstream, requiring complete documentation, externally verifiable evaluations, incident reporting, content provenance for synthetic material, energy-use reporting, data portability, and audit access, with tenders scored on accuracy, bias mitigation, and time to remedy alongside price (Kuhlmann and Rip 2018; Mazzucato 2018; Raji et al. 2020).
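The procurement lever described above implies scoring tenders on assurance evidence alongside price. The hypothetical sketch below illustrates one way such a weighted scoring rule could work; the criteria names, weights, and normalized scores are assumptions invented for this example, not a formula proposed by the cited authors.

```python
# Hypothetical sketch of procurement scoring that weighs assurance evidence
# (accuracy, bias mitigation, time to remedy) alongside price.
# All criteria, weights, and bid values below are assumed for illustration.

def tender_score(bid, weights=None):
    """Weighted score over criteria normalized to [0, 1]; higher is better."""
    weights = weights or {"accuracy": 0.3, "bias_mitigation": 0.25,
                          "time_to_remedy": 0.25, "price": 0.2}
    return sum(weights[k] * bid[k] for k in weights)

bids = {
    # Strong assurance record, middling price competitiveness.
    "vendor_a": {"accuracy": 0.90, "bias_mitigation": 0.8,
                 "time_to_remedy": 0.7, "price": 0.5},
    # Cheapest and most accurate, but weak on bias mitigation and remediation.
    "vendor_b": {"accuracy": 0.95, "bias_mitigation": 0.4,
                 "time_to_remedy": 0.3, "price": 0.9},
}
ranked = sorted(bids, key=lambda v: tender_score(bids[v]), reverse=True)
print(ranked)
```

Under these assumed weights the vendor with the stronger assurance record outranks the cheaper, less accountable one, which is exactly the incentive shift that procurement-backed governance aims for.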
Progress can be measured with sector-specific indicators that connect micro-practices to macro-outcomes: in employment, metrics include task delegation maps, override and escalation rates, participation in upskilling, and wage and job-quality trends; in education, indicators cover disclosure and provenance rates, integrity incidents and reversals, and learning gains by baseline attainment; in ethics and assurance, measures track the use of datasheets and model cards, reproducibility of evaluations, time to remedy, and provenance adoption.

Important limitations and scope conditions temper expectations. Concentrated model and compute markets can give some actors agenda-setting power and increase the risk of assurance capture when auditors rely on vendor access (Power 1997; Raji et al. 2020; Srnicek 2017). The evidence base is still skewed toward the Global North, which limits generalizability to data-poor contexts and different civic cultures (Birhane 2021; Mohamed et al. 2020). Sectoral differences mean that acceptable error budgets and escalation rights vary across hiring, clinical decision support, and education; a single checklist may not fit all cases (Amann et al. 2020; Selbst et al. 2019). Opacity by design remains a structural challenge despite formal audits (Weidinger et al. 2022). Administrative burdens can also overwhelm small firms and public agencies unless there is shared assurance infrastructure and common test resources.

Framed against moral panic, the Neo-Triple Helix does not deny public concern; it channels it into participatory spaces, verifiable assurance, and procurement-backed incentives that make claims testable and responsibilities clear, while retaining democratic legitimacy as AI systems increasingly organize work, learning, and decision-making (DeNardis 2014; Gorwa 2019; Scott 2014).
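One of the recurring indicators above, time to remedy, is simple to compute once incident logging is in place. The sketch below shows a minimal version over a synthetic incident log; the field names, dates, and the 30-day target are assumptions for illustration, not values from the article.

```python
# Sketch of one assurance indicator: median time-to-remedy from an incident log.
# Field names, dates, and the 30-day target are assumed for this example.
from datetime import date
from statistics import median

incidents = [
    {"id": "INC-1", "opened": date(2024, 1, 3), "remedied": date(2024, 1, 10)},
    {"id": "INC-2", "opened": date(2024, 2, 1), "remedied": date(2024, 3, 20)},
    {"id": "INC-3", "opened": date(2024, 4, 5), "remedied": date(2024, 4, 9)},
]

def median_time_to_remedy(log):
    """Median number of days from incident opening to verified remedy."""
    return median((i["remedied"] - i["opened"]).days for i in log)

ttr = median_time_to_remedy(incidents)
print(f"median time-to-remedy: {ttr} days (assumed target: <= 30)")
```

Tracking this number over time, and disaggregating it by sector or affected group, is one way an abstract assurance commitment becomes a publicly checkable performance indicator.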

7 Conclusion

This article has argued that much of today’s AI discourse oscillates between moral panic and pragmatic governance. Moral-panic framings—extinction talk, claims of inevitable mass redundancy, or “end of the university” narratives—help explain the volatility of public attention but do little to guide decision-makers (Brown and Michael 2003; Cohen 2011; Goode and Ben-Yehuda 1994). A pragmatic stance, by contrast, anchors judgments in mechanisms that can be observed, measured, and governed. Our core contribution has been threefold: (i) to reframe contemporary AI as an inference infrastructure—a stack of data, models, compute, and assurance processes that pre-formats downstream practice—thereby relocating explanation upstream and away from headline-driven fears (Bender et al. 2021; Plantin et al. 2018); (ii) to operationalize this stance through an indicator-ready program (lineage, reproducible evaluation, provenance, appeal and redress), turning ethics from principle to evidence (Gebru et al. 2021; Mitchell et al. 2019; Raji et al. 2020); and (iii) to specify a Neo-Triple Helix (NTH) architecture that coordinates roles for government, industry, and universities together with standards bodies and organized publics so that assurance is credible and democratic, not theatrical (Cai 2022; DeNardis 2014; Power 1997).
Viewed historically, AI’s trajectory—from expert systems to statistical learning to foundation models—confirms why panic-based generalities are analytically weak. Each phase relocated authority (from local rules to upstream model providers), altered failure surfaces (from brittleness to distribution shift and data opacity), and demanded new forms of oversight (from domain scoping to documentation and third-party audit) (Bommasani et al. 2021; Jordan and Mitchell 2015; Shortliffe 1976; Weidinger et al. 2022). The inference infrastructure lens, therefore, explains both the renewed visibility of AI and the kinds of controls that matter: lineage and access to training data and alignment procedures, documented evaluations that can be reproduced, incident logging with time-to-remedy metrics, and clear accountability chains across vendors and deployers (EU AI Act 2024; Mitchell et al. 2019; NIST 2023; Raji et al. 2020).
Applied to employment, the evidence does not support apocalyptic redundancy claims nor uncritical optimism. Randomized and field studies show sizable productivity gains for routinized cognitive work—often largest for less-experienced workers—coupled with new verification workloads and uneven effects when tasks are open-ended (Brynjolfsson et al. 2023; Noy and Zhang 2023; Peng et al. 2023). Macroeconomic work cautions that productivity gains do not automatically raise employment or wages; outcomes hinge on task reallocation, demand elasticities, and complementary investment (Acemoglu and Restrepo 2020; Autor 2015; ILO 2023). A pragmatic program, therefore, centers on human–AI teaming (escalation thresholds, override rights, error budgets), job-quality safeguards in algorithmic management (transparency, audit trails, worker voice), and skills pathways targeted to mid-career workers (Kellogg et al. 2020; Kenney and Zysman 2016; OECD 2023). Moral-panic narratives can be repurposed as motivation for this institutional work, but they cannot substitute for it.
In education, the panic repertoire focuses on cheating and deskilling; the empirical record is more differentiated. Decades of AIED research and recent studies of large language models indicate that well-scaffolded systems improve practice and writing, especially for less-prepared learners, when teachers remain in the loop and integrity rules are clear (Holmes et al. 2019; Kasneci et al. 2023; Ma et al. 2014; VanLehn 2011). Risks such as assessment invalidity, inequities from uneven access and digital literacies, and opacity in learning analytics are real but governable with disclosure norms, provenance and logging in learning management systems, equity budgeting, and a shift toward process-revealing assessment (OECD 2023; UNESCO 2023; Williamson 2017). Here again, a pragmatic stance converts anxiety into auditable routines that track learning gains, integrity incidents and reversals, and subgroup outcomes, rather than tool usage alone (Pane et al. 2015).
In ethics, widely cited problems (bias, opacity, manipulation, and privacy) are best treated as sociotechnical and life-cycle properties. Technical mitigations must be combined with organizational procedures for contestation and redress. Empirical audits document disparate error for identity-relevant systems (Buolamwini and Gebru 2018) and hiring pipelines (Raghavan et al. 2020), while accountability research shows the limits of post-hoc explainability and the need for interpretable models in safety-critical settings (Doshi-Velez and Kim 2017; Rudin 2019). Privacy risks extend to model leakage and membership inference, which require testing, not assertion (Carlini et al. 2021; Shokri et al. 2017). The regulatory stack (NIST AI RMF, ISO/IEC 42001, and the EU AI Act) already points toward evidence-based oversight; the task is to prevent assurance capture by ensuring auditor independence and access to evaluation artifacts (EU AI Act 2024; ISO/IEC 42001 2023; NIST 2023; Power 1997; Raji et al. 2020).
The NTH lens integrates these domain findings. Governments steer through risk-based obligations and public-interest infrastructures (open benchmarks, incident databases), industry governs capability exposure and post-deployment monitoring, and universities build the methods and shared test resources that make claims comparable across vendors and sectors (Cai 2022; DeNardis 2014; Mitchell et al. 2019). Participation mechanisms—citizens’ assemblies, worker councils, patient and parent panels—translate diverse values into evidence thresholds for acceptable error and remedy (Jasanoff 2003; Sabel and Zeitlin 2012). Procurement levers and competition policy help align private incentives with public value by conditioning market access on documentation, reproducibility, provenance, and energy-use reporting, while guarding against rent concentration at the model/compute layer (Kuhlmann and Rip 2018; Srnicek 2017).
Several scope conditions temper these conclusions. The evidence base is skewed toward the Global North, limiting generalizability to data-poor contexts and different civic epistemologies (Birhane 2021; Mohamed et al. 2020). Sectoral heterogeneity means that error budgets and escalation rights must be domain-specific: hiring is not medicine, and schooling is not credit scoring (Amann et al. 2020; Selbst et al. 2019). Opacity by design (confidential training corpora, alignment procedures, and evaluation sets) remains a structural obstacle even under formal audits (Weidinger et al. 2022). Finally, assurance can devolve into ritual unless indicators track performance (time to remedy, appeal and reversal rates, distributional impacts), not just process (Power 1997).
For research, the agenda is cumulative and comparative: within-firm before/after designs, cross-firm matched comparisons, and sectoral fieldwork that stitch micro-case handling to meso-procedures and macro-settlements, reporting a common indicator set for agency, opacity, normativity, and automation across employment, education, and high-risk ethics domains (Mitchell et al. 2019; Morley et al. 2021; Scott 2014). For policy, the recommendation is to fund public evidence infrastructures (benchmark registries, incident databases), protect auditor and researcher access, and institutionalize participatory fora that make assurance responsive to those most affected (Davis et al. 2012; DeNardis 2014).
In sum, the paper’s contribution is to reposition the debate: moral panic names a recurring discourse pattern; pragmatism names a program of measurable remediation. Treating AI as an inference infrastructure clarifies where authority now sits and which levers work. The NTH provides an organizational blueprint for translating ethics into auditable practice while maintaining social legitimacy. If adopted, this orientation allows institutions to move past headline-driven swings and toward accountable gains in productivity, learning, and rights protection—outcomes that can be demonstrated rather than declared (Beck 1992; EU AI Act 2024; Jasanoff 2004; NIST 2023).

Acknowledgements

We are grateful to the anonymous reviewers for their careful reading and insightful comments. Their suggestions have strengthened the arguments and sharpened the focus of this work.

Declarations

Conflict of interest

The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Title
From moral panic to pragmatic governance: reframing AI’s societal impacts in employment, education, and ethics
Authors
Katarzyna Borkowska
David Jackson
Publication date
16-02-2026
Publisher
Springer London
Published in
AI & SOCIETY
Print ISSN: 0951-5666
Electronic ISSN: 1435-5655
DOI
https://doi.org/10.1007/s00146-026-02921-1
go back to reference Acemoglu D, Restrepo P (2020) Robots and jobs: evidence from U.S. labor markets. J Polit Econ 128(6):2188–2244CrossRef
go back to reference Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6:52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052CrossRef
go back to reference Amann J, Blasimme A, Vayena E, Frey D, Madai VI (2020) Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak 20:310. https://doi.org/10.1186/s12911-020-01332-6CrossRef
go back to reference Ananny M, Crawford K (2018) Seeing without knowing: limitations of the transparency ideal. New Media Soc 20(3):973–989CrossRef
go back to reference Barocas S, Hardt M, Narayanan A (2019) Fairness and machine learning: Limitations and opportunities. Open-access book manuscript. https://fairmlbook.org/pdf/fairmlbook.pdf
go back to reference Beck U (1992) Risk society: Towards a new modernity. Sage, London
go back to reference Bender EM, Gebru T, McMillan-Major A, Shmitchell S (2021) On the dangers of stochastic parrots. Proc Facct 2021:610–623. https://doi.org/10.1145/3442188.3445922CrossRef
go back to reference Benjamin R (2019) Race after technology: Abolitionist tools for the New Jim Code. Polity, Cambridge
go back to reference Bentley S, Evans D (2025) Seeing is believing: societal differences in AI awareness and the link to AI-related beliefs. AI Soc. https://doi.org/10.1007/s00146-025-02554-wCrossRef
go back to reference Bessen JE (2019) AI and jobs: The role of demand. AEA Pap. Proc. 109: 479–484 Also available as: NBER Working Paper No. 24235 (rev. 2019). https://www.nber.org/papers/w24235
go back to reference Binns R (2018) Algorithmic accountability and public reason. Philos Technol 31(4):543–556. https://doi.org/10.1007/s13347-017-0263-5CrossRef
go back to reference Bommasani R et al (2021) On the opportunities and risks of foundation models. arXiv preprint https://arxiv.org/abs/2108.07258
go back to reference Borup M, Brown N, Konrad K, Van Lente H (2006) The sociology of expectations in science and technology. Technol Anal Strateg Manag 18(3–4):285–298CrossRef
go back to reference Bory P, Natale S, Katzenbach C (2025) Strong and weak AI narratives: an analytical framework. AI Soc 40:2107–2117. https://doi.org/10.1007/s00146-024-02087-8CrossRef
go back to reference Bowker GC, Star SL (1999) Sorting things out: Classification and its consequences. MIT Press.
go back to reference Brown N, Michael M (2003) A sociology of expectations: retrospecting prospects and prospecting retrospects. Technol Anal Strateg Manag 15(1):3–18. https://doi.org/10.1080/0953732032000046024CrossRef
go back to reference Bruneault F, Laflamme AS (2021) AI ethics: how can information ethics provide a framework to avoid usual conceptual pitfalls? An overview. AI & Soc 36:757–766. https://doi.org/10.1007/s00146-020-01077-wCrossRef
go back to reference Brynjolfsson E, Li D, Raymond L (2023) Generative AI at work. NBER Working Paper No. 31161. https://doi.org/10.3386/w31161
go back to reference Buccella A (2025) Ethically charged decisions and the future of ‘AI Ethics.’ AI Soc. https://doi.org/10.1007/s00146-025-02573-7CrossRef
go back to reference Bucher T (2018) If… then: Algorithmic power and politics. Oxford University Press.
go back to reference Buolamwini J, Gebru T (2018) Gender shades: Intersectional accuracy disparities in commercial gender classification. Proc. FAT: 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html
go back to reference Burrell J (2016) How the machine “thinks”: Understanding opacity in ML algorithms. Big Data Soc., 3(1).
go back to reference Cai Y (2022) A neo-Triple Helix model of innovation ecosystems. Ind High Educ 36(1):3–19. https://doi.org/10.1177/09504222221081880MathSciNetCrossRef
go back to reference Callon M (1986) Some elements of a sociology of translation. In: Law J (ed) Power, action, and belief. Routledge, pp 196–233
go back to reference Campbell Jr M, Hoane AJ, Hsu Fh (2002) Deep Blue. Artif Intell 134(1–2):57–83. https://doi.org/10.1016/S0004-3702(01)00129CrossRef
go back to reference Carlini N et al (2021) Extracting training data from large language models. Proc. USENIX Security 2021.
go back to reference Chubb J, Reed D, Cowling P (2024) Expert views about missing AI narratives: is there an AI story crisis? AI Soc 39(1):1107–1126. https://doi.org/10.1007/s00146-022-01548-2CrossRef
go back to reference Cohen S (2011) Folk devils and moral panics, 3rd edn. Routledge, BostonCrossRef
go back to reference Corbett-Davies S, Goel S (2018) The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv preprint https://arxiv.org/abs/1808.00023
go back to reference Cowgill B, Dell’Acqua F, Deng S, Hsu D, Verma N, Chaintreau A (2020) Biased programmers? Or biased data? A field experiment in operationalizing AI ethics (arXiv:2012.02394). arXiv preprint https://arxiv.org/abs/2012.02394
go back to reference Crawford K (2021) Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University PressCrossRef
go back to reference Critcher C (2003) Moral panics and the media. Open University Press/McGraw-Hill Education, Maidenhead
go back to reference Davis KE, Kingsbury B, Merry SE (2012) Indicators as a technology of global governance. Law Soc Rev 46(1):71–104. https://doi.org/10.1111/j.1540-5893.2012.00473.xCrossRef
go back to reference DeNardis (2014) The global war for Internet governance. Yale University Press, New Haven.
go back to reference DiMaggio P, Powell W (1983) The iron cage revisited. Am Sociol Rev 48(2):147–160CrossRef
go back to reference Doshi-Velez F, Kim B (2017) Towards a rigorous science of interpretable machine learning. arXiv. preprint https://arxiv.org/abs/1702.08608
go back to reference Dreyfus HL (1972) What computers can’t do: A critique of artificial reason. Harper & Row, New York
go back to reference Dubèl R, Wolfers L, Jonkman J et al (2025) The next media-fueled moral technology panic? News media’s and audience’s views on ChatGPT. AI Soc 40:6761–6781. https://doi.org/10.1007/s00146-025-02417-4CrossRef
go back to reference Eloundou, T, Manning S, Mishkin P, Rock D (2023) GPTs are GPTs: An early look at the labor market impact potential. arXiv. preprint https://arxiv.org/abs/2303.10130
go back to reference Etzkowitz H, Leydesdorff L (2000) The dynamics of innovation: from National Systems and “Mode 2” to a Triple Helix of university–industry–government relations. Res Policy 29(2–3):109–123. https://doi.org/10.1016/S0048-7333(99)00055-4CrossRef
go back to reference EU European Union (2024) AI Act (final political agreement). EUR-Lex.
go back to reference Eubanks V (2018) Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press, New York
go back to reference Felten E, Raj M, Seamans R (2021) Occupational, industry, and geographic exposure to artificial intelligence: a novel dataset and its potential uses. Strateg Manag J 42(10):1936–1964. https://doi.org/10.1002/smj.3345CrossRef
go back to reference Floridi L, Cowls J (2019) A unified framework of five principles for AI in society. Harv Data Sci Rev 1(1):535–545
go back to reference Gawer A, Cusumano MA (2014) Industry platforms and ecosystem innovation. J Prod Innov Manag 31(3):417–433CrossRef
go back to reference Gebru T et al (2021) Datasheets for datasets. Commun ACM 64(12):86–92. https://doi.org/10.1145/3458723CrossRef
go back to reference George A (2025) Beyond degrees: redefining higher education institutions as ethical AI hubs. AI & Soc 40:5599–5601. https://doi.org/10.1007/s00146-025-02303-zCrossRef
go back to reference Gillespie T (2018) Custodians of the Internet. Yale University Press, New Haven
go back to reference Goode E, Ben-Yehuda N (1994) Moral panics: culture, politics, and social construction. Am J Sociol 99(6):1492–1517
go back to reference Goodfellow I. Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio, Y (2014) Generative adversarial nets. In: Ghahramani Z, Welling M, Cortes C, Lawrence ND, Weinberger KQ (eds) Advances in Neural Information Processing Systems (Vol. 27, pp. 2672–2680). Curran Associates, New York.
Gorwa R (2019) What is platform governance? Inf Commun Soc 22(6):854–871
Harvey D (1989) The condition of postmodernity: An enquiry into the origins of cultural change. Blackwell, Oxford
Hier SP (2008) Thinking beyond moral panic: risk, responsibility, and the politics of moralization. Theor Criminol 12(2):173–190. https://doi.org/10.1177/1362480608089239
Hirsch-Kreinsen H (2024) Artificial intelligence: a “promising technology.” AI Soc 39:1641–1652. https://doi.org/10.1007/s00146-023-01629-w
Ho J, Jain A, Abbeel P (2020) Denoising diffusion probabilistic models. Adv Neural Inf Process Syst 33:6840–6851
Holmes W, Bialik M, Fadel C (2019) Artificial intelligence in education: Promises and implications for teaching and learning. Center for Curriculum Redesign, Boston
Holstein K, McLaren BM, Aleven V (2019) Co-designing a real-time classroom orchestration tool to support teacher–AI complementarity. J Learn Anal 6(2):27–52. https://doi.org/10.18608/jla.2019.62.3
International Labour Organization (ILO) (2023) Generative AI and jobs: A global analysis of potential effects on job quantity and quality. ILO, Geneva. https://www.ilo.org/
ISO (2023) ISO/IEC 42001:2023 Artificial intelligence — Management system. International Organization for Standardization, Geneva. https://www.iso.org/standard/81230.html
Jasanoff S (2003) Technologies of humility: citizen participation in governing science. Minerva 41(3):223–244
Jordan M, Mitchell T (2015) Machine learning: trends, perspectives, and prospects. Science 349:255–260
Kasneci E et al (2023) ChatGPT for good? On opportunities and challenges of large language models for education. Learn Individ Differ 103:102274
Kellogg KC, Valentine M, Christin A (2020) Algorithms at work. Annu Rev Sociol 46:365–389
Kenney M, Zysman J (2016) The rise of the platform economy. Issues Sci Technol 32(3):61–69
Krizhevsky A, Sutskever I, Hinton G (2012) ImageNet classification with deep convolutional neural networks. Adv Neural Inf Process Syst 25:1097–1105
Kroll JA, Huey J, Barocas S, Felten EW, Reidenberg JR, Robinson DG, Yu H (2017) Accountable algorithms. Univ Pa Law Rev 165(3):633–705
Kuhlmann S, Rip A (2018) Next-generation innovation policy and grand challenges. Sci Public Policy 45(4):448–454
Ma W, Adesope OO, Nesbit JC, Liu Q (2014) Intelligent tutoring systems and learning outcomes: a meta-analysis. J Educ Psychol 106(4):901–918. https://doi.org/10.1037/a0037123
Manyika J, Chui M, Miremadi M, Bughin J, George K, Willmott P, Dewhurst M (2017) Jobs lost, jobs gained: Workforce transitions in a time of automation. McKinsey Global Institute, San Francisco
Marwick A, Lewis R (2017) Media manipulation and disinformation online. Data & Society Research Institute. https://datasociety.net/library/media-manipulation-and-disinfo-online/
McCorduck P (2004) Machines who think: A personal inquiry into the history and prospects of artificial intelligence, 2nd edn. A K Peters, Natick
McDermott J (1982) R1/XCON at DEC. AI Mag 3(3):45–52
Mehrabi N, Morstatter F, Saxena N, Lerman K, Galstyan A (2021) A survey on bias and fairness in machine learning. ACM Comput Surv 54(6):1–35. https://doi.org/10.1145/3457607
Metcalf T (2025) AI safety and regulatory capture. AI Soc. https://doi.org/10.1007/s00146-025-02534-0
Mitchell M et al (2019) Model cards for model reporting. Proc FAccT 2019:220–229. https://doi.org/10.1145/3287560.3287596
Mittelstadt B (2019) Principles alone cannot guarantee ethical AI. Nat Mach Intell 1:501–507
Morley J, Floridi L, Kinsey L, Elhalal A (2021) Ethics as a service: a pragmatic operationalisation of AI ethics. Minds Mach 31(2):239–256. https://doi.org/10.1007/s11023-021-09540-2
Nilsson NJ (2010) The quest for artificial intelligence: A history of ideas and achievements. Cambridge University Press, Cambridge
NIST (2023) Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. National Institute of Standards and Technology, Gaithersburg
Noble S (2018) Algorithms of oppression: How search engines reinforce racism. NYU Press, New York
Noy S, Zhang W (2023) Experimental evidence on the productivity effects of generative AI. Science 381:187–192. https://doi.org/10.1126/science.adh2586
O’Neil C (2016) Weapons of math destruction: How big data increases inequality and threatens democracy. Crown, New York
OECD (2023) OECD AI in education: Policy and practice guidance. OECD Publishing, Paris
Ostrom E (2010) Beyond markets and states: polycentric governance of complex economic systems. Am Econ Rev 100(3):641–672
Pane JF, Steiner ED, Baird MD, Hamilton LS (2015) Continued progress: promising evidence on personalized learning. RAND Corporation. https://doi.org/10.7249/RR1365
Pasquale F (2015) The black box society. Harvard University Press, Cambridge
Peng R et al (2023) The impact of AI on developer productivity. arXiv preprint. https://arxiv.org/abs/2302.06590
Plantin JC, Lagoze C, Edwards PN, Sandvig C (2018) Infrastructure studies meet platform studies. New Media Soc 20(1):293–310
Power M (1997) The audit society: Rituals of verification. Oxford University Press, Oxford
Raji ID et al (2020) Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. Proc FAccT 2020:33–44. https://doi.org/10.1145/3351095.3372873
Reich J (2020) Failure to disrupt: Why technology alone can’t transform education. Harvard University Press, Cambridge
Rombach R, Blattmann A, Lorenz D, Esser P, Ommer B (2022) High-resolution image synthesis with latent diffusion models. arXiv preprint. https://arxiv.org/abs/2112.10752
Rudin C (2019) Stop explaining black box machine learning models for high-stakes decisions and use interpretable models instead. Nat Mach Intell 1(5):206–215. https://doi.org/10.1038/s42256-019-0048-x
Russell S, Norvig P (2021) Artificial intelligence: A modern approach, 4th edn (global edition). Pearson, New York
Sabel CF, Zeitlin J (2012) Experimentalist governance. In: Levi-Faur D (ed) The Oxford handbook of governance. Oxford University Press, Oxford, pp 169–183
Scott WR (2014) Institutions and organizations, 4th edn. Sage, Thousand Oaks
Selbst AD et al (2019) Fairness and abstraction in sociotechnical systems. Proc FAccT 2019:59–68. https://doi.org/10.1145/3287560.3287598
Selwyn N (2019) Should robots replace teachers? AI and the future of education. Polity Press, Cambridge
Shokri R, Stronati M, Song C, Shmatikov V (2017) Membership inference attacks against machine learning models. Proc IEEE Symp Secur Priv (SP) 2017:3–18. https://doi.org/10.1109/SP.2017.41
Shortliffe EH (1976) Computer-based medical consultations: MYCIN. Elsevier/North-Holland, New York
Silver D et al (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529:484–489
Sioumalas-Christodoulou K, Tympas A (2025) AI metrics and policymaking: assumptions and challenges in the shaping of AI. AI Soc 40:4655–4670. https://doi.org/10.1007/s00146-025-02181-5
Slade S, Prinsloo P (2013) Learning analytics: ethical issues and dilemmas. Am Behav Sci 57(10):1510–1529. https://doi.org/10.1177/0002764213479366
Spencer DA (2025) AI, automation and the lightening of work. AI Soc 40:1237–1247. https://doi.org/10.1007/s00146-024-01959-3
Srnicek N (2017) Platform capitalism. Polity, Cambridge
Strubell E, Ganesh A, McCallum A (2019) Energy and policy considerations for deep learning in NLP. Proc ACL 2019:3645–3650
Tegmark M (2017) Life 3.0: Being human in the age of artificial intelligence. Alfred A. Knopf, New York
Turing AM (1950) Computing machinery and intelligence. Mind 59:433–460. https://doi.org/10.1093/mind/LIX.236.433
UNESCO (2023) Guidance for generative AI in education and research. UNESCO, Paris
VanLehn K (2011) The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educ Psychol 46(4):197–221. https://doi.org/10.1080/00461520.2011.611369
Wang R (2025) Not just a plus: rethinking the “AI + Education” illusion. AI Soc. https://doi.org/10.1007/s00146-025-02458-9
Webb M (2020) The impact of AI on the labor market: Task-based evidence. Stanford Working Paper
Weidinger L et al (2022) Taxonomy of risks posed by language models. arXiv preprint. https://arxiv.org/abs/2112.04359
Wilczek B, Thäsler-Kordonouri S, Eder M (2025) Government regulation or industry self-regulation of AI? Investigating the relationships between uncertainty avoidance, people’s AI risk perceptions, and their regulatory preferences in Europe. AI Soc 40:3797–3811. https://doi.org/10.1007/s00146-024-02138-0
Williamson B (2017) Big data in education: The digital future of learning, policy and practice. Sage, Thousand Oaks
Winograd T (1972) Understanding natural language. Academic Press, New York
World Bank (2023) World development report 2023: Migrants, refugees, and societies. World Bank, Washington, DC. https://doi.org/10.1596/978-1-4648-1900-1
Yeung K (2018) A study of algorithmic decision-making tools in public administration: towards algorithmic regulation? Philos Technol 31(4):637–653
Zawacki-Richter O, Marín VL, Bond M, Gouverneur F (2019) Systematic review of research on artificial intelligence applications in higher education—where are the educators? Int J Educ Technol High Educ 16:39. https://doi.org/10.1186/s41239-019-0171-0
Zuboff S (2019) The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs, New York
CNN (2024) AI could pose ‘extinction-level’ threat to humans and US must take action, experts warn. https://edition.cnn.com/2024/03/12/business/artificial-intelligence-ai-report-extinction (Accessed 6 Jan 2026)
The Guardian (2020) AI is taking over the world – are we ready? https://www.theguardian.com/technology/ai-taking-over-world (Accessed Sept 2025)
The Guardian (2024) Big Tech has distracted world from existential risk of AI, says top researcher. https://www.theguardian.com/technology/article/2024/may/25/big-tech-existential-risk-ai-scientist-max-tegmark-regulations (Accessed 6 Jan 2026)
The Independent (2022) Can we trust AI to make ethical decisions? https://www.independent.co.uk/trust-ai-ethical-decisions (Accessed 30 Jun 2025)
BBC News (2023) AI: Humanity’s friend or foe? https://www.bbc.co.uk/news/ai-humanity-friend-foe (Accessed 30 Jun 2025)
The Times (2021) The rise of the machines: How AI could destroy jobs. https://www.thetimes.co.uk/ai-destroy-jobs (Accessed 30 Jun 2025)
