Journal of Informetrics

Volume 11, Issue 3, August 2017, Pages 704-712

Regular article
Quantifying perceived impact of scientific publications

https://doi.org/10.1016/j.joi.2017.05.010

Highlights

  • Researchers tend to prefer their own publications to those of others. This bias is so strong that they prefer their own publications even to the most cited papers in their field of research.

  • When assessing papers written by others, researchers perceive the impact of these papers in a way that is not strongly correlated with citation impact.

  • Citation impact and perceived impact are significantly correlated only when researchers are asked to judge their own work.

Abstract

We report on an empirical verification of the degree to which citation numbers represent scientific impact as it is actually perceived by experts in their respective fields. We ran a survey of about 2000 corresponding authors who performed a pairwise impact assessment task across more than 20,000 scientific articles. Results of the survey show that citation data and perceived impact do not align well unless one properly accounts for psychological biases that affect the opinions of experts with respect to their own papers vs. those of others. Researchers tend to prefer their own publications to the most cited papers in their field of research. There is only a mild positive correlation between the number of citations of top-cited papers and expert preference in pairwise comparisons. This holds even for pairs of papers whose total numbers of accumulated citations differ by several orders of magnitude. However, when researchers were asked to choose among pairs of their own papers, thus eliminating the bias favouring one's own papers over those of others, they did systematically prefer the most cited article. We conclude that, when scientists have full information and are making unbiased choices, expert opinion on impact is congruent with citation numbers.

Introduction

Metrics based on bibliographic data increasingly inform important decision-making processes in science, such as hiring, tenure, promotion, and funding (Bornmann & Daniel, 2006; Bornmann, Mutz, Marx, Schier, & Daniel, 2011; Bornmann, Wallon, & Ledin, 2008; Harzing, 2010; Hornbostel, Böhmer, Klingsporn, Neufeld, & von Ins, 2009; Lovegrove & Johnson, 2008). Many, if not most, of these bibliometric indicators are derived from article-level citation counts (Bar-Ilan, 2008; Van Noorden, 2010), e.g. the h-index (Hirsch, 2005) and the journal impact factor (Garfield, 2006), due to the wide availability of extensive bibliographic records. However, they rest on the frequently undeclared assumption that citations are an accurate and reliable reflection of scientific impact. Bornmann and Daniel (2008) and Sarigöl, Pfitzner, Scholtes, Garas, and Schweitzer (2014) find that citations can serve as reasonable approximations of social indicators such as popularity or success, but they do not support the most important assumption underlying most work in the area of citation-based impact indicators, namely that citations quantify scientific impact. Here, we investigate this assumption directly by determining whether citation data truly reflect scientific impact as it is perceived by expert scholars in their respective fields. This assumption is central to the long-standing debate about the proper use of citation data (Garfield, 1979). One common argument against the use of citation data in assessment exercises rests in particular on doubts about their validity as indicators of true scientific impact (Adler, Ewing, & Taylor, 2009; MacRoberts & MacRoberts, 1996, 2010). In fact, the literature frequently confounds “impact”, “influence”, and “rank” with citation-derived metrics (Radicchi, Fortunato, & Castellano, 2008; Wang, Song, & Barabási, 2013; Wuchty, Jones, & Uzzi, 2007), although these metrics can vary along multiple distinct dimensions (Bollen, Van de Sompel, Hagberg, & Chute, 2009; Bornmann & Daniel, 2008).
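
To make concrete how such indicators are derived from article-level citation counts, here is a minimal sketch of one of the indicators named above, the h-index (Hirsch, 2005): a scholar has index h if h of his or her papers have at least h citations each.

```python
# Minimal sketch of the h-index (Hirsch, 2005): a scholar has index h if
# h of their papers have at least h citations each.
def h_index(citations):
    """Compute the h-index from a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Example: four papers have at least 4 citations each, so h = 4.
assert h_index([10, 8, 5, 4, 3]) == 4
```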

Determining whether citations actually indicate scientific impact is an empirical question which cannot be resolved by theoretical discussions of the perceived or presumed benefits vs. demerits of citation data alone. For this reason, we decided to define and empirically quantify a novel post-publication metric, namely impact as perceived by the authors themselves. Our goal is to understand whether citation numbers “truly” reflect a particular dimension of impact: the influence or importance of papers in the daily practice of researchers.

We designed a large-scale survey at bigscience.soic.indiana.edu to collect responses from thousands of experienced scholars in a multitude of disciplines (Fig. A1). We asked researchers to make pairwise decisions, indicating which of two papers they preferred. These decisions were made under two conditions: the articles were written either by the expert scholars themselves or by other scholars. We then aggregated results over the entire population of respondents to quantify the degree of correlation between the pairwise preferences of respondents (i.e., perceived impact) and the actual difference in the number of citations accumulated by the pair of papers (i.e., citation impact). Our results indicate that, from the perspective of individual researchers, perceived impact and impact judged from citation data are related only for pairs of their own papers. Whenever a paper not co-authored by the respondent is involved in the estimation of perceived impact, the comparison shows null or negative correlation with citation impact.
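
As an illustration of this aggregation step, the sketch below (not the authors' code; the record layout is hypothetical) computes, for binned gaps in citation counts, the fraction of pairwise choices that favour the more-cited paper; a fraction near 0.5 would indicate no relation between perceived and citation impact.

```python
# Illustrative sketch of the aggregation step; the record layout is
# hypothetical, not the actual survey data format.
import math
from collections import defaultdict

def preference_by_citation_gap(responses, base=10):
    """Fraction of choices favouring the more-cited paper, binned by the
    order-of-magnitude gap between the two citation counts.

    responses: iterable of dicts with keys "cit_a", "cit_b" (citation
    counts of the two papers shown) and "chose_a" (True if the respondent
    preferred paper a).
    """
    bins = defaultdict(lambda: [0, 0])  # gap -> [agreements, total]
    for r in responses:
        hi = max(r["cit_a"], r["cit_b"])
        lo = min(r["cit_a"], r["cit_b"])
        gap = round(math.log(max(hi, 1) / max(lo, 1), base))
        # Agreement: the respondent chose the paper with more citations.
        agreed = r["chose_a"] == (r["cit_a"] >= r["cit_b"])
        bins[gap][0] += int(agreed)
        bins[gap][1] += 1
    return {gap: hits / total for gap, (hits, total) in sorted(bins.items())}

# Toy usage: one choice against the citation gap, one with it.
print(preference_by_citation_gap([
    {"cit_a": 1500, "cit_b": 12, "chose_a": False},  # gap ~2, disagreement
    {"cit_a": 40, "cit_b": 35, "chose_a": True},     # gap 0, agreement
]))  # -> {0: 1.0, 2: 0.0}
```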

Section snippets

Methods

To build the infrastructure needed for the survey, we took all scientific articles with a publication year up to 2013 from the Web of Science (WoS) database. We associated every article with the total number of citations accumulated until 2013 in the WoS citation network as an indication of its citation impact. We identified all articles that were associated with corresponding author(s) from three major public universities in the US: (1) Indiana University (IU), (2) University of Michigan
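
A minimal sketch of this citation-counting step is given below, assuming a simplified edge list of (citing_id, cited_id, citing_year) tuples; this layout is an assumption for illustration, not the actual WoS schema.

```python
# Hedged sketch of the citation-counting step; the edge-list layout is an
# assumed simplification of the WoS citation network, not its real schema.
from collections import Counter

def citation_counts(edges, cutoff_year=2013):
    """Total citations accumulated by each paper up to the cutoff year."""
    counts = Counter()
    for citing_id, cited_id, citing_year in edges:
        if citing_year <= cutoff_year:
            counts[cited_id] += 1
    return counts

# Toy usage: paper "p1" is cited twice before the cutoff, once after.
edges = [("a", "p1", 2010), ("b", "p1", 2012), ("c", "p1", 2015)]
print(citation_counts(edges)["p1"])  # -> 2
```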

Perceived impact of authored vs. top-cited papers

Fig. 1 summarizes the relation between citation and perceived impact obtained from the analysis of comparisons between one paper taken from the OWN pool and the other from the TCD pool. Naturally, a paper in the TCD pool is very likely to have a citation impact larger than an article from the OWN pool, as shown by the probability P(c_a > c_b | p_a = o, p_b = t) in Fig. 1a. Here, c_x denotes the total number of citations accumulated by paper x, and p_x denotes the pool of paper x (o stands for OWN, and t for TCD).
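
The quantity P(c_a > c_b | p_a = o, p_b = t) can be estimated empirically as in the sketch below (assumed data layout, not the authors' code): the fraction of OWN-vs-TCD comparison pairs in which the OWN paper has more citations than the TCD paper.

```python
# Hedged sketch: empirical estimate of P(c_a > c_b | p_a = o, p_b = t),
# i.e. the fraction of OWN-vs-TCD pairs where the OWN paper outcites the
# TCD paper. The (c_own, c_tcd) pair layout is an assumption.
def prob_own_more_cited(pairs):
    pairs = list(pairs)
    if not pairs:
        return float("nan")
    return sum(1 for c_own, c_tcd in pairs if c_own > c_tcd) / len(pairs)

# Toy usage: the TCD paper dominates in two of three pairs, so the
# estimate is 1/3; on real data this probability is expected to be small.
print(prob_own_more_cited([(120, 5400), (8, 950), (2300, 1800)]))
```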

Conclusions

In summary, our work quantifies the degree of correlation between citation impact and a new post-publication metric, namely impact as perceived by the authors themselves. Our survey serves to understand whether citation numbers “truly” reflect the impact of papers as it pertains to the daily practice of researchers. This is a particular and important dimension of impact that has not been quantified before.

In the present study, we neglected potential confounding factors that may affect the

Author contributions

Conceived and designed the analysis: Filippo Radicchi, Alexander Weissman and Johan Bollen.

Collected the data: Filippo Radicchi, Alexander Weissman.

Contributed data or analysis tools: Filippo Radicchi, Alexander Weissman.

Performed the analysis: Filippo Radicchi.

Wrote the paper: Filippo Radicchi and Johan Bollen.

Acknowledgements

We are grateful to all researchers who took part in our survey. This work uses Web of Science data by Thomson Reuters, provided by the Network Science Institute at Indiana University. This work is funded by the National Science Foundation (grants SMA-1446078 and SMA-1636636).

References (31)

  • J. Bar-Ilan, Informetrics at the beginning of the 21st century – A review, Journal of Informetrics (2008)
  • R. Adler et al., Citation statistics, Statistical Science (2009)
  • S.E. Asch, Studies of independence and conformity: A minority of one against a unanimous majority, Psychological Monographs (1956)
  • A.L. Barabási et al., Emergence of scaling in random networks, Science (1999)
  • J. Bethlehem, Selection bias in web surveys, International Statistical Review (2010)
  • J. Bollen et al., A principal component analysis of 39 scientific impact measures, PLoS ONE (2009)
  • L. Bornmann et al., Selecting scientific excellence through committee peer review – A citation analysis of publications previously published to approval or rejection of post-doctoral research fellowship applicants, Scientometrics (2006)
  • L. Bornmann et al., What do citation counts measure? A review of studies on citing behavior, Journal of Documentation (2008)
  • L. Bornmann et al., A multilevel modelling approach to investigating the predictive validity of editorial decisions: Do the editors of a high profile journal select manuscripts that are highly cited after publication?, Journal of the Royal Statistical Society: Series A (Statistics in Society) (2011)
  • L. Bornmann et al., Does the committee peer review select the best applicants for funding? An investigation of the selection process for two European molecular biology organization programmes, PLoS ONE (2008)
  • E. Garfield, Is citation analysis a legitimate evaluation tool?, Scientometrics (1979)
  • E. Garfield, The history and meaning of the journal impact factor, Journal of the American Medical Association (2006)
  • C.A. Goodhart, Problems of monetary management: The UK experience (1984)
  • A.W. Harzing, The publish or perish book (2010)
  • J.E. Hirsch, An index to quantify an individual's scientific research output, Proceedings of the National Academy of Sciences of the United States of America (2005)