
The Citation Merit of Scientific Publications

Abstract

We propose a new method to assess the merit of any set of scientific papers in a given field based on the citations they receive. Given a field and a citation impact indicator, such as the mean citation or the h-index, the merit of a given set of articles is identified with the probability that a randomly drawn set of articles from a given pool of articles in that field has a lower citation impact according to the indicator in question. The method allows for comparisons between sets of articles of different sizes and fields. Using a dataset acquired from Thomson Scientific that contains the articles published in the periodical literature in the period 1998–2007, we show that the novel approach yields rankings of research units different from those obtained by a direct application of the mean citation or the h-index.

Introduction

The scientific performance of a research unit (a university department, research institute, laboratory, region, or country) is often identified with its publications and the citations they receive. There is a variety of citation-based indicators for assessing the impact of a set of articles. Among the most prominent are the mean citation and the h-index, but there are many other possibilities. Regardless of the citation impact indicator used, the difficulty of comparing units that produce a different number of papers, even within a well-defined homogeneous field, must be recognized. To better visualize the problem, consider a concrete example. Suppose that we use the mean citation as the indicator. Consider the articles published in Mathematics in 1998 and the citations they receive until 2007. The mean citations of papers published in Germany and Slovenia are 5.5 and 6.3, respectively. However, Germany produced 1,718 articles and Slovenia only 62. According to the mean citation criterion the set of Slovenian articles has greater relative impact than the German set. We will see, however, that according to the novel proposal introduced in this paper the performance exhibited by Germany has greater merit than that of Slovenia. No doubt this is an extreme example, but it highlights a general difficulty that is present when comparing research units producing a different number of papers in the same field. Furthermore, we show that this difficulty in comparing sets of different sizes persists even if they are large. Thus, the problem in our example is not due to the small number of papers published in Slovenia. This difficulty is even more apparent for citation impact indicators that are size dependent, such as the h-index [1], [2].

Comparisons across fields are even more problematic. Because of large differences in publication and citation practices, the numbers of citations received by articles in any two fields are not directly comparable. Of course, this is the problem originally addressed by relative indicators recommended by many authors [3]–[11]. A convenient relative impact indicator is the ratio between the unit's observed mean citation and the mean citation for the field as a whole. Thus, after normalization, mean citations of research units in heterogeneous fields become comparable [12]. However, we argue that, as in the previous example of Germany and Slovenia, comparisons using normalized mean citations do not capture the citation merit of different sets of articles.

The main aim of this paper is to propose a method to measure the citation merit of a set of articles a research unit publishes in a homogeneous field over a certain period. It should be clarified at the outset that the merit is conditional on the indicator used (mean citation, h-index, median, percentage of highly cited papers, etc.) and on the set of articles used as reference, which we will call the "population of interest" (usually all the world articles published in a field in a given period). Thus, a given set of papers in a certain field and time period may have different merit depending on the citation impact indicator used. Given a citation impact indicator, our method allows for comparisons between sets of papers of different sizes and fields. Thus, we will be able to make statements like "The scientific publications of Department X in field A have a greater citation merit than the publications of Department Y in field B".

Our method is based on a very simple and intuitive idea. Given a field and a citation impact indicator, the merit of a given set of articles is identified with the percentile in which its observed citation impact lies in the distribution of citation impact values corresponding to all possible subsets of articles in that field. Suppose, for example, that the impact indicator is the mean citation, and that the population of interest is the set of all articles published in the world in a certain period in that field. In this case, the merit of a given set of papers is given by the probability that a randomly drawn set of articles in that field has a lower mean citation. Note that, since the merit of a set of papers of a research unit is associated with a percentile (or a probability), it is possible to compare two such percentiles for research units of different sizes working in different fields.
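In compact form, and anticipating the notation introduced in the Methods section (the symbols below are ours, and ties are handled through the cumulative distribution function defined there), the mean-citation case reads:

```latex
% Merit of a set A of n articles under the mean-citation indicator:
% the probability that a randomly drawn size-n subset \omega of the
% population of interest W has a mean citation no larger than A's.
m(A) \;=\; \Pr\!\left[\; \frac{1}{n}\sum_{j\in\omega} c_j \;\le\; \frac{1}{n}\sum_{i\in A} c_i \;\right],
\qquad \omega \text{ drawn uniformly from } \{\,\omega \subseteq W : |\omega| = n\,\}.
```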

This method resembles that used in other areas such as Pediatrics, where the growth status of a child is given by the percentile in which his/her weight lies within the weight distribution for children of the same age. In our case "same age" is equivalent to "same number of articles". There is, however, an essential difference: we do not compare the performance of a given research unit with that of other existing research units with a similar number of articles, but with the distribution generated by all possible subsets of articles from a given pool of articles.

A related idea that also distinguishes between citation impact and citation merit can be found in [13] for the evaluation of scientific excellence in geographical regions or cities. The citation impact indicator they use is the percentage of articles in a city that belong to the top 10% most highly cited papers in the world. As they say, "the number of highly-cited papers for a city should be assessed statistically given the number of publications in total". Thus, the scientific excellence of a city depends on the comparison between its observed and its expected number of highly cited papers.

The h-index has become very popular because it can be seen as capturing both quantity and quality. The original proposal by Hirsch [14] was designed for the evaluation of individual researchers, but it can be easily extended to research units (a research unit with $N$ articles has h-index $h$ if $h$ of its articles have at least $h$ citations each, and the remaining $N-h$ articles have no more than $h$ citations each). However, due to its dependence on the number of articles, research units that have more articles also tend to have higher h-index values. For the different institutions they study, Molinari and Molinari [1] show that a universal relation emerges across institutions that enables them to empirically decompose the h-index as the product of a size-independent factor and a size-dependent power of $N$, where $N$ is the number of papers. This factor is then used, among others, by Kinney [2], who compares the scientific output of several U.S. institutions. In our case, we do not need to rely on any empirical estimation of the relation between the h-index and size (moreover, our methodology can be applied to all citation impact indicators).
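As a concrete illustration of this indicator, the following is a minimal sketch of an h-index computation for a unit's citation counts; the NumPy-based implementation and the function name are ours, not part of the original paper.

```python
import numpy as np

def h_index(citations):
    """h-index of a set of articles: the largest h such that at least h
    of the articles have at least h citations each."""
    cites = np.sort(np.asarray(citations))[::-1]   # citations in descending order
    ranks = np.arange(1, cites.size + 1)           # 1, 2, ..., N
    qualifying = cites >= ranks                    # i-th most cited article has >= i citations
    return int(ranks[qualifying].max()) if qualifying.any() else 0
```

For instance, a unit with citation counts (10, 8, 5, 4, 3) has an h-index of 4, since four of its articles have at least four citations each.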

In order to implement our method, a large dataset with information about world citation distributions in different homogeneous fields is required. In most of this paper, we use a dataset acquired from Thomson Scientific, consisting of all articles published in 1998–2007 and the citations they received during this period. We show that our approach yields rankings of research units quite different from those obtained by a direct application of the mean citation and the h-index.

Methods

Consider a homogeneous scientific field (for example, Nuclear Physics, Molecular Biology, etc.). Suppose that we want to compare the relative merit of two sets of articles $A$ and $B$. Denote by $c^A$ the vector of citations received over a fixed period by the articles in $A$, and by $c^B$ the corresponding vector of citations for the articles in $B$. Denote by $W$ the set of articles used as a "population of interest", and by $c^W$ the vector of citations of the articles in $W$. We require that $A \subseteq W$, $B \subseteq W$. In most applications in the paper we take $W$ as the set of all articles published in the world in a given year in that field.

We next need some citation impact indicator $I$ such as, for example, the mean citation or the h-index. The mean citation is perhaps the most often-used indicator, but recently the h-index has also become popular. Our method is silent about which is the most appropriate citation impact indicator. Given an indicator $I$, we could compare $A$'s and $B$'s impact by comparing the numbers $I(c^A)$ and $I(c^B)$. As indicated in the Introduction, such a direct comparison has important drawbacks and is often misleading. Thus, we propose a way to compare the merit of any two vectors of citations using the information $I$, $c^A$, $c^B$, $c^W$, and $W$.

Denote by $\Omega_n$ the set of all subsets of $W$ of size $n$. We take $\Omega_n$ as our sample space, and the corresponding $\sigma$-algebra is given by all the subsets of $\Omega_n$, i.e. $2^{\Omega_n}$. We establish a probability function $P$ satisfying that all the simple events are equiprobable, i.e. $P(\{\omega\}) = 1/|\Omega_n|$ for every $\omega \in \Omega_n$, where $|\Omega_n|$ denotes the (finite) number of elements in the set $\Omega_n$, i.e. $|\Omega_n| = \binom{|W|}{n}$.

Given the measure space $(\Omega_n, 2^{\Omega_n}, P)$, we define the random variable $X_n(\omega) = I(c^\omega)$, which is just our chosen impact indicator restricted to sets of $n$ articles. The cumulative distribution function (CDF) of $X_n$, $F_n$, is defined by $F_n(x) = P(X_n \le x)$.

Note that $F_n(x)$ denotes the probability that a subset of $n$ articles from $W$ has a vector of citations $c^\omega$ such that $I(c^\omega) \le x$.

Definition.

The citation merit of a set $A$ of $n$ papers with citation vector $c^A$ is given by $F_n(I(c^A))$. We write $m(A) = F_n(I(c^A))$.

Thus, we associate the citation merit of $A$ with the percentile in which the number $I(c^A)$ lies in the distribution $F_n$.

It should be emphasized that to determine the merit of a set of articles we just have to calculate a percentile (or probability) which does not require any statistical inference exercise.

In many cases we know the analytical expression of the function $F_n$. For instance, in the case of the h-index, the function $F_n$ can be calculated exactly as described by Molinari and Molinari ([1], Equations A3, A6). This is a combinatorial formula that only requires knowing the vector of citations in the population of interest, $c^W$.

However, in other instances it might be difficult to calculate the analytical expression of $F_n$. In these cases, one could approximate $F_n$ by taking $S$ random draws of size $n$ from $W$. As in many empirical applications the cardinality of $\Omega_n$ is large, the number of draws should be large (in our applications we use 100,000). Thus, whenever it is difficult to compute the combinatorial formula that gives the exact value of $F_n$, we proceed as follows: Let $c^s$, $s = 1, \dots, S$, be the vector of citations obtained in the $s$-th draw. Apply the impact indicator $I$ to each of these sets and denote by $(I(c^1), \dots, I(c^S))$ the resulting vector. Let $\hat{F}_n$ be the distribution function associated with this vector, so that $\hat{F}_n(x)$ gives the percentage of components in the vector with a value equal to or less than $x$. Given a database with the information of $c^W$, this is a feasible and simple approach to approximate the probability $F_n(I(c^A))$.
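The resampling procedure just described could be sketched as follows; this is an illustrative implementation under our own naming conventions (citation_merit, NumPy), not the authors' code, and for the mean citation the inner loop can be vectorized for speed.

```python
import numpy as np

def citation_merit(unit_citations, world_citations, indicator=np.mean,
                   draws=100_000, seed=0):
    """Approximate the merit F_n(I(c^A)): the share of random size-n subsets
    of the population of interest W whose indicator value does not exceed
    the unit's observed value. Returned as a percentage."""
    rng = np.random.default_rng(seed)
    unit = np.asarray(unit_citations)
    world = np.asarray(world_citations)
    n = unit.size
    observed = indicator(unit)                              # I(c^A)

    below_or_equal = 0
    for _ in range(draws):
        subset = rng.choice(world, size=n, replace=False)   # a random element of Omega_n
        if indicator(subset) <= observed:
            below_or_equal += 1
    return 100.0 * below_or_equal / draws
```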

To further motivate our citation merit definition, think of the following hypothetical example. Consider a given field and period and suppose that each article has only one author, and each author has written only one article. Suppose that the research unit is a university department that has published $n$ papers, obtaining a citation impact level equal to $I(c^A)$. Suppose that instead of the actual department composition the chair could hire $n$ persons from the pool of world researchers who have written a paper in the same field, and let $c^\omega$ be the corresponding vector of citations. Assume that the chair of the department hires these people in a random way (so there is no difference from what a monkey would do). What would the probability be that $I(c^\omega)$, the citation impact level associated with such hypothetical random hiring, is lower than the actual value $I(c^A)$? Such probability is our citation merit value $m(A)$.

Coming back to the example presented in the Introduction about the articles in Mathematics of Slovenia and Germany, and judging by their mean citations of 6.3 and 5.5, Slovenia ranks higher than Germany. However, the merit values we obtain for the sets of papers of these two countries are 85.30 and 97.00, respectively. The probability that a set of 62 papers, randomly chosen from the pool of all papers published in Mathematics, has a mean citation lower than 6.3 is 85.30%, whereas the probability that a set of 1,718 papers has a mean citation lower than 5.5 is 97.00%. Thus, although the mean citation for Slovenia is higher than the mean citation for Germany, its merit is lower.
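Reusing the citation_merit and h_index sketches above, a hypothetical call would look like this; the citation counts are purely synthetic and are not the Thomson Scientific data behind the Slovenia and Germany figures.

```python
import numpy as np

rng = np.random.default_rng(1)
world = rng.poisson(3.0, size=50_000)               # synthetic stand-in for the field's citation vector c^W
unit = np.array([12, 9, 7, 7, 5, 4, 3, 1, 0, 0])    # a hypothetical unit's 10 articles

m_mean = citation_merit(unit, world, indicator=np.mean, draws=10_000)
m_h = citation_merit(unit, world, indicator=h_index, draws=10_000)
print(f"merit (mean citation): {m_mean:.1f}   merit (h-index): {m_h:.1f}")
```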

It is important to note that the result in this example is not just due to the fact that the "sample size" for Slovenia is very small (62 papers). Our empirical results provide many similar examples with larger numbers of papers. Consider, for example, the field of Engineering. Taiwan has 1,882 articles and a mean citation of 5.58. Scotland has 610 articles and basically the same mean citation, 5.54. However, the merits of these two sets of articles are 31.20 and 36.00, respectively. Thus, even in cases with a "large" number of articles, our merit function might rank sets of articles differently from the ranking obtained by the mean citation.

Given a citation impact indicator and a population of interest, the method just introduced allows us to compare sets of articles in the same field, and rank all of them in a unique way. Moreover, since the merit definition is associated with a percentile in a certain distribution, we can also make meaningful merit comparisons of sets of articles from different fields.

Results

We use a dataset acquired from Thomson Scientific, consisting of all publications in the periodical literature appearing in 1998–2007, and the citations they received during this period. Since we wish to address a homogeneous population, only research articles are studied in this paper. After disregarding review articles, notes, and articles with missing information about Web of Science category or scientific field, we are left with 8,470,666 articles. For each article, the dataset contains information about the number of citations received from the year of publication until 2007 (see [15] for a more detailed description of this database).

We only consider two citation impact indicators: the mean citation and the h-index. As already indicated, in the case of the h-index our merit function can be calculated analytically. In the case of the mean, the precise combinatorial formula for $F_n$ is complicated and, in practice, not feasible given the large size of our datasets. We therefore use the approximation described above.

Since the mean and the standard deviation of $c^W$ are known, one could think of approximating $F_n$ using the Central Limit Theorem, at least for research units with large numbers of articles. However, for all scientific fields the distribution of citations is heavily skewed [15]–[19], and the underlying distribution might not have a finite variance, so the Central Limit Theorem could fail even for research units with a large number of articles. (We have indeed checked that, for the scientific fields used in the paper and for sample sizes given by the number of papers published by the research units considered, the distribution of the means of random samples is far from a normal distribution.)
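To see why a normal approximation can be unreliable here, one can simulate the sampling distribution of the mean from a heavy-tailed population; the snippet below uses a synthetic Pareto-type population with infinite variance as a stand-in for a real citation distribution, so it illustrates the mechanism rather than reproducing our data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic heavy-tailed stand-in for a field's citation distribution
# (Pareto tail index 1.5, so the population variance is infinite).
population = np.floor(3.0 * rng.pareto(1.5, size=200_000)).astype(int)

n = 1_718                                      # size of a "large" research unit
sample_means = np.array([rng.choice(population, size=n, replace=False).mean()
                         for _ in range(2_000)])
skew = np.mean((sample_means - sample_means.mean())**3) / sample_means.std()**3
print(f"skewness of the simulated sampling distribution of the mean: {skew:.2f}")
# A normal distribution has skewness 0; here the value is typically well above 0,
# so normal-based percentiles would be unreliable even for large n.
```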

Countries

In a first exercise, research units are countries, and the homogeneous fields are identified with the 22 broad fields distinguished by Thomson Scientific, 20 in the natural sciences and two in the social sciences. In an international context we must confront the problem raised by cooperation between countries: what should be done with articles written by authors belonging to two or more countries? Although this old issue admits different solutions (see [20] for a discussion), in this paper we side with many other authors in following a multiplicative strategy (see [21]–[23]). Thus, in every internationally co-authored article a whole count is credited to each contributing area. Excluding the Multidisciplinary field, for each of the remaining 21 fields we compute the citation merit of the papers of each country according to the mean citation and the h-index, taking as the population of interest all papers published in the world in the corresponding field. We exclude the Multidisciplinary field because of the high heterogeneity of some of its journals. In doing so we exclude many high-impact articles published in, for example, Nature, Science and PNAS. One should incorporate such articles into their corresponding fields. This is, however, a laborious task that is beyond the scope of this paper. Figure 1 illustrates an example of our methodology when citation impact is measured by the h-index for the articles published in 1998 in the field of Physics, their citations until 2007, and a selection of countries. For each value of $n$, Figure 1 shows the value of the h-index corresponding to percentiles 10, 25, 50, 75 and 90 of the corresponding distribution $F_n$, as well as the number of articles published by each country and its associated h-index.

Figure 1. Field of Physics.

Papers published in 1998. Curves represent, for each number of articles, the value of the h-index for which the merit is 10.00, 25.00, 50.00, 75.00 and 90.00, respectively. Points represent the number of papers and the h-index values of different countries.

https://doi.org/10.1371/journal.pone.0049156.g001

Note that by just observing the h-index of, for example, Japan, France, Germany, and Switzerland, it is difficult to assess their relative merit. The reason, of course, is that the h-index is highly dependent on the number of articles. Thus, since Japan (9,600 articles), France (6,056), and Germany (9,598) produce more articles than Switzerland (2,028), they also have a higher h-index. However, with our method we are able to compare the publications of these countries using their merit values, the percentiles in which the observed h-indices lie. It turns out that obtaining by chance an h-index as high as that of Switzerland (with 2,028 papers) is a much more "unlikely" event than obtaining the h-index of any of the other three countries with their corresponding numbers of articles. Thus, our method assigns more merit to Switzerland (basically percentile 100) than to Japan (percentile 5.40), France (percentile 99.00), and Germany (percentile 99.90). Figure 1 also shows that the U.S. produces the largest number of articles, has the highest h-index and, according to our methodology, basically reaches the 100th percentile. This is a feature that appears in most of the 21 fields that we have analyzed.

Table 1 continues with the case of articles published in Physics in 1998, and equivalent tables for the remaining 20 fields are available as supplementary information to this paper (Tables S1). For the forty countries with the largest production, the tables provide the h-index, the mean citation, the corresponding merit values, and the confidence intervals for our approximation in the case of the mean citation. For example, Italy has an h-index of 81, the sixth highest value in our sample. But if we look at the merit index, it falls to the eleventh position. Either of the two impact indicators and its corresponding merit index produce different rankings, and there are many examples where the discrepancy between the two is very large. Thus, our methodology delivers outcomes that are quite different from those obtained by the direct use of the mean citation or the h-index criterion.

In some cases our methodology cannot discriminate enough between countries with very high merit indices. Consider, for example, the case of Clinical Medicine in Table 2, where Column 4 shows the merit index for a selection of countries when citation impact is measured by the h-index. All these countries, except Germany, have a very similar merit index, close to 100%. The reason for this result is that we are using as population of interest all articles published in the world, and the quality of the articles published by this selection of countries is much higher than that of the rest of the world. Therefore, there are not many subsets of articles with a citation impact as high as those observed in the countries in question. One possible way to discriminate among these "very high quality" sets of articles is to take as population of interest, $W^*$, only the articles published in these countries. Column 5 in Table 2 shows the citation merit index in this case. Notice that when $W$ contains all the papers published in the world, France reaches the 99.4 percentile. However, in the case of $W^*$ (a set of papers of much higher quality than $W$), basically about half of all subsets of size 13,822 have an h-index higher than that of France (140). Thus, in this case France's percentile is 55.3. Notice that changing the population of interest might produce a re-ranking of the citation merit. When $W$ is used, England obtains a higher citation merit than Belgium. However, the opposite is the case when the population of interest is $W^*$. This possibility of re-ranking is not surprising, since our notion of merit is based on comparing the observed h-index with the probability of obtaining sets of articles with lower h-indices, and such probability depends on the distribution function associated with the population of interest.

University Departments and Laboratories

It could be argued that the broad fields analyzed so far are, in effect, too heterogeneous, a fact that may well diminish the value of our results. In this subsection we present comparisons of the merit of the publications of some selected university departments and laboratories in two more homogeneous scientific sub-fields. (Thomson Scientific assigns articles to 219 Web of Science categories through the journals in which they have been published, but many journals are simultaneously assigned to two or more categories. For a discussion of the strategies to deal with the problem raised by the multiple assignment of articles to Web of Science categories, see [24].) Tables 3 and 4 show the performance of some institutions in the sub-fields of Neuroscience and Economics, respectively (the data on the papers published by members of these departments have been obtained from the Web of Science of Thomson Scientific). The tables show the number of papers, the h-index, the mean citation, and the corresponding merit values.

As before, there are significant discrepancies between the ranking according to the direct citation impact indicator (h-index or mean citation) and our merit function. Notice that many departments get a merit value equal to or very close to 100%. As already explained in the case of Clinical Medicine in Table 2, this is not surprising, since all of them are top departments and the probability of obtaining by chance articles with such a high mean citation, or h-index, from the set of world papers must be close to zero. As before, this lack of discrimination among top departments can be fixed by considering a different population of interest $W^*$. In addition, for the case of the mean citation we can increase the number of random subsets used to estimate $F_n$. So far, in our empirical results we have always drawn 100,000 random subsets (for each $n$). This might be more than enough for intermediate percentiles, but not for percentiles very close to 100. However, given the purpose of this paper, we find it of no practical importance that, for example, the differences in Table 3 between percentiles 99.96 and 99.49 are not statistically significant.

Discussion

In this paper we have proposed a new, simple and intuitive method to assess the citation merit of any set of scientific papers in any field. One advantage of our approach is that it can be applied to a variety of problems. For example, it might be applied to rank scientific journals. The merit of a journal that publishes $n$ articles in a year in a given field would be given by the probability that a subset of $n$ articles in that field is of lower quality according to some criterion such as the mean citation or the h-index (note that the merit of a journal is not the same as the merit of the authors who publish in the journal). A second advantage is the possibility of comparing the scientific merit of research units in different fields. This can be done because the merit of each research unit is associated with a probability (or percentile) that might be reasonable to compare across different fields.

As far as international cooperation is concerned, it is well known that domestic and international publications are characterized by very different citation rates. Therefore, using whole counts, as we have done in this paper, or following the recommendation in [20] in favor of fractionalized counts to calculate citation indicators at the national level, might make a significant difference that would be worth investigating.

In the empirical application of the method we have used two well-known and vastly different citation impact indicators: the mean citation and the h-index. However, recall that, given their high skewness, the upper and lower parts of citation distributions are typically very different. Consequently, average-based indicators, such as the mean citation, may not adequately summarize these distributions. On the other hand, both the h-index and many of the indicators of the same family have been criticized by some authors (see [25]–[27]). Therefore, it may be worthwhile to study the merit of research units according to some of the new indicators that are rapidly being suggested (see [26], [28]–[32]).

It is important to note that our approach does not try to make any inference about the underlying model explaining the scientific output of the different units. For an overall assessment of the relative merit or performance of a research unit we should take into account many other variables, such as the budget, the number of researchers, etc. Two research units with the same merit according to a set of citation indicators, as understood in this paper, may vastly differ in the productivity of their research staff or, more generally, in the efficiency with which scientific results are obtained from a complex input vector. Thus, we only provide a method to assess a research unit's performance in a certain dimension, quite independently of the underlying model explaining why different units produce scientific publications of different citation impact and citation merit.

Supporting Information

Tables S1.

Analysis of the 40 countries with the largest number of papers in the corresponding field.

https://doi.org/10.1371/journal.pone.0049156.s001

(XLSX)

Acknowledgments

Conversations with P. Albarrán, J. Rodríguez-Puerta and P. Poncela are gratefully acknowledged. We also thank M.D. Collado and two anonymous referees for their helpful comments.

Author Contributions

Conceived and designed the experiments: JAC IO-O JR-C. Performed the experiments: JAC IO-O JR-C. Analyzed the data: JAC IO-O JR-C. Contributed reagents/materials/analysis tools: JAC IO-O JR-C. Wrote the paper: JAC IO-O JR-C.

References

  1. Molinari J, Molinari A (2008) A new methodology for ranking scientific institutions. Scientometrics 75: 163–174.
  2. Kinney A (2007) National scientific facilities and their science impact on nonbiomedical research. PNAS 104: 17943–17947.
  3. Moed H, Burger W, Frankfort J, van Raan A (1985) The use of bibliometric data for the measurement of university research performance. Research Policy 14: 131–149.
  4. Moed H, De Bruin R, van Leeuwen T (1995) New bibliometric tools for the assessment of national research performance: Database description, overview of indicators, and first applications. Scientometrics 33: 381–422.
  5. Van Raan AFJ (2004) Measuring science. In: Moed et al., editors. Handbook of Quantitative Science and Technology Research. 19–50.
  6. Schubert A, Glänzel W, Braun T (1983) Relative citation rate: A new indicator for measuring the impact of publications. Proceedings of the First National Conference with International Participation in Scientometrics and Linguistics of Scientific Text. Varna.
  7. Braun T, Glänzel W, Schubert A (1985) Scientometric indicators. A 32-country comparison of publication productivity and citation impact. Singapore, Philadelphia: World Scientific Publishing Co. Pte. Ltd.
  8. Schubert A, Braun T (1986) Relative indicators and relational charts for comparative assessment of publication output and citation impact. Scientometrics 9: 281–291.
  9. Glänzel W, Schubert A, Braun T (2002) A relational charting approach to the world of basic research in twelve science fields at the end of the second millennium. Scientometrics 55: 335–348.
  10. Vinkler P (1986) Evaluation of some methods for the relative assessment of scientific publications. Scientometrics 10: 157–177.
  11. Vinkler P (2003) Relations of relative scientometric indicators. Scientometrics 58: 687–694.
  12. Radicchi F, Fortunato S, Castellano C (2008) Universality of citation distributions: Toward an objective measure of scientific impact. PNAS 105: 17268–17272.
  13. Bornmann L, Leydesdorff L (in press) Which cities produce more excellent papers than can be expected? A new mapping approach, using Google Maps, based on statistical significance testing. Journal of the American Society for Information Science and Technology.
  14. Hirsch J (2005) An index to quantify an individual's scientific research output. PNAS 102: 16569–16572.
  15. Albarrán P, Crespo J, Ortuño I, Ruiz-Castillo J (2011) The skewness of science in 219 subfields and a number of aggregates. Scientometrics 88: 385–397.
  16. Seglen P (1992) The skewness of science. Journal of the American Society for Information Science 43: 628–638.
  17. Schubert A, Glänzel W, Braun T (1985) A new methodology for ranking scientific institutions. Scientometrics 12: 267–292.
  18. Glänzel W (2007) Characteristic scores and scales: A bibliometric analysis of subject characteristics based on long-term citation observation. Journal of Informetrics 1: 92–102.
  19. Albarrán P, Ruiz-Castillo J (2011) References made and citations received by scientific articles. Journal of the American Society for Information Science and Technology 62: 40–49.
  20. Aksnes D, Schneider J, Gunnarsson M (2012) Ranking national research systems by citation indicators. A comparative analysis using whole and fractionalised counting methods. Journal of Informetrics 6: 36–43.
  21. May R (1997) The scientific wealth of nations. Science 275: 793–796.
  22. King D (2004) The scientific impact of nations. Nature 430: 311–316.
  23. Albarrán P, Crespo J, Ortuño I, Ruiz-Castillo J (2010) A comparison of the scientific performance of the U.S. and Europe at the turn of the 21st century. Scientometrics 88: 329–344.
  24. Herranz N, Ruiz-Castillo J (2012) Multiplicative and fractional strategies when journals are assigned to several sub-fields. Journal of the American Society for Information Science and Technology. In press.
  25. Bouyssou D, Marchant T (2011) Ranking scientists and departments in a consistent manner. Journal of the American Society for Information Science and Technology 62: 1761–1769.
  26. Bouyssou D, Marchant T (2011) Bibliometric rankings of journals based on impact factors: An axiomatic approach. Journal of Informetrics 5: 75–86.
  27. Waltman L, van Eck N (2012) The inconsistency of the h-index. Journal of the American Society for Information Science and Technology. DOI: 10.1002/asi.21678. In press.
  28. Albarrán P, Ortuño I, Ruiz-Castillo J (2011) High- and low-impact citation measures: Empirical applications. Journal of Informetrics 5: 122–145.
  29. Ravallion M, Wagstaff A (2011) On measuring scholarly influence by citations. Scientometrics 88: 321–337.
  30. Leydesdorff L, Bornmann L (2011) Integrated impact indicators (I3) compared with impact factors (IFs): An alternative research design with policy implications. Journal of the American Society for Information Science and Technology 62: 2133–2146.
  31. Leydesdorff L, Bornmann L, Mutz R, Opthof T (2011) Turning the tables on citation analysis one more time: Principles for comparing sets of documents. Journal of the American Society for Information Science and Technology 62: 1370–1381.
  32. Rousseau R (2012) Basic properties of both percentile rank scores and the I3 indicator. Journal of the American Society for Information Science and Technology. DOI: 10.1002/asi.21684. In press.