
An Analysis of Peer-Reviewed Scores and Impact Factors with Different Citation Time Windows: A Case Study of 28 Ophthalmologic Journals

  • Xue-Li Liu ,

    hrcsj2009@163.com

    Affiliations Periodicals Publishing House, Xinxiang Medical University, Xinxiang, Henan Province, China, Henan Research Center for Science Journals, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Shuang-Shuang Gai,

    Affiliation Henan Research Center for Science Journals, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Shi-Le Zhang,

    Affiliation Henan Research Center for Science Journals, Xinxiang Medical University, Xinxiang, Henan Province, China

  • Pu Wang

    Affiliation Xiamen University Tan Kan Kee College Library, Zhangzhou, Fujian Province, China

Abstract

Background

An important attribute of the traditional impact factor is its controversial 2-year citation window. Several scholars have proposed using different citation time windows for evaluating journals, but it has not been confirmed whether a longer citation time window would be better. How do the journal evaluation effects of 3IF, 4IF, and 6IF compare with those of 2IF and 5IF? To answer these questions, we conducted a comparative study of impact factors with different citation time windows against the peer-reviewed scores of ophthalmologic journals indexed in the Science Citation Index Expanded (SCIE) database.

Methods

The peer-reviewed scores of 28 ophthalmologic journals were obtained through a self-designed survey questionnaire. Impact factors with different citation time windows (2IF, 3IF, 4IF, 5IF, and 6IF) for the 28 journals were computed and compared, in accordance with each impact factor's definition and formula, using the citation analysis function of the Web of Science (WoS) database. We then analyzed the correlations between impact factors with different citation time windows and the peer-reviewed scores.

Results

Although the impact factor values differed across citation time windows, the windows were highly correlated with one another when used to evaluate journals. For ophthalmologic journals' 2013 impact factors, 3IF and 4IF appeared to be the ideal citation time windows, judged by their correlation with peer-reviewed scores. In addition, the 3-year and 4-year windows were quite consistent with the cited peak age of documents published in ophthalmologic journals.

Research Limitations

Our study is based on ophthalmology journals, and we analyzed impact factors with different citation time windows only for 2013; it has yet to be ascertained whether other disciplines (especially those with a later cited peak) or other years would follow the same or similar patterns.

Originality/Value

We designed the survey questionnaire ourselves, specifically to assess the real influence of journals. We used peer-reviewed scores to judge the journal evaluation effect of impact factors with different citation time windows. The main purpose of this study was to help researchers better understand the role of impact factors with different citation time windows in journal evaluation.

Introduction

Thomson Reuters announced the launch of an enhanced edition of Journal Citation Reports (JCR) on January 22, 2009 [1,2]. This enhanced JCR updated the 2007 edition (JCR-2007) released in July 2008, in part by adding three important bibliometric indicators: the Eigenfactor Score (EFS), the Article Influence Score (AIS), and the 5-year Impact Factor (5IF). 5IF was an important supplement to the traditional 2-year Impact Factor (2IF), the only distinction between them being the citation time window used to calculate the impact factor (a 5-year window for the former and a 2-year window for the latter) [3,4].

Some scholars [5] objected to the use of a 2-year citation time window for calculating impact factors as early as 1997, arguing that while a 2-year citation window was suitable for some subjects, it was not suitable for others. Most scholars believed that the citation time window used for calculating 2IF was too short and lacked statistical rationality [4, 6, 7, 8]. The 5IF emerged from just such a background, and it has attracted widespread attention from scholars around the world [9–15] ever since it became a JCR journal evaluation indicator. However, would a longer citation time window be better? How do the journal evaluation effects of 3IF, 4IF, and 6IF compare with those of 2IF and 5IF? To answer these questions, we conducted a comparative study of impact factors with different citation time windows, using ophthalmologic journals indexed by SCIE as our case studies.

Literature Review

Journal impact factor

It has been over fifty years since the concept of the impact factor was first proposed in 1955 by Garfield, the noted American bibliometrics expert [16–18]. In 1963, Garfield and Sher proposed using the impact factor to re-evaluate the influence of journals [19]; Garfield officially established a precise concept and method of computing the impact factor in 1972 [20]. The impact factor became a real bibliometric indicator for scientific journal evaluation in 1975, when the JCR database was built. Now confirmed as an important journal evaluation tool, the impact factor has attracted widespread interest and been widely applied [21–24]. However, its increasingly popular application as a tool for journal evaluation has revealed several major defects. Its main flaws have been identified as follows:

  1. The two-year citation window used to calculate the impact factor is too short to evaluate the real impact of publications, especially those with a later cited peak, as occurs in many social sciences, where citations mature more slowly [25,26].
  2. "Journal impact factor," as defined by the JCR database, does not exclude self-citations; this oversight has enabled many journals to manipulate their impact factor levels by increasing self-citations.
  3. Although the numerator in the impact factor calculation counts all citations received in the statistical year by documents published in the previous two years, the denominator includes only the articles and reviews ("citable items") published in those years; this asymmetry can inflate the impact factors of journals that publish many other document types, such as editorials and letters.
  4. The method of computing the impact factor does not accord with statistical principles. Citations of papers do not follow a normal distribution, and when data are not normally distributed the mean is a poor parameter for describing them; despite this, the journal impact factor is simply the average number of citations received in a statistical year by papers published in the previous two years.
  5. The impact factor cannot be used to compare journals across disciplines, which greatly limits its value.

As the use of impact factor in journal evaluation rapidly increased, debates about its various shortcomings became more and more fierce. Although the debate about the validity of impact factor in journal evaluation has never been fully resolved, the current study focuses mainly on the problem of the citation time window in assessing impact factor.

Citation time window of impact factor

An important attribute of the traditional impact factor was the controversial 2-year citation window developed by Martyn and Gilchrist [27]. The 2-year citation window used by JCR has been repeatedly criticized within the bibliometric community; there is a consensus that a citation window of only two years is far too short to calculate impact factor in many fields. For example, Huang and Lin argued that calculating journal impact factor over two years was too short and unfair to many disciplines, and proposed either a longer general citation window or different citation windows for different fields [28]. Other researchers thought that impact factor was a lagging indicator with a narrower time window; in many fields, citations accumulate slowly and the 2-year time window seems too short [29]. Campanario also argued that the two-year citation window was too brief to capture all relevant scientific impact [15]. In place of the shorter citation time window of 2IF, some scholars suggested using a longer citation window, while others proposed an evaluation indicator that could be used for any set of documents with any citation window. Dorta-Gonzalez and Dorta-Gonzalez wrote that, while in some cases two years was long enough to measure performance, in other cases, three or more years could be necessary [8]. Leydesdorff, Zhou & Bornmann pointed out that Thomson Reuters had extended the IF through a five-year variant (5IF) in response to criticism that a 2-year citation window might be too short for disciplines with slower turnovers [30]. Vanclay argued that, in social work, eight to ten years seemed a minimally appropriate time period [31]; this was consistent with recommendations to increase the citation window of Thomson ISI impact factors to ten years [32]. Leydesdorff & Bornmann proposed the I3 indicator (Integrated Impact Indicators) that could be used with any citation window [33]. 
Indicators for the Scopus database, such as SNIP and SJR, used a 3-year citation time window and a different definition of citable items [34].

To learn more about the evaluation effect of impact factors with different citation time windows, and to investigate the potential benefits of a longer citation time window, we conducted this study to provide a useful reference work for researchers in the field of bibliometrics.

Methodology

Research objects

There were 56 ophthalmologic journals in JCR-2012. We selected the 28 journals with self-citation rates below 20% that were published in the United States and indexed by the SCIE database between 2007 and 2012. Considering that ophthalmologic scholars in one country may not be familiar with journals published in other countries, we selected journals published in the United States to avoid scoring errors in the questionnaire survey. In addition, we selected journals with a self-citation rate below 20% so that excessive self-citation would not distort the impact factors of the various journals. To calculate journals' impact factors using different citation time windows, we required journals to have been indexed in the SCIE database between 2007 and 2012 (for example: a journal's 6IF in 2013 = citations in 2013 of papers published between 2007 and 2012 / the number of citable documents published by the journal between 2007 and 2012).
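The window-based calculation in the parenthetical example can be sketched in a few lines of Python (a minimal sketch with hypothetical counts; `citations` and `citable` are assumed per-year tallies for a single journal, not real data from this study):

```python
def n_year_if(citing_year, window, citations, citable):
    """Impact factor with an n-year citation window: citations received in
    `citing_year` by items published in the preceding `window` years,
    divided by the citable items published in those same years."""
    years = range(citing_year - window, citing_year)
    numerator = sum(citations.get(y, 0) for y in years)
    denominator = sum(citable.get(y, 0) for y in years)
    return numerator / denominator if denominator else 0.0

# Hypothetical per-year counts for one journal (illustration only):
# citations[y] = citations received in 2013 by items published in year y
citations = {2007: 120, 2008: 150, 2009: 180, 2010: 210, 2011: 240, 2012: 200}
citable = {2007: 100, 2008: 100, 2009: 110, 2010: 120, 2011: 130, 2012: 140}

if2 = n_year_if(2013, 2, citations, citable)  # 2IF: 2011-2012 window
if6 = n_year_if(2013, 6, citations, citable)  # 6IF: 2007-2012 window
```

The same function yields 3IF, 4IF, and 5IF by changing the `window` argument, which is why all five indicators can be computed from one set of WoS citation counts.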

Research methods

Methods of calculating impact factor in 2013.

Although the JCR database releases journals' 2IF and 5IF annually, our study also involved 3IF, 4IF, and 6IF, which cannot be obtained from the JCR database. For this reason, we calculated impact factors with different citation time windows by hand, using the citation analysis function of the Web of Science (WoS) database. Such computing methods have been explained in various documents [35,36]. For comparison purposes, 2IF and 5IF were also obtained by manual computation rather than taken from the values released in the JCR.

Peer-reviewed scores of journals.

We designed a survey questionnaire about the influence of different ophthalmologic journals on ophthalmologic researchers worldwide to collect peer-reviewed journal scores. Our survey on the influence of journals was designed to mirror their real status in the minds of researchers [37,38]. Harnad considered peer evaluation the most important recognized standard and method for verifying the validity of a citation index [39]. Because the subject classification in the JCR database is clear and easily obtained, we selected ophthalmologic journals and authors indexed in the SCIE database as respondents. We reasoned that authors who had published papers in SCIE-indexed ophthalmologic journals were professional ophthalmology researchers with a deep understanding of the journals in their field, and could therefore evaluate journals' impact from a professional point of view. In addition, to keep journal rankings from influencing questionnaire scores, we listed the ophthalmologic journals in alphabetical order and asked respondents to score each journal's academic impact from 1.0 to 10.0 (to one decimal place; "academic impact" in the questionnaire did not mean "impact factor" or any other indicator, but rather how useful and relevant the journal's articles were to the respondent's research, or how important the respondent considered the journal within ophthalmology). We obtained 8525 valid e-mail addresses of authors who published papers indexed in the SCIE database from 2008 to 2012 and successfully sent 7742 questionnaires. We received 291 responses, a response rate of 3.76%, consistent with the result Shao Peiji [40] reported for e-mail questionnaire surveys. Of these, 239 responses were valid, a valid-response rate of 82.13%. Responses came from 36 countries, 61 of them from American researchers.
In the current study, we thoroughly processed the peer-reviewed results [41] for ophthalmologic journals from our 2012 questionnaires. As noted, all evaluation results involved American ophthalmologists and researchers reviewing ophthalmologic journals published in the United States, in order to avoid distorting our figures by asking peer-review experts about unfamiliar journals from other countries or regions. Through this process, we received 61 valid responses from American scholars evaluating American journals. We took the total score for each journal as its final score. If a scholar left a section blank or marked a journal "not familiar," we scored that response as zero.
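The aggregation rule above can be sketched as follows (a minimal sketch; the function name, the `None` encoding for blank or "not familiar" answers, and the scores are all assumptions for illustration):

```python
def journal_total_score(responses):
    """Total peer-reviewed score for one journal across all respondents.
    Blank or 'not familiar' answers (encoded here as None) count as zero,
    following the scoring rule described in the text."""
    return sum(s for s in responses if s is not None)

# Hypothetical scores from five respondents for a single journal
scores = [8.5, None, 7.0, 9.2, None]
total = journal_total_score(scores)
```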

Ophthalmologic journals’ cited peak age.

We retrieved papers published by the 28 ophthalmologic journals for every year between 2001 and 2006. Through the citation analysis function of the WoS database, we analyzed changes in the number of citations for each year after the papers were published. Finally, we ascertained the peak age for citations of ophthalmologic journals, using the diachronic method [42].
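The diachronic peak detection described here can be sketched as follows (hypothetical citation counts; age 1 means the first year after publication):

```python
def cited_peak_age(counts_by_age):
    """Diachronic cited peak age: given citation counts at ages 1, 2, 3, ...
    years after publication, return the age with the most citations."""
    peak_index = max(range(len(counts_by_age)), key=lambda i: counts_by_age[i])
    return peak_index + 1  # convert 0-based index to age in years

# Hypothetical citation counts at ages 1-6 for papers published in one year
counts = [40, 95, 130, 120, 90, 70]
peak = cited_peak_age(counts)  # citations peak at age 3 in this example
```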

Statistical process.

SPSS 22.0 statistical software was used. Correlations between indicators were analyzed using the Spearman rank correlation test. Comparisons between impact factors with different citation time windows were made using the Kruskal-Wallis H test. The significance level was α = 0.05.
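The Spearman test itself was run in SPSS; as a rough sketch of what it computes, here is a minimal pure-Python rank correlation (no tie correction, hypothetical data, not the SPSS implementation):

```python
def _ranks(values):
    """Rank values 1..n in ascending order (assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

def spearman_rho(x, y):
    """Spearman's rho as the Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical peer-reviewed scores vs. impact factors for four journals;
# the two series are perfectly monotone, so rho is 1 (up to float rounding)
scores = [9.1, 6.4, 7.8, 5.2]
ifs = [4.0, 1.9, 2.7, 1.1]
rho = spearman_rho(scores, ifs)
```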

Results and Discussion

Peer-reviewed scores of journals and impact factors with different citation time windows in 2013

Through this survey, we obtained the peer-reviewed scores of 28 ophthalmologic journals. At the same time, we calculated the journals' impact factors with different citation time windows in 2013. These results are shown in Table 1.

Table 1. Peer-reviewed scores of 28 ophthalmologic journals and impact factors with different citation time windows in 2013.

https://doi.org/10.1371/journal.pone.0135583.t001

As Table 1 demonstrates, nearly all journals with higher peer-reviewed scores also had higher impact factors across the different citation time windows, and the rankings of journals by impact factors with different citation time windows were consistent with one another. Sorting journals by peer-reviewed score produced results consistent with sorting by impact factor under any of the citation time windows, which suggests that it is reasonable to use impact factors with different citation time windows to evaluate journals.

Correlations between peer-reviewed scores and impact factors with different citation time windows in 2013

Based on the data in Table 1, we drew scatter diagrams of the peer-reviewed scores of the 28 ophthalmologic journals against their impact factors with different citation time windows in 2013 to further investigate the relationships between them (Fig 1); the patterns show no marked differences between windows. The correlations between the journals' peer-reviewed scores and impact factors with different citation time windows, as well as the correlations among the impact factors themselves, are presented in Table 2.

Fig 1. Correlations between the peer-reviewed scores of 28 ophthalmologic journals and impact factors with different citation time windows.

https://doi.org/10.1371/journal.pone.0135583.g001

Table 2. Correlations between the peer-reviewed scores of 28 journals and impact factors with different citation time windows.

https://doi.org/10.1371/journal.pone.0135583.t002

As Table 2 shows, peer-reviewed scores correlated well with all impact factors, regardless of which citation time window was used (all correlation coefficients between 0.58 and 0.67). In addition, the correlations among 2IF, 3IF, 4IF, 5IF, and 6IF were all very high, the lowest coefficient being 0.971. Clearly, the evaluation results of 2IF, 3IF, 4IF, 5IF, and 6IF as journal evaluation indicators were highly consistent, regardless of how their values differed.

Comparisons between impact factors with different citation time windows

Considering that impact factors with different citation time windows were not normally distributed, we carried out a Kruskal-Wallis non-parametric test on the 2IF, 3IF, 4IF, 5IF, and 6IF of the 28 journals in 2013. The results were χ2 = 0.640, P = 0.958, indicating that the differences among the impact factors of the 28 ophthalmologic journals with different citation time windows in 2013 were not significant.

In previous studies, many scholars [4,43] believed that 5IF was larger than 2IF for most journals, deducing that a long-term impact factor would be larger than a short-term one for most journals. However, the results of the current study show that 4IF was the highest, followed by 6IF and 5IF; 2IF was indeed the smallest. This does not mean that the impact factor associated with a longer citation time window will be larger for all journals. Papers in some subjects reach their cited peak quickly, but in such cases citations also tend to diminish quickly; in other words, some documents age very rapidly. In these subjects, the impact factor associated with a longer citation time window may be lower than that with a shorter window. Some scholars have therefore suggested using different citation time windows to compute the impact factors of journals in different disciplines [3,8].

Cited peak age of ophthalmologic papers

We carried out an advanced search using the ISSNs of the 28 ophthalmologic journals. Next, we retrieved the papers published in each year between 2001 and 2006 and analyzed their citations in each successive year after publication. We then confirmed the cited peak age by observing the trends in citation evolution for the published ophthalmologic papers. These results are shown in Fig 2.

Fig 2. Citation evolution trends for papers published each year between 2001 and 2006 in 28 ophthalmologic journals.

https://doi.org/10.1371/journal.pone.0135583.g002

As Fig 2 illustrates, citations in the first year after publication were low, then increased rapidly over the following two years; growth slowed as citations reached their peak in the third year. Citations of papers published in 2003 peaked in 2006 and began to decline in the fourth year; only the citations of papers published in 2004 peaked in the fourth year and began to decline in the fifth. These results are highly consistent with the finding in Table 2 that the correlations of peer-reviewed scores with 3IF and 4IF were the highest.

Della Sala and colleagues [14] at the University of Edinburgh in the UK concluded that the 2-year citation time window emphasized in recent studies was not very useful for evaluating journals in subjects whose cited peak comes more slowly (in a slow-moving field). Hence, some scholars have argued for using an impact factor with a longer citation time window in journal evaluation. In 1998, Garfield [44,45] selected journals whose impact factor rankings were in the top 100 and 101–200, calculated their 15IF and 7IF, and compared these with the one-year IF rankings. No great disparity was found in Garfield's study, which is consistent with the current study's finding of high correlations between impact factors with different citation time windows. Our study did not support the view that longer citation time windows are preferable; if anything, the correlation with peer-reviewed scores was somewhat weaker for the longest citation time window (6IF). Thus, for journals in a particular discipline, a longer citation time window may not be better for assessing impact factors.

Conclusions

In this study, we used peer-reviewed journal rankings obtained through a survey as a gold standard to investigate the journal evaluation results of impact factors with different citation time windows. In general, in the field of ophthalmology, there were high correlations between journals’ peer-reviewed scores and impact factors with different citation time windows. There were also high correlations between impact factors with different citation time windows. It is clear that the length of a citation time window should be consistent with the cited peak.

As is well known, different disciplines have different properties and stages of development. For this reason, the values of impact factors with different citation time windows necessarily differ for journals in different subjects. However, through empirical research within the field of ophthalmology, we found high correlations between 2IF, 3IF, 4IF, 5IF, and 6IF for evaluating journals.

In addition, we found high correlations between 2IF, 3IF, 4IF, 5IF, 6IF, and peer-reviewed scores for ophthalmologic journals in 2013, supporting the validity of using impact factors with different citation time windows to evaluate journals. For ophthalmologic journals, however, longer citation time windows do not appear to be preferable.

Our results showed that the citation time window should be consistent with the cited peak of documents in a particular discipline. For example, if citations of documents published in a particular subject in year t reach their cited peak in year t+3, then the most appropriate citation time window might be three years. For a particular subject, the length of the citation time window used to compute the impact factor should be chosen in light of the cited peak of its documents.

Finally, we must acknowledge the limitations of our study, in that we selected only ophthalmologic journals as research objects. The topic should therefore be investigated further to see whether other disciplines, in particular those with a later cited peak, follow the same or a similar pattern. In addition, although citation peaks may follow a particular time window for individual journals, these citation time windows are unlikely to remain fixed and may vary with current developments in a discipline; we will investigate these questions in future studies.

Supporting Information

S1 Dataset. Questionnaire survey data from American ophthalmologic researchers scoring each American ophthalmology journal.

https://doi.org/10.1371/journal.pone.0135583.s001

(XLS)

Author Contributions

Conceived and designed the experiments: XLL. Performed the experiments: XLL. Analyzed the data: XLL SSG SLZ PW. Contributed reagents/materials/analysis tools: XLL. Wrote the paper: XLL.

References

  1. Ren SL. Eigenfactor: importance of analyzing journals and papers based on cited network. Chinese Journal of Scientific and Technology Periodicals. 2009; 20:415–418.
  2. Jacsó P. Differences in the rank position of journals by Eigenfactor metrics and the five-year impact factor in the Journal Citation Reports and the Eigenfactor Project web site. Online Information Review. 2010; 34:496–508.
  3. Vanclay JK. Impact factor: outdated artefact or stepping-stone to journal certification? Scientometrics. 2012; 92:211–238.
  4. van Nierop E. The introduction of the 5-year impact factor: does it benefit statistics journals? Statistica Neerlandica. 2010; 64:71–76.
  5. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997; 314:498–502. pmid:9056804
  6. Sombatsompop N, Markpin T, Premkamolnetr N. A modified method for calculating the Impact Factors of journals in ISI Journal Citation Reports: Polymer Science Category in 1997–2001. Scientometrics. 2004; 60:217–235.
  7. van Leeuwen TN, Moed HF, Reedijk J. Critical comments on Institute for Scientific Information impact factors: a sample of inorganic molecular chemistry journals. Journal of Information Science. 1999; 25:489–498.
  8. Dorta-Gonzalez P, Dorta-Gonzalez MI. Impact maturity times and citation time windows: the 2-year maximum journal impact factor. Journal of Informetrics. 2013; 7:593–602.
  9. Zhao X. An analysis of the 5-year impact factor in JCR. Journal of Library Science in China. 2010; 36:120–126.
  10. Liu XL, Fang HL, Ding J, Wang MY. Correlation of publication frequency with impact factors and 5-year impact factor in 1058 medical journals in SCI-expanded database. Chinese Journal of Scientific and Technology Periodicals. 2011; 22:211–214.
  11. Yu G, Zhou XY. Comparing study of three impact factors of journals in six subjects. Studies in Science of Science. 2008; 183–188.
  12. Jacsó P. Five-year impact factor data in the Journal Citation Reports. Online Information Review. 2009; 33:603–614.
  13. Della Sala S, Grafman J. Cortex 2009 5-year and 2-year Impact Factor: 4.1. Cortex. 2010; 46:1069.
  14. Della Sala S, Grafman J. Five-year impact factor. Cortex. 2009; 45:911.
  15. Campanario JM. Empirical study of journal impact factors obtained using the classical 2-year citation window versus a five-year citation window. Scientometrics. 2011a; 87:189–204.
  16. Garfield E. Citation indexes for science: a new dimension in documentation through association of ideas. Science. 1955; 122:103–111.
  17. Finardi U. Correlation between Journal Impact Factor and citation performance: an experimental study. Journal of Informetrics. 2013; 7:357–370.
  18. Pérez-Hornero P, Arias-Nicolás JP, Pulgarín AA, Pulgarín A. An annual JCR impact factor calculation based on Bayesian credibility formulas. Journal of Informetrics. 2013; 7:1–9.
  19. Garfield E, Sher IH. New factors in the evaluation of scientific literature through citation indexing. American Documentation. 1963; 14:195–201.
  20. Garfield E. Citation analysis as a tool in journal evaluation. Science. 1972; 178:471–479.
  21. Betz CL. Impact factor and other new developments. Journal of Pediatric Nursing: Nursing Care of Children & Families. 2014; 29:1–2.
  22. Tort ABL, Targino ZH, Amaral OB. Rising publication delays inflate journal impact factors. PLoS One. 2012; 7:1–7.
  23. Campanario JM, Coslado MA. Benford's law and citations, articles and impact factors of scientific journals. Scientometrics. 2011; 88:421–432.
  24. Rizkallah J, Sin DD. Integrative approach to quality assessment of medical journals using impact factor, Eigenfactor, and article influence scores. PLoS One. 2010; 5:1–10.
  25. Dorta-González P, Dorta-González MI. Comparing journals from different fields of science and social science through a JCR subject categories normalized impact factor. Scientometrics. 2013; 95:645–672.
  26. Dorta-González P, Dorta-González MI, Suárez-Vega R. An approach to the author citation potential: measures of scientific performance which are invariant across scientific fields. Scientometrics. 2015; 102:1467–1496.
  27. Martyn J, Gilchrist A. An Evaluation of British Scientific Journals (1st ed.). Aslib; 1968.
  28. Huang MH, Lin WYC. The influence of journal self-citations on journal impact factor and immediacy index. Online Information Review. 2012; 36:639–654.
  29. Anonymous. Beware the impact factor. Nature Materials. 2013; 12:89. pmid:23340463
  30. Leydesdorff L, Zhou P, Bornmann L. How can journal impact factors be normalized across fields of science? An assessment in terms of percentile ranks and fractional counts. Journal of the American Society for Information Science and Technology. 2013; 64:96–107.
  31. Vanclay JK. Ranking forestry journals using the h-index. Journal of Informetrics. 2008; 2:326–334.
  32. Ha TC, Tan SB, Soo KC. The journal impact factor: too much of an impact? Annals Academy of Medicine Singapore. 2006; 35:911–916.
  33. Leydesdorff L, Bornmann L. Integrated impact indicators compared with impact factors: an alternative research design with policy implications. Journal of the American Society for Information Science and Technology. 2011; 62:2133–2146.
  34. Dorta-Gonzalez P, Dorta-Gonzalez MI, Santos-Penate DR, Suarez-Vega R. Journal topic citation potential and between-field comparisons: the topic normalized impact factor. Journal of Informetrics. 2014; 8:406–418.
  35. Liu XL. Method of predicting impact factor for journals indexed in SCI based on Web of Science database. Science-Technology & Publication. 2014; 87–91.
  36. Gai SS, Liu XL, Zhang SL. Prediction and structural analysis of impact factor for journals indexed in SCI: a case study of Nature. Chinese Journal of Scientific and Technology Periodicals. 2014; 25:980–984.
  37. He Y, Qiu JP. Method and demonstration research by scientometrics on the selection of peer-review experts. Library and Information Service. 2012; 56:33–37.
  38. Weale AR, Bailey M, Lear P. The level of non-citation of articles within a journal as a measure of quality: a comparison to the impact factor. BMC Medical Research Methodology. 2004; 4:1–8.
  39. Harnad S. Validating research performance metrics against peer rankings. Ethics in Science and Environmental Politics. 2008; 103–107.
  40. Shao PJ, Fang JM, Judy D. Research of the web-based survey on the development of E-Business in China, Canada, and Taiwan. Proceedings of the 5th Asian e-Business Workshop. KAIST Press, Jeju, Korea. 2005.
  41. Wang P, Liu XL, Liu RY. SNIP and SJR and their modified indexes SNIP2 and SJR2 in journal evaluation. Chinese Journal of Scientific and Technology Periodicals. 2014; 25:833–838.
  42. Yoshikane F, Suzuki T. Diversity of fields in patent citations: synchronic and diachronic changes. Scientometrics. 2014; 98:1879–1897.
  43. Shubert E. Use and misuse of the impact factor. Systematics and Biodiversity. 2012; 10:391–394.
  44. Garfield E. Long-term vs. short-term journal impact: does it matter? Scientist. 1998a; 12:11–12.
  45. Garfield E. Long-term vs. short-term impact: part II: the second 100 highest-impact journals. Scientist. 1998b; 12:12–13.