
Metric-based vs peer-reviewed evaluation of a research output: Lesson learnt from UK’s national research assessment exercise

  • Kushwanth Koya ,

    k.koya@northumbria.ac.uk

    Affiliation iSchool @ Department of Computers and Information Sciences, Faculty of Engineering and Environment, Pandon Building, Camden Street, Northumbria University, Newcastle City Campus, Newcastle-upon-Tyne, United Kingdom

  • Gobinda Chowdhury

    Affiliation iSchool @ Department of Computers and Information Sciences, Faculty of Engineering and Environment, Pandon Building, Camden Street, Northumbria University, Newcastle City Campus, Newcastle-upon-Tyne, United Kingdom

Correction

20 Dec 2017: Koya K, Chowdhury G (2017) Correction: Metric-based vs peer-reviewed evaluation of a research output: Lesson learnt from UK’s national research assessment exercise. PLOS ONE 12(12): e0190337. https://doi.org/10.1371/journal.pone.0190337 View correction

Abstract

Purpose

There is general curiosity about the monetary value of a research output, as a substantial amount of funding in modern academia is awarded on the basis of good research presented in the form of journal articles, conference papers, performances, compositions, exhibitions, books and book chapters, which in turn leads to a further question: does this value vary across disciplines? Answers to these questions will not only assist academics and researchers, but will also help higher education institutions (HEIs) make informed decisions in their administrative and research policies.

Design and methodology

To examine both questions, we used the United Kingdom's recently concluded national research assessment exercise, the Research Excellence Framework (REF) 2014, as a case study. All data for this study were sourced from openly available publications in the digital repositories of the REF results and HEFCE's funding allocations.

Findings

A world-leading output earns between £7,504 and £14,639 per year within the REF cycle, whereas an internationally excellent output earns between £1,876 and £3,659, varying according to the area of research. Secondly, an investigation into the impact ratings of 25,315 journal articles submitted in five areas of research by UK HEIs, and the funding awarded, revealed a linear relationship between the percentage of quartile-one journal publications and the percentage of 4* outputs in the Clinical Medicine, Physics and Psychology/Psychiatry/Neuroscience UoAs, and no relationship in the Classics and Anthropology/Development Studies UoAs, largely because most publications in the latter two disciplines are not journal articles.

Practical implications

The findings provide an indication of the monetary value of a research output from the perspective of government funding for research, and also of what makes a good output, i.e. whether a relationship exists between the quality of an output and the source of its publication. The findings may also influence future REF submission strategies in HEIs, and they establish that the impact rating of a journal is not necessarily a reflection of the quality of research in every discipline, which may have a significant influence on the future of scholarly communication in general.

Originality

To the authors' knowledge, this is the first investigation to estimate the monetary value of a good research output.

Introduction

Research is evaluated in several forms, and despite years of debate over an effective and efficient method, the academic community is yet to reach a consensus. Peer review is the oldest form of research evaluation and stands firm in spite of several disputes surrounding its functioning. Several databases and metrics such as Web of Science, Scopus, Google Scholar, InCites, SciVal, the h-index and Altmetrics attempt to establish the quality of research through publication profiles, citation profiles or both [1–3]. However, these measures remain questionable due to the narrow interpretations they produce and are often confined to academic evaluation [4–6]. Consequently, alternative approaches such as Web-impact metrics, societal impact, and combinations of principles such as the Leiden Manifesto have been proposed [1, 2, 7]. Funding good research is essential for the survival of science, and progressive countries invest between 2% and 3% of their gross domestic product in research and development activities, a good portion of it concentrated in Higher Education Institutions (HEIs), which has proven extremely beneficial for multiple societal aspects [8]. However, two fundamental questions remain unanswered, viz. (1) what is the economic value of a research output, as perceived by governments or agencies that fund research, and (2) what makes a good research output, and more specifically, is there a direct relationship between the quality of a research output, as determined through its monetary value, and the source of its publication? This study aims to address the following questions:

  1. What is the monetary value of an output showcasing good research?
  2. Does the value vary amongst different disciplines?
  3. Is there a relationship between the value of a research output and the reputation of its publication source?
  4. Can the assigned value of a research output alter the nature of science and research in a country?

The recently concluded national research evaluation exercise in the UK, called REF 2014, has been used as a case study to find answers to these questions.

UK universities

As in other countries, HEIs play an essential role in UK society. According to a recent Universities UK report, the UK HE sector contributed £39.9 billion, equivalent to 2.8% of the UK's gross domestic product (GDP), and employed 757,268 individuals in 2011 [9]. According to the Higher Education Statistics Agency (HESA), a typical UK HEI's revenue break-down is as follows: 35% tuition fees, 30% funding council grants, 16% research grants and contracts, 1% endowment/investment income, and 18% other sources, e.g. alumni donations [10]. Visibly, a large portion of HEI revenue comes from the funding councils, which generally award funding based on performance, making research evaluation and the financial returns of research an important question for academia. Considering the recently concluded Research Excellence Framework (REF) 2014, the UK's national research assessment exercise, as a case study offers a chance to answer this question, and an opportunity for other research-intensive countries to compare their performance-based research funding. It may be argued that the amount of money available to distribute to HEIs depends very much on a particular government's budget, and hence that the monetary value of research outputs will not provide a definitive figure and may not be applicable elsewhere. However, since REF 2014 is a national exercise that determines the annual research funding for all HEIs, and all HEI staff, in the country for six to seven years (until a similar exercise, or an alternative, takes place), it affects the research and scholarly activities of the entire country for several years. Hence, we decided to use the REF 2014 datasets to find answers to the research questions mentioned earlier.

What is the REF 2014?

The REF 2014 was a research evaluation exercise conducted by a combined team of organisations, namely the Higher Education Funding Council for England (HEFCE), the Higher Education Funding Council for Wales (HEFCW), the Scottish Funding Council (SFC) and the Department for Employment and Learning (DEL) of Northern Ireland, to measure the quality of research at HEIs in the United Kingdom [11]. It is a performance-based research funding system whose results inform the higher education funding bodies' annual allocation of funding to HEIs [7]. It also plays a vital role in an HEI's ability to secure funding from other sources, its league table scores, its reputation, and its ability to attract talent in terms of students and academics [12].

The results of the current REF assisted in the disbursal of £1.6 billion per year to UK-based higher education and research institutions until the next such exercise, possibly to be commissioned in 2021. The results of REF 2014 led to drastic alterations in funding allocations compared to the previous exercise (RAE 2008). One HEI lost about 17.1% (£14.2 million) of its funding, and in another exceptional case an HEI lost 45%. The maximum gain by any HEI stood at 12.4% (£7.1 million) [13, 14]. The repercussions of such fluctuations are considerable for the future of research at UK HEIs.

To submit to the REF, HEIs chose the areas of research (called Units of Assessment, or UoAs), out of the 36 available, in which they wished to be evaluated, and prepared their submissions in a prescribed format. The submissions for REF 2014 were evaluated by 1052 individuals, of whom 77% were academics and 23% were users (individuals who apply HEI research, and collaborators outside academia), working under the guidance of 36 expert sub-panel chairs, additionally supported by four main panel chairs, to evaluate and determine the quality of research. Research was judged in five categories: 4* (world leading), 3* (internationally excellent), 2* (recognised internationally), 1* (recognised nationally) and unclassified [11]. The overall quality of research was assessed through a combination of the quality of research outputs (65% weightage) in terms of rigour, originality and significance; the 'impact' of research (20% weightage), a new factor introduced in the REF evaluation, assessing the 'reach and significance' of research on multiple societal factors; and the research environment (15% weightage), in terms of 'vitality and sustainability', i.e. PhD completions, laboratory facilities and wider disciplinary contributions [11].

Research outputs

HEIs submitted various types of research outputs for evaluation, i.e. journal and conference articles, books, book chapters, edited books, patents, designs, artefacts, software, exhibitions, compositions etc. The submitted outputs were evaluated and graded into the five categories mentioned previously. However, only 4* (world leading) and 3* (internationally excellent) outputs were eligible for funding, and the final weightage took into account the number of staff members the HEI had submitted for the UoA, thus minimising quantitative bias. Finally, funding is allocated based on the weightage acquired by the HEIs, which is the sum of the number of 3* outputs and four times the number of 4* outputs. According to HEFCE's pre-submission guidelines, 4* outputs received four times more funding than 3* outputs, and the allocation of funding varied across disciplines because research expenses differ between them (for example, laboratory-based research incurs higher expenses than library-based research). Post-REF 2014 data reveal that research outputs alone led to a total allocation of £661.3 million in research money to UK HEIs per year, not counting the 'London weighting' granted exclusively to HEIs located in London due to the higher costs associated with the capital.

Evaluation of quality of the output

According to HEFCE, the outputs were evaluated on their 'originality, significance and rigour' against international standards [11, 15]. HEFCE advised HEIs against choosing outputs with high citation indices for submission, and instead to select outputs the HEIs affirmed to be of high quality. However, in some cases the citation data of outputs and their significance beyond academia were considered as indicators of quality by the sub-panels [15].

Methods

Six HEIs were randomly chosen from each of the 36 UoAs of the REF. Each HEI's percentage of 4* (X) and 3* (Y) research outputs was noted from the REF results, in addition to its staff count (A). This allowed the calculation of the number of 4* (B) and 3* (C) outputs considered for weightage (W):

B = A × 4 × X/100, C = A × 4 × Y/100

Weightage (W) was calculated as four times the number of 4* outputs plus the number of 3* outputs:

W = 4B + C

The total funding awarded (FA) to each HEI under each UoA was noted from HEFCE's funding allocation table (http://www.hefce.ac.uk/funding/annallocns/1516/research/). Since the weightage expresses all outputs in 3*-equivalents (a 4* output carries four times the weight of a 3* output), the value of each 3* output (T) is obtained by dividing the total funding awarded (FA) by the weightage (W):

T = FA / W

According to HEFCE, the value of each 4* output is four times the value of each 3* output. Hence, the value of each 4* output (F) is obtained by multiplying T by four:

F = 4T

Example of the calculation

For clarity of the above calculation, let us consider the case of the University of Cambridge under the General Engineering UoA. 37.4% (X) of its research was rated 4* and 55.8% (Y) was rated 3*, with a staff count of 177.20 (A) FTE. It received £5,328,295 (FA) from HEFCE for its outputs performance in General Engineering. The number of 4* (B) and 3* (C) outputs considered for weightage (W) can be obtained as follows:

B = 177.20 × 4 × 0.374 = 265.09, C = 177.20 × 4 × 0.558 = 395.51

Thus the weightage is four times the number of 4* outputs plus the number of 3* outputs:

W = 4 × 265.09 + 395.51 = 1455.87

The value of each 3* output (T) is obtained by dividing the total funding received (FA) by the weightage (W), and, as 4* outputs are worth four times 3* outputs, the value of each 4* output (F) is obtained by multiplying T by four:

T = 5,328,295 / 1455.87 ≈ £3,660, F = 4 × 3,660 ≈ £14,639

At this point, it is important to understand the REF's instructions. The REF required each staff member considered for submission to submit four outputs [15]. For the General Engineering UoA, Cambridge presented 177.20 FTE staff and submitted 616 outputs. Ideally, Cambridge should have submitted 708.8 outputs (the number of staff submitted multiplied by four); however, due to specific circumstances (i.e. career breaks, early career researchers etc.), it submitted 616 outputs for evaluation. The REF is aware of such circumstances and is considerate in not penalising the HEI, taking into account the phase of a researcher's career and personal circumstances. The REF further calculates the rating based on the ideal number of submissions, not the actual number. In the case of Cambridge, 37.4% of the ideal 708.8 were rated 4* and 55.8% of the ideal 708.8 were rated 3*, which takes the number of 4* submissions to 265.09 and the number of 3* submissions to 395.51.

So, the total funding for outputs in the General Engineering UoA for Cambridge was:

FA = 265.09 × £14,639 + 395.51 × £3,660 ≈ £5,328,295

Multiplying the numbers of 4* and 3* submissions by their respective values and summing them gives the final amount of funding acquired by Cambridge. In other words, even though Cambridge staff submitted a lower number of outputs (616 as opposed to 709) because of specific staff circumstances, the university received the same amount of money it would have received had every staff member submitted four outputs. This is not only a simplified explanation of the REF's workings; it also corroborates our calculations of the value of each 4* and 3* output.
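To make the calculation concrete, the following is a minimal Python sketch of the method described above. The study itself used MS Excel; the function name and structure here are ours, for illustration only.

```python
# Minimal sketch (not the authors' spreadsheet) of the REF output-value
# calculation, using the Cambridge General Engineering figures as a check.

def output_values(pct_4star, pct_3star, fte_staff, funding_awarded):
    """Return (value of a 4* output, value of a 3* output) per year."""
    ideal_outputs = fte_staff * 4            # REF expected 4 outputs per FTE (A x 4)
    b = ideal_outputs * pct_4star / 100      # number of 4* outputs (B)
    c = ideal_outputs * pct_3star / 100      # number of 3* outputs (C)
    w = 4 * b + c                            # weightage (W): a 4* counts 4x a 3*
    t = funding_awarded / w                  # value of each 3* output (T)
    f = 4 * t                                # value of each 4* output (F)
    return f, t

f, t = output_values(37.4, 55.8, 177.20, 5_328_295)
print(f"4* output: £{f:,.0f}/year, 3* output: £{t:,.0f}/year")
# -> roughly £14,639 and £3,660, matching Table 1 for General Engineering
```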

Design to observe relationship between funding awarded and publication source of the outputs

As it was impossible to identify the REF rating of an individual output, we used an indirect measure, investigating the proportion of HEIs' submitted outputs published in quartile-one (Q1) journals and its relationship to the funding acquired. We set out to determine whether any direct relation existed between the monetary value of an output and the reputation of its publication source as measured by journal impact factor.

Five REF UoAs were chosen from the 36 available under the four main panels: clinical medicine (Panel A, UoA 1), physics (Panel B, UoA 9), psychology/psychiatry/neuroscience (Panel A, UoA 4), anthropology/development studies (Panel C, UoA 24) and classics (Panel D, UoA 31). Each chosen UoA fell under a different main panel, except clinical medicine and psychology/psychiatry/neuroscience, which both came under Panel A.

However, since it was not possible to get the necessary data directly from the REF 2014 results, i.e. to find out which output got a 4* rating, we adopted an alternative approach. Using Thomson Reuters' Journal Citation Reports against the submitted journal papers of each HEI in the chosen UoAs, we identified how many of the submitted articles were in top-quartile journals, and accordingly prepared a rank list of the HEIs in a given UoA based on the number of Q1 publications. This list was plotted against their percentage of 4* outputs to look for any relationship.

For all the journal articles submitted by English HEIs in each of the UoAs—10,986 for Clinical Medicine; 5,302 for Physics; 7,484 for Psychology, Psychiatry and Neuroscience; 1,198 for Anthropology and Development Studies; and 345 for Classics, a total of 25,315 articles—the corresponding journal's quartile score was noted from Thomson Reuters' Journal Citation Reports. A quartile score is calculated for each journal in every subject category according to the quartile in which its impact factor falls; thus a quartile-one (Q1) journal is one whose impact factor falls in the top 25% of all journals within the same subject category. The quartile scores for the year 2013 were used in this study because the 2014 scores came out in mid-2015, so the only data available to sub-panel members during the REF evaluation in 2014 would have been from 2013. Some journals cannot be associated completely with a single subject category; in such cases the subject category nearest to the UoA was used when noting the quartile scores.
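As an illustration of this quartile-scoring step, the sketch below flags Q1 journals within a subject category using pandas. The journal names and impact factors are invented placeholders, not JCR data.

```python
# Hedged sketch of quartile scoring: rank each journal's 2013 impact factor
# within its subject category and flag the top 25% as Q1.
import pandas as pd

journals = pd.DataFrame({
    "journal":  ["J1", "J2", "J3", "J4", "J5", "J6", "J7", "J8"],
    "category": ["Clinical Medicine"] * 4 + ["Physics"] * 4,
    "impact_factor_2013": [12.1, 4.3, 2.2, 0.9, 7.5, 3.1, 1.8, 0.6],
})

# Percentile rank within each subject category; Q1 = top quartile by IF.
journals["q1"] = (
    journals.groupby("category")["impact_factor_2013"]
            .rank(pct=True) > 0.75
)
print(journals)
```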

Subsequently, all the submitted journal articles whose journals were in the Q1 category were counted, allowing the calculation of each HEI's percentage of Q1 publications, which was then compared against its percentage of 4* outputs.

Data

All data for this study were sourced from openly available publications in the digital repositories of the REF results and HEFCE's funding allocations.

Statistics and analysis

All the data for the value-calculation part of the study were transferred from the sources and analysed using the formula features of MS Excel. For the next part of the study, MS Excel assisted in transferring the data from the sources and calculating each HEI's percentage of Q1 publications. Thereafter, the data were visualised using IBM SPSS Statistics 22, which was also used to test the linear relationship between the percentage of Q1 publications and funding awarded per FTE staff member using a bivariate Pearson correlation test [16, 17].
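For readers without SPSS, an equivalent bivariate Pearson test can be run in Python with SciPy. This is a minimal sketch with placeholder arrays, not the study's data.

```python
# Hedged sketch: bivariate Pearson correlation between an HEI's % of Q1
# publications and its % of 4* outputs, using SciPy in place of SPSS.
from scipy.stats import pearsonr

q1_pct    = [82.0, 75.5, 68.3, 61.0, 55.2]   # % of an HEI's articles in Q1 journals
star4_pct = [33.0, 28.1, 22.4, 17.9, 14.0]   # % of its outputs rated 4*

r, p = pearsonr(q1_pct, star4_pct)
print(f"r = {r:.3f}, p = {p:.3f}")
```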

Results & discussion

Monetary values of good research outputs in various UoAs

Using the formula described in the previous section, it was found that each internationally excellent output (3* in the parlance of REF 2014) was awarded between £1,876 and £3,659 per year, whereas a world-leading output (4*) was awarded four times as much, between £7,504 and £14,639, varying according to the UoA (discipline). Engineering-based subjects and the pure and environmental sciences were the highest earners: £3,659 for an internationally excellent and £14,639 for a world-leading output. Humanities, language and area studies were the lowest earners: £1,876 for an internationally excellent and £7,504 for a world-leading output. Health-related subjects, clinical medicine, and the biological and agricultural sciences received £3,280 and £13,123 for an internationally excellent and a world-leading output respectively. The financial awards for outputs in the remaining subject areas are given in Table 1. The awards presented are for a one-year period; over an entire assessment period an output will fetch six times the figures stated above, realistically assuming an assessment period of six years in the UK.

Table 1. Value of each 3* and 4* output in different units of assessment.

https://doi.org/10.1371/journal.pone.0179722.t001

The above figures, however, should not be multiplied directly by the total number of outputs submitted by a specific HEI under each UoA, because the number of outputs submitted is weighted by the number of staff who submitted. Additionally, the differing monetary values of outputs should not be taken to undermine or exaggerate the value of research in different disciplines. Further calculations can be found in the web-appendix MS Excel spreadsheets.

Does publication of research in high impact journals make it good research?

This section discusses how research question 3 was investigated, by examining the relationship between the chosen HEIs' percentage of outputs in Q1 journals and their percentage of 4* outputs. Table 2 indicates the submission characteristics of the five chosen UoAs. HEFCE advised the HEIs that the evaluation would be based primarily on the 'originality, significance and rigour' of the output. In its entirety, however, the evaluation framework rests on the subjective judgement of the evaluator. A potential method of recognising the quality of an output is to observe the quality or rank of its journal based on the journal impact factor.

Table 2. Journal article and 4* statistics of five UoAs submitted for the REF 2014 (REF Executive Summaries).

https://doi.org/10.1371/journal.pone.0179722.t002

All the outputs submitted by UK HEIs in the chosen UoAs were filtered for journal article submissions, and the impact rating of every article's journal was mapped using Thomson Reuters' Journal Citation Reports. All the quartile-one (Q1) articles were then filtered, which allowed the estimation of the percentage of Q1 publications for each HEI in the five UoAs (Tables 3–7).

Table 4. HEI’s Q1% and 4*% in psychology/psychiatry/neuroscience UoA.

https://doi.org/10.1371/journal.pone.0179722.t004

Table 6. HEI’s Q1% and 4*% in anthropology & development studies UoA.

https://doi.org/10.1371/journal.pone.0179722.t006

A scatter plot was employed to observe any linear relationship between the percentage of 4* outputs and the percentage of Q1 publications at the UK HEIs which had submitted under the five UoAs. The plots indicate a linear relationship between the percentages of Q1 publications and 4* outputs at HEIs in the Clinical Medicine (r = 0.526, n = 24, p = 0.008), Physics (r = 0.496, n = 32, p = 0.004) and Psychology/Psychiatry/Neuroscience (r = 0.827, n = 65, p < 0.001) UoAs (Figs 1–3). However, no relationship was found for the Classics (r = 0.324, n = 18, p = 0.189) and Anthropology/Development Studies (r = 0.034, n = 20, p = 0.888) UoAs (Figs 4 and 5).

Fig 1. 4*% vs Q1% of various HEIs in clinical medicine UoA.

https://doi.org/10.1371/journal.pone.0179722.g001

Fig 2. 4*% vs Q1% of various HEIs in psychology/psychiatry/neuroscience UoA.

https://doi.org/10.1371/journal.pone.0179722.g002

Fig 4. 4*% vs Q1% of various HEIs in anthropology/development studies UoA.

https://doi.org/10.1371/journal.pone.0179722.g004

Exploring further, we performed a simple linear regression for UoAs 1, 4 and 9 to investigate whether Q1 percentage is a good predictor of the 4* percentage an HEI can achieve.

For UoA 1 (Clinical Medicine), the ANOVA indicated model significance (F[1, 22] = 8.417, p = .008), and 24.4% of the variance in 4* percentage can be explained by an HEI's Q1 percentage. The regression equation to predict 4* percentage is y = 1.332x − 102.245, with a significant slope (t = 2.901, p = .008).

For UoA 4 (Psychology/Psychiatry/Neuroscience), the ANOVA indicated model significance (F[1, 63] = 135.977, p < 0.001), and 67.8% of the variance in 4* percentage can be explained by an HEI's Q1 percentage. The regression equation to predict 4* percentage is y = 0.514x − 12.614, with a significant slope (t = 11.661, p < 0.001).

For UoA 9 (Physics), the ANOVA indicated model significance (F[1, 30] = 9.772, p = .004), and 22.1% of the variance in 4* percentage can be explained by an HEI's Q1 percentage. The regression equation to predict 4* percentage is y = 0.302x − 7.648, with a significant slope (t = 3.126, p = .004). For UoA 9 this test was important because an outlier could have skewed the dataset; however, as Fig 3 indicates, the relationship remains linear at the far ends of the x and y axes.
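The same simple regressions can be reproduced in Python with scipy.stats.linregress, as in the sketch below; the two arrays are illustrative placeholders rather than the REF dataset.

```python
# Hedged sketch of the simple linear regression (Q1 % -> 4* %) reported above.
from scipy.stats import linregress

hei_q1    = [88.0, 85.5, 83.1, 80.2, 77.4, 74.9]   # % Q1 publications per HEI
hei_4star = [15.3, 12.8, 9.6, 6.1, 3.9, 1.2]       # % 4* outputs per HEI

fit = linregress(hei_q1, hei_4star)
print(f"y = {fit.slope:.3f}x + ({fit.intercept:.3f}), "
      f"R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.3f}")
```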

UoA 4 appears to have the strongest link, with an r value (Pearson correlation) of 0.827. Although the r values are not exceptionally high in clinical medicine and physics, the scatter plots still show a trend of Q1 publications scoring highly in the REF. The findings indicate that judgements on the quality of research, whether by peer-reviewed government ranking (REF results) or by metrics-based ranking (JIF), remain largely the same in disciplines where journals are the main channels of research communication. There is ample literature suggesting a relationship between expert review decisions and bibliometrics [18–21]. A similar study of Italy's national research assessment exercise made comparable claims: in the pure and natural sciences, bibliometric perspectives on research quality are either similar or superior to those of national research assessment exercises [22]. Our study supports this claim, implying that quantitative measures can produce evaluations of research quality comparable to expert-review-based government research assessment [22]. Additionally, such a system instils public trust in the use of public funds in HEIs, as performance metrics are readily available for public view [23]. However, the JIF has also been shown to evaluate the quality of research poorly, and quality mercantilism in general is not an appropriate evaluation technique [20, 24–26].

Can the value of a good research output inform an HEI’s policies?

This section discusses how research question 4 was investigated. The REF's executive summaries supplied complete information about each UoA's output submissions, the HEIs that submitted, Category A staff, early career researchers, and average 4* and 3* percentages. This assisted in calculating each UoA's average submissions per HEI, the numbers of outputs rated 4* and 3*, average submissions per staff member, and the average numbers of submissions per staff member rated 4* and 3*. Category C staff's outputs were rated but not considered for funding, and were therefore excluded from our analysis. The average 4* and 3* submissions per staff member, given in the last two columns of Table 8, indicate a staff member's potential contribution of performance-based funding to their HEI. For example, an average staff member in the Area Studies UoA submitted 3.58 outputs, of which 0.84 were rated 4* and 1.42 were rated 3*. Taking these average figures, it is possible to predict the income generated by an average member of staff through their REF outputs. Consider a hypothetical situation where an HEI department has five staff members: they can produce 4.2 outputs of 4* quality and 7.1 outputs of 3* quality out of the 17.9 outputs they would submit. The values of 4* and 3* outputs in the Area Studies UoA are £7,505 and £1,876 respectively; multiplying these by the numbers of 4* and 3* outputs produced and summing gives the total funding the staff have contributed to the HEI, in this case £44,840.60 per year (reproduced in the sketch following Table 8).

Table 8. Average submission characteristics of various UoAs.

https://doi.org/10.1371/journal.pone.0179722.t008
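The following short sketch reproduces the Area Studies calculation above; the five-person department is hypothetical, and the constants are the averages and output values reported in Tables 8 and 1.

```python
# Illustrative sketch: predicted annual REF output income for a hypothetical
# five-person Area Studies department, from the averages quoted above.
STAFF           = 5
SUBS_PER_STAFF  = 3.58     # average outputs submitted per staff member
FOUR_STAR_EACH  = 0.84     # average 4* outputs per staff member
THREE_STAR_EACH = 1.42     # average 3* outputs per staff member
VALUE_4STAR     = 7505     # £ per 4* output per year (Area Studies)
VALUE_3STAR     = 1876     # £ per 3* output per year (Area Studies)

income = STAFF * (FOUR_STAR_EACH * VALUE_4STAR + THREE_STAR_EACH * VALUE_3STAR)
print(f"{STAFF * SUBS_PER_STAFF:.1f} outputs submitted, "
      f"predicted income: £{income:,.1f}/year")
# -> 17.9 outputs submitted, predicted income: £44,840.6/year
```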

As the results are based on averages, HEIs can set themselves benchmarks to improve their performance through internal evaluations, and predicting their performance in future research assessment exercises becomes a possibility. The results allow an HEI to take strategic decisions and alter its policies in the following ways:

  1. They inform an HEI of the amount of funding an academic can bring into a department.
  2. They allow the future income of a given department to be predicted based on its number of staff.
  3. Interdisciplinary research can sit in two different UoAs. For example, information sciences can come under either UoA 11 or UoA 36. As UoA 11 offers higher income for good research outputs, HEIs which submitted their information sciences research in UoA 36 may consider submitting to UoA 11 in the next exercise.
  4. The results assist an HEI's investment and financial strategy by indicating its potential income through performance-based research funding. For example, an HEI can recruit more academics in its Engineering department to increase its chances of acquiring funding. Hence investment decisions can influence the future of science.

Conclusion

Our investigation of the REF as a case study reveals that in the UK a world-leading research output earns £7,504 to £14,639 per year within a REF cycle, and an internationally excellent research output earns £1,876 to £3,659, varying according to the area of research. This answers our inquiry into the monetary value of a good research output and the differences between disciplines. Although this assigned monetary value is dependent on a country's budget, it has implications for the progress of science and research. For example,

  • The results provide a reference point for comparing the monetary value of good research outputs across countries with similar exercises, e.g. Italy's Research Quality Evaluation (VQR) and the Netherlands' Standard Evaluation Protocol (SEP). According to HESA (2013), the funding pot available to UK universities has reduced significantly since 2008, which was recently addressed by Universities UK's 2015 call to increase science research funding [27].
  • The figures obtained through this investigation allow HEIs to forecast and build strategies for investment in different disciplines, which may have implications for the progress of science and research in general.
  • Additionally, this investigation can be applied by UK HEIs to their submission strategies for the next research assessment exercise. This answers our inquiry into the potential policy implications of establishing the monetary value of good research outputs.

Our further investigation of the relationship between the reputation of the publication source and the quality of a research output revealed a linear relationship between the percentage of quartile-one (Q1) journal publications and funding allocation in the Clinical Medicine, Physics and Psychology/Psychiatry/Neuroscience UoAs, and no relationship in the Classics and Anthropology/Development Studies UoAs, largely because most publications in the latter two disciplines are not journal articles. This partly answers our final question, and we therefore recommend a similar investigation into the remaining thirty-one UoAs, which would offer a clearer picture, noting that academic literature exists both confirming and refuting such relationships [25, 28, 29].

Supporting information

S2 File. Value of a research output in various UoAs.

https://doi.org/10.1371/journal.pone.0179722.s002

(XLSX)

Author Contributions

  1. Conceptualization: KK GC.
  2. Data curation: KK GC.
  3. Formal analysis: KK GC.
  4. Investigation: KK GC.
  5. Methodology: KK GC.
  6. Project administration: KK GC.
  7. Resources: KK GC.
  8. Supervision: KK GC.
  9. Validation: KK GC.
  10. Visualization: KK GC.
  11. Writing – original draft: KK GC.
  12. Writing – review & editing: KK GC.

References

  1. Bornmann L. What is societal impact of research and how can it be assessed? A literature survey. Journal of the American Society for Information Science and Technology. 2013 Feb 1;64(2):217–33.
  2. Kousha K, Thelwall M. Web impact metrics for research assessment. In: Cronin B, Sugimoto CR, editors. Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly Impact. Cambridge: MIT Press; 2014.
  3. Priem J. Altmetrics. In: Cronin B, Sugimoto CR, editors. Beyond Bibliometrics: Harnessing Multidimensional Indicators of Scholarly Impact. Cambridge: MIT Press; 2014.
  4. Moed HF, Halevi G. Multidimensional assessment of scholarly research impact. Journal of the Association for Information Science and Technology. 2015 Oct 1;66(10):1988–2002.
  5. Wilsdon J. In defence of the Research Excellence Framework. The Guardian. Available at http://www.theguardian.com/science/political-science/2015/jul/27/in-defence-of-the-ref (accessed 29 October 2015); 2015.
  6. Wilsdon J. The Metric Tide: Independent Review of the Role of Metrics in Research Assessment and Management. SAGE; 2016 Jan 20.
  7. Hicks D, Wouters P, Waltman L, De Rijcke S, Rafols I. The Leiden Manifesto for research metrics. Nature. 2015 Apr 23;520(7548):429. pmid:25903611
  8. Martin BR. Assessing the impact of basic research on society and the economy. In: Rethinking the impact of basic research on society and the economy (WF-EST International Conference, 11 May 2007), Vienna, Austria; 2007 May.
  9. Universities UK. The impact of universities on the UK economy. Available at www.universitiesuk.ac.uk/highereducation (accessed 11 November 2015); 2015.
  10. HESA. Finance Plus 2011–2012. Cheltenham: HESA; 2013.
  11. REF. About the REF. Research Excellence Framework 2014. Available at http://www.ref.ac.uk/about/ (accessed 15 September 2016); 2014.
  12. The Complete University Guide. Methodology in building the league tables for UK universities. Available at http://www.thecompleteuniversityguide.co.uk/league-tables/methodology (accessed 24 January 2016); 2016.
  13. HEFCE. Annual funding allocations. Available at http://www.hefce.ac.uk/funding/annallocns/ (accessed 12 September 2016); 2015.
  14. Jump P. Winners and losers in HEFCE funding allocations. Times Higher Education. 2015 Mar;26:6–9.
  15. REF. Assessment framework and guidance on submissions. Research Excellence Framework 2014. Available at http://www.ref.ac.uk/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf (accessed 29 October 2015); 2014.
  16. Cramer D. Advanced quantitative data analysis. McGraw-Hill Education (UK); 2003 Jul 1.
  17. Cramer D. Fundamental statistics for social research: step-by-step calculations and computer techniques using SPSS for Windows. Psychology Press; 1998.
  18. Waltman L. A review of the literature on citation impact indicators. Journal of Informetrics. 2016 May 31;10(2):365–91.
  19. Vieira ES, Cabral JA, Gomes JA. How good is a model based on bibliometric indicators in predicting the final decisions made by peers? Journal of Informetrics. 2014 Apr 30;8(2):390–405.
  20. Jayasinghe UW, Marsh HW, Bond N. Peer review in the funding of research in higher education: The Australian experience. Educational Evaluation and Policy Analysis. 2001 Dec;23(4):343–64.
  21. Abramo G, D'Angelo CA, Caprasecca A. Allocative efficiency in public research funding: Can bibliometrics help? Research Policy. 2009 Feb 28;38(1):206–15.
  22. Abramo G, D'Angelo CA. Evaluating research: from informed peer review to bibliometrics. Scientometrics. 2011 Jun 1;87(3):499–514.
  23. Feeney MK, Welch EW. Realized publicness at public and private research universities. Public Administration Review. 2012 Mar 1;72(2):272–84.
  24. Marks MS, Marsh M, Schroer TA, Stevens TH. Misuse of journal impact factors in scientific assessment. Traffic. 2013 Jun 1;14(6):611–2. pmid:23682643
  25. Vanclay JK. Impact factor: outdated artefact or stepping-stone to journal certification? Scientometrics. 2012 Aug 1;92(2):211–38.
  26. Watermeyer R, Hedgecoe A. Selling 'impact': peer reviewer projections of what is needed and what counts in REF impact case studies. A retrospective analysis. Journal of Education Policy. 2016 Sep 2;31(5):651–65.
  27. Universities UK. Universities UK backs committee call for increase in science funding. Press release. Available at http://www.universitiesuk.ac.uk/highereducation/Pages/UniversitiesUKbackscommitteecallforincreaseinsciencefunding.aspx#.Vkns7HbhBpg (accessed 17 November 2015); 2015.
  28. Marks MS, Marsh M, Schroer TA, Stevens TH. Misuse of journal impact factors in scientific assessment. Traffic. 2013 Jun 1;14(6):611–2. pmid:23682643
  29. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ: British Medical Journal. 1997 Feb 15;314(7079):498. pmid:9056804