
Health Sciences-Evidence Based Practice questionnaire (HS-EBP) for measuring transprofessional evidence-based practice: Creation, development and psychometric validation

Abstract

Introduction

Most of the instruments available to date for measuring Evidence-Based Practice (EBP) present limitations both in the operationalisation of the construct and in the rigour of their psychometric development, as revealed in the literature review performed. The aim of this paper is to provide rigorous and adequate reliability and validity evidence for the scores of a new transdisciplinary psychometric tool, the Health Sciences-Evidence Based Practice (HS-EBP) questionnaire, for measuring the EBP construct in Health Sciences professionals.

Methods

A pilot study and a subsequent two-stage sample validation test were conducted to progressively refine the instrument to a reduced 60-item version with a five-factor latent structure. Reliability was analysed through both Cronbach's alpha coefficient and intraclass correlation coefficients (ICC). The latent structure was contrasted using confirmatory factor analysis (CFA) following a model comparison approach. Evidence of criterion validity was obtained by considering attitudinal resistance to change, burnout, and quality of professional life as criterion variables, while convergent validity was assessed using the Spanish version of the Evidence-Based Practice Questionnaire (EBPQ-19).

Results

Adequate evidence of both internal consistency and ICC was obtained for the five dimensions of the questionnaire. According to the CFA model comparison, the best fit corresponded to the five-factor model (RMSEA = 0.049; 90% CI RMSEA = [0.047; 0.050]; CFI = 0.99). Adequate criterion and convergent validity evidence was also provided. Finally, the HS-EBP proved capable of detecting differences between EBP training levels, providing important evidence of decision validity.

Conclusions

The reliability and validity evidence obtained for the HS-EBP confirms the adequate operationalisation of the EBP construct as a process put into practice in response to each clinical situation arising in the daily practice of Health Sciences professionals (transprofessional). The tool could be useful for individual EBP assessment and for evaluating the impact of specific interventions to improve EBP.

Introduction

Since the mid-1990s, Evidence-Based Practice (EBP) has become an increasingly important paradigm in health care, as it provides a framework for resolving problems related to everyday clinical practice. EBP assessment in healthcare-related professions is usually conducted via self-report instruments [1–4], since standardised observation of individual professional practice is unfeasible in terms of both human and material resources.

Most of the EBP measuring instruments available to date present limitations both in the operationalisation of the construct and in the rigour of their psychometric development, as revealed in the literature review performed [1]. Shortcomings have been detected in their design and development and in the processes of psychometric validation, that is, in the provision of solid evidence of reliability and validity. Hence, there remains a need to develop tools that rigorously operationalise the EBP construct and subject their items to adequate reliability and validity evidence [5].

Some systematic reviews have revealed the low prevalence of instruments aimed at measuring EBP from a transdisciplinary perspective [5–9], even though this is considered an important characteristic for their potential usefulness [4]. The first instruments developed on EBP from this perspective turned out to be very poor as far as evidence of their psychometric properties was concerned [1,6,7,10]. Their latent structure was not adequately assessed either, and emphasis was placed mostly on the sole identification of barriers and/or facilitators to the use of EBP. Along these lines, recent EBP instrument proposals, such as that of Kaper et al [9], still fail to consider the EBP measuring process as a whole, that is, to understand practice as an inherently dynamic process.

Attempts to operationalise the process based on a deeper theoretical analysis of the construct did not include all the steps in said process. Besides, in all cases they were designed for application in a single discipline [1,3,4] and, from the evidence provided, continue to present significant shortcomings in their psychometric behaviour [11–13]. For example, in the McEvoy transprofessional instrument [5], arguably one of the most adequate to date, the operationalisation of the construct was not comprehensive and its field of validation was restricted to academic competencies. The instrument thereby excluded aspects related to the work context or practice setting, resources, and support [5].

In order to address the shortcomings and needs pointed out in the literature, the aim of this study was to undertake the psychometric validation of a new transprofessional tool designed to measure the EBP construct through a latent structure covering the core contents of the areas of interest included in its theoretical definition.

Materials and methods

The psychometric validation process was conducted in three stages, following the published standards for the development of psychological and educational tests by the American Psychological Association (APA) and the International Test Commission (ITC) [14–16], as well as the COSMIN protocol for assessing the quality of measures in the field of health [17].

Stage 1 covered the development of the HS-EBP. Drawing on the most widely used theoretical frameworks for EBP and its barriers and facilitators [18–21], a multidisciplinary team composed of EBP experts developed a first proposal of items to be included. This scheme included five dimensions, each with its operational definition (areas of interest): Beliefs-Attitudes (perceived importance, priority, motivation and/or willingness to participate in EBP-related activities, impact, repercussion on patients and relevance); Results of scientific research (formulating questions from uncertainty to be searched in sources of evidence, appraisal of findings and their application to clinical practice); Development of professional practice (use of professional experience in problem solving); Assessment of results (knowledge and use of outcome measures, information analysis and decision-making based on that analysis); and Barriers-Facilitators (contextual and structural support, culture for EBP). This conceptual model is available elsewhere [22]. Finally, a series of items was proposed for each area; some were created ex novo, while others were drawn from existing questionnaires identified through a series of systematic reviews of the scientific literature [1,3,4].

Stage 2 consisted of obtaining evidence of apparent and content validity of the HS-EBP questionnaire through two differentiated Delphi studies: the first with a group of 48 recently graduated professionals from four selected key professions (medicine, nursing, physiotherapy and psychology), and the second with a group of 32 EBP experts from the same key professions [22].

Finally, stage 3, the focus of the present paper, comprised the process used to assess the remaining psychometric properties of the HS-EBP questionnaire through a pilot test and a subsequent sample validation test, the latter conducted in two phases.

Participants

For both the pilot test and the sample validation test, Health Sciences professionals were recruited, specifically from Medicine, Nursing, Physiotherapy, and Psychology. The pilot test sample was drawn only from the Balearic Islands, while the validation sample was drawn from across Spain through non-probability sampling of volunteers.

Procedure

Both the pilot test and the sample validation test were cross-sectional, multicentre validation studies. All participants voluntarily completed the corresponding electronic version of the HS-EBP questionnaire, implemented through the online survey tool LimeSurvey (https://www.limesurvey.org/es/). A Likert scale ranging from 1 to 10 was used for all items according to the degree of agreement with their statements: the higher the score, the greater the agreement. In all versions of the questionnaire, additional items were included to collect sociodemographic and practice-related data.

The pilot study was conducted on the 73-item version of the HS-EBP questionnaire resulting from the prior Delphi studies carried out to obtain evidence of apparent and content validity [22]. The sample validation test was then carried out on the 72-item version arising from the pilot test. After analysing the psychometric properties of the scores obtained, a reduced 60-item version was extracted. The measurement model showed a five-factor structure: Beliefs and attitudes (D1), Results from scientific research (D2), Development of professional practice (D3), Assessment of results (D4) and Barriers/Facilitators (D5). In the sample validation test, subjects also completed the remaining instruments included in the protocol, in order to widen the nomological network of the EBP construct and obtain evidence of criterion validity.

The computerised protocol included the criterion variables Knowledge/Skills and Practice from the Spanish adaptation of the Evidence-Based Practice Questionnaire (EBPQ-19) [23]; the Spanish adaptation of the Resistance to Change (RTC) scale [24]; the Spanish version of the Maslach Burnout Inventory (MBI) [25]; and the "Intrinsic Motivation" factor of the Professional Quality of Life questionnaire (CVP-35) [26,27]. All of these showed adequate evidence of reliability and validity in their respective psychometric validation studies. A negative relationship between EBP and RTC was expected, such that individuals with a greater predisposition to resistance to change would be less likely to apply EBP. In particular, this relationship was expected between D1 (Beliefs and attitudes) and all the RTC subscales, as well as between the dimensions related to the "EBP process" (D2, D3 and D4) and the subscales "Search for routines", "Short-term focus" and "Cognitive rigidity". Likewise, a negative relationship was expected between EBP and burnout, specifically for the dimensions related to the "EBP process" (D2, D3 and D4). Finally, a positive correlation was hypothesised between EBP and the "Intrinsic Motivation" subscale of the CVP-35.

Data analysis

Data analysis was carried out using SPSS Statistics 20.0 (Chicago, IL, USA) and LISREL 8.8 [28]. Only the results of the subjects who had filled in all the items in the HS-EBP questionnaire were taken into account, such that incomplete protocols were eliminated. No data imputation methods were applied.

In the pilot test, internal consistency (Cronbach's alpha) was analysed for the scores of each latent factor in the questionnaire, followed by an Exploratory Factor Analysis (EFA) after an initial review of the data to determine their suitability for this analysis [29,30]. Factor extraction was conducted using Principal Component Analysis (PCA) with the Kaiser criterion, and the structure was optimised with a Varimax rotation. These analyses were implemented in order to refine the psychometric behaviour of the items in the version resulting from the prior Delphi studies.
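
As an illustration of this step, the following minimal Python sketch (not the authors' code) computes Cronbach's alpha for one dimension and extracts a forced five-factor, Varimax-rotated solution. Since scikit-learn does not expose PCA with Varimax rotation directly, FactorAnalysis(rotation="varimax") is used here as a stand-in, and the random 211 x 13 response matrix is purely illustrative.

```python
# Illustrative sketch only: Cronbach's alpha and a varimax-rotated
# five-factor extraction for a simulated item-response matrix.
import numpy as np
from sklearn.decomposition import FactorAnalysis  # rotation needs sklearn >= 0.24

def cronbach_alpha(responses: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 11, size=(211, 13)).astype(float)  # 1-10 Likert, pilot-sized

print(f"alpha = {cronbach_alpha(responses):.2f}")

# Forced five-factor extraction; fa.components_.T is the item-by-factor
# loading matrix inspected when refining item behaviour.
fa = FactorAnalysis(n_components=5, rotation="varimax").fit(responses)
print(fa.components_.T.round(2))
```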

The sample validation test was performed in two stages. In the first stage, the same analyses described for the pilot study were conducted in order to obtain a more parsimonious reduced version with better psychometric properties for the scores obtained. To this end, all items showing the worst psychometric behaviour according to three qualitative assessment criteria applied to each individual item were eliminated or reformulated: a) the results of the reliability analysis of the dimension upon eliminating each item, b) the factor loadings of the items in the EFA, and c) the results obtained for the content validity evidence of each item (prior Delphi study) regarding its relevance criterion [22].

For the reduced version of the HS-EBP questionnaire, the reliability of the scores was analysed through Cronbach's alpha and the Intraclass Correlation Coefficient (ICC) for the five latent factors [31]. Regarding validity evidence for the measurement model, a Confirmatory Factor Analysis (CFA) was performed using the maximum likelihood method, after checking the multivariate normality assumption with the PRELIS 2 program included in LISREL 8.8. Its purpose was to contrast the a priori latent dimensional structure in accordance with the operationalised definition of the EBP construct.
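
As a hedged sketch of the ICC estimation (the exact ICC variant used by the authors is not specified here, so a two-way random effects, absolute agreement, single-measure ICC(2,1) is assumed purely for illustration), the coefficient can be computed from the classical ANOVA decomposition of a subjects-by-items score matrix:

```python
# Sketch under assumptions: ICC(2,1) from the two-way ANOVA decomposition
# (Shrout & Fleiss convention); data are simulated, not the study data.
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """x: (n subjects) x (k items) score matrix."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # subject means
    col_means = x.mean(axis=0)   # item means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subjects MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-items MS
    sse = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(1)
scores = rng.integers(1, 11, size=(869, 12)).astype(float)  # one dimension, simulated
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```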

To assess the overall fit of the model, the following goodness-of-fit indexes were used: χ2, the χ2/df ratio, the Root Mean Square Error of Approximation (RMSEA) with its 90% confidence interval and the value of p(RMSEA<0.05), the Standardized Root Mean Squared Residual (SRMR), the Comparative Fit Index (CFI) and the Goodness-of-Fit Index (GFI). A model comparison approach was used, considering several latent structures: a one-factor model, a three-factor model (adding together the scores related to the "EBP process", that is, dimensions D2, D3 and D4 of the questionnaire) and a five-factor model. A Chi-square test on the discrepancy values and the Akaike Information Criterion (AIC) were used to compare the relative fit between models. A model was considered to fit the data if χ2 was not significant, χ2/df < 3, RMSEA < 0.05 or p(RMSEA<0.05) ≥ 0.05, SRMR < 0.08, and CFI ≥ 0.95 [32,33]. The analytic fit of the factor loadings was also assessed [34], and the correlations between latent factors were analysed. A 95% confidence level was adopted for the statistical significance of factor loadings.
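
For reference, the chi-square-based indices named above can be computed directly from the model and baseline chi-square statistics. The helper below is a sketch following the textbook definitions (RMSEA from the non-centrality estimate; CFI from model versus independence-model discrepancies), not the LISREL implementation, so minor differences from the reported output (e.g. n versus n-1 conventions) are expected; the nested-model values in the commented line are placeholders, not the figures from Table 2.

```python
# Sketch of the fit indices used above, from standard SEM definitions.
import math
from scipy.stats import chi2 as chi2_dist

def rmsea(chi2: float, df: int, n: int) -> float:
    """Point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m: float, df_m: int, chi2_b: float, df_b: int) -> float:
    """1 - model non-centrality / baseline (independence) non-centrality."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m)
    return 1.0 - d_m / d_b

def chi2_difference_p(chi2_a: float, df_a: int, chi2_b: float, df_b: int) -> float:
    """p-value of the chi-square difference test for nested models."""
    return chi2_dist.sf(abs(chi2_a - chi2_b), abs(df_a - df_b))

# Five-factor model values from the Results section (n = 869):
print(round(rmsea(4906.46, 1700, 869), 3))  # ~0.047, close to the reported 0.049
# Nested comparison with hypothetical one-factor values (placeholders):
# print(chi2_difference_p(9000.0, 1710, 4906.46, 1700))
```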

Evidence of criterion validity was assessed through non-parametric correlations, as the normality assumption was not fulfilled for the distribution of most of the variables. Correlations were estimated between the dimensions of the HS-EBP questionnaire and the criterion variables considered (hypothesised to hold a theoretical relationship with the EBP construct). Evidence of convergent validity was obtained from the correlations of the scores of the HS-EBP dimensions with those of the EBPQ-19 questionnaire.
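
A minimal sketch of this correlational step follows; Spearman's rho is assumed as the non-parametric coefficient (the text specifies only that non-parametric correlations were used), and the vectors are simulated stand-ins for an HS-EBP dimension score and an RTC subscale.

```python
# Sketch: rank-based correlation between a questionnaire dimension and a
# criterion variable; all data simulated, Spearman's rho assumed.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
d2_total = rng.integers(12, 121, size=869).astype(float)      # HS-EBP D2 sum score
rtc_routines = rng.integers(1, 61, size=869).astype(float)    # RTC "Search for routines"

rho, p = spearmanr(d2_total, rtc_routines)
print(f"rho = {rho:.2f}, p = {p:.3f}")  # hypothesis: significant negative rho
```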

Finally, in order to obtain evidence of decision validity, the instrument's classification capacity was assessed, taking the subjects' prior EBP training as the discrimination variable. Respondents were classified into four groups (no EBP training, basic, intermediate, and advanced training) and their scores on the different dimensions of the questionnaire were compared through one-way ANOVA. In addition, the robust Brown-Forsythe and Welch tests were applied in the event of failure of the normality assumption, and the degree of convergence between results was analysed.
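
This group comparison can be sketched as below: a classical one-way ANOVA via SciPy plus a hand-rolled Welch ANOVA (SciPy does not ship one; the formula follows Welch's standard definition). The four groups are simulated stand-ins for the training levels, not the study data.

```python
# Sketch: one-way ANOVA plus Welch's heteroscedasticity-robust variant.
import numpy as np
from scipy.stats import f_oneway, f as f_dist

def welch_anova(*groups):
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                   # precision weights
    grand = (w * m).sum() / w.sum()
    num = (w * (m - grand) ** 2).sum() / (k - 1)
    tmp = (((1 - w / w.sum()) ** 2) / (n - 1)).sum()
    f_stat = num / (1 + 2 * (k - 2) * tmp / (k ** 2 - 1))
    df2 = (k ** 2 - 1) / (3 * tmp)
    return f_stat, k - 1, df2, f_dist.sf(f_stat, k - 1, df2)

rng = np.random.default_rng(3)
# Simulated dimension scores for: none / basic / intermediate / advanced.
levels = [rng.normal(loc, 8.0, size=n) for loc, n in
          [(60, 300), (62, 280), (64, 200), (70, 89)]]

print(f_oneway(*levels))       # classical F test
print(welch_anova(*levels))    # robust alternative under unequal variances
```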

Ethical considerations

The study was approved by the Research Ethics Committee (REC) of the University of the Balearic Islands (registration number 3566). The study was conducted according to the ethical guidelines of the Declaration of Helsinki, and data privacy was respected (Ley Orgánica 15/1999 on the Protection of Personal Data). Explanatory letters about the study, covering the computerised protocol and all the variables considered, were sent to all participants, and confidentiality of responses was guaranteed. Completing and submitting the questionnaires was taken as consent to participate.

Results

The pilot test was conducted on a sample of 211 Health Sciences professionals from the Balearic Islands. The median age of the subjects was 38 years, with an interquartile range of 17 years, and 66.4% were women. By profession, 38.4% were nurses, 30.3% physiotherapists, 10.9% doctors, 9.5% psychologists, and 3.8% belonged to other health professions. Cronbach's alpha coefficients of 0.87, 0.94, 0.34, 0.86 and 0.86 were obtained for the five dimensions of the questionnaire, that is, respectively, for the factors "Beliefs and attitudes" (D1), "Results from scientific research" (D2), "Development of professional practice" (D3), "Assessment of results" (D4) and "Barriers/Facilitators" (D5).

The dataset complied with the eligibility criteria for factor analysis, with an adequate Kaiser-Meyer-Olkin (KMO) index of 0.87 and a significant Bartlett's test of sphericity (p<0.001). A PCA applying the Kaiser criterion and a Varimax rotation yielded 17 factors with eigenvalues greater than or equal to 1. As this structure was clearly inadequate, extraction was subsequently forced to 5 factors, obtaining eigenvalues of 17.94, 5.22, 3.60, 3.27 and 2.69 for D2, D1, D4, D5 and D3 respectively, and explaining 44.83% of the variance. Based on these results, it was decided to reformulate the wording of the inversely scored items in all dimensions, as they had obtained the worst internal consistency results within their dimension and showed anomalous behaviour in the latent structure.

Given the low reliability of D3 (13 items) and the inconsistent assignment of its items to the factors of the dimensional structure of the questionnaire, a PCA was applied exclusively to this dimension to analyse the behaviour of its items. In the forced single-factor extraction for D3, only 6 items loaded above 0.40 (explaining 18.19% of the total variance). Based on this result, only the items with the best psychometric behaviour, that is, greater consistency in the factor analysed in the PCA, were kept (items 9, 7, 13, 11, 10 and 1, ordered from highest to lowest factor loading), while the content of items 9 and 1 was reformulated. It was likewise decided to reformulate items 3, 4 and 5, re-reverse items 2 and 12, and eliminate items 6 and 8, which presented the worst psychometric behaviour. Three new items were created to cover the areas of interest left under-represented by these modifications and eliminations.

The refined version resulting from the pilot test was the object of analysis in the validation sample test, which was performed on a sample of 869 professionals from different Health Sciences professions from across Spain (see Table 1).

Table 1. Sociodemographic characteristics of the validation sample.

https://doi.org/10.1371/journal.pone.0177172.t001

Reliability analysis of this version of the questionnaire (72 items) obtained the following Cronbach's alpha values for the five dimensions: 0.92, 0.96, 0.87, 0.94 and 0.87 (D1 to D5, respectively). With regard to the factorial structure, the preliminary statistics were adequate: KMO = 0.96, Bartlett's test of sphericity was statistically significant (p<0.001), and the determinant of the inter-item correlation matrix was very close to 0. The PCA forced to 5 factors obtained eigenvalues of 24.44, 5.24, 4.10, 3.40 and 2.34 for D2, D1, D4, D5 and D3, respectively. This model explained 54.90% of the total variance: 33.95% for the factor "Results from scientific research" (D2), 7.28% for "Beliefs and attitudes" (D1), and 5.70%, 4.74% and 3.25%, respectively, for the three remaining factors, "Assessment of results" (D4), "Barriers-Facilitators" (D5) and "Development of professional practice" (D3). The analysis of the psychometric behaviour of the items, with respect to both reliability and validity, led to the elimination of two items from D1 (items 10 and 4), two from D2 (items 15 and 16), two from D3 (items 2 and 7), three from D4 (items 15, 1 and 5) and three from D5 (items 3, 13 and 4).

The results of the psychometric analyses conducted on the reduced version (60 items), obtained from the above refinement process, are presented below. High internal consistency was confirmed for the 5 dimensions, with Cronbach's alpha values of 0.93, 0.96, 0.84, 0.94 and 0.91 (D1 to D5, respectively). The ICC values for the 5 dimensions were: ICC = 0.53 (95% CI: 0.50–0.55) for D1; ICC = 0.63 (95% CI: 0.61–0.65) for D2; ICC = 0.35 (95% CI: 0.32–0.37) for D3; ICC = 0.57 (95% CI: 0.54–0.60) for D4; and ICC = 0.47 (95% CI: 0.44–0.49) for D5.

In the CFA, the best fit corresponded to the five-factor model, compared to the single-factor and three-factor models. The difference between models was statistically significant in the Chi-square test, and the difference in AIC values relative to the worse-fitting three-factor model supports this result (see Table 2). All goodness-of-fit indexes for the five-factor model were adequate except the Chi-square value, which was statistically significant: χ2 = 4906.46, df = 1700, p<0.01; χ2/df = 2.89; AIC = 5370.46; RMSEA = 0.049, 90% CI RMSEA = [0.047; 0.050], p(RMSEA<0.05) = 0.89; SRMR = 0.067; CFI = 0.99.

Table 2. Results for the fit of Model comparison approach about the latent structure of the reduced version of the HS-EBP questionnaire.

https://doi.org/10.1371/journal.pone.0177172.t002

In relation to the five-factor model, the factor loadings of the items were estimated; all saturations were statistically significant, with t values greater than 2.00 in absolute value. Factor loadings for each dimension are shown in Table 3 (in the CFA, each item is hypothesised to load on a single dimension, with the remaining factor loadings constrained to zero).

Table 3. Item factor loadings in the five-factor model for the reduced version of the HS-EBP questionnaire.

https://doi.org/10.1371/journal.pone.0177172.t003

In general, items obtained moderate to high factor loadings in the five-factor model, above .40: from .48 to .84 in D1, from .69 to .90 in D2, from .61 to .83 in D4, and from .53 to .86 in D5. Regarding Dimension 3, 8 of its 10 items obtained adequate loadings (ranging from .41 to .96), while 2 items showed inadequate values: item 4 (.21) and item 6 (.38).

Moderate correlations were also obtained between all the dimensions of the questionnaire, with the highest values among the dimensions related to the "EBP process" (see Table 4).

Table 4. Correlation matrix between latent factors in the reduced version of the HS-EBP questionnaire.

https://doi.org/10.1371/journal.pone.0177172.t004

With respect to evidence of criterion validity, statistically significant negative correlations were found between D2, D3 and D4 ("EBP process") and D1 (Beliefs and attitudes), on the one hand, and the RTC criterion variables "Search for routines", "Short-term focus" and "Emotional reaction to imposed change", on the other, as well as between D5 (Barriers-Facilitators) and "Search for routines" and "Emotional reaction". Significant negative correlations were also found between the "EBP process" dimensions and D5 (Barriers-Facilitators) and the different criterion variables of the MBI scale. Likewise, significant positive correlations were obtained between all the dimensions of the HS-EBP questionnaire and the "Intrinsic Motivation" subscale of the CVP-35 scale. Lastly, significant positive correlations were observed between the "Knowledge/skills" and "Practice" dimensions of the EBPQ-19 questionnaire and all the dimensions of the HS-EBP questionnaire, which provides evidence of convergent validity for the HS-EBP questionnaire (see Table 5).

Table 5. Non-parametric correlation matrix between HS-EBP factors and RTC, MBI, CVP-35 and EBPQ-19 subscales.

https://doi.org/10.1371/journal.pone.0177172.t005

Finally, in relation to evidence of decision validity, the ANOVA results show significant differences between training levels in all the dimensions of the HS-EBP questionnaire: D1 (F(3, 865) = 10.58, p<0.0001), D2 (F(3, 865) = 37.25, p<0.0001), D3 (F(3, 865) = 3.57, p = 0.014), D4 (F(3, 865) = 4.56, p = 0.004) and D5 (F(3, 865) = 6.50, p<0.0001). The robust Welch and Brown-Forsythe tests also obtained statistically significant values for all factors. Post hoc analyses were applied to compare the pairs of means corresponding to the different training levels in each dimension. In D2, significant differences were found between the "advanced" training level and all the other levels. In the other two dimensions related to the process, D3 and D4, significant differences appeared only between the "advanced" level and the "no EBP training" level. In the remaining two dimensions of the HS-EBP questionnaire (reduced version), D1 and D5, the most noteworthy finding was again significant differences between the "advanced" level and the "no EBP training" level (see Table 6).

Table 6. One-way ANOVA for the five factors of HS-EBP questionnaire and the four levels of training in EBP.

https://doi.org/10.1371/journal.pone.0177172.t006

Discussion

The aim of this study was to undertake the psychometric validation of a new transprofessional tool to measure the core contents of EBP. The development and psychometric validation process of the HS-EBP questionnaire involved over 1080 professionals from 4 Health Sciences professions: medicine, nursing, physiotherapy, and psychology. The HS-EBP questionnaire was designed to address the shortcomings pointed out above, in accordance with the established methodological design, following the standards recommended by the APA and the ITC for the construction of tests [14–16] and the COSMIN protocol for assessing the quality of measures in the field of health [17].

The pilot study and the subsequent sample validation test made it possible to analyse and refine the HS-EBP questionnaire from the prior version obtained in the content validation process [22], yielding a reduced version. This reduced version obtained an adequate degree of internal consistency for the five dimensions. As a novel contribution relative to the EBP measuring instruments published to date, the dimensions of the HS-EBP were also subjected to ICC estimation, introducing a more demanding standard in the estimation of the instrument's reliability. The results point towards a moderate degree of agreement in the ICC for three of the five dimensions, substantial in D2 and fair in D3, according to the classification of Streiner & Norman [31].

Regarding the latent structure, the confirmatory analyses revealed a better fit for the five-factor model, providing evidence that corroborates the hypothesised dimensional structure. Few EBP instruments have used confirmatory models [10,23]. Thus, from the point of view of psychometric evidence, the confirmatory analysis constitutes one of the strengths of the HS-EBP questionnaire with respect to most of the instruments developed to date.

Based on the results of the measurement model and the reliability estimation, D3 could be psychometrically improved. This dimension had also presented certain difficulties during the studies conducted to obtain evidence of content validity [22]. In addition, items 4 and 6 obtained factor loadings below .40 and require psychometric refinement, taking into account the operationalised contents. Nevertheless, given their content validity, they were retained in this dimension pending further studies. Such issues with this dimension are not new in the literature, and they may reflect the difficulty of operationalising what is probably the most complex part of assessing the EBP process, owing to its complex, dynamic nature [19,21,35]. In fact, no previous psychometric instrument in the literature had attempted to measure this part of the process. Given the difficulties presented, this dimension must be followed up and, if necessary, improved in subsequent revisions of the instrument through new sample tests in order to optimise its quality.

The results obtained for the criterion variables considered suggest that practitioners prone to evidence-based practice are "less resistant" to situations of change, tending to experience a lower degree of discomfort, lack of enthusiasm, and anxiety when facing situations of professional change. Moreover, they also showed less concern about change and more receptivity towards the potential benefits of EBP. Finally, these practitioners were less likely to be oriented towards highly predictable and conventional tasks, procedures or professional surroundings. In addition, this profile of individuals would also show a lower degree of burnout, with fewer feelings of emotional and affective exhaustion, negative attitudes and/or depersonalisation, and a greater perception of personal fulfilment at work and of intrinsic motivation.

These results may contribute to expanding the nomological network and theoretical framework of the EBP construct, though always with the caution imposed by the limitations of a cross-sectional design. Nonetheless, this is the first time a trans-professional instrument has been developed for which evidence of criterion validity is obtained with respect to external variables, which constitutes one of the strengths of this study. For instance, McEvoy's [5] trans-professional instrument is one of the most complete in terms of its domain structure. Nevertheless, it must be taken into account that it measures only the use of EBP, excluding the dimension related to the work context or practice environment, because the authors initially developed it in the academic field to assess the development of EBP competencies. Kaper's [9] instrument is another recently developed transdisciplinary instrument; however, it is limited to the mere identification of barriers and/or facilitators for the transfer of scientific research results into practice, which, although important, constitutes only one part of the EBP construct. The same limitation in measuring EBP is common to the pioneering trans-disciplinary EBP measuring instruments [6–8]. In short, none of these instruments was created from a comprehensive development process of the operational definition of the EBP construct to be measured, as suggested by the standards recommended by the ITC and the APA for the construction of tests [14–16].

From a correlational, non-causal approach, the HS-EBP questionnaire's scores made it possible to differentiate the "advanced" level of EBP training from the rest of the levels analysed. Dimension 2, "Knowledge/skills and behaviours of professionals with respect to the use of results from scientific research", showed the best discriminative capacity. However, no differences were found between the "no training" and "basic" levels of EBP training in the scores of any of the dimensions. These results are similar to those obtained by McEvoy et al [5], where no statistically significant differences were found between the different training levels with respect to professionals' overall attitudes towards EBP. Yet the present study also provides evidence that the "advanced" level obtains significantly higher scores in D1 (Beliefs and attitudes) of the HS-EBP compared to the other levels. This result underscores the importance of developing EBP competency in sufficient depth (advanced level). It is reinforced by evidence from other studies that "attitudes significantly moderate behaviour" [36]. Thereby, the "advanced" level brings about significant changes not only in the acquisition of EBP knowledge/skills, but also in positive attitudes and beliefs, which is reflected in professional practice more consistent with the principles of EBP.

Among the limitations of the study are the non-random sampling and the potential self-selection bias arising from the voluntary nature of participation. However, the undesired effect of these biases may be alleviated both by the size of the sample used and by the inclusion of four professions that are well differentiated in their characteristics and may be considered representative of the remaining health-related professions.

To complete the psychometric validation process, it remains necessary in the near future to check whether the scores of the HS-EBP questionnaire, especially in the dimensions related to the "EBP process", can predict the results of an objective EBP measure obtained through direct observation of professionals' regular daily practice. Such a criterion could be considered a gold standard for obtaining evidence of decision validity.

To counter the social desirability bias inherent to this type of instrument, the objective measurement of EBP knowledge/skills should also be considered, as suggested in the conclusions of the most recent systematic reviews of EBP measuring instruments in different Health Sciences disciplines, which manifest the need to develop and assess objective competency-based evaluation tools [3,4]. No less important is the need to obtain evidence of the instrument's sensitivity to change.

Conclusions

The HS-EBP questionnaire was rigorously developed, and the methodological design used made it possible to obtain suitable evidence of reliability and validity for its scores across a range of professions in the field of Health Sciences. The tool makes it possible to assess the different dimensions of the EBP construct as a process put into practice in response to each clinical situation (problem) arising in professionals' daily practice. It thus measures all the elements included in the theoretical definition of the construct and its proposed operationalisation.

This includes the assessment of the different components brought into play in the clinical reasoning process prior to decision-making: results from scientific research, clinical experience, and the professional's capacity for clinical judgement. It also includes other sources of information that may enter a professional's reasoning process, such as the opinions of work colleagues. Finally, it enables the assessment of health results as the final component of the process to be evaluated. Likewise, the HS-EBP allows the assessment of the main individual- and organisational-level factors that influence this process of clinical reasoning and decision-making, such as professionals' own beliefs and attitudes towards EBP and the organisational aspects of the healthcare system in which they practise.

In short, the validity findings for the questionnaire are promising in terms of its proposed use for assessing the EBP construct at the individual level and for evaluating the impact of specific interventions to improve EBP. The HS-EBP questionnaire can thus be used in clinical practice for diagnostic and interventional purposes, and researchers in the field are encouraged to pursue this line further, so that future studies on the construct and/or its measuring instruments may continue to use these criterion variables to gather scientific evidence on these aspects. Obtaining such validity evidence from different sources contributes to an adequate degree of construct validity of the test scores, under an overall unitary concept of validity.

Supporting information

S1 File. Original language (Spanish) of Health Sciences Evidence Based Questionnaire (HS-EBP).

https://doi.org/10.1371/journal.pone.0177172.s001

(DOCX)

S2 File. English version of Health Sciences Evidence Based Questionnaire (HS-EBP).

https://doi.org/10.1371/journal.pone.0177172.s002

(DOCX)

S4 File. Dataset.

Validation study matrix.

https://doi.org/10.1371/journal.pone.0177172.s004

(SAV)

Author Contributions

  1. Conceptualization: JCF JDP JMM MBV PSF ASA.
  2. Data curation: JCF JMM ASA.
  3. Formal analysis: JCF JDP ASA.
  4. Funding acquisition: JCF.
  5. Investigation: JCF JDP JMM MBV PSF ASA.
  6. Methodology: JCF JDP JMM MBV PSF ASA.
  7. Project administration: JCF.
  8. Supervision: JCF.
  9. Visualization: JCF JDP JMM MBV PSF ASA.
  10. Writing – original draft: JCF JDP JMM MBV ASA.
  11. Writing – review & editing: JCF JDP JMM MBV PSF ASA.

References

  1. Fernández-Domínguez JC, Sesé-Abad A, Morales-Asencio JM, Oliva-Pascual-Vaca A, Salinas-Bueno I, de Pedro-Gómez JE. Validity and reliability of instruments aimed at measuring Evidence-Based Practice in Physical Therapy: a systematic review of the literature. J Eval Clin Pract. 2014;20:767–78. pmid:24854712
  2. Scurlock-Evans L, Upton P, Upton D. Evidence-Based Practice in physiotherapy: a systematic review of barriers, enablers and interventions. Physiotherapy. 2014;100(3):208–19. pmid:24780633
  3. Leung K, Trevena L, Waters D. Systematic review of instruments for measuring nurses' knowledge, skills and attitudes for evidence-based practice. J Adv Nurs. 2014;70:2181–95. pmid:24866084
  4. Buchanan H, Siegfried N, Jelsma J. Survey instruments for knowledge, skills, attitudes and behaviour related to evidence-based practice in occupational therapy: a systematic review. Occup Ther Int. 2016;23(2):59–90. pmid:26148335
  5. McEvoy M, Williams M, Olds T. Development and psychometric testing of a trans-professional evidence-based practice profile questionnaire. Med Teach. 2010;32(9):e373–80. pmid:20795796
  6. Pollock AS, Legg L, Langhorne P, Sellars C. Barriers to achieving evidence-based stroke rehabilitation. Clin Rehabil. 2000;14(6):611–7. pmid:11128736
  7. Metcalfe C, Lewin R, Wisher S, Perry S, Bannigan K, Moffett JK. Barriers to implementing the evidence base in four NHS therapies: dietitians, occupational therapists, physiotherapists, speech and language therapists. Physiotherapy. 2001;87(8):433–441.
  8. Palfreyman S, Tod A, Doyle J. Comparing evidence-based practice of nurses and physiotherapists. Br J Nurs. 2003;12(4):246–253. pmid:12671571
  9. Kaper NM, Swennen MH, van Wijk AJ, Kalkman CJ, van Rheenen N, van der Graaf Y, et al. The "evidence-based practice inventory": reliability and validity was demonstrated for a novel instrument to identify barriers and facilitators for Evidence Based Practice in health care. J Clin Epidemiol. 2015;68(11):1261–9. pmid:26086726
  10. Bernal G, Rodríguez-Soto NdelC. Development and psychometric properties of the Evidence-based Professional Practice Scale (EBPP-S). P R Health Sci J. 2010;29(4):385–90.
  11. Fritsche L, Greenhalgh T, Falck-Ytter Y, Neumayer HH, Kunz R. Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. BMJ. 2002;325(7376):1338–41. pmid:12468485
  12. Ramos KD, Schafer S, Tracz SM. Validation of the Fresno test of competence in evidence based medicine. BMJ. 2003;326(7384):319–21. pmid:12574047
  13. Tilson JK. Validation of the modified Fresno test: assessing physical therapists' evidence based practice knowledge and skills. BMC Med Educ. 2010;10:38. pmid:20500871
  14. American Educational Research Association, American Psychological Association & National Council on Measurement in Education. Standards for educational and psychological testing. Washington DC: American Educational Research Association; 1999.
  15. American Educational Research Association, American Psychological Association & National Council on Measurement in Education. Standards for educational and psychological testing. Washington DC: American Educational Research Association; 2014.
  16. International Test Commission. ITC guidelines on quality control, test analysis and reporting of test scores; 2013. Available: https://www.intestcom.org/files/guideline_quality_control.pdf
  17. Mokkink LB, Terwee CB, Knol DL, Stratford PW, Alonso J, Patrick DL, et al. The COSMIN checklist for evaluating the methodological quality of studies on measurement properties: a clarification of its content. BMC Med Res Methodol. 2010;10(1):22.
  18. Williams B, Perillo S, Brown T. What are the factors of organisational culture in health care settings that act as barriers to the implementation of evidence-based practice? A scoping review. Nurse Educ Today. 2015;35(2):e34–41. pmid:25482849
  19. DiCenso A, Guyatt G, Ciliska D. Evidence-based nursing: a guide to clinical practice. London: Elsevier Health Sciences; 2014.
  20. Melnyk BM. Building cultures and environments that facilitate clinician behavior change to evidence-based practice: what works? Worldviews Evid Based Nurs. 2014;11(2):79–80. pmid:24597576
  21. Dawes M, Summerskill W, Glasziou P, Cartabellotta A, Martin J, Hopayian K, et al. Sicily statement on evidence-based practice. BMC Med Educ. 2005;5(1):1. pmid:15634359
  22. Fernández-Domínguez JC, Sesé-Abad A, Morales-Asencio JM, Sastre-Fullana P, Pol-Castaneda S, de Pedro-Gómez JE. Content validity of a Health Science Evidence-Based Practice questionnaire (HS-EBP) with a web-based modified-Delphi approach. Int J Qual Health Care. 2016;28(6):764–773. pmid:27655793
  23. De Pedro-Gómez JE, Morales-Asencio JM, Sesé-Abad A, Bennasar-Veny M, Ruiz-Román MJ, Muñoz-Ronda F. Validation of the Spanish version of the Evidence Based Practice Questionnaire in nurses. Rev Esp Salud Publica. 2009;83(4):577–86. pmid:19893885
  24. Arciniega LM, González L. Validation of the Spanish-language version of the resistance to change scale. Pers Individ Dif. 2009;46(2):178–182.
  25. Seisdedos N. MBI Inventario "Burnout de Maslach". Madrid, España: TEA Ediciones; 1997.
  26. Cabezas C. La calidad de vida de los profesionales. FMC. 2000;7(Supl 7):53–68.
  27. Martín J, Cortés JA, Morente M, Caboblanco M, Garijo J, Rodríguez A. Metric characteristics of the Professional Quality of Life Questionnaire [QPL-35] in primary care professionals. Gac Sanit. 2004;18(2):129–36. pmid:15104973
  28. Jöreskog KG, Sörbom D. LISREL 8.80 for Windows [computer software]. Lincolnwood, Illinois, USA: Scientific Software International; 2006.
  29. Pallant J. SPSS survival manual: a step by step guide to data analysis using SPSS (version 15). 3rd ed. Crows Nest: Allen and Unwin; 2007.
  30. Tabachnick BG, Fidell LS. Using multivariate statistics. 5th ed. Boston: Allyn and Bacon; 2007.
  31. Streiner DL, Norman GR. Health measurement scales: a practical guide to their development and use. Oxford: Oxford University Press; 2008.
  32. Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling. 1999;6(1):1–55.
  33. Schreiber JB, Nora A, Stage FK, Barlow EA, King J. Reporting structural equation modeling and confirmatory factor analysis results: a review. J Educ Res. 2006;99(6):323–328.
  34. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74. pmid:843571
  35. Benner PE. From novice to expert: excellence and power in clinical nursing practice. New Jersey: Pearson Education; 2001.
  36. Kraus SJ. Attitudes and the prediction of behavior: a meta-analysis of the empirical literature. Pers Soc Psychol Bull. 1995;21(1):58–75.