
Negative Effects of Psychological Treatments: An Exploratory Factor Analysis of the Negative Effects Questionnaire for Monitoring and Reporting Adverse and Unwanted Events

  • Alexander Rozental ,

    alexander.rozental@psychology.su.se

    Affiliation Division of Clinical Psychology, Department of Psychology, Stockholm University, Stockholm, Sweden

  • Anders Kottorp,

    Affiliations Division of Occupational Therapy, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden, Department of Occupational Therapy, University of Illinois at Chicago, Chicago, United States of America

  • Johanna Boettcher,

    Affiliation Department of Clinical Psychology and Psychotherapy, Freie Universität Berlin, Berlin, Germany

  • Gerhard Andersson,

    Affiliations Division of Psychiatry, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden

  • Per Carlbring

    Affiliation Division of Clinical Psychology, Department of Psychology, Stockholm University, Stockholm, Sweden

Abstract

Research conducted during the last decades has provided increasing evidence for the use of psychological treatments for a number of psychiatric disorders and somatic complaints. However, with the focus largely on positive outcomes, less attention has been given to the potential for negative effects. Despite indications of deterioration and other adverse and unwanted events during treatment, little is known about their occurrence and characteristics. Hence, in order to facilitate research on negative effects, a new instrument for monitoring and reporting their incidence and impact was developed using a consensus among researchers, self-reports by patients, and a literature review: the Negative Effects Questionnaire. Participants were recruited via a smartphone-delivered self-help treatment for social anxiety disorder and through the media (N = 653). An exploratory factor analysis was performed, resulting in a six-factor solution with 32 items, accounting for 57.64% of the variance. The derived factors were: symptoms, quality, dependency, stigma, hopelessness, and failure. Items related to unpleasant memories, stress, and anxiety were experienced by more than one-third of the participants. Further, increased or novel symptoms, as well as lack of quality in the treatment and therapeutic relationship, were rated as having the highest negative impact. The findings are discussed in relation to prior research and other similar instruments of adverse and unwanted events, giving credence to the items that are included. The instrument is presently available in eleven different languages and can be downloaded free of charge from www.neqscale.com.

Introduction

Psychological treatments have the potential to alleviate mental distress and enhance well-being for many patients suffering from psychiatric disorders and somatic complaints. Research on methods such as cognitive behavior therapy (CBT) indicates that they are effective and can have long-term benefits, both in research settings and in regular outpatient clinics [1–3]. Meanwhile, different ways of increasing access to psychological treatments have been explored, both by introducing national guidelines and recommendations to health-care providers [4–6], and by investigating the usefulness of Internet- or smartphone-delivered treatment interventions [7–9]. However, although promising in relation to disseminating the best available care, little attention has thus far been given to the potential for negative effects of psychological treatments [10]. Most clinical trials focus on the average treatment outcome and the number of patients achieving clinically significant change, that is, attaining a positive result that fulfills a predetermined diagnostic criterion or is beyond a statistical cutoff, while ignoring the fact that some patients might also experience adverse or unwanted events [11–13]. In comparison to pharmacological research, studies involving psychological treatments seldom report the possibility of negative effects [14]. A recent review showed that only one-fifth of a large number of randomized controlled trials mentioned the occurrence of harm [15]. The situation has remained more or less the same throughout history, presumably because efforts were directed at determining the efficacy of psychological treatments and establishing their position in relation to medicine [16], thereby failing to examine the possibility of negative effects during the treatment period. Adverse and unwanted events were, however, mentioned in an evaluation in the 1950s of the Cambridge-Somerville Youth Study of delinquent adolescents, indicating that a larger proportion of those assigned to the intervention group went on to commit more crimes than those allocated to the control group [17]. Likewise, Bergin [18] was able to provide evidence of patients deteriorating in seven different outcome studies, arguing that between five and ten percent consistently seem to deteriorate. Although criticized for the difficulty of determining a causal relationship [19], that is, proving that the treatment interventions and not other circumstances are responsible for the patients faring worse, these numbers have been confirmed in later reviews and across various treatment modalities and psychiatric disorders [20–22], suggesting that deterioration is to be expected and needs to be monitored so that a negative treatment trend can be reversed [23].

Deterioration is, however, far from the only negative effect that might occur during psychological treatments. Hadley and Strupp [24] were early to recognize a wide range of adverse and unwanted events, for instance, social stigma, dependency, and novel symptoms. Similarly, Mays and Franks [25], introducing the term negative outcome, argued that any type of significant decline in one or more areas of functioning during the treatment period should be regarded as negative, not just deterioration in symptomatology. Others have also suggested that nonresponse, dropout, and interpersonal difficulties may be perceived as negative effects [26–28], although establishing a cause-effect relationship is complex owing to the influence of other factors, most notably the natural fluctuations in psychiatric disorders, the undesirable impact of everyday stressors, and the perspective used to judge whether or not a negative effect has occurred. Strupp and Hadley [29], for example, presented a tripartite model for assessing the positive as well as negative effects of psychological treatments, suggesting that the outcome will depend on the eye of the beholder: the patient, the therapist, or society at large. A specific response occurring during the treatment period, for instance, increased anxiety during an exposure exercise, might be perceived as negative by the patient, but can be expected and perhaps even regarded as beneficial by the therapist providing the treatment. Thus, even though there are reasons to assume that other types of negative effects exist in psychological treatments, determining their occurrence is complicated and warrants both theoretical and methodological considerations.

Different suggestions on how to monitor and report negative effects have nonetheless been put forward, and the need for more research has been emphasized [30–32]. Deterioration has, for instance, long been regarded as a relatively straightforward method for assessing the number of patients faring worse on a given outcome measure [33]. In addition, both therapist- and patient-administered measures have been proposed. One early attempt was the Vanderbilt Negative Indicators Scale (VNIS), a comprehensive therapist rating system to determine the occurrence of various negative effects using tape-recorded sessions [34]. The VNIS distinguished between 42 different items on five different subscales (scored 0–5), e.g., unrealistic expectations (patient personal qualities), deficiencies in therapeutic commitment (therapist personal qualities), inflexible use of therapeutic techniques (errors in technique), poor therapeutic relationship (patient-therapist interaction), and poor match (global session ratings). Albeit highly ambitious and theory driven [35], the initial evaluation consisted of only two samples of 10 and 18 patients, and the internal consistencies and interrater reliability were highly irregular [36]. As for their relationship with treatment outcome, errors in technique showed the strongest association, although the results seemed to vary between treatment modalities and few correlations remained significant after partialling out the effect of the other subscales. Also, with the exception of a limited number of psychodynamic psychotherapy studies [37, 38], the VNIS never became popular among researchers or therapists. Other instruments have been proposed since then, such as the Experiences of Therapy Questionnaire (ETQ) [39]. A principal component analysis was used on data from 716 patients undergoing, or with prior experience of, psychological treatment, revealing a rotated solution of five components explaining 53.4% of the variance. Of the original 103 items that were generated, 63 were retained (scored 1–5), e.g., “My therapist doesn’t seem to understand what I want to get out of therapy” (Item 11). The components included such areas as negative therapist (e.g., lack of empathy), pre-occupying therapy (e.g., feeling alienated), beneficial therapy (e.g., increased insight), idealization of therapist (e.g., feeling dependent on the therapist), and passive therapist (e.g., inexperienced therapist) [40]. The components were subsequently related to different sociodemographic variables, type of psychological treatment, frequency of sessions, and reasons for entering and discontinuing therapy, indicating that younger patients terminated early on because the therapist was too passive or unable to solve any problems, and that many patients believed their therapy was ineffective. Although a large and heterogeneous sample in terms of psychiatric disorders and treatment modalities was used, the generation of items was not entirely clear and included both negative and positive effects, rather than providing an instrument that solely investigates adverse or unwanted events. Furthermore, all comparisons were made post hoc and not according to any initial hypotheses, increasing the risk of obtaining spurious findings.
Linden [41], on the other hand, presented a different approach to examining negative effects, the Unwanted Event to Adverse Treatment Reaction (UE-ATR) checklist, a therapist-administered instrument for assessing a wide range of potential adverse and unwanted effects, for instance, lack of clear treatment results, prolongation of treatment, and non-compliance of the patient. The therapist is also supposed to determine how the negative effects were linked to the psychological treatment using a five-step scale ranging from unrelated to related, and to evaluate their severity level, e.g., mild, moderate, or severe. Conceptually, the UE-ATR resembles the VNIS in that the negative effects can involve different areas of life, not only deterioration of symptomatology, and that the relationship with treatment is not always clear. However, as stated by Linden and Schermuly-Haupt [30], the instrument is more of a tool for improving the therapist’s ability to detect negative effects than a scale with distinguishable psychometric properties, although it has been used in at least one clinical trial [42]. As for other instruments, the Inventory for the Assessment of Negative Effects of Psychotherapy (INEP) has also been put forward [43]. After performing a literature review and consulting psychotherapy researchers, 120 items were generated (scored on a three-step scale regarding change or a four-step scale in terms of agreement), such as “I feel addicted to my therapist” (Item 10). Of these, 52 items were selected and distributed to 195 patients who had undergone psychological treatment and who were recruited via advertisements. Using a principal component analysis and a confirmatory factor analysis, the results yielded a rotated solution of five or seven components/factors, depending on the type of analysis: intrapersonal changes, intimate relationship, stigmatization, emotions, workplace, therapeutic malpractice, and family and friends, accounting for 46.7 or 55.8% of the variance (the final version consists of 21 items). Interestingly, the results indicated that more patients in behavioral than in psychodynamic or nondirective therapy felt forced by their therapist to implement certain interventions, while patients in nondirective therapy had longer periods of depression after the treatment period, and patients in psychodynamic therapy more frequently felt offended by their therapist. Although carefully developed and providing some useful recommendations, most notably asking the patient to differentiate between negative effects of their treatment and other circumstances, the INEP is difficult to score and assess in relation to treatment outcome as it does not include a clear and coherent scale. Further, several items could be criticized on theoretical grounds, for instance, “I have trouble finding insurance or am anxious to apply for new insurances” (Item 8), as they might not be applicable in all contexts. Also, a large number of items seem to convey malpractice issues, such as “My therapist attacked me physically” (Item 19), rather than negative effects of properly performed psychological treatments. Although such incidents will most certainly have a negative impact, it could be argued that malpractice issues are related to the unethical behavior of a therapist rather than being a feature of the treatment interventions [11].

Hence, in order to address some of the shortcomings mentioned above, a new instrument for assessing negative effects of psychological treatments was developed: the Negative Effects Questionnaire (NEQ). Items were generated by consulting a number of researchers [32], distributing open-ended questions [42], analyzing patient responses using a qualitative method [44], and conducting a comprehensive literature review. The purpose of this process was to present an instrument that is based on both theoretical considerations and empirical findings, with items being systematically derived, reasonable to expect, and comprehensible to the patient. The overall purpose of the current study is to determine the validity and factor structure of the instrument, and to examine which items should be retained in a final version. This is believed to result in an instrument that is accessible and easy for researchers and therapists to administer, which might aid the investigation of negative effects in a variety of psychological treatments and the exploration of their relationship with treatment outcome. Providing an instrument that can identify adverse and unwanted events during the treatment period may also help therapists identify patients at risk of faring worse and offer other treatment interventions as a way of reversing a negative treatment trend.

Methods

Item design

Items were carefully generated using a consensus statement regarding the monitoring and reporting of negative effects [32], findings from a treatment outcome study of patients with social anxiety disorder that probed for adverse and unwanted events [42], the results of a qualitative content analysis of the responses from four different clinical trials [44], and a literature review of books and published articles on negative effects. This is in line with the recommendations by Cronbach and Meehl [45], advising researchers to articulate the theoretical concept of an instrument before developing and testing it empirically in order to increase content validity. Also, instead of restricting the number of items to be included in a final version, the concept of overinclusiveness was adopted, that is, embracing more items than strictly needed in order to aid the statistical detection of those that are related to the underlying construct(s) [46]. Subsequently, 60 items were generated, characterized by interpersonal issues, problems with the therapeutic relationship, deterioration, novel symptoms, stigma, dependency, hopelessness, difficulties understanding the treatment content, as well as problems implementing the treatment interventions. An additional open-ended question was also included for the investigation of negative effects that might have been experienced but were not listed, i.e., “Describe in your own words whether there were any other negative incidents or effects, and what characterized them”. Further, in order to assess the readability and understanding of the items, cognitive interviews were conducted with five individuals unrelated to the current study and without any prior knowledge of negative effects or psychological treatments, i.e., encouraging them to read the items out loud and speak freely about whatever came to mind [47]. Cognitive interviews are often suggested as a way of pretesting an instrument so that irrelevant or difficult items can be revised, and to increase its validity [48]. In relation to the proposed items, several minor changes were made, e.g., rephrasing or clarifying certain expressions. In addition, the instrument included general information about the possibility of experiencing negative effects, and comprised three separate parts: 1) “Did you experience this?” (yes/no), 2) “If yes–here is how negatively it affected me” (not at all, slightly, moderately, very, and extremely), and 3) “Probably caused by” (the treatment I received/other circumstances). The instrument is scored 0–4 and contains no reversed items, as these may introduce errors or artifacts in the responses [49].
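To make the three-part response format concrete, the following is a minimal sketch of how an item’s responses could be represented and scored; the column names and values are hypothetical and not part of the published instrument or the study materials.

```python
import pandas as pd

# Hypothetical responses to a single NEQ item, illustrating the three parts:
# 1) "Did you experience this?" (yes/no), 2) negative impact (0 = not at all ... 4 = extremely),
# 3) attribution (the treatment received vs. other circumstances). All names are made up.
responses = pd.DataFrame({
    "item13_experienced": [1, 0, 1, 1],
    "item13_impact":      [3, None, 2, 4],
    "item13_attribution": ["treatment", None, "other", "treatment"],
})

# The impact rating is only meaningful when the item was experienced;
# otherwise it is treated as missing.
endorsed = responses["item13_experienced"] == 1
print(responses.loc[endorsed, "item13_impact"].mean())
```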

Data collection

The instrument was distributed via the Internet using an interface for administering surveys and self-report measures, LimeSurvey (www.limesurvey.org). Participants were recruited via two different means in order to include a diverse and heterogeneous sample: patients undergoing a smartphone-delivered self-help treatment for social anxiety disorder based on CBT (N = 189) [50], and individuals responding to an article on negative effects of psychological treatments featured in the largest morning newspaper in Sweden as well as a Swedish public radio show on science covering the same topic (N = 464), yielding a total sample size of 653. As for the treatment group, patients were instructed to complete the instrument on negative effects while responding to the outcome measures at the post-treatment assessment, resulting in a response rate of 90.4%. In terms of the media group, information on negative effects and the purpose of the current study was presented on a website created specifically for this purpose (www.psykoterapiforskning.se), where the individuals were instructed to fill out the instrument and provide sociodemographic information, rendering a response rate of 49.4% (defined as those who entered the website and completed the instrument). Inclusion criteria for the treatment group, that is, for being included in the clinical trial, were: a score above 30 points on the Liebowitz Social Anxiety Scale–Self-Report [51], social anxiety disorder according to the Mini-International Neuropsychiatric Interview (MINI) [52], access to an iPhone, at least 18 years of age, and being a Swedish resident. Suicidality, ongoing psychological treatment, or a recent commencement or alteration of any psychotropic medication were all reasons for exclusion from the clinical trial. With regard to the media group, the inclusion criterion was simply having undergone, or currently being in, psychological treatment sometime during the last two years. Neither group received any monetary compensation for completing the instrument.

Statistical analysis

All data were assembled and organized in one main dataset, and the statistical analyses were performed in IBM SPSS Statistics, version 22. As the purpose of the current study was to present an instrument for assessing negative effects of psychological treatments, only items that were attributed to treatment by the participants were analyzed. In order to determine the validity and factor structure of the instrument, an exploratory factor analysis (EFA) was conducted using principal axis factoring. This method is suitable for assessing theoretically interesting latent constructs rather than testing a specific hypothesis [53], corresponding to the purpose of the current study. Also, for an EFA to be appropriate, the level of measurement must be considered interval, or at least quasi-interval, which could be assumed for the data that were collected [54]. In comparison to other methods for investigating the underlying dimensions of an instrument, such as principal component analysis, an EFA also accounts for measurement error, which is argued to result in more realistic assumptions [55]. As for the rotation used when extracting the factors, an oblique rotation was implemented using direct oblimin with delta set to zero and the number of iterations set to 40. As discussed by Browne [56], an oblique rotation permits factors to be correlated, which an orthogonal rotation does not, and is thus more representative of social science data, where it is reasonable to assume that different factors in the same instrument will in fact correlate to some degree. Additional analyses implemented for considering the appropriateness of EFA were the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, assessing the potential for finding distinct and reliable factors, Bartlett’s Test of Sphericity, which indicates whether the correlations between items are significantly different from zero, and the determinant of the correlation matrix, checking for a reasonable level of correlations. In addition, item-item correlations < .30 or > .90 were examined to see whether items measure the same underlying construct and to investigate the risk of multicollinearity. In order to establish the validity of the extracted factor solution, several methods were used. The Kaiser criterion, eigenvalues greater than one, was only utilized as a preliminary analysis, given that it has been found to result in both over- and underfactoring [57]. The scree test was then implemented to visually inspect the number of factors that precede the last major drop in eigenvalues [58], although it needs to be validated by other means as it is deemed a highly subjective procedure [59]. Hence, parallel analysis was performed, i.e., comparing the obtained factor solution with one derived from data produced at random with the same number of cases and variables, meaning that the correct number of factors should equal the number of factors with eigenvalues higher than those that are randomly generated [60]. As SPSS does not perform parallel analysis, syntax from O’Connor [61] was used. Moreover, to examine the validity of the factor solution across samples, a stability analysis was conducted by having SPSS select half of the cases at random and then retesting the factor solution [53], with similar results indicating that it is relatively stable. The interpretability of the factors was also checked to see whether it was reasonable and fit well with prior theoretical assumptions and empirical findings [62].
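The analyses were carried out in SPSS; purely as an illustration, a roughly equivalent workflow can be sketched in Python, assuming the factor_analyzer package and a data frame of treatment-attributed item ratings. The file name and variable names below are hypothetical, and this is not the authors’ code.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical data file: one column per NEQ item, holding 0-4 impact ratings
# restricted to negative effects attributed to the treatment.
items = pd.read_csv("neq_items.csv")

# Suitability checks: sampling adequacy (KMO) and Bartlett's Test of Sphericity.
kmo_per_item, kmo_total = calculate_kmo(items)
chi_square, p_value = calculate_bartlett_sphericity(items)
print(f"KMO = {kmo_total:.2f}, Bartlett chi2 = {chi_square:.1f} (p = {p_value:.3g})")

# Principal axis factoring with an oblique (direct oblimin) rotation,
# mirroring the settings described in the text.
efa = FactorAnalyzer(n_factors=6, method="principal", rotation="oblimin")
efa.fit(items)

eigenvalues, _ = efa.get_eigenvalues()   # inspected for the Kaiser criterion / scree test
loadings = pd.DataFrame(efa.loadings_, index=items.columns)
communalities = pd.Series(efa.get_communalities(), index=items.columns)
```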

Ethical considerations

All data included in the current study were manually entered by the participants and assigned an auto-generated identification code, e.g., 1234abcd, allowing complete anonymity. As for the treatment group, ethical approval was obtained from the Regional Ethical Board in Stockholm, Sweden (Dnr: 2014/680-31/3), and written informed consent was collected by letter at the pre-treatment assessment. The consent form included information regarding the clinical trial, how to contact the principal investigator, data management and confidentiality, and the right to obtain a copy of one’s personal record in accordance with the Swedish Personal Data Act. With regard to the media group, information about the authors as well as the current study was provided, and written informed consent with the same details as above was submitted digitally before responding to the instrument. Moreover, the results are only presented at the group level, and great care was taken not to disclose the identity of any specific participant.

Results

Participants

A total sample of 653 participants was included in the current study, with a majority being women (76.6%), in their late thirties, and in a relationship (60%). A large proportion had at least a university degree (62%) and were either employed (52.7%) or students (25.1%). In terms of the reason for receiving psychological treatment according to the participants themselves, anxiety disorders were most prevalent (48.4%), compared to mixed anxiety/depression (14.1%), depression (10.1%), and other psychiatric disorders or psychological problems (27.4%). As for the therapeutic orientation the participants believed they had received, cognitive/behavioral was predominant (61.3%), which includes several different modalities, e.g., schema therapy, cognitive therapy, as well as acceptance and commitment therapy, followed by psychodynamic psychotherapy (17.2%). Prior or ongoing psychotropic medication was also relatively common (38.3%). See Table 1 for an overview of the participants, divided by means of recruitment.

Table 1. Sociodemographic characteristics of participants divided by means of recruitment.

https://doi.org/10.1371/journal.pone.0157503.t001

Principal axis factoring

The preliminary assessment revealed a KMO of .94 and a significant Bartlett’s Test of Sphericity. Also, the determinant indicated a reasonable level of correlations, suggesting that the data were suitable for an EFA. None of the off-diagonal items had correlations of > .90, suggesting no risk of multicollinearity. However, fourteen items had a large number of correlations of < .30 and were therefore subject to further investigation. Furthermore, four items specifically related to Internet-based psychological treatments, e.g., “I wasn’t satisfied by the user interface in which the treatment was being delivered” (Item 58), had only correlations below the threshold and were deemed candidates for removal. The communality estimates of the extracted factor solution, which reflect each item’s variance explained by all of the factors in the model, averaged .52, recommending the use of the scree test as an aid to the Kaiser criterion in determining the number of factors to retain. In terms of the former, a three-factor solution seemed reasonable, but using the latter, five factors had an eigenvalue greater than one, with an additional two factors being > .90, explaining 45.50% of the variance. Although this resulted in two possible factor solutions, retaining seven factors was regarded as most appropriate and was used for further examination.
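The inter-item correlation screening described above can be sketched as follows, continuing with the hypothetical `items` data frame from the earlier example. The thresholds (.30 and .90) follow the text, whereas the rule for what counts as “a large number” of weak correlations is an illustrative choice.

```python
import numpy as np

corr = items.corr()
determinant = np.linalg.det(corr.values)  # should not be too close to zero

# Mask the diagonal so only off-diagonal correlations are screened.
off_diag = corr.mask(np.eye(len(corr), dtype=bool))

weak_counts = (off_diag.abs() < 0.30).sum()        # weak correlations per item
has_very_strong = (off_diag.abs() > 0.90).any()    # multicollinearity flag per item

# Illustrative rule: flag items whose off-diagonal correlations are mostly below .30.
flagged = weak_counts[weak_counts > 0.5 * (len(corr) - 1)].index.tolist()
print(determinant, flagged, has_very_strong.any())
```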

A closer inspection of the extracted factor solution indicated that two items could be removed because their correlations were too small or because the internal consistency would improve if they were deleted. Moreover, the seventh factor was comprised only of items conveying negative effects of Internet-based psychological treatments, which had previously been found to be unrelated to the underlying construct(s). Therefore, a six-factor solution seemed more sensible, and an EFA was performed with only six factors after the problematic items had been removed. The results indicated that four factors were above the Kaiser criterion, one was > .90, and one had an eigenvalue of .68, accounting for 57.64% of the variance. Although the last factor was well below the threshold, it was retained for theoretical reasons, that is, it reflects the experience of failure during psychological treatment. A full overview of the specific items, the six-factor solution, and the correlations between each item and its respective factor can be found in Table 2.

Table 2. Principal axis factoring for a six-factor solution using oblique rotation.

https://doi.org/10.1371/journal.pone.0157503.t002

In order to validate the six-factor solution, a parallel analysis was performed using a permutation test of 1,000 iterations with the same number of cases and variables as the original dataset. That is, similar to bootstrapping procedures, a total of 1,000 random datasets were produced, and an average eigenvalue and 95% confidence interval (CI) were calculated for each factor. Both the scree test and a comparison between the eigenvalues obtained in the six-factor solution and those from the parallel analysis indicated that the original factor solution was reasonable to retain. Hence, none of the six factors had eigenvalues below the mean eigenvalues or the 95% CI of the randomly generated datasets. For a visual inspection, please refer to Fig 1.
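As a sketch of the underlying idea (not the O’Connor SPSS syntax actually used), parallel analysis compares the eigenvalues of the observed correlation matrix with those obtained from repeatedly factoring random data of the same dimensions. Here the random data are drawn from a normal distribution, whereas the study permuted the raw data; variable names follow the earlier hypothetical examples.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_cases, n_vars = items.shape  # `items` as in the earlier sketches

# Eigenvalues of the observed correlation matrix, largest first.
observed = np.sort(np.linalg.eigvalsh(items.corr().values))[::-1]

# Eigenvalues from 1,000 random datasets with the same number of cases and variables.
random_eigs = np.empty((1000, n_vars))
for i in range(1000):
    noise = rng.standard_normal((n_cases, n_vars))
    random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]

mean_random = random_eigs.mean(axis=0)
ci_95 = np.percentile(random_eigs, 95, axis=0)

# Retain as many factors as have observed eigenvalues above the randomly generated ones.
print(int(np.sum(observed > ci_95)))
```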

Further, as a measure of validity across samples, a stability analysis was conducted by having SPSS randomly select half of the cases and retesting the factor solution. The results indicated that the same six-factor solution could be retained, albeit with slightly different eigenvalues, implying stability. A review of the stability analysis is presented in Table 3.
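The same check can be sketched outside SPSS by drawing a random half of the cases and refitting the model, then comparing the eigenvalues and loading pattern with the full-sample solution; this again assumes the hypothetical `items` data frame and the factor_analyzer package.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer

# Randomly select half of the cases and refit the six-factor solution.
half = items.sample(frac=0.5, random_state=42)

efa_half = FactorAnalyzer(n_factors=6, method="principal", rotation="oblimin")
efa_half.fit(half)

eigen_half, _ = efa_half.get_eigenvalues()
# Similar eigenvalues and a comparable loading pattern in the half sample
# would suggest that the factor solution is relatively stable.
print(np.round(eigen_half[:6], 2))
```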

Table 3. Stability analysis of the six-factor solution using a randomly selected sample.

https://doi.org/10.1371/journal.pone.0157503.t003

Factor solution

The final factor solution consisted of six factors, which included 32 items. A closer inspection of the results revealed one factor related to “symptoms”, e.g., “I felt more worried” (Item 4), with ten items reflecting different types of symptomatology, e.g., stress and anxiety. Another factor was linked to “quality”, e.g., “I did not always understand my treatment” (Item 23), with eleven items characterized by deficiencies in the psychological treatment, e.g., difficulty understanding the treatment content. A third factor was associated with “dependency”, e.g., “I think that I have developed a dependency on my treatment” (Item 20), with two items indicative of becoming overly reliant on the treatment or therapist. A fourth factor was related to “stigma”, e.g., “I became afraid that other people would find out about my treatment” (Item 14), with two items reflecting the fear of being perceived negatively by others because of undergoing treatment. A fifth factor was characterized by “hopelessness”, e.g., “I started thinking that the issue I was seeking help for could not be made any better” (Item 18), with four items distinguished by a lack of hope. Lastly, a sixth factor was linked to “failure”, e.g., “I lost faith in myself” (Item 8), with three items connected to feelings of incompetence and lowered self-esteem.

Table 4 contains the means, standard deviations, internal consistencies, and correlations among the factors. With regard to the full instrument, α was .95, while it ranged from .72 to .93 for the specific factors: lowest for stigma and highest for quality. The largest correlations were obtained between quality and hopelessness, r = .55, symptoms and failure, r = .50, and hopelessness and failure, r = -.49.
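Internal consistency (Cronbach’s α) can be computed directly from the item ratings; below is a small sketch under the same assumed data layout as the earlier examples, where the per-factor column list is a hypothetical placeholder.

```python
import pandas as pd

def cronbach_alpha(data: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = data.shape[1]
    item_variances = data.var(axis=0, ddof=1)
    total_variance = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Alpha for the full instrument and for one factor's items (placeholder column names).
print(cronbach_alpha(items))
stigma_columns = ["item_a", "item_b"]  # hypothetical names for the two stigma items
print(cronbach_alpha(items[stigma_columns]))
```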

Table 4. Means, standard deviations, internal consistencies, and correlations among the obtained factors.

https://doi.org/10.1371/journal.pone.0157503.t004

In terms of the items most frequently endorsed as occurring during treatment, participants experienced: “Unpleasant memories resurfaced” (Item 13), 38.4%, “I felt like I was under more stress” (Item 2), 37.7%, and “I experienced more anxiety” (Item 3), 37.2%. Likewise, the items with the highest self-rated negative impact were: “I felt that the quality of the treatment was poor” (Item 29), 2.81 (SD = 1.10), “I felt that the issue I was looking for help with got worse” (Item 12), 2.68 (SD = 1.44), and “Unpleasant memories resurfaced” (Item 13), 2.62 (SD = 1.19). A full review of the items can be obtained in Table 5.
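These per-item figures correspond to simple descriptive statistics over paired “experienced” and “impact” columns; the following sketch uses the same hypothetical `responses` layout as the example in the Methods section.

```python
# Endorsement rate and, among endorsers, mean impact and standard deviation for one item.
item = "item13"  # hypothetical column prefix (e.g., "Unpleasant memories resurfaced")
endorsed = responses[f"{item}_experienced"] == 1

endorsement_rate = 100 * endorsed.mean()
impact = responses.loc[endorsed, f"{item}_impact"]
print(f"{endorsement_rate:.1f}% endorsed; M = {impact.mean():.2f}, SD = {impact.std(ddof=1):.2f}")
```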

Table 5. Items, number of responses, mean level of negative impact, and standard deviations.

https://doi.org/10.1371/journal.pone.0157503.t005

Discussion

The current study evaluated a new instrument for assessing different types of negative effects of psychological treatments: the NEQ. Items were generated using a consensus among researchers, the experiences of patients having undergone treatment, and a literature review. The instrument was subsequently administered to patients having received a smartphone-delivered self-help treatment for social anxiety disorder and to individuals recruited via two media outlets who had received or were currently receiving treatment. An investigation using EFA revealed a six-factor solution with 32 items, defined as: symptoms, quality, dependency, stigma, hopelessness, and failure. Both a parallel analysis and a stability analysis suggested that the obtained factor solution could be valid and stable across samples, with an excellent internal consistency for the full instrument and acceptable to excellent α for the specific factors. The results are in line with prior theoretical assumptions and empirical findings, giving some credibility to the factors that were retained. Symptoms, that is, deterioration and distress unrelated to the condition for which the patient has sought help, have frequently been discussed in the literature on negative effects [24, 26, 30]. Research suggests that 5–10% of all patients fare worse during the treatment period, indicating that deterioration is not particularly uncommon [63]. Furthermore, evidence from a clinical trial of obsessive-compulsive disorder indicates that 29% of the patients experienced novel symptoms [64], suggesting that other types of adverse and unwanted events may occur. Anxiety, worry, and suicidality are also included in some of the items of the INEP [43], implying that various symptoms are to be expected in different treatment settings. However, these types of negative effects might not be enduring and, in the case of increased symptomatology during certain interventions, may perhaps even be expected. Nonetheless, given their occurrence, the results from the current study suggest monitoring symptoms using the NEQ in case they affect the patient’s motivation and adherence. Likewise, the perceived quality of the treatment and of the relationship with the therapist can reasonably be expected to influence well-being and the patient’s motivation to change, meaning that a lack of confidence in either one may have a negative impact. This is evidenced by the large correlation between quality and hopelessness, suggesting that it could perhaps affect the patient’s hope of attaining some improvement. Research has revealed that expectations, specific techniques, and common factors, e.g., patient and therapist variables, may influence treatment outcome [65]. In addition, several studies on therapist effects have revealed that some therapists could potentially be harmful to the patient, inducing more deterioration than their colleagues [66], and interpersonal issues in treatment have been found to be detrimental for some patients [67]. Similarly, difficulties understanding the treatment or the purpose of specific interventions could be regarded as negative by the patient, presumably affecting both expectations and self-esteem. Items reflecting deficiencies and lack of credibility of the treatment and therapist are also included in both the ETQ and INEP [39, 43], making it sensible to expect negative effects due to lack of quality. With regard to dependency, the empirical findings are less clear.
Patients becoming overly reliant on their treatment or therapist have frequently been mentioned as a possible adverse and unwanted event [13, 24, 41], but evidence has been lacking. In reviewing the results from questionnaires, focus groups, and written complaints, a recent study indicated that 17.9% of the surveyed patients felt more dependent and isolated by undergoing treatment [68]. Both the ETQ and INEP also contain items related to becoming addicted to treatment or the therapist [39, 43]. Hence, it could be argued that dependency may occur and is problematic if it prevents the patient from becoming more self-reliant. However, the idea of dependency as detrimental is controversial, given that it is contingent on both perspective and theoretical standpoint. Dependency may be regarded as negative by significant others, but not necessarily by the patient [29]. Also, dependency could be seen as beneficial with regard to establishing a therapeutic relationship, but adverse and unwanted if it hinders the patient from ending treatment and becoming an active agent [69]. Assessing dependency directly, as with the NEQ, could shed more light on this matter and warrants further research. In terms of stigma, little is currently known about its occurrence, characteristics, and potential impact. Linden and Schermuly-Haupt [30] discuss it as a possible area for assessing negative effects. Being afraid that others might find out about one’s treatment is also mentioned in the INEP [43]. Given that much has been written about stigma and its interference with mental health care [70–72], there is reason to assume that the idea of being negatively perceived by others for having a psychiatric disorder or seeking help could become a problem in treatment. However, whether stigma should be perceived as a negative effect attributable to treatment or to other circumstances, e.g., the social or cultural context, remains to be seen. As for hopelessness, the relationship is much clearer. Lack of improvement and not believing that things can get better are assumed to be particularly harmful in treatment [28], and could be associated with increased hopelessness [73]. Hopelessness is, in turn, connected to several negative outcomes, most notably depression and suicidality [74], and is thus of great importance to examine during treatment. Hopelessness is included in instruments of depression, e.g., the Beck Depression Inventory [75], “I feel the future is hopeless and that things cannot improve” (Item 2), and is vaguely touched upon in the ETQ [39], i.e., in reference to non-improvement. Assessing it more directly by using the NEQ should therefore be of great value, particularly given its relationship with more severe adverse events. Lastly, failure has been found to be linked to increased stress and decreased well-being [76], especially if accompanied by an external as opposed to internal attributional style [77], making it difficult to adequately cope with setbacks [78]. Experiencing difficulties during treatment, as well as not improving, could be presumed to be negative for the patient, resulting in lower self-esteem and sense of competence. Correlations between the factors give some support for this idea, as both symptoms and hopelessness revealed moderate to large associations with failure. The ETQ mentions failure in one of its items [39], but only in terms of the therapist making the patient feel incompetent.
Feelings of failure could be particularly damaging if they lead to dropout and prevent the patient from seeking treatment in the future, suggesting that the NEQ might be useful for monitoring this issue more closely.

As for the items most frequently endorsed as occurring during treatment, unpleasant memories, stress, and anxiety were each experienced by more than one-third of the participants in the current study. Other items associated with symptoms were also common, indicating that adverse and unwanted events linked to novel and increased symptomatology in treatment are reasonable to expect. This is further evidenced by the fact that this factor alone accounted for 36.58% of the variance in the EFA. In addition, five items related to the quality of the treatment were each endorsed by at least one-quarter of the participants, suggesting that this too might constitute a recurrent type of negative effect. Items related to the same two factors also had the highest self-rated negative impact, implying that perceiving the treatment or therapeutic relationship as deficient, or experiencing different types of symptoms, could be harmful for the patient. Thus, in order to prevent negative effects from occurring, different actions might be necessary to ensure a good treatment-patient fit, i.e., the right type of treatment for a particular patient, instilling confidence, as well as dealing with the patient’s expectations of treatment and bond with the therapist. Additionally, monitoring and managing symptoms by using the NEQ would also be important [23], especially given that many therapists are unaware of negative effects in treatment or have not received adequate training about them [79].

The current study indicates that negative effects of psychological treatments seem to occur and can be assessed using the NEQ, revealing several distinct but interrelated factors. Several limitations, however, need to be considered in reviewing the results. First, the instrument was distributed to patients at the post-treatment assessment or to individuals remembering their treatment retrospectively, with few participants presently being in treatment. Thus, there is a strong risk of recall effects exerting an influence, e.g., forgetting some adverse and unwanted events that have occurred, or only recognizing negative effects that happened early on or very late in treatment, i.e., primacy-recency effects [48]. Administering the NEQ on more than one occasion, e.g., at a mid-treatment assessment, could perhaps prevent some of this problem and is therefore recommended in future studies. However, recurrently probing for negative effects may pose a risk of inadvertently inducing adverse and unwanted events, i.e., making the patient more aware of certain incidents, which also needs to be recognized. Moreover, it may be important to explore whether the negative effects that are reported differ between those currently undergoing psychological treatment and those who have recently ended it, particularly because the reports could be affected by the treatment interventions being received. This is also true for different treatment modalities, as it could be argued that the participants in the treatment group experienced negative effects that are very specific to a smartphone-delivered self-help treatment for social anxiety disorder. The inclusion of the media group, which was more heterogeneous in nature, may have prevented some of this problem, but further research should be conducted with more diverse samples in mind. Second, providing a list of negative effects is regarded as an aid for the participants in recollecting adverse and unwanted events that might have been experienced during treatment. However, such alternatives could also potentially affect the responses made by the participant, that is, choosing among negative effects that may not otherwise have been considered [80]. Given that the items included in the NEQ were partly developed using the results from open-ended questions, the alternatives should nevertheless still reflect adverse and unwanted events that are reasonable to assume among the participants. Third, with regard to the sensitive issue surrounding negative effects of psychological treatments, an instrument probing for adverse and unwanted events is probably prone to produce social desirability or induce other types of biases. Krosnick [48] provides a lengthy discussion on this issue, suggesting that norms, cohesion, and personal characteristics influence a participant’s ability to respond truthfully and validly. It could be argued that patients who are satisfied with the outcome of their treatment choose not to respond because of gratitude toward the researcher or therapist. Similarly, patients who are displeased with their treatment or therapist may decline to answer, or, alternatively, exaggerate their responses in order to convey their discontent. This is particularly relevant in relation to the media group, where the participants were recruited on the grounds of having experienced negative effects, making it plausible that only those who were unhappy about their psychological treatments responded, creating a selection bias.
Hence, future investigations should aim to replicate the findings of the current study by distributing the NEQ to random samples, for instance, at different outpatient clinics. Likewise, despite a low dropout rate in the treatment group (9.6%), it is possible that those who did not complete the post-treatment assessment, including the NEQ, were those who experienced deterioration, nonresponse, or adverse and unwanted events to a greater degree. Thus, the current study may have missed negative effects that were perceived but simply not reported. Again, distributing the NEQ not only at the post-treatment assessment should avoid some of this shortcoming, as would follow-up interviews with those who chose not to continue with the treatment program. Fourth, administering an instrument that includes 60 items poses a risk of introducing a cognitive load on the participants, especially if used in conjunction with other measures. This could have affected the validity of the responses, as research indicates that participants often try to preserve mental resources when filling out different questionnaires, compromising quality in favor of more arbitrarily chosen answers [80]. For the individuals in the media group this may not have been an issue, but for the patients in the treatment group the instrument developed for the current study was one of seven outcome measures to be completed. Thus, for future studies, the problem of cognitive load needs to be considered. The NEQ now consists of 32 items, which should avoid some of this problem, but administering the instrument on a separate occasion is nonetheless recommended. Fifth, although the current study has provided some evidence of negative effects of psychological treatments, the association between their occurrence and the implications for outcome is still unclear. Adverse and unwanted events that arise during treatment might be a transient phenomenon related to either the natural fluctuations in psychiatric disorders or treatment interventions that are negatively experienced by the patient but helpful in the long run. Alternatively, such negative effects may have an impact that prevents the patient from benefitting from treatment, resulting in deterioration, hopelessness, and a sense of failure. To investigate this issue, the NEQ therefore needs to be accompanied by other outcome measures. By collecting data at several time points throughout treatment and relating it to more objective results, both at the post-treatment assessment and at follow-up, it should be possible to determine what type of impact adverse and unwanted events actually have on the patient. Sixth, even though several methods exist for validating a factor solution from an EFA, the findings are still to some extent the result of subjective choices [53]. Relying solely on the Kaiser criterion or the scree test provides a relatively clear criterion for obtaining the factor solution, such as using eigenvalues greater than one as a cutoff, but risks missing factors that are theoretically relevant for the underlying construct(s) [54]. Likewise, such methods often lead to over- or underfactoring and are thus not regarded as the only means of determining the number of factors to retain [57]. In the current study, a six-factor solution seemed most reasonable, particularly as it fits well with prior theoretical assumptions and empirical findings, which is one way of validating the results [62].
A parallel analysis and a stability analysis also provided some support for the findings, but such methods have limitations of their own [53]. Most notably, the randomly generated factors still have to be compared to a factor solution that is subjectively chosen, and the random subset of cases used to retest the factors is still derived from the same sample. Thus, it should be noted that replications are needed to fully ascertain whether the obtained factor solution is truly valid and stable across samples. This would, however, warrant recruiting patients and individuals from additional settings, and implementing alternative statistical methods, such as Rasch analysis, which has some benefits for investigating data where the level of measurement can be assumed to be quasi-interval [81]. Lastly, using EFA to determine theoretically interesting latent constructs does not imply that the items that were not retained are inapt, only that they did not fit the uni- or multidimensionality of the final factor solution. Hence, some of the items that were originally generated may still be clinically relevant, and the open-ended question included in the instrument may in the future reveal other items that are of interest.

Conclusions

The current study tested an instrument for measuring adverse and unwanted events of psychological treatments, the NEQ, which was evaluated using EFA. The results revealed a six-factor solution with 32 items, defined as: symptoms, quality, dependency, stigma, hopelessness, and failure, accounting for 57.64% of the variance. Unpleasant memories, stress, and anxiety were experienced by more than one-third of the participants, and the highest self-rated negative impact was linked to increased or novel symptoms, as well as lack of quality in the treatment and therapeutic relationship.

Availability

The NEQ is freely available for use in research and clinical practice. At the time of writing, the instrument has been translated by professional translators into the following languages, available for download via the website www.neqscale.com: Danish, Dutch, English, Finnish, French, German, Italian, Japanese, Norwegian, Spanish, and Swedish.

Acknowledgments

The authors of the current study would like to thank the Swedish Research Council for Health, Working Life, and Welfare (FORTE 2013–1107) for the generous grant that allowed the development and testing of the instrument for measuring adverse and unwanted events of psychological treatments. Peter Alhashwa and Angelica Norström are also thanked for their help with collecting the data.

Author Contributions

Conceived and designed the experiments: AR PC. Performed the experiments: AR PC. Analyzed the data: AR AK PC. Wrote the paper: AR AK JB GA PC.

References

  1. 1. Hofmann SG, Asnaani A, Vonk IJJ, Sawyer AT, Fang A. The efficacy of Cognitive Behavioral Therapy: A review of meta-analyses. Cognitive Therapy and Research. 2012;36(5):427–40. pmid:23459093
  2. 2. Cuijpers P, Berking M, Andersson G, Quigley L, Kleiboer A, Dobson KS. A Meta-Analysis of Cognitive-Behavioural Therapy for Adult Depression, Alone and in Comparison With Other Treatments. Can J Psychiat. 2013;58(7):376–85. pmid:WOS:000321485400002.
  3. 3. Tolin DF. Is cognitive–behavioral therapy more effective than other therapies?: A meta-analytic review. Clin Psychol Rev. 2010;30(6):710–20. pmid:20547435
  4. 4. McHugh RK, Barlow DH. The dissemination and implementation of evidence-based psychological treatments: A review of current efforts. Am Psychol. 2010;65(2):73–84. Epub 2010/02/10. pmid:20141263.
  5. 5. Shafran R, Clark DM, Fairburn CG, Arntz A, Barlow DH, Ehlers A, et al. Mind the gap: Improving the dissemination of CBT. Behav Res Ther. 2009;47(11):902–9. pmid:19664756.
  6. 6. Clark DM. Implementing NICE guidelines for the psychological treatment of depression and anxiety disorders: The IAPT experience. Int Rev Psychiatr. 2011;23(4):318–27. pmid:WOS:000296233100003.
  7. 7. Andersson G, Cuijpers P, Carlbring P, Riper H, Hedman E. Guided Internet-based vs. face-to-face cognitive behavior therapy for psychiatric and somatic disorders: A systematic review and meta-analysis. World Psychiatry. 2014;13(3):288–95. pmid:25273302
  8. 8. Cuijpers P, Donker T, van Straten A, Li J, Andersson G. Is guided self-help as effective as face-to-face psychotherapy for depression and anxiety disorders? A systematic review and meta-analysis of comparative outcome studies. Psychol Med. 2010;40(12):1943–57. pmid:WOS:000283800600002.
  9. 9. Ly KH, Truschel A, Jarl L, Magnusson S, Windahl T, Johansson R, et al. Behavioural activation versus mindfulness-based guided self-help treatment administered through a smartphone application: a randomised controlled trial. BMJ Open. 2014;4(1):e003440–e. pmid:24413342
  10. 10. Parry GD, Crawford MJ, Duggan C. Iatrogenic harm from psychological therapies—Time to move on. The British Journal of Psychiatry. 2016;208:210–2. pmid:26932481
  11. 11. Lilienfeld SO. Psychological treatments that cause harm. Perspect Psychol Sci. 2007;2(1):53–70. pmid:WOS:000207450300005.
  12. 12. Castonguay LG, Boswell JF, Constantino MJ, Goldfried MR, Hill CE. Training implications of harmful effects of psychological treatments. Am Psychol. 2010;65(1):34–49. pmid:WOS:000273680300004.
  13. 13. Berk M, Parker G. The elephant on the couch: side-effects of psychotherapy. Aust Nz J Psychiat. 2009;43(9):787–94. pmid:WOS:000268832000001.
  14. 14. Vaughan B, Goldstein MH, Alikakos M, Cohen LJ, Serby MJ. Frequency of reporting of adverse events in randomized controlled trials of psychotherapy vs. psychopharmacotherapy. Comprehensive Psychiatry. 2014;55(4):849–55. Epub 2014/03/19. pmid:24630200; PubMed Central PMCID: PMC4346151.
  15. 15. Jonsson U, Alaie I, Parling T, Arnberg FK. Reporting of harms in randomized controlled trials of psychological interventions for mental and behavioral disorders: A review of current practice. Contemp Clin Trials. 2014;38(1):1–8. Epub 2014/03/13. pmid:24607768.
  16. 16. Barlow DH. Negative effects from psychological treatments: A perspective. Am Psychol. 2010;65(1):13–20. pmid:WOS:000273680300002.
  17. 17. Powers E, Witmer HL. An experiment in the prevention of delinquency: the Cambridge Somerville youth study. New York: Columbia University Press; 1951. 649 s. p.
  18. 18. Bergin AE. The Empirical Emphasis in Psychotherapy: A Symposium—The Effects of Psychotherapy: Negative Results Revisited. Journal of Counseling Psychology. 1963;10(3):244–50. pmid:WOS:A1963CBB0300006.
  19. 19. May PRA. For better or worse? Psychotherapy and variance change: A critical review of the literature. The Journal of Nervous and Mental Disease. 1971;152(3):184–92. pmid:4926620
  20. 20. Lambert MJ, Whipple JL, Hawkins EJ, Vermeersch DA, Nielsen SL, Smart DW. Is It Time for Clinicians to Routinely Track Patient Outcome? A Meta-Analysis. Clinical Psychology: Science and Practice. 2003;10(3):288–301.
  21. 21. Whipple JL, Lambert MJ, Vermeersch DA, Smart DW, Nielsen SL, Hawkins EJ. Improving the effects of psychotherapy: The use of early identification of treatment failure and problem-solving strategies in routine practice. Journal of Counseling Psychology. 2003;50(1):59–68. pmid:WOS:000180351200007.
  22. 22. Bergin AE. The deterioration effect: A reply to Braucht. J Abnorm Psychol. 1970;75(3):300–2. pmid:5423021
  23. 23. Boswell JF, Kraus DR, Miller SD, Lambert MJ. Implementing routine outcome monitoring in clinical practice: Benefits, challenges, and solutions (vol 25, pg 6, 2015). Psychother Res. 2015;25(1):Iii–Iii. pmid:WOS:000345502700014.
  24. 24. Hadley SW, Strupp HH. Contemporary Views of Negative Effects in Psychotherapy—Integrated Account. Arch Gen Psychiat. 1976;33(11):1291–302. pmid:WOS:A1976CL18600001.
  25. 25. Mays DT, Franks CM. Negative Outcome in Psychotherapy and What to do About It. New York: Springer Publishing Company, Inc.; 1985.
  26. 26. Crown S. Contraindicators and dangers of psychotherapy. The British Journal of Psychiatry. 1983;143:436–41. pmid:6640210
27. Mays DT, Franks CM. Getting worse: Psychotherapy or no treatment—The jury should still be out. Prof Psychol. 1980;11(1):78–92.
28. Dimidjian S, Hollon SD. How would we know if psychotherapy were harmful? Am Psychol. 2010;65(1):21–33.
29. Strupp HH, Hadley SW. A tripartite model of mental health and therapeutic outcomes. With special reference to negative effects in psychotherapy. Am Psychol. 1977;32(3):187–96. Epub 1977/03/01. pmid:848783.
30. Linden M, Schermuly-Haupt M-L. Definition, assessment and rate of psychotherapy side effects. World Psychiatry. 2014;13(3):306–9. pmid:25273304
31. Peterson AL, Roache JD, Raj J, Young-McCaughan S. The need for expanded monitoring of adverse events in behavioral health clinical trials. Contemp Clin Trials. 2013;34(1):152–4. pmid:23117077
32. Rozental A, Andersson G, Boettcher J, Ebert DD, Cuijpers P, Knaevelsrud C, et al. Consensus statement on defining and measuring negative effects of Internet interventions. Internet Interventions. 2014;1(1):12–9.
33. Mohr DC, Beutler LE, Engle D, Shoham-Salomon V, Bergan J, Kaszniak AW, et al. Identification of patients at risk for nonresponse and negative outcome in psychotherapy. J Consult Clin Psych. 1990;58(5):622–8.
34. Suh CS, Strupp HH, O'Malley SS. The Vanderbilt process measures: The Psychotherapy Process Scale (VPPS) and the Negative Indicators Scale (VNIS). In: Greenberg LS, editor. The psychotherapeutic process: A research handbook. New York, NY, US: Guilford Press; 1986. p. 285–323.
35. Binder JL, Strupp HH. "Negative process": A recurrently discovered and underestimated facet of therapeutic process and outcome in the individual psychotherapy of adults. Clin Psychol-Sci Pr. 1997;4(2):121–39.
36. Sachs JS. Negative factors in brief psychotherapy: An empirical assessment. J Consult Clin Psych. 1983;51(4):557–64.
37. Crits-Christoph P, Cooper A, Luborsky L. The accuracy of therapists' interpretations and the outcome of dynamic psychotherapy. J Consult Clin Psych. 1988;56(4):490–5.
38. Eaton TT, Abeles N, Gutfreund JM. Negative indicators, therapeutic alliance, and therapy outcome. Psychother Res. 1993;3(2):115–23.
39. Parker G, Fletcher K, Berk M, Paterson A. Development of a measure quantifying adverse psychotherapeutic ingredients: The Experiences of Therapy Questionnaire (ETQ). Psychiat Res. 2013;206(2–3):293–301.
40. Parker G, Paterson A, Fletcher K, McClure G, Berk M. Construct validity of the Experiences of Therapy Questionnaire (ETQ). BMC Psychiatry. 2014;14:369.
41. Linden M. How to define, find and classify side effects in psychotherapy: From unwanted events to adverse treatment reactions. Clin Psychol Psychot. 2013;20(4):286–96.
42. Boettcher J, Rozental A, Andersson G, Carlbring P. Side effects in Internet-based interventions for Social Anxiety Disorder. Internet Interventions. 2014;1(1):3–11.
43. Ladwig I, Rief W, Nestoriuc Y. What are the risks and side effects of psychotherapy? Development of an Inventory for the Assessment of Negative Effects of Psychotherapy (INEP). Verhaltenstherapie. 2014;24(4):252–63.
44. Rozental A, Boettcher J, Andersson G, Schmidt B, Carlbring P. Negative effects of Internet interventions: A qualitative content analysis of patients' experiences with treatments delivered online. Cognitive Behaviour Therapy. 2015;44(3):223–36. pmid:25705924
45. Cronbach LJ, Meehl PE. Construct validity in psychological tests. Psychol Bull. 1955;52:281–302. pmid:13245896
46. Clark LA, Watson D. Constructing validity: Basic issues in objective scale development. Psychol Assessment. 1995;7(3):309–19.
47. Drennan J. Cognitive interviewing: verbal data in the design and pretesting of questionnaires. J Adv Nurs. 2003;42(1):57–63.
48. Krosnick JA. Survey research. Annu Rev Psychol. 1999;50(1):537–67.
49. Hinkin T. A review of scale development practices in the study of organizations. Journal of Management. 1995;21(5):967–88.
50. Miloff A, Marklund A, Carlbring P. The Challenger app for social anxiety disorder: New advances in mobile psychological treatment. Internet Interventions. 2015;2(4):382–91.
51. Liebowitz MR. Social phobia. Modern Problems of Pharmacopsychiatry. 1987;22:141–73. pmid:2885745
52. Sheehan DV, Lecrubier Y, Sheehan KH, Amorim P, Janavs J, Weiller E, et al. The Mini-International Neuropsychiatric Interview (MINI): The development and validation of a structured diagnostic psychiatric interview for DSM-IV and ICD-10. J Clin Psychiat. 1998;59:22–33.
53. Fabrigar LR, Wegener DT, MacCallum RC, Strahan EJ. Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods. 1999;4(3):272–99.
54. Floyd FJ, Widaman KF. Factor analysis in the development and refinement of clinical assessment instruments. Psychol Assessment. 1995;7(3):286–99.
55. Loehlin JC. Component analysis versus common factor analysis: A case of disputed authorship. Multivar Behav Res. 1990;25(1):29–31.
56. Browne MW. An overview of analytic rotation in exploratory factor analysis. Multivar Behav Res. 2001;36(1):111–50.
57. Zwick WR, Velicer WF. Comparison of five rules for determining the number of components to retain. Psychol Bull. 1986;99(3):432–42.
58. Cattell RB. The scree test for the number of factors. Multivar Behav Res. 1966;1(2):245–76. pmid:26828106.
59. Kaiser HF. The application of electronic computers to factor analysis. Educational and Psychological Measurement. 1960;20(1):141–51.
60. Horn JL. A rationale and test for the number of factors in factor analysis. Psychometrika. 1965;30(2):179–85.
61. O'Connor BP. SPSS and SAS programs for determining the number of components using parallel analysis and Velicer's MAP test. Behav Res Meth Ins C. 2000;32(3):396–402.
62. Comrey AL. Common methodological problems in factor analytic studies. J Consult Clin Psych. 1978;46:648–59.
63. Lambert M. Outcome in psychotherapy: The past and important advances. Psychotherapy. 2013;50(1):42–51. pmid:23505980
64. Moritz S, Fieker M, Hottenrott B, Seeralan T, Cludius B, Kolbeck K, et al. No pain, no gain? Adverse effects of psychotherapy in obsessive-compulsive disorder and its relationship to treatment gains. J Obsess-Compuls Rel. 2015;5:61–6.
65. Lambert MJ, Barley DE. Research summary on the therapeutic relationship and psychotherapy outcome. Psychotherapy: Theory, Research, Practice, Training. 2001;38(4):357–61.
66. Saxon D, Barkham M. Patterns of therapist variability: Therapist effects and the contribution of patient severity and risk. J Consult Clin Psych. 2012;80(4):535–46.
67. Mohr DC, Burns MN, Schueller SM, Clarke G, Klinkman M. Behavioral Intervention Technologies: Evidence review and recommendations for future research in mental health. Gen Hosp Psychiat. 2013;35(4):332–8.
68. Leitner A, Märtens M, Koschier A, Gerlich K, Liegl G, Hinterwallner H, et al. Patients' perceptions of risky developments during psychotherapy. Journal of Contemporary Psychotherapy. 2013;43(2):95–105.
69. Nutt DJ, Sharpe M. Uncritical positive regard? Issues in the efficacy and safety of psychotherapy. J Psychopharmacol. 2008;22(1):3–6.
70. Wahl OF. Mental health consumers' experience of stigma. Schizophrenia Bull. 1999;25(3):467–78.
71. Link BG, Struening EL, Neese-Todd S, Asmussen S, Phelan JC. Stigma as a barrier to recovery: The consequences of stigma for the self-esteem of people with mental illnesses. Psychiatric Services. 2001;52(12):1621–6. pmid:11726753
72. Corrigan P. How stigma interferes with mental health care. Am Psychol. 2004;59(7):614–25.
73. Stewart JW, Mercier MA, Quitkin FM, McGrath PJ, Nunes E, Young J, et al. Demoralization predicts nonresponse to cognitive therapy in depressed outpatients. Journal of Cognitive Psychotherapy. 1993;7(2):105–16.
74. Beck AT, Brown G, Berchick RJ, Stewart BL, Steer RA. Relationship between hopelessness and ultimate suicide: a replication with psychiatric outpatients. Am J Psychiat. 1990;147(2):190–5. pmid:2278535
75. Beck AT, Erbaugh J, Ward CH, Mock J, Mendelson M. An inventory for measuring depression. Arch Gen Psychiat. 1961;4(6):561–71.
76. Ryan RM, Deci EL. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am Psychol. 2000;55(1):68–78. pmid:11392867
77. Klein DC, Fencil-Morse E, Seligman ME. Learned helplessness, depression, and the attribution of failure. Journal of Personality and Social Psychology. 1976;33(5):508–16. pmid:1271223
78. Abela JRZ, Seligman MEP. The hopelessness theory of depression: A test of the diathesis-stress component in the interpersonal and achievement domains. Cognitive Therapy and Research. 2000;24(4):361–78.
79. Bystedt S, Rozental A, Andersson G, Boettcher J, Carlbring P. Clinicians' perspectives on negative effects of psychological treatments. Cogn Behav Ther. 2014;43(4):319–31. pmid:25204370; PubMed Central PMCID: PMC4260663.
80. Schwarz N. Self-reports: How the questions shape the answers. Am Psychol. 1999;54(2):93–105.
81. Wright B. Solving measurement problems with the Rasch model. Journal of Educational Measurement. 1977;14(2):97–116.