Unqualified statements and dichotomous judgments about validity or invalidity in complex arenas are unlikely to be scientifically or clinically useful.
What we believe has not been adequately appreciated, however, is the extent to which the use of RCT methodologies to validate ESTs requires a set of additional assumptions that are themselves neither well validated nor broadly applicable to most disorders and
treatments: that psychopathology is highly malleable, that most patients can be treated for a single problem or disorder, that psychiatric disorders can be treated independently of personality factors unlikely to change in brief treatments, and that experimental methods provide a gold standard for identifying useful psychotherapeutic packages.
Any exercise of clinical judgment represents a threat to internal validity in controlled trials because it reduces standardization of the experimental manipulation and hence renders causal inferences ambiguous. A good clinician in an efficacy study … is one who adheres
closely to the manual, does not get sidetracked by material the patient introduces that diverges from the agenda set forth in the manual, and does not succumb to the seductive siren of clinical experience.
Rather than focusing on treatment packages constructed in the laboratory and designed to be transported to clinical practice, and assuming that any single design (RCTs) can answer all clinically meaningful questions, as a field we might do well to realign our goals, from trying to provide clinicians with step-by-step instructions for treating decontextualized symptoms or syndromes to offering them empirically tested interventions and empirically
supported theories of change that they can integrate into empirically informed treatments.
Unfortunately … the “empirically supported treatments” (EST) movement, which has largely dominated the discussion of evidence-based practice in recent years, has been characterized by a set of assumptions that impede sound understanding of the sources of therapeutic change and generate biased conclusions about what therapeutic approaches are actually helpful to patients.
Restricting “researchable” therapeutic work to patients with a specific DSM diagnosis fails to consider that for a large portion of the people who enter therapists’ offices, defining the problem is to a very large extent the problem.
To make manualization a requirement for regarding a treatment approach as evidence-based is not a reflection of a commitment to scientific rigor, but a political ploy that effectively excludes from lists of evidence-based treatments a variety of treatments for which there is a substantial body of evidence, but which do not happen to have approached the task of empirical validation via the particular strategies that the “EST” movement advocates.
If one cannot employ one’s favorite methods to study a phenomenon or theoretical claim, the fault does not lie in the phenomenon, it lies in the methods.
When “EST” advocates treat [naturalistic process-outcome studies] as irrelevant to the determination of what therapeutic approaches are “empirically supported,” they engage in a kind of deceptive casuistry similar to that which characterized for years the tobacco companies’ denial of the adverse health effects of cigarettes.
“EST” advocates have been much better at public relations than at science.
While those championing Evidence Based Practice in Psychology (EBPP) may have the best of intentions, it is my opinion that the net effect on the practice of psychotherapy is and will be far more limiting and damaging than helpful.
Despite what I see as lip service to the broader context and complex considerations inherent in work in psychotherapy, there has been a decided thrust to narrowly study the efficacy of manualized interventions in randomized controlled trials.
...Traditional measures of therapy outcome are neither penetrating enough nor specific enough to individual cases to yield a sufficiently nuanced picture of what has changed or why.
The superiority of CBT turned out to be an artifact of including non-bona fide therapies in the comparisons (e.g., supportive counseling). In other words, CBT was not significantly more beneficial than noncognitive and nonbehavioral treatments that were intended to be therapeutic rather than merely serving as a convenient control group for researchers' favored therapy.
The intent here is not to demonize EBP—any approach can be just the ticket for a particular client—but rather to expose its limitations, because it is often wielded as a mandate for competent and ethical practice. Such edicts are gross misrepresentations of the data and blatant misuses of the evidence.
Most research regarding evidence-based practice is conducted by the very founders of the approach under study. In such circumstances, up to 40% of the results can be attributed to what is called "allegiance effects," or the researchers’ bias toward their own models.
Thousands of studies have found no difference among approaches. While a few studies have reported a favorable finding for one approach or another, the number of studies finding differences is no more than one would expect by chance. For example, Cognitive Behavioral Therapy (CBT) proponents often point to 15 comparisons showing an advantage for CBT—however, there are 2985 comparisons that show no difference (Wampold, 2001).
There is a certain seductive appeal to the idea of making psychological interventions dummy-proof, where the users—the client and the therapist—are basically irrelevant. This product view of therapy is perhaps the most empirically vacuous aspect of EBP because the treatment itself accounts for so little of outcome variance, while the client and the therapist—and their relationship—account for so much.
Alliance scores accounted for up to 21% of the variance, while treatment differences accounted for at most 2% of outcome variance ... What clients bring to the process—their attributes, struggles, motivations, and social supports—accounts for 40% of the variance (Lambert, 1992); clients are the engine of change (Bohart & Tallman, 1999).
Given the data, we believe that continuing to invest precious time and resources in the development and dissemination of EBP is misguided.
RCTs (randomized controlled trials) are powerful research tools, but the structure of the RCT requires high levels of standardization of treatment and minimizes flexibility – an element Strupp already noted as vital in 1963. Thus, the upshot of the effort to protect the public by identifying bona fide effective modes of therapy narrowed the broad concept of evidence to the kinds of treatments and data that were amenable to evaluation by RCT.
There are also some interesting unintended consequences of how the outcome dilemma has evolved. The EST focus on efficacy, symptom specificity, and the RCT method largely bypassed the question of what makes a treatment distinct as an entity … my more pessimistic side fears we have not yet resolved the underlying issues Strupp and Eysenck were arguing about: the relation between treatment and technique, and symptom and problem.
But perhaps we will have to wait for the emerging field of neuroimaging to mature and fill in the links between therapy processes and the underlying change mechanisms they trigger.
The standardization implied by citing manuals to define levels of an experimental independent variable is at best illusory and at worst deceptive.
As Strupp (1963) pointed out and I have elaborated here, both independent and dependent variables are highly problematic in psychotherapy research, contaminated by context, responsiveness, and global judgments.
Therapy is a fluid, dynamic process, one involving a complex and nuanced series of interchanges. Forcing clinicians to adopt “truncated and prescriptive” treatments may well strip therapy of the very interpersonal processes critical to its success.
...differential allegiance could occur if one therapist team feels greater enthusiasm for its treatment model than another.
...it is remarkable and worrisome that researchers strongly allied to one treatment consciously or unconsciously overlook the potential bias of therapist allegiance.
Therapist allegiance remains a crucial, unstudied factor in psychotherapy research. Strength of belief in a therapy may affect the therapist's comfort and authenticity in conducting treatment, the therapy's plausibility for the patient, and, thus, the strength of the therapeutic alliance.
We recommend that all clinical trials (crossed or nested) measure and report therapist allegiance...