Dr. John Murphy, Professor, University of Central Arkansas


Evidence-based practice (EBP) is considered the gold standard in the helping professions. How does EBP figure into brief outcome-informed intervention?

I admire those who search for the truth. I avoid those who find it. (French motto)


This information is adapted from Chapter 8 of the book:

    Murphy, J. J., & Duncan, B. L. (2007). Brief intervention for school problems: Outcome-informed strategies (2nd ed.). New York: Guilford Press. (www.guilford.com)

Evidence-based practice (EBP) is another idea from the medical model that has been shoe-horned into school-based practice. Our intent here is not to demonize EBP—any approach can be just the ticket for a particular client—but rather to expose its limitations, because it is often wielded as a mandate for competent and ethical practice. Such edicts are gross misrepresentations of the data and blatant misuses of the evidence.

What exactly is an evidence-based practice? It is simply an approach that has outperformed placebo or intervention as usual in as few as two independent clinical trials. Such demonstrations of efficacy are not saying much; intervention of nearly any kind has demonstrated superiority over placebo for nearly 50 years! This research, for all its pomp and circumstance, tells us nothing we do not already know: Intervention works.

To be sure, there is a seductive appeal to the idea of making interventions dummy-proof, where the users—the client and the practitioner—are basically irrelevant and all one needs to do is diagnose the child and apply the EBP. The assumption is that specific technical operations are largely responsible for client improvement—that the active (unique) ingredients of a given approach produce different effects with different disorders. In effect, this assumption likens intervention to a pill, with discernible unique ingredients that can be shown to have more potency than the active ingredients of other drugs.

There are (at least) two empirical arguments that cast doubt on this assumption. First is the dodo bird verdict, which, as we have seen, colorfully summarizes the robust finding that specific intervention approaches do not show specific effects or superiority over other models. While a few studies have reported a favorable finding for one approach or another, the number of studies finding differences is no more than one would expect by chance.

The second argument shining a light on the empirical pitfalls of evidence-based practice emerges from estimates of the impact of specific technique on outcome, which is important but relatively small compared to client and relationship factors. Wampold's (2001) meta-analysis, outlined in his book The Great Psychotherapy Debate, assigns only 1% of the variance of change to specific technique. Moreover, other factors have far more "evidence" supporting them. Wampold (2001) apportions 7% of the overall variance of outcome to the alliance, and from 6 to 9% to practitioner effects. As demonstrated throughout this book, the largest source of variance (87%), virtually ignored by EBP, is accounted for by client factors. The "pill" view of intervention is perhaps the most empirically vacuous aspect of EBP because the approach or technique itself accounts for so little of outcome variance, while the client and the practitioner—and their relationship—account for so much.
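To put these cited proportions side by side, here is a minimal sketch in Python. The percentages are those reported above from Wampold (2001); representing the 6 to 9% practitioner range by its midpoint is an illustrative assumption, and because the estimates are reported separately, they need not sum to exactly 100%.

    # Proportions of outcome variance as reported by Wampold (2001).
    # The 6-9% practitioner range is shown at its 7.5% midpoint
    # (an illustrative assumption, not a figure from the source).
    # Estimates are reported separately and need not sum to 100%.
    variance = {
        "specific technique": 1.0,
        "alliance": 7.0,
        "practitioner effects": 7.5,  # midpoint of the 6-9% range
        "client factors": 87.0,
    }

    for factor, pct in sorted(variance.items(), key=lambda kv: -kv[1]):
        bar = "#" * round(pct / 2)  # one '#' per 2% of variance
        print(f"{factor:>21}: {pct:5.1f}%  {bar}")

    # The ratio the text highlights: client factors dwarf technique.
    ratio = variance["client factors"] / variance["specific technique"]
    print(f"\nclient factors vs. specific technique: {ratio:.0f} to 1")

Printed this way, the imbalance the text describes is hard to miss: client factors outweigh specific technique by nearly two orders of magnitude.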

Intervention and counseling are not an uninhabited landscape of technical procedures. They cannot be described without the client and practitioner, co-adventurers in a journey across largely uncharted terrain. EBPs simply do not map enough of the intervention territory to make them worthwhile guides. Given the data, we believe that any attempt to mandate EBP is misguided and reaches far beyond the findings, obscuring the importance of the client and the alliance to successful outcome.