From evidence-based practice to practice-based evidence

In Brief

There is no denying that Evidence-Based Practices (EBPs) have come to dominate the clinical landscape over the past 30 years. Originating in early efforts by behavioral psychologists to apply scientific principles to predicting and influencing human change, the field has blossomed into hundreds of protocols and structured approaches proven effective across a range of mental and behavioral health conditions. From post-traumatic stress disorder in children to insomnia in the elderly, EBPs have sought to identify repeatable processes and therapeutic steps that reliably produce meaningful improvements - and in most cases with great success. This sentiment is famously exemplified by the leading EBP organization, the Association for Behavioral and Cognitive Therapies, in its now-viral marketing slogan “CBT Works!”

The Research <> Practice Gap


However, the rise of EBPs over the decades has, rather unfortunately, not been matched by improvements in population-level mental health. Despite the continued development and refinement of EBPs, the global and national prevalence of mental health conditions is on the rise, and treatment outcomes are largely stagnant - in some cases declining over time, particularly among racial and ethnic minority groups, individuals with dual diagnoses or comorbidity, and youth under the age of 25. In light of these trends, there is a clear disconnect between the effects observed in the clinical laboratory and the real-world impact these practices and procedures are having on the lived experiences of treatment-seeking individuals in the general population.


This disconnect is referred to as the research <> practice gap.


The research <> practice gap exists because of the way EBPs are developed and validated. Clinical trials are typically characterized by a high degree of structure and fidelity to the model under investigation, homogeneous populations (often white and middle class), controlled patient histories, and strict inclusion criteria that screen out individuals who do not fit the study protocol. While this method of conducting research excels at producing favorable outcomes, it does little to create a context that mirrors the clinical realities we deal with in day-to-day practice.


In the real world, the treatment approach we take with our patients simply cannot match the structure and control of a clinical research trial. As a result, we are left to figure out how to flexibly adapt a particular EBP to fit the needs of the specific individual sitting in front of us. By adapting our approach, we naturally go “off script” with regard to the conditions under which the EBP was originally validated. When scaled across caseloads and time, going “off script” inherently leads to the provision of services that are no longer empirically supported - which defeats the intention behind providing EBPs in the first place!


The purpose of elucidating this systemic and cyclical issue is not to place blame on the clinicians and providers who cannot deliver mental health services with high fidelity to EBP models. On the contrary, the purpose is to bring to light the limitations of EBPs as they relate to the research <> practice gap, and the need for a better system to objectively evaluate how effective our strategies and interventions are, right here and now, on a person-by-person basis, rather than generalizing averaged results to individual contexts.


In this way, simply practicing an EBP can be viewed as a necessary but insufficient step toward being an evidence-based mental health provider. The only way to truly own this identity is through the systematic measurement of therapeutic outcomes within your own practice: hereafter referred to as practice-based evidence.

Bridging the gap with practice-based evidence


Practice-based evidence can be defined as the systematic collection of patient-reported measures associated with a particular treatment goal or desired outcome. These measures can be diagnosis-specific, such as the PHQ-9 for depression and the GAD-7 for anxiety; treatment-specific, such as the AAQ-II for Acceptance and Commitment Therapy; or process-specific, such as the FMI for mindfulness and the ARM-5 for the therapeutic relationship. In all cases, the purpose of collecting practice-based evidence is to objectively understand the relationship between your actions as a mental health provider and the unique response of each individual patient. By doing so, you are naturally collecting “evidence” that the EBP you have chosen to practice is having a real impact on the patients you serve. Moreover, you are able to identify instances when the research <> practice gap is keeping the EBP from being impactful for the patient at hand - and whether or not your “off script” approach is landing favorably and moving the patient toward their goals.
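To make the idea concrete, here is a minimal sketch of what this kind of measure tracking might look like in code. The PatientRecord structure, its field names, and the example scores are all illustrative assumptions - not part of any standard system or the measures’ official scoring tools - but they show the core of practice-based evidence: one score per measure per session, kept in order, per patient.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MeasureEntry:
    """One administration of a patient-reported outcome measure."""
    measure: str        # e.g., "PHQ-9", "GAD-7", "ARM-5"
    session_date: date
    score: int

@dataclass
class PatientRecord:
    """Longitudinal record of outcome measures for a single patient."""
    patient_id: str
    entries: list = field(default_factory=list)

    def add_score(self, measure: str, session_date: date, score: int) -> None:
        """Log one measure administration for this patient."""
        self.entries.append(MeasureEntry(measure, session_date, score))

    def scores_for(self, measure: str) -> list:
        """Return scores for one measure, in the order they were logged."""
        return [e.score for e in self.entries if e.measure == measure]

# Usage: log a PHQ-9 score at each session, then review the trajectory.
record = PatientRecord("patient-001")
record.add_score("PHQ-9", date(2024, 1, 8), 18)
record.add_score("PHQ-9", date(2024, 1, 15), 16)
record.add_score("PHQ-9", date(2024, 1, 22), 17)
print(record.scores_for("PHQ-9"))  # [18, 16, 17]
```

A spreadsheet or measurement-based care platform accomplishes the same thing; the essential design choice is simply recording every administration so the trajectory, not a single snapshot, drives your decisions.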


In this light, it comes as no surprise that decades of research show that patient outcomes improve when mental health providers integrate practice-based evidence into their clinical work. This is especially the case when working with complex or high-severity presentations, in which progress (or the lack thereof) is often difficult for you to observe and for your patient to self-report. Measurement allows you to pick up on a lack of progress early in the treatment process, presenting the opportunity to intervene, change, or add treatment services swiftly and proactively.
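As a sketch of what that early detection could look like, the function below flags a patient whose scores have not meaningfully improved after a handful of sessions. It assumes a symptom measure where lower is better (such as the PHQ-9), and the 5-point default reflects a commonly cited PHQ-9 benchmark for clinically meaningful change - treat both the threshold and the session count as illustrative assumptions to be set per instrument and per practice, not as a validated decision rule.

```python
def flag_lack_of_progress(scores: list,
                          min_sessions: int = 4,
                          reliable_change: int = 5) -> bool:
    """Return True if, after at least `min_sessions` administrations,
    the score has not dropped by `reliable_change` points from baseline.

    Assumes lower scores are better (e.g., PHQ-9). Thresholds differ
    across instruments and should be chosen per measure.
    """
    if len(scores) < min_sessions:
        return False  # too early in treatment to judge
    return (scores[0] - scores[-1]) < reliable_change

# Example: flat PHQ-9 scores across four sessions trigger the flag;
# a steadily improving trajectory does not.
print(flag_lack_of_progress([18, 17, 18, 16]))  # True: consider adjusting course
print(flag_lack_of_progress([18, 14, 11, 9]))   # False: patient is on track
```

The flag is a prompt for clinical review - a cue to revisit the case, the formulation, or the “off script” adaptations - not an automated treatment decision.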


More broadly, collecting this type of objective information over time arms you with a higher degree of precision regarding patient progress than relying solely on clinical judgment and subjective intuition. While all of us clinicians - myself included - naturally tend toward thinking that clinical judgment alone is sufficient for understanding our patients’ experiences, we need to hold ourselves to a higher standard. Taking a different perspective: would you accept that a new treatment approach was worthy of becoming an EBP if it were validated solely on the clinical judgment of its originators? If the answer is no, then why accept that it is okay to practice this way independently? We owe it to our patients to bridge the research <> practice gap with practice-based evidence and deliver effective services in an individualized and context-sensitive manner.


Only then can we truly say that we are “evidence-based.”
