How our research program helps our clients

2020 Update: Our most recent outcomes analysis (clients treated from 2016 to 2019) shows our clients improving markedly more than clients in typical therapy. The analysis uses tough analytic methods and virtually all of our client data, excluding only single-session clients, clients who opt out of having their scores analyzed, and clients under 18 years old.

Why we measure

Research shows it’s not the brand of therapy that makes the difference; the track record of the individual therapist is what best predicts clinical improvement. Why do we measure? To be accountable, and to improve what we do and how we do it. Consistently excellent therapists are rare (and of course, no therapist is helpful with every client). The Colorado Center’s mission is to find these therapists, measure the quality of their treatment, and support them in doing outstanding work.

There are 5,000 therapists in the Denver/Boulder metro area and hundreds of psychotherapy methods. What makes us different is our relentless focus on results that matter to you, whether you are coming to resolve a specific problem or seeking a fundamental shift in how you experience the rest of your life.

[Line graph: improvement in well-being, 2018-2020]
[Bar graph: average effectiveness of treatments from meta-analytic research vs. effectiveness of therapists at The Colorado Center]

How we measure

Whatever your reasons for seeking therapy, it is good to know whether your well-being is improving along the way. A few paragraphs down, we provide links to our recent peer-reviewed research. But for now, let’s keep it simple: we use a practical and accurate method for measuring the results of therapy. And by briefly checking how things are going at each session, we have more of a chance to change direction if what we are doing isn’t helpful.

How have you felt in the past week? How did this session go? We want your honest answers to these questions, and we provide you with a graph of change over time so you can see whether we are heading in the right direction. For our ongoing research, we may occasionally ask a few additional questions; even then, it rarely takes longer than four or five minutes.

Our results: clients change a lot more than in typical therapy

Between 2011 and 2014, using standard instruments and methods for measuring change, our therapists were in the top 10-15% of therapists who measure their outcomes, and our clients showed outstanding results overall. Naturally, as scientist-practitioners, we mounted a full-on assault on our own data to see what would remain after we challenged our findings. After another four years of monitoring with these more rigorous methods, our results remained very strong into 2020.

We’re cleaning up the wild world of outcome measurement

After developing and applying much tougher measurement standards than are typically used, our clients from 2014-2016 still reported solid improvement from therapy, and our clients from 2016-2018 showed even greater improvement in their well-being. Sure, a harder grading system might not make us look as good. But if we don’t apply tough standards, how real are our results? For example, the publishers of one therapy outcome instrument advertise that by using their instrument, therapists can document significant change in more than 95% of their clients. That’s crazy. Such easy grading serves a therapist’s self-image rather than his or her clients, and it prevents us from seeing the difference between effective and ineffective therapy. We are keenly interested in distinguishing between the two, and with the generous help of our clients’ feedback, we have published articles on better statistics and methods in prestigious peer-reviewed journals such as Psychotherapy Research and Psychological Assessment.

Several therapists at The Colorado Center have accumulated enough data for a statistically reliable estimate of their effectiveness. Click on their names to see the details.

More details on our methodology and assessment tools

In our evidence-based approach to therapy (called Feedback-Informed Treatment or “FIT”), we use the Rating of Outcome Scale (RŌS), a peer-reviewed, rigorously validated ultra-brief instrument that measures change in clients’ well-being. And we use other outcome measures from time to time as part of our ongoing research program.*

Over the last several years, we have determined that there are three ways to toughen our methods so that the outcomes we report are more accurate, reliable, and meaningful:

  1. Toughen the statistics. We now use a repeated-measures-corrected effect size statistic, and change statistics based on more conservative reliability coefficients than the lenient ones usually used. These more rigorous methods have been shown to stabilize (and dampen) reported outcomes compared with the usual practice of dividing by the pre-treatment standard deviation. For our stats-geek colleagues: stop using Cronbach’s alpha for reliability! Take the training wheels off and use an appropriate test-retest coefficient from an untreated, community sample. A repeated-measures-corrected effect size also avoids the varying slopes and intercepts of different severity-adjustment regression equations built from different instruments and samples, while remaining easy to interpret (see Seidel, Miller, & Chow, 2014, for details; a generic numerical sketch appears after this list).

  2. Toughen the instruments. Different instruments measuring the same “well-being” can have different sensitivities to distress and change, and can give different results. We now use an ultra-brief, highly practical pair of instruments called the ROSES that show less of this ‘swing’ and provide a more conservative estimate of change. The ROSES are more practical than very long measures, and they are free for clinicians to download (see Seidel, Andrews, et al., 2016, for details).

  3. Toughen the way we administer them. We still have a way to go with this one, but so far we have learned that giving the same questions session after session may exaggerate apparent change on some outcome instruments. We are now researching whether switching between measures (and other administration methods) dampens and stabilizes our reported outcomes, giving a more conservative but solid estimate of change.
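To give a concrete flavor of the statistics described in item 1, here is a minimal sketch in Python. It is a generic illustration only: every function name and number is hypothetical, and the formulas shown (a difference-score effect size with a common repeated-measures conversion, and a Jacobson-Truax-style reliable change index computed from a test-retest coefficient) are standard textbook versions rather than the exact methods reported in Seidel, Miller, & Chow (2014).

```python
# Generic sketch of the two kinds of statistics described in item 1.
# All data and norm values below are made-up placeholders.

import numpy as np

def repeated_measures_effect_size(pre, post):
    """Standardized change using difference scores, plus a common conversion
    that makes it roughly comparable to an independent-groups effect size."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    diff = post - pre
    d_z = diff.mean() / diff.std(ddof=1)           # effect size in difference-score units
    r = np.corrcoef(pre, post)[0, 1]               # pre-post correlation
    d_comparable = d_z * np.sqrt(2.0 * (1.0 - r))  # common repeated-measures correction
    return d_z, d_comparable

def reliable_change_index(pre_score, post_score, sd_norm, r_test_retest):
    """Jacobson-Truax reliable change index, computed with a test-retest
    reliability coefficient (e.g., from an untreated community sample)
    rather than Cronbach's alpha."""
    se_measurement = sd_norm * np.sqrt(1.0 - r_test_retest)
    se_difference = np.sqrt(2.0) * se_measurement
    return (post_score - pre_score) / se_difference

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pre = rng.normal(25, 6, size=60)        # hypothetical intake well-being scores
    post = pre + rng.normal(5, 6, size=60)  # hypothetical end-of-treatment scores

    d_z, d_comp = repeated_measures_effect_size(pre, post)
    rci = reliable_change_index(pre_score=24.0, post_score=32.0,
                                sd_norm=6.5, r_test_retest=0.80)  # placeholder norms
    print(f"d_z = {d_z:.2f}, independent-groups-comparable d = {d_comp:.2f}")
    print(f"RCI = {rci:.2f} (|RCI| > 1.96 suggests reliable change)")
```

The point of the sketch is visible in the denominators: both statistics depend on a reliability or correlation estimate, so choosing a more conservative test-retest coefficient directly shrinks the amount of change that can be counted as reliable.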

We know that most of our clients don’t care about all this stuff. Heck, most of our colleagues don’t care about it! But our mission is to provide the highest quality therapy from the best therapists in Colorado. How can we know how we are doing if we don’t bother to find out, or if we use a test that would make most therapists look “above average”?

The 2016-2018/2019 graph above was created from our clients’ well-being data, using the validated Rating of Outcome Scale (RŌS) and T-scores normed to published community-sample metrics.**

*Our director and chief statistician, Jason Seidel, is sought out by individuals and agencies worldwide to consult on practice-based-evidence methodologies. He helped facilitate the NREPP/SAMHSA certification process for FIT as an Evidence-Based Practice and was Director of Research for the International Center for Clinical Excellence, based in Chicago, from 2009 to 2019. Scott Miller, Ph.D., co-developer of Client-Directed, Outcome-Informed Treatment (now called FIT), has described Jason as “an expert clinician and scholar whose knowledge base is only exceeded by his compassion for the people he works with in his clinical practice.” Daniel Buccino, LCSW, Clinical Supervisor at Johns Hopkins University, has called him “a rare individual, one of the few people who can make psychometrics not only understandable but downright interesting and relevant.”

**Our methods included T-score conversion of all change scores (normed to community, not clinical, samples) and repeated-measures-corrected effect sizes. Beating up our data this way makes comparisons with plain-vanilla pre-post effect sizes difficult. Colleagues: we encourage you to call us if you want to check on our current methodologies and learn more about them.
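For readers curious about what the T-score conversion involves, the idea is simple: a raw score is standardized against the mean and standard deviation of a community (non-clinical) norming sample and rescaled so that 50 represents the community average and each 10 points is one standard deviation. A minimal sketch, with made-up norm values in place of the published community-sample metrics our graphs actually use:

```python
# Minimal sketch of T-score conversion against community norms.
# The norm values below are hypothetical placeholders.

def to_t_score(raw, community_mean, community_sd):
    """Convert a raw well-being score to a T-score (mean 50, SD 10 in the norm sample)."""
    z = (raw - community_mean) / community_sd
    return 50.0 + 10.0 * z

# Example: a raw score of 24 against hypothetical community norms (mean 30, SD 6)
print(to_t_score(24, community_mean=30.0, community_sd=6.0))  # -> 40.0
```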

Bibliography of psychotherapy outcomes research

• Anker, M.G., Duncan, B.L., & Sparks, J.A. (2009). Using client feedback to improve couple therapy outcomes: A randomized clinical trial in a naturalistic setting. Journal of Consulting and Clinical Psychology, 77(4), 693-704.
• Anker, M.G., et al. (2011). Footprints of couple therapy: Client reflections at follow-up. Journal of Family Psychotherapy, 22, 22-45.
• Asay, T.P., Lambert, M.J., Gregersen, A.T., & Goates, M.K. (2002). Using patient-focused research in evaluating treatment outcome in private practice. Journal of Clinical Psychology, 58(10), 1213-1225.
• Barkham, M., Margison, F., Leach, C., Lucock, M., Mellor-Clark, J., Evans, C., Benson, L., Connell, J., & Audin, K. (2001). Service profiling and outcomes benchmarking using the CORE-OM: Toward practice-based evidence in the psychological therapies. Journal of Consulting and Clinical Psychology, 69(2), 184-196.
• Bringhurst, D.L., Watson, C.S., Miller, S.D., & Duncan, B.L. (2006). The reliability and validity of the outcome rating scale: A replication study of a brief clinical measure. Journal of Brief Therapy, 5(1), 23-29.
• Brown, G.S. (2006). Accountable Behavioral Health Alliance: Non-Clinical Performance Improvement Project: Oregon Change Index.
• Brown, G.S. (2009). Regence Blue Cross/Blue Shield Provider Outcomes. Retrieved from: https://psychoutcomes.org/bin/view/RegenceProviders/WebHome
• Brown, G.S., Lambert, M.J., Jones, E.R., & Minami, T. (2005). Identifying highly effective psychotherapists in a managed care environment. American Journal of Managed Care, 11(8), 513-520.
• Campbell, A., & Hemsley, S. (2009). Outcome rating scale and session rating scale in psychological practice: Clinical utility of ultra-brief measures. Clinical Psychologist, 13, 1-9.
• Chow, D. L., Miller, S. D., Seidel, J. A., Kane, R. T., Thornton, J., & Andrews, W. P. (2015). The role of deliberate practice in the development of highly effective psychotherapists. Psychotherapy, 52(3), 337-345. DOI: 10.1037/pst0000015.
• Duncan, B.L., Miller, S.D., Sparks, J.A., Claud, D.A., Reynolds, L.R., Brown, J., & Johnson, L.D. (2003). The session rating scale: Preliminary psychometric properties of a “working alliance” inventory. Journal of Brief Therapy, 3(1), 3-11.
• Duncan, B.L., Miller, S.D., & Sparks, J.A. (2004). The heroic client: A revolutionary way to improve effectiveness through client-directed, outcome-informed therapy. San Francisco: Jossey-Bass.
• Duncan, B.L., Miller, S.D., Wampold, B.E., & Hubble, M.A. (2009). The heart and soul of change, 2nd Ed.: Delivering what works in therapy. Washington, D.C.: APA Press.
• Duncan, B., Sparks, J., Miller, S., Bohanske, R., & Claud, D. (2006). Giving youth a voice: A preliminary study of the reliability and validity of a brief outcome measure for children, adolescents, and caretakers. Journal of Brief Therapy, 5, 71-87.
• Ericsson, K. A. (2006). The influence of experience and deliberate practice on the development of superior expert performance. In K. A. Ericsson, N. Charness, P. J. Feltovich & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 683-703). Cambridge: Cambridge University Press.
• Gawande, A. (2004, December 6). The bell curve: What happens when patients find out how good their doctors really are? The New Yorker Online. Available through http://www.ihi.org
• Hafkenscheid, A., Duncan, B.L., & Miller, S.D. (2010). The Outcome and Session Rating Scales: A cross-cultural examination of the psychometric properties of the Dutch translation. Journal of Brief Therapy, 7 (1&2), 1-12.
• Hannan, C., Lambert, M.J., Harmon, C., Nielsen, S.L., Smart, D.W., Shimokawa, K., & Sutton, S.W. (2005). A lab test and algorithms for identifying clients at risk for treatment failure. Journal of Clinical Psychology: In Session, 61(2), 155-163.
• Hansen, N.B., Lambert, M.J., & Forman, E.M. (2002). The psychotherapy dose-response effect and its implications for treatment delivery services. Clinical Psychology: Science and Practice, 9(3), 329-343.
• Harmon, S.C., Lambert, M.J., Smart, D.M., Hawkins, E., Nielsen, S.L., Slade, K., & Lutz, W. (2007). Enhancing outcome for potential treatment failures: Therapist-client feedback and clinical support tools. Psychotherapy Research, 17(4), 379-392.
• Hawkins, E.J., Lambert, M.J., Vermeersch, D.A., Slade, K.L., & Tuttle, K.C. (2004). The therapeutic effects of providing patient progress information to therapists and patients. Psychotherapy Research, 14(3), 308-327.
• Hubble, M.A., Duncan, B.L., & Miller, S.D. (1999). The heart and soul of change: What works in therapy. Washington, D.C.: American Psychological Association.
• Lambert, M.J. (2004). Bergin and Garfield’s handbook of psychotherapy and behavior change, 5th Ed. New York: Wiley.
• Lambert, M. J., Whipple, J. L., Bishop, M. J., Vermeersch, D. A., Gray, G. V., & Finch, A. E. (2002). Comparison of empirically-derived and rationally-derived methods for identifying patients at risk for treatment failure. Clinical Psychology and Psychotherapy, 9, 149-164.
• Miller, S.D., & Duncan, B.L. (2004). The Outcome and Session Rating Scales: Administration and Scoring Manual. Chicago, IL: ISTC.
• Miller, S.D., Duncan, B.L., Brown, J., Sparks, J.A., & Claud, D.A. (2003). The outcome rating scale: A preliminary study of the reliability, validity, and feasibility of a brief visual analog measure. Journal of Brief Therapy, 2(2), 91-100.
• Miller, S.D., Duncan, B.L., & Hubble, M.A. (2004). Beyond integration: The triumph of outcome over process in clinical practice. Psychotherapy in Australia, 10(2), 2-19.
• Miller, S.D., Duncan, B.L., Sorrell, R., Brown, G.S., & Chalk, M.B. (2006). Using outcome to inform therapy practice. Journal of Brief Therapy, 5(1), 5-22.
• Miller, S. D., Hubble, M. A., Chow, D., & Seidel, J. (2015). Beyond measures and monitoring: Realizing the potential of Feedback-Informed Treatment. Psychotherapy, 52(4), 449-457. DOI: 10.1037/pst0000031.
• Miller, S. D., Hubble, M. A., Chow, D. L., & Seidel, J. A. (2013). The outcome of psychotherapy: Yesterday, today, and tomorrow. Psychotherapy, 50(1), 88-97.
• Miller, S. D., Maeschalck, C., Axsen, R., & Seidel, J. (2011). The International Center for Clinical Excellence Core Competencies. http://centerforclinicalexcellence.com/wp-content/plugins/buddypress-group-documents/documents/1281032711-CoreCompetencies.PDF
• Owen, J., Miller, S. D., Seidel, J., & Chow, D. L. (2016). The working alliance in treatment of military adolescents. Journal of Consulting and Clinical Psychology, 84(3), 200-210. http://dx.doi.org/10.1037/ccp0000035.
• Reese, R.J., Gillespy, A., Owen, J.J., Flora, K.L., Cunningham, L.C., Archie, D., & Marsden, T. (2013). The influence of demand characteristics and social desirability on clients’ ratings of the therapeutic alliance. Journal of Clinical Psychology, 69(7), 696-709.
• Reese, R.J., Norsworthy, L.A., & Rowlands, S.R. (2009a). Does a continuous feedback system improve psychotherapy outcome? Psychotherapy: Theory, Research, Practice, Training, 46, 418-431.
• Reese, R. J., Usher, E. L., Bowman, D., Norsworthy, L., Halstead, J., Rowlands, S., & Chisholm, R. (2009b). Using client feedback in psychotherapy training: An analysis of its influence on supervision and counselor self-efficacy. Training and Education in Professional Psychology, 3, 157-168.
• Seidel, J. A. (2012, August). Feedback-informed treatment: The devil is in the details. In C. D. Goodheart (Chair), Practice-based evidence of psychotherapy’s effectiveness. Symposium conducted at the meeting of the American Psychological Association, Orlando, FL.
• Seidel, J. A. (2012). Using Feedback-Informed Treatment (FIT) to build a premium-service, private-pay practice. In C. E. Stout (Ed.). Getting Better at Private Practice (pp. 279-291). New York: Wiley.
• Seidel, J. A. (2006, November-December). The survival of psychotherapy: How humanistic accountability will transform our profession and your practice. Colorado Psychological Association Bulletin, 34(7), 6-9. Reprinted in: (2006, November) The Clinical Practitioner, 1(4), 10-13. Available at http://www.nappp.org
• Seidel, J. A., Andrews, W. P., Owen, J., Miller, S. D., & Buccino, D. L. (2016). Preliminary validation of the Rating of Outcome Scale and equivalence of ultra-brief measures of well-being. Psychological Assessment. Advance online publication. doi: 10.1037/pas0000311
• Seidel, J. A., & Miller, S. D. (2012). Manual 4: Documenting change: A primer on measurement, analysis, and reporting. In B. Bertolino, & S. D. Miller (Eds.), ICCE Manuals on Feedback-Informed Treatment (Vols. 1-6). Chicago: ICCE Press.
• Seidel, J. A., Miller, S. D., & Chow, D. L. (2014). Effect size calculations for the clinician: Methods and comparability. Psychotherapy Research, DOI: 10.1080/10503307.2013.840812
• Tilsen, J., Maeschalck, C., Seidel, J., Robinson, W., & Miller, S. D. (2012). Manual 5: Feedback-informed clinical work: Specific populations and service settings. In B. Bertolino, & S. D. Miller (Eds.), ICCE Manuals on Feedback-Informed Treatment (Vols. 1-6). Chicago: ICCE Press.
• Wampold, B.E. (2001). The great psychotherapy debate: Models, methods, and findings. Mahwah, N.J.: Lawrence Erlbaum.
• Wehr, T.A., Moul, D.E., Barbato, G., Giesen, H.A., Seidel, J.A., Barker, C., & Bender, C. (1993). Conservation of photoperiod-responsive mechanisms in humans. American Journal of Physiology, 265 (Regulatory Integrative Comparative Physiology, 34), R846-R857.

ROSES Downloads (Rating of Outcomes and Session Experience Scales)

ROS (Rating of Outcome Scale) and SES (Session Experience Scale)

Request Appointment

Please tell us a little about what you’re looking for, and we will respond within two business days. Otherwise, click here to book an initial call directly with a therapist of your choice.