how our research program helps our clients
why we measure
Research shows that it is not the brand of therapy but the therapist's track record that best predicts clinical improvement. Why do we measure? To be accountable, and to improve what we do and how we do it. Consistently excellent therapists are rare (and of course, no therapist is helpful with every client). The Colorado Center's mission is to find these therapists, validate their quality by rigorously measuring their outcomes, and support them in doing outstanding work.
There are 5000 therapists in the Denver/Boulder metro area, and there are hundreds of brands and methods of psychotherapy. What makes us different is our relentless focus on results that matter to you, whether you are coming to resolve a very specific problem or for a fundamental shift in how you experience the rest of your life.
how we measure
Whatever your reasons for seeking therapy, it is good to know whether your well-being is improving along the way. A few paragraphs down, we provide links to our recent peer-reviewed research. But for now, let's keep it simple: we use a practical and accurate method for measuring the results of therapy, and briefly checking in at each session gives us more of a chance to change direction if what we are doing isn't helpful.
How have you felt in the past week? How did this session go? We want your honest answers to these questions, and we provide you with a graph of change over time so you can see if we are heading in the right direction. For our ongoing research, we may (on occasion) ask a few additional questions; but if so, it rarely takes longer than 4 or 5 minutes.
our results: clients change a lot more than in typical therapy
Between 2011 and 2014, using standard instruments and methods for measuring change, our therapists ranked in the top 10-15% of therapists who measure their outcomes, and our clients showed outstanding results overall. Naturally, as scientist-practitioners, we mounted a full-on assault on our own data to see what would remain after we challenged our findings.
we're cleaning up the wild world of outcome measurement
After developing and applying much tougher measurement standards than are typically used, our clients from 2014-2016 still reported solid improvement from therapy. Here is an example of what happens to one of our therapists' outcome data when we apply these tougher standards: our clients' distress at their first session doesn't appear as extreme with the new method, and neither does their improvement over the course of therapy.
Sure, using a harder grading system doesn't make us look as good; but if we don't apply tough standards, it can be hard to know how real our results are. For example, the publishers of one therapy outcome instrument advertise that therapists using their instrument can document significant change in more than 95% of their clients. That's crazy. We think that such easy grading will preserve a therapist's self-image rather than serving his or her clients, and it prevents us from seeing the difference between effective and ineffective therapy in the long term. A couple of recent critics of easy grading have put it this way: "Too many rhinestones are masquerading as diamonds." We take an interest in distinguishing between these, and with the generous help of our clients' feedback about therapy and their well-being over time, we have published articles on better statistics and methodology in prestigious peer-reviewed journals such as Psychotherapy Research and Psychological Assessment.
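For colleagues curious what "grading" means concretely here: one standard way change is graded is the Jacobson-Truax reliable change index, a client's change score divided by the standard error of the difference between two administrations of an instrument. A minimal Python sketch (the standard deviation, reliability, and score values below are illustrative placeholders, not the norms of any particular instrument):

```python
import math

def reliable_change_index(pre, post, sd, reliability):
    """Jacobson-Truax reliable change index: the change score divided by
    the standard error of the difference between two administrations."""
    se_measurement = sd * math.sqrt(1 - reliability)   # SE of one measurement
    s_diff = math.sqrt(2) * se_measurement             # SE of a difference score
    return (post - pre) / s_diff

# A 15-point drop in distress on a scale with SD = 10 clears the 1.96 bar
# when reliability is assumed to be .80, but not when a more conservative
# .70 is assumed -- the reliability assumption alone decides the verdict.
```

Notice that easier grading is often just a more generous reliability assumption: the same 15-point change either does or does not count as "reliable" depending on that one coefficient.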
The following therapists at The Colorado Center have each accumulated enough data for a statistically reliable estimate of effectiveness. Click on our names to see the details.
Jason Seidel, Psy.D.
Kristen Morrison, Ph.D.
Irina Banfi-Mare, Psy.D.
Elizabeth Nelson, Ph.D.
Amy Stambuk, LCSW
more details on our methodology and assessment tools
In our evidence-based approach to therapy (called Feedback-Informed Treatment or "FIT"), we use the Rating of Outcome Scale (RŌS), a peer-reviewed, rigorously validated ultra-brief instrument that measures change in clients' well-being. And we use other outcome measures from time to time as part of our ongoing research program.*
Over the last several years, we have determined that there are three ways to toughen our methods so that the outcomes we report are more accurate, reliable, and meaningful:
Toughen the statistics (we now use a repeated-measures-corrected effect size statistic, and change statistics based on more conservative reliability coefficients than those in common use. These more rigorous methods have been shown to stabilize--and dampen--outcomes compared with using the usual pre-treatment standard deviation. For our stats-geek colleagues: this approach also avoids varying slopes and intercepts among severity-adjustment regression equations from different instruments and samples, and it is still an easily interpretable ES; see Seidel, Miller, & Chow, 2014, for details).
Toughen the instruments (different instruments measuring the same "well-being" can have different sensitivities to distress and change and can give different results; we now use an ultra-brief, highly practical pair of instruments called the ROSES that show less of this 'swing' and provide a more conservative estimate of change. The ROSES are more practical than very long measures and they are free for clinicians to download; see Seidel, Andrews, et al., 2016, for details).
Toughen the way we administer them (we still have a way to go with this one, but so far we have learned that repeatedly giving the same questions session after session may have an exaggerating effect on some outcome instruments, and we are researching the impact of switching between measures and other methods on dampening and stabilizing our reported outcomes, giving a more conservative but "solid" estimate of change).
*Our director and chief statistician, Jason Seidel, is sought out by individuals and agencies worldwide to consult on practice-based-evidence methodologies. He helped facilitate the NREPP/SAMHSA certification process for FIT as an Evidence-Based Practice and is Director of Research for the International Center for Clinical Excellence, based in Chicago. Scott Miller, Ph.D., co-developer of Client-Directed, Outcome-Informed Treatment (now called FIT), has described Jason as "an expert clinician and scholar whose knowledge base is only exceeded by his compassion for the people he works with in his clinical practice." Daniel Buccino, LCSW, Clinical Supervisor at Johns Hopkins University, has called him "a rare individual, one of the few people who can make psychometrics not only understandable but downright interesting and relevant."
**These instruments and methods included: T-score conversion of all change scores [normed to community--not clinical--samples], repeated-measures-corrected effect sizes, crossover administrations between the Outcome Rating Scale (ORS) and the less sensitive Rating of Outcome Scale (RŌS), and use of the RŌS without the ORS. Beating up our data this way creates a bit of turmoil in terms of interpretation (to our colleagues: we know it's messy in the transition, but eventually our data will be more robust and reliable than if we had stuck with 'Version 1.0'). Colleagues: we encourage you to call us if you want to check on our current methodologies and learn more about them.
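The T-score conversion mentioned in the footnote is a standard normalization that puts scores from different instruments on one scale (community mean at T = 50, each community SD worth 10 T-points). A one-line sketch, where the mean and SD arguments are placeholders rather than actual instrument norms:

```python
def t_score(raw, community_mean, community_sd):
    # rescale a raw score against community norms so that the community
    # average lands at T = 50 and each community SD equals 10 T-points
    return 50 + 10 * (raw - community_mean) / community_sd
```

Expressing all change scores on the same T metric is what makes results comparable across instruments with different raw-score ranges.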
• Anker, M.G. et al. (2011). Footprints of couple therapy: Client reflections at follow-up. Journal of Family Psychotherapy, 22, 22-45.
• Asay, T.P., Lambert, M.J., Gregersen, A.T., & Goates, M.K. (2002). Using patient-focused research in evaluating treatment outcome in private practice. Journal of Clinical Psychology, 58(10), 1213-1225.
• Anker, M.G., Duncan, B.L., & Sparks, J.A. (2009). Using client feedback to improve couple therapy outcomes: A randomized clinical trial in a naturalistic setting. Journal of Consulting and Clinical Psychology, 77(4), 693-704.
• Barkham, M., Margison, F., Leach, C., Lucock, M., Mellor-Clark, J., Evans, C., Benson, L., Connell, J., & Audin, K. (2001). Service profiling and outcomes benchmarking using the CORE-OM: Toward practice-based evidence in the psychological therapies. Journal of Consulting and Clinical Psychology, 69(2), 184-196.
• Bringhurst, D.L., Watson, C.S., Miller, S.D., & Duncan, B.L. (2006). The reliability and validity of the outcome rating scale: A replication study of a brief clinical measure. Journal of Brief Therapy, 5(1), 23-29.
• Brown, G.S. (2006). Accountable Behavioral Health Alliance: Non-Clinical Performance Improvement Project: Oregon Change Index. Retrieved from: http://www.clinical-informatics.com/ABHA/OCI%20PIP.doc
• Brown, G.S. (2009). Regence Blue Cross/Blue Shield Provider Outcomes. Retrieved from: https://psychoutcomes.org/bin/view/RegenceProviders/WebHome
• Brown, G.S., Lambert, M.J., Jones, E.R., & Minami, T. (2005). Identifying highly effective psychotherapists in a managed care environment. American Journal of Managed Care, 11(8), 513-520.
• Campbell, A., & Hemsley, S. (2009). Outcome rating scale and session rating scale in psychological practice: Clinical utility of ultra-brief measures. Clinical Psychologist, 13, 1-9.
• Chow, D. L., Miller, S. D., Seidel, J. A., Kane, R. T., Thornton, J., & Andrews, W. P. (2015). The role of deliberate practice in the development of highly effective psychotherapists. Psychotherapy, 52(3), 337-345. DOI: 10.1037/pst0000015.
• Duncan, B.L., Miller, S.D., Sparks, J.A., Claud, D.A., Reynolds, L.R., Brown, J., & Johnson, L.D. (2003). The session rating scale: Preliminary psychometric properties of a "working alliance" inventory. Journal of Brief Therapy, 3(1), 3-11.
• Duncan, B.L., Miller, S.D., & Sparks, J.A. (2004). The heroic client: A revolutionary way to improve effectiveness through client-directed, outcome-informed therapy. San Francisco: Jossey-Bass.
• Duncan, B.L., Miller, S.D., Wampold, B.E., & Hubble, M.A. (2009). The heart and soul of change, 2nd Ed.: Delivering what works in therapy. Washington, D.C.: APA Press.
• Duncan, B., Sparks, J., Miller, S., Bohanske, R., & Claud, D. (2006). Giving youth a voice: A preliminary study of the reliability and validity of a brief outcome measure for children, adolescents, and caretakers. Journal of Brief Therapy, 5, 71-87.
• Ericsson, K. A. (2006). The influence of experience and deliberate practice on the development of superior expert performance. In K. A. Ericsson, N. Charness, P. J. Feltovich, & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 683-703). Cambridge: Cambridge University Press.
• Gawande, A. (2004, December 6). The bell curve: What happens when patients find out how good their doctors really are? The New Yorker Online. Available through http://www.ihi.org
• Hafkenscheid, A., Duncan, B.L., & Miller, S.D. (2010). The Outcome and Session Rating Scales: A cross-cultural examination of the psychometric properties of the Dutch translation. Journal of Brief Therapy, 7 (1&2), 1-12.
• Hannan, C., Lambert, M.J., Harmon, C., Nielsen, S.L., Smart, D.W., Shimokawa, K., & Sutton, S.W. (2005). A lab test and algorithms for identifying clients at risk for treatment failure. Journal of Clinical Psychology: In Session, 61(2), 155-163.
• Hansen, N.B., Lambert, M.J., & Forman, E.M. (2002). The psychotherapy dose-response effect and its implications for treatment delivery services. Clinical Psychology: Science and Practice, 9(3), 329-343.
• Harmon, S.C., Lambert, M.J., Smart, D.M., Hawkins, E., Nielsen, S.L., Slade, K., & Lutz, W. (2007). Enhancing outcome for potential treatment failures: Therapist-client feedback and clinical support tools. Psychotherapy Research, 17(4), 379-392.
• Hawkins, E.J., Lambert, M.J., Vermeersch, D.A., Slade, K.L., & Tuttle, K.C. (2004). The therapeutic effects of providing patient progress information to therapists and patients. Psychotherapy Research, 14(3), 308-327.
• Hubble, M.A., Duncan, B.L., & Miller, S.D. (1999). The heart and soul of change: What works in therapy. Washington, D.C.: American Psychological Association.
• Lambert, M.J. (2004). Bergin and Garfield's handbook of psychotherapy and behavior change, 5th Ed. New York: Wiley.
• Lambert, M. J., Whipple, J. L., Bishop, M. J., Vermeersch, D. A., Gray, G. V., & Finch, A. E. (2002). Comparison of empirically-derived and rationally-derived methods for identifying patients at risk for treatment failure. Clinical Psychology and Psychotherapy, 9, 149-164.
• Miller, S.D., & Duncan, B.L. (2004). The Outcome and Session Rating Scales: Administration and Scoring Manual. Chicago, IL: ISTC.
• Miller, S.D., Duncan, B.L., Brown, J., Sparks, J.A., & Claud, D.A. (2003). The outcome rating scale: A preliminary study of the reliability, validity, and feasibility of a brief visual analog measure. Journal of Brief Therapy, 2(2), 91-100.
• Miller, S.D., Duncan, B.L., & Hubble, M.A. (2004). Beyond integration: The triumph of outcome over process in clinical practice. Psychotherapy in Australia, 10(2), 2-19.
• Miller, S.D., Duncan, B.L., Sorrell, R., Brown, G.S., & Chalk, M.B. (2006). Using outcome to inform therapy practice. Journal of Brief Therapy, 5(1), 5-22.
• Miller, S. D., Hubble, M. A., Chow, D., & Seidel, J. (2015). Beyond measures and monitoring: Realizing the potential of Feedback-Informed Treatment. Psychotherapy, 52(4), 449-457. DOI: 10.1037/pst0000031.
• Miller, S. D., Hubble, M. A., Chow, D. L., & Seidel, J. A. (2013). The outcome of psychotherapy: Yesterday, today, and tomorrow. Psychotherapy, 50(1), 88-97.
• Miller, S. D., Maeschalck, C., Axsen, R., & Seidel, J. (2011). The International Center for Clinical Excellence Core Competencies. http://centerforclinicalexcellence.com/wp-content/plugins/buddypress-group-documents/documents/1281032711-CoreCompetencies.PDF
• Owen, J., Miller, S. D., Seidel, J., & Chow, D. L. (2016). The working alliance in treatment of military adolescents. Journal of Consulting and Clinical Psychology, 84(3), 200-210. http://dx.doi.org/10.1037/ccp0000035.
• Reese, R.J., Gillespy, A., Owen, J.J., Flora, K.L., Cunningham, L.C., Archie, D., & Marsden, T. (2013). The influence of demand characteristics and social desirability on clients' ratings of the therapeutic alliance. Journal of Clinical Psychology, 69(7), 696-709.
• Reese, R.J., Norsworthy, L.A., & Rowlands, S.R. (2009a). Does a continuous feedback system improve psychotherapy outcome? Psychotherapy: Theory, Research, Practice, Training, 46, 418-431.
• Reese, R. J., Usher, E. L., Bowman, D., Norsworthy, L., Halstead, J., Rowlands, S., & Chisholm, R. (2009b). Using client feedback in psychotherapy training: An analysis of its influence on supervision and counselor self-efficacy. Training and Education in Professional Psychology, 3, 157-168.
• Seidel, J. A. (2012, August). Feedback-informed treatment: The devil is in the details. In C. D. Goodheart (Chair), Practice-based evidence of psychotherapy's effectiveness. Symposium conducted at the meeting of the American Psychological Association, Orlando, FL.
• Seidel, J. A. (2012). Using Feedback-Informed Treatment (FIT) to build a premium-service, private-pay practice. In C. E. Stout (Ed.). Getting Better at Private Practice (pp. 279-291). New York: Wiley.
• Seidel, J. A. (2006, November-December). The survival of psychotherapy: How humanistic accountability will transform our profession and your practice. Colorado Psychological Association Bulletin, 34(7), 6-9. Reprinted in: (2006, November) The Clinical Practitioner, 1(4), 10-13. Available at http://www.nappp.org
• Seidel, J. A., Andrews, W. P., Owen, J., Miller, S. D., & Buccino, D. L. (2016). Preliminary validation of the Rating of Outcome Scale and equivalence of ultra-brief measures of well-being. Psychological Assessment. Advance online publication. doi: 10.1037/pas0000311
• Seidel, J. A., & Miller, S. D. (2012). Manual 4: Documenting change: A primer on measurement, analysis, and reporting. In B. Bertolino, & S. D. Miller (Eds.), ICCE Manuals on Feedback-Informed Treatment (Vols. 1-6). Chicago: ICCE Press.
• Seidel, J. A., Miller, S. D., & Chow, D. L. (2014). Effect size calculations for the clinician: Methods and comparability. Psychotherapy Research, DOI: 10.1080/10503307.2013.840812
• Tilsen, J., Maeschalck, C., Seidel, J., Robinson, W., & Miller, S. D. (2012). Manual 5: Feedback-informed clinical work: Specific populations and service settings. In B. Bertolino, & S. D. Miller (Eds.), ICCE Manuals on Feedback-Informed Treatment (Vols. 1-6). Chicago: ICCE Press.
• Wampold, B.E. (2001). The great psychotherapy debate: Models, methods, and findings. Mahwah, N.J.: Lawrence Erlbaum.