Measuring the fit between therapist and client and the outcome of treatment seems all well and good with high-functioning, private-practice clients. But what about clients who receive public behavioral health care—what about folks diagnosed as severely mentally ill, addicted to drugs or alcohol, or living on the streets? Well, we had these same questions as we embarked on a journey to address a recommendation from an accreditation body that our agency start measuring outcomes.
The work of Scott Miller, Barry Duncan, and Jacqueline Sparks came to our attention through presentations, word of mouth, and their book, The Heroic Client. After years of hearing all the hoopla about evidence-based practice, we resonated with the idea of involving clients in determining the “fit” and “effect” of services, or what the authors termed “practice-based evidence.” And, frankly, we had to do something to comply with the recommendation.
We remained skeptical, however. Actually, skeptical doesn’t quite capture the feelings of some of the addiction counselors in our pilot project who were asked to use the Outcome Rating Scale (ORS) and Session Rating Scale (SRS) at an urban, inner-city clinic. Some of this seasoned and talented group were downright cynical—and had every right to be. After all, this might well have been just another misdirected edict from above, following a conga line of new procedures and paradigm shifts. The group raised concerns that the measures wouldn’t work with homeless clients addicted to drugs, or with those in severe crisis or suffering from mental illness. They complained that it would be too difficult to remember to give the measures in the throes of their work, and that it would take too much time. Besides, they argued, “those” clients wouldn’t be honest or go along with the idea, no matter how it was presented. Of course, they also bemoaned the addition of any paperwork to their already ridiculously overpapered lives. It was a tumultuous period in the agency’s life, to say the least.
About this time, serendipitously, Barry Duncan was in town for a presentation. We invited him to tour the Downtown Eastside of Vancouver, home to one of our pilot sites—the poorest neighborhood in Canada, infamous for its open drug market, high rates of injection drug use, and epidemic levels of hepatitis, HIV, homelessness, and mental health and addiction problems. We shared our doubts about using the client-directed ORS and SRS with such a marginalized, complex population, but Barry assured us that others had succeeded with similar populations throughout the United States and around the world.
We learned that diverse settings serving the most disenfranchised clients not only realized improved outcomes, but also reduced length of stay, cancellations, and no-shows. For example, Mary Haynes at Community Health and Counseling Services in Bangor, Maine—an agency that serves the “severely and persistently mentally ill”—reduced length of stay by 72 percent in case management, 59 percent in therapy, and 47 percent in residential programs. Dave Claud at the Center for Family Services in West Palm Beach, Florida, a community mental health clinic serving a broad base of clients, decreased cancellations by 40 percent and no-shows by 25 percent. Bill Plum of the Center for Alcohol and Drug Treatment in Duluth, Minnesota, saw retention improve from 50 to 82 percent. Finally, Bob Bohanske, serving the severely mentally ill and other disadvantaged populations at Southwest Behavioral Health in Phoenix, Arizona, achieved gains in effectiveness and efficiency impressive enough to convince the State of Arizona to adopt the ORS and SRS as a best practice. Barry put us in touch with these leaders and suggested that we join the Heroic Agencies List (Jacqueline Sparks’s brainchild), a group of more than 500 people from 14 countries who discuss the measures and client-directed work, provide support, and share information about their implementation experiences.
After attending Scott Miller and Barry Duncan’s Training of Trainers Conference, we began training the pilot teams, incorporating regular individual and group consultations. The clinicians indicated that the give-and-take of the group consultations was most beneficial, as they learned from one another about successes and challenges in using the measures. Initial discomfort with the measures stemmed from awkwardness at introducing and scoring the instruments, uneasiness with “the numbers” and what they meant, and, perhaps most important, a nagging fear about how the information about effectiveness would be used.
It’s particularly important that therapists understand that the measures won’t be used punitively in any way, but rather as tools to help them find the right approach, keep clients engaged, and improve outcomes. Without feedback about our effectiveness, how can we learn, as individuals and agencies, to be better? Over time, the counselors came to value the measures as compasses to guide treatment—a way to open up conversations about core issues and identify failing cases early so that adjustments could be made. Perhaps the most telling test of the counselors’ ultimate acceptance of the measures came when we surveyed them and found that all 14 who participated in the pilot chose to continue using the measures. Assuaging fears about punitive uses and providing clinicians with time, ongoing support, and supervision, we learned, are critical components of sustained integration of the measures.
On the client side of things, most had no qualms about completing the measures. Occasionally, clients refused for various reasons—not liking to fill in forms, not wanting to evaluate their therapist, and so on—but these objections were often overcome by offering the option of completing the measures orally, or simply by asking clients to try them for a few sessions. Most clients reported feeling empowered when asked for their feedback, and appreciated taking a more active role in their treatment. Some even reported that the measures helped them connect their distress to their substance use.
One such client, an amphetamine addict, defended his daily drug habit, pointing to his ability to hold down a job and keep friends. He was adamant that no problem existed. But after a few sessions of using the measures, he suddenly quit using! He explained that the more he reflected on the areas of his life measured by the ORS, the more he realized that it was the drug use that created the discontent indicated by his scores. Of course, not all clients will gain such dramatic insights, but the measures readily lend themselves to clients’ reflecting on their lives and assigning personal meaning to their marks on the scales.
Incorporating formal client feedback caused a sea change in our outlook. We now expect that clients will benefit from our services—or that we’ll make changes to ensure they do. We believe that our clients have a right to services that are beneficial, and that recovery is a probability, not just a possibility. Using the ORS and SRS allows us to be accountable not only to our agency (and accrediting body), but, more important, to our clients.
After a year of using the measures and entering the data into the ASIST program (www.talkingcure.com), we’ve found that our results with these challenging clients are better than those achieved in randomized clinical trials and better than the national norms established by more than 300,000 administrations of the measures. Despite their misgivings, the staff forged ahead, tracking results and pushing for excellence, striving to be supershrinks for these difficult and demanding clients.