
Not long ago I got into a discussion with a counselor who insisted his program used a harm reduction approach because it didn’t require clients to abstain from drugs while in its IOP. “We leave that decision up to the client,” he said, with pride.

“But how does that reduce harm?” I asked. He explained that it encourages them to enroll in treatment, where otherwise they might refuse. That was based on research showing that addicts who remained in treatment did better than those who dropped out.

The ironic part is that only a few months earlier, two active clients had OD’d on drugs between sessions. Both survived, but it sure seemed to me like poor evidence for the harm reduction value of his approach. They might well have done better had somebody insisted they remain drug-free while in treatment, and monitored them to make sure.

It’s true that more clients will enroll in your program if you relax standards around drinking or drug use. But that doesn’t necessarily mean they’re getting better. It reminds me of those smoking cessation experiments where smokers were offered a generous financial incentive for completing the program, yet at the end only about 17% had actually stopped smoking. The comparison group was required to make a much stronger commitment; far fewer signed up, but their rate of successful cessation was about three times that of the first group.

Obviously, neither approach is ideal. It would be good if we had a way to predict who belongs where. But as far as I can determine, we don’t.

My first instinct would be to require those with a severe problem to commit to abstinence, but I don’t doubt fewer would sign up. I’m guessing they’d be mostly those who’d already tried to quit on their own, unsuccessfully. But I’m also aware that programs get paid per patient, so maybe I’m just whistling in the wind.

Some counselors find a compromise. “I’m not going to refuse to treat someone because they won’t commit to being drug-free,” a therapist, himself in recovery, explained. “But I also don’t like pretending an addict can do something when I know damn well he can’t. I just assessed the guy as somebody with a severe, relapsing SUD, and now I’m supposed to pat him on the head and be encouraging when he tells me he’s going to keep using? That’s not empathy, that’s patronizing.” His solution: “Be direct. Remind him of his previous failures and suggest we do an experiment. We set up a program to control his drug use. Then we monitor it closely (drug tests, reports from his family, session attendance, that sort of thing) so we know if it’s really working. We don’t take his word for it. When the experiment period is over, we see how well he did and decide where to go from there.”

A contingency contract, in other words. “What if he refuses?” I asked. “I send him up the street to the program where they don’t care all that much what happens to you as long as you’re current on the bill.”

“Look, I understand the desire to keep using,” he continued. “I did, for a full decade after I knew something was wrong. But at some point every addict has to face reality. It hurts, but it also helps.”

My longstanding rule is to design your program to treat your population, not some idealized group of subjects who did well in a research setting with a significantly different structure. Don’t decide on the basis of philosophy. Instead, look at it in terms of what’s likely to produce the best results, based on a close analysis of the people you work with, whoever they may be.

