Recovery-focused interventions: too messy to study?
“Not everything that can be counted counts and not everything that counts can be counted.”
Albert Einstein
These last few weeks I’ve seen several people writing on the subject of substance use disorder treatment calling for us ‘to stick with the evidence’ of what works. Research evidence should, quite rightly, inform policy and practice.
There’s an assumption underlying this call, and it’s a big one. The hierarchical ladder of research methodology is based on the scientific method and on the premise that the world we live in can be understood in terms of cause and effect. As it turns out, life’s a bit messier.
The randomised controlled trial (RCT)
In terms of research designs, the randomised controlled trial (RCT) stands tall. It’s the standard researchers typically use to reassure us that an intervention is causing an effect. In my GP days I was involved in RCTs exploring medications for various health conditions – they often shifted practice and improved treatment outcomes for patients.
Methodologies like the RCT provide results that can lead to evidence-based practice. In medicine this has been defined as:
The conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.
Sackett et al, BMJ 1996
I’ll draw your attention to the word ‘current’. Here’s the thing: we don’t know what we don’t yet know, and we’re even less likely to find out if we’re not looking for it. The evidence base for interventions is undeniably incomplete, which means that when people say, for example, ‘there’s no evidence that residential treatment works to help people achieve their goals’, it doesn’t mean that residential treatment does not help people achieve their goals. In fact, as many have shown, there is evidence for residential treatment, but it needs to be better developed. One might ask why the evidence lags behind. Is it because few are looking for it?
Recovery and the randomised controlled trial (RCT)
When the Scottish Government began to develop its previous drugs strategy (The Road to Recovery) in 2007, resistant voices claimed that there was ‘no evidence’ to support recovery. They cited data to show that, in terms of the evidence base, opioid replacement therapy was the most solidly based treatment for those with opioid use disorder.
There is general agreement that those voices were right about where the weight of the evidence lay. But the issues are more nuanced than they might first appear. If you look at the end points studied – reduction in illicit drug use, reduction in deaths, reduction in blood-borne virus transmission and reduction in crime – then of course the evidence is there. That’s all well and good, but when you begin to examine quality-of-life outcomes the evidence looks less solid. As I’ve stated before, I believe in medication-assisted treatment (MAT), but I’m also open to examining its shortcomings. In BMJ Open, Sanger and colleagues (2018) wrote:
Recent guidelines indicate there is little consistent evidence to evaluate the effectiveness of MATs. Reviews evaluating MAT effectiveness have found great variability in outcomes between studies, making it difficult to establish a real treatment effect. Each study measures a different set of treatment outcomes that define success in arbitrary or convenient terms.
The authors continue:
This is a substantial limitation in addiction research that must be overcome to reach a consensus on which treatment outcome domains should be the goals.

So even when the evidence appears strong, there are still issues to be resolved.
When it comes down to recovery research evidence, it’s not only that the research focus is not on recovery – part of the issue is that it’s also hard to pin down precisely what recovery means, which makes it difficult to study. Nevertheless, in 2010, the review of the evidence base – Research for Recovery – found that there was evidence to support recovery.
A problem with the evidence imbalance is that medicalised treatment is easier to study than complex, multiple-stranded psychosocial interventions. One lends itself to the RCT; the other does not. Furthermore, if all you study are medication-based interventions, it’s obviously correct to say that’s where the evidence lies (although, as before, there are legitimate questions about which end points you use to demonstrate efficacy). But what matters to researchers and the public health agenda may not be what matters to patients. So are we missing things that count?
RCT strengths and weaknesses
In his book Circles of Recovery, Keith Humphreys lays out the strengths and weaknesses of the RCT. He points out that it has high internal validity – a strength – which means we can make robust inferences about a treatment effect. He also notes, though, that there are difficulties with generalisability and utility.
RCTs usually have inclusion and exclusion criteria. Who is allowed to join a trial limits how far we can apply its findings in everyday practice. As I say, I’ve been involved in research using RCT methodology. Typically we exclude people with major physical health problems (e.g. hepatitis C) and major mental health disorders (e.g. depression, self-harm, personality disorders, eating disorders, bipolar disorder, a past history of psychosis etc.). In other words, we exclude the sort of people we see every day in our work. Typical patients. This has an impact on what happens when we apply the research to real-life practice.
Generally you need to enrol large numbers of people in RCTs to ‘power’ them. Large samples mean you establish ‘central tendencies’ – average effects across the group. The evidence gathered might not be applicable to individuals – in other words, to you and me: it’s not necessarily ‘externally valid’.
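To make the ‘power’ point concrete, here is a minimal sketch of the arithmetic behind trial sizing – my own illustration, not drawn from any of the studies cited, with hypothetical numbers (a standardised effect size of 0.3, 5% significance, 80% power) chosen purely for illustration:

```python
# Illustrative sketch with assumed numbers: how many participants a simple
# two-arm RCT needs to detect a modest average treatment effect.
from statsmodels.stats.power import TTestIndPower

# Assumptions (hypothetical, not from the article): standardised effect size 0.3,
# 5% two-sided significance level, 80% power, equal-sized arms.
n_per_arm = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)

print(f"Participants needed per arm: {n_per_arm:.0f}")  # roughly 175 per arm, ~350 in total
```

Hundreds of participants are needed before even an average effect can be detected – which is exactly why RCT findings describe central tendencies rather than any particular individual.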
Then there’s bias. Pharmaceutically funded studies tend to favour the medication being trialled, all of us have our own acknowledged or hidden biases, and too much research goes unpublished because it didn’t show the ‘right’ results. Bias and spin can also affect what folk do with study evidence. It’s not hard to rubbish research, and it’s not hard to bend it to make it appear to say something it doesn’t actually say, or to fail to highlight uncomfortable findings.
Fortunately, RCTs are not the only way to get accurate results. Humphreys says, ‘The common conviction that RCTs always generate more accurate estimates of treatment effects is simply incorrect.’ However, the grading of evidence is biased towards RCTs, meaning that other useful research may be inadequately considered or overlooked (Irving et al, 2017).
Opportunities and challenges
So, in addiction and recovery research we need to be careful how we set up studies and how we interpret and use them. Overconfidence can be harmful, just as ignoring the evidence can be. We need to be open to other ways of evidencing the impact of treatment and of recovery community support and interventions.
We can get caught in a catch-22: the RCT is not the best methodology for studying recovery interventions and processes, yet we still look almost exclusively to RCTs, or to the ‘high rungs’ of our evidence ladder, for evidence to guide our practice.
Practice-based evidence
One way of reframing evidence-gathering is to use the concept of practice-based evidence. In this approach, ‘the real, messy, complicated world is not controlled’ (Swisher, 2010). ‘Patients are not controlled as research subjects, who must meet certain inclusion/exclusion criteria. Rather they are grouped together by factors they share. This type of research respects that people are complex, and don’t readily fit the “cause and effect” model of science.’
This answers a different question from ‘does X cause Y?’ – one more interested in what’s happening with the individual and their world, however messy. On top of this, we should be involving patients in setting research outcome points and then measuring the intervention against what matters to the patient.
Here we go again
After years of neglect, residential rehabilitation in Scotland has just had a significant increase in resource. Some are saying that this is an example of an intervention with a poor evidence base and suggesting that the resource would be better used elsewhere ‘in evidence-based interventions’. What a blind spot we continue to have.
You see, the tragic thing is that we were asked to improve the evidence base back in 2010, when the Scottish Government published its review of the evidence base on recovery, with a plea for research into longer-term outcomes. Then in 2012 the Chief Medical Officer at the time, Sir Harry Burns, asked the independent Drug Strategy Delivery Commission (DSDC – an advisory group to government) to undertake a review of opiate replacement treatments (MAT), but also to consider the concerns that had been raised:
“In particular what research evidence existed to allow objective evaluation of the relative benefits of residential rehabilitation interventions and how widespread was their use in Scotland.”
The DSDC identified the lack of recovery research in Scotland and suggested that this should be addressed. Examples of what might be done included examining the effectiveness of abstinence programmes and long-term outcomes, and examining staff attitudes and recovery. I was a member of the DSDC at the time and expressed my concern that a new research strategy for Scotland should be balanced and not mostly focussed on harm reduction and medical interventions. The report, published in 2013, states:
Academics interviewed repeatedly acknowledged the urgent need for better research into the effectiveness of a whole range of treatments and interventions which may influence the development of or recovery from problematic substance use.
Neglect
In terms of research, residential rehab, mutual aid, lived experience recovery organisations (LEROs) and community recovery programmes have been largely neglected. We have generally seen neither a sense of urgency nor ‘better research’. Recovery interventions and processes may be more complex and messy to study, but that doesn’t mean we should not try. Just as we need balance in our treatment system, we need balance in addiction research. As Swisher (2010) says, ‘We [must] return to practice informing research, and research informing practice. They are an inseparable team and neither element is complete on its own.’
If we’ve not got that balance, interventions like residential treatment will continue to be branded as ‘not evidence-based’, even as people are healed and lives transformed within their walls.
Continue the discussion on Twitter @DocDavidM
Irving M, Eramudugolla R, Cherbuin N, Anstey KJ. A Critical Review of Grading Systems: Implications for Public Health Policy. Eval Health Prof. 2017 Jun;40(2):244-262. doi: 10.1177/0163278716645161. Epub 2016 May 10. PMID: 27166012.
Sanger N, Shahid H, Dennis BB, Hudson J, Marsh D, Sanger S, Worster A, Teed R, Rieb L, Tugwell P, Hutton B, Shea B, Beaton D, Corace K, Rice D, Maxwell L, Samaan MC, de Souza RJ, Thabane L, Samaan Z. Identifying patient-important outcomes in medication-assisted treatment for opioid use disorder patients: a systematic review protocol. BMJ Open. 2018 Dec 4;8(12):e025059. doi: 10.1136/bmjopen-2018-025059. PMID: 30518592; PMCID: PMC6286642.
Swisher AK. Practice-based evidence. Cardiopulm Phys Ther J. 2010;21(2):4.