Sixty-eight studies and (nearly) nothin’ on: the poor state of the recruitment intervention methodology literature and what to do about it
Primary author: Shaun Treweek
Authors: Shaun Treweek (University of Aberdeen), Marie Pitkethly (University of Dundee), Jonathan Cook (University of Oxford), Cynthia Fraser (University of Aberdeen), Elizabeth Mitchell (Hull York Medical School), Frank Sullivan (University of St Andrews), Catherine Jackson (University of Central Lancashire), Tyna K Taskila (University of Greenwich), Heidi Gardner (University of Aberdeen)
SCT Annual Meeting 2018
Trial recruitment keeps trialists awake at night. It is not hard to find examples of trials that were delayed, underpowered or abandoned because of poor recruitment. Identifying strategies that improve trial recruitment would benefit patients, trialists and health research. This is a substantial update to our Cochrane systematic review that aims to quantify the effects of strategies to improve recruitment of participants to randomised trials.
Randomised evaluations of recruitment interventions embedded within a host randomised trial were eligible. An extensive search strategy was used, covering six electronic databases including MEDLINE. Title/abstract screening and full-text assessment, along with data extraction, risk of bias and GRADE assessments, were conducted independently by two reviewers.
The risk difference and its 95% confidence interval (CI) were calculated to describe the effect in individual trials and combined where possible. GRADE was used to judge the certainty we had in the evidence coming from each comparison.
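The abstract does not spell out the calculation, but a risk difference with a Wald-type 95% CI for a two-arm recruitment comparison can be sketched as follows (illustrative numbers only, not data from the review):

```python
import math

def risk_difference_ci(events_int, n_int, events_ctrl, n_ctrl, z=1.96):
    """Risk difference between two arms with a Wald-type 95% CI."""
    p1 = events_int / n_int    # proportion recruited, intervention arm
    p0 = events_ctrl / n_ctrl  # proportion recruited, control arm
    rd = p1 - p0
    # Standard error of the difference between two independent proportions
    se = math.sqrt(p1 * (1 - p1) / n_int + p0 * (1 - p0) / n_ctrl)
    return rd, rd - z * se, rd + z * se

# Illustrative example: 120/200 recruited with the intervention vs 100/200 without
rd, lo, hi = risk_difference_ci(120, 200, 100, 200)
print(f"RD = {rd:.3f}, 95% CI ({lo:.3f} to {hi:.3f})")
```

Cochrane reviews typically pool such risk differences across studies in a meta-analysis; this sketch covers only the single-trial calculation.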
We screened 24,432 abstracts and included 68 studies involving over 74,000 people, published between 1986 and 2017. The included studies provided 72 comparisons, but only three had GRADE High certainty in the conclusion based on the available evidence:
Using an open trial design rather than a blinded, placebo-controlled design. The absolute improvement in recruitment was 10% (95% CI 7% to 13%).
Telephone reminders to people who do not respond to a postal invitation. The absolute improvement was 6% (95% CI 3% to 9%).
Using a bespoke, user-tested participant information leaflet. This intervention invested substantial time in graphic design and in working with people similar to those to be recruited, to decide what the participant information leaflet should contain and how it should look. It made little or no difference to recruitment: the absolute improvement was 1% (95% CI -1% to 3%).
Intervention 3 above was evaluated as part of a coordinated and collaborative evaluation (called START), which answered its question within two years. Most other interventions have carried substantial uncertainty around their effects for over a decade. Eight further comparisons had GRADE Moderate certainty, generally because each rested on a solitary evaluation. All of these comparisons would benefit from replication.
Confidence in all other comparisons is low. A combination of design flaws, solitary evaluations, poor precision and indirect outcomes resulting from the use of hypothetical host trials means we can conclude little from the bulk of trial recruitment evaluations.
The evidence available to support evidence-informed trial recruitment strategies is remarkably thin; the literature is characterised by poorly done, single evaluations of new recruitment interventions. More focus and replication are needed. The review provides this focus by highlighting priority interventions for evaluation based on GRADE assessments. For the three highest-priority interventions, the review also provides protocols describing how these interventions should be evaluated.