We list here details on the many collaborative projects which are led by the Hubs, and funded by the Network. Details on current funding opportunities can be found here.
Network-funded projects have included workshops, primary research, development of guidance, and training events. The summary below lists projects by year of funding; as projects are completed, supporting documents and outputs will be added to provide more information on this research.
We have also funded a cohort of PhD studentships in trials methodology research, and support eight Working Groups.
Patient and Public Involvement (PPI) remains fundamental to high-quality applied research, and there is increasing guidance on effective methods, as well as new NIHR standards. However, the application of PPI in methodology research brings new challenges, and it is critical that learning from PPI in this field is captured and shared to promote good practice. The transition from the MRC HTMR Network to the MRC-NIHR TMRP provides an excellent opportunity to synthesise learning from a selection of patient-oriented projects, employing a variety of PPI methods, funded through the MRC HTMR Network.
We propose a workshop involving teams from a range of projects funded within the HTMR. These will include the METHODICAL study (R62), PACT (R46), and additional projects led by collaborators on the new TMRP and facilitated through the HTMR (PRIORITY I & II), together with PPI contributors from those teams and from relevant external organisations. The workshop will be strengthened by social media chats and dialogue on specific issues.
We will consider the role and impact of PPI using the new definition in each of the invited projects, and undertake small group breakout work to map the PPI to existing frameworks, highlight challenges and identify potential solutions. We will develop guidance and principles for enhancing effective PPI in methodology projects.
Increasing calls for transparency, reproducibility and data sharing within clinical trials have led to a greater emphasis on the early development and publication of robust Statistical Analysis Plans (SAPs).
Funded by the HTMR Network, a minimum set of items for SAPs was agreed following an extensive and iterative development process involving funders, regulatory authorities, journals and researchers, to address marked variation in practice. The subsequent publication in JAMA was supported by an editorial piece and has been cited 36 times in the last 18 months according to Google Scholar, highlighting the international importance of this research.
The guidance is now widely used within UK registered CTU Standard Operating Procedures and is listed within the EQUATOR website, the Clinical Trials Toolkit and the Global Health Network resources. In addition, multiple international requests have been made for a downloadable checklist to be made available, based on the contents list, in a similar manner to CONSORT etc.
A number of ongoing initiatives demonstrate the relevance of the SAP work, but also its failure to be included or cited. For example, the FDA require ‘appropriate statistical content’ to be made publicly available alongside the results of clinical trials, but do not define what this entails. Data sharing policies from the Wellcome Trust and the NIHR also fail to cite the guidance. In addition, SAPs continue to be published in the supplementary material of journals, or are requested as part of the submission process but not used. Journals do not provide any guidance to their reviewers on what to expect from the SAPs provided as part of peer review. Funding is requested for a research assistant from 1st August to 31st December at 0.2 FTE to maximise the impact of the SAP work by:
• Creating a downloadable version of the SAP checklist from a dedicated website as well as the EQUATOR website. The development of a small dedicated website will place the guidance in line with similar pieces of work (e.g. PRISMA, CONSORT) and will allow tracking of key metrics (visitors and downloads) to measure impact. This resource will be disseminated via social media, newsletters and emails to CTUs, network partners, funders, journals and regulatory agencies.
• Challenging funders of clinical trials (e.g. Wellcome Trust, NIHR) to request SAPs alongside protocols, to ensure pre-specification of statistical analysis plans within the public domain. They will also be encouraged to cite the SAP checklist and use it as an assessment tool within funding awards and data sharing policies.
• Lobbying the FDA to cite the guidance when defining ‘appropriate’ statistical content within the National Institutes of Health Final Rule for Clinical Trials Registration and Results Information Submission.
• Challenging journals (e.g. Trials, Clinical Trials, JAMA) to recommend publication of SAPs either as a stand-alone publication or alongside protocols, using the checklist as a peer review assessment tool.
We plan to utilise existing contacts within key organisations made during the SAP project, as well as new connections made through the newly formed MRC-NIHR TMRP, to deliver this.
A recent systematic review of randomised trials of online interventions demonstrates a growing trend over time. A random sample of 100 of these trials suggests that online interventions are most commonly used for health promotion (42%) or mental health issues (32%); however, the remaining 26% of trials covered an additional 14 clinical areas, including cancer (4%), diabetes (3%) and neurology (3%).
More broadly, digital health interventions (DHI) can present distinct advantages over traditional interventions in certain clinical settings, including increased accessibility, convenience and value for money. However, given that DHI involve not only medicine but also behaviour, computing and engineering, their evaluation presents particular challenges which may not be adequately addressed using conventional biomedical methods, such as randomised trials. Trials are useful for providing evidence for a drug or therapy where the format does not change over time; however, DHI may need to be continuously updated in order to meet changing demands relating to software compatibility, evidence-based content and the “look and feel” of the intervention. As such, a trial may provide initial underlying evidence of the effectiveness of a DHI, but evidence of its continuing usefulness in light of changing digital environments over time will need to be obtained using iterative methods more commonly employed in engineering than medicine.
Health economic assessments of DHI also differ from those required for typical drug interventions, for example because of the need to allow funding for the continued evolution of DHI after the trial, the atypical economies of scale (high fixed costs and low cost per user), and the common uncertainty about what to treat as “sunk” costs. Measuring participant engagement with DHI also requires special consideration; various methods exist but their reliability is not guaranteed. Public Health England has created an evaluation toolkit which presents a workable, practical approach to evaluating such interventions. Implementation of such methods by UKCRC-registered Clinical Trials Units (CTUs) is unlikely to be straightforward, however, given the need to use unfamiliar research methods not typically included in clinical evaluations.
Funds are requested to cover the costs of hosting a workshop (to be held in November/December 2019) for CTU statisticians, information system developers, health economists, chief investigators and a funder representative to discuss the issues when evaluating DHI. Participation will be restricted to those with experience of designing, running and analysing DHI trials.
This workshop will include presentations from proposed keynote speakers (including S Dodd to present findings from the systematic review of online intervention trials) followed by an open discussion to identify the practical demands of carrying out such evaluations.
Under the Trials Methodology Research Partnership (TMRP), the new Trial Conduct Working Group (TCWG) is the only working group whose membership will change significantly, through the merger of two existing, effective working groups: the current Trial Conduct Working Group and the Recruitment Working Group (RWG). To date, these groups have largely worked independently, and their merger will provide an opportunity to encourage and promote collaboration both internally, within the working group, and externally, in the TMRP and beyond.
We believe that an initial face-to-face meeting is key to forging new relationships and collaborations within the new TCWG, and to encouraging buy-in and commitment from all members of the group.
We propose a one-day workshop to discuss and agree the remit and scope of the TCWG and identify core activities relevant for trial conduct (including areas for research and opportunities for development of other outputs such as ‘good practice’ documents) that reflect the priorities of the multidisciplinary group. This event will facilitate a smooth transition to the new WG structure and continue the legacy of the previous RWG and TCWG.
The workshop will have two sessions:
Session 1: Trial conduct working group remit and scope.
Session 2: Organisation of the working group.
An economic evaluation conducted alongside a randomised controlled trial (RCT) aims to establish the value for money that a given intervention represents. The cost-effectiveness results are at risk of bias if researchers do not report all the results, or conduct post hoc analyses without labelling them as such. Pre-specifying the planned analyses by means of a health economics analysis plan (HEAP) can reduce the risk of bias, and allow decision makers to have confidence in the results.
However, there is little guidance available to help researchers write a HEAP. A well-attended Hub-funded workshop on HEAPs (held in October 2015) established that there is a need for guidance on the appropriate content of HEAPs. The original HEAPs project therefore aimed to derive a consensus position on the items that should be included in a HEAP. Delphi consensus methodology was employed: two rounds of questionnaires and a final item selection meeting allowed us to identify 58 essential items and 9 optional items that should be considered when preparing a HEAP. This list is currently being piloted in trials at an appropriate stage, and a template is in preparation.
We propose to run a training workshop to introduce the HEAPs template to an audience of health economists actively engaged in conducting economic evaluations alongside RCTs. The training event will place the HEAP within the framework of trial documentation (including the protocol and statistical analysis plan), and will provide a hands-on introduction to implementing the template. This will increase the impact of the template by providing guidance at an early stage of its implementation.
Introduction and aims:
Recruitment is a challenge for many trials, and one contributory factor is clinician engagement. This situation is often worse in surgical trials, where preferences for specific interventions are strong and research-active senior surgeons are rare. Recently, Trainee Research Collaborative (TRC) networks of trainee surgeons have formed. These TRC networks have designed, conducted and successfully delivered research, including trials.
In the original project we aimed to understand key elements contributing to successful surgical trials run by TRCs to develop strategies to enhance clinician engagement in other clinical areas.
The aim of the proposed impact/dissemination work is to develop short ‘digital’ (video and written) stories targeted at surgical clinicians to enhance clinician engagement in trials and discuss these with clinicians active in trials research.
The original project was a qualitative study to inform a stakeholder workshop. This included 1) non-participant observation of TRC-linked surgical trials and TRC meetings, and 2) semi-structured interviews with key clinicians and trials units staff. Interviewees and meetings were purposively sampled across a range of TRCs, geographical locations and surgical areas. Thematic analysis of transcripts was used to identify key themes in the data relating to the barriers and facilitators to successful trial conduct by TRCs. Findings will inform a meeting with stakeholders to develop strategies to enhance clinicians’ engagement in trials which could subsequently be tested in other specialties and inform post-graduate clinician training.
The proposed impact/dissemination project will use existing data and strategies identified in the stakeholder workshop to develop short digital stories using a method of Integrated Participant Storytelling. Stories will then be discussed with clinicians in a focus group.
Proposed impact/dissemination – Several short digital stories (video and paper-based) which share strategies to enhance clinician engagement with trials.
Clinical trial investigators have a long tradition of designing RCTs to answer the question “Does it work?” or “Is there a beneficial effect of the treatment compared with some other treatment or treatment as usual?”. However, they often have little knowledge or experience to also address questions such as “How does it work?”, “What are the underlying mechanisms or targets of the intervention?”, or “What factors involved in the therapy make it work better?”.
For example, in randomised controlled trials of surgical interventions, a procedure may be delayed, or started but not completed, with an intraoperative conversion to a different procedure. More generally, departures from the randomised allocation are not uncommon in surgery. In mental health, the intervention may require attending a number of therapy sessions, and different patients will attend varying numbers. Session attendance may also be linked to the strength of the relationship with the therapist, but such process measures are often only measured in the intervention group.
There are significant methodological challenges in performing mechanistic evaluations, and accounting for departures from randomised allocations in order to assess efficacy, rather than effectiveness. These include poor uptake of, and limitations to, existing methods. There is a particular need for robust methods for making valid causal inference in explanatory analyses of the mechanisms of treatment-induced change in clinical outcomes.
Applications to the NIHR EME Programme may include mechanistic evaluations, but there is a need to improve this component of proposals and to increase the capacity within the network for such studies to be designed and performed.
One challenge, regardless of the type of intervention, is to estimate without bias the effect of a mediator on an outcome in the presence of likely unmeasured confounding between the mediator and the outcome. Another is extending current methods to binary outcomes.
We plan to organise two training days, in October 2018 and October 2019, provisionally in Lancaster, and will target researchers working in EME, especially those designing new EME studies and preparing applications to the EME funding scheme.
These training days will maintain the momentum that we have built up, through our first workshop grant, in training researchers in methods for EME studies. We propose that the programme for the training days will follow closely that for our training day held in May 2018.
We developed a revised risk of bias tool for parallel-group, cluster- and cross-over randomized trials (RoB 2.0). The tool was accompanied by detailed guidance, examples, templates of the tool, and training videos (available from www.riskofbias.info). We also held multiple international training and dissemination events for RoB 2.0.
The new tool is a major upgrade to the original Cochrane risk of bias tool. It is more comprehensive and covers all threats to internal validity. Users answer several, relatively factual, signalling questions within five bias domains. For each domain, algorithms provide suggested domain-level risk-of-bias judgements based on the answers to the signalling questions.
The revised RoB 2.0 tool is a significant improvement, but it is more complex than its predecessor, which could be a barrier to its widespread uptake. It will soon be adopted by Cochrane, but non-Cochrane authors may still choose to use the old tool if they perceive it to be simpler.
The aim of this impact project is to facilitate implementation of RoB 2.0 by producing an interactive, easy-to-use online version in which structure, guidance and a suite of informative examples are seamlessly integrated.
We will develop a web application for risk-of-bias assessments with interactive help, including linked examples. The implementation will display only the signalling questions that are relevant based on previous answers, and facilitate assessment by displaying relevant information succinctly on-screen. It will apply decision algorithms to suggest domain-level judgements based on users’ answers to signalling questions.
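To illustrate the kind of decision logic involved, the sketch below maps answers to signalling questions onto a suggested domain-level judgement. This is a simplified, hypothetical example only: the answer options mirror those used in RoB 2.0, but the mapping itself is an illustrative stand-in, not the actual published algorithm, which differs by domain.

```python
def domain_judgement(answers):
    """Suggest a domain-level risk-of-bias judgement from signalling-question
    answers. Illustrative rule only, not the real RoB 2.0 algorithm:
    all favourable answers -> low risk; any clearly unfavourable answer ->
    high risk; otherwise -> some concerns."""
    favourable = ("yes", "probably yes")
    if all(a in favourable for a in answers):
        return "low risk"
    if any(a == "no" for a in answers):
        return "high risk"
    return "some concerns"

# Example assessments for one hypothetical domain:
print(domain_judgement(["yes", "probably yes", "yes"]))    # low risk
print(domain_judgement(["yes", "no information", "yes"]))  # some concerns
print(domain_judgement(["no", "yes", "yes"]))              # high risk
```

In the planned web application, rules of this shape would also drive which signalling questions are displayed, since later questions can become irrelevant given earlier answers.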
The web application will make the tool easier to use, which will increase its uptake. In addition, deposited risk-of-bias assessments of large numbers of trials could be used for machine learning and meta-epidemiological research, to improve transparency of clinical research and decrease research waste.
The original HTMR Network award was to develop, deliver and evaluate RCT recruiter training workshops to enhance recruitment and informed consent. We developed and delivered four 1-day workshops to surgeons and research nurses. We showed that these workshops, with a focus on addressing the emotional and intellectual challenges of recruiting patients to surgical RCTs, increased confidence with recruitment, raised awareness of hidden challenges and impacted positively on self-assessed recruitment practice.1
Since then we have delivered three further workshops and had several requests to provide this training within specific RCTs. We have also used discrete elements of the training material to train medical students and surgical trainees in RCT recruitment through collaborations with the Universities of Birmingham (GRANULE) and Oxford (BOSTiC).
Clearly there is a need to continue training recruiters to address recruitment difficulties. A key finding from our original project was the challenge and discomfort that recruiters experience in responding to patients’ treatment preferences and conveying equipoise as part of this. This cut across different trial contexts and health professionals, and is an area that warrants further attention.
We therefore intend to increase the dissemination and impact of our original project by: (1) further refining and advancing the training material by reviewing recent literature and audio-recordings of recruiter-patient discussions across a range of diverse RCTs to identify effective practice in managing the challenging aspects of RCT recruitment discussions; (2) disseminating the refined training material more widely by tailoring it to different audiences (expanding to trial designers in addition to trial recruiters); and (3) developing a sustainable annual short training course to optimise RCT recruitment and informed consent.
The original funded 1-year project was to consolidate and develop the COMET Initiative. Specific objectives included periodic searching of the literature to identify relevant work and maintain an up-to-date database of COS, development of the website as a resource for those interested in outcome measurement, and organisation of international meetings. This also provided a starting point for methodological research in this area, with the goal of improving methodological standards for COS development.
This impact project will build on the original work by creating a network of researchers from low- and middle-income countries (LMICs) with an interest in COS development and application. Raising awareness and sharing ideas and experiences should start a dialogue to determine how COMET can support the development and application of COS in LMICs.
To achieve this, we propose to offer bursaries to researchers from LMICs to attend and participate in COMET VII. We propose building on COMET’s links to Cochrane by inviting Cochrane Centre Directors and leads from China, South Asia, Brazil and Africa. We would like to invite five LMIC researchers from the various groups in these regions.
Specific objectives of this proposal are to raise awareness about the importance of COS development and application, discuss issues of generalisability of COS in relation to LMICs, and develop a strategy for COMET to support the development, dissemination and use of COS in LMICs.
Recruitment and retention challenges are well documented and have been identified as key methodological research priorities by UK CTUs. Whilst this focus has led to an increase in the quantity of research directed at these challenges, evidence for effective interventions is limited and navigating the literature to identify strategies relevant to different types of trials remains difficult. The flagship output from the HTMR recruitment working group was the launch of an online searchable database of recruitment research in 2016, helping trialists to identify interventions and areas for future methodological research. Project delivery engaged 24 researchers from 7 institutions and 3 countries. 1139 users from 18 countries have undertaken 1061 searches during the first nine months. The database is currently being updated with 2015-2016 publications (completion Nov 2017). However, no such resource exists for retention research. Retention may be influenced by recruitment methods but this aspect is often unexplored. We will develop ORRCA2 to organise and map retention research, linking with ORRCA to explore connections and overlap between recruitment and retention research.
Search strategies for major databases will be adapted from the Cochrane Methodology Review of retention interventions. Eligible studies will include retention strategy evaluations, descriptive studies identifying risk factors, qualitative research and relevant case reports. Articles exploring treatment adherence or statistical methods for handling missing data will be excluded.
A matrix of retention research domains and a searchable database will be developed, as per ORRCA. A mapping exercise will explore current methodological research topics and the potential to evaluate recruitment and retention outcomes simultaneously in future SWATs or nested randomised studies. Text mining methodology will be explored to increase the sustainability of ORRCA and ORRCA2.
A significant challenge in the evaluation of all phases of surgical device development is outcome reporting. Numerous outcomes are reported, making data synthesis difficult and risking outcome reporting bias. RCTs have improved methodologically, in part due to the development of COSs. However, these have focused on specific conditions and/or surgical procedures. The COMET database provides details of COSs that have been developed or are under development. At present, there is no COS specifically developed to aid earlier and later phase studies of innovative surgical technologies and devices. Consequently, it is difficult for stakeholders to compare key outcomes. Adverse events may be under-reported (reporting bias) or ill-defined. It is therefore difficult to judge whether to proceed to full evaluation or abandon a new technology.
The proposed workshop is intended, for the first time, to bring together key stakeholders in the development and evaluation of surgical technologies and devices to consider mandated core outcome reporting. It will describe the current problems and pitfalls of outcome reporting in this field, present COSs as a solution for use from early to later phase trials, and consider methods for live outcome reporting in registries and online journals to optimise learning. The workshop will establish whether it is feasible to develop a generic COS for the reporting of all surgical devices, and establish the next steps to achieve this.
Classically, single-arm trial designs have been the standard approach for conducting phase II clinical trials in oncology research. In recent years, however, acknowledgement of the limitations associated with single-arm studies has led to increased calls for the use of randomisation in phase II. Indeed, numerous discussion articles have now put forward arguments for and against the use of single-arm designs in this setting, and several simulation studies have been conducted to try to identify which approach should be preferred. Taken together, these publications indicate, at the very least, that the percentage of phase II trials using randomisation should increase.
Nonetheless, a large number of phase II trials continue to be conducted using single-arm designs, with a contemporary review indicating at least 50% of UK Clinical Trial Units (CTUs) had recently been involved in such a study (Jaki 2013, Clin Trials 10:344-46). Therefore, it is possible that the design of many publicly funded trials remains sub-optimal for a reason that could easily be rectified.
Accordingly, in this project, we will first circulate a questionnaire to each of the CTUs within the UKCRC Registered CTU Network, with the aim of establishing key factors behind their choice to utilise either a single-arm or randomised approach in any phase II clinical trials they have been involved in. Following the completion of this survey, amongst those expressing interest, the CTUs responsible for conducting the largest number of phase II trials will be visited for more in-depth follow-up meetings.
Finally, the results of these discussions will, in combination with the expertise of several researchers, aid the development of a guidance document on the recommended contemporary design of phase II trials. This document should be of great value to the trials community for helping ensure future phase II trials are designed in the most appropriate manner.
Randomised trials typically involve collecting pre-randomisation information about key demographic and clinical characteristics of participants. Randomisation of treatment ensures that participants in the different arms of the trial are comparable, which means that an unadjusted comparison of outcomes between arms provides an unbiased estimate of the treatment effect. However, adjusting for pre-randomisation measurements within a regression model may increase the statistical power. Whether an adjusted or unadjusted analysis should be performed as the primary analysis remains a contentious issue.
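The power gain from covariate adjustment can be illustrated with a small simulation comparing the unadjusted difference in means with an estimate adjusted for a prognostic baseline covariate. This is a sketch only: the sample size, effect size and covariate strength below are arbitrary assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n=200, beta=2.0, effect=1.0):
    """Simulate one two-arm trial with a strongly prognostic baseline covariate x."""
    x = rng.normal(size=n)                           # baseline covariate
    z = rng.integers(0, 2, size=n)                   # randomised allocation (0/1)
    y = effect * z + beta * x + rng.normal(size=n)   # continuous outcome
    # Unadjusted analysis: difference in means between arms
    unadj = y[z == 1].mean() - y[z == 0].mean()
    # Adjusted analysis: least-squares regression of y on treatment and x
    X = np.column_stack([np.ones(n), z, x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return unadj, coef[1]

ests = np.array([one_trial() for _ in range(2000)])
print("unadjusted: mean %.2f, SD %.2f" % (ests[:, 0].mean(), ests[:, 0].std()))
print("adjusted:   mean %.2f, SD %.2f" % (ests[:, 1].mean(), ests[:, 1].std()))
```

Both estimators centre on the true treatment effect, but across repeated simulated trials the adjusted estimate has a markedly smaller standard deviation, reflecting the increase in statistical power from adjusting for a prognostic covariate.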
Current guidelines regarding the optimal way to incorporate covariates into the primary analysis of randomised trials recommend including a limited number of pre-specified covariates, and explicitly recommend against data-adaptive model selection procedures and including interactions between treatment and covariates into the model. However, some theoretical work suggests that data-adaptive approaches may have benefits, and including interactions can help protect against model misspecification. Further, most of the theoretical work underpinning guidelines is based on continuous outcome measures, but many randomised trials have time-to-event outcomes.
The aim of the proposed work is to re-analyse a small number of published trials, using a range of covariate-adjustment analysis strategies, focusing particularly on time-to-event outcomes, in order to explore the benefits and limitations of each approach. Key issues uncovered will be subsequently explored in simulation studies. We aim to provide practical guidance to trialists regarding the optimal way to account for baseline covariates in the primary analysis of a randomised trial.
Aim: To identify the key content and develop a template for health economic analysis plans that will guide analysts in conducting economic evaluations alongside randomised controlled trials (RCTs).
The use of SAPs, drawn up in advance of the analysis phase of a trial, is an accepted means of reducing bias in reporting the results of RCTs. However, while health economic analysis plans (HEAPs) to guide trialists in conducting economic evaluations alongside RCTs are becoming more widespread, they lag behind SAPs in terms of standardisation and acceptance. In October 2015, an HTMR-funded workshop involving fifty (predominantly academic) participants was held to discuss some of the issues associated with HEAPs. Feedback from the workshop suggested that health economists would value guidance and clarity on the appropriate content of a HEAP.
Building on recent work led by the NW Hub to create guidance for SAPs, we propose to develop a template for a HEAP. We plan to use ‘real time’ Delphi methodology, which involves presenting dynamic feedback to participants, to gain a consensus opinion on the relevant content of a HEAP.
A literature review will identify published HEAPs and additional examples will be sought from practising health economists in order to derive a list of items for inclusion in an electronic Delphi survey. A panel of experts will be recruited and, after seeding the survey with responses from randomly selected attendees of the workshop, the survey will be opened to the panel for a period of one month. The project team will convene at the end of the survey to discuss the results with invited participants in a consensus meeting. The final list of items will be developed into a template HEAP, which will be disseminated widely.
Previous work has identified the need for reporting guidelines for trials using adaptive designs. The CONSORT guidance framework has substantially improved the reporting of randomised controlled trials. However, there is no guidance tailored for the reporting of adaptive trials. This proposal is part of a larger project that will develop and disseminate a CONSORT extension for adaptive trials.
This proposal will improve the quality and international impact of the developed guidance through three activities.
The first is a consensus workshop that will bring together around 30 multidisciplinary experts in adaptive trials from around the world. The workshop participants will consist of representatives from academia, industry and regulatory agencies. By including international representatives in person (rather than via telephone), we believe the developed guidance will be of much higher quality and more relevant to a worldwide audience.
The second is a dissemination workshop that will present the guidance to a UK audience. We will invite around 60 individuals that represent a variety of stakeholders including medical journal editors, academic trialists and industry representatives.
The third is to present the guidance at the SCT conference in 2018, which will help disseminate the guidance beyond the UK.
We expect the development of the guidance to reveal important gaps in methodology that need to be addressed, so the dissemination workshop will also involve discussions to identify these. In the longer term, we hope that this may lead to additional useful methodology research.
Based on this work we will be able to provide guidance for researchers on the most suitable approach to use. We will also aim to identify gaps in methodology that future work can address.
Within the last decade, significant developments in the field of mixed methods mean that literature on integration is now widely available. Moreover, in the last year, clinical trials experts have predicted that the use of mixed methods will gather pace over the next few years, with the key challenge lying in integration. At this juncture, there is an opportunity for the HTMR to accelerate this development.
Up to twenty experts in mixed methods and clinical trials will attend a two-day summit on the integration of qualitative and quantitative methods in RCTs. Day one will involve presentations and facilitated discussion leading to an authoritative overview of the current strengths and weaknesses in the integration of mixed methods in clinical trials. On day two, attendees will identify the next steps required to provide the trials community with guidance on integration. Project outputs will comprise (i) an open access position paper on integration in clinical trials and (ii) an application to the MRC Methodology Research Programme in June 2017 for funds to develop guidance. By convening the summit, the HTMR will help equip the trials community with new skills and techniques in the integration of quantitative and qualitative methods that can be applied by researchers in their own practice.
Aims: This project aims to 1) identify the key elements leading to successful trials conducted by surgical TRCs and 2) synthesise these findings to develop strategies to enhance clinician engagement in trials across other clinical specialties, including clinician training.
Methods: A qualitative research study to inform a one-day trialist stakeholder workshop. The research will include: 1) non-participant observation of TRC-linked surgical trials and TRC meetings, and 2) in-depth semi-structured interviews with key clinicians and trials unit staff. Trials and linked personnel will be purposively sampled across three geographical locations to include a variety of surgical specialties, clinicians and TRCs. Meetings and interviews will be audio-recorded and transcribed. Thematic analysis of transcripts will use the constant comparative method.
Outputs: Results will be synthesised and reviewed by co-applicants. These will inform a meeting with key trialists and clinical stakeholders to develop strategies to enhance clinicians’ engagement in trials, which could subsequently be tested in other specialties and inform postgraduate clinician training. Peer-reviewed publications and presentations will also be produced.
We plan a literature review (analyses of NIHR HTA-funded main trials with an internal pilot study, exploring decision-making in relation to progression criteria) and a two-day event. The combined activities will build on our individual expertise and lead to published guidance documents, material made available on the web, and further meetings with clinical trialists and funding bodies to promote wide implementation of the recommendations on when to conduct each type of pilot and/or feasibility study design.
In the 2013 World Health Report, there was an unequivocal statement that unless low- and middle-income countries (LMICs) become the generators, and not merely the recipients, of research data, there will never be any real improvements in public health outcomes in these most underserved regions of the world. This lack of data is the result of too few studies being conducted in low-income settings. Methodology research is therefore required to identify the gaps and issues, in order to optimise design, ease of conduct and quality, with the aim of making research relevant and accessible to healthcare workers in these regions. The MRC HTMR Network has established a portfolio of methodology research of relevance to the UK trials community. It is important, and timely, to establish the relevance of this work to LMIC settings, and also to identify gaps where further research would benefit clinical trials in LMICs. The aim of this project is to identify trials methodology research priorities in LMICs. We predict each region can gain valuable lessons from the other.
The MRC/NIHR EME Programme funds clinical studies that both test whether an intervention works in a defined population of patients and provide an opportunity to understand disease or treatment mechanisms. However, there remain significant methodological challenges in performing mechanistic evaluations and in accounting for departures from randomised allocations in order to assess efficacy (rather than effectiveness). Uptake of the available methods has been limited, though it is increasing.
Our aim is to organise a workshop and training day centred on how to use causal methods to understand treatments' mechanisms of action. The workshop will invite experts in the area to identify important topics and challenges, and to discuss ways in which these methods might be used more widely in clinical research studies. A representative from the EME board will give an overview of the scheme and of how mechanistic evaluation is considered when assessing applications. The results of this workshop will inform a training day, approximately six months later, for researchers interested in incorporating mechanistic methods into their clinical studies.
We will hold a one-day workshop in Lancaster in 2016. We will invite key stakeholders from the area, including representatives from the EME board, and discuss issues and suitable methods for different types of intervention: pharmaceutical, psychotherapy, behaviour change, and surgery. The outcomes of the workshop will be used to develop a training day for researchers in clinical studies.
Six months after the one-day workshop we will hold a one-day training workshop, also provisionally in Lancaster. This workshop will be aimed at clinicians and methodologists who are new to the area, are in the process of designing mechanistic studies, and intend to apply for EME funding. We plan to advertise for applicants through organisations such as the NIHR Clinical Research Network, the RDS and EME.
Cost-effectiveness models are used in health economic decision making to compare the costs and effects of competing strategies for the management of disease. These decision recommendations are uncertain due to limitations in the available evidence. Value of information calculations measure the expected improvement in our decision recommendations, on the monetary scale, if we reduce (EVSI) or eliminate (EVPPI) uncertainty by gathering further evidence. EVPPI and EVSI can therefore be used to guide research funding decisions and inform trial design. However, as EVPPI and EVSI involve the expectation of a maximum of a conditional expectation, 2-level nested Monte Carlo simulation and sometimes additional Markov chain Monte Carlo simulation are necessary. This is very computationally intensive and often impractical.
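The nested structure is what drives the cost: each outer draw of the parameter of interest requires a full inner simulation. A minimal sketch of the 2-level nested Monte Carlo EVPPI estimator, using a hypothetical two-strategy model (the distributions, net-benefit formula and all numbers are illustrative, not from any real evaluation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-strategy model: net benefit depends on an effectiveness
# parameter phi (the parameter of interest) and a cost parameter psi.
def net_benefit(phi, psi):
    nb_new = 20000 * phi - psi              # new treatment (illustrative)
    nb_std = np.full_like(nb_new, 10000.0)  # standard care, fixed for simplicity
    return np.stack([nb_new, nb_std])       # shape: (strategy, simulation)

N_OUTER, N_INNER = 1000, 1000

# First term of EVPPI: max over strategies of the overall expected net benefit
phi = rng.normal(0.6, 0.2, N_OUTER * N_INNER)
psi = rng.normal(5000, 1000, N_OUTER * N_INNER)
max_of_mean = net_benefit(phi, psi).mean(axis=1).max()

# Second term: for each outer draw of phi, an inner expectation over psi --
# N_OUTER * N_INNER model evaluations in total, which is the bottleneck
inner_max = np.empty(N_OUTER)
for i in range(N_OUTER):
    phi_i = np.full(N_INNER, rng.normal(0.6, 0.2))
    psi_i = rng.normal(5000, 1000, N_INNER)
    inner_max[i] = net_benefit(phi_i, psi_i).mean(axis=1).max()

evppi = inner_max.mean() - max_of_mean
print(f"EVPPI for phi: {evppi:.0f}")
```

Even this toy model needs a million model evaluations; with an expensive cost-effectiveness model, or MCMC inside each inner loop, the burden quickly becomes impractical.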
This project aims to assess the potential of efficient sampling techniques to reduce the computational burden of EVPPI and EVSI calculations. Simulations in which the decision does not change contribute nothing to EVPPI and EVSI. One approach is therefore to use importance sampling and stratified sampling schemes to sample more frequently in the region of parameter space where decisions change, with appropriate re-weighting.
We will develop importance sampling methods for use in the computation of EVPPI and EVSI, and explore their performance on a range of examples. Another approach is to borrow methods from the pricing of financial derivatives, such as simple call and put options, which also rely on the estimation of the maximum of several processes. We will explore whether numerical techniques developed in this area of mathematical finance, in particular for non-Normal underlying stocks, can be applied to the estimation of EVPPI and EVSI. We will meet with technical experts in simulation methodology and the pricing of financial derivatives to explore how these techniques can be applied to EVPPI and EVSI.
We will then apply the methodology to some illustrative examples, and present the work in a focused meeting.
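The re-weighting idea can be illustrated on a toy problem: an EVPPI-style quantity of the form E[max(INB, 0)] receives contributions only where the incremental net benefit (INB) is positive, i.e. where the decision changes, so a proposal distribution centred on the decision boundary, with density-ratio weights, can cut the Monte Carlo standard error. All numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Hypothetical incremental net benefit (INB) of a new strategy vs standard
# care; only draws with INB > 0 (the decision changes) contribute to
# the quantity E[max(INB, 0)].
MU, SD = -3000.0, 4000.0
N = 10000

# Plain Monte Carlo: most draws land where INB <= 0 and contribute nothing
x = rng.normal(MU, SD, N)
plain = np.maximum(x, 0.0)

# Importance sampling: propose from a normal centred at the decision
# boundary (0), then re-weight each draw by the density ratio f/g
y = rng.normal(0.0, SD, N)
weights = normal_pdf(y, MU, SD) / normal_pdf(y, 0.0, SD)
weighted = np.maximum(y, 0.0) * weights

print(f"plain MC  : {plain.mean():7.1f}  (se {plain.std(ddof=1) / np.sqrt(N):.1f})")
print(f"importance: {weighted.mean():7.1f}  (se {weighted.std(ddof=1) / np.sqrt(N):.1f})")
```

Both estimators are unbiased for the same quantity; the importance sampling version achieves a noticeably smaller standard error at the same simulation budget, which is the effect the project seeks to exploit for EVPPI and EVSI.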
Phase I dose-escalation studies are essential to determine the safe dosing range of a novel compound. Despite the poor operating characteristics of algorithmic methods such as the 3+3 design, superior model-based strategies are still rarely used. One of the main reasons why these Bayesian adaptive designs are not implemented is the lack of easy-to-use and accessible software. This project seeks to develop software for model-based dose-escalation studies. A standalone, fully documented system will be developed that allows investigators to plan, explore and conduct such studies without the need for technical expertise in the underlying methods. To ensure that the software is fit for purpose, it will be rigorously tested by different user groups (clinical experts, principal investigators, trial managers, statisticians) and training workshops will be held to facilitate uptake.
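To illustrate the kind of model-based design such software would support, here is a minimal, hypothetical sketch of a continual reassessment method (CRM) with a one-parameter power model and a grid-based posterior. The skeleton, target rate and prior are illustrative, and real implementations add safety constraints (e.g. no dose-skipping) that are omitted here:

```python
import numpy as np

# Hypothetical design parameters -- not from any real trial
SKELETON = np.array([0.05, 0.10, 0.20, 0.35, 0.50])  # prior toxicity guesses
TARGET = 0.25                                        # target toxicity rate

def recommend(doses_given, tox_observed):
    """Next dose under a one-parameter power-model CRM,
    p(tox | dose d) = SKELETON[d] ** exp(a), with prior a ~ N(0, 1.34),
    using grid integration over a for the posterior."""
    a = np.linspace(-4, 4, 401)
    prior = np.exp(-0.5 * a ** 2 / 1.34)
    p = SKELETON[:, None] ** np.exp(a)[None, :]   # (dose, a) toxicity grid
    lik = np.ones_like(a)
    for d, y in zip(doses_given, tox_observed):
        lik *= p[d] if y else (1 - p[d])
    post = prior * lik
    post /= post.sum()
    post_tox = (p * post[None, :]).sum(axis=1)    # posterior mean tox per dose
    return int(np.argmin(np.abs(post_tox - TARGET)))

# Example: three patients treated at dose index 2, one toxicity observed
print(recommend([2, 2, 2], [0, 0, 1]))
```

After each patient (or cohort), the posterior is updated and the dose whose posterior mean toxicity is closest to the target is recommended, which is what gives model-based designs their superior operating characteristics over the 3+3 algorithm.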
Stratified medicine has the potential to significantly improve the benefit-risk ratio in the treatment of many diseases. A randomised controlled trial (RCT) design is perceived as the gold standard for demonstrating the clinical utility of a biomarker-guided approach to treatment. To this end, a large number of biomarker-guided trial designs have been proposed in the literature, but navigating through this literature can be challenging and there is little guidance on which design is optimal in a given setting.

A systematic review of biomarker-guided trial designs has recently been undertaken by Antoniou, Jorgensen and Kolamunnage-Dona (co-applicants on this proposal), which identified over 200 relevant papers. The review identified significant variability between authors in the terminology used and in the descriptions of the different designs, which has resulted in considerable ambiguity and confusion regarding biomarker-guided trial designs. To address this problem, we propose to develop a website using interactive visualisation to provide a user-friendly and easily accessible resource for informing those embarking on a biomarker-guided trial of the optimal design. The website will initially mirror the findings of the systematic review, but in a much more accessible format, and will subsequently be extended to provide a truly interactive tool allowing searches for the optimal design in a given setting as well as sample size estimates. The idea for the project has stemmed from feedback from attendees of conferences and meetings where the systematic review work was presented, which suggested a real need for information on the different trial designs to be available in an easily accessible and user-friendly format.
Slow recruitment and poor retention are common challenges to the successful delivery of clinical trials. Patient and public involvement (PPI) in research has the potential to enhance recruitment and retention in clinical trials, but there have been few attempts to investigate this experimentally. The aim of this project is to develop a PPI intervention aimed at improving recruitment and/or retention in surgical trials.
The project will consist of 4 stages:
The use of statistical analysis plans (SAPs), drawn up in advance of the analysis phase of a trial, is an accepted means of reducing bias in reporting the results of randomised controlled trials (RCTs). However, while health economic analysis plans (HEAPs) to guide trialists in conducting economic evaluations alongside RCTs are becoming more widespread, they lag behind SAPs in terms of standardisation and acceptance, and there is a fundamental question over whether they add value to the trial process at all. In the collective experience of members of the Health Economic Resource Use and Costs Working Group, there is currently substantial variation in the structure, format and content of HEAPs, and no real agreement on either their purpose or appropriate methods of oversight. Clarity on the need for, and appropriate usage of, HEAPs would be advantageous.
We therefore propose to hold a workshop on HEAPs for about 50 attendees in Bristol, with three key aims. First, we plan to review the limited number of currently available guidelines addressing aspects of HEAPs. We also intend to collate information about the current usage of HEAPs in terms of their structure, content and purpose. Finally, we aim to provide a forum in which health economists and other interested parties engaged in applied economic evaluations can open a dialogue on appropriate methods of standardisation, with a view to creating guidance in future work.

On 20th October 2015, a workshop was held in Bristol to discuss issues with 'Health Economics Analysis Plans (HEAPs)'. This cross-Hub collaboration between Bristol, Bangor and Oxford universities was well-attended; 50 participants heard presentations from speakers covering a range of topics before breaking into smaller groups to discuss the appropriate content, oversight and potential changes to HEAPs.
Resource-use measurement in economic evaluations conducted alongside randomised controlled trials (RCTs) is commonly carried out by asking patients to provide information via a questionnaire or diary. However, there is no well validated instrument available that is flexible enough to work across different health conditions and care settings. Health economists tend to use unvalidated bespoke instruments for each trial, which results in unnecessary repetition of work, and hinders comparisons between trials.
To identify a core set of economically important resource-use items that are suitable for future inclusion in a modular patient-reported resource-use measure.
Instruments currently lodged in DIRUM (the Database of Instruments for Resource-Use Measurement, www.dirum.org), and additional instruments sourced from health economists, will be examined to determine the items of resource use that are commonly collected in RCTs. A comprehensive list of care items encountered will be compiled; items will then be categorised into ‘domains’ describing different types of healthcare (e.g. inpatient care, community care or medication). The item list will be systematically reduced to 10-20 key items per domain to form the basis for the Delphi survey.
A Delphi panel comprising patients, health economists and trialists from varying backgrounds will be engaged. In round 1 of the Delphi process, professional participants will be asked to rate the items according to their economic importance in a trial context, while patients will be asked to assign ratings based on the item’s relevance. Participants will also be asked to suggest additional items of healthcare resource use for consideration.
Items deemed insufficiently important according to predefined criteria encompassing both professional and patient responses will be dropped. A second Delphi round will be undertaken in which feedback from the first round will be presented to participants. A third Delphi round may be conducted if significant differences of opinion remain.
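The predefined retention criteria are not specified here, so as a purely illustrative sketch of how a round-1 filter combining professional and patient responses might operate (the 7-9 "important" band and the 70% threshold are hypothetical, not the study's actual criteria):

```python
# Hypothetical round-1 Delphi filter: thresholds below are illustrative only
def keep_item(prof_ratings, patient_ratings, threshold=0.7):
    """Retain an item if enough professionals rate it economically
    important OR enough patients rate it relevant (1-9 scale)."""
    def high(ratings):
        return sum(7 <= r <= 9 for r in ratings) / len(ratings)
    return high(prof_ratings) >= threshold or high(patient_ratings) >= threshold

print(keep_item([8, 9, 7, 6], [5, 8, 9, 7]))  # True: 3/4 professionals rate 7-9
```

Whatever the exact criteria, the essential design choice is that an item survives if either stakeholder group values it, so patient relevance alone can keep an item in the set.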
People doing clinical trials and other types of health research often struggle when trying to choose the outcomes to measure which would be of most use to the patients, practitioners and policy makers who will use their research to help them make decisions. These difficulties for trialists are passed on to those producing and reading systematic reviews of trials, many of whom have experienced the frustration of finding that the original researchers either did not measure certain outcomes or measured them in such different ways that it is difficult or impossible to compare, contrast or combine the studies. Much could be gained if there was an agreed minimum set of core outcomes for each medical condition, which were measured and reported in all clinical trials in that area. The COMET Initiative, launched in 2010, has brought together an international network of individuals and organisations interested in the development, application and promotion of such core outcome sets (COS).
A review of the literature has identified core outcome sets in nearly 200 medical conditions, and we are also aware of over 30 ongoing projects. COMET has put this information together in a publicly available, searchable database. To date there has been no formal quality assessment of these studies, and there is a pressing need to do so using internationally recognised criteria, both for COS developers and for trialists using COS.
This proposal requests funds to host a consensus meeting which is part of a larger study to develop the quality assessment instrument for studies developing COS. The outputs could impact immediately on the increasing number of ongoing and planned COS studies.
The HTMR Network has provided funding to support a workshop focused on new Chief Investigators. The workshop is likely to be run again in 2016; dates are to be confirmed. To add your name to our mailing list, please email us. The workshop is aimed at recently funded Chief Investigators on RCTs.