This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on http://www.researchprotocols.org, as well as this copyright and license information must be included.
Motivational interviewing (MI) has been shown to effectively improve self-management for youth living with HIV (YLH) and is currently the only behavioral intervention to demonstrate success across the youth HIV care cascade. Substantial barriers, however, prevent the effective implementation of MI in real-world settings. Thus, there is a critical need to understand how to implement evidence-based practices (EBPs), such as MI, and promote behavior change in youth HIV treatment settings, as risk-taking behaviors peak during adolescence and young adulthood.
This study aims to describe the Adolescent Medicine Trials Network for HIV/AIDS Interventions (ATN) protocol of a tailored MI (TMI) implementation-effectiveness trial (ATN 146 TMI) to scale up an EBP in multidisciplinary adolescent HIV settings while balancing flexibility and fidelity. This protocol is part of the Scale It Up program described in this issue.
This study is a type 3, hybrid implementation-effectiveness trial that tests the effect of TMI on fidelity (MI competency and adherence to program requirements) while integrating findings from two other ATN protocols described in this issue: ATN 153 Exploration, Preparation, Implementation, and Sustainment (EPIS) and ATN 154 Cascade Monitoring. ATN 153 guides the mixed methods investigation of barriers and facilitators of implementation, while ATN 154 provides effectiveness outcomes. The TMI study population consists of providers at 10 adolescent HIV care sites across the United States. These 10 clinics are randomly assigned to 5 blocks to receive the TMI implementation intervention (a workshop plus trigger-based coaching guided by local implementation teams) using a dynamic wait-listed controlled design. After 12 months of implementation, a second randomization compares internal facilitator coaching combined with the encouragement of communities of practice (CoPs) against CoPs alone. Participants receive MI competency assessments quarterly during preimplementation, the 12 months of implementation, and the sustainment period, for a total of 36 months. We hypothesize that MI competency ratings will be higher among providers during the TMI implementation phase than during the standard care phase and that successful implementation will be associated with improved cascade-related outcomes, namely undetectable viral load and a greater number of clinic visits among YLH.
Participant recruitment began in August 2017 and is ongoing. As of mid-May 2018, TMI has 150 active participants.
This protocol describes the underlying theoretical framework, study design, measures, and lessons learned for TMI, a type 3, hybrid implementation-effectiveness trial, which has the potential to scale up MI and improve patient outcomes in adolescent HIV settings.
ClinicalTrials.gov NCT03681912; https://clinicaltrials.gov/ct2/show/NCT03681912 (Archived by WebCite at http://www.webcitation.org/754oT7Khx)
DERR1-10.2196/11200
The National Institutes of Health Office of AIDS Research called for implementation science (IS) to address the behavioral research-practice gap [
Motivational interviewing (MI) is a collaborative, goal-oriented method of communication designed to strengthen intrinsic motivation in an atmosphere of acceptance, compassion, and autonomy support [
A key tension in IS lies between strict fidelity to EBP program requirements and flexibility in adapting to the community context [
An EBP is considered sustained if core elements are maintained with fidelity—typically 1 year postimplementation [
Communities of practice (CoPs) are another strategy to promote the uptake and sustainability of EBPs. A CoP is a group of people who learn together and create common practices based on (1) a shared domain of knowledge, tools, language, and stories that creates a sense of identity and trust to promote learning and member participation; (2) a community of people who create the social fabric for learning, sharing, inquiry, and trust; and (3) a shared practice made up of frameworks, tools, references, language, stories, and documents. CoPs can vary in their level of formality, membership (within a shared discipline or across disciplines), and method of communication (eg, face-to-face and Web-based). They are intended to be nonhierarchical and can change their agenda to suit the needs of members. Although the study of CoPs to promote fidelity in the implementation of EBPs is in its infancy, preliminary findings are promising [
Efficient fidelity measurement can aid sustainability by providing supervisors with easily used tools for ongoing quality assurance [
In the face of competing demands for health care resources, it has become increasingly important to establish not just the efficacy of EBPs but also their relative economic value. A recent editorial noted that despite the prevalence of economic evaluation in health services research, there is a dearth of studies on the cost-effectiveness of implementing EBPs [
The aim of this paper is to describe Adolescent Medicine Trials Network for HIV/AIDS Interventions (ATN) 146 Tailored Motivational Interviewing (TMI), a study of the scale up of an EBP in multidisciplinary adolescent HIV care settings while balancing flexibility and fidelity. The protocol is part of the Scale It Up program described in this issue.
ATN 146 TMI is part of the Scale It Up program described in this issue.
Tailored motivational interviewing (TMI) schedule of assessments.
Eligible participants include all youth HIV care providers (eg, physicians, nurses, mental health clinicians, and paraprofessional staff) who have at least 4 hours of contact with youth for HIV prevention or care. Study coordinators at each clinic work with the research team to introduce the project and recruit participants by scheduling and conducting introductory meetings. After the introductory meetings, the study coordinator at each site sends provider contact information (email and phone number) to the research team, which then contacts potential participants to provide information and schedule quarterly assessments. A participant is considered enrolled once he or she reviews the information sheet and completes a research element (ie, at least one fidelity assessment). A central institutional review board (IRB) is used to establish a master reliance agreement via the Streamlined, Multisite, Accelerated Resources for Trials (SMART) IRB Reliance platform, which is designed to harmonize and streamline the IRB review process for multisite studies while ensuring a high level of protection for research participants across sites. Participants (medical providers) at each site provide informed consent before any study activities. This study has been approved as an expedited protocol at the central IRB site. HIV care and prevention providers may choose to opt out of the study without penalty. A participant meets the criteria for premature discontinuation if he or she withdraws consent before the project's completion or stops working in the clinic during the study.
The implementation intervention strategies follow the phases of the EPIS model [
The exploration phase involves a multilevel assessment of system, organization, provider, and client characteristics using qualitative and quantitative assessments. ATN 153 EPIS [
In the preparation phase, a continuous information feedback loop is created such that information gathered during the assessments is used by the iTeam to adjust the implementation strategies while maintaining fidelity to the EBP and the mandatory implementation intervention components. The iTeam holds monthly conference calls during this period to member-check the barrier and facilitator data and iteratively draft locally customized implementation strategies.
Dynamic Adaptation Process to balance fidelity and flexibility using monthly implementation team meetings. MI: motivational interviewing.
Implementation begins with a 12-hour skills workshop [
The iTeam continues to monitor adaptations at the provider level and in the inner and outer organizational contexts, as well as any fidelity drift, and plans for sustainability.
In the sustainment phase, the iTeam is encouraged to meet without external facilitation to review client and system data and address barriers and facilitators to ongoing EBP fidelity. The iTeam guides the site to develop a CoP and is given a manual of possible group activities to support MI fidelity. Sites randomized to internal facilitation (IF) receive 0.1 full-time equivalent for the facilitator, who must achieve advanced competency by the end of the implementation period and complete a 5-session facilitator training.
The research design randomizes sites in blocks to the MI implementation intervention: the 10 clinics are randomly assigned to 5 blocks of 2 clinics each. Every 2 months, one block of 2 clinics is randomized to begin the implementation intervention while the others remain in the wait-listed condition; this continues until the last block is randomized. To allow sufficient time for scheduling and planning the initial workshop component, each wave of randomization occurs 6 months prior to the initiation of implementation. After 1 year of implementation (1 year postworkshop), regardless of block, sites are rerandomized to IF plus CoP or CoP alone.
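To make the staggered design concrete, the sketch below generates a hypothetical dynamic wait-list schedule: clinics are shuffled into blocks of 2, and each block's implementation start is staggered by 2 months. The function name, seed, and schedule format are illustrative and are not part of the protocol's actual randomization procedure.

```python
import random

def wait_list_schedule(clinics, n_blocks=5, months_between=2, seed=None):
    """Randomly assign clinics to blocks, then stagger implementation
    start dates so one block begins every `months_between` months."""
    rng = random.Random(seed)
    shuffled = clinics[:]
    rng.shuffle(shuffled)
    per_block = len(clinics) // n_blocks
    schedule = {}
    for b in range(n_blocks):
        start_month = b * months_between
        for clinic in shuffled[b * per_block:(b + 1) * per_block]:
            schedule[clinic] = {"block": b + 1, "start_month": start_month}
    return schedule

sites = ["Memphis", "Philadelphia", "Brooklyn", "Miami", "Baltimore",
         "San Diego", "Birmingham", "Tampa", "Los Angeles", "Washington DC"]
schedule = wait_list_schedule(sites, seed=42)
for site, info in sorted(schedule.items(), key=lambda kv: kv[1]["block"]):
    print(site, info)
```

Because the wait-listed blocks continue under standard care until their start month, every clinic eventually receives the intervention while still contributing control-phase observations.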
Fidelity is assessed quarterly for the 36 months of the study (preimplementation, 12 months of implementation, and sustainment). Provider competence ratings (the primary outcome) are collected quarterly during preimplementation; weekly for the first 2 weeks of implementation (to support the coaching process); and quarterly for the remainder of implementation and throughout sustainment. Across clinics, providers assessed before the implementation intervention will form the control or comparison group, and providers assessed after the start of the implementation intervention will form the intervention group. After 1 year of implementation, regardless of block, sites are rerandomized to either IF monitoring and coaching plus the encouragement of CoPs or CoPs alone.
Each site receives the same incentive budget (the equivalent of US $50 per staff member, or approximately US $3000 in total) and determines whether incentives will be provided episodically or after program completion. The iTeam decides whether to deliver incentives directly to individuals for completing program requirements, to use a lottery system, or to provide a group reward when all site providers adhere to program requirements.
Appointy, a Web-based scheduling system, is used to schedule fidelity assessments and coaching sessions. Providers are sent an invitation link through Appointy to create an account and can view the research team's available hours to schedule their role-play and coaching sessions. A confirmation email is sent to providers to confirm each booking, and providers can reschedule or cancel their appointments if needed. Canceled or "no show" appointments are tracked along with completed appointments in REDCap, a Web-based database management program. If providers fail to schedule through Appointy, the research team uses direct contact methods (phone or email) to schedule their role-play or coaching session.
Every 3 months over the 36 months of the study, providers complete a 15-minute, phone-based standardized patient interaction developed in our previous studies [
A trained independent rater codes the interactions with the MI Coach Rating Scale [
Motivational Interviewing Coach Rating Scale.
ATN 154 Cascade Monitoring [
ATN 153 EPIS [
We will confirm the distribution for outcome modeling using graphical or descriptive procedures. The descriptive trajectory for each provider on each outcome will be plotted using “spaghetti plots” [
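As an illustration of this descriptive step, the sketch below draws a spaghetti plot of simulated quarterly competence trajectories, one line per provider, using matplotlib; the data, score scale, and trajectory shapes are invented purely for illustration and do not reflect study results.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted plotting
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
quarters = np.arange(12)   # 36 months of quarterly assessments
n_providers = 15

fig, ax = plt.subplots()
for provider in range(n_providers):
    # Simulated trajectory: flat preimplementation (quarters 0-3),
    # rising during implementation (4-7), roughly level in sustainment (8-11).
    baseline = rng.normal(2.0, 0.3)
    gain = np.clip(quarters - 3, 0, 4) * rng.normal(0.2, 0.05)
    scores = baseline + gain + rng.normal(0, 0.15, size=quarters.size)
    ax.plot(quarters, scores, alpha=0.5, linewidth=1)
ax.set_xlabel("Assessment quarter")
ax.set_ylabel("MI competence rating")
ax.set_title("Per-provider trajectories (spaghetti plot)")
fig.savefig("spaghetti.png")
```

Overlaying all providers on one axis makes between-provider variability and phase-related changes in slope visible before any formal modeling.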
Analyses will be conducted using mixed-effects regression models (eg, Raudenbush and Bryk [
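The general shape of such a model can be sketched with simulated data and statsmodels. This two-level random-intercept version (repeated measures within providers, with a single implementation-phase indicator) is a deliberate simplification of the protocol's three-level piecewise models, and every variable name and effect size below is illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for clinic in range(10):
    clinic_eff = rng.normal(0, 0.2)
    for p in range(15):
        provider_eff = rng.normal(0, 0.4)
        for quarter in range(8):
            impl = int(quarter >= 4)  # 0 = preimplementation, 1 = implementation
            y = (2.0 + 0.6 * impl + clinic_eff + provider_eff
                 + rng.normal(0, 0.3))
            rows.append({"provider": f"c{clinic}p{p}", "impl": impl, "y": y})
df = pd.DataFrame(rows)

# Random-intercept model: competence regressed on an implementation-phase
# indicator, with repeated measures nested within providers.
model = smf.mixedlm("y ~ impl", df, groups=df["provider"]).fit()
print(model.summary())
```

The coefficient on `impl` estimates the within-provider change in competence from preimplementation to implementation; the full analysis would add clinic-level random effects and phase-specific growth terms.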
The cascade outcomes will be analyzed using a similar approach. For the viral load and appointment adherence outcomes, the model will be specified as described for the provider competence outcome, testing for changes in the viral load and appointment adherence slopes from the preimplementation to implementation to sustainment phases. For the outcomes that are cross-sectional within phases (new diagnoses and receipt of counseling and testing [C&T] services), phase-level indicators will test for changes in the rate of new diagnoses and receipt of C&T. Furthermore, planned comparisons will be specified to compare these rates between the implementation and sustainment phases.
Because there are multiple phases over time for each provider and clinic, the primary question is whether provider competence slopes change from phase to phase. The approach used to estimate statistical power is that recommended by Hox [
Estimate power for a single-level regression model as the starting point.
Compute the design effect, which quantifies how much clustering inflates the sampling variance.
Penalize the actual sample size for the nesting effects using the design effect formula (ie, design effect = 1 + [average cluster size − 1] × intraclass correlation), dividing the number of observations by the design effect to obtain the effective sample size.
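Under illustrative intraclass correlation (ICC) assumptions of 0.60 for repeated measures within providers and 0.10 for providers within clinics (values chosen here for illustration; the protocol's actual power assumptions follow Hox), the design effect arithmetic reproduces the 214 and 70 effective independent observations cited in the power estimates:

```python
def effective_n(n_total, cluster_size, icc):
    """Effective sample size after penalizing for clustering:
    design effect = 1 + (average cluster size - 1) * ICC."""
    design_effect = 1 + (cluster_size - 1) * icc
    return n_total / design_effect

# 10 clinics x 15 providers x 4 competence measurements = 600 observations.
n_obs = 10 * 15 * 4

# Step 1: penalize for repeated measures within providers
# (cluster size 4; ICC of 0.60 is an illustrative assumption).
n_provider_level = effective_n(n_obs, cluster_size=4, icc=0.60)

# Step 2: penalize for providers nested within clinics
# (average cluster size = remaining n / 10 clinics; ICC of 0.10 assumed).
n_clinic_level = effective_n(n_provider_level,
                             cluster_size=n_provider_level / 10, icc=0.10)

print(round(n_provider_level), round(n_clinic_level))  # prints 214 70
```

The larger the ICC or cluster size, the larger the design effect and the smaller the effective sample for power purposes.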
For the provider competence outcome, the data structure is identical to that described for aim 1. For the adherence to program requirements outcome, the data are from the sustainment phase only, with repeated measurements of adherence to fidelity assessments and coaching sessions (level 1) nested within providers (level 2) nested within clinics (level 3).
To evaluate the outcomes for aim 2, including provider competence, completion of fidelity assessments, and completion of coaching sessions, a dichotomous indicator will be added at the clinic level to differentiate clinics randomized to CoP plus IF from those randomized to CoP alone. For the provider competence outcome, in the model detailed for aim 1, cross-level interactions will be specified between this condition indicator and the level-2 sustainment phase indicator, along with the level-1 growth term for the sustainment phase. This will test the extent to which changes in provider competence during the sustainment phase differ between clinics receiving CoP plus IF and those receiving CoP alone. Likewise, the model can be simplified to test for a difference in the average level of provider competence, rather than change over time, during this phase. For the adherence to program requirements outcomes, the data are dichotomous, reflecting each provider's completion of planned fidelity assessments and coaching sessions, and, as such, will be analyzed according to a binomial outcome distribution. The clinic-level condition indicator will test for a difference between CoP plus IF and CoP alone in the average rate of adherence to program requirements during the sustainment phase.
For aim 2, the power estimate reflects the ability to detect a difference between groups in the overall level of the primary outcome of provider competence. Power was estimated as detailed for aim 1. With 10 clinics of 15 providers each and 4 measurements of competence, there are 600 nonindependent observations. Adjusting for repeated measures within providers, these observations provide the statistical power of 214 independent observations; further adjusting for nesting within clinics, they provide the statistical power of 70 independent observations. As such, the proposed sample is sufficient for detecting a small-to-medium effect of
Our research questions for this component of the project are as follows: (1) Why were some providers, and not others, able to integrate the competent use of MI into their practice with adolescent patients? (2) Why did some providers sustain MI over time? and (3) Why were some sites good host settings for an initiative designed to promote the use of MI in routine clinical practice? To address these questions, data coding and analysis will proceed in a 3-phase process. First, consistent with Morgan's [
All coding will be conducted using NVivo Version 10. For reliability, a random selection of 30% of the interviews will be independently coded. Coding will be monitored to maintain a kappa coefficient of ≥0.90 [
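As a minimal sketch of this reliability check, Cohen's kappa can be computed directly from two coders' category assignments; the passages, category labels, and codes below are entirely hypothetical.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(codes_a)
    p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_chance = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical codes assigned by two independent coders to 12 passages.
coder_a = ["barrier", "facilitator", "barrier", "context", "barrier",
           "facilitator", "context", "barrier", "facilitator", "context",
           "barrier", "facilitator"]
coder_b = ["barrier", "facilitator", "barrier", "context", "barrier",
           "facilitator", "context", "barrier", "facilitator", "context",
           "barrier", "context"]  # one disagreement on the last passage

kappa = cohens_kappa(coder_a, coder_b)
print(f"kappa = {kappa:.2f}")
if kappa < 0.90:
    print("Below threshold: coders should reconcile and recode.")
```

Here a single disagreement out of 12 passages yields a kappa of about 0.87, which would fall below the study's ≥0.90 threshold and trigger reconciliation.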
We will specify the costs of implementation, for budgeting further scale up, as well as the incremental benefit of TMI and of adding an internal facilitator on provider MI competence and cascade outcomes over time. The cost-effectiveness analysis is designed to measure the costs and consequences of changes in implementation over the 36 months of study follow-up, informing the investigators of the economic consequences of the varying resources used in the EPIS components of the study. Data on resource use and costs will be collected using a modification of the Drug Abuse Treatment Cost Analysis Program method [
TMI was launched in August 2017 and is ongoing. Currently, blocks 1-3 (see
Clinic site block numbers, target enrollment, and consenting participants.
Clinical site | Block number | Target enrollment (N=165) | Consenting participants (N=146) |
Memphis | 1 | 15 | 16 |
Philadelphia | 1 | 15 | 11 |
Brooklyn | 2 | 15 | 16 |
Miami | 2 | 15 | 14 |
Baltimore | 3 | 15 | 11 |
San Diego | 3 | 15 | 12 |
Birmingham | 4 | 15 | 15 |
Tampa | 4 | 15 | 17 |
Los Angeles | 5 | 15 | 14 |
Washington DC | 5 | 15 | 20 |
ATN 146 tests the effect of an MI implementation intervention on fidelity (the primary outcome) and on patient appointment adherence and viral suppression. The proposed design not only has the potential to expand MI to multidisciplinary adolescent HIV settings but may also provide a cost-effective implementation schematic for improving the implementation of other EBPs. It is true that some, if not most, care providers have already received some exposure to MI; however, adequate competence is essential for successful implementation. The study also tests 2 approaches to sustainability. Finally, using mixed methods from ATN 153 (the EPIS protocol paper) [
Lessons learned thus far include the following:
Although the sites have a strong history of research participation, IS studies are new to the network. Sites required significant education prior to study initiation to ensure a complete understanding of the protocol and a clear delineation of site staff responsibilities, while avoiding coercion to participate in what are optional IS studies.
There appears to be marked variability in adherence to program requirements across sites, which we hypothesize will be explained by data collected regarding implementation factors guided by the EPIS model [
Sufficient resources must be allocated to provider recruitment and retention as would be done in a traditional efficacy trial with patients.
iTeams need significant guidance from protocol staff (external facilitators) throughout the phases of implementation.
It is difficult to obtain patient perspectives in an expedited protocol without resources to obtain patient consent. However, we are supporting sites to collect deidentified client satisfaction ratings from all youth who attend clinic during the course of the study.
The real-world clinical context of TMI presents a number of challenges to be addressed by the research design, including the small number of available sites, budget limitations for travel for site training, and the inability to randomize providers within sites because of the risk of contamination. As such, traditional randomized and cluster-randomized designs are not viable options. A dynamic wait-listed controlled design addresses these barriers, while a second randomization provides a targeted test of the implementation and sustainment interventions.
In conclusion, the TMI study addresses the gap between behavioral research and clinical practice with a type 3, hybrid implementation-effectiveness trial. This protocol describes the study's underlying theoretical framework, design, measures, and lessons learned. If successful, TMI will have a considerable impact on provider MI competence and positive outcomes along the youth HIV care cascade. Although this intervention implements MI in multidisciplinary adolescent HIV settings, it can be adapted for the delivery of other EBPs in this setting as well as for MI implementation in other health care contexts.
ATN: Adolescent Medicine Trials Network for HIV/AIDS Interventions
CoPs: communities of practice
EBP: evidence-based practice
EPIS: Exploration, Preparation, Implementation, and Sustainment model
IF: internal facilitation
IRB: institutional review board
IS: implementation science
iTeam: implementation team
MI: motivational interviewing
NIH: National Institutes of Health
TMI: Tailored Motivational Interviewing Implementation Intervention
YLH: youth living with HIV
This work was supported by the National Institutes of Health (NIH) Adolescent Medicine Trials Network for HIV/AIDS Interventions (ATN 146; PI: KM) as part of the FSU/CUNY Scale It Up Program (U19HD089875; MPI: SN and JTP). The content is solely the responsibility of the authors and does not represent the official views of the funding agencies. The authors would like to thank Amy Pennar, Sarah Martinez, Monique Green-Jones, Jessica De Leon, Lindsey McCracken, Liz Kelman, Xiaoming Li, Kit Simpson, Julia Sheffler, Scott Jones, and Sonia Lee.
None declared.