Comparing organization-focused and state-focused financing strategies on provider-level reach of a youth substance use treatment model: a mixed-method study | Implementation Science


We describe this study using the Standards for Reporting Implementation Studies [24] (see Additional file 1). Relevant details are summarized here, but also see our published research protocol for more details [25]. All procedures were approved by the RAND Corporation IRB (Protocol #2020-N0887), including the use of data previously collected from organization-focused grantees [18, 19, 26] and new data collection from state-focused grantees.

Study overview

The current study used a longitudinal, mixed-method [27] design to compare A-CRA reach between state-focused and organization-focused grant recipients. Leveraging SAMHSA grant initiatives that all targeted the same EBP (i.e., A-CRA) and provided the same implementation support (e.g., training and consultation) allowed for a natural experiment to examine the impact of the two financing strategies: organization-focused versus state-focused grants. Our research questions, detailed in Table 1, were as follows: (1) How did A-CRA reach rates differ by grant type? and (2) What factors influenced A-CRA reach outcomes?

Table 1 Mixed-method design for A-CRA provider-level reach outcomes study by research question

Given that the state-focused financing strategy was designed by SAMHSA to improve implementation outcomes beyond the organization-focused initiative, for research question 1, we tested SAMHSA’s hypothesis that reach would be higher among organizations participating in the state-focused grants relative to the organization-focused grants. Our design used a natural experiment, without random assignment of treatment organizations to the different grant types, and we prioritized producing externally valid evidence on the impact of each grant strategy in large-scale initiatives. This design does not fully isolate the effects of grant type on outcomes; however, we also tested the robustness of our findings against major internal validity threats with a series of sensitivity analyses.

We hypothesized for research question 2 that A-CRA reach outcomes would be moderated by two key factors shown to influence implementation: [28, 29] external financial support for services [13, 30] and internal leadership support [31, 32]. We used qualitative interview data to identify the barriers and facilitators of A-CRA implementation, providing a complementary qualitative data source on quantitatively measured reach rates in order to extend our understanding of how reach outcomes functioned in practice (a mixed-method complementarity-elaboration function [27]).

Project context

The project design and hypotheses are grounded in the Exploration, Preparation, Implementation, Sustainment (EPIS) framework [28, 29], which describes how public service organizations implement EBPs by navigating a multi-phase process influenced by contextual factors across multilevel domains (innovation, inner context, outer context, bridging factors). In the exploration phase, the SAMHSA Center for Substance Abuse Treatment [33] selected A-CRA for implementation; other EBPs were also included in some initiatives, but we focused on A-CRA to minimize variation in outcomes due to innovation (i.e., EBP) characteristics. Briefly, A-CRA [2] is a psychosocial treatment model that uses cognitive-behavioral and family therapy techniques (e.g., functional analysis, communication skills) to replace factors supporting substance use with alternative activities and behaviors; it can be delivered in individual or group formats, with caregivers attending some sessions alone or with the youth. Chestnut Health Systems (CHS), the organization that developed A-CRA and conducts A-CRA training and research, created a certification protocol that supports high-fidelity delivery across providers through the use of a treatment manual, initial training (with behavioral role-plays), observed practice in delivering A-CRA, and consultation support and feedback until certification is achieved [34].

Through its grant-funded initiatives, SAMHSA provided funding, oversight, and leadership for A-CRA implementation to (i) four cohorts of organization-focused grantees across 26 states, awarded between 2006 and 2010, followed by (ii) four cohorts of state-focused grantees across 22 states, awarded between 2012 and 2017. During the preparation and implementation phases, organizations or states applied for, received, and executed SAMHSA grants. All grants targeted inner context (intra-organizational) factors at treatment organizations through training and certification for A-CRA clinicians and supervisors; leadership from treatment organizations selected which providers were available and interested to participate in the training. State-focused grants further targeted outer context (extra-organizational) factors at the state level and bridging factors that connected state substance use agencies and treatment organizations; with these changes, SAMHSA aimed to maximize the number of providers offering A-CRA in their states and, thus, enhance reach outcomes relative to organization-focused grants. Reach outcomes are the culmination of the implementation phase, i.e., grant funding period, after which the sustainment phase begins. Next, we describe the two grant types, and Table 2 details their characteristics.

Table 2 Characteristics of organization-focused and state-focused grant strategies

Organization-focused grants

SAMHSA awarded ~ $900,000 USD (across a 3-year period) to each treatment organization. These grants supported A-CRA implementation by paying for A-CRA delivery, A-CRA supervision, and other related activities. Our prior work showed that grantees commonly experienced initial success in implementing A-CRA with fidelity and reducing youth substance use, but difficulty sustaining the model post-funding [17,18,19].

State-focused grants

SAMHSA awarded ~ $3–4 million USD each (across a 3- to 4-year period; sometimes extended to 6 years) to state agencies that administered publicly funded SUD services. One-third of the grant funds paid for states to develop EBP-focused infrastructure, such as funding and training. Some grants required that state agencies propose “demonstration site” organizations, which established partnerships with the state agency and implemented A-CRA first before the state began broad dissemination to other organizations.

Participants

Treatment organizations

We used CHS certification records, conducted semi-structured interviews, and collected survey data from clinicians and supervisors at treatment organizations that implemented A-CRA (inner context). For interviews and surveys, we sampled organizations within the 5-year period following the completion of grant funding. Individuals who were currently or recently employed as a clinician or clinical supervisor responsible for youth SUD treatment were eligible to participate; those knowledgeable about A-CRA implementation were preferred.

For organization-focused grantees, all 82 eligible organizations (in 27 states) were invited for interviews. Organizations that participated in state-focused grants were more numerous, so we created a comparable sample of 82 organizations (in 18 states) by randomly selecting up to five organizations per state, including up to three “demonstration sites.” To ensure a comparable sample size for state-focused grants, we re-selected 17 organizations across nine states (because five had closed, 11 no longer provided youth treatment, and one had merged with a site that was already part of our sample); CHS records showed that selected and not-selected organizations did not differ on A-CRA certification rates (χ2(1) = 1.45, p = .23) or turnover rates (χ2(1) = 0.82, p = .36), even after replacement.
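The selected-versus-not-selected comparisons above are 2x2 chi-square tests (1 df). As a minimal, self-contained sketch, the Pearson statistic for such a table can be computed directly; the counts below are illustrative only, not the study's data:

```python
# Hypothetical sketch of a 2x2 chi-square comparison of certification counts
# between selected and not-selected organizations (illustrative counts).

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (1 df, no continuity correction) for the table
    [[a, b], [c, d]]: rows = selected / not selected,
    columns = certified / not certified."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

stat = chi2_2x2(40, 30, 35, 40)  # made-up counts, not the study's data
print(round(stat, 2))
```

A statistic this small (relative to the chi-square critical value of 3.84 at p = .05, 1 df) would indicate no meaningful difference between the groups, which is the pattern the study reports.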

The interviewed sample included 154 organizations (94% of the 164 selected), with 249 provider participants (39% clinicians, 33% supervisor-clinicians, 28% supervisors). For CHS records, data from all trained providers (566 organization-focused, 417 state-focused) at the selected organizations were included. For 11 organizations that had providers trained under both grant types, we used the first (organization-focused) observation in our primary analyses, but we explored other ways of handling those organizations in sensitivity analyses (described later).

State substance use agencies

We also interviewed administrators from the state agencies (outer context) that received SAMHSA state-focused grants. Participants were currently or recently employed in a relevant leadership position and were eligible within the 5-year period after funding ended. Thirty-two administrators participated, representing all 18 eligible states (100%); we conducted group interviews when multiple individuals from the same state agency participated.

Data sources and collection procedures

Table 3 summarizes the data collection activities and specific measures used to evaluate reach outcomes. CHS maintains a database of SAMHSA grantees, from which we extracted administrative records used to contact eligible clinicians, supervisors, and state administrators for interviews and surveys. Recruitment was done via email, phone, and/or mail, and data were collected via telephone or Microsoft Teams for interviews and Confirmit for surveys. Informed consent was obtained for each activity, no personally identifiable information was collected, and we de-identified participants’ data upon collection. We offered $25 USD for each interview and survey, although some participants (mostly state agency administrators) considered participation part of their job and declined compensation.

Table 3 Measures used to compare state-focused versus organization-focused grants on their A-CRA provider-level reach outcomes

A-CRA certification records

CHS created a database of certification outcomes for all sampled treatment organizations, which specified any certifications achieved by each clinician or supervisor trained in A-CRA during the grant period as well as relevant descriptive information.

Semi-structured interviews

Interview protocols used a combination of open-ended questions and standardized probes [37]. Protocols were tailored to each participant’s role and, for treatment organizations, whether the organization was still delivering A-CRA (see Additional file 2 for protocols). The interviews explored state- and/or organization-level approaches to implementing A-CRA, how A-CRA implementation was supported during the funding period, sustainability planning, and (for state administrators only) state infrastructure developed through their grant, as well as sustainment-focused questions reserved for future analysis. Interviews lasted approximately 45–60 min and were audio-recorded and transcribed (with transcripts de-identified).

Provider surveys

Following each provider interview, respondents were sent a web-based survey that collected standardized measures (including potential moderating factors) and other descriptive information (see Additional file 3 for all survey items); the surveys again included sustainment-related questions not analyzed here. The surveys took approximately 25 min to complete. Of the treatment organizations interviewed, 90% had at least one completed survey.

Measures

Grant characteristics

Participants’ involvement in SAMHSA grants was determined from CHS records and later verified during the consent process and initial interview questions. Characteristics recorded included grant type, start and end years, and length.

Reach outcomes

We used CHS administrative records to define reach [23] as the proportion of providers in a treatment organization who were certified in A-CRA by the end of the grant period, out of all individuals who were eligible for certification (i.e., who completed the initial training). A-CRA clinician certification is based on proficient demonstration of A-CRA procedures (i.e., clinical techniques or activities). “First-level” certification indicates proficiency in nine core procedures and is required for independent delivery of A-CRA; optional “full” certification indicates proficiency in an additional 10 procedures (19 total) [2]. Additional optional certifications indicate proficiency with transition-age youth (TAY; ages 18–25), which adds two procedures to the first-level and full certifications, and in A-CRA supervision, which is based on demonstrated ability to supervise within the A-CRA model and to rate A-CRA procedures and session fidelity. We constructed six reach variables representing the proportion of participants who achieved any certification (the primary outcome of interest) as well as first-level, full, TAY (first-level and full, counted separately), or supervisor certifications.
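As a concrete illustration, the reach definition above reduces to a proportion over certification records. The sketch below uses a hypothetical record structure (the field names and example data are assumptions, not CHS's actual schema):

```python
# Minimal sketch of organization-level reach: certified providers out of all
# certification-eligible (i.e., trained) providers. Field names are invented.

def reach_rate(providers, cert_type="any"):
    """Proportion of trained providers holding the given certification;
    returns None if no provider completed the initial training."""
    eligible = [p for p in providers if p["completed_training"]]
    if not eligible:
        return None
    certified = [p for p in eligible
                 if cert_type in p["certifications"]
                 or (cert_type == "any" and p["certifications"])]
    return len(certified) / len(eligible)

# Illustrative organization with three trained providers.
org = [
    {"completed_training": True, "certifications": {"first-level"}},
    {"completed_training": True, "certifications": {"first-level", "full"}},
    {"completed_training": True, "certifications": set()},
]
print(reach_rate(org))          # any certification: 2 of 3 providers
print(reach_rate(org, "full"))  # full certification: 1 of 3 providers
```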

We also examined descriptive variables, including time to certification, whether the participant left the organization (i.e., turnover status), and fidelity scores for rated sessions that received passing scores. It was important to verify that each certified individual achieved fidelity (i.e., adherence to the A-CRA model with adequate competence or skill), since fidelity is associated with client SUD outcomes [34, 38,39,40,41,42,43]. An average rating of ≥ 3 out of 5, based on a review of recorded sessions, is required across the relevant procedures for a given A-CRA certification [34, 38,39,40,41,42,43].
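The certification fidelity rule (an average rating of at least 3 out of 5 across the relevant procedures) can be expressed as a simple check; the ratings below are illustrative, not actual CHS data:

```python
# Sketch of the A-CRA certification fidelity rule: the mean procedure rating
# (each scored 1-5 from recorded sessions) must be >= 3 to pass.

PASS_THRESHOLD = 3.0

def passes_fidelity(procedure_ratings):
    """True if the average rating across the relevant procedures is >= 3."""
    return sum(procedure_ratings) / len(procedure_ratings) >= PASS_THRESHOLD

# Illustrative ratings for the nine core (first-level) procedures.
print(passes_fidelity([4, 3, 2, 5, 3, 3, 4, 3, 3]))
```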

Moderators

We proposed two prespecified moderators of A-CRA reach outcomes [25], but the survey measures for these moderators could not be analyzed due to difficulties with scoring responses (“external support” measure) or high missingness at the organization level (leadership measure, which was only administered to clinicians). Instead, we used similar subscales from the Program Sustainability Assessment Tool, a measure of eight EBP sustainment capacity domains [35, 36]; this measure was administered to all participants, so all organizations with survey responses had data for the analysis (n = 138). We used the funding stability subscale to capture external financial support for A-CRA and the organizational capacity subscale for internal leadership support of A-CRA.

Descriptive measures of A-CRA barriers and facilitators

We collected other interview and survey data that measured constructs from EPIS. Innovation measures captured participants’ perceptions of and attitudes toward the A-CRA model. Inner context measures described the treatment organizations delivering A-CRA, outer context measures described extra-organizational factors that affected A-CRA reach, and bridging factors described factors that linked inner and outer contexts (including grant-funded activities). We asked about the impacts of the ongoing COVID-19 pandemic in the state-focused sample, but those data were only applicable to reach outcomes in three states with active state-focused grants in 2020.

Analysis plan

Quantitative data analysis

To compare reach rates between grant types, we fit a series of multivariable linear regression models with reach rate as the organization-level dependent variable. Each regression included grant type and several covariates: turnover rate, number of individuals trained, number of supervisors who pursued certification, grant end date, and quarters of grant funding. These covariates helped address threats to interpreting the “grant type” variable (e.g., grant end date controls for time trends) and account for important factors identified in our qualitative analyses (e.g., turnover). Moderator variables (funding stability, organizational capacity) were entered at the organization level, along with an interaction term with grant type. Moderator variables were averaged across all ratings at an organization; if moderator items were missing, we imputed their mean value from other organizations with the same A-CRA sustainment status.
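The moderator handling just described (averaging ratings within an organization, then filling missing organization-level values from organizations with the same sustainment status) can be sketched as follows. The organization names, ratings, and the simplification to organization-level (rather than item-level) imputation are all assumptions for illustration:

```python
# Sketch of moderator scoring with status-matched mean imputation.
# Data are invented; real analyses used PSAT subscale ratings.
from statistics import mean

orgs = {
    "org_a": {"sustained": True,  "funding_stability": [4.0, 3.5]},
    "org_b": {"sustained": True,  "funding_stability": []},        # missing
    "org_c": {"sustained": True,  "funding_stability": [3.0]},
    "org_d": {"sustained": False, "funding_stability": [2.0, 2.5]},
}

# Step 1: organization-level means where ratings exist.
scores = {k: (mean(v["funding_stability"]) if v["funding_stability"] else None)
          for k, v in orgs.items()}

# Step 2: impute missing values from organizations with the same
# A-CRA sustainment status.
for k, v in orgs.items():
    if scores[k] is None:
        peers = [scores[j] for j, w in orgs.items()
                 if w["sustained"] == v["sustained"] and scores[j] is not None]
        scores[k] = mean(peers)

print(scores)
```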

The initial examination of reach rates revealed that both TAY outcomes had sample sizes too small for an adequately powered analysis (ns = 34 for first-level TAY, 29 for full TAY), per our initial power calculations [25]. Therefore, we report only descriptive statistics for those outcomes and focused our modeling on the following four outcomes: any, first-level, full, and supervisor certification. Furthermore, any certification was the primary outcome (it subsumes the other certifications), so we used the threshold p < .05 for statistical significance rather than controlling the experiment-wide error rate.
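The organization-level regression described above (reach rate on grant type, covariates, a moderator, and a grant-type-by-moderator interaction) can be sketched with a plain ordinary-least-squares fit. This is an illustrative toy with made-up data, a single covariate, and a basic normal-equations solver, not the authors' analysis code:

```python
# Toy OLS sketch of: reach ~ grant_type + turnover + moderator
#                            + grant_type * moderator
# All data below are invented for illustration.

def ols(X, y):
    """Solve (X'X) b = X'y by Gaussian elimination; X is a list of rows."""
    k = len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(k)]
         for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(k)]
    for col in range(k):                       # forward elimination w/ pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [A[r][c] - f * A[col][c] for c in range(k)]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):             # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

# Columns: intercept, grant_type (1 = state-focused), turnover rate,
# moderator (e.g., funding stability), grant_type x moderator interaction.
rows = [
    (1, 0, 0.20, 3.0), (1, 0, 0.10, 4.0), (1, 0, 0.30, 2.5),
    (1, 1, 0.25, 3.5), (1, 1, 0.15, 4.5), (1, 1, 0.35, 2.0),
]
X = [[i, g, t, m, g * m] for (i, g, t, m) in rows]
y = [0.55, 0.70, 0.40, 0.60, 0.80, 0.35]  # invented reach rates
beta = ols(X, y)
print([round(v, 3) for v in beta])
```

The coefficient on the interaction term is what carries the moderation hypothesis: it estimates how the grant-type effect on reach changes with the moderator.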

Supplemental and sensitivity analyses

We tested alternate model specifications such as beta regression, robust standard errors, and state-level fixed effects (with and without clustering standard errors), but the findings changed minimally, so we report the basic models. We also conducted a series of sensitivity analyses that characterized threats to internal validity in our design. Specifically, we examined (a) observed secular trends using a non-equivalent dependent variable [44, 45] (delivery of buprenorphine, an evidence-based medication for opioid use disorder) and (b) patterns of findings across sub-samples that lent insight into the impacts of grant type (for example, we compared outcomes for demonstration sites vs. other state-focused grantees). All details of these analyses and findings are reported in Additional file 4.

Qualitative analysis of interviews

We analyzed interviews using conventional content analysis [46] to identify the barriers and facilitators to A-CRA reach during grant funding periods. As a starting point for our analyses, we used a codebook from a prior published analysis of interviews with organization-focused grantees [17]; the second author of this study led that work.

To analyze the state-focused interview transcripts, we developed a codebook in Microsoft Excel with code definitions and exemplars [47], organized by the four EPIS domains (outer and inner contexts, bridging factors, and innovation characteristics) and further classified by determinant type (barrier and/or facilitator). We then used the NVivo qualitative software program to organize and analyze transcripts, iteratively adding newly identified codes to the codebook. Two research assistants coded most transcripts, and two interviewers also contributed. The lead author provided feedback on the initial coding and reviewed specific passages on request throughout; the coding team also met biweekly to discuss progress and refine the coding as needed.

Finally, we developed written summaries of the content of codes (with exemplar quotes), focusing on implementation-specific barriers and facilitators. The lead author and second author (who led the previous analysis [17]) incorporated details about the similarities and differences between the barriers and facilitators that participants described for each grant type. All co-authors reviewed and helped further revise the summaries to ensure credibility and completeness.
