
Adaptations for remote research work: a modified web-push strategy compared to a mail-only strategy for administering a survey of healthcare experiences


The VA Bedford Healthcare System Institutional Review Board approved this study and its waiver of informed consent. All methods were carried out in accordance with relevant guidelines and regulations.

Sample

As part of an observational research study on experiences of coordination between primary and specialty care, we administered a survey [8] to patients receiving care through the Veterans Health Administration (VA), a large integrated health system. The questionnaire is the patient version of measures previously developed for PCPs [9] and specialists [10, 11]. We used the VA Corporate Data Warehouse (CDW) to identify patients aged ≥ 18 who, in the six months prior, had (1) seen a clinician in one of eight medical subspecialties included in the study, and (2) seen their PCP before and after the index specialty care visit.

Questionnaire content

We used the previously developed Coordination of Specialty Care – Patient Survey, which includes 39 items across 10 multi-item scales that measure the patient experience of specialty care coordination. Constructs measured by the questionnaire are Patient-Centered Care Coordination, Specialist Communication, Access to Specialist Care, Specialist Knowledge of Patient History, Referral Shared Decision Making, Tests & Medications: Patient Information, Tests & Medications: Get Results, Overall Trust & Specialty Care Engagement of PCP, Coordination of Care – Specialist & PCP, and Team Planning for Patient Self-Care. Scale scores range from 1 to 5 (from disagree to agree). Data on internal consistency reliabilities (Cronbach’s alpha) and inter-scale correlations (zero-order Pearson correlations) are available in Additional File 1. We included a single-item measure of overall satisfaction with VA health care (1–6 scale from very dissatisfied to very satisfied), self-reported physical and mental health status (1–5 scales from poor to excellent), and demographic items.
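For reference, the internal consistency reliability of a multi-item scale is typically computed as Cronbach's alpha; the sketch below is a generic illustration with hypothetical responses, not the authors' analysis code:

```python
# Generic Cronbach's alpha for one multi-item scale; responses are hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents-by-items matrix of responses (e.g., 1-5 agreement ratings)."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

responses = np.array([[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3], [4, 4, 5]])
print(round(cronbach_alpha(responses), 2))
```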

Data collection

We used a mail-only (MO) administration for the pilot study. By the time we had completed analysis of the pilot data and were ready to proceed with the full study, the COVID-19 pandemic was well underway. We therefore pivoted to a modified web-push (MWP) strategy for the large launch. Both administrations used the same invitation letter, study fact sheet, and opt-out postcards.

Mail only (MO) – pilot administration

From August to November 2019, we sent up to 3 mailings to a sample of 527 patients. The first mailing included a notification letter and postage-paid opt-out card. The second mailing included a cover letter, a study fact sheet, the questionnaire with a pre-paid return envelope, an opt-out card, and the incentive of a VA-branded notepad, pen, and tote bag. The third mailing included a reminder/thank you letter, study fact sheet, and the questionnaire with a pre-paid return envelope.

Modified web-push (MWP) – large launch

From September 2020 to May 2021, we contacted 5,288 Veterans by mail up to 3 times. Each mailing included an invitation letter and study fact sheet; the first two mailings also included a postage-paid opt-out card. The letter contained the questionnaire URL (shortened for accessibility using tinyurl.com) and a QR code. Reminder letters offered the option of calling to complete the questionnaire by phone or to request a mailed paper copy. Participants were sent a $10 gift card to a national pharmacy chain upon our receipt of their completed survey. Online responses were collected in Qualtrics; study staff entered paper and phone responses into the Qualtrics database, including a flag variable that distinguished these respondents.

Statistical analysis

We compared the MO and MWP samples (respondents and non-respondents) on demographic characteristics from the CDW: sex, age, rural/urban status, and the VA Care Assessment Needs (CAN) score [12], which identifies patients at high risk for hospitalization and mortality. We also compared respondents only on five characteristics from the survey: education, self-reported physical health, self-reported mental health, having help completing the survey, and overall satisfaction with VA care. We used independent samples t-tests, Wilcoxon two-sample tests, and chi-square tests as appropriate.
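As an illustration of these bivariate comparisons, the sketch below uses hypothetical data and column names (not the study data), applying a t-test to a continuous characteristic, a Wilcoxon rank-sum test to a skewed score, and a chi-square test to a categorical characteristic:

```python
# Illustrative comparison of MO vs. MWP samples on hypothetical CDW-style characteristics.
import pandas as pd
from scipy import stats

df = pd.DataFrame({
    "mode":        ["MO", "MO", "MO", "MO", "MWP", "MWP", "MWP", "MWP"],
    "age":         [71, 64, 58, 69, 66, 75, 62, 70],
    "can_score":   [55, 80, 40, 65, 60, 90, 70, 50],
    "rural_urban": ["rural", "urban", "urban", "rural", "rural", "urban", "rural", "urban"],
})
mo, mwp = df[df["mode"] == "MO"], df[df["mode"] == "MWP"]

t, p_t = stats.ttest_ind(mo["age"], mwp["age"], equal_var=False)     # independent samples t-test
w, p_w = stats.ranksums(mo["can_score"], mwp["can_score"])           # Wilcoxon two-sample (rank-sum) test
chi2, p_c, dof, _ = stats.chi2_contingency(pd.crosstab(df["mode"], df["rural_urban"]))  # chi-square test
print(p_t, p_w, p_c)
```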

We compared raw response rates and rates after controlling for demographic differences between the two samples. We calculated response rates in a manner aligned with AAPOR recommendations [13]. The numerator was calculated as (Complete responses + Partial responses). The denominator was calculated as (Complete responses + Partial responses) + (Refusal and Breakoff + Non-contact + Other) + (Unknown Household/Unknown Other). For these response rate comparisons only, we controlled for demographic differences in the samples by calculating and applying propensity score weights using the five CDW-based demographic characteristics. Propensity scores were calculated by first running a binary logistic regression predicting survey mode from the five CDW variables simultaneously. The predicted value for each participant calculated from this model was taken as the propensity score. Weights of 1/propensity score were applied to the MWP sample and weights of 1/(1-propensity score) were applied to the MO sample. Applying the propensity scores as weights in this manner equates the samples on these covariates and is a preferred method for controlling for extraneous variables when sample size is small and/or the number of control variables is large [14]. Supplemental analyses compared the demographic variables used in calculating propensity scores across samples with propensity score weights applied using independent samples t-tests or chi-square tests as appropriate. These supplemental analyses (Additional File 2) confirmed that weighting successfully controlled for demographic differences between samples. For MWP respondents we reported the frequency of final responses by mode.
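A minimal sketch of the propensity-score weighting step is shown below, assuming a pandas data frame of sampled patients and hypothetical column names; the real analysis used the five CDW-based characteristics, but only two covariates are shown here for brevity.

```python
# Illustrative sketch of inverse-propensity weighting to balance the MO and MWP samples;
# data, column names, and covariate set are hypothetical, not the study data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "mwp":   [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],   # 1 = MWP sample, 0 = MO sample
    "age":   [70, 62, 75, 58, 66, 71, 69, 64, 73, 59, 68, 72],
    "rural": [1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1],
})

# Binary logistic regression predicting survey mode from the covariates simultaneously;
# the predicted probability for each patient is the propensity score.
X = sm.add_constant(df[["age", "rural"]].astype(float))
propensity = sm.Logit(df["mwp"], X).fit(disp=False).predict(X)

# Weights of 1/propensity for the MWP sample and 1/(1 - propensity) for the MO sample.
df["weight"] = np.where(df["mwp"] == 1, 1 / propensity, 1 / (1 - propensity))
```

These weights can then be supplied to weighted response-rate and demographic comparisons to equate the samples on the measured covariates.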

Although controlling for demographic differences was important for a fair comparison of response rates, we also wanted to evaluate whether the two survey modes produced respondents with different demographics, as this provides information about the potential for non-response bias. We compared demographics of respondents to non-respondents within each mode separately using independent samples t-tests, Wilcoxon two-sample tests, and chi-square tests. We then conducted separate binary logistic regression analyses, one for each demographic variable, to examine whether there were significant interactions between each demographic characteristic and mode in predicting the likelihood of responding. These analyses tell us whether any associations between demographic characteristics and the likelihood of responding differ across modes (e.g., whether the relationship between responding and age differed for MO vs. MWP).
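A minimal sketch of one such interaction model, using the statsmodels formula interface and hypothetical data (one row per sampled patient; `responded` flags whether a survey was returned):

```python
# Illustrative mode-by-demographic interaction test; data and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "responded": [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
    "mwp":       [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],   # 1 = MWP, 0 = MO
    "age":       [70, 62, 75, 58, 66, 71, 69, 64, 73, 59, 68, 72],
})

# 'age * mwp' expands to age + mwp + age:mwp; a significant age:mwp coefficient would
# indicate that the association between age and responding differs between modes.
model = smf.logit("responded ~ age * mwp", data=df).fit(disp=False)
print(model.summary())
```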

Finally, we examined whether survey mode led to a different respondent profile in terms of experience of care coordination. We compared coordination scale scores between MO and MWP respondents with independent samples t-tests using both raw scale scores and propensity score-weighted scale scores. For these analyses among respondents only, a second, separate set of propensity scores was calculated and applied. Here, propensity scores were calculated to balance the two respondent sub-samples using all 10 Table 1 variables. Supplemental analyses (Additional File 2) confirmed that these new propensity weights successfully controlled for differences on all 10 variables from Table 1 across respondent sub-samples. In addition, we calculated and reported effect size estimates for these comparisons (Cohen’s d) and interpreted the values as small, moderate, or large based on established standards [15].
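A minimal sketch of the weighted scale-score comparison and effect size, with hypothetical scores and weights (one common way to compute a weighted Cohen's d; not necessarily the authors' exact formula):

```python
# Illustrative propensity-weighted t-test and Cohen's d for one coordination scale;
# scores and weights are hypothetical.
import numpy as np
from statsmodels.stats.weightstats import DescrStatsW, ttest_ind

mo_scores  = np.array([3.2, 4.1, 2.8, 3.9, 4.4, 3.5])
mo_w       = np.array([1.1, 0.9, 1.3, 1.0, 0.8, 1.0])
mwp_scores = np.array([3.8, 4.5, 3.1, 4.0, 4.6, 3.7])
mwp_w      = np.array([0.9, 1.2, 1.0, 1.1, 0.9, 1.0])

# Propensity score-weighted independent samples t-test
t, p, dof = ttest_ind(mo_scores, mwp_scores, usevar="unequal", weights=(mo_w, mwp_w))

# Weighted Cohen's d: difference in weighted means over a pooled weighted SD
d_mo  = DescrStatsW(mo_scores,  weights=mo_w,  ddof=1)
d_mwp = DescrStatsW(mwp_scores, weights=mwp_w, ddof=1)
pooled_sd = np.sqrt((d_mo.var + d_mwp.var) / 2)
cohens_d = (d_mwp.mean - d_mo.mean) / pooled_sd
print(round(t, 2), round(p, 3), round(cohens_d, 2))
```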


