Clinical Trial Details
— Status: Completed
Administrative data
NCT number | NCT04927845
Other study ID # | IRB20-1337
Secondary ID |
Status | Completed
Phase | N/A
First received |
Last updated |
Start date | May 19, 2021
Est. completion date | June 10, 2022
Study information
Verified date | August 2022
Source | Harvard University
Contact | n/a
Is FDA regulated | No
Health authority |
Study type | Interventional
Clinical Trial Summary
With the COVID-19 pandemic completely altering the landscape of higher education, students
have been experiencing more stress than ever. With Harvard University's plan for students to
return to campus for the 2021-2022 academic year, offering an online mental health program
such as StriveWeekly could provide students with stress management support as they transition
back after 1.5 years of remote learning. This study will use a randomized controlled trial
design to test the effectiveness of StriveWeekly against a waitlist condition. The trial will
allow us to test whether a program that has previously demonstrated effectiveness in reducing
anxiety and depression symptoms among university students remains effective after the
unprecedented stressors of a global pandemic.
Primary aim: We aim to evaluate the effectiveness of StriveWeekly in preventing or reducing
symptoms of anxiety and depression. The use of a waitlist condition will allow us to
experimentally assess whether the online intervention is responsible for decreasing anxiety,
depression, and stress symptoms, or preventing their worsening, over time. Given the
previously established effectiveness of StriveWeekly as an indicated prevention program, we
expect students in the intervention condition to show significantly greater symptom
improvement than the waitlist condition from baseline to posttest. Alternatively, if the
transition back from remote learning and/or the broader pandemic context interferes with the
acceptability or effectiveness of StriveWeekly, we might expect to see few or no significant
differences between the online intervention condition and the waitlist condition from
baseline to posttest.
Secondary aims include: (a) testing moderators of intervention effectiveness and (b)
evaluating the intervention in terms of acceptability (e.g., feedback on the program name;
demographic representativeness of the student user sample; satisfactory adherence and
satisfaction rates). Exploratory moderation analyses across groups will help determine
whether the intervention condition produces unique or additive effects for students with
certain characteristics over and above changes demonstrated by similar students in the
waitlist condition. Acceptability analyses will allow for a more nuanced evaluation of
StriveWeekly's effectiveness as a program, beyond its ability to facilitate symptom
reduction.
Description:
Recruitment. Recruitment materials will be distributed via a mass email to all enrolled
students, announcements over house email lists (e.g., academic departments), and social
media announcements.
Pre-trial needs assessment and pilot. Prior to the full RCT, we will conduct a campus-wide
needs assessment survey to gather information about student needs and preferences related to
mental health programming. Student responses will inform the specifics of StriveWeekly
implementation during the academic year (e.g., preferred timing of programming,
appropriateness of content across a diverse student population). After the needs assessment,
we will invite a small group of students to participate in a pilot of the revised
StriveWeekly platform. Pilot participants will be invited to provide program feedback via
online surveys; they will remain eligible for the full RCT study, but may be excluded from
the final data analysis sample.
Pragmatic trial design. For a trial design using self-guided online intervention, it is
important that the design mimics intended intervention use (Fleming et al., 2018). For
example, overly stringent inclusion criteria limit generalizability, and face-to-face
assessments may provide added benefit beyond the intervention itself (Fleming et al., 2018).
Therefore, as much as possible our methods simulate how a real-world campus might offer
online services as usual. First, we are employing cluster randomization, for reasons
elaborated below. Second, participants in either condition will be allowed to access other
on- or off-campus mental health services and resources; in our statistical analyses we will
control for service use rather than excluding such students. Third, all data collection and
participant communications will be electronic rather than in-person to: (a) include all
students, whether remote or on campus, (b) avoid unintentionally bolstering motivation (e.g.,
inducing social desirability to please researchers), and (c) avoid adding barriers (e.g.,
time demands, concerns about privacy). Finally, the survey compensation amount will be modest
enough to increase participant response rates without artificially inflating adherence rates
or self-reported improvement due to the financial incentive.
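As a rough illustration of the covariate-adjustment approach described above (not the
study's pre-specified analysis plan), the following Python sketch fits a linear mixed model
that adjusts posttest symptoms for baseline symptoms and outside service use, with a random
intercept for residential house to reflect the cluster randomization; the dataset and all
column names are hypothetical.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical analysis dataset: one row per participant, with columns for
    # condition (intervention vs. waitlist), baseline and posttest symptom scores,
    # outside service use, and residential house. Names are illustrative only.
    df = pd.read_csv("posttest_data.csv")

    # Linear mixed model: estimate the condition effect on posttest anxiety,
    # adjusting for baseline anxiety and outside service use (rather than
    # excluding service users), with a random intercept for residential house
    # to account for the cluster randomization.
    model = smf.mixedlm(
        "posttest_anxiety ~ condition + baseline_anxiety + service_use",
        data=df,
        groups=df["house"],
    )
    result = model.fit()
    print(result.summary())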
Random assignment. Cluster randomization will be used to assign students according to their
residential affiliation to the immediate intervention condition or the waitlist condition
(i.e., delayed access). Although cluster randomization can introduce statistical confounds
for analyzing intervention outcomes at the individual participant level, it can be
preferred: (a) to avoid intervention "contamination" effects (e.g., if participants in both
conditions can regularly interact and thus might exchange health-related knowledge), and (b)
if it allows the intervention to be delivered as it would be in real practice (Cook, Delong,
Murray, Vollmer, & Heagerty, 2016). Moreover, the benefits of cluster randomization by
residential house/dorm affiliation for this trial are crucial for the social aspects of the
StriveWeekly program. For example, students will know who else is concurrently participating
in the program (e.g., any of their friends in X, Y, Z house), allowing for peer-to-peer
engagement. Also, this will allow for easier coordination of any optional complementary
programming by residential staff at each house/dorm. A randomizer has already been used to
assign half of the freshman and upperclassman residential buildings to each condition.
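For illustration only, a minimal sketch of house-level cluster randomization of the kind
described here; the building lists and seeds are hypothetical placeholders, not the actual
assignment that was performed.

    import random

    # Hypothetical residential clusters; actual building names are not part of this record.
    freshman_dorms = ["Dorm A", "Dorm B", "Dorm C", "Dorm D"]
    upperclass_houses = ["House 1", "House 2", "House 3", "House 4"]

    def assign_clusters(clusters, seed):
        """Randomly split a list of residential clusters in half:
        one half to the immediate intervention, the other to the waitlist."""
        rng = random.Random(seed)
        shuffled = list(clusters)
        rng.shuffle(shuffled)
        midpoint = len(shuffled) // 2
        return {"intervention": shuffled[:midpoint], "waitlist": shuffled[midpoint:]}

    # Randomize within each stratum (freshman dorms vs. upperclassman houses)
    # so that half of each building type lands in each condition.
    assignments = {
        "freshman": assign_clusters(freshman_dorms, seed=1),
        "upperclassman": assign_clusters(upperclass_houses, seed=2),
    }
    print(assignments)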
Data collection. Prior to beginning any research procedures, students will provide their
informed consent online via Qualtrics. Participants will be required to log in to Qualtrics
via HarvardKey Shibboleth, which will be configured to allow only currently active Harvard
accounts. Once consent has been obtained for an individual student, they will be directed to
an online survey for the study baseline assessment. The baseline survey will be open for two
weeks. Students assigned to the intervention group will receive an access code via email,
allowing them to access the online platform and set up their account. The intervention period
will then run for seven weeks, after which the posttest survey will open to both the
intervention and waitlist groups for one to two weeks. Thereafter, students who were
assigned to the waitlist group will gain access to the online intervention for seven weeks.
After this delayed access group completes the intervention, there will be a follow-up survey
for all participants from both conditions. See timeline table below.