Clinical Trial Details
— Status: Completed
Administrative data
NCT number | NCT04747327
Other study ID # | 834772
Secondary ID |
Status | Completed
Phase | N/A
First received |
Last updated |
Start date | February 7, 2021
Est. completion date | December 30, 2023
Study information
Verified date | February 2024
Source | University of Pennsylvania
Contact | n/a
Is FDA regulated | No
Health authority |
Study type | Interventional
Clinical Trial Summary
In a series of controlled, randomized experiments, we will systematically manipulate exposure
to health-related messages and/or survey methods to examine the effects on behavioral
intention.
There are various strategies used to influence health-related decision making, and their
effects on health behavior have been mixed. In particular, incentive-based interventions have
often failed to increase healthy behavior. We will examine 1) the role of behavioral
motivation to increase sleep or exercise and 2) current levels of sleep or exercise when
predicting who is interested in a mock RCT invitation to increase each behavior using
financial or social incentives.
In addition to the above focus on sleep and exercise, we will also examine another important
health behavior: vaccination. Embedded within experiments studying the effects of incentives on
vaccination decisions, we will conduct methodological tests. In particular, we will estimate the
effects of using different methods of measuring the study outcome (vaccine intention).
Description:
Incentives for Sleep and Exercise:
This experiment will estimate enrollment bias for randomized clinical trials offering to
incentivize behavior change. In this experiment, we will test whether those who are most
motivated to change behavior are also most likely to enroll in a (hypothetical) RCT that
offers financial or social incentives for behavior change.
We hypothesize that those most likely to enroll are already motivated to change their
behavior prior to enrollment, which could bias trials towards the null. We will test this
hypothesis by estimating whether motivation to change a behavior predicts interest in joining an
RCT targeting that behavior. We will also test whether baseline behavior predicts interest in
joining these RCTs.
We will conduct this experiment using mock invitations to learn about and potentially join an
RCT. The study outcome will be responses to this invitation. We will not offer invitations to
an actual trial, but the stimuli (mock invitations to a "ghost" trial) and task (response to
the invitation) fundamentally resemble a trial's counterparts. The invitations will specify
an opportunity to earn financial or social incentives for improving a healthy behavior.
We will invite participants to earn financial or non-financial incentives for increasing
their sleep or exercise. In this study, the primary outcome will measure whether participants
are "not interested," "slightly interested," or "very interested" in participating.
Prior to receiving their invitation, they will complete an online questionnaire measuring
their motivation to increase each behavior, plus their recent behavior and
socio-demographics.
Separate analyses will be conducted for financial and social incentives and for sleep vs
exercise trial invitations. We will report point estimates and 95% confidence intervals (CI)
for behavior and motivation.
The analyses will examine if their interest in joining the RCT is predicted by 1) their
baseline behavior (i.e., amount of sleep or exercise), or 2) their motivation to change the
specific behavior. As noted above, we hypothesize that their level of motivation to change a
specific behavior will predict interest in a trial targeting that behavior.
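For illustration, the sketch below shows one way such an analysis could be run, using an ordinal logistic regression in Python. The file name and the column names (interest, motivation, baseline_hours) are placeholders, and the choice of ordinal logistic regression is an assumption for illustration, not a protocol requirement.

```python
# Hypothetical sketch: ordinal logistic regression predicting interest in the
# mock RCT invitation from motivation and baseline behavior.
# File and column names are placeholders, not study data.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("sleep_invitation_responses.csv")  # hypothetical file

# Code the 3-level outcome as an ordered categorical.
levels = ["not interested", "slightly interested", "very interested"]
df["interest"] = pd.Categorical(df["interest"], categories=levels, ordered=True)

model = OrderedModel(
    df["interest"],
    df[["motivation", "baseline_hours"]],  # motivation score and recent sleep
    distr="logit",
)
result = model.fit(method="bfgs", disp=False)
print(result.summary())             # point estimates
print(result.conf_int(alpha=0.05))  # 95% confidence intervals
```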
Testing Relatively Large and Small Vaccination Incentives:
Using a separate sample, this experiment will test whether policies offering large or small
financial incentives are likely to strengthen COVID-19 vaccine intention. This experiment
will randomize individuals to one of four study arms that include 1) a control condition, 2)
an educational message, 3) a message about the relatively large financial incentive, or 4) a
message about the relatively small financial incentive. The goal of this study is to estimate
if either type of incentive policy is likely to have negative effects on vaccine intention,
as some experts have warned.
When analyzing the effects of relatively large and small financial incentives on vaccination
intentions, we will report point estimates and 95% CIs for the overall sample and demographic
sub-groups. We will also report summary statistics for the overall sample and
sub-populations. We will test whether, compared to the control condition, either of the
financial incentives increases, decreases, or has no effect on the percentage who want to
vaccinate. In a fourth study arm, subjects will receive an educational message that will also
be compared to the control condition.
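As an illustration of the arm-vs-control comparisons, the sketch below computes a point estimate and 95% CI for a difference in "yes" proportions. The counts shown are placeholders, not study data.

```python
# Hypothetical sketch: arm-vs-control comparison of the share intending to
# vaccinate, with a point estimate and 95% CI for the difference.
from statsmodels.stats.proportion import (
    proportions_ztest,
    confint_proportions_2indep,
)

yes_large, n_large = 180, 300      # "yes" count and n, large-incentive arm (placeholder)
yes_control, n_control = 165, 300  # "yes" count and n, control arm (placeholder)

diff = yes_large / n_large - yes_control / n_control
low, upp = confint_proportions_2indep(
    yes_large, n_large, yes_control, n_control, compare="diff"
)
stat, pval = proportions_ztest([yes_large, yes_control], [n_large, n_control])

print(f"difference = {diff:.3f}, 95% CI [{low:.3f}, {upp:.3f}], p = {pval:.3f}")
```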
Testing Vaccine Incentives Plus Different Measures of Vaccine Intention:
In a related experiment, we will separately test the effects of 10 experimental conditions,
with a counter-balanced experimental manipulation using an FDA approval message, plus a
control condition. The goal of this study is to compare the effects of a wider variety of
vaccine interventions that experts have proposed, including incentives and mandates.
In addition, we will also randomize individuals to questionnaires using different methods of
measuring vaccine intention, the study outcome.
Comparing different methods of estimating vaccine intention: Embedded within the experiment
testing different proposed vaccine policies, we will test whether methodological differences in
the response options for the primary outcome affect the percentage reporting "yes." To do this,
we will compare 2-level (Yes and No) vs 3-level (Yes, No, and Unsure) response options and randomly
order both sets.
This methodological experiment will examine whether the proportion responding "yes" to the
same question (about whether they want to vaccinate soon) varies depending on the order of
response options and whether they include a maybe/unsure option. We will run cross-tabulations
and chi-square tests for the 2 vs 3 response levels and for the response order. The
instrumentation tests will be conducted for COVID-19 boosters, the initial COVID-19 shots, and
vaccination against influenza.
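The chi-square test described above could be run along the lines of the following sketch; the cell counts are placeholders, not study data.

```python
# Hypothetical sketch: does offering 2 vs. 3 response options change the
# percentage answering "yes"? Counts are placeholders.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: questionnaire version (Yes/No vs. Yes/No/Unsure)
# Columns: answered "yes" vs. did not answer "yes"
table = np.array([
    [210, 90],   # 2-option version
    [185, 115],  # 3-option version
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```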
When testing the effects of potential vaccine policies, the control group, with no vaccine
policy presented, will be compared to: cash incentives of $1,000, $200, or $100; a $1,000 tax
credit; lotteries for $100,000, $200,000, or $1 million; a $1,000 tax on the unvaccinated; and
mandates by employers or airlines, bars, and restaurants. The main outcome is whether they
would want to get vaccinated soon given the hypothetical vaccine policy.
(Those assigned to the employer mandate condition will be excluded from analyses if they
report being unlikely to have an employer.)
The OLS specification will be our main result, and the other models are provided as robustness
checks.
The OLS model can exclude all demographic controls, regressing the binary dependent variable
on the treatment variables alone. (This approach is legitimate because the treatments are
randomized across respondents.) The treatments include the financial policies (incentives and
penalties of different amounts and types) and mandates (of different types) presented in a
message.
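A minimal sketch of this main OLS specification might look like the following. The file name and column names are placeholders, and the robust standard errors are an assumption not specified in the protocol.

```python
# Hypothetical sketch of the main OLS specification: regress the binary
# "wants to vaccinate soon" indicator on treatment-arm dummies only
# (no demographic controls). File and column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vaccine_policy_responses.csv")  # hypothetical file
df["intends_yes"] = (df["response"] == "Yes").astype(int)

# C(arm, ...) expands the arm labels into indicator variables,
# with the control condition as the omitted reference category.
ols = smf.ols(
    "intends_yes ~ C(arm, Treatment(reference='control'))", data=df
).fit(cov_type="HC2")  # robust SEs are an illustrative choice, not protocol-specified
print(ols.summary())
```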
Type of model: We will perform pairwise t-tests of the percentage of respondents answering
"Yes," comparing those treated with an incentive to the control group. We will perform these
pairwise tests on subsets by race, gender, income, education, and other socio-demographics.
Additionally, we will conduct these pairwise tests by type of treatment: comparing lotteries
to cash incentives, positive incentives vs. penalties, sizes of incentives, and employer
mandates against the control.
We will also conduct regression analyses on the pooled dataset in which the dependent variable
is the individual response, with those answering "Yes" coded as 1 and those answering "No" or
"Unsure" coded as 0. We will include a set of controls
(race, gender, income, education, etc.) as well as an indicator variable reflecting whether
the respondent received a treatment. Regression models will include ordinary least squares,
probit, nearest neighbor matching, and propensity score matching. We will also run these
regressions where the treatment variable is split up into several indicator variables
reflecting the type of treatment provided as well as an indicator for FDA approval.
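One of these robustness checks, a probit model with socio-demographic controls and a treatment indicator, could look roughly like the sketch below; the file and column names are placeholders.

```python
# Hypothetical sketch of one robustness check: probit regression of the
# binary intention outcome on a treatment indicator plus controls.
# File and column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vaccine_policy_responses.csv")  # hypothetical file
df["intends_yes"] = (df["response"] == "Yes").astype(int)
df["treated"] = (df["arm"] != "control").astype(int)

probit = smf.probit(
    "intends_yes ~ treated + C(race) + C(gender) + income + C(education)",
    data=df,
).fit(disp=False)
print(probit.summary())
```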
We will estimate a model, alternatively using ordinary least squares and logistic regression,
with a binary dependent variable (equal to one if the respondent wanted to be vaccinated, and
zero otherwise). For explanatory variables, we will include dummy variables for each of the
ten treatment arms.
Criteria for statistical significance: We will use .05 as our threshold for statistical
significance.
Sample size calculation for the survey experiment comparing 10 different vaccine policies: We
estimate that if the final sample size for each condition includes at least 300 subjects, we
can detect differences of about 5% or larger. We plan to double the allocation for the control
and $1,000 conditions to allow for planned comparisons.
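For reference, a power calculation of this kind can be sketched as follows. The assumed 50% baseline "yes" rate, the two-sided test at alpha = .05, and the specific differences examined are illustrative assumptions rather than protocol values.

```python
# Hypothetical sketch: with ~300 subjects per arm, what power is available to
# detect differences of various sizes in the "yes" rate? Baseline rate and
# the differences shown are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.50   # assumed control-arm "yes" rate
n_per_arm = 300

analysis = NormalIndPower()
for delta in (0.05, 0.08, 0.10, 0.12):
    effect = proportion_effectsize(baseline + delta, baseline)
    power = analysis.power(effect_size=effect, nobs1=n_per_arm, alpha=0.05, ratio=1.0)
    print(f"difference of {delta:.0%}: power = {power:.2f}")
```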
All experiments: Each subject will be randomized to a condition. Participants will be
randomized to reduce the chance that observed effects are due to unmeasured factors. In
addition, all study procedures will be automated, which improves control over how each
experiment is conducted and allows all procedures to be consistently standardized.
The studies will enroll national, theory-based samples recruited through MTurk and/or
Prolific platforms. To reduce enrollment bias, recruitment and enrollment materials will
describe the research in vague terms (e.g., "we are interested in learning your opinions and
preferences related to health"). Each experiment will also measure socio-demographic variables
for descriptive purposes.
Recommended data cleaning procedures for each experiment: Attention checks can identify those
who should be excluded from the main analyses. (Regardless of performance on the attention
check, all participants will be compensated for their time.) Analyses will exclude those with
duplicate IDs or a high fraud score. We will conduct analyses that include and exclude those
who finished the fastest (the fastest 5%).
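A sketch of how these exclusions might be implemented follows; the file name, column names, and the fraud-score threshold are placeholders.

```python
# Hypothetical sketch of the recommended data cleaning: keep attention-check
# passers, drop duplicate IDs and high fraud scores, and build a sensitivity
# sample that also drops the fastest 5% of completers.
import pandas as pd

df = pd.read_csv("raw_responses.csv")  # hypothetical file

cleaned = (
    df.query("passed_attention_check == 1")       # placeholder flag
      .drop_duplicates(subset="participant_id", keep="first")
      .query("fraud_score < 30")                  # placeholder threshold
)

# Sensitivity sample excluding the fastest 5% of completion times.
cutoff = cleaned["duration_seconds"].quantile(0.05)
no_speeders = cleaned[cleaned["duration_seconds"] > cutoff]
```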
Replication studies will include the same study design and procedures.