Help us improve digital forms!
People in the U.S. spend over 10 billion hours each year completing government forms. Take five minutes to help OES reduce this burden and make digital forms easier and more accessible.
You can help by filling out this short, typical form.
Evaluation and Evidence Training Series
Open to Federal executive branch employees only. Register on MAX.gov.
What To Do When You Can’t Randomize
Wednesday, August 17th, from 3:00–4:30 PM ET
Randomized experiments may be an ideal way to measure a program’s impact, but they are not always feasible. Explore other research designs that can also shed light on program impacts.
Survey Uptake Decisions with Transparent Default Choices
OES developed and administered a survey experiment to estimate how responsive decision makers are to information about program impact when assessing a program’s value. As part of the survey, OES tested a transparency statement that appeared alongside a default (pre-selected) response to one survey question.
The question asked respondents if they would agree to receive future survey invitations, and the “Yes” response was pre-selected (by default) for all respondents. Next to the “Yes” response, the following transparency statement was included: “NOTE: We have preselected this option because we want to have enough respondents for future surveys to help build evidence to improve government services.”
Respondents who reached the final survey question were randomly assigned to see either the default selection accompanied by the transparency statement or a standard default selection with no transparency statement.
Including the transparency statement increased the acceptance of the default (“Yes”) option by 14.7 percentage points. This difference was statistically significant (p = .001).
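For readers curious how a difference like this is typically tested, below is a minimal sketch of a two-proportion z-test in Python. The counts are hypothetical placeholders chosen only to illustrate the mechanics; they are not the OES data, and the actual OES analysis may have used a different estimator.

```python
from math import sqrt, erf

# Hypothetical counts (NOT the OES data): acceptance of the default
# "Yes" option in each randomly assigned group.
treat_yes, treat_n = 430, 500      # saw the transparency statement
control_yes, control_n = 356, 500  # standard default, no statement

p_t = treat_yes / treat_n
p_c = control_yes / control_n
effect = p_t - p_c  # difference in acceptance rates (a proportion)

# Pooled standard error under the null hypothesis of equal proportions
p_pool = (treat_yes + control_yes) / (treat_n + control_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / treat_n + 1 / control_n))

z = effect / se
# Two-sided p-value from the standard normal CDF
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"Effect: {effect:.1%} points, z = {z:.2f}, p = {p_value:.4f}")
```

(The same test is available as proportions_ztest in statsmodels, if that dependency is acceptable.)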
New Guidance Papers
Two new methodological guidance papers are now available on our Evaluation Resources page.
The first paper focuses on how to select appropriate multinomial tests for population comparisons.
The second paper provides resources and guidance for causal evaluations using quasi-experimental designs (QEDs).