Recently, we compared the DLABSS volunteer pool to online samples from MTurk and nationally representative samples from the ANES and CPS. Building on our earlier analysis, we further benchmark the DLABSS panel by replicating the same three experiments that Berinsky et al. (2012) used to evaluate the effectiveness of MTurk as a social science research tool.
The three experiments are: an analysis by Rasinski (1989) that investigates the effects of question wording on people’s preferences for government spending; the “Asian Disease Problem” reported by Tversky & Kahneman (1981), which examines the effect of framing on preferences; and Kam and Simas’s (2010) investigation of the effect of an individual’s risk orientation on his or her policy preferences. Below we present a table for each experiment that compares the DLABSS replication results to the original experiment and to the MTurk replication.
We begin with Rasinski (1989), who finds that asking people whether they think the government is spending too much, too little, or about the right amount on “welfare” versus on “assistance to the poor” significantly changes how they respond. Specifically, people are more inclined to support increased spending when the question is phrased as “assistance to the poor” rather than “welfare.” Table 1 below shows the percent of respondents in each group who believe too little is being spent on these programs.
Framing effects on preference for redistribution: DLABSS, GSS, and MTurk
The DLABSS replication is in line with the findings of Rasinski and with the Berinsky et al. MTurk replication. All three experiments show a significant difference between the “poor” and “welfare” groups, with the “poor” group consistently more likely than the “welfare” group to say that too little is being spent. The DLABSS “poor” group matched the original experiment’s “poor” group exactly, while the DLABSS “welfare” group was higher than on either of the other platforms. Slight shifts in public opinion since the original experiment may account for this difference.
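The significance of the gap between the “poor” and “welfare” groups can be checked with a standard two-proportion z-test. The sketch below is illustrative only: the counts are hypothetical placeholders, not the actual figures from any of the three studies.

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for the difference between two independent proportions."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts for illustration only (not the actual study data):
# 130 of 200 "assistance to the poor" respondents vs. 40 of 200 "welfare"
# respondents say too little is being spent.
z, p = two_proportion_z_test(130, 200, 40, 200)
```

With a gap of that size, the test rejects the null of equal proportions at any conventional significance level; the same calculation applied to each platform's table entries is how the group differences described above would be assessed.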
Second, we replicate the well-known Asian Disease Problem popularized by Tversky & Kahneman (1981), who present two different groups with the problem of a “rare Asian disease” threatening their country and suggest two possible programs to deal with the disease. In brief, the authors find that respondents primed with a “die” frame that describes policy options in terms of deaths (rather than lives saved) are more likely to choose probabilistic (rather than certain) outcomes. Table 2 below shows the percent of respondents who prefer the certain outcome under each frame in DLABSS, MTurk, and the original study.
The Asian Disease Problem
Across all platforms, respondents in the positively framed group prefer the certain outcome to the probabilistic outcome, and vice versa for the negatively framed group. The original experiment displays the strongest difference between groups, while the online laboratories find slightly smaller but still significant effects.
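What makes the framing effect striking is that the two options are numerically equivalent in expectation under either frame. A quick check, using the figures from the original vignette (a disease expected to kill 600 people):

```python
# Positive ("lives saved") frame
certain_saved = 200                       # Program A: 200 of 600 saved for sure
ev_prob_saved = (1/3) * 600 + (2/3) * 0   # Program B: 1/3 chance all 600 saved

# Negative ("deaths") frame
certain_deaths = 400                      # Program C: 400 of 600 die for sure
ev_prob_deaths = (1/3) * 0 + (2/3) * 600  # Program D: 2/3 chance all 600 die

# The certain and probabilistic options have identical expected outcomes,
# and the two frames describe the very same pair of programs.
assert certain_saved == ev_prob_saved == 200
assert certain_deaths == ev_prob_deaths == 400
```

A purely outcome-driven respondent would therefore be indifferent between the options in both frames; the preference reversal across frames is what the experiment measures.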
Lastly, we look at a more recent experiment by Kam & Simas (2010), which finds that the amount of risk that people are willing to accept affects their preferences for different policies. Specifically, those who are willing to accept higher amounts of risk are more likely to support probabilistic policy outcomes. Table 3 below displays the results of probit regressions for the three different platforms. For simplicity we present only the sign and significance of each coefficient, but note that the coefficient magnitudes and significance levels are nearly identical across platforms.
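A probit regression models the probability of a binary choice (here, picking the probabilistic policy) as the standard normal CDF of a linear predictor. The minimal sketch below is not the authors' actual specification: the data are simulated, the single "risk acceptance" regressor and its true effect size are assumptions for illustration, and the fit uses plain gradient ascent on the log-likelihood rather than the Newton-type optimizers statistical packages employ.

```python
import math
import random

def phi(x):
    """Standard normal probability density function."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def fit_probit(xs, ys, lr=0.05, steps=2000):
    """Fit P(y=1 | x) = Phi(b0 + b1*x) by gradient ascent on the log-likelihood."""
    b0, b1 = 0.0, 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            eta = b0 + b1 * x
            p = min(max(Phi(eta), 1e-10), 1 - 1e-10)
            # Score contribution of one observation: (y - p) * phi / (p * (1 - p))
            score = (y - p) * phi(eta) / (p * (1 - p))
            g0 += score
            g1 += score * x
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

# Simulated data (illustrative only): risk acceptance in [0, 1] with an
# assumed positive true effect on choosing the probabilistic policy.
random.seed(0)
xs = [random.random() for _ in range(400)]
ys = [1 if random.random() < Phi(-0.5 + 1.5 * x) else 0 for x in xs]
b0, b1 = fit_probit(xs, ys)
```

On data generated this way, the fitted slope on risk acceptance comes out positive, mirroring the sign pattern reported in Table 3: greater risk acceptance predicts support for probabilistic policy outcomes.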
The Effects of Risk Acceptance and Framing on Policy Preferences
Overall, we find that the DLABSS panel successfully replicates all three experiments that Berinsky et al. replicated using MTurk, with no significant or alarming differences. Our findings bode well for the future of DLABSS, and of online volunteer labs in general, as an effective social science research platform.