More Is Not Always Better: An Experimental Individual-Level Validation of the Randomized Response Technique and the Crosswise Model

Höglinger, Marc; Jann, Ben (15 February 2016). More Is Not Always Better: An Experimental Individual-Level Validation of the Randomized Response Technique and the Crosswise Model (University of Bern Social Sciences Working Papers 18). Bern: University of Bern.

Files:
Hoeglinger-Jann-2016-MTurk.pdf (Published Version, 6MB). Available under the BORIS Standard License.
Hoeglinger-Jann-2016-MTurk-Analysis.pdf (Supplemental Material: documentation of analysis/log files, 176kB). Available under the BORIS Standard License.

Social desirability and the fear of sanctions can deter survey respondents from responding truthfully to sensitive questions. Self-reports on norm-breaking behavior such as shoplifting, non-voting, or tax evasion may therefore be subject to considerable misreporting. To mitigate such misreporting, various indirect techniques for asking sensitive questions, such as the randomized response technique (RRT), have been proposed in the literature. In our study, we evaluate the viability of several variants of the RRT, including the recently proposed crosswise-model RRT, by comparing respondents’ self-reports on cheating in dice games to actual cheating behavior, thereby distinguishing between false negatives (underreporting) and false positives (overreporting). The study was implemented as an online survey on Amazon Mechanical Turk (N = 6,505). Our results indicate that the forced-response RRT and the unrelated-question RRT, as implemented in our survey, fail to reduce the level of misreporting compared to conventional direct questioning. For the crosswise-model RRT, we do observe a reduction of false negatives (that is, an increase in the proportion of cheaters who admit having cheated). At the same time, however, there is an increase in false positives (that is, an increase in non-cheaters who falsely admit having cheated). Overall, our findings suggest that none of the implemented sensitive question techniques substantially outperforms direct questioning. Furthermore, our study demonstrates the importance of distinguishing between false negatives and false positives when evaluating the validity of sensitive question techniques.
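For readers unfamiliar with how prevalence is recovered from the indirect answers the abstract refers to, the following is a minimal sketch of the standard crosswise-model estimator, not the analysis code used in the paper (see the supplemental log files for that). All numbers (true prevalence, the known probability of the unrelated question, the random seed) are illustrative assumptions: respondents report only whether their answers to the sensitive item and an unrelated item with known prevalence p coincide, and the trait prevalence is recovered as pi_hat = (lambda_hat + p - 1) / (2p - 1).

import numpy as np

rng = np.random.default_rng(42)

# Assumed values for illustration only.
pi_true = 0.30   # true prevalence of the sensitive trait (e.g., cheating)
p = 2 / 12       # known prevalence of the unrelated question
                 # (e.g., "Is your mother's birthday in January or February?")
n = 6505         # sample size, matching the study's N purely for scale

# Simulate each respondent's sensitive status and unrelated answer.
sensitive = rng.random(n) < pi_true
unrelated = rng.random(n) < p

# Crosswise answer: True if "both yes or both no", False if "exactly one yes".
same = sensitive == unrelated

# Observed share of "same" answers: lambda = pi*p + (1 - pi)*(1 - p)
lam_hat = same.mean()

# Method-of-moments estimate of the prevalence (requires p != 0.5).
pi_hat = (lam_hat + p - 1) / (2 * p - 1)
print(f"estimated prevalence: {pi_hat:.3f} (true: {pi_true})")

Because no individual answer reveals the sensitive status, only the aggregate share of "same" responses is informative; this is also why individual-level validation designs like the one in the paper are needed to separate false negatives from false positives.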

Item Type: Working Paper
Division/Institute: 03 Faculty of Business, Economics and Social Sciences > Social Sciences > Institute of Sociology
UniBE Contributor: Jann, Ben
Subjects: 300 Social sciences, sociology & anthropology
Series: University of Bern Social Sciences Working Papers
Publisher: University of Bern
Language: English
Submitter: Ben Jann
Date Deposited: 21 Jun 2016 10:51
Last Modified: 05 Dec 2022 14:55
BORIS DOI: 10.7892/boris.81526
URI: https://boris.unibe.ch/id/eprint/81526
