Abstract. A number of studies have shown that nonresponse bias is only weakly related to a survey's nonresponse rate, even though the mathematics of nonresponse seems clear: the rate of nonresponse should be related to the level of nonresponse bias. The first paradox, then, is why nonresponse rates predict nonresponse bias so poorly. This paper examines several theories about why the relationship between the nonresponse rate and nonresponse bias is apparently so weak. Because of this weak relationship, several alternatives to the nonresponse rate have been proposed as measures of the risk of nonresponse bias. An influential meta-analysis by Groves and Peytcheva, however, indicates that most of the variance in nonresponse bias lies within surveys rather than across surveys. This suggests that no single number can predict the average level of nonresponse bias for a survey. The second paradox, then, is that no survey-level measure is likely to be useful for assessing the overall quality of a data collection effort or for managing the fieldwork. Finally, many researchers have proposed adaptive designs as a way to improve data quality. However, given the first two paradoxes (the weak relationship between nonresponse rates and nonresponse bias, and the absence of a good summary measure of the impact of nonresponse), it is not clear how effectively researchers can redirect field efforts to improve the quality of a survey. The paper reviews both empirical studies and simulation studies evaluating responsive and adaptive designs and finds that, as expected, the gains are small or nonexistent.
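The mathematical link the abstract alludes to can be made explicit with the standard deterministic expression for the bias of the unadjusted respondent mean (the notation below is the conventional one, not taken from the abstract):

$$
\mathrm{Bias}(\bar{y}_r) = \frac{M}{N}\left(\bar{Y}_r - \bar{Y}_m\right)
$$

where $M/N$ is the nonresponse rate and $\bar{Y}_r$ and $\bar{Y}_m$ are the means for respondents and nonrespondents, respectively. The rate enters multiplicatively, but so does the respondent–nonrespondent difference, which varies from estimate to estimate within the same survey; this is one route to the weak empirical relationship the abstract describes.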
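The first paradox can be illustrated with a minimal simulation sketch. The scenario below is hypothetical (not from the paper): two surveys with essentially the same nonresponse rate, where response is unrelated to the survey variable in one and strongly related to it in the other, yield very different biases.

```python
import random

random.seed(1)

def simulate(n, resp_prob):
    """Draw a population, apply a response mechanism, and return
    the nonresponse rate and the bias of the respondent mean."""
    pop = [random.gauss(0, 1) for _ in range(n)]
    resp = [y for y in pop if random.random() < resp_prob(y)]
    nonresp_rate = 1 - len(resp) / n
    bias = sum(resp) / len(resp) - sum(pop) / len(pop)
    return nonresp_rate, bias

# (a) Response unrelated to the survey variable y -> negligible bias.
rate_a, bias_a = simulate(100_000, lambda y: 0.5)

# (b) Response strongly related to y -> large bias at the same ~50% rate.
rate_b, bias_b = simulate(100_000, lambda y: 0.8 if y > 0 else 0.2)
```

Both runs produce a nonresponse rate near 50%, yet only the second produces a substantial bias, so the rate alone cannot distinguish them.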