The Household Pulse Survey (HPS) conducted by the U.S. Census Bureau is a unique survey that provided timely data on the effects of the COVID-19 Pandemic on American households and continues to provide data on other emergent social and economic issues. Because the survey has a response rate in the single digits and only has an online response mode, there are concerns about nonresponse and coverage bias. In this paper, we match administrative data from government agencies and third-party data to HPS respondents to examine how representative they are of the U.S. population. For comparison, we create a benchmark of American Community Survey (ACS) respondents and nonrespondents and include the ACS respondents as another point of reference. Overall, we find that the HPS is less representative of the U.S. population than the ACS. However, performance varies across administrative variables, and the existing weighting adjustments appear to greatly improve the representativeness of the HPS. Additionally, we look at household characteristics by their email domain to examine the effects on coverage from limiting email messages in 2023 to addresses from the contact frame with at least 90% deliverability rates, finding no clear change in the representativeness of the HPS afterwards.
-
Incorporating Administrative Data in Survey Weights for the 2018-2022 Survey of Income and Program Participation
October 2024
Working Paper Number:
CES-24-58
Response rates to the Survey of Income and Program Participation (SIPP) have declined over time, raising the potential for nonresponse bias in survey estimates. A potential solution is to leverage administrative data from government agencies and third-party data providers when constructing survey weights. In this paper, we modify various parts of the SIPP weighting algorithm to incorporate such data. We create these new weights for the 2018 through 2022 SIPP panels and examine how the new weights affect survey estimates. Our results show that before weighting adjustments, SIPP respondents in these panels have higher socioeconomic status than the general population. Existing weighting procedures reduce many of these differences. Comparing SIPP estimates between the production weights and the administrative data-based weights yields changes that are not uniform across the joint income and program participation distribution. Unlike other Census Bureau household surveys, there is no large increase in nonresponse bias in SIPP due to the COVID-19 Pandemic. In summary, the magnitude and sign of nonresponse bias in SIPP are complicated, and the existing weighting procedures may change the sign of nonresponse bias for households with certain incomes and program benefit statuses.
-
Incorporating Administrative Data in Survey Weights for the Basic Monthly Current Population Survey
January 2024
Working Paper Number:
CES-24-02
Response rates to the Current Population Survey (CPS) have declined over time, raising the potential for nonresponse bias in key population statistics. A potential solution is to leverage administrative data from government agencies and third-party data providers when constructing survey weights. In this paper, we take two approaches. First, we use administrative data to build a non-parametric nonresponse adjustment step while leaving the calibration to population estimates unchanged. Second, we use administratively linked data in the calibration process, matching income data from the Internal Revenue Service and state agencies, demographic data from the Social Security Administration and the decennial census, and industry data from the Census Bureau's Business Register to both responding and nonresponding households. We use the matched data in the household nonresponse adjustment of the CPS weighting algorithm, which changes the weights of respondents to account for differential nonresponse rates among subpopulations.
After running the experimental weighting algorithm, we compare estimates of the unemployment rate and labor force participation rate between the experimental weights and the production weights. Before March 2020, estimates of the labor force participation rate using the experimental weights are 0.2 percentage points higher than the original estimates, with minimal effect on the unemployment rate. After March 2020, the new labor force participation rates are similar, but the unemployment rate is about 0.2 percentage points higher in some months during the height of COVID-related interviewing restrictions. These results suggest that if there is any nonresponse bias present in the CPS, its magnitude is comparable to the typical margin of error of the unemployment rate estimate. Additionally, the results are broadly similar across demographic groups and states, as well as under alternative weighting methodologies. Finally, we discuss how our estimates compare to those from earlier papers that calculate estimates of bias in key CPS labor force statistics.
This paper is for research purposes only. No changes to production are being implemented at this time.
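The household nonresponse adjustment step described above has a simple general shape: within each adjustment cell, respondent weights are scaled up so they also represent the nonrespondents in that cell. The sketch below illustrates that generic technique with hypothetical cells and data; it is not the CPS production specification.

```python
from collections import defaultdict

def nonresponse_adjust(households, cell_key):
    """Scale respondent weights within cells so they represent all sampled units.

    households: list of dicts with 'weight', 'responded', and cell variables.
    cell_key: function mapping a household to its adjustment cell.
    Returns a dict of adjusted weights keyed by household index.
    """
    total = defaultdict(float)  # weighted count of all sampled households per cell
    resp = defaultdict(float)   # weighted count of respondents per cell
    for hh in households:
        cell = cell_key(hh)
        total[cell] += hh["weight"]
        if hh["responded"]:
            resp[cell] += hh["weight"]

    adjusted = {}
    for i, hh in enumerate(households):
        if hh["responded"]:
            adjusted[i] = hh["weight"] * total[cell_key(hh)] / resp[cell_key(hh)]
    return adjusted

# Hypothetical cells defined by an administrative (e.g., linked income) flag.
sample = [
    {"weight": 100.0, "responded": True,  "low_income": True},
    {"weight": 100.0, "responded": False, "low_income": True},
    {"weight": 100.0, "responded": True,  "low_income": False},
]
w = nonresponse_adjust(sample, lambda hh: hh["low_income"])
# Respondent 0 absorbs the nonrespondent's weight in its cell: 100 * (200/100) = 200.
```

Linking administrative data to nonrespondents allows the cells to be defined by variables (such as income) observed for both respondents and nonrespondents, rather than only by frame characteristics.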
-
The Impact of Household Surveys on 2020 Census Self-Response
July 2022
Working Paper Number:
CES-22-24
Households that were sampled in 2019 for the American Community Survey (ACS) had lower self-response rates to the 2020 Census. The magnitude varied from -1.5 percentage points for households sampled in January 2019 to -15.1 percentage points for households sampled in December 2019. Similar effects are found for the Current Population Survey (CPS) as well.
-
Investigating the Use of Administrative Records in the Consumer Expenditure Survey
March 2018
Working Paper Number:
carra-2018-01
In this paper, we investigate the potential of applying administrative records income data to the Consumer Expenditure (CE) survey to inform measurement error properties of CE estimates, supplement respondent-collected data, and estimate the representativeness of the CE survey by income level. We match individual responses to Consumer Expenditure Quarterly Interview Survey data collected from July 2013 through December 2014 to IRS administrative data in order to analyze CE questions on wages, social security payroll deductions, self-employment income receipt and retirement income. We find that while wage amounts are largely in alignment between the CE and administrative records in the middle of the wage distribution, there is evidence that wages are over-reported to the CE at the bottom of the wage distribution and under-reported at the top of the wage distribution. We find mixed evidence for alignment between the CE and administrative records on questions covering payroll deductions and self-employment income receipt, but find substantial divergence between CE responses and administrative records when examining retirement income. In addition to the analysis using person-based linkages, we also match responding and non-responding CE sample units to the universe of IRS 1040 tax returns by address to examine non-response bias. We find that non-responding households are substantially richer than responding households, and that very high income households are less likely to respond to the CE.
-
The Nature of the Bias When Studying Only Linkable Person Records: Evidence from the American Community Survey
April 2014
Working Paper Number:
carra-2014-08
Record linkage across survey and administrative records sources can greatly enrich data and improve their quality. The linkage can reduce respondent burden and nonresponse follow-up costs. This is particularly important in an era of declining survey response rates and tight budgets. Record linkage also creates statistical bias, however. The U.S. Census Bureau links person records through its Person Identification Validation System (PVS), assigning each record a Protected Identification Key (PIK). It is not possible to reliably assign a PIK to every record, either due to insufficient identifying information or because the information does not uniquely match any of the administrative records used in the person validation process. Non-random ability to assign a PIK can potentially inject bias into statistics using linked data. This paper studies the nature of this bias using the 2009 and 2010 American Community Survey (ACS). The ACS is well-suited for this analysis, as it contains a rich set of person characteristics that can describe the bias. We estimate probit models for whether a record is assigned a PIK. The results suggest that young children, minorities, residents of group quarters, immigrants, recent movers, low-income individuals, and non-employed individuals are less likely to receive a PIK in the 2009 ACS. Changes to the PVS process in 2010 significantly addressed the young children deficit, attenuated the other biases, and increased the validated records share from 88.1 to 92.6 percent (person-weighted).
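A first-pass way to see how non-random PIK assignment can bias linked-data statistics is to compare the person-weighted PIK rate across subgroups; the paper does this more formally with probit models. The groups and numbers below are hypothetical.

```python
def pik_rate(records, group=None):
    """Person-weighted share of records assigned a PIK, optionally within a subgroup."""
    subset = [r for r in records if group is None or r["group"] == group]
    total = sum(r["weight"] for r in subset)
    piked = sum(r["weight"] for r in subset if r["has_pik"])
    return piked / total

# Hypothetical person records with survey weights.
records = [
    {"weight": 1.0, "has_pik": True,  "group": "adult"},
    {"weight": 1.0, "has_pik": True,  "group": "adult"},
    {"weight": 1.0, "has_pik": True,  "group": "child"},
    {"weight": 1.0, "has_pik": False, "group": "child"},
]
overall = pik_rate(records)                                          # 0.75
child_gap = pik_rate(records, "adult") - pik_rate(records, "child")  # 0.5
```

A nonzero gap means analyses restricted to PIKed records will underrepresent the low-rate group unless the weights are adjusted for linkage propensity.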
-
Connected and Uncooperative: The Effects of Homogenous and Exclusive Social Networks on Survey Response Rates and Nonresponse Bias
January 2024
Working Paper Number:
CES-24-01
Social capital, the strength of people's friendship networks and community ties, has been hypothesized as an important determinant of survey participation. Investigating this hypothesis has been difficult given data constraints. In this paper, we provide insights by investigating how response rates and nonresponse bias in the American Community Survey are correlated with county-level social network data from Facebook. We find that areas of the United States where people have more exclusive and homogenous social networks have higher nonresponse bias and lower response rates. These results provide further evidence that the effects of social capital may not be simply a matter of whether people are socially isolated or not, but also what types of social connections people have and the sociodemographic heterogeneity of their social networks.
-
Receipt of Public and Private Food Assistance Across the Rural-Urban Continuum Before and During the COVID-19 Pandemic: Analysis of Current Population Survey Data
August 2025
Working Paper Number:
CES-25-51
Background: The nutrition safety net in the United States is critical to supporting food security among households in need. Food assistance in the United States includes both government-funded food programs and private community-based providers who distribute food to households in need. The COVID-19 pandemic affected food security and the use of private and public food assistance resources. However, this may have differed for households residing in urban versus rural areas. We explored receipt of Supplemental Nutrition Assistance Program (SNAP) benefits or food from community-based emergency food providers across a detailed measure of the rural-urban continuum before and during the COVID-19 pandemic.
Methods: We linked restricted use Current Population Survey Food Security Supplement data to census-tract level United States Department of Agriculture Rural-Urban Commuting Area codes to estimate prevalence of self-reported SNAP participation and receipt of emergency food support across temporal (2015-2019 versus 2020-2021) and socio-spatial (urban, large rural city/town, small rural town, or isolated rural town/area) dimensions. We report prevalences as point estimates with 95% confidence intervals, all weighted for national representation.
Results:
The weighted prevalence of self-reported SNAP participation was 8.9% (8.7-9.2%) in 2015-2019 and 9.1% (8.5-9.5%) in 2020-2021 in urban areas, 11.4% (10.8-12.2%) in 2015-2019 and 11.6% (10.5-12.9%) in 2020-2021 in large rural towns/cities, 13.4% (12.3-14.6%) in 2015-2019 and 12.3% (10.5-14.5%) in 2020-2021 in small rural towns, and 9.7% (8.6-10.9%) in 2015-2019 and 10.9% (8.8-13.4%) in 2020-2021 in isolated rural towns. The weighted prevalence of self-reported receipt of emergency food was 4.9% (4.8-5.1%) in 2015-2019 and 6.2% (5.8-6.5%) in 2020-2021 in urban areas, 6.8% (6.2-7.4%) in 2015-2019 and 7.6% (6.6-8.6%) in 2020-2021 in large rural towns/cities, 8.1% (7.3-9.1%) in 2015-2019 and 7.1% (5.7-8.8%) in 2020-2021 in small rural towns, and 6.8% (5.9-7.7%) in 2015-2019 and 8.5% (6.7-10.6%) in 2020-2021 in isolated rural towns.
Conclusion: Households in rural communities use public and private food assistance at higher rates than households in urban areas, but usage varies across communities depending on the level of rurality.
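The estimates above are weighted proportions with 95% confidence intervals. As a rough illustration of the calculation, the sketch below computes a weighted proportion with a normal-approximation interval based on a Kish effective sample size; this is a simplification of what a production survey estimate would use (e.g., replicate-weight variance estimation), and the microdata are hypothetical.

```python
import math

def weighted_prevalence_ci(values, weights, z=1.96):
    """Weighted proportion with a normal-approximation 95% CI.

    Simplification: uses a Kish effective-sample-size variance rather than
    the replicate weights a production survey estimate would rely on.
    """
    wsum = sum(weights)
    p = sum(w for v, w in zip(values, weights) if v) / wsum
    # Kish effective sample size discounts for weight variability.
    n_eff = wsum ** 2 / sum(w * w for w in weights)
    se = math.sqrt(p * (1 - p) / n_eff)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical microdata: 1 = reported SNAP receipt, equal weights.
values = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]
weights = [1.0] * 10
p, lo, hi = weighted_prevalence_ci(values, weights)
```

With unequal weights, n_eff falls below the raw count and the interval widens accordingly.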
-
Understanding the Quality of Alternative Citizenship Data Sources for the 2020 Census
August 2018
Working Paper Number:
CES-18-38R
This paper examines the quality of citizenship data in self-reported survey responses compared to administrative records and evaluates options for constructing an accurate count of resident U.S. citizens. Person-level discrepancies between survey-collected citizenship data and administrative records are more pervasive than previously reported in studies comparing survey and administrative data aggregates. Our results imply that survey-sourced citizenship data produce significantly lower estimates of the noncitizen share of the population than would be produced from currently available administrative records; both the survey-sourced and administrative data have shortcomings that could contribute to this difference. Our evidence is consistent with noncitizen respondents misreporting their own citizenship status and failing to report that of other household members. At the same time, currently available administrative records may miss some naturalizations and capture others with a delay. The evidence in this paper also suggests that adding a citizenship question to the 2020 Census would lead to lower self-response rates in households potentially containing noncitizens, resulting in higher fieldwork costs and a lower-quality population count.
-
Response Error and the Medicaid Undercount in the CPS
December 2016
Working Paper Number:
carra-2016-11
The Current Population Survey Annual Social and Economic Supplement (CPS ASEC) is an important source for estimates of the uninsured population. Previous research has shown that survey estimates produce an undercount of beneficiaries compared to Medicaid enrollment records. We extend past work by examining the Medicaid undercount in the 2007-2011 CPS ASEC compared to enrollment data from the Medicaid Statistical Information System for calendar years 2006-2010. By linking individuals across datasets, we analyze two types of response error regarding Medicaid enrollment: false negative error and false positive error. We use regression analysis to identify factors associated with these two types of response error in the 2011 CPS ASEC. We find that the Medicaid undercount was between 22 and 31 percent from 2007 to 2011. In 2011, the false negative rate was 40 percent, and 27 percent of Medicaid reports in CPS ASEC were false positives. False negative error is associated with the duration of enrollment in Medicaid, enrollment in Medicare and private insurance, and Medicaid enrollment in the survey year. False positive error is associated with enrollment in Medicare and shared Medicaid coverage in the household. We discuss implications for survey reports of health insurance coverage and for estimating the uninsured population.
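With person-level linkage, the two error rates reduce to conditional proportions: false negatives among administrative enrollees, false positives among survey-reported enrollees. A minimal sketch with hypothetical linked records:

```python
def medicaid_error_rates(linked):
    """Compute false negative and false positive rates from linked records.

    linked: list of (survey_report, admin_enrolled) boolean pairs.
    False negative rate: share of administrative enrollees not reporting Medicaid.
    False positive rate: share of survey Medicaid reports with no enrollment record.
    """
    enrolled = [s for s, a in linked if a]   # survey reports among true enrollees
    reported = [a for s, a in linked if s]   # admin status among survey reporters
    fn_rate = sum(1 for s in enrolled if not s) / len(enrolled)
    fp_rate = sum(1 for a in reported if not a) / len(reported)
    return fn_rate, fp_rate

# Hypothetical: 5 administrative enrollees (2 unreported), 4 survey reports
# (1 with no matching enrollment record).
pairs = [(True, True), (True, True), (True, True),
         (False, True), (False, True), (True, False)]
fn, fp = medicaid_error_rates(pairs)
# fn = 2/5 = 0.4; fp = 1/4 = 0.25
```

Note that the two rates have different denominators, which is why an aggregate undercount can coexist with a substantial false positive rate.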
-
Where Are Your Parents? Exploring Potential Bias in Administrative Records on Children
March 2024
Working Paper Number:
CES-24-18
This paper examines potential bias in the Census Household Composition Key's (CHCK) probabilistic parent-child linkages. By linking CHCK data to the American Community Survey (ACS), we reveal disparities in parent-child linkages among specific demographic groups and find that characteristics of children that can and cannot be linked to the CHCK vary considerably from the larger population. In particular, we find that children from low-income, less educated households and of Hispanic origin are less likely to be linked to a mother or a father in the CHCK. We also highlight some data considerations when using the CHCK.