-
Nonresponse and Coverage Bias in the Household Pulse Survey: Evidence from Administrative Data
October 2024
Working Paper Number:
CES-24-60
The Household Pulse Survey (HPS) conducted by the U.S. Census Bureau is a unique survey that provided timely data on the effects of the COVID-19 pandemic on American households and continues to provide data on other emergent social and economic issues. Because the survey has a response rate in the single digits and offers only an online response mode, there are concerns about nonresponse and coverage bias. In this paper, we match administrative data from government agencies and third-party data to HPS respondents to examine how representative they are of the U.S. population. For comparison, we create a benchmark of American Community Survey (ACS) respondents and nonrespondents and include the ACS respondents as another point of reference. Overall, we find that the HPS is less representative of the U.S. population than the ACS. However, performance varies across administrative variables, and the existing weighting adjustments appear to greatly improve the representativeness of the HPS. Additionally, we examine household characteristics by email domain to assess the coverage effects of limiting email messages in 2023 to contact-frame addresses with deliverability rates of at least 90%, finding no clear change in the representativeness of the HPS afterwards.
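To make the comparison concrete, the sketch below shows one way such a representativeness check could be organized: an administrative benchmark computed over the full frame is compared with unweighted and weighted respondent estimates, and the gap is reported in percentage points. The frame, variable names, and weights are hypothetical placeholders, not the Census Bureau's production files or the paper's actual measures.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical linked frame: one row per sampled address, with administrative
# flags attached and an indicator for whether the address responded to the HPS.
frame = pd.DataFrame({
    "responded":   rng.binomial(1, 0.06, n),        # single-digit response rate
    "weight":      rng.uniform(0.5, 1.5, n),        # placeholder survey weights
    "admin_snap":  rng.binomial(1, 0.12, n),        # e.g., SNAP participation flag
    "admin_wages": rng.binomial(1, 0.70, n),        # e.g., any administrative wage record
})

def representativeness_gap(df, var, weight_col=None):
    """Percentage-point gap between respondent estimates and the full-frame benchmark."""
    benchmark = df[var].mean()                      # population benchmark from admin data
    resp = df[df["responded"] == 1]
    if weight_col is None:
        estimate = resp[var].mean()                 # unweighted respondent mean
    else:
        estimate = np.average(resp[var], weights=resp[weight_col])
    return 100 * (estimate - benchmark)

for var in ["admin_snap", "admin_wages"]:
    print(var,
          round(representativeness_gap(frame, var), 2), "pp unweighted vs.",
          round(representativeness_gap(frame, var, "weight"), 2), "pp weighted")
```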
-
Incorporating Administrative Data in Survey Weights for the 2018-2022 Survey of Income and Program Participation
October 2024
Working Paper Number:
CES-24-58
Response rates to the Survey of Income and Program Participation (SIPP) have declined over time, raising the potential for nonresponse bias in survey estimates. A potential solution is to leverage administrative data from government agencies and third-party data providers when constructing survey weights. In this paper, we modify various parts of the SIPP weighting algorithm to incorporate such data. We create these new weights for the 2018 through 2022 SIPP panels and examine how the new weights affect survey estimates. Our results show that before weighting adjustments, SIPP respondents in these panels have higher socioeconomic status than the general population. Existing weighting procedures reduce many of these differences. Comparing SIPP estimates between the production weights and the administrative data-based weights yields changes that are not uniform across the joint income and program participation distribution. Unlike in other Census Bureau household surveys, we find no large increase in nonresponse bias in SIPP due to the COVID-19 pandemic. In summary, the magnitude and sign of nonresponse bias in SIPP are complicated, and the existing weighting procedures may change the sign of nonresponse bias for households with certain incomes and program benefit statuses.
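As one illustration of how administrative data can enter a weighting step, the sketch below fits a response-propensity model on linked administrative covariates and scales base weights by the inverse of the predicted response probability. This is a generic propensity adjustment with invented variable names, assumed only for illustration; it is not the actual SIPP weighting algorithm described in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical linked panel: administrative covariates observed for both
# respondents and nonrespondents, plus a base (design) weight.
panel = pd.DataFrame({
    "admin_income": rng.lognormal(10.5, 0.8, n),     # e.g., IRS/state wage records
    "snap_flag":    rng.binomial(1, 0.15, n),        # e.g., program participation
    "base_weight":  rng.uniform(500, 2000, n),
})
# Simulated response indicator that rises with socioeconomic status.
p_resp = 1 / (1 + np.exp(-(-2.5 + 0.15 * np.log(panel["admin_income"])
                           - 0.4 * panel["snap_flag"])))
panel["responded"] = rng.binomial(1, p_resp)

# Fit a response-propensity model on the administrative covariates.
X = np.column_stack([np.log(panel["admin_income"]), panel["snap_flag"]])
model = LogisticRegression().fit(X, panel["responded"])
panel["p_hat"] = model.predict_proba(X)[:, 1]

# Inverse-propensity adjustment applied to respondents' base weights.
resp = panel[panel["responded"] == 1].copy()
resp["adj_weight"] = resp["base_weight"] / resp["p_hat"]

# Check: the adjusted respondent mean should move toward the full-sample benchmark.
print("benchmark mean income:", panel["admin_income"].mean().round(0))
print("unweighted respondent mean:", resp["admin_income"].mean().round(0))
print("propensity-adjusted mean:",
      np.average(resp["admin_income"], weights=resp["adj_weight"]).round(0))
```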
-
Estimating the Potential Impact of Combined Race and Ethnicity Reporting on Long-Term Earnings Statistics
September 2024
Working Paper Number:
CES-24-48
We use place of birth information from the Social Security Administration linked to earnings data from the Longitudinal Employer-Household Dynamics Program and detailed race and ethnicity data from the 2010 Census to study how long-term earnings differentials vary by place of birth for different self-identified race and ethnicity categories. We focus on foreign-born persons from countries that are heavily Hispanic and from countries in the Middle East and North Africa (MENA). We find substantial heterogeneity in long-term earnings differentials within country of birth. Some of these differentials will be difficult to detect when the reporting format changes from the current two-question version to the new single-question version, because they depend on self-identifications that place the individual in two distinct categories within the single-question format: specifically, Hispanic and White or Black, and MENA and White or Black. We also study the U.S.-born children of these same immigrants. Long-term earnings differences for the second generation also vary as a function of self-identified ethnicity and race in ways that the change to the single-question format could affect.
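The underlying calculation can be summarized as estimating mean long-term earnings gaps for cells defined by country of birth crossed with self-identified race and ethnicity, relative to a reference group. A minimal sketch of that cell-level comparison is below; the column names and data are invented for illustration, with no controls, and do not reflect the paper's linked SSA, LEHD, and 2010 Census specification.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical person-level file with long-term earnings and detailed
# self-identified race/ethnicity; names and values are illustrative only.
df = pd.DataFrame({
    "country_of_birth": rng.choice(["Mexico", "Iran", "Egypt", "USA"], 20_000),
    "race_ethnicity":   rng.choice(["Hispanic-White", "Hispanic-Other",
                                    "MENA-White", "White", "Black"], 20_000),
    "log_earnings_20yr": rng.normal(10.8, 0.6, 20_000),   # avg log earnings over ~20 years
})

# Long-term earnings differentials: cell means relative to a reference group.
reference = df.loc[(df["country_of_birth"] == "USA") &
                   (df["race_ethnicity"] == "White"), "log_earnings_20yr"].mean()
cells = (df.groupby(["country_of_birth", "race_ethnicity"])["log_earnings_20yr"]
           .mean() - reference)
print(cells.round(3))
```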
-
Citizenship Question Effects on Household Survey Response
June 2024
Working Paper Number:
CES-24-31
Several small-sample studies have predicted that a citizenship question in the 2020 Census would cause a large drop in self-response rates. In contrast, minimal effects were found in Poehler et al.'s (2020) analysis of the 2019 Census Test randomized controlled trial (RCT). We reconcile these findings by linking the addresses in the 2019 Census Test to independently constructed administrative data and analyzing associations between address characteristics and response behavior. We find significant heterogeneity in sensitivity to the citizenship question among households containing Hispanics, naturalized citizens, and noncitizens. Response drops the most for households containing noncitizens ineligible for a Social Security number (SSN). It falls more for households with Latin American-born immigrants than for those with immigrants from other countries. Response drops less for households with U.S.-born Hispanics than for households with noncitizens from Latin America. Reductions in responsiveness occur not only through lower unit self-response rates, but also through increased household roster omissions and internet break-offs. The inclusion of a citizenship question increases the undercount of households with noncitizens. Households with noncitizens also have much higher citizenship question item nonresponse rates than those containing only citizens. The use of tract-level characteristics and significant heterogeneity among Hispanics, the foreign-born, and noncitizens help explain why the effects found by Poehler et al. were so small. Linking administrative microdata with the RCT data expands what we can learn from the RCT.
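Conceptually, the heterogeneity analysis amounts to comparing self-response rates between the citizenship-question and control arms within subgroups defined by the linked administrative data. A toy version of that subgroup contrast is sketched below; the counts are made up and are not the 2019 Census Test results.

```python
import math

def response_gap(resp_treat, n_treat, resp_ctrl, n_ctrl):
    """Treatment-control difference in self-response rates with a normal-approximation SE."""
    p_t, p_c = resp_treat / n_treat, resp_ctrl / n_ctrl
    se = math.sqrt(p_t * (1 - p_t) / n_treat + p_c * (1 - p_c) / n_ctrl)
    return p_t - p_c, se

# Illustrative (not actual) counts for two subgroups defined by linked admin data.
subgroups = {
    "household contains noncitizen w/o SSN": (3_100, 6_000, 3_600, 6_000),
    "all-citizen household":                 (28_500, 54_000, 28_700, 54_000),
}
for name, (rt, nt, rc, nc) in subgroups.items():
    diff, se = response_gap(rt, nt, rc, nc)
    print(f"{name}: gap = {100 * diff:+.1f} pp (SE {100 * se:.1f} pp)")
```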
-
Revisiting Methods to Assign Responses when Race and Hispanic Origin Reporting are Discrepant Across Administrative Records and Third Party Sources
May 2024
Working Paper Number:
CES-24-26
The Best Race and Ethnicity Administrative Records Composite file ('Best Race file') is a composite file that combines Census, federal, and Third Party Data (TPD) sources and applies business rules to assign race and ethnicity values to person records. The first version of the Best Race administrative records composite was constructed in 2015 and has been updated each year to include more recent vintages, when available, of the data sources originally included in the composite file. Where updates were available for a data source, the most recent information for each person was retained, and the business rules were reapplied to assign a single race and a single Hispanic origin value to each person record. The majority of person records on the Best Race file have consistent race and ethnicity information across data sources. Where responses are discrepant across data sources, we apply a series of business rules to assign a single race and ethnicity to each record. To improve the quality of the Best Race administrative records composite, we have begun revising the business rules, which were developed several years ago. This paper discusses the original business rules as well as the implemented changes and their impact on the composite file.
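The general shape of such business rules is a prioritized decision sequence applied when sources disagree. The function below is a simplified, hypothetical example of that pattern (keep a unanimous value, otherwise prefer the most recent self-reported response, otherwise take the most recent administrative value); it is not the actual rule set used for the Best Race file.

```python
from typing import Optional

def assign_single_race(reports: list[dict]) -> Optional[str]:
    """Assign one race value from possibly discrepant source reports.

    Each report is a dict like {"source": "census2010", "year": 2010,
    "self_reported": True, "race": "Black"}. Rules (illustrative only):
      1. If all non-missing reports agree, use that value.
      2. Otherwise prefer the most recent self-reported value.
      3. Otherwise fall back to the most recent administrative value.
    """
    values = [r["race"] for r in reports if r.get("race")]
    if not values:
        return None
    if len(set(values)) == 1:                                 # rule 1: unanimous
        return values[0]
    self_rep = [r for r in reports if r.get("race") and r.get("self_reported")]
    if self_rep:                                              # rule 2: newest self-report
        return max(self_rep, key=lambda r: r["year"])["race"]
    admin = [r for r in reports if r.get("race")]
    return max(admin, key=lambda r: r["year"])["race"]        # rule 3: newest admin value

print(assign_single_race([
    {"source": "census2010", "year": 2010, "self_reported": True,  "race": "White"},
    {"source": "tpd_vendor", "year": 2021, "self_reported": False, "race": "Asian"},
]))   # -> "White" under rule 2
```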
-
Where Are Your Parents? Exploring Potential Bias in Administrative Records on Children
March 2024
Working Paper Number:
CES-24-18
This paper examines potential bias in the probabilistic parent-child linkages of the Census Household Composition Key (CHCK). By linking CHCK data to the American Community Survey (ACS), we reveal disparities in parent-child linkages across demographic groups and find that the characteristics of children who can and cannot be linked in the CHCK differ considerably from those of the larger population. In particular, we find that children from low-income, less-educated households and children of Hispanic origin are less likely to be linked to a mother or a father in the CHCK. We also highlight some data considerations when using the CHCK.
-
Examining Racial Identity Responses Among People with Middle Eastern and North African Ancestry in the American Community Survey
March 2024
Working Paper Number:
CES-24-14
People with Middle Eastern and North African (MENA) backgrounds living in the United States are defined and classified as White by current Federal standards for race and ethnicity, yet many MENA people do not identify as White in surveys, such as those conducted by the U.S. Census Bureau. Instead, they often select 'Some Other Race', if it is provided, and write in MENA responses such as Arab, Iranian, or Middle Eastern. In processing survey data for public release, the Census Bureau classifies these responses as White in accordance with Federal guidance set by the U.S. Office of Management and Budget. Research that uses these edited public data relies on limited information on MENA people's racial identification. To address this limitation, we obtained unedited race responses in the nationally representative American Community Survey from 2005-2019 to better understand how people of MENA ancestry report their race. We also use these data to compare the demographic, cultural, socioeconomic, and contextual characteristics of MENA individuals who identify as White versus those who do not identify as White. We find that one in four MENA people do not select White alone as their racial identity, despite official guidance that defines 'White' as people having origins in any of the original peoples of Europe, the Middle East, or North Africa. A variety of individual and contextual factors are associated with this choice, and some of these factors operate differently for U.S.-born and foreign-born MENA people living in the United States.
-
Incorporating Administrative Data in Survey Weights for the Basic Monthly Current Population Survey
January 2024
Working Paper Number:
CES-24-02
Response rates to the Current Population Survey (CPS) have declined over time, raising the potential for nonresponse bias in key population statistics. A potential solution is to leverage administrative data from government agencies and third-party data providers when constructing survey weights. In this paper, we take two approaches. First, we use administrative data to build a non-parametric nonresponse adjustment step while leaving the calibration to population estimates unchanged. Second, we use administratively linked data in the calibration process, matching income data from the Internal Revenue Service and state agencies, demographic data from the Social Security Administration and the decennial census, and industry data from the Census Bureau's Business Register to both responding and nonresponding households. We use the matched data in the household nonresponse adjustment of the CPS weighting algorithm, which changes the weights of respondents to account for differential nonresponse rates among subpopulations.
After running the experimental weighting algorithm, we compare estimates of the unemployment rate and labor force participation rate between the experimental weights and the production weights. Before March 2020, estimates of the labor force participation rate using the experimental weights are 0.2 percentage points higher than the original estimates, with minimal effect on the unemployment rate. After March 2020, the new labor force participation rates are similar, but the unemployment rate is about 0.2 percentage points higher in some months during the height of COVID-related interviewing restrictions. These results suggest that if any nonresponse bias is present in the CPS, its magnitude is comparable to the typical margin of error of the unemployment rate estimate. Additionally, the results are broadly similar across demographic groups and states, as well as under alternative weighting methodologies. Finally, we discuss how our estimates compare to those from earlier papers that calculate estimates of bias in key CPS labor force statistics.
This paper is for research purposes only. No changes to production are being implemented at this time.
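A non-parametric (weighting-class) nonresponse adjustment of the kind described in the first approach can be sketched as follows: households are grouped into cells defined by linked administrative characteristics, and respondent weights within each cell are inflated by the inverse of the cell's weighted response rate. The variable names and cell definitions below are illustrative assumptions, not the production CPS algorithm.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 80_000

# Hypothetical CPS-like sample with linked administrative cell variables.
hh = pd.DataFrame({
    "income_tercile": rng.integers(0, 3, n),        # from IRS/state income data
    "any_wages":      rng.binomial(1, 0.7, n),      # from linked earnings records
    "base_weight":    rng.uniform(800, 2500, n),
})
hh["responded"] = rng.binomial(1, 0.55 + 0.08 * hh["income_tercile"])

# Weighted response rate within each administrative cell.
cell_cols = ["income_tercile", "any_wages"]
cell_sums = (hh.assign(rw=hh["base_weight"] * hh["responded"])
               .groupby(cell_cols)[["rw", "base_weight"]].sum())
resp_rate = (cell_sums["rw"] / cell_sums["base_weight"]).rename("cell_resp_rate")

# Inflate respondent weights by the inverse of the cell response rate.
hh = hh.join(resp_rate, on=cell_cols)
hh["nr_adj_weight"] = np.where(hh["responded"] == 1,
                               hh["base_weight"] / hh["cell_resp_rate"], 0.0)

# The adjusted respondent weights reproduce the full-sample weight total by construction.
print(hh["base_weight"].sum().round(0), hh["nr_adj_weight"].sum().round(0))
```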
-
The 2010 Census Confidentiality Protections Failed, Here's How and Why
December 2023
Authors:
Lars Vilhuber,
John M. Abowd,
Ethan Lewis,
Nathan Goldschlag,
Robert Ashmead,
Daniel Kifer,
Philip Leclerc,
Rolando A. Rodríguez,
Tamara Adams,
David Darais,
Sourya Dey,
Simson L. Garfinkel,
Scott Moore,
Ramy N. Tadros
Working Paper Number:
CES-23-63
Using only 34 published tables, we reconstruct five variables (census block, sex, age, race, and ethnicity) in the confidential 2010 Census person records. Using the 38-bin age variable tabulated at the census block level, at most 20.1% of reconstructed records can differ from their confidential source on even a single value for these five variables. Using only published data, an attacker can verify that all records in 70% of all census blocks (97 million people) are perfectly reconstructed. The tabular publications in Summary File 1 thus have prohibited disclosure risk similar to the unreleased confidential microdata. Reidentification studies confirm that an attacker can, within blocks with perfect reconstruction accuracy, correctly infer the actual census response on race and ethnicity for 3.4 million vulnerable population uniques (persons with nonmodal characteristics) with 95% accuracy, the same precision as the confidential data achieve and far greater than statistical baselines. The flaw in the 2010 Census framework was the assumption that aggregation prevented accurate microdata reconstruction, justifying weaker disclosure limitation methods than were applied to 2010 Census public microdata. The framework used for 2020 Census publications defends against attacks that are based on reconstruction, as we also demonstrate here. Finally, we show that alternatives to the 2020 Census Disclosure Avoidance System with similar accuracy (enhanced swapping) also fail to protect confidentiality, and those that partially defend against reconstruction attacks (incomplete suppression implementations) destroy the primary statutory use case: data for redistricting all legislatures in the country in compliance with the 1965 Voting Rights Act.
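The core of a reconstruction attack is that, for a small enough block, the published tables can admit only one (or very few) microdata configurations. The toy example below, which uses invented tables unrelated to the actual Summary File 1 publications, enumerates every possible set of person records for a four-person block and keeps only those consistent with two published cross-tabulations; when a single configuration survives, the block is perfectly reconstructed.

```python
from itertools import combinations_with_replacement, product
from collections import Counter

# Toy attribute domains for one census block (far smaller than the real 2010 schema).
SEXES, AGE_BINS, RACES = ["M", "F"], ["0-17", "18-64", "65+"], ["A", "B"]
RECORDS = list(product(SEXES, AGE_BINS, RACES))        # 12 possible person records
BLOCK_SIZE = 4

# Hypothetical "published" block-level tables (sex-by-age and sex-by-race counts).
pub_sex_age = Counter({("M", "0-17"): 1, ("F", "18-64"): 2, ("F", "65+"): 1})
pub_sex_race = Counter({("M", "B"): 1, ("F", "A"): 3})

def consistent(candidate):
    """True if a candidate set of person records reproduces both published tables."""
    return (Counter((s, a) for s, a, r in candidate) == pub_sex_age
            and Counter((s, r) for s, a, r in candidate) == pub_sex_race)

# Enumerate every possible 4-person block and keep the configurations that match.
solutions = [c for c in combinations_with_replacement(RECORDS, BLOCK_SIZE) if consistent(c)]
print(f"{len(solutions)} consistent reconstruction(s):")
for sol in solutions:
    print(sol)
```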
-
Where to Build Affordable Housing? Evaluating the Tradeoffs of Location
December 2023
Working Paper Number:
CES-23-62R
How does the location of affordable housing affect tenant welfare, the distribution of assistance, and broader societal objectives such as racial integration? Using administrative data on tenants of units funded by the Low-Income Housing Tax Credit (LIHTC), we first show that characteristics such as race and proxies for need vary widely across neighborhoods. Despite fixed eligibility requirements, LIHTC developments in more opportunity-rich neighborhoods house tenants who are higher income, more educated, and far less likely to be Black. To quantify the welfare implications, we build a residential choice model in which households choose from both market-rate and affordable housing options, where the latter must be rationed. While building affordable housing in higher-opportunity neighborhoods costs more, it also increases household welfare and reduces city-wide segregation. The gains in household welfare, however, accrue to more moderate-need, non-Black/Hispanic households at the expense of other households. This change in the distribution of assistance is primarily due to a 'crowding out' effect: households that only apply for assistance in higher-opportunity neighborhoods crowd out those willing to apply regardless of location. Finally, other policy levers, such as lowering the income limits used for means-testing, have only limited effects relative to the choice of location.
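The residential choice model can be illustrated with a conditional-logit sketch: each household draws utility from rent, neighborhood quality, and the rent discount on an affordable unit, and the probability of choosing each (neighborhood, market-rate or LIHTC) alternative follows the usual softmax form. All parameters and values below are invented for illustration, not estimates from the paper, and the rationing of affordable units is not modeled here.

```python
import numpy as np

# Choice set: a few neighborhoods, each offering a market-rate and a LIHTC option.
rent_market = np.array([900.0, 1400.0, 2100.0])       # monthly market rent by neighborhood
quality     = np.array([0.2, 0.6, 1.0])               # opportunity index by neighborhood
lihtc_rent  = np.array([700.0, 800.0, 900.0])         # rent-restricted LIHTC rents

def choice_probabilities(income, beta_rent=-0.004, beta_quality=1.5):
    """Conditional-logit probabilities over (neighborhood, market/LIHTC) alternatives."""
    # Utility = rent disutility (scaled by income in thousands) + taste for quality.
    u_market = beta_rent * rent_market / (income / 1000) + beta_quality * quality
    u_lihtc  = beta_rent * lihtc_rent  / (income / 1000) + beta_quality * quality
    u = np.concatenate([u_market, u_lihtc])
    expu = np.exp(u - u.max())                        # softmax with overflow guard
    return expu / expu.sum()

# Lower-income households put more weight on the LIHTC rent discount in high-cost areas.
for income in (20_000, 60_000):
    print(income, np.round(choice_probabilities(income), 3))
```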