Benefit receipt in major household surveys is often underreported. This misreporting biases estimates of the economic circumstances of disadvantaged populations, program take-up, the distributional effects of government programs, and other program effects. We use administrative data on Food Stamp Program (FSP) participation matched to American Community Survey (ACS) and Current Population Survey (CPS) household data. We show that nearly thirty-five percent of true recipient households do not report receipt in the ACS and fifty percent do not report receipt in the CPS. Misreporting, both false negatives and false positives, varies with individual characteristics, leading to complicated biases in FSP analyses. We then directly examine the determinants of program receipt using our combined administrative and survey data. The combined data allow us to examine accurate participation using individual characteristics missing from administrative data. Our results differ from conventional estimates based on survey data alone, which understate participation by single parents, non-whites, low-income households, and other groups. To evaluate the use of Census Bureau imputed ACS and CPS data, we also examine whether our estimates using survey data alone are closer to those using the accurate combined data when imputed survey observations are excluded. Interestingly, excluding the imputed observations leads to worse ACS estimates but has less effect on the CPS estimates.
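As a rough illustration of how misreporting that varies with household characteristics can distort subgroup comparisons, consider the minimal simulation below. All rates, group labels, and variable names are invented for the sketch; it does not use the paper's matched ACS/CPS data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical subgroups: 1 = single-parent household, 0 = other (illustrative only).
single_parent = rng.binomial(1, 0.3, n)

# Assumed true participation rates by group (made-up numbers).
p_true = np.where(single_parent == 1, 0.60, 0.20)
receives = rng.binomial(1, p_true)

# Assumed misreporting: recipients fail to report (false negatives) at a higher
# rate in one group; a small false-positive rate applies to non-recipients.
fn_rate = np.where(single_parent == 1, 0.45, 0.25)
fp_rate = 0.02
reports = np.where(receives == 1,
                   rng.binomial(1, 1 - fn_rate),
                   rng.binomial(1, fp_rate, n))

for g in (1, 0):
    mask = single_parent == g
    print(f"group {g}: true rate {receives[mask].mean():.2f}, "
          f"survey rate {reports[mask].mean():.2f}")
# The survey-based gap between the groups is smaller than the true gap,
# understating participation by the group with the higher false-negative rate.
```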
-
Using Linked Survey and Administrative Data to Better Measure Income: Implications for Poverty, Program Effectiveness and Holes in the Safety Net
October 2015
Working Paper Number:
CES-15-35
We examine the consequences of underreporting of transfer programs in household survey data for several prototypical analyses of low-income populations. We focus on the Current Population Survey (CPS), the source of official poverty and inequality statistics, but provide evidence that our qualitative conclusions are likely to apply to other surveys. We link administrative data for food stamps, TANF, General Assistance, and subsidized housing from New York State to the CPS at the individual level. Program receipt in the CPS is missed for over one-third of housing assistance recipients, 40 percent of food stamp recipients, and 60 percent of TANF and General Assistance recipients. Dollars of benefits are also undercounted for reporting recipients, particularly for TANF, General Assistance, and housing assistance. We find that the survey data sharply understate the income of poor households, as conjectured in past work by one of the authors. Underreporting in the survey data also greatly understates the effects of anti-poverty programs and changes our understanding of program targeting, often making it seem that welfare programs are less targeted to both the very poorest and middle-income households than they actually are. Using the combined data rather than survey data alone, the poverty-reducing effect of all programs together is nearly doubled, while the effect of housing assistance is tripled. We also re-examine the coverage of the safety net, specifically the share of people without work or program receipt. Using the administrative measures of program receipt rather than the survey ones often reduces the share of single mothers falling through the safety net by one-half or more.
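A stylized sketch of the kind of calculation behind a "poverty-reducing effect" comparison, assuming a toy household file with survey-reported and administrative benefit amounts. The column names and every dollar figure are hypothetical and chosen only so the survey-based effect comes out smaller than the administrative-based one.

```python
import pandas as pd

# Toy household records (all values hypothetical); 'threshold' is the
# household's poverty threshold, benefit columns are annual dollars.
df = pd.DataFrame({
    "pretransfer_income": [17000, 18000, 23000, 11000],
    "threshold":          [20000, 20000, 25000, 16000],
    "snap_survey":        [0, 2400, 0, 1800],       # reported in the survey
    "snap_admin":         [3600, 2400, 2400, 1800]  # from administrative records
})

def poverty_rate(income, threshold):
    return (income < threshold).mean()

base = poverty_rate(df.pretransfer_income, df.threshold)
with_survey = poverty_rate(df.pretransfer_income + df.snap_survey, df.threshold)
with_admin = poverty_rate(df.pretransfer_income + df.snap_admin, df.threshold)

# The program's poverty-reducing effect is the drop in the poverty rate when
# benefits are added to income; underreported survey amounts understate it.
print("effect measured with survey benefits:", base - with_survey)
print("effect measured with admin benefits: ", base - with_admin)
```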
View Full
Paper PDF
-
BIAS IN FOOD STAMPS PARTICIPATION ESTIMATES IN THE PRESENCE OF MISREPORTING ERROR
March 2013
Working Paper Number:
CES-13-13
This paper focuses on how survey misreporting of food stamp receipt can bias demographic estimates of program participation. The Food Stamp Program is a federally funded program that subsidizes the nutrition of low-income households. To improve the reach of the program, studies of how participation varies across demographic groups have been conducted using census data. Census data are subject to substantial misreporting error, both underreporting and over-reporting, which can bias these estimates. The impact of misreporting error on estimation bias is examined by calculating food stamp participation rates, misreporting rates, and bias for selected household characteristics (covariates).
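The underlying calculation can be sketched as follows for a linked file containing a survey report flag and an administrative ("true") receipt flag. The column names and toy values are illustrative, not the paper's data.

```python
import pandas as pd

# Illustrative linked records: 'admin' is receipt in administrative records,
# 'survey' is receipt reported in the census/survey data.
df = pd.DataFrame({
    "admin":  [1, 1, 1, 0, 0, 1, 0, 1],
    "survey": [1, 0, 1, 0, 1, 0, 0, 1],
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
})

def rates(d):
    fn = 1 - d.loc[d.admin == 1, "survey"].mean()   # recipients not reporting
    fp = d.loc[d.admin == 0, "survey"].mean()       # non-recipients reporting
    bias = d.survey.mean() - d.admin.mean()         # survey rate minus true rate
    return pd.Series({"true_rate": d.admin.mean(),
                      "survey_rate": d.survey.mean(),
                      "false_negative": fn, "false_positive": fp, "bias": bias})

print(df.groupby("group")[["admin", "survey"]].apply(rates))
```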
View Full
Paper PDF
-
A METHOD OF CORRECTING FOR MISREPORTING APPLIED TO THE FOOD STAMP PROGRAM
May 2013
Working Paper Number:
CES-13-28
Survey misreporting is known to be pervasive and to bias common statistical analyses. In this paper, I first use administrative data on SNAP receipt and amounts linked to American Community Survey data from New York State to show that survey data can misrepresent the program in important ways. For example, more than 1.4 billion dollars received are not reported in New York State alone. Forty-six percent of dollars received by households with annual income above the poverty line are not reported in the survey data, while only 19 percent are missing below the poverty line. Standard corrections for measurement error cannot remove these biases. I then develop a method to obtain consistent estimates by combining parameter estimates from the linked data with publicly available data. This conditional density method recovers the correct estimates using public-use data only, which addresses the problem that access to linked administrative data is usually restricted. I examine the degree to which this approach can be used to extrapolate across time and geography, in order to address the problem that validation data are often based on a convenience sample. I present evidence from within New York State that the extent of heterogeneity is small enough to make extrapolation work well across both time and geography. Extrapolation to the entire U.S. yields substantive differences from survey data and reduces deviations from official aggregates by a factor of 4 to 9 relative to survey aggregates.
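For a single aggregate rate, the standard misclassification identity gives a feel for what a correction does. This is a simplified illustration, not the paper's conditional density method, and the rates plugged in below are made up.

```python
# The observed (survey) receipt rate q mixes true receipt with misreporting:
#   q = p * (1 - fn) + (1 - p) * fp
# where p is the true rate, fn the false-negative rate among recipients, and
# fp the false-positive rate among non-recipients. When fn and fp are known,
# e.g. from linked validation data, the true rate can be backed out:
def corrected_rate(q, fn, fp):
    return (q - fp) / (1 - fn - fp)

q, fn, fp = 0.12, 0.35, 0.01          # hypothetical survey rate and error rates
print(f"implied true receipt rate: {corrected_rate(q, fn, fp):.3f}")
```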
View Full
Paper PDF
-
The Measurement of Medicaid Coverage in the SIPP: Evidence from California, 1990-1996
September 2002
Working Paper Number:
CES-02-21
This paper studies the accuracy of reported Medicaid coverage in the Survey of Income and Program Participation (SIPP) using a unique data set formed by matching SIPP survey responses to administrative records from the State of California. Overall, we estimate that the SIPP underestimates Medicaid coverage in the California population by about 10 percent. Among SIPP respondents who can be matched to administrative records, we estimate that the probability that someone reports Medicaid coverage in a month when they are actually covered is around 85 percent. The corresponding probability for low-income children is even higher: at least 90 percent. These estimates suggest that the SIPP provides reasonably accurate coverage reports for those who are actually in the Medicaid system. On the other hand, our estimate of the false positive rate (the rate of reported coverage among those who are not covered in the administrative records) is relatively high: 2.5 percent for the sample as a whole, and up to 20 percent for poor children. Some of this is due to errors in the recording of Social Security numbers in the administrative system, rather than to problems in the SIPP.
View Full
Paper PDF
-
Reporting of Indian Health Service Coverage in the American Community Survey
May 2018
Working Paper Number:
carra-2018-04
Response error in surveys affects the quality of data that are relied on for numerous research and policy purposes. We use linked survey and administrative records data to examine reporting of a particular item in the American Community Survey (ACS): health coverage among American Indians and Alaska Natives (AIANs) through the Indian Health Service (IHS). We compare responses to the IHS portion of the 2014 ACS health insurance question to whether or not individuals appear in the 2014 IHS Patient Registration data. We evaluate the extent to which individuals misreport their IHS coverage in the ACS, as well as the characteristics associated with misreporting. We also assess whether the ACS estimates of AIANs with IHS coverage represent an undercount. Our results will be of interest to researchers who rely on survey responses in general and on the ACS health insurance question in particular. Moreover, our analysis contributes to the literature on using administrative records to measure components of survey error.
View Full
Paper PDF
-
MISCLASSIFICATION IN BINARY CHOICE MODELS
May 2013
Working Paper Number:
CES-13-27
We derive the asymptotic bias from misclassification of the dependent variable in binary choice models. Measurement error is necessarily non-classical in this case, which leads to bias in linear and non-linear models even if only the dependent variable is mismeasured. A Monte Carlo study and an application to food stamp receipt show that the bias formulas are useful for analyzing the sensitivity of substantive conclusions, for interpreting biased coefficients, and for identifying features of the estimates that are robust to misclassification. Using administrative records linked to survey data as validation data, we examine estimators that are consistent under misclassification. They can improve estimates if their assumptions hold, but can aggravate the problem if the assumptions are invalid. The estimators differ in their robustness to such violations, which can be improved by incorporating additional information. We propose tests for the presence and nature of misclassification that can help to choose an estimator.
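A small Monte Carlo along these lines, assuming fixed misclassification probabilities for the dependent variable. It is a sketch of the general attenuation phenomenon, not the paper's bias formulas or corrected estimators, and all parameter values are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(size=n)
beta0, beta1 = -1.0, 1.0

# True binary outcome from a logit model.
p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))
y = rng.binomial(1, p)

# Misclassify the dependent variable: alpha0 = P(report 1 | true 0),
# alpha1 = P(report 0 | true 1).
alpha0, alpha1 = 0.02, 0.30
y_obs = np.where(y == 1,
                 rng.binomial(1, 1 - alpha1, n),
                 rng.binomial(1, alpha0, n))

for label, dep in [("true y", y), ("misclassified y", y_obs)]:
    fit = sm.Logit(dep, sm.add_constant(x)).fit(disp=0)
    print(label, "slope estimate:", round(fit.params[1], 3))
# The slope estimated from the misclassified outcome is attenuated toward zero,
# even though only the dependent variable is mismeasured.
```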
View Full
Paper PDF
-
The Use of Administrative Records and the American Community Survey to Study the Characteristics of Undercounted Young Children in the 2010 Census
May 2018
Working Paper Number:
carra-2018-05
Children under age five are historically one of the most difficult segments of the population to enumerate in the U.S. decennial census. The persistent undercount of young children is highest among Hispanics and racial minorities. In this study, we link 2010 Census data to administrative records from government and third party data sources, such as Medicaid enrollment data and tenant rental assistance program records from the Department of Housing and Urban Development, to identify differences between children reported and not reported in the 2010 Census. In addition, we link children in administrative records to the American Community Survey to identify various characteristics of households with children under age five who may have been missed in the last census. This research contributes to what is known about the demographic, socioeconomic, and household characteristics of young children undercounted by the census. Our research also informs the potential benefits of using administrative records and surveys to supplement the U.S. Census Bureau child population enumeration efforts in future decennial censuses.
View Full
Paper PDF
-
Medicare Coverage and Reporting
December 2016
Working Paper Number:
carra-2016-12
Medicare coverage of the older population in the United States is widely recognized as being nearly universal. Recent statistics from the Current Population Survey Annual Social and Economic Supplement (CPS ASEC) indicate that 93 percent of individuals aged 65 and older were covered by Medicare in 2013. Those without Medicare include people who are not eligible for the program, though the CPS ASEC estimate may also be affected by misreporting. Using linked data from the CPS ASEC and the Medicare Enrollment Database (i.e., the Medicare administrative data), we estimate the extent to which individuals misreport their Medicare coverage. We focus on those who report having Medicare but are not enrolled (false positives) and those who do not report having Medicare but are enrolled (false negatives). We use regression analyses to evaluate factors associated with both types of misreporting, including socioeconomic, demographic, and household characteristics. We then provide estimates of the implied Medicare-covered, insured, and uninsured older population, taking into account misreporting in the CPS ASEC. We find an undercount of 4.5 percent in the CPS ASEC estimate of the Medicare-covered population. This misreporting is not random: characteristics associated with misreporting include citizenship status, year of entry, labor force participation, Medicare coverage of others in the household, disability status, and imputation of Medicare responses. When we adjust the CPS ASEC estimates to account for misreporting, Medicare coverage of the population aged 65 and older increases from 93.4 percent to 95.6 percent, while the uninsured rate decreases from 1.4 percent to 1.3 percent.
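The adjustment is essentially an accounting exercise over the linked records. The counts below are invented and chosen only to mirror the magnitudes described in the abstract; they are not the paper's estimates.

```python
# Hypothetical counts for a linked CPS ASEC / enrollment file of people 65+.
n = 1_000_000
reported_covered = 934_000          # report Medicare in the survey
false_positives = 6_000             # report coverage but are not enrolled
false_negatives = 28_000            # enrolled but do not report coverage

# Adjusted coverage: drop false positives, add back false negatives.
adjusted_covered = reported_covered - false_positives + false_negatives
print("survey coverage rate:  ", reported_covered / n)
print("adjusted coverage rate:", adjusted_covered / n)
```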
View Full
Paper PDF
-
Within and Across County Variation in SNAP Misreporting: Evidence from Linked ACS and Administrative Records
July 2014
Working Paper Number:
carra-2014-05
This paper examines sub-state spatial and temporal variation in misreporting of participation in the Supplemental Nutrition Assistance Program (SNAP) using several years of the American Community Survey linked to SNAP administrative records from New York (2008-2010) and Texas (2006-2009). I calculate county false-negative (FN) and false-positive (FP) rates for each year of observation and find that, within a given state and year, there is substantial heterogeneity in FN rates across counties. In addition, I find evidence that FN rates (but not FP rates) persist over time within counties. This persistence in FN rates is strongest among more populous counties, suggesting that when noise from sampling variation is not an issue, some counties have consistently high FN rates while others have consistently low FN rates. This finding is important for understanding how misreporting might bias estimates of sub-state SNAP participation rates, changes in those participation rates, and effects of program participation. This presentation was given at the CARRA Seminar on June 27, 2013.
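A sketch of the county-level calculation, assuming a linked record-level file with county and year identifiers. Column names and values are hypothetical, and the toy file is far too small to say anything about persistence; it only shows the mechanics.

```python
import pandas as pd

# Illustrative linked records: admin SNAP receipt vs. survey-reported receipt.
df = pd.DataFrame({
    "county": ["X", "X", "X", "Y", "Y", "X", "X", "Y", "Y", "Y"],
    "year":   [2008, 2008, 2008, 2008, 2008, 2009, 2009, 2009, 2009, 2009],
    "admin":  [1, 1, 0, 1, 1, 1, 1, 0, 1, 1],
    "survey": [1, 0, 0, 1, 1, 1, 0, 0, 1, 1],
})

# County-by-year false-negative rate: share of true recipients not reporting.
recipients = df[df.admin == 1]
fn = (recipients.groupby(["county", "year"])["survey"]
      .apply(lambda s: 1 - s.mean())
      .unstack("year"))
print(fn)

# Persistence: correlation of county FN rates across adjacent years
# (trivial here with two toy counties, meaningful with many counties).
print("year-to-year correlation:", fn[2008].corr(fn[2009]))
```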
View Full
Paper PDF
-
Incorporating Administrative Data in Survey Weights for the 2018-2022 Survey of Income and Program Participation
October 2024
Working Paper Number:
CES-24-58
Response rates to the Survey of Income and Program Participation (SIPP) have declined over time, raising the potential for nonresponse bias in survey estimates. A potential solution is to leverage administrative data from government agencies and third-party data providers when constructing survey weights. In this paper, we modify various parts of the SIPP weighting algorithm to incorporate such data. We create these new weights for the 2018 through 2022 SIPP panels and examine how the new weights affect survey estimates. Our results show that, before weighting adjustments, SIPP respondents in these panels have higher socioeconomic status than the general population. Existing weighting procedures reduce many of these differences. Comparing SIPP estimates between the production weights and the administrative data-based weights yields changes that are not uniform across the joint income and program participation distribution. Unlike other Census Bureau household surveys, SIPP shows no large increase in nonresponse bias due to the COVID-19 pandemic. In summary, the magnitude and sign of nonresponse bias in SIPP are complicated, and the existing weighting procedures may change the sign of nonresponse bias for households with certain incomes and program benefit statuses.
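One generic way to pull survey weights toward administrative benchmarks is iterative scaling of the weights to match benchmark totals, in the spirit of raking. The sketch below is a textbook-style illustration under invented targets and variable names, not the SIPP production weighting algorithm.

```python
import pandas as pd

# Toy respondent file: base weights plus two flags with known administrative
# benchmark totals (all numbers invented).
df = pd.DataFrame({
    "weight": [100.0, 100.0, 100.0, 100.0, 100.0, 100.0],
    "snap":   [1, 1, 0, 0, 0, 1],      # program receipt flag from admin records
    "lowinc": [1, 0, 1, 0, 1, 0],      # low-income flag
})
targets = {"snap": 380.0, "lowinc": 340.0}   # benchmark weighted totals

# Iteratively scale weights so each flagged subgroup hits its benchmark total.
for _ in range(50):
    for var, target in targets.items():
        mask = df[var] == 1
        df.loc[mask, "weight"] *= target / df.loc[mask, "weight"].sum()

for var, target in targets.items():
    print(var, "weighted total:", round(df.loc[df[var] == 1, "weight"].sum(), 1),
          "target:", target)
```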
View Full
Paper PDF