Benefit receipt in major household surveys is often underreported. This misreporting biases estimates of the economic circumstances of disadvantaged populations, program take-up, the distributional effects of government programs, and other program effects. We use administrative data on Food Stamp Program (FSP) participation matched to American Community Survey (ACS) and Current Population Survey (CPS) household data. We show that nearly 35 percent of true recipient households do not report receipt in the ACS and 50 percent do not report receipt in the CPS. Misreporting, both false negatives and false positives, varies with individual characteristics, leading to complicated biases in FSP analyses. We then directly examine the determinants of program receipt using our combined administrative and survey data. The combined data allow us to examine accurate participation using individual characteristics missing from administrative data. Our results differ from conventional estimates based on survey data alone, which understate participation by single parents, non-whites, low-income households, and other groups. To evaluate the use of Census Bureau imputed ACS and CPS data, we also examine whether our estimates using survey data alone are closer to those using the accurate combined data when imputed survey observations are excluded. Interestingly, excluding the imputed observations leads to worse ACS estimates but has less effect on the CPS estimates.
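As a rough illustration of the kind of tabulation that linked survey-administrative data make possible, the sketch below computes false negative rates, false positive rates, and the resulting bias in survey-measured participation by subgroup. The column names (admin_receipt, survey_receipt) are hypothetical, not those of the restricted-use files.

```python
import pandas as pd

def misreporting_by_group(df: pd.DataFrame, group: str) -> pd.DataFrame:
    """df holds linked records with 0/1 columns admin_receipt and survey_receipt."""
    def rates(g: pd.DataFrame) -> pd.Series:
        recipients = g[g["admin_receipt"] == 1]
        non_recipients = g[g["admin_receipt"] == 0]
        return pd.Series({
            # share of true recipients who fail to report receipt (false negatives)
            "false_neg_rate": 1 - recipients["survey_receipt"].mean(),
            # share of true non-recipients who report receipt (false positives)
            "false_pos_rate": non_recipients["survey_receipt"].mean(),
            # net bias in the survey-based participation rate for the group
            "participation_bias": g["survey_receipt"].mean() - g["admin_receipt"].mean(),
        })
    return df.groupby(group).apply(rates)

# Example: misreporting_by_group(linked, "household_type")
```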
-
Using Linked Survey and Administrative Data to Better Measure Income: Implications for Poverty, Program Effectiveness and Holes in the Safety Net
October 2015
Working Paper Number:
CES-15-35
We examine the consequences of underreporting of transfer programs in household survey data for several prototypical analyses of low-income populations. We focus on the Current Population Survey (CPS), the source of official poverty and inequality statistics, but provide evidence that our qualitative conclusions are likely to apply to other surveys. We link administrative data for food stamps, TANF, General Assistance, and subsidized housing from New York State to the CPS at the individual level. Program receipt in the CPS is missed for over one-third of housing assistance recipients, 40 percent of food stamp recipients, and 60 percent of TANF and General Assistance recipients. Dollars of benefits are also undercounted for reporting recipients, particularly for TANF, General Assistance, and housing assistance. We find that the survey data sharply understate the income of poor households, as conjectured in past work by one of the authors. Underreporting in the survey data also greatly understates the effects of anti-poverty programs and changes our understanding of program targeting, often making it seem that welfare programs are less targeted to both the very poorest and middle-income households than they actually are. Using the combined data rather than survey data alone, the poverty-reducing effect of all programs together nearly doubles, while the effect of housing assistance triples. We also re-examine the coverage of the safety net, specifically the share of people without work or program receipt. Using the administrative measures of program receipt rather than the survey ones often reduces the share of single mothers falling through the safety net by one-half or more.
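The poverty-reduction comparison described above can be outlined as follows. This is an illustrative sketch with hypothetical column names (income_excl_benefits, poverty_threshold, survey_benefits, admin_benefits), not the paper's code.

```python
import pandas as pd

def poverty_reduction(df: pd.DataFrame, benefit_col: str) -> float:
    """Percentage-point drop in the poverty rate attributable to a benefit column."""
    pre = (df["income_excl_benefits"] < df["poverty_threshold"]).mean()
    post = (df["income_excl_benefits"] + df[benefit_col] < df["poverty_threshold"]).mean()
    return pre - post

# Comparing poverty_reduction(linked, "survey_benefits") with
# poverty_reduction(linked, "admin_benefits") shows how underreported dollars
# shrink the measured anti-poverty effect of a program.
```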
-
BIAS IN FOOD STAMPS PARTICIPATION ESTIMATES IN THE PRESENCE OF MISREPORTING ERROR
March 2013
Working Paper Number:
CES-13-13
This paper focuses on how survey misreporting of food stamp receipt can bias demographic estimation of program participation. The Food Stamp Program is a federally funded program that subsidizes the nutrition of low-income households. To improve the program's reach, studies of how participation varies across demographic groups have been conducted using census data. Census data are subject to substantial misreporting error, both underreporting and overreporting, which can bias the estimates. The impact of misreporting error on estimate bias is examined by calculating food stamp participation rates, misreporting rates, and bias for select household characteristics (covariates).
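A minimal sketch of how underreporting and over-reporting combine into bias in a survey-based participation rate is given below; the rates are illustrative values, not estimates from the paper.

```python
# Illustrative values only, not the paper's estimates.
true_rate = 0.20    # true participation rate in a demographic group
false_neg = 0.35    # share of true recipients who do not report receipt
false_pos = 0.01    # share of non-recipients who erroneously report receipt

observed_rate = true_rate * (1 - false_neg) + (1 - true_rate) * false_pos
bias = observed_rate - true_rate
print(f"observed rate: {observed_rate:.3f}, bias: {bias:+.3f}")
# Because misreporting rates differ across groups, the bias differs too,
# distorting comparisons of participation between groups, not just its level.
```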
-
A METHOD OF CORRECTING FOR MISREPORTING APPLIED TO THE FOOD STAMP PROGRAM
May 2013
Working Paper Number:
CES-13-28
Survey misreporting is known to be pervasive and to bias common statistical analyses. In this paper, I first use administrative data on SNAP receipt and amounts linked to American Community Survey data from New York State to show that survey data can misrepresent the program in important ways. For example, more than $1.4 billion in benefits received is not reported in New York State alone. Forty-six percent of dollars received by households with annual income above the poverty line are not reported in the survey data, while only 19 percent are missing below the poverty line. Standard corrections for measurement error cannot remove these biases. I then develop a method to obtain consistent estimates by combining parameter estimates from the linked data with publicly available data. This conditional density method recovers the correct estimates using public-use data only, which solves the problem that access to linked administrative data is usually restricted. I examine the degree to which this approach can be used to extrapolate across time and geography, in order to solve the problem that validation data are often based on a convenience sample. I present evidence from within New York State that the extent of heterogeneity is small enough to make extrapolation work well across both time and geography. Extrapolation to the entire U.S. yields substantive differences from survey data and reduces deviations from official aggregates by a factor of 4 to 9 compared to survey aggregates.
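The general idea of carrying misreporting parameters estimated from linked validation data over to public-use data can be sketched as follows. This is a stylized outline, not the paper's conditional density estimator; the function and variable names are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def fit_report_correction(linked_covariates, linked_report, linked_true_receipt):
    """In the linked data, model P(true receipt | survey report, covariates)."""
    design = sm.add_constant(np.column_stack([linked_report, linked_covariates]))
    return sm.Logit(linked_true_receipt, design).fit(disp=0)

def predicted_true_receipt(model, public_covariates, public_report):
    """Carry only the fitted parameters over to public-use data."""
    design = sm.add_constant(np.column_stack([public_report, public_covariates]))
    return model.predict(design)
```

Downstream analyses in the public-use data could then use the predicted probabilities in place of the misreported indicator.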
-
Reporting of Indian Health Service Coverage in the American Community Survey
May 2018
Working Paper Number:
carra-2018-04
Response error in surveys affects the quality of data that are relied on for numerous research and policy purposes. We use linked survey and administrative records data to examine reporting of a particular item in the American Community Survey (ACS): health coverage among American Indians and Alaska Natives (AIANs) through the Indian Health Service (IHS). We compare responses to the IHS portion of the 2014 ACS health insurance question to whether individuals appear in the 2014 IHS Patient Registration data. We evaluate the extent to which individuals misreport their IHS coverage in the ACS as well as the characteristics associated with misreporting. We also assess whether the ACS estimates of AIANs with IHS coverage represent an undercount. Our results will be of interest to researchers who rely on survey responses in general and on the ACS health insurance question in particular. Moreover, our analysis contributes to the literature on using administrative records to measure components of survey error.
-
Medicare Coverage and Reporting
December 2016
Working Paper Number:
carra-2016-12
Medicare coverage of the older population in the United States is widely recognized as being nearly universal. Recent statistics from the Current Population Survey Annual Social and Economic Supplement (CPS ASEC) indicate that 93 percent of individuals aged 65 and older were covered by Medicare in 2013. Those without Medicare include people who are not eligible for the public health program, though the CPS ASEC estimate may also be affected by misreporting. Using linked data from the CPS ASEC and the Medicare Enrollment Database (i.e., the Medicare administrative data), we estimate the extent to which individuals misreport their Medicare coverage. We focus on those who report having Medicare but are not enrolled (false positives) and those who do not report having Medicare but are enrolled (false negatives). We use regression analyses to evaluate factors associated with both types of misreporting, including socioeconomic, demographic, and household characteristics. We then provide estimates of the implied Medicare-covered, insured, and uninsured older population, taking into account misreporting in the CPS ASEC. We find an undercount in the CPS ASEC estimates of the Medicare-covered population of 4.5 percent. This misreporting is not random: characteristics associated with misreporting include citizenship status, year of entry, labor force participation, Medicare coverage of others in the household, disability status, and imputation of Medicare responses. When we adjust the CPS ASEC estimates to account for misreporting, Medicare coverage of the population aged 65 and older increases from 93.4 percent to 95.6 percent, while the uninsured rate decreases from 1.4 percent to 1.3 percent.
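The adjustment described above amounts to removing false positives and adding back false negatives. The sketch below uses the reported (93.4 percent) and adjusted (95.6 percent) coverage figures from the abstract; the false positive and false negative shares are illustrative values consistent with that gap, not the paper's estimates.

```python
# Reported and adjusted coverage come from the abstract; the misreporting
# shares are illustrative values chosen to be consistent with that gap.
reported_covered = 0.934   # share of 65+ reporting Medicare coverage in the CPS ASEC
false_pos_share = 0.004    # report coverage but are not enrolled (share of population)
false_neg_share = 0.026    # are enrolled but do not report coverage (share of population)

adjusted_covered = reported_covered - false_pos_share + false_neg_share
print(f"adjusted coverage: {adjusted_covered:.1%}")   # ~95.6%
```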
-
The Measurement of Medicaid Coverage in the SIPP: Evidence from California, 1990-1996
September 2002
Working Paper Number:
CES-02-21
This paper studies the accuracy of reported Medicaid coverage in the Survey of Income and Program Participation (SIPP) using a unique data set formed by matching SIPP survey responses to administrative records from the State of California. Overall, we estimate that the SIPP underestimates Medicaid coverage in the California population by about 10 percent. Among SIPP respondents who can be matched to administrative records, we estimate that the probability someone reports Medicaid coverage in a month when they are actually covered is around 85 percent. The corresponding probability for low-income children is even higher: at least 90 percent. These estimates suggest that the SIPP provides reasonably accurate coverage reports for those who are actually in the Medicaid system. On the other hand, our estimate of the false positive rate (the rate of reported coverage for those who are not covered in the administrative records) is relatively high: 2.5 percent for the sample as a whole, and up to 20 percent for poor children. Some of this is due to errors in the recording of Social Security numbers in the administrative system, rather than to problems in the SIPP.
-
Measuring Income of the Aged in Household Surveys: Evidence from Linked Administrative Records
June 2024
Working Paper Number:
CES-24-32
Research has shown that household survey estimates of retirement income (defined benefit pensions and defined contribution account withdrawals) suffer from substantial underreporting, which biases downward measures of financial well-being among the aged. Using data from both the redesigned 2016 Current Population Survey Annual Social and Economic Supplement (CPS ASEC) and the Health and Retirement Study (HRS), each matched with administrative records, we examine to what extent underreporting of retirement income affects key statistics such as reliance on Social Security benefits and poverty among the aged. We find that underreporting of retirement income is still prevalent in the CPS ASEC. While the HRS does a better job than the CPS ASEC in terms of capturing retirement income, it still falls considerably short compared to administrative records. Consequently, the relative importance of Social Security income remains overstated in household surveys: 53 percent of elderly beneficiaries in the CPS ASEC and 49 percent in the HRS rely on Social Security for the majority of their incomes, compared to 42 percent in the linked administrative data. The poverty rate for those aged 65 and over is also overstated: 8.8 percent in the CPS ASEC and 7.4 percent in the HRS, compared to 6.4 percent in the linked administrative data. Our results illustrate the effects of using alternative data sources in producing key statistics from the Social Security Administration's Income of the Aged publication.
-
MISCLASSIFICATION IN BINARY CHOICE MODELS
May 2013
Working Paper Number:
CES-13-27
We derive the asymptotic bias from misclassification of the dependent variable in binary choice models. Measurement error is necessarily non-classical in this case, which leads to bias in linear and non-linear models even if only the dependent variable is mismeasured. A Monte Carlo study and an application to food stamp receipt show that the bias formulas are useful for analyzing the sensitivity of substantive conclusions, interpreting biased coefficients, and identifying features of the estimates that are robust to misclassification. Using administrative records linked to survey data as validation data, we examine estimators that are consistent under misclassification. They can improve estimates if their assumptions hold, but can aggravate the problem if the assumptions are invalid. The estimators differ in their robustness to such violations, which can be improved by incorporating additional information. We propose tests for the presence and nature of misclassification that can help to choose an estimator.
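A compact Monte Carlo sketch of the kind of exercise described above is shown below: it misclassifies a binary outcome at fixed false positive and false negative rates and regresses the observed outcome on the covariate. This is an illustrative linear-probability-model example with made-up parameter values, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50_000, 200
alpha0, alpha1 = 0.02, 0.15   # false positive and false negative rates (illustrative)

slopes = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)
    p_true = 1.0 / (1.0 + np.exp(-(-1.0 + x)))           # true participation probability
    y_true = (rng.uniform(size=n) < p_true).astype(float)
    flip_up = (y_true == 0) & (rng.uniform(size=n) < alpha0)    # false positives
    flip_down = (y_true == 1) & (rng.uniform(size=n) < alpha1)  # false negatives
    y_obs = np.where(flip_up, 1.0, np.where(flip_down, 0.0, y_true))
    X = np.column_stack([np.ones(n), x])
    slopes[r] = np.linalg.lstsq(X, y_obs, rcond=None)[0][1]     # OLS slope

print(f"mean slope with misclassification: {slopes.mean():.3f}")
# Relative to the same regression on y_true, the slope is attenuated by
# roughly the factor (1 - alpha0 - alpha1), i.e. biased toward zero.
```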
-
Exploring Differences in Employment between Household and Establishment Data
April 2009
Working Paper Number:
CES-09-09
Using a large data set that links individual Current Population Survey (CPS) records to employer-reported administrative data, we document substantial discrepancies in basic measures of employment status that persist even after controlling for known definitional differences between the two data sources. We hypothesize that reporting discrepancies should be most prevalent for marginal workers and marginal jobs, and we find systematic associations between the incidence of reporting discrepancies and observable person and job characteristics that are consistent with this hypothesis. The paper discusses the implications of these findings for both micro and macro labor market analysis.
-
Comparison of Survey, Federal, and Commercial Address Data Quality
June 2014
Working Paper Number:
carra-2014-06
This report summarizes the matching of housing units from survey, commercial, and administrative records data to the Census Bureau Master Address File (MAF). We document overall MAF match rates in each data set and evaluate differences in match rates across a variety of housing characteristics. Results show that over 90 percent of records in survey data from the American Housing Survey (AHS) match to the MAF. Commercial data from CoreLogic match at much lower rates, in part due to missing address information and poor match rates for multi-unit buildings. MAF match rates for administrative records from the Department of Housing and Urban Development are also high and open the possibility of using this information in surveys such as the AHS.