-
The Icing on the Cake: The Effects of Monetary Incentives on Income Data Quality in the SIPP
January 2024
Working Paper Number:
CES-24-03
Accurate measurement of key income variables plays a crucial role in economic research and policy decision-making. However, the presence of item nonresponse and measurement error in survey data can cause biased estimates. These biases can subsequently lead to sub-optimal policy decisions and inefficient allocation of resources. While various studies have documented item nonresponse and measurement error in economic data, few have investigated interventions that could reduce them. In our research, we investigate the impact of monetary incentives on reducing item nonresponse and measurement error for labor and investment income in the Survey of Income and Program Participation (SIPP). Our study utilizes a randomized incentive experiment in Waves 1 and 2 of the 2014 SIPP, which allows us to assess the effectiveness of incentives in reducing item nonresponse and measurement error. We find that households receiving incentives had item nonresponse rates that were 1.3 percentage points lower for earnings and 1.5 percentage points lower for Social Security income. Measurement error at the intensive margin was 6.31 percentage points lower for interest income and 16.48 percentage points lower for dividend income than for households that did not receive incentives. These findings provide valuable insights for data producers and users and highlight the importance of implementing strategies to improve data quality in economic research.
-
Registered Report: Exploratory Analysis of Ownership Diversity and Innovation in the Annual Business Survey
March 2023
Working Paper Number:
CES-23-11
A lack of transparency in specification testing is a major contributor to the replicability crisis that has eroded the credibility of findings for informing policy. How diversity is associated with outcomes of interest is particularly susceptible to the production of nonreplicable findings given the very large number of alternative measures applied to several policy-relevant attributes such as race, ethnicity, gender, or foreign-born status. The very large number of alternative measures substantially increases the probability of false discovery, where nominally significant parameter estimates, selected through numerous though unreported specification tests, may not be representative of true associations in the population. The purpose of this registered report is to: 1) select a single measure of ownership diversity that satisfies explicit, requisite axioms; 2) split the Annual Business Survey (ABS) into an exploratory sample (35%) used in this analysis and a confirmatory sample (65%) that will be accessed only after the publication of this report; 3) regress self-reported new-to-market innovation on the diversity measure along with industry and firm-size controls; 4) pass through those variables meeting precision and magnitude criteria for hypothesis testing using the confirmatory sample; and 5) document the full set of hypotheses to be tested in the final analysis along with a discussion of the false discovery and family-wise error rate corrections to be applied. The discussion concludes with the added value of implementing split-sample designs within the Federal Statistical Research Data Center system, where access to data is strictly controlled.
-
Comparing the 2019 American Housing Survey to Contemporary Sources of Property Tax Records: Implications for Survey Efficiency and Quality
June 2022
Working Paper Number:
CES-22-22
Given rising nonresponse rates and concerns about respondent burden, government statistical agencies have been exploring ways to supplement household survey data collection with administrative records and other sources of third-party data. This paper evaluates the potential of property tax assessment records to improve housing surveys by comparing these records to responses from the 2019 American Housing Survey. Leveraging the U.S. Census Bureau's linkage infrastructure, we compute the fraction of AHS housing units that could be matched to a unique property parcel (coverage rate), as well as the extent to which survey and property tax data contain the same information (agreement rate). We analyze heterogeneity in coverage and agreement across states, housing characteristics, and 11 AHS items of interest to housing researchers. Our results suggest that partial replacement of AHS data with property data, targeted toward certain survey items or single-family detached homes, could reduce respondent burden without altering data quality. Further research into partial-replacement designs is needed and should proceed on an item-by-item basis. Our work can guide this research as well as those who wish to conduct independent research with property tax records that is representative of the U.S. housing stock.
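The coverage and agreement rates described above reduce to simple proportions. A minimal sketch with hypothetical toy records (illustrative only; not the AHS data, the Census Bureau's linkage infrastructure, or the paper's code):

```python
# Hypothetical illustration of coverage and agreement rates.
# A survey unit is "covered" if it links to exactly one property parcel;
# among units present in both sources, values "agree" if they match.

def coverage_rate(survey_units, parcel_links):
    """Fraction of survey units matched to a unique parcel."""
    covered = [u for u in survey_units if len(parcel_links.get(u, [])) == 1]
    return len(covered) / len(survey_units)

def agreement_rate(survey_values, parcel_values):
    """Among units in both sources, fraction with identical values."""
    common = survey_values.keys() & parcel_values.keys()
    agree = sum(1 for u in common if survey_values[u] == parcel_values[u])
    return agree / len(common)

# Toy data: four survey units; unit "d" links to two parcels (not unique).
links = {"a": ["p1"], "b": ["p2"], "c": ["p3"], "d": ["p4", "p5"]}
units = ["a", "b", "c", "d"]
print(coverage_rate(units, links))      # 0.75

# Toy item values (e.g., number of bedrooms) for the covered units.
survey = {"a": 3, "b": 2, "c": 5}
parcel = {"a": 3, "b": 2, "c": 4}
print(agreement_rate(survey, parcel))   # 2/3
```

In practice both rates are computed per item and per subgroup (state, structure type), which is what drives the item-by-item replacement recommendation.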
-
Effects of a Government-Academic Partnership: Has the NSF-Census Bureau Research Network Helped Improve the U.S. Statistical System?
January 2017
Authors:
Lars Vilhuber,
John M. Abowd,
Daniel Weinberg,
Jerome P. Reiter,
Matthew D. Shapiro,
Robert F. Belli,
Noel Cressie,
David C. Folch,
Scott H. Holan,
Margaret C. Levenstein,
Kristen M. Olson,
Jolene Smyth,
Leen-Kiat Soh,
Bruce D. Spencer,
Seth E. Spielman,
Christopher K. Wikle
Working Paper Number:
CES-17-59R
The National Science Foundation-Census Bureau Research Network (NCRN) was established in 2011 to create interdisciplinary research nodes on methodological questions of interest and significance to the broader research community and to the Federal Statistical System (FSS), particularly the Census Bureau. The activities to date have covered both fundamental and applied statistical research and have focused at least in part on the training of current and future generations of researchers in skills of relevance to surveys and alternative measurement of economic units, households, and persons. This paper discusses some of the key research findings of the eight nodes, organized into six topics: (1) Improving census and survey data collection methods; (2) Using alternative sources of data; (3) Protecting privacy and confidentiality by improving disclosure avoidance; (4) Using spatial and spatio-temporal statistical modeling to improve estimates; (5) Assessing data cost and quality tradeoffs; and (6) Combining information from multiple sources. It also reports on collaborations across nodes and with federal agencies, new software developed, and educational activities and outcomes. The paper concludes with an evaluation of the ability of the FSS to apply the NCRN's research outcomes and suggests some next steps, as well as the implications of this research-network model for future federal government renewal initiatives.
-
Decennial Census Return Rates: The Role of Social Capital
January 2017
Working Paper Number:
CES-17-39
This paper explores how useful information about social and civic engagement (social capital) might be to the U.S. Census Bureau in its efforts to improve predictions of mail return rates for the Decennial Census (DC) at the census tract level. Through construction of hard-to-count (HTC) scores and multivariate analysis, we find that if information about social capital were available, predictions of response rates would be marginally improved.
-
Coverage and Agreement of Administrative Records and 2010 American Community Survey Demographic Data
November 2014
Working Paper Number:
carra-2014-14
The U.S. Census Bureau is researching possible uses of administrative records in decennial census and survey operations. The 2010 Census Match Study and American Community Survey (ACS) Match Study represent recent efforts by the Census Bureau to evaluate the extent to which administrative records provide data on persons and addresses in the 2010 Census and 2010 ACS. The 2010 Census Match Study also examines demographic response data collected in administrative records. Building on this analysis, we match data from the 2010 ACS to federal administrative records and third party data as well as to previous census data and examine administrative records coverage and agreement of ACS age, sex, race, and Hispanic origin responses. We find high levels of coverage and agreement for sex and age responses and variable coverage and agreement across race and Hispanic origin groups. These results are similar to findings from the 2010 Census Match Study.
-
2010 American Community Survey Match Study
July 2014
Working Paper Number:
carra-2014-03
Using administrative records data from federal government agencies and commercial sources, the 2010 ACS Match Study measures administrative records coverage of 2010 ACS addresses, persons, and persons at addresses at different levels of geography as well as by demographic characteristics and response mode. The 2010 ACS Match Study represents a continuation of the research undertaken in the 2010 Census Match Study, the first national-level evaluation of administrative records data coverage. Preliminary results indicate that administrative records provide substantial coverage for addresses and persons in the 2010 ACS (92.7 and 92.1 percent, respectively), and less extensive, though still substantial, coverage for person-address pairs (74.3 percent). In addition, some variation in address, person, and/or person-address coverage is found across demographic and response mode groups. This research informs future uses of administrative records in survey and decennial census operations to address the increasing costs of data collection and declining response rates.
-
A Method of Correcting for Misreporting Applied to the Food Stamp Program
May 2013
Working Paper Number:
CES-13-28
Survey misreporting is known to be pervasive and to bias common statistical analyses. In this paper, I first use administrative data on SNAP receipt and amounts linked to American Community Survey data from New York State to show that survey data can misrepresent the program in important ways. For example, more than 1.4 billion dollars received are not reported in New York State alone. Among households with annual income above the poverty line, 46 percent of dollars received are not reported in the survey data, while only 19 percent are missing below the poverty line. Standard corrections for measurement error cannot remove these biases. I then develop a method to obtain consistent estimates by combining parameter estimates from the linked data with publicly available data. This conditional density method recovers the correct estimates using public-use data only, which solves the problem that access to linked administrative data is usually restricted. I examine the degree to which this approach can be used to extrapolate across time and geography, in order to solve the problem that validation data are often based on a convenience sample. I present evidence from within New York State that the extent of heterogeneity is small enough to make extrapolation work well across both time and geography. Extrapolation to the entire U.S. yields substantive differences from survey data and reduces deviations from official aggregates by a factor of 4 to 9 compared to survey aggregates.
-
Estimating Measurement Error in SIPP Annual Job Earnings: A Comparison of Census Bureau Survey and SSA Administrative Data
July 2011
Working Paper Number:
CES-11-20
We quantify sources of variation in annual job earnings data collected by the Survey of Income and Program Participation (SIPP) to determine how much of the variation is the result of measurement error. Jobs reported in the SIPP are linked to jobs reported in an administrative database, the Detailed Earnings Records (DER) drawn from the Social Security Administration's Master Earnings File, a universe file of all earnings reported on W-2 tax forms. As a result of the match, each job potentially has two earnings observations per year: survey and administrative. Unlike previous validation studies, both of these earnings measures are viewed as noisy measures of some underlying true amount of annual earnings. While the existence of survey error resulting from respondent mistakes or misinterpretation is widely accepted, the idea that administrative data are also error-prone is new. Possible sources of employer reporting error, employee under-reporting of compensation such as tips, and general differences between how earnings may be reported on tax forms and in surveys necessitate discarding the assumption that administrative data are a true measure of the quantity the survey was designed to collect. In addition, errors in matching SIPP and DER jobs, a necessary task in any use of administrative data, also contribute to measurement error in both earnings variables. We begin by comparing SIPP and DER earnings for different demographic and education groups of SIPP respondents. We also calculate different measures of changes in earnings for individuals switching jobs. We estimate a standard earnings equation model using SIPP and DER earnings and compare the resulting coefficients.
Finally, exploiting the presence of individuals with multiple jobs and shared employers over time, we estimate an econometric model that includes random person and firm effects, a common error component shared by SIPP and DER earnings, and two independent error components that represent the variation unique to each earnings measure. We compare the variance components from this model and consider how the DER and SIPP differ across unobservable components.
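The error-components structure described here can be written schematically as follows (illustrative notation only, not necessarily the authors' exact specification):

```latex
% Schematic error-components model for the two earnings measures.
% y^{SIPP}_{ijt}, y^{DER}_{ijt}: earnings for person i at firm j in year t.
\begin{align}
y^{SIPP}_{ijt} &= \mu_{ijt} + \varepsilon_{ijt} + u^{SIPP}_{ijt}, \\
y^{DER}_{ijt}  &= \mu_{ijt} + \varepsilon_{ijt} + u^{DER}_{ijt}, \\
\mu_{ijt}      &= x'_{ijt}\beta + \theta_i + \psi_j,
\end{align}
```

where $\theta_i$ and $\psi_j$ are the random person and firm effects, $\varepsilon_{ijt}$ is the error component common to both measures, and $u^{SIPP}_{ijt}$ and $u^{DER}_{ijt}$ are the independent error components unique to each source; comparing the estimated variances of these components is what apportions total variation between true earnings variation and source-specific measurement error.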
-
Lessons for Targeted Program Evaluation: A Personal and Professional History of the Survey of Program Dynamics
August 2007
Working Paper Number:
CES-07-24
The Survey of Program Dynamics (SPD) was created by the 1996 welfare reform legislation to facilitate its evaluation. This paper describes the evolution of that survey, discusses its implementation, and draws lessons for future evaluation. Large-scale surveys can be an important part of a portfolio of evaluation methods, but sufficient time must be given to data collection agencies if a high-quality longitudinal survey is expected. Such a survey must have both internal (agency) and external (policy analyst) buy-in. Greater investment in data analysis by agency staff, which was downplayed in favor of larger sample sizes under a fixed budget, could have fostered more external acceptance. More attention up front to reducing the potentially deleterious effects of attrition in longitudinal surveys, such as through the use of monetary incentives, might have been worthwhile. Given the problems encountered by the Census Bureau in producing the SPD, I argue that ongoing multi-purpose longitudinal surveys like the Survey of Income and Program Participation are potentially more valuable than episodic special-purpose surveys.