Optimal Stratified Sampling for Probability-Based Online Panels
September 2025
Working Paper Number:
CES-25-69
Abstract
Document Tags and Keywords
Keywords
Keywords are automatically generated using KeyBERT, a keyword-extraction tool that uses BERT embeddings to produce contextually relevant keywords.
By analyzing the content of working papers, KeyBERT identifies terms and phrases that capture the essence of the
text, highlighting its most significant topics and trends. This approach not only enhances searchability but
also surfaces connections that go beyond potentially domain-specific author-defined keywords:
data census,
census data,
survey,
respondent,
average,
hispanic,
trend,
budget,
population,
rate,
census bureau,
sampling,
sample,
use census,
assessing
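The embed-and-rank idea behind KeyBERT can be sketched in a few lines. This is a minimal illustration, not KeyBERT's actual implementation: the toy bag-of-words `embed` function stands in for the BERT sentence embeddings the real tool uses, and all names here are hypothetical.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy stand-in for a BERT embedding: a bag-of-words count vector.
    # KeyBERT would use transformer sentence embeddings here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def extract_keywords(document, candidates, top_n=5):
    # Rank candidate phrases by similarity to the whole document,
    # mirroring KeyBERT's embed-and-rank approach.
    doc_vec = embed(document)
    scored = [(cand, cosine(embed(cand), doc_vec)) for cand in candidates]
    return sorted(scored, key=lambda kv: kv[1], reverse=True)[:top_n]
```

With richer embeddings, candidates that never appear verbatim in the text can still score highly, which is what lets the extracted keywords go beyond author-defined ones.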
Tags
Tags are automatically generated using a pretrained language model from spaCy, which performs
several tasks, including named-entity tagging.
The model labels words and phrases with entity types,
including "organizations." By filtering for frequent words and phrases labeled as "organizations," papers are
identified that contain references to specific institutions, datasets, and other organizations:
Computer Assisted Telephone Interviews and Computer Assisted Personal Interviews,
American Community Survey,
Health and Retirement Study,
National Opinion Research Center,
Census Bureau Disclosure Review Board
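The filtering step described above can be sketched as follows. To keep the sketch self-contained it takes (phrase, label) pairs directly, such as spaCy's `doc.ents` would yield via `(ent.text, ent.label_)`; the function name and threshold are illustrative, not part of the actual pipeline.

```python
from collections import Counter

def frequent_orgs(entities, min_count=2):
    # `entities` is a list of (phrase, label) pairs, e.g. produced by
    # spaCy's named-entity recognizer. Keep only phrases tagged as
    # organizations ("ORG") that occur at least `min_count` times.
    counts = Counter(text for text, label in entities if label == "ORG")
    return [org for org, n in counts.most_common() if n >= min_count]
```

The frequency threshold filters out one-off or spurious entity matches, so only organizations a paper actually discusses survive as tags.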
Similar Working Papers
Similarity between working papers is determined by an unsupervised neural
network model known as Doc2Vec.
Doc2Vec is a model that represents entire documents as fixed-length vectors, allowing for the
capture of semantic meaning in a way that relates to the context of words within the document. The model learns to
associate a unique vector with each document while simultaneously learning word vectors, enabling tasks such as
document classification, clustering, and similarity detection by preserving the order and structure of words. The
document vectors are compared using cosine similarity/distance to determine the most similar working papers.
Papers identified with 🔥 are in the top 20% of similarity.
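The final ranking step described above (comparing Doc2Vec document vectors by cosine similarity) can be sketched as follows. This is a minimal illustration over plain lists of floats; in the actual pipeline the vectors would come from a trained gensim Doc2Vec model, and the function names here are hypothetical.

```python
from math import sqrt

def cosine_similarity(u, v):
    # Cosine of the angle between two fixed-length document vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def most_similar(query_vec, paper_vecs, top_n=10):
    # Rank papers by cosine similarity of their document vectors to the
    # query paper's vector, most similar first.
    scored = sorted(paper_vecs.items(),
                    key=lambda kv: cosine_similarity(query_vec, kv[1]),
                    reverse=True)
    return scored[:top_n]
```

Because cosine similarity depends only on the angle between vectors, papers with similar semantic content rank highly even if their vectors differ in magnitude.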
The 10 most similar working papers to the working paper 'Optimal Stratified Sampling for Probability-Based Online Panels' are listed below in order of similarity.
-
Working Paper: CTC and ACTC Participation Results and IRS-Census Match Methodology, Tax Year 2020
December 2024
Working Paper Number:
CES-24-76
The Child Tax Credit (CTC) and Additional Child Tax Credit (ACTC) offer assistance to help ease the financial burden of families with children. This paper provides taxpayer and dollar participation estimates for the CTC and ACTC covering tax year 2020. The estimates derive from an approach that relies on linking the 2021 Current Population Survey Annual Social and Economic Supplement (CPS ASEC) to IRS administrative data. This approach, called the Exact Match, uses survey data to identify CTC/ACTC eligible taxpayers and IRS administrative data to indicate which eligible taxpayers claimed and received the credit. Overall in tax year 2020, eligible taxpayers participated in the CTC and ACTC program at a rate of 93 percent while dollar participation was 91 percent.
-
Working Paper: EITC Participation Results and IRS-Census Match Methodology, Tax Year 2021
December 2024
Working Paper Number:
CES-24-75
The Earned Income Tax Credit (EITC), enacted in 1975, offers a refundable tax credit to low income working families. This paper provides taxpayer and dollar participation estimates for the EITC covering tax year 2021. The estimates derive from an approach that relies on linking the 2022 Current Population Survey Annual Social and Economic Supplement (CPS ASEC) to IRS administrative data. This approach, called the Exact Match, uses survey data to identify EITC eligible taxpayers and IRS administrative data to indicate which eligible taxpayers claimed and received the credit. Overall in tax year 2021, eligible taxpayers participated in the EITC program at a rate of 78 percent while dollar participation was 81 percent.
-
Working Paper: The Impact of Household Surveys on 2020 Census Self-Response
July 2022
Working Paper Number:
CES-22-24
Households who were sampled in 2019 for the American Community Survey (ACS) had lower self-response rates to the 2020 Census. The magnitude varied from -1.5 percentage points for households sampled in January 2019 to -15.1 percentage points for households sampled in December 2019. Similar effects are found for the Current Population Survey (CPS) as well.
-
Working Paper: Connected and Uncooperative: The Effects of Homogenous and Exclusive Social Networks on Survey Response Rates and Nonresponse Bias
January 2024
Working Paper Number:
CES-24-01
Social capital, the strength of people's friendship networks and community ties, has been hypothesized as an important determinant of survey participation. Investigating this hypothesis has been difficult given data constraints. In this paper, we provide insights by investigating how response rates and nonresponse bias in the American Community Survey are correlated with county-level social network data from Facebook. We find that areas of the United States where people have more exclusive and homogenous social networks have higher nonresponse bias and lower response rates. These results provide further evidence that the effects of social capital may not be simply a matter of whether people are socially isolated or not, but also what types of social connections people have and the sociodemographic heterogeneity of their social networks.
-
Working Paper: An Economist's Primer on Survey Samples
September 2000
Working Paper Number:
CES-00-15
Survey data underlie most empirical work in economics, yet economists typically have little familiarity with survey sample design and its effects on inference. This paper describes how sample designs depart from the simple random sampling model implicit in most econometrics textbooks, points out where the effects of this departure are likely to be greatest, and describes the relationship between design-based estimators developed by survey statisticians and related econometric methods for regression. Its intent is to provide empirical economists with enough background in survey methods to make informed use of design-based estimators. It emphasizes surveys of households (the source of most public-use files), but also considers how surveys of businesses differ. Examples from the National Longitudinal Survey of Youth of 1979 and the Current Population Survey illustrate practical aspects of design-based estimation.
-
Working Paper: Incorporating Administrative Data in Survey Weights for the 2018-2022 Survey of Income and Program Participation
October 2024
Working Paper Number:
CES-24-58
Response rates to the Survey of Income and Program Participation (SIPP) have declined over time, raising the potential for nonresponse bias in survey estimates. A potential solution is to leverage administrative data from government agencies and third-party data providers when constructing survey weights. In this paper, we modify various parts of the SIPP weighting algorithm to incorporate such data. We create these new weights for the 2018 through 2022 SIPP panels and examine how the new weights affect survey estimates. Our results show that before weighting adjustments, SIPP respondents in these panels have higher socioeconomic status than the general population. Existing weighting procedures reduce many of these differences. Comparing SIPP estimates between the production weights and the administrative data-based weights yields changes that are not uniform across the joint income and program participation distribution. Unlike other Census Bureau household surveys, there is no large increase in nonresponse bias in SIPP due to the COVID-19 Pandemic. In summary, the magnitude and sign of nonresponse bias in SIPP is complicated, and the existing weighting procedures may change the sign of nonresponse bias for households with certain incomes and program benefit statuses.
-
Working Paper: Nonresponse and Coverage Bias in the Household Pulse Survey: Evidence from Administrative Data
October 2024
Working Paper Number:
CES-24-60
The Household Pulse Survey (HPS) conducted by the U.S. Census Bureau is a unique survey that provided timely data on the effects of the COVID-19 Pandemic on American households and continues to provide data on other emergent social and economic issues. Because the survey has a response rate in the single digits and only has an online response mode, there are concerns about nonresponse and coverage bias. In this paper, we match administrative data from government agencies and third-party data to HPS respondents to examine how representative they are of the U.S. population. For comparison, we create a benchmark of American Community Survey (ACS) respondents and nonrespondents and include the ACS respondents as another point of reference. Overall, we find that the HPS is less representative of the U.S. population than the ACS. However, performance varies across administrative variables, and the existing weighting adjustments appear to greatly improve the representativeness of the HPS. Additionally, we look at household characteristics by their email domain to examine the effects on coverage from limiting email messages in 2023 to addresses from the contact frame with at least 90% deliverability rates, finding no clear change in the representativeness of the HPS afterwards.
-
Working Paper: Gradient Boosting to Address Statistical Problems Arising from Non-Linkage of Census Bureau Datasets
June 2024
Working Paper Number:
CES-24-27
This article introduces the twangRDC package, which contains functions to address non-linkage in US Census Bureau datasets. The Census Bureau's Person Identification Validation System facilitates data linkage by assigning unique person identifiers to federal, third party, decennial census, and survey data. Not all records in these datasets can be linked to the reference file, and as such not all records will be assigned an identifier. This article is a tutorial for using twangRDC to generate nonresponse weights that account for non-linkage of person records across US Census Bureau datasets.
-
Working Paper: The Work Disincentive Effects of the Disability Insurance Program in the 1990s
February 2006
Working Paper Number:
CES-06-05
In this paper we evaluate the work disincentive effects of the Disability Insurance program during the 1990s. To accomplish this we construct a new large data set with detailed information on DI application and award decisions and use two different econometric evaluation methods. First, we apply a comparison group approach proposed by John Bound to estimate an upper bound for the work disincentive effect of the current DI program. Second, we adopt a Regression-Discontinuity approach that exploits a particular feature of the DI eligibility determination process to provide a credible point estimate of the impact of the DI program on labor supply for an important subset of DI applicants. Our estimates indicate that during the 1990s the labor force participation rate of DI beneficiaries would have been at most 20 percentage points higher had none received benefits. In addition, we find even smaller labor supply responses for the subset of 'marginal' applicants whose disability determination is based on vocational factors.
-
Working Paper: When and Why Does Nonresponse Occur? Comparing the Determinants of Initial Unit Nonresponse and Panel Attrition
September 2023
Working Paper Number:
CES-23-44
Though unit nonresponse threatens data quality in both cross-sectional and panel surveys, little is understood about how initial nonresponse and later panel attrition may be theoretically or empirically distinct phenomena. This study advances current knowledge of the determinants of both unit nonresponse and panel attrition within the context of the U.S. Census Bureau's Survey of Income and Program Participation (SIPP) panel survey, which I link with high-quality federal administrative records, paradata, and geographic data. By exploiting the SIPP's interpenetrated sampling design and relying on cross-classified random effects modeling, this study quantifies the relative effects of sample household, interviewer, and place characteristics on baseline nonresponse and later attrition, addressing a critical gap in the literature. Given the reliance on successful record linkages between survey sample households and federal administrative data in the nonresponse research, this study also undertakes an explicitly spatial analysis of the place-based characteristics associated with successful record linkages in the U.S.