Optimal Stratified Sampling for Probability-Based Online Panels
September 2025
Working Paper Number:
CES-25-69
Abstract
Document Tags and Keywords
Keywords
Keywords are automatically generated using KeyBERT, a keyword extraction tool that uses BERT embeddings to produce contextually relevant keywords.
By analyzing the content of working papers, KeyBERT identifies terms and phrases that capture the essence of the
text, highlighting its most significant topics and trends. This approach not only enhances searchability but
also surfaces connections that go beyond potentially domain-specific author-defined keywords:
data census,
census data,
survey,
respondent,
average,
hispanic,
trend,
budget,
population,
rate,
census bureau,
sampling,
sample,
use census,
assessing
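The core of this kind of embedding-based extraction can be sketched as: embed the document and each candidate phrase, then rank candidates by cosine similarity to the document embedding. The sketch below is illustrative only, not KeyBERT itself: it substitutes toy bag-of-words count vectors for the BERT embeddings KeyBERT actually uses, and all names and data are hypothetical.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words embedding; KeyBERT would use BERT sentence embeddings instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = lambda v: sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def rank_keywords(document, candidates, top_n=3):
    """Rank candidate phrases by similarity to the whole document."""
    doc_vec = embed(document)
    scored = [(cand, cosine(embed(cand), doc_vec)) for cand in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

doc = "census bureau survey sampling of the population uses stratified samples"
cands = ["census bureau", "survey sampling", "budget", "population"]
print(rank_keywords(doc, cands))
```

With real sentence embeddings, phrases that are semantically close to the document score highly even without exact word overlap, which is what makes the generated keywords contextually relevant rather than purely frequency-based.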
Tags
Tags are automatically generated using a pretrained language model from spaCy, which performs
several tasks, including entity tagging.
The model labels words and phrases by entity type,
including "organizations." By filtering for frequent words and phrases labeled as "organizations," papers are
identified that contain references to specific institutions, datasets, and other organizations:
Computer Assisted Telephone Interviews and Computer Assisted Personal Interviews,
American Community Survey,
Health and Retirement Study,
National Opinion Research Center,
Census Bureau Disclosure Review Board
Similar Working Papers
Similarity between working papers is determined by an unsupervised neural
network model
known as Doc2Vec.
Doc2Vec represents entire documents as fixed-length vectors, allowing for the
capture of semantic meaning in a way that relates to the context of words within the document. The model learns to
associate a unique vector with each document while simultaneously learning word vectors, enabling tasks such as
document classification, clustering, and similarity detection by preserving the order and structure of words. The
document vectors are compared using cosine similarity to determine the most similar working papers.
Papers identified with 🔥 are in the top 20% of similarity.
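Given fixed-length document vectors like those Doc2Vec produces, the similarity ranking and the top-20% 🔥 cutoff reduce to cosine similarity plus a percentile threshold. The sketch below uses hand-made 3-dimensional vectors purely for illustration; real Doc2Vec vectors typically have hundreds of dimensions and come from a trained model (e.g. gensim's `Doc2Vec`).

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

# Hypothetical document vectors; in practice these come from a trained Doc2Vec model.
query = [0.9, 0.1, 0.2]
corpus = {
    "paper_a": [0.8, 0.2, 0.1],
    "paper_b": [0.1, 0.9, 0.3],
    "paper_c": [0.7, 0.0, 0.4],
    "paper_d": [0.0, 0.2, 0.9],
}

scores = {name: cosine(query, vec) for name, vec in corpus.items()}
ranked = sorted(scores, key=scores.get, reverse=True)
# Papers at or above the 80th-percentile score are marked 🔥 (top 20%).
cutoff = sorted(scores.values(), reverse=True)[max(1, len(scores) // 5) - 1]
hot = {name for name, s in scores.items() if s >= cutoff}
print(ranked, hot)
```

Because cosine similarity depends only on vector direction, two abstracts of very different lengths can still be ranked as close neighbors if their vectors point the same way.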
The 10 most similar working papers to the working paper 'Optimal Stratified Sampling for Probability-Based Online Panels' are listed below in order of similarity.
-
Working Paper: CTC and ACTC Participation Results and IRS-Census Match Methodology, Tax Year 2020
December 2024
Working Paper Number:
CES-24-76
The Child Tax Credit (CTC) and Additional Child Tax Credit (ACTC) offer assistance to help ease the financial burden of families with children. This paper provides taxpayer and dollar participation estimates for the CTC and ACTC covering tax year 2020. The estimates derive from an approach that relies on linking the 2021 Current Population Survey Annual Social and Economic Supplement (CPS ASEC) to IRS administrative data. This approach, called the Exact Match, uses survey data to identify CTC/ACTC eligible taxpayers and IRS administrative data to indicate which eligible taxpayers claimed and received the credit. Overall in tax year 2020, eligible taxpayers participated in the CTC and ACTC program at a rate of 93 percent while dollar participation was 91 percent.
-
Working Paper: EITC Participation Results and IRS-Census Match Methodology, Tax Year 2021
December 2024
Working Paper Number:
CES-24-75
The Earned Income Tax Credit (EITC), enacted in 1975, offers a refundable tax credit to low income working families. This paper provides taxpayer and dollar participation estimates for the EITC covering tax year 2021. The estimates derive from an approach that relies on linking the 2022 Current Population Survey Annual Social and Economic Supplement (CPS ASEC) to IRS administrative data. This approach, called the Exact Match, uses survey data to identify EITC eligible taxpayers and IRS administrative data to indicate which eligible taxpayers claimed and received the credit. Overall in tax year 2021, eligible taxpayers participated in the EITC program at a rate of 78 percent while dollar participation was 81 percent.
-
Working Paper: An Economist's Primer on Survey Samples
September 2000
Working Paper Number:
CES-00-15
Survey data underlie most empirical work in economics, yet economists typically have little familiarity with survey sample design and its effects on inference. This paper describes how sample designs depart from the simple random sampling model implicit in most econometrics textbooks, points out where the effects of this departure are likely to be greatest, and describes the relationship between design-based estimators developed by survey statisticians and related econometric methods for regression. Its intent is to provide empirical economists with enough background in survey methods to make informed use of design-based estimators. It emphasizes surveys of households (the source of most public-use files), but also considers how surveys of businesses differ. Examples from the National Longitudinal Survey of Youth of 1979 and the Current Population Survey illustrate practical aspects of design-based estimation.
-
Working Paper: Incorporating Administrative Data in Survey Weights for the 2018-2022 Survey of Income and Program Participation
October 2024
Working Paper Number:
CES-24-58
Response rates to the Survey of Income and Program Participation (SIPP) have declined over time, raising the potential for nonresponse bias in survey estimates. A potential solution is to leverage administrative data from government agencies and third-party data providers when constructing survey weights. In this paper, we modify various parts of the SIPP weighting algorithm to incorporate such data. We create these new weights for the 2018 through 2022 SIPP panels and examine how the new weights affect survey estimates. Our results show that before weighting adjustments, SIPP respondents in these panels have higher socioeconomic status than the general population. Existing weighting procedures reduce many of these differences. Comparing SIPP estimates between the production weights and the administrative data-based weights yields changes that are not uniform across the joint income and program participation distribution. Unlike other Census Bureau household surveys, there is no large increase in nonresponse bias in SIPP due to the COVID-19 Pandemic. In summary, the magnitude and sign of nonresponse bias in SIPP is complicated, and the existing weighting procedures may change the sign of nonresponse bias for households with certain incomes and program benefit statuses.
-
Working Paper: Gradient Boosting to Address Statistical Problems Arising from Non-Linkage of Census Bureau Datasets
June 2024
Working Paper Number:
CES-24-27
This article introduces the twangRDC package, which contains functions to address non-linkage in US Census Bureau datasets. The Census Bureau's Person Identification Validation System facilitates data linkage by assigning unique person identifiers to federal, third party, decennial census, and survey data. Not all records in these datasets can be linked to the reference file and as such not all records will be assigned an identifier. This article is a tutorial for using the twangRDC to generate nonresponse weights to account for non-linkage of person records across US Census Bureau datasets.
-
Working Paper: The Impact of Household Surveys on 2020 Census Self-Response
July 2022
Working Paper Number:
CES-22-24
Households that were sampled in 2019 for the American Community Survey (ACS) had lower self-response rates to the 2020 Census. The magnitude varied from -1.5 percentage points for households sampled in January 2019 to -15.1 percentage points for households sampled in December 2019. Similar effects are found for the Current Population Survey (CPS) as well.
-
Working Paper: Within and Across County Variation in SNAP Misreporting: Evidence from Linked ACS and Administrative Records
July 2014
Working Paper Number:
carra-2014-05
This paper examines sub-state spatial and temporal variation in misreporting of participation in the Supplemental Nutrition Assistance Program (SNAP) using several years of the American Community Survey linked to SNAP administrative records from New York (2008-2010) and Texas (2006-2009). I calculate county false-negative (FN) and false-positive (FP) rates for each year of observation and find that, within a given state and year, there is substantial heterogeneity in FN rates across counties. In addition, I find evidence that FN rates (but not FP rates) persist over time within counties. This persistence in FN rates is strongest among more populous counties, suggesting that when noise from sampling variation is not an issue, some counties have consistently high FN rates while others have consistently low FN rates. This finding is important for understanding how misreporting might bias estimates of sub-state SNAP participation rates, changes in those participation rates, and effects of program participation. This presentation was given at the CARRA Seminar, June 27, 2013.
-
Working Paper: Nonresponse and Coverage Bias in the Household Pulse Survey: Evidence from Administrative Data
October 2024
Working Paper Number:
CES-24-60
The Household Pulse Survey (HPS) conducted by the U.S. Census Bureau is a unique survey that provided timely data on the effects of the COVID-19 Pandemic on American households and continues to provide data on other emergent social and economic issues. Because the survey has a response rate in the single digits and only has an online response mode, there are concerns about nonresponse and coverage bias. In this paper, we match administrative data from government agencies and third-party data to HPS respondents to examine how representative they are of the U.S. population. For comparison, we create a benchmark of American Community Survey (ACS) respondents and nonrespondents and include the ACS respondents as another point of reference. Overall, we find that the HPS is less representative of the U.S. population than the ACS. However, performance varies across administrative variables, and the existing weighting adjustments appear to greatly improve the representativeness of the HPS. Additionally, we look at household characteristics by their email domain to examine the effects on coverage from limiting email messages in 2023 to addresses from the contact frame with at least 90% deliverability rates, finding no clear change in the representativeness of the HPS afterwards.
-
Working Paper: Connected and Uncooperative: The Effects of Homogenous and Exclusive Social Networks on Survey Response Rates and Nonresponse Bias
January 2024
Working Paper Number:
CES-24-01
Social capital, the strength of people's friendship networks and community ties, has been hypothesized as an important determinant of survey participation. Investigating this hypothesis has been difficult given data constraints. In this paper, we provide insights by investigating how response rates and nonresponse bias in the American Community Survey are correlated with county-level social network data from Facebook. We find that areas of the United States where people have more exclusive and homogenous social networks have higher nonresponse bias and lower response rates. These results provide further evidence that the effects of social capital may not be simply a matter of whether people are socially isolated or not, but also what types of social connections people have and the sociodemographic heterogeneity of their social networks.
-
Working Paper: Bias in Food Stamps Participation Estimates in the Presence of Misreporting Error
March 2013
Working Paper Number:
CES-13-13
This paper focuses on how survey misreporting of food stamp receipt can bias demographic estimates of program participation. Food stamps is a federally funded program that subsidizes the nutrition of low-income households. In order to improve the reach of this program, studies of how program participation varies by demographic group have been conducted using census data. Census data are subject to substantial misreporting error, both underreporting and overreporting, which can bias the estimates. The impact of misreporting error on estimate bias is examined by calculating food stamp participation rates, misreporting rates, and bias for selected household characteristics (covariates).