Statistical agencies face a dual mandate to publish accurate statistics while protecting respondent privacy. Increasing privacy protection requires decreased accuracy. Recognizing this as a resource allocation problem, we propose an economic solution: operate where the marginal cost of increasing privacy equals the marginal benefit. Our model of production, from computer science, assumes data are published using an efficient differentially private algorithm. Optimal choice weighs the demand for accurate statistics against the demand for privacy. Examples from U.S. statistical programs show how our framework can guide decision-making. Further progress requires a better understanding of willingness-to-pay for privacy and statistical accuracy.
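In stylized notation (ours, not the paper's), the proposed rule is the familiar first-order condition: choose the privacy-loss parameter so that the marginal benefit of the accuracy it buys equals its marginal privacy cost.

```latex
% Stylized first-order condition (our notation, not the paper's).
% \epsilon: privacy-loss parameter; I(\epsilon): accuracy of the
% published statistic; B(\cdot): benefit of accuracy; C(\cdot): cost
% of privacy loss.
\max_{\epsilon \ge 0} \; B\bigl(I(\epsilon)\bigr) - C(\epsilon)
\quad\Longrightarrow\quad
B'\bigl(I(\epsilon^{*})\bigr)\, I'(\epsilon^{*}) = C'(\epsilon^{*})
```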
-
Revisiting the Economics of Privacy: Population Statistics and Confidentiality Protection as Public Goods
January 2017
Working Paper Number:
CES-17-37
We consider the problem of determining the optimal accuracy of public statistics when increased accuracy requires a loss of privacy. To formalize this allocation problem, we use tools from statistics and computer science to model the publication technology used by a public statistical agency. We derive the demand for accurate statistics from first principles to generate interdependent preferences that account for the public-good nature of both data accuracy and privacy loss. We first show that data accuracy is inefficiently undersupplied by a private provider. Solving the appropriate social planner's problem produces an implementable publication strategy. We implement the socially optimal publication plan for statistics on income and health status using data from the American Community Survey, the National Health Interview Survey, the Federal Statistical System Public Opinion Survey, and the Cornell National Social Survey. Our analysis indicates that the welfare losses from providing too much privacy protection, and therefore too little accuracy, can be substantial.
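A minimal numerical sketch of such a planner's problem appears below; the functional forms and parameters are illustrative assumptions, not the paper's estimates.

```python
# Illustrative social planner's problem: choose the privacy-loss
# parameter epsilon to balance the aggregate demand for accurate
# statistics against the aggregate demand for privacy. All functional
# forms and parameters are hypothetical, not the paper's estimates.
from scipy.optimize import minimize_scalar

N = 1_000_000      # population size (hypothetical)
beta_acc = 50.0    # aggregate willingness to pay for accuracy (hypothetical)
gamma_priv = 2.0   # aggregate marginal cost of privacy loss (hypothetical)

def accuracy(eps):
    # Stylized publication technology: with an efficient
    # eps-differentially-private mechanism, error shrinks in eps, so
    # accuracy rises with diminishing returns.
    return 1.0 - 1.0 / (1.0 + N * eps)

def welfare(eps):
    # Both accuracy and privacy loss are public goods: everyone
    # consumes the same published accuracy and bears the same eps.
    return beta_acc * accuracy(eps) - gamma_priv * eps

res = minimize_scalar(lambda e: -welfare(e), bounds=(1e-9, 10.0),
                      method="bounded")
print(f"socially optimal epsilon ~ {res.x:.4f}")
```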
View Full
Paper PDF
-
Why the Economics Profession Must Actively Participate in the Privacy Protection Debate
March 2019
Working Paper Number:
CES-19-09
When Google or the U.S. Census Bureau publishes detailed statistics on browsing habits or neighborhood characteristics, everyone loses some privacy in exchange for the public information supplied. To date, economists have not focused on the privacy loss inherent in data publication. Instead, these issues have been advanced almost exclusively by computer scientists, who are primarily interested in the technical problems associated with protecting privacy. Economists should join the discussion, first, to determine where to balance privacy protection against data quality, which is a social choice problem, and second, to ensure that new privacy models preserve the validity of public data for economic research.
View Full
Paper PDF
-
An In-Depth Examination of Requirements for Disclosure Risk Assessment
October 2023
Authors:
Ron Jarmin,
John M. Abowd,
Ian M. Schmutte,
Jerome P. Reiter,
Nathan Goldschlag,
Victoria A. Velkoff,
Michael B. Hawes,
Robert Ashmead,
Ryan Cumings-Menon,
Sallie Ann Keller,
Daniel Kifer,
Philip Leclerc,
Rolando A. Rodríguez,
Pavel Zhuravlev
Working Paper Number:
CES-23-49
The use of formal privacy to protect the confidentiality of responses in the 2020 Decennial Census of Population and Housing has triggered renewed interest and debate over how to measure the disclosure risks and societal benefits of the published data products. Following long-established precedent in economics and statistics, we argue that any proposal for quantifying disclosure risk should be based on pre-specified, objective criteria. Such criteria should be used to compare methodologies to identify those with the most desirable properties. We illustrate this approach, using simple desiderata, to evaluate the absolute disclosure risk framework, the counterfactual framework underlying differential privacy, and prior-to-posterior comparisons. We conclude that satisfying all the desiderata is impossible, but counterfactual comparisons satisfy the most while absolute disclosure risk satisfies the fewest. Furthermore, we explain that many of the criticisms levied against differential privacy would apply equally to any technology that is not equivalent to direct, unrestricted access to confidential data. Thus, more research is needed, but in the near term, the counterfactual approach appears best suited for privacy-utility analysis.
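The prior-dependence problem alluded to above is easy to see in a toy example; the sketch below contrasts the two frameworks for binary randomized response, with all numbers illustrative.

```python
# Toy contrast between an absolute disclosure-risk measure and the
# counterfactual measure underlying differential privacy, using binary
# randomized response. All numbers are illustrative.
import math

p_truth = 0.75  # probability the mechanism reports the true bit (hypothetical)

# Counterfactual (DP) measure: the worst-case log-ratio of output
# probabilities between two worlds differing in one respondent's
# answer. It is a property of the mechanism alone.
epsilon = math.log(p_truth / (1 - p_truth))

# Absolute measure: the attacker's posterior that the true bit is 1
# after observing a reported 1. It depends on the attacker's prior,
# which is one reason it is hard to pre-specify objectively.
for prior in (0.1, 0.5, 0.9):
    posterior = (p_truth * prior) / (p_truth * prior + (1 - p_truth) * (1 - prior))
    print(f"prior={prior:.1f} -> posterior={posterior:.3f} (epsilon={epsilon:.2f} fixed)")
```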
View Full
Paper PDF
-
Releasing Earnings Distributions Using Differential Privacy: Disclosure Avoidance System for Post-Secondary Employment Outcomes (PSEO)
April 2019
Working Paper Number:
CES-19-13
The U.S. Census Bureau recently released data on the earnings percentiles of graduates from post-secondary institutions. This paper describes and evaluates the disclosure avoidance system developed for these statistics. We propose a differentially private algorithm for releasing these data built from standard building blocks: we construct a histogram of earnings and apply the Laplace mechanism to recover a differentially private CDF of earnings. We demonstrate that our algorithm can release earnings distributions with low error and that it outperforms prior work based on the concept of smooth sensitivity from Nissim, Raskhodnikova and Smith (2007).
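A minimal sketch of that construction appears below; the bin edges, privacy-loss parameter, and post-processing are our illustrative choices, not those of the production PSEO system.

```python
# Minimal sketch of the histogram-plus-Laplace approach described
# above: histogram the earnings, add Laplace noise to each bin count,
# then cumulate and normalize to obtain a differentially private CDF.
# All parameter choices here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def dp_earnings_cdf(earnings, bin_edges, epsilon):
    counts, _ = np.histogram(earnings, bins=bin_edges)
    # Each person falls in exactly one bin, so the histogram has L1
    # sensitivity 1 and Laplace(1/epsilon) noise suffices.
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.maximum(noisy, 0.0)  # post-processing: clip negative counts
    cdf = np.cumsum(noisy)
    return cdf / cdf[-1]            # normalize so the CDF ends at 1

earnings = rng.lognormal(mean=10.5, sigma=0.6, size=5_000)  # fake cohort
edges = np.linspace(0, 200_000, 101)
cdf = dp_earnings_cdf(earnings, edges, epsilon=1.0)
# Read a percentile off the noisy CDF, e.g. the median:
median = edges[1:][np.searchsorted(cdf, 0.5)]
print(f"DP median earnings ~ ${median:,.0f}")
```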
View Full
Paper PDF
-
The 2010 Census Confidentiality Protections Failed, Here's How and Why
December 2023
Authors:
Lars Vilhuber,
John M. Abowd,
Ethan Lewis,
Nathan Goldschlag,
Robert Ashmead,
Daniel Kifer,
Philip Leclerc,
Rolando A. Rodríguez,
Tamara Adams,
David Darais,
Sourya Dey,
Simson L. Garfinkel,
Scott Moore,
Ramy N. Tadros
Working Paper Number:
CES-23-63
Using only 34 published tables, we reconstruct five variables (census block, sex, age, race, and ethnicity) in the confidential 2010 Census person records. Using the 38-bin age variable tabulated at the census block level, at most 20.1% of reconstructed records can differ from their confidential source on even a single value for these five variables. Using only published data, an attacker can verify that all records in 70% of all census blocks (97 million people) are perfectly reconstructed. The tabular publications in Summary File 1 thus have prohibited disclosure risk similar to the unreleased confidential microdata. Reidentification studies confirm that an attacker can, within blocks with perfect reconstruction accuracy, correctly infer the actual census response on race and ethnicity for 3.4 million vulnerable population uniques (persons with nonmodal characteristics) with 95% accuracy, the same precision as the confidential data achieve and far greater than statistical baselines. The flaw in the 2010 Census framework was the assumption that aggregation prevented accurate microdata reconstruction, justifying weaker disclosure limitation methods than were applied to 2010 Census public microdata. The framework used for 2020 Census publications defends against attacks that are based on reconstruction, as we also demonstrate here. Finally, we show that alternatives to the 2020 Census Disclosure Avoidance System with similar accuracy (enhanced swapping) also fail to protect confidentiality, and those that partially defend against reconstruction attacks (incomplete suppression implementations) destroy the primary statutory use case: data for redistricting all legislatures in the country in compliance with the 1965 Voting Rights Act.
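A toy version of the reconstruction logic is sketched below: a three-person "block," two hypothetical marginal tables, and brute-force enumeration standing in for the integer programming used at scale.

```python
# Toy reconstruction attack: enumerate the microdata sets consistent
# with two published marginal tables for a tiny hypothetical block.
# Real attacks solve this at national scale with integer programming;
# this brute-force sketch only illustrates why aggregation alone does
# not prevent accurate reconstruction.
from itertools import combinations_with_replacement

SEXES, AGE_BINS = ("F", "M"), ("0-17", "18-64", "65+")
record_types = [(s, a) for s in SEXES for a in AGE_BINS]

# "Published" tables for a 3-person block (hypothetical numbers).
sex_table = {"F": 2, "M": 1}
age_table = {"0-17": 1, "18-64": 1, "65+": 1}

solutions = [
    combo
    for combo in combinations_with_replacement(record_types, 3)
    if all(sum(r[0] == s for r in combo) == n for s, n in sex_table.items())
    and all(sum(r[1] == a for r in combo) == n for a, n in age_table.items())
]

print(f"{len(solutions)} reconstruction(s) consistent with the tables:")
for sol in solutions:
    print(" ", sol)
```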
View Full
Paper PDF
-
Validating Abstract Representations of Spatial Population Data while considering Disclosure Avoidance
February 2020
Working Paper Number:
CES-20-05
This paper furthers a research agenda for modeling populations along spatial networks and expands an earlier empirical analysis to a full U.S. county (Gaboardi, 2019, Ch. 1-2). The specific foci are the necessity of, and methods for, validating and benchmarking spatial data when conducting social science research with aggregated and ambiguous population representations. To promote the validation of publicly available data, access to highly restricted census microdata was requested, and granted, in order to determine the levels of accuracy and error associated with a network-based population modeling framework. The primary findings reinforce the utility of a novel network allocation method, populated polygons to networks (pp2n), in terms of accuracy, computational complexity, and real runtime (Gaboardi, 2019, Ch. 2). In addition, a pseudo-benchmark dataset's performance against the true census microdata shows promise for modeling populations along networks.
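The core allocation idea can be sketched in a few lines; the function below is our simplified illustration of polygon-to-network allocation, not the pp2n implementation itself.

```python
# Simplified polygon-to-network population allocation in the spirit of
# pp2n (our illustration, not the dissertation's implementation): a
# polygon's population is spread over the network segments that
# intersect it, proportional to segment length.
def allocate_to_network(polygon_pop, segment_lengths):
    """polygon_pop: people counted in the polygon.
    segment_lengths: lengths of the network segments inside it."""
    total = sum(segment_lengths)
    if total == 0:
        return [0.0] * len(segment_lengths)  # no network: nothing to allocate
    return [polygon_pop * length / total for length in segment_lengths]

# A block with 120 residents and three street segments inside it:
print(allocate_to_network(120, [50.0, 30.0, 20.0]))  # -> [60.0, 36.0, 24.0]
```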
View Full
Paper PDF
-
Simultaneous Edit-Imputation for Continuous Microdata
December 2015
Working Paper Number:
CES-15-44
Many statistical organizations collect data that are expected to satisfy linear constraints; as examples, component variables should sum to total variables, and ratios of pairs of variables should be bounded by expert-specified constants. When reported data violate constraints, organizations identify and replace values potentially in error in a process known as edit-imputation. To date, most approaches separate the error localization and imputation steps, typically using optimization methods to identify the variables to change followed by hot deck imputation. We present an approach that fully integrates editing and imputation for continuous microdata under linear constraints. Our approach relies on a Bayesian hierarchical model that includes (i) a flexible joint probability model for the underlying true values of the data with support only on the set of values that satisfy all editing constraints, (ii) a model for latent indicators of the variables that are in error, and (iii) a model for the reported responses for variables in error. We illustrate the potential advantages of the Bayesian editing approach over existing approaches using simulation studies. We apply the model to edit faulty data from the 2007 U.S. Census of Manufactures. Supplementary materials for this article are available online.
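A deliberately simplified, non-Bayesian sketch of the two steps the model integrates (error localization, then constraint-satisfying imputation) is given below; the single edit rule, the localization heuristic, and the historical shares are hypothetical.

```python
# Deliberately simplified edit-imputation for one record under a single
# sum-to-total edit. The paper integrates these steps in a Bayesian
# hierarchical model; the localization rule and historical shares below
# are hypothetical stand-ins.
record = {"materials": 40.0, "labor": 25.0, "other": 15.0, "total": 100.0}

def violates_edit(rec):
    # Edit constraint: components must sum to the reported total.
    return abs(rec["materials"] + rec["labor"] + rec["other"] - rec["total"]) > 1e-9

def edit_impute(rec):
    if not violates_edit(rec):
        return rec
    # Error localization (toy rule): flag the component furthest from
    # its historical share of the total.
    hist_shares = {"materials": 0.5, "labor": 0.3, "other": 0.2}  # hypothetical
    worst = max(hist_shares, key=lambda k: abs(rec[k] / rec["total"] - hist_shares[k]))
    # Imputation: replace the flagged value with the one that restores the edit.
    rec[worst] = rec["total"] - sum(v for k, v in rec.items() if k not in (worst, "total"))
    return rec

print(edit_impute(record))  # 'materials' is flagged and imputed to 60.0
```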
View Full
Paper PDF
-
Confidentiality Protection in the Census Bureau Quarterly Workforce Indicators
February 2006
Working Paper Number:
tp-2006-02
The Quarterly Workforce Indicators are new estimates developed by the Census Bureau's Longitudinal Employer-Household Dynamics Program as a part of its Local Employment Dynamics partnership with 37 state Labor Market Information offices. These data provide detailed quarterly statistics on employment, accessions, layoffs, hires, separations, full-quarter employment (and related flows), job creations, job destructions, and earnings (for flow and stock categories of workers). The data are released for NAICS industries (and 4-digit SICs) at the county, workforce investment board, and metropolitan area levels of geography. The confidential microdata - unemployment insurance wage records, ES-202 establishment employment, and Title 13 demographic and economic information - are protected using a permanent multiplicative noise distortion factor. This factor distorts all input sums, counts, differences, and ratios. The released statistics are analytically valid: measures are unbiased and time series properties are preserved. The confidentiality protection is manifested in the release of some statistics that are flagged as "significantly distorted to preserve confidentiality." These statistics differ from the undistorted statistics by a significant proportion. Even for the significantly distorted statistics, the data remain analytically valid for time series properties. The released data can be aggregated; however, published aggregates are less distorted than custom post-release aggregates. In addition to the multiplicative noise distortion, confidentiality protection is provided by the estimation process for the QWIs, which multiply imputes all missing data (including establishments missing from the UI wage record data for a given UI account) and dynamically re-weights the establishment data to provide state-level comparability with the BLS's Quarterly Census of Employment and Wages.
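A stylized sketch of a permanent multiplicative distortion of this kind follows; the fuzz-factor bounds, distribution, and seeding scheme are our illustrative assumptions, since the production QWI parameters are confidential.

```python
# Stylized permanent multiplicative noise infusion in the spirit of the
# QWI protection described above. The distortion bounds, distribution,
# and seeding scheme are illustrative assumptions; the production
# parameters are confidential.
import hashlib
import random

A, B = 0.05, 0.15  # distort every value by 5-15% (hypothetical bounds)

def permanent_factor(establishment_id: str) -> float:
    # Seed from the establishment ID so the SAME factor applies to
    # every statistic in every quarter; that permanence is what
    # preserves the time-series properties of the published indicators.
    rng = random.Random(hashlib.sha256(establishment_id.encode()).hexdigest())
    magnitude = rng.uniform(A, B)
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return 1.0 + sign * magnitude

def protect(establishment_id: str, true_value: float) -> float:
    return true_value * permanent_factor(establishment_id)

# Same establishment, two quarters: the identical distortion factor.
print(protect("est-0042", 120.0), protect("est-0042", 135.0))
```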
View Full
Paper PDF
-
The Privacy-Protected Gridded Environmental Impacts Frame
December 2024
Working Paper Number:
CES-24-74
This paper introduces the Gridded Environmental Impacts Frame (Gridded EIF), a novel privacy-protected dataset derived from the U.S. Census Bureau's confidential Environmental Impacts Frame (EIF) microdata infrastructure. The EIF combines comprehensive administrative records and survey data on the U.S. population with high-resolution geospatial information on environmental hazards. While access to the EIF is restricted due to the confidential nature of the underlying data, the Gridded EIF offers the broader research community the opportunity to glean insights from the data while preserving confidentiality. We describe the data and the privacy protection process, offer guidance on appropriate usage, and present practical applications.
View Full
Paper PDF
-
Are We Overstating the Economic Costs of Environmental Protection?
May 1997
Working Paper Number:
CES-97-12
Reported expenditures for environmental protection in the U.S. are estimated to exceed $150 billion annually, or about 2% of GDP. This estimate is often used as an assessment of the burden of current regulatory efforts and a standard against which the associated benefits are measured. This makes it a key statistic in the debate surrounding both current and future environmental regulation. Little is known, however, about how well reported expenditures relate to true economic cost. True economic cost depends on whether reported environmental expenditures generate incidental savings, involve uncounted burdens, or accurately reflect the total cost of environmental protection. This paper explores the relationship between reported expenditures and economic cost in a number of major manufacturing industries. Previous research has suggested that an incremental $1 of reported environmental expenditures increases total production costs by anywhere from $1 to $12, i.e., increases in reported costs probably understate the actual increase in economic cost. Surprisingly, our results suggest the reverse: increases in reported costs may overstate the actual increase in economic cost. Our results are based on a large plant-level data set for eleven four-digit SIC industries. We employ a cost-function modeling approach that involves three basic steps. First, we treat real environmental expenditures as a second output of the plant, reflecting perceived environmental abatement efforts. Second, we model the joint production of conventional output and environmental effort as a cost-minimization problem. Third, we calculate the effect of an incremental dollar of reported environmental expenditures at the plant, industry, and manufacturing sector levels. Our approach differs from previous work with similar data by considering a large number of industries, using a cost-function modeling approach, and paying particular attention to plant-specific effects. Our preferred fixed-effects model obtains an aggregate estimate of thirteen cents in increased costs for every dollar of reported incremental pollution control expenditures, with a standard error of sixty-one cents. This single estimate, however, conceals the wide range of values observed at the industry and plant level. We also find that estimates using an alternative, random-effects model are uniformly higher. Although the higher, random-effects estimates are more consistent with previous work, we believe they are biased by omitted variables characterizing differences among plants. While further research is needed, our results suggest that previous estimates of the economic cost associated with environmental expenditures have been biased upward and that the possibility of overstatement is quite real.
Key words: environmental costs, fixed-effects, translog cost model
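A schematic of the within (fixed-effects) estimator at the heart of this exercise is sketched below, with synthetic data and a linear cost relation standing in for the paper's translog specification.

```python
# Schematic of the fixed-effects (within) estimator used to ask how an
# incremental dollar of reported environmental expenditure moves total
# cost. Synthetic data and a linear specification stand in for the
# paper's translog cost model; the coefficient is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_plants, n_years = 200, 5

plant_fe = rng.normal(0.0, 1.0, n_plants)               # unobserved plant effects
env_exp = rng.lognormal(0.0, 1.0, (n_plants, n_years))  # reported env. spending
true_beta = 0.13                                        # stylized: 13 cents per dollar
cost = (plant_fe[:, None] + true_beta * env_exp
        + rng.normal(0.0, 0.5, (n_plants, n_years)))

# Within transformation: demean by plant to sweep out the fixed effects
# that otherwise bias cross-sectional (random-effects) comparisons.
x = env_exp - env_exp.mean(axis=1, keepdims=True)
y = cost - cost.mean(axis=1, keepdims=True)
beta_hat = (x * y).sum() / (x * x).sum()
print(f"within estimate of d(cost)/d(env. spending): {beta_hat:.3f}")
```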
View Full
Paper PDF