-
CONSTRUCTION OF REGIONAL INPUT-OUTPUT TABLES FROM ESTABLISHMENT-LEVEL MICRODATA: ILLINOIS, 1982
August 1993
Working Paper Number:
CES-93-12
This paper presents a new method for use in the construction of hybrid regional input-output tables, based primarily on individual returns from the Census of Manufactures. Using this method, input-output tables can be completed at a fraction of the cost and time involved in the completion of a full survey table. Special attention is paid to secondary production, a problem often ignored by input-output analysts. A new method to handle secondary production is presented. The method reallocates the amount of secondary production and its associated inputs, on an establishment basis, based on the assumption that the input structure for any given commodity is determined not by the industry in which the commodity was produced, but by the commodity itself -- the commodity-based technology assumption. A biproportional adjustment technique is used to perform the reallocations.
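Biproportional adjustment of this kind is usually implemented as the RAS algorithm: alternately scale the rows and columns of a seed matrix until both margins hit their targets. A minimal sketch; the function name and the toy numbers are illustrative, not taken from the paper:

```python
import numpy as np

def ras(matrix, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Biproportionally scale `matrix` so its row and column sums
    match the given targets (the classic RAS iteration)."""
    x = matrix.astype(float).copy()
    for _ in range(max_iter):
        # Scale each row to match its target row sum.
        r = row_targets / x.sum(axis=1)
        x *= r[:, None]
        # Scale each column to match its target column sum.
        s = col_targets / x.sum(axis=0)
        x *= s[None, :]
        if np.allclose(x.sum(axis=1), row_targets, atol=tol):
            break
    return x

# Toy example: adjust a 2x2 flow table to new margins
# (margins must share the same total, here 30).
seed = np.array([[10.0, 5.0], [5.0, 10.0]])
adjusted = ras(seed,
               row_targets=np.array([12.0, 18.0]),
               col_targets=np.array([14.0, 16.0]))
```

The iteration preserves the zero pattern and relative structure of the seed table while forcing consistency with the new margins, which is what makes it attractive for reallocating secondary production.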
-
The Importance of Establishment Data in Economic Research
August 1993
Working Paper Number:
CES-93-10
The importance and usefulness of establishment microdata for economic research and policy analysis is outlined and contrasted with traditional products of statistical agencies -- aggregate cross-section tabulations. It is argued that statistical agencies must begin to seriously rethink the way they view establishment data products.
-
Evidence on IO Technology Assumptions From the Longitudinal Research Database
May 1993
Working Paper Number:
CES-93-08
This paper investigates whether a popular IO technology assumption, the commodity technology model, is appropriate for specific United States manufacturing industries, using data on product composition and use of intermediates by individual plants from the Census Longitudinal Research Database. Extant empirical research has suggested the rejection of this model, owing to the implication of aggregate data that negative inputs are required to make particular goods. The plant-level data explored here suggest that much of the rejection of the commodity technology model from aggregative data was spurious; problematic entries in industry-level IO tables generally have a very low Census content. However, among the other industries for which Census data on specified materials use is available, there is a sound statistical basis for rejecting the commodity technology model in about one-third of the cases: a novel econometric test demonstrates a fundamental heterogeneity of materials use among plants that only produce the primary products of the industry.
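The negative-input problem that drove earlier rejections of the commodity technology model can be illustrated directly. Under that assumption each commodity has a single input recipe A regardless of which industry produces it, so the use matrix satisfies U = A V' and A = U inv(V'); with secondary production in the make matrix, aggregate accounts can imply negative coefficients. All numbers below are hypothetical:

```python
import numpy as np

# Hypothetical 2-commodity, 2-industry accounts.
# U[i, j]: use of commodity i by industry j.
U = np.array([[30.0, 10.0],
              [ 5.0, 40.0]])
# V[j, c]: output of commodity c by industry j (make matrix).
# Industry 1 produces some of commodity 2 as a secondary product.
V = np.array([[80.0, 15.0],
              [ 0.0, 60.0]])

# Commodity technology assumption: one input recipe per commodity,
# so U = A @ V.T and A = U @ inv(V.T).
A = U @ np.linalg.inv(V.T)

# Aggregate accounts can imply economically impossible negative
# inputs -- the usual basis for rejecting the model.
has_negative = (A < -1e-12).any()
```

In this toy system the derived recipe matrix contains a negative entry even though every observed flow is positive, which is exactly the aggregation artifact the plant-level data in the paper are used to probe.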
-
Multiple Classification Systems For Economic Data: Can A Thousand Flowers Bloom? And Should They?
December 1991
Working Paper Number:
CES-91-08
The principle that the statistical system should provide flexibility--possibilities for generating multiple groupings of data to satisfy multiple objectives--if it is to satisfy users is universally accepted. Yet in practice, this goal has not been achieved. This paper discusses the feasibility of providing flexibility in the statistical system to accommodate multiple uses of the industrial data now primarily examined within the Standard Industrial Classification (SIC) system. In one sense, the question of feasibility is almost trivial. With today's computer technology, vast amounts of data can be manipulated and stored at very low cost. Reconfigurations of the basic data are very inexpensive compared to the cost of collecting the data. But flexibility in the statistical system implies more than the technical ability to regroup data. It requires that the basic data be sufficiently detailed to support user needs and be processed and maintained in a fashion that makes the use of a variety of aggregation rules possible. For this to happen, statistical agencies must recognize the need for high-quality microdata and build this into their planning processes. Agencies need to view their missions from a multiple-use perspective and move away from use of a primary reporting and collection vehicle. Although the categories used to report data must be flexible, practical considerations dictate that data collection proceed within a fixed classification system. It is simply too expensive for both respondents and statistical agencies to process survey responses in the absence of standardized forms, data entry programs, etc. I argue for a basic classification centered on commodities--products, services, raw materials, and labor inputs--as the focus of data collection. The idea is to make the principal variables of interest--the commodities--the vehicle for the collection and processing of the data.
For completeness, the basic classification should include labor usage through some form of occupational classification. In most economic surveys at the Census Bureau, the reporting unit and the classified unit have been the establishment. But there is no need for this to be so. The basic principle to be followed in data collection is that the data should be collected in the most efficient way--efficiency being defined jointly in terms of statistical agency collection costs and respondent burdens.
-
Measuring Total Factor Productivity, Technical Change And The Rate Of Return To Research And Development
May 1991
Working Paper Number:
CES-91-03
Recent research indicates that estimates of the effect of research and development (R&D) on total factor productivity growth are sensitive to different measures of total factor productivity. In this paper, we use establishment-level data for the flat glass industry extracted from the Census Bureau's Longitudinal Research Database (LRD) to construct three competing measures of total factor productivity. We then use these measures to estimate the conventional R&D intensity model. Our empirical results support previous findings that the estimated coefficients of the model are sensitive to the measurement of total factor productivity. Also, when using microdata and more detailed modeling, R&D is found to be a significant factor influencing productivity growth. Finally, for the flat glass industry, a specific technical change index capturing the learning-by-doing process appears to be superior to the conventional time trend index.
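One standard total factor productivity measure of the kind compared in such exercises is the Tornqvist (translog) index: output growth minus cost-share-weighted input growth. A sketch with illustrative numbers, not the paper's actual three measures:

```python
import math

def tornqvist_tfp_growth(q0, q1, inputs0, inputs1, shares0, shares1):
    """Tornqvist index of TFP growth between two periods:
    log output growth minus average-share-weighted log input growth."""
    output_growth = math.log(q1 / q0)
    input_growth = sum(
        0.5 * (s0 + s1) * math.log(x1 / x0)
        for s0, s1, x0, x1 in zip(shares0, shares1, inputs0, inputs1)
    )
    return output_growth - input_growth

# Illustrative plant: output up 10%, labor and capital each up 5%,
# with constant cost shares of 0.6 and 0.4.
g = tornqvist_tfp_growth(
    q0=100.0, q1=110.0,
    inputs0=[50.0, 50.0], inputs1=[52.5, 52.5],
    shares0=[0.6, 0.4], shares1=[0.6, 0.4],
)
```

Because all inputs grow at the same 5% rate here, measured TFP growth is simply ln(1.1) - ln(1.05); with uneven input growth, the averaged cost shares do real work, and that weighting is one reason competing TFP measures can diverge.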
-
Published Versus Sample Statistics From The ASM: Implications For The LRD
January 1991
Working Paper Number:
CES-91-01
In principle, the Longitudinal Research Database (LRD), which links the establishments in the Annual Survey of Manufactures (ASM), is ideal for examining the dynamics of firm and aggregate behavior. However, the published ASM aggregates are not simply the appropriately weighted sums of establishment data in the LRD. Instead, the published data equal the sum of LRD-based sample estimates and nonsample estimates. The latter reflect adjustments related to sampling error and the imputation of small-establishment data. Differences between the LRD and the ASM raise questions for users of both data sets. For ASM users, time-series variation in the difference indicates potential problems in consistently and reliably estimating the nonsample portion of the ASM. For LRD users, potential sample selection problems arise due to the systematic exclusion of data from small establishments. Microeconomic studies based on the LRD can yield misleading inferences to the extent that small establishments behave differently. Similarly, new economic aggregates constructed from the LRD can yield incorrect estimates of levels and growth rates. This paper documents cross-sectional and time-series differences between ASM and LRD estimates of levels and growth rates of total employment, and compares them with employment estimates provided by Bureau of Labor Statistics and County Business Patterns data. In addition, this paper explores potential adjustments to economic aggregates constructed from the LRD. In particular, the paper reports the results of adjusting LRD-based estimates of gross job creation and destruction to be consistent with the net job changes implied by the published ASM figures.
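One simple way to force LRD-based gross flows to be consistent with a published net change is to split the discrepancy evenly between the creation and destruction sides. This is an illustrative rule of thumb, not necessarily the paper's actual adjustment procedure:

```python
def adjust_gross_flows(creation, destruction, published_net):
    """Adjust gross job creation and destruction so that
    creation - destruction equals the published net change.
    The discrepancy is split evenly between the two flows --
    one simple rule; the paper's adjustment may differ."""
    discrepancy = published_net - (creation - destruction)
    return creation + discrepancy / 2.0, destruction - discrepancy / 2.0

# LRD implies a net change of +20 (thousand jobs, say), but the
# published ASM net change is +15.
c, d = adjust_gross_flows(creation=120.0, destruction=100.0,
                          published_net=15.0)
```

The adjusted flows (117.5 and 102.5) now net to the published figure while changing each gross flow as little as possible under the even-split rule.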
-
THE RELATIONSHIPS AMONG ACQUIRING AND ACQUIRED FIRMS' PRODUCT LINES
September 1990
Working Paper Number:
CES-90-12
This study develops detailed information on the relationships among the activities of acquiring and acquired firms at and near the time of merger for a sample of 94 takeovers undertaken between 1977 and 1982. We focus on takeovers for two reasons. First, takeovers are an important and controversial phenomenon. Second, takeovers allow us to look at marginal changes, admittedly large ones, in the firm's boundaries. Thus, they provide a useful way of examining relationships among activities of the firm without having to go into great detail regarding the historical decisions that generated the firm's current structure. While the individual establishment is our basic data unit, in this study we aggregate the activities of the firm to the line of business (LOB) level. Each LOB of an acquired firm is classified by its relationship--horizontal, vertical (upstream or downstream), or conglomerate--to the LOBs of the acquiring firm. Using these categorizations we aggregate the LOB-level information to the firm level to investigate the degree to which our sample of mergers is specialized to particular types of relationships. While we find a significant group of unspecialized takeovers, most appear to fit a specific category. We also look at the pattern of closed operations immediately following the takeover. Closings are generally concentrated in operations involving horizontal relationships. Finally, we consider the pattern of relationships between hostile and friendly takeovers and whether takeover premiums vary by type of merger. Merger premiums are not related to the type of relationship between the acquiring and acquired firm, but they are tied to whether the takeover is friendly or hostile.
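The classification step can be sketched as a rule over industry codes and input-output shipment shares: same industry means horizontal, a sizable flow in either direction means vertical (upstream or downstream), and otherwise conglomerate. The codes, flow shares, and threshold below are illustrative assumptions, not the paper's data:

```python
def classify_relationship(acquired_lob, acquirer_lobs, io_flows,
                          threshold=0.05):
    """Classify an acquired line of business relative to the acquirer's
    LOBs as 'horizontal', 'vertical', or 'conglomerate'.
    io_flows[(a, b)] is the share of industry a's output shipped to
    industry b; codes and threshold are illustrative."""
    if any(acquired_lob == lob for lob in acquirer_lobs):
        return "horizontal"                  # same industry
    for lob in acquirer_lobs:
        # A material flow in either direction marks an upstream or
        # downstream (vertical) link.
        if io_flows.get((acquired_lob, lob), 0.0) >= threshold:
            return "vertical"
        if io_flows.get((lob, acquired_lob), 0.0) >= threshold:
            return "vertical"
    return "conglomerate"

# Illustrative SIC-style codes: pulp mills ship 40% of output to paper mills.
flows = {("2611", "2621"): 0.40}
rel = classify_relationship("2611", ["2621"], flows)
```

Aggregating such per-LOB labels over a firm's full set of lines of business then gives the firm-level specialization measures the abstract describes.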
-
The Classification of Manufacturing Industries: an Input-Based Clustering of Activity
August 1990
Working Paper Number:
CES-90-07
The classification and aggregation of manufacturing data is vital for the analysis and reporting of economic activity. Most organizations and researchers use the Standard Industrial Classification (SIC) system for this purpose. This is, however, not the only option. Our paper examines an alternative classification based on clustering activity using production technologies. While this approach yields results which are similar to the SIC, there are important differences between the two classifications in terms of the specific industrial categories and the amount of information lost through aggregation.
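Clustering industries by production technology can be sketched as grouping input-share vectors by similarity. The single-linkage pass and the 0.9 cosine threshold below are illustrative choices, not the paper's actual procedure:

```python
import numpy as np

def cluster_by_inputs(shares, threshold=0.9):
    """Group industries whose input-share vectors have cosine
    similarity above `threshold` (a simple single-linkage pass)."""
    n = len(shares)
    labels = list(range(n))              # start with one cluster each
    norms = shares / np.linalg.norm(shares, axis=1, keepdims=True)
    for i in range(n):
        for j in range(i + 1, n):
            if norms[i] @ norms[j] >= threshold:
                # Merge j's cluster into i's.
                old, new = labels[j], labels[i]
                labels = [new if lab == old else lab for lab in labels]
    return labels

# Illustrative input shares (materials, energy, services) for four
# industries: two materials-intensive, two services-intensive.
shares = np.array([[0.70, 0.20, 0.10],
                   [0.68, 0.22, 0.10],
                   [0.10, 0.10, 0.80],
                   [0.12, 0.08, 0.80]])
labels = cluster_by_inputs(shares)
```

Industries with nearly identical input recipes end up in the same cluster regardless of their SIC codes, which is the sense in which an input-based classification can depart from the SIC.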
-
Estimating A Multivariate ARMA Model with Mixed-Frequency Data: An Application to Forecasting U.S. GNP at Monthly Intervals
July 1990
Working Paper Number:
CES-90-05
This paper develops and applies a method for directly estimating a multivariate, autoregressive moving-average (ARMA) model with mixed-frequency, time-series data. Unlike standard, single-frequency methods, the method does not require the data to be transformed to a single frequency (by temporally aggregating higher-frequency data to lower frequencies or interpolating lower-frequency data to higher frequencies) or the model to be restricted by frequency. Subject to computational constraints, the method can handle any number of variables and frequencies. In addition, variables can be treated as temporally aggregated and observed with errors and delays. The key to the method is to view lower-frequency data as periodically missing and to use the missing-data variant of the Kalman filter.
In the application, a bivariate ARMA model is estimated with monthly observations on total employment and quarterly observations on real GNP in the U.S. for January 1958 to December 1978. The estimated model is then used to compute monthly forecasts of the variables for 1 to 12 months ahead, for January 1979 to December 1988. Compared with GNP forecasts for similar periods produced by established econometric and time-series models, the present GNP forecasts are generally more accurate for 1 to 4 months ahead and about equally or slightly less accurate for 5 to 12 months ahead. The application thus shows that the present method is tractable and able to effectively exploit cross-frequency sample information, in ARMA estimation and forecasting, which standard methods cannot exploit at all.
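The key idea--treat lower-frequency data as periodically missing and let the Kalman filter skip the measurement update at unobserved dates--can be sketched in a univariate local-level model. This is a deliberate simplification of the paper's multivariate ARMA system:

```python
import numpy as np

def kalman_local_level(y, q=1.0, r=1.0):
    """Univariate local-level Kalman filter that skips missing (NaN)
    observations: at dates with no observation, only the time update
    runs.  A minimal sketch of the missing-data variant, not the
    paper's multivariate ARMA estimator."""
    x, p = 0.0, 1e6                      # diffuse-ish initial state
    estimates = []
    for obs in y:
        p = p + q                        # time update (random-walk state)
        if not np.isnan(obs):            # measurement update only if observed
            k = p / (p + r)              # Kalman gain
            x = x + k * (obs - x)
            p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# A "quarterly" series viewed at monthly dates: two NaNs between
# each pair of observations.
y = np.array([np.nan, np.nan, 10.0, np.nan, np.nan, 12.0])
xhat = kalman_local_level(y)
```

Between observations the state estimate is carried forward while its uncertainty grows, so each quarterly observation is absorbed with a gain that reflects the two months of missing data -- the mechanism that lets a single model combine monthly and quarterly series without aggregation or interpolation.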
-
Longitudinal Economic Data At The Census Bureau: A New Database Yields Fresh Insight On Some Old Issues
January 1990
Working Paper Number:
CES-90-01
This paper has two goals. First, it illustrates the importance of panel data with examples taken from research in progress using the U.S. Census Bureau's Longitudinal Research Database (LRD). Although the LRD is not the result of a "true" longitudinal survey, it provides both balanced and unbalanced panel data sets for establishments, firms, and lines of business. The second goal is to integrate the results of recent research with the LRD and to draw conclusions about the importance of longitudinal microdata for econometric research and time series analysis. The advantages of panel data arise from both the micro and time series aspects of the observations. This also leads us to consider why panel data are necessary to understand and interpret the time series behavior of aggregate statistics produced in cross-section establishment surveys and censuses. We find that typical homogeneity assumptions are likely to be inappropriate in a wide variety of applications. In particular, the industry in which an establishment is located, the ownership of the establishment, and the existence of the establishment (births and deaths) are endogenous variables that cannot simply be taken as time invariant fixed effects in econometric modeling.