In this paper we discuss and analyze a classical economic puzzle: whether differences in factor intensities reflect patterns of specialization or the co-existence of alternative techniques to produce output. We use observations on a large cross-section of U.S. manufacturing plants from the Census of Manufactures, including those that make goods primary to other industries, to study differences in production techniques. We find that in most cases material requirements do not depend on whether goods are made as primary products or as secondary products, which suggests that differences in factor intensities usually reflect patterns of specialization. A few cases where secondary production techniques do differ notably are discussed in more detail. However, overall the regression results support the neoclassical assumption that a single, best-practice technique is chosen for making each product.
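To make the comparison concrete, here is a minimal sketch, with synthetic data and not the paper's actual specification, of the basic test the abstract describes: comparing the material cost share of plants that make a good as their primary product with plants that make it as a secondary product. A Welch t-test stands in for the paper's regressions; all numbers are placeholders.

```python
# A minimal sketch (synthetic data, not the paper's regressions): do material
# requirements differ when a good is made as a secondary rather than a primary
# product?  A large p-value means a common technique cannot be rejected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
share_primary = rng.normal(0.55, 0.08, 200)    # material cost share, primary producers
share_secondary = rng.normal(0.55, 0.10, 60)   # material cost share, secondary producers

t, p = stats.ttest_ind(share_primary, share_secondary, equal_var=False)
print(f"difference in mean material share: {share_secondary.mean() - share_primary.mean():.3f}")
print(f"Welch t-test: t = {t:.2f}, p = {p:.3f}")
```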
-
Evidence on IO Technology Assumptions From the Longitudinal Research Database
May 1993
Working Paper Number:
CES-93-08
This paper investigates whether a popular IO technology assumption, the commodity technology model, is appropriate for specific United States manufacturing industries, using data on product composition and use of intermediates by individual plants from the Census Longitudinal Research Database. Previous empirical research has rejected this model because aggregate data imply that negative inputs are required to make particular goods. The plant-level data explored here suggest that much of the rejection of the commodity technology model based on aggregate data was spurious; the problematic entries in industry-level IO tables generally have very low Census content. However, among the other industries for which Census data on specified materials use are available, there is a sound statistical basis for rejecting the commodity technology model in about one-third of the cases: a novel econometric test demonstrates a fundamental heterogeneity of materials use among plants that produce only the primary products of the industry.
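Using standard input-output notation that the abstract itself does not spell out, with U the commodity-by-industry use matrix and V the industry-by-commodity make matrix, the commodity technology model implies U = B V', so with square tables the commodity-by-commodity coefficients are B = U (V')^(-1). The aggregate-data critique is that B can contain negative entries. A minimal sketch of that check, with made-up two-commodity tables chosen so that a negative coefficient actually appears, is:

```python
# A minimal sketch of the aggregate-data "negative inputs" check, not the
# paper's plant-level test.  Under the commodity technology model U = B V',
# so B = U (V')^{-1}; negative entries in B are the usual evidence against it.
import numpy as np

def commodity_technology_coefficients(U, V):
    """U: commodity-by-industry use matrix; V: industry-by-commodity make matrix."""
    return U @ np.linalg.inv(V.T)

# Illustrative, made-up 2-commodity, 2-industry tables.
V = np.array([[90.0, 10.0],   # industry 1 mostly makes commodity 1
              [5.0, 95.0]])   # industry 2 mostly makes commodity 2
U = np.array([[2.0, 40.0],    # commodity 1 is used heavily by industry 2
              [12.0, 40.0]])  # but barely by its main producer, industry 1

B = commodity_technology_coefficients(U, V)
print("commodity-by-commodity coefficients:\n", B)
print("negative entries (row = input commodity, col = output commodity):")
print(np.argwhere(B < 0))
```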
-
Price Dispersion in U.S. Manufacturing
October 1989
Working Paper Number:
CES-89-07
This paper addresses the question of whether products in the U.S. Manufacturing sector sell at a single (common) price, or whether prices vary across producers. The question of price dispersion is important for two reasons. First, if prices vary across producers, the standard method of using industry price deflators leads to errors in measuring real output at the firm or establishment level. These errors in turn lead to biased estimates of the production function and productivity growth equation as shown in Abbott (1988). Second, if prices vary across producers, it suggests that producers do not take prices as given but use price as a competitive variable. This has several implications for how economists model competitive behavior.
-
The Classification of Manufacturing Industries: an Input-Based Clustering of Activity
August 1990
Working Paper Number:
CES-90-07
The classification and aggregation of manufacturing data is vital for the analysis and reporting of economic activity. Most organizations and researchers use the Standard Industrial Classification (SIC) system for this purpose. This is, however, not the only option. Our paper examines an alternative classification based on clustering activity using production technologies. While this approach yields a classification that is similar to the SIC, there are important differences between the two classifications in terms of the specific industrial categories and the amount of information lost through aggregation.
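As a concrete illustration of grouping industries by production technology, here is a minimal sketch, not the paper's procedure, that clusters industries by the similarity of their input cost-share vectors with Ward's hierarchical clustering. The number of industries, the input categories, and the shares are randomly generated placeholders.

```python
# A minimal sketch: cluster industries by input cost-share similarity.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_industries, n_inputs = 20, 6
shares = rng.dirichlet(np.ones(n_inputs), size=n_industries)  # placeholder cost shares

Z = linkage(shares, method="ward")                 # agglomerative clustering
groups = fcluster(Z, t=4, criterion="maxclust")    # cut the tree into 4 groups
for g in np.unique(groups):
    print(f"cluster {g}: industries {np.where(groups == g)[0].tolist()}")
```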
-
Estimating the Distribution of Plant-Level Manufacturing Energy Efficiency with Stochastic Frontier Regression
March 2007
Working Paper Number:
CES-07-07
A feature commonly used to distinguish between parametric/statistical models and engineering models is that engineering models explicitly represent best-practice technologies, while parametric/statistical models are typically based on average practice. Measures of energy intensity based on average practice are less useful in the corporate management of energy or for public policy goal setting. In the context of company- or plant-level energy management, it is more useful to have a measure of energy intensity capable of representing where a company or plant lies within a distribution of performance. In other words, is the performance close to (or far from) the industry best practice? This paper presents a parametric/statistical approach that can be used to measure best practice, thereby providing a measure of the difference, or 'efficiency gap', at the plant, company, or overall industry level. The approach requires plant-level data and applies a stochastic frontier regression analysis to energy use. Stochastic frontier regression separates energy intensity into three components: systematic effects, inefficiency, and statistical (random) error. The stochastic frontier can be viewed as a sub-vector input distance function. One advantage of this approach is that physical product mix can be included in the distance function, avoiding the problem of aggregating output to define a single energy/output ratio to measure energy intensity. The paper outlines the methods and gives an example of the analysis conducted for a non-public micro-dataset of wet corn refining plants.
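To make the three-way decomposition concrete, here is a minimal sketch of a normal/half-normal stochastic frontier for an energy-requirement function. The log-linear form, the simulated data, and the parameter values are assumptions for illustration only; the paper's distance-function specification with product mix is richer than this.

```python
# A minimal sketch of a normal/half-normal stochastic frontier:
#   log(energy) = b0 + b1*log(output) + v + u,  v ~ N(0, sv^2),  u >= 0,
# where u is the plant's "efficiency gap" above the best-practice frontier.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500
log_q = rng.normal(4.0, 1.0, n)              # log output
u = np.abs(rng.normal(0.0, 0.3, n))          # one-sided inefficiency
v = rng.normal(0.0, 0.1, n)                  # symmetric noise
log_e = 1.0 + 0.8 * log_q + v + u            # simulated log energy use

def neg_loglik(theta):
    b0, b1, log_sv, log_su = theta
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma, lam = np.hypot(sv, su), su / sv
    eps = log_e - b0 - b1 * log_q            # composed error v + u
    return -np.sum(np.log(2.0 / sigma) + norm.logpdf(eps / sigma)
                   + norm.logcdf(eps * lam / sigma))

res = minimize(neg_loglik, x0=np.array([0.0, 1.0, -1.0, -1.0]), method="BFGS")
b0, b1, log_sv, log_su = res.x
print("frontier:", round(b0, 3), round(b1, 3),
      "| sigma_v, sigma_u:", round(np.exp(log_sv), 3), round(np.exp(log_su), 3))
```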
-
Exploring New Ways to Classify Industries for Energy Analysis and Modeling
November 2022
Working Paper Number:
CES-22-49
Combustion, other emitting processes, and fossil energy use outside the power sector have become urgent concerns given the United States' commitment to achieving net-zero greenhouse gas emissions by 2050. Industry is an important end user of energy and relies on fossil fuels used directly for process heating and as feedstocks for a diverse range of applications. Fuel and energy use by industry is heterogeneous: even a single product group can vary broadly in its production routes and associated energy use. In the United States, the North American Industry Classification System (NAICS) serves as the standard for statistical data collection and reporting. In turn, data based on NAICS are the foundation of most United States energy modeling. Thus, the effectiveness of NAICS at representing energy use is a limiting condition for current expansive planning to improve energy efficiency and alternatives to fossil fuels in industry. Facility-level data could be used to build more detail into heterogeneous sectors, supplementing data reported at NAICS code levels by the Bureau of the Census and the U.S. Energy Information Administration, but such data are scarce. This work explores alternative classification schemes for industry based on energy use characteristics and validates an approach to estimate facility-level energy use from publicly available greenhouse gas emissions data from the U.S. Environmental Protection Agency (EPA). The approaches in this study can facilitate understanding of current, as well as possible future, energy demand.
First, current approaches to the construction of industrial taxonomies are summarized along with their usefulness for industrial energy modeling. Unsupervised machine learning techniques are then used to detect clusters in data reported from the U.S. Department of Energy's Industrial Assessment Center (IAC) program. Clusters of IAC data show similar levels of correlation between energy use and explanatory variables as three-digit NAICS codes. Interestingly, the clusters each include a large cross-section of NAICS codes, which lends additional support to the idea that NAICS may not be particularly suited for correlating energy use with the variables studied. Fewer clusters are needed to reach the same level of correlation as NAICS codes. Initial assessment shows a reasonable level of separation using support vector machines, with higher than 80% accuracy, so machine learning approaches may be promising for further analysis. The IAC data are focused on small and medium-sized facilities and are biased toward higher energy users for a given facility type. Cladistics, an approach to classification developed in biology, is adapted to the energy and process characteristics of industries. Cladistics applied to industrial systems seeks to understand the progression of organizations and technology as a type of evolution, wherein traits are inherited from previous systems but evolve due to the emergence of inventions and variations and a selection process driven by adaptation to pressures and favorable outcomes. A cladogram is presented for evolutionary directions in the iron and steel sector. Cladograms are a promising tool for constructing scenarios and summarizing directions of sectoral innovation.
The cladogram of iron and steel is based on the drivers of energy use in the sector. Phylogenetic inference is similar to machine learning approaches in that it is based on a machine-led search of the solution space, thereby avoiding some of the subjectivity of other classification systems. Our prototype approach for constructing an industry cladogram is based on process characteristics, using an innovation framework derived from Schumpeter to capture evolution in a given sector. The resulting cladogram represents a snapshot in time based on detailed study of process characteristics. This work could be an important tool for the design of scenarios for more detailed modeling. Cladograms reveal groupings of emerging or dominant processes and their implications in a way that may be helpful for policymakers and entrepreneurs, allowing them to see the larger picture, other good ideas, or competitors. Constructing a cladogram could be a good first step in the analysis of many industries (e.g., nitrogenous fertilizer production, ethyl alcohol manufacturing) to understand their heterogeneity, emerging trends, and coherent groupings of related innovations.
Finally, validation is performed for facility-level energy estimates from the EPA Greenhouse Gas Reporting Program. Facility-level data availability continues to be a major challenge for industrial modeling. The method outlined by McMillan et al. (2016) and McMillan and Ruth (2019) allows estimation of facility-level energy use from mandatory greenhouse gas reporting. The validation provided here is an important step toward further use of these data for industrial energy modeling.
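The core of that estimation approach is a back-calculation: reported combustion CO2 is divided by a fuel-specific emission factor to recover the energy content of the fuel burned. A minimal sketch is below; the emission factors are approximate values in kg CO2 per MMBtu and the facility is hypothetical, whereas the cited papers use EPA's published factors and reported fuel mixes.

```python
# A minimal sketch of estimating facility fuel energy from reported CO2.
EMISSION_FACTORS = {        # kg CO2 / MMBtu, approximate
    "natural_gas": 53.1,
    "bituminous_coal": 93.3,
    "distillate_fuel_oil": 73.2,
}

def energy_from_co2(co2_tonnes: float, fuel: str) -> float:
    """Return estimated fuel energy use in MMBtu from reported CO2 (metric tons)."""
    kg_co2 = co2_tonnes * 1000.0
    return kg_co2 / EMISSION_FACTORS[fuel]

# Example: a hypothetical facility reporting 50,000 t CO2 from natural-gas combustion.
print(f"{energy_from_co2(50_000, 'natural_gas'):,.0f} MMBtu")
```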
-
The Extent and Nature of Establishment Level Diversification in Sixteen U.S. Manufacturing Industries
August 1990
Working Paper Number:
CES-90-08
This paper examines the heterogeneity of establishments in sixteen manufacturing industries. Basic statistical measures are used to decompose product diversification at the establishment level into industry, firm, and establishment effects. The industry effect is the weakest; nearly all the observed heterogeneity is establishment-specific. Product diversification at the establishment level is idiosyncratic to the firm. Establishments within a firm exhibit a significant degree of homogeneity, although the groupings of products differ across firms. With few exceptions, economies of scope and scale in production appear to play a minor role in the establishment's mix of outputs.
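One simple way to think about such a decomposition, offered only as a hedged sketch since the abstract does not spell out the paper's measures, is to ask how much of the variation in an establishment-level diversification index is absorbed by industry averages versus firm averages, with the remainder establishment-specific. The index, the group sizes, and the data below are all hypothetical.

```python
# A minimal sketch of an industry/firm/establishment variance decomposition.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "industry": rng.integers(0, 16, n),
    "firm": rng.integers(0, 400, n),
})
# Hypothetical diversification index (e.g., 1 - Herfindahl of product shares).
df["div"] = rng.beta(2, 5, n)

total_var = df["div"].var()
industry_var = df.groupby("industry")["div"].transform("mean").var()
firm_var = df.groupby("firm")["div"].transform("mean").var()

print("share explained by industry means:", round(industry_var / total_var, 3))
print("share explained by firm means:   ", round(firm_var / total_var, 3))
print("establishment-specific remainder:", round(1 - firm_var / total_var, 3))
```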
-
Construction of Regional Input-Output Tables from Establishment-Level Microdata: Illinois, 1982
August 1993
Working Paper Number:
CES-93-12
This paper presents a new method for constructing hybrid regional input-output tables, based primarily on individual returns from the Census of Manufactures. Using this method, input-output tables can be completed at a fraction of the cost and time involved in completing a full survey table. Special attention is paid to secondary production, a problem often ignored by input-output analysts, and a new method to handle it is presented. The method reallocates the amount of secondary production and its associated inputs, on an establishment basis, under the assumption that the input structure for any given commodity is determined not by the industry in which the commodity was produced but by the commodity itself (the commodity-based technology assumption). A biproportional adjustment technique is used to perform the reallocations.
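For readers unfamiliar with biproportional adjustment, here is a minimal sketch of the RAS procedure: a seed matrix is alternately scaled by rows and columns until its margins match target row and column totals. The seed and targets below are made up; the paper applies the idea to reallocating secondary products and their inputs.

```python
# A minimal sketch of RAS (biproportional) adjustment to target margins.
import numpy as np

def ras(seed, row_targets, col_targets, tol=1e-10, max_iter=1000):
    X = seed.astype(float).copy()
    for _ in range(max_iter):
        X *= (row_targets / X.sum(axis=1))[:, None]   # scale rows to targets
        X *= (col_targets / X.sum(axis=0))[None, :]   # scale columns to targets
        if np.allclose(X.sum(axis=1), row_targets, atol=tol):
            return X
    return X

seed = np.array([[10.0, 5.0], [4.0, 20.0]])
X = ras(seed, row_targets=np.array([18.0, 22.0]), col_targets=np.array([16.0, 24.0]))
print(X)
print("row sums:", X.sum(axis=1), "col sums:", X.sum(axis=0))
```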
-
Price Dispersion in U.S. Manufacturing: Implications for the Aggregation of Products and Firms
March 1992
Working Paper Number:
CES-92-03
This paper addresses the question of whether products in the U.S. Manufacturing sector sell at a single (common) price, or whether prices vary across producers. Price dispersion is interesting for at least two reasons. First, if output prices vary across producers, standard methods of using industry price deflators lead to errors in measuring real output at the industry, firm, and establishment level, which may bias estimates of the production function and productivity growth. Second, price dispersion suggests product heterogeneity which, if consumers do not have identical preferences, could lead to market segmentation and prices in excess of marginal cost, making the current (competitive) characterization of the Manufacturing sector inappropriate and invalidating many empirical studies. In the course of examining these issues, the paper develops a robust measure of price dispersion as well as new quantitative methods for testing whether observed price differences are the result of differences in product quality. Our results indicate that price dispersion is widespread throughout manufacturing and that, for at least one industry, Hydraulic Cement, it is not the result of differences in product quality.
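As an illustration of what a dispersion statistic of this kind can look like, here is a minimal sketch, not the paper's robust estimator, that summarizes within-product dispersion of plant-level unit values by the interquartile range of log prices, which is less sensitive to outliers and reporting errors than a variance. The product codes and unit values are simulated placeholders.

```python
# A minimal sketch: within-product dispersion of plant-level unit values.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "product": rng.integers(0, 5, 1000),                 # hypothetical product codes
    "unit_value": np.exp(rng.normal(2.0, 0.25, 1000)),   # value / quantity per plant
})

def iqr_log_price(x):
    q75, q25 = np.percentile(np.log(x), [75, 25])
    return q75 - q25

dispersion = df.groupby("product")["unit_value"].apply(iqr_log_price)
print(dispersion)  # one dispersion statistic per product class
```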
-
Estimating Market Power: Evidence from the U.S. Brewing Industry
January 2017
Working Paper Number:
CES-17-06R
While inferring markups from demand data is common practice, estimation relies on difficult-to-test assumptions, including a specific model of how firms compete. Alternatively, markups can be inferred from production data, again relying on a set of difficult-to-test assumptions, but a wholly different set, including the assumption that firms minimize costs using a variable input. Relying on data from the US brewing industry, we directly compare markup estimates from the two approaches. After implementing each approach for a broad set of assumptions and specifications, we find that both approaches provide similar and plausible markup estimates in most cases. The results illustrate how using the two strategies together can allow researchers to evaluate structural models and identify problematic assumptions.
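The production-side approach the abstract refers to rests on a simple formula: under cost minimization in a flexible input, the markup equals the output elasticity of that input divided by the input's share of revenue. A minimal sketch follows; the elasticity and expenditure figures are illustrative numbers, not estimates from the brewing data.

```python
# A minimal sketch of the production-approach markup formula:
# markup = (output elasticity of a variable input) / (input's revenue share).
def production_markup(output_elasticity: float, expenditure: float, revenue: float) -> float:
    """Markup of price over marginal cost implied by a flexible input."""
    return output_elasticity / (expenditure / revenue)

# Example: materials elasticity of 0.55 and a materials bill equal to 40% of revenue.
print(production_markup(0.55, expenditure=40.0, revenue=100.0))  # -> 1.375
```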
-
Globalization and Price Dispersion: Evidence from U.S. Trade Flows
March 2010
Working Paper Number:
CES-10-07
Historically, the integration of international markets has corresponded with decreasing prices for traded goods due to greater competition among suppliers, scale economies, and consumption demand. In recent years, product differentiation and multinational firms' pricing behavior across markets and between suppliers make it difficult to assess the degree to which this still occurs. Using a confidential panel dataset comprising the universe of U.S. import trade transactions between 1992 and 2007, this paper explores the change in prices for imported commodities across American trade partners. Overall price dispersion appears to decline, albeit unevenly, over time; nevertheless, there is considerable heterogeneity within commodity groups, geographic regions, and income levels, which may be due to increased product and quality differentiation within commodity categories. Unusually, after controlling for gravity trade factors, trade openness and extensive measures of globalization are positively associated with price dispersion, suggesting that a more disaggregated approach at both the commodity and the firm level is needed to account for these differences.
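To show the form of the regression alluded to, here is a minimal sketch of within-commodity price dispersion regressed on a trade-openness measure while controlling for standard gravity factors. The data are synthetic and the variable names (log distance, log partner GDP, openness) are assumptions for illustration, not the paper's specification.

```python
# A minimal sketch of a dispersion regression with gravity controls.
import numpy as np

rng = np.random.default_rng(4)
n = 300
log_dist = rng.normal(8.0, 1.0, n)       # log distance to partner
log_gdp = rng.normal(26.0, 2.0, n)       # log partner GDP
openness = rng.uniform(0.2, 1.5, n)      # trade openness measure
dispersion = (0.5 - 0.02 * log_dist + 0.01 * log_gdp
              + 0.15 * openness + rng.normal(0, 0.05, n))  # simulated outcome

X = np.column_stack([np.ones(n), log_dist, log_gdp, openness])
beta, *_ = np.linalg.lstsq(X, dispersion, rcond=None)
print(dict(zip(["const", "log_dist", "log_gdp", "openness"], beta.round(3))))
```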