Author:
(1) David Staines.
4 Calvo Framework and 4.1 Household’s Problem
4.3 Household Equilibrium Conditions
4.5 Nominal Equilibrium Conditions
4.6 Real Equilibrium Conditions and 4.7 Shocks
5.2 Persistence and Policy Puzzles
6 Stochastic Equilibrium and 6.1 Ergodic Theory and Random Dynamical Systems
7 General Linearized Phillips Curve
8 Existence Results and 8.1 Main Results
9.2 Algebraic Aspects (I) Singularities and Covers
9.3 Algebraic Aspects (II) Homology
9.4 Algebraic Aspects (III) Schemes
9.5 Wider Economic Interpretations
10 Econometric and Theoretical Implications and 10.1 Identification and Trade-offs
10.4 Microeconomic Interpretation
Appendices
A Proof of Theorem 2 and A.1 Proof of Part (i)
B Proofs from Section 4 and B.1 Individual Product Demand (4.2)
B.2 Flexible Price Equilibrium and ZINSS (4.4)
B.4 Cost Minimization (4.6) and (10.4)
C Proofs from Section 5, and C.1 Puzzles, Policy and Persistence
D Stochastic Equilibrium and D.1 Non-Stochastic Equilibrium
D.2 Profits and Long-Run Growth
E Slopes and Eigenvalues and E.1 Slope Coefficients
E.4 Rouche’s Theorem Conditions
F Abstract Algebra and F.1 Homology Groups
F.4 Marginal Costs and Inflation
G Further Keynesian Models and G.1 Taylor Pricing
G.3 Unconventional Policy Settings
H Empirical Robustness and H.1 Parameter Selection
I Additional Evidence and I.1 Other Structural Parameters
I.3 Trend Inflation Volatility
This section provides additional empirical evidence, extending beyond the parameters of the main model. It is divided into three parts. The first considers other structural parameters not present under Calvo but used with Rotemberg. The second applies these to the claims made in Proposition 11. The third and final subsection examines the evidence concerning the volatility of trend inflation, supporting Section 9.5 and Point 2.
There are two sub-divisions, one for the price adjustment parameter and the other for the substitution parameter. Microeconometric studies are emphasised.
I.1.1 cp = 50
At the upper end are estimates of four percent, derived from calculations by Willis [2006] using magazine price data originally analysed by Cecchetti [1986]. Slade [1998] arrives at a similar estimate for salted crackers. [141] The most appealing estimate comes from Zbaracki et al. [2004], who arrive at a figure of 1.22% of revenue. Their study considers a broader range of costs associated with the price-setting process, including information gathering and internal communication costs, as well as customer communication and negotiation, whereas other studies focus mainly on physical menu costs, which turn out to constitute a small percentage of the total. It even finds evidence of a (small) portion of convex adjustment costs, consistent with the Rotemberg framework.
The principal limitation is coverage. There is only one firm, observed for a single year, 1997-1998; in fact, there is just one set of price changes. This obviously raises concerns about generalization. The paper's emphasis on negotiations with large customers, which are less common in services, suggests 1.22% could be an overestimate for the whole economy.
At the lower end, estimates can be as low as 0.5% in some retail contexts (Levy et al. [1997], Levy et al. [1998], Dutta et al. [1999] and Bergen et al. [2008]). As measures of the overall cost of changing prices, these are likely biased downwards, since they include the impact of less costly sales price changes. [142] The frequency of sales varies across countries, as documented for example in Berardi et al. [2015]; sales are much more common in the United States and Great Britain than in France, likely the result of regulatory restrictions in France (see Freeman et al. [2008]). This suggests that firms' adjustment costs might vary between countries.
This discussion underscores the importance of incorporating heterogeneity into macroeconomics and of using microeconomic data to discipline and test our models. This should remain a research priority, although, as I explained earlier, Calvo should prove a better model to build from than Rotemberg. For quantitative analysis, in keeping with the previous arguments, I will shade the Zbaracki et al. [2004] number by setting cp = 50. [143]
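As a rough consistency check, consider a back-of-envelope sketch; the quadratic specification below is the textbook Rotemberg form and stands in for whatever exact parametrization the main text uses. With adjustment costs of (cp/2)(Π − 1)² per unit of revenue, where Π is gross annual inflation, the 2% trend inflation benchmark from footnote [143] gives

(cp/2)(Π − 1)² = (50/2) × (0.02)² = 0.01,

that is, price-change costs of 1% of revenue, a shade below the 1.22% reported by Zbaracki et al. [2004].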
I.1.2 θ = 6
There are two main sources of estimates: demand elasticities and markups. For the central estimate, I lean closer to the former, whilst using the latter for robustness. A meta-analysis by Bajzik et al. [2020] propounds a central estimate of 4, with 6 the highest plausible level. However, there is an element of extrapolation here, since the present model is a closed economy and exporters might be systematically different from non-exporters due to selection effects, as De Loecker and Warzynski [2012] suggest. The authors' preferred estimate of 3.8 derives from annual data, which yields systematically lower figures than quarterly observations, the relevant business cycle frequency. [144]
The most direct evidence comes from Rosenthal-Kay et al. [2021], who estimate seven million demand elasticities from a widely used marketing database. They arrive at an average estimate a little below 3. Although their prior militates against negative estimates, they still find that around a quarter of firms have elasticities less than unity, and around 40% are less than 2.
These results do not fit the basic theory. Recall that a monopolist should never price on the inelastic portion of its demand curve, and models based around monopolistic competition blow up when θ < 1. Likely culprits include uniform pricing, where firms set the same price in all markets, as documented by Cavallo et al. [2014b], Cavallo [2017] and DellaVigna and Gentzkow [2019], or fair pricing, based on the idea that customers punish firms seen to be unfairly profiting from, for example, external events (see Rotemberg [2011], Cavallo et al. [2014a] and Gagnon and López-Salido [2020]).
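The source of the blow-up is easiest to see from the flexible-price markup formula; the derivation below assumes the standard Dixit-Stiglitz demand system, in line with the individual product demand of Section 4.2. A firm facing constant elasticity θ sets a gross markup of

μ = θ/(θ − 1),

so the markup diverges as θ → 1 from above, while for θ < 1 marginal revenue, P(1 − 1/θ), is negative at every price and no interior optimum exists. The theory therefore requires θ > 1, elastic demand.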
Naturally, these extensions are too complicated for a stylized model. I advocate an upward adjustment of the mean, to remove the effect of these low values. In the present informal environment, it is surely adequate to select 4 as a nearby candidate calibration.
For markup estimates, the most significant challenge is deciding which costs are fixed. Accounting data distinguish two main types: Cost of Goods Sold (COGS) and Selling, General and Administrative expenses (SG&A). De Loecker et al. [2020] assign COGS to variable costs and SG&A to fixed costs. Under this assumption, the measured markup in the United States increases rapidly from 10% in 1980 to 60% in 2016. De Loecker and Eeckhout [2018] find similar patterns in other parts of the world.
Nevertheless, very large increases in the profit share do not seem to be consistent with other macroeconomic developments, according to Basu [2019], Autor et al. [2020] and Karabarbounis and Neiman [2019]. Indeed, when SG&A is treated as variable, markups are much smaller, between 5 and 20% and typically around 10-15%, with much steadier changes (see Hall [2018], Traina [2018] and Kirov and Traina [2021]). [145] Moreover, there are concerns that efficiency-enhancing processes might be driving markup increases in some sectors or subsets of firms (see Crouzet and Eberly [2019], Autor et al. [2020] and Rossi-Hansberg et al. [2021]). These forces lie outside the scope of the model. Syverson [2019] provides extensive discussion of these and other issues. Overall, it appears reasonable to consider estimates with θ equal to 4 and 6, as well as some with 8. The lower value seems to better represent the elasticity of substitution; 6 better represents markups, where it is consistent with a steady state value of 20%; 8 is a robustness check, consistent with the lower values reported when the whole of SG&A is regarded as variable.
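For reference, the steady state markups implied by these calibrations follow from the same formula, μ = θ/(θ − 1):

θ = 4: μ = 4/3 ≈ 1.33, a 33% markup
θ = 6: μ = 6/5 = 1.20, a 20% markup
θ = 8: μ = 8/7 ≈ 1.14, a 14% markup

Thus 6 delivers the 20% steady state value mentioned above, while 8 sits within the 10-15% range obtained when SG&A is treated as variable.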
This paper is available on arxiv under CC 4.0 license.
[141] Levy and Young [2004] document extreme cases of price rigidity, including a period of 73 years during which the price of a nickel Coke remained unchanged, though this is likely specific to the price point and its role in marketing.
[142] In fact, it is common for prices to revert back to their previous value, consistent with very low adjustment costs (see Anderson et al. [2017]).
[143] There are issues with determining the "right" inflation rate when the sample contains only one observation. The difficulty is that 1997-1998 was not a period of stable inflation, probably due to the onset of the Asian Financial Crisis and an associated collapse in oil prices. In 1998 the growth rate of the US GDP deflator fell back to 1.1% from 1.6% the year before. The movement in CPI was more muted, falling from 1.7% to 1.6%. These figures are insufficient reason to deviate from the benchmark of 2% trend inflation. Data come from https://fred.stlouisfed.org/series/GDPDEF and https://fred.stlouisfed.org/series/CPIAUCSL respectively.
[144] Nevertheless, they are able to rule out θ < 2.5, which justifies the restrictions in Theorem
[145] They are also more aligned with evidence, like De Loecker and Warzynski [2012], that uses price rather than just revenue data; reliance on revenue data alone is criticised by Bond et al. [2021].
[146] Supportive survey evidence is discussed in Coibion et al. [2018a], Andrade et al. [2022], Born et al. [2023], Weber et al. [2022], and Candia et al. [2023]. Consult Sheffrin [1996] for more extensive discussion of bringing the Lucas model to the data. Ball [2012], Ball [2014], Lucas Jr and Nicolini [2015] and Benati et al. [2021] provide recent estimates of money demand.