
New Evidence Changes How We Understand Inflation and Monetary Policy

by Keynesian Technology, December 6th, 2024

Too Long; Didn't Read

This paper critiques and refines the New Keynesian model by exposing flaws in the benchmark framework for inflation dynamics. It introduces a corrected approach that integrates stochastic equilibrium and backward substitution, offering fresh insights into the relationship between inflation, output, and monetary policy.

Abstract

1 Introduction

2 Mathematical Arguments

3 Outline and Preview

4 Calvo Framework and 4.1 Household’s Problem

4.2 Preferences

4.3 Household Equilibrium Conditions

4.4 Price-Setting Problem

4.5 Nominal Equilibrium Conditions

4.6 Real Equilibrium Conditions and 4.7 Shocks

4.8 Recursive Equilibrium

5 Existing Solutions

5.1 Singular Phillips Curve

5.2 Persistence and Policy Puzzles

5.3 Two Comparison Models

5.4 Lucas Critique

6 Stochastic Equilibrium and 6.1 Ergodic Theory and Random Dynamical Systems

6.2 Equilibrium Construction

6.3 Literature Comparison

6.4 Equilibrium Analysis

7 General Linearized Phillips Curve

7.1 Slope Coefficients

7.2 Error Coefficients

8 Existence Results and 8.1 Main Results

8.2 Key Proofs

8.3 Discussion

9 Bifurcation Analysis

9.1 Analytic Aspects

9.2 Algebraic Aspects (I) Singularities and Covers

9.3 Algebraic Aspects (II) Homology

9.4 Algebraic Aspects (III) Schemes

9.5 Wider Economic Interpretations

10 Econometric and Theoretical Implications and 10.1 Identification and Trade-offs

10.2 Econometric Duality

10.3 Coefficient Properties

10.4 Microeconomic Interpretation

11 Policy Rule

12 Conclusions and References


Appendices

A Proof of Theorem 2 and A.1 Proof of Part (i)

A.2 Behaviour of ∆

A.3 Proof Part (iii)

B Proofs from Section 4 and B.1 Individual Product Demand (4.2)

B.2 Flexible Price Equilibrium and ZINSS (4.4)

B.3 Price Dispersion (4.5)

B.4 Cost Minimization (4.6) and (10.4)

B.5 Consolidation (4.8)

C Proofs from Section 5, and C.1 Puzzles, Policy and Persistence

C.2 Extending No Persistence

D Stochastic Equilibrium and D.1 Non-Stochastic Equilibrium

D.2 Profits and Long-Run Growth

E Slopes and Eigenvalues and E.1 Slope Coefficients

E.2 Linearized DSGE Solution

E.3 Eigenvalue Conditions

E.4 Rouche’s Theorem Conditions

F Abstract Algebra and F.1 Homology Groups

F.2 Basic Categories

F.3 De Rham Cohomology

F.4 Marginal Costs and Inflation

G Further Keynesian Models and G.1 Taylor Pricing

G.2 Calvo Wage Phillips Curve

G.3 Unconventional Policy Settings

H Empirical Robustness and H.1 Parameter Selection

H.2 Phillips Curve

I Additional Evidence and I.1 Other Structural Parameters

I.2 Lucas Critique

I.3 Trend Inflation Volatility

1 Introduction

Empirical understanding in macroeconomics has made great strides over recent years. A growing body of model-free evidence shows that price and wage rigidity are ubiquitous. This is borne out by many micro-econometric studies of price adjustment such as Alvarez et al. [2006], Dhyne et al. [2006], Gagnon [2009], Klenow and Malin [2010], Vermeulen et al. [2012], Berardi et al. [2015] and Kehoe and Midrigan [2015],[1] whilst individual wages adjust slowly, with particular resistance to nominal reductions, as found for example by Fehr and Goette [2005], Dickens et al. [2007], Barattieri et al. [2014], Kaur [2019] and Grigsby et al. [2021]. A variety of quasi-natural experiments reveal that aggregate demand shocks are quantitatively important business cycle drivers (see Auerbach and Gorodnichenko [2012b], Auerbach and Gorodnichenko [2012a], Auerbach and Gorodnichenko [2013], Bils et al. [2013], Acconcia et al. [2014], Mian and Sufi [2014], Chodorow-Reich and Karabarbounis [2016] and Chodorow-Reich et al. [2019]). Monetary policy shocks can have large and possibly long-lasting impacts on output (see Christiano et al. [1999], Romer and Romer [2004], Velde [2009], Gertler and Karadi [2015], Jordà et al. [2020a], Jordà et al. [2020b] and Palma [2022]).


This work has bolstered the New Neo-Classical Synthesis discussed in Goodfriend and King [1997] and Snowdon and Vane [2005]. The idea is to analyze monetary policy by adding sticky prices or frictional adjustment into otherwise standard optimizing models from the Real Business Cycle (RBC) tradition. The goal has been to generate credible micro-founded complements to the aggregate demand and aggregate supply equations of Old Keynesian economics.


In this goal the literature has so far failed. Inflation, surely the key variable in New Keynesian economics, is excessively forward-looking: in fact, under the current benchmark model it has no intrinsic persistence and is not identified without entirely ad hoc shocks. Moreover, there appears to be no short-run trade-off between inflation and employment or output stabilization, christened the Divine Coincidence by Blanchard and Galí [2007], so central banks can "fine-tune" away any inefficient fluctuations.[2] I show that all these findings are a figment of erroneous solutions to the benchmark Calvo model. The task of this paper is to construct the simplest possible correct formulation of New Keynesian economics that can serve as a platform for future theoretical, empirical and mathematical work.


The paper has three complementary tasks. The first is to uncover the dynamic properties of the New Keynesian Phillips curve. The second is to sketch its major econometric implications and theoretical rationale. Lastly, I analyze whether and under what conditions a solution to a DSGE model exists and provide precise parametric answers for the benchmark model.


The benchmark New Keynesian Phillips curve obtained by linearizing the Calvo model at ZINSS does not represent the dynamical properties of the underlying stochastic system, however small the shocks are. Except for price dispersion, dynamics at ZINSS are forward-looking to all orders of approximation. By contrast, the underlying non-linear system is persistent with probability one, featuring lags of inflation and shock terms. It is already known that a hybrid system pertains when there is non-zero trend inflation (see Damjanovic and Nolan [2010], Coibion and Gorodnichenko [2011], Ascari and Sbordone [2014], Kurozumi and Van Zandweghe [2017] and Qureshi and Ahmad [2021]). This can be formalized as the Non-Stochastic Bifurcation by considering limits of the linear approximation as inflation goes to zero. I demonstrate Stochastic Bifurcation by showing that when inflation is precisely zero neighboring stochastic systems have equivalent dynamic properties, so trend inflation describes behavior around ZINSS.
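For orientation, the contrast can be written in textbook notation (placeholder symbols of my own, not the coefficient expressions derived later in Section 7). The purely forward-looking benchmark linearized at ZINSS versus a hybrid form of the kind associated with trend inflation, and with the corrected solution argued for here, is

\[
\pi_t = \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\,\widehat{mc}_t
\qquad \text{versus} \qquad
\pi_t = \gamma_b\,\pi_{t-1} + \gamma_f\,\mathbb{E}_t \pi_{t+1} + \kappa\,\widehat{mc}_t + u_t,
\]

where \(\pi_t\) is inflation, \(\widehat{mc}_t\) real marginal cost and \(u_t\) an error term; \(\beta\), \(\kappa\), \(\gamma_b\) and \(\gamma_f\) stand in for whatever coefficients a given solution delivers.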


The correct solution requires backward substitution steps, which is where the lag terms appear. The difference between the two systems is called a wall-crossing singularity. Formally, the boundary between the two solutions of the benchmark model is a two-dimensional surface with a one-dimensional wall leading into a three-dimensional hole. Informally, it is formed by equating the differing steps between the two solutions and integrating. The wall is the inflation equality that is necessary to place the economy inside the singular surface. The second component is an equality connecting inflation with lagged and present marginal costs, reflecting a backward substitution step missing at ZINSS. It reflects the inter-temporal substitution motive present under staggered optimization and thus a change in the monetary transmission mechanism. It introduces a cost-push channel into the Phillips curve, so the Central Bank or financial conditions directly influence the pricing decisions of firms. The third dimension of the (inner) hole represents the cancellation of the demand shock with its lag.


These changes seem to fit a wide variety of existing evidence. Barth III and Ramey [2001], Gaiotti and Secchi [2006] and Chowdhury et al. [2006] provide impressive support for a cost channel of monetary policy. These forces seem to be particularly strong for firms under financial distress, according to Antoun de Almeida [2015], Gilchrist et al. [2017], Meinen and Roehe [2018], Palmén [2020], Abbate et al. [2023] and Montero and Urtasun [2021].[3] The impact on the coefficients of the Phillips curve is dramatic. The responsiveness of inflation to the output gap drops, typically from close to one to around zero. The lagged inflation coefficient is always larger than that of expected inflation. These results accord with recent empirical estimates including Fuhrer [2006], Mavroeidis et al. [2014], Ball and Mazumder [2020], Hindrayanto et al. [2019], Bobeica and Jarociński [2019], Hooper et al. [2020], Zobl and Ertl [2021] and Ball and Mazumder [2021].[4] [5]


All the analysis is underpinned by a rigorous theory of stochastic equilibrium derived from ergodic theory, the branch of mathematics concerned with the long-term behavior of dynamical systems. A stochastic equilibrium is the state of the economy today that implies the probability of any future event is equal to its long-run (time) average. Crucially, I am able to construct this equilibrium explicitly.
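In standard ergodic-theoretic notation (my own gloss, not the paper's formal definition), if the state of the economy \(x_t\) evolves under an ergodic invariant measure \(\mu\), then for any integrable statistic or event indicator \(f\), time averages coincide with the probability assigned today:

\[
\lim_{T\to\infty}\frac{1}{T}\sum_{t=1}^{T} f(x_t) \;=\; \int f\,d\mu \;=\; \mathbb{E}_{\mu}\big[f(x)\big] \quad \text{almost surely,}
\]

so drawing the current state from \(\mu\) makes the probability of any future event equal to its long-run frequency, which is the sense of the definition above.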


A large literature exists in economics and related disciplines that uses the theory of ergodic processes to prove existence, and often uniqueness, of general equilibrium (see for example Stokey [1989], Hopenhayn [1992], Hopenhayn and Prescott [1992], Stachurski [2002], Li and Stachurski [2014], Kamihigashi and Stachurski [2016], Brumm et al. [2017], Açıkgöz [2018], Borovička and Stachurski [2020], Marinacci and Montrucchio [2019], Kirkby [2019], Hu and Shmaya [2019], Light and Weintraub [2022] and Pohl et al. [2023]). All of these results are confined to small models lacking features, such as endogenous capital and labor accumulation or nominal rigidity, of primary interest to applied macroeconomists.[6] This paper is the first to precisely define the long-run equilibrium conditions for a non-linear stochastic model with no closed form solution, in variables of immediate economic interest. None have addressed criticality: the possibility that if a model does not meet certain conditions it does not have any solution.


I supply wide-ranging comparative statics under weak assumptions. Unlike previous results, which rely on global restrictions, mine leverage the stochastic equilibrium, which allows me to deduce global characteristics from local properties of the steady state.[7] Moreover, I am able to undertake novel experiments, comparing any stochastic steady state with its non-stochastic counterpart. The mathematical significance of this approach is addressed in the next section. More progress in this area may come as a byproduct of stronger ties with mathematics.


Macroeconomists have long been aware that approximations taken at non-stochastic steady states might not be accurate or dynamically representative, in particular when considering financial markets and risk premia. Since Coeurdacier et al. [2011] and Juillard [2011] it has been common to include the effect of higher order deviation terms when calculating the equilibrium from which perturbations are analyzed. The main focus has naturally been on financial markets and the movement of risk premia in particular. Ascari et al. [2018b] is the most notable New Keynesian application thus far. The concept of stochastic equilibrium here formalizes and clarifies these ideas.


A stochastic equilibrium constitutes a full probabilistic description of an economy. This gives it an intrinsic uniqueness property for systems driven by continuous shocks. This extends to the probabilistic future path, which corresponds with the definition of recursive equilibrium in previous New Keynesian economics and the infinite time solution of a mean-field game with common noise in mathematics. I also prove that an equivalent finite dimensional state space form exists for benchmark New Keynesian models, comparable to previous definitions from classical economics (see Prescott and Mehra [1980] and Mehra [2006]). This technique is crucial to the analysis here and is bound to have wide application.


This striking result comes, however, with a powerful converse. Models that macroeconomists previously thought had multiple equilibria in fact have none. In particular, for a standard class of DSGE models, when the limiting value of a linear approximation around its stochastic equilibrium is indeterminate, what is actually happening is that the expectations of one or more of the forward-looking variables in the underlying non-linear model are exploding. This is because the underlying optimization will blow up and welfare will collapse whenever the model is expected to reach one of the boundaries. This point is more obvious when there are too many eigenvalues outside the unit circle and a state variable drives the blow-up. In fact, this is the only barrier to a solution of the Calvo model around ZINSS, overturning current wisdom. In these situations we should think of the DSGE model as being misspecified, in the same way that econometricians see a correlation between non-stationary variables as statistically spurious.[8]


The result reinterprets the well-known Blanchard and Kahn [1980] eigenvalue conditions. The requirement for existence of a solution to the non-linear model is the uniqueness condition evaluated around the stochastic steady state. My approach generalizes existing linearization techniques. The non-stochastic linear approximation is the limit of the stochastic approximation as shocks become arbitrarily small. Therefore existing linearization techniques are generically correct (away from bifurcation points), confirming the existing intuition from a different asymptotic experiment. Away from the small noise limit, the stochastic linear approximation features stochastic coefficients. This is because the derivative at the stochastic steady state features expectations of non-linear functions. This creates novel technical challenges, which I discuss. Simplicity and comparability with past work motivate the focus on small noise limits in the quantitative part.
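For reference, the textbook Blanchard and Kahn [1980] counting rule, stated here in standard notation rather than the paper's, concerns a linear rational expectations system

\[
\begin{bmatrix} x_{t+1} \\ \mathbb{E}_t y_{t+1} \end{bmatrix}
= A \begin{bmatrix} x_t \\ y_t \end{bmatrix} + B z_t ,
\]

with \(x_t\) predetermined, \(y_t\) non-predetermined and \(z_t\) exogenous: a unique stable (determinate) solution exists when the number of eigenvalues of \(A\) outside the unit circle equals the number of non-predetermined variables, fewer implies indeterminacy and more implies no stable solution. The argument above re-reads the indeterminate and explosive cases, evaluated at the stochastic steady state, as non-existence of a solution to the underlying non-linear model.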


Contrary to current opinion, Rotemberg and Calvo are never equivalent in stochastic equilibrium. In fact, there are no settings for the standard policy rule where both exist. This is because Rotemberg is unaffected by the singularity impacting Calvo. This equivalence was the primary motivation for using Rotemberg pricing, although it does mean that results derived under the existing benchmark could be viewed as pertaining to the Rotemberg model. This, rather than the effect of higher order terms, is the likely explanation for differences in performance under global solution methods (see Leith and Liu [2016]).


A double limit system emerges depending on whether price dispersion is regarded as first or second order around ZINSS. This situation is called polydromy. First order price dispersion (∆) reflects a volatile policy regime. The non-volatile regime emerges when average shocks are very small. More broadly, ∆ can be viewed as real rigidity, in the sense of Ball and Romer [1990]: the effect of price rigidity on the flexible economy. This suggests that nominal rigidity alone might be sufficient for modelling inflation dynamics, overturning a current of thought starting with Ball and Romer [1991].


If we are prepared to focus on the neighborhood of ZINSS, then an advantage of my approach over trend inflation is that I can downplay or remove entirely the role of price dispersion. At empirically relevant values, price dispersion is increasing and convex in inflation. Stochastic equilibrium worsens the problem, consistent with the empirical estimates from DSGE models (see Ascari et al. [2018b]). However, microeconomic evidence typically suggests a much more muted relationship between price dispersion and inflation at low positive levels (see Gagnon [2009], Coibion et al. [2015], Wulfsberg [2016], Nakamura et al. [2018], Alvarez et al. [2018], Sheremirov [2020], Anayi et al. [2022] and Adam et al. [2023]). Broadly consistent with these results, if price dispersion arises in the linear approximation around ZINSS it is independent of the first order dynamics of inflation. Furthermore, the approximation will be less forward-looking than its positive trend inflation counterpart (Ascari and Sbordone [2014]). A small literature has developed that looks at extensions of the benchmark model that give less extreme predictions concerning nominal dispersion (see Bakhshi et al. [2007], Kurozumi [2016], Kurozumi and Van Zandweghe [2016] and Hahn [2022]). Together they may justify considering an inflation rate closer to zero than, for example, the common Central Bank inflation target of two percent, without resorting to counterfactual considerations, such as full nominal price indexation.[9]
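To see where the convexity in inflation comes from in the trend inflation literature, recall the textbook Calvo law of motion for price dispersion, written here with Calvo probability \(\theta\) and elasticity of substitution \(\varepsilon\), which need not match the paper's own symbols:

\[
\Delta_t = \theta\,\pi_t^{\varepsilon}\,\Delta_{t-1}
+ (1-\theta)\left(\frac{1-\theta\,\pi_t^{\varepsilon-1}}{1-\theta}\right)^{\frac{\varepsilon}{\varepsilon-1}},
\]

where \(\pi_t\) is gross inflation. In the textbook treatment \(\Delta\) is second order around the zero inflation steady state, which is precisely the point the polydromy discussion above refines, whereas at positive trend inflation it enters at first order and feeds back into marginal cost.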


The analysis here yields deep econometric implications. Firstly, the structural model and the demand shocks are identified with a standard policy rule under the null hypothesis that the model is correct.[10] This remedies a fundamental inconsistency in the existing framework. Additional persistence and a degree of symmetry in the coefficients should improve small sample properties.


Existing work is biased. Under the null hypothesis that the model is correct, there is Econometric Duality: an equivalence between constraints on the re-optimization of the representative firm and statistical restrictions on the econometrician seeking to fit the Phillips curve model to the data. Stochastic equilibrium offers new sources of identification. This new theory promises new challenges, with a tighter link between macroeconomic modelling and econometric theory.


The paper carries powerful results for all aspects of the Lucas Jr [1976] critique. On the one hand, the original equivalence result is wrong. Keynesian models contain lagged variables reflecting staggered optimization that the neoclassical framework does not. On the other hand, the notion of a trade-off where monetary activism causes an adverse movement in the Phillips curve schedule is borne out by the analysis of the coefficients. In fact, I show that the benchmark price Phillips curve is spurious in the sense that its slope is zero at a standard parameterization and can be negative, which fits the message of his paper. The problem lies with the transmission mechanism, which is entirely inter-temporal.


This is the essence of Output Neutrality. Where inter-temporal forces fall away, current inflation, as determined by optimal pricing, depends only on past and present pricing incentives, reflected in the lag and the expectation of the lead of inflation. Indeed, in this limit, inflation equals one half of its lag plus one half of its expected lead. These features agree with Taylor pricing, where inflation is determined by a weighted average of its lagged and future values, again with equal weight on past and future inflation. Consistent with output neutrality, the coefficients on marginal costs (or the output gap) sum to zero, as under Calvo. This creates a sturdy bridge between microeconomics and macroeconomics, as well as clear commonality between alternative models.
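Written out, the limiting relationship described here (in generic notation, since the paper's own coefficient expressions appear in Section 7) is

\[
\pi_t = \tfrac{1}{2}\,\pi_{t-1} + \tfrac{1}{2}\,\mathbb{E}_t \pi_{t+1},
\]

with any marginal cost (or output gap) terms entering with coefficients that sum to zero, for example as a difference \(\kappa\,(\widehat{mc}_t - \widehat{mc}_{t-1})\) for some placeholder slope \(\kappa\).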


Furthermore, the focus on mapping between micro and macroeconomic behavior returns to the fore in the guise of the a priori bifurcation analysis. This implies that the previous framework did not truly reflect its underlying microfoundations, embodied in the re-optimization constraints. Finally, we can see the instances of non-existence through this lens, as they imply non-trivial barriers between microeconomic and macroeconomic inference.


The demise of Divine Coincidence brings wide-ranging benefits. It breaks the previous reliance on unintuitive mark-up shocks to shift the Phillips curve (see Le et al. [2011] and Fratto and Uhlig [2020]). These shocks are widely dismissed as credible explanations for inflationary bursts by surveys of leading economists (see Vaitilingum [25 February, 2022]), along with the natural policy prescription, price controls.[11] It will allow macroeconomists to avoid invoking an Effective Lower Bound (ELB) on nominal interest rates to generate policy trade-offs. This is particularly relevant following recent increases in interest rates in major economies. It chimes with empirical evidence that the ELB was not important in the last decade because Quantitative Easing (QE) seemed to mimic interest rate cuts, structural macroeconomic relationships appeared stable and deflation was missing, particularly in the UK[12] (see for example Wu and Xia [2016], Dahlhaus et al. [2018], Kuttner [2018], Dell’Ariccia et al. [2018], Wu [2018], Matousek et al. [2019], Di Maggio et al. [2020] and Weale and Wieladek [2022] (QE); Auerbach and Gorodnichenko [2017], Garín et al. [2019], Debortoli et al. [2019] and Mertens and Williams [2021] (structural)).[13] Central Bank independence has surely been one of the most successful policy experiments in economic history (see for instance Alesina and Summers [1993], Cukierman et al. [1993], Bernanke et al. [1999], Acemoglu et al. [2008], Balls and Stansbury [1 May, 2017] and Garriga and Rodriguez [2020]). It is high time an intuitive benchmark model were provided to guide day-to-day deliberations. The solution here is a first step in this regard, although more work is needed to understand the effect of supply shocks.


Finally, there are significant implications of the possibility of inactive policy. From a microeconomic point of view, it can be seen as indicating stability of general equilibrium without aggregate shocks, set against the rigidity of individual prices. This might help explain why policymakers have traditionally been unconcerned by microeconomic shocks.


On the macroeconomic ledger, it has implications for both current and historical policy regimes. Macroeconomists often see stabilization through the prism of the Taylor principle, which states that inflation is controlled by raising the real interest rate in response to deviations of inflation from equilibrium. The policy rule derived here implies that it is not possible to immediately adjust interest rates to control inflation. This necessitates gradual changes of policy stance. This is in keeping with Central Bank best practice, so-called "coarse-tuning" (Lindbeck [1992]). It is usually implemented via Inflation Forecast-Targeting (Kohn [2009], Svensson [2010] and Svensson [2012]). This is where policy and projections for future policy are adjusted to yield a desirable expected path for inflation and real activity, consistent with medium term stability. This is usually defined as forecast inflation and output gap sufficiently close to target after a time frame of 18 months to 3 years.[14]
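As a point of reference, the Taylor principle is usually stated in terms of an interest rate rule of the form below (standard textbook notation, not the rule derived in Section 11):

\[
i_t = r^{*} + \pi^{*} + \phi_{\pi}\,(\pi_t - \pi^{*}) + \phi_{x}\, x_t , \qquad \phi_{\pi} > 1,
\]

where \(i_t\) is the nominal policy rate, \(r^{*}\) the natural real rate, \(\pi^{*}\) the inflation target and \(x_t\) the output gap; \(\phi_{\pi} > 1\) ensures the real rate rises when inflation rises. The gradualist prescription above points instead toward rules with interest rate smoothing, in which the policy rate moves toward such a target only partially each period.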


Lastly, the policy result is of significance to our understanding of economic history. First, it helps to rationalize the survival of non-interventionist regimes like the Gold Standard, where the "Rules of the Game" forbade active stabilization policy (see Barsky and Summers [1988], Bayoumi et al. [1997] and Bordo and Schwartz [2009]). Secondly, it should make it easier to analyze hypotheses like secular stagnation (Hansen [1939] and Summers [2015]), which require long-lasting liquidity traps not possible in the existing representative agent setup. Thirdly, it should support credible quantitative assessment of the benefits of modern macroeconomic management.


The subsequent section gives a more detailed description of the proof techniques, suitable for a technical audience. It also sets the paper within the mathematical literature. Although designed to be accessible, it can be skipped by less mathematically inclined readers.


Author:

(1) David Staines.


This paper is available on arXiv under a CC 4.0 license.


[1] This is also true of online prices (see Cavallo and Rigobon [2016], Cavallo [2017], Gorodnichenko and Talavera [2017], Gorodnichenko et al. [2018] and Cavallo [2018]).


[2] The term is frequently attributed to Walter Heller, Chief Economic Adviser to President Kennedy (see for example http://connection.ebscohost.com/c/referenceentries/40422478/fine-tuning-1960s-economics). It referred originally to fiscal policy in an "Old Keynesian" setup. Scepticism about the concept was focal to monetarist opposition to traditional Keynesian macroeconomics, see for example Friedman [1968] and Snowdon and Vane [2005].


[3] On the inter-temporal substitution front there is substantial support for an inter-temporal aggregate demand equation in both subjective beliefs and revealed preferences (see Coibion et al. [2023], Dräger and Nghiem [2021], Duca-Radu et al. [2021]), as well as discussion of previous microeconometric evidence in Appendix H.1.


[4] None of the specifications are directly comparable to the theoretical model, which underlines the importance of structural estimation, although Fuhrer [2006] is the closest. Many use unemployment rather than the output gap as a forcing variable. This has an august pedigree going back to Phillips [1958] and Phelps [1968]. It is popular since better data are available. The two can be connected via Okun’s law, the business cycle relationship between unemployment and output deviations, which appears empirically strong (Ball et al. [2013]). Nevertheless, it remains a priority to derive and test micro-founded wage Phillips curves.


[5] Many of these studies are able to demonstrate stable relationships over recent times, despite structural change and the effects of the recent financial crisis (see also Stock and Watson [2020] and Candia et al. [2021]). Substantial inflation persistence is robust to different levels of aggregation, policy regime and plausible assumptions about trends in other macroeconomic variables, for evidence consult for example Clark [2006], Altissimo et al. [2009], Vaona and Ascari [2012]; O’Reilly and Whelan [2005], Beechey and Österholm [2012], Gerlach and Tillmann [2012]; Cogley and Sargent [2002], Stock and Watson [2016] and Kejriwal [2020].


[6] Results of most applied interest have concerned the Krusell and Smith [1998] benchmark model of capital accumulation under incomplete markets and aggregate risk, such as Cao [2020] and Pröhl [2023]. However, these contain a degree of contingency (and are therefore not full existence proofs).


[7] A literature on monotone comparative statics has developed around Milgrom and Roberts [1994], Milgrom and Shannon [1994], Milgrom and Segal [2002] and Athey [2002]. More recently, convexity conditions have been popular following Acemoglu and Jensen [2013] and Jensen [2018]. In fact, once the stochastic equilibrium has been derived, the subsequent steps are often considerably easier than these results or recent extensions, with the promise of applications to a wider set of models.


[8] Consult Kennedy [2003] for insightful discussion, and Hamilton [1995] for technical exposition. Granger and Newbold [1974] and Phillips [1986] are prominent original papers.


[9] It is in fact possible to remove the impact of price dispersion on the dynamics of marginal costs by assuming that labor is firm-specific (see Coibion and Gorodnichenko [2008] and Eggertsson and Singh [2019]). I find this result useful for theoretical expositions, although not fully convincing. It does not, however, remedy the economic distortion to consumer welfare. Moreover, nominal dispersion would naturally return if wage rigidity were included in a Calvo fashion. Finally, utilities, unionized or minimum wage labor and general purpose technologies like IT, transport, logistics and office infrastructure surely imply a sizeable common component of firms’ cost base.


[10] This is because under the null the rational expectations model is white noise so there are no valid instruments. Subjective expectations data solve this problem.


[11] The article and the actual answers are available at the following addresses:
https://voxeu.org/article/inflation-market-power-and-price-controls-igm-forum-survey
https://www.igmchicago.org/surveys/inflation-market-power-and-price-controls/
Aparicio and Cavallo [2021] analyze a set of price controls on Argentinian supermarkets and find that they had a limited effect on inflation that was reversed once they were removed.


[12] The Fisherian channel, where deflation drives up the real interest rate, has been missing in the recent low interest rate spell. The extreme example of this lack of deflation phenomenon is the United Kingdom. From February 2009, just before the Bank cut its headline rate to a then record low of 0.5%, to July 2018, immediately before the next time the base rate exceeded this level, the Consumer Price Index grew at an average annual rate of 2.55%, in excess of the 2% mandated target. Data for price levels and policy changes are available from the sites below; results are robust to plausible changes of dating.
https://www.ons.gov.uk/economy/inflationandpriceindices/timeseries/d7bt/mm23
https://www.bankofengland.co.uk/boeapps/database/Bank-Rate.asp


[13] Any friction could in principle break Divine Coincidence; Blanchard and Galí [2007] use real wage rigidity. It is not clear whether Central Banks are concerned with, or able to correct, real market failures over and above their stabilization objectives. It is debatable whether these alternative frictions are really first order at business cycle frequency. The best candidate is financial frictions. This has been a particular focus since the Great Recession. However, this recent interest has been accompanied by alternative instruments (see Clement [2010], Hanson et al. [2011], Duncan and Nolan [2015], Aikman et al. [2019] and Kashyap [2020]). Financial concerns were not a significant factor in monetary policymaking at major Central Banks previously, according to Baxa et al. [2013], Rotemberg [2013], Rotemberg [2015] and Oet and Lyytinen [2017]. In fact, financial shocks do not seem to be very important outside of crisis times, where they seem to operate like standard demand shocks (see Mian and Sufi [2014], Muir [2017], Mian and Sufi [2018], Huber [2018], Gertler and Gilchrist [2018], Benguria and Taylor [2020] and Haque and Magnusson [2021]).


[14] Time series methods and policymakers’ wisdom suggest that it takes between 18 months and two years for a change in monetary policy to have its maximum impact on inflation. This result seems to be robust across changes in policy regimes according to Bernanke et al. [1999] (see pp. 315-320), Batini and Nelson [2001], Gerlach and Svensson [2003] and recent analysis by Goodhart and Pradhan [2023]. On its website the Bank of England advises the general public that: "Monetary policy operates with a time lag of about two years." (http://www.bankofengland.co.uk/monetarypolicy/Pages/overview.aspx) However, the Bank publishes forecasts three years ahead and frequently talks about "inflation returning to target by the three year horizon", consistent with a longer view of stabilization and with empirical work by Havranek and Rusnak [2013]. (http://www.bankofengland.co.uk/publications/Pages/inflationreport/infrep.aspx) Practices are similar at other leading inflation targeting Central Banks.