Fed Data Dependent Talking Points Are Here to Stay

Last week, reflecting on his interview with Austan Goolsbee, Kai Ryssdal opened Marketplace with two questions:

1.)   “How much more [data] is the Fed going to need before it says, ‘we are done [raising interest rates]’?” and

2.)   “How long is it sustainable for the Fed to keep saying, ‘we just need to see more [data]’?”

The clear and correct answer from Jeanna Smialek (NYT) and Amara Omeokwe (WSJ) was: as long as the Fed wants to, because it needs to maintain flexibility in its decision-making process. Both Smialek and Omeokwe point out that over the past two years, the Fed was caught off guard by sustained and higher-than-expected inflation. While attempting to get inflation down and keep it down, the Fed is navigating economic uncertainty that might require policy revision in light of incoming data points running counter to previous expectations; hence the need for flexibility and the harping on data-dependency talking points. Flexibility also helps with credibility. Despite not anticipating how long this bout of inflation would last, the Fed has remained credible by slowly bringing down inflation without committing to specific targets on a specific timeline. By committing to watching the data, Fed officials are letting the public know that they are working hard, with every dip in inflation counting toward the Fed’s credibility.

Ahead of the next FOMC meeting (19-20 September), we’ve entered the Fed blackout period, so we will not hear any data-dependence talking points until Jerome Powell enters the briefing room at 14:00 ET next Wednesday. This interim gives us a moment to look past the current economic moment and consider why this method of data-dependent decision-making is likely to stick around.

The Fed operates in a world of known unknowns, where we know there are things we do not know. In pursuit of its dual mandate from Congress to achieve maximum employment and price stability, Fed officials need to ascertain two key economic variables: r* (the neutral real interest rate consistent with price stability) and u* (an unemployment rate consistent with the maximum level of employment). However, r* and u* are inherently unknowable. Simply put, the Fed knows that the indicators necessary to achieve its mandate (r* and u*) are unknown.

To fill the gap left by these known unknowns, a core element of the Fed’s work becomes gathering and analyzing data that can help construct estimates of r* and u* at given moments in the economy. Since r* is “not directly observable,” estimates are built from a wide range of data such as the Consumer Price Index (CPI), the Producer Price Index (PPI), the Personal Consumption Expenditures (PCE) price index, GDP, and unemployment. u* is notoriously more elusive. Maximum employment is “not directly measurable and changes over time owing largely to non-monetary factors” (Federal Reserve, 2012 and 2023).

To estimate maximum employment the Fed must analyze a myriad of indicators: the Labor Force Participation Rate, the Employment-to-Population Ratio, the Job Openings and Labor Turnover Survey (JOLTS), the Beveridge Curve, the Non-Accelerating Inflation Rate of Unemployment (NAIRU), etc. Additionally, as many of the underlying attributes of u* are non-monetary, opinions on and analyses of u* often differ among Fed officials[1]. For example, the President of the Atlanta Fed, Raphael Bostic, argues that “long-term maximum employment means everyone finds gainful work consistent with their full potential.” This assessment complicates the idea of employment beyond the usual macroeconomic data points by recognizing that job opportunities arise not only from macro conditions but also from an individual’s circumstances, a reality that requires u* estimates to be built from an expanded data pool including race, education, age, training and experience levels, and geography.

Altogether, the pursuit of the dual mandate becomes, in practice, a chase toward r* and u* estimates through the collection and analysis of economic data. As such, the Fed will continue to hold on to its data-dependency talking points. Officials are also likely to keep using them beyond the current economic moment due to the average inflation targeting (AIT) strategy established in 2020.

In 2020, the Fed transitioned from targeting 2% inflation at each period to targeting 2% average inflation over a longer term. With AIT the goal is to keep public inflation expectations at 2% while untethering the 2% goal from each decision point. In practice, the Fed does not specify the long-run horizon of AIT. Therefore, the horizon for estimates of r* and u* also extends beyond the immediate policy-making period, potentially well into the future. Under this scenario, not only does the chase for more data to fulfill the Fed’s dual mandate remain critical, but the timeline for tracking that data becomes indefinite. Through AIT the Fed is able to remain flexible, based on incoming data, at decision-making points.
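The difference between period-by-period targeting and AIT is ultimately a question of what gets averaged. A minimal sketch (the inflation readings and the five-year window are hypothetical assumptions for illustration, not Fed data):

```python
# Toy illustration of average inflation targeting (AIT).
# The readings and window below are hypothetical, not Fed figures.
TARGET = 2.0

# Five years of annual inflation readings (percent): four below target,
# then one well above it.
inflation = [1.5, 1.8, 1.2, 1.4, 4.2]

def running_average(series):
    """Cumulative mean after each reading -- what AIT is judged against."""
    total = 0.0
    for i, reading in enumerate(series, start=1):
        total += reading
        yield total / i

averages = list(running_average(inflation))

# Period-by-period targeting would flag the 4.2% reading as a large miss,
# but under AIT the running average ends essentially on target.
print(f"final reading: {inflation[-1]:.1f}%, running average: {averages[-1]:.2f}%")
```

After four below-target years, even a 4.2% reading leaves the cumulative average at roughly 2%. That is the flexibility AIT buys: no single decision point is judged against 2% on its own, and the unspecified window means the judgment date never quite arrives.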

Moreover, AIT enables the Fed to maintain its credibility by sticking to a clear decision-making framework without being held accountable for getting inflation to 2% by a specific date. Credibility is especially crucial in the fight against inflation; just ask the BOE. The short-run non-neutrality of a central bank (the real ability of monetary policy to impact the economy) is best characterized as being in conversation with other actors in the economy. Simply put, through policy rate path expectations (i.e., communication about what will happen next), the central bank and households have a certain degree of influence on one another. Households have an upward and self-fulfilling bias toward inflation; therefore, communication from the central bank about its ability to keep inflation under control needs to be seen as credible and legitimate lest household inflation expectations run amok.
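The expectations channel described above can be sketched as a toy adaptive-expectations loop. Everything here is an illustrative assumption, not an estimated model: the coefficients, starting values, and the `simulate` helper are invented for this sketch. Households blend the 2% anchor with what they last experienced plus a small upward bias, and realized inflation partly follows what households expect.

```python
# Toy model of self-fulfilling inflation expectations vs. a credible anchor.
# All coefficients and starting values are illustrative assumptions.
TARGET = 2.0
UPWARD_BIAS = 0.3   # households' tendency to expect a bit more than they saw

def simulate(periods: int, credibility: float) -> list[float]:
    """Path of realized inflation. `credibility` in [0, 1] is the weight
    households put on the central bank's 2% target when forming expectations."""
    realized = 3.0  # start with inflation above target
    path = []
    for _ in range(periods):
        # Expectations: anchored part + backward-looking part with upward bias.
        expected = credibility * TARGET + (1 - credibility) * (realized + UPWARD_BIAS)
        # Short-run non-neutrality: realized inflation partly follows expectations.
        realized = 0.5 * realized + 0.5 * expected
        path.append(realized)
    return path

anchored = simulate(20, credibility=0.8)    # credible bank: settles near 2%
unanchored = simulate(20, credibility=0.1)  # weak credibility: drifts upward
```

With high credibility the path settles close to the 2% target; with low credibility the same upward bias compounds and inflation drifts well above where it started — the "run amok" scenario the Fed's communication strategy is designed to prevent.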

Fed officials cannot disclose the point at which they would have enough data to begin reducing interest rates without publicly disclosing the Fed’s timeline for AIT (if one exists), in turn losing their decision-making flexibility and potentially damaging their credibility by not reaching 2% inflation within a set period. Until Fed officials reveal, or are pressed on, the timeframe for judging AIT at 2%, we should expect to keep hearing talking points built around the search for more data. While it may seem repetitive and unrevealing, when Fed officials invoke data dependency, they are executing a strategy essential to managing inflation expectations and fulfilling the dual mandate.

 


NB: The New York Fed does have a research team that maintains two models attempting to estimate r* (the Laubach-Williams and Holston-Laubach-Williams models).

[1] Barcena and Wessel, 2022; Albert and Valetta, 2022; Goodman-Bacon, 2022

To read the briefing on Data Dependency in Fed Policy Making that inspired this post, click here.
