The Taylor Rule: An Economic Model for Monetary Policy

The Taylor rule is an interest rate forecasting model proposed by economist John Taylor in 1992 and outlined in his 1993 study, “Discretion Versus Policy Rules in Practice.”

In the early 1990s, Taylor worked from the then-prevailing assumption that the Federal Reserve set future interest rates according to the rational expectations theory of macroeconomics. This is a backward-looking approach: it assumes that if workers, consumers and firms have positive expectations for the future of the economy, interest rates need no adjustment. The problem with this model is not only that it is backward-looking, but also that it ignores long-term economic prospects.

The Phillips Curve

The Phillips curve was the last of the discredited rational expectations models; it attempted to forecast the tradeoff between inflation and employment. The problem, again, was that while short-term expectations might be correct, long-term projections built on these models proved unreliable, and if the resulting interest rate move turned out to be wrong, there was no principled way to correct course. Monetary policy here rested more on discretion than on concrete rules. Economists found they could no longer infer monetary expectations from rational expectations theories, particularly when an economy failed to grow or a recent interest rate change produced stagflation. This situation gave rise to the Taylor rule.


The formula used for the Taylor rule looks like this:

i = r* + pi + 0.5(pi - pi*) + 0.5(y - y*)

Where:

i = nominal fed funds rate
r* = real federal funds rate (usually 2%)
pi = rate of inflation
pi* = target inflation rate
y = logarithm of real output
y* = logarithm of potential output
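As a quick sketch, the formula can be written as a small Python function. The variable and parameter names here are my own; the 2% defaults for r* and pi* follow the conventional values noted in the definitions above:

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Nominal fed funds rate prescribed by the Taylor rule.

    inflation  -- pi, the current rate of inflation (percent)
    output_gap -- y - y*, log real output minus log potential output
                  (roughly the percent deviation of GDP from potential)
    r_star     -- assumed equilibrium real federal funds rate (usually 2%)
    pi_star    -- target inflation rate
    """
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# With inflation on target and no output gap, the rule returns the
# neutral rate r* + pi* = 4%.
print(taylor_rate(inflation=2.0, output_gap=0.0))  # 4.0
```

With inflation one point above target and no output gap, the same function prescribes a rate of 5.5%.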

What this equation says is that the difference between a nominal and a real interest rate is inflation: real interest rates are adjusted for inflation, while nominal rates are not. The rule looks at possible targets for interest rates, but targets cannot be set in isolation from inflation. To compare rates of inflation across periods, one must look at the total picture of an economy in terms of prices.

Three Factors That Drive Inflation

Prices and inflation are driven by three factors: the consumer price index (CPI), producer prices and the employment index. Most countries today look at the consumer price index as a whole rather than at core CPI. Taylor recommends this approach, since core CPI excludes food and energy prices; headline CPI lets an observer see the total picture of an economy in terms of prices and inflation. Rising prices mean higher inflation, so Taylor recommends measuring the rate of inflation over one year (or four quarters) for a comprehensive picture.
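Measuring inflation over four quarters, as the paragraph above describes, can be sketched as follows. The CPI index values here are hypothetical, for illustration only:

```python
# Quarterly CPI index levels; the last entry is the most recent quarter
# (hypothetical values for illustration).
cpi = [250.0, 251.5, 253.2, 255.0, 257.5]

# Year-over-year inflation: the percent change over four quarters.
inflation_4q = (cpi[-1] / cpi[-5] - 1) * 100
print(round(inflation_4q, 2))  # 3.0
```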

Under Taylor's rule, the nominal interest rate should rise 1.5 percentage points for every one-point rise in inflation, since the coefficient on inflation in the formula is 1.5 (the 1 implicit in adding pi, plus the 0.5 on the inflation gap). This is anchored by an assumed equilibrium real rate, which Taylor treats as a 2% steady state, matching a target inflation rate of about 2%. Another way to look at this is through the coefficients on the deviation of real GDP from trend GDP and on the inflation gap; both readings are about the same for forecasting purposes. But that's only half of the equation: output must be factored in as well.
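The 1.5 coefficient on inflation implicit in the formula can be checked directly, holding the output gap at zero and using the 2% values for r* and pi*:

```python
r_star, pi_star = 2.0, 2.0

def rate(pi):
    # Taylor rule with a zero output gap
    return r_star + pi + 0.5 * (pi - pi_star)

# A one-point rise in inflation raises the prescribed nominal rate
# by 1.5 points: 1 from the pi term plus 0.5 from the inflation gap.
print(rate(3.0) - rate(2.0))  # 1.5
```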

Determining Total Economic Output

The total output picture of an economy is determined by productivity, labor force participation and changes in employment. For the equation, we look at real output against potential output. We must look at GDP in terms of real and nominal GDP, or, to use the words of John Taylor, actual vs. trend GDP. To do this, we factor in the GDP deflator, which measures the prices of all goods produced domestically; the deflator itself is nominal GDP divided by real GDP, multiplied by 100. Dividing nominal GDP by the deflator and multiplying by 100 yields real GDP: we are deflating nominal GDP into a true number to fully measure the total output of an economy.
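The deflator arithmetic can be sketched with hypothetical numbers; the GDP figure and deflator below are made up for illustration:

```python
nominal_gdp = 21000.0  # nominal GDP, hypothetical (billions of dollars)
deflator = 105.0       # GDP deflator, base year = 100

# Deflate nominal GDP into real GDP: divide by the deflator, times 100.
real_gdp = nominal_gdp / deflator * 100
print(real_gdp)  # 20000.0
```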

The Taylor rule thus combines three numbers: an interest rate, an inflation rate and a GDP gap, all weighed against an equilibrium rate, to gauge the proper balance for an interest rate forecast by monetary authorities.

How the Federal Reserve Should Adjust Interest Rates

The rule for policymakers is this: the Federal Reserve should raise rates when inflation is above target or when GDP growth is too high, above potential. It should lower rates when inflation is below target or when GDP growth is too slow, below potential. When inflation is on target and GDP is growing at its potential, rates are said to be neutral. The model aims to stabilize the economy in the short term and to stabilize inflation over the long term. To gauge inflation and price levels properly, apply a moving average to the various price levels to determine a trend and smooth out fluctuations. Do the same on a monthly interest rate chart, and follow the fed funds rate to determine trends.
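The smoothing step described above can be sketched with a simple trailing moving average; the monthly inflation readings here are hypothetical:

```python
# Hypothetical monthly year-over-year inflation readings (percent).
readings = [2.1, 2.4, 1.9, 2.6, 2.3, 2.2]

def moving_average(series, window=3):
    """Trailing moving average: smooths month-to-month fluctuations
    so the underlying trend is easier to see."""
    return [sum(series[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(series))]

print(moving_average(readings))
```

The same smoothing can be applied to a monthly fed funds rate series to read its trend.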

While the Taylor rule has served economies well in good economic times, it can also serve as a gauge for bad economic times. Suppose a central bank held interest rates too low for too long. Low rates can fuel asset bubbles, so interest rates must eventually be raised to balance inflation and output levels. A further problem with asset bubbles is that the money supply rises far higher than is needed to balance an economy suffering from inflation and output imbalances. Many argued the central bank was at least partly to blame for the housing crisis of 2007-2008: interest rates were kept too low in the years following the dot-com bubble and leading up to the housing market crash in 2008 (see chart). Had the central bank followed the Taylor rule during this period, which indicated the interest rate should be much higher, the bubble might have been smaller, as fewer people would have been incentivized to buy homes. Taylor himself has argued the crisis would have been significantly smaller had the central bank followed rules-based monetary policy.