The timely detection of recessions is critical for an effective policy response. Yet, in the US, the official declaration from the NBER Business Cycle Dating Committee arrives many months, sometimes more than a year, after a recession has begun. For policymakers who need to act in real time, this delay is far too long.
To address this lag, researchers have for many decades developed algorithms to detect recession starts as early as possible (see Hamilton 2011 for a survey of the literature). Among existing algorithms, threshold rules that use the unemployment rate, such as the Sahm (2019) rule, have been found to perform well at detecting recessions early (Crump et al. 2020).
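For concreteness, the Sahm rule can be written in a few lines. The sketch below follows the standard statement of the rule (a rise of at least 0.50 percentage points of the 3-month moving average of the unemployment rate above its minimum over the previous 12 months); the function name and array conventions are mine.

```python
import numpy as np

def sahm_signal(u: np.ndarray, threshold: float = 0.50) -> np.ndarray:
    """Sahm (2019) rule: flag month t when the 3-month moving average of
    the unemployment rate (in percentage points) exceeds its minimum over
    the previous 12 months by at least `threshold` points."""
    ma3 = np.convolve(u, np.ones(3) / 3, mode="valid")  # ma3[i] averages u[i:i+3]
    signal = np.zeros(len(u), dtype=bool)
    for t in range(14, len(u)):            # first month with a full history
        current = ma3[t - 2]               # 3-month average ending at month t
        floor = ma3[t - 14 : t - 2].min()  # minimum over the prior 12 months
        signal[t] = current - floor >= threshold
    return signal
```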
However, such rules are based on a single noisy measure of the economy. By combining unemployment data with vacancy data, it is possible to obtain a less noisy signal, and thus to detect recessions more quickly and robustly (Michaillat and Saez 2025). The basis for this insight is the Beveridge curve: at the onset of every recession, unemployment rises sharply just as job vacancies drop (Figure 1).
Figure 1 Monthly US unemployment and vacancy rates, April 1929–May 2025
Sources: Barnichon (2010), Petrosky-Nadeau and Zhang (2021), and Bureau of Labor Statistics.
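One simple way to exploit this comovement, in the spirit of Michaillat and Saez (2025) though not necessarily with their exact formula, is to require both series to move adversely at the same time. In the sketch below, the 0.3-point threshold and the smoothing and lookback windows are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def joint_signal(u: np.ndarray, v: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Illustrative two-series rule: fire only when the smoothed
    unemployment rate has risen AND the smoothed vacancy rate has fallen
    by at least `threshold` points (hypothetical value) relative to their
    extremes over the previous 12 months."""
    u_ma = np.convolve(u, np.ones(3) / 3, mode="valid")
    v_ma = np.convolve(v, np.ones(3) / 3, mode="valid")
    signal = np.zeros(len(u), dtype=bool)
    for t in range(14, len(u)):
        u_rise = u_ma[t - 2] - u_ma[t - 14 : t - 2].min()  # unemployment up
        v_drop = v_ma[t - 14 : t - 2].max() - v_ma[t - 2]  # vacancies down
        signal[t] = min(u_rise, v_drop) >= threshold       # both must move
    return signal
```

Because the minimum of the two movements must clear the threshold, a noisy jump in one series alone cannot trigger the alarm.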
In a new paper, I take the next logical step: instead of using one specific formula to filter and combine labour market data, the algorithm systematically searches for the optimal way to do so (Michaillat 2025). The goal is to find the best possible lens to view the data, to extract as much real-time information as possible from them.
The recession detection algorithm that I develop first generates millions of recession classifiers, each processing the unemployment and vacancy data in a unique way, and each using a unique recession threshold. The algorithm then keeps only the classifiers that make no errors over the 1929–2021 training period. To be selected, a classifier must therefore identify all 15 US recessions between 1929 and 2021 without a single false positive. This leaves us with over two million historically perfect classifiers.
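A schematic of this search stage is sketched below. The classifier family (a smoothing window per series, a lookback window, and a threshold), the parameter grid, and the simplified notion of ‘perfect’ (every alarm falls inside a recession, every recession triggers an alarm) are stand-ins for the paper’s richer family and exact criterion.

```python
import itertools
import numpy as np

def classify(u, v, u_win, v_win, lookback, threshold):
    """One classifier: fire when the u_win-month average of unemployment
    has risen, and the v_win-month average of vacancies has fallen, by at
    least `threshold` points relative to their extremes over the previous
    `lookback` months. (Hypothetical parametrisation.)"""
    avg = lambda x, w, s: x[s - w + 1 : s + 1].mean()
    signal = np.zeros(len(u), dtype=bool)
    for t in range(max(u_win, v_win) + lookback, len(u)):
        u_rise = avg(u, u_win, t) - min(avg(u, u_win, s) for s in range(t - lookback, t))
        v_drop = max(avg(v, v_win, s) for s in range(t - lookback, t)) - avg(v, v_win, t)
        signal[t] = min(u_rise, v_drop) >= threshold
    return signal

def is_perfect(signal, recessions):
    """Simplified criterion: the signal fires at least once inside every
    training recession and never during an expansion month."""
    in_recession = np.zeros(len(signal), dtype=bool)
    for start, end in recessions:  # recessions as (start, end) month indices
        in_recession[start : end + 1] = True
        if not signal[start : end + 1].any():
            return False                       # a recession was missed
    return not np.any(signal & ~in_recession)  # no false positives

# Hypothetical grid; with u, v as monthly rate arrays and RECESSIONS as the
# NBER chronology, the surviving classifiers would be:
#   grid = itertools.product(range(1, 13), range(1, 13), range(6, 25, 3),
#                            np.arange(0.1, 1.01, 0.05))
#   perfect = [p for p in grid if is_perfect(classify(u, v, *p), RECESSIONS)]
```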
Having millions of perfect classifiers creates a new challenge: which one to choose? To solve this, the algorithm evaluates classifiers on two key dimensions: how much they anticipate recession starts and how precise that signal is. By plotting each classifier’s average detection error against the standard deviation of the detection error, the algorithm identifies an anticipation-precision frontier – classifiers that offer the best combinations of foresight and precision (Figure 2). From this frontier, the algorithm then picks all the classifiers whose detection error’s standard deviation is below 3 months; because a 95% confidence interval spans roughly 2 × 1.96 ≈ 3.9 standard deviations, this guarantees that the confidence interval for the estimated recession start date is less than 1 year wide. Overall, the algorithm selects an ensemble of 7 recession classifiers.
Figure 2 Two million perfect recession classifiers for the US, April 1929–December 2021, and the anticipation-precision frontier
Notes: The figure displays the mean and standard deviation of the detection errors for 2,343,752 perfect classifiers, which detect the 15 recessions that occurred between 1929 and 2021 without false positives.
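The frontier and the final cut in Figure 2 can be sketched as a standard Pareto computation over each classifier’s pair of (mean detection error, error dispersion). The 3-month cut-off follows the text; the data layout is an assumption.

```python
import numpy as np

def frontier_mask(mean_err: np.ndarray, std_err: np.ndarray) -> np.ndarray:
    """Anticipation-precision frontier: keep classifiers not dominated by
    another that detects earlier (lower mean error) with a tighter signal
    (lower standard deviation)."""
    order = np.lexsort((std_err, mean_err))  # by mean error, then dispersion
    mask = np.zeros(len(mean_err), dtype=bool)
    best_std = np.inf
    for i in order:
        if std_err[i] < best_std:   # nothing earlier is also more precise
            mask[i] = True
            best_std = std_err[i]
    return mask

def select_ensemble(mean_err, std_err, max_std=3.0):
    """Frontier classifiers whose error dispersion is below `max_std`
    months, so the 95% confidence band for the start date (about 3.9
    standard deviations wide) stays under a year."""
    mean_err, std_err = np.asarray(mean_err), np.asarray(std_err)
    return np.flatnonzero(frontier_mask(mean_err, std_err) & (std_err < max_std))
```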
So, is the US economy in a recession now? The classifier ensemble provides a single, real-time recession probability. When I apply the ensemble to the most recent data, it shows that the recession probability reached 71% in May 2025 (Figure 3). Since mid-2022, the combination of rising unemployment and falling vacancies has triggered 5 of the 7 classifiers in the ensemble, pushing the recession probability up. The recession probability first became positive late in 2023, when 3 of the 7 classifiers were triggered. It increased further in the middle of 2024, when 2 additional classifiers switched on. Currently, only 2 of the 7 classifiers in the ensemble remain untriggered.
Figure 3 Recession probability from the classifier ensemble trained on US data, April 1929–December 2021
Notes: The probability is the average of the probabilities given by the individual classifiers in the ensemble (thin orange lines). The classifiers in the ensemble are selected from the high-precision segment of the 1929–2021 anticipation-precision frontier.
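As the notes to Figure 3 indicate, the ensemble probability is just the average of the individual classifiers’ signals. A minimal sketch, assuming each classifier’s output is stored as a boolean array over months:

```python
import numpy as np

def ensemble_probability(signals: list[np.ndarray]) -> np.ndarray:
    """Recession probability in each month: the average of the 0/1 outputs
    of the classifiers in the ensemble."""
    return np.vstack(signals).astype(float).mean(axis=0)
```

With 5 of the 7 classifiers triggered, the probability is 5/7 ≈ 71%, which is exactly the May 2025 reading.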
Given the recent record highs in the US stock market, isn’t it strange that the algorithm produces such a high recession probability? What explains it in the underlying labour market data? This substantial recession probability is due to the noticeable decrease in the number of job vacancies and increase in the number of job seekers since the middle of 2022 (Figure 1). The vacancy rate fell by 43%, from 7.40% in April 2022 down to 4.21% in April 2025. The unemployment rate increased by 23%, from 3.46% in January 2023 up to 4.24% in May 2025. Such adverse changes in labour market conditions are typically associated with a recession, which is why 5 of the 7 classifiers have switched on since the middle of 2023.
What is odd about the current period is that the recession risk has remained elevated without increasing further for about a year. By contrast, during the 15 recessions of the training period, once the recession probability turned positive, it rapidly rose to 1. The reason for this strange behaviour is that, after cooling significantly between mid-2022 and mid-2024, the US labour market stopped cooling and has been at a standstill since the middle of 2024. Such a pause in cooling is very unusual.
To verify the model’s reliability, I performed several backtests. For instance, I trained the algorithm using data only up to December 1984 and asked it to detect all subsequent recessions. All the classifiers in the ensemble built from 1929–1984 data did very well, correctly identifying all four recessions in the 1985–2021 test period – including the dot-com bust and the Great Recession – without any false positives. Even without seeing any data past 1984, the classifier ensemble detected the Great Recession in good time, with its recession probability surging by the summer of 2008, providing a clear and timely warning. In fact, the performance of the algorithm over the entire 1985–2021 testing period is surprisingly good: averaged across the ensemble’s classifiers, the standard deviation of detection errors is only 1.4 months and the mean detection error only 1.2 months.
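A sketch of how such backtest scores can be computed from a classifier’s signal and the test-period recession starts. The 12-month search window around each start, and the assumption that every recession is detected, are mine, not the paper’s.

```python
import numpy as np

def detection_errors(signal: np.ndarray, starts: list[int], horizon: int = 12) -> np.ndarray:
    """Detection error per recession: months between the first alarm near
    the start and the start itself (negative = anticipation). Assumes
    every recession is detected within `horizon` months of its start."""
    errors = []
    for s in starts:
        lo = max(0, s - horizon)
        first = lo + np.flatnonzero(signal[lo : s + horizon + 1])[0]
        errors.append(first - s)
    return np.array(errors)

# Ensemble-level figures average the per-classifier scores:
#   errs = [detection_errors(sig, TEST_STARTS) for sig in ensemble_signals]
#   np.mean([e.std() for e in errs])   # ~1.4 months over 1985-2021
#   np.mean([e.mean() for e in errs])  # ~1.2 months
```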
What is the current recession risk according to the classifier ensemble trained on 1929–1984 data? That classifier ensemble assigns a recession probability of 83% to current data (Figure 4). This is because 5 of the 6 classifiers in the ensemble are currently triggered. In fact, according to that ensemble, the recession probability reached 100% in the middle of 2024, before dropping slightly in early 2025. The classifier ensembles produced in other backtests also produce elevated recession probabilities in May 2025.
Figure 4 Recession probability from the classifier ensemble trained on US data, April 1929–December 1984
Notes: The probability is the average of the probabilities given by the individual classifiers in the ensemble (thin orange lines). The classifiers in the ensemble are selected from the high-precision segment of the 1929–1984 anticipation-precision frontier.
Next, to ensure that the algorithm is not overfitting the data, I run a placebo test. I ask the algorithm to detect a series of events that are as frequent as recessions but not connected in any way to the US economy. The events that I pick are the deaths of 15 US first ladies between 1929 and 2021: Helen Taft, Lou Hoover, Frances Cleveland, Edith Roosevelt, Grace Coolidge, Edith Wilson, Eleanor Roosevelt, Mamie Eisenhower, Bess Truman, Pat Nixon, Jacqueline Kennedy, Lady Bird Johnson, Betty Ford, Nancy Reagan, and Barbara Bush.
The algorithm is unable to detect these placebo events (Figure 5). The standard deviation of the algorithm’s detection errors for the first-lady deaths exceeds 40 years – while this standard deviation was less than 2 months for recessions. This stark failure in the first-lady placebo test demonstrates that the model identifies genuine economic signals rather than spuriously fitting the data.
Figure 5 Placebo test of the algorithm: Detecting the deaths of US first ladies with labour market data
Finally, I examine whether it is possible to detect recessions earlier and more accurately by applying the algorithm to some of the product market data used by the NBER Dating Committee. Unlike my algorithm, the NBER committee relies mostly on product market metrics (production, sales, consumption) to date recessions. The committee states that it does not look at the unemployment rate because it is ‘lagging’ and ‘noisy’. So I apply the algorithm to data on industrial production – which are also available since 1929.
I find that the combination of unemployment and vacancy rates consistently outperforms industrial production in providing early and accurate recession signals; in fact, the unemployment rate alone outperforms industrial production (Figure 6). Even when combined, unemployment and industrial production data do not achieve the same level of anticipation and precision as the unemployment-vacancy pairing, suggesting that current official dating practices might benefit from a greater emphasis on unemployment and vacancies.
Figure 6 Comparing labour market classifiers to product market classifiers
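What ‘outperforms’ means in Figure 6 can be made precise with a small dominance check between frontiers. A sketch, assuming each frontier is stored as an array of (mean detection error, standard deviation) rows, both in months:

```python
import numpy as np

def dominates(frontier_a: np.ndarray, frontier_b: np.ndarray) -> bool:
    """True when, for every classifier on frontier B, some classifier on
    frontier A detects at least as early with no more dispersion."""
    return all(
        any(a[0] <= b[0] and a[1] <= b[1] for a in frontier_a)
        for b in frontier_b
    )

# Made-up illustration: a frontier lying below and to the left dominates.
labour = np.array([[-2.0, 1.5], [-1.0, 1.0]])
product = np.array([[-1.5, 2.5], [0.0, 2.0]])
print(dominates(labour, product))  # True
```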
Overall, this new algorithm shows that the US labour market is sending an unambiguous signal: the conditions characteristic of a recession are not on the horizon – they are already here. If it turns out, once the dust has settled, that the US economy is not in a recession, what would we learn? In that case, the algorithm could be retrained on the new data. The 5 classifiers that mistakenly detected a recession would be eliminated, while the 2 classifiers that did not detect a recession would be retained. Given that several of the classifiers on the current frontier do signal a recession, removing them would shift the anticipation-precision frontier out. We would therefore learn that detecting recessions with labour market data is harder than previous recessions suggested, because the current period is not following previous patterns.
References
Barnichon, R (2010), “Building a composite help-wanted index”, Economics Letters 109(3): 175–78.
Crump, R, D Giannone, and D Lucca (2020), “Reading the tea leaves of the US business cycle”, Liberty Street Economics, Federal Reserve Bank of New York.
Hamilton, J D (2011), “Calling recessions in real time”, International Journal of Forecasting 27(4): 1006–26.
Michaillat, P (2025), “Early and accurate recession detection using classifiers on the anticipation-precision frontier”, arXiv:2506.09664v3.
Michaillat, P, and E Saez (2025), “Has the recession started?”, Oxford Bulletin of Economics and Statistics.
Petrosky-Nadeau, N, and L Zhang (2021), “Unemployment crises”, Journal of Monetary Economics 117: 335–53.
Sahm, C (2019), “Direct stimulus payments to individuals”, in H Boushey, R Nunn, and J Shambaugh (eds.), Recession Ready: Fiscal Policies to Stabilize the American Economy, Washington, DC: Brookings Institution.