From Chessprogramming wiki

Unlike Alpha-Beta, classical Razoring [2] prunes branches forward if the static evaluation of a move is less than or equal to alpha. It assumes that from any given position the opponent will be able to find at least one move that improves their position, the Null Move Observation. There are various implementations of Razoring, all somehow diminishing pruning and reductions near the horizon. Razoring either prunes forward based on a reduced search, possibly even the quiescence search, or it reduces depth based on the static evaluation or a quiescence search.
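The prune-forward flavor can be sketched as follows, a minimal C++ illustration with assumed RAZOR_DEPTH and RAZOR_MARGIN tuning values, a hypothetical try_razor helper, and a caller-supplied quiescence search; real engines interleave this logic with the rest of the node:

```cpp
#include <cassert>
#include <functional>

const int RAZOR_DEPTH  = 3;   // assumed: razor only near the horizon
const int RAZOR_MARGIN = 300; // assumed: roughly three pawns

// Returns true if the node can be razored away; 'score' then holds the
// quiescence score the caller should return as a fail low.
bool try_razor(int depth, int alpha, int staticEval,
               const std::function<int(int, int)>& qsearch, int& score) {
    if (depth <= RAZOR_DEPTH && staticEval + RAZOR_MARGIN <= alpha) {
        // Verify the suspected fail low with a null-window quiescence search.
        int v = qsearch(alpha - 1, alpha);
        if (v < alpha) {
            score = v;      // quiescence search confirms the fail low
            return true;
        }
    }
    return false;           // otherwise search the node normally
}
```

If the quiescence search unexpectedly reaches alpha, the node is searched normally, which keeps the risk of the forward prune bounded.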

Pre Frontier Nodes

The idea introduced by John Birmingham and Peter Kent [3] was applied at so-called pre-frontier (depth = 2) expected All-Nodes. Before searching deeper, all moves were generated, made, evaluated, unmade, and inserted into a sorted move list. The sorted moves were then fetched in decreasing order of evaluation and, as long as the evaluation of each move exceeded alpha, tried as usual. Once a move's static evaluation no longer improved alpha, that move and all further moves (sorted below it) were pruned without any further investigation.
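The move-list part of this scheme can be sketched in C++ as follows, assuming a hypothetical ScoredMove record that carries the static evaluation of the position after the move:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical move record for illustration: the static evaluation of the
// position after making the move is stored alongside the move itself.
struct ScoredMove { int move; int eval; };

// Birmingham/Kent pre-frontier razoring: sort moves by decreasing static
// evaluation and search them while they improve alpha; the first move that
// fails to do so, and every move sorted below it, is pruned.
std::vector<int> moves_to_search(std::vector<ScoredMove> moves, int alpha) {
    std::sort(moves.begin(), moves.end(),
              [](const ScoredMove& a, const ScoredMove& b) { return a.eval > b.eval; });
    std::vector<int> searched;
    for (const ScoredMove& m : moves) {
        if (m.eval <= alpha)
            break;                   // prune this and all remaining moves
        searched.push_back(m.move);  // this move would be searched deeper
    }
    return searched;
}
```

Because the list is sorted, a single comparison against alpha suffices to cut off the whole tail of the move list.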

Deep Razoring

There is an even more aggressive pruning technique at depth = 4 nodes, called Deep Razoring. The weaker assumption is that the opponent will be able to improve their position with a yourmove-mymove-yourmove sequence.
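A minimal sketch of this depth reduction, with an assumed margin and helper name not taken from the source:

```cpp
#include <cassert>

const int DEEP_RAZOR_DEPTH  = 4;   // deep razoring applies at depth = 4 nodes
const int DEEP_RAZOR_MARGIN = 500; // assumed margin, not from the source

// Deep razoring sketch: at depth = 4 nodes whose static evaluation falls
// well below alpha, reduce the remaining search depth by one ply, leaving
// just the yourmove-mymove-yourmove sequence plus quiescence.
int razored_depth(int depth, int staticEval, int alpha) {
    if (depth == DEEP_RAZOR_DEPTH && staticEval + DEEP_RAZOR_MARGIN <= alpha)
        return depth - 1;
    return depth;
}
```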

Limited Razoring

Ernst A. Heinz introduced Limited Razoring, pushing the idea of futility pruning one step further to pre-pre-frontier nodes (depth = 3), and combined it with the depth-reduction mechanism of deep razoring [4]:

fscore = mat_balance(current) + razor_margin;
if (!extend
 && (depth == PRE_PRE_FRONTIER)
 && (fscore <= alpha)
 && (opposing_pieces(current) > 3)
   ) depth = PRE_FRONTIER;

Amir Ban's Definition

Amir Ban in a CCC discussion with Robert Hyatt on Razoring versus Pruning [5] :

Razoring is supposed to be a sort of forward pruning where rather than skipping an entire subtree, you search it to a reduced depth, typically one less than normal depth. The advantage is that you get most of the saving but with much lower risk than pruning entire subtrees. Razoring is the only forward pruning technique Junior uses, with a depth reduction of one (half-ply). Seems like Crafty uses the same definition ... 



In 2022 razoring was reintroduced to Stockfish [6] by Michael Chaly. The implementation features a quadratic depth-dependent margin. Certain precautions were added later to prevent razoring from interfering with mate finding.

// Step 7. Razoring (~1 Elo)
// If eval is really low check with qsearch if it can exceed alpha, if it can't,
// return a fail low.
if (eval < alpha - 512 - 293 * depth * depth)
{
    value = qsearch<NonPV>(pos, ss, alpha - 1, alpha);
    if (value < alpha && std::abs(value) < VALUE_TB_WIN_IN_MAX_PLY)
        return value;
}

Crafty 15.17

This code snippet appears in Crafty 15.17 [7] (~1998) [8] with fractional reductions based on the evaluation at pre-frontier nodes (depth = 2), which was already controversially discussed between Vincent Diepeveen and Robert Hyatt in 2002 [9]:

|                                                          |
|   now we toss in the "razoring" trick, which simply says |
|   if we are doing fairly badly, we can reduce the depth  |
|   an additional ply, if there was nothing at the current |
|   ply that caused an extension.                          |
|                                                          |
      if (depth<3*INCREMENT_PLY && depth>=2*INCREMENT_PLY &&
          !tree->in_check[ply] && extensions == -60) {
        register int value=-Evaluate(tree,ply+1,ChangeSide(wtm),
                                     ...);
        if (value+50 < alpha) extensions-=60;
      }

Dropping into Q-search

A different razoring approach was proposed by Robert Hyatt in CCC [10], based on an idea by Tord Romstad [11], and abandoned in Crafty after cluster testing [12]. Dropping into the quiescence search is intended inside a PVS framework at strongly expected All-nodes near the tips (pre-pre-frontier nodes or below, depth <= 3), that is, where a null window (alpha = beta - 1) is passed and the static evaluation is below beta by a margin (~three pawns), i.e. <= alpha. If under these conditions the quiescence search confirms the fail-low characteristic, its score is trusted and returned.

*                                                          *
* now we try a quick Razoring test. If we are within 3     *
* plies of a tip, and the current eval is 3 pawns (or      *
* more) below beta, then we just drop into a q-search      *
* to try to get a quick cutoff without searching more in   *
* a position where we are way down in material.            *
*                                                          *
if (razoring_allowed && depth <= razor_depth) {
   if (alpha == beta - 1) { // null window ?
      if (Evaluate(tree, ply, wtm, alpha, beta) + razor_margin < beta) { // likely a fail-low node ?
         value = QuiesceChecks(tree, alpha, beta, wtm, ply);
         if (value < beta)
            return value; // fail soft
      }
   }
}


Similar code appears in Jury Osipov's open source engine Strelka 2.0 [13], failing a bit harder. The interesting thing is the missing new_value < beta condition in the depth = 1 case: if the static evaluation indicates a fail-low node but the q-search fails high, the score of the reduced fail-high search is returned, since there was obviously a winning capture raising the score, and one assumes a quiet move near the horizon will not do better [14].

Strelka.c, line 393 (slightly modified for clarification by Gerd Isenberg):

  value = eval + 125;
  if (value < beta) {
    if (depth == 1) {
      new_value = qsearch(...);
      return max(new_value, value);
    }
    value += 175;
    if (value < beta && depth <= 3) {
      new_value = qsearch(...);
      if (new_value < beta)
         return max(new_value, value);
    }
  }

In the Rybka Forum thread on Strelka 2.0 [15] , Anthony Cozzie states on this feature [16] :

I'm also pretty amazed at how aggressively the search prunes. Not only will Strelka drop straight into the q-search on a depth-3 search, but because it orders all captures first, it will reduce almost all noncapture/check moves.

See also

Classical Razoring is known for being risky. It likely was an advance in Shannon Type-B programs searching with a fixed width. Today's programs more likely rely on other selectivity techniques, such as futility pruning and late move reductions.


Forum Posts

1995 ...

2000 ...

2005 ...

2010 ...

2015 ...

2020 ...



  1. William of Ockham – Sketch labeled "frater Occham iste", from a manuscript of Ockham's Summa Logicae, 1341
  2. John Birmingham, Peter Kent (1977). Tree-searching and tree-pruning techniques. Advances in Computer Chess 1, reprinted 1988 in Computer Chess Compendium
  3. John Birmingham, Peter Kent (1977). Tree-searching and tree-pruning techniques. Advances in Computer Chess 1, reprinted 1988 in Computer Chess Compendium
  4. Ernst A. Heinz (1998). Extended Futility Pruning. ICCA Journal, Vol. 21, No. 2, including Limited Razoring at Pre-Pre-Frontier Nodes
  5. Re: Razoring? by Amir Ban, CCC, January 27, 1999
  6. Reintroduce razoring. by Vizvezdenec · Pull Request #3921 · official-stockfish/Stockfish · GitHub, Github, February 4, 2022
  7. Robert Hyatt's FTP Page, crafty-15.17.zip, search.c
  8. Re: Razoring by Sven Schüle, CCC, December 25, 2011
  9. razoring in crafty version 16.9, mid 1999 by Vincent Diepeveen, August 21, 2002
  10. Razoring... by Robert Hyatt, CCC, October 09, 2008
  11. Re: Razoring by Marco Costalba, CCC, December 26, 2011
  12. Re: futility pruning - razoring by Robert Hyatt, CCC, September 16, 2009
  13. Download by sdchess.ru
  14. Razoring by Joe Stella, rgcc, August 31, 1995
  15. Strelka 2.0 by Vasik Rajlich, Rybka Forum, January 11, 2008
  16. Strelka 2.0 pg 4 by Anthony Cozzie, Rybka Forum, January 11, 2008
