# Razoring


Unlike Alpha-Beta, classical Razoring [2] prunes branches forward if the static evaluation of a node is less than or equal to alpha. It assumes that from any given position the opponent will be able to find at least one move that improves their position, the Null Move Observation. There are various implementations of Razoring, all of which restrict their pruning and reductions to the vicinity of the horizon. Razoring either prunes forward based on a reduced search (possibly just the quiescence search), or it reduces depth based on the static evaluation or a quiescence search.
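The core test can be sketched in a few lines. This is a minimal illustration, not code from any particular engine: the function name, the three-ply depth limit, and the ~three-pawn margin are all hypothetical choices.

```c
#include <assert.h>

/* Hypothetical illustration of the classical razoring condition:
   margin and depth limit are chosen for illustration only. */
#define RAZOR_MARGIN 300  /* ~three pawns, in centipawns */

/* Returns nonzero if the node may be razored: the static evaluation
   is so far below alpha that even adding a margin cannot reach it,
   the node is near the horizon, and the side to move is not in check. */
int can_razor(int static_eval, int alpha, int depth, int in_check)
{
    return !in_check
        && depth <= 3                          /* only near the horizon */
        && static_eval + RAZOR_MARGIN <= alpha;
}
```

A node passing this test would then be searched with a reduced depth or dropped into the quiescence search instead of receiving a full-width search.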

# Pre Frontier Nodes

The idea introduced by John Birmingham and Peter Kent [3] was applied at so-called pre frontier (depth = 2) expected All-Nodes. Before searching deeper, all moves were generated, made, evaluated, unmade, and inserted into a sorted move list. The sorted moves were then fetched in decreasing order and, as long as the evaluation of each move exceeded alpha, searched as usual. Once a move's static evaluation no longer improves alpha, that move and all moves sorted below it are pruned without further investigation.
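The sort-then-cut step can be sketched as follows. All names and types here are illustrative, not Birmingham and Kent's actual code; only the scheme (sort by static evaluation, prune everything from the first move that fails to exceed alpha) follows the description above.

```c
#include <stdlib.h>

/* Hypothetical sketch of the Birmingham/Kent pre-frontier scheme. */
typedef struct {
    int move;  /* encoded move, opaque here */
    int eval;  /* static evaluation after making the move */
} ScoredMove;

static int by_eval_desc(const void *a, const void *b)
{
    return ((const ScoredMove *)b)->eval - ((const ScoredMove *)a)->eval;
}

/* Sorts the list in decreasing evaluation order and returns how many
   moves survive: the first move whose evaluation no longer exceeds
   alpha, and everything sorted below it, is pruned. */
int razor_cut(ScoredMove *list, int n, int alpha)
{
    int i;
    qsort(list, n, sizeof *list, by_eval_desc);
    for (i = 0; i < n; i++)
        if (list[i].eval <= alpha)  /* no longer improves alpha */
            break;                  /* prune this and all below */
    return i;
}
```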

# Deep Razoring

An even more aggressive pruning technique, called Deep Razoring, is applied at depth = 4 nodes. It rests on the weaker assumption that the opponent will be able to improve their position with a yourmove-mymove-yourmove sequence.
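One way to express the deeper test is as a depth reduction rather than an outright prune. The following sketch is hypothetical: the function name, the depth = 4 trigger (taken from the text above), and the larger margin are illustrative assumptions.

```c
/* Hypothetical sketch of deep razoring: at a depth-4 node whose
   static evaluation sits far below alpha, reduce the remaining
   depth by one ply instead of searching full width. Margin is
   illustrative (larger than a near-horizon razor margin). */
#define DEEP_RAZOR_DEPTH  4
#define DEEP_RAZOR_MARGIN 500  /* centipawns, chosen for illustration */

int deep_razor_depth(int depth, int static_eval, int alpha)
{
    if (depth == DEEP_RAZOR_DEPTH
        && static_eval + DEEP_RAZOR_MARGIN <= alpha)
        return depth - 1;  /* reduce, do not prune outright */
    return depth;
}
```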

# Limited Razoring

Ernst A. Heinz introduced Limited Razoring, pushing the idea of futility pruning one step further to pre-pre-frontier nodes (depth = 3). He combines it with the depth-reduction mechanism of deep razoring [4]:

```
fscore = mat_balance(current) + razor_margin;
if (!extend
    && (depth == PRE_PRE_FRONTIER)
    && (fscore <= alpha)
    && (opposing_pieces(current) > 3)
   ) depth = PRE_FRONTIER;
```

# Amir Ban's Definition

Amir Ban in a CCC discussion with Robert Hyatt on Razoring versus Pruning [5] :

```
Razoring is supposed to be a sort of forward pruning where rather than skipping an entire subtree, you search it to a reduced depth, typically one less than normal depth. The advantage is that you get most of the saving but with much lower risk than pruning entire subtrees. Razoring is the only forward pruning technique Junior uses, with a depth reduction of one (half-ply). Seems like Crafty uses the same definition ...
```

# Implementations

## Crafty 15.17

This code snippet appears in Crafty 15.17 [6] (~1998) [7], with fractional reductions based on evaluation at pre frontier nodes (depth = 2); it was controversially discussed by Vincent Diepeveen and Robert Hyatt in 2002 [8]:

```
/*
 ----------------------------------------------------------
 |                                                          |
 |   now we toss in the "razoring" trick, which simply says |
 |   if we are doing fairly badly, we can reduce the depth  |
 |   an additional ply, if there was nothing at the current |
 |   ply that caused an extension.                          |
 |                                                          |
 ----------------------------------------------------------
*/
if (depth < 3*INCREMENT_PLY && depth >= 2*INCREMENT_PLY &&
    !tree->in_check[ply] && extensions == -60) {
  register int value = -Evaluate(tree, ply+1, ChangeSide(wtm),
                                 -(beta+51), -(alpha-51));
  if (value+50 < alpha) extensions -= 60;
}
```

## Dropping into Q-search

A different razoring approach was proposed by Robert Hyatt in CCC [9], based on an idea by Tord Romstad [10], and abandoned in Crafty after cluster testing [11]. Dropping into the quiescence search is intended inside a PVS framework at strong expected All-Nodes near the tips (pre-pre-frontier nodes or below, depth <= 3): a null window (alpha = beta - 1) is passed, and the static evaluation lies below beta by some margin (~three pawns), i.e. is <= alpha. If under these conditions the quiescence search confirms the fail-low characteristic, its score is trusted and returned.

```
/*
 ************************************************************
 *                                                          *
 * now we try a quick Razoring test. If we are within 3     *
 * plies of a tip, and the current eval is 3 pawns (or      *
 * more) below beta, then we just drop into a q-search      *
 * to try to get a quick cutoff without searching more in   *
 * a position where we are way down in material.            *
 *                                                          *
 ************************************************************
*/
if (razoring_allowed && depth <= razor_depth) {
  if (alpha == beta - 1) {  /* null window? */
    if (Evaluate(tree, ply, wtm, alpha, beta) + razor_margin < beta) {
      /* likely a fail-low node */
      value = QuiesceChecks(tree, alpha, beta, wtm, ply);
      if (value < beta)
        return value;       /* fail soft */
    }
  }
}
```

## Strelka

Similar code appears in Jury Osipov's open source engine Strelka 2.0 [12], failing a bit harder. The interesting point is the missing new_value < beta condition in the depth = 1 case: if the static evaluation indicates a fail-low node but the quiescence search fails high, the score of the reduced fail-high search is returned, since there was obviously a winning capture raising the score, and one assumes a quiet move near the horizon will not do better [13].

Strelka.c, line 393 (slightly modified for clarification by Gerd Isenberg):

```
value = eval + 125;
if (value < beta) {
  if (depth == 1) {
    new_value = qsearch(...);
    return max(new_value, value);
  }
  value += 175;
  if (value < beta && depth <= 3) {
    new_value = qsearch(...);
    if (new_value < beta)
      return max(new_value, value);
  }
}
```

In the Rybka Forum thread on Strelka 2.0 [14] , Anthony Cozzie states on this feature [15] :

```
I'm also pretty amazed at how aggressively the search prunes. Not only will Strelka drop straight into the q-search on a depth-3 search, but because it orders all captures first, it will reduce almost all noncapture/check moves.
```

Classical Razoring is known for being risky. It likely represented progress in Shannon Type-B programs with their fixed search width. Today's programs are more likely to rely on: