Engine Testing

From Chessprogramming wiki
Revision as of 07:25, 6 July 2024 by ShawnXu (talk | contribs)


The ever-optimistic Wile E. Coyote [1]

Engine Testing,
the process of both eliminating bugs and measuring the performance of a chess engine. New implementations of move generation are tested with Perft, while new features and the tuning of search and evaluation are verified via SPRT testing, (historically) test-positions, and by playing matches against other engines.
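Perft verifies move generation by counting all leaf nodes of the move tree to a fixed depth and comparing the totals against known reference values; any mismatch pinpoints a bug. A minimal sketch of the recursion, using a lone knight on an empty board as a stand-in for a full move generator (the `knight_moves` helper is purely illustrative):

```python
# Perft sketch: count all move sequences to a fixed depth.
# A real engine would recurse through its move generator with
# make/unmake; a lone knight on an empty 8x8 board stands in here.

KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(square):
    """Generate target squares for a knight on (file, rank), each 0-7."""
    f, r = square
    for df, dr in KNIGHT_OFFSETS:
        nf, nr = f + df, r + dr
        if 0 <= nf < 8 and 0 <= nr < 8:
            yield (nf, nr)

def perft(square, depth):
    """Count leaf nodes of the move tree below `square` at `depth`."""
    if depth == 0:
        return 1
    return sum(perft(target, depth - 1) for target in knight_moves(square))

# From a1 = (0, 0) there are two knight moves (b3, c2), each with six replies.
print(perft((0, 0), 1))  # 2
print(perft((0, 0), 2))  # 12
```

In a real engine the reference values come from published perft tables for standard test positions.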

Bug Hunting




SPRT

The Sequential Probability Ratio Test (SPRT) is the modern, preferred method to test strength modifications.
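A much simplified sketch of the SPRT idea follows: accumulate a log-likelihood ratio over game results until it crosses an acceptance bound. Production frameworks model draws (e.g. with pentanomial statistics); here draws are ignored and decisive games are treated as Bernoulli trials, with the standard logistic Elo-to-score conversion:

```python
import math

def expected_score(elo_diff):
    """Logistic model: expected score for a given Elo advantage."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

def sprt(wins, losses, elo0=0.0, elo1=5.0, alpha=0.05, beta=0.05):
    """Classic two-hypothesis SPRT on decisive games only (sketch).

    H0: the true strength gain is elo0; H1: it is elo1.
    Draws are ignored for simplicity.  Returns 'H0', 'H1',
    or 'continue' (keep playing games).
    """
    p0, p1 = expected_score(elo0), expected_score(elo1)
    llr = (wins * math.log(p1 / p0)
           + losses * math.log((1.0 - p1) / (1.0 - p0)))
    lower = math.log(beta / (1.0 - alpha))   # accept H0 at or below this
    upper = math.log((1.0 - beta) / alpha)   # accept H1 at or above this
    if llr >= upper:
        return "H1"
    if llr <= lower:
        return "H0"
    return "continue"

print(sprt(2000, 1700))  # clear surplus of wins: "H1"
print(sprt(1000, 1000))  # not resolved yet: "continue"
```

The key property is that the test stops as soon as the evidence is sufficient, so clearly good or clearly bad patches need far fewer games than a fixed-length match.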


Test-Positions

Running sets of test-positions with the number of solutions per fixed time-frame is useful to prove whether things are broken after program changes, or to get hints about missing knowledge. But one should be careful about tuning an engine based on test-position results, since solving (possibly tactical) test-positions does not necessarily correlate with practical playing strength in matches against other opponents.
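Such suites are commonly stored in EPD, where four FEN-like position fields are followed by semicolon-terminated opcodes such as `bm` (best move) and `id`. A minimal parser sketch (the sample record below is made up for illustration, not taken from a real suite):

```python
def parse_epd(line):
    """Split an EPD record into its position and an opcode dictionary.

    The first four whitespace-separated fields describe the position;
    the remainder are semicolon-terminated opcodes like `bm` and `id`.
    """
    fields = line.split()
    position = " ".join(fields[:4])
    ops = {}
    for op in " ".join(fields[4:]).split(";"):
        op = op.strip()
        if op:
            name, _, value = op.partition(" ")
            ops[name] = value.strip().strip('"')
    return position, ops

# Hypothetical record for illustration -- not from a real test suite.
sample = ('rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - '
          'bm Nf3; id "demo.001";')
position, ops = parse_epd(sample)
print(ops["bm"])  # Nf3
```

A test runner would feed each position to the engine and count how many `bm` answers it matches within the time-frame.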


Matches

Most testing involves running different versions of a program in matches and comparing the results.

Time Controls

Generally speaking, changes that don't alter the search tree itself but only affect performance (e.g. move generation speed) can be tested with a fixed number of nodes, a fixed time per move, or a fixed depth. In all other cases the time management should be left to the engine to simulate real tournament conditions. On the other hand, debugging is much easier under fixed conditions, as the games become deterministic.
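For a UCI engine these modes map directly onto variants of the `go` command; a small helper that formats them (the helper function itself is illustrative, only the command strings follow the protocol):

```python
def go_command(mode, value=None, wtime=None, btime=None, winc=0, binc=0):
    """Build a UCI `go` command for the common testing modes.

    `mode` is one of 'nodes', 'movetime', 'depth' (deterministic
    fixed-condition testing) or 'clock' (engine-managed time).
    """
    if mode == "nodes":
        return f"go nodes {value}"
    if mode == "movetime":
        return f"go movetime {value}"      # value in milliseconds
    if mode == "depth":
        return f"go depth {value}"
    if mode == "clock":
        return (f"go wtime {wtime} btime {btime} "
                f"winc {winc} binc {binc}")
    raise ValueError(f"unknown mode: {mode}")

print(go_command("nodes", 1000000))   # go nodes 1000000
print(go_command("clock", wtime=60000, btime=60000, winc=100, binc=100))
```

Under `clock` the engine's own time management decides how to spend the remaining time, which is exactly what tournament-realistic testing should exercise.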

Aside from the type of time control, one also has to decide how much time should be spent per game, i.e. what the average quality of the games should be. While one can test more changes in a given time at short time controls, it is also relevant how a certain change scales to different strengths. For example, should one increase the reduction R in null move pruning to 3 at depths > 7, this change can only be tested effectively at time controls where the new condition is triggered frequently enough, i.e. where the average search depth is well beyond seven. It is hard to generalize, but on average changes to the search functions (LMR, null move, futility or similar pruning, reductions and extensions) tend to be more sensitive to the time control than the tuning of evaluation parameters.
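The depth-conditioned change from the example can be made concrete: at a time control whose typical search depth stays below 8, the new branch is simply never exercised, so the two versions play identically (the values mirror the example above, not any particular engine):

```python
def null_move_reduction(depth, new_version=True):
    """Null-move reduction R from the example: R = 3 only beyond depth 7."""
    if new_version and depth > 7:
        return 3
    return 2

# At shallow depths both versions behave identically, so a blitz test
# where the search rarely reaches depth 8 cannot separate them.
print([null_move_reduction(d) for d in range(4, 10)])  # [2, 2, 2, 2, 3, 3]
```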


Openings

During testing the engines should ideally play the same style of openings they would play in a normal tournament, so as not to optimize them for different types of positions. One option is to use the engine's own opening book; alternatively one can use an opening suite, a set of quiet test positions. In the latter case the same opening suite is used for each tournament conducted, and furthermore each position is played a second time with colors reversed. With these measures one can minimize the disparity between tests caused by different openings.
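The color-reversal scheme amounts to a pairing schedule: every opening is played twice with the engines swapping colors, so any one-sidedness of the starting position cancels out per pair (engine and opening names below are placeholders):

```python
def paired_schedule(openings, engine_a, engine_b):
    """Each opening yields two games with colors reversed.

    Returns (opening, white, black) tuples; per-opening results
    cancel out any one-sided bias in the starting position.
    """
    games = []
    for opening in openings:
        games.append((opening, engine_a, engine_b))
        games.append((opening, engine_b, engine_a))
    return games

for game in paired_schedule(["Sicilian", "QGD"], "dev", "base"):
    print(game)
```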

Tournament Manager

User interfaces or command line tools for UCI and Chess Engine Communication Protocol compatible engines in engine-engine matches are mentioned under Tournament Manager.


Chess Server

One can also test an engine's performance by comparing it to other programs on the various internet platforms [2]. In this case differences in hardware, as well as in features such as Endgame Tablebases or Opening Books, have to be considered.


The question whether certain results actually indicate a strength increase or not can be answered with statistical methods such as the SPRT mentioned above.
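For a quick reading of a finished match, the standard logistic model converts an average score into an implied Elo difference (a sketch only; confidence intervals and draw modelling are omitted):

```python
import math

def elo_difference(score):
    """Elo difference implied by an average match score in (0, 1)."""
    return -400.0 * math.log10(1.0 / score - 1.0)

print(round(elo_difference(0.5)))    # 0: a drawn match
print(round(elo_difference(0.75)))   # about +191
```

Note that a raw score says nothing about significance; whether the sample of games is large enough is exactly what the sequential test above decides.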


Test Results

Notable Bugs


Forum Posts

1995 ...

2000 ...

2005 ...




2010 ...





2015 ...

Re: Static evaluation test positions by Ferdinand Mosca, CCC, November 26, 2015 » Python




Re: Basic automated testing by Andrew Grant, CCC, September 30, 2018 » OpenBench


2020 ...



External Links

