Automated Tuning

From Chessprogramming wiki
Revision as of 13:43, 7 September 2020


Engine Tuner [1]

Automated Tuning,
an automated adjustment of evaluation parameters or weights, and less commonly of search parameters [2], with the aim of improving the playing strength of a chess engine or game playing program. Evaluation tuning can be applied through mathematical optimization or machine learning, two fields with huge overlap. Learning approaches are subdivided into supervised learning using labeled data, and reinforcement learning, which learns from trial and error and faces the dilemma of exploration (of uncharted territory) versus exploitation (of current knowledge). Johannes Fürnkranz gives a comprehensive overview in Machine Learning in Games: A Survey, published in 2000 [3], covering evaluation tuning in chapter 4.

Playing Strength

A difficulty in automated tuning of engine parameters is measuring playing strength. Small sets of test positions, quite common in former times for estimating the relative strength of chess programs, lack adequate diversity for a reliable strength prediction. In particular, solving test positions does not necessarily correlate with practical playing strength in matches against other opponents. Measuring strength therefore requires playing many games against a reference opponent to determine the win rate with a certain confidence. The closer the strength of the two opponents, the more games are necessary to decide whether changed parameters or weights are an improvement, up to several tens of thousands. Playing many games at ultra-short time controls has become the de facto standard with today's strong programs, as applied for instance in Stockfish's Fishtest, which uses the sequential probability ratio test (SPRT) to possibly terminate a match early [4].
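The early-stopping logic can be sketched with the common normal approximation to the SPRT log-likelihood ratio over a trinomial win/draw/loss record. This is a minimal sketch under illustrative assumptions (function names, Elo bounds, and the simplified variance estimate are not Fishtest's actual code):

```python
import math

def sprt_llr(wins, draws, losses, elo0=0.0, elo1=5.0):
    """Approximate log-likelihood ratio of H1 (true gain = elo1)
    vs H0 (true gain = elo0), from a win/draw/loss record."""
    n = wins + draws + losses
    if wins == 0 or draws == 0 or losses == 0:
        return 0.0                        # simplification: not enough data yet
    w, d = wins / n, draws / n
    score = w + d / 2                     # average score per game
    var = w + d / 4 - score ** 2          # per-game variance of the score
    # expected scores under the two Elo hypotheses (logistic Elo model)
    s0 = 1 / (1 + 10 ** (-elo0 / 400))
    s1 = 1 / (1 + 10 ** (-elo1 / 400))
    return (s1 - s0) * (2 * score - s0 - s1) * n / (2 * var)

def sprt_decision(llr, alpha=0.05, beta=0.05):
    """Wald's stopping rule: compare the LLR against two thresholds."""
    lower = math.log(beta / (1 - alpha))
    upper = math.log((1 - beta) / alpha)
    if llr >= upper:
        return "accept H1"                # the change is an improvement
    if llr <= lower:
        return "accept H0"                # no improvement, stop early
    return "continue"                     # keep playing games
```

A record with more wins than losses pushes the LLR up toward "accept H1"; a balanced or losing record pushes it down, so most non-improvements are rejected after far fewer games than a fixed-length match would need.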

Parameter

Quote by Ingo Althöfer [5] [6]:

It is one of the best arts to find the right SMALL set of parameters and to tune them.
Some 12 years ago I had a technical article on this ("On telescoping linear evaluation functions") in the ICCA Journal, Vol. 16, No. 2, pp. 91-94, describing a theorem (of existence) which says that in case of linear evaluation functions with lots of terms there is always a small subset of the terms such that this set with the right parameters is almost as good as the full evaluation function. 

Mathematical Optimization

Mathematical optimization methods in tuning consider the engine as a black box.
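One widely used black-box method is simultaneous perturbation stochastic approximation (SPSA), which estimates a gradient purely from match results by perturbing all parameters at once. A minimal sketch, assuming a hypothetical play_match(p1, p2) callback that pits the engine with parameters p1 against the engine with parameters p2 and returns a (noisy) score difference, positive if p1 did better:

```python
import random

def spsa_tune(play_match, params, iterations=100, a=0.1, c=0.05):
    """Minimal SPSA loop maximizing the measured match score."""
    params = list(params)
    for k in range(1, iterations + 1):
        ak = a / k ** 0.602               # standard SPSA gain schedules
        ck = c / k ** 0.101
        # perturb every parameter simultaneously by +-ck
        delta = [random.choice((-1, 1)) for _ in params]
        plus = [p + ck * d for p, d in zip(params, delta)]
        minus = [p - ck * d for p, d in zip(params, delta)]
        diff = play_match(plus, minus)    # one noisy measurement
        # gradient-ascent step from the single two-sided measurement
        for i, d in enumerate(delta):
            params[i] += ak * diff / (2 * ck * d)
    return params
```

Only two engine matches per iteration are needed regardless of the number of parameters, which is the main attraction of SPSA for expensive black-box tuning.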

Methods

Instances

Advantages

  • Works with all engine parameters, including search
  • Takes search-eval interaction into account

Disadvantages

Reinforcement Learning

Reinforcement learning, in particular temporal difference learning, has a long history in tuning evaluation weights in game programming, first seen in the late 1950s in Arthur Samuel's checkers player [7]. In self-play against a stable copy of itself, the weights of the evaluation function were adjusted after each move so that the score of the root position after a quiescence search became closer to the score of the full search. This TD method was generalized and formalized by Richard Sutton in 1988 [8], who introduced the decay parameter λ, determining what proportion of the score comes from the outcome of Monte Carlo simulated games, tapering between bootstrapping (λ = 0) and Monte Carlo (λ = 1). TD-λ was famously applied by Gerald Tesauro in his backgammon program TD-Gammon [9] [10]; its minimax adaptation TD-Leaf was successfully used in evaluation tuning of chess programs [11], with KnightCap [12] and CilkChess [13] as prominent examples.
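The core update can be sketched for a linear evaluation squashed by tanh, run over the positions of one game. This is an illustrative sketch of TD(λ) with eligibility traces, not Samuel's or Tesauro's actual implementation; the function name and learning-rate values are assumptions:

```python
import numpy as np

def td_lambda_update(features, weights, alpha=0.01, lam=0.7):
    """One TD(lambda) pass over one game. `features` is a list of
    feature vectors, one per position; evaluation is V = tanh(w . x)."""
    values = [np.tanh(np.dot(weights, x)) for x in features]
    eligibility = np.zeros_like(weights)
    for t in range(len(features) - 1):
        grad = (1 - values[t] ** 2) * features[t]  # dV/dw for tanh(w . x)
        eligibility = lam * eligibility + grad     # decayed trace of gradients
        delta = values[t + 1] - values[t]          # temporal difference error
        weights = weights + alpha * delta * eligibility
    return weights
```

Each weight is nudged so that earlier evaluations predict later ones; with λ near 1 the final game outcome dominates, with λ near 0 each position bootstraps mainly from its successor.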

Instances

Engines

Supervised Learning

Move Adaptation

One supervised learning method considers desired moves from a set of positions, typically from grandmaster games, and tries to adjust the evaluation weights so that, for instance, a one-ply search agrees with the desired move. Already pioneering in reinforcement learning some years before, move adaptation was described by Arthur Samuel in 1967 as used in the second version of his checkers player [15], where a structure of stacked linear evaluation functions was trained by computing a correlation measure based on the number of times a feature rated an alternative move higher than the desired move played by an expert [16]. In chess, move adaptation was first described by Thomas Nitsche in 1982 [17], and with some extensions by Tony Marsland in 1985 [18]. Evaluation tuning in Deep Thought, as mentioned by Feng-hsiung Hsu et al. in 1990 [19] and later published by Andreas Nowatzyk, is also based on an extended form of move adaptation [20]. Jonathan Schaeffer's and Paul Lu's efforts to make Deep Thought's approach work for Chinook in 1990 failed [21] - nothing seemed to produce results as good as their hand-tuned effort [22].

Value Adaptation

A second supervised learning approach to tune evaluation weights is based on regression toward a desired value, i.e. the final outcome of huge sets of positions from quality games, or other information supplied by a supervisor, e.g. in the form of annotations from position evaluation symbols. Often, value adaptation is reinforced by determining an expected outcome through self-play [23].

Advantages

  • Can modify any number of weights simultaneously - constant time complexity

Disadvantages

  • Requires a source for the labeled data
  • Can only be used for evaluation weights or anything else that can be labeled
  • Does not work optimally when combined with search

Regression

Regression analysis is a statistical process with substantial overlap with machine learning, used to predict the value of a Y variable (output) given known value pairs of the X and Y variables. While linear regression deals with continuous outputs, logistic regression covers binary or discrete outputs such as win/loss or win/draw/loss. Parameter estimation in regression analysis can be formulated as the minimization of a cost or loss function over a training set [24], such as the mean squared error or the cross-entropy error function for binary classification [25]. The minimization is implemented by iterative optimization algorithms or metaheuristics such as iterated local search, the Gauss–Newton algorithm, or the conjugate gradient method.

Linear Regression

The supervised problem of regression applied to move adaptation was used by Thomas Nitsche in 1982, minimizing the mean squared error of a cost function considering the program's and a grandmaster's choice of moves, as mentioned above, extended by Tony Marsland in 1985, and later by the Deep Thought team. Regression used to adapt desired values was described by Donald H. Mitchell in his 1984 master's thesis on evaluation features in Othello, cited by Michael Buro [26] [27]. Jens Christensen applied linear regression to chess in 1986 to learn point values in the domain of temporal difference learning [28].
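As a toy illustration of value adaptation by linear regression (a hypothetical setup, not Christensen's actual experiment), one can recover piece values from noisy position scores with ordinary least squares. Each row counts the material imbalance per piece type from White's point of view; the target is a noisy score:

```python
import numpy as np

# assumed "true" values for pawn, knight, bishop, rook, queen
true_values = np.array([1.0, 3.0, 3.0, 5.0, 9.0])

rng = np.random.default_rng(42)
# 500 synthetic positions: material differences per piece type
X = rng.integers(-3, 4, size=(500, 5)).astype(float)
# noisy position scores generated from the true values
y = X @ true_values + rng.normal(0.0, 0.5, size=500)

# ordinary least squares recovers the piece values
fitted, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With enough positions the fitted coefficients land close to the generating values, which is the essence of learning point values by regression; real experiments replace the synthetic scores with game outcomes or search scores.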

Logistic Regression

Since the relationship between win percentage and pawn advantage is assumed to follow a logistic model, one may treat the static evaluation as a single-layer perceptron or single-neuron ANN with the common logistic activation function, and apply the perceptron algorithm to train it [30]. Logistic regression in evaluation tuning was first elaborated by Michael Buro in 1995 [31]; it proved successful in the game of Othello in comparison with Fisher's linear discriminant and the quadratic discriminant function for normally distributed features, and served as the eponym of his Othello program Logistello [32]. In computer chess, logistic regression was applied by Arkadiusz Paterek with Gosu [33], later proposed by Miguel A. Ballicora in 2009 as used by Gaviota [34], independently described by Amir Ban in 2012 for Junior's evaluation learning [35], and explicitly mentioned by Álvaro Begué in a January 2014 CCC discussion [36], when Peter Österlund explained Texel's Tuning Method [37], which subsequently popularized logistic regression tuning in computer chess. Vladimir Medvedev's Point Value by Regression Analysis experiments [38] [39] showed why the logistic function is appropriate, and further used cross-entropy and regularization.
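The idea behind Texel's Tuning Method can be sketched as follows: map a (here linear) evaluation in centipawns through a logistic function, measure the mean squared error against game results, and lower it by a simple local search over the weights. The feature representation, the constant k, and the step size below are illustrative assumptions:

```python
def texel_error(weights, positions, k=1.2):
    """Mean squared error between predicted win probability and result.
    `positions` is a list of (features, result) pairs, result in {0, 0.5, 1};
    the evaluation is assumed linear in the feature vector."""
    total = 0.0
    for features, result in positions:
        score = sum(w * f for w, f in zip(weights, features))   # centipawns
        win_prob = 1.0 / (1.0 + 10.0 ** (-k * score / 400.0))   # logistic model
        total += (result - win_prob) ** 2
    return total / len(positions)

def texel_tune(weights, positions, step=1.0):
    """One pass of the simple local search: nudge each weight by +-step
    and keep whatever lowers the error."""
    best = texel_error(weights, positions)
    for i in range(len(weights)):
        for delta in (step, -step):
            weights[i] += delta
            err = texel_error(weights, positions)
            if err < best:
                best = err
                break                  # keep the improvement
            weights[i] -= delta        # revert and try the other direction
    return weights, best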

Instances

See also

Publications

1959

1960 ...

1970 ...

1980 ...

1985 ...

1990 ...

1995 ...

2000 ...

Gerald Tesauro (2001). Comparison Training of Chess Evaluation Functions.  » SCP, Deep Blue

2005 ...

2006

2007

2008

2009

2010 ...

2011

2012

2013

2014

2015 ...

2016

2017

2018

2020 ...

Forum Posts

1997 ...

2000 ...

2005 ...

Re: Adjusting weights the Deep Blue way by Pradu Kannan, Winboard Forum, September 01, 2008
Re: Insanity... or Tal style? by Miguel A. Ballicora, CCC, April 02, 2009 [62]

2010 ...

2014

Re: How Do You Automatically Tune Your Evaluation Tables by Álvaro Begué, CCC, January 08, 2014
The texel evaluation function optimization algorithm by Peter Österlund, CCC, January 31, 2014 » Texel's Tuning Method
Re: The texel evaluation function optimization algorithm by Álvaro Begué, CCC, January 31, 2014 » Cross-entropy

2015 ...

Re: Piece weights with regression analysis (in Russian) by Fabien Letouzey, CCC, May 04, 2015
Re: Genetical tuning by Ferdinand Mosca, CCC, August 20, 2015

2016

Re: txt: automated chess engine tuning by Sergei S. Markoff, CCC, February 15, 2016 » SmarThink
Re: CLOP: when to stop? by Álvaro Begué, CCC, November 08, 2016 [68]

2017

Re: Texel tuning method question by Peter Österlund, CCC, June 07, 2017
Re: Texel tuning method question by Ferdinand Mosca, CCC, July 20, 2017 » Python
Re: Texel tuning method question by Jon Dart, CCC, July 23, 2017
Re: tool to create derivates of a given function by Daniel Shawul, CCC, November 07, 2017 [71]

2018

2019

2020 ...

Evaluation & Tuning in Chess Engines by Andrew Grant, CCC, August 24, 2020 » Ethereal
Train a neural network evaluation by Fabio Gobbato, CCC, September 01, 2020 » NNUE
Speeding Up The Tuner by Dennis Sceviour, CCC, September 06, 2020

External Links

Engine tuning from Wikipedia
Self-tuning from Wikipedia

Engine Tuning

Optimization

optimize - Wiktionary
Entropy maximization from Wikipedia
Linear programming from Wikipedia
Nonlinear programming from Wikipedia
Simplex algorithm from Wikipedia
Simultaneous perturbation stochastic approximation (SPSA) - Wikipedia
SPSA Algorithm
Stochastic approximation from Wikipedia
Stochastic gradient descent from Wikipedia
AdaGrad from Wikipedia

Machine Learning

reinforcement - Wiktionary
reinforce - Wiktionary
supervisor - Wiktionary
temporal - Wiktionary

Statistics/Regression Analysis

regression - Wiktionary
regress - Wiktionary

Code

Misc

References

  1. A vintage motor engine tester located at the James Hall museum of Transport, Johannesburg, South Africa - Engine tuning from Wikipedia
  2. Yngvi Björnsson, Tony Marsland (2001). Learning Search Control in Adversary Games. Advances in Computer Games 9, pp. 157-174. pdf
  3. Johannes Fürnkranz (2000). Machine Learning in Games: A Survey. Austrian Research Institute for Artificial Intelligence, OEFAI-TR-2000-3, pdf - Chapter 4, Evaluation Function Tuning
  4. Fishtest Distributed Testing Framework by Marco Costalba, CCC, May 01, 2013
  5. Re: Zappa Report by Ingo Althöfer, CCC, December 30, 2005 » Zappa
  6. Ingo Althöfer (1993). On Telescoping Linear Evaluation Functions. ICCA Journal, Vol. 16, No. 2, pp. 91-94
  7. Arthur Samuel (1959). Some Studies in Machine Learning Using the Game of Checkers. IBM Journal July 1959
  8. Richard Sutton (1988). Learning to Predict by the Methods of Temporal Differences. Machine Learning, Vol. 3, No. 1, pdf
  9. Gerald Tesauro (1992). Temporal Difference Learning of Backgammon Strategy. ML 1992
  10. Gerald Tesauro (1994). TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play. Neural Computation Vol. 6, No. 2
  11. Don Beal, Martin C. Smith (1999). Learning Piece-Square Values using Temporal Differences. ICCA Journal, Vol. 22, No. 4
  12. Jonathan Baxter, Andrew Tridgell, Lex Weaver (1998). Experiments in Parameter Learning Using Temporal Differences. ICCA Journal, Vol. 21, No. 2, pdf
  13. The Cilkchess Parallel Chess Program
  14. EXchess also uses CLOP
  15. Arthur Samuel (1967). Some Studies in Machine Learning. Using the Game of Checkers. II-Recent Progress. pdf
  16. Johannes Fürnkranz (2000). Machine Learning in Games: A Survey. Austrian Research Institute for Artificial Intelligence, OEFAI-TR-2000-3, pdf
  17. Thomas Nitsche (1982). A Learning Chess Program. Advances in Computer Chess 3
  18. Tony Marsland (1985). Evaluation-Function Factors. ICCA Journal, Vol. 8, No. 2, pdf
  19. Feng-hsiung Hsu, Thomas Anantharaman, Murray Campbell, Andreas Nowatzyk (1990). A Grandmaster Chess Machine. Scientific American, Vol. 263, No. 4, pp. 44-50. ISSN 0036-8733.
  20. see 2.1 Learning from Desired Moves in Chess in Kunihito Hoki, Tomoyuki Kaneko (2014). Large-Scale Optimization for Evaluation Functions with Minimax Search. JAIR Vol. 49
  21. Jonathan Schaeffer, Joe Culberson, Norman Treloar, Brent Knight, Paul Lu, Duane Szafron (1992). A World Championship Caliber Checkers Program. Artificial Intelligence, Vol. 53, Nos. 2-3, ps
  22. Jonathan Schaeffer (1997, 2009). One Jump Ahead. 7. The Case for the Prosecution, pp. 111-114
  23. Bruce Abramson (1990). Expected-Outcome: A General Model of Static Evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 2
  24. Loss function - Use in statistics - Wikipedia
  25. "Using cross-entropy error function instead of sum of squares leads to faster training and improved generalization", from Sargur Srihari, Neural Network Training (pdf)
  26. Michael Buro (1995). Statistical Feature Combination for the Evaluation of Game Positions. JAIR, Vol. 3
  27. Donald H. Mitchell (1984). Using Features to Evaluate Positions in Experts' and Novices' Othello Games. Masters thesis, Department of Psychology, Northwestern University, Evanston, IL
  28. Jens Christensen (1986). Learning Static Evaluation Functions by Linear Regression. in Tom Mitchell, Jaime Carbonell, Ryszard Michalski (1986). Machine Learning: A Guide to Current Research. The Kluwer International Series in Engineering and Computer Science, Vol. 12
  29. Random data points and their linear regression. Created with Sage by Sewaqu, November 5, 2010, Wikimedia Commons
  30. Re: Piece weights with regression analysis (in Russian) by Fabien Letouzey, CCC, May 04, 2015
  31. Michael Buro (1995). Statistical Feature Combination for the Evaluation of Game Positions. JAIR, Vol. 3
  32. LOGISTELLO's Homepage
  33. Arkadiusz Paterek (2004). Modelowanie funkcji oceniającej w grach. University of Warsaw, zipped ps (Polish, Modeling of an evaluation function in games)
  34. Re: Insanity... or Tal style? by Miguel A. Ballicora, CCC, April 02, 2009
  35. Amir Ban (2012). Automatic Learning of Evaluation, with Applications to Computer Chess. Discussion Paper 613, The Hebrew University of Jerusalem - Center for the Study of Rationality, Givat Ram
  36. Re: How Do You Automatically Tune Your Evaluation Tables by Álvaro Begué, CCC, January 08, 2014
  37. The texel evaluation function optimization algorithm by Peter Österlund, CCC, January 31, 2014
  38. Определяем веса шахматных фигур регрессионным анализом / Хабрахабр by WinPooh, April 27, 2015 (Russian)
  39. Piece weights with regression analysis (in Russian) by Vladimir Medvedev, CCC, April 30, 2015
  40. log-linear 1 / (1 + 10^(-s/4)) , s=-10 to 10 from Wolfram|Alpha
  41. SPSA Tuner for Stockfish Chess Engine by Joona Kiiski
  42. Re: Adjusting weights the Deep Blue way by Pradu Kannan, Winboard Forum, September 01, 2008
  43. MATLAB from Wikipedia
  44. Adaptive coordinate descent from Wikipedia
  45. CLOP for Noisy Black-Box Parameter Optimization by Rémi Coulom, CCC, September 01, 2011
  46. CLOP slides by Rémi Coulom, CCC, November 03, 2011
  47. thesis on eval function learning in Arimaa by Jon Dart, CCC, December 04, 2015
  48. Tuning floats by Stephane Nicolet, FishCooking, April 12, 2018
  49. MMTO for evaluation learning by Jon Dart, CCC, January 25, 2015
  50. Broyden–Fletcher–Goldfarb–Shanno algorithm from Wikipedia
  51. high dimensional optimization by Warren D. Smith, FishCooking, December 27, 2019
  52. Arasan 19.2 by Jon Dart, CCC, November 03, 2016 » Arasan's Tuning
  53. Limited-memory BFGS from Wikipedia
  54. Re: CLOP: when to stop? by Álvaro Begué, CCC, November 08, 2016
  55. LM-BFGS from Wikipedia
  56. LM-CMA source code
  57. CMA-ES from Wikipedia
  58. Re: Texel tuning method question by Jon Dart, CCC, July 23, 2017
  59. Re: multi-dimensional piece/square tables by Tony P., CCC, January 28, 2020 » Piece-Square Tables
  60. Evaluation & Tuning in Chess Engines by Andrew Grant, CCC, August 24, 2020
  61. Thomas Anantharaman (1997). Evaluation Tuning for Computer Chess: Linear Discriminant Methods. ICCA Journal, Vol. 20, No. 4
  62. The texel evaluation function optimization algorithm by Peter Österlund, CCC, January 31, 2014
  63. Rémi Coulom (2011). CLOP: Confident Local Optimization for Noisy Black-Box Parameter Tuning. Advances in Computer Games 13
  64. Amir Ban (2012). Automatic Learning of Evaluation, with Applications to Computer Chess. Discussion Paper 613, The Hebrew University of Jerusalem - Center for the Study of Rationality, Givat Ram
  65. Kunihito Hoki, Tomoyuki Kaneko (2014). Large-Scale Optimization for Evaluation Functions with Minimax Search. JAIR Vol. 49, pdf
  66. brtzsnr / txt — Bitbucket by Alexandru Mosoi
  67. Home — TensorFlow
  68. Limited-memory BFGS from Wikipedia
  69. alonamaloh / ruy_tune — Bitbucket by Álvaro Begué
  70. Maximum likelihood estimation from Wikipedia
  71. Jacobian matrix and determinant from Wikipedia
  72. skopt API documentation
  73. Yann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio (2014). Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. arXiv:1406.2572
  74. Learning/Tuning in SlowChess Blitz Classic by Jonathan Kreuzer, CCC, June 15, 2020
  75. Introducing PET by Ed Schröder, CCC, June 27, 2018
  76. Tool for automatic black-box parameter optimization released by Rémi Coulom, CCC, June 20, 2010
  77. CLOP for Noisy Black-Box Parameter Optimization by Rémi Coulom, CCC, September 01, 2011
  78. Re: CLOP: when to stop? by Álvaro Begué, CCC, November 08, 2016
  79. Re: Eval tuning - any open source engines with GA or PBIL? by Jon Dart, CCC, December 06, 2014
  80. Re: The texel evaluation function optimization algorithm by Jon Dart, CCC, March 12, 2014
  81. Eval tuning - any open source engines with GA or PBIL? by Hrvoje Horvatic, CCC, December 04, 2014
  82. ROCK* black-box optimizer for chess by Jon Dart, CCC, August 31, 2017
  83. Fat Fritz 1.1 update and a small gift by Albert Silver. ChessBase News, March 05, 2020
  84. Great input about Bayesian optimization of noisy function methods by Vivien Clauzon, CCC, June 16, 2020
