==Instances==
* [[ACPP]]
* [[Amoeba]]
* [[BBChess (SI)#DifferentialEvolution|Differential Evolution in BBChess]]
* [[Deuterium]]
* [[Falcon#GA|Genetic Algorithm in Falcon]]
* [[Stockfish's Tuning Method]]
* [https://en.wikipedia.org/wiki/Time_complexity Time complexity] issues with increasing number of weights to tune

<span id="ReinformentLearning"></span>
=Reinforcement Learning=
[[Reinforcement Learning|Reinforcement learning]], in particular [[Temporal Difference Learning|temporal difference learning]], has a long history in tuning evaluation weights in game programming, first seen in the late 50s by [[Arthur Samuel]] in his [[Checkers]] player <ref>[[Arthur Samuel]] ('''1959'''). ''[http://domino.watson.ibm.com/tchjr/journalindex.nsf/600cc5649e2871db852568150060213c/39a870213169f45685256bfa00683d74!OpenDocument Some Studies in Machine Learning Using the Game of Checkers]''. IBM Journal July 1959</ref>. In self play against a stable copy of itself, after each move, the weights of the evaluation function were adjusted so that the [[Score|score]] of the [[Root|root position]] after a [[Quiescence Search|quiescence search]] moved closer to the score of the full search. This TD method was generalized and formalized by [[Richard Sutton]] in 1988 <ref>[[Richard Sutton]] ('''1988'''). ''Learning to Predict by the Methods of Temporal Differences''. [https://en.wikipedia.org/wiki/Machine_Learning_%28journal%29 Machine Learning], Vol. 3, No. 1, [http://webdocs.cs.ualberta.ca/~sutton/papers/sutton-88.pdf pdf]</ref>, who introduced the decay parameter '''λ''', where proportions of the score came from the outcome of [https://en.wikipedia.org/wiki/Monte_Carlo_method Monte Carlo] simulated games, tapering between [https://en.wikipedia.org/wiki/Bootstrapping#Artificial_intelligence_and_machine_learning bootstrapping] (λ = 0) and Monte Carlo (λ = 1). [[Temporal Difference Learning#TDLamba|TD-λ]] was famously applied by [[Gerald Tesauro]] in his [[Backgammon]] program [https://en.wikipedia.org/wiki/TD-Gammon TD-Gammon] <ref>[[Gerald Tesauro]] ('''1992'''). ''Temporal Difference Learning of Backgammon Strategy''. [http://www.informatik.uni-trier.de/~ley/db/conf/icml/ml1992.html#Tesauro92 ML 1992]</ref> <ref>[[Gerald Tesauro]] ('''1994'''). ''TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play''. [http://www.informatik.uni-trier.de/~ley/db/journals/neco/neco6.html#Tesauro94 Neural Computation Vol. 6, No. 2]</ref>, and its [[Minimax|minimax]] adaptation [[Temporal Difference Learning#TDLeaf|TD-Leaf]] was successfully used in eval tuning of chess programs <ref>[[Don Beal]], [[Martin C. Smith]] ('''1999'''). ''Learning Piece-Square Values using Temporal Differences.'' [[ICGA Journal#22_4|ICCA Journal, Vol. 22, No. 4]]</ref>, with [[KnightCap]] <ref>[[Jonathan Baxter]], [[Andrew Tridgell]], [[Lex Weaver]] ('''1998'''). ''Experiments in Parameter Learning Using Temporal Differences''. [[ICGA Journal#21_2|ICCA Journal, Vol. 21, No. 2]], [http://cs.anu.edu.au/%7ELex.Weaver/pub_sem/publications/ICCA-98_equiv.pdf pdf]</ref> and [[CilkChess]] <ref>[http://supertech.csail.mit.edu/chess/ The Cilkchess Parallel Chess Program]</ref> as prominent examples.
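Schematically, such a temporal difference update for a linear evaluation function might look as follows. This is a minimal sketch, assuming per-position scores and feature vectors are already at hand; the names and learning parameters are illustrative, not taken from any of the cited programs.
<pre>
# Sketch of a TD(lambda) update for a linear evaluation eval(p) = w . x(p).
# 'scores' are successive root scores from one self-play game (all from the
# same side's point of view); 'features' holds the matching feature vectors.
import numpy as np

def td_lambda_update(weights, scores, features, alpha=0.01, lam=0.7):
    trace = np.zeros_like(weights)            # eligibility trace
    for t in range(len(scores) - 1):
        delta = scores[t + 1] - scores[t]     # temporal difference error
        trace = lam * trace + features[t]     # decayed sum of past gradients
        weights += alpha * delta * trace      # pull eval(p_t) toward eval(p_t+1)
    return weights
</pre>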
  
 
==Instances==
 
<span id="SupervisedLearning"></span>
 
<span id="SupervisedLearning"></span>
 
=Supervised Learning=
 
=Supervised Learning=
==Move Adaptation==
<span id="MoveAdaption"></span>One [[Supervised Learning|supervised learning]] method considers desired moves from a set of positions, likely from grandmaster games, and tries to adjust their evaluation weights so that for instance a one-ply search agrees with the desired move. Already pioneering in reinforcement learning some years before, move adaption was described by [[Arthur Samuel]] in 1967 as used in the second version of his checkers player <ref>[[Arthur Samuel]] ('''1967'''). ''Some Studies in Machine Learning. Using the Game of Checkers. II-Recent Progress''. [http://researcher.watson.ibm.com/researcher/files/us-beygel/samuel-checkers.pdf pdf]</ref>, where a structure of stacked linear evaluation functions was trained by computing a correlation measure based on the number of times the feature rated an alternative move higher than the desired move played by an expert <ref>[[Johannes Fürnkranz]] ('''2000'''). ''Machine Learning in Games: A Survey''. [https://en.wikipedia.org/wiki/Austrian_Research_Institute_for_Artificial_Intelligence Austrian Research Institute for Artificial Intelligence], OEFAI-TR-2000-3, [http://www.ofai.at/cgi-bin/get-tr?download=1&paper=oefai-tr-2000-31.pdf pdf]</ref>. In chess, move adaption was first described by [[Thomas Nitsche]] in 1982 <ref>[[Thomas Nitsche]] ('''1982'''). ''A Learning Chess Program.'' [[Advances in Computer Chess 3]]</ref>, and with some extensions by [[Tony Marsland]] in 1985 <ref>[[Tony Marsland]] ('''1985'''). ''Evaluation-Function Factors''. [[ICGA Journal#8_2|ICCA Journal, Vol. 8, No. 2]], [http://webdocs.cs.ualberta.ca/~tony/OldPapers/evaluation.pdf pdf]</ref>. [[Eval Tuning in Deep Thought]] as mentioned by [[Feng-hsiung Hsu]] et al. in 1990 <ref>[[Feng-hsiung Hsu]], [[Thomas Anantharaman]], [[Murray Campbell]], [[Andreas Nowatzyk]] ('''1990'''). ''[http://www.disi.unige.it/person/DelzannoG/AI2/hsu.html A Grandmaster Chess Machine]''. [[Scientific American]], Vol. 263, No. 4, pp. 44-50. ISSN 0036-8733.</ref>, and later published by [[Andreas Nowatzyk]], is also based on an extended form of move adaption <ref>see ''2.1  Learning from Desired Moves in Chess'' in [[Kunihito Hoki]], [[Tomoyuki Kaneko]] ('''2014'''). ''[https://www.jair.org/papers/paper4217.html Large-Scale Optimization for Evaluation Functions with Minimax Search]''. [https://www.jair.org/vol/vol49.html JAIR Vol. 49]</ref>. [[Jonathan Schaeffer|Jonathan Schaeffer's]] and [[Paul Lu|Paul Lu's]] efforts to make Deep Thought's approach work for [https://en.wikipedia.org/wiki/Chinook_%28draughts_player%29 Chinook] in 1990 failed <ref>[[Jonathan Schaeffer]], [[Joe Culberson]], [[Norman Treloar]], [[Brent Knight]], [[Paul Lu]], [[Duane Szafron]] ('''1992'''). ''A World Championship Caliber Checkers Program''. [https://en.wikipedia.org/wiki/Artificial_Intelligence_%28journal%29 Artificial Intelligence], Vol. 53, Nos. 2-3,[http://webdocs.cs.ualberta.ca/%7Ejonathan/Papers/Papers/chinook.ps ps]</ref> - nothing seemed to produce results that were as good than their hand-tuned effort <ref>[[Jonathan Schaeffer]] ('''1997, 2009'''). ''[http://www.springer.com/computer/ai/book/978-0-387-76575-4 One Jump Ahead]''. 7. The Case for the Prosecution, pp. 111-114</ref>.
+
<span id="MoveAdaption"></span>One [[Supervised Learning|supervised learning]] method considers desired moves from a set of positions, likely from grandmaster games, and tries to adjust their evaluation weights so that for instance a one-ply search agrees with the desired move. Already pioneering in reinforcement learning some years before, move adaptation was described by [[Arthur Samuel]] in 1967 as used in the second version of his checkers player <ref>[[Arthur Samuel]] ('''1967'''). ''Some Studies in Machine Learning. Using the Game of Checkers. II-Recent Progress''. [http://researcher.watson.ibm.com/researcher/files/us-beygel/samuel-checkers.pdf pdf]</ref>, where a structure of stacked linear evaluation functions was trained by computing a correlation measure based on the number of times the feature rated an alternative move higher than the desired move played by an expert <ref>[[Johannes Fürnkranz]] ('''2000'''). ''Machine Learning in Games: A Survey''. [https://en.wikipedia.org/wiki/Austrian_Research_Institute_for_Artificial_Intelligence Austrian Research Institute for Artificial Intelligence], OEFAI-TR-2000-3, [http://www.ofai.at/cgi-bin/get-tr?download=1&paper=oefai-tr-2000-31.pdf pdf]</ref>. In chess, move adaptation was first described by [[Thomas Nitsche]] in 1982 <ref>[[Thomas Nitsche]] ('''1982'''). ''A Learning Chess Program.'' [[Advances in Computer Chess 3]]</ref>, and with some extensions by [[Tony Marsland]] in 1985 <ref>[[Tony Marsland]] ('''1985'''). ''Evaluation-Function Factors''. [[ICGA Journal#8_2|ICCA Journal, Vol. 8, No. 2]], [http://webdocs.cs.ualberta.ca/~tony/OldPapers/evaluation.pdf pdf]</ref>. [[Eval Tuning in Deep Thought]] as mentioned by [[Feng-hsiung Hsu]] et al. in 1990 <ref>[[Feng-hsiung Hsu]], [[Thomas Anantharaman]], [[Murray Campbell]], [[Andreas Nowatzyk]] ('''1990'''). ''[http://www.disi.unige.it/person/DelzannoG/AI2/hsu.html A Grandmaster Chess Machine]''. [[Scientific American]], Vol. 263, No. 4, pp. 44-50. ISSN 0036-8733.</ref>, and later published by [[Andreas Nowatzyk]], is also based on an extended form of move adaptation  <ref>see ''2.1  Learning from Desired Moves in Chess'' in [[Kunihito Hoki]], [[Tomoyuki Kaneko]] ('''2014'''). ''[https://www.jair.org/papers/paper4217.html Large-Scale Optimization for Evaluation Functions with Minimax Search]''. [https://www.jair.org/vol/vol49.html JAIR Vol. 49]</ref>. [[Jonathan Schaeffer|Jonathan Schaeffer's]] and [[Paul Lu|Paul Lu's]] efforts to make Deep Thought's approach work for [https://en.wikipedia.org/wiki/Chinook_%28draughts_player%29 Chinook] in 1990 failed <ref>[[Jonathan Schaeffer]], [[Joe Culberson]], [[Norman Treloar]], [[Brent Knight]], [[Paul Lu]], [[Duane Szafron]] ('''1992'''). ''A World Championship Caliber Checkers Program''. [https://en.wikipedia.org/wiki/Artificial_Intelligence_%28journal%29 Artificial Intelligence], Vol. 53, Nos. 2-3,[http://webdocs.cs.ualberta.ca/%7Ejonathan/Papers/Papers/chinook.ps ps]</ref> - nothing seemed to produce results that were as good than their hand-tuned effort <ref>[[Jonathan Schaeffer]] ('''1997, 2009'''). ''[http://www.springer.com/computer/ai/book/978-0-387-76575-4 One Jump Ahead]''. 7. The Case for the Prosecution, pp. 111-114</ref>.
  
==Value Adaptation==
<span id="ValueAdaption"></span>A second supervised learning approach used to tune evaluation weights is based on [https://en.wikipedia.org/wiki/Regression regression] of the desired value, i.e. using the final outcome from huge sets of positions from quality games, or other information supplied by a supervisor, i.e. in form of annotations from [https://en.wikipedia.org/wiki/Chess_annotation_symbols#Position_evaluation_symbols position evaluation symbols]. Often, value adaption is reinforced by determining an expected outcome by self play <ref>[[Bruce Abramson]] ('''1990'''). ''[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=44404 Expected-Outcome: A General Model of Static Evaluation]''. [[IEEE#TPAMI|IEEE Transactions on Pattern Analysis and Machine Intelligence]], Vol. 12, No. 2</ref>.
+
<span id="ValueAdaption"></span>A second supervised learning approach used to tune evaluation weights is based on [https://en.wikipedia.org/wiki/Regression regression] of the desired value, i.e. using the final outcome from huge sets of positions from quality games, or other information supplied by a supervisor, i.e. in form of annotations from [https://en.wikipedia.org/wiki/Chess_annotation_symbols#Position_evaluation_symbols position evaluation symbols]. Often, value adaptation is reinforced by determining an expected outcome by self play <ref>[[Bruce Abramson]] ('''1990'''). ''[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=44404 Expected-Outcome: A General Model of Static Evaluation]''. [[IEEE#TPAMI|IEEE Transactions on Pattern Analysis and Machine Intelligence]], Vol. 12, No. 2</ref>.
  
 
==Advantages==
 
<span id="Regression"></span>
 
<span id="Regression"></span>
 
=Regression=
 
=Regression=
[[FILE:Linear regression.svg|border|right|thumb|300px|[https://en.wikipedia.org/wiki/Linear_regression Linear Regression] <ref>Random data points and their [https://en.wikipedia.org/wiki/Linear_regression linear regression]. [https://commons.wikimedia.org/wiki/File:Linear_regression.svg Created] with [https://en.wikipedia.org/wiki/Sage_%28mathematics_software%29 Sage] by Sewaqu, November 5, 2010, [https://en.wikipedia.org/wiki/Wikimedia_Commons Wikimedia Commons]</ref> ]]
[https://en.wikipedia.org/wiki/Regression_analysis Regression analysis] is a [https://en.wikipedia.org/wiki/Statistics statistical process], with substantial overlap with machine learning, to [https://en.wikipedia.org/wiki/Prediction predict] the value of a [https://en.wikipedia.org/wiki/Dependent_and_independent_variables Y variable] (output), given known value pairs of the X and Y variables. While [https://en.wikipedia.org/wiki/Linear_regression linear regression] deals with continuous outputs, [https://en.wikipedia.org/wiki/Logistic_regression logistic regression] covers binary or discrete output, such as win/loss, or win/draw/loss. Parameter estimation in regression analysis can be formulated as the [https://en.wikipedia.org/wiki/Mathematical_optimization minimization] of a [https://en.wikipedia.org/wiki/Loss_function cost or loss function] over a [https://en.wikipedia.org/wiki/Training_set training set] <ref>[https://en.wikipedia.org/wiki/Loss_function#Use_in_statistics Loss function - Use in statistics - Wikipedia]</ref>, such as [https://en.wikipedia.org/wiki/Mean_squared_error mean squared error] or [https://en.wikipedia.org/wiki/Cross_entropy#Cross-entropy_error_function_and_logistic_regression cross-entropy error function] for [https://en.wikipedia.org/wiki/Binary_classification binary classification] <ref>"Using [https://en.wikipedia.org/wiki/Cross_entropy#Cross-entropy_error_function_and_logistic_regression cross-entropy error function] instead of [https://en.wikipedia.org/wiki/Mean_squared_error sum of squares] leads to faster training and improved generalization", from [https://en.wikipedia.org/wiki/Sargur_Srihari Sargur Srihari], [http://www.cedar.buffalo.edu/~srihari/CSE574/Chap5/Chap5.2-Training.pdf Neural Network Training] (pdf)</ref>. The minimization is implemented by [[Iteration|iterative]] optimization [[Algorithms|algorithms]] or [https://en.wikipedia.org/wiki/Metaheuristic metaheuristics] such as [https://en.wikipedia.org/wiki/Iterated_local_search iterated local search], the [https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm Gauss–Newton algorithm], or the [https://en.wikipedia.org/wiki/Conjugate_gradient_method conjugate gradient method].
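As a generic illustration of such a minimization, a sketch of plain gradient descent with numerically approximated gradients; any of the cited algorithms could take its place, and all names here are illustrative.
<pre>
# Generic iterative minimizer (sketch): gradient descent on a loss function
# over the training set, using central finite differences for the gradient.
import numpy as np

def minimize(loss, w0, lr=0.01, steps=1000, eps=1e-6):
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i in range(len(w)):               # central-difference gradient
            e = np.zeros_like(w)
            e[i] = eps
            grad[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
        w -= lr * grad                        # step against the gradient
    return w
</pre>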
 
<span id="LinearRegression"></span>
 
<span id="LinearRegression"></span>
 
==Linear Regression==
 
==Linear Regression==
The supervised problem of regression applied to [[Automated Tuning#MoveAdaption|move adaptation]] was used by [[Thomas Nitsche]] in 1982, minimizing the [https://en.wikipedia.org/wiki/Mean_squared_error mean squared error] of a cost function considering the program’s and a grandmaster’s choice of moves, as mentioned above, and was extended by [[Tony Marsland]] in 1985 and later by the [[Deep Thought]] team. Regression used to [[Automated Tuning#ValueAdaption|adapt desired values]] was described by [[Donald H. Mitchell]] in his 1984 master's thesis on evaluation features in [[Othello]], cited by [[Michael Buro]] <ref>[[Michael Buro]] ('''1995'''). ''[https://www.jair.org/index.php/jair/article/view/10146 Statistical Feature Combination for the Evaluation of Game Positions]''. [https://en.wikipedia.org/wiki/Journal_of_Artificial_Intelligence_Research JAIR], Vol. 3</ref> <ref>[[Donald H. Mitchell]] ('''1984'''). ''Using Features to Evaluate Positions in Experts' and Novices' Othello Games''. Master's thesis, Department of Psychology, [[Northwestern University]], Evanston, IL</ref>. [[Jens Christensen]] applied [https://en.wikipedia.org/wiki/Linear_regression linear regression] to chess in 1986 to learn [[Point Value|point values]] in the domain of [[Temporal Difference Learning|temporal difference learning]] <ref>[[Jens Christensen]] ('''1986'''). ''[http://link.springer.com/chapter/10.1007/978-1-4613-2279-5_9?no-access=true Learning Static Evaluation Functions by Linear Regression]''. in [[Tom Mitchell]], [[Jaime Carbonell]], [[Ryszard Michalski]] ('''1986'''). ''[http://link.springer.com/book/10.1007/978-1-4613-2279-5 Machine Learning: A Guide to Current Research]''. The Kluwer International Series in Engineering and Computer Science, Vol. 12</ref>.
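In the spirit of Christensen's experiment, a minimal least-squares sketch for fitting point values; the feature matrix and target values below are made up for illustration and not taken from the paper.
<pre>
# Fitting point values by ordinary least squares (sketch). Rows of X count
# material differences (P, N, B, R, Q); y holds made-up target evaluations.
import numpy as np

X = np.array([[ 1,  0,  0,  0,  0],   # up a pawn
              [ 0,  1, -1,  0,  0],   # knight versus bishop
              [-2,  0,  0,  1,  0],   # rook for two pawns
              [ 0,  0, -1,  0,  1],   # queen versus bishop
              [ 3,  0,  0,  0, -1]],  # three pawns versus queen
             dtype=float)
y = np.array([1.0, -0.1, 3.1, 5.9, -5.8])

values, *_ = np.linalg.lstsq(X, y, rcond=None)   # minimize ||X w - y||^2
print(values)    # fitted point values for P, N, B, R, Q
</pre>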
 
 
<span id="LogisticRegression"></span>
 
<span id="LogisticRegression"></span>
 
==Logistic Regression==
 
==Logistic Regression==
[[FILE:SigmoidTexelTune.gif|border|right|thumb|300px|link=http://wolfr.am/1al3d5B|[https://en.wikipedia.org/wiki/Logistic_function Logistic function] <ref>[http://wolfr.am/1al3d5B log-linear 1 / (1 + 10^(-s/4)) , s=-10 to 10] from [https://en.wikipedia.org/wiki/Wolfram_Alpha Wolfram|Alpha]</ref> ]]
Since the relationship between [[Pawn Advantage, Win Percentage, and Elo|win percentage and pawn advantage]] is assumed to follow a [https://en.wikipedia.org/wiki/Logistic_model logistic model], one may treat static evaluation as a [[Neural Networks#Perceptron|single-layer perceptron]] or a single [https://en.wikipedia.org/wiki/Artificial_neuron neuron] [[Neural Networks|ANN]] with the common [https://en.wikipedia.org/wiki/Logistic_function logistic] [https://en.wikipedia.org/wiki/Activation_function activation function], performing the perceptron algorithm to train it <ref>[http://www.talkchess.com/forum/viewtopic.php?t=56168&start=36 Re: Piece weights with regression analysis (in Russian)] by [[Fabien Letouzey]], [[CCC]], May 04, 2015</ref>. [https://en.wikipedia.org/wiki/Logistic_regression Logistic regression] in evaluation tuning was first elaborated by [[Michael Buro]] in 1995 <ref>[[Michael Buro]] ('''1995'''). ''[https://www.jair.org/index.php/jair/article/view/10146 Statistical Feature Combination for the Evaluation of Game Positions]''. [https://en.wikipedia.org/wiki/Journal_of_Artificial_Intelligence_Research JAIR], Vol. 3</ref>, and proved successful in the game of [[Othello]] in comparison with [[Mathematician#RFisher|Fisher's]] [https://en.wikipedia.org/wiki/Kernel_Fisher_discriminant_analysis linear discriminant] and quadratic [https://en.wikipedia.org/wiki/Discriminant discriminant] function for [https://en.wikipedia.org/wiki/Normal_distribution normally distributed] features, and served as the eponym of his Othello program ''Logistello'' <ref>[https://skatgame.net/mburo/log.html LOGISTELLO's Homepage]</ref>. In computer chess, logistic regression was applied by [[Arkadiusz Paterek]] with [[Gosu]] <ref>[[Arkadiusz Paterek]] ('''2004'''). ''Modelowanie funkcji oceniającej w grach''. [[University of Warsaw]], [https://www.mimuw.edu.pl/~paterek/mfog.ps.gz zipped ps] (Polish, Modeling of an evaluation function in games)</ref>, later proposed by [[Miguel A. Ballicora]] in 2009 as used by [[Gaviota]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=27266&postdays=0&postorder=asc&topic_view=&start=11 Re: Insanity... or Tal style?] by [[Miguel A. Ballicora]], [[CCC]], April 02, 2009</ref>, independently described by [[Amir Ban]] in 2012 for [[Junior|Junior's]] evaluation learning <ref>[[Amir Ban]] ('''2012'''). ''[http://www.ratio.huji.ac.il/node/2362 Automatic Learning of Evaluation, with Applications to Computer Chess]''. Discussion Paper 613, [https://en.wikipedia.org/wiki/Hebrew_University_of_Jerusalem The Hebrew University of Jerusalem] - Center for the Study of Rationality, [https://en.wikipedia.org/wiki/Givat_Ram Givat Ram]</ref>, and explicitly mentioned by [[Álvaro Begué]] in a January 2014 [[CCC]] discussion <ref>[http://www.talkchess.com/forum/viewtopic.php?t=50823&start=10 Re: How Do You Automatically Tune Your Evaluation Tables] by [[Álvaro Begué]], [[CCC]], January 08, 2014</ref>, when [[Peter Österlund]] explained [[Texel's Tuning Method]] <ref>[http://www.talkchess.com/forum/viewtopic.php?topic_view=threads&p=555522&t=50823 The texel evaluation function optimization algorithm] by [[Peter Österlund]], [[CCC]], January 31, 2014</ref>, which subsequently popularized logistic regression tuning in computer chess.
[[Vladimir Medvedev|Vladimir Medvedev's]] [[Point Value by Regression Analysis]] <ref>[http://habrahabr.ru/post/254753/ Определяем веса шахматных фигур регрессионным анализом / Хабрахабр] by [[Vladimir Medvedev|WinPooh]], April 27, 2015 (Russian)</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=56168 Piece weights with regression analysis (in Russian)] by [[Vladimir Medvedev]], [[CCC]], April 30, 2015</ref> experiments showed why the [https://en.wikipedia.org/wiki/Logistic_function logistic function] is appropriate, and further used [https://en.wikipedia.org/wiki/Cross_entropy cross-entropy] and [https://en.wikipedia.org/wiki/Regularization_%28mathematics%29 regularization].
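A condensed sketch of this tuning scheme as popularized by Texel's tuning method: game results are compared with the logistic mapping of the static evaluation, and the mean squared error is reduced by simple local search over the weights. The scaling constant K, the evaluate callback, and the training data are assumptions for illustration.
<pre>
# Logistic-regression (Texel-style) evaluation tuning, sketched.
import math

def sigmoid(score, K=1.2):                    # score in centipawns
    return 1.0 / (1.0 + 10.0 ** (-K * score / 400.0))

def evaluation_error(weights, positions, results, evaluate):
    # results are game outcomes in {0, 0.5, 1} from the side to move's view
    return sum((r - sigmoid(evaluate(p, weights))) ** 2
               for p, r in zip(positions, results)) / len(positions)

def tune(weights, positions, results, evaluate, step=1):
    best = evaluation_error(weights, positions, results, evaluate)
    improved = True
    while improved:
        improved = False
        for i in range(len(weights)):
            for delta in (step, -step):
                weights[i] += delta           # try a small change
                e = evaluation_error(weights, positions, results, evaluate)
                if e < best:
                    best, improved = e, True
                    break                     # keep the improving change
                weights[i] -= delta           # revert and try the other sign
    return weights
</pre>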
 
  
 
==Instances==
* [[Arasan#Tuning|Arasan's Tuning]]
* [[Ethereal]]
* [[Eval Tuning in Deep Thought]]
* [[FabChess]]
* [[Gosu]]
* [[Koivisto]]
* [[Minimax Tree Optimization]] (MMTO or the Bonanza-Method in [[Shogi]])
* [[Point Value by Regression Analysis]]
==1970 ...==
* [[Arnold K. Griffith]] ('''1974'''). ''[http://www.sciencedirect.com/science/article/pii/0004370274900277 A Comparison and Evaluation of Three Machine Learning Procedures as Applied to the Game of Checkers]''. [https://en.wikipedia.org/wiki/Artificial_Intelligence_%28journal%29 Artificial Intelligence], Vol. 5, No. 2
* [[Mathematician#MSBazaraa|Mokhtar S. Bazaraa]], [[Mathematician#MCShetty|C. M. Shetty]] ('''1976'''). ''[https://link.springer.com/book/10.1007%2F978-3-642-48294-6 Foundations of Optimization]''. Lecture Notes in Economics and Mathematical Systems, Vol. 122, [https://en.wikipedia.org/wiki/Springer_Science%2BBusiness_Media Springer]
* <span id="NonlinearProgramming1st"></span>[[Mathematician#MSBazaraa|Mokhtar S. Bazaraa]], [[Mathematician#MCShetty|C. M. Shetty]] ('''1979'''). ''Nonlinear Programming: Theory and Algorithms''. [https://en.wikipedia.org/wiki/Wiley_(publisher) Wiley] » [[#NonlinearProgramming2nd|2nd]], [[#NonlinearProgramming3rd|3rd edition]]
==1980 ...==
* [[Thomas Nitsche]] ('''1982'''). ''A Learning Chess Program.'' [[Advances in Computer Chess 3]]
* [[Donald H. Mitchell]] ('''1984'''). ''Using Features to Evaluate Positions in Experts' and Novices' Othello Games''. Master's thesis, Department of Psychology, [[Northwestern University]], Evanston, IL
==1985 ...==
* [[Tony Marsland]] ('''1985'''). ''Evaluation-Function Factors''. [[ICGA Journal#8_2|ICCA Journal, Vol. 8, No. 2]], [http://webdocs.cs.ualberta.ca/~tony/OldPapers/evaluation.pdf pdf]
* [[Paul E. Utgoff]], [http://dblp.uni-trier.de/pers/hd/c/Clouse:Jeffery_A= Jeffery A. Clouse] ('''1991'''). ''[http://scholarworks.umass.edu/cs_faculty_pubs/193/ Two Kinds of Training Information for Evaluation Function Learning]''. [https://en.wikipedia.org/wiki/University_of_Massachusetts_Amherst University of Massachusetts, Amherst], Proceedings of the AAAI 1991
* [[Gerald Tesauro]] ('''1992'''). ''Temporal Difference Learning of Backgammon Strategy''. [http://www.informatik.uni-trier.de/~ley/db/conf/icml/ml1992.html#Tesauro92 ML 1992]
* [[Ingo Althöfer]] ('''1993'''). ''On Telescoping Linear Evaluation Functions.'' [[ICGA Journal#16_2|ICCA Journal, Vol. 16, No. 2]]
* <span id="NonlinearProgramming2nd"></span>[[Mathematician#MSBazaraa|Mokhtar S. Bazaraa]], [[Mathematician#HDSherali|Hanif D. Sherali]], [[Mathematician#MCShetty|C. M. Shetty]] ('''1993'''). ''Nonlinear Programming: Theory and Algorithms''. 2nd edition, [https://en.wikipedia.org/wiki/Wiley_(publisher) Wiley] » [[#NonlinearProgramming1st|1st]], [[#NonlinearProgramming3rd|3rd edition]]
* [[Peter Mysliwietz]] ('''1994'''). ''Konstruktion und Optimierung von Bewertungsfunktionen beim Schach.'' Ph.D. thesis (German, Construction and Optimization of Evaluation Functions in Chess)
==1995 ...==
* [[Michael Buro]] ('''1995'''). ''[https://www.jair.org/index.php/jair/article/view/10146 Statistical Feature Combination for the Evaluation of Game Positions]''. [https://en.wikipedia.org/wiki/Journal_of_Artificial_Intelligence_Research JAIR], Vol. 3
* [[Chris McConnell]] ('''1995'''). ''Tuning Evaluation Functions for Search''. [http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=9B2A0CCA8B1AFB594A879799D974111A?doi=10.1.1.53.9742&rep=rep1&type=pdf pdf]
* [[Chris McConnell]] ('''1995'''). ''Tuning Evaluation Functions for Search'' (Talk), [http://www.cs.cmu.edu/afs/cs.cmu.edu/user/ccm/www/talks/tune.ps ps]
* [[Johannes Fürnkranz]] ('''1996'''). ''Machine Learning in Computer Chess: The Next Generation.'' [[ICGA Journal#19_3|ICCA Journal, Vol. 19, No. 3]], [http://www.ofai.at/cgi-bin/get-tr?download=1&paper=oefai-tr-96-11.ps.gz zipped ps]
* [[Johannes Fürnkranz]], [[Miroslav Kubat]] (eds.) ('''2001'''). ''[https://www.novapublishers.com/catalog/product_info.php?products_id=720 Machines that Learn to Play Games]''. Advances in Computation: Theory and Practice, Vol. 8, [https://en.wikipedia.org/wiki/Nova_Publishers NOVA Science Publishers]
: [[Gerald Tesauro]] ('''2001'''). ''[http://dl.acm.org/citation.cfm?id=644397 Comparison Training of Chess Evaluation Functions]''. » [[SCP]], [[Deep Blue]]
* [[Graham Kendall]], [[Glenn Whitwell]] ('''2001'''). ''[https://ieeexplore.ieee.org/document/934299 An Evolutionary Approach for the Tuning of a Chess Evaluation Function using Population Dynamics]''. Proceedings of the 2001 Congress on Evolutionary Computation, Vol. 2, [http://www.cs.nott.ac.uk/~pszgxk/papers/cec2001chess.pdf pdf]
* [[Yngvi Björnsson]], [[Tony Marsland]] ('''2001'''). ''Learning Search Control in Adversary Games''. [[Advances in Computer Games 9]], pp. 157-174. [http://www.ru.is/faculty/yngvi/pdf/BjornssonM01b.pdf pdf]
* [[Michael Buro]] ('''2002'''). ''Improving Mini-max Search by Supervised Learning.'' [https://en.wikipedia.org/wiki/Artificial_Intelligence_%28journal%29 Artificial Intelligence], Vol. 134, No. 1, [http://www.cs.ualberta.ca/~mburo/ps/logaij.pdf pdf]
* [[Petr Aksenov]] ('''2004'''). ''[http://joypub.joensuu.fi/publications/masters_thesis/aksenov_genetic/index_en.html Genetic algorithms for optimising chess position scoring]''. Master's thesis, [ftp://cs.joensuu.fi/pub/Theses/2004_MSc_Aksenov_Petr.pdf pdf]
* [[Mathieu Autonès]], [[Aryel Beck]], [[Phillippe Camacho]], [[Nicolas Lassabe]], [[Hervé Luga]], [[François Scharffe]] ('''2004'''). ''[http://link.springer.com/chapter/10.1007/978-3-540-24650-3_1 Evaluation of Chess Position by Modular Neural network Generated by Genetic Algorithm]''. [http://www.informatik.uni-trier.de/~ley/db/conf/eurogp/eurogp2004.html#AutonesBCLLS04 EuroGP 2004]
* [[Henk Mannen]], [[Marco Wiering]] ('''2004'''). ''Learning to play chess using TD(λ)-learning with database games''. [http://students.uu.nl/en/hum/cognitive-artificial-intelligence Cognitive Artificial Intelligence], [https://en.wikipedia.org/wiki/Utrecht_University Utrecht University], Benelearn’04
* [[Arkadiusz Paterek]] ('''2004'''). ''Modelowanie funkcji oceniającej w szachach''. Master's thesis, [[University of Warsaw]] (Polish, Modeling of an evaluation function in chess)
* [[Arkadiusz Paterek]] ('''2004'''). ''Modelowanie funkcji oceniającej w grach''. [[University of Warsaw]], [https://www.mimuw.edu.pl/~paterek/mfog.ps.gz zipped ps] (Polish, Modeling of an evaluation function in games)
 
==2005 ...==
* [[Dave Gomboc]], [[Michael Buro]], [[Tony Marsland]] ('''2005'''). ''Tuning Evaluation Functions by Maximizing Concordance''. [https://en.wikipedia.org/wiki/Theoretical_Computer_Science_%28journal%29 Theoretical Computer Science], Vol. 349, No. 2, [http://www.cs.ualberta.ca/%7Emburo/ps/tcs-learn.pdf pdf]
* [[Levente Kocsis]], [[Csaba Szepesvári]], [[Mark Winands]] ('''2005'''). ''[http://link.springer.com/chapter/10.1007/11922155_4 RSPSA: Enhanced Parameter Optimization in Games]''. [[Advances in Computer Games 11]], [http://www.sztaki.hu/~szcsaba/papers/rspsa_acg.pdf pdf]
'''2006'''
* <span id="NonlinearProgramming3rd"></span>[[Mathematician#MSBazaraa|Mokhtar S. Bazaraa]], [[Mathematician#HDSherali|Hanif D. Sherali]], [[Mathematician#MCShetty|C. M. Shetty]] ('''2006'''). ''[https://www.wiley.com/en-us/Nonlinear+Programming%3A+Theory+and+Algorithms%2C+3rd+Edition-p-9780471486008 Nonlinear Programming: Theory and Algorithms]''. 3rd edition, [https://en.wikipedia.org/wiki/Wiley_(publisher) Wiley] <ref>[http://www.open-aurec.com/wbforum/viewtopic.php?f=4&t=49450&start=3 Re: Adjusting weights the Deep Blue way] by [[Pradu Kannan]], [[Computer Chess Forums|Winboard Forum]], September 01, 2008</ref> » [[#NonlinearProgramming1st|1st]], [[#NonlinearProgramming2nd|2nd edition]]
* [[Levente Kocsis]], [[Csaba Szepesvári]] ('''2006'''). ''[http://link.springer.com/article/10.1007/s10994-006-6888-8 Universal Parameter Optimisation in Games Based on SPSA]''. [https://en.wikipedia.org/wiki/Machine_Learning_%28journal%29 Machine Learning], Special Issue on Machine Learning and Games, Vol. 63, No. 3
* [[Hallam Nasreddine]], [[Hendra Suhanto Poh]], [[Graham Kendall]] ('''2006'''). ''Using an Evolutionary Algorithm for the Tuning of a Chess Evaluation Function Based on a Dynamic Boundary Strategy''. Proceedings of the 2006 [[IEEE]] Conference on Cybernetics and Intelligent Systems, [http://www.graham-kendall.com/papers/npk2006.pdf pdf]
* [[Makoto Miwa]], [[Daisaku Yokoyama]], [[Takashi Chikayama]] ('''2006'''). ''[http://www.springerlink.com/content/6180u7h3t312468u/ Automatic Construction of Static Evaluation Functions for Computer Game Players]''. ALT ’06
* [[Borko Bošković]], [[Sašo Greiner]], [[Janez Brest]], [[Viljem Žumer]] ('''2006'''). ''[https://ieeexplore.ieee.org/document/1688532 A Differential Evolution for the Tuning of a Chess Evaluation Function]''. [[IEEE|IEEE Congress on Evolutionary Computation]]
* [[Kunihito Hoki]] ('''2006'''). ''Optimal control of minimax search result to learn positional evaluation''. [[Conferences#GPW|11th Game Programming Workshop]] (Japanese)
* [[Frank Hutter]], [https://scholar.google.co.uk/citations?user=LqUxHuQAAAAJ&hl=en Youssef Hamadi], [[Mathematician#HHHoos|Holger H. Hoos]], [[Mathematician#LeytonBrown|Kevin Leyton-Brown]] ('''2006'''). ''[https://link.springer.com/chapter/10.1007/11889205_17 Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms]''. [https://dblp.org/db/conf/cp/cp2006.html CP 2006], [https://www.cs.ubc.ca/~hutter/papers/aaaiws06_lfs06-autoparam-prelim.pdf pdf]
'''2007'''
 
* [[Shogo Takeuchi]], [[Tomoyuki Kaneko]], [[Kazunori Yamaguchi]], [[Satoru Kawai]] ('''2007'''). ''Visualization and Adjustment of Evaluation Functions Based on Evaluation Values and Win Probability''. [http://www.informatik.uni-trier.de/~ley/db/conf/aaai/aaai2007.html AAAI 2007], [https://www.aaai.org/Papers/AAAI/2007/AAAI07-136.pdf pdf]
* [[Makoto Miwa]], [[Daisaku Yokoyama]], [[Takashi Chikayama]] ('''2007'''). ''Automatic Generation of Evaluation Features for Computer Game Players''. [http://cswww.essex.ac.uk/cig/2007/papers/2037.pdf pdf]
* [[Johannes Fürnkranz]] ('''2007'''). ''Recent advances in machine learning and game playing''. [http://www.oegai.at/journal.shtml ÖGAI Journal], Vol. 26, No. 2, Computer Game Playing, [https://www.ke.tu-darmstadt.de/~juffi/publications/ogai-07.pdf pdf]
* [https://dblp.org/pid/70/382.html Mohammed Shahid Abdulla], [[Shalabh Bhatnagar]] ('''2007'''). ''[https://link.springer.com/article/10.1007/s10626-006-0003-y Reinforcement Learning Based Algorithms for Average Cost Markov Decision Processes]''. [https://www.springer.com/journal/10626 Discrete Event Dynamic Systems], Vol. 17, No. 1 » [[SPSA]]
 
'''2008'''
* [[Eli David|Omid David]], [[Moshe Koppel]], [[Nathan S. Netanyahu]] ('''2008'''). ''Genetic Algorithms for Mentor-Assisted Evaluation Function Optimization''. [http://www.sigevo.org/gecco-2008/ GECCO '08], [https://arxiv.org/abs/1711.06839 arXiv:1711.06839]
* [[Borko Bošković]], [[Sašo Greiner]], [[Janez Brest]], [[Aleš Zamuda]], [[Viljem Žumer]] ('''2008'''). ''[https://link.springer.com/chapter/10.1007%2F978-3-540-68830-3_12 An Adaptive Differential Evolution Algorithm with Opposition-Based Mechanisms, Applied to the Tuning of a Chess Program]''. [https://link.springer.com/book/10.1007/978-3-540-68830-3 Advances in Differential Evolution], [https://en.wikipedia.org/wiki/Springer_Science%2BBusiness_Media Springer]
'''2009'''
* [[Joel Veness]], [[David Silver]], [[William Uther]], [[Alan Blair]] ('''2009'''). ''[http://papers.nips.cc/paper/3722-bootstrapping-from-game-tree-search Bootstrapping from Game Tree Search]''. [http://nips.cc/ Neural Information Processing Systems (NIPS), 2009], [http://jveness.info/publications/nips2009%20-%20bootstrapping%20from%20game%20tree%20search.pdf pdf] » [[Meep]] <ref>[http://www.talkchess.com/forum/viewtopic.php?start=0&t=31667 A paper about parameter tuning] by [[Rémi Coulom]], [[CCC]], January 12, 2010</ref>
* [[Eli David|Omid David]], [[Jaap van den Herik]], [[Moshe Koppel]], [[Nathan S. Netanyahu]] ('''2009'''). ''Simulating Human Grandmasters: Evolution and Coevolution of Evaluation Functions''. [http://www.sigevo.org/gecco-2009/ GECCO '09], [https://arxiv.org/abs/1711.06840 arXiv:1711.06840]
* [[Eli David|Omid David]] ('''2009'''). ''Genetic Algorithms Based Learning for Evolving Intelligent Organisms''. Ph.D. thesis
* [[Broch Davison]] ('''2009'''). ''[http://www.enm.bris.ac.uk/teaching/projects/2008_09/bd5053/index.html Playing Chess with Matlab]''. M.Sc. thesis supervised by [http://www.bris.ac.uk/engineering/people/nello-cristianini/index.html Nello Cristianini], [http://www.enm.bris.ac.uk/teaching/projects/2008_09/bd5053/FinalReport.pdf pdf] <ref>[https://en.wikipedia.org/wiki/MATLAB MATLAB from Wikipedia]</ref>
* [[Mark Levene]], [[Trevor Fenner]] ('''2009'''). ''A Methodology for Learning Players' Styles from Game Records''. [http://arxiv.org/abs/0904.2595v1 arXiv:0904.2595v1]
* [[Wei-Lun Kao]] ('''2009'''). ''The Automatically Tuning System of Evaluation Function for Computer Chinese Chess''. Master's thesis, [[National Chiao Tung University]], [https://ir.nctu.edu.tw/bitstream/11536/43333/1/553001.pdf pdf] (Chinese)
 
==2010 ...==  
 
==2010 ...==  
* [[Frank Hutter]], [[Mathematician#HHHoos|Holger H. Hoos]], [[Mathematician#LeytonBrown|Kevin Leyton-Brown]], [[Mathematician#KPMurphy|Kevin P. Murphy]] ('''2010'''). ''[https://dl.acm.org/citation.cfm?id=1893694 Time-Bounded Sequential Parameter Optimization]''. [https://dblp.org/db/conf/lion/lion2010.html LION 2010], [https://ml.informatik.uni-freiburg.de/papers/10-LION-TB-SPO.pdf pdf]
* [[Amine Bourki]], [[Matthieu Coulm]], [[Philippe Rolet]], [[Olivier Teytaud]], [[Paul Vayssière]] ('''2010'''). ''[http://hal.inria.fr/inria-00467796/en/ Parameter Tuning by Simple Regret Algorithms and Multiple Simultaneous Hypothesis Testing]''. [http://hal.inria.fr/docs/00/46/77/96/PDF/tosubmit.pdf pdf]
* [[Eli David|Omid David]], [[Moshe Koppel]], [[Nathan S. Netanyahu]] ('''2010'''). ''Genetic Algorithms for Automatic Search Tuning''. [[ICGA Journal#33_2|ICGA Journal, Vol. 33, No. 2]]
* [[Borko Bošković]] ('''2010'''). ''[http://labraj.uni-mb.si/en/PhD_Thesis_Defence_%28Borko_Bo%C5%A1kovi%C4%87%29 Differential Evolution for the Tuning of a Chess Evaluation Function]''. Ph.D. thesis, [[University of Maribor]]
 
'''2011'''
* [[Frank Hutter]], [[Mathematician#HHHoos|Holger H. Hoos]], [[Mathematician#LeytonBrown|Kevin Leyton-Brown]] ('''2011'''). ''[https://link.springer.com/chapter/10.1007/978-3-642-25566-3_40 Sequential Model-Based Optimization for General Algorithm Configuration]''. [https://dblp.org/db/conf/lion/lion2011.html LION 2011], [https://ml.informatik.uni-freiburg.de/papers/11-LION5-SMAC.pdf pdf]
* [[Eli David|Omid David]], [[Moshe Koppel]], [[Nathan S. Netanyahu]] ('''2011'''). ''Expert-Driven Genetic Algorithms for Simulating Evaluation Functions''. Genetic Programming and Evolvable Machines, Vol. 12, No. 1, [https://arxiv.org/abs/1711.06841 arXiv:1711.06841]
* [[Borko Bošković]], [[Janez Brest]] ('''2011'''). ''[http://www.informatica.si/index.php/informatica/article/view/353 Tuning Chess Evaluation Function Parameters using Differential Evolution]''. Informatica, Vol. 35, No. 2
* [[Borko Bošković]], [[Janez Brest]], [[Aleš Zamuda]], [[Sašo Greiner]], [[Viljem Žumer]] ('''2011'''). ''[https://dl.acm.org/citation.cfm?id=1966601 History mechanism supported differential evolution for chess evaluation function tuning]''. [http://www.springer.com/engineering/computational+intelligence+and+complexity/journal/500 Soft Computing], Vol. 15, No. 4
 
* [[Eduardo Vázquez-Fernández]], [[Carlos Artemio Coello Coello]], [[Feliú Davino Sagols Troncoso]] ('''2011'''). ''An Evolutionary Algorithm for Tuning a Chess Evaluation Function''. [http://www.informatik.uni-trier.de/~ley/db/conf/cec/cec2011.html#Vazquez-FernandezCT11 CEC 2011], [http://delta.cs.cinvestav.mx/~ccoello/conferences/eduardo-cec2011-final.pdf.gz pdf]
* [[Eduardo Vázquez-Fernández]], [[Carlos Artemio Coello Coello]], [[Feliú Davino Sagols Troncoso]] ('''2011'''). ''[http://dl.acm.org/citation.cfm?id=2001882 An Adaptive Evolutionary Algorithm Based on Typical Chess Problems for Tuning a Chess Evaluation Function]''. [http://www.informatik.uni-trier.de/~ley/db/conf/gecco/gecco2011c.html#Vazquez-FernandezCT11 GECCO 2011], [http://delta.cs.cinvestav.mx/~ccoello/conferences/vazquez-gecco2011.pdf.gz pdf]
* [[Ilya Loshchilov]], [[Marc Schoenauer]], [[Michèle Sebag]] ('''2011'''). ''Adaptive coordinate descent.'' [https://dblp.org/db/conf/gecco/gecco2011.html GECCO 2011], [http://www.loshchilov.com/publications/GECCO2011_AdaptiveCoordinateDescent.pdf pdf] <ref>[https://en.wikipedia.org/wiki/Adaptive_coordinate_descent Adaptive coordinate descent from Wikipedia]</ref>
* [[Rémi Coulom]] ('''2011'''). ''[http://remi.coulom.free.fr/CLOP/ CLOP: Confident Local Optimization for Noisy Black-Box Parameter Tuning]''. [[Advances in Computer Games 13]] <ref>[http://www.talkchess.com/forum/viewtopic.php?p=421995 CLOP for Noisy Black-Box Parameter Optimization] by [[Rémi Coulom]], [[CCC]], September 01, 2011</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=40987 CLOP slides] by [[Rémi Coulom]], [[CCC]], November 03, 2011</ref>
* [[Kunihito Hoki]], [[Tomoyuki Kaneko]] ('''2011'''). ''[http://link.springer.com/chapter/10.1007%2F978-3-642-31866-5_16 The Global Landscape of Objective Functions for the Optimization of Shogi Piece Values with a Game-Tree Search]''. [[Advances in Computer Games 13]] » [[Shogi]]
* [[Mathematician#JCDuchi|John Duchi]], [[Mathematician#EHazan|Elad Hazan]], [[Mathematician#YSinger|Yoram Singer]] ('''2011'''). ''[https://dl.acm.org/doi/10.5555/1953048.2021068 Adaptive Subgradient Methods for Online Learning and Stochastic Optimization]''. [https://en.wikipedia.org/wiki/Journal_of_Machine_Learning_Research Journal of Machine Learning Research], Vol. 12, [https://jmlr.org/papers/volume12/duchi11a/duchi11a.pdf pdf] » [https://en.wikipedia.org/wiki/Stochastic_gradient_descent#AdaGrad AdaGrad]
 
'''2012'''
* [[Amir Ban]] ('''2012'''). ''[http://www.ratio.huji.ac.il/node/2362 Automatic Learning of Evaluation, with Applications to Computer Chess]''. Discussion Paper 613, [https://en.wikipedia.org/wiki/Hebrew_University_of_Jerusalem The Hebrew University of Jerusalem] - Center for the Study of Rationality, [https://en.wikipedia.org/wiki/Givat_Ram Givat Ram]
* [[Thitipong Kanjanapa]], [[Kanako Komiya]], [[Yoshiyuki Kotani]] ('''2012'''). ''Design and Implementation of Bonanza Method for the Evaluation in the Game of Arimaa''. [http://www.ipsj.or.jp/english/index.html IPSJ SIG Technical Report], Vol. 2012-GI-27, No. 4, [http://arimaa.com/arimaa/papers/KanjanapaThitipong/IPSJ-GI12027004.pdf pdf] » [[Arimaa]]
* [[Alan J. Lockett]] ('''2012'''). ''General-Purpose Optimization Through Information Maximization''. Ph.D. thesis, [https://en.wikipedia.org/wiki/University_of_Texas_at_Austin University of Texas at Austin], advisor [[Risto Miikkulainen]], [http://www.alockett.com/static/pdf/lockett-thesis.pdf pdf]
 
'''2013'''
* [[Alan J. Lockett]], [[Risto Miikkulainen]] ('''2013'''). ''[http://nn.cs.utexas.edu/?lockett:foga2013 A Measure-Theoretic Analysis of Stochastic Optimization]''. [https://dblp.uni-trier.de/db/conf/foga/foga2013.html FOGA 2013]
* [[Wen-Jie Tseng]], [[Jr-Chang Chen]], [[I-Chen Wu]], [[Ching-Hua Kuo]], [[Bo-Han Lin]] ('''2013'''). ''[https://kaigi.org/jsai/webprogram/2013/paper-138.html A Supervised Learning Method for Chinese Chess Programs]''. [http://2013.conf.ai-gakkai.or.jp/english-info JSAI2013], [https://kaigi.org/jsai/webprogram/2013/pdf/138.pdf pdf]
* [[Akira Ura]], [[Makoto Miwa]], [[Yoshimasa Tsuruoka]], [[Takashi Chikayama]] ('''2013'''). ''[https://link.springer.com/chapter/10.1007/978-3-319-09165-5_18 Comparison Training of Shogi Evaluation Functions with Self-Generated Training Positions and Moves]''. [[CG 2013]], [https://pdfs.semanticscholar.org/6ad0/7167425539cf64e6bf420d7a28a1fc1047d6.pdf slides as pdf]
* [[Yoshikuni Sato]], [[Makoto Miwa]], [[Shogo Takeuchi]], [[Daisuke Takahashi]] ('''2013'''). ''[http://www.aaai.org/ocs/index.php/AAAI/AAAI13/paper/view/6402 Optimizing Objective Function Parameters for Strength in Computer Game-Playing]''. [http://www.informatik.uni-trier.de/~ley/db/conf/aaai/aaai2013.html#SatoMTT13 AAAI 2013]
* [[Shalabh Bhatnagar]], [https://dblp.org/pid/31/10493.html H.L. Prasad], [https://scholar.google.co.in/citations?user=Q1YXWpoAAAAJ&hl=en L.A. Prashanth] ('''2013'''). ''[https://link.springer.com/book/10.1007/978-1-4471-4285-0 Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods]''. [https://www.springer.com/series/642 Lecture Notes in Control and Information Sciences], Vol. 434, [https://en.wikipedia.org/wiki/Springer_Science%2BBusiness_Media Springer] » [[SPSA]]
* [[Tomáš Hřebejk]] ('''2013'''). ''Arimaa challenge - Static Evaluation Function''. Master Thesis, [https://en.wikipedia.org/wiki/Charles_University_in_Prague Charles University in Prague], [http://arimaa.com/arimaa/papers/ThomasHrebejk/Arimaa.pdf pdf] » [[Arimaa]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=58472 thesis on eval function learning in Arimaa] by [[Jon Dart]], [[CCC]], December 04, 2015</ref>
* [[Ilya Loshchilov]] ('''2013'''). ''[http://loshchilov.com/phd.html Surrogate-Assisted Evolutionary Algorithms]''. Ph.D. thesis, [[University of Paris#11|Paris-Sud 11 University]], advisors [[Marc Schoenauer]] and [[Michèle Sebag]]
* [https://www.cs.ubc.ca/~schmidtm/ Mark Schmidt], [https://inria.academia.edu/NicolasLeRoux Nicolas Le Roux], [https://www.di.ens.fr/~fbach/ Francis Bach] ('''2013'''). ''Minimizing Finite Sums with the Stochastic Average Gradient''. [https://arxiv.org/abs/1309.2388 arXiv:1309.2388] <ref>[https://groups.google.com/d/msg/fishcooking/XnLmUP_78iw/QgMZzmeVBgAJ Tuning floats] by [[Stephane Nicolet]], [[Computer Chess Forums|FishCooking]], April 12, 2018</ref>
 
'''2014'''
* [[Kunihito Hoki]], [[Tomoyuki Kaneko]] ('''2014'''). ''[https://www.jair.org/papers/paper4217.html Large-Scale Optimization for Evaluation Functions with Minimax Search]''. [https://www.jair.org/vol/vol49.html JAIR Vol. 49], [https://pdfs.semanticscholar.org/eb9c/173576577acbb8800bf96aba452d77f1dc19.pdf pdf] » [[Shogi]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=55084 MMTO for evaluation learning] by [[Jon Dart]], [[CCC]], January 25, 2015</ref>
* [https://scholar.google.com/citations?user=glcep6EAAAAJ&hl=en Aryan Mokhtari], [https://scholar.google.com/citations?user=7mrPM4kAAAAJ&hl=en Alejandro Ribeiro] ('''2014'''). ''RES: Regularized Stochastic BFGS Algorithm''. [https://arxiv.org/abs/1401.7625 arXiv:1401.7625] <ref>[https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm Broyden–Fletcher–Goldfarb–Shanno algorithm from Wikipedia]</ref>
* <span id="ROCK"></span>[http://www.asl.ethz.ch/the-lab/people/person-detail.html?persid=184943 Jemin Hwangbo], [https://www.linkedin.com/in/christian-gehring-1b958395/ Christian Gehring], [http://www.asl.ethz.ch/the-lab/people/person-detail.html?persid=186652 Hannes Sommer], [http://www.asl.ethz.ch/the-lab/people/person-detail.html?persid=29981 Roland Siegwart], [http://www.adrl.ethz.ch/doku.php/adrl:people:jbuchli Jonas Buchli] ('''2014'''). ''ROCK∗ — Efficient black-box optimization for policy learning''. [http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=7028729 Humanoids, 2014] » [[Automated Tuning#Rockstar|Rockstar]]
* [[Mathematician#YDauphin|Yann Dauphin]], [[Mathematician#RPascanu|Razvan Pascanu]], [[Mathematician#CGulcehre|Caglar Gulcehre]], [[Mathematician#KCho|Kyunghyun Cho]], [[Mathematician#SGanguli|Surya Ganguli]], [[Mathematician#YBengio|Yoshua Bengio]] ('''2014'''). ''Identifying and attacking the saddle point problem in high-dimensional non-convex optimization''. [https://arxiv.org/abs/1406.2572 arXiv:1406.2572] <ref>[https://groups.google.com/d/msg/fishcooking/wOfRuzTSi_8/VgjN8MmSBQAJ high dimensional optimization] by [[Warren D. Smith]], [[Computer Chess Forums|FishCooking]], December 27, 2019</ref>
* [https://arxiv.org/find/cs/1/au:+Martens_J/0/1/0/all/0/1 James Martens] ('''2014, 2017'''). ''New insights and perspectives on the natural gradient method''. [https://arxiv.org/abs/1412.1193 arXiv:1412.1193]
 
==2015 ...==
* [https://scholar.google.nl/citations?user=yyIoQu4AAAAJ Diederik P. Kingma], [https://scholar.google.ca/citations?user=ymzxRhAAAAAJ&hl=en Jimmy Lei Ba] ('''2015'''). ''Adam: A Method for Stochastic Optimization''. [https://arxiv.org/abs/1412.6980v8 arXiv:1412.6980v8], [http://www.iclr.cc/doku.php?id=iclr2015:main ICLR 2015] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=61948 Arasan 19.2] by [[Jon Dart]], [[CCC]], November 03, 2016 » [[Arasan#Tuning|Arasan's Tuning]]</ref>
* [https://scholar.google.com/citations?user=glcep6EAAAAJ&hl=en Aryan Mokhtari], [https://scholar.google.com/citations?user=7mrPM4kAAAAJ&hl=en Alejandro Ribeiro] ('''2015'''). ''Global Convergence of Online Limited Memory BFGS''. [https://en.wikipedia.org/wiki/Journal_of_Machine_Learning_Research Journal of Machine Learning Research], Vol. 16, [http://www.jmlr.org/papers/volume16/mokhtari15a/mokhtari15a.pdf pdf] <ref>[https://en.wikipedia.org/wiki/Limited-memory_BFGS Limited-memory BFGS from Wikipedia]</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=62012&start=6 Re: CLOP: when to stop?] by [[Álvaro Begué]], [[CCC]], November 08, 2016</ref>
* [[Ilya Loshchilov]] ('''2015'''). ''LM-CMA: an Alternative to L-BFGS for Large Scale Black-box Optimization''. [https://arxiv.org/abs/1511.00221 arXiv:1511.00221] <ref>[https://en.wikipedia.org/wiki/Limited-memory_BFGS LM-BFGS from Wikipedia]</ref> <ref>[https://sites.google.com/site/ecjlmcma/ LM-CMA source code]</ref>
'''2016'''
* [[Diogo Real]], [[Alan Blair]] ('''2016'''). ''[https://ieeexplore.ieee.org/document/7743850/ Learning a multi-player chess game with TreeStrap]''. [https://dblp.uni-trier.de/db/conf/cec/cec2016.html CEC 2016]
* [[Wojciech Jaśkowski]], [[Marcin Szubert]] ('''2016'''). ''[https://ieeexplore.ieee.org/document/7180338 Coevolutionary CMA-ES for Knowledge-Free Learning of Game Position Evaluation]''. [[IEEE#TOCIAIGAMES|IEEE Transactions on Computational Intelligence and AI in Games]], Vol. 8, No. 4 <ref>[https://en.wikipedia.org/wiki/CMA-ES CMA-ES from Wikipedia]</ref>
* [[Wojciech Jaśkowski]], [[Paweł Liskowski]], [[Marcin Szubert]], [[Krzysztof Krawiec]] ('''2016'''). ''[https://content.sciendo.com/view/journals/amcs/26/1/article-p215.xml The performance profile: A multi–criteria performance evaluation method for test–based problems]''. [https://en.wikipedia.org/wiki/International_Journal_of_Applied_Mathematics_and_Computer_Science International Journal of Applied Mathematics and Computer Science], Vol. 26, No. 1
 
'''2017'''
* [http://ruder.io/ Sebastian Ruder] ('''2017'''). ''[http://ruder.io/optimizing-gradient-descent/ An overview of gradient descent optimization algorithms]''. [https://arxiv.org/abs/1609.04747v2 arXiv:1609.04747v2] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=64189&start=46 Re: Texel tuning method question] by [[Jon Dart]], [[CCC]], July 23, 2017</ref>
* [[Hung-Jui Chang]], [[Gang-Yu Fan]], [[Jr-Chang Chen]], [[Chih-Wen Hsueh]], [[Tsan-sheng Hsu]] ('''2017'''). ''[https://link.springer.com/chapter/10.1007/978-3-319-75931-9_10 Validating and Fine-Tuning of Game Evaluation Functions Using Endgame Databases]''. [[Conferences#IJCAI2017|CGW@IJCAI 2017]]
'''2018'''
* [[Takafumi Nakamichi]], [[Takeshi Ito]] ('''2018'''). ''Adjusting the evaluation function for weakening the competency level of a computer shogi program''. [[ICGA Journal#40_1|ICGA Journal, Vol. 40, No. 1]]
* [[Hung-Jui Chang]], [[Jr-Chang Chen]], [[Gang-Yu Fan]], [[Chih-Wen Hsueh]], [[Tsan-sheng Hsu]] ('''2018'''). ''Using Chinese dark chess endgame databases to validate and fine-tune game evaluation functions''. [[ICGA Journal#40_2|ICGA Journal, Vol. 40, No. 2]] » [[Chinese Dark Chess]], [[Endgame Tablebases]]
* [[Wen-Jie Tseng]], [[Jr-Chang Chen]], [[I-Chen Wu]], [[Tinghan Wei]] ('''2018'''). ''Comparison Training for Computer Chinese Chess''. [https://arxiv.org/abs/1801.07411 arXiv:1801.07411] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=52861&start=7 Re: multi-dimensional piece/square tables] by Tony P., [[CCC]], January 28, 2020 » [[Piece-Square Tables]]</ref>
* [[Jeremy Rapin]], [[Olivier Teytaud]] ('''2018'''). ''Nevergrad - A gradient-free optimization platform''. [https://github.com/facebookresearch/nevergrad GitHub - facebookresearch/nevergrad: A Python toolbox for performing gradient-free optimization]
==2020 ...==
* [[Andrew Grant]] ('''2020'''). ''Evaluation & Tuning in Chess Engines''. [https://github.com/AndyGrant/Ethereal/blob/master/Tuning.pdf pdf] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74877 Evaluation & Tuning in Chess Engines] by [[Andrew Grant]], [[CCC]], August 24, 2020</ref>
 
=Forum Posts=
==1997 ...==
* [https://groups.google.com/group/rec.games.chess.computer/browse_frm/thread/77f10f072e907302 Evolutionary Evaluation] by [[Daniel Homan]], [[Computer Chess Forums|rgcc]], September 09, 1997 » [[Evaluation]]
* [https://www.stmintz.com/ccc/index.php?id=13794 Deep Blue eval function tuning technique] by [[Stuart Cracraft]], [[CCC]], January 08, 1998 » [[Deep Blue]] <ref>[[Thomas Anantharaman]] ('''1997'''). ''Evaluation Tuning for Computer Chess: Linear Discriminant Methods''. [[ICGA Journal#20_4|ICCA Journal, Vol. 20, No. 4]]</ref>
* [https://www.stmintz.com/ccc/index.php?id=13968 Automated Tuning] by [[Stuart Cracraft]], [[CCC]], January 12, 1998
 
* [https://www.stmintz.com/ccc/index.php?id=487022 "learning" or "tuning" programs] by [[Sean Mintz]], [[CCC]], February 15, 2006
* [http://www.open-aurec.com/wbforum/viewtopic.php?f=4&t=49450 Adjusting weights the Deep Blue way] by [[Tony van Roon-Werten]], [[Computer Chess Forums|Winboard Forum]], August 29, 2008 » [[Deep Blue]]
: [http://www.open-aurec.com/wbforum/viewtopic.php?f=4&t=49450&start=3 Re: Adjusting weights the Deep Blue way] by [[Pradu Kannan]], [[Computer Chess Forums|Winboard Forum]], September 01, 2008
* [http://www.open-aurec.com/wbforum/viewtopic.php?f=4&t=49818 Tuning the eval] by [[Daniel Anulliero]], [[Computer Chess Forums|Winboard Forum]], January 02, 2009
* [http://www.talkchess.com/forum/viewtopic.php?t=27266 Insanity... or Tal style?] by [[Miguel A. Ballicora]], [[CCC]], April 01, 2009
 
* [http://www.talkchess.com/forum/viewtopic.php?p=421995 CLOP for Noisy Black-Box Parameter Optimization] by [[Rémi Coulom]], [[CCC]], September 01, 2011 <ref>[[Rémi Coulom]] ('''2011'''). ''[http://remi.coulom.free.fr/CLOP/ CLOP: Confident Local Optimization for Noisy Black-Box Parameter Tuning]''. [[Advances in Computer Games 13]]</ref>
* [http://www.talkchess.com/forum/viewtopic.php?t=40964 Tuning again] by [[Ed Schroder]], [[CCC]], November 01, 2011
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=42283 What is the most difficult and danger feature to tune it?] by [[Fermin Serrano]], [[CCC]], February 02, 2012
* [http://www.open-chess.org/viewtopic.php?f=5&t=1954 Ban: Automatic Learning of Evaluation [...]] by [[Mark Watkins|BB+]], [[Computer Chess Forums|OpenChess Forum]], May 10, 2012 <ref>[[Amir Ban]] ('''2012'''). ''[http://www.ratio.huji.ac.il/node/2362 Automatic Learning of Evaluation, with Applications to Computer Chess]''. Discussion Paper 613, [https://en.wikipedia.org/wiki/Hebrew_University_of_Jerusalem The Hebrew University of Jerusalem] - Center for the Study of Rationality, [https://en.wikipedia.org/wiki/Givat_Ram Givat Ram]</ref>
'''2014'''
 
* [http://www.talkchess.com/forum/viewtopic.php?t=62012 CLOP: when to stop?] by [[Erin Dame]], [[CCC]], November 07, 2016 » [[CLOP]]
: [http://www.talkchess.com/forum/viewtopic.php?t=62012&start=6 Re: CLOP: when to stop?] by [[Álvaro Begué]], [[CCC]], November 08, 2016 <ref>[https://en.wikipedia.org/wiki/Limited-memory_BFGS Limited-memory BFGS from Wikipedia]</ref>
* [http://www.talkchess.com/forum/viewtopic.php?t=62056 C++ code for tuning evaluation function parameters] by [[Álvaro Begué]], [[CCC]], November 10, 2016 » [[RuyTune]]
 
'''2017'''
* [http://www.talkchess.com/forum/viewtopic.php?t=63408 improved evaluation function] by [[Alexandru Mosoi]], [[CCC]], March 11, 2017 » [[Texel's Tuning Method]], [[Zurichess]]
 
* [http://www.talkchess.com/forum/viewtopic.php?t=66221 tuning info] by [[Marco Belli]], [[CCC]], January 03, 2018
* [http://www.talkchess.com/forum/viewtopic.php?t=66681 3 million games for training neural networks] by [[Álvaro Begué]], [[CCC]], February 24, 2018 » [[Neural Networks]]
* [https://groups.google.com/d/msg/fishcooking/XnLmUP_78iw/QgMZzmeVBgAJ Tuning floats] by [[Stephane Nicolet]], [[Computer Chess Forums|FishCooking]], April 12, 2018
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=67831 Introducing PET] by [[Ed Schroder|Ed Schröder]], [[CCC]], June 27, 2018 » [[Strategic Test Suite]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=68326 Texel tuning speed] by [[Vivien Clauzon]], [[CCC]], August 29, 2018 » [[Texel's Tuning Method]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=68753 methods for tuning coefficients] by [[Stuart Cracraft]], [[CCC]], October 28, 2018
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=69035 Particle Swarm Optimization Code] by [[Erik Madsen]], [[CCC]], November 24, 2018 » [[MadChess]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=69207 Gradient Descent Introduction] by [[Michael Hoffmann|Desperado]], [[CCC]], December 09, 2018
'''2019'''
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=69532 Automated tuning... finally... (Topple v0.3.0)] by [[Vincent Tang]], [[CCC]], January 08, 2019 » [[Topple]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=71650 New Tool for Tuning with Skopt] by [[Thomas Dybdahl Ahle]], [[CCC]], August 25, 2019 <ref>[https://scikit-optimize.github.io/ skopt API documentation]</ref>
* [https://www.game-ai-forum.org/viewtopic.php?f=21&t=695 TD(1)] by [[Rémi Coulom]], [[Computer Chess Forums|Game-AI Forum]], November 20, 2019 » [[Temporal Difference Learning]]
* [https://groups.google.com/d/msg/fishcooking/wOfRuzTSi_8/VgjN8MmSBQAJ high dimensional optimization] by [[Warren D. Smith]], [[Computer Chess Forums|FishCooking]], December 27, 2019 <ref>[[Mathematician#YDauphin|Yann Dauphin]], [[Mathematician#RPascanu|Razvan Pascanu]], [[Mathematician#CGulcehre|Caglar Gulcehre]], [[Mathematician#KCho|Kyunghyun Cho]], [[Mathematician#SGanguli|Surya Ganguli]], [[Mathematician#YBengio|Yoshua Bengio]] ('''2014'''). ''Identifying and attacking the saddle point problem in high-dimensional non-convex optimization''. [https://arxiv.org/abs/1406.2572 arXiv:1406.2572]</ref>
==2020 ...==
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=72810 Board adaptive / tuning evaluation function - no NN/AI] by Moritz Gedig, [[CCC]], January 14, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=73629 Pawn structure tuning] by [[Vivien Clauzon]], [[CCC]], April 11, 2020 » [[Pawn Structure]], [[Ethereal]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74184 Learning/Tuning in SlowChess Blitz Classic] by [[Jonathan Kreuzer]], [[CCC]], June 15, 2020 » [[Slow Chess]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74209 Great input about Bayesian optimization of noisy function methods] by [[Vivien Clauzon]], [[CCC]], June 16, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74877 Evaluation & Tuning in Chess Engines] by [[Andrew Grant]], [[CCC]], August 24, 2020 » [[Ethereal]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74955 Train a neural network evaluation] by [[Fabio Gobbato]], [[CCC]], September 01, 2020 » [[NNUE]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75012 Speeding Up The Tuner] by [[Dennis Sceviour]], [[CCC]], September 06, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=75104 Yet another parameter tuner using optuna framework] by [[Ferdinand Mosca]], [[CCC]], September 14, 2020
: [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=75104&start=15 Re: Yet another parameter tuner using optuna framework] by [[Karlson Pfannschmidt]], [[CCC]], September 16, 2020 <ref>[https://optunity.readthedocs.io/en/latest/user/solvers/TPE.html Tree-structured Parzen Estimator — Optunity 1.1.0 documentation]</ref>
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75234 evaluation tuning - where to start?] by [[Maksim Korzh]], [[CCC]], September 27, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75267 How to calculate piece weights with logistic regression?] by [[Maksim Korzh]], [[CCC]], October 01, 2020 » [[Automated Tuning#Regression|Regression]], [[Point Value by Regression Analysis]], [[Point Value]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75411 Unsupervised reinforcement tuning from zero] by Madeleine Birchfield, [[CCC]], October 16, 2020 » [[Reinforcement Learning]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=76024 Laskas parameter optimizer] by [[Ferdinand Mosca]], [[CCC]], December 09, 2020
'''2021'''
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76227 How to calc the derivative for gradient descent?] by Brian Neal, [[CCC]], January 04, 2021
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76238 Help with Texel's tuning] by [[Maksim Korzh]], [[CCC]], January 05, 2021 » [[Texel's Tuning Method]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76265 Tapered Evaluation and MSE (Texel Tuning)] by [[Michael Hoffmann]], [[CCC]], January 10, 2021 » [[Texel's Tuning Method]], [[Tapered Eval]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76288 Training data] by [[Michael Hoffmann]], [[CCC]], January 12, 2021
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76292 Why using the game result instead of evaluation scores] by [[Michael Hoffmann]], [[CCC]], January 12, 2021
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76294 Using Mini-Batch for tunig] by [[Michael Hoffmann]], [[CCC]], January 12, 2021
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76380 Texel tuning variant] by [[Ferdinand Mosca]], [[CCC]], January 21, 2021
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76385 Parameter Tuning Algorithm] by [[Michael Hoffmann]], [[CCC]], January 21, 2021
  
 
=External Links=
: [https://en.wikipedia.org/wiki/Engine_tuning Engine tuning from Wikipedia]
: [https://en.wikipedia.org/wiki/Self-tuning Self-tuning from Wikipedia]
==Engine Tuning==
* [https://www.3dkingdoms.com/chess/learning.html Automatic Tuning & Learning for Slow Chess Blitz Classic] by [[Jonathan Kreuzer]] » [[Slow Chess]] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74184 Learning/Tuning in SlowChess Blitz Classic] by [[Jonathan Kreuzer]], [[CCC]], June 15, 2020</ref>
* [https://chess-tuning-tools.readthedocs.io/en/latest/ Chess Tuning Tools] by [[Karlson Pfannschmidt]]
* [http://rebel13.nl/rebel13/pet.html Practical Engine Tuning] by [[Ed Schroder|Ed Schröder]], June 2018 » [[Strategic Test Suite]] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=67831 Introducing PET] by [[Ed Schroder|Ed Schröder]], [[CCC]], June 27, 2018</ref>
 
==Optimization==
* [https://en.wiktionary.org/wiki/optimization optimization - Wiktionary]
: [https://en.wiktionary.org/wiki/optimize optimize - Wiktionary]
* [https://en.wikipedia.org/wiki/Mathematical_optimization Mathematical optimization from Wikipedia]
* [https://en.wikipedia.org/wiki/Operations_research Operations research from Wikipedia]
* [https://en.wikipedia.org/wiki/Optimization_problem Optimization problem from Wikipedia]
* [https://en.wikipedia.org/wiki/Duality_(optimization) Duality (optimization) from Wikipedia]
* [https://en.wikipedia.org/wiki/Local_search_%28optimization%29 Local search (optimization) from Wikipedia]
* [https://en.wikipedia.org/wiki/Iterated_local_search Iterated local search from Wikipedia]
 
* [https://en.wikipedia.org/wiki/Global_optimization Global optimization from Wikipedia]
* [https://en.wikipedia.org/wiki/Bayesian_optimization Bayesian optimization from Wikipedia]
* [https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm Broyden–Fletcher–Goldfarb–Shanno algorithm from Wikipedia]
* [http://remi.coulom.free.fr/CLOP/ CLOP for Noisy Black-Box Parameter Optimization] by [[Rémi Coulom]] » [[CLOP]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=35049 Tool for automatic black-box parameter optimization released] by [[Rémi Coulom]], [[CCC]], June 20, 2010</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?p=421995 CLOP for Noisy Black-Box Parameter Optimization] by [[Rémi Coulom]], [[CCC]], September 01, 2011</ref>
* [https://en.wikipedia.org/wiki/Conjugate_gradient_method Conjugate gradient method from Wikipedia]
 
: [https://en.wikipedia.org/wiki/Entropy_maximization Entropy maximization from Wikipedia]
: [https://en.wikipedia.org/wiki/Linear_programming Linear programming from Wikipedia]
: [https://en.wikipedia.org/wiki/Nonlinear_programming Nonlinear programming from Wikipedia]
: [https://en.wikipedia.org/wiki/Simplex_algorithm Simplex algorithm from Wikipedia]
* [https://en.wikipedia.org/wiki/Differential_evolution Differential evolution from Wikipedia]
 
* [https://en.wikipedia.org/wiki/Gradient_descent Gradient descent from Wikipedia]
* [https://en.wikipedia.org/wiki/Hill_climbing Hill climbing from Wikipedia]
* [https://en.wikipedia.org/wiki/Hyperparameter_optimization Hyperparameter optimization from Wikipedia]
* [https://en.wikipedia.org/wiki/Limited-memory_BFGS Limited-memory BFGS from Wikipedia] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=62012&start=6 Re: CLOP: when to stop?] by [[Álvaro Begué]], [[CCC]], November 08, 2016</ref>
* [https://en.wikipedia.org/wiki/Loss_function Loss function from Wikipedia]
 
* [http://macechess.blogspot.de/2013/03/population-based-incremental-learning.html Population Based Incremental Learning (PBIL)] by [[Thomas Petzke]], March 16, 2013 » [[iCE]]
* [https://en.wikipedia.org/wiki/Simulated_annealing Simulated annealing from Wikipedia]
* [https://github.com/scikit-optimize Skopt (Scikit-Optimize) · GitHub]
* [https://bayes-skopt.readthedocs.io/en/latest/ Welcome to Bayes-skopt’s documentation!]
* [https://en.wikipedia.org/wiki/Stochastic_optimization Stochastic optimization from Wikipedia]
: [https://en.wikipedia.org/wiki/Simultaneous_perturbation_stochastic_approximation Simultaneous perturbation stochastic approximation (SPSA) - Wikipedia]
: [https://www.jhuapl.edu/spsa/ SPSA Algorithm]
: [https://en.wikipedia.org/wiki/Stochastic_approximation Stochastic approximation from Wikipedia]
: [https://en.wikipedia.org/wiki/Stochastic_gradient_descent Stochastic gradient descent from Wikipedia]
: [https://en.wikipedia.org/wiki/Stochastic_gradient_descent#AdaGrad AdaGrad from Wikipedia]
 
==Machine Learning==
* [https://en.wikipedia.org/wiki/Machine_learning Machine learning from Wikipedia]
 
* [https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation Correlation does not imply causation from Wikipedia]
* [https://en.wikipedia.org/wiki/Cross_entropy Cross entropy from Wikipedia]
* [https://en.wikipedia.org/wiki/Elastic_net_regularization Elastic net regularization from Wikipedia]
* [https://en.wikipedia.org/wiki/Lasso_(statistics) LASSO from Wikipedia]
* [https://en.wikipedia.org/wiki/Likelihood_function Likelihood function from Wikipedia]
* [https://en.wikipedia.org/wiki/Linear_regression Linear regression from Wikipedia]
 
* [https://en.wikipedia.org/wiki/Nonlinear_regression Nonlinear regression from Wikipedia]
* [https://en.wikipedia.org/wiki/Ordinary_least_squares Ordinary least squares from Wikipedia]
* [https://en.wikipedia.org/wiki/Polynomial_regression Polynomial regression from Wikipedia]
* [https://en.wikipedia.org/wiki/Simple_linear_regression Simple linear regression from Wikipedia]
* [https://en.wikipedia.org/wiki/Tikhonov_regularization Tikhonov regularization (Ridge regression) from Wikipedia]
 
==Code==
* [https://github.com/facebookresearch/nevergrad GitHub - facebookresearch/nevergrad: A Python toolbox for performing gradient-free optimization]
* [https://github.com/fsmosca/Optuna-Game-Parameter-Tuner GitHub - fsmosca/Optuna-Game-Parameter-Tuner: A game search and evaluation parameter tuner using optuna framework] by [[Ferdinand Mosca]] » [[Deuterium]] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=75104 Yet another parameter tuner using optuna framework] by [[Ferdinand Mosca]], [[CCC]], September 14, 2020</ref>
* [https://github.com/fsmosca/Lakas GitHub - fsmosca/Lakas: Game parameter optimizer using nevergrad framework] by [[Ferdinand Mosca]] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=76024 Laskas parameter optimizer] by [[Ferdinand Mosca]], [[CCC]], December 09, 2020</ref>
* [https://github.com/kiudee/bayes-skopt GitHub - kiudee/bayes-skopt: A fully Bayesian implementation of sequential model-based optimization] by [[Karlson Pfannschmidt]] » [[Fat Fritz]] <ref>[https://en.chessbase.com/post/fat-fritz-update-and-fat-fritz-jr Fat Fritz 1.1 update and a small gift] by [[Albert Silver]]. [[ChessBase|ChessBase News]], March 05, 2020</ref>
* [https://github.com/kiudee/chess-tuning-tools GitHub - kiudee/chess-tuning-tools] by [[Karlson Pfannschmidt]] » [[Leela Chess Zero]]
* [https://github.com/krasserm/bayesian-machine-learning GitHub - krasserm/bayesian-machine-learning: Notebooks about Bayesian methods for machine learning] by [https://krasserm.github.io/ Martin Krasser] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74209 Great input about Bayesian optimization of noisy function methods] by [[Vivien Clauzon]], [[CCC]], June 16, 2020</ref>
* [https://github.com/thomasahle/noisy-bayesian-optimization GitHub - thomasahle/noisy-bayesian-optimization: Bayesian Optimization for very Noisy functions] by [[Thomas Dybdahl Ahle]] » [[FastChess]]
* [https://github.com/scikit-optimize/scikit-optimize GitHub - scikit-optimize/scikit-optimize: Sequential model-based optimization with a `scipy.optimize` interface]
* <span id="Rockstar"></span>[https://github.com/lantonov/Rockstar Rockstar: Implementation of ROCK* algorithm (Gaussian kernel regression + natural gradient descent) for optimisation | GitHub] by [[Lyudmil Antonov]] and [[Joona Kiiski]] » [[Automated Tuning#ROCK|ROCK*]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=65045 ROCK* black-box optimizer for chess] by [[Jon Dart]], [[CCC]], August 31, 2017</ref>
* [https://github.com/zamar/spsa SPSA Tuner for Stockfish Chess Engine | GitHub] by [[Joona Kiiski]] » [[Stockfish]], [[Stockfish's Tuning Method]]
 
==Misc==
* [[:Category:The Next Step Quintet|The Next Step Quintet]] feat. [http://www.tivonpennicott.com/ Tivon Pennicott] - [http://www.discogs.com/Next-Step-Quintet-The-Next-Step-Quintet/release/4970720 Regression], [https://el-gr.facebook.com/KerameioBar KerameioBar] [https://en.wikipedia.org/wiki/Athens Athens], [https://en.wikipedia.org/wiki/Greece Greece], September 2014, [https://en.wikipedia.org/wiki/YouTube YouTube] Video
: {{#evu:https://www.youtube.com/watch?v=lc4LBx2_Mak|alignment=left|valignment=top}}
  
 
=References=
<references />
'''[[Main Page|Up one Level]]'''
[[Category:The Next Step Quintet]]

Revision as of 23:29, 21 January 2021

Automated Tuning

Engine Tuner [1]

Automated Tuning,
an automated adjustment of evaluation parameters or weights, and less commonly, search parameters [2], with the aim of improving the playing strength of a chess engine or game playing program. Evaluation tuning can be applied by mathematical optimization or machine learning, two fields with huge overlap. Learning approaches are subdivided into supervised learning using labeled data, and reinforcement learning, which learns from trial and error, facing the dilemma of exploration (of uncharted territory) versus exploitation (of current knowledge). Johannes Fürnkranz gives a comprehensive overview in Machine Learning in Games: A Survey, published in 2000 [3], covering evaluation tuning in chapter 4.

Playing Strength

A difficulty in tuning and automated tuning of engine parameters is measuring playing strength. Using small sets of test-positions, which was quite common in former times for estimating the relative strength of chess programs, lacks adequate diversity for a reliable strength prediction. In particular, solving test-positions does not necessarily correlate with practical playing strength in matches against other opponents. Therefore, measuring strength requires playing many games against a reference opponent to determine the win rate with a certain confidence. The closer the strength of two opponents, the more games are necessary to determine whether changed parameters or weights in one of them are improvements or not, up to several tens of thousands. Playing many games with ultra short time controls has become the de facto standard with today's strong programs, as for instance applied in Stockfish's Fishtest, using the sequential probability ratio test (SPRT) to possibly terminate a match early [4].
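As an illustration, here is a minimal sketch of Wald's SPRT stopping rule, assuming a simplified win/loss Bernoulli model with draws ignored; production frameworks such as Fishtest use more elaborate trinomial or pentanomial models, and all counts and win probabilities below are hypothetical:

<pre>
// Minimal Wald SPRT sketch for match testing (win/loss model, draws ignored).
#include <cmath>
#include <cstdio>

// Log-likelihood ratio of observing w wins in n decisive games under
// win probability p1 (hypothesis H1) versus p0 (hypothesis H0).
double llr(int w, int n, double p0, double p1) {
    return w * std::log(p1 / p0) + (n - w) * std::log((1.0 - p1) / (1.0 - p0));
}

int main() {
    const double alpha = 0.05, beta = 0.05;               // type I/II error rates
    const double lower = std::log(beta / (1.0 - alpha));  // accept H0 below this
    const double upper = std::log((1.0 - beta) / alpha);  // accept H1 above this
    int wins = 560, games = 1000;                         // hypothetical counts
    double t = llr(wins, games, 0.50, 0.53);              // H0: 50%, H1: 53%
    if (t >= upper)      std::printf("accept H1 (improvement)\n");
    else if (t <= lower) std::printf("accept H0 (no improvement)\n");
    else                 std::printf("keep playing games\n");
}
</pre>

The match stops as soon as the log-likelihood ratio leaves the interval between the two bounds, which on average needs far fewer games than a fixed-length match of equal error rates.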

Parameter

Quote by Ingo Althöfer [5] [6]:

It is one of the best arts to find the right SMALL set of parameters and to tune them.
Some 12 years ago I had a technical article on this ("On telescoping linear evaluation functions") in the ICCA Journal, Vol. 16, No. 2, pp. 91-94, describing a theorem (of existence) which says that in case of linear evaluation functions with lots of terms there is always a small subset of the terms such that this set with the right parameters is almost as good as the full evaluation function. 

Mathematical Optimization

Mathematical optimization methods in tuning consider the engine as a black box.

Methods

Instances

Advantages

  • Works with all engine parameters, including search
  • Takes search-eval interaction into account

Disadvantages

Reinforcement Learning

Reinforcement learning, in particular temporal difference learning, has a long history in tuning evaluation weights in game programming, first seen in the late 50s by Arthur Samuel in his Checkers player [7]. In self play against a stable copy of itself, after each move, the weights of the evaluation function were adjusted in a way that the score of the root position after a quiescence search became closer to the score of the full search. This TD method was generalized and formalized by Richard Sutton in 1988 [8], who introduced the decay parameter λ, where proportions of the score came from the outcome of Monte Carlo simulated games, tapering between bootstrapping (λ = 0) and Monte Carlo (λ = 1). TD-λ was famously applied by Gerald Tesauro in his Backgammon program TD-Gammon [9] [10], its minimax adaptation TD-Leaf was successfully used in eval tuning of chess programs [11], with KnightCap [12] and CilkChess [13] as prominent examples.
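The following is a minimal sketch of such a TD(λ) update for a linear evaluation V(s) = w·φ(s), with eligibility traces accumulating the decayed gradients; the feature count and names are illustrative and not taken from any particular engine:

<pre>
// Minimal TD(lambda) sketch for a linear evaluation V(s) = w . phi(s).
#include <vector>

static const int NUM_FEATURES = 768;             // illustrative feature count
std::vector<double> w(NUM_FEATURES, 0.0);        // evaluation weights
std::vector<double> trace(NUM_FEATURES, 0.0);    // eligibility traces

// Called after each move in self play: v_old is the score of the previous
// root, v_new the score of the current root (or the game result at the end);
// phi is the feature vector of the previous position, which for a linear
// evaluation equals the gradient of V with respect to w.
void tdLambdaUpdate(const std::vector<double>& phi,
                    double v_old, double v_new,
                    double alpha, double lambda) {
    const double delta = v_new - v_old;           // temporal-difference error
    for (int i = 0; i < NUM_FEATURES; ++i) {
        trace[i] = lambda * trace[i] + phi[i];    // decay old credit, add new
        w[i] += alpha * delta * trace[i];         // move weights toward target
    }
}
</pre>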

Instances

Engines

Supervised Learning

Move Adaptation

One supervised learning method considers desired moves from a set of positions, likely from grandmaster games, and tries to adjust their evaluation weights so that for instance a one-ply search agrees with the desired move. Already pioneering in reinforcement learning some years before, move adaptation was described by Arthur Samuel in 1967 as used in the second version of his checkers player [15], where a structure of stacked linear evaluation functions was trained by computing a correlation measure based on the number of times the feature rated an alternative move higher than the desired move played by an expert [16]. In chess, move adaptation was first described by Thomas Nitsche in 1982 [17], and with some extensions by Tony Marsland in 1985 [18]. Eval Tuning in Deep Thought, as mentioned by Feng-hsiung Hsu et al. in 1990 [19] and later published by Andreas Nowatzyk, is also based on an extended form of move adaptation [20]. Jonathan Schaeffer's and Paul Lu's efforts to make Deep Thought's approach work for Chinook in 1990 failed [21] - nothing seemed to produce results that were as good as their hand-tuned effort [22].
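A hedged sketch of the basic idea, assuming a linear evaluation: whenever a one-ply search prefers a move other than the expert's desired move, the weights are nudged toward the features of the position after the desired move and away from those after the engine's choice, a perceptron-style correction rather than any of the specific published schemes; all names are illustrative:

<pre>
// Perceptron-style move-adaptation sketch for a linear evaluation.
#include <cstddef>
#include <vector>

void moveAdaptationStep(std::vector<double>& w,
                        const std::vector<double>& phiDesired, // after expert move
                        const std::vector<double>& phiBest,    // after engine's move
                        double eta) {                          // learning rate
    for (std::size_t i = 0; i < w.size(); ++i)
        w[i] += eta * (phiDesired[i] - phiBest[i]);  // raise desired, lower rival
}
</pre>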

Value Adaptation

A second supervised learning approach used to tune evaluation weights is based on regression toward a desired value, e.g. the final outcome from huge sets of positions from quality games, or other information supplied by a supervisor, e.g. in the form of annotations from position evaluation symbols. Often, value adaptation is reinforced by determining an expected outcome through self-play [23]. A minimal sketch of the labeling step is given below.
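
The labeling step may be sketched as follows, assuming hypothetical game records that pair the visited positions with the final outcome from White's point of view:

  def label_positions(games):
      # games: iterable of (positions, result) pairs, where result is the
      # final game outcome: 1.0 White win, 0.5 draw, 0.0 White loss
      data = []
      for positions, result in games:
          for pos in positions:
              data.append((pos, result))
      return data

The regression methods described below then fit the evaluation weights to these (position, outcome) pairs.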

Advantages

  • Can modify any number of weights simultaneously - constant time complexity

Disadvantages

  • Requires a source of labeled data
  • Can only be used for evaluation weights, or anything else that can be labeled
  • Does not work optimally when combined with search

Regression

Regression analysis is a statistical process with substantial overlap with machine learning, used to predict the value of a dependent variable Y (output) from known pairs of values of the X and Y variables. While linear regression deals with continuous outputs, logistic regression covers binary or discrete outputs, such as win/loss or win/draw/loss. Parameter estimation in regression analysis can be formulated as the minimization of a cost or loss function over a training set [25], such as the mean squared error, or the cross-entropy error function for binary classification [26]. The minimization is implemented by iterative optimization algorithms or metaheuristics such as iterated local search, the Gauss–Newton algorithm, or the conjugate gradient method.
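
For a linear predictor, minimizing the mean squared error by plain batch gradient descent may be sketched as follows in Python; the learning rate and epoch count are illustrative assumptions, and in practice one of the faster algorithms named above would be used.

  def fit_linear(data, n_weights, lr=1e-4, epochs=100):
      # data: list of (x, y) pairs, x a feature vector, y the target value;
      # minimizes the mean squared error of the prediction dot(w, x)
      w = [0.0] * n_weights
      for _ in range(epochs):
          grad = [0.0] * n_weights
          for x, y in data:
              err = sum(wj * xj for wj, xj in zip(w, x)) - y
              for j, xj in enumerate(x):
                  grad[j] += 2.0 * err * xj / len(data)   # d(MSE)/dw_j
          w = [wj - lr * gj for wj, gj in zip(w, grad)]
      return w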

Linear Regression

The supervised problem of regression applied to move adaptation was employed by Thomas Nitsche in 1982, minimizing the mean squared error of a cost function considering the program's and a grandmaster's choice of moves; as mentioned above, it was extended by Tony Marsland in 1985, and later by the Deep Thought team. Regression used to adapt desired values was described by Donald H. Mitchell in his 1984 master's thesis on evaluation features in Othello, cited by Michael Buro [27] [28]. Jens Christensen applied linear regression to chess in 1986 to learn point values in the domain of temporal difference learning [29].

Logistic Regression

Since the relationship between win percentage and pawn advantage is assumed to follow a logistic model, one may treat the static evaluation as a single-layer perceptron or single-neuron ANN with the common logistic activation function, and run the perceptron algorithm to train it [31]. Logistic regression in evaluation tuning was first elaborated by Michael Buro in 1995 [32]; it proved successful in the game of Othello in comparison with Fisher's linear discriminant and the quadratic discriminant function for normally distributed features, and served as the eponym of his Othello program Logistello [33]. In computer chess, logistic regression was applied by Arkadiusz Paterek with Gosu [34], later proposed by Miguel A. Ballicora in 2009 as used by Gaviota [35], independently described by Amir Ban in 2012 for Junior's evaluation learning [36], and explicitly mentioned by Álvaro Begué in a January 2014 CCC discussion [37], when Peter Österlund explained Texel's Tuning Method [38], which subsequently popularized logistic regression tuning in computer chess. Vladimir Medvedev's Point Value by Regression Analysis experiments [39] [40] showed why the logistic function is appropriate, and further used cross-entropy and regularization.
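
A condensed Python sketch of Texel-style tuning, following Österlund's description: the game result is compared with the logistic win probability of a quiescence search score, and a simple local search adjusts each weight by ±1 as long as the squared error decreases. Here qsearch is an assumed engine hook, and the scaling constant K is an illustrative choice that has to be fitted to the engine's own evaluation scale.

  def win_probability(score_cp, K=1.13):
      # logistic mapping from a centipawn score to an expected game result
      return 1.0 / (1.0 + 10.0 ** (-K * score_cp / 400.0))

  def texel_error(weights, data):
      # data: (position, result) pairs, result in {0, 0.5, 1} from White's view
      return sum((r - win_probability(qsearch(pos, weights))) ** 2
                 for pos, r in data) / len(data)

  def local_search(weights, data):
      # try +-1 on each weight, keep any change that lowers the error,
      # and repeat full passes until no weight can be improved
      best = texel_error(weights, data)
      improved = True
      while improved:
          improved = False
          for j in range(len(weights)):
              for step in (1, -1):
                  weights[j] += step
                  e = texel_error(weights, data)
                  if e < best:
                      best, improved = e, True
                      break
                  weights[j] -= step          # revert the unhelpful step
      return weights

The same error function can instead be minimized with the gradient-based or cross-entropy variants discussed in the surrounding sections; the local search above is simply the variant described in the original CCC post.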

Instances

See also

Publications

1959

1960 ...

1970 ...

1980 ...

1985 ...

1990 ...

1995 ...

2000 ...

Gerald Tesauro (2001). Comparison Training of Chess Evaluation Functions. » SCP, Deep Blue

2005 ...

2006

2007

2008

2009

2010 ...

2011

2012

2013

2014

2015 ...

2016

2017

2018

2020 ...

Forum Posts

1997 ...

2000 ...

2005 ...

Re: Adjusting weights the Deep Blue way by Pradu Kannan, Winboard Forum, September 01, 2008
Re: Insanity... or Tal style? by Miguel A. Ballicora, CCC, April 02, 2009 [63]

2010 ...

2014

Re: How Do You Automatically Tune Your Evaluation Tables by Álvaro Begué, CCC, January 08, 2014
The texel evaluation function optimization algorithm by Peter Österlund, CCC, January 31, 2014 » Texel's Tuning Method
Re: The texel evaluation function optimization algorithm by Álvaro Begué, CCC, January 31, 2014 » Cross-entropy

2015 ...

Re: Piece weights with regression analysis (in Russian) by Fabien Letouzey, CCC, May 04, 2015
Re: Genetical tuning by Ferdinand Mosca, CCC, August 20, 2015

2016

Re: txt: automated chess engine tuning by Sergei S. Markoff, CCC, February 15, 2016 » SmarThink
Re: CLOP: when to stop? by Álvaro Begué, CCC, November 08, 2016 [69]

2017

Re: Texel tuning method question by Peter Österlund, CCC, June 07, 2017
Re: Texel tuning method question by Ferdinand Mosca, CCC, July 20, 2017 » Python
Re: Texel tuning method question by Jon Dart, CCC, July 23, 2017
Re: tool to create derivates of a given function by Daniel Shawul, CCC, November 07, 2017 [71]

2018

2019

2020 ...

Re: Yet another parameter tuner using optuna framework by Karlson Pfannschmidt, CCC, September 16, 2020 [74]

2021

External Links

Engine tuning from Wikipedia
Self-tuning from Wikipedia

Engine Tuning

Optimization

optimize - Wiktionary
Entropy maximization from Wikipedia
Linear programming from Wikipedia
Nonlinear programming from Wikipedia
Simplex algorithm from Wikipedia
Simultaneous perturbation stochastic approximation (SPSA) - Wikipedia
SPSA Algorithm
Stochastic approximation from Wikipedia
Stochastic gradient descent from Wikipedia
AdaGrad from Wikipedia

Machine Learning

reinforcement - Wiktionary
reinforce - Wiktionary
supervisor - Wiktionary
temporal - Wiktionary

Statistics/Regression Analysis

regression - Wiktionary
regress - Wiktionary

Code

Misc

References

  1. A vintage motor engine tester located at the James Hall museum of Transport, Johannesburg, South Africa - Engine tuning from Wikipedia
  2. Yngvi Björnsson, Tony Marsland (2001). Learning Search Control in Adversary Games. Advances in Computer Games 9, pp. 157-174. pdf
  3. Johannes Fürnkranz (2000). Machine Learning in Games: A Survey. Austrian Research Institute for Artificial Intelligence, OEFAI-TR-2000-3, pdf - Chapter 4, Evaluation Function Tuning
  4. Fishtest Distributed Testing Framework by Marco Costalba, CCC, May 01, 2013
  5. Re: Zappa Report by Ingo Althöfer, CCC, December 30, 2005 » Zappa
  6. Ingo Althöfer (1993). On Telescoping Linear Evaluation Functions. ICCA Journal, Vol. 16, No. 2, pp. 91-94
  7. Arthur Samuel (1959). Some Studies in Machine Learning Using the Game of Checkers. IBM Journal July 1959
  8. Richard Sutton (1988). Learning to Predict by the Methods of Temporal Differences. Machine Learning, Vol. 3, No. 1, pdf
  9. Gerald Tesauro (1992). Temporal Difference Learning of Backgammon Strategy. ML 1992
  10. Gerald Tesauro (1994). TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play. Neural Computation Vol. 6, No. 2
  11. Don Beal, Martin C. Smith (1999). Learning Piece-Square Values using Temporal Differences. ICCA Journal, Vol. 22, No. 4
  12. Jonathan Baxter, Andrew Tridgell, Lex Weaver (1998). Experiments in Parameter Learning Using Temporal Differences. ICCA Journal, Vol. 21, No. 2, pdf
  13. The Cilkchess Parallel Chess Program
  14. EXchess also uses CLOP
  15. Arthur Samuel (1967). Some Studies in Machine Learning. Using the Game of Checkers. II-Recent Progress. pdf
  16. Johannes Fürnkranz (2000). Machine Learning in Games: A Survey. Austrian Research Institute for Artificial Intelligence, OEFAI-TR-2000-3, pdf
  17. Thomas Nitsche (1982). A Learning Chess Program. Advances in Computer Chess 3
  18. Tony Marsland (1985). Evaluation-Function Factors. ICCA Journal, Vol. 8, No. 2, pdf
  19. Feng-hsiung Hsu, Thomas Anantharaman, Murray Campbell, Andreas Nowatzyk (1990). A Grandmaster Chess Machine. Scientific American, Vol. 263, No. 4, pp. 44-50. ISSN 0036-8733.
  20. see 2.1 Learning from Desired Moves in Chess in Kunihito Hoki, Tomoyuki Kaneko (2014). Large-Scale Optimization for Evaluation Functions with Minimax Search. JAIR Vol. 49
  21. Jonathan Schaeffer, Joe Culberson, Norman Treloar, Brent Knight, Paul Lu, Duane Szafron (1992). A World Championship Caliber Checkers Program. Artificial Intelligence, Vol. 53, Nos. 2-3, ps
  22. Jonathan Schaeffer (1997, 2009). One Jump Ahead. 7. The Case for the Prosecution, pp. 111-114
  23. Bruce Abramson (1990). Expected-Outcome: A General Model of Static Evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 2
  24. Random data points and their linear regression. Created with Sage by Sewaqu, November 5, 2010, Wikimedia Commons
  25. Loss function - Use in statistics - Wikipedia
  26. "Using cross-entropy error function instead of sum of squares leads to faster training and improved generalization", from Sargur Srihari, Neural Network Training (pdf)
  27. Michael Buro (1995). Statistical Feature Combination for the Evaluation of Game Positions. JAIR, Vol. 3
  28. Donald H. Mitchell (1984). Using Features to Evaluate Positions in Experts' and Novices' Othello Games. Master thesis, Department of Psychology, Northwestern University, Evanston, IL
  29. Jens Christensen (1986). Learning Static Evaluation Functions by Linear Regression. in Tom Mitchell, Jaime Carbonell, Ryszard Michalski (1986). Machine Learning: A Guide to Current Research. The Kluwer International Series in Engineering and Computer Science, Vol. 12
  30. log-linear 1 / (1 + 10^(-s/4)), s = -10 to 10, from Wolfram|Alpha
  31. Re: Piece weights with regression analysis (in Russian) by Fabien Letouzey, CCC, May 04, 2015
  32. Michael Buro (1995). Statistical Feature Combination for the Evaluation of Game Positions. JAIR, Vol. 3
  33. LOGISTELLO's Homepage
  34. Arkadiusz Paterek (2004). Modelowanie funkcji oceniającej w grach. University of Warsaw, zipped ps (Polish, Modeling of an evaluation function in games)
  35. Re: Insanity... or Tal style? by Miguel A. Ballicora, CCC, April 02, 2009
  36. Amir Ban (2012). Automatic Learning of Evaluation, with Applications to Computer Chess. Discussion Paper 613, The Hebrew University of Jerusalem - Center for the Study of Rationality, Givat Ram
  37. Re: How Do You Automatically Tune Your Evaluation Tables by Álvaro Begué, CCC, January 08, 2014
  38. The texel evaluation function optimization algorithm by Peter Österlund, CCC, January 31, 2014
  39. Определяем веса шахматных фигур регрессионным анализом / Хабрахабр by WinPooh, April 27, 2015 (Russian, Determining chess piece weights by regression analysis, on Habrahabr)
  40. Piece weights with regression analysis (in Russian) by Vladimir Medvedev, CCC, April 30, 2015
  41. SPSA Tuner for Stockfish Chess Engine by Joona Kiiski
  42. Re: Adjusting weights the Deep Blue way by Pradu Kannan, Winboard Forum, September 01, 2008
  43. A paper about parameter tuning by Rémi Coulom, CCC, January 12, 2010
  44. MATLAB from Wikipedia
  45. Adaptive coordinate descent from Wikipedia
  46. CLOP for Noisy Black-Box Parameter Optimization by Rémi Coulom, CCC, September 01, 2011
  47. CLOP slides by Rémi Coulom, CCC, November 03, 2011
  48. thesis on eval function learning in Arimaa by Jon Dart, CCC, December 04, 2015
  49. Tuning floats by Stephane Nicolet, FishCooking, April 12, 2018
  50. MMTO for evaluation learning by Jon Dart, CCC, January 25, 2015
  51. Broyden–Fletcher–Goldfarb–Shanno algorithm from Wikipedia
  52. high dimensional optimization by Warren D. Smith, FishCooking, December 27, 2019
  53. Arasan 19.2 by Jon Dart, CCC, November 03, 2016 » Arasan's Tuning
  54. Limited-memory BFGS from Wikipedia
  55. Re: CLOP: when to stop? by Álvaro Begué, CCC, November 08, 2016
  56. LM-BFGS from Wikipedia
  57. LM-CMA source code
  58. CMA-ES from Wikipedia
  59. Re: Texel tuning method question by Jon Dart, CCC, July 23, 2017
  60. Re: multi-dimensional piece/square tables by Tony P., CCC, January 28, 2020 » Piece-Square Tables
  61. Evaluation & Tuning in Chess Engines by Andrew Grant, CCC, August 24, 2020
  62. Thomas Anantharaman (1997). Evaluation Tuning for Computer Chess: Linear Discriminant Methods. ICCA Journal, Vol. 20, No. 4
  63. The texel evaluation function optimization algorithm by Peter Österlund, CCC, January 31, 2014
  64. Rémi Coulom (2011). CLOP: Confident Local Optimization for Noisy Black-Box Parameter Tuning. Advances in Computer Games 13
  65. Amir Ban (2012). Automatic Learning of Evaluation, with Applications to Computer Chess. Discussion Paper 613, The Hebrew University of Jerusalem - Center for the Study of Rationality, Givat Ram
  66. Kunihito Hoki, Tomoyuki Kaneko (2014). Large-Scale Optimization for Evaluation Functions with Minimax Search. JAIR Vol. 49, pdf
  67. brtzsnr / txt — Bitbucket by Alexandru Mosoi
  68. Home — TensorFlow
  69. Limited-memory BFGS from Wikipedia
  70. Maximum likelihood estimation from Wikipedia
  71. Jacobian matrix and determinant from Wikipedia
  72. skopt API documentation
  73. Yann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio (2014). Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. arXiv:1406.2572
  74. Tree-structured Parzen Estimator — Optunity 1.1.0 documentation
  75. Learning/Tuning in SlowChess Blitz Classic by Jonathan Kreuzer, CCC, June 15, 2020
  76. Introducing PET by Ed Schröder, CCC, June 27, 2018
  77. Tool for automatic black-box parameter optimization released by Rémi Coulom, CCC, June 20, 2010
  78. CLOP for Noisy Black-Box Parameter Optimization by Rémi Coulom, CCC, September 01, 2011
  79. Re: CLOP: when to stop? by Álvaro Begué, CCC, November 08, 2016
  80. Re: Eval tuning - any open source engines with GA or PBIL? by Jon Dart, CCC, December 06, 2014
  81. Re: The texel evaluation function optimization algorithm by Jon Dart, CCC, March 12, 2014
  82. Eval tuning - any open source engines with GA or PBIL? by Hrvoje Horvatic, CCC, December 04, 2014
  83. Yet another parameter tuner using optuna framework by Ferdinand Mosca, CCC, September 14, 2020
  84. Laskas parameter optimizer by Ferdinand Mosca, CCC, December 09, 2020
  85. Fat Fritz 1.1 update and a small gift by Albert Silver. ChessBase News, March 05, 2020
  86. Great input about Bayesian optimization of noisy function methods by Vivien Clauzon, CCC, June 16, 2020
  87. ROCK* black-box optimizer for chess by Jon Dart, CCC, August 31, 2017
