'''[[Main Page|Home]] * Automated Tuning'''

[[FILE:Sun 800 tester-001.jpg|border|right|thumb| Engine Tuner <ref>A vintage motor engine tester located at the [https://en.wikipedia.org/wiki/James_Hall_Transport_Museum James Hall museum of Transport], [https://en.wikipedia.org/wiki/Johannesburg Johannesburg], [https://en.wikipedia.org/wiki/South_Africa South Africa] - [https://en.wikipedia.org/wiki/Engine_tuning Engine tuning from Wikipedia]</ref> ]]

'''Automated Tuning''',<br/>
an [https://en.wikipedia.org/wiki/Automation automated] adjustment of [[Evaluation|evaluation]] parameters or weights, and less commonly, [[Search|search]] parameters <ref>[[Yngvi Björnsson]], [[Tony Marsland]] ('''2001'''). ''Learning Search Control in Adversary Games''. [[Advances in Computer Games 9]], pp. 157-174. [http://www.ru.is/faculty/yngvi/pdf/BjornssonM01b.pdf pdf]</ref>, with the aim of improving the [[Playing Strength|playing strength]] of a chess engine or game playing program. Evaluation tuning can be applied by [[Automated Tuning#Optimization|mathematical optimization]] or [[Learning|machine learning]], two fields with considerable overlap. Learning approaches are subdivided into [[Automated Tuning#SupervisedLearning|supervised learning]] using [https://en.wikipedia.org/wiki/Training_set labeled data], and [[Automated Tuning#ReinformentLearning|reinforcement learning]], which learns from trial and error while facing the dilemma of exploration (of uncharted territory) versus exploitation (of current knowledge). [[Johannes Fürnkranz]] gives a comprehensive overview in ''Machine Learning in Games: A Survey'', published in 2000 <ref>[[Johannes Fürnkranz]] ('''2000'''). ''Machine Learning in Games: A Survey''. [https://en.wikipedia.org/wiki/Austrian_Research_Institute_for_Artificial_Intelligence Austrian Research Institute for Artificial Intelligence], OEFAI-TR-2000-3, [http://www.ofai.at/cgi-bin/get-tr?download=1&paper=oefai-tr-2000-31.pdf pdf] - Chapter 4, Evaluation Function Tuning</ref>, covering evaluation tuning in chapter 4.

=Playing Strength=
<span id="Playingstrength"></span>A difficulty in tuning and automated tuning of engine parameters is measuring [[Playing Strength|playing strength]]. Small sets of [[Test-Positions|test-positions]], formerly quite common for estimating the relative strength of chess programs, lack adequate diversity for a reliable strength prediction. In particular, solving test-positions does not necessarily correlate with practical playing strength in matches against other opponents. Measuring strength therefore requires playing many games against a reference opponent to determine the [[Match Statistics#ratio|win rate]] with a certain [https://en.wikipedia.org/wiki/Confidence_interval confidence]. The closer the strength of the two opponents, the more games are necessary to decide whether changed parameters or weights in one of them are an improvement; this may require several tens of thousands of games. Playing many games at ultra-short time controls has become the de facto standard with today's strong programs, as applied for instance in [[Stockfish|Stockfish's]] [[Stockfish#TestingFramework|Fishtest]], which uses the [[Match Statistics#SPRT|sequential probability ratio test]] (SPRT) to terminate a match early when possible <ref>[http://www.talkchess.com/forum/viewtopic.php?t=47885 Fishtest Distributed Testing Framework] by [[Marco Costalba]], [[CCC]], May 01, 2013</ref>.
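The idea behind the SPRT can be sketched in a few lines. The following is a simplified, illustrative implementation, not Fishtest's actual code: it treats a draw as half a win plus half a loss, whereas real frameworks model the win/draw/loss (or pentanomial) distribution properly.

```python
import math

def elo_to_score(elo):
    """Expected score for an Elo advantage under the logistic model."""
    return 1.0 / (1.0 + 10.0 ** (-elo / 400.0))

def sprt(results, elo0=0.0, elo1=5.0, alpha=0.05, beta=0.05):
    """Wald's SPRT over a stream of game results for the new version
    (1 = win, 0.5 = draw, 0 = loss).  H0: the change is worth elo0,
    H1: it is worth elo1.  Draws count as half a win plus half a loss,
    a simplification of the real trinomial treatment.
    Returns 'H1' (accept the change), 'H0' (reject), or 'continue'."""
    p0, p1 = elo_to_score(elo0), elo_to_score(elo1)
    lower = math.log(beta / (1.0 - alpha))        # accept H0 below this
    upper = math.log((1.0 - beta) / alpha)        # accept H1 above this
    llr = 0.0                                     # log-likelihood ratio
    for r in results:
        llr += r * math.log(p1 / p0) + (1.0 - r) * math.log((1.0 - p1) / (1.0 - p0))
        if llr >= upper:
            return 'H1'
        if llr <= lower:
            return 'H0'
    return 'continue'
```

With these default bounds, a run of clearly superior results terminates after a couple of hundred games, while an even match keeps the test running until the game budget is exhausted.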

=Parameter=
Quote by [[Ingo Althöfer]] <ref>[https://www.stmintz.com/ccc/index.php?id=475521 Re: Zappa Report] by [[Ingo Althöfer]], [[CCC]], December 30, 2005 » [[Zappa]]</ref> <ref>[[Ingo Althöfer]] ('''1993'''). ''On Telescoping Linear Evaluation Functions.'' [[ICGA Journal#16_2|ICCA Journal, Vol. 16, No. 2]], pp. 91-94</ref>:
It is one of the best arts to find the right SMALL set of parameters and to tune them.

Some 12 years ago I had a technical article on this ("On telescoping linear evaluation functions") in the [[ICGA Journal#16_2|ICCA Journal, Vol. 16, No. 2]], pp. 91-94, describing a theorem (of existence) which says that in case of linear evaluation functions with lots of terms there is always a small subset of the terms such that this set with the right parameters is almost as good as the full evaluation function.
<span id="Optimization"></span>
=Mathematical Optimization=
[https://en.wikipedia.org/wiki/Mathematical_optimization Mathematical optimization] methods in tuning consider the engine as a [https://en.wikipedia.org/wiki/Black_box black box].

==Methods==
* [[CLOP]]
* [[Genetic Programming#GeneticAlgorithm|Genetic Algorithms]]
* [[Genetic Programming#PBIL|PBIL]]
* [[Simulated Annealing]]
* [[SPSA]]
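Of the listed methods, [[SPSA]] is the simplest to sketch. The following minimal implementation is illustrative only (it is not any engine's actual tuner); `loss` stands for a noisy black-box objective, such as the negated score of a short engine match played with the given parameter vector:

```python
import random

def spsa_minimize(loss, theta, iterations=3000, a=0.2, c=0.1):
    """Minimal SPSA sketch.  `loss` is a noisy black-box objective and
    `theta` the initial parameter vector.  All components are perturbed
    at once by a random +/-1 vector, so one gradient estimate costs only
    two evaluations regardless of the number of parameters."""
    theta = list(theta)
    for k in range(1, iterations + 1):
        ak = a / k ** 0.602                    # standard gain schedules
        ck = c / k ** 0.101
        delta = [random.choice((-1.0, 1.0)) for _ in theta]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (loss(plus) - loss(minus)) / (2.0 * ck)   # directional slope
        # per-component estimate divides by delta_i, which for +/-1
        # is the same as multiplying by it
        theta = [t - ak * g * d for t, d in zip(theta, delta)]
    return theta
```

The two-evaluation gradient estimate is what makes SPSA attractive for engine tuning, where each evaluation of the objective means playing a match.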

==Instances==
* [[Falcon#GA|Genetic Algorithm in Falcon]]
* [[Stockfish's Tuning Method]]

==Advantages==
* Works with all engine parameters, including search
* Takes search-eval interaction into account
==Disadvantages==
* [https://en.wikipedia.org/wiki/Time_complexity Time complexity] issues with increasing number of weights to tune
<span id="ReinformentLearning"></span>
=Reinforcement Learning=
[[Reinforcement Learning|Reinforcement learning]], in particular [[Temporal Difference Learning|temporal difference learning]], has a long history in tuning evaluation weights in game programming, first seen in the late 1950s in [[Arthur Samuel|Arthur Samuel's]] [[Checkers]] player <ref>[[Arthur Samuel]] ('''1959'''). ''[http://domino.watson.ibm.com/tchjr/journalindex.nsf/600cc5649e2871db852568150060213c/39a870213169f45685256bfa00683d74!OpenDocument Some Studies in Machine Learning Using the Game of Checkers]''. IBM Journal July 1959</ref>. In self-play against a stable copy of itself, after each move, the weights of the evaluation function were adjusted so that the [[Score|score]] of the [[Root|root position]] after a [[Quiescence Search|quiescence search]] became closer to the score of the full search. This TD method was generalized and formalized by [[Richard Sutton]] in 1988 <ref>[[Richard Sutton]] ('''1988'''). ''Learning to Predict by the Methods of Temporal Differences''. [https://en.wikipedia.org/wiki/Machine_Learning_%28journal%29 Machine Learning], Vol. 3, No. 1, [http://webdocs.cs.ualberta.ca/~sutton/papers/sutton-88.pdf pdf]</ref>, who introduced the decay parameter '''λ''', determining what proportion of the score comes from the outcome of [https://en.wikipedia.org/wiki/Monte_Carlo_method Monte Carlo] simulated games, tapering between [https://en.wikipedia.org/wiki/Bootstrapping#Artificial_intelligence_and_machine_learning bootstrapping] (λ = 0) and Monte Carlo (λ = 1). [[Temporal Difference Learning#TDLamba|TD-λ]] was famously applied by [[Gerald Tesauro]] in his [[Backgammon]] program [https://en.wikipedia.org/wiki/TD-Gammon TD-Gammon] <ref>[[Gerald Tesauro]] ('''1992'''). ''Temporal Difference Learning of Backgammon Strategy''. [http://www.informatik.uni-trier.de/~ley/db/conf/icml/ml1992.html#Tesauro92 ML 1992]</ref> <ref>[[Gerald Tesauro]] ('''1994'''). ''TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play''. [http://www.informatik.uni-trier.de/~ley/db/journals/neco/neco6.html#Tesauro94 Neural Computation Vol. 6, No. 2]</ref>; its [[Minimax|minimax]] adaption [[Temporal Difference Learning#TDLeaf|TD-Leaf]] was successfully used in evaluation tuning of chess programs <ref>[[Don Beal]], [[Martin C. Smith]] ('''1999'''). ''Learning Piece-Square Values using Temporal Differences.'' [[ICGA Journal#22_4|ICCA Journal, Vol. 22, No. 4]]</ref>, with [[KnightCap]] <ref>[[Jonathan Baxter]], [[Andrew Tridgell]], [[Lex Weaver]] ('''1998'''). ''Experiments in Parameter Learning Using Temporal Differences''. [[ICGA Journal#21_2|ICCA Journal, Vol. 21, No. 2]], [http://cs.anu.edu.au/%7ELex.Weaver/pub_sem/publications/ICCA-98_equiv.pdf pdf]</ref> and [[CilkChess]] <ref>[http://supertech.csail.mit.edu/chess/ The Cilkchess Parallel Chess Program]</ref> as prominent examples.
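A minimal sketch of such a temporal difference update for a linear evaluation follows. The `features` and `scores` arrays are hypothetical inputs from one game; for TD-Leaf they would hold the feature vectors and evaluations of the principal-variation leaves rather than of the root positions:

```python
def td_lambda_update(weights, features, scores, alpha=0.01, lam=0.7):
    """Offline TD(lambda) sketch for a linear evaluation V(s) = w . phi(s).
    `features[t]` is the feature vector of position t of one game (for
    TD-Leaf, of the principal-variation leaf found when searching it)
    and `scores[t]` its evaluation.  Each temporal difference between
    successive positions is credited back to all earlier positions,
    decayed by `lam` per ply."""
    for t in range(len(scores) - 1):
        td_error = scores[t + 1] - scores[t]      # temporal difference d_t
        decay = 1.0
        for j in range(t, -1, -1):                # credit states j <= t
            for i, f in enumerate(features[j]):
                weights[i] += alpha * decay * td_error * f
            decay *= lam
    return weights
```

With λ = 0 only the immediately preceding position is adjusted (bootstrapping); with λ = 1 every difference propagates undamped back to the start of the game.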

==Instances==
* [[Temporal Difference Learning#TDLamba|TD-λ]]
* [[Temporal Difference Learning#TDLeaf|TD-Leaf]]
* [[Meep#RootStrap|RootStrap]]
* [[Meep#TreeStrap|TreeStrap]]
<span id="Engines"></span>
==Engines==
* [[CilkChess]]
* [[EXchess]] <ref>[[EXchess]] also uses [[CLOP]]</ref>
* [[FUSCsharp|FUSc#]]
* [[Green Light Chess]]
* [[KnightCap]]
* [[Meep]]
* [[NeuroChess]]
* [[SAL]]
* [[Tao]]
* [[TDChess]]
<span id="SupervisedLearning"></span>
=Supervised Learning=
==Move Adaption==
<span id="MoveAdaption"></span>One [[Supervised Learning|supervised learning]] method considers desired moves from a set of positions, typically from grandmaster games, and tries to adjust the evaluation weights so that, for instance, a one-ply search agrees with the desired move. Having already pioneered reinforcement learning some years earlier, [[Arthur Samuel]] described move adaption in 1967 as used in the second version of his checkers player <ref>[[Arthur Samuel]] ('''1967'''). ''Some Studies in Machine Learning. Using the Game of Checkers. II-Recent Progress''. [http://researcher.watson.ibm.com/researcher/files/us-beygel/samuel-checkers.pdf pdf]</ref>, where a structure of stacked linear evaluation functions was trained by computing a correlation measure based on the number of times a feature rated an alternative move higher than the desired move played by an expert <ref>[[Johannes Fürnkranz]] ('''2000'''). ''Machine Learning in Games: A Survey''. [https://en.wikipedia.org/wiki/Austrian_Research_Institute_for_Artificial_Intelligence Austrian Research Institute for Artificial Intelligence], OEFAI-TR-2000-3, [http://www.ofai.at/cgi-bin/get-tr?download=1&paper=oefai-tr-2000-31.pdf pdf]</ref>. In chess, move adaption was first described by [[Thomas Nitsche]] in 1982 <ref>[[Thomas Nitsche]] ('''1982'''). ''A Learning Chess Program.'' [[Advances in Computer Chess 3]]</ref>, and with some extensions by [[Tony Marsland]] in 1985 <ref>[[Tony Marsland]] ('''1985'''). ''Evaluation-Function Factors''. [[ICGA Journal#8_2|ICCA Journal, Vol. 8, No. 2]], [http://webdocs.cs.ualberta.ca/~tony/OldPapers/evaluation.pdf pdf]</ref>. [[Eval Tuning in Deep Thought]], as mentioned by [[Feng-hsiung Hsu]] et al. in 1990 <ref>[[Feng-hsiung Hsu]], [[Thomas Anantharaman]], [[Murray Campbell]], [[Andreas Nowatzyk]] ('''1990'''). ''[http://www.disi.unige.it/person/DelzannoG/AI2/hsu.html A Grandmaster Chess Machine]''. [[Scientific American]], Vol. 263, No. 4, pp. 44-50. ISSN 0036-8733.</ref> and later published by [[Andreas Nowatzyk]], is also based on an extended form of move adaption <ref>see ''2.1 Learning from Desired Moves in Chess'' in [[Kunihito Hoki]], [[Tomoyuki Kaneko]] ('''2014'''). ''[https://www.jair.org/papers/paper4217.html Large-Scale Optimization for Evaluation Functions with Minimax Search]''. [https://www.jair.org/vol/vol49.html JAIR Vol. 49]</ref>. [[Jonathan Schaeffer|Jonathan Schaeffer's]] and [[Paul Lu|Paul Lu's]] efforts to make Deep Thought's approach work for [https://en.wikipedia.org/wiki/Chinook_%28draughts_player%29 Chinook] in 1990 failed <ref>[[Jonathan Schaeffer]], [[Joe Culberson]], [[Norman Treloar]], [[Brent Knight]], [[Paul Lu]], [[Duane Szafron]] ('''1992'''). ''A World Championship Caliber Checkers Program''. [https://en.wikipedia.org/wiki/Artificial_Intelligence_%28journal%29 Artificial Intelligence], Vol. 53, Nos. 2-3, [http://webdocs.cs.ualberta.ca/%7Ejonathan/Papers/Papers/chinook.ps ps]</ref> - nothing seemed to produce results as good as their hand-tuned effort <ref>[[Jonathan Schaeffer]] ('''1997, 2009'''). ''[http://www.springer.com/computer/ai/book/978-0-387-76575-4 One Jump Ahead]''. 7. The Case for the Prosecution, pp. 111-114</ref>.
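The core idea can be sketched as follows. This perceptron-style update is purely illustrative; the historical programs used correlation measures and more elaborate schemes. `candidates` maps each legal move to the feature vector of the position it leads to:

```python
def move_adaption_step(weights, candidates, desired, alpha=0.01):
    """Move-adaption sketch.  `candidates` maps each legal move to the
    feature vector of the position it leads to; `desired` is the move
    the expert played.  Whenever another move scores at least as high
    under the current linear evaluation, the weights are nudged toward
    the desired move's features and away from the rival's."""
    def score(phi):
        return sum(w * f for w, f in zip(weights, phi))
    phi_desired = candidates[desired]
    for move, phi in candidates.items():
        if move != desired and score(phi) >= score(phi_desired):
            for i in range(len(weights)):
                weights[i] += alpha * (phi_desired[i] - phi[i])
    return weights
```

Iterating this step over a large collection of expert positions pushes the evaluation toward agreeing with the expert's choice at a shallow search.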

==Value Adaption==
<span id="ValueAdaption"></span>A second supervised learning approach to tuning evaluation weights is based on [https://en.wikipedia.org/wiki/Regression regression] of the desired value, e.g. the final outcome from huge sets of positions from quality games, or other information supplied by a supervisor, e.g. in the form of annotations from [https://en.wikipedia.org/wiki/Chess_annotation_symbols#Position_evaluation_symbols position evaluation symbols]. Often, value adaption is reinforced by determining an expected outcome by self-play <ref>[[Bruce Abramson]] ('''1990'''). ''[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=44404 Expected-Outcome: A General Model of Static Evaluation]''. [[IEEE#TPAMI|IEEE Transactions on Pattern Analysis and Machine Intelligence]], Vol. 12, No. 2</ref>.

==Advantages==
* Can modify any number of weights simultaneously - cost largely independent of the number of weights
==Disadvantages==
* Requires a source for the labeled data
* Can only be used for evaluation weights or anything else that can be labeled
* Does not work optimally in combination with search
<span id="Regression"></span>
=Regression=
[https://en.wikipedia.org/wiki/Regression_analysis Regression analysis] is a [https://en.wikipedia.org/wiki/Statistics statistical process] with a substantial overlap with machine learning to [https://en.wikipedia.org/wiki/Prediction predict] the value of a [https://en.wikipedia.org/wiki/Dependent_and_independent_variables Y variable] (output), given known value pairs of the X and Y variables. Parameter estimation in regression analysis can be formulated as the [https://en.wikipedia.org/wiki/Mathematical_optimization minimization] of a [https://en.wikipedia.org/wiki/Loss_function cost or loss function] over a [https://en.wikipedia.org/wiki/Training_set training set] <ref>[https://en.wikipedia.org/wiki/Loss_function#Use_in_statistics Loss function - Use in statistics - Wikipedia]</ref>, such as the [https://en.wikipedia.org/wiki/Mean_squared_error mean squared error] or the [https://en.wikipedia.org/wiki/Cross_entropy#Cross-entropy_error_function_and_logistic_regression cross-entropy error function] for [https://en.wikipedia.org/wiki/Binary_classification binary classification] <ref>"Using [https://en.wikipedia.org/wiki/Cross_entropy#Cross-entropy_error_function_and_logistic_regression cross-entropy error function] instead of [https://en.wikipedia.org/wiki/Mean_squared_error sum of squares] leads to faster training and improved generalization", from [https://en.wikipedia.org/wiki/Sargur_Srihari Sargur Srihari], [http://www.cedar.buffalo.edu/~srihari/CSE574/Chap5/Chap5.2-Training.pdf Neural Network Training] (pdf)</ref>. The minimization is implemented by [[Iteration|iterative]] optimization [[Algorithms|algorithms]] or [https://en.wikipedia.org/wiki/Metaheuristic metaheuristics] such as [https://en.wikipedia.org/wiki/Iterated_local_search iterated local search], the [https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm Gauss–Newton algorithm], or the [https://en.wikipedia.org/wiki/Conjugate_gradient_method conjugate gradient method].
<span id="LinearRegression"></span>
==Linear Regression==
{|
|-
| [[FILE:Linear regression.svg|border|left|thumb|baseline|300px|[https://en.wikipedia.org/wiki/Linear_regression Linear Regression] <ref>Random data points and their [https://en.wikipedia.org/wiki/Linear_regression linear regression]. [https://commons.wikimedia.org/wiki/File:Linear_regression.svg Created] with [https://en.wikipedia.org/wiki/Sage_%28mathematics_software%29 Sage] by Sewaqu, November 5, 2010, [https://en.wikipedia.org/wiki/Wikimedia_Commons Wikimedia Commons]</ref> ]]
| style="vertical-align:top;" | The supervised problem of regression applied to [[Automated Tuning#MoveAdaption|move adaption]] was used by [[Thomas Nitsche]] in 1982, minimizing the [https://en.wikipedia.org/wiki/Mean_squared_error mean squared error] of a cost function considering the program’s and a grandmaster’s choice of moves, as mentioned, extended by [[Tony Marsland]] in 1985, and later by the [[Deep Thought]] team. Regression used to [[Automated Tuning#ValueAdaption|adapt desired values]] was described by [[Donald H. Mitchell]] in his 1984 masters thesis on evaluation features in [[Othello]], cited by [[Michael Buro]] <ref>[[Michael Buro]] ('''1995'''). ''[http://www.jair.org/papers/paper179.html Statistical Feature Combination for the Evaluation of Game Positions]''. [https://en.wikipedia.org/wiki/Journal_of_Artificial_Intelligence_Research JAIR], Vol. 3</ref> <ref>[[Donald H. Mitchell]] ('''1984'''). ''Using Features to Evaluate Positions in Experts' and Novices' Othello Games''. Masters thesis, Department of Psychology, [[Northwestern University]], Evanston, IL</ref>. [[Jens Christensen]] applied [https://en.wikipedia.org/wiki/Linear_regression linear regression] to chess in 1986 to learn [[Point Value|point values]] in the domain of [[Temporal Difference Learning|temporal difference learning]] <ref>[[Jens Christensen]] ('''1986'''). ''[http://link.springer.com/chapter/10.1007/978-1-4613-2279-5_9?no-access=true Learning Static Evaluation Functions by Linear Regression]''. in [[Tom Mitchell]], [[Jaime Carbonell]], [[Ryszard Michalski]] ('''1986'''). ''[http://link.springer.com/book/10.1007/978-1-4613-2279-5 Machine Learning: A Guide to Current Research]''. The Kluwer International Series in Engineering and Computer Science, Vol. 12</ref>.
|}
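A minimal sketch of such a least-squares fit, in the spirit of learning point values by linear regression, follows. The samples and names are illustrative: each sample pairs a feature vector (e.g. material differences) with a desired evaluation, and plain batch gradient descent stands in for the proper solvers a real experiment would use:

```python
def fit_linear_eval(samples, n_features, alpha=0.01, epochs=2000):
    """Least-squares sketch.  Each sample is (features, target); the
    weight vector minimizing the mean squared error is found by plain
    batch gradient descent on sum((w . phi - y)^2) / N."""
    w = [0.0] * n_features
    for _ in range(epochs):
        grad = [0.0] * n_features
        for phi, y in samples:
            err = sum(wi * fi for wi, fi in zip(w, phi)) - y  # residual
            for i, fi in enumerate(phi):
                grad[i] += 2.0 * err * fi                     # d(err^2)/dw_i
        for i in range(n_features):
            w[i] -= alpha * grad[i] / len(samples)
    return w
```

Given consistent targets, the fitted weights recover the underlying values exactly; with noisy game outcomes as targets they settle on the least-squares compromise.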
<span id="LogisticRegression"></span>
==Logistic Regression==
{|
|-
| [[FILE:SigmoidTexelTune.gif|border|left|thumb|baseline|300px|link=http://wolfr.am/1al3d5B|[https://en.wikipedia.org/wiki/Logistic_function Logistic function] <ref>[http://wolfr.am/1al3d5B log-linear 1 / (1 + 10^(-s/4)) , s=-10 to 10] from [https://en.wikipedia.org/wiki/Wolfram_Alpha Wolfram|Alpha]</ref> ]]
| style="vertical-align:top;" | Since the relationship between [[Pawn Advantage, Win Percentage, and ELO|win percentage and pawn advantage]] is assumed to follow a [https://en.wikipedia.org/wiki/Logistic_model logistic model], one may treat static evaluation as a [[Neural Networks#Perceptron|single-layer perceptron]] or single [https://en.wikipedia.org/wiki/Artificial_neuron neuron] [[Neural Networks|ANN]] with the common [https://en.wikipedia.org/wiki/Logistic_function logistic] [https://en.wikipedia.org/wiki/Activation_function activation function], performing the perceptron algorithm to train it <ref>[http://www.talkchess.com/forum/viewtopic.php?t=56168&start=36 Re: Piece weights with regression analysis (in Russian)] by [[Fabien Letouzey]], [[CCC]], May 04, 2015</ref>. [https://en.wikipedia.org/wiki/Logistic_regression Logistic regression] in evaluation tuning was first elaborated by [[Michael Buro]] in 1995 <ref>[[Michael Buro]] ('''1995'''). ''[http://www.jair.org/papers/paper179.html Statistical Feature Combination for the Evaluation of Game Positions]''. [https://en.wikipedia.org/wiki/Journal_of_Artificial_Intelligence_Research JAIR], Vol. 3</ref>; it proved successful in the game of [[Othello]] in comparison with [[Mathematician#RFisher|Fisher's]] [https://en.wikipedia.org/wiki/Kernel_Fisher_discriminant_analysis linear discriminant] and the quadratic [https://en.wikipedia.org/wiki/Discriminant discriminant] function for [https://en.wikipedia.org/wiki/Normal_distribution normally distributed] features, and gave its name to his Othello program ''Logistello'' <ref>[https://skatgame.net/mburo/log.html LOGISTELLO's Homepage]</ref>. In computer chess, logistic regression was proposed by [[Miguel A. Ballicora]] in a 2009 [[CCC]] post, as applied to [[Gaviota]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=27266&postdays=0&postorder=asc&topic_view=&start=11 Re: Insanity... or Tal style?] by [[Miguel A. Ballicora]], [[CCC]], April 02, 2009</ref>, was independently described by [[Amir Ban]] in 2012 for [[Junior|Junior's]] evaluation learning <ref>[[Amir Ban]] ('''2012'''). ''[http://www.ratio.huji.ac.il/node/2362 Automatic Learning of Evaluation, with Applications to Computer Chess]''. Discussion Paper 613, [https://en.wikipedia.org/wiki/Hebrew_University_of_Jerusalem The Hebrew University of Jerusalem] - Center for the Study of Rationality, [https://en.wikipedia.org/wiki/Givat_Ram Givat Ram]</ref>, and was explicitly mentioned by [[Álvaro Begué]] in a January 2014 [[CCC]] discussion <ref>[http://www.talkchess.com/forum/viewtopic.php?t=50823&start=10 Re: How Do You Automatically Tune Your Evaluation Tables] by [[Álvaro Begué]], [[CCC]], January 08, 2014</ref>, when [[Peter Österlund]] explained [[Texel's Tuning Method]] <ref>[http://www.talkchess.com/forum/viewtopic.php?topic_view=threads&p=555522&t=50823 The texel evaluation function optimization algorithm] by [[Peter Österlund]], [[CCC]], January 31, 2014</ref>, which subsequently popularized logistic regression tuning in computer chess. [[Vladimir Medvedev|Vladimir Medvedev's]] [[Point Value by Regression Analysis]] <ref>[http://habrahabr.ru/post/254753/ Определяем веса шахматных фигур регрессионным анализом / Хабрахабр] by [[Vladimir Medvedev|WinPooh]], April 27, 2015 (Russian)</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=56168 Piece weights with regression analysis (in Russian)] by [[Vladimir Medvedev]], [[CCC]], April 30, 2015</ref> experiments showed why the [https://en.wikipedia.org/wiki/Logistic_function logistic function] is appropriate, and further used [https://en.wikipedia.org/wiki/Cross_entropy cross-entropy] and [https://en.wikipedia.org/wiki/Regularization_%28mathematics%29 regularization].
|}
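A sketch of the Texel-style objective and one coordinate-descent step follows. It is an illustration of the idea, not Texel's actual code; all names (`evaluate_with`, `params`) are hypothetical, and the scaling constant K is assumed to have been fitted beforehand:

```python
import math

def texel_error(K, evaluate, positions):
    """Texel-style objective: mean squared error between the game result
    (0, 0.5 or 1) and a logistic squash of the evaluation in centipawns.
    `evaluate` maps a position to centipawns; K scales the squash."""
    def sigmoid(cp):
        return 1.0 / (1.0 + 10.0 ** (-K * cp / 400.0))
    return sum((result - sigmoid(evaluate(pos))) ** 2
               for pos, result in positions) / len(positions)

def tune_one_param(params, name, evaluate_with, positions, K=1.0, step=1):
    """One coordinate-descent step: try +/- `step` on a single named
    parameter and keep whichever value lowers the error."""
    best = texel_error(K, lambda p: evaluate_with(params, p), positions)
    for delta in (step, -step):
        params[name] += delta
        err = texel_error(K, lambda p: evaluate_with(params, p), positions)
        if err < best:
            return params, err          # keep the improvement
        params[name] -= delta           # revert
    return params, best
```

Sweeping such steps over all evaluation parameters until no single-step change lowers the error is the essence of the method; the expensive part in practice is evaluating millions of positions per sweep.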

==Instances==
* [[Arasan#Tuning|Arasan's Tuning]]
* [[Eval Tuning in Deep Thought]]
* [[Minimax Tree Optimization]] (MMTO or the Bonanza-Method in [[Shogi]])
* [[Point Value by Regression Analysis]]
* [[RuyTune]]
* [[Texel's Tuning Method]]
* [[Winter]]

=See also=
* [[Dynamic Programming]]
* [[Evaluation]]
* [[Iteration]]
* [[Knowledge]]
* [[Learning]]
* [[Match Statistics]]
* [[Neural Networks]]
* [[Trial and Error]]

=Publications=
==1959==
* [[Arthur Samuel]] ('''1959'''). ''[http://domino.watson.ibm.com/tchjr/journalindex.nsf/600cc5649e2871db852568150060213c/39a870213169f45685256bfa00683d74!OpenDocument Some Studies in Machine Learning Using the Game of Checkers]''. IBM Journal July 1959
==1960 ...==
* [[Arnold K. Griffith]] ('''1966'''). ''[http://dspace.mit.edu/handle/1721.1/5896#files-area A new Machine-Learning Technique applied to the Game of Checkers]''. [[Massachusetts Institute of Technology|MIT]], [https://en.wikipedia.org/wiki/MIT_Computer_Science_and_Artificial_Intelligence_Laboratory#Project_MAC Project MAC], MAC-M-293
* [[Arthur Samuel]] ('''1967'''). ''Some Studies in Machine Learning. Using the Game of Checkers. II-Recent Progress''. [http://researcher.watson.ibm.com/researcher/files/us-beygel/samuel-checkers.pdf pdf]
==1970 ...==
* [[Arnold K. Griffith]] ('''1974'''). ''[http://www.sciencedirect.com/science/article/pii/0004370274900277 A Comparison and Evaluation of Three Machine Learning Procedures as Applied to the Game of Checkers]''. [https://en.wikipedia.org/wiki/Artificial_Intelligence_%28journal%29 Artificial Intelligence], Vol. 5, No. 2
==1980 ...==
* [[Thomas Nitsche]] ('''1982'''). ''A Learning Chess Program.'' [[Advances in Computer Chess 3]]
* [[Donald H. Mitchell]] ('''1984'''). ''Using Features to Evaluate Positions in Experts' and Novices' Othello Games''. Masters thesis, Department of Psychology, [[Northwestern University]], Evanston, IL
==1985 ...==
* [[Tony Marsland]] ('''1985'''). ''Evaluation-Function Factors''. [[ICGA Journal#8_2|ICCA Journal, Vol. 8, No. 2]], [http://webdocs.cs.ualberta.ca/~tony/OldPapers/evaluation.pdf pdf]
* [[Jens Christensen]], [[Richard Korf]] ('''1986'''). ''A Unified Theory of Heuristic Evaluation functions and Its Applications to Learning.'' Proceedings of the [http://www.aaai.org/Conferences/AAAI/aaai86.php AAAI-86], pp. 148-152, [http://www.aaai.org/Papers/AAAI/1986/AAAI86-023.pdf pdf]
* [[Jens Christensen]] ('''1986'''). ''[http://link.springer.com/chapter/10.1007/978-1-4613-2279-5_9?no-access=true Learning Static Evaluation Functions by Linear Regression]''. in [[Tom Mitchell]], [[Jaime Carbonell]], [[Ryszard Michalski]] ('''1986'''). ''[http://link.springer.com/book/10.1007/978-1-4613-2279-5 Machine Learning: A Guide to Current Research]''. The Kluwer International Series in Engineering and Computer Science, Vol. 12
* [[Dap Hartmann]] ('''1987'''). ''How to Extract Relevant Knowledge from Grandmaster Games. Part 1: Grandmasters have Insights - the Problem is what to Incorporate into Practical Problems.'' [[ICGA Journal#10_1|ICCA Journal, Vol. 10, No. 1]]
* [[Dap Hartmann]] ('''1987'''). ''How to Extract Relevant Knowledge from Grandmaster Games. Part 2: the Notion of Mobility, and the Work of [[Adriaan de Groot|De Groot]] and [[Eliot Slater|Slater]]''. [[ICGA Journal#10_2|ICCA Journal, Vol. 10, No. 2]]
* [[Bruce Abramson]], [[Richard Korf]] ('''1987'''). ''A Model of Two-Player Evaluation Functions.'' [http://www.aaai.org/Conferences/AAAI/aaai87.php AAAI-87]. [http://www.aaai.org/Papers/AAAI/1987/AAAI87-016.pdf pdf]
* [[Bruce Abramson]] ('''1988'''). ''Learning Expected-Outcome Evaluators in Chess.'' Proceedings of the 1988 AAAI Spring Symposium Series: Computer Game Playing, 26-28.
* [[Kai-Fu Lee]], [[Sanjoy Mahajan]] ('''1988'''). ''[http://www.sciencedirect.com/science/article/pii/0004370288900768 A Pattern Classification Approach to Evaluation Function Learning]''. [https://en.wikipedia.org/wiki/Artificial_Intelligence_%28journal%29 Artificial Intelligence], Vol. 36, No. 1
* [[Richard Sutton]] ('''1988'''). ''Learning to Predict by the Methods of Temporal Differences''. [https://en.wikipedia.org/wiki/Machine_Learning_%28journal%29 Machine Learning], Vol. 3, No. 1, [http://webdocs.cs.ualberta.ca/~sutton/papers/sutton-88.pdf pdf]
* [[Bruce Abramson]] ('''1989'''). ''On Learning and Testing Evaluation Functions.'' Proceedings of the Sixth Israeli Conference on Artificial Intelligence, 1989, 7-16.
* [[Maarten van der Meulen]] ('''1989'''). ''Weight Assessment in Evaluation Functions''. [[Advances in Computer Chess 5]]
==1990 ...==
* [[Bruce Abramson]] ('''1990'''). ''[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=44404 Expected-Outcome: A General Model of Static Evaluation]''. [[IEEE#TPAMI|IEEE Transactions on Pattern Analysis and Machine Intelligence]], Vol. 12, No. 2
* [[Bruce Abramson]] ('''1990'''). ''An Analysis of Expected-Outcome.'' Journal of Experimental and Theoretical Artificial Intelligence 2: 55-73.
* [[Bruce Abramson]] ('''1990'''). ''On Learning and Testing Evaluation Functions.'' Journal of Experimental and Theoretical Artificial Intelligence, Vol. 2
* [[Feng-hsiung Hsu]], [[Thomas Anantharaman]], [[Murray Campbell]], [[Andreas Nowatzyk]] ('''1990'''). ''[http://www.disi.unige.it/person/DelzannoG/AI2/hsu.html A Grandmaster Chess Machine]''. [[Scientific American]], Vol. 263, No. 4, pp. 44-50. ISSN 0036-8733.
* [[Bruce Abramson]] ('''1991'''). ''The Expected-Outcome Model of Two-Player Games.'' Part of the series, Research Notes in Artificial Intelligence (San Mateo: Morgan Kaufmann, 1991).
* [[Alex van Tiggelen]] ('''1991'''). ''Neural Networks as a Guide to Optimization - The Chess Middle Game Explored''. [[ICGA Journal#14_3|ICCA Journal, Vol. 14, No. 3]]
* [[William Tunstall-Pedoe]] ('''1991'''). ''Genetic Algorithms Optimizing Evaluation Functions''. [[ICGA Journal#14_3|ICCA Journal, Vol. 14, No. 3]]
* [[Paul E. Utgoff]], [http://dblp.uni-trier.de/pers/hd/c/Clouse:Jeffery_A= Jeffery A. Clouse] ('''1991'''). ''[http://scholarworks.umass.edu/cs_faculty_pubs/193/ Two Kinds of Training Information for Evaluation Function Learning]''. [https://en.wikipedia.org/wiki/University_of_Massachusetts_Amherst University of Massachusetts, Amherst], Proceedings of the AAAI 1991
* [[Gerald Tesauro]] ('''1992'''). ''Temporal Difference Learning of Backgammon Strategy''. [http://www.informatik.uni-trier.de/~ley/db/conf/icml/ml1992.html#Tesauro92 ML 1992]
* [[Ingo Althöfer]] ('''1993'''). ''On Telescoping Linear Evaluation Functions.'' [[ICGA Journal#16_2|ICCA Journal, Vol. 16, No. 2]], pp. 91-94
* [[Peter Mysliwietz]] ('''1994'''). ''Konstruktion und Optimierung von Bewertungsfunktionen beim Schach.'' Ph.D. thesis (German)
==1995 ...==
* [[Michael Buro]] ('''1995'''). ''[http://www.jair.org/papers/paper179.html Statistical Feature Combination for the Evaluation of Game Positions]''. [https://en.wikipedia.org/wiki/Journal_of_Artificial_Intelligence_Research JAIR], Vol. 3
* [[Chris McConnell]] ('''1995'''). ''Tuning Evaluation Functions for Search''. [http://www.cs.cmu.edu/afs/cs.cmu.edu/user/ccm/www/papers/ml.ps ps] or [http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=9B2A0CCA8B1AFB594A879799D974111A?doi=10.1.1.53.9742&rep=rep1&type=pdf pdf] from [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.9742 CiteSeerX]
* [[Chris McConnell]] ('''1995'''). ''Tuning Evaluation Functions for Search'' (Talk), [http://www.cs.cmu.edu/afs/cs.cmu.edu/user/ccm/www/talks/tune.ps ps]
* [[Johannes Fürnkranz]] ('''1996'''). ''Machine Learning in Computer Chess: The Next Generation.'' [[ICGA Journal#19_3|ICCA Journal, Vol. 19, No. 3]], [http://www.ofai.at/cgi-bin/get-tr?download=1&paper=oefai-tr-96-11.ps.gz zipped ps]
* [[Don Beal]], [[Martin C. Smith]] ('''1997'''). ''Learning Piece Values Using Temporal Differences''. [[ICGA Journal#20_3|ICCA Journal, Vol. 20, No. 3]]
* [[Thomas Anantharaman]] ('''1997'''). ''Evaluation Tuning for Computer Chess: Linear Discriminant Methods''. [[ICGA Journal#20_4|ICCA Journal, Vol. 20, No. 4]]
* [[Jonathan Baxter]], [[Andrew Tridgell]], [[Lex Weaver]] ('''1998'''). ''Experiments in Parameter Learning Using Temporal Differences''. [[ICGA Journal#21_2|ICCA Journal, Vol. 21, No. 2]], [http://cs.anu.edu.au/%7ELex.Weaver/pub_sem/publications/ICCA-98_equiv.pdf pdf]
* [[Michael Buro]] ('''1998'''). ''[http://link.springer.com/chapter/10.1007/3-540-48957-6_8 From Simple Features to Sophisticated Evaluation Functions]''. [[CG 1998]], [https://skatgame.net/mburo/ps/glem.pdf pdf]
* [[James C. Spall]] ('''1998'''). ''[http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=705889&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D705889 Implementation of the Simultaneous Perturbation Algorithm for Stochastic Optimization]''. [[IEEE#TOCAES|IEEE Transactions on Aerospace and Electronic Systems]], [http://www.jhuapl.edu/spsa/PDF-SPSA/Spall_Implementation_of_the_Simultaneous.PDF pdf] <ref>[https://github.com/zamar/spsa SPSA Tuner for Stockfish Chess Engine] by [[Joona Kiiski]]</ref>
* [[Don Beal]], [[Martin C. Smith]] ('''1999'''). ''Learning Piece-Square Values using Temporal Differences.'' [[ICGA Journal#22_4|ICCA Journal, Vol. 22, No. 4]]
==2000 ...==
* [[Johannes Fürnkranz]] ('''2000'''). ''Machine Learning in Games: A Survey''. [https://en.wikipedia.org/wiki/Austrian_Research_Institute_for_Artificial_Intelligence Austrian Research Institute for Artificial Intelligence], OEFAI-TR-2000-3, [http://www.ofai.at/cgi-bin/get-tr?download=1&paper=oefai-tr-2000-31.pdf pdf]
* [[Robert Levinson]], [[Ryan Weber]] ('''2000'''). ''[http://link.springer.com/chapter/10.1007/3-540-45579-5_9 Chess Neighborhoods, Function Combination, and Reinforcement Learning]''. [[CG 2000]], [https://users.soe.ucsc.edu/~levinson/Papers/CNFCRL.pdf pdf]
* [[Johannes Fürnkranz]], [[Miroslav Kubat]] (eds.) ('''2001'''). ''[https://www.novapublishers.com/catalog/product_info.php?products_id=720 Machines that Learn to Play Games]''. Advances in Computation: Theory and Practice, Vol. 8. [https://en.wikipedia.org/wiki/Nova_Publishers NOVA Science Publishers]
: [[Gerald Tesauro]] ('''2001'''). ''[http://dl.acm.org/citation.cfm?id=644397 Comparison Training of Chess Evaluation Functions]''. » [[SCP]], [[Deep Blue]]
* [[Graham Kendall]], [[Glenn Whitwell]] ('''2001'''). ''[http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=934299&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D934299 An Evolutionary Approach for the Tuning of a Chess Evaluation Function using Population Dynamics]''. Proceedings of the 2001 Congress on Evolutionary Computation, Vol. 2, [http://red.cs.nott.ac.uk/~gxk/papers/cec2001chess.pdf pdf]
* [[Yngvi Björnsson]], [[Tony Marsland]] ('''2001'''). ''Learning Search Control in Adversary Games''. [[Advances in Computer Games 9]], pp. 157-174. [http://www.ru.is/faculty/yngvi/pdf/BjornssonM01b.pdf pdf]
* [[Michael Buro]] ('''2002'''). ''Improving Mini-max Search by Supervised Learning.'' [https://en.wikipedia.org/wiki/Artificial_Intelligence_%28journal%29 Artificial Intelligence], Vol. 134, No. 1, [http://www.cs.ualberta.ca/~mburo/ps/logaij.pdf pdf]
* [[Dave Gomboc]], [[Tony Marsland]], [[Michael Buro]] ('''2003'''). ''Evaluation Function Tuning via Ordinal Correlation''. [[Advances in Computer Games 10]], [http://www.top-5000.nl/ps/Dave%20Gomboc%20-%20Evaluation%20Tuning.pdf pdf]
* [[Dave Gomboc]] ('''2004'''). ''Tuning Evaluation Functions by Maximizing Concordance''. M.Sc. Thesis, [[University of Alberta]]
* [[Adam Marczyk]] ('''2004'''). ''[http://www.talkorigins.org/faqs/genalg/genalg.html Genetic Algorithms and Evolutionary Computation]'' from the [https://en.wikipedia.org/wiki/TalkOrigins_Archive TalkOrigins Archive]
* [[Petr Aksenov]] ('''2004'''). ''[http://joypub.joensuu.fi/publications/masters_thesis/aksenov_genetic/index_en.html Genetic algorithms for optimising chess position scoring]'', Master's thesis, [ftp://cs.joensuu.fi/pub/Theses/2004_MSc_Aksenov_Petr.pdf pdf]
* [[Mathieu Autonès]], [[Aryel Beck]], [[Phillippe Camacho]], [[Nicolas Lassabe]], [[Hervé Luga]], [[François Scharffe]] ('''2004'''). ''[http://link.springer.com/chapter/10.1007/978-3-540-24650-3_1 Evaluation of Chess Position by Modular Neural Network Generated by Genetic Algorithm]''. [http://www.informatik.uni-trier.de/~ley/db/conf/eurogp/eurogp2004.html#AutonesBCLLS04 EuroGP 2004]
* [[Henk Mannen]], [[Marco Wiering]] ('''2004'''). ''[http://scholar.google.com/citations?view_op=view_citation&hl=en&user=xVas0I8AAAAJ&cstart=20&pagesize=80&citation_for_view=xVas0I8AAAAJ:7PzlFSSx8tAC Learning to play chess using TD(λ)-learning with database games]''. [http://students.uu.nl/en/hum/cognitive-artificial-intelligence Cognitive Artificial Intelligence], [https://en.wikipedia.org/wiki/Utrecht_University Utrecht University], Benelearn’04
==2005 ...==
* [[Dave Gomboc]], [[Michael Buro]], [[Tony Marsland]] ('''2005'''). ''Tuning Evaluation Functions by Maximizing Concordance''. [https://en.wikipedia.org/wiki/Theoretical_Computer_Science_%28journal%29 Theoretical Computer Science], Vol. 349, No. 2, [http://www.cs.ualberta.ca/%7Emburo/ps/tcs-learn.pdf pdf]
* [[Jeff Rollason]] ('''2005'''). ''[http://www.aifactory.co.uk/newsletter/2005_03_hill-climbing.htm Evaluation by Hill-climbing: Getting the right move by solving micro-problems]''. [[AI Factory]], Autumn 2005
* [[Levente Kocsis]], [[Csaba Szepesvári]], [[Mark Winands]] ('''2005'''). ''[http://link.springer.com/chapter/10.1007/11922155_4 RSPSA: Enhanced Parameter Optimization in Games]''. [[Advances in Computer Games 11]], [http://www.sztaki.hu/~szcsaba/papers/rspsa_acg.pdf pdf]
'''2006'''
* [[Levente Kocsis]], [[Csaba Szepesvári]] ('''2006'''). ''[http://link.springer.com/article/10.1007/s10994-006-6888-8 Universal Parameter Optimisation in Games Based on SPSA]''. [https://en.wikipedia.org/wiki/Machine_Learning_%28journal%29 Machine Learning], Special Issue on Machine Learning and Games, Vol. 63, No. 3
* [[Hallam Nasreddine]], [[Hendra Suhanto Po]], [[Graham Kendall]] ('''2006'''). ''[http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=4017925 Using an Evolutionary Algorithm for the Tuning of a Chess Evaluation Function Based on a Dynamic Boundary Strategy]''. Proceedings of the 2006 IEEE Conference on Cybernetics and Intelligent Systems, [http://www.cs.nott.ac.uk/~gxk/papers/ieeecis2006.pdf pdf]
* [[Makoto Miwa]], [[Daisaku Yokoyama]], [[Takashi Chikayama]] ('''2006'''). ''[http://www.springerlink.com/content/6180u7h3t312468u/ Automatic Construction of Static Evaluation Functions for Computer Game Players]''. ALT ’06
* [[Borko Bošković]], [[Sašo Greiner]], [[Janez Brest]], [[Viljem Žumer]] ('''2006'''). ''[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1688532 A Differential Evolution for the Tuning of a Chess Evaluation Function]''. IEEE Congress on Evolutionary Computation 2006
* [[Kunihito Hoki]] ('''2006'''). ''Optimal control of minimax search result to learn positional evaluation''. [[Conferences#GPW|11th Game Programming Workshop]] (Japanese)
'''2007'''
* [[Shogo Takeuchi]], [[Tomoyuki Kaneko]], [[Kazunori Yamaguchi]], [[Satoru Kawai]] ('''2007'''). ''Visualization and Adjustment of Evaluation Functions Based on Evaluation Values and Win Probability''. [http://www.informatik.uni-trier.de/~ley/db/conf/aaai/aaai2007.html AAAI 2007], [https://www.aaai.org/Papers/AAAI/2007/AAAI07-136.pdf pdf]
* [[Makoto Miwa]], [[Daisaku Yokoyama]], [[Takashi Chikayama]] ('''2007'''). ''Automatic Generation of Evaluation Features for Computer Game Players''. [http://cswww.essex.ac.uk/cig/2007/papers/2037.pdf pdf]
* [[Johannes Fürnkranz]] ('''2007'''). ''Recent advances in machine learning and game playing''. [http://www.oegai.at/journal.shtml ÖGAI Journal], Vol. 26, No. 2, Computer Game Playing, [https://www.ke.tu-darmstadt.de/~juffi/publications/ogai-07.pdf pdf]
'''2008'''
* [[Omid David]], [[Moshe Koppel]], [[Nathan S. Netanyahu]] ('''2008'''). ''Genetic Algorithms for Mentor-Assisted Evaluation Function Optimization''. ACM Genetic and Evolutionary Computation Conference ([http://www.sigevo.org/gecco-2008/ GECCO '08]), pp. 1469-1475, Atlanta, GA, July 2008.
* [[Borko Bošković]], [[Sašo Greiner]], [[Janez Brest]], [[Aleš Zamuda]], [[Viljem Žumer]] ('''2008'''). ''An Adaptive Differential Evolution Algorithm with Opposition-Based Mechanisms, Applied to the Tuning of a Chess Program''. [http://www.springer.com/engineering/computational+intelligence+and+complexity/book/978-3-540-68827-3 Advances in Differential Evolution], Studies in Computational Intelligence, ISBN: 978-3-540-68827-3
'''2009'''
* [[Joel Veness]], [[David Silver]], [[William Uther]], [[Alan Blair]] ('''2009'''). ''[http://papers.nips.cc/paper/3722-bootstrapping-from-game-tree-search Bootstrapping from Game Tree Search]''. [http://nips.cc/ Neural Information Processing Systems (NIPS), 2009], [http://books.nips.cc/papers/files/nips22/NIPS2009_0508.pdf pdf]
* [[Omid David]], [[Jaap van den Herik]], [[Moshe Koppel]], [[Nathan S. Netanyahu]] ('''2009'''). ''Simulating Human Grandmasters: Evolution and Coevolution of Evaluation Functions''. [[ACM]] Genetic and Evolutionary Computation Conference ([http://www.sigevo.org/gecco-2009/ GECCO '09]), pp. 1483 - 1489, Montreal, Canada, July 2009.
* [[Omid David]] ('''2009'''). ''Genetic Algorithms Based Learning for Evolving Intelligent Organisms''. Ph.D. Thesis.
* [[Broch Davison]] ('''2009'''). ''[http://www.enm.bris.ac.uk/teaching/projects/2008_09/bd5053/index.html Playing Chess with Matlab]''. M.Sc. thesis supervised by [http://www.bris.ac.uk/engineering/people/nello-cristianini/index.html Nello Cristianini], [http://www.enm.bris.ac.uk/teaching/projects/2008_09/bd5053/FinalReport.pdf pdf] <ref>[https://en.wikipedia.org/wiki/MATLAB MATLAB from Wikipedia]</ref>
* [[Mark Levene]], [[Trevor Fenner]] ('''2009'''). ''A Methodology for Learning Players' Styles from Game Records''. [http://arxiv.org/abs/0904.2595v1 arXiv:0904.2595v1]
* [[Wei-Lun Kao]] ('''2009'''). ''The Automatically Tuning System of Evaluation Function for Computer Chinese Chess''. Master thesis, [[National Chiao Tung University]], [https://ir.nctu.edu.tw/bitstream/11536/43333/1/553001.pdf pdf] (Chinese)
==2010 ...==
* [[Amine Bourki]], [[Matthieu Coulm]], [[Philippe Rolet]], [[Olivier Teytaud]], [[Paul Vayssière]] ('''2010'''). ''[http://hal.inria.fr/inria-00467796/en/ Parameter Tuning by Simple Regret Algorithms and Multiple Simultaneous Hypothesis Testing]''. [http://hal.inria.fr/docs/00/46/77/96/PDF/tosubmit.pdf pdf]
* [[Omid David]], [[Moshe Koppel]], [[Nathan S. Netanyahu]] ('''2010'''). ''Genetic Algorithms for Automatic Search Tuning''. [[ICGA Journal#33_2|ICGA Journal, Vol. 33, No. 2]]
* [[Borko Bošković]] ('''2010'''). ''[http://labraj.uni-mb.si/en/PhD_Thesis_Defence_%28Borko_Bo%C5%A1kovi%C4%87%29 Differential evolution for the Tuning of a Chess Evaluation Function]''. Ph.D. thesis, [[University of Maribor]]
'''2011'''
* [[Omid David]], [[Moshe Koppel]], [[Nathan S. Netanyahu]] ('''2011'''). ''Expert-Driven Genetic Algorithms for Simulating Evaluation Functions''. Genetic Programming and Evolvable Machines, Vol. 12, No. 1
* [[Borko Bošković]], [[Janez Brest]] ('''2011'''). ''Tuning Chess Evaluation Function Parameters using Differential Evolution Algorithm''. Informatica, Vol. 35, No. 2, [http://www.informatica.si/PDF/35-2/14_Boskovic%20-%20Tuning%20chess%20evaluation.pdf pdf]
* [[Borko Bošković]], [[Janez Brest]], [[Aleš Zamuda]], [[Sašo Greiner]], [[Viljem Žumer]] ('''2011'''). ''[http://www.springerlink.com/content/y62h14743364x2l7/ History mechanism supported differential evolution for chess evaluation function tuning]''. [http://www.springer.com/engineering/computational+intelligence+and+complexity/journal/500 Soft Computing], Vol. 15, No. 4
* [[Eduardo Vázquez-Fernández]], [[Carlos Artemio Coello Coello]], [[Feliú Davino Sagols Troncoso]] ('''2011'''). ''An Evolutionary Algorithm for Tuning a Chess Evaluation Function''. [http://www.informatik.uni-trier.de/~ley/db/conf/cec/cec2011.html#Vazquez-FernandezCT11 CEC 2011], [http://delta.cs.cinvestav.mx/~ccoello/conferences/eduardo-cec2011-final.pdf.gz pdf]
* [[Eduardo Vázquez-Fernández]], [[Carlos Artemio Coello Coello]], [[Feliú Davino Sagols Troncoso]] ('''2011'''). ''[http://dl.acm.org/citation.cfm?id=2001882 An Adaptive Evolutionary Algorithm Based on Typical Chess Problems for Tuning a Chess Evaluation Function]''. [http://www.informatik.uni-trier.de/~ley/db/conf/gecco/gecco2011c.html#Vazquez-FernandezCT11 GECCO 2011], [http://delta.cs.cinvestav.mx/~ccoello/conferences/vazquez-gecco2011.pdf.gz pdf]
* [[Rémi Coulom]] ('''2011'''). ''[http://remi.coulom.free.fr/CLOP/ CLOP: Confident Local Optimization for Noisy Black-Box Parameter Tuning]''. [[Advances in Computer Games 13]] <ref>[http://www.talkchess.com/forum/viewtopic.php?p=421995 CLOP for Noisy Black-Box Parameter Optimization] by [[Rémi Coulom]], [[CCC]], September 01, 2011</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=40987 CLOP slides] by [[Rémi Coulom]], [[CCC]], November 03, 2011</ref>
* [[Kunihito Hoki]], [[Tomoyuki Kaneko]] ('''2011'''). ''[http://link.springer.com/chapter/10.1007%2F978-3-642-31866-5_16 The Global Landscape of Objective Functions for the Optimization of Shogi Piece Values with a Game-Tree Search]''. [[Advances in Computer Games 13]] » [[Shogi]]
'''2012'''
* [[Amir Ban]] ('''2012'''). ''[http://www.ratio.huji.ac.il/node/2362 Automatic Learning of Evaluation, with Applications to Computer Chess]''. Discussion Paper 613, [https://en.wikipedia.org/wiki/Hebrew_University_of_Jerusalem The Hebrew University of Jerusalem] - Center for the Study of Rationality, [https://en.wikipedia.org/wiki/Givat_Ram Givat Ram]
* [[Kanjanapa Thitipong]], [[Komiya Kanako]], [[Yoshiyuki Kotani]] ('''2012'''). ''Design and Implementation of Bonanza Method for the Evaluation in the Game of Arimaa''. [http://www.ipsj.or.jp/english/index.html IPSJ SIG Technical Report], Vol. 2012-GI-27, No. 4, [http://arimaa.com/arimaa/papers/KanjanapaThitipong/IPSJ-GI12027004.pdf pdf] » [[Arimaa]]
'''2013'''
* [[Wen-Jie Tseng]], [[Jr-Chang Chen]], [[I-Chen Wu]], [[Ching-Hua Kuo]], [[Bo-Han Lin]] ('''2013'''). ''[https://kaigi.org/jsai/webprogram/2013/paper-138.html A Supervised Learning Method for Chinese Chess Programs]''. [http://2013.conf.ai-gakkai.or.jp/english-info JSAI2013], [https://kaigi.org/jsai/webprogram/2013/pdf/138.pdf pdf]
* [[Akira Ura]], [[Makoto Miwa]], [[Yoshimasa Tsuruoka]], [[Takashi Chikayama]] ('''2013'''). ''[https://link.springer.com/chapter/10.1007/978-3-319-09165-5_18 Comparison Training of Shogi Evaluation Functions with Self-Generated Training Positions and Moves]''. [[CG 2013]], [https://pdfs.semanticscholar.org/6ad0/7167425539cf64e6bf420d7a28a1fc1047d6.pdf slides as pdf]
* [[Yoshikuni Sato]], [[Makoto Miwa]], [[Shogo Takeuchi]], [[Daisuke Takahashi]] ('''2013'''). ''[http://www.aaai.org/ocs/index.php/AAAI/AAAI13/paper/view/6402 Optimizing Objective Function Parameters for Strength in Computer Game-Playing]''. [http://www.informatik.uni-trier.de/~ley/db/conf/aaai/aaai2013.html#SatoMTT13 AAAI 2013]
* [[Shalabh Bhatnagar]], [[H. L. Prasad]], [[L.A. Prashanth]] ('''2013'''). ''[http://stochastic.csa.iisc.ernet.in/~shalabh/book.html Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods]''. [http://www.springer.com/series/642 Lecture Notes in Control and Information Sciences], Vol. 434, [https://en.wikipedia.org/wiki/Springer_Science%2BBusiness_Media Springer] » [[SPSA]]
* [[Tomáš Hřebejk]] ('''2013'''). ''Arimaa challenge - Static Evaluation Function''. Master Thesis, [https://en.wikipedia.org/wiki/Charles_University_in_Prague Charles University in Prague], [http://arimaa.com/arimaa/papers/ThomasHrebejk/Arimaa.pdf pdf] » [[Arimaa]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=58472 thesis on eval function learning in Arimaa] by [[Jon Dart]], [[CCC]], December 04, 2015</ref>
'''2014'''
* [[Kunihito Hoki]], [[Tomoyuki Kaneko]] ('''2014'''). ''[https://www.jair.org/papers/paper4217.html Large-Scale Optimization for Evaluation Functions with Minimax Search]''. [https://www.jair.org/vol/vol49.html JAIR Vol. 49], [https://www.jair.org/media/4217/live-4217-7792-jair.pdf pdf] » [[Shogi]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=55084 MMTO for evaluation learning] by [[Jon Dart]], [[CCC]], January 25, 2015</ref>
* [https://scholar.google.com/citations?user=glcep6EAAAAJ&hl=en Aryan Mokhtari], [https://scholar.google.com/citations?user=7mrPM4kAAAAJ&hl=en Alejandro Ribeiro] ('''2014'''). ''RES: Regularized Stochastic BFGS Algorithm''. [https://arxiv.org/abs/1401.7625 arXiv:1401.7625] <ref> [https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm Broyden–Fletcher–Goldfarb–Shanno algorithm from Wikipedia]</ref>
* <span id="ROCK"></span>[http://www.asl.ethz.ch/the-lab/people/person-detail.html?persid=184943 Jemin Hwangbo], [https://www.linkedin.com/in/christian-gehring-1b958395/ Christian Gehring], [http://www.asl.ethz.ch/the-lab/people/person-detail.html?persid=186652 Hannes Sommer], [http://www.asl.ethz.ch/the-lab/people/person-detail.html?persid=29981 Roland Siegwart], [http://www.adrl.ethz.ch/doku.php/adrl:people:jbuchli Jonas Buchli] ('''2014'''). ''ROCK∗ — Efficient black-box optimization for policy learning''. [http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=7028729 Humanoids, 2014] » [[Automated Tuning#Rockstar|Rockstar]]
* [https://arxiv.org/find/cs/1/au:+Martens_J/0/1/0/all/0/1 James Martens] ('''2014, 2017'''). ''New insights and perspectives on the natural gradient method''. [https://arxiv.org/abs/1412.1193 arXiv:1412.1193]
==2015 ...==
* [https://scholar.google.nl/citations?user=yyIoQu4AAAAJ Diederik P. Kingma], [https://scholar.google.ca/citations?user=ymzxRhAAAAAJ&hl=en Jimmy Lei Ba] ('''2015'''). ''Adam: A Method for Stochastic Optimization''. [https://arxiv.org/abs/1412.6980v8 arXiv:1412.6980v8], [http://www.iclr.cc/doku.php?id=iclr2015:main ICLR 2015] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=61948 Arasan 19.2] by [[Jon Dart]], [[CCC]], November 03, 2016 » [[Arasan#Tuning|Arasan's Tuning]]</ref>
* [https://scholar.google.com/citations?user=glcep6EAAAAJ&hl=en Aryan Mokhtari], [https://scholar.google.com/citations?user=7mrPM4kAAAAJ&hl=en Alejandro Ribeiro] ('''2015'''). ''Global Convergence of Online Limited Memory BFGS''. [https://en.wikipedia.org/wiki/Journal_of_Machine_Learning_Research Journal of Machine Learning Research], Vol. 16, [http://www.jmlr.org/papers/volume16/mokhtari15a/mokhtari15a.pdf pdf] <ref>[https://en.wikipedia.org/wiki/Limited-memory_BFGS Limited-memory BFGS from Wikipedia]</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=62012&start=6 Re: CLOP: when to stop?] by [[Álvaro Begué]], [[CCC]], November 08, 2016</ref>
'''2017'''
* [http://ruder.io/ Sebastian Ruder] ('''2017'''). ''[http://ruder.io/optimizing-gradient-descent/ An overview of gradient descent optimization algorithms]''. [https://arxiv.org/abs/1609.04747v2 arXiv:1609.04747v2] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=64189&start=46 Re: Texel tuning method question] by [[Jon Dart]], [[CCC]], July 23, 2017</ref>

=Forum Posts=
==1997 ...==
* [https://groups.google.com/group/rec.games.chess.computer/browse_frm/thread/77f10f072e907302 Evolutionary Evaluation] by [[Dan Homan]], [[Computer Chess Forums|rgcc]], September 09, 1997 » [[Evaluation]]
* [https://www.stmintz.com/ccc/index.php?id=13794 Deep Blue eval function tuning technique] by [[Stuart Cracraft]], [[CCC]], January 08, 1998 » [[Deep Blue]] <ref>[[Thomas Anantharaman]] ('''1997'''). ''Evaluation Tuning for Computer Chess: Linear Discriminant Methods''. [[ICGA Journal#20_4|ICCA Journal, Vol. 20, No. 4]]</ref>
* [https://www.stmintz.com/ccc/index.php?id=13968 Automated Tuning] by [[Stuart Cracraft]], [[CCC]], January 12, 1998
* [https://www.stmintz.com/ccc/index.php?id=14472 Pattern Matching -- Avoiding Hand-Tuning] by [[Stuart Cracraft]], [[CCC]], January 21, 1998
* [https://www.stmintz.com/ccc/index.php?id=28362 Speaking of "Evaluate"] by [[Dann Corbit|Danniel Corbit]], [[CCC]], September 29, 1998
* [https://www.stmintz.com/ccc/index.php?id=28584 Parameter Tuning] by [[Jonathan Baxter]], [[CCC]], October 01, 1998 » [[Temporal Difference Learning|TD-learning]], [[KnightCap]]
==2000 ...==
* [https://www.stmintz.com/ccc/index.php?id=128297 Deep Thought's tuning code and eval function!] by [[Severi Salminen]], [[CCC]], September 05, 2000 » [[Eval Tuning in Deep Thought]]
* [https://www.stmintz.com/ccc/index.php?id=146691 learning to tune parameters by comp-comp games] by [[Uri Blass]], [[CCC]], December 28, 2000
* [https://www.stmintz.com/ccc/index.php?id=177538 Automatic Eval Tuning] by [[Artem Petakov|Artem Pyatakov]], [[CCC]], June 29, 2001
* [https://www.stmintz.com/ccc/index.php?id=290239 deep blue's automatic tuning of evaluation function] by Emerson Tan, [[CCC]], March 22, 2003
* [https://www.stmintz.com/ccc/index.php?id=314498 evaluationfunction tuning] by [[Jan Willem de Kort]], [[CCC]], September 07, 2003
* [https://www.stmintz.com/ccc/index.php?id=355083 evaluation tuning tricks] by [[Peter Aloysius Harjanto|Peter Alloysius]], [[CCC]], March 17, 2004
==2005 ...==
* [https://www.stmintz.com/ccc/index.php?id=487022 "learning" or "tuning" programs] by [[Sean Mintz]], [[CCC]], February 15, 2006
* [http://www.open-aurec.com/wbforum/viewtopic.php?f=4&t=49450 Adjusting weights the Deep Blue way] by [[Tony van Roon-Werten]], [[Computer Chess Forums|Winboard Forum]], August 29, 2008 » [[Deep Blue]]
* [http://www.open-aurec.com/wbforum/viewtopic.php?f=4&t=49818 Tuning the eval] by [[Daniel Anulliero]], [[Computer Chess Forums|Winboard Forum]], January 02, 2009
* [http://www.talkchess.com/forum/viewtopic.php?t=27266 Insanity... or Tal style?] by [[Miguel A. Ballicora]], [[CCC]], April 01, 2009
: [http://www.talkchess.com/forum/viewtopic.php?t=27266&postdays=0&postorder=asc&topic_view=&start=11 Re: Insanity... or Tal style?] by [[Miguel A. Ballicora]], [[CCC]], April 02, 2009 <ref>[http://www.talkchess.com/forum/viewtopic.php?topic_view=threads&p=555522&t=50823 The texel evaluation function optimization algorithm] by [[Peter Österlund]], [[CCC]], January 31, 2014</ref>
==2010 ...==
* [http://www.talkchess.com/forum/viewtopic.php?t=31445 Revisiting GA's for tuning evaluation weights] by [[Ilari Pihlajisto]], [[CCC]], January 03, 2010
* [http://www.talkchess.com/forum/viewtopic.php?t=31935 Idea for Automatic Calibration of Evaluation Function...] by [[Steve Maughan]], [[CCC]], January 22, 2010
* [http://www.talkchess.com/forum/viewtopic.php?topic_view=threads&p=378648&t=36829 Re: TEST position TCEC5- Houdini 1.03a-DRybka4 1-0] by [[Milos Stanisavljevic]], [[CCC]], November 30, 2010
* [http://www.talkchess.com/forum/viewtopic.php?t=38412 Parameter tuning] by [[Onno Garms]], [[CCC]], March 13, 2011 » [[Onno]]
* [http://www.talkchess.com/forum/viewtopic.php?t=40166 Ahhh... the holy grail of computer chess] by [[Marcel van Kervinck]], [[CCC]], August 23, 2011
* [http://www.talkchess.com/forum/viewtopic.php?p=421995 CLOP for Noisy Black-Box Parameter Optimization] by [[Rémi Coulom]], [[CCC]], September 01, 2011 <ref>[[Rémi Coulom]] ('''2011'''). ''[http://remi.coulom.free.fr/CLOP/ CLOP: Confident Local Optimization for Noisy Black-Box Parameter Tuning]''. [[Advances in Computer Games 13]]</ref>
* [http://www.talkchess.com/forum/viewtopic.php?t=40964 Tuning again] by [[Ed Schroder]], [[CCC]], November 01, 2011
* [http://www.open-chess.org/viewtopic.php?f=5&t=1954 Ban: Automatic Learning of Evaluation [...]] by [[Mark Watkins|BB+]], [[Computer Chess Forums|OpenChess Forum]], May 10, 2012 <ref>[[Amir Ban]] ('''2012'''). ''[http://www.ratio.huji.ac.il/node/2362 Automatic Learning of Evaluation, with Applications to Computer Chess]''. Discussion Paper 613, [https://en.wikipedia.org/wiki/Hebrew_University_of_Jerusalem The Hebrew University of Jerusalem] - Center for the Study of Rationality, [https://en.wikipedia.org/wiki/Givat_Ram Givat Ram]</ref>
'''2014'''
* [http://www.talkchess.com/forum/viewtopic.php?t=50823 How Do You Automatically Tune Your Evaluation Tables] by [[Tom Likens]], [[CCC]], January 07, 2014
: [http://www.talkchess.com/forum/viewtopic.php?t=50823&start=10 Re: How Do You Automatically Tune Your Evaluation Tables] by [[Álvaro Begué]], [[CCC]], January 08, 2014
: [http://www.talkchess.com/forum/viewtopic.php?t=50823&start=26 The texel evaluation function optimization algorithm] by [[Peter Österlund]], [[CCC]], January 31, 2014 » [[Texel's Tuning Method]]
: [http://www.talkchess.com/forum/viewtopic.php?t=50823&start=27 Re: The texel evaluation function optimization algorithm] by [[Álvaro Begué]], [[CCC]], January 31, 2014 » [https://en.wikipedia.org/wiki/Cross_entropy Cross-entropy]
* [http://www.talkchess.com/forum/viewtopic.php?t=53526 Tuning eval] by [[Daniel Anulliero]], [[CCC]], September 01, 2014
* [http://www.talkchess.com/forum/viewtopic.php?t=53657 Tune cut margins with Texel/gaviota tuning method] by [[Fabio Gobbato]], [[CCC]], September 11, 2014
* [http://www.talkchess.com/forum/viewtopic.php?t=54545 Eval tuning - any open source engines with GA or PBIL?] by Hrvoje Horvatic, [[CCC]], December 04, 2014 » [[Genetic Programming#PBIL|PBIL]]
==2015 ...==
* [http://www.talkchess.com/forum/viewtopic.php?t=55084 MMTO for evaluation learning] by [[Jon Dart]], [[CCC]], January 25, 2015 <ref>[[Kunihito Hoki]], [[Tomoyuki Kaneko]] ('''2014'''). ''[https://www.jair.org/papers/paper4217.html Large-Scale Optimization for Evaluation Functions with Minimax Search]''. [https://www.jair.org/vol/vol49.html JAIR Vol. 49], [https://www.jair.org/media/4217/live-4217-7792-jair.pdf pdf]</ref>
* [http://www.talkchess.com/forum/viewtopic.php?t=55621 Experiments with eval tuning] by [[Jon Dart]], [[CCC]], March 10, 2015 » [[Arasan]], [[Texel's Tuning Method]]
* [http://www.talkchess.com/forum/viewtopic.php?t=55696 txt: automated chess engine tuning] by [[Alexandru Mosoi]], [[CCC]], March 18, 2015 » [[Zurichess]], [[Texel's Tuning Method]] <ref>[https://bitbucket.org/brtzsnr/txt brtzsnr / txt — Bitbucket] by [[Alexandru Mosoi]]</ref>
: [http://www.talkchess.com/forum/viewtopic.php?t=55696&start=108 Re: txt: automated chess engine tuning] by [[Sergei Markoff|Sergei S. Markoff]], [[CCC]], February 15, 2016 » [[SmarThink]]
* [http://www.talkchess.com/forum/viewtopic.php?t=56168 Piece weights with regression analysis (in Russian)] by [[Vladimir Medvedev]], [[CCC]], April 30, 2015 » [[Point Value by Regression Analysis]]
: [http://www.talkchess.com/forum/viewtopic.php?t=56168&start=36 Re: Piece weights with regression analysis (in Russian)] by [[Fabien Letouzey]], [[CCC]], May 04, 2015
* [http://www.talkchess.com/forum/viewtopic.php?t=56377 New Idea For Automated Tuning] by Jordan Bray, [[CCC]], May 16, 2015
* [http://www.talkchess.com/forum/viewtopic.php?t=57225 Evaluation Tuning] by [[Michael Hoffmann]], [[CCC]], August 09, 2015
* [http://www.talkchess.com/forum/viewtopic.php?t=57246 Genetical tuning] by [[Stefano Gemma]], [[CCC]], August 11, 2015 » [[Genetic Programming]]
: [http://www.talkchess.com/forum/viewtopic.php?t=57246&start=34 Re: Genetical tuning] by [[Ferdinand Mosca]], [[CCC]], August 20, 2015
* [http://www.talkchess.com/forum/viewtopic.php?t=57270 Some musings about search] by [[Ed Schroder]], [[CCC]], August 14, 2015 » [[Search]]
* [http://www.talkchess.com/forum/viewtopic.php?t=57860 td-leaf] by [[Alexandru Mosoi]], [[CCC]], October 06, 2015
* [http://www.talkchess.com/forum/viewtopic.php?t=58211 tensorflow] by [[Alexandru Mosoi]], [[CCC]], November 10, 2015 <ref>[http://tensorflow.org/ Home — TensorFlow]</ref>
'''2016'''
* [http://www.talkchess.com/forum/viewtopic.php?t=59319 pawn hash and eval tuning] by [[J. Wesley Cleveland]], [[CCC]], February 21, 2016 » [[Pawn Hash Table]]
* [http://www.open-chess.org/viewtopic.php?f=5&t=2987 Tuning] by ppyvabw, [[Computer Chess Forums|OpenChess Forum]], June 11, 2016 » [[Texel's Tuning Method]]
* [http://www.talkchess.com/forum/viewtopic.php?t=60902 GreKo 2015 ML: tuning evaluation (article in Russian)] by [[Vladimir Medvedev]], [[CCC]], July 22, 2016 » [[GreKo]], [[Texel's Tuning Method]]
* [http://www.talkchess.com/forum/viewtopic.php?t=61861 A database for learning evaluation functions] by [[Álvaro Begué]], [[CCC]], October 28, 2016 » [[Evaluation]], [[Learning]], [[Texel's Tuning Method]]
* [http://www.talkchess.com/forum/viewtopic.php?t=62012 CLOP: when to stop?] by [[Erin Dame]], [[CCC]], November 07, 2016 » [[CLOP]]
: [http://www.talkchess.com/forum/viewtopic.php?t=62012&start=6 Re: CLOP: when to stop?] by [[Álvaro Begué]], [[CCC]], November 08, 2016 <ref>[https://en.wikipedia.org/wiki/Limited-memory_BFGS Limited-memory BFGS from Wikipedia]</ref>
* [http://www.talkchess.com/forum/viewtopic.php?t=62056 C++ code for tuning evaluation function parameters] by [[Álvaro Begué]], [[CCC]], November 10, 2016 » [[RuyTune]] <ref>[https://bitbucket.org/alonamaloh/ruy_tune alonamaloh / ruy_tune — Bitbucket] by [[Álvaro Begué]]</ref>
'''2017'''
* [http://www.talkchess.com/forum/viewtopic.php?t=63408 improved evaluation function] by [[Alexandru Mosoi]], [[CCC]], March 11, 2017 » [[Texel's Tuning Method]], [[Zurichess]]
* [http://www.talkchess.com/forum/viewtopic.php?t=63425 automated tuning] by [[Stuart Cracraft]], [[CCC]], March 13, 2017
* [http://www.talkchess.com/forum/viewtopic.php?t=63926 Parameter tuning with multi objective optimization] by [[Marco Pampaloni]], [[CCC]], May 07, 2017 » [[Napoleon]]
* [http://www.talkchess.com/forum/viewtopic.php?t=64119 Evaluation Tuning: When To Stop?] by [[Cheney Nattress]], [[CCC]], May 29, 2017
* [http://www.talkchess.com/forum/viewtopic.php?t=64189 Texel tuning method question] by [[Sander Maassen vd Brink]], [[CCC]], June 05, 2017 » [[Texel's Tuning Method]]
: [http://www.talkchess.com/forum/viewtopic.php?t=64189&start=35 Re: Texel tuning method question] by [[Peter Österlund]], [[CCC]], June 07, 2017
: [http://www.talkchess.com/forum/viewtopic.php?t=64189&start=42 Re: Texel tuning method question] by [[Ferdinand Mosca]], [[CCC]], July 20, 2017 » [[Python]]
: [http://www.talkchess.com/forum/viewtopic.php?t=64189&start=46 Re: Texel tuning method question] by [[Jon Dart]], [[CCC]], July 23, 2017
* [http://www.talkchess.com/forum/viewtopic.php?t=64972 Approximating Stockfish's Evaluation by PSQTs] by [[Thomas Dybdahl Ahle]], [[CCC]], August 23, 2017 » [[Automated Tuning#Regression|Regression]], [[Piece-Square Tables]], [[Stockfish]]
* [http://www.talkchess.com/forum/viewtopic.php?t=65039 Ab-initio evaluation tuning] by [[Evert Glebbeek]], [[CCC]], August 30, 2017
* [http://www.talkchess.com/forum/viewtopic.php?t=65045 ROCK* black-box optimizer for chess] by [[Jon Dart]], [[CCC]], August 31, 2017 » [[Automated Tuning#ROCK|ROCK*]], [[Automated Tuning#Rockstar|Rockstar]]
* [http://www.talkchess.com/forum/viewtopic.php?t=65373 tuning via maximizing likelihood] by [[Daniel Shawul]], [[CCC]], October 04, 2017 <ref>[https://en.wikipedia.org/wiki/Maximum_likelihood_estimation Maximum likelihood estimation from Wikipedia]</ref>
* [http://www.talkchess.com/forum/viewtopic.php?t=65660 tool to create derivates of a given function] by [[Alexandru Mosoi]], [[CCC]], November 07, 2017
: [http://www.talkchess.com/forum/viewtopic.php?t=65660&start=2 Re: tool to create derivates of a given function] by [[Daniel Shawul]], [[CCC]], November 07, 2017 <ref>[https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant Jacobian matrix and determinant from Wikipedia]</ref>
* [http://www.talkchess.com/forum/viewtopic.php?t=65799 tuning for the uninformed] by [[Folkert van Heusden]], [[CCC]], November 23, 2017
'''2018'''
* [http://www.talkchess.com/forum/viewtopic.php?t=66221 tuning info] by [[Marco Belli]], [[CCC]], January 03, 2018
* [http://www.talkchess.com/forum/viewtopic.php?t=66681 3 million games for training neural networks] by [[Álvaro Begué]], [[CCC]], February 24, 2018 » [[Neural Networks]]

=External Links=
* [https://en.wiktionary.org/wiki/automatic automatic - Wiktionary]
* [https://en.wikipedia.org/wiki/Automation Automation from Wikipedia]
* [https://en.wiktionary.org/wiki/tuning tuning - Wiktionary]
* [https://en.wikipedia.org/wiki/Tuning Tuning from Wikipedia]
: [https://en.wikipedia.org/wiki/Engine_tuning Engine tuning from Wikipedia]
: [https://en.wikipedia.org/wiki/Self-tuning Self-tuning from Wikipedia]
==Optimization==
* [https://en.wiktionary.org/wiki/optimization optimization - Wiktionary]
: [https://en.wiktionary.org/wiki/optimize optimize - Wiktionary]
* [https://en.wikipedia.org/wiki/Mathematical_optimization Mathematical optimization from Wikipedia]
* [https://en.wikipedia.org/wiki/Optimization_problem Optimization problem from Wikipedia]
* [https://en.wikipedia.org/wiki/Global_optimization Global optimization from Wikipedia]
* [https://en.wikipedia.org/wiki/Iterated_local_search Iterated local search from Wikipedia]
* [https://en.wikipedia.org/wiki/Local_search_%28optimization%29 Local search (optimization) from Wikipedia]
* [https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm Broyden–Fletcher–Goldfarb–Shanno algorithm from Wikipedia]
* [http://remi.coulom.free.fr/CLOP/ CLOP for Noisy Black-Box Parameter Optimization] by [[Rémi Coulom]] » [[CLOP]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=35049 Tool for automatic black-box parameter optimization released] by [[Rémi Coulom]], [[CCC]], June 20, 2010</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?p=421995 CLOP for Noisy Black-Box Parameter Optimization] by [[Rémi Coulom]], [[CCC]], September 01, 2011</ref>
* [https://en.wikipedia.org/wiki/Conjugate_gradient_method Conjugate gradient method from Wikipedia]
* [https://en.wikipedia.org/wiki/Convex_optimization Convex optimization from Wikipedia]
: [https://en.wikipedia.org/wiki/Entropy_maximization Entropy maximization from Wikipedia]
: [https://en.wikipedia.org/wiki/Linear_programming Linear programming from Wikipedia]
: [https://en.wikipedia.org/wiki/Simplex_algorithm Simplex algorithm from Wikipedia]
* [https://en.wikipedia.org/wiki/Differential_evolution Differential evolution from Wikipedia]
* [https://en.wikipedia.org/wiki/Evolutionary_computation Evolutionary computation from Wikipedia]
* [https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm Gauss–Newton algorithm from Wikipedia]
* [https://en.wikipedia.org/wiki/Genetic_algorithm Genetic algorithm from Wikipedia]
* [https://en.wikipedia.org/wiki/Gradient_descent Gradient descent from Wikipedia]
* [https://en.wikipedia.org/wiki/Hill_climbing Hill climbing from Wikipedia]
* [https://en.wikipedia.org/wiki/Limited-memory_BFGS Limited-memory BFGS from Wikipedia] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=62012&start=6 Re: CLOP: when to stop?] by [[Álvaro Begué]], [[CCC]], November 08, 2016</ref>
* [https://en.wikipedia.org/wiki/Loss_function Loss function from Wikipedia]
* [https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method Nelder–Mead method from Wikipedia] » [[Amoeba]], [[Murka]]
* [https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization Newton's method in optimization from Wikipedia]
* [https://www.gerad.ca/nomad/Project/Home.html NOMAD - A blackbox optimization software] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=54545&start=2 Re: Eval tuning - any open source engines with GA or PBIL?] by [[Jon Dart]], [[CCC]], December 06, 2014</ref>
* [https://en.wikipedia.org/wiki/NEWUOA NEWUOA from Wikipedia] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=50823&start=94 Re: The texel evaluation function optimization algorithm] by [[Jon Dart]], [[CCC]], March 12, 2014</ref>
* [https://en.wikipedia.org/wiki/Particle_swarm_optimization Particle swarm optimization from Wikipedia]
* [https://en.wikipedia.org/wiki/Population-based_incremental_learning Population-based incremental learning (PBIL) - Wikipedia] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=54545 Eval tuning - any open source engines with GA or PBIL?] by Hrvoje Horvatic, [[CCC]], December 04, 2014</ref>
* [http://macechess.blogspot.de/2013/03/population-based-incremental-learning.html Population Based Incremental Learning (PBIL)] by [[Thomas Petzke]], March 16, 2013 » [[iCE]]
* [https://en.wikipedia.org/wiki/Simulated_annealing Simulated annealing from Wikipedia]
* [https://en.wikipedia.org/wiki/Stochastic_optimization Stochastic optimization from Wikipedia]
: [https://en.wikipedia.org/wiki/Simultaneous_perturbation_stochastic_approximation Simultaneous perturbation stochastic approximation (SPSA) - Wikipedia]
: [http://www.jhuapl.edu/spsa/ SPSA Algorithm]
: [https://en.wikipedia.org/wiki/Stochastic_approximation Stochastic approximation from Wikipedia]
: [https://en.wikipedia.org/wiki/Stochastic_gradient_descent Stochastic gradient descent from Wikipedia]
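Of the methods listed above, SPSA is notable for estimating the gradient from only two objective evaluations per iteration, independent of the number of parameters. The following is a minimal, hypothetical sketch (function and parameter names are illustrative, not taken from any engine's implementation):

```python
import random

def spsa_minimize(f, theta, iterations=100, a=0.1, c=0.1):
    """Minimize f by Simultaneous Perturbation Stochastic Approximation.

    Each iteration perturbs all parameters at once with a random +/-1
    (Rademacher) vector, so the gradient estimate costs only two
    evaluations of f regardless of the dimension of theta.
    """
    theta = list(theta)
    n = len(theta)
    for k in range(1, iterations + 1):
        ak = a / k            # decaying step size
        ck = c / k ** 0.25    # decaying perturbation size
        delta = [random.choice((-1, 1)) for _ in range(n)]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (f(plus) - f(minus)) / (2.0 * ck)   # common gradient estimate
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta
```

In engine tuning, f would typically be a noisy objective such as the loss rate over a batch of self-play games, which is exactly the noisy black-box setting SPSA is designed for.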
==Machine Learning==
* [https://en.wikipedia.org/wiki/Machine_learning Machine learning from Wikipedia]
* [https://en.wikipedia.org/wiki/List_of_machine_learning_concepts List of machine learning concepts from Wikipedia]
* [https://en.wikipedia.org/wiki/Backpropagation Backpropagation from Wikipedia] » [[Neural Networks]]
* [https://en.wikipedia.org/wiki/Reinforcement_learning Reinforcement learning from Wikipedia]
: [https://en.wiktionary.org/wiki/reinforcement reinforcement - Wiktionary]
: [https://en.wiktionary.org/wiki/reinforce reinforce - Wiktionary]
* [https://en.wikipedia.org/wiki/Supervised_learning Supervised learning from Wikipedia]
: [https://en.wiktionary.org/wiki/supervisor supervisor - Wiktionary]
* [https://en.wikipedia.org/wiki/Temporal_difference_learning Temporal Difference Learning from Wikipedia]
: [https://en.wiktionary.org/wiki/temporal temporal - Wiktionary]
* [https://en.wikipedia.org/wiki/Unsupervised_learning Unsupervised learning from Wikipedia]
==Statistics/Regression Analysis==
* [https://en.wikipedia.org/wiki/Statistics Statistics from Wikipedia]
* [https://en.wikipedia.org/wiki/Regression Regression from Wikipedia]
: [https://en.wiktionary.org/wiki/regression regression - Wiktionary]
: [https://en.wiktionary.org/wiki/regress regress - Wiktionary]
* [https://en.wikipedia.org/wiki/Regression_analysis Regression analysis from Wikipedia]
* [https://en.wikipedia.org/wiki/Outline_of_regression_analysis Outline of regression analysis from Wikipedia]
* [https://en.wikipedia.org/wiki/Bayesian_linear_regression Bayesian linear regression from Wikipedia]
* [https://en.wikipedia.org/wiki/Bayesian_multivariate_linear_regression Bayesian multivariate linear regression from Wikipedia]
* [https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation Correlation does not imply causation from Wikipedia]
* [https://en.wikipedia.org/wiki/Cross_entropy Cross entropy from Wikipedia]
* [https://en.wikipedia.org/wiki/Likelihood_function Likelihood function from Wikipedia]
* [https://en.wikipedia.org/wiki/Linear_regression Linear regression from Wikipedia]
* [https://en.wikipedia.org/wiki/Linear_discriminant_analysis Linear discriminant analysis from Wikipedia]
* [https://en.wikipedia.org/wiki/Logistic_regression Logistic regression from Wikipedia]
* [https://en.wikipedia.org/wiki/Kernel_Fisher_discriminant_analysis Kernel Fisher discriminant analysis from Wikipedia]
* [https://en.wikipedia.org/wiki/Maximum_likelihood_estimation Maximum likelihood estimation from Wikipedia]
* [https://en.wikipedia.org/wiki/Mean_squared_error Mean squared error from Wikipedia]
* [https://en.wikipedia.org/wiki/Nonlinear_regression Nonlinear regression from Wikipedia]
* [https://en.wikipedia.org/wiki/Ordinary_least_squares Ordinary least squares from Wikipedia]
* [https://en.wikipedia.org/wiki/Simple_linear_regression Simple linear regression from Wikipedia]
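Several of the concepts above (logistic regression, mean squared error) come together in evaluation tuning: engine scores are mapped to expected game results by a logistic function, and the weights are adjusted to minimize the error against actual results. A minimal sketch, assuming centipawn scores and a scaling constant K fitted to the data (the names below are illustrative):

```python
def win_probability(score_cp, k=1.0):
    """Map an evaluation score in centipawns to an expected result in
    [0, 1] via the logistic function, in the Elo-like base-10 form."""
    return 1.0 / (1.0 + 10.0 ** (-k * score_cp / 400.0))

def mean_squared_error(scores, results, k=1.0):
    """Mean squared difference between predicted and actual results
    (1 = win, 0.5 = draw, 0 = loss). Tuning in this style minimizes
    this quantity over the evaluation weights that produce the scores."""
    return sum((r - win_probability(s, k)) ** 2
               for s, r in zip(scores, results)) / len(scores)
```

A score of 0 maps to an expected result of 0.5, and larger positive scores approach 1; any of the optimization methods listed earlier can then drive the error downward.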
==Code==
* [https://bitbucket.org/alonamaloh/ruy_tune alonamaloh / ruy_tune — Bitbucket] » [[RuyTune]] by [[Álvaro Begué]]
* <span id="Rockstar"></span>[https://github.com/lantonov/Rockstar Rockstar: Implementation of ROCK* algorithm (Gaussian kernel regression + natural gradient descent) for optimisation | GitHub] by [[Lyudmil Antonov]] and [[Joona Kiiski]] » [[Automated Tuning#ROCK|ROCK*]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=65045 ROCK* black-box optimizer for chess] by [[Jon Dart]], [[CCC]], August 31, 2017</ref>
* [https://github.com/zamar/spsa SPSA Tuner for Stockfish Chess Engine | GitHub] by [[Joona Kiiski]] » [[Stockfish]], [[Stockfish's Tuning Method]]
==Misc==
* [https://www.facebook.com/TheNextStepQuintet The Next Step Quintet] feat. [http://www.tivonpennicott.com/ Tivon Pennicott] - [http://www.discogs.com/Next-Step-Quintet-The-Next-Step-Quintet/release/4970720 Regression], [https://el-gr.facebook.com/KerameioBar KerameioBar] [https://en.wikipedia.org/wiki/Athens Athens], [https://en.wikipedia.org/wiki/Greece Greece], September 2014, [https://en.wikipedia.org/wiki/YouTube YouTube] Video
: {{#evu:https://www.youtube.com/watch?v=lc4LBx2_Mak|alignment=left|valignment=top}}

=References=
<references />

'''[[Main Page|Up one Level]]'''
