Automated Tuning
==Instances==
* [[ACPP]]
* [[Amoeba]]
* [[BBChess (SI)#DifferentialEvolution|Differential Evolution in BBChess]]
* [https://en.wikipedia.org/wiki/Time_complexity Time complexity] issues with increasing number of weights to tune
<span id="ReinformentLearning"></span>
=Reinforcement Learning=
[[Reinforcement Learning|Reinforcement learning]], in particular [[Temporal Difference Learning|temporal difference learning]], has a long history in tuning evaluation weights in game programming, first seen in the late 1950s in [[Arthur Samuel]]'s [[Checkers]] player <ref>[[Arthur Samuel]] ('''1959'''). ''[http://domino.watson.ibm.com/tchjr/journalindex.nsf/600cc5649e2871db852568150060213c/39a870213169f45685256bfa00683d74!OpenDocument Some Studies in Machine Learning Using the Game of Checkers]''. IBM Journal July 1959</ref>. In self-play against a stable copy of itself, the weights of the evaluation function were adjusted after each move so that the [[Score|score]] of the [[Root|root position]] after a [[Quiescence Search|quiescence search]] came closer to the score of the full search. This TD method was generalized and formalized by [[Richard Sutton]] in 1988 <ref>[[Richard Sutton]] ('''1988'''). ''Learning to Predict by the Methods of Temporal Differences''. [https://en.wikipedia.org/wiki/Machine_Learning_%28journal%29 Machine Learning], Vol. 3, No. 1, [http://webdocs.cs.ualberta.ca/~sutton/papers/sutton-88.pdf pdf]</ref>, who introduced the decay parameter '''λ''', which controls what proportion of the score comes from the outcome of [https://en.wikipedia.org/wiki/Monte_Carlo_method Monte Carlo] simulated games, tapering between [https://en.wikipedia.org/wiki/Bootstrapping#Artificial_intelligence_and_machine_learning bootstrapping] (λ = 0) and Monte Carlo (λ = 1). [[Temporal Difference Learning#TDLamba|TD-λ]] was famously applied by [[Gerald Tesauro]] in his [[Backgammon]] program [https://en.wikipedia.org/wiki/TD-Gammon TD-Gammon] <ref>[[Gerald Tesauro]] ('''1992'''). ''Temporal Difference Learning of Backgammon Strategy''. [http://www.informatik.uni-trier.de/~ley/db/conf/icml/ml1992.html#Tesauro92 ML 1992]</ref> <ref>[[Gerald Tesauro]] ('''1994'''). ''TD-Gammon, a Self-Teaching Backgammon Program, Achieves Master-Level Play''. [http://www.informatik.uni-trier.de/~ley/db/journals/neco/neco6.html#Tesauro94 Neural Computation Vol. 6, No. 2]</ref>; its [[Minimax|minimax]] adaptation [[Temporal Difference Learning#TDLeaf|TD-Leaf]] was successfully used for evaluation tuning in chess programs <ref>[[Don Beal]], [[Martin C. Smith]] ('''1999'''). ''Learning Piece-Square Values using Temporal Differences.'' [[ICGA Journal#22_4|ICCA Journal, Vol. 22, No. 4]]</ref>, with [[KnightCap]] <ref>[[Jonathan Baxter]], [[Andrew Tridgell]], [[Lex Weaver]] ('''1998'''). ''Experiments in Parameter Learning Using Temporal Differences''. [[ICGA Journal#21_2|ICCA Journal, Vol. 21, No. 2]], [http://cs.anu.edu.au/%7ELex.Weaver/pub_sem/publications/ICCA-98_equiv.pdf pdf]</ref> and [[CilkChess]] <ref>[http://supertech.csail.mit.edu/chess/ The Cilkchess Parallel Chess Program]</ref> as prominent examples.
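The following minimal sketch illustrates the kind of TD(λ)-style weight update described above, under the assumption of a linear evaluation ''score = w · φ(leaf)'', where φ extracts the feature vector of the principal variation leaf of each searched position (as in TD-Leaf). The function name, signature and constants are illustrative only and not taken from any particular engine.
<pre>
# Minimal TD(lambda)-style update for a linear evaluation (sketch).
# Assumption: score(pos) = w . phi(pv_leaf(pos)), so the gradient of the
# score with respect to w is simply the leaf feature vector.
import numpy as np

def td_lambda_update(w, leaf_features, leaf_scores, alpha=1e-4, lam=0.7):
    """One TD(lambda) weight update over a finished self-play game.

    w             -- current weight vector (numpy array)
    leaf_features -- feature vectors phi(leaf_t), one per searched position
    leaf_scores   -- search scores s_t, all from the same point of view
    alpha         -- learning rate
    lam           -- decay parameter: 0 = pure bootstrapping, 1 = Monte Carlo
    """
    n = len(leaf_scores)
    # Temporal differences between successive search scores.
    deltas = [leaf_scores[t + 1] - leaf_scores[t] for t in range(n - 1)]
    for t in range(n - 1):
        # Lambda-discounted sum of all future temporal differences.
        target_error = sum(lam ** (j - t) * deltas[j] for j in range(t, n - 1))
        # Move the weights along the gradient of s_t, i.e. the leaf features.
        w = w + alpha * target_error * np.asarray(leaf_features[t])
    return w
</pre>
With λ = 0 each position is only pulled toward the score of its successor (pure bootstrapping); with λ = 1 every position is pulled toward the final score of the game (Monte Carlo).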
==1970 ...==
* [[Arnold K. Griffith]] ('''1974'''). ''[http://www.sciencedirect.com/science/article/pii/0004370274900277 A Comparison and Evaluation of Three Machine Learning Procedures as Applied to the Game of Checkers]''. [https://en.wikipedia.org/wiki/Artificial_Intelligence_%28journal%29 Artificial Intelligence], Vol. 5, No. 2
* [[Mathematician#MSBazaraa|Mokhtar S. Bazaraa]], [[Mathematician#MCShetty|C. M. Shetty]] ('''1976'''). ''[https://link.springer.com/book/10.1007%2F978-3-642-48294-6 Foundations of Optimization]''. Lecture Notes in Economics and Mathematical Systems, Vol. 122, [https://en.wikipedia.org/wiki/Springer_Science%2BBusiness_Media Springer]
* <span id="NonlinearProgramming1st"></span>[[Mathematician#MSBazaraa|Mokhtar S. Bazaraa]], [[Mathematician#MCShetty|C. M. Shetty]] ('''1979'''). ''Nonlinear Programming: Theory and Algorithms''. [https://en.wikipedia.org/wiki/Wiley_(publisher) Wiley] » [[#NonlinearProgramming2nd|2nd]], [[#NonlinearProgramming3rd|3rd edition]]
==1980 ...==
* [[Thomas Nitsche]] ('''1982'''). ''A Learning Chess Program.'' [[Advances in Computer Chess 3]]
* [[Paul E. Utgoff]], [http://dblp.uni-trier.de/pers/hd/c/Clouse:Jeffery_A= Jeffery A. Clouse] ('''1991'''). ''[http://scholarworks.umass.edu/cs_faculty_pubs/193/ Two Kinds of Training Information for Evaluation Function Learning]''. [https://en.wikipedia.org/wiki/University_of_Massachusetts_Amherst University of Massachusetts, Amherst], Proceedings of the AAAI 1991
* [[Gerald Tesauro]] ('''1992'''). ''Temporal Difference Learning of Backgammon Strategy''. [http://www.informatik.uni-trier.de/~ley/db/conf/icml/ml1992.html#Tesauro92 ML 1992]
* [[Ingo Althöfer]] ('''1993'''). ''On Telescoping Linear Evaluation Functions.'' [[ICGA Journal#16_2|ICCA Journal, Vol. 16, No. 2]], pp. 91-94
* <span id="NonlinearProgramming2nd"></span>[[Mathematician#MSBazaraa|Mokhtar S. Bazaraa]], [[Mathematician#HDSherali|Hanif D. Sherali]], [[Mathematician#MCShetty|C. M. Shetty]] ('''1993'''). ''Nonlinear Programming: Theory and Algorithms''. 2nd edition, [https://en.wikipedia.org/wiki/Wiley_(publisher) Wiley] » [[#NonlinearProgramming1st|1st]], [[#NonlinearProgramming3rd|3rd edition]]
* [[Peter Mysliwietz]] ('''1994'''). ''Konstruktion und Optimierung von Bewertungsfunktionen beim Schach.'' Ph.D. thesis (German)
==1995 ...==
* [[Levente Kocsis]], [[Csaba Szepesvári]], [[Mark Winands]] ('''2005'''). ''[http://link.springer.com/chapter/10.1007/11922155_4 RSPSA: Enhanced Parameter Optimization in Games]''. [[Advances in Computer Games 11]], [http://www.sztaki.hu/~szcsaba/papers/rspsa_acg.pdf pdf]
'''2006'''
* <span id="NonlinearProgramming3rd"></span>[[Mathematician#MSBazaraa|Mokhtar S. Bazaraa]], [[Mathematician#HDSherali|Hanif D. Sherali]], [[Mathematician#MCShetty|C. M. Shetty]] ('''2006'''). ''[https://www.wiley.com/en-us/Nonlinear+Programming%3A+Theory+and+Algorithms%2C+3rd+Edition-p-9780471486008 Nonlinear Programming: Theory and Algorithms]''. 3rd edition, [https://en.wikipedia.org/wiki/Wiley_(publisher) Wiley] <ref>[http://www.open-aurec.com/wbforum/viewtopic.php?f=4&t=49450&start=3 Re: Adjusting weights the Deep Blue way] by [[Pradu Kannan]], [[Computer Chess Forums|Winboard Forum]], September 01, 2008</ref> » [[#NonlinearProgramming1st|1st]], [[#NonlinearProgramming2nd|2nd edition]]
* [[Levente Kocsis]], [[Csaba Szepesvári]] ('''2006'''). ''[http://link.springer.com/article/10.1007/s10994-006-6888-8 Universal Parameter Optimisation in Games Based on SPSA]''. [https://en.wikipedia.org/wiki/Machine_Learning_%28journal%29 Machine Learning], Special Issue on Machine Learning and Games, Vol. 63, No. 3
* [[Hallam Nasreddine]], [[Hendra Suhanto Poh]], [[Graham Kendall]] ('''2006'''). ''Using an Evolutionary Algorithm for the Tuning of a Chess Evaluation Function Based on a Dynamic Boundary Strategy''. Proceedings of the 2006 [[IEEE]] Conference on Cybernetics and Intelligent Systems, [http://www.graham-kendall.com/papers/npk2006.pdf pdf]
'''2012'''
* [[Amir Ban]] ('''2012'''). ''[http://www.ratio.huji.ac.il/node/2362 Automatic Learning of Evaluation, with Applications to Computer Chess]''. Discussion Paper 613, [https://en.wikipedia.org/wiki/Hebrew_University_of_Jerusalem The Hebrew University of Jerusalem] - Center for the Study of Rationality, [https://en.wikipedia.org/wiki/Givat_Ram Givat Ram]
* [[Thitipong Kanjanapa]], [[Kanako Komiya]], [[Yoshiyuki Kotani]] ('''2012'''). ''Design and Implementation of Bonanza Method for the Evaluation in the Game of Arimaa''. [http://www.ipsj.or.jp/english/index.html IPSJ SIG Technical Report], Vol. 2012-GI-27, No. 4, [http://arimaa.com/arimaa/papers/KanjanapaThitipong/IPSJ-GI12027004.pdf pdf] » [[Arimaa]]
* [[Alan J. Lockett]] ('''2012'''). ''General-Purpose Optimization Through Information Maximization''. Ph.D. thesis, [https://en.wikipedia.org/wiki/University_of_Texas_at_Austin University of Texas at Austin], advisor [[Risto Miikkulainen]], [http://www.alockett.com/static/pdf/lockett-thesis.pdf pdf]
'''2013'''
* [[Alan J. Lockett]], [[Risto Miikkulainen]] ('''2013'''). ''[http://nn.cs.utexas.edu/?lockett:foga2013 A Measure-Theoretic Analysis of Stochastic Optimization]''. [https://dblp.uni-trier.de/db/conf/foga/foga2013.html FOGA 2013]
* [[Wen-Jie Tseng]], [[Jr-Chang Chen]], [[I-Chen Wu]], [[Ching-Hua Kuo]], [[Bo-Han Lin]] ('''2013'''). ''[https://kaigi.org/jsai/webprogram/2013/paper-138.html A Supervised Learning Method for Chinese Chess Programs]''. [http://2013.conf.ai-gakkai.or.jp/english-info JSAI2013], [https://kaigi.org/jsai/webprogram/2013/pdf/138.pdf pdf]
* [[Akira Ura]], [[Makoto Miwa]], [[Yoshimasa Tsuruoka]], [[Takashi Chikayama]] ('''2013'''). ''[https://link.springer.com/chapter/10.1007/978-3-319-09165-5_18 Comparison Training of Shogi Evaluation Functions with Self-Generated Training Positions and Moves]''. [[CG 2013]], [https://pdfs.semanticscholar.org/6ad0/7167425539cf64e6bf420d7a28a1fc1047d6.pdf slides as pdf]
* [[Yoshikuni Sato]], [[Makoto Miwa]], [[Shogo Takeuchi]], [[Daisuke Takahashi]] ('''2013'''). ''[http://www.aaai.org/ocs/index.php/AAAI/AAAI13/paper/view/6402 Optimizing Objective Function Parameters for Strength in Computer Game-Playing]''. [http://www.informatik.uni-trier.de/~ley/db/conf/aaai/aaai2013.html#SatoMTT13 AAAI 2013]
* [[Ilya Loshchilov]] ('''2013'''). ''[http://loshchilov.com/phd.html Surrogate-Assisted Evolutionary Algorithms]''. Ph.D. thesis, [[University of Paris#11|Paris-Sud 11 University]], advisors [[Marc Schoenauer]] and [[Michèle Sebag]]
* [https://www.cs.ubc.ca/~schmidtm/ Mark Schmidt], [https://inria.academia.edu/NicolasLeRoux Nicolas Le Roux], [https://www.di.ens.fr/~fbach/ Francis Bach] ('''2013'''). ''Minimizing Finite Sums with the Stochastic Average Gradient''. [https://arxiv.org/abs/1309.2388 arXiv:1309.2388] <ref>[https://groups.google.com/d/msg/fishcooking/XnLmUP_78iw/QgMZzmeVBgAJ Tuning floats] by [[Stephane Nicolet]], [[Computer Chess Forums|FishCooking]], April 12, 2018</ref>
'''2014'''
* [[Kunihito Hoki]], [[Tomoyuki Kaneko]] ('''2014'''). ''[https://www.jair.org/papers/paper4217.html Large-Scale Optimization for Evaluation Functions with Minimax Search]''. [https://www.jair.org/vol/vol49.html JAIR Vol. 49], [https://pdfs.semanticscholar.org/eb9c/173576577acbb8800bf96aba452d77f1dc19.pdf pdf] » [[Shogi]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=55084 MMTO for evaluation learning] by [[Jon Dart]], [[CCC]], January 25, 2015</ref>
* [https://scholar.google.com/citations?user=glcep6EAAAAJ&hl=en Aryan Mokhtari], [https://scholar.google.com/citations?user=7mrPM4kAAAAJ&hl=en Alejandro Ribeiro] ('''2014'''). ''RES: Regularized Stochastic BFGS Algorithm''. [https://arxiv.org/abs/1401.7625 arXiv:1401.7625] <ref> [https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm Broyden–Fletcher–Goldfarb–Shanno algorithm from Wikipedia]</ref>
* <span id="ROCK"></span>[http://www.asl.ethz.ch/the-lab/people/person-detail.html?persid=184943 Jemin Hwangbo], [https://www.linkedin.com/in/christian-gehring-1b958395/ Christian Gehring], [http://www.asl.ethz.ch/the-lab/people/person-detail.html?persid=186652 Hannes Sommer], [http://www.asl.ethz.ch/the-lab/people/person-detail.html?persid=29981 Roland Siegwart], [http://www.adrl.ethz.ch/doku.php/adrl:people:jbuchli Jonas Buchli] ('''2014'''). ''ROCK∗ — Efficient black-box optimization for policy learning''. [http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=7028729 Humanoids, 2014] » [[Automated Tuning#Rockstar|Rockstar]]
* [[Mathematician#YDauphin|Yann Dauphin]], [[Mathematician#RPascanu|Razvan Pascanu]], [[Mathematician#CGulcehre|Caglar Gulcehre]], [[Mathematician#KCho|Kyunghyun Cho]], [[Mathematician#SGanguli|Surya Ganguli]], [[Mathematician#YBengio|Yoshua Bengio]] ('''2014'''). ''Identifying and attacking the saddle point problem in high-dimensional non-convex optimization''. [https://arxiv.org/abs/1406.2572 arXiv:1406.2572] <ref>[https://groups.google.com/d/msg/fishcooking/wOfRuzTSi_8/VgjN8MmSBQAJ high dimensional optimization] by [[Warren D. Smith]], [[Computer Chess Forums|FishCooking]], December 27, 2019</ref>
* [https://arxiv.org/find/cs/1/au:+Martens_J/0/1/0/all/0/1 James Martens] ('''2014, 2017'''). ''New insights and perspectives on the natural gradient method''. [https://arxiv.org/abs/1412.1193 arXiv:1412.1193]
==2015 ...==
'''2016'''
* [[Diogo Real]], [[Alan Blair]] ('''2016'''). ''[https://ieeexplore.ieee.org/document/7743850/ Learning a multi-player chess game with TreeStrap]''. [https://dblp.uni-trier.de/db/conf/cec/cec2016.html CEC 2016]
* [[Wojciech Jaśkowski]], [[Marcin Szubert]] ('''2016'''). ''[https://ieeexplore.ieee.org/document/7180338 Coevolutionary CMA-ES for Knowledge-Free Learning of Game Position Evaluation]''. [[IEEE#TOCIAIGAMES|IEEE Transactions on Computational Intelligence and AI in Games]], Vol. 8, No. 4 <ref>[https://en.wikipedia.org/wiki/CMA-ES CMA-ES from Wikipedia]</ref>
* [[Wojciech Jaśkowski]], [[Paweł Liskowski]], [[Marcin Szubert]], [[Krzysztof Krawiec]] ('''2016'''). ''[https://content.sciendo.com/view/journals/amcs/26/1/article-p215.xml The performance profile: A multi–criteria performance evaluation method for test–based problems]''. [https://en.wikipedia.org/wiki/International_Journal_of_Applied_Mathematics_and_Computer_Science International Journal of Applied Mathematics and Computer Science], Vol. 26, No. 1
'''2017'''
* [http://ruder.io/ Sebastian Ruder] ('''2017'''). ''[http://ruder.io/optimizing-gradient-descent/ An overview of gradient descent optimization algorithms]''. [https://arxiv.org/abs/1609.04747v2 arXiv:1609.04747v2] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=64189&start=46 Re: Texel tuning method question] by [[Jon Dart]], [[CCC]], July 23, 2017</ref>
'''2018'''
* [[Takafumi Nakamichi]], [[Takeshi Ito]] ('''2018'''). ''Adjusting the evaluation function for weakening the competency level of a computer shogi program''. [[ICGA Journal#40_1|ICGA Journal, Vol. 40, No. 1]]
* [[Hung-Jui Chang]], [[Jr-Chang Chen]], [[Gang-Yu Fan]], [[Chih-Wen Hsueh]], [[Tsan-sheng Hsu]] ('''2018'''). ''Using Chinese dark chess endgame databases to validate and fine-tune game evaluation functions''. [[ICGA Journal#40_2|ICGA Journal, Vol. 40, No. 2]] » [[Chinese Dark Chess]], [[Endgame Tablebases]]
* [[Wen-Jie Tseng]], [[Jr-Chang Chen]], [[I-Chen Wu]], [[Tinghan Wei]] ('''2018'''). ''Comparison Training for Computer Chinese Chess''. [https://arxiv.org/abs/1801.07411 arXiv:1801.07411]<ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=52861&start=7 Re: multi-dimensional piece/square tables] by Tony P., [[CCC]], January 28, 2020 » [[Piece-Square Tables]]</ref>
=Forum Posts=
* [https://www.stmintz.com/ccc/index.php?id=487022 "learning" or "tuning" programs] by [[Sean Mintz]], [[CCC]], February 15, 2006
* [http://www.open-aurec.com/wbforum/viewtopic.php?f=4&t=49450 Adjusting weights the Deep Blue way] by [[Tony van Roon-Werten]], [[Computer Chess Forums|Winboard Forum]], August 29, 2008 » [[Deep Blue]]
: [http://www.open-aurec.com/wbforum/viewtopic.php?f=4&t=49450&start=3 Re: Adjusting weights the Deep Blue way] by [[Pradu Kannan]], [[Computer Chess Forums|Winboard Forum]], September 01, 2008
* [http://www.open-aurec.com/wbforum/viewtopic.php?f=4&t=49818 Tuning the eval] by [[Daniel Anulliero]], [[Computer Chess Forums|Winboard Forum]], January 02, 2009
* [http://www.talkchess.com/forum/viewtopic.php?t=27266 Insanity... or Tal style?] by [[Miguel A. Ballicora]], [[CCC]], April 01, 2009
* [http://www.talkchess.com/forum/viewtopic.php?t=66221 tuning info] by [[Marco Belli]], [[CCC]], January 03, 2018
* [http://www.talkchess.com/forum/viewtopic.php?t=66681 3 million games for training neural networks] by [[Álvaro Begué]], [[CCC]], February 24, 2018 » [[Neural Networks]]
* [https://groups.google.com/d/msg/fishcooking/XnLmUP_78iw/QgMZzmeVBgAJ Tuning floats] by [[Stephane Nicolet]], [[Computer Chess Forums|FishCooking]], April 12, 2018
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=67831 Introducing PET] by [[Ed Schroder|Ed Schröder]], [[CCC]], June 27, 2018 » [[Strategic Test Suite]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=68326 Texel tuning speed] by [[Vivien Clauzon]], [[CCC]], August 29, 2018 » [[Texel's Tuning Method]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=69035 Particle Swarm Optimization Code] by [[Erik Madsen]], [[CCC]], November 24, 2018 » [[MadChess]]
'''2019'''
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=69532 Automated tuning... finally... (Topple v0.3.0)] by [[Vincent Tang]], [[CCC]], January 08, 2019 » [[Topple]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=71650 New Tool for Tuning with Skopt] by [[Thomas Dybdahl Ahle]], [[CCC]], August 25, 2019 <ref>[https://scikit-optimize.github.io/ skopt API documentation]</ref>
* [https://www.game-ai-forum.org/viewtopic.php?f=21&t=695 TD(1)] by [[Rémi Coulom]], [[Computer Chess Forums|Game-AI Forum]], November 20, 2019 » [[Temporal Difference Learning]]
* [https://groups.google.com/d/msg/fishcooking/wOfRuzTSi_8/VgjN8MmSBQAJ high dimensional optimization] by [[Warren D. Smith]], [[Computer Chess Forums|FishCooking]], December 27, 2019 <ref>[[Mathematician#YDauphin|Yann Dauphin]], [[Mathematician#RPascanu|Razvan Pascanu]], [[Mathematician#CGulcehre|Caglar Gulcehre]], [[Mathematician#KCho|Kyunghyun Cho]], [[Mathematician#SGanguli|Surya Ganguli]], [[Mathematician#YBengio|Yoshua Bengio]] ('''2014'''). ''Identifying and attacking the saddle point problem in high-dimensional non-convex optimization''. [https://arxiv.org/abs/1406.2572 arXiv:1406.2572]</ref>
==2020 ...==
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=72810 Board adaptive / tuning evaluation function - no NN/AI] by Moritz Gedig, [[CCC]], January 14, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=73629 Pawn structure tuning] by [[Vivien Clauzon]], [[CCC]], April 11, 2020 » [[Pawn Structure]], [[Ethereal]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74184 Learning/Tuning in SlowChess Blitz Classic] by [[Jonathan Kreuzer]], [[CCC]], June 15, 2020 » [[Slow Chess]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74209 Great input about Bayesian optimization of noisy function methods] by [[Vivien Clauzon]], [[CCC]], June 16, 2020
=External Links=
: [https://en.wikipedia.org/wiki/Engine_tuning Engine tuning from Wikipedia]
: [https://en.wikipedia.org/wiki/Self-tuning Self-tuning from Wikipedia]
==Engine Tuning==
* [http://rebel13.nl/rebel13/pet.html Practical Engine Tuning] by [[Ed Schroder|Ed Schröder]], June 2018 » [[Strategic Test Suite]] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=67831 Introducing PET] by [[Ed Schroder|Ed Schröder]], [[CCC]], June 27, 2018</ref>
* [https://www.3dkingdoms.com/chess/learning.html Automatic Tuning & Learning for Slow Chess Blitz Classic] by [[Jonathan Kreuzer]] » [[Slow Chess]] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74184 Learning/Tuning in SlowChess Blitz Classic] by [[Jonathan Kreuzer]], [[CCC]], June 15, 2020</ref>
==Optimization==
* [https://en.wiktionary.org/wiki/optimization optimization - Wiktionary]
: [https://en.wiktionary.org/wiki/optimize optimize - Wiktionary]
* [https://en.wikipedia.org/wiki/Mathematical_optimization Mathematical optimization from Wikipedia]
* [https://en.wikipedia.org/wiki/Operations_research Operations research from Wikipedia]
* [https://en.wikipedia.org/wiki/Optimization_problem Optimization problem from Wikipedia]
* [https://en.wikipedia.org/wiki/Duality_(optimization) Duality (optimization) from Wikipedia]
* [https://en.wikipedia.org/wiki/Local_search_%28optimization%29 Local search (optimization) from Wikipedia]
* [https://en.wikipedia.org/wiki/Iterated_local_search Iterated local search from Wikipedia]
* [https://en.wikipedia.org/wiki/Global_optimization Global optimization from Wikipedia]
* [https://en.wikipedia.org/wiki/Bayesian_optimization Bayesian optimization from Wikipedia]
* [https://scikit-optimize.github.io/notebooks/bayesian-optimization.html Bayesian optimization with skopt]
* [https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm Broyden–Fletcher–Goldfarb–Shanno algorithm from Wikipedia]
* [http://remi.coulom.free.fr/CLOP/ CLOP for Noisy Black-Box Parameter Optimization] by [[Rémi Coulom]] » [[CLOP]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=35049 Tool for automatic black-box parameter optimization released] by [[Rémi Coulom]], [[CCC]], June 20, 2010</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?p=421995 CLOP for Noisy Black-Box Parameter Optimization] by [[Rémi Coulom]], [[CCC]], September 01, 2011</ref>
* [https://en.wikipedia.org/wiki/Conjugate_gradient_method Conjugate gradient method from Wikipedia]
: [https://en.wikipedia.org/wiki/Entropy_maximization Entropy maximization from Wikipedia]
: [https://en.wikipedia.org/wiki/Linear_programming Linear programming from Wikipedia]
: [https://en.wikipedia.org/wiki/Nonlinear_programming Nonlinear programming from Wikipedia]
: [https://en.wikipedia.org/wiki/Simplex_algorithm Simplex algorithm from Wikipedia]
* [https://en.wikipedia.org/wiki/Differential_evolution Differential evolution from Wikipedia]
* [http://macechess.blogspot.de/2013/03/population-based-incremental-learning.html Population Based Incremental Learning (PBIL)] by [[Thomas Petzke]], March 16, 2013 » [[iCE]]
* [https://en.wikipedia.org/wiki/Simulated_annealing Simulated annealing from Wikipedia]
* [https://github.com/scikit-optimize Skopt (Scikit-Optimize) · GitHub]
* [https://bayes-skopt.readthedocs.io/en/latest/ Welcome to Bayes-skopt’s documentation!]
* [https://en.wikipedia.org/wiki/Stochastic_optimization Stochastic optimization from Wikipedia]
: [https://en.wikipedia.org/wiki/Simultaneous_perturbation_stochastic_approximation Simultaneous perturbation stochastic approximation (SPSA) - Wikipedia]
* <span id="Rockstar"></span>[https://github.com/lantonov/Rockstar Rockstar: Implementation of ROCK* algorithm (Gaussian kernel regression + natural gradient descent) for optimisation | GitHub] by [[Lyudmil Antonov]] and [[Joona Kiiski]] » [[Automated Tuning#ROCK|ROCK*]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=65045 ROCK* black-box optimizer for chess] by [[Jon Dart]], [[CCC]], August 31, 2017</ref>
* [https://github.com/zamar/spsa SPSA Tuner for Stockfish Chess Engine | GitHub] by [[Joona Kiiski]] » [[Stockfish]], [[Stockfish's Tuning Method]]
* [https://github.com/scikit-optimize/scikit-optimize GitHub - scikit-optimize/scikit-optimize: Sequential model-based optimization with a `scipy.optimize` interface]
* [https://github.com/kiudee/bayes-skopt GitHub - kiudee/bayes-skopt: A fully Bayesian implementation of sequential model-based optimization] by [[Karlson Pfannschmidt]] » [[Fat Fritz]] <ref>[https://en.chessbase.com/post/fat-fritz-update-and-fat-fritz-jr Fat Fritz 1.1 update and a small gift] by [[Albert Silver]]. [[ChessBase|ChessBase News]], March 05, 2020</ref>
* [https://github.com/kiudee/chess-tuning-tools GitHub - kiudee/chess-tuning-tools] by [[Karlson Pfannschmidt]] » [[Leela Chess Zero]]
* [https://github.com/krasserm/bayesian-machine-learning GitHub - krasserm/bayesian-machine-learning: Notebooks about Bayesian methods for machine learning] by [https://krasserm.github.io/ Martin Krasser] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74209 Great input about Bayesian optimization of noisy function methods] by [[Vivien Clauzon]], [[CCC]], June 16, 2020</ref>
==Misc==
* [[:Category:The Next Step Quintet|The Next Step Quintet]] feat. [http://www.tivonpennicott.com/ Tivon Pennicott] - [http://www.discogs.com/Next-Step-Quintet-The-Next-Step-Quintet/release/4970720 Regression], [https://el-gr.facebook.com/KerameioBar KerameioBar] [https://en.wikipedia.org/wiki/Athens Athens], [https://en.wikipedia.org/wiki/Greece Greece], September 2014, [https://en.wikipedia.org/wiki/YouTube YouTube] Video
