'''[[Main Page|Home]] * [[Tuning|Automated Tuning]] * Texel's Tuning Method'''

[[FILE:Texel sheep and gulls on Flat Holm.jpg|border|right|thumb|Texel and Gaviotas stay tuned on [https://en.wikipedia.org/wiki/Flat_Holm Ynys Echni] <ref>[https://en.wikipedia.org/wiki/Texel_%28sheep%29 Texel sheep] and [https://en.wikipedia.org/wiki/Gull gulls], [https://en.wikipedia.org/wiki/Flat_Holm Flat Holm Island], [https://en.wikipedia.org/wiki/Wales Wales], Photo by [http://www.flickr.com/people/8007352@N06 Sam Whitfield], March 20, 2010, [http://commons.wikimedia.org/wiki/Category:Texel_%28sheep%29 Category:Texel (sheep) - Wikimedia Commons]</ref> ]]

'''Texel's Tuning Method''',<br/>
by [[Peter Österlund]] as applied to [[Texel#1.03|Texel 1.03]] and following versions <ref>[http://www.talkchess.com/forum/viewtopic.php?topic_view=threads&p=555522&t=50823 The texel evaluation function optimization algorithm] by [[Peter Österlund]], [[CCC]], January 31, 2014 » [[Automated Tuning]]</ref>, is a concrete application of [[Automated Tuning#LogisticRegression|logistic regression]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=56168&start=36 Re: Piece weights with regression analysis (in Russian)] by [[Fabien Letouzey]], [[CCC]], May 04, 2015</ref>. This page is a slightly edited reprint of a reply to [[Álvaro Begué]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=50823&start=10 Re: How Do You Automatically Tune Your Evaluation Tables] by [[Álvaro Begué]], [[CCC]], January 08, 2014</ref> and subsequent postings in a [[CCC]] discussion about [[Automated Tuning|automated tuning]] initiated by [[Tom Likens]] in January 2014 <ref>[http://www.talkchess.com/forum/viewtopic.php?t=50823 How Do You Automatically Tune Your Evaluation Tables] by [[Tom Likens]], [[CCC]], January 07, 2014</ref>. As mentioned in a further reply <ref>[http://www.talkchess.com/forum/viewtopic.php?t=50823&start=28 Re: The texel evaluation function optimization algorithm] by [[Miguel A. Ballicora]], January 31, 2014</ref>, a similar technique had already been proposed by [[Miguel A. Ballicora]] in 2009, as applied to [[Gaviota]] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=27266&postdays=0&postorder=asc&topic_view=&start=11 Re: Insanity... or Tal style?] by [[Miguel A. Ballicora]], [[CCC]], April 02, 2009</ref>.

=Method=
Take 64000 [[Chess Game|games]] played at a fast [[Time Management|time control]] (such as 1s+0.08s/move) between the current and/or previous versions of the engine. Extract all [[Chess Position|positions]] from those games, except positions within the [[Opening Book|opening book]] and positions where the engine found a [[Score#MateScores|mate score]] during the game. This typically gives about 8.8 million positions which are stored as [[Forsyth-Edwards Notation|FEN]] strings in a text file.
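The exact file layout is not specified beyond "FEN strings in a text file"; a minimal C++ sketch of one possible layout, pairing each position with the game result needed below to define E, might look as follows (the semicolon-separated format and the names <code>TrainingPosition</code> and <code>loadPositions</code> are illustrative assumptions, not Texel's actual code):
<pre>
#include <fstream>
#include <string>
#include <vector>

struct TrainingPosition {
    std::string fen;  // position in Forsyth-Edwards Notation
    double result;    // game result: 0 = black win, 0.5 = draw, 1 = white win
};

// Reads lines of the form "<FEN> ; <result>" into memory.
std::vector<TrainingPosition> loadPositions(const std::string& fileName) {
    std::vector<TrainingPosition> positions;
    std::ifstream in(fileName);
    std::string line;
    while (std::getline(in, line)) {
        std::size_t sep = line.rfind(';');
        if (sep == std::string::npos)
            continue;  // skip malformed lines
        positions.push_back({line.substr(0, sep),
                             std::stod(line.substr(sep + 1))});
    }
    return positions;
}
</pre>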

Define the average [[Evaluation|evaluation]] [https://en.wikipedia.org/wiki/Mean_squared_error error] E <ref>[https://en.wikipedia.org/wiki/Partition_of_sums_of_squares Partition of sums of squares - Wikipedia]</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=58298&start=46 Re: Why computing K that minimizes the sigmoid func. value?] by [[Peter Österlund]], [[CCC]], November 29, 2015</ref>:
[[FILE:TexelTuneMathE.jpg|none|text-bottom]]
where:
* N is the number of test positions.
* Ri is the result of the game corresponding to position i: '''0''' for a black win, '''0.5''' for a draw, and '''1''' for a white win.
* qi, the qScore corresponding to position i, is the [[Score|value]] returned by the chess engine's [[Quiescence Search|quiescence]] function. The algorithm assumes the qScore function is [https://en.wikipedia.org/wiki/Deterministic_system deterministic]; if [[Transposition Table|transposition tables]] or the [[History Heuristic|history heuristic]] are used in the qScore function, this may not be the case.
* [[FILE:TexelTuneMathSigmoid.jpg|none|middle]] [[FILE:SigmoidTexelTune.gif|right|frameless|link=http://wolfr.am/1al3d5B]]
: <ref>[https://en.wikipedia.org/wiki/Sigmoid_function Sigmoid function from Wikipedia]</ref> <ref>[https://en.wikipedia.org/wiki/Logistic_function Logistic function from Wikipedia]</ref> <ref>see also winning probability W in [[Pawn Advantage, Win Percentage, and ELO]]</ref> <ref>[http://wolfr.am/1al3d5B log-linear 1 / (1 + 10^(-s/4)) , s=-10 to 10] plot by [https://en.wikipedia.org/wiki/Wolfram_Alpha Wolfram Alpha]</ref>
* K is a scaling constant.
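Written out in LaTeX notation, the two formulas shown as images above read (with qi measured in [[Centipawns|centipawns]]; the plot linked above uses pawn units, hence the exponent -s/4 with K = 1):
<pre>
E = \frac{1}{N} \sum_{i=1}^{N} \left( R_i - \sigma(q_i) \right)^2
\qquad\text{where}\qquad
\sigma(q) = \frac{1}{1 + 10^{-K q / 400}}
</pre>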

Compute the K that minimizes E. K is never changed again by the algorithm <ref>[http://www.talkchess.com/forum/viewtopic.php?t=55621&start=4 Re: Experiments with eval tuning] by [[Martin Sedlak]], [[CCC]], March 10, 2015</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=58298 Why computing K that minimizes the sigmoid func. value?...] by [[Michael Hoffmann]], [[CCC]], November 19, 2015</ref>.
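A minimal sketch of one way to compute K by direct search (an illustration, not Texel's actual code; it assumes a function <code>averageError(K, positions)</code> that evaluates E for the given scaling constant, and the <code>TrainingPosition</code> type from the sketch above):
<pre>
#include <vector>

double averageError(double K, const std::vector<TrainingPosition>& positions);

// Finds the K minimizing E by scanning an interval and repeatedly
// zooming in around the best value found so far.
double computeK(const std::vector<TrainingPosition>& positions) {
    double lo = 0.0, hi = 3.0;
    double bestK = 1.0;
    double bestE = averageError(bestK, positions);
    for (int iter = 0; iter < 6; iter++) {
        double step = (hi - lo) / 20;
        for (double k = lo; k <= hi; k += step) {
            double e = averageError(k, positions);
            if (e < bestE) { bestE = e; bestK = k; }
        }
        lo = bestK - step;  // narrow the search interval around the best K
        hi = bestK + step;
    }
    return bestK;
}
</pre>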

If needed, [https://en.wikipedia.org/wiki/Code_refactoring refactor] the source code so that the qScore function depends on a set of evaluation function [https://en.wikipedia.org/wiki/Parameter parameters]:
[[FILE:TexelTuneMathParams.jpg|none|text-bottom]]
The average error E is now a function of the M parameters. Find parameter values such that E is at a [https://en.wikipedia.org/wiki/Maxima_and_minima local minimum] in [https://en.wikipedia.org/wiki/Parameter_space parameter space]. The exact [https://en.wikipedia.org/wiki/Mathematical_optimization optimization] method is not that important. It could, for example, use [https://en.wikipedia.org/wiki/Local_search_%28optimization%29 local search] varying one parameter at a time, the [https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm Gauss-Newton method], the [https://en.wikipedia.org/wiki/Conjugate_gradient_method conjugate gradient method], or a hybrid approach mixing those methods.

If the evaluation function parameters are all of integer type with finite range (which they typically are in a chess engine), the local search method is guaranteed to eventually terminate, since the parameter space contains only finitely many elements. In this case the conventional [https://en.wikipedia.org/wiki/Gradient gradient] is also not mathematically defined, but you can use [https://en.wikipedia.org/wiki/Difference_quotient difference quotients] instead of the "real" gradient in the Gauss-Newton and conjugate gradient methods.
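A minimal sketch of such a difference-quotient "gradient" (illustrative; it assumes the same error function <code>E(params)</code> used in the pseudo code below):
<pre>
#include <vector>

double E(const std::vector<int>& params);

// Central difference quotient with a step of one unit per integer parameter,
// usable in place of the gradient in the Gauss-Newton and
// conjugate gradient methods.
std::vector<double> approxGradient(const std::vector<int>& params) {
    std::vector<double> grad(params.size());
    for (std::size_t j = 0; j < params.size(); j++) {
        std::vector<int> p = params;
        p[j] += 1;
        double ePlus = E(p);   // E at params + e_j
        p[j] -= 2;
        double eMinus = E(p);  // E at params - e_j
        grad[j] = (ePlus - eMinus) / 2.0;
    }
    return grad;
}
</pre>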

=Advantages=
One big advantage of this method is that it can simultaneously optimize several hundred evaluation function parameters in a reasonable amount of time. The first step, collecting a set of games, takes no extra time, because those games have already been played when previous changes to the engine were tested. The step that converts from [[Portable Game Notation|PGN]] to FEN only takes a few minutes. The time-consuming step is computing the local minimum. In my engine M is currently around 400, and computing the gradient takes around 25 minutes on a 16-core Dell T620 computer <ref>[http://www.zdnet.com/dell-poweredge-t620-review-7000003879/ Dell PowerEdge T620 review | ZDNet]</ref>. A local minimum can usually be computed within 6 hours, faster if only small changes to the parameter values are needed.

While 6 hours may seem like a lot, consider how long it would take [[CLOP]] to simultaneously optimize 400 parameters (assuming you have enough memory to actually do that). I have not worked out the math but I guess it would take at least ten years to get reliable results.

Another advantage is that no external source of [[Knowledge|knowledge]] is needed. You don't need to find a large set of games played by engines or humans that are stronger than the engine you want to improve. While this may not be a practical advantage at the current [[Playing Strength|strength]] of my engine, I still find this property theoretically pleasing, somewhat similar to the concept of self [https://en.wikipedia.org/wiki/Calibration calibration] <ref>[http://www.selfcalibration.com/ What is Self-Calibration?]</ref>.

A third advantage is that the need for different evaluation terms to be "[https://en.wikipedia.org/wiki/Orthogonality#Statistics.2C_econometrics.2C_and_economics orthogonal]" disappears, since the optimization will automatically deal with dependencies between evaluation terms.

A fourth advantage, compared to the method used by [[Amir Ban]] to tune the [[Junior]] evaluation function <ref>[[Amir Ban]] ('''2012'''). ''[http://www.ratio.huji.ac.il/node/2362 Automatic Learning of Evaluation, with Applications to Computer Chess]''. Discussion Paper 613, [https://en.wikipedia.org/wiki/Hebrew_University_of_Jerusalem The Hebrew University of Jerusalem] - Center for the Study of Rationality, [https://en.wikipedia.org/wiki/Givat_Ram Givat Ram]</ref>, is that my method does not need the implementation of a "drawishness" function in the chess engine.

=Concerns=
My biggest concern about this method is what statisticians call "[https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation correlation does not imply causation]". I don't have a specific example for chess, but the Wikipedia example about ice cream sales and drowning accidents is quite illuminating. It is a known fact that when ice cream sales increase, so do [https://en.wikipedia.org/wiki/Drowning drowning] accidents, but that does not imply that stopping ice cream sales would reduce the number of drowning accidents. It is conceivable that the chess engine would learn to appreciate positional features that are correlated with increased winning chances, even though actively striving to reach such positions does not increase winning chances. Anyway, the method does work in my engine, possibly because of the counter argument that "correlation is not causation but it sure is a hint".

Another concern is that the chess engine will not be able to learn things it does not understand at all. For example, assume the chess engine did not know how to win [[KBNK Endgame|KBNK endings]], so all games leading to this endgame would end in a draw. Because of this, E would tend to be smaller if the evaluation function scored all KBNK endings as draws, so there is a risk the evaluation actually gets worse after optimization.

On the other hand, the algorithm can be expected to improve the engine's knowledge about things it already partially knows, or about things that are good even if the engine does not know why. For example, assume the engine does not know that the bishop pair deserves a bonus. However, if the engine has general knowledge about mobility, it is likely that it will win more than it loses anyway when it has the bishop pair advantage. Therefore the engine will be able to learn that the bishop pair deserves a bonus. If the algorithm is then repeated with a new set of games, the engine might learn that the bishop pair is even better when it knows that it should not unnecessarily trade a bishop.

It may also be argued that since more than one position is picked from each game (in fact about 140 positions per game on average), the result is invalid because the terms in the summation that defines E are not independent, while least squares theory in general assumes that different data points are independent. The way I see it, E has contributions from 64000 independent events, and having 140 times more terms in the summation is similar to solving a weighted least squares problem where the weights are determined by how often a particular type of position occurs in typical games.

While it would probably be better (or at least equally good) to take one position each from 8.8 million different games, obtaining that many games would require more computation time than I am willing to spend. I believe 64000 independent events is more than enough to estimate 400 parameters anyway.

=Results=
When I started using this method, I calculated that K=1.13 best matched the then-current evaluation function in my engine. I have not changed the value of K since then, and do not intend to change it in the future either, since it is just an arbitrary scaling constant.

Since 2014-01-01 when the first evaluation change based on this algorithm was included in my engine, the Elo improvements caused by evaluation weight tuning (measured using 32000 games at 1+0.08) have been:

24.6 + 4.0 + 5.8 + 2.8 + 12.8 + 39.4 + 10.2 = 99.6 Elo

The 24.6 improvement was when I made most of the evaluation parameters accessible to the optimization algorithm. The next three improvements came when I exposed, in three stages, more evaluation parameters to the algorithm. The 12.8 improvement came when I re-created the set of test positions based on the most recently played games.

The 39.4 improvement came when I changed the criteria for which positions to include in the test set. Initially I had also removed all positions where the q-search score deviated too much from the search score in the actual game (which is conveniently saved by [[Cutechess-cli|cutechess-cli]] in PGN comments). I believed that including those positions would just raise the "noise level" of the data and cause a worse solution to be found. Apparently this is not the case. I now believe that even though including these positions causes noise, the q-search function has to deal with them all the time in real games, so trying to learn how those positions should be evaluated on average is still beneficial.

The 10.2 improvement came when I added separate queen piece square tables for middle game and end game. Previously texel only had one queen PST, which had horizontal, vertical and diagonal symmetry.

The last improvement was made after the texel 1.03 release. The others are included in the 1.03 release and I believe they are responsible for most of the [[Playing Strength|strength]] increase in texel 1.03.

I don't know if the large strength increase was made possible because the method is good or because the texel evaluation function was particularly bad before I started using this algorithm. Anyway, I don't expect to get equally big improvements in the future as I got in texel 1.03, but I hope the algorithm will nevertheless help find a fair number of smaller evaluation improvements.

=Future improvements=
Currently local search is used to find a local optimum. I believe it would be faster to initially use a few Gauss-Newton iterations and then switch to local search when the remaining corrections are small.

A number of evaluation parameters have values that appear quite unintuitive. For example, a white queen on b7/g7 (and, by symmetry, a black queen on b2/g2) is assigned a middle game value of -128[[Centipawns|cp]]. I have not investigated the cause yet, but I believe the explanation is that almost always when a white/black queen is on b7/b2 in the early middle game, it is because it has just captured the "[https://en.wikipedia.org/wiki/Poisoned_Pawn_Variation poisoned pawn]", and the opponent is about to win back a pawn by playing Rb1 and Rxb7. If placing the queen on b7/g7/b2/g2 for other reasons is much less common, the optimization algorithm will assign a large penalty to a queen on these squares. While this improves the evaluation function in the average case, it will also cause large evaluation errors for the uncommon cases that do not fall under the "poisoned pawn" category. Implementing a more specific rule about poisoned pawns could be profitable.

Other examples of odd parameter values include a white king on b8, which in the middle game gives white a 200cp bonus; a white knight on a8, which in the middle game gives white a 223cp penalty; and the pawn race bonus, which is only 157cp even though it is supposed to trigger only when one side can promote a queen and the opponent cannot.

Investigating the cause for these and other odd looking parameter values may suggest further improvements to the evaluation function.

Another interesting approach is to apply data mining techniques on the residuals to try to discover missing knowledge in the evaluation function. I don't know how successful this approach will be. It may be more profitable to test different variants of already known chess evaluation principles instead.

=Pseudo Code=
A local optimization routine, varying one parameter at a time <ref>[http://www.talkchess.com/forum/viewtopic.php?t=50823&start=40 Re: The texel evaluation function optimization algorithm] by [[Peter Österlund]], [[CCC]], February 05, 2014</ref>:
<pre>
#include <vector>

using std::vector;

// Computes the average evaluation error E over all test positions
// for the given set of evaluation parameter values.
double E(const vector<int>& params);

vector<int> localOptimize(const vector<int>& initialGuess) {
    const int nParams = (int)initialGuess.size();
    double bestE = E(initialGuess);
    vector<int> bestParValues = initialGuess;
    bool improved = true;
    while (improved) {
        improved = false;
        for (int pi = 0; pi < nParams; pi++) {
            vector<int> newParValues = bestParValues;
            newParValues[pi] += 1;            // first try increasing parameter pi by one
            double newE = E(newParValues);
            if (newE < bestE) {
                bestE = newE;
                bestParValues = newParValues;
                improved = true;
            } else {
                newParValues[pi] -= 2;        // otherwise try decreasing it by one
                newE = E(newParValues);
                if (newE < bestE) {
                    bestE = newE;
                    bestParValues = newParValues;
                    improved = true;
                }
            }
        }
    }
    return bestParValues;
}
</pre>

=See also=
* [[Arasan#Tuning|Arasan's Tuning]]
* [[Eval Tuning in Deep Thought]]
* [[Point Value by Regression Analysis]]
* [[RuyTune]]
* [[Stockfish's Tuning Method]]

=Forum Posts=
==2009==
* [http://www.talkchess.com/forum/viewtopic.php?t=27266&postdays=0&postorder=asc&topic_view=&start=11 Re: Insanity... or Tal style?] by [[Miguel A. Ballicora]], [[CCC]], April 02, 2009 » [[Gaviota]]
==2014==
* [http://www.talkchess.com/forum/viewtopic.php?t=50823 How Do You Automatically Tune Your Evaluation Tables] by [[Tom Likens]], [[CCC]], January 07, 2014
: [http://www.talkchess.com/forum/viewtopic.php?t=50823&start=10 Re: How Do You Automatically Tune Your Evaluation Tables] by [[Álvaro Begué]], [[CCC]], January 08, 2014
* [http://www.talkchess.com/forum/viewtopic.php?t=50823&start=26 The texel evaluation function optimization algorithm] by [[Peter Österlund]], [[CCC]], January 31, 2014
: [http://www.talkchess.com/forum/viewtopic.php?t=50823&start=27 Re: The texel evaluation function optimization algorithm] by [[Álvaro Begué]], [[CCC]], January 31, 2014 » [https://en.wikipedia.org/wiki/Cross_entropy Cross-entropy]
: [http://www.talkchess.com/forum/viewtopic.php?t=50823&start=40 Re: The texel evaluation function optimization algorithm] by [[Peter Österlund]], [[CCC]], February 05, 2014
* [https://groups.google.com/forum/#!topic/fishcooking/QJPG9MS8tCA Texel tuning method] by vdb..., [[Computer Chess Forums|FishCooking]], January 31, 2014 » [[Stockfish]]
* [http://www.talkchess.com/forum/viewtopic.php?t=53657 Tune cut margins with Texel/gaviota tuning method] by [[Fabio Gobbato]], [[CCC]], September 11, 2014
* [http://www.talkchess.com/forum/viewtopic.php?t=54021 A little improvement to the Texel's tuning method] by [[Fabio Gobbato]], [[CCC]], October 11, 2014
==2015 ...==
* [http://www.talkchess.com/forum/viewtopic.php?t=55265 Optimization algorithm for the Texel/Gaviota tuning method] by [[Fabio Gobbato]], [[CCC]], February 09, 2015
* [http://www.talkchess.com/forum/viewtopic.php?t=55621 Experiments with eval tuning] by [[Jon Dart]], [[CCC]], March 10, 2015 » [[Arasan]]
* [http://www.talkchess.com/forum/viewtopic.php?t=55696 txt: automated chess engine tuning] by [[Alexandru Mosoi]], [[CCC]], March 18, 2015 <ref>[https://bitbucket.org/brtzsnr/txt brtzsnr / txt — Bitbucket] by [[Alexandru Mosoi]]</ref>
: [http://www.talkchess.com/forum/viewtopic.php?t=55696&start=108 Re: txt: automated chess engine tuning] by [[Sergei Markoff|Sergei S. Markoff]], [[CCC]], February 15, 2016 » [[SmarThink]]
* [http://www.talkchess.com/forum/viewtopic.php?t=56168&start=30 Re: Piece weights with regression analysis (in Russian)] by [[Marcel van Kervinck]], [[CCC]], May 03, 2015 » [[Point Value by Regression Analysis]]
: [http://www.talkchess.com/forum/viewtopic.php?t=56168&start=36 Re: Piece weights with regression analysis (in Russian)] by [[Fabien Letouzey]], [[CCC]], May 04, 2015 » [[Automated Tuning#LogisticRegression|Logistic Regression]]
* [http://www.talkchess.com/forum/viewtopic.php?t=57700 test positions for texel tuning] by [[Alexandru Mosoi]], [[CCC]], September 20, 2015
* [http://www.talkchess.com/forum/viewtopic.php?t=58298 Why computing K that minimizes the sigmoid func. value?...] by [[Michael Hoffmann]], [[CCC]], November 19, 2015
: [http://www.talkchess.com/forum/viewtopic.php?t=58298&start=43 Re: Why computing K that minimizes the sigmoid func. value?] by [[Thomas Kolarik]], [[CCC]], November 27, 2015 » [[Nirvana]]
: [http://www.talkchess.com/forum/viewtopic.php?t=58298&start=46 Re: Why computing K that minimizes the sigmoid func. value?] by [[Peter Österlund]], [[CCC]], November 29, 2015
: [http://www.talkchess.com/forum/viewtopic.php?t=58298&start=52 Re: Why computing K that minimizes the sigmoid func. value?] by [[Daniel José Queraltó]], [[CCC]], December 07, 2015 » [[Andscacs]]
'''2016'''
* [http://www.open-chess.org/viewtopic.php?f=5&t=2987 Tuning] by ppyvabw, [[Computer Chess Forums|OpenChess Forum]], June 11, 2016
* [http://www.talkchess.com/forum/viewtopic.php?t=60902 GreKo 2015 ML: tuning evaluation (article in Russian)] by [[Vladimir Medvedev]], [[CCC]], July 22, 2016 » [[GreKo]]
* [http://www.talkchess.com/forum/viewtopic.php?t=61427 training data to use with Texel Tuning method] by [[Alexandru Mosoi]], [[CCC]], September 14, 2016
* [http://www.talkchess.com/forum/viewtopic.php?t=61856 Python script for TTM] by [[Lucas Braesch]], [[CCC]], October 28, 2016 » [[Python]]
* [http://www.talkchess.com/forum/viewtopic.php?t=61861 A database for learning evaluation functions] by [[Álvaro Begué]], [[CCC]], October 28, 2016 » [[Automated Tuning]], [[Evaluation]], [[Learning]]
* [http://www.talkchess.com/forum/viewtopic.php?t=62440 Texel tuning - some issues] by [[Jon Dart]], [[CCC]], December 10, 2016
'''2017'''
* [http://www.talkchess.com/forum/viewtopic.php?t=63408 improved evaluation function] by [[Alexandru Mosoi]], [[CCC]], March 11, 2017 » [[Zurichess]]
* [http://www.talkchess.com/forum/viewtopic.php?t=63542 Texel tuning method with a small number of positions] by [[Pawel Koziol]], [[CCC]], March 23, 2017
* [http://www.talkchess.com/forum/viewtopic.php?t=64189 Texel tuning method question] by [[Sander Maassen vd Brink]], [[CCC]], June 05, 2017
: [http://www.talkchess.com/forum/viewtopic.php?t=64189&start=35 Re: Texel tuning method question] by [[Peter Österlund]], [[CCC]], June 07, 2017
: [http://www.talkchess.com/forum/viewtopic.php?t=64189&start=42 Re: Texel tuning method question] by [[Ferdinand Mosca]], [[CCC]], July 20, 2017 » [[Python]]
: [http://www.talkchess.com/forum/viewtopic.php?t=64189&start=46 Re: Texel tuning method question] by [[Jon Dart]], [[CCC]], July 23, 2017
* [http://www.talkchess.com/forum/viewtopic.php?t=64310 Impressive Texel-tuning results] by [[Sander Maassen vd Brink]], [[CCC]], June 16, 2017 » [[chess22k]]
* [http://www.talkchess.com/forum/viewtopic.php?t=64313 Cooking Cheese with Texel's tuning method] by [[Patrice Duhamel]], [[CCC]], June 16, 2017 » [[Cheese]]
* [http://www.talkchess.com/forum/viewtopic.php?t=64338 cursed parameters in Texel tuning] by [[Pawel Koziol]], [[CCC]], June 18, 2017
* [http://www.talkchess.com/forum/viewtopic.php?t=65290 Troubles with Texel Tuning] by [[Andrew Grant]], [[CCC]], September 26, 2017
* [http://www.talkchess.com/forum/viewtopic.php?t=65538 Texel Tuning - Success!] by [[Andrew Grant]], [[CCC]], October 24, 2017
* [http://www.talkchess.com/forum/viewtopic.php?t=65799 tuning for the uninformed] by [[Folkert van Heusden]], [[CCC]], November 23, 2017
* [http://www.talkchess.com/forum/viewtopic.php?t=65881 how to create a labeled epd from pgn?] by [[Erin Dame]], [[CCC]], December 02, 2017 » [[Extended Position Description|EPD]], [[Portable Game Notation|PGN]]

=External Links=
* [https://en.wikipedia.org/wiki/Logistic_regression Logistic regression from Wikipedia]
* [http://macechess.blogspot.de/2014/03/the-texel-way-of-tuning_10.html The texel way of tuning] by [[Thomas Petzke]], [http://macechess.blogspot.de/ mACE Chess], March 10, 2014 » [[iCE]]
* [http://macechess.blogspot.de/2014/03/some-more-tuning-results.html Some more tuning results] by [[Thomas Petzke]], [http://macechess.blogspot.de/ mACE Chess], March 22, 2014
* [https://en.wikipedia.org/wiki/NEWUOA NEWUOA from Wikipedia] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=50823&start=94 Re: The texel evaluation function optimization algorithm] by [[Jon Dart]], [[CCC]], March 12, 2014</ref>
* [https://bitbucket.org/zurichess/txt zurichess/txt — Bitbucket] by [[Alexandru Mosoi]]
* [https://habrahabr.ru/post/305604/ Самообучение шахматной программы / Хабрахабр] by [[Vladimir Medvedev]], [https://en.wikipedia.org/wiki/Habrahabr Habrahabr], July 21, 2016 (Russian) [https://translate.google.com/translate?sl=ru&tl=en&js=y&prev=_t&hl=de&ie=UTF-8&u=https%3A%2F%2Fhabrahabr.ru%2Fpost%2F305604%2F&edit-text= translated] by [https://en.wikipedia.org/wiki/Google_Translate Google Translate] » [[GreKo]]

=References=
<references />

'''[[Tuning|Up one Level]]'''
