Latest revision as of 21:48, 5 June 2021
ShashChess,
a Stockfish derivative by Andrea Manzo with the aim of applying the proposals of Alexander Shashin as presented in his book Best Play: A New Method for Discovering the Strongest Move [1] [2] [3].
First released in July 2018 [4],
subsequent ShashChess versions feature skill levels and handicap modes, NNUE, single- or multi-threaded Monte-Carlo Tree Search in conjunction with alpha-beta,
and various learning techniques utilizing a persistent hash table [5]
[6].
Personalities
Based on static evaluation score ranges derived from the pawn endgame value (PawnValueEg = 208), ShashChess classifies a position into one of five personality ranges based on three former World Chess Champions: Tigran Petrosian for negative scores, José Raúl Capablanca for balanced scores, and Mikhail Tal for positive scores [7]:
if (eval < -74)
    personality = Petrosian;
else if (eval < -31)
    personality = Petrosian | Capablanca;
else if (eval < 31)
    personality = Capablanca;
else if (eval < 74)
    personality = Capablanca | Tal;
else
    personality = Tal;
These personalities are considered in various search selectivity thresholds, along with multiple dynamic evaluation score adjustments.
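The classification above can be sketched as a small self-contained function. The enum values, flag encoding, and function name below are assumptions for illustration; only the score thresholds (±31 and ±74, against PawnValueEg = 208) follow the article:

```cpp
#include <cstdint>

// Hypothetical bit flags so that overlapping ranges can combine two
// personalities, e.g. Petrosian | Capablanca for mildly negative scores.
enum Personality : uint8_t { Petrosian = 1, Capablanca = 2, Tal = 4 };

// Map a static evaluation (in centipawns) to a personality mask,
// using the five score ranges quoted in the article.
inline uint8_t classify(int eval) {
    if (eval < -74) return Petrosian;
    if (eval < -31) return Petrosian | Capablanca;
    if (eval <  31) return Capablanca;
    if (eval <  74) return Capablanca | Tal;
    return Tal;
}
```

A balanced position (eval near 0) maps to Capablanca alone, while scores in the transition bands activate two personalities at once.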
Q-Learning
A rote learning technique inspired by Q-learning, worked out and introduced by Kelly Kinyama [8] [9] and also employed in BrainLearn 9.0 [10], has been applied in ShashChess since version 12.0 [11]. After the end of a decisive selfplay game, the list of moves (ml) and their associated scores are merged into the learn table from end to start, with the score of timestep t adjusted as a weighted average with the future reward of timestep t+1, using a learning rate α of 0.5 and a discount factor γ of 0.99 [12]:
for (t = ml.size() - 2; t >= 0; t--) {
    ml[t].score = (1 - α)*ml[t].score + α*γ*ml[t+1].score;
    insertIntoOrUpdateLearningTable(ml[t]);
}
During repeated selfplay games, subsequently playing along the best line learned so far, decreasing score adjustments stimulate exploration of alternative siblings, while increasing score adjustments correspond to exploitation of the best move.
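The backward merge can be written as a runnable sketch. The struct name and field layout are assumptions, and the insertion into the persistent learn table is elided; only the update rule with α = 0.5 and γ = 0.99 follows the cited source:

```cpp
#include <vector>

// Hypothetical minimal learn-table entry: one score per game position.
struct LearnEntry { double score; };

// Walk the finished game from end to start: each score absorbs half of
// the discounted (gamma = 0.99) future reward of the following position.
void backpropagate(std::vector<LearnEntry>& ml) {
    const double alpha = 0.5, gamma = 0.99;
    for (int t = int(ml.size()) - 2; t >= 0; --t)
        ml[t].score = (1 - alpha) * ml[t].score
                    + alpha * gamma * ml[t + 1].score;
}
```

Starting from scores {0, 0, 100}, one pass yields roughly {24.5, 49.5, 100}: the decisive final reward propagates backward with geometric decay, which is what lets earlier moves of a winning line accumulate credit over repeated selfplay games.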
Forum Posts
2018 ...
- ShashChess by Andrea Manzo, CCC, July 28, 2018
- Re: ShashChess (11.0) by Andrea Manzo, CCC, March 06, 2020
- Re: ShashChess (12.0) by Andrea Manzo, CCC, June 28, 2020
- Re: ShashChess (15.0) by Andrea Manzo, CCC, October 03, 2020
- Re: ShashChess (17.1) by Andrea Manzo, CCC, June 01, 2021
- Build ShashChess for Android by Andrea Manzo, CCC, August 01, 2018
2020 ...
- ShashChess 12.0 by Andrea Manzo, FishCooking, June 28, 2020
- A new reinforcement learning implementation of Q learning algorithm for alphabeta engines to automatically tune the evaluation of chess positions by Kelly Kinyama, FishCooking, June 29, 2020
- ShashChess NNUE 1.0 by Andrea Manzo, FishCooking, July 25, 2020
- Shashchess which executable to use by Andrew Bernard, CCC, January 23, 2021
- New BrainLearn and ShashChess by Andrea Manzo, FishCooking, May 19, 2021
External Links
- GitHub - amchess/ShashChess: A try to implement Alexander Shashin's theory on a Stockfish's derived chess engine
- ShashChess 11.0 64-bit in CCRL 40/15
- ShashChess 15.0 64-bit in CCRL 40/15
References
- ↑ Welcome to BS Chess
- ↑ Alexander Shashin (2013). Best Play: A New Method for Discovering the Strongest Move. Mongoose Press, Amazon
- ↑ Review: Best Play | ChessVibes by Arne Moll, September 05, 2013 (Wayback Machine)
- ↑ ShashChess by Andrea Manzo, CCC, July 28, 2018
- ↑ ShashChess/README.md at master · amchess/ShashChess · GitHub
- ↑ Re: Komodo MCTS by Mark Lefler, CCC, June 12, 2019 » Komodo MCTS
- ↑ ShashChess/search.cpp at master · amchess/ShashChess · GitHub
- ↑ Re: Self-Learning stockfish upgraded by Kelly Kinyama, FishCooking, May 28, 2019
- ↑ A new reinforcement learning implementation of Q learning algorithm for alphabeta engines to automatically tune the evaluation of chess positions by Kelly Kinyama, FishCooking, June 29, 2020
- ↑ Release BrainLearn 9.0 · amchess/BrainLearn · GitHub
- ↑ ShashChess 12.0 by Andrea Manzo, FishCooking, June 28, 2020
- ↑ ShashChess/search.cpp at master · amchess/ShashChess · GitHub