ShashChess


ShashChess is a Stockfish derivative by Andrea Manzo that aims to apply the proposals of Alexander Shashin as exposed in his book Best Play: A New Method for Discovering the Strongest Move. First released in July 2018, subsequent ShashChess versions feature skill levels and handicap modes, NNUE, Monte-Carlo Tree Search with one or multiple threads in conjunction with alpha-beta, and various learning techniques utilizing a persistent hash table.

=Personalities=
Based on static evaluation score ranges derived from the pawn endgame point value (PawnValueEg = 208), ShashChess classifies the position into five personalities based on three former World Chess Champions: Tigran Petrosian for negative scores, José Raúl Capablanca for balanced scores, and Mikhail Tal for positive scores:

 if      (eval < -74) personality = Petrosian;
 else if (eval < -31) personality = Petrosian | Capablanca;
 else if (eval <  31) personality = Capablanca;
 else if (eval <  74) personality = Capablanca | Tal;
 else                 personality = Tal;

These personalities are considered in various search selectivity thresholds, along with multiple dynamic evaluation score adjustments.
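The mixed classes (Petrosian | Capablanca and Capablanca | Tal) suggest a bitmask encoding, so a classification like the above can be sketched as follows. This is an illustrative reconstruction, not ShashChess's actual code; the enum values and the function name `classify` are assumptions.

```cpp
#include <cassert>

// Bit flags for the three champion personalities
// (hypothetical encoding for illustration).
enum Personality { Petrosian = 1, Capablanca = 2, Tal = 4 };

// Map a static evaluation (internal score units, where PawnValueEg = 208)
// to a personality bitmask, mirroring the five ranges quoted above.
int classify(int eval) {
    if (eval < -74) return Petrosian;
    if (eval < -31) return Petrosian | Capablanca;
    if (eval <  31) return Capablanca;
    if (eval <  74) return Capablanca | Tal;
    return Tal;
}
```

A bitmask lets the transition zones around the ±31 boundaries activate the thresholds of two adjacent personalities at once instead of forcing a hard switch.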

=Q-Learning=
A rote learning technique inspired by Q-learning, worked out and introduced by Kelly Kinyama and also employed in BrainLearn 9.0, has been applied in ShashChess since version 12.0. After the end of a decisive self-play game, the list of moves (ml) and associated scores is merged into the learn table from end to start, the score of timestep t adjusted as a weighted average with the future reward of timestep t+1, using a learning rate α of 0.5 and a discount factor γ of 0.99:

 for (t = ml.size - 2; t >= 0; t--) {
     ml[t].score = (1-α)*ml[t].score + α*γ*ml[t+1].score;
     insertIntoOrUpdateLearningTable( ml[t] );
 }

During repeated self-play games, subsequently playing along the best line learned so far, decreasing score adjustments stimulate exploration of alternative siblings, while increasing score adjustments correspond to exploitation of the best move.

=Forum Posts=

2018 ...

 * ShashChess by Andrea Manzo, CCC, July 28, 2018
 * Re: ShashChess (11.0) by Andrea Manzo, CCC, March 06, 2020
 * Re: ShashChess (12.0) by Andrea Manzo, CCC, June 28, 2020
 * Re: ShashChess (15.0) by Andrea Manzo, CCC, October 03, 2020
 * Re: ShashChess (17.1) by Andrea Manzo, CCC, June 01, 2021


 * Build ShashChess for Android by Andrea Manzo, CCC, August 01, 2018

2020 ...

 * ShashChess 12.0 by Andrea Manzo, FishCooking, June 28, 2020
 * A new reinforcement learning implementation of Q learning algorithm for alphabeta engines to automatically tune the evaluation of chess positions by Kelly Kinyama, FishCooking, June 29, 2020
 * ShashChess NNUE 1.0 by Andrea Manzo, FishCooking, July 25, 2020
 * Shashchess which executable to use by Andrew Bernard, CCC, January 23, 2021
 * New BrainLearn and ShashChess by Andrea Manzo, FishCooking, May 19, 2021

=External Links=
 * GitHub - amchess/ShashChess: A try to implement Alexander Shashin's theory on a Stockfish's derived chess engine
 * ShashChess 11.0 64-bit in CCRL 40/15
 * ShashChess 15.0 64-bit in CCRL 40/15
