Neural Networks

In December 2017, the [[Google]] [[DeepMind]] team along with former [[Giraffe]] author [[Matthew Lai]] reported on their generalized [[AlphaZero]] algorithm, combining [[Deep Learning|Deep learning]] with [[Monte-Carlo Tree Search]]. AlphaZero can achieve, tabula rasa, superhuman performance in many challenging domains with some training effort. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved a superhuman level of play in the games of chess and [[Shogi]] as well as Go, and convincingly defeated a world-champion program in each case <ref>[[David Silver]], [[Thomas Hubert]], [[Julian Schrittwieser]], [[Ioannis Antonoglou]], [[Matthew Lai]], [[Arthur Guez]], [[Marc Lanctot]], [[Laurent Sifre]], [[Dharshan Kumaran]], [[Thore Graepel]], [[Timothy Lillicrap]], [[Karen Simonyan]], [[Demis Hassabis]] ('''2017'''). ''Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm''. [https://arxiv.org/abs/1712.01815 arXiv:1712.01815]</ref>. The open source projects [[Leela Zero]] (Go) and its chess adaptation [[Leela Chess Zero]] successfully re-implemented the ideas of DeepMind.
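To illustrate how the trained network guides the [[Monte-Carlo Tree Search]], the following C++ fragment sketches the PUCT selection rule described in the AlphaZero paper, which descends into the child maximizing Q(s,a) + U(s,a) with U(s,a) = c<sub>puct</sub>·P(s,a)·√(Σ<sub>b</sub>N(s,b)) / (1 + N(s,a)). The structure and names are made up for this sketch and do not reflect DeepMind's implementation:
<pre>
// Hedged sketch of AlphaZero-style PUCT child selection: the policy prior P(s,a)
// from the neural network biases the tree search towards promising moves, while
// growing visit counts N(s,a) shift weight to the backed-up values Q(s,a).
// Names (Edge, selectChild, cPuct) are illustrative only.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Edge {
    double prior;     // P(s,a), policy head output for this move
    double valueSum;  // W(s,a), sum of values backed up through this edge
    int    visits;    // N(s,a)
    double q() const { return visits ? valueSum / visits : 0.0; }  // Q(s,a)
};

// Return the index of the child edge to descend into next.
std::size_t selectChild(const std::vector<Edge>& children, double cPuct) {
    int parentVisits = 0;
    for (const Edge& e : children) parentVisits += e.visits;
    double bestScore = -1e30;
    std::size_t bestIdx = 0;
    for (std::size_t i = 0; i < children.size(); ++i) {
        const Edge& e = children[i];
        double u = cPuct * e.prior * std::sqrt(static_cast<double>(parentVisits)) / (1 + e.visits);
        if (e.q() + u > bestScore) { bestScore = e.q() + u; bestIdx = i; }
    }
    return bestIdx;
}

int main() {
    // Two hypothetical moves: a high-prior, unexplored one vs. an already well-explored one.
    std::vector<Edge> children = { {0.6, 0.0, 0}, {0.4, 5.0, 10} };
    std::printf("selected child: %zu\n", selectChild(children, 1.5));  // picks the unexplored move
}
</pre>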
===NNUE===
[[NNUE]], the reverse of &#398;U&#1048;&#1048; (Efficiently Updatable Neural Network), is an NN architecture intended to replace the handcrafted [[Evaluation|evaluation]] of [[Shogi]], [[Chess|chess]] and other board game playing [[Alpha-Beta|alpha-beta]] searchers. NNUE was introduced in 2018 by [[Yu Nasu]] <ref>[[Yu Nasu]] ('''2018'''). ''&#398;U&#1048;&#1048; Efficiently Updatable Neural-Network based Evaluation Functions for Computer Shogi''. Ziosoft Computer Shogi Club, [https://github.com/ynasu87/nnue/blob/master/docs/nnue.pdf pdf] (Japanese with English abstract), [https://github.com/asdfjkl/nnue GitHub - asdfjkl/nnue translation]</ref>,
and was used in Shogi adaptations of [[Stockfish]] such as [[YaneuraOu]] <ref>[https://github.com/yaneurao/YaneuraOu GitHub - yaneurao/YaneuraOu: YaneuraOu is the World's Strongest Shogi engine(AI player), WCSC29 1st winner, educational and USI compliant engine]</ref>,
and [[Kristallweizen]] <ref>[https://github.com/Tama4649/Kristallweizen/ GitHub - Tama4649/Kristallweizen: Kristallweizen, runner-up of the 29th World Computer Shogi Championship]</ref>, apparently with [[AlphaZero]] strength <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=72754 The Stockfish of shogi] by [[Larry Kaufman]], [[CCC]], January 07, 2020</ref>. [[Hisayori Noda|Nodchip]] incorporated NNUE into the chess-playing Stockfish 10 as a proof of concept <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=74059 Stockfish NN release (NNUE)] by [[Henk Drost]], [[CCC]], May 31, 2020</ref>, resulting in the hype about [[Stockfish NNUE]] in summer 2020 <ref>[http://yaneuraou.yaneu.com/2020/06/19/stockfish-nnue-the-complete-guide/ Stockfish NNUE – The Complete Guide], June 19, 2020 (Japanese and English)</ref>.
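The "efficiently updatable" part refers to the first hidden layer: its accumulator is not recomputed from scratch after every move; instead, the weight columns of the few input features that changed are subtracted and added, which is what makes NNUE cheap enough for an [[Alpha-Beta|alpha-beta]] searcher. Below is a minimal C++ sketch of that idea, with illustrative feature and layer sizes rather than the actual HalfKP dimensions and quantization used in real NNUE networks:
<pre>
// Hedged sketch of incremental accumulator updates, the core NNUE trick.
// Sizes and names (NUM_FEATURES, HIDDEN, refresh, update) are illustrative only.
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr int NUM_FEATURES = 768;  // toy piece-square inputs; real NNUE uses HalfKP-like features
constexpr int HIDDEN       = 256;  // first hidden layer width

std::array<std::array<int16_t, HIDDEN>, NUM_FEATURES> weights{};  // weights[f] = column for feature f
std::array<int16_t, HIDDEN> bias{};
std::array<int16_t, HIDDEN> accumulator{};  // first-layer pre-activations of the current position

// Full refresh: accumulator = bias + sum of weight columns of all active features.
void refresh(const std::vector<int>& activeFeatures) {
    accumulator = bias;
    for (int f : activeFeatures)
        for (int i = 0; i < HIDDEN; ++i) accumulator[i] += weights[f][i];
}

// Incremental update after a move: cost is proportional to the few changed features,
// not to the whole feature vector.
void update(const std::vector<int>& removed, const std::vector<int>& added) {
    for (int f : removed)
        for (int i = 0; i < HIDDEN; ++i) accumulator[i] -= weights[f][i];
    for (int f : added)
        for (int i = 0; i < HIDDEN; ++i) accumulator[i] += weights[f][i];
}

int main() {
    weights[10][0] = 3; weights[20][0] = 5;  // toy weights
    refresh({10, 20});                       // position with features 10 and 20 active
    update({10}, {30});                      // a move removes feature 10 and adds feature 30
    std::printf("accumulator[0] = %d\n", accumulator[0]);  // 0 (bias) + 5 + 0 = 5
}
</pre>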
* [[NNUE]]
* [[Pattern Recognition]]
* [[David E. Moriarty#SANE|SANE]]
* [[Temporal Difference Learning]]
'''2004'''
* [http://dblp.uni-trier.de/pers/hd/p/Patist:Jan_Peter Jan Peter Patist], [[Marco Wiering]] ('''2004'''). ''Learning to Play Draughts using Temporal Difference Learning with Neural Networks and Databases''. [http://students.uu.nl/en/hum/cognitive-artificial-intelligence Cognitive Artificial Intelligence], [https://en.wikipedia.org/wiki/Utrecht_University Utrecht University], Benelearn’04
* [[Henk Mannen]], [[Marco Wiering]] ('''2004'''). ''[https://www.semanticscholar.org/paper/Learning-to-Play-Chess-using-TD(lambda)-learning-Mannen-Wiering/00a6f81c8ebe8408c147841f26ed27eb13fb07f3 Learning to play chess using TD(λ)-learning with database games]''. [http://students.uu.nl/en/hum/cognitive-artificial-intelligence Cognitive Artificial Intelligence], [https://en.wikipedia.org/wiki/Utrecht_University Utrecht University], Benelearn’04, [https://www.ai.rug.nl/~mwiering/GROUP/ARTICLES/learning-chess.pdf pdf]
* [[Mathieu Autonès]], [[Aryel Beck]], [[Phillippe Camacho]], [[Nicolas Lassabe]], [[Hervé Luga]], [[François Scharffe]] ('''2004'''). ''[http://link.springer.com/chapter/10.1007/978-3-540-24650-3_1 Evaluation of Chess Position by Modular Neural network Generated by Genetic Algorithm]''. [http://www.informatik.uni-trier.de/~ley/db/conf/eurogp/eurogp2004.html#AutonesBCLLS04 EuroGP 2004] <ref>[https://www.stmintz.com/ccc/index.php?id=358770 Presentation for a neural net learning chess program] by [[Dann Corbit]], [[CCC]], April 06, 2004</ref>
* [[Daniel Walker]], [[Robert Levinson]] ('''2004'''). ''The MORPH Project in 2004''. [[ICGA Journal#27_4|ICGA Journal, Vol. 27, No. 4]]
'''2014'''
* [[Christopher Clark]], [[Amos Storkey]] ('''2014'''). ''Teaching Deep Convolutional Neural Networks to Play Go''. [http://arxiv.org/abs/1412.3409 arXiv:1412.3409] <ref>[http://computer-go.org/pipermail/computer-go/2014-December/007010.html Teaching Deep Convolutional Neural Networks to Play Go] by [[Hiroshi Yamashita]], [http://computer-go.org/pipermail/computer-go/ The Computer-go Archives], December 14, 2014</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=54663 Teaching Deep Convolutional Neural Networks to Play Go] by [[Michel Van den Bergh]], [[CCC]], December 16, 2014</ref>
* [[Chris J. Maddison]], [[Shih-Chieh Huang|Aja Huang]], [[Ilya Sutskever]], [[David Silver]] ('''2014'''). ''Move Evaluation in Go Using Deep Convolutional Neural Networks''. [http://arxiv.org/abs/1412.6564v1 arXiv:1412.6564v1] » [[Go]]
* [[Ilya Sutskever]], [https://research.google.com/pubs/OriolVinyals.html Oriol Vinyals], [https://www.linkedin.com/in/quoc-v-le-319b5a8 Quoc V. Le] ('''2014'''). ''Sequence to Sequence Learning with Neural Networks''. [https://arxiv.org/abs/1409.3215 arXiv:1409.3215]
'''2015'''
* [https://scholar.google.nl/citations?user=yyIoQu4AAAAJ Diederik P. Kingma], [https://scholar.google.ca/citations?user=ymzxRhAAAAAJ&hl=en Jimmy Lei Ba] ('''2015'''). ''Adam: A Method for Stochastic Optimization''. [https://arxiv.org/abs/1412.6980v8 arXiv:1412.6980v8], [http://www.iclr.cc/doku.php?id=iclr2015:main ICLR 2015] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=61948 Arasan 19.2] by [[Jon Dart]], [[CCC]], November 03, 2016 » [[Arasan#Tuning|Arasan's Tuning]]</ref>
* [http://michaelnielsen.org/ Michael Nielsen] ('''2015'''). ''[http://neuralnetworksanddeeplearning.com/ Neural networks and deep learning]''. Determination Press
* [[Mathematician#SIoffe|Sergey Ioffe]], [[Mathematician#CSzegedy|Christian Szegedy]] ('''2015'''). ''Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift''. [https://arxiv.org/abs/1502.03167 arXiv:1502.03167]
* [[Mathematician#GEHinton|Geoffrey E. Hinton]], [https://research.google.com/pubs/OriolVinyals.html Oriol Vinyals], [https://en.wikipedia.org/wiki/Jeff_Dean_(computer_scientist) Jeff Dean] ('''2015'''). ''Distilling the Knowledge in a Neural Network''. [https://arxiv.org/abs/1503.02531 arXiv:1503.02531]
* [[James L. McClelland]] ('''2015'''). ''[https://web.stanford.edu/group/pdplab/pdphandbook/handbook3.html#handbookch10.html Explorations in Parallel Distributed Processing: A Handbook of Models, Programs, and Exercises]''. Second Edition, [https://web.stanford.edu/group/pdplab/pdphandbook/handbookli1.html Contents]
* [[Gábor Melis]] ('''2015'''). ''[http://jmlr.org/proceedings/papers/v42/meli14.html Dissecting the Winning Solution of the HiggsML Challenge]''. [https://nips.cc/Conferences/2014 NIPS 2014]
* [[Matthew Lai]] ('''2015'''). ''Giraffe: Using Deep Reinforcement Learning to Play Chess''. M.Sc. thesis, [https://en.wikipedia.org/wiki/Imperial_College_London Imperial College London], [http://arxiv.org/abs/1509.01549v1 arXiv:1509.01549v1] » [[Giraffe]]
* [[Nikolai Yakovenko]], [[Liangliang Cao]], [[Colin Raffel]], [[James Fan]] ('''2015'''). ''Poker-CNN: A Pattern Learning Strategy for Making Draws and Bets in Poker Games''. [https://arxiv.org/abs/1509.06731 arXiv:1509.06731]
* [https://scholar.google.ca/citations?user=yVtSOt8AAAAJ&hl=en Emmanuel Bengio], [https://scholar.google.ca/citations?user=9H77FYYAAAAJ&hl=en Pierre-Luc Bacon], [[Joelle Pineau]], [[Doina Precup]] ('''2015'''). ''Conditional Computation in Neural Networks for faster models''. [https://arxiv.org/abs/1511.06297 arXiv:1511.06297]
* [[Ilya Loshchilov]], [[Frank Hutter]] ('''2015'''). ''Online Batch Selection for Faster Training of Neural Networks''. [https://arxiv.org/abs/1511.06343 arXiv:1511.06343]
* [[Yuandong Tian]], [[Yan Zhu]] ('''2015'''). ''Better Computer Go Player with Neural Network and Long-term Prediction''. [http://arxiv.org/abs/1511.06410 arXiv:1511.06410] <ref>[http://www.technologyreview.com/view/544181/how-facebooks-ai-researchers-built-a-game-changing-go-engine/?utm_campaign=socialsync&utm_medium=social-post&utm_source=facebook How Facebook’s AI Researchers Built a Game-Changing Go Engine | MIT Technology Review], December 04, 2015</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=58514 Combining Neural Networks and Search techniques (GO)] by Michael Babigian, [[CCC]], December 08, 2015</ref> » [[Go]]
'''2017'''
* [https://dblp.org/pers/hd/s/Serb:Alexander Alexantrou Serb], [[Edoardo Manino]], [https://dblp.org/pers/hd/m/Messaris:Ioannis Ioannis Messaris], [https://dblp.org/pers/hd/t/Tran=Thanh:Long Long Tran-Thanh], [https://www.orc.soton.ac.uk/people/tp1f12 Themis Prodromakis] ('''2017'''). ''[https://eprints.soton.ac.uk/425616/ Hardware-level Bayesian inference]''. [https://nips.cc/Conferences/2017 NIPS 2017] » [[Analog Evaluation]]
'''2018'''
* [[Yu Nasu]] ('''2018'''). ''&#398;U&#1048;&#1048; Efficiently Updatable Neural-Network based Evaluation Functions for Computer Shogi''. Ziosoft Computer Shogi Club, [https://github.com/ynasu87/nnue/blob/master/docs/nnue.pdf pdf], [https://www.apply.computer-shogi.org/wcsc28/appeal/the_end_of_genesis_T.N.K.evolution_turbo_type_D/nnue.pdf pdf] (Japanese with English abstract) [https://github.com/asdfjkl/nnue GitHub - asdfjkl/nnue translation] » [[NNUE]]<ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=76250 Translation of Yu Nasu's NNUE paper] by [[Dominik Klein]], [[CCC]], January 07, 2021</ref>
* [[Kei Takada]], [[Hiroyuki Iizuka]], [[Masahito Yamamoto]] ('''2018'''). ''[https://link.springer.com/chapter/10.1007%2F978-3-319-75931-9_2 Computer Hex Algorithm Using a Move Evaluation Method Based on a Convolutional Neural Network]''. [https://link.springer.com/bookseries/7899 Communications in Computer and Information Science] » [[Hex]]
* [[Matthia Sabatelli]], [[Francesco Bidoia]], [[Valeriu Codreanu]], [[Marco Wiering]] ('''2018'''). ''Learning to Evaluate Chess Positions with Deep Neural Networks and Limited Lookahead''. ICPRAM 2018, [http://www.ai.rug.nl/~mwiering/GROUP/ARTICLES/ICPRAM_CHESS_DNN_2018.pdf pdf]
'''2019'''
* [[Guy Haworth]] ('''2019'''). ''Chess endgame news: an endgame challenge for neural nets''. [[ICGA Journal#41_3|ICGA Journal, Vol. 41, No. 3]] » [[Endgame]]
==2020 ...==
* [[Reid McIlroy-Young]], [[Siddhartha Sen]], [[Jon Kleinberg]], [[Ashton Anderson]] ('''2020'''). ''Aligning Superhuman AI with Human Behavior: Chess as a Model System''. [[ACM#SIGKDD|ACM SIGKDD 2020]], [https://arxiv.org/abs/2006.01855 arXiv:2006.01855] » [[Maia Chess]]
* [[Reid McIlroy-Young]], [[Russell Wang]], [[Siddhartha Sen]], [[Jon Kleinberg]], [[Ashton Anderson]] ('''2020'''). ''Learning Personalized Models of Human Behavior in Chess''. [https://arxiv.org/abs/2008.10086 arXiv:2008.10086]
* [[Oisín Carroll]], [[Joeran Beel]] ('''2020'''). ''Finite Group Equivariant Neural Networks for Games''. [https://arxiv.org/abs/2009.05027 arXiv:2009.05027]
* [https://scholar.google.com/citations?user=HT85tXsAAAAJ&hl=en Mohammad Pezeshki], [https://scholar.google.com/citations?user=jKqh8jAAAAAJ&hl=en Sékou-Oumar Kaba], [[Mathematician#YBengio|Yoshua Bengio]], [[Mathematician#ACourville|Aaron Courville]], [[Doina Precup]], [https://scholar.google.com/citations?user=ifu_7_0AAAAJ&hl=en Guillaume Lajoie] ('''2020'''). ''Gradient Starvation: A Learning Proclivity in Neural Networks''. [https://arxiv.org/abs/2011.09468 arXiv:2011.09468]
=Blog & Forum Posts=
==2020 ...==
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74077 How to work with batch size in neural network] by Gertjan Brouwer, [[CCC]], June 02, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74531 NNUE accessible explanation] by [[Martin Fierz]], [[CCC]], July 21, 2020 » [[NNUE]]
: [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74531&start=1 Re: NNUE accessible explanation] by [[Jonathan Rosenthal]], [[CCC]], July 23, 2020
: [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74531&start=5 Re: NNUE accessible explanation] by [[Jonathan Rosenthal]], [[CCC]], July 24, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75190 First success with neural nets] by [[Jonathan Kreuzer]], [[CCC]], September 23, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=75606 Transhuman Chess with NN and RL...] by [[Srdja Matovic]], [[CCC]], October 30, 2020 » [[Reinforcement Learning|RL]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75724 Pytorch NNUE training] by [[Gary Linscott]], [[CCC]], November 08, 2020 <ref>[https://en.wikipedia.org/wiki/PyTorch PyTorch from Wikipedia]</ref> » [[NNUE]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75925 Pawn King Neural Network] by [[Tamás Kuzmics]], [[CCC]], November 26, 2020 » [[NNUE]]
* [http://laatste.info/bb3/viewtopic.php?f=53&t=8327 Learning draughts evaluation functions using Keras/TensorFlow] by [[Rein Halbersma]], [http://laatste.info/bb3/viewforum.php?f=53 World Draughts Forum], November 30, 2020 » [[Draughts]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75985 Maiachess] by [[Marc-Philippe Huget]], [[CCC]], December 04, 2020 » [[Maia Chess]]
'''2021'''
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76263 More experiments with neural nets] by [[Jonathan Kreuzer]], [[CCC]], January 09, 2021 » [[Slow Chess]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76334 Keras/Tensorflow for very sparse inputs] by Jacek Dermont, [[CCC]], January 16, 2021
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=76664 Are neural nets (the weights file) copyrightable?] by [[Adam Treat]], [[CCC]], February 21, 2021
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76885 A worked example of backpropagation using Javascript] by [[Colin Jenkins]], [[CCC]], March 16, 2021 » [[Neural Networks#Backpropagation|Backpropagation]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=77061 yet another NN library] by lucasart, [[CCC]], April 11, 2021 » [[#lucasart|lucasart/nn]]
=External Links=
* [https://en.wikipedia.org/wiki/Rprop Rprop from Wikipedia]
* [http://people.idsia.ch/~juergen/who-invented-backpropagation.html Who Invented Backpropagation?] by [[Jürgen Schmidhuber]] (2014, 2015)
* [https://alexander-schiendorfer.github.io/2020/02/24/a-worked-example-of-backprop.html A worked example of backpropagation] by [https://alexander-schiendorfer.github.io/about.html Alexander Schiendorfer], February 24, 2020 » [[Neural Networks#Backpropagation|Backpropagation]] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76885 A worked example of backpropagation using Javascript] by [[Colin Jenkins]], [[CCC]], March 16, 2021</ref>
==Gradient==
* [https://en.wikipedia.org/wiki/Gradient Gradient from Wikipedia]
* [https://en.wikipedia.org/wiki/Comparison_of_deep_learning_software Comparison of deep learning software from Wikipedia]
* [https://github.com/connormcmonigle/reference-neural-network GitHub - connormcmonigle/reference-neural-network] by [[Connor McMonigle]]
* <span id="lucasart"></span>[https://github.com/lucasart/nn GitHub - lucasart/nn: neural network experiment] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=77061 yet another NN library] by lucasart, [[CCC]], April 11, 2021</ref>
==Libraries==
* [https://en.wikipedia.org/wiki/Eigen_%28C%2B%2B_library%29 Eigen (C++ library) from Wikipedia]
* [http://leenissen.dk/fann/wp/ Fast Artificial Neural Network Library (FANN)]
* [https://en.wikipedia.org/wiki/Keras Keras from Wikipedia]
* [https://wiki.python.org/moin/PythonForArtificialIntelligence PythonForArtificialIntelligence - Python Wiki] [[Python]]
* [https://en.wikipedia.org/wiki/TensorFlow TensorFlow from Wikipedia]
: [https://www.youtube.com/watch?v=lvoHnicueoE Lecture 14 | Deep Reinforcement Learning] by [[Mathematician#SYeung|Serena Yeung]], [http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture14.pdf slides]
: [https://www.youtube.com/watch?v=eZdOkDtYMoo Lecture 15 | Efficient Methods and Hardware for Deep Learning] by [https://scholar.google.com/citations?user=E0iCaa4AAAAJ&hl=en Song Han], [http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture15.pdf slides]
==Music==
* [https://en.wikipedia.org/wiki/John_Zorn#The_Dreamers The Dreamers] & [[:Category:John Zorn|John Zorn]] - Gormenghast, [https://en.wikipedia.org/wiki/Pellucidar:_A_Dreamers_Fantabula Pellucidar: A Dreamers Fantabula] (2015), [https://en.wikipedia.org/wiki/YouTube YouTube] Video
: [[:Category:Marc Ribot|Marc Ribot]], [https://en.wikipedia.org/wiki/Kenny_Wollesen Kenny Wollesen], [https://en.wikipedia.org/wiki/Joey_Baron Joey Baron], [https://en.wikipedia.org/wiki/Jamie_Saft Jamie Saft], [https://en.wikipedia.org/wiki/Trevor_Dunn Trevor Dunn], [https://en.wikipedia.org/wiki/Cyro_Baptista Cyro Baptista], John Zorn
: {{#evu:https://www.youtube.com/watch?v=97MsK88rjy8|alignment=left|valignment=top}}
=References=
<references />
 
'''[[Learning|Up one Level]]'''
[[Category:Marc Ribot]]
[[Category:John Zorn]]
[[Category:Videos]]
