Deep Learning

Deep Neural Network [1]

Deep Learning,
a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data - characterized as a buzzword, or a rebranding of neural networks. A deep neural network (DNN) is an ANN with multiple hidden layers of units between the input and output layers, which can be discriminatively trained with the standard backpropagation algorithm. Two common issues with naive training are overfitting and computation time. While deep learning techniques yielded another breakthrough in computer Go (after Monte-Carlo Tree Search), early trials in computer chess were promising as well, but until December 2017 less spectacular.
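
A minimal sketch of the basic mechanism - not taken from any particular engine, and assuming NumPy - is shown below: a small fully connected network with two hidden layers is trained by plain backpropagation. Layer sizes, the learning rate and the random training data are placeholders.

 import numpy as np

 # Minimal sketch: a fully connected network with two hidden layers,
 # trained by standard backpropagation (gradient descent on squared error).
 # Layer sizes, learning rate and the toy data are placeholders.
 rng = np.random.default_rng(0)
 sizes = [64, 32, 32, 1]                               # input, two hidden layers, output
 W = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
 b = [np.zeros(n) for n in sizes[1:]]

 def forward(x):
     """Return the activations of every layer for the input batch x."""
     acts = [x]
     for Wi, bi in zip(W, b):
         x = np.tanh(x @ Wi + bi)                      # tanh units throughout
         acts.append(x)
     return acts

 def backprop(x, y, lr=0.01):
     """One gradient-descent step on the mean squared error."""
     acts = forward(x)
     delta = (acts[-1] - y) * (1 - acts[-1] ** 2)      # error at the output layer
     for i in reversed(range(len(W))):
         grad_W = acts[i].T @ delta / len(x)
         grad_b = delta.mean(axis=0)
         if i:                                         # error for the layer below
             delta = (delta @ W[i].T) * (1 - acts[i] ** 2)
         W[i] -= lr * grad_W
         b[i] -= lr * grad_b

 # toy usage: fit random targets from random 64-dimensional "positions"
 X = rng.normal(size=(256, 64))
 Y = rng.uniform(-1, 1, (256, 1))
 for _ in range(100):
     backprop(X, Y)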

Go

Convolutional neural networks form a subclass of feedforward neural networks with special weight constraints: individual neurons are tiled in such a way that they respond to overlapping regions. Convolutional NNs are well suited for deep learning and for parallelization on GPUs [2]. In 2014, two teams independently investigated whether deep convolutional neural networks could be used to directly represent and learn a move evaluation function for the game of Go. Christopher Clark and Amos Storkey trained an 8-layer convolutional neural network by supervised learning from a database of human professional games, which, without any search, defeated the traditional search program GNU Go in 86% of the games [3] [4] [5] [6]. In their paper Move Evaluation in Go Using Deep Convolutional Neural Networks [7], Chris J. Maddison, Aja Huang, Ilya Sutskever, and David Silver report that they trained a large 12-layer convolutional neural network in a similar way to beat GNU Go in 97% of the games, matching the performance of a state-of-the-art Monte-Carlo tree search that simulates a million positions per move [8].
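
A schematic move-prediction network in the spirit of these papers can be written in a few lines; the sketch below assumes PyTorch, and the number of input planes, layer widths and the training batch are invented for illustration rather than taken from the cited architectures.

 import torch
 import torch.nn as nn

 # Schematic move-prediction network in the spirit of the 2014 Go papers:
 # a stack of 3x3 convolutions over 19x19 input planes, ending in one score
 # per board point. The plane count and widths here are invented.
 class GoMoveNet(nn.Module):
     def __init__(self, planes=8, width=64, depth=8):
         super().__init__()
         layers = [nn.Conv2d(planes, width, 3, padding=1), nn.ReLU()]
         for _ in range(depth - 2):
             layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
         layers += [nn.Conv2d(width, 1, 1)]            # 1x1 conv: one logit per point
         self.net = nn.Sequential(*layers)

     def forward(self, x):                             # x: (batch, planes, 19, 19)
         return self.net(x).flatten(1)                 # logits over the 361 points

 # Supervised training on expert games reduces to cross-entropy against the
 # index of the move actually played (the batch below is random dummy data).
 model = GoMoveNet()
 x = torch.randn(16, 8, 19, 19)
 played = torch.randint(0, 361, (16,))
 loss = nn.functional.cross_entropy(model(x), played)
 loss.backward()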

In 2015, a team affiliated with Google DeepMind around David Silver and Aja Huang, supported by Google researchers John Nham and Ilya Sutskever, built a Go-playing program dubbed AlphaGo [9], combining Monte-Carlo tree search with their 12-layer networks [10].
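
The sketch below illustrates, in toy form, how move probabilities from such a network can bias the selection step of Monte-Carlo tree search in a PUCT-like way; the Node class, the constant c_puct and the priors are hypothetical and do not reproduce DeepMind's implementation.

 import math

 # Toy sketch of using a move-probability network to guide tree search:
 # child selection follows a PUCT-like rule Q + U, where the exploration
 # term U grows with the network prior P(move). All values are invented.
 class Node:
     def __init__(self, prior):
         self.prior = prior                # P(move) from the policy network
         self.visits = 0
         self.value_sum = 0.0
         self.children = {}                # move -> Node

     def q(self):                          # mean value of evaluations so far
         return self.value_sum / self.visits if self.visits else 0.0

 def select_child(node, c_puct=1.5):
     """Pick the child maximising Q + U."""
     total = sum(ch.visits for ch in node.children.values())
     def score(ch):
         u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
         return ch.q() + u
     return max(node.children.items(), key=lambda kv: score(kv[1]))

 # usage: a root whose three children got their priors from the network
 root = Node(prior=1.0)
 for move, p in {"A": 0.6, "B": 0.3, "C": 0.1}.items():
     root.children[move] = Node(prior=p)
 best_move, best_child = select_child(root)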

Chess

Giraffe & Zurichess

In 2015, Matthew Lai trained Giraffe's deep neural network by TD-Leaf [11]. Zurichess by Alexandru Moșoi uses the TensorFlow library for automated tuning - in a two-layer neural network, the second layer is responsible for the tapered eval, blending middlegame and endgame scores by game phase [12].
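
The tapered-eval idea can be pictured as follows; the sketch is a hypothetical illustration rather than Zurichess's code, and the feature names, weights and phase value are placeholders.

 # Hypothetical illustration of the tapered-eval idea, not Zurichess's code:
 # the first layer produces separate middlegame and endgame scores from the
 # feature values, and the second layer blends them by game phase. All
 # feature names, weights and the phase formula are placeholders.
 MIDGAME_W = {"material": 1.0, "mobility": 0.10, "king_safety": 0.30}
 ENDGAME_W = {"material": 1.0, "mobility": 0.05, "passed_pawns": 0.40}

 def layer1(features, weights):
     """First layer: weighted sum of evaluation features for one phase."""
     return sum(weights.get(name, 0.0) * value for name, value in features.items())

 def tapered_eval(features, phase):
     """Second layer: blend the two phase scores; phase is 1.0 in the pure
     middlegame and 0.0 in the pure endgame."""
     mg = layer1(features, MIDGAME_W)
     eg = layer1(features, ENDGAME_W)
     return phase * mg + (1.0 - phase) * eg

 # usage: a position with toy feature values, halfway to the endgame
 print(tapered_eval({"material": 3.0, "mobility": 12.0, "passed_pawns": 1.0}, 0.5))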

DeepChess

In 2016, Omid E. David, Nathan S. Netanyahu, and Lior Wolf introduced DeepChess, obtaining grandmaster-level chess playing performance with a learning method incorporating two deep neural networks, trained by a combination of unsupervised pretraining and supervised training. The unsupervised training extracts high-level features from a given chess position, and the supervised training learns to compare two chess positions and select the more favorable one. To use DeepChess inside a chess program, a novel version of alpha-beta search is used that does not operate on numeric bounds but on positions αpos and βpos [13].
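
A schematic of the two-network idea, assuming PyTorch: a feature extractor (in DeepChess pretrained layer-wise as a deep autoencoder) encodes each of two positions, and a comparison head predicts which one is preferable. The layer sizes below are illustrative and not the published architecture.

 import torch
 import torch.nn as nn

 # Schematic of the two-network idea, not the published architecture:
 # a feature extractor encodes each position, and a comparison head
 # predicts which of the two encodings is preferable. Sizes are illustrative.
 class Pos2Vec(nn.Module):
     def __init__(self, in_dim=773, dims=(600, 400, 200, 100)):
         super().__init__()
         layers, prev = [], in_dim
         for d in dims:
             layers += [nn.Linear(prev, d), nn.ReLU()]
             prev = d
         self.net = nn.Sequential(*layers)

     def forward(self, x):
         return self.net(x)

 class CompareHead(nn.Module):
     def __init__(self, feat=100):
         super().__init__()
         self.net = nn.Sequential(nn.Linear(2 * feat, 100), nn.ReLU(),
                                  nn.Linear(100, 2))   # "left better" / "right better"

     def forward(self, a, b):
         return self.net(torch.cat([a, b], dim=1))

 # usage on dummy bitboard-like encodings of two positions
 extractor, head = Pos2Vec(), CompareHead()
 pos_a, pos_b = torch.rand(32, 773), torch.rand(32, 773)
 logits = head(extractor(pos_a), extractor(pos_b))     # shape (32, 2)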

AlphaZero

In December 2017, the Google DeepMind team, with Matthew Lai involved, published their generalized AlphaZero algorithm, combining deep learning with Monte-Carlo Tree Search. AlphaZero can achieve, tabula rasa, superhuman performance in many challenging domains with some training effort. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved a superhuman level of play in the games of chess and shogi as well as Go, and convincingly defeated a world-champion program in each case [14].
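
The training target behind this combination can be sketched as follows, assuming PyTorch: the network's policy output is pushed towards the search's visit-count distribution and its value output towards the game outcome. The move-encoding size and the random batch below are placeholders.

 import torch
 import torch.nn.functional as F

 # Sketch of an AlphaZero-style training target: the policy output p is
 # pushed towards the MCTS visit-count distribution pi, and the value
 # output v towards the game outcome z,
 #   loss = (z - v)^2 - pi . log p  (+ L2 regularisation via weight decay).
 # The move-encoding size and the random batch are placeholders.
 policy_logits = torch.randn(32, 4672, requires_grad=True)   # dummy policy head output
 value = torch.tanh(torch.randn(32, 1, requires_grad=True))  # dummy value head output
 pi = torch.softmax(torch.randn(32, 4672), dim=1)            # search policy targets
 z = torch.randint(-1, 2, (32, 1)).float()                   # game results in {-1, 0, +1}

 value_loss = F.mse_loss(value, z)
 policy_loss = -(pi * F.log_softmax(policy_logits, dim=1)).sum(dim=1).mean()
 loss = value_loss + policy_loss
 loss.backward()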

Leela Chess Zero

Leela Chess Zero is an adaptation of Gian-Carlo Pascutto's Leela Zero Go project [15] to Chess.

See also

Selected Publications

1965 ...

1980 ...

1990 ...

2000 ...

2010 ...

2013

2014

2015 ...

2016

2017

2018

2019

2020 ...

2021

Forum Posts

2014

2015 ...

2016

Re: Deep Learning Chess Engine ? by Alexandru Mosoi, CCC, July 21, 2016 » Zurichess
Re: Deep Learning Chess Engine ? by Matthew Lai, CCC, August 04, 2016 » Giraffe [52]

2017

Re: Is AlphaGo approach unsuitable to chess? by Peter Österlund, CCC, May 31, 2017 » Texel
Re: To TPU or not to TPU... by Rémi Coulom, CCC, December 16, 2017

2018

2019

Re: A question to MCTS + NN experts by Daniel Shawul, CCC, July 17, 2019

2020 ...

External Links

Networks

Convolutional Neural Networks for Image and Video Processing, TUM Wiki, Technical University of Munich
An Introduction to different Types of Convolutions in Deep Learning by Paul-Louis Pröve, July 22, 2017
Squeeze-and-Excitation Networks by Paul-Louis Pröve, October 17, 2017

Software

Libraries

Chess

Games

Music Generation

Nvidia

Reports & Blogs

Texas Hold'em: AI is almost as good as humans at playing poker by Matt Burgess, Wired UK, March 30, 2016
GitHub - suragnair/alpha-zero-general: A clean and simple implementation of a self-play learning algorithm based on AlphaGo Zero (any game, any framework!)

Videos

References

  1. Image based on HDLTex: Hierarchical Deep Learning for Text Classification by Kk7nc, December 14, 2017, Hierarchical Deep Learning from Wikipedia
  2. PARsE | Education | GPU Cluster | Efficient mapping of the training of Convolutional Neural Networks to a CUDA-based cluster
  3. Christopher Clark, Amos Storkey (2014). Teaching Deep Convolutional Neural Networks to Play Go. arXiv:1412.3409
  4. Teaching Deep Convolutional Neural Networks to Play Go by Hiroshi Yamashita, The Computer-go Archives, December 14, 2014
  5. Why Neural Networks Look Set to Thrash the Best Human Go Players for the First Time | MIT Technology Review, December 15, 2014
  6. Teaching Deep Convolutional Neural Networks to Play Go by Michel Van den Bergh, CCC, December 16, 2014
  7. Chris J. Maddison, Aja Huang, Ilya Sutskever, David Silver (2014). Move Evaluation in Go Using Deep Convolutional Neural Networks. arXiv:1412.6564v1
  8. Move Evaluation in Go Using Deep Convolutional Neural Networks by Aja Huang, The Computer-go Archives, December 19, 2014
  9. AlphaGo | Google DeepMind
  10. David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, Demis Hassabis (2016). Mastering the game of Go with deep neural networks and tree search. Nature, Vol. 529
  11. *First release* Giraffe, a new engine based on deep learning by Matthew Lai, CCC, July 08, 2015
  12. Re: Deep Learning Chess Engine ? by Alexandru Mosoi, CCC, July 21, 2016
  13. Omid E. David, Nathan S. Netanyahu, Lior Wolf (2016). DeepChess: End-to-End Deep Neural Network for Automatic Learning in Chess. ICANN 2016, Lecture Notes in Computer Science, Vol. 9887, Springer, pdf preprint
  14. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. arXiv:1712.01815
  15. GitHub - gcp/leela-zero: Go engine with no human-provided knowledge, modeled after the AlphaGo Zero paper
  16. Neocognitron - Scholarpedia by Kunihiko Fukushima
  17. Who introduced the term “deep learning” to the field of Machine Learning by Jürgen Schmidhuber, Google+, March 18, 2015
  18. Sepp Hochreiter's Fundamental Deep Learning Problem (1991) by Jürgen Schmidhuber, 2013
  19. Long short term memory from Wikipedia
  20. Who introduced the term “deep learning” to the field of Machine Learning by Jürgen Schmidhuber, Google+, March 18, 2015
  21. Demystifying Deep Reinforcement Learning by Tambet Matiisen, Nervana, December 21, 2015
  22. high dimensional optimization by Warren D. Smith, FishCooking, December 27, 2019
  23. Teaching Deep Convolutional Neural Networks to Play Go by Hiroshi Yamashita, The Computer-go Archives, December 14, 2014
  24. Teaching Deep Convolutional Neural Networks to Play Go by Michel Van den Bergh, CCC, December 16, 2014
  25. Re: To TPU or not to TPU... by Rémi Coulom, CCC, December 16, 2017
  26. How Facebook’s AI Researchers Built a Game-Changing Go Engine | MIT Technology Review, December 04, 2015
  27. Combining Neural Networks and Search techniques (GO) by Michael Babigian, CCC, December 08, 2015
  28. Quoc Le’s Lectures on Deep Learning | Gaurav Trivedi
  29. GitHub - BarakOshri/ConvChess: Predicting Moves in Chess Using Convolutional Neural Networks
  30. ConvChess CNN by Brian Richardson, CCC, March 15, 2017
  31. Jürgen Schmidhuber (2015). Critique of Paper by "Deep Learning Conspiracy" (Nature 521 p 436).
  32. DeepChess: Another deep-learning based chess program by Matthew Lai, CCC, October 17, 2016
  33. ICANN 2016 | Recipients of the best paper awards
  34. Jigsaw puzzle from Wikipedia
  35. Could DeepMind try to conquer poker next? by Alex Hern, The Guardian, March 30, 2016
  36. CMA-ES from Wikipedia
  37. catastrophic forgetting by Daniel Shawul, CCC, May 09, 2019
  38. Stockfish NN release (NNUE) by Henk Drost, CCC, May 31, 2020 » Stockfish
  39. AlphaGo Zero: Learning from scratch by Demis Hassabis and David Silver, DeepMind, October 18, 2017
  40. GitHub - suragnair/alpha-zero-general: A clean and simple implementation of a self-play learning algorithm based on AlphaGo Zero (any game, any framework!)
  41. GitHub - mil-tokyo/webdnn: The Fastest DNN Running Framework on Web Browser
  42. GitHub - paintception/DeepChess
  43. Edax by Richard Delorme
  44. Deep Pepper Paper by Leo, CCC, July 07, 2018
  45. AlphaZero: Shedding new light on the grand games of chess, shogi and Go by David Silver, Thomas Hubert, Julian Schrittwieser and Demis Hassabis, DeepMind, December 03, 2018
  46. MuZero: Mastering Go, chess, shogi and Atari without rules
  47. GitHub - koulanurag/muzero-pytorch: Pytorch Implementation of MuZero
  48. Book about Neural Networks for Chess by dkl, CCC, September 29, 2021
  49. Acquisition of Chess Knowledge in AlphaZero, ChessBase News, November 18, 2021
  50. Rina Dechter (1986). Learning While Searching in Constraint-Satisfaction-Problems. AAAI 86, pdf
  51. GitHub - pluskid/Mocha.jl: Deep Learning framework for Julia
  52. Rectifier (neural networks) from Wikipedia
  53. Yann Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio (2014). Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. arXiv:1406.2572
  54. Barak Oshri, Nishith Khandwala (2015). Predicting Moves in Chess using Convolutional Neural Networks. pdf
  55. Re: Google's AlphaGo team has been working on chess by Brian Richardson, CCC, December 09, 2017
  56. Connect 4 AlphaZero implemented using Python... by Steve Maughan, CCC, January 29, 2018
  57. Basic Linear Algebra Subprograms - Functionality - Level 3 | Wikipedia
  58. Re: To TPU or not to TPU... by Rémi Coulom, CCC, December 16, 2017
  59. Yuandong Tian, Yan Zhu (2015). Better Computer Go Player with Neural Network and Long-term Prediction. arXiv:1511.06410
  60. Johannes Heinrich, David Silver (2016). Deep Reinforcement Learning from Self-Play in Imperfect-Information Games. arXiv:1603.01121
  61. A Simple Alpha(Go) Zero Tutorial by Oliver Roese, CCC, December 30, 2017
