Reinforcement Learning

From Chessprogramming wiki
 
* [https://dblp.org/pid/233/8144.html Indu John], [https://scholar.google.co.in/citations?user=1QlrvHkAAAAJ&hl=en Chandramouli Kamanchi], [[Shalabh Bhatnagar]] ('''2020'''). ''Generalized Speedy Q-Learning''. [[IEEE#CSL|IEEE Control Systems Letters]], Vol. 4, No. 3, [https://arxiv.org/abs/1911.00397 arXiv:1911.00397]
* [[Takuya Hiraoka]], [https://dblp.org/pers/hd/i/Imagawa:Takahisa Takahisa Imagawa], [https://dblp.org/pers/hd/t/Tangkaratt:Voot Voot Tangkaratt], [https://dblp.org/pers/hd/o/Osa:Takayuki Takayuki Osa], [https://dblp.org/pers/hd/o/Onishi:Takashi Takashi Onishi], [https://dblp.org/pers/hd/t/Tsuruoka:Yoshimasa Yoshimasa Tsuruoka] ('''2020'''). ''Meta-Model-Based Meta-Policy Optimization''. [https://arxiv.org/abs/2006.02608 arXiv:2006.02608]
* [[Julian Schrittwieser]], [[Ioannis Antonoglou]], [[Thomas Hubert]], [[Karen Simonyan]], [[Laurent Sifre]], [[Simon Schmitt]], [[Arthur Guez]], [[Edward Lockhart]], [[Demis Hassabis]], [[Thore Graepel]], [[Timothy Lillicrap]], [[David Silver]] ('''2020'''). ''[https://www.nature.com/articles/s41586-020-03051-4 Mastering Atari, Go, chess and shogi by planning with a learned model]''. [https://en.wikipedia.org/wiki/Nature_%28journal%29 Nature], Vol. 588 <ref>[https://deepmind.com/blog/article/muzero-mastering-go-chess-shogi-and-atari-without-rules?fbclid=IwAR3mSwrn1YXDKr9uuGm2GlFKh76wBilex7f8QvBiQecwiVmAvD6Bkyjx-rE MuZero: Mastering Go, chess, shogi and Atari without rules]</ref> <ref>[https://github.com/koulanurag/muzero-pytorch GitHub - koulanurag/muzero-pytorch: Pytorch Implementation of MuZero]</ref>
* [[Tristan Cazenave]], [[Yen-Chi Chen]], [[Guan-Wei Chen]], [[Shi-Yu Chen]], [[Xian-Dong Chiu]], [[Julien Dehos]], [[Maria Elsa]], [[Qucheng Gong]], [[Hengyuan Hu]], [[Vasil Khalidov]], [[Cheng-Ling Li]], [[Hsin-I Lin]], [[Yu-Jin Lin]], [[Xavier Martinet]], [[Vegard Mella]], [[Jeremy Rapin]], [[Baptiste Roziere]], [[Gabriel Synnaeve]], [[Fabien Teytaud]], [[Olivier Teytaud]], [[Shi-Cheng Ye]], [[Yi-Jun Ye]], [[Shi-Jim Yen]], [[Sergey Zagoruyko]] ('''2020'''). ''Polygames: Improved zero learning''. [[ICGA Journal#42_4|ICGA Journal, Vol. 42, No. 4]], [https://arxiv.org/abs/2001.09832 arXiv:2001.09832]
* [[Matthia Sabatelli]], [https://github.com/glouppe Gilles Louppe], [https://scholar.google.com/citations?user=tyFTsmIAAAAJ&hl=en Pierre Geurts], [[Marco Wiering]] ('''2020'''). ''The Deep Quality-Value Family of Deep Reinforcement Learning Algorithms''. [https://dblp.org/db/conf/ijcnn/ijcnn2020.html#SabatelliLGW20 IJCNN 2020] <ref>[https://github.com/paintception/Deep-Quality-Value-DQV-Learning- GitHub - paintception/Deep-Quality-Value-DQV-Learning-: DQV-Learning: a novel faster synchronous Deep Reinforcement Learning algorithm]</ref>
* [[Quentin Cohen-Solal]], [[Tristan Cazenave]] ('''2021'''). ''DESCENT wins five gold medals at the Computer Olympiad''. [[ICGA Journal#43_2|ICGA Journal, Vol. 43, No. 2]]
* [[Boris Doux]], [[Benjamin Negrevergne]], [[Tristan Cazenave]] ('''2021'''). ''Deep Reinforcement Learning for Morpion Solitaire''. [[Advances in Computer Games 17]]
* [[Weirui Ye]], [[Shaohuai Liu]], [[Thanard Kurutach]], [[Pieter Abbeel]], [[Yang Gao]] ('''2021'''). ''Mastering Atari Games with Limited Data''. [https://arxiv.org/abs/2111.00210 arXiv:2111.00210] <ref>[https://github.com/YeWR/EfficientZero GitHub - YeWR/EfficientZero: Open-source codebase for EfficientZero, from "Mastering Atari Games with Limited Data" at NeurIPS 2021]</ref> <ref>[https://www.talkchess.com/forum3/viewtopic.php?f=7&t=78790 Want to train nets faster?] by [[Dann Corbit]], [[CCC]], December 01, 2021</ref>
* [[Dennis Soemers]], [[Vegard Mella]], [[Cameron Browne]], [[Olivier Teytaud]] ('''2021'''). ''Deep learning for general game playing with Ludii and Polygames''. [[ICGA Journal#43_3|ICGA Journal, Vol. 43, No. 3]]
  
 
 
* [http://videolectures.net/deeplearning2016_pineau_reinforcement_learning/ Introduction to Reinforcement Learning] by [[Joelle Pineau]], [[McGill University]], 2016, [https://en.wikipedia.org/wiki/YouTube YouTube] Video

: {{#evu:https://www.youtube.com/watch?v=O_1Z63EDMvQ|alignment=left|valignment=top}}
==GitHub==
 
* [https://github.com/deepmind/open_spiel GitHub - deepmind/open_spiel: OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games] <ref>[[Marc Lanctot]], [[Edward Lockhart]], [[Jean-Baptiste Lespiau]], [[Vinícius Flores Zambaldi]], [[Satyaki Upadhyay]], [[Julien Pérolat]], [[Sriram Srinivasan]], [[Finbarr Timbers]], [[Karl Tuyls]], [[Shayegan Omidshafiei]], [[Daniel Hennes]], [[Dustin Morrill]], [[Paul Muller]], [[Timo Ewalds]], [[Ryan Faulkner]], [[János Kramár]], [[Bart De Vylder]], [[Brennan Saeta]], [[James Bradbury]], [[David Ding]], [[Sebastian Borgeaud]], [[Matthew Lai]], [[Julian Schrittwieser]], [[Thomas Anthony]], [[Edward Hughes]], [[Ivo Danihelka]], [[Jonah Ryan-Davis]] ('''2019'''). ''OpenSpiel: A Framework for Reinforcement Learning in Games''. [https://arxiv.org/abs/1908.09453 arXiv:1908.09453]</ref>
** [https://github.com/deepmind/open_spiel/tree/master/open_spiel/algorithms open_spiel/open_spiel/algorithms at master · deepmind/open_spiel · GitHub]
** [https://github.com/deepmind/open_spiel/tree/master/open_spiel/games open_spiel/open_spiel/games at master · deepmind/open_spiel · GitHub]
*** [https://github.com/deepmind/open_spiel/tree/master/open_spiel/games/chess open_spiel/open_spiel/games/chess at master · deepmind/open_spiel · GitHub]
* [https://github.com/koulanurag/muzero-pytorch GitHub - koulanurag/muzero-pytorch: Pytorch Implementation of MuZero] <ref>[[Julian Schrittwieser]], [[Ioannis Antonoglou]], [[Thomas Hubert]], [[Karen Simonyan]], [[Laurent Sifre]], [[Simon Schmitt]], [[Arthur Guez]], [[Edward Lockhart]], [[Demis Hassabis]], [[Thore Graepel]], [[Timothy Lillicrap]], [[David Silver]] ('''2020'''). ''[https://www.nature.com/articles/s41586-020-03051-4 Mastering Atari, Go, chess and shogi by planning with a learned model]''. [https://en.wikipedia.org/wiki/Nature_%28journal%29 Nature], Vol. 588</ref>
* [https://github.com/YeWR/EfficientZero GitHub - YeWR/EfficientZero: Open-source codebase for EfficientZero, from "Mastering Atari Games with Limited Data" at NeurIPS 2021] <ref>[[Weirui Ye]], [[Shaohuai Liu]], [[Thanard Kurutach]], [[Pieter Abbeel]], [[Yang Gao]] ('''2021'''). ''Mastering Atari Games with Limited Data''. [https://arxiv.org/abs/2111.00210 arXiv:2111.00210]</ref>
* [https://github.com/facebookarchive/Polygames GitHub - facebookarchive/Polygames: The project is a platform of zero learning with a library of games]
  
 


Reinforcement Learning,
a learning paradigm inspired by behaviourist psychology and classical conditioning: learning by trial and error while interacting with an environment, mapping situations to actions in such a way that some notion of cumulative reward is maximized. In computer games, reinforcement learning typically means adjusting feature weights based on game results or on subsequent predictions during self-play.

Reinforcement learning is indebted to the idea of Markov decision processes (MDPs) from the field of optimal control, solved with dynamic programming techniques. The crucial tradeoff between exploitation and exploration known from multi-armed bandit problems, and also considered in UCT for Monte-Carlo Tree Search - "exploitation" of the machine with the highest expected payoff versus "exploration" to gather more information about the expected payoffs of the other machines - is faced in reinforcement learning as well.
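To make the bandit tradeoff concrete, the following is a minimal sketch in Python of UCB1-style arm selection, the same upper-confidence-bound idea that underlies UCT. The simulated Bernoulli payoffs and the exploration constant c are arbitrary choices for this illustration, not taken from any of the cited works.

<pre>
import math, random

def ucb1_select(counts, values, c=1.414):
    """Pick the arm maximizing mean payoff plus an exploration bonus (UCB1)."""
    total = sum(counts)
    for arm, n in enumerate(counts):
        if n == 0:
            return arm                       # play every arm once first
    return max(range(len(counts)),
               key=lambda a: values[a] + c * math.sqrt(math.log(total) / counts[a]))

def play_bandit(payoff_probs, steps=1000):
    """Run a simple Bernoulli bandit and keep incremental mean payoffs per arm."""
    k = len(payoff_probs)
    counts, values = [0] * k, [0.0] * k
    for _ in range(steps):
        arm = ucb1_select(counts, values)
        reward = 1.0 if random.random() < payoff_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean
    return counts, values

if __name__ == "__main__":
    print(play_bandit([0.2, 0.5, 0.7]))      # the 0.7 machine should attract most plays
</pre>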

Q-Learning

Q-Learning, introduced by Chris Watkins in 1989, is a simple way for agents to learn how to act optimally in controlled Markovian domains [2]. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely [3]. Q-learning has been successfully combined with deep learning by a Google DeepMind team to play several Atari 2600 games, as published in Nature, 2015 - an approach dubbed deep reinforcement learning or deep Q-networks (DQN) [4] - soon followed by the spectacular AlphaGo and AlphaZero breakthroughs.
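The update rule itself is compact. Below is a minimal sketch of tabular Q-learning in Python; the environment interface (reset, step, legal_actions) and the values of the learning rate alpha, the discount gamma and the exploration rate epsilon are hypothetical placeholders for illustration, not part of Watkins' original formulation.

<pre>
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q = defaultdict(float)                       # (state, action) -> value, default 0.0
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.legal_actions(state)
            if random.random() < epsilon:        # explore: random action
                action = random.choice(actions)
            else:                                # exploit: greedy w.r.t. current estimates
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(state, action)
            best_next = 0.0 if done else max(
                Q[(next_state, a)] for a in env.legal_actions(next_state))
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
</pre>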

Temporal Difference Learning

see main page Temporal Difference Learning

Q-learning at its simplest stores its data in tables. This quickly becomes infeasible as the state/action space of the system being monitored or controlled grows. One solution to this problem is to use an (adapted) artificial neural network as a function approximator, as demonstrated by Gerald Tesauro in his temporal difference learning research on Backgammon [5] [6].
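As an illustration of such a function approximator - an assumption made for this page, not Tesauro's actual network - here is a one-hidden-layer value network in Python with NumPy that can be trained towards a TD or Q-learning target by one step of plain gradient descent:

<pre>
import numpy as np

class ValueNet:
    """Tiny one-hidden-layer network mapping a feature vector to a value in (0, 1)."""
    def __init__(self, n_inputs, n_hidden=32, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_inputs))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        """Predict a value for feature vector x (kept for the backward pass)."""
        self.x = x
        self.h = np.tanh(self.W1 @ x + self.b1)
        self.v = 1.0 / (1.0 + np.exp(-(self.W2 @ self.h + self.b2)))   # sigmoid output
        return self.v

    def update(self, target):
        """One gradient-descent step on the squared error towards a (TD) target."""
        delta = (self.v - target) * self.v * (1.0 - self.v)            # d loss / d pre-sigmoid
        grad_pre = delta * self.W2 * (1.0 - self.h ** 2)               # back through tanh
        self.W2 -= self.lr * delta * self.h
        self.b2 -= self.lr * delta
        self.W1 -= self.lr * np.outer(grad_pre, self.x)
        self.b1 -= self.lr * grad_pre
</pre>

Here x would be a feature encoding of a position, and the training target could be the game result, the next position's own prediction (TD), or r + gamma * max Q(s',a') in the Q-learning case.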

Temporal Difference Learning is a prediction method primarily used for reinforcement learning. In the domain of computer games and computer chess, TD learning is applied through self-play: the program predicts the probability of winning after each move of a game, from the initial position until the end, and adjusts its weights so that earlier predictions move closer to the later, more reliable ones.
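The following is a minimal self-play TD(0) sketch in Python for a linear evaluation whose sigmoid output predicts the probability of winning. The game interface (initial_position, game_over, legal_moves, make_move, features, result) is a hypothetical placeholder, random move choice stands in for a real search, and side-to-move perspective handling is omitted for brevity.

<pre>
import math, random

def predict(weights, feats):
    """Winning-probability estimate: sigmoid of a linear evaluation."""
    score = sum(w * f for w, f in zip(weights, feats))
    return 1.0 / (1.0 + math.exp(-score))

def td_self_play_game(game, weights, alpha=0.01):
    """Play one self-play game; nudge each prediction towards the following one (TD(0)),
    and the last prediction towards the actual game result."""
    position = game.initial_position()
    predictions, feature_vectors = [], []
    while not game.game_over(position):
        move = random.choice(game.legal_moves(position))       # stand-in for a real search
        position = game.make_move(position, move)
        feats = game.features(position)
        feature_vectors.append(feats)
        predictions.append(predict(weights, feats))
    targets = predictions[1:] + [game.result(position)]        # 1.0 win ... 0.0 loss
    for p, feats, t in zip(predictions, feature_vectors, targets):
        grad = (t - p) * p * (1.0 - p)                         # sigmoid times squared-error gradient
        for i, f in enumerate(feats):
            weights[i] += alpha * grad * f
    return weights
</pre>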

See also

UCT

Selected Publications

1954 ...

1960 ...

1970 ...

1980 ...

1990 ...

1995 ...

2000 ...

2005 ...

2010 ...

2011

2012

István Szita (2012). Reinforcement Learning in Games. Chapter 17

2013

2014

2015 ...

2016

2017

2018

2019

2020 ...

2021

Postings

1995 ...

Re: Parameter Tuning by Don Beal, CCC, October 02, 1998

2000 ...

2010 ...

2020 ...

External Links

Reinforcement Learning

MDP

Q-Learning

Courses

  1. Lecture 1: Introduction to Reinforcement Learning
  2. Lecture 2: Markov Decision Process
  3. Lecture 3: Planning by Dynamic Programming
  4. Lecture 4: Model-Free Prediction
  5. Lecture 5: Model Free Control
  6. Lecture 6: Value Function Approximation
  7. Lecture 7: Policy Gradient Methods
  8. Lecture 8: Integrating Learning and Planning
  9. Lecture 9: Exploration and Exploitation
  10. Lecture 10: Classic Games

GitHub

References

  1. Example of a simple Markov decision processes with three states (green circles) and two actions (orange circles), with two rewards (orange arrows), image by waldoalvarez CC BY-SA 4.0, Wikimedia Commons
  2. Q-learning from Wikipedia
  3. Chris Watkins, Peter Dayan (1992). Q-learning. Machine Learning, Vol. 8, No. 2
  4. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis (2015). Human-level control through deep reinforcement learning. Nature, Vol. 518
  5. Q-learning from Wikipedia
  6. Gerald Tesauro (1995). Temporal Difference Learning and TD-Gammon. Communications of the ACM, Vol. 38, No. 3
  7. University of Bristol - Department of Computer Science - Technical Reports
  8. Ms. Pac-Man from Wikipedia
  9. Demystifying Deep Reinforcement Learning by Tambet Matiisen, Nervana, December 22, 2015
  10. Patent US20150100530 - Methods and apparatus for reinforcement learning - Google Patents
  11. DeepChess: Another deep-learning based chess program by Matthew Lai, CCC, October 17, 2016
  12. ICANN 2016 | Recipients of the best paper awards
  13. GitHub - paintception/A-Reinforcement-Learning-Approach-for-Solving-Chess-Endgames: Machine Learning - Reinforcement Learning
  14. AlphaGo Zero: Learning from scratch by Demis Hassabis and David Silver, DeepMind, October 18, 2017
  15. AlphaZero: Shedding new light on the grand games of chess, shogi and Go by David Silver, Thomas Hubert, Julian Schrittwieser and Demis Hassabis, DeepMind, December 03, 2018
  16. open_spiel/contributing.md at master · deepmind/open_spiel · GitHub
  17. New DeepMind paper by GregNeto, CCC, November 21, 2019
  18. Re: Board adaptive / tuning evaluation function - no NN/AI by Tony P., CCC, January 15, 2020
  19. MuZero: Mastering Go, chess, shogi and Atari without rules
  20. GitHub - koulanurag/muzero-pytorch: Pytorch Implementation of MuZero
  21. GitHub - paintception/Deep-Quality-Value-DQV-Learning-: DQV-Learning: a novel faster synchronous Deep Reinforcement Learning algorithm
  22. Book about Neural Networks for Chess by dkl, CCC, September 29, 2021
  23. GitHub - YeWR/EfficientZero: Open-source codebase for EfficientZero, from "Mastering Atari Games with Limited Data" at NeurIPS 2021
  24. Want to train nets faster? by Dann Corbit, CCC, December 01, 2021
  25. Marc Lanctot, Edward Lockhart, Jean-Baptiste Lespiau, Vinícius Flores Zambaldi, Satyaki Upadhyay, Julien Pérolat, Sriram Srinivasan, Finbarr Timbers, Karl Tuyls, Shayegan Omidshafiei, Daniel Hennes, Dustin Morrill, Paul Muller, Timo Ewalds, Ryan Faulkner, János Kramár, Bart De Vylder, Brennan Saeta, James Bradbury, David Ding, Sebastian Borgeaud, Matthew Lai, Julian Schrittwieser, Thomas Anthony, Edward Hughes, Ivo Danihelka, Jonah Ryan-Davis (2019). OpenSpiel: A Framework for Reinforcement Learning in Games. arXiv:1908.09453
  26. Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, David Silver (2020). Mastering Atari, Go, chess and shogi by planning with a learned model. Nature, Vol. 588
  27. Weirui Ye, Shaohuai Liu, Thanard Kurutach, Pieter Abbeel, Yang Gao (2021). Mastering Atari Games with Limited Data. arXiv:2111.00210
