Home * Learning * Reinforcement Learning

Reinforcement Learning,
a learning paradigm inspired by behaviourist psychology and classical conditioning - learning by trial and error while interacting with an environment, mapping situations to actions in such a way that some notion of cumulative reward is maximized. In computer games, reinforcement learning deals with adjusting feature weights based on game results, or on subsequent predictions, during self play.

Reinforcement learning is indebted to the idea of Markov decision processes (MDPs) from the field of optimal control, utilizing dynamic programming techniques. The crucial tradeoff between "exploitation" of the machine with the highest expected payoff and "exploration" to gain more information about the expected payoffs of the other machines, known from multi-armed bandit problems and also addressed by UCT in Monte-Carlo Tree Search, is likewise faced in reinforcement learning.
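
To make the tradeoff concrete, here is a minimal sketch of the UCB1 selection rule used in UCT, applied to a two-armed bandit; the payoff probabilities, horizon, and exploration constant are illustrative assumptions, not taken from this page:

<pre>
import math
import random

def ucb1(totals, counts, t, c=math.sqrt(2.0)):
    """Pick the arm maximizing mean payoff plus an exploration bonus."""
    for arm, n in enumerate(counts):
        if n == 0:                 # play every untried arm once first
            return arm
    return max(range(len(counts)),
               key=lambda a: totals[a] / counts[a]
                             + c * math.sqrt(math.log(t) / counts[a]))

# illustrative two-armed bandit with hidden payoff probabilities
payoff = [0.4, 0.6]
totals, counts = [0.0, 0.0], [0, 0]
for t in range(1, 1001):
    arm = ucb1(totals, counts, t)
    reward = 1.0 if random.random() < payoff[arm] else 0.0
    totals[arm] += reward
    counts[arm] += 1
print(counts)   # the better arm (index 1) ends up pulled far more often
</pre>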

=Q-Learning=

Q-Learning, introduced by Chris Watkins in 1989, is a simple way for agents to learn how to act optimally in controlled Markovian domains [2]. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely [3]. Q-learning was successfully combined with deep learning by a Google DeepMind team to play some Atari 2600 games, as published in Nature, 2015, an approach dubbed deep reinforcement learning or deep Q-networks [4], soon followed by the spectacular AlphaGo and AlphaZero breakthroughs.
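
The following is a minimal tabular sketch of the Q-learning backup, Q(s,a) ← Q(s,a) + α(r + γ·max Q(s',a') − Q(s,a)); the toy action set, parameter values, and epsilon-greedy policy are assumptions chosen for illustration:

<pre>
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1    # learning rate, discount, exploration rate
ACTIONS = [0, 1, 2, 3]                    # a toy discrete action set

Q = defaultdict(float)                    # Q[(state, action)] -> current estimate

def choose_action(state):
    """Epsilon-greedy: usually exploit the best known action, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """One incremental backup: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
</pre>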

=Temporal Difference Learning=

''see main page [[Temporal Difference Learning]]''

Q-learning at its simplest uses tables to store data. This very quickly loses viability as the state/action space of the system being monitored or controlled grows. One solution to this problem is to use an (adapted) artificial neural network as a function approximator, as demonstrated by Gerald Tesauro in his temporal difference learning research on Backgammon [5] [6].

Temporal Difference Learning is a prediction method primarily used for reinforcement learning. In the domain of computer games and computer chess, TD learning is applied through self play: the probability of winning is predicted after each move in the sequence from the initial position until the end of the game, and the weights are adjusted so that earlier predictions agree more closely with later, more reliable ones.
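
A minimal sketch of such a weight update for a linear evaluation function squashed to a winning probability; the sigmoid squashing, feature representation, and learning rate are illustrative assumptions rather than any particular engine's implementation:

<pre>
import math

LEARNING_RATE = 0.01

def predict(weights, features):
    """Squash a linear evaluation into a winning-probability estimate in (0, 1)."""
    score = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))

def td_update(weights, features, target):
    """Move this position's prediction toward the target.

    During the game the target is the prediction for the successor position
    (bootstrapping); for the final position it is the actual game result
    (1 = win, 0 = loss).
    """
    p = predict(weights, features)
    gradient = p * (1.0 - p)              # derivative of the sigmoid
    for i, f in enumerate(features):
        weights[i] += LEARNING_RATE * (target - p) * gradient * f
</pre>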

=See also=

* [[UCT]]

=Selected Publications=

==1970 ...==

* [[A. Harry Klopf]] ('''1972'''). ''Brain Function and Adaptive Systems - A Heterostatic Theory''. [https://en.wikipedia.org/wiki/Air_Force_Cambridge_Research_Laboratories Air Force Cambridge Research Laboratories], Special Reports, No. 133, [http://www.dtic.mil/dtic/tr/fulltext/u2/742259.pdf pdf]

* [[Mathematician#Holland|John H. Holland]] ('''1975'''). ''Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence''. [http://www.amazon.com/Adaptation-Natural-Artificial-Systems-Introductory/dp/0262581116 amazon.com]

* [[Ian H. Witten]] ('''1977'''). ''An Adaptive Optimal Controller for Discrete-Time Markov Environments''. [https://en.wikipedia.org/wiki/Information_and_Computation Information and Control], Vol. 34, No. 4, [https://core.ac.uk/download/pdf/82451748.pdf pdf]

==1980 ...==

* [[Richard Sutton]] ('''1984'''). ''[http://scholarworks.umass.edu/dissertations/AAI8410337/ Temporal Credit Assignment in Reinforcement Learning]''. Ph.D. dissertation, [https://en.wikipedia.org/wiki/University_of_Massachusetts University of Massachusetts]

==1990 ...==

* [[Richard Sutton]], [[Andrew Barto]] ('''1990'''). ''Time Derivative Models of Pavlovian Reinforcement''. Learning and Computational Neuroscience: Foundations of Adaptive Networks: 497-537

* [[Jürgen Schmidhuber]] ('''1990'''). ''Reinforcement Learning in Markovian and Non-Markovian Environments''. [https://dblp.uni-trier.de/db/conf/nips/nips1990.html NIPS 1990], [ftp://ftp.idsia.ch/pub/juergen/nipsnonmarkov.pdf pdf]

* [[Peter Dayan]] ('''1991'''). ''[https://www.era.lib.ed.ac.uk/handle/1842/14754 Reinforcing Connectionism: Learning the Statistical Way]''. Ph.D. thesis, [[University of Edinburgh]]

* [[Chris Watkins]], [[Peter Dayan]] ('''1992'''). ''[http://www.gatsby.ucl.ac.uk/~dayan/papers/wd92.html Q-learning]''. [https://en.wikipedia.org/wiki/Machine_Learning_(journal) Machine Learning], Vol. 8, No. 2

* [[Gerald Tesauro]] ('''1992'''). ''Temporal Difference Learning of Backgammon Strategy''. [http://www.informatik.uni-trier.de/~ley/db/conf/icml/ml1992.html#Tesauro92 ML 1992]

==2000 ...==

* [[Andrew Ng]], [[Stuart Russell]] ('''2000'''). ''Algorithms for Inverse Reinforcement Learning''. In Proceedings of the Seventeenth International Conference on Machine Learning, Stanford, California: Morgan Kaufmann, [http://www.cs.berkeley.edu/~russell/papers/ml00-irl.pdf pdf]

* [http://www.cs.ou.edu/~hougen/ Dean F. Hougen], [http://www-users.cs.umn.edu/~gini/ Maria Gini], [[James R. Slagle]] ('''2000'''). ''[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.23.2633 An Integrated Connectionist Approach to Reinforcement Learning for Robotic Control]''. ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning

* [[Jonathan Baxter]], [[Mathematician#PBartlett|Peter Bartlett]] ('''2000'''). ''Reinforcement Learning on POMDPs via Direct Gradient Ascent''. [http://dblp.uni-trier.de/db/conf/icml/icml2000.html ICML 2000], [https://pdfs.semanticscholar.org/b874/98f0879d312c308889135203b17069aa0486.pdf pdf]

* [[Doina Precup]] ('''2000'''). ''Temporal Abstraction in Reinforcement Learning''. Ph.D. dissertation, Department of Computer Science, [https://en.wikipedia.org/wiki/University_of_Massachusetts_Amherst University of Massachusetts], [https://en.wikipedia.org/wiki/Amherst,_Massachusetts Amherst]

* [[Robert Levinson]], [[Ryan Weber]] ('''2001'''). ''Chess Neighborhoods, Function Combination and Reinforcement Learning''. In Computers and Games (eds. [[Tony Marsland]] and I. Frank). [https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science Lecture Notes in Computer Science], Springer, [http://users.soe.ucsc.edu/~levinson/Papers/CNFCRL.pdf pdf]

* [https://dblp.uni-trier.de/pers/hd/k/Kerr:Amy_J= Amy J. Kerr], [[Todd W. Neller]], [https://dblp.uni-trier.de/pers/hd/p/Pilla:Christopher_J=_La Christopher J. La Pilla], [https://dblp.uni-trier.de/pers/hd/s/Schompert:Michael_D= Michael D. Schompert] ('''2002'''). ''[https://www.semanticscholar.org/paper/Java-Resources-for-Teaching-Reinforcement-Learning-Kerr-Neller/3d84018eb8b8668c13d1d4f6efca4442af2915b4 Java Resources for Teaching Reinforcement Learning]''. [https://dblp.uni-trier.de/db/conf/pdpta/pdpta2003-3.html PDPTA 2003]

* [[Henk Mannen]] ('''2003'''). ''Learning to play chess using reinforcement learning with database games''. Master's thesis, [http://students.uu.nl/en/hum/cognitive-artificial-intelligence Cognitive Artificial Intelligence], [https://en.wikipedia.org/wiki/Utrecht_University Utrecht University]

* [[Joelle Pineau]], [[Geoffrey Gordon]], [[Sebastian Thrun]] ('''2003'''). ''Point-based value iteration: An anytime algorithm for POMDPs''. [[Conferences#IJCAI2003|IJCAI]], [http://www.fore.robot.cc/papers/Pineau03a.pdf pdf]

* [[Yngvi Björnsson]], Vignir Hafsteinsson, Ársæll Jóhannsson, Einar Jónsson ('''2004'''). ''Efficient Use of Reinforcement Learning in a Computer Game''. In Computer Games: Artificial Intelligence, Design and Education (CGAIDE'04), pp. 379–383, [http://www.ru.is/faculty/yngvi/pdf/BjornssonHJJ04.pdf pdf]

* [http://imranontech.com/ Imran Ghory] ('''2004'''). ''Reinforcement learning in board games''. CSTR-04-004, [http://www.cs.bris.ac.uk/ Department of Computer Science], [https://en.wikipedia.org/wiki/University_of_Bristol University of Bristol], [http://www.cs.bris.ac.uk/Publications/Papers/2000100.pdf pdf] <ref>[http://www.cs.bris.ac.uk/Publications/pub_master.jsp?type=117 University of Bristol - Department of Computer Science - Technical Reports]</ref>

==2005 ...==

* [[Cécile Germain-Renaud]], [[Julien Pérez]], [[Balázs Kégl]], [[Charles Loomis]] ('''2008'''). ''Grid Differentiated Services: a Reinforcement Learning Approach''. In 8th [[IEEE]] Symposium on Cluster Computing and the Grid, Lyon, [http://hal.inria.fr/docs/00/28/78/26/PDF/RLccg08.pdf pdf]

* [[Balázs Csanád Csáji]], [https://dblp.dagstuhl.de/pers/hd/m/Monostori:L=aacute=szl=oacute= László Monostori] ('''2008'''). ''Value function based reinforcement learning in changing Markovian environments''. [https://en.wikipedia.org/wiki/Journal_of_Machine_Learning_Research Journal of Machine Learning Research], Vol. 9, [http://www.jmlr.org/papers/volume9/csaji08a/csaji08a.pdf pdf]

* [[David Silver]] ('''2009'''). ''Reinforcement Learning and Simulation-Based Search''. Ph.D. thesis, [[University of Alberta]], [http://webdocs.cs.ualberta.ca/~silver/David_Silver/Publications_files/thesis.pdf pdf]

==2010 ...==

* [[Joel Veness]], [[Kee Siong Ng]], [[Marcus Hutter]], [[David Silver]] ('''2010'''). ''Reinforcement Learning via AIXI Approximation''. Association for the Advancement of Artificial Intelligence (AAAI), [http://jveness.info/publications/veness_rl_via_aixi_approx.pdf pdf]

'''2012'''

: [[István Szita]] ('''2012'''). ''[http://link.springer.com/chapter/10.1007%2F978-3-642-27645-3_17 Reinforcement Learning in Games]''. Chapter 17

* [[Thomas J. Walsh]], [[István Szita]], [[Carlos Diuk]], [[Michael L. Littman]] ('''2012'''). ''Exploring compact reinforcement-learning representations with linear regression''. [https://arxiv.org/abs/1205.2606 arXiv:1205.2606]

* [[Arthur Guez]], [[David Silver]], [[Peter Dayan]] ('''2012'''). ''[https://papers.nips.cc/paper/4767-efficient-bayes-adaptive-reinforcement-learning-using-sample-based-search Efficient Bayes-Adaptive Reinforcement Learning using Sample-Based Search]''. [https://papers.nips.cc/book/advances-in-neural-information-processing-systems-25-2012 NIPS 2012]

'''2013'''

* [[Arthur Guez]], [[David Silver]], [[Peter Dayan]] ('''2013'''). ''Scalable and Efficient Bayes-Adaptive Reinforcement Learning Based on Monte-Carlo Tree Search''. [https://en.wikipedia.org/wiki/Journal_of_Artificial_Intelligence_Research Journal of Artificial Intelligence Research], Vol. 48, [https://www.jair.org/media/4117/live-4117-7507-jair.pdf pdf]

'''2018'''

* [[Taichi Nakayashiki]], [[Tomoyuki Kaneko]] ('''2018'''). ''Learning of Evaluation Functions via Self-Play Enhanced by Checkmate Search''. [[TAAI 2018]]

* [[Hung Guei]], [[Ting-Han Wei]], [[I-Chen Wu]] ('''2018'''). ''Using 2048-like games as a pedagogical tool for reinforcement learning''. [[CG 2018]], [[ICGA Journal#40_3|ICGA Journal, Vol. 40, No. 3]]

'''2019'''

* [https://scholar.google.co.uk/citations?user=OAkRr-YAAAAJ&hl=en Sanjeevan Ahilan], [[Peter Dayan]] ('''2019'''). ''Feudal Multi-Agent Hierarchies for Cooperative Reinforcement Learning''. [https://arxiv.org/abs/1901.08492 arXiv:1901.08492]

* [[Marc Lanctot]], [[Edward Lockhart]], [[Jean-Baptiste Lespiau]], [[Vinicius Zambaldi]], [[Satyaki Upadhyay]], [[Julien Pérolat]], [[Sriram Srinivasan]], [[Finbarr Timbers]], [[Karl Tuyls]], [[Shayegan Omidshafiei]], [[Daniel Hennes]], [[Dustin Morrill]], [[Paul Muller]], [[Timo Ewalds]], [[Ryan Faulkner]], [[János Kramár]], [[Bart De Vylder]], [[Brennan Saeta]], [[James Bradbury]], [[David Ding]], [[Sebastian Borgeaud]], [[Matthew Lai]], [[Julian Schrittwieser]], [[Thomas Anthony]], [[Edward Hughes]], [[Ivo Danihelka]], [[Jonah Ryan-Davis]] ('''2019'''). ''OpenSpiel: A Framework for Reinforcement Learning in Games''. [https://arxiv.org/abs/1908.09453 arXiv:1908.09453] <ref>[https://github.com/deepmind/open_spiel/blob/master/docs/contributing.md open_spiel/contributing.md at master · deepmind/open_spiel · GitHub]</ref>

* [[Julian Schrittwieser]], [[Ioannis Antonoglou]], [[Thomas Hubert]], [[Karen Simonyan]], [[Laurent Sifre]], [[Simon Schmitt]], [[Arthur Guez]], [[Edward Lockhart]], [[Demis Hassabis]], [[Thore Graepel]], [[Timothy Lillicrap]], [[David Silver]] ('''2019'''). ''Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model''. [https://arxiv.org/abs/1911.08265 arXiv:1911.08265] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=72381 New DeepMind paper] by GregNeto, [[CCC]], November 21, 2019</ref>

* [[Mathematician#SrbhBose|Sourabh Bose]] ('''2019'''). ''[https://rc.library.uta.edu/uta-ir/handle/10106/28094 Learning Representations Using Reinforcement Learning]''. Ph.D. thesis, [https://en.wikipedia.org/wiki/University_of_Texas_at_Arlington University of Texas at Arlington], advisor [[Mathematician#MHuber|Manfred Huber]] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=72810&start=6 Re: Board adaptive / tuning evaluation function - no NN/AI] by Tony P., [[CCC]], January 15, 2020</ref>

=Postings=

* [http://www.talkchess.com/forum/viewtopic.php?t=65909 Google's AlphaGo team has been working on chess] by [[Peter Kappler]], [[CCC]], December 06, 2017 » [[AlphaZero]]

* [http://www.talkchess.com/forum/viewtopic.php?t=65990 Understanding the power of reinforcement learning] by [[Michael Sherwin]], [[CCC]], December 12, 2017

* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=72810 Board adaptive / tuning evaluation function - no NN/AI] by Moritz Gedig, [[CCC]], January 14, 2020

=External Links=

* Reinforcement Learning
* MDP
* Q-Learning

==Courses==

# Lecture 1: Introduction to Reinforcement Learning
# Lecture 2: Markov Decision Process
# Lecture 3: Planning by Dynamic Programming
# Lecture 4: Model-Free Prediction
# Lecture 5: Model-Free Control
# Lecture 6: Value Function Approximation
# Lecture 7: Policy Gradient Methods
# Lecture 8: Integrating Learning and Planning
# Lecture 9: Exploration and Exploitation
# Lecture 10: Classic Games

=References=

# Example of a simple Markov decision process with three states (green circles) and two actions (orange circles), with two rewards (orange arrows); image by waldoalvarez, CC BY-SA 4.0, Wikimedia Commons
# Q-learning from Wikipedia
# Chris Watkins, Peter Dayan (1992). Q-learning. Machine Learning, Vol. 8, No. 2
# Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis (2015). Human-level control through deep reinforcement learning. Nature, Vol. 518
# Q-learning from Wikipedia
# Gerald Tesauro (1995). Temporal Difference Learning and TD-Gammon. Communications of the ACM, Vol. 38, No. 3
# University of Bristol - Department of Computer Science - Technical Reports
# Ms. Pac-Man from Wikipedia
# Demystifying Deep Reinforcement Learning by Tambet Matiisen, Nervana, December 22, 2015
# Patent US20150100530 - Methods and apparatus for reinforcement learning - Google Patents
# DeepChess: Another deep-learning based chess program by Matthew Lai, CCC, October 17, 2016
# ICANN 2016 | Recipients of the best paper awards
# AlphaGo Zero: Learning from scratch by Demis Hassabis and David Silver, DeepMind, October 18, 2017
# AlphaZero: Shedding new light on the grand games of chess, shogi and Go by David Silver, Thomas Hubert, Julian Schrittwieser and Demis Hassabis, DeepMind, December 03, 2018
# open_spiel/contributing.md at master · deepmind/open_spiel · GitHub
# New DeepMind paper by GregNeto, CCC, November 21, 2019
# Re: Board adaptive / tuning evaluation function - no NN/AI by Tony P., CCC, January 15, 2020
