Arthur Guez

From Chessprogramming wiki

Revision as of 00:20, 7 December 2018

Arthur Guez [1]

Arthur Guez,
a Canadian computer scientist and neuroscientist, currently a researcher at Google DeepMind with expertise in machine learning, in particular deep learning, and involved in the AlphaGo and AlphaZero projects. He holds an M.Sc. in machine learning from McGill University (2010) and a Ph.D. (2015) from the Gatsby Computational Neuroscience Unit at University College London with the thesis Sample-based Search Methods for Bayes-Adaptive Planning, supervised by Peter Dayan and David Silver.

Ph.D. Thesis

In his Ph.D. thesis, Arthur Guez elaborates on search and planning under uncertainty about the environment, which induces the exploration versus exploitation trade-off: the agent optimizes its return by maintaining a posterior distribution over possible environments and reasoning over all possible future paths. This optimization is equivalent to solving a Markov decision process (MDP) whose hyperstate comprises the agent's beliefs about the environment as well as its current state in that environment; the corresponding process is called a Bayes-Adaptive MDP (BAMDP), which the thesis solves approximately with a tailored Monte-Carlo tree search. In historical notes on Bayesian adaptive control, Guez mentions Abraham Wald's Sequential Probability Ratio Test (SPRT) [2], and that Alan Turing, assisted by Jack Good, used a similar sequential testing technique to help decipher Enigma codes at Bletchley Park [3] [4].

Selected Publications

[5]

2010 ...

2015 ...

David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2017). Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. arXiv:1712.01815 » AlphaZero

Arthur Guez, Théophane Weber, Ioannis Antonoglou, Karen Simonyan, Oriol Vinyals, Daan Wierstra, Rémi Munos, David Silver (2018). Learning to Search with MCTSnets. arXiv:1802.04697

David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis (2018). A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science, Vol. 362, No. 6419 (see also: AlphaZero: Shedding new light on the grand games of chess, shogi and Go by David Silver, Thomas Hubert, Julian Schrittwieser and Demis Hassabis, DeepMind, December 03, 2018)
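The Bayes-adaptive planning idea summarized in the Ph.D. Thesis section above can be made concrete with a minimal sketch. This is not code from the thesis: it solves a toy two-armed Bernoulli bandit exactly, where the hyperstate is the agent's Beta-posterior counts over each arm's unknown success probability, and the recursion maximizes expected return over all future belief trajectories. All names (`bamdp_value`, the `((a, b), ...)` hyperstate encoding, the uniform `Beta(1, 1)` priors) are illustrative assumptions.

```python
from functools import lru_cache

# Toy Bayes-Adaptive MDP (illustrative, not from the thesis): a two-armed
# Bernoulli bandit. The hyperstate is the posterior over each arm's unknown
# success probability, summarized by Beta(a, b) counts. Solving the BAMDP
# means maximizing expected total reward over all future belief trajectories
# up to a fixed horizon.

@lru_cache(maxsize=None)
def bamdp_value(hyperstate, horizon):
    """Optimal Bayes-adaptive value of a belief hyperstate.

    hyperstate: ((a1, b1), (a2, b2)) -- Beta posterior counts per arm.
    horizon:    number of pulls remaining.
    """
    if horizon == 0:
        return 0.0
    best = 0.0
    for i, (a, b) in enumerate(hyperstate):
        p = a / (a + b)  # posterior mean success probability of arm i
        # Pulling arm i yields reward 1 with probability p and also updates
        # the belief; both outcomes continue the recursion one step deeper.
        succ = list(hyperstate); succ[i] = (a + 1, b)  # observed a success
        fail = list(hyperstate); fail[i] = (a, b + 1)  # observed a failure
        q = p * (1.0 + bamdp_value(tuple(succ), horizon - 1)) \
            + (1.0 - p) * bamdp_value(tuple(fail), horizon - 1)
        best = max(best, q)
    return best

# Uniform Beta(1, 1) priors on both arms, horizon 3. Exploration is valued
# implicitly: pulling an arm also sharpens its posterior, so v exceeds the
# myopic value of 0.5 per step.
v = bamdp_value(((1, 1), (1, 1)), 3)
```

The hyperstate space grows exponentially with the horizon, which is why exact enumeration like this only works for toy problems; the sample-based methods in the thesis instead apply Monte-Carlo tree search over this belief-augmented state space.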

External Links

References
