Timothy Lillicrap

Timothy Lillicrap [1]

Timothy P. (Tim) Lillicrap,
a Canadian neuroscientist and AI researcher, adjunct professor at University College London, and staff research scientist at Google DeepMind, where he is involved in the AlphaGo and AlphaZero projects mastering the games of Go, chess and Shogi. He holds a B.Sc. in cognitive science and artificial intelligence from the University of Toronto (2005) and a Ph.D. in systems neuroscience from Queen's University (2014), supervised by Stephen H. Scott [2] [3]. His research focuses on machine learning and statistics for optimal control and decision making, as well as on using these mathematical frameworks to understand how the brain learns. He has developed algorithms and approaches for exploiting deep neural networks in the context of reinforcement learning, and new recurrent memory architectures for one-shot learning [4].
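
Reference [6] below points to Q-learning. As a rough, minimal sketch of what combining a neural network with reinforcement learning can look like in practice (not a description of Lillicrap's own algorithms, such as those behind AlphaZero), the following Python snippet trains a tiny Q-function approximator with one-step Q-learning; the toy environment, network size and hyper-parameters are invented purely for illustration.

# Minimal sketch: one-step Q-learning with a small neural-network
# function approximator. Environment and hyper-parameters are invented
# for illustration only.
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS, HIDDEN = 4, 2, 16
GAMMA, LR, EPSILON = 0.99, 0.01, 0.1

# Two-layer network: one-hot state -> hidden (tanh) -> Q-value per action.
W1 = rng.normal(0.0, 0.1, (N_STATES, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))

def q_values(state):
    x = np.eye(N_STATES)[state]    # one-hot encoding of the state
    h = np.tanh(x @ W1)            # hidden activations
    return x, h, h @ W2            # input, hidden, Q(s, .)

def env_step(state, action):
    """Toy environment: reward 1 for action 0 in state 0, random transitions."""
    reward = 1.0 if (state == 0 and action == 0) else 0.0
    return int(rng.integers(N_STATES)), reward

state = int(rng.integers(N_STATES))
for _ in range(5000):
    x, h, q = q_values(state)
    # Epsilon-greedy exploration.
    if rng.random() < EPSILON:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(q))
    next_state, reward = env_step(state, action)
    # One-step temporal-difference target: r + gamma * max_a' Q(s', a').
    target = reward + GAMMA * np.max(q_values(next_state)[2])
    td_error = target - q[action]
    # Gradient of 0.5 * td_error^2 w.r.t. the outputs (target held fixed).
    grad_out = np.zeros(N_ACTIONS)
    grad_out[action] = -td_error
    grad_W2 = np.outer(h, grad_out)
    grad_W1 = np.outer(x, (W2 @ grad_out) * (1.0 - h ** 2))
    W2 -= LR * grad_W2
    W1 -= LR * grad_W1
    state = next_state

print("Learned Q-values for state 0:", q_values(0)[2])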

Selected Publications

[5]

2014

2015 ...

2016

2017

External Links

References

  1. Image captured from the Data efficient Deep Reinforcement Learning for Continuous Control - Video at 20:21
  2. Timothy Lillicrap (2014). Modelling Motor Cortex using Neural Network Control Laws. Ph.D. Systems Neuroscience Thesis, Centre for Neuroscience Studies, Queen's University, advisor: Stephen H. Scott
  3. Curriculum Vitae - Timothy P. Lillicrap (pdf)
  4. timothy lillicrap - research
  5. dblp: Timothy P. Lillicrap
  6. Q-learning from Wikipedia
  7. AlphaGo Zero: Learning from scratch by Demis Hassabis and David Silver, DeepMind, October 18, 2017
