Joel Veness

Joel Veness is an Australian games programmer, mathematician and computer scientist with a Ph.D. from the University of New South Wales (UNSW). He spent two years at the University of Alberta as a postdoc under Michael Bowling, and now works in the UK as a research scientist at Google DeepMind.

Joel is the author of the chess engine Bodo, written in C and later C++. His chess program Meep, based on Bodo, was one of the first master-level programs whose evaluation function was learned entirely from self-play, by bootstrapping from deep searches.
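The bootstrapping idea behind Meep (RootStrap, from the Bootstrapping from Game Tree Search paper) can be sketched roughly as follows: the weights of a linear evaluation function are repeatedly nudged toward the value returned by a deep search from the same position, so that the static evaluation learns to predict the search result. A minimal sketch, assuming a linear evaluation over a feature vector; all names and values are illustrative, not from Meep's actual source.

```python
# Sketch of RootStrap-style bootstrapping (after Veness et al., 2009):
# a linear evaluation's weights are moved toward the value returned by
# a deep search from the same position. Everything here is a toy
# illustration, not Meep's real code.

def evaluate(weights, features):
    """Linear evaluation: dot product of weights and feature vector."""
    return sum(w * f for w, f in zip(weights, features))

def rootstrap_update(weights, features, search_value, lr=0.001):
    """Gradient step moving the static evaluation toward the search value."""
    error = search_value - evaluate(weights, features)
    return [w + lr * error * f for w, f in zip(weights, features)]

# Toy example: a single position described by three features.
weights = [0.0, 0.0, 0.0]
features = [1.0, 2.0, -1.0]
search_value = 0.5  # stand-in for a deep alpha-beta search result

for _ in range(1000):
    weights = rootstrap_update(weights, features, search_value)

print(evaluate(weights, features))  # converges toward search_value
```

In the paper's TreeStrap variant the update uses the bounds at every interior node of the search tree rather than only the root value, which is what made learning from self-play fast enough to reach master strength.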

=Selected Publications=

2006 ...

 * Joel Veness (2006). Expectimax Enhancements for Stochastic Game Players. BSc-Thesis, pdf
 * Joel Veness, Alan Blair (2007). Effective Use of Transposition Tables in Stochastic Game Tree Search. IEEE Symposium on Computational Intelligence and Games, pdf
 * Joel Veness, David Silver, William Uther, Alan Blair (2009). Bootstrapping from Game Tree Search. NIPS 2009, pdf, video presentation
 * Joel Veness, Kee Siong Ng, Marcus Hutter, David Silver (2009). A Monte Carlo AIXI Approximation, pdf

2010 ...

 * Joel Veness, Kee Siong Ng, Marcus Hutter, David Silver (2010). Reinforcement Learning via AIXI Approximation. Association for the Advancement of Artificial Intelligence (AAAI), pdf
 * Joel Veness (2011). Approximate Universal Artificial Intelligence and Self-Play Learning for Games. Ph.D. thesis, University of New South Wales, supervisors: Kee Siong Ng, Marcus Hutter, Alan Blair, William Uther, John Lloyd; pdf
 * Joel Veness, Marc Lanctot, Michael Bowling (2011). Variance Reduction in Monte-Carlo Tree Search. NIPS, pdf
 * Marc Lanctot, Abdallah Saffidine, Joel Veness, Christopher Archibald (2012). Sparse Sampling for Adversarial Games. ECAI CGW 2012
 * Marc Lanctot, Abdallah Saffidine, Joel Veness, Christopher Archibald, Mark Winands (2013). Monte Carlo *-Minimax Search. IJCAI 2013

2015 ...

 * Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis (2015). Human-level control through deep reinforcement learning. Nature, Vol. 518
 * James Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, Raia Hadsell (2016). Overcoming catastrophic forgetting in neural networks. arXiv:1612.00796

=Forum Posts=
 * quiescent nodes, and history heuristic... by Joel Veness, CCC, January 30, 2003 » Quiescent Node, History Heuristic
 * Your program is a ... by Joel Veness, CCC, October 29, 2003
 * Re: CCT6: Rybka /Bodo ??? by Joel Veness, CCC, January 26, 2004 » CCT6, Rybka, Bodo
 * Bodo @ CCT6....day 1.... by Joel Veness, CCC, February 03, 2004
 * Re: BODO new OZ champion by Joel Veness, CCC, July 17, 2005 » NC3 2005

=External Links=
 * Homepage of Joel Veness
 * Joel Veness from Microsoft Academic Search
 * Veness, Joel from computer-go.info
 * Bootstrapping from Game Tree Search, video presentation by Joel Veness, from VideoLectures, December 2009
 * Enabling Continual Learning in Neural Networks by James Kirkpatrick, Joel Veness et al., DeepMind, March 13, 2017 » Deep Learning, Neural Networks
