Ari Weinstein

Ari Weinstein, an American computer scientist at Google DeepMind. He holds a Ph.D. from Rutgers University, where he worked under advisor Michael L. Littman on the thesis Local Planning For Continuous Markov Decision Processes, covering algorithms that create plans to maximize a numeric reward over time. While general formulations of this problem in terms of reinforcement learning have traditionally been restricted to small discrete domains, Weinstein's thesis addresses both continuous and high-dimensional domains, with simulations of swimming, riding a bicycle, and walking as concrete examples.
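
Planners of this kind act from a generative model of the domain alone. Below is a minimal sketch of local, rollout-based planning in a continuous MDP, using plain random-shooting open-loop planning as a generic stand-in rather than the thesis' actual algorithms; the step() model, the reward, and all parameters are illustrative assumptions.

import random

def plan(state, step, horizon=20, n_sequences=200, a_low=-1.0, a_high=1.0):
    """Return the first action of the best sampled open-loop action sequence.

    step(state, action) -> (next_state, reward) is an assumed generative
    model; local planners of this kind need nothing else from the domain.
    """
    best_return, best_first = float("-inf"), None
    for _ in range(n_sequences):
        s, total, first = state, 0.0, None
        for t in range(horizon):
            a = random.uniform(a_low, a_high)   # continuous action space
            if t == 0:
                first = a                       # remember the sequence's first action
            s, r = step(s, a)
            total += r
        if total > best_return:
            best_return, best_first = total, first
    return best_first

# Toy usage with a hypothetical one-dimensional point: drifting toward 0
# is rewarded, and the planner simply replans from each new state.
step = lambda s, a: (s + 0.1 * a, -abs(s + 0.1 * a))
state = 5.0
for _ in range(3):
    state, _ = step(state, plan(state, step))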

FSSS-Minimax

In the paper Rollout-based Game-tree Search Outprunes Traditional Alpha-beta [1], written along with Sergiu Goschin and his advisor Michael L. Littman, Weinstein introduced the rollout-based FSSS (Forward Search Sparse Sampling) [2] algorithm applied to game-tree search, outpruning alpha-beta both empirically and formally. FSSS-Minimax visits only parts of the tree that alpha-beta visits, and in terms of related work is similar to the Score Bounded Monte-Carlo Tree Search introduced by Tristan Cazenave and Abdallah Saffidine [3]. From the abstract:

Recently, rollout-based planning and search methods have emerged as an alternative to traditional tree-search methods. The fundamental operation in rollout-based tree search is the generation of trajectories in the search tree from root to leaf. Game-playing programs based on Monte-Carlo rollout methods such as “UCT” have proven remarkably effective at using information from trajectories to make state-of-the-art decisions at the root. In this paper, we show that trajectories can be used to prune more aggressively than classical alpha-beta search. We modify a rollout-based method, FSSS, to allow for use in game-tree search and show it outprunes alpha-beta both empirically and formally.

While FSSS-Minimax is guaranteed never to expand more leaves than alpha-beta, its best-first approach comes at a cost in memory requirements as well as computation time.
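
One way to see the memory cost is that bound intervals must be stored for every visited node. The following is a minimal sketch of the idea, assuming a simple explicit tree; the Node class, the [VMIN, VMAX] initial bounds, and the child-selection rule are generic bound-based rollout search, illustrative rather than the paper's exact formulation. Each trajectory descends to a still-open leaf and tightens interval bounds on the way back up; search stops when the root's interval closes.

VMIN, VMAX = -1.0, 1.0  # assumed a-priori bounds on leaf values

class Node:
    def __init__(self, children=None, value=None, is_max=True):
        self.children = children or []   # empty list => leaf
        self.value = value               # leaf payoff
        self.is_max = is_max
        self.lo, self.hi = VMIN, VMAX    # bounds on the minimax value

def rollout(node):
    """Descend one root-to-leaf trajectory, then tighten bounds on the way up."""
    if not node.children:                # leaf: bounds collapse to its value
        node.lo = node.hi = node.value
        return
    if node.is_max:
        # Follow the child that could still raise this node's value the most.
        child = max(node.children, key=lambda c: c.hi)
    else:
        child = min(node.children, key=lambda c: c.lo)
    rollout(child)
    # Back up bounds from the children.
    if node.is_max:
        node.lo = max(c.lo for c in node.children)
        node.hi = max(c.hi for c in node.children)
    else:
        node.lo = min(c.lo for c in node.children)
        node.hi = min(c.hi for c in node.children)

def search(root):
    """Repeat trajectories until the root's interval closes."""
    trajectories = 0
    while root.lo < root.hi:
        rollout(root)
        trajectories += 1
    return root.lo, trajectories

# Tiny demo: a depth-2 tree with minimax value 0.5; the 0.9 leaf is
# never expanded (alpha-beta would cut it off as well).
leaf = lambda v: Node(value=v)
root = Node(is_max=True, children=[
    Node(is_max=False, children=[leaf(0.5), leaf(0.8)]),
    Node(is_max=False, children=[leaf(0.3), leaf(0.9)]),
])
print(search(root))   # -> (0.5, 3): three trajectories, three leaves expanded

Note how each trajectory chases the loosest remaining bound rather than walking the tree in fixed left-to-right order, and how every visited node must keep its [lo, hi] interval alive between trajectories, which is exactly the memory overhead alpha-beta avoids.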

Selected Publications

[4] [5]

External Links

References
