'''[[Main Page|Home]] * [[People]] * Eric Wefald'''

'''Eric Huang Wefald''' (died August 31, 1989) <ref>[https://paw.princeton.edu/memorial/eric-huang-wefald-85 Princeton Alumni Weekly: Eric Huang Wefald], February 6, 1991</ref><br/>
was an American computer scientist and researcher at the [[University of California, Berkeley]], [https://en.wikipedia.org/wiki/California California], who co-authored multiple papers with [[Stuart Russell]] on [[Search|search]] control, [[Learning|machine learning]], and [https://en.wiktionary.org/wiki/metareasoning metareasoning], that is, the process of reasoning about reasoning itself.
Eric Wefald obtained a [https://en.wikipedia.org/wiki/Master_of_Philosophy Master of Philosophy] from [https://en.wikipedia.org/wiki/Princeton_University Princeton] in 1985, and was completing a doctorate in [[Artificial Intelligence|artificial intelligence]] at UC Berkeley when he died in a car accident near [https://en.wikipedia.org/wiki/Bordeaux Bordeaux], [https://en.wikipedia.org/wiki/France France], on August 31, 1989.

=Decision-Theoretic Search Control=
Abstract from [[Stuart Russell]] and [[Eric Wefald]] ('''1988'''). ''Decision-Theoretic Search Control: General Theory and an Application to Game-Playing.'' <ref>[[Stuart Russell]], [[Eric Wefald]] ('''1988'''). ''Decision-Theoretic Search Control: General Theory and an Application to Game-Playing.'' CS Technical Report 88/435, [[University of California, Berkeley]]</ref>:
In this paper we outline a general approach to the study of problem-solving, in which search steps are considered decisions in the same sense as actions in the world. Unlike other metrics in the literature, the value of a search step is defined as a real utility rather than as a quasi-utility, and can therefore be computed directly from a model of the base-level problem-solver. We develop a formula for the value of a search step in a game-playing context using the single-step assumption, namely that a computation step can be evaluated as if it were the last to be taken. We prove some meta-level theorems that enable the development of a low-overhead algorithm, MGSS, that chooses search steps in order of highest estimated utility. Although we show that the single-step assumption is untenable in general, a program implemented for the game of [[Othello]] appears to rival an [[Alpha-Beta|alpha-beta]] search with equal node allocations or time allocations. [[Pruning]] and search termination subsume or improve on many other algorithms. Single-agent search, as in the A* algorithm, yields a simpler analysis, and we are currently investigating applications of the algorithm developed for this case.
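
The central quantity in this framework is the net expected value of a single search step. Under the single-step assumption described in the abstract, a candidate leaf expansion is worth taking only if it is expected to change which move is played, and its value is the expected improvement in the quality of that choice minus the cost of the time it consumes. The function below is a minimal sketch of that computation; the parameter names, the discrete outcome model, and the explicit time-cost term are assumptions made for illustration, not details taken from the paper.
<pre>
# Sketch of the net value of a single search step under the single-step
# assumption: the step is scored as if it were the last computation before
# the program commits to a move. Names and the discrete outcome model are
# illustrative assumptions, not the paper's implementation.

def single_step_value(alpha_value, beta_value, outcome_distribution,
                      under_current_best, time_cost):
    """Expected gain in decision quality from one leaf expansion, minus time cost.

    alpha_value          - current backed-up value of the best root move alpha
    beta_value           - value of the second-best root move beta
    outcome_distribution - list of (probability, revised_value) pairs for the
                           backed-up value of the expanded move after the step
    under_current_best   - True if the leaf lies below alpha, False otherwise
    time_cost            - utility cost of the time the expansion consumes
    """
    expected_gain = 0.0
    for probability, revised_value in outcome_distribution:
        if under_current_best:
            # Expanding below alpha only matters if its revised value drops
            # under beta: the program then switches to beta instead of playing
            # a move it now believes is worth only revised_value.
            if revised_value < beta_value:
                expected_gain += probability * (beta_value - revised_value)
        else:
            # Expanding below another move only matters if its revised value
            # rises above alpha, in which case the program switches to it.
            if revised_value > alpha_value:
                expected_gain += probability * (revised_value - alpha_value)
    return expected_gain - time_cost


# Example: a leaf under the second-best move that might turn out to be strong.
# 0.3 * (0.45 - 0.20) - 0.01 = 0.065 > 0, so the expansion is worth taking.
print(single_step_value(alpha_value=0.20, beta_value=0.10,
                        outcome_distribution=[(0.7, 0.05), (0.3, 0.45)],
                        under_current_best=False, time_cost=0.01))
</pre>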

=Optimal Game-Tree Search=
Abstract from [[Stuart Russell]] and [[Eric Wefald]] ('''1989'''). ''On optimal game-tree search using rational metareasoning'' <ref>[[Stuart Russell]], [[Eric Wefald]] ('''1989'''). ''On optimal game-tree search using rational metareasoning.'' [[Conferences#IJCAI1989|IJCAI 1989]]</ref>:
In this paper we outline a general approach to the study of problem-solving, in which search steps are considered decisions in the same sense as actions in the world. Unlike other metrics in the literature, the value of a search step is defined as a real utility rather than as a quasi-utility, and can therefore be computed directly from a model of the base-level problem-solver. We develop a formula for the expected value of a search step in a game-playing context using the single-step assumption, namely that a computation step can be evaluated as if it were the last to be taken. We prove some meta-level theorems that enable the development of a low-overhead algorithm, MGSS*, that chooses search steps in order of highest estimated utility. Although we show that the single-step assumption is untenable in general, a program implemented for the game of [[Othello]] soundly beats an [[Alpha-Beta|alpha-beta]] search while expanding significantly fewer nodes, even though both programs use the same evaluation function.
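
Read operationally, MGSS* is a best-first policy over computations rather than over positions: at each point it expands the frontier leaf whose estimated net value of computation is highest, and it stops as soon as no remaining expansion is expected to pay for its own time cost. The loop below is a minimal sketch of that control policy under the same assumptions as the previous snippet; the leaf representation, the estimate_step_value and expand callables, and the move bookkeeping are hypothetical stand-ins, not the published algorithm.
<pre>
# Sketch of an MGSS*-style control loop: best-first search over computations,
# guided by the estimated net value of each candidate expansion.
# The callables and the move bookkeeping are hypothetical.

def metareasoning_search(leaves, estimate_step_value, expand, current_best_move):
    """Expand frontier leaves in order of estimated net value of computation.

    leaves              - mutable list of frontier leaves of the game tree
    estimate_step_value - leaf -> estimated net value of expanding that leaf
                          (e.g. built on single_step_value above)
    expand              - leaf -> (new_frontier_leaves, new_best_move); assumed
                          to update the backed-up values along the leaf's path
    current_best_move   - move currently preferred at the root
    """
    while leaves:
        best_leaf = max(leaves, key=estimate_step_value)
        if estimate_step_value(best_leaf) <= 0:
            # No expansion is expected to repay its time cost: act now.
            break
        leaves.remove(best_leaf)
        new_leaves, current_best_move = expand(best_leaf)
        leaves.extend(new_leaves)
    return current_best_move
</pre>
Unlike a fixed-depth [[Alpha-Beta|alpha-beta]] regime, such a loop is self-terminating: effort goes only where it is expected to change the move choice, which is consistent with the abstract's claim of comparable playing strength at significantly lower node counts.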

=Selected Publications=
<ref>[https://dblp.uni-trier.de/pers/hd/w/Wefald:Eric.htm DBLP: Eric Wefald]</ref> <ref>[https://www.researchgate.net/scientific-contributions/69792834_Eric_Wefald Eric Wefald's research works | University of California, Berkeley, CA (UCB) and other places]</ref>
* [[Stuart Russell]], [[Eric Wefald]] ('''1988'''). ''Decision-Theoretic Search Control: General Theory and an Application to Game-Playing.'' CS Technical Report 88/435, [[University of California, Berkeley]]
* [[Stuart Russell]], [[Eric Wefald]] ('''1988'''). ''Multi-Level Decision-Theoretic Search.'' [[AAAI]] Symposium on Computer Game-Playing, Stanford
* [[Stuart Russell]], [[Eric Wefald]] ('''1989'''). ''On optimal game-tree search using rational metareasoning.'' [[Conferences#IJCAI1989|IJCAI 1989]]
* [[Eric Wefald]], [[Stuart Russell]] ('''1989'''). ''[https://www.sciencedirect.com/science/article/pii/B978155860036250103X Adaptive Learning of Decision-Theoretic Search Control Knowledge]''. 6th International Workshop on Machine Learning
* [[Stuart Russell]], [[Eric Wefald]] ('''1991'''). ''Principles of Metareasoning.'' [https://en.wikipedia.org/wiki/Artificial_Intelligence_(journal) Artificial Intelligence], Vol. 49, Nos. 1-3
* [[Stuart Russell]], [[Eric Wefald]] ('''1991'''). ''Do the right thing: studies in limited rationality''. [https://en.wikipedia.org/wiki/MIT_Press MIT Press]

=References=
<references />
'''[[People|Up one level]]'''
