Michael Gherrity



Michael (Mike) Gherrity,
an American computer scientist and AI researcher from the University of California, San Diego. He defended his Ph.D. thesis A Game Learning Machine in 1993, elaborating on SAL (Search and Learn) [1], his General Game Playing program. With only a move generator, and the rule that the game is lost if its own king is captured, as domain-specific knowledge, SAL was the first chess program to use Temporal Difference Learning [2]. In a match of 4200 games against GNU Chess (one second per move), it initially played random moves within its two-ply search plus Consistency Search, a generalized Quiescence Search [3], but learned to play reasonable, though still weak, chess. It achieved eight draws, apparently due to a repetition detection bug in GNU Chess [4].
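
For readers unfamiliar with the technique, the following is a minimal, hypothetical TD(0) sketch in Python of how an evaluation function can be trained from the successive positions of a played game. It is not Gherrity's SAL implementation (which learned inside its two-ply search plus consistency search and used its own feature representation); all names here (extract_features, evaluate, td0_update, ALPHA) are illustrative assumptions.

import random

ALPHA = 0.01          # learning rate (hypothetical value)
NUM_FEATURES = 64     # length of the feature vector (hypothetical)

weights = [0.0] * NUM_FEATURES

def extract_features(position):
    # Placeholder: map a position (any hashable description) to a
    # fixed-length feature vector. SAL derived its features differently.
    random.seed(hash(position))
    return [random.uniform(-1.0, 1.0) for _ in range(NUM_FEATURES)]

def evaluate(position):
    # Linear evaluation: dot product of weights and features.
    return sum(w * x for w, x in zip(weights, extract_features(position)))

def td0_update(position, next_position, reward=0.0):
    # TD(0): nudge evaluate(position) toward reward + evaluate(next_position).
    features = extract_features(position)
    error = reward + evaluate(next_position) - evaluate(position)
    for i, x in enumerate(features):
        weights[i] += ALPHA * error * x

# Example: after a game, walk the recorded positions and update each
# predecessor toward its successor; the final transition carries the game
# result (+1 win, 0 draw, -1 loss) as the reward. A real learner would
# treat the terminal position as having a fixed value instead of bootstrapping.
game = ["start", "after_1.e4", "after_1...e5", "final"]
for prev, nxt in zip(game, game[1:]):
    td0_update(prev, nxt, reward=1.0 if nxt == "final" else 0.0)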

Selected Publications [5]

Forum Posts

External Links

References

  1. SAL from Machine Learning in Games by Jay Scott
  2. Marco Block, Maro Bader, Ernesto Tapia, Marte Ramírez, Ketill Gunnarsson, Erik Cuevas, Daniel Zaldivar, Raúl Rojas (2008). Using Reinforcement Learning in Chess Engines. Concibe Science 2008, Research in Computing Science: Special Issue in Electronics and Biomedical Engineering, Computer Science and Informatics, Vol. 35, pdf, 1.1 Related Work
  3. Don Beal (1989). Experiments with the Null Move. Advances in Computer Chess 5, a revised version is published (1990) under the title A Generalized Quiescence Search Algorithm. Artificial Intelligence, Vol. 43, No. 1
  4. Michael Gherrity (1993). A Game Learning Machine. Ph.D. thesis, University of California, San Diego, advisor Paul Kube, pdf, pdf
  5. dblp: Michael Gherrity
  6. Barney Pell (1993). Strategy Generation and Evaluation for Meta-Game Playing. Ph.D. thesis, Trinity College, Cambridge, pdf

Up one level