'''[[Main Page|Home]] * [[People]] * David E. Moriarty'''

'''David Eric Moriarty''',<br/>
an American computer scientist and Ph.D. alumnus of the [https://en.wikipedia.org/wiki/University_of_Texas_at_Austin University of Texas at Austin] <ref>[http://nn.cs.utexas.edu/?moriarty NNRG People - David E. Moriarty]</ref>.
During the 90s, David E. Moriarty, along with his advisor [[Risto Miikkulainen]], worked on [[Reinforcement Learning|reinforcement learning]] through [https://en.wikipedia.org/wiki/Symbiosis Symbiotic], Adaptive [https://en.wikipedia.org/wiki/Neuroevolution NeuroEvolution], dubbed SANE, also the topic of his Ph.D. thesis <ref>[[David E. Moriarty]] ('''1997'''). ''[http://nn.cs.utexas.edu/?moriarty:phd97 Symbiotic Evolution of Neural Networks in Sequential Decision Tasks]''. Ph.D. thesis, [https://en.wikipedia.org/wiki/University_of_Texas_at_Austin University of Texas at Austin], advisor [[Risto Miikkulainen]]</ref>.

=SANE=
SANE evolves [[Neural Networks|neural networks]] with [[Genetic Programming#GeneticAlgorithm|genetic algorithms]] for [https://en.wikipedia.org/wiki/Sequential_decision_making sequential decision tasks],
also applied to the [[Games|games]] of [[Othello]] and [[Go]].
SANE maintains a population of hidden neurons of a "vanilla" three-layer feed-forward neural network, each neuron defined by its connections and weights to both the input and output layers,
performing the following basic steps in one generation <ref>[[David E. Moriarty]], [[Risto Miikkulainen]] ('''1996'''). ''[http://nn.cs.utexas.edu/?moriarty:mlj96 Efficient Reinforcement Learning through Symbiotic Evolution]''. [https://en.wikipedia.org/wiki/Machine_Learning_(journal) Machine Learning], Vol. 22</ref>:
<pre>
1. Clear all fitness values from each neuron
2. Select neurons randomly from the population
3. Create a neural network from the selected neurons
4. Evaluate the network in the given task
5. Add the network's score to each selected neuron's fitness variable
6. Repeat steps 2-5 a sufficient number of times
7. Get each neuron's average fitness score by dividing its total fitness values by the number of networks in which it was implemented
8. Perform crossover operations on the population based on the average fitness value of each neuron
</pre>
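The steps above can be sketched in Python. This is a minimal, illustrative sketch only: the neuron encoding, the `build_network` and `evaluate` callables, and all parameter names are assumptions, not the original SANE implementation, and the final crossover/mutation step is left out.

```python
import random

def sane_generation(population, build_network, evaluate,
                    neurons_per_net=8, trials=200):
    """One SANE generation: evaluate neurons symbiotically, then rank them."""
    # Step 1: clear fitness accumulators for every neuron
    fitness = {id(n): 0.0 for n in population}
    counts = {id(n): 0 for n in population}

    for _ in range(trials):
        # Steps 2-3: sample hidden neurons and assemble a network from them
        chosen = random.sample(population, neurons_per_net)
        net = build_network(chosen)
        # Step 4: evaluate the assembled network on the given task
        score = evaluate(net)
        # Step 5: credit the network's score to each participating neuron
        for n in chosen:
            fitness[id(n)] += score
            counts[id(n)] += 1

    # Step 7: average fitness over the networks each neuron took part in
    avg = {k: (fitness[k] / counts[k]) if counts[k] else 0.0
           for k in fitness}
    # Step 8 would perform crossover on the ranked population; here we
    # just return the neurons sorted by average fitness, best first.
    return sorted(population, key=lambda n: avg[id(n)], reverse=True)
```

Note that credit assignment is indirect: a neuron is never scored alone, only through the networks it helps form, which is what makes the evolution "symbiotic".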
For Go and Othello, the input layer sees the board configuration, while the output layer indicates the goodness of each possible move, with one output neuron associated with each space or point of the board.
However, the research was conducted a few years before the [[Monte-Carlo Tree Search|MCTS]] revolution in computer Go, not to mention the [[Deep Learning|deep learning]] breakthrough.
The [[Pruning|forward pruning]] decisions of the [[Alpha-Beta|alpha-beta]] search in Othello were controlled by the neural network <ref>[[David E. Moriarty]], [[Risto Miikkulainen]] ('''1994'''). ''[http://nn.cs.utexas.edu/?moriarty:aaai94 Evolving Neural Networks to focus Minimax Search]''. [[Conferences#AAAI-94|AAAI-94]]</ref>.
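The idea of focusing minimax search can be sketched as follows: the evolved network rates the legal moves, and the search expands only the highest-rated few, forward-pruning the rest up front. All names (`score_move`, `children`, `apply_move`, `static_eval`, `top_k`) and the negamax framing are illustrative assumptions, not the paper's actual interface.

```python
def focused_alphabeta(state, depth, alpha, beta, score_move, children,
                      apply_move, static_eval, top_k=3):
    """Negamax alpha-beta that searches only the network's top-k moves."""
    moves = children(state)
    if depth == 0 or not moves:
        return static_eval(state)
    # Forward pruning: keep only the moves the network rates highest
    ranked = sorted(moves, key=lambda m: score_move(state, m), reverse=True)
    for move in ranked[:top_k]:
        value = -focused_alphabeta(apply_move(state, move), depth - 1,
                                   -beta, -alpha, score_move, children,
                                   apply_move, static_eval, top_k)
        if value >= beta:
            return beta          # fail-hard beta cutoff
        alpha = max(alpha, value)
    return alpha
```

The trade-off is the usual one for forward pruning: a smaller tree at the risk of discarding the best move whenever the network misranks it.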

=Selected Publications=
<ref>[https://dblp.uni-trier.de/pers/hd/m/Moriarty:David_E= dblp: David E. Moriarty]</ref>
* [[David E. Moriarty]], [[Risto Miikkulainen]] ('''1994'''). ''[https://ieeexplore.ieee.org/document/349900 Improving Game-Tree Search with Evolutionary Neural Networks]''. [https://dblp.uni-trier.de/db/conf/icec/icec1994-1.html ICEC 1994]
* [[David E. Moriarty]], [[Risto Miikkulainen]] ('''1994'''). ''[http://nn.cs.utexas.edu/?moriarty:aaai94 Evolving Neural Networks to focus Minimax Search]''. [[Conferences#AAAI-94|AAAI-94]] » [[Othello]]
* [[David E. Moriarty]], [[Risto Miikkulainen]] ('''1995'''). ''[http://nn.cs.utexas.edu/?moriarty:connsci95 Discovering Complex Othello Strategies Through Evolutionary Neural Networks]''. [https://www.scimagojr.com/journalsearch.php?q=24173&tip=sid Connection Science], Vol. 7
* [[David E. Moriarty]], [[Risto Miikkulainen]] ('''1996'''). ''[http://nn.cs.utexas.edu/?moriarty:mlj96 Efficient Reinforcement Learning through Symbiotic Evolution]''. [https://en.wikipedia.org/wiki/Machine_Learning_(journal) Machine Learning], Vol. 22 <ref>[https://en.wikipedia.org/wiki/Inverted_pendulum Inverted pendulum from Wikipedia]</ref>
* [[David E. Moriarty]] ('''1997'''). ''[http://nn.cs.utexas.edu/?moriarty:phd97 Symbiotic Evolution of Neural Networks in Sequential Decision Tasks]''. Ph.D. thesis, [https://en.wikipedia.org/wiki/University_of_Texas_at_Austin University of Texas at Austin], advisor [[Risto Miikkulainen]]
* [[Norman Richards]], [[David E. Moriarty]], [[Risto Miikkulainen]] ('''1998'''). ''[http://nn.cs.utexas.edu/?richards:apin98 Evolving Neural Networks to Play Go]''. [https://www.springer.com/journal/10489 Applied Intelligence], Vol. 8, No. 1

=External Links=
* [http://nn.cs.utexas.edu/?moriarty NNRG People - David E. Moriarty]
* [http://www.cs.utexas.edu/users/ai-lab/?moriarty AI-Lab People - David E. Moriarty]
* [https://www.mathgenealogy.org/id.php?id=128214 David Moriarty - The Mathematics Genealogy Project]

=References=
<references />
'''[[People|Up one Level]]'''
[[Category:Researcher|Moriarty]]
