Neural Networks
The [https://en.wikipedia.org/wiki/Perceptron perceptron] is an algorithm for [[Supervised Learning|supervised learning]] of [https://en.wikipedia.org/wiki/Binary_classification binary classifiers]. It was one of the first artificial neural networks, introduced in 1957 by [https://en.wikipedia.org/wiki/Frank_Rosenblatt Frank Rosenblatt] <ref>[https://en.wikipedia.org/wiki/Frank_Rosenblatt Frank Rosenblatt] ('''1957'''). ''The Perceptron - a Perceiving and Recognizing Automaton''. Report 85-460-1, [https://en.wikipedia.org/wiki/Calspan#History Cornell Aeronautical Laboratory]</ref>, and was initially implemented in custom hardware. In its basic form it consists of a single neuron with multiple inputs and associated weights.
[[Supervised Learning|Supervised learning]] is applied using a set D of labeled [https://en.wikipedia.org/wiki/Test_set training data], pairs of [https://en.wikipedia.org/wiki/Feature_vector feature vectors] (x) and desired outputs (d), usually starting with a cleared or randomly initialized weight vector w. For each sample, the output is computed by multiplying all inputs by their corresponding weights and passing the sum to the activation function f. The difference between desired and actual output is then immediately used to modify the weights of all features, scaled by a learning rate 0.0 < α ≤ 1.0:
<pre>
for (j = 0, Σerror = 0.0; j < nSamples; ++j) {
   y = f(w · x[j]);       /* weighted sum of inputs, passed to activation f */
   Δ = d[j] - y;          /* error: desired minus actual output */
   Σerror += |Δ|;         /* accumulate absolute error over the epoch */
   w += α * Δ * x[j];     /* immediate weight update for all features */
}
</pre>
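As a concrete illustration, here is a minimal, self-contained C sketch of this training loop, learning the logical AND function with a simple threshold activation. The data set, epoch limit, and α value are illustrative assumptions, not taken from the text above:
<pre>
#include <stdio.h>

#define NFEATURES 3   /* one constant bias input plus two inputs */
#define NSAMPLES  4

/* threshold activation: fires iff the weighted sum is positive */
static int f(double sum) { return sum > 0.0 ? 1 : 0; }

int main(void) {
   /* x[j][0] is a constant bias input; the rest encodes the AND truth table */
   double x[NSAMPLES][NFEATURES] = {
      {1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}
   };
   int d[NSAMPLES] = { 0, 0, 0, 1 };         /* desired outputs for AND */
   double w[NFEATURES] = { 0.0, 0.0, 0.0 };  /* cleared weight vector */
   double alpha = 0.1;                       /* learning rate, 0 < α <= 1 */
   int epoch, i, j, errors = 0;

   for (epoch = 0; epoch < 100; ++epoch) {
      errors = 0;
      for (j = 0; j < NSAMPLES; ++j) {
         double sum = 0.0;
         int delta;
         for (i = 0; i < NFEATURES; ++i)
            sum += w[i] * x[j][i];            /* weighted sum of inputs */
         delta = d[j] - f(sum);               /* desired minus actual */
         if (delta != 0) ++errors;
         for (i = 0; i < NFEATURES; ++i)
            w[i] += alpha * delta * x[j][i];  /* immediate weight update */
      }
      if (errors == 0) break;  /* AND is linearly separable, so this converges */
   }
   printf("converged after %d epochs: w = (%.2f, %.2f, %.2f)\n",
          epoch, w[0], w[1], w[2]);
   return 0;
}
</pre>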
===Alpha Zero===
In December 2017, the [[Google]] [[DeepMind]] team along with former [[Giraffe]] author [[Matthew Lai]] reported on their generalized [[AlphaZero]] algorithm, combining [[Deep Learning|Deep learning]] with [[Monte-Carlo Tree Search]]. AlphaZero can achieve, tabula rasa, superhuman performance in many challenging domains with some training effort. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved a superhuman level of play in the games of chess and [[Shogi]] as well as Go, and convincingly defeated a world-champion program in each case <ref>[[David Silver]], [[Thomas Hubert]], [[Julian Schrittwieser]], [[Ioannis Antonoglou]], [[Matthew Lai]], [[Arthur Guez]], [[Marc Lanctot]], [[Laurent Sifre]], [[Dharshan Kumaran]], [[Thore Graepel]], [[Timothy Lillicrap]], [[Karen Simonyan]], [[Demis Hassabis]] ('''2017'''). ''Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm''. [https://arxiv.org/abs/1712.01815 arXiv:1712.01815]</ref>.
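At the heart of the search is the PUCT selection rule described in the cited paper: each simulation descends the tree by picking the move maximizing Q(s,a) + U(s,a), where the exploration bonus U(s,a) = c<sub>puct</sub> · P(s,a) · √N<sub>parent</sub> / (1 + N(s,a)) is scaled by the policy network's prior P. A minimal C sketch of that selection step follows; the data layout and the c_puct value are illustrative assumptions, not the paper's implementation:
<pre>
#include <math.h>

/* per-edge statistics in an AlphaZero-style search tree */
typedef struct {
   double P;  /* prior probability from the policy network */
   double Q;  /* mean value of simulations through this edge */
   int    N;  /* visit count */
} Edge;

/* PUCT: maximize Q(s,a) + c_puct * P(s,a) * sqrt(N_parent) / (1 + N(s,a)) */
int select_child(const Edge *edges, int nEdges, int nParentVisits) {
   const double c_puct = 1.5;  /* exploration constant, illustrative value */
   double best = -1e30;
   int bestIdx = 0;
   for (int i = 0; i < nEdges; ++i) {
      double u = c_puct * edges[i].P * sqrt((double)nParentVisits)
               / (1.0 + edges[i].N);
      if (edges[i].Q + u > best) { best = edges[i].Q + u; bestIdx = i; }
   }
   return bestIdx;
}
</pre>
Unvisited edges have N = 0 and Q = 0, so the prior P initially dominates; as visit counts grow, the bonus shrinks and the empirical value Q takes over.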
=See also=
* [[Pattern Recognition]]
* [[Temporal Difference Learning]]
<span id="engines"></span>
=NN Chess Programs=
* [[:Category:NN]]
=Selected Publications=
* [[Pieter Spronck]] ('''1996'''). ''Elegance: Genetic Algorithms in Neural Reinforcement Control''. Master thesis, [[Delft University of Technology]], [http://ticc.uvt.nl/~pspronck/pubs/Elegance.pdf pdf]
* [[Raúl Rojas]] ('''1996'''). ''Neural Networks - A Systematic Introduction''. Springer, available as [http://www.inf.fu-berlin.de/inst/ag-ki/rojas_home/documents/1996/NeuralNetworks/neuron.pdf pdf ebook]
* [[Ida Sprinkhuizen-Kuyper]], [https://dblp.org/pers/hd/b/Boers:Egbert_J=_W= Egbert J. W. Boers] ('''1996'''). ''[https://ieeexplore.ieee.org/abstract/document/6796246 The Error Surface of the Simplest XOR Network Has Only Global Minima]''. [https://en.wikipedia.org/wiki/Neural_Computation_(journal) Neural Computation], Vol. 8, No. 6, [http://www.socsci.ru.nl/idak/publications/papers/NeuralComputation.pdf pdf]
'''1997'''
* [[Mathematician#SHochreiter|Sepp Hochreiter]], [[Jürgen Schmidhuber]] ('''1997'''). ''Long short-term memory''. [https://en.wikipedia.org/wiki/Neural_Computation_%28journal%29 Neural Computation], Vol. 9, No. 8, [http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf pdf] <ref>[https://en.wikipedia.org/wiki/Long_short_term_memory Long short term memory from Wikipedia]</ref>
'''1999'''
* [[Mathematician#GEHinton|Geoffrey E. Hinton]], [[Terrence J. Sejnowski]] (eds.) ('''1999'''). ''[https://mitpress.mit.edu/books/unsupervised-learning Unsupervised Learning: Foundations of Neural Computation]''. [https://en.wikipedia.org/wiki/MIT_Press MIT Press]
* [[Peter Dayan]] ('''1999'''). ''Recurrent Sampling Models for the Helmholtz Machine''. [https://en.wikipedia.org/wiki/Neural_Computation_(journal) Neural Computation], Vol. 11, No. 3, [http://www.gatsby.ucl.ac.uk/~dayan/papers/rechelm99.pdf pdf] <ref>[https://en.wikipedia.org/wiki/Helmholtz_machine Helmholtz machine from Wikipedia]</ref>
* [[Ida Sprinkhuizen-Kuyper]], [https://dblp.org/pers/hd/b/Boers:Egbert_J=_W= Egbert J. W. Boers] ('''1999'''). ''[https://ieeexplore.ieee.org/document/774274 A local minimum for the 2-3-1 XOR network]''. [[IEEE#NN|IEEE Transactions on Neural Networks]], Vol. 10, No. 4
==2000 ...==
* [[Levente Kocsis]], [[Jos Uiterwijk]], [[Jaap van den Herik]] ('''2000'''). ''[http://link.springer.com/chapter/10.1007%2F3-540-45579-5_11 Learning Time Allocation using Neural Networks]''. [[CG 2000]]
