Neural Networks
=ANNs=
[https://en.wikipedia.org/wiki/Artificial_neural_network Artificial Neural Networks] ('''ANNs''') are a family of [https://en.wikipedia.org/wiki/Machine_learning statistical learning] devices or algorithms used in [https://en.wikipedia.org/wiki/Regression_analysis regression], and [https://en.wikipedia.org/wiki/Binary_classification binary] or [https://en.wikipedia.org/wiki/Multiclass_classification multiclass classification], implemented in [[Hardware|hardware]] or [[Software|software]], and inspired by their biological counterparts. The [https://en.wikipedia.org/wiki/Artificial_neuron artificial neurons] of one or more layers receive one or more inputs (representing dendrites), weight them, and sum them to produce an output (representing a neuron's axon). The sum is passed through a [https://en.wikipedia.org/wiki/Nonlinear_system nonlinear] function known as an [https://en.wikipedia.org/wiki/Activation_function activation function] or transfer function. The transfer functions usually have a [https://en.wikipedia.org/wiki/Sigmoid_function sigmoid shape], but they may also take the form of other non-linear functions, [https://en.wikipedia.org/wiki/Piecewise piecewise] linear functions, or [https://en.wikipedia.org/wiki/Artificial_neuron#Step_function step functions] <ref>[https://en.wikipedia.org/wiki/Artificial_neuron Artificial neuron from Wikipedia]</ref>. The weights of the inputs of each layer are tuned to minimize a [https://en.wikipedia.org/wiki/Loss_function cost or loss function], which is a task in [https://en.wikipedia.org/wiki/Mathematical_optimization mathematical optimization] and machine learning.
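The weighted sum and activation described above can be sketched in a few lines of Python; this is a toy single neuron with a sigmoid transfer function, and the weights and bias below are arbitrary illustrative values, not taken from any real network:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through a sigmoid."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid activation function

# With large weights and a bias of -15, the neuron acts as a soft AND gate:
print(neuron([1.0, 1.0], [10.0, 10.0], -15.0))  # close to 1
print(neuron([0.0, 1.0], [10.0, 10.0], -15.0))  # close to 0
```

Replacing the sigmoid with a piecewise linear function such as `max(0.0, s)` (ReLU) or a hard step function gives the other transfer-function shapes mentioned above.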
==Perceptron==
''Typical CNN architecture'' (image caption) <ref>Typical [https://en.wikipedia.org/wiki/Convolutional_neural_network CNN] architecture, Image by Aphex34, December 16, 2015, [https://creativecommons.org/licenses/by-sa/4.0/deed.en CC BY-SA 4.0], [https://en.wikipedia.org/wiki/Wikimedia_Commons Wikimedia Commons]</ref>
<span id="Residual"></span>
==Residual Net==
[[FILE:ResiDualBlock.png|border|right|thumb|link=https://arxiv.org/abs/1512.03385| A residual block <ref>The fundamental building block of residual networks. Figure 2 in [https://scholar.google.com/citations?user=DhtAFkwAAAAJ Kaiming He], [https://scholar.google.com/citations?user=yuB-cfoAAAAJ&hl=en Xiangyu Zhang], [http://shaoqingren.com/ Shaoqing Ren], [http://www.jiansun.org/ Jian Sun] ('''2015'''). ''Deep Residual Learning for Image Recognition''. [https://arxiv.org/abs/1512.03385 arXiv:1512.03385]</ref> <ref>[https://blog.waya.ai/deep-residual-learning-9610bb62c355 Understand Deep Residual Networks — a simple, modular learning framework that has redefined state-of-the-art] by [https://blog.waya.ai/@waya.ai Michael Dietz], [https://blog.waya.ai/ Waya.ai], May 02, 2017</ref> ]]
A '''Residual net''' (ResNet) adds the input of a layer, typically composed of a convolutional layer and a [https://en.wikipedia.org/wiki/Rectifier_(neural_networks) ReLU] layer, to its output. This modification, like convolutional nets inspired by image classification, enables faster training and deeper networks <ref>[[Tristan Cazenave]] ('''2017'''). ''[http://ieeexplore.ieee.org/document/7875402/ Residual Networks for Computer Go]''. [[IEEE#TOCIAIGAMES|IEEE Transactions on Computational Intelligence and AI in Games]], Vol. PP, No. 99, [http://www.lamsade.dauphine.fr/~cazenave/papers/resnet.pdf pdf]</ref> <ref>[https://wiki.tum.de/display/lfdv/Deep+Residual+Networks Deep Residual Networks] from [https://wiki.tum.de/ TUM Wiki], [[Technical University of Munich]]</ref> <ref>[https://towardsdatascience.com/understanding-and-visualizing-resnets-442284831be8 Understanding and visualizing ResNets] by Pablo Ruiz, October 8, 2018</ref>.
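The skip connection of a residual block can be sketched as y = F(x) + x; below is a toy Python sketch in which a hypothetical elementwise scaling stands in for the convolutional layer, so the block only illustrates the shape of the computation, not a real ResNet layer:

```python
def relu(v):
    return [max(0.0, x) for x in v]

def conv_layer(v, w):
    # Stand-in for a convolutional layer: a simple elementwise scaling by w.
    return [w * x for x in v]

def residual_block(x, w):
    """y = F(x) + x : the block learns a residual F instead of the full mapping."""
    fx = relu(conv_layer(x, w))             # conv + ReLU branch
    return [a + b for a, b in zip(fx, x)]   # skip connection adds the input back

print(residual_block([1.0, -2.0, 3.0], 0.5))  # [1.5, -2.0, 4.5]
```

Note that with all branch weights at zero the block reduces to the identity function, which is one intuition for why residual nets remain trainable at great depth.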
=ANNs in Games=
<span id="AlphaZero"></span>
===AlphaZero===
In December 2017, the [[Google]] [[DeepMind]] team along with former [[Giraffe]] author [[Matthew Lai]] reported on their generalized [[AlphaZero]] algorithm, combining [[Deep Learning|Deep learning]] with [[Monte-Carlo Tree Search]]. AlphaZero can achieve, tabula rasa, superhuman performance in many challenging domains with some training effort. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved a superhuman level of play in the games of chess and [[Shogi]] as well as Go, and convincingly defeated a world-champion program in each case <ref>[[David Silver]], [[Thomas Hubert]], [[Julian Schrittwieser]], [[Ioannis Antonoglou]], [[Matthew Lai]], [[Arthur Guez]], [[Marc Lanctot]], [[Laurent Sifre]], [[Dharshan Kumaran]], [[Thore Graepel]], [[Timothy Lillicrap]], [[Karen Simonyan]], [[Demis Hassabis]] ('''2017'''). ''Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm''. [https://arxiv.org/abs/1712.01815 arXiv:1712.01815]</ref>. The open source projects [[Leela Zero]] (Go) and its chess adaptation [[Leela Chess Zero]] successfully re-implemented the ideas of DeepMind.
===NNUE===
[[NNUE]], the reverse of &#398;U&#1048;&#1048; - Efficiently Updatable Neural Networks, is an NN architecture intended to replace the [[Evaluation|evaluation]] function of [[Shogi]], [[Chess|chess]] and other board game playing [[Alpha-Beta|alpha-beta]] searchers. NNUE was introduced in 2018 by [[Yu Nasu]] <ref>[[Yu Nasu]] ('''2018'''). ''&#398;U&#1048;&#1048; Efficiently Updatable Neural-Network based Evaluation Functions for Computer Shogi''. Ziosoft Computer Shogi Club, [https://github.com/ynasu87/nnue/blob/master/docs/nnue.pdf pdf] (Japanese with English abstract), [https://github.com/asdfjkl/nnue GitHub - asdfjkl/nnue translation]</ref>, and was used in Shogi adaptations of [[Stockfish]] such as [[YaneuraOu]] <ref>[https://github.com/yaneurao/YaneuraOu GitHub - yaneurao/YaneuraOu: YaneuraOu is the World's Strongest Shogi engine(AI player), WCSC29 1st winner, educational and USI compliant engine]</ref> and [[Kristallweizen]] <ref>[https://github.com/Tama4649/Kristallweizen/ GitHub - Tama4649/Kristallweizen: 第29回世界コンピュータ将棋選手権 準優勝のKristallweizenです。]</ref>, apparently with [[AlphaZero]] strength <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=72754 The Stockfish of shogi] by [[Larry Kaufman]], [[CCC]], January 07, 2020</ref>. [[Hisayori Noda|Nodchip]] incorporated NNUE into the chess playing Stockfish 10 as a proof of concept <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=74059 Stockfish NN release (NNUE)] by [[Henk Drost]], [[CCC]], May 31, 2020</ref>, resulting in the hype about [[Stockfish NNUE]] in summer 2020 <ref>[http://yaneuraou.yaneu.com/2020/06/19/stockfish-nnue-the-complete-guide/ Stockfish NNUE – The Complete Guide], June 19, 2020 (Japanese and English)</ref>. Its heavily overparametrized, computationally most expensive input layer is efficiently [[Incremental Updates|incrementally updated]] in [[Make Move|make]] and [[Unmake Move|unmake move]].
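The incremental update of the input layer can be sketched in Python as an accumulator that is patched rather than recomputed when a move flips only a few input features; this is a toy dense layer with made-up weights and feature indices, not the actual HalfKP feature set or the quantized engine implementation:

```python
class Accumulator:
    """NNUE-style incrementally updated first layer (illustrative sketch).

    The first layer's output vector is cached; making or unmaking a move
    only adds or subtracts the weight columns of the few changed features."""

    def __init__(self, weights, active_features):
        self.weights = weights                    # weights[f] = column for feature f
        self.acc = [0.0] * len(weights[0])
        for f in active_features:                 # full refresh done once
            self.add(f)

    def add(self, f):     # feature switched on (e.g. a piece appears on a square)
        self.acc = [a + w for a, w in zip(self.acc, self.weights[f])]

    def remove(self, f):  # feature switched off
        self.acc = [a - w for a, w in zip(self.acc, self.weights[f])]

# A "move" flipping feature 0 off and feature 2 on touches only two columns:
W = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # hypothetical 3-feature, 2-unit layer
acc = Accumulator(W, {0, 1})
acc.remove(0); acc.add(2)                   # make move
print(acc.acc)                              # [8.0, 10.0]
acc.remove(2); acc.add(0)                   # unmake move restores the old state
print(acc.acc)                              # [4.0, 6.0]
```

This is why the over-parametrized input layer is cheap in practice: its cost per move is proportional to the handful of changed features, not to the full input size.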
<span id="engines"></span>
===NN Chess Programs===
=See also=
* [[Memory]]
* [[Neural MoveMap Heuristic]]
* [[NNUE]]
* [[Pattern Recognition]]
* [[David E. Moriarty#SANE|SANE]]
* [[Temporal Difference Learning]]
=Publications=
==1950 ...==
* [[John von Neumann]] ('''1956'''). ''Probabilistic Logic and the Synthesis of Reliable Organisms From Unreliable Components''. in
: [[Claude Shannon]], [[John McCarthy]] (eds.) ('''1956'''). ''Automata Studies''. [http://press.princeton.edu/math/series/amh.html Annals of Mathematics Studies], No. 34, [http://www.dna.caltech.edu/courses/cs191/paperscs191/VonNeumann56.pdf pdf]
* [[Nathaniel Rochester]], [[Mathematician#Holland|John H. Holland]], [https://dblp.uni-trier.de/pers/hd/h/Haibt:L=_H= L. H. Haibt], [https://dblp.uni-trier.de/pers/hd/d/Duda:William_L= William L. Duda] ('''1956'''). ''[https://www.semanticscholar.org/paper/Tests-on-a-cell-assembly-theory-of-the-action-of-a-Rochester-Holland/878d615b84cf779e162f62c4a9192d6bddeefbf9 Tests on a Cell Assembly Theory of the Action of the Brain, Using a Large Digital Computer]''. [https://dblp.uni-trier.de/db/journals/tit/tit2n.html#RochesterHHD56 IRE Transactions on Information Theory, Vol. 2], No. 3
* [https://en.wikipedia.org/wiki/Frank_Rosenblatt Frank Rosenblatt] ('''1957'''). ''The Perceptron - a Perceiving and Recognizing Automaton''. Report 85-460-1, [https://en.wikipedia.org/wiki/Calspan#History Cornell Aeronautical Laboratory] <ref>[http://csis.pace.edu/~ctappert/srd2011/rosenblatt-contributions.htm Rosenblatt's Contributions]</ref>
==1990 ...==
* [https://dblp.uni-trier.de/pers/hd/h/Hellstrom:Benjamin_J= Benjamin J. Hellstrom], [[Laveen Kanal|Laveen N. Kanal]] ('''1990'''). ''[https://ieeexplore.ieee.org/document/5726889 The definition of necessary hidden units in neural networks for combinatorial optimization]''. [https://dblp.uni-trier.de/db/conf/ijcnn/ijcnn1990.html IJCNN 1990]
* [[Mathematician#XZhang|Xiru Zhang]], [https://dblp.uni-trier.de/pers/hd/m/McKenna:Michael Michael McKenna], [[Mathematician#JPMesirov|Jill P. Mesirov]], [[David Waltz]] ('''1990'''). ''[https://www.sciencedirect.com/science/article/pii/016781919090084M The backpropagation algorithm on grid and hypercube architectures]''. [https://www.journals.elsevier.com/parallel-computing Parallel Computing], Vol. 14, No. 3
* [[Simon Lucas]], [https://dblp.uni-trier.de/pers/hd/d/Damper:Robert_I= Robert I. Damper] ('''1990'''). ''[https://www.tandfonline.com/doi/abs/10.1080/09540099008915669 Syntactic Neural Networks]''. [https://www.tandfonline.com/toc/ccos20/current Connection Science], Vol. 2, No. 3
'''1991'''
* [[Mathematician#SHochreiter|Sepp Hochreiter]] ('''1991'''). ''Untersuchungen zu dynamischen neuronalen Netzen''. Diploma thesis, [[Technical University of Munich|TU Munich]], advisor [[Jürgen Schmidhuber]], [http://people.idsia.ch/~juergen/SeppHochreiter1991ThesisAdvisorSchmidhuber.pdf pdf] (German) <ref>[http://people.idsia.ch/~juergen/fundamentaldeeplearningproblem.html Sepp Hochreiter's Fundamental Deep Learning Problem (1991)] by [[Jürgen Schmidhuber]], 2013</ref>
* [[Alex van Tiggelen]] ('''1991'''). ''Neural Networks as a Guide to Optimization - The Chess Middle Game Explored''. [[ICGA Journal#14_3|ICCA Journal, Vol. 14, No. 3]]
* [[Mathematician#TMartinetz|Thomas Martinetz]], [[Mathematician#KSchulten|Klaus Schulten]] ('''1991'''). ''A "Neural-Gas" Network Learns Topologies''. In [[Mathematician#TKohonen|Teuvo Kohonen]], [https://dblp.uni-trier.de/pers/hd/m/Makisara:Kai Kai Mäkisara], [http://users.ics.tkk.fi/ollis/ Olli Simula], [http://cis.legacy.ics.tkk.fi/jari/ Jari Kangas] (eds.) ('''1991'''). ''[https://www.elsevier.com/books/artificial-neural-networks/makisara/978-0-444-89178-5 Artificial Neural Networks]''. [https://en.wikipedia.org/wiki/Elsevier Elsevier], [http://www.ks.uiuc.edu/Publications/Papers/PDF/MART91B/MART91B.pdf pdf]
* [[Jürgen Schmidhuber]], [[Rudolf Huber]] ('''1991'''). ''[https://www.researchgate.net/publication/2290900_Using_Adaptive_Sequential_Neurocontrol_For_Efficient_Learning_Of_Translation_And_Rotation_Invariance Using sequential adaptive Neuro-control for efficient Learning of Rotation and Translation Invariance]''. In [[Mathematician#TKohonen|Teuvo Kohonen]], [https://dblp.uni-trier.de/pers/hd/m/Makisara:Kai Kai Mäkisara], [http://users.ics.tkk.fi/ollis/ Olli Simula], [http://cis.legacy.ics.tkk.fi/jari/ Jari Kangas] (eds.) ('''1991'''). ''[https://www.sciencedirect.com/book/9780444891785/artificial-neural-networks#book-description Artificial Neural Networks]''. [https://en.wikipedia.org/wiki/Elsevier Elsevier]
* [[Jürgen Schmidhuber]] ('''1991'''). ''[http://www.idsia.ch/%7Ejuergen/promotion/ Dynamische neuronale Netze und das fundamentale raumzeitliche Lernproblem]'' (Dynamic Neural Nets and the Fundamental Spatio-Temporal Credit Assignment Problem). Ph.D. thesis
* [[Yoav Freund]], [[Mathematician#DHHaussler|David Haussler]] ('''1991'''). ''Unsupervised Learning of Distributions of Binary Vectors Using 2-Layer Networks''. [http://dblp.uni-trier.de/db/conf/nips/nips1991.html#FreundH91 NIPS 1991]
* [[Byoung-Tak Zhang]], [[Gerd Veenker]] ('''1991'''). ''[http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=170480&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D170480 Neural networks that teach themselves through genetic discovery of novel examples]''. [http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000500 IEEE IJCNN'91], [https://bi.snu.ac.kr/Publications/Conferences/International/IJCNN91.pdf pdf]
* [[Simon Lucas]], [https://dblp.uni-trier.de/pers/hd/d/Damper:Robert_I= Robert I. Damper] ('''1991'''). ''[https://link.springer.com/chapter/10.1007/978-1-4615-3752-6_30 Syntactic neural networks in VLSI]''. [https://link.springer.com/book/10.1007/978-1-4615-3752-6 VLSI for Artificial Intelligence and Neural Networks]
* [[Simon Lucas]] ('''1991'''). ''[https://eprints.soton.ac.uk/256263/ Connectionist architectures for syntactic pattern recognition]''. Ph.D. thesis, [https://en.wikipedia.org/wiki/University_of_Southampton University of Southampton]
'''1992'''
* [[Michael Reiss]] ('''1992'''). ''Temporal Sequence Processing in Neural Networks''. Ph.D. thesis, [https://en.wikipedia.org/wiki/King%27s_College_London King's College London], advisor [[Mathematician#JGTaylor|John G. Taylor]], [http://www.reiss.demon.co.uk/misc/m_reiss_phd.pdf pdf]
'''1994'''
* [[Mathematician#PWerbos|Paul Werbos]] ('''1994'''). ''The Roots of Backpropagation. From Ordered Derivatives to Neural Networks and Political Forecasting''. [https://en.wikipedia.org/wiki/John_Wiley_%26_Sons John Wiley & Sons]
* [[David E. Moriarty]], [[Risto Miikkulainen]] ('''1994'''). ''[http://nn.cs.utexas.edu/?moriarty:aaai94 Evolving Neural Networks to focus Minimax Search]''. [[Conferences#AAAI-94|AAAI-94]], [http://www.cs.utexas.edu/~ai-lab/pubs/moriarty.focus.pdf pdf] » [[Othello]]
* [[Eric Postma]] ('''1994'''). ''SCAN: A Neural Model of Covert Attention''. Ph.D. thesis, [[Maastricht University]], advisor [[Jaap van den Herik]]
* [[Sebastian Thrun]] ('''1994'''). ''Neural Network Learning in the Domain of Chess''. Machines That Learn, [http://snowbird.djvuzone.org/ Snowbird], Extended abstract
'''1995'''
* [https://peterbraspenning.wordpress.com/ Peter J. Braspenning], [[Frank Thuijsman]], [https://scholar.google.com/citations?user=Ba9L7CAAAAAJ Ton Weijters] (eds) ('''1995'''). ''[http://link.springer.com/book/10.1007%2FBFb0027019 Artificial neural networks: an introduction to ANN theory and practice]''. [https://de.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science LNCS] 931, [https://de.wikipedia.org/wiki/Springer_Science%2BBusiness_Media Springer]
* [[David E. Moriarty]], [[Risto Miikkulainen]] ('''1995'''). ''[http://nn.cs.utexas.edu/?moriarty:connsci95 Discovering Complex Othello Strategies Through Evolutionary Neural Networks]''. [https://www.scimagojr.com/journalsearch.php?q=24173&tip=sid Connection Science], Vol. 7
* [[Anton Leouski]] ('''1995'''). ''Learning of Position Evaluation in the Game of Othello''. Master's Project, [https://en.wikipedia.org/wiki/University_of_Massachusetts University of Massachusetts], [https://en.wikipedia.org/wiki/Amherst,_Massachusetts Amherst, Massachusetts], [http://people.ict.usc.edu/~leuski/publications/papers/UM-CS-1995-023.pdf pdf]
* [[Mathematician#SHochreiter|Sepp Hochreiter]], [[Jürgen Schmidhuber]] ('''1995'''). ''[http://www.idsia.ch/%7Ejuergen/nipsfm/ Simplifying Neural Nets by Discovering Flat Minima]''. In [[Gerald Tesauro]], [http://www.cs.cmu.edu/%7Edst/home.html David S. Touretzky] and [http://www.bme.ogi.edu/%7Etleen/ Todd K. Leen] (eds.), ''[http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=8420 Advances in Neural Information Processing Systems 7]'', NIPS'7, pages 529-536. [https://en.wikipedia.org/wiki/MIT_Press MIT Press]
'''1997'''
* [[Don Beal]], [[Martin C. Smith]] ('''1997'''). ''Learning Piece Values Using Temporal Differences''. [[ICGA Journal#20_3|ICCA Journal, Vol. 20, No. 3]]
* [https://dblp.uni-trier.de/pers/hd/t/Thiesing:Frank_M= Frank M. Thiesing], [[Oliver Vornberger]] ('''1997'''). ''Forecasting Sales Using Neural Networks''. [https://dblp.uni-trier.de/db/conf/fuzzy/fuzzy1997.html Fuzzy Days 1997], [http://www2.inf.uos.de/papers_pdf/fuzzydays_97.pdf pdf]
* [[Simon Lucas]] ('''1997'''). ''[https://link.springer.com/chapter/10.1007/BFb0032531 Forward-Backward Building Blocks for Evolving Neural Networks with Intrinsic Learning Behaviors]''. [https://dblp.uni-trier.de/db/conf/iwann/iwann1997.html IWANN 1997]
'''1998'''
* [[Kieran Greer]] ('''1998'''). ''A Neural Network Based Search Heuristic and its Application to Computer Chess''. D.Phil. Thesis, [https://en.wikipedia.org/wiki/University_of_Ulster University of Ulster]
* <span id="FundamentalsNAI1st"></span>[[Toshinori Munakata]] ('''1998'''). ''[http://cis.csuohio.edu/~munakata/publs/book/sp.html Fundamentals of the New Artificial Intelligence: Beyond Traditional Paradigms]''. 1st edition, [https://en.wikipedia.org/wiki/Springer_Science%2BBusiness_Media Springer], [[Neural Networks#FundamentalsNAI2nd|2nd edition 2008]]
* [[Lex Weaver]], [https://bjbs.csu.edu.au/schools/computing-and-mathematics/staff/profiles/professorial-staff/terry-bossomaier Terry Bossomaier] ('''1998'''). ''Evolution of Neural Networks to Play the Game of Dots-and-Boxes''. [https://arxiv.org/abs/cs/9809111 arXiv:cs/9809111]
* [[Norman Richards]], [[David E. Moriarty]], [[Risto Miikkulainen]] ('''1998'''). ''[http://nn.cs.utexas.edu/?richards:apin98 Evolving Neural Networks to Play Go]''. [https://www.springer.com/journal/10489 Applied Intelligence], Vol. 8, No. 1
'''1999'''
* [[Kumar Chellapilla]], [[David B. Fogel]] ('''1999'''). ''[http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=784222 Evolution, Neural Networks, Games, and Intelligence]''. Proceedings of the IEEE, September, pp. 1471-1496. [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.99.979 CiteSeerX]
==2000 ...==
'''2002'''
* [[Paul E. Utgoff]], [[David J. Stracuzzi]] ('''2002'''). ''Many-Layered Learning''. [https://en.wikipedia.org/wiki/Neural_Computation_%28journal%29 Neural Computation], Vol. 14, No. 10, [http://people.cs.umass.edu/~utgoff/papers/neco-stl.pdf pdf]
* [[Mathematician#MIJordan|Michael I. Jordan]], [[Terrence J. Sejnowski]] (eds.) ('''2002'''). ''[https://mitpress.mit.edu/books/graphical-models Graphical Models: Foundations of Neural Computation]''. [https://en.wikipedia.org/wiki/MIT_Press MIT Press]
* [[Kenneth O. Stanley]], [[Risto Miikkulainen]] ('''2002'''). ''[http://nn.cs.utexas.edu/?stanley:ec02 Evolving Neural Networks Through Augmenting Topologies]''. [https://en.wikipedia.org/wiki/Evolutionary_Computation_(journal) Evolutionary Computation], Vol. 10, No. 2
'''2003'''
* [[Levente Kocsis]] ('''2003'''). ''Learning Search Decisions''. Ph.D thesis, [[Maastricht University]], [https://project.dke.maastrichtuniversity.nl/games/files/phd/Kocsis_thesis.pdf pdf]
'''2004'''
* [http://dblp.uni-trier.de/pers/hd/p/Patist:Jan_Peter Jan Peter Patist], [[Marco Wiering]] ('''2004'''). ''Learning to Play Draughts using Temporal Difference Learning with Neural Networks and Databases''. [http://students.uu.nl/en/hum/cognitive-artificial-intelligence Cognitive Artificial Intelligence], [https://en.wikipedia.org/wiki/Utrecht_University Utrecht University], Benelearn’04
* [[Henk Mannen]], [[Marco Wiering]] ('''2004'''). ''[https://www.semanticscholar.org/paper/Learning-to-Play-Chess-using-TD(lambda)-learning-Mannen-Wiering/00a6f81c8ebe8408c147841f26ed27eb13fb07f3 Learning to play chess using TD(λ)-learning with database games]''. [http://students.uu.nl/en/hum/cognitive-artificial-intelligence Cognitive Artificial Intelligence], [https://en.wikipedia.org/wiki/Utrecht_University Utrecht University], Benelearn’04, [https://www.ai.rug.nl/~mwiering/GROUP/ARTICLES/learning-chess.pdf pdf]
* [[Mathieu Autonès]], [[Aryel Beck]], [[Phillippe Camacho]], [[Nicolas Lassabe]], [[Hervé Luga]], [[François Scharffe]] ('''2004'''). ''[http://link.springer.com/chapter/10.1007/978-3-540-24650-3_1 Evaluation of Chess Position by Modular Neural network Generated by Genetic Algorithm]''. [http://www.informatik.uni-trier.de/~ley/db/conf/eurogp/eurogp2004.html#AutonesBCLLS04 EuroGP 2004] <ref>[https://www.stmintz.com/ccc/index.php?id=358770 Presentation for a neural net learning chess program] by [[Dann Corbit]], [[CCC]], April 06, 2004</ref>
* [[Daniel Walker]], [[Robert Levinson]] ('''2004'''). ''The MORPH Project in 2004''. [[ICGA Journal#27_4|ICGA Journal, Vol. 27, No. 4]]
==2010 ...==
'''2013'''
* [[Mathematician#GMontavon|Grégoire Montavon]] ('''2013'''). ''[https://opus4.kobv.de/opus4-tuberlin/frontdoor/index/index/docId/4467 On Layer-Wise Representations in Deep Neural Networks]''. Ph.D. Thesis, [https://en.wikipedia.org/wiki/Technical_University_of_Berlin TU Berlin], advisor [[Mathematician#KRMueller|Klaus-Robert Müller]]
* [[Volodymyr Mnih]], [[Koray Kavukcuoglu]], [[David Silver]], [[Alex Graves]], [[Ioannis Antonoglou]], [[Daan Wierstra]], [[Martin Riedmiller]] ('''2013'''). ''Playing Atari with Deep Reinforcement Learning''. [http://arxiv.org/abs/1312.5602 arXiv:1312.5602] <ref>[http://www.nervanasys.com/demystifying-deep-reinforcement-learning/ Demystifying Deep Reinforcement Learning] by [http://www.nervanasys.com/author/tambet/ Tambet Matiisen], [http://www.nervanasys.com/ Nervana], December 21, 2015</ref>
* [[Risto Miikkulainen]] ('''2013'''). ''Evolving Neural Networks''. [https://dblp.org/db/conf/ijcnn/ijcnn2013 IJCNN 2013], [http://nn.cs.utexas.edu/downloads/slides/miikkulainen.ijcnn13.pdf pdf]
'''2014'''
* [[Mathematician#YDauphin|Yann Dauphin]], [[Mathematician#RPascanu|Razvan Pascanu]], [[Mathematician#CGulcehre|Caglar Gulcehre]], [[Mathematician#KCho|Kyunghyun Cho]], [[Mathematician#SGanguli|Surya Ganguli]], [[Mathematician#YBengio|Yoshua Bengio]] ('''2014'''). ''Identifying and attacking the saddle point problem in high-dimensional non-convex optimization''. [https://arxiv.org/abs/1406.2572 arXiv:1406.2572] <ref>[https://groups.google.com/d/msg/fishcooking/wOfRuzTSi_8/VgjN8MmSBQAJ high dimensional optimization] by [[Warren D. Smith]], [[Computer Chess Forums|FishCooking]], December 27, 2019</ref>
* [[Christopher Clark]], [[Amos Storkey]] ('''2014'''). ''Teaching Deep Convolutional Neural Networks to Play Go''. [http://arxiv.org/abs/1412.3409 arXiv:1412.3409] <ref>[http://computer-go.org/pipermail/computer-go/2014-December/007010.html Teaching Deep Convolutional Neural Networks to Play Go] by [[Hiroshi Yamashita]], [http://computer-go.org/pipermail/computer-go/ The Computer-go Archives], December 14, 2014</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=54663 Teaching Deep Convolutional Neural Networks to Play Go] by [[Michel Van den Bergh]], [[CCC]], December 16, 2014</ref>
* [[Chris J. Maddison]], [[Shih-Chieh Huang|Aja Huang]], [[Ilya Sutskever]], [[David Silver]] ('''2014'''). ''Move Evaluation in Go Using Deep Convolutional Neural Networks''. [http://arxiv.org/abs/1412.6564v1 arXiv:1412.6564v1] » [[Go]]
* [[Ilya Sutskever]], [https://research.google.com/pubs/OriolVinyals.html Oriol Vinyals], [https://www.linkedin.com/in/quoc-v-le-319b5a8 Quoc V. Le] ('''2014'''). ''Sequence to Sequence Learning with Neural Networks''. [https://arxiv.org/abs/1409.3215 arXiv:1409.3215]
'''2015'''
* [https://scholar.google.nl/citations?user=yyIoQu4AAAAJ Diederik P. Kingma], [https://scholar.google.ca/citations?user=ymzxRhAAAAAJ&hl=en Jimmy Lei Ba] ('''2015'''). ''Adam: A Method for Stochastic Optimization''. [https://arxiv.org/abs/1412.6980v8 arXiv:1412.6980v8], [http://www.iclr.cc/doku.php?id=iclr2015:main ICLR 2015] <ref>[http://www.talkchess.com/forum/viewtopic.php?t=61948 Arasan 19.2] by [[Jon Dart]], [[CCC]], November 03, 2016 » [[Arasan#Tuning|Arasan's Tuning]]</ref>
* [http://michaelnielsen.org/ Michael Nielsen] ('''2015'''). ''[http://neuralnetworksanddeeplearning.com/ Neural networks and deep learning]''. Determination Press
* [[Mathematician#SIoffe|Sergey Ioffe]], [[Mathematician#CSzegedy|Christian Szegedy]] ('''2015'''). ''Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift''. [https://arxiv.org/abs/1502.03167 arXiv:1502.03167]
* [[Mathematician#GEHinton|Geoffrey E. Hinton]], [https://research.google.com/pubs/OriolVinyals.html Oriol Vinyals], [https://en.wikipedia.org/wiki/Jeff_Dean_(computer_scientist) Jeff Dean] ('''2015'''). ''Distilling the Knowledge in a Neural Network''. [https://arxiv.org/abs/1503.02531 arXiv:1503.02531]
* [[James L. McClelland]] ('''2015'''). ''[https://web.stanford.edu/group/pdplab/pdphandbook/handbook3.html#handbookch10.html Explorations in Parallel Distributed Processing: A Handbook of Models, Programs, and Exercises]''. Second Edition, [https://web.stanford.edu/group/pdplab/pdphandbook/handbookli1.html Contents]
* [[Gábor Melis]] ('''2015'''). ''[http://jmlr.org/proceedings/papers/v42/meli14.html Dissecting the Winning Solution of the HiggsML Challenge]''. [https://nips.cc/Conferences/2014 NIPS 2014]
* [[Guillaume Desjardins]], [[Karen Simonyan]], [[Mathematician#RPascanu|Razvan Pascanu]], [[Koray Kavukcuoglu]] ('''2015'''). ''Natural Neural Networks''. [https://arxiv.org/abs/1507.00210 arXiv:1507.00210]
* [[Barak Oshri]], [[Nishith Khandwala]] ('''2015'''). ''Predicting Moves in Chess using Convolutional Neural Networks''. [http://cs231n.stanford.edu/reports/ConvChess.pdf pdf] <ref>[https://github.com/BarakOshri/ConvChess GitHub - BarakOshri/ConvChess: Predicting Moves in Chess Using Convolutional Neural Networks]</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=63458 ConvChess CNN] by [[Brian Richardson]], [[CCC]], March 15, 2017</ref>
* [[Mathematician#YLeCun|Yann LeCun]], [[Mathematician#YBengio|Yoshua Bengio]], [[Mathematician#GEHinton|Geoffrey E. Hinton]] ('''2015'''). ''[http://www.nature.com/nature/journal/v521/n7553/full/nature14539.html Deep Learning]''. [https://en.wikipedia.org/wiki/Nature_%28journal%29 Nature], Vol. 521 <ref>[[Jürgen Schmidhuber]] ('''2015''') ''[http://people.idsia.ch/~juergen/deep-learning-conspiracy.html Critique of Paper by "Deep Learning Conspiracy" (Nature 521 p 436)]''.</ref>
* [[Matthew Lai]] ('''2015'''). ''Giraffe: Using Deep Reinforcement Learning to Play Chess''. M.Sc. thesis, [https://en.wikipedia.org/wiki/Imperial_College_London Imperial College London], [http://arxiv.org/abs/1509.01549v1 arXiv:1509.01549v1] » [[Giraffe]]
* [[Nikolai Yakovenko]], [[Liangliang Cao]], [[Colin Raffel]], [[James Fan]] ('''2015'''). ''Poker-CNN: A Pattern Learning Strategy for Making Draws and Bets in Poker Games''. [https://arxiv.org/abs/1509.06731 arXiv:1509.06731]
* [https://scholar.google.ca/citations?user=yVtSOt8AAAAJ&hl=en Emmanuel Bengio], [https://scholar.google.ca/citations?user=9H77FYYAAAAJ&hl=en Pierre-Luc Bacon], [[Joelle Pineau]], [[Doina Precup]] ('''2015'''). ''Conditional Computation in Neural Networks for faster models''. [https://arxiv.org/abs/1511.06297 arXiv:1511.06297]
* [[Ilya Loshchilov]], [[Frank Hutter]] ('''2015'''). ''Online Batch Selection for Faster Training of Neural Networks''. [https://arxiv.org/abs/1511.06343 arXiv:1511.06343]
* [[Yuandong Tian]], [[Yan Zhu]] ('''2015'''). ''Better Computer Go Player with Neural Network and Long-term Prediction''. [http://arxiv.org/abs/1511.06410 arXiv:1511.06410] <ref>[http://www.technologyreview.com/view/544181/how-facebooks-ai-researchers-built-a-game-changing-go-engine/?utm_campaign=socialsync&utm_medium=social-post&utm_source=facebook How Facebook’s AI Researchers Built a Game-Changing Go Engine | MIT Technology Review], December 04, 2015</ref> <ref>[http://www.talkchess.com/forum/viewtopic.php?t=58514 Combining Neural Networks and Search techniques (GO)] by Michael Babigian, [[CCC]], December 08, 2015</ref> » [[Go]]
'''2016'''
* [[Ilya Loshchilov]], [[Frank Hutter]] ('''2016'''). ''CMA-ES for Hyperparameter Optimization of Deep Neural Networks''. [https://arxiv.org/abs/1604.07269 arXiv:1604.07269] <ref>[https://en.wikipedia.org/wiki/CMA-ES CMA-ES from Wikipedia]</ref>
* [[Audrūnas Gruslys]], [[Rémi Munos]], [[Ivo Danihelka]], [[Marc Lanctot]], [[Alex Graves]] ('''2016'''). ''Memory-Efficient Backpropagation Through Time''. [https://arxiv.org/abs/1606.03401v1 arXiv:1606.03401]
* [[Mathematician#AARusu|Andrei A. Rusu]], [[Neil C. Rabinowitz]], [[Guillaume Desjardins]], [[Hubert Soyer]], [[James Kirkpatrick]], [[Koray Kavukcuoglu]], [[Mathematician#RPascanu|Razvan Pascanu]], [[Mathematician#RHadsell|Raia Hadsell]] ('''2016'''). ''Progressive Neural Networks''. [https://arxiv.org/abs/1606.04671 arXiv:1606.04671]
* [[Gao Huang]], [[Zhuang Liu]], [[Laurens van der Maaten]], [[Kilian Q. Weinberger]] ('''2016'''). ''Densely Connected Convolutional Networks''. [https://arxiv.org/abs/1608.06993 arXiv:1608.06993] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=75665&start=9 Re: Minic version 3] by [[Connor McMonigle]], [[CCC]], November 03, 2020 » [[Minic#Minic 3|Minic 3]], [[Seer|Seer 1.1]]</ref>
* [[George Rajna]] ('''2016'''). ''Deep Neural Networks''. [http://vixra.org/abs/1609.0126 viXra:1609.0126]
* [[James Kirkpatrick]], [[Mathematician#RPascanu|Razvan Pascanu]], [[Neil C. Rabinowitz]], [[Joel Veness]], [[Guillaume Desjardins]], [[Mathematician#AARusu|Andrei A. Rusu]], [[Kieran Milan]], [[John Quan]], [[Tiago Ramalho]], [[Agnieszka Grabska-Barwinska]], [[Demis Hassabis]], [[Claudia Clopath]], [[Dharshan Kumaran]], [[Mathematician#RHadsell|Raia Hadsell]] ('''2016'''). ''Overcoming catastrophic forgetting in neural networks''. [https://arxiv.org/abs/1612.00796 arXiv:1612.00796] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=70704 catastrophic forgetting] by [[Daniel Shawul]], [[CCC]], May 09, 2019</ref>
* [https://dblp.uni-trier.de/pers/hd/n/Niu:Zhenxing Zhenxing Niu], [https://dblp.uni-trier.de/pers/hd/z/Zhou:Mo Mo Zhou], [https://dblp.uni-trier.de/pers/hd/w/Wang_0003:Le Le Wang], [[Xinbo Gao]], [https://dblp.uni-trier.de/pers/hd/h/Hua_0001:Gang Gang Hua] ('''2016'''). ''Ordinal Regression with Multiple Output CNN for Age Estimation''. [https://dblp.uni-trier.de/db/conf/cvpr/cvpr2016.html CVPR 2016], [https://www.cv-foundation.org/openaccess/content_cvpr_2016/app/S21-20.pdf pdf]
* [[Li Jing]], [[Yichen Shen]], [[Tena Dubček]], [[John Peurifoy]], [[Scott Skirlo]], [[Mathematician#YLeCun|Yann LeCun]], [[Max Tegmark]], [[Marin Soljačić]] ('''2016'''). ''Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs''. [https://arxiv.org/abs/1612.05231 arXiv:1612.05231] <ref>[http://talkchess.com/forum3/viewtopic.php?f=2&t=74059 Stockfish NN release (NNUE)] by [[Henk Drost]], [[CCC]], May 31, 2020 » [[Stockfish]]</ref>
'''2017'''
* [[Yutian Chen]], [[Matthew W. Hoffman]], [[Sergio Gomez Colmenarejo]], [[Misha Denil]], [[Timothy Lillicrap]], [[Matthew Botvinick]], [[Nando de Freitas]] ('''2017'''). ''Learning to Learn without Gradient Descent by Gradient Descent''. [https://arxiv.org/abs/1611.03824v6 arXiv:1611.03824v6], [http://dblp.uni-trier.de/db/conf/icml/icml2017.html ICML 2017]
* [https://dblp.org/pers/hd/s/Serb:Alexander Alexantrou Serb], [[Edoardo Manino]], [https://dblp.org/pers/hd/m/Messaris:Ioannis Ioannis Messaris], [https://dblp.org/pers/hd/t/Tran=Thanh:Long Long Tran-Thanh], [https://www.orc.soton.ac.uk/people/tp1f12 Themis Prodromakis] ('''2017'''). ''[https://eprints.soton.ac.uk/425616/ Hardware-level Bayesian inference]''. [https://nips.cc/Conferences/2017 NIPS 2017] » [[Analog Evaluation]]
'''2018'''
* [[Yu Nasu]] ('''2018'''). ''&#398;U&#1048;&#1048; Efficiently Updatable Neural-Network based Evaluation Functions for Computer Shogi''. Ziosoft Computer Shogi Club, [https://github.com/ynasu87/nnue/blob/master/docs/nnue.pdf pdf], [https://www.apply.computer-shogi.org/wcsc28/appeal/the_end_of_genesis_T.N.K.evolution_turbo_type_D/nnue.pdf pdf] (Japanese with English abstract) [https://github.com/asdfjkl/nnue GitHub - asdfjkl/nnue translation] » [[NNUE]] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=76250 Translation of Yu Nasu's NNUE paper] by [[Dominik Klein]], [[CCC]], January 07, 2021</ref>
* [[Kei Takada]], [[Hiroyuki Iizuka]], [[Masahito Yamamoto]] ('''2018'''). ''[https://link.springer.com/chapter/10.1007%2F978-3-319-75931-9_2 Computer Hex Algorithm Using a Move Evaluation Method Based on a Convolutional Neural Network]''. [https://link.springer.com/bookseries/7899 Communications in Computer and Information Science] » [[Hex]]
* [[Matthia Sabatelli]], [[Francesco Bidoia]], [[Valeriu Codreanu]], [[Marco Wiering]] ('''2018'''). ''Learning to Evaluate Chess Positions with Deep Neural Networks and Limited Lookahead''. ICPRAM 2018, [http://www.ai.rug.nl/~mwiering/GROUP/ARTICLES/ICPRAM_CHESS_DNN_2018.pdf pdf]
'''2019'''
* [[Marius Lindauer]], [[Frank Hutter]] ('''2019'''). ''Best Practices for Scientific Research on Neural Architecture Search''. [https://arxiv.org/abs/1909.02453 arXiv:1909.02453]
* [[Guy Haworth]] ('''2019'''). ''Chess endgame news: an endgame challenge for neural nets''. [[ICGA Journal#41_3|ICGA Journal, Vol. 41, No. 3]] » [[Endgame]]
==2020 ...==
* [[Reid McIlroy-Young]], [[Siddhartha Sen]], [[Jon Kleinberg]], [[Ashton Anderson]] ('''2020'''). ''Aligning Superhuman AI with Human Behavior: Chess as a Model System''. [[ACM#SIGKDD|ACM SIGKDD 2020]], [https://arxiv.org/abs/2006.01855 arXiv:2006.01855] » [[Maia Chess]]
* [[Reid McIlroy-Young]], [[Russell Wang]], [[Siddhartha Sen]], [[Jon Kleinberg]], [[Ashton Anderson]] ('''2020'''). ''Learning Personalized Models of Human Behavior in Chess''. [https://arxiv.org/abs/2008.10086 arXiv:2008.10086]
* [[Oisín Carroll]], [[Joeran Beel]] ('''2020'''). ''Finite Group Equivariant Neural Networks for Games''. [https://arxiv.org/abs/2009.05027 arXiv:2009.05027]
* [https://scholar.google.com/citations?user=HT85tXsAAAAJ&hl=en Mohammad Pezeshki], [https://scholar.google.com/citations?user=jKqh8jAAAAAJ&hl=en Sékou-Oumar Kaba], [[Mathematician#YBengio|Yoshua Bengio]] , [[Mathematician#ACourville|Aaron Courville]] , [[Doina Precup]], [https://scholar.google.com/citations?user=ifu_7_0AAAAJ&hl=en Guillaume Lajoie] ('''2020'''). ''Gradient Starvation: A Learning Proclivity in Neural Networks''. [https://arxiv.org/abs/2011.09468 arXiv:2011.09468]
=Blog & Forum Posts=
==2020 ...==
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74077 How to work with batch size in neural network] by Gertjan Brouwer, [[CCC]], June 02, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74531 NNUE accessible explanation] by [[Martin Fierz]], [[CCC]], July 21, 2020 » [[NNUE]]
: [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74531&start=1 Re: NNUE accessible explanation] by [[Jonathan Rosenthal]], [[CCC]], July 23, 2020
: [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74531&start=5 Re: NNUE accessible explanation] by [[Jonathan Rosenthal]], [[CCC]], July 24, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=74607 LC0 vs. NNUE - some tech details...] by [[Srdja Matovic]], [[CCC]], July 29, 2020 » [[Leela Chess Zero#Lc0|Lc0]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74771 AB search with NN on GPU...] by [[Srdja Matovic]], [[CCC]], August 13, 2020 » [[GPU]] <ref>[https://forums.developer.nvidia.com/t/kernel-launch-latency/62455 kernel launch latency - CUDA / CUDA Programming and Performance - NVIDIA Developer Forums] by LukeCuda, June 18, 2018</ref>
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74777 Neural Networks weights type] by [[Fabio Gobbato]], [[CCC]], August 13, 2020 » [[Stockfish NNUE]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74955 Train a neural network evaluation] by [[Fabio Gobbato]], [[CCC]], September 01, 2020 » [[Automated Tuning]], [[NNUE]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75042 Neural network quantization] by [[Fabio Gobbato]], [[CCC]], September 08, 2020 » [[NNUE]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75190 First success with neural nets] by [[Jonathan Kreuzer]], [[CCC]], September 23, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=75606 Transhuman Chess with NN and RL...] by [[Srdja Matovic]], [[CCC]], October 30, 2020 » [[Reinforcement Learning|RL]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75724 Pytorch NNUE training] by [[Gary Linscott]], [[CCC]], November 08, 2020 <ref>[https://en.wikipedia.org/wiki/PyTorch PyTorch from Wikipedia]</ref> » [[NNUE]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75925 Pawn King Neural Network] by [[Tamás Kuzmics]], [[CCC]], November 26, 2020 » [[NNUE]]
* [http://laatste.info/bb3/viewtopic.php?f=53&t=8327 Learning draughts evaluation functions using Keras/TensorFlow] by [[Rein Halbersma]], [http://laatste.info/bb3/viewforum.php?f=53 World Draughts Forum], November 30, 2020 » [[Draughts]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75985 Maiachess] by [[Marc-Philippe Huget]], [[CCC]], December 04, 2020 » [[Maia Chess]]
'''2021'''
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76263 More experiments with neural nets] by [[Jonathan Kreuzer]], [[CCC]], January 09, 2021 » [[Slow Chess]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76334 Keras/Tensforflow for very sparse inputs] by Jacek Dermont, [[CCC]], January 16, 2021
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=76664 Are neural nets (the weights file) copyrightable?] by [[Adam Treat]], [[CCC]], February 21, 2021
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76885 A worked example of backpropagation using Javascript] by [[Colin Jenkins]], [[CCC]], March 16, 2021 » [[Neural Networks#Backpropagation|Backpropagation]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=77061 yet another NN library] by lucasart, [[CCC]], April 11, 2021 » [[#lucasart|lucasart/nn]]
=External Links=
: [https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d An Introduction to different Types of Convolutions in Deep Learning] by [http://plpp.de/ Paul-Louis Pröve], July 22, 2017
: [https://towardsdatascience.com/squeeze-and-excitation-networks-9ef5e71eacd7 Squeeze-and-Excitation Networks] by [http://plpp.de/ Paul-Louis Pröve], October 17, 2017
* [https://towardsdatascience.com/deep-convolutional-neural-networks-ccf96f830178 Deep Convolutional Neural Networks] by Pablo Ruiz, October 11, 2018
===ResNet===
* [https://en.wikipedia.org/wiki/Residual_neural_network Residual neural network from Wikipedia]
* [https://wiki.tum.de/display/lfdv/Deep+Residual+Networks Deep Residual Networks] from [https://wiki.tum.de/ TUM Wiki], [[Technical University of Munich]]
* [https://towardsdatascience.com/understanding-and-visualizing-resnets-442284831be8 Understanding and visualizing ResNets] by Pablo Ruiz, October 8, 2018
===RNNs===
* [https://en.wikipedia.org/wiki/Recurrent_neural_network Recurrent neural network from Wikipedia]
===Activation Functions===
* [https://en.wikipedia.org/wiki/Rectifier_(neural_networks) Rectifier (neural networks) from Wikipedia]
* [https://en.wikipedia.org/wiki/Sigmoid_function Sigmoid function from Wikipedia]
* [https://en.wikipedia.org/wiki/Softmax_function Softmax function from Wikipedia]
==Backpropagation==
* [https://en.wikipedia.org/wiki/Backpropagation Backpropagation from Wikipedia]
* [https://en.wikipedia.org/wiki/Rprop Rprop from Wikipedia]
* [http://people.idsia.ch/~juergen/who-invented-backpropagation.html Who Invented Backpropagation?] by [[Jürgen Schmidhuber]] (2014, 2015)
* [https://alexander-schiendorfer.github.io/2020/02/24/a-worked-example-of-backprop.html A worked example of backpropagation] by [https://alexander-schiendorfer.github.io/about.html Alexander Schiendorfer], February 24, 2020 » [[Neural Networks#Backpropagation|Backpropagation]] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=76885 A worked example of backpropagation using Javascript] by [[Colin Jenkins]], [[CCC]], March 16, 2021</ref>
==Gradient==
* [https://en.wikipedia.org/wiki/Gradient Gradient from Wikipedia]
==Software==
: [https://en.wikipedia.org/wiki/SNNS SNNS from Wikipedia]
* [https://en.wikipedia.org/wiki/Comparison_of_deep_learning_software Comparison of deep learning software from Wikipedia]
* [https://github.com/connormcmonigle/reference-neural-network GitHub - connormcmonigle/reference-neural-network] by [[Connor McMonigle]]
* <span id="lucasart"></span>[https://github.com/lucasart/nn GitHub - lucasart/nn: neural network experiment] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=77061 yet another NN library] by lucasart, [[CCC]], April 11, 2021</ref>
==Libraries==
* [https://en.wikipedia.org/wiki/Eigen_%28C%2B%2B_library%29 Eigen (C++ library) from Wikipedia]
* [http://leenissen.dk/fann/wp/ Fast Artificial Neural Network Library (FANN)]
* [https://en.wikipedia.org/wiki/Keras Keras from Wikipedia]
* [https://wiki.python.org/moin/PythonForArtificialIntelligence PythonForArtificialIntelligence - Python Wiki] » [[Python]]
* [https://en.wikipedia.org/wiki/TensorFlow TensorFlow from Wikipedia]
==Videos==
: [https://www.youtube.com/watch?v=9KM9Td6RVgQ Part 6: Training]
: [https://www.youtube.com/watch?v=S4ZUwgesjS8 Part 7: Overfitting, Testing, and Regularization]
* [https://www.youtube.com/playlist?list=PLgomWLYGNl1dL1Qsmgumhcg4HOcWZMd3k NN - Fully Connected Tutorial], [https://en.wikipedia.org/wiki/YouTube YouTube] Videos by [[Finn Eggers]]
* [https://www.youtube.com/watch?v=UdSK7nnJKHU Deep Learning Master Class] by [[Ilya Sutskever]], [https://en.wikipedia.org/wiki/YouTube YouTube] Video
* [https://www.youtube.com/watch?v=Ih5Mr93E-2c&hd=1 Lecture 10 - Neural Networks] from [http://work.caltech.edu/telecourse.html Learning From Data - Online Course (MOOC)] by [https://en.wikipedia.org/wiki/Yaser_Abu-Mostafa Yaser Abu-Mostafa], [https://en.wikipedia.org/wiki/California_Institute_of_Technology Caltech], [https://en.wikipedia.org/wiki/YouTube YouTube] Video
: [https://www.youtube.com/watch?v=lvoHnicueoE Lecture 14 | Deep Reinforcement Learning] by [[Mathematician#SYeung|Serena Yeung]], [http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture14.pdf slides]
: [https://www.youtube.com/watch?v=eZdOkDtYMoo Lecture 15 | Efficient Methods and Hardware for Deep Learning] by [https://scholar.google.com/citations?user=E0iCaa4AAAAJ&hl=en Song Han], [http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture15.pdf slides]
==Music==
* [https://en.wikipedia.org/wiki/John_Zorn#The_Dreamers The Dreamers] & [[:Category:John Zorn|John Zorn]] - Gormenghast, [https://en.wikipedia.org/wiki/Pellucidar:_A_Dreamers_Fantabula Pellucidar: A Dreamers Fantabula] (2015), [https://en.wikipedia.org/wiki/YouTube YouTube] Video
: [[:Category:Marc Ribot|Marc Ribot]], [https://en.wikipedia.org/wiki/Kenny_Wollesen Kenny Wollesen], [https://en.wikipedia.org/wiki/Joey_Baron Joey Baron], [https://en.wikipedia.org/wiki/Jamie_Saft Jamie Saft], [https://en.wikipedia.org/wiki/Trevor_Dunn Trevor Dunn], [https://en.wikipedia.org/wiki/Cyro_Baptista Cyro Baptista], John Zorn
: {{#evu:https://www.youtube.com/watch?v=97MsK88rjy8|alignment=left|valignment=top}}
=References=
<references />
 
'''[[Learning|Up one Level]]'''
[[Category:Marc Ribot]]
[[Category:John Zorn]]
[[Category:Videos]]
