Neural Networks
=ANNs=
[https://en.wikipedia.org/wiki/Artificial_neural_network Artificial Neural Networks] ('''ANNs''') are a family of [https://en.wikipedia.org/wiki/Machine_learning statistical learning] devices or algorithms used in [https://en.wikipedia.org/wiki/Regression_analysis regression] and [https://en.wikipedia.org/wiki/Binary_classification binary] or [https://en.wikipedia.org/wiki/Multiclass_classification multiclass classification], implemented in [[Hardware|hardware]] or [[Software|software]], and inspired by their biological counterparts. The [https://en.wikipedia.org/wiki/Artificial_neuron artificial neurons] of one or more layers receive one or more inputs (representing dendrites), weight them, and sum them to produce an output (representing a neuron's axon). The sum is passed through a [https://en.wikipedia.org/wiki/Nonlinear_system nonlinear] function known as an [https://en.wikipedia.org/wiki/Activation_function activation function] or transfer function. The transfer functions usually have a [https://en.wikipedia.org/wiki/Sigmoid_function sigmoid shape], but they may also take the form of other non-linear functions, [https://en.wikipedia.org/wiki/Piecewise piecewise] linear functions, or [https://en.wikipedia.org/wiki/Artificial_neuron#Step_function step functions] <ref>[https://en.wikipedia.org/wiki/Artificial_neuron Artificial neuron from Wikipedia]</ref>. The weights of the inputs of each layer are tuned to minimize a [https://en.wikipedia.org/wiki/Loss_function cost or loss function], which is a task in [https://en.wikipedia.org/wiki/Mathematical_optimization mathematical optimization] and machine learning.
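To make the weighted-sum-plus-activation idea concrete, here is a minimal C++ sketch of a single artificial neuron with a sigmoid transfer function (the names <code>sigmoid</code> and <code>neuronOutput</code> are illustrative, not from any particular library):
<pre>
#include <cmath>
#include <vector>

// Sigmoid transfer function: squashes the weighted sum into (0, 1)
double sigmoid(double x) {
  return 1.0 / (1.0 + std::exp(-x));
}

// One artificial neuron: weight the inputs, sum them (plus a bias),
// and pass the sum through the nonlinear activation function
double neuronOutput(const std::vector<double>& inputs,
                    const std::vector<double>& weights, double bias) {
  double sum = bias;
  for (std::size_t i = 0; i < inputs.size(); ++i)
    sum += weights[i] * inputs[i];
  return sigmoid(sum);
}
</pre>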
==Perceptron==
Typical CNN <ref>Typical [https://en.wikipedia.org/wiki/Convolutional_neural_network CNN] architecture, Image by Aphex34, December 16, 2015, [https://creativecommons.org/licenses/by-sa/4.0/deed.en CC BY-SA 4.0], [https://en.wikipedia.org/wiki/Wikimedia_Commons Wikimedia Commons]</ref>
<span id="Residual"></span>
==Residual Net==
[[FILE:ResiDualBlock.png|border|right|thumb|link=https://arxiv.org/abs/1512.03385| A residual block <ref>The fundamental building block of residual networks. Figure 2 in [https://scholar.google.com/citations?user=DhtAFkwAAAAJ Kaiming He], [https://scholar.google.com/citations?user=yuB-cfoAAAAJ&hl=en Xiangyu Zhang], [http://shaoqingren.com/ Shaoqing Ren], [http://www.jiansun.org/ Jian Sun] ('''2015'''). ''Deep Residual Learning for Image Recognition''. [https://arxiv.org/abs/1512.03385 arXiv:1512.03385]</ref> <ref>[https://blog.waya.ai/deep-residual-learning-9610bb62c355 Understand Deep Residual Networks — a simple, modular learning framework that has redefined state-of-the-art] by [https://blog.waya.ai/@waya.ai Michael Dietz], [https://blog.waya.ai/ Waya.ai], May 02, 2017</ref> ]]
A '''Residual net''' (ResNet) adds the input of a layer, typically composed of a convolutional layer and a [https://en.wikipedia.org/wiki/Rectifier_(neural_networks) ReLU] layer, to its output. This modification, like convolutional nets inspired by image classification, enables faster training and deeper networks <ref>[[Tristan Cazenave]] ('''2017'''). ''[http://ieeexplore.ieee.org/document/7875402/ Residual Networks for Computer Go]''. [[IEEE#TOCIAIGAMES|IEEE Transactions on Computational Intelligence and AI in Games]], Vol. PP, No. 99, [http://www.lamsade.dauphine.fr/~cazenave/papers/resnet.pdf pdf]</ref> <ref>[https://wiki.tum.de/display/lfdv/Deep+Residual+Networks Deep Residual Networks] from [https://wiki.tum.de/ TUM Wiki], [[Technical University of Munich]]</ref> <ref>[https://towardsdatascience.com/understanding-and-visualizing-resnets-442284831be8 Understanding and visualizing ResNets] by Pablo Ruiz, October 8, 2018</ref>.
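The skip connection can be sketched in a few lines of C++. This toy version works on a plain vector and uses an elementwise linear layer in place of a real convolution, so it only illustrates the data flow y = relu(F(x) + x); all names are illustrative:
<pre>
#include <algorithm>
#include <vector>

using Vec = std::vector<double>;

double relu(double x) { return std::max(0.0, x); }

// Toy residual block: compute a residual mapping F(x) (here a simple
// elementwise linear layer + ReLU standing in for conv + ReLU), then
// add the block's input x back onto it before the final activation.
Vec residualBlock(const Vec& x, const Vec& w, const Vec& b) {
  Vec y(x.size());
  for (std::size_t i = 0; i < x.size(); ++i) {
    double f = relu(w[i] * x[i] + b[i]); // residual mapping F(x)
    y[i] = relu(f + x[i]);               // skip connection adds the input
  }
  return y;
}
</pre>
Because the block only has to learn the residual F(x) = H(x) - x rather than the full mapping H(x), gradients can flow through the identity path, which is what enables the deeper networks mentioned above.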
=ANNs in Games=
<span id="AlphaZero"></span>
===AlphaZero===
In December 2017, the [[Google]] [[DeepMind]] team along with former [[Giraffe]] author [[Matthew Lai]] reported on their generalized [[AlphaZero]] algorithm, combining [[Deep Learning|Deep learning]] with [[Monte-Carlo Tree Search]]. AlphaZero can achieve, tabula rasa, superhuman performance in many challenging domains with some training effort. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved a superhuman level of play in the games of chess and [[Shogi]] as well as Go, and convincingly defeated a world-champion program in each case <ref>[[David Silver]], [[Thomas Hubert]], [[Julian Schrittwieser]], [[Ioannis Antonoglou]], [[Matthew Lai]], [[Arthur Guez]], [[Marc Lanctot]], [[Laurent Sifre]], [[Dharshan Kumaran]], [[Thore Graepel]], [[Timothy Lillicrap]], [[Karen Simonyan]], [[Demis Hassabis]] ('''2017'''). ''Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm''. [https://arxiv.org/abs/1712.01815 arXiv:1712.01815]</ref>. The open source projects [[Leela Zero]] (Go) and its chess adaptation [[Leela Chess Zero]] successfully re-implemented the ideas of DeepMind.
===NNUE===
[[NNUE]], reverse of &#398;U&#1048;&#1048; - Efficiently Updatable Neural Networks, is an NN architecture intended to replace the [[Evaluation|evaluation]] of [[Shogi]], [[Chess|chess]] and other board game playing [[Alpha-Beta|alpha-beta]] searchers. NNUE was introduced in 2018 by [[Yu Nasu]] <ref>[[Yu Nasu]] ('''2018'''). ''&#398;U&#1048;&#1048; Efficiently Updatable Neural-Network based Evaluation Functions for Computer Shogi''. Ziosoft Computer Shogi Club, [https://github.com/ynasu87/nnue/blob/master/docs/nnue.pdf pdf] (Japanese with English abstract)</ref>, and was used in Shogi adaptations of [[Stockfish]] such as [[YaneuraOu]] <ref>[https://github.com/yaneurao/YaneuraOu GitHub - yaneurao/YaneuraOu: YaneuraOu is the World's Strongest Shogi engine(AI player), WCSC29 1st winner, educational and USI compliant engine]</ref> and [[Kristallweizen]] <ref>[https://github.com/Tama4649/Kristallweizen/ GitHub - Tama4649/Kristallweizen: Kristallweizen, runner-up of the 29th World Computer Shogi Championship]</ref>, apparently with [[AlphaZero]] strength <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=72754 The Stockfish of shogi] by [[Larry Kaufman]], [[CCC]], January 07, 2020</ref>. [[Hisayori Noda|Nodchip]] incorporated NNUE into the chess playing Stockfish 10 as a proof of concept <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=2&t=74059 Stockfish NN release (NNUE)] by [[Henk Drost]], [[CCC]], May 31, 2020</ref>, resulting in the hype around [[Stockfish NNUE]] in summer 2020 <ref>[http://yaneuraou.yaneu.com/2020/06/19/stockfish-nnue-the-complete-guide/ Stockfish NNUE – The Complete Guide], June 19, 2020 (Japanese and English)</ref>. Its heavily overparametrized, computationally most expensive input layer is efficiently [[Incremental Updates|incrementally updated]] during [[Make Move|make]] and [[Unmake Move|unmake move]].
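A minimal sketch of that incremental update, assuming a simplified piece-square feature space (real NNUE uses king-relative HalfKP features, quantization and SIMD; all names below are illustrative):
<pre>
#include <array>
#include <cstdint>

constexpr int kHidden   = 256;  // accumulator width (illustrative)
constexpr int kFeatures = 768;  // e.g. 12 piece types x 64 squares (simplified)

// First-layer weights, one row per board feature (hypothetical layout)
int16_t weights[kFeatures][kHidden];

// The accumulator caches the first layer's sums for the current position,
// so make/unmake move only touch the features that actually changed
struct Accumulator {
  std::array<int32_t, kHidden> v{};

  void addFeature(int f) {            // a piece appears on a square
    for (int j = 0; j < kHidden; ++j) v[j] += weights[f][j];
  }
  void removeFeature(int f) {         // a piece leaves a square
    for (int j = 0; j < kHidden; ++j) v[j] -= weights[f][j];
  }
};

// A quiet move changes only two features: subtract the moving piece on its
// from-square, add it on its to-square - no full first-layer recomputation
void makeQuietMove(Accumulator& acc, int fromFeature, int toFeature) {
  acc.removeFeature(fromFeature);
  acc.addFeature(toFeature);
}
</pre>
Unmaking the move applies the inverse pair of add/remove calls, which is why the expensive input layer stays cheap inside an alpha-beta search.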
<span id="engines"></span>
===NN Chess Programs===
* [[Memory]]
* [[Neural MoveMap Heuristic]]
* [[NNUE]]
* [[Pattern Recognition]]
* [[Temporal Difference Learning]]
* [[John von Neumann]] ('''1956'''). ''Probabilistic Logic and the Synthesis of Reliable Organisms From Unreliable Components''. in
: [[Claude Shannon]], [[John McCarthy]] (eds.) ('''1956'''). ''Automata Studies''. [http://press.princeton.edu/math/series/amh.html Annals of Mathematics Studies], No. 34, [http://www.dna.caltech.edu/courses/cs191/paperscs191/VonNeumann56.pdf pdf]
* [[Nathaniel Rochester]], [[Mathematician#Holland|John H. Holland]], [https://dblp.uni-trier.de/pers/hd/h/Haibt:L=_H= L. H. Haibt], [https://dblp.uni-trier.de/pers/hd/d/Duda:William_L= William L. Duda] ('''1956'''). ''[https://www.semanticscholar.org/paper/Tests-on-a-cell-assembly-theory-of-the-action-of-a-Rochester-Holland/878d615b84cf779e162f62c4a9192d6bddeefbf9 Tests on a Cell Assembly Theory of the Action of the Brain, Using a Large Digital Computer]''. [https://dblp.uni-trier.de/db/journals/tit/tit2n.html#RochesterHHD56 IRE Transactions on Information Theory, Vol. 2], No. 3
* [https://en.wikipedia.org/wiki/Frank_Rosenblatt Frank Rosenblatt] ('''1957'''). ''The Perceptron - a Perceiving and Recognizing Automaton''. Report 85-460-1, [https://en.wikipedia.org/wiki/Calspan#History Cornell Aeronautical Laboratory] <ref>[http://csis.pace.edu/~ctappert/srd2011/rosenblatt-contributions.htm Rosenblatt's Contributions]</ref>
==1990 ...==
* [https://dblp.uni-trier.de/pers/hd/h/Hellstrom:Benjamin_J= Benjamin J. Hellstrom], [[Laveen Kanal|Laveen N. Kanal]] ('''1990'''). ''[https://ieeexplore.ieee.org/document/5726889 The definition of necessary hidden units in neural networks for combinatorial optimization]''. [https://dblp.uni-trier.de/db/conf/ijcnn/ijcnn1990.html IJCNN 1990]
* [[Mathematician#XZhang|Xiru Zhang]], [https://dblp.uni-trier.de/pers/hd/m/McKenna:Michael Michael McKenna], [[Mathematician#JPMesirov|Jill P. Mesirov]], [[David Waltz]] ('''1990'''). ''[https://www.sciencedirect.com/science/article/pii/016781919090084M The backpropagation algorithm on grid and hypercube architectures]''. [https://www.journals.elsevier.com/parallel-computing Parallel Computing], Vol. 14, No. 3
* [[Simon Lucas]], [https://dblp.uni-trier.de/pers/hd/d/Damper:Robert_I= Robert I. Damper] ('''1990'''). ''[https://www.tandfonline.com/doi/abs/10.1080/09540099008915669 Syntactic Neural Networks]''. [https://www.tandfonline.com/toc/ccos20/current Connection Science], Vol. 2, No. 3
'''1991'''
* [[Mathematician#SHochreiter|Sepp Hochreiter]] ('''1991'''). ''Untersuchungen zu dynamischen neuronalen Netzen''. Diploma thesis, [[Technical University of Munich|TU Munich]], advisor [[Jürgen Schmidhuber]], [http://people.idsia.ch/~juergen/SeppHochreiter1991ThesisAdvisorSchmidhuber.pdf pdf] (German) <ref>[http://people.idsia.ch/~juergen/fundamentaldeeplearningproblem.html Sepp Hochreiter's Fundamental Deep Learning Problem (1991)] by [[Jürgen Schmidhuber]], 2013</ref>
* [[Yoav Freund]], [[Mathematician#DHHaussler|David Haussler]] ('''1991'''). ''Unsupervised Learning of Distributions of Binary Vectors Using 2-Layer Networks''. [http://dblp.uni-trier.de/db/conf/nips/nips1991.html#FreundH91 NIPS 1991]
* [[Byoung-Tak Zhang]], [[Gerd Veenker]] ('''1991'''). ''[http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=170480&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D170480 Neural networks that teach themselves through genetic discovery of novel examples]''. [http://ieeexplore.ieee.org/xpl/conhome.jsp?punumber=1000500 IEEE IJCNN'91], [https://bi.snu.ac.kr/Publications/Conferences/International/IJCNN91.pdf pdf]
* [[Simon Lucas]], [https://dblp.uni-trier.de/pers/hd/d/Damper:Robert_I= Robert I. Damper] ('''1991'''). ''[https://link.springer.com/chapter/10.1007/978-1-4615-3752-6_30 Syntactic neural networks in VLSI]''. [https://link.springer.com/book/10.1007/978-1-4615-3752-6 VLSI for Artificial Intelligence and Neural Networks]
* [[Simon Lucas]] ('''1991'''). ''[https://eprints.soton.ac.uk/256263/ Connectionist architectures for syntactic pattern recognition]''. Ph.D. thesis, [https://en.wikipedia.org/wiki/University_of_Southampton University of Southampton]
'''1992'''
* [[Michael Reiss]] ('''1992'''). ''Temporal Sequence Processing in Neural Networks''. Ph.D. thesis, [https://en.wikipedia.org/wiki/King%27s_College_London King's College London], advisor [[Mathematician#JGTaylor|John G. Taylor]], [http://www.reiss.demon.co.uk/misc/m_reiss_phd.pdf pdf]
'''1997'''
* [[Don Beal]], [[Martin C. Smith]] ('''1997'''). ''Learning Piece Values Using Temporal Differences''. [[ICGA Journal#20_3|ICCA Journal, Vol. 20, No. 3]]
* [https://dblp.uni-trier.de/pers/hd/t/Thiesing:Frank_M= Frank M. Thiesing], [[Oliver Vornberger]] ('''1997'''). ''Forecasting Sales Using Neural Networks''. [https://dblp.uni-trier.de/db/conf/fuzzy/fuzzy1997.html Fuzzy Days 1997], [http://www2.inf.uos.de/papers_pdf/fuzzydays_97.pdf pdf]
* [[Simon Lucas]] ('''1997'''). ''[https://link.springer.com/chapter/10.1007/BFb0032531 Forward-Backward Building Blocks for Evolving Neural Networks with Intrinsic Learning Behaviors]''. [https://dblp.uni-trier.de/db/conf/iwann/iwann1997.html IWANN 1997]
'''1998'''
* [[Kieran Greer]] ('''1998'''). ''A Neural Network Based Search Heuristic and its Application to Computer Chess''. D.Phil. Thesis, [https://en.wikipedia.org/wiki/University_of_Ulster University of Ulster]
'''2016'''
* [[James Kirkpatrick]], [[Mathematician#RPascanu|Razvan Pascanu]], [[Neil C. Rabinowitz]], [[Joel Veness]], [[Guillaume Desjardins]], [[Mathematician#AARusu|Andrei A. Rusu]], [[Kieran Milan]], [[John Quan]], [[Tiago Ramalho]], [[Agnieszka Grabska-Barwinska]], [[Demis Hassabis]], [[Claudia Clopath]], [[Dharshan Kumaran]], [[Mathematician#RHadsell|Raia Hadsell]] ('''2016'''). ''Overcoming catastrophic forgetting in neural networks''. [https://arxiv.org/abs/1612.00796 arXiv:1612.00796] <ref>[http://www.talkchess.com/forum3/viewtopic.php?f=7&t=70704 catastrophic forgetting] by [[Daniel Shawul]], [[CCC]], May 09, 2019</ref>
* [https://dblp.uni-trier.de/pers/hd/n/Niu:Zhenxing Zhenxing Niu], [https://dblp.uni-trier.de/pers/hd/z/Zhou:Mo Mo Zhou], [https://dblp.uni-trier.de/pers/hd/w/Wang_0003:Le Le Wang], [[Xinbo Gao]], [https://dblp.uni-trier.de/pers/hd/h/Hua_0001:Gang Gang Hua] ('''2016'''). ''Ordinal Regression with Multiple Output CNN for Age Estimation''. [https://dblp.uni-trier.de/db/conf/cvpr/cvpr2016.html CVPR 2016], [https://www.cv-foundation.org/openaccess/content_cvpr_2016/app/S21-20.pdf pdf]
* [[Li Jing]], [[Yichen Shen]], [[Tena Dubček]], [[John Peurifoy]], [[Scott Skirlo]], [[Mathematician#YLeCun|Yann LeCun]], [[Max Tegmark]], [[Marin Soljačić]] ('''2016'''). ''Tunable Efficient Unitary Neural Networks (EUNN) and their application to RNNs''. [https://arxiv.org/abs/1612.05231 arXiv:1612.05231] <ref>[http://talkchess.com/forum3/viewtopic.php?f=2&t=74059 Stockfish NN release (NNUE)] by [[Henk Drost]], [[CCC]], May 31, 2020 » [[Stockfish]]</ref>
'''2017'''
* [[Yutian Chen]], [[Matthew W. Hoffman]], [[Sergio Gomez Colmenarejo]], [[Misha Denil]], [[Timothy Lillicrap]], [[Matthew Botvinick]], [[Nando de Freitas]] ('''2017'''). ''Learning to Learn without Gradient Descent by Gradient Descent''. [https://arxiv.org/abs/1611.03824v6 arXiv:1611.03824v6], [http://dblp.uni-trier.de/db/conf/icml/icml2017.html ICML 2017]
* [https://dblp.org/pers/hd/s/Serb:Alexander Alexantrou Serb], [[Edoardo Manino]], [https://dblp.org/pers/hd/m/Messaris:Ioannis Ioannis Messaris], [https://dblp.org/pers/hd/t/Tran=Thanh:Long Long Tran-Thanh], [https://www.orc.soton.ac.uk/people/tp1f12 Themis Prodromakis] ('''2017'''). ''[https://eprints.soton.ac.uk/425616/ Hardware-level Bayesian inference]''. [https://nips.cc/Conferences/2017 NIPS 2017] » [[Analog Evaluation]]
'''2018'''
* [[Yu Nasu]] ('''2018'''). ''&#398;U&#1048;&#1048; Efficiently Updatable Neural-Network based Evaluation Functions for Computer Shogi''. Ziosoft Computer Shogi Club, [https://github.com/ynasu87/nnue/blob/master/docs/nnue.pdf pdf] (Japanese with English abstract) » [[NNUE]]
* [[Kei Takada]], [[Hiroyuki Iizuka]], [[Masahito Yamamoto]] ('''2018'''). ''[https://link.springer.com/chapter/10.1007%2F978-3-319-75931-9_2 Computer Hex Algorithm Using a Move Evaluation Method Based on a Convolutional Neural Network]''. [https://link.springer.com/bookseries/7899 Communications in Computer and Information Science] » [[Hex]]
* [[Matthia Sabatelli]], [[Francesco Bidoia]], [[Valeriu Codreanu]], [[Marco Wiering]] ('''2018'''). ''Learning to Evaluate Chess Positions with Deep Neural Networks and Limited Lookahead''. ICPRAM 2018, [http://www.ai.rug.nl/~mwiering/GROUP/ARTICLES/ICPRAM_CHESS_DNN_2018.pdf pdf]
'''2019'''
* [[Marius Lindauer]], [[Frank Hutter]] ('''2019'''). ''Best Practices for Scientific Research on Neural Architecture Search''. [https://arxiv.org/abs/1909.02453 arXiv:1909.02453]
* [[Guy Haworth]] ('''2019'''). ''Chess endgame news: an endgame challenge for neural nets''. [[ICGA Journal#41_3|ICGA Journal, Vol. 41, No. 3]] » [[Endgame]]
==2020 ...==
* [[Oisín Carroll]], [[Joeran Beel]] ('''2020'''). ''Finite Group Equivariant Neural Networks for Games''. [https://arxiv.org/abs/2009.05027 arXiv:2009.05027]
=Blog & Forum Posts=
==2020 ...==
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74077 How to work with batch size in neural network] by Gertjan Brouwer, [[CCC]], June 02, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74531 NNUE accessible explanation] by [[Martin Fierz]], [[CCC]], July 21, 2020 » [[NNUE]]
: [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74531&start=1 Re: NNUE accessible explanation] by [[Jonathan Rosenthal]], [[CCC]], July 23, 2020
: [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74531&start=5 Re: NNUE accessible explanation] by [[Jonathan Rosenthal]], [[CCC]], July 24, 2020
* [http://www.talkchess.com/forum3/viewtopic.php?f=2&t=74607 LC0 vs. NNUE - some tech details...] by [[Srdja Matovic]], [[CCC]], July 29, 2020 » [[Leela Chess Zero#Lc0|Lc0]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74771 AB search with NN on GPU...] by [[Srdja Matovic]], [[CCC]], August 13, 2020 » [[GPU]] <ref>[https://forums.developer.nvidia.com/t/kernel-launch-latency/62455 kernel launch latency - CUDA / CUDA Programming and Performance - NVIDIA Developer Forums] by LukeCuda, June 18, 2018</ref>
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74777 Neural Networks weights type] by [[Fabio Gobbato]], [[CCC]], August 13, 2020 » [[Stockfish NNUE]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=74955 Train a neural network evaluation] by [[Fabio Gobbato]], [[CCC]], September 01, 2020 » [[Automated Tuning]], [[NNUE]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75042 Neural network quantization] by [[Fabio Gobbato]], [[CCC]], September 08, 2020 » [[NNUE]]
* [http://www.talkchess.com/forum3/viewtopic.php?f=7&t=75190 First success with neural nets] by [[Jonathan Kreuzer]], [[CCC]], September 23, 2020
=External Links=
* [https://towardsdatascience.com/types-of-convolutions-in-deep-learning-717013397f4d An Introduction to different Types of Convolutions in Deep Learning] by [http://plpp.de/ Paul-Louis Pröve], July 22, 2017
* [https://towardsdatascience.com/squeeze-and-excitation-networks-9ef5e71eacd7 Squeeze-and-Excitation Networks] by [http://plpp.de/ Paul-Louis Pröve], October 17, 2017
* [https://towardsdatascience.com/deep-convolutional-neural-networks-ccf96f830178 Deep Convolutional Neural Networks] by Pablo Ruiz, October 11, 2018
===ResNet===
* [https://en.wikipedia.org/wiki/Residual_neural_network Residual neural network from Wikipedia]
* [https://wiki.tum.de/display/lfdv/Deep+Residual+Networks Deep Residual Networks] from [https://wiki.tum.de/ TUM Wiki], [[Technical University of Munich]]
* [https://towardsdatascience.com/understanding-and-visualizing-resnets-442284831be8 Understanding and visualizing ResNets] by Pablo Ruiz, October 8, 2018
===RNNs===
* [https://en.wikipedia.org/wiki/Recurrent_neural_network Recurrent neural network from Wikipedia]
===Activation Functions===
* [https://en.wikipedia.org/wiki/Rectifier_(neural_networks) Rectifier (neural networks) from Wikipedia]
* [https://en.wikipedia.org/wiki/Sigmoid_function Sigmoid function from Wikipedia]
* [https://en.wikipedia.org/wiki/Softmax_function Softmax function from Wikipedia]
==Backpropagation==
* [https://en.wikipedia.org/wiki/Backpropagation Backpropagation from Wikipedia]
: [https://www.youtube.com/watch?v=9KM9Td6RVgQ Part 6: Training]
: [https://www.youtube.com/watch?v=S4ZUwgesjS8 Part 7: Overfitting, Testing, and Regularization]
* [https://www.youtube.com/playlist?list=PLgomWLYGNl1dL1Qsmgumhcg4HOcWZMd3k NN - Fully Connected Tutorial], [https://en.wikipedia.org/wiki/YouTube YouTube] Videos by [[Finn Eggers]]
* [https://www.youtube.com/watch?v=UdSK7nnJKHU Deep Learning Master Class] by [[Ilya Sutskever]], [https://en.wikipedia.org/wiki/YouTube YouTube] Video
* [https://www.youtube.com/watch?v=Ih5Mr93E-2c&hd=1 Lecture 10 - Neural Networks] from [http://work.caltech.edu/telecourse.html Learning From Data - Online Course (MOOC)] by [https://en.wikipedia.org/wiki/Yaser_Abu-Mostafa Yaser Abu-Mostafa], [https://en.wikipedia.org/wiki/California_Institute_of_Technology Caltech], [https://en.wikipedia.org/wiki/YouTube YouTube] Video
