'''[[Main Page|Home]] * [[Learning]] * Supervised Learning'''
 
'''Supervised Learning''' (SL)<br/>
is learning from examples provided by a knowledgeable external [https://en.wikipedia.org/wiki/Supervisor supervisor].
In machine learning, supervised learning is a technique for deducing a function from [https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets training data]. The training data consist of pairs of input objects and desired outputs. After parameter adjustment and learning, the performance of the resulting function should be measured on a test set that is separate from the training set <ref>[https://en.wikipedia.org/wiki/Supervised_learning Supervised learning from Wikipedia]</ref>.
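To make this workflow concrete, here is a minimal sketch in [https://www.python.org/ Python] with [https://numpy.org/ NumPy] (illustrative only, with made-up data; the linear model and least-squares fit stand in for whatever function class is being learned):
<pre>
import numpy as np

# Toy training data: pairs of input objects (feature vectors) and desired outputs.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))                    # 200 examples, 3 features each
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Keep a separate test set; only the training split is used for fitting.
X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# Deduce a (here: linear) function from the training pairs via least squares.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Measure performance on the unseen test set, not on the training data.
test_mse = np.mean((X_test @ w - y_test) ** 2)
print(f"test MSE: {test_mse:.4f}")
</pre>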
  
=SL in a nutshell=
[[FILE:Supervised machine learning in a nutshell.svg|640px|none|border|text-bottom]]
<ref>A data flow diagram shows the machine learning process in summary, by [https://en.wikipedia.org/wiki/User:EpochFail EpochFail], November 15, 2015, [https://en.wikipedia.org/wiki/Wikimedia_Commons Wikimedia Commons]</ref>
=SL in Chess=
In computer games and chess, supervised learning techniques have been used in [[Automated Tuning|automated tuning]] and to train [[Neural Networks|neural network]] game and chess programs. Input objects are [[Chess Position|chess positions]]. The desired output is either the supervisor's move choice in that position ([[Automated Tuning#MoveAdaption|move adaption]]), or a [[Score|score]] provided by an [[Oracle|oracle]] ([[Automated Tuning#ValueAdaption|value adaption]]).
  
==Move Adaption==
 
[[Automated Tuning#MoveAdaption|Move adaption]] can be implemented by [[Automated Tuning#LinearRegression|linear regression]], minimizing a [https://en.wikipedia.org/wiki/Loss_function cost function] based on the rank of the desired move in a [[Move List|move list]] ordered by score <ref>[[Tony Marsland]] ('''1985'''). ''Evaluation-Function Factors''. [[ICGA Journal#8_2|ICCA Journal, Vol. 8, No. 2]], [http://webdocs.cs.ualberta.ca/~tony/OldPapers/evaluation.pdf pdf]</ref>.
 
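A minimal sketch of such a rank-based cost (illustrative, not Marsland's exact formulation; the per-move feature vectors and the <code>rank_cost</code> helper are made up for the example):
<pre>
import numpy as np

def rank_cost(weights, positions):
    """Total rank of the desired moves: for each position, score all legal
    moves, sort them best-first, and charge the rank (0 = no cost) at which
    the supervisor's move appears."""
    total = 0
    for move_features, desired_index in positions:
        scores = move_features @ weights        # score every legal move
        order = np.argsort(-scores)             # indices, best move first
        rank = int(np.where(order == desired_index)[0][0])
        total += rank                           # penalty grows with the rank
    return total

# Toy data: 5 positions, each with 10 candidate moves described by 4 features;
# the supervisor chose move index 3 in every position.
rng = np.random.default_rng(0)
positions = [(rng.normal(size=(10, 4)), 3) for _ in range(5)]
print(rank_cost(np.ones(4), positions))         # tuning would minimize this
</pre>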
  
==Value Adaption==
 
One common idea to provide an [[Oracle|oracle]] for supervised [[Automated Tuning#ValueAdaption|value adaption]] is to use the win/draw/loss outcome of finished games for all training positions selected from those games. Discrete {-1, 0, +1} or {0, ½, 1} desired values are the domain of [[Automated Tuning#LogisticRegression|logistic regression]] and require the evaluation scores to be mapped from [[Pawn Advantage, Win Percentage, and Elo|pawn advantage]] to winning probabilities using the [https://en.wikipedia.org/wiki/Sigmoid_function sigmoid function], yielding a [https://en.wikipedia.org/wiki/Mean_squared_error mean squared error] cost function to minimize, as demonstrated by [[Texel's Tuning Method]].
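A minimal sketch of that error computation (the scaling constant <code>K</code> and the training tuples are made-up illustrative values; real implementations fit the constant to the engine's own scores):
<pre>
K = 1.13  # engine-specific scaling constant, normally fitted to the data

def win_probability(score_cp):
    """Map an evaluation score in centipawns to a winning probability
    via the sigmoid used in Texel's tuning method."""
    return 1.0 / (1.0 + 10.0 ** (-K * score_cp / 400.0))

def mean_squared_error(training_set):
    """training_set holds (score in centipawns, game result) tuples,
    where the result of the finished game is 0, 0.5 or 1."""
    return sum((result - win_probability(score)) ** 2
               for score, result in training_set) / len(training_set)

# Toy training tuples: (static evaluation of a position, outcome of its game).
data = [(35, 1.0), (-120, 0.0), (10, 0.5), (250, 1.0)]
print(mean_squared_error(data))  # tuning adjusts eval weights to minimize this
</pre>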
=See also=

=Selected Publications=

==1960 ...==

* [[Arthur Samuel]] ('''1967'''). ''Some Studies in Machine Learning. Using the Game of Checkers. II-Recent Progress''. pdf

==1980 ...==

==1990 ...==

==2000 ...==
* [[Dave Gomboc]], [[Michael Buro]], [[Tony Marsland]] ('''2005'''). ''Tuning Evaluation Functions by Maximizing Concordance''. [https://en.wikipedia.org/wiki/Theoretical_Computer_Science_%28journal%29 Theoretical Computer Science], Vol. 349, No. 2, [http://www.cs.ualberta.ca/%7Emburo/ps/tcs-learn.pdf pdf]
* [[Amos Storkey]], [https://www.k.u-tokyo.ac.jp/pros-e/person/masashi_sugiyama/masashi_sugiyama.htm Masashi Sugiyama] ('''2006'''). ''[http://papers.neurips.cc/paper/3019-mixture-regression-for-covariate-shift Mixture Regression for Covariate Shift]''. [https://dblp.uni-trier.de/db/conf/nips/nips2006.html NIPS 2006]
* [[Eli David|Omid David]], [[Moshe Koppel]], [[Nathan S. Netanyahu]] ('''2008'''). ''Genetic Algorithms for Mentor-Assisted Evaluation Function Optimization''. [http://www.sigevo.org/gecco-2008/ GECCO '08], [https://arxiv.org/abs/1711.06839 arXiv:1711.06839]
* [[Eli David|Omid David]], [[Jaap van den Herik]], [[Moshe Koppel]], [[Nathan S. Netanyahu]] ('''2009'''). ''Simulating Human Grandmasters: Evolution and Coevolution of Evaluation Functions''. [http://www.sigevo.org/gecco-2009/ GECCO '09], [https://arxiv.org/abs/1711.06840 arXiv:1711.06840]
 
==2010 ...==
* [[Tor Lattimore]], [[Marcus Hutter]] ('''2011'''). ''No Free Lunch versus Occam's Razor in Supervised Learning''. [https://en.wikipedia.org/wiki/Ray_Solomonoff Solomonoff] Memorial, [https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science Lecture Notes in Computer Science], [https://en.wikipedia.org/wiki/Springer-Verlag Springer], [https://arxiv.org/abs/1111.3846 arXiv:1111.3846] <ref>[https://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization No free lunch in search and optimization - Wikipedia]</ref> <ref>[https://en.wikipedia.org/wiki/Occam%27s_razor Occam's razor from Wikipedia]</ref>

=Forum Posts=

=External Links=

=References=
<references />

'''[[Learning|Up one Level]]'''