Revision as of 20:15, 23 June 2018


Marcus Hutter [1]

Marcus Hutter,
a German physicist and computer scientist, is a professor in the Research School of Computer Science at the Australian National University. He previously worked as a researcher at IDSIA in Lugano, Switzerland, in Jürgen Schmidhuber's group. Marcus Hutter holds a PhD and BSc in physics from the Ludwig Maximilian University of Munich, and a Habilitation, MSc, and BSc in computer science from the Technical University of Munich. He is the author of the book Universal Artificial Intelligence [2], which develops a novel algorithmic information theory [3] perspective on AI and introduces the universal algorithmic agent called AIXI.

AIXI

Quote from The AIXI Model in One Line [4]

It is actually possible to write down the AIXI model explicitly in one line, although one should not expect to be able to grasp the full meaning and power from this compact representation.
AIXI is an agent that interacts with an environment in cycles k = 1, 2, ..., m. In cycle k, AIXI takes action a_k (e.g. a limb movement) based on past perceptions o_1 r_1 ... o_{k-1} r_{k-1} as defined below. Thereafter, the environment provides a (regular) observation o_k (e.g. a camera image) to AIXI and a real-valued reward r_k. The reward can be very scarce, e.g. just +1 (-1) for winning (losing) a chess game, and 0 at all other times. Then the next cycle k+1 starts. Given the above, AIXI is defined by:

a_k := arg max_{a_k} Σ_{o_k r_k} ... max_{a_m} Σ_{o_m r_m} [r_k + ... + r_m] Σ_{q : U(q, a_1..a_m) = o_1 r_1 ... o_m r_m} 2^{-l(q)}

The expression shows that AIXI tries to maximize its total future reward r_k + ... + r_m. If the environment is modeled by a deterministic program q, then the future perceptions ... o_k r_k ... o_m r_m = U(q, a_1..a_m) can be computed, where U is a universal (monotone Turing) machine executing q given a_1..a_m. Since q is unknown, AIXI has to maximize its expected reward, i.e. average r_k + ... + r_m over all possible perceptions created by all possible environments q. The simpler an environment, the higher is its a-priori contribution 2^{-l(q)}, where simplicity is measured by the length l of program q. Since noisy environments are just mixtures of deterministic environments, they are automatically included. The sums in the formula constitute the averaging process. Averaging and maximization have to be performed in chronological order, hence the interleaving of max and Σ (similarly to minimax for games).
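The interleaved max/Σ structure of the formula can be illustrated with a tiny, computable sketch. This is emphatically not Hutter's AIXI, which sums over all programs on a universal Turing machine and is incomputable; here the program class is replaced by two hand-written deterministic environments with made-up "program lengths" l(q), so each receives prior weight 2^-l(q). All names (`env_likes_zero`, `aixi_action`, the chosen lengths) are illustrative assumptions, not from the source.

```python
# Toy illustration of the AIXI decision rule: maximize over the current
# action, sum prior-weighted rewards over percept branches, in chronological
# order. The "programs" q are two hand-written deterministic environments
# with hypothetical lengths l(q) = 2 and 3, giving prior weights 2^-l(q).

ACTIONS = (0, 1)

def env_likes_zero(acts):
    """Deterministic environment: observation 0, reward 1 for action 0."""
    return (0, 1.0 if acts[-1] == 0 else 0.0)

def env_likes_one(acts):
    """Deterministic environment: observation 0, reward 1 for action 1."""
    return (0, 1.0 if acts[-1] == 1 else 0.0)

# Prior weights 2^-l(q) for the made-up program lengths 2 and 3.
ENVS = [(env_likes_zero, 2 ** -2), (env_likes_one, 2 ** -3)]

def value(acts, consistent, k, m):
    """Interleaved max/sum: maximize over action a_k, then sum the
    prior-weighted reward over the percept branches the still-consistent
    environments can emit, recursing until the horizon m."""
    if k > m:
        return 0.0
    best = float("-inf")
    for a in ACTIONS:
        new_acts = acts + (a,)
        # Group still-consistent environments by the percept (o_k, r_k)
        # they emit; environments emitting the same percept share a branch.
        branches = {}
        for env, w in consistent:
            branches.setdefault(env(new_acts), []).append((env, w))
        total = 0.0
        for (obs, r), group in branches.items():
            w_sum = sum(w for _, w in group)
            # weighted reward now + weighted future rewards on this branch
            total += w_sum * r + value(new_acts, group, k + 1, m)
        best = max(best, total)
    return best

def aixi_action(m=3):
    """Pick a_1 maximizing the prior-weighted total reward r_1 + ... + r_m."""
    scores = {}
    for a in ACTIONS:
        acts = (a,)
        branches = {}
        for env, w in ENVS:
            branches.setdefault(env(acts), []).append((env, w))
        scores[a] = sum(
            sum(w for _, w in g) * r + value(acts, g, 2, m)
            for (o, r), g in branches.items()
        )
    return max(scores, key=scores.get), scores
```

Because the environment rewarding action 0 was assigned the shorter program, its 2^-l(q) weight dominates and the sketch picks action 0 at the root, mirroring the Occam bias in the quoted formula.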

Selected Publications

[5] [6]

2005 ...

2010 ...

* [[Joel Veness]], [[Kee Siong Ng]], [[Marcus Hutter]], [[William Uther]], [[David Silver]] ('''2011'''). ''A Monte-Carlo AIXI Approximation''. [https://en.wikipedia.org/wiki/Journal_of_Artificial_Intelligence_Research JAIR], Vol. 40, [http://www.aaai.org/Papers/JAIR/Vol40/JAIR-4004.pdf pdf]
* [[Tor Lattimore]], [[Marcus Hutter]] ('''2011'''). ''Time Consistent Discounting''. [http://www.informatik.uni-trier.de/~ley/db/conf/alt/alt2011.html Algorithmic Learning Theory], [https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science Lecture Notes in Computer Science] 6925, [https://en.wikipedia.org/wiki/Springer_Science%2BBusiness_Media Springer]
* [[Tor Lattimore]], [[Marcus Hutter]] ('''2011'''). ''No Free Lunch versus Occam's Razor in Supervised Learning''. [https://en.wikipedia.org/wiki/Ray_Solomonoff Solomonoff] Memorial, [https://en.wikipedia.org/wiki/Lecture_Notes_in_Computer_Science Lecture Notes in Computer Science] 7070, [https://en.wikipedia.org/wiki/Springer_Science%2BBusiness_Media Springer], [https://arxiv.org/abs/1111.3846 arXiv:1111.3846]
* [[Tor Lattimore]], [[Marcus Hutter]] ('''2012'''). ''PAC Bounds for Discounted MDPs''. [http://www.informatik.uni-trier.de/~ley/db/conf/alt/alt2012.htm Algorithmic Learning Theory], [https://arxiv.org/abs/1202.3890 arXiv:1202.3890] <ref>[https://en.wikipedia.org/wiki/Markov_decision_process Markov decision process from Wikipedia]</ref>
* [[Peter Auer]], [[Marcus Hutter]], [[Laurent Orseau]] ('''2013'''). ''[http://drops.dagstuhl.de/opus/volltexte/2013/4340/ Reinforcement Learning]''. [http://dblp.uni-trier.de/db/journals/dagstuhl-reports/dagstuhl-reports3.html#AuerHO13 Dagstuhl Reports, Vol. 3, No. 8], DOI: [http://drops.dagstuhl.de/opus/volltexte/2013/4340/ 10.4230/DagRep.3.8.1], URN: [http://drops.dagstuhl.de/opus/volltexte/2013/4340/ urn:nbn:de:0030-drops-43409]

2015 ...

External Links

References
