'''Home * People * Ilya Loshchilov'''

[[FILE:IlyaLoshchilov.jpg|border|right|thumb|link=http://loshchilov.com/|Ilya Loshchilov <ref>http://loshchilov.com/</ref>]]

'''Ilya Gennadyevich Loshchilov''',
a Russian computer scientist in the field of evolutionary algorithms (EA), optimization and machine learning.
He worked with [[Marc Schoenauer]] and [[Michèle Sebag]] at [[University of Paris#11|Paris-Sud 11 University]] and [https://en.wikipedia.org/wiki/French_Institute_for_Research_in_Computer_Science_and_Automation INRIA] within their ''TAO Project Team'' <ref>[https://tao.lri.fr/tiki-index.php?page=People TikiWiki | People]</ref> on [https://en.wikipedia.org/wiki/Multi-objective_optimization multi-objective optimization], [https://en.wikipedia.org/wiki/Adaptive_coordinate_descent adaptive coordinate descent] and [https://en.wikipedia.org/wiki/CMA-ES CMA-ES], where he defended his Ph.D. thesis ''Surrogate-Assisted Evolutionary Algorithms'' in 2013 <ref>[[Ilya Loshchilov]] ('''2013'''). ''[http://loshchilov.com/phd.html Surrogate-Assisted Evolutionary Algorithms]''. Ph.D. thesis, [[University of Paris#11|Paris-Sud 11 University]], advisors [[Marc Schoenauer]] and [[Michèle Sebag]]</ref>.
In his thesis, he acknowledged ''Leonid Ivanovich Babak'' <ref>[https://4science.ru/person/Babak-Leonid-Ivanovich Leonid Ivanovich Babak | Person | 4science]</ref> from [https://en.wikipedia.org/wiki/Tomsk_State_University Tomsk State University] as his first scientific advisor, who showed him the power of evolution, and his chess teacher ''Vladimir Andreevich Zhukov'', who showed him the pleasure of thinking. As a postdoc at the [https://en.wikipedia.org/wiki/University_of_Freiburg University of Freiburg], he continued his research with [[Frank Hutter]], combining evolutionary algorithms with [[Deep Learning|deep learning]] techniques.
  
 
=SGDR=
Loshchilov's and Hutter's ''stochastic gradient descent with warm restarts'' (SGDR), introduced in their 2016 paper [5], is mentioned as a method for learning rate scheduling used to train Leela Chess' third-party network Leelenstein [6], which is combined with Allie to form the AllieStein chess playing entity.
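The SGDR schedule anneals the learning rate from a maximum down toward a minimum along a cosine curve, then "restarts" at the maximum, with each cycle typically longer than the last. A minimal sketch of that schedule follows; the function and parameter names (`sgdr_lr`, `t0`, `t_mult`) are illustrative, not taken from any particular library:

```python
import math

def sgdr_lr(epoch, eta_min=0.0, eta_max=0.1, t0=10, t_mult=2):
    """Cosine-annealed learning rate with warm restarts (SGDR-style).

    t0     -- length (in epochs) of the first cycle
    t_mult -- factor by which each successive cycle grows
    """
    # Locate `epoch` inside its restart cycle.
    t_i, t_cur = t0, epoch
    while t_cur >= t_i:
        t_cur -= t_i      # skip past a completed cycle
        t_i *= t_mult     # each new cycle is t_mult times longer
    # Cosine annealing from eta_max down to eta_min within the cycle.
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / t_i))
```

At epoch 0 (and at every restart) the rate sits at `eta_max`; midway through a cycle it has fallen to the average of `eta_max` and `eta_min`.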
 
=Selected Publications=
[7] [8]
* [[Ilya Loshchilov]], [[Frank Hutter]] ('''2016'''). ''CMA-ES for Hyperparameter Optimization of Deep Neural Networks''. [https://arxiv.org/abs/1604.07269 arXiv:1604.07269]
* [[Ilya Loshchilov]], [[Frank Hutter]] ('''2016'''). ''SGDR: Stochastic Gradient Descent with Warm Restarts''. [https://arxiv.org/abs/1608.03983 arXiv:1608.03983]
* [[Ilya Loshchilov]], [[Mathematician‎#TGlasmachers|Tobias Glasmachers]] ('''2016'''). ''[https://dl.acm.org/citation.cfm?id=2931698 Anytime Bi-Objective Optimization with a Hybrid Multi-Objective CMA-ES (HMO-CMA-ES)]''. [https://dblp.org/db/conf/gecco/gecco2016c.html GECCO 2016]
* [[Ilya Loshchilov]], [[Frank Hutter]] ('''2017'''). ''Decoupled Weight Decay Regularization''. [https://arxiv.org/abs/1711.05101 arXiv:1711.05101]
* [[Ilya Loshchilov]], [[Mathematician‎#TGlasmachers|Tobias Glasmachers]], [https://de.wikipedia.org/wiki/Hans-Georg_Beyer_(Informatiker) Hans-Georg Beyer] ('''2017'''). ''Limited-Memory Matrix Adaptation for Large Scale Black-box Optimization''. [https://arxiv.org/abs/1705.06693 arXiv:1705.06693]
* [https://dblp.org/pers/hd/c/Chrabaszcz:Patryk Patryk Chrabaszcz], [[Ilya Loshchilov]], [[Frank Hutter]] ('''2018'''). ''Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari''. [https://arxiv.org/abs/1802.08842 arXiv:1802.08842] <ref>[https://www.bbc.com/news/technology-43241936 AI finds novel way to beat classic Q*bert Atari video game], [https://en.wikipedia.org/wiki/BBC_News BBC News], March 01, 2018</ref> <ref>[https://en.wikipedia.org/wiki/Q*bert Q*bert from Wikipedia]</ref>
* [[Ilya Loshchilov]], [[Mathematician‎#TGlasmachers|Tobias Glasmachers]], [https://de.wikipedia.org/wiki/Hans-Georg_Beyer_(Informatiker) Hans-Georg Beyer] ('''2019'''). ''Large Scale Black-Box Optimization by Limited-Memory Matrix Adaptation''. [[IEEE#EC|IEEE Transactions on Evolutionary Computation]], Vol. 23, No. 2
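The 2017 ''Decoupled Weight Decay Regularization'' paper listed above (the AdamW/SGDW paper) argues that weight decay should scale the weights directly rather than be folded into the gradient as classical L2 regularization does. A minimal SGDW-style sketch of one update step, with momentum and the paper's schedule multiplier omitted and all names illustrative:

```python
def sgdw_step(params, grads, lr=0.01, weight_decay=1e-4):
    """One SGD step with decoupled weight decay.

    The decay term `lr * weight_decay * p` shrinks each weight
    directly; it is NOT added to the gradient, which is the
    distinction the paper draws from coupled L2 regularization.
    """
    return [p - lr * g - lr * weight_decay * p
            for p, g in zip(params, grads)]
```

With an adaptive optimizer such as Adam the two formulations stop being equivalent, which is the paper's motivation for AdamW.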
  
 
=External Links=
* [https://github.com/loshchil/SGDR GitHub - loshchil/SGDR]
* [https://github.com/loshchil/AdamW-and-SGDW GitHub - loshchil/AdamW-and-SGDW: Decoupled Weight Decay Regularization (ICLR 2019)]

=References=
<references />
 
'''[[People|Up one Level]]'''

[[Category:Researcher|Loshchilov]]
[[Category:Mathematician|Loshchilov]]
 

Revision as of 10:38, 13 September 2019
