'''[[Main Page|Home]] * [[Programming]] * [[Algorithms]] * SPSA'''
'''SPSA''' (Simultaneous Perturbation Stochastic Approximation),<br/>
a [https://en.wikipedia.org/wiki/Stochastic_approximation stochastic approximation] algorithm devised in the late 80s <ref>[[James C. Spall]] ('''1987'''). ''A Stochastic Approximation Technique for Generating Maximum Likelihood Parameter Estimates''. [http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=4789283 Proceedings of the American Control Conference], [https://en.wikipedia.org/wiki/Minneapolis Minneapolis, MN], [http://www.jhuapl.edu/spsa/PDF-SPSA/Spall_A_Stochastic_Approximation.PDF pdf reprint]</ref> and 90s by [[James C. Spall]] <ref>[[James C. Spall]] ('''1992'''). ''Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation''. [[IEEE#TAC|IEEE Transactions on Automatic Control]], Vol. 37, No. 3</ref>. It is an extension of the Finite Difference Stochastic Approximation ('''FDSA''') algorithm, aka the [https://en.wikipedia.org/wiki/Stochastic_approximation#Kiefer-Wolfowitz_algorithm Kiefer-Wolfowitz algorithm], introduced in 1952 by [[Mathematician#JKiefer|Jack Kiefer]] and [[Mathematician#JWolfowitz|Jacob Wolfowitz]] <ref>[[Mathematician#JKiefer|Jack Kiefer]], [[Mathematician#JWolfowitz|Jacob Wolfowitz]] ('''1952'''). ''[https://projecteuclid.org/euclid.aoms/1177729392 Stochastic Estimation of the Maximum of a Regression Function]''. [https://en.wikipedia.org/wiki/Annals_of_Mathematical_Statistics The Annals of Mathematical Statistics], Vol. 23, No. 3</ref>, which in turn was motivated by the publication of the [https://en.wikipedia.org/wiki/Stochastic_approximation#Robbins.E2.80.93Monro_algorithm Robbins-Monro algorithm] in 1951 <ref>[[Mathematician#HRobbins|Herbert Robbins]], [http://articles.mcall.com/1995-03-07/news/3017569_1_burlington-korean-war-memorial-services Sutton Monro] ('''1951'''). ''[https://projecteuclid.org/euclid.aoms/1177729586 A Stochastic Approximation Method]''. [https://en.wikipedia.org/wiki/Annals_of_Mathematical_Statistics The Annals of Mathematical Statistics], Vol. 22, No. 3</ref>.
The SPSA algorithm is suited for high-dimensional [https://en.wikipedia.org/wiki/Optimization_problem optimization problems] with an [https://en.wikipedia.org/wiki/Loss_function objective function] over a [https://en.wikipedia.org/wiki/Dimension p-dimensional] vector of adjustable weights, [https://en.wikipedia.org/wiki/Theta Theta] or Θ. It uses a [https://en.wikipedia.org/wiki/Gradient gradient] [https://en.wikipedia.org/wiki/Approximation approximation] that requires only N+1 or 2N objective function measurements over all N [[Iteration|iterations]], regardless of the dimension p of the optimization problem - as opposed to FDSA, which needs p + 1 objective function measurements or simulations per step. At each iteration, a simultaneous [https://en.wikipedia.org/wiki/Perturbation_theory perturbation] vector Δ of mutually independent zero-mean [[Pseudorandom Number Generator|random variables]] is generated; a good choice for each component is the [https://en.wikipedia.org/wiki/Rademacher_distribution Rademacher distribution], taking the values +1 and -1 each with [https://en.wikipedia.org/wiki/Probability probability] ½. Two feature vectors Θ+ and Θ- are calculated by adding and subtracting the delta vector, scaled by the gain sequence ck, to/from the current feature vector Θ, and their objective function measurements are compared. Dependent on the outcome, and scaled by the gain sequences ak and ck, the current feature vector is updated accordingly. Both gain sequences decrease with increasing iteration count, converging to 0. The theory pertains to both local optimization and global optimization in the face of multiple local optima <ref>[http://dblp.uni-trier.de/pers/hd/m/Maryak:John_L= John L. Maryak], [http://dblp.uni-trier.de/pers/hd/c/Chin:Daniel_C= Daniel C. Chin] ('''2008'''). ''Global Random Optimization by Simultaneous Perturbation Stochastic Approximation''. [[IEEE#TAC|IEEE Transactions on Automatic Control]], Vol. 53, No. 3, [http://www.jhuapl.edu/spsa/PDF-SPSA/Maryak_Chin_IEEETAC08.pdf pdf]</ref>.
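Written out, the standard two-measurement SPSA recursion takes the following compact form (a sketch with f the objective function to maximize and notation as in the pseudo code below; the usual statement in the literature minimizes, with a minus sign in the update):
<pre>
ĝp(Θ) = ( f(Θ + ck·Δ) − f(Θ − ck·Δ) ) / (2·ck·Δp)   for each component p of the gradient estimate ĝ
Θ    ← Θ + ak·ĝ(Θ)                                  only the two measurements f(Θ ± ck·Δ) per iteration
</pre>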
=Automated Tuning=
<pre>
α = 0.602; γ = 0.101;                        // Spall's recommended decay exponents
for (k = 0; k < N; k++) {
   ak = a / (k + 1 + A)^α;                   // step size gain, decays to 0
   ck = c / (k + 1)^γ;                       // perturbation size gain, decays to 0
   for each p                                // Rademacher: Δp = +1 or -1, probability ½ each
      Δp = 2 * round( rand() / (RAND_MAX + 1.0) ) - 1.0;
   Θ+ = Θ + ck*Δ;                            // two perturbed feature vectors
   Θ- = Θ - ck*Δ;
   Θ += ak * match(Θ+, Θ-) / (ck*Δ);         // component-wise: Θp += ak * match / (ck*Δp)
}
</pre>
In computer chess or games, where the objective function reflects the [[Playing Strength|playing strength]] to maximize, SPSA can be used for [[Automated Tuning|automated tuning]] of [[Evaluation|evaluation]] parameters as well as [[Search|search]] parameters. A prominent SPSA instance is [[Stockfish's tuning method]] as introduced by [[Joona Kiiski]] in 2011 <ref>[http://www.talkchess.com/forum/viewtopic.php?start=0&t=40662 Stockfish's tuning method] by [[Joona Kiiski]], [[CCC]], October 07, 2011</ref>, where the objective function is measured once per iteration by playing a pair of games with Θ+ versus Θ-, the function "match" returning the score difference in the range ±2, see the pseudo code above. The selection of the coefficients A, a, c, α and γ, which determine the initial values and time decay of the gain sequences ak and ck, is critical to the performance of SPSA. Spall recommends α = 0.602 and γ = 0.101, the lowest possible values that theoretically guarantee convergence; further see the practical suggestions in Spall's 1998 SPSA implementation paper <ref>[[James C. Spall]] ('''1998'''). ''Implementation of the Simultaneous Perturbation Algorithm for Stochastic Optimization''. [[IEEE#TOCAES|IEEE Transactions on Aerospace and Electronic Systems]], Vol. 34, No. 3, [http://www.jhuapl.edu/spsa/PDF-SPSA/Spall_Implementation_of_the_Simultaneous.PDF pdf]</ref>.
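The following is a minimal, self-contained C sketch of such a tuning loop. The match() stub, the parameter count P, the iteration count N, and the coefficients a, c, A are hypothetical placeholders, not values from any actual tuner; in practice match() would launch a pair of games between engine instances configured with the two perturbed parameter vectors.
<pre>
#include <math.h>
#include <stdlib.h>

#define P 8           /* number of parameters to tune (placeholder) */
#define N 10000       /* number of iterations (placeholder) */

/* Hypothetical stub: play a pair of games Θ+ vs. Θ- and return the
   score difference in [-2, +2] (+2 means Θ+ won both games). */
double match(const double plus[P], const double minus[P]);

void spsa(double theta[P], double a, double c, double A) {
   const double alpha = 0.602, gamma = 0.101;  /* Spall's recommended exponents */
   double delta[P], plus[P], minus[P];
   for (int k = 0; k < N; k++) {
      double ak = a / pow(k + 1 + A, alpha);   /* step size gain */
      double ck = c / pow(k + 1, gamma);       /* perturbation size gain */
      for (int p = 0; p < P; p++) {
         /* Rademacher distribution: -1 or +1, each with probability 1/2 */
         delta[p] = (rand() & 1) ? 1.0 : -1.0;
         plus[p]  = theta[p] + ck * delta[p];
         minus[p] = theta[p] - ck * delta[p];
      }
      double result = match(plus, minus);      /* one pair of games per iteration */
      for (int p = 0; p < P; p++)              /* component-wise update; Δp = ±1 */
         theta[p] += ak * result / (ck * delta[p]);
   }
}
</pre>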
<span id="RSPSA"></span>
=RSPSA=
Already at the [[Advances in Computer Games 11]] conference at [https://en.wikipedia.org/wiki/Academia_Sinica Academia Sinica], [https://en.wikipedia.org/wiki/Taipei Taipei], [https://en.wikipedia.org/wiki/Taiwan Taiwan] in 2005, [[Levente Kocsis]], [[Csaba Szepesvári]], and [[Mark Winands]] introduced SPSA to the game programming community and discussed several methods that can be used to enhance its performance, including the use of [https://en.wikipedia.org/wiki/Variance_reduction common random numbers] and [https://en.wikipedia.org/wiki/Antithetic_variates antithetic variates], a combination of SPSA with [https://en.wikipedia.org/wiki/Rprop RPROP] (resilient backpropagation), and the reuse of samples of previous performance evaluations. RPROP, though originally developed for the training of [[Neural Networks|neural networks]], is applicable to any optimization task where the gradient can be computed or approximated <ref>[[Martin Riedmiller]], [[Heinrich Braun]] ('''1993'''). ''A direct adaptive method for faster backpropagation learning: The RPROP algorithm''. [http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=1059 IEEE International Conference On Neural Networks], [http://paginas.fe.up.pt/~ee02162/dissertacao/RPROP%20paper.pdf pdf]</ref>. The described '''RSPSA''' (Resilient SPSA) was successfully applied in parameter tuning in the domains of [[Poker]] and [[Lines of Action]] <ref>[[Levente Kocsis]], [[Csaba Szepesvári]], [[Mark Winands]] ('''2005'''). ''[http://link.springer.com/chapter/10.1007/11922155_4 RSPSA: Enhanced Parameter Optimization in Games]''. [[Advances in Computer Games 11]], [http://www.sztaki.hu/~szcsaba/papers/rspsa_acg.pdf pdf], [https://dke.maastrichtuniversity.nl/m.winands/documents/RSPSA.pdf pdf]</ref>.
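To give the flavor of the RPROP ingredient, here is a simplified C sketch of the sign-based, per-parameter step-size rule in the spirit of Riedmiller and Braun, with the sign of the SPSA gradient estimate standing in for the true gradient sign. The η factors and step bounds are common RPROP defaults, not values prescribed by the RSPSA paper, and the variant shown skips the step-reversal bookkeeping of the original algorithm.
<pre>
#include <math.h>

/* Simplified RPROP-style update: each parameter keeps its own step size,
   grown while the gradient-estimate sign repeats, shrunk when it flips. */
void rprop_step(double theta[], const double g[], double step[],
                double prevSign[], int n) {
   const double etaPlus = 1.2, etaMinus = 0.5;  /* common RPROP factors */
   const double stepMin = 1e-6, stepMax = 1.0;  /* clamp per-parameter steps */
   for (int p = 0; p < n; p++) {
      double s = (g[p] > 0) - (g[p] < 0);       /* sign of SPSA gradient estimate */
      if (s * prevSign[p] > 0)
         step[p] = fmin(step[p] * etaPlus, stepMax);   /* same sign: accelerate */
      else if (s * prevSign[p] < 0)
         step[p] = fmax(step[p] * etaMinus, stepMin);  /* sign flip: back off */
      theta[p] += s * step[p];                  /* ascend the estimated gradient */
      prevSign[p] = s;
   }
}
</pre>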
=See also=
* [[Automated Tuning]]
* [[CLOP]]
* [[Genetic Programming#GeneticAlgorithm|Genetic Algorithms]]
* [[Genetic Programming#PBIL|PBIL]]
* [[Simulated Annealing]]
* [[Stockfish's tuning method]]
=Selected Publications=
==1987 ...==
* [[James C. Spall]] ('''1987'''). ''A Stochastic Approximation Technique for Generating Maximum Likelihood Parameter Estimates''. [http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=4789283 Proceedings of the American Control Conference], [https://en.wikipedia.org/wiki/Minneapolis Minneapolis, MN], [http://www.jhuapl.edu/spsa/PDF-SPSA/Spall_A_Stochastic_Approximation.PDF pdf reprint]
==1990 ...==
* [[James C. Spall]] ('''1992'''). ''Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation''. [[IEEE#TAC|IEEE Transactions on Automatic Control]], Vol. 37, No. 3, [http://www.jhuapl.edu/spsa/PDF-SPSA/Spall_TAC92.pdf pdf]
* [[James C. Spall]] ('''1997'''). ''A one-measurement form of SPSA''. [http://www.journals.elsevier.com/automatica Automatica], Vol. 33, No. 1, [http://www.jhuapl.edu/spsa/PDF-SPSA/automatica97_one_measSPSA.pdf pdf]
* [https://www.linkedin.com/in/payman-sadegh-493b372 Payman Sadegh] ('''1997'''). ''Constrained Optimization via Stochastic Approximation with a Simultaneous Perturbation Gradient Approximation''. [http://www.journals.elsevier.com/automatica Automatica], Vol. 33, No. 5, [http://www.jhuapl.edu/spsa/PDF-SPSA/Sadegh_Constrained_Optimization.PDF pdf]
* [[James C. Spall]] ('''1998'''). ''An Overview of the Simultaneous Perturbation Method for Efficient Optimization''. [http://www.jhuapl.edu/techdigest/ Johns Hopkins APL Technical Digest], Vol. 19, No. 4, [http://www.jhuapl.edu/spsa/PDF-SPSA/Spall_An_Overview.PDF pdf]
* [[James C. Spall]] ('''1998'''). ''Implementation of the Simultaneous Perturbation Algorithm for Stochastic Optimization''. [[IEEE#TOCAES|IEEE Transactions on Aerospace and Electronic Systems]], Vol. 34, No. 3, [http://www.jhuapl.edu/spsa/PDF-SPSA/Spall_Implementation_of_the_Simultaneous.PDF pdf]
* [[James C. Spall]] ('''1999'''). ''Stochastic Optimization: Stochastic Approximation and Simulated Annealing''. in [https://en.wikipedia.org/wiki/John_G._Webster John G. Webster] (ed.) ('''1999'''). [http://onlinelibrary.wiley.com/book/10.1002/047134608X Encyclopedia of Electrical and Electronics Engineering], Vol. 20, [https://en.wikipedia.org/wiki/John_Wiley_%26_Sons John Wiley & Sons], [http://www.jhuapl.edu/spsa/PDF-SPSA/Spall_Stochastic_Optimization.PDF pdf]
==2000 ...==
* [[James C. Spall]] ('''2000'''). ''Adaptive Stochastic Approximation by the Simultaneous Perturbation Method''. [[IEEE#TAC|IEEE Transactions on Automatic Control]], Vol. 45, No. 10, [http://www.jhuapl.edu/spsa/PDF-SPSA/Spall_TAC00.pdf pdf]
* [https://www.genealogy.math.ndsu.nodak.edu/id.php?id=197469 László Gerencsér], [http://www.math.buffalo.edu/mad/PEEPS/hill_stacy.html Stacy D. Hill], [https://itk.ppke.hu/karunkrol/oktatoink-sajat-weboldalai/gerencserne-dr-vago-zsuzsanna-bfd51 Zsuzsanna Vágó], Zoltán Vincze ('''2004'''). ''Discrete optimization, SPSA and Markov Chain Monte Carlo methods''. Proceedings of the 2004 American Control Conference, [http://www.jhuapl.edu/spsa/PDF-SPSA/Gerencser_ACC04.pdf pdf]
* [http://www.math.buffalo.edu/mad/PEEPS/hill_stacy.html Stacy D. Hill] ('''2005'''). ''Discrete Stochastic Approximation with Application to Resource Allocation''. [http://www.jhuapl.edu/techdigest/ Johns Hopkins APL Technical Digest], Vol. 26, No. 1, [http://www.jhuapl.edu/spsa/PDF-SPSA/Hill_TechDig05.pdf pdf]
* [[Levente Kocsis]], [[Csaba Szepesvári]], [[Mark Winands]] ('''2005'''). ''[http://link.springer.com/chapter/10.1007/11922155_4 RSPSA: Enhanced Parameter Optimization in Games]''. [[Advances in Computer Games 11]], [http://www.sztaki.hu/~szcsaba/papers/rspsa_acg.pdf pdf], [https://dke.maastrichtuniversity.nl/m.winands/documents/RSPSA.pdf pdf]
* [[Levente Kocsis]], [[Csaba Szepesvári]] ('''2006'''). ''[http://link.springer.com/article/10.1007/s10994-006-6888-8 Universal Parameter Optimisation in Games Based on SPSA]''. [https://en.wikipedia.org/wiki/Machine_Learning_%28journal%29 Machine Learning], Special Issue on Machine Learning and Games, Vol. 63, No. 3, [http://www.sztaki.hu/~szcsaba/papers/kocsis_acg05.pdf pdf]
* [http://dblp.uni-trier.de/pers/hd/m/Maryak:John_L= John L. Maryak], [http://dblp.uni-trier.de/pers/hd/c/Chin:Daniel_C= Daniel C. Chin] ('''2008'''). ''Global Random Optimization by Simultaneous Perturbation Stochastic Approximation''. [[IEEE#TAC|IEEE Transactions on Automatic Control]], Vol. 53, No. 3, [http://www.jhuapl.edu/spsa/PDF-SPSA/Maryak_Chin_IEEETAC08.pdf pdf]
* [http://dblp.uni-trier.de/pers/hd/s/Song_0001:Qing Qing Song], [[James C. Spall]], [http://dblp.uni-trier.de/pers/hd/s/Soh:Yeng_Chai Yeng Chai Soh], [http://dblp.uni-trier.de/pers/hd/n/Ni:Jie Jie Ni] ('''2008'''). ''Robust Neural Network Tracking Controller Using Simultaneous Perturbation Stochastic Approximation''. [[IEEE#NN|IEEE Transactions on Neural Networks]], Vol. 19, No. 5, [https://pdfs.semanticscholar.org/3f2a/4d69ca8adbbc072d82af58b3c750621d36ab.pdf 2003 pdf] » [[Neural Networks]]
==2010 ...==
* [[Shalabh Bhatnagar]], [[H. L. Prasad]], [[L.A. Prashanth]] ('''2013'''). ''[http://stochastic.csa.iisc.ernet.in/~shalabh/book.html Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods]''. [http://www.springer.com/series/642 Lecture Notes in Control and Information Sciences], Vol. 434, [https://en.wikipedia.org/wiki/Springer_Science%2BBusiness_Media Springer]
* [https://scholar.google.com/citations?user=nqDASHMAAAAJ Pushpendre Rastogi], [http://dblp.uni-trier.de/pers/hd/z/Zhu:Jingyi Jingyi Zhu], [[James C. Spall]] ('''2016'''). ''Efficient implementation of Enhanced Adaptive Simultaneous Perturbation Algorithms''. [http://dblp.uni-trier.de/db/conf/ciss/ciss2016.html#RastogiZS16 CISS 2016], [http://www.cs.jhu.edu/~prastog3/res/rastogi2016efficient.pdf pdf]
=Forum Posts=
==2010 ...==
* [http://www.talkchess.com/forum/viewtopic.php?start=0&t=40662 Stockfish's tuning method] by [[Joona Kiiski]], [[CCC]], October 07, 2011 » [[Stockfish's tuning method]]
: [http://www.talkchess.com/forum/viewtopic.php?start=0&t=40662&start=6 Re: Stockfish's tuning method] by [[Rémi Coulom]], [[CCC]], October 07, 2011
* [http://www.talkchess.com/forum/viewtopic.php?t=40964 Tuning again] by [[Ed Schroder]], [[CCC]], November 01, 2011
* [https://groups.google.com/d/msg/fishcooking/WNrxeXAJ6VI/ZkCnRv4I_qEJ Goodbye CLOP, hello SPSA] by [[Gary Linscott]], [[Computer Chess Forums|FishCooking]], May 17, 2014 » [[CLOP]]
* [http://www.talkchess.com/forum/viewtopic.php?t=54545&start=2 Re: Eval tuning - any open source engines with GA or PBIL?] by [[Jon Dart]], [[CCC]], December 06, 2014 » [[Genetic Programming#PBIL|PBIL]] <ref>[https://www.gerad.ca/nomad/Project/Home.html NOMAD - A blackbox optimization software]</ref>
==2015 ...==
* [http://www.talkchess.com/forum/viewtopic.php?t=55925&start=10 Re: A plea to someone] by [[Lyudmil Antonov]], [[CCC]], April 07, 2015
: [http://www.talkchess.com/forum/viewtopic.php?t=55925&start=21 Re: A plea to someone] by [[Jon Dart]], [[CCC]], April 08, 2015
* [https://groups.google.com/d/msg/fishcooking/1FSprH10MpM/bU1bMkqKtaUJ Automatic Criterion for stopping SPSA?] by tsa..., [[Computer Chess Forums|FishCooking]], May 29, 2015
* [https://groups.google.com/d/msg/fishcooking/mbnctfDdcqM/S_rUOa7177AJ Re: SPSA Tuner] by [[Lyudmil Antonov]], [[Computer Chess Forums|FishCooking]], July 20, 2015
* [https://groups.google.com/d/msg/fishcooking/Np13OpEO1_w/xhYulilABgAJ Too small number of games in SPSA] by [[Lyudmil Antonov]], [[Computer Chess Forums|FishCooking]], April 22, 2016
* [http://www.talkchess.com/forum/viewtopic.php?t=63632 SPSA problems] by Ralf Müller, [[CCC]], April 02, 2017
=External Links=
* [http://www.jhuapl.edu/spsa/ SPSA Algorithm] by [[James C. Spall]]
* [https://en.wikipedia.org/wiki/Simultaneous_perturbation_stochastic_approximation Simultaneous perturbation stochastic approximation - Wikipedia]
* [https://github.com/zamar/spsa SPSA Tuner for Stockfish Chess Engine] by [[Joona Kiiski]] » [[Stockfish]]
* [https://github.com/lantonov/SPSA GitHub - lantonov/spsa: Modifications of SPSA] by [[Lyudmil Antonov]] » [[Stockfish]]
* [https://github.com/jgomezdans/spsa GitHub - jgomezdans/spsa: Simultaneous perturbation stochastic approximation Python code]
* [https://gist.github.com/yanatan16/5420795 Simultaneous Perturbation Stochastic Approximation code in python · GitHub]
* [https://youtu.be/kVJQkmYU2VA NIPS 2012 Tutorial - Stochastic Search and Optimization], [[James C. Spall#Video|Video Lecture]] by [[James C. Spall]]
=References=
<references />
'''[[Algorithms|Up one Level]]'''