Dynamic Programming


Dynamic Programming (DP) is a mathematical and algorithmic optimization method that solves larger decision problems by recursively combining solutions of overlapping subproblems with optimal substructure. The term was coined by Richard E. Bellman in the 1950s, where "programming" refers not to producing computer code but to mathematical programming, planning, or optimization, similar to linear programming, devoted to the study of multistage processes. Such processes are composed of sequences of operations in which the outcome of preceding operations may be used to guide the course of future ones.
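The overlapping-subproblem idea can be sketched in a few lines of Python (an illustrative example, not part of the original article): the Fibonacci recursion revisits the same subproblems exponentially often, and memoization reduces it to linear time by storing each result.

```python
from functools import lru_cache

# Naive recursion recomputes the same subproblems exponentially often;
# memoization (the cache) solves each subproblem only once.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, computed with only 51 distinct subproblems
```

The same cache-on-top-of-recursion pattern is the top-down form of dynamic programming; filling a table from `fib(0)` upward would be the equivalent bottom-up form.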

=DP in Computer Chess=
In computer chess, dynamic programming appears as depth-first search with memoization: a transposition table (and possibly other hash tables) stores results of positions while the search traverses a tree of overlapping subproblems, i.e. child positions after one side makes a move, in top-down manner. The search then gains from stored results of sibling subtrees, due to transpositions and other common aspects of positions; this is particularly effective inside an iterative deepening framework. A second dynamic programming approach in computer chess and computer games is retrograde analysis, which solves a problem bottom-up by solving subproblems starting from the terminal nodes.
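The memoized depth-first search described above can be sketched in Python as a toy negamax over Nim rather than chess (an illustrative example; the dictionary `table` stands in for a transposition table, and the sorted-heap key is how this toy game detects transpositions):

```python
def negamax(heaps, table):
    """Return +1 if the side to move wins under optimal play, else -1."""
    key = tuple(sorted(heaps))          # canonical key: heap order is irrelevant,
                                        # so different move orders transpose here
    if key in table:
        return table[key]               # transposition hit: reuse stored result
    if all(h == 0 for h in heaps):
        return -1                       # no move available: side to move loses
    best = -1
    for i, h in enumerate(heaps):       # try every move: take 1..h from heap i
        for take in range(1, h + 1):
            child = list(heaps)
            child[i] -= take
            best = max(best, -negamax(child, table))
            if best == 1:
                break                   # a winning move suffices
        if best == 1:
            break
    table[key] = best                   # memoize before returning
    return best

print(negamax([3, 4, 5], {}))   # 1: the side to move wins
```

Retrograde analysis inverts this order: instead of recursing top-down from the root and caching on the way back, it enumerates the terminal positions first and propagates win/loss values backward through predecessor positions.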

=See also=
 * Markov Models by Michael L. Littman
 * Reinforcement Learning
 * Temporal Difference Learning

=Selected Publications=

1953 ...

 * Richard E. Bellman (1953). An Introduction to the Theory of Dynamic Programming. R-245, RAND Corporation
 * Richard E. Bellman (1954). The Theory of Dynamic Programming. P-550, RAND Corporation, pdf
 * Richard E. Bellman (1954). On a new Iterative Algorithm for Finding the Solutions of Games and Linear Programming Problems. Technical Report P-473, RAND Corporation, U. S. Air Force Project RAND
 * Samuel Karlin (1955). The Structure of Dynamic Programming Models. Naval Research Logistics, Vol. 2
 * Richard E. Bellman (1957). Dynamic Programming. Princeton University Press
 * Richard E. Bellman (1958). Dynamic Programming and Stochastic Control Processes. RAND Corporation, Santa Monica, CA, Information and Control 1

1960 ...

 * Richard E. Bellman (1960). Sequential Machines, Ambiguity, and Dynamic Programming. Journal of the ACM, Vol. 7, No. 1
 * Ronald A. Howard (1960). Dynamic Programming and Markov Processes. MIT Press, amazon
 * Richard E. Bellman, Stuart E. Dreyfus (1962). Applied Dynamic Programming. RAND Corporation, Princeton University Press, pdf
 * Richard E. Bellman (1962). Dynamic Programming Treatment of the Travelling Salesman Problem. Journal of the ACM, Vol. 9, No. 1, pdf preprint (1961)
 * David Blackwell (1962). Discrete Dynamic Programming. Annals of Mathematical Statistics, Vol. 33, No. 2
 * Richard E. Bellman (1965). On the Application of Dynamic Programming to the Determination of Optimal Play in Chess and Checkers. PNAS
 * David Blackwell (1965). Discounted Dynamic Programming. Annals of Mathematical Statistics, Vol. 36, No. 1
 * David Blackwell (1967). Positive Dynamic Programming. Proc. 5th Berkeley Symposium on Mathematical Statistics and Probability

1970 ...

 * David Blackwell (1976). The Stochastic Processes of Borel Gambling and Dynamic Programming. Annals of Statistics, Vol. 4, No. 2
 * Stuart E. Dreyfus, Averill M. Law (1977). Art and Theory of Dynamic Programming. Academic Press
 * Steven E. Shreve, Dimitri Bertsekas (1979). Universally Measurable Policies in Dynamic Programming. Mathematics of Operations Research, Vol. 4, No. 1

1990 ...

 * Dimitri Bertsekas (1996, 2017). Dynamic Programming and Optimal Control. Athena Scientific

2000 ...

 * Dimitri Bertsekas (2001). Neuro-Dynamic Programming: An Overview. Encyclopedia of Optimization, pdf, slides as pdf
 * Stuart E. Dreyfus (2002). Richard Bellman on the Birth of Dynamic Programming. Operations Research, Vol. 50, No. 1, pdf
 * Richard E. Bellman (2003). Dynamic Programming. Dover Publications
 * Sanjoy Dasgupta, Christos H. Papadimitriou, Umesh Vazirani (2006). Algorithms. McGraw-Hill, Chapter 6 Dynamic programming (pdf)
 * Sylvain Gelly, Jérémie Mary, Olivier Teytaud (2006). Learning for stochastic dynamic programming. pdf, pdf
 * Sylvain Gelly, Olivier Teytaud, Jérémie Mary (2007). Active learning in regression, with application to stochastic dynamic programming. ICINCO and CAP, 2007, pdf
 * Warren B. Powell (2007). Approximate Dynamic Programming - Solving the Curses of Dimensionality. Wiley

2010 ...

 * Richard E. Bellman (2010). Dynamic Programming. With a new introduction by Stuart E. Dreyfus, Princeton University Press
 * Rémi Munos (2010). Approximate dynamic programming. In Olivier Sigaud and Olivier Buffet, editors, Markov Decision Processes in Artificial Intelligence, chapter 3, ISTE Ltd and John Wiley & Sons Inc., pdf
 * Warren B. Powell (2011). Approximate Dynamic Programming - Solving the Curses of Dimensionality. 2nd edition, Wiley
 * Dimitri Bertsekas (2013). Abstract Dynamic Programming. Athena Scientific
 * Richard E. Bellman, Stuart E. Dreyfus (2015). Applied Dynamic Programming. Princeton University Press

=External Links=
 * Dynamic programming from Wikipedia
 * Algorithms that use dynamic programming
 * Bellman equation from Wikipedia
 * Algorithms/Dynamic Programming - Wikibooks
 * Stochastic dynamic programming from Wikipedia
 * Dynamic Programming – From Novice to Advanced – topcoder
 * Tutorial for Dynamic Programming | CodeChef
 * Dynamic Programming Archives - GeeksforGeeks
 * Dynamic Programming Practice Problems
