Dynamic Programming and Optimal Control: (in 2 volumes)

Bertsekas, Dimitri P.

Dynamic Programming and Optimal Control: (in 2 volumes) by Dimitri P. Bertsekas. - 4th ed. - Belmont, Mass.: Athena Scientific, 1995. - v.1: xx, 555 p.; v.2: xxi, 695 p.

This book provides a very gentle introduction to the basics of dynamic programming. I have never seen a book in mathematics or engineering that is more reader-friendly in its presentation of theorems and examples. Most proofs cover nearly every case without the notorious "this is left to the reader as an exercise," and every example is accompanied by detailed computational steps. Although the book is targeted at first-year graduate students, undergraduates would not have much difficulty understanding most of the material. I can't say much about the breadth of coverage since I am not an expert on dynamic programming, but the book seems to cover a good range of topics, from basic discrete finite-horizon problems to infinite-horizon problems, continuous-time problems, and approximate control. Unfortunately, the chapter on approximate control, which is the most fashionable topic today and, as a matter of fact, my primary motivation for reading this book, focuses mostly on delivering basic intuition and defers most of the serious discussion to Volume II.

I would have loved this book more if it contained numerical exercises as well. Although computational considerations are discussed from time to time, most of the examples and exercise problems are analytical. This is unfortunate, because implementing an algorithm is often a very good way of understanding it. Since dynamic programming has many fascinating applications, implementing algorithms for such problems and seeing them work would help students gain interest in the topic. Sutton and Barto's reinforcement learning book certainly does a very good job in this respect.
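By way of illustration (and not drawn from the book itself), the following is a minimal Python sketch of the kind of numerical exercise the review has in mind: backward induction for a finite-horizon problem on a toy model whose states, actions, costs, transition probabilities, and horizon are all made-up assumptions.

# A minimal sketch (not from the book) of backward induction for a
# finite-horizon dynamic programming problem. The toy model below --
# states, actions, costs, transitions, and horizon -- is invented
# purely for illustration.
import numpy as np

n_states, n_actions, horizon = 3, 2, 5

# cost[s, a]: stage cost of taking action a in state s (assumed values)
cost = np.array([[1.0, 2.0],
                 [0.5, 1.5],
                 [2.0, 0.5]])

# P[a, s, s']: probability of moving from s to s' under action a (assumed values)
P = np.array([
    [[0.7, 0.2, 0.1],
     [0.1, 0.8, 0.1],
     [0.2, 0.3, 0.5]],
    [[0.4, 0.4, 0.2],
     [0.3, 0.3, 0.4],
     [0.1, 0.1, 0.8]],
])

# Terminal cost at the end of the horizon.
J = np.zeros(n_states)

# Backward induction: at each stage, pick the action minimizing
# immediate cost plus expected cost-to-go.
policy = []
for k in range(horizon - 1, -1, -1):
    Q = cost + np.array([P[a] @ J for a in range(n_actions)]).T  # Q[s, a]
    policy.append(Q.argmin(axis=1))
    J = Q.min(axis=1)

policy.reverse()  # policy[k][s]: optimal action at stage k in state s
print("optimal cost-to-go from each initial state:", J)
print("stage-0 policy:", policy[0])

Running a small script like this, and comparing its output with an analytical solution, is exactly the sort of hands-on exercise the review wishes the book had included.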

v.1: 9781886529434 v.2: 9781886529441


Dynamic programming.
Control theory.

519.703 / B462D