This book concisely presents optimization and optimal control, with examples and simulations to support self-study and deepen comprehension. It begins with function optimization and the inclusion of constraints, and then extends to functional optimization using the calculus of variations. The development of optimal control for continuous-time, linear, open-loop systems is presented using Lagrangian and Pontryagin-Hamiltonian methods, showing how to impose the end-point conditions on time and state. Closed-loop optimal control for linear systems with a quadratic cost function, well known as the linear quadratic regulator (LQR), is developed for both finite-horizon and infinite-horizon cases. Some control systems must maximize performance alongside minimizing cost; Pontryagin's maximum principle is presented in this regard, with clear examples that show its practical implementation. Examples also demonstrate how the maximum principle leads to control switching and bang-bang control in certain classes of systems. The application of optimal control to discrete-time open-loop systems with a quadratic cost is presented and then extended to closed-loop control, which leads to model predictive control (MPC).
Throughout the book, examples and MATLAB simulation codes are provided so the learner can practice the content of each section. The carefully sequenced topics help the learner build knowledge and skills in optimal control step by step, yet quickly.
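As an illustration of the kind of MATLAB exercise the book pairs with each section (this sketch is not taken from the book itself), the following minimal script computes an infinite-horizon LQR gain for a hypothetical double-integrator plant using the Control System Toolbox lqr function and simulates the closed-loop response:

    % Minimal infinite-horizon LQR sketch for a double-integrator plant.
    % Illustrative assumption only; the plant and weights are not from the book.
    A = [0 1; 0 0];               % state matrix: position and velocity
    B = [0; 1];                   % input matrix: acceleration input
    Q = diag([10 1]);             % state weights in the quadratic cost
    R = 1;                        % control weight in the quadratic cost

    K = lqr(A, B, Q, R);          % optimal state-feedback gain, u = -K*x

    % Simulate the closed-loop system from a nonzero initial state.
    sys_cl = ss(A - B*K, zeros(2,1), eye(2), 0);
    t  = 0:0.01:10;
    x0 = [1; 0];
    [~, ~, x] = lsim(sys_cl, zeros(numel(t), 1), t, x0);

    plot(t, x); grid on
    xlabel('time (s)'); ylabel('states');
    legend('position', 'velocity'); title('LQR closed-loop response');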