Optimal Control for Continuous-time Nonlinear Systems based on a Linear-like Policy Iteration
We propose a novel strategy to construct optimal controllers for continuous-time nonlinear systems by means of linear-like techniques, provided that the optimal value function is differentiable and quadratic-like. This assumption covers a wide range of practical cases and, in general, holds at least locally. The proposed strategy avoids solving the Hamilton-Jacobi-Bellman (HJB) equation, a nonlinear partial differential equation that is notoriously difficult, and often impossible, to solve analytically. Instead, the HJB equation is replaced with an easily solvable state-dependent Lyapunov matrix equation, without introducing any approximation. We achieve this by exploiting a linear factorization of the underlying nonlinear system together with a policy iteration (PI) algorithm, yielding a linear-like PI for nonlinear systems. The proposed control strategy solves optimal nonlinear control problems in an exact, yet still linear-like, manner. We prove optimality of the resulting solution and illustrate the results via two examples.
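The iteration described above generalizes Kleinman's classical policy iteration for linear-quadratic problems, in which each policy-evaluation step reduces to a Lyapunov matrix equation and each policy-improvement step updates the feedback gain. As a minimal sketch of that linear baseline, the following Python snippet is illustrative only: the double-integrator system, the initial gain, and the Kronecker-product Lyapunov solver are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def solve_lyapunov(Acl, M):
    """Solve Acl^T P + P Acl = -M via Kronecker-product vectorization."""
    n = Acl.shape[0]
    I = np.eye(n)
    # vec(Acl^T P + P Acl) = (I (x) Acl^T + Acl^T (x) I) vec(P)
    L = np.kron(I, Acl.T) + np.kron(Acl.T, I)
    P = np.linalg.solve(L, -M.reshape(-1, order='F')).reshape(n, n, order='F')
    return 0.5 * (P + P.T)  # symmetrize against round-off

def policy_iteration(A, B, Q, R, K0, iters=15):
    """Kleinman's PI: policy evaluation is a Lyapunov solve,
    policy improvement is K = R^{-1} B^T P."""
    K = K0
    for _ in range(iters):
        Acl = A - B @ K                           # closed-loop matrix
        P = solve_lyapunov(Acl, Q + K.T @ R @ K)  # evaluate current policy
        K = np.linalg.solve(R, B.T @ P)           # improve the policy
    return P, K

# Double integrator with Q = I, R = 1; the Riccati solution is known
# in closed form: P = [[sqrt(3), 1], [1, sqrt(3)]], K = [1, sqrt(3)].
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
Q = np.eye(2)
R = np.eye(1)
K0 = np.array([[1., 1.]])  # an initial stabilizing gain (illustrative choice)
P, K = policy_iteration(A, B, Q, R, K0)
```

Starting from any stabilizing gain, the iterates converge to the stabilizing solution of the algebraic Riccati equation; the paper's contribution is to retain this Lyapunov-equation structure for nonlinear systems through a state-dependent linear factorization.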