Linear-Like Policy Iteration Based Optimal Control for Continuous-Time Nonlinear Systems
We propose a novel strategy to construct optimal controllers for continuous-time nonlinear systems by means of linear-like techniques, provided that the optimal value function is differentiable and quadratic-like. This assumption covers a wide range of cases and holds locally around an equilibrium under mild assumptions. The proposed strategy does not require solving the Hamilton–Jacobi–Bellman equation, a nonlinear partial differential equation that is in general difficult or impossible to solve. Instead, the Hamilton–Jacobi–Bellman equation is replaced with an easily solvable state-dependent Lyapunov matrix equation. We exploit a linear-like factorization of the underlying nonlinear system together with a policy-iteration algorithm to obtain a linear-like policy iteration for nonlinear systems. The proposed control strategy solves nonlinear optimal control problems in an asymptotically exact, yet still linear-like, manner. We prove the optimality of the resulting solution and illustrate the results via four examples.
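As a rough illustration of the idea sketched in the abstract, the following is a minimal Kleinman-style policy-iteration sketch applied pointwise to a frozen linear-like factorization dx/dt = A(x)x + B(x)u with quadratic weights Q and R: each policy-evaluation step solves a state-dependent Lyapunov matrix equation rather than the Hamilton–Jacobi–Bellman equation. The factorization, the scalar example, the initial gain, and all function names are illustrative assumptions; the paper's actual algorithm, standing assumptions, and convergence conditions may differ.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def policy_iteration_at_state(A, B, Q, R, K0, tol=1e-9, max_iter=50):
    """Kleinman-style policy iteration for the frozen pair (A(x), B(x))
    of a linear-like factorization dx/dt = A(x)x + B(x)u, evaluated
    pointwise at a single state x. K0 must stabilize A - B @ K0.
    (Hypothetical sketch, not the paper's algorithm verbatim.)"""
    K, P_prev = K0, None
    for _ in range(max_iter):
        A_cl = A - B @ K
        # Policy evaluation: solve the state-dependent Lyapunov equation
        #   A_cl^T P + P A_cl + Q + K^T R K = 0
        # (solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
        # hence the transpose and the sign flip below)
        P = solve_continuous_lyapunov(A_cl.T, -(Q + K.T @ R @ K))
        # Policy improvement: K <- R^{-1} B^T P
        K = np.linalg.solve(R, B.T @ P)
        if P_prev is not None and np.max(np.abs(P - P_prev)) < tol:
            break
        P_prev = P
    return P, K

# Hypothetical scalar example: dx/dt = -x + x^3 + u,
# factored as A(x) = [[-1 + x^2]], B(x) = [[1]].
x = 0.5
A = np.array([[-1.0 + x**2]])
B = np.array([[1.0]])
Q = np.array([[1.0]])
R = np.array([[1.0]])
K0 = np.array([[2.0]])  # stabilizing initial gain at this state
P, K = policy_iteration_at_state(A, B, Q, R, K0)
# For this frozen pair the iteration converges to P = 0.5, K = 0.5,
# the solution of the associated algebraic Riccati equation.
```

Repeating this pointwise computation over the state space yields a state-dependent gain K(x), which conveys how the linear-like machinery sidesteps the nonlinear partial differential equation; the paper's contribution lies in making this construction asymptotically exact and provably optimal.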