Model Predictive Control: from the Basics to Reinforcement Learning
CDC'19 Workshop, December 10, 2019

Organizers

A. Bemporad and M. Zanon (IMT School for Advanced Studies Lucca)

Motivation

Despite its long tradition of success as a powerful and versatile advanced control technique, interest in model predictive control (MPC) from both industry and academia keeps growing strongly, and MPC is spreading to a large variety of application domains. While most attention has so far been focused on computational efficiency and closed-loop performance, as the use of MPC in industrial production increases, the time required to develop an MPC solution has also become of major importance. Development time is mainly spent constructing suitable prediction models and calibrating the resulting controller. Reinforcement learning, and more generally the data-driven synthesis of MPC laws, has recently attracted a lot of attention as a way to reduce this development time. This workshop aims at providing an overview of several techniques for the practical use of MPC, covering linear, hybrid, and nonlinear MPC formulations and various computational methods that can be used to compute the MPC action effectively in real time.

Background

Model Predictive Control (MPC) was originally developed to control multivariable linear systems subject to constraints. The main idea of MPC is to use a mathematical model of the process to predict its future behavior and to minimize a given performance index, possibly subject to constraints capturing actuator limits and other operating restrictions. The advantages of MPC are numerous, as it makes it relatively easy to handle various difficulties in control design, such as dealing with many inputs and outputs, input and output constraints, nonlinear and hybrid dynamics, delays, etc. Uncertainty can also be taken into account explicitly in the MPC formulation, either as unknown-but-bounded or as stochastic disturbances. In the stochastic case, the optimal control problem can be formulated as a stochastic optimization problem by minimizing, e.g., the expected value or the conditional value-at-risk of a given performance index, subject to constraints that must either hold for all possible perturbations (robust formulation) or in probability (chance constraints).
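
As a minimal illustration of the receding-horizon idea, the sketch below formulates one linear MPC step as a small convex program and solves it with the cvxpy modeling package; all model matrices, weights, and bounds are hypothetical placeholders rather than data from any particular application.

    # Minimal linear MPC sketch for a hypothetical double integrator x+ = A x + B u.
    # Predict N steps ahead, minimize a quadratic cost, enforce an input bound,
    # and apply only the first optimal move (receding-horizon principle).
    import numpy as np
    import cvxpy as cp

    A = np.array([[1.0, 1.0], [0.0, 1.0]])  # assumed discrete-time model
    B = np.array([[0.5], [1.0]])
    N = 10                                   # prediction horizon
    x0 = np.array([5.0, 0.0])                # current state (e.g., from an observer)

    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = 0
    constraints = [x[:, 0] == x0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k]) + 0.1 * cp.sum_squares(u[:, k])
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],  # prediction model
                        cp.abs(u[:, k]) <= 1.0]                    # actuator limit
    cp.Problem(cp.Minimize(cost), constraints).solve()
    u_mpc = u[:, 0].value  # only the first move is applied; the problem is re-solved at the next sample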

Since its introduction in the late seventies, MPC has been successfully applied to a very wide range of applications, ranging from traditional process control to more recent applications in the automotive and aerospace industries, as well as in smart energy systems, just to mention a few. MPC is a superb control design methodology for coordinating multiple actuators within their constraints and for blending feedforward and feedback signals, as it handles both in a very natural way. This is in contrast with other approaches, which often require complex lookup tables and ad-hoc solutions for handling constraints as part of the control law, and which scale badly with the number of inputs and outputs. As a result, MPC not only provides very good closed-loop performance, but is also much faster to design, calibrate, and reconfigure.

The main drawback of MPC is the need to solve the optimal control problem online, under real-time constraints. In the case of linear prediction models, linear constraints, and quadratic performance indices, the MPC problem can be translated into a quadratic programming (QP) problem. In the case of hybrid or nonlinear prediction models, the optimization problem becomes a mixed-integer program (MIP) or a (non-convex) nonlinear program (NLP), respectively. A vast literature has developed in the past years on algorithms to solve such optimization problems in an embedded control setting, producing several specialized solvers that have made it possible to apply MPC at unprecedented rates and to problems previously considered prohibitively difficult to solve in real time. When online solvers are used, the MPC law is implicitly defined, as a function of the current state and reference signals, by the solver itself. To avoid the use of online solvers, explicit MPC was introduced to compute the control law explicitly. In the case of linear MPC this results in a continuous piecewise affine mapping that can be precomputed offline and stored, so that the online computation simply amounts to evaluating a lookup table of affine feedback gains. Explicit approaches are usually limited to small problems, as the complexity of the control law typically increases exponentially with the number of constraints included in the MPC problem formulation; therefore, both implicit and explicit solution methods are of great interest in practical applications of MPC.
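
As a sketch of the explicit route, the snippet below evaluates a piecewise affine MPC law online; the polyhedral regions and affine gains are assumed to have been precomputed offline by a multiparametric QP solver and are purely illustrative here.

    # Online evaluation of a hypothetical explicit MPC law.
    # Offline, multiparametric programming partitions the state space into
    # polyhedra {x : H x <= k}, each with an affine gain pair (F, g).
    import numpy as np

    def explicit_mpc(x, regions):
        """regions: list of tuples (H, k, F, g), assumed precomputed offline."""
        for H, k, F, g in regions:
            if np.all(H @ x <= k + 1e-9):  # point location: is x in this polyhedron?
                return F @ x + g           # affine feedback law valid in this region
        raise ValueError("state outside the region where the law was computed")

The online cost then reduces to a region search plus one matrix-vector product, at the price of storing a partition whose size grows quickly with the number of constraints.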

A second possible drawback of MPC, as of all model-based control strategies, is the need for a simplified dynamical model of the process. This requires deriving model equations and identifying their parameters, or using system identification to retrieve simple models from data. Control performance depends heavily on the quality of the model: the better the model, the better the achieved closed-loop performance usually is. On the other hand, an excessively detailed model can lead to an overly complicated optimization problem that is difficult to solve.

Data-driven control approaches mitigate the issue of model construction and tuning in two ways. In the more traditional way, black-box (or gray-box) system identification is used to obtain linear or nonlinear (e.g., neural) prediction models from data (model-based MPC). Alternatively, the control law is learned directly, without first identifying an open-loop prediction model (model-free MPC). In fact, recent progress in learning techniques has made it possible to achieve astonishing results using purely data-driven control design methods. Machines have been able to beat humans in various complex games, such as chess and Go, and in other tasks where the problem is to decide the best action among a finite set, given the current state. When it comes to dynamical systems characterized by continuous input and state domains, additional difficulties arise, and problems such as guaranteeing stability and constraint enforcement raise questions that still remain unanswered. Some approaches for learning the optimal strategy have recently been proposed for continuous systems. The main idea of such approaches is to combine learning algorithms with control techniques like linear quadratic regulation and MPC, so as to synthesize an optimal policy for the real process directly from data, without first going through time-consuming modeling and MPC-tuning efforts, thereby reducing the overall calibration time.
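
As a toy illustration of the model-based, data-driven route (the plant, data, and noise level below are entirely made up), a linear prediction model for MPC can be estimated from input/state data by ordinary least squares:

    # Sketch: estimate x+ = A x + B u from data by least squares (model-based route).
    import numpy as np

    rng = np.random.default_rng(0)
    A_true = np.array([[0.9, 0.2], [0.0, 0.8]])  # unknown "true" plant, used only to simulate data
    B_true = np.array([[0.0], [0.5]])

    T = 200
    X = np.zeros((2, T + 1))
    U = rng.uniform(-1, 1, (1, T))               # exciting input sequence
    for t in range(T):                           # collect an identification experiment
        X[:, t + 1] = A_true @ X[:, t] + B_true @ U[:, t] + 0.01 * rng.standard_normal(2)

    Phi = np.vstack([X[:, :-1], U])              # regressors [x_t; u_t]
    Theta = X[:, 1:] @ np.linalg.pinv(Phi)       # least-squares fit of [A_hat, B_hat]
    A_hat, B_hat = Theta[:, :2], Theta[:, 2:]    # plug into the MPC prediction model

The estimated pair (A_hat, B_hat) then replaces the first-principles model in the MPC formulation; the model-free alternative instead adjusts the control law directly from closed-loop data, e.g., by Q-learning or policy search.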

Objectives

The aim of the workshop is to provide a comprehensive understanding of the various MPC techniques, so as to enable attendees to apply them to real control problems and to gain familiarity with the state of the art.

Dedicated software and ad-hoc problem formulations will be presented during the workshop in order to give the attendee the ability to implement MPC on embedded hardware.

Together with applications and software, references to cutting-edge research in the field will be provided, in order to make attendees aware of the most promising developments. In particular, recent results on data-driven learning applied to MPC will receive special attention.

Target audience

No previous knowledge of MPC is required to attend the workshop. In fact, the workshop is intended for researchers and engineers who want to learn about the theory and practice of model predictive control (MPC), from the basics to current state-of-the-art methods and new trends in learning algorithms for MPC. During the workshop, open research questions and challenges will be highlighted, with the intent of raising awareness of the most promising future developments.

Workshop schedule

08:45-09:00 Welcome and opening remarks.
09:00-10:30 Linear MPC: introduction and algorithms (A. Bemporad).
Introduction to Model Predictive Control (MPC): basic principles, use in industrial practice. MPC based on linear models. Observer design and integral action in MPC. Numerical solvers for quadratic programming problems arising in embedded MPC applications. Explicit MPC.
10:30-10:45 Coffee break
10:45-12:00 Nonlinear and economic MPC (part 1, part 2) (M. Zanon).
Nonlinear model predictive control and real-time numerical algorithms. Economic MPC: theoretical challenges, practical benefits, approximations with stability guarantees.
12:00-13:30 Lunch
13:30-15:00 Hybrid and stochastic MPC (A. Bemporad).
Hybrid model predictive control for dealing with switched dynamics and logic constraints: modeling, MPC setup, computational methods. Stochastic MPC based on scenarios.
15:00-15:15 Coffee break
15:15-17:00 Reinforcement learning and data-driven MPC (part 1, part 2) (A. Bemporad, M. Zanon).
Data-driven and learning methods for MPC: Q-learning, policy search, self-tuning algorithms.
17:00-17:15 Concluding remarks