Model-Predictive Control (MPC)
Introduction
Model-Predictive Control refers to the process of using a prediction model to simulate the future response of the controlled system, and an optimizer to find the future control-signal trajectory that minimizes a cost function over the simulation, subject to constraints. When an optimal trajectory is found, the first control input from this trajectory is applied to the controlled system, a new measurement is obtained, the estimate of the current state is updated, and the process is repeated. An MPC controller is thus a model-based feedback controller capable of incorporating constraints, making it a generally applicable advanced control method.
The prediction model used by the optimizer may be either linear or nonlinear, and the same applies to the state observer. The following documentation demonstrates how to set up, tune, and simulate MPC controllers.
The workflow for designing an MPC controller is roughly split up into
- Specifying the dynamics. This can be done in several ways:
  - Using `LinearMPCModel` for standard linear MPC.
  - Using `RobustMPCModel` for linear MPC with robust loop shaping, including integral action etc.
  - Using a `FunctionSystem` that accepts a nonlinear discrete-time dynamics function with signature `(x,u,p,t) -> x⁺`, or a continuous-time function with signature `(x,u,p,t) -> ẋ`. Learn more under Discretization.
- Defining a state observer.
- Specifying an MPC solver.
- Defining an MPC problem containing things like
  - Prediction horizon
  - State and control constraints
  - The dynamics and the observer
The currently defined problem and solver types are
- `LQMPCProblem`: Linear-quadratic MPC problem. The cost is of the form $z^T Q_1 z + u^T Q_2 u$, where $z = C_z x$ are the controlled outputs.
- `QMPCProblem`: Quadratic nonlinear MPC problem. The cost is of the form $x^T Q_1 x + u^T Q_2 u$.
- `GenericMPCProblem`: An MPC problem that allows the user to specify arbitrary costs and constraints, sometimes referred to as economic MPC.
- `OSQPSolver`: A QP or SQP (sequential quadratic programming) solver using OSQP.jl. This solver is used for `LQMPCProblem` and `QMPCProblem`.
- `IpoptSolver`: This solver is used for `GenericMPCProblem`.

This documentation uses the terms linear-quadratic, nonlinear-quadratic, and generic MPC to refer to the different variations of the MPC problem.
See docstrings and examples for further information. The following sections contain details about each step mentioned above. See the Quick links section below to quickly find information about certain keywords.
Quick links
- State observers/ state estimation: see Observers.
- Integral action: see Disturbance modeling and rejection with MPC controllers, Integral action and robust tuning and Mixed-sensitivity $\mathcal{H}_2$ design for MPC controllers
- Discretization: to convert continuous-time dynamics to discrete time, see Discretization
- Constraints: see Constraints and MPC with model estimated from data
- Reference handling and reference preview: see Example: Linear system
- Disturbance modeling: see Disturbance modeling and rejection with MPC controllers, Mixed-sensitivity $\mathcal{H}_2$ design for MPC controllers and model augmentation in RobustAndOptimalControl.jl
- Tuning an MPC controller: see Integral action and robust tuning and Mixed-sensitivity $\mathcal{H}_2$ design for MPC controllers
MPC with quadratic cost functions
This section documents the quadratic-cost MPC formulations implemented by `LQMPCProblem` and `QMPCProblem`. These problems are solved using quadratic programming in the linear case and sequential quadratic programming (SQP) in the nonlinear case. They support linear/nonlinear dynamics, quadratic cost functions, and bound constraints on control and state variables. For more advanced MPC formulations, see the section on MPC with generic cost and constraints.
Example: Nonlinear system
We begin the exposition of the MPC functionality with some examples, and provide details in the sections below.
In this example, we will design an MPC controller for a pendulum on a cart. We start by defining the dynamics (in this case hand-written, see `build_controlled_dynamics` to generate a suitable function from a ModelingToolkit model). The states are
\[(p, \theta, v, \omega)\]
corresponding to position, angle, velocity and angular velocity. We also define some problem parameters.
using JuliaSimControl
using JuliaSimControl.MPC, StaticArrays
using SparseArrays # for spdiagm
# Dynamics function in continuous time (x,u,p,t) = (states, control inputs, parameters, time)
function cartpole(x, u, p, _)
mc, mp, l, g = 1.0, 0.2, 0.5, 9.81
q = x[SA[1, 2]]
qd = x[SA[3, 4]]
s = sin(q[2])
c = cos(q[2])
H = @SMatrix [mc+mp mp*l*c; mp*l*c mp*l^2]
C = @SMatrix [0 -mp*qd[2]*l*s; 0 0]
G = @SVector [0, mp * g * l * s]
B = @SVector [1, 0]
qdd = -H \ (C * qd + G - B * u[1])
return [qd; qdd]
end
nu = 1 # number of control inputs
nx = 4 # number of states
ny = nx # number of outputs (here we assume that all states are measurable)
Ts = 0.02 # sample time
N = 15 # MPC optimization horizon (control horizon is also equal to N)
x0 = zeros(nx) # Initial state
x0[1] = 3 # cart pos
x0[2] = pi*0.5 # pendulum angle
xr = zeros(nx) # reference state
Next, we discretize the dynamics and specify the cost-function matrices
discrete_dynamics0 = rk4(cartpole, Ts) # discretize using Runge-Kutta 4
measurement = (x,u,p,t)->x # Define the measurement function, in this case we measure the full state
state_names = [:p, :θ, :v, :ω]
discrete_dynamics = FunctionSystem(discrete_dynamics0, measurement, Ts, x=state_names, u=:u, y=state_names) # Wrap the dynamics and measurement in a FunctionSystem with named signals
Q1 = spdiagm(ones(nx)) # state cost matrix
Q2 = Ts * spdiagm(ones(nu)) # control cost matrix (penalty on control derivative is also supported)
# Control limits
umin = -5 * ones(nu)
umax = 5 * ones(nu)
# State limits (state constraints are soft by default if enabled)
xmin = nothing # no constraints on states
xmax = nothing
constraints = NonlinearMPCConstraints(; umin, umax, xmin, xmax)
## Define problem structure
observer = MPC.OpenLoopObserver(discrete_dynamics, x0, nu, ny)
solver = OSQPSolver(
eps_rel = 1e-3,
max_iter = 500, # in the QP solver
check_termination = 5, # how often the QP solver checks termination criteria
sqp_iters = 2, # How many SQP iterations are taken per time step
)
This example will use the `QMPCProblem` structure since our dynamics are nonlinear but our cost is quadratic.
prob = QMPCProblem(
discrete_dynamics;
observer,
Q1,
Q2,
constraints,
N,
xr,
solver,
)
With the problem defined, we could create a `DiscreteSystem` representing the MPC controller. This is currently work in progress and is waiting on these issues to be solved. Until then, we may run the MPC controller manually using
history = MPC.solve(prob; x0, T = 500, verbose = false) # solve for T=500 time steps
using Plots
plot(history)
Example: Linear system
Instead of passing a (generally) nonlinear function `dynamics` to the constructor `QMPCProblem`, you may pass a linear system in statespace form to the constructor `LQMPCProblem`. The LQMPC formulation acting on a linear system is equivalent to an LQG controller when no constraints are active, but allows for more principled handling of constraints than standard linear methods do. The MPC controller can also incorporate reference preview, something standard controllers can achieve only through acausal pre-filtering of references.
This example illustrates the MPC controller controlling a simple discrete-time double integrator with sample time 1. We also illustrate the use of tight state constraints: here, the state reference goes from 0 to 1 and, at the same time, we constrain the system to have state components no greater than 1 to prevent overshoot. Being able to operate close to constraints is one of many appealing properties of an MPC controller. (In practice, noise makes having the state reference exactly equal to the constraint a somewhat questionable practice.)
using JuliaSimControl
using JuliaSimControl.MPC
using LinearAlgebra # For Diagonal
using SparseArrays  # For spdiagm
Ts = 1
sys = ss([1 Ts; 0 1], [0; 1;;], [1 0], 0, Ts) # Create a statespace object
(; nx,ny,nu) = sys
N = 20
x0 = [1.0, 0]
xr = zeros(nx, N+1) # reference trajectory (should have at least length N+1)
xr[1, 20:end] .= 1 # Change reference after 20 time steps
ur = zeros(nu, N) # Control signal at operating point (should have at least length N)
yr = sys.C*xr
op = OperatingPoint() # Empty operating point implies x = u = y = 0
# Control limits
umin = -0.07 * ones(nu)
umax = 0.07 * ones(nu)
# State limits (state constraints are soft by default)
xmin = -0.2 * ones(nx)
xmax = 1 * ones(nx)
constraints = MPCConstraints(; umin, umax, xmin, xmax)
solver = OSQPSolver(
verbose = false,
eps_rel = 1e-6,
max_iter = 1500,
check_termination = 5,
polish = true,
)
Q2 = spdiagm(ones(nu)) # control cost matrix
Q1 = Diagonal([10, 1]) # state cost matrix
T = 40 # Simulation length (time steps)
The next step is to define a state observer and a prediction model, in this example, we will use a Kalman filter:
R1 = I(nx)
R2 = I(ny)
kf = KalmanFilter(ssdata(sys)..., R1, R2)
named_sys = named_ss(sys, x=[:pos, :vel], u=:force, y=:pos) # give names to signals for nicer plot labels
predmodel = LinearMPCModel(named_sys, kf; constraints, op, x0)
prob = LQMPCProblem(predmodel; Q1, Q2, N, solver, r=xr)
hist = MPC.solve(prob; x0, T, verbose = false)
plot(hist); hline!(xmin[2:2], lab="Constraint", l=(:black, :dash), sp=1)
Notice how the initial step response never goes below -0.2, and the second step response has very little overshoot due to the state constraint. If you run the code, try increasing the state constraint and see how the step response changes.
We also notice that before the change in reference at $T \approx 20$, the response appears to make a slight move in the wrong direction. This behavior is actually expected, and arises due to the penalty and constraint on the control action. The system we are controlling is a simple double integrator, $f = ma$, and the control signal thus corresponds to a force which is proportional to the acceleration. If it's expensive to perform large accelerations, it might be beneficial to accept a small state error (small errors are penalized very little under a quadratic cost) before the step in order to build momentum for the large step in reference and avoid larger errors after the step has occurred.
Output references and constraints
To use references for controlled outputs rather than for states, make use of the keyword arguments `z` and `v` to `LinearMPCModel` to specify the controlled and constrained outputs.
See MPC with model estimated from data for an example where this functionality is used.
Constraints
For linear MPC formulations, we provide the simple structure `MPCConstraints` that allows you to specify bounds on state variables and control inputs. We also provide the more general `LinearMPCConstraints` that allows you to define bounds on any linear combination of states and control inputs; such a linear combination is referred to as a constrained output $v$: $v = C_v x + D_v u$. When the simple constructor is used, `xmin, xmax, umin, umax` are internally converted to `vmin, vmax`.
For nonlinear MPC problems, we have `NonlinearMPCConstraints`, which accepts a function $v = f(x,u,p,t)$ that computes the constrained outputs. This structure also requires you to manually specify which components of $v$ are soft constraints.
Constraints are by default
- Hard for control inputs.
- Soft for states.
Soft state constraints should be favored in practical applications and in simulations with disturbances and noise. They are, however, significantly more computationally expensive than hard constraints, and in some simulation scenarios it may be favorable to select hard state constraints for faster simulation. The parameters `qs` and `qs2` determine the weights on the slack variables for the soft constraints: `qs` weighs the absolute value of the slack and `qs2` weighs the square of the slack. Hard constraints are used if `qs = qs2 = 0`. In a practical application, using hard constraints is not recommended: a disturbance acting on the system might bring the state outside the feasible region, from which there is no feasible trajectory back into the feasible region, leading to a failure of the optimizer.
By using only `qs2`, i.e., a quadratic penalty on the slack variables, a small violation is penalized lightly, and the problem remains easy to solve. Using only `qs` (with a sufficiently large value), constraint violation is kept at zero if possible, violating constraints only if it is impossible to satisfy them due to, e.g., hard constraints on control inputs. A combination of `qs` and `qs2` can be used to tweak performance.
If `xmin, xmax` are not provided, state constraints are disabled.
MPC with generic cost and constraints
The MPC problem type `GenericMPCProblem` allows much more general cost functions and constraints than the quadratic-programming-based MPC problem types, and instances of `GenericMPCProblem` are solved using Optimization.jl. The examples in this section will all use the IPOPT solver, a good general-purpose solver that supports large, sparse problems with nonlinear constraints.
Specifying cost and constraints
The `GenericMPCProblem` requires the specification of an `Objective`, which internally contains one or several cost functions, such as `StageCost`, `DifferenceCost`, `TerminalCost`, and `TrajectoryCost`.
Each cost object takes a function as its first argument that computes the cost based on the relevant optimization variables. We illustrate with an example where we create a stage cost that computes $x(t)^T Q_1 x(t) + u(t)^T Q_2 u(t)$:
running_cost = StageCost() do si, p, t
x = si.x
u = si.u
dot(x, Q1, x) + dot(u, Q2, u)
end
This uses the Julia `do`-syntax to create an anonymous function that takes the tuple `(si, p, t)`. `si` is of type `MPC.StageInput`, a structure containing vectors `x, u, r`, all at the stage time `t`. A stage refers to a single instant in time in the optimization horizon. While most cost and constraint types pass an `MPC.StageInput` as the first argument to the cost/constraint function, the `TrajectoryCost` and `TrajectoryConstraint` are passed an `MPC.ObjectiveInput`, which contains the entire trajectories $x \in \mathbb{R}^{n_x \times (N+1)}$ and $u \in \mathbb{R}^{n_u \times N}$.
One or several cost functions are finally packaged into an `Objective`, as illustrated in the comprehensive example below.
Constraints are similarly defined to take a function that computes the constrained output as first argument. We illustrate with an example that creates control and state constraints corresponding to
\[\begin{aligned} -3 &\leq u_1(t) \leq 3 \\ -4 &\leq x_2(t) + x_4(t) \leq 4 \end{aligned}\]
using StaticArrays
control_and_state_constraint = StageConstraint([-3, -4], [3, 4]) do si, p, t
u = si.u[1]
x2 = si.x[2]
x4 = si.x[4]
SA[
u
x2 + x4
]
end
The full signature of `StageConstraint` is

StageConstraint(fun, lb, ub)

where `fun` is a function from `(stage_input, parameters, time)` to the constrained output, and `lb, ub` are the lower and upper bounds of the constrained output. In this case, our constrained output is a static array (for high performance) containing `[u[1], x[2]+x[4]]`, which are the expressions we wanted to constrain, i.e., the constrained output.
The available constraint types are `StageConstraint`, `TerminalStateConstraint`, `BoundsConstraint`, and `TrajectoryConstraint`.
Specifying the discretization
The `GenericMPCProblem` allows the user to select the discretization method used in the transcription from continuous to discrete time. The available choices are
- `MultipleShooting` (the default if none is chosen)
- `CollocationFinE`
- `Trapezoidal`

More details on these choices are available under Discretization.
Example: Using the Generic MPC problem
Below, we design an MPC controller for the nonlinear pendulum-on-a-cart system using the generic interface. Additional examples using this problem type are available in the tutorials
- MPC control of a Continuously Stirred Tank Reactor (CSTR)
- Solving optimal-control problems
- Solving optimal-control problems with MTK models
- Model-Predictive Control for the Research Civil Aircraft system
using JuliaSimControl, Plots
using JuliaSimControl.MPC
using JuliaSimControl.Symbolics
using StaticArrays
using LinearAlgebra
function cartpole(x, u, p, _=0)
T = promote_type(eltype(x), eltype(u))
mc, mp, l, g = 1.0, 0.2, 0.5, 9.81
q = x[SA[1, 2]]
qd = x[SA[3, 4]]
s = sin(q[2])
c = cos(q[2])
H = @SMatrix [mc+mp mp*l*c; mp*l*c mp*l^2]
C = @SMatrix [0 -mp*qd[2]*l*s; 0 0]
G = @SVector [0, mp * g * l * s]
B = @SVector [1, 0]
if T <: Symbolics.Num
qdd = Matrix(-H) \ Vector(C * qd + G - B * u[1])
return [qd; qdd]
else
qdd = -H \ (C * qd + G - B * u[1])
return [qd; qdd]::SVector{4, T}
end
end
nu = 1 # number of controls
nx = 4 # number of states
Ts = 0.1 # sample time
N = 10 # MPC optimization horizon
x0 = ones(nx) # Initial state
r = zeros(nx)
discrete_dynamics = MPC.rk4(cartpole, Ts) # Discretize the dynamics
measurement = (x,u,p,t) -> x # The entire state is available for measurement
dynamics = FunctionSystem(discrete_dynamics, measurement, Ts; x=:x^nx, u=:u^nu, y=:y^nx)
# Create objective
Q1 = Diagonal(@SVector ones(nx)) # state cost matrix
Q2 = 0.1Diagonal(@SVector ones(nu)) # control cost matrix
Q3 = Q2
QN, _ = MPC.calc_QN_AB(Q1, Q2, Q3, dynamics, r) # Compute terminal cost
QN = Matrix(QN)
p = (; Q1, Q2, Q3, QN) # Parameter vector
running_cost = StageCost() do si, p, t
Q1, Q2 = p.Q1, p.Q2
e = (si.x)
u = (si.u)
dot(e, Q1, e) + dot(u, Q2, u)
end
difference_cost = DifferenceCost((si,p,t)->SVector(si.u[])) do e, p, t
dot(e, p.Q3, e)
end
terminal_cost = TerminalCost() do ti, p, t
e = ti.x
dot(e, p.QN, e)
end
objective = Objective(running_cost, terminal_cost, difference_cost)
# Create objective input
x = zeros(nx, N+1)
u = zeros(nu, N)
x, u = MPC.rollout(dynamics, x0, u, p, 0)
oi = ObjectiveInput(x, u, r)
# Create constraints
control_and_state_constraint = StageConstraint([-3, -4], [3, 4]) do si, p, t
u = (si.u)[]
x4 = (si.x)[4]
SA[
u
x4
]
end
# Create observer solver and problem
observer = OpenLoopObserver(dynamics, x0, nu, dynamics.ny)
solver = MPC.IpoptSolver(;
verbose = false,
tol = 1e-4,
acceptable_tol = 1e-1,
max_iter = 100,
max_cpu_time = 10.0,
max_wall_time = 10.0,
constr_viol_tol = 1e-4,
acceptable_constr_viol_tol = 1e-1,
acceptable_iter = 2,
)
prob = GenericMPCProblem(
dynamics;
N,
observer,
objective,
constraints = [control_and_state_constraint],
p,
objective_input = oi,
solver,
xr = r,
presolve = true,
);
# Run MPC controller
history = MPC.solve(prob; x0, T = 100, verbose = false)
# Extract matrices
X,E,R,U,Y = reduce(hcat, history)
plot(history)
Observers
This package defines the type
OpenLoopObserver
This observer does not incorporate measurement feedback. It can be used if you assume availability of full state information.
In addition to `OpenLoopObserver`, you may use any observer defined in LowLevelParticleFilters, such as
- `LowLevelParticleFilters.ParticleFilter`: This filter is simple to use and assumes that both dynamics noise and measurement noise are additive.
- `LowLevelParticleFilters.AuxiliaryParticleFilter`: This filter is identical to `ParticleFilter`, but uses a slightly different proposal mechanism for new particles.
- `LowLevelParticleFilters.AdvancedParticleFilter`: This filter gives you more flexibility, at the expense of having to define a few more functions.
- `LowLevelParticleFilters.KalmanFilter`: Is what you would expect. Has the same features as the particle filters, but is restricted to linear dynamics and Gaussian noise.
- `LowLevelParticleFilters.UnscentedKalmanFilter`: Is also what you would expect. Has almost the same features as the Kalman filter, but handles nonlinear dynamics and measurement models; it still requires an additive Gaussian noise model.
- `LowLevelParticleFilters.ExtendedKalmanFilter`: Runs a regular Kalman filter on linearized dynamics. Uses ForwardDiff.jl for linearization.
How these observers are set up and used is shown in the examples above, as well as in the examples section of the documentation.
Discretization
When the dynamics are specified in continuous time, a discretization scheme must be employed in order for the optimizer to obtain a finite-dimensional problem. While the quadratic MPC problem types always make use of multiple shooting, the `GenericMPCProblem` supports several different discretization methods, detailed in this section.
On a high level, the MPC library supports three general approaches for transcribing infinite-dimensional optimal-control problems into finite-dimensional optimization problems:
- Multiple shooting
- Direct collocation on finite elements
- Trapezoidal integration
The implementation of multiple shooting supports dynamics consisting of ODEs only, i.e., algebraic equations (DAEs) are not supported, while the collocation and trapezoidal methods support DAEs and stiff dynamics. Generally, we have the following properties of the different transcription methods:
- Multiple shooting introduces optimization variables for the state at each sample instant, $n_x \times N$ in total.
- Direct collocation introduces optimization variables for the state at each collocation point, $n_x \times n_c \times N$ in total, where $n_c$ is the number of collocation points (selected upon creating the `CollocationFinE` structure).
- Trapezoidal integration is an implicit method that introduces optimization variables for the state at each sample instant, similar to multiple shooting, $n_x \times N$ in total.
The multiple-shooting transcription will thus introduce fewer variables than collocation, but is only applicable to non-stiff systems of ODEs. Direct collocation (and the simpler trapezoidal integration scheme) is an implicit method that handles stiff dynamics and algebraic equations.
Multiple shooting
Systems of ordinary differential equations (ODEs) can be discretized using an explicit method, such as Runge-Kutta 4. For non-stiff systems, the fastest option in this case is to make use of the special-purpose function `MPC.rk4`. To discretize a continuous-time dynamics function on the form `(x,u,p,t) -> ẋ` using the function `MPC.rk4`, we simply wrap the dynamics function by calling `rk4` like so:

discrete_dynamics = MPC.rk4(continuous_dynamics, sampletime; supersample=1)

where the integer `supersample` determines the number of RK4 steps that are taken internally for each change of the control signal (1 is often sufficient and is the default). The returned function `discrete_dynamics` is on the form `(x,u,p,t) -> x⁺`. The discretized dynamics can further be wrapped in a `FunctionSystem` in order to add a measurement function and names of states, inputs, and outputs.
Dynamics that are difficult to integrate due to stiffness etc. may make use of `MPCIntegrator`. This type can use any method from the DifferentialEquations.jl ecosystem to perform the integration. This comes at a slight cost in performance: `MPCIntegrator` with an internal `RK4` integrator is about 2-3x slower than the `MPC.rk4` function. The main performance difference between the integrator choices arises during the linearization step when SQP iterations are used.
Direct collocation on finite elements
As an alternative to `MPC.rk4`, collocation of the system dynamics on finite elements provides a method that combines the rapid convergence of the orthogonal collocation method with the convenience, associated with finite-difference methods, of locating grid points or elements where the solution is important or has large gradients. Instead of integrating the continuous-time dynamics, collocation on finite elements uses Lagrange polynomials to approximate the solution of the system dynamics over a finite element of time. These elements are collected over the time horizon of the MPC formulation to yield an optimal solution. The degree `deg` of the collocating Legendre polynomial determines the accuracy of the obtained state solution and is related to the number of collocation points as `deg = n_colloc - 1`, where `n_colloc` is a user choice. The number of collocation points is thus a tradeoff between increased computational cost and higher-order convergence. The truncation error depends on the choice of collocation points `roots_c`: for Gauss-Legendre collocation roots, the truncation error is of order $\mathcal{O}(h^{2k})$, where $k$ is the degree of the polynomial, while for Gauss-Radau collocation it is of order $\mathcal{O}(h^{2k-1})$. Collocation on finite elements can also be used to solve continuous-time DAE problems. The discretization structure for collocation on finite elements can be constructed as
disc = CollocationFinE(dynamics, false; n_colloc = 5, roots_c = "Legendre")
where, among the arguments to `CollocationFinE`, `false` disables threaded evaluation of the dynamics and `n_colloc` is the number of collocation points for each finite element. The `roots_c` option selects Gauss-Legendre collocation by default; this can be specified explicitly by setting `roots_c = "Legendre"`, and Radau collocation points are selected with `roots_c = "Radau"`. This discretization structure can be passed to `GenericMPCProblem` via the keyword argument `disc`.
Accuracy of integration vs. performance
When solving MPC problems, it is sometimes beneficial to favor a faster sample rate and a longer prediction horizon over highly accurate integration. The motivations for this are several
- The dynamics model is often inaccurate, and solving an inaccurate model to high accuracy can be a waste of effort.
- The performance is often dictated by the disturbances acting on the system, and having a higher sample rate may allow the controller to detect and reject disturbances faster.
- Feedback from measurements will over time correct for slight errors due to integration.
- Increasing the sample rate leads to each subsequent optimization problem being more similar to the previous one, making warm-starting more efficient and a good solution being found in fewer iterations.
Solving optimal-control problems
At the heart of the MPC controller is a numerical optimal-control problem that is solved repeatedly at each sample instant. For `LQMPCProblem` and `QMPCProblem`, a single instance of this problem can be solved by calling
x, u = MPC.optimize!(prob, x0, p, t)
where `x0` is the initial state and `t` is the time at which the problem starts. For linear problems, the returned value `u` may need adjustment for offsets; the call

MPC.get_u!(prob, x, u)

transforms the result of `optimize!` to the appropriate space.
For `GenericMPCProblem`, the interface to `MPC.optimize!` is

controlleroutput = MPC.optimize!(prob, controllerinput, p, t)

where `controllerinput` and `controlleroutput` are of types `ControllerInput` and `ControllerOutput`. The constructor of `GenericMPCProblem` also has an option `presolve` that solves the optimal-control problem directly in the constructor, after which the state and control trajectories are available as
x, u = get_xu(prob)
For an example of solving optimal-control problems, see the Optimal-control example.
Stepping the MPC controller
A single step of the MPC controller can be taken by calling `MPC.step!`:
uopt, x, u0 = MPC.step!(prob, u, y, r, p, t)
where `u` is a matrix of size $n_u \times n_T$ whose first column corresponds to the control signal that was last applied. The rest of `u` is used as an initial guess for the optimizer. `y` is the latest measurement and is used to update the observer in `prob`. Internally, `step!` performs the following actions:
- Measurement update of the observer, forms $\hat x_{k | k}$.
- Solve the optimization problem with the state of the observer as the initial condition.
- Advance the state of the observer using its prediction model, forms $\hat x_{k+1 | k}$.
- Advance the problem caches, including the reference trajectory if `xr` is a full trajectory.
The return values of `step!` are
- `uopt`: The optimal control trajectory (usually, only the first value is used in an MPC setting). This value is given in the correct space for interfacing with the true plant.
- `x`: The optimal state trajectory as seen by the optimizer; note that this trajectory will only correspond to the actual state trajectory for linear problems around the origin.
- `u0`: The control signal used to update the observer in the prediction step. Similar to `x`, this value may contain offsets and is usually of less external use than `uopt`, which is transformed to the correct units of the actual plant input.
Interface to ModelingToolkit
Simulating MPC controllers with ModelingToolkit models is an upcoming feature. To use an MTK model as the prediction model in an MPC problem or to solve optimal-control problems for MTK models, see the tutorial Solving optimal-control problems with MTK models.
Index
JuliaSimControl.MPC.BoundsConstraint — Type

BoundsConstraint(xmin, xmax, umin, umax, xNmin, xNmax)
BoundsConstraint(; xmin, xmax, umin, umax, xNmin=xmin, xNmax=xmax)

Upper and lower bounds for states and control inputs. This constraint is typically more efficient than `StageConstraint` for simple bounds constraints.
Separate bounds may be provided for the terminal state `xN`; if none are given, the terminal state is assumed to have the same bounds as the rest of the state trajectory.
JuliaSimControl.MPC.CollocationFinE — Type

CollocationFinE(dyn, threads=false; n_colloc = 5, roots_c = "Legendre", scale_x=ones(dyn.nx))

JuliaSimControl.MPC.CollocationFinE — Type

CollocationFinE{is_dae, F <: FunctionSystem}

Orthogonal collocation on finite elements dynamics constraint.
Fields:
- `dyn::F`: The continuous-time dynamics
- `Ts`: Sample time
- `threads::Bool = false`: Use threaded evaluation of the dynamics. For small dynamics, the overhead of threads is too large to be worthwhile.
JuliaSimControl.MPC.DifferenceCost — Type

DifferenceCost

A cost-function object that represents a running cost of differences in a `GenericMPCProblem`.
Example:
The example below illustrates how to penalize $\Delta u = u(k) - u(k-1)$ for a single-input system
p = (; Q1, Q2, Q3, QN)
getter = (si,p,t)->SVector(si.u[]) # Extract the signal to penalize differences for, in this case, the penalized signal `z = u`
difference_cost = DifferenceCost(getter) do e, p, t
dot(e, p.Q3, e) # Compute the penalty given a difference `e = Δz`
end
JuliaSimControl.MPC.GenericMPCProblem — Method

GenericMPCProblem(dynamics; N, observer, objective, constraints::Union{AbstractVector{<:AbstractGenericMPCConstraint}, CompositeMPCConstraint}, solver = IpoptSolver(), p = DiffEqBase.NullParameters(), objective_input, xr, presolve = true, threads = false)

Defines a generic, nonlinear MPC problem.
Arguments:
- `dynamics`: An instance of `FunctionSystem` representing the system dynamics (see the examples above).
- `N`: Prediction horizon in the MPC problem (number of samples, equal to the control horizon)
- `Ts`: Sample time for the system; the control signal is allowed to change every `Ts` seconds.
- `observer`: An instance of `AbstractObserver`, defaults to `OpenLoopObserver(dynamics, zeros(nx))`.
- `objective`: An instance of `Objective`.
- `constraints`: A vector of `AbstractGenericMPCConstraint`.
- `solver`: An instance of Ipopt.Optimizer, can be created using `IpoptSolver`.
- `p`: Parameters that will be passed to all constraint and cost functions.
- `objective_input`: An instance of `ObjectiveInput` that contains initial guesses of state and control trajectories.
- `xr`: Reference trajectory
- `presolve`: Solve the initial optimal-control problem already in the constructor.
- `threads`: Use threads to evaluate the dynamics constraints. This is only beneficial for large systems.
- `scale_x = ones(dynamics.nx)`: Scaling factors for the state variables. These can be set to the "typical magnitude" of each state to improve numerical performance in the solver.
- `scale_u = ones(dynamics.nu)`: Scaling factors for the input variables.
JuliaSimControl.MPC.LQMPCProblem — Method

LQMPCProblem(dynamics;
    N::Int,
    Q1::AbstractMatrix,
    Q2::AbstractMatrix,
    Q3::Union{AbstractMatrix,Nothing} = nothing,
    qs2 = 1000*maximum(Q1),
    qs = sqrt(maximum(Q1)),
    constraints,
    r,
    solver::AbstractMPCSolver = OSQPSolver(),
)

Defines a linear-quadratic MPC problem. The cost is of the form (z - zᵣ)'Q1*(z - zᵣ) + (u - uᵣ)'Q2*(u - uᵣ) + Δu'Q3*Δu.
Arguments:
- `dynamics`: An instance of `LinearMPCModel` or `RobustMPCModel`
- `N`: Prediction horizon in the MPC problem (number of samples, equal to the control horizon)
- `Q1`: State penalty matrix
- `Q2`: Control penalty matrix
- `Q3`: Control derivative penalty matrix
- `qs`: Soft state constraint linear penalty (scalar). Set to zero for hard constraints (hard constraints may render the problem infeasible).
- `qs2`: Soft state constraint quadratic penalty (scalar). Set to zero for hard constraints (hard constraints may render the problem infeasible).
- `constraints`: An instance of `MPCConstraints`
- `r`: References. If `dynamics` contains an operating point, `dynamics.op.x` will be the default `r` if none is provided.
- `ur`: Reference control. If `dynamics` contains an operating point, `dynamics.op.u` will be the default `ur` if none is provided.
- `solver`: The solver to use. See `OSQPSolver`.
JuliaSimControl.MPC.LinearMPCConstraints — Type

LinearMPCConstraints(vmin, vmax, Cv, Dv, soft_indices)
LinearMPCConstraints(; vmin, vmax, Cv, Dv, soft_indices)

Applicable to `LinearMPCModel`, this structure allows you to constrain any linear combination of the states and inputs. The constrained output $v_{min} ≤ v ≤ v_{max}$ is defined as
\[v = C_v x + D_v u\]
- `soft_indices`: A vector of integer indices indicating which components of `v` are soft constraints. It's recommended to make the components of $v$ that depend on the states soft constraints.
JuliaSimControl.MPC.LinearMPCModel — Method

LinearMPCModel(G, observer; constraints::AbstractLinearMPCConstraints, op::OperatingPoint = OperatingPoint(), strictly_proper = false, z = I(G.nx), x0)

A model structure for use with linear MPC controllers. This structure serves as both a prediction model and an observer.
Arguments:
- `G`: A linear system created using, e.g., the function `ss`.
- `observer`: Any supported observer object, such as a `KalmanFilter`.
- `constraints`: An instance of `MPCConstraints` or `LinearMPCConstraints`
- `op`: An instance of `OperatingPoint`
- `strictly_proper`: Indicate whether or not the MPC controller is to be considered a strictly proper system, i.e., if there is a one-sample delay before a measurement has an effect on the control signal. This is typically required if the computational time of the MPC controller is close to the sample time of the system.
- `z`: Either a vector of state indices indicating controlled variables, or a matrix `nz × nx` that multiplies the state vector to yield the controlled variables.
- `v`: Either a vector of state indices indicating constrained outputs, or a matrix `nv × nx` that multiplies the state vector to yield the constrained outputs. This option has no effect if `LinearMPCConstraints` are used.
- `x0`: The initial state of the internal observer.
JuliaSimControl.MPC.MPCConstraints — Method

MPCConstraints(; umin, umax, xmin = nothing, xmax = nothing, soft = true)

A structure representing the constraints of an MPC problem. See also `LinearMPCConstraints` for a more advanced interface to linear constraints.
Arguments:
- `umin`: Lower bound for control signals.
- `umax`: Upper bound for control signals.
- `xmin`: Lower bound for constrained output signals.
- `xmax`: Upper bound for constrained output signals.
- `soft`: Indicate whether constrained outputs use soft constraints (recommended).
JuliaSimControl.MPC.MPCIntegrator — Method

int = MPCIntegrator(dynamics, problem_constructor, alg::SciML.DEAlgorithm; Ts::Real, nx, nu, kwargs...)

Discretize a dynamics function on the form `(x,u,p,t)->ẋ` using the DE-integrator `alg`. The resulting object `int` behaves like a discrete-time dynamics function `(x,u,p,t)->x⁺`.
Arguments:
- `dynamics`: The continuous-time dynamics.
- `problem_constructor`: One of `ODEProblem, DAEProblem` etc.
- `alg`: Any `DEAlgorithm`, e.g., `Tsit5()`.
- `Ts`: The fixed sample time between control updates. The algorithm may take smaller steps internally.
- `nx`: The state (`x`) dimension.
- `nu`: The input (`u`) dimension.
- `kwargs`: Are sent to the integrator initialization function `init`.

Example:
This example creates two identical versions of a discretized dynamics, one using the `rk4` function and one using `MPCIntegrator`. For the `MPCIntegrator`, we set `dt` and `adaptive=false` in order to get equivalent results.
using JuliaSimControl.MPC
using OrdinaryDiffEq
using StaticArrays, LinearAlgebra # for SA/SVector and norm
"Continuous-time dynamics of a quadruple tank system."
function quadtank(h,u,p=nothing,t=nothing)
kc = 0.5
k1, k2, g = 1.6, 1.6, 9.81
A1 = A3 = A2 = A4 = 4.9
a1, a3, a2, a4 = 0.03, 0.03, 0.03, 0.03
γ1, γ2 = 0.3, 0.3
ssqrt(x) = √(max(x, zero(x)) + 1e-3)
# ssqrt(x) = sqrt(x)
xd = @inbounds SA[
-a1/A1 * ssqrt(2g*h[1]) + a3/A1*ssqrt(2g*h[3]) + γ1*k1/A1 * u[1]
-a2/A2 * ssqrt(2g*h[2]) + a4/A2*ssqrt(2g*h[4]) + γ2*k2/A2 * u[2]
-a3/A3*ssqrt(2g*h[3]) + (1-γ2)*k2/A3 * u[2]
-a4/A4*ssqrt(2g*h[4]) + (1-γ1)*k1/A4 * u[1]
]
end
nx = 4 # Number of states
nu = 2 # Number of inputs
Ts = 2 # Sample time
x0 = SA[1.0,1,1,1]
u0 = SVector(zeros(nu)...)
discrete_dynamics_rk = MPC.rk4(quadtank, Ts)
discrete_dynamics = MPC.MPCIntegrator(quadtank, ODEProblem, RK4(); Ts, nx, nu, dt=Ts, adaptive=false, saveat_end=false)
x1_rk = discrete_dynamics_rk(x0, u0, 0, 0)
x1 = discrete_dynamics(x0, u0, 0, 0)
@assert norm(x1 - x1_rk) < 1e-12
JuliaSimControl.MPC.MultipleShooting — Type

MultipleShooting{F <: FunctionSystem}

Multiple-shooting dynamics constraint.
Fields:
- `dyn::F`: The discrete-time dynamics
- `scale_x`: Numerical scaling of state variables
- `threads::Bool = false`: Use threaded evaluation of the dynamics. For small dynamics, the overhead of threads is too large to be worthwhile.
JuliaSimControl.MPC.NonlinearMPCConstraints — Type

NonlinearMPCConstraints(fun, min, max, soft_indices)
NonlinearMPCConstraints(; umin, umax, xmin=nothing, xmax=nothing)

A struct holding constraint information for nonlinear MPC.
If the signature `NonlinearMPCConstraints(; umin, umax, xmin=nothing, xmax=nothing)` is used, `fun` and `soft_indices` are created automatically.
Arguments:
- `fun`: A function `(x,u,p,t) -> constrained_outputs`
- `min`: The lower bound for `constrained_outputs`
- `max`: The upper bound for `constrained_outputs`
- `soft_indices::Vector{Int}`: A vector of indices indicating which `constrained_outputs` are soft constraints. Slack variables will be added for each soft constraint, which increases the computational complexity. It is recommended to use soft constraints for states and functions of the states, but typically not for inputs.
JuliaSimControl.MPC.OSQPSolver — Type

OSQPSolver <: AbstractMPCSolver

A solver which uses sequential quadratic programming on a system linearized around a trajectory computed using the discretized dynamics.
Arguments:
- `verbose::Bool = false`
- `eps_rel::Float64 = 1e-5`
- `eps_abs::Float64 = 1e-5`
- `max_iter::Int = 5000`
- `check_termination::Int = 15`: The interval at which termination criteria are checked
- `sqp_iters::Int = 2`: Number of sequential QP iterations to run. Often 1-5 is enough, but hard nonlinear problems can be solved more robustly with more iterations. This option has no effect for linear MPC.
- `dynamics_interval::Int = 1`: How often to update the linearization. This is an expensive step, and the (average) solution time can be sped up by not updating every step. This must be set to 1 if `sqp_iters > 1`. This option has no effect for linear MPC.
- `polish::Bool = true`: Try to obtain a high-accuracy solution; increases computational time.
JuliaSimControl.MPC.Objective — Type

Objective(costs...)

An `Objective` holds multiple cost objects.

JuliaSimControl.MPC.Objective — Method

Objective(costs...)

A structure that represents the objective of a `GenericMPCProblem`. The input can be an arbitrary number of cost objects, e.g., `StageCost`, `DifferenceCost`, and `TerminalCost`.
JuliaSimControl.MPC.ObjectiveInput — Method

ObjectiveInput(x, u, r[, x0, u0])

Create an `ObjectiveInput` structure containing trajectories for `x`, `u`, and `r`. The initial state `x0` and initial control `u0` will be taken from `x` and `u` if not provided explicitly.
Arguments:
- `x`: The state trajectory as a matrix of size `(nx, N+1)`
- `u`: The control trajectory as a matrix of size `(nu, N)`
- `r`: A reference vector or matrix of size `(nr, N+1)`; `nr` does not have to equal `nx`.
JuliaSimControl.MPC.QMPCProblem — Method

QMPCProblem(dynamics;
    N::Int,
    Q1::AbstractMatrix,
    Q2::AbstractMatrix,
    Q3::Union{AbstractMatrix,Nothing} = nothing,
    qs2 = 1000*maximum(Q1),
    qs = sqrt(maximum(Q1)),
    constraints::NonlinearMPCConstraints,
    observer = nothing,
    xr,
    ur = zeros(size(Q2,1)),
    solver::AbstractMPCSolver = OSQPSolver(),
    chunkA = ForwardDiff.Chunk{min(8, size(Q1, 1))}(),
    chunkB = ForwardDiff.Chunk{min(8, size(Q2, 1))}(),
)

Defines a nonlinear-quadratic MPC problem. The cost is of the form (z - zᵣ)'Q1*(z - zᵣ) + (u - uᵣ)'Q2*(u - uᵣ) + Δu'Q3*Δu.
Arguments:
- `dynamics`: An instance of `FunctionSystem` representing `x(t+1) = f(x(t), u(t), p, t)`, i.e., already discretized.
- `observer`: An instance of `AbstractObserver`, defaults to `OpenLoopObserver(dynamics, zeros(nx))`.
- `N`: Prediction horizon in the MPC problem (number of samples, equal to the control horizon)
- `Q1`: Controlled-variable penalty matrix
- `Q2`: Control-signal penalty matrix
- `Q3`: Control-derivative penalty matrix
- `qs`: Soft state constraint linear penalty (scalar). Set to zero for hard constraints (hard constraints may render the problem infeasible).
- `qs2`: Soft state constraint quadratic penalty (scalar). Set to zero for hard constraints (hard constraints may render the problem infeasible).
- `constraints`: An instance of `NonlinearMPCConstraints`
- `xr`: Reference state (or reference output if `state_reference=false` for `LinearMPCModel`). If `dynamics` contains an operating point, `dynamics.op.x` will be the default `xr` if none is provided.
- `ur`: Reference control. If `dynamics` contains an operating point, `dynamics.op.u` will be the default `ur` if none is provided.
- `solver`: The solver to use. See `OSQPSolver`.
JuliaSimControl.MPC.RobustMPCModel — Method

RobustMPCModel(G; W1, W2 = I(G.ny), constraints::AbstractLinearMPCConstraints, x0, strictly_proper = true, op::OperatingPoint = OperatingPoint(), K)

A model structure for use with linear MPC controllers. This structure serves as both a prediction model and an observer. Internally, the Glover-McFarlane method is used to find a robustly stabilizing controller for the shaped plant $G_s = W_2 G W_1$; see `glover_mcfarlane` and the examples in the documentation for additional details.
Note, this model has automatically generated penalty matrices `Q1, Q2` built in, and there is thus no need to supply them to the constructor of `LQMPCProblem`.
Arguments:
- `G`: A linear system, created using, e.g., the function `ss`.
- `W1`: A precompensator for loop shaping. Set `W1` to an LTI controller with an integrator to achieve integral action in the MPC controller.
- `W2`: A post-compensator for loop shaping.
- `K`: An observer gain matrix of size `(G.nx, G.ny)`, or an observer object for the plant `G`, e.g., a `KalmanFilter`.
- `constraints`: An instance of `MPCConstraints` or `LinearMPCConstraints`
- `x0`: The initial state.
- `strictly_proper`: Indicate whether or not the MPC controller is to be considered a strictly proper system, i.e., if there is a one-sample delay before a measurement has an effect on the control signal. This is typically required if the computational time of the MPC controller is close to the sample time of the system.
- `op`: An instance of `OperatingPoint`.
- `v`: Either a vector of state indices indicating constrained outputs, or a matrix `nv × nx` that multiplies the state vector to yield the constrained outputs. This option has no effect if `LinearMPCConstraints` are used.
JuliaSimControl.MPC.StageConstraint — Type

StageConstraint(f, lcons, ucons, N)

Create a constraint that holds for each stage of a `GenericMPCProblem`. The constraint may be any nonlinear function of states and inputs. NOTE: to implement simple bounds constraints on the states or control inputs, it is more efficient to use `BoundsConstraint`.
Arguments:
- `f`: A function `(si, p, t) -> v` that computes the constrained output, where `si` is an object of type `StageInput`.
- `lcons`: A vector of lower bounds for `v`.
- `ucons`: A vector of upper bounds for `v`; set equal to `lcons` to indicate equality constraints.
- `N`: The optimization horizon.
Example:
This example creates a constraint that bounds the square of a single input, $1 ≤ u^2 ≤ 3$, and the sum of the state components, $-4 ≤ \sum x_i ≤ 4$. Note that we create `v` as a static array for maximum performance.
control_constraint = StageConstraint([1, -4], [3, 4], N) do si, p, t
SA[
si.u[1]^2
sum(si.x)
]
end
JuliaSimControl.MPC.StageCost — Type

StageCost

A cost-function object that represents the cost at each time step in a `GenericMPCProblem`.
Example:
p = (; Q1, Q2, Q3, QN)
running_cost = StageCost() do si, p, t
Q1, Q2 = p.Q1, p.Q2
e = si.x .- si.r
u = si.u
dot(e, Q1, e) + dot(u, Q2, u)
end
JuliaSimControl.MPC.TerminalCost — Type

TerminalCost{F}

A cost-function object that represents the terminal cost in a `GenericMPCProblem`.
Example:
p = (; Q1, Q2, Q3, QN)
terminal_cost = TerminalCost() do ti, p, t
e = ti.x .- ti.r
dot(e, p.QN, e)
end
JuliaSimControl.MPC.TerminalStateConstraint — Type

TerminalStateConstraint(f, lcons, ucons)

Create a constraint that holds for the terminal set `x(N+1)` in a `GenericMPCProblem`.
Arguments:
- `f`: A function `(ti, p, t) -> v` that computes the constrained output, where `ti` is an object of type `TerminalInput`.
- `lcons`: A vector of lower bounds for `v`.
- `ucons`: A vector of upper bounds for `v`; set equal to `lcons` to indicate equality constraints.
Example
This example shows how to force the terminal state to equal a particular reference point r
terminal_constraint = TerminalStateConstraint(r, r) do ti, p, t
ti.x
end
To make the terminal set a box $l ≤ x ≤ u$, use
terminal_constraint = TerminalStateConstraint(l, u) do ti, p, t
ti.x
end
JuliaSimControl.MPC.Trapezoidal — Type

Trapezoidal{is_dae, F <: FunctionSystem}

Trapezoidal integration dynamics constraint.

JuliaSimControl.MPC.Trapezoidal — Method

Trapezoidal(; dyn, Ts, scale_x = ones(dyn.nx), threads = false)

Trapezoidal integration dynamics constraint.
Arguments:
- `dyn::F`: The continuous-time dynamics
- `scale_x`: Numerical scaling of state variables
- `threads::Bool = false`: Use threaded evaluation of the dynamics. For small dynamics, the overhead of threads is too large to be worthwhile.
CommonSolve.solve — Method

MPC.solve(prob::GenericMPCProblem; x0,
    T,
    verbose = false,
    p = MPC.parameters(prob),
    callback = (actual_x, u, x, X, U) -> nothing,
    sqp_callback,
    noise = 0,
    reset_observer = true,
    dyn_actual = deepcopy(prob.dynamics),
    x0_actual = copy(x0),
    p_actual = p,
    disturbance = (u, t) -> 0,
)

Simulate an MPC controller in a feedback loop with a plant for `T` steps. To step an MPC controller forward one step, see `step!`.
Arguments:
- `x0`: Initial state
- `T`: Number of time steps to perform the simulation.
- `verbose`: Print diagnostic output.
- `p`: Parameters
- `callback`: Called after each iteration
- `noise`: Add measurement noise to the simulation; the noise is Gaussian with σ = noise (can be a vector or a scalar).
- `reset_observer`: Set the initial state of the observer to `x0` before each simulation.
- `dyn_actual`: Actual dynamics. This defaults to `prob.dynamics`, but can be set to any other dynamics in order to simulate model errors etc. If this is set, also set `x0_actual` and `p_actual`.
- `x0_actual`: Initial state for the actual dynamics.
- `p_actual`: Parameters for the actual dynamics.
- `disturbance`: A function `(u_out, t) -> 0` that takes the control input `u` and modifies it in place to compute the disturbance.
CommonSolve.solve — Method

solve(prob::AbstractMPCProblem; x0, T, p, verbose = false)

Solve an MPC problem for `T` time steps starting at initial state `x0`.
Set `verbose = true` to get diagnostic outputs, including tuning tips, in case you experience poor performance.
JuliaSimControl.MPC.IpoptSolver — Method

IpoptSolver(;
    verbose = false,
    tol = 0.0001,
    acceptable_tol = 0.1,
    max_iter = 100,
    max_cpu_time = 30.0,
    max_wall_time = 30.0,
    constr_viol_tol = 0.0001,
    acceptable_constr_viol_tol = 0.1,
    acceptable_iter = 2,
    exact_hessian = true,
    mu_init = 0.1,
    mu_strategy = "monotone", # can also be "adaptive" if the problem has convergence issues
)

A convenience constructor that creates a `solver = Ipopt.Optimizer()` and sets its options. See https://coin-or.github.io/Ipopt/OPTIONS.html for information about each option. The defaults provided here are more relaxed than the Ipopt defaults.
Ipopt will try to meet `tol` and `constr_viol_tol`, but stops early if it has met `acceptable_tol` and `acceptable_constr_viol_tol` for `acceptable_iter` number of iterations.
When solving MPC problems, it is often beneficial to favor a faster sample rate and a longer prediction horizon over accurate integration and optimization. The motivations for this are several
- The dynamics model is often inaccurate, and solving an inaccurate model to high accuracy can be a waste of effort.
- The performance is often dictated by the disturbances acting on the system, and having a higher sample rate may allow the controller to reject disturbances faster.
- Feedback from measurements corrects for slight errors due to integration.
- Increasing the sample rate leads to each subsequent optimization problem being closer to the previous one, making warm-starting more efficient and a good solution being found in fewer iterations.
The `verbose` option can be a Bool or an integer; `true` is interpreted as the default Ipopt verbosity of 5.
JuliaSimControl.MPC.get_xu — Method

x, u = get_xu(vars::Variables)
x, u = get_xu(prob::GenericMPCProblem)

Extract the state and control trajectories `x` and `u` as matrices.
JuliaSimControl.MPC.optimize! — Method

optimize!(prob::QMPCProblem, x0, p, t; verbose = true)

Solve a single instance of the optimal-control problem in the MPC controller.
Arguments:
- `x0`: Initial state
- `t`: Initial time

JuliaSimControl.MPC.optimize! — Method

optimize!(prob::LQMPCProblem, x0, p, t; verbose = true)

Solve a single instance of the optimal-control problem in the MPC controller.
Arguments:
- `x0`: Initial state
- `t`: Initial time
JuliaSimControl.MPC.step! — Method

MPC.step!(prob::GenericMPCProblem, observerinput, p, t; kwargs...)

Arguments:
- `observerinput`: An instance of type `ObserverInput`.
JuliaSimControl.MPC.step! — Method

step!(prob::LQMPCProblem, u, y, r, p, t; kwargs...)

Take a single step using the MPC controller.

uopt, x, u0 = MPC.step!(prob, u, y, r, p, t)

where `u` is a matrix of size $n_u \times n_T$ whose first column corresponds to the control signal that was last applied. The rest of `u` is used as an initial guess for the optimizer. `y` is the latest measurement and is used to update the observer in `prob`. Internally, `step!` performs the following actions:
- Measurement update of the observer, forms $\hat x_{k | k}$.
- Solve the optimization problem with the state of the observer as the initial condition.
- Advance the state of the observer using its prediction model, forms $\hat x_{k+1 | k}$.
- Advance the problem caches, including the reference trajectory if `xr` is a full trajectory.

The return values of `step!` are
- `uopt`: The optimal control trajectory (usually, only the first value is used in an MPC setting). This value is given in the correct space for interfacing with the true plant.
- `x`: The optimal state trajectory as seen by the optimizer; note that this trajectory will only correspond to the actual state trajectory for linear problems around the origin.
- `u0`: The control signal used to update the observer in the prediction step. Similar to `x`, this value may contain offsets and is usually of less external use than `uopt`, which is transformed to the correct units of the actual plant input.
JuliaSimControl.MPC.step! — Method

step!(prob::QMPCProblem, u, y, r, p, t; kwargs...)

Take a single step using the MPC controller.

uopt, x, u0 = MPC.step!(prob, u, y, r, p, t)

where `u` is a matrix of size $n_u \times n_T$ whose first column corresponds to the control signal that was last applied. The rest of `u` is used as an initial guess for the optimizer. `y` is the latest measurement and is used to update the observer in `prob`. Internally, `step!` performs the following actions:
- Measurement update of the observer, forms $\hat x_{k | k}$.
- Solve the optimization problem with the state of the observer as the initial condition.
- Advance the state of the observer using its prediction model, forms $\hat x_{k+1 | k}$.
- Advance the problem caches, including the reference trajectory if `xr` is a full trajectory.

The return values of `step!` are
- `uopt`: The optimal control trajectory (usually, only the first value is used in an MPC setting). This value is given in the correct space for interfacing with the true plant.
- `x`: The optimal state trajectory as seen by the optimizer; note that this trajectory will only correspond to the actual state trajectory for linear problems around the origin.
- `u0`: The control signal used to update the observer in the prediction step. Similar to `x`, this value may contain offsets and is usually of less external use than `uopt`, which is transformed to the correct units of the actual plant input.
JuliaSimControl.rk4 — Method

f_discrete = rk4(f, Ts; supersample = 1)

Discretize `f` using RK4 with sample time `Ts`. See also `MPCIntegrator` for more advanced integration possibilities. More details are available at https://help.juliahub.com/Control/mpc/#Discretization
JuliaSimControl.rk4 — Method

f_discrete, l_discrete = rk4(f, l, Ts)

Discretize the dynamics `f` and loss function `l` using RK4 with sample time `Ts`. The returned function is on the form `(xₖ,uₖ,p,t) -> (xₖ₊₁, loss)`. Both `f` and `l` take the arguments `(x, u, p, t)`.
JuliaSimControl.MPC.rollout — Function

x, u = rollout(f, x0::AbstractVector, u, p, t=1)

Simulate the discrete system `f` from initial condition `x0` and input array `u`.
JuliaSimControl.MPC.rms — Function

rms(x)

Root-mean-square value of `x`. If `x` is a matrix, the rms value will be calculated along rows, i.e.,

sqrt.(mean(abs2, x, dims = 2))
JuliaSimControl.MPC.modelfit — Function

modelfit(y, yh)

Calculate the model-fit percentage (normalized RMS error)
\[100 \left(1 - \dfrac{rms(y - yh)}{rms(y - mean(y))}\right)\]
MPC signals
┌───────────────┬──────────────────────┐
│ │ │
│ ┌─────┐ │ ┌─────┐ │
w─┴───►│ │ └────►│ ├─────►v │
│ │ u │ │ │
r─────►│ MPC ├──┬──────►│ P ├─────►z │
│ │ │ │ │ │
┌────►│ │ │ d────►│ ├──┬──► │
│ └─────┘ │ └─────┘ │y │
│ │ │ │
│ ┌───────┐ │ │ │
│ │ │◄─┘ │ │
│ │ │ │ │
└───┤ OBS │◄──────────────────┘ │
│ │ │
│ │◄──────────────────────────┘
└───────┘
All signals relevant in the design of an MPC controller are specified in the block-diagram above. The user is tasked with designing the MPC controller as well as the observer.
The following signals are shown in the block diagram
- $w$ is a known disturbance, i.e., its value is known to the controller through a measurement or otherwise.
- $r$ is a reference value for the controlled output $z$.
- $\hat x$ is an estimate of the state of the plant $P$.
- $u$ is the control signal.
- $v$ is a set of constrained outputs. This set may include direct feedthrough of inputs from $u$.
- $z$ is a set of controlled outputs, i.e., outputs that will be penalized in the cost function.
- $y$ is the measured output, i.e., outputs that are available for feedback to the observer. $z$ and $y$ may overlap.
- $d$ is an unknown disturbance, i.e., a disturbance of which there is no measurement or knowledge.
The controller assumes that there are references $r$ provided for all controlled outputs $z$. If $z$ is not provided, the controller assumes that all states are to be considered controlled variables and expects $Q_1$ to be a square matrix of size $n_x$, otherwise $Q_1$ is a square matrix of size $n_z$. $z$ may be provided as either a list of indices into the state vector, or as a matrix that multiplies the state vector.
Internals of the quadratic MPC formulations
The internal representation of the MPC problem may be useful in order to interface to general-purpose QP/SQP solvers etc. This section provides useful details regarding the problem layout as well as the exact representation of soft constraints and other low level details.
The MPC problem is formulated as a QP problem with
\[\operatorname{minimize}_{x,u} (z_{N+1} - r)^T Q_N (z_{N+1} - r) + \sum_{n=1}^N (z_n - r)^T Q_1 (z_n - r) + u_n^T Q_2 u_n + \Delta u_n^T Q_3 \Delta u_n\]
where $\Delta u_n = u_n - u_{n-1}$.
\[\operatorname{subject~to}\\ x_{n+1} = Ax_n + Bu_n\\ v_{min} \leq v_n \leq v_{max}\\ v_n = C_v x_n + D_v u_n \\ z_n = C_z x_n\]
and $x_1$ is constrained to be equal to the MPC controller input and $Q_N$ is given by the solution to the discrete-time algebraic Riccati equation.
In the formulation above, $r$ is constant, but $r$ is also allowed to be a full reference trajectory of size $n_z \times (N+1)$.
Hard state constraints are known to be problematic in MPC applications, since disturbances might cause the initial state of the problem to become infeasible. The approach used to solve this problem is to make the state constraints soft by means of slack variables. The soft inequality constraints are formulated as
\[v \leq v_{max} + s_u\\ v \geq v_{min} - s_l\\ s_u \geq 0 \\ s_l \geq 0 \]
where $s_u$ and $s_l$ are the upper and lower slack variables. To ensure that the slack variables remain zero unless necessary, the penalty term $q_s \left(||s_u||_1 + ||s_l||_1\right) + q_{s2} \left(||s_u||_2^2 + ||s_l||_2^2\right)$ is added to the cost function. Since the slack variables are all non-negative, the one-norm part of this term is linear and equivalent to
\[q_s\mathbf{1} ^T s_u + q_s\mathbf{1} ^T s_l\]
State constraints
Constraints on the state components (`xmin, xmax`) are by default "soft", in the sense that violations are penalized rather than enforced. The penalty weights can be controlled by the keyword arguments `qs, qs2` to `LQMPCProblem`, where `qs` penalizes a linear term corresponding to the one-norm of the constraint violation, while `qs2` penalizes a quadratic term. By default, a combination of quadratic and linear penalties is used. Having a large linear penalty promotes tighter constraint enforcement, but makes the optimization problem less smooth and harder to solve.
Output references
By default, references are expected for all state components. However, it is possible to use output references instead, where the outputs can be selected as any linear combination of the states, $z = C_z x$. The matrix $C_z$ can be chosen either by specifying the indices of the controlled variables, `LinearMPCModel(...; z = state_indices)`, or directly, `LinearMPCModel(...; z = Cz)`.
Operating points
For linear MPC controllers, the type `OperatingPoint(x, u, y)` keeps track of the state, input, and output values around which a system is linearized. Nonlinear MPC formulations do not require this, but linear MPC controllers will generally perform better if the linearization point is provided to the constructors of `LinearMPCModel` and `RobustMPCModel`. Please note, if using nonlinear observers together with linear prediction models, an `OperatingPointWrapper` is required to ensure that the nonlinear observer operates in the original coordinates of the nonlinear system.
Given an `op::OperatingPoint`, `JuliaSimControl.linearize(system, op, p, t)` is shorthand for linearizing the system around the operating point.
The `GenericMPCProblem` does not handle operating points, due to the nonlinear applications of this problem type.
Specialization for linear MPC
For linear MPC problems, further control over the constraints is allowed; the constraints are then posed on the constrained output variable $v$, defined as
\[\operatorname{subject~to}\\ v_{min} - S s_l \leq C_v x + D_v u \leq v_{max} + S s_u\\ s_u \geq 0 \\ s_l \geq 0 \]
where $S$ is used to be able to mix hard and soft constraints, i.e., $S$ is a matrix with the same row-size as $v$ and a column-size equal to the number of soft constraints.
In the presence of output matrices $C_v, D_v$, the constraint equations change to
\[C_v x + D_v u \leq v_{max} + S s_u\\ C_v x + D_v u \geq v_{min} - S s_l\\\]
Note, if loop-shaping weights $W_1$ and $W_2$ are in use, the constraint equations have been transformed from what the user originally provided.
Problem layout
The variables are ordered
\[\mathbf{x} = x_1,\; u_1,\; s^u_1,\; s^l_1,\; \ldots,\; x_N,\; u_N,\; s^u_N,\; s^l_N,\; x_{N+1}\]
All constraints are encoded in the form (notation borrowed from the solver documentation)
\[l \leq \mathbf{Ax} \leq u\]
where $\mathbf{x}$ contains all variables and $\mathbf{A}$ is called the constraint matrix. Factorizing $\mathbf{A}$ is an expensive step of the solution to the MPC problem.
The cost function is encoded in the form
\[\dfrac{1}{2} \mathbf{x}^T P \mathbf{x} + q^T \mathbf{x}\]
Constraint updating
Linear MPC
For linear systems, the only constraint that must be updated each iteration is the constraint corresponding to the initial state, and if a $Q_3$ term is used, a similar constraint corresponding to $u_0$ (the previous control action).
Nonlinear MPC (NMPC)
When sequential quadratic programming (SQP) iterations are used, the nonlinear dynamics are linearized around a trajectory each SQP-iteration. This linearization leads to updated $A$ and $B$ matrices in the constraint $x_{n+1} = Ax_n + Bu_n$, and consequently an update and refactorization of the constraint matrix $\mathbf{A}$. This causes NMPC to be significantly more expensive than linear MPC. The number of SQP iterations is a user parameter and, due to the factorization dominating the computational time, often causes a linear increase in the computational time. Typically, a low number of SQP iterations is enough since the solution at the previous time step provides a very good initial guess for the optimization problem.