# Model-Predictive Control (MPC)

## Introduction

The workflow for designing an MPC controller is roughly split up into

- Specifying the dynamics. This can be done in several ways:
    - Using `LinearMPCProblem` for linear MPC.
    - Using `RobustMPCProblem` for linear MPC with robust loop shaping, including integral action etc.
    - Using a linear system from `ControlSystems.jl`. This alternative can be used to combine linear MPC with a nonlinear observer.
    - Using a `FunctionSystem` that accepts a nonlinear discrete-time dynamics function with signature `(x,u,p,t) -> x⁺`. Continuous-time dynamics can be discretized using the methods listed under Discretization.
- Defining a state observer.
- Specifying an MPC solver.
- Defining an MPC problem containing things like
    - Prediction horizon
    - State and control constraints
    - The dynamics and the observer

The currently defined problem and solver types are

- `LQMPCProblem`: Linear Quadratic MPC problem. The cost is on the form `x'Q1*x + u'Q2*u`.
- `OSQPSolver`: A QP or SQP (sequential quadratic programming) solver using OSQP.jl.

See docstrings and examples for further information. The following sections contain details about each step mentioned above. See the Quick links section below to quickly find information about certain keywords.

## Quick links

- **State observers / state estimation**: see Observers.
- **Integral action**: see Disturbance modeling and rejection with MPC controllers, Integral action and robust tuning, and Mixed-sensitivity $\mathcal{H}_2$ design for MPC controllers.
- **Discretization**: see Discretization.
- **Constraints**: see Constraints and Constrained nonlinear MPC for MIMO systems.
- **Reference handling and reference preview**: see Example: Linear system.
- **Disturbance modeling**: see Disturbance modeling and rejection with MPC controllers, Mixed-sensitivity $\mathcal{H}_2$ design for MPC controllers, and model augmentation in RobustAndOptimalControl.jl.
- **Tuning an MPC controller**: see Integral action and robust tuning and Mixed-sensitivity $\mathcal{H}_2$ design for MPC controllers.

## Example: Nonlinear system

We begin the exposition of the MPC functionality with some examples, and provide details in the sections below.

In this example, we will design an MPC controller for a pendulum on a cart. We start by defining the dynamics (in this case hand-written; see `build_controlled_dynamics` to generate a suitable function from a ModelingToolkit model). The states are

\[(p, \theta, v, \omega)\]

corresponding to position, angle, velocity and angular velocity. We also define some problem parameters.

```
using JuliaSimControls
using JuliaSimControls.MPC, StaticArrays
# Dynamics function in continuous time (x,u,p,t) = (states, control inputs, parameters, time)
function cartpole(x, u, p, _)
mc, mp, l, g = 1.0, 0.2, 0.5, 9.81
q = x[SA[1, 2]]
qd = x[SA[3, 4]]
s = sin(q[2])
c = cos(q[2])
H = @SMatrix [mc+mp mp*l*c; mp*l*c mp*l^2]
C = @SMatrix [0 -mp*qd[2]*l*s; 0 0]
G = @SVector [0, mp * g * l * s]
B = @SVector [1, 0]
qdd = -H \ (C * qd + G - B * u[1])
return [qd; qdd]
end
nu = 1 # number of control inputs
nx = 4 # number of states
ny = nx # number of outputs (here we assume that all states are measurable)
Ts = 0.02 # sample time
N = 15 # MPC optimization horizon (control horizon is also equal to N)
x0 = zeros(nx) # Initial state
x0[1] = 3 # cart pos
x0[2] = pi*0.5 # pendulum angle
xr = zeros(nx) # reference state
```

Next, we discretize the dynamics and specify the cost-function matrices

```
discrete_dynamics0 = rk4(cartpole, Ts) # discretize using Runge-Kutta 4
measurement = (x,u,p,t)->x # Define the measurement function, in this case we measure the full state
state_names = [:p, :θ, :v, :ω]
discrete_dynamics = FunctionSystem(discrete_dynamics0, measurement, Ts, x=state_names, u=:u, y=state_names) # Wrap in a FunctionSystem to add a measurement model and signal names
Q1 = spdiagm(ones(nx)) # state cost matrix
Q2 = Ts * spdiagm(ones(nu)) # control cost matrix (penalty on control derivative is also supported)
# Control limits
umin = -5 * ones(nu)
umax = 5 * ones(nu)
# State limits (state constraints are soft by default if enabled)
xmin = nothing # no constraints on states
xmax = nothing
constraints = MPCConstraints(discrete_dynamics; umin, umax, xmin, xmax)
```

```
MPCConstraints{Vector{Float64}, Nothing, SparseArrays.SparseMatrixCSC{Float64, Int64}, SparseArrays.SparseMatrixCSC{Float64, Int64}, UnitRange{Int64}}([-5.0], [5.0], nothing, nothing, sparse(Int64[], Int64[], Float64[], 1, 4), sparse([1], [1], [1.0], 1, 1), 1:0)
```

```
## Define problem structure
observer = MPC.OpenLoopObserver(discrete_dynamics, x0, nu, ny)
solver = OSQPSolver(
eps_rel = 1e-3,
max_iter = 500, # in the QP solver
check_termination = 5, # how often the QP solver checks termination criteria
sqp_iters = 2,
)
```

```
prob = LQMPCProblem(
discrete_dynamics;
observer,
Q1,
Q2,
constraints,
N,
xr,
solver,
)
```

With the problem defined, we could create a `DiscreteSystem` representing the MPC controller. This is currently work in progress and is waiting on these issues to be solved. Until then, we may run the MPC controller manually using

`history = MPC.solve(prob; x0, T = 500, verbose = false) # solve for T = 500 time steps`

```
using Plots
plot(history)
```

## Example: Linear system

Instead of passing a (generally) nonlinear function `dynamics` to the constructor of `LQMPCProblem`, you may also pass a linear system on statespace form. The LQMPC formulation acting on a linear system is equivalent to an LQG controller if no constraints are active, but allows for more principled handling of constraints than standard linear methods do. The MPC controller can also incorporate reference preview, which is achievable for standard controllers only by acausal pre-filtering of references.

This example illustrates the MPC controller controlling a simple discrete-time double integrator with sample time 1. We also illustrate the use of tight state constraints: the state reference goes from 0 to 1 while, at the same time, we constrain the state components to be no greater than 1 to prevent overshoot. Being able to operate close to constraints is one of many appealing properties of an MPC controller. (In practice, noise makes having the state reference exactly equal to the constraint a questionable practice.)

```
using JuliaSimControls
using JuliaSimControls.MPC
using LinearAlgebra # For Diagonal
Ts = 1
sys = ss([1 Ts; 0 1], [0; 1;;], [1 0], 0, Ts) # Create a statespace object
(; nx,ny,nu) = sys
N = 20
x0 = [1.0, 0]
xr = zeros(nx, N+1) # reference trajectory (should have at least length N+1)
xr[1, 20:end] .= 1 # Change reference after 20 time steps
ur = zeros(nu, N) # Control signal at operating point (should have at least length N)
yr = sys.C*xr
op = OperatingPoint(xr, ur, yr)
# Control limits
umin = -0.07 * ones(nu)
umax = 0.07 * ones(nu)
# State limits (state constraints are soft by default)
xmin = -0.2 * ones(nx)
xmax = 1 * ones(nx)
constraints = MPCConstraints(sys; umin, umax, xmin, xmax, op) # Constraints are adjusted by the operating point
solver = OSQPSolver(
verbose = false,
eps_rel = 1e-6,
max_iter = 1500,
check_termination = 5,
polish = true,
)
Q2 = spdiagm(ones(nu)) # control cost matrix
Q1 = Diagonal([10, 1]) # state cost matrix
T = 40 # Simulation length (time steps)
```

The next step is to define a state observer and a prediction model; in this example, we will use a Kalman filter:

```
R1 = I(nx)
R2 = I(ny)
kf = KalmanFilter(ssdata(sys)..., R1, R2)
named_sys = named_ss(sys, x=[:pos, :vel], u=:force, y=:pos) # give names to signals for nicer plot labels
predmodel = LinearMPCModel(named_sys, kf; constraints, op, x0, state_reference=true)
prob = LQMPCProblem(predmodel; Q1, Q2, constraints, N, solver)
hist = MPC.solve(prob; x0, T, verbose = false)
plot(hist); hline!(xmin[2:2], lab="Constraint", l=(:black, :dash), sp=1)
```

Notice how the initial step response never goes below -0.2, and the second step response has very little overshoot due to the state constraint. If you run the code, try increasing the state constraint and see how the step response changes.

We also notice that before the change in reference at $T \approx 20$, the response appears to make a slight move in the wrong direction. This behavior is actually expected, and arises due to the penalty and constraint on the control action. The system we are controlling is a simple double integrator, $f = ma$, and the control signal thus corresponds to a force which is proportional to the acceleration. If it's expensive to perform large accelerations, it might be beneficial to accept a small state error (small errors are penalized very little under a quadratic cost) before the step in order to build momentum for the large step in reference and avoid larger errors after the step has occurred.

## Observers

This package defines the type

- `OpenLoopObserver`: This observer does not incorporate measurement feedback. It can be used if you assume availability of full state information.

In addition to `OpenLoopObserver`, you may use any observer defined in LowLevelParticleFilters, such as

- `LowLevelParticleFilters.ParticleFilter`: This filter is simple to use and assumes that both dynamics noise and measurement noise are additive.
- `LowLevelParticleFilters.AuxiliaryParticleFilter`: This filter is identical to `ParticleFilter`, but uses a slightly different proposal mechanism for new particles.
- `LowLevelParticleFilters.AdvancedParticleFilter`: This filter gives you more flexibility, at the expense of having to define a few more functions.
- `LowLevelParticleFilters.KalmanFilter`: Has the same features as the particle filters, but is restricted to linear dynamics and Gaussian noise.
- `LowLevelParticleFilters.UnscentedKalmanFilter`: Has almost the same features as the Kalman filter, but handles nonlinear dynamics and measurement models; it still requires an additive Gaussian noise model.
- `LowLevelParticleFilters.ExtendedKalmanFilter`: Runs a regular Kalman filter on linearized dynamics, using ForwardDiff.jl for linearization.

How these observers are set up and used is shown in the examples above as well as in the examples section of the documentation.

## Discretization

Continuous-time dynamics functions on the form `(x,u,p,t) -> ẋ` can be discretized (integrated) using the function `MPC.rk4`, e.g.,

`discrete_dynamics = MPC.rk4(continuous_dynamics, sampletime; supersample=1)`

where the integer `supersample` determines the number of RK4 steps taken internally for each change of the control signal (1 is often sufficient and is the default). The returned function `discrete_dynamics` is on the form `(x,u,p,t) -> x⁺`. The discretized dynamics can further be wrapped in a `FunctionSystem` in order to add a measurement function and names of states, inputs and outputs.
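For intuition, the fixed-step RK4 discretization can be sketched in a few lines of plain Julia. This is a hand-rolled illustration of the idea, not the package's `MPC.rk4` implementation, and the `supersample` option is omitted:

```julia
# Hand-rolled sketch of RK4 discretization: turns (x,u,p,t) -> ẋ into (x,u,p,t) -> x⁺.
# Illustrative only; prefer MPC.rk4 from the package in practice.
function rk4_sketch(f, Ts)
    function (x, u, p, t)
        f1 = f(x, u, p, t)
        f2 = f(x .+ (Ts / 2) .* f1, u, p, t + Ts / 2)
        f3 = f(x .+ (Ts / 2) .* f2, u, p, t + Ts / 2)
        f4 = f(x .+ Ts .* f3, u, p, t + Ts)
        x .+ (Ts / 6) .* (f1 .+ 2 .* f2 .+ 2 .* f3 .+ f4)
    end
end

# Sanity check on ẋ = -x, whose exact solution over one step is exp(-Ts)*x
fd = rk4_sketch((x, u, p, t) -> -x, 0.1)
x1 = fd([1.0], [0.0], nothing, 0.0)  # close to exp(-0.1) ≈ 0.9048
```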

Dynamics that are difficult to integrate due to stiffness etc. may make use of `MPCIntegrator`. This type can use any method from the DifferentialEquations.jl ecosystem to perform the integration. This comes at a slight cost in performance: `MPCIntegrator` with an internal `RK4` integrator is about 2-3x slower than the `MPC.rk4` function. The main difference in performance between the integrators arises during the linearization step when SQP iterations are used.

When solving MPC problems, it is often beneficial to favor a faster sample rate and a longer prediction horizon over accurate integration. The motivations for this are several

- The dynamics model is often inaccurate, and solving an inaccurate model to high accuracy can be a waste of effort.
- The performance is often dictated by the disturbances acting on the system, and having a higher sample rate may allow the controller to reject disturbances faster.
- Feedback from measurements corrects for slight errors due to integration.

## Constraints

`MPCConstraints(; xmin, xmax, umin, umax, op)`

Constraints are by default

- Hard for control inputs.
- Soft for states.

Soft state constraints should be favored in practical applications and in simulations with disturbances and noise. They are, however, significantly more computationally expensive than hard constraints, and in some simulation scenarios it may be favorable to select hard state constraints for faster simulation. The parameters `qs` and `qs2` determine the weights on the slack variables for the soft constraints: `qs` weighs the absolute value of the slack and `qs2` weighs the square of the slack. Hard constraints are used if `qs = qs2 = 0`. In a practical application, using hard constraints is not recommended: a disturbance acting on the system might bring the state outside the feasible region, from which there is no feasible trajectory back into the feasible region, leading to a failure of the optimizer.

By using only `qs2`, i.e., a quadratic penalty on the slack variables, a small violation is penalized lightly, and the problem remains easy to solve. Using only `qs` (with a sufficiently large value), constraint violation is kept at zero if possible, violating constraints only if it is impossible to satisfy them due to, e.g., hard constraints on control inputs. A combination of `qs` and `qs2` can be used to tweak performance.
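The effect of the two penalties can be illustrated for a scalar upper bound. Note that `slack_penalty` below is a made-up helper for illustration; the actual slack handling happens inside the QP solver:

```julia
# Illustration of how qs (linear) and qs2 (quadratic) penalize a state-constraint
# violation; slack_penalty is a hypothetical helper, not part of the package.
function slack_penalty(x, xmax, qs, qs2)
    s = max(x - xmax, 0.0)  # slack variable: the amount of constraint violation
    qs * s + qs2 * s^2      # cost contribution added to the MPC objective
end

slack_penalty(0.9,  1.0, 100.0, 100.0)  # no violation: zero cost
slack_penalty(1.05, 1.0, 0.0,   100.0)  # quadratic only: small violations are cheap
slack_penalty(1.05, 1.0, 100.0, 0.0)    # linear only: any violation costs immediately
```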

If an `OperatingPoint` `op` is provided, constraints are offset by the operating point so that they are valid for linear MPC around a non-zero operating point.

If `xmin, xmax` are not provided, state constraints are disabled.

## Solving optimal-control problems

At the heart of the MPC controller is a numerical optimal-control problem that is solved repeatedly each sample instant. A single instance of this problem can be solved by calling

`x, u = MPC.optimize!(prob, x0, t)`

where `x0` is the initial state and `t` is the time at which the problem starts. For linear problems, the returned value `u` may need adjustment for offsets; the call

`MPC.get_u!(prob, x, u)`

transforms the result of `optimize!` to the appropriate space.

## Stepping the MPC controller

A single step of the MPC controller can be taken by calling `MPC.step!`:

`uopt, x, u0 = MPC.step!(prob, u, y, r, t)`

where `u` is an $n_u \times n_T$ matrix whose first column corresponds to the control signal that was last taken. The rest of `u` is used as an initial guess for the optimizer. `y` is the latest measurement and is used to update the observer in `prob`. Internally, `step!` performs the following actions:

- Measurement update of the observer, forms $\hat x_{k | k}$.
- Solve the optimization problem with the state of the observer as the initial condition.
- Advance the state of the observer using its prediction model, forms $\hat x_{k+1 | k}$.
- Advance the problem caches, including the reference trajectory if `xr` is a full trajectory.

The return values of `step!` are

- `uopt`: the optimal trajectory (usually, only the first value is used in an MPC setting). This value is given in the correct space for interfacing with the true plant.
- `x`: the optimal state trajectory as seen by the optimizer; note that this trajectory will only correspond to the actual state trajectory for linear problems around the origin.
- `u0`: the control signal used to update the observer in the prediction step. Similar to `x`, this value may contain offsets and is usually of less external use than `uopt`, which is transformed to the correct units of the actual plant input.

## Interface to ModelingToolkit

This is an upcoming feature.

## Index

`JuliaSimControls.MPC.LQMPCProblem` — Method

```
LQMPCProblem(dynamics ;
N::Int,
Q1::AbstractMatrix,
Q2::AbstractMatrix,
Q3::Union{AbstractMatrix,Nothing} = nothing,
qs2 = 1000*maximum(Q1),
qs = sqrt(maximum(Q1)),
constraints,
observer = nothing,
xr,
ur = zeros(size(Q2,1)),
solver::AbstractMPCSolver = OSQPSolver(),
chunkA = ForwardDiff.Chunk{min(8, size(Q1, 1))}(),
chunkB = ForwardDiff.Chunk{min(8, size(Q2, 1))}(),
α = 1,
)
```

Defines a Linear Quadratic MPC problem. The cost is on the form `(x - xᵣ)'Q1*(x - xᵣ) + (u-uᵣ)'Q2*(u-uᵣ) + Δu'Q3*Δu`.

**Arguments:**

- `dynamics`: Either
    - An instance of `FunctionSystem` representing `x(t+1) = f(x(t), u(t), p, t)`, i.e., already discretized. This alternative supports nonlinear systems.
    - An `AbstractStateSpace` object defining a linear system. This alternative uses linear MPC.
    - An instance of `LinearMPCModel` or `RobustMPCModel`.
- `observer`: An instance of `AbstractObserver`, defaults to `OpenLoopObserver(dynamics, zeros(nx))`. This argument is not supplied if `dynamics` is a `RobustMPCModel` or `LinearMPCModel`.
- `N`: Prediction horizon in the MPC problem (number of samples, equal to the control horizon).
- `Q1`: State penalty matrix.
- `Q2`: Control penalty matrix.
- `Q3`: Control-derivative penalty matrix.
- `qs`: Soft state constraint linear penalty (scalar). Set to zero for hard constraints (hard constraints may render the problem infeasible).
- `qs2`: Soft state constraint quadratic penalty (scalar). Set to zero for hard constraints (hard constraints may render the problem infeasible).
- `constraints`: An instance of `MPCConstraints`.
- `xr`: Reference state (or reference output if `state_reference=false` for `LinearMPCModel`). If `dynamics` contains an operating point, `dynamics.op.x` will be the default `xr` if none is provided.
- `ur`: Reference control. If `dynamics` contains an operating point, `dynamics.op.u` will be the default `ur` if none is provided.
- `solver`: The solver to use. See `OSQPSolver`.
- `α`: A regularization parameter `0 ≤ α ≤ 1`. The linearization of the nonlinear dynamics can be biased towards the reference trajectory by setting `α < 1`; the linearization point will be `x = α*x + (1-α)*xr`.

`JuliaSimControls.MPC.LinearMPCModel` — Method

`LinearMPCModel(G, observer; constraints::MPCConstraints, op::OperatingPoint = OperatingPoint(), state_reference, strictly_proper = false, z = I(G.nx), x0)`

A model structure for use with linear MPC controllers. This structure serves as both a prediction model and an observer.

**Arguments:**

- `G`: A linear system created using, e.g., the function `ss`.
- `observer`: Any supported observer object, such as a `KalmanFilter`.
- `constraints`: An instance of `MPCConstraints`.
- `op`: An instance of `OperatingPoint`.
- `state_reference`: True if references are to be provided for states, false if references are for outputs.
- `strictly_proper`: Indicates whether or not the MPC controller is to be considered a strictly proper system, i.e., whether there is a one-sample delay before a measurement has an effect on the control signal. This is typically required if the computational time of the MPC controller is close to the sample time of the system.
- `z`: Either a vector of state indices indicating controlled variables, or an `nz × nx` matrix that multiplies the state vector to yield the controlled variables.
- `x0`: The initial state.

`JuliaSimControls.MPC.MPCConstraints` — Method

`MPCConstraints(G; umin, umax, xmin = nothing, xmax = nothing, op::OperatingPoint = nothing)`

A structure representing the constraints of an MPC problem.

**Arguments:**

- `G`: System for which the constraints apply.
- `umin`: Lower bound for control signals.
- `umax`: Upper bound for control signals.
- `xmin`: Lower bound for state signals.
- `xmax`: Upper bound for state signals.
- `op`: An operating point around which the system is linearized. Only required for linear systems with linear observers. If `op` is supplied, the constraints will be adjusted.

`JuliaSimControls.MPC.MPCIntegrator` — Method

`int = MPCIntegrator(dynamics, problem_constructor, alg::SciML.DEAlgorithm; Ts::Real, nx, nu, kwargs...)`

Discretize a dynamics function on the form `(x,u,p,t)->ẋ` using DE-integrator `alg`. The resulting object `int` behaves like a discrete-time dynamics function `(x,u,p,t)->x⁺`.

**Arguments:**

- `dynamics`: The continuous-time dynamics.
- `problem_constructor`: One of `ODEProblem, DAEProblem` etc.
- `alg`: Any `DEAlgorithm`, e.g., `Tsit5()`.
- `Ts`: The fixed sample time between control updates. The algorithm may take smaller steps internally.
- `nx`: The state (`x`) dimension.
- `nu`: The input (`u`) dimension.
- `kwargs`: Are sent to the integrator initialization function `init`.

**Example:**

This example creates two identical versions of a discretized dynamics, one using the `rk4` function and one using `MPCIntegrator`. For the `MPCIntegrator`, we set `dt` and `adaptive=false` in order to obtain equivalent results.

```
using JuliaSimControls.MPC
using OrdinaryDiffEq
using StaticArrays, LinearAlgebra # for SA/SVector and norm
"Continuous-time dynamics of a quadruple tank system."
function quadtank(h,u,p=nothing,t=nothing)
kc = 0.5
k1, k2, g = 1.6, 1.6, 9.81
A1 = A3 = A2 = A4 = 4.9
a1, a3, a2, a4 = 0.03, 0.03, 0.03, 0.03
γ1, γ2 = 0.3, 0.3
ssqrt(x) = √(max(x, zero(x)) + 1e-3)
# ssqrt(x) = sqrt(x)
xd = @inbounds SA[
-a1/A1 * ssqrt(2g*h[1]) + a3/A1*ssqrt(2g*h[3]) + γ1*k1/A1 * u[1]
-a2/A2 * ssqrt(2g*h[2]) + a4/A2*ssqrt(2g*h[4]) + γ2*k2/A2 * u[2]
-a3/A3*ssqrt(2g*h[3]) + (1-γ2)*k2/A3 * u[2]
-a4/A4*ssqrt(2g*h[4]) + (1-γ1)*k1/A4 * u[1]
]
end
nx = 4 # Number of states
nu = 2 # Number of inputs
Ts = 2 # Sample time
x0 = SA[1.0,1,1,1]
u0 = SVector(zeros(nu)...)
discrete_dynamics_rk = MPC.rk4(quadtank, Ts)
discrete_dynamics = MPC.MPCIntegrator(quadtank, ODEProblem, RK4(); Ts, nx, nu, dt=Ts, adaptive=false, saveat_end=false)
x1_rk = discrete_dynamics_rk(x0, u0, 0, 0)
x1 = discrete_dynamics(x0, u0, 0, 0)
@assert norm(x1 - x1_rk) < 1e-12
```

`JuliaSimControls.MPC.OSQPSolver` — Type

`OSQPSolver <: AbstractMPCSolver`

A solver which uses sequential quadratic programming on a system linearized around a trajectory computed using discretized dynamics.

**Arguments:**

- `verbose::Bool = false`
- `eps_rel::Float64 = 1e-5`
- `eps_abs::Float64 = 1e-5`
- `max_iter::Int = 5000`
- `check_termination::Int = 15`: The interval at which termination criteria are checked.
- `sqp_iters::Int = 2`: Number of sequential QP iterations to run. Oftentimes 1-5 is enough, but hard nonlinear problems can be solved more robustly with more iterations.
- `dynamics_interval::Int = 1`: How often to update the linearization. This is an expensive step, and the (average) solution time can be sped up by not updating every step. This must be set to 1 if `sqp_iters > 1`.
- `polish::Bool = true`: Try to obtain a high-accuracy solution; increases computational time.

`JuliaSimControls.MPC.RobustMPCModel` — Method

`RobustMPCModel(G; W1, W2 = I(G.ny), constraints::MPCConstraints, x0, strictly_proper = true, op::OperatingPoint = OperatingPoint(), state_reference, K)`

A model structure for use with linear MPC controllers. This structure serves as both a prediction model and an observer. Internally, the Glover-McFarlane method is used to find a robustly stabilizing controller for the shaped plant $G_s = W_2 G W_1$; see `glover_mcfarlane` and examples in the documentation for additional details.

Note, this model has automatically generated penalty matrices `Q1, Q2` built in, and there is thus no need to supply them to the constructor of `LQMPCProblem`.

**Arguments:**

- `G`: A linear system, created using, e.g., the function `ss`.
- `W1`: A precompensator for loop shaping. Set `W1` to an LTI controller with an integrator to achieve integral action in the MPC controller.
- `W2`: A postcompensator for loop shaping.
- `K`: An observer gain matrix of size `(G.nx, G.ny)`, or an observer object for the plant `G`, e.g., a `KalmanFilter`.
- `constraints`: An instance of `MPCConstraints`.
- `x0`: The initial state.
- `strictly_proper`: Indicates whether or not the MPC controller is to be considered a strictly proper system, i.e., whether there is a one-sample delay before a measurement has an effect on the control signal. This is typically required if the computational time of the MPC controller is close to the sample time of the system.
- `op`: An instance of `OperatingPoint`.
- `state_reference`: True if references are to be provided for states, false if references are for outputs.

`CommonSolve.solve` — Method

`solve(prob::AbstractMPCProblem; x0, T, p, verbose = false)`

Solve an MPC problem for `T` time steps starting at initial state `x0`.

Set `verbose = true` to get diagnostic outputs, including tuning tips, in case you experience poor performance.

`JuliaSimControls.MPC.optimize!` — Method

`optimize!(prob::LQMPCProblem, x0, p, t; verbose = true)`

Solve a single instance of the optimal-control problem in the MPC controller.

**Arguments:**

- `x0`: Initial state
- `t`: Initial time

`JuliaSimControls.MPC.step!` — Method

`step!(prob::LQMPCProblem, u, y, r, p, t; kwargs...)`

Take a single step using the MPC controller.

`uopt, x, u0 = MPC.step!(prob, u, y, r, p, t)`

where `u` is an $n_u \times n_T$ matrix whose first column corresponds to the control signal that was last taken. The rest of `u` is used as an initial guess for the optimizer. `y` is the latest measurement and is used to update the observer in `prob`. Internally, `step!` performs the following actions:

- Measurement update of the observer, forms $\hat x_{k | k}$.
- Solve the optimization problem with the state of the observer as the initial condition.
- Advance the state of the observer using its prediction model, forms $\hat x_{k+1 | k}$.
- Advance the problem caches, including the reference trajectory if `xr` is a full trajectory.

The return values of `step!` are

- `uopt`: the optimal trajectory (usually, only the first value is used in an MPC setting). This value is given in the correct space for interfacing with the true plant.
- `x`: the optimal state trajectory as seen by the optimizer; note that this trajectory will only correspond to the actual state trajectory for linear problems around the origin.
- `u0`: the control signal used to update the observer in the prediction step. Similar to `x`, this value may contain offsets and is usually of less external use than `uopt`, which is transformed to the correct units of the actual plant input.

`JuliaSimControls.rk4` — Method

`f_discrete = rk4(f, Ts; supersample = 1)`

Discretize `f` using RK4 with sample time `Ts`. See also `MPCIntegrator` for more advanced integration possibilities. More details are available at https://help.juliahub.com/Control/mpc/#Discretization

`JuliaSimControls.rk4` — Method

`f_discrete, l_discrete = rk4(f, l, Ts)`

Discretize dynamics `f` and loss function `l` using RK4 with sample time `Ts`. The returned function is on the form `(xₖ,uₖ,p,t) -> (xₖ₊₁, loss)`. Both `f` and `l` take the arguments `(x, u, p, t)`.

`JuliaSimControls.MPC.rollout` — Function

`x, u = rollout(f, x0::AbstractVector, u, p, t=1)`

Simulate the discrete system `f` from initial condition `x0` and input array `u`.

`JuliaSimControls.MPC.rms` — Function

`rms(x)`

Root-mean-square value of `x`. If `x` is a matrix, the rms value will be calculated along rows, i.e., `sqrt.(mean(abs2, x, dims = 2))`.

`JuliaSimControls.MPC.modelfit` — Function

`modelfit(y, yh)`

Calculate the model-fit percentage (normalized RMS error)

\[100 \left(1 - \dfrac{rms(y - yh)}{rms(y - mean(y))}\right)\]
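The fit percentage $100(1 - rms(y - \hat y)/rms(y - \operatorname{mean}(y)))$ can be reproduced by hand using the Statistics standard library (a sketch, not the package function):

```julia
using Statistics
# Hand-rolled model-fit percentage: 100 * (1 - rms(y - yh) / rms(y - mean(y)))
rms_(x) = sqrt(mean(abs2, x))
fitpercent(y, yh) = 100 * (1 - rms_(y .- yh) / rms_(y .- mean(y)))

y = [1.0, 2.0, 3.0, 4.0]
fitpercent(y, y)                 # a perfect model gives 100
fitpercent(y, fill(mean(y), 4))  # predicting the mean gives 0
```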

## MPC signals

```
┌───────────────┐
│ │
│ ┌─────┐ │ ┌─────┐
w─┴───►│ │ └────►│ ├─────►v
│ │ u │ │
r─────►│ MPC ├──┬──────►│ P ├─────►z
│ │ │ │ │
┌────►│ │ │ d────►│ ├──┬──►
│ └─────┘ │ └─────┘ │y
│ │ │
│ ┌───────┐ │ │
│ │ │◄─┘ │
└───┤ OBS │ │
│ │◄──────────────────┘
└───────┘
```

All signals relevant in the design of an MPC controller are specified in the block-diagram above. The user is tasked with designing the MPC controller as well as the observer.

The following signals are shown in the block diagram

- $w$ is a *known* disturbance, i.e., its value is known to the controller through a measurement or otherwise.
- $r$ is a reference value for the controlled output $z$.
- $\hat x$ is an estimate of the state of the plant.
- $u$ is the control signal.
- $v$ is a set of *constrained* outputs. This set may include direct feedthrough of inputs from $u$.
- $z$ is a set of controlled outputs, i.e., outputs that will be penalized in the cost function.
- $y$ is the measured output, i.e., outputs that are available for feedback to the observer. $z$ and $y$ may overlap.
- $d$ is an *unknown* disturbance, i.e., a disturbance of which there is no measurement or knowledge.

The controller assumes that there are references $r$ provided for *all* controlled outputs $z$. If $z$ is not provided, the controller assumes that *all states* are to be considered controlled variables and expects $Q_1$ to be a square matrix of size $n_x$, otherwise $Q_1$ is a square matrix of size $n_z$. $z$ may be provided as either a list of indices into the state vector, or as a matrix that multiplies the state vector.

## Internals

The MPC problem is formulated as a QP problem with

\[\operatorname{minimize}_{x,u} (x_{N+1} - x_r)^T Q_N (x_{N+1} - x_r) + \sum_{n=1}^N (x_n - x_r)^T Q_1 (x_n - x_r) + u_n^T Q_2 u_n + \Delta u_n^T Q_3 \Delta u_n\]

where $\Delta u_n = u_n - u_{n-1}$.

\[\operatorname{subject~to}\\ x_{n+1} = Ax_n + Bu_n\\ x_{min} \leq x \leq x_{max}\\ u_{min} \leq u \leq u_{max}\]

and $x_1$ is constrained to be equal to the MPC controller input and $Q_N$ is given by the solution to the discrete-time algebraic Riccati equation.

In the formulation above, $x_r$ is constant, but a full reference trajectory $x_r$ of size $n_x \times (N+1)$ is supported as well.
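The block structure of the quadratic cost above can be sketched by stacking the penalty matrices along the diagonal. This is an illustration of the structure only (slack and $Q_3$ terms omitted, and `QN` here is a stand-in; the package uses the Riccati solution for the terminal penalty):

```julia
using LinearAlgebra, SparseArrays
# Sketch of the QP cost Hessian for the formulation above, with variables
# ordered x_1..x_{N+1}, u_1..u_N. Slack and Q3 terms are omitted.
N  = 3
Q1 = Diagonal([10.0, 1.0])  # state penalty (nx = 2)
Q2 = Diagonal([0.1])        # control penalty (nu = 1)
QN = Q1                     # terminal-penalty stand-in (Riccati solution in the package)
P = blockdiag(
    kron(sparse(I, N, N), sparse(Q1)),  # x_1 .. x_N
    sparse(QN),                         # x_{N+1}
    kron(sparse(I, N, N), sparse(Q2)),  # u_1 .. u_N
)
size(P)  # ((N+1)*nx + N*nu) square, here 11×11
```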

Hard state constraints are known to be problematic in MPC applications, since disturbances might cause the initial state of the problem to become infeasible. The approach used to solve this problem is to make the state constraints *soft* by means of slack variables. The soft inequality constraints are formulated as

\[x \leq x_{max} + s_u\\ x \geq x_{min} - s_l\\ s_u \geq 0 \\ s_l \geq 0 \]

where $s_u$ and $s_l$ are the upper and lower slack variables. To ensure that the slack variables remain zero unless necessary, the penalty term $q_s \left(||s_u||_1 + ||s_l||_1\right) + q_{s2} \left(||s_u||_2^2 + ||s_l||_2^2\right)$ is added to the cost function. Since the slack variables are all non-negative, the one-norm part of this term is linear and equivalent to

\[q_s\mathbf{1} ^T s_u + q_s\mathbf{1} ^T s_l\]

### State constraints

Constraints on the state components (`xmin, xmax`) are by default "soft", in the sense that violations are penalized rather than enforced. The penalty weights can be controlled by the keyword arguments `qs, qs2` to `LQMPCProblem`, where `qs` penalizes a linear term corresponding to the one-norm of the constraint violation, while `qs2` penalizes a quadratic term. By default, a combination of quadratic and linear penalty is used. Having a large linear penalty promotes tighter constraint enforcement, but makes the optimization problem less smooth and harder to solve.

### Output references

When problems using `LinearMPCModel` with `state_reference = false` are solved, the cost function used is

\[\operatorname{minimize}_{x,u} z_{N+1}^T Q_N z_{N+1} + \sum_{n=1}^N z_n^T Q_1 z_n + u_n^T Q_2 u_n + \Delta u_n^T Q_3 \Delta u_n \\ z_n = C_z x_n\]

and the references enter the observer instead of the cost function. In this case, reference preview is not available, i.e., `xr` must be a vector.

The matrix $C_z$ can be chosen either by specifying the indices of the controlled variables, `LinearMPCModel(...; z = state_indices)`, or directly as a matrix, `LinearMPCModel(...; z = Cz)`.
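The two forms of `z` are related: a vector of indices is equivalent to the selection matrix formed from the corresponding rows of the identity, as this plain-Julia illustration shows:

```julia
using LinearAlgebra
# Index form vs. matrix form of the controlled-variable selection
nx = 4
z_inds = [1, 3]                             # controlled variables: states 1 and 3
Cz = Matrix{Float64}(I, nx, nx)[z_inds, :]  # equivalent selection matrix (nz × nx)
x = [1.0, 2.0, 3.0, 4.0]
Cz * x  # picks out states 1 and 3
```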

### Operating points

The type `OperatingPoint(x, u, y)` keeps track of state, input and output values around which a system is linearized. Nonlinear MPC formulations do not require this, but linear MPC controllers will generally perform better if the linearization point is provided to the constructors of `LinearMPCProblem` and `RobustMPCProblem`. If constraints are used, the operating point must be supplied to the constructor of `MPCConstraints` as well. Note, nonlinear observers typically don't require operating-point adjustment.

Given an `op::OperatingPoint`, `JuliaSimControls.linearize(system, op, p, t)` is shorthand for linearizing the system around the operating point.

### Specialization for linear MPC

For linear MPC problems, further control over the constraints is allowed; the constraints are then posed on the *constrained output variable* $v$, defined as

\[\operatorname{subject~to}\\ v_{min} - S s_l \leq C_v x + D_v u \leq v_{max} + S s_u\\ s_u \geq 0 \\ s_l \geq 0 \]

where $S$ is used to mix hard and soft constraints, i.e., $S$ is a matrix with the same row size as $v$ and a column size equal to the number of soft constraints.

In the presence of output matrices $C_v, D_v$, the constraint equations change to

\[C_v x + D_v u \leq v_{max} + S s_u\\ C_v x + D_v u \geq v_{min} - S s_l\\\]

Note, if loop-shaping weights $W_1$ and $W_2$ are in use, the constraint equations have been transformed from what the user originally provided.

### Problem layout

The variables are ordered

\[\mathbf{x} = x_1,\; ...,\; x_{N+1},\; u_1,\; ...,\; u_N,\; s_{u_1},\; ...,\; s_{u_{N+1}},\; s_{l_1},\; ...,\; s_{l_{N+1}}\]
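This ordering translates into index ranges into the stacked variable vector $\mathbf{x}$. The sketch below assumes one slack variable per state and time step; the actual slack dimension depends on which constraints are soft:

```julia
# Index ranges implied by the variable ordering above. Assumes full-state soft
# constraints, i.e., nx slack variables per time step; illustrative only.
nx, nu, N = 2, 1, 5
x_inds  = 1:nx*(N+1)                    # x_1 .. x_{N+1}
u_inds  = last(x_inds) .+ (1:nu*N)      # u_1 .. u_N
su_inds = last(u_inds) .+ (1:nx*(N+1))  # upper slacks s_u
sl_inds = last(su_inds) .+ (1:nx*(N+1)) # lower slacks s_l
last(sl_inds)  # total number of decision variables, here 41
```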

All constraints, equality and inequality alike, are encoded in the form (notation borrowed from the solver documentation)

\[l \leq \mathbf{Ax} \leq u\]

where $\mathbf{x}$ contains *all* variables and $\mathbf{A}$ is called the constraint matrix. Factorizing $\mathbf{A}$ is an expensive step of the solution to the MPC problem.

The cost function is encoded in the form

\[\dfrac{1}{2} \mathbf{x}^T P \mathbf{x} + q^T \mathbf{x}\]

When soft constraints are used, the inequality part of the constraint matrix $\mathbf{A}$ has the structure

\[\mathbf{A} = \begin{bmatrix} I_{N} \otimes C_v & \mathbf{1_n} \otimes C_v & I_{N} \otimes D_v & S_u \\ I_{N} \otimes C_v & \mathbf{1_n} \otimes C_v & I_{N} \otimes D_v & S_l \\ ~ & ~ & ~ & I_s \end{bmatrix}\]

\[S_u = \begin{bmatrix} -I_{s_u} & 0 \end{bmatrix}\]

\[S_l = \begin{bmatrix} 0 & I_{s_l} \end{bmatrix}\]

### Constraint updating

#### Linear MPC

For linear systems, the only constraint that must be updated each iteration is the constraint corresponding to the initial state, and if a $Q_3$ term is used, a similar constraint corresponding to $u_0$ (the previous control action).

#### Nonlinear MPC (NMPC)

When sequential quadratic programming (SQP) iterations are used, the nonlinear dynamics are linearized around a trajectory each SQP-iteration. This linearization leads to updated $A$ and $B$ matrices in the constraint $x_{n+1} = Ax_n + Bu_n$, and consequently an update and refactorization of the constraint matrix $\mathbf{A}$. This causes NMPC to be significantly more expensive than linear MPC. The number of SQP iterations is a user parameter and, due to the factorization dominating the computational time, often causes a linear increase in the computational time. Typically, a low number of SQP iterations is enough since the solution at the previous time step provides a very good initial guess for the optimization problem.