MPC Details

Observers

This package defines the observer type

  • StateFeedback: This observer does not incorporate measurement feedback. It can be used if you assume availability of full state information.

In addition to StateFeedback, you may use any observer defined in LowLevelParticleFilters, such as a KalmanFilter, ExtendedKalmanFilter, UnscentedKalmanFilter, or ParticleFilter.

How these observers are set up and used is shown in the examples above, as well as in the examples section of the documentation.

Discretization

When the dynamics are specified in continuous time, a discretization scheme must be employed for the optimizer to obtain a finite-dimensional problem. While the quadratic MPC problem types always make use of multiple shooting, the GenericMPCProblem supports several different discretization methods, detailed in this section.

At a high level, the MPC library supports three general approaches to transcribing infinite-dimensional optimal-control problems into finite-dimensional optimization problems:

  • Multiple shooting
  • Direct collocation on finite elements
  • Trapezoidal integration

The implementation of multiple shooting supports dynamics consisting of ODEs only, i.e., algebraic equations (DAEs) are not supported, while the collocation and trapezoidal methods support DAEs and stiff dynamics. Generally, we have the following properties of the different transcription methods:

  • Multiple shooting introduces optimization variables for the state at each sample instant, $n_x \times N$ in total.
  • Direct collocation introduces optimization variables for the state at each collocation point, $n_x \times n_c \times N$ in total, where $n_c$ is the number of collocation points (selected upon creating the CollocationFinE structure).
  • Trapezoidal integration is an implicit method that introduces optimization variables for the state at each sample instant, similar to multiple shooting, $n_x \times N$ in total.

The multiple-shooting transcription will thus introduce fewer variables than collocation, but is only applicable to non-stiff systems of ODEs. Direct collocation (and the simpler trapezoidal integration scheme) is an implicit method that handles stiff dynamics and algebraic equations.

Multiple shooting

Systems of ordinary differential equations (ODEs) can be discretized using an explicit method, such as Runge-Kutta 4. For non-stiff systems, the fastest option is to use the special-purpose function MPC.rk4. To discretize continuous-time dynamics functions on the form (x,u,p,t) -> ẋ using the function MPC.rk4, we simply wrap the dynamics function by calling rk4 like so:

discrete_dynamics = MPC.rk4(continuous_dynamics, sampletime; supersample=1)

where the integer supersample determines the number of RK4 steps that are taken internally for each change of the control signal (1 is often sufficient and is the default). The returned function discrete_dynamics is on the form (x,u,p,t) -> x⁺. The discretized dynamics can further be wrapped in a FunctionSystem in order to add a measurement function and names of states, inputs, and outputs, as sketched below.
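
A minimal sketch of this workflow is shown below. The pendulum model, the measurement function, and the signal names are illustrative assumptions; adapt them to your system.

function pendulum(x, u, p, t)         # illustrative continuous-time dynamics
    g, L = 9.81, 1.0
    return [x[2], -g / L * sin(x[1]) + u[1]]
end
measurement = (x, u, p, t) -> x[1:1]  # assume only the angle is measured
sampletime = 0.05
discrete_dynamics = MPC.rk4(pendulum, sampletime; supersample = 2)
dynamics = FunctionSystem(discrete_dynamics, measurement; x = [:θ, :ω], u = [:τ], y = [:θ])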

Dynamics with algebraic equations can be discretized using the SeeToDee.SimpleColloc method, which uses a direct collocation method on finite elements. Example:

discrete_dynamics = SeeToDee.SimpleColloc(continuous_dynamics, sampletime; n=5)

for a 5th-order Gauss-Lobatto collocation method. The returned object discrete_dynamics is once again callable on the form (x,u,p,t) -> x⁺.

Dynamics that are difficult to integrate, e.g., due to stiffness, may make use of MPCIntegrator. This type can use any method from the DifferentialEquations.jl ecosystem to perform the integration. This comes at a slight cost in performance: MPCIntegrator with an internal RK4 integrator is about 2-3x slower than the MPC.rk4 function. The main performance difference between the integrator choices arises during the linearization step when SQP iterations are used.

Direct collocation on finite elements

As an alternative to MPC.rk4, collocation of the system dynamics on finite elements provides a method that combines the rapid convergence of the orthogonal collocation method with the convenience, associated with finite-difference methods, of locating grid points or elements where the solution is important or has large gradients. Instead of integrating the continuous-time dynamics, collocation on finite elements uses Lagrange polynomials to approximate the solution of the system dynamics over a finite element of time. These elements are collected over the time horizon of the MPC formulation to yield an optimal solution.

The integer degree deg of the collocating Legendre polynomial determines the accuracy of the state solution obtained, and is related to the number of collocation points as deg = n_colloc-1, where n_colloc is a user choice. The number of collocation points is thus a tradeoff between computational cost and order of convergence. The truncation error depends on the choice of collocation points roots_c: for Gauss-Legendre collocation roots, the truncation error is of the order $\mathcal{O}(h^{2k})$, where $k$ is the degree of the polynomial, while for Gauss-Radau collocation, it is of the order $\mathcal{O}(h^{2k-1})$. For example, the default n_colloc = 5 gives deg = 4, and thus a truncation error of $\mathcal{O}(h^{8})$ with Gauss-Legendre roots. Collocation on finite elements can also be used to solve continuous-time DAE problems. The discretization structure for collocation on finite elements can be constructed as

disc = CollocationFinE(dynamics, false; n_colloc = 5, roots_c = "Legendre")

where, among the arguments to CollocationFinE, false disables threaded evaluation of the dynamics and n_colloc is the number of collocation points for each finite element. The roots_c option selects Gauss-Legendre collocation by default; this can be made explicit by setting roots_c = "Legendre". For Radau collocation points, use roots_c = "Radau". This discretization structure can be passed to GenericMPCProblem via the keyword argument disc, as sketched below.
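
For example, a hedged construction sketch, where the remaining problem arguments (N, observer, objective, constraints, objective_input, xr) are assumed to be constructed as described in the GenericMPCProblem docstring in the Index below:

disc = CollocationFinE(dynamics, false; n_colloc = 5, roots_c = "Radau") # Radau collocation points
prob = GenericMPCProblem(dynamics; N, observer, objective, constraints, objective_input, xr, disc)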

Accuracy of integration vs. performance

When solving MPC problems, it is sometimes beneficial to favor a faster sample rate and a longer prediction horizon over highly accurate integration. The motivations for this are several

  • The dynamics model is often inaccurate, and solving an inaccurate model to high accuracy can be a waste of effort.
  • The performance is often dictated by the disturbances acting on the system, and having a higher sample rate may allow the controller to detect and reject disturbances faster.
  • Feedback from measurements will over time correct for slight errors due to integration.
  • Increasing sample rate leads to each subsequent optimization problem being more similar to the previous one, making warm-starting more efficient and a good solution being found in fewer iterations.

Solving optimal-control problems

At the heart of the MPC controller is a numerical optimal-control problem that is solved repeatedly, once per sample instant. For LQMPCProblem and QMPCProblem, a single instance of this problem can be solved by calling

controlleroutput = MPC.optimize!(prob, x0, p, t)            # Alternative 1
controlleroutput = MPC.optimize!(prob, controlleroutput, p, t) # Alternative 2

where x0 is the initial state and t is the time at which the problem starts. The returned object is an instance of ControllerOutput, which contains the optimal control signal u and the optimal state trajectory x. For linear problems, the returned value controlleroutput.u may need adjustment for offsets; the call

MPC.get_u!(prob, controlleroutput.x, controlleroutput.u)

transforms the result of optimize! to the appropriate space.
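
Put together, a single solve of a linear problem might look like the following sketch, where x0, p, and t are assumed to be defined, and we assume that get_u! returns the transformed control signal:

controlleroutput = MPC.optimize!(prob, x0, p, t)
u = MPC.get_u!(prob, controlleroutput.x, controlleroutput.u) # offset-adjusted control signal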

For GenericMPCProblem, the interface to MPC.optimize! is

controlleroutput = MPC.optimize!(prob, controllerinput, p, t)

where controllerinput and controlleroutput are of types ControllerInput and ControllerOutput. The constructor to GenericMPCProblem also has an option presolve that solves the optimal-control problem directly, after which the state and control trajectories are available as

x, u = get_xu(prob)

For an example of solving an optimal-control problem, see Optimal-control example.
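
As a sketch, a trajectory-optimization style use of presolve might look as follows, with the problem-construction arguments assumed to be defined as in the GenericMPCProblem docstring:

prob = GenericMPCProblem(dynamics; N, observer, objective, constraints, objective_input, xr, presolve = true)
x, u = get_xu(prob) # optimal state and control trajectories from the presolve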

Stepping the MPC controller

A single step of the MPC controller can be taken by calling MPC.step!

uopt, x, u0 = MPC.step!(prob, observerinput, p, t)

where observerinput is of type ObserverInput, containing the previous control input u, the latest measurement y (which is used to update the observer in prob), a new reference r, as well as any known disturbances w. A usage sketch is shown after the list of return values below. Internally, step! performs the following actions:

  1. Measurement update of the observer, forms $\hat x_{k | k}$.
  2. Solve the optimization problem with the state of the observer as the initial condition.
  3. Advance the state of the observer using its prediction model, forms $\hat x_{k+1 | k}$.
  4. Advance the problem caches, including the reference trajectory if xr is a full trajectory.

The return values of step! are

  • uopt: the optimal control-signal trajectory (usually, only the first value is used in an MPC setting). This value is given in the correct space for interfacing with the true plant.
  • x: The optimal state trajectory as seen by the optimizer. Note that this trajectory will only correspond to the actual state trajectory for linear problems around the origin.
  • u0: The control signal used to update the observer in the prediction step. Similar to x, this value may contain offsets and is usually of less external use than uopt, which is transformed to the correct units of the actual plant input.
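
A minimal closed-loop sketch is shown below. The variables u_prev, y_meas, and r_traj are hypothetical, we assume the known-disturbance signal w may be omitted from ObserverInput, and we assume uopt is an $n_u \times N$ matrix:

observerinput = ObserverInput(; u = u_prev, y = y_meas, r = r_traj)
uopt, x, u0 = MPC.step!(prob, observerinput, p, t)
u_apply = uopt[:, 1] # apply only the first optimal control input to the plant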

Interface to ModelingToolkit

Simulating MPC controllers with ModelingToolkit models is an upcoming feature. To use an MTK model as the prediction model in an MPC problem or to solve optimal-control problems for MTK models, see the tutorial Solving optimal-control problems with MTK models.

Index

JuliaSimControl.MPC.BoundsConstraintType
BoundsConstraint(xmin, xmax, umin, umax, xNmin, xNmax, dumin, dumax)
BoundsConstraint(; xmin, xmax, umin, umax, xNmin=xmin, xNmax=xmax, dumin=-Inf, dumax=Inf)

Upper and lower bounds for state and control inputs. This constraint is typically more efficient than StageConstraint for simple bounds constraints. dumin and dumax are the bounds on the change in control inputs from one stage to the next.

Separate bounds may be provided for the terminal state xN; if none are given, the terminal state is assumed to have the same bounds as the rest of the state trajectory.
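
Example:

A sketch for a two-state, one-input system with illustrative bounds:

bounds = BoundsConstraint(; xmin = [-1.0, -Inf], xmax = [1.0, Inf], umin = [-2.0], umax = [2.0], dumin = -0.5, dumax = 0.5) # dumin/dumax bound the change in u between stages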

source
JuliaSimControl.MPC.CollocationFinEType
CollocationFinE(dyn, threads=false; n_colloc = 5, roots_c = "Legendre", scale_x=ones(dyn.nx), hold_order = 0)

Orthogonal Collocation on Finite Elements dynamics constraint. This discretization method can use zero- or first-order-hold control inputs and supports ODE or DAE dynamics.

Arguments:

  • dyn::F: The continuous-time dynamics
  • threads::Bool = false: Use threaded evaluation of the dynamics. For small dynamics, the overhead of threads is too large to be worthwhile.
  • n_colloc::Int = 5: Number of collocation points per finite element
  • roots_c::String = "Legendre": Type of collocation points. Currently supports "Legendre" and "Radau" points.
  • hold_order::Int = 0 : Order of hold for control inputs. 0 for zero-order hold (piecewise constant inputs), 1 for first-order hold (piecewise affine inputs).
source
JuliaSimControl.MPC.CollocationFinEType
CollocationFinE{is_dae, F <: FunctionSystem}

Orthogonal Collocation on Finite Elements dynamics constraint. This discretization method can use zero- or first-order-hold control inputs.

Fields:

  • dyn::F: The continuous-time dynamics
  • Ts: Sample time
  • threads::Bool = false: Use threaded evaluation of the dynamics. For small dynamics, the overhead of threads is too large to be worthwhile.
  • hold_order::Int = 0 : Order of hold for control inputs. 0 for zero-order hold (piecewise constant inputs), 1 for first-order hold (piecewise affine inputs).
source
JuliaSimControl.MPC.ControllerInputType
ControllerInput <: CompositeSignal

Contains all signals that are required to run the MPC controller one iteration.

  ┌───────────────┬──────────────────────┐
  │               │                      │
  │    ┌─────┐    │     ┌─────┐          │
w─┴───►│     │    └────►│     ├─────►v   │
       │     │ u        │     │          │
r─────►│ MPC ├──┬──────►│  P  ├─────►z   │
       │     │  │       │     │          │
 ┌────►│     │  │ d────►│     ├──┬──►    │
 │     └─────┘  │       └─────┘  │y      │
 │              │                │       │
 │   ┌───────┐  │                │       │
 │   │       │◄─┘                │       │
 │   │       │                   │       │
 └───┤  OBS  │◄──────────────────┘       │
     │       │                           │
     │       │◄──────────────────────────┘
     └───────┘

All signals relevant in the design of an MPC controller are specified in the block-diagram above. The user is tasked with designing the MPC controller as well as the observer.

The following signals are shown in the block diagram

  • $w$ is a known disturbance, i.e., its value is known to the controller through a measurement or otherwise.
  • $r$ is a reference value for the controlled output $z$.
  • $\hat x$ is an estimate of the state of the plant $P$.
  • $u$ is the control signal.
  • $v$ is a set of constrained outputs. This set may include direct feedthrough of inputs from $u$.
  • $z$ is a set of controlled outputs, i.e., outputs that will be penalized in the cost function.
  • $y$ is the measured output, i.e., outputs that are available for feedback to the observer. $z$ and $y$ may overlap.
  • $d$ is an unknown disturbance, i.e., a disturbance of which there is no measurement or knowledge.
source
JuliaSimControl.MPC.ControllerOutputType
ControllerOutput <: CompositeSignal

Contains all signals resulting from one step of the MPC controller. Also contains sol which is the solution object from the optimization problem. sol.retcode indicates whether the optimization was successful or not.

  ┌───────────────┬──────────────────────┐
  │               │                      │
  │    ┌─────┐    │     ┌─────┐          │
w─┴───►│     │    └────►│     ├─────►v   │
       │     │ u        │     │          │
r─────►│ MPC ├──┬──────►│  P  ├─────►z   │
       │     │  │       │     │          │
 ┌────►│     │  │ d────►│     ├──┬──►    │
 │     └─────┘  │       └─────┘  │y      │
 │              │                │       │
 │   ┌───────┐  │                │       │
 │   │       │◄─┘                │       │
 │   │       │                   │       │
 └───┤  OBS  │◄──────────────────┘       │
     │       │                           │
     │       │◄──────────────────────────┘
     └───────┘

All signals relevant in the design of an MPC controller are specified in the block-diagram above. The user is tasked with designing the MPC controller as well as the observer.

The following signals are shown in the block diagram

  • $w$ is a known disturbance, i.e., its value is known to the controller through a measurement or otherwise.
  • $r$ is a reference value for the controlled output $z$.
  • $\hat x$ is an estimate of the state of the plant $P$.
  • $u$ is the control signal.
  • $v$ is a set of constrained outputs. This set may include direct feedthrough of inputs from $u$.
  • $z$ is a set of controlled outputs, i.e., outputs that will be penalized in the cost function.
  • $y$ is the measured output, i.e., outputs that are available for feedback to the observer. $z$ and $y$ may overlap.
  • $d$ is an unknown disturbance, i.e., a disturbance of which there is no measurement or knowledge.
source
JuliaSimControl.MPC.DifferenceCostType
DifferenceCost(metric, getter)
DifferenceCost(metric)

A cost-function object that represents a running cost of differences in a GenericMPCProblem. metric: (Δz, p, t)->scalar is a function that computes the cost of a difference $Δz = z(k) - z(k-1)$, and getter is a function (si, p, t)->z that outputs $z$. If getter is not provided, it defaults to outputting the control signal, (si, p, t)->si.u.

Example:

The example below illustrates how to penalize $\Delta u = u(k) - u(k-1)$ for a single-input system

p = (; Q3)

difference_cost = DifferenceCost() do Δu, p, t
    dot(Δu, p.Q3, Δu) # Compute the penalty given a difference `Δu`
end

We may also penalize the difference of an arbitrary function of the state and inputs by passing an additional function to DifferenceCost. The example below is equivalent to the one above, but passes the explicit function getter that extracts the control signal. This function can extract any arbitrary value z = f(x, u)

getter = (si,p,t)->SVector(si.u[]) # Extract the signal to penalize differences for, in this case, the penalized signal `z = u`

difference_cost = DifferenceCost(getter) do e, p, t
    dot(e, p.Q3, e) # Compute the penalty given a difference `e = Δz`
end

Extended help

It is common to penalize control-input differences in MPC controllers for multiple reasons, some of which include

  • Reduce actuator wear and tear due to high-frequency actuation.
  • Avoid excitation of higher-order and unmodeled dynamics such as structural modes, often occurring at higher frequencies.
  • Reduce stationary errors without the presence of integral action.
  • Eliminate stationary errors in the presence of input integration, see Integral action for more details.
source
JuliaSimControl.MPC.GenericMPCProblemMethod
GenericMPCProblem(dynamics; N, observer, objective, constraints::Union{AbstractVector{<:AbstractGenericMPCConstraint}, CompositeMPCConstraint}, solver = IpoptSolver(), p = DiffEqBase.NullParameters(), objective_input, xr, presolve = true, threads = false)

Defines a generic, nonlinear MPC problem.

Arguments:

  • dynamics: An instance of FunctionSystem
  • N: Prediction horizon in the MPC problem (number of samples, equal to the control horizon)
  • Ts: Sample time for the system; the control signal is allowed to change every Ts seconds.
  • observer: An instance of AbstractObserver, defaults to StateFeedback(dynamics, zeros(nx)).
  • objective: An instance of Objective.
  • constraints: A vector of AbstractGenericMPCConstraint.
  • solver: An instance of an optimizer recognized by Optimization.jl. The recommended default choice is the Ipopt.Optimizer, which can be created using IpoptSolver.
  • p: Parameters that will be passed to the dynamics, cost and constraint functions. It is possible to provide a different set of parameters for the cost and constraint functions by passing in an instance of MPCParameters.
  • objective_input: An instance of ObjectiveInput that contains initial guesses of states and control trajectories.
  • xr: Reference trajectory
  • presolve: Solve the initial optimal-control problem already in the constructor. This may provide a better initial guess for the first online solve of the MPC controller than the one provided by the user.
  • threads: Use threads to evaluate the dynamics constraints. This is beneficial for large systems.
  • scale_x = ones(dynamics.nx): Scaling factors for the state variables. These can be set to the "typical magnitude" of each state to improve numerical performance in the solver.
  • scale_u = ones(dynamics.nu): Scaling factors for the input variables.
  • disc = MultipleShooting(dynamics): Discretization method. Defaults to MultipleShooting. See Discretization for options.
  • robust_horizon: Set to a positive integer to enable robust MPC. The robust horizon is the number of initial control inputs that are constrained to be equal for robust optimization problems. Defaults to 0 (no robust MPC). When robust MPC is used, the parameter argument p is expected to be a vector of parameter structures, e.g., Vector{<:Vector} or Vector{Dict} etc. See the tutorial Robust MPC with uncertain parameters for additional information. For robust MPC, robust_horizon = 1 is the most common choice, while for trajectory optimization in open loop, robust_horizon = N is recommended. For hierarchical controllers, or any situation where several (Nh) consecutive time steps of the optimized trajectory are executed before re-optimizing, set robust_horizon = Nh.
  • int_x::Vector{Bool}: Vector of booleans indicating which state variables should be treated as integer variables. Defaults to falses(dynamics.nx). See https://docs.sciml.ai/Optimization/dev/optimization_packages/mathoptinterface/#Using-Integer-Constraints for more details.
  • int_u::Vector{Bool}: Vector of booleans indicating which input variables should be treated as integer variables. Defaults to falses(dynamics.nu).
  • Nint = N: Number of control inputs that should be treated as integer variables. Defaults to all control inputs. The computational effort required to solve control problems can sometimes be reduced significantly by relaxing the problem to only require the first $N_{int} < N$ control inputs to be integer.
  • jacobian_method: Method for calculating the constraint jacobian. The default depends on the chosen disc. Options are :symbolics, :forwarddiff. Symbolic jacobians are typically fast, but may take a long time to build and compile for very large problems.
  • gradient_method: Method for calculating the gradient of the cost function. Defaults to :forwarddiff. Options are :reversediff, :forwarddiff.
  • hessian_method: Method for calculating the hessian of the cost function and lagrangian. Defaults to :symbolics. Options are :symbolics, :forwarddiff, :none. Symbolic hessians are typically fast, but may take a long time to build and compile for very large problems. If exact_hessian = false is used in the solver, choose :none for hessian_method to avoid generating the Hessian function and use an LBFGS-approximation to the Hessian.

For very large problems, the following options are recommended

jacobian_method = :forwarddiff,
gradient_method = :reversediff,
hessian_method = :none,

together with exact_hessian = false as argument to IpoptSolver.

source
JuliaSimControl.MPC.LQMPCProblemMethod
LQMPCProblem(dynamics ;
    N::Int,
    Q1::AbstractMatrix,
    Q2::AbstractMatrix,
    Q3::Union{AbstractMatrix,Nothing} = nothing,
    qs2 = 1000*maximum(Q1),
    qs  = sqrt(maximum(Q1)),
    constraints,
    r,
    solver::AbstractMPCSolver = OSQPSolver(),
)

Defines a Linear Quadratic MPC problem. The cost is on the form (z - zᵣ)'Q1*(z - zᵣ) + (u-uᵣ)'Q2*(u-uᵣ) + Δu'Q3*Δu.

Arguments:

  • dynamics: An instance of LinearMPCModel or RobustMPCModel
  • N: Prediction horizon in the MPC problem (number of samples, equal to the control horizon)
  • Q1: State penalty matrix
  • Q2: Control penalty matrix
  • Q3: Control derivative penalty matrix
  • qs: Soft state constraint linear penalty (scalar). Set to zero for hard constraints (hard constraints may render problem infeasible).
  • qs2: Soft state constraint quadratic penalty (scalar). Set to zero for hard constraints (hard constraints may render problem infeasible).
  • constraints: An instance of MPCConstraints
  • r: References. If dynamics contains an operating point, dynamics.op.x will be the default r if none is provided.
  • ur: Reference control. If dynamics contains an operating point, dynamics.op.u will be the default ur if none is provided.
  • solver: The solver to use. See OSQPSolver
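
Example:

A hedged construction sketch, where model is assumed to be a LinearMPCModel and Q1, Q2, constraints, and r are assumed to be defined:

prob = LQMPCProblem(model; N = 10, Q1, Q2, constraints, r, solver = OSQPSolver())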
source
JuliaSimControl.MPC.LinearMPCConstraintsType
LinearMPCConstraints(vmin, vmax, Cv, Dv, soft_indices)
LinearMPCConstraints(; vmin, vmax, Cv, Dv, soft_indices)

Applicable to LinearMPCModel, this structure allows you to constrain any linear combination of the states and inputs. The constrained output $v_{min} ≤ v ≤ v_{max}$ is defined as

\[v = C_v x + D_v u\]

  • soft_indices: A vector of integer indices indicating which components of v are soft constraints. It is recommended to make the components of $v$ that depend on the state soft constraints.
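
Example:

A sketch constraining the first state and the single input of a two-state system, with the state component soft (values are illustrative):

Cv = [1.0 0.0; 0.0 0.0]        # v₁ = x₁
Dv = reshape([0.0, 1.0], 2, 1) # v₂ = u₁
cons = LinearMPCConstraints(; vmin = [-1.0, -2.0], vmax = [1.0, 2.0], Cv, Dv, soft_indices = [1])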
source
JuliaSimControl.MPC.LinearMPCModelMethod
LinearMPCModel(G, observer; constraints::AbstractLinearMPCConstraints, op::OperatingPoint = OperatingPoint(), strictly_proper = false, z = I(G.nx), x0)

A model structure for use with linear MPC controllers. This structure serves as both a prediction model and an observer.

Arguments:

  • G: A linear system created using, e.g., the function ss.
  • observer: Any supported observer object, such as a KalmanFilter.
  • constraints: An instance of MPCConstraints or LinearMPCConstraints
  • op: An instance of OperatingPoint
  • strictly_proper: Indicate whether or not the MPC controller is to be considered a strictly proper system, i.e., if there is a one sample delay before a measurement has an effect on the control signal. This is typically required if the computational time of the MPC controller is close to the sample time of the system.
  • z: Either a vector of state indices indicating controlled variables, or a matrix nz × nx that multiplies the state vector to yield the controlled variables.
  • v: Either a vector of state indices indicating constrained outputs, or a matrix nv × nx that multiplies the state vector to yield the constrained outputs. This option has no effect if LinearMPCConstraints are used.
  • x0: The initial state of the internal observer.
source
JuliaSimControl.MPC.MPCConstraintsMethod
MPCConstraints(; umin, umax, xmin = nothing, xmax = nothing, soft = true)

A structure representing the constraints of an MPC problem. See also LinearMPCConstraints for a more advanced interface to linear constraints.

Arguments:

  • umin: Lower bound for control signals.
  • umax: Upper bound for control signals.
  • xmin: Lower bound for constrained output signals.
  • xmax: Upper bound for constrained output signals.
  • soft: Indicate if constrained outputs are using soft constraints (recommended)
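
Example:

A sketch with illustrative bounds for a two-state, one-input system:

cons = MPCConstraints(; umin = [-2.0], umax = [2.0], xmin = [0.0, 0.0], xmax = [10.0, 10.0])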
source
JuliaSimControl.MPC.MPCIntegratorMethod
int = MPCIntegrator(dynamics, problem_constructor, alg::SciML.DEAlgorithm; Ts::Real, nx, nu, kwargs...)

Discretize a dynamics function on the form (x,u,p,t)->ẋ using the DE integrator alg. The resulting object int behaves like a discrete-time dynamics function (x,u,p,t)->x⁺.

Arguments:

  • dynamics: The continuous-time dynamics.
  • problem_constructor: One of ODEProblem, DAEProblem etc.
  • alg: Any DEAlgorithm, e.g., Tsit5().
  • Ts: The fixed sample time between control updates. The algorithm may take smaller steps internally.
  • nx: The state (x) dimension.
  • nu: The input (u) dimension.
  • kwargs: Are sent to the integrator initialization function init.

Example:

This example creates two identical versions of a discretized dynamics, one using the rk4 function and one using MPCIntegrator. For the MPCIntegrator, we set dt and adaptive=false in order to get equivalent results.

using JuliaSimControl.MPC
using OrdinaryDiffEq
using StaticArrays, LinearAlgebra # provides SA, SVector, and norm used below

"Continuous-time dynamics of a quadruple tank system."
function quadtank(h,u,p=nothing,t=nothing)
    kc = 0.5
    k1, k2, g = 1.6, 1.6, 9.81
    A1 = A3 = A2 = A4 = 4.9
    a1, a3, a2, a4 = 0.03, 0.03, 0.03, 0.03
    γ1, γ2 = 0.3, 0.3

    ssqrt(x) = √(max(x, zero(x)) + 1e-3)
    # ssqrt(x) = sqrt(x)
    xd = @inbounds SA[
        -a1/A1 * ssqrt(2g*h[1]) + a3/A1*ssqrt(2g*h[3]) +     γ1*k1/A1 * u[1]
        -a2/A2 * ssqrt(2g*h[2]) + a4/A2*ssqrt(2g*h[4]) +     γ2*k2/A2 * u[2]
        -a3/A3*ssqrt(2g*h[3])                          + (1-γ2)*k2/A3 * u[2]
        -a4/A4*ssqrt(2g*h[4])                          + (1-γ1)*k1/A4 * u[1]
    ]
end

nx = 4 # Number of states
nu = 2 # Number of inputs
Ts = 2 # Sample time
x0 = SA[1.0,1,1,1]
u0 = SVector(zeros(nu)...)

discrete_dynamics_rk = MPC.rk4(quadtank, Ts)
discrete_dynamics    = MPC.MPCIntegrator(quadtank, ODEProblem, RK4(); Ts, nx, nu, dt=Ts, adaptive=false, saveat_end=false)

x1_rk = discrete_dynamics_rk(x0, u0, 0, 0)
x1    = discrete_dynamics(x0, u0, 0, 0)
@assert norm(x1 - x1_rk) < 1e-12
source
JuliaSimControl.MPC.MPCParametersType
MPCParameters(p)
MPCParameters(p, p_mpc)

A struct containing two sets of parameters: p belongs to the dynamics of the system being controlled, and p_mpc belongs to the cost and constraint functions. If p_mpc is not supplied, it defaults to be the same as p.

For robust MPC formulations using the robust_horizon, a vector of MPCParameters must be used to store the parameters for each uncertain realization.

source
JuliaSimControl.MPC.MultipleShootingType
MultipleShooting(dyn, threads = false; scale_x)

Multiple-shooting dynamics constraint. This discretization method assumes zero-order-hold inputs and only supports ODE dynamics. For DAE dynamics, or to use first-order-hold inputs, see Trapezoidal or CollocationFinE.

Arguments:

  • dyn::F: The discrete-time dynamics
  • scale_x: Numerical scaling of state variables
  • threads::Bool = false: Use threaded evaluation of the dynamics. For small dynamics, the overhead of threads is too large to be worthwhile.
source
JuliaSimControl.MPC.MultipleShootingType
MultipleShooting{F <: FunctionSystem}

Multiple-shooting dynamics constraint. This discretization method assumes zero-order-hold inputs and only supports ODE dynamics.

Fields:

  • dyn::F: The discrete-time dynamics
  • scale_x: Numerical scaling of state variables
  • threads::Bool = false: Use threaded evaluation of the dynamics. For small dynamics, the overhead of threads is too large to be worthwhile.
source
JuliaSimControl.MPC.NonlinearMPCConstraintsType
NonlinearMPCConstraints(fun, min, max, soft_indices)
NonlinearMPCConstraints(; umin, umax, xmin=nothing, xmax=nothing)

A struct holding constraint information for nonlinear MPC.

If the signature NonlinearMPCConstraints(; umin, umax, xmin=nothing, xmax=nothing) is used, the fun and soft_indices are created automatically.

Arguments:

  • fun: A function (x,u,p,t)->constrained_outputs
  • min: The lower bound for constrained_outputs
  • max: The upper bound for constrained_outputs
  • soft_indices::Vector{Int}: A vector of indices indicating which constrained_outputs are soft constraints. Slack variables will be added for each soft constraint, which increases the computational complexity. It is recommended to use soft constraints for states and functions of the states, but typically not for inputs.
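
Example:

A sketch constraining a nonlinear function of the state as a soft constraint and the input as a hard constraint (bound values are illustrative):

fun = (x, u, p, t) -> [x[1]^2 + x[2]^2, u[1]] # constrained outputs
cons = NonlinearMPCConstraints(fun, [0.0, -1.0], [4.0, 1.0], [1]) # the first output is soft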
source
JuliaSimControl.MPC.OSQPSolverType
OSQPSolver <: AbstractMPCSolver

A solver which uses sequential quadratic programming on a system linearized around a trajectory computed using discretized dynamics.

Arguments:

  • verbose::Bool = false
  • eps_rel::Float64 = 1e-5
  • eps_abs::Float64 = 1e-5
  • max_iter::Int = 5000
  • check_termination::Int = 15: The interval at which termination criteria are checked
  • sqp_iters::Int = 2: Number of sequential QP iterations to run. Oftentimes 1-5 is enough, but hard non-linear problems can be solved more robustly with more iterations. This option has no effect for linear MPC.
  • dynamics_interval::Int = 1: How often to update the linearization. This is an expensive step, and the (average) solution time can be sped up by not updating every step. This must be set to 1 if sqp_iters > 1. This option has no effect for linear MPC.
  • polish::Bool = true: Try to obtain a high-accuracy solution, increases computational time.
source
JuliaSimControl.MPC.ObjectiveInputMethod
ObjectiveInput(x, u, r[, x0, u0])

Create an ObjectiveInput structure containing trajectories for x,u and r. Initial state x0 and initial control u0 will be taken from x and u.

Arguments:

  • x: The state trajectory as a matrix of size (nx, N+1)
  • u: The control trajectory as a matrix of size (nu, N)
  • r: A reference vector or matrix of size (nr, N+1), nr does not have to equal nx.
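
Example:

A sketch with a zero initial guess; the dimensions are hypothetical:

nx, nu, N = 2, 1, 10
x = zeros(nx, N + 1)
u = zeros(nu, N)
r = zeros(nx, N + 1)
objective_input = ObjectiveInput(x, u, r)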
source
JuliaSimControl.MPC.ObserverInputType
ObserverInput <: CompositeSignal
ObserverInput(u, y, r, w)
ObserverInput(; u, y, r, w)

Contains all signals that are required to update an observer.

  ┌───────────────┬──────────────────────┐
  │               │                      │
  │    ┌─────┐    │     ┌─────┐          │
w─┴───►│     │    └────►│     ├─────►v   │
       │     │ u        │     │          │
r─────►│ MPC ├──┬──────►│  P  ├─────►z   │
       │     │  │       │     │          │
 ┌────►│     │  │ d────►│     ├──┬──►    │
 │     └─────┘  │       └─────┘  │y      │
 │              │                │       │
 │   ┌───────┐  │                │       │
 │   │       │◄─┘                │       │
 │   │       │                   │       │
 └───┤  OBS  │◄──────────────────┘       │
     │       │                           │
     │       │◄──────────────────────────┘
     └───────┘

All signals relevant in the design of an MPC controller are specified in the block-diagram above. The user is tasked with designing the MPC controller as well as the observer.

The following signals are shown in the block diagram

  • $w$ is a known disturbance, i.e., its value is known to the controller through a measurement or otherwise.
  • $r$ is a reference value for the controlled output $z$.
  • $\hat x$ is an estimate of the state of the plant $P$.
  • $u$ is the control signal.
  • $v$ is a set of constrained outputs. This set may include direct feedthrough of inputs from $u$.
  • $z$ is a set of controlled outputs, i.e., outputs that will be penalized in the cost function.
  • $y$ is the measured output, i.e., outputs that are available for feedback to the observer. $z$ and $y$ may overlap.
  • $d$ is an unknown disturbance, i.e., a disturbance of which there is no measurement or knowledge.
source
JuliaSimControl.MPC.QMPCProblemMethod
QMPCProblem(dynamics ;
    N::Int,
    Q1::AbstractMatrix,
    Q2::AbstractMatrix,
    Q3::Union{AbstractMatrix,Nothing} = nothing,
    qs2 = 1000*maximum(Q1),
    qs  = sqrt(maximum(Q1)),
    constraints::NonlinearMPCConstraints,
    observer = nothing,
    xr,
    ur = zeros(size(Q2,1)),
    solver::AbstractMPCSolver = OSQPSolver(),
    chunkA = ForwardDiff.Chunk{min(8, size(Q1, 1))}(),
    chunkB = ForwardDiff.Chunk{min(8, size(Q2, 1))}(),
)

Defines a Nonlinear Quadratic MPC problem. The cost is on the form (z - zᵣ)'Q1*(z - zᵣ) + (u-uᵣ)'Q2*(u-uᵣ) + Δu'Q3*Δu.

Arguments:

  • dynamics: An instance of FunctionSystem representing x(t+1) = f(x(t), u(t), p, t), i.e., already discretized.
  • observer: An instance of AbstractObserver, defaults to StateFeedback(dynamics, zeros(nx)).
  • N: Prediction horizon in the MPC problem (number of samples, equal to the control horizon)
  • Q1: Controlled variable penalty matrix
  • Q2: Control signal penalty matrix
  • Q3: Control derivative penalty matrix
  • qs: Soft state constraint linear penalty (scalar). Set to zero for hard constraints (hard constraints may render problem infeasible).
  • qs2: Soft state constraint quadratic penalty (scalar). Set to zero for hard constraints (hard constraints may render problem infeasible).
  • constraints: An instance of NonlinearMPCConstraints
  • xr: Reference state (or reference output if state_reference=false for LinearMPCModel). If dynamics contains an operating point, dynamics.op.x will be the default xr if none is provided.
  • ur: Reference control. If dynamics contains an operating point, dynamics.op.u will be the default ur if none is provided.
  • solver: The solver to use. See OSQPSolver
source
JuliaSimControl.MPC.RobustMPCModelMethod
RobustMPCModel(G; W1, W2 = I(G.ny), constraints::AbstractLinearMPCConstraints, x0, strictly_proper = true, op::OperatingPoint = OperatingPoint(), K)

A model structure for use with linear MPC controllers. This structure serves as both a prediction model and an observer. Internally, the Glover-McFarlane method is used to find a robustly stabilizing controller for the shaped plant $G_s = W_2 G W_1$, see glover_mcfarlane and examples in the documentation for additional details.

Note, this model has automatically generated penalty matrices Q1, Q2 built in, and there is thus no need to supply them to the constructor of LQMPCProblem.

Arguments:

  • G: A linear system, created using, e.g., the function ss.
  • W1: A precompensator for loop shaping. Set W1 to an LTI controller with an integrator to achieve integral action in the MPC controller.
  • W2: A post compensator for loop shaping.
  • K: Is an observer gain matrix of size (G.nx, G.ny), or an observer object for the plant G, i.e., a KalmanFilter.
  • constraints: An instance of MPCConstraints or LinearMPCConstraints
  • x0: The initial state.
  • strictly_proper: Indicate whether or not the MPC controller is to be considered a strictly proper system, i.e., if there is a one sample delay before a measurement has an effect on the control signal. This is typically required if the computational time of the MPC controller is close to the sample time of the system.
  • op: An instance of OperatingPoint.
  • v: Either a vector of state indices indicating constrained outputs, or a matrix nv × nx that multiplies the state vector to yield the constrained outputs. This option has no effect if LinearMPCConstraints are used.
source
JuliaSimControl.MPC.StageConstraintType
StageConstraint(f, lcons, ucons, N)

Create a constraint that holds for each stage of a GenericMPCProblem. The constraint may be any nonlinear function of states and inputs. NOTE: to implement simple bounds constraints on the states or control inputs, it is more efficient to use BoundsConstraint.

Arguments:

  • f: A function (si, p, t)->v that computes the constrained output, where si is an object of type StageInput.
  • lcons: A vector of lower bounds for v.
  • ucons: A vector of upper bounds for v, set equal to lcons to indicate equality constraints.
  • N: The optimization horizon.

Example:

This example creates a constraint that bounds the square of a single input, $1 ≤ u^2 ≤ 3$, and the sum of the state components, $-4 ≤ \sum x_i ≤ 4$. Note that we create v as a static array for maximum performance.

control_constraint = StageConstraint([1, -4], [3, 4], N) do si, p, t
    SA[
        si.u[1]^2
        sum(si.x)
    ]
end
source
JuliaSimControl.MPC.StageCostType
StageCost

A cost-function object that represents the cost at each time-step in a GenericMPCProblem.

Example:

p = (; Q1, Q2, Q3, QN)

running_cost = StageCost() do si, p, t
    Q1, Q2 = p.Q1, p.Q2
    e = si.x .- si.r
    u = si.u
    dot(e, Q1, e) + dot(u, Q2, u)
end
source
JuliaSimControl.MPC.TerminalStateConstraintType
TerminalStateConstraint(f, lcons, ucons)

Create a constraint that holds for the terminal state x(N+1) in a GenericMPCProblem. NOTE: to implement simple bounds constraints on the terminal state, it is more efficient to use BoundsConstraint.

Arguments:

  • f: A function (ti, p, t)->v that computes the constrained output, where ti is an object of type TerminalInput.
  • lcons: A vector of lower bounds for v.
  • ucons: A vector of upper bounds for v, set equal to lcons to indicate equality constraints.

Example

This example shows how to force the terminal state to equal a particular reference point r

terminal_constraint = TerminalStateConstraint(r, r) do ti, p, t
    ti.x
end

To make the terminal set a box $l ≤ x ≤ u$, use

terminal_constraint = TerminalStateConstraint(l, u) do ti, p, t
    ti.x
end
source
JuliaSimControl.MPC.TrapezoidalType
Trapezoidal{is_dae, F <: FunctionSystem}

Trapezoidal integration dynamics constraint. This discretization method assumes first-order hold control inputs.

source
JuliaSimControl.MPC.TrapezoidalMethod
Trapezoidal(; dyn, Ts, scale_x = ones(dyn.nx), threads = false)

Trapezoidal integration dynamics constraint. This discretization method assumes first-order hold control inputs and supports ODE or DAE dynamics.

Arguments:

  • dyn::F: The continuous-time dynamics
  • scale_x: Numerical scaling of state variables
  • threads::Bool = false: Use threaded evaluation of the dynamics. For small dynamics, the overhead of threads is too large to be worthwhile.
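
Example:

A construction sketch, where dynamics and Ts are assumed to be defined:

disc = Trapezoidal(; dyn = dynamics, Ts)
# pass this structure to GenericMPCProblem via the keyword argument disc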
source
CommonSolve.solveFunction
MPC.solve(prob::GenericMPCProblem, alg = nothing; x0,
    T,
    verbose         = false,
    p               = MPC.parameters(prob),
    callback        = (actual_x, u, x, X, U)->nothing,
    sqp_callback,
    noise           = 0,
    reset_observer  = true,
    dyn_actual      = deepcopy(prob.dynamics),
    x0_actual       = copy(x0),
    p_actual        = p,
    disturbance     = (u,t)->0,
)

Simulate an MPC controller in feedback loop with a plant for T steps. To step an MPC controller forward one step, see step!.

Arguments:

  • x0: Initial state
  • T: Number of time steps to perform the simulation.
  • verbose: Print progress and diagnostic information
  • p: Parameters
  • callback: Called after each iteration
  • noise: Add measurement noise to the simulation, the noise is Gaussian with σ = noise (can be a vector or a scalar).
  • reset_observer: Set the initial state of the observer to x0 before each simulation.
  • dyn_actual: Actual dynamics. This defaults to prob.dynamics, but can be set to any other dynamics in order to simulate model errors etc. If this is set, set also x0_actual and p_actual.
  • x0_actual: Initial state for the actual dynamics.
  • p_actual: Parameters for the actual dynamics.
  • disturbance: A function (u,t)->0 that takes the control input u and modifies it in place to compute the disturbance.
source
CommonSolve.solveFunction
solve(prob::AbstractMPCProblem, alg = nothing; x0, T, p, verbose = false)

Solve an MPC problem for T time steps starting at initial state x0.

Set verbose = true to get diagnostic outputs, including tuning tips, in case you experience poor performance.

source
JuliaSimControl.MPC.IpoptSolverMethod
IpoptSolver(;
    verbose                    = false,
    printerval                 = 1,          # print this often if verbose
    tol                        = 0.0001,
    acceptable_tol             = 0.1,
    max_iter                   = 100,
    max_wall_time              = 180.0,
    max_cpu_time               = 2*max_wall_time,
    constr_viol_tol            = 0.0001,
    acceptable_constr_viol_tol = 0.1,
    acceptable_iter            = 5,
    exact_hessian              = true,
    mu_init                    = 0.1,
    mu_strategy                = "monotone", # can also be "adaptive" if problem has convergence issues
    lqg                        = false,      # Indicate that the problem has linear dynamics and constraints, with quadratic cost
    linear_inequality_constraints = false,   # Indicate that the problem has linear inequality constraints
    linear_system              = false,      # Indicate that the problem has linear dynamics and constraints
)

A convenience constructor to create a solver = Ipopt.Optimizer() and set options. See https://coin-or.github.io/Ipopt/OPTIONS.html for information about each option. The defaults provided here are more relaxed than the Ipopt defaults.

Ipopt will try to meet tol and constr_viol_tol, but stops early if it has met acceptable_tol and acceptable_constr_viol_tol for acceptable_iter number of iterations.

For large problems, Ipopt may use multiple threads, in which case the argument max_wall_time is representative of the time the user experiences, while max_cpu_time is better set very large.

Note

When solving MPC problems, it is often beneficial to favor a faster sample rate and a longer prediction horizon over accurate integration and optimization. The motivations for this are several

  • The dynamics model is often inaccurate, and solving an inaccurate model to high accuracy can be a waste of effort.
  • The performance is often dictated by the disturbances acting on the system, and having a higher sample rate may allow the controller to reject disturbances faster.
  • Feedback from measurements corrects for slight errors due to integration.
  • Increasing sample rate leads to each subsequent optimization problem being closer to the previous one, making warm-starting more efficient and a good solution being found in fewer iterations.

The verbose option can be a Bool or an integer; true is interpreted as the default Ipopt verbosity of 5.

source
JuliaSimControl.MPC.get_xuMethod
x, u = get_xu(vars::Variables, u0=nothing, ri=1)
x, u = get_xu(prob::GenericMPCProblem, ri=1)

Extract x and u matrices for robust index ri. This function handles input integration for FunctionSystems with specified input_integrators, this requires the specification of u0 if Variables are passed.

source
JuliaSimControl.MPC.optimize!Method
controlleroutput = MPC.optimize!(prob::GenericMPCProblem, controllerinput::ControllerInput, p, t)

Perform optimization for the MPC problem prob at time t with parameters p and initial state controllerinput.

source
JuliaSimControl.MPC.optimize!Method
optimize!(prob::QMPCProblem, x0, p, t; verbose = true)

Solve a single instance of the optimal-control problem in the MPC controller.

Arguments:

  • x0: Initial state
  • t: Initial time
source
JuliaSimControl.MPC.optimize!Method
optimize!(prob::LQMPCProblem, x0, p, t; verbose = true)
optimize!(prob::LQMPCProblem, ci::ControllerInput, p, t; verbose = true)

Solve a single instance of the optimal-control problem in the MPC controller.

Note

This low-level function does not adjust for operating points; the user is thus responsible for handling operating points in the correct way when calling this function directly. This adjustment typically takes the form

Δx = x - op.x
co = MPC.optimize!(prob, Δx, p, t)
Δu = co.u[1:nu]
u = Δu + op.u

Arguments:

  • x0: Initial state
  • ci: An instance of ControllerInput containing the initial state, reference, and previous control input. Providing this struct offers the possibility to update the reference as well.
  • t: Initial time

Returns

An instance of ControllerOutput containing the optimal control input trajectory and the optimal state trajectory as seen by the optimizer. Note that the state trajectory will only correspond to the actual state trajectory for linear problems around the origin.

source
JuliaSimControl.MPC.step!Method
step!(prob::LQMPCProblem, observerinput, p, t; kwargs...)

Take a single step using the MPC controller.

Internally, step! performs the following actions:

  1. Measurement update of the observer, forms $\hat x_{k | k}$.
  2. Solve the optimization problem with the state of the observer as the initial condition.
  3. Advance the state of the observer using its prediction model, forms $\hat x_{k+1 | k}$.
  4. Advance the problem caches, including the reference trajectory if xr is a full trajectory.

The return value of step! is an instance of ControllerOutput, containing the fields

  • u: the optimal trajectory (usually, only the first value is used in an MPC setting). This value is given in the correct space for interfacing with the true plant.
  • x: The optimal state trajectory as seen by the optimizer. Note that this trajectory will only correspond to the actual state trajectory for linear problems around the origin.

To bypass the observer handling, call the function optimize! with the initial state x0 or an instance of ControllerInput directly:

controlleroutput = MPC.optimize!(prob, x0, p, t; kwargs...)
source
JuliaSimControl.MPC.step!Method
step!(prob::QMPCProblem, u, y, r, p, t; kwargs...)

Take a single step using the MPC controller.

where u is a matrix $n_u \times n_T$ whose first column corresponds to the control signal that was last applied. The rest of u is used as an initial guess for the optimizer. y is the latest measurement, used to update the observer in prob. Internally, step! performs the following actions:

  1. Measurement update of the observer, forms $\hat x_{k | k}$.
  2. Solve the optimization problem with the state of the observer as the initial condition.
  3. Advance the state of the observer using its prediction model, forms $\hat x_{k+1 | k}$.
  4. Advance the problem caches, including the reference trajectory if xr is a full trajectory.

The return values of step! are

  • uopt: the optimal control-signal trajectory (usually, only the first value is used in an MPC setting). This value is given in the correct space for interfacing with the true plant.
  • x: The optimal state trajectory as seen by the optimizer. Note that this trajectory will only correspond to the actual state trajectory for linear problems around the origin.
  • u0: The control signal used to update the observer in the prediction step. Similar to x, this value may contain offsets and is usually of less external use than uopt, which is transformed to the correct units of the actual plant input.
source
JuliaSimControl.rk4Method
f_discrete, l_discrete = rk4(f, l, Ts)

Discretize dynamics f and loss function l using RK4 with sample time Ts. The returned function is on the form (xₖ,uₖ,p,t) -> (xₖ₊₁, loss). Both f and l take the arguments (x, u, p, t).

source
JuliaSimControl.MPC.rolloutFunction
x, u = rollout(f, x0::AbstractVector, u, p, t=1)

Simulate discrete system f from initial condition x0 and input array u.

source
JuliaSimControl.MPC.rmsFunction
rms(x)

Root-mean-square value of x. If x is a matrix, the rms value will be calculated along rows, i.e.,

sqrt.(mean(abs2, x, dims = 2))
source

MPC signals

  ┌───────────────┬──────────────────────┐
  │               │                      │
  │    ┌─────┐    │     ┌─────┐          │
w─┴───►│     │    └────►│     ├─────►v   │
       │     │ u        │     │          │
r─────►│ MPC ├──┬──────►│  P  ├─────►z   │
       │     │  │       │     │          │
 ┌────►│     │  │ d────►│     ├──┬──►    │
 │     └─────┘  │       └─────┘  │y      │
 │              │                │       │
 │   ┌───────┐  │                │       │
 │   │       │◄─┘                │       │
 │   │       │                   │       │
 └───┤  OBS  │◄──────────────────┘       │
     │       │                           │
     │       │◄──────────────────────────┘
     └───────┘

All signals relevant in the design of an MPC controller are specified in the block-diagram above. The user is tasked with designing the MPC controller as well as the observer.

The following signals are shown in the block diagram

  • $w$ is a known disturbance, i.e., its value is known to the controller through a measurement or otherwise.
  • $r$ is a reference value for the controlled output $z$.
  • $\hat x$ is an estimate of the state of the plant $P$.
  • $u$ is the control signal.
  • $v$ is a set of constrained outputs. This set may include direct feedthrough of inputs from $u$.
  • $z$ is a set of controlled outputs, i.e., outputs that will be penalized in the cost function.
  • $y$ is the measured output, i.e., outputs that are available for feedback to the observer. $z$ and $y$ may overlap.
  • $d$ is an unknown disturbance, i.e., a disturbance of which there is no measurement or knowledge.

The controller assumes that there are references $r$ provided for all controlled outputs $z$. If $z$ is not provided, the controller assumes that all states are to be considered controlled variables and expects $Q_1$ to be a square matrix of size $n_x$, otherwise $Q_1$ is a square matrix of size $n_z$. $z$ may be provided as either a list of indices into the state vector, or as a matrix that multiplies the state vector.