Simulation of systems with inputs
Systems with input signals $u$ require special attention when simulating. Standard libraries for simulating differential equations, such as DifferentialEquations.jl, often handle autonomous systems $\dot x = f(x)$ only, while systems with inputs, $\dot x = f(x, u)$, lack first-class support. This page describes how to simulate such systems with inputs in Julia in a wide range of scenarios.
DifferentialEquations.jl takes the dynamics of an autonomous system as a function with the signature $\dot x = f(x, p, t)$ or $f(\dot x, x, p, t)$, where in the second form the array ẋ is mutated in place to save memory allocations. Here, $p$ denotes a parameter object that can be used to store any parameters the dynamics may depend on, and $t$ is the time variable.
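To make the signature concrete, here is a minimal sketch showing both forms; the scalar decay system and the parameter name a are made-up examples for illustration only.
decay(x, p, t) = -p.a .* x     # Out-of-place form: returns the derivative
function decay!(dx, x, p, t)   # In-place form: writes the derivative into the preallocated array dx
    dx .= -p.a .* x
    return nothing
end
p = (; a = 1.0)                # Parameter object, here a NamedTuple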
Inputs as functions of time (open loop)
When the input signal is a function of time only, such as $u(t) = \sin(t)$ or a sampled trajectory available as an array, we have the following alternatives available.
$u$ is a function of time
In this case, we can easily create a new dynamics function $h$ that simply calls the input function $u(t)$. We have thus reduced the problem to simulating the autonomous system $h$.
h(x, p, t) = f(x, u(t), p, t)
This approach works even if the function $u$ depends on the state $x$ as well, $u(x, t)$, but not if the input is generated by a separate dynamical system, a case which is treated further below.
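As a sketch of the complete workflow, assuming a made-up first-order system and initial state (the dynamics, solver choice, and time span below are illustrative, not prescribed by the library):
using OrdinaryDiffEq
f(x, u, p, t) = -x .+ u            # Continuous-time dynamics with input u (made-up example)
u(t) = sin(t)                      # Open-loop input signal as a function of time
h(x, p, t) = f(x, u(t), p, t)      # Autonomous dynamics closing over u
prob = ODEProblem(h, [0.0], (0.0, 10.0))
sol = solve(prob, Tsit5())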
$u$ is a sampled trajectory
In this case, we can create an interpolation function $u(t)$ that returns the value of the input signal at time $t$, for example:
using DataInterpolations
u = ConstantInterpolation(u_trajectory, time_trajectory) # Zero-order-hold
# alternatively
u = LinearInterpolation(u_trajectory, time_trajectory) # First-order-hold
We are then back at the problem above, and can use the same solution:
h(x, p, t) = f(x, u(t), p, t)
Many solvers, in particular high-order solvers, require the dynamics to be continuous and continuously differentiable up to some order. If the input signal, or any of its low-order derivatives, is discontinuous, the solvers may slow down significantly. To mitigate this, we may tell the solver that it has to take a step exactly at the discontinuities. This can be done by passing the keyword argument tstops to the solve function (see the documentation).
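For a zero-order-hold input, the discontinuities occur at the sample instants, so the sample times can be passed as tstops. A hedged sketch with a made-up square-wave trajectory and first-order system (all values and the solver choice are illustrative assumptions):
using OrdinaryDiffEq, DataInterpolations
time_trajectory = collect(0:0.5:10)                       # Sample times (made-up example)
u_trajectory = sign.(sin.(time_trajectory))               # A discontinuous input trajectory
u = ConstantInterpolation(u_trajectory, time_trajectory)  # Zero-order hold
f(x, u, p, t) = -x .+ u                                   # Continuous-time dynamics with input
h(x, p, t) = f(x, u(t), p, t)
prob = ODEProblem(h, [0.0], (0.0, 10.0))
sol = solve(prob, Tsit5(), tstops = time_trajectory)      # Force steps at the discontinuities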
An alternative approach is to use a manual fixed-step solver, like JuliaSimControl.MPC.rk4.
The following snippet illustrates how this approach looks for a continuous-time system $f$ and a sampled input $u \in \mathbb{R}^{n_u \times T}$.
f_discrete = MPC.rk4(f, Ts)  # Discretize the continuous-time dynamics f with sample time Ts
T = size(u, 2)               # Number of samples in the input trajectory
x = zeros(nx, T)             # Preallocate the state trajectory
for i = 1:T-1
    x[:, i+1] = f_discrete(x[:, i], u[:, i], p, (i-1)*Ts)
end
The call f_discrete = MPC.rk4(f, Ts) returns a new function that has the same signature as $f$, but returns the next state $x(t+T_s)$ instead of $\dot x(t)$. We have thus effectively discretized the continuous-time dynamics $f$.
The simulation using a discretized dynamics function above is packaged into the function MPC.rollout, used in many of the MPC tutorials.
Inputs generated by dynamical systems in closed loop
When the input is itself generated by a dynamical system, we must somehow account for any additional state variables that the input system may have; the total number of state variables is then $n_{x_f} + n_{x_u}$, where $n_{x_f}$ is the number of states of $f$ and $n_{x_u}$ the number of states of the input-generating system.
The manual approach is to encode the dynamics of both $f$ and the input-generating system in a single combined function $h$ with $n_{x_f} + n_{x_u}$ state variables. This approach is very flexible, but it is error prone and laborious; a sketch is shown below.
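As a hedged sketch of the manual approach, consider a made-up first-order plant driven by a PI controller acting as the input-generating system (all dynamics and gains below are illustrative assumptions, not part of any library API):
using OrdinaryDiffEq
function h(dx, x, p, t)
    (; kp, ki, r) = p
    xf = x[1]                # Plant state
    xu = x[2]                # Controller (integrator) state
    y = xf                   # Measurement
    u = kp*(r - y) + ki*xu   # Input generated by the controller dynamics
    dx[1] = -xf + u          # Plant dynamics ẋf = -xf + u
    dx[2] = r - y            # Controller dynamics ẋu = r - y
end
p = (; kp = 1.0, ki = 0.5, r = 1.0)
prob = ODEProblem(h, zeros(2), (0.0, 10.0), p)
sol = solve(prob, Tsit5())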
A higher-level approach is to use ModelingToolkit.jl; see Modeling for control using ModelingToolkit for relevant examples.
Use of a discrete controller in a continuous-time simulation
Differential-equation solvers often lack explicit support for discrete-time dynamics and variables, but nevertheless often include mechanisms that can be used to simulate hybrid systems with both discrete and continuous-time dynamics.
There are several different ways one could go about including a discrete-time controller in a continuous-time simulation; in particular, we must choose a way to store the computed control signal. Two options are
- Use a global variable into which we write the control signal at each discrete time step (a minimal sketch of this option follows right after this list).
- Add an extra state variable to the system, and use this variable to store the control signal. This is the approach taken in the main example further below, since it has the added benefit of including the computed control signal in the solution object.
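A minimal sketch of the first option, where a Ref serves as the globally accessible storage; the plant, the proportional controller, and all values below are illustrative assumptions:
using OrdinaryDiffEq, DiffEqCallbacks
const u_latest = Ref(0.0)              # Global storage for the latest control signal
plant(x, p, t) = -x .+ u_latest[]      # Continuous-time dynamics reading the stored input
Ts = 0.1
cb = PeriodicCallback(Ts) do integrator
    y = integrator.u[1]                # Measurement (integrator.u is the ODE state)
    u_latest[] = -2.0 * y              # Simple proportional feedback written to the global
end
prob = ODEProblem(plant, [1.0], (0.0, 5.0), callback = cb)
sol = solve(prob, Tsit5())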
In the example below, we simulate a discrete-time PID controller from the DiscretePIDs.jl package in feedback with a linear system, perturbed by an additive disturbance $d$. This controller maintains its own state, and we thus only need to handle the control signal.
We will use a DiffEqCallbacks.PeriodicCallback in which we perform the PID-controller update and store the computed control signal in the extra state variable. This will ensure that
- The solver always takes a step every $T_s$ seconds
- We only update the controller at these discrete time points
We start by creating the system model and the controller
using DiscretePIDs, ControlSystemsBase, OrdinaryDiffEq, DiffEqCallbacks, Plots
Tf = 15 # Simulation time
K = 1 # Proportional gain
Ti = 1 # Integral time
Td = 1 # Derivative time
Ts = 0.01 # Sample time
P = ss(tf(1, [1, 1])) # Process to be controlled in continuous time
A, B, C, D = ssdata(P) # Extract the system matrices
pid = DiscretePID(; K, Ts, Ti, Td)
DiscretePIDs.DiscretePID{Float64}( # with parameters and state:
1 ,# K
1 ,# Ti
1 ,# Td
1 ,# Tt
10 ,# N
1 ,# b
-9.223372e+18 ,# umin
9.223372e+18 ,# umax
0.01 ,# Ts
0.01 ,# bi
0.01 ,# ar
9.090909 ,# bd
0.9090909 ,# ad
0 ,# I
0 ,# D
0 ,# yold
)
We then define the dynamics in the form $f(\dot x, x, p, t)$, i.e., as an autonomous system without inputs. The total number of state variables is 2: one for the system model and one for the control signal. Inside the function dynamics!, we extract x and u from the combined state xu. Note that we do not call the controller in this function; only the continuous-time dynamics of the system is updated here.
function dynamics!(dxu, xu, p, t)
    A, B, C, r, d = p                # We store the reference and disturbance in the parameter object
    x = xu[1:P.nx]                   # Extract the state
    u = xu[P.nx+1:end]               # Extract the control signal
    dxu[1:P.nx] .= A*x .+ B*(u .+ d) # Plant input is control signal + disturbance
    dxu[P.nx+1:end] .= 0             # The control signal has no dynamics, it's updated by the callback
end
The discrete-time dynamics is instead updated by a PeriodicCallback that triggers every $T_s$ seconds. In this callback, we call the PID controller and write the computed control signal to the extra state variable.
cb = PeriodicCallback(Ts) do integrator
    p = integrator.p         # Extract the parameter object from the integrator
    (; C, r, d) = p          # Extract the reference and disturbance from the parameter object
    x = integrator.u[1:P.nx] # Extract the state (the integrator uses the variable name `u` to refer to the state, in control theory we typically use the variable name `x`)
    y = (C*x)[]              # Simulated measurement
    u = pid(r, y)            # Compute the control signal
    integrator.u[P.nx+1:end] .= u # Update the control-signal state variable
end
We are now ready to simulate the system
parameters = (; A, B, C, r=0, d=1) # reference = 0, disturbance = 1
xu0 = zeros(P.nx + P.nu) # Initial state of the system + control signals
prob = ODEProblem(dynamics!, xu0, (0, Tf), parameters, callback=cb)
sol = solve(prob, Tsit5(), saveat=Ts)
plot(sol, layout=(2, 1), ylabel=["x" "u"], lab="")
Due to the short sample interval $T_s$, the control signal looks continuous; however, if you increase $T_s$ you will notice the zero-order-hold nature of $u$.
Linear systems
Simulation of linear systems from ControlSystems.jl, described by transfer functions or linear state-space models, is covered in the Time response tutorial.