Exported functions and types

In addition to the documentation for this package, we encourage the reader to explore the documentation for ControlSystems.jl and RobustAndOptimalControl.jl, which contain functionality and types for basic control analysis and design that work well together with this package.

Index

Descriptions

Name | Summary
JuliaSimControls.AbstractStateSpace
AutoTuningProblem{S, W}Optimizes a controller on the form
JuliaSimControls.Continuous
JuliaSimControls.ControlSystems
JuliaSimControls.DelayLtiSystem
JuliaSimControls.DemoSystems
JuliaSimControls.Discrete
JuliaSimControls.Disk
JuliaSimControls.Diskmargin
Eq29 <: InverseLQRMethodR is a weighting matrix, determining the relative importance of matching each input to that provided by the original controller.
Eq32 <: InverseLQRMethodr1 penalizes deviation from the original control signal while q1 penalizes deviation from the original effect the control signal had on the state. In other words, the r1 term tries to use the same actuator configuration as the original controller, while the q1 term ensures that the effect of the controller is the same.
JuliaSimControls.ExtendedKalmanFilter
JuliaSimControls.ExtendedStateSpace
FixedGainObserver{F <: Function, x} <: AbstractFilterA linear observer, similar to a Kalman filter, but with a fixed measurement feedback gain. The gain can be designed using, e.g., pole placement or solving a Riccati equation. For a robust observer, consider using glover_mcfarlane followed by inverse_lqr.
FunctionSystem{TE <: ControlSystems.TimeEvolution, F, G}A structure representing the dynamical system
JuliaSimControls.GMF
JuliaSimControls.G_CS
JuliaSimControls.G_PS
JuliaSimControls.GainMarginObjective
JuliaSimControls.HeteroStateSpace
JuliaSimControls.KalmanFilter
JuliaSimControls.LMI
JuliaSimControls.LQGProblem
JuliaSimControls.LTISystem
JuliaSimControls.MaximumSensitivityObjective
JuliaSimControls.MvNormal
JuliaSimControls.NamedStateSpace
OpenLoopObserver{F <: Function, x} <: AbstractFilterAn observer that uses the dynamics model without any measurement feedback. This observer can be used as an oracle that has full knowledge of the state. Note, this is often an unrealistic assumption in real-world contexts and open-loop observers can not account for load disturbances. Use of this observer in a closed-loop context creates a false closed loop.
OperatingPoint(x, u, y)Structure representing an operating point around which a system is linearized. If no arguments are supplied, an empty operating point is created.
OptimizedPID(popt; name, kwargs...)Takes optimized parameters popt and returns the following system
JuliaSimControls.OvershootObjective
JuliaSimControls.PhaseMarginObjective
JuliaSimControls.RiseTimeObjective
JuliaSimControls.RobustAndOptimalControl
JuliaSimControls.SettlingTimeObjective
JuliaSimControls.Simulator
JuliaSimControls.StateSpace
JuliaSimControls.StepRejectionObjective
JuliaSimControls.StepTrackingObjective
JuliaSimControls.TransferFunction
JuliaSimControls.UncertainSS
JuliaSimControls.UnscentedKalmanFilter
JuliaSimControls.add_disturbance
JuliaSimControls.add_input
JuliaSimControls.add_input_differentiator
JuliaSimControls.add_input_integrator
JuliaSimControls.add_low_frequency_disturbance
JuliaSimControls.add_measurement_disturbance
JuliaSimControls.add_output
JuliaSimControls.add_output_differentiator
JuliaSimControls.add_output_integrator
JuliaSimControls.add_resonant_disturbance
JuliaSimControls.append
JuliaSimControls.are
JuliaSimControls.balance
JuliaSimControls.balance_statespace
JuliaSimControls.balreal
JuliaSimControls.baltrunc
JuliaSimControls.baltrunc2
JuliaSimControls.baltrunc_coprime
JuliaSimControls.baltrunc_unstab
JuliaSimControls.bilinearc2d
JuliaSimControls.bilineard2c
JuliaSimControls.blocksort
JuliaSimControls.bode
JuliaSimControls.bodeplot
JuliaSimControls.bodeplot!
JuliaSimControls.bodev
JuliaSimControls.broken_feedback
build_controlled_dynamicsBuild a function on the form (x,u,p,t) -> ẋ where
JuliaSimControls.c2d
JuliaSimControls.c2d_poly2poly
JuliaSimControls.c2d_roots2poly
JuliaSimControls.c2d_x0map
JuliaSimControls.care
JuliaSimControls.closedloop
JuliaSimControls.comp_sensitivity
JuliaSimControls.connect
JuliaSimControls.controller_reduction
JuliaSimControls.controller_reduction_plot
JuliaSimControls.controller_reduction_plot!
JuliaSimControls.controller_reduction_weight
JuliaSimControls.covar
JuliaSimControls.ctrb
JuliaSimControls.d2c
JuliaSimControls.dab
JuliaSimControls.damp
JuliaSimControls.dampreport
JuliaSimControls.dare
JuliaSimControls.dare3
JuliaSimControls.dcgain
JuliaSimControls.delay
JuliaSimControls.delaymargin
JuliaSimControls.den
JuliaSimControls.denpoly
JuliaSimControls.denvec
JuliaSimControls.diagonalize
JuliaSimControls.diskmargin
JuliaSimControls.dkalman
JuliaSimControls.dlqr
JuliaSimControls.dlyap
JuliaSimControls.dss
JuliaSimControls.evalfr
JuliaSimControls.expand_symbol
JuliaSimControls.extended_controller
JuliaSimControls.extended_gangoffour
JuliaSimControls.feedback
JuliaSimControls.feedback2dof
JuliaSimControls.feedback_control
JuliaSimControls.ff_controller
JuliaSimControls.find_lft
JuliaSimControls.fit_complex_perturbations
JuliaSimControls.freqresp
JuliaSimControls.freqresp!
JuliaSimControls.freqrespv
G, H, sol, d = frequency_response_analysis(sys::ODESystem, u, y; f0, f1, Tf = 5/f0, Ts = 0.1/f1, amplitude = 1)Frequency-response analysis of G u->y. Returns the frequency response $G(iω)$ as an FRD object that contains the complex response G.r and the frequency vector G.w.
JuliaSimControls.frequency_weighted_reduction
JuliaSimControls.fudge_inv
JuliaSimControls.gain_and_delay_uncertainty
JuliaSimControls.gangoffourplot
JuliaSimControls.gangofseven
getbounds(x)Returns a dict with pairs p => (lb, ub) mapping parameters of sys to lower and upper bounds. Create parameters with bounds like this
getdist(x)Get the probability distribution associated with symbolc variable x. If no distribution is associated with x, nothing is returned. Create parameters with associated distributions like this
JuliaSimControls.glover_mcfarlane
JuliaSimControls.glover_mcfarlane_2dof
JuliaSimControls.gram
JuliaSimControls.grampd
JuliaSimControls.h2norm
JuliaSimControls.h2synthesize
JuliaSimControls.hankelnorm
JuliaSimControls.hanus
hasbounds(x)Determine whether or not symbolic variable x has bounds associated with it. See also getbounds.
hasdist(x)Determine whether or not symbolic variable x has a probability distribution associated with it.
JuliaSimControls.hess_form
JuliaSimControls.hinfassumptions
JuliaSimControls.hinfgrad
JuliaSimControls.hinfnorm
JuliaSimControls.hinfnorm2
JuliaSimControls.hinfpartition
JuliaSimControls.hinfsignals
K, γ = hinfsyn_lmi(P::ExtendedStateSpace; ...)Computes an H-infinity optimal controller K for an extended plant P such that
JuliaSimControls.hinfsynthesize
JuliaSimControls.hsvd
JuliaSimControls.impulse
JuliaSimControls.innovation_form
JuliaSimControls.input_comp_sensitivity
input_names(P)Get a vector of strings with the names of the inputs of P. P can be an LTISystem, ODESystem, FunctionSystem
JuliaSimControls.input_sensitivity
Q,R,S,P,L2 = inverse_lqr(method::LMI, G::AbstractStateSpace, L; optimizer = Hypatia.Optimizer)Solve the inverse optimal control problem that finds the LQR cost function that leads to a controller approximating a state-feedback controller with gain matrix L. P is the solution to the Riccati equation and L2 is the recreated feedback gain. Note: S will in general not be zero, and including this cross term in the cost function may be important. If including S is not possible, use the LMI method to find a cost function with as small S as possible.
JuliaSimControls.iscontinuous
isdisturbance(x)Determine whether or not symbolic variable x is marked as a disturbance input.
JuliaSimControls.isproper
JuliaSimControls.isstable
istunable(x)Determine whether or not symbolic variable x is marked as a tunable for an automatic tuning algorithm. Create a tunable parameter by
JuliaSimControls.kalman
JuliaSimControls.laglink
JuliaSimControls.leadlink
JuliaSimControls.leadlinkat
JuliaSimControls.leadlinkcurve
JuliaSimControls.lft
JuliaSimControls.linfnorm
JuliaSimControls.linfnorm2
JuliaSimControls.loop_diskmargin
JuliaSimControls.loop_scale
JuliaSimControls.loop_scaling
JuliaSimControls.loopshapingPI
JuliaSimControls.lqr
JuliaSimControls.lqr3
JuliaSimControls.lsim
JuliaSimControls.luenberger
JuliaSimControls.lyap
JuliaSimControls.makeweight
JuliaSimControls.margin
JuliaSimControls.marginplot
JuliaSimControls.marginplot!
JuliaSimControls.markovparam
JuliaSimControls.measure
JuliaSimControls.minreal
JuliaSimControls.minreal2
JuliaSimControls.modal_form
JuliaSimControls.muplot
JuliaSimControls.muplot!
mussv(M::AbstractMatrix; optimizer = Hypatia.Optimizer{eltype(M)}, bu = opnorm(M), tol = 0.001)Compute (an upper bound of) the structured singular value μ of uncertain system model G.
mussv_DG(M::AbstractMatrix; optimizer = Hypatia.Optimizer{eltype(M)}, bu = opnorm(M), tol = 0.001)Compute (an upper bound of) the structured singular value μ for diagonal Δ of real perturbations (other structures of Δ are not yet supported). M is assumed to be an (n × n × N_freq) array or a matrix.
mussv_tv(M::AbstractArray{<:Any, 3}; optimizer = Hypatia.Optimizer, bu = 20, tol = 0.001)Compute (an upper bound of) the structured singular value margin when G is interconnected with time-varying uncertainty structure described by blocks. This value will in general be larger than the one returned by mussv, but can be used to guarantee stability for infinitely fast time-varying perturbations, i.e., if the return value is < 1, the system is stable no matter how fast the dynamics of the perturbations change.
JuliaSimControls.mvnyquistplot
JuliaSimControls.mvnyquistplot!
JuliaSimControls.named_ss
JuliaSimControls.ncfmargin
JuliaSimControls.neglected_delay
JuliaSimControls.neglected_lag
JuliaSimControls.nicholsplot
JuliaSimControls.nicholsplot!
JuliaSimControls.noise_mapping
JuliaSimControls.nominal
JuliaSimControls.nonlinearity
JuliaSimControls.norm
JuliaSimControls.norminf
JuliaSimControls.nugap
JuliaSimControls.num
JuliaSimControls.numpoly
JuliaSimControls.numvec
JuliaSimControls.nyquist
JuliaSimControls.nyquistcircles
JuliaSimControls.nyquistcircles!
JuliaSimControls.nyquistplot
JuliaSimControls.nyquistplot!
JuliaSimControls.nyquistv
JuliaSimControls.observer_controller
JuliaSimControls.observer_predictor
JuliaSimControls.obsv
JuliaSimControls.output_comp_sensitivity
output_names(P)Get a vector of strings with the names of the outputs of P. P can be an LTISystem, ODESystem, FunctionSystem
JuliaSimControls.output_sensitivity
JuliaSimControls.pade
JuliaSimControls.parallel
JuliaSimControls.partition
JuliaSimControls.performance_mapping
JuliaSimControls.pid
JuliaSimControls.pidplots
JuliaSimControls.place
JuliaSimControls.placePI
JuliaSimControls.pole
JuliaSimControls.poles
JuliaSimControls.pzmap
JuliaSimControls.pzmap!
JuliaSimControls.reduce_sys
JuliaSimControls.relative_gain_array
JuliaSimControls.rgaplot
JuliaSimControls.rgaplot!
JuliaSimControls.rlocus
JuliaSimControls.rlocusplot
JuliaSimControls.rlocusplot!
JuliaSimControls.robstab
JuliaSimControls.rstc
JuliaSimControls.rstd
JuliaSimControls.schur_form
JuliaSimControls.sensitivity
JuliaSimControls.series
JuliaSimControls.setPlotScale
JuliaSimControls.show_construction
JuliaSimControls.sigma
JuliaSimControls.sigmaplot
JuliaSimControls.sigmaplot!
JuliaSimControls.sigmav
JuliaSimControls.sim_diskmargin
JuliaSimControls.similarity_transform
JuliaSimControls.sminreal
solve(s::Simulator, x0, tspan, args...; kwargs...)Simulate the system represented by s from initial state x0 over time span tspan = (t0,tf). args and kwargs are sent to the solve function from OrdinaryDiffEq, e.g., solve(s, x0, tspan, Tsit5(), reltol=1e-5) solves the problem with solver Tsit5() and relative tolerance 1e-5.
JuliaSimControls.specificationplot
JuliaSimControls.specificationplot!
JuliaSimControls.splitter
JuliaSimControls.ss
JuliaSimControls.ssdata
JuliaSimControls.ssdata_e
JuliaSimControls.ssrand
JuliaSimControls.stab_unstab
JuliaSimControls.stabregionPID
JuliaSimControls.starprod
state_names(P)Get a vector of strings with the names of the states of P. P can be an LTISystem, ODESystem, FunctionSystem
JuliaSimControls.static_gain_compensation
JuliaSimControls.step
JuliaSimControls.structured_singular_value
JuliaSimControls.sumblock
JuliaSimControls.system_mapping
JuliaSimControls.tf
tunable_parameters(sys, p = parameters(sys))Get all parameters of sys that are marked as tunable. Create a tunable parameter by
JuliaSimControls.tzero
JuliaSimControls.tzeros
JuliaSimControls.uss
JuliaSimControls.vec2sys
JuliaSimControls.zpconv
JuliaSimControls.zpk
JuliaSimControls.zpkdata
JuliaSimControls.δ
JuliaSimControls.δc
JuliaSimControls.δr
JuliaSimControls.δss
JuliaSimControls.νgap
ControlSystems.AbstractStateSpace
ControlSystems.Continuous
struct DelayLtiSystem{T, S <: Real} <: LTISystemCreate a delayed system by specifying both the system and time-delay vector. NOTE: if you want to create a system with simple input or output delays, use the function delay(τ).
DemoSystemsA module with standard test systems from the control literature.
ControlSystems.Discrete
ControlSystems.HeteroStateSpace
ControlSystems.LTISystem
SimulatorUsed to simulate continuous-time systems. See function ?solve for additional info.
ControlSystems.StateSpace
TransferFunctionCreate a TransferFunction where the coefficients are Particles from MonteCarloMeasurements.jl that can represent uncertainty.
add_input(sys::AbstractStateSpace, B2::AbstractArray, D2 = 0)Add inputs to sys by forming
add_output(sys::AbstractStateSpace, C2::AbstractArray, D2 = 0)Add outputs to sys by forming
append(systems::StateSpace...), append(systems::TransferFunction...)Append systems in block diagonal form
are(::Continuous, A, B, Q, R)Compute X, the solution to the continuous-time algebraic Riccati equation, defined as A'X + XA - (XB)R^-1(B'X) + Q = 0, where Q>=0 and R>0
S, P, B = balance(A[, perm=true])Compute a similarity transform T = S*P resulting in B = T\A*T such that the row and column norms of B are approximately equivalent. If perm=false, the transformation will only scale A using diagonal S, and not permute A (i.e., set P=I).
A, B, C, T = balance_statespace{S}(A::Matrix{S}, B::Matrix{S}, C::Matrix{S}, perm::Bool=false)Computes a balancing transformation T that attempts to scale the system so that the row and column norms of [TA/T TB; C/T 0] are approximately equal. If perm=true, the states in A are allowed to be reordered.
balrealCalculates a balanced realization of the system sys, such that the observability and reachability gramians of the balanced system are equal and diagonal G. T is the similarity transform between the old state x and the new state z such that Tz = x.
sysr, G, T = baltrunc(sys::StateSpace; atol = √ϵ, rtol=1e-3, n = nothing, residual = false)Reduces the state dimension by calculating a balanced realization of the system sys, such that the observability and reachability gramians of the balanced system are equal and diagonal G, and truncating it to order n. If n is not provided, it's chosen such that all states corresponding to singular values less than atol and less than rtol*σmax are removed.
mag, phase, w = bode(sys[, w])Compute the magnitude and phase parts of the frequency response of system sys at frequencies w. See also bodeplot
fig = bodeplot(sys, args...)Create a Bode plot of the LTISystem(s). A frequency vector w can be optionally provided. To change the Magnitude scale see setPlotScale(str)
ControlSystems.bodeplot!
bodevFor use with SISO systems where it acts the same as bode but with the extra dimensions removed in the returned values.
sysd = c2d(sys::AbstractStateSpace{<:Continuous}, Ts, method=:zoh; w_prewarp=0)
c2d_poly2poly(ro, Ts)returns the polynomial coefficients in discrete time given polynomial coefficients in continuous time
c2d_roots2poly(ro, Ts)returns the polynomial coefficients in discrete time given a vector of roots in continuous time
sysd, x0map = c2d_x0map(sys::AbstractStateSpace{<:Continuous}, Ts, method=:zoh; w_prewarp=0)Returns the discretization sysd of the system sys and a matrix x0map that transforms the initial conditions to the discrete domain by x0_discrete = x0map*[x0; u0]
ControlSystems.care
covarCalculate the stationary covariance P = E[y(t)y(t)'] of the output y of a StateSpace model sys driven by white Gaussian noise w with covariance E[w(t)w(τ)]=W*δ(t-τ) (δ is the Dirac delta).
ctrbCompute the controllability matrix for the system described by (A, B) or sys.
d2c(sys::AbstractStateSpace{<:Discrete}, method::Symbol = :zoh; w_prewarp=0)Convert the discrete-time system sys to an equivalent continuous-time system using method method.
X,Y = dab(A,B,C)DAB Solves the Diophantine-Aryabhatta-Bezout identity
Wn, zeta, ps = damp(sys)Compute the natural frequencies, Wn, and damping ratios, zeta, of the poles, ps, of sys
dampreport(sys)Display a report of the poles, damping ratio, natural frequency, and time constant of the system sys
ControlSystems.dare
dcgain(sys, ϵ=0)Compute the dcgain of system sys.
delay(tau)Create a pure time delay of length τ of type T.
dₘ = delaymargin(G::LTISystem)Return the delay margin, dₘ. For discrete-time systems, the delay margin is normalized by the sample time, i.e., the value represents the margin in number of sample times. Only supports SISO systems.
ControlSystems.den
ControlSystems.denpoly
ControlSystems.denvec
diagonalizedsys = diagonalize(s::StateSpace, digits=12) Diagonalizes the system such that the A-matrix is diagonal.
ControlSystems.dkalman
ControlSystems.dlqr
ControlSystems.dlyap
evalfr(sys, x)Evaluate the transfer function of the LTI system sys at the complex number s=x (continuous-time) or z=x (discrete-time).
feedback(sys)Basic use feedback(sys1, sys2) forms the feedback interconnection
feedback2dof(P,R,S,T)Return the transfer function P(F+C)/(1+PC) which is the closed-loop system with process P, controller C and feedforward filter F from reference to control signal (by-passing C).
sys_fr = freqresp(sys, w)Evaluate the frequency response of a linear system
freqresp!(R::Array{T, 3}, sys::LTISystem, w_vec::AbstractVector{<:Real})In-place version of freqresp that takes a pre-allocated array R of size (ny, nu, nw)
freqrespvFor use with SISO systems where it acts the same as freqresp but with the extra dimensions removed in the returned values.
S, PS, CS, T = gangoffour(P, C; minimal=true)Given a transfer function describing the plant P and a transfer function describing the controller C, computes the four transfer functions in the Gang-of-Four.
fig = gangoffourplot(P::LTISystem, C::LTISystem; minimal=true, plotphase=false, kwargs...)Gang-of-Four plot. kwargs is sent as argument to RecipesBase.plot.
S, PS, CS, T, RY, RU, RE = gangofseven(P,C,F)Given transfer functions describing the plant P, the controller C and a feedforward block F, computes the four transfer functions in the Gang-of-Four and the transfer functions corresponding to the feedforward.
gram(sys, opt; kwargs...)Compute the grammian of system sys. If opt is :c, computes the controllability grammian. If opt is :o, computes the observability grammian.
U = grampd(sys, opt; kwargs...)Return a Cholesky factor U of the grammian of system sys. If opt is :c, computes the controllability grammian G = U*U'. If opt is :o, computes the observability grammian G = U'U.
Ninf, ω_peak = hinfnorm(sys; tol=1e-6)Compute the H∞ norm Ninf of the LTI system sys, together with a frequency ω_peak at which the gain Ninf is achieved.
y, t, x = impulse(sys[, tfinal])Calculate the impulse response of system sys. If the final time tfinal or time vector t is not provided, one is calculated based on the system pole locations. The return value is a structure of type SimResult that can be plotted or destructured as y, t, x = result.
sysi = innovation_form(sys, R1, R2[, R12])Takes a system
iscontinuousReturns true for a continuous-time system sys, else returns false.
isdiscreteReturns true for a discrete-time system sys, else returns false.
isproperReturns true if the TransferFunction is proper. This means that order(den) ≥ order(num).
isstable(sys)Returns true if sys is stable, else returns false.
kalman(Continuous, A, C, R1, R2)Calculate the optimal Kalman gain
laglink(a, M; Ts=0)Returns a phase retarding link, the rule of thumb a = 0.1ωc guarantees less than 6 degrees phase margin loss. The bode curve will go from M, bend down at a/M and level out at 1 for frequencies > a
leadlink(b, N, K; Ts=0)Returns a phase advancing link, the top of the phase curve is located at ω = b√(N) where the link amplification is K√(N) The bode curve will go from K, bend up at b and level out at KN for frequencies > bN
leadlinkat(ω, N, K; Ts=0)Returns a phase advancing link, the top of the phase curve is located at ω where the link amplification is K√(N) The bode curve will go from K, bend up at ω/√(N) and level out at KN for frequencies > ω√(N)
leadlinkcurve(start=1)Plot the phase advance as a function of N for a lead link (phase advance link) If an input argument s is given, the curve is plotted from s to 10, else from 1 to 10.
lft(G, Δ, type=:l)Lower and upper linear fractional transformation between systems G and Δ.
Ninf, ω_peak = linfnorm(sys; tol=1e-6)Compute the L∞ norm Ninf of the LTI system sys, together with a frequency ω_peak at which the gain Ninf is achieved.
kp,ki,C = loopshapingPI(P,ωp; ϕl,rl, phasemargin, doplot = false)Selects the parameters of a PI-controller such that the Nyquist curve of P at the frequency ωp is moved to rl exp(i ϕl)
lqr(sys, Q, R)Calculate the optimal gain matrix K for the state-feedback law u = -K*x that minimizes the cost function:
result = lsim(sys, u[, t]; x0, method)
ControlSystems.luenberger
lyap(A, C)Compute the solution X to the continuous Lyapunov equation AX + XA' + C = 0.
ωgₘ, gₘ, ωϕₘ, ϕₘ = margin(sys::LTISystem, w::Vector; full=false, allMargins=false)returns frequencies for gain margins, gain margins, frequencies for phase margins, phase margins
ControlSystems.marginplot
ControlSystems.marginplot!
markovparam(sys, n)Compute the nth markov parameter of state-space system sys. This is defined as the following:
minrealMinimal realisation algorithm from P. Van Dooren, The generalized eigenstructure problem in linear system theory, IEEE Transactions on Automatic Control
fig = nicholsplot{T<:LTISystem}(systems::Vector{T}, w::AbstractVector; kwargs...)Create a Nichols plot of the LTISystem(s). A frequency vector w can be optionally provided.
ControlSystems.nicholsplot!
nonlinearity(T, f)Create a pure nonlinearity.
norm(A, p::Real=2)For numbers, return (|x|^p)^{1/p}.
ControlSystems.norminf
ControlSystems.num
ControlSystems.numpoly
ControlSystems.numvec
re, im, w = nyquist(sys[, w])Compute the real and imaginary parts of the frequency response of system sys at frequencies w. See also nyquistplot.
fig = nyquistplot(sys; Ms_circles=Float64[], unit_circle=false, hz = false, kwargs...)Create a Nyquist plot of the LTISystem(s). A frequency vector w can be optionally provided.
ControlSystems.nyquistplot!
nyquistvFor use with SISO systems where it acts the same as nyquist but with the extra dimensions removed in the returned values.
cont = observer_controller(sys, L::AbstractMatrix, K::AbstractMatrix)Returns the measurement-feedback controller that takes in y and forms the control signal u = -Lx̂. See also ff_controller.
observer_predictor(sys::AbstractStateSpace, K; h::Int = 1)Return the predictor system x' = (A - KC)x + (B - KD)u + Ky, y = Cx + Du, with the input equation [B-KD K] * [u; y]
obsv(A, C, n=size(A,1))Compute the observability matrix with n rows for the system described by (A, C) or sys. Providing the optional n > sys.nx returns an extended observability matrix.
pade(τ::Real, N::Int)Approximate all time-delays in G by Padé approximations of degree N.
parallel(sys1::LTISystem, sys2::LTISystem)Connect systems in parallel, equivalent to sys2+sys1
C = pid(; kp=0, ki=0, kd=0, time=false, series=false)Calculates and returns a PID controller in transfer-function form.
pidplots(P, args...; kps=0, kis=0, kds=0, time=false, series=false, ω=0)Plots interesting figures related to closing the loop around process P with a PID controller Send in a bunch of PID-parameters in any of the vectors kp, ki, kd. The vectors must be the same length.
place(A, B, p, opt=:c)Calculate the gain matrix K such that A - BK has eigenvalues p.
piparams, C = placePI(P, ω₀, ζ; form=:standard)Selects the parameters of a PI-controller such that the poles of closed loop between P and C are placed to match the poles of s^2 + 2ζω₀s + ω₀^2.
ControlSystems.pole
poles(sys)Compute the poles of system sys.
fig = pzmap(fig, system, args...; kwargs...)Create a pole-zero map of the LTISystem(s) in figure fig, args and kwargs will be sent to the scatter plot command.
ControlSystems.pzmap!
reduce_sys(A::AbstractMatrix, B::AbstractMatrix, C::AbstractMatrix, D::AbstractMatrix, meps::AbstractFloat)Implements REDUCE in the Emami-Naeini & Van Dooren paper. Returns transformed A, B, C, D matrices. These are empty if there are no zeros.
relative_gain_array(G, w::AbstractVector)Reference: "On the Relative Gain Array (RGA) with Singular and Rectangular Matrices" Jeffrey Uhlmann https://arxiv.org/pdf/1805.10312.pdf
rgaplot(sys, args...; hz=false)Plot the relative-gain array entries of the LTISystem(s). A frequency vector w can be optionally provided.
ControlSystems.rgaplot!
ControlSystems.rlocus
rlocusplot(P::LTISystem, K)Computes and plots the root locus of the SISO LTISystem P with a negative feedback loop and feedback gains K, if K is not provided, range(1e-6,stop=500,length=10000) is used. If OrdinaryDiffEq.jl is installed and loaded by the user (using OrdinaryDiffEq), rlocusplot will use an adaptive step-size algorithm to select values of K. A scalar Kmax can then be given as second argument.
ControlSystems.rlocusplot!
rstcSee ?rstd for the discrete case
R,S,T=rstd(BPLUS,BMINUS,A,BM1,AM,AO,AR,AS)rstd Polynomial synthesis in discrete time.
series(sys1::LTISystem, sys2::LTISystem)Connect systems in series, equivalent to sys2*sys1
setPlotScale(str)Set the default scale of magnitude in bodeplot and sigmaplot. str should be either "dB" or "log10".
sv, w = sigma(sys[, w])Compute the singular values sv of the frequency response of system sys at frequencies w. See also sigmaplot
sigmaplot(sys, args...; hz=false)Plot the singular values of the frequency response of the LTISystem(s). A frequency vector w can be optionally provided.
ControlSystems.sigmaplot!
sigmavFor use with SISO systems where it acts the same as sigma but with the extra dimensions removed in the returned values.
syst = similarity_transform(sys, T; unitary=false)Perform a similarity transform T : Tx̃ = x on sys such that
sminreal(sys)Compute the structurally minimal realization of the state-space system sys. A structurally minimal realization is one where only states that can be determined to be uncontrollable and unobservable based on the location of 0s in sys are removed.
solve(s::Simulator, x0, tspan, args...; kwargs...)Simulate the system represented by s from initial state x0 over time span tspan = (t0,tf). args and kwargs are sent to the solve function from OrdinaryDiffEq, e.g., solve(s, x0, tspan, Tsit5(), reltol=1e-5) solves the problem with solver Tsit5() and relative tolerance 1e-5.
sys = ss(A, B, C, D) # ContinuousCreate ExtendedStateSpace
A, B, C, D = ssdata(sys)
ssrand(T::Type, ny::Int, nu::Int, nstates::Int; proper=false, stable=true, Ts=nothing)Returns a random StateSpace model with ny outputs, nu inputs, and nstates states, whose matrix elements are normally distributed.
fig, kp, ki = stabregionPID(P, [ω]; kd=0, doplot = true)Segments of the curve generated by this function form the boundary of the stability region for a process with transfer function P(s). The PID controller is assumed to be of the form kp + ki/s + kd*s
starprod(sys1, sys2, dimu, dimy)Compute the Redheffer star product.
y, t, x = step(sys[, tfinal])Calculate the step response of system sys. If the final time tfinal or time vector t is not provided, one is calculated based on the system pole locations. The return value is a structure of type SimResult that can be plotted or destructured as y, t, x = result.
tfCreate as a fraction of polynomials:
ControlSystems.tzero
tzeros(sys)Compute the invariant zeros of the system sys. If sys is a minimal realization, these are also the transmission zeros.
zpc(a,r,b,s)form conv(a,r) + conv(b,s) where the lengths of the polynomials are equalized by zero-padding such that the addition can be carried out
zpkCreate transfer function on zero pole gain form. The numerator and denominator are represented by their poles and zeros.
z, p, k = zpkdata(sys)Compute the zeros, poles, and gains of system sys.
DiskRepresents a perturbation disc in the complex plane. Disk(0.5, 2) represents all perturbations in the circle centered at 1.25 with radius 0.75, or in other words, a gain margin of 2 and a phase margin of 36.9 degrees.
DiskmarginThe notation follows "An Introduction to Disk Margins", Peter Seiler, Andrew Packard, and Pascal Gahinet
se = ExtendedStateSpace(s::AbstractStateSpace; kwargs...)The conversion from a regular statespace object to an ExtendedStateSpace creates the following system by default
G_CS(P, C)The closed-loop transfer function from (-) measurement noise or (+) reference to control signal. Technically, it's (1 + CP)⁻¹C so SC would be a better, but nonstandard name.
G_PS(P, C)The closed-loop transfer function from load disturbance to plant output. Technically, it's (1 + PC)⁻¹P so SP would be a better, but nonstandard name.
G = LQGProblem(sys::ExtendedStateSpace, Q1, Q2, R1, R2; qQ=0, qR=0, SQ=nothing, SR=nothing)If only an ExtendedStateSpace system is provided, e.g. from hinfpartition, the system P is assumed to correspond to the H₂ optimal control problem with
NamedStateSpaceSee named_ss for a convenient constructor.
UncertainSS{TE} <: AbstractStateSpace{TE}Represents LFT_u(M, Diagonal(Δ))
add_disturbance(sys::StateSpace, Ad::Matrix, Cd::Matrix)See CCS pp. 144
add_input_differentiator(sys::StateSpace, ui = 1:sys.nu; goodwin=false)Augment the output of sys with the difference u(k+1)-u(k)
add_input_integrator(sys::StateSpace, ui = 1, ϵ = 0)Augment the output of sys with the integral of input at index ui, i.e., y_aug = [y; ∫u[ui]] See also add_low_frequency_disturbance
add_low_frequency_disturbance(sys::StateSpace, Ai::Integer; ϵ = 0)Augment sys with a low-frequency (integrating if ϵ=0) disturbance model. If an integrating input disturbance is used together with an observer, the controller will have integral action.
add_measurement_disturbance(sys::StateSpace{Continuous}, Ad::Matrix, Cd::Matrix)
add_differentiator(sys::StateSpace{<:Discrete})Augment the output of sys with the numerical difference (discrete-time derivative) of output, i.e., y_aug = [y; (y-y_prev)/sys.Ts] To add both an integrator and a differentiator to a SISO system, use
add_output_integrator(sys::StateSpace{<:Discrete}, ind = 1; ϵ = 0)Augment the output of sys with the integral of output at index ind, i.e., y_aug = [y; ∫y[ind]] To add both an integrator and a differentiator to a SISO system, use
add_resonant_disturbance(sys::StateSpace{Continuous}, ω, ζ, Ai::Int; measurement = false)Augment sys with a resonant disturbance model.
sysr, hs = baltrunc2(sys::LTISystem; residual=false, n=missing, kwargs...)Compute a balanced truncation of order n and the Hankel singular values
sysr, hs, info = baltrunc_coprime(sys; residual = false, n = missing, factorization::F = DescriptorSystems.gnlcf, kwargs...)Compute a balanced truncation of the left coprime factorization of sys. See baltrunc2 for additional keyword-argument help.
baltrunc_unstab(sys::LTISystem; residual = false, n = missing, kwargs...)Balanced truncation for unstable models. An additive decomposition of sys into sys = sys_stable + sys_unstable is performed after which sys_stable is reduced. The order n must not be less than the number of unstable poles.
bilinearc2d(Ac::AbstractArray, Bc::AbstractArray, Cc::AbstractArray, Dc::AbstractArray, Ts::Number; tolerance=1e-12)Applies a Balanced Bilinear transformation to a continuous-time statespace object, discretizing it with sample time Ts
bilineard2c(Ad::AbstractArray, Bd::AbstractArray, Cd::AbstractArray, Dd::AbstractArray, Ts::Number; tolerance=1e-12)Applies a Balanced Bilinear transformation to a discrete-time statespace object, converting it to continuous time
blocks, M = blocksort(P::UncertainSS)Returns the block structure of P.Δ as well as P.M permuted according to the sorted block structure. blocks is a vector of vectors with the block structure of perturbation blocks as described by μ-tools, i.e.
broken_feedback(L, i)Closes all loops in square MIMO system L except for loops i. Forms L1 in fig 14. of "An Introduction to Disk Margins" https://arxiv.org/abs/2003.04771
closedloop(l::LQGProblem, L = lqr(l), K = kalman(l))Closed-loop system as defined in Glad and Ljung eq. 8.28. Note, this definition of closed loop is not the same as lft(P, K), which has B1 instead of B2 as input matrix. Use lft(l) to get the system from disturbances to controlled variables w -> z.
comp_sensitivity
connect(systems, connections; w1, z1 = (:), verbose = true, kwargs...)Create complicated feedback interconnection.
controller_reduction(P::ExtendedStateSpace, K, r, out=true; kwargs...)Minimize
controller_reduction_plot(G, K)Plot the normalized-coprime margin (ncfmargin) as a function of controller order when baltrunc_coprime is used to reduce the controller. Red, orange and green bands correspond to rules of thumb for bad, okay and good coprime uncertainty margins. A value of 0 indicates an unstable closed loop.
RobustAndOptimalControl.controller_reduction_plot!
controller_reduction_weight(P::ExtendedStateSpace, K)Lemma 19.1 See Robust and Optimal Control Ch 19.1
dare3(P::AbstractStateSpace, Q1::AbstractMatrix, Q2::AbstractMatrix, Q3::AbstractMatrix; full=false)Solve the discrete-time algebraic Riccati equation for a discrete LQR cost augmented with control differences
diskmargin(L, σ = 0)Calculate the diskmargin at a particular frequency or vector of frequencies. If ω is a vector, you get a frequency-dependent diskmargin plot if you plot the returned value. See also ncfmargin.
sys = dss(A, E, B, C, D; Ts = 0, check_reg = false, ...)Construct an input-output equivalent descriptor system representation sys = (Ad-λdE,Bd,Cd,Dd) to a pencil based linearization (A-λE,B-λF,C-λG,D-λH) satisfying
expand_symbol(s::Symbol, n::Int)Takes a symbol and an integer and returns a vector of symbols with increasing numbers appended to the end. E.g., (:x, 3) -> [:x1, :x2, :x3]
extended_controller(K::AbstractStateSpace)Returns an expression for the controller that is obtained when state-feedback u = -L(xᵣ-x̂) is combined with a Kalman filter with gain K that produces state estimates x̂. The controller is an instance of ExtendedStateSpace where C2 = -L, D21 = L and B2 = K.
extended_gangoffour(P, C, pos=true)Returns a single statespace system that maps
G = feedback_control(P, K)Return the (negative feedback) closed loop system from input of K to output of P while also outputting the control signal (output of K), i.e., G maps references to [y; u]
RobustAndOptimalControl.ff_controller
l = find_lft(sys::StateSpace{<:Any, <:StaticParticles{<:Any, N}}, δ) where NGiven a system sys with uncertain coefficients in the form of StaticParticles, find a lower linear fractional transformation M such that lft(M, δ) ≈ sys.
centers, radii = fit_complex_perturbations(P, w; relative=true, nominal=:mean)For each frequency in w, fit a circle in the complex plane that contains all models in the model set P, represented as an LTISystem with Particles coefficients. Note, the resulting radii can be quite unstable if the number of particles is small, in particular if the particles are normally distributed instead of uniformly.
sysr, hs = frequency_weighted_reduction(G, Wo, Wi; residual=true)Find Gr such that
RobustAndOptimalControl.fudge_inv
gain_and_delay_uncertainty(kmin, kmax, Lmax)Return a multiplicative weight to represent the uncertainty coming from neglecting the dynamics k*exp(-s*L) where k ∈ [kmin, kmax] and L ≤ Lmax. This weight is slightly optimistic, an expression for a more exact weight appears in eq (7.27), "Multivariable Feedback Control: Analysis and Design"
RobustAndOptimalControl.gangoffour
K, γ, info = glover_mcfarlane(G::AbstractStateSpace, γ = 1.1; W1=1, W2=1)For discrete systems, the info tuple also contains feedback gains F, L and observer gain Hkf such that the controller on observer form is given by
K, γ, info = glover_mcfarlane_2dof(G::AbstractStateSpace{Continuous}, Tref::AbstractStateSpace{Continuous}, γ = 1.1, ρ = 1.1;Joint design of feedback and feedforward compensators
n = h2norm(sys::LTISystem; kwargs...)A numerically robust version of norm using DescriptorSystems.jl
K, Cl = h2synthesize(P::ExtendedStateSpace, γ = nothing)Synthesize H₂-optimal controller K and calculate the closed-loop transfer function from w to z. Ref: Cha. 14.5 in Robust and Optimal Control.
n, hsv = hankelnorm(sys::LTISystem; kwargs...)Compute the hankelnorm and the hankel singular values
Wh = hanus(W)Return Wh in Hanus form. Wh has twice the number of inputs, where the second half of the inputs are "actual inputs", e.g., potentially saturated. This is used to endow W with anti-windup protection. W must have an invertible D matrix and be minimum phase.
sysm, T, HF = hess_form(sys)Bring sys to Hessenberg form.
flag = hinfassumptions(P::ExtendedStateSpace; verbose=true)Check the assumptions for using the γ-iteration synthesis in Theorem 1.
∇A, ∇B, ∇C, ∇D, hn, ω = hinfgrad(sys; rtolinf=1e-8, kwargs...)Compute the gradient of the H∞ norm w.r.t. the statespace matrices A,B,C,D. If only a system is provided, the norm hn and the peak frequency ω are automatically calculated. kwargs are sent to hinfnorm2. Note, the default tolerance to which the norm is calculated is set smaller than default for hinfnorm2, gradients will be discontinuous with any non-finite tolerance, and sensitive optimization algorithms may require even tighter tolerance.
n, ω = hinfnorm2(sys::LTISystem; kwargs...)A numerically robust version of hinfnorm using DescriptorSystems.jl
P = hinfpartition(G, WS, WU, WT)Transform a SISO or MIMO system G, with weighting functions WS, WU, WT into an LFT with an isolated controller, and write the resulting system, P(s), on a state-space form. Valid inputs for G are transfer functions (with dynamics, can be both MIMO and SISO, both in tf and ss forms). Valid inputs for the weighting functions are empty arrays, numbers (static gains), and LTISystems.
hinfsignals(P::ExtendedStateSpace, G::LTISystem, C::LTISystem)Use the extended state-space model, a plant and the found controller to extract the closed loop transfer functions.
K, γ, mats = hinfsynthesize(P::ExtendedStateSpace; gtol = 1e-4, interval = (0, 20), verbose = false, tolerance = 1.0e-10, γrel = 1.01, transform = true, ftype = Float64, check = true)Computes an H-infinity optimal controller K for an extended plant P such that
hsvd(sys::AbstractStateSpace{Continuous})Return the Hankel singular values of sys, computed as the eigenvalues of QP, where Q and P are the Gramians of sys.
input_comp_sensitivity(P,C)Transfer function from load disturbance to control signal.
input_sensitivity(P, C)Transfer function from load disturbance to total plant input.
RobustAndOptimalControl.linfnorm2
loop_diskmargin(L, args...; kwargs...)Calculate the loop-at-a-time diskmargin for each output and input of P. See also diskmargin, sim_diskmargin. Ref: "An Introduction to Disk Margins", Peter Seiler, Andrew Packard, and Pascal Gahinet https://arxiv.org/abs/2003.04771
loop_scale(L::LTISystem, w = 0)Find the optimal diagonal scaling matrix D such that D\L(iw)*D has a minimized condition number at frequency w. Applicable to square L only. Use loop_scaling to obtain D.
loop_scaling(M0::Matrix, tol = 0.0001)Find the optimal diagonal scaling matrix D such that D\M0*D has a minimized condition number. Applicable to square M0 only. See also structured_singular_value with option dynamic=true. Use loop_scale to find and apply the scaling to a loop-transfer function.
lqr3(P::AbstractStateSpace, Q1::AbstractMatrix, Q2::AbstractMatrix, Q3::AbstractMatrix)Calculate the feedback gain of the discrete LQR cost function augmented with control differences
makeweight(low, f_mid, high)Create a weighting function that goes from gain low at zero frequency, through gain gain_mid to gain high at ∞
measure(s::NamedStateSpace, names)Return a system with specified states as measurement outputs.
RobustAndOptimalControl.minreal2
sysm, T, E = modal_form(sys; C1 = false)Bring sys to modal form.
muplot(sys, args...; hz=false)Plot the structured singular values (assuming time-varying diagonal complex uncertainty) of the frequency response of the LTISystem(s). This plot is similar to sigmaplot, but scales the loop-transfer function to minimize the maximum singular value. Only applicable to square systems. A frequency vector w can be optionally provided.
RobustAndOptimalControl.muplot!
fig = mvnyquistplot(sys, w; unit_circle=true, hz = false, kwargs...)Create a Nyquist plot of the LTISystem. A frequency vector w must be provided.
RobustAndOptimalControl.mvnyquistplot!
named_ss(sys::AbstractStateSpace{T}; x, u, y)If a single name is provided, the outputs, inputs and states will be automatically named y,u,x with name as prefix.
m, ω = ncfmargin(P, K)Normalized coprime factor margin, defined as the inverse of
neglected_delay(Lmax)Return a multiplicative weight to represent the uncertainty coming from neglecting the dynamics exp(-s*L) where L ≤ Lmax. "Multivariable Feedback Control: Analysis and Design" Ch 7.4.5
neglected_lag(τmax)Return a multiplicative weight to represent the uncertainty coming from neglecting the dynamics 1/(s*τ + 1) where τ ≤ τmax. "Multivariable Feedback Control: Analysis and Design" Ch 7.4.5
noise_mapping(P::ExtendedStateSpace)Return the system from w -> y. See also performance_mapping, system_mapping, noise_mapping.
RobustAndOptimalControl.nominal
nugap(sys0::LTISystem, sys1::LTISystem; kwargs...)Compute the ν-gap metric between two systems. See also ncfmargin.
nyquistcircles(w, centers, radii)Plot the nyquist curve with circles. It only makes sense to call this function if the circles represent additive uncertainty, i.e., were calculated with relative=false.
RobustAndOptimalControl.nyquistcircles!
output_comp_sensitivity(P,C)Transfer function from measurement noise / reference to plant output.
output_sensitivity(P, C)Transfer function from measurement noise / reference to control error.
partition(P::AbstractStateSpace; u, y, w=!u, z=!y)
performance_mapping(P::ExtendedStateSpace)Return the system from w -> z. See also performance_mapping, system_mapping, noise_mapping.
robstab(M0::UncertainSS, w=exp10.(LinRange(-3, 3, 1500)); kwargs...)Return the robust stability margin of an uncertain model, defined as the inverse of the structured singular value. Currently, only diagonal complex perturbations are supported.
sysm, T, SF = schur_form(sys)Bring sys to Schur form.
sensitivity
show_construction([io::IO,] sys::LTISystem; name = "temp", letb = true)Print code to io that reconstructs sys.
sim_diskmargin(L, σ::Real, w::AbstractVector)Simultaneous diskmargin at both outputs and inputs of P. Ref: "An Introduction to Disk Margins", Peter Seiler, Andrew Packard, and Pascal Gahinet https://arxiv.org/abs/2003.04771 See also ncfmargin.
specificationplot([S,CS,T], [WS,WU,WT])This function visualizes the control synthesis using hinfsynthesize with the three weighting functions $WS(s), WU(s), WT(s)$ inverted and scaled by γ, against the corresponding transfer functions $S(s), C(s)S(s), T(s)$, to verify visually that the specifications are met. This may be run using both MIMO and SISO systems.
RobustAndOptimalControl.specificationplot!
RobustAndOptimalControl.splitter
sys = ss(A, B, C, D) # ContinuousCreate ExtendedStateSpace
A, B1, B2, C1, C2, D11, D12, D21, D22 = ssdata_e(sys)
stab, unstab = stab_unstab(sys; kwargs...)Decompose sys into sys = stab + unstab where stab contains all stable poles and unstab contains unstable poles. See DescriptorSystems.gsdec for the available keyword arguments.
RobustAndOptimalControl.static_gain_compensation
μ = structured_singular_value(M; tol=1e-4, scalings=false, dynamic=false)
sumblock(ex::String; Ts = 0, n = 1)Create a summation node that sums (or subtracts) vectors of length n.
system_mapping(P::ExtendedStateSpace)Return the system from u -> y. See also performance_mapping, system_mapping, noise_mapping.
uss(D11, D12, D21, D22, Δ, Ts = nothing)Convert a δ object to an UncertainSS
vec2sys(v::AbstractArray, ny::Int, nu::Int, ts = nothing)Create a statespace system from the parameters
δ(N=32)Create an uncertain element of N uniformly distributed samples ∈ [-1, 1]
δc(val::Complex = complex(0.0), radius::Real = 1.0, name)Create a complex, uncertain parameter. If no name is given, a boring name will be generated automatically.
δr(val::Real = 0.0, radius::Real = 1.0, name)Create a real, uncertain parameter. If no name is given, a boring name will be generated automatically.
RobustAndOptimalControl.δss
RobustAndOptimalControl.νgap
AdvancedParticleFilter(Nparticles, dynamics, measurement, measurement_likelihood, dynamics_density, initial_density; p = SciMLBase.NullParameters(), kwargs...)See the docs for more information: https://baggepinnen.github.io/LowLevelParticleFilters.jl/stable/#AdvancedParticleFilter-1
AuxiliaryParticleFilter(args...; kwargs...)Takes exactly the same arguments as ParticleFilter, or an instance of ParticleFilter.
DAEUnscentedKalmanFilter(ukf; g, get_x_z, build_xz, xz0, threads=false)An Unscented Kalman filter for differential-algebraic systems (DAE).
ExtendedKalmanFilter(kf, dynamics, measurement)A nonlinear state estimator propagating uncertainty using linearization.
KalmanFilter(A,B,C,D,R1,R2,d0=MvNormal(R1); p = SciMLBase.NullParameters())The matrices A,B,C,D define the dynamics
LowLevelParticleFilters.PFstate
ParticleFilter(num_particles, dynamics, measurement, dynamics_density, measurement_density, initial_density; p = SciMLBase.NullParameters())See the docs for more information: https://baggepinnen.github.io/LowLevelParticleFilters.jl/stable/#Particle-filter-1
TupleProduct(v::NTuple{N,UnivariateDistribution})Create a product distribution where the individual distributions are stored in a tuple. Supports mixed/hybrid Continuous and Discrete distributions
UnscentedKalmanFilter(dynamics, measurement, R1, R2, d0=MvNormal(Matrix(R1)); p = SciMLBase.NullParameters(), ny, nu)A nonlinear state estimator propagating uncertainty using the unscented transform.
commandplot(pf, u, y, p=parameters(pf); kwargs...)Produce a helpful plot. For customization options (kwargs...), see ?pplot. After each time step, a command from the user is requested.
ll, e = correct!(f, u, y, p = parameters(f), t = index(f))Correct the state estimate using a measurement y, updates the internal state of the observer.
LowLevelParticleFilters.covariance
debugplot(pf, u, y, p=parameters(pf); runall=false, kwargs...)Produce a helpful plot. For customization options (kwargs...), see ?pplot.
densityplot(x,[w])Plot (weighted) particles densities
LowLevelParticleFilters.densityplot!
LowLevelParticleFilters.effective_particles
LowLevelParticleFilters.expweights
sol = forward_trajectory(kf::AbstractKalmanFilter, u::Vector, y::Vector, p=parameters(kf))Run the particle filter for a sequence of inputs and measurements. Return a solution with x,w,we,ll = particles, weights, expweights and loglikelihood
LowLevelParticleFilters.heatboxplot
LowLevelParticleFilters.heatboxplot!
LowLevelParticleFilters.index
ll(θ) = log_likelihood_fun(filter_from_parameters(θ::Vector)::Function, priors::Vector{Distribution}, u, y, p)returns function θ -> p(y|θ)
ll = loglik(filter, u, y, p=parameters(filter))Calculate loglikelihood for entire sequences u,y
ll = logsumexp!(w, we [, maxw])Normalizes the weight vector w and returns the weighted log-likelihood
x,ll = mean_trajectory(pf, u::Vector{Vector}, y::Vector{Vector}, p=parameters(pf))This function resets the particle filter to the initial state distribution upon start
metropolis(ll::Function(θ), R::Int, θ₀::Vector, draw::Function(θ) = naive_sampler(θ₀))Performs MCMC sampling using the marginal Metropolis (-Hastings) algorithm. draw = θ -> θ' samples a new parameter vector given an old parameter vector. The distribution must be symmetric, e.g., a Gaussian. R is the number of iterations. See log_likelihood_fun
LowLevelParticleFilters.mode_trajectory
LowLevelParticleFilters.num_particles
LowLevelParticleFilters.particles
LowLevelParticleFilters.particletype
predict!(f, u, p = parameters(f), t = index(f))Predict the state forward in time, updates the internal state of the observer.
reset!Reset the filter to initial state and covariance/distribution
LowLevelParticleFilters.sample_measurement
LowLevelParticleFilters.shouldresample
x,u,y = simulate(f::AbstractFilter, T::Int, du::Distribution, p=parameters(f), [N]; dynamics_noise=true)See also simplot, predict
xT,RT,ll = smooth(kf::KalmanFilter, u::Vector, y::Vector, p=parameters(kf))Perform particle smoothing using forward-filtering, backward simulation. Return smoothed particles and loglikelihood. See also smoothed_trajs, smoothed_mean, smoothed_cov
smoothed_cov(xb)Helper function to calculate the covariance of smoothed particle trajectories
smoothed_mean(xb)Helper function to calculate the mean of smoothed particle trajectories
smoothed_trajs(xb)Helper function to get particle trajectories as a 3-dimensional array (N,M,T) instead of a matrix of vectors.
LowLevelParticleFilters.state
ll, e = update!(f::AbstractFilter, u, y, p = parameters(f), t = index(f))Perform one step of predict! and correct!, returns loglikelihood and prediction error
LowLevelParticleFilters.weights
weigthed_cov(x,we)Similar to weigthed_mean, but returns covariances
x̂ = weigthed_mean(x,we)

Docstrings

JuliaSimControls

Note

Docstrings of the MPC submodule are located under MPC and the Blocks submodule under Blocks.

JuliaSimControls.Eq29 - Type
Eq29 <: InverseLQRMethod

R is a weighting matrix, determining the relative importance of matching each input to that provided by the original controller.

source
JuliaSimControls.Eq32 - Type
Eq32 <: InverseLQRMethod
Eq32(q1, r1)

r1 penalizes deviation from the original control signal while q1 penalizes deviation from the original effect the control signal had on the state. In other words, the r1 term tries to use the same actuator configuration as the original controller, while the q1 term ensures that the effect of the controller is the same.

source
JuliaSimControls.inverse_lqr - Method
Q,R,S,P,L2 = inverse_lqr(method::Eq29, G::AbstractStateSpace, L)

Solve the inverse optimal control problem that finds the LQR cost function that leads to a controller approximating a state-feedback controller with gain matrix L. P is the solution to the Riccati equation and L2 is the recreated feedback gain. Note: S will in general not be zero, and including this cross term in the cost function may be important. If including S is not possible, use the LMI method to find a cost function with as small S as possible.

Creates the stage-cost matrix

\[\begin{bmatrix} L^T R L & -L^T R\\ -R L & R \end{bmatrix} = \begin{bmatrix} Q & -S\\ -S^T & R \end{bmatrix} \]

Ref: "Designing MPC controllers by reverse-engineering existing LTI controllers", E. N. Hartley, J. M. Maciejowski
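
Example

A minimal usage sketch (not part of the original docstring): the plant and weights below are made up, and the constructor Eq29(R) is assumed to take the weighting matrix R described above.

using JuliaSimControls, LinearAlgebra
Gd = c2d(ss(tf([1, 5], [1, 2, 10])), 0.01)                # example discrete plant
Ld = dlqr(Gd.A, Gd.B, diagm([1.0, 1.0]), fill(1.0, 1, 1)) # an existing state-feedback gain
Q, R, S, P, L2 = inverse_lqr(Eq29(fill(1.0, 1, 1)), Gd, Ld)
norm(Ld - L2) # small if the original gain is recreated well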

source
JuliaSimControls.inverse_lqr - Method
Q,R,S,P,L2 = inverse_lqr(method::Eq32, G::AbstractStateSpace, L)

Solve the inverse optimal control problem that finds the LQR cost function that leads to a controller approximating a state-feedback controller with gain matrix L. P is the solution to the Riccati equation and L2 is the recreated feedback gain. Note: S will in general not be zero, and including this cross term in the cost function may be important. If including S is not possible, use the LMI method to find a cost function with as small S as possible.

Creates the cost function

\[||Bu - BLu||^2_{q_1} + ||u - Lu||^2_{r_1}\]

Ref: "Designing MPC controllers by reverse-engineering existing LTI controllers", E. N. Hartley, J. M. Maciejowski

source
JuliaSimControls.inverse_lqr - Method
Q,R,S,P,L2 = inverse_lqr(method::GMF)

Solve the inverse optimal control problem that finds the LQR cost function on the form

\[x'Qx + u'Ru\]

that leads to an LQR controller approximating a Glover McFarlane controller. Using this method, S = 0.

Ref: Rowe and Maciejowski, "Tuning MPC using H∞ Loop Shaping" .

Example

disc(x) = c2d(ss(x), 0.01)
G = tf([1, 5], [1, 2, 10]) |> disc # Plant model
W1 = tf(1,[1, 0])          |> disc # Loop shaping weight
gmf2 = glover_mcfarlane(G; W1) # Design Glover McFarlane controller
Q,R,S,P,L = inverse_lqr(GMF(gmf2)) # Get cost function
source
JuliaSimControls.inverse_lqr - Method
Q,R,S,P,L2 = inverse_lqr(method::LMI, G::AbstractStateSpace, L; optimizer = Hypatia.Optimizer)

Solve the inverse optimal control problem that finds the LQR cost function that leads to a controller approximating a state-feedback controller with gain matrix L. P is the solution to the Riccati equation and L2 is the recreated feedback gain. Note, L is supposed to be used with negative feedback, i.e., it's designed such that u = -Lx.

Solves a convex LMI problem minimizing the cross term S. Note: there is no guarantee that the cross term will be driven to 0 exactly. If S remains large in relation to Q and R, S must be included in the cost function for high fidelity reproduction of the original controller.

Ref: "Designing MPC controllers by reverse-engineering existing LTI controllers", E. N. Hartley, J. M. Maciejowski

source
JuliaSimControls.FixedGainObserver - Type
FixedGainObserver{F <: Function, x} <: AbstractFilter
FixedGainObserver(sys::AbstractStateSpace, x0, K)

A linear observer, similar to a Kalman filter, but with a fixed measurement feedback gain. The gain can be designed using, e.g., pole placement or solving a Riccati equation. For a robust observer, consider using glover_mcfarlane followed by inverse_lqr.
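
Example

A usage sketch (not from the original docstring): the model, sample time and covariances are made up, and the kalman signature is assumed to accept a discrete statespace model directly.

using JuliaSimControls, LinearAlgebra
sys = c2d(ss(tf(1, [1, 1, 1])), 0.1)                # example discrete-time model
K = kalman(sys, diagm([1.0, 1.0]), fill(0.1, 1, 1)) # fixed measurement-feedback gain
obs = FixedGainObserver(sys, zeros(sys.nx), K)      # observer state initialized at the origin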

source
JuliaSimControls.OpenLoopObserver - Type
OpenLoopObserver{F <: Function, x} <: AbstractFilter
OpenLoopObserver(discrete_dynamics, x0)

An observer that uses the dynamics model without any measurement feedback. This observer can be used as an oracle that has full knowledge of the state. Note, this is often an unrealistic assumption in real-world contexts and open-loop observers can not account for load disturbances. Use of this observer in a closed-loop context creates a false closed loop.
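
Example

A sketch with made-up discrete dynamics; the (x, u, p, t) argument order mirrors the convention used by build_controlled_dynamics and is an assumption here.

f(x, u, p, t) = [0.9 0.1; 0.0 0.8] * x .+ [0.0, 0.1] .* u[1] # made-up dynamics x⁺ = f(x, u, p, t)
obs = OpenLoopObserver(f, zeros(2))                          # initial state x0 = [0, 0]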

source
JuliaSimControls.OperatingPoint - Type
OperatingPoint(x, u, y)
OperatingPoint()

Structure representing an operating point around which a system is linearized. If no arguments are supplied, an empty operating point is created.

Arguments:

  • x: State
  • u: Control input
  • y: Output
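
Example

For illustration only (made-up values), an operating point for a two-state, single-input, single-output system:

op = OperatingPoint([1.0, 0.0], [0.5], [1.0])
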
source
JuliaSimControls.mussv - Method
mussv(M::AbstractMatrix; optimizer = Hypatia.Optimizer{eltype(M)}, bu = opnorm(M), tol = 0.001)

Compute (an upper bound of) the structured singular value μ for diagonal Δ of complex perturbations (other structures of Δ are handled by supplying the block structure mussv(M, blocks)). M is assumed to be an (n × n × N_freq) array or a matrix.

Solves a convex LMI.

Arguments:

  • bu: Upper bound for bisection.
  • tol: tolerance.

See also mussv, mussv_tv, mussv_DG.

Extended help

By default, the SCS solver is used to solve the problem. Other solvers that handle this problem formulation are

  • Mosek
  • Hypatia.jl (native Julia)

The solver can be selected using the keyword argument optimizer.
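
A small sketch (matrix values made up) computing the bound for a complex 2 × 2 matrix with the default diagonal complex uncertainty structure:

using JuliaSimControls
M = [0.0+1.0im 2.0; 0.5 1.0+0.5im] # example complex matrix
μ_ub = mussv(M)                    # upper bound on μ; compare with opnorm(M)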

source
JuliaSimControls.mussv - Method
mussv(M::AbstractStateSpace, blocks; optimizer = Hypatia.Optimizer, bu0 = 20, tol = 0.001)

Compute (an upper bound of) the structured singular value μ of statespace model M interconnected with uncertainty structure described by blocks. Reference: MAE509 (LMIs in Control): Lecture 14, part C - LMIs for Robust Control with Structured Uncertainty

Example:

The following example illustrates how to use the structured singular value to determine how large diagonal complex uncertainty can be added at the input of a plant P before the closed-loop system becomes unstable

julia> Δ = uss([δc(), δc()]); # Diagonal uncertainty element

julia> a = 1;

julia> P = ss([0 a; -a -1], I(2), [1 a; 0 1], 0) * (I(2) + Δ);

julia> K = ss(I(2));

julia> G = lft(P, -K);

julia> stabmargin = 1/mussv(G) # We can handle 134% of the modeled uncertainty
1.3429508196721311

julia> # Compare with the input diskmargin
       diskmargin(K*system_mapping(P), -1)
Disk margin with:
Margin: 1.3469378397689664
Frequency: -0.40280561122244496
Gain margins: [-0.3469378397689664, 2.3469378397689664]
Phase margin: 84.67073122411068
Skew: -1
Worst-case perturbation: missing

Extended help

By default, the SCS solver is used to solve the problem. Other solvers that handle this problem formulation are

  • Mosek
  • Hypatia.jl (native Julia)

The solver can be selected using the keyword argument optimizer.

source
JuliaSimControls.mussvMethod
mussv(G::UncertainSS; kwargs...)

Compute (an upper bound of) the structured singular value μ of uncertain system model G.

source
JuliaSimControls.mussv_DGMethod
mussv_DG(M::AbstractMatrix; optimizer = Hypatia.Optimizer{eltype(M)}, bu = opnorm(M), tol = 0.001)

Compute (an upper bound of) the structured singular value μ for diagonal Δ of real perturbations (other structures of Δ are not yet supported). M is assumed to be an (n × n × N_freq) array or a matrix.

See mussv for more details. See also mussv, mussv_tv, mussv_DG.

source
JuliaSimControls.mussv_tvMethod
mussv_tv(G::AbstractStateSpace, blocks; optimizer = Hypatia.Optimizer, bu = 20, tol = 0.001)

Compute (an upper bound of) the structured singular value margin when G is interconnected with time-varying uncertainty structure described by blocks. This value will in general be larger than the one returned by mussv, but can be used to guarantee stability for infinitely fast time-varying perturbations, i.e., if the return value is < 1, the system is stable no matter how fast the dynamics of the perturbations change.

G is an UncertainSS. The result will in general be more accurate if G is passed rather than a matrix M, unless a very dense grid around the critical frequency is used to calculate M.

Solves a convex LMI.

Arguments:

  • bu: Upper bound for bisection.
  • tol: tolerance.

See also mussv, mussv_tv, mussv_DG.

source
JuliaSimControls.mussv_tvMethod
mussv_tv(M::AbstractArray{<:Any, 3}; optimizer = Hypatia.Optimizer, bu = 20, tol = 0.001)
mussv_tv(G::UncertainSS; optimizer = Hypatia.Optimizer, bu0 = 20, tol = 0.001)

Compute (an upper bound of) the structured singular value μ for diagonal, complex and time-varying Δ(t) using constant (over frequency) matrix scalings. This value will in general be larger than the one returned by mussv, but can be used to guarantee stability for infinitely fast time-varying perturbations, i.e., if the return value is < 1, the system is stable no matter how fast the dynamics of the perturbations change.

M is assumed to be an (n × n × N_freq) array or a matrix. G is an UncertainSS. The result will in general be more accurate if G is passed rather than M, unless a very dense grid around the critical frequency is used to calculate M.

Solves a convex LMI.

Arguments:

  • bu: Upper bound for bisection.
  • tol: tolerance.

See also mussv, mussv_tv, mussv_DG.

Extended help

The normal μ is calculated by minimizing

\[\operatorname{min}_D ||D(\omega) M(\omega) D(\omega)^{-1}||\]

where a unique $D(\omega)$ is allowed for each frequency. However, in this problem, the $D$ scalings are constant over frequency, yielding a more conservative answer, with the additional bonus of being applicable for time-varying perturbations.

This strong guarantee can be used to prove stability of nonlinear systems by formulating them as linear systems with time-varying, norm-bounded perturbations. For such systems, mussv_tv < 1 is a sufficient condition for stability. See Boyd et al., "Linear Matrix Inequalities in System and Control Theory" for more details.

source
JuliaSimControls.hinfsyn_lmiMethod
K, γ = hinfsyn_lmi(P::ExtendedStateSpace;
    opt = Hypatia.Optimizer(), γrel = 1.01, ϵ = 1e-3, balance = true, perm = false,)

Computes an H-infinity optimal controller K for an extended plant P such that ||F_l(P, K)||∞ < γ (where F_l(P, K) = lft(P, K)) for the smallest possible γ given P. This implementation solves a convex LMI problem.

Arguments:

  • opt: A MathOptInterface solver instance.
  • γrel: If γrel > 1, the optimal γ will be found by γ iteration after which a controller will be designed for γ = γopt * γrel. It is often a good idea to design a slightly suboptimal controller, both for numerical reasons, but also since the optimal controller may contain very fast dynamics. If γrel → ∞, the computed controller will approach the 𝑯₂ optimal controller. Getting a mix between 𝑯∞ and 𝑯₂ properties is another reason to choose γrel > 1.
  • ϵ: A small offset to enforce strict LMI inequalities. This can be tuned if numerical issues arise.
  • balance: Perform a balancing transformation on P using ControlSystems.balance_statespace.
  • perm: If balance=true and perm=true, the balancing transform is allowed to permute the state vector. This is not allowed by default, but can improve the numerics slightly if allowed.

The Hypatia solver takes the following arguments https://github.com/chriscoey/Hypatia.jl/blob/42e4b10318570ea22adb39fec1c27d8684161cec/src/Solvers/Solvers.jl#L73
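
A minimal usage sketch, assuming JuliaSimControls, RobustAndOptimalControl and ControlSystems are loaded and using the weighting helpers makeweight and hinfpartition that also appear in the LQGProblem example later in this reference; the plant and weights are illustrative:

using JuliaSimControls, RobustAndOptimalControl, ControlSystems
G  = ss(tf(1, [10, 1]))            # Plant
WS = tf(1, [1, 1e-6])              # Sensitivity (performance) weight
WU = makeweight(1e-2, 0.1, 100)    # Control-effort weight
P  = hinfpartition(G, WS, WU, [])  # Form the extended plant
K, γ = hinfsyn_lmi(P)              # LMI-based H∞ synthesis with the default optimizer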

source
JuliaSimControls.build_controlled_dynamicsMethod

f, xout, pout = build_controlled_dynamics(sys, u; kwargs...)
f, obsf, xout, pout = build_controlled_dynamics(sys, u; y, z=nothing, w=nothing, kwargs...)

Build a function on the form (x,u,p,t) -> ẋ where

  • x are the differential states

  • u are control inputs

  • p are parameters

  • t is time

  • kwargs are sent to ModelingToolkit.build_function

  • f is a tuple of functions, one out-of-place and one in-place: (x,u,p,t) -> ẋ and (ẋ,x,u,p,t) -> nothing

  • xout contains the order of the states included in the dynamics

  • pout contains the order of the parameters included in the dynamics

If in addition to u, outputs y are also specified, an additional observed function tuple is returned.

Example

This example forms a feedback system and builds a function of the dynamics from the reference r to the output y.

using ModelingToolkit, Test

@variables t x(t)=0 y(t)=0 u(t)=0 r(t)=0
@parameters kp=1
D = Differential(t)

eqs = [
u ~ kp * (r - y)
D(x) ~ -x + u
y ~ x
]

@named sys = ODESystem(eqs, t)

f, obsf, xout, pout = JuliaSimControls.build_controlled_dynamics(sys, sys.r, sys.y; checkbounds=true)
x = zeros(length(xout)) # sys.x
u = [1] # r
p = [1] # kp
xd = f[1](x,u,p,1)
varmap = Dict(
sys.x => 1,
)

@test xd == ModelingToolkit.varmap_to_vars(varmap, xout)
source

ControlSystems and RobustAndOptimalControl

ControlSystems.DelayLtiSystemType
struct DelayLtiSystem{T, S <: Real} <: LTISystem

Represents an LTISystem with internal time-delay. See ?delay for a convenience constructor.

ControlSystems.DelayLtiSystemMethod
DelayLtiSystem{T, S}(sys::StateSpace, Tau::AbstractVector{S}=Float64[]) where {T <: Number, S <: Real}

Create a delayed system by specifying both the system and time-delay vector. NOTE: if you want to create a system with simple input or output delays, use the function delay(τ).

ControlSystems.SimulatorMethod
Simulator(P::StateSpace, u = (x,t) -> 0)

Used to simulate continuous-time systems. See function ?solve for additional info.

Usage:

using OrdinaryDiffEq, Plots
dt             = 0.1
tfinal         = 20
t              = 0:dt:tfinal
P              = ss(tf(1,[2,1])^2)
K              = 5
reference(x,t) = [1.]
s              = Simulator(P, reference)
x0             = [0.,0]
tspan          = (0.0,tfinal)
sol            = solve(s, x0, tspan, Tsit5())
plot(t, s.y(sol, t)[:], lab="Open loop step response")
ControlSystems.TransferFunctionMethod

F(s), F(omega, true), F(z, false)

Notation for frequency response evaluation.

  • F(s) evaluates the continuous-time transfer function F at s.
  • F(omega, true) evaluates the discrete-time transfer function F at exp(im*Ts*omega)
  • F(z,false) evaluates the discrete-time transfer function F at z
Base.stepMethod
y, t, x = step(sys[, tfinal])
y, t, x = step(sys[, t])

Calculate the step response of system sys. If the final time tfinal or time vector t is not provided, one is calculated based on the system pole locations. The return value is a structure of type SimResult that can be plotted or destructured as y, t, x = result.

y has size (ny, length(t), nu), x has size (nx, length(t), nu)
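
A brief usage sketch (illustrative plant and simulation horizon):

P = ss(tf(1, [1, 1]))
res = step(P, 10)   # Step response over 10 seconds
y, t, x = res       # Destructure the SimResult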

CommonSolve.solveFunction
sol = solve(s::AbstractSimulator, x0, tspan,  args...; kwargs...)

Simulate the system represented by s from initial state x0 over time span tspan = (t0,tf). args and kwargs are sent to the solve function from OrdinaryDiffEq, e.g., solve(s, x0, tspan, Tsit5(), reltol=1e-5) solves the problem with solver Tsit5() and relative tolerance 1e-5.

See also Simulator, lsim

ControlSystems.add_inputFunction
add_input(sys::AbstractStateSpace, B2::AbstractArray, D2 = 0)

Add inputs to sys by forming

\[\begin{aligned} x' &= Ax + \begin{bmatrix}B & B_2\end{bmatrix}u \\ y &= Cx + \begin{bmatrix}D & D_2\end{bmatrix}u \end{aligned}\]

If B2 is an integer it will be interpreted as an index and an input matrix containing a single 1 at the specified index will be used.

Example: The following example forms an innovation model that takes innovations as inputs

G   = ssrand(2,2,3, Ts=1)
K   = kalman(G, I(G.nx), I(G.ny))
sys = add_input(G, K)
ControlSystems.add_outputFunction
add_output(sys::AbstractStateSpace, C2::AbstractArray, D2 = 0)

Add outputs to sys by forming

\[\begin{aligned} x' &= Ax + Bu \\ y &= \begin{bmatrix}C \\ C_2\end{bmatrix}x + \begin{bmatrix}D \\ D_2\end{bmatrix}u \end{aligned}\]

If C2 is an integer it will be interpreted as an index and an output matrix containing a single 1 at the specified index will be used.
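
A brief sketch mirroring the add_input example above, adding an extra output that measures the third state (illustrative system):

G   = ssrand(1, 1, 3)   # Random system with 1 output, 1 input, 3 states
C2  = [0 0 1]           # Measure the third state as an additional output
sys = add_output(G, C2) # sys now has 2 outputs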

ControlSystems.appendMethod
append(systems::StateSpace...), append(systems::TransferFunction...)

Append systems in block diagonal form

ControlSystems.areMethod
are(::Continuous, A, B, Q, R)

Compute X, the solution to the continuous-time algebraic Riccati equation, defined as A'X + XA - (XB)R^-1(B'X) + Q = 0, where R is non-singular.

Uses MatrixEquations.arec.
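
A brief usage sketch with explicit matrices (illustrative values):

using LinearAlgebra
A = [0 1; 0 0]
B = reshape([0.0, 1.0], 2, 1)
Q = Matrix(1.0I, 2, 2)
R = Matrix(1.0I, 1, 1)
X = are(Continuous, A, B, Q, R) # Solves A'X + XA - XB*R^-1*B'X + Q = 0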

ControlSystems.areMethod
are(::Discrete, A, B, Q, R; kwargs...)

Compute X, the solution to the discrete-time algebraic Riccati equation, defined as A'XA - X - (A'XB)(B'XB + R)^-1(B'XA) + Q = 0, where Q>=0 and R>0

Uses MatrixEquations.ared. For keyword arguments, see the docstring of ControlSystems.MatrixEquations.ared

ControlSystems.balanceFunction
S, P, B = balance(A[, perm=true])

Compute a similarity transform T = S*P resulting in B = T\A*T such that the row and column norms of B are approximately equivalent. If perm=false, the transformation will only scale A using diagonal S, and not permute A (i.e., set P=I).

ControlSystems.balance_statespaceFunction
A, B, C, T = balance_statespace{S}(A::Matrix{S}, B::Matrix{S}, C::Matrix{S}, perm::Bool=false)
sys, T = balance_statespace(sys::StateSpace, perm::Bool=false)

Computes a balancing transformation T that attempts to scale the system so that the row and column norms of [T*A/T T*B; C/T 0] are approximately equal. If perm=true, the states in A are allowed to be reordered.

The inverse of sysb, T = balance_statespace(sys) is given by similarity_transform(sysb, T)

This is not the same as finding a balanced realization with equal and diagonal observability and reachability gramians, see balreal

ControlSystems.balrealMethod

sysr, G, T = balreal(sys::StateSpace)

Calculates a balanced realization of the system sys, such that the observability and reachability gramians of the balanced system are equal and diagonal G. T is the similarity transform between the old state x and the new state z such that Tz = x.

See also gram, baltrunc

Reference: Varga A., Balancing-free square-root algorithm for computing singular perturbation approximations.

ControlSystems.baltruncMethod
sysr, G, T = baltrunc(sys::StateSpace; atol = √ϵ, rtol=1e-3, n = nothing, residual = false)

Reduces the state dimension by calculating a balanced realization of the system sys, such that the observability and reachability gramians of the balanced system are equal and diagonal G, and truncating it to order n. If n is not provided, it's chosen such that all states corresponding to singular values less than atol and less than rtol*σmax are removed.

T is the similarity transform between the old state x and the new state z such that Tz = x.

If residual = true, matched static gain is achieved through "residualization", i.e., setting

\[0 = A_{21}x_{1} + A_{22}x_{2} + B_{2}u\]

where indices 1/2 correspond to the remaining/truncated states respectively.

See also gram, balreal

Glad, Ljung, Reglerteori: Flervariabla och Olinjära metoder

ControlSystems.bodeMethod
mag, phase, w = bode(sys[, w])

Compute the magnitude and phase parts of the frequency response of system sys at frequencies w. See also bodeplot

mag and phase have size (length(w), ny, nu)

ControlSystems.bodeplotFunction
fig = bodeplot(sys, args...)
bodeplot(LTISystem[sys1, sys2...], args...; plotphase=true, kwargs...)

Create a Bode plot of the LTISystem(s). A frequency vector w can be optionally provided. To change the Magnitude scale see setPlotScale(str)

If hz=true, the plot x-axis will be displayed in Hertz, the input frequency vector is still treated as rad/s.

kwargs is sent as argument to RecipesBase.plot.

ControlSystems.bodevMethod

bodev(sys::LTISystem, w::AbstractVector; )

For use with SISO systems where it acts the same as bode but with the extra dimensions removed in the returned values.

ControlSystems.bodevMethod

bodev(sys::LTISystem; )

For use with SISO systems where it acts the same as bode but with the extra dimensions removed in the returned values.

ControlSystems.c2dFunction
sysd = c2d(sys::AbstractStateSpace{<:Continuous}, Ts, method=:zoh; w_prewarp=0)
Gd = c2d(G::TransferFunction{<:Continuous}, Ts, method=:zoh)

Convert the continuous-time system sys into a discrete-time system with sample time Ts, using the specified method (:zoh, :foh, :fwdeuler or :tustin). Note that the forward-Euler method generally requires the sample time to be very small relative to the time constants of the system.

method = :tustin performs a bilinear transform with prewarp frequency w_prewarp.

  • w_prewarp: Frequency (rad/s) for pre-warping when using the Tustin method, has no effect for other methods.

See also c2d_x0map
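
A brief usage sketch (illustrative plant and sample time):

P  = ss(tf(1, [1, 1]))                     # Continuous-time plant
Pd = c2d(P, 0.1)                           # Zero-order hold, Ts = 0.1
Pt = c2d(P, 0.1, :tustin; w_prewarp=1.0)   # Bilinear transform with prewarping at 1 rad/s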

ControlSystems.c2d_poly2polyMethod
c2d_poly2poly(ro, Ts)

returns the polynomial coefficients in discrete time given polynomial coefficients in continuous time

ControlSystems.c2d_roots2polyMethod
c2d_roots2poly(ro, Ts)

returns the polynomial coefficients in discrete time given a vector of roots in continuous time

ControlSystems.c2d_x0mapFunction
sysd, x0map = c2d_x0map(sys::AbstractStateSpace{<:Continuous}, Ts, method=:zoh; w_prewarp=0)

Returns the discretization sysd of the system sys and a matrix x0map that transforms the initial conditions to the discrete domain by x0_discrete = x0map*[x0; u0]

See c2d for further details.

ControlSystems.covarMethod

P = covar(sys, W)

Calculate the stationary covariance P = E[y(t)y(t)'] of the output y of a StateSpace model sys driven by white Gaussian noise w with covariance E[w(t)w(τ)]=W*δ(t-τ) (δ is the Dirac delta).

Remark: If sys is unstable then the resulting covariance is a matrix of Infs. Entries corresponding to direct feedthrough (D*W*D' .!= 0) will equal Inf for continuous-time systems.

ControlSystems.ctrbMethod

ctrb(A, B) or ctrb(sys)

Compute the controllability matrix for the system described by (A, B) or sys.

Note that checking for controllability by computing the rank from ctrb is not the most numerically accurate way, a better method is checking if gram(sys, :c) is positive definite.

ControlSystems.d2cFunction
d2c(sys::AbstractStateSpace{<:Discrete}, method::Symbol = :zoh; w_prewarp=0)

Convert a discrete-time system to a continuous-time system, assuming that the discrete-time system was discretized using method. Available methods are :zoh and :fwdeuler.

  • w_prewarp: Frequency for pre-warping when using the Tustin method, has no effect for other methods.
ControlSystems.dabMethod
X,Y = dab(A,B,C)

DAB Solves the Diophantine-Aryabhatta-Bezout identity

AX + BY = C, where A, B, C, X and Y are polynomials and deg Y = deg A - 1.

See Computer-Controlled Systems: Theory and Design, Third Edition Karl Johan Åström, Björn Wittenmark

ControlSystems.dampMethod
Wn, zeta, ps = damp(sys)

Compute the natural frequencies, Wn, and damping ratios, zeta, of the poles, ps, of sys

ControlSystems.dampreportMethod
dampreport(sys)

Display a report of the poles, damping ratio, natural frequency, and time constant of the system sys

ControlSystems.dcgainFunction
dcgain(sys, ϵ=0)

Compute the dcgain of system sys, i.e., G(0) for continuous-time systems and G(1) for discrete-time systems.

ϵ can be provided to evaluate the dcgain with a small perturbation into the stability region of the complex plane.

ControlSystems.delayMethod
delay(tau)
delay(T::Type{<:Number}, tau)

Create a pure time delay of length τ of type T.

The type T defaults to promote_type(Float64, typeof(tau))

Example:

Create an LTI system with an input delay of L

L = 1
tf(1, [1, 1])*delay(L)
s = tf("s")
tf(1, [1, 1])*exp(-s*L) # Equivalent to the version above
ControlSystems.delaymarginMethod
dₘ = delaymargin(G::LTISystem)

Return the delay margin, dₘ. For discrete-time systems, the delay margin is normalized by the sample time, i.e., the value represents the margin in number of sample times. Only supports SISO systems.

ControlSystems.diagonalizeMethod

dsys = diagonalize(s::StateSpace, digits=12)

Diagonalizes the system such that the A-matrix is diagonal.

ControlSystems.evalfrMethod
evalfr(sys, x)

Evaluate the transfer function of the LTI system sys at the complex number s=x (continuous-time) or z=x (discrete-time).

For many values of x, use freqresp instead.

ControlSystems.feedbackMethod
feedback(sys1::AbstractStateSpace, sys2::AbstractStateSpace;
         U1=:, Y1=:, U2=:, Y2=:, W1=:, Z1=:, W2=Int[], Z2=Int[],
         Wperm=:, Zperm=:, pos_feedback::Bool=false)

Basic use feedback(sys1, sys2) forms the feedback interconnection

           ┌──────────────┐
◄──────────┤     sys1     │◄──── Σ ◄──────
    │      │              │      │
    │      └──────────────┘      -1
    │                            |
    │      ┌──────────────┐      │
    └─────►│     sys2     ├──────┘
           │              │
           └──────────────┘

Advanced use feedback also supports more flexible use according to the figure below

              ┌──────────────┐
      z1◄─────┤     sys1     │◄──────w1
 ┌─── y1◄─────┤              │◄──────u1 ◄─┐
 │            └──────────────┘            │
 │                                        α
 │            ┌──────────────┐            │
 └──► u2─────►│     sys2     ├───────►y2──┘
      w2─────►│              ├───────►z2
              └──────────────┘

U1, W1 specifies the indices of the input signals of sys1 corresponding to u1 and w1 Y1, Z1 specifies the indices of the output signals of sys1 corresponding to y1 and z1 U2, W2, Y2, Z2 specifies the corresponding signals of sys2

Specify Wperm and Zperm to reorder the inputs (corresponding to [w1; w2]) and outputs (corresponding to [z1; z2]) in the resulting statespace model.

Negative feedback (α = -1) is the default. Specify pos_feedback=true for positive feedback (α = 1).

See also lft, starprod.

See Zhou, Doyle, Glover (1996) for similar (somewhat less symmetric) formulas.

ControlSystems.feedbackMethod
feedback(sys)
feedback(sys1, sys2)

For a general LTI-system, feedback forms the negative feedback interconnection

>-+ sys1 +-->
  |      |
 (-)sys2 +

If no second system is given, negative identity feedback is assumed

ControlSystems.feedback2dofMethod
feedback2dof(P,R,S,T)
feedback2dof(B,A,R,S,T)
  • Return BT/(AR+ST) where B and A are the numerator and denominator polynomials of P respectively
  • Return BT/(AR+ST)
ControlSystems.feedback2dofMethod
feedback2dof(P::TransferFunction, C::TransferFunction, F::TransferFunction)

Return the transfer function P(F+C)/(1+PC) which is the closed-loop system with process P, controller C and feedforward filter F from reference to control signal (by-passing C).

         +-------+
         |       |
   +----->   F   +----+
   |     |       |    |
   |     +-------+    |
   |     +-------+    |    +-------+
r  |  -  |       |    |    |       |    y
+--+----->   C   +----+---->   P   +---+-->
      |  |       |         |       |   |
      |  +-------+         +-------+   |
      |                                |
      +--------------------------------+
ControlSystems.freqresp!Method
freqresp!(R::Array{T, 3}, sys::LTISystem, w_vec::AbstractVector{<:Real})

In-place version of freqresp that takes a pre-allocated array R of size (ny, nu, nw).

ControlSystems.freqrespMethod
sys_fr = freqresp(sys, w)

Evaluate the frequency response of a linear system

w -> C*((im*w*I - A)^-1)*B + D

of system sys over the frequency vector w.

ControlSystems.freqrespvMethod

freqrespv(G::AbstractMatrix, w_vec::AbstractVector{<:Real}; )

For use with SISO systems where it acts the same as freqresp but with the extra dimensions removed in the returned values.

ControlSystems.freqrespvMethod

freqrespv(G::Number, w_vec::AbstractVector{<:Real}; )

For use with SISO systems where it acts the same as freqresp but with the extra dimensions removed in the returned values.

ControlSystems.freqrespvMethod

freqrespv(sys::LTISystem, w_vec::AbstractVector{W}; )

For use with SISO systems where it acts the same as freqresp but with the extra dimensions removed in the returned values.

ControlSystems.gangoffourMethod
S, PS, CS, T = gangoffour(P, C; minimal=true)
gangoffour(P::AbstractVector, C::AbstractVector; minimal=true)

Given a transfer function describing the plant P and a transfer function describing the controller C, computes the four transfer functions in the Gang-of-Four.

  • S = 1/(1+PC) Sensitivity function
  • PS = P/(1+PC) Load disturbance to measurement signal
  • CS = C/(1+PC) Measurement noise to control signal
  • T = PC/(1+PC) Complementary sensitivity function

If minimal=true, minreal will be applied to all transfer functions. Only supports SISO systems

ControlSystems.gangoffourplotMethod
fig = gangoffourplot(P::LTISystem, C::LTISystem; minimal=true, plotphase=false, kwargs...)
gangoffourplot(P::Union{Vector, LTISystem}, C::Vector; minimal=true, plotphase=false, kwargs...)

Gang-of-Four plot. kwargs is sent as argument to RecipesBase.plot.

ControlSystems.gangofsevenMethod
S, PS, CS, T, RY, RU, RE = gangofseven(P,C,F)

Given transfer functions describing the plant P, the controller C and a feed forward block F, computes the four transfer functions in the Gang-of-Four and the transfer functions corresponding to the feed forward.

  • S = 1/(1+PC) Sensitivity function
  • PS = P/(1+PC)
  • CS = C/(1+PC)
  • T = PC/(1+PC) Complementary sensitivity function
  • RY = PCF/(1+PC)
  • RU = CF/(1+P*C)
  • RE = F/(1+P*C)

Only supports SISO systems

ControlSystems.gramMethod
gram(sys, opt; kwargs...)

Compute the grammian of system sys. If opt is :c, computes the controllability grammian. If opt is :o, computes the observability grammian.

See also grampd For keyword arguments, see grampd.

ControlSystems.grampdMethod
U = grampd(sys, opt; kwargs...)

Return a Cholesky factor U of the grammian of system sys. If opt is :c, computes the controllability grammian G = U*U'. If opt is :o, computes the observability grammian G = U'U.

Obtain a Cholesky object by Cholesky(U) for observability grammian

Uses MatrixEquations.plyapc/plyapd. For keyword arguments, see the docstring of ControlSystems.MatrixEquations.plyapc/plyapd

ControlSystems.hinfnormMethod
Ninf, ω_peak = hinfnorm(sys; tol=1e-6)

Compute the H∞ norm Ninf of the LTI system sys, together with a frequency ω_peak at which the gain Ninf is achieved.

Ninf := sup_ω σ_max[sys(iω)] if sys is stable (σ_max = largest singular value), and Ninf := Inf if sys is unstable.

tol is an optional keyword argument for the desired relative accuracy for the computed H∞ norm (not an absolute certificate).

sys is first converted to a state space model if needed.

The continuous-time L∞ norm computation implements the 'two-step algorithm' in:
N.A. Bruinsma and M. Steinbuch, 'A fast algorithm to compute the H∞-norm of a transfer function matrix', Systems and Control Letters (1990), pp. 287-293.

For the discrete-time version, see:
P. Bongers, O. Bosgra, M. Steinbuch, 'L∞-norm calculation for generalized state space systems in continuous and discrete time', American Control Conference, 1991.

See also linfnorm.

ControlSystems.impulseMethod
y, t, x = impulse(sys[, tfinal])
y, t, x = impulse(sys[, t])

Calculate the impulse response of system sys. If the final time tfinal or time vector t is not provided, one is calculated based on the system pole locations. The return value is a structure of type SimResult that can be plotted or destructured as y, t, x = result.

y has size (ny, length(t), nu), x has size (nx, length(t), nu)

ControlSystems.innovation_formMethod
sysi = innovation_form(sys, R1, R2[, R12])
sysi = innovation_form(sys; sysw=I, syse=I, R1=I, R2=I)

Takes a system

x' = Ax + Bu + w ~ R1
y  = Cx + Du + e ~ R2

and returns the system

x' = Ax + Kv
y  = Cx + v

where v is the innovation sequence.

If sysw (syse) is given, the covariance resulting in filtering noise with R1 (R2) through sysw (syse) is used as covariance.

See Stochastic Control, Chapter 4, Åström

ControlSystems.innovation_formMethod
sysi = innovation_form(sys, K)

Takes a system

x' = Ax + Bu + Kv
y  = Cx + Du + v

and returns the system

x' = Ax + Kv
y  = Cx + v

where v is the innovation sequence.

See Stochastic Control, Chapter 4, Åström

ControlSystems.isproperMethod

isproper(tf)

Returns true if the TransferFunction is proper. This means that order(den) >= order(num).

ControlSystems.kalmanMethod
kalman(Continuous, A, C, R1, R2)
kalman(Discrete, A, C, R1, R2)
kalman(sys, R1, R2)

Calculate the optimal Kalman gain

The args...; kwargs... are sent to the Riccati solver, allowing specification of cross-covariance etc. See ?MatrixEquations.arec/ared for more help.

ControlSystems.laglinkMethod
laglink(a, M; Ts=0)

Returns a phase retarding link, the rule of thumb a = 0.1ωc guarantees less than 6 degrees phase margin loss. The bode curve will go from M, bend down at a/M and level out at 1 for frequencies > a

ControlSystems.leadlinkMethod
leadlink(b, N, K; Ts=0)

Returns a phase advancing link, the top of the phase curve is located at ω = b√(N) where the link amplification is K√(N) The bode curve will go from K, bend up at b and level out at KN for frequencies > bN

The phase advance at ω = b√(N) can be plotted as a function of N with leadlinkcurve()

Values of N < 1 will give a phase retarding link.

See also leadlinkat laglink
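
A brief sketch of a lead-lag compensator built from these links; the numerical values are illustrative only:

ωc = 5                                         # Intended crossover frequency
C  = laglink(0.1ωc, 10) * leadlink(ωc/2, 4, 1) # Lag link for low-frequency gain, lead link for phase advance
bodeplot(C)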

ControlSystems.leadlinkatMethod
leadlinkat(ω, N, K; Ts=0)

Returns a phase advancing link, the top of the phase curve is located at ω where the link amplification is K√(N) The bode curve will go from K, bend up at ω/√(N) and level out at KN for frequencies > ω√(N)

The phase advance at ω can be plotted as a function of N with leadlinkcurve()

Values of N < 1 will give a phase retarding link.

See also leadlink laglink

ControlSystems.leadlinkcurveFunction
leadlinkcurve(start=1)

Plot the phase advance as a function of N for a lead link (phase advance link) If an input argument s is given, the curve is plotted from s to 10, else from 1 to 10.

See also leadlink, leadlinkat

ControlSystems.lftFunction
lft(G, Δ, type=:l)

Lower and upper linear fractional transformation between systems G and Δ.

Specify :l for lower LFT, and :u for upper LFT.

G must have more inputs and outputs than Δ has outputs and inputs.

For details, see Chapter 9.1 in Zhou, K. and JC Doyle. Essentials of robust control, Prentice hall (NJ), 1998

ControlSystems.linfnormMethod
Ninf, ω_peak = linfnorm(sys; tol=1e-6)

Compute the L∞ norm Ninf of the LTI system sys, together with a frequency ω_peak at which the gain Ninf is achieved.

Ninf := sup_ω σ_max[sys(iω)] (σ_max denotes the largest singular value)

tol is an optional keyword argument representing the desired relative accuracy for the computed L∞ norm (this is not an absolute certificate however).

sys is first converted to a state space model if needed.

The continuous-time L∞ norm computation implements the 'two-step algorithm' in:
N.A. Bruinsma and M. Steinbuch, 'A fast algorithm to compute the H∞-norm of a transfer function matrix', Systems and Control Letters (1990), pp. 287-293.

For the discrete-time version, see:
P. Bongers, O. Bosgra, M. Steinbuch, 'L∞-norm calculation for generalized state space systems in continuous and discrete time', American Control Conference, 1991.

See also hinfnorm.

ControlSystems.loopshapingPIMethod
kp,ki,C = loopshapingPI(P,ωp; ϕl,rl, phasemargin, doplot = false)

Selects the parameters of a PI-controller such that the Nyquist curve of P at the frequency ωp is moved to rl exp(i ϕl)

If phasemargin is supplied, ϕl is selected such that the curve is moved to an angle of phasemargin - 180 degrees

If no rl is given, the magnitude of the curve at ωp is kept the same and only the phase is affected, the same goes for ϕl if no phasemargin is given.

Set doplot = true to plot the gangoffourplot and nyquistplot of the system.

See also pidplots, stabregionPID
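
A brief usage sketch (illustrative process and design frequency):

P = tf(1, [1, 1, 1])                                # Process to control
kp, ki, C = loopshapingPI(P, 1.0; phasemargin = 60) # Move the Nyquist curve at ω = 1 rad/s to the angle corresponding to a 60° phase margin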

ControlSystems.lqrMethod
lqr(sys, Q, R)
lqr(Continuous, A, B, Q, R, args...; kwargs...)
lqr(Discrete, A, B, Q, R, args...; kwargs...)

Calculate the optimal gain matrix K for the state-feedback law u = -K*x that minimizes the cost function:

J = integral(x'Qx + u'Ru, 0, inf) for the continuous-time model dx = Ax + Bu. J = sum(x'Qx + u'Ru, 0, inf) for the discrete-time model x[k+1] = Ax[k] + Bu[k].

Solve the LQR problem for state-space system sys. Works for both discrete and continuous time systems.

The args...; kwargs... are sent to the Riccati solver, allowing specification of cross-covariance etc. See ?MatrixEquations.arec / ared for more help.

Examples

Continuous time

using LinearAlgebra # For identity matrix I
using Plots
A = [0 1; 0 0]
B = [0; 1]
C = [1 0]
sys = ss(A,B,C,0)
Q = I
R = I
L = lqr(sys,Q,R) # lqr(Continuous,A,B,Q,R) can also be used

u(x,t) = -L*x # Form control law,
t=0:0.1:5
x0 = [1,0]
y, t, x, uout = lsim(sys,u,t,x0=x0)
plot(t,x', lab=["Position" "Velocity"], xlabel="Time [s]")

Discrete time

using LinearAlgebra # For identity matrix I
using Plots
Ts = 0.1
A = [1 Ts; 0 1]
B = [0;1]
C = [1 0]
sys = ss(A, B, C, 0, Ts)
Q = I
R = I
L = lqr(Discrete, A,B,Q,R) # lqr(sys,Q,R) can also be used

u(x,t) = -L*x # Form control law,
t=0:Ts:5
x0 = [1,0]
y, t, x, uout = lsim(sys,u,t,x0=x0)
plot(t,x', lab=["Position"  "Velocity"], xlabel="Time [s]")
ControlSystems.lsimMethod
result = lsim(sys, u[, t]; x0, method])
result = lsim(sys, u::Function, t; x0, method)

Calculate the time response of system sys to input u. If x0 is omitted, a zero vector is used.

The result structure contains the fields y, t, x, u and can be destructured automatically by iteration, e.g.,

y, t, x, u = result

result::SimResult can also be plotted directly:

plot(result, plotu=true, plotx=false)

y, x, u have time in the second dimension. Initial state x0 defaults to zero.

Continuous time systems are simulated using an ODE solver if u is a function. If u is an array, the system is discretized (with method=:zoh by default) before simulation. For a lower level interface, see ?Simulator and ?solve

u can be a function or a matrix/vector of precalculated control signals. If u is a function, then u(x,i) (u(x,t)) is called to calculate the control signal every iteration (time instance used by solver). This can be used to provide a control law such as state feedback u(x,t) = -L*x calculated by lqr. To simulate a unit step, use (x,i)-> 1, for a ramp, use (x,i)-> i*Ts, for a step at t=5, use (x,i)-> (i*Ts >= 5) etc.

Usage example:

using LinearAlgebra # For identity matrix I
using Plots

A = [0 1; 0 0]
B = [0;1]
C = [1 0]
sys = ss(A,B,C,0)
Q = I
R = I
L = lqr(sys,Q,R)

u(x,t) = -L*x # Form control law,
t=0:0.1:5
x0 = [1,0]
y, t, x, uout = lsim(sys,u,t,x0=x0)
plot(t,x', lab=["Position" "Velocity"], xlabel="Time [s]")
ControlSystems.lsimMethod
lsim(sys::HammersteinWienerSystem, u, t::AbstractArray{<:Real}; x0=fill(0.0, nstates(sys)), alg=Tsit5(), abstol=1e-6, reltol=1e-6, kwargs...)

Simulate system sys, over time t, using input signal u, with initial state x0, using method alg .

Arguments:

t: Has to be an AbstractVector with equidistant time samples (t[i] - t[i-1] constant)
u: Function to determine control signal uₜ at a time t, on any of the following forms:
    Can be a constant Number or Vector, interpreted as uₜ .= u, or
    Function uₜ .= u(x, t), or
    In-place function u(uₜ, x, t). (Slightly more efficient)
alg, abstol, reltol and kwargs...: are sent to DelayDiffEq.solve.

Returns: times t, and y and x at those times.

ControlSystems.lsimMethod
`y, t, x = lsim(sys::DelayLtiSystem, u, t::AbstractArray{<:Real}; x0=fill(0.0, nstates(sys)), alg=MethodOfSteps(Tsit5()), abstol=1e-6, reltol=1e-6, kwargs...)`

Simulate system `sys`, over time `t`, using input signal `u`, with initial state `x0`, using method `alg` .

Arguments:

`t`: Has to be an `AbstractVector` with equidistant time samples (`t[i] - t[i-1]` constant)
`u`: Function to determine control signal `ut` at a time `t`, on any of the following forms:
    Can be a constant `Number` or `Vector`, interpreted as `ut .= u` , or
    Function `ut .= u(t)`, or
    In-place function `u(ut, t)`. (Slightly more efficient)
`alg, abstol, reltol` and `kwargs...`: are sent to `DelayDiffEq.solve`.

Returns: times `t`, and `y` and `x` at those times.
ControlSystems.marginMethod
ωgₘ, gₘ, ωϕₘ, ϕₘ = margin(sys::LTISystem, w::Vector; full=false, allMargins=false)

returns frequencies for gain margins, gain margins, frequencies for phase margins, phase margins

If !allMargins, return only the smallest margin

If full, also return fullPhase. See also delaymargin and RobustAndOptimalControl.diskmargin

ControlSystems.markovparamMethod
markovparam(sys, n)

Compute the nth markov parameter of state-space system sys. This is defined as the following:

h(0) = D

h(n) = C*A^(n-1)*B

ControlSystems.minrealFunction

tf = minreal(tf::TransferFunction, eps=sqrt(eps()))

Create a minimal representation of each transfer function in tf by cancelling poles and zeros. This will promote the system to an appropriate numeric type.

ControlSystems.minrealMethod
minreal(sys::T; fast=false, kwargs...)

Minimal realisation algorithm from P. Van Dooren, The generalized eigenstructure problem in linear system theory, IEEE Transactions on Automatic Control

For information about the options, see ?ControlSystems.MatrixPencils.lsminreal

See also sminreal, which is both numerically exact and substantially faster than minreal, but with a much more limited potential in removing non-minimal dynamics.

ControlSystems.nicholsplotFunction
fig = nicholsplot{T<:LTISystem}(systems::Vector{T}, w::AbstractVector; kwargs...)

Create a Nichols plot of the LTISystem(s). A frequency vector w can be optionally provided.

Keyword arguments:

text = true
Gains = [12, 6, 3, 1, 0.5, -0.5, -1, -3, -6, -10, -20, -40, -60]
pInc = 30
sat = 0.4
val = 0.85
fontsize = 10

pInc determines the increment in degrees between phase lines.

sat ∈ [0,1] determines the saturation of the gain lines

val ∈ [0,1] determines the brightness of the gain lines

Additional keyword arguments are sent to the function plotting the systems and can be used to specify colors, line styles etc. using regular RecipesBase.jl syntax

This function is based on code subject to the two-clause BSD licence Copyright 2011 Will Robertson Copyright 2011 Philipp Allgeuer

ControlSystems.nonlinearityMethod
nonlinearity(T, f)

Create a pure nonlinearity.

NOTE: The nonlinear functionality in ControlSystems.jl is currently experimental and subject to breaking changes not respecting semantic versioning. Use at your own risk.

The type T defaults to promote_type(Float64, typeof(f))

Example:

Create an LTI system with an input nonlinearity of f

tf(1, [1, 1])*nonlinearity(x->clamp(x, -1, 1))

See also predefined nonlinearities saturation, offset.

ControlSystems.nyquistMethod
re, im, w = nyquist(sys[, w])

Compute the real and imaginary parts of the frequency response of system sys at frequencies w. See also nyquistplot

re and im have size (length(w), ny, nu)

ControlSystems.nyquistplotFunction
fig = nyquistplot(sys; Ms_circles=Float64[], unit_circle=false, hz = false, kwargs...)
nyquistplot(LTISystem[sys1, sys2...]; Ms_circles=Float64[], unit_circle=false, kwargs...)

Create a Nyquist plot of the LTISystem(s). A frequency vector w can be optionally provided.

  • unit_circle: if the unit circle should be displayed
  • Ms_circles: draw circles corresponding to given levels of sensitivity (circles around -1 with radii 1/Ms). Ms_circles can be supplied as a number or a vector of numbers. A design staying outside such a circle has a phase margin of at least 2asin(1/(2Ms)) rad and a gain margin of at least Ms/(Ms-1).

If hz=true, the hover information will be displayed in Hertz, the input frequency vector is still treated as rad/s.

kwargs is sent as argument to plot.

ControlSystems.nyquistvMethod

nyquistv(sys::LTISystem, w::AbstractVector; )

For use with SISO systems where it acts the same as nyquist but with the extra dimensions removed in the returned values.

ControlSystems.nyquistvMethod

nyquistv(sys::LTISystem; )

For use with SISO systems where it acts the same as nyquist but with the extra dimensions removed in the returned values.

ControlSystems.observer_controllerMethod
cont = observer_controller(sys, L::AbstractMatrix, K::AbstractMatrix)

Return the observer_controller cont that is given by ss(A - B*L - K*C + K*D*L, K, L, 0)

Such that feedback(sys, cont) produces a closed-loop system with eigenvalues given by A-KC and A-BL.

Arguments:

  • sys: Model of system
  • L: State-feedback gain u = -Lx
  • K: Observer gain

See also observer_predictor and innovation_form.
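
A brief sketch combining lqr and kalman (documented elsewhere in this reference) to form the controller; plant and weights are illustrative:

using ControlSystems, LinearAlgebra
sys  = ss([0 1; 0 0], [0; 1], [1 0], 0) # Double integrator
L    = lqr(sys, I(2), I(1))             # State-feedback gain
K    = kalman(sys, I(2), I(1))          # Observer gain
cont = observer_controller(sys, L, K)
Gcl  = feedback(sys, cont)              # Closed loop with eigenvalues of A-BL and A-KC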

ControlSystems.observer_predictorMethod
observer_predictor(sys::AbstractStateSpace, K; h::Int = 1)
observer_predictor(sys::AbstractStateSpace, R1, R2[, R12])

If sys is continuous, return the observer predictor system

\[\begin{aligned} x̂' &= (A - KC)x̂ + (B-KD)u + Ky \\ ŷ &= Cx̂ + Du \end{aligned}\]

with the input equation [B-KD K] * [u; y]

If sys is discrete, the prediction horizon h may be specified, in which case measurements up to and including time t-h and inputs up to and including time t are used to predict y(t).

If covariance matrices R1, R2 are given, the kalman gain K is calculated using kalman.

See also innovation_form and observer_controller.

ControlSystems.obsvFunction
obsv(A, C, n=size(A,1))
obsv(sys, n=sys.nx)

Compute the observability matrix with n rows for the system described by (A, C) or sys. Providing the optional n > sys.nx returns an extended observability matrix.

Note that checking for observability by computing the rank from obsv is not the most numerically accurate way, a better method is checking if gram(sys, :o) is positive definite.

ControlSystems.padeMethod
pade(G::DelayLtiSystem, N)

Approximate all time-delays in G by Padé approximations of degree N.

ControlSystems.padeMethod
pade(τ::Real, N::Int)

Compute the Nth order Padé approximation of a time-delay of length τ.

ControlSystems.parallelMethod
parallel(sys1::LTISystem, sys2::LTISystem)

Connect systems in parallel, equivalent to sys2+sys1

ControlSystems.pidMethod
C = pid(; kp=0, ki=0, kd=0, time=false, series=false)

Calculates and returns a PID controller on transfer function form.

  • time indicates whether or not the parameters are given as gains (default) or as time constants
  • series indicates whether or not the series form or parallel form (default) is desired
ControlSystems.pidplotsMethod
pidplots(P, args...; kps=0, kis=0, kds=0, time=false, series=false, ω=0)

Plots interesting figures related to closing the loop around process P with a PID controller. Send in a bunch of PID-parameters in any of the vectors kps, kis, kds. The vectors must be the same length.

  • time indicates whether or not the parameters are given as gains (default) or as time constants
  • series indicates whether or not the series form or parallel form (default) is desired

Available plots are :gof for Gang of four, :nyquist, :controller for a bode plot of the controller TF and :pz for pole-zero maps

One can also supply a frequency vector ω to be used in Bode and Nyquist plots

See also loopshapingPI, stabregionPID

ControlSystems.placeFunction
place(A, B, p, opt=:c)
place(sys::StateSpace, p, opt=:c)

Calculate the gain matrix K such that A - BK has eigenvalues p.

place(A, C, p, opt=:o)
place(sys::StateSpace, p, opt=:o)

Calculate the observer gain matrix L such that A - LC has eigenvalues p.

Uses Ackermann's formula. Currently handles only SISO systems.
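
A brief usage sketch for both the state-feedback and observer variants (illustrative pole locations):

A = [0 1; 0 0]
B = reshape([0.0, 1.0], 2, 1)
C = [1.0 0.0]
K = place(A, B, [-2.0, -3.0])     # State-feedback gain: eigvals(A - B*K) ≈ [-3, -2]
L = place(A, C, [-4.0, -5.0], :o) # Observer gain: eigvals(A - L*C) ≈ [-5, -4]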

ControlSystems.placePIMethod
piparams, C = placePI(P, ω₀, ζ; form=:standard)

Selects the parameters of a PI-controller such that the poles of closed loop between P and C are placed to match the poles of s^2 + 2ζω₀s + ω₀^2.

The form keyword allows you to choose which form the PI parameters should be returned on.

  • :standard - Kp*(1 + 1/Ti/s)
  • :series - Kc*(1 + 1/τi/s)
  • :parallel - Kp + Ki/s
  • :Ti - Kp + 1/(s*Ti) (non-standard form sometimes used in industry)

piparams is a named tuple with the controller parameters.

C is the transfer function of the controller.
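
A brief usage sketch (illustrative process and design parameters):

P = tf(1, [1, 1])                                  # First-order process
params, C = placePI(P, 2.0, 0.7; form = :parallel) # Match closed-loop poles of s^2 + 2*0.7*2.0*s + 2.0^2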

ControlSystems.pzmapFunction
fig = pzmap(fig, system, args...; kwargs...)

Create a pole-zero map of the LTISystem(s) in figure fig, args and kwargs will be sent to the scatter plot command.

ControlSystems.reduce_sysMethod
reduce_sys(A::AbstractMatrix, B::AbstractMatrix, C::AbstractMatrix, D::AbstractMatrix, meps::AbstractFloat)

Implements REDUCE in the Emami-Naeini & Van Dooren paper. Returns transformed A, B, C, D matrices. These are empty if there are no zeros.

ControlSystems.relative_gain_arrayMethod
relative_gain_array(A::AbstractMatrix; tol = 1.0e-15)

Reference: "On the Relative Gain Array (RGA) with Singular and Rectangular Matrices" Jeffrey Uhlmann https://arxiv.org/pdf/1805.10312.pdf

ControlSystems.relative_gain_arrayMethod
relative_gain_array(G, w::AbstractVector)
relative_gain_array(G, w::Number)

Calculate the relative gain array of G at frequencies w. G(iω) .* pinv(transpose(G(iω)))

The RGA can be used to find input-output pairings for MIMO control using individually tuned loops. Pair the inputs and outputs such that the RGA(ωc) at the crossover frequency becomes as close to diagonal as possible. Avoid pairings such that RGA(0) contains negative diagonal elements.

  • The sum of the absolute values of the entries in the RGA is a good measure of the "true condition number" of G, the best condition number that can be achieved by input/output scaling of G, -Glad, Ljung.
  • The RGA is invariant to input/output scaling of G.
  • If the RGA contains large entries, the system may be sensitive to model errors, -Skogestad, "Multivariable Feedback Control: Analysis and Design":
    • Uncertainty in the input channels (diagonal input uncertainty). Plants with large RGA-elements around the crossover frequency are fundamentally difficult to control because of sensitivity to input uncertainty (e.g. caused by uncertain or neglected actuator dynamics). In particular, decouplers or other inverse-based controllers should not be used for plants with large RGA-elements.
    • Element uncertainty. Large RGA-elements imply sensitivity to element-by-element uncertainty. However, this kind of uncertainty may not occur in practice due to physical couplings between the transfer function elements. Therefore, diagonal input uncertainty (which is always present) is usually of more concern for plants with large RGA-elements.

The relative gain array is computed using the unit-consistent (UC) generalized inverse. Reference: "On the Relative Gain Array (RGA) with Singular and Rectangular Matrices", Jeffrey Uhlmann, https://arxiv.org/pdf/1805.10312.pdf

ControlSystems.rgaplotFunction
rgaplot(sys, args...; hz=false)
rgaplot(LTISystem[sys1, sys2...], args...; hz=false)

Plot the relative-gain array entries of the LTISystem(s). A frequency vector w can be optionally provided.

If hz=true, the plot x-axis will be displayed in Hertz, the input frequency vector is still treated as rad/s.

kwargs is sent as argument to Plots.plot.

ControlSystems.rlocusplotFunction
rlocusplot(P::LTISystem, K)

Computes and plots the root locus of the SISO LTISystem P with a negative feedback loop and feedback gains K, if K is not provided, range(1e-6,stop=500,length=10000) is used. If OrdinaryDiffEq.jl is installed and loaded by the user (using OrdinaryDiffEq), rlocusplot will use an adaptive step-size algorithm to select values of K. A scalar Kmax can then be given as second argument.

ControlSystems.rstdMethod
R,S,T=rstd(BPLUS,BMINUS,A,BM1,AM,AO,AR,AS)
R,S,T=rstd(BPLUS,BMINUS,A,BM1,AM,AO,AR)
R,S,T=rstd(BPLUS,BMINUS,A,BM1,AM,AO)

rstd Polynomial synthesis in discrete time.

Polynomial synthesis according to CCS ch 10 to design a controller R(q) u(k) = T(q) r(k) - S(q) y(k)

Inputs:
BPLUS  : Part of open loop numerator
BMINUS : Part of open loop numerator
A      : Open loop denominator
BM1    : Additional zeros
AM     : Closed loop denominator
AO     : Observer polynomial
AR     : Pre-specified factor of R, e.g. integral part [1, -1]^k
AS     : Pre-specified factor of S, e.g. notch filter [1, 0, w^2]

Outputs: R,S,T : Polynomials in controller

See the function dab for how the solution to the Diophantine-Aryabhatta-Bezout identity is chosen.

See Computer-Controlled Systems: Theory and Design, Third Edition Karl Johan Åström, Björn Wittenmark

ControlSystems.seriesMethod
series(sys1::LTISystem, sys2::LTISystem)

Connect systems in series, equivalent to sys2*sys1

ControlSystems.setPlotScaleMethod
setPlotScale(str)

Set the default scale of magnitude in bodeplot and sigmaplot. str should be either "dB" or "log10".

ControlSystems.sigmaMethod
sv, w = sigma(sys[, w])

Compute the singular values sv of the frequency response of system sys at frequencies w. See also sigmaplot

sv has size (length(w), max(ny, nu))

ControlSystems.sigmaplotFunction
sigmaplot(sys, args...; hz=false)
sigmaplot(LTISystem[sys1, sys2...], args...; hz=false)

Plot the singular values of the frequency response of the LTISystem(s). A frequency vector w can be optionally provided.

If hz=true, the plot x-axis will be displayed in Hertz, the input frequency vector is still treated as rad/s.

kwargs is sent as argument to Plots.plot.

ControlSystems.sigmavMethod

sigmav(sys::LTISystem, w::AbstractVector; )

For use with SISO systems where it acts the same as sigma but with the extra dimensions removed in the returned values.

ControlSystems.sigmavMethod

sigmav(sys::LTISystem; )

For use with SISO systems where it acts the same as sigma but with the extra dimensions removed in the returned values.

ControlSystems.similarity_transformMethod
syst = similarity_transform(sys, T; unitary=false)

Perform a similarity transform T : Tx̃ = x on sys such that

à = T⁻¹AT
B̃ = T⁻¹ B
C̃ = CT
D̃ = D

If unitary=true, T is assumed unitary and the matrix adjoint is used instead of the inverse. See also balance_statespace.

ControlSystems.sminrealMethod
sminreal(sys)

Compute the structurally minimal realization of the state-space system sys. A structurally minimal realization is one where only states that can be determined to be uncontrollable and unobservable based on the location of 0s in sys are removed.

Systems with numerical noise in the coefficients, e.g., noise on the order of eps, require truncation to zero to be affected by structural simplification, e.g.,

trunc_zero!(A) = A[abs.(A) .< 10eps(maximum(abs, A))] .= 0
trunc_zero!(sys.A); trunc_zero!(sys.B); trunc_zero!(sys.C)
sminreal(sys)

See also minreal

ControlSystems.ssMethod
sys = ss(A, B, C, D)      # Continuous
sys = ss(A, B, C, D, Ts)  # Discrete

Create a state-space model sys::StateSpace{TE, T} with matrix element type T, where TE is Continuous or <:Discrete.

This is a continuous-time model if Ts is omitted. Otherwise, this is a discrete-time model with sampling period Ts.

D may be specified as 0 in which case a zero matrix of appropriate size is constructed automatically. sys = ss(D [, Ts]) specifies a static gain matrix D.

ControlSystems.ssrandMethod
ssrand(T::Type, ny::Int, nu::Int, nstates::Int; proper=false, stable=true, Ts=nothing)

Returns a random StateSpace model with ny outputs, nu inputs, and nstates states, whose matrix elements are normally distributed.

It is possible to specify if the system should be proper or stable.

Specify a sample time Ts to obtain a discrete-time system.

ControlSystems.stabregionPIDFunction
fig, kp, ki = stabregionPID(P, [ω]; kd=0, doplot = true)

Segments of the curve generated by this program are the boundary of the stability region for a process with transfer function P(s). The PID controller is assumed to be on the form kp + ki/s + kd*s.

The curve is found by analyzing

P(s)*C(s) = -1 ⟹ |PC| = |P||C| = 1, arg(P) + arg(C) = -π

If P is a function (e.g. s -> exp(-sqrt(s))), the stability of feedback loops using PI-controllers can be analyzed for processes whose models are arbitrary analytic functions.

See also stabregionPID, loopshapingPI, pidplots

ControlSystems.starprodMethod
starprod(sys1, sys2, dimu, dimy)

Compute the Redheffer star product.

length(U1) = length(Y2) = dimu and length(Y1) = length(U2) = dimy

For details, see Chapter 9.3 in Zhou, K. and JC Doyle. Essentials of robust control, Prentice hall (NJ), 1998

ControlSystems.tfMethod

sys = tf(num, den[, Ts]), sys = tf(gain[, Ts])

Create as a fraction of polynomials:

sys::TransferFunction{SisoRational{T,TR}} = numerator/denominator where T is the type of the coefficients in the polynomial.

num: the coefficients of the numerator polynomial. Either scalar or vector to create SISO systems or an array of vectors to create MIMO system.

den: the coefficients of the denominator polynomial. Either vector to create SISO systems or an array of vectors to create MIMO system.

Ts: Sample time if discrete time system.

Other uses: tf(sys): Convert sys to tf form. tf("s") and tf("z"): Create the continuous-time transfer function s and the discrete-time transfer function z, respectively.

See also: zpk, ss

ControlSystems.tzerosMethod
tzeros(sys)

Compute the invariant zeros of the system sys. If sys is a minimal realization, these are also the transmission zeros.

ControlSystems.zpconvMethod
zpc(a,r,b,s)

Form conv(a,r) + conv(b,s), where the lengths of the polynomials are equalized by zero-padding such that the addition can be carried out.

ControlSystems.zpkMethod

zpk(gain[, Ts]), zpk(num, den, k[, Ts]), zpk(sys)

Create transfer function on zero pole gain form. The numerator and denominator are represented by their poles and zeros.

sys::TransferFunction{SisoZpk{T,TR}} = k*numerator/denominator where T is the type of k and TR the type of the zeros/poles, usually Float64 and Complex{Float64}.

num: the roots of the numerator polynomial. Either scalar or vector to create SISO systems or an array of vectors to create MIMO system.

den: the roots of the denominator polynomial. Either vector to create SISO systems or an array of vectors to create MIMO system.

k: The gain of the system. Note, this is not the same as dcgain.

Ts: Sample time if discrete time system.

Other uses:

zpk(sys): Convert sys to zpk form.

zpk("s"): Create the transferfunction s.

ControlSystems.zpkdataMethod
z, p, k = zpkdata(sys)

Compute the zeros, poles, and gains of system sys.

Returns

z : Matrix{Vector{ComplexF64}}, (ny x nu)

p : Matrix{Vector{ComplexF64}}, (ny x nu)

k : Matrix{Float64}, (ny x nu)

LinearAlgebra.lyapMethod
lyap(A, Q; kwargs...)

Compute the solution X to the discrete Lyapunov equation AXA' - X + Q = 0.

Uses MatrixEquations.lyapc / MatrixEquations.lyapd. For keyword arguments, see the docstring of ControlSystems.MatrixEquations.lyapc / ControlSystems.MatrixEquations.lyapd

LinearAlgebra.normFunction
norm(sys, p=2; tol=1e-6)

norm(sys) or norm(sys,2) computes the H2 norm of the LTI system sys.

norm(sys, Inf) computes the H∞ norm of the LTI system sys. The H∞ norm is the same as the L∞ norm for stable systems, and Inf for unstable systems. If the peak gain frequency is required as well, use the function hinfnorm instead. See hinfnorm for further documentation.

tol is an optional keyword argument, used only for the computation of L∞ norms. It represents the desired relative accuracy for the computed L∞ norm (this is not an absolute certificate however).

sys is first converted to a StateSpace model if needed.

RobustAndOptimalControl.DiskType
Disk

Represents a perturbation disc in the complex plane. Disk(0.5, 2) represents all perturbations in the circle centered at 1.25 with radius 0.75, or in other words, a gain margin of 2 and a phase margin of 36.9 degrees.

A disk can be converted to a Nyquist exclusion disk by nyquist(disk) and plotted using plot(disk).

Arguments:

  • γmin: Lower intercept
  • γmax: Upper intercept
  • c: Center
  • r: Radius
  • ϕm: Angle of tangent line through origin.

If γmax < γmin the disk is inverted. See diskmargin for disk margin computations.

RobustAndOptimalControl.DiskmarginType
Diskmargin

The notation follows "An Introduction to Disk Margins", Peter Seiler, Andrew Packard, and Pascal Gahinet

Fields:

  • α: The disk margin
  • ω0: The worst-case frequency
  • f0: The destabilizing perturbation. f0 is a complex number with simultaneous gain and phase variation. This critical perturbation causes an instability with a closed-loop pole on the imaginary axis at the critical frequency ω0
  • δ0: The uncertain element generating f0
  • γmin: The lower real-axis intercept of the disk (analogous to the classical lower gain margin)
  • γmax: The upper real-axis intercept of the disk (analogous to the classical upper gain margin)
  • ϕm: Analogous to the classical phase margin
  • σ: The skew parameter that was used to calculate the margin

Note, γmax and ϕm are in general smaller than the classical gain and phase margins since the classical margins do not consider simultaneous perturbations in gain and phase.

The "disk" margin becomes a half plane for α = 2 and an inverted circle for α > 2. In this case, the upper gain margin is infinite. See the paper for more details, in particular figure 6.

RobustAndOptimalControl.ExtendedStateSpaceMethod
se = ExtendedStateSpace(s::AbstractStateSpace; kwargs...)

The conversion from a regular statespace object to an ExtendedStateSpace creates the following system by default

\[\begin{bmatrix} A & B & B \\ C & D & 0 \\ C & 0 & D \end{bmatrix}\]

i.e., the system and performance mappings are identical, system_mapping(se) == performance_mapping(se) == s. However, all matrices B1, B2, C1, C2; D11, D22 are overridable by a corresponding keyword argument.

Related: se = convert(ExtendedStateSpace{...}, s::StateSpace{...}) produces an ExtendedStateSpace with empty performance_mapping from w->z such that ss(se) == s.

RobustAndOptimalControl.LQGProblemType
G = LQGProblem(sys::ExtendedStateSpace, Q1, Q2, R1, R2; qQ=0, qR=0, SQ=nothing, SR=nothing)

Return an LQG object that describes the closed control loop around the process sys=ss(A,B,C,D) where the controller is of LQG-type. The controller is specified by weight matrices Q1,Q2 that penalizes state deviations and control signal variance respectively, and covariance matrices R1,R2 which specify state drift and measurement covariance respectively.

sys is an extended statespace object where the upper channel corresponds to disturbances to performance variables (w→z), and the lower channel corresponds to inputs to outputs (u→y), such that lft(sys, K) forms the closed-loop transfer function from external inputs/disturbances to performance variables.

qQ and qR can be set to incorporate loop-transfer recovery, i.e.,

L = lqr(A, B, Q1+qQ*C'C, Q2)
K = kalman(A, C, R1+qR*B*B', R2)

Increasing qQ will add more cost in the output direction, e.g., encouraging the use of cheap control, while increasing qR adds fictitious dynamics noise, which makes the observer faster in the direction we control.

Example

s = tf("s")
P = [1/(s+1) 2/(s+2); 1/(s+3) 1/(s-1)]
sys = ss(P)
eye(n) = Matrix{Float64}(I,n,n) # For convenience

qQ = 1
qR = 1
Q1 = 10eye(4)
Q2 = 1eye(2)
R1 = 1eye(6)
R2 = 1eye(2)

G = LQGProblem(sys, Q1, Q2, R1, R2, qQ=qQ, qR=qR)

Gcl = G.cl
T = G.T
S = G.S
sigmaplot([S,T],exp10.(range(-3, stop=3, length=1000)))
stepplot(Gcl)
RobustAndOptimalControl.LQGProblemMethod
LQGProblem(P::ExtendedStateSpace)

If only an ExtendedStateSpace system is provided, e.g. from hinfpartition, the system P is assumed to correspond to the H₂ optimal control problem with

C1'C1    = Q1
D12'D12  = Q2
SQ       = C1'D12 # Cross term

B1*B1'   = R1
D21*D21' = R2
SR       = B1*D21' # Cross term

and an LQGProblem with the above covariance matrices is returned. The system description in the returned LQGProblem will have B1 = C1 = I. See Ch. 14 in Robust and optimal control for reference.

Example:

All the following ways of obtaining the H2 optimal controller are (almost) equivalent

using RobustAndOptimalControl, ControlSystems, Test
G = ss(tf(1, [10, 1]))
WS = tf(1, [1, 1e-6]) 
WU = makeweight(1e-2, 0.1, 100) 
Gd = hinfpartition(G, WS, WU, [])

K, Gcl = h2synthesize(Gd)              # First option, using H2 formulas
K2, Gcl2 = h2synthesize(Gd, 1000)      # Second option, using H∞ formulas with large γ

lqg = LQGProblem(Gd)                   # Third option, convert to an LQGProblem and obtain controller
K3 = -observer_controller(lqg)

@test h2norm(lft(Gd, K )) ≈ 3.0568 atol=1e-3
@test h2norm(lft(Gd, K2)) ≈ 3.0568 atol=1e-3
@test h2norm(lft(Gd, K3)) ≈ 3.0568 atol=1e-3
RobustAndOptimalControl.G_CSMethod
G_CS(P, C)

The closed-loop transfer function from (-) measurement noise or (+) reference to control signal. Technically, it's (1 + CP)⁻¹C so SC would be a better, but nonstandard name.

         ▲
         │e₁
         │  ┌─────┐
d₁────+──┴──►  P  ├─────┬──►e₄
      │     └─────┘     │
      │                 │
      │     ┌─────┐    -│
 e₂◄──┴─────┤  C  ◄──┬──+───d₂
            └─────┘  │
                     │e₃
                     ▼
RobustAndOptimalControl.G_PSMethod
G_PS(P, C)

The closed-loop transfer function from load disturbance to plant output. Technically, it's (1 + PC)⁻¹P so SP would be a better, but nonstandard name.

         ▲
         │e₁
         │  ┌─────┐
d₁────+──┴──►  P  ├─────┬──►e₄
      │     └─────┘     │
      │                 │
      │     ┌─────┐    -│
 e₂◄──┴─────┤  C  ◄──┬──+───d₂
            └─────┘  │
                     │e₃
                     ▼
RobustAndOptimalControl.add_input_differentiatorFunction
add_input_differentiator(sys::StateSpace, ui = 1:sys.nu; goodwin=false)

Augment the output of sys with the difference u(k+1)-u(k)

Arguments:

  • ui: An index or vector of indices indicating which inputs to differentiate.
  • goodwin: If true, the difference operator will use the Goodwin δ operator, i.e., (u(k+1)-u(k)) / sys.Ts.

The augmented system will have the matrices

[A 0; 0 0]  [B; I]  [C 0; 0 -I]  [D; I]

with length(ui) added states and outputs.
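
Example (a hypothetical sketch; the plant and sample time below are arbitrary):

using RobustAndOptimalControl, ControlSystems
G  = c2d(ss(tf(1, [1, 1])), 0.1)  # a discrete-time SISO system with Ts = 0.1
Ga = add_input_differentiator(G)  # append u(k+1) - u(k) to the outputs
size(Ga)                          # (2, 1): the original output plus one difference output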

RobustAndOptimalControl.add_low_frequency_disturbanceMethod
add_low_frequency_disturbance(sys::StateSpace; ϵ = 0, measurement = false)

Augment sys with a low-frequency (integrating if ϵ=0) disturbance model. If an integrating input disturbance is used together with an observer, the controller will have integral action.

Arguments:

  • ϵ: Move the integrator pole ϵ into the stable region.
  • measurement: If true, the disturbance is a measurement disturbance, otherwise it's an input disturbance.
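
Example (a minimal sketch assuming a SISO plant; an integrating disturbance is added at the plant input):

using RobustAndOptimalControl, ControlSystems
P  = ss(tf(1, [1, 1]))
Pd = add_low_frequency_disturbance(P) # integrating (ϵ = 0) input-disturbance model
Pd.nx > P.nx                          # the augmented model carries extra disturbance states
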
RobustAndOptimalControl.add_output_differentiatorFunction
add_differentiator(sys::StateSpace{<:Discrete})

Augment the output of sys with the numerical difference (discrete-time derivative) of the output, i.e., y_aug = [y; (y-y_prev)/sys.Ts]. To add both an integrator and a differentiator to a SISO system, use

Gd = add_output_integrator(add_output_differentiator(G), 1)
RobustAndOptimalControl.add_output_integratorFunction
add_output_integrator(sys::StateSpace{<:Discrete}, ind = 1; ϵ = 0)

Augment the output of sys with the integral of the output at index ind, i.e., y_aug = [y; ∫y[ind]]. To add both an integrator and a differentiator to a SISO system, use

Gd = add_output_integrator(add_output_differentiator(G), 1)

Note: numerical integration is subject to numerical drift. If the output of the system corresponds to, e.g., a velocity reference and the integral to position reference, consider methods for mitigating this drift.

RobustAndOptimalControl.add_resonant_disturbanceMethod
add_resonant_disturbance(sys::StateSpace{Continuous}, ω, ζ, Ai::Int; measurement = false)

Augment sys with a resonant disturbance model.

Arguments:

  • ω: Frequency
  • ζ: Relative damping.
  • Ai: The affected state
  • measurement: If true, the disturbance is acting on the output; this will cause the controller to have zeros at ω (roots of the polynomial s² + 2ζωs + ω²). If false, the disturbance is acting on the input; this will cause the controller to have poles at ω (roots of the polynomial s² + 2ζωs + ω²).
RobustAndOptimalControl.baltrunc2Method
sysr, hs = baltrunc2(sys::LTISystem; residual=false, n=missing, kwargs...)

Compute a balanced truncation of order n and the Hankel singular values

For keyword arguments, see the docstring of DescriptorSystems.gbalmr, reproduced below

gbalmr(sys, balance = false, matchdc = false, ord = missing, atolhsv = 0, rtolhsv = nϵ, 
       atolmin = atolhsv, rtolmin = rtolhsv, 
       atol = 0, atol1 = atol, atol2 = atol, rtol, fast = true) -> (sysr, hs)

Compute for a proper and stable descriptor system sys = (A-λE,B,C,D) with the transfer function matrix G(λ), a reduced order realization sysr = (Ar-λEr,Br,Cr,Dr) and the vector hs of decreasingly ordered Hankel singular values of the system sys. If balance = true, a balancing-based approach is used to determine a reduced order minimal realization of the form sysr = (Ar-λI,Br,Cr,Dr). For a continuous-time system sys, the resulting realization sysr is balanced, i.e., the controllability and observability Gramians are equal and diagonal. If additionally matchdc = true, the resulting sysr is computed using state residualization formulas (also known as singular perturbation approximation) which additionally preserves the DC-gain of sys. In this case, the resulting realization sysr is balanced (for both continuous- and discrete-time systems). If balance = false, an enhanced accuracy balancing-free approach is used to determine the reduced order system sysr.

If ord = nr, the resulting order of sysr is min(nr,nrmin), where nrmin is the order of a minimal realization of sys determined as the number of Hankel singular values exceeding max(atolmin,rtolmin*HN), with HN, the Hankel norm of G(λ). If ord = missing, the resulting order is chosen as the number of Hankel singular values exceeding max(atolhsv,rtolhsv*HN).

The keyword arguments atol1, atol2, and rtol, specify, respectively, the absolute tolerance for the nonzero elements of A, B, C, D, the absolute tolerance for the nonzero elements of E, and the relative tolerance for the nonzero elements of A, B, C, D and E. The default relative tolerance is n*ϵ, where ϵ is the working machine epsilon and n is the order of the system sys. The keyword argument atol can be used to simultaneously set atol1 = atol and atol2 = atol.

If E is singular, the uncontrollable infinite eigenvalues of the pair (A,E) and the non-dynamic modes are eliminated using minimal realization techniques. The rank determinations in the performed reductions are based on rank revealing QR-decompositions with column pivoting if fast = true or the more reliable SVD-decompositions if fast = false.

Method: For the order reduction of a standard system, the balancing-free method of [1] or the balancing-based method of [2] are used. For a descriptor system the balancing related order reduction methods of [3] are used. To preserve the DC-gain of the original system, the singular perturbation approximation method of [4] is used in conjunction with the balancing-based or balancing-free approach of [5].

References

[1] A. Varga. Efficient minimal realization procedure based on balancing. In A. El Moudni, P. Borne, and S.G. Tzafestas (Eds.), Prepr. of the IMACS Symp. on Modelling and Control of Technological Systems, Lille, France, vol. 2, pp.42-47, 1991.

[2] M. S. Tombs and I. Postlethwaite. Truncated balanced realization of a stable non-minimal state-space system. Int. J. Control, vol. 46, pp. 1319–1330, 1987.

[3] T. Stykel. Gramian based model reduction for descriptor systems. Mathematics of Control, Signals, and Systems, 16:297–319, 2004.

[4] Y. Liu and B.D.O. Anderson. Singular Perturbation Approximation of Balanced Systems, Int. J. Control, Vol. 50, pp. 1379-1405, 1989.

[5] A. Varga. Balancing-free square-root algorithm for computing singular perturbation approximations. Proc. 30-th IEEE CDC, Brighton, Dec. 11-13, 1991, Vol. 2, pp. 1062-1065.

RobustAndOptimalControl.baltrunc_coprimeMethod
sysr, hs, info = baltrunc_coprime(sys; residual = false, n = missing, factorization::F = DescriptorSystems.gnlcf, kwargs...)

Compute a balanced truncation of the left coprime factorization of sys. See baltrunc2 for additional keyword-argument help.

Coprime-factor reduction performs a coprime factorization of the model into $P(s) = M(s)^{-1}N(s)$ where $M$ and $N$ are stable factors even if $P$ contains unstable modes. After this, the system $NM = \begin{bmatrix}N & M \end{bmatrix}$ is reduced using balanced truncation and the final reduced-order model is formed as $P_r(s) = M_r(s)^{-1}N_r(s)$. For this method, the Hankel singular values of $NM$ are reported and the reported errors are $||NM - N_rM_r||_\infty$. This method is of particular interest in closed-loop situations, where a model-reduction error $||NM - N_rM_r||_\infty$ no greater than the normalized-coprime margin of the plant and the controller guarantees that the closed loop remains stable when either $P$ or $K$ are reduced. The normalized-coprime margin can be computed with ncfmargin(P, K).

Arguments:

  • factorization: The function to perform the coprime factorization. A non-normalized factorization may be used by passing RobustAndOptimalControl.DescriptorSystems.glcf.
  • kwargs: Are passed to DescriptorSystems.gbalmr
RobustAndOptimalControl.baltrunc_unstabFunction
baltrunc_unstab(sys::LTISystem; residual = false, n = missing, kwargs...)

Balanced truncation for unstable models. An additive decomposition of sys into sys = sys_stable + sys_unstable is performed after which sys_stable is reduced. The order n must not be less than the number of unstable poles.

See baltrunc2 for other keyword arguments.

RobustAndOptimalControl.bilinearc2dMethod
bilinearc2d(Ac::AbstractArray, Bc::AbstractArray, Cc::AbstractArray, Dc::AbstractArray, Ts::Number; tolerance=1e-12)

Balanced Bilinear transformation in State-Space. This method computes a discrete time equivalent of a continuous-time system, such that

\[G_d(z) = s2z[G_c(s)]\]

in a manner which accomplishes the following: (i) preserves the L-infinity norm over the transformation, (ii) finds a system which balances B and C, in the sense that $||B||_2=||C||_2$, (iii) satisfies $G_c(s) = z2s[s2z[G_c(s)]]$ for some map z2s[]

RobustAndOptimalControl.bilinearc2dMethod
bilinearc2d(sys::ExtendedStateSpace, Ts::Number)

Applies a balanced bilinear transformation to a continuous-time extended statespace object

RobustAndOptimalControl.bilineard2cMethod
bilineard2c(Ad::AbstractArray, Bd::AbstractArray, Cd::AbstractArray, Dd::AbstractArray, Ts::Number; tolerance=1e-12)

Balanced Bilinear transformation in State-Space. This method computes a continuous time equivalent of a discrete time system, such that

\[G_c(s) = z2s[G_d(z)]\]

in a manner which accomplishes the following: (i) preserves the L-infinity norm over the transformation, (ii) finds a system which balances B and C, in the sense that $||B||_2=||C||_2$, (iii) satisfies $G_d(z) = s2z[z2s[G_d(z)]]$ for some map s2z[]

RobustAndOptimalControl.blocksortMethod
blocks, M = blocksort(P::UncertainSS)

Returns the block structure of P.Δ as well as P.M permuted according to the sorted block structure. blocks is a vector of vectors with the block structure of perturbation blocks as described by μ-tools, i.e.

  • [-N, 0] denotes a repeated real block of size N
  • [N, 0] denotes a repeated complex block of size N
  • [ny, nu] denotes a full complex block of size ny × nu
RobustAndOptimalControl.broken_feedbackMethod
broken_feedback(L, i)

Closes all loops in the square MIMO system L except for loops i. Forms L1 in Fig. 14 of "An Introduction to Disk Margins" https://arxiv.org/abs/2003.04771

RobustAndOptimalControl.closedloopFunction
closedloop(l::LQGProblem, L = lqr(l), K = kalman(l))

Closed-loop system as defined in Glad and Ljung eq. 8.28. Note, this definition of closed loop is not the same as lft(P, K), which has B1 instead of B2 as input matrix. Use lft(l) to get the system from disturbances to controlled variables w -> z.

The return value will be the closed loop from reference only, other disturbance signals (B1) are ignored. See feedback for a more advanced option.

Use static_gain_compensation to adjust the gain from references acting on the input B2, dcgain(closedloop(l))*static_gain_compensation(l) ≈ I

RobustAndOptimalControl.comp_sensitivityMethod

See output_comp_sensitivity

         ▲
         │e₁
         │  ┌─────┐
d₁────+──┴──►  P  ├─────┬──►e₄
      │     └─────┘     │
      │                 │
      │     ┌─────┐    -│
 e₂◄──┴─────┤  C  ◄──┬──+───d₂
            └─────┘  │
                     │e₃
                     ▼
RobustAndOptimalControl.connectMethod
connect(systems, connections; w1, z1 = (:), verbose = true, kwargs...)

Create complicated feedback interconnection.

Addition and subtraction nodes are achieved by creating a linear combination node, i.e., a system with a D matrix only.

Arguments:

  • systems: A vector of named systems to be connected
  • connections: a vector of pairs indicating output => input mappings.
    • u1: input mappings (alternative input argument)
    • y1: output mappings (alternative input argument)
  • w1: external signals
  • z1: outputs (can overlap with y1)
  • verbose: Issue warnings for signals that have no connection

Example: The following complicated feedback interconnection

                 yF
              ┌────────────────────────────────┐
              │                                │
    ┌───────┐ │  ┌───────┐ yR   ┌─────────┐    │    ┌───────┐
uF  │       │ │  │       ├──────►         │ yC │  uP│       │    yP
────►   F   ├─┴──►   R   │      │    C    ├────+────►   P   ├────┬────►
    │       │    │       │   ┌──►         │         │       │    │
    └───────┘    └───────┘   │  └─────────┘         └───────┘    │
                             │                                   │
                             └───────────────────────────────────┘

can be created by

using RobustAndOptimalControl, ControlSystems
F = named_ss(ssrand(1, 1, 2, proper=true), x=:xF, u=:uF, y=:yF)
R = named_ss(ssrand(1, 1, 2, proper=true), x=:xR, u=:uR, y=:yR)
C = named_ss(ssrand(1, 1, 2, proper=true), x=:xC, u=:uC, y=:yC)
P = named_ss(ssrand(1, 1, 3, proper=true), x=:xP, u=:uP, y=:yP)

addP = sumblock("uP = yF + yC") # Sum node before P
addC = sumblock("uC = yR - yP") # Sum node before C

connections = [
    :yP => :yP # Output to input
    :uP => :uP
    :yC => :yC
    :yF => :yF
    :yF => :uR
    :uC => :uC
    :yR => :yR
]
w1 = [:uF] # External inputs

G = connect([F, R, C, P, addP, addC], connections; w1)

If an external input is to be connected to multiple points, use a splitter to split up the signal into a set of unique names which are then used in the connections.

RobustAndOptimalControl.controller_reductionFunction
controller_reduction(P::ExtendedStateSpace, K, r, out=true; kwargs...)

Minimize ||(K-Kᵣ) W||∞ if out=false, or ||W (K-Kᵣ)||∞ if out=true. See Robust and Optimal Control Ch 19.1. out indicates whether the weight will be applied as an output or input weight.

This method corresponds to the methods labelled SW1/SW2(SPA) in Andreas Varga, "Controller Reduction Using Accuracy-Enhancing Methods". SW1 is the default method, corresponding to out=true.

This method does not support unstable controllers. See the reference above for alternatives. See also stab_unstab and baltrunc_unstab.

RobustAndOptimalControl.controller_reduction_plotFunction
controller_reduction_plot(G, K)

Plot the normalized-coprime margin (ncfmargin) as a function of controller order when baltrunc_coprime is used to reduce the controller. Red, orange and green bands correspond to rules of thumb for bad, okay and good coprime uncertainty margins. A value of 0 indicates an unstable closed loop.

If G is an ExtendedStateSpace system, a second plot will be shown indicating the $H_∞$ norm between inputs and performance outputs $||T_{zw}||_\infty$ when the function controller_reduction is used to reduce the controller.

The order of the controller can safely be reduced as long as the normalized coprime margin remains sufficiently large. If the controller contains integrators, it may be advisable to protect the integrators from the reduction, e.g., if the controller is obtained using glover_mcfarlane, perform the reduction on info.Gs, info.Ks rather than on K, and form Kr using the reduced Ks.

RobustAndOptimalControl.dare3Method
dare3(P::AbstractStateSpace, Q1::AbstractMatrix, Q2::AbstractMatrix, Q3::AbstractMatrix; full=false)

Solve the discrete-time algebraic Riccati equation for a discrete LQR cost augmented with control differences

\[x^{T} Q_1 x + u^{T} Q_2 u + Δu^{T} Q_3 Δu, \quad Δu = u(k) - u(k-1)\]

If full, the returned matrix will include the state u(k-1), otherwise the returned matrix will be of the same size as Q1.
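
Example (an illustrative sketch; the plant and weights below are arbitrary and not part of the original docstring):

using RobustAndOptimalControl, ControlSystems, LinearAlgebra
P  = c2d(ss(tf(1, [1, 1, 1])), 0.1) # discrete plant with two states and one input
Q1 = Matrix(1.0I, 2, 2)             # state penalty
Q2 = Matrix(1.0I, 1, 1)             # input penalty
Q3 = Matrix(10.0I, 1, 1)            # penalty on control differences Δu
X  = dare3(P, Q1, Q2, Q3)           # Riccati solution, same size as Q1 since full = false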

RobustAndOptimalControl.diskmarginFunction
diskmargin(L, σ = 0)
diskmargin(L, σ::Real, ω)

Calculate the disk margin of LTI system L. L is supposed to be a loop-transfer function, i.e., it should be square. If L = PC the disk margin for output perturbations is computed, whereas if L = CP, input perturbations are considered. If the method diskmargin(P, C, args...) is used, both are computed.

The implementation and notation follows "An Introduction to Disk Margins", Peter Seiler, Andrew Packard, and Pascal Gahinet https://arxiv.org/abs/2003.04771

The margins are available as fields of the returned objects, see Diskmargin.

Arguments:

  • L: A loop-transfer function.
  • σ: If little is known about the distribution of gain variations then σ = 0 is a reasonable choice as it allows for a gain increase or decrease by the same relative amount. The choice σ < 0 is justified if the gain can decrease by a larger factor than it can increase. Similarly, the choice σ > 0 is justified when the gain can increase by a larger factor than it can decrease. If σ = −1 then the disk margin condition is αmax = inv(MT). This margin is related to the robust stability condition for models with multiplicative uncertainty of the form P (1 + δ). If σ = +1 then the disk margin condition is αmax = inv(MS)
  • kwargs: Are sent to the hinfnorm calculation
  • ω: If a vector of frequencies is supplied, the frequency-dependent disk margin will be computed, see example below.

Example:

L = tf(25, [1,10,10,10])
dm = diskmargin(L, 0)
plot(dm) # Plot the disk margin to illustrate maximum allowed simultaneous gain and phase variations.

nyquistplot(L)
plot!(dm, nyquist=true) # plot a nyquist exclusion disk. The Nyquist curve will be tangent to this disk at `dm.ω0`
nyquistplot!(dm.f0*L) # If we perturb the system with the worst-case perturbation `f0`, the curve will pass through the critical point -1.

## Frequency-dependent margin
w = exp10.(LinRange(-2, 2, 500))
dms = diskmargin(L, 0, w)
plot(dms)

See also ncfmargin.

RobustAndOptimalControl.diskmarginMethod
diskmargin(P::LTISystem, C::LTISystem, σ, w::AbstractVector, args...; kwargs...)

Simultaneous diskmargin at outputs, inputs, and inputs/outputs simultaneously of P. Returns a named tuple with the fields input, output, simultaneous_input, simultaneous_output, simultaneous, where input and output represent loop-at-a-time margins, simultaneous_input is the margin for simultaneous perturbations on all inputs, simultaneous_output is the margin for simultaneous perturbations on all outputs, and simultaneous is the margin for perturbations on all inputs and outputs simultaneously. See also ncfmargin.

RobustAndOptimalControl.diskmarginMethod
diskmargin(L::LTISystem, σ::Real, ω)

Calculate the diskmargin at a particular frequency or vector of frequencies. If ω is a vector, you get a frequency-dependent diskmargin plot if you plot the returned value. See also ncfmargin.

RobustAndOptimalControl.expand_symbolMethod
expand_symbol(s::Symbol, n::Int)

Takes a symbol and an integer and returns a vector of symbols with increasing numbers appended to the end. E.g., (:x, 3) -> [:x1, :x2, :x3]

The short-hand syntax s^n is also available, e.g., :x^3 == expand_symbol(:x, 3).

Useful to create signal names for named systems.
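
Example (directly following the description above; if expand_symbol is not exported, qualify it as RobustAndOptimalControl.expand_symbol):

using RobustAndOptimalControl
expand_symbol(:u, 2) # [:u1, :u2]
:u^2                 # the same result using the short-hand syntax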

RobustAndOptimalControl.extended_controllerFunction
extended_controller(l::LQGProblem, L = lqr(l), K = kalman(l))

Returns an expression for the controller that is obtained when state-feedback u = -L(xᵣ-x̂) is combined with a Kalman filter with gain K that produces state estimates x̂. The controller is an instance of ExtendedStateSpace where C2 = -L, D21 = L and B2 = K.

The returned system has inputs [xᵣ; y] and outputs the control signal u. If a reference model R is used to generate state references xᵣ, the controller from e = ry - y -> u is given by

Ce = extended_controller(l)
Ce = named_ss(Ce; x = :xC, y = :u, u = [R.y; :y^l.ny]) # Name the inputs of Ce the same as the outputs of `R`.
connect([R, Ce]; u1 = R.y, y1 = R.y, w1 = [:ry^l.ny, :y^l.ny], z1=[:u])

Since the negative part of the feedback is built into the returned system, we have

C = observer_controller(l)
Ce = extended_controller(l)
system_mapping(Ce) == -C
RobustAndOptimalControl.extended_controllerMethod
extended_controller(K::AbstractStateSpace)

Takes a controller and returns an ExtendedStateSpace version which has augmented input [r; y] and output y (z output is 0-dim).

RobustAndOptimalControl.extended_gangoffourFunction
extended_gangoffour(P, C, pos=true)

Returns a single statespace system that maps

  • w1 reference or measurement noise
  • w2 load disturbance

to

  • z1 control error
  • z2 control input
      z1          z2
      ▲  ┌─────┐  ▲      ┌─────┐
      │  │     │  │      │     │
w1──+─┴─►│  C  ├──┴───+─►│  P  ├─┐
    │    │     │      │  │     │ │
    │    └─────┘      │  └─────┘ │
    │                 w2         │
    └────────────────────────────┘

The returned system has the transfer-function matrix

\[\begin{bmatrix} I \\ C \end{bmatrix} (I + PC)^{-1} \begin{bmatrix} I & P \end{bmatrix}\]

The gang of four can be plotted like so

Gcl = extended_gangoffour(G, C) # Form closed-loop system
bodeplot(Gcl, lab=["S" "CS" "PS" "T"], plotphase=false) |> display # Plot gang of four

Note, the last output of Gcl is the negative of the CS and T transfer functions from gangoffour2. To get a transfer matrix with the same sign as G_CS and comp_sensitivity, call extended_gangoffour(P, C, pos=false). See glover_mcfarlane for an extended example. See also ncfmargin.

RobustAndOptimalControl.feedback_controlMethod
G = feedback_control(P, K)

Return the (negative feedback) closed-loop system from the input of K to the output of P, while also outputting the control signal (the output of K), i.e., G maps references to [y; u]

Example:

The following are two equivalent ways of achieving the same thing

using RobustAndOptimalControl, ControlSystems, Test
G = ssrand(3,4,2)
K = ssrand(4,3,2)

Gcl1 = feedback_control(G, K) # First option

# Second option using named systems and connect
G = named_ss(G, :G)
K = named_ss(K, :K)
S = sumblock("Ku = r - Gy", n=3) # Create a sumblock that computes r - Gy for vectors of length 3

z1 = [G.y; K.y] # Output both plant and controller outputs
w1 = :r^3       # External inputs are the three references into the sum block
connections = [K.y .=> G.u; G.y .=> G.y; K.u .=> K.u] # Since the sumblock uses the same names as the IO signals of G,K, we can reuse these names here
Gcl2 = connect([G, K, S], connections; z1, w1)

@test linfnorm(minreal(Gcl1 - Gcl2.sys))[1] < 1e-10 # They are the same
RobustAndOptimalControl.find_lftMethod
l = find_lft(sys::StateSpace{<:Any, <:StaticParticles{<:Any, N}}, δ) where N

Given a system sys with uncertain coefficients in the form of StaticParticles, find a lower linear fractional transformation M such that lft(M, δ) ≈ sys.

δ can be either the source of uncertainty in sys, i.e., a vector of the unique uncertain parameters that were used to create sys. These should be constructed as uniformly distributed particles for most robust-control theory to be applicable. δ can also be an integer, in which case a number of δ sources of uncertainty are automatically created. This could be used for order reduction if the number of uncertainty sources in sys is large.

Note, uncertainty in sys is only supported in A and B, C and D must be deterministic.

Returns l::LFT that internally contains all four blocks of M as well as δ. Call ss(l, sys) to obtain lft(M, δ) ≈ sys.

Call Matrix(l) to obtain M = [M11 M12; M21 M22]

RobustAndOptimalControl.fit_complex_perturbationsMethod
centers, radii = fit_complex_perturbations(P, w; relative=true, nominal=:mean)

For each frequency in w, fit a circle in the complex plane that contains all models in the model set P, represented as an LTISystem with Particles coefficients. Note, the resulting radii can be quite unstable if the number of particles is small, in particular if the particles are normally distributed instead of uniformly.

If relative = true, circles encompassing |(P - Pn)/Pn| will be returned (multiplicative/relative uncertainty). If relative = false, circles encompassing |P - Pn| will be returned (additive uncertainty).

If nominal = :mean, the mean of P will be used as nominal model. If nominal = :first, the first particle will be used. See MonteCarloMeasurements.with_nominal to set the nominal value in the first particle.

See also nyquistcircles to plot circles (only if relative=false).

RobustAndOptimalControl.frequency_weighted_reductionFunction
sysr, hs = frequency_weighted_reduction(G, Wo, Wi; residual=true)

Find Gr such that ||Wₒ(G-Gr)Wᵢ||∞ is minimized. For a relative reduction, set Wo = inv(G) and Wi = I.

If residual = true, matched static gain is achieved through "residualization", i.e., setting

\[0 = A_{21}x_{1} + A_{22}x_{2} + B_{2}u\]

where indices 1/2 correspond to the remaining/truncated states respectively. This choice typically results in a better match in the low-frequency region and a smaller overall error.

Ref: Andras Varga and Brian D.O. Anderson, "Accuracy enhancing methods for the frequency-weighted balancing related model reduction" https://elib.dlr.de/11746/1/varga_cdc01p2.pdf

Note: This function only handles exponentially stable models. To reduce unstable and marginally stable models, decompose the system into stable and unstable parts using stab_unstab, reduce the stable part and then add the unstable part back.

RobustAndOptimalControl.gain_and_delay_uncertaintyMethod
gain_and_delay_uncertainty(kmin, kmax, Lmax)

Return a multiplicative weight to represent the uncertainty coming from neglecting the dynamics k*exp(-s*L) where k ∈ [kmin, kmax] and L ≤ Lmax. This weight is slightly optimistic; an expression for a more exact weight appears in eq. (7.27) of "Multivariable Feedback Control: Analysis and Design"
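
Example (an illustrative sketch; the gain interval and delay bound below are arbitrary):

using RobustAndOptimalControl, ControlSystems, Plots
W = gain_and_delay_uncertainty(0.8, 1.2, 0.1) # gain k ∈ [0.8, 1.2] and delay L ≤ 0.1
bodeplot(W, plotphase=false)                  # magnitude of the multiplicative uncertainty weight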

RobustAndOptimalControl.glover_mcfarlaneFunction
K, γ, info = glover_mcfarlane(G::AbstractStateSpace{<:Discrete}, γ = 1.1; W1=1, W2=1, strictly_proper=false)

For discrete systems, the info tuple contains also feedback gains F, L and observer gain Hkf such that the controller on observer form is given by

\[x^+ = Ax + Bu + H_{kf} (Cx - y)\\ u = Fx + L (Cx - y)\]

Note, this controller is not strictly proper, i.e., it has a non-zero D matrix. The controller can be transformed to observer form for the scaled plant (info.Gs) by Ko = observer_controller(info), in which case the following holds G*K == info.Gs*Ko (realizations are different).

If strictly_proper = true, the returned controller K will have D == 0. This can be advantageous in implementations where computational delays are present. In this case, info.L == 0 as well.

Ref discrete version: Iglesias, "The Strictly Proper Discrete-Time Controller for the Normalized Left-Coprime Factorization Robust Stabilization Problem"

RobustAndOptimalControl.glover_mcfarlaneFunction
K, γ, info = glover_mcfarlane(G::AbstractStateSpace, γ = 1.1; W1=1, W2=1)

Design a controller for G that maximizes the stability margin ϵ = 1/γ with normalized coprime factor uncertainty using the method of Glover and McFarlane

γ = 1/ϵ = ||[K;I] inv(I-G*K)*inv(M)||∞
G = inv(M + ΔM)*(N + ΔN)

γ is given as a relative factor above γmin and must be greater than 1, i.e., if γ = 1.1, the controller will be designed for γ = 1.1*γmin.

We want γmin (which is always ≥ 1) as small as possible, and we usually require that γmin is less than 4, corresponding to 25% allowed coprime uncertainty.

Performance modeling is incorporated in the design by calling glover_mcfarlane on the shaped system Gs = W2*G*W1 and then forming the controller as K = W1*Ks*W2. Using this formulation, traditional loop shaping can be done on Gs = W2*G*W1. The plant shaping is handled internally if keyword arguments W1, W2 are used and the returned controller is already scaled. In this case, Gs and Ks are included in the info named tuple for inspection.

See also glover_mcfarlane_2dof to design a feedforward filter as well and baltrunc_coprime for controller order reduction. When reducing the order of the calculated controller, reduce the order of info.Ks and form Kr=W1*Ksred*W2. Verify the robustness using ncfmargin(info.Gs, Ksred) as well as ncfmargin(G, Kr).

Example:

Example 9.3 from the reference below.

using RobustAndOptimalControl, ControlSystems, Plots, Test
G = tf(200, [10, 1])*tf(1, [0.05, 1])^2     |> ss
Gd = tf(100, [10, 1])                       |> ss
W1 = tf([1, 2], [1, 1e-6])                  |> ss
K, γ, info = glover_mcfarlane(G, 1.1; W1)
@test info.γmin ≈ 2.34 atol=0.005
Gcl = extended_gangoffour(G, K) # Form closed-loop system

bodeplot([G, info.Gs, G*K], lab=["G" "" "G scaled" "" "Loop transfer"]) |> display
bodeplot(Gcl, lab=["S" "KS" "PS" "T"], plotphase=false) |> display # Plot gang of four

plot( step(Gd*feedback(1, info.Gs), 3), lab="Initial controller")
plot!(step(Gd*feedback(1, G*K), 3), lab="Robustified") |> display

nyquistplot([info.Gs, G*K], ylims=(-2,1), xlims=(-2, 1),
    Ms_circles = 1.5,
    lab = ["Initial controller" "Robustified"],
    title = "Loop transfers with and without robustified controller"
    ) |> display

Example of controller reduction: The order of the controller designed above can be reduced maintaining at least 2/3 of the robustness margin like this

e,_ = ncfmargin(info.Gs, info.Ks)
Kr, hs, infor = baltrunc_coprime(info.Ks, n=info.Ks.nx)
n = findlast(RobustAndOptimalControl.error_bound(hs) .> 2e/3) # 2/3 e sets the robustness margin
Ksr, hs, infor = baltrunc_coprime(info.Ks; n)
@test ncfmargin(info.Gs, Ksr)[1] >= 2/3 * e
Kr = W1*Ksr
bodeplot([G*K, G*Kr], lab=["L original" "" "L Reduced" ""]) |> display

This gives a final controller Kr of order 3 instead of order 5, but a very similar robustness margin. You may also call

controller_reduction_plot(info.Gs, info.Ks)

to help you select the controller order.

Ref: Sec 9.4.1 of Skogestad, "Multivariable Feedback Control: Analysis and Design"

Extended help

Skogestad gives the following general advice:

  1. Scale the plant outputs and inputs. This is very important for most design procedures and is sometimes forgotten. In general, scaling improves the conditioning of the design problem, it enables meaningful analysis to be made of the robustness properties of the feedback system in the frequency domain, and for loop-shaping it can simplify the selection of weights. There are a variety of methods available including normalization with respect to the magnitude of the maximum or average value of the signal in question. If one is to go straight to a design the following variation has proved useful in practice: (a) The outputs are scaled such that equal magnitudes of cross-coupling into each of the outputs is equally undesirable. (b) Each input is scaled by a given percentage (say 10%) of its expected range of operation. That is, the inputs are scaled to reflect the relative actuator capabilities. An example of this type of scaling is given in the aero-engine case study of Chapter 12.

  2. Order the inputs and outputs so that the plant is as diagonal as possible. The relative gain array rga can be useful here. The purpose of this pseudo-diagonalization is to ease the design of the pre- and post-compensators which, for simplicity, will be chosen to be diagonal.

    Next, we discuss the selection of weights to obtain the shaped plant $G_s = W_2 G W_1$ where $W_1 = W_p W_a W_g$

  3. Select the elements of diagonal pre- and post-compensators $W_p$ and $W_2$ so that the singular values of $W_2 G W_p$ are desirable. This would normally mean high gain at low frequencies, roll-off rates of approximately 20 dB/decade (a slope of about 1) at the desired bandwidth(s), with higher rates at high frequencies. Some trial and error is involved here. $W_2$ is usually chosen as a constant, reflecting the relative importance of the outputs to be controlled and the other measurements being fed back to the controller. For example, if there are feedback measurements of two outputs to be controlled and a velocity signal, then $W_2$ might be chosen to be diag([1, 1, 0.1]), where 0.1 is in the velocity signal channel. $W_p$ contains the dynamic shaping. Integral action, for low frequency performance; phase-advance for reducing the roll-off rates at crossover, and phase-lag to increase the roll-off rates at high frequencies should all be placed in $W_p$ if desired. The weights should be chosen so that no unstable hidden modes are created in $G_s$.

  4. Optional: Introduce an additional gain matrix $W_g$ cascaded with $W_a$ to provide control over actuator usage. $W_g$ is diagonal and is adjusted so that actuator rate limits are not exceeded for reference demands and typical disturbances on the scaled plant outputs. This requires some trial and error.

  5. Robustly stabilize the shaped plant $G_s = W_2 G W_1$ , where $W_1 = W_p W_a W_g$, using glover_mcfarlane. First, the maximum stability margin $ϵ_{max} = 1/γ_{min}$ is calculated. If the margin is too small, $ϵmax < 0.25$, then go back and modify the weights. Otherwise, a γ-suboptimal controller is synthesized. There is usually no advantage to be gained by using the optimal controller. When $ϵ_{max}$ > 0.25 (respectively $γ_{min}$ < 4) the design is usually successful. In this case, at least 25% coprime factor uncertainty is allowed, and we also find that the shape of the open-loop singular values will not have changed much after robust stabilization. A small value of ϵmax indicates that the chosen singular value loop-shapes are incompatible with robust stability requirements. That the loop-shapes do not change much following robust stabilization if γ is small (ϵ large), is justified theoretically in McFarlane and Glover (1990).

  6. Analyze the design and if all the specifications are not met make further modifications to the weights.

  7. Implement the controller. The configuration shown in below has been found useful when compared with the conventional set up. This is because the references do not directly excite the dynamics of $K$, which can result in large amounts of overshoot (classical derivative kick). The constant prefilter ensures a steady-state gain of 1 between r and y, assuming integral action in $W_1$ or $G$ (note, the K returned by this function has opposite sign compared to that of Skogestad, so we use negative feedback here).

Anti-windup can be added to $W_1$ but putting $W_1$ on Hanus form after the synthesis, see hanus.

       ┌─────────┐      ┌────────┐      ┌────────┐
    r  │         │    us│        │  u   │        │  y
   ───►│(K*W2)(0)├──+──►│   W1   ├─────►│   G    ├────┬──►
       │         │  │-  │        │      │        │    │
       └─────────┘  │   └────────┘      └────────┘    │
                    │                                 │
                    │                                 │
                    │   ┌────────┐      ┌────────┐    │
                    │   │        │  ys  │        │    │
                    └───┤   K    │◄─────┤   W2   │◄───┘
                        │        │      │        │
                        └────────┘      └────────┘

Keywords: ncfsyn, coprimeunc

RobustAndOptimalControl.glover_mcfarlane_2dofFunction
K, γ, info = glover_mcfarlane_2dof(G::AbstractStateSpace{Continuous}, Tref::AbstractStateSpace{Continuous}, γ = 1.1, ρ = 1.1;
W1 = 1, Wo = I, match_dc = true, kwargs...)

Joint design of feedback and feedforward compensators

\[K = \begin{bmatrix} K_1 & K_2 \end{bmatrix}\]

   ┌──────┐   ┌──────┐        ┌──────┐    ┌─────┐
r  │      │   │      │        │      │    │     │
──►│  Wi  ├──►│  K1  ├───+───►│  W1  ├───►│  G  ├─┐y
   │      │   │      │   │    │      │    │     │ │
   └──────┘   └──────┘   │    └──────┘    └─────┘ │
                         │                        │
                         │    ┌──────┐            │
                         │    │      │            │
                         └────┤  K2  ◄────────────┘
                              │      │
                              └──────┘

Where the returned controller K takes the measurement vector [r; y] (positive feedback), i.e., it includes all blocks Wi, K1, K2, W1. If match_dc = true, Wi is automatically computed to make sure the static gain matches Tref exactly, otherwise Wi is set to I. The info named tuple contains the feedforward filter for inspection (info.K1 = K1*Wi).

Arguments:

  • G: Plant model
  • Tref: Reference model
  • γ: Relative γ
  • ρ: Design parameter, typically 1 < ρ < 3. Increase to emphasize model matching at the expense of robustness.
  • W1: Pre-compensator for loop shaping.
  • Wo: Output selection matrix. If there are more measurements than controlled variables, this matrix lets you select which measurements are to be controlled.
  • kwargs: Are sent to hinfsynthesize.

Ref: Sec. 9.4.3 of Skogestad, "Multivariable Feedback Control: Analysis and Design". The reference contains valuable pointers regarding gain-scheduling implementation of the designed controller as an observer with feedback from estimated states. In order to get anti-windup protection when W1 contains an integrator, transform W1 to self-conditioned Hanus form (using hanus) and implement the controller like this

W1h = hanus(W1)             # Perform outside loop

# Each iteration
us = filter(Ks, [r; y])     # filter inputs through info.Ks (filter is a fictive function that applies the transfer function)
u  = filter(W1h, [us; ua])  # filter us and u-actual (after input saturation) through W1h
ua = clamp(u, lower, upper) # Calculate ua for next iteration as the saturated value of u

Example:

using RobustAndOptimalControl, ControlSystems, Plots
P = tf([1, 5], [1, 2, 10]) # Plant
W1 = tf(1,[1, 0]) |> ss    # Loop shaping controller

Tref = tf(1, [1, 1]) |> ss # Reference model

K1dof, γ1, info1 = glover_mcfarlane(ss(P), 1.1; W1)
K2dof, γ2, info2 = glover_mcfarlane_2dof(ss(P), Tref, 1.1, 1.1; W1)

G1 = feedback(P*K1dof)
G2 = info2.Gcl

w = exp10.(LinRange(-2, 2, 200)) # Frequency vector for the Bode plot
bodeplot(info2.K1, w, lab="Feedforward filter")
plot([step(G1, 15), step(G2, 15), step(Tref, 15)], lab=["1-DOF" "2-DOF" "Tref"])
RobustAndOptimalControl.h2normMethod
n = h2norm(sys::LTISystem; kwargs...)

A numerically robust version of norm using DescriptorSystems.jl
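
Example (the value below follows from the analytic H₂ norm of a first-order low-pass filter):

using RobustAndOptimalControl, ControlSystems
G = ss(tf(1, [1, 1]))
h2norm(G) # ≈ 1/√2 ≈ 0.707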

For keyword arguments, see the docstring of DescriptorSystems.gh2norm, reproduced below

gh2norm(sys, fast = true, offset = sqrt(ϵ), atol = 0, atol1 = atol, atol2 = atol, atolinf = atol, rtol = n*ϵ)

Compute for a descriptor system sys = (A-λE,B,C,D) the H2 norm of its transfer function matrix G(λ). The H2 norm is infinite if sys has unstable poles, or, for a continuous-time system, if the system has nonzero gain at infinity. To check the stability, the eigenvalues of the pole pencil A-λE must have real parts less than -β for a continuous-time system or have moduli less than 1-β for a discrete-time system, where β is the stability domain boundary offset. The offset β to be used can be specified via the keyword parameter offset = β. The default value used for β is sqrt(ϵ), where ϵ is the working machine precision.

For a continuous-time system sys with E singular, a reduced order realization is determined first, without uncontrollable and unobservable nonzero finite and infinite eigenvalues of the corresponding pole pencil. The rank determinations in the performed reductions are based on rank revealing QR-decompositions with column pivoting if fast = true or the more reliable SVD-decompositions if fast = false.

The keyword arguments atol1, atol2, and rtol, specify, respectively, the absolute tolerance for the nonzero elements of A, B, C, D, the absolute tolerance for the nonzero elements of E, and the relative tolerance for the nonzero elements of A, B, C, D and E. The keyword argument atolinf is the absolute tolerance for the gain of G(λ) at λ = ∞. The used default value is atolinf = 0. The default relative tolerance is n*ϵ, where ϵ is the working machine epsilon and n is the order of the system sys. The keyword argument atol can be used to simultaneously set atol1 = atol and atol2 = atol.

RobustAndOptimalControl.h2synthesizeFunction
K, Cl = h2synthesize(P::ExtendedStateSpace, γ = nothing)

Synthesize H₂-optimal controller K and calculate the closed-loop transfer function from w to z. Ref: Cha. 14.5 in Robust and Optimal Control.

If γ = nothing, use the formulas for H₂ in Ch 14.5. If γ is a large value, the H∞ formulas are used. As γ → ∞, these two are equivalent. The H∞ formulas do a coordinate transformation that handles slightly more general systems, so if you run into an error, it might be worth trying setting γ to something large, e.g., 1000.

RobustAndOptimalControl.hankelnormMethod
n, hsv = hankelnorm(sys::LTISystem; kwargs...)

Compute the hankelnorm and the hankel singular values

For keyword arguments, see the docstring of DescriptorSystems.ghanorm, reproduced below

ghanorm(sys, fast = true, atol = 0, atol1 = atol, atol2 = atol, rtol = n*ϵ) -> (hanorm, hs)

Compute for a proper and stable descriptor system sys = (A-λE,B,C,D) with the transfer function matrix G(λ), the Hankel norm hanorm = $\small ||G(\lambda)||_H$ and the vector of Hankel singular values hs of the system.

For a proper system with E singular, the uncontrollable infinite eigenvalues of the pair (A,E) and the non-dynamic modes are eliminated using minimal realization techniques. The rank determinations in the performed reductions are based on rank revealing QR-decompositions with column pivoting if fast = true or the more reliable SVD-decompositions if fast = false.

The keyword arguments atol1, atol2, and rtol, specify, respectively, the absolute tolerance for the nonzero elements of A, B, C, D, the absolute tolerance for the nonzero elements of E, and the relative tolerance for the nonzero elements of A, B, C, D and E. The default relative tolerance is n*ϵ, where ϵ is the working machine epsilon and n is the order of the system sys. The keyword argument atol can be used to simultaneously set atol1 = atol and atol2 = atol.

RobustAndOptimalControl.hanusMethod
Wh = hanus(W)

Return Wh on Hanus form. Wh has twice the number of inputs, where the second half of the inputs are "actual inputs", e.g., potentially saturated. This is used to endow W with anti-windup protection. W must have an invertible D matrix and be minimum phase.

Ref: Sec 9.4.5 of Skogestad, "Multivariable Feedback Control: Analysis and Design"
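
Example (a minimal sketch; the weight below is chosen to be biproper and minimum phase as required, and the final check restates the property described above):

using RobustAndOptimalControl, ControlSystems
W  = ss(tf([1, 2], [1, 1e-6])) # biproper weight with invertible D
Wh = hanus(W)
Wh.nu == 2 * W.nu              # Wh takes [u; u_actual] as input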

RobustAndOptimalControl.hess_formMethod
sysm, T, HF = hess_form(sys)

Bring sys to Hessenberg form.

The Hessenberg form is characterized by A having upper Hessenberg structure. T is the similarity transform applied to the system such that

sysm ≈ similarity_transform(sys, T)

HF is the Hessenberg-factorization of A.

See also modal_form and schur_form

RobustAndOptimalControl.hinfgradMethod
∇A, ∇B, ∇C, ∇D, hn, ω = hinfgrad(sys; rtolinf=1e-8, kwargs...)
∇A, ∇B, ∇C, ∇D        = hinfgrad(sys, hn, ω)

Compute the gradient of the H∞ norm w.r.t. the statespace matrices A,B,C,D. If only a system is provided, the norm hn and the peak frequency ω are automatically calculated. kwargs are sent to hinfnorm2. Note, the default tolerance to which the norm is calculated is set smaller than default for hinfnorm2, gradients will be discontinuous with any non-finite tolerance, and sensitive optimization algorithms may require even tighter tolerance.

In cases where the maximum singular value is reached at more than one frequency, a random frequency is used.

If the system is unstable, the gradients are NaN. Strategies to find an initial stabilizing controllers are outlined in Apkarian and D. Noll, "Nonsmooth H∞ Synthesis" in IEEE Transactions on Automatic Control.

An rrule for ChainRules is defined using this function, so hn is differentiable with any AD package that derives its rules from ChainRules (only applies to the hn return value, not ω).

RobustAndOptimalControl.hinfnorm2Method
n, ω = hinfnorm2(sys::LTISystem; kwargs...)

A numerically robust version of hinfnorm using DescriptorSystems.jl
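
Example (the peak gain of a first-order low-pass filter is 1, attained at ω = 0):

using RobustAndOptimalControl, ControlSystems
G = ss(tf(1, [1, 1]))
n, ω = hinfnorm2(G) # n ≈ 1, ω ≈ 0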

For keyword arguments, see the docstring of DescriptorSystems.ghinfnorm, reproduced below

ghinfnorm(sys, rtolinf = 0.001, fast = true, offset = sqrt(ϵ), atol = 0, atol1 = atol, atol2 = atol, rtol = n*ϵ) -> (hinfnorm, fpeak)

Compute for a descriptor system sys = (A-λE,B,C,D) with the transfer function matrix G(λ) the H∞ norm hinfnorm (i.e., the peak gain of G(λ)) and the corresponding peak frequency fpeak, where the peak gain is achieved. The H∞ norm is infinite if the pole pencil A-λE has unstable zeros (i.e., sys has unstable poles). To check the stability, the eigenvalues of the pencil A-λE must have real parts less than -β for a continuous-time system or have moduli less than 1-β for a discrete-time system, where β is the stability domain boundary offset. The offset β to be used can be specified via the keyword parameter offset = β. The default value used for β is sqrt(ϵ), where ϵ is the working machine precision.

The keyword argument rtolinf specifies the relative accuracy for the computed infinity norm. The default value used for rtolinf is 0.001.

For a continuous-time system sys with E singular, a reduced order realization is determined first, without uncontrollable and unobservable nonzero finite and infinite eigenvalues of the corresponding pole pencil. The rank determinations in the performed reductions are based on rank revealing QR-decompositions with column pivoting if fast = true or the more reliable SVD-decompositions if fast = false.

The keyword arguments atol1, atol2, and rtol, specify, respectively, the absolute tolerance for the nonzero elements of matrices A, B, C, D, the absolute tolerance for the nonzero elements of E, and the relative tolerance for the nonzero elements of A, B, C, D and E. The default relative tolerance is n*ϵ, where ϵ is the working machine epsilon and n is the order of the system sys. The keyword argument atol can be used to simultaneously set atol1 = atol and atol2 = atol.

RobustAndOptimalControl.hinfpartitionMethod
P = hinfpartition(G, WS, WU, WT)

Transform a SISO or MIMO system G, with weighting functions WS, WU, WT into an LFT with an isolated controller, and write the resulting system, P(s), on a state-space form. Valid inputs for G are transfer functions (with dynamics, can be both MIMO and SISO, both in tf and ss forms). Valid inputs for the weighting functions are empty arrays, numbers (static gains), and LTISystems.

Note, system_mapping(P) is equal to -G.

RobustAndOptimalControl.hinfsignalsMethod
hinfsignals(P::ExtendedStateSpace, G::LTISystem, C::LTISystem)

Use the extended state-space model, a plant and the found controller to extract the closed loop transfer functions.

  • Pcl : w → z : From input to the weighted functions
  • S : w → e : From input to error
  • CS : w → u : From input to control
  • T : w → y : From input to output
RobustAndOptimalControl.hinfsynthesizeMethod
K, γ, mats = hinfsynthesize(P::ExtendedStateSpace; gtol = 1e-4, interval = (0, 20), verbose = false, tolerance = 1.0e-10, γrel = 1.01, transform = true, ftype = Float64, check = true)

Computes an H-infinity optimal controller K for an extended plant P such that ||F_l(P, K)||∞ < γ (lft(P, K)) for the smallest possible γ given P. The routine is known as the γ-iteration, and is based on the paper "State-space formulae for all stabilizing controllers that satisfy an H∞-norm bound and relations to risk sensitivity" by Glover and Doyle.

Arguments:

  • gtol: Tolerance for γ.
  • interval: The starting interval for the bisection.
  • verbose: Print progress?
  • tolerance: For detecting eigenvalues on the imaginary axis.
  • γrel: If γrel > 1, the optimal γ will be found by γ iteration after which a controller will be designed for γ = γopt * γrel. It is often a good idea to design a slightly suboptimal controller, both for numerical reasons, but also since the optimal controller may contain very fast dynamics. If γrel → ∞, the computed controller will approach the 𝑯₂ optimal controller. Getting a mix between 𝑯∞ and 𝑯₂ properties is another reason to choose γrel > 1.
  • transform: Apply a coordinate transform in order to tolerate a wider range of problem specifications.
  • ftype: construct problem matrices in higher precision for increased numerical robustness.
  • check: Perform a post-design check of the γ value achieved by the calculated controller. A warning is issued if the achieved γ differs from the γ calculated during design. If this warning is issued, consider using a higher-precision number type like ftype = BigFloat.

See the example folder for example usage.

RobustAndOptimalControl.hsvdMethod
hsvd(sys::AbstractStateSpace{Continuous})

Return the Hankel singular values of sys, computed as the eigenvalues of QP, where Q and P are the Gramians of sys.
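
Example (illustrative only; a random stable continuous-time system is assumed):

using RobustAndOptimalControl, ControlSystems
sys = ssrand(1, 1, 4) # random state-space model (stable by default)
hsvd(sys)             # four Hankel singular values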

RobustAndOptimalControl.input_comp_sensitivityMethod
input_comp_sensitivity(P,C)
input_comp_sensitivity(l::LQGProblem)

Transfer function from load disturbance to control signal.

  • "Input" signifies that the transfer function is from the input of the plant.
  • "Complimentary" signifies that the transfer function is to an output (in this case controller output)
         ▲
         │e₁
         │  ┌─────┐
d₁────+──┴──►  P  ├─────┬──►e₄
      │     └─────┘     │
      │                 │
      │     ┌─────┐    -│
 e₂◄──┴─────┤  C  ◄──┬──+───d₂
            └─────┘  │
                     │e₃
                     ▼
RobustAndOptimalControl.input_sensitivityMethod
input_sensitivity(P, C)
input_sensitivity(l::LQGProblem)

Transfer function from load disturbance to total plant input.

  • "Input" signifies that the transfer function is from the input of the plant.
         ▲
         │e₁
         │  ┌─────┐
d₁────+──┴──►  P  ├─────┬──►e₄
      │     └─────┘     │
      │                 │
      │     ┌─────┐    -│
 e₂◄──┴─────┤  C  ◄──┬──+───d₂
            └─────┘  │
                     │e₃
                     ▼
RobustAndOptimalControl.loop_diskmarginMethod
loop_diskmargin(P, C, args...; kwargs...)

Calculate the loop-at-a-time diskmargin for each output and input of P. See also diskmargin, sim_diskmargin. Ref: "An Introduction to Disk Margins", Peter Seiler, Andrew Packard, and Pascal Gahinet https://arxiv.org/abs/2003.04771

RobustAndOptimalControl.loop_scaleFunction
loop_scale(L::LTISystem, w = 0)

Find the optimal diagonal scaling matrix D such that D\L(iw)*D has a minimized condition number at frequency w. Applicable to square L only. Use loop_scaling to obtain D.

RobustAndOptimalControl.lqr3Method
lqr3(P::AbstractStateSpace, Q1::AbstractMatrix, Q2::AbstractMatrix, Q3::AbstractMatrix)

Calculate the feedback gain of the discrete LQR cost function augmented with control differences

\[x^{T} Q_1 x + u^{T} Q_2 u + Δu^{T} Q_3 Δu, \quad Δu = u(k) - u(k-1)\]

RobustAndOptimalControl.makeweightMethod
makeweight(low, f_mid, high)
makeweight(low, (f_mid, gain_mid), high)

Create a weighting function that goes from gain low at zero frequency, through gain gain_mid to gain high at ∞

Arguments:

  • low: A number specifying the DC gain
  • mid: A number specifying the frequency at which the gain is 1, or a tuple (freq, gain). If gain_mid is not specified, the geometric mean of high and low is used.
  • high: A number specifying the gain at ∞
using RobustAndOptimalControl, ControlSystems, Plots
W = makeweight(10, (5,2), 1/10)
bodeplot(W)
hline!([10, 2, 1/10], l=(:black, :dash), primary=false)
vline!([5], l=(:black, :dash), primary=false)
RobustAndOptimalControl.modal_formMethod
sysm, T, E = modal_form(sys; C1 = false)

Bring sys to modal form.

The modal form is characterized by being tridiagonal with the real values of eigenvalues of A on the main diagonal and the complex parts on the first sub and super diagonals. T is the similarity transform applied to the system such that

sysm ≈ similarity_transform(sys, T)

If C1, then an additional convention for SISO systems is used, that the C-matrix coefficient of real eigenvalues is 1. If C1 = false, the B and C coefficients are chosen in a balanced fashion.

E is an eigen factorization of A.

See also hess_form and schur_form

RobustAndOptimalControl.muplotFunction
muplot(sys, args...; hz=false)
muplot(LTISystem[sys1, sys2...], args...; hz=false)

Plot the structured singular values (assuming time-varying diagonal complex uncertainty) of the frequency response of the LTISystem(s). This plot is similar to sigmaplot, but scales the loop-transfer function to minimize the maximum singular value. Only applicable to square systems. A frequency vector w can be optionally provided.

If hz=true, the plot x-axis will be displayed in Hertz, the input frequency vector is still treated as rad/s.

kwargs is sent as argument to Plots.plot.

RobustAndOptimalControl.mvnyquistplotFunction
fig = mvnyquistplot(sys, w;  unit_circle=true, hz = false, kwargs...)

Create a Nyquist plot of the LTISystem. A frequency vector w must be provided.

  • unit_circle: if the unit circle should be displayed

If hz=true, the hover information will be displayed in Hertz, the input frequency vector is still treated as rad/s.

kwargs is sent as argument to plot.

Example

using RobustAndOptimalControl, ControlSystems, MonteCarloMeasurements, Plots
w = 2π .* exp10.(LinRange(-2, 2, 500))
W = makeweight(0.40, 15, 3) # frequency weight for uncertain dynamics
Pn = tf(1, [1/60, 1]) |> ss # nominal plant
d = δss(1,1)                # Uncertain dynamics

Pd = Pn*(I(1) + W*d)        # weighted dynamic uncertainty on the input of Pn
Pp = rand(Pd, 200)          # sample the uncertain plant
Gcl = lft(Pd, ss(-1))       # closed loop system
structured_singular_value(Gcl) # larger than 1 => not robustly stable
unsafe_comparisons(true)
mvnyquistplot(Pp, w, points=true) # MV Nyquist plot encircles origin for some samples => not robustly stable
RobustAndOptimalControl.named_ssMethod
named_ss(sys::AbstractStateSpace, name; x, y, u)

If a single name is provided, the outputs, inputs and states will be automatically named y,u,x with name as prefix.

RobustAndOptimalControl.named_ssMethod
named_ss(sys::AbstractStateSpace{T}; x, u, y)

Create a NamedStateSpace system. This kind of system uses names rather than integer indices to refer to states, inputs and outputs

Arguments:

  • sys: A system to add names to.
  • x: A list of symbols with names of the states.
  • u: A list of symbols with names of the inputs.
  • y: A list of symbols with names of the outputs.

Default names of signals if none are provided are x,u,y.

Example

G1 = ss(1,1,1,0)
G2 = ss(1,1,1,0)
s1 = named_ss(G1, x = :x, u = :u1, y=:y1)
s2 = named_ss(G2, x = :z, u = :u2, y=:y2)

s1[:y1, :u1] # Index using symbols

fb = feedback(s1, s2, r = :r) # 
RobustAndOptimalControl.named_ssMethod
named_ss(sys::ExtendedStateSpace;       kwargs...)
named_ss(sys::ExtendedStateSpace, name; kwargs...)

Assign names to an ExtendedStateSpace. If no specific names are provided for the signals z, y, w, u and states x, names will be generated automatically.

Arguments:

  • name: Prefix to add to all automatically generated names.
  • x
  • u
  • y
  • w
  • z
RobustAndOptimalControl.ncfmarginMethod
m, ω = ncfmargin(P, K)

Normalized coprime factor margin, defined as the inverse of

\[\left\Vert \begin{bmatrix} I \\ K \end{bmatrix} (I + PK)^{-1} \begin{bmatrix} I & P \end{bmatrix} \right\Vert_\infty\]

A margin ≥ 0.25-0.3 is reasonable for robustness.

If the controller K stabilizes P with margin m, then K will also stabilize P̃ if nugap(P, P̃) < m.

See also extended_gangoffour, diskmargin, controller_reduction_plot.
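
Example (an illustrative sketch; the plant and static-gain controller below are arbitrary):

using RobustAndOptimalControl, ControlSystems
P = ss(tf(1, [1, 1]))
C = ss(2.0)             # static-gain controller
m, ω = ncfmargin(P, C)
m                       # compare against the 0.25-0.3 rule of thumb above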

RobustAndOptimalControl.neglected_delayMethod
neglected_delay(Lmax)

Return a multiplicative weight to represent the uncertainty coming from neglecting the dynamics exp(-s*L) where L ≤ Lmax. "Multivariable Feedback Control: Analysis and Design" Ch 7.4.5

RobustAndOptimalControl.neglected_lagMethod
neglected_lag(τmax)

Return a multiplicative weight to represent the uncertainty coming from neglecting the dynamics 1/(s*τ + 1) where τ ≤ τmax. "Multivariable Feedback Control: Analysis and Design" Ch 7.4.5

RobustAndOptimalControl.nugapMethod
nugap(sys0::LTISystem, sys1::LTISystem; kwargs...)

Compute the ν-gap metric between two systems. See also ncfmargin.
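
Example (an illustrative sketch; the perturbed plant below is arbitrary):

using RobustAndOptimalControl, ControlSystems
P0 = ss(tf(1, [1, 1]))
P1 = ss(tf(1, [1.1, 1])) # a mildly perturbed plant
nugap(P0, P1)            # small ν-gap: the two models are close (see gnugap below for the return values)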

For keyword arguments, see the docstring of DescriptorSystems.gnugap, reproduced below

gnugap(sys1, sys2; freq = ω, rtolinf = 0.00001, fast = true, offset = sqrt(ϵ), 
       atol = 0, atol1 = atol, atol2 = atol, rtol = n*ϵ) -> (nugapdist, fpeak)

Compute the ν-gap distance nugapdist between two descriptor systems sys1 = (A1-λE1,B1,C1,D1) and sys2 = (A2-λE2,B2,C2,D2) and the corresponding frequency fpeak (in rad/TimeUnit), where the ν-gap distance achieves its peak value.

If freq = missing, the resulting nugapdist satisfies 0 <= nugapdist <= 1. The value nugapdist = 1 results if the winding number is different from zero, in which case fpeak = [].

If freq = ω, where ω is a given vector of real frequency values, the resulting nugapdist is a vector of pointwise ν-gap distances of the same dimension as ω, whose components satisfy 0 <= maximum(nugapdist) <= 1. In this case, fpeak is the frequency for which the pointwise distance achieves its peak value. All components of nugapdist are set to 1 if the winding number is different from zero, in which case fpeak = [].

The stability boundary offset, β, to be used to assess the finite zeros which belong to the boundary of the stability domain can be specified via the keyword parameter offset = β. Accordingly, for a continuous-time system, these are the finite zeros having real parts within the interval [-β,β], while for a discrete-time system, these are the finite zeros having moduli within the interval [1-β,1+β]. The default value used for β is sqrt(ϵ), where ϵ is the working machine precision.

Pencil reduction algorithms are employed to compute range and coimage spaces which perform rank decisions based on rank revealing QR-decompositions with column pivoting if fast = true or the more reliable SVD-decompositions if fast = false.

The keyword arguments atol1, atol2 and rtol, specify, respectively, the absolute tolerance for the nonzero elements of A1, A2, B1, B2, C1, C2, D1 and D2, the absolute tolerance for the nonzero elements of E1 and E2, and the relative tolerance for the nonzero elements of all above matrices. The default relative tolerance is n*ϵ, where ϵ is the working machine epsilon and n is the maximum of the orders of the systems sys1 and sys2. The keyword argument atol can be used to simultaneously set atol1 = atol, atol2 = atol.

The keyword argument rtolinf specifies the relative accuracy to be used to compute the ν-gap as the infinity norm of the relevant system according to [1]. The default value used for rtolinf is 0.00001.

Method: The evaluation of ν-gap uses the definition proposed in [1], extended to generalized LTI (descriptor) systems. The computation of winding number is based on enhancements covering zeros on the boundary of the stability domain and infinite zeros.

References:

[1] G. Vinnicombe. Uncertainty and feedback: H∞ loop-shaping and the ν-gap metric. Imperial College Press, London, 2001.

RobustAndOptimalControl.output_comp_sensitivityMethod
output_comp_sensitivity(P,C)
output_comp_sensitivity(l::LQGProblem)

Transfer function from measurement noise / reference to plant output.

  • "output" signifies that the transfer function is from the output of the plant.
  • "Complimentary" signifies that the transfer function is to an output (in this case plant output)
         ▲
         │e₁
         │  ┌─────┐
d₁────+──┴──►  P  ├─────┬──►e₄
      │     └─────┘     │
      │                 │
      │     ┌─────┐    -│
 e₂◄──┴─────┤  C  ◄──┬──+───d₂
            └─────┘  │
                     │e₃
                     ▼
RobustAndOptimalControl.output_sensitivityMethod
output_sensitivity(P, C)
output_sensitivity(l::LQGProblem)

Transfer function from measurement noise / reference to control error.

  • "output" signifies that the transfer function is from the output of the plant.
         ▲
         │e₁
         │  ┌─────┐
d₁────+──┴──►  P  ├─────┬──►e₄
      │     └─────┘     │
      │                 │
      │     ┌─────┐    -│
 e₂◄──┴─────┤  C  ◄──┬──+───d₂
            └─────┘  │
                     │e₃
                     ▼
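
A minimal sketch relating the two output functions (first-order plant and static controller chosen only for illustration):

P = ss(tf(1, [1, 1]))
C = ss(10.0)
S = output_sensitivity(P, C)        # S = (I + PC)⁻¹
T = output_comp_sensitivity(P, C)   # T = PC(I + PC)⁻¹
# S and T are complementary: S + T equals the identity at every frequency
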
RobustAndOptimalControl.robstabMethod
robstab(M0::UncertainSS, w=exp10.(LinRange(-3, 3, 1500)); kwargs...)

Return the robust stability margin of an uncertain model, defined as the inverse of the structured singular value. Currently, only diagonal complex perturbations are supported.
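
For example, continuing with the uncertain closed-loop system Gcl constructed in the mvnyquistplot example above:

rm = robstab(Gcl)   # robust stability margin; a value below 1 indicates lack of robust stability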

RobustAndOptimalControl.schur_formMethod
sysm, T, SF = schur_form(sys)

Bring sys to Schur form.

The Schur form is characterized by A being Schur with the real values of eigenvalues of A on the main diagonal. T is the similarity transform applied to the system such that

sysm ≈ similarity_transform(sys, T)

SF is the Schur-factorization of A.

See also modal_form and hess_form
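
A minimal sketch (the system below is made up for illustration):

sys = ss([-1 1 0; 0 -2 1; 0 0 -3], [0; 0; 1;;], [1 0 0], 0)
sysm, T, SF = schur_form(sys)
sysm ≈ similarity_transform(sys, T)   # the transformed system is similar to the original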

RobustAndOptimalControl.sensitivityMethod

See output_sensitivity

         ▲
         │e₁
         │  ┌─────┐
d₁────+──┴──►  P  ├─────┬──►e₄
      │     └─────┘     │
      │                 │
      │     ┌─────┐    -│
 e₂◄──┴─────┤  C  ◄──┬──+───d₂
            └─────┘  │
                     │e₃
                     ▼
RobustAndOptimalControl.show_constructionMethod
show_construction([io::IO,] sys::LTISystem; name = "temp", letb = true)

Print code to io that reconstructs sys.

  • letb: If true, the code is surrounded by a let block.
julia> sys = ss(tf(1, [1, 1]))
StateSpace{Continuous, Float64}
A = 
 -1.0
B = 
 1.0
C = 
 1.0
D = 
 0.0

Continuous-time state-space model

julia> show_construction(sys, name="Jörgen")
Jörgen = let
    JörgenA = [-1.0;;]
    JörgenB = [1.0;;]
    JörgenC = [1.0;;]
    JörgenD = [0.0;;]
    ss(JörgenA, JörgenB, JörgenC, JörgenD)
end
RobustAndOptimalControl.sim_diskmarginFunction
sim_diskmargin(P::LTISystem, C::LTISystem, σ::Real = 0)

Simultaneous diskmargin at both the outputs and inputs of P. Ref: "An Introduction to Disk Margins", Peter Seiler, Andrew Packard, and Pascal Gahinet, https://arxiv.org/abs/2003.04771. See also ncfmargin.
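
A minimal sketch (plant and controller made up for illustration):

P  = ss(tf(1, [1, 1, 0]))
C  = ss(1.0)
dm = sim_diskmargin(P, C)   # simultaneous disk margin at the inputs and outputs of P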

RobustAndOptimalControl.specificationplotFunction
specificationplot([S,CS,T], [WS,WU,WT])

This function visualizes a control synthesis performed with hinfsynthesize: the three weighting functions $W_S(s), W_U(s), W_T(s)$ are plotted inverted and scaled by γ against the corresponding closed-loop transfer functions $S(s), C(s)S(s), T(s)$, to verify visually that the specifications are met. This may be run using both MIMO and SISO systems.

Keyword args

  • wint: (-3, 5) frequency range (log10)
  • wnum: 101 number of frequency points
  • hz: true
  • nsigma: typemax(Int) number of singular values to show
  • s_labels: ["σ(S)", "σ(CS)", "σ(T)"]
  • w_labels: ["γ σ(Wₛ⁻¹)", "γ σ(Wᵤ⁻¹)", "γ σ(Wₜ⁻¹)"]

  • colors: [:blue, :red, :green] colors for $S$, $CS$ and $T$
RobustAndOptimalControl.stab_unstabMethod
stab, unstab = stab_unstab(sys; kwargs...)

Decompose sys into sys = stab + unstab where stab contains all stable poles and unstab contains the unstable poles. The decomposition is computed by DescriptorSystems.gsdec, whose docstring is reproduced below (stab_unstab calls it with job = "stable"):

gsdec(sys; job = "finite", smarg, fast = true, atol = 0, atol1 = atol, atol2 = atol, rtol = nϵ) -> (sys1, sys2)

Compute for the descriptor system sys = (A-λE,B,C,D) with the transfer function matrix G(λ), the additive spectral decomposition G(λ) = G1(λ) + G2(λ) such that G1(λ), the transfer function matrix of the descriptor system sys1 = (A1-λE1,B1,C1,D1), has only poles in a certain domain of interest Cg of the complex plane and G2(λ), the transfer function matrix of the descriptor system sys2 = (A2-λE2,B2,C2,0), has only poles outside of Cg.

The keyword argument smarg, if provided, specifies the stability margin for the stable eigenvalues of A-λE, such that, in the continuous-time case, the stable eigenvalues have real parts less than or equal to smarg, and in the discrete-time case, the stable eigenvalues have moduli less than or equal to smarg. If smarg = missing, the default values used are smarg = -sqrt(ϵ) for a continuous-time system and smarg = 1-sqrt(ϵ) for a discrete-time system, where ϵ is the machine precision of the working accuracy.

The keyword argument job, in conjunction with smarg, defines the domain of interest Cg, as follows:

for job = "finite", Cg is the whole complex plane without the point at infinity, and sys1 has only finite poles and sys2 has only infinite poles (default); the resulting A2 is nonsingular and upper triangular, while the resulting E2 is nilpotent and upper triangular;

for job = "infinite", Cg is the point at infinity, and sys1 has only infinite poles and sys2 has only finite poles and is the strictly proper part of sys; the resulting A1 is nonsingular and upper triangular, while the resulting E1 is nilpotent and upper triangular;

for job = "stable", Cg is the stability domain of eigenvalues defined by smarg, and sys1 has only stable poles and sys2 has only unstable and infinite poles; the resulting pairs (A1,E1) and (A2,E2) are in generalized Schur form with E1 upper triangular and nonsingular and E2 upper triangular;

for job = "unstable", Cg is the complement of the stability domain of the eigenvalues defined by smarg, and sys1 has only unstable and infinite poles and sys2 has only stable poles; the resulting pairs (A1,E1) and (A2,E2) are in generalized Schur form with E1 upper triangular and E2 upper triangular and nonsingular.

The keyword arguments atol1, atol2, and rtol, specify, respectively, the absolute tolerance for the nonzero elements of A, the absolute tolerance for the nonzero elements of E, and the relative tolerance for the nonzero elements of A and E. The default relative tolerance is n*ϵ, where ϵ is the working machine epsilon and n is the order of the system sys. The keyword argument atol can be used to simultaneously set atol1 = atol, atol2 = atol.

The separation of the finite and infinite eigenvalues is performed using rank decisions based on rank-revealing QR-decompositions with column pivoting if fast = true, or the more reliable SVD-decompositions if fast = false.
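
A hedged sketch of the decomposition (the system below is made up; poles is from ControlSystems):

sys = ss(tf(1, [1, -1])) + ss(tf(1, [1, 1]))   # parallel connection with one unstable and one stable pole
stab, unstab = stab_unstab(sys)
poles(stab)     # should contain only the stable pole at -1
poles(unstab)   # should contain only the unstable pole at +1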

RobustAndOptimalControl.structured_singular_valueMethod
μ = structured_singular_value(M; tol=1e-4, scalings=false, dynamic=false)

Compute (an upper bound of) the structured singular value μ for diagonal Δ of complex perturbations (other structures of Δ are not yet supported). M is assumed to be an (n × n × N_freq) array or a matrix.

We currently don't have any methods to compute a lower bound, but if all perturbations are complex the spectral radius ρ(M) is always a lower bound (usually not a good one).

If scalings = true, also return an n × nf matrix Dm with the diagonal scalings D such that

D = Diagonal(Dm[:, i])
σ̄(D\M[:,:,i]*D)

is minimized.

If dynamic = true, the perturbations are assumed to be time-varying Δ(t). In this case, the same scaling is used for all frequencies and the returned D if scalings=true is a vector d such that D = Diagonal(d).
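
A hedged sketch on a plain frequency-response array (the data below is random and only illustrates the expected shape):

M = randn(ComplexF64, 2, 2, 50)    # a made-up 2 × 2 × 50 frequency-response array
μ = structured_singular_value(M)   # upper bound of μ at each of the 50 frequencies
maximum(μ)                         # robust stability requires the bound to stay below one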

RobustAndOptimalControl.sumblockMethod
sumblock(ex::String; Ts = 0, n = 1)

Create a summation node that sums (or subtracts) vectors of length n.

Arguments:

  • Ts: Sample time
  • n: The length of the input and output vectors. Set n=1 for scalars.

Examples:

julia> sumblock("uP = vf + yL")
NamedStateSpace{Continuous, Int64}
D = 
 1  1

With state  names: 
     input  names: vf yL
     output names: uP


julia> sumblock("x_diff = xr - xh"; n=3)
NamedStateSpace{Continuous, Int64}
D = 
 1  0  0  -1   0   0
 0  1  0   0  -1   0
 0  0  1   0   0  -1

With state  names: 
     input  names: xr1 xr2 xr3 xh1 xh2 xh3
     output names: x_diff1 x_diff2 x_diff3
     

julia> sumblock("a = b + c - d")
NamedStateSpace{Continuous, Int64}
D = 
 1  1  -1

With state  names: 
     input  names: b c d
     output names: a
RobustAndOptimalControl.ussFunction
uss(d::AbstractVector{<:δ}, Ts = nothing)

Create a diagonal uncertain statespace object with the uncertain elements d on the diagonal.
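
For example (a minimal sketch):

d1, d2 = δc(), δc()   # two complex uncertain parameters with automatically generated names
Δ = uss([d1, d2])     # 2×2 diagonal uncertain statespace with d1 and d2 on the diagonal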

RobustAndOptimalControl.ussFunction
uss(D::AbstractArray, Δ, Ts = nothing)

If only a single D matrix is provided, it's treated as D11 if Δ is given, and as D22 if no Δ is provided.

RobustAndOptimalControl.ussFunction
uss(D11, D12, D21, D22, Δ, Ts = nothing)

Create an uncertain statespace object with only gain matrices (no dynamics).

RobustAndOptimalControl.vec2sysFunction
vec2sys(v::AbstractArray, ny::Int, nu::Int, ts = nothing)

Create a statespace system from the parameters

v = vec(sys) = [vec(sys.A); vec(sys.B); vec(sys.C); vec(sys.D)]

Use vec(sys) to create v.

This can be useful in order to convert to and from vectors for, e.g., optimization.

julia> sys  = ss(tf(1, [1, 1]))
StateSpace{Continuous, Float64}
A = 
 -1.0
B = 
 1.0
C = 
 1.0
D = 
 0.0

Continuous-time state-space model

julia> v    = vec(sys)
4-element Vector{Float64}:
 -1.0
  1.0
  1.0
  0.0

julia> sys2 = vec2sys(v, sys.ny, sys.nu)
StateSpace{Continuous, Float64}
A = 
 -1.0
B = 
 1.0
C = 
 1.0
D = 
 0.0

Continuous-time state-space model
RobustAndOptimalControl.δcFunction
δc(val::Complex = complex(0.0), radius::Real = 1.0, name)

Create a complex, uncertain parameter. If no name is given, a boring name will be generated automatically.

RobustAndOptimalControl.δrFunction
δr(val::Real = 0.0, radius::Real = 1.0, name)

Create a real, uncertain parameter. If no name is given, a boring name will be generated automatically.
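
A minimal sketch of creating uncertain parameters (the values are chosen only for illustration):

a = δr(2.0, 0.5)   # real uncertain parameter with nominal value 2 and radius 0.5
b = δc()           # complex uncertain parameter on the unit disk, name generated automatically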

LowLevelParticleFilters.AdvancedParticleFilterMethod
AdvancedParticleFilter(Nparticles, dynamics, measurement, measurement_likelihood, dynamics_density, initial_density; p = SciMLBase.NullParameters(), kwargs...)

See the docs for more information: https://baggepinnen.github.io/LowLevelParticleFilters.jl/stable/#AdvancedParticleFilter-1

LowLevelParticleFilters.DAEUnscentedKalmanFilterMethod
DAEUnscentedKalmanFilter(ukf; g, get_x_z, build_xz, xz0, threads=false)

An Unscented Kalman filter for differential-algebraic systems (DAE).

Ref: "Nonlinear State Estimation of Differential Algebraic Systems", Mandela, Rengaswamy, Narasimhan

Warning

This filter is still considered experimental and subject to change without respecting semantic versioning. Use at your own risk.

Arguments

  • ukf is a regular UnscentedKalmanFilter that contains dynamics(xz, u, p, t) that propagates the combined state xz(k) to xz(k+1) and a measurement function with signature (xz, u, p, t)
  • g(x, z, u, p, t) is a function that should fulfill g(x, z, u, p, t) = 0
  • get_x_z(xz) -> x, z is a function that decomposes xz into x and z
  • build_xz(x, z) is the inverse of get_x_z
  • xz0 the initial full state.
  • threads: If true, evaluates dynamics on sigma points in parallel. This typically requires the dynamics to be non-allocating (use StaticArrays) to improve performance.

Assumptions

  • The DAE dynamics is index 1 and can be written on the form
ẋ = f(x, z, u, p, t) # Differential equations
0 = g(x, z, u, p, t) # Algebraic equations
y = h(x, z, u, p, t) # Measurements

The measurements may be functions of both differential states x and algebraic variables z. Note, the actual dynamics and measurement functions stored in the internal ukf should have signatures (xz, u, p, t), i.e., they take the combined state containing both x and z in a single vector as dictated by the function build_xz. It is only the function g that is assumed to actually have the signature g(x,z,u,p,t).

LowLevelParticleFilters.ExtendedKalmanFilterType
ExtendedKalmanFilter(kf, dynamics, measurement)

A nonlinear state estimator propagating uncertainty using linearization.

An extended Kalman filter takes a standard Kalman filter as well as dynamics and measurement functions. The filter will linearize the dynamics using ForwardDiff. The dynamics and measurement functions are of the following form

x' = dynamics(x, u, p, t) + w
y  = measurement(x, u, p, t) + e

where w ~ N(0, R1), e ~ N(0, R2) and x(0) ~ d0

See also UnscentedKalmanFilter which is typically more accurate than ExtendedKalmanFilter. See KalmanFilter for detailed instructions on how to set up the Kalman filter kf.
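
A hedged sketch of constructing an ExtendedKalmanFilter (the model is made up; the inner KalmanFilter mainly supplies covariances, the initial distribution and dimensions):

using LowLevelParticleFilters, LinearAlgebra
nx, nu, ny = 2, 1, 1
A, B, C = [1.0 0.1; 0.0 1.0], [0.0; 0.1;;], [1.0 0.0]
dynamics(x, u, p, t)    = A*x .+ B*u .+ [0.0, 0.01*x[1]^2]   # mildly nonlinear toy dynamics
measurement(x, u, p, t) = C*x
kf  = KalmanFilter(A, B, C, zeros(ny, nu), Matrix(0.01I(nx)), Matrix(0.1I(ny)))
ekf = ExtendedKalmanFilter(kf, dynamics, measurement)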

LowLevelParticleFilters.KalmanFilterType
KalmanFilter(A,B,C,D,R1,R2,d0=MvNormal(R1); p = SciMLBase.NullParameters())

The matrices A,B,C,D define the dynamics

x' = Ax + Bu + w
y  = Cx + Du + e

where w ~ N(0, R1), e ~ N(0, R2) and x(0) ~ d0

The matrices can be time varying such that, e.g., A[:, :, t] contains the $A$ matrix at time index t. They can also be given as functions of the form

Afun(x, u, p, t) -> A

For maximum performance, provide statically sized matrices from StaticArrays.jl
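
A minimal sketch (an approximately discretized double integrator, made up for illustration):

using LowLevelParticleFilters, LinearAlgebra
A  = [1.0 0.1; 0.0 1.0]
B  = [0.005; 0.1;;]
C  = [1.0 0.0]
kf = KalmanFilter(A, B, C, zeros(1, 1), Matrix(0.01I(2)), Matrix(0.1I(1)))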

LowLevelParticleFilters.ParticleFilterMethod
ParticleFilter(num_particles, dynamics, measurement, dynamics_density, measurement_density, initial_density; p = SciMLBase.NullParameters())

See the docs for more information: https://baggepinnen.github.io/LowLevelParticleFilters.jl/stable/#Particle-filter-1

LowLevelParticleFilters.TupleProductType
TupleProduct(v::NTuple{N,UnivariateDistribution})

Create a product distribution where the individual distributions are stored in a tuple. Supports mixed/hybrid Continuous and Discrete distributions
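
For example (a hedged sketch mixing one continuous and one discrete marginal):

using LowLevelParticleFilters, Distributions
d = TupleProduct((Normal(0.0, 1.0), Poisson(3)))   # product of a continuous and a discrete distribution
rand(d)                                            # sample from the product distribution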

LowLevelParticleFilters.UnscentedKalmanFilterType
UnscentedKalmanFilter(dynamics, measurement, R1, R2, d0=MvNormal(Matrix(R1)); p = SciMLBase.NullParameters(), ny, nu)

A nonlinear state estimator propagating uncertainty using the unscented transform.

The dynamics and measurement functions are of the following form

x' = dynamics(x, u, p, t) + w
y  = measurement(x, u, p, t) + e

where w ~ N(0, R1), e ~ N(0, R2) and x(0) ~ d0

The matrices R1, R2 can be time varying such that, e.g., R1[:, :, t] contains the $R_1$ matrix at time index t. They can also be given as functions of the form

Rfun(x, u, p, t) -> R

For maximum performance, provide statically sized matrices from StaticArrays.jl

ny, nu indicate the number of outputs and inputs.
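
A minimal sketch (scalar toy model made up for illustration):

using LowLevelParticleFilters
dynamics(x, u, p, t)    = 0.9 .* x .+ u
measurement(x, u, p, t) = x
ukf = UnscentedKalmanFilter(dynamics, measurement, [0.01;;], [0.1;;]; ny = 1, nu = 1)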

LowLevelParticleFilters.commandplotFunction
commandplot(pf, u, y, p=parameters(pf); kwargs...)

Produce a helpful plot. For customization options (kwargs...), see ?pplot. After each time step, a command from the user is requested.

  • q: quit
  • s n: step n steps
LowLevelParticleFilters.correct!Function
ll, e = correct!(f, u, y, p = parameters(f), t = index(f))

Update the state/covariance/weights based on the measurement y; returns the loglikelihood and prediction error (the error is always 0 for particle filters).

LowLevelParticleFilters.debugplotFunction
debugplot(pf, u, y, p=parameters(pf); runall=false, kwargs...)

Produce a helpful plot. For customization options (kwargs...), see ?pplot.

  • runall=false: if true, runs all time steps before displaying (faster); if false, displays the plot after each time step.

The generated plot becomes quite heavy. Initially, try limiting your input to 100 time steps to verify that it doesn't crash.

LowLevelParticleFilters.forward_trajectoryFunction
sol = forward_trajectory(kf::AbstractKalmanFilter, u::Vector, y::Vector, p=parameters(kf))

Run a Kalman filter forward

Returns a KalmanFilteringSolution with the following fields:

  • x: predictions
  • xt: filtered estimates
  • R: predicted covariance matrices
  • Rt: filter covariances
  • ll: loglik

sol can be plotted

plot(sol::KalmanFilteringSolution; plotx = true, plotxt=true, plotu=true, ploty=true)
LowLevelParticleFilters.forward_trajectoryFunction
sol = forward_trajectory(pf, u::AbstractVector, y::AbstractVector, p=parameters(pf))

Run the particle filter for a sequence of inputs and measurements. Returns a solution with x, w, we, ll = particles, weights, expweights (exponentiated weights) and loglikelihood.

If MonteCarloMeasurements.jl is loaded, you may transform the output particles to Matrix{MonteCarloMeasurements.Particles} using Particles(x,we). Internally, the particles are then resampled such that they all have unit weight. This is convenient for making use of the plotting facilities of MonteCarloMeasurements.jl.

sol can be plotted

plot(sol::ParticleFilteringSolution; nbinsy=30, xreal=nothing, dim=nothing)
LowLevelParticleFilters.logsumexp!Function
ll = logsumexp!(w, we [, maxw])

Normalizes the weight vector w and returns the weighted log-likelihood

https://arxiv.org/pdf/1412.8695.pdf eq 3.8 for p(y) https://discourse.julialang.org/t/fast-logsumexp/22827/7?u=baggepinnen for stable logsumexp

LowLevelParticleFilters.mean_trajectoryMethod
x,ll = mean_trajectory(pf, u::Vector{Vector}, y::Vector{Vector}, p=parameters(pf))

This function resets the particle filter to the initial state distribution before running.

LowLevelParticleFilters.metropolisFunction
metropolis(ll::Function(θ), R::Int, θ₀::Vector, draw::Function(θ) = naive_sampler(θ₀))

Performs MCMC sampling using the marginal Metropolis(-Hastings) algorithm. draw = θ -> θ' samples a new parameter vector given an old parameter vector; the proposal distribution must be symmetric, e.g., a Gaussian. R is the number of iterations. See also log_likelihood_fun.

Example:

filter_from_parameters(θ) = ParticleFilter(N, dynamics, measurement, MvNormal(n,exp(θ[1])), MvNormal(p,exp(θ[2])), d0)
priors = [Normal(0,0.1),Normal(0,0.1)]
ll     = log_likelihood_fun(filter_from_parameters,priors,u,y,1)
θ₀ = log.([1.,1.]) # Initial point
draw = θ -> θ .+ rand(MvNormal(0.1ones(2))) # Function that proposes new parameters (has to be symmetric)
burnin = 200 # If using threaded call, provide number of burnin iterations
# @time theta, lls = metropolis(ll, 2000, θ₀, draw) # Run single threaded
# thetam = reduce(hcat, theta)'
@time thetalls = LowLevelParticleFilters.metropolis_threaded(burnin, ll, 5000, θ₀, draw) # run on all threads, will provide (5000-burnin)*nthreads() samples
histogram(exp.(thetalls[:,1:2]), layout=3)
plot!(thetalls[:,3], subplot=3) # if threaded call, log likelihoods are in the last column
LowLevelParticleFilters.predict!Function
predict!(f, u, p = parameters(f), t = index(f))

Move filter state forward in time using dynamics equation and input vector u.

LowLevelParticleFilters.simulateFunction
x,u,y = simulate(f::AbstractFilter, T::Int, du::Distribution, p=parameters(f), [N]; dynamics_noise=true)
x,u,y = simulate(f::AbstractFilter, u, p=parameters(f); dynamics_noise=true)

Simulate the dynamical system forward in time for T steps, or for the duration of u. Returns the state sequence, inputs and measurements. du is a distribution of random inputs.

If MonteCarloMeasurements.jl is loaded, the argument N::Int can be supplied, in which case N simulations are done and the result is returned in the form of Vector{MonteCarloMeasurements.Particles}.
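
A hedged usage sketch, where f is any previously constructed filter (e.g., the KalmanFilter above):

du = MvNormal(1, 1.0)            # distribution of random inputs
x, u, y = simulate(f, 100, du)   # simulate 100 steps; returns states, inputs and measurements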

LowLevelParticleFilters.smoothFunction
xT,RT,ll = smooth(kf::KalmanFilter, u::Vector, y::Vector, p=parameters(kf))

Returns smoothed estimates of the state x and covariance R, given all input/output data u, y.

LowLevelParticleFilters.update!Function
ll, e = update!(f::AbstractFilter, u, y, p = parameters(f), t = index(f))

Perform one step of predict! and correct!; returns the loglikelihood and prediction error.
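
A hedged sketch of the filtering loop that update! represents, written out with correct! and predict! for a filter f and signal vectors u, y (the exact internal ordering is an assumption):

for t in eachindex(u)
    ll, e = correct!(f, u[t], y[t])   # measurement update using y[t]
    predict!(f, u[t])                 # time update using the dynamics and u[t]
end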