Control Library

The JuliaSim control library implements functionality for the design, simulation and optimization of control systems.

JuliaSimControl builds upon ModelingToolkit.jl and the JuliaControl ecosystem, providing a wide array of modeling, simulation, analysis and design methods for every step in the design and implementation of control systems.

Why JuliaSimControl?

JuliaSim handles the entire modeling, simulation and control workflow, from component-based modeling from first principles, through parameter estimation using data and simulation, to the analysis, design and implementation of control systems.

  • JuliaSim is a comprehensive suite of tools for modeling and simulation. The underlying SciML ecosystem provides an unparalleled set of tools ranging from state-of-the-art workhorses like ODE solvers to cutting-edge methods like Neural PDEs.
  • ModelingToolkit is the first acausal modeling language embedded in a high-performance, high-level language. This allows you to build component-based models that are both easy to understand, modify and maintain, and integrate naturally with surrounding, non-modeling code.
  • JuliaSim and all of the surrounding tools are entirely implemented in Julia. Julia is a modern, high-performance, open-source programming language that is easy to learn and use. Julia allows you to quickly prototype and test your ideas, and then deploy them in production without translating the code to a lower-level language, making it a great choice for the entire modeling and control workflow.
  • JuliaSim Control offers a wide selection of tools for analysis and synthesis of control systems, ranging from classical PID control, through $\mathcal{H}_\infty$ and $\mathcal{H}_2$ synthesis, to advanced Model-Predictive Control using detailed ModelingToolkit models or Neural Surrogates.
  • JuliaSim Control is the first and only control-systems library that offers a fully differentiable environment for control design. This allows you to use gradient-based optimization methods to tune your controllers, perform gradient-based co-design, and use control systems as part of larger, differentiable machine-learning workflows.

Modeling and simulation

JuliaSim builds upon ModelingToolkit.jl, a symbolic, acausal modeling framework. ModelingToolkit.jl allows you to model component-based physical systems, making it easy to build detailed plant models out of reusable components. Learn more about ModelingToolkit in the documentation or in the tutorial Modeling for control using ModelingToolkit.

Under the surface, ModelingToolkit uses DifferentialEquations.jl to solve ODEs and DAEs. This interface can also be used directly; learn more about this in the documentation.

Analysis and design of linear control systems

JuliaControl contains a wide range of tools for analysis and design of linear systems; learn more about this ecosystem here.

To learn how to work with linear system types, such as linear state-space systems and transfer functions, as well as basic control analysis and design, consult the ControlSystems.jl documentation. To learn about robust and optimal linear control, consult the documentation of RobustAndOptimalControl.jl. Both of these documentation sites contain examples and tutorials on their respective topics. JuliaSim extends the functionality of the JuliaControl ecosystem in several ways, described in this documentation.

Getting started with JuliaSimControl

JuliaSimControl can be used directly in the JuliaSim IDE, but can also be installed locally using these instructions.

The following sections contain high-level overviews of how to approach the different tools available in JuliaSimControl. While these sections are intended for novice users, they may also be useful to advanced users looking for an introduction to the tools offered in JuliaSimControl.


Tutorials and reference documentation can be found in the sidebar on the left (hamburger menu on mobile).

Control architecture cheat sheet

When faced with a control problem for the first time, the number of options available can be overwhelming. This decision tree can help novice users find the right tool for the job, and advanced users find appropriate tutorials on how to use the tools implemented in JuliaSimControl.

The first question to ask yourself is often whether you are solving a regulation problem or a reference-tracking problem, i.e., are you trying to keep the output of your system steady, or are you trying to make it move in a desired way?



Regulation

When solving regulation problems, we are often primarily concerned with the ability of the controller to reject disturbances. A prime example is that of a temperature-controlled room, where the primary concern is to keep the temperature steady, and disturbances affecting the temperature, such as people entering and leaving the room or the sun shining through the window, are to be rejected. In this case, the reference is often constant, and the reference response of the controller is less of a concern.

In situations like this, the first step is usually to tune a feedback controller for disturbance rejection. If the system has a single input and a single output (SISO), a PID controller is a common choice. See the documentation on PID Autotuning for a user-friendly interface to automatic tuning of PID controllers. If you instead have a ModelingToolkit model with a predetermined structure of the control system, see the tutorial on Automatic tuning of structured controllers.

If references may change every now and then and the controller has a poor response to reference changes, consider making use of a controller structure with two degrees of freedom. The simplest alternative in this case is to let, e.g., the P and D parts of a PID controller act on the measurement only, and let the integral part be the only part that acts on the reference. This approach is supported by the LimPID block in ModelingToolkitStandardLibrary through the parameters wp, wd, as well as by DiscretePIDs.jl. A more sophisticated approach that affords more options to shape the reference response is to make use of a reference prefilter or a trajectory generator. See the tutorials Modeling for control using ModelingToolkit, Control design for a pendulum on a cart and MPC for autonomous lane changes for examples of how to use these tools, as well as the package TrajectoryLimiters.jl.
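The setpoint-weighting idea can be illustrated with a minimal discrete-time PID sketch in which the proportional and derivative parts act on weighted errors while the integrator sees the full error. The type and field names below are hypothetical, chosen for illustration only; the LimPID block and DiscretePIDs.jl provide production-quality implementations of the same idea.

```julia
# Discrete-time PID with setpoint weighting: the P part acts on wp*r - y and
# the D part on wd*r - y, while the I part acts on the full error r - y.
# With wp = wd = 0, only the integrator responds to reference changes,
# giving a smooth reference response. Hypothetical sketch, for illustration.
mutable struct SetpointWeightedPID
    kp::Float64; ki::Float64; kd::Float64
    wp::Float64; wd::Float64
    Ts::Float64       # sample time
    I::Float64        # integrator state
    d_old::Float64    # previous derivative input, for backward difference
end

function (pid::SetpointWeightedPID)(r, y)
    e  = r - y                     # full error drives the integrator
    ep = pid.wp * r - y            # weighted error for the P part
    ed = pid.wd * r - y            # weighted error for the D part
    u  = pid.kp * ep + pid.I + pid.kd * (ed - pid.d_old) / pid.Ts
    pid.I += pid.ki * e * pid.Ts   # forward-Euler integration
    pid.d_old = ed
    u
end
```

With `wp = wd = 0`, a reference step produces no immediate control jump; the control signal instead ramps up through the integrator.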

For more complex regulation problems, such as when controlling systems with multiple inputs and multiple outputs (MIMO), JuliaSimControl offers a wide range of synthesis and analysis methods, including LQG control, $\mathcal{H}_\infty$ synthesis and Model-Predictive Control (MPC).

Of these alternatives, MPC is particularly advantageous when the problem is constrained, e.g., by limits on the control inputs, states or outputs.

Reference tracking

When solving reference-tracking problems, we are often primarily concerned with the ability of the controller to track a reference. A prime example is that of a robot arm, where the goal is to make the end-effector move in a desired way, i.e., to make it follow a reference trajectory.

The first question to ask when designing a controller for a reference-tracking application is whether or not the performance requirements are high. If the requirements are modest, tuning a simple feedback controller to optimize the reference-tracking performance may be sufficient. To this end, we may use the optimization-based automatic tuning described in Automatic tuning of structured controllers.

If, on the other hand, the performance requirements are tough, the best approach is often to make use of a trajectory generator and model-based feedforward, or Model-Predictive Control (MPC). A trajectory generator is responsible for generating a dynamically feasible reference trajectory, i.e., a trajectory that is physically realizable by the system. For systems where we are controlling positions and velocities, this requires that the reference trajectory has bounded acceleration, i.e., the position reference has to be at least a $C^1$ function, one that is once continuously differentiable.

The model-based feedforward makes use of an inverse model that maps the reference trajectory to a feedforward signal that is added to the feedback-control signal. In the absence of model errors, this feedforward signal makes the system follow the reference trajectory perfectly. In the presence of model errors (which are always present), the feedback signal is responsible for correcting the control error due to model mismatch and external disturbances. See the tutorials Feedforward using an inverse model, MPC for autonomous lane changes and Iterative-Learning Control for examples using these methods.

Choosing a controller type

Once we have chosen a controller architecture, we choose a controller type. A surprising number of control problems can be solved with a PID controller in one way or another, but many other alternatives are available that may be better suited for a particular problem. The choice of controller type is often dictated by external factors, and when it is not, it is often a matter of taste. Any given control problem can be solved in many different ways, and this diagram is thus not to be taken as a strict guide, but rather as a preliminary starting point for the design process.


There are two strong dichotomies among control problems that dictate the choice of controller: linear vs. nonlinear, and SISO vs. MIMO.

Linear SISO problems are often adequately solved using PID controllers, and reference tracking can in these situations be handled by a trajectory generator (example) and model-based feedforward (example). A model-based feedforward is not limited to linear models; the use of a nonlinear inverse model together with linear feedback is common in, e.g., robotics. Linear MIMO problems can, if the system is square and the interactions between loops are not too strong, be solved using independent PID controllers (example). Strongly interacting systems are often better approached using MIMO controller types such as LQG controllers and $\mathcal{H}_\infty$ controllers (example).
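An LQG design combines an LQR state-feedback gain with a Kalman-filter observer. The sketch below uses ControlSystems.jl on a SISO example for brevity (the same functions handle MIMO systems); the plant and all weight matrices are made up for illustration.

```julia
using ControlSystems

# LQG sketch: state feedback from Kalman-filter state estimates.
P  = ss(tf(1, [1, 1, 1]))         # example plant in state-space form (2 states)
Q1 = [1.0 0.0; 0.0 1.0]           # state-penalty matrix (illustrative)
Q2 = fill(1.0, 1, 1)              # input-penalty matrix (illustrative)
L  = lqr(P, Q1, Q2)               # optimal state-feedback gain, u = -L*x̂
R1 = [1.0 0.0; 0.0 1.0]           # process-noise covariance (illustrative)
R2 = fill(1.0, 1, 1)              # measurement-noise covariance (illustrative)
K  = kalman(P, R1, R2)            # Kalman-filter (observer) gain
C  = observer_controller(P, L, K) # combine into an output-feedback controller
```

The weights `Q1, Q2, R1, R2` are the main tuning knobs: larger state penalties give faster regulation at the cost of larger control signals.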

Nonlinear SISO problems are often solved using gain scheduling, i.e., by designing several different linear controllers and switching or interpolating between them depending on the current operating point. Nonlinear MIMO problems and problems with tight constraints are often solved using MPC.

If significant disturbances affect the system, it may be worthwhile to model them. Several controller types allow easy incorporation of such disturbance models for improved disturbance rejection. See the tutorials Disturbance modeling and rejection with MPC controllers, MPC with model estimated from data and Disturbance modeling in ModelingToolkit for examples.

Control problems with a clear economic objective, such as minimizing the energy consumption of a system, are often solved using optimal-control methods such as MPC (example).

Obtaining a model for control

While simple controllers such as PID controllers can be tuned by experimentation on the controlled system, many more sophisticated control-synthesis methods require a model of the system to be controlled. The model may also be used for robustness analysis, as well as for model-based feedforward. A model may be obtained in several different ways, some of which are outlined below.

Modeling from first principles

A first-principles model is derived from the laws of physics governing the system. Such a model can vary in complexity depending on which physical laws and phenomena we choose to include. While simple models can be encoded by manually defining the dynamics of the system on the form $\dot x = f(x, u, p, t)$, more complex models may be more conveniently defined using a symbolic modeling language such as ModelingToolkit.jl. See the tutorial Modeling for control using ModelingToolkit for an introduction to this approach.
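As a concrete example of the form $\dot x = f(x, u, p, t)$, a damped pendulum with a torque input can be written as a plain Julia function (the model and parameter names are chosen for illustration):

```julia
# Dynamics on the form ẋ = f(x, u, p, t): a damped pendulum driven by a
# torque input u, with state x = [θ, ω] (angle and angular velocity).
function pendulum(x, u, p, t)
    θ, ω = x
    g, L, b = p                       # gravity, pendulum length, damping
    [ω, -g / L * sin(θ) - b * ω + u]  # [θ̇, ω̇]
end
```

A function like this can be passed to an ODE solver from DifferentialEquations.jl, or linearized to obtain a model for linear control design.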

Fitting first-principles models to data

Models typically include several parameters that are not known a priori. These parameters may be estimated by fitting the model to experimental data obtained from the system. JuliaSim ModelOptimizer is a suite of tools to help you estimate parameters in this situation.

Modeling directly from data

Modeling from first principles may sometimes be too time consuming and expensive, and sometimes the physics governing the system is unknown. In such situations, performing experiments on the system and building a black-box model from data may be more effective, a process commonly referred to as system identification. The simplest input-output models of dynamical systems are linear time-invariant (LTI) models; these can be obtained using any of the methods from ControlSystemIdentification.jl. Linear models often work surprisingly well, in particular for regulation problems where the task of the controller is to keep the output of the system fixed at a reference point. If a linear model is found to be inadequate, we may attempt to fit a nonlinear model, e.g., using DataDrivenDiffEq.jl.
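A minimal identification workflow might look as follows, here fitting a linear state-space model to simulated data with ControlSystemIdentification.jl. The "true" plant and the excitation signal are made up for illustration; in practice the data would come from experiments on the real system.

```julia
using ControlSystems, ControlSystemIdentification, Random

# Simulate input-output data from a known plant, then fit a linear
# state-space model of order 2 using subspace identification.
Random.seed!(1)                        # reproducible excitation
Ts = 0.1                               # sample time
G  = c2d(tf(1, [1, 0.5, 1]), Ts)       # "true" plant, discretized
u  = randn(1, 1000)                    # random excitation signal
y  = lsim(G, u).y                      # simulated (noise-free) output
d  = iddata(y, u, Ts)                  # package the experiment data
Gest = subspaceid(d, 2)                # estimated 2-state model
```

Real measurements contain noise, so validating the estimated model against held-out data is an essential final step.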

Obtaining a linear model from a nonlinear model

Several synthesis and analysis methods in the field of control theory require a linear model. A nonlinear model may be linearized around an operating point in order to obtain a simplified, linear model. The page Linear analysis describes several methods for linearization.
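To illustrate the idea, a model on the form $\dot x = f(x, u)$ can be linearized numerically with central differences. The `linearize` helper below is a hypothetical sketch written for this example, not the product's API, and the pendulum dynamics are made up for illustration.

```julia
# Numerically linearize ẋ = f(x, u) around an operating point (x0, u0)
# with central differences, yielding ẋ ≈ A (x - x0) + B (u - u0).
# Hypothetical helper, for illustration only.
function linearize(f, x0, u0; h = 1e-6)
    nx, nu = length(x0), length(u0)
    A = zeros(nx, nx); B = zeros(nx, nu)
    for i in 1:nx
        e = zeros(nx); e[i] = h
        A[:, i] = (f(x0 + e, u0) - f(x0 - e, u0)) / 2h   # ∂f/∂xᵢ
    end
    for i in 1:nu
        e = zeros(nu); e[i] = h
        B[:, i] = (f(x0, u0 + e) - f(x0, u0 - e)) / 2h   # ∂f/∂uᵢ
    end
    A, B
end

f(x, u) = [x[2], -9.81 * sin(x[1]) + u[1]]   # pendulum-like example dynamics
A, B = linearize(f, [0.0, 0.0], [0.0])       # linearize around the origin
```

The methods described on the Linear analysis page compute these Jacobians symbolically or with automatic differentiation rather than finite differences.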

Robustness analysis

When tuning controllers using a model of the system, we should always perform a robustness analysis to ensure that the controller is robust to model mismatch, variations over time, etc. This is particularly important when using some form of optimization-based control, since pushing the boundaries of performance requires a model with increasingly high fidelity.

For SISO systems (single input, single output), classical robustness measures include the gain margin and the phase margin. These can be computed using the functions margin and marginplot. The gain margin tells us how much the gain of the system can vary before the closed-loop system goes unstable. There are several reasons why a closed-loop system must be robust w.r.t. gain variations:

  • The model of the system is inaccurate, e.g., due to modeling errors, linearization or other simplifications.
  • The gain may vary with time, e.g., due to changes in the load on the system or changes in temperature.

The phase margin similarly tells us how much the phase of the system can vary before the closed-loop system goes unstable. It is common to aim for a phase margin of 30-60 degrees. A large phase margin also guards us against unexpected delays in the system, such as delays in reading from a sensor, or in communicating a change to an actuator.
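As a brief sketch using ControlSystems.jl, both margins of a loop-transfer function can be computed with the margin function; the plant and controller below are made up for illustration.

```julia
using ControlSystems

# Classical margins of the loop-transfer function L = P*C.
P = tf(1, [1, 1, 1, 0])        # example plant: integrator with second-order lag
C = tf(0.5)                    # proportional controller, for illustration
wgm, gm, wpm, pm = margin(P * C)
# gm: factor by which the loop gain may increase before instability
# pm: phase lag (degrees) that can be added before instability
```

For this loop, the gain margin is 2 (the loop gain may double) and the phase margin is roughly 50 degrees, within the commonly recommended range.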

Slightly more sophisticated robustness measures include computing the peak gains of the sensitivity function and the complementary sensitivity function, which can be computed using the functions sensitivity, comp_sensitivity as well as using the functions gangoffour and gangoffourplot. These concepts are all outlined in the video tutorial on basic usage of robustness analysis with JuliaControl, referenced below.
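The peak gains can be computed as $\mathcal{H}_\infty$ norms of the corresponding sensitivity functions, sketched here with ControlSystems.jl on a made-up plant and controller:

```julia
using ControlSystems

# Peak gains of the sensitivity function S = 1/(1 + PC) and the
# complementary sensitivity function T = PC/(1 + PC); lower peaks
# indicate a more robust closed loop.
P = tf(1, [1, 1, 1, 0])        # example plant
C = tf(0.5)                    # example proportional controller
S = sensitivity(P, C)          # sensitivity function
T = comp_sensitivity(P, C)     # complementary sensitivity function
Ms, _ = hinfnorm(S)            # peak gain of S and peak frequency
Mt, _ = hinfnorm(T)            # peak gain of T and peak frequency
```

A common rule of thumb is to aim for a sensitivity peak Ms below about 2 (6 dB); gangoffour and gangoffourplot compute all four closed-loop transfer functions at once.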

Additional tutorials performing robustness analysis are DC Motor with PI-controller and Robust MPC tuning using the Glover McFarlane method.

For MIMO systems (multiple inputs, multiple outputs), gain and phase margins are considerably less useful measures of robustness, while the peak gains of the sensitivity functions remain useful. Other, more advanced measures of robustness applicable also to MIMO systems are diskmargin (also mentioned in the video above) and ncfmargin.

Comparison of synthesis methods

The following table indicates which control synthesis methods are applicable in different scenarios. A green circle (🟢) indicates that a particular synthesis method is well suited for the situation, an orange diamond (🔶) indicates that a match is possible but not ideal, while a red square (🟥) indicates that a method in its standard form is ill suited for the situation. The table is not exhaustive, and is intended to give a rough overview of the applicability of different synthesis methods. For more information, see the tutorials on the individual synthesis methods. Several methods can be adapted to handle situations for which they are indicated as poor choices, such as using several independent PID controllers to control square MIMO systems with weak coupling.

Synthesis method   | SISO | SIMO | MISO | MIMO | Disturbance models | Uncertainty models | Time-varying system | Nonlinear system | Constraints
Cascade PID        | 🟢   | 🟢   | 🟥   | 🟥   | 🟥                 | 🟥                 | 🟥                  | 🟥               | 🟥
Mid-ranging control| 🟢   | 🟥   | 🟢   | 🟥   | 🟥                 | 🟥                 | 🟥                  | 🟥               | 🟥
Gain-scheduled PID | 🟢   | 🟥   | 🟥   | 🟥   | 🟥                 | 🟥                 | 🟥                  | 🟢               | 🟥
  • An example of Cascaded PID controllers is provided under Automatic tuning of structured controllers.
  • Mid-ranging control refers to a control strategy where one controller handles large input variations but slowly, while a second controller handles small input variations quickly. The slow controller is in this setup designed to keep the fast controller in the middle of its operating range to avoid saturation, hence the name mid-ranging.
  • Gain scheduling refers to the variation of controller gains depending on some measurable quantity. As an example, parameters of a PID controller for an electrical motor may be varied depending on the moment of inertia of the motor load. This is a common and simple method of realizing a nonlinear controller. Gain scheduling is not limited to PID control, and may also be used for other linear control architectures, such as LQR or $\mathcal{H}_\infty$ controllers. A tutorial using gain scheduling is available here: Batch Linearization and gain scheduling.
  • LQG-control refers to state-feedback control from states estimated with a Kalman filter, several tutorials are available on this topic, e.g., Disturbance modeling and rejection with LQG controllers.
  • $\mathcal{H}_\infty$-control refers to the design of controllers that minimize a worst-case objective over frequencies of the closed-loop system. $\mathcal{H}_\infty$-synthesis is often used as a component in robust control design. See the tutorials $\mathcal{H}_\infty$ control design and Mixed-sensitivity $\mathcal{H}_\infty$ control design for examples.
  • The method of Glover-McFarlane is a robust-control design method that is based on $\mathcal{H}_\infty$-synthesis. This method prompts the user to perform an initial (possibly MIMO) loop-shaping design, followed by an automatic robustification procedure. See the documentation for glover_mcfarlane and the tutorials Control design for a pendulum on a cart and Robust MPC tuning using the Glover McFarlane method for examples.
  • Model-predictive control (MPC) is a modern feedback-control strategy that is based on repeated optimization of a cost function over a finite time horizon. MPC natively handles nonlinear, time-varying MIMO systems with constraints, and is thus a very powerful tool for control design. This power comes at a cost of increased computational complexity compared to more traditional control strategies. See Model-Predictive Control (MPC) for the documentation of the MPC functionality in JuliaSimControl, as well as the numerous tutorials available.

Comparison of analysis methods

The following table indicates which closed-loop analysis methods are applicable in different scenarios. A green circle (🟢) indicates that a particular analysis method is well suited for the situation, an orange diamond (🔶) indicates that a method can be used to gain some insight but its use is not ideal, while a red square (🟥) indicates that a method in its standard form is ill suited for the analysis task.

Analysis method                    | SISO | MIMO | Modeled uncertainties | Time-varying system | Nonlinear system | Constraints
Gain margin                        | 🟢   | 🟥   | 🟥                    | 🟥                  | 🟥               | 🟥
Phase margin                       | 🟢   | 🟥   | 🟥                    | 🟥                  | 🟥               | 🟥
Peak gain of sensitivity functions | 🟢   | 🟢   | 🔶                    | 🟥                  | 🟥               | 🟥
NCF margin                         | 🟢   | 🟢   | 🔶                    | 🟥                  | 🟥               | 🟥
Structured singular value          | 🟢   | 🟢   | 🟢                    | 🔶                  | 🔶               | 🟥
Lyapunov-function search           | 🟢   | 🟢   | 🟥                    | 🟢                  | 🟢               | 🟢

Of these methods, the first four are well established and easy to use, while the structured singular value and Lyapunov-function search are more advanced methods that require some knowledge of the underlying theory. The following list provides links to the documentation of the methods that are easily accessible from within JuliaSimControl: