In computer simulation, a distinction is often made between steady-state and dynamic simulation. We want to define these terms more precisely and explain the differences.

A steady-state system is a system that does not change its state without external excitation. Consider, for example, a ball that has rolled down a hill and come to rest in a hollow. This ball will not change its state (position, speed) again as long as it is not excited from outside - for example by a strong kick.

In the real world, everything is always in motion; one will hardly find a true steady state anywhere in reality. Even the ball in the example above is not really in a steady state: no ball in the world is 100% airtight, so some air escapes continuously and the pressure inside the ball slowly decreases. The steady state is therefore a rather theoretical concept. It is directly related to the idea of a model, i.e. the simplification of a system. The central question is which variables describe the state of a system. If I, as a modeller, assume that position and speed are sufficient to describe the state of the ball, then the ball in the hollow is in a steady state. If, on the other hand, I am also interested in the air pressure inside the ball, it is not.

Dynamic systems are described mathematically by differential equations, for example by an explicit ordinary differential equation of the form:

$$\frac{{\rm d}x}{{\rm d}t} = f(x, p)$$

The time derivative of the state $x$ is a function of the state itself and a parameter $p$ that is adjustable but constant in time. To determine the steady state $x_{\rm s}$ of the modelled system, it is sufficient to set the time derivative to zero:

$$0 = f(x_{\rm s}, p)$$

This equation is the steady-state model of the system and yields the desired steady state for given parameters. Either $f$ can be solved symbolically for $x_{\rm s}$, or numerical iteration methods such as Newton's method must be used. This process of implicit or explicit calculation of a steady state for given parameters is called steady-state simulation. In practice, it often leads to large coupled systems of equations whose numerical solution is anything but trivial. We will deal with the different solution methods in future articles.
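As a minimal sketch of this idea, the following Python snippet applies Newton's method to a hypothetical one-dimensional model $f(x, p) = p - x^2$ (our own illustrative example, not a model from this article), whose steady state is $x_{\rm s} = \sqrt{p}$:

```python
def f(x, p):
    """Right-hand side of the hypothetical model dx/dt = p - x**2."""
    return p - x**2

def df_dx(x, p):
    """Derivative of f with respect to the state x."""
    return -2 * x

def solve_steady_state(p, x0=1.0, tol=1e-10, max_iter=50):
    """Find x_s with f(x_s, p) = 0 by Newton iteration, starting from x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x, p) / df_dx(x, p)
        x -= step                      # Newton update: x <- x - f(x)/f'(x)
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")
```

For example, `solve_steady_state(4.0)` converges to 2.0 in a handful of iterations. For real applications with many coupled equations, one would of course use a library solver rather than this hand-rolled scalar iteration.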

A dynamic simulation, on the other hand, calculates the time course of the state variables starting from a given initial value. In addition to the differential equation defined above, a value for the state variables at time $t = 0$ must therefore be given. Simple differential equations can be solved analytically; for more complex systems of equations, numerical solution methods such as the Euler method are used.
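The explicit Euler method can be sketched in a few lines. Again we use the hypothetical model $f(x, p) = p - x^2$ (an illustrative assumption, not taken from this article) and integrate it forward from a given initial value:

```python
def f(x, p):
    """Right-hand side of the hypothetical model dx/dt = p - x**2."""
    return p - x**2

def simulate_euler(x0, p, dt=0.01, t_end=10.0):
    """Integrate dx/dt = f(x, p) from t = 0 to t_end with the explicit
    Euler method; returns the lists of time points and states."""
    x, t = x0, 0.0
    times, states = [t], [x]
    while t < t_end:
        x += dt * f(x, p)   # x(t + dt) ~ x(t) + dt * dx/dt
        t += dt
        times.append(t)
        states.append(x)
    return times, states
```

Starting from `x0 = 0.0` with `p = 4.0`, the trajectory rises and settles near the steady state 2.0 - the dynamic simulation approaches the same point the steady-state calculation would deliver directly.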


Why simulate steady states at all? The short answer is: comparability. For the longer answer, we have to go a little further. When developing technical systems, engineers have a large number of degrees of freedom, but also competing development goals (e.g. minimum packaging and maximum performance). Computer simulation can be used to investigate the complex relationship between these degrees of freedom and the development goals. The degrees of freedom are described in the form of adjustable model parameters. For most technical systems, the steady state reached for given parameters and operating conditions is unique and reproducible. Results of a steady-state simulation can therefore be compared directly with each other, and the influence of model parameters can be investigated systematically. If, on the other hand, dynamic states of the system were compared, this would be much more difficult: the results would depend on the current state of the system at the (arbitrarily selected) observation time.

For some questions, it is not sufficient to analyse steady-state operating points of a technical system - for example, when the development of a passenger car targets the well-known figure "from 0 to 100 km/h in x seconds". This acceleration test is dynamic by definition. The same applies, in general, to the design of a controller: whether a closed control loop is stable or not can only be answered by dynamic simulation. Often, dynamic simulation is also used merely as a tool to calculate steady-state points. If the simulation time is chosen long enough, every stable dynamic system reaches a steady state without external excitation (just as in the real world). For large nonlinear system models, this method is often more reliable - although slower - than a direct determination of the steady state.
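This last approach - finding a steady state by simply integrating until nothing changes any more - can be sketched as follows, again for the hypothetical model $f(x, p) = p - x^2$ (our own illustrative example):

```python
def f(x, p):
    """Right-hand side of the hypothetical model dx/dt = p - x**2."""
    return p - x**2

def steady_state_by_integration(x0, p, dt=0.01, tol=1e-9, max_steps=10**6):
    """Integrate dx/dt = f(x, p) with explicit Euler steps until the
    time derivative (numerically) vanishes, i.e. a steady state is reached."""
    x = x0
    for _ in range(max_steps):
        dx = f(x, p)
        if abs(dx) < tol:       # derivative ~ 0: steady state found
            return x
        x += dt * dx            # one explicit Euler step
    raise RuntimeError("no steady state reached within max_steps")
```

Compared with solving $0 = f(x_{\rm s}, p)$ directly, this route needs no derivative information and no good initial guess - as the article notes, it is often more robust for large nonlinear models, at the cost of many integration steps.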