PI and PID controllers are the most frequently used controllers in practice. Even though the actual control law is very simple, the selection of suitable parameters is not trivial. We present a practical and well-proven step-by-step method to determine good PID parameters for a given application.

> […] the Ziegler–Nichols methods result in poor control […]. We should revere the Ziegler–Nichols rules for the original ideas, but they are not useful for practical PID tuning and there are better methods to teach in e.g. basic control courses.
>
> — Olof Garpinger, Tore Hägglund, Karl Johan Åström, *Performance and robustness trade-offs in PID control*

PI and PID controllers are the most frequently used controllers in practice. Even though the actual control law is very simple, the selection of suitable parameters is anything but trivial. There are a variety of theoretically and empirically derived methods for determining optimal or good parameters for a specific application (process). However, the mathematical-theoretical methods are often too complex and therefore unsuitable for practical use, and empirical methods such as the frequently taught Ziegler–Nichols method can lead to very poor results in practice.

In the following, we present a step-by-step method that has proven itself in numerous applications on real plants and in system simulations. The results are certainly not optimal in a theoretical sense, but they usually lead to good behavior of the closed control loop.

The individual steps are:

1. Check the sign of the process gain

2. Tune pure proportional component (P)

3. Add integral component (I)

4. Add differential component (D)

In the following, we will go into the individual steps in detail. We assume a basic understanding of control engineering and the terms process, manipulated variable, process variable, set-point and control loop. Otherwise, it is better to start with the basics of feedback control systems.

The basic idea of closed-loop control is to change the behavior of a dynamic system via specific **feedback**. For feedback in dynamic systems, the sign is of elementary importance. Positive feedback leads to unstable – exponentially growing – behavior. A current example is the spread of a pandemic: more infected people lead to more infections and thus even more infected people. Such positive reinforcement is undesirable in control engineering; control loops must always be closed in such a way that the feedback is negative.

In practice, this means: select the sign of the controller opposite to that of the process gain. Standard controller settings usually assume a positive process gain and therefore feed back the actual value of the controlled variable with a negative sign, because:

Plus *(process)* × Minus *(controller)* = Minus *(feedback in the control loop)*

This works perfectly, for example, with a temperature controller connected to an electric water heater. A higher heating power (manipulated variable) causes a higher water temperature (controlled variable). Because the controller takes the actual value into account with a negative sign, the feedback of the control loop is negative and thus fulfills a requirement for stability.

If the same controller were to control a refrigerating machine that cools the water bath, the process gain cooling capacity → temperature would be negative. If the controller were left at its factory setting, the control loop would always be unstable. In such cases, you must actively invert the sign of the controller.

The sign of the process gain can usually be determined easily from a few physical considerations. However, I personally prefer the trial-and-error method: switch on the controller with an active I component and observe whether the manipulated variable runs in the right direction; if not, change the sign of the controller.
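The sign check via a step experiment can be sketched in a few lines. The helper name and all numeric values below are purely illustrative:

```python
# Sketch of the sign check: apply a positive step to the manipulated
# variable, let the process settle, and compare the process variable
# before and after the step. All numbers are illustrative.

def controller_sign(delta_pv):
    """Keep the controller's default sign (+1) for a positive process
    gain; invert it (-1) for a negative one, so that the overall loop
    feedback stays negative."""
    return 1 if delta_pv > 0 else -1

# Electric heater: a step in heating power raises the temperature.
heater_sign = controller_sign(65.0 - 20.0)    # positive process gain
# Refrigerating machine: a step in cooling power lowers it.
chiller_sign = controller_sign(12.0 - 20.0)   # negative process gain
```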

There are different mathematical representations for PID control laws. They are all equivalent but lead to different definitions of the PID parameters. It is important that you understand how the parameters are defined in your specific controller when tuning them.

The basic form of a PID controller can be described by this equation:

$$u = K_{\rm P} \ e + K_{\rm I} \int e \ {\rm d} t+K_{\rm D} \frac{{\rm d}e}{{\rm d}t}$$

The manipulated variable *u* is calculated as the sum of the P, I, and D components. Here, *e* denotes the control error, i.e. the difference between the setpoint and the actual value of the controlled variable. The individual components are weighted by independent parameters *K*.
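Discretized with a fixed sample time, the control law above can be sketched as follows. The class and parameter names are our own:

```python
# Minimal parallel-form PID controller: u = K_P*e + K_I*int(e dt)
# + K_D*de/dt, discretized with a fixed sample time dt.
# A sketch for illustration, not an industrial implementation
# (no anti-windup, no derivative filtering).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, actual):
        e = setpoint - actual                       # control error
        self.integral += e * self.dt                # rectangular integration
        # backward difference; zero on the very first call
        de = 0.0 if self.prev_error is None else (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.kp * e + self.ki * self.integral + self.kd * de
```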

A common alternative description form is:

$$u = K_{\rm P} \left(e + \frac{1}{T_{\rm I}} \int e\ {\rm d} t + T_{\rm D} \frac{{\rm d}e}{{\rm d}t}\right)$$

The proportional component is still described by the proportional gain *\(K_{\rm P}\)*, but the integral component is now defined by the **integration time** *\(T_{\rm I}\)* and the differential component by the **derivative time** *\(T_{\rm D}\)*. Both newly introduced parameters have the dimension of time and are entered in industrial controllers in either seconds or minutes.
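Comparing the two forms term by term gives the conversion \(K_{\rm I} = K_{\rm P}/T_{\rm I}\) and \(K_{\rm D} = K_{\rm P}\,T_{\rm D}\), which can be captured in two small helper functions (the names are ours):

```python
# Conversion between the parallel form (K_P, K_I, K_D) and the
# standard form (K_P, T_I, T_D). Comparing the two control laws
# term by term gives K_I = K_P / T_I and K_D = K_P * T_D.

def parallel_from_standard(kp, ti, td):
    """Return (K_P, K_I, K_D) for given K_P, T_I, T_D."""
    return kp, kp / ti, kp * td

def standard_from_parallel(kp, ki, kd):
    """Return (K_P, T_I, T_D) for given K_P, K_I, K_D."""
    return kp, kp / ki, kd / kp
```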

In many industrial controllers, the proportional gain is entered in a slightly modified, dimensionless form as the **proportional range** (or **proportional band**) *\(X_{\rm P}\)*. This value is given as a percentage and is referenced to the maximum ranges of the controller's output and input signals:

$$X_{\rm P} := \frac{100}{K_{\rm P}} \frac{\delta u}{\delta e}$$

The exact definitions are not crucial for the method described here. It is only important to understand that small values for *\(X_{\rm P}\)* and *\(T_{\rm I}\)* generally lead to larger manipulated variables and thus more aggressive control behavior, while it is exactly the opposite for *\(T_{\rm D}\)* and *\(K_{\rm P}\)*.

Initially, the controller is operated as a pure P controller, i.e. the I and D components are completely switched off. Repeated setpoint steps are applied and the **step response** of the closed loop is observed.

We start with a low gain, i.e. a rather passive controller. A good starting point for *\(K_{\rm P}\)* can be found by considering the order of magnitude at which changes in the manipulated variable cause a change in the controlled variable. You then take a fraction of this value – for example, one hundredth. If your controller specifies the P component via the proportional band *\(X_{\rm P}\)*, choose a high value – for example, 100.

Note that with a P-only controller there is always a **remaining control deviation**; that is, we will never hit the setpoint exactly. The first step response should look something like the bottom of the first figure. Repeat the step-response experiment with gradually increased gain, i.e. an increasingly aggressive controller. As you do so, the response of the control loop becomes faster and faster and the remaining control deviation decreases. At some point you reach the point where the control loop clearly oscillates and even becomes unstable if the gain is increased further. Then you have overdone it. A good setting is one with a noticeable overshoot that quickly subsides.
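The remaining control deviation of a P-only loop, and how it shrinks with growing gain, can be reproduced in a small simulation. The first-order process below (gain 2, time constant 5 s) is purely hypothetical:

```python
# Closed-loop setpoint step with a P-only controller on a hypothetical
# first-order process, simulated with simple Euler integration.
# For this loop the steady-state error is setpoint/(1 + K_P*K).

def p_loop_steady_error(kp, k_process=2.0, tau=5.0, setpoint=1.0,
                        dt=0.01, t_end=100.0):
    y = 0.0
    for _ in range(int(t_end / dt)):
        u = kp * (setpoint - y)               # P-only control law
        y += dt * (k_process * u - y) / tau   # dy/dt = (K*u - y)/tau
    return setpoint - y                       # remaining control deviation
```

With the numbers above, a gain of 0.5 leaves a deviation of 0.5, while a gain of 4.5 reduces it to 0.1 – smaller, but never zero.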

In general, you have some leeway here. Choose a smaller *\(K_{\rm P}\)* or larger *\(X_{\rm P}\)* for a robust, conservative setting, and the exact opposite for a more aggressive and faster control. We keep the P component selected in this way for all subsequent steps.

Now the controller is operated as a PI controller. The **integral component** ensures, after an initial fast response of the P component, that the remaining control error is compensated over time.

We initially start with a large value for the integration time *\(T_{\rm I}\)*, which corresponds to a sluggish response. An intuitive estimate of the time constant of the controlled system gives a good indication: approximately how long does it take for the process to reach a new steady state after a step change in the manipulated variable? The answer to this question is a good starting point for *\(T_{\rm I}\)*.

Analogous to setting the P component, we repeatedly perform setpoint steps and look at the closed-loop response. In doing so, we gradually decrease *\(T_{\rm I}\)* and thus increase the aggressiveness of the controller. The same applies to the integral component: too aggressive a setting leads to unwanted oscillations or even instability.

Again, there is a margin here. Larger integration times lead to more robust control and smaller integration times to faster control.

We keep the selected setting for *\(T_{\rm I}\)* for the following step.
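That the I component eliminates the remaining deviation can be checked with a hypothetical first-order process (gain 2, time constant 5 s); the PI law is written in the standard form with \(K_{\rm P}\) and \(T_{\rm I}\):

```python
# Closed-loop setpoint step with a standard-form PI controller on a
# hypothetical first-order process (Euler integration). Unlike the
# P-only case, the integrator drives the final control error to zero.

def pi_loop_final_error(kp, ti, k_process=2.0, tau=5.0, setpoint=1.0,
                        dt=0.01, t_end=200.0):
    y, integral = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = setpoint - y
        integral += e * dt
        u = kp * (e + integral / ti)           # standard-form PI law
        y += dt * (k_process * u - y) / tau    # dy/dt = (K*u - y)/tau
    return setpoint - y
```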

My personal opinion: for many practical applications, a well-adjusted PI controller is perfectly sufficient. So if you are already satisfied with the performance of the controller, just stop here. If you still want to squeeze out more performance by adding the D component, we repeat the procedure from the previous two steps.

We start again with a slow setting, i.e. choose a small derivative time *\(T_{\rm D}\)*. As a guideline, we can use one tenth of the previously selected integration time of the I component. We then increase it gradually until we are satisfied with the performance of the control loop.

In theory, the D component also allows a larger P gain to be selected without the system starting to oscillate. This means that with the PID controller active, we could now readjust the P component once again.

With the method described here, a PID controller can be tuned well in practice. Regardless of the tuning method, however, a PID controller is always a **linear controller** that can only be adjusted well for one operating point in a nonlinear world. How well the control parameters found also work at other operating points depends strongly on the process – more precisely, on its nonlinearity. I think many know this phenomenon from practical experience: a previously well-working controller suddenly starts to oscillate at a different operating point (e.g. partial load instead of full load).

To avoid such problems, the PID controller can be set more robustly from the beginning. In general, there is always a performance/robustness trade-off. That is, if in the steps above, I choose the parameters more toward the slow side, I get a more robust controller that is then more likely to work under changing operating conditions.

If you want to fully understand process nonlinearities and design controls that are as performant as possible, you cannot avoid a detailed analysis of the process dynamics. A powerful tool for this is system simulation with the modeling language **Modelica**. Systems can be clearly structured from individual components and existing libraries. Through virtual experiments, the most important physical effects and interactions can be specifically investigated and understood.

For many technical areas, there are industry-proven model libraries that greatly reduce the modeling effort. With TIL for thermal systems and PSL for process engineering, we offer professional Modelica libraries and the associated know-how. We would be happy to help you with control development for your specific application.