Long process deadtime in closed-loop control--Smith Predictor







Goals

• Demonstrate the correct use of a process simulation for process variable prediction

• Show how control loops with long deadtimes are dealt with correctly

• List the procedures for tuning control loops with long deadtimes.

Process deadtime

Overcoming the deadtime in a feedback control loop can present one of the most difficult problems to the designer of a control system. This is especially true if the deadtime is >20% of the total time taken for the PV to settle to its new value after a change to the SP value of a system.




We have seen that a control system with little or no deadtime can be handled with a simple set of algorithms that, when applied correctly, give extremely stable loop characteristics.

Unfortunately, if the time between a change in the manipulated variable (controller output) and a detected change in the PV is excessive, any attempt to manipulate the process variable before the deadtime has elapsed will inevitably cause unstable operation of the control loop.

+=+=+=+ Various deadtimes and their relationship to the PV reaction time (reaction curves of process variable against time, showing the process gain K, the time constant T at 0.63 K, the reaction-rate slope, and effective deadtimes L1, L2, L3 for the short, medium and long cases).

An example of process deadtime

Process deadtime occurs in virtually all types of process, as a result of the PV measurement being some distance away, both physically and in time, from the actuator that is driven by the manipulated variable.

An example of this is the overland transportation of material by conveyor from a loading hopper to a final process some distance away. The critical part of the operation is to detect the amount of material arriving at the end of its journey, at the end of the conveyor belt, and from this to perform two functions:




1. To 'tell' the ongoing process how much material is arriving; and…

2. To adjust the hopper feed rate at the other end of the belt.

+=+=+=+ The problem: the controller measures the weight of arriving material, which during its journey from the supply hopper has suffered some loss due to spillage from the conveyor; the amount of material deposited on the belt has also varied with the amount, or head, of material in the hopper.

+=+=+=+ Reaction curves showing short, medium and long deadtimes

+=+=+=+ A long conveyor system giving an excessive deadtime to the control loop.

The deadtime can be calculated very simply by dividing the distance between the input hopper, where the action of the manipulated variable (controller output) occurs, and the point where the belt weigher measures the PV, by the belt speed. In this example the controller measures the weight per meter per minute of the arriving material, compares this with the SP and generates an output, but it must then wait for the deadtime period, about 10 min in this example, before seeing any result of its change to the MV. If the controller expects a result before the deadtime has elapsed, and none occurs, it will assume that its last change had no effect and will continue to increase its output until the PV finally registers a change. By then it is too late: the controller will have overcompensated and will now be supplying either too much or too little material. The magnitude of the resulting error depends on the sensitivity of the system and on the difference between the assumed and actual deadtimes. That is, if the system is highly sensitive (high gains and fast responses tuned into it), it will make large movements of the inlet hopper feed for small PV changes; and if the assumed deadtime is much shorter than the actual deadtime, the controller will spend longer changing its output (MV) before sensing a change in the PV.
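As a minimal numerical sketch of that transport-delay calculation (the belt length and speed below are assumed values, chosen only so that the result matches the roughly 10 min deadtime quoted above):

    # Hypothetical figures; the text gives only the ~10 min deadtime, not the
    # actual belt length or speed.
    belt_length_m = 600.0          # distance from feed hopper to belt weigher
    belt_speed_m_per_min = 60.0    # belt speed

    # Transport deadtime = distance divided by speed.
    deadtime_min = belt_length_m / belt_speed_m_per_min
    print(f"Transport deadtime: {deadtime_min:.1f} min")   # -> 10.0 min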

Overcoming process deadtime

Solving these problems depends, to a great extent, on the operating requirement(s) of the process. The easiest solution is to 'detune' the controller to a slower response rate. The controller will then not overcompensate unless the deadtime is excessively long.

The integrator (I mode) of the controller is very sensitive to deadtime: during this period of inactivity of the PV (while an ERR term is present) the integrator is busy 'ramping' the output value. Ziegler and Nichols determined that the best way to 'detune' a controller to handle a deadtime of D minutes is to reduce the integral action by a factor of D² (equivalently, to increase the integral time constant TINT) and the proportional gain by a factor of D. The derivative time constant TDER is largely unaffected by deadtime, as derivative action only takes effect once the PV starts to move.
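A minimal sketch of that detuning rule, assuming a controller parameterized by proportional, integral and derivative gains (the function name and starting values are hypothetical, not taken from any particular controller):

    def detune_for_deadtime(kp, ki, kd, deadtime):
        """Detune PID gains for a loop with a deadtime of 'deadtime' minutes,
        following the rule of thumb quoted above."""
        return (
            kp / deadtime,          # proportional action reduced by a factor of D
            ki / deadtime ** 2,     # integral action reduced by a factor of D squared
            kd,                     # derivative action left unchanged
        )

    # Example: gains tuned for a fast loop, detuned for a 10 min deadtime.
    kp, ki, kd = detune_for_deadtime(kp=2.0, ki=0.5, kd=0.1, deadtime=10.0)
    print(kp, ki, kd)               # -> 0.2 0.005 0.1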

If, however, we could inform the controller of the deadtime period, and give it the patience to wait until the deadtime has passed, then detuning the controller and making the whole process very sluggish would not be required. This is what the Smith Predictor attempts to do.

The Smith Predictor model

In 1957, O.J.M. Smith of the University of California at Berkeley proposed the predictor control strategy explained below.

+=+=+=+ the mathematical model of the predictor, which consists of:

• An ordinary feedback loop

• A second, or inner, loop that introduces two extra terms into the feedback path.

+=+=+=+ The Smith Predictor model (blocks: setpoint, ERR, controller, process, deadtime, and process model with gains and time constants; signals: controller output, actual process variable, disturbances, disturbance-free process variable, predicted process variable, predicted process variable with disturbances, estimated disturbances).

First term explanation (disturbance-free PV):

The first term is an estimate of what the PV would be like in the absence of any process disturbances. It’s produced by running the controller output through a model that is designed to accurately represent the behavior of the process without taking any load disturbances into account. This model consists of two elements connected in series.

1. The first represents all of the process behavior not attributable to deadtime.

This is usually calculated as an ordinary differential or difference equation that includes estimates of all the process gains and time constants.

2. The second represents nothing but the deadtime and consists simply of a time delay; what goes in, comes out later, unchanged.

Second term explanation (predicted PV):

The second term introduced into the feedback path is an estimate of what the PV would look like in the absence of both disturbances and deadtime. It’s generated by running the controller output through the first element of the model (gains and TCs) but not through the time delay element.

It thus predicts what the disturbance-free PV will be like once the deadtime has elapsed.
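A minimal sketch of this two-element model, assuming a first-order process (one gain and one time constant) in series with a pure time delay; the class name and numbers are illustrative rather than taken from the text:

    from collections import deque

    class ProcessModel:
        """Smith Predictor model: a dynamics element followed by a pure delay."""

        def __init__(self, gain, time_constant, deadtime_steps, dt=1.0):
            self.gain, self.tau, self.dt = gain, time_constant, dt
            self.state = 0.0                            # output of the dynamics element
            self.delay = deque([0.0] * deadtime_steps)  # FIFO delay line

        def step(self, controller_output):
            """Advance one sample and return the two feedback-path terms."""
            # Element 1: gains and time constants (first-order lag, Euler step).
            self.state += (self.gain * controller_output - self.state) * self.dt / self.tau
            # Element 2: the deadtime; what goes in comes out later, unchanged.
            self.delay.append(self.state)
            disturbance_free_pv = self.delay.popleft()  # first term (with deadtime)
            predicted_pv = self.state                   # second term (deadtime removed)
            return disturbance_free_pv, predicted_pv

The first return value is what the model says the PV should currently be in the absence of disturbances; the second is what it will be once the deadtime has elapsed.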

+=+=+=+ The Smith Predictor in use (blocks: setpoint, ERR, controller, model gains and time constants, model deadtime, process; signals: controller output, actual and estimated process variable, actual and estimated disturbances, predicted process variable with disturbances).

The Smith Predictor in theoretical use

+=+=+=+ the Smith Predictor in a practical configuration, or as it’s really used.

It shows an estimate of the PV, with the disturbances included but with the deadtime removed, generated by adding the estimated disturbances back into the deadtime-free predicted PV. The estimated disturbances themselves are obtained by subtracting the disturbance-free (modeled) PV from the actual PV. The result is a feedback control system with the deadtime outside the loop.

The Smith Predictor essentially works to control the modified feedback variable (the predicted PV with disturbances included) rather than the actual PV. If it’s successful in doing so, and the process model accurately emulates the process itself, then the controller will simultaneously drive the actual PV toward the SP value, irrespective of SP changes or load disturbances.
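Putting the pieces together, here is a sketch of the configuration just described: a PI controller acting on the modified feedback variable, a simulated first-order-plus-deadtime process standing in for the real plant, and a perfectly matched model. All gains, time constants and the deadtime are assumed values chosen for illustration only:

    from collections import deque

    class FOPDT:
        """First-order lag plus deadtime; used for both the process and its model."""
        def __init__(self, gain, tau, dead_steps, dt=1.0):
            self.gain, self.tau, self.dt = gain, tau, dt
            self.state = 0.0
            self.delay = deque([0.0] * dead_steps)

        def step(self, u):
            """Return (delayed output, undelayed output) for input u."""
            self.state += (self.gain * u - self.state) * self.dt / self.tau
            self.delay.append(self.state)
            return self.delay.popleft(), self.state

    process = FOPDT(gain=2.0, tau=5.0, dead_steps=10)   # the 'real' plant
    model = FOPDT(gain=2.0, tau=5.0, dead_steps=10)     # perfectly matched model

    KP, KI, DT = 0.4, 0.08, 1.0                         # PI gains and sample time
    sp, mv, integral = 50.0, 0.0, 0.0

    for k in range(200):
        load = 5.0 if k > 120 else 0.0                  # a step load disturbance
        pv = process.step(mv)[0] + load                 # actual PV seen by the sensor
        model_pv_delayed, model_pv_now = model.step(mv)
        est_disturbance = pv - model_pv_delayed         # actual PV minus modelled PV
        modified_pv = model_pv_now + est_disturbance    # deadtime now outside the loop
        err = sp - modified_pv                          # the controller acts on this
        integral += err * DT
        mv = KP * err + KI * integral
        if k % 40 == 0:
            print(f"step {k:3d}  PV {pv:6.2f}  MV {mv:6.2f}")

With a perfectly matched model the PI controller effectively sees a deadtime-free first-order process and can therefore be tuned far more tightly than the detuned controller discussed earlier; any mismatch between model and process reintroduces the difficulties described in the next section.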

The Smith Predictor in reality

In reality there is plenty of room for errors to creep into this 'predictive ideal'. The slightest mismatch between the dynamic behavior of the real process and that of the model can cause the controller to generate an output that successfully manipulates the modified feedback variable but drives the actual PV off into oblivion, never to return.

There are many variations on the Smith Predictor principle, but deadtimes, especially long ones, remain a particularly difficult control problem to solve.

An exercise in deadtime compensation

We have seen that if a long deadtime is part of the process behavior, the quality of control becomes unacceptably low. The main problem lies in the fact that the reaction to an MV change is not seen by the PV until the deadtime has expired. During this time, neither a human operator nor an automatic controller knows how the MV change has affected the process.

Exercise: the concepts of deadtime compensation.

+=+=+=+ Block diagram: closed-loop control with process simulation (the industrial process with its dynamics and deadtime, a dynamic simulation and a deadtime simulation running in parallel with it, displayed values such as SPE, PVE, PVE-PRED, PV%-real, TC1, TC2 and OUT%-DT, and a process disturbance input).

As there is no way of separating the process deadtime from the process dynamics in order to find out how the process would behave without deadtime, we make use of the values provided by a process simulation.

The process simulation is split into two parts, as seen in the block diagram: a dynamic simulation and a deadtime simulation. These two parts are described below.
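A minimal sketch of that split, written as two separate stages to mirror the dynamic-simulation and deadtime-simulation blocks of the diagram (structurally the same as the model sketched earlier; all names and numbers are illustrative):

    from collections import deque

    GAIN, TAU, DT = 2.0, 5.0, 1.0       # assumed process gain, time constant, sample time
    _state = [0.0]                      # state of the dynamic simulation
    _delay = deque([0.0] * 10)          # 10-sample deadtime simulation

    def dynamic_simulation(controller_output):
        """First-order lag: what the process would do if it had no deadtime."""
        _state[0] += (GAIN * controller_output - _state[0]) * DT / TAU
        return _state[0]

    def deadtime_simulation(value_in):
        """Pure transport delay: the input reappears 10 samples later, unchanged."""
        _delay.append(value_in)
        return _delay.popleft()

    predicted_pv = dynamic_simulation(25.0)                  # available immediately
    delayed_estimate = deadtime_simulation(predicted_pv)     # what the sensor will eventually see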

