Goals

• Describe the theory and operation of a self-tuning controller.
• Describe the concept of statistical process control (SPC) and its use in analyzing and indicating the standards of performance in control systems.

This section introduces the basic concepts of self-tuning or adaptive controllers and intelligent controllers, and provides an overview of statistical process control (SPC).

Self-tuning controllers

Self- or auto-tuning controllers are capable of automatically re-adjusting the process controller's tuning parameters. They first appeared on the market in the early 1970s and have evolved from designs based on optimum regulating and control techniques through to the current types that, with the advent of high-speed processors, rely on adaptive control algorithms. The main elements of a self-tuning system are:

• A system identifier: This block estimates the parameters of the process.
• A controller synthesizer: This block synthesizes, or calculates, the controller parameters specified by the control objective functions.
• A controller implementation block: This is the controller whose parameters (gain K_C, integral time T_INT, derivative time T_DER, etc.) are changed and modified at periodic intervals by the controller synthesizer.

The system identifier: By comparing the PV response that results from an MV change, and using algorithms based on recursive estimation, the system identifier determines the response of the system. This is commonly achieved with fuzzy logic that extracts key dynamic response features from transient excursions in the system dynamics. These excursions may be deliberately invoked by the controller, but the normal sources are start-up transients and process disturbances.

[Figure: The main components of a self-tuning system — a design criterion feeds the control synthesis block, which together with the system identifier adjusts the controller (implementation); setpoint, ERR, output and process variable form the conventional loop around the process]

The controller synthesizer: This block determines the desired values for the P, I and D algorithms used by the controller. The calculations can vary from simple to highly complex, depending on the rules used.

Self-tuning operation: This technique requires a starting point derived from knowledge of known, operational plants of a similar nature. The method depends on the relationship between the plant parameters and the controller parameters. Since the plant parameters are unknown, they are obtained by the use of recursive parameter identification algorithms, and the controller parameters are then derived from these estimates. Referring to the figure above, the controller is called 'self-tuning' since it has the ability to tune its own parameters. It consists of two loops: an inner loop, which is the conventional control loop but with varying parameters, and an outer loop, consisting of the identifier and the control synthesizer, which adjusts the process controller's parameters.
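The two-loop structure can be sketched in a few lines of code. This is a minimal illustration only, with assumed details throughout: the identifier here is a recursive least-squares fit of a simple first-order discrete model, the synthesizer applies an IMC-style tuning rule, and all names (RecursiveIdentifier, synthesize_pi) and constants are hypothetical rather than taken from any particular product.

    import numpy as np

    class RecursiveIdentifier:
        """Outer loop, part 1: RLS estimate of the model y[k] = a*y[k-1] + b*u[k-1]."""
        def __init__(self):
            self.theta = np.array([0.5, 0.1])  # initial guesses for [a, b]
            self.P = np.eye(2) * 100.0         # covariance; large = low confidence
        def update(self, y_prev, u_prev, y_now):
            phi = np.array([y_prev, u_prev])
            k = self.P @ phi / (1.0 + phi @ self.P @ phi)   # RLS gain
            self.theta = self.theta + k * (y_now - phi @ self.theta)
            self.P = self.P - np.outer(k, phi @ self.P)
            return self.theta

    def synthesize_pi(a, b, dt, lam=2.0):
        """Outer loop, part 2: map the estimated model to PI controller settings."""
        a = min(max(a, 1e-6), 0.999)            # keep the estimated model sane
        tau = -dt / np.log(a)                   # continuous-time process time constant
        Kp = b / (1.0 - a)                      # steady-state process gain
        Kc = tau / (abs(Kp) * lam * dt + 1e-9)  # IMC-style proportional gain
        return Kc, tau                          # integral time T_INT = tau

    # Inner loop: a conventional PI controller whose parameters are varied
    dt, Kc, Ti, integral = 0.1, 1.0, 1.0, 0.0
    ident, setpoint = RecursiveIdentifier(), 1.0
    y = y_prev = u_prev = 0.0
    for k in range(500):
        y = 0.9 * y + 0.05 * u_prev             # simulated plant (stands in for the process)
        a_hat, b_hat = ident.update(y_prev, u_prev, y)
        if k and k % 50 == 0:                   # periodic, not continuous, re-tuning
            Kc, Ti = synthesize_pi(a_hat, b_hat, dt)
        err = setpoint - y
        integral += err * dt
        u = Kc * (err + integral / Ti)
        y_prev, u_prev = y, u

Note how the separation mirrors the figure: identification, synthesis and implementation are independent blocks, so any one of the three can be replaced (for example, a fuzzy-logic feature extractor in place of RLS) without disturbing the other two.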
Gain scheduling controller

Gain scheduling control relies on the fact that auxiliary or alternate process variables can be found that correlate well with the main process variable. By measuring these auxiliary variables it is possible to compensate for process variations by changing the parameter settings of the controller as functions of the auxiliary variables.

Gain scheduling advantages: The main advantage is that the parameters can be changed quite quickly in response to changes in plant dynamics. The approach is convenient if the plant dynamics are simple and well known.

Gain scheduling disadvantages: The disadvantage is that gain scheduling is an open-loop adaptation and has no real learning or intelligence. The design effort can also be very large. Selection of the auxiliary point of measurement has to be made with a great deal of knowledge and thought regarding the process operation.

[Figure: Gain scheduling controller — a gain scheduler sets the controller parameters from an auxiliary measurement taken on the process; the controller drives the process through the OP or MV, and the process variable (PV) is compared with the setpoint]

Implementation requirements for self-tuning controllers

Self-tuning controllers that deliberately introduce known disturbances into a system in order to measure the effect of a known cause are not popular. Preference is given to self-tuning controllers that sit in the background, measuring and evaluating what the controller is doing. By comparing this with the effect it has on the process, and making decisions based on these measured parameters, the controller's operating variables are updated. To achieve this, the updating algorithms are usually kept dormant until the error term generated by the system controller becomes unacceptably high (>1%), at which point the correcting algorithms are unleashed on the control system and corrective action can commence. After the error has evolved, the self-tuning algorithm can check the response of the controller in terms of the period of oscillation, damping and overshoot values.

[Figure: Controller measurement parameters — the period of oscillation T, the amplitude damping (decay per cycle) and the overshoot, all measured on the error response against time]
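These three response features are straightforward to extract from a recorded transient. The following sketch shows one way of doing it, under stated assumptions: the response is a step from zero to the setpoint, the peaks are found by naive local-maximum picking (a real implementation would filter measurement noise first), and the function name response_features is hypothetical.

    import numpy as np

    def response_features(t, pv, setpoint):
        """Return (overshoot %, period of oscillation T, damping ratio per cycle)."""
        dev = pv - setpoint                      # deviation from the new setpoint
        # Successive positive local maxima of the deviation = overshoot peaks
        peaks = [i for i in range(1, len(dev) - 1)
                 if dev[i - 1] < dev[i] >= dev[i + 1] and dev[i] > 0]
        if not peaks:
            return 0.0, None, None               # no oscillation detected
        overshoot = 100.0 * dev[peaks[0]] / abs(setpoint)  # assumes a 0-to-setpoint step
        if len(peaks) < 2:
            return overshoot, None, None
        period = t[peaks[1]] - t[peaks[0]]       # T: time between successive peaks
        damping = dev[peaks[1]] / dev[peaks[0]]  # amplitude decay ratio per cycle
        return overshoot, period, damping

    # Usage on a synthetic under-damped response settling toward a setpoint of 1
    t = np.linspace(0.0, 10.0, 2001)
    pv = 1.0 - np.exp(-0.5 * t) * np.cos(2.0 * np.pi * t)
    print(response_features(t, pv, setpoint=1.0))  # approx (78%, 1.0 s, 0.61)

A self-tuner can then compare these numbers against its targets; a classic target is quarter-amplitude damping, i.e. a decay ratio of about 0.25 from one oscillation peak to the next.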
Statistical process control (SPC)

The ultimate objective of a process control system is to keep the product, the final result produced by the process, within all of the pre-defined limits set by the product's description. An almost infinite number of methods and systematic approaches are available in the real engineering world to help achieve this. However, although all these tools exist, it is necessary to have procedures that analyze the process's performance, compare it with the quality of the product, and produce results that are 'understandable' by all personnel involved in the management of the process and are, of course, also both accurate and meaningful. There are a few terms and concepts that need to be understood before a basic and usable notion of control quality can be managed; once these have been grasped, statistical process control, or SPC, becomes apparent, meaningful and usable as a powerful tool for keeping a process control system under economical, operationally practical and acceptable control.

Uniform product: Only by understanding the process with all of its variations and quirks, product disturbances and hiccups, and by getting to know its individual 'personality', can we hope to achieve a state of virtually uniform product. No two 'identical' plants or systems will ever produce identical product; similar, yes, but never identical. This is where SPC helps, by identifying the differences between 'identical' products.

Dr Shewhart, working at the Bell Laboratories in the early 1920s, compared variations found in nature with those of items produced by process systems, found inconsistencies, and formulated the following statement: while every process displays variation,

• some processes display controlled variation,
• others display uncontrolled variation.

Controlled variation: This is characterized by a stable and consistent pattern of variation over time, attributable to 'chance' causes. Consider a product with a measurable dimension or characteristic (mechanical or chemical), with samples taken in the course of a production run. Inspection of these products shows variations caused by machines, materials, operators and methods all interacting. These variations are consistent over time because they are produced by many small contributing factors; such chance causes produce 'controlled variation'.

Uncontrolled variation: This is characterized by a pattern of variation that changes over time and is attributable to 'assignable' causes. In addition to the variation produced by chance causes, there exist special factors that can have a large impact on the product measurement; these can be caused by maladjusted machines, differences in materials, a change in methods and even changes in the environment. These assignable factors can be large enough to create marked changes in the known and understood patterns of variation.

Two ways to improve a production process

The two methods described here for improving a process are fundamentally different: one requires a change to the process itself, the other requires finding and removing the causes of inconsistency.

The controlled variation problem: When a process displays controlled variation it should be considered stable and consistent. The variations are caused by factors inherent in the process itself, so to reduce them it will be necessary to change the process.

The uncontrolled variation problem: Here the process is varying from time to time; it is both inconsistent and unstable. The solution is to identify and remove the cause(s) of the problem(s).

Obtaining the information required for SPC

There is fundamentally only one way to record the real process's performance, and that is with a strip chart showing the process variable signal and, possibly, the controller output signal: that is, the commands sent into the process and the process's reply to these commands, in both magnitude and time.

Statistical inference: The average of 2, 4, 6 and 8 is 5, this being the balance point for this sample of data values. The sample range for this data is 6, that is, how far apart the minimum and maximum (2 and 8) are. Statistical inference, however, relies on two assumptions: that a conceptual population exists, which is needed to rationalize any attempt at prediction, and that all samples were taken from this population, which is needed to believe in estimates based on the sample statistics. For the sake of simplicity and clarity we will consider that all samples are objective and represent one conceptual population. If this is not true, the results may well be inconsistent and the statistics erratic; in effect the process is schizophrenic, displaying uncontrolled variation, and the resultant statistics simply cannot be generalized.
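The two sample statistics just introduced are all the arithmetic SPC needs per sub-group. A minimal sketch, reproducing the worked numbers above (function names are illustrative):

    def subgroup_average(values):
        """The balance point of the sample."""
        return sum(values) / len(values)

    def subgroup_range(values):
        """How far apart the minimum and maximum are."""
        return max(values) - min(values)

    sample = [2, 4, 6, 8]
    print(subgroup_average(sample))  # 5.0
    print(subgroup_range(sample))    # 6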
Using sub-groups to monitor the process: Each sample collected at a single point in time is a sub-group, and each sub-group is treated as a separate sample.

[Figure: Four sub-groups selected from a stable system — one sub-group per hour; the bell-shaped profiles represent the total output of the process each hour, and the dots represent the measurements taken in each sub-group]

Recording averages and ranges for a stable process: The next step is to record the averages and ranges onto a time-scaled strip chart. As long as these plots move around within the defined upper and lower limits, which are also displayed on the chart, we can consider that all sub-groups were derived from the same conceptual population.

[Figure: The average and range chart for a stable process]

Now consider the same example, but with a process that is itself changing from hour to hour, i.e. there is variation in the process. Again the bell-shaped curves represent the variation in the process's output each hour.

[Figure: The average and range chart for an unstable process]
[Figure: Four sub-groups from an unstable system]

The data in the following table is taken from a stable process. The measurements represent the thickness of a product; the numbers record how much each part exceeded 0.300 in., in units of 0.001 in.

[Table: for each sub-group number, the individual measurements, the sub-group average and the sub-group range]

Recording averages and ranges for an unstable process: At 09:00 the process average increased, moving the sub-group average above the upper limit. At 10:00 the process average dropped dramatically, and the sub-group average moved below the lower limit. During these first three hours, 08:00-11:00, the process dispersion did not change, and the sub-group ranges all remained within the control limits. But at 11:00 the process dispersion increased while the process average moved back to its initial value; the sub-group obtained during this hour has a range that falls above the upper control limit and an average that falls within the control limits.

It can be seen that the use of periodic sub-groups introduces two additional variables, namely the sub-group average and the sub-group range. These are the two variables used to monitor the process. The following example shows the behavior of these two variables and how they relate to the measurements when the process is stable.

Example of a stable process: The histograms are all to scale on both axes, yet the three represent totally different profiles and dispersions. It is therefore essential to distinguish between these variables.

[Figure: Histograms of the individual measurements, the sub-group averages and the sub-group ranges]

Distributions of measurements, averages and ranges: While the measurements, averages and ranges have different distributions, these are related in certain ways when they are derived from a stable process. Let AVER(X) denote the average of the distribution of the individual X values and SD(X) its standard deviation; in a similar manner, AVER(X̄) and SD(X̄) denote the average and standard deviation of the distribution of the sub-group averages, while AVER(R) and SD(R) denote those of the distribution of the sub-group ranges. With this notation, for sub-groups of size n taken from a stable process:

For the distribution of X̄: AVER(X̄) = AVER(X) and SD(X̄) = SD(X)/√n
For the distribution of R: AVER(R) = d2 × SD(X) and SD(R) = d3 × SD(X)

The constants d2 and d3 are scaling factors that depend on the sub-group size n; their tabulated values are based on the normality of X.

[Figure: Distributions of measurements, averages and ranges]
[Table: factors d2 and d3 for the average and standard deviation of the range distribution]
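These four relationships are easy to confirm numerically. The sketch below draws many sub-groups of size 4 from a simulated stable (normal) process and compares the observed statistics of X̄ and R with the predictions; the values d2 = 2.059 and d3 = 0.880 are the standard tabulated factors for n = 4, while everything else is illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n, subgroups = 4, 200_000
    d2, d3 = 2.059, 0.880                      # tabulated factors for n = 4

    x = rng.normal(loc=5.0, scale=2.0, size=(subgroups, n))  # stable process
    xbar = x.mean(axis=1)                      # sub-group averages
    r = x.max(axis=1) - x.min(axis=1)          # sub-group ranges

    print(xbar.mean(), 5.0)                    # AVER(Xbar) = AVER(X)
    print(xbar.std(), 2.0 / np.sqrt(n))        # SD(Xbar)   = SD(X)/sqrt(n)
    print(r.mean(), d2 * 2.0)                  # AVER(R)    = d2 * SD(X)
    print(r.std(), d3 * 2.0)                   # SD(R)      = d3 * SD(X)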
Calculating control limits

From the four relationships shown above it is possible to obtain control limits for the sub-group averages and ranges. There are two principal methods of calculating control limits: one is the structural approach, the other is by formulae. Both methods are illustrated here. When first obtaining control limits it is customary to collect 20-30 sub-groups before calculating the limits; by using many sub-groups the impact of any single extreme value is minimized. Using the sub-group average and range values from a controlled process, the next two sections illustrate the structural and the formulated approaches; with the data shown, the control limits will be found using both.

[Table: sub-group averages and ranges used for calculating the control limits]

The structural approach:
1. Estimate the distribution of the X values.
2. Estimate the distribution of the X̄ values.
3. Calculate the control limits for the sub-group averages.
4. Estimate the distribution of the R values from the average range R̄.
5. Calculate the control limits for the sub-group ranges. Since the sub-group ranges are non-negative, a negative lower limit has no meaning; in this case the lower control limit = 0.

The formulated approach:
• The grand average is 4.763.
• The average range is 4.05.
• The sub-group size is 4.

[Figure: The resultant control chart for the worked example]

The logic behind control charts

In conclusion, both to this section and to the workshop, the logic behind control charts can be summarized as follows:

• Assume the process is stable (so that the X̄ and R statistics are appropriate).
• Predict the behavior of X̄ and R (calculate the control limits).
• Compare the actual X̄ and R values with the predicted limits.
• If the observations are consistent with the predictions, the process may be stable; continued operation of the process within the limits is the 'proof' of stability.
• If the observations are inconsistent with the predictions, the process is definitely unstable; take action to identify and remove the assignable causes.

[Figure: The logic behind control charts]
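The arithmetic of the formulated approach can be sketched for the worked example. The original table of chart factors is not reproduced above, so this sketch assumes the standard Shewhart constants for a sub-group size of 4 (A2 = 0.729, D3 = 0, D4 = 2.282); the function names are illustrative.

    A2, D3, D4 = 0.729, 0.0, 2.282   # standard chart constants for n = 4

    def xbar_limits(grand_average, average_range):
        """X-bar chart: grand average +/- A2 * average range."""
        margin = A2 * average_range
        return grand_average - margin, grand_average + margin

    def range_limits(average_range):
        """R chart: from D3 * average range up to D4 * average range."""
        return D3 * average_range, D4 * average_range

    # Worked example from the text: grand average 4.763, average range 4.05
    print(xbar_limits(4.763, 4.05))  # approx (1.81, 7.72)
    print(range_limits(4.05))        # approx (0.00, 9.24)

Since A2 = 3/(d2 × √n), the formulated approach is simply the structural approach with the scaling folded into a single constant per chart.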