INTRODUCTION: What Is Measurement and Control?

Measurement and control is the brain and nervous system of any modern plant. Measurement and control systems monitor and regulate processes that otherwise would be difficult to operate efficiently and safely while meeting the requirements for high quality and low cost.

Process Measurement and Control (also known as Process Automation, Process Instrumentation and Control, or simply Instrumentation) is needed in modern industrial processes for a business to remain profitable. It improves product quality, reduces plant emissions, minimizes human error, and reduces operating costs, among many other benefits.



The production quantities and requirements define the type of process required to make a certain product. In the process industries, two types are commonly used: continuous process and batch process. Often, a combination of the two processes exists in a typical plant.

In a continuous process, raw materials enter the process and, after passing through a number of operations, emerge as a new product. The material is in constant movement throughout the process, and each operation performs a specific function.

In a batch process, raw materials are transformed into a new product according to a batch recipe and a defined sequence. The raw materials are typically fed into reactors or tanks, where reactions occur to produce the new product.

Definitions:

  • automation. A system or method in which many or all of the processes of production, movement, and inspection of parts and materials are automatically performed or controlled by self-operating machinery, electronic devices, and so on.
  • instrument. Any of various devices for indicating or measuring conditions, performance, position, direction, and the like, or sometimes for controlling operations.
  • measurement. Extent, quantity, or size as determined by measuring.

Overview:

Some measurement and control technologies have evolved rapidly over the past few years, while others have almost disappeared. Instruments presently in use may become obsolete as newer and more efficient measurement techniques are introduced. The emergence of new techniques is driven by the ongoing need for better quality, by the increasing cost of materials, by continuous product changes, by tighter environmental requirements, by the demand for better accuracy, by improved plant performance, and by the evolution of microprocessor-based devices. These technical developments have made possible the measurement of variables judged impossible to measure only a few decades ago.

Effective measurement requires a solid understanding of the process conditions. Selecting the correct measuring and control device is sometimes a challenge even for the most seasoned engineers, technicians, and sales personnel.



This guide provides the tools users need to correctly implement measuring and control systems, an activity that in many cases is not well understood and therefore not successfully implemented. Given the ever-growing demand for measurement and control applications and the wide range of devices on the market, the user must be able to assess the different methods of measurement and control and select the most appropriate one. It's not wise to consider only one type of measurement or control, since each has its own advantages and disadvantages. Because many techniques are available for measuring a given parameter (such as flow, level, etc.), the user must compare the different types to determine which best fits the application. Making the optimum selection involves considering the requirements of the process, the desired degree of accuracy, the installation, dependability factors, maintenance, and economics. Since there is probably no single best method for measuring a specific variable, this guide should help the user decide which method is most appropriate for the application.

One final note: when describing the function of instrumentation it’s important to ensure that we are using uniform terminology. ANSI/ISA-51.1-1979 (R1993), Process Instrumentation Terminology, includes definitions for many of the specialized terms used in the industrial process industries, such as accuracy, dead band, drift, hysteresis, linearity, repeatability, and reproducibility.

Historical Summary:

Even a few years ago, the scope of process measurement and control was much simpler to define than it is today. It was referred to simply as "instrumentation." With the advent of software-based functionality and advances in technology in most fields, this specialty has begun to branch out into individual subspecialties.

Process measurement and control, also commonly referred to as "Instrumentation and Control," has evolved from a manual and mechanical technology to, successively, a pneumatic, electronic, and digital technology. This field's exponential growth began after World War II, and its progress toward digitally based systems and devices is still proceeding rapidly today.

We don’t know with certainty who invented the field of measurement and control. About 2600 B.C., Egyptian engineers were surely using simple yet precise measuring devices to level the foundation and build the Great Pyramid and to cut its stones to precise dimensions. They also used weirs to measure and distribute irrigation water across the fertile delta. Many centuries later, the Romans built their aqueducts and distributed water using elementary flowmeters.

The pitot tube was invented in the 1700s. The flyball governor for steam engines was invented in 1774 during the Industrial Revolution (improved versions are still in use today); it is considered the first application of the feedback controller concept.

In the late 1800s, tin-case and wood-case thermometers and mercury barometers became commercially available. In the early 1900s, pen recorders, pneumatic controllers, and temperature controllers hit the market.

With the advent of World War I, the need for more efficient instruments helped improve and further develop the field of instrumentation. Control rooms were developed, and the concept of proportional, integral, and derivative (PID) control emerged. By the mid-1930s, analyzers, flowmeters, and electronic potentiometers were developed. At that time there were more than 600 companies selling industrial instruments.

In the early 1940s, the Ziegler-Nichols tuning method (still in use today) was developed.
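
For reference, the classic closed-loop Ziegler-Nichols rules compute PID settings from the ultimate gain Ku (the controller gain at which the loop oscillates steadily) and the ultimate period Pu of that oscillation. A minimal sketch of the standard formulas, with illustrative numbers:

    def ziegler_nichols_pid(ku, pu):
        """Classic closed-loop Ziegler-Nichols PID settings.

        ku: ultimate gain (gain producing steady oscillation)
        pu: ultimate period of that oscillation, in seconds
        Returns proportional gain Kp, integral time Ti, derivative time Td.
        """
        kp = 0.6 * ku    # proportional gain
        ti = pu / 2.0    # integral (reset) time, s
        td = pu / 8.0    # derivative (rate) time, s
        return kp, ti, td

    # A loop oscillating steadily at Ku = 4.0 with a 20 s period:
    print(ziegler_nichols_pid(4.0, 20.0))  # -> (2.4, 10.0, 2.5)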

World War II was a major influence in moving the field of measurement and control to a new plateau. Pressure transmitters, all-electronic instruments, and force-balance instruments were produced. In the late 1940s and through the 1950s, the process control industry was transformed by the introduction of the transistor. Pneumatic differential pressure transmitters, electronic controls, and the 4-20 mA DC signal range were also introduced to the market during this period.

In the 1960s, computers were introduced along with the implementation of direct digital control (DDC) with CRT-based operator interfaces, the programmable logic controllers (PLCs), the vortex meter, and improved control valves. The 1970s brought the microprocessor, distributed control systems (DCSs), fiber-optic transmission, in-situ oxygen analyzers, and the random access memory (RAM) chip.

The 1980s and 1990s saw the advent of the personal computer and the software era, which widened the application of DCSs and PLCs. Neural networks, expert systems, fuzzy logic, smart instruments, and self-tuning controllers were also introduced.

The future of measurement and control is unknown. However, based on present trends, it's expected that the line of demarcation between DCSs and PLCs will continue to disappear, auto-diagnostics and self-repair will increase, artificial intelligence will expand in acceptance and ease of use, and a standard plant-wide communication bus system will become the rule. The age of total integration of digital components, from the measurement device to the control system to the final control element, is on the horizon.

Guide Outline:

This guide is divided into sections and appendices. Units of measurement are shown in customary U.S. units followed by the SI units in parentheses.

The guide is divided into five major parts:

1. Sections 1 to 14 cover design activities; typically, these are the first steps in implementing process control systems.

2. Sections 15 to 17 deal with the installation, maintenance, and calibration of control equipment; these activities typically follow Part 1 above.

3. Sections 18 and 19 cover project management and decision-making tools, activities that span both Parts 1 and 2.

4. Section 20 describes the road to consulting, a subject of interest to experienced practitioners thinking of providing (or already providing) consulting services.

5. A number of appendices support all of the above sections.

The following is a further breakdown of each of these parts.

Identification and Symbols:

Section 2 covers the naming of instruments, including correct functional identification using typical tag numbers. This section is based on ISA-5.1-1984 (R1992), Instrumentation Symbols and Identification.
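
For example, under ISA-5.1 the first letter of a tag identifies the measured variable and the succeeding letters the instrument function, so FIC-101 reads as "flow indicating controller 101." A minimal decoding sketch, with the letter tables abbreviated to a few common entries:

    # Abbreviated ISA-5.1 letter tables (first letter = measured variable,
    # succeeding letters = instrument function); only a few common entries.
    VARIABLE = {"F": "flow", "L": "level", "P": "pressure",
                "T": "temperature", "A": "analysis"}
    FUNCTION = {"T": "transmitter", "I": "indicating", "C": "controller",
                "E": "element", "V": "valve"}

    def decode_tag(tag):
        """Decode a simple ISA-5.1 style tag such as 'FIC-101'."""
        letters, number = tag.split("-")
        words = [VARIABLE[letters[0]]] + [FUNCTION[c] for c in letters[1:]]
        return " ".join(words) + " " + number

    print(decode_tag("FIC-101"))  # -> flow indicating controller 101
    print(decode_tag("LT-205"))   # -> level transmitter 205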

Measurement:

Sections 3 through 7 focus on the measurement of analytical values, flow, level, pressure, and temperature, respectively. Each Section consists of an overview and a comparison table. These tables provide, in condensed form, the guidance the user needs to select a type of measuring device. For each type of device listed in the tables, a description follows that provides its principle of measurement and related application notes.

The Engineering Contractor:

Appendix C describes the activities of an engineering contractor on a typical instrumentation and control job.

Packaged Equipment:

Appendix D describes the activities of a packaged equipment supplier from the point of view of instrumentation and control.

Typical Scope of Work:

Appendix E lists the many engineering activities typically encountered in instrumentation and control work.

Standard Development:

Appendix F describes the steps required to develop a set of corporate standards or guidelines.

Typical Job Descriptions:

Appendix G provides a set of typical job descriptions for personnel working in the field of process automation.

Sample Audit Protocol and Sample Audit Report:

Selecting Measurement and Control Devices:

The instrumentation and control (I&C) designer must first understand the process in order to implement the required control system with the proper instruments. The proper selection of instruments and controls typically involves considering the following:

1. Compliance with all code, statutory, safety, and environmental requirements in effect at the site.

2. Process and plant requirements, including required accuracy and speed of response.

3. Good engineering practice, including acceptable cost, durability, and maintainability.

Selecting instrumentation and control items entails several important aspects other than the specific technology. These include:

• safety

• performance

• equipment location

• air supply

• electrical supply

• grounding

• installation and maintenance

Safety:

Safety must be considered a top priority. Improper materials may corrode and fail, which may lead to leakage or major spills. For the same reason, gasket and seal materials must be compatible with the process. All measurement and control equipment located in hazardous areas or in the presence of flammable gases, vapors, liquids, or dusts must be manufactured, installed, and maintained in compliance with the applicable codes. ANSI/ISA-12.01.01-1999, Definitions and Information Pertaining to Electrical Apparatus in Hazardous (Classified) Locations, provides a general review of the applicable codes and standards as well as guidance for safe implementation.

Performance:

The measurement and control equipment must meet the performance requirements dictated by the user's process needs, such as desired accuracy and turndown capability. A typical measurement and control device has span and zero adjustment capability. Modern devices typically provide either a 4-20 mA output or a digital bus protocol. In many cases, transmitters are specified to be of the indicating type. When indicating transmitters are required, the user should determine whether digital or analog displays are needed, what size the digits should be, and whether to display in percentage or in engineering units.
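
As an illustration of how the zero and span define the scaling, the sketch below converts a 4-20 mA loop current into engineering units; the range-check limits and calibration values are illustrative only.

    def current_to_engineering(i_ma, lo_eu, hi_eu):
        """Scale a 4-20 mA signal linearly to engineering units.

        i_ma:  measured loop current, mA
        lo_eu: engineering value at 4 mA (the zero)
        hi_eu: engineering value at 20 mA (zero + span)
        """
        if not 3.8 <= i_ma <= 20.5:  # outside typical fault limits
            raise ValueError(f"loop current {i_ma} mA out of range")
        return lo_eu + (i_ma - 4.0) / 16.0 * (hi_eu - lo_eu)

    # A level transmitter calibrated 0-500 cm reading 12 mA is at mid-scale:
    print(current_to_engineering(12.0, 0.0, 500.0))  # -> 250.0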

The accuracy requirement is directly related to the needs of the process. For example, in flow measurement, elbow tap error may reach 10 percent, while accuracies of ±0.5 percent are common for magnetic meters. Thus, two questions arise: What accuracy does the user require, and which measuring device can meet it? Note that this accuracy should be maintained across the process's minimum-to-maximum operating range, not just at the normal value. Turndown, the ratio between the maximum and minimum measurement, is an essential parameter when determining which measurement technique to use. For example, flowmeters using orifice plates have a 3:1 turndown, whereas mass flowmeters reach 100:1.
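
To see why turndown matters during selection, here is a minimal sketch (the flow numbers are hypothetical) checking whether a meter's usable range covers the process swing:

    def covers_range(max_reading, turndown, proc_min, proc_max):
        """True if a meter sized for max_reading still measures proc_min
        within its stated turndown (usable floor = max_reading / turndown)."""
        min_measurable = max_reading / turndown
        return proc_max <= max_reading and proc_min >= min_measurable

    # Process swings from 30 to 270 m3/h; meter sized for 300 m3/h:
    print(covers_range(300, 3, 30, 270))    # orifice, 3:1   -> False (floor is 100)
    print(covers_range(300, 100, 30, 270))  # mass meter, 100:1 -> True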

The measurement and control equipment should be capable of handling corrosive environments, both on the process side (e.g., acid fluids) and on the environment side (e.g., sea water spray). In addition, abrasion is caused by solids entrained in the fluid coming into contact with the components of the device. In these environments, the user should choose obstructionless devices or hardened materials to reduce such effects.

Additional considerations include the electrical noise, vibration, and shock surrounding the equipment, as well as variations in power supply and their effect on the instrument's performance.

Enclosures must be suitable for the process, for the ambient local conditions, and for the area classification.

Equipment Location:

All measurement and control equipment should be installed in an easily accessible location (see Section 15 for further information). In addition, the user must consider both the maximum and minimum ambient temperatures, and the equipment's electronics must be protected from the process temperature. In the case of high process temperature, remote electronics are typically used. The accuracy of the measurement should remain unaffected by temperature variations. For low ambient temperature, winterizing may be required, and the user should assess the potential effects of winterizing failure.

Air Supply:

An instrument air system is typically required in most plants. In modern control systems, air is generally used to drive control valves. In most designs, control valves go to their fail-safe positions when the instrument air fails.

There are a few cases where, in addition to control valves, measuring devices (i.e., transmitters and controllers) are pneumatic instead of electronic. Their signal range is typically 3-15 psig (20-100 kPag). Pneumatic control systems are generally used in especially corrosive or hazardous environments and are immune to electrical noise. However, they respond slowly and have a limited transmission distance. In addition, they cannot communicate directly with computer systems and require pneumatic-to-electronic signal conversion. Their installation cost is relatively high since they cannot be marshaled in groups or networked. Finally, the availability of pneumatic instruments is limited in comparison to their electronic counterparts.

Instrument air must meet minimum quality requirements. Dirty air will plug an instrument's sensitive pneumatic systems, and moisture can freeze, rendering pneumatic devices inoperable or unreliable. Thus, clean, dry, oil-free instrument air is generally supplied at a minimum pressure of 90 psig (630 kPag) and with a dew point 20°F (10°C) below the ambient winter design temperature at atmospheric pressure.

An instrument air supply system consists, in most cases, of air generation (i.e., compressors), air drying, and air distribution, which includes an air receiver that protects against the loss of air compression and is independent of any non-instrument air users. This receiver should be sized to provide acceptable hold capacity (e.g., a minimum of 5 minutes) in the event the instrument air supply is lost. Air distribution systems generally consist of air headers with takeoff points mounted at the headers' top or side to feed the branches.
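
The receiver hold time can be estimated from the ideal-gas blowdown between the normal header pressure and the minimum usable pressure. A rough sketch, assuming isothermal behavior and illustrative numbers:

    def receiver_hold_minutes(volume_ft3, p_start_psig, p_min_psig, demand_scfm):
        """Approximate minutes of instrument air available from a receiver
        as it blows down from p_start_psig to p_min_psig (ideal gas,
        isothermal; gauge pressures converted to absolute with 14.7 psi)."""
        p_atm = 14.7
        free_air_scf = volume_ft3 * (p_start_psig - p_min_psig) / p_atm
        return free_air_scf / demand_scfm

    # A 400 ft3 receiver dropping from 100 psig to 65 psig feeding 30 scfm:
    print(round(receiver_hold_minutes(400, 100, 65, 30), 1))  # -> ~31.7 min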

Air drying is typically done through the use of one of three common types of air dryers:

• refrigerated,

• absorbent (deliquescent desiccant), or

• adsorbent (regenerative desiccant).

A refrigerated air dryer uses mechanical refrigerated cooling. It provides a constant dew point, requires little maintenance, has a low operating cost, and is not damaged by oil vapors; however, it can achieve only limited dew points. In an absorbent air dryer, a hygroscopic desiccant is consumed, and a pre-filter and after-filter are typically required. It has a low initial cost, is simple to use, and has no moving parts; however, its desiccant must be replaced periodically, it requires high maintenance, and it has a high operating cost. The adsorbent dryer is the most common type used in industrial plants. In this type of dryer, a hygroscopic desiccant is regenerated using alternate flow paths in two towers; a pre-filter and after-filter are also required. The adsorbent dryer reaches low dew points at a reasonable operating cost; however, it has a high initial cost.

Additional information on instrument air is available from ANSI/ISA-7.0.01-1996, Quality Standard for Instrument Air.

Electrical Supply:

Electrical power supply is required for all modern control systems. This power supply must conform to the requirements of all regulatory bodies that have jurisdiction at the site.

In most industrial applications, it's particularly important that the quality and integrity of the power supply for process computers and their auxiliary hardware be maintained at a very high level. Such power integrity can be achieved by using properly sized devices such as an on-line uninterruptible power supply (UPS), a ferroresonant isolating transformer, or a surge suppressor. If the process under control would be affected by a power loss to the control system, or if a system outage cannot be tolerated, the user may have to consider a UPS. UPSs are available in many types and options; the two most common types are on-line and off-line. The on-line type converts the incoming AC power to DC, which charges the batteries; the battery output is then converted back to AC to feed the load. Any power interruption on the incoming side is not felt at the output, and incoming electrical noise is not passed through, thus providing a clean AC source.

The off-line type charges its batteries and waits until it’s required to supply the load, while the control equipment uses incoming raw power. The off-line UPS will power the load when it senses an incoming power failure.

When specifying a UPS, the user needs to ensure that the equipment bears the label of the approval authority (e.g., UL or CSA). In addition, the user should specify the required discharge rate, the discharge time at rated load (e.g., 45 minutes), and the recharging time under full load, expressed as a percentage of full capacity (e.g., 95%) reached within a preset time (e.g., 10 hrs).
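
As a rough illustration of how the discharge time drives battery size, the sketch below estimates the watt-hours needed to carry a load for a given time; the efficiency and aging derates are assumptions, not vendor data.

    def battery_wh_required(load_va, power_factor, runtime_min,
                            inverter_eff=0.9, end_of_life_margin=0.8):
        """Very rough UPS battery sizing: watt-hours needed to carry the
        load for runtime_min, derated for inverter losses and battery
        aging (derate values are assumed, not from any standard)."""
        load_w = load_va * power_factor
        hours = runtime_min / 60.0
        return load_w * hours / (inverter_eff * end_of_life_margin)

    # A 5 kVA control-system load at 0.8 power factor held for 45 minutes:
    print(round(battery_wh_required(5000, 0.8, 45)))  # -> ~4167 Wh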

Additional features commonly required in industrial UPSs are:

• extra capacity (for future needs),

• the ability to directly use raw on-line power in case the UPS fails,

• a local panel displaying incoming AC volts, output AC volts, amps, and frequency, plus a bypass switch to use raw on-line power,

• remote alarm indication when the UPS fails, when AC is fed from the automatic transfer switch, and when AC is fed from the manual bypass, and

• sealed, maintenance-free batteries, which avoid the emission of hazardous gases.

Often, a UPS is installed with two separate service feeders: one feeder for the UPS and the other for the bypass. Where raw power is used (i.e., bypassing the UPS), an isolation transformer is required on the raw power side to reduce the transfer of electrical noise present in the electrical supply system.

For large time-retention capacity, a UPS with a diesel-driven generator is generally provided. This approach avoids having a large number of batteries.

When electronic equipment is connected to a breaker panel (also known as a fuse panel), electromagnetic interference (EMI) noise may travel to sensitive devices. EMI does not easily travel through transformers; hence, isolating transformers are needed to isolate the electronic control equipment from other EMI-generating devices.

Grounding:

Grounding is an essential part of any modern control system. Good grounding practice helps ensure quality installations and trouble-free operation. Users should implement grounding systems in compliance with the code and with the system vendor's recommendations.

Many electrical codes accept the use of a conductive rigid conduit to ground equipment. However, electronic equipment necessitates the use of a copper wire conductor to ensure proper operation.

Three grounds need to be considered: power, shield, and signal. Power grounding is typically implemented by electrical engineering and is not covered in this guide. Proper grounding is vital to the operation of computer-based control systems. Some organizations will involve the control equipment manufacturer in reviewing the detailed grounding drawings to ensure correctness.

When grounding the shield that wraps around a pair of wires carrying the signal, only one end should be grounded. The other end (typically on the field side) should be cut back and taped to prevent accidental grounding. Signal ground should also be grounded at one point only (typically, the point closest to the signal's power source). Multiple signal grounds generally result in ground loops (i.e., grounds at different potentials). Such ground loops add to or subtract from the 4-20 mA signal, introducing an error to the measured signal. It may be difficult to eliminate grounds for some devices such as analyzers, grounded thermocouples, and instruments grounded for safety. For these devices, and in situations where more than one ground exists, signal isolators should be used.
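
The magnitude of such an error is easy to estimate: the ground potential difference drives an extra current through the loop resistance, and that current adds directly to the 4-20 mA signal. A simplified worked sketch (the voltage and resistance values are illustrative):

    def ground_loop_error_pct(v_ground, loop_resistance_ohm):
        """Percent-of-span error a ground potential difference adds to a
        4-20 mA loop (span = 16 mA); simplified, assuming the full
        potential appears across the loop resistance."""
        i_err_ma = v_ground / loop_resistance_ohm * 1000.0  # extra mA
        return i_err_ma / 16.0 * 100.0

    # 0.5 V between two grounds across a 250-ohm loop:
    print(round(ground_loop_error_pct(0.5, 250.0), 1))  # -> 12.5 % of span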

Figure: Ground loop errors.

Installation / Maintenance:

The user should determine the capabilities of the plant's in-house maintenance staff when selecting measurement and control devices. Maintenance may need to be done by an outside contractor, in which case the user should determine whether that contractor has the necessary expertise and can reach the site in an acceptable time. Other considerations include the difficulty and frequency of calibration, as well as whether calibration should be done at the site or at the vendor's facilities.

Maintenance is part of the cost of ownership, and the user should consider the cost of high maintenance items that require specialized equipment and expertise. The frequency of required preventive maintenance should be determined as well as the robustness of the instrument in comparison to its required performance.

Because some installation and maintenance activities require the process to be shut down, it's often necessary to determine whether the measurement and control device can be removed on-line and how essential the device is to the ongoing process. In all cases, the measurement and control devices should be accessible from either grade or a platform.

Accuracy / Repeatability:

Accuracy and repeatability are essential terms in the world of measurement and control. Accuracy (an indication of the measured error) reflects the instrument's ability to provide the correct value. Repeatability is the instrument's ability to give the same reading every time.

The composite accuracy of a measuring device includes the combined effects of repeatability and accuracy. Unfortunately, this composite is sometimes referred to simply as "accuracy." Yet without repeatability, good accuracy cannot be obtained. Where repeatability changes with time or where accuracy is an important factor, good performance depends directly on how frequently the equipment is calibrated.

It's possible to have good repeatability without good accuracy. Repeatability indicates the ability of a measuring device to reproduce a measurement each time the same set of conditions occurs. It does not mean that the measurement is correct, only that the indication is the same each time.
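
The distinction can be made concrete with a few numbers: the mean offset of repeated readings from the true value is the bias (accuracy) error, while the scatter of the readings is the repeatability. A minimal sketch with made-up readings:

    from statistics import mean, stdev

    def bias_and_repeatability(readings, true_value):
        """Bias error = mean offset from the true value;
        repeatability = scatter (standard deviation) of the readings."""
        return mean(readings) - true_value, stdev(readings)

    # Repeatable but inaccurate: tight scatter, consistent +2.0 offset.
    print(bias_and_repeatability([102.1, 101.9, 102.0, 102.0], 100.0))
    # -> (2.0, ~0.08): good repeatability, poor accuracy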

Figure: Measured error (accuracy) and repeatability, showing the repeatability, bias error, and total measured error for two cases.
