Sensor Technologies (part 2)





(cont from part 1)

Ultrasonic Transducers

Ultrasonic devices are used in many fields of measurement, particularly for measuring fluid flow rates, liquid levels, and translational displacements. Details of such applications can be found in later sections.

Ultrasound is a band of frequencies in the range above 20 kHz, that is, above the sonic range that humans can usually hear. Measurement devices that use ultrasound consist of one device that transmits an ultrasound wave and another device that receives the wave. Changes in the measured variable are determined either by measuring the change in time taken for the ultrasound wave to travel between the transmitter and receiver or, alternatively, by measuring the change in phase or frequency of the transmitted wave.




The most common form of ultrasonic element is a piezoelectric crystal contained in a casing. Such elements can operate interchangeably as either a transmitter or a receiver. These are available with operating frequencies that vary between 20 kHz and 15 MHz. The principles of operation, by which an alternating voltage generates an ultrasonic wave and vice versa, have already been covered in the section on piezoelectric transducers. For completeness, mention should also be made of capacitive ultrasonic elements. These consist of a thin, dielectric membrane between two conducting layers. The membrane is stretched across a backplate and a bias voltage is applied. When a varying voltage is applied to the element, it behaves as an ultrasonic transmitter and an ultrasound wave is produced. The system also works in the reverse direction as an ultrasonic receiver. Elements with resonant frequencies in the range between 30 kHz and 3 MHz can be obtained.

+ Transmission Speed

The transmission speed of ultrasound varies according to the medium through which it travels. Transmission speeds for some common media are given in Table 1.

Table 1: Transmission Speed of Ultrasound through Different Media (Medium; Velocity in m/s)

When transmitted through air, the speed of ultrasound is affected by environmental factors such as temperature, humidity, and air turbulence. Of these, temperature has the largest effect. The velocity of sound through air varies with temperature according to ...

V = 331.6 + 0.6T m/s, (2)

where T is the temperature in °C. Thus, even for a relatively small temperature change of 20°C, from 0 to 20°C, the velocity changes from 331.6 to 343.6 m/s.
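As a quick numerical check of Equation (2), a minimal Python sketch:

```python
def ultrasound_velocity(temperature_c: float) -> float:
    """Velocity of sound in air (m/s) from Equation (2): V = 331.6 + 0.6*T."""
    return 331.6 + 0.6 * temperature_c

print(ultrasound_velocity(0))   # 331.6 m/s at 0 degrees C
print(ultrasound_velocity(20))  # 343.6 m/s at 20 degrees C
```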

Humidity changes have a much smaller effect on speed. If the relative humidity increases by 20%, the corresponding increase in the transmission velocity of ultrasound is 0.07% (corresponding to an increase from 331.6 to 331.8 m/s at 0°C).

Changes in air pressure itself have a negligible effect on the velocity of ultrasound. Similarly, air turbulence normally has no effect. However, if turbulence involves currents of air at different temperatures, then random changes in ultrasound velocity occur according to Equation (2).

+ Directionality of Ultrasound Waves

An ultrasound element emits a spherical wave of energy, although the peak energy is always in a particular direction. The magnitude of energy emission in any direction is a function of the angle made with respect to the direction that is normal to the face of the ultrasonic element.

Peak emission occurs along a line that is normal to the transmitting face of the ultrasonic element, which is loosely referred to as the direction of travel. At any angle other than the "normal" one, the magnitude of transmitted energy is less than the peak value. The figure shows the emission characteristics for a range of ultrasonic elements, expressed as the attenuation of the transmission magnitude (measured in dB) as the angle with respect to the "normal" direction increases. For many purposes, it’s useful to treat the transmission as a conical volume of energy, with the edges of the cone defined as the transmission angle at which the amplitude of the energy is −6 dB compared with the peak value (i.e., where the amplitude of the energy is half that in the normal direction). Using this definition, a 40-kHz ultrasonic element has a transmission cone of ±50° and a 400-kHz element has a transmission cone of ±3°.
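The −6 dB convention can be checked numerically; a short sketch converting an attenuation in dB to an amplitude ratio:

```python
def db_to_amplitude_ratio(db: float) -> float:
    """Amplitude ratio corresponding to a gain/attenuation in dB: 10**(dB/20)."""
    return 10 ** (db / 20.0)

# -6 dB corresponds to roughly half the peak amplitude:
print(round(db_to_amplitude_ratio(-6.0), 3))  # 0.501
```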




It should be noted that air currents can deflect ultrasonic waves such that the peak emission is no longer normal to the face of the ultrasonic element. It has been shown experimentally that an air current moving with a velocity of 10 km/h deflects an ultrasound wave by 8 mm over a distance of 1 m.

+ Relationship Between Wavelength, Frequency, and Directionality of Ultrasound Waves

The frequency and wavelength of ultrasound waves are related according to:

λ = v/f,

where λ is the wavelength, v is the velocity, and f is the frequency of the ultrasound waves.

This shows that the relationship between λ and f depends on the velocity of the ultrasound and hence varies according to the nature and temperature of the medium through which it travels.
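As a numerical illustration, assuming the 20°C air velocity of 343.6 m/s from Equation (2):

```python
def wavelength_mm(velocity_ms: float, frequency_hz: float) -> float:
    """lambda = v / f, converted to millimetres."""
    return velocity_ms / frequency_hz * 1000.0

V_AIR_20C = 343.6  # m/s, from Equation (2) at 20 degrees C

print(round(wavelength_mm(V_AIR_20C, 40_000), 1))   # 8.6 mm for a 40-kHz element
print(round(wavelength_mm(V_AIR_20C, 400_000), 2))  # 0.86 mm for a 400-kHz element
```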

Table 2 compares nominal frequencies, wavelengths, and transmission cones (−6 dB limits) for three different types of ultrasonic elements.

It’s clear from Table 2 that the directionality (cone angle of transmission) reduces as the nominal frequency of the ultrasound transmitter increases. However, the cone angle also depends on factors other than nominal frequency, particularly on the shape of the transmitting horn in the element, and different models of ultrasonic element with the same nominal frequency can have substantially different cone angles.

Table 2: Comparison of Frequency, Wavelength, and Cone Angle for Various Ultrasonic Transmitters

+ Attenuation of Ultrasound Waves

Ultrasound waves suffer attenuation in the amplitude of the transmitted energy according to the distance traveled. The amount of attenuation also depends on the nominal frequency of the ultrasound and the absorption characteristics of the medium through which it travels. The amount of absorption depends not only on the type of transmission medium but also on the level of humidity and dust in the medium.

The amplitude X_d of the ultrasound wave at a distance d from the emission point can be expressed as X_d = (X_0 e^(−αd))/d, where X_0 is the amplitude of the energy at the point of emission and α is the attenuation constant. The value of α depends on the nominal frequency of the ultrasound, the medium that the ultrasound travels through, and any pollutants in the medium, such as dust or water particles.
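A minimal sketch of this attenuation relationship (the value used for α below is an arbitrary placeholder; real values depend on frequency, medium, and pollutants):

```python
import math

def received_amplitude(x0: float, d: float, alpha: float) -> float:
    """X_d = (X_0 * exp(-alpha * d)) / d.

    x0    -- amplitude at the emission point
    d     -- distance from the emission point (metres, d > 0)
    alpha -- attenuation constant (per metre); illustrative placeholder below
    """
    return x0 * math.exp(-alpha * d) / d

# Amplitude falls off with distance (alpha = 0.05 /m is illustrative only):
for d in (1.0, 2.0, 5.0):
    print(d, round(received_amplitude(1.0, d, 0.05), 3))
```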

+ Ultrasound as a Range Sensor

The basic principle of an ultrasonic range sensor is to measure the time between the transmission of a burst of ultrasonic energy from an ultrasonic transmitter and the receipt of that energy by an ultrasonic receiver. The distance d can then be calculated from

d = vt,

where v is the ultrasound velocity and t is the measured energy transit time. An obvious difficulty in applying this equation is the variability of v with temperature according to Equation (2). One solution to this problem is to include an extra ultrasonic transmitter/receiver pair in the measurement system in which the two elements are positioned a known distance apart. Measurement of the transmission time of energy between this fixed pair provides the necessary measurement of velocity and hence compensation for any environmental temperature changes.
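The compensation scheme described above can be sketched as follows (the reference spacing and transit times are made-up illustrative numbers):

```python
def compensated_range(t_measured: float, t_reference: float, d_reference: float) -> float:
    """Range from transit time, with velocity derived from a fixed reference pair.

    The reference transmitter/receiver pair sits a known d_reference (m) apart;
    its transit time t_reference yields the current ultrasound velocity, which
    automatically tracks environmental temperature changes.
    """
    v = d_reference / t_reference  # current ultrasound velocity (m/s)
    return v * t_measured          # d = v * t

# Illustrative numbers: reference pair 0.5 m apart, 1.455 ms reference transit
# time (v of about 343.6 m/s, i.e., air at roughly 20 degrees C):
print(round(compensated_range(5.82e-3, 1.455e-3, 0.5), 2))  # ~2.0 m
```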

The degree of directionality in the ultrasonic elements used for range measurement is unimportant as long as the receiver and transmitter are positioned carefully so as to face each other exactly (i.e., such that the "normal" lines to their faces are coincident). Thus, directionality imposes no restriction on the type of element suitable for range measurement. However, element choice is restricted by the attenuation characteristics of different types of elements, and relatively low-frequency elements have to be used for the measurement of large ranges.

+ Measurement Resolution and Accuracy

The best measurement resolution that can be obtained with an ultrasonic ranging system is equal to the wavelength of the transmitted wave. As wavelength is inversely proportional to frequency, high-frequency ultrasonic elements would seem to be preferable. For example, while the wavelength and hence resolution for a 40-kHz element is 8.6 mm at room temperature (20°C), it’s only 0.86 mm for a 400-kHz element. However, the choice of element also depends on the required range of measurement. The range of higher frequency elements is much reduced compared with low-frequency ones due to the greater attenuation of the ultrasound wave as it travels away from the transmitter. Hence, the choice of element frequency has to be a compromise between measurement resolution and range.

The best measurement accuracy obtainable is equal to the measurement resolution value, but this is only achieved if the electronic counter used to measure the transmission time starts and stops at exactly the same point in the ultrasound cycle (usually the point in the cycle corresponding to peak amplitude is used). However, the sensitivity of the ultrasonic receiver also affects measurement accuracy. The amplitude of the ultrasound wave generated in the transmitter ramps up to full amplitude over the first two or three cycles. The receiver has to be sensitive enough to detect the peak of the first cycle, which can usually be arranged.

However, if the range of measurement is large, attenuation of the ultrasound wave may cause the amplitude of the first cycle to become less than the threshold level that the receiver is set to detect. In this case, only the second cycle will be detected and there will be an additional measurement error equal to one wavelength. For large transmission distances, even the second cycle may be undetected, meaning that the receiver only "sees" the third cycle.

+ Effect of Noise in Ultrasonic Measurement Systems

Signal levels at the output of ultrasonic measurement systems are usually of low amplitude and are therefore prone to contamination by electromagnetic noise. Because of this, it’s necessary to use special precautions such as making ground (earth) lines thick, using shielded cables for transmission of the signal from the ultrasonic receiver, and locating the signal amplifier as close to the receiver as possible.

Another potentially serious form of noise is background ultrasound produced by manufacturing operations in the typical industrial environment in which many ultrasonic range measurement systems operate. Analysis of industrial environments has shown that ultrasound at frequencies up to 100 kHz is generated by many operations, and some operations generate ultrasound at higher frequencies up to 200 kHz. There is not usually any problem if ultrasonic measurement systems operate at frequencies above 200 kHz, but these often have insufficient range for the needs of the measurement situation. In these circumstances, any objects likely to generate energy at ultrasonic frequencies should be covered in sound-absorbing material such that interference with ultrasonic measurement systems is minimized. The placement of sound absorbing material around the path that the measurement ultrasound wave travels along contributes further toward reducing the effect of background noise. A natural solution to the problem is also partially provided by the fact that the same processes of distance traveled and adsorption that attenuate the amplitude of ultrasound waves traveling between the transmitter and the receiver in the measurement system also attenuate ultrasound noise generated by manufacturing operations.

Because ultrasonic energy is emitted at angles other than the direction that is normal to the face of the transmitting element, a problem arises in respect of energy that is reflected off some object in the environment around the measurement system and back into the ultrasonic receiver. This has a longer path than the direct one between the transmitter and the receiver and can cause erroneous measurements in some circumstances. One solution to this is to arrange for the transmission-time counter to stop as soon as the receiver first detects the ultrasound wave. This will usually be the wave that has traveled along the direct path, and so no measurement error is caused as long as the rate at which ultrasound pulses are emitted is such that the next burst is not emitted until all reflections from the previous pulse have died down. However, in circumstances where the direct path becomes obstructed by some obstacle, the counter will only be stopped when the reflected signal is detected by the receiver, giving a potentially large measurement error.

+ Exploiting Doppler Shift in Ultrasound Transmission

The Doppler effect is evident in all types of wave motion and describes the apparent change in frequency of the wave when there is relative motion between the transmitter and the receiver. If a continuous ultrasound wave with velocity v and frequency f takes t seconds to travel from source S to receiver R, then R will receive ft cycles of sound during time t.
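As a numerical illustration of the effect, for the simple case of a receiver moving directly toward a stationary transmitter (the velocities used are illustrative only):

```python
def observed_frequency(f_source: float, v_sound: float, v_receiver: float) -> float:
    """Doppler-shifted frequency for a receiver approaching a stationary
    source at v_receiver (m/s): f' = f * (v + u) / v, so the shift is f*u/v."""
    return f_source * (v_sound + v_receiver) / v_sound

# A 40-kHz wave in air (343.6 m/s) with the receiver approaching at 1 m/s
# is shifted up by about 116 Hz:
f_shift = observed_frequency(40_000, 343.6, 1.0) - 40_000
print(round(f_shift, 1))  # ~116.4
```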

Nuclear Sensors

Nuclear sensors are uncommon measurement devices, partly because of the strict safety regulations that govern their use and partly because they are usually expensive. Some very low-level radiation sources are now available that largely overcome the safety problems, but measurements are then prone to contamination by background radiation. The principle of operation of nuclear sensors is very similar to optical sensors in that radiation is transmitted between a source and a detector through some medium in which the magnitude of transmission is attenuated according to the value of the measured variable. Caesium-137 is used commonly as a γ-ray source, and a sodium iodide device is used commonly as a γ-ray detector. The latter gives a voltage output that is proportional to the radiation incident upon it. One current use of nuclear sensors is in a noninvasive technique for measuring the level of liquid in storage tanks. They are also used in mass flow rate measurement and in medical-scanning applications.

Microsensors

Microsensors are millimeter-sized two- and three-dimensional micro-machined structures that have smaller size, improved performance, better reliability, and lower production costs than many alternative forms of sensors. Currently, devices used to measure temperature, pressure, force, acceleration, humidity, magnetic fields, radiation, and chemical parameters are either in production or at advanced stages of research.

Microsensors are usually constructed from a silicon semiconductor material, but are sometimes fabricated from other materials, such as metals, plastics, polymers, glasses, and ceramics deposited on a silicon base. Silicon is an ideal material for sensor construction because of its excellent mechanical properties. Its tensile strength and Young's modulus are comparable to that of steel, while its density is less than that of aluminum. Sensors made from a single crystal of silicon remain elastic almost to the breaking point, and mechanical hysteresis is very small.

In addition, silicon has a very low coefficient of thermal expansion and can be exposed to extremes of temperature and most gases, solvents, and acids without deterioration.

Micro-engineering techniques are an essential enabling technology for microsensors, which are designed so that their electromechanical properties change in response to a change in the measured parameter. Many of the techniques used for integrated circuit (IC) manufacture are also used in sensor fabrication, with common techniques being crystal growing and polishing, thin film deposition, ion implantation, wet and dry chemical and laser etching, and photolithography. However, some special techniques are also needed in addition to standard IC production techniques in order to produce the three-dimensional structures that are unique to some types of microsensors. The various manufacturing techniques are used to form sensors directly in silicon crystals and films. Typical structures have forms such as thin diaphragms, cantilever beams, and bridges.

--- Cantilever micro-accelerometer: etched piezoresistive element, proof mass, silicon layer, glass substrate

While the small size of a microsensor is of particular benefit in many applications, it also leads to some problems that require special attention. For example, microsensors typically have very low capacitance. This makes the output signals very prone to noise contamination. Hence, it’s usually necessary to integrate microelectronic circuits that perform signal processing in the device, which therefore becomes a smart microsensor. Another problem is that microsensors generally produce output signals of very low magnitude. This requires the use of special types of analogue-to-digital converters that can cope with such low-amplitude input signals. One suitable technique is sigma-delta conversion. This is based on charge balancing techniques and gives better than 16-bit accuracy in less than 20 ms. Special designs can reduce conversion time down to less than 0.1 ms if necessary.
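The charge-balancing idea behind sigma-delta conversion can be illustrated with a toy first-order modulator (a conceptual sketch only, not a real converter design, which would add decimation filtering and much more):

```python
def sigma_delta_bits(x: float, n: int) -> list:
    """Toy first-order sigma-delta modulator for a constant input x in [0, 1].

    Each step accumulates the input and emits a 1 whenever the accumulated
    'charge' exceeds one unit, balancing it by subtracting that unit; the
    mean of the resulting 1-bit stream then approximates x.
    """
    acc = 0.0
    bits = []
    for _ in range(n):
        acc += x
        if acc >= 1.0:
            bits.append(1)
            acc -= 1.0  # charge balancing: remove the quantized unit
        else:
            bits.append(0)
    return bits

bits = sigma_delta_bits(0.25, 1000)
print(sum(bits) / len(bits))  # 0.25 -- averaging the bitstream recovers x
```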

Microsensors are used most commonly for measuring pressure, acceleration, force, and chemical parameters. They are used in particularly large numbers in the automotive industry, where unit prices can be very low. Microsensors are also widely used in medical applications, particularly for blood pressure measurement.

Mechanical microsensors transform measured variables such as force, pressure, and acceleration into a displacement. The displacement is usually measured by capacitive or piezoresistive techniques, although some devices use other technologies, such as resonant frequency variation, resistance change, inductance change, piezoelectric effect, and changes in magnetic or optical coupling. The figure shows the design of a cantilever silicon micro-accelerometer.

The proof mass within this is about 100 μm across, and the typical deflection measured is of the order of 1 μm (10⁻³ mm).

An alternative capacitive micro-accelerometer provides a calibrated, compensated, and amplified output. It has a capacitive silicon microsensor to measure displacement of the proof mass.

This is integrated with a signal processing chip and is protected by a plastic enclosure. The capacitive element has a three-dimensional structure, which gives higher measurement sensitivity than surface-machined elements.

Microsensors to measure many other physical variables are either in production or at advanced stages of research. Microsensors measuring magnetic field are based on a number of alternative technologies, such as the Hall effect, magnetoresistors, magnetodiodes, and magnetotransistors.

Radiation microsensors are made from silicon p-n diodes or avalanche photodiodes and can detect radiation over wavelengths from the visible spectrum to infrared. Microsensors in the form of a microthermistor, a p-n thermodiode, or a thermotransistor are used as digital thermometers. Microsensors have also enabled measurement techniques that were previously laboratory-based ones to be extended into field instruments. Examples are spectroscopic instruments and devices to measure viscosity.

Summary

This section reviewed 11 different physical principles used in measurement sensors. As noted in the introduction to the section, the chosen order of presentation of these principles is arbitrary and is not intended to imply anything about the relative popularity of the various principles.

The first principle covered was capacitance change, which we found was based on two capacitor plates with either a variable or a fixed distance between them. We learned that sensors with a variable distance between plates are used primarily for displacement measurement, either as displacement sensors in their own right or to measure the displacement within certain types of pressure, sound, and acceleration sensors. The alternative type of capacitive sensor where the distance between plates is fixed is used typically to measure moisture content, humidity values, and liquid level.

Moving on to the resistance change principle, we found that this is used in a wide range of devices for temperature measurement (resistance thermometers and thermistors) and displacement measurement (strain gauges and piezoresistive sensors). We also noted that some moisture meters work on the resistance variation principle.

We then looked at sensors that use the magnetic phenomena of inductance, reluctance, and eddy currents. We saw that the principle of inductance change was used mainly to measure translational and rotational displacements, reluctance change was used commonly to measure rotational velocities, and the eddy current effect was used typically to measure the displacement between a probe and a very thin metal target, such as the steel diaphragm of a pressure sensor.

Next we looked at the Hall effect. This measures the magnitude of a magnetic field and is used commonly in a proximity sensor. It’s also employed in computer keyboard push buttons.

We then moved on to piezoelectric transducers. These generate a voltage when a force is applied to them. Alternatively, if a voltage is applied to them, an output force is produced. A common application is in ultrasonic transmitters and receivers. They are also used as displacement transducers, particularly as part of devices measuring acceleration, force, and pressure.

Our next subject of study was strain gauges. These devices exploit the physical principle of a change in resistance when the metal wire that they are made from is stretched or strained. They detect very small displacements and are used typically within devices such as diaphragm pressure sensors to measure the small displacement of the diaphragm when a pressure is applied to it. We looked into some of the science involved in strain gauge design, particularly in respect to the alternative materials used for the active element.

Moving on, we then looked at piezoresistive sensors. We saw that these could be regarded as a semiconductor strain gauge, as they consist of a semiconductor material whose resistance varies when it’s compressed or stretched. They are used commonly to measure the displacement in diaphragm pressure sensors where the resistance change for a given amount of diaphragm displacement is much greater than is obtained in metal strain gauges, thus leading to better measurement accuracy. They are also used as accelerometers. Before concluding this discussion, we also observed that the term piezoresistive sensor is sometimes (but incorrectly) used to describe metal strain gauges as well as semiconductor ones.

In our discussion of optical sensors that followed, we observed first of all that these could involve both transmission of light through air and transmission along a fiber-optic cable. Air path optical sensors exploit the transmission of light from a source to a detector across an open air path and are used commonly to measure proximity, translational motion, rotational motion, and gas concentration.

Sensors that involve the transmission of light along a fiber-optic cable are commonly called fiber-optic sensors. Their principle of operation is to translate the measured physical quantity into a change in the intensity, phase, polarization, wavelength, or transmission time of the light carried along the cable. We went on to see that two kinds of fiber-optic sensors can be distinguished, known as intrinsic sensors and extrinsic sensors. In intrinsic sensors, the fiber optic cable itself is the sensor, whereas in extrinsic sensors, the fiber-optic cable is merely used to transmit light to/from a conventional sensor. Our look at intrinsic sensors revealed that different forms of these are used to measure a very wide range of physical variables, including proximity, displacement, pressure, pH, smoke intensity, acceleration, temperature, cryogenic leakage, oil content in water, liquid level, refractive index of a liquid, parameters in biomedical applications (oxygen concentration, carbon monoxide concentrations, blood pressure level, hormone concentration, steroid concentration), mechanical strain, magnetic field strength, electric field strength, electrical current, electrical voltage, angular position and acceleration in gyroscopes, liquid flow rate, and gas presence. In comparison, the range of physical variables measured by extrinsic sensors is much less, being limited mainly to the measurement of temperature, pressure, force, and displacement (both linear and angular).

This then led to a discussion of ultrasonic sensors. These are used commonly to measure range, translational displacements, fluid flow rate, and liquid level. We learned that ultrasonic sensors work in one of two ways, either by measuring the change in time taken for an ultrasound wave to travel between a transmitter and a receiver or by measuring the change in phase or frequency of the transmitted wave. While both of these principles are simple in concept, we went on to see that the design and use of ultrasonic sensors suffer from a number of potential problems. First, the transmission speed can be affected by environmental factors, with temperature changes being a particular problem and humidity changes to a lesser extent. The nominal operating frequency of ultrasonic elements also has to be chosen carefully according to the intended application, as this affects the effective amount of spread of the transmitted energy either side of the direction normal to the face of the transmitting element. Attenuation of the transmitted wave can cause problems. This is particularly so when ultrasonic elements are used as range sensors.

This follows from the start-up nature of a transmitted ultrasonic wave, which exhibits increasing amplitude over the first two or three cycles of the emitted energy wave. Attenuation of the wave as it travels over a distance may mean that the detector fails to detect the first or even second cycle of the transmitted wave, causing an error that is equal to one or two times the ultrasound wavelength. Noise can also cause significant problems in the operation of ultrasonic sensors, as they are contaminated easily by electromagnetic noise and are particularly affected by noise generated by manufacturing operations at a similar frequency to that of the ultrasonic-measuring system. Because there is some emission of ultrasonic energy at angles other than the normal direction to the face of the ultrasonic element, stray reflections of transmissions in these non-normal directions by structures in the environment around the ultrasonic system may interfere with measurements.

The next type of sensor discussed was nuclear sensors. We learned that these did not enjoy widespread use, with main applications being in the noninvasive measurement of liquid level, mass flow rate measurement, and in some medical scanning applications. This limited number of applications is partly due to the health dangers posed to users by the radiation source that they use and partly due to their relatively high cost. Danger to users can largely be overcome by using low-level radiation sources but this makes measurements prone to contamination by background radiation.

Finally, we looked at micro-sensors, which we learned were millimeter-sized, two- and three dimensional micro-machined structures usually made from silicon semiconductor materials but sometimes made from other materials. These types of sensors have smaller size, improved performance, better reliability, and lower production costs than many alternative forms of sensors and are used to measure temperature, pressure, force, acceleration, humidity, magnetic fields, radiation, chemical parameters, and some parameters in medical applications such as blood pressure. Despite being superior in many ways to larger sensors, they are affected by several problems. One such problem is their very low capacitance, which makes their output signals very prone to noise contamination. To counteract this, it’s normally necessary to integrate microelectronic circuits in the device to perform signal processing. Another problem is the very low magnitude of the output signal, which requires the use of special types of analogue-to-digital converters.

QUIZ

1. Describe the general working principles of capacitive sensors and discuss some applications of them.

2. Discuss some applications of resistive sensors.

3. What types of magnetic sensors exist and what are they mainly used for? Describe the mode of operation of each.

4. What are Hall-effect sensors? How do they work and what are they used for?

5. How does a piezoelectric transducer work and what materials are typically used in their construction? Discuss some common applications of this type of device.

6. What is a strain gauge and how does it work? What are the problems in making and using a traditional metal-wire strain gauge and how have these problems been overcome in new types of strain gauges?

7. Discuss some applications of strain gauges.

8. What are piezoresistive sensors and what are they typically used for?

9. What is the principal advantage of an optical sensor? Discuss the mode of operation of the two main types of optical sensors.

10. What are air path optical sensors? Discuss their mode of operation, including details of light sources and light detectors used. What are their particular advantages and disadvantages over fiber-optic sensors?

11. How do fiber-optic sensors work? Discuss their use in intrinsic and extrinsic sensors.

12. Explain the basic principles of operation of ultrasonic sensors and discuss what they are typically used for.

13. What factors govern the transmission speed and directionality of ultrasonic waves? How do these factors affect the application of ultrasonic sensors?

14. Discuss the use of ultrasonic sensors in range-measuring systems, mentioning the effect of attenuation of the wave as it travels. How can measurement resolution and accuracy be optimized?

15. Discuss the effects of extraneous noise in ultrasonic measurement systems. How can these effects be reduced?

16. Discuss the phenomenon of Doppler shift in ultrasound transmission and explain how this can be used in sensors.

17. Why are nuclear sensors not in common use?

18. What are micro-sensors? How are they made and what applications are they used in?

NEXT: Flow Measurement: Intro and Mass Flow Rate

PREV: Sensor Technologies (part 1)




Updated: Thursday, November 8, 2012 17:22 PST