Digital recording and transmission principles (part 1)





Recording and transmission may appear to be quite different tasks, but they have a great deal in common and are best regarded as different applications of the same art. Digital transmission consists of converting data into a waveform suitable for the path along which it’s to be sent.

Digital recording is basically the process of recording a digital transmission waveform on a suitable medium. Although the physics of the recording or transmission process is unaffected by the meaning attributed to signals, digital techniques are rather different from those used with analog signals, even though the same phenomena often show up in a different guise. In this section the fundamentals of digital recording and transmission are introduced along with descriptions of the coding techniques used in practical applications. The parallel subject of error correction is dealt with in the next section.

1. Introduction to the channel

Data can be recorded on many different media and conveyed using many forms of transmission. The generic term for the path down which the information is sent is the channel. In a transmission application, the channel may be no more than a length of cable. In a recording application the channel will include the record head, the medium and the replay head. In analog systems, the characteristics of the channel affect the signal directly. It’s a fundamental strength of digital audio that by pulse code modulating an audio waveform the quality can be made independent of the channel. The dynamic range required by the program material no longer directly decides the dynamic range of the channel.

In digital circuitry there is a great deal of noise immunity because the signal has only two states, which are widely separated compared with the amplitude of noise. In both digital recording and transmission this is not always the case. In magnetic recording, noise immunity is a function of track width and reduction of the working SNR of a digital track allows the same information to be carried in a smaller area of the medium, improving economy of operation. In broadcasting, the noise immunity is a function of the transmitter power and reduction of working SNR allows lower power to be used with consequent economy. These reductions also increase the random error rate, but, as was seen in Section 1, an error correction system may already be necessary in a practical system and it’s simply made to work harder.

In real channels, the signal may originate with discrete states which change at discrete times, but the channel will treat it as an analog waveform and so it won’t be received in the same form. Various loss mechanisms will reduce the amplitude of the signal. These attenuations won’t be the same at all frequencies. Noise will be picked up in the channel as a result of stray electric fields or magnetic induction. As a result the voltage received at the end of the channel will have an infinitely varying state along with a degree of uncertainty due to the noise.

Different frequencies can propagate at different speeds in the channel; this is the phenomenon of group delay. An alternative way of considering group delay is that there will be frequency-dependent phase shifts in the signal and these will result in uncertainty in the timing of pulses.

In digital circuitry, the signals are generally accompanied by a separate clock signal which reclocks the data to remove jitter as was shown in Section 1. In contrast, it’s generally not feasible to provide a separate clock in recording and transmission applications. In the transmission case, a separate clock line would not only raise cost, but is impractical because at high frequency it’s virtually impossible to ensure that the clock cable propagates signals at the same speed as the data cable except over short distances. In the recording case, provision of a separate clock track is impractical at high density because mechanical tolerances cause phase errors between the tracks. The result is the same: timing differences between parallel channels, which are known as skew.

The solution is to use a self-clocking waveform, and the generation of this is a further essential function of the coding process. Clearly, if data bits are simply clocked serially from a shift register in so-called direct recording or transmission, this characteristic won’t be obtained. If all the data bits are the same, for example all zeros, there is no clock when they are serialized.

It’s not the channel which is digital; instead the term describes the way in which the received signals are interpreted. When the receiver makes discrete decisions from the input waveform it attempts to reject the uncertainties in voltage and time. The technique of channel coding is one where transmitted waveforms are restricted to those which still allow the receiver to make discrete decisions despite the degradations caused by the analog nature of the channel.

2. Types of transmission channel

Transmission can be by electrical conductors, radio or optical fiber.

Although these appear to be completely different, they are in fact just different examples of electromagnetic energy travelling from one place to another. If the energy is made to vary in some way, information can be carried.

Even today electromagnetism is not fully understood, but sufficiently good models, based on experimental results, exist so that practical equipment can be made. It’s not actually necessary to fully understand a process in order to harness it; it’s only necessary to be able to reliably predict what will happen in given circumstances.

Electromagnetic energy propagates in a manner which is a function of frequency, and our partial understanding requires it to be considered as electrons, waves or photons so that we can predict its behavior in given circumstances.

At DC and at the low frequencies used for power distribution, electromagnetic energy is called electricity and it’s remarkably aimless stuff which needs to be transported completely inside conductors. It has to have a complete circuit to flow in, and the resistance to current flow is determined by the cross-sectional area of the conductor. The insulation around the conductor and the spacing between the conductors has no effect on the ability of the conductor to pass current. At DC an inductor appears to be a short circuit, and a capacitor appears to be an open circuit.

As frequency rises, resistance is exchanged for impedance. Inductors display increasing impedance with frequency, capacitors show falling impedance. Electromagnetic energy becomes increasingly desperate to leave the conductor. The first symptom is that the current flows only in the outside layer of the conductor effectively causing the resistance to rise. This is the skin effect and gives rise to such techniques as Litz wire which has as much surface area as possible per unit cross-section, and to silver-plated conductors in which the surface has lower resistivity than the interior.
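
The size of the effect is easy to estimate from the standard skin-depth formula. The following is a minimal sketch in Python; the resistivity value assumed is that of copper, and the frequencies are simply illustrative:

    import math

    def skin_depth(freq_hz, resistivity=1.68e-8, mu_r=1.0):
        # Skin depth in metres: delta = sqrt(rho / (pi * f * mu)).
        # 1.68e-8 ohm-metre is the resistivity of copper.
        mu = mu_r * 4e-7 * math.pi
        return math.sqrt(resistivity / (math.pi * freq_hz * mu))

    for f in (50.0, 1e4, 1e6, 1e8):
        print(f"{f:12.0f} Hz: {skin_depth(f) * 1e3:8.4f} mm")

At 50 Hz the skin depth in copper is around 9 mm, so ordinary conductors carry the whole current; by 100 MHz it has shrunk to a few micrometers, which is why surface area and surface plating matter so much.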

As the energy is starting to leave the conductors, the characteristics of the space between them become important. This determines the impedance. A change of impedance causes reflections in the energy flow and some of it heads back towards the source. Constant impedance cables with fixed conductor spacing are necessary, and these must be suitably terminated to prevent reflections. The most important characteristic of the insulation is its thickness as this determines the spacing between the conductors.

As frequency rises still further, the energy travels less in the conductors and more in the insulation between them. The composition of the insulation now becomes important, and it is referred to as a dielectric. A poor dielectric like PVC absorbs high-frequency energy and attenuates the signal. So-called low-loss dielectrics such as PTFE are used, and one way of achieving low loss is to incorporate as much air in the dielectric as possible by making it in the form of a foam or extruding it with voids.

Further rise in frequency causes the energy to start behaving more like waves and less like electron movement. As the wavelength falls it becomes increasingly directional. The transmission line becomes a waveguide, and microwaves are sufficiently directional that they can keep going without any conductor at all. Microwaves are simply low-frequency radiant heat, which is itself low-frequency light. All three are reflected well by electrical conductors, and can be refracted at the boundary between media having different propagation speeds. A waveguide is the microwave equivalent of an optical fiber.

This frequency-dependent behavior is the most important factor in deciding how best to harness electromagnetic energy flow for information transmission. It’s obvious that the higher the frequency, the greater the possible information rate, but in general, losses increase with frequency, and flat frequency response is elusive. The best that can be managed is that over a narrow band of frequencies, the response can be made reasonably constant with the help of equalization. Unfortunately raw data when serialized have an unconstrained spectrum. Runs of identical bits can produce frequencies much lower than the bit rate would suggest. One of the essential steps in a transmission system is to modify the spectrum of the data into something more suitable.
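
The problem with raw data is easy to demonstrate. The sketch below (Python; the bit rate and oversampling factor are assumed figures) compares the spectrum of directly serialized random data with a long run of identical bits; the run puts all of its energy at DC, which no real channel can carry:

    import numpy as np

    BIT_RATE = 1e6      # assumed bit rate
    PER_BIT = 8         # waveform samples per bit

    def nrz(bits):
        # Direct (uncoded) serialization: 1 -> +1, 0 -> -1.
        return np.repeat(np.where(bits, 1.0, -1.0), PER_BIT)

    rng = np.random.default_rng(0)
    cases = {
        "random data": rng.integers(0, 2, 4096).astype(bool),
        "all zeros":   np.zeros(4096, dtype=bool),
    }
    for name, bits in cases.items():
        wave = nrz(bits)
        spectrum = np.abs(np.fft.rfft(wave)) ** 2
        freqs = np.fft.rfftfreq(len(wave), d=1.0 / (BIT_RATE * PER_BIT))
        low = spectrum[freqs < BIT_RATE / 10].sum() / spectrum.sum()
        print(f"{name}: {low:.1%} of energy below one tenth of the bit rate")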

At moderate bit rates, say a few megabits per second, and with moderate cable lengths, say a few meters, the dominant effect will be the capacitance of the cable, which is determined by the geometry of the conductors and the dielectric between them. The capacitance behaves under these conditions as if it were a single capacitor connected across the signal. Fig. 1 shows the equivalent circuit.

The effect of the series source resistance and the parallel capacitance is that signal edges or transitions are turned into exponential curves as the capacitance is effectively being charged and discharged through the source impedance. This effect can be observed on the AES/EBU interface with short cables. Although the position where the edges cross the center-line is displaced, the signal eventually reaches the same amplitude as it would at DC.

Fig. 1 With a short cable, the capacitance between the conductors can be lumped as if it were a discrete component. The effect of the parallel capacitor is to slope off the edges of the signal.
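
The lumped model is easy to put into numbers. A minimal sketch follows; the 110 ohm source impedance matches the AES/EBU interface, but the 300 pF of lumped cable capacitance is simply an assumed figure for a few meters of cable:

    import math

    R = 110.0      # source impedance in ohms (AES/EBU uses 110 ohm)
    C = 300e-12    # assumed lumped capacitance of a short cable, farads
    tau = R * C    # time constant of the exponential edge

    for n in range(6):
        t = n * tau
        v = 1.0 - math.exp(-t / tau)   # rising edge charging through R
        print(f"t = {t * 1e9:5.1f} ns   v = {v:.3f} of final amplitude")

After about five time constants the edge has effectively reached the DC amplitude, which is why short cables displace the zero crossings but don’t destroy the signal.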

Fig. 2 A transmission line conveys energy packets which appear to alternate with respect to the dielectric. In (a) the driver launches a pulse which charges the dielectric at the beginning of the line. As it propagates the dielectric is charged further along as in (b). When the driver ends the pulse, the charged dielectric discharges into the line. A current loop is formed where the current in the return loop flows in the opposite direction to the current in the 'hot' wire.

As cable length increases, the capacitance can no longer be lumped as if it were a single unit; it has to be regarded as being distributed along the cable. With rising frequency, the cable inductance also becomes significant, and it too is distributed.

The cable is now a transmission line and pulses travel down it as current loops which roll along as shown in Fig. 2. If the pulse is positive, as it’s launched along the line, it will charge the dielectric locally as at (a). As the pulse moves along, it will continue to charge the local dielectric as at (b). When the driver finishes the pulse, the trailing edge of the pulse follows the leading edge along the line. The voltage of the dielectric charged by the leading edge of the pulse is now higher than the voltage on the line, and so the dielectric discharges into the line as at (c).

The current flows forward as it’s in fact the same current which is flowing into the dielectric at the leading edge. There is thus a loop of current rolling down the line flowing forward in the 'hot' wire and backwards in the return. The analogy with the tracks of a Caterpillar tractor is quite good. Individual plates in the track find themselves being lowered to the ground at the front and raised again at the back.

The constant to-ing and fro-ing of charge in the dielectric results in dielectric loss of signal energy. Dielectric loss increases with frequency and so a long transmission line acts as a filter. Thus the term 'low-loss' cable refers primarily to the kind of dielectric used.

Transmission lines which transport energy in this way have a characteristic impedance caused by the interplay of the inductance along the conductors with the parallel capacitance. One consequence of that transmission mode is that correct termination or matching is required between the line and both the driver and the receiver. When a line is correctly matched, the rolling energy rolls straight out of the line into the load and the maximum energy is available. If the impedance presented by the load is incorrect, there will be reflections from the mismatch. An open circuit will reflect all the energy back in the same polarity as the original, whereas a short circuit will reflect all the energy back in the opposite polarity. Thus impedances above or below the correct value will have a tendency towards reflections whose magnitude depends upon the degree of mismatch and whose polarity depends upon whether the load is too high or too low. In practice it’s the need to avoid reflections which is the most important reason to terminate correctly.
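
The polarity and size of a reflection follow directly from the load and line impedances. A small sketch, with 110 ohms used purely as an example characteristic impedance:

    import math

    def reflection(z_load, z0=110.0):
        # Voltage reflection coefficient at the termination:
        #   gamma = (ZL - Z0) / (ZL + Z0)
        if math.isinf(z_load):
            return 1.0                   # open circuit: full, same polarity
        return (z_load - z0) / (z_load + z0)

    print(reflection(math.inf))   # +1.0: open circuit
    print(reflection(0.0))        # -1.0: short circuit, inverted
    print(reflection(110.0))      #  0.0: correctly matched, no reflection
    print(reflection(75.0))       # about -0.19: load too low, partial inverted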

Fig. 3 A signal may be square at the transmitter, but losses increase with frequency and as the signal propagates, more of the harmonics are lost until only the fundamental remains. The amplitude of the fundamental then falls with further distance.

Reflections at impedance mismatches have practical applications; electricity companies inject high-frequency pulses into faulty cables and the time taken until the reflection from the break or short returns can be used to locate the source of damage. The same technique can be used to find wiring breaks in large studio complexes.
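
The arithmetic of this time-domain reflectometry is simple; a sketch, with the cable's velocity factor assumed:

    C_LIGHT = 299_792_458.0   # metres per second

    def fault_distance(echo_delay_s, velocity_factor=0.66):
        # The pulse travels to the fault and back, hence the halving.
        # 0.66 is a typical velocity factor for solid-dielectric cable.
        return velocity_factor * C_LIGHT * echo_delay_s / 2.0

    print(f"{fault_distance(1.2e-6):.0f} m")   # a 1.2 us echo: roughly 119 m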

A perfectly square pulse contains an indefinite series of harmonics, but the higher ones suffer progressively more loss. A square pulse at the driver becomes less and less square with distance as Fig. 3 shows.

The harmonics are progressively lost until in the extreme case all that is left is the fundamental. A transmitted square wave is received as a sine wave. Fortunately data can still be recovered from the fundamental signal component.

Once all the harmonics have been lost, further losses cause the amplitude of the fundamental to fall. The effect worsens with distance and it’s necessary to ensure that data recovery is still possible from a signal of unpredictable level.

3. Types of recording medium

There is considerably more freedom of choice for digital media than was the case for analog signals. Once converted to the digital domain, audio is no more than data and can take advantage of the research expended in computer data recording.

Digital media don’t need to have linear transfer functions, nor do they need to be noise-free or continuous. All they need to do is to allow the player to be able to distinguish the presence or absence of replay events, such as the generation of pulses, with reasonable (rather than perfect) reliability. In a magnetic medium, the event will be a flux change from one direction of magnetization to another. In an optical medium, the event must cause the pickup to perceive a change in the intensity of the light falling on the sensor. In CD, the apparent contrast is obtained by interference. In some disks it will be through selective absorption of light by dyes. In magneto-optical disks the recording itself is magnetic, but it’s made and read using light.

Fig. 4 The classification of paramagnetic materials. The ferromagnetic materials exhibit the strongest magnetic behavior.

4. Magnetism

Magnetism is vital to digital audio recording. Hard disks and tapes store magnetic patterns and media are driven by motors which themselves rely on magnetism.

A magnetic field can be created by passing a current through a solenoid, which is no more than a coil of wire. When the current ceases, the magnetism disappears. However, many materials, some quite common, display a permanent magnetic field with no apparent power source. Magnetism of this kind results from the spin of electrons within atoms. Atomic theory describes atoms as having nuclei around which electrons orbit, spinning as they go. Different orbits can hold a different number of electrons. The distribution of electrons determines whether the element is diamagnetic (non-magnetic) or paramagnetic (magnetic characteristics are possible). Diamagnetic materials have an even number of electrons in each orbit, and according to the Pauli exclusion principle half of them spin in each direction. The opposed spins cancel any resultant magnetic moment. Fortunately there are certain elements, the transition elements, which have an odd number of electrons in certain orbits. The magnetic moment due to electronic spin is not cancelled out in these paramagnetic materials.

Fig. 4 shows that paramagnetic materials can be classified as antiferromagnetic, ferrimagnetic and ferromagnetic. In antiferromagnetic materials, alternate atoms are anti-parallel and so the magnetic moments are cancelled. In ferrimagnetic materials there is a certain amount of antiparallel cancellation, but a net magnetic moment remains. In ferromagnetic materials such as iron, cobalt or nickel, all the electron spins can be aligned and as a result the most powerful magnetic behavior is obtained.

Fig. 5 (a) A magnetic material can have a zero net moment if it’s divided into domains as shown here. Domain walls (b) are areas in which the magnetic spin gradually changes from one domain to another. The stresses which result store energy. When some domains dominate, a net magnetic moment can exist as in (c).

It’s not immediately clear how a material in which electron spins are parallel could ever exist in an unmagnetized state or how it could be partially magnetized by a relatively small external field. The theory of magnetic domains has been developed to explain what is observed in practice. Fig. 5(a) shows a ferromagnetic bar which is demagnetized.

It has no net magnetic moment because it’s divided into domains or volumes which have equal and opposite moments. Ferromagnetic material divides into domains in order to reduce its magnetostatic energy.

Fig. 5(b) shows a domain wall which is around 0.1 micrometer thick.

Within the wall the axis of spin gradually rotates from one state to another. An external field of quite small value is capable of disturbing the equilibrium of the domain wall by favoring one axis of spin over the other. The result is that the domain wall moves and one domain becomes larger at the expense of another. In this way the net magnetic moment of the bar is no longer zero as shown in (c).

For small distances, the domain wall motion is linear and reversible if the change in the applied field is reversed. However, larger movements are irreversible because heat is dissipated as the wall jumps to reduce its energy. Following such a domain wall jump, the material remains magnetized after the external field is removed, and an opposing external field must be applied which must do further work to bring the domain wall back again. This is a process of hysteresis, where work must be done to move each way. Were it not for this non-linear mechanism, magnetic recording would be impossible. If magnetic materials were linear, tapes would return to the demagnetized state immediately after leaving the field of the head and this guide would be a good deal thinner.

Fig. 6 shows a hysteresis loop which is obtained by plotting the magnetization M when the external field H is swept to and fro. On the macroscopic scale, the loop appears to be a smooth curve, whereas on a small scale it’s in fact composed of a large number of small jumps. These were first discovered by Barkhausen. Starting from the unmagnetized state at the origin, as an external field is applied, the response is initially linear and the slope is given by the susceptibility. As the applied field is increased a point is reached where the magnetization ceases to increase.

This is the saturation magnetization Ms. If the applied field is removed, the magnetization falls, not to zero, but to the remanent magnetization Mr. This remanence is the magnetic memory mechanism which makes recording and permanent magnets possible. The ratio of Mr to Ms is called the squareness ratio. In recording media, squareness is beneficial as it increases the remanent magnetization.

Fig. 6 A hysteresis loop which comes about because of the non-linear behavior of magnetic materials. If this characteristic were absent, magnetic recording would not exist.

Fig. 7 The recording medium requires a large loop area (a) whereas the head requires a small loop area (b) to cut losses.

If an increasing external field is applied in the opposite direction, the curve continues to the point where the magnetization is zero. The field required to achieve this is called the intrinsic coercive force mHc. A small increase in the reverse field reaches the point where, if the field were to be removed, the remanent magnetization would become zero. The field required to do this is the remanent coercive force, rHc.

As the external field H is swept to and fro, the magnetization describes a major hysteresis loop. Domain wall transit causes heat to be dissipated on every cycle around the loop and the dissipation is proportional to the loop area. For a recording medium, a large loop is beneficial because the replay signal is a function of the remanence and high coercivity resists erasure. The same is true for a permanent magnet. Heating is not an issue.

For a device such as a recording head, a small loop is beneficial. Fig. 7(a) shows the large loop of a hard magnetic material used for recording media and for permanent magnets. Fig. 7(b) shows the small loop of a soft magnetic material which is used for recording heads and transformers.

According to the Nyquist noise theorem, anything which dissipates energy when electrical power is supplied must generate a noise voltage when in thermal equilibrium. Thus magnetic recording heads have a noise mechanism which is due to their hysteretic behavior. The smaller the loop, the less the hysteretic noise. In conventional heads, there are a large number of domains and many small domain wall jumps. In thin film heads there are fewer domains and the jumps must be larger. The noise this causes is known as Barkhausen noise, but as the same mechanism is responsible it’s not possible to say at what point hysteresis noise should be called Barkhausen noise.

Fig. 8 A digital record head is similar in principle to an analog head but uses much narrower tracks.

5. Magnetic recording

Magnetic recording relies on the hysteresis of certain magnetic materials.

After an applied magnetic field is removed, the material remains magnetized in the same direction. By definition the process is non-linear, and analog magnetic recorders have to use bias to linearize it. Digital recorders are not concerned with the non-linearity, and HF bias is unnecessary.

Fig. 8 shows the construction of a typical digital record head, which is not dissimilar to an analog record head. A magnetic circuit carries a coil through which the record current passes and generates flux. A non-magnetic gap forces the flux to leave the magnetic circuit of the head and penetrate the medium. The current through the head must be set to suit the coercivity of the tape, and is arranged to almost saturate the track. The amplitude of the current is constant, and recording is performed by reversing the direction of the current with respect to time. As the track passes the head, this is converted to the reversal of the magnetic field left on the tape with respect to distance. The magnetic recording is therefore bipolar. Fig. 9 shows that the recording is actually made just after the trailing pole of the record head, where the flux strength from the gap is falling. As in analog recorders, the width of the gap is generally made quite large to ensure that the full thickness of the magnetic coating is recorded, although this cannot be done if the same head is intended to replay.

Fig. 9 The recording is actually made near the trailing pole of the head where the head flux falls below the coercivity of the tape.

Fig. 10 shows what happens when a conventional inductive head, i.e. one having a normal winding, is used to replay the bipolar track made by reversing the record current. The head output is proportional to the rate of change of flux and so only occurs at flux reversals. In other words, the replay head differentiates the flux on the track. The polarity of the resultant pulses alternates as the flux changes and changes back. A circuit is necessary which locates the peaks of the pulses and outputs a signal corresponding to the original record current waveform. There are two ways in which this can be done.

Fig. 10 Basic digital recording. At (a) the write current in the head is reversed from time to time, leaving a binary magnetization pattern shown at (b). When replayed, the waveform at (c) results because an output is only produced when flux in the head changes. Changes are referred to as transitions.

Fig. 11 Gated peak detection rejects noise by disabling the differentiated output between transitions.

Fig. 12 Integration method for re-creating write-current waveform.

Fig. 13 The major mechanism defining magnetic channel bandwidth.

The amplitude of the replay signal is of no consequence and often an AGC system is used to keep the replay signal constant in amplitude.

What matters is the time at which the write current, and hence the flux stored on the medium, reverses. This can be determined by locating the peaks of the replay impulses, which can conveniently be done by differentiating the signal and looking for zero crossings. Fig. 11 shows that this results in noise between the peaks. This problem is overcome by the gated peak detector, where only zero crossings from a pulse which exceeds the threshold will be counted. The AGC system allows the thresholds to be fixed. As an alternative, the record waveform can also be restored by integration, which opposes the differentiation of the head as in Fig. 12.
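
A rough software model makes the gated peak detector concrete. This is a sketch only: the replay pulses and the threshold are invented, and a real detector works on the analog waveform rather than on samples:

    import numpy as np

    def gated_peaks(replay, threshold):
        # Differentiate and look for zero crossings of the derivative,
        # but count them only while the pulse exceeds the gate threshold.
        d = np.diff(replay)
        crossings = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1
        return [i for i in crossings if abs(replay[i]) > threshold]

    t = np.arange(400)
    replay = (np.exp(-((t - 100) / 12.0) ** 2)        # positive replay pulse
              - np.exp(-((t - 300) / 12.0) ** 2)      # negative replay pulse
              + 0.03 * np.random.default_rng(1).standard_normal(t.size))
    print(gated_peaks(replay, threshold=0.5))  # indices cluster near 100 and 300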

The head shown in Fig. 8 has a frequency response shown in Fig. 13. At DC there is no change of flux and no output. As a result, inductive heads are at a disadvantage at very low speeds. The output rises with frequency until the rise is halted by the onset of thickness loss.

As the frequency rises, the recorded wavelength falls and flux from the shorter magnetic patterns cannot be picked up so far away. At some point, the wavelength becomes so short that flux from the back of the tape coating cannot reach the head and a decreasing thickness of tape contributes to the replay signal.

In digital recorders using short wavelengths to obtain high density, there is no point in using thick coatings. As wavelength reduces further, the familiar gap loss occurs, where the head gap is too big to resolve detail on the track. The construction of the head results in the same action as that of a two-point transversal filter, as the two poles of the head see the tape with a small delay interposed due to the finite gap. As expected, the head response is like a comb filter with the well-known nulls where flux cancellation takes place across the gap. Clearly the smaller the gap, the shorter the wavelength of the first null. This contradicts the requirement of the record head to have a large gap. In quality analog audio recorders, it’s the norm to have different record and replay heads for this reason, and the same will be true in digital machines which have separate record and playback heads. Clearly where the same pair of heads are used for record and play, the head gap size will be determined by the playback requirement.
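
The comb-like response follows from the classical gap-loss expression. A sketch (the 0.5 micrometer gap is an assumed figure; practical formulas use an 'effective' gap slightly larger than the physical one):

    import numpy as np

    def gap_response(wavelength, gap):
        # Classical gap loss: sin(pi g / lambda) / (pi g / lambda).
        # np.sinc(x) is sin(pi x) / (pi x), so pass g / lambda directly.
        return np.sinc(gap / wavelength)

    GAP = 0.5e-6    # assumed replay gap, metres
    for wl in (4e-6, 2e-6, 1e-6, 0.6e-6, 0.5e-6):
        print(f"lambda = {wl * 1e6:4.2f} um  response = {gap_response(wl, GAP):+.3f}")

The first null falls where the recorded wavelength equals the (effective) gap, which is why a small replay gap is essential at high density.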

As can be seen, the frequency response is far from ideal, and steps must be taken to ensure that recorded data waveforms don’t contain frequencies which suffer excessive losses.

A more recent development is the magneto-resistive (M-R) head. This is a head which measures the flux on the tape rather than using it to generate a signal directly. Flux measurement works down to DC and so offers advantages at low tape speeds. Unfortunately flux-measuring heads are not polarity-conscious; they sense the modulus of the flux, and if used directly they respond to positive and negative flux equally, as shown in Fig. 14. This is overcome by using a small extra winding in the head carrying a constant current. This creates a steady bias field which adds to the flux from the tape. The flux seen by the head is now unipolar and changes between two levels, and a more useful output waveform results.

Fig. 14 The sensing element in a magneto-resistive head is not sensitive to the polarity of the flux, only the magnitude. At (a) the track magnetization is shown and this causes a bidirectional flux variation in the head as at (b), resulting in the magnitude output at (c). However, if the flux in the head due to the track is biased by an additional field, it can be made unipolar as at (d) and the correct output waveform is obtained.

Fig. 15 Readout pulses from two closely recorded transitions are summed in the head and the effect is that the peaks of the waveform are moved outwards. This is known as peak-shift distortion and equalization is necessary to reduce the effect.

Recorders which have low head-to-medium speed, such as DCC (digital compact cassette), use M-R heads, whereas recorders with high speeds, such as DASH (digital audio stationary head), DAT (digital audio tape) and magnetic disk drives, use inductive heads.

Heads designed for use with tape work in actual contact with the magnetic coating. The tape is tensioned to pull it against the head. There will be a wear mechanism and need for periodic cleaning.

In the hard disk, the rotational speed is high in order to reduce access time, and the drive must be capable of staying on line for extended periods. In this case the heads don’t contact the disk surface, but are supported on a boundary layer of air. The presence of the air film causes spacing loss, which restricts the wavelengths at which the head can replay. This is the penalty of rapid access.

Digital audio recorders must operate at high density in order to offer a reasonable playing time. This implies that shortest possible wavelengths will be used. Fig. 15 shows that when two flux changes, or transitions, are recorded close together, they affect each other on replay.

The amplitude of the composite signal is reduced, and the position of the peaks is pushed outwards. This is known as inter-symbol interference, or peak-shift distortion, and it occurs in all magnetic media.

The effect is primarily due to high-frequency loss and it can be reduced by equalization on replay, as is done in most tape machines, or by pre-compensation on record, as is done in hard disks.

Fig. 16 In azimuth recording (a), the head gap is tilted. If the track is played with the same head, playback is normal, but the response of the reverse azimuth head is attenuated (b).

6. Azimuth recording and rotary heads

Fig. 16(a) shows that in azimuth recording, the transitions are laid down at an angle to the track by using a head which is tilted. Machines using azimuth recording must always have an even number of heads, so that adjacent tracks can be recorded with opposite azimuth angle. The two track types are usually referred to as A and B. Fig. 16(b) shows the effect of playing a track with the wrong type of head. The playback process suffers from an enormous azimuth error. The effect of azimuth error can be understood by imagining the tape track to be made from many identical parallel strips.

In the presence of azimuth error, the strips at one edge of the track are played back with a phase shift relative to strips at the other side. At some wavelengths, the phase shift will be 180°, and there will be no output; at other wavelengths, especially long wavelengths, some output will reappear. The effect is rather like that of a comb filter, and serves to attenuate crosstalk due to adjacent tracks so that no guard bands are required. Since no tape is wasted between the tracks, more efficient use is made of the tape. The term guard-band-less recording is often used instead of, or in addition to, the term azimuth recording. The failure of the azimuth effect at long wavelengths is a characteristic of azimuth recording, and it’s necessary to ensure that the spectrum of the signal to be recorded has a small low-frequency content. The signal will need to pass through a rotary transformer to reach the heads, and cannot therefore contain a DC component.
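
The crosstalk attenuation can be estimated with the commonly quoted azimuth-loss approximation; the track width and angle below are merely representative figures of the DAT order, not a specification:

    import math

    def azimuth_loss_db(track_width, wavelength, azimuth_error_deg):
        # Commonly quoted approximation: 20*log10|sin(x)/x| with
        # x = pi * w * tan(theta) / lambda, where w is the track width.
        x = (math.pi * track_width
             * math.tan(math.radians(azimuth_error_deg)) / wavelength)
        return 0.0 if x == 0 else 20 * math.log10(abs(math.sin(x) / x))

    TRACK = 13.6e-6            # assumed track width, metres
    for wl in (10e-6, 2e-6, 1e-6):
        print(f"lambda = {wl * 1e6:4.1f} um: {azimuth_loss_db(TRACK, wl, 40.0):6.1f} dB")

The short wavelengths of a neighboring track are heavily attenuated, while long wavelengths leak through: exactly the failure at long wavelengths described above.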

In recorders such as DAT there is no separate erase process, and erasure is achieved by overwriting with a new waveform. Overwriting is only successful when there are no long wavelengths in the earlier recording, since these penetrate deeper into the tape, and the short wavelengths in a new recording won’t be able to erase them. In this case the ratio between the shortest and longest wavelengths recorded on tape should be limited. Restricting the spectrum of the code to allow erasure by overwrite also eases the design of the rotary transformer.

Fig. 17 CD readout principle and dimensions. The presence of a bump causes destructive interference in the reflected light.

7. Optical disks

Optical recorders have the advantage that light can be focused at a distance whereas magnetism cannot. This means that there need be no physical contact between the pickup and the medium and no wear mechanism. In the same way that the recorded wavelength of a magnetic recording is limited by the gap in the replay head, the density of optical recording is limited by the size of light spot which can be focused on the medium. This is controlled by the wavelength of the light used and by the aperture of the lens. When the light spot is as small as these limits allow, it’s said to be diffraction limited. The recorded details on the disk are minute, and could easily be obscured by dust particles. In practice the information layer needs to be protected by a thick transparent coating.
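
The diffraction limit can be put into numbers with the usual rule of thumb (conventions vary; the Airy-disk diameter is 1.22 lambda/NA), using the figures usually quoted for CD of a 780 nm laser and a numerical aperture of 0.45:

    def spot_diameter(wavelength, numerical_aperture):
        # Rule-of-thumb diffraction-limited spot size.
        return wavelength / (2.0 * numerical_aperture)

    print(f"{spot_diameter(780e-9, 0.45) * 1e6:.2f} um")   # about 0.87 um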

Light enters the coating well out of focus over a large area so that it can pass around dust particles, and comes to a focus within the thickness of the coating. Although the number of bits per unit area is high in optical recorders, the number of bits per unit volume is not as high as that of tape because of the thickness of the coating.

Fig. 17 shows the principle of readout of the Compact Disc, which is a read-only disk manufactured by pressing. The track consists of raised bumps separated by flat areas. The entire surface of the disk is metallized, and the bumps are one quarter of a wavelength in height. The player spot is arranged so that half of its light falls on top of a bump, and half on the surrounding surface. Light returning from the flat surface has travelled half a wavelength further than light returning from the top of the bump, and so there is a phase reversal between the two components of the reflection. This causes destructive interference, and light cannot return to the pickup. It must reflect at angles which are outside the aperture of the lens and be lost. Conversely, when light falls on the flat surface between bumps, most of it is reflected back to the pickup. The pickup thus sees a disk apparently having alternately good or poor reflectivity. The sensor in the pickup responds to the incident intensity and so the replay signal is unipolar and varies between two levels in a manner similar to the output of an M-R head.

Some disks can be recorded once, but not subsequently erased or rerecorded. These are known as WORM (Write Once Read Many) disks.

One type of WORM disk uses a thin metal layer which has holes punched in it on recording by heat from a laser. Others rely on the heat raising blisters in a thin metallic layer by decomposing the plastic material beneath. Yet another alternative is a layer of photo-chemical dye which darkens when struck by the high-powered recording beam. Whatever the recording principle, light from the pickup is reflected more or less, or absorbed more or less, so that the pickup senses a change in reflectivity.

Certain WORM disks can be read by conventional CD players and are thus called recordable CDs, or CD-R, whereas others will only work in a particular type of drive.

All optical disks need mechanisms to keep the pickup following the track and sharply focused on it. These will be discussed in Section 12 and need not be treated here.

Fig. 18 Frequency response of laser pickup. Maximum operating frequency is about half of cut-off frequency E.

Fig. 19 The thermomagneto-optical disk uses the heat from a laser to allow magnetic field to record on the disk.

The frequency response of an optical disk is shown in Fig. 18. The response is best at DC and falls steadily to the optical cut-off frequency.

Although the optics work down to DC, this cannot be used for the data recording. DC and low frequencies in the data would interfere with the focus and tracking servos and, as will be seen, difficulties arise when attempting to demodulate a unipolar signal. In practice the signal from the pickup is split by a filter. Low frequencies go to the servos, and higher frequencies go to the data circuitry. As a result the optical disk channel has the same inability to handle DC as does a magnetic recorder, and the same techniques are needed to overcome it.

8. Magneto-optical disks

When a magnetic material is heated above its Curie temperature, it becomes demagnetized, and on cooling will assume the magnetization of an applied field which would be too weak to influence it normally. This is the principle of magneto-optical recording used in the Sony MiniDisc.

The heat is supplied by a finely focused laser and the field is supplied by a coil which is much larger.

Fig. 19 shows that the medium is initially magnetized in one direction only. In order to record, the coil is energized with a current in the opposite direction. This is too weak to influence the medium in its normal state, but when it’s heated by the recording laser beam the heated area will take on the magnetism from the coil when it cools. Thus a magnetic recording with very small dimensions can be made even though the magnetic circuit involved is quite large in comparison.

Readout is obtained using the Kerr effect or the Faraday effect, which are phenomena whereby the plane of polarization of light can be rotated by a magnetic field. The angle of rotation is very small and needs a sensitive pickup. The pickup contains a polarizing filter before the sensor.

Changes in polarization change the ability of the light to get through the polarizing filter and result in an intensity change which once more produces a unipolar output.

The magneto-optic recording can be erased by reversing the current in the coil and operating the laser continuously as it passes along the track.

A new recording can then be made on the erased track.

A disadvantage of magneto-optical recording is that all materials having a Curie point low enough to be useful are highly corrodible by air and need to be kept under an effectively sealed protective layer.

The magneto-optical channel has the same frequency response as that shown in Fig. 18.

9. Equalization

Most channels share the characteristic that signal loss increases with frequency. This has the effect of slowing down rise times and thereby sloping off edges. If a signal with sloping edges is sliced, the time at which the waveform crosses the slicing level will be changed, and this causes jitter. Fig. 20 shows that slicing a sloping waveform in the presence of baseline wander causes even more jitter.

Fig. 20 A DC offset can cause timing errors.

On a long cable, high-frequency rolloff can cause sufficient jitter to move a transition into an adjacent bit period. This is called inter-symbol interference and the effect becomes worse in signals which have greater asymmetry, i.e. short pulses alternating with long ones. The effect can be reduced by the application of equalization, which is typically a high frequency boost, and by choosing a channel code which has restricted asymmetry.

Fig. 21 Peak-shift distortion is due to the finite width of replay pulses. The effect can be reduced by the pulse slimmer shown in (a), which is basically a transversal filter. The use of a linear operational amplifier emphasizes the analog nature of channels. Instead of replay pulse slimming, transitions can be written with a displacement equal and opposite to the anticipated peak shift, as shown in (b).

Compensation for peak-shift distortion in recording requires equalization of the channel, and this can be done by a network after the replay head, termed an equalizer or pulse sharpener, as in Fig. 21(a). This technique uses transversal filtering to oppose the inherent transversal effect of the head. As an alternative, pre-compensation in the record stage can be used as shown in (b). Transitions are written in such a way that the anticipated peak shift will move the readout peaks to the desired timing.

Fig. 22 Slicing a signal which has suffered losses works well if the duty cycle is even. If the duty cycle is uneven, as in (a), timing errors will become worse until slicing fails. With the opposite duty cycle, the slicing fails in the opposite direction as in (b). If, however, the signal is DC free, correct slicing can continue even in the presence of serious losses, as (c) shows.

10. Data separation

The important step of information recovery at the receiver or replay circuit is known as data separation. The data separator is rather like an analog-to-digital convertor because the two processes of sampling and quantizing are both present. In the time domain, the sampling clock is derived from the clock content of the channel waveform. In the voltage domain, the process of slicing converts the analog waveform from the channel back into a binary representation. The slicer is thus a form of quantizer which has only one-bit resolution. The slicing process makes a discrete decision about the voltage of the incoming signal in order to reject noise. The sampler makes discrete decisions along the time axis in order to reject jitter. These two processes will be described in detail.

Fig. 23 (a) Slicing a unipolar signal requires a non-zero threshold. (b) If the signal amplitude changes, the threshold will then be incorrect. (c) If a DC-free code is used, a unipolar waveform can be converted to a bipolar waveform using a series capacitor. A zero threshold can be used and slicing continues with amplitude variations.

Fig. 24 An adaptive slicer uses delay lines to produce a threshold from the waveform itself. Correct slicing will then be possible in the presence of baseline wander.

Such a slicer can be used with codes which are not DC-free.

11. Slicing

The slicer is implemented with a comparator which has analog inputs but a binary output. In a cable receiver, the input waveform can be sliced directly. In an inductive magnetic replay system, the replay waveform is differentiated by the head and must first pass through a peak detector (Fig. 11) or an integrator (Fig. 12). The signal voltage is compared with the midway voltage, known as the threshold, baseline or slicing level, by the comparator. If the signal voltage is above the threshold, the comparator outputs a high level; if below, a low level results.

Fig. 22 shows some waveforms associated with a slicer. At (a) the transmitted waveform has an uneven duty cycle. The DC component, or average level, of the signal is received with high amplitude, but the pulse amplitude falls as the pulse gets shorter. Eventually the waveform cannot be sliced. At (b) the opposite duty cycle is shown. The signal level drifts to the opposite polarity and once more slicing is impossible. The phenomenon is called baseline wander and will be observed with any signal whose average voltage is not the same as the slicing level. At (c) it will be seen that if the transmitted waveform has a relatively constant average voltage, slicing remains possible up to high frequencies even in the presence of serious amplitude loss, because the received waveform remains symmetrical about the baseline.

It’s clearly not possible simply to serialize data in a shift register for so-called direct transmission, because successful slicing can only be obtained if the number of ones is equal to the number of zeros; there is little chance of this happening consistently with real data. Instead, a modulation code or channel code is necessary. This converts the data into a waveform which is DC-free, or nearly so, for the purpose of transmission.
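
A simple example of such a code is FM, also known as biphase-mark, which is the channel code of the AES/EBU interface. A sketch of an encoder follows, with the two signal states represented as +1 and -1 and two channel bits per data bit:

    def biphase_mark(bits, level=-1):
        # FM / biphase-mark: a transition at every bit-cell boundary
        # provides the clock content; an extra mid-cell transition
        # encodes a one. Two channel bits (half-cells) per data bit.
        out = []
        for bit in bits:
            level = -level          # boundary transition, always present
            out.append(level)       # first half-cell
            if bit:
                level = -level      # mid-cell transition for a one
            out.append(level)       # second half-cell
        return out

    wave = biphase_mark([1, 0, 0, 1, 1, 0])
    print(wave)
    print("running sum stays bounded:", sum(wave))

Whatever the data, the running sum of the waveform stays within tight bounds, so no DC builds up and the series-capacitor trick of Fig. 23(c) works.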

The slicing threshold level is naturally zero in a bipolar system such as magnetic inductive replay or a cable. When the amplitude falls it does so symmetrically and slicing continues. The same is not true of M-R heads and optical pickups, which both respond to intensity and therefore produce a unipolar output. If the replay signal is sliced directly, the threshold cannot be zero, but must be some level approximately half the amplitude of the signal as shown in Fig. 23(a). Unfortunately when the signal level falls it falls towards zero and not towards the slicing level.

The threshold will no longer be appropriate for the signal as can be seen at (b). This can be overcome by using a DC-free coded waveform. If a series capacitor is connected to the unipolar signal from an optical pickup, the waveform is rendered bipolar because the capacitor blocks any DC component in the signal. The DC-free channel waveform passes through unaltered. If an amplitude loss is suffered, Fig. 23(c) shows that the resultant bipolar signal now reduces in amplitude about the slicing level and slicing can continue.

Whilst cables and optical recording channels need to be DC-free, some channel waveforms used in magnetic recording have a reduced DC component, but are not completely DC-free. As a result the received waveform will suffer from baseline wander. If this is moderate, an adaptive slicer which can move its threshold can be used. As Fig. 24 shows, the adaptive slicer consists of a pair of delays. If the input and output signals are linearly added together with equal weighting, when a transition passes, the resultant waveform has a plateau which is at the half-amplitude level of the signal and can be used as a threshold voltage for the slicer. The coding of the DASH format is not DC-free and a slicer of this kind is employed.
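
A minimal model of this adaptive threshold can be written down directly from the description; this is a sketch only, with the delay length and test waveform invented, and my reading of the Fig. 24 arrangement is an assumption:

    def adaptive_slice(signal, d):
        # Threshold per Fig. 24: average the signal with a copy delayed
        # by 2*d; while a transition sits between the two taps the sum
        # is a plateau at the half-amplitude level. The tap delayed by
        # d is then compared with that moving threshold.
        out = []
        for n in range(len(signal)):
            threshold = 0.5 * (signal[n] + signal[max(n - 2 * d, 0)])
            out.append(signal[max(n - d, 0)] > threshold)
        return out

    # a test waveform with baseline wander: a pulse riding on a slow ramp
    wave = [0.1 * n / 40 + (1.0 if 15 <= n < 30 else 0.0) for n in range(40)]
    print(adaptive_slice(wave, d=4))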

Fig. 25 A certain amount of jitter can be rejected by changing the signal at multiples of the basic detent period Td.

Fig. 26 A transmitted waveform which is generated according to the principle of Fig. 25 will appear like this on an oscilloscope as successive parts of the waveform are superimposed on the tube. When the waveform is rounded off by losses, diamond-shaped eyes are left in the centre, spaced apart by the detent period.

Fig. 27 A typical phase-locked loop where the VCO is forced to run at a multiple of the input frequency. If the input ceases, the output will continue for a time at the same frequency until it drifts.

12. Jitter rejection

The binary waveform at the output of the slicer will be a replica of the transmitted waveform, except for the addition of jitter or time uncertainty in the position of the edges due to noise, baseline wander, intersymbol interference and imperfect equalization.

Binary circuits reject noise by using discrete voltage levels which are spaced further apart than the uncertainty due to noise. In a similar manner, digital coding combats time uncertainty by making the time axis discrete using events, known as transitions, spaced apart at integer multiples of some basic time period, called a detent, which is larger than the typical time uncertainty. Fig. 25 shows how this jitter-rejection mechanism works. All that matters is to identify the detent in which the transition occurred. Exactly where it occurred within the detent is of no consequence.
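
In effect the receiver quantizes each transition time to the nearest detent, which is trivially expressed:

    def detent_number(transition_time_s, detent_period_s):
        # Only the detent in which the transition falls matters;
        # jitter smaller than half a detent is rejected completely.
        return round(transition_time_s / detent_period_s)

    TD = 100e-9    # assumed detent period of 100 ns
    for t in (297e-9, 303e-9, 310e-9):     # jittered copies of one event
        print(detent_number(t, TD))         # all three resolve to detent 3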

As ideal transitions occur at multiples of a basic period, an oscilloscope, which is repeatedly triggered on a channel-coded signal carrying random data, will show an eye pattern if connected to the output of the equalizer.

Study of the eye pattern reveals how well the coding used suits the channel. In the case of transmission, with a short cable, the losses will be small, and the eye opening will be virtually square except for some edge sloping due to cable capacitance. As cable length increases, the harmonics are lost and the remaining fundamental gives the eyes a diamond shape.

The same eye pattern will be obtained with a recording channel where it’s uneconomic to provide bandwidth much beyond the fundamental.

Noise closes the eyes in a vertical direction, and jitter closes the eyes in a horizontal direction, as in Fig. 26. If the eyes remain sensibly open, data separation will be possible. Clearly more jitter can be tolerated if there is less noise, and vice versa. If the equalizer is adjustable, the optimum setting will be where the greatest eye opening is obtained.

In the centre of the eyes, the receiver must make binary decisions at the channel bit rate about the state of the signal, high or low, using the slicer output. As stated, the receiver is sampling the output of the slicer, and it needs to have a sampling clock in order to do that. In order to give the best rejection of noise and jitter, the clock edges which operate the sampler must be in the centre of the eyes.

Fig. 28 The clocking system when channel coding is used. The encoder clock runs at the channel bit rate, and any transitions in the channel must coincide with encoder clock edges. The reason for doing this is that, at the data separator, the PLL can lock to the edges of the channel signal, which represent an intermittent clock, and turn it into a continuous clock. The jitter in the edges of the channel signal causes noise in the phase error of the PLL, but the damping acts as a filter and the PLL runs at the average phase of the channel bits, rejecting the jitter.

As has been stated, a separate clock is not practicable in recording or transmission. A fixed-frequency clock at the receiver is of no use as even if it was sufficiently stable, it would not know what phase to run at.

The only way in which the sampling clock can be obtained is to use a phase-locked loop (PLL) to regenerate it from the clock content of the self-clocking channel-coded waveform. In phase-locked loops, the voltage-controlled oscillator (VCO) is driven by a phase error measured between the output and some reference, such that the output eventually has the same frequency as the reference. If a divider is placed between the VCO and the phase comparator, as in Fig. 27, the VCO frequency can be made to be a multiple of the reference. This also has the effect of making the loop more heavily damped. If a channel-coded waveform is used as a reference to a PLL, the loop will be able to make a phase comparison whenever a transition arrives and will run at the channel bit rate. When there are several detents between transitions, the loop will flywheel at the last known frequency and phase until it can rephase at a subsequent transition. Thus a continuous clock is recreated from the clock content of the channel waveform. In a recorder, if the speed of the medium should change, the PLL will change frequency to follow.
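
The flywheel behavior can be illustrated with a toy first-order loop; the gain and the edge times below are invented, and a real data separator would also track frequency:

    def regenerate_clock(transitions, nominal_period, k_phase=0.2, n_edges=32):
        # Toy PLL: emit clock edges at the nominal period, nudging the
        # phase partway towards any channel transition that falls near
        # an edge. Between transitions the clock simply flywheels.
        t, edges, pending = 0.0, [], list(transitions)
        for _ in range(n_edges):
            edges.append(t)
            if pending and abs(pending[0] - t) < nominal_period / 2:
                t += k_phase * (pending.pop(0) - t)   # damped rephasing
            t += nominal_period
        return edges

    # transitions arrive only now and then, with a little jitter
    clock = regenerate_clock([1.02e-6, 3.01e-6, 7.99e-6], 1e-6)
    print([f"{e * 1e6:.2f}" for e in clock[:10]])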

Once the loop is locked, clock edges will be phased with the average phase of the jittering edges of the input waveform. If, for example, rising edges of the clock are phased to input transitions, then falling edges will be in the centre of the eyes. If these edges are used to clock the sampling process, the maximum jitter and noise can be rejected. The output of the slicer when sampled by the PLL edge at the centre of an eye is the value of a channel bit. Fig. 28 shows the complete clocking system of a channel code from encoder to data separator.

Clearly data cannot be separated if the PLL is not locked, but it cannot be locked until it has seen transitions for a reasonable period. In recorders, which have discontinuous recorded blocks to allow editing, the solution is to precede each data block with a pattern of transitions whose sole purpose is to provide a timing reference for synchronizing the phase-locked loop. This pattern is known as a preamble. In interfaces, the transmission can be continuous and there is no difficulty remaining in lock indefinitely. There will simply be a short delay on first applying the signal before the receiver locks to it.

A potential problem area which is frequently overlooked is the need to ensure that the VCO in the receiving PLL is correctly centered. If it’s not, it will be running with a static phase error and won’t sample the received waveform at the centre of the eyes. The sampled bits will be more prone to noise and jitter errors. VCO centering can simply be checked by displaying the control voltage. This should not change significantly when the input is momentarily interrupted.

Fig. 29 The major components of a channel coding system. See text for details.
