Fundamentals of Digital Design -- BUILDING BLOCKS WITH MEMORY (part 1)





In the preceding sections, we tacitly assumed that electronic devices are infinitely fast and that they generate outputs that depend only on the present input values.

In this section, we explore the interesting consequences of violating these assumptions.

First, we examine what can happen when there are finite propagation delays within gates. Output signals from assemblies of gates sometimes have spurious short pulses that are not predicted by standard Boolean algebra. These spurious pulses are seldom useful, but we must contend with them, usually by waiting until they have gone away.

Next, we explore gate circuits that include feedback. Some of these circuits exhibit memory, which is an essential tool for the system designer. We consider useful sequential (memory) building blocks: flip-flops, registers, counters, and so on. These are basic tools for developing the digital architectures in the coming sections of Part II. We then discuss large memory arrays--RAMs, ROMs, and allied solid state memories. Then, after a treatment of programmable logic devices, we conclude with a description of some timing devices that are needed when designing with these large memories.

THE TIME ELEMENT

Hazards

The outputs of real gates cannot change instantaneously when an input is changed.

Integrated circuits operate by movement of holes and electrons within some physical material, usually silicon. Not even very light particles such as electrons can move at infinite speeds, and their movement will always involve delays. The time between a change in an input signal and a corresponding change in an output is called the propagation delay of the circuit. When inputs change, an output may undergo a change from L to H or from H to L. The corresponding propagation delays are denoted tpLH and tpHL. Propagation delays depend on the input waveforms, temperature, output loadings, operating power, logic family, and a host of other parameters. Single-gate propagation delays are about 5 nanoseconds in TTL low-power Schottky devices.

Another source of delay is the wire carrying signals between gates. Electricity in a wire can travel only about 8 inches in a nanosecond, so when wires become long, the interconnection delays may become serious.

Our purpose here is to show how these delays can create spurious outputs called hazards. Consider the following simple circuit that changes the voltage polarity of a signal:


Assume that the voltage at the input A has been stable for a long time. The output will also be stable and of the opposite voltage level. If the voltage at the input changes, the output will change a short time later. When an input changes from L to H, the output will change from H to L after a propagation delay tpHL; similarly, an H --> L transition in the input will produce an L --> H output transition after a time tpLH. FIG. 1 is a timing diagram, a graph of input and output values (either voltage or logic) as a function of time. Each variable's graph is called a waveform.


FIG. 1. Timing diagram showing propagation delays in a logic circuit.

To see what can happen when we introduce time into Boolean algebra, consider the following circuit, whose output is A + Ā:


Of course, we know that A + Ā = T regardless of the logic value of A, and we predict, from Boolean algebra, that the output of the circuit will always be L. But assume that each circuit element has a propagation delay tp for any transition. If A changes from T to F, the voltage pattern in FIG. 2 will prevail; there is a spurious high-voltage (F) output that lasts for one gate delay.


FIG. 2. A hazard caused by propagation delay in an inverter.

These spurious outputs of combinational circuits, called hazards or glitches, are common in digital systems. Fortunately, given sufficient time they will die out and the outputs of gates will assume the values predicted by classical Boolean algebra.
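We can reproduce the effect with a minimal unit-delay simulation. The Python sketch below is an idealization (two-valued logic signals, one time step of delay per gate), not a model of any particular logic family; it shows the one-gate-delay F glitch of FIG. 2 at the logic level.

    def simulate(a_wave):
        # Each gate output lags its inputs by one time step (one gate delay).
        a_prev = a_wave[0]
        not_a_prev = 1 - a_wave[0]           # assume the circuit has settled
        y = []
        for a in a_wave:
            y.append(a_prev | not_a_prev)          # OR gate output
            not_a_prev, a_prev = 1 - a_prev, a     # inverter output, new input
        return y

    a = [1, 1, 1, 0, 0, 0, 0]                # A falls from T to F at t = 3
    print(simulate(a))                       # [1, 1, 1, 1, 0, 1, 1]: F glitch at t = 4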

Occasionally, it is necessary to generate gate outputs that are clean--that have no hazards. It can be shown that a function may have a hazard if the function's Karnaugh map has adjacent 1's not enclosed in the same circle. The preceding example, when plotted on a one-variable K-map, becomes


The two adjacent 1's do not share a common circle, and indeed the circuit has a hazard. If we circle both 1's in the K-map, we have the TRUE function, which is hazard-free.

The following function is a more complex example:


The theory is that a circuit based on the two solid loops may or may not contain a hazard; however, if we build a circuit that includes the dashed loop, we can be sure that the circuit will have no hazards. Using the dashed loop requires extra hardware (additional AND and OR gates), a necessary penalty when we cannot tolerate hazards.
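To make the rule concrete, consider the function f = A·~C + B·C, a stand-in chosen for illustration (the text's own map is not reproduced here). Its two solid loops leave adjacent 1's uncovered at A = B = 1, and the consensus loop A·B plays the role of the dashed loop. A unit-delay Python sketch shows the difference:

    def run(with_consensus):
        # Gate outputs for f = A*~C + B*C (+ A*B), with A = B = 1 held true.
        # Each assignment models one gate delay; t1 lags C by two delays
        # (it sees C through the inverter), while t2 lags by only one.
        nc, t1, t2, t3 = 0, 0, 1, 1      # settled values for C = 1
        trace = []
        for c in [1, 1, 0, 0, 0, 0]:     # C falls from T to F at t = 2
            trace.append(t1 | t2 | (t3 if with_consensus else 0))
            nc, t1, t2, t3 = 1 - c, nc, c, 1
        return trace

    print(run(False))   # [1, 1, 1, 0, 1, 1]: static hazard when C falls
    print(run(True))    # [1, 1, 1, 1, 1, 1]: the extra loop holds f at 1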

This technique of eliminating hazards works in simple sum-of-products circuits derived from K-maps. In more general circuits, the elimination of hazards is quite complex, and therefore we must use finesse instead of brute force. Rather than use design techniques that require hazard-free signals, we will make our designs insensitive to the hazards that occur when combinational inputs are changing. A standard technique is to wait a fixed time after the inputs of the gates change, during which time the hazards will die out. We may then proceed to use the stable signals. This idea is the basis of synchronous (clocked) design, which we introduce in Section 5.

Circuits with Feedback

In the preceding section, we discussed purely combinational circuits. Except for momentary hazards, the behavior of the circuits is adequately described by the Boolean algebraic or truth-table methods used in the previous sections. After a sufficient time to "settle," the circuit's outputs become a function only of the inputs. We now consider another class of circuits, in which the value of the outputs after the settling time depends not only on the external inputs but also on the original value of the outputs. Such circuits exhibit feedback:


the output feeds back to contribute to the inputs of earlier elements in the circuit.

Feedback yields curious results in some circuits. The following circuit, which has no external inputs, consists of three inverters and feedback:



FIG. 3. Memory displayed by a circuit with feedback.

The voltage at the output is fed back into the input where, after a short time, it appears inverted on the output. The new voltage causes a similar inversion; the output voltage oscillates rapidly.

Remove one inverter from this circuit, producing the following circuit:

If you construct this circuit with real inverters and apply operating power, the output voltages of each inverter will go through a period of instability, during which one output will settle at a high level and the other at a low level. Although there is no way to predict which output will be high and which low, the circuit will remain stable after the settling time. You can verify the stability by tracing voltages around the circuit. Redrawing the circuit, as in FIG. 3, helps to illustrate the stability. Since neither of the inverter feedback circuits shown above has external inputs, Boolean algebra is powerless to describe the circuit's behavior.

SEQUENTIAL CIRCUITS

The circuit in FIG. 3 exhibits a primitive form of memory: the circuit "remembers" the resolution of the initial voltage conflict. Without external inputs, this memory is useless. In contrast, certain feedback circuits with external inputs not only exhibit memory, but also allow the designer to control the value stored in the memory. Controllable memory is the digital designer's most powerful tool. Digital systems with memory are called sequential circuits.

Sequential devices may be synthesized from gates, but this procedure is not within the scope of this book, except insofar as it shows the typical structure of some simple memory elements. Manufacturers have packaged proven gate designs of various sequential circuits, and we can use these as building blocks once we know their behavior. Sequential building blocks have names such as latch, flip-flop, and register.


FIG. 4. A latch circuit; the heavy line is the feedback path.

Unclocked Sequential Circuits

The latch. The latch is the simplest data storage element. Its logic diagram is in FIG. 4. To describe the action of the latch, we must introduce time as a parameter. This was not necessary in combinational logic, but it is always necessary in sequential logic. The timing diagram is frequently used to portray sequential circuit behavior. To analyze the latch circuit, consider the several cases shown in the timing diagram, FIG. 5.

Case A: HOLD = F. In this case, Y = DATA.


FIG. 5. A timing diagram for a latch. Note the 1's catching behavior.

Case B: HOLD = T. Any occurrence of DATA = T will be captured, and the output will thereafter remain true until HOLD becomes false. We consider three subcases:

Case B1: DATA is false throughout the period when HOLD is true. Then Y is false.

Case B2: DATA is true when HOLD becomes true. The latch captures the (true) value of DATA and stores it as long as HOLD remains true. (After HOLD becomes false, case A applies.)

Case B3: DATA is false when HOLD becomes true. At the beginning, Y is false. The first occurrence of a true signal on the DATA line will cause Y to become true; the output will remain true until HOLD becomes false.

The latch has the property of passing true input data to its output immediately. This behavior is sometimes useful in digital design, but it can be quite dangerous. Suppose that while HOLD is true, a glitch or noise pulse on the DATA line causes DATA to become true momentarily. This momentary true, or 1, will cause output Y to become true and remain true as long as HOLD is true. This behavior is sometimes called 1's catching; it is useful only rarely.
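A behavioral sketch in Python captures this gross behavior (gate delays ignored; the waveforms are invented for illustration):

    def latch_step(y, data, hold):
        # FIG. 4 behavior: transparent when HOLD = F; 1's catching when HOLD = T.
        return (y or data) if hold else data

    hold_wave = [0, 0, 1, 1, 1, 1, 0, 0]
    data_wave = [0, 0, 0, 1, 0, 0, 0, 1]     # momentary glitch while HOLD = T
    y = False
    for hold, data in zip(hold_wave, data_wave):
        y = latch_step(y, bool(data), bool(hold))
        print(int(y), end=" ")               # 0 0 0 1 1 1 0 1: glitch caught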

The latch circuit in FIG. 4 is not frequently used, and it is not generally available as an SSI integrated circuit. A true latch is a memory element that exhibits combinational behavior at some values of its inputs. There are other varieties of latch; unfortunately, designers use the term loosely to describe various signal-capturing events. We will soon develop more satisfactory memory devices.

Timing diagrams may be used to show gross voltage or logic behavior, or to show fine detail. The timing diagrams in FIGS. 1 and 2 show the fine detail of gate delays. On the other hand, the timing diagram in FIG. 5 shows only the gross behavior of the latch circuit and is accurate only when the time scale is sufficiently large. On a fine time scale, the output Y in FIG. 5 would be shifted slightly to the right to account for the delays incurred while changes in DATA or HOLD are absorbed by the gates in the circuit.

The asynchronous RS flip-flop. The feedback circuit in FIG. 3 exhibits a peculiar form of memory: it remembers which inverter had a low output after "power-up." The circuit has two stable states, and is indeed a memory, albeit a useless one, since there is no way to change it from one state to the other.

By changing the inverters to two-input Nor gates, we obtain a useful device known as the asynchronous RS flip-flop (see FIG. 6). We will study voltage behavior in this circuit before we introduce the concept of logic truth.


FIG. 6. An asynchronous RS flip flop constructed with Nor gates.

The RS flip-flop is a bistable device, which means that in the absence of any inputs it can assume either of two stable states. To see this, assume that R = S = L, and assume that the output X of gate 1 is L. Gate 2 will then present a high voltage level to Q. When this H feeds back to the input of gate 1, it will produce an L at X, which is consistent with our original assumption about the polarity of X. We can describe this behavior by saying that the circuit is in a stable state when gate 1 outputs L and gate 2 outputs H. Once the circuit assumes this state, it will remain there as long as there are no changes in the R and S inputs.

There is another stable state during which gate 1 outputs H and gate 2 outputs L. We could predict this from the symmetry of the circuit, but you should verify it by tracing signals as we just did.

We have shown that the circuit of two cross-coupled Nor gates can exist in two stable states. We call one of the stable states the set state and the other the reset state. By convention, the set state corresponds to Q = H, and the reset state to Q = L. The conventional representation of a flip-flop is a rectangle from which Q.H emerges at the upper right side. Most flip-flops produce two voltages of opposite polarity, and the second output appears below the Q.H output. In data books, the second output is usually called Q̄. Since this output behaves like Q with a voltage inversion, mixed logicians prefer to designate the signal as Q.L, the alternative voltage form of Q.H. Nevertheless, the nomenclature within the flip-flop symbol, like our other building blocks, must conform to normal data book usage so that there will be no confusion about the interpretation of the pins of the chip. The interior of the symbol serves to identify pin functions; the external notations for inputs and outputs represent specific signals in a logic design. Thus, if we have a flip-flop whose output is a logic variable RUN, our standard notation for the output is:


Now we will consider the S and R inputs to the RS flip-flop. We know that as long as S and R are low, the flip-flop remains in its present state. We may use the S and R lines to force the flip-flop into either state. S is a control input that places the RS flip-flop into the set state (Q = H) whenever S = H. Analogously, R = H resets the flip-flop by making Q = L. The obvious association of truth and voltage is T = H at S, R, and Q, so that we set the flip-flop by making S = T, and we reset by making R = T. This leads us to our usual mixed-logic notation for an RS flip-flop constructed of Nor gates:
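The signal tracing described above is easy to mechanize. In this Python sketch of the cross-coupled Nor pair, gate 1 (output X) receives S, gate 2 (output Q) receives R, and each pass through the loop represents one gate delay; the hold, set, and reset cases all settle as the text predicts:

    def nor(a, b):
        return int(not (a or b))

    def settle(x, q, s, r, steps=4):
        # Iterate the feedback loop until it reaches a stable state.
        for _ in range(steps):
            x, q = nor(s, q), nor(r, x)
        return x, q

    x, q = settle(0, 1, s=0, r=0)   # hold: stays in the set state (q = 1)
    x, q = settle(x, q, s=0, r=1)   # reset
    print(x, q)                     # 1 0
    x, q = settle(x, q, s=1, r=0)   # set
    print(x, q)                     # 0 1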


FIG. 7 is a similar asynchronous RS flip-flop designed with Nand gates.

This figure, a mixed-logic diagram of the cross-coupled gates, emphasizes that T = L at the inputs of this flip-flop.


FIG. 7. An asynchronous RS flip-flop constructed with Nand gates.

The term asynchronous associated with the RS flip-flop implies that there is no master clocking signal that governs the activity of the flip-flop; suitable changes of S or R cause the outputs to react immediately. Asynchronous means unclocked. Its counterpart is a clocked, or synchronous, circuit. (Some workers refer to all the unclocked storage elements as latches; we will not adopt this practice.) The asynchronous RS flip-flop is sensitive to noise, or glitches, at the S input when in the reset state, and at the R input when in the set state. This sensitivity is occasionally useful, but in general you should avoid using asynchronous devices, since glitches are undesirable byproducts of gate delays and noise is usually unpredictable in digital systems. Part of our goal is to develop design techniques that bypass these inevitable problems. Therefore, one of our dictums will be: don't use asynchronous RS flip-flops as a general design tool.

Switch de-bouncing. There is one standard use of the RS flip-flop--as a switch de-bouncer. It is an unfortunate fact that mechanical switches do not make or break contact cleanly. At closure there will be several separate contacts over a period of many microseconds. The same is true during switch opening.

The switch bounces. Since we do not wish to use a bouncy or spiky signal in our digital designs, we need a way to clean up the switch output.

Whenever a mechanical switch changes its position, we wish the associated digital signal to undergo one smooth change of voltage level. The asynchronous RS flip-flop is well suited for this. FIG. 8 contains two switch de-bouncing circuits. In Section 12 we discuss the electrical details of the input circuits; here we will be satisfied to state that the resistors keep the control inputs inactive unless the voltage from the switch forces one input to become active. When the switch is off, it is constantly resetting the flip-flop, producing a constant F output. As the switch moves toward the on position, there will be a period of oscillation or bounce on the R input, caused by the mechanical switch breaking and making its contact with its off terminal. The S input is false throughout all of this, and the repeated resetting does not affect the false output of the flip-flop. There follows a "long" period when the switch moves between its off and on positions, during which time both S and R are false. Then the switch begins its bouncy contact with the on terminal. The first contact causes S to become true, which sets the flip-flop to its true state, where it remains throughout the on-position bounce and until the switch is returned to off.


FIG. 8. Mechanical switch de-bouncing circuits using asynchronous RS flip-flops.
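Running an invented bounce pattern through the same cross-coupled-Nor model (repeated here so the sketch stands alone) shows the clean-up: the first contact sets the flip-flop, and the remaining bounce has no effect.

    def nor(a, b):
        return int(not (a or b))

    def settle(x, q, s, r):
        for _ in range(4):               # let the feedback loop stabilize
            x, q = nor(s, q), nor(r, x)
        return x, q

    bounce_s = [0, 1, 0, 1, 0, 1, 1, 1]  # bouncy contact with the on terminal
    x, q = 1, 0                          # switch was off: flip-flop held reset
    for s in bounce_s:
        x, q = settle(x, q, s, 0)        # R is false once the arm leaves off
        print(q, end=" ")                # 0 1 1 1 1 1 1 1: one clean transition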

Ambiguous behavior in the RS flip-flop. Of the four voltage combinations of the S and R inputs, we have used three: to hold, set, and reset. What happens when S and R are simultaneously true? In the Nor-gate version, the voltages at both outputs of the flip-flop will be low--a disturbing situation. In the Nand-gate version, both will be high. Although this deviation from voltage complementarity is unwelcome, it nevertheless represents a well-defined and stable configuration of the flip-flop. But watch what happens when we try to retreat from this configuration of inputs. If we change only one of the inputs, the flip-flop enters either the set or reset state, without difficulty. But if we try to change both inputs simultaneously (in an attempt to move to the hold state), the flip-flop is in deep trouble. Consider the Nor-gate version of the RS flip-flop, FIG. 6. If the voltages at S and R are both high, then they are low at both X and Q. If the voltages at S and R both become low simultaneously, then after one gate delay both gates in the flip-flop will produce high outputs. These high outputs, feeding back to the inputs of the Nor gates, will result in low gate outputs after one more gate delay. And so on. The circuit oscillates rapidly, at least at the beginning, with both outputs producing either high or low voltage levels "in phase." The resulting changes occur so rapidly that the flip-flop is forced out of the digital mode of operation for which it was designed, and the output voltages quickly cease to conform to reliable digital voltage levels--an example of metastable behavior that is discussed in Section 12. Eventually, the slight differences in the physical properties of the two gates will allow the flip-flop to drop into the set state or the reset state. The time required for the voltages to settle and the final result are uncertain, so this behavior is of no use to designers. Therefore, it is considered improper design practice to allow R and S to be asserted at the same time.

Excitation tables. Timing diagrams are useful for displaying the time-dependent characteristics of sequential circuits, but for most purposes a tabular form is better. The excitation table is the sequential counterpart of the truth table or voltage table for combinational circuits. The excitation table looks much like a truth table, but it contains the element of time. In a sequential circuit, the new outputs depend on the present inputs and also on the present values of the outputs. We can display the behavior of the RS flip-flop of FIG. 6 in the following excitation table:

    S  R    Q(t+δ)
    L  L    Q(t)      (hold)
    L  H    L         (reset)
    H  L    H         (set)
    H  H    (avoid)

Q(t) is the value of output Q at time t; Q(t+δ) is the value of Q at a small time δ after t, where δ is sufficiently long for the effects of the gate delays to settle down.

The excitation table is also useful for displaying the logical behavior of sequential circuits. For instance, the following excitation table describes the logical behavior of RS flip-flops, using a modification of the previous notation:

    S  R    Q(t+δ)
    F  F    Q(t)      (hold)
    F  T    F         (reset)
    T  F    T         (set)
    T  T    (avoid)
In the literature, notations for excitation tables vary greatly and in this section we will use a variety of forms. You should be able to recognize these notational differences.

Clocked Sequential Circuits

Asynchronous flip-flops are 1's catchers. A more useful class of flip-flop is available for general digital design. In these flip-flops, outputs will not change unless another signal, called the clock, is asserted. Since the activity is synchronized with the clock signal, these flip-flops are called synchronous. Digital systems usually have a repetitive clock with a square waveform. The clock signal alternates between its H and L signal levels. Depending on the application, we may view either H or L as representing truth on the clock line, although in almost all our applications we shall use the T = H assignment for clock signals. In Section 12 we discuss ways of generating this important signal, and you will encounter clocked circuits throughout the remainder of this book.

Clocked RS flip-flop. We can derive a clocked flip-flop from an asynchronous RS flip-flop by gating the R and S input signals to restrict the time during which they are active, as in FIG. 9. The flip-flop outputs may change whenever the clock is true--a potentially risky situation similar to the 1's catching of the latch circuit. In digital systems, flip-flop outputs often contribute to combinational circuits that produce inputs to other flip-flops. Shortly after the rise of the clock, the system is in "shock" owing to the changing of flip-flops.

During this period of shock, hazards may be present that can feed erroneous signals into flip-flop inputs while the clock is still true, resulting in false setting or resetting of the flip-flops.


FIG. 9. A clocked RS flip-flop circuit.

It is natural to try to avoid this problem by making the true portion of the clock signal as narrow as possible. Unfortunately, this is not a good solution, since the system's behavior is crucially dependent on the quality of the clock and narrow clock signals are difficult to generate and distribute.

The aim is to reduce the time during which the flip-flop outputs respond to the inputs. Since altering the clock waveform leads to difficulties, can we achieve the goal by further modification of the flip-flop circuit itself? Can we devise a flip-flop that will recognize R and S only at a single instant and ignore the inputs at other times? Such behavior would be desirable because all flip-flops would change at precisely the same time if they were clocked from the same source. This would mean that we could arrange for all the R and S inputs on all flip-flops to be stable at the time of clocking, and the flip-flops would not be influenced by the shock of the changes induced just after clocking.

Flip-flops that allow output changes to occur only at a single clocked instant are called edge-driven or edge-triggered. An edge is a voltage transition on the clock signal, and may be either a positive edge (L --> H) or a negative edge (H--> L). The clocked circuit in FIG. 9 is level-driven, since its outputs may change at any time during the true part of the clock cycle. In your designs of clocked sequential circuits, use only edge-driven devices.

Master-slave flip-flop. The master-slave flip-flop is a relic from the early days of integrated circuit technology, but is still widely used because of its pseudo-edge-driven characteristics. It is a relatively simple device that we can easily discuss at the gate level, so we will show how one is derived by extending the clocked RS flip-flop. FIG. 10 is a master-slave flip-flop schematic. The master flip-flop will respond to inputs S and R as long as the clock signal is high. This period must be long enough to ensure that S and R are stable when the clock goes from high to low. This H --> L transition, the negative clock edge, isolates the master flip-flop from the inputs S and R. The master flip-flop will now remain unchanged until the next positive clock edge.


FIG. 10. A master-slave clocked RS flip-flop.

Because of the voltage inverter, the slave flip-flop does not become sensitive to its input until one gate delay after the negative clock edge. At that time, it receives its S and R inputs from a stable master flip-flop. The net effect is that the outputs of the master-slave combination change only on the negative clock edge rather than during a clock level.

Pure edge-driven flip-flop. The master-slave flip-flop appears to be an attractive edge-driven device. Why are we not content with this design? Because the master flip-flop is still a 1's catcher during the positive half of the clock cycle. This means that R and S must stabilize during the negative half of the clock, since the master flip-flop will react to any T glitches during the positive clock phase. We could greatly simplify our digital circuit designs if we could eliminate the 1's-catching behavior. We need a flip-flop that samples its inputs only on a clock edge and changes its outputs only as a result of the clock edge.

Such a device is called a pure edge-driven flip-flop. The F --> T clock transition is called the active edge. It may be either the H --> L or L --> H transition, although in the most useful integrated circuits the L --> H transition is the active edge.

The property of changing state and sensing inputs only at a given instant gives the designer a powerful tool for combating glitches and noise. We can now choose the time to look at signals and can fix that time to allow adequate stabilization of the system. We will make constant use of pure edge-driven sequential circuits in our designs. The internal structure of these devices is rather complex, but for purposes of digital system design it is not necessary for us to examine their construction in detail. Hereafter, in all our discussions of clocked sequential circuits, we will assume the use of pure edge-driven devices.

Excitation tables for edge-driven flip-flops. Assume that the edge-driven flip-flop is subjected to a steady stream of active clock edges. Each clock edge will cause the flip-flop to enter either its set or its reset state, in accordance with the values of its inputs and the current value stored in the flip-flop. After the flip-flop has received n clock triggers, the value stored in the flip-flop is Q(n). If the flip-flop is in the set state after the nth clock edge, then Q(n) = T; if in the reset state, Q(n) = F. After the appearance of the next clock edge, the value of Q will be Q(n+1). The excitation table for edge-driven devices is a tabulation of Q(n+1) for all combinations of the exciting variables.

In the remainder of this section, we will use excitation tables to classify flip-flops. For the excitation table to be valid, we must ensure that the control inputs are stable for a short time before the active clock edge (the setup time), and perhaps for a short time after the active clock edge (the hold time). The input voltages may go through wild excursions prior to the onset of the setup time and after the hold time, as long as they remain stable during the setup and hold times. (See Section 12 for a discussion of setup and hold times.)

CLOCKED BUILDING BLOCKS

In this section, we present the common SSI building blocks for clocked digital design. Table 1, at the end of Section 12, contains a selected list of useful integrated circuits for these as well as more complex building blocks.

The JK Flip-Flop

Whereas the RS flip-flop displays ambiguous behavior if both R and S are true simultaneously, the JK flip-flop produces unambiguous results in all combinations of its inputs. A logical excitation table for the basic JK flip-flop is:

    Clock       J  K    Q(n+1)
    F (stable)  X  X    Q(n)
    T (stable)  X  X    Q(n)
    F --> T     F  F    Q(n)      (hold)
    F --> T     T  F    T         (set)
    F --> T     F  T    F         (clear)
    F --> T     T  T    ~Q(n)     (toggle)
J is the counterpart of the S input of an RS flip-flop, and K is the counterpart of R. The first two lines of the excitation table demonstrate the edge-triggered behavior of the flip-flop: when the clock signal is a stable false or true, the output of the flip-flop is insensitive to the other inputs. Often these lines do not appear in the excitation table, since such behavior is expected of an edge-triggered device. The remaining four lines in the table describe the flip-flop behavior when the clock undergoes its active (F -> T) transition. The first three of these lines are analogous to the RS flip-flop. The last line shows that, if both control inputs are true when the clock fires, the flip-flop will complement its output. This behavior is called toggling.
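One active clock edge of this excitation table reduces to a four-way case, as in this minimal Python sketch:

    def jk_edge(q, j, k):
        # New value of Q after an active clock edge.
        if j and k:
            return not q    # toggle
        if j:
            return True     # set
        if k:
            return False    # clear
        return q            # hold

    q = False
    for j, k in [(1, 0), (0, 0), (1, 1), (1, 1), (0, 1)]:
        q = jk_edge(q, j, k)
        print(int(q), end=" ")   # 1 1 0 1 0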

Commercial JK flip-flops come in various forms. The most interesting variations are:

(a) Active clock edge: positive or negative. On all clocked devices, we show the clock input as a small wedge inside the device symbol. A negative edge-triggered flip-flop has, in addition, a small circle (representing T = L) on the clock input.

(b) Active voltage level for J and K. We find flip-flops with both J and K active-high (T = H), and also a flip-flop with J active-high and K active-low. In the latter form, the K input has a small circle on the circuit symbol.

(c) Availability of asynchronous R and S inputs. These are often called direct clear or preclear and direct set or preset. One, both, or neither may be present on the chip. Direct set usually appears at the top of the flip-flop symbol, and direct clear at the bottom. Truth is usually a low voltage level, in which case these inputs will bear small circles. As long as an asynchronous input is asserted, it will override the normal synchronous behavior of the flip-flop.

The 74LS109 Dual JK Flip-Flop is our most-used version. It is positive edge-triggered, which is compatible with the standard MSI sequential building blocks, and has a high-active J input and a low-active K input. As usual, when designing with JK flip-flops we think in terms of logical operations rather than voltages. It is useful to describe the primary logical operations on the JK flip-flop as the "set" and the "clear," setting Q to T and clearing Q to F. The 74LS109 flip-flop has two useful mixed-logic representations, shown in FIG. 11 with appropriate input and output signals. "Pr" is the asynchronous preset input; "Clr" is preclear. The symbol in FIG. 11a is conventional and causes no difficulty. The circuit shown in FIG. 11b is less easy to derive, but it gives us a degree of flexibility that repays our efforts. The voltage excitation table for the 74LS109 is


    Clock      J  K    Q(n+1)
    L --> H    L  L    L
    L --> H    L  H    Q(n)
    L --> H    H  L    ~Q(n)
    L --> H    H  H    H
    (stable)   X  X    Q(n)

Here, ~Q(n) means the voltage inversion of Q(n); as usual, X means "don't care." This excitation table contains yet another variation of notation, in which the monotonous input column for the present value of the flip-flop's output is eliminated.


FIG. 11. Two mixed-logic uses of the 74LS109 Dual JK Flip-Flop.

When T = H at Q, we derive the mixed-logic symbol in FIG. 11a, the usual form. If T = L at Q, the logical act of setting the flip-flop must result in an L output at Q; logical clearing must yield Q = H. In order to match this behavior with the voltage excitation table, we are led to the conclusion that we must set the flip-flop with the K input and clear with the J input. In turn, this causes the preset input Pr to perform as a logical direct clear, and the preclear input Clr to perform as a logical direct set.

The advantage of the form used in FIG. 11b is its versatility, since we use a different voltage convention for setting and clearing than we do with the conventional symbol. This mixed-logic symbol for the 74LS109 is the most difficult of the common building blocks to derive, yet having once derived and mastered it, we may use either symbol for the 74LS109, as our use dictates, without further thought.

This exercise in mixed logic illustrates one aspect of good design: We try to define the general behavior of common circuit elements, and arrive at general solutions to common design problems. We move these recurring but perhaps difficult items up front, where we face them squarely, so that having dealt with their intricacies once, we may thereafter use the standard results in our design work. You will see this principle invoked many times in this book; it is the essence of top-down design.

The JK flip-flop is our most powerful SSI storage element, and you must master its use. There are several ways of using a single flip-flop, and later you will see many larger constructions based on this flexible element.

JK flip-flop as controlled storage. The most general use of the JK flip-flop, and the one that gives it such power and flexibility, is as a storage element under explicit control. In digital design, whenever we must set or clear or toggle a signal to form a specific value for later use, we usually think of a JK flip-flop.

The penalty for this generality is the need to control two separate inputs.

JK flip-flop for storing data. The JK flip-flop is basically a controlled storage element. On occasion, we wish to adopt a different posture and view the JK flip-flop as a medium for entering and storing data. From the excitation table, we see that Q(n+1) = Q(n) whenever J = K = F at the clock edge. This is simply a data-storage mode. All that is necessary to continue holding data in the flip-flop is to ensure that J = K = F during the setup time before each clock edge.

JK flip-flop for entering data. The J and K inputs are not data lines; they are control lines for the flip-flop storage. Nevertheless, we can view the JK flip-flop as a data-entry device. We can enter data in three ways:

(a) Clearing, followed by later setting if necessary.

(b) Setting, followed by later clearing if necessary.

(c) Forcing the data into the flip-flop in one clock cycle.

The rule for case (a) is:

If you are sure that the flip-flop is cleared, you may enter data D into the flip-flop on a clock edge by having J = D, independent of the value of K.

Case (b) is analogous to case (a). The rule is:

If you are sure that the flip-flop's output is true, you may enter data D into the flip-flop on a clock edge by having K = D, regardless of the value of J.

You should verify the rules for cases (a) and (b).

As for case (c), the designer often cannot guarantee that a flip-flop will be in a given state. Proceeding as we did in cases (a) and (b) would waste one clock cycle for the initial clearing or setting operation. It would be nice to have a mode that would force data to enter the flip-flop at a clock edge, regardless of the present condition at the output. Such a data-entry mode is called a jam transfer, since the data is "jammed" into the flip-flop independent of prior conditions. Examination of the excitation table for the JK flip-flop shows that such a mode is indeed available. We enter data D as follows: If D = F, J must equal F and K must equal T. If D = T, J must equal T and K must equal F. Combining these conditions, we see that Q(n+1) will equal D whenever J = D and K = ~D. Now you see the utility of having opposite voltage conventions for truth on the J and K inputs of the 74LS109 flip-flop. With this device, we can connect the J and K pins together, which makes K = ~J as required by the analysis above. Then, by connecting the input data D to the joined inputs of the flip-flop, we will enter D into Q at each clock edge.
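The claim is easy to verify against the standard JK characteristic equation Q(n+1) = J·~Q(n) + ~K·Q(n): substituting J = D and K = ~D collapses it to D. A short exhaustive check in Python:

    # Jam transfer: with J = D and K = ~D, the next state is D regardless of Q.
    for q in (0, 1):
        for d in (0, 1):
            j, k = d, 1 - d
            next_q = (j & (1 - q)) | ((1 - k) & q)   # JK characteristic equation
            assert next_q == d
    print("Q(n+1) equals D in all four cases")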

The D Flip-Flop

The D (Delay) flip-flop has a simpler excitation table than the JK, and is used in applications that do not require the full power of the JK flip-flop. The excitation table for the D flip-flop is:

    Clock      D    Q(n+1)
    F --> T    F    F
    F --> T    T    T

In other words, Q(n+1) = D(n): at each active clock edge the flip-flop simply takes on the value then present at its D input.
As an SSI device, the D flip-flop appears in these common varieties: (a) The active clock edge can be either positive (L --> H) or negative (H --> L), which is shown by the absence or presence of a small circle on the clock terminal.

(b) Direct (asynchronous) set and clear inputs appear in these combinations: both, neither, or clear only. Almost always, these inputs, when present, are low-active, and appear in the diagram with the small circle. These asynchronous inputs are 1's catchers, and you should use them only with great caution.

(c) Some D flip-flops have only the Q output; others provide both polarities.

Although it appears to be ideal for data storage, there are, in fact, just a few common uses of the D flip-flop in good design.

D flip-flop as a delay. As its name implies, the D flip-flop serves to delay the value of the signal at its input by one clock time. You will see such a use in Section 6 when we discuss the single-pulser circuit for manual switch processing.
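Behaviorally, the D flip-flop is a one-clock delay line, as this small Python fragment illustrates (the input waveform is arbitrary):

    d_wave = [0, 1, 1, 0, 1, 0]
    q, out = 0, []
    for d in d_wave:
        out.append(q)       # Q during this clock period ...
        q = d               # ... then Q(n+1) = D(n) at the edge that ends it
    print(out)              # [0, 0, 1, 1, 0, 1]: D delayed by one clock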

D flip-flop as a synchronizer. One natural application of the D flip-flop is as a synchronizer of an input signal. Clocked logic must sometimes deal with input signals that have no fixed temporal relation to the master clock. An example is a manual pushbutton such as a stop switch on a computer console. The operator may close this switch at any time, perhaps so near the next edge of the system clock that the effect of the changing signal cannot be fully propagated through the circuit before the clock edge arrives. If the inputs to clocked elements are not stable during their setup times, their behavior is not predictable after the clock edge: some outputs may change, others may not. We need some way to process this manual switch signal so that it changes only when the active clock edges appear. This is called synchronization. Since the output of a clocked element changes only in step with the system clock, we may use the D flip-flop as a synchronizer by feeding the unsynchronized signal to the flip-flop input.

We deal with this matter more fully in later sections.

D flip-flop for data storage. The D flip-flop appears to be well suited to data entry and storage. Unfortunately, designers use it far too often for this purpose. The problem is that every clock pulse will "load" new data and this is seldom wanted. We usually need a device that allows us to control when the flip-flop accepts new data, just as we could with the JK flip-flop. With the D flip-flop, it seems natural to gate the clock by ANDing it with a control signal in order to produce a clock edge at the flip-flop only when we wish to load data.

This is a dangerous practice, as you will see in later sections. Clocked circuit design relies on a clean clock signal that arrives at all clock inputs simultaneously.

We have the best chance of meeting these conditions if we use unmodified clock signals. This means that the devices will be clocked every cycle, so we must seek other ways of effecting the necessary control over the flip-flop activities.

The enabled D flip-flop. To alleviate the problems caused by gating the clock input to a D flip-flop, we will construct a new type of device called the enabled D flip-flop. FIG. 12 shows the principle. The circuit consists of a D flip-flop with a multiplexer on its input. A new control signal LOAD appears, in addition to the customary data input.


FIG. 12. An enabled D flip-flop.

The system clock goes directly to the clock input, thereby avoiding the problems of a gated clock. As long as LOAD is false, the data selector selects the current value of the flip-flop output as input to the flip-flop. The net effect is that Q recirculates unchanged: the flip-flop stores data. When LOAD = T, the multiplexer routes the external signal DATA into the D input, where it will be loaded into the flip-flop on the next clock edge. The loading process is a jam transfer.
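A behavioral Python sketch of FIG. 12, evaluated once per active clock edge:

    def enabled_d_edge(q, data, load):
        # Mux selects DATA when LOAD is true, else recirculates Q;
        # the D flip-flop then takes whatever the mux presents.
        return data if load else q

    q = 0
    for load, data in [(0, 1), (1, 1), (0, 0), (0, 1), (1, 0)]:
        q = enabled_d_edge(q, data, load)
        print(q, end=" ")    # 0 1 1 1 0: DATA enters only when LOAD is true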

The enabled D flip-flop is the element of choice for simple data storage applications. Although we can accomplish the same effect with the JK flip-flop, the enabled D device provides a more natural way of handling data. Curiously, integrated circuit manufacturers were slow to produce SSI elements containing several independent enabled D flip-flops. However, the concept is widely used in arrays of storage elements, and we have available a good selection of tools for registers.

REGISTER BUILDING BLOCKS

A register is an ordered set of flip-flops. It is normally used for temporary storage of a related set of bits for some operation. This is a common activity in digital design, especially when the system must process byte- or word-organized data. You are familiar with the use of the word register in the context of digital computers, but the notion is more general than just accumulators and instruction registers. Multiple-bit storage is such a desirable architectural element that it is a natural candidate for building blocks. Integrated-circuit manufacturers have provided an assortment of useful devices at the MSI level of complexity.

Data Storage

Enabled D register. The most elegant data storage element for registers contains the enabled D flip-flop. Chips are available with four, six, or eight identical elements per package with common clock and enable inputs. The 74LS378 Hex D Register with Enable is typical of this building block. This 16-pin chip provides six flip-flops, each with a data input and a single Q output.

The 74LS379 Quad D Register with Enable has four flip-flops, each with both output signal polarities.

As you have seen, we favor the enabled D configuration because we may hook the system clock directly to the device's clock input. The apparently small point of not gating the clock line is really of great importance to the reliability of the system, and you should adopt the practice routinely.

Pure D register. There are a few occasions when a register of pure D flip-flops is the element of choice. We can always achieve this behavior with the enabled variety by setting the enabling input to the true condition. Pure D registers are also available, usually with a common asynchronous (direct) clear input. The only reason to choose such an element is if you want the direct clear feature; you know to be wary of its 1's-catching properties.

Counters

Modulus counting. Counting is a necessary operation in digital design. Since all binary counters are modulus counters, we will explore the concept of modulus counting before we examine the hardware for it.

Counting the positive integers is an infinite process. We have a mathematical rule for writing down the integer n + 1 if we are given the integer n. This may cause the creation of a new column of digits; for example, if n is the three-digit decimal number 999, then n + 1 is the four-digit number 1,000. In an abstract mathematical sense, the creation of the fourth digit is trivial. Not so in hardware.

Hardware counters are limited to a given number of columns of digits, and thus there is a maximum number that a counter can represent. A three-digit decimal counter can represent exactly 10^3 different numbers, from 000 through 999. We define such a counter as a modulus (mod) 1000 counter. (A number M modulo some modulus N, written M mod N, is defined as the remainder after dividing M by N.) Another way of viewing this is that the counter will count normally from 000 through 999, and one more count will cause it to cycle back to 000. An automobile's odometer behaves much the same way.
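In software terms, a modulus counter is simply addition with wraparound:

    count = 999
    count = (count + 1) % 1000   # a modulo-1000 counter rolls over to 0
    print(count)                 # 0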

Counting with the JK flip-flop. The JK flip-flop, operating in its toggle mode, goes through the following sequence:

Clock pulse number:   0 1 2 3 4 5 6 ...
Flip-flop output Q:   0 1 0 1 0 1 0 ...


FIG. 13. A two-bit binary counter. The least significant bit is on the left.

We see that the flip-flop behaves as a modulo-2 binary counter. Counters of higher moduli can be formed by concatenating other binary counters. For instance, a modulo-4 counter made from two modulo-2 counters must behave as follows:

Clock pulse number:       0  1  2  3  4  5  6  7  8 ...
Counter outputs Q1, Q0:   00 01 10 11 00 01 10 11 00 ...

Can we devise a logic configuration that will cause two JK flip-flops to count in this fashion? One answer is in FIG. 13. Here, for drafting convenience, we draw the least significant bit Q0 on the left, whereas Q0 appears on the right in the usual mathematical representation of the number Q1Q0. Q0 alternates in value (toggles) at each clock. At alternate clock edges, when Q0 = T, the J and K inputs of the other flip-flop are true; at these times the value of Q1 toggles.


FIG. 14 contains another solution that appears to give equivalent results.

Again, Q0 will toggle at each clock pulse, since J = K = T on that flip-flop.

This is necessary for a binary counting sequence. Every time Q0 generates a T --> F transition (H --> L in this circuit), Q1 will toggle since J = K = T on that flip-flop also, and Q0 provides the Q1 clock. FIG. 15 is a timing diagram for this circuit.


FIG. 14. A binary ripple counter. The least significant bit is on the left. This circuit displays both logic and voltage, whereas the related FIG. 13 displays only logic.

The timing diagram for FIG. 13 is almost identical to FIG. 15; the difference is due to propagation delays. In FIG. 13, if we assume that tp is the flip-flop propagation delay, both Q1 and Q0 will change tp nanoseconds after the clock edge, since J and K were stable during the setup time of both flip-flops. We define such counters as synchronous.


FIG. 15. A timing diagram for a 2-bit ripple counter. Each stage suffers a cumulative propagation delay. In synchronous counters there is only one delay.

By contrast, Q1 in FIG. 14 cannot change until tp nanoseconds after Q0 has changed. Counters that change their outputs in this staggered fashion are called asynchronous, or ripple, counters, since a change in output must ripple through all the lower-order bits before it can serve as a clock for a high-order bit.

Ripple counters are easily extensible to any number of bits. Thus a modulo-16 ripple counter would be as in FIG. 16. This simple configuration is useful if you are not interested in the temporal relation of Q3 to any lower-order bits. A common example is the digital watch, in which a 32,768-Hz (2^15 Hz) quartz crystal oscillator is the primary timing source. The watch display is driven at a rate of 1 Hz, using the output of a 15-stage ripple counter.


FIG. 16. A 4-bit (modulo-16) ripple counter.

To discover the problems that can arise with ripple counters, consider a modulo-8 counter that is changing its count from 3 to 4. The ripple sequence from the initial clock edge would be

    3 (011) --> 2 (010) --> 0 (000) --> 4 (100)

with each arrow representing one flip-flop propagation delay.
In a typical application, the count code is fed into a decoder to produce individual signal lines for each count. In this case, we would have momentary true hazards at decoder outputs 2 and 0, each lasting for a time tp. These glitches are seldom useful and may be quite harmful. We can eliminate them by using a synchronous counter.
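The transient can be listed explicitly. In this Python sketch each stage toggles one propagation delay after the 1 -> 0 fall of the stage below it, idealizing the three-bit ripple behavior:

    q = [1, 1, 0]                        # Q0, Q1, Q2: the count 3 (binary 011)
    states = [4 * q[2] + 2 * q[1] + q[0]]
    propagate = True                     # the external clock edge toggles Q0
    for i in range(3):
        if not propagate:
            break
        propagate = (q[i] == 1)          # a 1 -> 0 fall clocks the next stage
        q[i] ^= 1
        states.append(4 * q[2] + 2 * q[1] + q[0])
    print(states)                        # [3, 2, 0, 4]: transient counts 2 and 0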

FIG. 13 represents a 2-bit special case of synchronous counters. The rule for changing the nth bit of a binary counter is that all lower bits must be 1. Using this rule, we can construct a modulo-16 synchronous counter from JK flip-flops, as in FIG. 17. At the cost of the extra AND gates, we have manipulated the inputs to each flip-flop to cause the flip-flops to toggle at the proper time.

Since a common clock signal runs to each flip-flop, the output changes will occur simultaneously, without ripple.


FIG. 17. A 4-bit (modulo-16) synchronous counter.

MSI counters. Synchronous counters are so useful that manufacturers have prepared a wide variety as MSI integrated circuits, typically modulo-10 (decade) and modulo-16 (4-bit binary) devices with provisions for cascading to form higher-modulus counters. Some synchronous counters have an asynchronous clear input. You must be careful to supply a noise-free signal to this terminal to ensure reliable operation. Remember, it is a 1's catcher! Novices (and some experienced designers) tend to use the asynchronous clear feature too often. About the only time it can be safely used is during a power-up or master reset sequence to drive crucial flip-flops to a known state.

The ideal counter would be cascadable and have a synchronous clear input terminal, such as in the 74LS163 Four-Bit Programmable Binary Counter. With this device, a cascaded 12-bit synchronous counter would appear as in FIG. 18.

Each 74LS163 chip is a synchronous counter. CLOCK.H is the system clock signal, and since it goes to each 4-bit chip, all output changes will be synchronous.


FIG. 18. A 12-bit binary counter constructed with 74LS163 chips.

TC (Terminal Count) and CET (Count Enable Trickle) are individual device controls that permit proper counting in a cascaded configuration. The counting rule is that a given bit must toggle if all lower bits are equal to 1. This rule will yield the normal binary counting sequence. We may reinterpret the binary sequence as a hexadecimal sequence by grouping the binary bits into 4-bit units and giving each 4-bit unit a range of 0 through F in hexadecimal. The rule for counting a hexadecimal digit is: all lower-order hexadecimal digits must equal F.

TC and CET implement this rule. The defining equation for TC is

TC = (COUNT = F) AND CET

The signal to the CET input comes from the TC output of the previous counting stage. When CET is true, it serves as a signal to the chip that all previous stages are at the terminal count F. TC is thus an output that notifies the next stage if all lower bits in the counter cascade are 1. You can see the proper cascading connections in FIG. 18.
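A simplified Python model of the cascade (CLR and LD omitted; counts held as integers, one call per chip per clock edge, with CEP--the master enable described next--held true) shows TC and CET carrying the enable up the chain. Note that TC is computed from the count as it stands before the edge, which is what keeps the whole cascade synchronous:

    def ls163_edge(count, cep, cet):
        # One clock edge of a (much simplified) 74LS163: returns (new_count, tc).
        tc = (count == 0xF) and cet          # TC = (COUNT = F) AND CET
        if cep and cet:
            count = (count + 1) & 0xF
        return count, tc

    stages = [0xE, 0xF, 0x0]                 # low, middle, high 4-bit stages
    for _ in range(3):                       # three clock edges, CEP held true
        enable = True                        # CET of the lowest stage tied true
        for i in range(3):
            stages[i], enable = ls163_edge(stages[i], True, enable)
        print(["%X" % s for s in stages])
    # prints ['F','F','0'], ['0','0','1'], ['1','0','1']: counts 255, 256, 257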

CEP (Count Enable Parallel) is a master count enable signal which goes to all chips. It allows the designer to specify when the circuit should engage in counting activity.

The 74LS163 has two more controls: the synchronous clear input CLR, which permits clearing of the chip to zero at the next clock pulse, and the synchronous load input LD, which allows loading of the counter with the 4-bit pattern appearing at the data inputs. The existence of the load feature accounts for the "programmable" in the chip's name. The priority of operations is such that asserting CLR will override LD, which will override CEP. It will be a useful exercise for you to derive the J and K inputs to each flip-flop in the 74LS163 counter. For an input data bit D0, the (unsimplified) result for bit 0 is


Many other synchronous counters are available as MSI chips. We have covered the 74LS163 in detail, since it is an example of a useful and well-engineered MSI building block. When dealing with circuits of MSI or LSI complexity, you must be cautious about adopting new designs. Frequently, a chip that appears to be exciting will have subtle features that make it less desirable or useless in good design. An example is the 74LS193 Four-Bit Binary Up-Down Counter. This chip has two serious flaws that may escape the attention of the "chip-happy" designer. First, its data-loading feature is asynchronous, which presents us with the necessity of keeping the data-load input signal clean at all times, to avoid its habit of reacting to any momentary true glitch. Second, the chip has two clocks, one for counting up and another for counting down.

Using this chip requires very careful and arduous planning of the type that we choose to avoid entirely in our designs in Part II. Another up-down counter, the 74LS669, does not suffer from the 74LS193's deficiencies. Our point is that you must be alert to detect such deviations from good design and should choose your building blocks carefully and conservatively.

Shift Registers

A shift register performs an orderly lateral movement of data from one bit position to an adjacent position. We may construct a simple shift register from D flip-flops, as shown in FIG. 19. This circuit accepts a single bit of data DATA and shifts it down the chain of flip-flops, one shift per clock pulse. Data enter the circuit serially, one bit at a time, but the entire 4-bit shifted result is available in parallel. Bits shifted off the right-hand end are lost. Such a circuit is a primitive serial-in, parallel-out shift register.

In practice, we have need for four shift register configurations: serial-in, parallel-out; parallel-in, serial-out; parallel-in, parallel-out; and serial-in, serial-out. The parallel-in, parallel-out variety is the most general, subsuming the other forms. Let's design one.


FIG. 19. A simple shift register constructed with D flip-flops.

Assume that we are building a 4-bit general shift register. What features do we require?

(a) We must be able to load initial data into the register, in the form of a 4-bit parallel load operation.

(b) We must be able to shift the assembly of bits right or left one bit position, accepting a new bit at one end and discarding a bit from the other end.

(c) When we are not shifting or loading, we must retain the present data unchanged.

(d) We must be able to examine all 4 bits of the output.

Suppose we start with an assembly of four identical and independent D flip-flops, clocked by a common clock signal. Let the flip-flop inputs be D3-D0 and the outputs be Q3-Q0, from left to right. Let the external data inputs be DATA3-DATA0. We have four shift register operations: load, shift left, shift right, and hold. These will require at least 2 bits of control input to the circuit; let S1 and S0 be the names of two such control bits. Our task is to derive the proper input to each D flip-flop, based on the value of the control inputs S1 and S0. In our design of an enabled D flip-flop, we encountered a related problem, actually a subset of the present problem. There we had two operations, hold and load, that we implemented with one control input, using a multiplexer. We may employ the same technique here, using a four-input multiplexer to provide input to each flip-flop. We may then define codes S1, S0 for our four operations.

Using S1 and S0 as mux selector signals, we may infer the proper inputs to the multiplexers. Here are the inputs for a typical bit i of the shift register:


In FIG. 20, the logic for the ith bit is displayed.


FIG. 20. A typical bit Qi of a general shift register.
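In Python, the whole register reduces to a small function of the select code. The coding below (00 hold, 01 shift right, 10 shift left, 11 load) follows the 74LS194 convention and is an assumption here, since the text's own code table is not reproduced:

    def shift_edge(q, s1, s0, data=None, rin=0, lin=0):
        # One clock edge of a 4-bit universal shift register (FIG. 20 logic).
        # q[0] is the leftmost bit; "right" means toward higher indices.
        sel = (s1 << 1) | s0
        if sel == 0b00:
            return q[:]                  # hold
        if sel == 0b01:
            return [rin] + q[:-1]        # shift right, discard rightmost bit
        if sel == 0b10:
            return q[1:] + [lin]         # shift left, discard leftmost bit
        return data[:]                   # parallel (jam) load

    q = shift_edge([0, 0, 0, 0], 1, 1, data=[1, 0, 1, 1])   # load 1011
    q = shift_edge(q, 0, 1, rin=0)                           # shift right
    print(q)    # [0, 1, 0, 1]: the bit shifted off the end is lost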

This circuit makes a useful parallel-in, parallel-out shift register. These functions are incorporated into the 74LS194 Four-Bit Bidirectional Universal Shift Register in this way.

Providing both parallel inputs and parallel outputs requires many pins on an integrated circuit chip, so chip manufacturers make a variety of good shift registers for all four combinations of serial and parallel input and output, with up to 8 bits per package.

Three-State Outputs

In Section 3, you learned the advantage of three-state control of the inputs to a data bus. We may provide such control with three-state buffers, such as the 74LS244. However, some register chips provide built-in three-state control of their outputs. In bussing applications, this can be very convenient. When you are using such a chip but do not need the high-impedance state, you may permanently enable the outputs by wiring the three-state output enable line to a true value.


FIG. 21. The architecture of a typical processor.

Bit Slices

In Section 3 we discussed combinational circuits that incorporated a complex set of functions for a several-bit array of inputs. The arithmetic logic unit was the central example of this purely combinational bit-slice circuitry. In Section 7 you will see in detail how to assemble registers, ALUs, multiplexers, shifters, and other components into a central processing unit for a computer. Experience has shown that the architecture of the processing units of a large class of register-based computers and device controllers is quite similar, even when the width of the computer words varies over a wide range. FIG. 21 is an abstraction of this common architecture. A collection of registers, usually called a register file, provides inputs to an ALU. The output of the ALU passes through a shift unit before being routed back into the register file. The register file is a three-port memory that accepts three addresses and simultaneously reads from two addresses and writes into the third. Within this common architecture, the width of the data paths, the internal structure of the register file, the operations performed by the ALU, and the complexity of the shifter vary. Several manufacturers have abstracted a processor bit slice from this architecture, and have provided integrated circuits that capture 4 or 8 bits of the architecture. These chips are frequently general enough to span a range of likely designs of processors, and powerful enough to eliminate much of the MSI-level "glue" usually found in such designs.

By stacking the bit slices side-by-side, it is often possible to design a high-speed, simple, and powerful processor.

One of the first chips to be designed as a bit-slice component was Advanced Micro Devices' Am2901 Four-Bit Processor, which incorporated a 16-word register file, 16 arithmetic and logical operations, and rudimentary provisions for supporting the extra storage required for multiplication and division. Bit-slice architecture has evolved into quite powerful ALU slices, supported by large families of auxiliary chips for processor sequencing, input-output support, and so forth. The Am2903 and the TI888 are representative of the central elements in such bit slice families. The TI888 has an auxiliary multiplier-quotient register to support multiplication and division, and also supports a wide range of logical and arithmetic operations, including binary multiplication and division and binary-coded decimal arithmetic. It has provisions for attaching an external register file, to enlarge the capacity of the holding store.

To build structures based on conventional processor architecture, you should investigate the use of processor bit-slice chips. But be aware that these are highly complex circuits whose data sheets will require considerable study before you understand the details of the architecture, set of instructions, and timing.
