Fundamentals of Digital Design -- BUILDING BLOCKS WITH MEMORY (part 2)




LARGE MEMORY ARRAYS

If a few bits of register storage are good, would 1,024 bits be better? How about 4K, 64K, or 1M? For data storage alone, the more bits per package the better.

But with such a large number of bits stored, we would have to give up the ability to gain access to all the bits simultaneously, and we would need some way of specifying which bit or group of bits we wish to look at. Several forms of large scale solid-state memories are available. Such devices have revolutionized computer technology by making inexpensive, fast storage readily available to the computer architect. Integrated circuit manufacturers are doubling the number of bits per package roughly every three years, whereas the price per bit is steadily declining.

This trend will continue, and memories will be in the forefront of new electronic technology, because manufacturers can spread engineering and development costs over millions of identical units.

Random Access Memory

A large memory that requires the same time to access each data bit is called a random access memory (RAM). All RAMs share some common features. The unit of storage is the bit, which is built into the surface of a thin silicon wafer.

The area of silicon devoted to a bit is a cell. Several cell structures are in use: some closely resemble the D flip-flop and others store a bit by the presence or absence of an electric charge on a microscopic capacitor embedded in the silicon surface.

A typical RAM has so many cells that it would be impossible to connect each cell to its own integrated circuit pin. To conserve pins, RAMs contain a demultiplexer to distribute an incoming data bit along an internal bus to the correct cell. Similarly, there is a multiplexer to select one cell's output and route it to the output pin. Both the demultiplexer and the multiplexer receive their control from an address supplied to the chip on a set of address lines. The address is encoded: eight pins devoted to an address can select one of 256 cells; 12 pins will handle 4,096 cells.

RAMs are characterized by their total number of storage cells, for example 4K (4,096), and by the size of the words they contain. A 4K x 1 RAM contains 4K 1-bit words, whereas a 1K x 4 RAM still contains 4K cells, but the cells are organized as 1,024 four-bit words. The 1-bit-per-word organization saves integrated circuit package pins as compared to the 4-bits-per-word structure. (You should figure out why this is so, and how many pins are saved.) Thus in large memories, 1-bit-per-word RAMs are common, whereas designers of smaller memories often use the 4-bits-per-word type to keep down the number of chips devoted to memory.
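The pin saving mentioned above is easy to check with a short calculation. This sketch counts only address pins and separate data-in and data-out pins, ignoring power, ground, and control pins; the function name is our own, for illustration:

```python
import math

def ram_pins(words, bits_per_word):
    """Rough pin count for an unmultiplexed RAM: address pins plus
    separate data-in and data-out pins (power, ground, and control
    pins are ignored for this comparison)."""
    address_pins = int(math.log2(words))
    return address_pins + 2 * bits_per_word

# The same 4K cells organized two ways:
print(ram_pins(4096, 1))   # 4K x 1: 12 address pins + 2 data pins = 14
print(ram_pins(1024, 4))   # 1K x 4: 10 address pins + 8 data pins = 18
```

Under these assumptions the 4K x 1 organization saves four pins over the 1K x 4 organization, and the gap widens as the word size grows.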

Large RAMs require many address bits to specify the cells. A 64K x 1-bit RAM requires 16 address bits; a 1M x 1-bit RAM requires 20 address bits. In each case there are too many address bits to allocate each bit to a separate pin on the chip. To alleviate this problem, the address bits in large RAMs are split into two parts, a row address and a column address. At the proper time in the memory cycle, each part is fed to the RAM over the same address inputs-that is, the row address and column address are time-multiplexed. (The row-and-column nomenclature derives from the internal structure of the RAM, which can be viewed as a large, two-dimensional array with elements addressed by row and column indices.) A 256K RAM has 9 input pins devoted to the address; the row address and the column address each have 9 bits. The multiplexing of address inputs saves valuable pins on the chip, but at the cost of considerable additional complexity in the timing of the memory. The multiplexing of addresses permits physically smaller integrated circuit packages, which saves valuable space on printed circuit boards having many RAM chips. We discuss aspects of RAM memory timing later in this section.


FIG. 22. A 3M-word by 2-bit memory constructed with 1M x 1 RAM chips.

Memory system organization. Just as RAM chips have an internal bus structure, so is each chip designed to be a component of a bus-organized memory system in which only one unit of memory is available at any time. Suppose we are given the task of building a 3M x 2 memory (3 million words, 2 bits per word), using 1M x 1 RAM chips. We would create a pair of system busses, data in and data out, and hang the memories onto the busses as shown in FIG. 22.

As in all bus-organized systems, we must have some way of selecting a given element while ensuring that all other devices stay off the bus. RAMs provide a chip enable (CE) for this purpose. The system in FIG. 22 enables RAMs in pairs, since the original specification called for 2 bits per word. The input and output busses may be either open-collector or three-state, depending on the type of chip. For most applications, we prefer the three-state systems because they are faster and eliminate the open-collector's pull-up resistors.

Having developed an architecture for our memory system, we must now determine how to address a particular word in the memory. To derive this address, we shall back away from the chips, look at the memory system as a whole, and ask how we specify a location within the system. Our memory device will need a 22-bit address, since 2^21 = 2M and 2^22 = 4M. Each 1-megabit RAM chip requires 20 bits of address; the remaining 2 bits will specify which of the three banks of 1M-bit RAM chips is being selected. Although the partition of the address is arbitrary, it is common to select the bits for the bank field from the most significant positions in the address. Since 1M-bit RAM chips require multiplexed address inputs, the 20 bits of RAM chip address must be presented as a 10-bit row address followed by a 10-bit column address. The division of the 20 bits of chip address into row and column addresses is arbitrary, and is often decided by the ease of layout on the printed circuit board. Here is one straightforward choice for the system memory address:

System Memory Address (A21 ... A0) = (A21 A20)(A19 ... A10)(A9 ... A0)

Bank Address -- Column Address -- Row Address
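The field extraction above amounts to a few shifts and masks. A sketch in Python, following the layout just given (bank in the top 2 bits, column in the next 10, row in the low 10); the function names are our own:

```python
def split_address(addr):
    """Split a 22-bit system memory address into its bank, column,
    and row fields, following the layout (A21 A20)(A19..A10)(A9..A0)."""
    row    = addr         & 0x3FF   # A9..A0
    column = (addr >> 10) & 0x3FF   # A19..A10
    bank   = (addr >> 20) & 0x3     # A21 A20
    return bank, column, row

def bank_enables(bank):
    """Model a 2-to-4 decoder producing BE0..BE3 from the bank field.
    In the 3M-word system only banks 0-2 are populated, so code 3
    selects no bank."""
    return [int(i == bank) for i in range(4)]

addr = (2 << 20) | (5 << 10) | 7    # bank 2, column 5, row 7
print(split_address(addr))           # (2, 5, 7)
print(bank_enables(2))               # [0, 0, 1, 0]
```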

We can now route the 10 RAM address lines to all RAMs in parallel, and use the bank address A21A20 as a code for the proper bank. A decoder can produce individual enable signals for each bank from the 2-bit bank address code:


BE0 enables the chips in bank 0, BE1 enables bank 1, and so on. A bank contains two RAM chips, since there are 2 bits per word. (The row-select and column select control lines are not shown in the diagram.) The last system-wide signal is the read-write selector WRITE. RAMs can both read and write, so we must supply a logic signal for the operation to be performed when the chip is enabled. By convention, WRITE = F implies reading; WRITE = T means writing.

RAM timing requirements. RAMs are unclocked (asynchronous) devices that require detailed specification of the timing of their control and data signals.

Timing is a function of the internal technology of the chip, and varies widely with the type of memory. To achieve reliable RAM operation, you must adhere rigidly to the manufacturer's requirements. In this section, we will present a few of the major timing parameters. The crucial parameters are:

(a) The period of stability of the address lines.

(b) The period when the chip enable CE is active.

(c) The value of the WRITE signal.

(d) During RAM writes, the period of stability of the write-data input.

(e) During RAM reads, the period of stability of the read-data output.

There are three parameters that characterize all RAMs. The read cycle time is the minimum time that must elapse after the start of a read operation until another operation may begin. This usually determines the time during which the address must be stable. Read-cycle times may range from a few nanoseconds to several hundred nanoseconds, depending on the type of chip. The write cycle time is the corresponding parameter for the write operation. It is usually similar in magnitude to the read-cycle time. Read access time is a measure of the time that must elapse after the start of a read operation before read data is available for use. The access time for read is equal to or less than the cycle time for read.

For RAM write operations, the data sheet specifies when the WRITE signal may become true relative to the address, chip enable, and data signals, and for how long WRITE must remain stable to complete the write operation.

Besides these fundamental measures of RAM performance, the data sheets usually contain a welter of other timing figures, specifying times between various possible signal changes. You can simplify all this by making such reasonable design assumptions as: (a) Chip enable (CE) becomes true at the same time as the address becomes stable.

(b) During a RAM write operation, the data to be written stabilizes at the same time as the address lines.

With these assumptions, provided you choose the most conservative values for the timing parameters, you can usually reduce the number of relevant timing figures to a handful.

RAM timing parameters fall loosely into two groups: setup times and hold times. Setup implies that a signal must be stable prior to some event; it is similar to the setup time for inputs to clocked circuits. Hold time means that an input signal must remain stable for a time after some event.

Read access time is an example of a setup time; it describes the required period of stability of the address prior to the appearance of stable data at the output. The data sheets also contain setup times for chip enable. The intervals during which address, chip enable, and data must remain stable after WRITE becomes false are examples of hold times. Conversely, in RAM reading, the data is often stable for a period after the address or CE changes; such a period is also called a hold time.

In using RAMs in digital designs, you must remember that there will also be delays in the host system. These will arise from combinational propagation delays, bus driving delays, and so on. The RAM timing computations begin with the arrival of stable signals at the RAM, so you must take into account any delays in the host when you determine how fast your system will really be.
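The accounting for host delays is simple addition. The figures below are purely illustrative, not taken from any data sheet; real values come from the chip and system specifications:

```python
# Illustrative figures only -- real numbers come from the data sheets.
ram_read_access_ns = 120   # RAM read access time
address_decode_ns  = 15    # combinational address decoding in the host
bus_driver_ns      = 10    # bus driving delay, each direction

# Time from the host presenting an address until read data is usable:
effective_access_ns = (address_decode_ns + bus_driver_ns
                       + ram_read_access_ns + bus_driver_ns)
print(effective_access_ns)   # 155 ns, not the 120 ns on the data sheet
```

The point is that the system sees a longer access time than the RAM alone provides; the designer must budget for every delay between the host and the chip.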

Static RAM. The static RAM contains bit cells that are similar to a type D flip-flop, similar enough that you may use the D flip-flop as a model for predicting static RAM behavior. As long as the address is constant and the WRITE line is false, the selected cell will continue to put its read data onto the output data bus. A transition from false to true on the WRITE line serves as a clock signal to the RAM to cause input data to be written into the specified RAM cell. As long as the power is on, the static RAM will retain its contents without the need for intervention. The address lines of most static RAMs are not multiplexed-the entire address is presented to the chips, considerably simplifying the RAM control circuitry. These properties make static RAMs simple to incorporate into designs, once you have mastered the basic timing requirements.

We will use a static RAM for the memory in our minicomputer design in Sections 7 and 8.

Dynamic RAM. This device stores data on a tiny capacitor within each bit cell. The dynamic storage cell is much smaller than a cell in a static RAM, typically allowing four times as many bits per unit area of silicon. This factor of 4 becomes of overwhelming importance in large memory systems and nearly all large systems use dynamic RAM chips. The penalty is a significantly increased complexity of the control of the memory; nevertheless, you should consider dynamic RAMs for systems that would require more than, say, 32 static RAM chips.

Most dynamic RAMs have such a large storage capacity that their address inputs are multiplexed to save pins. The multiplexing complicates the control of the RAM --each half of the address must be presented separately to the RAM. The RAM has additional row address strobe and column address strobe control pins, which the circuit that controls the memory must assert at the proper time in the memory cycle to announce that the row address or column address is stable on the address input pins.

The storage element in a dynamic RAM is a capacitor which, like all capacitors, is an imperfect holder of charge. After a period of time-a function of the temperature and geometry of the device and of circuit technology-the charge will leak away. To preserve the data, the system using a dynamic RAM must periodically read and rewrite the stored contents. This periodic restoration is called refreshing: the entire memory must be refreshed at intervals of several milliseconds. Dynamic RAMs allow an entire column of bits to be refreshed at once. A typical RAM controller will cycle through the column addresses, inserting a refresh cycle at regular intervals so that each column is refreshed in a timely fashion. The control circuitry of dynamic RAMs may be contained within each RAM chip or in a separate controller chip.
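The refresh bookkeeping is also simple arithmetic. Assuming, for illustration, a RAM with 512 columns in which every column must be refreshed within 8 milliseconds:

```python
columns           = 512     # columns to refresh (illustrative figure)
refresh_period_ms = 8.0     # every column must be refreshed this often

# A controller that spreads the refresh cycles evenly will insert one
# refresh cycle roughly every:
interval_us = refresh_period_ms * 1000 / columns
print(interval_us)   # 15.625 microseconds between refresh cycles
```

The controller steals one memory cycle at each such interval, a small but ever-present overhead that the rest of the system must tolerate.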

Read-Only Memory

Read-only memory (ROM) is actually a write-once, read-thereafter memory.

Since the writing takes place during the manufacture of the chip, large numbers of identical ROMs can be fabricated at low cost. Changing the write pattern is very expensive, and ROMs are therefore only appropriate when we can amortize the initial cost by purchasing several thousand identical chips.

The bussing and timing requirements tend to be similar to those of the static RAM, with the omission of the now-superfluous read-write line.

The ROM has important uses in digital design as a permanent memory, a code converter and, surprisingly, a logic function generator.

Firmware memory. Consider the ubiquitous pocket calculator with transcendental functions such as square root and trigonometrics. A debugged square root routine need never change, and could therefore be committed to ROM and made a part of the calculator's address space. Such an application is called firmware. ROM firmware remains in the memory when the power is off, and is ready to use as soon as power is restored.

Code conversion. As you learned in Sections 2 and 3, we may use conventional Boolean algebraic techniques to synthesize outputs from inputs.

In a sense, we are using gates to compute the outputs. Consider Exercise 1-36, in which a 4-bit BCD code generates the outputs to drive a seven-segment lamp. To aid your recollection, we will sketch the Boolean algebraic solution: (a) List the BCD bit patterns for digits 0 through 9, and draw the corresponding segments to be lit for each digit:


(b) Plot each segment a through g on a 4-bit Karnaugh map with the unused codes 10 through 15 plotted as "don't cares." Derive equations for each of the seven segments. For example, the result for segment a is


(c) Assemble gates to compute the functions a through g. In other words, synthesize logic circuits corresponding to the equations for segments a through g.

Viewing the problem in this light, we have a large combinational logic circuit that accepts four inputs and computes seven outputs. We may package this as a building block if we wish; such a seven-segment lamp driver is available as an MSI chip.

From another viewpoint, the seven-segment lamp driver is a problem in code conversion, in which we must convert a 4-bit BCD code into a 7-bit lamp driver code. A seven-function truth table with four inputs and seven outputs is a convenient way to display the code conversions; refer to Table 4-1. We may consider the truth table as representing the contents of an addressable memory: Each row of the truth table is an address, and the corresponding set of output values is the contents of the addressed memory element. Table 4-1 then represents a 16-word memory, with 7 bits in each word. The outputs (the contents of the memory) do not change, so we can realize the truth table as a 16-word by 7-bit ROM; such a ROM requires four address inputs: the BCD code.
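The memory view of the truth table is easy to model: a 16-word ROM indexed by the BCD digit, each word holding the seven segment values a through g. The bit patterns below follow the conventional seven-segment assignments; the don't-care codes 10 through 15 are arbitrarily filled with 0 here:

```python
# A 16-word x 7-bit ROM: bit 6 = segment a, bit 5 = b, ... bit 0 = g.
SEG_ROM = [
    0b1111110,  # 0: segments a b c d e f
    0b0110000,  # 1: b c
    0b1101101,  # 2: a b d e g
    0b1111001,  # 3: a b c d g
    0b0110011,  # 4: b c f g
    0b1011011,  # 5: a c d f g
    0b1011111,  # 6: a c d e f g
    0b1110000,  # 7: a b c
    0b1111111,  # 8: all seven segments
    0b1111011,  # 9: a b c d f g
] + [0] * 6     # codes 10-15: don't-cares, arbitrarily left dark

def segments(bcd_digit):
    """Address the ROM with a 4-bit BCD code; the word read out is
    the seven-segment lamp code."""
    return SEG_ROM[bcd_digit & 0xF]
```

No gates and no simplification are involved: the ROM simply stores every row of the truth table and the address selects one.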

TABLE 4-1. TRUTH TABLE FOR A BCD-TO-SEVEN-SEGMENT CODE CONVERTER


The conversion of a 7-bit ASCII code to the corresponding 12-bit Hollerith card code can be accomplished with a ROM. The ASCII code contains 128 characters; each character has a Hollerith representation as a set of holes (1's) and blanks (0's) in the 12 rows of a card column. The ROM will contain 2^7 = 128 words, each of 12 bits. Every ASCII code represents the address of one ROM word; the content is the corresponding Hollerith code. Each word of ROM contains a unique code.

A ROM may also be used to convert a valid 12-bit Hollerith code into 7-bit ASCII. This ROM has 12 address inputs and consists of 2^12 = 4,096 words of 7 bits. Of the 4,096 rows in the Hollerith-to-ASCII truth table, only 128 are relevant; the rest correspond to don't-care outputs, present in the ROM but of no interest in this code conversion.

Generating logic functions. The foregoing descriptions show that a ROM provides a general way to generate an arbitrary logic function. Every logic function has a truth-table representation. A ROM gives the function value, T or F, for every row in some canonical truth table; the ROM explicitly contains each bit of information in the full truth table. In a synthesis of logic functions using gates, we explicitly use only a portion of the truth-table's information: the product terms leading to true values of functions (or, equivalently, those leading to false values). Furthermore, we often can simplify the logic expressions to reduce the number of terms.

Such simplifications are not useful, and in fact are disadvantageous, in ROM synthesis of logic functions. Although the ROM approach is completely general, it suffers severely because the size of the ROM is doubled by each additional input variable. In logic synthesis, ROMs are best used in situations similar to code conversion, where highly encoded information must be transformed into a large number of output functions. Other, more appropriate, devices are available to support the uniform synthesis of logic functions. We will discuss these devices later in this section.
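The doubling penalty is worth quantifying: a function of n inputs and m outputs needs 2^n x m ROM bits, no matter how simple the function is.

```python
def rom_bits(n_inputs, n_outputs):
    """Bits required to realize an n-input, m-output logic function
    as a ROM: the full canonical truth table must be stored."""
    return (2 ** n_inputs) * n_outputs

print(rom_bits(4, 7))    # BCD-to-seven-segment: 112 bits
print(rom_bits(12, 7))   # Hollerith-to-ASCII: 28,672 bits
```

Adding one input doubles the ROM even if the new variable affects only one row of the truth table, which is why ROMs lose their appeal for random logic with many inputs.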

Field-Programmable Read-Only Memories

In the preceding section you learned some of the uses of the ROM in digital design. A ROM must be programmed by the manufacturer and is therefore not a suitable tool in the developmental phases of a design, nor in systems in which the read-only material must occasionally be altered. When only a few copies of a system are required, or occasional changes in the memory are required, we need permanent or semi-permanent memories that can be programmed by the designer in the laboratory. Several types of programmable memories are available.

PROM. The programmable read-only memory, or PROM, is a ROM in which the one-time writing process has been deferred to the end user. During manufacture, the bits of the PROM have microscopic metallic or polysilicon fuses that set all the bits to 1. The user can blow these fuses by selecting a given bit and then applying a pulse to a special programming pin, thereby creating a 0. The process is inexpensive and relatively easy, and PROM programming devices are readily available. In use, the access to a PROM is rapid, equivalent in speed to ROM. PROMs are also physically similar to ROMs, and the layout of the circuit board is simplified in systems that will eventually contain ROMs.

Once the PROM is programmed, it cannot be reprogrammed.

EPROM. The erasable programmable read-only memory (EPROM) is an even more flexible design based on the ROM. As its name implies, the EPROM allows the designer to erase the contents and start over. Bits are stored by electric charges that are trapped in the silicon of the chip. When erased, all the bits of the EPROM become 1's. To write a zero value into a bit, the bit receives a high voltage that temporarily makes the silicon a conductor, allowing charge to accumulate at the site of the bit. After the high voltage is removed, the trapped charge remains, essentially forever. EPROMs are programmed with PROM programmers, using techniques similar to, but more complex than, those used to program PROMs.

A transparent quartz lid covers the silicon of the EPROM chip. Erasure occurs when high-intensity ultraviolet light makes the silicon a weak conductor, allowing the trapped charge to bleed slowly away from the bits in the memory.

To be erased, the chip must be placed in an EPROM eraser and exposed to ultraviolet light for about a half-hour. The EPROM will withstand many erasures before it becomes unreliable.

The EPROM is convenient in system development because the designer may write the content of the memory, test the design and, if necessary, erase the old pattern and install a new one. The housing of microcomputer firmware is a common and important use of EPROMs, but we can use them with equal facility to generate digital logic functions. The device is slower than the PROM, but its capacity for erasure makes it a potent design tool. EPROMs are physically larger than ROMs or PROMs, and have longer access times. Sizes range from 16K (2K x 8) to 256K (32K x 8).

EEPROM. The electrically-erasable programmable read-only memory (EEPROM) may be altered without removing it from the circuit in which it is used. Sequences of voltages of 5V or greater are applied to special programming pins to permit the rewriting of selected bytes of the memory. This device and its cousins are important in designs in which in-place "writing with difficulty" is necessary - for instance in a unit housed in a remote area that must be manipulated at a distance. The structure and the pin assignment of EEPROMs are not compatible with other read-only devices, and only a meager selection of chips is available. A typical size is 16K, as exemplified by the 2K x 8-bit 2816 EEPROM.

As you have seen, the devices for "read-only memory," whether permanent or reprogrammable, are used in data and program storage, in code conversion, and even to generate random logic functions, but the first two uses of ROMs are by far the most important. We shall now describe programmable devices suitable for general logic applications in digital design.

PROGRAMMABLE LOGIC

Programmable logic provides a systematic way to generate complex logic functions.

Programmable logic devices allow the designer to create arbitrary sum-of-product functions of many variables, in a highly structured and regular manner. The term programmable means that the functions may be specified after the chip has been manufactured.

Usually the programming is accomplished by blowing (breaking) fuses along the chip's internal data lines, as is done in the PROM. The fuse is a narrow ribbon of metal or other conductor deposited during the manufacture of the chip, and it serves a role similar to the three-state gate. When the fuse is intact, it allows an input signal to proceed unimpeded; when the fuse is blown, no signal can pass and the input is disconnected from the remainder of the circuit. Unlike a three-state circuit, once the fuse is blown, its input is forever disconnected.

As you saw in Section 1, it is always possible to express an arbitrary combinational logic function in sum-of-products form. This is the starting point of the application of programmable logic. To construct an arbitrary sum-of products function, we need to be able to form the required AND (product) terms, and then form the OR of the product terms to create the appropriate sum.

The PLA

The most general programmable logic structure is the programmable logic array (PLA), which allows the programming of arbitrary products and arbitrary sums.

Within a VLSI design, the PLA is an important tool for creating complex circuits on a single chip. The PLA may also be packaged as an LSI-level integrated circuit, to be used at the level of design that we are now studying. But because of its extreme generality, requiring many inputs, many outputs, and much internal circuitry, the PLA is not widely available as a discrete integrated circuit.

Consider the programmable logic implementation of a single function of three variables

TEST = A·B + A·C + A·B·C

In this example, the programmable device must have at least three inputs and one output, and must have the capability of specifying at least three product terms within its structure. Within the device, each input passes through a noninverting and an inverting buffer, thereby providing the true and complemented forms of each input variable. FIG. 23 is a schematic of the portion of the PLA required for this example. Every vertical line represents a product term, and each horizontal crossline is an input to the AND gate that forms the product.

The horizontal line at the bottom represents the single sum of the product terms; each product term is an input to the OR gate that forms the sum. Before programming, each crossing has a fuse that determines if the input will contribute to its product or sum. In FIG. 23, each vertical line initially generates the trivial product of every input with its complement, which, of course, is identically false. The act of programming the PLA destroys the unwanted fuses. In FIG. 23, the large dots represent the remaining fuses; all the other fuses have been blown.
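A fuse-matrix model makes this structure concrete. In the sketch below (our own modeling, not a manufacturer's format), each product term lists the input connections whose fuses remain intact, and the OR plane lists which products feed the sum; it evaluates the TEST function of the example:

```python
def pla(inputs, and_plane, or_plane):
    """Evaluate one PLA output.
    inputs: dict mapping variable name to 0/1.
    and_plane: list of product terms; each term is a list of
      (name, complemented) pairs whose fuses remain intact.
    or_plane: indices of the product terms feeding the OR gate."""
    products = [
        all((not inputs[name]) if comp else inputs[name]
            for name, comp in term)
        for term in and_plane
    ]
    return any(products[i] for i in or_plane)

# TEST = A.B + A.C + A.B.C, as in the text:
AND_PLANE = [
    [("A", False), ("B", False)],                  # A.B
    [("A", False), ("C", False)],                  # A.C
    [("A", False), ("B", False), ("C", False)],    # A.B.C
]
OR_PLANE = [0, 1, 2]

print(pla({"A": 1, "B": 1, "C": 0}, AND_PLANE, OR_PLANE))   # True
```

Blowing a fuse corresponds to deleting a pair from a product term or an index from the OR plane; the model's generality mirrors the PLA's: both planes are fully programmable.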


FIG. 23. A PLA realization of TEST = A·B + A·C + A·B·C.

Usually, the designer wishes to create several functions using the same input variables. FIG. 24 is a PLA pattern for producing four functions of five input variables. This PLA is capable of receiving up to six input variables, generating up to twelve different product terms, and producing up to five different sums of these products. One input, three product terms, and one sum output are unused. The equations implemented by this PLA are:


A typical PLA that is available as an integrated circuit is the National Semiconductor DM7575, which accepts 14 inputs, supports 96 product terms, and generates 8 logic functions as outputs.


FIG. 24. A PLA realization of Eqs. (1) to (4). The sixth input, the fourth output, and AND gates 10, 11, and 12 are not used. This PLA is smaller than commercial chips.

The PROM as a Programmable Logic Device

We have already mentioned that the programmable memory devices, exemplified by the PROM, can be used to generate logic functions. We usually think of a PROM as a memory device whose n address inputs select one of 2^n words of memory. To perform this selection, the PROM must contain a complete decoding of its address inputs. Think of a truth table with the address lines as the inputs.

For a PROM, the truth table is canonical-it explicitly contains all 2^n rows.

Thus, the PROM provides all possible product terms of its inputs, whether we like it or not. Each bit of the PROM's output can represent a logic function-a column in the output section of the truth table. To generate a logic function, we must specify whether each canonical product term is to be a contributor (the value in the memory is 1) or not (the value in the memory is 0). In such applications as code conversion, this is quite useful, since we are concerned with converting all possible patterns of input bits. For random logic, the PROM is of limited attractiveness.
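The contrast with the PLA is that the PROM's AND plane is fixed and canonical: the inputs are fully decoded, and programming only chooses the stored bit for each of the 2^n rows. As an illustration (the helper names are our own), the same TEST function becomes an 8-word x 1-bit PROM:

```python
def build_prom(fn, n_inputs):
    """'Burn' a 2**n x 1-bit PROM from a truth function: one stored
    bit per canonical row of the truth table."""
    return [int(fn(*((addr >> i) & 1 for i in reversed(range(n_inputs)))))
            for addr in range(2 ** n_inputs)]

# TEST = A.B + A.C (the A.B.C term is absorbed by A.B):
TEST_PROM = build_prom(lambda a, b, c: (a and b) or (a and c), 3)
print(TEST_PROM)   # [0, 0, 0, 0, 0, 1, 1, 1]

def prom_test(a, b, c):
    """Read the PROM: the inputs form the address, the word is the value."""
    return TEST_PROM[(a << 2) | (b << 1) | c]
```

All eight rows are present whether the function needs them or not, which is exactly the property that makes the PROM wasteful for sparse random logic.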

Monolithic Memories, a leader in the development of programmable logic devices, has coined the term PLE, programmable logic element, for such applications. Manufacturers produce a limited variety of PLEs, with a fixed array of ANDs (fixed product terms) but with less than the full canonical set found in PROMs.

The PAL

The PLA generates a general sum of general products; within the limits of the design, the designer may specify the detailed structure of each product term and each sum of products. However, the great generality of the PLA makes for difficulties in its manufacture and use. The PROM (or PLE) provides a pre determined set of product terms, and the designer may specify the nature of each sum of products. This structure is useful as a memory and for logic operations requiring the full decoding of inputs, but the PROM has limited application as a tool for programmable logic.

The PAL (programmable array logic) allows the designer to specify the nature of the product terms, but the ways in which the products may be formed into sums is fixed in the chip. In practice, the PAL is by far the most useful of the trio for generating random logic functions. The programmable product array, coupled with a limited summing capability, fulfills the requirements of digital designs well. The term PAL is a registered trademark of Monolithic Memories, which originated this application. PALs are available from many manufacturers in a wide variety of useful configurations. The PAL is indeed a pal of the designer of digital circuits.

The PAL18L4, shown schematically in FIG. 25, is typical of the PALs that generate combinational logic functions. As suggested by its number, the PAL18L4 has 18 input pins and 4 output pins. The "L" signifies that the outputs are produced in low-active (T = L) form. Two of the 4 outputs, pins 18 and 19, each provide the sum of four product terms. The other two outputs, pins 17 and 20, each provide the sum of six product terms. Within the chip, each of the 18 inputs is split into its true and complemented form. Each of the twenty product terms available in this chip can contain any combination of the true and complemented forms of the 18 inputs. Fuses appear at each intersection on the product-term lines. The programming task consists of blowing the unwanted fuses.


FIG. 25. A PAL18L4. (Courtesy of Monolithic Memories, Inc.)

FIG. 26 shows an implementation of Y1, Y2, and Y3 in Eqs. (1) through (3), using the PAL18L4. Unused portions of the PAL do not appear in the figure. Y1.L, Y2.L, and Y3.L are formed as sums of products, drawn directly from the equations (the high-active Y2.H is discussed later). Each bold dot denotes a contribution to a product term, and represents a fuse that is to remain unblown. The unused product terms in a sum have been blacked in.

Inputs of either voltage polarity are easily handled, using standard mixed-logic notations. Within the chip, the PAL generates sums of products with T = H. The manufacturer's diagram shows the input buffers with the inverted voltage form of the input emerging below the non-inverted form in each buffer. If T = H at the input, the logic inversion occurs on the bottom of the two buffer outputs; if T = L at the input, we redraw the input buffer to show that the logic-inverted form emerges from the upper buffer. FIG. 26 contains illustrations of the notational changes to accommodate low-active inputs.


FIG. 26. An implementation of Eqs. (1) through (3), using the PAL18L4. Note the mixed-logic changes to the diagram.

The PAL18L4 produces low-active sum-of-products outputs. If we desire a high-active output, we mixed logicians have several choices. We may add a voltage inverter to the output signal to change its polarity-a reasonable but inelegant solution. If the PAL has an unused output and an unused input, we may feed the low-active form of our signal back into the PAL and produce the high-active form on the unused output. We may use a different PAL, such as a PAL18H4, that produces high-active outputs. Another approach, useful when one is striving to use minimal hardware, is to generate a sum-of-products form of the inverse of the desired function. The mixed logician immediately recognizes that the output represents a high-active version of the desired function

FUNCTION.H = (NOT FUNCTION).L

For instance, suppose we wish to produce the Y z of Eq. ( 2) in high-active form, using the PAL 18L4. To generate the inverse of Y z, we may plot a Karnaugh map of the function, circle the O's, and write an equation for Y :

FIG. 26 contains the resulting implementation of Y2.H.
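The double-inversion trick is easy to verify mechanically. The sketch below is a Python model, not from the text; a made-up three-input function stands in for Y2, since Eq. (2) itself is not reproduced here. It checks that programming the complement of a function into an active-low PAL output yields the high-active form of the function on the same pin.

```python
from itertools import product

def f(a, b, c):
    # Hypothetical stand-in function (NOT the book's Y2).
    return (a and not b) or (b and c)

def pal_low_active_output(term_value):
    # An ...L output pin presents the voltage inverse of the
    # internal sum-of-products term.
    return not term_value

for a, b, c in product([False, True], repeat=3):
    # Program the complement of f into the array, then read
    # the pin as a high-active (T = H) signal.
    pin = pal_low_active_output(not f(a, b, c))
    assert pin == f(a, b, c)
print("low-active output of NOT f equals high-active f for all inputs")
```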

Many PALs are available with clocked flip-flops and three-state control at their outputs. An example is the PAL16R8, shown in FIG. 27. The R8 in the chip number signifies a PAL with eight "registered" outputs. Each of the eight flip-flops in the register receives a clock signal from a single input pin, and each flip-flop's output is buffered by a three-state inverter controlled by another input pin. The 16 inputs implied by the chip number arise from 8 external input pins and 8 signals fed back from the flip-flop outputs.

PAL, PLA, and PLE Programming

Programmable logic devices are a powerful tool for the designer, providing the equivalent of complex, tedious logic and architecture within compact and relatively inexpensive integrated circuit chips. These devices must be programmed and, with the exception of reprogrammable devices such as EPROMs and certain PALs, the programming is a one-time, unalterable act. The programming requires special equipment to present the chip with a complex sequence of properly timed voltages. Modern "programmers" are costly, but they provide many necessary capabilities conveniently and reliably. Most such programming devices can accept programming data from an external source, a valuable feature when one is transmitting computer-generated programming files. The fuse patterns become the raw data for the programmer, but developing fuse patterns from diagrams such as FIG. 26 is a tedious and error-prone task.

Many programmers for PALs permit the designer to enter logic equations directly; these equations, expressed in a suitable notation, provide the input to a software translator that produces fuse patterns.


FIG. 27. A PAL16R8. (Courtesy of Monolithic Memories, Inc.)

Programmable logic devices are powerful design tools that provide some of the desirable attributes of VLSI for the chip-level designer, without the extraordinary commitment necessary to develop a custom-made VLSI chip. The size and complexity of programmable logic devices are growing rapidly: PALs with 64 inputs and 30 registered outputs, and containing more than 32,000 fuses, are available. You will read about several applications of PALs and PROMs in later sections of this book.

TIMING DEVICES

Such systems as RAMs and ROMs have rigid timing requirements that a designer must obey. In this section, we discuss two non-digital devices, the single-shot and the delay line, that help the designer to develop appropriate signals for timing.

We might use a single-shot or a delay line when we must derive a timing event of arbitrary duration, beginning at some arbitrary starting point.

For instance, in RAM reading, we must produce a timing signal for the host system that starts when the RAM read operation begins, and ends only after the read access time has elapsed. This time is independent of the system clock; the RAM's timing requirements remain the same whether the system clock is slow or fast.

The idea is simple, but the simplicity of the concept does not mean that the corresponding hardware is simple to use. Another way of describing the arbitrary starting time and duration of the operation is to say that the timing is asynchronous. Synchronous devices derive their timing from some external source, usually the system clock. Asynchronous devices are internally timed.

This is not inherently bad, but the result is that independent timings are spread throughout the system. We have relinquished central control, and for this reason alone we should avoid asynchronous devices unless absolutely necessary.

Centralized control is nice, but we cannot always have it. Memory devices, for example, require internal timing that is independent of the host system. We cannot do without devices for memory so we are forced to generate their arbitrary timing locally. The alternative is distinctly worse: we could have central control if we rigidly fixed our system clock to match the local asynchronous timing requirements. This would eliminate our most powerful hardware debugging tool, the ability to slow the central clock to zero speed, thereby freezing our system in a given condition.

The Single-Shot

The single-shot, or monostable multivibrator, is based on the electrical properties of capacitors. A capacitor stores a charge (electrons) as a function of the voltage impressed across the capacitor. The amount of stored charge q is proportional to the impressed voltage V; the proportionality constant is the capacitance C. Thus q = CV.

Capacitors require a period of time for the charge to build up or decay, and it is this behavior that allows the single shot to function as a timed delay element.

A single-shot behaves like the electrical circuit below

Normally, the single-shot switch is open and no current is flowing. The voltage at point P is Vcc, which causes the single-shot output Q to be false. When we trigger the single-shot by (digitally) closing the switch, charge begins to rush into the capacitor C. Because of this current flow through R, the voltage at point P becomes low, which in turn causes the single-shot output Q to assume a true value. As the flow of current charges the capacitor, the voltage at P slowly rises back toward Vcc. When the voltage at P reaches a certain level, the single-shot turns off, bringing output Q to false. The time during which Q is asserted is the single-shot delay time.

By choosing capacitor C and resistor R according to tables or formulas in the single-shot data sheet, we may select any desired single-shot delay time within a wide range of values. Typical delays, using the popular 96L02 Dual Single-Shot, range from about 50 nanoseconds to 10 seconds.
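The delay formula itself comes from the device's data sheet, but the underlying RC behavior can be sketched with a generic exponential-charging model. The threshold fraction k below is an illustrative assumption, not the 96L02's actual characteristic:

```python
import math

# Generic RC-charging sketch -- NOT the 96L02's data-sheet formula.
# Assume the single-shot's pulse ends when the capacitor voltage,
# charging toward Vcc through R, crosses a threshold k*Vcc (0 < k < 1).
def single_shot_delay(r_ohms, c_farads, k=0.5):
    # v(t) = Vcc * (1 - exp(-t / (R*C))); solve k*Vcc = v(t) for t
    return r_ohms * c_farads * math.log(1.0 / (1.0 - k))

# Illustration: 10 kilohms and 1000 pF with a half-Vcc threshold
t = single_shot_delay(10e3, 1000e-12)
print(f"delay = {t * 1e6:.2f} microseconds")   # about 6.93 microseconds
```

With k = 0.5 the delay is R*C*ln 2, about 0.69 R*C; a real single-shot's constant of proportionality is read from the data-sheet tables instead.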

In circuit diagrams, we draw a single-shot as a rectangle similar to a flip-flop and add the external timing capacitor and resistor (see FIG. 28).


FIG. 28. A single-shot circuit symbol.

Single-shots respond to an F -> T transition on the trigger input, so they will start up if any glitch occurs at the input. Therefore, you must be careful to drive the single-shot with a clean trigger to avoid spurious results. The value of the single-shot delay will drift somewhat, and 5 percent stability is about all you should expect.

The Delay Line

Delay lines, on the other hand, generate highly stable delays. Their stability derives from the fact that they are passive devices: they have no transistor amplifiers built into them, as do all gates, flip-flops, and so on. This explains their lack of power-supply pins; with no active amplifiers, they do not need operating power.

Delay lines are available in standard integrated circuit packages. Internally, they are constructed from a series of stable inductors and capacitors. Whatever waveform is put into the delay line will appear, after a given delay, on the output.

There is no concept of a trigger; the waveform at the input is simply reproduced after the delay.

In their most convenient form, delay lines have a series of taps (pins), often 10, that yield fractions of the nominal delay. For instance, in a 10-tap delay line with delay rated at td, the nth tap will produce a delay of td x n/10. Such taps are useful in generating the timing in dynamic RAMs, which require a sequence of accurately timed pulses of varying duration. We may obtain the pulses by sending an edge down the delay line and using an Exclusive OR gate to detect the period during which the edge has arrived at the input but not yet reached the output. For example, suppose that we need a pulse that starts 200 nanoseconds after START becomes true and lasts for 150 nanoseconds. FIG. 29 is a diagram of the circuit, using the standard delay-line symbol.


FIG. 29. A circuit for a delay line to produce a 150-nsec pulse delayed by 200 nsec.
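The edge-between-two-taps idea can be checked with a small simulation. The tap delays below (200 ns and 350 ns) match the example in the text; the helper names are ours:

```python
# Sketch of the delay-line-plus-XOR pulse trick (times in nanoseconds).
# A rising edge on START at t = 0 reappears at each tap after that
# tap's delay; XORing two taps is true only while the edge is in
# transit between them.
def tap(t_ns, delay_ns):
    # Level seen at a tap: the START edge (0 -> 1 at t = 0), delayed.
    return t_ns >= delay_ns

def pulse(t_ns, first_tap=200, second_tap=350):
    # True from 200 ns to 350 ns after the edge: a 150 ns pulse
    # delayed by 200 ns, as in FIG. 29.
    return tap(t_ns, first_tap) != tap(t_ns, second_tap)   # XOR

for t in (100, 200, 300, 349, 350, 400):
    print(t, pulse(t))
```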

Delay lines are moderately expensive but can generate delays from about 2 nanoseconds to greater than 1 microsecond. Since a delay line faithfully transmits the input waveform, you must be sure the signal at the input is clean.

Often the operating specifications of the delay line will require that the input signal be derived from a line driver, and that the output be properly terminated (see Section 12).

THE METASTABILITY PROBLEM

We began this section with a discussion of hazards, a nuisance created by the characteristics of the physical devices used to implement logical concepts. In Section 5 you will encounter other design pitfalls rooted in physical behavior -- pitfalls that arise through the interactions of several components of a design. In this section, it remains to discuss the most alarming physical problem of all: metastability. We will alert you to the problem and give some advice, but you should look to Section 12 for a more extensive treatment of this topic.

Digital devices are fundamentally analog devices that behave digitally only when stringent rules of operation are obeyed. Sequential devices contain amplifiers (gates) and feedback loops to achieve their storage properties. To assure proper operation of a sequential device, you must establish proper voltage levels at the inputs and also adhere to the setup times, hold times, and other timing specified in the data sheets. When the operational requirements are met, the device's outputs will be proper digital voltage levels, and changes in the level of the output will occur quickly and cleanly. Except during the rapid period of transition, the circuit remains in one of its stable states.

You have seen that there are difficulties associated with the RS flip-flop when one tries to move from the R = S = T input configuration to the hold configuration, in which R = S = F. The difficulties arose from the attempt to change both inputs simultaneously. As long as no more than one input is changing at a time, the sequential circuit performs well; but if the voltage levels of more than one input are allowed to change at nearly the same time, the circuit is being required to perform outside the framework of its design for digital operation, and the result may be unpleasant. For the proper operation of clocked circuits, the setup and hold times require that certain inputs must not change too near the time that the clock signal is changing.

Violation of the timing requirements of a sequential circuit may throw the circuit into a metastable state, during which the outputs may hold improper or non-digital values for an unspecified duration. In one form of metastability, the output voltage lingers for an indefinite period in the transition region between digital voltage levels, before it eventually resolves into a stable value. In another form of metastability, the output appears to be a proper digital value, but after an unpredictable interval switches to another value. Metastability can be disastrous.

In synchronous design, we sidestep the problem by never changing the inputs in the vicinity of the clock. As you will see, this allows vast simplification of the design of complex circuits. But every circuit is at some point exposed to external reality-other circuits with different clocks, unclocked or nondigital devices, and human operators, for instance. Signals from such sources are not tied to our clock and may change at any time during our clock cycle. Therefore, although we can simplify our design by using good practices, no amount of digital or analog wizardry will eliminate the problem of metastability. However, by proper design or choice of components, we may lower the probability of finding the circuit in a metastable state to a satisfactory level. In Section 12, we discuss metastability in more detail and offer guidelines for dealing with the problem.
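A standard way to quantify "a satisfactory level" is a mean time between failures (MTBF) that grows exponentially with the resolving time allowed to the synchronizer. Below is a sketch using the common exponential model; the device constants are made-up placeholders (real values of tau and T_w come from manufacturer characterization data):

```python
import math

# Rough MTBF sketch for a synchronizer flip-flop, using the widely
# quoted exponential model:
#   MTBF = exp(t_resolve / tau) / (T_w * f_clk * f_data)
# tau (resolving time constant) and T_w (metastability window) are
# device-dependent; the defaults below are illustrative, not real data.
def synchronizer_mtbf(t_resolve, tau=1e-9, t_w=1e-10, f_clk=10e6, f_data=1e5):
    return math.exp(t_resolve / tau) / (t_w * f_clk * f_data)

# Each extra tau of resolving time multiplies the MTBF by e (about 2.72)
for t_r in (10e-9, 20e-9, 30e-9):
    print(f"t_r = {t_r * 1e9:.0f} ns  ->  MTBF = {synchronizer_mtbf(t_r):.3g} s")
```

The exponential dependence is why waiting one extra clock period in a multi-stage synchronizer can turn failures per hour into failures per century.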

CONCLUSION

You have completed Part I of this guide, in which we have explored the fundamental tools underlying digital design. From basic combinational circuits we have developed a set of building blocks that range from simple logic gates to complex ALUs, from flip-flops to large memories. Now you are ready to begin the exciting activity of digital design. Part II introduces you to this process.

RESOURCES

BLAKESLEE, THOMAS R., Digital Design with Standard MSI and LSI, 2nd ed. John Wiley & Sons, New York, 1979. Sound design practices.

DIETMEYER, DONALD L., Logic Design of Digital Systems, 2nd ed. Allyn & Bacon, Boston, 1978. Section 12: hazards. Section 13: traditional asynchronous design.

ERCEGOVAC, MILOS D., and TOMAS LANG, Digital Systems and Hardware/Firmware Algorithms. John Wiley & Sons, New York, 1985. Good treatment of sequential systems.

FLETCHER, WILLIAM I., An Engineering Approach to Digital Design. Prentice-Hall, Englewood Cliffs, N.J., 1980. Section 5 contains a good discussion of flip-flops.

HILL, FREDERICK J., and GERALD R. PETERSON, Digital Logic and Microprocessors. John Wiley & Sons, New York, 1984.

HILL, FREDERICK J., and GERALD R. PETERSON, Introduction to Switching Theory and Logical Design, 3rd ed. John Wiley & Sons, New York, 1981. Good standard treatment of sequential circuits.

HWANG, KAI, Computer Arithmetic-Principles, Architecture, and Design. John Wiley & Sons, New York, 1979.

KLINGMAN, EDWIN E., Microprocessor System Design. Vol. 2, Micro-coding, Array Logic, and Architectural Design. Prentice-Hall, Englewood Cliffs, N.J., 1982. Bit slices and programmable logic.

MANO, M. MORRIS, Digital Design. Prentice-Hall, Englewood Cliffs, N.J., 1984.

MICK, JOHN, and JAMES BRICK, Bit-Slice Microprocessor Design. McGraw-Hill Book Co., New York, 1980. A collection of design notes for the Advanced Micro Devices 2900 bit-slice family. This book is useful far beyond the Am2900 chips.

MYERS, GLENFORD J., Digital System Design with LSI Bit-Slice Logic. John Wiley & Sons, New York, 1980.

WIATROWSKI, CLAUDE A., and CHARLES H. HOUSE, Logic Circuits and Microcomputer Systems. McGraw-Hill Book Co., New York, 1980.

Data Books

Am29300 Family Handbook. Advanced Micro Devices, 901 Thompson Place, P.O. Box 3453, Sunnyvale, Calif. 94088. High-performance 32-bit building blocks.

Bipolar Microprocessor Logic and Interface. Advanced Micro Devices, 901 Thompson Place, P.O. Box 3453, Sunnyvale, Calif. 94088. Data book for the AM2900 family and its support devices.

Bipolar/MOS Memories. Advanced Micro Devices, 901 Thompson Place, P.O. Box 3453, Sunnyvale, Calif. 94088. Data book.

FAST: Fairchild Advanced Schottky TTL. Fairchild Camera and Instrument Corporation, Digital Products Division, South Portland, Maine. Data book.

LSI Databook. Monolithic Memories, 2175 Mission College Blvd., Santa Clara, Calif. 95054. PALs, memory products, arithmetic units, system building blocks.

Memory Components Handbook. Intel Corp., Literature Department, 3065 Bowers Avenue, Santa Clara, Calif. 95051.

PAL Programmable Array Logic Handbook. Monolithic Memories, 2175 Mission College Blvd., Santa Clara, Calif. 95054. Authoritative practical treatise on PALs and their uses.

Systems Design Handbook, 2nd ed. Monolithic Memories, 2175 Mission College Blvd., Santa Clara, Calif. 95054, 1985. Good discussion of DRAMs and their control in Section 10. Good discussion of multiplication algorithms in Section 8.

The TTL Data Book. Texas Instruments, P.O. Box 225012, Dallas, Tex. 75265.

QUIZ & EXERCISES

1. (a) Show that the following combinational circuit contains a hazard.


(b) Write the logic equation corresponding to the circuit, and draw a K-map with circles corresponding to the circuit.

(c) Most of the time our design techniques will nullify the bad effects of hazards; nevertheless, suppose that you must eliminate the above hazard from the circuit.

Starting with the K-map you drew for part (b), produce a hazard-free map by making certain that adjacent 1's share at least one circle. Write the logic equation and draw the hazard-free circuit.

(d) Prove, by using a timing diagram, that your new circuit is free of hazards.

2. Assume that each combinational circuit element has a propagation delay of tp.

What is the total (worst-case) propagation delay in the following circuit?


3. In Fig. 3-5, the circuit for the enabled multiplexer imposes the enabling operation on each of the initial AND gates, forcing them to have three inputs. Suggest why, in Fig. 3-5, the enabling operation was not designed as a single final AND gate with only two inputs.

4. A circuit consisting of a closed loop of an odd number of inverters (greater than one) can function as an oscillator. Assume that the propagation delay through an inverter is 10 nanoseconds.

(a) With a timing diagram, show the oscillatory behavior of a loop of three inverters.

(b) The oscillator consisting of a loop with just a single inverter is not stable.

Speculate about why this circuit is unsatisfactory.

5. What is feedback in digital design? Draw a gate circuit that exhibits feedback with memory.

6. Why are combinational methods inadequate to deal with sequential circuits?

7. Explain "1's catching." Why is this behavior usually a disadvantage in digital design?

8. Explain the terms asynchronous and synchronous.

9. Show that the asynchronous RS flip-flop has two stable states.

10. Why do we usually avoid asynchronous flip-flops in digital design?

11. What is switch debouncing? Why can we usually not use a mechanical switch signal directly in a digital design? Draw a switch-debouncing circuit.

12. Using a timing diagram, analyze the behavior of the switch debouncer shown in FIG. 8a or 8b.

13. Assume that two (noisy) mechanical switches generate the DATA and HOLD signals for the latch in FIG. 4. Is there any sequence of switch closings and openings that would yield a clean output signal at Y?

14. The RS flip-flop exhibits anomalous output behavior if both R and S are true.

(a) What is the anomaly? (b) Does the anomaly occur in outputs X and Q of FIG. 6? (c) In FIG. 6, assume that R = S = T. What is the value of Q if both signals become false, but R becomes false slightly before S? (d) Under similar conditions, what value does Q assume after precisely simultaneous T -> F transitions of R and S?

15. What is an edge-driven flip-flop? Why is it desirable? What is the defect in the master-slave flip-flop? What is a pure edge-driven flip-flop? What kind of flip-flops do we use in digital design?

16. Consider an edge-driven JK flip-flop such as the 74LS109 with the direct set input and the K input asserted (true), and the direct clear input and the J input negated (false). What will be the flip-flop's output shortly after the next active clock edge arrives?

17. Suppose your design requires a 74LS109 JK flip-flop with output FLG. You wish to set the flip-flop with a logic variable SETFLG and clear the flip-flop with a variable CLRFLG. In the design, you find that the control inputs are available as SETFLG.L and CLRFLG.H. Draw the desired circuit with a 74LS109 flip-flop, using no inverters. Draw the mixed-logic diagram.

18. Repeat Exercise 17 under the condition that both SETFLG and CLRFLG are available with T = L, and inverters are available.

19. The text describes three cases in which the JK flip-flop may be used to store a bit. Two of these cases are (a) clearing, followed by later setting if the data bit is true; (b) setting, followed by later clearing if the data bit is false. Verify the text's rules for implementing these two cases.

20. FIG. 30 shows the structure of a commercial 74LS74 D flip-flop. The notation is mixed logic; the asynchronous set and clear inputs have been deleted to simplify the diagram. In this exercise, you will demonstrate that the circuit does indeed exhibit pure edge-triggered behavior-it copies and stores its D input only when the clock fires, and it changes its output only as a result of the clock transition.

Assume (reasonably) that the propagation delay tpr is the same through each gate in the circuit. To demonstrate the flip-flop's behavior, you will start from a known configuration of gate inputs and outputs and will manually simulate the effects of a given change in one input, in time units of tpr , until no further changes occur in the circuit. Using the mixed-logic diagram, you may choose to deal with either logical variables or voltage signals. In the explanation below, we use logic rather than voltage.

Start, for instance, with both Q outputs, the D input, and the clock input all false, and establish the initial conditions of gate outputs A through G. Tabulate the values of all signals so that you will have a detailed record. Make the D input true and follow any changes in gate outputs until the circuit stabilizes. (You should, of course, observe that the outputs are unaffected by this activity.) Then, from this new configuration, bring the D input back to false and verify that the outputs do not change as the circuit stabilizes. Once again, bring the D input true to set the stage for some clocked activity. Make the clock input true and watch the circuit accept and store the value on the D input. Continue in this manner with other sequences of changes at the input.

21. Do you want to observe metastability in action? In the preceding simulation of the actions of the D flip-flop, you changed only one input at a time and observed the effect on the internal structure of the circuit. Changing both the input D and clock input at the same time is a violation of the flip-flop's setup specifications.


FIG. 30. A 74LS74 D flip-flop without an asynchronous set and clear.

Start at the same point as in the previous exercise, with the input D and clock input false, and with both Q outputs false. Set both inputs true simultaneously and simulate the behavior of the circuit. What behavior do you observe on the Q outputs?

22. FIG. 31 is a mixed-logic diagram of the internal structure of a commercial 74LS109 JK flip-flop, with the asynchronous set and clear inputs deleted to simplify the diagram. Perform a hand simulation similar to that of Exercise 20 to show the edge-triggered operation of the JK flip-flop.

23. What is the difference between the names used for inputs and outputs inside a mixed-logic circuit symbol and the names appearing outside the symbol?

24. Draw an efficient mixed-logic circuit in which you use a 74LS109 JK flip-flop to synchronize a signal TIME*.L. On your diagram, call the synchronized logic variable TIME.SYNC.


FIG. 31. A 74LS109 JK flip-flop without an asynchronous set and clear.

25. There are four possible transitions Q(n) to Q(n+1) for a clocked flip-flop output: 0 -> 0, 0 -> 1, 1 -> 0, and 1 -> 1. These transitions are given the names t0, tα, tβ, and t1, respectively. Consider the ways in which we can make a D flip-flop and a JK flip-flop execute each of these transitions. Fill in the missing elements in the following table:

[In each case there will be two ways that the JK flip-flop can execute the transition.

For instance, the 0 -> 0 (t0) transition occurs by clearing the flip-flop to 0 (having J = 0, K = 1), or by holding the previous 0 (having J = 0, K = 0). These cases give rise to the X (don't-care) entry in the table.]

26. Compare the asynchronous RS flip-flop and the synchronous JK, D, and enabled D flip-flops as to their best uses in digital design.

27. Two types of clocked flip-flop behavior that are occasionally useful are the T (toggle) and the SOC (set overrides clear) flip-flop modes. A toggle flip-flop changes its output Q only when its input TOG is true at the time of the clock edge. A SOC flip-flop behaves like a clocked RS flip-flop except that it ignores the value of input R whenever input S is true. Write excitation tables defining each type.

28. By means of external gates, convert the 74LS109 flip-flop into a type T (toggle) and a type SOC (set overrides clear) flip-flop.

29. By analogy with FIG. 12, construct a type T (toggle) flip-flop from a D flip-flop.

30. What is a register? How does it differ from a flip-flop?

31. Construct synchronous modulo-2, modulo-4, and modulo-8 counters using: (a) D flip-flops.

(b) JK flip-flops.

(c) T (toggle) flip-flops.

32. Repeat Exercise 31 with ripple counters instead of synchronous counters.

33. For a 4-bit ripple counter, demonstrate how the output ripple can produce hazards in circuits that receive the outputs.

34. Using a TTL data book, compare the characteristics of the 74LS160, 74LS161, 74LS162, and 74LS163 counters.

35. Use two 74LS163 counters to build a divide-by-24 circuit. The output of your circuit should be true during 1 of every 24 clock periods. This and similar circuits are frequency dividers.

36. Using two 74LS163 counters and any required gates, design a circuit that functions as an 8-bit counter or as an 8-bit multiply-by-2 circuit. The circuit should be controlled by two control signals, according to the following table:

37. There are many special counting sequences that are of some interest in digital design. The binary counter produces the sequence of binary integers. The gray code counter produces a sequence in which exactly one bit changes in moving from one element of the sequence to the next. For a 2-bit counter, the gray code is 00, 01, 11, 10. (Where have you seen this sequence in this book?) Build a series of 2-bit gray code counters using the following approaches: (a) Use logic gates to compute the inputs to D flip-flops.

(b) Use multiplexers to look up the inputs to D flip-flops.

(c) Use logic gates to compute the inputs to JK flip-flops.

(d) Use multiplexers to look up the inputs to JK flip-flops.

38. The moebius counter produces another special sequence. For N bits numbered CN ... C1, each clock performs CN <- NOT C1, and Ck <- Ck+1 when k = N-1, ..., 1. (a) Design a 4-bit moebius counter, using JK flip-flops as the storage elements.

(b) Design a 4-bit moebius counter using a shift register as the basic storage element.

(c) How many elements are in an N-bit moebius sequence that begins with O? Determine the answer empirically.
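The counting rule of Exercise 38 is easy to simulate, and such a simulation is one way to attack part (c) empirically. A sketch, with our own bit-list representation:

```python
# Simulation of the moebius (Johnson) counter rule of Exercise 38:
# bits are numbered C_N ... C_1; on each clock, C_N <- NOT C_1 and
# C_k <- C_{k+1} for k = N-1, ..., 1.
def moebius_step(bits):
    # bits[0] is C_N (leftmost), bits[-1] is C_1 (rightmost)
    return [1 - bits[-1]] + bits[:-1]

state = [0, 0, 0, 0]            # 4-bit counter starting at 0000
seen = []
while True:
    seen.append(state)
    state = moebius_step(state)
    if state == seen[0]:        # back at the start: one full cycle
        break
print(len(seen), "states:", seen)
```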

39. Consider a 74LS163 4-bit Programmable Binary Counter used in a mixed-logic circuit with data outputs and inputs expressed as T = L. (a) What are the logical effects of performing the 74LS163 count, load, and clear operations with such a circuit? (b) With the T = L interpretation of the data signals, can we still cascade 74LS163 chips to provide larger counters? (c) Why are we not free to alter the voltage representations of the LD, CLR, CEP, CET, or TC signals?

40. The programmable counters in the 74LS160 through 74LS163 series may act as enabled D registers. In such an application, what, if anything, should you do with the clear and count control inputs?

41. From a TTL data book, find an example of: (a) A serial-in, serial-out shift register.

(b) A serial-in, parallel-out shift register.

(c) A parallel-in, serial-out shift register.

42. Use three 74LS194 shift registers to implement 12-bit shift registers with the following capabilities: (a) Left shift with serial input; parallel output.

(b) Right shift with serial input; parallel load; serial output.

(c) Full left and right shifts with serial inputs; parallel load; parallel and serial outputs.

43. Use gates or multiplexers to modify the 74LS194 to include a fifth mode of operation.

This new mode will preserve the most significant (leftmost) bit during a right shift; in other words, after the shift, the two leftmost bits will be the same. This is called an arithmetic right shift-useful in computing with signed two's-complement numbers.

44. Describe the principal characteristics of the RAM, ROM, PROM, and EPROM.

45. Describe the principal characteristics and applications of the PLA, PLE, and PAL.

46. Consider two static RAMs, each with a capacity of 16K (2^14) bits, one RAM organized in a 16K x 1 configuration, the other in a 4K x 4 format. Assume that each type of RAM has two power supply pins, one read-write-select pin, one chip select pin, and one three-state-output-enable pin. Also assume that each data bit has a single bus-oriented pin for input and output. Determine the smallest number of pins in each type of RAM organization.

47. How do static and dynamic RAMs differ? What advantages do dynamic RAMs offer? What disadvantages?

48. Design PROMs that realize the following sets of logic functions:

(a) X = A·B·C + A·B·C + A·B·C + A·B·C
Y = A·B·C + A·B·C + A·B·C
Z = A·B·C + A·B·C + A·B·C

(b) X = A·B + A·C + A·C
Y = A·B + B·C + A·B·C
Z = B·C + B·C + A

49. Design PLAs that realize the sets of logic functions in Exercise 48.

50. Synthesize the logic equations in Exercise 48, using a PAL18L4. For each set of equations, use X and Y with T = L, and Z with T = H.

51. The PAL16R8, whose functions are shown in FIG. 27, produces T = L outputs in a natural way. A proper mixed-logic diagram for an output stage of this PAL shows, for emphasis, each signal and its polarity. If a T = H output is required, one useful technique is to produce the inverse of the required flip-flop input. Develop a standard mixed-logic diagram for an output stage of this PAL that produces an output having T = H.

52. The following prescription will convert an n-bit binary number into an n-bit gray code (bit n is the most significant bit):

gray(n) = binary(n)
gray(k) = binary(k) XOR binary(k+1)   (k = n-1, ..., 2, 1)

(a) Tabulate the 5-bit binary and 5-bit gray codes.

(b) Design a PROM that converts 5-bit binary numbers into 5-bit gray codes.
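The prescription of Exercise 52 collapses, for an integer n, to the familiar shift-and-XOR computation; a quick sketch:

```python
# Each gray bit is the XOR of the corresponding binary bit with the
# next-higher binary bit, which for an integer n is just n ^ (n >> 1).
def binary_to_gray(n):
    return n ^ (n >> 1)

print([format(binary_to_gray(n), "02b") for n in range(4)])
# 2-bit sequence: 00, 01, 11, 10 -- exactly the sequence of Exercise 37
```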

53. When k = 1, 2, ..., n, bit k of an n-bit binary number is equal to the Exclusive OR of the corresponding gray code bits from k through n (n is the most significant bit). That is,

binary(k) = gray(k) XOR gray(k+1) XOR ... XOR gray(n)

(a) Tabulate the 4-bit gray code and the 4-bit binary code.

(b) Design a PLA that converts a 4-bit gray code into a binary number.

54. Under what circumstances do we use a single-shot in digital design? How is the delay period for the single-shot established?

55. Refer to a data sheet for the 96L02 Dual Single-Shot. Determine suitable values for the resistor and capacitor to provide delays of: (a) 100 nsec (b) 1 msec (c) 1 sec

56. Why does a delay line not require a power supply?

57. Using the delay line in FIG. 29, construct circuits that will do the following: (a) Delay a signal HURRY by 400 nsec.

(b) Assert a signal BINGO for 100 nsec, beginning when the signal GOMANGO becomes true.
