Binary Addition

The full adder. We assume that you are familiar with the process of binary addition and the representation of numbers in the two's-complement notation. For each bit position, the truth table defining the addition process is given as Table 3-2. A and B are the bits to be added, CIN is the carry bit generated by the previous bit position, SUM is the sum bit for the current bit position, and COUT is the carry generated in the current bit position. A device for summing three bits in this manner is called a full adder. (A similar circuit without the CIN input is called a half adder.)

TABLE 3-2. TRUTH TABLE FOR BINARY ADDITION

A B CIN | SUM COUT
0 0  0  |  0    0
0 0  1  |  1    0
0 1  0  |  1    0
0 1  1  |  0    1
1 0  0  |  1    0
1 0  1  |  0    1
1 1  0  |  0    1
1 1  1  |  1    1

The full-adder truth table yields Boolean equations for the sum and carry bits:

SUM = A (+) B (+) CIN
COUT = A·B + A·CIN + B·CIN

You may derive the simplification of COUT from the K-map. To perform addition on arrays of bits representing unsigned binary numbers, we may connect full adders together as in FIG. 19. As a concrete example, let's add two 3-bit binary numbers A and B, where A = 101 and B = 110. The result of the binary addition is 1011. The corresponding values that would be present on the wires of the hardware are shown in FIG. 20.
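The adder wiring of FIG. 19 and the worked example of FIG. 20 can be captured in a short behavioral sketch. The following Python is our own illustration, not part of the text; the function names, the least-significant-bit-first list ordering, and the final check are all our conventions:

def full_adder(a, b, cin):
    """One bit position: return (SUM, COUT) per Table 3-2."""
    s = a ^ b ^ cin                          # SUM = A (+) B (+) CIN
    cout = (a & b) | (a & cin) | (b & cin)   # COUT = A·B + A·CIN + B·CIN
    return s, cout

def ripple_carry_add(a_bits, b_bits, cin=0):
    """Add equal-length bit lists (least-significant bit first), rippling the carry."""
    sum_bits, carry = [], cin
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)   # each stage's COUT feeds the next CIN
        sum_bits.append(s)
    return sum_bits, carry

# The example of FIG. 20: A = 101, B = 110 (written MSB first in the text).
a = [1, 0, 1]                      # 101, least-significant bit first
b = [0, 1, 1]                      # 110, least-significant bit first
s, cout = ripple_carry_add(a, b)
print(cout, s[::-1])               # prints 1 [0, 1, 1]: the result 1011

The loop makes the serial nature of the scheme explicit: stage i cannot finish until stage i-1 has handed it a carry, which is exactly the ripple effect discussed next.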
This method of connecting full adders is called the ripple-carry configuration, since stage 0 must produce its output before stage 1 can become stable. After stage 1 becomes stable, stage 2 will begin to develop its stable outputs. In other words, the carry does indeed ripple down the chain of adders. This is the simplest but slowest way to perform binary addition. Shortly we will look at ways of speeding up the process. Cascading single-bit full adders is not a particularly useful way to perform addition. In digital design we need to add numbers whose binary representations span several bits, and we wish to have building blocks suited to this task. The 74LS283 Four-Bit Full Adder is a useful MSI chip constructed with four full adders packaged as an integral, interconnected unit. The device accepts two 4-bit inputs and a single carry-in bit CIN into the low-order bit position; it produces a 4-bit sum and a carry-out bit COUT from the most significant bit position. By feeding the COUT of the chip into the CIN of the next-most-significant stage, it is easy to produce binary adders for any reasonable word length. In the typical application, the CIN to bit 0 would be forced to be false, representing numeric 0. In FIG. 21, we show the data paths for a 12-bit full adder composed of three 4-bit full adders.

Signed arithmetic. The multibit full adder circuit of FIG. 21 does binary addition on 12-bit positive numbers. If the inputs A and B represent signed integers in the two's-complement notation, the circuit of FIG. 21 can perform signed arithmetic. In the two's-complement notation, the leftmost bit represents the sign of the number, and so the circuit shown in FIG. 21 can handle 11-bit integers plus a sign.
When the circuit receives two integers, it produces the (signed) sum: A PLUS B. The circuit performs subtraction if the B input receives the two's complement of the subtrahend: A MINUS B = A PLUS (MINUS B). Incrementing and decrementing are useful special cases: A PLUS 1, and A MINUS 1 = A PLUS (MINUS 1). Similar results occur if the A and B inputs are represented in one's-complement form. The same circuit performs addition on positive integers and on signed integers represented in one's- or two's-complement notation. The hardware did not change; only the interpretation of the data changed.

The Arithmetic Logic Unit

This building block, as its name implies, combines the logic capability of the universal logic circuit with a general set of binary arithmetic operations built around cascaded full adders. With the multibit full adder, we may produce arithmetic operations such as subtraction and incrementing only by manipulating the input data; the adder circuit itself only adds. If we are to have a general arithmetic capability as a building block, such special preparation of the input data is inappropriate; the arithmetic unit should allow us to select an operation. This situation is similar to the one that led us to the universal logic circuit for performing arbitrary logic operations on its inputs. For operands A and B, each consisting of several bits, some useful types of binary arithmetic operations are:

Addition: A PLUS B
Subtraction: A MINUS B
Incrementing: A PLUS 1
Decrementing: A MINUS 1
Negation: MINUS A

To supplement operations of this type, we might wish to have a source of special constants, such as 0, plus 1, minus 1, and so on. It would be nice to include multiplication and division in our list, but these operations prove to be quite complex. Although some LSI integrated circuit chips perform multiplication and division, we will exclude these operations from the present discussion. We might wish to include operations such as B MINUS A, MINUS B, A PLUS A, and so on, close relatives of the basic operations given above. In any event, it appears that the number of basic arithmetic operations in our list will not exceed 16, so a 4-bit control input would suffice to select any desired operation. Aiming toward a 4-bit arithmetic unit, we would have two 4-bit data inputs, a 4-bit output for the result, an input for carry-in and an output for carry-out, and a 4-bit control input to select the operation. Can we combine these arithmetic operations with the logic capability of the universal logic implementer to produce an arithmetic logic unit? Our universal logic circuit requires four control inputs to specify a code for any of its 16 logic functions. To include the arithmetic operations within a similar control structure will require one additional control bit, producing a 5-bit code. We may use this fifth bit to separate the 32 possible operations into two groups: the 16 logic operations and 16 operations that are arithmetic in nature. An arithmetic logic unit (ALU) circuit would provide a nice building block for our bag of design tools, and there are a number of integrated circuit chips that approximate the structure developed above. The original chip of this type is the 74LS181 Four-Bit Arithmetic Logic Unit. It has five control inputs and provides the 16 logic functions of two variables, operating simultaneously on each pair of bits. It also provides 16 other operations that have an arithmetic character.
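Before looking closer at the 74LS181, it is worth seeing the A MINUS B = A PLUS (MINUS B) trick in miniature. This sketch reuses ripple_carry_add from the earlier example; the bit ordering and names remain our own conventions:

def twos_complement(bits):
    """MINUS A: invert every bit, then add 1 (the carry-out is discarded)."""
    inverted = [1 - bit for bit in bits]
    one = [1] + [0] * (len(bits) - 1)
    neg, _ = ripple_carry_add(inverted, one)
    return neg

# A MINUS B on 4-bit signed values: 6 - 5 = 1. The adder hardware is
# unchanged; only the preparation of the B input differs.
a = [0, 1, 1, 0]                               # 0110 = +6, LSB first
b = [1, 0, 1, 0]                               # 0101 = +5, LSB first
diff, _ = ripple_carry_add(a, twos_complement(b))
print(diff[::-1])                              # prints [0, 0, 0, 1]: 0001 = +1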
The 74LS181 does not have all the arithmetic operations in our wish list, but it is capable of adding, subtracting, incrementing, and decrementing numbers in the two's-complement representation. Several of the "arithmetic" operations of the 74LS181 are somewhat bizarre and of little interest to us, but they do no harm. The 74LS181 allows the cascading of 4-bit chips to provide arithmetic capability for long words. The arrangement of the data paths is the same as in FIG. 21 for the full adder. Most of the building blocks described in this section are tools for performing a single useful function, such as decoding or multiplexing. The ALU is a step toward more powerful LSI building blocks that perform a variety of functions on a small number of bits, under the control of a set of input signals that we may view as an operation code. Such building blocks are frequently called bit slices, alluding to their ability to be ganged together to process larger numbers of bits. Most bit-slice devices have internal registers and are designed to support the basic operations required in modern computer processors. You will encounter additional bit-slice components in Section 4 and later in this book.

Speeding Up the Addition

Bit-slice circuits such as the 74LS283 Four-Bit Adder and the 74LS181 Arithmetic Logic Unit are helpful digital building blocks, but if they depended on the simple ripple-carry scheme for binary addition they would be very slow. Binary addition is a combinational process. You know that, at least in theory, any combinational process can be expressed as a truth table and implemented as a two-level sum-of-products function. This approach has only limited practical value in binary arithmetic, since the truth tables for a multibit sum become too large to manage. For instance, the truth table for a 12-bit sum has 24 input variables (25 if we allow for separately specifying the initial carry-in to bit position 0). Truth tables are a way of specifying in detail the outputs for each combination of input values. In non-arithmetic work we can usually find a simple repetitive pattern of one or two bits that serves as a model for the behavior of the entire circuit, and we can express the repeating function as a small truth table or as an equation. This works well in logic operations, since for each bit the result of an operation depends only on the data entered for that bit, and not on the data in adjacent or more distant bits. Unfortunately, arithmetic does not have this simple property, because of the complex way in which the carry bits affect the result. So two-level binary addition, although desirable because of its speed, is intractable when there are more than a few bits. Within a small bit slice, however, it is sometimes feasible to produce two-level addition; for instance, 4-bit data inputs yield five 9-input truth tables, for the 4 bits of the sum and the single carry-out bit. Each of these truth tables has 2^9 = 512 rows, painful but not impossible to produce if the rewards are great enough. But you can see that this is hardly a promising general approach. Ripple carry is a serial method, slow and simple; two-level circuits are fully parallel, fast but difficult. We need an intermediate technique that provides some parallelism with a reasonable effort. A widely used approach is to cast the problem of addition in terms of carry-generate and carry-propagate functions. For the moment, consider a one-bit full adder with inputs Ai, Bi, and Ci, and outputs Si and Ci+1.
We will focus on some properties of the data inputs Ai and Bi. We introduce a carry-generate function Gi that is true only when we can guarantee that the data inputs will generate a carry-out. We introduce a carry-propagate function Pi that is true only when a carry-in will be propagated as an identical carry-out. For a 1-bit sum, the truth tables for Gi and Pi are:

Ai Bi | Gi        Ai Bi | Pi
0  0  | 0         0  0  | 0
0  1  | 0         0  1  | 1
1  0  | 0         1  0  | 1
1  1  | 1         1  1  | 0
Using these functions, we may express the carry-out and sum:

Ci+1 = Gi + Pi·Ci    (4)
Si = Pi (+) Ci    (5)
(To verify the equation for Si, you may wish to refer to Table 3-2, our original definition of the full adder.) These equations express the sum and carry-out in terms of just the generate and propagate operators and the carry-in, an important property that we will use when we extend these concepts to bit-slice adders. With these equations, we may implement multibit full adders, but the ripple-carry effect is still present, since each bit's carry-in depends on the preceding bit's carry-out. However, we may expand the equation for each carry in terms of the equations for less-significant bits, to achieve a degree of carry look-ahead. For instance:

C1 = G0 + P0·C0    (6)
C2 = G1 + P1·(G0 + P0·C0)    (7)

We can derive similar equations (of increasing complexity) for C3 and C4. Again, each equation involves only the generate and propagate operators and the original carry-in. When the generate and propagate operators apply to one-bit slices:

Gi = Ai·Bi    (8)
Pi = Ai (+) Bi    (9)

These equations involve only the arithmetic data inputs for the ith one-bit unit. By substituting and simplifying, we may derive the following equations for the carry bits:

C1 = G0 + P0·C0
C2 = G1 + P1·G0 + P1·P0·C0
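A behavioral sketch of a 4-bit look-ahead slice built from these equations may help. The expansions for C3 and C4 below follow the same pattern as Eqs. (6) and (7); the text leaves their derivation as exercises, so treat the forms here as one possible answer, and the function name as our own:

def carry_lookahead_4(a, b, c0):
    """4-bit slice with look-ahead carries; a, b are 4-bit lists, LSB first."""
    g = [a[i] & b[i] for i in range(4)]    # Eq. (8): Gi = Ai·Bi
    p = [a[i] ^ b[i] for i in range(4)]    # Eq. (9): Pi = Ai (+) Bi
    # Each carry is a two-level function of the g's, p's, and C0 alone:
    c1 = g[0] | (p[0] & c0)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0)
    c3 = (g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0])
          | (p[2] & p[1] & p[0] & c0))
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c0))
    sums = [p[0] ^ c0, p[1] ^ c1, p[2] ^ c2, p[3] ^ c3]   # Eq. (5)
    return sums, c4

Notice that no carry waits on a neighboring stage: every carry is computed directly from the data inputs and the original C0.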
These equations yield two-level results. The expansions of C3 and C4 are lengthier, and will be left as homework problems. Within a bit-slice chip, for instance the 74LS181, these equations could be used for high-speed implementations of the sum outputs S0 through S3 and the carry-out bit C4. If we are building a large adder or ALU from smaller bit slices, we are still faced with the ripple-carry problem across the boundaries of each chip, even though within each chip the carry-out is being computed rapidly. For instance, in FIG. 21, the most-significant carry-out cannot be computed until its corresponding carry-in (into bit 8) is stable, which in turn must await the stabilization of the carry-in to bit 4. We have carry look-ahead within the chips, but not among the chips. The concept of carry generate and propagate, as typified by Eqs. (6) and (7), may be extended to larger bit slices. The 74LS181, in addition to producing the carry-out C4, also produces two outputs G and P that are the block analogs of our one-bit Gi and Pi. The 74LS181's G output is true whenever the data inputs assure that its carry-out is true; the P output is true only when the data inputs are such that the block carry-out is the same as the block carry-in. These G and P functions are considerably more complex than our one-bit Eqs. (8) and (9), but still involve only the arithmetic data inputs for the chip.
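Here is a sketch of such block generate and propagate outputs, stated in terms of the one-bit g's and p's. These expressions are our reading of the definitions just given, not the 74LS181's data-sheet equations:

def block_g_p(a4, b4):
    """Block G and P for a 4-bit slice (a4, b4 are 4-bit lists, LSB first)."""
    g = [a4[i] & b4[i] for i in range(4)]
    p = [a4[i] ^ b4[i] for i in range(4)]
    # G: the data inputs alone guarantee a carry out of the block.
    G = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
         | (p[3] & p[2] & p[1] & g[0]))
    # P: a carry into the block emerges unchanged as the block carry-out.
    P = p[3] & p[2] & p[1] & p[0]
    return G, P

print(block_g_p([1, 1, 1, 1], [0, 0, 0, 0]))   # (0, 1): pure propagate
print(block_g_p([1, 1, 1, 1], [1, 0, 0, 0]))   # (1, 0): generates a carry

With G and P in hand, a look-ahead circuit can compute each block's carry-in as G + P·C0 terms, without waiting for ripple; that is exactly the role of the look-ahead box in FIG. 22 below.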
FIG. 22 is the circuit for a 12-bit adder constructed with 74LS181 ALUs and a block-carry look-ahead circuit. Instead of relying on each 74LS181 to send its computed carry-out on to the next-most-significant 74LS181, we send the G and P outputs to the block-carry look-ahead box. The look-ahead box is a combinational circuit that accepts all the G's and P's and the initial carry-in, and simultaneously computes all the carry-outs that must be sent to the 74LS181s. The more significant 74LS181 chips do not have to wait for their carry-in signals to ripple in from lower stages. We may build the look-ahead box from equations such as the following, which can be inferred from the intuitive meaning of the generate and propagate operators in FIG. 22:

C4 = G3 + P3·C0
C8 = G7 + P7·G3 + P7·P3·C0

Integrated circuits that perform the block-carry look-ahead functions are available. The 74LS182 Look-Ahead Carry Generator supports the look-ahead process in up to four 74LS181 chips. Furthermore, the 74LS182 produces its own version of G and P, so that look-ahead circuits of more than 16 bits may be constructed by adding levels of 74LS182s. Two levels of 74LS182 will support 64-bit addition. Each new level of look-ahead circuitry increases the time required for the adder outputs to stabilize, but only by about 10 nanoseconds. Since the 74LS181 requires about 20 nanoseconds to perform a binary addition and produce its generate and propagate outputs, the 12-bit adder in FIG. 22 requires about 30 nanoseconds, and a 64-bit adder would require only about 40 nanoseconds. On the other hand, the 74LS181 requires about 25 nanoseconds to compute its carry-out. Were the 74LS181s used in a ripple-carry configuration, as in FIG. 21, each stage would require 25 nanoseconds. The 12-bit addition would require 50 nanoseconds, with longer words requiring an additional 25 nanoseconds for each additional 4 bits. Arithmetic is a vital function in most computer applications, and much effort has gone into producing fast and efficient arithmetic circuits. Multiplication and division present their own sets of difficulties; fast division is a particularly challenging problem. We will not cover these specialized areas; consult either the textbooks listed at the end of the section or the technical literature on specific computers.

DATA MOVEMENT

One of the most important operations in digital design is moving data between a source and a destination. This often occurs inside computers, and is a major activity of peripheral devices. Frequently, the data itself involves several bits, an n-bit byte or word, that must move through the system in parallel. Typical data paths in modern digital design are 8 to 64 bits wide. If there is only a single source of data and a single destination, we have no problem. We simply run n wires from the source's output to the destination's input. With several sources and various destinations, the situation becomes more complex and requires that we allow data to be moved from any source to any destination. One alternative is separate data paths from each of S sources to each of D destinations. An item that is a source of data at one time may be a destination of data at other times. In FIG. 23, we show the data paths in two configurations: three sources S1-S3 with three different destinations D1-D3, and four common sources and destinations, A, B, C, D.

FIG. 23 (a) Sources and destinations are different. (b) Sources and destinations are the same.
All paths are n bits wide. There are obvious drawbacks to this scheme. The number of wires becomes very large as the number of sources and destinations increases, in general being S × D × n. Adding new nodes to the system involves massive rewiring, affecting each source and destination. A completely connected system allows several simultaneous data transactions to occur over the independent data paths, and this is the main advantage of the scheme. When high-speed parallel movements of data are vital, we would expect to pay the price of inflexibility and complex wiring, and would adopt some form of the completely connected system.

The Bus

In most applications, especially when the number of sources and destinations is not fixed at design time, we need a more flexible solution to the problem of moving data. By giving up parallelism, we may achieve this flexibility. Suppose we run all sources into a single node and take all destination paths from this node. We call this configuration a data bus, or just a bus. FIG. 24 shows two ways to represent a bus, FIG. 24b being the more common. (Remember that all the data paths are actually n bits wide, although we usually draw only a single path representing the entire word or byte.) The bus structure permits only one data transfer to occur at a time, since all data paths funnel through the bus node. However, adding or deleting elements on the bus is simple, and this overwhelming advantage accounts for the widespread use of this configuration in digital computers.
Controlling the bus. We have a building block to move data: the bus, which takes the form of just n wires. How do we regulate the traffic over these wires? In any scheme with more than one source or destination, there is a need to control the movement of the data. This control takes two forms: who talks, and who listens. On the bus, there must be no more than one talker (source) at a time, but several destinations may listen. The responsibility for listening on the bus (receiving data) is part of each destination device and is not directly a part of the bus operation. All destinations are physically capable of listening; whether they actually accept data is under their control. Maintaining control over the bus sources, to assure only one talker at a time, is very much a concern of the designer of the bus. We shall mention four control mechanisms, two of which you have already encountered.

Bus access with the multiplexer. Our job is to select one source from several candidates. The digital designer, when encountering the concept of selection, has a knee-jerk response: the multiplexer. For each bit of the bus's data path, attach a multiplexer output to the bus, making each source an input to the mux. We control this collection of n multiplexers with a common source-select code feeding into the multiplexers' select inputs. We show the idea in FIG. 25. In this approach, we collect the control for access to the bus in one spot, and assure that only one source is talking at a time, both important advantages. Further, it is easy to debug, since we maintain explicit centralized control over which source has access to the bus. On the other hand, the data-mux method of bussing requires considerable hardware; we use an S-wide mux for each of the n data bits in the bus path. If n is large, we have a boardful of data multiplexers. Adding new sources is convenient as long as we do not exhaust the input capability of our muxes. If we exceed this capacity, we have a difficult hardware-modification job. For instance, with 8-input multiplexers, we may manage up to eight sources, but the ninth source causes great agony. Thus, the data-multiplexer method of bussing suffers from a certain inflexibility and is not very conserving of hardware. Nevertheless, it is a good method of bussing a moderate number of sources, and we will use it in the design of a minicomputer in Sections 7 and 8.
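A behavioral sketch of FIG. 25's scheme, written to mirror the mux's internal decoded-select AND-OR structure (the names and data layout are our own, not from the text):

def mux_bus(sources, select):
    """One S-input mux per bus bit, all sharing one select code.
    sources: list of S words, each a list of n bits; select: index of the talker."""
    n = len(sources[0])
    bus = []
    for k in range(n):
        bit = 0
        for i, word in enumerate(sources):
            enable = 1 if i == select else 0   # decoded select line i
            bit |= enable & word[k]            # AND-OR structure of the mux
        bus.append(bit)
    return bus

# Four 4-bit sources; the single select code guarantees exactly one talker.
print(mux_bus([[0,0,0,0], [1,1,1,1], [0,1,0,1], [1,1,0,0]], select=2))

Because the select code is decoded centrally, at most one enable can be true, which is the "guaranteed single-talker" property the text emphasizes.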
The remaining three methods lack the security of the multiplexer's encoded selection control. Bus access with OR gates. A primitive form of bus control is to merge all sources into the bus data path, using OR gates. For S n-bit sources, we would have n OR gates, each accepting S inputs. This produces the merging required to give all sources access to the single bus path, but it does not provide the control needed to allow only one source onto the bus at a time. Each bit of each source is either T or F; we must arrange for all sources but one to have all their bits false, while the one designated source presents its T or F data through the OR gates onto the bus. This approach places the responsibility for access with each source, rather than directly with the bus as in the multiplexer method. Each source must have its own gating signal to open or close the gate on its data bits. Typically, the sources have some form of AND gate on each data bit: the data forms one input and the control signal forms the other. (It is this usage that gave rise to the "gate" terminology in digital circuits.) The method is shown in FIG. 26.
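A sketch of the OR-gate method of FIG. 26 makes the contrast with the mux version visible: each source now carries its own independent gating signal, and nothing in the structure prevents two enables from being true at once (names are ours):

def or_gate_bus(sources, enables):
    """Merge S n-bit sources through OR gates; each source gates its own data."""
    n = len(sources[0])
    bus = []
    for k in range(n):
        bit = 0
        for word, enable in zip(sources, enables):
            bit |= enable & word[k]   # per-source AND gate, then the OR merge
        bus.append(bit)
    return bus

# Correct use: exactly one enable true. With two enables true, the ORs
# silently merge both words, the control failure the text warns about.
print(or_gate_bus([[1,0,1,0], [0,1,1,0]], enables=[0, 1]))   # [0, 1, 1, 0]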
The OR-gate method has little to recommend it. Electronically, it performs the same functions as the mux method, with the mux circuits split into OR gates and AND gates. We might view this as a "poor man's mux," although its components will cost more than those of the actual mux method. It suffers from the same inflexibility of input-size as the mux method and lacks the certainty of control provided by the multiplexer's encoded selection process. The remaining two methods introduce new concepts of digital building blocks. Open-collector gates. Open-collector technology provides a way to implement the OR logic function, and thus can be used in bussing applications. We must adhere to the stipulation that open-collector gates produce wired-OR when truth is represented by a low voltage. In FIG. 27, we show the 7407 Open-Collector Buffer used as a bus driver. Since their primary use was for bussing, where several destinations may listen in on the bus, open-collector chips usually can carry more current than their normal TTL counterparts. This provision of extra power is called buffering, and such chips are called buffers or drivers. The advantage of open-collector circuits is the elimination of the wide OR gates. As long as only one source at a time is on the bus (at most one input is asserting truth), we may connect a large number of open-collector outputs together. Aside from this, achieving proper control of the bus with open-collector wired OR logic involves the same concerns as the ordinary OR-gate method: we must still control each of the sources so that at most one is talking at a time.
Three-state outputs. The three-state output has, as its name implies, three stable states instead of the customary two. In addition to the usual high and low voltage levels, the third state provides a high-impedance mode, usually called Z, in which the output appears as if it were disconnected from its destinations. The three-state output requires an enabling three-state control input. When the output is enabled, the circuit transmits the normal H or L signal presented at the input of the three-state circuit. If the output is disabled, the circuit output is for practical purposes not there at all. (Logicians should note that three-state outputs are not the same as ternary logic, which is a true base-3 system.) Many SSI, MSI, and LSI chips incorporate three-state data outputs. The fundamental use is in bussing, so three-state outputs often provide power buffering like their open-collector cousins. We might select the 74LS244 Octal Three-State Buffer as our prototype three-state chip. This is one of several SSI building blocks that perform no logic but simply afford three-state control of their inputs. This particular chip has two four-buffer sections, each controlled by its own three-state enable input. We find three-state outputs in many useful building blocks. The multiplexers discussed earlier in this section have an enable input that holds their output false when the chip is disabled. In the three-state varieties, the output is "disconnected" when the chip is disabled. In Section 4, you will see more examples of three-state outputs in MSI building blocks. The uses of three-state output control in data bussing are substantial. We may connect almost any number of three-state devices together and, with proper three-state enabling of only one source at a time, control access to the bus. Often the chips providing the bus's source data will have three-state output control built in; in other cases, we may need to add buffers such as the 74LS244 to achieve three-state control. FIG. 28 is a typical three-state bussing configuration.
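A behavioral sketch of the three-state bus of FIG. 28, modeling the high-impedance state Z as Python's None. The modeling choice and names are ours; real bus contention is an electrical fault, which we approximate here by raising an error:

def three_state_buffer(data_bit, enable):
    """Drive the data bit when enabled; otherwise present high impedance (Z)."""
    return data_bit if enable else None

def bus_wire(driver_outputs):
    """Resolve one bus wire shared by several three-state drivers."""
    talking = [v for v in driver_outputs if v is not None]
    if len(talking) > 1:
        raise RuntimeError("bus contention: more than one talker enabled")
    return talking[0] if talking else None   # None: floating (undriven) wire

# Three sources share the wire; the enables must allow only one talker.
outputs = [three_state_buffer(1, False),
           three_state_buffer(0, True),
           three_state_buffer(1, False)]
print(bus_wire(outputs))   # prints 0: only the second source is talking

Unlike the mux sketch, nothing in this structure enforces a single talker; that responsibility is decentralized among the enables, which is exactly the drawback discussed next.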
There are two drawbacks to three-state bus control. First, as in the last two bussing methods, the control of access is decentralized, residing with the sources rather than with the bus structure itself. This makes it more difficult to assure that only one source at a time is talking on the bus. Second, the three-state bus is more difficult to debug than the bus formed from multiplexers. The three-state bus itself is just a collection of n wires. A debugger who sees bad data on the bus cannot easily identify which source or sources are contributing. Failure of the three-state circuitry often requires tedious disconnecting of sources from the bus until the guilty party identifies itself by a change in the bus's behavior. In the multiplexer method, we may check inputs, controls, and output directly, and quickly determine which element is misbehaving. These drawbacks to three-state bus control are insufficient to outweigh the tremendous advantages that this technology offers, and three-state control is used in most modern applications of data bussing. One caveat: do not try to use three-state control at the output of a control signal. Control signals must be either true or false (asserted or negated) at all times, and we cannot afford to have them simply not there. Only with data, whose use is governed by control signals, do we have the opportunity to have certain data sources disconnected some of the time.

RESOURCES

BLAKESLEE, THOMAS R., Digital Design with Standard MSI and LSI, 2nd ed. John Wiley & Sons, New York, 1979.

FAST: Fairchild Advanced Schottky TTL. Fairchild Camera and Instrument Corp., Digital Products Division, South Portland, Maine.

FLETCHER, WILLIAM I., An Engineering Approach to Digital Design. Prentice-Hall, Englewood Cliffs, N.J., 1980. Section 4 discusses MSI building blocks.

HILL, FREDERICK J., and GERALD R. PETERSON, Digital Logic and Microprocessors. John Wiley & Sons, New York, 1984.

MANO, M. MORRIS, Digital Design. Prentice-Hall, Englewood Cliffs, N.J., 1984.

The TTL Data Book. Texas Instruments, P.O. Box 225012, Dallas, Texas 75265.

WIATROWSKI, CLAUDE A., and CHARLES H. HOUSE, Logic Circuits and Microcomputer Systems. McGraw-Hill Book Co., New York, 1980.

QUIZ

1. What are SSI, MSI, LSI, and VLSI?

2. What distinguishes a combinational circuit from a sequential one?

3. Explain the structure and the function of the multiplexer. What are the two major types of output enable found in MSI multiplexers?

4. FIG. 5, which corresponds to the commercial equivalent of the enabled multiplexer, is not a direct counterpart of FIG. 4 or Eq. (1). Give a logic equation that best describes the device shown in FIG. 5.

5. By consulting a TTL data book, make a list of the available MSI multiplexers and their distinguishing characteristics. You must inspect the data sheets to discern the details; chip names alone are not sufficient. (Since the multiplexer is such an important digital building block, this effort is well worthwhile.)

6. Why not have one select input for each multiplexer data input rather than encoding the select information?

7. The 74LS251 Eight-Input Multiplexer has a three-state output-enable feature. Construct a 16-input multiplexer building block from two 74LS251 chips and an inverter.

8. Build the 4-input multiplexer in FIG. 10, using SSI gates.

9. Show how to construct a 64-input multiplexer building block using eight 74LS251 chips and a 74LS42 decoder.

10. The 4-input multiplexer symbol below looks like a mixed-logic notation.
Why do we not find this symbol useful?

11. The 74LS42 serves as a demultiplexer and a decoder. Characterize the difference in these two views of it.

12. Build the 4-output demultiplexer in FIG. 12, using SSI gates. Will your design also serve as a decoder? If so, how?

13. What is the most important characteristic of the outputs of a decoder?

14. Explain how we may view the 74LS42's decoding capability as either a 4-to-10 or an enabled 3-to-8 decoder.

15. The 74LS42 is sometimes called a BCD (binary-coded decimal) decoder. Why is this an appropriate name?

16. Construct a building block that will decode a 4-bit binary code into one of 16 outputs, using 74LS42 decoders and any necessary gates.

17. What is the purpose of an encoder? Why are practical encoders priority encoders?

18. Explain the difference between the concepts of encoding and decoding.

19. Using SSI gates, design a priority encoder that accepts five inputs and produces a 3-bit output code. Use method 1 of the text.

20. Parity is an important concept, frequently used in error-detection circuits within digital systems. The parity of a group of bits is odd if there are an odd number of 1-bits in the group; even parity implies an even number of 1-bits. Although rapid parity-computing circuits are available, the EXCLUSIVE-OR function provides the basis for parity computation. (a) Show that the EXCLUSIVE OR of two bits computes odd parity. (b) Show that, in general, A1 (+) A2 (+) A3 (+) ... (+) An expresses an odd-parity function of n bits.

21. Multiplexers offer another interesting approach to parity computation. In the following, the output should be asserted if the parity is odd. (a) Show how a 4-input multiplexer (for instance, one-half of a 74LS153) can be used to compute the parity of a 3-bit group. (b) Write the logic equation (in terms of AND, OR, and NOT) for the actions of the multiplexer in part (a), and show that this equation is equivalent to the EXCLUSIVE-OR use suggested in the preceding exercise. (c) Show how to use four 4-input multiplexers to compute the parity of a 9-bit group. How many 74LS153 chips would be required for this design?

22. The 74LS280 Parity Generator/Checker accepts 9 bits of data and reports the parity. Use this chip and any necessary SSI gates to design a circuit that will assert an output when the parity of a 10-bit group is odd.

23. Derive logic equations for determining if one 4-bit positive binary number is greater in magnitude than another. Design a circuit for this logic, using SSI gates.

24. The 74LS85 Magnitude Comparator has outputs for designating A < B, A = B, and A > B. How may we determine if A ≤ B? A ≥ B? A ≠ B?

25. Draw a circuit for 10-bit magnitude comparison, using 74LS85 chips.

26. Performing arithmetic comparisons on signed numbers is more complex than comparing magnitudes. Consider two 4-bit signed numbers A and B, recorded in signed-magnitude notation. (This notation denotes a negative number with a 1 in the leftmost bit position and a positive number with a 0 in that bit position; the other bits record the magnitude of the number.) Develop logic equations to determine if A < B, A = B, and A > B in this notation. Explore whether the 74LS85 Magnitude Comparator is useful in realizing these equations. Produce a circuit (either with or without the 74LS85) for generating the three comparisons.

27. Design a 3-bit full adder equivalent to FIG. 19, using 1-bit full adders fabricated from SSI gates.

28. Modify FIG. 21 to perform the operation A PLUS B PLUS 1.

29. Using 74LS283 Four-Bit Full Adders and any necessary SSI gates, design a circuit that will accept a 12-bit signed number in the two's-complement representation and produce the negative of that number.

30. Devise a circuit that will accept a 12-bit signed number in the two's-complement representation and produce the absolute value of that number.

31. Verify Eq. (5) for the full-adder sum expressed in terms of the carry-generate and carry-propagate operators.

32. Derive equations for C3 and C4, similar to Eqs. (6) and (7), using the carry-generate and carry-propagate operators.

33. Derive the expressions for the 74LS181's G and P outputs. (Hint: consider the generate and propagate operators for the bits within the 4-bit slice; ask yourself what conditions must apply to obtain truth on the overall G and P.) Verify your results by consulting a 74LS181 data sheet.

34. Describe a data bus. Give the main advantages and disadvantages of this method of moving data. Why is the bus such a widely used concept?

35. Discuss the merits of controlling a bus with: (a) Multiplexers. (b) OR gates. (c) Open-collector buffers. (d) Three-state buffers.

36. Three-state control of outputs is common. Why do we not employ three-state control of inputs?

37. Design bussing systems similar to FIG. 25 for six 4-bit devices, using: (a) Open-collector bus drivers. (b) Three-state bus drivers.

38. The multiplexer bus control method shown in FIG. 25 has the desirable property that only one source can be talking on the bus at any time. Devise a three-state bus control system that also has this "guaranteed single-talker" feature.

39. A logic probe is a small laboratory instrument that, when touched to a point in a digital circuit, indicates the digital voltage level at that point. A typical logic probe shows a low voltage level as a green light and a high voltage level as a red light. If the voltage level is outside the acceptable ranges, the probe shows no light or indicates invalidity in some other way. Read the sections in Section 12 on integrated circuit data sheets and performance parameters. Using this information, specify the following voltages for a logic probe designed to operate with the 74LS family of integrated circuits: (a) The highest voltage that will light the green light. (b) The lowest voltage that will light the red light.