1. Introduction

The sections on memory discuss the external connection between a processor and the memory system. The previous section discusses connections with external I/O devices, and shows how a processor uses the connections to control the device or transfer data. The section reviews concepts such as serial and parallel transfer, defines terminology, and introduces the idea of multiplexing data transfer over a set of wires. This section extends the ideas by explaining a fundamental architectural feature present in all computer systems, a bus. It describes the motivation for using a bus, explains the basic operation, and shows how both memory and I/O devices can share a common bus. We will learn that a bus defines an address space and understand the relationship between a bus address space and a memory address space.

2. Definition of a Bus

A bus is a digital communication mechanism that allows two or more functional units to transfer control signals or data. Most buses are designed for use inside a single computer system; some are used within a single integrated circuit. Many bus designs exist because a bus can be optimized for a specific purpose. For example, a memory bus is intended to interconnect a processor with a memory system, and an I/O bus is intended to interconnect a processor with a set of I/O devices. We will see that general-purpose designs are possible.

3. Processors, I/O Devices, and Buses

The notion of a bus is broad enough to encompass most external connections (e.g., a connection between a processor and a coprocessor). Thus, instead of viewing the connection between a processor and a device as a set of wires (as in Section 14), we can be more precise: the two units are interconnected by a bus. FIG. 1 uses a graphic that is common in engineering diagrams to illustrate the concept.
We can summarize:

A bus is the digital communication mechanism that interconnects functional units of a computer system. A computer contains one or more buses that interconnect the processors, memories, and external I/O devices.

3.1 Proprietary and Standardized Buses

A bus design is said to be proprietary if the design is owned by a private company and not available for use by other companies (e.g., covered by a patent). The alternative to a proprietary bus is known as a standardized bus, which means the specifications are available. Because they permit equipment from two or more vendors to communicate and interoperate, standardized buses allow a computer system to contain devices from multiple vendors. Of course, a bus standard must specify all the details needed to construct hardware, including the exact electrical specifications (e.g., voltages), the timing of signals, and the encoding used for data. Furthermore, to ensure correctness, each device that attaches to the bus must implement the bus standard precisely.

3.2 Shared Buses and an Access Protocol

We said that a bus can be used to connect a processor to an I/O device. In fact, most buses are shared, which means that a single bus is used to connect the processor to a set of I/O devices. Similarly, if a computer contains multiple processors, all the processors can connect to a shared bus. To permit sharing, an architect must define an access protocol to be used on the bus. The access protocol specifies how an attached device can determine whether the bus is available or is in use, and how attached devices take turns using the bus.

3.3 Multiple Buses

A typical computer system contains multiple buses. For example, in addition to a central bus that connects the processor, I/O devices, and memory, some computers have a special-purpose bus used to access coprocessors.
Other computers have multiple buses for convenience and flexibility - a computer with several standard buses can accommodate a wider variety of devices. Interestingly, most computers also contain buses that are internal (i.e., not visible to the computer's owner). For example, many processors have one or more internal buses on the processor chip. A circuit on the chip uses an onboard bus to communicate with another circuit (e.g., with an onboard cache).

3.4 A Parallel, Passive Mechanism

As Section 14 describes, an interface is classified as using either serial or parallel data transfer. Most of the buses used in computer systems are parallel. That is, a bus is capable of transferring multiple bits of data at the same time. The most straightforward buses are classified as passive because the bus itself does not contain electronic components. Instead, each device that attaches to a bus contains the electronic circuits needed to communicate over the bus. Thus, we can imagine a bus to consist of parallel wires to which devices attach†.

†In practice, some buses do contain a digital circuit known as a bus arbiter that coordinates devices attached to the bus. However, such details are beyond the scope of this text.

4. Physical Connections

Physically, a bus can consist of tiny wires etched in silicon on a single chip, a cable that contains multiple wires, or a set of parallel metal wires on a circuit board. Most computers use the third form for an I/O bus: the bus is implemented as a set of parallel wires on the computer's main circuit board, which is known as a mother board. In addition to a bus, the mother board contains the processor, memory, and other functional units. A set of sockets on the mother board connects to the bus to allow devices to be plugged in or removed easily (i.e., a device can be connected to the bus merely by plugging the device into a socket).
Typically, the bus and the sockets are positioned near the edge of the mother board to make them easily accessible from outside. FIG. 2 illustrates a bus and sockets on a mother board.
5. Bus Interface

Attaching a device to a bus is nontrivial. To operate correctly, a device must adhere to the bus standard. Recall, for example, that a bus is shared and that the bus specifies an access protocol that is used to determine when a given device can access the bus to transfer data. To implement the access protocol, each device must have a digital circuit that connects to the bus and follows the bus standard. Known as a bus interface or a bus controller, the circuit implements the bus access protocol and controls exactly when and how the device uses the bus. If the bus protocol is complicated, the interface circuit can be large; many bus interfaces require multiple chips.

What is the physical connection between a bus interface circuit and the bus itself? Interestingly, the sockets of many buses are chosen to make it possible to plug a printed circuit board directly into the socket. The circuit board must have a region cut to the exact size of a socket, and must have metal fingers that align exactly with metal contacts in the socket. FIG. 3 illustrates the concept.

The figure helps us envision how a physical computer system is constructed. If the mother board lies in the bottom of a cabinet, the circuit boards for individual devices that plug into the mother board will be perpendicular, meaning that the device circuit boards will be vertical. A key piece of the physical arrangement concerns the placement of sockets - by locating sockets near the edge of a mother board, a designer can guarantee that device boards are located adjacent to the side of the cabinet. Choosing a location near the side means a short connection between the circuit board and the outside of the cabinet. The arrangement is used in a typical PC.
6. Control, Address, and Data Lines

Although the physical structure of a bus provides interesting engineering challenges, we are more concerned with the logical structure. We will examine how the wires are used, the operations the bus supports, and the consequences for programmers. Informally, the wires that comprise a bus are called lines. There are three conceptual functions for the lines:

-- Control of the bus
-- Specification of address information
-- Transfer of data

To help us understand how a bus operates, we will assume that the bus contains three separate sets of lines that correspond to the three functions†. FIG. 4 illustrates the concept.
†The description here simplifies details; a later section explains how the functionality can be achieved without physically separate groups of wires.

As the figure implies, bus lines need not be divided equally among the three uses. In particular, control functions usually require fewer lines than other functions.

7. The Fetch-Store Paradigm

Recall from Section 10 that memory systems use the fetch-store paradigm in which a processor can either fetch (i.e., read) a value from memory or store (i.e., write) a value to memory. A bus uses the same basic paradigm. That is, a bus only supports fetch and store operations. As unlikely as it seems, we will learn that when a processor communicates with a device or transfers data across a bus, the communication always uses fetch or store operations. Interestingly, the fetch-store paradigm is used with all devices, including microphones, video cameras, sensors, and displays, as well as with storage devices, such as disks. We will see later how it is possible to control all devices with the fetch-store paradigm. For now, it is sufficient to understand the following:

Like a memory system, a bus employs the fetch-store paradigm; all control or data transfer operations are performed as either a fetch or a store.

8. Fetch-Store and Bus Size

Knowing that a bus uses the fetch-store paradigm helps us understand the purpose of the three conceptual categories of lines that FIG. 4 illustrates. All three categories are used for either a fetch or store operation. Control lines are used to ensure that only one pair of entities attempts to communicate over the bus at any time, and to allow two communicating entities to interact meaningfully. The address lines are used to pass an address, and the data lines are used to transfer a data value. FIG. 5 explains how the three categories of lines are used during a fetch or store operation.
The figure lists the steps that are taken for each operation, and specifies which group of lines is used for each step. We said that most buses use parallel transfer - the bus contains multiple data lines, and can simultaneously transfer one bit over each data line. Thus, if a bus contains K data lines, the bus can transfer K bits at a time. Using the terminology from Section 14, we say that the bus has a width of K bits. Thus, a bus that has thirty-two data lines (i.e., can transfer thirty-two bits at the same time) is called a thirty-two-bit bus. Of course, some buses are serial rather than parallel. A serial bus can only transfer one bit at a time. Technically, a serial bus has a width of one bit. However, engineers do not usually talk about a bus having a width of one bit; they simply call it a serial bus.

Fetch
1. Use the control lines to obtain access to the bus
2. Place an address on the address lines
3. Use the control lines to request a fetch operation
4. Test the control lines to wait for the operation to complete
5. Read the value from the data lines
6. Set the control lines to allow another device to use the bus

Store
1. Use the control lines to obtain access to the bus
2. Place an address on the address lines
3. Place a value on the data lines
4. Use the control lines to specify a store operation
5. Test the control lines to wait for the operation to complete
6. Set the control lines to allow another device to use the bus

FIG. 5 The steps taken to perform a fetch or store operation over a bus, and the group of lines used in each step.

9. Multiplexing

How wide should a bus be? Recall from Section 14 that parallel interfaces represent a compromise: although increasing the width increases the throughput, greater width also takes more space and requires more electronic components in the attached devices. Furthermore, at high data rates, signals on parallel wires can interfere with one another.
Thus, an architect chooses a bus width as a compromise between space, cost, and performance. One technique stands out as especially helpful in reducing the number of lines in a bus: multiplexing. A bus can use multiplexing in two ways: data multiplexing alone or a combination of address and data multiplexing.

Data Multiplexing. In Section 14, we learned how data multiplexing works. In essence, when a device attached to a bus has a large amount of data to transfer, the device divides the data into blocks that are exactly as large as the bus is wide. The device then uses the bus repeatedly, sending one block at a time.

Address And Data Multiplexing. The motivation for multiplexing addresses is to reduce the number of lines. To understand how address multiplexing works, consider the steps in FIG. 5 carefully. In the case of a fetch operation, the address lines and data lines are never used at the same time (i.e., in the same step). Thus, an architect can use the same lines to send an address and receive data. For a store operation, multiplexing can also be used: bus hardware sends the address first and then sends the data†.

†Of course, a device that receives a request over a multiplexed bus must store the address while the data is transferred.

Most buses make heavy use of multiplexing. Thus, instead of three conceptual sets of lines, a typical bus has two: a few lines used for control, and a set of lines used to send either an address or data. FIG. 6 illustrates the idea.
Multiplexing offers two advantages. First, multiplexing allows an architect to design a bus that has fewer lines. Second, if the number of lines in a bus is fixed, multiplexing produces higher overall performance. To see why, consider a data transfer. If K of the lines in the bus are reserved for addresses, those K lines cannot be used during a data transfer. If all the lines are shared, however, an additional K bits can be transferred on each bus cycle, which means higher overall throughput.

Despite its advantages, multiplexing does have two disadvantages. First, multiplexing takes more time because a store operation requires two bus cycles (i.e., one to transfer the address and another to transfer the data item). Second, multiplexing requires a more sophisticated bus protocol, and therefore, more complex bus interface hardware. Despite the disadvantages, many bus designs use multiplexing. In the extreme case, a bus can be designed that multiplexes control information over the same set of lines used for data and addresses.

10. Bus Width and Size of Data Items

The use of multiplexing helps explain another aspect of computer architecture: the uniform size of all data objects, including addresses. We will see that all data transfers among a processor, memories, and devices occur over a bus. Furthermore, because the bus multiplexes the transfers over a fixed number of lines, a data item that exactly matches the bus width can be transferred in one cycle, but any item that is larger than the bus width requires multiple cycles. Thus, it makes sense for an architect to choose a single size for the bus width, the size of a general-purpose register, and the size of a data value that the ALU or functional units use (e.g., the size of an integer or a floating point value). More important, because addresses are also multiplexed over the bus lines, it makes sense for the architect to choose the same size for an address as for other data items.
The point is:

In many computers, both addresses and data values are multiplexed over the same bus. To optimize performance of the hardware, an architect chooses a single size for both data items and addresses.

11. Bus Address Space

A memory bus (i.e., a bus that the processor uses to access memory) is the easiest form of a bus to understand. Previous sections discuss the concepts of memory access and a memory address space; we will see how a bus is used to implement the concepts. As FIG. 7 illustrates, a memory bus provides a physical interconnection among a processor and one or more memories.
As the figure shows, the processor and the memory modules connected to a memory bus each contain an interface circuit. The interface implements the bus protocol, and handles all bus communication. The interface uses the control lines to gain access to the bus, and then sends addresses or data values to carry out the operation. Thus, only the interface understands the bus details, such as the voltage to use and the timing of control signals.

From a processor's point of view, the bus interface provides the fetch-store paradigm. That is, the processor can only perform two operations: fetch from a bus address and store to a bus address. When the processor encounters an instruction that references memory, the processor hardware invokes the bus interface. For example, on many architectures, a load instruction fetches a value from memory and places the value in a general-purpose register. When the processor executes a load, the hardware issues a fetch operation to the bus interface. Similarly, if the processor executes an instruction that deposits a value in memory, the hardware uses a store operation on the bus interface.

From a programmer's point of view, the bus interface is invisible. The programmer thinks of the bus as defining an address space. The key to creating a single address space lies in memory configuration - each memory is configured to respond to a specific set of bus addresses. That is, the interface for memory 1 is assigned a different set of addresses than the interface for memories 2, 3, 4, and so on. When a processor places a fetch or store request on the bus, all memory controllers receive the request. Each memory controller compares the address in the request to the set of addresses the memory module has been assigned. If the address in the request lies within the controller's set, the controller responds. Otherwise, it ignores the request. The point is:

When a request passes across a bus, all attached memory modules receive the request.
A memory module only responds if the address in the request lies in the range that has been assigned to the module.

12. Potential Errors

FIG. 8 lists the conceptual steps that each memory module interface implements.

    Let R be the range of addresses assigned to this module
    Repeat forever {
        Monitor the bus until a request appears;
        if ( the request specifies an address in R ) {
            respond to the request
        } else {
            ignore the request
        }
    }
An error that the bus hardware reports is referred to as a bus error; a typical bus protocol includes mechanisms that detect and report each type of bus error. Allowing each memory module to act independently means two types of bus errors can occur:

-- Address conflict
-- Unassigned address

Address Conflict. We use the term address conflict to describe a bus error that results when two or more interfaces are mis-configured so they each respond to a given address. Some bus hardware is designed to detect and report address conflicts when the system boots. Other hardware is designed to prevent conflicts. In any case, most bus protocols include a test for address conflicts that occur at run-time - if two or more interfaces attempt to respond to a given request, the bus hardware detects the problem and sets a control line to indicate that an error occurred. When it uses a bus, a processor checks the bus control lines and takes action if an error occurs.

Unassigned Address. An unassigned address bus error occurs if a processor attempts to access an address that has not been assigned to any interface. To detect an unassigned address, most bus protocols use a timeout mechanism - after sending a request over the bus, the processor starts a timer. If no interface responds, the timer expires, which causes the processor hardware to report the bus error. The same timeout mechanism used to detect unassigned addresses also detects malfunctioning hardware (e.g., a memory module that is not responding to requests).

13. Address Configuration and Sockets

Some bus hardware prevents bus errors. Unassigned addresses pose a thorny problem for prevention. On the one hand, to prevent bus errors, each possible address must be assigned to a memory module. On the other hand, most memory systems are designed to accommodate expansion. That is, a bus typically contains enough wires to address more memory than is installed in the computer (i.e., some addresses will be unassigned).
Fortunately, architects have devised a scheme that helps avoid the problem of two modules that both answer a given request: special sockets. The idea is straightforward. Memory is manufactured on small printed circuit boards that each plug into a socket on the mother board. To avoid problems caused by misconfiguration, all memory boards are identical, and no configuration is required before a board is plugged in. Instead, circuitry and wiring is added to the mother board so that the first socket only receives requests for addresses 0 through K-1, the second socket only receives requests for addresses K through 2K-1, and so on. When a socket does recognize an address, the socket passes the low-order bits of the address on to the memory. The point is:

To avoid memory configuration problems, architects can place memory on small circuit boards that each plug into a socket on the mother board. An owner can install memory without configuring the hardware because each socket is configured with the range of addresses to which the memory should respond.

As an alternative, some computers contain sophisticated circuitry that allows the MMU to configure socket addresses when the computer boots. The MMU determines which sockets are populated, and assigns each a range of addresses. Although it adds cost, the extra circuitry to prevent conflicts makes installing memory much easier - an owner can purchase memory modules and plug them into sockets without configuring the modules and with no danger of conflicts.

14. The Question of Multiple Buses

Should a computer system have multiple buses? If so, how many? Computers designed for high performance (e.g., mainframe computers) often contain several buses. Each bus is optimized for a specific purpose. For example, a mainframe computer might have one bus for memory, another for high-speed I/O devices, and another for slow-speed I/O devices.
As an alternative, less powerful computers (e.g., personal computers) often use a single bus for all connections. The chief advantages of a single bus are lower cost and greater generality. A processor does not need multiple bus interfaces, and a single bus interface can be used for both memory and devices. Of course, designing a single bus for all connections means choosing a compromise. That is, the bus may not be optimal for any given purpose. In particular, if the processor uses a single bus to access instructions and data in memory as well as perform I/O, the bus can easily become a bottleneck. Thus, a system that uses a single bus often needs a large memory cache that can answer most of the memory requests without using the bus.

15. Using Fetch-Store With Devices

Recall that a bus is used as the primary connection between a processor and an I/O device, and that all operations on a bus must use the fetch-store paradigm. The two statements may seem contradictory - although it works well for data transfer, fetch-store does not appear to handle device control. For example, consider an operation like testing whether a wireless radio is currently in range of an access point or moving paper through a printer. It seems that fetch and store operations are insufficient, and that devices require a large set of control commands.

To understand how a bus works, we must remember that a bus provides a way to communicate a set of bits from one unit to another without specifying what each bit means. The names fetch and store mislead us into thinking about values in memory. On a bus, however, the interface hardware of each device provides a unique interpretation of the bits. Thus, a device can interpret certain bits as a control operation rather than as a request to transfer data. An example will clarify the relationship between the fetch-store paradigm and device control.
Imagine a simplistic hardware device that contains sixteen status lights, and suppose we want to attach the device to a bus. Because the bus only offers fetch and store operations, we need to build interface hardware that uses the fetch-store paradigm for control. An engineer who designs a device interface begins by listing the operations to be performed. FIG. 9 lists the five functions for our imaginary device.

-- Turn the display on
-- Turn the display off
-- Set the display brightness
-- Turn the ith status light on
-- Turn the ith status light off
To cast control operations in the fetch-store paradigm, a designer chooses a set of bus addresses that are not used by other devices, and assigns meanings to each address. For example, if our imaginary status light device is attached to a bus that has a width of thirty-two bits, a designer might choose bus addresses 10000 through 10011, and might assign meanings according to FIG. 10.
16. Operation of an Interface

Although bus operations are named fetch and store, a device interface does not act like a memory - data is not stored for later recall. Instead, a device treats the address, operation, and data in a bus request merely as a set of bits. The interface contains logic circuits that compare the address bits in each request to the addresses assigned to the device. If a match occurs, the interface enables a circuit that responds to the fetch or store operation. For example, the first item in FIG. 10 can be implemented by a circuit that tests bits on the bus for a store request with address 10000 and uses the data to take action. In essence, the circuit performs the following test:

    if ( address == 10000 && op == store && data != 0 ) {
        turn_on_display;
    } else if ( address == 10000 && op == store && data == 0 ) {
        turn_off_display;
    }

Although we have used programming language notation to express the operations, interface hardware does not perform the test sequentially. Instead, an interface is constructed from Boolean circuits that can test the address, operation, and data values in parallel and take the appropriate action.

17. Asymmetric Assignments and Bus Errors

The example in FIG. 10 does not define the effect of fetch or store operations on some of the addresses. For example, the specification does not define a fetch operation for address 10004. To capture the idea that fetch and store operations do not need to be defined for each address, we say that the assignment is asymmetric. The specification in FIG. 10 is asymmetric because the processor can store a value to the four bytes starting at address 10004, but a bus error results if the processor attempts to read from address 10004.

18. Unified Memory and Device Addressing

In some computers, a single bus provides access to both memory and I/O devices. In such an architecture, the assignment of addresses on the bus defines the processor's view of the address space.
For example, imagine a computer system with a single bus as FIG. 11 illustrates.
The bus connects memories as well as devices. In the figure, the bus defines a single address space that the processor can use. Each memory module and each device must be assigned a unique range of bus addresses. For example, if we assume the memories are each 1 Mbyte and each device requires twelve memory locations, four address ranges must be assigned for use on the bus as FIG. 12 illustrates.
We can also imagine the address space drawn graphically like the illustrations of a memory address space in Section 11. Of course, the space occupied by each device is extremely small compared to the space occupied by a memory, which means the diagram will not show much detail. For example, FIG. 13 shows the diagram that results from the assignments in FIG. 12.
19. Holes in a Bus Address Space

The address assignment in FIG. 12 is said to be contiguous, which means that the address ranges do not contain gaps - the first byte assigned to one range is the immediate successor of the last byte assigned to the previous range. Contiguous address assignment is not required - if the software accidentally accesses an address that has not been assigned, the bus hardware detects the problem and reports a bus error. Using the terminology from Section 13, we say that if an assignment of addresses is not contiguous, the assignment leaves one or more holes in the address space. For example, a bus may reserve lower addresses for memory and assign devices to high addresses, leaving a hole between the two areas.

20. Address Map

As part of the specification, a bus standard specifies exactly which type of hardware can be used at each address. We call the specification an address map. Note that an address map is not the same as an address assignment because a map only specifies what assignments are possible. For example, FIG. 14 gives an example of an address map for a sixteen-bit bus.
In the figure, the two areas of the address space available for memory are not contiguous. Instead, a hole is located between them. Furthermore, a hole is located between the second memory area and the device area. When a computer system is constructed, the owner must follow the address map. For example, the sixteen-bit bus in FIG. 14 only allows two blocks of memory that total 32,768 bytes. The owner can choose to install less than a full complement of memory, but not more.

The device space in a bus address map is especially interesting because the space reserved for devices is often much larger than necessary. In particular, most address maps reserve a large piece of the address space for devices, making it possible for the bus to accommodate extreme cases with thousands of devices. However, a typical computer has fewer than a dozen devices, and a typical device uses only a few addresses. The consequence is:

In a typical computer, the part of the address space available to devices is sparsely populated - only a small percentage of the available addresses are used.

21. Program Interface to a Bus

From a programmer's point of view, there are two ways to use a bus. Either a processor provides special instructions used to access the bus, or the processor interprets all memory operations as references to the bus. The latter is known as a memory-mapped architecture. As an example of using a memory-mapped approach, consider the address assignment of the imaginary light display described in FIG. 10. To turn the device on, the program must store a nonzero value into bytes 10000 through 10003. If we assume an integer consists of four bytes (i.e., thirty-two bits) and the processor uses little-endian byte order, the program only needs to store a nonzero value into the integer at location 10000.
A programmer can use the following C code to perform the operation:

    int *ptr;              /* declare ptr to be a pointer to an integer      */
    ptr = (int *) 10000;   /* set pointer to address 10000                   */
    *ptr = 1;              /* store nonzero value in addresses 10000 - 10003 */

We can summarize:

A processor can use special instructions to access a bus or can use a memory-mapped approach in which normal memory operations are used to communicate with devices as well as memory.

22. Bridging Between Two Buses

Although a single bus offers the advantages of simplicity and lower cost, a given device may only work with a specific bus. For example, some earphones require a Universal Serial Bus (USB) and some Ethernet devices require a Peripheral Component Interconnect (PCI) bus. Clearly, a computer that has multiple buses can accommodate a greater variety of devices. Of course, a system with multiple buses can be expensive and complex. Therefore, designers have created inexpensive and straightforward ways to attach multiple buses to a computer. One approach uses a hardware device, known as a bridge, that interconnects two buses as FIG. 15 illustrates.
The bridge uses a set of K addresses. Each bus chooses an address range of size K and assigns it to the bridge. The two assignments are not usually the same; the bridge is designed to perform translation. Whenever an operation on one bus involves the addresses assigned to the bridge, circuits in the bridge translate the address and perform the operation on the other bus. Thus, if a processor on bus 1 performs a store operation to one of the bridged addresses, the bridge hardware performs an equivalent store operation on bus 2. In fact, bridging is transparent in the sense that processors and devices are unaware that multiple buses are involved.
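The translation a bridge performs amounts to offset arithmetic over the two assigned ranges. The sketch below uses invented base addresses: it assumes bus 1 assigns the bridge the K addresses starting at BUS1_BASE, and bus 2 assigns it the K addresses starting at BUS2_BASE.

```c
#include <stdint.h>

#define K         0x1000u    /* size of the bridged range (illustrative)  */
#define BUS1_BASE 0x500000u  /* bridge's range on bus 1 (illustrative)    */
#define BUS2_BASE 0x020000u  /* bridge's range on bus 2 (illustrative)    */

/* Translate a bus 1 address into the corresponding bus 2 address.
 * Returns 1 and fills in *bus2_addr when the address falls in the
 * bridged range; returns 0 when the bridge ignores the operation. */
static int bridge_translate(uint32_t bus1_addr, uint32_t *bus2_addr)
{
    if (bus1_addr < BUS1_BASE || bus1_addr >= BUS1_BASE + K)
        return 0;
    *bus2_addr = (bus1_addr - BUS1_BASE) + BUS2_BASE;
    return 1;
}
```

With these assumed bases, a store to address 0x500010 on bus 1 would be replayed as a store to address 0x020010 on bus 2, while addresses outside the range are ignored by the bridge.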
23. Main and Auxiliary Buses

Logically, a bridge performs a one-to-one mapping from the address space of one bus to the address space of another. That is, the bridge maps a set of addresses on one bus into the address space of the other. FIG. 16 illustrates the concept of address mapping. In the figure, both bus address spaces start at zero, and the address space of the auxiliary bus is smaller than the address space of the main bus. More important, the architect has chosen to map only a small part of the auxiliary bus address space, and has specified that it maps onto a region of the main bus that is reserved for devices. As a result, any device on the auxiliary bus that responds to addresses in the mapped region appears to be connected to the computer's main bus.

To understand why bridging is popular, consider a common case in which a new device must be added to a computer that already has a bus. If the interface on the new device does not match the computer's main bus, new adapter hardware can be created, or a bridge can be used to add an auxiliary bus to the system. Using a bridge has two advantages: bridging is simpler than adding a bus interface to each new device, and once a bridge has been installed, a computer owner can add additional devices to the auxiliary bus without changing the hardware further. To summarize: A bridge is a hardware device that interconnects two buses and maps addresses between them. Bridging allows a computer to have one or more auxiliary buses that are accessed through the computer's main bus.

24. Consequences for Programmers

As FIG. 16 shows, the sets of mapped addresses do not need to be identical in both address spaces. The goal is to make a bridge so transparent that software does not know about the auxiliary bus. Unfortunately, a programmer who writes device driver software or someone who configures computers may need to understand the mapping.
For example, when a device is installed on an auxiliary bus, the device obtains a bus address, A. As part of the initialization sequence, the device may report its bus address to the driver software†. Because a bridge only translates addresses, communication between the device and the driver that uses data lines will not be changed. Thus, to generate an address on the main bus, the driver software may need to understand how the bridge maps addresses.

25. Switching Fabrics as an Alternative to Buses

Although a bus is fundamental to most computer systems, a bus has a disadvantage: bus hardware can only perform one transfer at a time. That is, although multiple hardware units can attach to a given bus, at most one pair of attached units can communicate at any time. The basic paradigm always consists of three steps: wait for exclusive use of the bus, perform a transfer, and release the bus so another transfer can occur. Some buses extend the paradigm by permitting an attached unit to transfer N bytes of data each time it obtains the bus.

For situations where bus architectures are insufficient, architects have invented alternative technologies that permit multiple transfers to occur simultaneously. Known as switching fabrics, the technologies take a variety of forms. Some fabrics are designed to handle a few attached units; others are designed to handle hundreds or thousands. Similarly, some fabrics restrict transfers so only a few attached units can initiate transfers at the same time, while other fabrics permit many simultaneous transfers. One reason for the variety of architectures arises from economics: higher performance (i.e., more simultaneous exchanges) can cost much more, and the higher cost may not be justified. Perhaps the easiest switching fabric to understand is a crossbar switch. We can imagine a crossbar to be a matrix with N inputs and M outputs.
The crossbar contains N×M electronic switches that each connect an input to an output. At any time, the crossbar can turn on switches to connect pairs of inputs and outputs as FIG. 17 illustrates.
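The crossbar just described can be modeled as a matrix of switches. The sketch below is an illustration only (the sizes and function name are invented): closing the switch at intersection (i, j) connects input i to output j, and a connection is refused when either endpoint is already part of another transfer.

```c
#define N 4   /* number of inputs  (illustrative size) */
#define M 4   /* number of outputs (illustrative size) */

/* One electronic switch per input/output intersection. */
static int sw[N][M];

/* Try to close the switch connecting input i to output j.
 * Fails if either endpoint already belongs to a connection,
 * because each input or output can carry only one transfer. */
static int crossbar_connect(int i, int j)
{
    for (int k = 0; k < M; k++)
        if (sw[i][k]) return 0;   /* input i already busy  */
    for (int k = 0; k < N; k++)
        if (sw[k][j]) return 0;   /* output j already busy */
    sw[i][j] = 1;                 /* close the switch      */
    return 1;
}
```

Note that a second connection between a *different* input/output pair succeeds while the first is still active, which is exactly the simultaneous-transfer property a bus lacks.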
The figure helps us understand why switching fabrics are expensive. First, each line in the diagram represents a parallel data path composed of multiple wires. Second, each potential intersection between an input and an output requires an electronic switch that can connect the input to the output at that point. Thus, a crossbar requires N×M switching components, each of which must be able to switch a parallel connection. By comparison, a bus only requires N+M electronic components (one to connect each input and each output to the bus). Despite the cost, switching fabrics are popular for high-performance systems.

26. Summary

A bus is the fundamental mechanism used to interconnect memory, I/O devices, and processors within a computer system. Most buses operate in parallel, meaning that the bus consists of parallel wires that permit multiple bits to be transferred simultaneously. Each bus defines a protocol that attached devices use to access the bus. Most bus protocols follow the fetch-store paradigm; an I/O device connected to a bus is designed to receive fetch or store operations and interpret them as control operations on the device.

Conceptually, a bus protocol specifies three separate forms of information: control information, address information, and data. In practice, a bus does not need independent wires for each form because a bus protocol can multiplex communication over a small set of wires. A bus defines an address space that may contain holes (i.e., unassigned addresses). A computer system can have a single bus to which memory and I/O devices attach, or can have multiple buses that each attach to specific types of devices. As an alternative, a hardware device called a bridge can be used to add auxiliary buses to a computer by mapping all or part of an auxiliary bus address space onto the address space of the computer's main bus. The chief alternative to a bus is known as a switching fabric.
Although they achieve higher throughput by using parallelism, switching fabrics are restricted to high-end systems because a switching fabric is significantly more expensive than a bus.

EXERCISES

1. A hardware architect asks you to choose between a single, thirty-two-bit bus design that multiplexes both data and address information across the bus, or two sixteen-bit buses, one used to send address information and one used to send data. Which design do you choose? Why?
2. In a computer, what is a bus, and what does it connect?
3. Your friend claims that their computer has a special bus that is patented by Apple. What term do we use to characterize a bus design that is owned by one company?
4. What are the three conceptual categories of wires in a bus?
5. What is the fetch-store paradigm?
6. If the lines on a bus are divided into control lines and other lines, what are the two main uses of the other lines?
7. What is the advantage of having a separate socket for each memory chip?
8. Suppose a device has been assigned addresses 0x4000000 through 0x4000003. Write C code that stores the value 0xff0001A4 into the addresses.
9. If a bus can transfer 64 bits in each cycle and runs at a rate of 66 MHz, what is the bus throughput measured in megabytes per second?
10. What is a switching fabric, and what is its chief advantage over a bus?
11. How many simultaneous transfers can occur over a crossbar switching fabric of N inputs and M outputs?
12. Search the Internet, and make a list of switching fabric designs.
13. Look on the Internet for an explanation of a Clos network, which is used in switching fabrics, and write a short description.
14. What does a bridge connect?