Guide to Computer Architecture and System Design--CENTRAL PROCESSING UNIT: POWER OF ARCHITECTURES (part 2)




IMMEDIATE ADDRESSING

Fig. 3 (b) shows the general format of this scheme. Here the operand, or data, is part of the instruction itself; it occupies the memory locations immediately following the opcode.

Typical uses include initializing a count register with a definite integer for program loops, and comparing a real variable against a fixed set-point value, as in decision making applied to database systems and to critical quantified values in process-instrumentation systems. The points worth mentioning about this method are that it is simple to understand and decode, fast to execute, and often fits to be used in combination with the direct addressing mode.

=========

Instruction | Operand

(Operand field: an address value, a count index, or arithmetic/logical data)

Fig. 3 (b) Immediate addressing

=========

INDIRECT ADDRESSING

The address part of the instruction merely links to another location (a register or another memory address) whose contents point to the data in use; see Fig. 3 (c). The effective-address calculation therefore needs extra fetches and slows down access. The following points highlight the characteristics of this mode: it is a slower method of access; it is highly flexible for data routing; and it is often combined with other addressing schemes such as index and base-register accessing.
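The extra fetch that distinguishes indirect from direct and immediate addressing can be sketched with a toy memory model. The memory layout, addresses, and instruction fields below are hypothetical, chosen only to make the mechanics visible:

```python
# Toy memory model: word addresses map to stored values.
memory = {100: 555, 200: 100, 300: 42}

def fetch_operand(mode, field, memory):
    """Return the operand for an instruction whose address field is `field`."""
    if mode == "immediate":
        # The operand is the field itself -- no memory access,
        # which is why this mode is fast.
        return field
    if mode == "direct":
        # One memory access: the field is the operand's address.
        return memory[field]
    if mode == "indirect":
        # Two memory accesses: the field points to a location that
        # in turn holds the operand's address.
        return memory[memory[field]]
    raise ValueError("unknown mode: " + mode)

print(fetch_operand("immediate", 7, memory))    # the constant 7 itself
print(fetch_operand("direct", 100, memory))     # value at address 100
print(fetch_operand("indirect", 200, memory))   # value at the address stored in 200
```

Counting the `memory[...]` lookups per mode (0, 1, and 2) shows directly why indirect addressing is the slowest of the three yet the most flexible for data routing.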


Fig. 3 (c) Indirect addressing

STACK ADDRESSING

The stack grows physically downward; Fig. 3 (d) gives the stack property. In most conventional sequential machines, either system or user stacks are accessed by this method. Here the contents of a specific register, called the stack pointer, point to the stack data. Automatic storing of subroutine return addresses and saving of essential register data for reuse are routine applications of the scheme. Apart from this, some computers adopt a full stack architecture for data flow in compute-bound activities, extending into the SIMD domain as well. Instructions of this type are also referred to as 0-address instructions.

The features are: specific instruction mnemonics like PUSH and POP identify this mechanism; indirect system calls are attended to at the appropriate links; and multiple stack pointers fit well for data switching in multiuser time-shared machines.

=========

Stack instruction examples: PUSH, POP

Stack pointer --> top of stack

Bookkeeping becomes the burden of the stack user, the exception being stack architectures.

Fig. 3 (d) Stack addressing

=========
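The downward-growing stack of Fig. 3 (d) can be sketched as follows. The base address and the saved value are hypothetical; a real machine performs these steps in hardware via its PUSH and POP instructions:

```python
class Stack:
    """Minimal model of a downward-growing stack with a stack pointer."""

    def __init__(self, base=1000):
        self.memory = {}
        self.sp = base          # stack pointer starts at the high (base) address

    def push(self, value):
        self.sp -= 1            # the stack grows physically downward
        self.memory[self.sp] = value

    def pop(self):
        value = self.memory[self.sp]
        self.sp += 1            # the stack shrinks back upward
        return value

# One routine application noted above: saving a subroutine return address.
s = Stack()
s.push(0x42)        # caller saves the (hypothetical) return address
# ... subroutine body runs here ...
ret = s.pop()       # the return jumps back to the saved address
```

Note that the instruction itself names no memory address: the stack pointer register supplies it implicitly, which is why this scheme is called 0-address.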

IMPLIED ADDRESSING

Machine-control instructions at the assembly level, and system calls at the shell level of various operating systems that invoke the special interrupt-driven and DMA transfer schemes, are embedded properties of the computer system; their operands are implied by the instruction rather than stated in it.

INDEX AND BASE REGISTER ADDRESSING

Here, registers specifically assigned the status of base and index registers are used to assist in data fetching and data parallelism. These addressing methods contribute much of what classifies a machine as a complex-instruction type, besides finding wide application in pipelined operation.

RELATIVE ADDRESSING

When a program is in execution, the program counter (PC) register keeps track of the current instruction for overall control. PC-relative addressing can go a long way in dynamic multitasking activities, and it suits data-flow computations involving both independent and interdependent calculations. Examples lie in distributed data processing, in the realm of module linking toward a parallel processing environment.
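The address arithmetic behind PC-relative addressing is a single addition: the branch target is the current program counter plus a signed displacement carried in the instruction. Because only the displacement is stored, the code can be loaded anywhere in memory unchanged, which is what makes this mode attractive for relocation and multitasking. The addresses below are hypothetical:

```python
def relative_target(pc, displacement):
    """Effective address under PC-relative addressing."""
    return pc + displacement

print(hex(relative_target(0x2000, 0x10)))    # forward branch
print(hex(relative_target(0x2000, -0x20)))   # backward branch, e.g. a loop
```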

The above listed addressing mechanisms used in a varying mix of combinations will reflect the instruction set complexity and power of a system.

It has been observed from experience that certain code portions of a program are used repeatedly, as in do-loops; these may also include system function calls, used either iteratively or recursively in an arithmetic-bound situation to converge to the solution. Also, when the problem is one of data processing with file handling, the operands (the data) have to be fetched continuously at discrete intervals of time. Under these situations, the speed of memory goes a long way toward enhancing the throughput of the system. In this context, the very high speed memory unit called the cache memory (more often pronounced "CASH" memory) has come to stay, balancing with the CPU speed so that a good data rate is maintained in the realm of managing good operating systems.

CACHE MEMORY AND ASSOCIATIVE LINKING

Fig. 4 depicts the instruction time distribution. It is often noticed that the fetch cycle occupies almost all of the time, whereas execution takes place almost instantaneously, in one clock pulse. The instruction times depend on the length of the instruction, the type of instruction, and at times the inherent delay of synchronization (leading to wait states).


Fig. 4 Instruction cycle (opcode fetch and operand fetch)

Systems with high performance requirements often use a high-speed buffer, or cache memory, between the CPU and the primary semiconductor memories. Instruction look-ahead is mandatory for powerful processors; cache memory provides this capability as well as a fast scratch-pad area for operands. An access time of less than 100 ns is common for caches, compared to 0.5 to 1 µs for the primary memories. While a job is running, the contribution of the cache is determined by factors like cache size, the addressing mechanism, and the replacement policy. The utility of a cache system is often dictated by the hit ratio "h", defined as:

h = (number of references satisfied by the cache / total number of memory references) x 100

for most of the read operations performed by the CPU.

Caches of 8 to 16 Kbytes, with block sizes of 64 to 128 bytes and a set-associative mapping scheme, yield good hit ratios of over 95% in virtual-memory management systems that employ paging and segmentation. The attributes discussed above reflect the general power of a machine.
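The interplay of set-associative mapping, a replacement policy, and the hit ratio "h" can be sketched with a small model. The set count, associativity, and block size below are hypothetical toy values (a real 8-16 Kbyte cache would use the 64-128 byte blocks mentioned above), and LRU stands in for whatever replacement policy a given machine uses:

```python
from collections import OrderedDict

class SetAssocCache:
    """Toy set-associative cache with LRU replacement, counting hits."""

    def __init__(self, num_sets=4, ways=2, block=16):
        self.num_sets, self.ways, self.block = num_sets, ways, block
        # One OrderedDict per set: keys are tags, insertion order tracks LRU.
        self.sets = [OrderedDict() for _ in range(num_sets)]
        self.hits = self.refs = 0

    def access(self, addr):
        self.refs += 1
        block_no = addr // self.block
        idx, tag = block_no % self.num_sets, block_no // self.num_sets
        lines = self.sets[idx]
        if tag in lines:
            self.hits += 1
            lines.move_to_end(tag)          # mark as most recently used
        else:
            if len(lines) >= self.ways:
                lines.popitem(last=False)   # evict the least recently used line
            lines[tag] = None

    def hit_ratio(self):
        return 100.0 * self.hits / self.refs

# A loop that re-reads the same small array: after the first pass every
# access hits, so locality of reference drives h toward 100%.
cache = SetAssocCache()
for _ in range(10):
    for addr in range(0, 64, 4):
        cache.access(addr)

print(round(cache.hit_ratio(), 1))   # 97.5 -- only the 4 first-touch misses
```

The 97.5% figure for this contrived loop illustrates, not proves, the over-95% hit ratios quoted above: it is locality of reference in loops and file scans that the cache exploits.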

3. LANGUAGE SUPPORT ENVIRONMENT

The system's support is decided by the nature and number of programming-language environments it can sustain. This calls for secure systems that do not incur severe penalties in execution time or storage space. In this context, a batch-processing machine suits these points, attracting diverse users in an interleaved timing fashion, while interactive machines become proficient systems by converging on one variety of programming environment to achieve the best of distributed processing. An operating system, in general, consists of the kernel, the shell, and utilities.

Compiler construction is a difficult task that can be made rigorous by using a formal syntax. A notation used to describe the syntax or semantics of a language is termed a metalanguage; Backus-Naur Formalism (BNF) is a well-known metalanguage for specifying concrete syntax. Dangling references and uninitialized locations are common insecurities because they are difficult to prevent by language design, and their detection by language processors can be quite difficult and expensive. Even in the domain of database applications, no standardization is easily possible that would define a specific language environment at the user's disposal, mainly because of the broad band (range) of specifications to be met in real time.

So, obviously, the burden falls on the programmer, who has to select the language with which to communicate and express the problem application. This stimulates a rapid and ever-increasing demand for software experts in the computing market around the globe.


Fig. 5 Trends in computing


Fig. 6 Potential growth

Thus, precisely, the CPU power will to a great extent also reflect consumer selectivity for architectural designers, because the facts stated in sections 2-2 and 2-3 have to go in parallel under the guidance of expert and consultant teams. Fig. 5 gives the trends in computing and communication needs and hardware during the lifetime of computers. With more interactive and online computer usage, the emphasis is on developing methods for formally specifying programs and verifying that they meet their specifications. This factor has led to more reliable programs, which reduce the need for testing and debugging. Formal semantics is a tool which can help language designers achieve these objectives. Program decomposition and reuse of precompiled modules are sought after in differing language environments. Packages have already captured the customer market, with natural competition driven by factors like user ease and system friendliness. Working at the assembly-language level will truthfully generate good system software in an effective manner. Parsing methods are compared by criteria such as generality, ease of writing a grammar, ease of debugging a grammar, error detection, space requirements, time requirements, and ease of application of semantics.

STATIC STORAGE ALLOCATION

Here, it is necessary to be able to decide at compile time exactly where each data object will reside at runtime.

In this case, dynamic arrays and variable - length strings are disallowed.

WATFIV is a one-pass compiler. A system or language which describes another language is known as a metalanguage. The metalanguage used to teach German at most universities is English, while the metalanguage used to teach English is English itself.

Efficiency is important after, not before, reliability.

NFA

Non-deterministic finite-state automata (NFA) are similar to DFAs except that there may be more than one possible state attainable from a given state for the same input character.
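Because several states may be reachable at once, simulating an NFA means tracking a *set* of current states rather than a single one. The automaton below, which accepts strings ending in "ab", is a hypothetical example chosen for brevity:

```python
# Transition table: state -> {input char -> set of possible next states}.
delta = {
    0: {"a": {0, 1}, "b": {0}},   # non-determinism: two choices on "a"
    1: {"b": {2}},
}
accepting = {2}

def nfa_accepts(string, start=0):
    """Simulate the NFA by carrying the set of all reachable states."""
    states = {start}
    for ch in string:
        # Union of every transition available from every current state;
        # missing entries mean that path simply dies out.
        states = set().union(*(delta.get(s, {}).get(ch, set()) for s in states))
    return bool(states & accepting)

print(nfa_accepts("aab"))   # True: ends in "ab"
print(nfa_accepts("ba"))    # False
```

A DFA simulator would carry exactly one state through the same loop; the subset of states here is precisely what the classical subset construction turns into single DFA states.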

It is often easier to recover from context-sensitive errors than from context- free errors.

Two methods of parsing are Top - down and Bottom - up schemes.
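A top-down parser can be written by hand as one function per grammar rule, descending from the start symbol toward the tokens; a bottom-up (shift-reduce) parser works in the opposite direction and is usually table-generated. The tiny grammar below (expr -> term (('+'|'-') term)*, term -> NUMBER) is a hypothetical example of the top-down style:

```python
def parse(tokens):
    """Recursive-descent (top-down) parser that also evaluates the expression."""
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def term():
        # term -> NUMBER
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return int(tok)

    def expr():
        # expr -> term (('+' | '-') term)*
        nonlocal pos
        value = term()
        while peek() in ("+", "-"):
            op = tokens[pos]
            pos += 1
            rhs = term()
            value = value + rhs if op == "+" else value - rhs
        return value

    result = expr()
    if pos != len(tokens):
        raise SyntaxError("trailing input at token %d" % pos)
    return result

print(parse(["1", "+", "2", "-", "3"]))   # 0
```

Each function mirrors one production, which is why the top-down scheme is prized for ease of writing and debugging a grammar, two of the comparison criteria listed earlier.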

In general, data-processing applications do not require a very dynamic run-time environment, and therefore a language like COBOL is often used.

The Burroughs 6700, however, supports dynamic storage allocation with machine-level instructions. The apparent overhead of dynamic storage addressing is significantly reduced in such machines.

COMPACTION works by actually moving blocks of data from one location in memory to another so as to collect all the free blocks into one large block. An intermediate source form is an internal form of a program created by the compiler while translating the program from a high-level language to assembly-level or machine-level code.
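The compaction step can be sketched as sliding every allocated block toward low addresses so the free space coalesces at the top. The (start, size) block records and the memory size are hypothetical; a real allocator would also have to update every pointer into the moved blocks:

```python
def compact(blocks, memory_size):
    """Slide allocated blocks together; return (relocated blocks, free block)."""
    next_free = 0
    relocated = []
    for start, size in sorted(blocks):        # move blocks in address order
        relocated.append((next_free, size))   # block now begins at next_free
        next_free += size
    # Everything above the packed region is now one contiguous free block.
    free_block = (next_free, memory_size - next_free)
    return relocated, free_block

# Three scattered blocks in a 100-word memory:
moved, free = compact([(0, 10), (30, 5), (50, 20)], 100)
print(moved)   # [(0, 10), (10, 5), (15, 20)]
print(free)    # (35, 65) -- one large hole instead of three small ones
```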

e.g., register allocation. Compilers which produce a machine-independent intermediate source form are more portable than those which do not.

A program is adaptable if it can be readily customized to meet several user and system requirements.

Semantic analysis, code optimization, and machine-dependent adaptation are based mainly on the ability to tap the capabilities of a CPU structure.

The PDP-11 is a 16-bit minicomputer (1970s).

The MC68000 has more addressing modes and registers.

COMPILER - COMPILERS

Compiler-compilers, also called compiler generators, are used as tools in this domain. Fig. 7 shows a translator writing system.

Functions: scanning, parsing, and code generation.

Good lexical analyzers and parsers can now be produced automatically for many programming languages. More work, however, remains to be done on good error-handling strategies.

===========

Inputs: language specification; semantic description or machine specification

Tool: compiler-compiler

Output: a translator taking a source program to object code (translator writing system)

Fig. 7 Compiler-compilers

===========

With the arrival of VLSI technology, the question that often arises is what machine should be built to implement a given language. Compiler-compilers will play an important role in answering this question. CPU resource utilization can be well evaluated by making use of standard software metrics, which go a long way in absorbing the power of CPU architectures.

4. WORKING CONFIGURATIONS

According to M.J. Flynn (1966), the working of computers falls into the following classifications:

Single instruction, single data (SISD);

Single instruction, multiple data (SIMD);

Multiple instruction, single data (MISD); and

Multiple instruction, multiple data (MIMD).
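The SISD/SIMD distinction in the taxonomy above can be illustrated with a toy sketch: the same "add 1" instruction applied to one datum per step versus to a whole vector of data in a single conceptual operation. Real SIMD hardware performs the vector case in parallel lanes; plain Python only models the classification, not the parallelism:

```python
def sisd_increment(data):
    """SISD style: one instruction stream consuming one datum per step."""
    out = []
    for x in data:
        out.append(x + 1)   # one instruction, one datum, repeated serially
    return out

def simd_increment(data):
    """SIMD style: one instruction applied to multiple data elements at once."""
    return [x + 1 for x in data]   # conceptually a single vector operation

print(sisd_increment([1, 2, 3]))
print(simd_increment([1, 2, 3]))
```

Both produce the same result; the taxonomy classifies *how* the instruction and data streams are organized, not *what* is computed, which is why supercomputers depart from the strict single-stream von Neumann model to gain speed rather than new functionality.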

In order to achieve computing rates of 100 Mflops with adequate precision, supercomputer architectures have to depart from the strict von Neumann concept. Hence the decomposition of computations into tasks for distributed processing becomes mandatory, which invites the present concepts and theories on parallel processing trends.

Some major attributes like pipelining, vectorization, array processing, and algorithms will play a continuing role in the evolving strategies of architectural design. There are already example systems in existence suited to the different attributes. These varying parameters are discussed in sections 4 and 5 of the book. Multiple-CPU systems of the same or differing architectures will find a place in coordinating processes and distributed computing environments to meet the desired turnaround times.

Presently, dedicated microprocessor-based instrumentation and real-time systems are evolving owing to their reliability and affordable cost. Though the inherent power of CPUs is responsible for system performance, the careful management of memories (by operating-system software, specifically) plays the dominant role in accelerating the throughput of a system. It is often remarked that memory cost is the machine cost. The organization of memories is a key factor in projecting the CPU power to the user, and it is taken up in the following section 3 on Memory Organization.

Keywords

Microprocessor, assembler, portability, clock, wordlength, relocation, RISC, memory-mapped I/O, throughput, opcode, stack, program counter, cache memory, metalanguage, parsing, compaction, compiler-compiler, pipelining.

QUIZ

1. Distinguish debugging by means of breakpoints from single-step operation at the assembler level.

2. Correlate the usage of high-level language programs and assembly-language programs from the viewpoint of varying users, with respect to ease of use.

3. What is a Cross - Assembler?

4. What is in-circuit emulation for language portability on microcomputer development systems?

5. Comment on the length of instruction cycles and clock speed for CPU designs.

6. Write on the use of stacks at assembly language level as an architect.

7. What are the attributes of an instruction format?

8. Devise an algorithm for binary to BCD conversion.

9. Bring out the differences between I/O program-controlled transfer and DMA transfer.

10. Explain instruction word and data word formats.

11. Explain immediate and indirect addressing modes with assembly level instructions.

12. Assess the usefulness of bit-slice processors for compute-time reliability in single-user serial organizations.

13. What is a RISC machine?

14. What is "LABEL" declaration in programs? How does this help in relocation?

15. Is the stack addressing mode often referred to as zero-address instruction? If so, why?

16. Explain the use of parsing tables in the recognition of syntax.

17. Write notes on error detection and recovery phase of a compiler.

18. Write notes on catching the syntax versus semantic errors at compute time, valuing the measurements as a software engineer.

19. Mention and explain some dynamic debugging tools for program interaction.

20. Explain the role of multiprocessors in high-speed computing. Also distinguish loosely coupled and tightly coupled multiprocessor systems.

21. Mention the role played by demultiplexer in interrupt I/O systems. What is parallel I/O activity?

22. Enumerate the factors contributing to the turn around time on a distributed processing bench.

23. Account for the usage of integer arithmetic and the inaccuracies it can introduce in real accounting applications.


Updated: Wednesday, March 8, 2017 18:57 PST