Guide to Computer Architecture and System Design--DESIGN METHODOLOGY AND EXAMPLE SYSTEMS (part 2)



8. SUPERCOMPUTERS

Advances in technology by way of better devices offer only a modest speedup. Gallium arsenide is employed in supercomputers. A fault-tolerant system halts an entire pipeline even if only one pipe segment fails. The expected clock period of 2 nanoseconds will aid supercomputers. FORTRAN fits well as a language offering a good vectorization ratio, besides being supported by efficient translators.

Amdahl's law states that when a computer has two distinct modes of operation, a high-speed mode and a low-speed mode (example: the MAX/MIN modes of the 8086 CPU), the overall speed is dominated by the low-speed mode.
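
In its usual quantitative form (a sketch supplied here for concreteness; the text above states only the qualitative law): if a fraction f of the work can use the fast mode, which is k times faster, the overall speedup is

S = 1 / ((1 - f) + f/k)

For example, with f = 0.9 and k = 10, S = 1/(0.1 + 0.09) ≈ 5.3, well short of 10; and even as k grows without bound, the speedup can never exceed 1/(1 - f) = 10, since the slow 10% dominates.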

A 3D hypercube contains 8 compute elements (CEs). A general hypercube structure of dimension d has n = 2^d processors, and the number of links is n*d/2. The hypercube interconnection of CEs is employed in commercial multiprocessing and has a maximum distance equal to d. Convex is a shared-memory multiprocessor made by Convex Computer Corporation, U.S.A. CDAC's PARAM comprises 256 CEs, each with 4 Mbytes of main memory; it is a message-passing multicomputer system under the PARAS environment, whose augmentive tools are used in process management, debugging and profiling. Benchmarks are becoming available today to evaluate supercomputer performance.
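
These hypercube relations are easy to check mechanically. The following sketch (Python; the function names are illustrative, not from any system above) exploits the fact that the binary addresses of neighboring nodes differ in exactly one bit:

def hypercube_properties(d):
    """Return (nodes, links, diameter) of a d-dimensional hypercube."""
    n = 2 ** d
    return n, n * d // 2, d

def neighbors(node, d):
    """Neighbors of a node: flip each of the d address bits in turn."""
    return [node ^ (1 << bit) for bit in range(d)]

print(hypercube_properties(3))   # (8, 12, 3): the 3D cube of 8 CEs above
print(neighbors(0b000, 3))       # [1, 2, 4]: one link per dimension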

Ab-initio methods are used to study the chemical properties of various clinically active drug molecules where experimentation is difficult. Cyclophosphamide, one of the most widely used anti-cancer and immunosuppressive agents, acts on a wide variety of tumors and leukemias. Typical matrices in computational chemistry are sparse, with about 1% non-zero elements; sparse-matrix algorithms can tremendously speed up the turnaround time in this case.

9. OPERATING SYSTEM REVIEW

An operating system makes good sense, acting as the senses of the hardware constituting the physical machine. The routine tasks of the operating system include job setup and changeovers, context switching and time sharing by jobs, and steering batch processes towards good throughput.

The division of jobs like processor, memory and device management by the operating system (OS) was primarily aimed at improving system performance, besides distributed data maintenance and the construction of good editors. Interruptions during any process grow more frequent from Unibus to Multibus architectures, and the interruption class dictates the salient features of the respective OS for the end application. These interactive, distributed processes are expected to cater to reliability and maintenance requirements. PC-DOS is an example of a single-user OS, while batch systems cater to voluminous inputs on a queue basis. Presently, real-time operating systems grow in size and complexity for the on-line user category, wherein the dual needs of accuracy and speed have to be met. Unix is one example of a time-sharing multi-user environment of the Unibus category.

The hard disk device gave the impetus for a self-resident monitor with large supervisory support, and an opening for file management systems in today's information technology race. Job scheduling and device management are primary concerns in client-server modeling, met by the clever use of dispatchers. Memory management protocols are a challenging sphere, from scientific computing at one end to physical database crashes in knowledge bases at the other. Buffering is a continuous activity for all input-output, from the use of a shell command to a big real-time process like a robot with programmable automation benches. The real-time distributed operating system (RT DOS) has been developed to provide a hierarchical set of functions that supply application processes with system management, control and network services for a wide range of real-time processing requirements.

10. PARALLEL & DISTRIBUTED COMPUTING

Problems involving array processors belong to the SIMD category of parallel processes (an example being computer synthesis of hardware designs). Shared memory in message-passing tasks has to accommodate structured data flow and shorter pipe lengths to cope with parallel computing.

The vector processor Intel 80860, with a 64-bit word length, uses RISC topology with on-chip cache memory under the UNIX operating system and is fit for speeding up floating-point arithmetic. Performance evaluation of parallel computing is a research domain that has to account for the evolution of parallel program constructs, optimization with structural systems (more application-specific), and the availability of distributed operating systems in the real MIMD category. Distributed computer systems represent an evolutionary path from early operating systems. Resource networks such as ARPANET were a beginning for distributed computing. The mainframes would operate in the usual fashion and link into the network to send or receive messages based on usage. The transmission lines were high-speed leased lines controlled by the network interface control processors. Routing and security aspects add strength to computer communications.

Reliable transmissions involve CRC checks, coding and retransmission schemes. The evolution of languages for distributed processes is towards object-orientation.

11. SYSTEMS SOFTWARE

The term firmware is used for software stored in read-only storage devices. Utility programs serve for program development, debugging and documentation. Linkers with good relocation abilities and fast loaders are desirable features of a computing installation. If a compiler runs on a computer for which it produces the object code, it is known as a self or resident compiler. The tools for debugging are simulators, logic analyzers and trace routines with memory-dump facilities. Ada is used in the control of multiple concurrent processes and embeds the best features of Pascal, Algol and PL/1.

An assembler produces object code for programs supplied in the assembly language of a machine. A two-pass assembler collects all labels during the first pass and assigns addresses to the labels, counting their positions from the starting address. On the second pass, the assembler produces the machine code for each instruction and assigns an address to each. Certain directives used in the assembler environment are discussed in the next paragraph. Standard directives are commonly used to define constants, define data areas, and reserve blocks of memory.

Examples:

Label: EQU expression. At assembly time the expression is evaluated and assigned to the label, and an entry to that effect is made in the symbol table. A label can be defined by an EQU directive only once in a program.

Data: SET 0A(H). Constants are defined by the SET directive and can be changed anywhere to another value by reassignment.

ORG 0100(H). The first byte of object code will be assigned to address 0100(H), and all succeeding bytes will occupy higher addresses until another ORG statement is encountered. The END directive makes the assembler terminate the present pass.

Message: DB 'This is a message' is a valid string definition. DS is used to reserve data storage locations as specified in the define-storage directive. Conditional assembly is allowed by the IF and ENDIF words. EPROM programmers are available on dedicated systems, and an MDS (microcomputer development system) costs more, especially to support high-level conventional languages with rich and complex debugging and diagnostic software. Programs written in HLLs (high-level languages) are easy to maintain and portable, though they often occupy more memory for execution. Pascal and LISP serve as interpretable languages for machine tools in sophisticated electronic systems.
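
A minimal sketch of the two-pass scheme described above (Python, with a toy one-byte instruction set and hypothetical mnemonics, not a real assembler): pass one collects labels into the symbol table while tracking the location counter past ORG directives; pass two emits the machine bytes once every label, including forward references, is known.

# Toy two-pass assembler sketch (hypothetical mnemonics; ORG operands in hex).
OPCODES = {"NOP": 0x00, "HLT": 0x76, "JMP": 0xC3}

def assemble(lines):
    symbols, lc = {}, 0
    # Pass 1: record the address of every label; ORG resets the location counter.
    for line in lines:
        tokens = line.split()
        if tokens and tokens[0].endswith(":"):
            symbols[tokens[0][:-1]] = lc
            tokens = tokens[1:]
        if not tokens:
            continue
        if tokens[0] == "ORG":
            lc = int(tokens[1], 16)
        else:
            lc += 2 if tokens[0] == "JMP" else 1   # JMP carries a one-byte operand
    # Pass 2: with all labels known, forward references resolve to addresses.
    out, lc = [], 0
    for line in lines:
        tokens = [t for t in line.split() if not t.endswith(":")]
        if not tokens:
            continue
        if tokens[0] == "ORG":
            lc = int(tokens[1], 16)
            continue
        out.append((lc, OPCODES[tokens[0]]))
        lc += 1
        if tokens[0] == "JMP":
            out.append((lc, symbols[tokens[1]]))   # operand: the label's address
            lc += 1
    return out

program = ["ORG 10", "Start: NOP", "JMP Start", "HLT"]
print(assemble(program))   # [(16, 0), (17, 195), (18, 16), (19, 118)]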

12. SOFTWARE RELIABILITY

The complexity of many software systems has become unmanageable, and the natural consequences are delays and unreliability. A program must react to errors: it must report errors or malfunctions without shutting the entire system down. Software correctness is concerned with the consistency of a program and its specification: a program is correct if it meets its specification, otherwise it is incorrect. When considering correctness, it is not asked whether the specification corresponds to the intentions of the user. A program that can be extended to tasks other than those for which it was designed and developed is the better goal in programming houses. In software development, programming time is given major weight, for the product has to meet the essential constraints of speed, reliability and ease of use.

Reliability is (at least) a binary relation on the product space of software and users, and possibly a ternary one on the space of software, users and time. System reliability must take into account the influence of hardware errors and incorrect inputs. A software system is said to be robust when the consequences of an input or hardware error related to a given application are inversely proportional to the probability of the appearance of that error in that application. The cost to the user, c, is a measure of the error effect.

c = 1/p, where p is the error probability. Thus errors which are expected to occur frequently should only have a minor effect on the given application: an error with probability p = 0.1 may cost the user at most c = 10, while a rare error with p = 0.001 may cost up to 1000. A list of critical error effects and of probable error causes must be available. It is better practice to record critical as well as noncritical errors for the performance evaluation of systems.

Reconfiguration follows the occurrence of permanent errors. Though software testing helps in combating permanent faults, the elimination of other errors by software means may not be satisfactory because of prohibitive testing costs for a quality pass. Normally, permanent faults and failures are repeatable, and only transient errors are difficult to catch. EXDAMS (Extendable Debugging and Monitoring System) meets the twin requirements of repeatability and observability, useful with real-time systems.

On-line debuggers provide programmers ample facilities to control and observe program flow. The facilities include setting and deleting of breakpoints, error traps, interrupt simulation and performance report generation. Any modification performed on a system after acceptance, to enhance a particular software status, can be called software maintenance. The supervision effort required to determine development progress can be comparable with the development effort itself.

13. TEXT PROCESSES

The aim of a text processing system is to organize and control large volumes of text so that they can be used for the efficient communication of information. A text processing system consists of a number of text sources, a means of storing text, a means of extracting information from the stored text, and a group of users.

It is likely that in time every business will find a need for a computerized text processing system, just as it now has need for a typewriter. Privacy and security are essential ingredients of text processor usage. The functions of text data processing involve document retrieval, data retrieval and question answering, with differing kinds of text analysis as the associated objectives. Because of the voluminous growth of data, the following factors are worth tackling in a text processor: optimization, enhancement and innovation.

The quality of documentation is a good indicator of the quality of the software.

Good documentation should be precise, to the point, and complete. Good management towards reliable software throughout the product life-cycle shall make the package easily marketable and hence reach standards. Line-, string- and cursor-oriented editors assist software development activities. With present-day keyboards, a large number of keys are added on to increase and improve the editing functions available on a system.

A software error is present when the software does not meet the end-user's reasonable expectations. The best way to dramatically reduce software costs is to reduce the maintenance and testing costs. Fault avoidance is concerned with the detection and removal of errors at each translation step.

The concept of fault detection is accepted, as of today, for application software projects. The PRIME system is a virtual-memory multiprocessing system developed at the University of California at Berkeley, wherein data security is a primary goal. In the eyes of language designers, a language must not encourage obscurity, and should in effect decrease the programmer's opportunities to make mistakes.

Assertion statements at runtime help both debugging and modification. The Simon monitor is a production tool developed by the Mitre Corporation for use during the programming process. A one-language formula for a development team is important for portability at one end and environment preferences at the other extreme. Language compilers must provide exhaustive checks for syntax errors and reduce the semantic gap. The most common method of proving programs is the informal method of inductive assertions; proofs of non-numerical programs are much more difficult than proofs of numerical programs, and the difficulty of the proof is proportional to the richness and complexity of the programming language. Computer architecture has an indirect yet important effect on software reliability, for hardware reliability is often ascertained with the help of software programs. Explicit program specification by self-documentation ensures avoidance of program misinterpretation.

14. DATA COMMUNICATION

The word robot was first used in 1921 by Karel Capek, the Czech playwright, novelist and essayist. Recovery from errors, which is fairly straightforward on basic data communication links, becomes highly complex in a distributed processing environment.

The use of distributed processes typically permits a simpler program structure and more computational power in most applications. Error-correcting codes are sometimes used for data transmission, especially when the channel is simplex. Security and privacy govern the performance of a network. Some bus structures are listed in Table 2.


Table 2

The achievable data rate of a channel is worth mentioning, thanks to Nyquist and Shannon. H. Nyquist, in 1924, derived an equation expressing the maximum data rate of a finite-bandwidth noiseless channel.

If the signal consists of V discrete levels, Nyquist's theorem states: maximum data rate = 2H log2 V bits/sec, where H is the bandwidth in Hertz. For example, a noiseless 3 kHz channel cannot transmit binary signals (V = 2) at a rate exceeding 6000 bits/sec.

According to Claude Shannon (1948), the maximum data rate of a noisy channel whose bandwidth is H Hertz and signal-to-noise ratio is S/N is given by

maximum number of bits/sec = H log2 (1 + S/N)

If H = 3000 Hertz and the signal-to-noise ratio is 30 dB (S/N = 1000), then the speed ≈ 30,000 bits/sec.
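
Both limits are easy to evaluate mechanically (a Python sketch; the function names are illustrative):

import math

def nyquist_limit(bandwidth_hz, levels):
    """Noiseless channel: 2 * H * log2(V) bits/sec."""
    return 2 * bandwidth_hz * math.log2(levels)

def shannon_limit(bandwidth_hz, snr_db):
    """Noisy channel: H * log2(1 + S/N), with S/N given in dB."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr)

print(nyquist_limit(3000, 2))    # 6000.0 bits/sec for a binary signal
print(shannon_limit(3000, 30))   # about 29,900 bits/sec at 30 dB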

15. EPILOGUE

Both hardwired and microprogrammed control employ counters, sequencing registers and combinational logic in generating the control configuration of a computing system. The hardwired approach has been identified with high-speed scientific computing and reliable networks; delays can be well managed with this topology.

Microprogramming splits each macroinstruction into well-defined control operations, given good design by the microprogrammers. Microinstruction lengths vary depending on the concurrency of micro-operations, the coding of control data, and microinstruction sequencing. Multiplexers are often employed to steer the micro-control fields, besides governing the functional attributes in SIMD (single instruction, multiple data) as well as pipelining controls.

The CPU pins also dictate a good amount of control complexity for interactive on-line as well as batch computers. The system commands operated at user nodes activate the interface hardware units to a great extent, and they also occupy a fair portion of memory in directly executable architectures based on package environments.

File transfers often utilize direct memory access mechanisms, and network servers must deploy more interrupt-control pins on the motherboard, increasing the control complexity. While good processor resource management is an embedded property of RISC workstations, cache memories carry the weight of retaining system speeds. Redundancy approaches, by standby or other means in hardware, help fault detection and reduce system failures.

16. FUTURE DIRECTIVES

Architectures can be language-based, but should not be biased on multiprogramming domains for multiuser environments. Front-end translators must cope with supporting many programming languages for systems design and distributed processing. Faster servers and secure data communications shall pave the way for business data processing applications.

A large number of cache partitions and user memory segments, with a good job-scheduling policy, will go a long way to assist both on-line and off-line users and to optimize user resources in a multiuser scenario. Program measurement tools and good diagnostic debuggers shall increase the productivity in meeting standards (ISO 9000). Effective fault-tolerance measures continuously evolve in the software scenario to make machines perform better. Future compilers have to face challenges like the layout of data to reduce memory-hierarchy and communications overhead, and the exploitation of parallelism.

Ignoring I/O will lead to wasted performance as CPUs get faster. The future demands on I/O include better algorithms and organizations and more caching, in a struggle to keep pace. Latency and throughput never go together.

17. FAULT-TOLERANCE FEATURES

Earlier, fault-tolerant computers were limited to military, aerospace and telephony applications, where system failures meant significant economic impact or loss of life.

But today, with the increased momentum in all areas, fault-tolerant techniques provide relief and longer service under harsh usage and environments. Toleration of transient faults is felt essential in microprocessor-based instrumentation systems, while permanent hardware faults can never be tolerated in on-line real-time applications. Large mainframe manufacturers like Amdahl and IBM use redundancy to improve user reliability as well as to assist field service personnel in fault isolation; minicomputers have gone in for Hamming error-correcting codes on memory and special LSI chips like cyclic redundancy code encoder/decoders. The effect of defects can be overcome through the use of temporal redundancy (repeated calculations) or spatial redundancy (extra hardware or software). Error detection is a specific technique employed to cover hardware faults, even at user level, predominant for business data processes.

The range of fault-tolerance techniques adopted varies in response to the different reliability requirements imposed on different systems. Co-operation between humans and computer systems takes on differing forms, depending on the extent to which the computer's software is sophisticated enough to perform what we might call "cognitive processing". Factors leading to economies of scale for computers often include several dimensions: the same software can be used on many models; sales and maintenance personnel can service a wide range of equipment; and flexible manufacturing automation leads to adaptability.

In commercial aircraft, computers are usually used to provide a variety of services such as navigation, semiautomatic landings, flight control and stability augmentation.

The crew are always available to provide manual backup if computer failure occurs.

The software-implemented fault tolerance (SIFT) machine employs hardware redundancy for critical tasks. Executive software is responsible for implementing the NMR (N-modular redundancy) structure; since design faults in the executive may lead to system failures, the SIFT designers intended to give a rigorous mathematical proof of its correctness. Some fault-tolerant systems are brought out below and discussed with their specific features.
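
The voting idea at the heart of NMR can be sketched in a few lines (Python; illustrative only, not SIFT's executive): run N replicas of a task and accept the majority result, which masks a faulty minority.

from collections import Counter

def nmr_vote(replica_outputs):
    """Majority vote over N replica results; masks a faulty minority."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    if count <= len(replica_outputs) // 2:
        raise RuntimeError("no majority: too many replicas disagree")
    return value

# Triple modular redundancy (N = 3): one faulty unit is outvoted.
print(nmr_vote([42, 42, 17]))   # 42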

18. SYSTEM R

System R is an experimental relational database management system designed and implemented at the IBM San Jose Research Laboratory. A relational data interface provides high-level features for data retrieval, manipulation and definition, as well as mechanisms to assist in the provision of fault tolerance at the user level. It is expected that the information entrusted to the system is not lost or corrupted. Thus fault tolerance has been adopted as a means of providing reliable storage in many database systems, particularly for the purpose of tolerating failures of the hardware units on which the data are stored, and guarding against system stoppages which could leave the database in an inconsistent state.

19. EXAMPLE SYSTEMS

19-1 THE STAR COMPUTER

The work presented here is a continuing investigation of fault-tolerant computing conducted at the Jet Propulsion Laboratory (JPL) during the period 1961-70. The STAR (Self-Testing And Repairing) computer employs a balanced mixture of coding, monitoring, standby redundancy, replication with voting, component redundancy and repetition in order to attain hardware-controlled self-repair and protection against all types of faults.

The standard computer is supplemented with one or more spares of each subsystem, and the spares are put into operation to recover from permanent faults. The principal methods of error detection and recovery are as follows.

All machine words (data and instructions) are encoded in error-detecting codes, and fault detection occurs concurrently with the execution of the programs. The arithmetic logic unit function is subdivided into replaceable functional units with a largely centralized control that allows simple fault-location procedures. Special hardware takes care of error recovery, with software augmenting it for memory damage. Program segment repetition helps transient-fault recovery. Monitoring circuits help error detection for synchronization in error control for data communications.

The CPU overall is divided into various processing ensembles providing a good pipeline design. Logical faults result in either word errors or control errors. The opcode formation employs a 2-out-of-4 code. The Test and Repair Processor (TARP) monitors STAR operation. The replacement of faulty functional units is commanded by the TARP vote and implemented by power switching, and strong isolation is provided against catastrophic failures.
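
The 2-out-of-4 opcode check is simple to sketch (Python, illustrative): a valid 4-bit codeword carries exactly two 1-bits, so any single bit flip changes the weight and is detected.

def valid_2_of_4(word):
    """A 2-out-of-4 codeword has exactly two 1s among its four bits."""
    return bin(word & 0b1111).count("1") == 2

print(valid_2_of_4(0b0101))  # True: a legal opcode pattern
print(valid_2_of_4(0b0111))  # False: single bit flip detected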

The interested reader may refer to Avizienis et al. (1971). SCAP (the STAR computer assembly program) is the first module of STAR software; SCAL is the assembly language used. The software comprises an assembler, a loader and a functional simulator. The word length is 32 bits. Data sharing with processes of read-write capacity allows automatic maintenance information from the spacecraft telemetry system.

19-2 ELECTRONIC SWITCHING SYSTEMS (ESS)

The stored-program control of Bell systems has been developed since 1953 for varying exchange capacities. The best hardware is deployed for the switching circuits, and a duplicate (image) processor runs side by side, serving as a checker as well as a standby to avoid severe downtimes, which amount to no more than a few minutes in a year.

Reliability in software and hardware is achieved by redundant packages. Automatic error correction for data routing during communication alleviates channel-noise recovery problems. Trained maintenance personnel are an asset at the main distribution frames (exchange side) for on-line maintenance without affecting the on-line service.

Good documentation directions and concise manuals help the maintenance staff, so that procedural errors can be minimized.

The system architecture is shown in Fig. 2.


FIG. 2 ESS 1A system organization

The reliability and availability requirements for the ESS systems are extremely stringent: downtime of the total system is not supposed to exceed 2 hours over a 40-year period, with not more than 0.02% of calls being handled incorrectly. Continuous commercial operation commenced in 1965 with the No. 1 ESS, a system supporting large telephone offices of up to 65,000 lines and capable of handling 100,000 calls per hour. The processor, which operates continuously, cannot be out of service for more than 2 minutes per year.

Extensive redundancy against hardware faults is called for: buses and CPUs are duplicated, and the control timings of both CPUs operate synchronously. The information necessary for processing and routing telephone calls is held in the call-store complex, which is also constructed from core stores.

The random access memory in the system is divided into protected and unprotected areas. This division is enforced by the active CPU, which contains the mapping registers defining these areas and provides different instructions for writing to the different areas. The mapping registers themselves are software-controlled via other (special) write instructions.

The protected area contains those parts of memory that are not duplicated (e.g., the program stores), as well as those locations which affect the operation of the system (e.g., the internal registers of the CPUs). All of the auxiliary units operate autonomously, competing with the CPUs for access to the core stores. The call-processing programs are divided into two classes, deferrable and non-deferrable. The deferrable programs are those for which the data is already in the system, and they are therefore not critically time-dependent. The non-deferrable programs (e.g., input/output) must be executed according to a strict schedule and are activated by a clock interrupt; for example, the program that detects and receives dial pulses has to be run at regular intervals.

The program store (PS) is read-only memory (ROM) containing the call-processing, maintenance and administration programs, besides the long-term translation and system parameters. The call store contains the transient data related to telephone calls in progress.

The Mean Time To Failure (MTTF), a measure of availability, is

MTTF = μ / (2λ²)

where μ is the repair rate and λ is the failure rate.
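
As a numerical sketch of how this duplex formula behaves (the figures here are illustrative, not Bell's):

# Duplex-system MTTF sketch: MTTF = mu / (2 * lambda^2), rates per year.
# Illustrative numbers: one unit failure per year, repairs in about an hour.
failure_rate = 1.0          # lambda: failures per unit per year
repair_rate = 365.0 * 24    # mu: 8760 repairs per year (about one hour each)

mttf_years = repair_rate / (2 * failure_rate ** 2)
print(mttf_years)           # 4380.0 years for the duplex pair

Fast repair enters the formula directly, which is how duplication turns modest unit reliability into decades of system availability.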

System downtime is made up of:

Hardware unreliability - 0.4 Minutes per year

Software unreliability - 0.3 Minutes per year

Procedural faults - 0.6 Minutes per year

Fault tolerance deficiencies - 0.7 Minutes per year.

Procedural faults arise from manual interactions with the operation of the ESS.

Tolerance of hardware faults is achieved by a combination of hardware and software techniques. In the main, error detection is implemented in hardware (e.g., the operation of the active CPU is checked against that of the stand-by CPU by matching circuits), with exceptions being signaled by means of an interrupt mechanism to invoke the fault-treatment programs which form a major component of the fault-tolerance techniques.

Fault-tolerance deficiencies, which are expected to be the major cause of system downtime, concern the deficiencies in these programs. Tolerance of software faults is limited to attempts at maintaining the consistency of the database; the programs that perform these error detection and recovery actions are referred to as the audit programs. Today, microprocessor-based (EPABX) switching for office and intra/inter-office communications is picking up with architectural blends.

19-3 THE TANDEM 16

A fault-tolerant computing system:

The increasing need for businesses to go on-line is stimulating a requirement for cost-effective computer systems with continuous availability, as in database transaction processing. In a conventional system, a power supply failure in the I/O bus switch, or a single integrated circuit (IC) package failure in any I/O controller on the I/O channel emanating from the I/O bus switch, will cause the entire system to fail. Hardware expansions are typically dwarfed in magnitude by the software changes needed when applications are to be geographically changed. The Tandem 16 uses dual-ported I/O controllers with a Dynabus and a DC power distribution system. On-line maintenance aspects were key factors in the design of the physical packaging of the system.

The processor includes a 16-bit CPU, main memory, and the I/O channel and control, employing Schottky TTL circuitry. The CPU is a microprogrammed processor consisting of a bank of 8 registers, which can be utilized as general-purpose registers, as a LIFO register stack, or for indexing, and an ALU (arithmetic logic unit). It also has CPU stack-management registers, scratchpad stores and miscellaneous flags for the use of the microprogrammer.

Microprograms are organized in 512-word sectors of 32 bits in read-only memories.

The address space for the microprogram is 2K words. The microprocessor has a 100 ns cycle time and a two-stage pipeline. The software has 123 machine instructions, each 2 bytes in length. Up to 16 interrupt levels are provided with the I/O system. Main memory is organized in physical pages of 1K words of 16 bits/word; up to 256K words may be attached to a processor. Each word additionally carries 6 check bits to provide single-error correction and double-error detection. Memory is logically divided into 4 address spaces, each of 64K words. The lowest-level language provided on the Tandem 16 system is T/TAL, a high-level, block-structured, ALGOL-like language which provides structures to get at the more efficient machine instructions.
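
The single-error-correction, double-error-detection scheme can be sketched as follows (Python; an illustrative Hamming(21,16)-plus-parity construction, not Tandem's actual circuitry). Five Hamming check bits locate any single flipped bit, and a sixth overall parity bit distinguishes single from double errors:

def encode(data16):
    """Spread 16 data bits over positions 1..21, skipping the power-of-two
    positions, which hold the 5 Hamming check bits; add one overall parity bit."""
    bits = [(data16 >> i) & 1 for i in range(16)]
    word, i = {}, 0
    for pos in range(1, 22):
        if pos & (pos - 1):              # not a power of two: a data position
            word[pos], i = bits[i], i + 1
        else:
            word[pos] = 0                # check positions are filled below
    for p in (1, 2, 4, 8, 16):           # check bit p covers positions q with q & p
        word[p] = sum(word[q] for q in range(1, 22) if q & p) % 2
    overall = sum(word.values()) % 2     # 22nd bit guards against double errors
    return word, overall

def check(word, overall):
    """The syndrome names the flipped position; the overall parity bit tells
    single errors (parity changed) from double errors (parity unchanged)."""
    syndrome = 0
    for p in (1, 2, 4, 8, 16):
        if sum(word[q] for q in range(1, 22) if q & p) % 2:
            syndrome |= p
    parity_flip = (sum(word.values()) % 2) != overall
    if syndrome == 0:
        return "no error" if not parity_flip else "parity bit itself flipped"
    return (f"single error at position {syndrome}: correctable" if parity_flip
            else "double error detected: uncorrectable")

word, par = encode(0xBEEF)
word[5] ^= 1                             # one flipped bit: located and fixable
print(check(word, par))
word[9] ^= 1                             # a second flip: detected, not fixable
print(check(word, par))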

The basic program unit is the PROCEDURE.

Each process in the system has a unique identifier or "processid" of the form <cpu #, process #>, which allows it to be referenced on a system-wide basis. This leads to the next abstraction, the message system, which provides a processor-independent, failure-tolerant method for interprocess communication. A memory-management process resides in each processor; pages are brought in on a demand basis, and pages to overlay are selected on a least-recently-used basis.

The heart of the Tandem 16 I/O system is the I/O channel; all I/O is done on a direct-memory-access basis. The greatest fear an on-line system user has is that "the database is down"; to meet this critical concern, Tandem provides automatic mirroring of databases. The disc controller uses a Fire code for burst-error control and can correct 11-bit bursts in the controller's buffer before transmission to the channel. The Tandem 16 power needs are met by a 5-volt interruptible section, a 5-volt uninterruptible section and a 12-15 volt uninterruptible section; the power supply provides over-voltage, over-current and over-temperature protection. The system provides on-line maintenance. The operating system, Guardian, provides a failure-tolerant system. As many as sixteen processors, each with up to 512K bytes of memory, may be connected into one system, and each processor may also have up to 256 I/O devices connected to it. Error detection is provided on all communication paths, and error correction is provided within each processor's memory.

Systems with between two and ten processors have been installed and are running on-line applications.

19-4 PLURIBUS

Pluribus is a multiprocessor system designed to serve as an interface message processor (IMP) for the Advanced Research Projects Agency (ARPA) computer network. Essentially, the IMPs act as communications processors, providing a message-transmission facility between nodes of the network, implemented as a store-and-forward packet-switching system. They also perform packet receipt, routing and retransmission, as well as connecting host systems to the rest of the network.

FAULT TOLERANCE: if a sending IMP does not receive an acknowledgement after the transmission of a packet, then retransmission via another IMP and communications line (if one exists) can be attempted. Thus the reliability requirements for an IMP place emphasis on availability rather than fault-free operation; occasional losses of a packet or message, or short periods of downtime, are considered acceptable.

The Pluribus system places the major responsibility for recovery from failures on the software. The system may be characterized as a symmetric, tightly coupled multiprocessor and is highly modular; a processor failure merely causes it to run a little slower. In the Pluribus, the first detection of a fault is usually through the failure of an embedded check in the main program, and frequently that is all that is required to initiate a correct recovery procedure.

The software reliability mechanisms of a Pluribus system are coordinated by a small operating system called STAGE, which performs the management of the system configuration and the recovery functions. The overall aim is to maintain a current map of the available hardware and software resources. The tools and diagnostics are well enough defined and documented for repair purposes. Applications of the Pluribus architecture include message systems, real-time signal processing, reservation systems employing time-shared configurations, and process control.

20. CONCLUSION

Future systems are more tuned towards software support environments for dedicated applications, also imparting portable programs. Automatic program generation, performance evaluation programs and monitors give added strength to productivity. The new emerging topics of artificial intelligence, neural networks and expert systems have to await the discovery of fresh languages, besides good graphic tools, to take on a different scenario. Thus reliable design of systems shall pave the way to good productivity in software workhouses.

TERMS

SIMD, spoofing, vectorizing compilers, reliability, availability, maintenance, segment, Burroughs B 5500, page-fault, Intel 8089, hypercube, concurrency, algorithms, SECDED, multiprocessing, symmetric, recoverability, crossbar switch, speedup, deadlock, compiler-compilers, polysilicon, WSI (wafer-scale integration), VLSI computing, packet-switching networks, Solomon, TI-ASC, array processor, MIMD, MPP, stack architecture, VAX-11, MAXI computer, B 5000, PDP-11, read-only memory, CRAY-1, Cray users, Symbol computer, IBM System/38, Alto, BCPL emulator, microprogramming, PDP-8 ISP, graphics raster, VLSI, electron-beam lithography, Ada, transputers, Occam, Ethernet, microcoded CPU, gallium arsenide, sparse matrix, hard disk, Intel 80860, linkers, label, robust, EXDAMS, PRIME system, Nyquist, Claude Shannon, compiler, SIFT, TARP, MTTF, Fire code, Guardian, ARPA network, STAGE.

QUIZ

1. Write about differing factors that assist parallel processing.

2. Define multiprocessing, giving the context under which the term is pronounced.

3. Give the constructional features of MPP for image processing.

4. Write on the evolutions of Digital Equipment Corporation's PDP series of minicomputers.

5. Write a detailed note on Intel's microprocessors evolution.

6. Differentiate bit and byte organised memories for system reliability. Discuss the salient features of CRAY machines.

7. Explain clearly how transputers contribute to a multiprocessing environment.

8. State Amdahl's law.

9. What are the merits of data compression in data communications? Explain the Huffman coding scheme in achieving the same.

10. Write short notes on SIFT system architecture towards fault-tolerance.

11. What is software robustness? Mention the need for subroutine calls in program writing.

12. Explain the desirable features of systems software that contribute to the supercomputing area.

13. Explain digital data communication system with a block diagram and also Shannon's coding theorem for a noisy channel.

14. With a block diagram, explain the operation of electronic switching system organization.

15. Discuss the architecture of Tandem-16 fault-tolerant system towards system reliability.

16. Comment on the system maintainability with respect to the pluribus multiprocessing environment.

17. Discuss the hardware/software trade-offs for the following applications:

a. Artificial intelligence and expert systems.

b. Real-time control systems for robotics and computer-aided production processes in mechanical engineering circles.
