Fundamentals of Computer Architecture and Organization -- Computation (part 1)





1. Systems

1.1 Definition and characterization

Discrete digital system

A system is simply any entity which generates output from input. A system may have any given number of input and output ports. The name "port" is derived from the analogy with shipping. However, unlike shipping, a system port may only be used for either input or output, never both. We may make use of the mathematical concept of a function to describe how each possible input value causes a particular output value. The statement of system function then serves to define a particular system. An alternative means of characterizing the behavior of a system, the system process, is described below and is the preferred means used throughout this text.

By time-discrete system (Fig. 1) is meant one whose output changes only at regular, discrete intervals of time. The intervals may be thought of as ending with each tick of a system clock. Regardless of any change to the input, no change of output will take place until the clock ticks. The input is only sampled upon the tick of the system clock. Any system which does not wait for the tick of a clock is called a continuous system.

Analogue systems represent a varying abstract quantity (e.g. temperature) by varying a physical quantity which serves as its analogue. If such a system is implemented using electrical technology the physical analogue may be a current or voltage [1]. In contrast, digital systems represent an abstract quantity internally by first assigning it a cardinal integer value and then representing each digit of that value separately. Binary representation has a distinct advantage which greatly simplifies implementation: the machine need only physically represent two digit values, 0 and 1. Digital systems are thus unable to represent internally every value an abstract quantity may take. Before encoding, a value must be quantized. Imagine that you have the task of sorting ball bearings into ten bins, coarsely labeled for bearing size.

The obvious procedure is to first measure their size and then place them into the appropriate bin. Someone else will now quickly be able to tell the size of each ball bearing with sufficient precision for their purpose. Quantization means the selection of the nearest allowed value and is thus exactly like binning.
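Quantization is easy to sketch in code. In the fragment below (written in Python; the ten bin centres, spaced 0.1 mm apart, are invented purely for illustration) a measurement is simply replaced by the nearest allowed value:

# Select the nearest allowed value -- exactly like placing a ball bearing
# into the nearest bin.  The bin centres here are hypothetical.
ALLOWED = [round(0.1 * k, 1) for k in range(1, 11)]   # 0.1 mm .. 1.0 mm

def quantize(measurement):
    """Return the allowed value nearest to the measurement."""
    return min(ALLOWED, key=lambda allowed: abs(allowed - measurement))

print(quantize(0.437))   # -> 0.4
print(quantize(0.88))    # -> 0.9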

The computer is a special type of time-discrete digital system. It is programmable.


Fig. 1: A time-discrete system

State

State is a concept fundamental to physics. It is the instantaneous configuration of a system. The simplest example is that of a tetrahedron resting on a table.

There are four states corresponding to the four sides it may rest on. It is possible to label each side with a symbol and use it to remind yourself about one of four things. In other words it constitutes a four-state memory. Memories are labeled physical states.

One kind of symbol you could use, to label the tetrahedron, would be the numeric digits {0, 1, 2, 3}. It is then possible to use it as a 1-digit memory! We are now able to store a multi-digit, base 4, number by employing one tetrahedron for each digit. The group of digits used to store a single value is called a register, or word. The statespace is the range of values possible and is determined by the number of ways in which the states of the tetrahedra may be combined: N of them allows 4^N values, 0 to (4^N - 1). If we wish to store many values we simply use many words.
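The encoding is easy to make concrete. The short Python sketch below (the function name is ours, not part of any standard) splits a value into one base-4 digit per tetrahedron:

def to_base4_digits(value, n_tetrahedra):
    """Encode value as n_tetrahedra base-4 digits, most significant first."""
    assert 0 <= value < 4 ** n_tetrahedra, "value lies outside the statespace"
    digits = []
    for _ in range(n_tetrahedra):
        digits.append(value % 4)        # the label shown by one tetrahedron
        value //= 4
    return digits[::-1]

print(to_base4_digits(27, 3))   # -> [1, 2, 3], since 27 = 1*16 + 2*4 + 3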

State, or memory, is not necessarily used to store numbers. Symbols, or symbol combinations, may represent characters or graphic objects. Alternatively they may represent objects in the real world. The combined state may then be used to represent relations between objects.

[1. Electric analogue computers were once common and still find application today. ]

Process

A process describes the behavior pattern of an object. It consists of a sequence of events which is conditionally dependent on both its initial state and communication with its environment, which may be modeled as a number of other processes. Processes are said to start, run and then terminate.

Each possesses private state, which is inaccessible to any other, and a number of channels by which to communicate externally. The complete set of symbols which may be stored must include those which may be input or output.

The named description of a process is called a procedure. The fundamental indivisible atomic, or primitive, events of which any process is capable may be categorized…

• Assignment

• Input

• Output

Describing a process is one way of characterizing a system. The system process must then possess a channel corresponding to each system port. Any system which runs a process may be called a processor.
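The three primitive events can be illustrated with a trivial process. In this sketch the channels are plain Python lists standing in for real communication channels, a deliberate simplification:

def doubling_process(input_channel, output_channel):
    """Read each symbol, update private state, and output the result."""
    for symbol in input_channel:          # input event
        state = symbol * 2                # assignment event (private state)
        output_channel.append(state)      # output event

in_channel, out_channel = [1, 2, 3], []
doubling_process(in_channel, out_channel)
print(out_channel)   # -> [2, 4, 6]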

Processes may be iteratively reduced into subordinate processes. For example, an assembly line producing cars may be described as a process with several input channels (one for each component) and one output channel (of cars). This single process may be broken down into lower level processes making sub-assemblies such as engine, gearbox and suspension. These can be further broken down until individual actions of robots (or human workers) are reached. Consider the subordinate process of gearbox assembly. Input channels must be available for receiving gears of each required type. One output channel will transmit finished gearboxes.

Designing a process network to implement a complex system such as a car factory is very far from easy. The problem is that of scheduling processes onto processors (robots or workers) in such a way as to maximize efficiency at all levels. The solution must ensure the synchronization of all necessary communication. In other words, for instance, the gearbox assembly process must have at least one gear of each type available as it is required. Further discussion is outside the scope of this guide.

Protocol

A stream is a sequence of symbols. It may or may not be infinitely long. If finite then it is terminated by a special value, End Of Transmission (EOT). The length of the stream is not known a priori to either the receiver or transmitter. Upon any given clock tick only the current symbol in the stream is accessible to either.

Returning to the manufacturing analogy, it may be necessary to represent a worker transmitting a stream of identical components to another who requires them to assemble something. This stream may initially be defined as infinite.

Later it may be necessary to allow for the possibility that the supply of components is exhausted. The message sent to inform the worker that this has occurred constitutes an EOT.

A channel, just as in nature, is the means for a stream to flow. Just as a processor is the physical entity which runs a process, a stream is said to flow along a channel. In the manufacturing example above, a channel may be a conveyor belt or an "Autonomous Guided Vehicle" (A.G.V.) which allows objects to flow from one process to another.

Unlike a stream, a string is characterized by prior knowledge of the length of the message. The length must be finite and is transmitted first, so that the receiver is made aware of the number of symbols which follow. To continue the manufacturing analogy, if components are sent a batch at a time with the quantity transmitted first, the assembly worker (receiver process) has some means to check whether any have been lost or stolen; the transmission is therefore said to be more secure.

This is then an example of the use and benefit of string protocol.

Stream and string are both examples of communication protocol. A protocol is an agreed set of rules by which communication may take place. Once both transmitting and receiving processes agree a protocol, communication may proceed without difficulty. In the example above of a string, the recipient need know in advance only the protocol, i.e. that first to arrive is a quantity followed by that number of components. The origin of the term lies in inter-government diplomacy. Before one head of state may visit another certain events must take place. The sequence and nature of these events are agreed before any communication occurs. Typically, various officials, of increasing rank, pass messages to each other until heads of state themselves talk.
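The difference between the two protocols is clearest in code. In the sketch below the channels are Python generators and the EOT marker is an invented sentinel value; both choices are illustrative only:

EOT = None   # hypothetical End Of Transmission marker

def send_stream(symbols):
    """Stream protocol: the symbols followed by EOT; length unknown in advance."""
    yield from symbols
    yield EOT

def receive_stream(channel):
    received = []
    for symbol in channel:
        if symbol is EOT:                # length discovered only on EOT
            break
        received.append(symbol)
    return received

def send_string(symbols):
    """String protocol: the length is transmitted first, then the symbols."""
    yield len(symbols)
    yield from symbols

def receive_string(channel):
    length = next(channel)               # the receiver knows what to expect
    return [next(channel) for _ in range(length)]

print(receive_stream(send_stream(["gear", "gear", "gear"])))
print(receive_string(send_string(["gear", "gear", "gear"])))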

The simplest form of protocol is the signal where no message is passed.

Attention is merely attracted. A common use for the signal, by both human and machine, is to cause a process to alternate between subordinate processes. An example of this is a person engaged in playing two chess games simultaneously.

The process of playing one may be interrupted by a signal from the other.

Alternation between many processes requires one channel of signal protocol for each.

1.2 Classes of system

Causal systems

Physics tells us that output can never be simultaneous with, or precede, the input which causes it. This is called the law of causality. It is one of the most fundamental laws of physics. All real, natural and artificial systems obey it.

There is no explanation of it. The universe is simply made that way.

For a computer, causality means that every output value takes some interval of time to derive from the input stream on which it depends. The interval must always be shorter than that between clock ticks at the level of the primitive action.

Linear systems

There are several specific classes of system which are of interest. Knowledge of them helps in analyzing problems. One we shall mention now is that of systems which exhibit linearity. In a linear system one can add together several inputs and get an output which is simply the sum of those which would have resulted from each separate input. Very few natural systems are linear. However there is a growing range of applications which are benefiting from the simplicity, and thus low cost, of linear processors [2].
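Stated formally, a system f is linear if superposition holds for any two inputs x1 and x2 and any scaling factor a:

    f(x1 + x2) = f(x1) + f(x2)    and    f(a · x1) = a · f(x1)

An amplifier which simply doubles whatever voltage it is given satisfies both conditions; one which also adds a fixed offset does not.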

Deterministic systems

A deterministic system is one whose output is predictable with certainty given prior knowledge of some or all preceding input. All conventional computers are deterministic systems.

An example of a deterministic system is a processor running a process which adds integers on two input channels placing the result on a single output channel.

Here one need only know the input symbols on the current clock tick to predict (determine) the result output on the next one.

Another system, whose process is the computation of the accumulated sum of all integers input on a single stream, requires prior knowledge of all symbols input since process start.
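Both processes can be sketched directly; here Python generators stand in for channels, and the names are ours:

def adder(channel_a, channel_b):
    """Needs only the current pair of input symbols to determine its output."""
    for a, b in zip(channel_a, channel_b):
        yield a + b

def accumulator(channel):
    """Needs, in effect, every symbol input since the process started."""
    total = 0
    for symbol in channel:
        total += symbol
        yield total

print(list(adder([1, 2, 3], [10, 20, 30])))   # -> [11, 22, 33]
print(list(accumulator([1, 2, 3, 4])))        # -> [1, 3, 6, 10]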

Stochastic systems

A stochastic system is one where it is only possible to determine the probability distribution of the set of possible symbols at the output. In other words the output at each clock tick is a random variable.

Although it is the more general system, the stochastic computer is rare and, at the time of writing this guide, remains very much a subject of research within the field of artificial intelligence.
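A toy example, purely for contrast with the deterministic processes above, is a channel which flips each binary symbol with some probability. Only the distribution of its output can be predicted, never the output itself; the function and parameter names here are invented for illustration.

import random

def noisy_channel(channel, flip_probability=0.1):
    """Invert each binary symbol with the given probability."""
    for bit in channel:
        yield bit ^ 1 if random.random() < flip_probability else bit

print(list(noisy_channel([0, 1, 1, 0, 1])))   # output differs from run to run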

2. Automata

2.1 Switches

Normally-open switches

A switch is the simplest system which exhibits state. Of the many types of switch, the one we shall consider is the normally-open switch (Fig. 2).

Imagine a light switch which is sprung so that the light is immediately extinguished on removal of your finger.

It only closes, allowing electrons to flow, in response to pressure from your finger, thus turning on the light.

Now for just a little physics. Some energy must be expended in closing a normally-open switch. Our simple model has one output and two inputs. Are both inputs the same? Anything that flows has kinetic energy. Something which may potentially flow is said to have potential energy. In other words, some energy is available which may be converted into kinetic energy.

[2. Often referred to as signal processors.

3. It is interesting to note that natural systems are predominantly non-linear and stochastic whereas artificial systems are predominantly linear and deterministic. ]


Fig. 2: Normally-open switch

We have already met one fundamental law of physics in the law of causality.

Here we have a second one. The first law of thermodynamics requires the conservation of energy. It therefore tells us that flow can only occur if some potential energy is available. A stream of water flows down a hill because the water has gravitational potential energy available.

To understand our normally-open switch we observe that potential energy must be available at both inputs. In this sense both inputs are the same. They differ in that flow may occur through one and not the other. Different kinds of potential energy may characterize each input. For example, a light switch uses electrical potential energy (voltage) on its flow input and mechanical potential energy (finger pressure) on its other input.

Another example of a normally-open switch would in fact allow us to construct an entire (albeit rather slow) computer. A pressure-operated valve is designed to switch air pressure. It may be referred to as a pneumatic switch. The interesting property of this switch is that the output of one may operate others.

The number of others which may be operated defines the fan-out of the switch.

By now it should come as little surprise that the fundamental building block of all electronic systems, the transistor, is in fact no more than a switch. It is very important to understand that a computer can be built using any technology capable of implementing a normally-open switch. There is nothing special to computation about electronics. Nor is it true that only artificial computers exist. Biological evolution may be regarded as a form of computation.

Computers are often rather noisy. Most of the noise comes from a cooling system which is usually just a fan. There is a fundamental reason for this… switches consume energy [5]. Yet another fundamental law of physics, the second law of thermodynamics, tells us that no machine may do work without producing heat. This heat is being produced continuously. If nothing is done to remove it the system will overheat. Computers get hotter the faster they run! There are a lot of switches in a computer. Hence it is very important for the chosen technology to offer a switch which requires as little energy as possible to operate. Designs usually involve a trade-off between power consumption [6] and speed.

Switch operation is the most fundamental event in computation. Therefore the operation speed of the switch will limit the speed of the computer. Biological switches (e.g. the neurons in our brains) switch rather slowly: they take about 10^-3 s.

It appears that the best an electronic switch can do is about 10^-9 s. Optical switches, recently (1990) developed in the UK [4], promise switching in about 10^-12 s. The reader should have noticed the contrast between the switching speed of the neuron and that of the transistor. The capabilities of the human brain compare somewhat favorably with those of current computers. It is obvious that we are doing something wrong!

Memory

"State" is really just another term for memory. The number of states of a system is equal to the number of things it can remember. States are memories. We may label them how we like, e.g. by writing symbols on the sides of our tetrahedra.


Fig. 3: Binary memory (latch) constructed from normally-open switches

Fig. 3 shows how normally-open switches may be connected to realize a binary memory or latch. Flow resistance is necessary to ensure that not all potential supplied is lost when either switch closes. The existence of potential is marked "1" and the lack of it by "0". In addition, the medium (e.g. air, electrons or water) is able to flow away through any terminal labeled "0", which may thus be regarded as a sink.

Careful study of Fig. 3 will reveal that there are only two stable states in which the latch can exist. It is impossible for both switches to be simultaneously either closed or open. The closure of one reduces the potential above it sufficiently to prevent the other from closing. Either point between resistance and the switch flow input may be state-labeled and used as an output. It is unimportant which.

[4. Computers using only optical switches are now being built.

5. Energy is not actually consumed but is converted from one form to another, in this case work into heat.

6. Power = energy per unit time. ]

It may also be used as an input. The application of "0" or "1" will write that datum into the latch. The output state shown is "0" since the left switch is closed, removing any potential present. Applying a potential will cause the right hand switch to close and thus the left hand one to subsequently open, changing the latch output state. Removing the applied potential does not affect the latch state which now remains "1". In other words, it remembers it! The reader should verify that the latch will similarly remember the application (input) of "0".
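The behavior of Fig. 3 can be imitated with a few lines of code. This is a deliberately crude model of the physics: each branch's output node is "0" whenever the opposite node carries a potential (closing its switch), and "1" otherwise.

def settle(node_a, node_b):
    """Let the two cross-coupled branches interact until they are stable."""
    for _ in range(10):                      # a few ticks suffice to settle
        new_a = 0 if node_b == 1 else 1      # switch A closes when node B is 1
        new_b = 0 if node_a == 1 else 1      # switch B closes when node A is 1
        if (new_a, new_b) == (node_a, node_b):
            break
        node_a, node_b = new_a, new_b
    return node_a, node_b

def write(bit):
    """Apply a potential (or its absence) at node A; once the latch has
    settled, removing the applied potential leaves the state unchanged."""
    return settle(node_a=bit, node_b=1 - bit)

print(write(1))   # -> (1, 0): the latch remembers "1"
print(write(0))   # -> (0, 1): the latch remembers "0"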


Fig. 4: Logic gates implemented using normally-open switches


Fig. 5: Automaton

Logic gates

Logic gates are devices which implement systems with binary input and output values. The presence or absence of a potential, at either input or output, is used to infer the truth or otherwise of a proposition. A full treatment of the functions they may implement and how they are used in the construction of computers is left for Part II of this guide. It is however appropriate now to illustrate how our simple normally-open switches may be used to construct logic gates. Fig. 4 shows how AND, OR and NOT gates may be made.

Output is "1" from AND only if A and B are "1". Output is "1" from OR only if A or B is "1". NOT merely inverts a single input.

In fact logic gates merely define standard ways in which switches may be connected together. Their usefulness is that they allow formal (mathematically precise) design of logic systems simply by combination, i.e. by connecting them together as building blocks. Part II shows how.
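The same idea can be expressed directly in code. The switch model below is a sketch, not a circuit simulator: a normally-open switch passes its flow input only while its control input is pressed ("1").

def switch(flow_in, control):
    """Normally-open switch: flow passes only while the control input is 1."""
    return flow_in if control == 1 else 0

def AND(a, b):
    # Two switches in series: flow reaches the output only if both close.
    return switch(switch(1, a), b)

def OR(a, b):
    # Two switches in parallel: flow reaches the output if either closes.
    return 1 if (switch(1, a) or switch(1, b)) else 0

def NOT(a):
    # A pull-up to the supply potential plus one switch to the sink:
    # closing the switch removes the output potential.
    return 0 if switch(1, a) else 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))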

2.2 Finite automata

An automaton is any entity which possesses state and is able to change that state in response to input from its environment. It is a discrete system whose next state depends both on input and its current state. Fig. 5 illustrates the idea from both functional and procedural points of view.

Imagine you are monitoring an instrument which is equipped with three lights, each of which may be only green or red. It is your job to interpret the meaning of the pattern of lights according to a small instruction set. Let us say that the instrument is designed to detect military aircraft and identify them as either friend or foe. A certain pattern is interpreted as "aircraft detected".

Subsequently, some patterns mean "friend", some mean "foe" and the rest mean "failure to identify". These are the states; you are the (4-state) automaton, and the light patterns form the symbol alphabet.

Automata are fundamental building blocks of computers. They may be found implemented in both software and hardware. The automaton process may be described as a series of IF…THEN… statements inside a REPEAT…UNTIL FALSE (i.e. infinite) loop. These form the instruction set of the automaton. Each instruction is thus composed of…

• Condition part (state & input)

• Action part

…where the condition is in two parts, state and input symbol. The action is simply the updating of state. To summarize, a procedural description of the behavior of an automaton is…

REPEAT
  IF <symbol> AND <state> THEN <action>
  IF <symbol> AND <state> THEN <action>
  …
UNTIL FALSE

The automaton may have output only in the sense that the state may be visible externally. Typically, output forms the input to another automaton. In our analogy the second might be a 2-state automaton enabling a missile to be fired at a "foe". A formal definition of an automaton must consist of the following…

• Set of symbols

• Set of states

• Set of instructions

The set of states allowed forms the statespace, the set of symbols the alphabet. It is programmed simply by specification of the instruction set. No ordering of instructions is needed.

Finite automata [7] are simply automata with a finite number of states and may alternatively be described by a state transition graph which defines how the system proceeds from one state to the next, given both state and input symbol. Part II describes their design.
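The friend-or-foe monitor described above can be written down as an instruction set and a loop. The particular light patterns used here are invented for illustration; only the structure matters: (state, symbol) pairs mapping to actions, with no ordering required.

# The four states of the monitoring automaton.
UNIDENTIFIED, DETECTED, FRIEND, FOE = ("unidentified", "aircraft detected",
                                       "friend", "foe")

# Instruction set: (current state, input symbol) -> next state.
# The symbols (light patterns) below are hypothetical.
INSTRUCTIONS = {
    (UNIDENTIFIED, "RGG"): DETECTED,
    (DETECTED, "RGR"): FRIEND,
    (DETECTED, "RRG"): FOE,
    (DETECTED, "RRR"): UNIDENTIFIED,     # failure to identify
}

def automaton(symbols, state=UNIDENTIFIED):
    """REPEAT ... UNTIL FALSE, reading one symbol per clock tick; patterns
    matching no instruction leave the state unchanged."""
    for symbol in symbols:
        state = INSTRUCTIONS.get((state, symbol), state)
        yield state                      # the state is visible as output

print(list(automaton(["GGG", "RGG", "RRG"])))
# -> ['unidentified', 'aircraft detected', 'foe']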

2.3 Turing Machines

Origin and purpose

A Turing Machine is built on the concept of an automaton. It gains its name from Alan Turing who invented it as a means of investigating the class of computable functions [Turing 36].

Turing Machines are not used as the basis for the design of real computers.

Their most important use is in determining those functions which are not computable. If a Turing Machine cannot evaluate a given function then neither can any other computer!

[7. In the field of hardware design automata are usually referred to as state machines. The two terms mean the same. ]


Fig. 6: Turing Machine

Structure

The Turing Machine is composed of three subsystems. One of these is a processor, which is much like an automaton except that it can also output a symbol distinct from its state. Symbols are input from, and output to, a linear memory composed of a sequence of memory "cells". The processor is also able to move one position in either direction along memory. The linear memory forms a second subsystem which contains symbols drawn from an alphabet. One of the symbols is special and usually termed blank. Each memory cell is capable of storing a single symbol. The operation of the machine is cyclic. A single cell is read, then one of just three actions is carried out.

The third subsystem is a channel which allows the processor to read or write a memory cell. Note that moving along the linear memory may equally well be performed by…

• Moving the processor (as if it slid back and forth along the memory)

• Moving the communication channel (as if it were a pipe, through which symbols pass)

• Moving the memory

(as if its symbols were on a tape)

A Turing Machine may be summarized as…

• Processor with internal memory for instruction execution

• Linear memory composed of a sequence of memory cells

• Channel allowing processor to read and write memory

It may help to imagine the processor as a "rubber stamp" stamping new symbols onto a tape (linear memory) which depend on the current one it sees and an internal memory (Fig. 6).

Programming

As with automata, instructions are solely of the if…then kind. The condition part of the instruction is simply made up of the state and the symbol which has been read. The action part has three possibilities only…

Instruction type    Action

modify <sym>        Modify symbol at current position to sym
move <dir>          Move in direction dir
halt                Halt computation

The input to a Turing Machine is the contents of the memory at the start. The output is the memory contents when it halts. If it fails to halt then the function is not computable. The program is the (unordered) set of instructions.

A particular Turing Machine is defined by specifying the following…

• Q: Finite set of states

• S: Finite set of symbols, including blank (which is not allowed as an input symbol)

• I: Finite set of instructions

• q: Initial state

• F: Finite set of final states
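A small simulator following this definition is sketched below. The instruction encoding — (state, symbol) mapped to an (action, argument, next state) triple — and the example machine, which simply inverts a string of binary digits, are our own illustrations rather than anything prescribed by the formal model.

BLANK = " "

def run_turing_machine(instructions, tape, state, final_states, max_steps=10000):
    """instructions: (state, symbol) -> (action, argument, next state)."""
    cells = dict(enumerate(tape))            # linear memory, one symbol per cell
    position = 0
    for _ in range(max_steps):               # guard against failure to halt
        if state in final_states:
            break
        symbol = cells.get(position, BLANK)
        action, argument, state = instructions[(state, symbol)]
        if action == "modify":
            cells[position] = argument
        elif action == "move":
            position += 1 if argument == "right" else -1
        elif action == "halt":
            break
    return "".join(cells.get(i, BLANK) for i in range(min(cells), max(cells) + 1))

# A two-state machine which inverts each binary digit and halts on blank.
INVERT = {
    ("scan", "0"):  ("modify", "1", "step"),
    ("scan", "1"):  ("modify", "0", "step"),
    ("scan", BLANK): ("halt", None, "scan"),
    ("step", "0"):  ("move", "right", "scan"),
    ("step", "1"):  ("move", "right", "scan"),
}

print(run_turing_machine(INVERT, "1011", "scan", final_states=set()))   # -> 0100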

Computability

One model of computation is that of a process which evaluates a function. Not all functions are computable. There is no known way of distinguishing the incomputable from the computable. Each problem must be investigated in its own right. The Turing Machine is a useful model for such investigation because of the Church thesis [Church 36], which may be paraphrased thus…

Church thesis: Every effectively computable function may be computed using a Turing Machine.

The term "effectively" implies that it must be possible to write a program for a computer to achieve the evaluation of the function. In other words it must be possible to describe the process of evaluation. For a good introduction to the subject of computability see [Rayward-Smith 86].


Fig. 7: Game of Life

2.4 Cellular automata

Origin

Here the reader may be introduced to part of the legacy of another great person who helped create a science of computation: J. von Neumann. Von Neumann wished to compare living entities with artificial systems. The element of life is the living cell. Cellular automata were his invention to promote understanding of self-replicating systems [von Neumann 66].

Interest

Just as the study of finite automata promotes understanding of sequential computation, the study of cellular automata promotes that of parallel computation. It is an area of research which is progressing rapidly, at the time of writing, and promises machines which might earn the adjective "intelligent".

Structure

An automata network is any graph of finite automata which evolves by means of discrete interactions which are both mutual and local. A graph G is defined as a set of sites S together with a neighborhood system F. Hence G = {S, F}.

Programming

There is no global program for a cellular automata network. Cells share a common local program which describes how to interact with their neighbors to update their (purely local) state. It may define either a deterministic or a stochastic process.

Perhaps the most frequently investigated deterministic process is that of the game of Life (Fig. 7) [Conway 82]. Here the graph is a set of sites (cells) arranged as a two-dimensional array with a simple neighborhood (e.g. the four cells surrounding any given specimen). Each cell is a simple two-state automaton, the states being labeled "alive" or "dead". There are just two state updating actions for the cell, birth and death. Values input are simply the neighbor states. Local state is assigned according to some simple rules and output to its neighbors. These rules are (typically)…

• A cell is born if and only if it has exactly three living neighbors

• A cell dies unless it has exactly two or three living neighbors

If the state of the entire graph is displayed as an image (two dimensional array of brightness, or color, values) behavior cycles emerge as patterns which move or repeat themselves. Those cycles which infinitely repeat are known as limit cycles.
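One synchronous update step can be sketched as below. Note that the rules quoted above are Conway's, which are conventionally applied over the eight surrounding cells; that larger neighborhood is used here.

def step(alive, width, height):
    """Apply the birth and survival rules to every cell of the array at once.
    'alive' is the set of (x, y) coordinates of living cells."""
    def living_neighbours(x, y):
        return sum((nx, ny) in alive
                   for nx in (x - 1, x, x + 1)
                   for ny in (y - 1, y, y + 1)
                   if (nx, ny) != (x, y))
    next_alive = set()
    for x in range(width):
        for y in range(height):
            n = living_neighbours(x, y)
            if (x, y) in alive and n in (2, 3):      # survival
                next_alive.add((x, y))
            elif (x, y) not in alive and n == 3:     # birth
                next_alive.add((x, y))
    return next_alive

# A "blinker": a limit cycle which repeats with period two.
cells = {(1, 0), (1, 1), (1, 2)}
print(sorted(step(cells, width=3, height=3)))   # -> [(0, 1), (1, 1), (2, 1)]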
