
Computer Decades and Flynn's Architecture

College: College of Information Technology     Department: Software Department     Stage: 3
Course instructor: Abbas Mohsen Abdul Hussein Al-Bakri     25/03/2013 07:33:52
Four Computer Decades

Feature        Batch                        Time-Sharing           Desktop                  Network
Decade         1960s                        1970s                  1980s                    1990s
Location       computer room                terminal room          desktop                  mobile
Users          experts                      specialists            individuals              groups
Data           alphanumeric                 text, numbers          fonts, graphs            multimedia
Objective      calculate                    access                 present                  communicate
Interface      punched card                 keyboard and CRT       see and point            ask and tell
Operation      process                      edit                   layout                   orchestrate
Connectivity   none                         peripheral cable       LAN                      internet
Owners         corporate computer centers   divisional IS shops    departmental end-users   everyone

Flynn's Taxonomy of Computer Architecture

The most popular taxonomy of computer architecture was defined by Flynn in 1966.
Flynn's classification scheme is based on the notion of a stream of information. Two
types of information flow into a processor: instructions and data. The instruction
stream is defined as the sequence of instructions performed by the processing unit.
The data stream is defined as the data traffic exchanged between the memory and
the processing unit. According to Flynn's classification, either of the instruction
or data streams can be single or multiple, so computer architecture can be classified
into the following four distinct categories:

- Single-instruction single-data streams (SISD)
- Single-instruction multiple-data streams (SIMD)
- Multiple-instruction single-data streams (MISD)
- Multiple-instruction multiple-data streams (MIMD)

Conventional single-processor von Neumann computers are classified as SISD
systems. Parallel computers are either SIMD or MIMD. When there is only
one control unit and all processors execute the same instruction in a synchronized
fashion, the parallel machine is classified as SIMD. In a MIMD machine, each
processor has its own control unit and can execute different instructions on different
data. In the MISD category, the same stream of data flows through a linear
array of processors executing different instruction streams. In practice, there is
no viable MISD machine; however, some authors have considered pipelined
machines (and perhaps systolic-array computers) as examples of MISD.
Figures 1.1, 1.2, and 1.3 depict the block diagrams of SISD, SIMD, and
MIMD, respectively.
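To make the SISD/SIMD contrast concrete, here is an informal Python sketch (not from the original text; NumPy is used only as a stand-in for data-parallel hardware). A scalar loop issues one instruction per data element, while a single vectorized expression applies the same instruction to the whole data stream at once:

```python
import numpy as np

data = np.arange(8, dtype=np.int64)   # the data stream: [0, 1, ..., 7]

# SISD style: one instruction stream operates on one data element at a time.
sisd_result = np.empty_like(data)
for i in range(len(data)):
    sisd_result[i] = data[i] * 2 + 1

# SIMD style: one instruction is applied to every element simultaneously.
simd_result = data * 2 + 1

# Both styles compute the same values; only the execution model differs.
assert (sisd_result == simd_result).all()
print(list(simd_result))   # [1, 3, 5, 7, 9, 11, 13, 15]
```

The vectorized form is not merely shorter: because one operation is specified for the entire array, a data-parallel machine (or a SIMD instruction set) can execute it on many elements in lock step.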
An extension of Flynn's taxonomy was introduced by D. J. Kuck in 1978. In his
classification, Kuck extended the instruction stream further to single (scalar and
array) and multiple (scalar and array) streams. The data stream in Kuck's classification
is called the execution stream and is likewise extended to include single (scalar and
array) and multiple (scalar and array) streams. Since each of the two streams can
therefore take one of four forms, the combination of these streams results in a total
of 4 x 4 = 16 categories of architectures.
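The count of 16 can be checked by enumeration: each stream (instruction and execution) takes one of four forms, single/multiple crossed with scalar/array. A small illustrative sketch:

```python
from itertools import product

# Four forms per stream: {single, multiple} x {scalar, array}.
forms = [f"{m} {t}" for m, t in product(("single", "multiple"),
                                        ("scalar", "array"))]

# A Kuck category pairs an instruction-stream form with an execution-stream form.
categories = [(instr, execu) for instr, execu in product(forms, forms)]

print(len(forms))        # 4
print(len(categories))   # 16
```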

SIMD Architecture

The SIMD model of parallel computing consists of two parts: a front-end computer
of the usual von Neumann style, and a processor array, as shown in Figure 1.4. The
processor array is a set of identical synchronized processing elements capable of
simultaneously performing the same operation on different data. Each processor
in the array has a small amount of local memory where the distributed data resides
while it is being processed in parallel. The processor array is connected to the
memory bus of the front end so that the front end can randomly access the local
processor memories as if it were another memory. Thus, the front end can issue
special commands that cause parts of the memory to be operated on simultaneously
or cause data to move around in the memory. A program can be developed and
executed on the front end using a traditional serial programming language. The
application program is executed by the front end in the usual serial way, but it
issues commands to the processor array to carry out SIMD operations in parallel.
The similarity between serial and data-parallel programming is one of the strong
points of data parallelism. Explicit synchronization is made irrelevant by the
lock-step operation of the processors: at any given time, the processors either do
nothing or perform exactly the same operation. In SIMD architecture, parallelism
is exploited by applying simultaneous operations across large sets of data. This
paradigm is most useful for solving problems that have a lot of data that needs to
be updated on a wholesale basis. It is especially powerful in many regular
numerical calculations.
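As a rough illustration of the front-end/processor-array split described above, the following toy Python sketch (hypothetical class and method names, not a real SIMD machine) distributes data across the local memories of the processing elements and has every element apply one broadcast command in lock step:

```python
# Toy lock-step processor array: an illustrative sketch only.
class ProcessorArray:
    def __init__(self, data, num_pes):
        # Distribute the data round-robin across the PEs' local memories.
        self.num_pes = num_pes
        self.local = [data[i::num_pes] for i in range(num_pes)]

    def broadcast(self, op):
        # Lock-step: every PE applies the same operation in the same "step";
        # no explicit synchronization between PEs is needed.
        self.local = [[op(x) for x in mem] for mem in self.local]

    def gather(self):
        # The front end reads the local memories back as one ordinary memory.
        indexed = [(pe + slot * self.num_pes, x)
                   for pe, mem in enumerate(self.local)
                   for slot, x in enumerate(mem)]
        return [x for _, x in sorted(indexed)]

# The "front end" runs serially but issues one command to all PEs at once.
array = ProcessorArray(list(range(8)), num_pes=4)
array.broadcast(lambda x: x * x)
print(array.gather())   # [0, 1, 4, 9, 16, 25, 36, 49]
```

The front end here is just ordinary serial Python; only the `broadcast` call is data-parallel, mirroring the text's point that a serial program issues SIMD commands to the array.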

