
Parallel Processing (Introduction)

College: College of Information Technology     Department: Software Department     Year: 3
Course instructor: Abbas Muhsin Abd Al-Hussein Al-Bakri       24/03/2013 09:55:12
There is an ever-increasing appetite among computer users for faster and faster machines. This was epitomized
in a statement by Steve Jobs, founder/CEO of Apple and Pixar. He noted that when he was at Apple
in the 1980s, he was always worried that some other company would come out with a faster machine than
his. But now at Pixar, whose graphics work requires extremely fast computers, he is always hoping someone
produces faster machines, so that he can use them!
A major source of speedup is the parallelizing of operations. Parallel operations can be either within-processor, such as pipelining or having several ALUs within a processor, or between-processor, in which many processors work on different parts of a problem in parallel. Our focus here is on between-processor operations.
For example, the Registrar’s Office at UC Davis uses shared-memory multiprocessors for processing its
on-line registration work. A shared-memory multiprocessor machine consists of several processors, plus a
lot of memory, all connected to the same bus or other interconnect. All processors then access the same
memory chips. As of March 2004, the Registrar’s current system was a SPARC Sunfire 3800, with 16 GB
RAM and eight 900 MHz UltraSPARC III+ CPUs.

Programming Paradigms
There are two main paradigms in parallel processing today: shared memory and message passing. The distinction can occur at either the software or the hardware level; both software and hardware can be designed around either paradigm. Thus, for example, the UCD Registrar could run message-passing software such as the MPI package on their shared-memory hardware, while we could use the shared-memory software package TreadMarks on the message-passing NOW (network of workstations) in CSIF.

World Views
To explain the two paradigms, we will use the term nodes, where roughly speaking one node corresponds
to one processor, and use the following example:
Suppose we wish to multiply an n×1 vector X by an n×n matrix A, putting the product in an n×1
vector Y, and we have p processors to share the work.

Shared-Memory
In the shared-memory paradigm, the arrays A, X and Y would be held in common by all nodes. If, for
instance, node 2 were to execute
Y[3] = 12;
and then node 15 were to subsequently execute
printf("%d\n", Y[3]);
the value 12 would be printed, since all nodes read and write the same memory.
