
Shared Memory

College: College of Information Technology     Department: Software Department     Stage: 3
Course instructor: Abbas Mohsen Abdul Hussein Al-Bakri       25/03/2013 07:04:57


Today, programming on shared-memory multiprocessors is typically done via threading. A thread is similar
to a process in an operating system (OS), but with much less overhead. Threaded applications have
become quite popular even on uniprocessor systems, and Unix, Windows, Python, Java and Perl all support
threaded programming. One of the most famous threads packages is Pthreads.

How Threads Work on Multiprocessor Systems

Even if you have had some exposure to threaded programming before, it’s important to understand how
things change when we use it in a multiprocessor environment. To this end, let’s first review how an OS
schedules processes on a uniprocessor system.
Say persons X and Y are both running programs on the same uniprocessor machine. Since there is only one
CPU, only one program is running at any given time, but they do “take turns.” X’s program will run for
a certain amount of time, which we’ll assume for concreteness is 50 milliseconds. After 50 milliseconds,
a hardware timer will issue an interrupt, which will cause X’s program to suspend and the OS to resume
execution. The state of X’s program at the time of the interrupt, i.e. the values in the registers etc., will be
saved by the OS, and the OS will then restore the state of Y’s program which had been saved at its last turn.
Finally, the OS will execute an interrupt-return instruction, which will cause Y’s program to resume execution
in exactly the state it had at the end of its last turn. Note also that if the currently running
program makes a system call, i.e. calls a function in the OS for input/output or other services, the program’s
turn ends before the full 50 ms elapse.
But again, at any given time only one of the three programs (X, Y and the OS) is running. By contrast, on a
multiprocessor system with k CPUs, at any given time k programs are running. When a turn for a program
ends on a given CPU, again an interrupt occurs and the OS resumes execution, at which time it looks for
another program to run.
Though we have been speaking in terms of programs, the proper term is processes. Say for instance that
three people are running the GCC compiler right now on a certain machine. That would be only one program
but three processes.
For the type of threads we are discussing here—preemptive and system-level—a thread essentially is
a process. If for instance a program creates four threads, then all four will show up when one runs the
ps command on a Unix system. The difference is that threads have much less overhead than do ordinary
processes.
Threaded programming is natural for shared-memory multiprocessors, since threads do share memory. Just
like the process pool is shared by all the processors, so is the thread pool. Whenever a processor finishes a
timeslice for a thread, it goes to the thread pool to find another one to process. In that manner, there usually
will be many threads executing truly simultaneously, i.e. we get real parallelism.
