College: College of Information Technology
Department: Software Department
Stage: 3
Course instructor: رفاه محمد كاظم المطيري
22/11/2017 20:12:09
Chapter 5: Deadlocks
Chapter Objectives
• To develop a description of deadlocks, which prevent sets of concurrent processes from completing their tasks.
• To present a number of different methods for preventing or avoiding deadlocks in a computer system.
There are various synchronization tools, such as mutex locks and semaphores. These tools are also considered system resources, and they are a common source of deadlock. However, a lock is typically associated with protecting a specific data structure; that is, one lock may be used to protect access to a queue, another to protect access to a linked list, and so forth. For that reason, each lock is typically assigned its own resource class, and its definition is not a problem.

Mutex Lock
The term mutex is short for mutual exclusion. A mutex lock can be used to protect critical regions and thus prevent race conditions: a process must acquire the lock before entering a critical section, and it releases the lock when it exits the critical section. The acquire() function acquires the lock, and the release() function releases it. A mutex lock has a Boolean variable, available, whose value indicates whether the lock is available. If the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable. A process that attempts to acquire an unavailable lock is blocked until the lock is released.

More generally, in computer science a lock or mutex (from mutual exclusion) is a synchronization mechanism for enforcing limits on access to a resource in an environment where there are many threads of execution. A lock is designed to enforce a mutual exclusion concurrency control policy.
The simplest type of lock is a binary semaphore. It provides exclusive access to the locked data.
The behavior of locking a mutex is as follows:
• If the mutex isn't currently locked by any thread, the calling thread locks it (from this point, and until its member unlock is called, the thread owns the mutex).
• If the mutex is currently locked by another thread, execution of the calling thread is blocked until it is unlocked by the other thread (other, non-blocked threads continue their execution).
• If the mutex is currently locked by the same thread calling this function, a deadlock results (with undefined behavior).

Mutex: a key to a room. Only one person can have the key, and thus occupy the room, at a time. When finished, the person gives (frees) the key to the next person in the queue. Officially: "Mutexes are typically used to serialize access to a section of re-entrant code that cannot be executed concurrently by more than one thread. A mutex object only allows one thread into a controlled section, forcing other threads which attempt to gain access to that section to wait until the first thread has exited from that section." (A mutex is really a semaphore with value 1.)

Semaphore: the number of free identical room keys. For example, say we have four rooms with identical locks and keys. The semaphore count, i.e. the count of keys, is set to 4 at the beginning (all four rooms are free), and the count is decremented as people come in. If all rooms are full, i.e. there are no free keys left, the semaphore count is 0. Now, when one person leaves a room, the semaphore is increased to 1 (one free key) and the key is given to the next person in the queue. Officially: "A semaphore restricts the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore) and can signal that they have finished using the resource (incrementing the semaphore)."

The Dining-Philosophers Problem
Consider five philosophers who spend their lives thinking and eating.
The philosophers share a circular table surrounded by five chairs, each belonging to one philosopher. In the center of the table is a bowl of rice, and the table is laid with five single chopsticks. When a philosopher thinks, she does not interact with her colleagues. From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the chopsticks that are between her and her left and right neighbors). A philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a chopstick that is already in the hand of a neighbor. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing the chopsticks. When she is finished eating, she puts down both chopsticks and starts thinking again.

One simple solution is to represent each chopstick with a semaphore. A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore. She releases her chopsticks by executing the signal() operation on the appropriate semaphores. Thus, the shared data are

    semaphore chopstick[n];  /* n = 5 in our example */

    do {
        wait(chopstick[i]);
        wait(chopstick[(i+1) % n]);
        /* eat for a while */
        signal(chopstick[i]);
        signal(chopstick[(i+1) % n]);
        /* think for a while */
    } while (true);

Figure: the structure of philosopher i.

where all the elements of chopstick are initialized to 1. The structure of philosopher i is shown in the figure above. Although this solution guarantees that no two neighbors are eating simultaneously, it nevertheless must be rejected because it could create a deadlock. Suppose that all five philosophers become hungry at the same time and each grabs her left chopstick. All the elements of chopstick will now be equal to 0. When each philosopher tries to grab her right chopstick, she will be delayed forever. Several possible remedies to the deadlock problem are the following:
• Allow at most four philosophers to be sitting simultaneously at the table.
• Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this, she must pick them up in a critical section).
• Use an asymmetric solution; that is, an odd-numbered philosopher picks up first her left chopstick and then her right chopstick, whereas an even-numbered philosopher picks up her right chopstick and then her left chopstick.
In Section 5.8, we present a solution to the dining-philosophers problem that ensures freedom from deadlocks. Note, however, that any satisfactory solution to the dining-philosophers problem must guard against the possibility that one of the philosophers will starve to death. A deadlock-free solution does not necessarily eliminate the possibility of starvation.
Deadlocks
In a multiprogramming environment, several processes may compete for a finite number of resources. A process requests resources; if the resources are not available at that time, the process enters a waiting state. Sometimes, a waiting process is never again able to change state, because the resources it has requested are held by other waiting processes. This situation is called a deadlock. Perhaps the best illustration of a deadlock is the rule: "When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone."
In this chapter, we describe methods that an operating system can use to prevent or deal with deadlocks. Although some applications can identify programs that may deadlock, operating systems typically do not provide deadlock-prevention facilities, and it remains the responsibility of programmers to ensure that they design deadlock-free programs.
Synchronization tools such as mutex locks and semaphores are also considered system resources, and they are a common source of deadlock. In a deadlock, processes never finish executing, and system resources are tied up, preventing other jobs from starting.
A process must request a resource before using it and must release the resource after using it. A process may request as many resources as it requires to carry out its designated task. Obviously, the number of resources requested may not exceed the total number of resources available in the system. In other words, a process cannot request three printers if the system has only two. Under the normal mode of operation, a process may utilize a resource in only the following sequence:
1. Request. The process requests the resource. If the request cannot be granted immediately (for example, if the resource is being used by another process), then the requesting process must wait until it can acquire the resource.
2. Use. The process can operate on the resource (for example, if the resource is a printer, the process can print on the printer).
3. Release. The process releases the resource.

The request and release of resources may be system calls.
Examples are the request() and release() device, open() and close() file, and allocate() and free() memory system calls. Similarly, as we said, the request and release of semaphores can be accomplished through the wait() and signal() operations on semaphores or through acquire() and release() of a mutex lock. For each use of a kernel-managed resource by a process or thread, the operating system checks to make sure that the process has requested and has been allocated the resource. A system table records whether each resource is free or allocated. For each resource that is allocated, the table also records the process to which it is allocated. If a process requests a resource that is currently allocated to another process, it can be added to a queue of processes waiting for this resource.

A set of processes is in a deadlocked state when every process in the set is waiting for an event that can be caused only by another process in the set. The events with which we are mainly concerned here are resource acquisition and release. The resources may be either physical resources (for example, printers, tape drives, memory space, and CPU cycles) or logical resources (for example, semaphores, mutex locks, and files). However, other types of events may result in deadlocks.
To illustrate a deadlocked state, consider a system with three CD-RW drives. Suppose each of three processes holds one of these CD-RW drives. If each process now requests another drive, the three processes will be in a deadlocked state. Each is waiting for the event "CD-RW is released," which can be caused only by one of the other waiting processes. This example illustrates a deadlock involving the same resource type. Deadlocks may also involve different resource types. For example, consider a system with one printer and one DVD drive. Suppose that process Pi is holding the DVD and process Pj is holding the printer. If Pi requests the printer and Pj requests the DVD drive, a deadlock occurs.
Developers of multithreaded applications must remain aware of the possibility of deadlocks. The locking tools are designed to avoid race conditions. However, in using these tools, developers must pay careful attention to how locks are acquired and released. Otherwise, deadlock can occur, as illustrated in the dining-philosophers problem.
Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1. Mutual exclusion. At least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.
2. Hold and wait. A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
3. No preemption. Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.
4. Circular wait. A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
We emphasize that all four conditions must hold for a deadlock to occur. The circular-wait condition implies the hold-and-wait condition, so the four conditions are not completely independent.
Resource-Allocation Graph
Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set consisting of all the active processes in the system, and R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system. A directed edge from process Pi to resource type Rj is denoted by Pi → Rj; it signifies that process Pi has requested an instance of resource type Rj and is currently waiting for that resource. A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that an instance of resource type Rj has been allocated to process Pi. A directed edge Pi → Rj is called a request edge; a directed edge Rj → Pi is called an assignment edge.
Pictorially, we represent each process Pi as a circle and each resource type Rj as a rectangle. Since resource type Rj may have more than one instance, we represent each such instance as a dot within the rectangle. Note that a request edge points only to the rectangle Rj, whereas an assignment edge must also designate one of the dots in the rectangle. When process Pi requests an instance of resource type Rj, a request edge is inserted in the resource-allocation graph. When this request can be fulfilled, the request edge is instantaneously transformed to an assignment edge. When the process no longer needs access to the resource, it releases the resource; as a result, the assignment edge is deleted.

The resource-allocation graph shown in the figure below depicts the following situation.

The sets P, R, and E:
• P = {P1, P2, P3}
• R = {R1, R2, R3, R4}
• E = {P1 → R1, P2 → R3, R1 → P2, R2 → P2, R2 → P1, R3 → P3}

Resource instances:
• One instance of resource type R1
• Two instances of resource type R2
• One instance of resource type R3
• Three instances of resource type R4
Process states:
• Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
• Process P2 is holding an instance of R1 and an instance of R2 and is waiting for an instance of R3.
• Process P3 is holding an instance of R3.
Given the definition of a resource-allocation graph, it can be shown that, if the graph contains no cycles, then no process in the system is deadlocked. If the graph does contain a cycle, then a deadlock may exist. If the cycle involves only a set of resource types, each of which has exactly one instance, then a deadlock has occurred: each process involved in the cycle is deadlocked. In this case, a cycle in the graph is both a necessary and a sufficient condition for the existence of deadlock.
If each resource type has several instances, then a cycle does not necessarily imply that a deadlock has occurred. In this case, a cycle in the graph is a necessary but not a sufficient condition for the existence of deadlock.
To illustrate this concept, we return to the resource-allocation graph depicted in Figure 7.1. Suppose that process P3 requests an instance of resource type R2. Since no resource instance is currently available, we add a request edge P3 → R2 to the graph (Figure 7.2). At this point, two minimal cycles exist in the system:
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2

Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by process P3. Process P3 is waiting for either process P1 or process P2 to release resource R2. In addition, process P1 is waiting for process P2 to release resource R1.

Now consider the resource-allocation graph in Figure 7.3. In this example, we also have a cycle:

P1 → R1 → P3 → R2 → P1

However, there is no deadlock. Observe that process P4 may release its instance of resource type R2. That resource can then be allocated to P3, breaking the cycle.

In summary, if a resource-allocation graph does not have a cycle, then the system is not in a deadlocked state. If there is a cycle, then the system may or may not be in a deadlocked state. This observation is important when we deal with the deadlock problem.
Methods for Handling Deadlocks
Generally speaking, we can deal with the deadlock problem in one of three ways:
• We can use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlocked state.
• We can allow the system to enter a deadlocked state, detect it, and recover.
• We can ignore the problem altogether and pretend that deadlocks never occur in the system.
The third solution is the one used by most operating systems, including Linux and Windows. It is then up to the application developer to write programs that handle deadlocks.
To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a deadlock-avoidance scheme. Deadlock prevention provides a set of methods to ensure that at least one of the necessary conditions (see the section above) cannot hold. These methods prevent deadlocks by constraining how requests for resources can be made. Deadlock avoidance requires that the operating system be given additional information in advance concerning which resources a process will request and use during its lifetime. With this additional knowledge, the operating system can decide for each request whether or not the process should wait. To decide whether the current request can be satisfied or must be delayed, the system must consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process.