What are the different methods for handling deadlock? Explain deadlock prevention in detail.

Deadlock Handling Methods and Deadlock Prevention (Operating Systems)

Deadlock is a situation in an operating system where a set of processes are permanently blocked because each process is waiting for resources held by others. Understanding how to handle deadlock is essential for designing reliable and efficient systems.

Methods for Handling Deadlock

  • Ignore the problem (Ostrich approach): Assume deadlocks are rare and let the system run. Used in systems where deadlocks are infrequent and the cost of handling them is high (e.g., many desktop OSes).
  • Deadlock Prevention: Design the system so that at least one of the necessary conditions for deadlock never holds. This is structural and proactive.
  • Deadlock Avoidance: Make resource-allocation decisions at runtime to keep the system in a “safe state.” Classic example: Banker’s Algorithm (for multiple instances) and Resource Allocation Graph with claim edges (for single-instance resources).
  • Deadlock Detection and Recovery: Allow deadlocks to occur, detect them using algorithms (e.g., wait-for graph), then recover by terminating or rolling back processes, or by preempting resources.
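The detection approach mentioned above boils down to finding a cycle in a wait-for graph. A minimal sketch (the process names and graphs below are illustrative, not from a real OS):

```python
# Deadlock detection via cycle search in a wait-for graph.
# Nodes are processes; an edge P -> Q means "P waits for a resource held by Q".

def has_deadlock(wait_for):
    """Return True if the wait-for graph contains a cycle (DFS with colors)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wait_for}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:      # back edge => cycle => deadlock
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in list(wait_for))

# P1 waits for P2, P2 waits for P3, P3 waits for P1: a circular wait.
deadlocked = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}
safe       = {"P1": ["P2"], "P2": ["P3"], "P3": []}
print(has_deadlock(deadlocked))  # True
print(has_deadlock(safe))        # False
```

Real kernels run such a check periodically or on每 allocation failure; once a cycle is found, recovery picks a victim to terminate or preempt.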

Necessary Conditions for Deadlock (Coffman Conditions)

A deadlock can arise only if all four of the following conditions hold simultaneously:

  1. Mutual exclusion: At least one resource is non-shareable (only one process can use it at a time).
  2. Hold and wait: A process holds at least one resource and is waiting to acquire additional resources.
  3. No preemption: Resources cannot be forcibly taken; they must be released voluntarily by the process.
  4. Circular wait: A closed chain of processes exists where each process waits for a resource held by the next process in the chain.

Deadlock Prevention (Detailed Explanation)

Deadlock prevention ensures that the system is configured so that deadlock cannot occur. The key idea is to break at least one of the four necessary conditions.

1) Break Mutual Exclusion

  • Make resources shareable where possible. For example, instead of giving a process direct access to a printer, use spooling: processes write print jobs to a spool (disk file/queue), and a print daemon prints them. The “printer device” is no longer directly contended.
  • Limitations: Many resources (printers, mutexes, tape drives) are inherently non-shareable, so this technique has limited scope.
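The spooling idea above can be sketched with a job queue and a single daemon thread; the names (`spool`, `print_daemon`, the job strings) are illustrative:

```python
# Spooling sidesteps mutual exclusion on the device: processes enqueue jobs
# instead of touching the printer; one daemon thread is the only "holder".
import queue, threading

spool = queue.Queue()
printed = []

def print_daemon():
    """Sole consumer of the device; drains the spool until the sentinel."""
    while True:
        job = spool.get()
        if job is None:
            break
        printed.append(job)   # stand-in for driving the real printer

daemon = threading.Thread(target=print_daemon)
daemon.start()

# "Processes" submit jobs without ever contending for the printer itself.
for job in ("report.pdf", "invoice.pdf"):
    spool.put(job)
spool.put(None)               # sentinel: shut the daemon down
daemon.join()
print(printed)  # ['report.pdf', 'invoice.pdf']
```

Because producers only ever touch the (shareable) queue, no process can hold the printer while waiting for anything else.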

2) Break Hold-and-Wait

  • All-at-once (one-shot) allocation: Require a process to request all needed resources before it begins execution. If not all are available, it waits without holding any resource.
  • Release-before-request policy: If a running process needs additional resources, it must first release the ones it holds, then request the combined set again.
  • Pros: Eliminates scenarios where a process holds some resources while waiting for others, thus breaking hold-and-wait.
  • Cons: Leads to low resource utilization (resources may be held but idle), increased waiting time, possible starvation for large requests, and difficulty predicting all needs in advance.
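One-shot allocation can be sketched as an atomic "all or nothing" acquire: never block on one resource while holding another. The helper below is hypothetical, built on Python's standard locks:

```python
# Breaking hold-and-wait: a process either gets every lock it declared
# up front, or none of them.
import threading

def acquire_all(locks):
    """Atomically acquire all locks, or none. Returns True on success."""
    taken = []
    for lk in locks:
        if lk.acquire(blocking=False):   # never block while holding others
            taken.append(lk)
        else:
            for held in reversed(taken): # back out: release everything
                held.release()
            return False
    return True

a, b = threading.Lock(), threading.Lock()
ok = acquire_all([a, b])
print(ok)            # True: both were free, both acquired together
b.release(); a.release()

a.acquire()          # simulate another process holding 'a'
ok2 = acquire_all([a, b])
print(ok2)           # False: request fails, and 'b' was never taken
a.release()
```

The cons listed above show up directly: a caller that fails must retry from scratch, and resources granted early sit idle until the whole set is in hand.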

3) Break No-Preemption

  • Preemption policy: If a process holding resources requests another resource that is not immediately available, preempt (forcibly take back) its held resources and place the process back in the wait queue. The resources are then made available to others.
  • Where it fits: Works well for preemptible resources like CPU and memory pages. For I/O devices or critical sections protected by mutexes, preemption may be unsafe or impossible.
  • Practical rule: Save the process state, release its resources, and restore later when all needed resources can be granted together.
  • Trade-offs: Overhead due to context saving/restoring; may lead to repeated rollbacks and potential starvation if not managed with fair scheduling.
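The preemption rule can be approximated in user code as "preempt yourself": if the next resource is busy, give back what you hold and retry after a backoff. The helper name below is illustrative:

```python
# Breaking no-preemption with try-lock: no thread ever blocks while
# holding a resource, so the hold-and-block cycle cannot form.
import threading, time, random

def acquire_with_backoff(first, second, attempts=10):
    """Take 'first'; if 'second' is busy, give 'first' back and retry."""
    for _ in range(attempts):
        first.acquire()
        if second.acquire(blocking=False):
            return True                      # got both
        first.release()                      # preempt our own holding
        time.sleep(random.uniform(0, 0.01))  # back off before retrying
    return False

a, b = threading.Lock(), threading.Lock()
ok = acquire_with_backoff(a, b)
print(ok)  # True: both locks were free on the first attempt
b.release(); a.release()
```

The trade-offs above are visible here too: each failed attempt wastes work, and without randomized backoff or fair queuing two contenders can livelock or starve.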

4) Break Circular Wait

  • Total ordering of resource types: Assign a unique global rank to each resource type and require that processes request resources only in increasing order of that ranking.
    Example: let the global order be R1 < R2 < R3.
    - A process holding R2 may request R3 next, but not R1.
    - A process holding R3 may not request R2 or R1.
    Result: no cycle can form, because requests never go "backward" in the order.
  • Variants: If a process needs a lower-ranked resource later, it must release higher-ranked ones first, then reacquire in order.
  • Pros: Simple to implement and highly effective for locks and semaphores.
  • Cons: Requires careful design to classify and order resources; can complicate programming and may reduce concurrency.
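Resource ordering is easy to enforce in code by always sorting on a fixed rank before acquiring. A minimal sketch, using `id()` as the rank (any total order works):

```python
# Breaking circular wait: locks are always taken in ascending rank order,
# regardless of the order the caller lists them.
import threading

def acquire_in_order(*locks):
    """Acquire locks in one canonical global order to rule out cycles."""
    for lk in sorted(locks, key=id):   # same order for every caller
        lk.acquire()

def release_all(*locks):
    for lk in sorted(locks, key=id, reverse=True):
        lk.release()

r1, r2 = threading.Lock(), threading.Lock()
# Two callers with opposite argument orders still lock in the same sequence,
# so the "A holds r1 wants r2 / B holds r2 wants r1" cycle cannot arise.
acquire_in_order(r1, r2)
both_held = r1.locked() and r2.locked()
print(both_held)                    # True
release_all(r2, r1)
print(r1.locked() or r2.locked())   # False
```

This is the standard discipline for mutexes in large codebases: document one global lock order and never acquire against it.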

Prevention vs. Other Methods (Quick View)

  • Prevention: Guarantees no deadlocks by design; may reduce resource utilization or concurrency; simpler runtime control.
  • Avoidance: Keeps system in safe states using algorithms like Banker’s; higher runtime overhead; needs maximum-need info in advance.
  • Detection & Recovery: High resource utilization and simple allocation at runtime, but requires periodic detection and possibly disruptive recovery actions.
  • Ignoring: Lowest overhead; acceptable only if deadlocks are extremely rare or non-critical.

Key Takeaways for Exams

  • List the four handling methods: ignore, prevention, avoidance, detection and recovery.
  • State the four necessary conditions and explain that prevention breaks at least one.
  • Prevention techniques: share resources (spooling), one-shot allocation, preemption rules, resource ordering.
  • Discuss trade-offs: utilization, overhead, starvation risk, and applicability to resource types.