Where is a page-replacement policy used? Explain any two policies in detail.
Page-Replacement Policy: Where It Is Used and Two Key Policies Explained
In operating systems, a page-replacement policy decides which memory page to evict when a new page must be loaded into RAM and no free frame is available. This decision directly affects page-fault rate, throughput, and overall system performance.
Where is a Page-Replacement Policy Used?
- Virtual memory with demand paging: when a process accesses a page that is not in main memory (page fault) and all frames are full.
- Kernel memory manager: choosing a victim page to free a frame for the incoming page.
- Related contexts (conceptually similar): CPU caches and TLBs also use replacement policies, though their constraints and hardware support differ.
Key terms:
- Page: a fixed-size block of a process’s virtual address space.
- Frame: a fixed-size block of physical memory (RAM) that holds one page.
- Page fault: occurs when the requested page is not in RAM.
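To make the "where" concrete, the following minimal sketch shows the point in a demand-paging fault handler at which the replacement policy is consulted. The function names (`access_page`, `choose_victim`) and the simplified bookkeeping are illustrative assumptions, not a real kernel interface.

```python
# Minimal demand-paging skeleton showing *where* the replacement policy runs.
# Names are illustrative, not a real OS API.

def access_page(page, resident, num_frames, choose_victim, stats):
    """Handle one memory reference; on a fault, evict a victim if no frame is free."""
    if page in resident:
        stats["hits"] += 1
        return
    stats["faults"] += 1                  # page fault: page must be brought into RAM
    if len(resident) == num_frames:       # all frames full: replacement is needed
        victim = choose_victim(resident)  # <-- the page-replacement policy decides here
        resident.remove(victim)           # (a real OS would also write back a dirty page)
    resident.append(page)                 # load the faulting page into the freed frame

# Usage: plug in a trivial policy (evicting the first resident page, i.e. FIFO order).
resident, stats = [], {"hits": 0, "faults": 0}
for p in [1, 2, 3, 2, 4]:
    access_page(p, resident, num_frames=3, choose_victim=lambda r: r[0], stats=stats)
print(stats)  # {'hits': 1, 'faults': 4}
```

The policy is deliberately passed in as a pluggable function: FIFO, LRU, Clock, and other policies differ only in how they pick the victim, not in where they are invoked.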
Policy 1: FIFO (First-In, First-Out)
FIFO evicts the page that has been in memory the longest, regardless of how frequently or recently it was used. It is simple and fast to implement using a queue.
How FIFO Works
- Maintain a queue of pages in the order they were loaded into frames.
- On a page fault and no free frame, remove the page at the front of the queue (the oldest).
- Load the new page into that frame and place it at the back of the queue.
Example (3 frames)
Reference string: 1 2 3 2 4 1 5 2

| Step | Page | Frames (left→right) | Result |
|------|------|---------------------|--------|
| 1 | 1 | [1, -, -] | Fault |
| 2 | 2 | [1, 2, -] | Fault |
| 3 | 3 | [1, 2, 3] | Fault |
| 4 | 2 | [1, 2, 3] | Hit |
| 5 | 4 | [4, 2, 3] (evict 1) | Fault |
| 6 | 1 | [4, 1, 3] (evict 2) | Fault |
| 7 | 5 | [4, 1, 5] (evict 3) | Fault |
| 8 | 2 | [2, 1, 5] (evict 4) | Fault |
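The trace above (7 faults, 1 hit) can be reproduced with a short simulation. The sketch below is illustrative rather than an OS interface: it models resident pages as a set plus a queue that records load order, and the function name `simulate_fifo` is assumed for this example.

```python
from collections import deque

def simulate_fifo(reference_string, num_frames):
    frames = set()    # pages currently resident in memory
    queue = deque()   # load order: front = oldest resident page
    faults = 0
    for page in reference_string:
        if page in frames:
            continue                      # hit: FIFO ignores recency, nothing to update
        faults += 1
        if len(frames) == num_frames:     # no free frame: evict the oldest page
            frames.remove(queue.popleft())
        frames.add(page)
        queue.append(page)                # newly loaded page joins the back of the queue
    return faults

print(simulate_fifo([1, 2, 3, 2, 4, 1, 5, 2], num_frames=3))  # prints 7
```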
Advantages
- Very easy to implement; minimal bookkeeping.
- Low overhead and predictable behavior.
Limitations
- Ignores recency of use; may evict a frequently used page.
- Can exhibit Belady’s anomaly (more frames can sometimes cause more faults).
Policy 2: LRU (Least Recently Used)
LRU evicts the page that has not been used for the longest time, relying on the principle of temporal locality: pages used recently are likely to be used again soon. It typically achieves lower fault rates than FIFO in practice.
How LRU Works
- Track the last access time (or order) for each page in memory.
- On a page fault and no free frame, evict the page with the oldest last-access timestamp.
- Update the timestamp of a page on every access (hit or load).
Example (3 frames)
Reference string: 1 2 3 2 4 1 5 2

| Step | Page | Frames (left→right) | Result / Eviction (LRU) |
|------|------|---------------------|-------------------------|
| 1 | 1 | [1, -, -] | Fault |
| 2 | 2 | [1, 2, -] | Fault |
| 3 | 3 | [1, 2, 3] | Fault |
| 4 | 2 | [1, 2, 3] | Hit (2 becomes most recent) |
| 5 | 4 | [4, 2, 3] | Fault; evict LRU = 1 |
| 6 | 1 | [4, 2, 1] | Fault; evict LRU = 3 |
| 7 | 5 | [4, 5, 1] | Fault; evict LRU = 2 |
| 8 | 2 | [2, 5, 1] | Fault; evict LRU = 4 |
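This trace (again 7 faults, 1 hit) can also be checked with a minimal simulation. The sketch below implements exact LRU using Python's `OrderedDict` as the recency list; the function name `simulate_lru` is assumed for this example, and real kernels usually rely on approximations (see the implementation notes that follow).

```python
from collections import OrderedDict

def simulate_lru(reference_string, num_frames):
    recency = OrderedDict()   # key order = access order; first key = least recently used
    faults = 0
    for page in reference_string:
        if page in recency:
            recency.move_to_end(page)     # hit: page becomes most recently used
            continue
        faults += 1
        if len(recency) == num_frames:    # no free frame: evict the LRU page
            recency.popitem(last=False)
        recency[page] = True              # newly loaded page is the most recent
    return faults

print(simulate_lru([1, 2, 3, 2, 4, 1, 5, 2], num_frames=3))  # prints 7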
Implementation Notes
- Exact LRU: per-page timestamps or a stack-like structure (higher overhead in software).
- Approximate LRU: Clock/Second-Chance algorithm using reference bits for lower overhead with similar behavior.
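As a rough illustration of the Clock idea, the sketch below simulates one common variant: each frame has a reference bit, a "hand" sweeps over the frames, and a page whose bit is set gets a second chance (its bit is cleared) instead of being evicted immediately. The names and details are illustrative assumptions, not a kernel API.

```python
def simulate_clock(reference_string, num_frames):
    frames = [None] * num_frames   # resident page (or None) in each frame
    ref_bit = [0] * num_frames     # one reference bit per frame
    hand = 0                       # clock hand: next frame to examine
    faults = 0
    for page in reference_string:
        if page in frames:
            ref_bit[frames.index(page)] = 1   # hit: hardware would set the reference bit
            continue
        faults += 1
        # Sweep until a frame is free or its reference bit is 0.
        while frames[hand] is not None and ref_bit[hand] == 1:
            ref_bit[hand] = 0                 # second chance: clear the bit and move on
            hand = (hand + 1) % num_frames
        frames[hand] = page                   # evict the victim (or fill the free frame)
        ref_bit[hand] = 1                     # the loading reference sets the bit
        hand = (hand + 1) % num_frames
    return faults

print(simulate_clock([1, 2, 3, 2, 4, 1, 5, 2], num_frames=3))  # prints 7
```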
Advantages
- Usually lower page-fault rate than FIFO; respects temporal locality.
- Does not suffer from Belady’s anomaly.
Limitations
- Higher bookkeeping cost (time stamps or hardware support required).
- Exact LRU can be expensive to maintain for large workloads.
FIFO vs LRU: Quick Comparison
- Performance: LRU’s fault rate is generally equal to or lower than FIFO’s for typical workloads.
- Overhead: FIFO is minimal; LRU needs extra tracking (or an approximation like Clock).
- Stability: FIFO can show Belady’s anomaly; LRU does not.
Key Takeaways
- Page-replacement policies are used during demand paging when memory frames are full.
- FIFO is simple and fast but can evict useful pages.
- LRU leverages recency for better hit rates, at the cost of extra overhead.
- In practice, many systems use Clock (an LRU approximation) for a good balance of speed and accuracy.
