Explain the concept of demand paging with an example.

Demand Paging in Operating Systems: Concept and Example

Demand paging is a memory management technique where the operating system loads only the required pages of a process into RAM when they are actually needed. Instead of loading the entire program at once, pages are brought into memory on demand. This reduces memory usage, speeds up startup time, and allows more processes to run concurrently.

Key Concepts

  • Page: A fixed-size block of the process’s virtual address space (e.g., 4 KB).
  • Frame: A fixed-size block in physical memory (same size as a page).
  • Page Table: Maps virtual pages to physical frames; contains bits like valid/present, dirty, and reference.
  • Valid/Present Bit: Indicates whether a page is currently in RAM. 0 = not in memory; 1 = in memory.
  • Page Fault: A trap (exception) raised by the hardware when a process references a page that is not currently in RAM.
  • Pager (Lazy Swapper): The OS component that loads pages into memory on demand.
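
A minimal sketch in C of what a page table entry carrying these bits might look like (the field names and widths here are illustrative, not a real hardware format):

#include <stdint.h>
#include <stdio.h>

/* Illustrative page table entry; real hardware formats (x86-64, ARM, ...) differ. */
typedef struct {
    uint32_t frame;       /* physical frame number this page maps to          */
    uint8_t  valid;       /* 1 = page is in RAM; 0 = access causes page fault */
    uint8_t  dirty;       /* 1 = page modified since it was loaded            */
    uint8_t  referenced;  /* 1 = page accessed recently (used by Clock/LRU)   */
} pte_t;

int main(void)
{
    pte_t pte = { .frame = 5, .valid = 1, .dirty = 0, .referenced = 1 };
    printf("page is %s memory, mapped to frame %u\n",
           pte.valid ? "in" : "not in", (unsigned)pte.frame);
    return 0;
}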

How Demand Paging Works (Step-by-Step)

  1. CPU generates a virtual address (page number + offset).
  2. MMU checks the page table entry:
    • If valid bit = 1, translate to the corresponding frame and access memory.
    • If valid bit = 0, a page fault occurs.
  3. On a page fault, the OS:
    • Suspends the faulting process and locates the needed page on disk (swap space or the program's executable file).
    • Chooses a free frame. If none is free, selects a victim frame using a page replacement algorithm (e.g., FIFO/LRU).
    • If the victim page is dirty, writes it back to disk.
    • Reads the needed page from disk into the chosen frame.
    • Updates the page table (sets valid bit, frame number; clears/sets relevant flags).
    • Restarts the faulting instruction; it now succeeds.
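
The flow above can be made concrete with a small user-space C simulation. Everything in it (the page table array, frame_owner, the disk_read/disk_write stubs, and the FIFO victim choice) is invented for illustration; a real kernel fault handler is far more involved.

#include <stdio.h>
#include <string.h>

#define NUM_PAGES   8
#define NUM_FRAMES  4
#define PAGE_SIZE   1024

/* Simulated page table entry (illustrative only). */
typedef struct { int frame; int valid; int dirty; } pte_t;

static pte_t page_table[NUM_PAGES];
static int   frame_owner[NUM_FRAMES];   /* which page occupies each frame; -1 = free */
static int   fifo_next = 0;             /* next victim under a simple FIFO policy    */

/* Stand-ins for disk I/O in this simulation. */
static void disk_read(int page, int frame) { printf("  disk read : page %d -> frame %d\n", page, frame); }
static void disk_write(int page)           { printf("  disk write: dirty page %d written back\n", page); }

static int handle_page_fault(int page)
{
    int frame = -1;

    /* Look for a free frame first. */
    for (int f = 0; f < NUM_FRAMES; f++)
        if (frame_owner[f] == -1) { frame = f; break; }

    /* No free frame: pick a victim (FIFO here) and evict it. */
    if (frame == -1) {
        frame = fifo_next;
        fifo_next = (fifo_next + 1) % NUM_FRAMES;
        int victim = frame_owner[frame];
        if (page_table[victim].dirty)
            disk_write(victim);              /* write back only if modified */
        page_table[victim].valid = 0;
    }

    /* Load the needed page and update the page table. */
    disk_read(page, frame);
    frame_owner[frame]     = page;
    page_table[page].frame = frame;
    page_table[page].valid = 1;
    page_table[page].dirty = 0;
    return frame;
}

/* Translate a virtual address, faulting the page in if needed. */
static int translate(int vaddr)
{
    int page = vaddr / PAGE_SIZE, offset = vaddr % PAGE_SIZE;
    if (!page_table[page].valid) {
        printf("page fault on page %d\n", page);
        handle_page_fault(page);
    }
    return page_table[page].frame * PAGE_SIZE + offset;
}

int main(void)
{
    memset(frame_owner, -1, sizeof frame_owner);           /* all frames start free */
    printf("vaddr 2500 -> paddr %d\n", translate(2500));   /* faults, loads page 2  */
    printf("vaddr 2600 -> paddr %d\n", translate(2600));   /* same page, no fault   */
    return 0;
}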

Numerical Example of Demand Paging

Assume:

Page size = 1 KB (1024 bytes)
Virtual address referenced by the process = 2500
Currently, Page 2 is not in RAM (valid bit = 0)

Address breakdown:

Page number = 2500 / 1024 = 2 (integer division)
Offset      = 2500 % 1024 = 452

Step-by-step:

  1. CPU tries to access Page 2, Offset 452 → page table shows valid = 0 → page fault.
  2. OS finds a free frame (say Frame 5). If no free frame exists, it chooses a victim frame and evicts it.
  3. OS reads Page 2 from disk into Frame 5.
  4. Page table entry for Page 2 is updated: valid = 1, frame = 5.
  5. The instruction restarts, and the MMU forms the physical address:
    Physical address = (Frame number × Page size) + Offset
                     = (5 × 1024) + 452
                     = 5120 + 452
                     = 5572
        
  6. Memory access completes successfully.
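
The address arithmetic in this example can be checked with a few lines of C, using the same assumed values (1 KB pages, virtual address 2500, frame 5):

#include <stdio.h>

int main(void)
{
    const int page_size = 1024;     /* 1 KB pages, as assumed above */
    const int vaddr     = 2500;     /* virtual address referenced   */
    const int frame     = 5;        /* frame chosen by the OS       */

    int page   = vaddr / page_size;           /* 2500 / 1024 = 2   */
    int offset = vaddr % page_size;           /* 2500 % 1024 = 452 */
    int paddr  = frame * page_size + offset;  /* 5 * 1024 + 452    */

    printf("page=%d offset=%d physical=%d\n", page, offset, paddr);  /* page=2 offset=452 physical=5572 */
    return 0;
}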

If No Free Frame Is Available

The OS applies a page replacement policy to evict a page:

  • FIFO (First-In, First-Out): Evict the oldest loaded page.
  • LRU (Least Recently Used): Evict the page that hasn’t been used for the longest time.
  • Clock/Second-Chance: An efficient approximation of LRU using a reference bit.

If the victim page is dirty (modified), it must be written back to disk before reuse. Clean pages can be discarded immediately.
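
As a rough illustration, here is a minimal C sketch of second-chance (Clock) victim selection over an array of reference bits; the frame count and initial bit values are made up for the example:

#include <stdio.h>

#define NUM_FRAMES 4

/* Second-chance (Clock): sweep past frames whose reference bit is set,
 * clearing the bit as we go; evict the first frame found with bit 0. */
static int ref_bit[NUM_FRAMES] = {1, 0, 1, 1};  /* example state       */
static int hand = 0;                            /* clock hand position */

static int choose_victim(void)
{
    for (;;) {
        if (ref_bit[hand] == 0) {
            int victim = hand;                   /* no second chance left */
            hand = (hand + 1) % NUM_FRAMES;
            return victim;
        }
        ref_bit[hand] = 0;                       /* give a second chance  */
        hand = (hand + 1) % NUM_FRAMES;
    }
}

int main(void)
{
    printf("victim frame: %d\n", choose_victim());   /* frame 1 with this state */
    return 0;
}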

Advantages of Demand Paging

  • Faster program startup (only needed pages are loaded).
  • Lower RAM usage; supports larger programs than physical memory.
  • Higher multiprogramming level, improving CPU utilization.
  • Supports features like copy-on-write and shared libraries efficiently.

Disadvantages and Issues

  • Page faults are expensive due to disk I/O latency.
  • Poor locality of reference, or too few frames to hold the process's working set, can cause frequent faults.
  • Thrashing: Excessive paging activity that severely degrades performance.

Performance Note (Effective Access Time)

The average memory access time increases with page faults. A common model:

EAT = (1 - p) × MA + p × (PF_time + MA)
Where:
- p        = page fault rate (0 ≤ p ≤ 1)
- MA       = memory access time
- PF_time  = page fault service time (interrupt + disk read/write + reload)

Goal: keep p very low by using good page replacement, adequate RAM, and locality-aware workloads.
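
A quick worked example in C with illustrative, textbook-style numbers (200 ns memory access, 8 ms fault service, one fault per 1,000 accesses; these figures are assumptions, not taken from the text above) shows how sensitive EAT is to p:

#include <stdio.h>

int main(void)
{
    double ma      = 200e-9;   /* memory access time: 200 ns (assumed)     */
    double pf_time = 8e-3;     /* page fault service time: 8 ms (assumed)  */
    double p       = 0.001;    /* one fault per 1,000 accesses (assumed)   */

    double eat = (1.0 - p) * ma + p * (pf_time + ma);
    printf("EAT = %.1f ns\n", eat * 1e9);   /* ~8200 ns, about 40x slower than 200 ns */
    return 0;
}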

Real-World Uses

  • Loading code segments only when executed (lazy loading).
  • Large datasets and sparse arrays where many pages are never touched.
  • Process creation with fork using copy-on-write: parent and child share pages, and a page is duplicated only when one of them writes to it.

Summary

Demand paging loads pages only when referenced, using page faults to trigger on-demand loading from disk. This approach minimizes memory usage and speeds up startup but requires careful management of page replacement and working sets to avoid thrashing.