Study Guide 2026: Complete Questions with Detailed Answers
Section 1: Process & Thread Management
1. What is the primary difference between a process and a thread?
• Answer: A process is an instance of a program in execution, possessing its own
independent memory space (code, data, heap, stack). A thread is a lightweight unit of
execution within a process. Threads of the same process share the process's memory
space and resources but have their own stack and register state.
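The sharing described above can be seen directly in code. A minimal Python sketch (the names `shared` and `worker` are illustrative, not from the text): each thread gets its own private stack for locals, but all of them write into the same process-wide list.

```python
import threading

shared = []  # lives in the process's shared memory, visible to every thread

def worker(tag):
    # local variables live on this thread's private stack
    local = f"from-{tag}"
    shared.append(local)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))  # ['from-0', 'from-1', 'from-2', 'from-3']
```

All four threads mutated one list without any copying, which is exactly the sharing that separate processes would need explicit IPC to achieve.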
2. What is a Process Control Block (PCB)?
• Answer: A data structure in the operating system kernel that stores all essential
information about a process (process state, program counter, CPU registers, memory
management info, accounting info, I/O status). It allows the OS to suspend and resume
processes.
3. Describe the possible states in a simple process lifecycle model.
• Answer: New, Ready, Running, Waiting/Blocked, Terminated. A process moves from
New to Ready when admitted, is dispatched (Ready -> Running), may issue an I/O request
(Running -> Waiting), returns to the ready queue when the I/O completes (Waiting ->
Ready), and finally exits (Running -> Terminated).
4. What is the purpose of a scheduler?
• Answer: To select which process/thread from the ready queue should be assigned to the
CPU next, aiming to optimize criteria like CPU utilization, throughput, turnaround time,
waiting time, and response time.
5. What is the critical difference between preemptive and non-preemptive (cooperative)
scheduling?
• Answer: In preemptive scheduling, the OS can forcibly remove a running process from
the CPU. In non-preemptive scheduling, a process retains the CPU until it voluntarily
yields by terminating or switching to the waiting state.
6. Describe the First-Come, First-Served (FCFS) scheduling algorithm.
• Answer: A non-preemptive algorithm that executes processes in the exact order they
arrive in the ready queue. It is simple but can lead to the convoy effect, where short
processes wait behind a long one, increasing average waiting time.
7. What is the main advantage of Shortest-Job-First (SJF) scheduling?
• Answer: Among non-preemptive algorithms, it provably minimizes the average waiting
time for a given set of processes. Its drawback is that it requires knowing, or
estimating, the length of each process's next CPU burst.
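The waiting-time gap between FCFS and SJF can be checked with a small calculation. A sketch using the classic burst lengths 24, 3, 3 (illustrative values; all processes are assumed to arrive at time 0):

```python
def avg_waiting_time(bursts):
    """Average waiting time when jobs run in the given order (all arrive at t=0)."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)  # a job waits for everything scheduled before it
        elapsed += b
    return sum(waits) / len(waits)

arrival_order = [24, 3, 3]                      # long job arrived first
print(avg_waiting_time(arrival_order))          # FCFS: (0 + 24 + 27) / 3 = 17.0
print(avg_waiting_time(sorted(arrival_order)))  # SJF:  (0 + 3 + 6)  / 3 = 3.0
```

Running the short jobs first drops the average wait from 17 to 3 time units, which is the convoy effect and its cure in one line.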
8. What is the key feature of the Round-Robin (RR) scheduling algorithm?
• Answer: It is a preemptive algorithm designed for time-sharing systems. Each process
gets a small unit of CPU time (a time quantum). If not finished, it is preempted and
moved to the back of the ready queue. Performance depends heavily on the quantum
size.
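The quantum-based preemption described above can be sketched as a short simulation (the function name and the assumption that every process arrives at t=0 are ours, not from the text):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return each process's completion time under RR with the given quantum.
    All processes are assumed to arrive at t=0; context-switch cost is ignored."""
    queue = deque(enumerate(bursts))   # (pid, remaining burst time)
    finish, clock = {}, 0
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            queue.append((pid, remaining - run))  # preempted: back of the queue
        else:
            finish[pid] = clock
    return finish

print(round_robin([24, 3, 3], quantum=4))  # {1: 7, 2: 10, 0: 30}
```

Note how the two short processes finish quickly (good response time), while the long process is repeatedly preempted and finishes last.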
9. What is a race condition?
• Answer: A situation where the outcome of concurrent execution of processes/threads
depends on the non-deterministic order of their execution (timing), leading to
inconsistent or incorrect results. It occurs when they access and manipulate shared data
without proper synchronization.
10. What is the critical section problem?
• Answer: The problem of designing a protocol for processes/threads to cooperate such that
only one at a time can execute its critical section—a segment of code where shared data is
accessed. A solution must provide mutual exclusion, progress, and bounded waiting.
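A minimal sketch of mutual exclusion, assuming Python's `threading.Lock` as the synchronization primitive: the `with lock` block is the critical section, and without it the read-modify-write on `counter` would be a race.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # critical section: only one thread inside at a time
            counter += 1  # read-modify-write on shared data

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 — without the lock the total can come up short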
Section 2: Memory Management
11. What is the main goal of memory management?
• Answer: To efficiently manage the hierarchy of memory (registers, cache, RAM, disk) to keep
frequently needed data in fast storage, provide abstraction (virtual memory), isolation between
processes, and allow programs to use more memory than physically exists.
12. What are logical (virtual) and physical addresses?
• Answer: A logical address is generated by the CPU during program execution; it is the address
seen by the process. A physical address is the actual address in physical RAM. The Memory
Management Unit (MMU) translates logical to physical addresses at runtime.
13. What is the purpose of paging?
• Answer: To eliminate external fragmentation and allow the physical address space of a
process to be non-contiguous. Physical memory is divided into fixed-size frames, and logical
memory is divided into same-size pages. A page table translates page numbers to frame
numbers.
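The page-number/offset split is just integer arithmetic. A sketch (the 4 KiB page size and the page-table contents are assumed for illustration):

```python
PAGE_SIZE = 4096  # 4 KiB pages, a common size (assumed for this sketch)
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number (hypothetical)

def translate_page(virtual_addr):
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]       # a missing entry here would mean a page fault
    return frame * PAGE_SIZE + offset

print(hex(translate_page(0x1ABC)))  # page 1, offset 0xABC -> frame 2 -> 0x2abc
```

Because only the page number changes during translation, the offset bits pass through untouched, which is why page sizes are powers of two.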
14. What is a page fault?
• Answer: An interrupt (trap) that occurs when a program accesses a page that is mapped in its
virtual address space but is not currently loaded in physical RAM. The OS must handle it by
loading the required page from disk, potentially swapping another page out.
15. What is the role of the Translation Lookaside Buffer (TLB)?
• Answer: The TLB is a hardware cache inside the MMU that stores recent page table
translations (page # -> frame #). It dramatically speeds up virtual-to-physical address translation
by avoiding a main memory (or multi-level page table) access for every memory reference.
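The speedup can be quantified with the usual effective-access-time calculation (the timings and hit rate below are illustrative assumptions, not figures from the text):

```python
# Effective access time (EAT) with a TLB, assuming a single-level page table:
tlb_time = 1      # ns to consult the TLB
mem_time = 100    # ns for one main-memory access
hit_rate = 0.99

# Hit: TLB lookup + the actual memory access.
# Miss: TLB lookup + one extra memory access to walk the page table + the access.
eat = hit_rate * (tlb_time + mem_time) + (1 - hit_rate) * (tlb_time + 2 * mem_time)
print(round(eat, 2))  # 102.0 ns, barely above the raw 100 ns memory latency
```

Without a TLB every reference would cost two memory accesses (200 ns here), so a 99% hit rate nearly halves the effective latency.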
16. What is thrashing?
• Answer: A severe performance degradation that occurs when the system spends more time
swapping pages in and out (handling page faults) than executing useful work. It happens when
the degree of multiprogramming is too high relative to the available physical frames.
17. Describe Belady's optimal (OPT) page replacement algorithm.
• Answer: An idealized, unrealizable algorithm that replaces the page that will not be used for
the longest period in the future. It serves as a benchmark to compare the performance of
realizable algorithms like FIFO and LRU.
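Although OPT cannot be implemented online, it is easy to simulate offline once the full reference string is known. A sketch (the function name is ours; the reference string is a common textbook example):

```python
def opt_faults(refs, frames):
    """Count page faults under Belady's optimal (OPT) replacement."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
        else:
            future = refs[i + 1:]
            # Evict the resident page whose next use is farthest away (or never comes).
            victim = max(memory,
                         key=lambda p: future.index(p) if p in future else float("inf"))
            memory[memory.index(victim)] = page
    return faults

print(opt_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))  # 9
```

No realizable algorithm can do better than these 9 faults on this string with 3 frames, which is what makes OPT useful as a benchmark.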
18. How does the Least Recently Used (LRU) page replacement algorithm work?
• Answer: It approximates OPT by using the recent past as an approximation of the near future.
It replaces the page that has not been used for the longest period of time. It requires hardware
support (counters or a stack) to track usage.
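A compact way to simulate LRU is to keep the resident pages in recency order; a sketch using Python's `OrderedDict` as the recency stack (the reference string is a common textbook example):

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Count page faults under LRU, tracking recency with an OrderedDict."""
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict the least recently used
            memory[page] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))  # 12
```

LRU's 12 faults on this string sit between FIFO's 15 and OPT's 9, illustrating how well the recent past predicts the near future here.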
19. What is the working set model?
• Answer: A model to prevent thrashing. A process's working set is the set of pages it has
actively used in a recent time window (Δ). The principle states that a process should only be
allowed to run if its working set can be kept in memory.
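The window-based definition translates directly into code; a sketch (the function name and the tiny reference string are illustrative):

```python
def working_set(refs, t, delta):
    """Pages referenced in the most recent `delta` references ending at time t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 4, 4, 4, 4, 3]
print(working_set(refs, t=4, delta=4))  # pages {1, 2, 3, 4}: wide locality
print(working_set(refs, t=8, delta=4))  # pages {3, 4}: locality has narrowed
```

As the process settles into a tighter locality, its working set shrinks, so fewer frames need to stay resident to avoid thrashing.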
20. What is segmentation?
• Answer: A memory-management scheme that supports the programmer's view of memory as
a collection of variable-sized logical segments (e.g., code, data, stack). Each segment has a
name, length, and base address. Addresses are specified as (segment number, offset).
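The (segment number, offset) translation with a limit check can be sketched in a few lines (the segment-table values are hypothetical):

```python
# Hypothetical segment table: segment number -> (base address, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def seg_to_physical(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # hardware would trap to the OS here (a segmentation fault)
        raise MemoryError("offset exceeds segment limit")
    return base + offset

print(seg_to_physical(2, 53))  # 4300 + 53 = 4353
```

Unlike paging, the offset must be checked against a per-segment limit, because segments are variable-sized.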
Section 3: Storage & I/O Systems
21. What is the purpose of a device driver?
• Answer: It is OS-specific software that acts as a translator between a generic OS I/O request
and the specific commands, registers, and protocols required by a particular hardware device
controller. It hides device details from the kernel.
22. Describe the three primary disk scheduling algorithms.
• Answer:
• FCFS: Handles requests in the order they arrive. Simple but can have long seek times.
• SSTF (Shortest Seek Time First): Selects the request with the minimum seek distance from the
current head position. Can lead to starvation.
• SCAN (Elevator): The head moves in one direction, servicing requests until it reaches the end
of the disk, then reverses direction. More fair than SSTF.
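The difference between these algorithms shows up as total head movement. A sketch comparing FCFS and SSTF on a common textbook request queue with the head starting at cylinder 53 (values illustrative):

```python
def total_seek(start, order):
    """Total head movement (in cylinders) when servicing requests in `order`."""
    distance, head = 0, start
    for cyl in order:
        distance += abs(cyl - head)
        head = cyl
    return distance

requests = [98, 183, 37, 122, 14, 124, 65, 67]
head = 53

print(total_seek(head, requests))  # FCFS: 640 cylinders of movement

# SSTF: greedily pick the nearest pending request at each step.
pending, order, pos = list(requests), [], head
while pending:
    nxt = min(pending, key=lambda c: abs(c - pos))
    order.append(nxt)
    pending.remove(nxt)
    pos = nxt
print(total_seek(head, order))     # SSTF: 236 cylinders of movement
```

SSTF cuts head movement from 640 to 236 cylinders here, but a steady stream of nearby requests could starve the far-away ones, which SCAN's sweeping motion avoids.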
23. What is the difference between blocking and non-blocking I/O?
• Answer: In blocking I/O, a process/thread is suspended (moved to the waiting state) until the
I/O operation completes. In non-blocking I/O, a call returns immediately, either with the results
or an error code, allowing the process to continue other work.
24. What is RAID, and what is RAID 1?
• Answer: RAID (Redundant Array of Independent Disks) uses multiple disks to improve
performance, reliability, or both. RAID 1 (Mirroring) involves duplicating all data from one disk
to another, providing fault tolerance (if one disk fails, data is on the other) but at a 100% storage
overhead.
25. What is RAID 5?
• Answer: RAID 5 uses block-level striping with distributed parity. Parity information (for error
recovery) is distributed across all disks in the array. It can tolerate a single disk failure, provides
good read performance, and has a lower storage overhead than mirroring (1/n overhead for n
disks).
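The overhead figures for RAID 1 and RAID 5 follow from simple arithmetic (the function name and the 4 TB disk size are illustrative):

```python
def raid_usable(level, disks, disk_size_tb):
    """Usable capacity for RAID 1 (mirroring) and RAID 5 (distributed parity)."""
    if level == 1:
        return disks * disk_size_tb / 2    # every block is stored twice
    if level == 5:
        return (disks - 1) * disk_size_tb  # one disk's worth of space holds parity
    raise ValueError("only RAID 1 and RAID 5 are modeled here")

print(raid_usable(1, 2, 4))  # 4.0 TB usable out of 8 TB raw (100% overhead)
print(raid_usable(5, 4, 4))  # 12 TB usable out of 16 TB raw (1/4 overhead)
```

As the array grows, RAID 5's parity overhead (1/n) shrinks, while mirroring always costs half the raw capacity.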
26. How does an SSD (Solid State Drive) differ fundamentally from an HDD in terms of access?
• Answer: An HDD uses mechanical moving parts (spinning platters, moving read/write heads),
so access time involves seek time and rotational latency. An SSD uses flash memory with no
moving parts, so access is electronic and provides consistent, very low-latency random access.
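The mechanical cost can be made concrete with rough numbers (the 7200 RPM spindle speed and 9 ms average seek below are illustrative assumptions, not figures from the text):

```python
# Rough average access time for a mechanical disk:
rpm = 7200
avg_seek_ms = 9.0

# Average rotational latency is the time for half a revolution.
ms_per_rev = 60_000 / rpm           # about 8.33 ms per revolution at 7200 RPM
avg_rotational_ms = ms_per_rev / 2  # about 4.17 ms
avg_access_ms = avg_seek_ms + avg_rotational_ms
print(round(avg_access_ms, 2))      # 13.17 ms per random access
```

A flash SSD has no seek or rotation, so a random read typically completes in well under a millisecond, which is where the orders-of-magnitude gap in random-access performance comes from.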