Scheduling
IB Syllabus: A1.3.3 – Compare different approaches to scheduling
Table of Contents
- Key Concepts
- Worked Examples
- Quick Check
- Trace Exercise
- Spot the Error
- Fill in the Blanks
- Predict the Output
- Practice Exercises
- Connections
Key Concepts
Why Scheduling Matters
The CPU can only execute one process at a time (on a single core). At any moment, there may be dozens or even hundreds of processes competing for CPU time – your web browser, background updates, an antivirus scan, the operating system itself. The scheduler is the component of the OS that decides which process gets the CPU next.
Good scheduling aims to:
- Maximise throughput – complete as many processes as possible per unit of time
- Minimise waiting time – reduce the time processes spend in the ready queue
- Ensure fairness – prevent any single process from monopolising the CPU
- Maintain responsiveness – keep interactive applications feeling fast
Process States
Every process in the system is in one of five states at any given time:
| State | Description |
|---|---|
| New | Process has been created but not yet admitted to the ready queue |
| Ready | Waiting in the ready queue for CPU time – it can run but the CPU is busy |
| Running | Currently executing on the CPU |
| Waiting | Blocked – waiting for I/O completion or an external event |
| Terminated | Finished execution; resources being released |
New --> Ready <--> Running --> Terminated
          ^           |
          |           v
          +------- Waiting
A process transitions from Ready to Running when the scheduler selects it. It moves from Running back to Ready if it is preempted (time slice expires). It moves from Running to Waiting if it requests I/O. When I/O completes, it returns to Ready.
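The legal transitions described above can be captured in a small table. The sketch below is illustrative (the dict and function names are not from any real OS API); it simply encodes which direct state changes are allowed:

```python
# Legal process state transitions, following the table and diagram above.
# Illustrative only - real schedulers track far more per-process state.
TRANSITIONS = {
    ("New", "Ready"): "admitted by the OS",
    ("Ready", "Running"): "selected by the scheduler (dispatch)",
    ("Running", "Ready"): "preempted (time slice expired)",
    ("Running", "Waiting"): "requested I/O",
    ("Waiting", "Ready"): "I/O completed",
    ("Running", "Terminated"): "finished execution",
}

def can_transition(src: str, dst: str) -> bool:
    """Return True if a process may move directly from src to dst."""
    return (src, dst) in TRANSITIONS

print(can_transition("Running", "Ready"))    # True - preemption
print(can_transition("Waiting", "Running"))  # False - must pass through Ready
```

Note that there is no direct Waiting to Running transition: a process whose I/O completes must re-enter the ready queue and wait to be dispatched again.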
Preemptive vs Non-Preemptive
There are two fundamental approaches to scheduling:
Non-preemptive scheduling
- Once a process starts running, it keeps the CPU until it finishes or voluntarily releases it (e.g., makes an I/O request)
- Simpler to implement – no forced context switches
- Risk: one long process can block all others
Preemptive scheduling
- The OS can forcibly remove a running process from the CPU (e.g., when a time slice expires or a higher-priority process arrives)
- More complex – requires saving and restoring process state (context switch)
- Better for interactive systems where responsiveness matters
Exam questions often ask you to identify whether a given algorithm is preemptive or non-preemptive. Make sure you can classify each algorithm.
Scheduling Algorithms
First-Come First-Served (FCFS)
FCFS is the simplest scheduling algorithm. Processes are served in the order they arrive, exactly like a queue at a shop.
- Type: Non-preemptive
- How it works: The first process to arrive runs to completion, then the next, and so on
- Advantage: Simple to implement and understand; no starvation
- Disadvantage: Convoy effect – short processes stuck behind a long one experience excessive waiting time
- Best for: Batch systems where order matters and there is no urgency
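FCFS is simple enough to sketch in a few lines. The helper below is hypothetical (not from the text), but it implements exactly the behaviour described: processes run to completion in arrival order.

```python
# A minimal FCFS sketch: run each process to completion in arrival order.
# Assumes the input list is already sorted by arrival time.
def fcfs(processes):
    """processes: list of (name, burst, arrival). Returns {name: completion_time}."""
    time = 0
    completion = {}
    for name, burst, arrival in processes:
        time = max(time, arrival)  # CPU may sit idle until the process arrives
        time += burst              # non-preemptive: run the full burst
        completion[name] = time
    return completion

print(fcfs([("P1", 6, 0), ("P2", 3, 1), ("P3", 2, 2)]))
# {'P1': 6, 'P2': 9, 'P3': 11}
```

Running it on the processes from Worked Example 1 below reproduces the completion times in that example's table.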
Round Robin (RR)
Round Robin gives each process a fixed time quantum (time slice), typically a few milliseconds. When the quantum expires, the running process is moved to the back of the ready queue and the next process gets its turn.
- Type: Preemptive
- How it works: Each process runs for at most one time quantum, then yields the CPU
- Advantage: Fair – every process gets equal CPU time; good responsiveness
- Disadvantage: Context switch overhead – saving and restoring process state takes time; if the quantum is too small, the CPU spends more time switching than executing
- Quantum size trade-off:
- Small quantum = more responsive but higher overhead
- Large quantum = less overhead but approaches FCFS behaviour
- Best for: Interactive and time-sharing systems
Think of Round Robin like a group presentation where each speaker gets exactly 2 minutes. After 2 minutes, the next person speaks, regardless of whether the previous speaker was finished. Those who still have more to say go to the back of the queue.
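The quantum-and-queue mechanism can be simulated directly. This is a sketch under one stated assumption: processes that arrive during a time slice join the ready queue before the preempted process re-joins, which is the convention used in the worked traces below.

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst, arrival) sorted by arrival time.
    Returns {name: completion_time}. Arrivals during a slice join the
    queue before the preempted process re-joins (trace convention here)."""
    remaining = {name: burst for name, burst, _ in processes}
    arrivals = deque(processes)
    ready = deque()
    time, completion = 0, {}
    while arrivals or ready:
        if not ready:                                # CPU idle until next arrival
            time = max(time, arrivals[0][2])
        while arrivals and arrivals[0][2] <= time:   # admit processes already here
            ready.append(arrivals.popleft()[0])
        name = ready.popleft()
        run = min(quantum, remaining[name])          # may finish before the quantum
        time += run
        remaining[name] -= run
        while arrivals and arrivals[0][2] <= time:   # arrivals during this slice
            ready.append(arrivals.popleft()[0])
        if remaining[name] > 0:
            ready.append(name)                       # preempted: back of the queue
        else:
            completion[name] = time
    return completion

print(round_robin([("P1", 6, 0), ("P2", 3, 1), ("P3", 2, 2)], quantum=3))
# {'P2': 6, 'P3': 8, 'P1': 11}
```

These are the same completion times derived by hand in Worked Example 2 below.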
Priority Scheduling
Each process is assigned a priority number. The process with the highest priority runs first. In this course, lower numbers indicate higher priority (priority 1 runs before priority 5).
- Type: Can be preemptive or non-preemptive
- How it works: The scheduler always selects the highest-priority process from the ready queue
- Advantage: Important processes (e.g., system tasks, real-time monitoring) always run first
- Disadvantage: Starvation – low-priority processes may never execute if high-priority processes keep arriving
- Solution: Ageing – gradually increase the priority of processes that have been waiting a long time, ensuring they eventually get CPU time
- Best for: Real-time systems (e.g., hospital monitoring, flight control)
Starvation is a key exam concept. Always mention ageing as the solution when discussing priority scheduling disadvantages.
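Ageing can be sketched as a simple adjustment to the priority number. The policy below is illustrative (the interval and function name are assumptions, not a real OS rule): every 10 ticks of waiting improves the priority number by one, using this course's convention that lower numbers mean higher priority.

```python
# Illustrative ageing policy: every AGE_INTERVAL ticks of waiting,
# the priority number drops by 1 (lower number = higher priority).
AGE_INTERVAL = 10

def aged_priority(base_priority, waiting_time):
    """Effective priority after ageing; never better than priority 1."""
    boost = waiting_time // AGE_INTERVAL
    return max(1, base_priority - boost)

print(aged_priority(5, 0))    # 5 - freshly arrived, no boost yet
print(aged_priority(5, 25))   # 3 - two ageing boosts after 25 ticks
print(aged_priority(5, 100))  # 1 - capped at the highest priority
```

However long it waits, a priority-5 process eventually reaches priority 1 and must be scheduled, which is exactly how ageing prevents starvation.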
Multilevel Queue (MLQ)
MLQ divides the ready queue into multiple separate queues, each with its own scheduling algorithm. Processes are permanently assigned to a queue based on their type.
- Structure example:
- Queue 1 (highest priority): System processes – scheduled using Round Robin
- Queue 2 (medium priority): Interactive user processes – scheduled using Round Robin
- Queue 3 (lowest priority): Batch processes – scheduled using FCFS
- Inter-queue scheduling: Typically fixed priority – Queue 1 must be empty before Queue 2 runs
- Disadvantage: Lower-priority queues may starve if higher queues are always busy
Multilevel Feedback Queue (MLFQ) is a variant where processes can move between queues based on their behaviour. A CPU-intensive process may be demoted to a lower queue, while a process that frequently performs I/O may be promoted. This provides adaptability that standard MLQ lacks.
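The fixed-priority inter-queue rule can be shown in a few lines. The queue contents and process names below are invented for illustration; the selection logic matches the MLQ description above: the highest-priority non-empty queue always wins.

```python
from collections import deque

# Hypothetical MLQ structure mirroring the example above:
# lower level number = higher priority.
queues = {
    1: deque(["sys_logger"]),       # system processes (Round Robin)
    2: deque(["editor", "shell"]),  # interactive processes (Round Robin)
    3: deque(["payroll_batch"]),    # batch processes (FCFS)
}

def select_next(queues):
    """Return (level, process) from the highest-priority non-empty queue."""
    for level in sorted(queues):
        if queues[level]:
            return level, queues[level][0]
    return None  # all queues empty

print(select_next(queues))  # (1, 'sys_logger')
```

Because `select_next` never looks at Queue 2 while Queue 1 has work, the batch process in Queue 3 can starve indefinitely; this is the weakness that MLFQ's promotion and demotion rules address.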
Comparison Table
| Algorithm | Type | Throughput | Fairness | Starvation Risk | Overhead | Best for |
|---|---|---|---|---|---|---|
| FCFS | Non-preemptive | Low-Medium | Order-based | No | Very low | Batch systems |
| Round Robin | Preemptive | Medium | High (equal time) | No | Medium (context switches) | Interactive systems |
| Priority | Either | High | Low | Yes (without ageing) | Low | Real-time systems |
| MLQ | Mixed | High | Queue-based | Yes (lower queues) | Medium | Complex systems |
The comparison of scheduling algorithms is one of the most frequently examined A1.3 topics. You must be able to describe each algorithm, draw Gantt charts, and evaluate advantages/disadvantages.
Key Metrics
When comparing algorithms, you need to calculate:
- Turnaround time = completion time - arrival time (total time from arrival to finish)
- Waiting time = turnaround time - burst time (time spent waiting in the ready queue)
- Average waiting time = sum of all waiting times / number of processes
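The two formulas chain together, so they are easy to check in code. The helper name below is an assumption for illustration:

```python
def metrics(completion, arrival, burst):
    """Return (turnaround, waiting) using the formulas above:
    turnaround = completion - arrival; waiting = turnaround - burst."""
    turnaround = completion - arrival
    waiting = turnaround - burst
    return turnaround, waiting

# P2 from Worked Example 1 below: completed at 9, arrived at 1, burst 3
print(metrics(9, 1, 3))  # (8, 5)
```

A useful sanity check in the exam: waiting time can never be negative, and a process that starts the moment it arrives and is never preempted has waiting time 0.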
Worked Examples
Example 1: FCFS Gantt Chart
Three processes arrive:
| Process | Burst Time | Arrival Time |
|---|---|---|
| P1 | 6 | 0 |
| P2 | 3 | 1 |
| P3 | 2 | 2 |
Since FCFS is non-preemptive and processes are served in arrival order:
| P1 | P2 | P3 |
0    6    9    11
Step-by-step:
- t=0: P1 arrives and starts running (no other processes yet)
- t=1: P2 arrives but P1 is still running – P2 joins the ready queue
- t=2: P3 arrives but P1 is still running – P3 joins the ready queue
- t=6: P1 finishes. P2 starts (it arrived before P3)
- t=9: P2 finishes. P3 starts
- t=11: P3 finishes
Calculations:
| Process | Completion | Turnaround (completion - arrival) | Waiting (turnaround - burst) |
|---|---|---|---|
| P1 | 6 | 6 - 0 = 6 | 6 - 6 = 0 |
| P2 | 9 | 9 - 1 = 8 | 8 - 3 = 5 |
| P3 | 11 | 11 - 2 = 9 | 9 - 2 = 7 |
Average waiting time: (0 + 5 + 7) / 3 = 4.0
Notice the convoy effect: P3 only needs 2ms of CPU time but waits 7ms because it arrived after two longer processes.
Example 2: Round Robin Gantt Chart (Quantum = 3)
Same three processes:
| Process | Burst Time | Arrival Time |
|---|---|---|
| P1 | 6 | 0 |
| P2 | 3 | 1 |
| P3 | 2 | 2 |
Step-by-step trace:
- t=0: Only P1 is in the queue. P1 runs for quantum=3. During this time, P2 (t=1) and P3 (t=2) arrive and join the queue. After P1’s quantum: queue = [P2, P3, P1(remaining 3)]
- t=3: P2 runs for 3ms (its full burst). P2 finishes at t=6. Queue = [P3, P1(3)]
- t=6: P3 runs for 2ms (its full burst, less than quantum). P3 finishes at t=8. Queue = [P1(3)]
- t=8: P1 runs for remaining 3ms. P1 finishes at t=11.
| P1 | P2 | P3 | P1 |
0    3    6    8    11
Calculations:
| Process | Completion | Turnaround (completion - arrival) | Waiting (turnaround - burst) |
|---|---|---|---|
| P1 | 11 | 11 - 0 = 11 | 11 - 6 = 5 |
| P2 | 6 | 6 - 1 = 5 | 5 - 3 = 2 |
| P3 | 8 | 8 - 2 = 6 | 6 - 2 = 4 |
Average waiting time: (5 + 2 + 4) / 3 = 3.67
In this case, Round Robin gives a slightly lower average waiting time than FCFS (3.67 vs 4.0). The shorter processes (P2 and P3) benefit from not being stuck behind P1 for its entire burst.
Quick Check
Q1. Which scheduling algorithm causes the convoy effect?
Q2. In Round Robin scheduling, what is a time quantum?
Q3. What problem does ageing solve in priority scheduling?
Q4. Which algorithm ensures all processes receive equal CPU time?
Q5. In a standard Multilevel Queue (MLQ), can processes move between queues?
Trace Exercise
Complete the Gantt chart for a Round Robin schedule with quantum = 2.
Three processes arrive:
| Process | Burst Time | Arrival Time |
|---|---|---|
| P1 | 5 | 0 |
| P2 | 3 | 1 |
| P3 | 4 | 2 |
The first time slot (0–2) has been filled in for you. Enter the process that runs during each remaining time interval, then calculate the completion times.
Trace carefully: after each quantum, the running process goes to the back of the ready queue. New arrivals join the queue in order of arrival. When a process has less remaining time than the quantum, it finishes early and the next process starts immediately.
Round Robin Scheduling (Quantum = 2)
| Time | 0--2 | 2--4 | 4--6 | 6--8 | 8--9 | 9--11 | 11--12 |
|---|---|---|---|---|---|---|---|
| Process | P1 | | | | | | |
Now fill in the completion times for each process:
| Process | Completion Time | Turnaround Time | Waiting Time |
|---|---|---|---|
| P1 | | | |
| P2 | | | |
| P3 | | | |
Trace walkthrough (verify your answers):
- t=0–2: P1 runs (remaining 3). At t=1, P2 arrives; at t=2, P3 arrives. Queue: [P2, P3, P1(3)]
- t=2–4: P2 runs (remaining 1). Queue: [P3, P1(3), P2(1)]
- t=4–6: P3 runs (remaining 2). Queue: [P1(3), P2(1), P3(2)]
- t=6–8: P1 runs (remaining 1). Queue: [P2(1), P3(2), P1(1)]
- t=8–9: P2 runs 1ms and finishes. Queue: [P3(2), P1(1)]
- t=9–11: P3 runs 2ms and finishes. Queue: [P1(1)]
- t=11–12: P1 runs 1ms and finishes.
Spot the Error
A student wrote this description of the Round Robin scheduling algorithm. One line contains a critical error about what happens when a time quantum expires. Identify the line with the error and state the correct behaviour. (The numbered description below reconstructs the exercise; the original interactive version has been flattened.)
1. Each process is assigned a fixed time quantum.
2. The process at the front of the ready queue runs on the CPU.
3. If the process finishes within its quantum, the next process in the queue starts.
4. When the time quantum expires, the running process is terminated and removed from the system.
5. This cycle repeats until every process has completed.
Correct fix for line 4: when the quantum expires, the process is not terminated – it is preempted and moved to the back of the ready queue to wait for its next turn.
Fill in the Blanks
Complete the following statements about CPU scheduling by filling in the correct terms.
Fill in each blank with the correct scheduling term or concept:
- ________ processes jobs in the order they arrive.
- In Round Robin, each process gets a fixed ________.
- ________ occurs when low-priority processes never get CPU time.
- ________ gradually increases the priority of waiting processes.
- The ________ algorithm uses multiple ready queues with different scheduling policies.
Predict the Output
Calculation 1: FCFS Average Waiting Time
Three processes arrive at the same time (t=0) and are scheduled using FCFS. What is the average waiting time?
| Process | Burst Time | Arrival Time |
|---|---|---|
| P1 | 4 | 0 |
| P2 | 2 | 0 |
| P3 | 1 | 0 |
Gantt chart:
| P1 | P2 | P3 |
0    4    6    7

Average waiting time (round to 2 decimal places):
Calculation 2: Round Robin Completion Time
Two processes are scheduled using Round Robin with a quantum of 3. At what time does P2 finish?
| Process | Burst Time | Arrival Time |
|---|---|---|
| P1 | 7 | 0 |
| P2 | 4 | 0 |
Trace the Gantt chart:
| P1 | P2 | P1 | P2 |
0    3    6    9    ?

Time at which P2 completes:
Practice Exercises
Core
- FCFS Gantt chart – Four processes arrive: P1 (burst=3, arrival=0), P2 (burst=5, arrival=1), P3 (burst=2, arrival=2), P4 (burst=4, arrival=3). Draw the FCFS Gantt chart and calculate the turnaround time and waiting time for each process. What is the average waiting time?
- Round Robin advantages and disadvantages – Explain two advantages and two disadvantages of Round Robin scheduling. Discuss how the size of the time quantum affects performance – what happens if the quantum is very small? Very large?
- Scheduling goals – Describe three goals that a CPU scheduling algorithm should aim to achieve. For each goal, name one algorithm that performs well and one that performs poorly.
Extension
- Hospital scenario comparison – A hospital computer system runs three types of processes: patient-monitoring alerts (must respond within milliseconds), doctor record lookups (interactive, should feel responsive), and end-of-day report generation (can run overnight). Compare FCFS, Round Robin, and Priority scheduling for this scenario. Which algorithm (or combination) would you recommend and why?
- Starvation and ageing – Explain the concept of starvation with a specific example involving three processes with different priorities. Then describe how ageing works step by step to prevent the problem. Include a timeline showing priority changes.
Challenge
- Design an MLQ system – A university computer lab must handle three categories of processes: (a) real-time lab equipment monitoring (highest priority), (b) student interactive terminals for programming (medium priority), and (c) batch grade processing that runs overnight (lowest priority). Design a Multilevel Queue system: specify the number of queues, the scheduling algorithm for each queue, the inter-queue scheduling policy, and explain how you would prevent starvation of batch processes.
- Web server response time – A web server uses Round Robin scheduling with a quantum of 50ms. 100 users connect simultaneously, each requiring 200ms of CPU time. Assuming zero context-switch overhead, calculate: (a) the maximum time before a user gets their first quantum, (b) the total time for all users to finish, and (c) how the answer changes if context switching takes 5ms per switch.
Connections
- Prerequisites: OS Fundamentals – scheduling is one of the key OS functions (process management); you need to understand the role of the OS before studying how it allocates CPU time
- Related: CPU Architecture – the scheduler allocates CPU time through the FDE cycle; multi-core processors allow parallel scheduling across cores
- Forward: Polling and Interrupts – interrupts can trigger context switches and rescheduling; a timer interrupt is what makes preemptive scheduling possible
- Forward: Multitasking (HL) – scheduling enables multitasking; deadlock and starvation occur when scheduling and resource allocation fail