Multitasking

IB Syllabus: A1.3.5 – Explain the role of the OS in managing multitasking and resource allocation

HL Only – This entire page covers content assessed at HL level only.

Table of Contents

  1. Key Concepts
    1. Multitasking vs Multiprocessing (A1.3.5)
    2. Context Switching
      1. Context Switch Steps
    3. Resource Contention
    4. Mutual Exclusion
      1. Why Is It Necessary?
      2. Implementation Mechanisms
    5. Deadlock
      1. The Four Necessary Conditions
      2. Circular Wait Diagram
    6. Deadlock Handling Strategies
    7. Related Problems
  2. Worked Examples
    1. Example 1: Dining Philosophers (Simplified)
    2. Example 2: Context Switch Trace
  3. Quick Check
  4. Trace Exercise
  5. Spot the Error
  6. Fill in the Blanks
  7. Predict the Output
  8. Practice Exercises
    1. Core
    2. Extension
    3. Challenge
  9. Connections

Key Concepts

Multitasking vs Multiprocessing (A1.3.5)

Modern computers give the impression that many programs run at the same time – you can listen to music, browse the web, and download a file simultaneously. But how does the OS achieve this?

Multitasking is the ability of an OS to run multiple processes concurrently on a single CPU. The CPU rapidly switches between processes, giving each a small time slice. The switching happens so fast (thousands of times per second) that it creates the illusion of parallelism – it appears that everything runs at once, but at any given instant only one process is executing.

Multiprocessing uses multiple CPUs or cores to execute processes truly in parallel. A quad-core processor can genuinely run four processes at the same time, one on each core. Most modern computers combine multiprocessing (multiple cores) with multitasking (each core time-shares among many processes).

Multithreading is when a single process is divided into threads – lightweight sub-tasks that share the same memory space. For example, a web browser might use one thread to render the page, another to download images, and a third to run JavaScript. Threads within the same process share resources (heap memory, open files) but each thread has its own stack and program counter.

| Concept | Key Idea | Parallelism |
|---|---|---|
| Multitasking | One CPU, many processes, time-sharing | Apparent (illusion) |
| Multiprocessing | Multiple CPUs/cores, simultaneous execution | True |
| Multithreading | One process, multiple threads, shared memory | Can be true (on multi-core) or apparent (on single-core) |
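The web-browser example above can be sketched in a few lines of Python — a minimal illustration (the thread names are made up for this example), showing that threads in one process really do share the same memory:

```python
# Sketch: one process, several threads sharing the same memory space.
import threading

results = []                 # shared heap memory, visible to every thread
lock = threading.Lock()      # protects the shared list

def worker(task_name):
    # Each thread has its own stack and program counter,
    # but all of them append to the same shared list.
    with lock:
        results.append(task_name)

threads = [threading.Thread(target=worker, args=(name,))
           for name in ("render", "download", "scripts")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))   # ['download', 'render', 'scripts']
```

All three "sub-tasks" wrote into the same list — something two separate processes could not do without explicit inter-process communication.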

Exam trap: Do not confuse multitasking with multiprocessing. Multitasking involves a single CPU switching rapidly between tasks (concurrent but not truly parallel). Multiprocessing involves multiple CPUs/cores executing tasks at the same time (truly parallel).


Context Switching

When the OS switches from running one process to another, it performs a context switch. This is the mechanism that makes multitasking possible.

What gets saved and loaded? Each process has a Process Control Block (PCB) – a data structure maintained by the OS that stores everything needed to pause and resume the process:

| PCB Field | What it stores |
|---|---|
| Program Counter (PC) | Address of the next instruction to execute |
| Register values | Contents of all CPU registers (R1, R2, accumulator, etc.) |
| Process state | Ready, Running, or Waiting |
| Memory management info | Page tables, base/limit registers |
| I/O status | Open files, pending I/O requests |
| Process ID (PID) | Unique identifier for the process |
| Priority | Scheduling priority level |

Context Switch Steps

  1. A timer interrupt fires (or the process blocks on I/O)
  2. The OS saves the current process’s state into its PCB (registers, PC, state changes from Running to Ready)
  3. The OS selects the next process to run (using the scheduling algorithm)
  4. The OS loads the new process’s PCB into the CPU registers and sets the PC
  5. The new process resumes execution from where it last stopped
Process P1 running
     |
     v
[Timer interrupt fires]
     |
     v
Save P1 state to PCB1:  PC=100, R1=42, R2=7, State=Ready
     |
     v
Load P2 state from PCB2: PC=200, R1=15, R2=3, State=Running
     |
     v
Process P2 running
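The save/load steps above can be sketched as a toy simulation, assuming each PCB is modelled as a plain dict (the field names mirror the diagram; this is not a real OS interface):

```python
# Toy model of a context switch between P1 and P2.
cpu = {"pc": 100, "r1": 42, "r2": 7}          # registers while P1 runs
pcb = {
    "P1": {"pc": None, "r1": None, "r2": None, "state": "Running"},
    "P2": {"pc": 200, "r1": 15, "r2": 3, "state": "Ready"},
}

def context_switch(old, new):
    # Step 2: save the running process's registers into its PCB
    pcb[old].update(cpu, state="Ready")
    # Step 4: load the next process's saved registers into the CPU
    cpu.update({k: pcb[new][k] for k in ("pc", "r1", "r2")})
    pcb[new]["state"] = "Running"

context_switch("P1", "P2")
print(cpu)         # {'pc': 200, 'r1': 15, 'r2': 3}
print(pcb["P1"])   # P1 safely saved: pc=100, r1=42, r2=7, state Ready
```

When `context_switch("P2", "P1")` is later called, P1 resumes exactly where it stopped, because everything it needs was stored in its PCB.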

Context switching has overhead – during the switch, the CPU does no useful work for any process. It is purely administrative. If the OS switches too frequently (very small time slices), too much CPU time is wasted on switching. If the OS switches too rarely (very large time slices), responsiveness suffers. The scheduler must balance these trade-offs.


Resource Contention

Resource contention occurs when multiple processes need the same resource at the same time. Resources include:

  • CPU time – the most commonly contended resource
  • Memory – physical RAM, virtual memory pages
  • Files – reading or writing the same file
  • I/O devices – printers, disk drives, network interfaces
  • Network connections – sockets, bandwidth

The OS must mediate access to shared resources, ensuring that:

  • Processes do not corrupt each other’s data
  • Resources are allocated fairly
  • The system remains responsive

Without proper resource management, processes could overwrite each other’s data, read partially written files, or cause hardware conflicts.


Mutual Exclusion

Mutual exclusion is a mechanism ensuring that only one process can access a shared resource at a time. When one process is using a resource, all other processes must wait until it is released.

Why Is It Necessary?

Consider two processes both writing to the same file simultaneously. Without mutual exclusion, their writes could interleave, producing corrupted output. Mutual exclusion ensures that one process finishes its write completely before the other begins.
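The same danger exists inside a single process when two threads update shared data. A minimal sketch, assuming Python threads and a shared counter: the `counter += 1` line is a read-modify-write sequence, and without the lock, two threads could read the same old value and lose an update.

```python
# Sketch: mutual exclusion protecting a shared counter.
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:           # critical section: only one thread inside
            counter += 1     # read, add, write — must not interleave

threads = [threading.Thread(target=increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)   # 200000 — guaranteed only because of mutual exclusion
```

Remove the `with lock:` line and the final count may come out below 200000, because interleaved updates silently overwrite each other.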

Implementation Mechanisms

Semaphores – integer variables used to signal between processes:

  • Binary semaphore (mutex): can only be 0 (locked) or 1 (unlocked). A process waits if the semaphore is 0, and signals (sets to 1) when done. Only one process can hold the lock at a time.
  • Counting semaphore: allows up to N processes to access a resource pool simultaneously. The semaphore counts down as processes acquire the resource and counts up as they release it. When it reaches 0, further requests must wait.

Locks – a simple acquire/release mechanism. A process acquires the lock before entering the critical section (the code that accesses the shared resource) and releases it when finished.

Monitors – a higher-level synchronisation construct that bundles the shared data, the operations on it, and the synchronisation into a single unit. Only one process can execute any of the monitor’s operations at a time.

Think of a semaphore like a ticket system at a bakery. A binary semaphore has only one ticket (one customer served at a time). A counting semaphore with N=3 has three tickets (three customers can be served simultaneously). When all tickets are taken, new customers must wait.
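The bakery analogy maps directly onto a counting semaphore. A small sketch using Python's `threading.Semaphore` (the counters `inside` and `peak` are added just to observe the behaviour):

```python
# Sketch: a counting semaphore with N = 3 "tickets".
import threading

tickets = threading.Semaphore(3)   # at most 3 customers served at once
inside = 0
peak = 0
guard = threading.Lock()           # protects the two observation counters

def customer():
    global inside, peak
    with tickets:                  # wait for a ticket (semaphore counts down)
        with guard:
            inside += 1
            peak = max(peak, inside)
        # ... customer is served here ...
        with guard:
            inside -= 1
    # leaving the with-block signals (semaphore counts up)

threads = [threading.Thread(target=customer) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak <= 3)   # True: never more than three inside at once
```

A `threading.Semaphore(1)` would behave as a binary semaphore (mutex): one ticket, one customer at a time.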


Deadlock

Deadlock is a situation where two or more processes are permanently blocked, each waiting for a resource held by another process in the group. No process can proceed because each is waiting for the other to release a resource first.

The Four Necessary Conditions

For deadlock to occur, all four of the following conditions must hold simultaneously. If even one is prevented, deadlock cannot happen.

  1. Mutual exclusion – at least one resource is non-shareable (only one process can use it at a time)
  2. Hold and wait – a process holds at least one resource while waiting to acquire another
  3. No preemption – resources cannot be forcibly taken from a process; they must be released voluntarily
  4. Circular wait – a chain of processes exists where each process waits for a resource held by the next process in the chain

Circular Wait Diagram

Process A holds Resource 1
    |
    waits for Resource 2
    |
Process B holds Resource 2
    |
    waits for Resource 1
    |
(Deadlock!)

Neither process can make progress. Process A will not release Resource 1 until it gets Resource 2, but Process B holds Resource 2 and will not release it until it gets Resource 1.

The four deadlock conditions are a key exam topic. You must be able to list all four AND explain how each applies to a specific scenario. Remember: all four must hold simultaneously for deadlock to occur.


Deadlock Handling Strategies

| Strategy | Approach | Advantage | Disadvantage |
|---|---|---|---|
| Prevention | Design the system so at least one of the four conditions cannot hold | Deadlock is impossible | May reduce resource utilisation or throughput |
| Avoidance | Dynamically check if granting a request could lead to deadlock | Allows more flexible resource use than prevention | Requires advance knowledge of resource needs; computationally expensive |
| Detection | Allow deadlocks to occur, then detect and recover | No overhead during normal operation | Recovery can be disruptive (terminate processes or rollback) |
| Ignorance (Ostrich Algorithm) | Assume deadlocks are rare and restart if they occur | Zero overhead; simple to implement | Deadlocks go unresolved until manual intervention |

Prevention examples:

  • Require processes to request all resources at once before starting (eliminates hold and wait)
  • Impose a global ordering on resource types and require processes to request them in order (eliminates circular wait)
  • Allow preemption – forcibly take resources from waiting processes (eliminates no preemption)
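The global-ordering rule can be sketched in Python. This is an illustrative model (the `RANK` table and helper names are invented for the example): both threads ask for the two locks in *opposite* orders, but the rule forces the same acquisition order, so a circular wait can never form.

```python
# Sketch: preventing deadlock by imposing a global lock order.
import threading

RANK = {}                          # global ordering on resource types

def make_lock(rank):
    lock = threading.Lock()
    RANK[id(lock)] = rank
    return lock

printer = make_lock(1)
scanner = make_lock(2)

def acquire_in_order(*locks):
    # Always lock in ascending rank — eliminates circular wait.
    return sorted(locks, key=lambda l: RANK[id(l)])

log = []

def job(name, *needs):
    ordered = acquire_in_order(*needs)
    for l in ordered:
        l.acquire()
    log.append(name)               # critical section
    for l in reversed(ordered):
        l.release()

t1 = threading.Thread(target=job, args=("A", printer, scanner))
t2 = threading.Thread(target=job, args=("B", scanner, printer))
t1.start(); t2.start(); t1.join(); t2.join()

print(sorted(log))   # ['A', 'B'] — both jobs completed, no deadlock
```

Without the ordering rule, job A holding `printer` while job B holds `scanner` could produce exactly the circular wait shown in the diagram above.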

Avoidance uses algorithms like the Banker’s Algorithm, which checks whether granting a resource request would leave the system in a “safe state” where all processes can still complete. If not, the request is denied. (Concept only – you do not need to implement this algorithm.)
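For the curious (this goes beyond what the syllabus requires), the safety check at the heart of the Banker's Algorithm can be sketched for a single resource type. Each process is modelled as `(currently_allocated, maximum_needed)`; the system is safe if some order exists in which every process can finish:

```python
# Optional sketch: a single-resource "safe state" check.
def is_safe(available, processes):
    remaining = dict(processes)        # pid -> (allocated, max_needed)
    free = available
    while remaining:
        # find any process whose remaining need fits in the free pool
        runnable = [pid for pid, (alloc, need) in remaining.items()
                    if need - alloc <= free]
        if not runnable:
            return False               # nobody can finish: unsafe
        pid = runnable[0]
        free += remaining.pop(pid)[0]  # it finishes, releasing its allocation
    return True

# 1 instance free: P1 can finish first, release, then P2 -> safe.
print(is_safe(1, {"P1": (1, 2), "P2": (1, 2)}))   # True
# 0 instances free: neither process can ever reach its maximum -> unsafe.
print(is_safe(0, {"P1": (1, 2), "P2": (1, 2)}))   # False
```

An avoidance-based OS would deny any request that moves the system from the first situation into the second.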

Detection periodically examines a resource allocation graph for cycles. If a cycle is found, deadlock exists. Recovery options include:

  • Terminating one or more processes in the cycle
  • Preempting resources from a process and rolling it back
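Cycle detection in a wait-for graph is simple when each blocked process waits on exactly one other process. A minimal sketch (the graph format is an assumption made for this example: `wait_for[p]` is the process that `p` is waiting on):

```python
# Sketch: deadlock detection as a cycle in a wait-for graph.
def has_deadlock(wait_for):
    for start in wait_for:
        seen, node = set(), start
        while node in wait_for:
            if node in seen:
                return True        # revisited a node: cycle found
            seen.add(node)
            node = wait_for[node]
    return False

# A waits for B, B waits for A: circular wait -> deadlock.
print(has_deadlock({"A": "B", "B": "A"}))   # True
# A waits for B, but B is running freely: no cycle, no deadlock.
print(has_deadlock({"A": "B"}))             # False
```

A real OS would build this graph from its resource-allocation tables and run the check periodically, then pick a victim process from any cycle it finds.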

Ignorance (Ostrich Algorithm) is used by most desktop operating systems in practice – deadlocks in everyday computing are rare enough that the cost of prevention or detection outweighs the occasional need to restart.


Related Problems

These three problems are often confused in exams. Make sure you can distinguish between them.

Starvation – a process is perpetually denied resources despite being ready to run. This can happen when a scheduling algorithm always favours higher-priority processes, so a low-priority process never gets CPU time. Starvation is not the same as deadlock – in starvation, other processes are making progress; in deadlock, none are.

Priority inversion – a high-priority process is forced to wait because a low-priority process holds a shared resource. The high-priority process cannot proceed, effectively running at the priority of the low-priority process. This is resolved using priority inheritance – temporarily raising the low-priority process’s priority so it can finish and release the resource quickly.

Livelock – two or more processes continuously change state in response to each other but make no progress. Unlike deadlock (where processes are frozen), in livelock the processes are active but trapped in an endless cycle of reacting to each other. Think of two people trying to pass each other in a corridor – both step left, then both step right, then both step left again, forever.

| Problem | Processes active? | Progress? | Key difference |
|---|---|---|---|
| Deadlock | No (blocked) | None | All processes permanently frozen |
| Starvation | One process denied | Others progress | One process starved while others run |
| Livelock | Yes (active) | None | Processes keep changing state but achieve nothing |
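The corridor analogy can be turned into a tiny deterministic simulation — a sketch only, with both walkers mirroring each other every round, so the system is busy but never makes progress:

```python
# Sketch: livelock as two walkers who always mirror each other.
def corridor(rounds):
    left, right = "L", "L"        # both start by stepping left
    history = []
    for _ in range(rounds):
        history.append((left, right))
        # each politely switches side in response to the other — together
        left, right = ("R", "R") if left == "L" else ("L", "L")
    return history

steps = corridor(4)
print(steps)   # [('L', 'L'), ('R', 'R'), ('L', 'L'), ('R', 'R')]
```

Every round both walkers change state (unlike deadlock, where they would freeze), yet after any number of rounds neither has passed the other — which is why livelock can look like normal activity in system monitoring tools.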

Worked Examples

Example 1: Dining Philosophers (Simplified)

Five philosophers sit around a round table. Between each pair of philosophers is one chopstick (five chopsticks total). To eat, a philosopher needs both the left and right chopstick.

Scenario: All five philosophers simultaneously pick up their left chopstick.

Result: Deadlock! Each philosopher holds one chopstick and waits for the one to their right – which is held by the next philosopher.

         Phil 1
        /      \
      C5        C1
      /          \
  Phil 5        Phil 2
      \          /
      C4        C2
        \      /
   Phil 4--C3--Phil 3

Each philosopher holds their left chopstick (Cn) and waits for the chopstick to their right.

Checking the four conditions:

| Condition | How it applies |
|---|---|
| Mutual exclusion | Each chopstick can only be held by one philosopher at a time |
| Hold and wait | Each philosopher holds their left chopstick while waiting for the right |
| No preemption | A chopstick cannot be taken from a philosopher’s hand |
| Circular wait | Phil 1 waits for Phil 2’s chopstick, Phil 2 waits for Phil 3’s, …, Phil 5 waits for Phil 1’s |

Prevention strategy: Make one philosopher (say, Phil 5) pick up the right chopstick first instead of the left. This breaks the circular wait – Phil 5 reaches for the same chopstick as Phil 1, so one of them must wait without holding anything.
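Both scenarios can be checked with a small model. This is a sketch under stated assumptions: philosophers are numbered 0–4, chopstick `i` sits to philosopher `i`'s left, each philosopher grabs one chopstick, and we then test the resulting wait-for graph for a cycle:

```python
# Sketch: dining philosophers after each grabs a single chopstick.
def wait_for_graph(n, right_first=()):
    holder = {}
    for i in range(n):
        want = (i + 1) % n if i in right_first else i
        if want not in holder:
            holder[want] = i           # got the chopstick
    # Each philosopher waits for whoever holds a chopstick they still need.
    graph = {}
    for i in range(n):
        for c in (i, (i + 1) % n):
            if holder.get(c) == i:
                continue               # already holding this one
            if c in holder:
                graph[i] = holder[c]   # wait for its holder
                break
    return graph

def has_cycle(graph):
    for start in graph:
        seen, node = set(), start
        while node in graph:
            if node in seen:
                return True
            seen.add(node)
            node = graph[node]
    return False

# All five grab their left chopstick: circular wait -> deadlock.
print(has_cycle(wait_for_graph(5)))                   # True
# One philosopher (index 4) reaches right first: the cycle is broken.
print(has_cycle(wait_for_graph(5, right_first={4})))  # False
```

In the second run one chopstick is left free, so at least one philosopher can eat, finish, and release — the system makes progress.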


Example 2: Context Switch Trace

Two processes (P1 and P2) share a single CPU. P1 is currently running when a timer interrupt fires.

Before the switch:

| Field | P1 (Running) | P2 (Ready – stored in PCB) |
|---|---|---|
| PC | 100 | 200 |
| R1 | 42 | 15 |
| R2 | 7 | 3 |
| State | Running | Ready |

Step 1: Save P1’s state to its PCB

The OS copies P1’s current register values (PC=100, R1=42, R2=7) into P1’s PCB and sets P1’s state to Ready.

Step 2: Load P2’s state from its PCB

The OS loads P2’s PCB values (PC=200, R1=15, R2=3) into the CPU registers and sets P2’s state to Running.

After the switch:

| Field | P1 (Ready – stored in PCB) | P2 (Running) |
|---|---|---|
| PC | 100 | 200 |
| R1 | 42 | 15 |
| R2 | 7 | 3 |
| State | Ready | Running |

The CPU now executes P2’s next instruction at address 200. P1 is paused but its state is safely stored – when P1 gets the CPU again, it will resume exactly from PC=100 with R1=42 and R2=7.


Quick Check

Q1. What is context switching?

Q2. List the four necessary conditions for deadlock.

Q3. What is a semaphore?

Q4. What is the difference between multitasking and multiprocessing?

Q5. What is starvation in the context of OS resource management?


Trace Exercise

Process P1 is running when a timer interrupt fires, causing a context switch to P2. Fill in the missing values in the state table.

P1’s current register values: PC=100, R1=42, R2=7. P2’s PCB stores: PC=200, R1=15, R2=3.

| Step | PC | R1 | R2 | State | Running |
|---|---|---|---|---|---|
| Before switch | 100 | 42 | 7 | Running | P1 |
| Save P1 to PCB | | | | | -- |
| Load P2 from PCB | | | | | |

Spot the Error

A student listed the four necessary conditions for deadlock in their exam answer. One condition is stated incorrectly. Identify the incorrect line and state the correct condition.

1  The four necessary conditions for deadlock are:
2  1. Shared access — multiple processes can use the resource simultaneously
3  2. Hold and wait — a process holds one resource while waiting for another
4  3. No preemption — resources cannot be forcibly taken from a process
5  4. Circular wait — a chain of processes each waits for a resource held by the next



Fill in the Blanks

Complete the following statements about multitasking and resource management:

Concept Summary

__________ occurs when the OS switches from one process to another.

A __________ is an integer variable used to signal between processes.

For deadlock to occur, all __________ conditions must hold simultaneously.

__________ means a process holds one resource while waiting for another.

__________ is when a process is indefinitely denied CPU time despite being ready.

Predict the Output

Process A holds File X and requests the Printer. Process B holds the Printer and requests File X. Neither process will release its current resource until it gets the one it is waiting for.

Is this a deadlock? (Type Yes or No)

A system has 2 instances of a shared resource. Three processes need this resource:

  • P1 currently holds 1 instance
  • P2 currently holds 1 instance
  • P3 holds 0 instances and requests 1

Can the OS safely grant P3's request? (Type Yes or No)


Practice Exercises

Core

  1. Context Switching – Explain what happens during a context switch. In your answer, describe what information is saved, where it is stored, and why context switching has overhead.

  2. Four Deadlock Conditions – List and describe the four necessary conditions for deadlock. For each condition, write one sentence explaining what it means.

  3. Mutual Exclusion – Define mutual exclusion and explain why it is necessary when multiple processes share a resource. Give an example of what could go wrong without it.

Extension

  1. Dining Philosophers – Explain how the dining philosophers problem illustrates deadlock. Identify how each of the four deadlock conditions applies. Describe one prevention strategy and explain which condition it eliminates.

  2. Deadlock Handling Strategies – Compare three different deadlock handling strategies (prevention, avoidance, and detection). For each strategy, describe the approach, give one advantage, and give one disadvantage.

Challenge

  1. Hospital System Analysis (IB-style, 6 marks) – A hospital system has 3 processes sharing 2 printers and 1 scanner.
    • Process A holds a printer and needs the scanner.
    • Process B holds the scanner and needs a printer.
    • Process C needs both a printer and the scanner.

    (a) Identify which of the four deadlock conditions are present in this scenario. [4 marks]

    (b) Determine whether deadlock currently exists between Process A and Process B. Justify your answer. [2 marks]

    (c) Suggest a resolution strategy that would allow all three processes to complete. [2 marks]

  2. Deadlock, Starvation, and Livelock – Explain the difference between deadlock, starvation, and livelock. For each, provide a real-world analogy and describe whether the affected processes are active or blocked. Explain why livelock is often harder to detect than deadlock.

Connections

  • Prerequisites: OS Fundamentals – process management is one of the key OS functions; understanding processes and their states is essential
  • Prerequisites: Scheduling – scheduling algorithms determine which process runs next; context switching occurs during scheduling decisions
  • Related: Polling and Interrupts – timer interrupts trigger context switches; interrupt handling is essential for preemptive multitasking
  • Related: CPU Architecture – context switching saves and restores CPU registers; multi-core processors enable true parallel execution (multiprocessing)
  • Forward: Control Systems (HL) – real-time control systems must avoid deadlock and ensure timely resource allocation for safety-critical operations


© EduCS.me — A resource hub for IB Computer Science
