Criterion C: System overview
Marks: 6 • Recommended words: 150
Criterion C is the design section – the bridge between your plan and your build. You show the reader what the product will look like as a system: its components, how they relate, the algorithms inside each, the user interface, and how you will test that the finished product works.
Three things must be in the system overview:
- A system model – diagrams showing the key components, their relationships, the rules governing their interaction, the algorithms each component uses, and the user interface.
- Enough clarity that a third party could recreate the product from the system overview alone.
- A testing strategy aligned to the success criteria.
The system overview must be consistent with the problem specification in Criterion A and the planning in Criterion B. Word count is only 150 because the diagrams and tables do the heavy lifting – invest time in the visuals, not the prose.
1. The system model
The system model is a set of diagrams that describe the product as a system.
Important. A complete system model does not include the algorithms themselves – those are presented separately (see section 2 below). Class diagrams that show method bodies, or flowcharts that double as the algorithm and the system view, are scored as incomplete system models. Keep the two artefacts apart: the system model says what the components are and how they relate; the algorithms say how each component does its job.
The system model must show:
- The key components – e.g. modules, classes, or services.
- Their relationships – which components call or depend on which others.
- The rules governing their interaction – when a component is invoked, what inputs it receives, what it returns.
- The user interface – layout of the screens or views the user interacts with.
Useful diagram types
Choose the diagrams that best describe your specific product. You typically need two or three, not all of these.
| Diagram | What it shows | When to use it |
|---|---|---|
| Class diagram (UML) | Classes, their attributes, methods, and relationships (inheritance, association, composition) | OOP products – almost always appropriate |
| Structure chart | Top-down module hierarchy with data passing between modules | Non-OOP or procedural products |
| Entity-relationship diagram (ERD) | Entities, attributes, and relationships between entities | Products with a database or structured persistent data |
| Data flow diagram (DFD) | Processes, data stores, external entities, and how data moves | Products where the story is about transforming or routing data |
| System flowchart | Abstract control flow through the system as a whole | Products with a clear top-level sequence |
| UI wireframes / mock-ups | Layout of screens or views, labelled with controls | Always – the UI is explicitly required |
What “a third party could recreate it” means
The examiner should be able to read your system overview (together with Criteria A and B) and have a strong idea of how to build the same product themselves. That does not mean they would make identical code choices – it means:
- They know what the classes / modules are and what each is responsible for.
- They know what inputs and outputs each component exchanges.
- They know what the user sees and what actions they can take.
- They know how the algorithms work well enough to implement them.
If any of these is missing, the system model is incomplete.
A practical self-check: show your draft system model to a peer who is not on your project. Ask them to describe to you, in their own words, what you are building. If they cannot, the diagrams are not yet doing their job.
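A component skeleton can complement the diagrams when running this self-check. Below is a minimal Python sketch for the running meal-planner example – all class and method names here are illustrative assumptions, not a required design. Like the class diagram it mirrors, it names components and responsibilities but deliberately leaves the behaviour to section 2:

```python
# Illustrative component skeleton mirroring a class diagram for the
# hypothetical meal planner. Method bodies stay minimal on purpose:
# the system model says WHAT each component is responsible for;
# the algorithms (section 2) say HOW it does its job.
from dataclasses import dataclass, field


@dataclass
class Recipe:
    """A named dish with its required ingredients and quantities."""
    name: str
    ingredients: dict[str, float] = field(default_factory=dict)  # ingredient -> amount


@dataclass
class Pantry:
    """Tracks ingredient quantities the user already has at home."""
    stock: dict[str, float] = field(default_factory=dict)

    def in_stock(self, ingredient: str) -> float:
        # Unknown ingredients are simply not in stock
        return self.stock.get(ingredient, 0.0)


@dataclass
class WeeklyPlan:
    """Maps each day of the week to the recipe planned for it."""
    days: dict[str, Recipe] = field(default_factory=dict)

    def swap(self, day: str, recipe: Recipe) -> None:
        """Replace the recipe planned for a given day (SC5)."""
        self.days[day] = recipe
```

A skeleton like this is optional, but it makes the peer self-check concrete: if a reader cannot tell from the diagrams which of these components owns which responsibility, the system model is not yet complete.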
2. Algorithms
Algorithms are presented separately from the system model and can take different forms:
- Natural language – numbered steps in plain English. Acceptable for simple algorithms.
- Flow charts – symbols for start, process, decision, end. Good for control-flow-heavy algorithms.
- Pseudocode – the usual choice. Use IB-style pseudocode or a neutral Python-like form.
Each algorithm should address the individual components of the system model, not the whole product at once.
Example (pseudocode for the meal planner’s shopping-list generator)
```
FUNCTION generateShoppingList(weeklyPlan, pantry)
    shoppingList = empty dictionary of {ingredient: quantity}

    FOR each day IN weeklyPlan
        recipe = weeklyPlan[day]
        FOR each (ingredient, amount) IN recipe.ingredients
            IF ingredient IN shoppingList
                shoppingList[ingredient] = shoppingList[ingredient] + amount
            ELSE
                shoppingList[ingredient] = amount
            END IF
        END FOR
    END FOR

    FOR each (ingredient, amountNeeded) IN shoppingList
        IF ingredient IN pantry
            inStock = pantry[ingredient]
            IF inStock >= amountNeeded
                mark ingredient as "already in stock"
            ELSE
                shoppingList[ingredient] = amountNeeded - inStock
            END IF
        END IF
    END FOR

    RETURN shoppingList
END FUNCTION
```
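If you want to sanity-check an algorithm like this before Criterion D, the pseudocode translates almost line for line into ordinary Python. In this sketch the pseudocode's "already in stock" marker is modelled as a separate set, and recipes are plain dictionaries – both are assumptions about representation, not the only valid design:

```python
# Python sketch of the shopping-list generator pseudocode above.
# weekly_plan: {day: recipe}, where recipe["ingredients"] is {ingredient: amount}
# pantry: {ingredient: amount in stock}

def generate_shopping_list(weekly_plan, pantry):
    shopping_list = {}

    # Pass 1: aggregate ingredient amounts across every planned recipe
    for recipe in weekly_plan.values():
        for ingredient, amount in recipe["ingredients"].items():
            shopping_list[ingredient] = shopping_list.get(ingredient, 0) + amount

    # Pass 2: subtract what the pantry already covers; fully covered
    # ingredients are "marked" by membership in already_in_stock
    already_in_stock = set()
    for ingredient, amount_needed in shopping_list.items():
        in_stock = pantry.get(ingredient, 0)
        if in_stock >= amount_needed:
            already_in_stock.add(ingredient)
        else:
            shopping_list[ingredient] = amount_needed - in_stock

    return shopping_list, already_in_stock
```

Returning the in-stock markers separately keeps the shopping list purely quantitative; your own design might instead fold the marker into the UI layer. Either way, record the choice in the system model so the algorithm and the diagrams stay consistent.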
Include algorithms for the important, non-trivial operations – not every getter and setter. Two to four well-explained algorithms are typical for a 150-word overview.
3. Testing strategy
The testing strategy describes your systematic approach to evaluating whether the computational solution works as intended. It must:
- Align with the success criteria from Criterion A.
- Check that the code functions correctly on expected input.
- Handle unexpected or incorrect input – edge cases, invalid data, boundary values.
The most effective format is a testing strategy table:
| Test # | Success criterion | Description | Test data (input) | Expected outcome |
|---|---|---|---|---|
| 1 | SC1: Recipe CRUD | Add a new recipe | Name = “Dal”, ingredients = [(lentils, 200g), (onion, 1)] | Recipe appears in the list; file saved |
| 2 | SC1: Recipe CRUD | Add recipe with invalid quantity | Name = “Dal”, ingredients = [(lentils, “-50g”)] | Validation error shown; no recipe created |
| 3 | SC3: Shopping list | Aggregate across multiple recipes | Plan = 3 recipes all requiring onion | Shopping list shows combined onion quantity |
| 4 | SC5: Plan swap | Swap one recipe | Remove recipe A from Monday, add recipe B | Shopping list updates within 1 second |
| 5 | SC4: Pantry check | Item fully in stock | Pantry has 500g lentils; needed 200g | Lentils marked “already in stock” |
| 6 | SC4: Pantry check | Item partially in stock | Pantry has 100g lentils; needed 200g | Shopping list shows 100g remaining |
Include a mix of:
- Typical inputs – the normal use case.
- Boundary values – empty lists, maximum sizes, zeros.
- Abnormal inputs – invalid types, negative numbers, malformed files.
Every success criterion from Criterion A should map to at least one test.
The testing strategy you design here is the same strategy you deploy and report on in Criterion D. Design once, reuse it – do not invent a new strategy for D.
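Each row of the table translates almost mechanically into an automated check in Criterion D. Here is a hedged Python sketch for rows 1–2 (SC1), assuming a hypothetical `parse_quantity` validator – in the real product, your own validation code would take its place:

```python
# Sketch: turning testing-strategy rows into automated checks.
# parse_quantity is a hypothetical helper standing in for the
# product's real input-validation code.

def parse_quantity(text):
    """Parse a quantity like '200g' into (amount, unit); reject non-positive amounts."""
    unit = text.lstrip("+-0123456789.")          # unit = trailing non-numeric part
    amount = float(text[: len(text) - len(unit)])
    if amount <= 0:
        raise ValueError(f"quantity must be positive: {text!r}")
    return amount, unit


def test_valid_quantity():        # table row 1: expected input
    assert parse_quantity("200g") == (200.0, "g")


def test_invalid_quantity():      # table row 2: abnormal input
    try:
        parse_quantity("-50g")
    except ValueError:
        pass                      # validation error shown; no recipe created
    else:
        raise AssertionError("expected a validation error for '-50g'")
```

Writing the checks in this row-per-test shape makes the Criterion D report easy to assemble: each test function cites its table row, and each row cites its success criterion.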
Mark bands
| Marks | Level descriptor |
|---|---|
| 0 | The response does not reach the standard described below. |
| 1–2 | The response: outlines a limited system model; identifies algorithms for the components of the system model; identifies a testing strategy for at least one success criterion. |
| 3–4 | The response: constructs a system model that is not complete; constructs algorithms for the components of the system model that lead to partial functionality of the product; outlines a testing strategy that aligns with at least three success criteria. |
| 5–6 | The response: constructs a complete system model; constructs algorithms for the components of the system model that enable the product to perform; describes a testing strategy that aligns with the success criteria. |
Key thresholds:
- System model completeness – 5–6 requires every essential component and the UI to be in the diagrams. 3–4 means some part is missing; 1–2 is a sparse outline.
- Algorithms – 5–6 means the algorithms, if implemented, would produce the required behaviour. 3–4 means they would only deliver part of the functionality. 1–2 means algorithms are named but not meaningfully described.
- Testing strategy – 5–6 aligns with the success criteria (all of them); 3–4 aligns with at least three; 1–2 aligns with at least one.
Word count and formatting
- Recommended word count: 150 (excluding diagrams, algorithms and tables).
- Diagrams, pseudocode and the testing strategy table are not counted – put the detail there.
- Suggested section heading in the single-PDF documentation: “Criterion C: System overview”.
Common pitfalls
- Designing the system after you have already built it. The system overview should be consistent with the product, but it should feel like a design document, not a reverse-engineered summary. Date your drafts if you need to.
- Missing the UI. Wireframes are explicitly part of the system model.
- Algorithms reduced to “it does X.” The examiner needs enough detail to recreate the algorithm, not a one-line summary.
- Testing strategy that only covers happy-path inputs. Include edge cases and abnormal data.
- Testing strategy with no mapping to success criteria. Every row should cite which success criterion it tests.