Common Pitfalls
These are the patterns examiners flag again and again on CS Extended Essays. Each one comes with the criterion it costs marks under and the fix.
If you read only one EE page before submitting, read this one.
1. Non-academic sources cost marks on Criterion B
The pattern. The bibliography is dominated by blogs, YouTube videos, and Wikipedia. The student treats these as primary references and reasons from them.
Why it costs marks. Criterion B (Knowledge and Understanding). Examiners treat source quality as evidence of academic rigour, not as a presentation detail. A weak bibliography routinely drops a mark band on this criterion.
How to avoid it. At the interim reflection, audit your own bibliography. If more than a third of your sources are blogs, YouTube videos, or Wikipedia entries, replace them. Aim for textbook chapters, peer-reviewed papers, and conference proceedings (ACM, IEEE). Google Scholar is the minimum starting point.
2. Describing data instead of analysing it
The pattern. Results are presented and described (“Algorithm X was faster than Algorithm Y”) without interpreting why, what trade-off explains it, or how it connects to theory.
Why it costs marks. Criterion C (Analysis and Line of Argument). This is the single most common criticism examiners raise on CS EEs.
How to avoid it. Highlight every paragraph in your draft that describes data without interpreting it. Write “so what?” in the margin. Then answer:
- Why did this happen?
- What does it reveal about the system or algorithm?
- How does it connect to theory you covered in the background section?
Each “so what?” you answer turns description into analysis.
3. No cross-referencing with external studies
The pattern. The discussion section talks about your findings in isolation, with no comparison to existing literature.
Why it costs marks. Criterion D (Discussion and Evaluation). This is consistently flagged across CS EEs at every grade band – even strong essays lose marks here.
How to avoid it. Include at least one paragraph in your discussion that explicitly compares your findings to a published study. Frame it as: “Do my results agree or disagree with [prior study]? Why might they differ?” This one paragraph can be the difference between a 5/8 and a 7/8 on Criterion D.
4. Introduction missing the “how” and “why”
The pattern. The introduction states the research question but does not explain why the topic matters or how the question will be answered.
Why it costs marks. Criterion A (Framework). Even otherwise-strong essays lose a mark here.
How to avoid it. Every introduction must contain three things:
- Why this topic is significant (beyond your CS class).
- The research question, clearly stated.
- A brief preview of the method and structure.
If any one of those is missing, fix it before the final draft.
5. Reflective statement reads as a timeline, not a reflection
The pattern. The 500-word statement on the Reflections on Planning and Progress Form (RPPF) is a chronological account of the EE process: “First I picked a topic, then I gathered sources, then I ran the experiments.” The student describes events instead of evaluating learning.
Why it costs marks. Criterion E (Reflection). This is the dominant reason reflections fail to move past 2/4.
How to avoid it. Organise the reflective statement around insights and growth, not around events. Two prompts to drive the rewrite:
- What do I understand about my CS topic now that I did not at the start?
- Which specific skill from this process could I use in a different context, and how?
Include CS-specific learning, not only generic study skills like time management.
6. Knowledge without evaluative commentary
The pattern. The background section defines and describes algorithms or concepts correctly, but never comments on their trade-offs, limitations, or failure modes.
Why it costs marks. Criterion B. The difference between a 3/6 and a 5/6 is whether you merely define concepts or actually demonstrate understanding of them.
How to avoid it. For each algorithm or concept in the background section, ask: “Can I also explain when this approach fails or what its trade-offs are?” Pure definitions without analysis signal surface-level understanding.
7. Missing broader implications
The pattern. The conclusion answers the research question for the specific experiment but does not discuss what the findings mean beyond that experiment.
Why it costs marks. Criterion D. Flagged in CS EEs across the grade range.
How to avoid it. In the discussion or conclusion, answer: “What does this mean for someone outside this experiment?” For CS essays, this could be:
- Implications for software engineers choosing between algorithms.
- Impact on a specific industry application.
- Connections to emerging technology trends.
One paragraph is enough – but you must include it.
8. Staying at textbook level
The pattern. The background uses only standard textbook explanations. No engagement with advanced variations, recent developments, or current research.
Why it costs marks. Criterion B. Even strong essays lose a mark here for not engaging with recent developments or limitations of current approaches.
How to avoid it. After writing the background, ask: “Is there anything in this section that could not be found in a standard CS textbook?” If the answer is no, engage with at least one piece of current research, an advanced variation, or a known limitation of the standard approach – and cite it.
9. Reflective statement does not name transferable skills
The pattern. The reflection covers the EE experience but never explicitly names a skill that transfers to another context.
Why it costs marks. Criterion E. This is a specific strand examiners look for; missing it is a frequent reason reflections do not move past 2/4.
How to avoid it. Include at least one explicit sentence of the form:
“The [specific skill] I developed through this research is applicable to [specific other context] because [reason].”
It needs to be a real skill, a real other context, and a real reason – not a generic statement.
10. Methodology does not justify design choices
The pattern. The methodology describes what you did but never explains why you chose that approach over alternatives. Decisions like dataset size, parameter values, or evaluation metrics are presented as given.
Why it costs marks. Criterion A. Methodology must be “explained and applied effectively” for top marks. “I used a dataset of 1,000 items because it fit on the graph” is the kind of justification that costs marks.
How to avoid it. For each major design decision – dataset, parameters, environment, metrics – answer: “Why this choice, and what were the alternatives?” If you cannot answer, you have not justified the decision.
11. Research-question drift between title page, introduction, and conclusion
The pattern. The research question on the title page does not match the one in the introduction; or the conclusion answers a slightly different question from the one stated up front. Often this happens because the student refined the RQ mid-process and forgot to propagate the change.
Why it costs marks. Criterion A (Framework) – the structural conventions no longer support the research. Criterion C (Analysis and Line of Argument) – the argument cannot connect RQ to findings to conclusions if the RQ silently changed along the way.
How to avoid it. Before submission, write the RQ out and compare it word-for-word in three places: title page, end of introduction, opening of conclusion. They should all say the same thing.
12. Anonymity and title-page errors
The pattern. Student name, supervisor name, or school name appears somewhere in the file – header, footer, file metadata, an embedded screenshot. Or the title page is missing one of: student code, research question, subject (Computer Science).
Why it costs marks. Anonymity violations are a process failure that the IB takes seriously and can flag for academic integrity. Title-page omissions cost a mark on Criterion A and signal carelessness to the examiner before they even reach the introduction.
How to avoid it. Before exporting the final PDF: use your editor’s Find function to search the document for your name, your school name, and your supervisor’s name. Check screenshots for visible usernames or school logos. Verify the title page contains exactly: student code, research question, subject (Computer Science). Check the file’s PDF metadata (Author / Title fields) – some PDF exporters fill these in automatically with your username.
13. Code or text reused from sources without proper attribution
The pattern. Code snippets, algorithm pseudocode, datasets, or whole paragraphs are reused from textbooks, papers, online repositories, or AI tools, without a citation. Often the student does not realise this counts as a problem – “it is just an algorithm everyone uses.”
Why it costs marks. This is an academic-integrity issue, not merely a marking one. The IB has flagged unattributed code reuse on CS EEs in subject reports. At minimum it loses marks; at worst it triggers an academic-integrity investigation that can put your diploma at risk.
How to avoid it. Cite the source for any code you did not write yourself, including standard algorithms taken from a textbook, code from a tutorial or Stack Overflow, code generated or modified by AI, and any dataset you did not collect yourself. A short attribution comment in the code plus a bibliography entry is enough. If you adapted code, say “adapted from…” – do not paste and pretend it is yours.
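As a sketch of what this can look like in practice (the citation below is a hypothetical placeholder – substitute the real textbook, tutorial, or repository you actually used):

```python
# Binary search: adapted from [Author, "Textbook Title", Section X.Y]
# (hypothetical citation, for illustration only). Modified from the
# original to return the insertion index when the target is absent.
def binary_search(items, target):
    lo, hi = 0, len(items)
    while lo < hi:
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid + 1   # target lies in the upper half
        else:
            hi = mid       # target is at mid or in the lower half
    return lo              # index of target, or where it would be inserted

print(binary_search([1, 3, 5, 7], 5))  # → 2
```

The same source then gets a full entry in the bibliography; the comment simply makes the reuse visible at the exact point where the borrowed code sits.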
Cross-cutting patterns
A few themes run through almost every weak CS EE:
- The “broader implications” gap. Findings discussed only inside the experiment; no link to real-world applications or industry practice.
- The “external literature” gap. Discussion never compares results to published studies.
- Source quality matters more than source quantity. A bibliography of twelve blog posts is weaker than one with three peer-reviewed papers.
- Description is the default mode. It is natural to describe data before interpreting it. The shift to analysis – why, trade-offs, theory connection – does not happen automatically; you have to push yourself to do it on every section.