Summary, Terminology and Practice (DP IB Theory of Knowledge): Revision Note
Summary
Here we summarise the main ideas covered in the optional theme “Knowledge and Technology”.
| TOK element | Content summary | Example | Possible knowledge questions |
|---|---|---|---|
| Scope | Technology can widen the scope of what we can know by extending our ability to observe, measure, store, share and compare information. At the same time, it can narrow scope because technologies influence what information we encounter (e.g. via rankings and personalised feeds), which can repeatedly expose us to similar viewpoints. Technology can also shape what counts as “valuable” knowledge by prioritising knowledge that can be measured and converted into data. | 1) A student researching a controversial issue relies heavily on top search results and ends up reading very similar ideas, missing contrasting viewpoints lower down the page. 2) A local authority uses a machine-learning model to predict accident hotspots; the predictions help target interventions, but the results are limited as the training data reflects under-reporting in certain locations. | In what ways do technologies shape what information is encountered, and how does this affect what we take to be worth knowing? When a tool produces accurate results but cannot explain them (e.g., some machine-learning outputs), should we treat those results as knowledge? |
| Perspectives | People approach technology with pre-existing viewpoints, values and interests that shape what they search for, which sources they trust, and how they interpret outputs. Personalisation systems can intensify this by feeding back information that aligns with a person’s existing assumptions, reducing exposure to alternatives. Different perspectives also shape what counts as convincing evidence: someone who prioritises quantitative data may favour tool-generated metrics, while others may emphasise lived experience. Attitudes to automation and machine learning vary, influencing whether people accept an output as authoritative, treat it as one input among many, or challenge it. Views on privacy, safety and power strongly affect how people judge surveillance and data collection, often depending on how much they trust institutions and what they think the main risks are. | 1) A parent who already believes a new phone mast is harmful searches online and mainly clicks sources that confirm that belief; algorithms then keep showing similar content. 2) A hiring manager who already believes algorithms are objective uses automated CV-screening. They treat the “rejected” label as definitive and do not review borderline cases. If the model was trained on past hiring decisions, it may reproduce existing biases. | How do pre-existing values and interests shape what people look for and trust when using technology to learn? How might personalisation reinforce existing viewpoints, and how can knowers recognise when their exposure is being narrowed? How do different perspectives affect what people count as good evidence when technology produces measurements or rankings? |
| Methods and tools | Tools such as sensors, imaging devices, databases and software extend our ability to observe, measure and store information; this often increases precision and allows knowledge to be produced at a larger scale. However, the reliability of the knowledge depends on method choices: how data is collected, what is measured (and what is ignored), how tools are calibrated, and how results are processed and interpreted. Digital methods (like modelling, simulation and machine learning) can identify patterns humans might miss, but they can also reduce transparency, e.g. when it is unclear how a model reaches its output or when it has been trained on limited or biased data. | 1) A school uses air-quality sensors to decide when to keep students indoors. The sensor readings seem precise, but if devices are poorly calibrated or placed on the side of the school nearest to a road, the method may produce misleading conclusions. 2) A hospital uses a machine-learning tool to flag patients as high-risk. It performs well on the original training population, but accuracy drops for underrepresented groups. | When does increased precision from technology genuinely increase reliability, and when does it create false confidence? To what extent does a lack of transparency limit our ability to treat machine learning outputs as knowledge? How should we evaluate technology-based knowledge claims when the data or methods may embed hidden biases or assumptions? |
| Ethics | Data collection can create privacy risks when people are tracked, profiled, or their data is reused beyond its original purpose. Organisations that control platforms, datasets or algorithms can influence what people see and what decisions are made, sometimes without meaningful consent or transparency. There are fairness concerns when technologies rely on biased data or unequal access, e.g. some groups are underrepresented in datasets or lack access to the tools needed to benefit. Finally, there are responsibility questions when technology contributes to harm: who is accountable? The developer, the user, the institution, or the regulator? | 1) A learning app collects detailed usage data and sells it to third parties; students and parents may not realise how much is being inferred about them or how long the data is kept. 2) A police force uses facial recognition in public spaces. If error rates are higher for certain demographic groups, some people face a greater risk of being misidentified. | What ethical limits should apply to collecting and using data, especially when consent is unclear? Who should be accountable when technology-based decisions cause harm, and how can responsibility be traced in complex systems? How should we balance social benefits (efficiency, safety, new knowledge) against risks like privacy loss and unequal impact? |
Terminology
| Key terminology | Definition |
|---|---|
| Gatekeeper | A person, organisation or system that controls access to information |
| Digital divide | The gap between people who have access to digital technology and the internet and those who do not |
| Algorithm | A well-defined set of instructions for performing a computation or task |
| Bots | Automated computer programs, often designed to mimic human behaviour and engagement online |
| Echo chamber | An (online) environment in which members mainly encounter opinions and ideas that reflect/echo their own |
| Big data | Extremely large datasets that can be analysed to identify patterns and trends |
| Google effect | The tendency to forget, or not try hard to remember, facts that can easily be found online |
| Augmented reality | Technology that overlays computer-generated imagery or information onto the real world |
| Deepfake | The use of artificial intelligence (AI) to create fake videos that give a false impression of authenticity |
Worked Example
Read the quote below and answer the following questions:
1) What do you understand by the phrase “god-like technology”?
2) Do you agree that we are “approaching a point of crisis”? Why?
3) What role does knowledge and technology play in answering the “huge questions of philosophy”?
“The real problem of humanity is the following: we have Paleolithic emotions; medieval institutions; and god-like technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall... Until we answer those huge questions of philosophy that the philosophers abandoned a couple of generations ago—Where do we come from? Who are we? Where are we going?—rationally, we’re on very thin ground”.
(E. O. Wilson, Pulitzer Prize-winning sociobiologist, in conversation with James Watson, co-discoverer of the molecular structure of DNA, at Sanders Theatre, Harvard University, September 9, 2009)