Technology as a Tool (DP IB Theory of Knowledge): Revision Note
Technology as a tool
Technology is a set of human-made tools, techniques, and systems designed to solve problems or achieve goals by extending human abilities
As a tool, technology shapes knowledge by changing what we can observe, measure, model and share
Technology plays a role in producing knowledge through data collection and analysis
Technology allows us to share knowledge and make it more widely available, e.g. the printing press, the internet
Instruments and measurement
Technical instruments (e.g. thermometer, microscope, telescope) can extend human senses, improving the human ability to observe, record and quantify the world
Instruments provide measurements that turn a quality (e.g. temperature) into a number, using a chosen method and unit
Using instruments involves a trade-off: they can improve the quality and shareability of measurements, but they can also introduce errors and limitations
Precision vs accuracy: an instrument can be precise but not accurate if its readings cluster tightly together but are consistently offset from the true value
E.g. an incorrectly calibrated digital balance gives 102 g every time for a 100 g mass
Detection limit: very small amounts, or weak signals, may not be picked up at all, e.g.:
a pollutant present at a very low concentration is reported as “not detected” even though it is present
a low-resolution microscope cannot show viruses, so they cannot be directly observed with this tool
Interference/background noise: other factors can distort readings, especially in real-world settings
E.g. measuring heart rate with a wrist sensor during exercise can be distorted by arm movement and poor skin contact
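The precision-vs-accuracy distinction above can be sketched numerically; the balance readings below are invented for illustration, echoing the 102 g example:

```python
# Hypothetical sketch: a miscalibrated balance that is precise but not accurate
# The reading values are invented for illustration
from statistics import mean, pstdev

true_mass = 100.0  # g, the known reference mass

# Readings cluster tightly (precise) but sit ~2 g above the true value (inaccurate)
readings = [102.0, 102.1, 101.9, 102.0, 102.0]

spread = pstdev(readings)                # small spread = precise
offset = mean(readings) - true_mass      # systematic offset = inaccurate

print(f"spread: {spread:.2f} g")   # small -> precise
print(f"offset: {offset:.2f} g")   # ~ +2 g -> not accurate
```

A small spread with a large offset shows why repeatable readings alone do not guarantee a correct measurement.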
Simulation and modelling
A model is a simplified representation of a system
Simulations use models to generate predictions about how a system might behave over time
Simulations can test scenarios that are too dangerous, expensive, slow or large-scale to study directly, expanding the scope of questions we can ask, e.g. modelling how an epidemic might spread under different vaccination rates
Model outputs depend on assumptions, inputs and parameters, so treating them as exact predictions can create a false sense of certainty
Model quality needs to be checked by comparing predictions with real-world data and testing how sensitive results are to changes in assumptions, e.g.:
does the model match real-world data?
do small input changes cause big output changes?
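The epidemic example above can be sketched as a minimal discrete-time SIR simulation; all parameters (population size, transmission and recovery rates) are illustrative assumptions, not data from any real epidemic:

```python
# Minimal sketch of an epidemic model run under different vaccination rates
# All parameters are invented for illustration

def simulate(vaccination_rate, population=1000, beta=0.3, gamma=0.1, days=200):
    """Discrete-time SIR model; returns the total number ever infected."""
    s = population * (1 - vaccination_rate)  # susceptible (vaccinated assumed immune)
    i = 1.0                                  # one initial case
    r = 0.0
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return r + i

for rate in (0.0, 0.5, 0.9):
    print(f"vaccination {rate:.0%}: ~{simulate(rate):.0f} total infected")
```

Re-running with different vaccination rates is exactly the kind of scenario-testing the note describes: the scenarios are too slow and unethical to study directly, but cheap to explore in a model, and the outputs are only as good as the assumed parameters.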
Data generation, processing and visualisation
Data generation means collecting raw data using a method or tool, e.g. a weather station generates data by recording temperature every minute with a digital probe
Once data has been generated, it needs to be processed; this means changing raw data to make it usable, for example by:
averaging: combining many readings into hourly/daily means
normalising: adjusting values to allow fair comparison, e.g. per 1000 people
categorising: turning continuous data into groups, e.g. age bands
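The three processing steps above can be sketched as follows; the raw readings, case counts and ages are invented for illustration:

```python
# Sketch of averaging, normalising and categorising; all values are invented
from statistics import mean

# averaging: combine many temperature readings into one mean value
raw = [21.1, 21.4, 20.9, 35.0, 21.2, 21.0]  # note the outlier (35.0)
avg = mean(raw)  # the outlier pulls the mean up but is hidden in the single number

# normalising: adjust values for fair comparison, e.g. cases per 1000 people
cases, population = 42, 28_000
per_1000 = cases / population * 1000

# categorising: turn continuous data (ages) into bands
def age_band(age):
    return "0-17" if age < 18 else "18-64" if age < 65 else "65+"

bands = [age_band(a) for a in (5, 30, 70)]
print(round(avg, 2), per_1000, bands)
```

The averaging step also illustrates the reliability point made below it: the mean is easier to read than six raw values, but the outlying reading disappears from view.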
Processed data can then be visualised in a format that makes patterns easier to interpret, e.g. line graphs for change over time, or bar charts for comparing categories
Processing data and converting it into visual forms can affect the interpretation of data, so it is important that the correct processes and visualisations are chosen:
Processing choices can affect reliability, e.g.:
averaging can simplify data and make it easier to read, but it may hide outlying values
categorising continuous data can make results easier to compare, but the chosen cut-offs can change the pattern that appears
Visualisation can improve reliability by making scales, units, and uncertainty visible (e.g. error bars), but it can reduce reliability if design choices distort the message (e.g. misleading axes, missing labels or cherry-picked ranges)
Good practice is to make the generation and processing transparent so others can judge the reliability
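The misleading-axes point can be shown with simple arithmetic; the two values below are invented and differ by only 2%:

```python
# Sketch of how a truncated axis distorts a visual comparison
# The two values are invented: a real difference of only 2%
a, b = 100.0, 102.0

# Full axis (starting at 0): bar heights are proportional to the values
full_ratio = b / a                     # ~1.02 -> bars look nearly equal

# Truncated axis (starting at 99): heights are measured from the cut-off
truncated_ratio = (b - 99) / (a - 99)  # 3.0 -> second bar looks 3x taller

print(full_ratio, truncated_ratio)
```

The same data, processed honestly, yields a very different visual impression depending on where the axis starts, which is why transparent design choices matter for reliability.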
How tools influence what counts as evidence
The quality of evidence is often judged using criteria linked to tools:
reliability: does it give consistent results?
validity: does it measure what it claims to measure?
reproducibility: can others get similar results using the same method?
While tool-produced results are often treated as stronger evidence because they are standardised and repeatable, such data can still mislead when it is not evaluated carefully
Factors that can be measured quantitatively using tools can become more valued than those that cannot, so tools can narrow the kinds of knowledge we focus on
E.g. exams prioritise measurable scores over less measurable outcomes like student confidence
Tool outputs can be mistaken as objective, even though tools reflect human design choices, e.g. what data is collected and what is ignored
Ethical issues arise when evidence is defined by tools that can disadvantage or exclude some groups, e.g. biased measurement practices or unequal access to the tools