8.1 Algorithms (OCR A Level Computer Science) Flashcards

Exam code: H446


Cards in this collection (92)

  • Define time complexity.

    Time complexity is a measure of the number of operations or steps performed by an algorithm, regardless of the speed of the hardware.

  • Why do we compare algorithms for suitability?

    We compare algorithms to determine which is most suitable for a specific dataset, usually preferring those that finish quicker and use less memory.

  • The measurement of how much memory an algorithm requires is called           .

    The measurement of how much memory an algorithm requires is called space complexity.

  • True or False?

    The time complexity of an algorithm depends on the speed of the CPU that runs it.

    False.

    Time complexity is independent of the hardware or CPU speed and is only based on the number of steps or instructions performed.

  • Define space complexity.

    Space complexity is the amount of memory an algorithm requires to complete its task.

  • As     increases, an algorithm that performs a single operation for each value becomes more           .

    As n increases, an algorithm that performs a single operation for each value becomes more inefficient.

  • Which algorithm is more efficient for calculating the sum of the first n integers: one that uses a loop or one that uses a formula?

    The algorithm that uses a formula (sum = n * (n+1)/2) is more efficient because it always performs the same number of instructions, regardless of n.
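
    A minimal Python sketch contrasting the two approaches (function names are illustrative):

    ```python
    def sum_loop(n):
        # O(n): one addition per value from 1 to n
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_formula(n):
        # O(1): the same few operations regardless of n
        return n * (n + 1) // 2

    assert sum_loop(1000) == sum_formula(1000) == 500500
    ```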

  • When comparing algorithms, those that         quicker and use         memory are usually preferred.

    When comparing algorithms, those that finish quicker and use less memory are usually preferred.

  • Define Big O Notation.

    Big O Notation is used to express the complexity and scalability of an algorithm as its order (O), keeping only the term with the largest exponent and discarding constant factors and lower-order terms.

  • Who is Big O Notation named after?

    Big O Notation is commonly attributed to the German mathematician Edmund Landau, who popularised it; it is sometimes called Landau notation, and the O stands for the order of the function.

  • Define linear time.

    Linear time refers to an algorithm whose number of instructions grows proportionally with the size of the input; this is usually written as O(n).

  • What does it mean if an algorithm has O(n) time complexity?

    It means that the number of instructions executed grows proportionally with the input size, so if the input doubles, the execution time also roughly doubles.

  • In a linear time algorithm, if the input size increases from 100 to 1000, the number of instructions executed will increase by       times.

    In a linear time algorithm, if the input size increases from 100 to 1000, the number of instructions executed will increase by 10 times.

  • True or False?

    In O(n) algorithms, constant terms like +5 become more important than the linear term as n increases.

    False.

    As n increases, the linear term (e.g., 3n) dominates and the constant term becomes insignificant in determining time complexity.

  • Define polynomial time complexity.

    Polynomial time complexity is when an algorithm's performance is proportional to the input size raised to a power, such as O(n²), O(n³), etc.

  • True or False?

    Bubble sort is an example of an algorithm with polynomial time complexity.

    True.

    Bubble sort requires nested loops, resulting in O(n²) time, which is polynomial.

  • As the value of      increases, the term         becomes the most significant in polynomial time algorithms.

    As the value of n increases, the term with the highest power becomes the most significant in polynomial time algorithms.

  • What happens to the Big-O notation when you add more nested loops to an algorithm?

    Each additional nested loop increases the power of n in the Big-O notation, such as O(n²), O(n³), and so on.

  • Define logarithmic time complexity.

    Logarithmic time complexity describes an algorithm whose execution time increases only slightly as the input size grows; it is written as O(log n), with the logarithm usually taken to base 2 in computer science.

  • True or False?

    Doubling the size of the input in a logarithmic time algorithm has a large effect on the number of instructions executed.

    False.

    In a logarithmic time algorithm, doubling the input size increases the number of instructions by only a small amount, such as one extra step in a binary search.

  • Define worst case complexity.

    The worst case complexity of an algorithm is the scenario in which it performs at its least efficient, requiring the maximum number of instructions to complete, such as finding an item in the last position during a linear search or sorting a reversed list with bubble sort.

  • True or False?

    Big-O notation is almost always measured by the worst case performance of an algorithm.

    True.

    Big-O notation typically describes the worst case performance of an algorithm so that programmers can guarantee a minimum performance level.

  • From the Big-O time complexity graph,         time grows the quickest and is the most inefficient, while        time is the most efficient.

    From the Big-O time complexity graph, exponential time grows the quickest and is the most inefficient, while constant time is the most efficient.

  • Why do we drop coefficients and non-dominant terms when expressing Big-O notation?

    In Big-O notation, only the dominant factor is used because as input size grows, coefficients and lower-order terms become insignificant in comparison to the dominant term.
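
    As an illustration (the polynomial 3n² + 5n + 2 is an assumed example), the share of the total work contributed by the non-dominant terms shrinks towards zero as n grows:

    ```python
    # Illustrative only: how much of 3n^2 + 5n + 2 comes from the non-dominant terms?
    for n in (10, 1_000, 100_000):
        dominant = 3 * n ** 2
        rest = 5 * n + 2
        print(n, rest / (dominant + rest))  # fraction from the +5n and +2 terms
    ```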

  • Define binary search.

    A binary search is an efficient search algorithm that repeatedly compares the middle element of a sorted list to the target value, halving the search space each time until the item is found or not present.

  • True or False?

    A binary search can be performed on any list, regardless of whether it is sorted.

    False.

    A binary search requires the list to be sorted to work correctly. If the list is not sorted, the algorithm will not function as intended.

  • The time complexity of a binary search is        , because with each iteration the search space is     .

    The time complexity of a binary search is O(log n), because with each iteration the search space is halved.

  • What is the difference in space complexity between an iterative and a recursive binary search?

    An iterative binary search uses O(1) space because it only requires a fixed number of variables. A recursive binary search uses O(log n) space due to the call stack from recursive calls.
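
    A minimal Python sketch of both versions, assuming a sorted list of comparable items (function names are illustrative):

    ```python
    def binary_search_iterative(items, target):
        """Iterative binary search on a sorted list; O(1) extra space."""
        start, end = 0, len(items) - 1
        while start <= end:
            mid = (start + end) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                start = mid + 1   # search the right half
            else:
                end = mid - 1     # search the left half
        return -1                 # not found

    def binary_search_recursive(items, target, start=0, end=None):
        """Recursive binary search; O(log n) space from the call stack."""
        if end is None:
            end = len(items) - 1
        if start > end:
            return -1
        mid = (start + end) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            return binary_search_recursive(items, target, mid + 1, end)
        return binary_search_recursive(items, target, start, mid - 1)

    print(binary_search_iterative([1, 3, 5, 7, 9], 7))   # 3
    ```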

  • What is the purpose of the mid pointer in the binary search algorithm?

    The mid pointer identifies the middle element of the current search range, which is compared to the target item to determine if the search should continue to the left or right half of the list.

  • In a binary search, the list must be         before the algorithm can work correctly.

    In a binary search, the list must be ordered before the algorithm can work correctly.

  • True or False?

    In binary search, if the middle item is greater than the target, the start pointer is moved to mid + 1.

    False.

    If the middle item is greater than the target, the end pointer is moved to mid - 1.

  • If the item is not found during a binary search, the algorithm returns    .

    If the item is not found during a binary search, the algorithm returns -1.

  • What happens to the search range in a binary search after each iteration of the loop?

    After each iteration, the search range is halved, narrowing down to either the left or right half of the list depending on the comparison result.

  • Define linear search.

    A linear search is an algorithm that sequentially checks each element in an unordered list for the target value until it is found or the list ends.

  • What is the time complexity of a linear search in the worst case scenario?

    The worst case time complexity of a linear search is O(n), where n is the number of items in the list.

  • In a linear search, the list is searched         from start to end, comparing each element to the value being searched for.

    In a linear search, the list is searched sequentially from start to end, comparing each element to the value being searched for.

  • True or False?

    Linear search can only be used on lists that are sorted in order.

    False.

    Linear search works on unordered lists, as it checks each element one by one.

  • What is the space complexity of a linear search?

    The space complexity of a linear search is O(1), as it only needs a constant amount of extra memory regardless of input size.

  • If the value being searched for is       in the list during a linear search, the algorithm outputs where it was found.

    If the value being searched for is found in the list during a linear search, the algorithm outputs where it was found.

  • What value does the linear search algorithm return if the item is not found in the list?

    If the item is not found, linear search returns -1.

  • In linear search, the variable      is used as a counter to move through the list, and       is a flag to signal when the item is found.

    In linear search, the variable i is used as a counter to move through the list, and found is a flag to signal when the item is found.

  • True or False?

    In a linear search, the search stops as soon as the item is found.

    True.

    The algorithm breaks out of the loop once the item is found, so it does not continue checking the rest of the list.

  • The default value of       in linear search is -1, which indicates that the item was      .

    The default value of index in linear search is -1, which indicates that the item was not found.

  • Which variables are initialised at the start of the linear search algorithm and what are their purposes?

    At the start of linear search, index = -1 (indicates not found), i = 0 (counter for the list), and found = False (flag to signal when the item is found) are initialised.
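
    A minimal Python sketch using the same three variables (the function name is illustrative):

    ```python
    def linear_search(items, value):
        index = -1        # default: indicates not found
        i = 0             # counter to move through the list
        found = False     # flag to signal when the item is found
        while i < len(items) and not found:
            if items[i] == value:
                index = i
                found = True   # stop searching once the item is found
            i += 1
        return index

    print(linear_search([4, 2, 7, 9], 7))   # 2
    print(linear_search([4, 2, 7, 9], 5))   # -1
    ```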

  • Define bubble sort.

    A bubble sort is a sorting algorithm that repeatedly compares and swaps adjacent elements if they are out of order, moving the largest value to the end with each pass.

  • What is a pass in the context of bubble sort?

    A pass is a complete cycle through the list where all adjacent pairs are compared and swapped if necessary.

  • The worst case time complexity of bubble sort is        and its space complexity is    .

    The worst case time complexity of bubble sort is O(n²) and its space complexity is O(1).

  • True or False?

    Bubble sort always requires the same number of passes, regardless of whether the list is already sorted.

    False.

    If the list is already sorted, bubble sort will only need one pass to check all items, thanks to its swap detection.

  • True or False?

    Bubble sort reduces the number of comparisons in each pass because the sorted end of the list does not need to be rechecked.

    True.

    Each pass through the list places the next largest value at the end, so the sorted section does not need to be checked in later passes, reducing unnecessary comparisons.

  • In bubble sort, using a         loop instead of a for loop allows the algorithm to        early if no swaps occur.

    In bubble sort, using a while loop instead of a for loop allows the algorithm to terminate early if no swaps occur.

  • Define early termination in bubble sort.

    Early termination in bubble sort is a technique where the algorithm stops if a full pass is completed with no swaps, indicating the list is already sorted.

  • What is the purpose of decreasing the number of compared items by one in each pass of bubble sort?

    Decreasing the number of compared items by one on each pass ensures that the already sorted end of the list is not checked again, which improves efficiency by avoiding unnecessary comparisons and swaps.
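
    A minimal Python sketch combining early termination (the while loop and swapped flag) with the shrinking comparison range (the function name is illustrative):

    ```python
    def bubble_sort(items):
        n = len(items)
        swapped = True
        while swapped:              # while loop allows early termination
            swapped = False
            for i in range(n - 1):  # compare adjacent pairs
                if items[i] > items[i + 1]:
                    items[i], items[i + 1] = items[i + 1], items[i]
                    swapped = True
            n -= 1                  # the largest value is now in place, so shrink the range
        return items

    print(bubble_sort([5, 1, 4, 2]))   # [1, 2, 4, 5]
    ```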

  • Define insertion sort.

    An insertion sort is a sorting algorithm that sorts one item at a time by placing it in the correct position of an unsorted list, repeating until all items are sorted.

  • What is the worst-case time complexity of an insertion sort?

    The worst-case time complexity of an insertion sort is O(n²).

  • Insertion sort is considered to have a space complexity of    , as it does not require any additional storage space beyond the input list.

    Insertion sort is considered to have a space complexity of O(1), as it does not require any additional storage space beyond the input list.

  • True or False?

    Insertion sort compares the current item only to the last item in the list.

    False.

    Insertion sort compares the current item to each previous item in the list until it finds its correct position.

  • True or False?

    In the insertion sort algorithm, indexing starts at 1.

    False.

    In most programming languages, including Python, list indexing starts at 0, not 1.

  • The insertion sort algorithm compares each item with the values            it in the list to find the correct position.

    The insertion sort algorithm compares each item with the values before it in the list to find the correct position.

  • What does the statement list[position] = list[position - 1] do in the insertion sort algorithm?

    The statement list[position] = list[position - 1] shifts the previous item one position to the right to make space for the item being inserted in its correct position.
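
    A minimal Python sketch showing that shifting step in context (the function name is illustrative):

    ```python
    def insertion_sort(items):
        for index in range(1, len(items)):
            current = items[index]
            position = index
            # Shift previous items one place right while they are larger than current
            while position > 0 and items[position - 1] > current:
                items[position] = items[position - 1]
                position -= 1
            items[position] = current   # insert the item into its correct position
        return items

    print(insertion_sort([5, 2, 4, 1]))   # [1, 2, 4, 5]
    ```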

  • Define merge sort.

    A merge sort is a divide and conquer algorithm that repeatedly divides a list into smaller sub lists until each contains one element, and then merges the sub lists to produce a sorted list.

  • What are the two main steps in the merge sort algorithm?

    The two main steps are: first, divide the list into sub lists until each contains only one element; second, repeatedly merge groups of two sub lists and sort them until all have been merged into a single sorted list.

  • The worst case time complexity of merge sort is         , and its space complexity is     .

    The worst case time complexity of merge sort is O(n log n), and its space complexity is O(n).

  • True or False?

    Merge sort merges more than two sub lists at a time.

    False.

    Merge sort can only merge two sub lists at a time during each merging iteration.

  • Define recursion in the context of merge sort.

    In merge sort, recursion means the algorithm calls itself repeatedly to divide the list into smaller sublists until the base case of one element is reached.

  • In Python, the expression [:mid] returns                    up to, but not including, the       index.

    In Python, the slice expression list[:mid] returns all elements up to, but not including, the mid index.
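
    A quick check (the list values are arbitrary):

    ```python
    items = [5, 3, 8, 1, 9, 2]
    mid = len(items) // 2       # 3
    print(items[:mid])          # [5, 3, 8]  -> indices 0, 1, 2 (not index 3)
    print(items[mid:])          # [1, 9, 2]
    ```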

  • What is the role of leftPointer, rightPointer, and newListPointer in the merge sort algorithm?

    In merge sort, leftPointer and rightPointer keep track of the current position in the left and right sublists, while newListPointer keeps track of the next available space in the merged (sorted) list.

  • True or False?

    Merge sort always calls both the final two while loops to merge remaining left and right elements.

    False.

    Only one of the two final while loops is called, depending on which sublist has remaining elements after the first merge stage.
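
    A minimal recursive Python sketch; appending to a new list stands in for an explicit newListPointer, and only one of the two trailing while loops does any work on a given merge (the function name is illustrative):

    ```python
    def merge_sort(items):
        if len(items) <= 1:           # base case: one element is already sorted
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])

        merged = []
        leftPointer = rightPointer = 0
        # Merge the two sorted sublists
        while leftPointer < len(left) and rightPointer < len(right):
            if left[leftPointer] <= right[rightPointer]:
                merged.append(left[leftPointer])
                leftPointer += 1
            else:
                merged.append(right[rightPointer])
                rightPointer += 1
        # Only one of these two loops runs: whichever sublist still has elements left
        while leftPointer < len(left):
            merged.append(left[leftPointer])
            leftPointer += 1
        while rightPointer < len(right):
            merged.append(right[rightPointer])
            rightPointer += 1
        return merged

    print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]
    ```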

  • Define quick sort.

    A quick sort is a divide and conquer algorithm that reduces a problem into smaller subproblems, sorts them recursively using a pivot to partition the list, and combines the results to produce a sorted order.

  • What is the function of the pivot in the quick sort algorithm?

    The pivot in quick sort divides the list into two parts: values less than the pivot and values greater than the pivot. Each sublist is then sorted recursively.

  • Quick sort has a best and average case time complexity of           , and a worst case time complexity of       .

    Quick sort has a best and average case time complexity of O(n log n), and a worst case time complexity of O(n²).

  • True or False?

    Quick sort always uses O(n log n) time, regardless of how the pivot is chosen.

    False.

    In the worst case, if the pivot is always the smallest or largest element, quick sort can take O(n²) time.

  • Define pivot in the context of quick sort.

    The pivot is the element in the list used to compare and partition the other elements during each iteration of quick sort. Typically, it is set to the first element of the sublist.

  • In quick sort, after the pointers cross, the       is swapped with the value indexed by the       .

    In quick sort, after the pointers cross, the pivot is swapped with the value indexed by the right pointer.

  • What is the purpose of the left and right pointers in the quick sort algorithm?

    The left and right pointers are used to scan the sublist from both ends, identifying values that are on the wrong side of the pivot so they can be swapped, ensuring that elements less than the pivot are to its left and elements greater than the pivot are to its right.
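
    A minimal in-place Python sketch, assuming the first element of each sublist is chosen as the pivot (the function name is illustrative):

    ```python
    def quick_sort(items, start=0, end=None):
        if end is None:
            end = len(items) - 1
        if start >= end:
            return items
        pivot = items[start]
        left = start + 1
        right = end
        while left <= right:
            # Move the left pointer right past values less than or equal to the pivot
            while left <= right and items[left] <= pivot:
                left += 1
            # Move the right pointer left past values greater than the pivot
            while left <= right and items[right] > pivot:
                right -= 1
            if left < right:                       # both pointers found misplaced values
                items[left], items[right] = items[right], items[left]
        # Pointers have crossed: swap the pivot with the value at the right pointer
        items[start], items[right] = items[right], items[start]
        quick_sort(items, start, right - 1)        # sort values less than the pivot
        quick_sort(items, right + 1, end)          # sort values greater than the pivot
        return items

    print(quick_sort([3, 6, 2, 5, 1]))   # [1, 2, 3, 5, 6]
    ```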

  • True or False?

    Quick sort works by repeatedly partitioning the list around a pivot and then recursively sorting sublists on both sides of the pivot.

    True.

    Quick sort partitions the list using a pivot, then recursively applies the same process to sublists on either side of the pivot until the entire list is sorted.

  • Define Dijkstra's shortest path algorithm.

    Dijkstra's shortest path algorithm is an optimisation algorithm that calculates the shortest path from a starting node to all other nodes in a weighted graph.

  • What type of problem does Dijkstra's algorithm solve?

    Dijkstra's algorithm solves optimisation problems by finding the shortest route between two points or from a start to all other nodes in a network.

  • In Dijkstra's algorithm, each connection between two nodes is called an       or       , and carries a         value.

    In Dijkstra's algorithm, each connection between two nodes is called an edge or arc, and carries a weighted value.

  • True or False?

    Dijkstra's algorithm only calculates the shortest path to one destination node.

    False.

    Dijkstra's algorithm calculates the shortest path from the starting node to every other node in the graph.

  • During Dijkstra's algorithm, unvisited nodes are organised in a         queue so that the node with the         path is always chosen next.

    During Dijkstra's algorithm, unvisited nodes are organised in a priority queue so that the node with the shortest (lowest-cost) path is always chosen next.

  • When starting Dijkstra’s algorithm, the starting node is initialised to    and all other nodes are set to           .

    When starting Dijkstra’s algorithm, the starting node is initialised to 0 and all other nodes are set to infinity.

  • What is the next step after visiting the starting node in Dijkstra’s algorithm?

    After visiting the starting node, you update each of its neighbours by adding the distance from the starting node to each neighbour.

  • For each node in Dijkstra’s algorithm, you must keep track of the        node that offered the          route.

    For each node in Dijkstra’s algorithm, you must keep track of the previous node that offered the shortest route.

  • True or False?

    In Dijkstra’s algorithm, a node’s shortest distance is updated only if a shorter path is found through a neighbouring node.

    True.

    A node’s distance is updated only if a path through a neighbour results in a shorter total distance than previously recorded.
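
    A minimal Python sketch using heapq as the priority queue; the adjacency-dictionary graph format and the node names are assumptions for illustration:

    ```python
    import heapq

    def dijkstra(graph, start):
        # graph is assumed to be {node: [(neighbour, weight), ...]}
        distances = {node: float("inf") for node in graph}   # all other nodes start at infinity
        distances[start] = 0                                  # starting node initialised to 0
        previous = {node: None for node in graph}             # previous node on the shortest route
        queue = [(0, start)]
        visited = set()
        while queue:
            dist, node = heapq.heappop(queue)                 # unvisited node with the lowest distance
            if node in visited:
                continue
            visited.add(node)
            for neighbour, weight in graph[node]:
                new_dist = dist + weight
                if new_dist < distances[neighbour]:           # update only if a shorter path is found
                    distances[neighbour] = new_dist
                    previous[neighbour] = node
                    heapq.heappush(queue, (new_dist, neighbour))
        return distances, previous

    graph = {
        "A": [("B", 2), ("C", 5)],
        "B": [("A", 2), ("C", 1), ("D", 4)],
        "C": [("A", 5), ("B", 1), ("D", 1)],
        "D": [("B", 4), ("C", 1)],
    }
    print(dijkstra(graph, "A")[0])   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
    ```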

  • Define A* search algorithm.

    An A* search algorithm is a pathfinding algorithm that combines the real cost from the start node with a heuristic estimate of the cost to the goal node, aiming to efficiently find the shortest path.

  • What is the purpose of the heuristic function h(x) in the A* search algorithm?

    The purpose of the heuristic function h(x) is to estimate the cost from the current node to the goal node, often using the straight line (Euclidean) distance, helping the algorithm focus towards the goal.

  • In the A* algorithm, the total cost for each node is calculated as      = g(x) +      .

    In the A* algorithm, the total cost for each node is calculated as f(x) = g(x) + h(x).

  • True or False?

    The A* algorithm stops as soon as the goal node is reached, unlike Dijkstra’s algorithm which finds distances to all nodes.

    True.

    A* search is designed to focus on reaching the goal node and terminates once it is found, while Dijkstra’s algorithm continues until it has calculated the shortest path to every node.

  • Define heuristic in relation to the A* algorithm.

    A heuristic in the A* algorithm is a value that estimates the cost or distance from a given node to the goal node.

  • What should the values of g(x) and f(x) be when initialising the starting node in A* search?

    When initialising the starting node, g(x) should be 0, and f(x) should be equal to h(x) (the heuristic value).

  • To update a neighbour node in A* search, add the distance from the         node to the neighbour and the         value for the neighbour.

    To update a neighbour node in A* search, add the distance from the current node to the neighbour and the heuristic value for the neighbour.

  • True or False?

    When tracing A* search, you must keep track of each node's previous node that offered the shortest route.

    True.

    In A* search, tracking the previous node that gave the shortest route is essential for reconstructing the optimal path at the end.
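
    A minimal Python sketch of A* with f(x) = g(x) + h(x), assuming an admissible heuristic supplied as a dictionary; the graph format, node names, and heuristic values are assumptions for illustration:

    ```python
    import heapq

    def a_star(graph, heuristic, start, goal):
        # graph: {node: [(neighbour, cost), ...]}, heuristic: {node: h(x)}
        g = {node: float("inf") for node in graph}
        g[start] = 0                                  # g(start) = 0
        previous = {node: None for node in graph}     # previous node on the best route
        open_queue = [(heuristic[start], start)]      # f(start) = h(start)
        closed = set()
        while open_queue:
            f, node = heapq.heappop(open_queue)
            if node == goal:                          # stop as soon as the goal is reached
                path = []
                while node is not None:
                    path.append(node)
                    node = previous[node]
                return list(reversed(path)), g[goal]
            if node in closed:
                continue
            closed.add(node)
            for neighbour, cost in graph[node]:
                new_g = g[node] + cost                # real cost from the start
                if new_g < g[neighbour]:
                    g[neighbour] = new_g
                    previous[neighbour] = node
                    heapq.heappush(open_queue, (new_g + heuristic[neighbour], neighbour))
        return None, float("inf")                     # goal not reachable

    graph = {"A": [("B", 1), ("C", 3)], "B": [("D", 5)], "C": [("D", 1)], "D": []}
    h = {"A": 3, "B": 4, "C": 1, "D": 0}    # assumed straight-line estimates
    print(a_star(graph, h, "A", "D"))       # (['A', 'C', 'D'], 4)
    ```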
