1.
The min-heap property states that for every node i other than the root, A[PARENT(i)] ≤ A[i].
Correct Answer
A. True
Explanation
The min-heap property states that for every node i other than the root, the value of its parent node (A[PARENT(i)]) is less than or equal to the value of the node itself (A[i]). This means that in a min-heap, the smallest element is always at the root, and every other node's value is greater than or equal to its parent's value. Therefore, the given statement is true.
2.
An array constructed with the values [23, 17, 14, 6, 13, 10, 1, 5, 7, 12] is a max heap.
Correct Answer
B. False
Explanation
Reading the array as a 1-based heap, the element A[9] = 7 is larger than its parent A[4] = 6, which violates the max-heap property A[PARENT(i)] ≥ A[i]. Therefore the array is not a max heap and the statement is false.
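A minimal Python sketch of this check (my own illustration, using 0-based list indices, so the children of index i sit at 2*i + 1 and 2*i + 2):

def is_max_heap(a):
    # 0-based indexing: the children of index i are 2*i + 1 and 2*i + 2
    n = len(a)
    for i in range(n):
        for child in (2 * i + 1, 2 * i + 2):
            if child < n and a[child] > a[i]:
                return False, (i, child)   # report the first violating parent/child pair
    return True, None

print(is_max_heap([23, 17, 14, 6, 13, 10, 1, 5, 7, 12]))
# -> (False, (3, 8)): the 7 at 0-based index 8 exceeds its parent 6 at index 3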
3.
For a node A at index i in a heap, the right child of A can be computed as 2*i.
Correct Answer
B. False
Explanation
The given statement is false. Using the standard 1-based heap layout, the right child of the node at index i is located at index 2*i + 1, not 2*i; the index 2*i is the left child.
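A small sketch of these 1-based index formulas (illustrative only; a Python list used this way would simply leave slot 0 unused):

# CLRS-style 1-based heap index arithmetic (slot 0 of the backing array is unused)
def parent(i): return i // 2
def left(i):   return 2 * i
def right(i):  return 2 * i + 1

print(parent(5), left(5), right(5))   # 2 10 11: the right child is 2*i + 1, not 2*i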
4.
The master method can be used to solve all recurrence equations that have the form T(n) = aT(n/b) + f(n).
Correct Answer
B. False
Explanation
The master method applies only to recurrences of the form T(n) = aT(n/b) + f(n) in which a ≥ 1 and b > 1 are constants and f(n) is asymptotically positive, and even then only when f(n) fits one of the theorem's three cases; recurrences whose f(n) falls into the gaps between the cases cannot be solved with it. Therefore, the statement that the master method can be used to solve all recurrences of this form is false.
5.
Quicksort has a worst-case running time of θ(n^2).
Correct Answer
A. True
Explanation
Quicksort has a worst-case running time of θ(n^2) because in certain scenarios, the algorithm can make poor pivot choices, resulting in highly unbalanced partitions. When the pivot is consistently chosen as the smallest or largest element, the algorithm performs poorly, taking quadratic time to sort the array. However, on average, Quicksort has a time complexity of θ(n log n) due to its efficient partitioning and recursive nature.
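A rough sketch (my own, using the Lomuto partition with the last element as pivot) that counts key comparisons; already-sorted input triggers the maximally unbalanced partitions described above:

import random

def quicksort(a, lo, hi, counter):
    # Lomuto partition with a[hi] as the pivot; counter[0] accumulates key comparisons
    if lo < hi:
        pivot, i = a[hi], lo - 1
        for j in range(lo, hi):
            counter[0] += 1
            if a[j] <= pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i + 1], a[hi] = a[hi], a[i + 1]
        quicksort(a, lo, i, counter)
        quicksort(a, i + 2, hi, counter)

for data in (list(range(100)), random.sample(range(100), 100)):
    c = [0]
    quicksort(data, 0, len(data) - 1, c)
    print(c[0])   # 4950 = 100*99/2 for the sorted input, far fewer for the shuffled one

The sorted input forces one subproblem of size 0 and one of size n-1 at every level, which is what produces the θ(n^2) comparison count.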
6.
One good reason for using a hash table would be to reduce storage requirements.
Correct Answer
A. True
Explanation
A hash table can reduce storage requirements compared with a direct-address table: instead of allocating one slot for every possible key in a large universe, a hash function maps keys into a table whose size is proportional to the number of keys actually stored, while still giving expected constant-time access. Collisions, where multiple keys map to the same slot, are handled with techniques such as chaining or open addressing. Overall, hashing trades a small amount of extra work per operation for a much smaller table, which is why reducing storage is a good reason to use one.
7.
If we are using the division method for our hashing function, a table size of 64 would be a good choice.
Correct Answer
B. False
Explanation
A table size of 64 is a poor choice for the division method, in which the hash value is the remainder h(k) = k mod m. Because 64 = 2^6 is a power of two, k mod 64 is simply the low-order 6 bits of the key, so the hash ignores the rest of the key and keys that agree in those bits all collide. A prime not too close to an exact power of two is usually recommended instead. Therefore the statement is false.
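A quick illustration (the key values and the prime 61 are arbitrary choices of mine) of how m = 64 looks only at the low-order bits:

# With m = 64 = 2**6, k mod m keeps only the low 6 bits of k,
# so keys that differ only in their higher bits all land in the same slot.
m = 64
keys = [100, 164, 228, 292, 356]        # each key is 64 more than the previous one
print([k % m for k in keys])            # [36, 36, 36, 36, 36] -- every key collides
print([k % 61 for k in keys])           # [39, 42, 45, 48, 51] -- a prime modulus spreads them out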
8.
Printing every node in a binary tree takes θ(lg n) time.
Correct Answer
B. False
Explanation
Printing every node in a binary tree takes O(n) time, not θ(lg n) time. In order to print every node, we need to visit each node once, which will take linear time proportional to the number of nodes in the tree. The height of the tree does not affect the time complexity of printing every node. Therefore, the given statement is false.
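A minimal sketch (illustrative class and function names of my own) showing why the work is linear: an in-order traversal touches each node exactly once:

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def print_tree(node):
    # In-order traversal: each node is visited exactly once, so printing is θ(n)
    if node is not None:
        print_tree(node.left)
        print(node.key)
        print_tree(node.right)

print_tree(Node(2, Node(1), Node(3)))   # prints 1, 2, 3 -- one visit per node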
9.
We do not have to make any key comparisons to find the min or max nodes of a binary search tree.
Correct Answer
A. True
Explanation
In a binary search tree, the minimum value is always found at the leftmost node and the maximum value is always found at the rightmost node. Since the tree is structured in a way that smaller values are stored on the left and larger values are stored on the right, we can directly access the minimum and maximum nodes without the need for any key comparisons. Therefore, the statement is true.
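A small sketch (node class and names are my own) showing that finding the minimum and maximum only follows child pointers and never compares keys:

class BSTNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def tree_minimum(x):
    while x.left is not None:    # pointer chasing only -- no key comparisons
        x = x.left
    return x

def tree_maximum(x):
    while x.right is not None:
        x = x.right
    return x

root = BSTNode(8, BSTNode(3, BSTNode(1)), BSTNode(10, None, BSTNode(14)))
print(tree_minimum(root).key, tree_maximum(root).key)   # 1 14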
10.
Open addressing is usually preferred over chaining for solving hash function collisions.
Correct Answer
B. False
Explanation
Open addressing and chaining are two different methods used to handle collisions in hash functions. Open addressing involves finding an alternative empty slot within the hash table to store the collided element, while chaining involves creating a linked list of elements that have collided. The choice between open addressing and chaining depends on various factors such as the expected number of collisions, the size of the hash table, and the type of data being stored. There is no general preference for one method over the other, as it depends on the specific requirements of the application. Therefore, the statement that open addressing is usually preferred over chaining for solving hash function collisions is false.
11.
Red-black trees are an attempt to minimize the time of dynamic set operations by creating a balanced tree.
Correct Answer
A. True
Explanation
Red-black trees are a type of self-balancing binary search tree that maintain their balance by following a set of rules. These rules ensure that the tree remains relatively balanced, which helps to minimize the time required for dynamic set operations such as insertion, deletion, and search. By maintaining balance, red-black trees can guarantee a worst-case time complexity of O(log n) for these operations, making them efficient for a wide range of applications. Therefore, the statement that red-black trees are an attempt to minimize the time of dynamic set operations by creating a balanced tree is true.
12.
In a red-black tree, a black node must have only red children.
Correct Answer
B. False
Explanation
In a red-black tree, a black node can have red children, black children, or one of each. The color restriction applies to red nodes: every red node must have two black children. The only constraint a black node must respect is the black-height property, namely that every path from a node down to its descendant leaves contains the same number of black nodes. Therefore the statement is false.
13.
A problem that generates completely new subproblems at each step is a good candidate for a dynamic programming solution.
Correct Answer
B. False
Explanation
A problem that generates completely new subproblems at each step is not a good candidate for a dynamic programming solution. Dynamic programming is typically used for problems that can be broken down into overlapping subproblems, where the solutions to these subproblems can be stored and reused to solve the larger problem more efficiently. If a problem generates completely new subproblems at each step, it is unlikely that dynamic programming techniques can be effectively applied.
14.
Dynamic programming has the potential to transform exponential-time algorithms to polynomial time.
Correct Answer
A. True
Explanation
Dynamic programming is a technique that breaks a complex problem into smaller subproblems and solves them, typically in a bottom-up manner, storing each solution in a table. Because the stored solutions are reused rather than recomputed, dynamic programming avoids the redundant work that makes naive recursive algorithms exponential. For problems with overlapping subproblems, this can reduce an exponential-time algorithm to polynomial time, so the statement is true.
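A classic illustration (my own, not from the quiz): computing Fibonacci numbers. The plain recursion takes exponential time, but memoizing each result in a dictionary makes it linear:

def fib(n, memo=None):
    # Top-down dynamic programming: each value is computed once and then reused.
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(200))   # returns immediately; the unmemoized recursion would make on the order of 1.6^200 calls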
15.
A topological sort for a directed acyclic graph (DAG) can be created by performing a breadth-first search on the DAG.
Correct Answer
B. False
Explanation
A plain breadth-first search of a DAG does not produce a topological sort, because visiting vertices in order of their distance from a source does not guarantee that every edge points from an earlier vertex to a later one. Topological sorts are normally produced either with depth-first search, listing vertices in decreasing order of their finishing times, or with Kahn's algorithm, which repeatedly removes vertices whose in-degree has dropped to zero. Therefore the statement is false.
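A compact depth-first-search sketch (the example graph and names are illustrative) that emits vertices in reverse order of their DFS finishing times, which yields a valid topological order:

def topological_sort(graph):
    # graph: dict mapping each vertex to a list of its successors (assumed acyclic)
    order, visited = [], set()

    def dfs(u):
        visited.add(u)
        for v in graph.get(u, []):
            if v not in visited:
                dfs(v)
        order.append(u)            # u finishes only after all of its descendants

    for u in graph:
        if u not in visited:
            dfs(u)
    return order[::-1]             # reverse finishing order = topological order

dag = {"undershorts": ["pants"], "pants": ["shoes", "belt"],
       "socks": ["shoes"], "belt": ["jacket"], "shoes": [], "jacket": []}
print(topological_sort(dag))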
16.
A minimum spanning tree for a graph with N vertices will have N-2 edges.
Correct Answer
B. False
Explanation
A minimum spanning tree for a graph with N vertices will have exactly N-1 edges. This is because a spanning tree is a connected subgraph that includes all vertices of the original graph, but does not contain any cycles. In order for the spanning tree to be connected, it must have at least N-1 edges. Adding any additional edge would create a cycle, which contradicts the definition of a tree. Therefore, the correct answer is False.
17.
A matrix is a good choice for representing a dense graph.
Correct Answer
A. True
Explanation
A matrix is a good choice for representing a dense graph because it allows for efficient storage and retrieval of edge information. In a dense graph, most of the possible edges are present, resulting in a matrix with many non-zero entries. This makes it convenient to use a matrix, as it provides constant time access to any edge in the graph. Additionally, matrix operations such as matrix multiplication can be used to perform various graph algorithms efficiently. Therefore, using a matrix representation is a suitable choice for dense graphs.
18.
Both matrix and lists can be used to represent weighted graphs.
Correct Answer
A. True
Explanation
Both matrices and lists can be used to represent weighted graphs. Matrices provide a more compact representation, where each element in the matrix represents the weight of the edge between the corresponding vertices. Lists, on the other hand, provide a more flexible representation, where each vertex is associated with a list of its neighboring vertices along with their corresponding weights. Both representations have their own advantages and disadvantages, and the choice between them depends on the specific requirements of the problem at hand.
19.
In Kruskal's algorithm for building a minimum spanning tree, we proceed node by node, adding the connecting edges to the tree.
Correct Answer
B. False
Explanation
In Kruskal's algorithm for building a minimum spanning tree, we do not proceed node by node. Instead, we sort all the edges in ascending order of their weights and then start adding the edges one by one to the tree, making sure that adding an edge does not form a cycle. So, the statement "we proceed node by node, adding the connecting edges to the tree" is incorrect.
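A condensed sketch of Kruskal's algorithm (my own illustration, with a simple union-find) showing that it iterates over edges sorted by weight rather than over nodes:

def kruskal(num_vertices, edges):
    # edges: list of (weight, u, v); vertices are 0 .. num_vertices - 1
    parent = list(range(num_vertices))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):     # edges considered in order of weight, not node by node
        ru, rv = find(u), find(v)
        if ru != rv:                  # skip any edge that would create a cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]))
# [(0, 1, 1), (2, 3, 2), (1, 2, 3)] -- total weight 6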
20.
Huffman encoding uses a dynamic programming strategy to compress data.
Correct Answer
B. False
Explanation
Huffman encoding does not use a dynamic programming strategy to compress data. Instead, it uses a greedy algorithm that assigns shorter codes to more frequently occurring characters. This approach helps in achieving efficient compression by reducing the overall length of the encoded data. Therefore, the correct answer is False.
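A short greedy sketch of Huffman coding using Python's heapq (a common formulation, not taken from the quiz); note that it never builds a dynamic-programming table, it simply keeps merging the two least frequent subtrees:

import heapq
from collections import Counter

def huffman_codes(text):
    # Greedy strategy: repeatedly merge the two least frequent subtrees.
    heap = [[freq, [sym, ""]] for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]    # left branch
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]    # right branch
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])

print(huffman_codes("aaaabbc"))   # {'c': '00', 'b': '01', 'a': '1'} -- the most frequent symbol gets the shortest code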
21.
Which one of the following is a true for a red-black tree?
Correct Answer
B. The root and leaves are black.
Explanation
A red-black tree is a binary search tree in which every node is colored red or black and a small set of properties keeps the tree approximately balanced. Two of those properties are that the root is black and that every leaf (the NIL sentinel) is black. Red nodes, by contrast, are the ones whose placement is restricted, since a red node must have black children. Therefore, the statement "The root and leaves are black" is the true one for a red-black tree.
22.
What distinguishing feature should cause us to choose dynamic programming rather than a divide-and-conquer approach to a problem?
Correct Answer
D. The subproblems for the solution overlap.
Explanation
The distinguishing feature that should cause us to choose dynamic programming rather than a divide-and-conquer approach is that the subproblems of the solution overlap. In dynamic programming, we store the solutions to subproblems in a table and reuse them when the same subproblem appears again while solving larger subproblems, ultimately solving the original problem. This is effective precisely when there are overlapping subproblems, because it avoids redundant calculations; divide-and-conquer, by contrast, works best when the subproblems are independent.
23.
Which of the following is true regarding Prim's algorithm for finding a minimum spanning tree?
Correct Answer
B. The algorithm grows only a single tree from start to finish.
Explanation
Prim's algorithm for finding a minimum spanning tree starts with a single node and gradually grows a tree by adding edges that have the least weight. The algorithm continues this process until it has connected all nodes in the graph, resulting in a single tree that spans all nodes. The statement "The algorithm grows only a single tree from start to finish" accurately describes this characteristic of Prim's algorithm.
24.
Which of the following is NOT a property of a minimum spanning tree?
Correct Answer
A. It may have one or more cycles.
Explanation
A minimum spanning tree is a tree that connects all the vertices in a graph with the minimum possible total edge weight. One of the properties of a minimum spanning tree is that it does not contain any cycles. This is because a cycle would introduce redundant edges and increase the total weight, contradicting the definition of a minimum spanning tree. Therefore, the statement "It may have one or more cycles" is incorrect and not a property of a minimum spanning tree.
25.
What is the name of the technique that stores computed values for future lookup, rather than recomputing them?
Correct Answer
C. Memoization
Explanation
Memoization is the technique that stores computed values for future lookup, rather than recomputing them. This is done in order to optimize the performance of a program by avoiding redundant calculations. By storing the results of expensive function calls and returning the cached result when the same inputs occur again, memoization helps to improve the efficiency of algorithms that have overlapping subproblems. Topological sort is a different technique used for ordering the nodes of a directed graph, while dynamic recursion refers to a recursive approach that dynamically adjusts its behavior based on the input.
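As a small illustration (the function is my own example; functools.lru_cache is the standard-library memoization helper), the recursive binomial coefficient becomes fast once each computed value is stored for future lookup:

from functools import lru_cache

@lru_cache(maxsize=None)            # stores every computed (n, k) result for reuse
def binom(n, k):
    if k == 0 or k == n:
        return 1
    return binom(n - 1, k - 1) + binom(n - 1, k)

print(binom(50, 25))                # each (n, k) subproblem is computed only once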
26.
Bottom-up dynamic programming works by
Correct Answer
B. Transforming recursive calls to a loop
Explanation
Bottom-up dynamic programming works by transforming recursive calls to a loop. This means that instead of solving the problem by recursively calling a function, the solution is obtained by iteratively solving smaller subproblems and using their solutions to build up to the final solution. This approach avoids redundant computations and improves efficiency by storing the solutions to subproblems in a table or array. By iteratively solving the subproblems in a bottom-up manner, the final solution can be obtained without the need for recursive function calls.
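A minimal sketch (my own) of the bottom-up style: the recursion fib(n) = fib(n-1) + fib(n-2) is replaced by a loop that fills a table from the smallest subproblems upward:

def fib_bottom_up(n):
    # No recursive calls: the table is filled from the smallest subproblems upward.
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(10))   # 55, computed with a loop and no recursion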
27.
The basic restructuring operation for red-black trees is
Correct Answer
D. Rotation
Explanation
Rotation is the basic restructuring operation for red-black trees. Red-black trees are a type of self-balancing binary search tree, and rotation is the primary operation used to maintain the balance of the tree. During a rotation, the positions of certain nodes in the tree are rearranged while preserving the binary search tree property and the red-black tree properties. This helps to ensure that the tree remains balanced and efficient for operations such as insertion and deletion. Therefore, rotation is an essential operation for maintaining the integrity and efficiency of red-black trees.
28.
Which of the following is true for optimal binary search trees?
Correct Answer
C. Optimal binary search trees are constructed through the principle of optimal substructure.
Explanation
An optimal binary search tree is constructed through the principle of optimal substructure. This means that the optimal solution to the problem can be obtained by combining optimal solutions to its subproblems. In the case of an optimal binary search tree, the optimal solution for a subtree is obtained by considering the optimal solutions for its left and right subtrees. This allows for efficient construction of the tree by breaking down the problem into smaller subproblems and combining their solutions. Therefore, the statement that optimal binary search trees are constructed through the principle of optimal substructure is true.
29.
The general comparison used to determine if a graph is sparse or dense is between
Correct Answer
C. |E| and |V|^2
Explanation
The correct answer is |E| and |V|^2. Comparing the number of edges |E| with the square of the number of vertices, |V|^2, is the common way to decide whether a graph is sparse or dense: a graph on |V| vertices can have at most on the order of |V|^2 edges, so if |E| is much smaller than |V|^2 the graph is considered sparse, and if |E| is close to |V|^2 it is considered dense.
30.
Greedy algorithms make _______________ optimal choices leading to _______________ optimal solutions.
Correct Answer
C. Locally, globally
Explanation
Greedy algorithms make locally optimal choices, meaning they make the best possible decision at each step based on the information available at that moment, in the hope of arriving at a globally optimal solution, the best overall solution for the entire problem. For problems with the greedy-choice property and optimal substructure, such as minimum spanning trees or Huffman coding, these local choices do lead to a globally optimal solution, which is the sense in which the blanks are filled with "locally" and "globally." For problems without these properties, the locally optimal choices may fail to produce a global optimum.
31.
Before running MAX-HEAPIFY on node i of a heap, what must be true?
Correct Answer
D. Both children of node i must be the roots of max heaps.
Explanation
Before running MAX-HEAPIFY on node i of a heap, it is necessary for both children of node i to be the roots of max heaps. This ensures that the subtree rooted at node i also satisfies the max heap property. MAX-HEAPIFY is a procedure used to maintain the max heap property by comparing the value at node i with its children and swapping them if necessary. Therefore, for MAX-HEAPIFY to work correctly, the children of node i must already be max heaps.
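A 0-based Python sketch of MAX-HEAPIFY (my own rendering of the standard procedure); the comments mark the precondition from the question:

def max_heapify(a, i, heap_size):
    # Precondition: the subtrees rooted at the children of i are already max heaps.
    # 0-based indexing: left child = 2*i + 1, right child = 2*i + 2.
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < heap_size and a[left] > a[largest]:
        largest = left
    if right < heap_size and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)   # the only possible violation moves down one level

a = [1, 14, 10, 8, 7, 9, 3, 2, 4]   # both subtrees of the root are max heaps; only the root is out of place
max_heapify(a, 0, len(a))
print(a)                             # [14, 8, 10, 4, 7, 9, 3, 2, 1]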
32.
In which approach do we randomly select a hash function to avoid bad hashing behavior caused by a selected set of keys?
Correct Answer
D. Universal hashing
Explanation
Universal hashing is an approach where a hash function is randomly selected from a set of hash functions to avoid bad hashing behavior caused by a selected set of keys. This random selection helps to distribute the keys more evenly across the hash table, reducing the chances of collisions and improving the overall performance of the hashing algorithm.
33.
The recurrence equation T(n) = T(n-1) + θ(n) represents the
Correct Answer
B. Worst-case running time for Quicksort
Explanation
The given recurrence T(n) = T(n-1) + θ(n) describes the worst-case running time of Quicksort. In the worst case, the chosen pivot is the smallest or largest element, so partitioning splits the array into subproblems of size 0 and n-1, and the partitioning step itself costs θ(n); this yields T(n) = T(n-1) + θ(n). Expanding the recurrence gives θ(n) + θ(n-1) + ... + θ(1) = θ(n^2), which is Quicksort's worst-case running time.
34.
Primary clustering is a problem in what hashing technique?
Correct Answer
D. Linear probing
Explanation
Primary clustering is a problem that occurs in linear probing hashing technique. In linear probing, when a collision occurs, the next available slot in the hash table is checked sequentially until an empty slot is found. This can lead to primary clustering, where consecutive collisions cause items to cluster together in the hash table. As a result, the efficiency of the hash table decreases as the number of collisions increases.
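A small demonstration (the table size 11 and the key values are arbitrary choices of mine) of how a cluster formed by colliding keys slows down later insertions that hash near it:

def linear_probe_insert(table, key):
    # h(k, i) = (k + i) mod m: on a collision, scan forward to the next empty slot.
    m = len(table)
    for i in range(m):
        slot = (key + i) % m
        if table[slot] is None:
            table[slot] = key
            return slot, i           # final slot and how many extra probes were needed
    raise RuntimeError("hash table is full")

table = [None] * 11
for k in (22, 33, 44, 23, 34):       # 22, 33, 44 all hash to slot 0 and build a cluster
    print(k, linear_probe_insert(table, k))
# 23 and 34 both hash to slot 1, yet must probe past the cluster left by 22, 33, 44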
35.
In hashing with chaining, the load factor α represents
Correct Answer
C. The number of expected elements at any slot
Explanation
In hashing with chaining, the load factor is α = n/m, where n is the number of stored elements and m is the number of slots. It is therefore the average, or expected, number of elements stored at any slot, and it determines the expected chain length: a higher load factor means longer chains, longer searches, and decreased performance.
36.
The worst-case search time for hashing with chaining is
Correct Answer
A. θ(n)
Explanation
The worst-case search time for hashing with chaining is θ(n). This means that in the worst-case scenario, the time it takes to search for an element in a hash table using chaining will be directly proportional to the number of elements in the table. As the number of elements increases, the search time also increases. This occurs because in the case of a collision, where multiple elements hash to the same index, we need to traverse the chain at that index to find the desired element. The length of the chain can grow with the number of elements, leading to longer search times.
37.
n^2 = o(n^2)
Correct Answer
B. False
Explanation
The statement "n^2 = o(n^2)" is false. Little-o denotes a strict, non-tight upper bound: f(n) = o(g(n)) means that f(n) grows strictly slower than g(n), i.e. f(n) < c·g(n) for every positive constant c once n is large enough. A function cannot grow strictly slower than itself, so n^2 ≠ o(n^2), even though n^2 = O(n^2).
38.
n = O(n^2)
Correct Answer
A. True
Explanation
The statement "n = O(n^2)" is true. This means that the function or algorithm represented by n has a time complexity of O(n^2), which indicates that its running time grows quadratically with the size of the input. In other words, as the input size increases, the time taken by the algorithm increases at a rate proportional to the square of the input size.
39.
The reflexive property holds for o (little-oh) and ω (little-omega).
Correct Answer
B. False
Explanation
The reflexive property would require every function to be related to itself, i.e. that f(n) = o(f(n)) and f(n) = ω(f(n)). Both fail, because little-oh and little-omega are strict bounds: a function does not grow strictly slower than itself, nor strictly faster. (Reflexivity does hold for O, Ω, and θ.) Therefore, the correct answer is False.
40.
For two functions f(n) and g(n), if f(n) = ω(g(n)), we know that f(n) is asymptotically larger than g(n).
Correct Answer
A. True
Explanation
f(n) = ω(g(n)) means that f(n) grows strictly faster than g(n): for every positive constant c, f(n) > c·g(n) for all sufficiently large n. Growing strictly faster is exactly what it means to be asymptotically larger, so the statement is true. (Note that "at least as fast" would describe Ω, not ω.)
41.
For two algorithms A and B, if O(A) = n^100 and O(B) = 10^n, then algorithm A is the better choice.
Correct Answer
A. True
Explanation
The time complexity of algorithm A, O(A), is given as n^100, while the time complexity of algorithm B, O(B), is given as 10^n. As n grows larger, the exponential growth rate of algorithm B (10^n) becomes much faster than the polynomial growth rate of algorithm A (n^100). Therefore, algorithm A is the better choice as it has a slower growth rate and will be more efficient for larger input sizes.
42.
The recurrence
could be written equivalently as T(n) = T(n/2) + θ(n)
Correct Answer
B. False
Explanation
The given recurrence relation cannot be written as T(n) = T(n/2) + θ(n). This is because the given relation does not follow the form of the master theorem, which states that T(n) = aT(n/b) + f(n), where a ≥ 1, b > 1, and f(n) is an asymptotically positive function. In the given relation, the recurrence relation does not have the form of aT(n/b), and therefore it is not equivalent to T(n) = T(n/2) + θ(n).
43.
There is a significant difference in the running time f(n) = n and the running time g(n) = lg n.
Correct Answer
A. True
Explanation
There is a significant difference between the two running times: f(n) = n grows linearly while g(n) = lg n grows only logarithmically, and the gap widens rapidly with n. For n = 2^20 (about a million), f(n) is about a million steps while g(n) is only 20. Since lg n = o(n), the logarithmic running time is asymptotically far better, so the statement is true.
44.
f(n) = o(g(n)) means that f(n) is asymptotically smaller than g(n).
Correct Answer
A. True
Explanation
By definition, f(n) = o(g(n)) means that for every positive constant c, f(n) < c·g(n) for all sufficiently large n; in other words, f(n) becomes insignificant relative to g(n) as n grows. This is precisely what "asymptotically smaller" means (just as f(n) = ω(g(n)) means asymptotically larger), so the statement is true.
45.
All functions can be compared asymptotically.
Correct Answer
B. False
Explanation
The statement is false because not every pair of functions is asymptotically comparable. For some pairs f and g, neither f(n) = O(g(n)) nor f(n) = Ω(g(n)) holds; the standard example is f(n) = n versus g(n) = n^(1 + sin n), where the exponent of g oscillates between 0 and 2 forever, so g is infinitely often much larger and infinitely often much smaller than f. Hence not all functions can be compared asymptotically.
46.
f(n) = Ω(g(n)) is like saying f(n) ≥ g(n)
Correct Answer
A. True
Explanation
The statement is true in the informal sense intended: Ω-notation is the asymptotic analogue of ≥. Formally, f(n) = Ω(g(n)) means there is a positive constant c such that f(n) ≥ c·g(n) for all sufficiently large n, so g(n) is a lower bound on the growth of f(n) and f(n) cannot grow more slowly than g(n). Therefore, the correct answer is True.
47.
If the following functions were sorted in increasing order of complexity (big-Oh), which one would be in the 3rd place?
f1(n) = n^(0.999999)logn
f2(n) = 10000000n
f3(n) = 1.000001^n
f4(n) = n^2
Correct Answer
D. F4
Explanation
Sorted in increasing order of growth, the functions are f1, f2, f4, f3. f1(n) = n^0.999999·log n grows more slowly than f2(n) = 10000000n because log n grows more slowly than any positive power of n, and the constant factor 10000000 is irrelevant asymptotically. The linear f2 grows more slowly than the quadratic f4(n) = n^2, and the exponential f3(n) = 1.000001^n eventually dominates every polynomial, so it comes last. Therefore f4 is in the 3rd place.
48.
Select the correct asymptotic notation for the function f(n) = n^3 + 1000 - 100n^2 - 100n.
Correct Answer
A. θ(n^3)
Explanation
The function f(n) has a dominant term of n^3, which means that as n approaches infinity, the growth rate of the function is primarily determined by n^3. The other terms (1000, -100n^2, -100n) are of lesser significance and can be ignored in the asymptotic analysis. Therefore, the correct asymptotic notation for f(n) is θ(n^3), indicating that the function grows at the same rate as n^3.
49.
What is the smallest value of n such that an algorithm whose running time is n^2 runs slower than an algorithm whose running time is 3n + 1?
Correct Answer
C. 4
Explanation
The smallest value of n such that an algorithm whose running time is n^2 runs slower than an algorithm whose running time is 3n + 1 is 4. This can be determined by comparing the growth rates of the two functions. As n increases, the function n^2 grows faster than the function 3n + 1. At n = 4, the running time of the n^2 algorithm is 16, while the running time of the 3n + 1 algorithm is 13. Therefore, the n^2 algorithm runs slower than the 3n + 1 algorithm for n = 4 and onwards.
50.
Given the algorithm for Bubblesort below, what is an accurate description of its running time?
BUBBLESORT(A)
  for i = 1 to A.length - 1
    for j = A.length downto i + 1
      if A[j] < A[j - 1]
        exchange A[j] with A[j - 1]
Correct Answer
B. θ(n^2)
Explanation
The given algorithm for Bubblesort has a running time of θ(n^2). This is because there are two nested loops, one iterating from 1 to n-1 and the other iterating from n to i+1. In the worst case scenario, where the array is in reverse order, the inner loop will always execute n-1 comparisons in the first iteration, n-2 comparisons in the second iteration, and so on, resulting in a total of (n-1) + (n-2) + ... + 2 + 1 = n(n-1)/2 = θ(n^2) comparisons. Therefore, the running time of the algorithm is θ(n^2).
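For reference, a direct Python translation of the pseudocode above (a sketch of mine, using 0-based indices, so the loop bounds shift by one):

def bubblesort(a):
    # Mirrors BUBBLESORT(A): the inner loop bubbles the smallest remaining element leftward.
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1, i, -1):
            if a[j] < a[j - 1]:
                a[j], a[j - 1] = a[j - 1], a[j]
    return a

print(bubblesort([5, 2, 4, 6, 1, 3]))   # [1, 2, 3, 4, 5, 6]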