I've encountered a homework problem:
Which of these is an asymptotically tight upper bound for the best-case running time of an optimal algorithm that finds the maximum element in an arbitrary array of integers of size n?
- O(log n)
- O(n²)
- O(n)
- O(1)
- O(n log n)
Based on my understanding, it's O(n), since even in the best case we still need to scan the whole array to get the result. Please correct me if I'm wrong.
Yes, that's correct. One way to see this is via an adversarial argument. Suppose that you have an algorithm that allegedly finds the maximum value in the array, but doesn't inspect every array element at least once.
Suppose I run your algorithm on some array A1 that consists of nothing but the number 0. Since your algorithm doesn't look at all positions in the array, there's some position it doesn't look at; call that position k. Now, define A2 to be an array that's the same as array A1, except that the element at position k in A2 is defined to be 1.
Now, what happens if I run your algorithm on A2? Since your algorithm never looked at position k in A1, and A2 is identical to A1 except at position k, your algorithm can't tell A1 and A2 apart. Therefore, whatever it says for A1 must be the same as what it says for A2.
That's a problem, though. If your algorithm says that the maximum value is 0, then it's wrong for A2, whose max is 1. If your algorithm says that the maximum value is 1, then it's wrong for A1, whose max is 0. Therefore, the algorithm has to be wrong in at least one case, so it can't be a correct algorithm for finding the maximum value.
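If you'd like to see this concretely, here's a minimal Python sketch of the construction. The names `TrackedArray` and `find_max_skipping` are made up for this illustration: the wrapper just records which indices an algorithm reads, and the "algorithm" deliberately skips one element.

```python
class TrackedArray:
    """Wraps a list and records which indices are read through it."""
    def __init__(self, data):
        self.data = data
        self.read = set()

    def __getitem__(self, i):
        self.read.add(i)
        return self.data[i]

    def __len__(self):
        return len(self.data)


def find_max_skipping(arr):
    """A deliberately broken 'max' that never inspects the last element."""
    best = arr[0]
    for i in range(1, len(arr) - 1):   # bug: stops before index len(arr) - 1
        if arr[i] > best:
            best = arr[i]
    return best


n = 8
a1 = TrackedArray([0] * n)                          # A1: all zeros
print(find_max_skipping(a1))                        # 0

k = next(i for i in range(n) if i not in a1.read)   # a position it never read
a2 = TrackedArray([0] * n)
a2.data[k] = 1                                      # A2: same as A1 except at k
print(find_max_skipping(a2))                        # still 0 ...
print(max(a2.data))                                 # ... but the true max is 1
```

The algorithm returns the same answer for both arrays, yet their maxima differ, so it must be wrong on at least one of them.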
This argument shows that any correct deterministic algorithm that finds the maximum value in an array must look at every position in that array, so its running time is Ω(n), even in the best case. And since a straightforward linear scan finds the maximum in O(n) time, that lower bound is matched, which makes O(n) the asymptotically tight upper bound.
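For the matching upper bound, a single left-to-right scan suffices. A minimal sketch (`find_max` is just an illustrative name, not a library function):

```python
def find_max(arr):
    """Return the maximum of a nonempty list in a single O(n) pass."""
    best = arr[0]
    for x in arr[1:]:
        if x > best:
            best = x
    return best

print(find_max([3, -1, 7, 7, 2]))   # 7
```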
Hope this helps!