The two sum problem can be solved with an O(n) algorithm (a sketch appears at the end of this question), but I first tried the naive O(n^2) approach: two nested loops that check the sum of the i-th integer with each of the remaining integers against the target value. Below is that O(n^2) implementation. In both implementations, nums is the array of integers, n is the size of nums, and indices is an array of size 2 that holds the indices of the two matching integers.

// Naive approach: pair every i with every later j.
for (int i = 0; i < n; ++i) {
    for (int j = i + 1; j < n; ++j) {
        if (nums[i] + nums[j] == target) {
            indices[0] = i; indices[1] = j;
            return indices;
        }
    }
}

This implementation solves the problem in 140 ms. I then tried another O(n^2) approach: for each gap k from 1 to n-1, check the sum of the i-th integer and the (i+k)-th integer against the target value. Here is that implementation:

// Same pairs, but visited by increasing gap k = j - i.
for (int k = 1; k < n; ++k) {
    for (int i = 0; i < n - k; ++i) {
        int j = i + k;
        if (nums[i] + nums[j] == target) {
            indices[0] = i; indices[1] = j;
            return indices;
        }
    }
}

As you can see, the loop body is the same, but this version runs much faster, with an 8 ms run time. Why is that? Is it related to spatial locality?
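
For reference, here is a minimal sketch of the O(n) hash-map approach I mentioned above. This is not the code I am benchmarking; the function name twoSum and the use of std::unordered_map are just for illustration.

#include <unordered_map>
#include <vector>

std::vector<int> twoSum(const std::vector<int>& nums, int target) {
    std::unordered_map<int, int> seen;          // value -> index of an earlier element
    for (int i = 0; i < (int)nums.size(); ++i) {
        auto it = seen.find(target - nums[i]);  // look for the complement of nums[i]
        if (it != seen.end())
            return {it->second, i};             // found the pair
        seen[nums[i]] = i;
    }
    return {};                                  // no pair found
}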


1 Answer

Gassa (accepted answer):

A fair comparison would have both programs execute to the end and still NOT find the indices. By the looks of it, you are testing against a case where the answer exists. Naturally, in that case, the order in which we search for the answer matters greatly.
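
For instance, here is one hypothetical way to generate a case with no answer at all, so both versions run to completion (using the same num, n, and target variables as in the snippets below):

std::fill (&num[0], &num[0] + n, 0);
target = 1; // no two zeros sum to 1, so neither version can return early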

For example, when the only answer is {n - 2, n - 1}, the first code would require O(n^2) operations to find it, while the second one finds it in O(n). Code to generate:

std::fill (&num[0], &num[0] + n, 0);
target = 2;
num[n - 2] = 1;
num[n - 1] = 1;

Conversely, when the only answer is {0, n - 1}, the first code finds it in O(n), while the second one will take O(n^2) steps. Code to generate:

std::fill (&num[0], &num[0] + n, 0);
target = 2;
num[0] = 1;
num[n - 1] = 1;

The &num[0] stuff is to ensure it works whether num is an array or a vector.
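
For illustration, a small stand-alone sketch of that point (arr and vec are just example names):

#include <algorithm>
#include <vector>

int main() {
    const int n = 8;
    int arr[n];                           // plain array
    std::vector<int> vec(n);              // std::vector
    std::fill (&arr[0], &arr[0] + n, 0);  // the same call works on the array...
    std::fill (&vec[0], &vec[0] + n, 0);  // ...and on the vector
}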