So I am reading some code from this website: http://www.geeksforgeeks.org/write-a-c-program-to-find-the-parity-of-an-unsigned-integer/
And it shows how to determine whether a number has even or odd parity. However, I don't understand why the runtime efficiency is log(n). Here is the code for reference:
#include <stdio.h>
#define bool int

/* Function to get parity of number n. It returns 1
   if n has odd parity, and returns 0 if n has even
   parity */
bool getParity(unsigned int n)
{
    bool parity = 0;
    while (n)
    {
        parity = !parity;
        n = n & (n - 1);
    }
    return parity;
}
The runtime efficiency is O(log n), where n is the value of the integer you pass in. However, that's an unconventional way to use big-O notation. More often, complexity is expressed in terms of the size of the input in bits (the number of bits needed to represent the input); in that case the input size is k = O(log₂ n) and the runtime is O(k).
(More precisely, the runtime is Θ(s), where s is the number of bits set in n, assuming individual bit operations take O(1) time.)