What could be the time complexity of the sine function by Taylor's series?


I did search everywhere, even on SO, and the scholarly articles on Google are far beyond my mental capabilities. This is as close as it gets, but it doesn't answer my question. As far as I have looked, there is no answer for this. Believe me (but feel free to prove me wrong).

I have a homework problem about calculating the time complexity of a sine function using the Taylor expansion of sin(x). I am not asking for the Taylor series or a program that computes it, but for its time complexity. I know the next term in the Taylor expansion is given as:

Term in x^n = (Term in x^(n-2)) * x * x / (n * (n-1))

The function snippet is this:

#include <math.h> /* fabs(), isgreater() */

double sine(double x) {
    int n = 3;       /* exponent of the next term */
    double sum = x;  /* running sum, seeded with the first term x */
    double term = x; /* current term x^n / n! */
    /* add terms until the last one drops below the precision threshold */
    while (isgreater(fabs(term), 0.0000000001)) {
        term = (term * x * x) / (n * (n - 1));
        if (n % 4 == 3)
            sum -= term;
        else
            sum += term;
        n = n + 2;
    }
    return sum;
}

fabs() is the function for absolute value, and 0.0000000001 is the required precision. If my understanding is correct, the code stops when the value of the last calculated term is less than or equal to that precision threshold.

My deduction so far is that maybe the time complexity will depend on x^2/n^2? Or is it not deducible, because we don't know at which specific index the term becomes smaller than the precision threshold?

Math is not strong with me, but thankfully there are masters like you out there :)


There are 3 answers

Answered by chux - Reinstate Monica (BEST ANSWER)

What could be the time complexity of the sine function by Taylor's series?
Math is not strong with me

As an adjunct to LutzL's fine analysis, a sanity check and a visual from simply testing the code sometimes provide insight into the time complexity.

#include <stdio.h> /* printf(), fflush() */
#include <time.h>  /* clock(), clock_t */

#define N 1000000

/* Time N calls of sine(x) and return the elapsed clock ticks. */
clock_t test_sine(double x) {
  clock_t c1 = clock();
  for (int i = 0; i < N; i++) {
    sine(x);
  }
  clock_t c2 = clock();
  return c2 - c1;
}

/* Sweep x linearly over [0, 70] and print the timing for each value. */
void test_sines(void) {
  double x0 = 0;
  double x1 = 70;
  double dx = 0.2;
  for (double x = x0; x <= x1; x += dx) {
    printf("%f \t %d\n", x, (int) test_sine(x));
    fflush(stdout);
  }
}

int main(void) {
  test_sines();
}

Sample output

0   16
0.2 78
0.4 93
0.6 94
0.8 94
1   124
...

A graph of this is fairly linear once x > 3.

[Plot: time of N sine(x) calls vs. x; roughly linear for x > 3]

Looking closer, we see a step in the time at every power, and the trend looks very close to t = pow(x, 1.0/3):

/* Same sweep on a log scale for small x; loop body as above (pow() needs <math.h>). */
void test_sines(void) {
  double x0 = 0.0000000001 / 2;
  double x1 = 1;
  double dx = pow(2, 1.0 / 5);
  for (double x = x0; x <= x1; x *= dx) {
    printf("%f \t %d\n", x, (int) test_sine(x));
    fflush(stdout);
  }
}

[Plot: time vs. x on a log scale for small x; step-like, with trend near pow(x, 1.0/3)]


Using clock() is fairly crude, yet it provides a visual aid to the analysis.

With the x * x factor, I would have expected term to affect isgreater() about every sqrt(x), as that is what obliges another loop iteration.
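Timing can also be avoided altogether by counting loop iterations directly, which is noise-free. Below is a minimal sketch of that idea; sine_iterations() is a hypothetical instrumented copy of OP's loop, not part of the original answer.

#include <math.h>  /* fabs(), isgreater() */
#include <stdio.h>

/* Count how many times OP's while-loop would run for a given x. */
static int sine_iterations(double x) {
  int n = 3;
  double term = x;
  int count = 0;
  while (isgreater(fabs(term), 0.0000000001)) {
    term = (term * x * x) / (n * (n - 1));
    n = n + 2;
    count++;
  }
  return count;
}

int main(void) {
  for (double x = 1; x <= 64; x *= 2) {
    printf("x = %5.1f  iterations = %d\n", x, sine_iterations(x));
  }
}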

Conclusion: the time complexity is about pow(x, 1.0/d) (2.0 <= d <= 3.1) for small values and linear for large values.


Note that there are a number of issues with the quality of OP's sine() that make its result weak for many x. For many values x > 900, sine(x) was an infinite loop (plausibly because term eventually overflows to infinity, after which the loop condition never becomes false).

Answered by AudioBubble

In the real world, you would use the periodicity of the function and keep a bounded number of terms, hence O(1).
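For illustration, here is a minimal range-reduction sketch; it assumes OP's sine() from the question is in scope, sine_reduced() is a hypothetical name, and M_PI comes from <math.h> (POSIX, not strict ISO C).

#include <math.h> /* remainder(), M_PI */

double sine(double x); /* OP's Taylor-series function from the question */

/* Reduce the argument into [-pi, pi] using periodicity, then evaluate
   the series there; the loop then runs a bounded number of times. */
double sine_reduced(double x) {
  double r = remainder(x, 2 * M_PI); /* r lies in [-pi, pi] */
  return sine(r);
}

Note that 2 * M_PI is only an approximation of the true period, so for very large x the reduction itself loses accuracy; production libm implementations use a much more careful argument reduction.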

In the theoretical world, you need larger and larger numbers because of the exponential growth of x^n, and the complexity of multiplication comes into play. As explained by LutzL, the number of terms must exceed x, and the global complexity must be something like O(x M(log(x))), where M(k) denotes the cost of multiplying k-digit numbers (O(k^2) for the naive implementation, O(k log k log log k) for more efficient ones).

Answered by Lutz Lehmann

It seems rather straightforward that the termination condition is

abs(x^n/n!) < eps.

What is not as straightforward is to solve this for n. Even using the Stirling approximation n! ~ sqrt(2*pi*n) * (n/e)^n, you would still have to solve (approximately)

abs(e*x/n)^n < eps
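There is no closed form, but the smallest such n is easy to find numerically. Here is a quick sketch; smallest_n() is a hypothetical helper that reuses the recurrence from OP's loop, and the roughly linear growth in x shows up directly in the output.

#include <math.h>  /* fabs() */
#include <stdio.h>

/* Smallest odd exponent n with x^n/n! < eps, found by iterating
   term(n+2) = term(n) * x*x / ((n+1)*(n+2)), starting from term(1) = x. */
static int smallest_n(double x, double eps) {
  double term = x; /* x^1 / 1! */
  int n = 1;
  while (fabs(term) >= eps) {
    term = term * x * x / ((n + 1) * (n + 2));
    n += 2;
  }
  return n;
}

int main(void) {
  for (double x = 1; x <= 256; x *= 2) {
    printf("x = %6.1f  n = %d\n", x, smallest_n(x, 1e-10));
  }
}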

You could make some simplifying assumptions to get upper bounds, such as n > 6*abs(x), so that you would then have to solve

(e/6)^n < eps  =>  n > log(eps)/log(e/6).
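For example, with eps = 1e-10 this gives n > log(1e-10)/log(e/6) ≈ (-23.03)/(-0.79) ≈ 29 (in natural logarithms), so for any x that is not tiny the binding constraint is n > 6*abs(x), i.e. n = O(abs(x)).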

However, long before this n = O(abs(x)) really kicks in, your calculation gets murdered by cancellation errors, as the term where n = round(abs(x)) has size about e^n, and the surrounding terms have to reduce that back to an absolute value less than 1. At x = 35 this means you lose all digits of the sum to cancellation: e^35 ≈ 1.6e15, while a double only carries about 16 significant digits.
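A quick way to see this in practice is to compare the series result against libm's sin(); the following sketch assumes OP's sine() from the question is linked in. Around x = 35 the two results should no longer share any significant digits.

#include <math.h>
#include <stdio.h>

double sine(double x); /* OP's Taylor-series function from the question */

/* Expose the catastrophic cancellation for moderately large x. */
int main(void) {
  for (double x = 5; x <= 35; x += 10) {
    printf("x = %4.1f  sine(x) = %+.16e  sin(x) = %+.16e\n",
           x, sine(x), sin(x));
  }
}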