I'm trying to fit a set of points with error bars using the scipy.optimize function curve_fit.
The input file I read looks something like
y x dy_1 dy_2
0.64 45.1 6.65E-004 1.20E-002
0.72 38.3 1.81E-004 3.93E-002
0.62 46.3 1.20E-004 6.65E-002
0.71 39.1 3.93E-004 6.65E-002
0.76 33.2 6.65E-004 1.81E-002
0.69 39.4 5.86E-003 0.58
0.76 33.5 6.65E-004 6.65E-002
0.79 28.5 1.27E-002 1.27
where dy_1 and dy_2 are two different uncertainties for y. The code I'm using to fit the data looks like
#!/usr/bin/env python
import numpy as np
from scipy.optimize import curve_fit

## READ INPUT ##
# Skip the header line; unpack gives one array per column: y, x, dy_1, dy_2
ydata, xdata, dy_1, dy_2 = np.loadtxt('file.dat', skiprows=1, unpack=True)
delta_y = dy_1  # or dy_2

## FIT ##
print 'Fit with f(x)=ax+b'

def fit(x, a, b):
    return a*x + b

popt, pcov = curve_fit(fit, xdata, ydata, p0=(0.0, 0.0), sigma=delta_y)
a, b = popt
err_a, err_b = np.sqrt(np.diag(pcov))
print 'a=', a, '+/-', err_a, ' (', abs(err_a/a*100.0), '% )'
print 'b=', b, '+/-', err_b, ' (', abs(err_b/b*100.0), '% )'
The output of a fit without the error bars (i.e. without passing sigma) reads
a= -0.00964422694853 +/- 0.000497582672356 ( 5.15938369153 % )
b= 1.07794220116 +/- 0.0190839336114 ( 1.77040416368 % )
I expected the result with the tiny error bars to deviate only slightly from this. My problem is not only that the difference between the outputs is big, but also that I get exactly the same output whether I use the tiny error bars or the big ones:
a= -0.0115247039688 +/- 0.000341896188408 ( 2.96663748877 % )
b= 1.15636325135 +/- 0.0148255830134 ( 1.28208700823 % )
It seems that my program is insensitive to the magnitude of the errors. Is that normal? Did I make a mistake somewhere?
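To show concretely what I mean by "insensitive", here is a minimal self-contained sketch with synthetic data (the values are made up purely for illustration): scaling the error bars by a constant factor of 100 leaves both the fitted parameters and the reported errors unchanged on my setup.

#!/usr/bin/env python
import numpy as np
from scipy.optimize import curve_fit

def fit(x, a, b):
    return a*x + b

# Synthetic data roughly along a line (hypothetical values, fixed seed)
np.random.seed(0)
xdata = np.linspace(25.0, 50.0, 8)
ydata = -0.01*xdata + 1.1 + 0.01*np.random.randn(xdata.size)
sigma = 1e-3*np.ones(xdata.size)

# Same fit twice, with the error bars blown up by a factor of 100
popt1, pcov1 = curve_fit(fit, xdata, ydata, p0=(0.0, 0.0), sigma=sigma)
popt2, pcov2 = curve_fit(fit, xdata, ydata, p0=(0.0, 0.0), sigma=100.0*sigma)

print 'popt1 =', popt1, 'perr1 =', np.sqrt(np.diag(pcov1))
print 'popt2 =', popt2, 'perr2 =', np.sqrt(np.diag(pcov2))

Both print lines give me the same parameters and the same errors, which is exactly the behaviour I see with my real data.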
I asked a very similar question about gnuplot in Wrong fit with error bars in Gnuplot, where I got the same results and ran into the same problem; some plots are also shown there.