I have found the parameters that give the best fit to the data using chi squared and SciPy's fmin function. I am now trying to get the uncertainties on these parameters by finding the values on the chi squared = chi squared min + 1 contour, and I know how to do this for uncorrelated parameters. I have an idea of how to do it for correlated parameters, but it seems like it will take a lot of code, so I was wondering if there is something simple that I'm missing?
import numpy as np
from scipy import optimize

def polynomial(x_value, parameters):
    a = parameters[0]
    b = parameters[1]
    return (a + b) * x_value**2 + (a - b) * x_value - 2.6

def calculate_chi_squared(data, prediction):
    # data columns: x, y, uncertainty on y
    return np.sum((prediction - data[:, 1])**2 / data[:, 2]**2)

data = np.genfromtxt('polynomial_data_2.csv', delimiter=',')
chi_squared = lambda parameters: calculate_chi_squared(data, polynomial(data[:, 0], parameters))
result = optimize.fmin(chi_squared, INITIAL_GUESS_2, full_output=True)  # INITIAL_GUESS_2 defined elsewhere
optimised_parameters = result[0]
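For concreteness, a brute-force version of the chi squared = chi squared min + 1 scan for correlated parameters might look something like the sketch below (the grid bounds and resolution are arbitrary placeholders, not part of the original code):

# Evaluate chi squared on a grid around the best-fit parameters
a_values = np.linspace(optimised_parameters[0] - 1, optimised_parameters[0] + 1, 200)
b_values = np.linspace(optimised_parameters[1] - 1, optimised_parameters[1] + 1, 200)
chi_squared_grid = np.array([[chi_squared([a, b]) for b in b_values] for a in a_values])
chi_squared_min = chi_squared_grid.min()

# Points inside the chi squared <= chi squared min + 1 region; the half-range of
# a and b over that region gives the uncertainties including the correlation.
inside = chi_squared_grid <= chi_squared_min + 1
a_inside = a_values[inside.any(axis=1)]
b_inside = b_values[inside.any(axis=0)]
a_uncertainty = (a_inside.max() - a_inside.min()) / 2
b_uncertainty = (b_inside.max() - b_inside.min()) / 2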
If you have two variables x, y and uncertainties on y, you can do this directly with curve_fit.

Create a synthetic dataset:
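For example (the true parameter values and noise level below are placeholders for illustration; the model reuses the functional form from the question):

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # same functional form as the polynomial in the question
    return (a + b) * x**2 + (a - b) * x - 2.6

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 50)
y_err = np.full(x.size, 0.5)                   # known 1-sigma uncertainties on y
y = model(x, 1.2, 0.7) + rng.normal(0, y_err)  # "true" values a = 1.2, b = 0.7 plus noise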
Fit the parameters and get the parameters' covariance matrix:
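Continuing the sketch above:

popt, pcov = curve_fit(model, x, y, sigma=y_err, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties on a and b
print(popt, perr)
print(pcov)                    # the off-diagonal elements carry the a-b correlation

With absolute_sigma=True the y uncertainties are treated as absolute standard deviations, so the covariance matrix is not rescaled and np.sqrt(np.diag(pcov)) gives the 1-sigma parameter uncertainties directly.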
You can find details in the curve_fit documentation for the sigma and absolute_sigma switches.