I'm having trouble understanding what is wrong with the following piece of code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.odr import *
def gauss(p, x):
    return p[0]*np.exp(-(x-p[1])**2/(2*p[2]**2)+p[4]) + p[3]
# Create a model for fitting.
gg = Model(gauss)
x = np.arange(0, 350)
# Create a RealData object using our initiated data from above.
data = RealData(x, y_data, sx=0, sy=y_data_err)
# Set up ODR with the model and data.
odr = ODR(data, gg, beta0=[0.1, 1., 1.0, 1.0, 1.0])
# Run the regression.
out = odr.run()
# Use the in-built pprint method to give us results.
out.pprint()
x_fit = np.linspace(x[0], x[-1], 1000)
y_fit = gauss(out.beta, x_fit)
plt.figure()
plt.errorbar(x, y_data, xerr=0, yerr=y_data_err, linestyle='None', marker='x')
plt.plot(x_fit, y_fit)
plt.show()
This was copied straight from here, with only the model changed. The error that I get is
scipy.odr.odrpack.odr_error: number of observations do not match
But as far as I can tell, beta0 has five parameters, which is exactly as many as gauss needs to work. It would be great if someone could point out the source of the error or my misconception.
Here is a graphing fitter with your equation, comparing both ODR and curve_fit on one graph. The example uses scipy's differential_evolution genetic algorithm module to determine initial parameter estimates for the solvers; that module implements the Latin Hypercube algorithm to ensure a thorough search of parameter space, which requires bounds within which to search. In this example, those bounds are taken from the data maximum and minimum values. As your post did not include data, I have used my own test data in the example. In this example the two fitted curves look very similar, diverging slightly at the plotted extremes.
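Below is a minimal sketch of that approach, not your exact script: it uses synthetic test data and a standard four-parameter Gaussian standing in for your model (your data and fifth parameter are not included), with differential_evolution searching within bounds taken from the data extremes and its result seeding both ODR and curve_fit.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import differential_evolution, curve_fit
from scipy.odr import Model, RealData, ODR

def gauss(p, x):
    # amplitude, centre, width, offset -- ODR's calling convention (parameters first)
    return p[0] * np.exp(-(x - p[1])**2 / (2.0 * p[2]**2)) + p[3]

def gauss_cf(x, a, mu, sigma, offset):
    # same model with the call signature curve_fit expects (x first)
    return gauss([a, mu, sigma, offset], x)

# Synthetic test data (stand-in for the y_data / y_data_err of the question).
rng = np.random.default_rng(1)
x = np.arange(0, 350, dtype=float)
y_true = gauss([5.0, 175.0, 30.0, 1.0], x)
y_err = np.full_like(x, 0.2)
y = y_true + rng.normal(0.0, 0.2, size=x.size)

# Sum-of-squared-error objective minimised by the genetic algorithm.
def sse(p):
    return np.sum((y - gauss(p, x))**2)

# Search bounds taken from the data extremes, as described above.
bounds = [
    (0.0, y.max() - y.min()),   # amplitude
    (x.min(), x.max()),         # centre
    (1e-3, x.max() - x.min()),  # width
    (y.min(), y.max()),         # offset
]
p0 = differential_evolution(sse, bounds, seed=3).x

# Fit with ODR and with curve_fit, both seeded by the same estimates.
odr_out = ODR(RealData(x, y, sy=y_err), Model(gauss), beta0=p0).run()
cf_popt, _ = curve_fit(gauss_cf, x, y, p0=p0, sigma=y_err)

x_fit = np.linspace(x[0], x[-1], 1000)
plt.errorbar(x, y, yerr=y_err, linestyle='None', marker='x', label='data')
plt.plot(x_fit, gauss(odr_out.beta, x_fit), label='ODR')
plt.plot(x_fit, gauss(cf_popt, x_fit), '--', label='curve_fit')
plt.legend()
plt.show()
Note that the ODR model takes gauss(p, x) while curve_fit wants gauss_cf(x, a, mu, sigma, offset); the small wrapper keeps the two fits using the same underlying function.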