curve_fit with a catenary returns huge parameters, for no apparent reason


I took a picture of a chain and sampled the position of every single ring of the chain in pixels, in order to get the best-fit parameters for a catenary of model a*cosh((x-x0)/a) + c, where a, c and x0 are the parameters I'm looking for.

When plotting my sampled points onto the original picture they coincide with the chain, but the parameters I get are many thousand times what I'd expect.

from matplotlib import pyplot as plt
from scipy.optimize import curve_fit
import numpy as np

fname = "C:/Users/marti/OneDrive/Desktop/Pier/Catenaria.txt"  # directory and file name
plt.xlabel("x [pixels]")
plt.ylabel("y [pixels]")
plt.imshow(img)  # img: the original photograph of the chain (loaded elsewhere, not shown)

x, y = np.loadtxt(fname, unpack=True)

sigma_y = 3.
plt.errorbar(x, y, sigma_y, fmt=".")

def cat(x, a, c, x0):
    return c + a * np.cosh((x - x0) / a)

popt, pcov = curve_fit(cat, x, y, p0=(1800., 500., 2052.))
a_hat, c_hat, x0_hat = popt
sigma_a, sigma_c, sigma_x0 = np.sqrt(pcov.diagonal())
print(a_hat, sigma_a, c_hat, sigma_c, x0_hat, sigma_x0)
plt.plot(x, cat(x, a_hat, c_hat, x0_hat))

plt.show()

This is the code and the following are the sampled points:

396.6   1055.3
409.4   1079.2
418.3   1103.7
432.1   1129.2
439.4   1157.0
453.2   1180.9
466.0   1204.8
478.2   1229.8
489.3   1251.4
501.6   1277.0
512.7   1303.6
527.7   1328.1
538.8   1354.7
552.1   1378.1
563.2   1404.7
573.2   1424.2
590.4   1450.3
605.4   1478.6
627.7   1515.3
638.8   1536.9
654.3   1563.6
668.2   1587.5
680.4   1610.8
696.0   1636.9
711.0   1659.7
724.3   1681.9
739.9   1706.3
754.8   1728.6
769.3   1752.4
784.3   1775.2
798.7   1798.0
814.8   1819.6
828.2   1842.4
844.8   1862.4
862.6   1885.7
878.2   1908.5
892.0   1930.2
910.9   1953.0
927.6   1973.0
943.1   1996.3
959.8   2016.3
976.5   2036.3
995.9   2057.9
1012.0   2077.4
1029.2   2095.7
1045.3   2115.7
1067.6   2140.7
1088.7   2160.7
1106.4   2181.3
1126.4   2201.8
1149.2   2222.4
1165.3   2239.0
1187.0   2257.3
1204.8   2272.9
1227.0   2290.7
1247.0   2310.7
1270.9   2327.3
1286.4   2340.7
1305.8   2357.3
1329.7   2375.1
1353.6   2394.0
1379.7   2410.7
1398.1   2421.8
1420.3   2435.1
1442.5   2448.4
1466.9   2463.4
1488.6   2475.6
1513.0   2488.4
1537.5   2501.8
1553.6   2510.1
1578.6   2519.5
1606.9   2533.4
1630.8   2544.0
1658.0   2554.5
1680.2   2562.3
1709.1   2574.0
1740.2   2581.7
1766.9   2590.6
1794.6   2596.2
1819.6   2601.7
1844.6   2608.4
1873.0   2611.2
1901.3   2615.1
1925.7   2617.9
1954.1   2620.6
1982.4   2622.9
2006.3   2623.4
2036.8   2624.0
2061.3   2623.4
2086.8   2620.6
2117.4   2620.1
2141.2   2617.3
2167.9   2612.3
2194.6   2609.0
2221.2   2604.0
2250.1   2599.5
2276.8   2594.5
2301.2   2587.3
2333.4   2577.3
2371.2   2565.7
2399.5   2554.6
2422.3   2547.9
2447.8   2539.0
2474.5   2527.9
2497.8   2515.1
2522.8   2504.6
2548.9   2494.0
2573.4   2483.5
2593.9   2467.9
2620.0   2452.9
2647.2   2434.6
2671.1   2419.6
2688.9   2406.3
2710.0   2390.7
2734.4   2377.9
2755.6   2357.9
2777.2   2341.8
2800.5   2323.5
2822.8   2308.5
2844.4   2288.5
2866.6   2269.6
2886.6   2253.0
2908.9   2232.4
2930.5   2212.4
2950.0   2189.7
2969.4   2173.0
2989.4   2154.1
3004.4   2134.7
3023.8   2114.7
3043.3   2093.0
3059.4   2072.5
3076.6   2051.9
3093.8   2033.0
3111.0   2011.9
3128.8   1991.9
3146.0   1968.0
3166.6   1945.8
3182.7   1923.6
3197.1   1903.6
3219.3   1873.6
3239.9   1847.0
3255.5   1823.6
3272.7   1798.1
3291.0   1775.9
3303.2   1750.9
3320.4   1728.1
3334.3   1707.0
3350.4   1683.1
3366.5   1658.7
3378.2   1637.0
3394.3   1612.0
3406.0   1589.2
3420.4   1565.4
3433.2   1541.5
3448.2   1520.9
3462.1   1493.7
3476.0   1469.3
3490.4   1445.9
3505.4   1419.3
3514.3   1397.1
3529.3   1370.4
3541.5   1349.8
3555.4   1321.5
3566.5   1299.3
3578.2   1277.6
3591.5   1254.3
3607.0   1223.2
3617.0   1198.8
3630.4   1172.7
3643.2   1146.6
3654.3   1121.0
3663.1   1098.2
3674.3   1074.3
3687.6   1051.0

I tried decreasing the number of points and changing the initial values, but nothing changed.


There are 2 answers

JJacquelin

I tried with my own regression software. The fit is very good.

[figure: three-parameter catenary fit to the sampled points]

I cannot say why your calculation failed. I suspect that the "guessed" initial values of the parameters are too far from the correct values. In particular, the parameter a should be negative.
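As a quick check of that suspicion, here is a minimal sketch, assuming the asker's cat function and the x, y arrays are in scope; the exact guess values are my own, chosen only to respect the sign of a and the rough height of the chain:

# Same magnitudes as the asker's guess, but a negative a to match the
# downward-opening data (and c raised accordingly, since the vertex sits at c + a).
popt, pcov = curve_fit(cat, x, y, p0=(-1800., 3600., 2052.))
print(popt)  # should land near a ~ -1050, c ~ 3670, x0 ~ 2040 (see the other answer)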

IN ADDITION:

An even better fit would be obtained with the more general cosh function (below: four parameters instead of three).

In order to avoid the problems of "guessed" initial values and of convergence of the iterative method, the results below are obtained with a non-iterative method which doesn't require initial values.

[figure: four-parameter cosh fit and the resulting parameter values]

The general principle of this method is explained in https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales . Since the application to the cosh function isn't treated in the paper, the missing algorithm is shown below (very simple to code in any math software).

[figure: the algorithm for the four-parameter cosh regression]

Note: the calculation involves numerical integrations for Sk and SSk. This introduces additional deviation depending on the distribution of the points and on the scatter. In the present case the number of points is large and the scatter low, so the deviation due to numerical integration is negligible. Nevertheless, if an even more accurate fit is wanted, one can use any non-linear regression software, starting the iteration process from the very good values obtained above.
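Since the algorithm itself only appears in the figure above, here is a minimal sketch of the same integral-equation idea applied to the asker's three-parameter model y = c + a*cosh((x - x0)/a); the function name catenary_noniterative and the derivation details are my own, not taken from the paper. Because y'' = (y - c)/a**2, double numerical integration reduces the problem to two ordinary linear least-squares fits:

import numpy as np

def catenary_noniterative(x, y):
    """Non-iterative estimates of (a, c, x0); x must be sorted ascending."""
    t = x - x[0]
    # cumulative integrals S_k = int y dx and SS_k = int S dx (trapezoid rule)
    S = np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))))
    SS = np.concatenate(([0.0], np.cumsum(0.5 * (S[1:] + S[:-1]) * np.diff(x))))
    # integrating y'' = (y - c)/a**2 twice gives
    #   y = (1/a**2)*SS - (c/(2*a**2))*t**2 + (terms linear in t),
    # which is linear least squares in four coefficients:
    M = np.column_stack([SS, t**2, t, np.ones_like(t)])
    A, B, _, _ = np.linalg.lstsq(M, y, rcond=None)[0]
    a2 = 1.0 / A                    # a**2 (always positive)
    c = -2.0 * B * a2
    # the sign of a follows the concavity of the data (negative: opens downward)
    a = np.sign(np.polyfit(x, y, 2)[0]) * np.sqrt(a2)
    # with a and c fixed, y - c = P*cosh(x/a) + Q*sinh(x/a) is linear in P, Q,
    # and P = a*cosh(x0/a), Q = -a*sinh(x0/a), hence x0 = a*atanh(-Q/P)
    N = np.column_stack([np.cosh(x / a), np.sinh(x / a)])
    P, Q = np.linalg.lstsq(N, y - c, rcond=None)[0]
    x0 = a * np.arctanh(-Q / P)
    return a, c, x0

On data like the above this should land close to the values reported in the other answer (a ≈ -1050, c ≈ 3670, x0 ≈ 2040), and those estimates can then seed curve_fit for the final refinement, exactly as the note suggests.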

jlandercy

TL;DR

Just change the initial guess to respect the concavity of your data and you will find the global optimum.

Quick fix

For the dataset you provided:

import io
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from scipy import optimize

data = pd.read_fwf(io.StringIO("""396.6   1055.3
409.4   1079.2
...
3674.3   1074.3
3687.6   1051.0"""), header=None, names=["x0", "y"])
data["sy"] = 4.75  # Your estimation of sigma is probably too optimistic

def cat(x, a, c, x0):
    return c + a * np.cosh((x - x0) / a)

And use the following initial guess (remember that cosh has positive concavity, while your data look like a reversed cosh, which has negative concavity):

popt, pcov = optimize.curve_fit(
    cat, data.x0.values, data.y.values, sigma=data.sy.values,
    absolute_sigma=True, p0=(-1000., 3000., 2000.)
) # Changing initial guess to ensure correct concavity while keeping expected magnitude

The fitting procedure converges with fairly acceptable errors on the parameters:

# (array([-1050.29821705,  3669.27789371,  2037.82887594]),
#  array([[ 0.31722843, -0.07992235,  0.00233664],
#         [-0.07992235,  0.14980518, -0.00053077],
#         [ 0.00233664, -0.00053077,  0.07416564]]))

[figure: fitted catenary overlaid on the data points]

Error landscape

The intrinsic reason why your fit fails is that your model shapes an error landscape with multiple local minima, and from your initial guess the fitting procedure converges towards the wrong optimum.

The error landscape around the global optimum looks like:

[figure: error landscape around the global optimum (negative a)]

But there is also another local minimum on the positive side of a:

[figure: error landscape around the local minimum on the positive side of a]

This is the solution you found with your initial guess.
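For reference, here is a minimal sketch of how such a landscape can be rendered (my own illustration, not the code behind the figures above): sweep a and x0 on a grid, hold c at roughly its fitted value, and contour the sum of squared residuals. It reuses the data frame built earlier:

x, y = data.x0.values, data.y.values   # arrays from the dataframe built above
c_hat = 3669.3                         # c held fixed at its fitted value for this 2D slice
A, X0 = np.meshgrid(np.linspace(-2000., -400., 200),   # negative-a side only,
                    np.linspace(1600., 2400., 200))    # since a = 0 is singular
sse = np.empty_like(A)
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        r = y - (c_hat + A[i, j] * np.cosh((x - X0[i, j]) / A[i, j]))
        sse[i, j] = np.dot(r, r)       # sum of squared residuals at this (a, x0)
plt.contourf(A, X0, np.log10(sse), levels=30)
plt.colorbar(label="log10(SSE)")
plt.xlabel("a [pixels]")
plt.ylabel("x0 [pixels]")
plt.show()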

Normalizing

Fitting procedures are sensitive to scale, and normalizing is often needed to make the problem tractable. Your model, for instance, is easy to scale.

First, let's change your model to:

def cat(x, a, c, x0):
    return c - a * np.cosh((x - x0) / a)  # sign flipped so that a stays positive

Then we scale the data by an arbitrary factor k in order to bring the variables more or less close to unity (we choose this transformation because it makes the parameters easy to recover afterwards):

k = 2000
data /= k  # scales the x0, y and sy columns alike

The fit now works directly from the default initial guess (1, 1, 1):

popt, pcov = optimize.curve_fit(
    cat, data.x0.values, data.y.values, sigma=data.sy.values,
    absolute_sigma=True
) 

The problem is now sufficiently well normalized to be solved as is. The scaled parameters are about:

# (array([0.52514911, 1.83463895, 1.01891444]),
#  array([[ 7.93071474e-08,  1.99806113e-08, -5.84150321e-10],
#         [ 1.99806113e-08,  3.74513037e-08, -1.32687392e-10],
#         [-5.84150321e-10, -1.32687392e-10,  1.85414148e-08]]))

The original parameters can be recovered by applying the inverse transformation:

popt * k
# array([1050.29821688, 3669.27789367, 2037.82887592])
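As a final sanity check (my own addition, reusing cat, data and k from above), the rescaled parameters can be plugged back into the model and overlaid on the data in the original pixel units; note the dataframe must first be multiplied back by k:

a, c, x0 = popt * k          # parameters back in pixel units
data *= k                    # undo the scaling of the dataframe
plt.errorbar(data.x0, data.y, data.sy, fmt=".", label="sampled rings")
plt.plot(data.x0, cat(data.x0.values, a, c, x0), label="fitted catenary")
plt.xlabel("x [pixels]")
plt.ylabel("y [pixels]")
plt.legend()
plt.show()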