How to efficiently select neighbour in 1-dimensional and n-dimensional space for Simulated Annealing


I would like to use Simulated Annealing to find a local minimum of a single-variable polynomial function within some predefined interval. I would also like to try to find the global minimum of a quadratic function.

A derivative-free algorithm such as this is not the best way to tackle the problem, so this is for study purposes only.

While the algorithm itself is pretty straightforward, I am not sure how to efficiently select a neighbour in single- or n-dimensional space.

Let's say that I am looking for the local minimum of the function 2*x^3 + x + 1 over the interval [-0.5, 30], and assume that the interval is reduced to tenths of each number, e.g. {1.1, 1.2, 1.3, ..., 29.9, 30}.

What I would like to achieve is a balance between random walk and speed of convergence from the starting point towards points with lower energy.

If I simply select a random number from the given interval every time, then there is no random walk and the algorithm might circle around. If, on the contrary, the next point is selected by simply adding or subtracting 0.1 with equal probability, then the algorithm might turn into an exhaustive search, depending on the starting point.
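For concreteness, the two naive strategies could be sketched like this (my own illustration, not from any answer; the interval bounds and the 0.1 step size are taken from the example above):

```java
import java.util.Random;

public class NaiveNeighbours {
    // Strategy 1: pick any point in the interval, ignoring the current one.
    // No locality, so there is no random walk at all.
    static double uniformNeighbour(double lo, double hi, Random r) {
        return lo + r.nextDouble() * (hi - lo);
    }

    // Strategy 2: step +/-0.1 from the current point with equal probability.
    // Purely local, so reaching a distant minimum can take very long.
    static double stepNeighbour(double x, double lo, double hi, Random r) {
        double next = x + (r.nextBoolean() ? 0.1 : -0.1);
        return Math.max(lo, Math.min(hi, next)); // clamp to the interval
    }

    public static void main(String[] args) {
        Random r = new Random();
        System.out.println(uniformNeighbour(-0.5, 30, r));
        System.out.println(stepNeighbour(1.5, -0.5, 30, r));
    }
}
```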

How should I efficiently balance Simulated Annealing neighbour search in single-dimensional and n-dimensional space?


There are 2 answers

tucuxi (best answer)

So you are trying to find an n-dimensional point P' that is "randomly" near another n-dimensional point P; for example, at distance T. (Since this is simulated annealing, I assume that you will be decrementing T once in a while).

This could work:

double[] displacement(double t, int dimension, Random r) {
    double[] d = new double[dimension];
    // each component is an independent Gaussian scaled by the temperature t
    for (int i = 0; i < dimension; i++) d[i] = r.nextGaussian() * t;
    return d;
}

The output is randomly distributed in all directions and centred on the origin (notice that r.nextDouble() would favour 45º angles and be centred at 0.5). You can vary the displacement by adjusting t as needed; roughly 95% of each component will fall within 2*t of the origin.
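That 95% figure can be sanity-checked with a quick Monte Carlo run (my own snippet, re-declaring the displacement helper so it is self-contained; the sample size and t value are arbitrary):

```java
import java.util.Random;

public class Check {
    static double[] displacement(double t, int dimension, Random r) {
        double[] d = new double[dimension];
        for (int i = 0; i < dimension; i++) d[i] = r.nextGaussian() * t;
        return d;
    }

    public static void main(String[] args) {
        Random r = new Random(1);
        double t = 0.5;
        int n = 100_000, within = 0;
        double sum = 0;
        for (int i = 0; i < n; i++) {
            double x = displacement(t, 1, r)[0];
            sum += x;
            if (Math.abs(x) <= 2 * t) within++;
        }
        // the sample mean is close to 0, and the within-2t fraction close to 0.954
        System.out.printf("mean=%.4f within2t=%.3f%n", sum / n, (double) within / n);
    }
}
```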

EDIT:

To generate a displaced point near a given one, you could modify it as

double[] displaced(double t, double[] p, Random r) {
    double[] d = new double[p.length];
    // shift each coordinate of p by an independent Gaussian scaled by t
    for (int i = 0; i < p.length; i++) d[i] = p[i] + r.nextGaussian() * t;
    return d;
}

You should use the same r for all calls: if you create a new Random() for each call, instances created in quick succession may end up with the same seed and keep producing the same displacements over and over.
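Putting the pieces together, here is a minimal annealing loop for the 1-D example from the question (my own sketch; the geometric cooling rate 0.95, the stopping temperature, and the rejection of out-of-interval proposals are arbitrary choices, not part of the answer):

```java
import java.util.Random;

public class Anneal {
    // the question's polynomial
    static double f(double x) { return 2 * x * x * x + x + 1; }

    // Gaussian displacement of a point, as in the answer above
    static double[] displaced(double t, double[] p, Random r) {
        double[] d = new double[p.length];
        for (int i = 0; i < p.length; i++) d[i] = p[i] + r.nextGaussian() * t;
        return d;
    }

    static double minimise(double lo, double hi, Random r) {
        double[] p = { lo + r.nextDouble() * (hi - lo) }; // random start
        double e = f(p[0]);
        // geometric cooling: t shrinks by 5% per proposal
        for (double t = (hi - lo) / 2; t > 1e-4; t *= 0.95) {
            double[] q = displaced(t, p, r);
            if (q[0] < lo || q[0] > hi) continue; // reject points outside the interval
            double e2 = f(q[0]);
            // Metropolis criterion: always accept downhill, occasionally uphill
            if (e2 < e || r.nextDouble() < Math.exp((e - e2) / t)) {
                p = q;
                e = e2;
            }
        }
        return p[0];
    }

    public static void main(String[] args) {
        System.out.println(minimise(-0.5, 30, new Random()));
    }
}
```

Since f'(x) = 6x^2 + 1 > 0 everywhere, f is strictly increasing on the interval, so a run should drift towards the left endpoint -0.5. Note how the same t controls both the neighbour distance and the acceptance probability, which is exactly the random-walk/convergence balance the question asks about.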

David Soroko

In "Numerical Recipes in C++" there is a chapter titled "Continuous Minimization by Simulated Annealing". In it we have:

A generator of random changes is inefficient if, when local downhill moves exist, it nevertheless almost always proposes an uphill move. A good generator, we think, should not become inefficient in narrow valleys; nor should it become more and more inefficient as convergence to a minimum is approached.

They then proceed to discuss a "downhill simplex method".