I have the following problem.
I have a finite set L of points in an n-dimensional space, and I have a model.
I want to find the point x, in a specific search space, which maximizes the distance to the set of points d(x, L).
Here the distance d(x, L) is the minimum Manhattan distance from x to any of the points σ in L, i.e.:
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js?config=TeX-AMS_HTML-full"></script>
$$d(x, L) = \min_{\sigma \in L} d(x, \sigma) = \min_{\sigma \in L} \sum_{i = 1}^n |x_i - \sigma_i|$$
Therefore, the final optimization problem will be:
$$\max_{x \in \text{SearchSpace}} \min_{\sigma \in L} \sum_{i=1}^n |x_i - \sigma_i|$$
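For concreteness, the inner distance d(x, L) can be evaluated directly in plain Python (the helper name below is mine, not part of the question):

```python
def dist_to_set(x, L):
    """Minimum Manhattan distance from point x to any point sigma in L."""
    return min(sum(abs(xi - si) for xi, si in zip(x, sigma)) for sigma in L)

# For x = (0, 0) the closest point of L is (1, 2), at Manhattan distance 3.
print(dist_to_set((0, 0), [(1, 2), (3, 4), (5, 6)]))  # 3
```

The optimization problem then asks for the x in the search space that makes this value as large as possible.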
Now, I am trying to solve the problem as a linear program with the PuLP library in Python, but I am running into problems with the absolute value.
I have tried several approaches, following what is described in sum of absolute values constraint in semi definite programming, but none has worked. This is what I have so far:
import pulp as plp

# Define the dimensionality of the problem
n = len(model)  # dimension of the space
m = len(L)      # number of points in L

# Create the LP problem
prob = plp.LpProblem("Maximize minimal manhattan distance", plp.LpMaximize)

# Define the decision variables
x = plp.LpVariable.dicts("x", range(n), lowBound=0, cat='Continuous')

# Define the variable to maximize
z = plp.LpVariable("z", lowBound=0, cat='Continuous')
print(x, z)

# Define the constraints for x, i.e. the search space constraints
for i in range(n):
    if i == 0:
        prob += x[i] >= model[i][0]
        prob += x[i] <= model[i][1]
    else:
        prob += x[i] >= model[i][0] + x[i-1]
        prob += x[i] <= model[i][1] + x[i-1]

# Define the constraints for the absolute value of the difference between x and each point in L
# first way, still not working
for j in range(m):  # for every sigma in L
    #prob += z <= plp.lpSum([abs(x[i] - L[j][i]) for i in range(n)])
    diff = plp.LpVariable.dicts("diff_" + str(j), range(n), lowBound=0, cat='Continuous')
    #y = plp.LpVariable.dicts("y_var" + str(j), range(n), lowBound=0, cat='Continuous')
    #w = plp.LpVariable.dicts("w_var" + str(j), range(n), lowBound=0, cat='Continuous')
    diff_plus = plp.LpVariable.dicts("diff_plus" + str(j), range(n), lowBound=0, cat='Continuous')
    diff_minus = plp.LpVariable.dicts("diff_minus" + str(j), range(n), lowBound=0, cat='Continuous')
    for i in range(n):
        #print(x[i])
        prob += diff[i] == L[j][i] - x[i]
        prob += diff[i] == diff_plus[i] - diff_minus[i]
        prob += diff_plus[i] >= 0
        prob += diff_minus[i] >= 0
    prob += z <= plp.lpSum([diff_plus[i] + diff_minus[i] for i in range(n)])

print(prob)

# Define the objective function
prob += z

# Solve the problem
prob.solve()

# Print the results
print("Status:", plp.LpStatus[prob.status])
print("Objective value:", plp.value(prob.objective))
print("z:", plp.value(z))
optimal_point = [plp.value(x[i]) for i in range(n)]
print('Optimal Point:', optimal_point)
The problematic part is the following:
# Define the constraints for the absolute value of the difference between x and each point in L
for j in range(m):  # for every sigma in L
    #prob += z <= plp.lpSum([abs(x[i] - L[j][i]) for i in range(n)])
    diff = plp.LpVariable.dicts("diff_" + str(j), range(n), lowBound=0, cat='Continuous')
    #y = plp.LpVariable.dicts("y_var" + str(j), range(n), lowBound=0, cat='Continuous')
    #w = plp.LpVariable.dicts("w_var" + str(j), range(n), lowBound=0, cat='Continuous')
    diff_plus = plp.LpVariable.dicts("diff_plus" + str(j), range(n), lowBound=0, cat='Continuous')
    diff_minus = plp.LpVariable.dicts("diff_minus" + str(j), range(n), lowBound=0, cat='Continuous')
    for i in range(n):
        #print(x[i])
        prob += diff[i] == L[j][i] - x[i]
        prob += diff[i] == diff_plus[i] - diff_minus[i]
        prob += diff_plus[i] >= 0
        prob += diff_minus[i] >= 0
    prob += z <= plp.lpSum([diff_plus[i] + diff_minus[i] for i in range(n)])
because, since this is a maximin problem, I should first introduce the new variable z and then express the absolute values as constraints, which apparently I am not able to do. (I suspect the diff_plus/diff_minus split is not enough here: z is only bounded above by the sum diff_plus[i] + diff_minus[i], and nothing forces one of the two variables to zero, so the solver can inflate both and make z arbitrarily large.)
How should I solve it? Thanks!
Without an example of your input data to test your code, I can't guarantee this solution will work for your problem:
Basically, I've created a function called add_abs that returns a variable representing the absolute value of some pulp.LpVariable or pulp.LpAffineExpression. Then I used this function to define the absolute difference of L[j][i] - x[i] from your original code.
I've then tested the code using the following "dummy" values for model and L:

model = [(0, 10), (0, 10)]
L = [(1, 2), (3, 4), (5, 6)]
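The answer's actual add_abs implementation is not shown above; the following is a minimal sketch of such a helper (the signature and the big_m parameter are my assumptions, not the answer's original code). It uses a binary switch plus big-M upper bounds so that the returned variable equals |expr| exactly, even when the objective pushes it upward, which is what the plain diff_plus/diff_minus split cannot guarantee in a maximization:

```python
import pulp as plp

def add_abs(prob, expr, big_m, name):
    """Return an LpVariable constrained to equal |expr|.

    big_m must be an upper bound on |expr| over the feasible region.
    The two lower bounds give abs_var >= |expr|; the binary variable
    'side' activates exactly one of the two upper bounds, pinning
    abs_var <= |expr| as well, so abs_var == |expr| at any solution.
    """
    abs_var = plp.LpVariable(name, lowBound=0)
    side = plp.LpVariable(name + "_side", cat="Binary")  # 1 when expr >= 0
    prob += abs_var >= expr
    prob += abs_var >= -expr
    prob += abs_var <= expr + big_m * (1 - side)
    prob += abs_var <= -expr + big_m * side
    return abs_var

# Tiny check: with x fixed to 1, |x - 3| must come out as 2 even though
# the objective tries to maximize the abs variable.
prob = plp.LpProblem("abs_demo", plp.LpMaximize)
x = plp.LpVariable("x")
prob += x == 1
a = add_abs(prob, x - 3, big_m=100, name="a")
prob += a  # objective: maximize a
prob.solve(plp.PULP_CBC_CMD(msg=0))
print(plp.value(a))  # 2.0
```

Note that the binary variables make the problem a MILP rather than an LP, which is unavoidable here: the constraint z <= sum of absolute values describes a non-convex region, so a pure LP formulation cannot capture it.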