Penalty Function For Linear Optimization PuLP


Let's say I have 2 decision variables a and b, and 2 constants constant_a and constant_b. I want to maximize a and b, but add a penalty whenever a exceeds constant_a or b exceeds constant_b.

The penalty should be -0.05 * (a - constant_a) (and the analogous term for b).

How may I implement this in PuLP? Note that this is a simplified version of my actual linear optimization problem.

I tried elastic constraints, but that way I can only specify a fixed penalty value, not -0.05 * (decision variable - constant).
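
Put another way (my restatement of the intent, not wording from the original post), the objective being asked for is the piecewise-linear expression

maximize  a + b - 0.05*max(0, a - constant_a) - 0.05*max(0, b - constant_b)

which is exactly what the answer below linearizes using auxiliary penalty variables.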


There is 1 answer

Reinderien (accepted answer)

I will answer the immediate question of how to construct such a penalty, though in context I maintain that it probably still doesn't accomplish what the OP imagines it will.

import pulp

# Thresholds above which the penalties kick in
constant_a = 3
constant_b = 3

# Decision variables to be maximized (upper-bounded at 5)
a = pulp.LpVariable(name='a', upBound=5, cat=pulp.LpContinuous)
b = pulp.LpVariable(name='b', upBound=5, cat=pulp.LpContinuous)

# Penalty variables: zero when the decision variable is at or below its
# threshold, otherwise forced up by the constraints below
a_penalty = pulp.LpVariable(name='a_penalty', lowBound=0, cat=pulp.LpContinuous)
b_penalty = pulp.LpVariable(name='b_penalty', lowBound=0, cat=pulp.LpContinuous)

prob = pulp.LpProblem(name='penalty', sense=pulp.LpMinimize)
# Minimizing (penalties - a - b) is equivalent to maximizing (a + b - penalties)
prob.objective = a_penalty + b_penalty - a - b
# Each penalty must be at least 0.05 times the amount by which its variable
# exceeds the constant; minimization drives it down to exactly that
prob.addConstraint(name='a_pty_lower', constraint=a_penalty >= 0.05*(a - constant_a))
prob.addConstraint(name='b_pty_lower', constraint=b_penalty >= 0.05*(b - constant_b))
# Placeholder constraint so this toy problem has a non-trivial optimum
prob.addConstraint(name='fake', constraint=a + b == 7)

print(prob)
prob.solve()
assert prob.status == pulp.LpStatusOptimal
print(f'a={a.value()}  a_penalty={a_penalty.value()}')
print(f'b={b.value()}  b_penalty={b_penalty.value()}')
Output:

a=4.0  a_penalty=0.05
b=3.0  b_penalty=0.0

Each penalty variable has a lower bound of 0 (which it takes when its decision variable stays at or below the corresponding constant), no upper bound, and a constraint forcing it to be at least 0.05 times the amount by which the decision variable exceeds that constant. Because the objective is minimized, the solver pushes each penalty down to exactly that value.
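
As a quick sanity check of the printed solution: with a = 4 and constant_a = 3, the constraint requires a_penalty >= 0.05*(4 - 3) = 0.05, and minimization pins it at exactly that value; with b = 3, 0.05*(3 - 3) = 0, so b_penalty rests at its lower bound of 0.

If you prefer to keep the model phrased as a maximization, the same trick works with the sense flipped. This is a sketch of an equivalent formulation (my variant, not part of the original answer), reusing the variables defined above:

# Maximize the rewards minus the penalties directly; the penalty variables
# are still pushed down onto their lower-bounding constraints.
prob = pulp.LpProblem(name='penalty_max', sense=pulp.LpMaximize)
prob.objective = a + b - a_penalty - b_penalty
prob.addConstraint(name='a_pty_lower', constraint=a_penalty >= 0.05*(a - constant_a))
prob.addConstraint(name='b_pty_lower', constraint=b_penalty >= 0.05*(b - constant_b))
prob.addConstraint(name='fake', constraint=a + b == 7)
prob.solve()

This reaches the same optimal objective value; which of the equally good splits of a + b = 7 the solver reports may vary.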