Solving multiple independent LPs in parallel in Python and a KeyError occurs


My name is Boyu. I am a college student and a newbie in Python and Gurobi. Currently, one step of my model requires solving 5 independent LPs. Each LP has the same number of variables and constraints; the only difference between them is the values of the coefficients, all of which are known before running the model.

First, I build the 5 LPs sequentially:

from gurobipy import *

a={1:2,2:2,3:8,4:7,5:3}
b={1:3,2:5,3:6,4:8,5:5}
c={1:4,2:2,3:3,4:5,5:7}
d={1:1,2:7,3:3,4:2,5:9}
object_val={}
x={}
y={}
z={}
m={}

for i in [1,2,3,4,5]:
    # Create a new model
    m[i] = Model()

    # Create variables
    x[i] = m[i].addVar(vtype=GRB.CONTINUOUS)
    y[i] = m[i].addVar(vtype=GRB.CONTINUOUS)
    z[i] = m[i].addVar(vtype=GRB.CONTINUOUS)

    # Set objective
    m[i].setObjective(x[i] + y[i] + 2 * z[i], GRB.MAXIMIZE)

    # Add constraint: x + a y + b z <= c
    m[i].addConstr(x[i] + a[i] * y[i] + b[i] * z[i] <= c[i])

    # Add constraint: x + y >= d
    m[i].addConstr(x[i] + y[i] >= d[i])

Second, I define the function that solves a single LP model and save it as "test.py":

def test(pair):
    # Optimize one model; pair is [index, model]
    m = pair[1]
    m.optimize()
    return m.objVal

Third, I create the input data for the function calls that will be solved in parallel:

inputs=[]
for i in [1,2,3,4,5]:
    inputs.append([i,m[i]])

Finally, I tried to use the "multiprocessing" package to solve these 5 LPs in parallel:

import test
import multiprocessing
if __name__ == '__main__':
    pool = multiprocessing.Pool(processes=4)
    pool.map(test.test, inputs)
    pool.close()
    pool.join()
    print('done')

However, an error occurs: "KeyError: '__getstate__'"

KeyError                                  Traceback (most recent call last)
<ipython-input-17-0b3639c06eb3> in <module>()
      1 if __name__ == '__main__':
      2     pool = multiprocessing.Pool(processes=4)
----> 3     pool.map(test.test, inputs)
      4     pool.close()
      5     pool.join()

C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py in map(self, func, iterable, chunksize)
    264         in a list that is returned.
    265         '''
--> 266         return self._map_async(func, iterable, mapstar, chunksize).get()
    267 
    268     def starmap(self, func, iterable, chunksize=None):

C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py in get(self, timeout)
    642             return self._value
    643         else:
--> 644             raise self._value
    645 
    646     def _set(self, i, obj):

C:\ProgramData\Anaconda3\lib\multiprocessing\pool.py in _handle_tasks(taskqueue, put, outqueue, pool, cache)
    422                         break
    423                     try:
--> 424                         put(task)
    425                     except Exception as e:
    426                         job, idx = task[:2]

C:\ProgramData\Anaconda3\lib\multiprocessing\connection.py in send(self, obj)
    204         self._check_closed()
    205         self._check_writable()
--> 206         self._send_bytes(_ForkingPickler.dumps(obj))
    207 
    208     def recv_bytes(self, maxlength=None):

C:\ProgramData\Anaconda3\lib\multiprocessing\reduction.py in dumps(cls, obj, protocol)
     49     def dumps(cls, obj, protocol=None):
     50         buf = io.BytesIO()
---> 51         cls(buf, protocol).dump(obj)
     52         return buf.getbuffer()
     53 

model.pxi in gurobipy.Model.__getattr__()

KeyError: '__getstate__'

Could anybody help me with this? I am a newbie to Gurobi and Python, and any help would be really appreciated.

Thanks.

Boyu


1 Answer

Answered by Anup Agarwal

You need to create a separate environment for each model instance. (The KeyError itself comes from pickling: multiprocessing has to serialize whatever it sends to a worker process, and a gurobipy Model cannot be pickled, which is why the traceback ends in Model.__getattr__ failing to find __getstate__. So each worker should build and own its model, each with its own environment, rather than receive one from the parent process.)

# Assuming: import gurobipy as gp

m[i] = gp.Model(env=gp.Env(""))
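
Below is a minimal sketch of how this advice can be combined with multiprocessing, under the assumption that only the (picklable) coefficient tuples are sent to each worker and the model is built inside the worker with its own Env. The helper name solve_lp, the OutputFlag setting, and the infeasibility guard are illustrative choices, not part of the original question:

import multiprocessing

import gurobipy as gp
from gurobipy import GRB

def solve_lp(coeffs):
    # Build and solve one LP entirely inside the worker process, with its
    # own environment, so no Gurobi object ever crosses a process boundary.
    a_i, b_i, c_i, d_i = coeffs
    env = gp.Env("")                 # separate environment per model
    model = gp.Model(env=env)
    model.Params.OutputFlag = 0      # silence per-worker solver logs
    x = model.addVar(vtype=GRB.CONTINUOUS)
    y = model.addVar(vtype=GRB.CONTINUOUS)
    z = model.addVar(vtype=GRB.CONTINUOUS)
    model.setObjective(x + y + 2 * z, GRB.MAXIMIZE)
    model.addConstr(x + a_i * y + b_i * z <= c_i)
    model.addConstr(x + y >= d_i)
    model.optimize()
    # Guard the objective lookup: an infeasible model has no objVal
    obj = model.objVal if model.Status == GRB.OPTIMAL else None
    model.dispose()
    env.dispose()
    return obj

if __name__ == '__main__':
    # (a[i], b[i], c[i], d[i]) for i = 1..5, taken from the question's data
    data = [(2, 3, 4, 1), (2, 5, 2, 7), (8, 6, 3, 3),
            (7, 8, 5, 2), (3, 5, 7, 9)]
    pool = multiprocessing.Pool(processes=4)
    print(pool.map(solve_lp, data))
    pool.close()
    pool.join()

Since only plain tuples cross the process boundary, nothing has to pickle a Model. (With this particular data the second and fifth LPs are infeasible, because x + y >= d cannot be satisfied under the first constraint, so those two entries come back as None.)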

For further reference:

https://groups.google.com/forum/#!topic/gurobi/_LztwSqj-14

https://www.gurobi.com/documentation/9.0/refman/py_env2.html