How do I persist dask DAGs on a distributed cluster across multiple calls and keep intermediate results?


I am trying to submit a dask DAG across several calls to the distributed client, but I am unable to persist the intermediate results on the cluster. Could you point out how I should go about this?

from distributed import Client
c = Client()


dsk0 = {'a': 1, 'b': (lambda x: 2*x, 'a')}
keys0 = ['a', 'b']
futures0 = c._graph_to_futures(dsk0, keys0)
fb = futures0['b']
b = fb.result()  # Correctly yields 2

dsk1 = {'c': (lambda x: 3*x, 'a')}
keys1 = ['c']
futures1 = c._graph_to_futures(dsk1, keys1)
fc = futures1['c']
result = fc.result()  # Yields 'aaa', instead of 3

Thanks in advance!

Markus


1 Answer

MRocklin (best answer):

I recommend using dask.delayed and the client.compute method:

from dask import delayed
from distributed import Client
client = Client()

a = delayed(1)
b = delayed(lambda x: 2 * x)(a)

a_future, b_future = client.compute([a, b])

>>> b_future.result()
2
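
Note that client.compute submits the graph to the scheduler and returns futures immediately; calling .result() blocks until the value is available. As long as you keep a reference to a_future, the intermediate result for a stays in the cluster's memory.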

c = delayed(lambda x: 3 * x)(a_future)
c_future = client.compute(c)

>>> c_future.result()
3
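
This gives you the persistence across calls that you are after: a result stays on the cluster for as long as some future refers to it, and passing a_future into the second delayed call reuses the already-computed value of a instead of recomputing it. Once the last future for a key is garbage collected, the scheduler is free to release that result.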

Internal functions that deal with graphs directly, like _graph_to_futures, are more error-prone and are generally meant for internal use. In your second graph, 'a' is not a key of dsk1, so it is most likely passed through as the literal string 'a', which is why 3 * x evaluates to 'aaa'.
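
If you would rather work future-by-future while staying on the public API, the same pattern can be written with client.scatter and client.submit. This is a sketch rather than part of the original answer, and the variable names are illustrative:

from distributed import Client

client = Client()

# Place the value 1 in cluster memory and get a future pointing at it
a_future = client.scatter(1)

# submit accepts futures as arguments; the scheduler resolves them
# to the data already held on the workers, without recomputation
b_future = client.submit(lambda x: 2 * x, a_future)
c_future = client.submit(lambda x: 3 * x, a_future)

>>> b_future.result()
2
>>> c_future.result()
3

Either way, the rule is the same: data persists on the cluster while at least one future references it.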