What are the different runtimes in MLRun?

I'm trying to get a feel for how MLRun executes my Python code. What different runtimes are supported and why would I use one vs the other?

1 Answer

Nick Schenone (Best answer)

MLRun has several different ways to run a piece of code. At this time, the following runtimes are supported:

  • Batch runtimes
    • local - execute a Python or shell program in your local environment (e.g., Jupyter, an IDE)
    • job - run the code in a Kubernetes Pod
    • dask - run the code as a Dask Distributed job (over Kubernetes)
    • mpijob - run distributed jobs and Horovod over the MPI job operator, used mainly for deep learning jobs
    • spark - run the job as a Spark job (using Spark Kubernetes Operator)
    • remote-spark - run the job on a remote Spark service/cluster (e.g. Iguazio Spark service)
  • Real-time runtimes
    • nuclio - real-time serverless functions over Nuclio
    • serving - higher-level real-time graph (DAG) over one or more Nuclio functions

If you are interested in learning more about each runtime, see the documentation.