When running a PySpark job there is a significant launch overhead. Is it possible to run 'lightweight' jobs that don't use an external daemon (mainly for testing with small data sets)?
Is it possible to run Spark (specifically PySpark) in-process?
Update
My original answer is no longer accurate.
There is now the pysparkling project, which provides a pure-Python implementation of the Spark RDD API.
It is still an early version, but it lets you run your PySpark application in pure Python. YMMV, though - the Spark API evolves fast, and pysparkling might not have all of the latest API implemented.
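As a rough sketch of what that looks like (assuming pysparkling is installed and that its Context class still mirrors the SparkContext RDD API in the version you use):

```python
# Pure-Python RDD operations via pysparkling - no JVM is started.
from pysparkling import Context

sc = Context()
rdd = sc.parallelize([1, 2, 3, 4])
# Same map/filter/collect style as PySpark's RDD API.
print(rdd.map(lambda x: x * x).filter(lambda x: x > 4).collect())  # [9, 16]
```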
I would still use full-fledged PySpark for my tests - to make sure that my application works as it should on my target platform, which is Apache Spark.
Previous answer
No, there is no way to run Spark as a single Python process only. PySpark is just a thin API layer on top of Scala code, and that code has to run inside a JVM.
My company is a heavy user of PySpark and we run unit tests for Spark jobs continuously. There is not that much overhead when running Spark jobs in local mode. It does start a JVM, but it is an order of magnitude faster than our old tests for Pig code.
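For reference, a minimal local-mode run looks something like the sketch below (the app name and the toy job are just illustrative); no cluster or external daemon is involved beyond the local JVM:

```python
# Run a small PySpark job entirely in local mode, using two local threads.
from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster("local[2]").setAppName("small-test")
sc = SparkContext(conf=conf)
try:
    result = sc.parallelize(range(10)).map(lambda x: x * 2).sum()
    assert result == 90
finally:
    sc.stop()  # always release the local JVM
```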
If you have a lot of tasks to run (i.e. many unit tests) you can try to reuse the SparkContext - this reduces the time spent starting up the JVM for every test case. Keep in mind that in this case you need to clean up after every test case (e.g. unpersist any RDDs your program cached).
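One way to share a context across tests is a session-scoped fixture; the sketch below assumes pytest, and the fixture and test names are my own illustration, not part of the original answer:

```python
# Reuse one SparkContext for the whole test session, cleaning up cached
# RDDs inside each test so state does not leak between test cases.
import pytest
from pyspark import SparkContext

@pytest.fixture(scope="session")
def sc():
    context = SparkContext("local[2]", "shared-test-context")
    yield context
    context.stop()

def test_word_count(sc):
    rdd = sc.parallelize(["a b", "a"]).flatMap(str.split).cache()
    assert dict(rdd.countByValue()) == {"a": 2, "b": 1}
    rdd.unpersist()  # explicit cleanup because the context is shared
```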
At our company we decided to start a new SparkContext for every test case, to keep the tests isolated. Running Spark in local mode is fast enough for us, at least for now.
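That per-test-case setup can be sketched with unittest like this (the test class and assertions are hypothetical examples):

```python
# A fresh SparkContext per test case: created in setUp, stopped in tearDown.
import unittest
from pyspark import SparkContext

class MyJobTest(unittest.TestCase):
    def setUp(self):
        self.sc = SparkContext("local[2]", type(self).__name__)

    def tearDown(self):
        self.sc.stop()

    def test_sum(self):
        self.assertEqual(self.sc.parallelize([1, 2, 3]).sum(), 6)
```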