Dynamic Allocation for Spark Streaming


I have a Spark Streaming job running on our cluster alongside other jobs (Spark core jobs). I want to use Dynamic Resource Allocation for these jobs, including the Spark Streaming one. According to the JIRA issue below, Dynamic Allocation is not supported for Spark Streaming (in version 1.6.1), but it is fixed in 2.0.0.

JIRA link

According to the PDF attached to that issue, there should be a configuration field called spark.streaming.dynamicAllocation.enabled=true, but I don't see this configuration in the documentation.

Can anyone please confirm:

  1. Can I enable dynamic resource allocation for Spark Streaming in version 1.6.1?
  2. Is it available in Spark 2.0.0? If yes, which configuration should be set: spark.streaming.dynamicAllocation.enabled=true or spark.dynamicAllocation.enabled=true?

1 Answer

Answered by mrsrinivas:

Can I enable Dynamic Resource Allocation for Spark Streaming in version 1.6.1?

Yes, you can enable dynamic allocation for any Spark application by setting spark.dynamicAllocation.enabled=true (a minimal sketch follows the list below). But there are a few issues with streaming applications (mentioned in SPARK-12133):

  1. Your executors may never be idle, since they run something every N seconds
  2. You should always have at least one receiver running
  3. The existing heuristics don't take into account the length of the batch queue
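
Here is a minimal sketch of enabling the generic mechanism for a streaming app; the app name, batch interval, and executor bounds are placeholders, and the external shuffle service must be running on the workers for dynamic allocation to work:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Generic dynamic allocation (available in 1.6.1); the bounds shown here are
// illustrative, not prescriptive.
val conf = new SparkConf()
  .setAppName("my-streaming-app")                    // placeholder name
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")      // external shuffle service is required
  .set("spark.dynamicAllocation.minExecutors", "2")  // example bounds
  .set("spark.dynamicAllocation.maxExecutors", "10")

val ssc = new StreamingContext(conf, Seconds(10))    // 10-second batches, as an example
```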

So a new property (spark.streaming.dynamicAllocation.enabled) was added in Spark 2.0 for streaming apps alone.

Is it available in Spark 2.0.0? If yes, which configuration should be set: spark.streaming.dynamicAllocation.enabled or spark.dynamicAllocation.enabled?

It must be spark.streaming.dynamicAllocation.enabled if the application is a streaming one; otherwise go ahead with spark.dynamicAllocation.enabled.
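
As a sketch of what that looks like on Spark 2.0 (the app names and values are placeholders; as far as I can tell from the source, the streaming allocation manager also expects the core spark.dynamicAllocation.enabled flag to be off for the same application):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Streaming job on Spark 2.0: use the streaming-specific flag.
val streamingConf = new SparkConf()
  .setAppName("streaming-job")                                  // placeholder
  .set("spark.streaming.dynamicAllocation.enabled", "true")
  .set("spark.dynamicAllocation.enabled", "false")              // keep the core flag off here
val ssc = new StreamingContext(streamingConf, Seconds(10))

// Spark core (batch) job: go ahead with the regular flag.
val batchConf = new SparkConf()
  .setAppName("batch-job")                                      // placeholder
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")
```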

Edit: (as per comment on 2017-JAN-05)

This is not documented as of today, but the property and its implementation are present in the Spark source code: github: ExecutorAllocationManager.scala (unit tests: github: ExecutorAllocationManagerSuite.scala). This class was added in Spark 2.0; the implementation does not exist in Spark 1.6 and below.
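
Reading that class, these are the related knobs it looks up. The values below are examples, and the semantics in the comments are my reading of the 2.0 source rather than documented behaviour:

```scala
import org.apache.spark.SparkConf

// Tuning knobs read by streaming's ExecutorAllocationManager in the 2.0 source;
// example values only, semantics per my reading of that file (undocumented).
val conf = new SparkConf()
  .set("spark.streaming.dynamicAllocation.enabled", "true")
  .set("spark.streaming.dynamicAllocation.scalingInterval", "60")   // seconds between scaling decisions
  .set("spark.streaming.dynamicAllocation.scalingUpRatio", "0.9")   // request more executors when processing time / batch interval exceeds this
  .set("spark.streaming.dynamicAllocation.scalingDownRatio", "0.3") // release an executor when the ratio falls below this
  .set("spark.streaming.dynamicAllocation.minExecutors", "2")       // example lower bound
  .set("spark.streaming.dynamicAllocation.maxExecutors", "10")      // example upper bound
```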