We are planning to set up Apache Spark 3.0 outside of our existing HDP 2.6 cluster and submit jobs to YARN (v2.7) in that cluster without upgrading or modifying it. Users currently run Spark 2.3, which is included in the HDP stack. The goal is to enable Apache Spark 3.0 outside the HDP cluster without interrupting the current jobs.
What is the best approach for this? Set up Apache Spark 3.0 client nodes outside the HDP cluster and submit jobs from those new client nodes?
Any recommendations? What should we watch out for to avoid conflicts with the current HDP stack and its components?
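For context, a minimal sketch of how a job could be submitted from such an external Spark 3 client node, assuming the cluster's Hadoop/YARN config files have been copied to that node (the paths, archive location, and class name below are placeholder assumptions, not HDP defaults):

    # Point the external Spark 3 client at the existing cluster's configs
    # (assumed to have been copied from the HDP cluster to this node)
    export HADOOP_CONF_DIR=/etc/spark3/hadoop-conf
    export YARN_CONF_DIR=/etc/spark3/hadoop-conf

    # Ship Spark 3's own jars to YARN via spark.yarn.archive so the job does
    # not pick up anything from the cluster-side Spark 2.3 installation
    /opt/spark-3.0.1/bin/spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --conf spark.yarn.archive=hdfs:///user/spark/spark-3.0.1-jars.zip \
      --class com.example.MyApp \
      /opt/jobs/my-app.jar

Keeping the Spark 3 installation, its configs, and its jar archive completely separate from the HDP-managed Spark 2.3 paths is what avoids interference with the existing jobs.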
I built Spark 3.0.1 from the Spark 3.0.1 source code against the specific Hadoop and Hive versions shipped with HDP 2.6, then deployed it on the HDP client nodes only. The Spark 3.0.1 pre-built binaries had compatibility issues with Hive 1.2.1, since they are built against the latest Hive.
Build options:
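As a sketch of the kind of make-distribution.sh invocation this implies (the Hadoop version and the hive-1.2 profile are assumptions for HDP 2.6; the hive-1.2 profile exists in Spark 3.0.x but was removed in later releases, so check it against the Spark 3.0.1 build docs):

    # Build a Spark 3.0.1 distribution against HDP 2.6's Hadoop/Hive line
    # (hadoop.version=2.7.3 and -Phive-1.2 are assumed values for HDP 2.6)
    ./dev/make-distribution.sh \
      --name hdp26 \
      --tgz \
      -Pyarn \
      -Phadoop-2.7 -Dhadoop.version=2.7.3 \
      -Phive -Phive-1.2 -Phive-thriftserver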