Python 2 with Spark 2.0


How do we create a Spark service for Python 2 or 3 with Spark 2.0? Whenever I create a new service and associate it with a Python notebook, it's Python 2 with Spark 1.6. Why can't I see the configuration of the service I am creating, like in the Databricks free edition? I want to use the SparkSession API introduced in Spark 2.0 to create the Spark session variable, hence the question.
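For reference, this is the Spark 2.0 entry point I mean. A minimal sketch (the `appName` value is a placeholder; the import only succeeds on a Spark 2.x kernel, where a `spark` variable is normally pre-created anyway):

```python
# Spark 2.0+ only: SparkSession replaces the separate SparkContext /
# SQLContext entry points. On a Spark 1.6 kernel this import fails.
try:
    from pyspark.sql import SparkSession
    spark = SparkSession.builder \
        .appName("my-notebook") \
        .getOrCreate()
except ImportError:
    spark = None  # Spark 1.6 kernel (or pyspark not installed at all)
```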


There are 2 answers

Sumit Goyal On

You can choose the Python and Spark version while:

a. Creating a new notebook in Data Science Experience:

DSX `Project` --> `Overview` --> `+ add notebooks` --> choose the language (Python 2 / Python 3 / R / Scala) and the Spark version (1.6 / 2.0 / 2.1).

b. Change the kernel of an existing notebook:

From any running notebook, choose `Kernel` on the notebook menu, then choose the language and Spark version combination of your choice.
Roland Weber On

You cannot see the configuration of the service you are creating, because you're not creating a service with its own configuration. The Apache Spark as a Service instances in Bluemix and Data Science Experience get execution slots in a shared cluster. The configuration of that shared cluster is managed by IBM.

The Jupyter Notebook server of your instance has kernel specs for each supported combination of language and Spark version. To switch your notebook to a different combination, select "Kernel -> Change Kernel -> (whatever)". Or select language and Spark version separately when creating a notebook.
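To confirm which combination a notebook actually ended up with, you can check from inside the kernel. A minimal sketch, assuming the pre-created notebook variables `spark` (Spark 2.x) or `sc` (Spark 1.6); outside a notebook neither exists and only the Python version is reported:

```python
import sys

def runtime_versions():
    """Report the kernel's Python version and, when a Spark context is
    available, its Spark version. `spark` (2.x) and `sc` (1.6) are the
    variables a notebook kernel pre-creates; elsewhere neither exists."""
    versions = {"python": "%d.%d" % sys.version_info[:2]}
    g = globals()
    if "spark" in g:                       # Spark 2.x kernel
        versions["spark"] = g["spark"].version
    elif "sc" in g:                        # Spark 1.6 kernel
        versions["spark"] = g["sc"].version
    return versions
```

In a Spark 2.0 kernel this returns something like `{"python": "2.7", "spark": "2.0.2"}`; the exact patch version depends on what the shared cluster runs.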