How do I create a Spark service for Python 2 or 3 with Spark 2.0? Whenever I create a new service and associate it with a Python notebook, I get Python 2 with Spark 1.6. Why can't I see the configuration of the service I am creating, like in the Databricks free edition? I want to use the SparkSession API introduced in Spark 2.0 to create my Spark session variable, hence the question.
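For reference, this is the kind of code I'd like to run in the notebook (a minimal sketch; the app name and sample data are just placeholders):

```python
from pyspark.sql import SparkSession

# SparkSession is the unified entry point introduced in Spark 2.0
spark = SparkSession.builder \
    .appName("my-notebook") \
    .getOrCreate()

# Quick sanity check that the 2.0 API is available
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"])
df.show()
```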
Python 2 with Spark 2.0
157 views, asked by Vik M
There are 2 answers
You cannot see the configuration of the service you are creating, because you're not creating a service with its own configuration. The Apache Spark as a Service instances in Bluemix and Data Science Experience get execution slots in a shared cluster, and the configuration of that shared cluster is managed by IBM.
The Jupyter Notebook server of your instance has kernel specs for each supported combination of language and Spark version. To switch your notebook to a different combination, select "Kernel -> Change Kernel -> (whatever)". Or select language and Spark version separately when creating a notebook.
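For example, once the notebook is running on the kernel you want, you can confirm both versions from within it (this assumes the preconfigured SparkContext `sc` that the service injects into each notebook kernel):

```python
import sys

print(sys.version)   # Python version of the notebook kernel

# The service's notebook kernels come with a SparkContext `sc` already defined;
# Spark 2.0 kernels additionally expose a SparkSession as `spark`.
print(sc.version)    # Spark version backing this kernel
```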
You can choose the Python and Spark version while:
a. creating a new notebook in Data Science Experience, or
b. changing the kernel of an existing notebook.