I am using Databricks Community Edition to run Spark workloads. I understand it uses kernels to run the notebooks.
- Is there any way to identify which kernel a notebook uses to run?
- How exactly does a notebook run behind the scenes? (Very little information is available.)
Regarding the first question: Databricks has a limited number of supported "main" languages - Scala, Python, R, and SQL - and you set one of them as the primary language when creating a notebook. Besides the language set at the notebook level, you can use another language for a given cell with magic commands such as `%scala`, `%python`, `%r`, and `%sql`. There are also additional magics, for example `%sh` for executing shell code on the driver and `%fs` for working with files on DBFS. All of this is covered in the documentation.
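To make the magics concrete, here is a rough sketch of a Python-primary notebook using per-cell magics, shown in the `.py` source format Databricks uses when exporting notebooks (the `# COMMAND ----------` separators and `# MAGIC` prefixes mark cell boundaries and magic lines; the queries and paths are just placeholders):

```python
# Databricks notebook source
# Primary language is Python, so a plain cell runs PySpark against the
# notebook's pre-created `spark` session.
df = spark.range(5).withColumnRenamed("id", "n")
df.show()

# COMMAND ----------
# MAGIC %sql
# MAGIC -- This cell runs as SQL even though the notebook's primary language is Python
# MAGIC SELECT current_date() AS today

# COMMAND ----------
# MAGIC %sh
# MAGIC # Shell commands run on the driver node
# MAGIC uname -a

# COMMAND ----------
# MAGIC %fs ls /databricks-datasets
```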
Regarding the second question: the actual implementation is not public, but it should work similarly to the open-source Spark front ends such as `pyspark` - the notebook sends your code to the cluster's driver, which runs it against a pre-created Spark session and distributes the resulting jobs to the executors.
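As a rough analogy, the `spark` object that a Databricks notebook exposes behaves like a session you would create yourself in plain `pyspark`; the sketch below uses only standard PySpark API and is not meant to describe Databricks' actual internal wiring:

```python
from pyspark.sql import SparkSession

# In stand-alone PySpark you build the session yourself; in a Databricks
# notebook an equivalent session is already created and bound to `spark`.
spark = SparkSession.builder.appName("notebook-equivalent").getOrCreate()

# Each notebook cell is then just code executed on the driver against this
# session, with Spark distributing the actual work across the executors.
spark.range(10).selectExpr("sum(id) AS total").show()
```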