I want to execute a query on a Serverless SQL Warehouse to grant a group of users access to a specific schema in the catalog (we don't have Unity Catalog enabled yet, and it will take a while before we can get it).
The schema has to be a parameter, so I've created a .sql file containing this statement:
GRANT READ_METADATA, SELECT, USAGE ON SCHEMA {{schema_name}} TO `users`;
Then I'm using the Jobs API (https://docs.databricks.com/api/azure/workspace/jobs/submit#tasks-sql_task) to run a SQL task like this:
{
  "tasks": [
    {
      "task_key": "sql_grant_permission",
      "sql_task": {
        "file": {
          "path": "/folder/sql_file_containing_the_grant_statement.sql"
        },
        "parameters": {
          "schema_name": "test_schema"
        },
        "warehouse_id": "xxxxxxxxxxxxxxxx"
      }
    }
  ],
  "timeout_seconds": 1200
}
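For reference, the submission itself is a plain POST of that body to the one-time run endpoint; a minimal sketch in Python, assuming the Jobs API 2.1 runs/submit endpoint (the workspace URL and token are placeholders, not real values):

import requests

# One-time run submit (Jobs API 2.1); workspace URL and token are placeholders
body = {
    "tasks": [
        {
            "task_key": "sql_grant_permission",
            "sql_task": {
                "file": {"path": "/folder/sql_file_containing_the_grant_statement.sql"},
                "parameters": {"schema_name": "test_schema"},
                "warehouse_id": "xxxxxxxxxxxxxxxx",
            },
        }
    ],
    "timeout_seconds": 1200,
}

resp = requests.post(
    "https://adb-xxxx.azuredatabricks.net/api/2.1/jobs/runs/submit",
    headers={"Authorization": "Bearer <personal-access-token>"},
    json=body,
)
resp.raise_for_status()
print(resp.json())  # contains the run_id of the submitted run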
But the parameter schema_name is being substituted as a quoted string literal, so the statement ends up as:
GRANT READ_METADATA, SELECT, USAGE ON SCHEMA 'test_schema' TO `users`;
Which results in an error.
Any ideas/suggestions to solve this?
EDIT:
By default, the parameters are passed as strings; there is no way to use one as a schema identifier in a GRANT statement run on a SQL warehouse. So install databricks-sql-connector from PyPI, restart the cluster, and create a Python notebook task with code like the sketch below, then trigger the job.
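A minimal sketch of what that notebook could look like, assuming databricks-sql-connector is installed on the cluster; the hostname, HTTP path, and secret scope/key are placeholders to replace with your own:

from databricks import sql

# Read the schema name passed in as a job parameter; the widget name
# must match the key sent in notebook_params
dbutils.widgets.text("schema_name", "")
schema_name = dbutils.widgets.get("schema_name")

# Connect to the SQL warehouse (hostname, HTTP path, and secret
# scope/key below are placeholders)
with sql.connect(
    server_hostname="adb-xxxx.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/xxxxxxxxxxxxxxxx",
    access_token=dbutils.secrets.get("my-scope", "sql-token"),
) as connection:
    with connection.cursor() as cursor:
        # Interpolate the schema name as an identifier rather than a
        # bound parameter, so it is not quoted as a string literal
        cursor.execute(
            f"GRANT READ_METADATA, SELECT, USAGE ON SCHEMA {schema_name} TO `users`"
        )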
Next, call this task through the Jobs API. You will get a job_id when you create the task; use it in a run-now request like the one sketched below. If you check the query history afterwards, you can see that the GRANT ran successfully.
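For illustration, triggering the job could look like this, assuming the Jobs API 2.1 run-now endpoint (the workspace URL, token, and job_id are placeholders):

import requests

# Trigger the notebook job by job_id (Jobs API 2.1 run-now); the
# workspace URL, token, and job_id are placeholders
resp = requests.post(
    "https://adb-xxxx.azuredatabricks.net/api/2.1/jobs/run-now",
    headers={"Authorization": "Bearer <personal-access-token>"},
    json={
        "job_id": 123456789,
        "notebook_params": {"schema_name": "test_schema"},
    },
)
resp.raise_for_status()
print(resp.json())  # contains the run_id of the triggered run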