Reputation: 143
Is it possible to export published SQL files in your Synapse workspace to your sandbox environment via code and without the use of pipelines?
If not, is it somehow possible to access your published SQL files from a notebook with e.g. PySpark, Scala, Spark SQL, C#, etc.?
Upvotes: 0
Views: 453
Reputation: 11294
If not, is it somehow possible to access your published SQL files from a notebook with e.g. PySpark, Scala, Spark SQL, C#, etc.?
You can get the list of SQL scripts from the Synapse workspace using the following REST API:
https://<synapse_workspace_name>.dev.azuresynapse.net/sqlScripts?api-version=2020-12-01
Use this REST API in a Synapse notebook (PySpark).
First, create a service principal and a client secret, then grant that service principal access to your Synapse workspace (for example, via Synapse Studio > Manage > Access control).
Here, these are my SQL scripts in the workspace, which is attached to a dedicated SQL pool named rakeshdedipool.
Generate the bearer token for the service principal. I followed the code in this SO answer by @Saideep Arikontham, which uses msal to generate the bearer token.
If you want, you can also use Postman to obtain the bearer token.
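For reference, here is a minimal sketch of that msal client-credentials flow; the tenant ID, client ID, and secret placeholders are assumptions you would fill in for your own service principal.
import msal
# Placeholders for your own Azure AD tenant and service principal (assumptions)
tenant_id = "<tenant_id>"
client_id = "<service_principal_client_id>"
client_secret = "<service_principal_secret>"
app = msal.ConfidentialClientApplication(
    client_id,
    authority=f"https://login.microsoftonline.com/{tenant_id}",
    client_credential=client_secret,
)
# Token scoped to the Synapse development endpoint
result = app.acquire_token_for_client(scopes=["https://dev.azuresynapse.net/.default"])
print(result["access_token"])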
Now use the bearer token in PySpark and you can see the list of SQL scripts.
import requests
# api-endpoint
URL = "https://rakeshsynapse.dev.azuresynapse.net/sqlScripts?api-version=2020-12-01"
# send a GET request with the bearer token from the msal `result` above and save the response
r = requests.get(url=URL, headers={"Authorization": f"Bearer {result['access_token']}"})
print(r.json())
You can extract the scripts like this and use them as per your requirement in the notebook:
for i in r.json()['value']:
    print("script : ", i['properties']['content']['query'])
(OR) Use the PowerShell cmdlet Export-AzSynapseSqlScript to export the script files to a storage account, or try the Python SDK as sketched below.
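For the Python SDK route, a rough sketch with the azure-synapse-artifacts and azure-identity packages might look like the following; the placeholder credential values and workspace endpoint are assumptions, and the exact SDK surface may differ slightly between versions, so treat this as an outline rather than a verified implementation.
from azure.identity import ClientSecretCredential
from azure.synapse.artifacts import ArtifactsClient
# Same service principal as above (placeholder values are assumptions)
credential = ClientSecretCredential(
    tenant_id="<tenant_id>",
    client_id="<service_principal_client_id>",
    client_secret="<service_principal_secret>",
)
client = ArtifactsClient(credential=credential, endpoint="https://rakeshsynapse.dev.azuresynapse.net")
# List the published SQL scripts and their query text
for script in client.sql_script.get_sql_scripts_by_workspace():
    print(script.name)
    print(script.properties.content.query)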
Upvotes: 1