Madhur Gupta

Reputation: 15

Best way to bulk load data from Azure Synapse Serverless SQL pools into Azure storage or Databricks Spark

I am trying to bulk load data from Azure Synapse serverless SQL pools into Azure Storage or directly into Databricks Spark (using JDBC driver). What is the best way to do this bulk loading assuming we only know the external table name and don't know the location of the file underneath? Is there any metadata query to know the location of the file as well?

Upvotes: 0

Views: 589

Answers (1)

GregGalloway

Reputation: 11625

The files are already in Azure storage since Synapse Serverless SQL has no “in-database” storage.

Assuming it’s an external table (not a view over OPENROWSET), sys.external_tables has a location column with the path to the file or folder.

If you don’t already know which storage account and container it’s in, you may need to join to sys.external_data_sources for that information.
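A minimal sketch of that metadata query, joining the two catalog views (the table name `MyExternalTable` is a placeholder, and the path concatenation assumes no duplicate or missing slash between the two parts):

```sql
-- Resolve an external table's storage location from serverless SQL metadata.
SELECT
    t.name                                AS table_name,
    ds.name                               AS data_source_name,
    ds.location                           AS data_source_location,  -- storage account + container
    t.location                            AS relative_path,
    CONCAT(ds.location, '/', t.location)  AS full_path              -- sketch; adjust slashes as needed
FROM sys.external_tables AS t
JOIN sys.external_data_sources AS ds
    ON t.data_source_id = ds.data_source_id
WHERE t.name = 'MyExternalTable';
```

Once you have the full path, Spark can read the underlying files directly from storage (e.g. `spark.read.parquet(path)`), which avoids pulling the data through the JDBC driver at all.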

Upvotes: 2
