malcolm richard

Reputation: 65

Databricks - Can we parameterize the mount_point name during creation by passing the value from a SQL lookup table?

Thanks in advance, this site is of great help!

Question:

Can we parameterize the mount_point name and the file name while creating the DataFrame?

Mount name: select company from companytable (pass the company name as the mount_point variable)

source = "wasbs://[email protected]", mount_point = "/mnt/"VARIABLIZENAME", extra_configs = {"fs.azure.sas.uiasasps.dmodssdsdgarea.blob.core.windows.net":dbutils.secrets.get(scope = "AIdsT", key = "keydmodslaarea")}) print("=> Succeeded")

Variablizing the file name:

df = spark.read.format("csv").option("sep", ",").option("header", "true").option("inferSchema", "true").option("escape", '"').load("/mnt/AT/VARIABLIZE.csv")

Can we pass these values from Data Factory as well? If so, I can make use of that.

Upvotes: 1

Views: 955

Answers (2)

Himanshu Kumar Sinha

Reputation: 1806

Just to understand: you have an ADF pipeline where you are calling a Lookup activity (running a SQL query), and the intent is to pass the value from the Lookup to a notebook.

If that's the case, we can achieve this by implementing a Lookup activity and a ForEach (to loop over all the records). Inside the ForEach, use a Notebook activity, point it to the notebook you want to run, and pass the value of the company (something like @item().company). See https://learn.microsoft.com/en-us/azure/data-factory/control-flow-lookup-activity.

In the notebook you can use a widget to get the value as an incoming parameter:

CompanyName = dbutils.widgets.get("CompanyName")
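For completeness, a minimal sketch of how that incoming parameter could drive the mount point and file path from the question. The container, storage account, secret scope/key, and file naming convention are taken from the question's snippets and are placeholders, not a definitive implementation:

# Value passed in from the ADF Notebook activity's base parameters
CompanyName = dbutils.widgets.get("CompanyName")

mount_point = "/mnt/{0}".format(CompanyName)

# Mount only if not already mounted; re-mounting an existing path raises an error
if not any(m.mountPoint == mount_point for m in dbutils.fs.mounts()):
  dbutils.fs.mount(
    source = "wasbs://uiasasps@dmodssdsdgarea.blob.core.windows.net",
    mount_point = mount_point,
    extra_configs = {"fs.azure.sas.uiasasps.dmodssdsdgarea.blob.core.windows.net": dbutils.secrets.get(scope = "AIdsT", key = "keydmodslaarea")})

# Assumed convention: one CSV per company under the mount
df = spark.read.format("csv").option("header", "true").load("{0}/{1}.csv".format(mount_point, CompanyName))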

Please let me know if you have any questions.

Upvotes: 0

CHEEKATLAPRADEEP

Reputation: 12788

You may check out the steps mentioned below:

Step 1: Declaring the variables

mountname = 'test'
csvname = 'original.csv'
path = "dbfs:/mnt/{0}/{1}".format(mountname,csvname)

Step 2: Mounting the storage account

dbutils.fs.mount(
  source = "wasbs://[email protected]/",
  mount_point = "/mnt/{0}".format(mountname),
  extra_configs = {"fs.azure.sas.test.chepra.blob.core.windows.net":"gv7nXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXlOiA=="})
print("=> Succeeded") 

Step 3: Creating the Spark DataFrame

df = spark.read.format("csv").option("sep", ",").option("header", "true").option("inferSchema", "true").option("escape", '"').load(path)
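To tie this back to the Data Factory part of the question: the hard-coded variables in Step 1 can be swapped for widgets so the values arrive as notebook parameters from ADF. The widget names below are assumptions:

mountname = dbutils.widgets.get("mountname")  # e.g. set as a base parameter on the ADF Notebook activity
csvname = dbutils.widgets.get("csvname")
path = "dbfs:/mnt/{0}/{1}".format(mountname, csvname)

Steps 2 and 3 then work unchanged.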


Upvotes: 0
