Reputation: 215
I want to create a simple Azure Data Factory process that reads a CSV file from Blob Storage and writes it to an Azure SQL database using data flows.
The source dataset has a column named "myTime" of type "string".
I added a Derived Column transformation to add a new column named "customTime" with the expression "currentTimestamp()".
Finally, in the SQL sink, I mapped "customTime" to my DateTime field in the database.
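In data flow script form, the setup looks roughly like this (a sketch; the stream names source1, DerivedColumn1, and sqlsink are placeholders, and the sink's mapColumn block is my approximation of the mapping configured in the UI):

    source(output(
            myTime as string
        ),
        allowSchemaDrift: true,
        validateSchema: false) ~> source1
    source1 derive(customTime = currentTimestamp()) ~> DerivedColumn1
    DerivedColumn1 sink(allowSchemaDrift: true,
        validateSchema: false,
        mapColumn(
            customTime
        )) ~> sqlsink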
If I preview the data in data flows, everything looks alright: I can see both fields (myTime, customTime). When I debug the pipeline, I get the following exception:
Activity myActivity failed: DF-SYS-01 at Sink 'sqlsink': java.sql.BatchUpdateException: Invalid column name 'myTime'
Any idea why the SQL sink is linked to "myTime" and not "customTime"? I don't see any reference to "myTime" except that it is part of the input schema.
Thank you very much and best regards, Michael
Upvotes: 2
Views: 8378
Reputation: 16431
This error is caused by the column mapping: you created the new column "customTime", but in your table the column name is "myTime".
What I found is that the CSV file's column names and schema must match the Sink table, or debugging fails with an "Invalid column name" error like the one above.
Solution 1: choose "Recreate table" in the Sink settings. Note that this will change your table schema: the column "myTime" becomes "customTime".
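In the sink's data flow script, this corresponds to the table action (a sketch; I am assuming recreate: true is the script equivalent of the "Recreate table" option in the UI):

    DerivedColumn1 sink(allowSchemaDrift: true,
        validateSchema: false,
        recreate: true) ~> sqlsink

Be aware that this drops and recreates the table on every run, using the data flow's output schema.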
Solution 2: in the Derived Column settings, give the new column the same name as the column in the Sink table, i.e. "myTime" instead of "customTime".
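In script form, that means the derived column overwrites the incoming string column rather than adding a new one (again a sketch with placeholder stream names):

    source1 derive(myTime = currentTimestamp()) ~> DerivedColumn1

The sink then maps "myTime" straight to "myTime", with no renaming involved.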
I tested the same operation and it succeeded: I read a CSV file from Blob Storage and wrote it to my Azure SQL database using data flows. I created the Sink table with the same schema as the CSV file and configured the Derived Column as described in Solution 2.
Hope this helps.
Upvotes: 1