Reputation: 134
My idea is to take a CSV file as the source and a Synapse table as the sink, using upsert. However, whenever I change rows in the CSV file, the data flow source preview still shows the old data.
This is my source file. I changed the last indicator to FALSE and uploaded it to blob storage.
I ran the source preview again, but the indicator still refers to the old data.
Upvotes: 0
Views: 712
Reputation: 3838
If you change the source data while in the same continuous data flow debug session, the source data stays cached in Spark data frames, so you will need to invalidate the cache. You can do this by renaming your source transformation, which forces ADF to re-read the source data. For example, change the name "source1" to "source" before you hit "refresh" in data preview.
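For readers curious about the Spark behaviour behind this, here is a minimal PySpark sketch of the same caching effect. This is not ADF's internal code; the file path and session name are hypothetical, and it only illustrates why a cached data frame keeps returning stale rows until the cache is dropped (which renaming the source transformation effectively does).

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cache-demo").getOrCreate()

# First "preview": read the CSV and cache it, roughly what the debug session
# does for a source transformation. (Path is hypothetical.)
df = spark.read.option("header", True).csv("/mnt/blob/source.csv").cache()
df.show()  # shows the original rows and materializes the cache

# ... the file in blob storage is replaced with updated rows here ...

# Re-running the preview on the same cached data frame still returns old data.
df.show()

# Dropping the cache and reading again picks up the new file contents,
# analogous to renaming the source so ADF builds a fresh data frame.
df.unpersist()
fresh = spark.read.option("header", True).csv("/mnt/blob/source.csv")
fresh.show()  # now reflects the updated file
```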
Upvotes: 1