Reputation: 191
I'm trying to set up a data flow to de-duplicate data, but there seems to be a simple error that I cannot fix. The dataset is set up like so, and the preview works correctly (updated: fixed the file path and imported the schema):
The data flow is set up like this:
However, I get this error when trying Data preview (updated: the columns now come from importing the schema in the dataset):
I have tried editing the wildcard paths in the source options, but so far that has not worked. I tried:
daily/*csv and *.csv
The structure of the blob account looks like so:
Source options from the answer given:
I still get the error: Path does not resolve to any file(s). Please make sure the file/folder exists and is not hidden. At the same time, ensure special character is not included in file/folder name, for example, name start with _
Each directory is from an Azure export and creates its own monthly folder. This works in Data Factory, but each CSV has month-to-date cost, so I was trying to use data flows to take only the new data instead of all of the data with duplicates.
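For the de-duplication step itself, one common approach (a sketch, not from the original post) is an Aggregate transformation that groups on a hash of all columns and keeps the first row per group, following the distinct-rows pattern from Microsoft's data flow script snippets. Assuming an upstream source stream named DailyCosts (a hypothetical name), it would look roughly like this in data flow script:

    DailyCosts aggregate(groupBy(rowHash = sha2(256, columns())),
        each(match(true()), $$ = first($$))) ~> DistinctRows

Here sha2(256, columns()) hashes every column of a row, so identical rows from overlapping month-to-date files collapse into a single output row.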
Upvotes: 3
Views: 3594
Reputation: 5074
You can use the wildcard path below to get the files of the required type.
Input folder path:
Azure data flow:
Wildcard paths: daily/*.csv
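Expressed in the underlying data flow script, the source options above would look roughly like this (a sketch; the stream name and schema-drift settings are assumptions, not taken from the screenshots):

    source(allowSchemaDrift: true,
        validateSchema: false,
        wildcardPaths:['daily/*.csv']) ~> DailyCosts

Note that the wildcard path resolves from the container root of the dataset, and daily/*.csv only matches CSVs sitting directly inside daily/. If each export lands in its own monthly subfolder under daily/, a recursive pattern such as daily/**/*.csv may be needed instead.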
Upvotes: 4