Besarion

Reputation: 149

Azure Data Factory delimited text file ignoring imported schema

I get a weekly file which has up to 34 columns, but sometimes the first line of the file only has 29 columns. I have imported a schema with 34 columns, but when I preview the data, Data Factory simply ignores the schema I've defined for the file and shows only the first 29 fields.

Apparently we can't ask for headers to be added to the file. How do I force Data Factory to read the file as having 34 columns, given that I've supplied the schema? Adding the five missing pipes (the delimiter) to the first line fixes the issue, but I don't want to have to do that manually every week.
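One workaround, outside Data Factory itself, is to pad short lines to the full column count before ingestion. This is a minimal sketch (the file paths and the `pad_line`/`pad_file` helper names are my own, not from the question), assuming a pipe-delimited file and a fixed expected width of 34 columns:

```python
# Hypothetical preprocessing sketch: pad each pipe-delimited line with
# empty trailing fields so every line has the full 34 columns before
# Data Factory reads the file.
EXPECTED_COLUMNS = 34

def pad_line(line: str, expected: int = EXPECTED_COLUMNS) -> str:
    """Append empty pipe-delimited fields until the line has `expected` columns."""
    fields = line.rstrip("\n").split("|")
    missing = expected - len(fields)
    if missing > 0:
        fields.extend([""] * missing)
    return "|".join(fields)

def pad_file(src_path: str, dst_path: str) -> None:
    """Rewrite src_path to dst_path with every line padded to the full width."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            dst.write(pad_line(line) + "\n")
```

This could run as a small pre-processing step (for example in an Azure Function) before the copy activity picks the file up, so the manual weekly fix is no longer needed.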

Kind Regards.

Upvotes: 0

Views: 679

Answers (1)

NiharikaMoola

Reputation: 5074

I have reproduced this with some sample data using a data flow.

  1. Create the delimited text dataset and set the column delimiter to "No delimiter" so that the file is read as single-column data.


  2. In the source, the first row contains 3 columns delimited by a pipe (`|`) and the second row has 5 columns when delimited by `|`.


  3. Using a derived column transformation, split the column into multiple columns based on `|`.

ex: `split(Column_1, '|')[1]`

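The approach above can be sketched in plain Python to show what the data flow is doing. The sample rows here are my own illustration; note that array indexing in ADF data-flow expressions is 1-based, so `split(Column_1, '|')[1]` refers to the first field, while Python lists are 0-based:

```python
# Mirror of the data-flow technique: each row arrives as one column,
# then is split into individual columns on the pipe delimiter.
rows = ["a|b|c", "1|2|3|4|5"]              # single-column input, varying widths
split_rows = [r.split("|") for r in rows]  # one derived column per field

# split_rows[0][0] corresponds to split(Column_1, '|')[1] in the data flow
first_fields = [r[0] for r in split_rows]
```

Because the split happens after the file is read as a single column, rows with fewer delimiters no longer truncate the schema; short rows simply yield fewer array elements.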

Upvotes: 1
