Reputation: 13
I have multiple CSV files and multiple tables. The table name is the file name, and the column names come from the first row of the CSV file.
Now I want the sink table to fall back to a default value when a field is an empty string.
Consider my scenario:
employee: id int, name varchar, is_active bit NULL
id|name|is_active
1|raja|
When I try to copy the CSV data into the PostgreSQL table, it throws an error. The expected result is the default value whenever the field is empty.
Upvotes: 0
Views: 2861
Reputation: 1450
As per the issue, I tried to repro the scenario, and the copy below completed successfully. You have to use:
Source Dataset: employee.csv
from Azure Blob Storage
Sink Dataset: Here I have used Azure SQL DB as the sink because of some limitations, but since you are using PostgreSQL the setup is almost identical.
Copy Activity Settings:
Under the mapping settings there is a type conversion option; you have to import the schema, or else you can add the mappings dynamically.
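For reference, a dynamic mapping on the Copy Activity is expressed as a TabularTranslator in the activity JSON. The fragment below is an assumed illustration for the employee columns from the question, not the exact configuration used in this repro:

```json
{
  "translator": {
    "type": "TabularTranslator",
    "mappings": [
      { "source": { "name": "id" },        "sink": { "name": "id" } },
      { "source": { "name": "name" },      "sink": { "name": "name" } },
      { "source": { "name": "is_active" }, "sink": { "name": "is_active" } }
    ],
    "typeConversion": true,
    "typeConversionSettings": {
      "allowDataTruncation": true,
      "treatBooleanAsNumber": false
    }
  }
}
```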
Output:
Alternative: use a Data Flow
- If you have multiple data fields, use the derived column transformation to generate new columns in your data flow or to modify existing fields.
For more details, refer to Derived column transformation in mapping data flow.
You can even refer to this Microsoft Q&A post for more insights: Copy Task failure because of conversion failure
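Outside the Data Factory UI, the same idea (substitute a default before the copy) can be sketched in plain Python. The delimiter, column name, and default value below are assumptions taken from the question's employee example, not part of a Data Factory pipeline:

```python
import csv
import io

# Assumed default for the nullable bit column from the question's example.
DEFAULTS = {"is_active": "0"}

def fill_defaults(rows, defaults):
    """Replace empty-string fields with a per-column default, if one exists."""
    for row in rows:
        yield {col: defaults.get(col, val) if val == "" else val
               for col, val in row.items()}

# Pipe-delimited sample matching the question's employee.csv.
raw = "id|name|is_active\n1|raja|\n"
reader = csv.DictReader(io.StringIO(raw), delimiter="|")
cleaned = list(fill_defaults(reader, DEFAULTS))
print(cleaned)   # [{'id': '1', 'name': 'raja', 'is_active': '0'}]
```

The cleaned rows can then be written back out and copied without the empty-string-to-bit conversion failure.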
Upvotes: 0
Reputation: 733
You can use NULLIF in PostgreSQL:
NULLIF(argument_1, argument_2);
The NULLIF function returns NULL if argument_1 equals argument_2; otherwise it returns argument_1.
This way you can convert the empty value to NULL and then replace it with some other value (for example via COALESCE or the column's default).
If your error is related to a type mismatch, consider casting the column first.
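The NULLIF behaviour described above can be demonstrated with a runnable sketch. NULLIF and COALESCE are standard SQL and work the same way in PostgreSQL; SQLite is used here only so the example runs without a database server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# NULLIF(a, b) returns NULL when a equals b, otherwise it returns a.
empty_becomes_null = cur.execute("SELECT NULLIF('', '')").fetchone()[0]
print(empty_becomes_null)      # None: the empty string became NULL

value_passes_through = cur.execute("SELECT NULLIF('raja', '')").fetchone()[0]
print(value_passes_through)    # raja: the arguments differ, first is returned

# Combined with COALESCE, an empty field can be replaced by a default value.
with_default = cur.execute("SELECT COALESCE(NULLIF('', ''), 'N/A')").fetchone()[0]
print(with_default)            # N/A

conn.close()
```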
Thanks!
Upvotes: 0