Reputation: 15
I was trying to export a table to external S3 storage with overwrite, but the OVERWRITE option did not seem to work.
Below is the query that was used for the export:
copy into @s3_stage/2020/08/20/test_20200820
from (select object_construct('id', id, 'user_id', user_id) from sample_table)
file_format = ( type = json )
overwrite = true;
The result files were written as test_20200820_0_*_0.json.gz, which were not guaranteed to be overwritten.
[screenshot: s3_output_files]
Thanks.
Upvotes: 1
Views: 153
Reputation: 1774
Why don't you just run a REMOVE (rm) command before the COPY?
rm @s3_stage
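For example, scoped to the same stage path used in the question (the stage name and date path are just the ones from your query; adjust as needed):

-- Clear out any files left from a previous run under this path,
-- then re-run the export. REMOVE (alias: rm) deletes staged files.
remove @s3_stage/2020/08/20/;

copy into @s3_stage/2020/08/20/test_20200820
from (select object_construct('id', id, 'user_id', user_id) from sample_table)
file_format = (type = json)
overwrite = true;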
Upvotes: 0
Reputation: 1222
If I understand your issue correctly, the problem is that the generated filenames are NOT DETERMINISTIC due to the automatic "splitting" that occurs during the export from Snowflake using the COPY INTO <location> command... is that correct?
If so, have you tried using the SINGLE = TRUE option (to sidestep the file splitting operation)? For more details, please see the related Snowflake docs page.
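As a rough sketch based on the query in your question (the target filename is illustrative; note that with SINGLE = TRUE the default MAX_FILE_SIZE is 16 MB, so you may need to raise it for larger exports):

-- Export to a single, deterministically named file so OVERWRITE can replace it.
copy into @s3_stage/2020/08/20/test_20200820.json.gz
from (select object_construct('id', id, 'user_id', user_id) from sample_table)
file_format = (type = json)
single = true
max_file_size = 104857600  -- optional: raise the 16 MB default (here, 100 MB)
overwrite = true;

With SINGLE = TRUE the output is written to exactly the name you specify (no _0_*_0 suffix), so a later run with OVERWRITE = TRUE replaces the same object.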
Upvotes: 2