Reputation: 565
I have a dbt model that looks like this:
{{
  config(
    materialized='incremental',
    incremental_strategy='insert_overwrite',
    partition_by={'field': 'conversion_at_utc', 'data_type': 'timestamp'}
  )
}}
with base as (
  select
    conversion_id,
    user_id,
    campaign,
    timestamp_seconds(time) as conversion_at_utc
  from {{ source('xyz', 'users_campaigns_conversion') }}
  where {{ select_date_increments('date(timestamp_seconds(time))') }}
)

select * from base
Here, time is a Unix timestamp (seconds since 1970).
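To illustrate the conversion (the literal value here is just an example, not from my data):

```sql
-- timestamp_seconds converts a Unix epoch value (seconds) to a TIMESTAMP
select timestamp_seconds(1700000000);  -- 2023-11-14 22:13:20 UTC
```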
When I run this model for the first time, it succeeds and creates a table partitioned by day on the field conversion_at_utc. However, when I run it a second time, I get the error: "Cannot replace a table with a different partitioning spec. Instead, DROP the table, and then recreate it. New partitioning spec is interval(type:day,field:conversion_at_utc) and existing spec is none"
Any idea why the existing spec could be none?
Thanks in advance
Upvotes: 0
Views: 753
Reputation: 26
This may not be the exact same issue you had, but I was getting the same error and discovered it was caused by a temp table dbt had created that hadn't yet expired or been dropped by dbt. I had been experimenting with the partition column, and at some point the job failed, so the temp table silently lingered behind the scenes for 12 hours (its configured expiration time). dbt creates this temp table when you don't configure specific partitions to replace as part of your insert_overwrite strategy; if the job succeeds, the table is dropped toward the end of the script.
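For reference, here's roughly what configuring specific partitions looks like, so dbt skips the temp table entirely (a sketch based on dbt-bigquery's static insert_overwrite approach; the partitions_to_replace variable name and the two-day window are just illustrative):

```sql
{% set partitions_to_replace = [
  'timestamp(current_date)',
  'timestamp(date_sub(current_date, interval 1 day))'
] %}

{{
  config(
    materialized='incremental',
    incremental_strategy='insert_overwrite',
    partition_by={'field': 'conversion_at_utc', 'data_type': 'timestamp'},
    partitions=partitions_to_replace
  )
}}

select ...  -- your model body
```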
Manually dropping that temp table in BigQuery fixed the issue for me. You can find the name of the temp table in the script dbt generates when building your model; it's usually named something like yourmodel__dbt_tmp.
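If you want to drop it manually, something like this in the BigQuery console should do it (the project, dataset, and model names below are placeholders, replace them with your own):

```sql
-- Drop the leftover temp table from the failed incremental build
DROP TABLE IF EXISTS `my-project.my_dataset.yourmodel__dbt_tmp`;
```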
Upvotes: 0