My dbt model on BQ is not refreshing incrementally

I created my dbt model, but it doesn't refresh incrementally. The only way I can refresh the table is to delete the table in BQ and run dbt run, or to run dbt run --full-refresh.

This is my error message:

dbt run -t prod
11:19:45 Running with dbt=1.8.0
11:19:46 Registered adapter: bigquery=1.8.1
11:19:46 Unable to do partial parsing because config vars, config profile, or config target have changed
11:19:47 [WARNING]: Configuration paths exist in your dbt_project.yml file which do not apply to any resources.
There are 1 unused configuration paths:

  - models.BigQueryDBT.example
11:19:47 Found 1 model, 3 sources, 469 macros
11:19:47
11:19:48 Concurrency: 4 threads (target='prod')
11:19:48
11:19:48 1 of 1 START sql incremental model dbt_local.enriched_payment_methods … [RUN]
11:19:54 BigQuery adapter: https://console.cloud.google.com/bigquery?project=howly-prod&j=bq:us-central1:2d949037-fa04-40df-815c-7d09f3dacbc2&page=queryresults
11:19:54 1 of 1 ERROR creating sql incremental model dbt_local.enriched_payment_methods . [ERROR in 5.57s]
11:19:54
11:19:54 Finished running 1 incremental model in 0 hours 0 minutes and 7.13 seconds (7.13s).
11:19:54
11:19:54 Completed with 1 error and 0 warnings:
11:19:54
11:19:54 Database Error in model enriched_payment_methods (models\payment-models\enriched_payment_methods.sql)
  UPDATE/MERGE must match at most one source row for each target row
  compiled Code at target\run\BigQueryDBT\models\payment-models\enriched_payment_methods.sql
{{
    config(
        materialized='incremental',
        unique_key='order_id',
        partition_by={
            "field": "updated_at",
            "data_type": "DATETIME",
            "granularity": "day"
        },
        cluster_by="payment_method_type"
    )
}}

Hey @daniil.tkachenko, this error message suggests that you have duplicates in your source table. As you're using order_id as your unique key, I'd check the row counts when grouping by that column to see where this might be happening. After that you can figure out a strategy to de-duplicate your data, or perhaps add another identifier that is truly unique. An example could be something like concatenating order_id with row_number() after partitioning by order_id.
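
For illustration only, here's a rough sketch of both steps, with a placeholder (your_project.your_dataset.payments_source) standing in for whatever this model actually reads from:

-- Sketch 1: find order_id values that appear more than once in the source
-- (replace your_project.your_dataset.payments_source with the model's real source).
select
    order_id,
    count(*) as row_count
from your_project.your_dataset.payments_source
group by order_id
having count(*) > 1
order by row_count desc

-- Sketch 2: one way to build a truly unique surrogate key from order_id + row_number().
select
    *,
    concat(
        cast(order_id as string),
        '-',
        cast(row_number() over (partition by order_id order by updated_at desc) as string)
    ) as order_row_id
from your_project.your_dataset.payments_source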

The reason that adding the --full-refresh flag works is that this tells dbt to drop the table before recreating it, as if you’re just exectuing a standard CREATE TABLE... statement. Which is also what’s happening if you manually delete the BQ table and then execute dbt run - as there’s no table currently, dbt will just build it for the first time and not conduct incremental checks.