Redshift intermittent error "could not complete because of conflict with concurrent transaction"

The problem I’m seeing

We’re using dbt Cloud with a Redshift Serverless warehouse, and have been running scheduled dbt jobs with 4 threads for months without issue.

Starting December 19, 2025 (which is the release date of dbt core v1.11), we started having intermittent dbt job failures, with the error:

could not complete because of conflict with concurrent transaction

This always happens on the same model (dim_country), which is among the first 4 models to be processed by our 4-thread job.

We have ruled out external conflicts (with other jobs, processes, or manual queries) - this error occurs when nothing else is using this table. The only queries running in parallel are the models processed by the other threads of the same job.
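For anyone who wants to double-check this on their own warehouse: Redshift Serverless exposes the SYS_ monitoring views, so a query along these lines (column names as documented for SYS_QUERY_HISTORY; adjust the time filter to your failure window) can confirm whether any other transaction was actually running at the moment the drop failed:

```sql
-- List queries that were running around the time of the failure,
-- to rule out an external transaction holding a lock on the table.
select query_id, transaction_id, start_time, left(query_text, 80) as query_snippet
from sys_query_history
where status = 'running'
order by start_time;
```

In our case this only ever showed the other threads of the same dbt job.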

When reviewing the debug logs, we see that the failed statement is always the following:

drop table if exists "dwh"."marts"."dim_country__dbt_backup" cascade;
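For context, that statement is the last step of dbt’s table-materialization swap on Redshift/Postgres adapters. Roughly (a sketch based on the `__dbt_backup` naming in our logs; the exact generated DDL can differ between adapter versions), the sequence per model is:

```sql
-- 1. Build the new relation under a temporary name:
create table "dwh"."marts"."dim_country__dbt_tmp" as (/* compiled model SQL */);

-- 2. Swap: rename the old table out of the way, rename the new one into place:
alter table "dwh"."marts"."dim_country" rename to "dim_country__dbt_backup";
alter table "dwh"."marts"."dim_country__dbt_tmp" rename to "dim_country";

-- 3. Drop the old copy (this is the statement that fails for us):
drop table if exists "dwh"."marts"."dim_country__dbt_backup" cascade;
```

The `cascade` here also drops any objects that depend on the backup table, so if another thread’s model holds a lock on one of those dependents at that instant, the drop can hit a serialization conflict even though nothing else touches `dim_country` itself.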

The jobs are always scheduled on the hour (6am, 6pm). We have tried moving them to 3am, but the issue persists.

When manually re-running the failed jobs, they always complete successfully.

My question

Can anyone think of a root cause for this issue? Also, is it a coincidence that it started on the release date of the latest dbt version?

Would appreciate any help.

Update January 12, 2026

Quickly updating that once I updated my dbt Cloud settings to use the Compatible dbt version instead of the Latest version, the issue appears to have stopped.

When using the Latest, the logs showed dbt version in use was 2025.12.20, which I believe maps to dbt 1.11.1 or 1.11.2 - that’s when we saw the issue. When using Compatible, logs showed version 2025.12.19, which I believe maps to dbt 1.11.0.

Maybe that could be a hint to what may be causing the issue?

Thanks

I’m also encountering this problem. We’re also using dbt Cloud with a Redshift Serverless cluster, and the problem also started appearing on 2025-12-19. Our pipeline has a much larger number of models, and to date it’s always been a different model that fails while trying to access a table locked by some other process. This has occurred during scheduled runs, in MR CI/CD pipeline builds (which create an MR-specific schema name for the build), and in manual runs, and we typically “resolve” it by just re-running the pipeline without changing any code.

I assumed someone in my org had started running some process that manually backs up tables (for some reason), but given that your issue also appeared the same day, I suspect there might be some other change at play.