I am having trouble understanding dbt's execution behaviour. I am running all (126) models, and one of them runs for more than an hour, but when I run this 'slow_model' separately (-m slow_model) it takes only 3 minutes. I am creating a table, and all of the upstream tables for 'slow_model' are also materialized as tables.
I analysed the model's source query in Postgres and the execution plan was pretty good (the sketch below shows roughly how I checked it). There must be some change in execution when more models are running, but I don't see it, or I don't understand what the difference is.
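For reference, this is roughly how I inspected the plan. The table and column names below are placeholders, not my real schema; after a run (or dbt compile), dbt writes the fully compiled SELECT for each model under target/compiled/, and that is what I ran EXPLAIN against:

```sql
-- Illustrative only: stand-in tables instead of my real sources.
-- Paste the compiled SELECT from target/compiled/<project>/models/...
-- in place of the query below, then inspect timings and buffer usage.
EXPLAIN (ANALYZE, BUFFERS)
SELECT u.user_id, r.role_name
FROM raw_users AS u
JOIN raw_roles AS r
  ON r.user_id = u.user_id;
```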
1 of 1 OK created table model dbt_intermediate.imd_roles [SELECT 28371 in 194.88s]
90 of 126 OK created table model dbt_intermediate.imd_roles [SELECT 28371 in 3024.68s]
I just ran into a similar problem in Postgres. First, I recommend looking at the logs in the logs/dbt.log file. There you can see the exact queries being sent to the Postgres database, the order they run in, and the transactions being created.

In my case the issue was in my incremental models, and the difference in time came about the same way: I ran my 'slow_model' first and it created the table (like a full refresh), so the execution time was almost the same as the plain SELECT; but on a second run (running the slow_model with the others) it took a lot longer. I found that the number of indexes was making the inserts very slow, so I removed them and got roughly a 75% time reduction (see the sketch below). Did you find out what happened in your case?
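To illustrate what I mean, here is a minimal sketch, not my actual model: the column names and the stg_events ref are made up. In dbt-postgres the indexes are declared in the model config, so trimming the list there was all it took:

```sql
-- models/slow_model.sql (illustrative names only).
-- Before, the indexes list also had entries like
--   {'columns': ['created_at']} and {'columns': ['status', 'created_at']};
-- each extra index slows down every insert, so I dropped the
-- non-essential ones and kept only the unique key.
{{ config(
    materialized='incremental',
    unique_key='id',
    indexes=[
        {'columns': ['id'], 'unique': true}
    ]
) }}

select id, status, created_at
from {{ ref('stg_events') }}
{% if is_incremental() %}
  -- only insert rows newer than what is already in the table
  where created_at > (select max(created_at) from {{ this }})
{% endif %}
```

On a full refresh the table is built first and indexed afterwards, which is why the first run was fast; on incremental runs every insert has to maintain all of the existing indexes, which is where my time went.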