dbt temp table not optimized

When we set `materialized='incremental'`, dbt creates a temp table, stores all the transformed data in it, then deletes rows from the target table whose primary key matches and inserts the matching rows from the temp table. The problem with the dbt temp table is that it has no index or partitioning, so when it is joined on the primary key to a physical table holding a large volume of data, it performs poorly. Can we create our own temp table in a dbt model during model execution? Is there anything else we can do to speed up the join, given that reads from the temp table are sequential scans because it has no index?
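For reference, a minimal incremental model of the kind described above looks roughly like this (the model, source, and column names here are placeholders):

```sql
{{
    config(
        materialized='incremental',
        unique_key='order_id'   -- the "primary key" dbt matches on
    )
}}

select
    order_id,
    customer_id,
    order_total,
    updated_at
from {{ ref('stg_orders') }}

{% if is_incremental() %}
  -- only process rows changed since the last run;
  -- `this` refers to the existing target table
  where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
```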

What does your model’s YAML definition look like? Does `data_type: string(30)` not work under the column definition?

Note: @Mike Stanley originally posted this reply in Slack. It might not have transferred perfectly.

Wrong thread, sorry!

Note: @Mike Stanley originally posted this reply in Slack. It might not have transferred perfectly.

You need to look at the different incremental strategies. The default strategy and the alternatives available vary by adapter. You can also write your own strategies if none of the default ones suit you.
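For example, on adapters that support it you can switch strategies per model and prune the target-side scan with `incremental_predicates`. This is a sketch: the model and column names are placeholders, and the `DBT_INTERNAL_DEST` alias follows the dbt-snowflake docs, so check what your adapter supports:

```sql
{{
    config(
        materialized='incremental',
        incremental_strategy='merge',   -- which strategies exist depends on the adapter
        unique_key='order_id',
        incremental_predicates=[
            -- limits how much of the large target table the merge has to scan;
            -- DBT_INTERNAL_DEST is the alias dbt-snowflake gives the target
            "DBT_INTERNAL_DEST.updated_at >= dateadd(day, -7, current_date)"
        ]
    )
}}

select * from {{ ref('stg_orders') }}
```

If none of the built-in strategies fit, recent dbt-core versions also let you register a custom one by defining a macro named `get_incremental_<strategy_name>_sql(arg_dict)` in your project and setting `incremental_strategy: <strategy_name>`; inside that macro you control the SQL that joins the temp relation to the target.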

Note: @Mike Stanley originally posted this reply in Slack. It might not have transferred perfectly.