How can dbt models be organised for better maintainability and performance?

Hello everyone :smiling_face_with_three_hearts:

I need guidance on current best practices for maintaining and improving the performance of dbt models, as I am working on a project that involves migrating our ETL processes to dbt. We have a lot of complex transformations in our data warehouse, including multiple joins, aggregations, and window functions.

Which organisational techniques for dbt models work well in practice? I have seen suggestions to organise models by business domain, but I am not sure how to manage dependencies between models and strike a balance between modularity and complexity.
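To make my current understanding concrete, here is a minimal sketch of the staging-then-marts layering I have seen recommended; the model and source names (`raw`, `stg_orders`, `fct_orders`, and so on) are placeholders, not our real project:

```sql
-- models/staging/stg_orders.sql
-- Staging layer: one model per source table, light renaming and casting only
select
    order_id,
    customer_id,
    cast(order_total as number(38, 2)) as order_total,
    ordered_at
from {{ source('raw', 'orders') }}
```

```sql
-- models/marts/fct_orders.sql
-- Mart layer: business logic, built only on staging models via ref()
select
    o.order_id,
    o.customer_id,
    c.customer_segment,
    o.order_total
from {{ ref('stg_orders') }} as o
left join {{ ref('stg_customers') }} as c
    on o.customer_id = c.customer_id
```

Is this two-layer split enough, or do people find an intermediate layer necessary once the joins get heavy?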

What common performance issues should one be aware of when creating dbt models? Are there any specific patterns on Snowflake, for instance, that are known to be costly?
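To give an example of the kind of thing I am wondering about (table and column names are just placeholders):

```sql
-- A pattern I suspect is costly: a window function with no "partition by"
-- forces Snowflake to sort the entire table across all micro-partitions
select
    *,
    row_number() over (order by created_at) as event_rank
from {{ ref('stg_events') }}
```

```sql
-- One mitigation I have read about: clustering large tables so Snowflake
-- can prune micro-partitions on common filter columns
{{ config(
    materialized = 'table',
    cluster_by = ['event_date']
) }}
select * from {{ ref('stg_events') }}
```

Are patterns like the first one actually the usual culprits, or are there bigger traps to watch for?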

What approach do you use for testing and documenting your dbt models? I have been using the built-in dbt tests to confirm accuracy and data quality, but I am looking for ways to strengthen our test strategy.
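For context, our current tests are just the generic ones declared in schema YAML, along the lines of this sketch (file and model names are placeholders):

```yaml
# models/staging/stg_orders.yml
version: 2

models:
  - name: stg_orders
    description: "One row per order, lightly cleaned from the raw source"
    columns:
      - name: order_id
        description: "Primary key of the order"
        tests:
          - unique
          - not_null
      - name: customer_id
        tests:
          - not_null
          - relationships:
              to: ref('stg_customers')
              field: customer_id
```

Beyond these generic tests, are packages like dbt_utils or dbt_expectations worth adopting, or do most teams end up writing custom singular tests?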

I am also thinking about using incremental models to manage big datasets. Which practices work best for setting up incremental models in dbt, and how do you choose between the various materialisations such as incremental, ephemeral, view, and table?
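Here is a rough sketch of the kind of incremental model I have in mind on Snowflake (the names and the choice of merge strategy are assumptions on my part):

```sql
-- models/marts/fct_events.sql
{{ config(
    materialized = 'incremental',
    unique_key = 'event_id',
    incremental_strategy = 'merge'
) }}

select
    event_id,
    user_id,
    event_type,
    event_at
from {{ ref('stg_events') }}

{% if is_incremental() %}
  -- on incremental runs, only process rows newer than the current max in the target
  where event_at > (select max(event_at) from {{ this }})
{% endif %}
```

Does the max-timestamp filter hold up in practice, or do late-arriving rows make a lookback window necessary?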

What kind of CI/CD pipeline integrations have you built with dbt? Our goals are to automate deployment and to ensure that any changes to our dbt models undergo proper testing and validation.
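The rough shape I was imagining is a GitHub Actions workflow like the sketch below; everything here (the file name, the secrets, the prod-manifest path) is hypothetical and assumes profiles.yml reads credentials from environment variables:

```yaml
# .github/workflows/dbt_ci.yml
name: dbt CI

on:
  pull_request:

jobs:
  dbt-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-snowflake
      - run: dbt deps
      # build and test only the models changed in this PR, plus their
      # downstream dependents, deferring unchanged refs to production
      # (requires a manifest.json saved from the last production run)
      - run: dbt build --select state:modified+ --defer --state ./prod-manifest
        env:
          SNOWFLAKE_ACCOUNT: ${{ secrets.SNOWFLAKE_ACCOUNT }}
          SNOWFLAKE_USER: ${{ secrets.SNOWFLAKE_USER }}
          SNOWFLAKE_PASSWORD: ${{ secrets.SNOWFLAKE_PASSWORD }}
```

Is this slim-CI pattern (state comparison plus defer) what most teams use, or is a full dbt build on every PR more common?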

Also, I have gone through some related posts, such as https://discourse.getdbt.com/t/what-is-the-best-way-to-servicenow-use-the-cosmos-library-with-tags-given-this-architecture/13132, but have not found a solution there. I am keen to hear this community's suggestions for improving our dbt setup.

It would be great to learn about any tools you have found most beneficial, and any guidance on query performance tuning would also be helpful.

Thank you :innocent: