dbt Cloud is a great option, depending on the size of your team and your data engineering maturity.
At GitLab, we run dbt in production via Airflow. Our DAGs are defined in this part of our repo. We run Airflow on Kubernetes in GCP. Our Docker images are stored in this project.
For CI, we use GitLab CI. In merge requests, our jobs are set to run against a separate Snowflake database (a clone). Here are all the job definitions for dbt. The rest of the CI pipeline is defined here.
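Very roughly, an MR-only dbt job in `.gitlab-ci.yml` can look something like the sketch below. This is just an illustration, not our actual config: the job name, stage, image, and the clone-naming convention are all placeholders.

```yaml
# Minimal sketch: run dbt only on merge request pipelines, pointing it
# at a per-branch clone database instead of production.
# Image name, variable names, and clone naming are placeholders.
dbt-run-mr:
  stage: test
  image: registry.example.com/dbt-image:latest
  variables:
    # The profile reads this env var to decide where to write.
    SNOWFLAKE_DATABASE: "${CI_COMMIT_REF_SLUG}_CLONE"
  script:
    - dbt deps --profiles-dir .
    - dbt run --profiles-dir .
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
```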
As a general principle, I think you want your MRs to run dbt against real data but write to either a dev schema or a separate DB clone, like we do. If you have dbt reference environment variables for where to write, you can control this quite nicely that way; there's a sketch of that below. (See our profile here for details.)
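For illustration, a Snowflake target in `profiles.yml` that pulls its connection details from environment variables could look like this. The profile name and variable names are placeholders, not the ones we actually use:

```yaml
# Example profiles.yml target: every value that differs between environments
# comes from an environment variable via env_var(), so CI and production
# just export different values.
gitlab_snowflake:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: "{{ env_var('SNOWFLAKE_ACCOUNT') }}"
      user: "{{ env_var('SNOWFLAKE_USER') }}"
      password: "{{ env_var('SNOWFLAKE_PASSWORD') }}"
      role: "{{ env_var('SNOWFLAKE_ROLE') }}"
      warehouse: "{{ env_var('SNOWFLAKE_WAREHOUSE') }}"
      # In MRs this points at a clone DB / dev schema; in production runs
      # it points at the real database and schema.
      database: "{{ env_var('SNOWFLAKE_DATABASE') }}"
      schema: "{{ env_var('SNOWFLAKE_SCHEMA') }}"
      threads: 8
```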
Hope this is useful!