I think you're mixing too many concerns. If you want a scalable and easily maintainable solution, you should separate the logic: dbt's primary purpose is to run the models, so monitoring should be implemented as something that doesn't depend on the models themselves.
The `dbt_artifacts` package collects dbt run results and stores them for later analysis, so you don't need to analyze dbt at runtime to find optimization candidates. If you want to control how long models run, you can add a timeout parameter to your `profiles.yml` file to fail long-running models after a specified number of seconds.
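For reference, here is a minimal sketch of pulling in the package via `packages.yml` (the version pin is an assumption; check dbt Hub for the current release), followed by `dbt deps`:

```yaml
# packages.yml -- version pin is a placeholder; check hub.getdbt.com for the latest
packages:
  - package: brooklyn-data/dbt_artifacts
    version: 2.6.2
```

The exact timeout setting is adapter-specific, so treat this as a sketch rather than a universal recipe. For example, the BigQuery adapter exposes `job_execution_timeout_seconds` in the profile:

```yaml
# profiles.yml -- sketch for a BigQuery target; field names vary by adapter
my_project:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: oauth
      project: my-gcp-project   # assumption: your GCP project id
      dataset: analytics
      threads: 4
      job_execution_timeout_seconds: 300  # fail queries running longer than 5 minutes
```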
However, if you want to track dbt progress, it depends on how you run dbt: either you check the CLI output, where you can see each model's status and how many models are left, or you run each dbt model as a separate task (for example, in Airflow) and check the status and duration of each task.
Checking CLI output is a manual task, but with Airflow you can separate the monitoring from the dbt processes, as in the sketch below.
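Here is a rough sketch of the one-task-per-model approach, assuming Airflow 2 and a dbt project available on the worker (the model names, DAG id, and timeout values are placeholders, not anything from your setup):

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Placeholder model names; in practice you could parse them from manifest.json.
DBT_MODELS = ["stg_orders", "fct_orders"]

with DAG(
    dag_id="dbt_run_per_model",  # hypothetical DAG id
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    previous = None
    for model in DBT_MODELS:
        # One task per model: Airflow then tracks the status and
        # duration of each model run individually.
        task = BashOperator(
            task_id=f"dbt_run_{model}",
            bash_command=f"dbt run --select {model}",
            execution_timeout=timedelta(minutes=10),  # per-model timeout on top
        )
        # Chain the models sequentially; replace with real dependencies as needed.
        if previous:
            previous >> task
        previous = task
```

Each model then shows up as its own task in the Airflow UI, so you get per-model status and duration without touching the models themselves.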
I am not very familiar with dbt Cloud, but maybe @joellabes could explain whether dbt Cloud has some kind of built-in solution for this.