Long story short:
I have set up all of my run jobs to execute particular groups of models, with each group sharing the same directory. I assumed that by using dbt run --models specific_model, a job would succeed as long as all of its own components were working.
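For reference, each job boils down to a selector-scoped invocation along these lines (the first selector is the one from the run log below; the second is only an illustrative placeholder):

```shell
# Job A: only the intercom staging models (selector taken from the run log below)
dbt run --models staging.misc.intercom

# Job B: a different group of models living in their own directory (placeholder selector)
dbt run --models biz.acquisition
```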
In parallel, I am building a new model, and in a rush I left it broken when I committed to the master branch of my repo. I figured that because this new model is separate from all the other executions it wouldn't be a problem, but then I woke up to a load of emails about failed jobs (every single job, in fact).
I am including a snippet of what happened below, but I wanted to know whether this is a known issue, an intended feature, or whether I need to set up something else in the job definitions to avoid this happening again.
Run log:
running dbt with arguments Namespace(cls=<class 'dbt.task.run.RunTask'>, debug=False, exclude=None, full_refresh=False, log_cache_events=False, models=['staging.misc.intercom'], profile='user', profiles_dir='/tmp/jobs/2174923/.dbt', project_dir=None, record_timing_info=None, single_threaded=False, strict=False, target='default', test_new_parser=False, threads=None, use_cache=True, vars='{}', version_check=True, warn_error=False, which='run')
Error:
2019-11-07 06:06:56,308 (MainThread): Compilation Error in model acquisition_volumes_timeseries (models/biz/acquisition/acquisition_volumes_timeseries.sql) Model 'model.freetrade.acquisition_volumes_timeseries' depends on model 'all_platforms_attribution_events' which was not found or is disabled
Please note that these models don't share a directory or anything, and they don't ref() each other.
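For completeness, the dependency that the error complains about is just a plain ref(); the failing model looks roughly like the sketch below (the model names are taken from the error message, the SQL body itself is only a guess):

```sql
-- models/biz/acquisition/acquisition_volumes_timeseries.sql
-- Sketch only: the relevant part is the ref() to a model that dbt
-- reports as not found or disabled.
select *
from {{ ref('all_platforms_attribution_events') }}
```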