Compilation error in one model breaks runs for unrelated models

Long story short:
I have set up run jobs that each execute a particular group of models, all sharing the same project directory.
I assumed that with dbt run --models specific_model, a job would succeed as long as its own models were working.
In parallel, I was building a new model and, in a rush, left it broken when I committed to the master branch of my repo. I figured that because this new model is separate from all the other executions it wouldn’t be a problem, but then I woke up to a flood of emails about failed jobs (every single job, in fact).

I am including a snippet of what happened below, but I wanted to know whether this is a known issue, an intended feature, or whether I need to set something up in the job definitions to avoid it happening again.

Run log:
running dbt with arguments Namespace(cls=<class ''>, debug=False, exclude=None, full_refresh=False, log_cache_events=False, models=['staging.misc.intercom'], profile='user', profiles_dir='/tmp/jobs/2174923/.dbt', project_dir=None, record_timing_info=None, single_threaded=False, strict=False, target='default', test_new_parser=False, threads=None, use_cache=True, vars='{}', version_check=True, warn_error=False, which='run')

2019-11-07 06:06:56,308 (MainThread): Compilation Error in model acquisition_volumes_timeseries (models/biz/acquisition/acquisition_volumes_timeseries.sql) Model 'model.freetrade.acquisition_volumes_timeseries' depends on model 'all_platforms_attribution_events' which was not found or is disabled

Please note that these models don’t share a directory, and they don’t ref each other.

Hi @Goldmember, my two cents: dbt has to parse all models in order to build the model DAG, whatever the model selection. So dbt fails on your broken model and gives up, as it cannot infer the complete DAG. But maybe that’s not the whole story, as one could argue the DAG could be ignored when unrelated models are selected for a run.

Best regards


Yeah, my reasoning was that the DAGs for the executed models are separate, but I guess this is a small limitation we have to live with.

@fetanchaud is spot-on here - dbt needs to compile the whole project in order to run any models at all. The rationale is that if a model can’t be compiled, then dbt can’t know the correct shape of the graph. The broken model might be wholly unrelated to the models selected in a dbt run, or it might be a parent or child of a selected model in a run like dbt run --models +my_model+.
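To make this concrete, here is a minimal sketch (hypothetical code, not dbt’s actual implementation) of why one broken ref fails every job: every ref in the project must resolve before the DAG exists, and only then can a selection be applied to it. The model names below mirror the ones from the run log.

```python
# Hypothetical sketch of dbt's compile-then-select behavior.
# The project is modeled as a dict: model name -> list of ref()'d parents.
MODELS = {
    "all_events": [],
    "intercom": ["all_events"],
    # The broken model: it refs a model that does not exist in the project.
    "acquisition_volumes_timeseries": ["all_platforms_attribution_events"],
}

def compile_dag(models):
    """Resolve every ref in the project before any selection happens."""
    for model, parents in models.items():
        for parent in parents:
            if parent not in models:
                raise RuntimeError(
                    f"Model '{model}' depends on model '{parent}' "
                    "which was not found or is disabled"
                )
    return models

def run(selected):
    # The whole project is compiled first, regardless of selection,
    # so a single broken ref fails the run even for unrelated models.
    dag = compile_dag(MODELS)
    return [m for m in dag if m in selected]
```

Running run(["intercom"]) here raises the compilation error even though "intercom" has no relationship to the broken model, which matches the behavior seen in the original post’s run log.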

I can definitely imagine adding a flag to dbt that would surface compilation errors as warnings. If a model with a compilation error is referenced by a model included in the dbt run, that would still be a proper error; if the broken model is instead unrelated, the compilation error could just be surfaced as a warning.

I don’t think this is something we’re going to prioritize in the near future, but I do agree that the current implementation is a little bit unintuitive!

Thanks for adding to @fetanchaud’s response. Keep up the great work!