We are evaluating dbt for use in our data engineering workflow.
Currently, we use Azure Data Factory to handle the complete flow. Each target table is populated independently of the other tables in the database, so we have multiple pipelines running in parallel, each populating a different target table.
So, if we adopt dbt, can we still trigger models independently and concurrently from different pipelines?
If yes, what is the behavior when there are dependency conflicts between the models, when two pipelines trigger the same model at the same time, and how does dbt log gathering work across concurrent runs?
If no, does that mean the calling environment needs to check that no dbt job is currently running before executing a new one?
In addition, we plan to call the dbt job from Azure Data Factory. Is there a way to collect the invocation_id and other details of that specific run, along with its exit status?