Hi folks!
I’m looking for a way to ensure that the downstream models we expose in our BI tool (Looker) have passed all data tests, and if they haven’t, to not materialize the data.
My plan was to execute a series of transformations that produce the exact tables that will be fed into Looker (namespaced somehow as “staging” tables), run all the assertions against them, and if everything passes, copy that data into the dataset that Looker reads.
However, this requires running a combination of `dbt run`, then `dbt test`, and lastly another `dbt run` (for the final copy), as sketched below.
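To make that concrete, the sequence I have in mind would look roughly like this (the `staging` and `publish` selector names are placeholders, not real selectors in our project):

```sh
# 1. Build the staging copies of the tables Looker will eventually read
#    ("staging" is a hypothetical folder/selector name)
dbt run --select staging

# 2. Run all tests/assertions against those staging tables;
#    dbt test exits non-zero on failure, so a CI job would stop here
dbt test --select staging

# 3. Only reached if step 2 succeeded: materialize the copies Looker reads
#    ("publish" is likewise a placeholder selector)
dbt run --select publish
```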
For more context, our company is in the process of migrating from Dataform to dbt. One piece of Dataform functionality is the ability to write assertions that can be declared as dependencies, so if an assertion fails, downstream models don’t execute.
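For anyone unfamiliar with Dataform, the pattern I mean is roughly this (file and table names here are made up for illustration): an assertion is declared as an explicit dependency of a downstream table, so the table only builds if the assertion passes.

```sqlx
-- definitions/orders_not_null.sqlx: the assertion passes when this query returns zero rows
config { type: "assertion" }
SELECT * FROM ${ref("stg_orders")} WHERE order_id IS NULL
```

```sqlx
-- definitions/orders_publish.sqlx: only builds if the assertion above passed
config {
  type: "table",
  dependencies: ["orders_not_null"]
}
SELECT * FROM ${ref("stg_orders")}
```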
How can I achieve this pattern in dbt?
Many thanks!