Best practices to ensure models only materialize when assertions pass

Hi folks,

I’m looking for a way to ensure that the downstream models that we expose in our BI tool (Looker) have passed all data tests, and to skip materializing the data if they haven’t.

My plan was to execute a series of transformations that produce the exact tables that will be fed into Looker (namespaced somehow as “staging” tables), run all the assertions, and, if everything passes, copy that data to the dataset that Looker reads.

However, this requires chaining several commands:

  • dbt run,
  • dbt test,
  • and lastly another dbt run (for the final copy).
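As a sketch, those three steps can be chained in a shell script so that a failing test aborts the final copy. The selector names (`staging_looker`, `publish_looker`) are placeholders for whatever your project actually uses:

```shell
#!/usr/bin/env bash
set -e  # stop at the first command that exits non-zero

# 1. Build the staging versions of the Looker-facing tables
dbt run --select staging_looker

# 2. Run the data tests; `dbt test` exits non-zero if any test fails,
#    so `set -e` prevents the copy step below from ever running
dbt test --select staging_looker

# 3. Copy the verified data into the dataset Looker reads
dbt run --select publish_looker
```

Worth noting: a single `dbt build` runs models and their tests together in DAG order and skips downstream models when an upstream test errors, which is close to Dataform’s assertion-as-dependency behavior.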

For more context, our company is in the process of migrating from Dataform to dbt. One Dataform feature is the ability to write assertions that can be declared as dependencies, so if an assertion fails, downstream models don’t execute.
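For comparison, those Dataform-style assertions map to dbt’s generic tests, declared in a YAML file alongside the models. The model and column names below are hypothetical:

```yaml
# models/schema.yml
version: 2

models:
  - name: stg_orders
    columns:
      - name: order_id
        tests:
          - unique
          - not_null
```

On their own, these tests don’t block downstream models under a plain `dbt run`; they only gate the DAG when invoked via `dbt build` or an explicit `dbt test` step like the one described above.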

How can I achieve this pattern in dbt?

Many thanks!

Hi, you may want to check out this Discourse post about “blue/green deployment”: Performing a blue/green deploy of your dbt project on Snowflake

From the post:

What’s a blue/green deployment?
Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments. Let’s take a website as an example — when you need to deploy a new version of your website, the new version is created in a separate environment. After the build is successful, traffic is routed to the new build. If anything goes wrong, you just don’t switch over the traffic.

I haven’t implemented this strategy personally, but it sounds like it could help with your use case!
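On Snowflake specifically, the switch-over described in that post boils down to an atomic database swap once the staging build has passed its tests. The database names here are placeholders:

```sql
-- Run only after `dbt run` and `dbt test` have succeeded against analytics_staging.
-- SWAP WITH atomically exchanges the two databases, so Looker immediately sees
-- the freshly tested build, and the old production data remains available as a rollback.
ALTER DATABASE analytics_staging SWAP WITH analytics_production;
```

Because the swap is a single atomic statement, Looker never reads a half-built dataset: it sees either the old production data or the fully tested new build.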