What are the dbt commands you run in your production deployment of dbt?

Hi community!

I’d like to know how different companies run their development and production runs of dbt. In particular:

  • What are the execution steps that you run in production? Do you have different “layers” that you run and then test? For example, based on @claire’s recent post on How Fishtown Analytics structure their dbt projects, the sequence seems to be:

    1. Run ‘staging’ models
    2. Test ‘staging’ models (same source file for raw tables + staging tables)
    3. Run ‘marts’ models
    4. Test ‘marts’ models (same schema file for intermediate tables + final tables)
  • What is the syntax for these different steps? Do you do something specific?

  • Do you use more than one database/schema during this process or just the same database/schema for the whole process?

Thanks!

The environments we have

We typically think about three environments:

  • dev: A dbt user developing on their own computer. Each dbt user has a separate dev environment
  • ci: When we open up a Pull Request on a dbt repo, our transformations are run to check for any failures. We have one ci environment for each PR.
  • prod: The “source of truth” – our regularly scheduled runs write to this environment

Typically, we implement environments using schemas within the same database. You can read more about it here. We use dbt Cloud for both our ci and prod runs – Cloud takes care of running each PR against a separate schema, and tears that schema down for us afterwards.
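
For example, each developer’s dev environment is typically just a target in their profiles.yml that writes to a personal schema, while prod points at the shared “source of truth” schema. A minimal sketch, assuming a Postgres-style warehouse (the profile name, hosts, names, and credentials below are all placeholders):

# profiles.yml (sketch only; names and credentials are placeholders)
my_project:
  target: dev
  outputs:
    dev:
      type: postgres
      host: warehouse.example.com
      user: jane
      pass: "<password>"
      port: 5432
      dbname: analytics
      schema: dbt_jane      # each developer writes to their own schema
      threads: 4
    prod:
      type: postgres
      host: warehouse.example.com
      user: dbt_prod
      pass: "<password>"
      port: 5432
      dbname: analytics
      schema: analytics     # the shared production schema
      threads: 4

Since we use dbt Cloud for ci and prod, those connections are configured in Cloud rather than in a local profiles.yml; the sketch above just shows the shape of the idea.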

The commands we run in CI and prod

The commands we tend to run in both CI and prod are:

$ dbt deps
$ dbt seed
$ dbt run
$ dbt test

Testing at the end of the run in prod can feel less than ideal, though: if a test fails, the model has already been created.

Understanding why tests fail

An important thing to consider here is why tests fail. Conceptually, I think we can sort test failures into three categories:
:one:. The SQL in your model doesn’t do what you intended. For example, you forgot to write a distinct, resulting in duplicate values.
:two:. An assumption about your source data is wrong. For example, you assumed a key in a source table was unique (see the schema test sketch below).
:three:. A previously-true assumption about your underlying source data is no longer true. For example, your data source started including duplicate IDs, or your ETL process is duplicating rows.
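
For example, the assumption in scenario :two: is usually written down as a schema test on the model. A minimal sketch, with made-up model and column names:

# e.g. models/staging/stg_orders.yml (model and column names are made up)
version: 2

models:
  - name: stg_orders
    columns:
      - name: order_id
        tests:
          - unique
          - not_null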

In scenarios :one: and :two:, 99% of the time, we catch these failing tests in dev or ci, and the code doesn’t get merged until they are fixed.

However, scenario :three: presents a bit of a challenge – failing tests will happen on a production run, meaning the (potentially unreliable) model has already been created in our production schema.

So how should I handle scenario :three:?

Generally, we rely on notifications to tell us that a production test is failing (Cloud has built in email and Slack notifications), and try to investigate it as promptly as possible, communicating to any stakeholders if data in our production analytics environment is unreliable. These kinds of failures happen so infrequently that we’re comfortable with this approach.

If you’re not comfortable with this, it may be preferable to instead prevent your model from being created when a test fails. We have lots of discussion on this GitHub issue about how we might be able to change dbt to support testing during a dbt run, with a model only being created if its tests pass (the summary of the comments is: it’s really hard).

In the meantime, some changes in v0.13.0 could help out here! Since scenario :three: arises when an assumption about a data source has changed, you can define these data sources as Sources. Once they are defined as Sources, you can write the tests that represent your assumptions against the sources themselves instead of against your models (there’s a sketch of a source definition further down). Then, in your production run, implement logic like:

  1. Test my sources first
  2. If the tests pass, run all my models
  3. Test all my models (for good measure, though we expect all the tests to pass)

This workflow might look something like:

$ dbt deps
$ dbt seed
$ dbt test --models source:*
$ dbt run
$ dbt test --exclude source:*
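
For reference, the source:* selectors above operate on source definitions declared in a schema file. A minimal sketch, with made-up source, table, and column names:

# e.g. models/staging/src_jaffle_shop.yml (names are made up)
version: 2

sources:
  - name: jaffle_shop
    schema: raw_jaffle_shop
    tables:
      - name: orders
        columns:
          - name: id
            tests:
              - unique
              - not_null

With this in place, dbt test --models source:* runs only the tests declared on sources, so a broken assumption about the raw data stops the job before any models are built.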

Hope that helps! Let me know if anything is unclear; this answer went pretty deep!

Great write up!

In one of my projects, which was developed before sources became available, I run dbt test --models myfolder.base before dbt run --models myfolder+. Then I test everything except the base models that were already tested (roughly as sketched below).
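
Spelled out, that sequence looks roughly like this:

$ dbt test --models myfolder.base
$ dbt run --models myfolder+
$ dbt test --exclude myfolder.base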

If testing sources is not enough, I imagine one could use tags to flag models that need to be tested prior to the project being built.
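
As a rough sketch of that idea (assuming a dbt version with tag support, and a made-up tag name): add {{ config(tags=["pre_build"]) }} to the models whose tests should gate the build, then run:

$ dbt test --models tag:pre_build
$ dbt run
$ dbt test --exclude tag:pre_build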

One extra note! We typically run any snapshot (formerly archive) commands separately from our prod runs. This is because we don’t want to tie the timing of these commands to the runs of our project, so we keep them as separate jobs.

Further, we also run snapshot-freshness commands as separate jobs for the same reason :slight_smile:
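
For reference, those jobs just run the dedicated commands on their own schedules:

$ dbt snapshot
$ dbt source snapshot-freshness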