Testing with a fixed data set

I would like to test our models with fixed test data, not the real data. These tests would verify the model output against known input data. The goal is to separate the testing of the model logic from the actual data.

I'm thinking about seeding the model source tables with fixed data and then running the tests. We're using Snowflake and usually do development and testing in cloned test databases, so truncating the sources and replacing them with test data is not a problem.

A problem I see with my approach is that these logic tests would fail if run against the CI or PROD databases, because I can't seed the source tables there. Is there a way to group tests and, for example, run only the data tests?

Is anyone practising this type of logic testing?


Hey @jtke! Such a good question! I have done a version of this where I use fake data while developing models: usually my fake data consists of all my known edge cases so that I can check that my SQL works. Unfortunately, I usually only do that during development, and switch out my references to the fake data before merging the code.

However, I have thought about extending this to be more long-lived in my project! Here’s what I would do:

1. Add seed files for your expected input and output to your dbt project

Name them sensibly, like input_subscription_events and expected_subscription_periods. Normally I like to design these seed files to have lots of tricky cases so I can be sure that my SQL is working as intended. Don’t forget to run dbt seed.
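
For illustration, the two seed files might look something like this (the file names follow the convention above, but the columns are made up for this example and would mirror your real staging model and expected output):

data/input_subscription_events.csv:

subscription_id,event_type,event_date
1,subscription_started,2020-01-01
1,subscription_cancelled,2020-03-01
2,subscription_started,2020-02-15

data/expected_subscription_periods.csv:

subscription_id,period_start,period_end
1,2020-01-01,2020-03-01
2,2020-02-15,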

If you want to run things with “real”, but static, data, you could take the time to recreate the tables in a separate schema – check out this post on some approaches there!

2. In your models, ref the static data

But wrap it up in some conditional logic! (Yup, you can ref seeds if you decide to go down that route).

-- subscription_periods.sql

with subscription_events as (
  {% if target.name == 'ci' %}
  select * from {{ ref('input_subscription_events') }}
  {% else %}
  select * from {{ ref('stg_subscription_events') }}
  {% endif %}
)
...

3. Add an “equality” test to check that the expected input matches the expected output

We use this test to check that two relations match exactly. You can see us use this testing pattern all throughout dbt_utils in its own integration tests (e.g. here)

# schema.yml
version: 2

models:
  - name: subscription_periods
    tests:
      - dbt_utils.equality:
          compare_model: ref('expected_subscription_periods')

4. Run your test with --target ci

The magic incantation here is:

dbt test --target ci

It should all pass if your transformations work!
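
For this to work, your profiles.yml needs an output named ci for dbt to target. Here's a rough sketch, assuming a Snowflake profile called jaffle_shop (all connection details below are placeholders):

# profiles.yml
jaffle_shop:
  target: dev
  outputs:
    dev:
      type: snowflake
      account: my_account
      user: my_user
      password: "{{ env_var('SNOWFLAKE_PASSWORD') }}"
      role: transformer
      database: analytics
      warehouse: transforming
      schema: dbt_dev
      threads: 4
    ci:
      type: snowflake
      account: my_account
      user: my_user
      password: "{{ env_var('SNOWFLAKE_PASSWORD') }}"
      role: transformer
      database: analytics_ci
      warehouse: transforming
      schema: dbt_ci
      threads: 4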

But then when you run it in any other environment, that test will fail, so…

5. Parameterize the equality test to accept an environment argument.

You’ll have to add your own version of the equality test to your project, and add an extra argument. You want to get to something like this:

# schema.yml

version: 2

models:
  - name: subscription_periods
    tests:
      - jaffle_shop.equality: # replace with your project name
          compare_model: ref('expected_subscription_periods')
          env: ci

I haven’t done this last step for this particular macro, but I wrote about the structure of making a test environment aware in another Discourse article.
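
Roughly, though, the shape of it would be something like the sketch below. This is untested: a simplified, project-local copy of dbt_utils' equality logic, written in the older schema-test style where the compiled query returns a single failure count (newer dbt versions expect the failing rows themselves), so adjust it for your dbt version.

-- macros/test_equality.sql
{% macro test_equality(model, compare_model, env='ci') %}

{% if target.name != env %}

-- not the unit-test environment: report zero failures so the test always passes
select 0

{% else %}

with a_minus_b as (

    select * from {{ model }}
    except
    select * from {{ compare_model }}

),

b_minus_a as (

    select * from {{ compare_model }}
    except
    select * from {{ model }}

)

select count(*) as failures
from (
    select * from a_minus_b
    union all
    select * from b_minus_a
) as diff

{% endif %}

{% endmacro %}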

Now when you execute dbt test --target ci, you'll be comparing your static data. In any other environment, you'll be running against the real data, and the test shouldn't fail in either case.

Let me know how you go!


Hi

The approach I'm now pursuing is a bit similar, but I found it easier to use a separate run-unit-tests.sh script to organize files into the correct places and then call dbt inside the script. I have grouped data tests by model under “tests/”, and the setup script copies the relevant tests into “tests/run” at runtime.

Test input data is in seed files named like “test__<model_source_ref>.csv”, and I use a macro to substitute the real ref with the test ref if target == “unit-test”. Preferably, I would like to override ref() and make it do the substitution transparently; I don't know if that's possible. Anyway, I would like to avoid having all the conditional logic in the model file that you show in step #2.

I wish I could override project settings, like test-path, with command line parameters. Then I would not need to copy files around. Now I might end up modifying the project.yml in the setup script before running the unit tests, depending on how complex the seeding becomes.
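
In outline, the setup script does something like this (a simplified sketch; the real script is still changing, so the paths, model name, and target name here are just illustrative):

#!/bin/bash
# run-unit-tests.sh (simplified sketch)
# Assumes dbt_project.yml points test-paths at tests/run and profiles.yml
# defines a target named unit-test.
set -euo pipefail

MODEL=$1

# stage the data tests for this model where dbt will pick them up
rm -rf tests/run
mkdir -p tests/run
cp "tests/${MODEL}/"*.sql tests/run/

# load the test__* seed files, build the model, then run the staged data tests
dbt seed --target unit-test
dbt run --target unit-test --models "${MODEL}"
dbt test --target unit-test --data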

This is still a work in progress and not yet functional. Hopefully I'll get it working today.


I got the basics working, and I can now test a model with a predefined data set. I made a somewhat ugly macro to replace the model ref with a test data source when target == unit-test. Any ideas on how to make this better:

{% macro ref2(table_name) %}

{% if target.name == 'unit-test' %}
{{ return(ref(table_name).database + '.' + ref(table_name).schema + '.test__' + ref(table_name).identifier) }}
{% else %}
{{ return(ref(table_name)) }}
{% endif %}

{% endmacro %}


You can use the get_relation method here to do this a bit more cleanly (docs) – it’s longer but IMO more readable.

{% macro ref_for_env(model_name) %}

{%- set normal_ref_relation = ref(model_name) -%}

{% if target.name == 'unit-test' %}

{%- set test_ref_relation = adapter.get_relation(
      database = normal_ref_relation.database,
      schema = normal_ref_relation.schema,
      identifier = 'test__' ~ normal_ref_relation.identifier
) -%}
      
{{ return(test_ref_relation) }}

{% else %}

{{ return(normal_ref_relation) }}

{% endif %}

{% endmacro %}

^ You might want to do some whitespace control on that.
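
Then, in a model, you'd call it in place of ref, something like:

-- subscription_periods.sql
with subscription_events as (
    select * from {{ ref_for_env('stg_subscription_events') }}
)
select * from subscription_events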


I wanted to continue the conversation because I think having an easier method to do unit testing would definitely improve dbt.
In my opinion, using behave would be a great fit for dbt, as it would make tests as readable to analysts as models are. I played with other methods, but I think this is easier to read. Let's say we want to unit test a model called "fom" that counts the number of days in a month:

  Scenario: run a sample unit test
     Given calendar is loaded with this data
         | dt         | first_day_of_month |
         | 2020-01-01 | 2020-01-01         |
         | 2020-01-02 | 2020-01-01         |
         | 2020-02-02 | 2020-02-01         |
      When we run the load for fom
      Then the results of the model are
         | first_day_of_month | count_days |
         | 2020-01-01         | 2          |
         | 2020-02-01         | 1          |
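
For reference, the model under test here could be as simple as the sketch below (the actual model in the POC may differ):

-- fom.sql
select
    first_day_of_month,
    count(*) as count_days
from {{ ref('calendar') }}
group by first_day_of_month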

There’s a working POC in GitHub - jmriego/dbt-bdd
Would something like this make sense? Any feedback would be great, as we are prioritizing this again and would be happy to contribute it to dbt if it's a good fit.

@jmriego I've spent a few days working on something quite similar to this, and I believe the lack of self-contained unit tests for DBT models remains one of the biggest pain points for mature use of this project. Most important to me is to create an easy-to-define unit-test format that does not require any changes to how existing DBT models are written, so it can be used seamlessly against any number of mature DBT projects with hundreds of models. It should also require defining fakes only for a model's immediate dependencies, not its transitive dependencies.

I'm not keen on the recommended approach of a super-ref macro that dynamically swaps between real models and fakes, as that feels like testing concerns infecting the implementation. To test Model C, which depends on Source A and Model B, the general strategy for me is to create fakes for Source A (the relevant tables therein) and Model B, as you've done in your BDD given clause via dbt seed. This sounds simple, but there are a number of problems here:

  1. The model being tested must be materialized ephemerally, or else the compiled sql will simply refer to the corresponding namespace in the DB giving us no surface to inject our fakes.
  2. The model being tested must have its model dependencies materialized non-ephemerally or else the compiled sql will bring the model-dependencies sql in-line as a CTE which means we cannot inject a fake.
  3. When creating seeds for the test’s dependencies, there will be node namespace collisions between the seed fakes and the defined models (ie. the fake seed B vs. the Model B). These are sorta deliberate, as we want the fake seeds to be defined in the namespaces of the models they are faking, however DBT commands won’t work on such a project structure as it must construct a DAG on all its invocations.

The main issue is that working around some of the holes in DBT's internal APIs is quite awkward, and in its current form the tool exists as a meta-program that makes external calls to the DBT binary rather than something easily pluggable within DBT. The program basically has to rewrite the DBT project file (it must have different versions between creating the fakes and running the unit tests, to avoid namespace collisions when DBT constructs its acyclic graph) to inject a custom seed configuration and make sure model configurations have the correct materialization strategy.

The program then has to create its own property file to define the model tests (which for now use dbt_utils.equality). The ugliness here is that DBT allows property files to be defined in multiple files scattered throughout the project's source-path, and there, again, namespace collisions arise. DBT does not provide a mechanism to ignore certain property files or establish an order of precedence; it indiscriminately tries to load and parse every .yml file in the source-path, so the program must modify any of these files, temporarily changing their extensions to force DBT to ignore them (yuck!).

I've gotten a working version of this, but I'm not happy with the gross file-system side effects and the generally hacky smell of the process. It fits terribly into DBT's existing abstractions, and a more native solution would be preferable. Until then, this is what I've settled on.

Definitely agree with you on that, @clee. I think both that unit testing is really important for mature use of the project and that tests should not require modifying the existing DBT models.

The example I put there doesn't really use a super-ref macro, but it still requires modifying the existing ref function. Using your example of testing Model C, which depends on Source A and Model B, the testing scripts run dbt jobs while passing variables that look like this: {'A': 'testcase1', 'B': 'testcase1'}

So the ref function doesn't guess where the test case data lives; you configure it at run time. Like you, I didn't find a perfect solution that would allow fakes, but I think this is good enough for testing transformations.
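
To make that a bit more concrete, the ref override can be sketched roughly like this. It's a simplified illustration only: it assumes seeds are named <model>_<testcase>, which is just an example convention, and dbt-bdd's actual implementation differs in the details.

-- macros/ref.sql
-- If a test case is passed via --vars for a given model, point the ref at the
-- corresponding seed; otherwise fall back to dbt's built-in ref.
{% macro ref(model_name) %}
    {%- set test_case = var(model_name, none) -%}
    {%- if test_case is not none -%}
        {{ return(builtins.ref(model_name ~ '_' ~ test_case)) }}
    {%- else -%}
        {{ return(builtins.ref(model_name)) }}
    {%- endif -%}
{% endmacro %}

A test run for the fom example above would then look something like dbt run --models fom --vars '{calendar: testcase1}'.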