The problem I’m having
It looks like custom model config values are not respected when unit tests are executed, so a model that has a custom config and builds perfectly fine with `dbt run` fails its unit tests (as described below).
The context of why I’m trying to do this
I have a macro `foo` in a dbt package `a_package` that is used by many dbt projects in our org. I'm using a custom model config to toggle the behaviour of the macro.
{% macro should_use_foo_v2() %}
    -- It is more complex than this one line...
    {{ return(config.get('use_foo_macro_v2', default=false)) }}
{% endmacro %}

{% macro foo(relation) %}
    {% if a_package.should_use_foo_v2() %}
        {{ a_package.foo_v2(relation) }}
    {% else %}
        {{ a_package.foo_v1(relation) }}
    {% endif %}
{% endmacro %}

{% macro foo_v1(relation) %}
    -- ...
{% endmacro %}

{% macro foo_v2(relation) %}
    -- ...
{% endmacro %}
I'm testing the v2 version of the `foo` macro with a model `a_model`, running the v1 and v2 behaviours side by side as two versions of the model.
-- models/a_model_v1.sql
{{ config(
    materialized='incremental'
) }}

with t as (
    {{ a_package.foo(
        relation=ref('bar')
    ) }}
)

select * from t
-- models/a_model_v2.sql
{{ config(
    materialized='incremental',
    use_foo_macro_v2=true
) }}

with t as (
    {{ a_package.foo(
        relation=ref('bar')
    ) }}
)

select * from t
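As a sanity check outside of unit tests, the dispatch can be confirmed by compiling both models and inspecting the compiled SQL (a sketch; it assumes the default `target` path, and `<project_name>` is a placeholder for the project's name):

```sh
dbt compile --select a_model
# The v2 model's compiled SQL should contain the v2 body,
# and the v1 model's the v1 body.
cat target/compiled/<project_name>/models/a_model_v2.sql
```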
Leveraging a model config gives us a lot of flexibility in controlling how widely we roll out the new v2 behaviour: we can enable it for a single model, for all models in a sub-directory, or for all models in a project.
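For example, since custom configs can also be set hierarchically in `dbt_project.yml`, a whole sub-directory can be switched over at once (a sketch; `my_project` and `marts` are made-up names):

```yaml
# dbt_project.yml
models:
  my_project:
    marts:
      # Applies to every model under models/marts/,
      # as if set in each model's config() block.
      +use_foo_macro_v2: true
```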
However, I haven’t figured out a simple way to run the same unit tests for both versions.
I was hoping to set it up as below, so that I could run additional expectations only for v2 (using `versions.include: [2]`) and all the other expectations against both versions of the model, reusing exactly the same test definitions.
models:
  - name: "a_model"
    latest_version: 1
    versions:
      - v: 1
      - v: 2

unit_tests:
  - name: "a_model__existing_behaviour"
    model: "a_model"
    # No `versions`, and the test runs for both v1 and v2 models.
    ...
  - name: "a_model__new_behaviour"
    model: "a_model"
    versions: # Test the new behaviour only against the v2 model.
      include: [2]
    ...
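For reference, I run the tests like this (assuming dbt >= 1.8, where unit tests and the `test_type:unit` selector method are available):

```sh
# All unit tests attached to any version of a_model
dbt test --select "a_model,test_type:unit"

# Only the unit tests attached to the v2 model
dbt test --select "a_model.v2,test_type:unit"
```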
Unfortunately, all test executions for the v2 model run with `use_foo_macro_v2=false` (the config's default value), even though `a_model_v2.sql` sets `use_foo_macro_v2` to `true`.
Is there a way to run the v2 model's unit tests with the model config values respected, so that I don't need to maintain slightly different versions of the test cases like below? Thank you!
What I’ve already tried
I can reduce the duplication of test cases with YAML anchors, but this is quite complex.
unit_test__cases:
  test_case_001: &test_case_001
    name: "a_model__existing_behaviour"
    overrides: &test_case_001__overrides
      macros: &test_case_001__macros
        is_incremental: true
      vars:
    given:
      ...

unit_tests:
  - *test_case_001

  - <<: *test_case_001
    name: "a_model__existing_behaviour__for_v2"
    overrides:
      <<: *test_case_001__overrides
      macros:
        <<: *test_case_001__macros
        should_use_foo_v2: true
    # btw, if it's possible to deep-merge dicts, it would be a bit easier...

  - name: "a_model__new_behaviour"
    versions:
      include: [2]
    overrides:
      macros:
        should_use_foo_v2: true
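(For what it's worth, the reason every nesting level needs its own anchor is that YAML merge keys (`<<:`) only merge the top level of a mapping: the YAML spec has no deep merge, and as far as I know dbt's YAML parser doesn't add one, so `overrides` and `macros` each have to be re-merged by hand.)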