Output the failing test query

The problem I’m having

I am running a test on a model that checks for the existence of certain data in a join with other tables; if any rows are returned, the test fails. I want to see the rows that are returned, not just the failed test.

The context of why I’m trying to do this

I am creating various models with various data requirements. If the data I’m looking for doesn’t exist, I want to be alerted to this fact. The result of the test macro contains the data I’m looking for, but how do I surface it so that I can easily start researching it?

What I’ve already tried

  • I have the test returning rows (and failing) as expected
  • I have Slack and email alerts set up so that when the job fails, I’m notified

Some example code or error messages

This is the generic test that is applied to the model:

{% test quotes_with_null_timestamp_exist_in_spreadsheet(model, quote_field, spreadsheet_model, spreadsheet_quote_field) %}

-- quotes in the tested model that have no submission timestamp
with quote_view_quotes as (
    select {{ quote_field }}
    from {{ model }}
    where "INITIAL SUBMISSION TIMESTAMP" is null
),

-- distinct quote numbers present in the spreadsheet
spreadsheet_quotes as (
    select distinct {{ spreadsheet_quote_field }}
    from {{ spreadsheet_model }}
)

-- any rows returned here are counted as test failures
select qv.{{ quote_field }}
from quote_view_quotes qv
inner join spreadsheet_quotes sq
on qv.{{ quote_field }} = sq.{{ spreadsheet_quote_field }}

{% endtest %}

This is how the test is invoked:

models:
  - name: quote_view
    description: "Testing automatic view creation with permissions"
    columns:
      - name: QUOTE_NUMBER
        description: "the new Quote/CSIND number; there should only be one listing per item"
        tests:
          - unique
    tests:
      - quotes_with_null_timestamp_exist_in_spreadsheet:
          quote_field: "NEWBUSINESSQUOTENUMBER"
          spreadsheet_model: "{{ ref('spreadsheet_quote_funnel') }}"
          spreadsheet_quote_field: "QUOTE_NO_"
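
One idea I’m looking into (but haven’t verified in dbt Cloud yet) is dbt’s store_failures test config, which, as I understand it, writes the failing rows to a table in the warehouse so they can be queried after the run. I think enabling it on this test would look something like the following (the config key placement is based on the dbt docs for test configs; everything else is unchanged from above):

    tests:
      - quotes_with_null_timestamp_exist_in_spreadsheet:
          quote_field: "NEWBUSINESSQUOTENUMBER"
          spreadsheet_model: "{{ ref('spreadsheet_quote_funnel') }}"
          spreadsheet_quote_field: "QUOTE_NO_"
          config:
            # assumption: this needs a recent dbt version and permission to
            # create the audit schema in the target database
            store_failures: true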

I have the Slack app set up, and the run sends the appropriate error notifications; however, the actual result of the test doesn’t show up; all I see is the build error.
In fact, unless you start digging into the build error, it’s not obvious that the problem was with one of the tests.
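
If store_failures behaves the way I expect, I should then be able to research the failing quotes with a plain query against the audit schema. A rough sketch is below; the schema and table names are guesses, since dbt generates them from the target schema and the test name:

-- assumption: store_failures is enabled for the test above; by default dbt writes
-- the failures to <target schema>_dbt_test__audit, one table per test, and the
-- actual generated names will differ from the placeholders used here
select *
from analytics_dbt_test__audit.quotes_with_null_timestamp_exist_in_spreadsheet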

Is there a way to add more granularity to dbt Cloud run output so that researching these failures is easier?
