Source freshness results are coerced to string on the dbt-spark-livy adapter

The problem I’m having

My environment:

dbt: v1.3.1
adapter: dbt-spark-livy v1.3.1 (https://github.com/cloudera/dbt-spark-livy)

The results from running dbt source freshness are being coerced to str instead of timestamp:

$ dbt --profiles-dir $PWD/profiles/ source freshness
07:16:37  Database Error in source checkout_v2 (models/bnpl/sources.yml)
07:16:37    Expected a timestamp value when querying field 'ingestion_time' of table bnpl.checkout_v2 but received value of type 'str' instead
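
For reference, this error seems to be raised by dbt-core's base adapter, which expects the freshness query to hand back a datetime object. A rough, paraphrased sketch of that kind of validation (not the exact dbt-core source) in Python:

from datetime import timezone

# Rough sketch (not the exact dbt-core source): the loaded_at value must
# already be a datetime; a plain str has no tzinfo and fails the check.
def assert_timestamp(value, table, field_name):
    if not hasattr(value, "tzinfo"):
        raise TypeError(
            f"Expected a timestamp value when querying field '{field_name}' "
            f"of table {table} but received value of type "
            f"'{type(value).__name__}' instead"
        )
    # Normalize to UTC so max_loaded_at and snapshotted_at are comparable.
    if value.tzinfo:
        return value.astimezone(timezone.utc)
    return value.replace(tzinfo=timezone.utc)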

I noticed that all values in ingestion_time are timestamps (per the Spark SQL schema).

I have tried to debug the result:

(Pdb) result.print_table()
| max_loaded_at        | snapshotted_at       |
| -------------------- | -------------------- |
| 2023-02-06T17:03:30Z | 2023-02-08T07:01:45Z |
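
Those max_loaded_at / snapshotted_at values are plain ISO-8601 strings rather than datetime objects, which is exactly what the error complains about. A minimal sketch of the kind of coercion that would presumably be needed (parse_iso8601_utc is just a hypothetical helper name, not something in the adapter):

from datetime import datetime, timezone

# Hypothetical helper, not part of dbt-spark-livy: turn the ISO-8601 strings
# coming back from the query into timezone-aware datetimes, which is what
# dbt-core's freshness comparison expects.
def parse_iso8601_utc(value):
    if isinstance(value, str):
        return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return value

print(parse_iso8601_utc("2023-02-06T17:03:30Z"))
# 2023-02-06 17:03:30+00:00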

I also investigated dbt-core, and it looks like all returned results have been coerced to the string datatype in the agate table.
Is this a bug in the adapter or in dbt-core? If it's in the adapter implementation, which module should I be looking at to fix it?
I appreciate your help.

By default, I would assume it’s a bug in the adapter, so I’d encourage you to open an issue on its GitHub page!
