Hi,
Newbie here.
The problem I’m having
I want to consume data from multiple Iceberg tables using Apache Spark, transform the data, and write Parquet files to AWS S3.
My source tables are accessed through the AWS Glue Catalog, and my model is configured to write Parquet files partitioned by a date column, using incremental materialization with the insert_overwrite strategy.
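For context, a simplified sketch of what my model looks like (the source name, partition column, and S3 location are placeholders, and I'm showing dbt-spark-style config keys, which may differ slightly depending on the adapter):

```sql
-- Minimal sketch; names and the S3 path are placeholders
{{
    config(
        materialized='incremental',
        incremental_strategy='insert_overwrite',
        file_format='parquet',
        partition_by=['event_date'],
        location_root='s3://my-bucket/my-output-prefix'
    )
}}

select *
from {{ source('glue_db', 'iceberg_source_table') }}
```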
The job then creates the Parquet files in S3 as expected, but it also creates a table in the AWS Glue Catalog, which I do not want.
Is there a way to tell dbt not to create a table and only write the Parquet files to S3?
Thanks for the help!