Hello,
I need to create a job that generates views based on data from the Snowflake information schema.
The first step is to pull all relevant tables for the process from the information_schema tables.
The second step is to iterate over the results from the first step and, for each table, create a view over the table in another database.
What is the best way to implement this in dbt?
Example:

1st step:

```sql
select table_name
from old_db.information_schema.tables
where table_schema = 'FINANCE'
```

2nd step:

```
for each table in tables:
    run query: create or replace view new_db.finance.<table> as select * from old_db.finance.<table>
```
You should avoid writing custom DDL to create tables - this is the sort of thing that dbt abstracts away for you, and it ensures you don't have to worry about things like running nodes in the right order.
Further, you won't be able to `ref` these models if they're not created as individual dbt model files, because dbt believes in a 1 model file = 1 database object paradigm (see the sketch below).
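For illustration, here's a minimal sketch of one such model file, assuming `old_db.finance` is declared as a source named `finance` in a `.yml` file (the `orders` table name is a placeholder):

```sql
-- models/finance/stg_orders.sql (hypothetical file: one model file per view)
{{ config(materialized='view', database='new_db', schema='finance') }}

-- dbt issues the `create or replace view` DDL for you when this model is built
select * from {{ source('finance', 'orders') }}
```

Because each view is its own model, it becomes a node in the DAG that downstream models can `ref`.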
If you need help bootstrapping a lot of these models, you could consider using the codegen package which will help generate your staging models: GitHub - dbt-labs/dbt-codegen: Macros that generate dbt code
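For example (a sketch; check the package README for the current macro signatures), you can print a source definition for the whole schema with `dbt run-operation generate_source --args '{"schema_name": "finance", "database_name": "old_db"}'`, paste the output into a `.yml` file, and then print a starter model for each table with `dbt run-operation generate_base_model --args '{"source_name": "finance", "table_name": "orders"}'` (again, `orders` is a placeholder).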
Thanks @joellabes, I will remove my answer so as not to encourage bad practices!
Always learning
Alternatively, you can write a macro with a for loop that runs over all the objects.
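If you go that route, here is a minimal sketch of such a macro (the macro name is made up; run it with `dbt run-operation create_finance_views`). Note the caveat above still applies: views created this way are invisible to dbt's DAG and cannot be `ref`'d.

```sql
-- macros/create_finance_views.sql (hypothetical macro name)
{% macro create_finance_views() %}
    {% set tables_query %}
        select table_name
        from old_db.information_schema.tables
        where table_schema = 'FINANCE'
    {% endset %}

    {# run_query only returns results at execution time #}
    {% if execute %}
        {% set table_names = run_query(tables_query).columns[0].values() %}
        {% for table_name in table_names %}
            {# issue one create-view statement per table found in step 1 #}
            {% do run_query(
                'create or replace view new_db.finance.' ~ table_name ~
                ' as select * from old_db.finance.' ~ table_name
            ) %}
            {{ log('created view new_db.finance.' ~ table_name, info=true) }}
        {% endfor %}
    {% endif %}
{% endmacro %}
```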