We have an ingest process that first writes data to our data lake (hosted on S3) and then builds external tables in Snowflake pointing to it. We would like to implement a post-hook that invokes a dbt pipeline via the API when a new table is added to a specific folder. The question is how best to pass the name of this table to the pipeline. If invoked manually, the command line would look something like this:
dbt run --select +mart/destination --vars '{new_tablename: <new_tablename>}'
The API doesn't (as far as I can tell from the docs) allow injecting vars or env_vars as part of the JSON body. Is there an accepted way to do that, or is it considered better practice to inject the entire command line using the "steps override" field of the body? To my mind, having to inject the entire command line is a code smell: the calling program should only have to provide information (the parameter), not implementation (the command line, which would otherwise be stored only within dbt Cloud). However, I see no other way to do it. Am I missing something?
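For context, here is a minimal sketch of what the "steps override" approach looks like from the caller's side, assuming dbt Cloud's v2 "trigger job run" endpoint. The function name, the account/job IDs, and the example table name `orders_2024` are all placeholders, not part of any real setup:

```python
import json

# Hypothetical helper: build the JSON body for dbt Cloud's
# "Trigger Job Run" endpoint (API v2). Because vars can't be
# passed directly in the body, the whole command line is sent
# via steps_override -- the pattern in question.
def build_trigger_payload(new_tablename: str) -> dict:
    command = (
        "dbt run --select +mart/destination "
        f"--vars '{{new_tablename: {new_tablename}}}'"
    )
    return {
        "cause": f"New table ingested: {new_tablename}",
        "steps_override": [command],  # overrides the job's stored steps
    }

# The actual call would look roughly like this (requires `requests`;
# ACCOUNT_ID, JOB_ID, and API_TOKEN are placeholders):
# requests.post(
#     f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}/jobs/{JOB_ID}/run/",
#     headers={"Authorization": f"Token {API_TOKEN}"},
#     json=build_trigger_payload("orders_2024"),
# )

print(json.dumps(build_trigger_payload("orders_2024"), indent=2))
```

This makes the objection concrete: the ingest process ends up owning the full `dbt run` invocation (selector and all) rather than just supplying the table name.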
Architect - Information Management - RGA