Pattern for passing run parameters through API

We have an ingest process that first writes data to our data lake (hosted on S3) and builds external tables in Snowflake pointing to that. We would like to implement a post hook that invokes a dbt pipeline via the API when a new table is added to a specific folder. The question is how to best pass the name of this table to the pipeline. If invoking the command manually, the command line would look something like this:

dbt run --select +mart/destination --vars 'new_tablename: <new_tablename>'

The API doesn't (as far as I can tell from the docs) allow injecting vars or env_vars as part of the JSON body. Is there an accepted trick for doing that, or is it considered better practice to inject the entire command line using the "steps override" field of the body? To my mind, having to inject the entire command line is a code smell: the calling program should only have to provide information (the parameter) rather than implementation (the command line, which would otherwise be stored only within dbt Cloud). However, I see no other way to do it. Am I missing something?
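For what it's worth, here is a minimal sketch of the steps-override approach I'm describing, assuming the v2 "trigger job run" endpoint and its steps_override field. The account ID, job ID, and token are placeholders, and the table name placeholder would come from the ingest process:

```python
import json
import urllib.request

# Placeholder values -- substitute your own account ID, job ID, and token.
ACCOUNT_ID = 12345
JOB_ID = 67890
API_TOKEN = "your-api-token"

def build_trigger_payload(new_tablename: str) -> dict:
    """Build the JSON body for dbt Cloud's trigger-job-run endpoint,
    overriding the job's steps so the table name reaches --vars."""
    return {
        "cause": f"Ingest detected new table: {new_tablename}",
        "steps_override": [
            "dbt run --select +mart/destination "
            f"--vars '{{new_tablename: {new_tablename}}}'"
        ],
    }

def build_trigger_request(new_tablename: str) -> urllib.request.Request:
    """Construct (but do not send) the POST request to trigger the job."""
    url = (
        "https://cloud.getdbt.com/api/v2/accounts/"
        f"{ACCOUNT_ID}/jobs/{JOB_ID}/run/"
    )
    return urllib.request.Request(
        url,
        data=json.dumps(build_trigger_payload(new_tablename)).encode(),
        headers={
            "Authorization": f"Token {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

The calling program would send the request with urllib.request.urlopen (or requests.post). Notice that the full command line lives in the caller, which is exactly the coupling I'd like to avoid.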

Thanks
Eric Buckley
Architect - Information Management - RGA

Sorry for the slow reply @ebuckley! I haven’t spent much time using the Cloud API, but I was under the impression that you could configure variables inside of it. Maybe I’m wrong!

As you've probably noticed, the API's documentation is a bit anemic right now (we're working on it!), but we do have a reasonably robust set of Postman examples: Introducing the dbt Cloud API Postman Collection: a tool to help you scale your account management | dbt Developer Blog

If there's nothing in there about controlling variables, then I'd definitely encourage you to open a ticket with the dbt Cloud support team requesting this - I agree with you about separating implementation from information.