Using Databricks job compute with dbt

Currently, I am running dbt jobs on an all-purpose cluster, deployed through GitHub and running under a service principal. I want to switch to job compute. The problem I am facing is the http_path: I can provide the http_path variable because it is static for the all-purpose cluster and the cluster ID does not change. However, for job compute the cluster ID changes on every run. Is there a way to dynamically extract the ID before each run, or another way of integrating dbt jobs with a job-compute cluster?
Many thanks for your help.

This is not possible. The job cluster does not support the APIs required by dbt to execute the SQL commands. However, using the <https://docs.databricks.com/en/workflows/jobs/how-to/use-dbt-in-workflows.html|dbt task> it is possible to use job compute for the dbt CLI and run the SQL commands on a SQL warehouse.

Note: @Jasper Koning originally posted this reply in Slack. It might not have transferred perfectly.
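
For reference, here is a minimal sketch of what that dbt-task approach could look like as a Jobs API job definition. The warehouse ID, repository URL, cluster spec, and project layout are placeholders you would replace with your own, not values from the reply above:

  # Sketch of a Databricks job: the dbt CLI runs on job compute,
  # while the SQL itself executes on the SQL warehouse identified by warehouse_id.
  name: dbt-on-job-compute
  git_source:
    git_url: https://github.com/<your-org>/<your-dbt-repo>   # placeholder
    git_provider: gitHub
    git_branch: main
  tasks:
    - task_key: dbt_run
      dbt_task:
        commands:
          - dbt deps
          - dbt run
        warehouse_id: <your-sql-warehouse-id>                 # placeholder
      new_cluster:                                            # job compute for the CLI only
        spark_version: 14.3.x-scala2.12                       # example runtime
        node_type_id: i3.xlarge                               # example node type
        num_workers: 1
      libraries:
        - pypi:
            package: dbt-databricks                           # adapter used by the dbt task
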

This is not true. It is possible to run dbt on job clusters with dbt-spark. You will need to install dbt on the cluster (via a custom image or a dependent library) and then create a profile like this:

  # Profile and target names below are placeholders; the profile name must match dbt_project.yml.
  my_dbt_project:
    target: job_cluster
    outputs:
      job_cluster:
        type: spark
        method: session
        host: 127.0.0.1
        schema: '{{ env_var(''DBT_USER_SCHEMA'') }}'
        threads: 12

The CLI will run on the job cluster, and it will execute the Spark SQL workloads on that same job cluster as well.
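
As a rough illustration of how this can be wired into a workflow (the entry-point script, schema value, and cluster spec are assumptions for the sketch, not part of the original reply): a task on a job cluster installs dbt-spark as a dependent library and invokes the dbt CLI on the driver, so the session method can reuse the cluster's own Spark session.

  # Hypothetical job task: dbt-spark installed as a dependent library,
  # dbt invoked from a driver-side script (e.g. one that calls dbt programmatically or shells out).
  tasks:
    - task_key: dbt_session_run
      spark_python_task:
        python_file: dbt_entrypoint.py          # placeholder script that runs the dbt CLI
      new_cluster:
        spark_version: 14.3.x-scala2.12         # example runtime
        node_type_id: i3.xlarge                 # example node type
        num_workers: 2
        spark_env_vars:
          DBT_USER_SCHEMA: analytics            # consumed by the profile above; value is a placeholder
      libraries:
        - pypi:
            package: dbt-spark[session]         # session extra for the session connection method
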