How to chain dbt Cloud jobs so the second job uses the models just built by the first job?

Hi everyone,

I’m working on dbt Cloud job orchestration and I have a question about chaining jobs and ensuring freshness of models.

Current setup

  • I have an hourly dbt job that builds a set of models.

  • I also have another dbt job (for example, a 12:00 AM job) that should run after the hourly job finishes.

  • Some of the models overlap between the two jobs (they are built by both).

  • These models are then used as sources in another tool (Hightouch).

What I want to achieve

I would like to:

  1. Trigger a second dbt job immediately after the first job finishes (i.e., “run job B after job A succeeds”; a rough sketch of what I mean is below),

  2. Make sure that job B uses the models that were just built by job A, not models from some older run, and

  3. Avoid rebuilding the same models twice if possible (to control cost and run time).
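
To make the “run job B after job A succeeds” part concrete, this is roughly the kind of chaining I had in mind if it ends up being an API call. It’s only a sketch: the account ID, job ID, and environment variable name are placeholders, not my real setup.

    import os
    import requests

    # Placeholders for illustration only (not my actual IDs).
    ACCOUNT_ID = 12345
    JOB_B_ID = 67890
    API_TOKEN = os.environ["DBT_CLOUD_API_TOKEN"]

    def trigger_job_b() -> int:
        """Kick off job B via the dbt Cloud v2 'trigger job run' endpoint."""
        url = f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}/jobs/{JOB_B_ID}/run/"
        resp = requests.post(
            url,
            headers={"Authorization": f"Token {API_TOKEN}"},
            json={"cause": "Triggered after job A succeeded"},
        )
        resp.raise_for_status()
        # The response payload contains the new run, including its ID.
        return resp.json()["data"]["id"]

    if __name__ == "__main__":
        # In practice this would be called from a webhook handler (or a final
        # step of job A) once job A reports success.
        print(trigger_job_b())

I’m not attached to this approach at all; if dbt Cloud has a more native way to express “run B after A”, that’s exactly the kind of guidance I’m hoping for.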

My main question

In dbt Cloud, what is the recommended way to:

  • Chain jobs so that job B is guaranteed to run on the most recent state / models produced by job A?

  • Ensure that when job B runs, it uses the latest built tables/views from job A (and not outdated versions)?

  • Optionally: avoid rebuilding overlapping models in job B if they were just built in job A.

I’d really appreciate:

  • Best practices for job chaining in dbt Cloud

  • Whether this is usually handled with:

    • specific job configuration,

    • environments,

    • state/defer patterns (rough sketch after this list), or

    • something else (e.g. using the Cloud API or webhooks)
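
For the state/defer option specifically, the pattern I’ve read about looks roughly like the sketch below. The tag name and artifact directory are placeholders, and I’m assuming job B can somehow get hold of job A’s manifest.json first (I know dbt Cloud may handle that part via job settings, which is part of my question).

    import subprocess

    # Sketch of what job B's build step could look like with defer/state.
    # Assumptions (placeholders, not my real project):
    #   - job B's own models are tagged "nightly"
    #   - "job_a_artifacts/" already contains job A's manifest.json
    subprocess.run(
        [
            "dbt", "build",
            "--select", "tag:nightly",     # build only job B's own models
            "--defer",                     # resolve refs to unselected models via the state manifest
            "--state", "job_a_artifacts",  # directory holding job A's manifest.json
        ],
        check=True,
    )

If both jobs run in the same production environment, maybe defer isn’t even needed and job B’s refs simply pick up whatever job A last built; confirmation either way would already help a lot.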

Thanks in advance for any guidance or examples! 🙏