Run stops after 300s regardless of profiles.yml configuration (BigQuery plugin)

Hey all,

I’m trying to run a large model and I’m running into

“Runtime Error in model… Query exceeded configured timeout of 300s”

when trying to run a specific file.

The error appears after about 5 minutes, but the timeout in my profiles.yml file is set to 10 minutes:
job_execution_timeout_seconds: 600
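For reference, this is roughly how my profiles.yml is laid out — a minimal sketch, with the profile, project, and dataset names replaced by placeholders:

```yaml
# Hypothetical profiles.yml sketch — profile, project, and dataset
# names below are placeholders, not my real values.
my_profile:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: oauth
      project: my-gcp-project
      dataset: my_dataset
      threads: 4
      job_execution_timeout_seconds: 600  # 10 minutes; the run still fails at 300s
```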

I’ve updated to the latest versions:

  • installed: 1.6.4
  • latest: 1.6.4 - Up to date!
  • bigquery: 1.6.6 - Up to date!

I’ve also tried reducing “job_execution_timeout_seconds” to 60 seconds to check whether the run would stop after 60 seconds, but it still runs for about 5 minutes and then stops.

Never had this problem :frowning:

But here’s what I would try:

In the docs you have this:

The job_execution_timeout_seconds represents the number of seconds to wait for the underlying HTTP transport. It doesn’t represent the maximum allowable time for a BigQuery job itself. So, if dbt-bigquery ran into an exception at 300 seconds, the actual BigQuery job could still be running for the time set in BigQuery’s own timeout settings.

So, maybe try changing the default_query_job_timeout_ms in BigQuery’s own settings.
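To illustrate the distinction the docs are drawing (the client-side wait versus the job itself), here’s a small self-contained Python sketch. It simulates the behaviour with a thread pool instead of calling BigQuery, so all the names in it are illustrative, not dbt-bigquery internals:

```python
import concurrent.futures
import time

def long_running_query():
    # Stands in for a BigQuery job that keeps running server-side.
    time.sleep(0.5)
    return "query finished"

def demo():
    events = []
    with concurrent.futures.ThreadPoolExecutor() as pool:
        future = pool.submit(long_running_query)
        try:
            # Like job_execution_timeout_seconds: this only bounds how long
            # the client waits for a result, not how long the job may run.
            future.result(timeout=0.1)
        except concurrent.futures.TimeoutError:
            events.append("client-side wait timed out")
        # The underlying work was never cancelled; waiting again shows it
        # completed anyway.
        events.append(future.result())
    return events

print(demo())  # ['client-side wait timed out', 'query finished']
```

The client raises a timeout at 0.1s, but the “job” runs to completion regardless — which matches the behaviour described above, where dbt errors at 300s while the BigQuery job keeps running.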

Has anyone ever found a resolution to this? I have a similar problem. I have configured no timeouts, and I’ve even set job_retries to 5, but I get regular query timeouts after 900s, and dbt doesn’t retry even once. These are normal query jobs.

I’ve also checked to make sure it’s not BigQuery itself: the job actually continues running and completes just fine. So I’m really wondering what I need to do to make dbt wait indefinitely.

For reference, here’s the output of dbt debug:

13:42:03  Running with dbt=1.7.7
13:42:03  dbt version: 1.7.7
13:42:03  python version: 3.11.7
13:42:03  python path: *****
13:42:03  os info: macOS-14.3.1-x86_64-i386-64bit
13:42:04  Using profiles dir at *****
13:42:04  Using profiles.yml file at *****
13:42:04  Using dbt_project.yml file at *****
13:42:04  adapter type: bigquery
13:42:04  adapter version: 1.7.4
13:42:04  Configuration:
13:42:04    profiles.yml file [OK found and valid]
13:42:04    dbt_project.yml file [OK found and valid]
13:42:04  Required dependencies:
13:42:04   - git [OK found]

13:42:04  Connection:
13:42:04    method: oauth
13:42:04    database: *****
13:42:04    execution_project: *****
13:42:04    schema: *****
13:42:04    location: EU
13:42:04    priority: interactive
13:42:04    maximum_bytes_billed: None
13:42:04    impersonate_service_account: None
13:42:04    job_retry_deadline_seconds: None
13:42:04    job_retries: 5
13:42:04    job_creation_timeout_seconds: None
13:42:04    job_execution_timeout_seconds: None
13:42:04    keyfile: None
13:42:04    timeout_seconds: None
13:42:04    refresh_token: None
13:42:04    client_id: None
13:42:04    token_uri: None
13:42:04    dataproc_region: None
13:42:04    dataproc_cluster_name: None
13:42:04    gcs_bucket: None
13:42:04    dataproc_batch: None
13:42:04  Registered adapter: bigquery=1.7.4
13:42:06    Connection test: [OK connection ok]

13:42:06  All checks passed!

Ok, this seems to be a known bug in dbt-bigquery; see here.

I had faced the same issue, where a model timed out after 300 seconds.
When you set up a BigQuery connection for a dbt Cloud project using key authentication, there is a “Job execution timeout seconds” setting which defaults to 300 seconds.

If the dbt Cloud project is already set up:

  1. Go to settings >> project >> dbt Project Name
  2. Click on the BigQuery connection link
  3. Edit the “Job execution timeout seconds” setting