dbt pipelines are failing because of BigQuery rate limits
For a specific GCP project, some models are failing with the error `429 Exceeded rate limits: too many table update operations for this table`.
It seems like the `dbt seed` and `dbt run` commands are in some cases hitting BigQuery's limit of 5 table update operations per table over a 10-second window.
This is not something I can solve by changing the concurrency or retry settings in `profiles.yml` for the dbt-bigquery adapter: since it is a 429, the adapter does not attempt a retry.
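For reference, these are the settings I mean (profile, project and dataset names below are placeholders, not my real configuration):

```yaml
# Placeholder profiles.yml excerpt for the dbt-bigquery adapter.
# Profile, project and dataset names are made up for illustration.
my_profile:
  target: prod
  outputs:
    prod:
      type: bigquery
      method: oauth
      project: my-prod-project   # the GCP project hitting the 429s
      dataset: analytics
      threads: 1                 # lowering concurrency did not help
      job_retries: 3             # not applied: the adapter skips retries for this 429
```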
A couple of facts make this issue weird:
- The same dbt refresh commands were executing successfully until there was a spike of dbt jobs in the production dbt project
- The same commands execute without issues in the dev environment, and the difference is definitely not explained by data volumes or the number of concurrent jobs
I submitted the case to Gemini, providing the logs, and the answer I got is that the production environment became slightly faster (probably after the spike), which is causing the rate of commands to be higher.
I can’t tap into the logic of the dbt commands to introduce delays, so the other option is to make BigQuery a bit slower in the prod environment, perhaps by using reserved slots instead of on-demand compute.
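If I go down that road, I assume it would look roughly like the reservation DDL below (admin project, location, edition and slot counts are placeholders, and I haven't tested this):

```sql
-- Rough sketch, not tested: cap the prod project with a small reservation
-- instead of on-demand slots. All names and numbers are placeholders.
CREATE RESERVATION `admin-project.region-us.prod-throttle`
OPTIONS (
  edition = 'ENTERPRISE',   -- assumption: editions-based pricing
  slot_capacity = 100       -- deliberately low baseline to slow jobs down
);

-- Route query jobs from the prod project onto that reservation
CREATE ASSIGNMENT `admin-project.region-us.prod-throttle.prod-assignment`
OPTIONS (
  assignee = 'projects/my-prod-project',
  job_type = 'QUERY'
);
```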
But my question is: have you ever run into a problem like this?