Can you persist manifest.json in S3?

Hey out there,

We run our dbt processes via Airflow + Kubernetes. For each process, the manifest exists on the pod where the process is executed, but as soon as the pod shuts down, the manifest is gone. We have been looking at using dbt retry to let us pick up from a failure point, but it relies on having access to the manifest from the previous execution. I'm wondering if anybody has set something up where manifests from each execution get uploaded to cloud storage, to be used in subsequent processes (or just for general analysis).

We do use dbt artifacts too, but that is not useful for the retry command.
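For what it's worth, one pattern I've seen is wrapping the dbt step with an upload/download of the target directory. Here's a minimal sketch using the AWS CLI via subprocess (so the pod only needs the CLI and an IAM role, no extra Python deps). The bucket name, key layout, and function names are all assumptions, not anything dbt provides; note that dbt retry also needs run_results.json from the failed run, not just the manifest, so syncing the whole target directory may be simpler.

```python
import subprocess
from pathlib import Path

# Hypothetical bucket -- substitute your own.
BUCKET = "my-dbt-artifacts"

def manifest_key(dag_id: str, run_id: str) -> str:
    """Build a deterministic S3 key so a later task can find this run's manifest."""
    return f"dbt/{dag_id}/{run_id}/manifest.json"

def upload_manifest(target_dir: str, dag_id: str, run_id: str) -> None:
    """After `dbt run`, push target/manifest.json to S3 (runs `aws s3 cp`)."""
    src = Path(target_dir) / "manifest.json"
    subprocess.run(
        ["aws", "s3", "cp", str(src),
         f"s3://{BUCKET}/{manifest_key(dag_id, run_id)}"],
        check=True,
    )

def download_manifest(state_dir: str, dag_id: str, run_id: str) -> None:
    """Before `dbt retry`, pull the previous run's manifest into a local state dir."""
    dest = Path(state_dir) / "manifest.json"
    dest.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["aws", "s3", "cp",
         f"s3://{BUCKET}/{manifest_key(dag_id, run_id)}", str(dest)],
        check=True,
    )
```

In Airflow you'd call `upload_manifest` in a final task (or a try/finally around the dbt step, so the manifest survives failures), keyed by the DAG run ID so the retry task can deterministically find it.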

thanks in advance!

One idea we might try is using s3fs FUSE (GitHub - s3fs-fuse/s3fs-fuse: FUSE-based file system backed by Amazon S3) to allow S3 locations to be mounted as local volumes, and then use --target-path to redirect the outputs of the dbt executions.
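Roughly, that idea would look like the commands below. This is an untested sketch: the bucket name, mount point, and $RUN_ID variable are placeholders, and the pod would need FUSE privileges (which some Kubernetes setups restrict).

```shell
# Mount the bucket (assumes IAM-role auth on the pod).
s3fs my-dbt-artifacts /mnt/dbt-artifacts -o iam_role=auto

# Write this run's artifacts straight to the mounted bucket.
dbt run --target-path /mnt/dbt-artifacts/$RUN_ID

# On a later pod, mount the same bucket and retry from the stored state.
dbt retry --state /mnt/dbt-artifacts/$RUN_ID
```

One caveat: FUSE mounts in Kubernetes usually need a privileged container or a device plugin, so the boto3/AWS-CLI copy approach may be less friction operationally.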