How to set up a process within dbt so that it only refreshes when there is new data

Hi Everyone,

I wanted to understand whether there is any process within dbt by which we refresh models only when the ingested data has changed.

Currently the project ingests data from an AWS S3 location (JSON files partitioned by a column) and creates the external tables and schema using the dbt-external-tables package (GitHub - dbt-labs/dbt-external-tables: dbt macros to stage external sources).
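For context, the source definition for that setup might look roughly like the following (the source name, schema, bucket path, and partition column are all hypothetical placeholders, not our real values). The package then materializes these via `dbt run-operation stage_external_sources`:

```yaml
version: 2

sources:
  - name: raw_events          # hypothetical source name
    schema: landing           # hypothetical schema
    tables:
      - name: events
        external:
          location: "s3://my-bucket/events/"   # hypothetical S3 path
          using: json
          partitions:
            - name: ingest_date                # hypothetical partition column
              data_type: string
```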

We then create raw models that read from these external tables and dedupe the data.
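One of these raw models looks roughly like this (a simplified sketch; the model name, key column, and timestamp column are assumptions, not our actual schema):

```sql
-- models/raw/raw_events.sql -- hypothetical dedup model
{{ config(materialized='table') }}

with ranked as (
    select
        *,
        row_number() over (
            partition by event_id       -- assumed unique key
            order by ingested_at desc   -- keep the most recent record
        ) as rn
    from {{ source('raw_events', 'events') }}
)

select * from ranked
where rn = 1
```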

I wanted to understand whether there is a way to check if new data arrived in the latest ingestion and, if so, refresh those dbt models on the usual schedule or cadence, and otherwise skip the run.
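At the moment the only approach I can think of is gating the scheduled `dbt run` from the orchestrator with a freshness check before invoking dbt. A minimal sketch of the decision logic (the function name and timestamps are hypothetical; in practice the object timestamps would come from listing the S3 prefix, e.g. the `LastModified` values returned by boto3's `list_objects_v2`):

```python
from datetime import datetime, timezone

def should_run(last_run_at: datetime, object_times: list[datetime]) -> bool:
    """Return True if any ingested object is newer than the last dbt run."""
    return any(t > last_run_at for t in object_times)

# Hardcoded illustration; real values would come from the S3 listing
# and from wherever the scheduler records the last successful run.
last_run = datetime(2024, 6, 1, tzinfo=timezone.utc)
objects = [datetime(2024, 6, 2, tzinfo=timezone.utc)]
print(should_run(last_run, objects))  # a file landed after the last run -> True
```

But I am not sure whether something like this belongs outside dbt in the scheduler, or whether dbt itself offers a mechanism for it.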

I am using the following dbt packages:
dbt-core==1.8.7
dbt-spark[PyHive]==1.8.0