We use it with BigQuery, and if you are using incremental loads with the `append_new_columns` config, then the new columns will be null for previously populated rows (pretty much what the documentation says). If you want to populate the values of the new column for those existing rows, you need to do a full refresh (run with the `-f` or `--full-refresh` flag).
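For reference, a minimal sketch of that setup (the model name, source, and columns here are hypothetical, just for illustration):

```sql
-- models/my_incremental_model.sql (hypothetical model)
{{
  config(
    materialized='incremental',
    on_schema_change='append_new_columns'
  )
}}

-- If a new column is added here, incremental runs append it to the
-- target schema, but rows loaded earlier keep it as null.
select id, created_at, new_column
from {{ source('raw', 'events') }}
```

To backfill the new column for old rows, rebuild the table from scratch:

```shell
dbt run --select my_incremental_model --full-refresh
```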
A slightly late response from me.
I have never really used this functionality, so I cannot prove that my idea will help you.
But here is my reasoning:
dbt only generates the code you would like to see, based on your model configuration.
So if you configure the model as an insert-only incremental model (by not setting a `unique_key`), then of course your incremental run will only affect newly inserted rows.
If, however, you want to update existing rows where applicable, then dbt needs to generate a merge with inserts and updates; you invoke that by setting a `unique_key` in your config.
Of course, your incremental logic needs to be written in such a way that it also provides the data for the rows that already exist in your target.
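As a rough sketch of what I mean (the model, column names, and timestamp filter are assumptions on my part, not from your setup):

```sql
-- models/orders_incremental.sql (hypothetical model)
{{
  config(
    materialized='incremental',
    unique_key='id'   -- makes dbt generate a merge: matched rows update, new rows insert
  )
}}

select id, status, updated_at
from {{ ref('stg_orders') }}
{% if is_incremental() %}
  -- Select not only brand-new rows but also changed ones, so the merge
  -- has the data it needs to update rows already in the target.
  where updated_at > (select max(updated_at) from {{ this }})
{% endif %}
```

The key point is the `where` filter: if it only picked up newly created rows, the merge would never see updated versions of existing rows, and they would stay stale in the target.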
If you try it out, I would be much obliged to read about your results.