I’m getting the error ‘Schema has to be provided to write_pandas when a database is provided’ when attempting to convert a dataframe to Pandas in a Python script. I’m using Snowflake.
The good news is that the issue you linked above has been resolved, and the fix will come out in dbt-snowflake 1.4.1.
We’ve only seen this with custom schema names configured - is it possible that when you changed to testing without a custom schema, a config file didn’t save or something?
I just tested dbt-snowflake 1.4.1 and can confirm the issue still persists, @joellabes. The 1.4.1 changelog also doesn’t mention this issue, so I’m not surprised.
OK - we don’t really have enough to reproduce this. Can you post the full code file (not just the final line of code) for the model you’re trying to run, as well as the logs from when you try a dbt run?
@troyel yes, my mistake - it didn’t go out in 1.4.1, but I’ve confirmed with the PM that it is actively being worked on! I don’t have a version to share, sorry.
We’re experiencing the same issue with one of our Python dbt models that uses a custom schema. Wanted to hop on this thread to be notified when this is resolved! (Should also mention that we are on dbt-snowflake==1.4.1.)
@patkearns10 and I managed to get to the bottom of this by live-debugging with a very helpful & generous user who was running into the issue!
I’m not sure why this bug is cropping up for some Snowflake users and not others; I believe it should have been solved at the source in snowflake-connector-python==3.0 (which is included in dbt-snowflake>=1.4).
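(If you want to double-check which connector version your environment actually resolved, a quick sanity check - this assumes nothing beyond the connector being installed:)

```python
import snowflake.connector

# dbt-snowflake>=1.4 should pull in snowflake-connector-python 3.x
print(snowflake.connector.__version__)
```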
For anyone still experiencing the issue, this seems to be a valid workaround:
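A minimal sketch of that workaround in a Python model - `my_upstream_model` and the config values are placeholders, but `dbt.this` and the Snowpark `session` object are part of the standard Python model signature:

```python
import pandas as pd


def model(dbt, session):
    dbt.config(materialized="table")

    # Any transformation that ends in a Pandas DataFrame triggers the
    # write_pandas code path on the way back into Snowflake.
    df: pd.DataFrame = dbt.ref("my_upstream_model").to_pandas()

    # Workaround: pin the Snowpark session to this model's resolved
    # database and schema before returning, so write_pandas is given
    # the schema it complains about.
    session.use_database(dbt.this.database)
    session.use_schema(dbt.this.schema)

    return df
```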
We’ll see if there’s a way to include those session.use_* calls within the dbt materialization code, so that you don’t need to write them in every Python model that returns a Pandas dataframe.