Hi, I’m new to dbt, and I’d like to know whether a few things are possible in dbt Core, as I’m struggling to find the right way to do them:
Is it possible to hardcode the column names of a model without needing to use an alias? I saw there is an option to define the column names of models (columns | dbt Developer Hub), but it doesn’t seem to work.
Example:
I have a seed seed1 with a column col1.
I want to build a model that does: select * from {{ ref('seed1') }}
I can’t find a way for my model to name the resulting column colA instead, without using: select col1 as colA from {{ ref('seed1') }}.
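As a side note: in dbt Core, the `columns:` property in a .yml file only attaches documentation and tests to a model; it never changes the compiled SQL, so it cannot rename columns. Aliasing in the select is the usual approach. A minimal sketch (the model name is hypothetical):

```sql
-- models/my_model.sql (hypothetical name)
-- Renaming has to happen in the select itself; the columns: property
-- in a .yml file only adds docs/tests and does not rename anything.
select
    col1 as colA
from {{ ref('seed1') }}
```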
Is it possible to specify only one model to run, and have all of its upstream models inferred? Or do I only have these 2 options:
use a plain dbt run, which may build models I didn’t want to run
manually list every upstream model along with my final model in my dbt run command?
Example:
I have 3 models:
models/model1.sql: select * from {{ ref('seed1') }}
models/model2.sql: select * from {{ ref('model1') }}
models/model3.sql: select * from {{ ref('model2') }}
Is there a way to just run model3 without mentioning model1 and model2 (so dbt would infer them)? Currently, if I try to do dbt run --select model3, I get ORA-00942: table or view does not exist.
And dbt compile --select model3 will return: select * from default_sch.model2
instead of: select * from (select * from (select * from default_sch.seed1))
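For the upstream-models question, dbt’s graph operators cover this: prefixing a selector with `+` selects the node plus all of its ancestors. Assuming the model names above:

```shell
# build model3 and everything upstream of it (model1, model2)
dbt run --select +model3

# if the seed also needs loading first, dbt build runs seeds and models together
dbt build --select +model3
```

Note that dbt compile does not create the upstream tables in the database; the ORA-00942 error means model1 and model2 have not been materialized yet, which `+model3` takes care of.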
Is there a way to specify an already existing schema for an already existing table, without overriding the generate_schema_name.sql macro to omit the default schema when a custom schema is given, as that is not recommended per the documentation (Custom schemas | dbt Developer Hub)?
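One option, if the table already exists and is not built by dbt: declare it as a source. The generate_schema_name macro only applies to objects dbt creates, so a source’s schema is used exactly as written. A sketch (the source and table names are hypothetical):

```yaml
# models/sources.yml
sources:
  - name: legacy
    schema: existing_schema   # used as-is, no custom-schema prefixing
    tables:
      - name: existing_table
```

You would then reference it in a model with `{{ source('legacy', 'existing_table') }}` instead of `ref`.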
Are seeds always CSV files, or can they be other formats too?
The same way I can load a CSV file as a seed, can I generate a CSV file as the output of my model?
(5) If you want to dump a table from your database to a csv file, you could probably write a post-hook that invokes the relevant command for your database (like unload or something). Documentation on post-hooks lives here: https://docs.getdbt.com/reference/resource-configs/pre-hook-post-hook
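A sketch of that suggestion, with the export statement left as a placeholder since it is database-specific (the `config` call and `post_hook` argument are real dbt; the hook body here is a hypothetical placeholder, not runnable SQL):

```sql
-- models/my_model.sql (hypothetical name)
{{ config(
    materialized='table',
    post_hook="<database-specific unload/export-to-CSV statement goes here>"
) }}

select * from {{ ref('seed1') }}
```

The post-hook runs in the warehouse after the model builds, so this only works if your database exposes an export command as a SQL statement.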
Additional question: when I perform an INSERT (for example with materialized='incremental' and incremental_strategy='append'), I can see that by default the insert matches the columns by name rather than by position.
Is there a way to make it match by position?
Also, is it possible to omit a column in the insert so the value is inserted as NULL? Or do I need to explicitly write cast(null as varchar2(50)) as col4 in the select, for example, if the column I want to omit is col4 with a varchar2(50) type?
A dbt model is just a SQL query that describes a table. If, when running a model, a particular column should be null (always or sometimes), you’ll need to build that logic into your model’s .sql file. But your model should be a select query that populates all columns in the destination table.
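Concretely, for the col4 example above, that means the incremental model selects every destination column explicitly, with the omitted one cast to NULL. A sketch (model and column names taken from the question; Oracle types assumed):

```sql
-- models/my_incremental.sql (hypothetical name)
{{ config(
    materialized='incremental',
    incremental_strategy='append'
) }}

select
    col1,
    col2,
    col3,
    cast(null as varchar2(50)) as col4   -- omitted column populated as NULL
from {{ ref('seed1') }}
```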