What would be the best practice when working with a large dataset that is the source for multiple views and tables with different granularities? In the current model, we create aggregations in each model separately (weekly, daily, monthly…), so we recreate the same calculation each time (just grouped by a different dimension), which makes it harder to maintain, test, and check for data quality issues.
Is materializing the metrics in the semantic layer a better practice for handling this?
The best solution I've found so far is to create a macro, parameterise it, and then in each weekly/daily/monthly model call the macro with the specific granularity.
I hate that I have to keep the model code in a macro, but it was the best option to avoid repeating huge chunks of code.
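A minimal sketch of the pattern, assuming a hypothetical `fct_orders` model with `order_date` and `order_amount` columns (the macro and column names here are illustrative, not from the original post):

```sql
-- macros/agg_orders.sql (hypothetical macro holding the shared aggregation logic)
{% macro agg_orders(granularity) %}

select
    date_trunc('{{ granularity }}', order_date) as period,
    count(*)          as order_count,
    sum(order_amount) as total_revenue
from {{ ref('fct_orders') }}
group by 1

{% endmacro %}
```

Then each model file is just a one-line call with its own granularity:

```sql
-- models/agg_orders_weekly.sql
{{ agg_orders(granularity='week') }}
```

```sql
-- models/agg_orders_monthly.sql
{{ agg_orders(granularity='month') }}
```

This keeps the calculation logic in one place, so tests and fixes only need to happen once, at the cost of the model SQL living inside a macro.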
I think there have been big discussions about this in GitHub discussions, but the sticking point was whether one model file should be able to generate multiple tables/views. Right now it's a strict 1:1 mapping.