In a previous post, I addressed the function of the last non-empty aggregation in SSAS (OLAP). Specifically, I looked at what happens when there are ‘missing’ chunks of data in the fact table and how the OLAP engine natively carries the balance forward to an aggregated time member without regard for what the ‘true’ last date is. That is, we expect the last date of stock to be applied to all aggregated date values regardless of whether data exists or not. This post looks at the last non-empty (LNE) calculation in the tabular model.

Generally, we expect the stock schema to take the form of the fact table (balances) surrounded by dimension tables. The fact table (Stock) holds the [Quantity on Hand] (on_hand) for a given date, with the Dates table providing a natural hierarchy of Year, Month and Date.

Summing the on_hand column allows the calculation of daily balances (we will be able to show correct balances when a single date is selected); however, for any aggregated Date field, the [Stock on Hand] value will be incorrect because the formula adds the amounts for all days covered by the date restriction.

For example, if we consider a very small subset of data (shown as ‘Source Data’) and [Stock on Hand] defined as

Stock On Hand:=SUM(Stock[on_hand])

We can see that Jan 2011 for Product A (20) = 15 + 4 + 1 and for Product B (18) = 10 + 8.

[Figure: Source Data alongside [Stock On Hand] using SUM]

Clearly, the addition for aggregate time members does not give us what we want.
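The naive SUM behaviour can be mimicked outside the model. Here is a minimal Python sketch; the rows are a hypothetical subset chosen to match the figures above, and the function names are mine, not part of any DAX or SSAS API:

```python
# Toy stock fact table: (product, date, on_hand). Product A carries
# daily balances of 15, 4 and 1 across January 2011; Product B
# carries 10 and 8. These values are illustrative only.
stock = [
    ("A", "2011-01-05", 15),
    ("A", "2011-01-12", 4),
    ("A", "2011-01-31", 1),
    ("B", "2011-01-05", 10),
    ("B", "2011-01-12", 8),
]

def stock_on_hand_sum(rows, product, month):
    """Naive SUM over the filter context: adds every daily balance."""
    return sum(q for p, d, q in rows if p == product and d.startswith(month))

print(stock_on_hand_sum(stock, "A", "2011-01"))  # 20 = 15 + 4 + 1
print(stock_on_hand_sum(stock, "B", "2011-01"))  # 18 = 10 + 8
```

The monthly totals (20 and 18) are sums of daily balances, not the closing balance we actually want.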

The next thought is to change the stock filter context to the maximum of the time period. That is, return the sum of stock for the last date in the selected time period. For example:

Stock On Hand:=CALCULATE(SUM(Stock[on_hand]), FILTER(ALL(Dates[date]), Dates[date]=MAX(Dates[date])))

The problem with this approach is that the aggregate values will only show when the fact has data for the last date in the month (or year). This is not a problem when we have a full year’s worth of data in the fact, so that the maximum date of the filter is the last date in the period. But in this case we don’t, and usually the most recent data will not be complete for the year. For Product A in the picture below, the total for Jan-2011 is correct (1) because the maximum date for Jan is 31-Jan, which has data (see source). However, in the year context (ie 2011) the maximum date is 31-Dec-2011, there is no stock for that date, and therefore the sum is empty.

[Figure: Source Data alongside [Stock On Hand] using MAX(Dates[date])]
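The failure mode can be simulated with real calendar dates. This Python sketch (hypothetical data and function names, mirroring the Product A balances above) takes the maximum of the *calendar* period, whether or not the fact has a row on that date:

```python
from datetime import date, timedelta

# Hypothetical fact rows keyed by (product, date).
stock = {
    ("A", date(2011, 1, 5)): 15,
    ("A", date(2011, 1, 12)): 4,
    ("A", date(2011, 1, 31)): 1,
}

def stock_on_hand_max_calendar(fact, product, period_dates):
    """Sum on_hand for MAX(Dates[date]) of the calendar period,
    regardless of whether the fact has any rows on that date."""
    last = max(period_dates)
    return sum(q for (p, d), q in fact.items() if p == product and d == last)

jan = [date(2011, 1, day) for day in range(1, 32)]
year = [date(2011, 1, 1) + timedelta(days=i) for i in range(365)]

print(stock_on_hand_max_calendar(stock, "A", jan))   # 1 -- 31-Jan has a row
print(stock_on_hand_max_calendar(stock, "A", year))  # 0 -- 31-Dec has none
```

The month total works only because 31-Jan happens to have data; the year total comes back empty because 31-Dec does not.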

To solve this problem, it is important to remember that the filter context applies a filter to the fact. So, for the filter context of Jan-2011 (say, when Jan-2011 is on a row), the data in the fact is filtered to all the dates in Jan. What we are really interested in, therefore, is the last date in the fact for the current filter context.

Stock On Hand:=CALCULATE(SUM(Stock[on_hand]), FILTER(Stock, Stock[date_key]=MAX(Stock[date_key])))

In this formula, we recognise that the filter context has already been applied to the fact table, derive the maximum date for the current filter period from those rows, and then use that date to further filter the fact.

[Figure: Source Data alongside [Stock On Hand] using MAX(Stock[date_key])]
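The corrected logic, taking the maximum over the dates that actually survive in the fact, can be sketched the same way (hypothetical data and names again):

```python
from datetime import date, timedelta

# Same hypothetical fact rows as before.
stock = {
    ("A", date(2011, 1, 5)): 15,
    ("A", date(2011, 1, 12)): 4,
    ("A", date(2011, 1, 31)): 1,
}

def stock_on_hand_lne(fact, product, period_dates):
    """Take MAX over the fact's own dates within the filter context
    (the last date *with data*), then sum on_hand for that date."""
    period = set(period_dates)
    in_context = {d: q for (p, d), q in fact.items()
                  if p == product and d in period}
    if not in_context:
        return 0
    last = max(in_context)  # last date with data, not last calendar date
    return in_context[last]

jan = [date(2011, 1, day) for day in range(1, 32)]
year = [date(2011, 1, 1) + timedelta(days=i) for i in range(365)]

print(stock_on_hand_lne(stock, "A", jan))   # 1 -- closing balance for Jan
print(stock_on_hand_lne(stock, "A", year))  # 1 -- last fact date in 2011 is 31-Jan
```

Both the month and the year context now report the last known balance, which is exactly the last-non-empty behaviour we are after.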

**Alternate Approach using Transactions**

In addition to storing the quantity of stock as a field in the fact, we may also consider storing the transactional movement. In this case, summing against the Date table filter context works because we are interested in summing all movements **on or before** the max date of the filter context. In this situation, our formula would be:

Stock On Hand:=CALCULATE(SUM(Stock[qty_mvt]), FILTER(Stock, Stock[date_key]<=MAX(Dates[date])))
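The running-sum behaviour of the movement approach can be sketched too. The signed movements below are hypothetical, chosen so that +15, -11, -3 reproduce the balances 15, 4 and 1 from the earlier example:

```python
from datetime import date

# Hypothetical movement fact: signed quantity movements rather than
# daily balances. Cumulatively: 15, then 4, then 1 on hand.
movements = [
    ("A", date(2011, 1, 5), 15),
    ("A", date(2011, 1, 12), -11),
    ("A", date(2011, 1, 31), -3),
]

def stock_on_hand_mvt(mvts, product, period_dates):
    """Running sum of all movements on or before MAX(Dates[date])."""
    cutoff = max(period_dates)
    return sum(q for p, d, q in mvts if p == product and d <= cutoff)

jan = [date(2011, 1, day) for day in range(1, 32)]
print(stock_on_hand_mvt(movements, "A", jan))  # 1 = 15 - 11 - 3
```

Because every movement on or before the cutoff is included, the aggregate is correct for any period, even one whose last calendar date has no transactions.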

Nice Paul – easy to understand and follow.

Hi,

thank you, but maybe

Stock On Hand:=CALCULATE(sum([qty_mvt]), FILTER(Dates, Dates[date]<=max(Stock[date_key])))

Hi 100tsky,

The two formulas work in the same way and give the same results.

Paul

Hi Paul, the main reason to use Tabular is its simple/quick implementation, so my question is: why would someone do all these complex workarounds instead of using MD?

Hi Iman,

There’s a couple of reasons why tabular may be a better fit, and you’ve hit the nail on the head. It’s simple to implement, so if I could flip the question back to you … why would you implement a complex solution when a simple one will suffice?

I also think the consistent design experience is a real positive move forward. However, at the end of the day it’s just another arrow in the toolset.

Oh, and there is Power View and some performance benefits!

Paul, I’m sure you agree that Tabular was meant to bring SSAS to a wider audience. To me it’s an “SSAS for dummies” kind of product, agree?! But when I see the limitations and not-so-easy workarounds that you and other fellow bloggers and highly skilled people are suggesting, I feel something is just not right.

I just can’t get my head around why we should use Tabular for complex scenarios when the implementation could be harder than a proper Multidimensional solution.

Hi Iman, I don’t think that’s the only reason for tabular (after all, there is a new engine), but I think it will expand the audience, and that’s a good thing. I’m going to paraphrase your comment as ‘choose the right tool for the job’, because I see plenty of SSAS installations that are poorly implemented and are really just big pivot tables. In these situations, I don’t think there is a need for OLAP, and I welcome TABULAR. I also think TABULAR through PowerPivot is such a cool thing, because there are heaps of analysts that have been doing this for years in Access and Excel, and this should make life sooooo much easier.

People have been playing with tabular for a little while and will continue to develop new techniques to use the technology to provide more business value. At the same time, though, don’t disregard the techniques that have been developed in OLAP; for example, utility dimensions for time calcs. This is widely used, but you could suggest that it is a workaround, and that is what’s being done with the tabular model now!


Hi Paul,

Thanks for the post. Any idea how to modify the measure to calculate SUM(on_hand) if each product had a different inventory date (without using qty_mvt)? I have a dataset that only has the last value for different accounts, on the last respective transaction date for each account. So I need to aggregate the last non-blank value for each account. So far I’m unable to find a solution using DAX.

Thanks,

Ben