Extracting Connection Information from Pivot Tables

It’s pretty easy to add pivot tables to worksheets – and even easier when the source is a cube. However, there are a couple of ways to go about adding them, and choosing the right one can make your life easier – that’s one of the things we look at in this post.

When you first create a connection to an OLAP source (and I am including tabular in that), a connection is created in the workbook. So, just for fun, let’s go through the process and connect to the cube. On the Data tab, choose From Analysis Services from the Other Sources dropdown.

We’re asked for a server (not shown), so let’s put in the correct details and click Next. We then get a list of cubes and perspectives to connect to (I’m using Adventure Works here, so the names appearing in the image below will be familiar to most).

We then have the option to specify a few additional details. Perhaps the most important of these are the filename that stores the connection and the friendly name. Most of the time we probably don’t put any thought into these and just click Next. In the image below, however, I’ve specified both the filename and the friendly name. Finally, we can click Finish and get to work on the pivot table.

The Next Pivot Table

Now when it comes to the next pivot table, we can run through the process again – if we choose the same names we’ll get a warning, but let’s not go into that. Alternatively, we can just insert a new pivot table and select an existing connection.

You can see this in the screen flow below. When we create a new pivot, we can choose the external source and then the connection.

There’s one big reason why you’d want to insert a new pivot like this – it’s a lot easier to maintain the workbook when you need to understand where data is coming from (or you need to change data sources).

The Detail – Getting Connection and Pivot Info

It would be a pretty ordinary post if that’s all I had to say (no code, after all) – but now the fun starts. Workbooks never finish up as simple and elegant as we’d like – NEVER. It’s easy to add new data sources, new pivots and even pivots of data sourced from other pivots. Throw 20–30 of them into a workbook and you’re left scratching your head as to where the information is coming from and what needs to be updated (say, when the data source changes). If you only had one connection it would be easy, but life’s not like that, and there are connections flying around everywhere. So I needed to look at each pivot and determine its source and connection string.

Here’s the code:

Sub GetPivotData()

  Dim wks As Worksheet
  Dim pt As PivotTable

  ' visit every pivot table on every sheet and report its cache details
  For Each wks In Worksheets
    For Each pt In wks.PivotTables
      Debug.Print wks.Name & ":" & pt.Name
      Debug.Print "Cache Index : " & pt.CacheIndex
      GetCache pt.CacheIndex
      Debug.Print "--- break ---"
    Next pt
  Next wks

End Sub

 

Function GetCache(Index As Long)

  Dim wb As Workbook
  Set wb = ActiveWorkbook

  Dim pc As PivotCache
  Set pc = wb.PivotCaches(Index)

  If pc.OLAP Then
    ' external (OLAP) source - report the workbook connection and its connection string
    Debug.Print "Is OLAP : " & pc.OLAP
    Debug.Print "Connection Name : " & pc.WorkbookConnection.Name
    Debug.Print "Connection String : " & pc.Connection
  Else
    ' local source - report the worksheet range feeding the cache
    Debug.Print "Is OLAP : " & pc.OLAP
    Debug.Print "Data Range : " & pc.SourceData
  End If

End Function
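Running GetPivotData against a workbook containing a single OLAP pivot prints something like the following to the Immediate window (the names and connection string here are made up for illustration):

Sheet1:PivotTable1
Cache Index : 1
Is OLAP : True
Connection Name : MyOlapConnection
Connection String : OLEDB;Provider=MSOLAP.5;Data Source=MyServer;Initial Catalog=Adventure Works DW;…
--- break ---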

 

My Sub (GetPivotData) loops through each sheet in the workbook, visits each pivot on the sheet and then calls a function (GetCache) to get the cache (or connection) information for that pivot. While I could get connection information by just adapting the function to iterate over the slicer caches (as below), I’d like to associate each pivot table with its connection.

Additionally, I want to identify any pivots that are not based on an external (OLAP) source. To do this, I can use the Cache.OLAP property and retrieve the appropriate connection info.
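For reference, here’s a minimal sketch of that alternative – iterating the workbook’s SlicerCaches collection instead. It lists each OLAP slicer cache and its connection, but it can’t tie a connection back to an individual pivot table:

Sub GetSlicerCacheConnections()

  Dim sc As SlicerCache

  ' each slicer cache knows whether it is OLAP and which connection it uses
  For Each sc In ActiveWorkbook.SlicerCaches
    Debug.Print "Slicer Cache : " & sc.Name
    If sc.OLAP Then
      Debug.Print "Connection Name : " & sc.WorkbookConnection.Name
    End If
  Next sc

End Sub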

 

 


Creating Tables in Excel with VBA and External Data – Part II

In Part I, we looked at adding a data table to a workbook using an external data source (we used OLAP, but the process can be applied to any source). This post looks at manipulating an existing table.

Without VBA, we can manage the table manually – both its properties and the underlying query. Simply right-click on the table and select Edit Query… or External Data Properties from the popup menu. Changes made through these dialogues are applied to the table automatically.

If we choose to edit the query, we can simply overwrite the Command Text (as seen in the image below). These changes will be applied automatically (including resizing the table by rows or columns for a different-sized data set) once the OK button is clicked.

For External Data Properties, we can configure how the table reacts to new data. For example, you may notice that the table accommodates additional rows and columns; however, when the query returns fewer rows and columns, the table retains its old size and keeps blank columns where data previously existed. You can resize the table manually (by dragging the bounds of the table’s border) or set the properties of the table to overwrite existing data. If you want new sizes to be automatically incorporated into the table, make sure the Overwrite check box is ticked in External Data Properties.


VBA

Now to the VBA – as Commandant Lassard would say, “There are many, many, many, many fine reasons to use VBA”. We have a lot of flexibility, but let’s keep it simple – here’s what I’ve set up.

Cell B1 is data-validated against the cell range D1:D2 – nice and simple. When we change that cell, the table updates for the new country.

In order to determine whether there is a change in our data (the country selected), we have to create a worksheet event to capture and test the change. I have gone into this in some detail here, and the code is below. Note that this needs to be added to the sheet’s code module (not a separate .bas module). All we do is check that only cell B1 is updated and then call the refresh subroutine.

Private Sub Worksheet_Change(ByVal Target As Range)

  ' update the table only when the single cell B1 is changed
  If Target.Cells.Count = 1 And Target.Row = 1 And Target.Column = 2 Then UpdateMyTable

End Sub

Now for the updating component – the bit that’s called when cell B1 is changed. I think this is pretty straightforward, but I’ll walk through it anyway. First, the code:

Public Sub UpdateMyTable()

  ' ensure that any change in the result-set size is reflected in the table dimensions
  Sheet1.ListObjects("Table_abax_sql3").QueryTable.RefreshStyle = xlOverwriteCells

  ' set the command text and refresh the table
  Sheet1.ListObjects("Table_abax_sql3").QueryTable.CommandText = NewQuery(Sheet1.Cells(1, 2))
  Sheet1.ListObjects("Table_abax_sql3").Refresh

End Sub

Private Function NewQuery(Country As String) As String

  NewQuery = "select {[Measures].[Reseller Sales Amount]} on 0, " & _
             "[Product].[Category].[Category] on 1 " & _
             "from [Adventure Works] " & _
             "where [Geography].[Geography].[Country].&[" & Country & "]"

End Function

I’ve kept the same format as in the original post. The function NewQuery determines what the MDX should be, based on the provided country. All we do is set the table’s command text to the new MDX (via QueryTable.CommandText) and refresh the table.
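For example, if Australia is selected in B1, the MDX generated by NewQuery (using the Adventure Works names above) is:

select {[Measures].[Reseller Sales Amount]} on 0,
[Product].[Category].[Category] on 1
from [Adventure Works]
where [Geography].[Geography].[Country].&[Australia]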

I’ve also set the refresh style so that any change in the shape of the result set (grid size) is automatically reflected in the worksheet table.
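For reference, the three layout options in the External Data Properties dialogue map onto the QueryTable.RefreshStyle constants, so the same behaviour can be chosen in code – a sketch, using the same table as above:

' External Data Properties layout options and their VBA equivalents:
'   Insert cells for new data, delete unused cells -> xlInsertDeleteCells (the default)
'   Insert entire rows for new data                -> xlInsertEntireRows
'   Overwrite existing cells with new data         -> xlOverwriteCells
Sheet1.ListObjects("Table_abax_sql3").QueryTable.RefreshStyle = xlOverwriteCells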

That’s about the size of it! – I hope you find it useful.

Microsoft Tabular Modeling Cookbook

I am pleased to announce that my tabular modelling book is finished. The title is succinct – ‘Microsoft Tabular Modeling Cookbook’ – and it’s available from Packt (and others) at http://www.packtpub.com/microsoft-tabular-modeling-cookbook/book

There is (naturally) a lot in the book – but it is designed to get you up and running fast. We look at all flavours of modelling – both Power Pivot and SSAS – and pretty much everything in between.

Of course, any comments and feedback are welcome!

SQL Server and Power Pivot – Tabular Modeling Cookbook

I have not been blogging lately – truth is, I’ve been busy writing a book on tabular modelling!

The book is quite comprehensive and really focuses on what I perceive the lifecycle of tabular modelling to be (from an analytical perspective at least): modelling and answering questions in Power Pivot (Excel 2010) and then migrating the model to servers, along with maintenance and management.

If you are interested in checking out the preview, the link is here: http://www.packtpub.com/sql-server-and-power-pivot-tabular-modeling-cookbook/book

Managing Connections In Power Pivot

Demonstrations that show importing data into Power Pivot usually do so from the database icon in the home window (as identified below). However, once data is in your model and you wish to add new tables, it is wiser to reuse the existing connection rather than create a new connection for each import. Each time tables are added through this dialogue, a new connection is created within the model, so multiple connections can end up referring to the same source. For model simplicity, it is better to have a single connection referring to a single data source.

Standards for Import

The standard method of importing data into Power Pivot is to use the import from database icon, as shown in the diagram below (when a database is being used). This utility has features that allow you to detect relationships between tables and may suffice for new models – that is, when there are no existing tables in the model.

What this utility does is create a new connection to the data source each time the wizard is invoked. Consider, for example, an existing model that has the Product table from Adventure Works imported. The initial import did not include the ProductCategory and ProductSubcategory tables. Naturally, we would want to include these in the model.

If we use this wizard to re-import the tables, a new connection is set up, and that connection is associated with the two new tables. We can see this by selecting the ‘Existing Connections’ icon on the Design tab.

Note that there are now two connections in the model (as below), caused by the ‘re-import’ of data after the initial population of the model.

Altering an Existing Connection

We can add data to the existing model and reuse an existing connection at the same time. To do this, simply reopen the existing connection’s properties and append tables (or add a query): click the ‘Open’ button in the dialogue and select either ‘Select from a list of tables…’ or ‘Write a query that will….’.

 

This will append any new tables to the model utilising the existing connection.

Perhaps the main reason for reusing the connection is that it keeps the model succinct and manageable. Once a table has been imported through a connection, the connection properties for that table cannot be changed without first deleting all the tables that use the connection. Naturally, this would increase the amount of rework required should consolidation of connections be needed at a later stage.

Finally, connections are retained within the model even after all the tables using the connection have been deleted. The connection must be deleted manually from the ‘Existing Connections’ dialogue.
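As an aside, if you are on Excel 2013 (where the data model is native), a quick way to audit what connections exist is to list the workbook’s Connections collection from VBA – a minimal sketch:

Sub ListWorkbookConnections()

  Dim cn As WorkbookConnection

  ' in Excel 2013 the model connections surface in this collection alongside regular ones
  For Each cn In ActiveWorkbook.Connections
    Debug.Print cn.Name & " : " & cn.Description
  Next cn

End Sub

In the Power Pivot 2010 add-in, the model’s connections are only visible through the ‘Existing Connections’ dialogue in the Power Pivot window.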

 

 

 

OLAP Rigidity wins against Tabular?

There are many reasons why you might choose a tabular model over a multidimensional one. Marco Russo discusses some of the pros of tabular in his blog here, and they generally relate to the flexibility that the tabular model gives you. However, one surprising reason for choosing multidimensional may be its rigid dimensional structure and the certainty that this structure gives in changing data environments.

A multidimensional dimension provides many ways to record information against an attribute. The most important of these are the attribute’s KEY, NAME and VALUE, which are set under the attribute’s source properties. For example, we can see the [State-Province] attribute from the Geography dimension in Adventure Works below. The key is the only property that must be set for an attribute, so if no name is specified, the key is automatically used as the name.

Note that the tabular engine has no concept of KEY or NAME. These are not specified, because native DAX refers to column values directly.

Referring to Attribute Members

When an MDX query refers to members of an attribute, it can do so by using the member’s name or the member’s unique name. Thus, the two MDX statements below are equivalent:

-- query using the member's unique name
select
  [Measures].[Internet Sales Amount] on 0,
  [Product].[Product Categories].[Category].&[4] on 1
from [Direct Sales]

-- query using the member's name
select
  [Measures].[Internet Sales Amount] on 0,
  [Product].[Product Categories].[Category].[Accessories] on 1
from [Direct Sales]

 

There are many reasons why you would choose the first query over the second – chiefly, the second can give unexpected results when there is more than one member with the name being sought. That is, if there were two product categories with different keys but the same name (Accessories), the query would return only the first member. One might expect the query to aggregate the results, but this does not occur.

Once the query has been written, the member’s unique name is confusing, and many people complain that it lacks reference to the underlying data (after all, it’s impossible to know what .&[4] means). However, it is still the best way to refer to a member, and it is the way most client tools generate MDX (using the member’s unique name).

Note that since the tabular engine has no concept of a key for an attribute (you don’t uniquely specify the key or name for an attribute), the MDX passed to the tabular engine uses what we would otherwise consider the name as the key. Thus, continuing the example above, the tabular equivalent for this member is [Product].[Category].[Category].&[Accessories]. To an OLAP (multidimensional) developer, this type of key (i.e. the name as the key) is generally considered a real no-no and contravenes best practice.
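To make that concrete, here is what the first query from above might look like against a tabular model (assuming the default cube name [Model] and the same measure and table names):

select
  [Measures].[Internet Sales Amount] on 0,
  [Product].[Category].[Category].&[Accessories] on 1
from [Model]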

 

Client Queries and the Interesting Stuff

For tabular models, the unique name for a member is generated from its name. Furthermore, most client tools refer to the key (that is, the unique name) in their script – I have even seen some tools hard-code the published member for parameters. So, what happens when the underlying attribute name changes? The change could break your report.

We can easily demonstrate how this error occurs using Excel as a client. Suppose I have a pivot based on a multidimensional cube and I convert the simple pivot to formulas (as below). The Accessories member in cell A2 is hard-coded (the formula is in D2 and uses the key [4], as above). If the name of this member changes, the spreadsheet still works fine, and the new name is displayed.

However, in the tabular example (as below), when the member’s name changes, the hard-coded MDX returns a #N/A error for the CUBEMEMBER function.

In the second picture, the name of the member has changed to [Accessories & Other], but the ‘report’ still relies on the hard-coded value .&[Accessories], so the error occurs.
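Side by side, the two hard-coded CUBEMEMBER formulas look something like this (the connection names here are hypothetical):

=CUBEMEMBER("MyOlapConnection", "[Product].[Product Categories].[Category].&[4]")
=CUBEMEMBER("MyTabularConnection", "[Product].[Category].[Category].&[Accessories]")

The first (multidimensional, key-based) keeps working when the member is renamed; the second (tabular, name-based) returns #N/A as soon as the name changes.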

Conclusion

The use of a ‘key’ for each member offers a way to avoid the problem described above, and it is only available in multidimensional dimensions. To be fair though, the problem is as much a function of design as it is of structure: a multidimensional dimension would raise the same issue if the attribute’s name had been used as its key (but no-one does that, do they?).

DAX III – Filtering Queries with CALCULATETABLE and FILTER

So far, the queries that we have created in DAX have focused on the entire result set (with the exception of the START AT clause in EVALUATE). Naturally, we will often want to restrict the rowset in some way, and that is the focus of this post – how to filter row-set tables in DAX queries. There are two ways to do this: firstly, we can use the CALCULATETABLE syntax, and secondly, we can wrap the evaluated table in a FILTER.

BOL defines the function of CALCULATETABLE as “Evaluates a table expression in a context modified by the given filters”, taking the form:

 

CALCULATETABLE(<expression>,<filter1>,<filter2>,…)

 

There are many applications for this; however, let’s start with the simplest – reducing the rows returned, say to the first six months of 2000. We can write the query like this:

evaluate
(
    calculatetable
    (
        addcolumns
        (
            summarize
            (
                'dates'
                , 'dates'[year]
                , 'dates'[Month]
            )
            , "sum_amount", calculate(sum('Sales'[amount]))
        )
        , 'dates'[year] = 2000
        , 'dates'[month] >= 1
        , 'dates'[month] <= 6
    )
)

 

One of the restrictions of using CALCULATETABLE to restrict rows is that its filter arguments cannot be applied to calculated measures. So if we extended the filter arguments to include a condition on the calculated amount, we would get an error:

evaluate
(
    calculatetable
    (
        addcolumns
        (
            summarize
            (
                'dates'
                , 'dates'[year]
                , 'dates'[Month]
            )
            , "sum_amount", calculate(sum('Sales'[amount]))
        )
        , 'dates'[year] = 2000
        , 'dates'[month] >= 1
        , 'dates'[month] <= 6
        , calculate(sum('Sales'[amount])) > 10000 -- a calculated filter is not allowed here
    )
)

A function 'CALCULATE' has been used in a True/False expression that is used as a table filter expression. This is not allowed.

 

Note that the filter arguments in CALCULATETABLE are applied to the underlying table regardless of the fields returned (projected). So, if we wanted the first six months’ sales for the year 2000, we could simply remove the 'dates'[Month] field from the SUMMARIZE, and the calculation would still be applied only to January through June.

evaluate
(
    calculatetable
    (
        addcolumns
        (
            summarize
            (
                'dates'
                , 'dates'[year]
            )
            , "sum_amount", calculate(sum('Sales'[amount]))
        )
        , 'dates'[year] = 2000
        , 'dates'[month] >= 1
        , 'dates'[month] <= 6
    )
)

 

We could also include this restriction in the definition of the calculation and achieve the same results (see sum_amount2 below).

evaluate
(
    calculatetable
    (
        addcolumns
        (
            summarize
            (
                'dates'
                , 'dates'[year]
            )
            , "sum_amount", calculate(sum('Sales'[amount]))
            , "sum_amount2", calculate(sum('Sales'[amount])
                                , all('dates')
                                , 'dates'[Year] = 2000
                                , 'dates'[Month] >= 1
                                , 'dates'[Month] <= 6
                                )
        )
        , 'dates'[year] = 2000
        , 'dates'[Month] >= 1
        , 'dates'[Month] <= 6
    )
)

 

From the rowset point of view, we are still faced with the problem of how to restrict rows based on a calculated value. To do this, we can apply the FILTER function against the table. FILTER returns a subset of the table and takes the form:

 

FILTER(<table>,<filter>)

 

Consider yearly sales for the first six months of each year, as below:

evaluate
(
    calculatetable
    (
        addcolumns
        (
            summarize
            (
                'dates'
                , 'dates'[year]
            )
            , "sum_amount", calculate(sum('Sales'[amount]))
        )
        , 'dates'[Month] >= 1
        , 'dates'[Month] <= 6
    )
)

 

A FILTER can be applied to this rowset so that only rows with sales over 50,000 are returned. Note that it is wrapped around the CALCULATETABLE.

evaluate
(
    filter
    (
        calculatetable
        (
            addcolumns
            (
                summarize
                (
                    'dates'
                    , 'dates'[year]
                )
                , "sum_amount", calculate(sum('Sales'[amount]))
            )
            , 'dates'[Month] >= 1
            , 'dates'[Month] <= 6
        )
        , calculate(sum('Sales'[amount])) > 50000
    )
)

 

In these situations, it may be best to define the measure explicitly:

define
    measure 'dates'[sum_amount] = calculate(sum('Sales'[amount]))

evaluate
(
    filter
    (
        calculatetable
        (
            addcolumns
            (
                summarize
                (
                    'dates'
                    , 'dates'[year]
                )
                , "sum_amount", 'dates'[sum_amount]
            )
            , 'dates'[Month] >= 1
            , 'dates'[Month] <= 6
        )
        , 'dates'[sum_amount] > 50000
    )
)