Changing Excel Table Connections in VBA

In previous posts, I’ve discussed using VBA to extract connection information from Pivot Tables and hacking a table’s command so that you can programmatically change the results of a query. But what if you have a table and you want to change both its connection and its command?

This can be achieved pretty easily using the table’s .QueryTable object. In fact, we can change the entire table using this class.

So how does it work? Suppose I’ve inserted some data into a worksheet from SQL Server. The menu path is DATA → From Other Sources → From SQL Server. Next I get the familiar connection windows to connect to a server and then select a table (as below).

I’ll give the table a friendly name (adventure_works) and return the data to a table. I get a connected table which is sourced from a query and can be updated (either right-click somewhere in the table and select Refresh from the popup menu or select the refresh button from the TABLE TOOLS → DESIGN (ribbon) menu).

I can also use the DESIGN tab of the table tools to give my table a decent name by simply replacing the default name (as in the following image).

Now I want to change the database server (and potentially the query) that my table uses. There are two things to keep in mind here. Firstly, we need to change the query’s connection string (no brainer, eh?) and secondly, the table (in Excel) was added as a database table rather than a query.

As in the previous post, I can refer to the table as a ListObject in VBA. This is a property of the sheet, so in order to reference the table I’ll have to use some code like this (note the table is on Sheet2):

Dim li As ListObject

Set li = Sheet2.ListObjects("aw_employees")

For the connection string, I can refer to it through the ListObject’s QueryTable class. So I can determine the current connection string, or set it, by using the following code:

Debug.Print li.QueryTable.Connection ' retrieve the connection string

li.QueryTable.Connection = "OLEDB;Provider=SQLOLEDB.1;…(this is my connection string) .."

If I then refresh the table (using li.Refresh), one may think that’s the end of the story and our work is complete, but that’s not quite true just yet.

Remember that we added the table as a SQL table? Because of this, the ‘query’ for the .QueryTable actually refers to a table (qualified with the SQL database) and its .CommandType is a table. If we debug the code to determine the command (as below), we can see this.
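A small sketch of that check in the Immediate window (the exact CommandText you see will depend on the table you originally connected to):

Debug.Print li.QueryTable.CommandType ' prints the XlCmdType value; xlCmdTable when the source was added as a table

Debug.Print li.QueryTable.CommandText ' the source table name, qualified with the original database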

After we change the connection, we could set the new table to use the employee table (without qualifying the database) with something simple like:

li.QueryTable.CommandText = "dbo.DimEmployee"

li.Refresh

Or, we could change the command type to a SQL command and provide the query text. This would suffice for that:

li.QueryTable.CommandType = xlCmdSql

li.QueryTable.CommandText = "select * from dbo.DimEmployee"

li.Refresh
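
Putting it all together, a minimal sketch of the whole change might look like the following (the connection string, server, database and query are placeholders for your own values):

Sub ChangeTableConnection()

  Dim li As ListObject
  Set li = Sheet2.ListObjects("aw_employees")

  ' point the query table at the new server/database (placeholder connection string)
  li.QueryTable.Connection = "OLEDB;Provider=SQLOLEDB.1;Data Source=MyNewServer;Initial Catalog=AdventureWorksDW;Integrated Security=SSPI;"

  ' switch from a table-based command to a SQL command and supply the query
  li.QueryTable.CommandType = xlCmdSql
  li.QueryTable.CommandText = "select * from dbo.DimEmployee"

  li.Refresh

End Sub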

That’s about it; we can change the table’s connection and its query, and have those changes reflected in our table.

The Wisdom of the (Tableau) Crowd?

Ok … I’ll admit it … Tableau has some great features.

One of the standout features of the new release (8.2) is the new data access method: the ability to model data by dropping sheets from a file and manipulating them visually as a data source.  The benefit of this is that it removes the manual connection work that is typically associated with end-user reporting tools and combining data.  Prior versions could do this to some extent by linking data sources; however, the new version allows it to happen in what could effectively be called a model designer.  Another slightly different approach (and one which is way cool) was developed by Targit (they used a shared folder as a drop directory for files, which could then be consumed by the application automatically). There’s an obvious benefit to the tight integration of data and its downstream analysis, especially for business users that are not so concerned with technical model design as with the analysis of data and using it in the tool. Removing layers of integration delivers on the promise of traditional pitches that use catch cries of

spend less time working on data and more time understanding it

To be fair though, there is an assumption (IMO) that the data being used is pretty clean to begin with … all we are really doing is defining aggregation paths and streamlining the import of data.  A common objection to this type of activity (where users manage and consume their own data) is that the production of data is not verified by IT or officially sanctioned (most noticeably it is raised from an IT-centric standpoint). There could be lots written about that discussion point, but for now we will just accept that it is going to happen anyway and leave it.

Another great feature is the idea of storyboards, where you can trace (for want of a better word) the thought patterns used in an investigation and present a complete view of the analysis. I’ve heard it dubbed a PowerPoint killer (and there may be some truth in that where PowerPoint is used with a heavy reliance on data integration). Again, the concept is not new (Yellowfin has had it for at least one version now).

But I digress and should move on to crowds … or more importantly the vendor conference. I was staggered to discover that the Tableau conference in Seattle sold out early with 5,200 attendees (yes, that’s right, 5,200). That’s a remarkable number when you consider that the PASS Summit on SQL Server (incorporating the full MS stack) gets an advertised “more than 5,000” attendees (I am assuming it’s just over 5,000, so somewhere in the range of 5,000–5,100). Since this includes both the database administration and the BI arms, those dedicated to BI would fall far short of 5,000. In contrast, the PASS BA conference, which specifically targets Business Analytics, gets 750+ attendees (again I’ll assume we’re talking 750–760 attendees). Given the ubiquity of the MS platform as a tool, the similar number of attendees at the Tableau conference is astonishing.

 

Brisbane SQL Saturday

SQL Saturday in Brisbane is Back!!

I’m happy to announce that we are concreting arrangements for a Brisbane SQL Saturday … it’s simply the best way to get SQL Server and BI training from some of Australia’s (and perhaps some international) best trainers and consultants.  Of course, the great thing is, IT’S FREE and run by SQL community members.  So tell your boss, family and friends.

We’ve scoped out a great venue at Queensland University of Technology and are in the process of finalising some paperwork now, so leave the 20th of September open for some great SQL and BI goodness.

Like to know more?  Keep an eye on the SQL Saturday site for the event, or ping me and I’ll add you to the distribution list.

Update : 21-07-2013

Ok … we could not host the event on the SQL Saturday site. Will that stop us?

NO, we’ve created our own website to manage the event.  Check out www.sqlgroup.com.au/home.aspx to find out more.

Same great content, same great day, same venue – Just another name

Flawless Excel in Business Intelligence

Somewhat prompted by some recent posts about the inherent error associated with Excel as a BI tool (and more importantly how some magical tool solves this problem), I thought I’d discuss Excel as a BI tool. In fact, it’s one of my favourite pondering points: “what role should Excel play in business intelligence?”

As I’ve stated above, it’s a discussion that’s often prompted by claims that a product reduces the inherent risk associated with Excel (and to a lesser extent Access). Vendors often suggest that their Excel support mitigates that risk because it adds corporate security and authorised data capabilities to the user’s trusted tool (it is often argued that Excel is the only tool business users want to use, or perhaps can use).

There are several flavours of the pitch (often several of them appear in the same discussion), so let’s look at a few of the key selling (or scaring) points:

  1. Excel has inherent risk and is error prone. For this, I’ll take an extreme case since it suits my discussion. The nature of these stories is that Excel caused some monumental loss because of an error in a formula or some other error in the spreadsheet. There’s nothing like a bit of fear to make you sit up and pay attention, is there? If you have not had this error yet, don’t worry, because you will … it’s only a matter of time (in fact we’ll even throw in some statistics to show just how common this can be). It’s no big deal if you haven’t found it yet because you just haven’t looked hard enough. In fact, some vendors even go so far as to claim that users are arrogant (now that’s going to mend the bridge between IT and Business). But then again, there’s nothing like creating a bit of division, is there – ‘oh yeah, now I remember, divide and conquer’.
  2. The Excel (or Access) system is undocumented, and the removal of the key person who developed the tool would cause monumental loss to the company. How could any responsible manager allow a system or reporting tool to operate if it is reliant on a key person … this should never happen, should it? If you want to be a good manager (and there’s the no-back-out argument) and ensure business continuity, you need a system that’s IT sanctioned with proper documentation – so who wouldn’t want or need that?
  3. Oh yes – did I mention that IT hasn’t sanctioned your Excel (or Access) work? They don’t support it and you can’t go to them with problems. This really reinforces the risk associated with points 1 & 2. If there’s an issue then you are on your own. This is a very compelling argument that reinforces a CYA mentality – perhaps we should rewrite the argument as ‘if you can’t transfer blame, you better not take responsibility for it’.
  4. End users should focus on data analysis and not data preparation. Here we are presented with the classic position that too much time is spent on data preparation and not analysis. Could this possibly be a hint that the employee is unproductive and could get so much more done if only they used the tool?
  5. IT takes too long or does not do it right. This is such a common selling point that it is almost taken for granted (especially in the realm of reporting). A pitch that focuses on the ‘corporate lifecycle of reporting’ may go something like this:
    1. The business user describes the report to the IT analyst
    2. Sometime later, the IT guys get around to writing the spec for the report and pass it to the dev queue, where the report writer creates it. This doesn’t happen straight away though, because the report writer is so busy, so say six months later the user gets their report.
    3. The report is inevitably wrong and the cycle begins again.
    4. By now the business has moved on and the business user needs a new report.
  6. Users need Excel interaction. If there’s been no justification presented above, we need one resounding point – the final nail in the coffin (as it were). The pitch here focuses on user needs. We often hear very compelling statements along the lines that users just want to use Excel because that’s what they know, or it’s just the way they work. Picture the preparation of a presentation where the analyst grabs some information from one report, creates a LOOKUP from data from another and then creates a chart to paste into PowerPoint. Based on this story it’s easy to see why users are always going to use Excel (well, so the pitch goes).

I have listed more than a few points here, but I think they can be summarised into two categories:

  1. Relying on Excel for the manipulation of data is a risky proposition (so you need some involvement of IT to provide corporate data to manipulate) and
  2. Users need Excel.

Now the vendor presents their tool – something to solve all the problems listed above, but how appropriate are such tools in reality? Aside from the idea that they present a very narrow perspective of what BI can do, we will now focus on how the vendor’s tool satisfies the gaping hole in the end user’s tool set.

Now the vendor presents the age of enlightenment in their product … our (their) tool solves all these problems because it:

  1. Is managed by IT and hosted, so that there will not be any issues associated with lost or deleted files.
  2. Has gone through a rigorous testing process.
  3. Can be used by anyone. The software is so intuitive that any user can pick it up and use it.
  4. Delivers data to users in the format they want, that is, it’s accessible in Excel.

As convincing as these arguments might appear on the surface, there’s a real problem with this line of thinking, and we can sum it up in two ways.

Firstly, to paraphrase Lyndsay Wise (W|T): if users need Excel then the BI tool is not doing what it should. While I could go into a long-winded discussion, I think the statement is succinct. Where users have to bring data out into Excel to manipulate it (you don’t tend to hear the buzzword ‘mash up’ anymore, do you?), there’s a pretty convincing argument that the sanctioned solution just doesn’t cut it (sanctioned as in corporate structure). If we think about it for a minute longer and push the case for Excel, we’d realise that what we are really saying is that our business model is so inflexible it needs an additional layer of manipulation in order to provide any real value. This may be the case, but if so, shouldn’t the problem be attacked at the source and not downstream, that is, when it enters the model and not when it’s spat out?

Now for the second point – the one where the tool somehow removes error where some form of manipulation is required. I am very interested to know exactly how any tool achieves this magical proposition (and have never seen a convincing argument from a vendor). At best, you’ve got a cell location (or several) that links back to corporate (and supposedly agreed) data. However, if we look at the process of what Excel is being used for, we are faced with exactly the same issue that would be present had the user not had the tool. That is, they are free to use the data how they wish. After all, that’s the reason why Excel is so popular: it allows the user (of any skill level) a programmatic interface for working with data. The tool adds no more comfort than would otherwise be available from Excel.

If we would like to suggest that the availability of data in Excel is the cure for Excel’s woes as a corporate BI tool (and those nasty errors that are so well publicised), then we are simply adding another layer for users to get wrong. After all, the placement of data relies on the end user finding the right information to begin with!

Not surprisingly, the argument for safety around Excel as a BI tool resolves to the user and not the tool. Once it’s placed in Excel, data is at the direction of the user, and it is the user’s ability that we seek mitigation against.

 

Packt’s 10 Year Celebration

Usually, the cost of IT & reference books is … well pricey to say the least.  Unfortunately, if you wait for a sale, you save some cash but end up with old tech books.

Well, luckily Packt Publishing is having a 10 year celebration.  No e-book over $10.  If you are in the market, it might be worthwhile checking it out.  But here’s the kicker: it’s only lasting 10 days.

 

Extracting Connection Information from Pivot Tables

It’s pretty easy to add pivot tables to worksheets – even easier when the source is a cube. However, there are a couple of ways to go about adding them which could make your life easier – that’s one of the things we look at in this post.

When you first create a connection to an OLAP source (and I am including tabular in that), a connection is created in the workbook. So just for fun, let’s go through the process and connect to the cube. On the Data tab, choose From Analysis Services from the Other Sources dropdown.

We’re asked for a server (not shown), so let’s put in the correct details and then click Next. Then we get a list of cubes and perspectives to connect to (I’m using Adventure Works here, so the names appearing in the image below will be familiar to most).

We then have the option to specify a few additional details. Perhaps the most important of these are the filename that stores the connection and the friendly name. Most of the time, though, we probably don’t put any thought into this and just click Next. However, in the image below I’ve specified both the filename and the friendly name. Finally, we can click Finish and get to work on the pivot table.

The Next Pivot Table

Now when it comes to the next pivot table, we can run through the process again – if we choose the same names we’ll get a warning, but let’s not go into that. Alternatively, we can just insert a new pivot table and select an existing connection.

You can see this in the screen flow below. When we create a new pivot, we can choose the external source and then the connection.

There’s one big reason why you’d want to insert a new pivot like this – it’s a lot easier to maintain the workbook when you need to understand where data is coming from (or you need to change data sources).
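
As an aside, the same thing can be done in VBA by pointing a new pivot cache at an existing workbook connection. A rough sketch (the connection name, destination sheet, range and pivot table name are all placeholders):

Sub AddPivotFromExistingConnection()

  Dim pc As PivotCache

  ' reuse a connection that already exists in the workbook (placeholder name)
  Set pc = ActiveWorkbook.PivotCaches.Create( _
      SourceType:=xlExternal, _
      SourceData:=ActiveWorkbook.Connections("MyCubeConnection"))

  ' drop a new pivot table onto Sheet1 (placeholder destination and name)
  pc.CreatePivotTable TableDestination:=Sheet1.Range("A1"), TableName:="pvtFromExistingConnection"

End Sub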

The Detail – Getting Connection and Pivot Info

It would be a pretty ordinary post if that’s all I had to say (no code after all) – but now the fun starts. Workbooks never finish up as simple and elegant as we’d like – NEVER. It’s easy to add new data sources, new pivots and even pivots of data sourced from other pivots. Throw 20-30 of them in a workbook and you’re left scratching your head as to where the information is coming from and what needs to be updated (say, when the data source changes). If you only had one connection it would be easy, but life’s not like that and there are connections flying around everywhere. So I needed to look at each pivot and determine its source and connection string.

Here’s the code:

Sub GetPivotData()

  Dim wks As Worksheet
  Dim pt As PivotTable

  ' walk every pivot table on every sheet and report its cache details
  For Each wks In Worksheets
    For Each pt In wks.PivotTables
      Debug.Print wks.Name & ":" & pt.Name
      Debug.Print "Cache Index : " & pt.CacheIndex
      GetCache pt.CacheIndex
      Debug.Print "--- break ---"
    Next pt
  Next wks

End Sub

Function GetCache(Index As Long)

  Dim wb As Workbook
  Set wb = ActiveWorkbook

  Dim pc As PivotCache
  Set pc = wb.PivotCaches(Index)

  If pc.OLAP Then
    ' external (OLAP) source: report the connection name and string
    Debug.Print "Is OLAP : " & pc.OLAP
    Debug.Print "Connection Name : " & pc.WorkbookConnection.Name
    Debug.Print "Connection String : " & pc.Connection
  Else
    ' worksheet-sourced pivot: report the source range
    Debug.Print "Is OLAP : " & pc.OLAP
    Debug.Print "Data Range : " & pc.SourceData
  End If

End Function

 

My Sub (GetPivotData) loops through each sheet in the workbook, gets each pivot on the sheet and then calls a function (GetCache) to get the cache (or connection) information for that pivot. While I could get connection information by just adapting the function to iterate over the slicer caches (as sketched below), I’d like to associate each pivot table with its connection.
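
For reference, the slicer-cache approach looks something like the following sketch (it assumes the workbook actually has slicers attached to pivots, which is part of why it’s less useful here):

Sub GetSlicerCacheInfo()

  Dim sc As SlicerCache

  For Each sc In ActiveWorkbook.SlicerCaches
    Debug.Print "Slicer Cache : " & sc.Name
    If sc.OLAP Then
      Debug.Print "Connection Name : " & sc.WorkbookConnection.Name
    End If
  Next sc

End Sub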

Additionally, I want to identify any pivots that are not based on an external (OLAP) source. To do this, I can use the Cache.OLAP property and retrieve the appropriate connection info.

 

 

SQL Saturday Lisbon Portugal

Paul te Braak:

I can only express my appreciation also – thanks guys!

Originally posted on Paul Turley's SQL Server BI Blog:

We’re on our way home from Lisbon, Portugal after speaking at SQL Saturday #267. Having had the pleasure to attend and speak at quite a few SQL Saturday events over the years, I’ve never seen a group of event organizers work so tirelessly with as much passion and dedication. Thanks to Niko Neugebauer, Vitor Pombeiro, Paulo Matos, Quilson Antunes, Murilo Miranda, André Batista and Paulo Borges for the late nights and long hours you all invested to make this a very special SQL Saturday. The event was a great success, as was a special day of sightseeing for all the speakers the day afterward. After recruiting an impressive list of speakers from all over the globe, these volunteers went well beyond the call of duty to chauffeur speakers from the airport to hotels, the event venues and around the city. It was quite a treat. Thank you!
