Aggregate functions are the topic of this month's T-SQL Tuesday. An interesting one, and it made me think about what I've done with aggregation that could be considered interesting.
One thing that sprang to mind was some work I did on a data warehouse. I worked on a project a while back (a few years now) that included a data source from an ERP system, which was effectively a table populated from a series of Excel worksheets. The table was set up so that each cell in the worksheet had its own row. This resulted in 6,435 rows (cells A1 to I715) per project, per financial period, so 6,435 * 200 (and then some) * 12, which works out at over 15,444,000 rows per year. The code and table samples below are representative of the process we followed, and the table structures have been appropriately anonymised, but you get the general idea.
It wasn't necessary to load all the source data into the data warehouse, since there was a lot of information that we didn't need. Effectively, this was the process we had.
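For illustration, a representative version of the staging table might look something like this. The column names are taken from the queries below; the data types and sample rows are my assumptions, not the actual ERP schema:

```sql
-- Representative sketch of the staging table (assumed types; the real
-- table held the same one-row-per-worksheet-cell layout described above).
CREATE TABLE dbo.xl_test (
    project_id  INT          NOT NULL,  -- project the worksheet belongs to
    xl_month    TINYINT      NOT NULL,  -- financial period: month
    xl_year     SMALLINT     NOT NULL,  -- financial period: year
    xl_cellref  VARCHAR(10)  NOT NULL,  -- worksheet cell reference, e.g. 'A1'
    xl_value    VARCHAR(255) NULL       -- that cell's contents
);

-- A couple of hypothetical rows, one per worksheet cell:
INSERT INTO dbo.xl_test (project_id, xl_month, xl_year, xl_cellref, xl_value)
VALUES (1, 4, 2014, 'A1', 'Widget Project'),
       (1, 4, 2014, 'A2', '1250.00');
```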
To get the values out for the project, in the correct form, the following T-SQL was used:
SELECT project_id, xl_month, xl_year,
       MAX(CASE WHEN xl_cellref = 'A1' THEN xl_value END) AS [A1],
       MAX(CASE WHEN xl_cellref = 'A2' THEN xl_value END) AS [A2]
FROM dbo.xl_test
GROUP BY project_id, xl_month, xl_year;
After a bit of time with this running, we made some changes, and ended up with the following:
SELECT project_id, xl_month, xl_year, [A1], [A2]
FROM (
    SELECT project_id, xl_month, xl_year, xl_cellref, xl_value
    FROM dbo.xl_test
) AS xl_test
PIVOT (
    MAX(xl_value) FOR xl_cellref IN ([A1], [A2])
) AS aPivot;
This (and some of the other changes we made) actually improved the performance of the DWH load by approximately 25%; however, I'd imagine a fair chunk of that was down to the fact that PIVOT is quicker than a dozen or so CASE expressions.