Are you realising the full potential of SQL Server?
One of our Sydney consultants, Dawie Kruger, shares some of his thoughts on the features and functionality available in Microsoft SQL Server that he feels are being underutilised by organisations.
I was privileged enough to attend Microsoft Ignite 2017 this year on the Gold Coast. There was considerable talk about the new features being offered in SQL Server 2016 Service Pack 1. If you are not familiar with this yet, Microsoft is making over 15 Enterprise Edition features available in the SQL Server Standard and Express editions. These include In-Memory OLTP, Partitioning, Compression, and additional security features like Always Encrypted.
These are some great new features, and my fellow attendees agreed with me that this is a fantastic step forward.
One of the sessions I attended was “Explore In-Memory OLTP Architectures and Customer Case Studies”, presented by Sunil Agarwal (Principal Program Manager at Microsoft Corporation). One of the first questions in this session was “Who is currently using In-Memory or Compression in SQL Server?” I was surprised by how few hands were raised.
This got me thinking: how many companies are running the latest versions of software without utilising it to its full potential? At one of my previous clients, the CIO embarked on a strategy to identify how the existing applications could be used to their full potential as an alternative to upgrading to the “latest and greatest”. We found that in most cases features and capabilities were not being utilised, and once configured properly these had such a significant impact on performance and operational support that the urgency of upgrading the applications was reduced.
In other cases, I have seen how applying features like Compression or Partitioning to tables in Microsoft SQL Server not only improved performance and stability but also won back gigabytes of hard disk space, which is still a hot commodity on today’s VM hosts.
In this blog I will focus on two Microsoft SQL Server technologies, Compression and In-Memory OLTP, that I feel are underutilised in the DW/BI space.
Table compression in SQL Server has been available in two flavours, row and page, since SQL Server 2008, and in my personal experience it is always something to consider. The example below is from a site where compression resulted in an average 38% reduction in the disk space required by the solution.
We did see a slight increase in the build time of some of the compressed tables during the daily ETL run; however, the improved query speed more than compensated for this, with cube processing times decreasing by 50%.
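As a minimal sketch of how you might evaluate and apply compression yourself (the table name dbo.FactSales is hypothetical and used only for illustration):

```sql
-- Estimate the space savings before committing to a rebuild
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'FactSales',
    @index_id         = NULL,   -- all indexes
    @partition_number = NULL,   -- all partitions
    @data_compression = 'PAGE';

-- If the estimate looks worthwhile, apply PAGE compression
ALTER TABLE dbo.FactSales
REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Running the estimate first lets you compare current and compressed sizes per partition before paying the cost of the rebuild.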
In-Memory OLTP is a technology that allows you to store database tables in server memory (RAM). A demonstration at Ignite highlighted the improvements made to in-memory tables in SQL Server 2016 and the benefits to be gained when they are used correctly. SQL Server 2016 offers three durability levels for In-Memory tables:
- Full durability – data is held in memory, and every transaction is also committed to disk, so no committed data is lost on failure.
- Delayed durability – data is held in memory, with log writes to disk deferred; commits are faster at the cost of a small window of potential data loss.
- Non-durable – data is held in memory only; the schema survives a restart, but the data does not.
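These options map onto table and database settings roughly as follows (database, filegroup, table, and path names here are illustrative assumptions, not from a real system):

```sql
-- A database needs a memory-optimised filegroup before it can hold In-Memory tables
ALTER DATABASE SalesDB
ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDB
ADD FILE (NAME = 'imoltp_data', FILENAME = 'C:\Data\imoltp_data')
TO FILEGROUP imoltp_fg;

-- Fully durable: both schema and data survive a restart
CREATE TABLE dbo.OrdersDurable
(
    OrderId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Amount  DECIMAL(10,2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Non-durable: the schema survives a restart, the data does not
CREATE TABLE dbo.OrdersStaging
(
    OrderId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Amount  DECIMAL(10,2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

-- Delayed durability is a database/transaction-level setting rather than a table option
ALTER DATABASE SalesDB SET DELAYED_DURABILITY = ALLOWED;
```

Note that delayed durability is configured on the database (or per transaction), not on the table itself, which is worth keeping in mind when choosing between the three levels.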
These options open up a number of uses for handling and processing data. For example, instead of adding pressure to TempDB with temp tables, why not create a non-durable In-Memory table? Or why not land data in an In-Memory table, allowing you to load multiple sources into a single table at the same time without locking or latching?
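One way to sketch the temp-table replacement idea is with a memory-optimised table type, which keeps working data out of TempDB entirely (the type and column names below are hypothetical examples):

```sql
-- A memory-optimised table type can stand in for a #temp table,
-- avoiding TempDB allocation and latching
CREATE TYPE dbo.StagingRows AS TABLE
(
    RowId   INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Used like an ordinary table variable in a batch or procedure
DECLARE @rows dbo.StagingRows;
INSERT INTO @rows (RowId, Payload) VALUES (1, N'example');
SELECT RowId, Payload FROM @rows;
```

This assumes the database already has a memory-optimised filegroup configured.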
To see the full benefits of In-Memory OLTP there are some development practices to take into consideration. At present you get substantially higher performance when you use natively compiled stored procedures to load your data into In-Memory tables. On a test bench we saw inserts jump from 3,000 to 8,000 records per second simply by converting the SQL table to In-Memory, and then jump again to 12,000 inserts per second by natively compiling the stored procedures.
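For readers who have not written one before, a natively compiled insert procedure looks like the sketch below (the procedure and table names are hypothetical; the target table must itself be memory-optimised):

```sql
-- A natively compiled stored procedure inserting into an In-Memory table
CREATE PROCEDURE dbo.InsertOrder
    @OrderId INT,
    @Amount  DECIMAL(10,2)
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
(
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'us_english'
)
    INSERT INTO dbo.OrdersDurable (OrderId, Amount)
    VALUES (@OrderId, @Amount);
END;
```

The NATIVE_COMPILATION and SCHEMABINDING options are both required, as is the ATOMIC block, which replaces the usual BEGIN/COMMIT transaction handling.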
One truly exciting aspect is that Table Compression and In-Memory Tables are compatible technologies that can be used in conjunction with each other, and I strongly encourage you to explore them further if you aren’t already leveraging this functionality.
A disclaimer here is that the figures seen in this blog are indicative examples only and are dependent on the type and volume of data used. Just as you would investigate and test practices like indexing strategies, these features should be properly tested before being implemented in production systems.