Tuesday, July 29, 2014

How to Edit Read-Only Non-clustered Columnstore Data

As I've discussed in some of my previous posts, creating a non-clustered Columnstore index makes the index as well as the base table read-only, which means you can't insert, update, or delete any data until you drop the index. This may seem like a huge issue, but in reality it's not that much of a problem. Keep in mind the Columnstore index feature is targeted at data warehouses that modify data infrequently. In the examples below, I go through two methods you can use to edit your read-only data.

To get started, we need to create a test table and insert a few rows.

USE master;
GO

CREATE DATABASE TEST;
GO

USE TEST;
GO

CREATE TABLE dbo.Table1
(
   col1 INT
  ,col2 VARCHAR(20)
);
GO

INSERT INTO dbo.Table1 (col1, col2) VALUES (1, 'Test Value 1');
INSERT INTO dbo.Table1 (col1, col2) VALUES (2, 'Test Value 2');
INSERT INTO dbo.Table1 (col1, col2) VALUES (3, 'Test Value 3');
INSERT INTO dbo.Table1 (col1, col2) VALUES (4, 'Test Value 4');
INSERT INTO dbo.Table1 (col1, col2) VALUES (5, 'Test Value 5');
INSERT INTO dbo.Table1 (col1, col2) VALUES (6, 'Test Value 6');
INSERT INTO dbo.Table1 (col1, col2) VALUES (7, 'Test Value 7');
INSERT INTO dbo.Table1 (col1, col2) VALUES (8, 'Test Value 8');
INSERT INTO dbo.Table1 (col1, col2) VALUES (9, 'Test Value 9');
GO

Next, we'll add a non-clustered Columnstore index to the table.

CREATE NONCLUSTERED COLUMNSTORE INDEX nci_Table1 
  ON dbo.Table1(col1,col2);
GO

At this point, we have effectively made this table read-only. We can read from it all day long, but if we attempt to update a value, we will get an error.

SELECT * FROM dbo.Table1;
GO


UPDATE dbo.Table1 SET col2 = 'changed a value' WHERE col1 = 1;
GO

Msg 35330, Level 15, State 1, Line 41
UPDATE statement failed because data cannot be updated in a table that has a nonclustered columnstore index. Consider disabling the columnstore index before issuing the UPDATE statement, and then rebuilding the columnstore index after UPDATE has completed.

Once again, I love Microsoft's error messages. This one gives you very good information on one method you can use for changing data in a non-clustered Columnstore index.

Disabling the non-clustered Columnstore index gives you the ability to make changes to the data; however, the index can't be used for any queries while it's disabled. I like to wrap all the commands in a single transaction with the XACT_ABORT option enabled. This guarantees that any error will roll back the entire set of statements, not just a single one.

SET XACT_ABORT ON;
GO

BEGIN TRANSACTION;
GO

ALTER INDEX nci_Table1 ON dbo.Table1 DISABLE;
GO

UPDATE dbo.Table1 SET col2 = 'changed a value' WHERE col1 = 1;
GO

ALTER INDEX nci_Table1 ON dbo.Table1 REBUILD;
GO

COMMIT TRANSACTION;
GO


Looking at the table again, we can see the first row was definitely changed.

SELECT * FROM dbo.Table1;
GO


This is probably the easiest method for changing your data; however, it's also the most resource-intensive. SQL Server will need to rebuild the index for the entire table, not just for the one row we changed. So if your table has millions or even billions of rows, the rebuild could take a long time and consume a lot of resources. This is probably something you don't want to do in the middle of your business day.

The second method we'll cover involves partition switching. First, we'll create the same table but partition it into three parts.

USE TEST;
GO

CREATE PARTITION FUNCTION myRangePF1 (INT) 
  AS RANGE LEFT FOR VALUES (3,6);
GO 

CREATE PARTITION SCHEME myRangePS1 
  AS PARTITION myRangePF1 
  ALL TO ([PRIMARY]);
GO 

CREATE TABLE dbo.PartitionedTable (col1 INT, col2 VARCHAR(20)) 
  ON myRangePS1 (col1);
GO

INSERT INTO dbo.PartitionedTable (col1, col2) VALUES (1, 'Test Value 1');
INSERT INTO dbo.PartitionedTable (col1, col2) VALUES (2, 'Test Value 2');
INSERT INTO dbo.PartitionedTable (col1, col2) VALUES (3, 'Test Value 3');
INSERT INTO dbo.PartitionedTable (col1, col2) VALUES (4, 'Test Value 4');
INSERT INTO dbo.PartitionedTable (col1, col2) VALUES (5, 'Test Value 5');
INSERT INTO dbo.PartitionedTable (col1, col2) VALUES (6, 'Test Value 6');
INSERT INTO dbo.PartitionedTable (col1, col2) VALUES (7, 'Test Value 7');
INSERT INTO dbo.PartitionedTable (col1, col2) VALUES (8, 'Test Value 8');
INSERT INTO dbo.PartitionedTable (col1, col2) VALUES (9, 'Test Value 9');
GO

CREATE NONCLUSTERED COLUMNSTORE INDEX nci_PartitionedTable 
  ON dbo.PartitionedTable(col1,col2);
GO

SELECT * FROM dbo.PartitionedTable;
GO


This table has the same data as before, but internally it's partitioned. Querying sys.partitions for the details, you can see there are a total of 3 partitions, each with 3 rows.

SELECT
     SCHEMA_NAME(t.schema_id) AS SchemaName
    ,OBJECT_NAME(i.object_id) AS ObjectName
    ,p.partition_number AS PartitionNumber
    ,fg.name AS FilegroupName
    ,rows AS 'Rows'
    ,au.total_pages AS 'TotalDataPages'
    ,CASE boundary_value_on_right
        WHEN 1 THEN 'less than'
        ELSE 'less than or equal to'
     END AS 'Comparison'
    ,value AS 'ComparisonValue'
    ,p.data_compression_desc AS 'DataCompression'
    ,p.partition_id
FROM sys.partitions p
    JOIN sys.indexes i ON p.object_id = i.object_id AND p.index_id = i.index_id
    JOIN sys.partition_schemes ps ON ps.data_space_id = i.data_space_id
    JOIN sys.partition_functions f ON f.function_id = ps.function_id
    LEFT JOIN sys.partition_range_values rv ON f.function_id = rv.function_id AND p.partition_number = rv.boundary_id
    JOIN sys.destination_data_spaces dds ON dds.partition_scheme_id = ps.data_space_id AND dds.destination_id = p.partition_number
    JOIN sys.filegroups fg ON dds.data_space_id = fg.data_space_id
    JOIN (SELECT container_id, sum(total_pages) as total_pages
            FROM sys.allocation_units
            GROUP BY container_id) AS au ON au.container_id = p.partition_id 
    JOIN sys.tables t ON p.object_id = t.object_id
WHERE i.index_id < 2
ORDER BY ObjectName,p.partition_number;
GO


To use partition switching, you have to create a non-partitioned table that matches your partitioned table in every way, including all the indexes and constraints. For this example, we need to edit a row that resides in the first partition, so the non-partitioned table needs a check constraint that mimics the first partition: col1 <= 3.

CREATE TABLE dbo.NonPartitionedTable
(
   col1 INT CHECK (col1 <= 3)
  ,col2 VARCHAR(20)
);
GO

CREATE NONCLUSTERED COLUMNSTORE INDEX nci_NonPartitionedTable
  ON dbo.NonPartitionedTable(col1,col2);
GO

Once we have the table created, we can perform the switch.

SET XACT_ABORT ON;
GO

BEGIN TRANSACTION;
GO

ALTER TABLE dbo.PartitionedTable SWITCH PARTITION 1 TO dbo.NonPartitionedTable; 
GO

ALTER INDEX nci_NonPartitionedTable ON dbo.NonPartitionedTable DISABLE;
GO

UPDATE dbo.NonPartitionedTable SET col2 = 'changed a value' WHERE col1 = 2;
GO

ALTER INDEX nci_NonPartitionedTable ON dbo.NonPartitionedTable REBUILD;
GO

ALTER TABLE dbo.NonPartitionedTable SWITCH TO dbo.PartitionedTable PARTITION 1; 
GO

COMMIT TRANSACTION;
GO

The final switch in the transaction moved the data back into the partitioned table. Looking at the table again, we can see the row was changed.

SELECT * FROM dbo.PartitionedTable;
GO
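
If you want to confirm each row ended up back in the partition you'd expect, the $PARTITION function can map each row to its partition number. Here's a quick sketch using the partition function we created above:

SELECT
     $PARTITION.myRangePF1(col1) AS PartitionNumber
    ,COUNT(*) AS RowsInPartition
FROM dbo.PartitionedTable
GROUP BY $PARTITION.myRangePF1(col1)
ORDER BY PartitionNumber;
GO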


I know this may look exactly like what we did in the first method, and it is. However, having our initial table partitioned gives us the ability to rebuild the non-clustered Columnstore index on a smaller subset of data instead of the entire table. If this partitioned table had millions of rows, the ALTER INDEX REBUILD command might only need to run against a fraction of those rows, and would therefore complete much quicker and use far fewer resources. Not only can both of these methods be used to edit existing data in a table that has a non-clustered Columnstore index, but they can also be used to insert or delete rows, as sketched below.
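
For example, here's a sketch of inserting a new row into the first partition using the same switch-out/switch-in pattern we used above:

SET XACT_ABORT ON;
GO

BEGIN TRANSACTION;
GO

ALTER TABLE dbo.PartitionedTable SWITCH PARTITION 1 TO dbo.NonPartitionedTable;
GO

ALTER INDEX nci_NonPartitionedTable ON dbo.NonPartitionedTable DISABLE;
GO

-- The new row must pass the col1 <= 3 check constraint
INSERT INTO dbo.NonPartitionedTable (col1, col2) VALUES (0, 'Test Value 0');
GO

ALTER INDEX nci_NonPartitionedTable ON dbo.NonPartitionedTable REBUILD;
GO

ALTER TABLE dbo.NonPartitionedTable SWITCH TO dbo.PartitionedTable PARTITION 1;
GO

COMMIT TRANSACTION;
GO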

On a final note, I recommend you always partition any table that has a Columnstore index. This feature is designed for very large tables, and partitioning gives you much more flexibility, not just for working with Columnstore indexes, but for other maintenance tasks as well.

You can read more about Columnstore indexes and partition switching from Books Online.

Tuesday, July 8, 2014

Columnstore Table Analyzer

As I've discussed in some of my previous posts, there are quite a few data types that cannot be part of a Columnstore index. While there are fewer restrictions in SQL Server 2014, they still exist. I find myself constantly looking back at Books Online to make sure my tables don't contain any of those restricted data types. It would be much easier to know from day one which tables I need to redesign, or at least which columns I need to exclude from a non-clustered Columnstore index. This is why I created the following script.

-- Find columns in user tables that cannot be included in a columnstore index.
-- These restrictions apply to both clustered and non-clustered columnstore indexes.
-- SQL Server 2014: http://msdn.microsoft.com/en-us/library/gg492153(v=sql.120).aspx
-- SQL Server 2012: http://msdn.microsoft.com/en-us/library/gg492153(v=sql.110).aspx


-- Get the version number of SQL Server
DECLARE @ServerVersion TINYINT = CONVERT(INT,SUBSTRING(CONVERT(VARCHAR,SERVERPROPERTY('ProductVersion')),1,(CHARINDEX('.',(CONVERT(VARCHAR,SERVERPROPERTY('ProductVersion'))))-1)))

IF @ServerVersion = 11
-- This section is only for SQL Server 2012
BEGIN
  SELECT 
     s.name AS 'SchemaName'
    ,o.name AS 'TableName'
    ,c.name AS 'ColumnName'
    ,'ColumnType' = CASE t.name
      WHEN 'decimal' THEN t.name + '(' + CONVERT(VARCHAR,c.precision) + ',' + CONVERT(VARCHAR,c.scale) + ')'
      WHEN 'numeric' THEN t.name + '(' + CONVERT(VARCHAR,c.precision) + ',' + CONVERT(VARCHAR,c.scale) + ')'
      WHEN 'varchar' THEN
        CASE c.max_length
          WHEN -1 THEN 'varchar(max)'
          ELSE 'varchar(' + CONVERT(VARCHAR,c.max_length) + ')'
        END
      WHEN 'nvarchar' THEN 
        CASE c.max_length
          WHEN -1 THEN 'nvarchar(max)'
          -- max_length is stored in bytes; divide by 2 for the character count
          ELSE 'nvarchar(' + CONVERT(VARCHAR,c.max_length/2) + ')'
        END
      WHEN 'datetimeoffset' THEN t.name + '(' + CONVERT(VARCHAR,c.scale) + ')'
      ELSE t.name
     END
    ,'ColumnAttribute' = CASE 
      WHEN (c.is_filestream = 1) THEN 'Filestream'
      WHEN (c.is_sparse = 1) THEN 'Sparse'
      ELSE ''
    END
  FROM sys.columns c 
  JOIN sys.objects o ON c.object_id = o.object_id
  JOIN sys.types t ON c.user_type_id = t.user_type_id
  JOIN sys.schemas s ON o.schema_id = s.schema_id
  WHERE o.is_ms_shipped <> 1
  -- These types cannot be part of a SQL Server 2012 columnstore index
  AND (
    t.name IN 
         ('binary'
      ,'varbinary'
      ,'ntext'
      ,'text'
      ,'image'
      ,'uniqueidentifier'
      ,'rowversion'
      ,'timestamp'
      ,'sql_variant'
      ,'hierarchyid'
      ,'geography'
      ,'geometry'
      ,'xml')
  OR (
      -- Decimal & numeric cannot have a precision over 18
      t.name IN ('decimal','numeric') 
      AND c.precision > 18)
  OR (
      -- Datetimeoffset cannot have a scale over 2
      t.name = 'datetimeoffset'
      AND c.scale > 2)
  OR (
      -- Varchar(max) and nvarchar(max)
      t.name IN ('varchar','nvarchar') 
      AND c.max_length = -1)
  OR (
      -- Filestream
      c.is_filestream = 1)
  OR (
      -- Sparse
      c.is_sparse = 1)
  )
  ORDER BY s.name,o.name,c.column_id
END
ELSE IF @ServerVersion = 12
-- This section is only for SQL Server 2014
BEGIN
  SELECT 
     s.name AS 'SchemaName'
    ,o.name AS 'TableName'
    ,c.name AS 'ColumnName'
    ,'ColumnType' = CASE t.name
      WHEN 'varchar' THEN
        CASE c.max_length
          WHEN -1 THEN 'varchar(max)'
          ELSE 'varchar(' + CONVERT(VARCHAR,c.max_length) + ')'
        END
      WHEN 'nvarchar' THEN 
        CASE c.max_length
          WHEN -1 THEN 'nvarchar(max)'
          -- max_length is stored in bytes; divide by 2 for the character count
          ELSE 'nvarchar(' + CONVERT(VARCHAR,c.max_length/2) + ')'
        END
      ELSE t.name
     END
    ,'ColumnAttribute' = CASE 
      WHEN (c.is_filestream = 1) THEN 'Filestream'
      WHEN (c.is_sparse = 1) THEN 'Sparse'
      ELSE ''
    END
  FROM sys.columns c 
  JOIN sys.objects o ON c.object_id = o.object_id
  JOIN sys.types t ON c.user_type_id = t.user_type_id
  JOIN sys.schemas s ON o.schema_id = s.schema_id
  WHERE o.is_ms_shipped <> 1
  -- These types cannot be part of a SQL Server 2014 columnstore index
  AND (
    t.name IN 
      ('ntext'
      ,'text'
      ,'image'
      ,'rowversion'
      ,'timestamp'
      ,'sql_variant'
      ,'hierarchyid'
      ,'geography'
      ,'geometry'
      ,'xml')
  OR (
      -- Varchar(max) and nvarchar(max)
      t.name IN ('varchar','nvarchar') 
      AND c.max_length = -1)
  OR (
      -- Filestream
      c.is_filestream = 1)
  OR (
      -- Sparse
      c.is_sparse = 1)
  )
  ORDER BY s.name,o.name,c.column_id
END
ELSE
BEGIN
  RAISERROR ('This script only works on SQL Server 2012 and SQL Server 2014.',16,1);
END
GO

Running this against your database will output an organized list of tables along with the column name and data type that cannot be used within a Columnstore index.


The script can be used whether you plan to create a clustered or a non-clustered columnstore index, since the data type restrictions apply to both. It can also be used to analyze databases on either SQL Server 2012 or SQL Server 2014.

You can read more about the limitations and restrictions of Columnstore indexes in Books Online.

Tuesday, June 3, 2014

Columnstore Memory Grant Issue

In a previous post about non-clustered columnstore indexes, I mentioned that creating the index is a very memory-intensive operation. Sometimes the memory grant needed exceeds what is currently available on your server. So what do you do about it?

SQL Server requires a minimal amount of memory in order to create a columnstore index. This can be calculated as Memory Grant Request in MB = ((4.2 * number of columns in the columnstore index) + 68) * Degree of Parallelism + (number of string columns * 34). If there is not enough physical memory available to create the columnstore index, SQL Server will throw an error.

The test server I’m using for the examples below has 2 CPUs and 4GB of memory.


The Max Degree of Parallelism is set to 0, and Max Server Memory is set to 4095MB.

EXEC sp_configure 'max degree of parallelism';
GO
EXEC sp_configure 'max server memory (MB)';
GO


I have enlarged a table, FactInternetSales, in the AdventureWorksDW2012 database using Kalen Delaney's script. The resulting table, FactInternetSalesBig, has 247 million rows and contains 26 columns, three of which are string columns. If we want to create a non-clustered columnstore index on the entire table, then using the formula above we can estimate that a memory grant of about 456MB would be needed to build the index.
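
Plugging the numbers into the formula, and assuming a degree of parallelism of 2 since the server has 2 CPUs and MAXDOP is set to 0, the math works out like this (you can even let T-SQL do the arithmetic):

DECLARE @Columns INT = 26, @StringColumns INT = 3, @DOP INT = 2;
SELECT ((4.2 * @Columns) + 68) * @DOP + (@StringColumns * 34) AS EstimatedGrantMB;
-- Returns 456.4; with @DOP = 1 the same formula returns 279.2
GO
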
To create the index we’ll use this query.

USE AdventureWorksDW2012;
GO
CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactInternetSalesBig ON dbo.FactInternetSalesBig
(
    ProductKey,
    OrderDateKey,
    DueDateKey,
    ShipDateKey,
    CustomerKey,
    PromotionKey,
    CurrencyKey,
    SalesTerritoryKey,
    SalesOrderNumber,
    SalesOrderLineNumber,
    RevisionNumber,
    OrderQuantity,
    UnitPrice,
    ExtendedAmount,
    UnitPriceDiscountPct,
    DiscountAmount,
    ProductStandardCost,
    TotalProductCost,
    SalesAmount,
    TaxAmt,
    Freight,
    CarrierTrackingNumber,
    CustomerPONumber,
    OrderDate,
    DueDate,
    ShipDate
);
GO

While the create index query is running, we can view its memory grant from another session using this query.

SELECT
     dop
    ,requested_memory_kb
    ,required_memory_kb
    ,ideal_memory_kb
    ,granted_memory_kb
FROM sys.dm_exec_query_memory_grants
WHERE session_id = 55;  -- the session_id running the CREATE INDEX
GO


The requested memory grant was actually 525MB, which isn't too far from our 456MB estimate. What would happen to the create index query if we changed max server memory to 1024MB? Let's find out.

EXEC sp_configure 'max server memory (MB)',1024;
GO
RECONFIGURE;
GO

Oops, we got an error.

The statement has been terminated.
Msg 8658, Level 17, State 1, Line 1
Cannot start the columnstore index build because it requires at least 341424 KB, while the maximum memory grant is limited to 189696 KB per query in workload group 'default' (2) and resource pool 'default' (2). Retry after modifying columnstore index to contain fewer columns, or after increasing the maximum memory grant limit with Resource Governor.

This is telling us we need an absolute minimum of 341MB of memory to create the columnstore index. That seems odd, since we have 1024MB of memory configured for the server. Not really. The Resource Governor is always running in the background on every SQL Server, and every query executes inside the default workload group. That workload group limits any single query to a memory grant of no more than 25% of available memory. This can be verified by running the following query.

SELECT
     name
    ,request_max_memory_grant_percent
FROM sys.dm_resource_governor_workload_groups
WHERE name = 'default';
GO


There are a couple of workarounds to this problem.

First, we can try reducing the max degree of parallelism when creating the index. Using the formula above with a DOP of 1, the estimated memory grant drops to 279MB. That's still more than the maximum grant the default workload group will allow, but let's try it anyway. All we have to do is add the WITH (MAXDOP=1) hint to the end of the create index statement.

USE AdventureWorksDW2012;
GO
CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactInternetSalesBig ON dbo.FactInternetSalesBig
(
    ProductKey,
    OrderDateKey,
    DueDateKey,
    ShipDateKey,
    CustomerKey,
    PromotionKey,
    CurrencyKey,
    SalesTerritoryKey,
    SalesOrderNumber,
    SalesOrderLineNumber,
    RevisionNumber,
    OrderQuantity,
    UnitPrice,
    ExtendedAmount,
    UnitPriceDiscountPct,
    DiscountAmount,
    ProductStandardCost,
    TotalProductCost,
    SalesAmount,
    TaxAmt,
    Freight,
    CarrierTrackingNumber,
    CustomerPONumber,
    OrderDate,
    DueDate,
    ShipDate
) WITH (MAXDOP=1);
GO

Oops, we got the same error again.

The statement has been terminated.
Msg 8658, Level 17, State 1, Line 1
Cannot start the columnstore index build because it requires at least 341424 KB, while the maximum memory grant is limited to 189696 KB per query in workload group 'default' (2) and resource pool 'default' (2). Retry after modifying columnstore index to contain fewer columns, or after increasing the maximum memory grant limit with Resource Governor.

Now it's on to our next workaround: changing the Resource Governor settings. As I mentioned above, the Resource Governor limits each query to a memory grant of 25% of the total available. This can easily be changed using the following query.

ALTER WORKLOAD GROUP [default] WITH (request_max_memory_grant_percent = 50);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

The setting change is dynamic, but let’s double check it.

SELECT
     name
    ,request_max_memory_grant_percent
FROM sys.dm_resource_governor_workload_groups
WHERE name = 'default';
GO


Now let’s try to create the columnstore index again.

Ahh success! We can check the memory grant while it’s running.

SELECT
     dop
    ,requested_memory_kb
    ,required_memory_kb
    ,ideal_memory_kb
    ,granted_memory_kb
FROM sys.dm_exec_query_memory_grants
WHERE session_id = 55;  -- the session_id running the CREATE INDEX
GO


Once the columnstore index has been created, you should change the default workload group back to its default of 25 percent so it doesn't adversely affect other queries running on your server.

ALTER WORKLOAD GROUP [default] WITH (request_max_memory_grant_percent = 25);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO

Another workaround is to exclude some of the 26 columns from our create index statement. Using the formula above and keeping MAXDOP at 1, if we removed the last six columns of the table, we'd only need a 186MB memory grant. However, that may not be an option, because our user queries may need those columns to be part of the columnstore index to get maximum performance.
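
Just as a sketch, here's what that narrower create index statement could look like; dropping the last six columns also removes two of the three string columns:

CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactInternetSalesBig ON dbo.FactInternetSalesBig
(
    ProductKey,
    OrderDateKey,
    DueDateKey,
    ShipDateKey,
    CustomerKey,
    PromotionKey,
    CurrencyKey,
    SalesTerritoryKey,
    SalesOrderNumber,
    SalesOrderLineNumber,
    RevisionNumber,
    OrderQuantity,
    UnitPrice,
    ExtendedAmount,
    UnitPriceDiscountPct,
    DiscountAmount,
    ProductStandardCost,
    TotalProductCost,
    SalesAmount,
    TaxAmt
) WITH (MAXDOP=1);
GO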

Finally, as a last workaround, you could add more memory to the server, but unless your server is a VM, that might be impractical in most situations.

For more info on columnstore indexes, check out the Columnstore Index FAQ on TechNet, Books Online, and my other blog posts.

Tuesday, May 20, 2014

Comparison of Columnstore Compression

SQL Server 2012 introduced non-clustered columnstore indexes, and SQL Server 2014 gave us clustered columnstore indexes. Both share the same underlying technology for performance boosts and the same algorithms for compression. However, how well the data compresses depends on the data you are storing.

SQL Server uses a mechanism of row groups, segments, encoding, and compression to store the data for columnstore indexes. First the data is horizontally divided into row groups, where each group contains approximately 1 million rows. Next, the row groups are vertically partitioned into segments, where each column of the table gets its own segment. Those segments are then encoded and compressed before being stored on disk. So if you create a columnstore index on a table that has five columns and 100,000 rows, it would result in one row group with five segments.
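
If you want to see the segments for yourself, the sys.column_store_segments catalog view exposes them. A minimal sketch, assuming a table dbo.Table1 that already has a non-clustered columnstore index:

SELECT
     css.segment_id
    ,css.column_id
    ,css.row_count
    ,css.on_disk_size
FROM sys.column_store_segments css
    JOIN sys.partitions p ON css.hobt_id = p.hobt_id
WHERE p.object_id = OBJECT_ID('dbo.Table1')
ORDER BY css.column_id, css.segment_id;
GO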

Compression of any type will vary depending on the data that needs to be compressed. Columnstore indexes use RLE (run-length encoding) compression, which works best when you have a lot of repetitive values; for example, a column that stores the same value in 1 million consecutive rows can be represented as that single value plus a count, instead of 1 million copies. Just to see how this works, let's compare the compression for data that is random against the compression for data that is highly redundant.

First, we’ll create the following tables that contain only a single column using the CHAR data type. Each table will store 1 million rows. One table will store random values for each row, and the other will store identical values for each row.

CREATE TABLE dbo.IdenticalCharacter(Col1 CHAR(50) NULL);
GO
CREATE TABLE dbo.RandomCharacter(Col1 CHAR(50) NULL);
GO

Now we'll populate the tables with artificial data.

DECLARE @Counter int = 0;
WHILE @Counter < 1000000
BEGIN
    INSERT INTO IdenticalCharacter VALUES (REPLICATE('a',50));
    SET @Counter = @Counter+1;
END
CHECKPOINT;
GO
DECLARE @Counter int = 0;
WHILE @Counter < 1000000
BEGIN
    INSERT INTO RandomCharacter
    SELECT CAST(NEWID() AS CHAR(50));
    SET @Counter = @Counter+1;
END
CHECKPOINT;
GO

Next, we’ll create a non-clustered columnstore index on each table.

CREATE NONCLUSTERED COLUMNSTORE INDEX csi1 ON dbo.IdenticalCharacter(Col1);
GO
CREATE NONCLUSTERED COLUMNSTORE INDEX csi1 ON dbo.RandomCharacter(Col1);
GO

Now we can calculate the size of each columnstore index using the script below. This is a modified version of the script from Books Online.

SELECT
     SchemaName
    ,TableName
    ,IndexName
    ,SUM(on_disk_size_MB) AS TotalSizeInMB
FROM
(
    SELECT
        OBJECT_SCHEMA_NAME(i.OBJECT_ID) AS SchemaName,
        OBJECT_NAME(i.OBJECT_ID ) AS TableName,
        i.name AS IndexName
        ,SUM(css.on_disk_size)/(1024.0*1024.0) AS on_disk_size_MB
    FROM sys.indexes i
    INNER JOIN sys.partitions p
        ON i.object_id = p.object_id
    INNER JOIN sys.column_store_segments css
        ON css.hobt_id = p.hobt_id
    WHERE i.type_desc = 'NONCLUSTERED COLUMNSTORE'
        AND i.index_id NOT IN (0,1)
        AND p.index_id NOT IN (0,1)
    GROUP BY OBJECT_SCHEMA_NAME(i.OBJECT_ID) ,OBJECT_NAME(i.OBJECT_ID ),i.name

    UNION ALL

    SELECT
         OBJECT_SCHEMA_NAME(i.OBJECT_ID) AS SchemaName
        ,OBJECT_NAME(i.OBJECT_ID ) AS TableName
        ,i.name AS IndexName
        ,SUM(csd.on_disk_size)/(1024.0*1024.0) AS on_disk_size_MB
    FROM sys.indexes i
    INNER JOIN sys.partitions p
        ON i.object_id = p.object_id
    INNER JOIN sys.column_store_dictionaries csd
        ON csd.hobt_id = p.hobt_id
    WHERE i.type_desc = 'NONCLUSTERED COLUMNSTORE'
        AND i.index_id NOT IN (0,1)
        AND p.index_id NOT IN (0,1)
    GROUP BY OBJECT_SCHEMA_NAME(i.OBJECT_ID) ,OBJECT_NAME(i.OBJECT_ID ),i.name
) AS SegmentsPlusDictionary
GROUP BY SchemaName,TableName,IndexName
ORDER BY SchemaName,TableName,IndexName;
GO


As you can see, the random data values did not compress nearly as much. 30MB of the total 41MB was just for the dictionaries, while only 11MB was needed for the segments. Graphing the data in Excel makes it a little easier to see the difference in size.
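
If you want to see the dictionary versus segment split directly instead of just the total, you can sum each component separately. A sketch based on the same catalog views used in the script above:

SELECT
     OBJECT_NAME(p.object_id) AS TableName
    ,'Segments' AS Component
    ,SUM(css.on_disk_size)/(1024.0*1024.0) AS SizeMB
FROM sys.partitions p
    JOIN sys.column_store_segments css ON css.hobt_id = p.hobt_id
GROUP BY OBJECT_NAME(p.object_id)

UNION ALL

SELECT
     OBJECT_NAME(p.object_id)
    ,'Dictionaries'
    ,SUM(csd.on_disk_size)/(1024.0*1024.0)
FROM sys.partitions p
    JOIN sys.column_store_dictionaries csd ON csd.hobt_id = p.hobt_id
GROUP BY OBJECT_NAME(p.object_id)
ORDER BY TableName, Component;
GO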


We can also compare a few other data types (integer and decimal), and we’ll see similar differences, but just not as drastic.

CREATE TABLE dbo.IdenticalInteger(Col1 INT NULL);
GO
CREATE TABLE dbo.RandomInteger(Col1 INT NULL);
GO
CREATE TABLE dbo.IdenticalDecimal(Col1 DECIMAL(18,8) NULL);
GO
CREATE TABLE dbo.RandomDecimal(Col1 DECIMAL(18,8) NULL);
GO
DECLARE @Counter int = 0;
WHILE @Counter < 1000000
BEGIN
    INSERT INTO IdenticalInteger VALUES (1);
    SET @Counter = @Counter+1;
END
CHECKPOINT;
GO
DECLARE @Counter int = 0;
WHILE @Counter < 1000000
BEGIN
    INSERT INTO RandomInteger VALUES (@Counter);
    SET @Counter = @Counter+1;
END
CHECKPOINT;
GO
DECLARE @Counter int = 0;
WHILE @Counter < 1000000
BEGIN
    INSERT INTO IdenticalDecimal VALUES (1.0);
    SET @Counter = @Counter+1;
END
CHECKPOINT;
GO
DECLARE @Counter int = 0;
WHILE @Counter < 1000000
BEGIN
    INSERT INTO RandomDecimal VALUES (CAST((RAND() * @Counter) AS DECIMAL(18,8)));
    SET @Counter = @Counter+1;
END
CHECKPOINT;
GO
CREATE NONCLUSTERED COLUMNSTORE INDEX csi1 ON dbo.IdenticalInteger(Col1);
GO
CREATE NONCLUSTERED COLUMNSTORE INDEX csi1 ON dbo.RandomInteger(Col1);
GO
CREATE NONCLUSTERED COLUMNSTORE INDEX csi1 ON dbo.IdenticalDecimal(Col1);
GO
CREATE NONCLUSTERED COLUMNSTORE INDEX csi1 ON dbo.RandomDecimal(Col1);
GO

Now run the same query from above to see the size difference of the indexes.


Here we see similar results as before; the highly redundant data gets compressed far more than the random data. Again, graphing the data makes it a little easier to visualize the difference.


The compression algorithm used for columnstore indexes is different from the row and page data compression originally released in SQL Server 2008; however, it’s interesting to compare the differences.

For this example, we'll use actual data from a trading application. The fact table contains over 106 million rows and 26 columns, mostly float data types, and is just over 21GB in size. Let's compare the various types of compression: row, page, a non-clustered columnstore index in SQL Server 2012, a non-clustered columnstore index in SQL Server 2014, a clustered columnstore index in SQL Server 2014, and lastly a clustered columnstore index with archival compression in SQL Server 2014.


The graph above shows that row and page compression already shrink the data significantly, but the various columnstore indexes compress it even further. There are a couple of things to keep in mind. First, a non-clustered columnstore index is a separate copy of the base table, so the 7199MB 2012 columnstore index would be in addition to the 21GB of raw data, for a total of nearly 29GB. The two clustered columnstore indexes only need 3.4GB and 2.6GB to store the entire dataset. That's a 6x and 8x compression difference.

You may wonder why the 2012 non-clustered columnstore index is larger (7199MB) than just using ordinary row or page compression, but there really is nothing wrong. It's simply a good example of the differences between the compression algorithms. Although this example doesn't show it, there are even cases where the columnstore compressed data can end up larger than the uncompressed data.

As you can see, columnstore indexes have the potential to highly compress your data, but it really depends on the type of data you're storing and how redundant the data values are. The only way to know for sure is to test it with your own data.
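
If you're curious how your own tables would fare with plain row or page compression before committing to a rebuild, the sp_estimate_data_compression_savings procedure will estimate it for you. A quick sketch (the table name here is just a placeholder):

-- Estimate PAGE compression savings for a hypothetical fact table
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo'
    ,@object_name = 'FactTrades'
    ,@index_id = NULL
    ,@partition_number = NULL
    ,@data_compression = 'PAGE';
GO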

For more info on columnstore indexes, check out the Columnstore Index FAQ on TechNet, Books Online, and my other blog posts.

Tuesday, April 29, 2014

What is a Non-Clustered Columnstore Index?

First introduced in SQL Server 2012, the Columnstore index is a new type of index that stores data in a column-wise fashion. It is targeted at data warehouses and can improve query performance by 10 to 100x. A columnstore index stores data in a different type of index page and heavily compresses the data. It also introduces a new batch execution mode that greatly increases processing speed and reduces CPU utilization. The combination of the new storage type, the compressed data, and batch mode processing allows SQL Server to read less data and greatly improve query performance.

So how does this new index differ from traditional indexes? In all versions prior to 2012, SQL Server uses a B-tree structure (see below) for all indexes, both clustered and non-clustered.


At the leaf level of a B-tree index, SQL Server stores the indexed data in a row-wise fashion, so a single index page stores the data from all columns for one or more rows. For example, if we create an index with three columns (C1, C2, and C3) on dbo.Table1, the internal structure would look something like this.


Index page 1 would store columns C1, C2, and C3 for rows R1, R2, and R3. Index page 2 would store columns C1, C2, and C3 for rows R4, R5, and R6, and so on. This is referred to as a rowstore, because it stores the data in a row-wise fashion.

If we were to issue a simple query to select column C1 from Table1, then SQL Server would need to read all three index pages to get the values from all 9 rows. Each read is a logical I/O, so a total of 3 logical I/Os would be needed.

SELECT C1 FROM dbo.Table1;
GO
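
If you want to verify the page reads yourself, SET STATISTICS IO will report the logical reads in the Messages tab. A quick sketch (the exact counts will depend on your own table):

SET STATISTICS IO ON;
GO
SELECT C1 FROM dbo.Table1;
GO
-- The Messages tab reports something like:
-- Table 'Table1'. Scan count 1, logical reads 3, ...
SET STATISTICS IO OFF;
GO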


Now if we create a columnstore index on the same data, the logical storage of that data would look something like this.


Index page 1 would store column C1 for rows R1 through R9. Index page 2 would store column C2 for rows R1 through R9, and so on. This is referred to as a columnstore, because it stores the data in a column-wise fashion.

If we issued the same query, SQL Server could use the new columnstore index and would only need to read index page 1 to get all the values of column C1 for rows R1 through R9. This results in 1 logical I/O since only a single index page needs to be read.


For this simple query, we have a 3x performance improvement. Now 1 I/O vs 3 I/Os doesn't really matter, but remember this feature is targeted at data warehouses that could have tables containing millions or even billions of rows. Once you use columnstore indexes on tables that large, the performance gains in I/O are much more noticeable.

The second advantage of columnstore indexes is that the data stored within the index is heavily compressed. This compression algorithm is different from row or page compression, or even backup compression, and it usually works best for character or numeric data with a lot of repeating values. Having the data compressed means SQL Server needs to read fewer pages to return all of the data. Going back to our Table1 example from above, if the indexes were storing a lot more data, the layout would really look more like this.


Because of that highly compressed data, the columnstore index might only need a few dozen index pages to store the same amount of data that would normally take a rowstore index several thousand.

The third way columnstore indexes help improve performance is through the use of batch mode processing. This new execution mode allows the CPU to process approximately 1000 rows at a time, which is more efficient than processing one row at a time. The result is faster query processing and lower CPU utilization. In the example query plan below, you can see SQL Server issued 581725 batches to process the ~247 million rows. That's an average of 425 rows processed per batch.
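
You can spot batch mode in the properties of each operator in the actual execution plan. As a rough sketch, you can also search the plan cache for plans containing the execution mode attribute (this relies on how the showplan XML is formatted, so treat it as an assumption):

SELECT qp.query_plan
FROM sys.dm_exec_cached_plans cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE '%ExecutionMode="Batch"%';
GO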


Creating a columnstore index is as easy as creating any other non-clustered index. Just specify the name of the index, the table, and which columns are to be included. The best practice from Microsoft suggests adding all columns of the base table into the columnstore index. See the example below.

CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactInternetSales
  ON dbo.FactInternetSales
(
  ProductKey,
  OrderDateKey,
  DueDateKey,
  ShipDateKey,
  CustomerKey,
  PromotionKey,
  CurrencyKey,
  SalesTerritoryKey,
  SalesOrderNumber,
  SalesOrderLineNumber,
  RevisionNumber,
  OrderQuantity,
  UnitPrice,
  ExtendedAmount,
  UnitPriceDiscountPct,
  DiscountAmount,
  ProductStandardCost,
  TotalProductCost,
  SalesAmount,
  TaxAmt,
  Freight,
  CarrierTrackingNumber,
  CustomerPONumber,
  OrderDate,
  DueDate,
  ShipDate
);
GO

All of this is really great news for SQL Server lovers; however, as with all new features, there are limitations, and columnstore indexes are no exception. Below is a list of some of those restrictions.
  1. The base table is READ-ONLY.
  2. The columnstore index cannot be altered; only dropped and recreated.
  3. The columnstore index cannot be created as clustered index.
  4. Certain data types cannot be used.
  5. Cannot include a sparse column.
  6. Cannot include a column with Filestream data.
  7. The index cannot act as a primary or foreign key.
  8. Cannot be combined with replication, change tracking, or change data capture.
  9. Uses the buffer pool, so columnstore index pages can be flushed from memory.
  10. Enterprise Edition only.
I'm sure the first item on the list got your attention: after creating a columnstore index, the base table and all of its data are in a read-only state. Any attempt to change data within the base table (dbo.Table1) will result in an error.

INSERT dbo.Table1 VALUES ('Oscar', 'The new DBA', 'Charlotte, NC');
GO

Msg 35330, Level 15, State 1, Line 1
INSERT statement failed because data cannot be updated in a table with a columnstore index. Consider disabling the columnstore index before issuing the INSERT statement, then rebuilding the columnstore index after INSERT is complete.

This may seem like a huge issue, but in reality it's not that much of a problem. Keep in mind this feature is targeted at data warehouses that modify data infrequently. Additionally, there are several workarounds for updating or changing data; for example, dropping the columnstore index, making the change, and then recreating the columnstore index. I'll cover these workarounds in more detail in a future blog post.

In my opinion, the limitation on data types is a bigger issue than the data being in a read-only state, because there is no workaround; in essence, it will affect the design of your database. None of the following data types are allowed within a columnstore index.
  1. binary and varbinary
  2. ntext , text, and image
  3. varchar(max) and nvarchar(max)
  4. uniqueidentifier
  5. rowversion (and timestamp)
  6. sql_variant
  7. decimal (and numeric) with precision greater than 18 digits
  8. datetimeoffset with scale greater than 2
  9. CLR types (hierarchyid and spatial types)
  10. xml
If you have an existing data warehouse where you want to use this feature and the tables are littered with uniqueidentifier or xml data types, then your only choice is to redesign the database. That's usually the last thing a development team wants to hear just so they can implement a new database feature. If you try to add one of these columns to a columnstore index, SQL Server will throw an error about the restricted data type.
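
For example, using a hypothetical table with a uniqueidentifier column named IDNumber:

CREATE TABLE dbo.Employees
(
   Name VARCHAR(50)
  ,IDNumber UNIQUEIDENTIFIER
);
GO

-- Fails because uniqueidentifier cannot participate in a columnstore index
CREATE NONCLUSTERED COLUMNSTORE INDEX csi_Employees
  ON dbo.Employees(Name, IDNumber);
GO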

Msg 35343, Level 16, State 1, Line 1
CREATE INDEX statement failed. Column 'IDNumber' has a data type that cannot participate in a columnstore index. Omit column 'IDNumber'.

You can still create a columnstore index on the table by excluding the restricted column; however, if queries need data from that column, using the columnstore index could be much more expensive than a traditional rowstore index. The following example excludes the IDNumber column from the columnstore index, so the query processor needs to do a row lookup to get the missing data.

SELECT * FROM dbo.Table1;
SELECT * FROM dbo.Table1 WITH (INDEX(ColumnStoreIndex));
GO


As you can see, the forced columnstore plan did not fare well against the table scan. Its query cost is nearly twice as high, and the table scan needed only 1 logical read compared to 38 for the columnstore index.

Another issue you might run into is the amount of available physical memory on the server. SQL Server requires a minimal amount of memory in order to create a columnstore index. This can be calculated as Memory Grant Request in MB = ((4.2 * number of columns in the columnstore index) + 68) * Degree of Parallelism + (number of string columns * 34). If there is not enough physical memory available to create the columnstore index, SQL Server will throw an error.

The statement has been terminated.
Msg 8658, Level 17, State 1, Line 2
Cannot start the columnstore index build because it requires at least 345520 KB, while the maximum memory grant is limited to 84008 KB per query in workload group 'default' (2) and resource pool 'default' (2). Retry after modifying columnstore index to contain fewer columns, or after increasing the maximum memory grant limit with Resource Governor.

Microsoft did an excellent job of creating a very detailed error message that even includes a few suggestions for avoiding this error, one of which is to alter the Resource Governor to allow larger memory grants. If that is not an option, you can use the MAXDOP hint to reduce the degree of parallelism when creating the columnstore index; reducing or even removing parallelism lowers the memory grant requirement. I'll cover these memory workarounds in more detail in a future blog post.

UPDATE: Read my post on Columnstore Memory Grant Issue to learn how to avoid this issue.

Let’s look at some examples that involve a lot more data.

Using the FactInternetSales table from the AdventureWorksDW2012 database, I have expanded it to nearly 250 million rows. You can get the code to expand the table from Kalen Delaney's blog. We can run the following query to determine the order quantity and total sales grouped by education level.

SELECT
   dc.EnglishEducation AS EducationLevel
  ,COUNT(*) AS OrderQuantity
  ,CAST(SUM(fis.SalesAmount) AS DECIMAL(15,2)) AS SalesAmount
FROM dbo.FactInternetSales fis
  INNER JOIN dbo.DimCustomer dc ON fis.CustomerKey = dc.CustomerKey
GROUP BY dc.EnglishEducation
ORDER BY dc.EnglishEducation
OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX);
GO

This query uses the IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX hint, which tells the query optimizer to ignore any columnstore index on the table. In order to process this query, SQL Server needs the value of every row for the CustomerKey and SalesAmount columns from FactInternetSales. A traditional rowstore index requires SQL Server to read every page in the table to get those values. What we get is a query that requires 5934694 logical reads and over 28 minutes of runtime to complete. Run the same query again, but this time remove the hint and allow it to use the columnstore index.

SELECT
   dc.EnglishEducation AS EducationLevel
  ,COUNT(*) AS OrderQuantity
  ,CAST(SUM(fis.SalesAmount) AS DECIMAL(15,2)) AS SalesAmount
FROM dbo.FactInternetSales fis
  INNER JOIN dbo.DimCustomer dc ON fis.CustomerKey = dc.CustomerKey
GROUP BY dc.EnglishEducation
ORDER BY dc.EnglishEducation;
GO

What you'll notice is an almost instantaneous response from SQL Server. It needed only 6822 logical reads from FactInternetSales, and the runtime dropped to about 4 seconds. That's over 800x fewer I/Os and about 420x faster runtime.

Workloads that do a lot of table or index scans, such as the example above, will benefit most from using a columnstore index. However, singleton lookups will not perform as well. We can use the same query but this time we'll specify order numbers starting with a certain value. We'll also use an index hint WITH (INDEX(csi_FactInternetSales)) to force the query optimizer to use the columnstore index.

SELECT
   dc.EnglishEducation AS EducationLevel
  ,COUNT(*) AS OrderQuantity
  ,CAST(SUM(fis.SalesAmount) AS DECIMAL(15,2)) AS SalesAmount
FROM dbo.FactInternetSales fis WITH (INDEX(csi_FactInternetSales))
  INNER JOIN dbo.DimCustomer dc ON fis.CustomerKey = dc.CustomerKey
WHERE fis.SalesOrderNumber LIKE 'SO437%'
GROUP BY dc.EnglishEducation
ORDER BY dc.EnglishEducation;
GO

Forcing the query optimizer to use the columnstore index was a bad idea in this case. The columnstore index was used, but it had to scan the entire set of data to look for order numbers starting with 'SO437'. This resulted in over 1.7 million logical reads and about 68 seconds before returning the results. Now remove the index hint and run the query again.

SELECT
   dc.EnglishEducation AS EducationLevel
  ,COUNT(*) AS OrderQuantity
  ,CAST(SUM(fis.SalesAmount) AS DECIMAL(15,2)) AS SalesAmount
FROM dbo.FactInternetSales fis
  INNER JOIN dbo.DimCustomer dc ON fis.CustomerKey = dc.CustomerKey
WHERE fis.SalesOrderNumber LIKE 'SO437%'
GROUP BY dc.EnglishEducation
ORDER BY dc.EnglishEducation;
GO

We still had to do 11254 logical reads on the rowstore index for the table, but we were able to get the results in less than a second.

As you can see, if your workload meets the requirements, the benefits of using columnstore indexes far outweigh their disadvantages. I highly recommend lots of testing if you intend to implement columnstore indexes. It takes careful planning for loading or modifying data, designing your tables to account for the data type restrictions, and knowing how to write your queries to get the most performance from columnstore indexes.

For more info on columnstore indexes, check out the Columnstore Index FAQ on TechNet, Books Online, and my other blog posts.