Performance Optimization by using Included Columns and Field Selects

Since version 2012, Dynamics AX supports included columns in indexes, although SQL Server has supported them for quite a long time. Here are some examples of how and why it is good practice to use included columns in an index. I'm using Dynamics AX 2012 R3 CU12 on Windows Server 2016 and SQL Server 2016 with Contoso demo data for these examples.

Clustered Index

The clustered index can be defined using multiple fields and is used to define the order of the records stored in the table. Even more important is the fact that if a table has a clustered index, all the data is stored in the leaf level of that index, i.e. the clustered index IS the table!
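As shown further down, the clustered index of InventTrans is the TransOriginIdx. A minimal T-SQL sketch of such an index follows; the column list of the AX-generated index is an assumption here (AX 2012 adds Partition and DataAreaId to indexes automatically):

```sql
-- The leaf level of a clustered index holds the complete rows,
-- so the clustered index IS the table (column list assumed):
CREATE UNIQUE CLUSTERED INDEX I_177TRANSORIGINIDX
ON INVENTTRANS (PARTITION, DATAAREAID, INVENTTRANSORIGIN);
```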


Take a look at the space allocated by the indexes: about 219 MB are used to store actual data and 167 MB are used to store index information.


The following SQL statement reveals the size in detail:

SELECT ind.name,
       SUM(s.[used_page_count]) * 8 AS IndexSizeKB
FROM sys.indexes ind
JOIN sys.tables t
    ON ind.object_id = t.object_id
JOIN sys.dm_db_partition_stats AS s
    ON s.[object_id] = ind.[object_id]
   AND s.[index_id] = ind.[index_id]
WHERE t.name = 'INVENTTRANS'
GROUP BY ind.name
ORDER BY IndexSizeKB DESC

The table data is stored in the TransOriginIdx:

name                   IndexSizeKB
I_177TRANSORIGINIDX    226992 (~221 MB)
I_177ITEMIDX            24872
I_177RECID              23416
I_177DIMIDIDX           22192

Index Usage with Field Select

Here is an example of a select statement with a field select on the InventTrans table:

while select ItemId, DatePhysical
    from inventTrans
    where inventTrans.ItemId == '0001' &&
          inventTrans.DatePhysical >= str2Date('1.1.2011', 123)
{
    // ...
}

The Trace Parser reveals the actual SQL statement sent to the database:


What happens is what you would expect: SQL Server uses the ItemIdx for this query.


Only 5 logical reads were necessary.
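The logical-read counts quoted in this post can be reproduced in SQL Server Management Studio with STATISTICS IO. A sketch follows; the statement AX actually generates also filters on PARTITION and DATAAREAID, so the numbers in your environment may differ slightly:

```sql
SET STATISTICS IO ON;

-- The logical-read counts appear in the Messages tab
SELECT ITEMID, DATEPHYSICAL
FROM INVENTTRANS
WHERE ITEMID = '0001'
  AND DATEPHYSICAL >= '2011-01-01';

SET STATISTICS IO OFF;
```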



Selecting Non-Index Fields

When the query selects fields which are not part of the index, SQL Server has to perform a lookup in the clustered index for each record identified by the ItemIdx to get the remaining fields. For example, Voucher and Qty are not part of the ItemIdx.
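In X++ this simply means adding the non-index fields to the field list; a sketch based on the query above:

```
while select ItemId, DatePhysical, Voucher, Qty
    from inventTrans
    where inventTrans.ItemId == '0001' &&
          inventTrans.DatePhysical >= str2Date('1.1.2011', 123)
{
    // Voucher and Qty are not covered by the ItemIdx, so SQL Server
    // performs a key lookup in the clustered index for each record
}
```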


213 logical reads were necessary to fetch the data.


This can get even worse when performing the lookups becomes too expensive, which can happen when the query returns a larger number of records, for example when querying for another ItemId. In this example SQL Server does not use the ItemIdx anymore but performs a search in the clustered index instead. The ItemIdx became completely useless for this query.


SQL Server required 1345 logical reads to fetch the data!



Included Columns

Since version 2012, Dynamics AX supports the definition of included columns for indexes. These columns are not used to sort the index; they are simply stored within the index to avoid costly lookups in the clustered index. In Dynamics AX you just add the columns to the index and set their property IncludedColumn to Yes.
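On the SQL Server side this corresponds to the INCLUDE clause of CREATE INDEX. A simplified sketch of what the synchronized index could look like; the actual column list AX generates for I_177ITEMIDX may differ:

```sql
-- Included columns live only in the leaf level of the index,
-- so queries reading them need no lookup in the clustered index
CREATE INDEX I_177ITEMIDX
ON INVENTTRANS (ITEMID, DATEPHYSICAL)
INCLUDE (VOUCHER, QTY);
```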


You can find the included columns in SQL Server when viewing the properties of the index.


When the statement from above is executed again, SQL Server can use the included columns from the index and does not have to perform costly lookups in the clustered index.


Only 6 logical reads are required to fetch the data. This is a huge improvement compared to the 1345 reads without included columns.


SQL Backup Restore fails due to insufficient free space

A customer recently tried to restore a Dynamics AX database backup from the live system to the testing environment. The SQL Server data disk had 17 GB of space left, and the size of the database backup (.bak) file was about 11 GB.


However, SQL Server refused to restore the database because of insufficient free space. 


The reason was the file layout of the original database. Typically, database and log files pre-allocate space to avoid costly file operations when the content of the database grows. In this case, the database file had 20 GB allocated and the log file another 1 GB, although the content of the database was only about 11 GB.
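The file layout stored inside a backup can be checked before attempting the restore; a sketch (the backup path is hypothetical):

```sql
-- Lists each file's logical name, physical path and the
-- size it will allocate when the backup is restored
RESTORE FILELISTONLY
FROM DISK = N'D:\Backup\AXLive.bak';
```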


When a backup is created, SQL Server only backs up the content of the files and adds information about the file layout. When the database is restored, the database files are allocated with the same sizes as in the source database. Therefore 21 GB were needed but not available on disk.

The solution was to increase the storage on the test system, restore the database, and afterwards shrink the database files.
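The shrink step can be sketched as follows; database name, logical file name and target size are assumptions here (query sys.database_files for the real logical names):

```sql
USE AXTest;
GO
-- Shrink the data file to a target of 12000 MB,
-- leaving some headroom above the ~11 GB of content
DBCC SHRINKFILE (N'AXLive_Data', 12000);
```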

Smoked! Consumer SSD in 24/7 Server environment

When a customer asks for storage systems, SSDs are a frequent option to boost performance. However, you will definitely face the question why enterprise SSDs are so expensive. Especially in an SMB, where you have to discuss every cent, the question arises whether it would be possible to use cheap consumer SSDs instead. Most storage systems use SAS drives, while consumer SSDs have SATA ports; depending on the controller you might not even be able to include a SATA SSD in your storage system.

However, I had such a situation where an SSD was used as direct attached storage within an IBM x3300 M4 server. SQL Server was installed natively on the machine and an OCZ Agility 3 SSD was used for tempdb only. After 22,167 hours of usage (923 days), the first signs of problems showed up: tempdb files were corrupt and the OS reported I/O errors when accessing the files.


Using SSDs in a storage system or as direct attached storage in a server will significantly improve system performance. However, don't use consumer SSDs in your system. They will probably fail within 2 to 3 years, and you don't get support for a 24/7 environment. Enterprise SSDs are more expensive but more durable, and depending on your support contract you get spare parts for up to 5 years.
