
SQL Server on VMware vSphere Best Practices: Host, Disk, CPU and Memory Considerations

Part 1: SQL Server on VMware vSphere Best Practices: Host, Disk, CPU and Memory Considerations
Part 2: SQL Server on VMware vSphere Best Practices: Availability Group, Licensing and Other Considerations

I realized recently that I never documented my process for designing high-performance Microsoft SQL environments on VMware vSphere. For the most part, what you’re about to read is a brain dump, and is intended to serve as an outline. Please make sure that you research all of the configurations for your specific deployment requirements, rather than take everything I say and dump it in to your shiny new Microsoft SQL environment. In Part 1, we’ll dive into some of the more common settings and configurations that are done in a virtualized SQL environment.

ESXi Host Considerations

Power Management

A good rule of thumb is to set Power Management to High Performance on the ESXi host. This needs to be done both in the BIOS and from within the vSphere Client to take effect.

Second Level Address Translation (SLAT)

The host must support Second Level Address Translation. Most modern 64-bit processors support this technology. On Intel processors, SLAT is known as Extended Page Tables (EPT); on AMD processors, it is known as Nested Page Tables (NPT) or Rapid Virtualization Indexing (RVI).

So what is SLAT? It enables the CPU to maintain the mapping between the virtual memory used by virtual machines and the physical memory on the host. If, for whatever reason, the CPU cannot maintain this mapping, the hypervisor has to do the work instead, which affects performance.

Microsoft has conducted studies showing that SLAT has a direct impact on performance in virtual environments by:

  • Considerably reducing the host processing overhead to around 2%.
  • Reducing the host memory requirements by around 1MB per running virtual machine.

Do Not Overcommit Host CPU

Arguably the most important lesson here is: don't overcommit. If the host is overcommitted and is running resource-heavy applications like SQL Server in its virtual machines, performance issues are inevitable. Keep the hypervisor as up to date as possible, since each new release raises configuration maximums and improves scalability.

A good rule of thumb for initial sizing of a SQL Server VM is to ensure the total number of vCPUs assigned to the virtual machine does not exceed the number of physical cores (as opposed to logical, hyper-threaded cores) available on the host.

VM Disk Considerations

SQL Server accesses data and log files with very different I/O patterns. Access to data files is mostly random, while transaction log access is sequential, and sequential access is more efficient than random access. Separating files that have different access patterns onto different virtual disks helps streamline I/O and optimize performance.

Virtual Hardware | Volume Letter | Volume Label | Purpose
Hard disk 1 | C:\ | OS | Operating System
Hard disk 2 | Z:\ | Paging | Paging
Hard disk 3 | D:\ | Data | Databases
Hard disk 4 | T:\ | TempDB | TempDB
Hard disk 5 | I:\ | Indexes | Indexes
Hard disk 6 | L:\ | Logs | Transaction Logs
Hard disk 7 | Q:\ | Quorum | Used only for Availability Group enabled SQL servers
Hard disk 8 | B:\ | Backup | Backups, batch processing, full text catalogs

Shared Instances vs. Dedicated Instances

If an application has a lot of schemas or stored procedures, it could potentially impact other applications that share the same SQL instance. Instance resources could become divided or locked, which will cause performance issues for the other applications sharing that instance. Because troubleshooting application performance issues at the database level can be difficult, weigh the cost of additional licensing against the performance isolation that dedicated instances provide.

Currently, Microsoft licenses SQL Server per core and not per instance. That said, to guarantee performance, choose dedicated SQL Server instances when possible. Don't create a separate instance for every single application, but don't lump unrelated applications into the same instance either.

TempDB Right-sizing & Configuration

It is best practice to pre-grow TempDB files to their full size to avoid issues with disk fragmentation. Page contention occurs on GAM, SGAM and PFS pages when SQL Server writes special system pages to allocate new database objects. Latches lock these pages in memory, which means on a particularly busy SQL Server it can take time for a latch to be acquired on a system page in TempDB. This behavior, known as latch contention, results in slower queries.

Here’s how you right size TempDB in a VMware virtual environment:

For VMs with 8 or fewer vCPU cores: the number of TempDB data files should equal the number of vCPU cores.

For VMs with more than 8 vCPU cores: use 8 TempDB data files.

Beginning with SQL Server 2016, the number of CPU cores visible to the operating system will automatically be detected during SQL Server installation, and SQL will calculate and configure the number of TempDB files required automatically.
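As an illustration, here is a minimal T-SQL sketch for pre-sizing TempDB on a VM with 4 vCPU cores. The file names, sizes, and the T:\TempDB path are assumptions borrowed from the example disk layout above, not values from any specific environment:

```sql
-- Illustrative only: a VM with 4 vCPU cores, so 4 equally sized TempDB data files.
-- File names, sizes, and the T:\TempDB path are assumptions; adjust to your environment.
USE master;
GO

-- Pre-grow the existing primary data file and the log file to full size.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 8192MB, FILEGROWTH = 0);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 2048MB);

-- Add the remaining data files, one per vCPU core, all sized identically.
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 8192MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 8192MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 8192MB, FILEGROWTH = 0);
GO
```

With FILEGROWTH set to 0 the files never autogrow, so make sure any free-space monitoring on the TempDB volume accounts for the pre-grown size.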

Trace Flag 1118 – Full Extents Only

Microsoft advises in KB 2154845 that Trace Flag 1118 can help to reduce allocation contention in TempDB. Trace Flag 1118 instructs SQL Server to avoid mixed extents and use full extents. Learn more about extents here.

When Trace Flag 1118 is set, each newly allocated object in every database on that instance gets its own private 64KB extent. Since TempDB is typically where most objects are created, that's where it makes the most difference.
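A minimal sketch of enabling the flag, assuming it is applied globally on an instance older than SQL Server 2016 (where this behavior is not yet the default for TempDB):

```sql
-- Enable Trace Flag 1118 globally (-1) on the running instance.
DBCC TRACEON (1118, -1);

-- Confirm which trace flags are currently active.
DBCC TRACESTATUS (-1);

-- To persist the flag across restarts, add -T1118 as a startup parameter
-- for the SQL Server service in SQL Server Configuration Manager.
```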

Trace Flag 1117 – Grow All Files in a FileGroup Equally

Microsoft advises in KB 2154845 that Trace Flag 1117 can be useful for TempDB, which is commonly configured with multiple data files. Trace Flag 1117 will change the behavior of file growth. If set, when one data file in a FileGroup grows, it will force other files in that FileGroup to grow as well. Trace Flag 1117 is recommended in the Fast Track Architecture Guide from Microsoft.

Trace Flag 1117 will impact every database on the instance and not just TempDB. Alternatively, TempDB files can be pre-grown so that they fill the TempDB drive, leaving just enough room for any free space monitoring that is in place to avoid alarms.
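A hedged sketch of both approaches follows; the database and filegroup names in the second statement are purely illustrative, and on SQL Server 2016 and later the trace flag is superseded by the per-filegroup AUTOGROW_ALL_FILES setting (already the default for TempDB):

```sql
-- Enable Trace Flag 1117 globally so all files in a filegroup grow together.
DBCC TRACEON (1117, -1);

-- SQL Server 2016 and later replace the trace flag with a per-filegroup option.
-- Database and filegroup names here are illustrative only.
ALTER DATABASE [MyAppDB] MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;
```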

Whenever possible, avoid relying on trace flags; they change behavior instance-wide and can introduce edge cases that are difficult to troubleshoot. Learn more about Trace Flags here.

vCPU Considerations

Hyper-Threading

Hyper-threading is an Intel technology that exposes two hardware threads from a single physical CPU core, usually referred to as logical CPUs. Hyper-threading typically improves the overall host CPU performance between 10% and 30% by keeping the processor pipeline busier and allowing the hypervisor more opportunities to schedule CPU clock cycles. Contrary to popular belief, it does not double the number of CPUs or cores. Enable Hyper-threading in the BIOS of the ESXi host.

Learn more about compute capacity limits in SQL Server here.

Non-Uniform Memory Access (NUMA and vNUMA)

SQL Server is NUMA aware, and performs well on NUMA hardware (or virtual machines) with no configuration required. Make sure that CPU Hot Add is disabled in the VM settings to ensure that vNUMA is enabled in the Guest OS.

Learn more about CPU Hot Add performance in vSphere 6.7.

For larger SQL deployments with many databases, consider splitting the workload into smaller virtual machines that each fit within a single NUMA node, rather than one large VM that spans NUMA nodes, to improve performance.
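To sanity-check what SQL Server actually sees inside the guest, a quick query against the sys.dm_os_nodes DMV shows the NUMA nodes and schedulers SQLOS has built; a VM sized to fit a single NUMA node should report one ONLINE node, plus an internal node reserved for the dedicated admin connection:

```sql
-- List the NUMA nodes SQLOS has built inside the guest, with scheduler counts.
SELECT node_id,
       node_state_desc,
       memory_node_id,
       online_scheduler_count
FROM sys.dm_os_nodes;
```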

Maximum Degree of Parallelism (MAXDoP)

By default, SQL Server will use all available vCPUs during query execution. That's great for large queries, but it can cause performance problems and limit concurrency. The appropriate MAXDoP value depends on the topology of the SQL Server machine. There are three factors to look at when determining MAXDoP:

  1. Symmetric Multiprocessing (SMP).
  2. Non-Uniform Memory Access (NUMA and vNUMA).
  3. Hyper-Threading.

Microsoft recommends the following when configuring MAXDoP:

Server Configuration | Number of Processors | Guidance
Server with single NUMA node | Less than or equal to 8 logical processors | Keep MAXDOP at or below # of logical processors
Server with single NUMA node | Greater than 8 logical processors | Keep MAXDOP at 8
Server with multiple NUMA nodes | Less than or equal to 16 logical processors per NUMA node | Keep MAXDOP at or below # of logical processors per NUMA node
Server with multiple NUMA nodes | Greater than 16 logical processors per NUMA node | Keep MAXDOP at half the number of logical processors per NUMA node, with a MAX value of 16

Learn more about Maximum Degree of Parallelism (MAXDoP) here.
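As a hedged example, assuming a VM presenting a single NUMA node with 16 logical processors (so MAXDOP 8 per the table above):

```sql
-- MAXDOP is an advanced option, so advanced options must be visible first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Assumed example: single NUMA node with 16 logical processors -> MAXDOP 8.
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```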

Cost Threshold for Parallelism

By default, SQL Server sets Cost Threshold for Parallelism to 5. Small queries with a cost under approximately 50 typically don’t see any performance improvements when comparing parallel vs. serial execution. In some cases, they even degrade slightly due to the startup and computational overhead of going parallel.

The cost threshold figure is used by the optimizer when evaluating multi-threaded plans. The default value of 5 is typically a very low setting, and is really only appropriate for pure Online Transaction Processing (OLTP) applications. For non-OLTP systems, it's widely recommended to set the cost threshold to 50 and then tune it up or down as appropriate.

Measure the critical queries in an application and adjust the cost threshold to ensure performance.
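A minimal sketch, assuming a starting value of 50 for a non-OLTP workload as described above:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Assumed starting point of 50 for a non-OLTP workload; measure and adjust.
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```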

vMEM Considerations

Min Server Memory Option

Sets the minimum amount of memory that the SQL instance has at its disposal. Set this too high, and the operating system won't have enough resources for background processing and anything other than SQL, which can cause performance issues. A combined example for the minimum and maximum options follows the Max Server Memory section below.

Learn more about SQL Server memory configuration options here.

Max Server Memory Option

Sets the maximum amount of memory that the SQL instance has at its disposal. This setting is generally used to define boundaries for different instances and applications so that the overall memory of the server is not consumed, which would negatively affect the performance of databases, applications and even the operating system of the server.

If several SQL instances exist on the server, they’ll all be competing for the same resources. Setting the Max Server Memory option for each instance ensures that the instances continue to operate at peak performance. A good rule of thumb is to reserve 4-6GB of memory for the operating system. In an environment where the SQL server has 16GB of total memory assigned, assign no more than 10-12GB of memory for the SQL instances.
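A sketch of both memory options for the 16GB example above; the 2GB minimum is an illustrative assumption, not a Microsoft-published figure:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Illustrative floor of 2GB for the instance.
EXEC sp_configure 'min server memory (MB)', 2048;
RECONFIGURE;

-- 12GB ceiling, leaving roughly 4GB of the 16GB VM for the operating system.
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```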

Learn more about SQL Server memory configuration options here.

Max Worker Threads

Max Worker Threads is an advanced option and so should not be altered without approval from an experienced database administrator.

Setting this option can optimize performance when a large number of clients are connected to SQL Server. Typically, a separate thread is created for each query request. If hundreds of simultaneous connections are made to SQL Server, one thread per query request would consume large amounts of compute resources. The Max Worker Threads option improves performance by enabling SQL Server to create a pool of worker threads that services a larger number of query requests.

By default, Max Worker Threads is set to 0. This allows SQL Server to automatically configure the number of worker threads at startup, and will work for most SQL configurations.

When should the Max Worker Threads option be leveraged? If the average Work Queue Length for each scheduler is above 1, there may be benefit from adding more threads to the system.

There will only be a benefit if the load is not CPU-bound and is not experiencing other heavy waits.
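A sketch of checking the work queue length and leaving Max Worker Threads at its automatic default; the 'VISIBLE ONLINE' filter simply excludes hidden system schedulers:

```sql
-- Average work queue length per visible scheduler; values consistently above 1
-- on a system that is not CPU-bound may indicate a shortage of worker threads.
SELECT AVG(CAST(work_queue_count AS float)) AS avg_work_queue_length
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';

-- The default of 0 lets SQL Server size the worker thread pool automatically.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max worker threads', 0;
RECONFIGURE;
```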

Learn more about the Max Worker Threads option here.

Index Create Memory Option

Index Create Memory is an advanced option and so should not be altered without approval from an experienced database administrator.

Setting this option controls the maximum amount of memory that is initially allocated for creating indexes.

By default, Index Create Memory is set to 0. This allows SQL Server to automatically manage the maximum amount of memory that is initially allocated for creating indexes.

When should the Index Create Memory Option be leveraged? If the SQL Server has difficulties creating indexes, consider increasing the value of this option.
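A minimal sketch; the 4GB value (expressed in KB) is purely illustrative:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Illustrative value of 4GB, expressed in KB; 0 (the default) lets SQL Server
-- manage index creation memory automatically.
EXEC sp_configure 'index create memory (KB)', 4194304;
RECONFIGURE;
```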

Learn more about the Index Create Memory option here.

Min Memory Per Query

When a query is run, SQL Server attempts to allocate the optimum amount of memory for that query to run efficiently.

By default, the Min Memory Per Query option is set to 1024 KB, so each query is granted at least 1024 KB of memory when it runs. Microsoft best practice is to leave this setting at its default value, which allows SQL Server to dynamically manage the memory granted to each query.

If SQL Server has more memory than it needs to run efficiently, increasing this setting can boost the performance of some queries and improve overall performance. If there is no free memory available, increasing it will decrease overall performance instead.
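A minimal sketch; the 2048 KB value is purely illustrative and only makes sense when memory is plentiful:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Illustrative bump from the 1024 KB default; only helpful when memory is plentiful.
EXEC sp_configure 'min memory per query (KB)', 2048;
RECONFIGURE;
```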

Learn more about the Min Memory Per Query option here.

Scott Forehand, Infrastructure Architect

Scott is an experienced professional and recognized leader specializing in the design and implementation of enterprise virtualization, enterprise storage, server infrastructures and operations. A self-starter able to work both independently and in a team-oriented environment, Scott has a proven ability to create and automate solutions to improve productivity, reliability and performance.

Throughout his professional career, Scott has proven successes implementing technology and service improvement initiatives and has a demonstrated ability to think strategically about business, create technical definition around objectives in complex situations, develop solution strategies, motivate and mobilize resources, and deliver end-to-end technology solutions.

Scott has achieved multiple industry recognized certifications in virtualization, networking, cloud, storage, converged infrastructure, hyperconverged infrastructure and other technologies, and is honored to be a VMware vExpert in 2018-2022, and vExpert EUC 2020-2022.

