Windows Server 2025 - NVMe Configuration for Maximum Performance
Revolutionary Native NVMe Support
Windows Server 2025 introduces native NVMe support, eliminating 14 years of dependence on SCSI emulation and providing direct access to NVMe capabilities.
In April 2024, Microsoft unveiled the Windows Server 2025 roadmap, which described NVMe support for the OS and detailed significant performance improvements over Windows Server 2022. The company promised an increase in IOPS (input/output operations per second) of roughly 70% thanks to a newly optimized I/O path.
In December 2025, Microsoft confirmed that this feature had been implemented in Windows Server 2025 and had reached general availability. The company stated that the October 2025 Patch Tuesday update for Server 2025 (KB5066835) adds built-in NVMe support, although it is currently opt-in: administrators must enable it manually, as it is not enabled by default.
Microsoft now claims an IOPS improvement of approximately 80%, 10 percentage points more than originally promised, which suggests that further optimization work improved performance even more. In addition, Microsoft promises a saving of approximately 45% of CPU cycles per I/O operation for 4K random reads on NTFS volumes.
In practical terms, Windows Server 2025 will no longer present all storage devices as Small Computer System Interface (SCSI) devices by default, a standard originally developed for spinning media such as hard drives.
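As a quick inventory step before enabling the feature, you can list the bus type Windows reports for each physical disk; this is a minimal sketch using the standard storage cmdlets and simply identifies which disks are NVMe:
# List physical disks and the bus type Windows reports for them
Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, Size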
Microsoft highlights the improvements that the new built-in NVMe support for solid-state drives provides:
- A significant increase in IOPS: direct access to NVMe devices through multiple queues finally lets you reach the true limits of your hardware.
- Reduced latency: traditional SCSI stacks rely on locks and synchronization mechanisms in the kernel's I/O path to manage resources. Built-in NVMe provides streamlined, lock-free I/O paths, which significantly reduces the time of each operation.
- CPU efficiency: a more compact, optimized architecture frees computing resources for your workloads rather than for storage overhead.
- Future-proof features: built-in support for advanced NVMe capabilities, such as multi-queue processing and direct data transfer, ensures you are ready for next-generation storage innovations.
Key Performance Improvements
Test results from Microsoft:
- Up to 80% increase in IOPS on 4K random read workloads
- Up to 45% reduction in CPU usage per I/O operation
- Elimination of latency from the SCSI translation layer
- Multi-queue support for up to 64,000 queues instead of 1 SCSI queue

Requirements for Native NVMe
System Requirements
| Requirement | Details |
|---|---|
| Operating system | Windows Server 2025 + KB5066835 (October 2025 update or newer) |
| Driver | Microsoft StorNVMe.sys (standard Windows NVMe driver) |
| Hardware | NVMe SSD (PCIe Gen3/4/5) |
| Recommended | PCIe Gen5 NVMe for maximum performance |
IMPORTANT: If vendor-specific drivers (Samsung, Intel/Solidigm) are used, Native NVMe will NOT work. A standard Microsoft driver is required.
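A quick way to spot vendor drivers, using the same PnP cmdlets that appear later in this guide, is to list the driver provider for every disk device. This is a sketch, not an exhaustive compatibility check:
# Flag disk devices whose driver provider is not Microsoft (vendor NVMe drivers prevent Native NVMe)
Get-PnpDevice -Class "DiskDrive" -PresentOnly | ForEach-Object {
    [pscustomobject]@{
        Device         = $_.FriendlyName
        DriverProvider = ($_ | Get-PnpDeviceProperty -KeyName DEVPKEY_Device_DriverProvider).Data
    }
} | Where-Object { $_.DriverProvider -ne "Microsoft" }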
Step-by-step instructions for enabling Native NVMe
Method 1: Registry (For Single Servers)
Step 1: Install Updates
# Make sure KB5066835 or a newer update is installed
Get-HotFix | Where-Object {$_.HotFixID -eq "KB5066835"}
Step 2: Check Current Driver
# Verify that the Microsoft NVMe driver is in use
Get-PnpDevice -Class "DiskDrive" | Get-PnpDeviceProperty -KeyName DEVPKEY_Device_DriverProvider
Step 3: Enable Native NVMe via Registry
# Open PowerShell as administrator and run:
reg add HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides /v 1176759950 /t REG_DWORD /d 1 /f
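If you prefer native PowerShell over reg.exe, the following sketch sets the same override value, assuming the same key and value ID shown above:
# Create the Overrides key if needed and set the Native NVMe feature override
$path = "HKLM:\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides"
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
Set-ItemProperty -Path $path -Name "1176759950" -Value 1 -Type DWord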
Step 4: Reboot
Restart-Computer -Force
Method 2: Group Policy (For Multiple Servers)
Step 1: Download the Group Policy MSI
- Download Group Policy MSI from Microsoft
- Install on a domain controller
Step 2: Configure GPOs
- Open the Group Policy Management Console (gpmc.msc)
- Create a new GPO or modify an existing one
- Go to:
Local Computer Policy > Computer Configuration > Administrative Templates > KB5066835 251014_21251 Feature Preview > Windows 11, version 24H2, 25H2 - Enable the policy for Native NVMe support
- Apply the GPO to the required servers
Step 3: Update Group Policy
gpupdate /force
Restart-Computer -Force
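For a handful of servers that are not covered by the GPO, one option is to push the same registry override over PowerShell remoting. This is a hedged sketch; the server names are placeholders to be replaced with your own:
# Apply the override remotely and reboot (adjust the server list for your environment)
$servers = "SRV01", "SRV02"
Invoke-Command -ComputerName $servers -ScriptBlock {
    $path = "HKLM:\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides"
    if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
    Set-ItemProperty -Path $path -Name "1176759950" -Value 1 -Type DWord
    Restart-Computer -Force
}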
Verifying That Native NVMe Is Enabled
Method 1: Device Manager
- Open Device Manager (devmgmt.msc)
- Find the "Storage disks" or "Disk drives" section
- NVMe devices should be clearly displayed under this section
- Check the properties of the driver - it should be StorNVMe.sys
Method 2: PowerShell Verification
# Check NVMe devices
Get-PnpDevice -Class "DiskDrive" | Where-Object {$_.FriendlyName -like "*NVMe*"}
# Check driver details
Get-PnpDevice -Class "DiskDrive" | Get-PnpDeviceProperty -KeyName DEVPKEY_Device_DriverVersion
# Check that the new storage stack is in use
Get-StorageSubSystem | Select-Object FriendlyName, HealthStatus, Model
Method 3: Performance Monitor
Configuring IOPS Monitoring:
- Open Performance Monitor (perfmon.msc)
- Add a counter: Physical Disk > Disk Transfers/sec
- Select the appropriate NVMe drive
- Start monitoring (a Get-Counter equivalent is sketched below)
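The same counter can be sampled directly from PowerShell if you prefer a quick command-line check over the Performance Monitor GUI:
# Sample disk IOPS for five seconds using the same Disk Transfers/sec counter
Get-Counter -Counter "\PhysicalDisk(*)\Disk Transfers/sec" -SampleInterval 1 -MaxSamples 5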
Performance testing
Using DiskSpd for Benchmark
Installing DiskSpd:
# Download DiskSpd from Microsoft
# https://github.com/Microsoft/diskspd
Basic Test (4K Random Read):
# Test similar to the one Microsoft used to demonstrate the ~80% IOPS gain
diskspd.exe -b4k -r -Su -t8 -L -o32 -W10 -d30 C:\testfile.dat
Test parameters:
- -b4k: 4 KB block size
- -r: random I/O
- -Su: disable software caching
- -t8: 8 threads
- -L: measure latency
- -o32: 32 outstanding I/O operations per thread
- -W10: 10-second warm-up
- -d30: 30-second test duration
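Note that DiskSpd does not generate the target file on its own; add the -c<size> parameter on the first run to create it, then reuse the file for subsequent tests. For example:
# Create a 10 GB test file on the first run of the 4K random-read test
diskspd.exe -c10G -b4k -r -Su -t8 -L -o32 -W10 -d30 C:\testfile.dat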
Advanced Tests:
# Sequential Read Test
diskspd.exe -b128k -d60 -Sh -L -o32 -t4 -si -w0 C:\testfile.dat
# Sequential Write Test
diskspd.exe -b128k -d60 -Sh -L -o32 -t4 -si -w100 C:\testfile.dat
# Mixed Read/Write (70% read, 30% write)
diskspd.exe -b4k -d60 -Sh -L -o32 -t8 -r -w30 C:\testfile.dat
Configuration Optimization for Different Workloads
1. SQL Server and OLTP Databases
Recommended Settings:
# Configure MPIO for multi-path I/O (if applicable)
Enable-WindowsOptionalFeature -Online -FeatureName "MultiPathIO" -All
# Optimize queue depth for SQL
# Use Device Manager > NVMe Properties > Advanced
SQL Server Specific:
-- Check I/O latency per database file in SQL Server
SELECT
    DB_NAME(divfs.database_id) AS database_name,
    divfs.file_id,
    divfs.io_stall_read_ms,
    divfs.io_stall_write_ms,
    divfs.num_of_reads,
    divfs.num_of_writes,
    divfs.io_stall_read_ms  / NULLIF(divfs.num_of_reads, 0)  AS avg_read_latency_ms,
    divfs.io_stall_write_ms / NULLIF(divfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS divfs;
2. Hyper-V and Virtualization
NVMe for VM Storage:
# Create a VM with NVMe-backed storage for maximum performance
New-VM -Name "VM1" -MemoryStartupBytes 8GB -Generation 2
# Add a disk located on the NVMe volume
New-VHD -Path "D:\VMs\VM1\disk.vhdx" -SizeBytes 500GB -Dynamic
Add-VMHardDiskDrive -VMName "VM1" -Path "D:\VMs\VM1\disk.vhdx"
# Enable Storage QoS to control IOPS
Set-VMHardDiskDrive -VMName "VM1" -MinimumIOPS 100 -MaximumIOPS 10000
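To confirm that the QoS limits were applied, you can read them back from the virtual hard disk. This is a quick check using the standard Hyper-V cmdlets and the same VM name as above:
# Verify the Storage QoS limits configured on the VM's disk
Get-VMHardDiskDrive -VMName "VM1" | Select-Object Path, MinimumIOPS, MaximumIOPS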
3. Storage Spaces Direct (S2D)
Configuration for S2D Campus Cluster:
Requirements:
- All-flash storage (only NVMe or SSD)
- Inter-rack latency ≤ 1ms
- RDMA networking is recommended
- Windows Server 2025 native NVMe support
Configuration:
# Enable Storage Spaces Direct
Enable-ClusterStorageSpacesDirect -PoolFriendlyName "S2D Pool" -CacheState Enabled
# Create a volume optimized for NVMe
New-Volume -FriendlyName "Volume1" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D Pool" -Size 1TB -ResiliencySettingName Mirror
# Check health
Get-StoragePool | Get-PhysicalDisk
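During rebuilds or resync operations you can also track progress with the built-in storage job cmdlet:
# Monitor repair and resync jobs running against the pool
Get-StorageJob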
4. File Server and SMB
Optimization for File Serving:
# Enable SMB over QUIC (low-latency, TLS-encrypted SMB transport)
Set-SmbServerConfiguration -EnableSMBQUIC $true
# Configure SMB Multichannel
Set-SmbClientConfiguration -EnableMultiChannel $true
# Tune SMB for NVMe
Set-SmbServerConfiguration -MaxThreadsPerQueue 256
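To verify the transport side, you can check whether SMB sees RDMA-capable interfaces (SMB Direct) and whether multichannel connections are actually in use. A sketch using the built-in SMB cmdlets:
# Show RDMA/RSS capability per network interface as seen by the SMB server
Get-SmbServerNetworkInterface | Select-Object InterfaceIndex, RdmaCapable, RssCapable, Speed
# On a client, list active multichannel connections
Get-SmbMultichannelConnection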
Advanced Configuration Settings
Registry Tweaks for Performance
# Optimize NTFS for NVMe
# Disable Last Access Time updates (reduces write operations)
fsutil behavior set disablelastaccess 1
# Increase NTFS memory usage for caching
fsutil behavior set memoryusage 2
# Configure MPIO recovery interval (if used)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters" /v PathRecoveryInterval /t REG_DWORD /d 30 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters" /v UseCustomPathRecoveryInterval /t REG_DWORD /d 1 /f
Power Management Optimization
# Set the power plan to High Performance
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
# Disable power saving for NVMe devices:
# Device Manager > NVMe Controller > Properties > Power Management
# Uncheck "Allow the computer to turn off this device to save power"
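To confirm the change, query the active power scheme:
# Show the currently active power plan
powercfg /getactivescheme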
Write Cache Optimization
# Write caching is enabled per disk in Device Manager (Disk drive Properties > Policies)
# Use only with proper power protection (UPS/battery backup)!
# The snippet below only ensures NVMe physical disks are set to automatic usage in Storage Spaces
$disks = Get-PhysicalDisk | Where-Object {$_.BusType -eq "NVMe"}
foreach ($disk in $disks) {
    Set-PhysicalDisk -UniqueId $disk.UniqueId -Usage AutoSelect
}
Monitoring and Troubleshooting
Performance Monitoring Setup
Create a Custom Data Collector Set:
# Create a performance baseline
$counterSets = @(
    "\PhysicalDisk(*)\Disk Read Bytes/sec",
    "\PhysicalDisk(*)\Disk Write Bytes/sec",
    "\PhysicalDisk(*)\Disk Reads/sec",
    "\PhysicalDisk(*)\Disk Writes/sec",
    "\PhysicalDisk(*)\Avg. Disk sec/Read",
    "\PhysicalDisk(*)\Avg. Disk sec/Write",
    "\PhysicalDisk(*)\Current Disk Queue Length",
    "\Processor(*)\% Processor Time",
    "\Memory\Available MBytes"
)
# Create the data collector set via the PLA COM interface
$collectorSet = New-Object -ComObject Pla.DataCollectorSet
$collectorSet.DisplayName = "NVMe Performance Monitor"
$collectorSet.Duration = 3600 # 1 hour
$collectorSet.SchedulesEnabled = $true
# Bind the counters to a performance counter data collector (type 0 = performance counter)
$collector = $collectorSet.DataCollectors.CreateDataCollector(0)
$collector.Name = "NVMe Counters"
$collector.PerformanceCounters = $counterSets
$collectorSet.DataCollectors.Add($collector)
# Save the set (0x0003 = create or modify) and start collecting
$collectorSet.Commit("NVMe Performance Monitor", $null, 0x0003)
$collectorSet.Start($false)
Windows Admin Center Monitoring
- Open Windows Admin Center
- Connect to the server
- Go to the Storage section
- Monitor:
- IOPS (read/write)
- Throughput (MB/s)
- Latency (ms)
- Queue depth
Troubleshooting Common Issues
Problem 1: Native NVMe does not activate
# Check that the Microsoft driver is in use
Get-PnpDevice -Class "DiskDrive" | Get-PnpDeviceProperty | Where-Object {$_.KeyName -like "*Driver*"}
# If a vendor driver is in use, switch to the Microsoft driver:
# Device Manager > NVMe Controller > Update Driver > Browse > Let me pick > Standard NVMe Driver
Problem 2: Performance did not improve
# Check that the registry key is set correctly
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Policies\Microsoft\FeatureManagement\Overrides" -Name "1176759950"
# Check Event Viewer for errors
Get-EventLog -LogName System -Source "stornvme" -Newest 50
Problem 3: Compatibility Issues
- Some consumer-grade NVMe drives may show reduced performance
- PCIe Gen5 drives show the highest gain
- Older NVMe (Gen3) may have a smaller improvement
Best Practices and Recommendations
Do's (Recommended):
- Test in a lab environment first before production deployment
- Create a full backup before enabling Native NVMe
- Monitor performance before and after to measure real gains
- Use enterprise-grade NVMe for critical applications
- Update the firmware of NVMe drives to the latest versions
- Check compatibility with vendor-specific drivers
Don'ts (Not recommended):
- Do not enable it in production without testing
- Do not use it with vendor-specific drivers (it will not work)
- Do not expect maximum gains on older hardware (PCIe Gen3/4)
- Do not disable the feature unnecessarily after successful activation
- Do not forget about UPS if aggressive write caching is enabled
Expected Results by Workload Types
SQL Server / OLTP
- Transaction throughput: +40-60%
- Query latency: 30-50% lower
- CPU overhead: 40-45% less on I/O operations
Hyper-V VMs
- VM IOPS: +60-80%
- VM boot time: 20-30% faster
- Storage latency: 40-50% lower
File Server / SMB
- Throughput: +50-70% on sequential workloads
- IOPS: +70-80% on random workloads
- Concurrent users: 30-50% more supported users
Storage Spaces Direct
- Cluster performance: +60-75% IOPS
- Rebuild speed: 40-50% faster
- Resync operations: 30-40% less time
Conclusion
Native NVMe support in Windows Server 2025 is a fundamental improvement to the storage stack that eliminates the 14-year-old limitation of SCSI emulation.
Key Advantages:
- Up to 80% increase in IOPS on enterprise workloads
- Up to 45% reduction in CPU overhead on storage operations
- A significant reduction in latency for all types of I/O
- Multi-queue architecture uses the full potential of NVMe
- Future-proof for the next generations of NVMe hardware
Recommendations for Implementation:
- Start with a non-production environment for testing
- Measure baseline performance before switching on
- Gradually roll out to production servers
- Monitor closely for the first few days
- Use Windows Admin Center for centralized monitoring
This improvement is especially valuable for organizations with I/O-intensive workloads such as databases, virtualization platforms, and high-performance file services.
