


The coexistence of SAS, SATA and NVMe drives in one server is coming to an end.

For two decades, SAS has been the undisputed choice for building enterprise data storage, thanks to its balance of performance and cost, high reliability, and scalability. SAS expanders give it a cost-effective way to scale and manage extremely large data sets.

Struggle for speed

If only it were just a matter of capacity. The demands of performance-critical applications gave rise to NVMe, a high-speed protocol for exchanging data with flash drives. Enterprise-grade NVMe SSDs beat SAS media on latency and are significantly faster in both sequential and random access. Under heavy concurrent load they have no equal: parallel I/O processing is built into the protocol, and the PCIe bus gives the CPU direct access to data across multiple lanes.

Technologies do not replace each other overnight. Moreover, different drives play different roles in the storage infrastructure, so standards coexist side by side for a long time. Mixing SAS and SATA drives is easy: they are handled by the same controllers, and the drive cages, backplanes, and cabling are all shared. NVMe SSDs are another matter. By design, they need no intermediary controller passing the data stream between the drive and the PCIe bus. Although the U.2 form factor (2.5" NVMe SSD) took root in mainstream servers for compatibility with typical hot-swap drive cages, the server I/O subsystem changed radically: hybrid backplanes appeared with two sets of switching, two signaling systems, and two cable bundles - one for SAS/SATA, another for U.2.

The wrong path: Tri-mode / U.3

At the instigation of RAID controller developers, the industry took a step down the deceptive path of "versatility". A new generation of Tri-mode RAID controllers can serve SAS, SATA, and NVMe drives. The later U.3 storage standard, with universal 2.5" bays and automatic drive-type detection, further tied disk subsystems to such controllers. The U.3 tri-mode platform is built around a uniform backplane design and a single connector type. Its mandatory elements are: (1) a tri-mode controller; (2) universal connectors; (3) a universal backplane management scheme (Universal Backplane Management, UBM).

The perceived benefit of automatic drive-type detection, single-cable connection, and unified traffic routing turned out to be an illusion. Even sharing SAS and SATA drives under one controller is rare. Adding NVMe to the mix makes even less sense: hanging NVMe drives off a Tri-mode controller with at most 16 PCIe lanes kills their performance potential (a single NVMe SSD needs four lanes). Scalability is not even worth mentioning. And such controllers are much more expensive than their SAS/SATA predecessors, which by itself devalues the arguments about savings on cables and user convenience.
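The lane arithmetic is easy to check. A minimal sketch, assuming a x16 host slot behind the controller and a x4 link per NVMe SSD (the figures the text cites); the 24-bay chassis is an illustrative assumption:

```shell
# How many NVMe drives a Tri-mode controller in a x16 slot can feed at full speed.
host_lanes=16        # PCIe lanes into the controller (x16 slot)
lanes_per_nvme=4     # one NVMe SSD wants a x4 link
bays=24              # hypothetical 2U / 24-bay chassis

full_speed_drives=$(( host_lanes / lanes_per_nvme ))
echo "drives served at full bandwidth: $full_speed_drives of $bays"
```

Four drives out of twenty-four get full bandwidth; the rest share what is left, which is why the "universal" controller becomes the bottleneck.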

Separation of powers

In the U.2 connector, the SAS/SATA lanes are separated from the NVMe lanes, letting system designers scale each side independently with off-the-shelf SAS expanders and PCIe switches. For example, ASUS 2U / 24 x 2.5" server platforms are NVMe-ready: every drive slot accepts U.2 or U.3 SSDs, some slots are also wired to SATA ports on the motherboard, and adding a hardware HBA or RAID controller lets eight bays take SAS/SATA SSDs or HDDs.

Since Tri-mode RAID controllers cap what NVMe can do, the search for high-performance solutions focuses on unlocking the potential of flash drives through greater parallelism in buses and processing devices. The hunt for performance rules out unifying the SAS and NVMe worlds through shared storage plumbing. Beyond the compute host, everything is separate: connection circuitry, cabling, and RAID-based data protection.

NVMe above all else

Everything points to application servers with mostly "hot" data moving entirely to NVMe. There is not that much such data, and assembling a few tens of terabytes from high-capacity NVMe SSDs is not difficult (nor that expensive, given the purpose of these servers). 1U platforms with 10-12 x 2.5" U.2/U.3 bays, single- or dual-socket, are becoming the basis of database servers, high-performance computing nodes, analytics servers, and hyperconverged infrastructure nodes. There is no place there for a SAS/SATA SSD, let alone an HDD.

The fate of the HDD

High-capacity mechanical disks remain the carriers of "cold" data: film libraries, video surveillance footage, backup storage, archives. When HDDs make up the bulk of the budget, the cost per unit of storage is decisive. Such systems are typically dominated by streaming workloads with sequential data access, where the advantages of SSDs are barely visible - so SSDs are absent there.

When hundreds of terabytes are needed, a modular split is appropriate: the control server on one side, the mechanical disks on the other - in a JBOD shelf connected to the host via 12G SAS. This achieves high storage density and comfortable conditions for the drives: the JBOD's design counters the two main enemies of the HDD, induced vibration and overheating. A modern JBOD holds dozens of hot-swappable 3.5" drives, has redundant I/O modules, and multiple SAS ports for connecting hosts and further shelves. Capacity is scaled by cascading them.

A popular example is the WD Ultrastar Data60 line, holding up to one and a half petabytes in 4U.

Departure

Distributing roles and "housing" among media of different types is the logical way to build capacious, high-performance ZFS storage. Mechanical disks have no alternative for bulk storage. But besides the main data pools there is auxiliary data: the second-level adaptive replacement cache (L2ARC), the ZFS intent log (ZIL, kept on a separate SLOG device), and metadata (the tables, indexes, and pointers that define the address structure of the pools). Usually all of this is mixed with the main data on the same slow media. The performance penalty of that colocation can be reduced by moving the auxiliary data to SATA or NVMe SSDs. This separation speeds up addressing and reads and helps protect data without sacrificing performance.
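As a sketch, the layout described above maps onto standard OpenZFS commands roughly like this; the pool name and all device names are hypothetical placeholders, not taken from the article:

```shell
# Bulk "cold" data lives on JBOD HDDs; ZFS auxiliary data goes to flash.
# All device names below are placeholders for illustration.
zpool create tank raidz2 sda sdb sdc sdd sde sdf   # main pool on JBOD HDDs
zpool add tank log mirror nvme0n1 nvme1n1          # SLOG (ZIL) on mirrored NVMe
zpool add tank cache nvme2n1                       # L2ARC second-level read cache
zpool add tank special mirror nvme3n1 nvme4n1      # "special" vdev for metadata
```

The "special" vdev is where OpenZFS keeps exactly the tables, indexes, and pointers mentioned above; mirroring the SLOG and special devices matters because, unlike the L2ARC cache, their loss can cost data.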

Developers of the most common (thanks to open source) TrueNAS storage OS on the OpenZFS file system provide an example: “The optimal SLOG device is a small flash-based device such as an SSD or NVMe card, due to their inherent high performance, low latency, and of course, resilience in the event of power loss. You can mirror your SLOG devices as an extra precaution and you'll be surprised what a speed boost you can get with just a few gigabytes of dedicated log storage. Your drive pool will have the write performance of a flash array with the capacity of a traditional mechanical disk array. That's why we ship every TrueNAS mechanical disk system with SLOG on high-performance flash storage, a standard option on our FreeNAS Certified line.”

So we make a capacious JBOD the "body" that stores hundreds of terabytes of data, connect it via 12G SAS to the "head" - a 1U server with CPUs, RAM, network cards, and a set of NVMe drives for the auxiliary data - configure it, and enjoy the performance.

How can we help?

The Server Solutions company sells Dell PowerEdge R760 and Dell PowerEdge R760xs servers throughout Ukraine; our customers include small, medium, and large businesses. If you or your company need advice on or the purchase of high-quality server equipment, contact us.
