
5 Ways in which NVMe-oF is Enabling Businesses

Khalid Wani, Director - Sales, India, Western Digital

As businesses contend with data’s perpetual growth, they need to rethink how data is captured, preserved, accessed, and transformed. NVMe has a significant impact on businesses and what they can do with data, particularly Fast Data for real-time analytics and emerging technologies.

The NVMe protocol capitalizes on parallel, low-latency data paths to the underlying media, much as high-performance processor architectures do, offering significantly higher performance and lower latencies than the legacy SAS and SATA protocols. It not only accelerates existing applications that require high performance, but also enables new applications and capabilities for real-time workload processing in the data center and at the Edge. Designed for high-performance, non-volatile storage media, NVMe is the protocol that stands out in highly demanding, compute-intensive enterprise, cloud, and edge data ecosystems.
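
Concretely, the parallelism gap shows up in queue depth: AHCI/SATA exposes a single queue of 32 commands, while the NVMe specification allows up to roughly 64K queues of 64K commands each. The short Python sketch below is an illustration only, not vendor code; it assumes a Linux host with the usual sysfs layout (which can vary) and prints that spec-level ceiling alongside the hardware queues the kernel has actually allocated per NVMe block device.

```python
# Minimal sketch: contrast per-device command parallelism and count the
# blk-mq hardware queues the kernel exposes for each NVMe block device.
# Assumes a Linux host with the standard sysfs layout; paths may vary.
from pathlib import Path

AHCI_SATA_SLOTS = 1 * 32             # AHCI/SATA: a single queue of 32 commands
NVME_SPEC_CEILING = 65_535 * 65_536  # NVMe spec: up to ~64K queues x 64K entries

def nvme_hw_queues(block_dev: str) -> int:
    """Count the blk-mq hardware queues exposed for a block device."""
    mq_dir = Path("/sys/block") / block_dev / "mq"
    return sum(1 for _ in mq_dir.iterdir()) if mq_dir.is_dir() else 0

if __name__ == "__main__":
    print(f"SATA/AHCI outstanding commands per device: {AHCI_SATA_SLOTS}")
    print(f"NVMe protocol ceiling:                     {NVME_SPEC_CEILING:,}")
    for ns in sorted(Path("/sys/block").glob("nvme*")):
        print(f"{ns.name}: {nvme_hw_queues(ns.name)} hardware queues in use")
```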

NVMe™ is the protocol of today, but it’s also the foundation for next-gen IT infrastructure. New solutions on the market show why it’s time for businesses to take advantage of NVMe and reap the benefits of NVMe-oF™ – here are five reasons.

  1. NVMe Lets You Sprint in Running Spikes

Though flash storage has always been faster than disk drives, its interface has held it back ever since it was first deployed in mainstream data center environments. SAS and SATA were a good starting point because they let SSDs look like the disks they were replacing. But those interfaces were designed for disk and cannot keep up with the performance capability of flash storage. It’s a bit like asking an Olympic sprinter to run in ski boots.

The introduction of the NVMe interface for SSDs was the next step. It is designed for flash with increased bandwidth, efficiency, and parallelism that can exploit NAND’s inherent low latency.

Western Digital’s Ultrastar® DC SN840 data center SSD is a third-generation solution with a vertically integrated in-house NVMe controller, firmware, and 96-layer 3D TLC NAND technology. With low latency and dual-port high availability, it’s a future-ready solution that lets you power new, data-intensive applications.

  2. Extending NVMe Performance Outside the Server

To benefit from the speed of NVMe, SSDs need to sit on the PCIe bus, locally attached and close to the processors. The PCIe bus cannot be extended outside the server, so while each server can be individually accelerated, the result is mini silos of SSDs that cannot be easily shared between hosts.

Enter NVMe-over-Fabrics, or NVMe-oF, the next step in data center infrastructure improvement. NVMe-oF allows NVMe-based storage to be shared between hosts at comparable performance to locally attached SSDs.
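
As a hedged illustration of what that sharing looks like from the host side, the sketch below drives the standard nvme-cli tool from Python to discover and connect to an NVMe/TCP target. The address, port, and NQN are placeholders rather than values from this article, and RDMA transports follow the same pattern with `-t rdma`.

```python
# Minimal sketch of attaching a shared NVMe-oF namespace from a Linux host.
# Assumes the nvme-cli package is installed and an NVMe/TCP target is already
# exported; the address, port, and NQN below are placeholders, not real values.
import subprocess

TRADDR = "192.0.2.10"                        # example target address (placeholder)
TRSVCID = "4420"                             # conventional NVMe/TCP port
NQN = "nqn.2024-01.example:shared-ssd-pool"  # hypothetical subsystem NQN

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Ask the target which subsystems it exports.
run(["nvme", "discover", "-t", "tcp", "-a", TRADDR, "-s", TRSVCID])

# 2. Connect; the remote namespace then appears as a local /dev/nvmeXnY device.
run(["nvme", "connect", "-t", "tcp", "-a", TRADDR, "-s", TRSVCID, "-n", NQN])
```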

  3. Faster or Less Expensive – Pick Two

SSD-based storage platforms (JBOFs) with NVMe-oF technology support significantly more performance-intensive workloads at a lower price. Western Digital is projecting around 17% savings compared to a conventional SAS JBOF, a result of our vertical integration and silicon-to-systems mindset.

Western Digital’s OpenFlex Data24 NVMe-oF Storage Platform extends the high performance of NVMe flash to shared storage. It provides low-latency sharing of NVMe SSDs over a high-performance Ethernet fabric to deliver performance similar to locally attached NVMe SSDs. Its industry-leading connectivity is an in-house design, including the RapidFlex™ NVMe-oF controllers and ASICs.

  4. Easy Adoption with Advanced Connectivity

A JBOF is a common approach to shared storage: multiple servers share the resource, and capacity can be allocated and reallocated according to the needs of the applications. Some ways of doing this are easier than others.

The OpenFlex Data24 is a 2U enclosure that holds up to 24 DC SN840 NVMe SSDs for a raw capacity of up to 368TB. It also contains up to six RapidFlex NVMe-oF controller cards. These cards offer several advantages for connectivity, including ultra-low latency, 100Gb Ethernet (screaming performance you can likely already leverage today), and low power consumption.

Up to six hosts can be directly attached without a switch, but the connectivity increases dramatically with a switched fabric for greater flexibility and maximum utilization.
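
As a rough, back-of-the-envelope sketch (the even split is an assumption for illustration, not a product constraint), dividing a fully populated enclosure across six directly attached hosts looks like this:

```python
# Back-of-the-envelope sketch: splitting a fully populated Data24 between
# directly attached hosts. The figures come from the article (24 SSDs, 368TB
# raw); the even split is an assumption for illustration only.
RAW_TB = 368
DRIVES = 24
DIRECT_HOSTS = 6

per_host_drives = DRIVES // DIRECT_HOSTS
per_host_tb = RAW_TB / DIRECT_HOSTS

print(f"{per_host_drives} drives (~{per_host_tb:.0f}TB raw) per host when split evenly")
# With a switched fabric the same pool can instead be carved up on demand,
# so allocation follows the workload rather than the cabling.
```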

  5. High Composability

Efficiency is the name of the game, and composable disaggregated infrastructure is where the industry is headed. For us, it is part of our broader thinking about how new data infrastructure should look. Not just composable, but also open and interoperable.
