Cloudistics has entered into a strategic partnership with Fungible. The Cloudistics development team is working jointly with Fungible on software to drive the next generation of composable infrastructure. A few members of our team have joined Fungible directly, which will ensure close collaboration and help create leading integrated offerings.

What Goes Beyond Hyperconvergence?

This article argues that there is a significant new approach to delivering IT infrastructure to data centers that we are calling Composable.

1. Convergence Background

The first generation of IT infrastructures were “siloed” architectures (see Figure 1) where compute, storage and networking were selected separately.

Figure 1: 1st Generation – Siloed Architecture

Starting from 2007, a second generation of converged infrastructure or CI systems emerged that bundled existing server, storage and networking products, together with management software. This approach reduced from months to weeks the time it took a customer to run their first application following purchase. In both the siloed and converged architectures, SANs were used for storage. Hyperconverged or HCI systems emerged in 2012 – these were tightly coupled compute and storage hardware that eliminated the SAN. Unlike CI, which is really 3 pre-tested products sold together, HCI (see Figure 2) is truly a single product with unified management. Think of HCI systems as the 3rd generation of convergence.

Figure 2: 3rd Generation – Hyperconverged

According to 451 Research’s latest quarterly Voice of the Enterprise survey of IT buyers, hyperconverged infrastructure is currently in use at 40% of organizations, and 451 Research analysts expect that number to rise substantially over the next two years¹.

This blog asks “what comes after hyperconverged” and suggests that a 4th generation architecture called composable is imminent.

2. SAN limitations lead to Hyperconverged

A storage area network (SAN) is a network that provides access from multiple servers in a data center to shared block storage, such as a disk array. SANs became popular when 1 Gbps FC (Fibre Channel) became available in 1997.

Direct-Attached Storage (DAS) was an alternative available at the time, but it did not support storage sharing, leading to poor storage utilization.

In the last few years, however, the pendulum has started swinging back in favor of DAS due to the following four problems with SANs.

SANs are slow for SSDs. The network protocol overhead of accessing fast storage devices (SSDs) over a SAN dominates the total response time. DAS access times can be anywhere from one half to one third of SAN access times (a back-of-envelope comparison appears after this list of problems).

SANs are expensive. Storage devices in SAN arrays were being priced significantly higher than DAS storage devices.

SANs are proprietary.

SANs are complex to manage. SANs based on the FC standard require unique host bus adapters (HBAs), unique switches, unique cables, and new concepts like zoning and masking. Additional complexity springs from (a) the disconnect between application concepts like virtual machines (VMs) and storage concepts like LUNs and (b) the separate management of compute and storage.
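To put the first problem in rough quantitative terms, the sketch below uses assumed, order-of-magnitude latency figures rather than measurements. The point is that a SAN’s protocol and network overhead is noise next to a spinning disk but comparable to the flash device time itself.

    # Back-of-envelope comparison of DAS vs. SCSI-over-FC SAN access times for an SSD.
    # All latency figures below are illustrative assumptions, not measurements.

    SSD_READ_US = 100        # assumed flash read latency (microseconds)
    HDD_READ_US = 10_000     # assumed spinning-disk read latency, for contrast
    DAS_OVERHEAD_US = 10     # assumed local driver/controller overhead
    SAN_OVERHEAD_US = 150    # assumed SCSI stack + FC HBA + switch round trip

    das_ssd = SSD_READ_US + DAS_OVERHEAD_US    # ~110 us
    san_ssd = SSD_READ_US + SAN_OVERHEAD_US    # ~250 us
    san_hdd = HDD_READ_US + SAN_OVERHEAD_US    # ~10,150 us

    print(f"SSD via DAS: ~{das_ssd} us")
    print(f"SSD via SAN: ~{san_ssd} us (DAS is ~1/{san_ssd / das_ssd:.1f} of the SAN time)")
    print(f"HDD via SAN: ~{san_hdd} us (overhead is only ~{SAN_OVERHEAD_US / HDD_READ_US:.0%} of device time)")

For HDDs the SAN overhead was negligible; for SSDs it dominates, which is exactly why DAS regained its appeal.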

To address these problems, new virtual SAN or vSAN software emerged that eliminates SANs and provides equivalent storage sharing capability.

Hyperconverged or HCI systems use clusters of compute servers with DAS storage, eliminate SANs, and use vSAN software to achieve equivalent functionality. Examples of HCI systems include Nutanix and SimpliVity. HCI systems eliminate SAN management, use a single console for integrated management of servers and storage, and provide app-centric storage management.

3. New developments in storage & SAN protocols

Three recent developments point the way forward.

3.1 NVMe over Fabrics (NVMeoF) replacing SCSI

Small Computer System Interface (SCSI) became a standard for connecting between a host and storage in 1986, when hard disk drives (HDDs) and tape were the primary storage media. NVMe, a newer alternative to SCSI with much lower CPU overhead, is designed for use with faster media, such as solid-state drives (SSDs).

NVMe was originally designed for local use over a computer’s PCIe bus. NVMe over Fabrics (NVMeoF) enables the use of alternate transports that extend the distance over which an NVMe host and an NVMe storage subsystem can connect. With NVMeoF, remote access adds only microseconds of latency, so there is essentially no latency difference between local and remote storage and no meaningful performance difference between DAS and SAN. NVMeoF specifications were released in June 2016.

SAN networks have traditionally used SCSI over FC transport. Future SANs will use the NVMeoF protocol over RDMA transports.
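As a concrete illustration of how NVMeoF storage is consumed on a Linux host, the sketch below drives the standard nvme-cli tool from Python; the target address, port, and subsystem NQN are hypothetical placeholders, and an RDMA-capable NIC plus nvme-cli are assumed to be installed.

    # Illustrative sketch only: attaching a remote NVMe-oF subsystem from a Linux
    # host with the standard nvme-cli tool. The address, port and subsystem NQN
    # below are hypothetical placeholders.
    import subprocess

    TARGET_ADDR = "192.168.10.20"                    # hypothetical storage-node IP
    TARGET_PORT = "4420"                             # conventional NVMe-oF service port
    SUBSYS_NQN = "nqn.2016-06.com.example:subsys1"   # hypothetical subsystem name

    # Ask the target which NVMe subsystems it exports over the RDMA transport.
    subprocess.run(["nvme", "discover", "-t", "rdma",
                    "-a", TARGET_ADDR, "-s", TARGET_PORT], check=True)

    # Connect; the remote namespace then shows up as a local block device
    # (e.g. /dev/nvme1n1) and is used exactly like direct-attached flash.
    subprocess.run(["nvme", "connect", "-t", "rdma", "-n", SUBSYS_NQN,
                    "-a", TARGET_ADDR, "-s", TARGET_PORT], check=True)

    # Confirm the new device is visible.
    subprocess.run(["nvme", "list"], check=True)

Once connected, the remote flash sits behind the same NVMe block interface as a local drive, which is what collapses the DAS-versus-SAN performance distinction.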

3.2 Emergence of Competitive Networked Storage on Standard x86 Servers

In the past, SAN-attached shared block storage controllers were proprietary, built using special motherboards and sometimes even special ASICs, as in the case of 3Par.

Increasingly, shared block storage is built using standard x86 servers and is competitive in cost, performance and scalability with shared block storage built on proprietary hardware. Both scale-out storage controllers and dual-controller RAID arrays can be built this way.

3.3 Emergence of App-centric storage management in networked storage

SAN storage used to provide LUN-centric storage management. New SAN vendors (e.g. Tintri) provide app-centric storage management.
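To make the distinction concrete, here is a small, purely hypothetical sketch (not any vendor’s actual API) of where the management object lives in each model: policies hang off LUNs in the LUN-centric world, and off the application or VM in the app-centric world.

    # Hypothetical illustration of LUN-centric vs. app-centric storage management.
    # These classes are not any vendor's API; they only show where policies attach.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Policy:
        snapshots_per_day: int
        iops_limit: int

    # LUN-centric: the policy attaches to the LUN, every VM placed on that LUN
    # inherits it, and the VM-to-LUN mapping is tracked by hand.
    @dataclass
    class Lun:
        wwn: str
        policy: Policy
        vm_names: List[str] = field(default_factory=list)

    # App-centric: the policy attaches directly to the VM, so snapshots, clones
    # and QoS are set per application with no LUN bookkeeping.
    @dataclass
    class VirtualMachine:
        name: str
        policy: Policy

    gold = Policy(snapshots_per_day=24, iops_limit=10_000)
    db_vm = VirtualMachine(name="orders-db", policy=gold)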

3.4 Summary

Future networked block storage can:

  • be built with standard server hardware
  • have simple application-centric management, and
  • have high performance using the NVMeoF protocol.

4. Next-Gen Storage

Let us re-examine the 4 reasons that caused SANs to lose favor.

  1. DAS access times were one-half to one-third of SAN access times. This is no longer true with NVMeoF.
  2. SAN storage devices are more expensive than DAS storage devices. This is no longer true with use of standard x86 servers.
  3. SAN storage is proprietary. SAN storage built on standard x86 servers is no longer proprietary.
  4. SANs are complex to manage. Using standard x86 servers for storage, virtual networking in place of zoning and masking, and app-centric storage management can eliminate SAN complexity.

To summarize, networked block storage of the future will be based on RDMA Ethernet networks (not FC), use NVMeoF (not SCSI), and use virtual networking to eliminate zoning and LUN masking.

5. Next-Generation Convergence

As we said earlier, hyperconverged or HCI systems emerged in 2012 as the 3rd generation of convergence. HCI combines servers and DAS storage into a single product.

In our view, a 4th generation of convergence will emerge, which we call composable systems, or CS (see Figure 3). CS goes one step further and combines servers, networked block storage and networking into a single product with one management console. There are 2 crucial differences between CS and HCI. First, CS uses networked block storage, not DAS. Second, networking is included as a 1st class citizen in CS rather than being ignored. For this blog, we focus on how storage is integrated in CS and leave network integration for a later blog.

The same RDMA Ethernet network is used both to connect compute servers to each other and to connect compute servers to the x86 server-based networked block storage. The servers in CS that run the storage functions can be configured differently (different RAM, cores, NICs) from the compute servers that run customer apps. And while HCI has app-centric storage management, CS has app-centric storage and network management.


Figure 3: 4th generation – Composable
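As a rough sketch of what “composing” such a system might look like (the classes and fields below are hypothetical illustrations, not a description of any shipping product), compute nodes, storage nodes and the RDMA Ethernet fabric are declared as independent pools under one management domain and scaled separately:

    # Hypothetical sketch of a composable (CS) system: compute, networked block
    # storage and the RDMA Ethernet fabric are independent pools under one
    # management domain. Names and fields are illustrative only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ComputeNode:            # runs customer VMs and apps
        name: str
        cores: int
        ram_gb: int

    @dataclass
    class StorageNode:            # standard x86 server dedicated to storage functions
        name: str
        nvme_drives: int
        ram_gb: int

    @dataclass
    class RdmaFabric:             # one RDMA Ethernet network carries all traffic
        name: str
        speed_gbps: int

    @dataclass
    class ComposableSystem:
        fabric: RdmaFabric
        compute: List[ComputeNode] = field(default_factory=list)
        storage: List[StorageNode] = field(default_factory=list)

        def add_compute(self, node: ComputeNode) -> None:   # scale compute alone
            self.compute.append(node)

        def add_storage(self, node: StorageNode) -> None:   # scale storage alone
            self.storage.append(node)

    cs = ComposableSystem(fabric=RdmaFabric("fabric-1", speed_gbps=100))
    cs.add_compute(ComputeNode("c1", cores=64, ram_gb=512))
    cs.add_storage(StorageNode("s1", nvme_drives=24, ram_gb=128))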

CS systems will be superior to HCI in the following ways.

  1. HCI scales compute and storage in a coupled way — new nodes add both storage and compute resources in varying ratios. CS allows for independent scaling of storage, compute and network resources.
  2. HCI uses Flash caching and data locality to improve performance. Data locality is hard to maintain for workloads with unpredictable access patterns to data. Furthermore, when a workload migrates, HCI will need to migrate data from the original node to the new node. This is unnecessary for CS as storage performance will be the same from any node. Furthermore, host Flash caching is unnecessary in CS as there is no latency difference between local and remote Flash.
  3. Unlike HCI, CS systems can optimize storage performance by using x86 servers that are optimized for running storage functions.
  4. HCI systems provide high availability using either replication (which is expensive relative to RAID) or erasure coding (which needs more CPU and memory than RAID). Composable systems can implement high-performing RAID (a back-of-envelope comparison follows this list).
  5. It is harder to guarantee application performance with HCI, since compute resources needed to run customer apps must compete with compute resources needed to run vSAN software on the same set of nodes, and since performance is dependent on data locality.
  6. CS systems need to rebuild data only when a drive fails, and only that drive needs to be rebuilt. HCI systems need to rebuild whenever a node fails, and all drives on that node have to be rebuilt. HCI rebuilds are therefore both more pervasive (many drives rebuilt versus one drive) and more invasive (impacts app performance more, as it runs on same nodes where the customer apps run).
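To make points 4 and 6 concrete, the sketch below uses assumed drive and node sizes (illustrative only, not measurements of any product) to compare usable capacity and rebuild scope:

    # Back-of-envelope comparison for points 4 and 6 above. Drive and node sizes
    # are illustrative assumptions, not measurements of any specific product.

    DRIVE_TB = 4
    DRIVES_PER_NODE = 10

    # Point 4: usable fraction of raw capacity.
    replication_usable = 1 / 3                                  # assumed 3-way replication
    raid6_usable = (DRIVES_PER_NODE - 2) / DRIVES_PER_NODE      # 8+2 RAID-6

    print(f"3-way replication usable capacity: {replication_usable:.0%}")
    print(f"RAID-6 (8+2) usable capacity     : {raid6_usable:.0%}")

    # Point 6: how much data must be rebuilt after a failure.
    cs_rebuild_tb = DRIVE_TB                     # CS: only the one failed drive
    hci_rebuild_tb = DRIVE_TB * DRIVES_PER_NODE  # HCI: a failed node takes all its drives

    print(f"CS rebuild after a drive failure : {cs_rebuild_tb} TB")
    print(f"HCI rebuild after a node failure : {hci_rebuild_tb} TB")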

6. Summary and Conclusions

We reviewed the historical SAN weaknesses that led to the emergence of HCI. With newer technologies like NVMeoF, software-defined storage, and application-centric storage management, we showed that shared block storage systems of the future would not suffer from these deficiencies.

HCI combines servers and DAS storage into a single 3rd generation product. In our view, a 4th generation of convergence will emerge, which we call composable or CS systems. CS goes one step further than HCI, and combines servers, shared block storage and networking into a single product with one management console. CS can eliminate several limitations of HCI such as coupled scaling, data locality management, inefficient storage redundancy, and so on.

Our conclusion is that CS systems will be preferred in the future. HCI systems will continue to play a role, particularly entry systems for SMB customers, but we may well have seen the peak of the hype cycle on hyperconverged systems.


You may also be interested in the recording of Cloudistics’ presentation at the ActualTech Media MegaCast, in which our VP of Sales Steve Conner and VP of Pre-Sales Engineering and Services Carmelo McCutcheon explore the main capabilities of the Cloudistics on-premises cloud platform.

Dr. Jai Menon

Chief Scientist, IBM Fellow Emeritus

Jai is the Chief Scientist at Cloudistics, which he joined after having served as CTO for multi-billion dollar Systems businesses (Servers, Storage, Networking) at both IBM and Dell.

Jai was an IBM Fellow, IBM’s highest technical honor, and one of the early pioneers who helped create the technology behind what is now a $20B RAID industry. He impacted every significant IBM RAID product between 1990 & 2010, and he co-invented one of the earliest RAID-6 codes in the industry called EVENODD. He was also the leader of the team that created the industry’s first, and still the most successful, storage virtualization product.

When he left IBM, Jai was Chief Technology Officer for Systems Group, responsible for guiding 15,000 developers. In 2012, he joined Dell as VP and CTO for Dell Enterprise Solutions Group. In 2013, he became Head of Research and Chief Research Officer for Dell.

Jai holds 53 patents, has published 82 papers, and is a contributing author to three books on database and storage systems. He is an IEEE Fellow and an IBM Master Inventor, a Distinguished Alumnus of both Indian Institute of Technology, Madras and Ohio State University, and a recipient of the IEEE Wallace McDowell Award and the IEEE Reynold B. Johnson Information Systems Award. He serves on several university, customer and company advisory boards.
