Nimble Storage - Turning Shared Storage on its Head

The storage marketplace has been in a state of evolution for the last 15 years. Technologies such as storage virtualization, deduplication, snapshots and replication have led to products that are more efficient, easier to manage and more flexible than ever before.

One thing, however, has not changed: storage performance depends on the disks you choose and how you configure them. If you need speed, you need more, smaller spindles, which means less capacity. If you need capacity, you need larger disks that are slower, sacrificing speed. There is no practical way to get both. The other constant is that disk technology has not changed in the last ten years and is not likely to change moving forward. Spinning mechanical disks are as fast as they are going to get; they may get bigger, but they are not going to get faster.

Some techniques have eased this conundrum. Vendors have developed new RAID schemes with less performance overhead. Others spread I/O across a large number of disks, so more spindles can read and write data in parallel. Even so, the underlying problem remains: the speed of the disk is not going to improve, and the more digital we become, the more demand we place on our storage systems.
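The spindle-count trade-off above can be put into rough numbers. The sketch below is a back-of-the-envelope model, not vendor data: the per-drive IOPS figures and RAID write penalties (2 back-end I/Os per host write for RAID 1+0, 6 for RAID 6) are common rule-of-thumb values.

```python
def array_random_iops(spindles: int, iops_per_spindle: int,
                      write_penalty: int, write_fraction: float) -> float:
    """Effective host random IOPS for a RAID group (illustrative model).

    write_penalty: back-end I/Os per host write (RAID 1+0 = 2, RAID 6 = 6).
    """
    raw = spindles * iops_per_spindle
    # Each host write costs `write_penalty` back-end I/Os; each read costs one.
    cost_per_host_io = (1 - write_fraction) + write_fraction * write_penalty
    return raw / cost_per_host_io

# Rule-of-thumb drive figures: ~180 random IOPS for a 15K SAS drive,
# ~75 for a 7200-RPM capacity drive. Workload is 50% writes.
fast = array_random_iops(spindles=24, iops_per_spindle=180,
                         write_penalty=2, write_fraction=0.5)  # RAID 1+0
big = array_random_iops(spindles=12, iops_per_spindle=75,
                        write_penalty=6, write_fraction=0.5)   # RAID 6
print(f"24 x 15K RAID 1+0: ~{fast:.0f} host IOPS")
print(f"12 x 7.2K RAID 6:  ~{big:.0f} host IOPS")
```

Doubling down on fast spindles buys an order of magnitude more random IOPS, but at a fraction of the capacity — exactly the trade-off described above.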

Disk storage

In an attempt to fix the issue, new storage technologies have been introduced to the market. SSD and NVRAM are being adopted for their incredibly high performance, but they are far more expensive than spinning disk and much smaller in capacity, even compared to the fastest SAS-based disks. SSDs have also shown reliability and endurance issues and short life spans, especially under heavy random write I/O (roughly 90% of our data is random). This further complicates things.

Flash SSD

Vendors have also created the concept of Hybrid Storage: a combination of some or all of these higher-performing storage technologies with traditional spindle-based storage. The concept promises a best-of-both-worlds scenario, where disk capacity and flash speed are combined. In practice, combining storage technologies has not been as successful as hoped, largely because of the sacrifices made to implement it. For one, Hybrid Storage is processor intensive; moving blocks requires work (as auto-tiering has proven), and storage systems were built to read and write blocks, not move them. In most cases these designs also solve only half the problem by caching reads, in the hope that offloading reads will reduce spindle overhead for writes. In the end, hybrid and caching attempts to this point have proven to be a Band-Aid at best.

Regardless of the solution, there are certain truths that remain the same.

  • Mainstream storage platforms depend on spindles and disk for speed and capacity.
  • Spinning disks write sequential data incredibly fast and random data slowly.
  • Solid-state disks read and write random blocks very fast. They are stable and reliable for reads but unstable and unreliable under heavy writes, and they offer only a negligible performance boost over spinning disk for sequential writes.
  • In most cases, the RAID level that is most efficient for capacity affects performance. The more protection there is (RAID 6), the less random I/O a disk group can deliver.
  • The highest-performing RAID that protects data (RAID 1+0) sacrifices the greatest amount of capacity (50%).
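The capacity side of the last two points is simple arithmetic. The drive count and size below are hypothetical, chosen only to show the gap between dual-parity RAID 6 and mirrored RAID 1+0.

```python
def usable_tb(drives: int, drive_tb: float, raid: str) -> float:
    """Usable capacity for a single RAID group (illustrative)."""
    if raid == "raid10":
        return drives * drive_tb / 2       # mirroring: 50% of raw is lost
    if raid == "raid6":
        return (drives - 2) * drive_tb     # dual parity: two drives' worth lost
    raise ValueError(f"unknown RAID level: {raid}")

# Twelve 2 TB drives (24 TB raw) under each scheme:
print(usable_tb(12, 2.0, "raid10"))  # 12.0 TB usable
print(usable_tb(12, 2.0, "raid6"))   # 20.0 TB usable
```

RAID 6 keeps two-thirds more usable capacity from the same shelf of drives — which is why it is the natural choice when, as described next, performance no longer depends on the RAID layout.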

And that is where Nimble Storage comes in. Nimble was built from the ground up with an understanding of the limitations of existing storage systems, and with the realization that Hybrid Storage truly is the best approach: the highest-performing storage with usable capacity. The differentiator is that Nimble set out to create storage that takes advantage of the strengths of the various components without being limited by their weaknesses.

The result is a file system called CASL (Cache Accelerated Sequential Layout) and a concept that will prove to be the most disruptive technology in storage to date. The core idea is to decouple performance from spinning disk: if performance does not come from the disks themselves, fast performance and high capacity can coexist. To do this, data processing and compression occur inline, before data is written, rather than the other way around.

CASL

With CASL, processor-intensive operations happen before data is written. On Nimble storage, data arrives from the host in its native block size and lands in a write-cache layer of NVRAM (battery-backed DRAM). The write is then committed, compressed and written sequentially to disk. Sequential writes mean the drive head does not have to seek, improving write IOPS by as much as 100x. As data is written to spinning disk, active data and metadata are also written sequentially to SSD, providing a read cache for random reads; from that point forward, active blocks are promoted from spinning disk to the SSD read cache as necessary.

This unique dual-caching design, combined with sequential writes, completely removes the dependency on spinning disk for performance. As a result, 7200-RPM midline SAS disks can provide the highest levels of capacity with dual-parity RAID 6 protection. Performance can be upgraded by adding DRAM and processor without a forklift upgrade or migration to new disk, and capacity can be increased by adding disk without reconfiguring any part of the controller. Furthermore, since performance no longer depends on the disks, management of the SAN is greatly reduced.
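The write path described above can be sketched as a tiny log-structured layout: stage incoming blocks in an NVRAM-like buffer, compress inline, flush one contiguous stripe to disk, and promote hot blocks to an SSD read cache. This is a minimal sketch of the general technique; the class and method names are my own, not Nimble's internals.

```python
import zlib
from typing import Optional

class SequentialLayoutSketch:
    """Toy log-structured write path: NVRAM stage -> sequential disk stripes."""

    def __init__(self, stripe_blocks: int = 8):
        self.nvram = []          # staged (address, compressed data), already acked
        self.disk_log = []       # list of stripes written sequentially to disk
        self.ssd_cache = {}      # hot blocks promoted for fast random reads
        self.stripe_blocks = stripe_blocks

    def write(self, address: int, data: bytes) -> None:
        # Acknowledge from NVRAM; compression happens before anything hits disk.
        self.nvram.append((address, zlib.compress(data)))
        if len(self.nvram) >= self.stripe_blocks:
            self._flush()

    def _flush(self) -> None:
        # All staged blocks go out as one contiguous stripe, so random host
        # writes become a single sequential write with no head seeks between them.
        self.disk_log.append(list(self.nvram))
        self.nvram.clear()

    def read(self, address: int) -> Optional[bytes]:
        for addr, blob in reversed(self.nvram):       # unflushed data first
            if addr == address:
                return zlib.decompress(blob)
        if address in self.ssd_cache:                 # SSD read-cache hit
            return zlib.decompress(self.ssd_cache[address])
        for stripe in reversed(self.disk_log):        # newest stripe wins
            for addr, blob in reversed(stripe):
                if addr == address:
                    self.ssd_cache[address] = blob    # promote the hot block
                    return zlib.decompress(blob)
        return None
```

Note what the sketch leaves out (garbage collection of stale log entries, metadata indexing, cache eviction); it exists only to make the buffer-compress-flush-promote flow concrete.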

To summarize, Nimble Storage is not just disrupting the storage market; it is turning it on its head. By leveraging the best aspects of each storage technology and removing the performance dependency on disk, Nimble has developed a system that starts at 18,000+ IOPS and 16 TB of raw disk in a 3U form factor.

Whether deploying an I/O-demanding solution such as virtual desktops or Microsoft Exchange, or deploying a backup solution for your most critical data, Nimble Storage can reduce the need for multiple racks of storage to just a few rack units.