Why efficiency matters: NetApp versus Isilon for scale-out

Disclaimer: I am a NetApp employee. The views I express are my own.

As the marketplace for scale-out offerings heats up, it’s interesting to see the approaches different companies take with their product development. The two leaders in scale-out NAS & SAN, NetApp and Isilon, take rather different approaches to performance in their scale-out technology. In this post, I will attempt to quantify the result of those differences: how NetApp does more, with less, than a comparable Isilon offering.

In terms of reference material, I’ll be drawing from the SPECsfs2008_nfs.v3 submissions from both NetApp and Isilon. NetApp’s 4-node FAS6240 submission is here, and Isilon’s 28-node S200 submission is here. First things first: I picked the two submissions that most closely matched one another in terms of performance. As you can see from the sfs2008 overview, there are a lot of submissions to choose from. I chose NetApp’s smallest published cluster-mode offering, and then looked for an Isilon submission that was roughly equivalent.

As part of this exercise, I put together list prices based on data taken from here (NetApp) and here (Isilon). I chose list prices because there is no standard discount rate from one vendor, or one customer, to another. If you have an updated list price sheet for either vendor, please let me know. Here are the results:

NetApp

  • 260,388 IOps
  • $1,086,808 list
  • 288 disks for 96TB usable

Isilon

  • 230,782 IOps
  • $1,611,932 list
  • 672 disks for 172TB usable

Doing some basic math, that’s $4.17 per IOp for NetApp and $6.98 per IOp for Isilon.

And Isilon needs more than twice as many disks to reach that roughly equivalent level of performance! That brings us full circle to the point about efficiency. NetApp does more with less disk, which means we get significantly more performance per disk than Isilon does:
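
If you’d like to check the arithmetic yourself, here’s a minimal sketch in Python that derives the per-IOp and per-disk figures purely from the numbers quoted above; the only inputs are the SPEC results, list prices, disk counts, and usable capacities already listed.

```python
# Inputs are the figures quoted above: SPECsfs2008_nfs.v3 results,
# list prices, disk counts, and usable capacities for the two submissions.
netapp = {"iops": 260_388, "list_price": 1_086_808, "disks": 288, "usable_tb": 96}
isilon = {"iops": 230_782, "list_price": 1_611_932, "disks": 672, "usable_tb": 172}

for name, s in (("NetApp", netapp), ("Isilon", isilon)):
    print(f"{name}: ${s['list_price'] / s['iops']:.2f} per IOp, "
          f"{s['iops'] / s['disks']:.0f} IOps per disk")
# NetApp: $4.17 per IOp, 904 IOps per disk
# Isilon: $6.98 per IOp, 343 IOps per disk

print(f"Isilon uses {isilon['disks'] / netapp['disks']:.1f}x the disks and provisions "
      f"{isilon['usable_tb'] / netapp['usable_tb']:.1f}x the usable space")
# Isilon uses 2.3x the disks and provisions 1.8x the usable space
```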

“But wait”, I hear you cry, “NetApp has FlashCache! That’s cheating because you’re driving up the IOps of the disks via cache!” It’s true: the submission did include FlashCache, 2TB in total, with 512GB in each of the four FAS6240 controllers. But Isilon’s submission had solid-state media of its own: 5.6TB in total, from a 200GB SSD in each of the 28 nodes.

“But wait”, I hear you cry, “RAID-DP has all of that overhead! WAFL, snap reserve space; you must be short-stroking to get those numbers!” Wrong again: we’re using more space than the competition is.

To meet roughly the same performance goal, Isilon needed to provision almost twice as much space as the equivalent NetApp offering. That’s hardly storage-efficient. It’s not cost-efficient, either, because you have to buy all of those spindles to reach your performance goal even if you don’t have that much data to run at that speed.

“But wait”, I hear you cry, “those NetApp boxes are huge! They must be chock-full of RAM. And CPUs. And NVRAM too!” True again; each NetApp controller has 48GB of RAM, for a total of 192GB. By contrast, Isilon has 1344GB of RAM across its 28 nodes. Isilon does have slightly less NVRAM (14GB total) than NetApp (16GB total).
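
For a sense of where those totals come from, here’s a quick back-of-the-envelope division using only the totals and node counts quoted above, and assuming the memory is spread evenly across the nodes:

```python
# Derived from the totals quoted above, assuming RAM and NVRAM are
# spread evenly across the nodes/controllers in each submission.
netapp_nodes, isilon_nodes = 4, 28

ram_per_node_netapp = 192 / netapp_nodes    # 48.0 GB per FAS6240 controller
ram_per_node_isilon = 1344 / isilon_nodes   # 48.0 GB per S200 node

nvram_per_node_netapp = 16 / netapp_nodes   # 4.0 GB per controller
nvram_per_node_isilon = 14 / isilon_nodes   # 0.5 GB per node
```

Per node, the RAM works out the same; the difference in the totals is simply that Isilon brought seven times as many nodes to reach a comparable result.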

“But wait”, I hear you cry, “NetApp requires 10Gb Ethernet for that performance!” Yes, and so does Isilon. Let’s look at efficiency not only at the nodes themselves, but also at the load-generating clients:

Although 10Gb Ethernet switch ports are coming down in price, they’re still not particularly cheap. And look at the client throughputs: Isilon struggled to get more than 10,000 IOps from each client, which means you have to scale out your client architecture as well. That, of course, means more money.
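
To put a rough number on that client fan-out, here’s a back-of-the-envelope estimate that uses only the per-client ceiling mentioned above; the exact client counts are in the full SPEC disclosures.

```python
import math

# Rough estimate: if each load generator tops out around 10,000 IOps against
# the Isilon cluster, how many clients does it take to drive 230,782 IOps?
target_iops = 230_782
per_client_ceiling = 10_000

print(math.ceil(target_iops / per_client_ceiling))  # 24 clients, at a minimum
```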

“But wait”, I hear you cry, “NetApp is still going to use more power and space with their big boxes!” Not true. Here are the environmental specs for both:

Isilon did use less rack space (56RU) than NetApp (64RU). The environmental data were taken from here (Isilon) and here (NetApp).

Every single graph pictured above was compiled with data taken only from the three sources listed: the SPECsfs submissions themselves (found via Google), the list price sheets (found via Google), and the environmental data (found via Google). I will gladly provide the .xls file from which I generated the graphs if anyone’s interested.

Thoughts and comments are welcome!