What is your Flash Strategy?

October 4, 2016

It’s been a while since I have posted anything, but we’ve been a bit busy in the Flash market.  Just in case you missed some of the awesome:

NetApp on establishing a new #1 in the “Top Ten” SPC-2 Price-Performance ranking.

NetApp Continues Flash Momentum: Gartner Magic Quadrant

NetApp Takes Top Honors for Customer Implementation, Brand Leadership at Flash Memory Summit 2016

So pats on the back all around!

And now that about ends what I have to say about Flash for this article.  Now let’s talk about flash.

While at NetApp Insight last week I had a chance to meet with several customers and partners at the FlexPod booth (and a big thanks to everyone who stopped by and said hello).  One of the questions that came up a few times was “How Do I Address a Flash Strategy?”.  Well, normally the question was worded more like “flash is the future and I need the future now, how do I future?”

This would lead to questions around Flash and the best ways to implement it, but before we dive down that hole let’s tackle the concept of a “Flash Strategy”.  The best way to figure out how to best use the future is to look at the past.  If we go back to the early 2000s, SATA was a technology moving into most enterprise datacenters as a cost-effective and physically dense alternative to FibreChannel disk, and, dare we say, a good contender to tape.  It was a new tool in the tool chest, but we weren’t running out in droves asking “What’s Our SATA Strategy?”.  I could ask what’s different between then and now and get a slew of answers, from “SATA was slow so no one cared” to “the market wasn’t full of startups” and, in a few cases, “SATA is cheaper than FC so storage companies didn’t want to sell it”.

Those would all be good explanations (except the last, which is in conspiracy theory territory and I don’t have my foil hat on), and we could debate their merits for hours, or we could simply accept that it was a new tool in the tool chest and we didn’t need to make it something it’s not.  If we look at Flash and remove all of the marketing buzz and flashy hype (pun intended) and reduce it to flash, we’re left with a logical evolution in the tool chest.  The screwdriver now has a motor and I don’t need to turn screws by hand anymore.

Flash at its heart is media, and while from a geeky standpoint I find it sexy, it’s not.  It’s a place where you store data for applications.  Now those are two words we should focus on, as applications and data are why infrastructure exists at all (sexy technology is rarely made for its own sake).  Since we have reduced flash to media, what is it providing?

It provides two things at its core:

Consistent IO at predictable latency.
Superior long-term density at lower ownership cost.

In essence it provides reliable media at a lower long-term cost.  Now, from an engineering side we fixate on the other things that the array can do, like make your 1s and 0s smaller, span continents, and speak 10 different languages.  The thing is, the storage industry has already been doing this for years (some of us longer than others); the media just became faster and more reliable.  Evolution of the product has taken the screwdriver and made it a power drill.

So we shouldn’t be sitting around trying to figure out how we are going to reinvent our datacenter to be more “flash-centric” or “flash-dynamic” or just plain flashy.  What we should be asking is “What is my strategy around my data and applications?”  In the end, all of the features we put into arrays of different shapes and sizes are designed to support those two elements.  That is where the focus of the effort should be placed, as once you know what you’re trying to solve, the questions tend to answer themselves.

From a NetApp standpoint there are three major ways to tackle application and data needs.  There are several tools in the tool chest:

Unified Architecture Approach (FAS/AFF) – You have a large swath of applications that use different protocols with different needs.  Maybe you also want to speed up your backup and recovery times or integrate your applications into your backup.  Some might call it the general-purpose storage strategy; I call it the Swiss Army Knife solution.

Purpose-Built Approach (E/EF) – For when high throughput and low latency just need to be faster, or the cost lower.  These systems consistently place on the SPC (http://www.storageperformance.org/results/benchmark_results_spc2_top-ten) top-ten list for performance and cost.  You have an application that is already feature-rich but needs its data to go somewhere quickly and reliably.

Consumption IT (SolidFire) – The newest entry to the list (you can call it a portfolio if you must; I will call it a tool in the tool chest).  When your application needs to be cloud-connected and delivered on a consumption-based model, and you don’t want to worry about how it should be built, there’s no better choice.

In each of these cases we’re addressing the application and data need.  Do we need a strategy on how to flash-ersize it all?  No, we just need to know how best to reliably handle it.

The answer to “What’s My Flash Strategy?” should always be

“Solve my Application Challenges”
