SSDs – it’s all about moving the bottleneck

January 19, 2009

So – SSDs, or “solid state drives” – what’s the big deal?

Well, if you’ve not spent much time learning about these, read the next paragraph.  If you already have, skip ahead – or send me corrections to it.

Solid state drives are basically hard drives with no moving parts.  Instead of a rotating platter and heads that read the strings of zeros and ones stored on the platter’s magnetic coating, an SSD stores its data in solid-state memory – typically NAND flash.  No spinning, no seeking.  Why is this a good idea? Simple – speed.

Believe it or not, even with platters spinning at 15K RPM and heads that move at most an inch or so, it takes a (relatively) long time to get a head from the inside track (the one closest to the spindle) out to the outside track of the platter.  Sure, those itty bitty heads move fast, and sure, that platter is spinning fast as well, but in computer terms it takes an eon to move that head and read that outside track.  This is one of the reasons drive vendors add cache to their drives.
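
To put some rough numbers on “an eon,” here’s a quick back-of-envelope sketch in Python.  The seek time is an assumption (a typical figure for a 15K RPM class drive, not any particular model’s spec), so treat the output as ballpark only.

    # Ballpark latency for one random I/O on a 15K RPM drive.
    # The average seek time below is an assumption, not a spec sheet value.
    rpm = 15000
    rotation_ms = 60.0 / rpm * 1000              # one full rotation: 4 ms
    avg_rotational_latency_ms = rotation_ms / 2  # wait half a turn on average: 2 ms
    avg_seek_ms = 3.5                            # assumed average seek time

    per_random_io_ms = avg_seek_ms + avg_rotational_latency_ms
    print(f"avg time per random I/O: {per_random_io_ms:.1f} ms")
    print(f"random IOPS (roughly):   {1000 / per_random_io_ms:.0f}")

Call it five or six milliseconds per random read – and remember that the CPU asking for the data measures time in nanoseconds.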

The basic idea is simple – if your PC wants to read a given sector off a hard drive, there is a good probability that it will want to read the next sector as well.  To make that second read as fast as possible, the drive doesn’t wait for the system to ask for the data; it goes ahead and reads it anyway, then stores it in the cache on the drive.  Sounds good, and it is – it has worked great for years.  But there’s a catch (surprised?).

Reading the next sector ahead of time and storing it in cache sure does make it faster to read, but what if your PC didn’t want that next sector?  What if it really wanted a sector on a different part of the drive?  Then that pre-read is never used, and the time it took to fetch that data is wasted.  That’s called a “cache miss.”  Cache misses pile up when a PC – or, more to the point, a server or storage array – accesses drives randomly (truly random access defeats the read-ahead cache every time).  When on earth would access be that random?  Databases…
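
Here’s a toy model of that read-ahead cache, just to make the hit/miss behavior concrete.  The cache size and the access patterns are made up for illustration; real drive firmware is far more sophisticated.

    import random

    def hit_rate(accesses, readahead=8):
        """Toy model: after each read, the drive prefetches the next
        `readahead` sectors.  Count how often a request was already cached."""
        cache = set()
        hits = 0
        for sector in accesses:
            if sector in cache:
                hits += 1
            cache = set(range(sector + 1, sector + 1 + readahead))  # prefetch
        return hits / len(accesses)

    sequential = list(range(10_000))
    scattered = [random.randrange(10_000_000) for _ in range(10_000)]

    print(f"sequential access hit rate: {hit_rate(sequential):.0%}")  # ~100%
    print(f"random access hit rate:     {hit_rate(scattered):.0%}")   # ~0%

Sequential reads hit the prefetched data almost every time; scattered reads almost never do.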

In the case of a database (and no, it does not matter which one – MySQL, SQL Server, Oracle, etc.) the data is often accessed randomly.  The database doesn’t know which record will be asked for next, so neither it nor the drive can accurately predict which “next” sector to read into cache.

Dang, now what? Ahhhhh…….enter SSDs.

SSDs, like their name indicates, are solid state devices – they have no moving parts.  No spinning platters, no moving heads, no servo motors.  Why is that a good thing? Remember – when you want to access data randomly from a conventional (rotating) drive, you may have to wait for that head to move from the inside track to the outside.  With an SSD there is no head.  No waiting.  Random access is essentially as fast as sequential access.  How cool is that? Turns out, very cool indeed.  Whether or not you believe the published benchmark numbers, the fact that SSDs are faster at random I/O than even the fastest 15K RPM FC drives can’t be argued.  A single SSD will generally outperform a sizable pile of conventional drives at random I/O, no question.

So – why not just load up every storage array with SSDs and have the fastest storage array on earth?  Turns out, it’s all about moving the bottleneck.  Once you understand where the bottlenecks are, you can make more informed choices about moving them and about designing balance into your systems.

Suppose you have a storage array with, say, slots for a dozen hard drives.  Suppose that array is currently populated with a dozen 15K RPM SAS drives in a single RAID 5 set (whether or not that layout is optimal isn’t relevant for this discussion).  Suppose the array is managed by a PCIe RAID controller in an x4 slot, and connected to the rest of the world via a single 4 gig FC port.  So – where’s the bottleneck for random I/O?  Easy.  I can tell you without even benchmarking the system.  The bottleneck is the drives.  How do I know?  Simple – rotating drives, unless used in massive quantities, are always the bottleneck with random I/O.  It isn’t the controller, it isn’t the controller’s bus, and it isn’t the FC connection.  It is the drives.
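
If you want to see roughly why, here’s the same kind of back-of-envelope math extended to the whole array.  Every number here is an assumption (per-drive IOPS from the sketch earlier, an 8 KB I/O size, nominal link speeds), not a measurement of any real array.

    # Ballpark figures only – assumptions, not measurements.
    drives = 12
    iops_per_15k_drive = 180        # from the seek + rotation sketch above
    io_size_kb = 8                  # assume small random I/Os

    array_iops = drives * iops_per_15k_drive
    array_mb_s = array_iops * io_size_kb / 1024

    fc_4g_mb_s = 4000 / 10          # 4 Gb FC, 8b/10b encoding: ~400 MB/s usable
    pcie_g1_x4_mb_s = 4 * 250       # PCIe gen 1: ~250 MB/s per lane

    print(f"12 x 15K drives, random I/O: ~{array_iops} IOPS, ~{array_mb_s:.0f} MB/s")
    print(f"4 Gb FC link:                ~{fc_4g_mb_s:.0f} MB/s")
    print(f"PCIe gen 1 x4 slot:          ~{pcie_g1_x4_mb_s:.0f} MB/s")

A couple of thousand random IOPS works out to a few tens of megabytes per second – the FC link and the PCIe slot are barely breaking a sweat.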

Now, what happens when you add SSDs to that same array and look for the bottleneck?  Well, it gets interesting, that’s what.

Now you have to start asking some questions:

Q: Can the processor on the RAID card keep pace with the parity calculations if I use SSDs? That is, will the processor on the card be overwhelmed?

Q: If it can, what about the bus the RAID card uses? Is a PCIe gen 1 x4 slot enough?

Q: And if the slot is enough, what about that 4 gig FC connection? (Some rough numbers below.)
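
Here are those rough numbers, with the same caveats as before: the per-SSD figure is an assumption loosely in the range of early enterprise SSDs, not a benchmark of any product.

    # Same exercise with SSDs swapped in – per-drive IOPS is an assumption.
    drives = 12
    iops_per_ssd = 5000             # assumed random read IOPS per SSD
    io_size_kb = 8

    array_iops = drives * iops_per_ssd
    array_mb_s = array_iops * io_size_kb / 1024

    print(f"12 x SSD, random I/O: ~{array_iops} IOPS, ~{array_mb_s:.0f} MB/s")
    print("4 Gb FC link:         ~400 MB/s   <- now the likely bottleneck")
    print("PCIe gen 1 x4 slot:   ~1000 MB/s  <- next in line")

Suddenly a single FC port can’t carry what the drives can deliver, and whether the RAID processor gives out before the links do depends entirely on the card – which is exactly why you have to ask the questions above.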

See? I told you it gets interesting!  I won’t work through every configuration in detail, but as you can see, SSDs can be a real game changer.  Their incredibly high I/O rates and throughput force us to think a bit harder about how we design storage systems.  They force us to ask – where’s the bottleneck?

Next post – some interesting software that exploits the speed of SSDs, but doesn’t require that you ditch all your rotating disks.