Solid-state storage has become ubiquitous in mobile devices and increasingly in laptops as well. But it's also revolutionizing storage in corporate datacenters, cloud computing services and beyond.
Forget the portability factor: solid-state drives, or SSDs, are finding massive growth potential as a way for companies to speed up large-scale storage systems.
Historically, storage devices used in business have been either magnetic tape or hard disk drives, both of which rely on moving parts, consume a lot of energy and throw off a lot of heat (which usually requires expensive cooling).
SSDs, on the other hand, are much faster and more efficient, and generate much less heat - making them ideal for use in datacenters. The only reasons they haven't already become a direct replacement for rotating disks are their relatively high cost per gigabyte and the heavy write volumes required in many datacenter applications.
Although SSD vendors try to engineer around these issues with advanced compression and deduplication techniques - as well as redirecting how and where data is stored - SSDs today essentially provide answers for limited use cases (e.g. small databases) but fall short when it comes to working with large volumes of data. This includes the massive amounts of unstructured data that represent the bulk of storage growth today.
That said, the adoption of solid state storage as a complementary solution to standard hard disk (HDD) storage for enterprise applications continues to grow quickly. Worldwide solid state storage industry revenue reached $5 billion in 2011, a 105% increase from the $2.4 billion in revenue achieved in 2010, according to analysts at research firm IDC. SSD vendors are also giving enterprises and storage companies improved technical options. Some 18 million higher-capacity SSDs (ranging from 80GB to 512GB) will ship in 2013, and that number is expected to grow to 69 million units by 2016, estimate analysts with IHS iSuppli.
But if solid state drives aren’t going to kill off spinning hard drives or magnetic tape, where do all those SSD units fit into the storage ecosystem in efficient and cost-effective ways?
Today's Best Uses For SSD In Storage
Solid state drives are most useful for access to data that is needed fast but not in high volumes. SSD technology is also justifiable in applications where lag time or latency could mean lost dollars - such as in trading platforms and financial systems. These attributes also make SSDs valuable as the first line of access for cloud-based storage offerings. Data that needs to be accessed fast goes on the SSD, while less-critical data can be stored on more traditional slower, less expensive spinning disks. This arrangement lets cloud storage providers leverage SSDs along with other storage technologies to provide a broad range of services at the lowest cost.
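To make that hot/cold split concrete, here is a minimal sketch of a placement policy in Python. The tier names and access-count threshold are hypothetical illustrations, not drawn from any particular vendor's product; real systems use far richer heuristics.

```python
import time

# Hypothetical tier names and threshold for illustration only.
SSD_TIER, HDD_TIER = "ssd", "hdd"
HOT_ACCESS_THRESHOLD = 10        # accesses per hour that mark a block "hot"

class TieringPolicy:
    """Route frequently accessed blocks to SSD, everything else to HDD."""

    def __init__(self):
        self.access_counts = {}  # block_id -> (count, window_start)

    def record_access(self, block_id):
        count, start = self.access_counts.get(block_id, (0, time.time()))
        if time.time() - start > 3600:          # reset the hourly window
            count, start = 0, time.time()
        self.access_counts[block_id] = (count + 1, start)

    def tier_for(self, block_id):
        count, _ = self.access_counts.get(block_id, (0, 0))
        return SSD_TIER if count >= HOT_ACCESS_THRESHOLD else HDD_TIER
```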
“The challenge they are solving is that the IOPS (input/output - or I/O - operations per second) are superior in SSDs but the cost per gigabyte of storage is orders of magnitude higher than rotational hard drives,” explains Sanjay Parekh, co-founder of SolidFire, a solid-state cloud storage startup. “They have developed a layer on top of SSDs that drives the effective storage cost down while maintaining high IOPS.”
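Driving effective cost down, as Parekh describes, comes down to simple arithmetic: data reduction (compression plus deduplication) divides the raw media price. A sketch with illustrative numbers - the ratios are assumptions, not SolidFire figures:

```python
def effective_cost_per_gb(raw_cost_per_gb, compression_ratio, dedup_ratio):
    """Cost per logical gigabyte after data reduction."""
    return raw_cost_per_gb / (compression_ratio * dedup_ratio)

# Illustrative only: $2/GB flash, 2:1 compression, 3:1 deduplication.
print(effective_cost_per_gb(2.00, 2.0, 3.0))   # -> ~$0.33 per logical GB
```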
As prices eventually come down, Parekh notes, SSDs should replace most other storage technologies. But there will always be a role for backup systems like tape and magnetic hard drives. A distributed system using multiple storage technologies is best for long-term data security, he adds, as it isn't wise to put all of your data on one storage medium.
Room To Grow With SSD
For now, using SSD as an acceleration tier on top of traditional storage can provide the best of all storage worlds. Relatively small, high-performance SSD units can be used to speed up all applications by replacing or augmenting memory caches.
“It instantly delivers performance on demand, while enabling hot data to stay on higher-cost SSDs and the majority of data at rest to sit on well-protected traditional disk drives,” says Kirill Malkin, CTO of Starboard Storage. “That way, the SSD subsystem delivers performance efficiency and the disk subsystem optimizes capacity efficiency. Furthermore, using multiple SSD layers with varied performance and capacity characteristics enables highly effective, just-in-time I/O optimization for consolidating multiple applications on a single storage platform with right-sized resources for every workload and use case. Notably, this approach requires a complete rethinking of the storage stack involving dynamic pooling of all resources and implementing performance controls.”
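To make the acceleration-tier idea concrete, here is a minimal sketch of a read cache sitting in front of a slower backing store. The fixed-capacity LRU eviction used here is one common, simplified policy - a stand-in for the dynamic pooling Malkin describes, not his design:

```python
from collections import OrderedDict

class SSDReadCache:
    """Simplified LRU read cache: hot blocks served from the fast tier,
    misses fetched from the slow tier and promoted."""

    def __init__(self, capacity_blocks, backing_store):
        self.capacity = capacity_blocks
        self.backing = backing_store          # any dict-like slow store
        self.cache = OrderedDict()            # stands in for the SSD tier

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # refresh LRU position
            return self.cache[block_id]
        data = self.backing[block_id]         # slow path: rotating disk
        self.cache[block_id] = data           # promote to the fast tier
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data
```

A real SSD cache also has to handle writes (write-through or write-back) and survive restarts, but the promote-on-miss, evict-cold pattern is the core of it.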
When used as a performance tier in a more traditional storage environment, SSD can extend the life of that architecture. But this kind of architecture is inherently inefficient because it involves heavy "shadow" transfers between the solid state and rotating storage devices, he adds. Successful "tuning" of such storage systems requires intimate knowledge of application behavior, making them more difficult to set up and use.
Given these issues, properly sizing the solid state storage tier can be very difficult, often resulting in significant over-provisioning to hit the required performance target, Malkin adds. The SSD tiers in traditional storage architectures are often limited in how they fit in with complex existing storage systems or legacy RAID [redundant array of independent disks] controllers and RAID group management.
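As a back-of-the-envelope illustration of why sizing is hard, the SSD tier has to cover the workload's hot working set plus headroom. The percentages and factor below are assumptions for illustration only:

```python
def ssd_tier_size_gb(total_data_gb, working_set_pct, overprovision_factor):
    """Rough SSD tier sizing: hot working set plus over-provisioning headroom."""
    return total_data_gb * (working_set_pct / 100.0) * overprovision_factor

# Illustrative: 100 TB of data, ~10% hot, 1.5x headroom -> 15 TB of flash.
print(ssd_tier_size_gb(100_000, 10, 1.5))   # -> 15000.0 GB
```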
Beyond SSD - What Does The Future Hold?
But for SSDs to take over the storage ecosystem, manufacturers will have to shrink the technology's price premium compared to traditional storage solutions. Today, an HDD might cost $0.24 per gigabyte, while an SSD could cost $2 or more per gigabyte. That's a hefty difference when you start talking about equipping large datacenters with SSDs.
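At those prices, the gap compounds quickly at scale. A quick calculation using the per-gigabyte figures above (actual market prices vary):

```python
HDD_COST_PER_GB = 0.24   # figures from the article; market prices vary
SSD_COST_PER_GB = 2.00

petabyte_gb = 1_000_000  # 1 PB expressed in gigabytes (decimal)
print(f"HDD: ${HDD_COST_PER_GB * petabyte_gb:,.0f}")  # HDD: $240,000
print(f"SSD: ${SSD_COST_PER_GB * petabyte_gb:,.0f}")  # SSD: $2,000,000
```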
One way vendors have tried to balance price, capacity and performance is with multi-level cell (MLC) SSDs. However, due to their inherently lower endurance, MLC-based SSDs quickly wear out in write-intensive uses. Another approach is TLC, or triple-bit-per-cell, flash SSDs - but the problem then becomes one of physics. Packing more bits into each cell while shrinking components below 14 nanometers makes the flash difficult to manage, and more of a work in progress than an actual solution to today's storage problems.
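The endurance tradeoff can be estimated with a standard back-of-the-envelope formula: rated program/erase (P/E) cycles times capacity, divided by daily write volume and write amplification. The cycle counts and write volume below are typical published figures used purely for illustration:

```python
def drive_lifetime_years(capacity_gb, pe_cycles, daily_writes_gb,
                         write_amplification=2.0):
    """Rough flash lifetime: total rated writes / actual daily writes."""
    total_writes_gb = capacity_gb * pe_cycles / write_amplification
    return total_writes_gb / daily_writes_gb / 365

# Illustrative: a 256 GB drive absorbing 500 GB of writes per day.
print(drive_lifetime_years(256, 3000, 500))   # MLC (~3k cycles): ~2.1 years
print(drive_lifetime_years(256, 1000, 500))   # TLC (~1k cycles): ~0.7 years
```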
"Since SSDs became a viable option for the enterprise performance, cost and reliability have been critical points of evaluation. However, as IT purchasers begin to better understand the technology's strengths and weaknesses, we are seeing more importance placed on the balance between endurance and cost," according to John Scaramuzzo, president of SMART Storage Systems.
As an alternative, many companies are now considering hybrid storage systems, which combine spinning disk and SSD storage media to balance capacity and performance. Large-capacity hybrid storage devices can cost $500 a terabyte, making them a fit for a single high-end system but not necessarily a way to cut costs across a large datacenter storage array.
For the foreseeable future, all types of storage formats will likely coexist, with SSDs added to traditional storage systems as the predominant model. (That's largely because of the large installed base and well-understood architecture of traditional storage systems.) Ultimately though, as SSD prices fall due to Moore's Law, experts expect the industry to gradually migrate to a world where all-SSD systems are used for a continually growing subset of applications. For workloads where SSD's performance improvements can't be financially justified, SSD-accelerated systems will continue to be used until the SSD price premium becomes insignificant.