As discussed in Part 1 of this series, Enterprise SSHD Basics, a solid state hybrid drive (SSHD) for enterprise applications presents a favorable value proposition in the current storage market. By adding a solid state (NAND flash) component to the core architecture of a traditional hard disk drive (HDD), SSHDs give enterprise environments a unique opportunity to improve performance without sacrificing storage capacity. These drives deliver performance levels well above those of traditional HDDs while maintaining overall endurance.
To understand the potential value of SSHDs for enterprise applications, it is helpful to know the performance possibilities that accompany these devices. This paper provides insight into the performance benefits of enterprise SSHDs, the best method of measuring performance at the drive level, and the optimal server configurations for enterprise SSHD solutions.
Enterprise SSHDs prioritize frequently accessed data and use nonvolatile cache (NVC) to enable faster write response times, thus delivering faster access to hot, mission-critical data. In the enterprise market, performance is largely measured by IOPS (input/output operations per second). Using a complementary combination of NAND flash and traditional magnetic media, an enterprise SSHD produces significantly higher IOPS performance than the fastest 15K HDDs while delivering excellent cost-per-GB performance.
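The link between IOPS and response time can be sketched with Little's Law: sustained IOPS equals the number of outstanding I/Os divided by the average completion latency. The latency figures below are illustrative assumptions, not measured specifications of any particular drive:

```python
def iops(outstanding_ios, avg_latency_s):
    """Little's Law for a storage device: sustained IOPS equals the
    number of outstanding I/Os divided by average completion latency."""
    return outstanding_ios / avg_latency_s

# Assumed, illustrative latencies (not vendor figures):
# ~5 ms for a random read serviced by the rotating media,
# ~0.5 ms for a read hit serviced from the NAND flash cache.
print(iops(1, 0.005))   # prints 200.0  -- one outstanding I/O to disk media
print(iops(1, 0.0005))  # prints 2000.0 -- one outstanding I/O served from flash
```

Under these assumed numbers, serving an I/O from flash rather than rotating media raises per-request throughput by an order of magnitude, which is why cache hit rate dominates SSHD performance.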
Because enterprise SSHDs cache data at the I/O level, they can also be helpful in alleviating performance barriers that may occur in a tiered storage solution. An enterprise SSHD addresses delays that can take place as data moves between tiers, acting as a performance-tuning knob.
The most important consideration in estimating drive-level SSHD performance is workload definition. Historically, the most common way of measuring drive-level performance in enterprise applications has been to use a tool such as Iometer to measure pure random reads and writes at different queue depths and transfer lengths, as well as pure sequential reads and writes. Iometer is widely relied on because it is easy to understand and is not tied to any particular drive manufacturer.
Iometer is particularly useful for measuring the mechanical capability of a traditional HDD to locate and move data, such as its random-access performance across queue depths and its sequential throughput.
However, one important factor that Iometer does not measure is caching. Because an enterprise SSHD uses caching to improve performance, the Iometer tool shows little of the performance improvement that is possible with enterprise SSHDs.1 Caching requires locality in the workload in order to improve read performance. In actual workloads, non-sequential data requests are rarely truly random in nature. Database, transaction processing, file serving, email, VDI (virtual desktop infrastructure) and other common applications tend to have a large amount of semantic locality: if data is referenced at one point in time, the same or nearby data is likely to be referenced again in the near future. In other words, accesses usually congregate around data that is important at a particular time.
Typically, Iometer tests do not have locality. The caching feature of enterprise SSHDs improves real-world I/O performance even though these improvements are not evidenced by Iometer measurements.
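The effect can be sketched with a toy simulation (all parameters below are illustrative assumptions, not drive specifications): an LRU cache replaying a uniform, Iometer-style random stream hits only about as often as the cache-to-media size ratio, while a stream with a hot working set, loosely modeling semantic locality, is served largely from cache.

```python
import random
from collections import OrderedDict

def hit_rate(accesses, cache_size):
    """Replay an access stream through a simple LRU cache and
    return the fraction of requests served from cache."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)          # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)     # evict least recently used
    return hits / len(accesses)

random.seed(0)
N_BLOCKS = 100_000   # logical blocks on the media (assumed scale)
N_IOS = 50_000       # length of the access stream
CACHE = 1_000        # blocks that fit in the NAND cache (~1% of media)
HOT = 500            # size of the hot working set in the localized model

# Iometer-style pure random workload: every block equally likely.
uniform = [random.randrange(N_BLOCKS) for _ in range(N_IOS)]

# Simple 80/20 hot-set model of semantic locality: most I/Os
# revisit a small set of currently important blocks.
localized = [random.randrange(HOT) if random.random() < 0.8
             else random.randrange(N_BLOCKS) for _ in range(N_IOS)]

u = hit_rate(uniform, CACHE)    # roughly CACHE / N_BLOCKS, ~1%
h = hit_rate(localized, CACHE)  # most I/Os served from cache
print(f"uniform hit rate:   {u:.1%}")
print(f"localized hit rate: {h:.1%}")
```

A uniform random test therefore reports almost none of the caching benefit, even though the same cache serves the bulk of a workload with realistic locality.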
A key challenge with enterprise SSHDs is to identify benchmarks that represent real workloads. One promising tool is the SPC Benchmark 1C. This is a standardized benchmark of the Storage Performance Council for measuring performance of storage subsystems. A particular version of the benchmark tests individual drives specifically. It was created to simulate real-world enterprise workloads and, consequently, has a wide variety of I/O patterns, as well as an adequate level of locality.
SPC-1C Benchmark modeling of enterprise SSHDs shows improvements that approach those observed under actual workloads with these drives. By using the performance measurements of the SPC-1C Benchmark (or similar performance tests with locality), customers can see the positive value that SSHDs bring to the enterprise space. The SPC-1C Benchmark sufficiently represents the advantages of enterprise SSHDs and is independently administered and maintained, thereby dispelling questions as to its integrity.
Seagate recently submitted SPC Benchmark 1C audit reports to verify the performance benefit of an enterprise SSHD compared to a base HDD. The test results within the reports reinforce the performance levels displayed in Figure 1, which shows that the SPC-1C Benchmark performance of the enterprise SSHD was close to twice that of the HDD.2
An enterprise SSHD can be used in two principal server configurations: direct-attached storage (DAS) and storage area networks (SANs). Various factors make these drives more advantageous in some configurations than in others.
When considering direct attachment to a server, some argue that the right solution is host-based software that optimizes data placement across solid state and magnetic media, using two separate device types: SSD and HDD.
Having the SSD directly attached is appealing because minimizing latency is the main purpose of an SSD, and attaching it directly to the host avoids the SAN delays in accessing data. In fact, there are several approaches that take this to the limit. PCIe SSDs, PACs (Performance Accelerator Cards), and SSDs natively attached via integrated SATA or SAS host interfaces get the storage as close to the host software stack as possible (Figure 2).
Companies developing this capability can tout at least one clear advantage: a host-based data monitor has a keen understanding of which data should be located in solid state storage. A storage administrator can use this information to place on the SSD exactly those files that are most important to their applications. This type of configuration could be considered a challenge to the benefits of an enterprise SSHD in the market and in system architectures. However, file-level granularity is often not fine enough for optimal results.
An enterprise SSHD brings an economical solid state enhancement to server configurations. With both the HDD and the NAND flash component occupying the same drive slot and controller electronics, the user realizes much of the solid state benefit at a minimal resource expense. This occurs with no CPU-consuming overhead and zero data movement between devices needed to accomplish optimization. Therefore, the enterprise SSHD looks quite attractive for configurations where cost savings is a top priority.
Also, if some data has high write-activity levels and a solid state read cache exists, it may be desirable to hold such data on the SSHD, where the NVC can accelerate writes without stressing the write endurance of the solid state media.
For a number of reasons, the SSHD architecture looks even more promising in SANs. First, since storage sits on a network accessible by any system, a particular server has knowledge only of its contribution to the overall workload, not of all the activity on the shared drive. In a two-drive solution, a single system that tries to make data location decisions—moving data from HDDs to SSDs to optimize the performance of its current workload—might make the wrong decisions for other applications using the same storage. The advantage of the enterprise SSHD lies in the fact that when several systems are accessing it, it can optimize the workload with an understanding of the global I/O demands. Note that the systems do not have to be accessing the same data for this to apply. If two array controllers are accessing different LUNs (Logical Unit Numbers) where each includes space on the same device, the SSHD advantage applies (Figure 3).
Second, because write caching on an SSHD is at the drive level where the data is stored, there is no cache coherency overhead to maintain a valid copy of the data. As we saw with DAS, there are several architectural approaches that put solid state storage as close as possible to the processor. This, however, poses a special problem in a networked storage environment. If any of the data was also being updated—as regular updates are common for performance-critical information—extensive intersystem coherency reconciliation would be required, or users could not be sure that they were seeing the latest version of the data.
If write caching is at another location—in a host, for instance—then a particular function must always check to ensure that the current copy of the data is the latest version. As the number of elements in the storage network increases, so does the overhead. However, the enterprise SSHD’s ability to accelerate writes via its write caching can overcome this issue.
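A back-of-the-envelope model makes the scaling visible. This is a deliberate simplification assuming a broadcast-invalidate scheme (real coherency protocols vary): host-side caching generates traffic proportional to the number of other caching elements on every write, while a drive-resident cache, co-located with the data, generates none.

```python
def host_cache_coherency_msgs(writes, caching_hosts):
    """Host-side write caching: each write must invalidate the
    possibly stale copies held by every OTHER caching host.
    Simplified broadcast-invalidate model; real protocols vary."""
    return writes * max(caching_hosts - 1, 0)

def drive_cache_coherency_msgs(writes):
    """Drive-level NVC: the cache lives with the stored data, so
    there are no remote copies to invalidate."""
    return 0

# Coherency traffic for 10,000 writes as the storage network grows:
for hosts in (2, 4, 8):
    print(hosts, "hosts:",
          host_cache_coherency_msgs(10_000, hosts), "messages vs.",
          drive_cache_coherency_msgs(10_000), "at the drive")
```

Even in this crude model, doubling the number of caching elements doubles the host-side coherency traffic, while the drive-level cache stays constant at zero.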
Furthermore, a networked SSHD is located at the convergence of system I/O activity, and it knows the combined workload imposed on it. These advantages give the SSHD a special opportunity to improve performance in a way unavailable to any other element in a SAN environment. It is likely that enterprise SSHDs will find their clearest advantage in networked storage.
With an ideal combination of traditional storage and solid state, as well as intelligent caching capabilities, SSHDs offer compelling performance enhancements for enterprise environments. The SPC Benchmark 1C has made it possible to measure and verify these performance improvements at the drive level. What’s more, the enterprise SSHD delivers an integrated architecture that lends it special advantages in both DAS and SAN configurations.
Read The Value of an Enterprise SSHD: Part 3, Tiered Storage to understand how SSHDs can be a valuable addition to tiered solutions. Read these Storage Performance Council Executive Summaries: SPC-1C, IBM 600GB 10K 6Gbps SAS 2.5-Inch G2HS Hybrid (C00016) and SPC-1C, IBM 600GB 10K 6Gbps SAS 2.5-Inch SFF Slim-HS HDD (C00015).
For the last article in the series, please review The Value of an Enterprise SSHD: Part 4, Reliability and Overall Value Proposition.