Optimizing energy usage through Seagate® PowerChoice™ technology
As new technology continues to evolve, businesses will face a growing need to carefully analyze deployment costs. Achieving the level of analysis required to lower overall costs demands deeper expertise across the entire cloud environment, from software applications down to the hardware. Numerous factors drive a data center's total cost of ownership (TCO), including cooling costs and how often the storage devices are utilized. Power usage accounts for a significant portion of data center TCO, and its share will likely grow as performance and capacity demands increase.
Data center power consumption has been rising over the past several years. As a recent New York Times article by James Glanz titled “The Cloud Factories: Power, Pollution and the Internet” noted, U.S. data centers consumed a total of 76 billion kilowatt-hours of power in 2010, amounting to approximately 2% of all the energy used in the country. Despite growing capacity demands, data center operators are tasked with lowering energy consumption without sacrificing performance.
Optimization within the framework of a complex company data center requires dynamic technology. As storage capacity demands increase due to virtualization and growing volumes of data, managing power usage in an intelligent way is critical for reducing TCO. Seagate PowerChoice technology—specifically developed for enterprise environments—offers more energy efficiency and control over the amount of power hard drives consume.
PowerChoice technology improves on earlier power-management technology by saving energy during periods of command inactivity. As idle time increases, so do the power savings, yet drives still respond quickly to commands even after long idle periods. In addition, PowerChoice technology supports four customizable modes that give businesses significantly more control over power usage, enabling reductions of up to 54%.
For further context, a 2011 Ars Technica article by Casey Johnston titled “Ask Ars: are ‘green’ hard drives really all that green?” found that a 1TB hard disk drive that runs at 3Gb/s with a 32MB cache consumes an average of 8.4W. For a configuration running 1000 such hard drives 24 hours per day, this translates to approximately 201.6kWh of electricity used per day, or 73,584kWh per year. According to the U.S. Bureau of Labor Statistics, the average price per kWh in the United States is $0.135, so running 1000 hard drives would cost approximately US$9,933 per year. PowerChoice technology could cut this configuration’s usage by up to 39,735kWh per year (a 54% reduction), saving roughly US$5,364 annually. The actual savings to a data center manager are likely to be much greater still, given the higher-capacity, higher-performance hard drives found in today’s data centers. Furthermore, as data centers expand to include even more storage, the value of efficient technology increases dramatically.
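The arithmetic above can be checked with a short script. This is an illustrative sketch using the article's cited figures (8.4W per drive, 1000 drives, US$0.135 per kWh, a 54% energy reduction); actual drive power draw and electricity rates will vary by deployment.

```python
# Illustrative data center drive-power cost model using the figures
# cited above. All inputs are assumptions taken from the article.
WATTS_PER_DRIVE = 8.4        # average draw of the example 1TB drive
NUM_DRIVES = 1000
HOURS_PER_DAY = 24
DAYS_PER_YEAR = 365
USD_PER_KWH = 0.135          # average U.S. price per kWh
POWERCHOICE_REDUCTION = 0.54 # up to 54% less energy

daily_kwh = WATTS_PER_DRIVE * NUM_DRIVES * HOURS_PER_DAY / 1000
annual_kwh = daily_kwh * DAYS_PER_YEAR
annual_cost_usd = annual_kwh * USD_PER_KWH

kwh_saved = annual_kwh * POWERCHOICE_REDUCTION
usd_saved = annual_cost_usd * POWERCHOICE_REDUCTION

print(f"Daily usage:    {daily_kwh:.1f} kWh")       # 201.6 kWh
print(f"Annual usage:   {annual_kwh:,.0f} kWh")     # 73,584 kWh
print(f"Annual cost:    ${annual_cost_usd:,.2f}")   # $9,933.84
print(f"kWh saved/year: {kwh_saved:,.0f}")          # 39,735
print(f"Cost saved/yr:  ${usd_saved:,.2f}")         # $5,364.27
```

Note that the savings scale linearly with drive count, hours of operation, and the local electricity rate, so any of the constants can be swapped for a specific deployment's numbers.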
These numbers are, in fact, in line with the power usage one expert estimated for Facebook's servers: each Facebook server is projected to use around 300W of electricity, and the company is thought to manage more than 180,000 machines. Average power usage at other data centers is likely higher still, since Facebook places an even greater premium on efficiency than most operators.
Carrying those savings across the total number of drives likely to be found in today's data centers yields even more impressive results. By some estimates, a large data center like the kind run by Facebook hosts as many as 100,000 hard drives, according to Sebastian Anthony’s article “How big is the cloud?” on ExtremeTech.com. Even a data center with just 10% of Facebook's total capacity could potentially see tens of thousands of dollars in annual savings.
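To make the scale-up concrete, the per-drive savings implied by the earlier example (roughly US$5,364 per 1000 drives per year, i.e. about US$5.36 per drive per year) can be extrapolated to larger fleets. The fleet sizes below are illustrative assumptions, not measured figures.

```python
# Extrapolate the earlier example's savings (~US$5,364 per 1,000 drives
# per year at $0.135/kWh) to larger drive fleets. Illustrative only.
SAVINGS_PER_DRIVE_YEAR = 5364.27 / 1000  # ~US$5.36 per drive per year

# 10,000 drives ~= 10% of a Facebook-scale fleet; 100,000 ~= full scale.
for drives in (1_000, 10_000, 100_000):
    annual_savings = drives * SAVINGS_PER_DRIVE_YEAR
    print(f"{drives:>7,} drives: ~${annual_savings:,.0f} saved per year")
```

At 10,000 drives the model lands in the tens of thousands of dollars per year, consistent with the estimate above, and at Facebook scale it exceeds half a million dollars annually.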