As new technologies continue to evolve, businesses will need to analyse deployment costs more carefully. Achieving the level of analysis required to lower overall costs means building expertise across the entire cloud environment, from software applications down to the hardware. Numerous factors contribute to a data centre’s total cost of ownership (TCO), including cooling costs and how heavily storage devices are utilised. Power usage already accounts for a significant share of data centre TCO, and will likely account for more as performance and capacity demands increase.
Data centre power consumption has been rising for several years. As James Glanz noted in a recent New York Times article, “The Cloud Factories: Power, Pollution and the Internet”, US data centres consumed a total of 76 billion kilowatt-hours of power in 2010, approximately 2% of all the electricity used in the country. Despite growing capacity demands, data centre operators are tasked with lowering energy consumption without sacrificing performance.
Optimisation within a complex company data centre requires dynamic technology. As storage capacity demands increase due to virtualisation and growing volumes of data, managing power usage intelligently is critical for reducing TCO. Seagate PowerChoice technology—developed specifically for enterprise environments—offers greater energy efficiency and finer control over how much power hard drives consume.
PowerChoice technology improves on earlier power-management technology by enabling energy saving during periods of command inactivity, allowing for greater power reductions. As idle time increases, so do the power savings, yet drives still respond quickly to commands even after long idle periods. In addition, PowerChoice technology supports four customisable modes, giving businesses significantly more control over power usage and allowing for up to a 54% reduction in the amount of energy used.
For further context, a 2011 Ars Technica article by Casey Johnston entitled “Ask Ars: are ‘green’ hard drives really all that green?” found that a 1TB hard disk drive that runs at 3Gb/s with a 32MB cache consumes an average of 8.4W. For a configuration running 1,000 such hard drives 24 hours per day, this translates to approximately 201.6kWh of electricity used per day—or 73,584kWh per year. According to US Bureau of Labor Statistics figures, the average price per kWh in the United States is $0.135. This means running 1,000 hard drives would cost approximately US$9,933 per year. A 54% reduction from PowerChoice technology, however, would cut this configuration’s usage to approximately 33,849kWh per year, bringing the annual cost down to about US$4,570. But the actual savings for a data centre manager are likely to be much greater, considering the much higher capacity and performance of the hard drives found in today’s data centres. Furthermore, as data centres expand to include even more storage, the value of efficient technology increases dramatically.
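The cost arithmetic above can be reproduced in a few lines, applying the up-to-54% figure as a reduction from the baseline (the drive count, wattage, and electricity rate are the example values from the text):

```python
# Worked example of the annual power-cost arithmetic.
DRIVES = 1_000          # example fleet size from the article
WATTS_PER_DRIVE = 8.4   # average draw of the 1TB drive cited by Ars Technica
PRICE_PER_KWH = 0.135   # US average rate cited from the Bureau of Labor Statistics
REDUCTION = 0.54        # PowerChoice's claimed maximum energy reduction

daily_kwh = DRIVES * WATTS_PER_DRIVE * 24 / 1000   # 201.6 kWh/day
annual_kwh = daily_kwh * 365                       # 73,584 kWh/year
baseline_cost = annual_kwh * PRICE_PER_KWH         # ≈ $9,933.84/year

reduced_kwh = annual_kwh * (1 - REDUCTION)         # ≈ 33,848.6 kWh/year
reduced_cost = reduced_kwh * PRICE_PER_KWH         # ≈ $4,569.57/year

print(f"baseline: {annual_kwh:,.0f} kWh -> ${baseline_cost:,.2f}")
print(f"with PowerChoice: {reduced_kwh:,.1f} kWh -> ${reduced_cost:,.2f}")
```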
These figures are, in fact, in line with one expert’s estimate of power usage for Facebook’s servers. The projection puts each Facebook server at around 300W of electricity, and the company is thought to manage more than 180,000 machines. Average power usage at other data centres is likely to be higher, since Facebook places a greater premium on efficiency than most operators.
Carrying those savings across the number of drives in today’s data centres yields even more impressive results. According to Sebastian Anthony’s article “How big is the cloud?” on ExtremeTech.com, a large data centre of the kind run by Facebook may host as many as 100,000 hard drives. But even a data centre at 10% of that capacity could see tens of thousands of dollars in annual savings.
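The scaling can be sketched by extrapolating the per-drive saving from the earlier example (8.4W drives, US$0.135/kWh, a 54% reduction) to the fleet sizes mentioned here:

```python
# Extrapolate the per-drive annual saving to larger fleets.
WATTS_PER_DRIVE = 8.4   # example drive from the earlier calculation
PRICE_PER_KWH = 0.135   # US average rate from the earlier calculation
REDUCTION = 0.54        # PowerChoice's claimed maximum energy reduction

kwh_per_drive_year = WATTS_PER_DRIVE * 24 * 365 / 1000          # 73.584 kWh
saving_per_drive = kwh_per_drive_year * REDUCTION * PRICE_PER_KWH  # ≈ $5.36/drive/year

# 10% and 100% of the ~100,000-drive Facebook-scale estimate
for drives in (10_000, 100_000):
    print(f"{drives:,} drives -> ≈ ${drives * saving_per_drive:,.0f}/year saved")
```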