CapEx vs. OpEx: New Opportunities for Data Transfer and Edge Storage
Learn about the CapEx vs OpEx approaches to storage and how each affects the costs of data storage, management, transfer, and utilization.
The world of storage is in radical flux. Organizations are evaluating new approaches to cope with the rate of change, and new approaches exist for optimizing data transfer and edge storage for greater performance, resilience, and price advantages. Whether we’re talking about petabytes of data transfer, or small storage devices used at the edge, new opportunities are emerging that will give organizations new ways to control their costs.
Setting the Stage
Data locality is shifting. In the past, organizations kept the bulk of their data in on-premises data centers or in a single cloud. Today, organizations have data sets on multiple storage platforms in different locations, in more than one cloud, and at the edge. By 2025, IDC predicts 12.6ZB of installed capacity—HDD, flash, tape, optical—will be managed by enterprises. Cloud service providers will manage 51% of this capacity, which means the remaining 49% will be managed by enterprises at the edge and in the core.
Data movement is also increasing. On average, organizations now periodically transfer about 36% of their data from edge to core, with volumes doubling over two years. Gartner predicts that by 2025, 75% of enterprise-generated data will be created at the edge, and much of that will be moved from core to cloud.
Finally, managing and leveraging all this data in a cost-effective way is becoming a top priority. Organizations are already seeing their storage budgets exceed expectations, and cloud cost overruns are becoming a new normal. In 2020, 42.6% of organizations ended the year overrunning their cloud budgets. Organizations are trying to cope by transitioning to as-a-service offerings that give them more granular control over their costs.
Data is a critical asset, and it needs to be used efficiently, not just by services but also by organizations pursuing data-driven operational effectiveness, ROI, and new investment opportunities. The need for solutions that support these requirements is particularly pronounced where edge storage and data transfer are concerned, with data moving from endpoint to edge to core and cloud, and potentially back again. Unprecedented, unpredictable data growth, accelerated shifts in data locality and movement, and mounting pressure for cost-controlled data management and utilization all mean that old approaches to storage can’t keep up with the need to use data efficiently, and new ideas are needed.
Leading organizations recognize the value of data as an asset. They also recognize how data management inefficiencies can block or limit that value. Some organizations have tried to tackle these inefficiencies by looking at storage holistically, exploring limits, opportunities, and options across all storage types rather than having one team look at primary storage, another at edge storage, and so on. This approach, often referred to as StorageOps, gives organizations a clearer view of the advantages, limitations, and trade-offs across their entire storage estate.
Others adopt a comprehensive, end-to-end approach to improving the speed and agility of data pipelines, often referred to as DataOps. DataOps improves the accessibility, availability, and integration of data while limiting cost overruns. Rooted in agile methodologies, it aims to keep every aspect of data, from ingest to protection to movement to utilization, as optimized and performant as possible.
Regardless of approach, every organization needs to manage the costs of data storage, management, transfer, and utilization. That’s why understanding the CapEx vs OpEx approach to storage matters.
As you probably already know, many organizations seek to manage their storage budgets more effectively by shifting from a CapEx model to an OpEx model for storage acquisition. To put it more simply, when confronted with the need for storage, organizations have two options:
Option 1: CapEx. Buy and own the storage infrastructure. This approach can often lock organizations into substantial up-front cost, depreciation over three to five years, management costs, responsibility for repair and maintenance, the need to add drive expansion over time as data sets grow, and sunk-cost problems if a platform doesn’t provide the feature set, scale, or performance needed over time. It can reduce available cash, force organizations into ongoing, sometimes unpredictable maintenance and upgrade expenses, and result in technological lock-in.
Option 2: OpEx. Subscribe to storage as a service. Here, organizations manage an ongoing bill from a service provider (a public cloud) or providers (multiple public clouds). These bills are usually tied to capacity or the number of objects under management. The service provider is responsible for repair and maintenance. Adding capacity is seamless and simply increases the monthly bill. In some countries, OpEx procurement offers tax advantages. Finally, in theory, if the service isn’t working well for you, it’s easier to migrate to another service without worrying about sunk costs or decommissioning an on-premises storage system.
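The trade-off between the two models can be made concrete with a simple cost sketch. All figures below (hardware price, maintenance rate, per-TB service fee) are illustrative assumptions, not quoted prices from any vendor:

```python
# Illustrative CapEx vs. OpEx comparison; every number here is an assumption.

def capex_total(hardware_cost, annual_maintenance_rate, years):
    """Up-front purchase plus ongoing maintenance over the service life."""
    return hardware_cost + hardware_cost * annual_maintenance_rate * years

def opex_total(tb_stored, price_per_tb_month, months):
    """Pay-as-you-go: a monthly fee tied to capacity under management."""
    return tb_stored * price_per_tb_month * months

years = 4  # within the typical three-to-five-year depreciation window
capex = capex_total(hardware_cost=120_000, annual_maintenance_rate=0.12, years=years)
opex = opex_total(tb_stored=500, price_per_tb_month=20, months=years * 12)

print(f"CapEx over {years} years: ${capex:,.0f}")
print(f"OpEx over {years} years:  ${opex:,.0f}")
```

Which model comes out ahead depends entirely on the inputs: capacity growth, utilization, and how long the hardware stays useful. The point of the sketch is that neither model is cheaper by definition; predictability and flexibility are what differ.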
Increasingly, organizations treat storage as an operational expense. Relying on a cloud provider or providers to provide a turnkey storage-as-a-service platform simplifies access and gives StorageOps or DataOps teams a way to add new services without substantial upfront costs. We’re increasingly seeing the shift from CapEx to OpEx for all kinds of storage, including mass data transfer and edge storage. An OpEx model for storage can make a lot of sense—that is, unless costs start to become unpredictable and expenses mount.
Pricing 1.0: Traditional OpEx Storage
The reality of storage-as-a-service is that it’s often complicated by a confusing and opaque cost model, and this is especially true for edge storage and data migration capabilities provided by leading cloud vendors.
Examining the pricing page for a leading cloud provider, here’s what you’ll see for one data center region on a single cloud:
In other words, it’s a pricing model that’s almost impossible to understand and incredibly difficult to manage. As data sets grow, as needs change, costs become more and more unpredictable, can spiral out of control, and result in budget overruns and executive frustration.
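One source of that unpredictability is tiered per-GB pricing, where each slice of capacity is billed at a different rate. The sketch below uses a hypothetical tier table; the tier boundaries and rates are invented for illustration and do not reflect any real provider’s price list:

```python
# Hypothetical tiered storage pricing in $ per GB-month; tiers and rates are
# invented for illustration, not taken from any real provider.
TIERS = [
    (50_000, 0.023),        # first 50 TB
    (450_000, 0.022),       # next 450 TB
    (float("inf"), 0.021),  # everything beyond 500 TB
]

def monthly_storage_cost(gb):
    """Walk the tier table, charging each slice of capacity at its own rate."""
    cost, remaining = 0.0, gb
    for tier_size, rate in TIERS:
        slice_gb = min(remaining, tier_size)
        cost += slice_gb * rate
        remaining -= slice_gb
        if remaining <= 0:
            break
    return cost

print(monthly_storage_cost(10_000))   # 10 TB, all in the first tier
print(monthly_storage_cost(100_000))  # 100 TB, spanning two tiers
```

And capacity tiers are only one dimension: real bills also layer on request fees, retrieval fees, and egress charges, each with its own rate table, which is why forecasting next quarter’s storage bill becomes so difficult.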
For this same cloud provider’s edge storage portfolio, made up of several devices used for edge aggregation, automated data transfer, and even physical data shipments, the pricing models are just as complex. They offer:
To make matters worse, their edge storage portfolio is a locked ecosystem: it only works with their cloud. You end up with CapEx-style lock-in; you’re just paying for it on a monthly basis instead of as an up-front purchase.
In a hybrid, multi-cloud world with edge infrastructure, data is spread across dozens, hundreds, or even thousands of locations. These cost complexities and tightly guarded ecosystems undermine everything that storage or data teams are trying to do. You can see how easy it would be to end up with cost overruns and budget challenges with an OpEx model that’s so convoluted and unpredictable.
StorageOps and DataOps teams crave simplicity. As we’ve established, there’s a rapidly growing need for edge storage and data transfer capabilities, but pricing models and ecosystem limits are going to restrict adoption.
There’s an opportunity for a vendor to create a pricing model which, for the sake of this paper, we can call OpEx 2.0. It’s a model that recognizes the needs of organizations for simplicity, clarity, and openness. It would be characterized by all the OpEx advantages (pay-as-you-go, no upfront costs, no maintenance, no sunk costs) but would also offer streamlined pricing and an open ecosystem. Organizations could pay for what they need, when they need it, with a model that’s simple and transparent.
Some vendors have worked to deliver an OpEx 2.0 pricing model for their edge storage and data transfer services for enterprises. The idea is to offer a vendor-agnostic, multicloud edge storage solution with clear pricing and no lock-in. Such solutions enable businesses to aggregate, store, move, and activate their data without the limitations of a closed ecosystem or cost overruns. An ideal solution offers a self-managed subscription service that allows organizations to increase or decrease their storage requirements, on the fly, without penalty or complexity. Whether they need data transfer and edge storage on a month-to-month basis or annually, the right vendor could deliver a wide range of devices with the features needed for success.
The right service would include, at one low price:
At Seagate, we deliver what you need. With our edge storage and data transfer portfolio, you can build your edge storage or data transfer infrastructure on an OpEx 2.0 model that sidesteps all the unnecessary complexities of others’ OpEx models.
Enterprise customers are increasingly considering “pay-as-you-go” subscription-based storage models to accommodate physical data transfer from edge to cloud. Spending on public cloud services alone will more than double from $217 billion in 2021 to $524 billion in 2025, according to IDC’s Semiannual Public Cloud Services Tracker (2021H1, November 2021), with storage infrastructure as a service (IaaS) comprising $93 billion of the total. In addition, a Gartner survey, Competitive Landscape: Consumption-Based Pricing for On-Premises Infrastructures (October 2020), found that 53 percent of companies agreed that acquiring technology via subscriptions is now their preferred model.
IT departments are also more apt to consider the physical movement of data as integral to their overall data strategies. Competing data transfer solutions exist, but most do not offer the same level of multicloud integration that Lyve Mobile can provide. Also, many enterprises are moving to cloud-based “as-a-service” data solutions rather than adding more on-premises hardware. Lyve Mobile was designed to be complementary to a wide array of such services, including those involving cloud storage and compute capabilities.
Customers in industries such as media and entertainment, geoscience, and healthcare, which generate massive sets of unstructured data, rely on the flexibility and affordability that subscription models allow. Paying only for the hardware, software, and services you need keeps costs down while assuring customers that their data infrastructure supports their operations, from endpoint to cloud.
Benefits of subscription-based storage include:
Using the Lyve Mobile OpEx-based subscription model, partners can address multiple needs for ongoing transfers and standard lift-and-shift applications while reducing the overall operating costs and TCO for their customers.
Leveraging flexible payment models allows Lyve Mobile customers to scale projects up or down on a month-to-month basis, enabling them to maximize their data transfer services while avoiding the cost of under-utilized assets that typically accrues in a traditional purchase model. This strategy also helps partners simplify device management in the field, quickly refreshing or swapping out technology as project requirements change or equipment ages, without the need to disposition end-of-life assets.
The Lyve Mobile edge storage and data transfer as-a-service family can help your organization avoid the limitations of OpEx 1.0 with an open platform that comes with clear-cut pricing.
To learn more about Lyve Mobile visit: https://www.seagate.com/products/data-transport.