4 Old Ideas for Lowering Storage Costs in Today’s Cloud Era

1. In the 1980s: Competitive pricing worked.

The first method for driving down storage costs emerged in the 1980s. In the beginning IBM was king, and during its reign it could set prices as it pleased. When plug-compatible manufacturers (PCMs) such as Hitachi, Fujitsu, and Amdahl entered the storage scene, bringing in one of their platforms to send IBM a message about its high prices made a lot of sense. If you brought in a non-IBM solution, subsequent pricing from IBM got a lot more competitive.

We’ll revisit Idea 1 in a few paragraphs. For now, let’s talk about the second historical phenomenon that made storage more affordable.

2. In the 1990s: Silos for specific applications and departmental needs worked.

In the ’90s there was a rush to departmental systems. Although these systems had the drawback of not scaling, entry prices for small and intermediate systems were lower, which drove the price per gigabyte down to about half that of large-scale mainframe and enterprise-class storage.

3. In the early 2000s: Data AVAILABILITY was most important, and it didn’t matter where the data was stored.

Enter the 2000s. The client-server wave had waned, and ERP and Y2K hadn’t disrupted our data centers. A new era of storage began, lasting less than four years, in which SLAs, QoS, and departmental chargebacks were the norm. No one cared where their data was housed. IT departments often contracted with departmental users for specific SLAs and QoS levels, managing applications and delivering data when and where it was needed. They promised a lot of nines of uptime, and all was good for a time.
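
As a side note on what all those nines meant in practice: each additional nine of availability cuts the permitted downtime roughly tenfold. Here is a back-of-the-envelope sketch (assuming a 365-day year; the function name is ours, not from any SLA standard):

```python
# Convert an availability target expressed in "nines" into an annual downtime budget.
def downtime_minutes_per_year(nines: int) -> float:
    availability = 1 - 10 ** (-nines)          # e.g. 3 nines -> 0.999
    return (1 - availability) * 365 * 24 * 60  # unavailable minutes per year

for n in range(2, 7):
    print(f"{n} nines: {downtime_minutes_per_year(n):.2f} minutes/year")
# 3 nines leaves ~526 minutes (about 8.8 hours) of allowed downtime per year;
# 6 nines leaves only ~0.53 minutes (about 32 seconds).
```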

Then the cloud era began and disrupted this system. While IT departments were failing to meet their SLAs, cost targets, and availability targets, cloud services entered the scene with new choices, lower costs, and better solutions for departments with budgets. A new competition between IT departments and outside vendors began, and companies like Salesforce came to dominate their segments.

4. Today: Data needs to be agile, secure, low-cost, accessible, and flexible.

For a time, companies didn’t care much whether they were buying a cloud service or CRM software, as long as they could get at their data. Companies are now realizing that this mentality is not sufficient. Having data, and access to it, isn’t enough. Data should be:

  • Always available and easily accessible through permission-based access
  • Accurately analyzed
  • Securely backed up
  • Simple to share
  • Reliably secured
  • Usable by multiple applications from different vendors 
  • Managed by policies defined by IT staff (see the sketch after this list)
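
To make that last point concrete, here is a minimal, hypothetical sketch of policy-driven data placement: IT staff define the rules once, and software decides which tier each file lands on. The tier names and age thresholds are illustrative, not taken from any particular product:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """An IT-defined rule: files up to max_age_days belong on this tier."""
    max_age_days: int
    tier: str

# Defined once by IT staff, then evaluated automatically for every file.
POLICIES = [
    Policy(max_age_days=30,  tier="flash"),   # hot data stays on flash
    Policy(max_age_days=365, tier="object"),  # warm data moves to object storage
]

def place(file_age_days: int) -> str:
    """Return the storage tier a file of the given age should live on."""
    for policy in POLICIES:
        if file_age_days <= policy.max_age_days:
            return policy.tier
    return "tape"  # everything older is archived to tape or cold cloud storage

print(place(7))     # flash
print(place(1000))  # tape
```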

You might now be wondering, "Does such a data-nirvana solution exist?" It does now.

Many of you have object storage, and you may be replacing expensive, slow legacy storage platforms with better, lower-cost ways of archiving and storing your files. We offer a solution that works regardless of your current data setup and hardware. You already have the necessary elements in your data center. You have storage silos, but they don’t talk to each other (having multiple vendors’ arrays is the very lever behind the competitive pricing of the 1980s). You also have cloud services, but moving data between them requires S3 protocols and other data-migration tools you likely don’t have. How can you connect your existing storage silos to effectively utilize all of your existing data?
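
To give a feel for why silo-to-silo movement is a chore without extra tooling, here is a minimal sketch of copying a single object between two S3-compatible services using the open-source boto3 library. The endpoints, bucket names, and key are hypothetical, and a real migration would also have to handle credentials, retries, large objects, metadata, and permissions:

```python
import boto3

# Two S3-compatible endpoints (hypothetical URLs); credentials are assumed
# to come from the environment or standard AWS config files.
src = boto3.client("s3", endpoint_url="https://objects.old-vendor.example.com")
dst = boto3.client("s3", endpoint_url="https://objects.new-cloud.example.com")

# Read the object from the source silo...
obj = src.get_object(Bucket="dept-archive", Key="reports/2017-q3.csv")

# ...and write it to the destination silo. For anything beyond small files,
# a real tool would stream or use multipart upload instead of buffering in memory.
dst.put_object(
    Bucket="central-archive",
    Key="reports/2017-q3.csv",
    Body=obj["Body"].read(),
)
```

Multiply that by millions of files, per-vendor API quirks, and access controls, and the case for a multi-vendor management layer becomes clear.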

Strongbox Data is your solution. We offer multi-vendor tools that provide data protection, data management, and permission-based data delivery across the spectrum of tape, arrays, flash, object, and cloud.

Interested in learning more? Check out http://www.strongLink.com

Get in Touch or Schedule a Demo

Experience the magic of StrongLink.
Test drive StrongLink in your own environment.