When I saw the news earlier today that Amazon was launching a new long-term data storage platform with rock-bottom prices, I thought it would be useful only for a small group of people. But after speaking to a few experts in the big data, infrastructure, and data storage fields, it’s clear that this is a much bigger deal than people seem to realize.

Today, Amazon announced a service called Amazon Glacier, which comes as a part of Amazon Web Services, the company’s wildly popular cloud infrastructure service. Glacier provides long-term data storage for the incredibly low price of $0.01 per GB of data stored per month.

To put that in perspective, it means that for $10 per month, users can store an entire terabyte of data. And to put *that* into perspective, the compressed text of Wikipedia’s articles comes to roughly 8.5 GB. The cost of backing that up to Amazon Glacier for a year? Roughly $1.
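For the curious, the arithmetic is straightforward. The quick sketch below treats a terabyte as 1,000 GB and counts storage fees only (retrieval, covered further down, is billed separately):

```python
# Glacier storage pricing: $0.01 per GB stored per month.
PRICE_PER_GB_MONTH = 0.01

def monthly_storage_cost(gigabytes: float) -> float:
    """Storage-only cost per month, ignoring retrieval and request fees."""
    return gigabytes * PRICE_PER_GB_MONTH

print(monthly_storage_cost(1_000))      # 1 TB  -> $10.00 per month
print(monthly_storage_cost(8.5) * 12)   # 8.5 GB for a year -> about $1.02
```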

But for non-data nerds, what does this mean? It means companies now have an incredibly cheap, reliable backup option that can be put to any number of uses, from less expensive consumer backup products to more resilient services. Say it with me now: data loss be gone! And it comes as close to free as you can get.

It’s important to note, though, that the service is geared toward a very specific set of users, and Amazon has designed its restrictions accordingly. In its announcement post, Amazon noted that key customers could include large corporations, media companies with huge libraries of content to store, and scientific or research organizations.

When asked about the move, Ben Uretsky, a cofounder of DigitalOcean, a pseudo-competitor of Amazon’s, noted that “storage in general is one of the hardest aspects of infrastructure to manage,” and that Amazon’s entry will be very disruptive for all parties.

In fact, Uretsky went so far as to say that DigitalOcean may “actually use [Glacier] to store our long term data for DigitalOcean because of the extremely low price point.” It says something when a company wins over a competitor on price alone, to the point where the two end up working together.
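To make that idea concrete, here is a rough sketch of what pushing a backup archive into Glacier might look like using the AWS SDK for Python (boto3); the vault name, file name, and description are hypothetical placeholders, not anything Amazon or DigitalOcean has published:

```python
import boto3

# Glacier client; the region is an arbitrary choice for this example.
glacier = boto3.client("glacier", region_name="us-east-1")

# Vaults are the top-level containers for archives (name is made up here).
glacier.create_vault(vaultName="company-archive")

# Upload a backup file as an archive; Glacier returns an opaque archive ID
# that you must keep in order to retrieve the data later.
with open("backup-2012-08-21.tar.gz", "rb") as f:
    response = glacier.upload_archive(
        vaultName="company-archive",
        archiveDescription="Nightly database backup",
        body=f,
    )

print("Stored archive:", response["archiveId"])
```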

Of course, there are drawbacks to the price point. The data can’t be retrieved at the blazing speeds Amazon Web Services normally runs at: retrievals complete in the three-to-five-hour range after a request is made. And retrieval becomes costly if the data is needed frequently.
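In practice that delay shows up as an asynchronous retrieval job: you ask Glacier to stage the archive, wait, then download it. Here is a hedged sketch of that flow with boto3, where the vault name, archive ID, and polling interval are placeholders:

```python
import time
import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

# The archive ID returned by upload_archive when the data was stored.
archive_id = "EXAMPLE-ARCHIVE-ID"

# Step 1: ask Glacier to stage the archive; this is the multi-hour wait.
job = glacier.initiate_job(
    vaultName="company-archive",
    jobParameters={"Type": "archive-retrieval", "ArchiveId": archive_id},
)
job_id = job["jobId"]

# Step 2: poll until the job finishes (an SNS notification is the more
# typical approach than polling in a loop).
while not glacier.describe_job(vaultName="company-archive", jobId=job_id)["Completed"]:
    time.sleep(15 * 60)  # check every 15 minutes

# Step 3: download the staged data.
output = glacier.get_job_output(vaultName="company-archive", jobId=job_id)
with open("restored-backup.tar.gz", "wb") as out:
    out.write(output["body"].read())
```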

However, as a backup solution and a redundancy measure against data loss, it is a major disruption in the cloud infrastructure industry.

[Image courtesy MoneyBlogNewz]