Iron Mountain punts subterranean data storage

Underground cloud vault


Iron Mountain - well-known for storing paper records and tape cartridges in secured holes in the ground - has started up an underground cloud-based Virtual File Service.

The cloud is truly subterranean, being based on an underground data centre in the USA.

Calling it "the industry's first enterprise solution (sic) for cloud-based file archiving," Iron Mountain says its VFS is for static data files and that it's cheaper to park them in its vaults than to keep the stuff on your own kit. The benefits are the classic ones: moving CAPEX to OPEX, assurance of meeting compliance and regulatory needs, WORM facilities, coverage for scalability spikes, and lower costs. It has a customer, BRUNS-PAK, talking of an "extremely competitive cost."

Storage consultancy ESG reckons VFS facilitates compliance with SEC 17a-4, which deals with the retention of business records. Read about this here (PDF).

The VFS service provides an on-site appliance that acts like a file server and talks CIFS and NFS, as does Nirvanix's cloud service. Files stored on the appliance are sent over a virtual private network link to an Iron Mountain data centre, whose contents are continuously replicated to a second data centre.
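Because the appliance presents itself over standard NFS/CIFS, getting at it from a client is just an ordinary mount. The snippet below is a minimal sketch, not Iron Mountain documentation: the appliance hostname, export path and mount point are all assumptions for illustration.

    import os
    import subprocess

    # Hypothetical NFS export exposed by the on-site VFS appliance.
    APPLIANCE_EXPORT = "vfs-appliance.example.com:/archive"
    MOUNT_POINT = "/mnt/vfs"

    # Mount the appliance share like any other NFS file server (needs root).
    os.makedirs(MOUNT_POINT, exist_ok=True)
    subprocess.run(["mount", "-t", "nfs", APPLIANCE_EXPORT, MOUNT_POINT], check=True)

    # Anything written under the mount point is what the appliance later ships
    # over the VPN link to Iron Mountain's data centre for replication.
    print(os.listdir(MOUNT_POINT))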

Iron Mountain says you can "move files from primary storage to the VFS appliance using customer-created scripts or off-the-shelf products. Alternatively, you can use the VFS appliance as the disk target for an off-the-shelf backup program. You can also incorporate the VFS service into existing HSM or File Virtualization deployment, using in-house archiving products."
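A "customer-created script" along those lines could be as simple as sweeping a primary share for files that haven't been touched in a while and moving them onto the appliance's share. The sketch below is illustrative only; the source path, mount point and 90-day age cut-off are assumptions, not anything Iron Mountain specifies.

    import os
    import shutil
    import time

    PRIMARY = "/srv/fileshare"          # primary storage (assumed path)
    VFS_SHARE = "/mnt/vfs"              # VFS appliance share, already mounted
    CUTOFF = time.time() - 90 * 86400   # treat files untouched for 90 days as static

    for root, _dirs, files in os.walk(PRIMARY):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) < CUTOFF:
                # Recreate the directory layout on the appliance, then move the file.
                rel = os.path.relpath(src, PRIMARY)
                dst = os.path.join(VFS_SHARE, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)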

Iron Mountain has a partnership with F5, involving F5's ARX file virtualization products, which customers can use to automate movement between primary file shares and VFS Appliance file shares under a single ARX namespace.

Iron Mountain also says customers can "archive data at LAN speed (versus WAN) with the VFS Data Shuttle service." What this actually means is that an initial file load can be accomplished by physically transporting encrypted disk media to the Iron Mountain site if the amount of data is huge.
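The arithmetic behind the Data Shuttle is easy to check. Assuming, purely for illustration, a 10TB initial load and a 100Mbit/s WAN link running flat out:

    # Back-of-the-envelope: a 10TB initial load over a 100Mbit/s WAN.
    data_bits = 10 * 10**12 * 8   # 10TB expressed in bits
    link_bps = 100 * 10**6        # 100Mbit/s, assumed fully saturated
    days = data_bits / link_bps / 86400
    print(f"{days:.1f} days")     # roughly 9 days, before any protocol overhead

Nine-plus days of saturated WAN against an overnight courier run makes the case for shipping encrypted disks on its own.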

The thing about an enterprise-class cloud file store - apart from good retention period policies and compliance features - is that it should never go down and never, ever lose files. Iron Mountain has lost quite a few tapes whilst trucking them to and from customer sites in its time: GE Money in January 2008, with 650,000 customer records lost; Long Island Railroad employee data on April 6, 2006; Time Warner in April 2005, with 40 tapes lost; and Los Angeles-based City National Bank, also in 2005. You might want cast-iron service level agreements in place before taking on this hole-in-the-ground cloud file storage service. ®
