The policy that helped Anonymous hack AAPT

How vigilant is your host or public cloud provider?


Anonymous' theft of data from a dormant AAPT server might not have been possible had the telco used a different host.

AAPT has said the ColdFusion server Anonymous accessed was, essentially, forgotten. In its unpatched state it was therefore easy meat.

One question the Anonymous incident therefore raises is just why the server was there at all. El Reg suspects that's been asked rather tersely within the halls of AAPT, and expects that the IT department there has probably admitted that servers get lost all the time.

If you doubt that's a fact, consider the evergreen market for network discovery tools that scour a network and report back with a list of every piece of attached kit. Consider, too, the phenomenon of virtual machine sprawl, which raises its head when IT departments summon oodles of virtual machines into existence and then forget them.
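
For the curious, a bare-bones version of such a discovery tool is not hard to sketch. The Python snippet below ping-sweeps a subnet and lists every host that answers. It is purely illustrative: the subnet is a made-up example, the ping flags assume Linux, and real discovery products lean on ARP, SNMP and plenty more besides.

    #!/usr/bin/env python3
    """Minimal network discovery sketch: ping-sweep a subnet and report
    every host that answers. Illustrative only; commercial discovery
    tools use ARP, SNMP and other protocols as well."""
    import ipaddress
    import subprocess

    SUBNET = "192.168.1.0/24"  # hypothetical subnet; substitute your own

    def host_is_up(ip: str) -> bool:
        # One ICMP echo request with a one-second timeout (Linux ping flags)
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", ip],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        live = [str(ip) for ip in ipaddress.ip_network(SUBNET).hosts()
                if host_is_up(str(ip))]
        print(f"{len(live)} hosts answered on {SUBNET}:")
        for ip in live:
            print(" ", ip)

Run against a corporate LAN, a sweep like this routinely turns up kit nobody remembers commissioning, which is rather the point.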

Lost servers on a LAN aren't a big deal. But the Anonymous/AAPT incident shows hosted servers rather raise the stakes.

Which is why we decided to ask several hosting and cloud providers what they do when they spot an orphaned server. Telstra, Optus and AWS did not respond to our queries.

But Melbourne IT, where AAPT's server resided, has responded, explaining its stance as follows:

In Melbourne IT’s hosting environment there are either active servers or decommissioned servers. Customers use their servers for different purposes, whether they be production environments, testing environments or disaster recovery services. Some servers could be kept on standby by customers for business continuity or for changing project demands; others exist for regulatory compliance where data needs to be stored for a certain number of years.

How customers decide to use their servers can change from month to month or year to year. How often the content on those servers is updated, or what content is stored on those servers, is at the customer’s discretion. Given such a wide range of usage by our customers, the concept of a ‘dormant server’ does not exist.

Therefore all active servers are treated as active unless we have received notice from the customer to decommission the service (or Melbourne IT decommissions the server due to a breach of contract by the customer). Decommissioned servers are removed from the active server pool and the data is erased.

In other words, if you forget about a server hosted at Melbourne IT and keep paying for it, the company will run it forever.

That's a contrast to the policy at Macquarie Telecom's public cloud outfit Ninefold, where Chairman and Co-Founder Peter James told us a rather different regime operates.

“If no activity has taken place for three calendar months, we contact the customer prior to the third month to indicate there has been no activity and that account closure is pending,” he wrote. “Then, at the customer’s request, their account is [kept] open or closed.”
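
A policy like that is straightforward to automate. The sketch below shows one way a provider might flag dormant accounts for follow-up; the 90-day window, the field names and the notify() stub are our illustrative assumptions, not Ninefold's actual code.

    #!/usr/bin/env python3
    """Sketch of a dormancy check along the lines Ninefold describes:
    flag accounts with no activity for roughly three months so the
    provider can contact the customer before closure. All names and
    thresholds here are illustrative assumptions."""
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    DORMANCY_WINDOW = timedelta(days=90)  # assumed: ~three calendar months

    @dataclass
    class Account:
        owner: str
        email: str
        last_activity: datetime  # e.g. last login, API call or deploy

    def notify(account: Account) -> None:
        # Stub: a real system would email the customer here
        print(f"Notify {account.email}: no activity since "
              f"{account.last_activity:%Y-%m-%d}; closure pending.")

    def sweep(accounts: list[Account], now: datetime) -> list[Account]:
        """Return the accounts flagged as dormant, notifying each."""
        dormant = [a for a in accounts
                   if now - a.last_activity >= DORMANCY_WINDOW]
        for account in dormant:
            notify(account)
        return dormant

    if __name__ == "__main__":
        now = datetime(2012, 8, 1)
        demo = [
            Account("Acme", "ops@acme.example", datetime(2012, 7, 20)),
            Account("Forgotten Pty", "it@forgotten.example",
                    datetime(2012, 3, 1)),
        ]
        sweep(demo, now)  # flags only Forgotten Pty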

Given the near-inevitability of the occasional absent-minded server loss, being a customer of an outfit with a policy like Ninefold's looks preferable to living under the kind of policy in place at Melbourne IT. ®
