Sysadmin blog

Anyone who says public cloud computing is "pay for what you use" is trying to rip you off. The public cloud is pay for what you provision, and that is a completely different thing.
To move away from the model that pretty much every existing application uses – one where you provision the peak amount of resources required and the application sits idle until it's needed – you need to throw away your code and rewrite it from the ground up.
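The gap between the two billing models can be made concrete with a back-of-the-envelope sketch. Every number below is invented purely for illustration – it is not real cloud pricing – but it shows why a peak-provisioned application that idles most of the day pays far more than a "pay for what you use" pitch implies:

```python
# Illustrative sketch only: all rates and hours below are made-up assumptions,
# not real cloud pricing.

HOURLY_RATE = 0.50        # hypothetical $/hour for a peak-sized instance
HOURS_PER_MONTH = 730     # average hours in a month
BUSY_HOURS_PER_DAY = 4    # the app is actually working 4 hours a day
DAYS_PER_MONTH = 30

# Pay for what you provision: the peak-sized instance runs 24/7,
# whether the application is busy or idle.
provisioned_cost = HOURLY_RATE * HOURS_PER_MONTH

# Pay for what you use: you would pay only for the busy hours -- but only
# if the application were rewritten to scale down to zero when idle.
used_cost = HOURLY_RATE * BUSY_HOURS_PER_DAY * DAYS_PER_MONTH

print(f"provisioned: ${provisioned_cost:.2f}/month")  # $365.00
print(f"used:        ${used_cost:.2f}/month")         # $60.00
```

The "used" figure is only reachable by an application architected to release resources when idle – which is exactly the rewrite the rest of this piece argues most shops cannot afford.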
This means either convincing your software vendor to scrap their codebase and start over; having your in-house developers do the same; or both. Very few applications are an island, and in the real world one application feeds information into another, which then feeds information into another.
Suddenly your simple point-of-sale "application" is 15 interwoven packages, some written by third-party vendors, some written in-house, and all of which must be up 100 per cent of the time.
To even suggest that tossing all your applications out the window is a viable path forward is lunacy of the highest order. Large enterprises could probably afford to move one app cluster at a time. Any company approaching a natural replacement cycle would probably consider a next-gen "burstable" architecture for their new application. But nobody out there has the money to just chuck it all out the window "because cloud."
In case I'm not being blunt enough here: the advantage of hybrid cloud setups like VMware's vCloud Air is that you don't have to throw away your investment in 30 years of applications. You can move them unmodified into VMware's cloud, and you throw away your applications only when it benefits you to do so, not when it suits some mega-corporation's desire to migrate all its customers to subscription-based licensing.
But the concept that we will just magically see benefits from the cloud if we all just hold hands, sing Kumbaya, and "only use what we need" is patently ridiculous. The amount of money required to ditch applications, recode them, and then retrain all our staff is prohibitive.
There's a good reason mainframes are still around: nobody is particularly interested in the massive investment involved in rewriting and redesigning those programs, so we collectively drag mainframes along well past what industry pundits felt should be their "best before" date.
What held true for mainframes holds true for "traditional" x86 applications: they are here, we're heavily invested in them, and they aren't going away for decades. Before you take to the comments in a blind rage and vent your almighty fury at the very notion, I encourage you to try to simply accept the above.
Seek a state of emotional zen in which you can cope with the difficult concept that one size does not fit all, and that old technology does not disappear simply because something new and shiny has come into being.