The mainframe comes of age ... again?
Approaching the platform
Economic pressure has led to more finance directors and CFOs scrutinising expenditure to a painstaking level of detail. The aim is to ensure that IT can deliver what the business needs at the lowest cost while still meeting the never-diminishing expectations of the board and shareholders.
As a result, in-depth examinations into service availability, security, IT performance and cost control have in many cases become routine functions. But few organisations have good models for exactly how each component - in a hugely complex infrastructure - is linked to individual business services. This makes it difficult to accurately and effectively evaluate the economics and return of IT platforms.
In the past, many assessments of IT aimed to evaluate the total cost of ownership of systems and solutions. In reality, these often degenerated into simplistic analyses of easily measured, directly attributable acquisition expenses and running costs.
It is only recently that attention has turned to some of the major contributors to operational expenditure, especially those associated with electricity consumption, cooling, building / facilities costs and the manpower required to keep systems running. But this creates challenges.
Grainy picture
In environments built on industry standard components, many of these operational costs are allocated in big buckets, and it is very difficult to allocate them to each system or IT service with any degree of certainty, let alone granularity.
As a consequence, attempts to use forms of resource chargeback against business services delivered are extremely complex, often expensive to perform, and likely to lead to highly political discussions at management / IT meetings. The result is that IT and the business often compromise and adopt an average charge per user that can bear little resemblance to reality as different types of users have wildly divergent usage patterns.
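The gap between an averaged charge and actual usage can be illustrated with a short sketch. All figures below are hypothetical, chosen only to show how a flat per-user charge diverges from a usage-weighted chargeback for the same total cost:

```python
def flat_charge(total_cost, users):
    """Average the total cost evenly across all users."""
    return {u: total_cost / len(users) for u in users}

def usage_charge(total_cost, usage_hours):
    """Apportion the total cost by each user's share of recorded usage."""
    total = sum(usage_hours.values())
    return {u: total_cost * hours / total for u, hours in usage_hours.items()}

# Hypothetical monthly figures: three users with wildly divergent usage.
usage = {"analyst": 180.0, "clerk": 15.0, "batch_ops": 405.0}
cost = 12000.0

flat = flat_charge(cost, list(usage))        # every user billed 4000.0
weighted = usage_charge(cost, usage)         # clerk 300.0, batch_ops 8100.0
```

Under the flat model the light user subsidises the heavy one by a wide margin; the usage-weighted model recovers the same total but maps cost to consumption, which is precisely the granularity that is hard to obtain in the first place.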
The pressure to model cost of service against usage is certain to increase as organisations seek to make the most of their IT resources by creating highly responsive resource pools (“private cloud” or “dynamic infrastructure”) to minimise IT costs while maximising business value.
Many vendors are looking to add capabilities to measure resource usage more granularly. This is something at which certain platforms, most notably the mainframe, have always excelled. How organisations react as they do get a better handle on cost metrics, especially when considering highly centralised and consolidated yet flexible infrastructure, has yet to play out.
The mainframe is likely to do very well when its power / performance and scalable management are compared to industry standard systems. This is partly a consequence of the platform’s architecture and design, but also down to the fact that mainframes typically run consistently at utilisation levels higher than many other platforms can reach for any sustained period of time.
C-level managers have to make difficult choices when beset with an array of options, and the pressure to justify decisions in terms of monetary factors can be almost overwhelming. With “total cost of ownership” visibility slowly increasing, many platform selection decisions are entering a new phase.
When looking at centralised and consolidated infrastructures, the question now is whether the mainframe deserves greater consideration than it currently receives, both where the organisation already has such systems in place and, perhaps, as a new investment.
It is clear that getting the skills and tools in place to implement dynamic IT will be a challenge whichever route is taken and, contrary to common perception, may even justify investment in mainframe technology by organisations that do not currently use it.
So, with current trends, is that 40-year-old platform coming of age again? ®