Industry Comment

We’ve spent 20 years assuming that we add memory and disk in large numbers and CPUs in small numbers. What if all three scaled in the same way? Now, that would be a game-changing innovation, one that would spawn a new age for business applications and raise the bar on IT productivity and business efficiency.
Remember way back when PCs had a grand sum of 64 kilobytes of memory? These days, we count the memory in small laptops in hundreds of megabytes and the memory in big servers in fractions of terabytes. The same thing happened to disk space: megabytes to petabytes. What’s next? Exa-, zetta-, and yottabytes.
But when it comes to CPUs, we still mostly dabble in single digits. An 8-way server feels like a pretty large system. The 32-way, 64-way, and 200-way systems feel just huge. Even when we scale out, anything beyond a couple of hundred CPUs begins to challenge our ability to manage and operate the systems. It’s no accident that such systems are called a “complex.”
A major shift is coming. Over the next few years, your ordinary applications will be able to tap into systems with, say, 7,000 CPUs, 50 terabytes of memory, and 20 petabytes of storage. In 2005, Azul Systems will ship compute pools with as many as 1,200 CPUs in a single standard rack (1.2 kilocores! I like the sound of that!).
What would change about application design if you could do this? Well, think back to what applications were like when you had just 128K of memory in your PC and a 512KB hard drive. The gulf between the capabilities and flexibility of applications then and now is the scale of improvement we are talking about.
If you could count CPUs the same way that you count memory, some problems would simply become uninteresting and others would transform in a qualitative way. And completely new possibilities would emerge.
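To make the idea concrete, here is a minimal sketch in Java (fitting, given Azul’s Java focus) of what “counting CPUs the way you count memory” looks like to application code. This is purely illustrative and assumes nothing about Azul’s actual products or APIs: the program fans 10,000 independent work items out over a standard thread pool sized to however many processors the platform reports, and the same code runs unchanged whether that number is 2 or 2,000.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WideFanOut {
    // Sum of squares 0..(tasks-1), computed as many independent work items.
    // The application never says how many CPUs it needs; it just asks the
    // runtime how wide the pool is and lets the scheduler spread the work.
    static long sumSquares(int tasks) throws Exception {
        int width = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(width);
        try {
            List<Future<Long>> results = new ArrayList<>();
            for (int i = 0; i < tasks; i++) {
                final long n = i;
                results.add(pool.submit(() -> n * n)); // one independent unit of work
            }
            long total = 0;
            for (Future<Long> f : results) {
                total += f.get();
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumSquares(10_000));
    }
}
```

The point of the sketch is the shape, not the arithmetic: the code expresses parallelism as “as many workers as the platform has,” so moving it from an 8-way box to a kilocore pool changes its throughput, not its source.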
Deployment and administration of applications would also change dramatically. Do you ever worry about how much storage an individual user might need? Probably not. You just install a NAS device with a terabyte of storage and let everyone share it. This approach works because no single user is likely to fill it up quickly, and you can plan storage capacity across all your users rather than for each individual one. Do you ever worry about the utilization level of an individual byte of memory? I hope not. You have so many bytes that you measure utilization at the aggregate level.
If you had hundreds of CPUs in a miniaturized “big-iron” system that were available to your applications, you could adopt the same strategy for applications. No need to plan capacity for each individual application. Let all of your users share a huge compute pool and plan capacity across many applications. In the process, you also fundamentally change the economics of computing. Well, that’s exactly what Azul Systems is pioneering.
This is a whole new way of looking at the CPU and, therefore, at the function of “compute.” The approach is gaining mainstream acceptance. For large symmetric multiprocessing (SMP) systems, the industry has reached 2 or 4 CPUs on a chip; for systems limited to one chip, tens of functional units in a single CPU. Some companies have announced future chips with as many as 8 CPUs on a single chip. With 24 CPUs on a chip that can be used in an SMP system, Azul has already set the bar much higher. And that’s just the beginning!
Get ready for an era when you can order CPUs by the thousands. And get ready for the new language of that era: Do we say 2.5 kilo-CPUs? Do we call this kilo-core, or mega-core, processing? And since it goes way past current multi-core technology, do we call it poly-core technology?
Here is a possible headline in 2005:
Poly-core Technology to Enable Kilo Core Processing. Happy Apps Hail Freedom!!
Happy 2005!
Azul Systems has created one of the most radical processor designs to date. Its Vega processor sits at the heart of a Java-crunching server due out in the first half of this year. More information on the company's upcoming products is available on its website.