Sun talks future systems, N1, and WinFS
The big picture
Quite apart from the Java politics, Sun's software business isn't short of challenges. On the big iron side it has to cope with tightly integrated IBM systems that refuse to get any cruftier, with Big Blue moving its mainframe know-how into its Unix systems. On the other side there's Microsoft's WinFS, which promises to do away with hierarchical storage altogether and remove the distinction between local and remote storage. (About time, too.) And then there's Linux, which improves all the time.
But John Fowler, who's Sun's software CTO, took some relish in dealing with these when we caught up with him recently, and he's terrific value. He used to work in M&A, which we didn't know, but it was soon apparent he has as much of an eye for the disruptive small details as for the big picture.
N1 and big bandwidth
N1, he said, is a project that will take several years to fulfill. The real context is improving the Solaris deployment environment. Sun aims Java, which virtualizes the application environment, at application developers. It aims Solaris at IT administrators, and it's the business of deploying and managing resources that must be improved.
This isn't just pure altruism, of course. The competitive threat is still Wintel, and the reason masses of Dell PCs have not superseded big iron isn't because Intel doesn't make chips that perform fast enough, but because vertical scalability still looks better value than horizontal scalability. That is, when you can even make the comparison. Sun's nightmare remains an N1 for Wintel.
But N1 for Solaris still looked a bit nebulous to us. We invited John to muse on what datacenter systems would look like several years out. If we're not sure where they're going, it's at least useful to know where they're coming from.
Fowler said that applications would resemble bundled containers.
"You're still going to have threads - you're still going to have processes - you're still going to have low-level constructs; but there'll be more volume in the network."
"We'll still have file sharing administration tools and so on. So you're still going to have some operating system and still kind of runtime environment."
We wondered how autonomous these containers might be: surely software should look after itself? But Fowler said there'd still be an OS and a supervisory process.
"Some things lend themselves to decomposition," he said, citing SETI, "and you can picture containers, such as application servers, being in that category. So you run lots and lots and lots of containers."
"Concurrency is an interesting question. Solaris has the ability to constructively handle more than 1 million threads, and it has a huge address space; so once you extend those out they will be very interesting characteristics to have. You'll be able to have lots of network connections at once."
Higher bandwidth interconnects, what Fowler called the "high performance data center bus", will mean more resources become more widely available.
At the same time, there'll be more specialization on the network, and this will take place alongside consolidation: the two aren't mutually exclusive.
"Business applications are more complicated - they need non-volatile highly reliable storage behind them; there are security issues, and so there's a limit to how far you can take that. The idea that there'll be more specialization on different parts of the network is certainly going to happen. We've got SANS, we have network switches, SSL - they've added a lot of complexity".
Fowler gave an example of how the higher bandwidth in the network would oblige application developers to assemble their software rather differently:
"We're working on an experimental technology to see what would app servers look like if you developed them using extraordinarily low latency and high performance interconnects. Well, we discovered that you'd actually build app servers quite differently to how you do today. Replication of state and failover would be done differently. So in this networked data center you're going to have a lot of familiar components like operating systems, but some of the real-time containers will evolve differently over time."
He had an interesting riff on the ease of use issue which is at the core of N1.
"There's a fellow who works for me, Dave Patterson [UC Berkeley - RISC pioneer], who's researching what it takes to reach five nines. Dave is a very clear thinker and did lots of study and discovered that lots of people are working on the wrong thing.
"Basic hardware and software reliability is not the number one challenge: the number one challenge is recovery - ensuring you have the shortest service outage time. Number two is usability: the average data center has DNS, firewalls, store subsystems, load balanced routers. Most people's problems come from misconfiguration."
When Microsoft goofed the DNS settings on its microsoft.com servers recently, he figured the site would have to be up for the next two hundred years to achieve five-nines uptime.
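As a back-of-envelope check (the outage length below is our assumption - contemporary reports put the microsoft.com DNS failure at roughly a day - and the arithmetic is ours, not Fowler's):

    public class FiveNines {
        public static void main(String[] args) {
            // Five nines = 99.999% availability, i.e. at most 0.001% downtime.
            double allowedDowntimeFraction = 1e-5;

            // Assumed outage length: the microsoft.com DNS failure was reported
            // as lasting roughly a day; call it 22 hours for illustration.
            double outageHours = 22.0;

            // Elapsed time needed for that one outage to fit inside the budget.
            double requiredHours = outageHours / allowedDowntimeFraction;
            double requiredYears = requiredHours / (24 * 365.25);

            System.out.printf("Site must stay up ~%.0f years to amortise a %.0f-hour outage%n",
                    requiredYears, outageHours);   // ~251 years - the order of magnitude Fowler suggests
        }
    }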
"In N1 what you really got to get after is how you assemble provisioning as a coherent whole - you just push a button - where reliability happens and it can be verified. Our main pitch is TCO. The real power of it is more reliability."
On the other hand, he contrasted Microsoft's "push button" ease of use pitch with "no button" thinking.
"There's a difference with Microsoft, they stick a GUI on top. They say, 'bring on the frosting people'".
He gave Clustra, the high-availability database that Sun acquired last year, as an example. Some Clustra systems had been continuously available for three years. The real problem with databases is administrative, he argued: the DBA having to do index rebuilds, for instance.
"Clustra had eliminated that problem because it was doing constant indexing. So the GUI has gone, along with the Rebuild button."
(Sun is looking to use that technology in next year's Application Server update.)
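To illustrate the idea - this is a toy of ours, not Clustra's design - here's "constant indexing" in miniature: the index is maintained on every write, so there's never a rebuild for an administrator to schedule.

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Toy sketch: an inverted index kept current on every write,
    // so there is no separate "rebuild" step to press a button for.
    public class LiveIndex {
        private final Map<String, String> rows = new HashMap<>();        // key -> value
        private final Map<String, Set<String>> index = new HashMap<>();  // value -> keys

        public void put(String key, String value) {
            String old = rows.put(key, value);
            if (old != null) {
                index.getOrDefault(old, new HashSet<>()).remove(key);    // drop stale entry
            }
            index.computeIfAbsent(value, v -> new HashSet<>()).add(key); // index the new value
        }

        public Set<String> findByValue(String value) {
            return index.getOrDefault(value, Set.of());                  // always up to date
        }

        public static void main(String[] args) {
            LiveIndex idx = new LiveIndex();
            idx.put("order-1", "open");
            idx.put("order-2", "open");
            idx.put("order-1", "shipped");
            System.out.println(idx.findByValue("open"));                 // [order-2]
        }
    }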
Microsoft file system caper
He didn't seem too worried about WinFS, which is slated for inclusion in Longhorn (although an interim "Shorthorn" release will most likely precede it, despite protestations to the contrary from Microsoft executives this week).
Been there, didn't do that, reckoned Fowler:
"Microsoft has been talking about putting a database in the file system for a very, very long time. And every time it gets scaled back or drops out."
But surely there's a lot of merit in having consistent queries across the enterprise?
"But people have been using proprietary content management systems for a long time. Five or ten years from now in an OS you'll have semantics much closer unstructured search and content management, as opposed to block I/O calls like read() and write().
"Look in between proprietary solutions like content management systems, and dumb hierarchical file systems like NTFS, there's meta data." We did wonder how long it would take to get U**X people to agree on meta data, and Fowler agreed that there wasn't much consistency. But it would get there.
Jxta pose
Sun was still serious about technologies to virtualize the network, and Ingrid Van Denttogen, who joined us, had been working on Jxta. Ingrid said that they'd just tested Jxta on a Nokia 9210 (the interview was recorded on my loaned Communicator).
I wondered if ZeroConf (embodied in Apple's Rendezvous) wouldn't do the job just as well?
"ZeroConf is for local LAN discovery," said Fowler. "Jxta can discover resources across any boundary - it can find resources halfway across the world. The problem at that scale is that there's no real administration model for DNS." (ZeroConf is DNS based).
Our friend Jonathan Eunice at Illuminata made a great observation on N1 recently. The inhibitor to adoption, he pointed out, wasn't so much technological as political: for such an initiative to succeed, it needs to break down corporate fiefdoms. But even a five per cent improvement would still add to the bottom line, and there's probably never been a better time to try. ®