
Deconstructing databases with Jim Gray

A genuine guru

It's easier to understand sequential programs than parallel programs because the state space is much smaller. If you have parallel programs then the so-called Cartesian product - the product of the number of states in each of the programs - is much higher. For example, if there are five programs, each with two states, then you don't have five times as many states; you have two to the fifth states, which is 32. So it's much harder to think about parallel programming, much harder to get parallel programs right.
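A back-of-the-envelope illustration of that blow-up (a sketch; the five two-state programs are the assumption in Gray's example):

```python
from itertools import product

# Five concurrent programs, each with just two local states. The state
# of the combined system is a tuple of the five local states, so the
# state space is the Cartesian product of the individual state sets.
program_states = [("s0", "s1")] * 5

combined = list(product(*program_states))
print(len(combined))  # 2 ** 5 = 32 combined states, not 5 * 2 = 10
```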

Do you think compilers would ever get intelligent enough to parallelise our code for us, or would it always require human intelligence?

I hope they will. Actually, I'm convinced that as far as human intelligence is concerned it's hopeless. That's what scares me so much. So what it's actually going to require is a simple programming model. I think a dataflow programming model is where we're headed, and that happens to be the way you program GPUs. You program them as dataflow. If you look at the GPU sort, most of it is dataflow.
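A minimal sketch of that dataflow style (plain Python generators here, standing in for a GPU or database runtime; the stage names are made up):

```python
# Dataflow style: each stage consumes a stream and yields a stream.
# The program states *what* happens to every record; a runtime is free
# to pipeline or partition the stages across processors.

def read_numbers(source):
    for line in source:
        yield int(line)

def square(stream):
    for x in stream:
        yield x * x

def running_total(stream):
    total = 0
    for x in stream:
        total += x
        yield total

pipeline = running_total(square(read_numbers(["1", "2", "3"])))
print(list(pipeline))  # [1, 5, 14]
```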

Parallel programming is hard because of the state space explosion, so you have to think in general terms. You have to be able to say "do this general thing to all those objects"; you have to generalise and think about the general case. Whereas in sequential programming you have narrowed everything down to a very specific case, and at a certain point in the program you know quite a bit about the state of this object.
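That "do this general thing to all those objects" style looks something like the following (a sketch; the per-record function is an arbitrary stand-in):

```python
from concurrent.futures import ProcessPoolExecutor

def normalise(record):
    # Per-object work: sees one object at a time and assumes nothing
    # about the state of any other object.
    return record.strip().lower()

records = ["  Alpha", "BETA ", " Gamma "]

if __name__ == "__main__":
    # The general statement: apply normalise to all records. Because the
    # function is stateless, the runtime may execute the calls in any
    # order, on any number of workers, without changing the result.
    with ProcessPoolExecutor() as pool:
        print(list(pool.map(normalise, records)))  # ['alpha', 'beta', 'gamma']
```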

If you look at SQL Server Integration Services, it's dataflow programming. You write transforms as simple VB or C++ or C# transforms that take in data streams and spit out data streams. They are sequential programs, but they run using pipeline and partition parallelism.
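One way to picture sequential transforms running under pipeline parallelism (a toy sketch with threads and queues, not the actual Integration Services machinery):

```python
import queue
import threading

SENTINEL = object()

def stage(fn, inbox, outbox):
    # Each transform is an ordinary sequential loop; the parallelism
    # comes from running every stage in its own thread, with queues
    # carrying the data stream between them.
    while True:
        item = inbox.get()
        if item is SENTINEL:
            outbox.put(SENTINEL)
            return
        outbox.put(fn(item))

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)).start()
threading.Thread(target=stage, args=(lambda x: x + 1, q2, q3)).start()

for x in range(5):
    q1.put(x)
q1.put(SENTINEL)

results = []
while (item := q3.get()) is not SENTINEL:
    results.append(item)
print(results)  # [1, 3, 5, 7, 9]
```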

Real time analytics

We use databases: relational for storage and multi-dimensional for analysis. But these are both simply logical ways of thinking about data - conveniences for the human rather than the machine. We currently hold these as separate data stores, but will we ever move to a stage where a single store of data serves both ways of looking at it? Surely, if we can find such a storage mechanism, we will have reached the goal of real-time analytics?

Absolutely. In addition, there is the XML stuff, which you can think of as XML documents but which is shredded inside and stored in formats that are convenient. There's the text data, and the Microsoft research technologies are using bitmap indexes internally, kept in a pyramid, to allow quick indexing into the text information. The data mining systems are actually storing data as data models and beginning to give you probabilistic reasoning about data, which I think is a really exciting development.
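The bitmap-index idea for text is easy to sketch in miniature (the generic technique, not Microsoft's internals: one bit per document per term, so a multi-term query is just a bitwise AND):

```python
# Toy term -> bitmap index: bit i is set if document i contains the term.
docs = ["the quick brown fox", "the lazy dog", "quick lazy fox"]

index = {}
for i, doc in enumerate(docs):
    for term in set(doc.split()):
        index[term] = index.get(term, 0) | (1 << i)

def search(*terms):
    # AND the bitmaps together; surviving bits are matching documents.
    bits = ~0
    for term in terms:
        bits &= index.get(term, 0)
    return [docs[i] for i in range(len(docs)) if bits & (1 << i)]

print(search("quick", "fox"))  # ['the quick brown fox', 'quick lazy fox']
```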

I think we end up representing the information in a variety of ways depending on what the need is, and we take that information and replicate it. Typically, the multi-dimensional structure is a replica of the underlying fact tables. You can think of it as an index structure if you want to, but for many people, because they actually deal directly with the dimensional data, that's the reality and everything else is just irrelevant.
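That fact-table-to-cube relationship, in miniature (a sketch; the table and its dimensions are invented for illustration):

```python
from collections import defaultdict

# A tiny fact table: (region, product, sales).
facts = [
    ("EMEA", "widgets", 100),
    ("EMEA", "gadgets", 50),
    ("APAC", "widgets", 70),
    ("APAC", "widgets", 30),
]

# The multi-dimensional structure is derived data: the same facts,
# rolled up along each dimension - a replica organised for analysis,
# much as an index is a replica organised for lookup.
cube = defaultdict(int)
for region, product, sales in facts:
    cube[(region, product)] += sales   # individual cell
    cube[(region, "*")] += sales       # roll-up over product
    cube[("*", product)] += sales      # roll-up over region
    cube[("*", "*")] += sales          # grand total

print(cube[("APAC", "widgets")])  # 100
print(cube[("*", "widgets")])     # 200
print(cube[("*", "*")])           # 250
```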

SQL Server has gotten so huge, and there are so many different parts to it, that there are people who have no idea what multi-dimensional data is, or how the data mining models work, and are still experts. It is difficult to be cognisant of all the things that are in SQL Server because it has gotten to be so huge. I certainly have a hard time keeping up. ®
