Anticipating where bottlenecks will develop in a live database has long been one of the most bankable skills a self-respecting database administrator can have, yet researchers may now have developed a set of algorithms that can do it automatically.
The DBSeer predictive modeling method, described in two academic papers authored by researchers at MIT and Microsoft, gives companies a way to model the ins and outs of their particular database so they can save on data center infrastructure and avoid downtime.
The DBSeer modeling method helps administrators spot DB problems without having to manually test out different configurations of the database under different load environments, the researchers write (PDF).
Its creators hope DBSeer can address the main shortcoming of running a database-as-a-service in an on-premise virtualized environment: the isolation of compute power, per-VM billing, and the lack of information about the particulars of the underlying infrastructure make tuning a database in the private cloud "more challenging than in conventional deployments."
"You can now answer many questions about your database that were previously only answered through 'try it and find out for yourself'," the lead author of the papers, Barzan Mozafari, tells The Register via email.
"Now in many cases we can predict what will happen without actually trying those configurations out. This can dramatically reduce the cost of testing and deploying your database configuration."
So far, the researchers have created an implementation of DBSeer that can help model performance for transactional MySQL workloads, but they believe it can be extended to other databases as well.
The system has proved so effective that it has already piqued the interest of Teradata, which has tasked several of its engineers with porting the DBSeer algorithms to its own software.
The system works by observing query-level logs and the OS statistics generated by a live database management system.
"It's a non-intrusive approach, i.e. it doesn't require modifying the database engine," Mozafari says. "It simply observes the load that comes into the database and the performance and resource consumption of the database and tries to understand the relationship between the two."
This allows DBSeer to model the CPU, RAM, network, disk I/O, and number of acquired locks per table, for various MySQL configurations.
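In the spirit of that approach, a minimal sketch of what "understanding the relationship between the two" might look like is a regression fitted from observed load to observed resource consumption. The transaction mixes, CPU figures, and linear model below are our own invented illustration, not DBSeer's actual models:

```python
import numpy as np

# Hypothetical training data: each row is the observed throughput
# (transactions/sec) of three transaction types during one interval.
load = np.array([
    [120.0, 30.0, 10.0],
    [ 80.0, 60.0, 20.0],
    [200.0, 10.0,  5.0],
    [150.0, 40.0, 25.0],
])

# Corresponding CPU utilisation (percent) taken from OS statistics.
cpu = np.array([42.0, 39.0, 55.0, 53.0])

# Fit a linear model cpu ≈ load @ weights via least squares.
weights, *_ = np.linalg.lstsq(load, cpu, rcond=None)

# Predict CPU for a transaction mix that was never actually run --
# answering a "what if" without trying the configuration out.
new_mix = np.array([100.0, 50.0, 15.0])
predicted_cpu = new_mix @ weights
print(f"predicted CPU: {predicted_cpu:.1f}%")
```

The point of the sketch is the workflow, not the model: everything is learned from passive observation of the running system, with no changes to the database engine itself.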
To test the algorithms, the researchers generated 20 mixtures of the Transaction Processing Performance Council's TPC-C benchmark with different ratios of transaction types. The average error rates of DBSeer's predictions ranged from 0 to 25 percent, and its I/O model performed best, with an average margin of error of 1 percent.
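For a sense of what an average error rate like that means, here is a small sketch of one common way to compute it, as the mean relative error between predicted and observed values. The numbers are invented for illustration; the paper's exact metric may differ:

```python
# Invented predictions and ground-truth observations (e.g. IOPS).
predicted = [102.0, 48.5, 210.0, 97.0]
observed = [100.0, 50.0, 200.0, 100.0]

# Relative error per measurement: |predicted - observed| / observed.
errors = [abs(p - o) / o for p, o in zip(predicted, observed)]
avg_error_pct = 100.0 * sum(errors) / len(errors)
print(f"average error: {avg_error_pct:.2f}%")
```

By this measure, an I/O model averaging 1 percent error is producing predictions almost indistinguishable from measurements.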
With error margins that low, we can see why Teradata would be interested in porting the technology to its own software.
The researchers are due to deliver a further paper (draft PDF here) at the SIGMOD conference in June in New York, which will give further information on how to apply DBSeer to performance and resource modeling in highly-concurrent OLTP workloads.
The researchers hope that DBSeer can be extended to still other databases, including NoSQL ones.
"Row-store (NoSQL) ones are much simpler to model/predict because they are more linear (due to lack of locking) than a traditional transactional DB," Mozafari says.
If technologies like DBSeer are adopted, companies will be able to automate some of the tasks done by DBAs and make sure they're not provisioning more hardware for their databases than they actually need.
What has got El Reg's database desk all a-flutter is the thought of DBSeer being integrated into an off-premise rentable cloud, like, say, Amazon Web Services.
This would give database developers realistic forecasts of I/O performance for an off-site database, and go some way toward addressing the numerous reliability concerns people have about running a database in the cloud. ®