DB2: the Viper is coming

More of a King Cobra, really


Comment The next release of IBM's DB2 (for both zSeries and distributed systems), code-named ‘Viper’, will be generally available in the not-too-distant future: “mid-summer” for distributed systems, according to IBM. It is therefore timely to consider some of the new features it will introduce, and its likely impact on the market.

Of course, the biggest feature of Viper is that it includes an XML storage engine alongside the relational one. I have discussed the technology underpinning this in some depth on previous occasions, so I will not repeat myself here.

However, it is worth pointing out that this means more than being able to address the database with either XQuery or SQL, and more than being able to combine SQL and XML data within the same query: it also has a direct impact on performance, both for the database itself and for the development of facilities that use XML storage. For example, early adopters of Viper report query performance gains of 100 times or more; development savings of between four and 16 times, depending on whether the comparison was with XML held as a character large object or shredded into relational tables; and the ability to add fields to a schema in a matter of minutes rather than days.
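To make that concrete, here is a minimal sketch of the sort of statement Viper permits, mixing a relational predicate with an XQuery predicate over a natively stored XML column (the table and column names are my own invention):

    -- An XML column alongside ordinary relational columns
    CREATE TABLE customers (
        id      INTEGER NOT NULL PRIMARY KEY,
        name    VARCHAR(100),
        profile XML    -- stored natively, not shredded or held as a CLOB
    );

    -- SQL and XQuery conditions combined in a single query
    SELECT c.name,
           XMLQUERY('$p/customer/phone/text()' PASSING c.profile AS "p")
    FROM   customers c
    WHERE  c.id > 1000
      AND  XMLEXISTS('$p/customer[city = "London"]' PASSING c.profile AS "p");

Because the XML is stored in its own hierarchical form rather than shredded or stuffed into a character large object, such queries avoid the conversion overhead that accounts for much of the performance difference reported above.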

However, XML support is by no means the only significant feature of Viper. For general-purpose use, perhaps the next most significant capability is the compression it will provide. Null and default value compression, index compression for multi-dimensional clustering and backup compression are all available pre-Viper, but Viper adds row compression.

Effectively, this works by using multiple algorithms suited to different datatypes (on a per-column basis) and by looking for patterns that can be tokenised, stored once and accessed via a dictionary. According to IBM, this results in typical savings of between 35 and 80 per cent, depending on the data being compressed. In particular, there are special facilities for SAP built into the release, so savings in SAP environments should be at the higher end of that range.
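For the curious, enabling it is straightforward. A sketch of the sequence on a hypothetical sales table (the INSPECT step lets you estimate the likely savings before committing):

    -- Estimate the likely compression savings first
    INSPECT ROWCOMPESTIMATE TABLE NAME sales RESULTS KEEP sales.est;

    -- Mark the table for compression, then rebuild it so the
    -- dictionary of repeated patterns is built and rows compressed
    ALTER TABLE sales COMPRESS YES;
    REORG TABLE sales RESETDICTIONARY;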

You might ask what the overhead of using compression is: after all, the act of compressing and decompressing the data takes time. However, buffer pools are also compressed, which means that more data can be held in memory and there is correspondingly less need for I/O. As a result, applications will often actually speed up, because the reduction in I/O more than offsets the compression overhead. Neat.

Note, however, that compression only applies to relational data in this release.

The next big deal is the introduction of range partitioning. Now, you wouldn't think that range partitioning was of major significance; indeed, you might think that IBM was late in delivering it, since many other vendors have had it for years. However, it is not the range partitioning alone that is important, nor even that you can use it for sub-partitions alongside the existing hash capabilities. No, it is the combination of both of these with multi-dimensional clustering that matters: in other words, you can distribute your data using hashing, sub-partition it by range and then organise those sub-partitions by dimension, storing related data contiguously in order to minimise I/O.
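Expressed as a single (hypothetical) table definition, the three mechanisms stack like this:

    CREATE TABLE sales (
        store_id  INTEGER NOT NULL,
        sale_date DATE    NOT NULL,
        region    CHAR(8) NOT NULL,
        amount    DECIMAL(10,2)
    )
    DISTRIBUTE BY HASH (store_id)     -- spread rows across database partitions
    PARTITION BY RANGE (sale_date)    -- sub-partition each by date range
        (STARTING '2006-01-01' ENDING '2006-12-31' EVERY 1 MONTH)
    ORGANIZE BY DIMENSIONS (region);  -- cluster each range partition by dimension

Queries that restrict by date can then skip whole range partitions, and within the partitions they do touch, rows for a given region sit together on disk.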

And talking of distributing data, in this release IBM has extended its automatic storage management, though the story, which started with the current release, is not yet complete. Basically, the idea is that the database will support different storage types (for example, disk drives of different speeds) and you will be able to define policies that assign particular data elements to particular storage types. In other words, IBM is building ILM (information lifecycle management) directly into the database. While it has not formally said as much, this is clearly the direction in which the company is headed.
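There is no policy language for this yet, but you can already steer data towards storage tiers by hand. A sketch, with invented paths and sizes, of table spaces pinned to fast and slow devices:

    -- Frequently accessed data on fast disk...
    CREATE TABLESPACE hot_ts
        MANAGED BY DATABASE USING (FILE '/fastdisk/hot.dat' 10000);

    -- ...historical data on cheaper, slower disk
    CREATE TABLESPACE cold_ts
        MANAGED BY DATABASE USING (FILE '/slowdisk/cold.dat' 50000);

The policy-driven automation described above would, in effect, take over this assignment for you.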

Since we are on the topic of different hardware configurations, another new feature is that the database will recognise the hardware configuration during installation and set defaults accordingly (for example, for self-tuning memory, the configuration advisor and so on). The software will similarly recognise an SAP system and set defaults to suit.
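You can also re-invoke this tuning after installation. A sketch, assuming a database called salesdb, run from the DB2 command line processor:

    -- Ask the configuration advisor to size settings for this hardware
    -- and workload, and apply them to database and instance alike
    AUTOCONFIGURE USING mem_percent 80 workload_type complex
        APPLY DB AND DBM;

    -- Hand memory allocation over to the self-tuning memory manager
    UPDATE DB CFG FOR salesdb USING SELF_TUNING_MEM ON;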

Along with this, as you might expect, there are a number of enhanced and extended autonomic features. One I particularly like is that utilities such as table reorganisation, backup and runstats (all of which can be automated after the input of initial parameters) can be throttled. That is, you can set them to run according to how much priority they have relative to the user workload. Thus you could insist that reorganisation is really important or, at the other end of the scale, state that it must have no impact on live performance, or pick anywhere in between.
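A sketch of how that looks from the command line processor (database name and numbers invented):

    -- Cap the aggregate impact of throttled utilities on the live
    -- workload at roughly 10 per cent (an instance-wide setting)
    UPDATE DBM CFG USING UTIL_IMPACT_LIM 10;

    -- Start a backup as a throttled utility with a relative priority
    BACKUP DATABASE salesdb ONLINE UTIL_IMPACT_PRIORITY 50;

    -- Re-prioritise a running utility (its id comes from LIST UTILITIES)
    SET UTIL_IMPACT_PRIORITY FOR 3 TO 25;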

Other features include the removal of table space size limits; label-based access control, which allows you to implement hierarchical security at the row level; a new Eclipse-based DB2 Developer Workbench (replacing the previous DB2 Development Center) with full XML support; and a Visual XQuery Builder, amongst others.
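Label-based access control deserves a quick illustration. A minimal sketch, with invented names, of a single-component hierarchy protecting rows:

    CREATE SECURITY LABEL COMPONENT level
        ARRAY ['TOP SECRET', 'SECRET', 'PUBLIC'];
    CREATE SECURITY POLICY doc_policy
        COMPONENTS level WITH DB2LBACRULES;
    CREATE SECURITY LABEL doc_policy.secret COMPONENT level 'SECRET';

    CREATE TABLE documents (
        id   INTEGER,
        body VARCHAR(1000),
        tag  DB2SECURITYLABEL        -- the per-row label
    ) SECURITY POLICY doc_policy;

    -- Rows labelled above SECRET simply vanish from alice's queries
    GRANT SECURITY LABEL doc_policy.secret
        TO USER alice FOR READ ACCESS;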

How much impact will Viper have? There are a lot of applications (more than many companies realise) that need to combine XML and SQL data, and IBM is about to have a clear lead in this area. Then add Viper's SAP-specific characteristics: even with the previous release DB2 was increasing its share of the SAP market, picking up not just new customers but migrations from other platforms, and this trend is likely to continue. On top of that, compression will reduce the total cost of ownership, as will, in their own ways, the new automated management features and the automatic storage support. Finally, consider the performance benefits of adding range partitioning to multi-dimensional clustering in query environments.

To answer my question: how much impact will Viper have? A lot: less a viper, more a King Cobra.

Copyright © 2006, IT-Analysis.com

