
Bugs in beta weather model used to trash climate science

Shock revelation: devs test complex code on more than one super

Development work on a not-yet-prime-time weather forecasting model has been seized on as proof that climate models can't be trusted.

The reason? Folks who aren't keen on climate change discovered this paper in a journal of the American Meteorological Society, in which Song-You Hong of South Korea's Yonsei University Department of Atmospheric Sciences runs some tests on a weather model called GRIMs (Global/Regional Integrated Model system).

Weather forecasting (like climate modelling, but that's a different story) is one of the staple workloads of high-performance computing, and consumes a significant slice of the world's supercomputer processor time at any given moment.

What Hong has documented, and what Anthony Watts of Wattsupwiththat has seized on, is that the GRIMs model produces different results when run under different HPC environments. As Hong puts it in the paper's abstract:

“The system dependency, which is the standard deviation of the 500-hPa geopotential height averaged over the globe, increases with time. However, its fractional tendency, which is the change of the standard deviation relative to the value itself, remains nearly zero with time. In a seasonal prediction framework, the ensemble spread due to the differences in software system is comparable to the ensemble spread due to the differences in initial conditions that is used for the traditional ensemble forecasting.”
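To make the jargon concrete, here's a toy sketch of our own (the numbers are invented, and this is a loose reading of the abstract's definitions, not the paper's code). Treat each list as the globally averaged 500-hPa geopotential height from the same model run on a different software system:

    import statistics

    # Hypothetical global-mean 500-hPa heights (metres) from three
    # software systems at forecast days 1 to 4. Invented numbers.
    runs = [
        [5500.2, 5502.1, 5498.7, 5503.9],
        [5500.1, 5502.6, 5499.4, 5502.8],
        [5500.3, 5501.8, 5498.1, 5504.6],
    ]

    # "System dependency": spread across systems at each forecast day.
    spreads = [statistics.stdev(day) for day in zip(*runs)]

    # "Fractional tendency": change in the spread, relative to the
    # spread itself.
    for t in range(1, len(spreads)):
        print(spreads[t], (spreads[t] - spreads[t - 1]) / spreads[t - 1])

In this toy version, the spread across systems grows with forecast time, while its growth relative to its own size shrinks.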

The reason, he states, lies in how different environments handle rounding – and that has Wattsupwiththat particularly excited: “It makes you wonder if some of the catastrophic future projections are simply due to a rounding error.”
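For readers wondering how mere rounding can split two runs of the same code: floating-point arithmetic isn't associative, so different compilers, libraries, or processor counts can sum the same numbers in a different order and get slightly different answers. A minimal sketch in Python (our illustration, nothing to do with GRIMs):

    # Floating-point addition is not associative: regrouping the same
    # three numbers changes the double-precision result.
    a, b, c = 1e16, -1e16, 1.0

    print((a + b) + c)  # 1.0, the small term survives
    print(a + (b + c))  # 0.0, the 1.0 is rounded away against 1e16

In a chaotic model, a last-bit difference like that doesn't stay in the last bit.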

Watts reproduces the table below as proof of how bad things are.

GRIM test results. Smoking gun? No, just testing unfinished weather forecast models on different machines. Image: An Evaluation of the Software System Dependency of a Global Atmospheric Model, Hong et al.

As William Connolley noted over at ScienceBlogs: “trivial differences in initial conditions, or in processing methods, will lead to divergences in weather forecasts”, which is something that “dates back to Lorenz’s original stuff on chaos”.
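Lorenz's point is easy to reproduce. In this sketch of ours (classic Lorenz 1963 parameters, deliberately crude Euler stepping, purely for illustration), two runs of the same deterministic equations differ by one part in ten billion in a single initial value, and the gap grows exponentially:

    # Two trajectories of the Lorenz system, identical except for a
    # 1e-10 nudge to one initial value.
    def step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)
    b = (1.0 + 1e-10, 1.0, 1.0)

    for n in range(1, 3001):
        a, b = step(*a), step(*b)
        if n % 1000 == 0:
            print(n, abs(a[0] - b[0]))

By the last print the two runs bear no resemblance to each other, which is exactly the behaviour that makes long-range weather forecasting hard, and has nothing to do with whether the model is buggy.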

Just as interesting to The Register is that a little bit of further research suggests that the model under test in Song-You Hong's paper is relatively new. Here, for example, is a paper describing the model, prepared for the First GRIMs Workshop in 2011.

As is clear from this paper (slide 5), the model Hong is testing was first designed in 2008, is still under development, and GRIMs is slated for use in weather forecasting … in 2015.

In other words, the reason for conducting a test such as Hong's seems to be that he's working on a new model, and it's being run in different computing environments to identify where its code needs polishing so that it produces consistent results wherever it runs.

Chris Samuel, a Melbourne-based senior HPC system administrator, told The Register it's not unusual to want to test against different environments, because complex environments offer myriad opportunities for divergences to creep in.

The authors are working to see whether the program produces the same results at different scales, and Samuel noted that, in the paper, Hong says the tests identified a bug in the weather code.

Divergence between different systems isn't a new issue, he said. Both sysadmins and users will, in fact, use a range of strategies to address this.

One is to have many parallel installations using different versions of packages, libraries, and compilers, so that “users can pick what they want to build against,” he said.

Another defence is to pick a code version and stick with it. Yet another is to do testing on virtual machines, “but that, of course, doesn't necessarily play so well with classic HPC jobs”.

And even then, “you have OS distribution churn underneath all that to complicate matters further.”
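One simple habit that follows from all this (a generic sketch of ours, not anything GRIMs-specific): fingerprint the software stack alongside each run's output, so that when two machines disagree you can at least see which layer changed underneath you.

    import json
    import platform
    import sys

    # Record what this run was executed against, next to its results.
    fingerprint = {
        "python": sys.version,
        "compiler": platform.python_compiler(),
        "machine": platform.machine(),
        "os": platform.platform(),
    }

    with open("run_environment.json", "w") as out:
        json.dump(fingerprint, out, indent=2)

A real HPC version would also log library versions, compiler flags and processor counts, but the principle is the same.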

In such a fluid world, testing seems prudent, at least to The Register. ®
