Q. So away from the modelling, which is questionable as evidence, it seems that climate science really comes down to disputes about how you process the numbers statistically. Tell me where Bayes fits in.
Lewis: Bayesian maths went out of favour in a big way because of its subjectivity. It came back in favour about 30-40 years ago, and it's stayed: it's very well suited to computer simulation and Monte Carlo methods, and it has a good theoretical basis. The problem is that people haven't got to grips thoroughly with the choice of prior.
Q. A prior is a "seed" or a "nudge"?
Lewis: Yes, it's a nudge or seeding. In most cases it gets overridden by the data, and it doesn't matter what prior you use. But it's problematic in climate science - there isn't much data: we only have one climate. And the data is very noisy. Because of that, the prior has a huge influence.
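The point about sparse, noisy data can be sketched numerically. This is a minimal, purely hypothetical illustration (a textbook conjugate normal update, not any actual climate calculation): with thousands of observations the prior washes out; with three noisy ones it dominates the answer.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_mean(prior_mean, prior_sd, data, noise_sd):
    """Conjugate normal update: posterior mean of an unknown quantity
    given normally distributed observations with known noise."""
    prior_prec = 1.0 / prior_sd**2
    data_prec = len(data) / noise_sd**2
    return (prior_prec * prior_mean + data_prec * np.mean(data)) / (prior_prec + data_prec)

true_value = 2.0
noise_sd = 1.5  # very noisy observations

# With lots of data, the prior barely matters...
big = true_value + noise_sd * rng.standard_normal(10_000)
# ...with a handful of noisy points, it dominates.
small = true_value + noise_sd * rng.standard_normal(3)

for prior_mean in (0.0, 5.0):
    print("prior mean %.1f -> big-data posterior %.2f, small-data posterior %.2f"
          % (prior_mean,
             posterior_mean(prior_mean, 1.0, big, noise_sd),
             posterior_mean(prior_mean, 1.0, small, noise_sd)))
```

Shifting the prior mean from 0 to 5 barely moves the large-sample posterior, but drags the three-observation posterior by a couple of units - the "nudge" becomes the result.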
In an ideal world they would use neither Uniform nor Expert Priors. They would either use an Objective Bayesian method, or a non-Bayesian method - a more classical statistical technique such as profile likelihood. Profile likelihood and Objective Bayesian methods give almost the same results, although the agreement isn't 100 per cent exact.
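Profile likelihood is straightforward to sketch. The toy example below uses made-up illustrative numbers (nothing from Lewis's actual analyses): sensitivity is treated as the ratio of a forcing-like quantity to a feedback-like quantity, the likelihood is maximised over the nuisance parameter at each candidate sensitivity, and a 95 per cent interval is read off from the standard chi-squared likelihood-ratio cutoff.

```python
import numpy as np

# Hypothetical illustrative numbers, not real climate data:
F_hat, sF = 3.7, 0.4     # "forcing" estimate and its uncertainty
lam_hat, sL = 1.3, 0.4   # "feedback" estimate and its uncertainty

def log_lik(S, lam):
    # joint normal log-likelihood, with forcing constrained to S * lam
    return (-(S * lam - F_hat)**2 / (2 * sF**2)
            - (lam - lam_hat)**2 / (2 * sL**2))

lam_grid = np.linspace(0.1, 4.0, 2000)
S_grid = np.linspace(0.5, 12.0, 2000)

# "Profile out" the nuisance parameter lambda at each sensitivity value
profile = np.array([log_lik(S, lam_grid).max() for S in S_grid])
best = profile.max()

# 95% interval from the likelihood-ratio cutoff (chi-squared, 1 dof)
inside = S_grid[2 * (best - profile) < 3.84]
print("profile-likelihood 95%% range: %.1f to %.1f" % (inside.min(), inside.max()))
```

No prior appears anywhere in this calculation, which is the attraction: the interval depends only on the data and the likelihood.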
The Uniform Prior hugely fattens the tail, even for a quite well-defined estimate such as Gregory's.
ECS estimates in the IPCC AR5 WG1 report, simplified. The bars show a 5-95 per cent certainty range. The blob is the best estimate. The mauve dashed bar illustrates the effect of using a uniform prior on the same data. Lewis (2012) is included for comparison.
Lewis: You can see how much this matters in the AR5's chart of sensitivity estimates. Those mauve lines are Gregory 2006. [Unlike almost all contemporary ECS estimates, Gregory's 2006 study didn't run too hot.] The short solid bar is the original study's basis - the best estimate is 1.6˚C and the top is 3.5˚C. The dashed mauve line is what happens when they put it on a Uniform Prior basis - it goes up 50 per cent, pushing the top of the 95 per cent certainty range from 3.5˚C up to 8˚C. Some go higher, actually, to 14˚C - but they cut them all off at 10˚C and renormalise them. That's entirely the Uniform Prior.
Q. So catastrophic warming predictions are a trick of one subjective statistical input?
Lewis: Or models. The problem with expert priors here is that they dominate the results. The studies are valueless - all they show you is what Prior or seed you put in. The level of understanding of Bayes in climate science is very poor.
Q. How many use objective or subjective priors?
Lewis: The Objective Prior school is in the minority. However, the Subjective Prior Bayesian school - where you pick your own prior - only produces subjective results, valid only for the person doing the study. If you read Lindley - one of the great statisticians, who died recently - there's a lovely quote from him: "the probability you get is personal to the investigator". If you hear Rougier lecturing, he doesn't say "this is a climate table" - he says "this is my climate table".
It's left as an exercise for the reader to account for how "global warming" became "catastrophic warming" over a 20-year period. But this dynamic can't be discounted:
"I can’t overstate the HUGE amount of political interest in the project as a message that the Government can give on climate change to help them tell their story. They want the story to be a very strong one and don’t want to be made to look foolish," DEFRA official Kathryn Humphrey wrote to the head of the East Anglia climate research unit in 2009. Many such pleadings have flown between the bureaucracy and the Academy in recent years - and politicians echoed the demand to be told "what to do" recently.
Nobody wanted to demur. And here we all are. ®
Lewis & [Marcel] Crok: "How the IPCC hid the good news on Global Warming" (Short and Long Versions available from here)