Your humble Devil took another tack.
Alongside the various email trails, the leak also included much of the computer code that these "scientists" were using to generate their climate models. Whilst the MSM, reluctantly, did run stories on the email trails—by then, the uproar was such that they could barely ignore the scandal—not one mainstream paper (as far as I could see) even admitted that the code had been released (presumably because, being journalists, they wouldn't have the first clue as to how to analyse it).
So, your humble Devil decided to concentrate on the code and, in particular, the comments in the Harry Read Me file. I went hunting around, and found people who were trying to run the code, and analyse these comments—and what we found was horrendous.
So, come with me on a wonderful journey as the CRU team realise that not only have they lost great chunks of data but also that their application suites and algorithms are total crap; join your humble Devil and Asimov as we dive into the HARRY_READ_ME.txt (thanks to The Englishman) file and follow the trials and tribulations of Ian "Harry" Harris as he tries to recreate the published data because he has nothing else to go on!
Thrill as he "glosses over" anomalies; let your heart sing as he gets some results to within 0.5 degrees; rejoice as Harry points out that everything is undocumented and that, generally speaking, he hasn't got the first clue as to what's going on with the data!
Chuckle as one of CRU's own admits that much of the centre's data and applications are undocumented, bug-ridden, riddled with holes, missing, uncatalogued and, in short, utterly worthless.
And wonder as you realise that this was v2.10 and that, after this utter fiasco, CRU used the synthetic data and wonky algorithms to produce v3.0!
You'll laugh! You'll cry! You won't wonder why CRU never wanted to release the data! You will wonder why we are even contemplating restructuring the world economy and wasting trillions of dollars on the say-so of data this bad.
Essentially, the data was in a mess: the scientists were entering "synthetic" data (guesstimates, basically), the code was producing meaningless answers, and some of the results had been interpreted entirely wrongly. (IIRC, FrancisT tried running the actual code, though I cannot find the exact post just now, and found that it would error but just carry on running; no error trapping at all.)
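To see what "it would error but just carry on running" means in practice, here is a minimal sketch in Python (purely illustrative; the actual CRU code was Fortran and IDL, and the function and data below are invented): every failure is silently swallowed, so corrupt records never halt the run and never show up in any log.

```python
def process_without_trapping(records):
    """Sum station readings, silently dropping any record that blows up."""
    total = 0.0
    for rec in records:
        try:
            total += float(rec)   # a corrupt record raises here...
        except Exception:
            pass                  # ...and is swallowed: no log, no abort, no count
    return total

# Three good readings and two corrupt ones: the run "succeeds" anyway.
readings = ["1.5", "2.5", "garbage", "3.0", None]
print(process_without_trapping(readings))  # 7.0, with no hint that 2 records failed
```

The run completes and produces a plausible-looking number; nothing tells you that two of five inputs were junk. That is garbage in, garbage out with the garbage detector unplugged.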
In other words, the data and software that these people were using to produce the climate models was crap—garbage in, garbage out.
So, we knew that the models were fantasies—it's just that no one reported on it.
Today Anthony Watts has highlighted a new paper which corroborates that assertion.
New peer reviewed paper finds the same global forecast model produces different results when run on different computers
Did you ever wonder how spaghetti like this is produced and why there is broad disagreement in the output that increases with time?
Increasing mathematical uncertainty from initial starting conditions is the main reason. But, some of it might be due to the fact that while some of the models share common code, they don’t produce the same results with that code owing to differences in the way CPU’s, operating systems, and compilers work. Now with this paper, we can add software uncertainty to the list of uncertainties that are already known unknowns about climate and climate modeling.
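The compiler/CPU point in the quote above does not need a climate model to demonstrate: IEEE-754 floating-point addition is not associative, and different compilers, optimisation levels, and vector units legitimately reorder the same sum in different ways. A minimal sketch (the numbers are illustrative, not from any model):

```python
# The same four values, summed in two orders a compiler might choose.
vals = [1e16, 1.0, -1e16, 1.0]

# Naive left-to-right order: adding 1.0 to 1e16 is lost to rounding,
# because 1.0 is smaller than the spacing between doubles at 1e16.
left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]

# Pairwise (vectorised-style) order: the big terms cancel first,
# so both 1.0s survive.
reordered = (vals[0] + vals[2]) + (vals[1] + vals[3])

print(left_to_right)  # 1.0
print(reordered)      # 2.0
```

Same code, same data, different answer, and in an iterated model those small discrepancies compound at every timestep. That is the "software uncertainty" the paper is talking about.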
So, now it's official—the code is crap. Therefore the climate models are crap.
And we're busy beggaring the world on the say-so of corrupt scientists, inaccurate data, flaky software and Green loons.
People should hang for this...