Michael Mann in his own words on the stolen CRU emails

With all the wild accusations flying around over the illegally obtained email correspondence from the University of East Anglia Climate Research Unit, I thought I would ask one of the scientists in the middle of the issue to provide some context.

Penn State University climate scientist Dr. Michael Mann, whose name appears in some of the stolen emails, provided me with a rundown of the emails that involve him. His responses provide some much-needed context and give you an idea of just how wildly some people have blown this story out of proportion.

What follows are quotes taken directly from the stolen emails, followed by Dr. Mann’s responses:

1. “I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (i. e. from 1981 onwards) and from 1961 for Keith’s to hide the decline.” (from Phil Jones).

Phil Jones has publicly gone on record indicating that he was using the term “trick” in the sense often used by people, as in “bag of tricks”, or “a trick to solving this problem …”, or “trick of the trade”.

In referring to our 1998 Nature article, he was pointing out simply the following: our proxy record ended in 1980 (when the proxy data set we were using terminates) so, it didn’t include the warming of the past two decades.

In our Nature article we therefore also showed the post-1980 instrumental data that was then available through 1995, so that the reconstruction could be viewed in the context of recent instrumental temperatures. The separate curves for the reconstructed temperature series and for the instrumental data were clearly labeled.

The reference to “hide the decline” is referring to work that I am not directly associated with, but instead work by Keith Briffa and colleagues.

The “decline” refers to a well-known decline in the response of only a certain type of tree-ring data (high-latitude tree-ring density measurements collected by Briffa and colleagues) to temperatures after about 1960.

In their original article in Nature in 1998, Briffa and colleagues are very clear that the post-1960 data in their tree-ring dataset should not be used in reconstructing temperatures due to a problem known as the “divergence problem” where their tree-ring data decline in their response to warming temperatures after about 1960.

“Hide” was therefore a poor word choice, since the existence of this decline, and the reason not to use the post-1960 data because of it, was not only known, but was indeed the point emphasized in the original Briffa et al. Nature article. There is a summary of that article available on this NOAA site.

There have been many articles since then trying to understand the reason for this problem, which applies largely to only one very specific type of proxy data (tree-ring wood density data from higher latitudes).

As for my research in this area more generally, there was a study commissioned by the National Academies of Science back in 2006 to assess the validity of paleoclimate reconstructions in general, and my own work specifically. A summary of that report, and a link to it, is available here.

The New York Times (6/22/06), in an article about the report entitled “Science Panel Backs Study on Warming Climate”, had the following to say:

“A controversial paper asserting that recent warming in the Northern Hemisphere was probably unrivaled for 1,000 years was endorsed today, with a few reservations, by a panel convened by the nation’s preeminent scientific body… At a news conference at the headquarters of the National Academies, several members of the panel reviewing the study said they saw no sign that its authors had intentionally chosen data sets or methods to get a desired result. ‘I saw nothing that spoke to me of any manipulation,’ said one member, Peter Bloomfield, a statistics professor at North Carolina State University. He added that his impression was the study was ‘an honest attempt to construct a data analysis procedure.’”


2. “Perhaps we’ll do a simple update to the Yamal post. As we all know, this isn’t about truth at all, its about plausibly deniable accusations.” (from me)


This refers to a particular tree-ring reconstruction of Keith Briffa’s.

These tree-ring data are just one of numerous tree-ring records used to reconstruct past climate.  Briffa and collaborators were criticized (unfairly in the view of many of my colleagues and me) by a contrarian climate change website based on what we felt to be a misrepresentation of their work.

A further discussion can be found on the site “RealClimate.org” that I co-founded and help run. It is quite clear from the context of my comments that what I was saying was that the attacks against Briffa and colleagues were not about truth but instead about making plausibly deniable accusations against him and his colleagues.

We attempted to correct the misrepresentations of Keith’s work in the RealClimate article mentioned above, and we invited him and his co-author Tim Osborn to participate actively in responding to any issues raised in the comment thread of the article which he did.  


3. “Can you delete any emails you may have had with Keith re AR4? Keith will do likewise. He’s not in at the moment -minor family crisis. Can you also email Gene and get him to do the same? I don’t have his new email address. We will be getting Caspar to do likewise.” (from Phil Jones)


This was simply an email that was sent to me, and can in no way be taken to indicate approval of, let alone compliance with, the request. I did not delete any such email correspondences.


4. “I think we have to stop considering Climate Research as a legitimate peer-reviewed journal. Perhaps we should encourage our colleagues in the climate research community to no longer submit to, or cite papers in, this journal” (from me)


This comment was in response to a very specific incident regarding a paper by Soon and Baliunas published in the journal “Climate Research”.

An editor of the journal, with rather contrarian views on climate change, appeared to several of us to be gaming the system to let through papers that clearly did not meet the standards of quality for the journal. The chief editor (Hans von Storch), and half of the editorial board, resigned in protest of the publication of the paper, after the publisher refused to allow von Storch the opportunity to write an editorial about how the peer review process had failed in this instance. [editor note: DeSmogBlog has a full rundown of this story here]

Please see e.g. this post at RealClimate. (3rd bullet item–see the various links, which lead to letters from chief editor Von Storch, and an article by the journalist Chris Mooney about the incident).

Scientists all choose the journals in which we publish, and we all recommend to each other and to our students which journals to publish in.

People are free to publish wherever they can and are free to recommend some journals over others.

For an example of this behavior in daily life, people make choices and recommendations all the time in their purchasing habits. It is highly unusual for a chief editor and half of an editorial board to resign; that indicates a journal in turmoil that should possibly be avoided. Similarly, authors are allowed to cite any papers they want, although usually the editor will note incorrect or insufficient citing.

I support the publication of “skeptical” papers that meet the basic standards of scientific quality and merit.

I myself have published scientific work that has been considered by some as representing a skeptical point of view on matters relating to climate change  (for example, my work demonstrating the importance of natural oscillations of the climate on multidecadal timescales). 

Skepticism in the truest scientific sense of the word is good and is indeed essential to science.  Skepticism should not be confused, however, with contrarianism that does not meet the basic standards of scientific inquiry.


5. “It would be nice to try to contain the putative ‘MWP’.” (from me)


In this email, I was discussing the importance of extending paleoclimate reconstructions far enough back in time that we could determine the onset and duration of the putative “Medieval Warm Period”.

Since this describes an interval in time, it has to have both a beginning and end. But reconstructions that only go back 1000 years, as most reconstructions did at the time, didn’t reach far enough back to isolate the beginning of this period, i.e. they are not long enough to “contain” the interval in question.

In more recent work, such as the IPCC Fourth Assessment Report published in 2007, the paleoclimate reconstructions stretch nearly 2000 years back in time, which is indeed far enough back in time to “contain” or “isolate” this period in time.


For political and financial reasons, a handful of very vocal critics have been using these emails to create the impression that they somehow refute decades of research by thousands of scientists finding that climate change is a serious issue caused by our over-reliance on fossil fuels like oil and coal.

That’s an unfair claim, as many of the more rational media outlets have pointed out.

From the context Dr. Mann has provided above, it is clear that these emails are being used as political weapons in a last-ditch effort to stop the world from taking the necessary action to reduce fossil fuel dependency and minimize the devastating effects of climate change.

Yeah, how foolish does Mann think the public really is? The above link provides a more factual account of the hacked emails as well as some relevant context. Is it believable for Michael Mann and Jones to have two different stories? DeSmog seems to think so.

Was that a personal attack, or is that the type of response you normally provide in the face of uncomfortable information, and really not meant for me personally?

The “troll” comments are a compliment to desmog (and a complement)

You have open commenting and so people are attracted. They can say something smart or dumb. You can shoot it down if you want or let it stand. It’s good. Unmolested comment sections are always the best in the end.

It’s easy to find sites full of like-minded people no matter where you stand on the topic of climate.

Those sites are a good place to go if you need sleep….so boring.

Kevin, we’re trying to enlighten you and get you back into reality. BTW, are you guys still hosting Monbiot? I want to shake the hand of the global warming guru who can admit to the massive deception that has occurred.
Kevin, no hard feelings. I know you have to do your job like the rest of us. I’m sure you go home and read Inhofe’s website to regain sanity. I’ll have to take you out for a beer when this is all over and we can laugh about how the world almost got sucked in.

The link provided is just lots of arcane argument and defamatory commentary. For instance, why does it have to be “fraudulent” to adjust a series of data? Perhaps one needs to review the literature to discern why.

Anyway, is this all the latest source code?

Regardless, given the complexity, and how this material was never supposed to come forward, if this is a fraud, I am confused as to why the black box made to produce the results is so complex.

What we are seeing here is a very complex topic that people expect to be settled by sound bites and speculation. And yet, if those sound bites are not totally thorough, then GOTCHA!

I suspect the better approach would be to crawl through all of the related published documentation, and then see what they were up to. Otherwise, how is one truly to argue what was meant by something like:


valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,2.6,2.6,2.6]*0.75 ; fudge factor

Anyone not willing to do so, but who is willing to publish defamatory dissertations, clearly is not being objective.

If you are interested in what has been found with the code and the data, it is publicly available at wattsupwiththat.com. There you can find a lot of code-level details, versions, data sets, etc.

Note: The wattsupwiththat.com web site appears to take a position on the controversy.

I realize that you are not programmers, but I think most people understand the meaning of the terms “ARTIFICIAL CORRECTION” and “fudge factor”. This is from the source code that was in the leaked information. It is its own context. It is an unambiguous set of directions to a computer to perform some operations. It can only be interpreted by the computer in one way, the way the author wrote it! Computers do not tell the truth; they only say what they have been told to say.

Forget the emails, look at the code, note the ‘fudge factor’

From FOI2009/FOIA/documents/harris-tree/briffa_sep98_e.pro

; PLOTS 'ALL' REGION MXD timeseries from age banded and from hugershoff
; standardised datasets.
; Reads Harry's regional timeseries and outputs the 1600-1992 portion
; with missing values set appropriately. Uses mxd, and just the
; "all band" timeseries

2.6,2.6,2.6]*0.75 ; fudge factor

if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'

Some code removed here for brevity.

; Now normalise w.r.t. 1881-1960

Some code removed here for brevity.

; Now plot them

So you think code comments created by someone attempting to update someone else’s obsolete programming are a damning forensic analysis? It’s called thinking out loud, and usually it’s a written representation of the new programmer trying to understand the original design. Par for the course in any large project.

But you’re not a programmer are you? Just excitable and aching to step on to the playing field, with a brain that can’t hold more than a single word at a time. Fudge! Artificial! Trick! Communist! Gubmint! algore!

Forget the comments. Look at the CODE.

The code is doing an interpolation using hard coded constant values that were typed in by a person to alter the outcome of the model just prior to output.

Comments can be misleading, debug only code.

Why should I believe this code hasn’t been modified given the fact that comments were added by someone in a way that would result in errors being thrown? C-family languages use a semicolon to indicate the end of a statement and anything coming after that will be treated as a new line of code unless formatted as a comment, e.g., starting the new line with a number sign or “/*”. If you added the comments – that obviously weren’t put there by the coders – then there is every reason to believe that you added the fudges, etc – in which case you are clearly bull-fudging us…

You should not believe me. You should check the facts yourself, ask questions, listen to the answers, and then test what you are being told against things you yourself can prove by experiment.

The $ sign at the end of the ‘valadj’ declaration is just a line continuation character in Fortran, so the vector initialization includes the next line also.

The computer code does not spin at all! Like Mann and Jones with separate stories trying to explain why they wanted to trick the world into thinking it was warming.

No, the computer code does not spin at all, and it shows that it was a deliberate attempt to manipulate scientific data to make us all think it was warming!

Global warming is beyond a shadow of a doubt the biggest world hoax ever exposed!

Probably the comment should have said “empirical” correction rather than “very artificial”. This seems to be in connection with the tree data, which has to be corrected for known reasons that have been fully documented elsewhere. It is hard enough to read code; an accurate assessment cannot be done out of context. Duh.

Obviously C-family language.

In which case the line…


… would throw an error.

Which means someone added that line afterwards and it wasn’t part of the actual code.

Perhaps it is time to fess up.

It is Fortran. I guess if you wanted it to be ‘C’ then it must be ‘C’, even though it is Fortran. You might think that if you tell enough people that it is ‘C’ then somehow either it will become ‘C’ or they will just become believers and know you are telling them the facts.

In the end though, it will still be Fortran, no matter what anyone says.

Fortran usually doesn’t use semicolons, but I see that for example Fortran 95 Language ISO/IEC 1539:1995 permits statements to be separated by semicolons:

“Put more than one statement per line by separating statements with a semicolon ; . Null statements are OK, so lines can end with semicolons.”

As opposed to some C-family languages that use # to begin single line comments, this version of Fortran uses exclamation points:

“The rest of any line is a comment starting with an exclamation mark ! .”

I don’t see anything about multiline comments – although that would be irrelevant to my criticism.

In any case, judging from:

Compact Fortran 95 Language Summary

… the very same criticism that I raised earlier would apply. Comments do not automatically start after a semicolon. A semicolon delimits one statement from another. Therefore the line that I quoted would raise an error – just as it would in C-family languages. At least in the version of Fortran that I am citing.

I assume the flavor of Fortran you are using is different…?

If so cite the specs. Give us references.

Otherwise my criticism stands: the code you give wouldn’t run. It would raise errors. Therefore you added the comment. So what else did you add? That is unless the “code” was passed on to you by someone else that added the comment and possibly mangled the code on purpose.

Then it would appear that you’ve been had. As far as I can see, it isn’t code that was actually in use. Couldn’t have been as there is no flag indicating that the “comment” should not be interpreted as a line of code. Therefore it would have thrown an error. Semicolons don’t separate lines of code from comments. Not in C-family languages. Not in Fortran 95. At best they only separate statements from other statements – so that you can put more than one statement on the same line, or in C-family languages where semicolons are mandatory, permit you to break up a given statement onto more than one line so that it is more human-readable.

It appears to be IDL: http://en.wikipedia.org/wiki/IDL_(programming_language)

From Wikipedia:
“IDL is vectorized, numerical, and interactive, and is commonly used for interactive processing of large amounts of data (including image processing). The syntax includes many constructs from Fortran and some from C.”

From the IDL Programming Language manual (www.ittvis.com/portals/0/pdfs/idl/building.pdf)

“Files containing IDL programs, procedures, and functions are assumed to have the filename extension .pro.”

“In IDL, the semicolon (;) is the comment character. When IDL encounters the semicolon, it ignores the remainder of the line.”

So I guess that is why it looks like both ‘C’ and Fortran.
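That comment rule is easy to demonstrate. Here is a minimal Python sketch of it (the helper name is mine, and it naively ignores semicolons inside string literals): in IDL, everything from the first ‘;’ on a line is a comment, so the quoted “fudge factor” line parses as a plain array assignment and would not throw the error claimed above.

```python
def strip_idl_comment(line: str) -> str:
    """Drop everything from the first ';' onward, as IDL's parser would.

    Naive sketch: does not handle semicolons inside string literals.
    """
    return line.split(";", 1)[0].rstrip()

src = "valadj=[0.,0.,0.,-0.1]*0.75 ; fudge factor"
print(strip_idl_comment(src))  # -> valadj=[0.,0.,0.,-0.1]*0.75
```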

Yes, it looks like IDL. Not that surprising that IDL would do things similarly but with a twist. But Fortran at least is common in climatology. Heavy on the math and a lot of legacy – from what I understand.

However, the use of the constant “valadj” (valid adjustment? value adjustment?) may simply be a scaling factor for drawing the results. It wouldn’t make much sense to graph results that are at a variance with the data that is being plotted, would it?

Just imagine the conversation: “The number says 3 here, but you’ve gone and drawn a 5! Now what’s up with that?”

No reason to assume that the code is designed to skew the results. But adjusting the way in which something is drawn – the format – by means of a constant factor, well, that is a different story.

However, it is quite possible that there is a bug in the code that is getting corrected at the end of a process of calculation rather than at the source. As a coder that sort of thing would make me somewhat uncomfortable, but sometimes you have to move on and a band-aid solution is better than no solution at all. But it may also be an adjustment intended to compensate for how something else is drawn – so that they will be using the same scales. Hard to say.

In any case, the code is actually more opaque than the letters. Particularly if you don’t have the wider context. Would be nice if it were a little better documented. But not quite the evidence of a grand conspiracy that you took it for, eh?

I have made no such claim.
I have not at any time said anything other than ‘look at this’.
I have not said why it might be being done or even if it was used at all.
I have made no editorial comments nor have I made or directed any personal comments to or about anyone.

What I think is not relevant, it is what it is, and will continue to be what it is, regardless of my opinion.

If it was used, then it would be a curious place to make a data correction, just before output, rather than as part of the modeling logic.

It could be, as you have speculated, a ‘scaling factor for drawing the results’, but how can you know without looking into it further?

We have the code, we know what the language is, you appear to be a programmer, it should not be difficult to determine the effect of using the valadj at this point in the processing.

That is if we choose to look.

I don’t really have the time for looking over the code – even assuming you were able to get all that is relevant. I spent several hours last night redoing, checking and rechecking material involving over $230 million used to finance the same network of organizations Exxon uses for its disinformation campaign – and that has sucked me into a much larger project. As it is I tend to bite off more than I can chew.

If that weren’t enough I am also catching up on school work after a case of the flu. Finally, as I indicated, it doesn’t appear that you found the smoking gun that you were looking for. Give us some genuine analysis by one of the “contrarian” programmers if you wish and then we (either myself or other programmers on the side of climate science) will go from there. Assuming someone has the time.

Our IT guy, who is completely neutral in the AGW debate, read some of the code published and told me it does not look good. Apparently, they appear to have fudged some data in some program. What I am expecting is that the denial machine will pass the data to computer geeks, who translate the code into a language understandable to the general public. And in a few days, they may have more ammunition emerging, with the usual twist added.

Yes, read the code! But include the whole thing.

The code you removed “for brevity” includes the following:


Every other time “yearlyadj” (the adjusted array) shows up, it is being modified; here and here alone is it being used in a calculation (specifically to modify tsin from yyy to yyy+yearlyadj; the pair of lines directly below it are identical but do not use the adjustment).

This is the /only/ place where the adjusted array is actually used to produce a result - on a line that is *commented out* (i.e. has no function in the final code but can be readily decommented for debugging purposes). Even if you are not familiar with the language, the use of a semicolon to denote a comment is obvious from the passage you quoted.

Is this the best you can do? Omitting code that shows adjustments are not used in the final version, and critically doing so without providing any evidence that fudged results made it into the peer-reviewed literature beyond this baseless allegation?

It does appear that ‘valadj’ (the arbitrary constants) is used to create ‘yearlyadj’, which is then added to ‘densall’ (the result of the modeling), which is then output as part of filter_cru ‘tsin=densall’.

This all happens after data loading and normalization, and is the last thing before plotting.

; Now plot them
cpl_barts,x,densall,title=’Age-banded MXD from all sites’,$
endfor ;
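What those lines appear to compute can be sketched in Python. This is a reconstruction under my own assumptions, not the original program: I take the yrloc/valadj knot values as they appear in the leaked file, and use np.interp, which reproduces the linear interpolation that IDL’s interpol() performs.

```python
import numpy as np

# Knot points as they appear in the leaked file: 1400, then 1904-1994 in
# steps of 5 years. valadj is the "fudge factor" vector, scaled by 0.75.
yrloc = np.concatenate(([1400], np.arange(1904, 1995, 5)))
valadj = np.array([0., 0., 0., 0., 0., -0.1, -0.25, -0.3, 0., -0.1, 0.3,
                   0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]) * 0.75
assert len(yrloc) == len(valadj)       # the IDL code checks this too

timey = np.arange(1400, 1995)          # yearly time axis
yearlyadj = np.interp(timey, yrloc, valadj)  # one adjustment per year

# Adding yearlyadj to a series (densall in the original) shifts only the
# post-1900 portion, since all the early knots are zero.
print(yearlyadj[timey == 1990][0])     # -> 1.95 (i.e. 2.6 * 0.75)
```

If this sketch is faithful, the adjustment is zero before about 1900 and ramps up to +1.95 by the late 1980s, which is consistent with the point made above: whatever it is for, it only touches the modern end of the curve.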

Did Phil Jones use the word “trick” only once in the whole hacked email series?

Shoddy having to justify stolen private communication to a bunch of liars, cheaters, destructive nit pickers (so-called climate skeptics).

There’s a lot of email correspondence to go through, but I only see the one mention.

If you have the package of leaked information you could check for yourself. On the other hand you could choose to simply believe what you have been told.

0843161829.txt:but I swear I pulled every trick out of my sleeve trying to milk
0942777075.txt: I’ve just completed Mike’s Nature trick of adding in the real temps
1065206624.txt: a warning to the rest of us. So the trick is to find the middle ground between
1103236623.txt: after this time), you cover ALOT. The trick may be to decide on the main message and use
1103236623.txt: So, the trick is for you to lead us (Dick, Keith, me - maybe Julie - ENSO expert) to
1179416790.txt:if that doesn’t do the trick I don’t know what will,
1188508827.txt:what he wants. Going to news medium will not do his trick because he
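A count like the one above can be sketched in Python; the directory, file names, and sentences below are made-up stand-ins for the leaked archive, so only the counting approach carries over.

```python
import pathlib
import re
import tempfile

# Stand-in archive: made-up files imitating the numbered .txt emails.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "0843161829.txt").write_text("I pulled every trick out of my sleeve")
(tmp / "1179416790.txt").write_text("if that doesn't do the trick")
(tmp / "1234567890.txt").write_text("nothing relevant here")

# Count whole-word, case-insensitive occurrences of "trick" per file.
hits = {p.name: len(re.findall(r"\btrick\b", p.read_text(), re.IGNORECASE))
        for p in sorted(tmp.glob("*.txt"))}
print({name: n for name, n in hits.items() if n})
# -> {'0843161829.txt': 1, '1179416790.txt': 1}
```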

You could try and read the question. The question was how many times Phil Jones used the word “trick”. Only one of your examples (the Mann-trick) is from Phil Jones.

Tricky to find anything suspect in your selection. I think your cherry picking does not do the trick. Err, what is wrong with adding the real temps? Should he have added the wrong temps?

As said, bad situation having to justify your stolen privacy against a bunch of sharks. In EEC countries, stolen, i.e. illegitimately acquired data are not allowed as evidence in court. Hence, if some smeary lowlife was going through and commenting on my emails, I would take legal action against them. That’s voyeurism! Anybody with character would keep their nose out. I have not even searched for these emails online.

Isn’t Trick the name of Sarah Palin’s son? Or was it Trig or Tripp?

My error. I simply asked the computer to find the lines with the word ” trick ” in them and that list is what the computer returned. I got what I was looking for. I should have understood the original question more fully before I presented those results. Sorry.

Actually this is exactly what was needed – in order to see how the climatologists generally use the term “trick.” As near as I can tell it is the same way that I would use it as either a programmer or a writer – “to do something with excellence or insight” rather than as it would be meant if someone were to speak of “dishonestly tricking someone else,” which is of course how the denialists would have you read it.

A word on point 4 (peer review): I totally understand this statement. I was in a situation where a bunch of continuum mechanics deniers around a professor called Tim Bell (no pun intended) tried to sneak their articles into top-of-the-line peer-reviewed journals. For ca. 15 years they were kept to secondary publications, until some reviewers got deceived. On the other hand, I had real problems publishing a rebuttal of that garbage, simply because the reviewers were thinking in 2D and not in 3D, as required. I can show you dozens of publications on the topic in peer-reviewed journals that are absolute rubbish and damaging to this field of science.

Well, it has its flaws. One thing I disliked is that the reviewer does not have to give their name. I was always revealing my identity, but my PhD advisor advised me against it before I had a job. Obviously, it can be very political. Just as with everything in life, there are good people and there are shysters - picture somebody with a denier agenda reviewing your work. The other thing I disliked is that the author cannot criticize the review. As said, I got shot down by a guy who didn’t have a clue - anonymously. Years later, I talked to the guy about the review, not knowing that he had been the reviewer. That was funny because I really embarrassed him. The paper eventually got published in this very magazine - after a re-write.

And if you want to know how the authors of this paper tackled the deniers, you can download it here: http://files.me.com/jkraus/0z7dmc

if this is his “explanation” then they are in as much trouble as i thought. (wait till the next series of e-mails comes out and debunks his explanations). the #5 answer is absurd but of course passes the standard here. why would he be interested in “including” (containing?) something he considers putative? and “try to contain” does not mean include. why would it have been “nice” to do if it’s putative anyway? if mann were, say, russian and had a language issue you might get away with it, but last i checked he is american. as for #1, i guess that’s an admission. boy are those guys screwed, but at least there is a chance that we won’t be.
kevin, you asked if trolls were being paid and i said yes, by e-boom haha. i guess that didn’t pass the test for responses here. i thought it was appropriate that people know that you think the trolls (or anyone else with an opinion) shouldn’t be paid by “big oil” yet most of you “appear” to be paid by e-boom (an alternative energy site). i’m just sayin’…..

CNN reported on the flap due to hacking last night. Quoted Inhofe.

NPR/Jim Lehrer had a discussion with 2 guys on Obama and climate control. The AEI guy did not dispute AGW. His points were moderate.

Did everyone hear the good news today? China agreed to “reduce” emissions. By that they mean they will only agree to non-binding targets based on emissions intensity, like Harper has set up in Canada. They set a date of 2020 to reduce emissions intensity by up to 40%.

This means that if you go by their average economic growth, they will increase emissions by the equivalent of 17 Canadas by 2020 if they stick to their non-binding targets! LMAO

Kyoto is dead, the global warming hoax is dead! Can you really see DeSmog here in 10 years telling us that the earth will melt and we will all die, and still have any credibility left? China has bought us enough time for even the most deluded to wake up from the GW cult!

Yay China! let the good times roll!!!!!!

there will be no such thing as real emission control unless there is some kind of mass depopulation. A worldwide catastrophic depression would slow things down a bit maybe, but if you really want Carbon 350, you have to have massive depopulation. I think that should be pretty obvious.

Kevin Grandia,

Beginning with:

Competitive Enterprise Institute to sue RealClimate blogger over moderation policy, comment 19
November 26, 2009 at 1:49 pm

… I outline a possible response by the pro-science community to the personal attacks upon individual climatologists, including the hacking of email belonging to climatologists at Hadley CRU for the purpose of a disinformation campaign and the declared intent to launch a lawsuit against Gavin Schmidt of RealClimate. I give figures for the funding of the Competitive Enterprise Institute by the Exxon, Scaife, Bradley, Koch and Coors foundations and – while it has yet to show up – more broadly the funding received by denialist organizations from the Scaife, Bradley, Koch and Coors foundations, where the denialist organizations are also part of the Exxon-funded network for attacking climatology.


Unknown hackers have broken into Lorne Gunter’s mail server at Edmonton’s CRU Hadley East Anglia Centre and removed 2 TB of emails. The stolen messages are presently held in a landfill near Edmonton - but they will have to be removed soon as they stink strongly and emit lots of gas. Gunter himself was not capable of comment after being hospitalized following a polar bear attack. The bear had been floating down the Saskatchewan River on a piece of arctic ice, removed from the ever-growing arctic ice mass.

One of Gunter’s emails is published here:
To: xxx
Thank you for your e-mail. Rest assured I read every one. However, as I receive as many as 800 a day, I am unable to answer them all.

We will examine whether Gunter’s publications will have to be removed from the records based on the wrongdoings revealed by this email. Stay tuned.

You just have to be amazed by Mann’s bare-faced cheek.
The arch cherry-picker himself accusing others of cherry-picking.

Note he does not refer to the Wegman report (http://www.climateaudit.org/pdf/others/07142006_Wegman_Report.pdf)

or some of the less flattering parts of the report (http://books.nap.edu/openbook.php?record_id=11676&page=83)

The standard proxy reconstructions based on linear regression are generally reasonable statistical methods for estimating past temperatures but may be associated with substantial uncertainty.

There is a need for more rigorous statistical error characterization for proxy reconstructions of temperature that includes accounting for temporal correlation and the choice of principal components.

The variability of proxy reconstructed temperatures will be less than the variability of the actual temperatures and may not reproduce the actual temperature pattern at particular timescales.

Examining the prediction of the reconstruction in a validation period is important, but the length of this period sets limits on a statistical appraisal of the uncertainty in the reconstruction. Most critically, the relatively short instrumental temperature record provides very few degrees of freedom for verifying the low-frequency content of a reconstruction.

The differences among a collection of proxy reconstructions that have not been deliberately created as a representative statistical sample may not reveal the full uncertainty in any one of them.

As part of their statistical methods, Mann et al. used a type of principal component analysis that tends to bias the shape of the reconstructions.

Huybers (2005) and Bürger and Cubasch (2005) raise an additional concern that must be considered carefully in future research: There are many choices to be made in the statistical analysis of proxy data, and these choices influence the conclusions. Huybers (2005) recommends that to avoid ambiguity simple averages should be used rather than principal components when estimating spatial means. Bürger and Cubasch (2005) use several dozen statistical methods to generate examples of reconstructions; these reconstructions differ substantially even though they are based on the same data.

Sampled trees should not show signs of disturbance factors such as insect infestation, grazing, fire damage, human utilization, fungal infestation, or mistletoe attack (Schweingruber 1988, Fritts and Swetnam 1989). NOTE UNLIKE MANN’S METHOD

Ensure that sample (site, tree, and core) selection is based on a priori rather than a posteriori criteria (Schweingruber 1988, Fritts and Swetnam 1989). NOTE UNLIKE MANN’S METHOD

etc. etc.