Monday, 3 June 2013

June 7: Noel Cressie - Bayesian Hierarchical ANOVA for Climate Change Projections

Next up is Noel Cressie, an expert in spatial statistics who has written two incredibly thorough and detailed textbooks on the subject. His recent focus is on Bayesian hierarchical approaches for modelling spatial data. He has suggested the following paper as being particularly relevant for his talk:

Bayesian Hierarchical ANOVA of Regional Climate-Change Projections from NARCCAP Phase II

There are a fair number of equations in the paper, but please don't let this be a deterrent! Noel recently gave a talk on this work at UNSW and, as an attendee, I can say that it was incredibly exciting and very approachable for non-statisticians.

Feel free to post any questions or comments here in anticipation of the study group at 2-3pm on Friday at UNSW.


  1. We started with the difference between GCMs and RCMs: spatial scale, which, as seen last week, implies that different factors are important and/or different factors can be included in the model. GCMs are very coarse relative to what can be achieved within a region. Also, instead of trying to get an approximate model that works reasonably well for the whole globe, an RCM can focus on the region of interest (North America here) and aim for more accurate predictions in that region.

  2. Some general questions/comments about the paper's goal and contribution: a significant challenge addressed here is making inference about maps ("spatial fields"). In this case we have 8 maps (two RCMs by four seasons), and we want to see how much of the variation across maps is explained by RCM, by season, or by neither. This is made especially tricky by the size of the maps: each has tens of thousands of pixels. The approach uses a Bayesian hierarchical model with a Cressie special trick ("Spatial Random Effects models"), which makes the analysis not just do-able but also exact (in a Bayesian sense) even for big ugly maps.
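To make the "variation across maps" idea concrete, here is a rough sketch of the classical analogue: a pixel-by-pixel two-way ANOVA on simulated toy maps with a 2-RCM-by-4-season layout. This is not the paper's method (which uses a full Bayesian hierarchy with Spatial Random Effects), and all the sizes and effect scales below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setup: 2 RCMs x 4 seasons, each "map"
# a vector of pixels (the real maps have tens of thousands of pixels).
n_rcm, n_season, n_pix = 2, 4, 100
rcm_effect = rng.normal(0.0, 1.0, size=(n_rcm, 1, n_pix))
season_effect = rng.normal(0.0, 2.0, size=(1, n_season, n_pix))
noise = rng.normal(0.0, 0.5, size=(n_rcm, n_season, n_pix))
maps = rcm_effect + season_effect + noise  # shape (2, 4, n_pix)

# Classical balanced two-way ANOVA decomposition, done at every pixel:
grand = maps.mean(axis=(0, 1), keepdims=True)        # overall mean map
rcm_mean = maps.mean(axis=1, keepdims=True)          # mean map per RCM
season_mean = maps.mean(axis=0, keepdims=True)       # mean map per season

ss_rcm = n_season * ((rcm_mean - grand) ** 2).sum(axis=(0, 1))
ss_season = n_rcm * ((season_mean - grand) ** 2).sum(axis=(0, 1))
resid = maps - rcm_mean - season_mean + grand        # interaction + noise
ss_resid = (resid ** 2).sum(axis=(0, 1))
ss_total = ((maps - grand) ** 2).sum(axis=(0, 1))

# Fraction of variance attributed to each source, averaged over pixels:
frac_rcm = (ss_rcm / ss_total).mean()
frac_season = (ss_season / ss_total).mean()
frac_resid = (ss_resid / ss_total).mean()
```

With these toy effect scales, the season term dominates (it was simulated with the largest variance), which is the kind of statement the paper makes properly, with full posterior uncertainty, over real NARCCAP output.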

  3. There was a bit of discussion about Frequentist vs Bayesian methods, and how they differ fundamentally in what they mean by "probability" (relative frequency vs uncertainty), and practically in terms of the types of problems that can be addressed. Bayesian methods, via MCMC, enable exact inference (well, accepting the Bayesian interpretation of probability) even for quite complicated problems, and this paper is a case in point. A Frequentist approach, on the other hand, would at best provide large-sample approximate inference for this type of question.
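For anyone who hasn't seen MCMC before, here is a minimal sketch of the idea on a problem deliberately chosen so the exact posterior is known: random-walk Metropolis sampling for a normal mean with a normal prior. This is a toy, not the paper's sampler, and every number in it is made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: 50 observations with known sd, unknown mean.
data = rng.normal(3.0, 1.0, size=50)
sigma = 1.0                 # known observation sd
prior_mu, prior_sd = 0.0, 10.0  # vague normal prior on the mean

def log_post(mu):
    """Log posterior (up to a constant) for the unknown mean."""
    log_lik = -0.5 * np.sum((data - mu) ** 2) / sigma ** 2
    log_prior = -0.5 * (mu - prior_mu) ** 2 / prior_sd ** 2
    return log_lik + log_prior

# Random-walk Metropolis: propose a nearby value, accept with
# probability min(1, posterior ratio).
mu, samples = 0.0, []
for _ in range(20000):
    prop = mu + rng.normal(0.0, 0.5)
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop
    samples.append(mu)
samples = np.array(samples[5000:])  # discard burn-in

# Exact conjugate posterior mean, for comparison with the MCMC draws:
post_prec = 1.0 / prior_sd ** 2 + len(data) / sigma ** 2
post_mean = (data.sum() / sigma ** 2 + prior_mu / prior_sd ** 2) / post_prec
```

The point is that the MCMC draws recover the exact posterior here, and the same machinery keeps working when the model is a giant hierarchical one with no closed form, as in the paper.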

    The two philosophies, via their contrasting underlying definitions of what probability means, are to my understanding incompatible. But this doesn't seem to bother many people: most modern analysts switch between Frequentist and Bayesian analyses without feeling the slightest bit guilty about it!