I've been thinking a bit about the complexity of climate models, and in particular the GCMs that require months of supercomputer time to produce output.
It isn't obvious to me that problems of this sort benefit from that level of sophistication or detail when the general assumptions may not hold. Nevertheless, sophisticated models seem to be more and more common. My question is: what additional predictive benefit do we gain from these sophisticated models? I suppose that when they don't hold up, we know we are missing something in the model, but I'm skeptical of our ability to incorporate sophistication when we don't comprehend the basics.
We can't predict a hurricane track more than a few days out in most cases. Yet it seems to me that the GCMs are far more complicated and make far more assumptions than hurricane models.
For example, in Pacala's comments in the video linked at the end of Stabilization wedges, he indicates that if CO2 fertilization doesn't hold up as a sink process, then the problem of global warming may be more than four times worse than we assumed (i.e., we'll need 34 wedges instead of 8). Yet we run multi-decadal simulations of weather models that incorporate assumptions like the one for CO2 fertilization.
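A quick back-of-the-envelope check of the wedge numbers cited above (8 and 34, as quoted from Pacala's talk) confirms the "more than four times worse" claim:

```python
# Wedge counts as quoted in the post from Pacala's comments:
# 8 stabilization wedges in the baseline case, 34 if CO2
# fertilization fails as a carbon sink.
baseline_wedges = 8
no_fertilization_wedges = 34

factor = no_fertilization_wedges / baseline_wedges
print(f"Problem scales by a factor of {factor:.2f}")  # 4.25, i.e. more than 4x
```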
In 2000, a comparison between measurements and dozens of GCM simulations of ENSO-driven tropical precipitation, water vapor, temperature, and outgoing longwave radiation found good agreement between measurements and simulation for most factors. However, the simulated change in precipitation was about one-fourth less than what was observed. Errors in simulated precipitation imply errors in other processes, such as errors in the evaporation rate that supplies the moisture to create precipitation. The other possibility is that the satellite-based measurements are in error. Either way, more progress is required before we can reliably monitor and predict such changes.
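The kind of discrepancy described above amounts to a simple relative shortfall between observed and simulated changes. Here's a minimal sketch; the numeric values are invented for illustration (the 2000 study's actual data are not reproduced here), chosen only so the simulated change is about one-fourth less than the observed one:

```python
# Hypothetical illustration of the model-vs-observation comparison.
# These anomaly values are made up; only the ratio matters here.
observed_change = 1.00   # observed precipitation change (arbitrary units)
simulated_change = 0.75  # simulated change, about one-fourth smaller

shortfall = (observed_change - simulated_change) / observed_change
print(f"Simulation underestimates the observed change by {shortfall:.0%}")  # 25%
```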
So what am I missing here? How useful is it to model at increasing levels of sophistication when we're not sure of some of the more important basics? Or, more importantly, how important is it to make the predictions of these models public when it seems to me they are not yet ready to predict anything other than broad ranges of variability?
I am worried that all this sophistication makes it seem like we know more than we really do, while at the same time it obscures, and makes seem less certain, the things we are actually pretty confident about.
The biggest problem with these sophisticated simulations is that they are very opaque to anyone but the scientists who work on them. The general public cannot understand them so they have to "trust" the scientists. The scientists themselves may make programming errors that they are not even aware of. Yet the core problem is fairly easy to understand and not disputed. So science ends up creating a PR problem that is easily exploited by those who benefit from the status quo.
It seems to me that this may be one area where we can help the larger communications effort in a concrete way. If we can lay out the basics, and what those basics indicate, simply and in an easy-to-understand manner, that should be more convincing.