
# Sophisticated climate models

I've been thinking a bit about the complexity of the climate models and in particular the GCMs that require months of supercomputer time for output.

It isn't obvious to me that problems of this sort benefit from that level of sophistication or detail when the general assumptions may not hold. Nevertheless, sophisticated models seem to be more and more common. My question is what additional predictive benefit do we gain from these sophisticated models? I suppose that when they don't hold up we know we are missing something in the model but I'm skeptical of our ability to incorporate sophistication when we don't comprehend the basics.

We can't predict a hurricane track out more than a few days in most cases. Yet it seems to me that the GCMs are far more complicated and make far more assumptions than hurricane models.

For example, in Pacala's comments in the video linked to at the end of Stabilization wedges he indicates that if CO2 fertilization doesn't hold up as a sink process, then the problem of global warming may be more than four times worse than we assumed (i.e. we'll need 34 wedges instead of 8). Yet, we run multi-decadal simulations of weather models that incorporate assumptions like the one for CO2 fertilization.

Another sample problem listed on Wikipedia (paper here):

In 2000, a comparison between measurements and dozens of GCM simulations of ENSO-driven tropical precipitation, water vapor, temperature, and outgoing longwave radiation found similarity between measurements and simulation of most factors. However, the simulated change in precipitation was about one-fourth less than what was observed. Errors in simulated precipitation imply errors in other processes, such as errors in the evaporation rate that provides moisture to create precipitation. The other possibility is that the satellite-based measurements are in error. Either way, progress is required in order to monitor and predict such changes.

So what am I missing here? How useful is it to model at increasing levels of sophistication when we're not sure of some of the more important basics? Or, more importantly, how important is it to make public the predictions of these models when it seems to me they are not yet ready to predict anything other than broad ranges of variability?

I am worried that all this sophistication makes it seem like we know more than we really do, while at the same time obscuring the things we are actually quite confident about and making them seem less certain.

The biggest problem with these sophisticated simulations is that they are very opaque to anyone but the scientists who work on them. The general public cannot understand them so they have to "trust" the scientists. The scientists themselves may make programming errors that they are not even aware of. Yet the core problem is fairly easy to understand and not disputed. So science ends up creating a PR problem that is easily exploited by those who benefit from the status quo.

It seems to me that this may be one area where we can help the larger communications effort in a concrete way. If we can lay out simply and in an easy-to-understand manner the basics, and what those basics indicate, that should be more convincing.

1.

Yeah, that's been troubling me since I joined Azimuth, which is the reason that I dedicated most of my time budget to climate science.

But: Curtis said:

We can't predict a hurricane track out more than a few days in most cases. Yet it seems to me that the GCMs are far more complicated and make far more assumptions than hurricane models.

Sure, but in climate science we look at large-scale tendencies over longer time scales like 10, 30 or 100 years. I don't have a problem with this, because I was trained in statistical physics, which is very successful precisely because it ignores microdynamics and looks at averages instead. (And most prominent economists are not very successful traders.)
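The statistical-physics point can be illustrated with a toy simulation (my own illustration, nothing to do with any climate model): a single random walk endpoint is essentially unpredictable, but the ensemble mean over many walks is tightly constrained.

```python
import random

random.seed(42)

def random_walk(steps):
    """Sum of +/-1 steps: a stand-in for unpredictable 'weather'."""
    x = 0.0
    for _ in range(steps):
        x += random.choice((-1.0, 1.0))
    return x

steps, n_walks = 1000, 5000
walks = [random_walk(steps) for _ in range(n_walks)]

# One trajectory wanders anywhere within roughly +/- 3*sqrt(steps) ~ +/- 95.
single = walks[0]
# The ensemble mean has spread ~ sqrt(steps / n_walks) ~ 0.45, so it stays near 0.
ensemble_mean = sum(walks) / n_walks

print(f"single walk endpoint: {single:+.1f}")
print(f"ensemble mean:        {ensemble_mean:+.3f}")
```

The analogy is loose, of course: climate averages are not independent samples. But it shows why predicting a 30-year mean can be easier than predicting next week.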

Second but:

The biggest problem with these sophisticated simulations is that they are very opaque to anyone but the scientists who work on them.

The problem is that the stakeholders have changed. Scientists have used models for themselves, to better understand certain aspects of climate dynamics. In that situation it is OK that no one except them can understand the models: they draw their conclusions from model runs and from other sources, and write about those conclusions, which can then be discussed in the scientific community without everyone needing a full understanding of the model. Now the stakeholders include you and me...

Here is what troubles me:

• Do GCMs include the most important processes? Do we include minor processes while ignoring more important ones?

• How can we compare model results with reality? We don't have enough historical climate data!

• Do GCMs include the most important sub-grid processes, i.e. is the parameterization correct? How do we know?

• What about artefacts that are introduced by the discrete approximation?

• What about artefacts due to numerical instability?

• What about artefacts from interactions of modules that were not anticipated by the authors? What if, for example, one module produces an input that another module simply wasn't designed to handle?

• What about simple coding errors in the models themselves?

• What about coding errors in the many libraries that GCMs use?

For example: does anyone believe that the numerical linear algebra package LAPACK is bug-free? (I mean, besides the known bugs that are documented on the LAPACK page.)

All of these concerns seem to make any politically motivated criticism of climate models superfluous; there are enough scientifically sound questions left open...
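To make the numerical-instability worry above concrete, here is a textbook toy (not a sketch of any real GCM code): an explicit finite-difference solver for the 1-D heat equation blows up as soon as the time step exceeds the well-known stability limit $\Delta t \le \Delta x^2 / (2\kappa)$, even though the code contains no bug in the usual sense.

```python
def diffuse(u, kappa, dx, dt, steps):
    """Forward-Euler, central-difference steps for du/dt = kappa * d2u/dx2,
    with fixed (Dirichlet) boundary values."""
    u = list(u)
    r = kappa * dt / dx**2          # stability requires r <= 0.5
    for _ in range(steps):
        u = [u[i] + r * (u[i-1] - 2*u[i] + u[i+1]) if 0 < i < len(u)-1 else u[i]
             for i in range(len(u))]
    return u

n, dx, kappa = 21, 1.0, 1.0
u0 = [0.0] * n
u0[n // 2] = 1.0                    # initial heat spike in the middle

stable   = diffuse(u0, kappa, dx, dt=0.4, steps=200)   # r = 0.4 <= 0.5: decays smoothly
unstable = diffuse(u0, kappa, dx, dt=0.6, steps=200)   # r = 0.6 >  0.5: oscillates, explodes

print("max |u|, stable:  ", max(abs(v) for v in stable))
print("max |u|, unstable:", max(abs(v) for v in unstable))
```

Nothing in the output of a single unstable run announces "instability"; you have to know the analysis to recognize it, which is exactly the opacity problem.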

2.

Curtis said:

If we can lay out simply and in an easy-to-understand manner the basics, and what those basics indicate, that should be more convincing.

Sure, that's why we have a page about climate models, one about the simplest ones, namely EBMs, and why John wrote about ENSO and interviewed Nathan Urban and Tim Palmer, etc.
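For readers landing here: the simplest EBM mentioned above, the zero-dimensional energy balance model, really does fit in a few lines. This is the standard textbook toy, not the code behind any particular wiki page; the effective emissivity value is the usual illustrative choice that reproduces Earth's observed mean surface temperature.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0    = 1361.0     # solar constant, W m^-2

def equilibrium_temp(albedo=0.3, emissivity=0.612):
    """Zero-dimensional EBM: absorbed solar power balances emitted longwave.

        (1 - albedo) * S0 / 4 = emissivity * SIGMA * T**4

    An effective emissivity < 1 crudely parameterizes the greenhouse effect.
    """
    absorbed = (1.0 - albedo) * S0 / 4.0
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(f"no greenhouse (eps = 1.0):   {equilibrium_temp(emissivity=1.0):.1f} K")  # ~255 K
print(f"with greenhouse (eps=0.612): {equilibrium_temp():.1f} K")                # ~288 K
```

The ~33 K gap between the two numbers is the basic, undisputed core Curtis asked about; everything a GCM adds is refinement on top of this balance.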

3.
edited January 2011

I think this can only really be answered by someone who's a climate scientist, but I'd say that there's a difference between not being very confident of some statement and being in sufficient doubt about the situation that the current best understanding shouldn't be used in current simulations. (Of course, it would also be interesting to perform some runs with very different assumptions about things like ocean $CO_2$ capacity and see how the long term climatological conclusions change.)
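The kind of sensitivity run suggested in the parenthetical can be sketched in miniature. This is a made-up toy with invented parameter values, not any real carbon-cycle model; only the 2.12 GtC-per-ppm conversion is a standard figure, and the sensitivity number is purely illustrative.

```python
def warming_toy(emissions_gtc, ocean_uptake_frac, sensitivity_k_per_ppm=0.01):
    """Toy chain: emissions -> airborne CO2 -> warming.

    About 2.12 GtC raises atmospheric CO2 by roughly 1 ppm; the linear
    'sensitivity' is an illustrative stand-in, not a calibrated value.
    """
    airborne = emissions_gtc * (1.0 - ocean_uptake_frac)
    delta_ppm = airborne / 2.12
    return sensitivity_k_per_ppm * delta_ppm

emissions = 1000.0   # cumulative GtC over the run, a round illustrative number
for uptake in (0.5, 0.3, 0.1):
    print(f"ocean uptake {uptake:.0%} -> warming {warming_toy(emissions, uptake):.2f} K")
```

Even in a three-line model, shifting one sink assumption nearly doubles the headline number, which is the point of doing such runs on the real models.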

One specific comment:

The biggest problem with these sophisticated simulations is that they are very opaque to anyone but the scientists who work on them.

I think the issue is more that, for various reasons, people have become less willing to delegate judgement to experts in areas where they don't have the time to fully understand them. For instance, one could equally say "The biggest problem with this instiki wiki system is that it's very opaque to anyone but the experts who work on it." There are very few complex areas which aren't opaque to those who don't devote the time to become experts.

4.

In the back of my mind I wonder about a few things. I know it would be more useful to do some investigating and put down some answers, so I apologise in advance.

• One of the most successful models was based on an assumption of the Maximum Entropy Production Principle (MEPP). When it was applied to Titan it gave a different answer from other models, and it turned out to be right. So why was it abandoned in favour of more straightforward models? The guy who invented the MEPP model (Prof. Paltridge) got to lead CSIRO Atmospheric Physics, but left to go to the University of Tasmania. I think MEPP is used as a consistency check on models.

• One of the ways the weather moves energy from the equator towards the poles is in rotating fluids (eddies, cyclones). Yet it seems that climate models don't see these unless they're very big? Which makes me wonder whether it wouldn't be better (or at least different, and valuable for that reason) for models to have bigger cells and more complex interactions. In particular one might look at energy crossing cell boundaries in the form of various sorts of momentum, which break down by dimension: (0) expansion (from pressure differences) is a sort of scalar momentum; (1) bulk movement (wind, current) is a vector; (2) rotation is a bivector. Which then makes me wonder why there isn't a trivector (pseudoscalar) momentum. [Of course the ocean doesn't have pressure differences, but I guess it must have places where the sea level is depressed and others where it is raised, and this must fill the same role.] (Hmm, having written this down makes me realise that these things are not conserved at all, since they are being created by heat flows and always tending to dissipate as heat, so maybe they're not the right things to look at.)

5.

Tim van Beek wrote:

in climate science we look at large scale tendencies over longer time scales like 10, 30 or 100 years. I don't have a problem with this, because I was trained in statistical physics, which is very successful by ignoring microdynamics and looking on averages instead.

That makes sense; I actually have much more faith in the longer-term tendencies than in the microdynamics.

I also think it is important for scientists to have sophisticated models as long as one does not believe them more than is actually warranted. There may be emergent phenomena that are not obvious from simple extrapolations.

Tim also wrote:

The problem is that the stakeholders have changed. Scientists have used models for themselves to better understand certain aspects of climate dynamics. In this situation it is Ok that no one except them can understand the models. They draw their conclusions from model runs, and from other sources, and write about their conclusions. These conclusions can then be discussed in the scientific community without the need that everyone has a full understanding of the model. Now the stakeholders include you and me...

Yes, that is the problem. This is no longer a matter for scientists alone. The coming problems will affect us all. So at least some of the models need to be simpler and easier to defend against the inevitable attacks made by those who benefit from the status quo.

David Tweed wrote:

I think the issue is more that, for various reasons, people have become less willing to delegate judgement to experts in areas where they don't have the time to devote to fully understand them. For instance, one could equally say " The biggest problem with this instiki wiki system is that it's very opaque to anyone but the experts who work on it." There are very few areas which are complex which aren't opaque to those who don't devote the time to be experts.

Delegation to experts has certainly lost its appeal. I think the gradual increase in awareness of the ways in which those in power have misled the public over the last century or more has led to a lot of cynicism. People don't trust experts because they believe they have agendas. Further, most people are incapable of independently evaluating the experts.

The problem is not the opacity per se, it is the combination of the opacity **and** the importance of the implications of the model for all of us who inhabit this planet. I can't think of a single other circumstance where a typical person is expected to take another's opinion for something so important to their life when the underlying models used to predict the events are so complex. In short, most people are just never going to be able to understand the typical GCM, or even to decide between the judgments of competing experts.

From my experience, there are three levels of competence:

• Incompetence - inability to evaluate or perform any tasks. When it comes to climate issues, most people are not competent.

• Competence for evaluating expertise - the ability to read and understand the basics and to determine who is a true expert. Most engineers and scientists should be able to reach this level of competence.

• Expertise - the ability to contribute, and to evaluate others' work at a deep level. Even most climatologists are probably not truly expert. In my experience, less than 5% of the people working in a typical domain are true experts. These are the people other experts trust. Does this match what the rest of you see in science? Or is there a higher level of true expertise?

I don't see how we'll get most people on the planet to the point where they'll be able to competently evaluate the experts. The problems are simply too complex. This is a real problem since as you mentioned "people are less willing to delegate judgment." We will need to think about addressing this problem as it will keep people from acting unless/until they dig into it themselves.

From my perspective, what is missing is **credibility, not expertise**.

6.
edited January 2011

I agree with most of what everyone already said. I think this is a place where the Azimuth Project has a chance to make a difference. We can't compete with the teams of experts who are running big complicated climate simulations. But there are other things, equally important, that we might do better!

Curtis wrote:

It isn't obvious to me that problems of this sort benefit from that level of sophistication or detail when the general assumptions may not hold. Nevertheless, sophisticated models seem to be more and more common. My question is what additional predictive benefit do we gain from these sophisticated models?

Some obvious remarks:

1. Until we know the future, it's hard to tell how good we are at predicting it.

2. So, it's hard to know for sure how much any given change in a model improves its prediction ability.

3. Nonetheless, if you learn about an important effect, like how meltwater trickling down to the ice-rock interface in Greenland is drastically speeding up the motion of glaciers, you feel stupid if you don't add it into your model.

4. So, models will keep getting more complicated.

5. Nonetheless, we will keep being surprised by reality.

We can't compare our models to the future, but we can compare them to each other. Check out the IPCC report, which shows what [lots of different models](http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch8s8-2.html#table-8-1) say about the Earth's future temperature and precipitation in [three different scenarios](http://en.wikipedia.org/wiki/Special_Report_on_Emissions_Scenarios#Scenario_families):

![multi-model projections of surface warming](http://www.ipcc.ch/publications_and_data/ar4/wg1/en/fig/figure-10-5-l.png)

Now if I were really good I could say which models were more 'sophisticated', and show you a chart like this where only 'more sophisticated' models were used, and another chart where 'less sophisticated' models were used. And then you might say 'oh, it really looks like the more sophisticated models are converging to the same answer'. Or you might not!

But you'll notice one thing right away: the choice of 'scenario' matters just as much as the model! And the 'scenario' involves what people do! So, **our predictive ability is strongly limited by our ability to predict what _we_ will do**.

Here are the scenarios, in case you're interested. I should put this onto the Azimuth Library:

A1

The A1 scenarios are of a more integrated world. The A1 family of scenarios is characterized by:

* Rapid economic growth.
* A global population that reaches 9 billion in 2050 and then gradually declines.
* The quick spread of new and efficient technologies.
* A convergent world - income and way of life converge between regions. Extensive social and cultural interactions worldwide.


There are subsets to the A1 family based on their technological emphasis:

* A1FI - An emphasis on fossil-fuels (Fossil Intensive).
* A1B - A balanced emphasis on all energy sources.
* A1T - Emphasis on non-fossil energy sources.


A2

The A2 scenarios are of a more divided world. The A2 family of scenarios is characterized by:

* A world of independently operating, self-reliant nations.
* Continuously increasing population.
* Regionally oriented economic development.
* Slower and more fragmented technological changes and improvements to per capita income.


B1

The B1 scenarios are of a world more integrated, and more ecologically friendly. The B1 scenarios are characterized by:

* Rapid economic growth as in A1, but with rapid changes towards a service and information economy.
* Population rising to 9 billion in 2050 and then declining as in A1.
* Reductions in material intensity and the introduction of clean and resource efficient technologies.
* An emphasis on global solutions to economic, social and environmental stability.


B2

The B2 scenarios are of a world more divided, but more ecologically friendly. The B2 scenarios are characterized by:

* Continuously increasing population, but at a slower rate than in A2.
* Emphasis on local rather than global solutions to economic, social and environmental stability.
* Intermediate levels of economic development.
* Less rapid and more fragmented technological change than in A1 and B1.

7.

John said:

And then you might say 'oh, it really looks like the more sophisticated models are converging to the same answer'.

Steve Easterbrook has explained that a central part of quality assurance is the comparison of different models, developed independently by different research groups, with one another. While I agree that this is a very important point, I do have an objection: there is a strong tendency for different people with a similar background to make the same mistakes when they try to solve the same problem. A good example of this effect is the [Monty Hall problem](http://en.wikipedia.org/wiki/Monty_Hall_problem). I guess there is a 99.5% chance of getting the same wrong answer from anyone who has not heard of this problem before (including me, by the way).
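The Monty Hall effect is easy to check by simulation, which is itself a small argument for why running a model can correct a shared intuition:

```python
import random

def monty_hall(trials=100_000, switch=True, rng=random.Random(0)):
    """Simulate the Monty Hall game; return the frequency of winning the car."""
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # The host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in (0, 1, 2) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in (0, 1, 2) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")   # close to 1/3
print(f"switch: {monty_hall(switch=True):.3f}")    # close to 2/3
```

Of course, this only helps when the simulation itself encodes the rules correctly, which loops right back to the verification worries above.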

There is of course also the danger of groupthink, and since the community of climate modellers is rather small and exclusive, I sense a danger of emerging groupthink too. In addition, developers tend to be sloppy when testing their own software, "because they know that it works", which is why some companies assign dedicated test teams with no connection whatsoever to the developer team. This, too, is a danger in the process of quality assurance of GCMs. And so on.

All these objections are best examined by outsiders, rather than insiders, of the climate modeling community (I'm thinking about dedicated professionals like Steve Easterbrook here, not me).

8.

All these objections are best examined by outsiders, rather than insiders, of the climate modeling community (I'm thinking about dedicated professionals like Steve Easterbrook here, not me).

Sorry, are you calling him an insider or an outsider?

Anyway, I agree with everything you said!

9.

Steve Easterbrook himself is neither a climate scientist nor engaged in developing climate models, so I'd call him an outsider.

10.

All of this is a classic debate within climate science. The modeling community tradition has been to move towards the maximum possible model complexity. Those of us closer to the decision side of the science question this. Is it preferable to devote all your computational resources to the most complex (and supposedly "best") possible model? Perhaps this may give you a less biased answer, but you have little quantitative understanding of the uncertainty or sensitivity to assumptions. The modeling community's answer has been to move to multi-model ensembles, so you have several "best estimates" from different complex models. But this still gives you little ability to explore the space of model uncertainty: you're still stuck with the assumptions made by a small number of modeling groups, instead of one. Some of us think that more computational resources should be devoted to "perturbed physics ensembles", which are a larger number of runs of somewhat less sophisticated models, to explore the space of parametric and structural assumptions. But sophisticated models are still important, to tell us which processes are most likely to be poorly represented in the simpler models.
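As a toy illustration of the perturbed-physics idea: sample an uncertain feedback parameter from an assumed prior and rerun a cheap zero-dimensional energy-balance model many times, then look at the spread. Every number here is illustrative, not taken from any real GCM or published prior:

```python
import random

def toy_energy_balance(forcing_wm2, lambda_wm2k, years, heat_capacity=8.0):
    """Toy zero-dimensional energy-balance model:
    dT/dt = (F - lambda*T) / C, stepped annually."""
    t = 0.0
    for _ in range(years):
        t += (forcing_wm2 - lambda_wm2k * t) / heat_capacity
    return t

random.seed(0)
# Perturbed-physics ensemble: sample the uncertain feedback parameter
# lambda from an assumed prior range and rerun the same cheap model.
ensemble = [
    toy_energy_balance(forcing_wm2=3.7,                      # ~2xCO2 forcing
                       lambda_wm2k=random.uniform(0.8, 1.8),  # illustrative prior
                       years=200)
    for _ in range(1000)
]
ensemble.sort()
print("median warming: %.2f K" % ensemble[len(ensemble) // 2])
print("5-95%% range: %.2f-%.2f K" % (ensemble[50], ensemble[950]))
```

The point is that 1000 runs of the cheap model buy you a distribution over outcomes, where one run of an expensive model buys you a single "best estimate" with no handle on the parametric uncertainty.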

11.

The question that bugs me most: how much of "Gaia" is built into the models?

It's not only that James Lovelock (of Gaia theory fame) is way more "pessimistic" than the average climatologist.

Methinks common sense and following the news give reason enough to worry that standard climate models might be too optimistic. The paradigm case is Pakistan in 2009/2010: first extreme drought, killing soil life; then extreme deluge, washing it all away.

So it looks like lots of soil and plant life gets lost to these growing extremes of drought and deluge, diminishing biological carbon storage capacity.

12.

It is now popular to build "Earth system models" with dynamic biospheres. You do see major effects in some of the dynamic vegetation models, such as large-scale dieback of rainforest in the Amazon, creating a weakened carbon sink. Some of them also model biological effects on the ocean carbon pump (e.g., in NPZD models, which track nutrients, phytoplankton, zooplankton and detritus).
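For readers who haven't met them: an NPZD model is just a small set of coupled ODEs moving mass between four boxes. A minimal sketch with made-up rate constants (real NPZD parameterizations are considerably richer):

```python
def npzd_step(n, p, z, d, dt=0.1):
    """One Euler step of a toy NPZD (nutrient-phytoplankton-
    zooplankton-detritus) box model; all rates are illustrative."""
    uptake = 1.0 * n / (0.5 + n) * p     # nutrient-limited phytoplankton growth
    grazing = 0.6 * p * z                # zooplankton grazing on phytoplankton
    p_death = 0.1 * p
    z_death = 0.2 * z
    remin = 0.15 * d                     # remineralization of detritus to nutrient
    n += dt * (remin - uptake)
    p += dt * (uptake - grazing - p_death)
    z += dt * (0.7 * grazing - z_death)  # 70% assimilation efficiency
    d += dt * (p_death + z_death + 0.3 * grazing - remin)
    return n, p, z, d

# Mass is conserved: everything leaving one box enters another.
state = (4.0, 0.5, 0.3, 0.2)
total0 = sum(state)
for _ in range(1000):
    state = npzd_step(*state)
print("final state:", state, "total:", sum(state))
```

Note the bookkeeping: the 30% of grazing that the zooplankton don't assimilate goes to detritus, so the four tendencies sum to zero and total mass stays fixed, which is the basic sanity check on any such box model.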

13.

Yes, I added an image on Earth Science that shows this, from Sammonds and Thompson with their dust scenario.

14.
edited January 2011

It is now popular to build "Earth system models" with dynamic biospheres.

That sounds really interesting. I somehow doubt these models incorporate living organisms' extreme "ingenuity" in finding new niches and exploiting them. So, I can easily imagine a model that predicts a large-scale die-off of the Amazon rainforest as conditions there depart from what the current rainforest is used to... but it's harder to imagine a model that will predict what new species will invade that dying rainforest, and what happens next.

Martin mentions "Gaia". One thing that puzzles me about Lovelock's pessimism is that it doesn't seem to take "Gaia" into account.

More precisely - let me avoid the complex dispute about the Gaia Hypothesis here - Lovelock's recent books spend a lot of time discussing feedback mechanisms that will make global warming worse, and almost no time discussing feedback mechanisms that will make it not so bad. But it's the latter sort of mechanism - the stabilizing "negative feedback" - which lies at the heart of his work!

If I ever interview Lovelock I'll grill him on this.

15.

Current Earth system models mostly work at a gross "plant functional type" level. Something like: "it's too dry for tropical rainforest trees, so turn all the trees in this grid cell into grasses". There are ultra-fine-grained ecosystem models that simulate individual plants and competition between them. Modelers are trying to bridge the gap, e.g. by introducing statistical representations of individual plant dynamics. (See, for example, Harvard's Ecosystem Demography Model ED2.) Of course, it's hard to say how skillfully one can predict the statistical dynamics of ecosystems, even if you could run such a model.
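The "plant functional type" rule described above is essentially a climate-to-vegetation lookup per grid cell. A deliberately crude sketch, with invented thresholds and categories (real schemes use many more types and variables):

```python
def assign_pft(annual_precip_mm, mean_temp_c):
    """Toy plant-functional-type rule of the kind described above;
    the thresholds are illustrative, not from any real model."""
    if mean_temp_c < -5:
        return "tundra"
    if annual_precip_mm < 300:
        return "grassland"
    if mean_temp_c > 20 and annual_precip_mm > 1500:
        return "tropical rainforest"
    return "temperate forest"

# A drying grid cell flips from rainforest to grassland:
assert assign_pft(2000, 25) == "tropical rainforest"
assert assign_pft(250, 25) == "grassland"
```

The abruptness of those thresholds is exactly the gap the statistical-demography approaches are trying to close.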

16.

I'd be inclined to the view that questions of skill at the modeling level are moot until there's enough data gathering both to codify ecosystem reactions and to establish which ecosystem components currently "occupy" the various regions of the earth at fine granularity. That seems unlikely, unless we get some sort of techno-breakthrough that lets it be automated with some kind of robot, of whatever scale.

17.

Nathan Urban wrote:

All of this is a classic debate within climate science. The modeling community tradition has been to move towards the maximum possible model complexity. Those of us closer to the decision side of the science question this.

and:

But this still gives you little ability to explore the space of model uncertainty: you're still stuck with the assumptions made by a small number of modeling groups, instead of one. Some of us think that more computational resources should be devoted to "perturbed physics ensembles", which are a larger number of runs of somewhat less sophisticated models, to explore the space of parametric and structural assumptions. But sophisticated models are still important, to tell us which processes are most likely to be poorly represented in the simpler models.

Your comments prompted me to read the Azimuth interviews John did earlier. I had previously read the first one but not all four of them. Your interview addressed my question pretty specifically.

The exploration of parameter space you are doing also reminds me very much of how I have approached the development of algorithmic trading strategies. I built [software that was specifically designed to explore the parameter space](http://www.tradingblox.com) for algorithmic trading ideas. When testing these ideas, it is very important to get a feel for the influence of changes in the various parameters, and how they interrelate, in order to have some faith that the algorithm represents a trading idea that will hold up in the future. The simulations are not of models but of algorithms applied to historical data. In the trading case, what we term Monte Carlo analysis is the rearrangement of the equity curves produced by a given algorithm with a specific set of parameters. So we might run a simulation computing the daily account equity assuming trades for a given algorithm and parameter values, and then rearrange that daily equity curve in 10,000 different random ways to see how this affects the performance metrics we care about.
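Concretely, that shuffling procedure can be sketched like this. A toy illustration only, not actual trading software: the return series, shuffle count, and the choice of max drawdown as the metric are all made up for the example:

```python
import random

def max_drawdown(equity):
    """Largest peak-to-trough drop in an equity curve."""
    peak, dd = equity[0], 0.0
    for x in equity:
        peak = max(peak, x)
        dd = max(dd, peak - x)
    return dd

def shuffle_drawdowns(daily_returns, n_shuffles=10000, start=100.0, seed=0):
    """Rearrange the same daily returns many times and record the
    max drawdown of each rearranged equity curve."""
    rng = random.Random(seed)
    returns = list(daily_returns)
    dds = []
    for _ in range(n_shuffles):
        rng.shuffle(returns)
        equity, curve = start, []
        for r in returns:
            equity *= (1 + r)
            curve.append(equity)
        dds.append(max_drawdown(curve))
    return sorted(dds)

# Same returns, thousands of orderings -> a distribution of drawdowns,
# e.g. a 95th-percentile drawdown rather than one historical number.
rets = [0.01, -0.02, 0.015, -0.005, 0.02, -0.01] * 40
dds = shuffle_drawdowns(rets, n_shuffles=2000)
print("median drawdown: %.2f, 95th percentile: %.2f" % (dds[1000], dds[1900]))
```

The payoff is the same as in the climate case: a distribution over a path-dependent metric, instead of a single realization you might mistake for the truth.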

In trading, you are concerned far more with the long-term character of the performance than with predicting anything specific. We know the future will not be like the past, but it is likely that the character of the markets in the future will be similar to the past.

I suppose that climate science has meteorological roots, so it is understandable that many climatologists would be working on models designed to predict specifics since that is what meteorologists care about. But it seems to me that understanding the uncertainty is the most important factor because we could be off in either direction. If things are worse than the average assumption we are in big trouble.

So it seems like we care more about the big-picture character of the future climate trends than the specifics. Which makes me believe that your approach is more important than building one giant master model that is so computation-intensive that you can only run "the best" set of parameters.

18.

Hey, I'm glad Curtis and Nathan have a common interest in Monte Carlo methods and prediction!
