
# Another look at climate sensitivity

I will start on this tomorrow after lunch (here), John, and I decided to have a separate chat thread for it! I'll read the paper first, so you will probably see questions after that.

It's probably good to start with the simplest models and work our way up. Earlier here on the Forum we had a lot of fun discussing models that exhibit abrupt climate change in response to changes in the "insolation" (very roughly, the amount of sunlight hitting the Earth). Some of these models have been proposed to study the "Snowball Earth" era about 850 million years ago, and also the more recent glacial cycles of the last 1.8 million years or so.

In order of increasing complexity, these are four simple models:

1) In our page Snowball Earth we discuss a paper by I. Zaliapin and M. Ghil called "Another look at climate sensitivity". There's a link to this paper. Equations (11), (12), (13) and (14) in this paper describe a simple energy balance model. There's really just a single equation in the end, equation (15). If you unravel all the notation, it says:

$$c \frac{d T}{d t} = \mu Q_0 \left(1 - \left(c_1 + c_2 \frac{1 - \tanh \kappa(T - T_c)}{2}\right)\right) - \sigma T^4 \left(1 - m \tanh((T/T_0)^6)\right)$$ where $T$ is temperature as a function of time $t$, and $c, Q_0, c_1, c_2, \kappa, T_c, \sigma, m, T_0$ are constants.
1.

Great!

2.
edited March 2011

How do you want to do 1), John? My intention was to use Sage, get the relevant equations, and start to see what I can do with both the analytical and numerical features of Sage. Some questions:

I assume you want to focus on slow dynamics in 3. which they also do in the paper?

Are you thinking of using the same approach as you did with the Hopf bifurcation, but with a saddle-node instead?

If so, how would you like to add noise?

3.

For me the interesting story is that there is (or could be) more than one metastable state, and that there is another locally stable state radically different from the one observed today. That is interesting because a lot of the discussion of climate change revolves around the statements that

a) climate has always been changing,

b) but it has been stable within reasonable bounds for millions of years.

The model can show that the combination of a trend and of stochastic forcing can easily push the climate over the barrier from one locally stable state to another, but does so using a physically reasonable, if very oversimplified, approximation to the thermodynamics of the atmosphere, unlike the generic example of stochastic resonance.
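To make that barrier-crossing picture concrete, here is a minimal sketch (not from the paper): an Euler-Maruyama integration of the Zaliapin-Ghil energy balance equation quoted above, with additive white noise. The heat capacity `c`, insolation factor `mu`, noise level `s`, and the reading of $T_0$ are all illustrative assumptions, not values the thread or the paper fixes.

```python
import numpy as np

# Constants from Sect. 3.2 of Zaliapin-Ghil, as quoted elsewhere in this thread
Q0 = 342.5; c1 = 0.15; c2 = 0.7; m = 0.4
sigma = 5.6697e-8; Tc = 273.0; kappa = 0.05
T0 = (1.9e-15) ** (-1.0 / 6.0)   # one reading of the paper's T0

c = 1.0e8     # J m^-2 K^-1: hypothetical heat capacity, not from the paper
mu = 1.1      # illustrative insolation factor
s = 2.0e-3    # K s^(-1/2): illustrative noise strength

def drift(T):
    """Deterministic part of dT/dt from the energy balance equation."""
    alpha = c1 + c2 * (1.0 - np.tanh(kappa * (T - Tc))) / 2.0   # ice albedo
    g = 1.0 - m * np.tanh((T / T0) ** 6)                        # greenhouse factor
    return (mu * Q0 * (1.0 - alpha) - sigma * g * T**4) / c

# Euler-Maruyama: T_{n+1} = T_n + drift(T_n) dt + s sqrt(dt) N(0,1)
rng = np.random.default_rng(0)
dt, n = 2.0e4, 20_000            # time step (s) and number of steps
T = np.empty(n)
T[0] = 290.0
for i in range(1, n):
    T[i] = T[i-1] + drift(T[i-1]) * dt + s * np.sqrt(dt) * rng.standard_normal()
```

Plotting `T` shows fluctuations around a locally stable state; whether a given run actually crosses the barrier depends on the assumed noise level `s` and the run length.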

4.

Yes, I assume it is feasible to use $\mu$ and follow how it is done in the paper, then gradually add noise in some form that is also physically motivated. So it is just a matter of finding the correct ones.

5.
edited March 2011

Staffan wrote:

I assume you want to focus on slow dynamics in 3. which they also do in the paper?

What do you mean by "3."?

In the [Zaliapin-Ghil paper](http://www.nonlin-processes-geophys.net/17/113/2010/npg-17-113-2010.pdf) they take this equation

$$c \frac{d T}{d t} = \mu Q_0 \left(1 - \left(c_1 + c_2 \frac{1 - \tanh \kappa(T - T_c)}{2}\right)\right) - \sigma T^4 \left(1 - m \tanh((T/T_0)^6)\right)$$ where $T$ is temperature as a function of time $t$, and $c, Q_0, c_1, c_2, \kappa, T_c, \sigma, m, T_0$ are constants. Then in section 3.3.2 they consider the "slow, quasi-adiabatic" dynamics of this equation. That's just a fancy way of saying they assume

$$\frac{d T}{d t} = 0$$ and solve for $T$ as a function of $\mu$, holding other things fixed at specific values. Here $\mu Q_0$ is the "insolation", that is, very roughly, the amount of sunlight reaching the Earth. $Q_0$ is some "standard" amount of insolation, and $\mu$ is some parameter near $1$ which they call the "fractional insolation change". They get this graph:

<img src = "http://math.ucr.edu/home/baez/zaliapin_ghil_snowball_earth.jpg" alt = ""/>

So, we see that if $\mu$ is low, there is just one solution for $T$, which we could call the "Snowball Earth" solution. As we increase $\mu$ we reach a value where there are three solutions: two stable ones drawn as solid curves (the "Snowball Earth" solution and the "Hot Earth" solution) together with an intermediate unstable one drawn as a dashed curve. As we increase $\mu$ even more there is only one solution: the "Hot Earth" solution.

So far this is not exciting enough to deserve a computer program, but let me stop here for now.
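As a sketch of that adiabatic analysis, one can locate the equilibria numerically by bracketing sign changes of the right-hand side. The constants below follow Sect. 3.2 of the paper as quoted elsewhere in this thread; `T0` is one reading of the paper's value, so take the exact equilibria with a grain of salt.

```python
import numpy as np
from scipy.optimize import brentq

# Zaliapin-Ghil constants (Sect. 3.2), as used elsewhere in this thread;
# T0 is one reading of the paper's value.
Q0 = 342.5; c1 = 0.15; c2 = 0.7; m = 0.4
sigma = 5.6697e-8; Tc = 273.0; kappa = 0.05
T0 = (1.9e-15) ** (-1.0 / 6.0)

def rhs(T, mu=1.0):
    """Net flux c dT/dt: absorbed insolation minus outgoing radiation."""
    alpha = c1 + c2 * (1.0 - np.tanh(kappa * (T - Tc))) / 2.0
    g = 1.0 - m * np.tanh((T / T0) ** 6)
    return mu * Q0 * (1.0 - alpha) - sigma * g * T**4

def equilibria(mu=1.0, lo=100.0, hi=400.0, n=1000):
    """All T in [lo, hi] with rhs(T, mu) == 0, found by sign-change bracketing."""
    Ts = np.linspace(lo, hi, n)
    F = rhs(Ts, mu)
    return [brentq(rhs, a, b, args=(mu,))
            for a, b, fa, fb in zip(Ts[:-1], Ts[1:], F[:-1], F[1:])
            if fa * fb < 0]
```

Sweeping `mu` over a grid and plotting `equilibria(mu)` against `mu` reconstructs the folded curve described above, with a single branch at the extremes and (for an intermediate range of $\mu$) three coexisting equilibria.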

6.
edited March 2011

Next, we might take the same equation:

$$c \frac{d T}{d t} = \mu Q_0 \left(1 - \left(c_1 + c_2 \frac{1 - \tanh \kappa(T - T_c)}{2}\right)\right) - \sigma T^4 \left(1 - m \tanh((T/T_0)^6)\right)$$ but treat $\mu$ not as a constant but as varying as a function of time. The simplest interesting case would be to let $\mu$ oscillate around an average value $\mu_0$:

$$\mu(t) = \mu_0 + a \cos(\omega t)$$ It'll be interesting if the oscillations are big enough that:

• when $\mu$ is large the only stable solution (from the "adiabatic" analysis in my last comment) is the "Hot Earth" solution, but

• when $\mu$ is small the only stable solution is the "Snowball Earth" solution.

Then if we start with $T(0)$ "hot" and the oscillation frequency $\omega$ is not too fast, the temperature $T(t)$ should stay "Hot" until $\mu$ gets sufficiently small, at which point the Earth should "snap" rather suddenly to the "Snowball" solution... and then when $\mu$ gets big enough it should "snap back" to the "Hot" solution.

The next step would be to add noise, but it would be nice to see this first. In particular, it would be nice if we could agree on some interesting values of all these parameters $c, Q_0, c_1, c_2, \kappa, T_c, \sigma, m, T_0$, and see what this graph looks like for those parameters:

<img src = "http://math.ucr.edu/home/baez/zaliapin_ghil_snowball_earth.jpg" alt = ""/>

That'll tell us interesting values for the parameters $\mu_0$ and $a$ in this formula:

$$\mu(t) = \mu_0 + a \cos(\omega t)$$ Of course all of this is just an oversimplified model - not much to do with the actual Earth, except in a rough way. Things get a bit more realistic when we reach the [Crucifix-Rougier model](http://www.azimuthproject.org/azimuth/show/Bayesian%20prediction%20of%20the%20next%20glacial%20inception).
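This numerical experiment can be sketched with `scipy`. The heat capacity `c`, the values of $\mu_0$, $a$ and $\omega$, and the reading of $T_0$ below are illustrative guesses, not values from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Zaliapin-Ghil constants as quoted in this thread; T0 is one reading
# of the paper's value.
Q0 = 342.5; c1 = 0.15; c2 = 0.7; m = 0.4
sigma = 5.6697e-8; Tc = 273.0; kappa = 0.05
T0 = (1.9e-15) ** (-1.0 / 6.0)

c = 1.0e8                        # J m^-2 K^-1: hypothetical heat capacity
mu0, a = 1.8, 1.0                # illustrative mean and amplitude of mu(t)
omega = 2.0 * np.pi / 1.0e11     # slow oscillation (period ~3000 years)

def dTdt(t, T):
    mu = mu0 + a * np.cos(omega * t)
    alpha = c1 + c2 * (1.0 - np.tanh(kappa * (T - Tc))) / 2.0
    g = 1.0 - m * np.tanh((T / T0) ** 6)
    return (mu * Q0 * (1.0 - alpha) - sigma * g * T**4) / c

# Integrate one full insolation cycle, starting on the warm side.
sol = solve_ivp(dTdt, (0.0, 2.0 * np.pi / omega), [300.0],
                method="LSODA", max_step=2.0e8, rtol=1e-8)
```

Plotting `sol.y[0]` against `mu0 + a*np.cos(omega*sol.t)` should reveal whether the temperature tracks one branch and then "snaps" to the other, i.e. the hysteresis loop described above.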

7.

Sorry, I meant section 3, like you talked about here. I got the basic equation into Sage by just copying the source version here, pasting it into a Sage cell, and adding the keyword %latex to the cell. (This is just for me to remember.) I'll continue a bit and add a time-varying $\mu$. Should we have a page here on Azimuth now, or should we wait?

8.

Staffan wrote:

Sorry I meant section 3, like you talked about here.

Okay, I was afraid you might have meant stage 3) in this grand plan. I'm glad you're still working on 1) and 2) here:

In order of increasing complexity, these are four simple models:

1) In our page Snowball Earth we discuss a paper by I. Zaliapin and M. Ghil called "Another look at climate sensitivity". There's a link to this paper. Equations (11), (12), (13) and (14) in this paper describe a simple energy balance model. There's really just a single equation in the end, equation (15). If you unravel all the notation, it says:

$$c \frac{d T}{d t} = \mu Q_0 \left(1 - \left(c_1 + c_2 \frac{1 - \tanh \kappa(T - T_c)}{2}\right)\right) - \sigma T^4 \left(1 - m \tanh((T/T_0)^6)\right)$$ where $T$ is temperature as a function of time $t$, and $c, Q_0, c_1, c_2, \kappa, T_c, \sigma, m, T_0$ are constants.

2) I can also imagine a stochastic version of the Zaliapin-Ghil model, which should be more interesting, but more complicated.

3) A more complicated deterministic model is described in Bayesian prediction of the next glacial inception:

$$\frac{d}{d t}I(t) = -a_1 \left(k_{\mu} \mu(t) + k_{\theta} \theta(t) - K_I I(t)\right) + k_R R(t)$$

$$\frac{d}{d t}\mu(t) = b_1 \mu(t) - b_2 \mu(t)^2 - b_3 \mu(t)^3 - b_{\theta} \theta(t)$$

$$\frac{d}{d t}\theta(t) = - c_1 I(t) - K_{\theta} \theta(t)$$

The meaning of these equations is described on that page, and in more detail in the paper by Crucifix and Rougier, which the page links to. That paper also proposes values for all the constants involved here.

4) The stochastic version of the Crucifix-Rougier model is even more interesting and complicated. That's what their paper is really about.

9.

There is a small typo in your unraveled eq. 17: the ice albedo feedback should be:

$$c_1 + c_2 \frac{1 - \tanh(\kappa(T - T_c))}{2}$$

10.

I left the Sage draft worksheet public, so [you can see it](http://demo.sagenb.org/home/pub/92/) and comment. I have some small things to "unravel" in the code, but you might start thinking about what to try for parameter values, or double-checking the constants in the Sage code against the Zaliapin-Ghil paper. But you will not see the @interactive part unless you register at

[this alternative server](http://demo.sagenb.org). It is not so heavily loaded, so I have been able to work with more concentration :-)

11.

Has anyone verified Figure 5? I tried, but failed.

<img src="http://www.azimuthproject.org/azimuth/files/ZG.png" width = "500" alt = ""/>

It's roughly the right shape, but looks off by a factor of 2 (or worse). Here is the code:

import numpy
import scipy
import matplotlib
from pylab import *

Q0 = 342.5; c1 = 0.15; c2 = 0.7; m = 0.4; T0 = (1.9e-15)**(-6)
sigma = 5.6697e-8; Tc = 273.0; kappa= 0.05

T = arange(100.0, 400.0+1.0, 1.0)
mu = (sigma*T**4*(1 - m*tanh((T/T0)**6)))/(Q0*(1 - (c1 + .5*c2*(1 - tanh(kappa*(T - Tc))))))

plot(mu, T, '-', lw=2)

xlabel('Fractional Insolation Change (mu)')
ylabel('Temperature (T)')
grid(True)

show()


I'm new to Python so could be doing something silly.

An idea I have, to take this in the desired direction of a dynamic insolation driver, is to:

1. Get the steady state correct (which means getting this chart right).
2. Begin at some point on the chart that is in steady state. This will be our initial conditions.
3. Turn on a driver $\mu(t) = \mu_0 + u(t)\, a \sin(\omega t)$, where $u(t)$ is the unit step function.
4. Solve the time evolution of $T$ numerically.

PS: Only a mathematician would be so unthoughtful about notation as to write down something like $\mu(0) = \mu_0 + a$ ;)

12.
Eric: your factor of 2 comes from a small typo: "T0 = (1.9e-15)**(-6)" should be "T0 = (1.9e-15)**(1/6.)".

I'm new here; I hope I'll be able to find the time to contribute more than just typo-finding.
13.

Thanks Matt! That's probably it. Or maybe **(-1/6). I'll fix it when I get home and report back.

14.

Yep. That was definitely a typo, but it still doesn't fix things completely. The new code looks like:

import numpy
import scipy
import matplotlib
from pylab import *

Q0 = 342.5; c1 = 0.15; c2 = 0.7; m = 0.4; T0 = (1.9e-15)**(-1.0/6.0)
sigma = 5.6697e-8; Tc = 273.0; kappa= 0.05

T = arange(100.0, 400.0+1.0, 1.0)
mu = (sigma*T**4*(1 - m*tanh((T/T0)**6)))/(Q0*(1 - (c1 + .5*c2*(1 - tanh(kappa*(T - Tc))))))

plot(mu, T, '-', lw=2)

xlabel('Fractional Insolation Change (mu)')
ylabel('Temperature (T)')
grid(True)

show()


One interesting difference between Matlab and Python is that you need to pay attention to data types. For example, in Python

T0 = (1.9e-15)**(-1/6)


comes out to be

T0 = 526315789473684.2


It took a second to figure out why. It is because Python seems to be rounding -1/6 down to the nearest integer, which is -1. So it is computing (1.9e-15)**(-1).
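A quick check of that pitfall (Python 3 semantics shown here; under Python 2, the plain `-1/6` behaved like the floor division below):

```python
# In Python 3, / is true division and // is floor division (rounding toward
# negative infinity, which is why -1/6 collapsed to -1 under Python 2).
assert -1 // 6 == -1
assert abs(-1 / 6 + 1.0 / 6.0) < 1e-15

T0_accident = (1.9e-15) ** (-1 // 6)   # exponent collapses to -1
T0_intended = (1.9e-15) ** (-1 / 6)    # exponent is -0.1666..., roughly 284
```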

Here's the new chart:

<img src="http://www.azimuthproject.org/azimuth/files/ZG_II.png" width = "500" alt = ""/>

Still not quite right. I'm missing something obvious...

15.
edited March 2011

You're hitting a Python-version-specific thing. Before version 3.0, Python copied the C-style convention that int/int yields an int (with the result rounded down). From version 3 onwards it was decided that this behaviour is counter to reasonable expectations, so int/int now yields a float. (Incidentally, Java also has integer division for int/int, so beware of this when doing anything in Java as well.)

(I can't comment on any issues in the rest of the code because I haven't been following the original paper you're reproducing.)

16.
edited March 2011

Hmm. My own sign mistake (i.e. 1/6 in the exponent rather than -1/6) actually reproduces their figure. So I think they messed up: they flipped the sign, giving an essentially flat 0.6 for $1 - m \tanh((T/T_0)^6)$ rather than a function that interpolates between $1$ as $T \to 0$ and $0.6$ for $T \gt 350 \mathrm{K}$. Nothing qualitative changes, but I think quantitatively the plot in their paper is wrong and yours is now correct.
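A quick numeric check of this reading (a sketch; constants as quoted earlier in the thread):

```python
import numpy as np

m = 0.4
T0_correct = (1.9e-15) ** (-1.0 / 6.0)   # roughly 284 K
T0_flipped = (1.9e-15) ** (1.0 / 6.0)    # the sign-flipped version

def g(T, T0):
    """Greenhouse factor 1 - m*tanh((T/T0)^6)."""
    return 1.0 - m * np.tanh((T / T0) ** 6)

# With the correct T0, g runs from ~1 at low T down to ~0.6 at high T;
# with the flipped exponent it is pinned near 0.6 across the whole range.
```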

17.

Eric,

I reproduce your figure when I implement the same equations in R. I tried to implement it right from the paper, without copying your code (except for the constants, which I double checked, including going back to the literature to verify the T0 value).

Q0 = 342.5; c1 = 0.15; c2 = 0.7; m = 0.4; T0 = (1.9e-15)^(-1/6)
sigma = 5.6697e-8; Tc = 273; kappa = 0.05

T = seq(100, 400, by=1)

alpha = c1 + c2*(1-tanh(kappa*(T-Tc)))/2
lhs = Q0*(1-alpha)

g = 1 - m*tanh((T/T0)^6)
rhs = sigma*T^4 * g

mu = rhs/lhs # mu Q0(1-alpha) = sigma g T^4

plot(mu, T, xlim=c(0,3), ylim=c(100,400), type="l", lwd=2, col="blue", xlab="Fractional insolation change (mu)", ylab="Temperature (T)", main="Zaliapin and Ghil (2010), Fig. 5", xaxs="i", yaxs="i")

abline(h=seq(100, 400, by=50), lty="dotted", col="gray")
abline(v=seq(0, 2.5, by=0.5), lty="dotted", col="gray")

18.

Matt,

I think that's it. When plotting, they used 1/6 instead of -1/6 in defining T0. Their equations themselves are correct.

19.
edited March 2011

Updated R code:

Q0 = 342.5; c1 = 0.15; c2 = 0.7; m = 0.4
sigma = 5.6697e-8; Tc = 273; kappa = 0.05

T0 = (1.9e-15)^(-1/6) # correct
##T0 = (1.9e-15)^(1/6) # ZG10 bug

T = seq(100, 400, by=1)

# fractional insolation change
mu = function(T)
{
alpha = c1 + c2*(1-tanh(kappa*(T-Tc)))/2
lhs = Q0*(1-alpha)

g = 1 - m*tanh((T/T0)^6)
rhs = sigma*T^4 * g

mu = rhs/lhs # mu Q0(1-alpha) = sigma g T^4
}

# EBM bifurcation diagram
plot(mu(T), T, xlim=c(0,3), ylim=c(100,400), type="l", lwd=2, col="blue", xlab="Fractional insolation change (mu)", ylab="Temperature (T)", main="Zaliapin and Ghil (2010), Fig. 5 (corrected)", xaxs="i", yaxs="i")

abline(h=seq(100, 400, by=50), lty="dotted", col="gray")
abline(v=seq(0, 2.5, by=0.5), lty="dotted", col="gray")

# modern Earth state
T = 300
points(mu(T), T, pch=21, col="black", bg="green", cex=2)

20.

Ah. Thanks for confirming :)

Now the fun can begin :)

21.

I just used their bug value $T_0^{-6}$ but took care not to raise it to a power in ZG eq. (14), and got the correct plots for ZG fig. 4, so I went straight to the next step of adding a time-varying $\mu$; see the link above to the open notebook. But I will double check and plot fig. 5 in Sage as well. I added Nathan's R code to [the Sage notebook](http://demo.sagenb.org/home/pub/92/). But if you want to run that R cell in the notebook, you need to either run it on your own machine with Sage installed, or change the R code to save instead of plotting.

22.
Here is that Sage code. Note that it knows about the symbolic lhs and rhs of an equation, zg16, and it speaks that universal language that starts with an m :-)

# steady state in zg and their constant values from 3.2
Q0 = 342.5; c1 = 0.15; c2 = 0.7; m = 0.4; T0_6= 1.9e-15
sigma_sb= 5.6697e-8; Tc = 273.0

# plot zg figure 4 to see if correct
var('T')
mu = 1.0; kappa= 0.05

zg16 = mu*Q0*(1 - (c1 + c2*(1 - tanh(kappa*(T - Tc)))/2.0)) == sigma_sb*T^4 *(1 - m* tanh((T^6/T0_6)))
zg16ri = zg16.lhs()
zg16ro = zg16.rhs()

pro = plot(zg16ro,(T,150,400),rgbcolor='red')
pri = plot(zg16ri,(T,150,400))

# add the two plots to see the 3 fixed points when mu = 1
pri + pro
23.
edited March 2011

My code shouldn't have 'border=TRUE' in the last line; I forgot to take that out. I edited the code above accordingly.

24.

OK, I changed that in the Sage worksheet. I also changed my Sage equations and constants so they agree with everybody else here who uses $\tanh((\frac {T} {T_0})^6)$.

25.

...and when I try this I get the plots in fig. 4 wrong. When kappa = 0.05, mu = 1, and $T_0$ has your value, you should get three fixed points, but I only get one. Nathan, did you also plot fig. 4?

26.

They have the same $T_0$ bug in Figure 4. With that bug, I replicate their figure.

Q0 = 342.5; c1 = 0.15; c2 = 0.7; m = 0.4
sigma = 5.6697e-8; Tc = 273; kappa = 0.05

T0 = (1.9e-15)^(-1/6) # correct
##T0 = (1.9e-15)^(1/6) # ZG10 bug

T = seq(150, 400, by=1)

# incoming radiation
Ri = function(T, mu)
{
alpha = c1 + c2*(1-tanh(kappa*(T-Tc)))/2
Ri = mu * Q0 * (1-alpha)
}

# outgoing radiation
Ro = function(T)
{
g = 1 - m*tanh((T/T0)^6)
Ro = sigma * g * T^4
}

# plot outgoing radation as a function of temperature
plot(T, Ro(T), xlim=c(150,400), ylim=c(0,630), type="l", lty="dashed", lwd=2, col="darkgreen", xlab="Temperature, T", ylab="Radiation, R", main="Zaliapin and Ghil (2010), Fig. 4", xaxs="i", yaxs="i")

# plot incoming radiation, for different fractional insolations
lines(T, Ri(T, mu=0.5), lwd=2, col="blue")
lines(T, Ri(T, mu=1.0), lwd=2, col="blue")
lines(T, Ri(T, mu=2.0), lwd=2, col="blue")

27.
edited March 2011

OK, thanks, that explains things. I tried both your R scripts in my local R installation and you are correct. It's a "science bug"!

I also added the other R script to the Sage worksheet.

28.

Eric: Have you found any fun parameters for the added mu oscillation? In the worksheet I intend to look at two things. The first is a fixed sine adjusted by a single "strength" parameter applied to the whole $\mu$ in ZG10, to see what happens (at first I was doing an interactive version, which is still there, but I have a feeling it becomes too tedious for finding interesting things).

The other was to look into more realistic positive-feedback oscillations that could potentially trigger a fold bifurcation into instability, e.g. global homoclinic bifurcations or phase locking, with examples ranging from anything simple that changes incrementally in nature all the way to natural disasters.

So should we do both or just settle for the first for now?

29.

Hi Staffan,

I haven't had much free time lately, so haven't made any progress beyond what is here. I am a little bummed about their error though. It does seem to be a bit of a blow to a significant portion of their discussion.

Once I get some time, I planned to implement a finite difference (with central differencing) solution for a sinusoidal insolation just to play around. I would not be bummed if you beat me to it.
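In case it's useful as a starting point, here is a minimal time-stepping sketch in Python (plain explicit Euler rather than central differencing, for simplicity). The heat capacity $c$ is not quoted anywhere in this thread, so the value below is a placeholder chosen only to make the relaxation time scale of order a year; the 5% amplitude and 100-year period of the sinusoidal insolation are equally arbitrary choices.

```python
import math

# ZG10 constants, with the corrected T0 (about 284 K)
Q0 = 342.5; c1 = 0.15; c2 = 0.7; m = 0.4
sigma = 5.6697e-8; Tc = 273.0; kappa = 0.05
T0 = (1.9e-15)**(-1.0/6)
c_heat = 1.0e8            # J m^-2 K^-1 -- placeholder, NOT from the paper

def rhs(T, mu):
    """Right-hand side of c dT/dt = Ri(T, mu) - Ro(T), in K/s."""
    alpha = c1 + c2 * (1 - math.tanh(kappa * (T - Tc))) / 2
    Ri = mu * Q0 * (1 - alpha)
    Ro = sigma * T**4 * (1 - m * math.tanh((T / T0)**6))
    return (Ri - Ro) / c_heat

def integrate(T_init, years, mu_fn, dt=86400.0):
    """Explicit Euler with daily steps; mu_fn(t) is the insolation factor."""
    T, t = T_init, 0.0
    for _ in range(int(years * 365 * 86400 / dt)):
        T += dt * rhs(T, mu_fn(t))
        t += dt
    return T

# constant mu = 1: with the corrected T0 the only equilibrium is the cold one,
# so a present-day temperature relaxes onto the cold branch (near 175 K)
T_const = integrate(288.0, 200, lambda t: 1.0)

# 5% sinusoidal wobble in mu with a 100-year period: the state tracks the
# slowly moving cold equilibrium rather than escaping it
period = 100 * 365 * 86400.0
T_osc = integrate(288.0, 200,
                  lambda t: 1.0 + 0.05 * math.sin(2 * math.pi * t / period))
print(T_const, T_osc)
```

With these (made-up) parameters the oscillation is too weak to kick the system off the cold branch; a stiffer forcing or noise term would be the next thing to try.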

30.

Don't worry, I am also bummed about their error. But for now I'll leave the Sage worksheet as it is, with some interactive 3d plots, and fiddle around a bit now and then until John comes and spanks my hands :-) I have some other ideas I want to put up on Azimuth before that.

31.
edited April 2011

Hi guys! I'm sorry that I've been neglecting this climate project... I've been absorbed in that "network theory" project. They're both part of Azimuth and they're even both part of the same huge mega-project of understanding models of stochastic systems... but I should pay attention to you folks.

Can somebody summarize the progress so far? I'll try, and you can correct me:

1) You guys took the equation

$$c \frac{d T}{d t} = \mu Q_0 (1 - (c_1 + c_2 \frac{1 - tanh (\kappa(T - T_c))}{2})) - \sigma T^4 (1 - m tanh((T/T_0)^6))$$ and looked for equilibrium states with $d T / d t = 0$, getting

$$\mu = \frac{ \sigma T^4 (1 - m tanh((T/T_0)^6))}{Q_0 (1 - (c_1 + c_2 \frac{1 - tanh (\kappa(T - T_c))}{2}))}$$ which expresses $\mu$ (fractional insolation change) as a function of $T$ (the Earth's average temperature).

2) You graphed $\mu$ as a function of $T$ and discovered that [Zaliapin and Ghil's graph](http://www.azimuthproject.org/azimuth/show/Another+look+at+climate+sensitivity) was wrong.

3) You even figured out what they did wrong: they used $T_0 = (1.9 \cdot 10^{-15})^{1/6}$ when creating their graph, instead of what they said they were using, $T_0 = (1.9 \cdot 10^{-15})^{-1/6}$.

4) Eric and Staffan are "bummed" because they're wrong.

All this sounds great except 4). You should be happy because they're wrong - and you caught it! Where's your scientific spirit? There's nothing more fun than finding a mistake in someone else's work! I love it when that happens!

Besides, this will be a great advertisement for the Azimuth Project, and the whole idea of "open source science". Here we are, a bunch of "amateurs", and we start trying to replicate the simplest model we can find... and then, right away, you guys discover a mistake in the published literature!

Unfortunately it sounds like the mistake is too small to matter much... but still, it's amusing.

What next? Is there anything I can do to help?

I could help polish the page [[Another look at climate sensitivity]], but I don't understand the graphs at the bottom: which graph came from which procedure? If someone writes a quick note to make that clear, I can add more detail.
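Step 1) above can also be sketched in a few lines of Python, using the constants from comment 26 with the corrected $T_0$: evaluate $\mu(T)$ and locate the two folds (the saddle-node points, where $d\mu/dT = 0$) via a sign change of a finite-difference derivative. The grid range and step are arbitrary choices.

```python
import numpy as np

# ZG10 constants (see comment 26), corrected T0
Q0 = 342.5; c1 = 0.15; c2 = 0.7; m = 0.4
sigma = 5.6697e-8; Tc = 273.0; kappa = 0.05
T0 = (1.9e-15)**(-1.0/6)

def mu_of_T(T):
    """Insolation factor mu needed for equilibrium (dT/dt = 0) at temperature T."""
    alpha = c1 + c2 * (1 - np.tanh(kappa * (T - Tc))) / 2
    return sigma * T**4 * (1 - m * np.tanh((T / T0)**6)) / (Q0 * (1 - alpha))

T = np.arange(150.0, 400.0, 0.1)
mu = mu_of_T(T)

# folds: interior sign changes of the finite-difference derivative of mu(T)
dmu = np.diff(mu)
folds = T[1:-1][np.sign(dmu[:-1]) != np.sign(dmu[1:])]
print(folds)   # two fold temperatures, bracketing the unstable branch
```

Between the two folds $\mu(T)$ is decreasing, which is the unstable middle branch of the S-curve; outside them it is increasing (stable branches).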

32.

I'm not bummed about finding a mistake. It happens. I'm just a bit bummed that the mistake is significant enough to void much of the discussion in the paper.

I admit that I have some preconceived notions that climate science was kind of soft to begin with, and easy-to-find mistakes like that reinforce the notion and imply the publishing standards are pretty poor.

33.

Eric said:

imply the publishing standards are pretty poor

I wouldn't be surprised if this kind of mistake could get past the referees in other fields too.

34.
edited April 2011

This is an old issue about what referees are supposed to do (at least in experimental sciences; in "pure logic" sciences it might be different). My view, and the instructions I received back when I did refereeing, was that refereeing is supposed to determine whether the experimental procedure seems sound, whether the investigation is novel and "significant" (the most moveable-feast criterion), and whether the results that were claimed to be obtained have been interpreted appropriately. Except for papers so poorly put together that there is an appreciable likelihood the results do not accurately reflect what would happen if you followed the experimental description (whether from typos/mistakes or something else), it's not expected (under the current system) that the referee will confirm that the results are indeed obtained from the experiment. That is supposed to come from independent replication by other groups on their way to building on those results.

Of course, computer simulation papers are on the borderline where it's just about possible to repeat the experiments in some cases, such as this one, and maybe different standards should be settled upon for those cases.

35.

Eric, why do you say the mistake voids much of their discussion? The qualitative structure of the bifurcation doesn't change. The value $\mu = 1$, which they want to correspond to our current climate, leads in the corrected plot only to a cold-Earth scenario, so it's clear that these parameter choices don't fit the real world very well. But I'm not sure how much this should bother me.

I think understanding the paper really requires putting it in a broader context, which I haven't had the time to do the reading for yet. Being new here, I don't want to derail the ongoing discussion, which I have only a fuzzy sense of the goal of. But the questions that come to mind when I read the paper are:

1. How reliable are the toy models of ice-albedo feedback and the greenhouse effect? Should I believe they accurately capture at least semi-quantitatively the dynamics of these mechanisms, so that plugging them into a simple energy balance model is not a crazy model for the state of the Earth?

2. If these are good toy models, to what extent are their parameters known? Can we pick different, reasonable values that produce a new bifurcation diagram that passes near a real-world temperature value on the "warm" branch, and exhibits the same qualitative structure?

3. To what extent should I believe that the existence of bifurcations in a one-dimensional model (i.e. an energy balance model that averages over the whole Earth) is a good guide to the dynamics of the whole system? It suggests pretty dramatic effects, like hysteresis, that I'm not sure I believe would apply to a more complete climate model. It might be that a more complicated model, for instance, would lead to dynamics where the globe separates into regions of different temperatures in a way that makes the averaged curve monotonic. (I'm thinking by loose analogy to how effective potentials in physics are always convex, even in the presence of phase transitions, because minimum-energy states don't have to be homogeneous but can have regions of different phases. If this analogy doesn't help anyone else, please ignore it.)

The part of the paper that I was bummed about was the beginning, where they discuss Roe & Baker's paper on feedbacks and climate sensitivity. The Roe & Baker paper seems to have gotten a lot of attention; I was aware of it because RealClimate blogged about it when it came out. If Zaliapin and Ghil's discussion of it represents it accurately, this is pretty depressing, since it sounds like Roe and Baker made trivial mistakes in thinking about what Taylor series mean! Of course, Roe and Baker have written a [Comment in response](http://www.nonlin-processes-geophys.net/18/125/2011/npg-18-125-2011.pdf), and Zaliapin and Ghil have written a [Reply to the comment](http://www.nonlin-processes-geophys.net/18/129/2011/npg-18-129-2011.pdf). Sorting out who is correct on this basic point about series expansions and their implication for uncertainties in outputs with feedback versus uncertainties in inputs should be easy to do (though it might turn out that their real disagreement isn't about trivial mathematics at all, but rather about reasonable ranges for the values of various parameters).
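The qualitative point at issue can be illustrated with a few lines of Monte Carlo. This is only a cartoon under assumed numbers: the no-feedback sensitivity (1.2 K), the feedback mean (0.65) and spread (0.13), and the truncation at f = 0.95 are all illustrative choices of mine, not values taken from any of the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Cartoon of the feedback argument: a symmetric (Gaussian) uncertainty in the
# feedback factor f maps through DT = DT0 / (1 - f) to a right-skewed
# uncertainty in the sensitivity DT. All numbers are illustrative.
DT0 = 1.2                               # no-feedback sensitivity, K (assumed)
f = rng.normal(0.65, 0.13, 100_000)     # assumed feedback distribution
f = f[(f > 0.0) & (f < 0.95)]           # crudely drop unphysical tails
DT = DT0 / (1.0 - f)

# the long warm tail pulls the mean well above the median
print(np.mean(DT), np.median(DT))
```

Even a modest symmetric spread in f produces a strongly asymmetric spread in DT; whether that asymmetry is an artifact of the expansion or a robust feature is exactly what the Comment and Reply argue about.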

36.

Sorry for the curt responses. I'm not very emotional about this paper, just very short of time. It's no big deal. Basically, I wasn't very enthusiastic about the paper when I first saw it, but decided to work through it anyway hoping to learn something. I don't think I learned much, and the paper contains an irritating error that demonstrates a frustrating carelessness on the part of the authors.

Anyway, I think there are cool things to be done and I still hope to be productive. Best wishes.

37.

David said:

Of course, computer simulation papers are on the borderline where it's just about possible to repeat the experiments in some cases, such as this one, and maybe different standards should be settled upon for those cases.

I think it would be significant progress if the authors published and versioned their code along with the paper, so that at least they themselves can reproduce what they have done, and others are able to confirm their suspicions by looking at the published code.

Matt said:

Being new here, I don't want to derail the ongoing discussion, which I have only a fuzzy sense of the goal of.

The overall goal is to gain a better understanding of the most simple models used in climate science, by reproducing the results of published papers - this is a first step into more serious investigations. Part of this is reproducing computer generated graphics.

I have a hidden agenda here, which is:

a) demonstrate how the software part is done with modern tools,

b) demonstrate how the code should be structured and published along with results.

But that's just me, the overall goal is pretty much thinking and learning about the questions you asked.

...this is pretty depressing, since it sounds like Roe and Baker made trivial mistakes in thinking about what Taylor series mean!

Climate science is very demanding, because it is highly inter-disciplinary. That means that a lot of people will make a lot of mistakes that look pretty darn stupid to the experts, when they write about climate science. This does not depress me, I see it as a call to duty.

Physicists and mathematicians are no exception, by the way; I've seen peer-reviewed papers published in physics journals by tenured physicists from major German universities that are - well - embarrassing. The notorious [Gerlich-Tscheuschner](http://arxiv.org/abs/0707.1161) paper comes to my mind.

38.
edited April 2011

I just added a summary table of tipping points on [[Tipping point]] from Thompson and Sieber's excellent survey. I needed it to get some perspective on which tipping points go with which combinations of bifurcations.

Yes, I was a bit bummed, but I [implemented most of the things we talked about](http://demo.sagenb.org/home/pub/92/), and also thought - like Matt - that compared to what ZG10 were bashing it was a minor thing. I think that ZG is correct about the Taylor expansion, but I leave that to you mathematicians to resolve.

So my "bumminess" had more to do with needing some fresh input and getting the discussion going here on the details of how to proceed. The last part in the Sage worksheet is simply an interactive feature - you cannot see it, but I will take some static snapshots so you can.

39.

John Baez said

All this sounds great except 4). You should be happy because they're wrong - and you caught it! Where's your scientific spirit? There's nothing more fun than finding a mistake in someone else's work! I love it when that happens!

Is that the scientific spirit? Or an academic careerist spirit? I know you are not an academic careerist. However, there is a fine line between being pleased that you found a mistake and being pleased that someone else made a mistake that you could find. In my opinion you should not be happy that they are wrong, though it is of course nice for Azimuth that people here were the ones to find it.

40.

David said:

Of course, computer simulation papers are on the borderline where it's just about possible to repeat the experiments in some cases, such as this one, and maybe different standards should be settled upon for those cases.

and Tim replied:

I think it would be significant progress if the authors published and versioned their code along with the paper, so that at least they themselves can reproduce what they have done, and others are able to confirm their suspicians by looking at the published code.

I agree. Some journals are beginning to insist on this. One thing I like about R is its support for 'reproducible research'. From CRAN = The Comprehensive R Archive Network:

The goal of [reproducible research](http://cran.r-project.org/web/views/ReproducibleResearch.html) is to tie specific instructions to data analysis and experimental data so that scholarship can be recreated, better understood and verified.

Another link: [Reproducible Research with R, LATEX, & Sweave](http://biostat.mc.vanderbilt.edu/wiki/pub/Main/TheresaScott/ReproducibleResearch.TAScott.handout.pdf)

41.

These are certainly advances, but it's worth bearing in mind that actively debugging (going looking for non-obvious problems) a large codebase is an intensive job. (Suppose I'm using a version of BLAS that generates incorrect results, or the compiler is miscompiling something because it's doing a wrong analysis, etc.)

42.

actively debugging (going looking for non-obvious problems) [in] a large codebase is an intensive job.

It certainly is, but more often than compilers or BLAS going wrong, somebody just makes a sign error.

43.

Sorry again for falling into negative territory. Getting back to the positives:

John wrote:

What next? Is there anything I can do to help?

Although I presume you are not much into writing code, I think it would be nice if you could at least run any code we produce here. Seeing the code in action might help generate ideas. Since I already made all the obvious mistakes, I can help you avoid them. I suggest you install the following bare-bones stuff:

• [Python](http://www.python.org/getit/) - I'd go for 2.7 for now. Upgrading to 3.x later should be simple, but some add-ons are not compatible with 3.x yet.
• [Numpy and Scipy](http://www.scipy.org/Download) - Make sure you download the version corresponding to the Python version you downloaded.
• [matplotlib](http://sourceforge.net/projects/matplotlib/files/matplotlib/) - Again, make sure the version you download corresponds to the Python version.

With the direction I see you going in your current research, at some point I think you will reach the limits of what you can do without computation anyway, so it is probably a good idea to start getting your hands dirty now.

44.

Instead of trying to worry about which is the right version of all of those, you can take the lazy route and get the [Enthought python distribution](http://www.enthought.com/products/epd.php), which puts it all in one convenient package (free for individual academic use). It's worked well for me, at least.

45.

True. Enthought looks like a good option for academic purposes. Unfortunately, I'm not associated with a university. I almost wish we could keep everything purely open. I think others here are also not affiliated with a university.

46.
edited April 2011

I'm not affiliated with a university, but besides that, I believe that implementations of algorithms commonly used in statistical analysis etc. should be free and open source, and there are enough people out there who think likewise and already have put a lot of work into open source, free implementations. Our best example at this point is the Sage project.

Of course I've used Maple for my diploma thesis, to solve a large system of linear equations in closed form: If you have limited time to produce a certain result and some commercial software provided by your university is the best choice, I won't blame you if you use it :-)

Comment Source:I'm not affiliated with a university, but besides that, I believe that implementations of algorithms commonly used in statistical analysis etc. should be free and open source, and there are enough people out there who think likewise and already have put a lot of work into open source, free implementations. Our best example at this point is the Sage project. Of course I've used Maple for my diploma thesis, to solve a large system of linear equations in closed form: If you have limited time to produce a certain result and some commercial software provided by your university is the best choice, I won't blame you if you use it :-)
• Options
47.
edited April 2011

Eric wrote:

I'm just a bit bummed that the mistake is significant enough to void much of the discussion in the paper.

That would be great — it would mean we could easily publish a paper correcting theirs! But alas, I don't see that this mistake voids anything in their paper. It changes the shape of one graph slightly, but that's all. And all that matters about this graph is its general shape: catastrophe theory is robust against small perturbations. So nothing significant changes.

Am I confused? Maybe I missed some other consequence of their mistake?

Comment Source:Eric wrote: > I'm just a bit bummed that the mistake is significant enough to void much of the discussion in the paper. That would be _**great**_ &mdash; it would mean we could easily publish a paper correcting theirs! But alas, I don't see that this mistake voids _anything_ in their paper. It changes the shape of one graph slightly, but that's all. And all that matters about this graph is its general shape: catastrophe theory is robust against small perturbations. So nothing significant changes. Am I confused? Maybe I missed some other consequence of their mistake?
• Options
48.
edited April 2011

John Baez said:

All this sounds great except 4). You should be happy because they're wrong - and you caught it! Where's your scientific spirit? There's nothing more fun than finding a mistake in someone else's work! I love it when that happens!

Graham Jones wrote:

Is that the scientific spirit? Or an academic careerist spirit? I know you are not an academic careerist. However, there is a fine line between being pleased that you found a mistake, and being pleased that someone else made a mistake that you could find. In my opinion you should not be happy that they are wrong, though it is of course nice for Azimuth that people here were the ones to find it.

I was being ironic, of course. It's just very amusing for me to hear people being "bummed" by finding a mistake in someone else's paper, since most academics I know are quite pleased to catch errors in other people's work.

It's true that there's a fine line here. But there actually is something beneficial about having lots of ambitious careerists eager to find and catch errors: it means errors tend to get caught!

Comment Source:John Baez said: > All this sounds great except 4). You should be happy because they're wrong - and you caught it! Where's your scientific spirit? There's nothing more fun than finding a mistake in someone else's work! I love it when that happens! Graham Jones wrote: > Is that the scientific spirit? Or an academic careerist spirit? I know you are not an academic careerist. However, there is a fine line between being pleased that you found a mistake, and being pleased that someone else made a mistake that you could find. In my opinion you should not be happy that they are wrong, though it is of course nice for Azimuth that people here were the ones to find it. I was being ironic, of course. It's just very amusing for me to hear people being "bummed" by finding a mistake in someone else's paper, since most academics I know are quite pleased to catch errors in other people's work. It's true that there's a fine line here. But there actually is something beneficial about having lots of ambitious careerists eager to find and catch errors: it means errors tend to get caught!
• Options
49.

Ok. Back in this comment, I wrote:

An idea I have to take this in the desired direction of a dynamic insolation driver is to

1. Get the steady state correct (which means getting this chart right).
2. Begin at some point on the chart that is in steady state. This will be our initial conditions.
3. Turn on a driver $\mu(t) = \mu_0 + u(t)\, a \sin(\omega t)$, where $u(t)$ is the unit step function.
4. Solve the time evolution of $T$ numerically.

Here are the results...

I started with a tiny amplitude $a = .01$: The little red blurb in the third subplot denotes the temperature variation around equilibrium.

Then I bumped up the amplitude a bit to $a = .05$: Lo and behold, the temperature suddenly jumps down 120 degrees and remains stable there.

The picture looks similar with $a = .1$: Although a bit extreme, here is $a = .5$: It looks like it doesn't take much to launch us into an ice age, and it seems next to impossible to warm up once we're in an ice age. Or so it seems to this layman...
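In fact you can check this hysteresis picture directly by hunting for the equilibria of the model at fixed $\mu$. Here's a little sketch (not from the paper: the helper `F` and the root brackets are my own, chosen by eyeballing where the net flux changes sign, so treat it as illustrative):

```python
import numpy as np
from scipy.optimize import brentq

# Same constants as in the code below
sigma = 5.6697e-8; Q0 = 342.5; kappa = 0.05

def alpha(T):
    # albedo: about 0.15 when warm, 0.85 when frozen
    return 0.15 + 0.5*0.7*(1.0 - np.tanh(kappa*(T - 273.0)))

def g(T):
    # greenhouse factor; 1.9e-15*T**6 is the same as (T/T0)**6 with T0 = (1.9e-15)**(-1/6.)
    return 1.0 - 0.4*np.tanh(1.9e-15 * T**6)

def F(T, mu):
    # net flux: c dT/dt = Ri - Ro, so equilibria are zeros of F
    return mu*Q0*(1.0 - alpha(T)) - sigma*g(T)*T**4

# Choose mu so that T = 300 K is an equilibrium, then look for the others
mu = sigma*g(300.0)*300.0**4 / (Q0*(1.0 - alpha(300.0)))

# Brackets are guesses based on where F changes sign
roots = [brentq(F, a, b, args=(mu,)) for a, b in [(150, 200), (250, 295), (295, 310)]]
print(roots)
```

With these constants the three roots come out near 180 K, 290 K and 300 K: a stable "snowball" state, an unstable threshold, and the warm state we started from. That would explain both the roughly 120-degree drop and why the cold state is so hard to escape.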

Here is the code:

import numpy
import scipy
import matplotlib
from pylab import *

sigma = 5.6697e-8; Q0 = 342.5; kappa = 0.05

def alpha(kappa, T):
    c1 = 0.15; c2 = 0.7; Tc = 273.0
    return c1 + .5*c2*(1 - tanh(kappa*(T - Tc)))

def g(T):
    m = 0.4; T0 = (1.9e-15)**(-1/6.)
    return 1 - m*tanh((T/T0)**6)

def Ro(sigma, T):
    return sigma*g(T)*T**4

def Ri(mu, Q0, kappa, T):
    return mu*Q0*(1 - alpha(kappa, T))

def mu_eq(sigma, Q0, kappa, T):
    # insolation multiplier that makes T an equilibrium
    return Ro(sigma, T)/Ri(1., Q0, kappa, T)

def get_mu(mu0, a, omega, t):
    # driver mu(t) = mu0 + u(t)*a*sin(omega*t), with u the unit step
    mu = 1.*t
    mu[t < 0] = mu0
    mu[t >= 0] = mu0 + a*sin(omega*t[t >= 0])
    return mu

T_init = 300.
mu_init = mu_eq(sigma, Q0, kappa, T_init)

a = .5; omega = .1; c = 1.

nt = 100.
dt = 2*pi/(nt*omega)
nmin = -nt
nmax = 9*nt
time = arange(nmin, nmax, dt)

mu = get_mu(mu_init, a, omega, time)

Rnet = 1.*time
T = 1.*time

# forward Euler: T[i+1] = T[i] + (dt/c)*(Ri - Ro)
i = 0
T_prev = 1.*T_init
for t in time:
    Rnet[i] = Ri(mu[i], Q0, kappa, T_prev) - Ro(sigma, T_prev)
    T[i] = T_prev + (dt/c)*Rnet[i]
    T_prev = T[i]
    i = i + 1

subplot(311)
plot(time, mu, 'b-', lw=2)
grid(True)
xlabel('Time')
ylabel('mu')

subplot(312)
plot(time, T, 'b-', lw=2)
grid(True)
xlabel('Time')
ylabel('Temperature')

subplot(313)
Tp = arange(100.0, 400.0 + 1.0, 1.0)
plot(Tp, mu_eq(sigma, Q0, kappa, Tp), 'b-', T, mu, 'r-', lw=2)
grid(True)
xlabel('Temperature')
ylabel('mu')

show()


I'm new to Python and I'm sure this is a total hack job, so if anyone has any tips for writing this more elegantly, please let me know.

There could be bugs as well, so comments welcome.
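Since tips were requested: one possible cleanup (just a sketch, and note it swaps the hand-rolled Euler loop for `scipy.integrate.odeint`, so it is not numerically identical) is to fold the right-hand side into a single function and let the library do the time stepping:

```python
import numpy as np
from scipy.integrate import odeint

# Same constants as in the Euler version above
sigma = 5.6697e-8; Q0 = 342.5; kappa = 0.05; c = 1.0

def alpha(T):
    return 0.15 + 0.5*0.7*(1.0 - np.tanh(kappa*(T - 273.0)))

def g(T):
    # same as 1 - m*tanh((T/T0)**6) with T0 = (1.9e-15)**(-1/6.)
    return 1.0 - 0.4*np.tanh(1.9e-15 * T**6)

def dTdt(T, t, mu0, a, omega):
    mu = mu0 + a*np.sin(omega*t)*(t >= 0)   # step-modulated driver
    Ri = mu*Q0*(1.0 - alpha(T))
    Ro = sigma*g(T)*T**4
    return (Ri - Ro)/c

T_init = 300.0
# mu chosen so T_init is an equilibrium before the driver switches on
mu_init = sigma*g(T_init)*T_init**4 / (Q0*(1.0 - alpha(T_init)))

# one period of quiet before t = 0, then nine driven periods, as above
t = np.linspace(-2*np.pi/0.1, 9*2*np.pi/0.1, 2000)
T = odeint(dTdt, T_init, t, args=(mu_init, 0.5, 0.1)).ravel()
```

`odeint` adapts its step size internally, so the fast drop into the cold state doesn't need a hand-tuned `dt`; the plotting can stay exactly as in the `subplot` code above.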

Comment Source:Ok. Back in [this comment](http://www.math.ntnu.no/~stacey/Mathforge/Azimuth/comments.php?DiscussionID=569&Focus=3621#Comment_3621), I wrote: >An idea I have to take this in the desired direction of a dynamic insolation driver is to >1. Get the steady state correct (which means getting this chart right). >1. Begin at some point on the chart that is in steady state. This will be our initial conditions. >1. Turn on a driver $\mu(t) = \mu_0 + u(t) a sin(\omega t)$, where $u(t)$ is the unit step function. >1. Solve the time evolution of $T$ numerically. Here are the results... I started with a tiny amplitude $a = .01$: <img src="http://www.azimuthproject.org/azimuth/files/ZG_FD_01.png" width = "500" alt = ""/> The little red blurb in the third subplot denotes the temperature variation around equilibrium. Then I bumped up the amplitude a bit to $a = .05$: <img src="http://www.azimuthproject.org/azimuth/files/ZG_FD_05.png" width = "500" alt = ""/> Lo and behold, the temperature suddenly jumps down 120 degrees and remains stable there. The picture looks similar with $a = .1$: <img src="http://www.azimuthproject.org/azimuth/files/ZG_FD_10.png" width = "500" alt = ""/> Although a bit extreme, here is $a = .5$: <img src="http://www.azimuthproject.org/azimuth/files/ZG_FD_50.png" width = "500" alt = ""/> It looks like it doesn't take much to launch us into an ice age and it seem next to impossible to warm up once we're in an ice age. Or so it seems to this layman... 
Here is the code: import numpy import scipy import matplotlib from pylab import * sigma = 5.6697e-8; Q0 = 342.5; kappa= 0.05; def alpha(kappa,T): c1 = 0.15; c2 = 0.7; Tc = 273.0; return c1 + .5*c2*(1 - tanh(kappa*(T - Tc))) def g(T): m = 0.4; T0 = (1.9e-15)**(-1/6.); return 1 - m*tanh((T/T0)**6) def Ro(sigma,T): return sigma*g(T)*T**4 def Ri(mu,Q0,kappa,T): return mu*Q0*(1 - alpha(kappa,T)) def mu_eq(sigma,Q0,kappa,T): return Ro(sigma,T)/Ri(1.,Q0,kappa,T) def get_mu(mu0,a,omega,t): mu = 1.*t; mu[t < 0] = mu0; mu[t >= 0] = mu0+a*sin(omega*t[t >=0]) return mu T_init = 300.; mu_init = mu_eq(sigma,Q0,kappa,T_init); a = .5; omega = .1; c = 1.; nt = 100.; dt = 2*pi/(nt*omega); nmin = -nt; nmax = 9*nt; time = arange(nmin, nmax, dt); mu = get_mu(mu_init,a,omega,time); Rnet = 1.*time; T = 1.*time; i = 0; T_prev = 1.*T_init; for t in time: Rnet[i] = Ri(mu[i],Q0,kappa,T_prev)-Ro(sigma,T_prev) T[i] = T_prev+(dt/c)*Rnet[i] T_prev = T[i] i = i+1; subplot(311) plot(time, mu,'b-', lw=2) grid(True) xlabel('Time') ylabel('mu') subplot(312) plot(time, T,'b-', lw=2) grid(True) xlabel('Time') ylabel('Temperature') subplot(313) Tp = arange(100.0, 400.0+1.0, 1.0) plot(Tp, mu_eq(sigma,Q0,kappa,Tp), 'b-', T, mu, 'r-', lw=2) grid(True) xlabel('Temperature') ylabel('mu') show() I'm new to Python and I'm sure this is a total hack job, so if anyone has any tips for writing this more elegantly, please let me know. There could be bugs as well, so comments welcome.
• Options
50.

Hi Eric, it might be an idea to do the development on a wiki page, with some pointer comments here, so that the work is more permanently recorded.

Comment Source:Hi Eric, it might be an idea to do the development on a wiki page, with some pointer comments here, so that the work is more permanently recorded.