
Blog on information and entropy for biological systems

The blog for the workshop I'm running on "Information and entropy for biological systems" is getting some nice posts, including slides for talks:

  • Welcome!
  • “Pop” Presentation on a Project on Schrödinger’s Paradox
  • Information Theory, Replicator Dynamics, and Distributed Reinforcement Learning
  • Optimal Estimation of Entropy
  • Link Between Population Genetics and the Boltzmann Distribution
  • Introductory Talk (by John Baez)
  • Maximum Entropy as a Foundation for Theory Building in Ecology (by John Harte)

I'll post about all this on the Azimuth blog soon, right after David Spivak's series. But if you want, you can go look now.

Comments

  • 1.

    It looks like an interesting workshop. I looked at your introduction and Harte's slides. There is one slide I do not like in your talk, the one on joint entropy and mutual information. I don't think you need to bring in X and Y. There is a joint distribution and two marginal distributions. The rest follows from H() which you've already defined. I put a diagram at the bottom of http://www.azimuthproject.org/azimuth/show/Sandbox which you might like to use.

    The Wikipedia page on mutual information calls X and Y random variables, then treats them as sets too, which is confusing.

    As is traditional with entropy, there is a sign error. You have I(X;Y) = H(X,Y)-H(X)-H(Y). Wikipedia, under Relation to other quantities, has H(X)+H(Y)-H(X,Y).

  • 2.
    edited April 2015

    > I don't think you need to bring in X and Y. There is a joint distribution and two marginal distributions. The rest follows from H() which you've already defined.

    Yes, I know. I mainly brought in these random variables because 1) everybody talks that way, so I should explain the way people talk, 2) Wikipedia provided this picture for free.

    I wanted to avoid the word "marginal" since that's yet another piece of jargon to explain, and I never need it later. That's why I said:

    > A 'pair of random variables' $X$ and $Y$ is a probability distribution on a set $S \times T$. This gives probability distributions on $S$ and on $T$.

    ... called marginals, but I didn't want to say that...

    > We may thus define three entropies: the **joint entropy** $H(X,Y)$ and the **individual entropies** $H(X)$ and $H(Y)$.

    I'll see if your picture helps me out.

    > As is traditional with entropy, there is a sign error. You have I(X;Y) = H(X,Y)-H(X)-H(Y). Wikipedia, under Relation to other quantities, has H(X)+H(Y)-H(X,Y).

    Thanks a million! They're right: H(X)+H(Y)-H(X,Y) is nonnegative and it measures the "amount of information the random variables X and Y have in common". For example, if X = Y, it's the same as H(X).
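
    Here's a minimal Python sketch of that check (not from the slides; the joint distribution is made up purely for illustration). It computes the entropies of a joint distribution and its two marginals, and checks, for this example, that H(X)+H(Y)-H(X,Y) is nonnegative and reduces to H(X) when X = Y.

    ```python
    from math import log2

    def entropy(dist):
        """Shannon entropy in bits of a distribution given as {outcome: probability}."""
        return -sum(p * log2(p) for p in dist.values() if p > 0)

    def mutual_information(joint):
        """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint distribution {(x, y): probability}."""
        p_x, p_y = {}, {}
        for (x, y), p in joint.items():
            p_x[x] = p_x.get(x, 0.0) + p
            p_y[y] = p_y.get(y, 0.0) + p
        return entropy(p_x) + entropy(p_y) - entropy(joint)

    # Made-up joint distribution on {0,1} x {0,1}: correlated, so I(X;Y) > 0.
    joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
    print(mutual_information(joint))     # about 0.278 bits, nonnegative

    # All probability on the diagonal, i.e. X = Y: then I(X;Y) = H(X) = 1 bit.
    diagonal = {(0, 0): 0.5, (1, 1): 0.5}
    print(mutual_information(diagonal))  # 1.0
    ```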

  • 3.

    Graham: your picture is good for explaining how a probability distribution on $S \times T$ projects down to give probability distributions on $S$ and on $T$, but I think the slide I've got, together with the words I'll say, will do a decent job of rapidly explaining the buzzword "mutual information" - which is what I really needed to explain.

  • 4.

    Hi Graham, I thought your picture would help me understand mutual information better, but the perspective meant I couldn't see or work out what the represented numbers might be.

  • 5.

    Jim,

    My diagram illustrates a joint probability distribution and marginal distributions. See http://en.wikipedia.org/wiki/Joint_probability_distribution, which has a picture for the continuous case.

    Once you have got the hang of joint and marginal distributions, mutual information is a small extra step.
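
    To spell out that first step with a toy example (the numbers are made up, not taken from my diagram): the marginal distributions are just the row and column sums of the joint table.

    ```python
    # Hypothetical joint distribution on S x T, stored as a table:
    # rows index values of X (elements of S), columns index values of Y (elements of T).
    joint = [
        [0.10, 0.20, 0.10],
        [0.30, 0.10, 0.20],
    ]

    # Marginal of X: sum each row.  Marginal of Y: sum each column.
    p_x = [sum(row) for row in joint]
    p_y = [sum(col) for col in zip(*joint)]

    print(p_x)  # [0.4, 0.6]       (up to floating-point rounding)
    print(p_y)  # [0.4, 0.3, 0.3]
    ```

    From there, mutual information is the H(X)+H(Y)-H(X,Y) computation sketched above.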

  • 6.
    edited April 2015

    It was a great workshop, and I'll blog about it as soon as I recover! All the talk slides are available now, from here:

      • Information and entropy in biological systems (part 3), Azimuth Blog, 6 April 2015: https://johncarlosbaez.wordpress.com/2015/04/06/information-and-entropy-in-biological-systems-part-3/

    It took a while to extract them from some of the speakers.

    Videos of talks will be made available by the end of next week, supposedly.

    Now I'm going to Penn State to speak in John Roe's class "Mathematics for Sustainability".
